author    Linus Torvalds <torvalds@linux-foundation.org>  2023-02-21 18:24:12 -0800
committer Linus Torvalds <torvalds@linux-foundation.org>  2023-02-21 18:24:12 -0800
commit    5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 (patch)
tree      cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /drivers/net
parent    Merge tag 'v6.3-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (diff)
parent    Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net (diff)
download  wireguard-linux-5b7c4cabbb65f5c469464da6c5f614cbd7f730f2.tar.xz
          wireguard-linux-5b7c4cabbb65f5c469464da6c5f614cbd7f730f2.zip
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:

 "Core:

   - Add dedicated kmem_cache for typical/small skb->head, avoid having to access struct page at kfree time, and improve memory use.
   - Introduce sysctl to set default RPS configuration for new netdevs.
   - Define Netlink protocol specification format which can be used to describe messages used by each family and auto-generate parsers. Add tools for generating kernel data structures and uAPI headers.
   - Expose all net/core sysctls inside netns.
   - Remove 4s sleep in netpoll if carrier is instantly detected on boot.
   - Add configurable limit of MDB entries per port, and port-vlan.
   - Continue populating drop reasons throughout the stack.
   - Retire a handful of legacy Qdiscs and classifiers.

  Protocols:

   - Support IPv4 big TCP (TSO frames larger than 64kB).
   - Add IP_LOCAL_PORT_RANGE socket option, to control the local port range on a socket-by-socket basis [editor's usage sketch below, after the shortlog].
   - Track and report in procfs the number of MPTCP sockets used.
   - Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path manager.
   - IPv6: don't check net.ipv6.route.max_size and rely on garbage collection to free memory (similarly to IPv4).
   - Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
   - ICMP: add per-rate limit counters.
   - Add support for user scanning requests in ieee802154.
   - Remove static WEP support.
   - Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate reporting.
   - Wi-Fi 7 EHT channel puncturing support (client & AP).

  BPF:

   - Add an rbtree data structure following the "next-gen data structure" precedent set by the recently added linked list, that is, by using kfunc + kptr instead of adding a new BPF map type.
   - Expose XDP hints via kfuncs with initial support for RX hash and timestamp metadata.
   - Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to better support decap on GRE tunnel devices not operating in collect metadata.
   - Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
   - Remove the need for trace_printk_lock for the bpf_trace_printk and bpf_trace_vprintk helpers.
   - Extend libbpf's bpf_tracing.h support for tracing arguments of kprobes/uprobes and syscall as a special case.
   - Significantly reduce the search time for module symbols by livepatch and BPF.
   - Enable cpumasks to be used as kptrs, which is useful for tracing programs tracking which tasks end up running on which CPUs in different time intervals.
   - Add support for BPF trampoline on s390x and riscv64.
   - Add capability to export the XDP features supported by the NIC.
   - Add __bpf_kfunc tag for marking kernel functions as kfuncs [editor's sketch below, after the shortlog].
   - Add cgroup.memory=nobpf kernel parameter option to disable BPF memory accounting for container environments.

  Netfilter:

   - Remove the CLUSTERIP target. It has been marked as obsolete for years, and we still have WARN splats wrt races of the out-of-band /proc interface installed by this target.
   - Add 'destroy' commands to nf_tables. They are identical to the existing 'delete' commands, but do not return an error if the referenced object (set, chain, rule...) did not exist.

  Driver API:

   - Improve cpumask_local_spread() locality to help NICs set the right IRQ affinity on AMD platforms.
   - Separate C22 and C45 MDIO bus transactions more clearly.
   - Introduce new DCB table to control DSCP rewrite on egress.
   - Support configuration of Physical Layer Collision Avoidance (PLCA) Reconciliation Sublayer (RS) (802.3cg-2019), a modern version of shared medium Ethernet.
   - Support for the MAC Merge layer (IEEE 802.3-2018 clause 99), allowing preemption of low priority frames by high priority frames.
   - Add support for controlling MACSec offload using netlink SET.
   - Rework devlink instance refcounts to allow registration and de-registration under the instance lock. Split the code into multiple files, drop some of the unnecessarily granular locks and factor out common parts of netlink operation handling.
   - Add TX frame aggregation parameters (for USB drivers).
   - Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning messages with notifications for debug.
   - Allow offloading of UDP NEW connections via act_ct.
   - Add support for per-action HW stats in TC.
   - Support hardware miss to TC action (continue processing in SW from a specific point in the action chain).
   - Warn if the old Wireless Extension user space interface is used with modern cfg80211/mac80211 drivers. Do not support Wireless Extensions for Wi-Fi 7 devices at all. Everyone should switch to using the nl80211 interface instead.
   - Improve the CAN bit timing configuration. Use extack to return error messages directly to user space, update the SJW handling, including the definition of a new default value that will benefit CAN-FD controllers, by increasing their oscillator tolerance.

  New hardware / drivers:

   - Ethernet:
      - nVidia BlueField-3 support (control traffic driver)
      - Ethernet support for imx93 SoCs
      - Motorcomm yt8531 gigabit Ethernet PHY
      - onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
      - Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
      - Amlogic gxl MDIO mux

   - WiFi:
      - RealTek RTL8188EU (rtl8xxxu)
      - Qualcomm Wi-Fi 7 devices (ath12k)

   - CAN:
      - Renesas R-Car V4H

  Drivers:

   - Bluetooth:
      - Set Per Platform Antenna Gain (PPAG) for Intel controllers.

   - Ethernet NICs:
      - Intel (1G, igc):
         - support TSN / Qbv / packet scheduling features of i226 model
      - Intel (100G, ice):
         - use GNSS subsystem instead of TTY
         - multi-buffer XDP support
         - extend support for GPIO pins to E823 devices
      - nVidia/Mellanox:
         - update the shared buffer configuration on PFC commands
         - implement PTP adjphase function for HW offset control
         - TC support for Geneve and GRE with VF tunnel offload
         - more efficient crypto key management method
         - multi-port eswitch support
      - Netronome/Corigine:
         - add DCB IEEE support
         - support IPsec offloading for NFP3800
      - Freescale/NXP (enetc):
         - support XDP_REDIRECT for XDP non-linear buffers
         - improve reconfig, avoid link flap and waiting for idle
         - support MAC Merge layer
      - Other NICs:
         - sfc/ef100: add basic devlink support for ef100
         - ionic: rx_push mode operation (writing descriptors via MMIO)
         - bnxt: use the auxiliary bus abstraction for RDMA
         - r8169: disable ASPM and reset bus in case of tx timeout
         - cpsw: support QSGMII mode for J721e CPSW9G
         - cpts: support pulse-per-second output
         - ngbe: add an mdio bus driver
         - usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
         - r8152: handle devices with FW with NCM support
         - amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
         - virtio-net: support multi buffer XDP
         - virtio/vsock: replace virtio_vsock_pkt with sk_buff
         - tsnep: XDP support

   - Ethernet high-speed switches:
      - nVidia/Mellanox (mlxsw):
         - add support for latency TLV (in FW control messages)
      - Microchip (sparx5):
         - separate explicit and implicit traffic forwarding rules, make the implicit rules always active
         - add support for egress DSCP rewrite
         - IS0 VCAP support (Ingress Classification)
         - IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS etc.)
         - ES2 VCAP support (Egress Access Control)
         - support for Per-Stream Filtering and Policing (802.1Q, 8.6.5.1)

   - Ethernet embedded switches:
      - Marvell (mv88e6xxx):
         - add MAB (port auth) offload support
         - enable PTP receive for mv88e6390
      - NXP (ocelot):
         - support MAC Merge layer
         - support for the vsc7512 internal copper phys
      - Microchip:
         - lan9303: convert to PHYLINK
         - lan966x: support TC flower filter statistics
         - lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
         - lan937x: support Credit Based Shaper configuration
         - ksz9477: support Energy Efficient Ethernet
      - other:
         - qca8k: convert to regmap read/write API, use bulk operations
         - rswitch: Improve TX timestamp accuracy

   - Intel WiFi (iwlwifi):
      - EHT (Wi-Fi 7) rate reporting
      - STEP equalizer support: transfer some STEP (connection to radio on platforms with integrated wifi) related parameters from the BIOS to the firmware.

   - Qualcomm 802.11ax WiFi (ath11k):
      - IPQ5018 support
      - Fine Timing Measurement (FTM) responder role support
      - channel 177 support

   - MediaTek WiFi (mt76):
      - per-PHY LED support
      - mt7996: EHT (Wi-Fi 7) support
      - Wireless Ethernet Dispatch (WED) reset support
      - switch to using page pool allocator

   - RealTek WiFi (rtw89):
      - support new version of Bluetooth co-existence

   - Mobile:
      - rmnet: support TX aggregation"

* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
  page_pool: add a comment explaining the fragment counter usage
  net: ethtool: fix __ethtool_dev_mm_supported() implementation
  ethtool: pse-pd: Fix double word in comments
  xsk: add linux/vmalloc.h to xsk.c
  sefltests: netdevsim: wait for devlink instance after netns removal
  selftest: fib_tests: Always cleanup before exit
  net/mlx5e: Align IPsec ASO result memory to be as required by hardware
  net/mlx5e: TC, Set CT miss to the specific ct action instance
  net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
  net/mlx5: Refactor tc miss handling to a single function
  net/mlx5: Kconfig: Make tc offload depend on tc skb extension
  net/sched: flower: Support hardware miss to tc action
  net/sched: flower: Move filter handle initialization earlier
  net/sched: cls_api: Support hardware miss to tc action
  net/sched: Rename user cookie and act cookie
  sfc: fix builds without CONFIG_RTC_LIB
  sfc: clean up some inconsistent indentings
  net/mlx4_en: Introduce flexible array to silence overflow warning
  net: lan966x: Fix possible deadlock inside PTP
  net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
  ...
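[Editor's note: two brief sketches follow for items called out in the quoted summary above. They are editorial illustrations, not part of the pull message or of this merge.]

First, the IP_LOCAL_PORT_RANGE socket option from the "Protocols" section. A minimal user-space sketch, assuming the uAPI as merged in this cycle: the option takes a single u32 whose lower 16 bits are the low bound and upper 16 bits the high bound of the allowed local port range, and the constant is 51 in uapi/linux/in.h. Verify both against your headers before relying on this.

    /* Sketch: confine this socket's local ports to 40000-40100.
     * Assumptions: IP_LOCAL_PORT_RANGE == 51 and the u32 value packs the
     * low port in bits 0-15 and the high port in bits 16-31. */
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #ifndef IP_LOCAL_PORT_RANGE
    #define IP_LOCAL_PORT_RANGE 51
    #endif

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) {
                    perror("socket");
                    return 1;
            }

            uint32_t lo = 40000, hi = 40100;
            uint32_t range = (hi << 16) | lo;

            /* Older kernels reject the option (typically ENOPROTOOPT),
             * so treat it as best-effort. */
            if (setsockopt(fd, IPPROTO_IP, IP_LOCAL_PORT_RANGE,
                           &range, sizeof(range)) < 0)
                    perror("setsockopt(IP_LOCAL_PORT_RANGE)");

            /* A later bind() to port 0 or connect() should now pick a
             * local port from [lo, hi] instead of the netns-wide range. */
            return 0;
    }

Second, the __bpf_kfunc tag from the "BPF" section. A kernel-side sketch of how a function might be exposed to BPF tracing programs as a kfunc; bpf_example_inc() is hypothetical and not from the tree, and only the __bpf_kfunc tag, the BTF_SET8 macros, and register_btf_kfunc_id_set() are real interfaces. Treat the exact registration flow as an assumption and compare with in-tree users such as the cpumask kfuncs.

    /* Sketch (built-in kernel code): expose a hypothetical
     * bpf_example_inc() to BPF tracing programs as a kfunc. */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/bpf.h>
    #include <linux/btf.h>
    #include <linux/btf_ids.h>

    __bpf_kfunc int bpf_example_inc(int x)
    {
            return x + 1;
    }

    /* Collect the kfunc BTF IDs and register them for tracing progs. */
    BTF_SET8_START(example_kfunc_ids)
    BTF_ID_FLAGS(func, bpf_example_inc)
    BTF_SET8_END(example_kfunc_ids)

    static const struct btf_kfunc_id_set example_kfunc_set = {
            .owner = THIS_MODULE,
            .set   = &example_kfunc_ids,
    };

    static int __init example_kfunc_init(void)
    {
            return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
                                             &example_kfunc_set);
    }
    late_initcall(example_kfunc_init);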
Diffstat (limited to 'drivers/net')
-rw-r--r-- drivers/net/Kconfig | 13
-rw-r--r-- drivers/net/Makefile | 4
-rw-r--r-- drivers/net/bonding/bond_main.c | 10
-rw-r--r-- drivers/net/can/ctucanfd/ctucanfd_platform.c | 4
-rw-r--r-- drivers/net/can/dev/bittiming.c | 120
-rw-r--r-- drivers/net/can/dev/calc_bittiming.c | 34
-rw-r--r-- drivers/net/can/dev/dev.c | 21
-rw-r--r-- drivers/net/can/dev/netlink.c | 49
-rw-r--r-- drivers/net/can/rcar/rcar_canfd.c | 225
-rw-r--r-- drivers/net/can/sja1000/ems_pci.c | 154
-rw-r--r-- drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c | 18
-rw-r--r-- drivers/net/can/spi/mcp251xfd/mcp251xfd.h | 26
-rw-r--r-- drivers/net/can/usb/esd_usb.c | 70
-rw-r--r-- drivers/net/can/usb/peak_usb/pcan_usb.c | 44
-rw-r--r-- drivers/net/can/usb/peak_usb/pcan_usb_core.c | 122
-rw-r--r-- drivers/net/can/usb/peak_usb/pcan_usb_core.h | 12
-rw-r--r-- drivers/net/can/usb/peak_usb/pcan_usb_fd.c | 68
-rw-r--r-- drivers/net/can/usb/peak_usb/pcan_usb_pro.c | 30
-rw-r--r-- drivers/net/can/usb/peak_usb/pcan_usb_pro.h | 1
-rw-r--r-- drivers/net/dsa/lan9303-core.c | 169
-rw-r--r-- drivers/net/dsa/microchip/Kconfig | 10
-rw-r--r-- drivers/net/dsa/microchip/Makefile | 5
-rw-r--r-- drivers/net/dsa/microchip/ksz9477.c | 25
-rw-r--r-- drivers/net/dsa/microchip/ksz9477.h | 2
-rw-r--r-- drivers/net/dsa/microchip/ksz9477_reg.h | 33
-rw-r--r-- drivers/net/dsa/microchip/ksz_common.c | 246
-rw-r--r-- drivers/net/dsa/microchip/ksz_common.h | 69
-rw-r--r-- drivers/net/dsa/microchip/ksz_ptp.c | 1201
-rw-r--r-- drivers/net/dsa/microchip/ksz_ptp.h | 86
-rw-r--r-- drivers/net/dsa/microchip/ksz_ptp_reg.h | 142
-rw-r--r-- drivers/net/dsa/microchip/lan937x.h | 1
-rw-r--r-- drivers/net/dsa/microchip/lan937x_main.c | 9
-rw-r--r-- drivers/net/dsa/microchip/lan937x_reg.h | 3
-rw-r--r-- drivers/net/dsa/mt7530.c | 87
-rw-r--r-- drivers/net/dsa/mt7530.h | 15
-rw-r--r-- drivers/net/dsa/mv88e6xxx/Makefile | 1
-rw-r--r-- drivers/net/dsa/mv88e6xxx/chip.c | 201
-rw-r--r-- drivers/net/dsa/mv88e6xxx/chip.h | 23
-rw-r--r-- drivers/net/dsa/mv88e6xxx/global1.c | 12
-rw-r--r-- drivers/net/dsa/mv88e6xxx/global1.h | 2
-rw-r--r-- drivers/net/dsa/mv88e6xxx/global1_atu.c | 24
-rw-r--r-- drivers/net/dsa/mv88e6xxx/global2.c | 66
-rw-r--r-- drivers/net/dsa/mv88e6xxx/global2.h | 18
-rw-r--r-- drivers/net/dsa/mv88e6xxx/phy.c | 32
-rw-r--r-- drivers/net/dsa/mv88e6xxx/phy.h | 4
-rw-r--r-- drivers/net/dsa/mv88e6xxx/ptp.c | 46
-rw-r--r-- drivers/net/dsa/mv88e6xxx/ptp.h | 2
-rw-r--r-- drivers/net/dsa/mv88e6xxx/serdes.c | 8
-rw-r--r-- drivers/net/dsa/mv88e6xxx/switchdev.c | 83
-rw-r--r-- drivers/net/dsa/mv88e6xxx/switchdev.h | 19
-rw-r--r-- drivers/net/dsa/ocelot/Kconfig | 32
-rw-r--r-- drivers/net/dsa/ocelot/Makefile | 13
-rw-r--r-- drivers/net/dsa/ocelot/felix.c | 59
-rw-r--r-- drivers/net/dsa/ocelot/felix.h | 2
-rw-r--r-- drivers/net/dsa/ocelot/felix_vsc9959.c | 64
-rw-r--r-- drivers/net/dsa/ocelot/ocelot_ext.c | 163
-rw-r--r-- drivers/net/dsa/ocelot/seville_vsc9953.c | 1
-rw-r--r-- drivers/net/dsa/qca/qca8k-8xxx.c | 92
-rw-r--r-- drivers/net/dsa/qca/qca8k-common.c | 49
-rw-r--r-- drivers/net/dsa/qca/qca8k.h | 5
-rw-r--r-- drivers/net/dsa/rzn1_a5psw.c | 6
-rw-r--r-- drivers/net/dsa/sja1105/sja1105.h | 16
-rw-r--r-- drivers/net/dsa/sja1105/sja1105_mdio.c | 137
-rw-r--r-- drivers/net/dsa/sja1105/sja1105_spi.c | 24
-rw-r--r-- drivers/net/ethernet/actions/owl-emac.c | 6
-rw-r--r-- drivers/net/ethernet/adi/adin1110.c | 1
-rw-r--r-- drivers/net/ethernet/amazon/ena/ena_netdev.c | 4
-rw-r--r-- drivers/net/ethernet/amd/xgbe/xgbe-common.h | 49
-rw-r--r-- drivers/net/ethernet/amd/xgbe/xgbe-dev.c | 94
-rw-r--r-- drivers/net/ethernet/amd/xgbe/xgbe-mdio.c | 24
-rw-r--r-- drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c | 415
-rw-r--r-- drivers/net/ethernet/amd/xgbe/xgbe.h | 14
-rw-r--r-- drivers/net/ethernet/aquantia/atlantic/aq_main.c | 1
-rw-r--r-- drivers/net/ethernet/aquantia/atlantic/aq_nic.c | 5
-rw-r--r-- drivers/net/ethernet/atheros/alx/main.c | 10
-rw-r--r-- drivers/net/ethernet/broadcom/Kconfig | 1
-rw-r--r-- drivers/net/ethernet/broadcom/b44.c | 22
-rw-r--r-- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 13
-rw-r--r-- drivers/net/ethernet/broadcom/bnxt/bnxt.h | 8
-rw-r--r-- drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c | 1
-rw-r--r-- drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c | 7
-rw-r--r-- drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c | 474
-rw-r--r-- drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h | 51
-rw-r--r-- drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 2
-rw-r--r-- drivers/net/ethernet/broadcom/genet/bcmgenet.c | 8
-rw-r--r-- drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c | 8
-rw-r--r-- drivers/net/ethernet/broadcom/genet/bcmmii.c | 11
-rw-r--r-- drivers/net/ethernet/cadence/macb.h | 29
-rw-r--r-- drivers/net/ethernet/cadence/macb_main.c | 177
-rw-r--r-- drivers/net/ethernet/cadence/macb_ptp.c | 83
-rw-r--r-- drivers/net/ethernet/cavium/thunder/nicvf_main.c | 2
-rw-r--r-- drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c | 8
-rw-r--r-- drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.h | 2
-rw-r--r-- drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c | 34
-rw-r--r-- drivers/net/ethernet/engleder/Makefile | 2
-rw-r--r-- drivers/net/ethernet/engleder/tsnep.h | 16
-rw-r--r-- drivers/net/ethernet/engleder/tsnep_main.c | 479
-rw-r--r-- drivers/net/ethernet/engleder/tsnep_tc.c | 21
-rw-r--r-- drivers/net/ethernet/engleder/tsnep_xdp.c | 19
-rw-r--r-- drivers/net/ethernet/faraday/ftmac100.c | 6
-rw-r--r-- drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 4
-rw-r--r-- drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c | 6
-rw-r--r-- drivers/net/ethernet/freescale/enetc/Kconfig | 14
-rw-r--r-- drivers/net/ethernet/freescale/enetc/Makefile | 7
-rw-r--r-- drivers/net/ethernet/freescale/enetc/enetc.c | 746
-rw-r--r-- drivers/net/ethernet/freescale/enetc/enetc.h | 40
-rw-r--r-- drivers/net/ethernet/freescale/enetc/enetc_cbdr.c | 8
-rw-r--r-- drivers/net/ethernet/freescale/enetc/enetc_ethtool.c | 232
-rw-r--r-- drivers/net/ethernet/freescale/enetc/enetc_hw.h | 137
-rw-r--r-- drivers/net/ethernet/freescale/enetc/enetc_mdio.c | 119
-rw-r--r-- drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c | 6
-rw-r--r-- drivers/net/ethernet/freescale/enetc/enetc_pf.c | 113
-rw-r--r-- drivers/net/ethernet/freescale/enetc/enetc_qos.c | 27
-rw-r--r-- drivers/net/ethernet/freescale/fec_main.c | 182
-rw-r--r-- drivers/net/ethernet/freescale/xgmac_mdio.c | 149
-rw-r--r-- drivers/net/ethernet/fungible/funeth/Kconfig | 2
-rw-r--r-- drivers/net/ethernet/fungible/funeth/funeth_main.c | 6
-rw-r--r-- drivers/net/ethernet/google/gve/gve_main.c | 9
-rw-r--r-- drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c | 20
-rw-r--r-- drivers/net/ethernet/hisilicon/hns3/hnae3.h | 1
-rw-r--r-- drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 1
-rw-r--r-- drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c | 1
-rw-r--r-- drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_devlink.c | 1
-rw-r--r-- drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c | 2
-rw-r--r-- drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_devlink.c | 1
-rw-r--r-- drivers/net/ethernet/hisilicon/hns_mdio.c | 192
-rw-r--r-- drivers/net/ethernet/ibm/ibmvnic.c | 29
-rw-r--r-- drivers/net/ethernet/intel/Kconfig | 3
-rw-r--r-- drivers/net/ethernet/intel/e1000e/ethtool.c | 10
-rw-r--r-- drivers/net/ethernet/intel/e1000e/netdev.c | 7
-rw-r--r-- drivers/net/ethernet/intel/e1000e/phy.c | 9
-rw-r--r-- drivers/net/ethernet/intel/fm10k/fm10k_pci.c | 5
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e.h | 7
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_adminq.c | 68
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_alloc.h | 22
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_client.c | 14
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_common.c | 1038
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_dcb.c | 60
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_dcb.h | 28
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c | 16
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_ddp.c | 14
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_debugfs.c | 8
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_diag.c | 12
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_diag.h | 4
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_ethtool.c | 65
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_hmc.c | 56
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_hmc.h | 46
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_lan_hmc.c | 94
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_lan_hmc.h | 34
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_main.c | 421
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_nvm.c | 252
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_osdep.h | 1
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_prototype.h | 643
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_status.h | 35
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c | 157
-rw-r--r-- drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h | 6
-rw-r--r-- drivers/net/ethernet/intel/iavf/iavf.h | 7
-rw-r--r-- drivers/net/ethernet/intel/iavf/iavf_client.c | 32
-rw-r--r-- drivers/net/ethernet/intel/iavf/iavf_client.h | 2
-rw-r--r-- drivers/net/ethernet/intel/iavf/iavf_common.c | 4
-rw-r--r-- drivers/net/ethernet/intel/iavf/iavf_main.c | 7
-rw-r--r-- drivers/net/ethernet/intel/iavf/iavf_status.h | 2
-rw-r--r-- drivers/net/ethernet/intel/iavf/iavf_virtchnl.c | 6
-rw-r--r-- drivers/net/ethernet/intel/ice/Makefile | 3
-rw-r--r-- drivers/net/ethernet/intel/ice/ice.h | 15
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_adminq_cmd.h | 18
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_base.c | 21
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_common.c | 49
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_common.h | 4
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_dcb.c | 43
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_dcb.h | 2
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_dcb_lib.c | 70
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_ddp.c | 1897
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_ddp.h | 445
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_devlink.c | 124
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_eswitch.c | 26
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_ethtool.c | 69
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_flex_pipe.c | 1916
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_flex_pipe.h | 69
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_flex_type.h | 328
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_fltr.c | 5
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_gnss.c | 377
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_gnss.h | 18
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_idc.c | 53
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_lib.c | 1005
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_lib.h | 50
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_main.c | 1071
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_nvm.c | 1
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_ptp.c | 74
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_sched.c | 7
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_sriov.c | 133
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_tc_lib.c | 50
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_tc_lib.h | 10
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_txrx.c | 463
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_txrx.h | 87
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 264
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_txrx_lib.h | 75
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_vf_lib.c | 183
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_vf_lib.h | 12
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_vf_lib_private.h | 3
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_virtchnl.c | 24
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c | 8
-rw-r--r-- drivers/net/ethernet/intel/ice/ice_xsk.c | 206
-rw-r--r-- drivers/net/ethernet/intel/igb/igb_main.c | 32
-rw-r--r-- drivers/net/ethernet/intel/igc/igc_base.c | 29
-rw-r--r-- drivers/net/ethernet/intel/igc/igc_base.h | 2
-rw-r--r-- drivers/net/ethernet/intel/igc/igc_defines.h | 1
-rw-r--r-- drivers/net/ethernet/intel/igc/igc_main.c | 39
-rw-r--r-- drivers/net/ethernet/intel/igc/igc_tsn.c | 56
-rw-r--r-- drivers/net/ethernet/intel/igc/igc_xdp.c | 5
-rw-r--r-- drivers/net/ethernet/intel/ixgbe/ixgbe_common.c | 21
-rw-r--r-- drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c | 27
-rw-r--r-- drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 30
-rw-r--r-- drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c | 237
-rw-r--r-- drivers/net/ethernet/intel/ixgbevf/ipsec.c | 21
-rw-r--r-- drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 1
-rw-r--r-- drivers/net/ethernet/marvell/mvmdio.c | 30
-rw-r--r-- drivers/net/ethernet/marvell/mvneta.c | 8
-rw-r--r-- drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 4
-rw-r--r-- drivers/net/ethernet/marvell/octeontx2/af/mbox.h | 33
-rw-r--r-- drivers/net/ethernet/marvell/octeontx2/af/rvu.c | 8
-rw-r--r-- drivers/net/ethernet/marvell/octeontx2/af/rvu.h | 21
-rw-r--r-- drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c | 18
-rw-r--r-- drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c | 309
-rw-r--r-- drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c | 56
-rw-r--r-- drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c | 18
-rw-r--r-- drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h | 4
-rw-r--r-- drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c | 8
-rw-r--r-- drivers/net/ethernet/marvell/pxa168_eth.c | 2
-rw-r--r-- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 482
-rw-r--r-- drivers/net/ethernet/mediatek/mtk_eth_soc.h | 38
-rw-r--r-- drivers/net/ethernet/mediatek/mtk_ppe.c | 27
-rw-r--r-- drivers/net/ethernet/mediatek/mtk_ppe.h | 1
-rw-r--r-- drivers/net/ethernet/mediatek/mtk_ppe_regs.h | 6
-rw-r--r-- drivers/net/ethernet/mediatek/mtk_star_emac.c | 6
-rw-r--r-- drivers/net/ethernet/mediatek/mtk_wed.c | 43
-rw-r--r-- drivers/net/ethernet/mediatek/mtk_wed.h | 9
-rw-r--r-- drivers/net/ethernet/mediatek/mtk_wed_wo.c | 11
-rw-r--r-- drivers/net/ethernet/mediatek/mtk_wed_wo.h | 1
-rw-r--r-- drivers/net/ethernet/mellanox/mlx4/en_clock.c | 13
-rw-r--r-- drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 8
-rw-r--r-- drivers/net/ethernet/mellanox/mlx4/en_rx.c | 63
-rw-r--r-- drivers/net/ethernet/mellanox/mlx4/en_tx.c | 22
-rw-r--r-- drivers/net/ethernet/mellanox/mlx4/main.c | 81
-rw-r--r-- drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 5
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/Kconfig | 4
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/Makefile | 4
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 124
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/dev.c | 46
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/devlink.c | 312
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/devlink.h | 10
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.c | 4
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c | 79
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h | 9
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/ecpf.c | 8
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en.h | 14
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/devlink.c | 68
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/devlink.h | 14
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/fs.h | 6
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c | 1
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/params.c | 18
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/port.c | 72
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/port.h | 6
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c | 222
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h | 1
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c | 2
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c | 6
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c | 227
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c | 6
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c | 10
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c | 15
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/vlan.c | 35
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/tc/act_stats.c | 197
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/tc/act_stats.h | 27
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c | 8
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c | 2
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c | 174
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h | 2
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h | 3
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c | 8
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c | 5
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h | 8
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c | 40
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h | 12
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c | 47
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h | 2
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c | 19
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h | 2
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c | 6
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c | 126
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h | 14
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c | 77
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c | 11
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c | 49
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h | 19
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c | 21
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 37
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c | 2
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_common.c | 10
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_fs.c | 22
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 112
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 44
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_rep.h | 5
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 115
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 678
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_tc.h | 47
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/en_tx.c | 15
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/eq.c | 38
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c | 4
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ofld.h | 4
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c | 213
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.h | 4
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 18
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 11
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 337
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/events.c | 2
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c | 13
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 131
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c | 10
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/fw.c | 6
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c | 51
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h | 2
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/health.c | 30
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c | 2
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c | 3
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/lag/debugfs.c | 12
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c | 15
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h | 19
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c | 8
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c | 164
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.h | 30
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c | 56
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.c | 755
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.h | 34
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c | 14
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c | 368
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.h | 25
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h | 17
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/main.c | 68
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h | 8
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c | 3
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c | 2
-rw-r--r-- drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c | 5
-rw-r--r-- drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h | 27
-rw-r--r-- drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_ethtool.c | 1
-rw-r--r-- drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c | 109
-rw-r--r-- drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c | 178
-rw-r--r-- drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio_bf2.h | 53
-rw-r--r-- drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio_bf3.h | 54
-rw-r--r-- drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h | 22
-rw-r--r-- drivers/net/ethernet/mellanox/mlxsw/core.c | 166
-rw-r--r-- drivers/net/ethernet/mellanox/mlxsw/core.h | 4
-rw-r--r-- drivers/net/ethernet/mellanox/mlxsw/core_linecards.c | 8
-rw-r--r-- drivers/net/ethernet/mellanox/mlxsw/emad.h | 4
-rw-r--r-- drivers/net/ethernet/mellanox/mlxsw/reg.h | 12
-rw-r--r-- drivers/net/ethernet/mellanox/mlxsw/spectrum.c | 63
-rw-r--r-- drivers/net/ethernet/mellanox/mlxsw/spectrum.h | 3
-rw-r--r-- drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c | 21
-rw-r--r-- drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c | 244
-rw-r--r-- drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h | 5
-rw-r--r-- drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c | 2
-rw-r--r-- drivers/net/ethernet/microchip/lan743x_main.c | 167
-rw-r--r-- drivers/net/ethernet/microchip/lan743x_main.h | 1
-rw-r--r-- drivers/net/ethernet/microchip/lan966x/Makefile | 2
-rw-r--r-- drivers/net/ethernet/microchip/lan966x/lan966x_goto.c | 10
-rw-r--r-- drivers/net/ethernet/microchip/lan966x/lan966x_main.c | 9
-rw-r--r-- drivers/net/ethernet/microchip/lan966x/lan966x_main.h | 32
-rw-r--r-- drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c | 7
-rw-r--r-- drivers/net/ethernet/microchip/lan966x/lan966x_tc.c | 3
-rw-r--r-- drivers/net/ethernet/microchip/lan966x/lan966x_tc_flower.c | 198
-rw-r--r-- drivers/net/ethernet/microchip/lan966x/lan966x_tc_matchall.c | 16
-rw-r--r-- drivers/net/ethernet/microchip/lan966x/lan966x_vcap_debugfs.c | 94
-rw-r--r-- drivers/net/ethernet/microchip/lan966x/lan966x_vcap_impl.c | 46
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/Makefile | 3
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_dcb.c | 121
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_main.c | 7
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_main.h | 124
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_main_regs.h | 2511
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_police.c | 53
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_pool.c | 81
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_port.c | 102
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_port.h | 41
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_psfp.c | 332
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c | 3
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_qos.c | 59
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_sdlb.c | 335
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_tc.c | 1
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_tc.h | 74
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c | 1260
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_tc_matchall.c | 16
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_vcap_ag_api.c | 2531
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_vcap_debugfs.c | 291
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.c | 1356
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.h | 120
-rw-r--r-- drivers/net/ethernet/microchip/sparx5/sparx5_vlan.c | 4
-rw-r--r-- drivers/net/ethernet/microchip/vcap/Makefile | 2
-rw-r--r-- drivers/net/ethernet/microchip/vcap/vcap_ag_api.h | 499
-rw-r--r-- drivers/net/ethernet/microchip/vcap/vcap_api.c | 1201
-rw-r--r-- drivers/net/ethernet/microchip/vcap/vcap_api.h | 13
-rw-r--r-- drivers/net/ethernet/microchip/vcap/vcap_api_client.h | 13
-rw-r--r-- drivers/net/ethernet/microchip/vcap/vcap_api_debugfs.c | 77
-rw-r--r-- drivers/net/ethernet/microchip/vcap/vcap_api_debugfs_kunit.c | 19
-rw-r--r-- drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c | 127
-rw-r--r-- drivers/net/ethernet/microchip/vcap/vcap_api_private.h | 15
-rw-r--r-- drivers/net/ethernet/microchip/vcap/vcap_model_kunit.c | 1910
-rw-r--r-- drivers/net/ethernet/microchip/vcap/vcap_model_kunit.h | 10
-rw-r--r-- drivers/net/ethernet/microchip/vcap/vcap_tc.c | 412
-rw-r--r-- drivers/net/ethernet/microchip/vcap/vcap_tc.h | 32
-rw-r--r-- drivers/net/ethernet/microsoft/mana/mana_en.c | 2
-rw-r--r-- drivers/net/ethernet/mscc/Kconfig | 1
-rw-r--r-- drivers/net/ethernet/mscc/Makefile | 1
-rw-r--r-- drivers/net/ethernet/mscc/ocelot.c | 66
-rw-r--r-- drivers/net/ethernet/mscc/ocelot.h | 2
-rw-r--r-- drivers/net/ethernet/mscc/ocelot_devlink.c | 31
-rw-r--r-- drivers/net/ethernet/mscc/ocelot_mm.c | 215
-rw-r--r-- drivers/net/ethernet/mscc/ocelot_stats.c | 332
-rw-r--r-- drivers/net/ethernet/mscc/ocelot_vsc7514.c | 190
-rw-r--r-- drivers/net/ethernet/mscc/vsc7514_regs.c | 159
-rw-r--r-- drivers/net/ethernet/netronome/Kconfig | 2
-rw-r--r-- drivers/net/ethernet/netronome/nfp/Makefile | 4
-rw-r--r-- drivers/net/ethernet/netronome/nfp/crypto/ipsec.c | 50
-rw-r--r-- drivers/net/ethernet/netronome/nfp/devlink_param.c | 8
-rw-r--r-- drivers/net/ethernet/netronome/nfp/flower/conntrack.c | 24
-rw-r--r-- drivers/net/ethernet/netronome/nfp/nfd3/dp.c | 11
-rw-r--r-- drivers/net/ethernet/netronome/nfp/nfdk/dp.c | 49
-rw-r--r-- drivers/net/ethernet/netronome/nfp/nfdk/ipsec.c | 17
-rw-r--r-- drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h | 8
-rw-r--r-- drivers/net/ethernet/netronome/nfp/nfp_net_common.c | 5
-rw-r--r-- drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h | 1
-rw-r--r-- drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c | 35
-rw-r--r-- drivers/net/ethernet/netronome/nfp/nfp_net_main.c | 7
-rw-r--r-- drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h | 3
-rw-r--r-- drivers/net/ethernet/netronome/nfp/nic/dcb.c | 571
-rw-r--r-- drivers/net/ethernet/netronome/nfp/nic/main.c | 43
-rw-r--r-- drivers/net/ethernet/netronome/nfp/nic/main.h | 46
-rw-r--r-- drivers/net/ethernet/ni/nixge.c | 141
-rw-r--r-- drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c | 6
-rw-r--r-- drivers/net/ethernet/pensando/ionic/ionic_dev.c | 67
-rw-r--r-- drivers/net/ethernet/pensando/ionic/ionic_dev.h | 13
-rw-r--r-- drivers/net/ethernet/pensando/ionic/ionic_ethtool.c | 117
-rw-r--r-- drivers/net/ethernet/pensando/ionic/ionic_if.h | 3
-rw-r--r-- drivers/net/ethernet/pensando/ionic/ionic_lif.c | 165
-rw-r--r-- drivers/net/ethernet/pensando/ionic/ionic_lif.h | 40
-rw-r--r-- drivers/net/ethernet/pensando/ionic/ionic_main.c | 4
-rw-r--r-- drivers/net/ethernet/pensando/ionic/ionic_phc.c | 2
-rw-r--r-- drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c | 4
-rw-r--r-- drivers/net/ethernet/pensando/ionic/ionic_txrx.c | 22
-rw-r--r-- drivers/net/ethernet/qlogic/qed/qed_devlink.c | 6
-rw-r--r-- drivers/net/ethernet/qlogic/qed/qed_sriov.c | 2
-rw-r--r-- drivers/net/ethernet/qlogic/qede/qede_main.c | 14
-rw-r--r-- drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c | 5
-rw-r--r-- drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h | 20
-rw-r--r-- drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c | 18
-rw-r--r-- drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h | 6
-rw-r--r-- drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c | 191
-rw-r--r-- drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c | 54
-rw-r--r-- drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.h | 1
-rw-r--r-- drivers/net/ethernet/realtek/r8169_main.c | 24
-rw-r--r-- drivers/net/ethernet/renesas/rswitch.c | 554
-rw-r--r-- drivers/net/ethernet/renesas/rswitch.h | 50
-rw-r--r-- drivers/net/ethernet/renesas/sh_eth.c | 37
-rw-r--r-- drivers/net/ethernet/samsung/sxgbe/sxgbe_mdio.c | 105
-rw-r--r-- drivers/net/ethernet/sfc/Kconfig | 1
-rw-r--r-- drivers/net/ethernet/sfc/Makefile | 3
-rw-r--r-- drivers/net/ethernet/sfc/ef100_netdev.c | 30
-rw-r--r-- drivers/net/ethernet/sfc/ef100_nic.c | 114
-rw-r--r-- drivers/net/ethernet/sfc/ef100_nic.h | 7
-rw-r--r-- drivers/net/ethernet/sfc/ef100_rep.c | 57
-rw-r--r-- drivers/net/ethernet/sfc/ef100_rep.h | 10
-rw-r--r-- drivers/net/ethernet/sfc/efx.c | 4
-rw-r--r-- drivers/net/ethernet/sfc/efx_devlink.c | 731
-rw-r--r-- drivers/net/ethernet/sfc/efx_devlink.h | 47
-rw-r--r-- drivers/net/ethernet/sfc/mae.c | 218
-rw-r--r-- drivers/net/ethernet/sfc/mae.h | 40
-rw-r--r-- drivers/net/ethernet/sfc/mcdi.c | 72
-rw-r--r-- drivers/net/ethernet/sfc/mcdi.h | 8
-rw-r--r-- drivers/net/ethernet/sfc/net_driver.h | 8
-rw-r--r-- drivers/net/ethernet/sfc/siena/efx.c | 4
-rw-r--r-- drivers/net/ethernet/socionext/netsec.c | 3
-rw-r--r-- drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c | 21
-rw-r--r-- drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c | 55
-rw-r--r-- drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c | 5
-rw-r--r-- drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c | 5
-rw-r--r-- drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c | 5
-rw-r--r-- drivers/net/ethernet/stmicro/stmmac/hwif.h | 5
-rw-r--r-- drivers/net/ethernet/stmicro/stmmac/stmmac.h | 2
-rw-r--r-- drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 9
-rw-r--r-- drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c | 334
-rw-r--r-- drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c | 5
-rw-r--r-- drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c | 20
-rw-r--r-- drivers/net/ethernet/sunplus/spl2sw_mdio.c | 6
-rw-r--r-- drivers/net/ethernet/ti/am65-cpsw-nuss.c | 85
-rw-r--r-- drivers/net/ethernet/ti/am65-cpsw-nuss.h | 1
-rw-r--r-- drivers/net/ethernet/ti/am65-cpsw-qos.c | 22
-rw-r--r-- drivers/net/ethernet/ti/am65-cpts.c | 170
-rw-r--r-- drivers/net/ethernet/ti/am65-cpts.h | 5
-rw-r--r-- drivers/net/ethernet/ti/cpsw.c | 4
-rw-r--r-- drivers/net/ethernet/ti/cpsw_new.c | 4
-rw-r--r-- drivers/net/ethernet/ti/cpsw_priv.c | 1
-rw-r--r-- drivers/net/ethernet/ti/davinci_mdio.c | 50
-rw-r--r-- drivers/net/ethernet/wangxun/Kconfig | 2
-rw-r--r-- drivers/net/ethernet/wangxun/libwx/Makefile | 2
-rw-r--r-- drivers/net/ethernet/wangxun/libwx/wx_ethtool.c | 18
-rw-r--r-- drivers/net/ethernet/wangxun/libwx/wx_ethtool.h | 8
-rw-r--r-- drivers/net/ethernet/wangxun/libwx/wx_hw.c | 1197
-rw-r--r-- drivers/net/ethernet/wangxun/libwx/wx_hw.h | 42
-rw-r--r-- drivers/net/ethernet/wangxun/libwx/wx_lib.c | 2004
-rw-r--r-- drivers/net/ethernet/wangxun/libwx/wx_lib.h | 32
-rw-r--r-- drivers/net/ethernet/wangxun/libwx/wx_type.h | 409
-rw-r--r-- drivers/net/ethernet/wangxun/ngbe/Makefile | 2
-rw-r--r-- drivers/net/ethernet/wangxun/ngbe/ngbe.h | 79
-rw-r--r-- drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.c | 22
-rw-r--r-- drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.h | 9
-rw-r--r-- drivers/net/ethernet/wangxun/ngbe/ngbe_hw.c | 70
-rw-r--r-- drivers/net/ethernet/wangxun/ngbe/ngbe_hw.h | 5
-rw-r--r-- drivers/net/ethernet/wangxun/ngbe/ngbe_main.c | 583
-rw-r--r-- drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c | 286
-rw-r--r-- drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.h | 12
-rw-r--r-- drivers/net/ethernet/wangxun/ngbe/ngbe_type.h | 98
-rw-r--r-- drivers/net/ethernet/wangxun/txgbe/Makefile | 3
-rw-r--r-- drivers/net/ethernet/wangxun/txgbe/txgbe.h | 43
-rw-r--r-- drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c | 19
-rw-r--r-- drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.h | 9
-rw-r--r-- drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c | 116
-rw-r--r-- drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h | 6
-rw-r--r-- drivers/net/ethernet/wangxun/txgbe/txgbe_main.c | 569
-rw-r--r-- drivers/net/ethernet/wangxun/txgbe/txgbe_type.h | 35
-rw-r--r-- drivers/net/hamradio/baycom_epp.c | 8
-rw-r--r-- drivers/net/hyperv/netvsc.c | 18
-rw-r--r-- drivers/net/hyperv/netvsc_drv.c | 3
-rw-r--r-- drivers/net/ieee802154/at86rf230.c | 90
-rw-r--r-- drivers/net/ieee802154/cc2520.c | 136
-rw-r--r-- drivers/net/ipa/Makefile | 9
-rw-r--r-- drivers/net/ipa/gsi.c | 486
-rw-r--r-- drivers/net/ipa/gsi.h | 7
-rw-r--r-- drivers/net/ipa/gsi_reg.c | 151
-rw-r--r-- drivers/net/ipa/gsi_reg.h | 504
-rw-r--r-- drivers/net/ipa/ipa.h | 4
-rw-r--r-- drivers/net/ipa/ipa_cmd.c | 38
-rw-r--r-- drivers/net/ipa/ipa_endpoint.c | 585
-rw-r--r-- drivers/net/ipa/ipa_endpoint.h | 4
-rw-r--r-- drivers/net/ipa/ipa_interrupt.c | 142
-rw-r--r-- drivers/net/ipa/ipa_interrupt.h | 48
-rw-r--r-- drivers/net/ipa/ipa_main.c | 122
-rw-r--r-- drivers/net/ipa/ipa_mem.c | 22
-rw-r--r-- drivers/net/ipa/ipa_mem.h | 8
-rw-r--r-- drivers/net/ipa/ipa_power.c | 19
-rw-r--r-- drivers/net/ipa/ipa_power.h | 12
-rw-r--r-- drivers/net/ipa/ipa_reg.c | 90
-rw-r--r-- drivers/net/ipa/ipa_reg.h | 190
-rw-r--r-- drivers/net/ipa/ipa_resource.c | 16
-rw-r--r-- drivers/net/ipa/ipa_table.c | 68
-rw-r--r-- drivers/net/ipa/ipa_uc.c | 27
-rw-r--r-- drivers/net/ipa/ipa_uc.h | 8
-rw-r--r-- drivers/net/ipa/ipa_version.h | 6
-rw-r--r-- drivers/net/ipa/reg.h | 133
-rw-r--r-- drivers/net/ipa/reg/gsi_reg-v3.1.c | 291
-rw-r--r-- drivers/net/ipa/reg/gsi_reg-v3.5.1.c | 303
-rw-r--r-- drivers/net/ipa/reg/gsi_reg-v4.0.c | 308
-rw-r--r-- drivers/net/ipa/reg/gsi_reg-v4.11.c | 313
-rw-r--r-- drivers/net/ipa/reg/gsi_reg-v4.5.c | 311
-rw-r--r-- drivers/net/ipa/reg/gsi_reg-v4.9.c | 312
-rw-r--r-- drivers/net/ipa/reg/ipa_reg-v3.1.c | 283
-rw-r--r-- drivers/net/ipa/reg/ipa_reg-v3.5.1.c | 269
-rw-r--r-- drivers/net/ipa/reg/ipa_reg-v4.11.c | 271
-rw-r--r-- drivers/net/ipa/reg/ipa_reg-v4.2.c | 255
-rw-r--r-- drivers/net/ipa/reg/ipa_reg-v4.5.c | 287
-rw-r--r-- drivers/net/ipa/reg/ipa_reg-v4.7.c | 271
-rw-r--r-- drivers/net/ipa/reg/ipa_reg-v4.9.c | 271
-rw-r--r-- drivers/net/ipvlan/ipvlan_core.c | 2
-rw-r--r-- drivers/net/macsec.c | 125
-rw-r--r-- drivers/net/mdio/Kconfig | 11
-rw-r--r-- drivers/net/mdio/Makefile | 1
-rw-r--r-- drivers/net/mdio/fwnode_mdio.c | 8
-rw-r--r-- drivers/net/mdio/mdio-aspeed.c | 48
-rw-r--r-- drivers/net/mdio/mdio-bitbang.c | 77
-rw-r--r-- drivers/net/mdio/mdio-cavium.c | 111
-rw-r--r-- drivers/net/mdio/mdio-cavium.h | 9
-rw-r--r-- drivers/net/mdio/mdio-i2c.c | 38
-rw-r--r-- drivers/net/mdio/mdio-ipq4019.c | 154
-rw-r--r-- drivers/net/mdio/mdio-ipq8064.c | 8
-rw-r--r-- drivers/net/mdio/mdio-mscc-miim.c | 6
-rw-r--r-- drivers/net/mdio/mdio-mux-bcm-iproc.c | 54
-rw-r--r-- drivers/net/mdio/mdio-mux-meson-g12a.c | 38
-rw-r--r-- drivers/net/mdio/mdio-mux-meson-gxl.c | 164
-rw-r--r-- drivers/net/mdio/mdio-mvusb.c | 6
-rw-r--r-- drivers/net/mdio/mdio-octeon.c | 6
-rw-r--r-- drivers/net/mdio/mdio-thunder.c | 6
-rw-r--r-- drivers/net/netdevsim/bpf.c | 4
-rw-r--r-- drivers/net/netdevsim/dev.c | 50
-rw-r--r-- drivers/net/netdevsim/health.c | 20
-rw-r--r-- drivers/net/netdevsim/ipsec.c | 14
-rw-r--r-- drivers/net/netdevsim/netdev.c | 1
-rw-r--r-- drivers/net/pcs/pcs-lynx.c | 20
-rw-r--r-- drivers/net/pcs/pcs-rzn1-miic.c | 6
-rw-r--r-- drivers/net/pcs/pcs-xpcs.c | 4
-rw-r--r-- drivers/net/phy/Kconfig | 9
-rw-r--r-- drivers/net/phy/Makefile | 1
-rw-r--r-- drivers/net/phy/marvell.c | 2
-rw-r--r-- drivers/net/phy/mdio-open-alliance.h | 46
-rw-r--r-- drivers/net/phy/mdio_bus.c | 464
-rw-r--r-- drivers/net/phy/micrel.c | 870
-rw-r--r-- drivers/net/phy/microchip_t1.c | 70
-rw-r--r-- drivers/net/phy/motorcomm.c | 559
-rw-r--r-- drivers/net/phy/mxl-gpy.c | 5
-rw-r--r-- drivers/net/phy/ncn26000.c | 171
-rw-r--r-- drivers/net/phy/phy-c45.c | 514
-rw-r--r-- drivers/net/phy/phy-core.c | 5
-rw-r--r-- drivers/net/phy/phy.c | 417
-rw-r--r-- drivers/net/phy/phy_device.c | 56
-rw-r--r-- drivers/net/phy/phylink.c | 23
-rw-r--r-- drivers/net/phy/sfp.c | 39
-rw-r--r-- drivers/net/tap.c | 2
-rw-r--r-- drivers/net/thunderbolt/Kconfig | 12
-rw-r--r-- drivers/net/thunderbolt/Makefile | 6
-rw-r--r-- drivers/net/thunderbolt/main.c (renamed from drivers/net/thunderbolt.c) | 48
-rw-r--r-- drivers/net/thunderbolt/trace.c | 10
-rw-r--r-- drivers/net/thunderbolt/trace.h | 141
-rw-r--r-- drivers/net/tun.c | 7
-rw-r--r-- drivers/net/usb/cdc_ether.c | 114
-rw-r--r-- drivers/net/usb/r8152.c | 179
-rw-r--r-- drivers/net/usb/usbnet.c | 29
-rw-r--r-- drivers/net/veth.c | 91
-rw-r--r-- drivers/net/virtio_net.c | 428
-rw-r--r-- drivers/net/wireless/ath/Kconfig | 1
-rw-r--r-- drivers/net/wireless/ath/Makefile | 1
-rw-r--r-- drivers/net/wireless/ath/ath10k/ce.c | 8
-rw-r--r-- drivers/net/wireless/ath/ath11k/ahb.c | 47
-rw-r--r-- drivers/net/wireless/ath/ath11k/ce.h | 16
-rw-r--r-- drivers/net/wireless/ath/ath11k/core.c | 93
-rw-r--r-- drivers/net/wireless/ath/ath11k/core.h | 18
-rw-r--r-- drivers/net/wireless/ath/ath11k/debugfs.c | 48
-rw-r--r-- drivers/net/wireless/ath/ath11k/dp_rx.c | 24
-rw-r--r-- drivers/net/wireless/ath/ath11k/hal.c | 17
-rw-r--r-- drivers/net/wireless/ath/ath11k/hal.h | 5
-rw-r--r-- drivers/net/wireless/ath/ath11k/hw.c | 371
-rw-r--r-- drivers/net/wireless/ath/ath11k/hw.h | 12
-rw-r--r-- drivers/net/wireless/ath/ath11k/mac.c | 104
-rw-r--r-- drivers/net/wireless/ath/ath11k/pci.c | 2
-rw-r--r-- drivers/net/wireless/ath/ath11k/wmi.c | 4
-rw-r--r-- drivers/net/wireless/ath/ath11k/wmi.h | 1
-rw-r--r-- drivers/net/wireless/ath/ath12k/Kconfig | 34
-rw-r--r-- drivers/net/wireless/ath/ath12k/Makefile | 27
-rw-r--r-- drivers/net/wireless/ath/ath12k/ce.c | 964
-rw-r--r-- drivers/net/wireless/ath/ath12k/ce.h | 184
-rw-r--r-- drivers/net/wireless/ath/ath12k/core.c | 939
-rw-r--r-- drivers/net/wireless/ath/ath12k/core.h | 822
-rw-r--r-- drivers/net/wireless/ath/ath12k/dbring.c | 357
-rw-r--r-- drivers/net/wireless/ath/ath12k/dbring.h | 80
-rw-r--r-- drivers/net/wireless/ath/ath12k/debug.c | 102
-rw-r--r-- drivers/net/wireless/ath/ath12k/debug.h | 67
-rw-r--r-- drivers/net/wireless/ath/ath12k/dp.c | 1580
-rw-r--r-- drivers/net/wireless/ath/ath12k/dp.h | 1816
-rw-r--r-- drivers/net/wireless/ath/ath12k/dp_mon.c | 2596
-rw-r--r-- drivers/net/wireless/ath/ath12k/dp_mon.h | 106
-rw-r--r-- drivers/net/wireless/ath/ath12k/dp_rx.c | 4234
-rw-r--r-- drivers/net/wireless/ath/ath12k/dp_rx.h | 145
-rw-r--r-- drivers/net/wireless/ath/ath12k/dp_tx.c | 1211
-rw-r--r-- drivers/net/wireless/ath/ath12k/dp_tx.h | 41
-rw-r--r-- drivers/net/wireless/ath/ath12k/hal.c | 2222
-rw-r--r-- drivers/net/wireless/ath/ath12k/hal.h | 1142
-rw-r--r-- drivers/net/wireless/ath/ath12k/hal_desc.h | 2961
-rw-r--r-- drivers/net/wireless/ath/ath12k/hal_rx.c | 850
-rw-r--r-- drivers/net/wireless/ath/ath12k/hal_rx.h | 704
-rw-r--r-- drivers/net/wireless/ath/ath12k/hal_tx.c | 145
-rw-r--r-- drivers/net/wireless/ath/ath12k/hal_tx.h | 194
-rw-r--r-- drivers/net/wireless/ath/ath12k/hif.h | 144
-rw-r--r-- drivers/net/wireless/ath/ath12k/htc.c | 789
-rw-r--r-- drivers/net/wireless/ath/ath12k/htc.h | 316
-rw-r--r-- drivers/net/wireless/ath/ath12k/hw.c | 1041
-rw-r--r-- drivers/net/wireless/ath/ath12k/hw.h | 312
-rw-r--r-- drivers/net/wireless/ath/ath12k/mac.c | 7038
-rw-r--r-- drivers/net/wireless/ath/ath12k/mac.h | 76
-rw-r--r-- drivers/net/wireless/ath/ath12k/mhi.c | 616
-rw-r--r-- drivers/net/wireless/ath/ath12k/mhi.h | 46
-rw-r--r-- drivers/net/wireless/ath/ath12k/pci.c | 1374
-rw-r--r-- drivers/net/wireless/ath/ath12k/pci.h | 135
-rw-r--r-- drivers/net/wireless/ath/ath12k/peer.c | 342
-rw-r--r-- drivers/net/wireless/ath/ath12k/peer.h | 67
-rw-r--r-- drivers/net/wireless/ath/ath12k/qmi.c | 3087
-rw-r--r-- drivers/net/wireless/ath/ath12k/qmi.h | 569
-rw-r--r-- drivers/net/wireless/ath/ath12k/reg.c | 732
-rw-r--r-- drivers/net/wireless/ath/ath12k/reg.h | 95
-rw-r--r-- drivers/net/wireless/ath/ath12k/rx_desc.h | 1441
-rw-r--r-- drivers/net/wireless/ath/ath12k/trace.c | 10
-rw-r--r-- drivers/net/wireless/ath/ath12k/trace.h | 152
-rw-r--r-- drivers/net/wireless/ath/ath12k/wmi.c | 6600
-rw-r--r-- drivers/net/wireless/ath/ath12k/wmi.h | 4803
-rw-r--r-- drivers/net/wireless/ath/ath6kl/cfg80211.c | 2
-rw-r--r-- drivers/net/wireless/ath/ath9k/ar5008_phy.c | 10
-rw-r--r-- drivers/net/wireless/ath/ath9k/ar9002_calib.c | 30
-rw-r--r-- drivers/net/wireless/ath/ath9k/ar9002_hw.c | 10
-rw-r--r-- drivers/net/wireless/ath/ath9k/ar9002_mac.c | 14
-rw-r--r-- drivers/net/wireless/ath/ath9k/ar9002_phy.c | 4
-rw-r--r-- drivers/net/wireless/ath/ath9k/ar9003_calib.c | 74
-rw-r--r-- drivers/net/wireless/ath/ath9k/ar9003_eeprom.c | 64
-rw-r--r-- drivers/net/wireless/ath/ath9k/ar9003_eeprom.h | 12
-rw-r--r-- drivers/net/wireless/ath/ath9k/ar9003_hw.c | 4
-rw-r--r-- drivers/net/wireless/ath/ath9k/ar9003_mac.c | 12
-rw-r--r-- drivers/net/wireless/ath/ath9k/ar9003_mci.c | 6
-rw-r--r-- drivers/net/wireless/ath/ath9k/ar9003_paprd.c | 56
-rw-r--r-- drivers/net/wireless/ath/ath9k/ar9003_phy.c | 26
-rw-r--r-- drivers/net/wireless/ath/ath9k/ar9003_phy.h | 82
-rw-r--r-- drivers/net/wireless/ath/ath9k/ar9003_wow.c | 18
-rw-r--r-- drivers/net/wireless/ath/ath9k/btcoex.c | 14
-rw-r--r-- drivers/net/wireless/ath/ath9k/calib.c | 32
-rw-r--r-- drivers/net/wireless/ath/ath9k/eeprom.h | 12
-rw-r--r-- drivers/net/wireless/ath/ath9k/eeprom_def.c | 10
-rw-r--r-- drivers/net/wireless/ath/ath9k/hif_usb.c | 33
-rw-r--r-- drivers/net/wireless/ath/ath9k/htc_drv_init.c | 6
-rw-r--r-- drivers/net/wireless/ath/ath9k/htc_hst.c | 4
-rw-r--r-- drivers/net/wireless/ath/ath9k/hw.c | 128
-rw-r--r-- drivers/net/wireless/ath/ath9k/mac.c | 42
-rw-r--r-- drivers/net/wireless/ath/ath9k/pci.c | 4
-rw-r--r-- drivers/net/wireless/ath/ath9k/reg.h | 148
-rw-r--r-- drivers/net/wireless/ath/ath9k/rng.c | 6
-rw-r--r-- drivers/net/wireless/ath/ath9k/wmi.c | 1
-rw-r--r-- drivers/net/wireless/ath/ath9k/xmit.c | 2
-rw-r--r-- drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c | 7
-rw-r--r-- drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h | 2
-rw-r--r-- drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c | 6
-rw-r--r-- drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c | 7
-rw-r--r-- drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c | 1
-rw-r--r-- drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c | 5
-rw-r--r-- drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c | 4
-rw-r--r-- drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c | 33
-rw-r--r-- drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h | 8
-rw-r--r-- drivers/net/wireless/intel/ipw2x00/ipw2200.c | 11
-rw-r--r-- drivers/net/wireless/intel/iwlegacy/3945-mac.c | 16
-rw-r--r-- drivers/net/wireless/intel/iwlegacy/4965-mac.c | 14
-rw-r--r-- drivers/net/wireless/intel/iwlegacy/common.c | 4
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/cfg/22000.c | 2
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/fw/api/commands.h | 1
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/fw/api/datapath.h | 2
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/fw/api/rx.h | 145
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/fw/uefi.c | 59
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/fw/uefi.h | 19
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/iwl-context-info-gen3.h | 21
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/iwl-drv.c | 12
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/iwl-trans.h | 4
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/mei/main.c | 6
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c | 7
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c | 6
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c | 4
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c | 7
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/mvm/ops.c | 1
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c | 80
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/mvm/tx.c | 7
-rw-r--r-- drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c | 5
-rw-r--r-- drivers/net/wireless/intersil/orinoco/hermes.c | 1
-rw-r--r-- drivers/net/wireless/intersil/orinoco/hw.c | 2
-rw-r--r-- drivers/net/wireless/mac80211_hwsim.c | 6
-rw-r--r-- drivers/net/wireless/marvell/libertas/cfg.c | 76
-rw-r--r-- drivers/net/wireless/marvell/libertas/cmdresp.c | 2
-rw-r--r-- drivers/net/wireless/marvell/libertas/if_usb.c | 2
-rw-r--r-- drivers/net/wireless/marvell/libertas/main.c | 3
-rw-r--r-- drivers/net/wireless/marvell/libertas/types.h | 21
-rw-r--r-- drivers/net/wireless/marvell/libertas_tf/if_usb.c | 2
-rw-r--r-- drivers/net/wireless/marvell/mwifiex/11h.c | 2
-rw-r--r-- drivers/net/wireless/marvell/mwifiex/11n.c | 6
-rw-r--r-- drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c | 2
-rw-r--r-- drivers/net/wireless/marvell/mwifiex/Kconfig | 5
-rw-r--r-- drivers/net/wireless/marvell/mwifiex/cmdevt.c | 5
-rw-r--r-- drivers/net/wireless/marvell/mwifiex/fw.h | 23
-rw-r--r-- drivers/net/wireless/marvell/mwifiex/sdio.c | 26
-rw-r--r-- drivers/net/wireless/marvell/mwifiex/sdio.h | 1
-rw-r--r-- drivers/net/wireless/mediatek/mt76/Kconfig | 1
-rw-r--r-- drivers/net/wireless/mediatek/mt76/debugfs.c | 2
-rw-r--r-- drivers/net/wireless/mediatek/mt76/dma.c | 132
-rw-r--r-- drivers/net/wireless/mediatek/mt76/dma.h | 1
-rw-r--r-- drivers/net/wireless/mediatek/mt76/eeprom.c | 1
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mac80211.c | 124
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt76.h | 67
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7603/init.c | 34
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7603/mcu.c | 3
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7615/init.c | 85
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7615/mcu.c | 3
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7615/mmio.c | 16
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h | 6
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c | 62
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7615/regs.h | 1
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7615/sdio_mcu.c | 1
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7615/usb_mcu.c | 1
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt76_connac.h | 5
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c | 9
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c | 46
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h | 16
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt76x0/phy.c | 7
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt76x0/usb_mcu.c | 1
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt76x02_util.c | 35
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c | 6
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7915/dma.c | 45
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c | 24
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7915/init.c | 194
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7915/mac.c | 12
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7915/main.c | 39
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7915/mcu.c | 193
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7915/mcu.h | 1
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7915/mmio.c | 99
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h | 7
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7915/regs.h | 13
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7915/soc.c | 3
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7921/acpi_sar.c | 62
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7921/acpi_sar.h | 12
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7921/init.c | 14
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7921/mac.c | 15
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7921/main.c | 116
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7921/mcu.c | 110
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h | 16
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c | 9
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7921/regs.h | 8
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7921/testmode.c | 1
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7921/usb.c | 4
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7996/debugfs.c | 13
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c | 45
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7996/init.c | 416
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7996/mac.c | 149
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7996/mac.h | 24
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7996/main.c | 17
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7996/mcu.c | 249
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7996/mcu.h | 16
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7996/mmio.c | 7
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h | 26
-rw-r--r-- drivers/net/wireless/mediatek/mt76/mt7996/regs.h | 16
-rw-r--r-- drivers/net/wireless/mediatek/mt76/sdio.c | 4
-rw-r--r-- drivers/net/wireless/mediatek/mt76/sdio_txrx.c | 4
-rw-r--r-- drivers/net/wireless/mediatek/mt76/usb.c | 42
-rw-r--r-- drivers/net/wireless/mediatek/mt76/util.c | 10
-rw-r--r-- drivers/net/wireless/mediatek/mt7601u/dma.c | 3
-rw-r--r-- drivers/net/wireless/microchip/wilc1000/netdev.c | 8
-rw-r--r-- drivers/net/wireless/quantenna/qtnfmac/event.c | 3
-rw-r--r-- drivers/net/wireless/ralink/rt2x00/rt2800lib.c | 2
-rw-r--r-- drivers/net/wireless/realtek/rtl8xxxu/Kconfig | 3
-rw-r--r-- drivers/net/wireless/realtek/rtl8xxxu/Makefile | 3
-rw-r--r-- drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h | 142
-rw-r--r-- drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8188e.c | 1899
-rw-r--r-- drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8188f.c | 24
-rw-r--r-- drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192c.c | 13
-rw-r--r-- drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c | 45
-rw-r--r-- drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723a.c | 28
-rw-r--r-- drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723b.c | 18
-rw-r--r-- drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c | 450
-rw-r--r-- drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_regs.h | 46
-rw-r--r-- drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c | 6
-rw-r--r-- drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hal_bt_coexist.h | 2
-rw-r--r-- drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c | 6
-rw-r--r-- drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c | 6
-rw-r--r-- drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c | 52
-rw-r--r-- drivers/net/wireless/realtek/rtw88/bf.c | 13
-rw-r--r-- drivers/net/wireless/realtek/rtw88/coex.c | 2
-rw-r--r-- drivers/net/wireless/realtek/rtw88/mac.c | 14
-rw-r--r-- drivers/net/wireless/realtek/rtw88/mac80211.c | 4
-rw-r--r-- drivers/net/wireless/realtek/rtw88/main.c | 6
-rw-r--r-- drivers/net/wireless/realtek/rtw88/main.h | 2
-rw-r--r-- drivers/net/wireless/realtek/rtw88/pci.c | 50
-rw-r--r-- drivers/net/wireless/realtek/rtw88/ps.c | 4
-rw-r--r-- drivers/net/wireless/realtek/rtw88/tx.c | 41
-rw-r--r-- drivers/net/wireless/realtek/rtw88/tx.h | 3
-rw-r--r-- drivers/net/wireless/realtek/rtw88/usb.c | 18
-rw-r--r-- drivers/net/wireless/realtek/rtw88/wow.c | 2
-rw-r--r-- drivers/net/wireless/realtek/rtw89/coex.c | 1813
-rw-r--r-- drivers/net/wireless/realtek/rtw89/coex.h | 1
-rw-r--r-- drivers/net/wireless/realtek/rtw89/core.c | 130
-rw-r--r-- drivers/net/wireless/realtek/rtw89/core.h | 295
-rw-r--r-- drivers/net/wireless/realtek/rtw89/debug.c | 43
-rw-r--r-- drivers/net/wireless/realtek/rtw89/debug.h | 1
-rw-r--r-- drivers/net/wireless/realtek/rtw89/fw.c | 146
-rw-r--r-- drivers/net/wireless/realtek/rtw89/fw.h | 54
-rw-r--r-- drivers/net/wireless/realtek/rtw89/mac.c | 99
-rw-r--r-- drivers/net/wireless/realtek/rtw89/mac.h | 19
-rw-r--r-- drivers/net/wireless/realtek/rtw89/mac80211.c | 1
-rw-r--r-- drivers/net/wireless/realtek/rtw89/pci.c | 17
-rw-r--r-- drivers/net/wireless/realtek/rtw89/pci.h | 15
-rw-r--r-- drivers/net/wireless/realtek/rtw89/phy.c | 19
-rw-r--r-- drivers/net/wireless/realtek/rtw89/reg.h | 25
-rw-r--r-- drivers/net/wireless/realtek/rtw89/rtw8852a.c | 26
-rw-r--r-- drivers/net/wireless/realtek/rtw89/rtw8852a_rfk.c | 2
-rw-r--r-- drivers/net/wireless/realtek/rtw89/rtw8852ae.c | 1
-rw-r--r-- drivers/net/wireless/realtek/rtw89/rtw8852b.c | 27
-rw-r--r-- drivers/net/wireless/realtek/rtw89/rtw8852be.c | 1
-rw-r--r-- drivers/net/wireless/realtek/rtw89/rtw8852c.c | 20
-rw-r--r-- drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.c | 353
-rw-r--r-- drivers/net/wireless/realtek/rtw89/rtw8852ce.c | 1
-rw-r--r-- drivers/net/wireless/realtek/rtw89/ser.c | 1
-rw-r--r-- drivers/net/wireless/realtek/rtw89/txrx.h | 2
-rw-r--r-- drivers/net/wireless/realtek/rtw89/wow.c | 26
-rw-r--r-- drivers/net/wireless/rsi/rsi_91x_coex.c | 1
-rw-r--r-- drivers/net/wireless/rsi/rsi_91x_hal.c | 4
-rw-r--r-- drivers/net/wireless/rsi/rsi_hal.h | 2
-rw-r--r-- drivers/net/wireless/ti/wl1251/init.c | 2
-rw-r--r-- drivers/net/wireless/wl3501_cs.c | 2
-rw-r--r-- drivers/net/wireless/zydas/zd1211rw/zd_rf.h | 3
-rw-r--r-- drivers/net/xen-netfront.c | 2
893 files changed, 109829 insertions, 21883 deletions
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index 12910338ea1a..c34bd432da27 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -582,18 +582,7 @@ config FUJITSU_ES
This driver provides support for Extended Socket network device
on Extended Partitioning of FUJITSU PRIMEQUEST 2000 E2 series.
-config USB4_NET
- tristate "Networking over USB4 and Thunderbolt cables"
- depends on USB4 && INET
- help
- Select this if you want to create a network between two computers
- over USB4 and Thunderbolt cables. The driver supports the Apple
- ThunderboltIP protocol and allows communication with any host
- supporting the same protocol, including Windows and macOS.
-
- To compile this driver as a module, choose M here. The module will be
- called thunderbolt-net.
-
+source "drivers/net/thunderbolt/Kconfig"
source "drivers/net/hyperv/Kconfig"
config NETDEVSIM
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 6ce076462dbf..e26f98f897c5 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -84,8 +84,6 @@ obj-$(CONFIG_HYPERV_NET) += hyperv/
obj-$(CONFIG_NTB_NETDEV) += ntb_netdev.o
obj-$(CONFIG_FUJITSU_ES) += fjes/
-
-thunderbolt-net-y += thunderbolt.o
-obj-$(CONFIG_USB4_NET) += thunderbolt-net.o
+obj-$(CONFIG_USB4_NET) += thunderbolt/
obj-$(CONFIG_NETDEVSIM) += netdevsim/
obj-$(CONFIG_NET_FAILOVER) += net_failover.o
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 0363ce597661..00646aa315c3 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -419,8 +419,10 @@ static int bond_vlan_rx_kill_vid(struct net_device *bond_dev,
/**
* bond_ipsec_add_sa - program device with a security association
* @xs: pointer to transformer state struct
+ * @extack: extack pointer to fill with the failure reason
**/
-static int bond_ipsec_add_sa(struct xfrm_state *xs)
+static int bond_ipsec_add_sa(struct xfrm_state *xs,
+ struct netlink_ext_ack *extack)
{
struct net_device *bond_dev = xs->xso.dev;
struct bond_ipsec *ipsec;
@@ -442,7 +444,7 @@ static int bond_ipsec_add_sa(struct xfrm_state *xs)
if (!slave->dev->xfrmdev_ops ||
!slave->dev->xfrmdev_ops->xdo_dev_state_add ||
netif_is_bond_master(slave->dev)) {
- slave_warn(bond_dev, slave->dev, "Slave does not support ipsec offload\n");
+ NL_SET_ERR_MSG_MOD(extack, "Slave does not support ipsec offload");
rcu_read_unlock();
return -EINVAL;
}
@@ -454,7 +456,7 @@ static int bond_ipsec_add_sa(struct xfrm_state *xs)
}
xs->xso.real_dev = slave->dev;
- err = slave->dev->xfrmdev_ops->xdo_dev_state_add(xs);
+ err = slave->dev->xfrmdev_ops->xdo_dev_state_add(xs, extack);
if (!err) {
ipsec->xs = xs;
INIT_LIST_HEAD(&ipsec->list);
@@ -494,7 +496,7 @@ static void bond_ipsec_add_sa_all(struct bonding *bond)
spin_lock_bh(&bond->ipsec_lock);
list_for_each_entry(ipsec, &bond->ipsec_list, list) {
ipsec->xs->xso.real_dev = slave->dev;
- if (slave->dev->xfrmdev_ops->xdo_dev_state_add(ipsec->xs)) {
+ if (slave->dev->xfrmdev_ops->xdo_dev_state_add(ipsec->xs, NULL)) {
slave_warn(bond_dev, slave->dev, "%s: failed to add SA\n", __func__);
ipsec->xs->xso.real_dev = NULL;
}
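For context, a minimal sketch (not part of the patch) of what a slave driver's callback looks like with the extended signature; the function name and the transport-mode check are invented for illustration. Note that bond_ipsec_add_sa_all() passes a NULL extack, which the NL_SET_ERR_MSG* helpers tolerate.

static int example_xdo_dev_state_add(struct xfrm_state *xs,
				     struct netlink_ext_ack *extack)
{
	if (xs->props.mode != XFRM_MODE_TRANSPORT) {
		NL_SET_ERR_MSG_MOD(extack, "Only transport mode SAs can be offloaded");
		return -EINVAL;
	}
	/* program the SA into hardware here */
	return 0;
}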
diff --git a/drivers/net/can/ctucanfd/ctucanfd_platform.c b/drivers/net/can/ctucanfd/ctucanfd_platform.c
index f83684f006ea..a17561d97192 100644
--- a/drivers/net/can/ctucanfd/ctucanfd_platform.c
+++ b/drivers/net/can/ctucanfd/ctucanfd_platform.c
@@ -47,7 +47,6 @@ static void ctucan_platform_set_drvdata(struct device *dev,
*/
static int ctucan_platform_probe(struct platform_device *pdev)
{
- struct resource *res; /* IO mem resources */
struct device *dev = &pdev->dev;
void __iomem *addr;
int ret;
@@ -55,8 +54,7 @@ static int ctucan_platform_probe(struct platform_device *pdev)
int irq;
/* Get the virtual base address for the device */
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- addr = devm_ioremap_resource(dev, res);
+ addr = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(addr)) {
ret = PTR_ERR(addr);
goto err;
diff --git a/drivers/net/can/dev/bittiming.c b/drivers/net/can/dev/bittiming.c
index 7ae80763c960..0b93900b1dfa 100644
--- a/drivers/net/can/dev/bittiming.c
+++ b/drivers/net/can/dev/bittiming.c
@@ -6,25 +6,81 @@
#include <linux/can/dev.h>
+void can_sjw_set_default(struct can_bittiming *bt)
+{
+ if (bt->sjw)
+ return;
+
+ /* If user space provides no sjw, default to min(phase_seg1, phase_seg2 / 2), but at least 1 */
+ bt->sjw = max(1U, min(bt->phase_seg1, bt->phase_seg2 / 2));
+}
+
+int can_sjw_check(const struct net_device *dev, const struct can_bittiming *bt,
+ const struct can_bittiming_const *btc, struct netlink_ext_ack *extack)
+{
+ if (bt->sjw > btc->sjw_max) {
+ NL_SET_ERR_MSG_FMT(extack, "sjw: %u greater than max sjw: %u",
+ bt->sjw, btc->sjw_max);
+ return -EINVAL;
+ }
+
+ if (bt->sjw > bt->phase_seg1) {
+ NL_SET_ERR_MSG_FMT(extack,
+ "sjw: %u greater than phase-seg1: %u",
+ bt->sjw, bt->phase_seg1);
+ return -EINVAL;
+ }
+
+ if (bt->sjw > bt->phase_seg2) {
+ NL_SET_ERR_MSG_FMT(extack,
+ "sjw: %u greater than phase-seg2: %u",
+ bt->sjw, bt->phase_seg2);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
/* Checks the validity of the specified bit-timing parameters prop_seg,
* phase_seg1, phase_seg2 and sjw and tries to determine the bitrate
* prescaler value brp. You can find more information in the header
* file linux/can/netlink.h.
*/
static int can_fixup_bittiming(const struct net_device *dev, struct can_bittiming *bt,
- const struct can_bittiming_const *btc)
+ const struct can_bittiming_const *btc,
+ struct netlink_ext_ack *extack)
{
+ const unsigned int tseg1 = bt->prop_seg + bt->phase_seg1;
const struct can_priv *priv = netdev_priv(dev);
- unsigned int tseg1, alltseg;
u64 brp64;
+ int err;
- tseg1 = bt->prop_seg + bt->phase_seg1;
- if (!bt->sjw)
- bt->sjw = 1;
- if (bt->sjw > btc->sjw_max ||
- tseg1 < btc->tseg1_min || tseg1 > btc->tseg1_max ||
- bt->phase_seg2 < btc->tseg2_min || bt->phase_seg2 > btc->tseg2_max)
- return -ERANGE;
+ if (tseg1 < btc->tseg1_min) {
+ NL_SET_ERR_MSG_FMT(extack, "prop-seg + phase-seg1: %u less than tseg1-min: %u",
+ tseg1, btc->tseg1_min);
+ return -EINVAL;
+ }
+ if (tseg1 > btc->tseg1_max) {
+ NL_SET_ERR_MSG_FMT(extack, "prop-seg + phase-seg1: %u greater than tseg1-max: %u",
+ tseg1, btc->tseg1_max);
+ return -EINVAL;
+ }
+ if (bt->phase_seg2 < btc->tseg2_min) {
+ NL_SET_ERR_MSG_FMT(extack, "phase-seg2: %u less than tseg2-min: %u",
+ bt->phase_seg2, btc->tseg2_min);
+ return -EINVAL;
+ }
+ if (bt->phase_seg2 > btc->tseg2_max) {
+ NL_SET_ERR_MSG_FMT(extack, "phase-seg2: %u greater than tseg2-max: %u",
+ bt->phase_seg2, btc->tseg2_max);
+ return -EINVAL;
+ }
+
+ can_sjw_set_default(bt);
+
+ err = can_sjw_check(dev, bt, btc, extack);
+ if (err)
+ return err;
brp64 = (u64)priv->clock.freq * (u64)bt->tq;
if (btc->brp_inc > 1)
@@ -35,12 +91,21 @@ static int can_fixup_bittiming(const struct net_device *dev, struct can_bittimin
brp64 *= btc->brp_inc;
bt->brp = (u32)brp64;
- if (bt->brp < btc->brp_min || bt->brp > btc->brp_max)
+ if (bt->brp < btc->brp_min) {
+ NL_SET_ERR_MSG_FMT(extack, "resulting brp: %u less than brp-min: %u",
+ bt->brp, btc->brp_min);
+ return -EINVAL;
+ }
+ if (bt->brp > btc->brp_max) {
+ NL_SET_ERR_MSG_FMT(extack, "resulting brp: %u greater than brp-max: %u",
+ bt->brp, btc->brp_max);
return -EINVAL;
+ }
- alltseg = bt->prop_seg + bt->phase_seg1 + bt->phase_seg2 + 1;
- bt->bitrate = priv->clock.freq / (bt->brp * alltseg);
- bt->sample_point = ((tseg1 + 1) * 1000) / alltseg;
+ bt->bitrate = priv->clock.freq / (bt->brp * can_bit_time(bt));
+ bt->sample_point = ((CAN_SYNC_SEG + tseg1) * 1000) / can_bit_time(bt);
+ bt->tq = DIV_U64_ROUND_CLOSEST(mul_u32_u32(bt->brp, NSEC_PER_SEC),
+ priv->clock.freq);
return 0;
}
@@ -49,7 +114,8 @@ static int can_fixup_bittiming(const struct net_device *dev, struct can_bittimin
static int
can_validate_bitrate(const struct net_device *dev, const struct can_bittiming *bt,
const u32 *bitrate_const,
- const unsigned int bitrate_const_cnt)
+ const unsigned int bitrate_const_cnt,
+ struct netlink_ext_ack *extack)
{
unsigned int i;
@@ -58,30 +124,30 @@ can_validate_bitrate(const struct net_device *dev, const struct can_bittiming *b
return 0;
}
+ NL_SET_ERR_MSG_FMT(extack, "bitrate %u bps not supported",
+ bt->bitrate);
+
return -EINVAL;
}
int can_get_bittiming(const struct net_device *dev, struct can_bittiming *bt,
const struct can_bittiming_const *btc,
const u32 *bitrate_const,
- const unsigned int bitrate_const_cnt)
+ const unsigned int bitrate_const_cnt,
+ struct netlink_ext_ack *extack)
{
- int err;
-
/* Depending on the given can_bittiming parameter structure the CAN
* timing parameters are calculated based on the provided bitrate OR
* alternatively the CAN timing parameters (tq, prop_seg, etc.) are
* provided directly which are then checked and fixed up.
*/
if (!bt->tq && bt->bitrate && btc)
- err = can_calc_bittiming(dev, bt, btc);
- else if (bt->tq && !bt->bitrate && btc)
- err = can_fixup_bittiming(dev, bt, btc);
- else if (!bt->tq && bt->bitrate && bitrate_const)
- err = can_validate_bitrate(dev, bt, bitrate_const,
- bitrate_const_cnt);
- else
- err = -EINVAL;
-
- return err;
+ return can_calc_bittiming(dev, bt, btc, extack);
+ if (bt->tq && !bt->bitrate && btc)
+ return can_fixup_bittiming(dev, bt, btc, extack);
+ if (!bt->tq && bt->bitrate && bitrate_const)
+ return can_validate_bitrate(dev, bt, bitrate_const,
+ bitrate_const_cnt, extack);
+
+ return -EINVAL;
}
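A worked example of the new SJW default (illustrative values, not part of the patch):

struct can_bittiming bt = {
	.prop_seg = 1,
	.phase_seg1 = 7,
	.phase_seg2 = 8,
	.sjw = 0,	/* user space did not provide an sjw */
};

can_sjw_set_default(&bt);
/* sjw = max(1, min(phase_seg1, phase_seg2 / 2)) = max(1, min(7, 4)) = 4,
 * which also satisfies all three range checks in can_sjw_check().
 */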
diff --git a/drivers/net/can/dev/calc_bittiming.c b/drivers/net/can/dev/calc_bittiming.c
index d3caa040614d..3809c148fb88 100644
--- a/drivers/net/can/dev/calc_bittiming.c
+++ b/drivers/net/can/dev/calc_bittiming.c
@@ -63,7 +63,7 @@ can_update_sample_point(const struct can_bittiming_const *btc,
}
int can_calc_bittiming(const struct net_device *dev, struct can_bittiming *bt,
- const struct can_bittiming_const *btc)
+ const struct can_bittiming_const *btc, struct netlink_ext_ack *extack)
{
struct can_priv *priv = netdev_priv(dev);
unsigned int bitrate; /* current bitrate */
@@ -76,6 +76,7 @@ int can_calc_bittiming(const struct net_device *dev, struct can_bittiming *bt,
unsigned int best_brp = 0; /* current best value for brp */
unsigned int brp, tsegall, tseg, tseg1 = 0, tseg2 = 0;
u64 v64;
+ int err;
/* Use CiA recommended sample points */
if (bt->sample_point) {
@@ -133,13 +134,14 @@ int can_calc_bittiming(const struct net_device *dev, struct can_bittiming *bt,
do_div(v64, bt->bitrate);
bitrate_error = (u32)v64;
if (bitrate_error > CAN_CALC_MAX_ERROR) {
- netdev_err(dev,
- "bitrate error %d.%d%% too high\n",
- bitrate_error / 10, bitrate_error % 10);
- return -EDOM;
+ NL_SET_ERR_MSG_FMT(extack,
+ "bitrate error: %u.%u%% too high",
+ bitrate_error / 10, bitrate_error % 10);
+ return -EINVAL;
}
- netdev_warn(dev, "bitrate error %d.%d%%\n",
- bitrate_error / 10, bitrate_error % 10);
+ NL_SET_ERR_MSG_FMT(extack,
+ "bitrate error: %u.%u%%",
+ bitrate_error / 10, bitrate_error % 10);
}
/* real sample point */
@@ -154,23 +156,17 @@ int can_calc_bittiming(const struct net_device *dev, struct can_bittiming *bt,
bt->phase_seg1 = tseg1 - bt->prop_seg;
bt->phase_seg2 = tseg2;
- /* check for sjw user settings */
- if (!bt->sjw || !btc->sjw_max) {
- bt->sjw = 1;
- } else {
- /* bt->sjw is at least 1 -> sanitize upper bound to sjw_max */
- if (bt->sjw > btc->sjw_max)
- bt->sjw = btc->sjw_max;
- /* bt->sjw must not be higher than tseg2 */
- if (tseg2 < bt->sjw)
- bt->sjw = tseg2;
- }
+ can_sjw_set_default(bt);
+
+ err = can_sjw_check(dev, bt, btc, extack);
+ if (err)
+ return err;
bt->brp = best_brp;
/* real bitrate */
bt->bitrate = priv->clock.freq /
- (bt->brp * (CAN_SYNC_SEG + tseg1 + tseg2));
+ (bt->brp * can_bit_time(bt));
return 0;
}
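The bitrate_error value above is kept in tenths of a percent, which is why it is printed as two fields; for example (illustrative numbers, and assuming the usual 5.0% CAN_CALC_MAX_ERROR limit):

/* bitrate_error = 13 -> reported as "bitrate error: 1.3%"
 * bitrate_error = 51 -> above CAN_CALC_MAX_ERROR, so the function now
 *                       fails with -EINVAL via extack instead of the
 *                       old -EDOM via netdev_err()
 */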
diff --git a/drivers/net/can/dev/dev.c b/drivers/net/can/dev/dev.c
index c1956b1e9faf..7f9334a8af50 100644
--- a/drivers/net/can/dev/dev.c
+++ b/drivers/net/can/dev/dev.c
@@ -498,6 +498,18 @@ static int can_get_termination(struct net_device *ndev)
return 0;
}
+static bool
+can_bittiming_const_valid(const struct can_bittiming_const *btc)
+{
+ if (!btc)
+ return true;
+
+ if (!btc->sjw_max)
+ return false;
+
+ return true;
+}
+
/* Register the CAN network device */
int register_candev(struct net_device *dev)
{
@@ -518,6 +530,15 @@ int register_candev(struct net_device *dev)
if (!priv->data_bitrate_const != !priv->data_bitrate_const_cnt)
return -EINVAL;
+ /* We only support either fixed bit rates or bit timing const. */
+ if ((priv->bitrate_const || priv->data_bitrate_const) &&
+ (priv->bittiming_const || priv->data_bittiming_const))
+ return -EINVAL;
+
+ if (!can_bittiming_const_valid(priv->bittiming_const) ||
+ !can_bittiming_const_valid(priv->data_bittiming_const))
+ return -EINVAL;
+
if (!priv->termination_const) {
err = can_get_termination(dev);
if (err)
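The consistency checks in register_candev() use the `!ptr != !cnt` idiom, which is true exactly when one of the pair is set and the other is not; spelled out for the fixed-bitrate table (a recap of the logic above, not new behavior):

/* !priv->bitrate_const != !priv->bitrate_const_cnt
 *
 *   table == NULL, cnt == 0  -> false (consistent: no fixed bitrates)
 *   table != NULL, cnt  > 0  -> false (consistent: fixed bitrates)
 *   table != NULL, cnt == 0  -> true  (rejected with -EINVAL)
 *   table == NULL, cnt  > 0  -> true  (rejected with -EINVAL)
 */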
diff --git a/drivers/net/can/dev/netlink.c b/drivers/net/can/dev/netlink.c
index 8efa22d9f214..036d85ef07f5 100644
--- a/drivers/net/can/dev/netlink.c
+++ b/drivers/net/can/dev/netlink.c
@@ -36,10 +36,24 @@ static const struct nla_policy can_tdc_policy[IFLA_CAN_TDC_MAX + 1] = {
[IFLA_CAN_TDC_TDCF] = { .type = NLA_U32 },
};
+static int can_validate_bittiming(const struct can_bittiming *bt,
+ struct netlink_ext_ack *extack)
+{
+ /* sample point is in one-tenth of a percent */
+ if (bt->sample_point >= 1000) {
+ NL_SET_ERR_MSG(extack, "sample point must be between 0 and 100%");
+
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
static int can_validate(struct nlattr *tb[], struct nlattr *data[],
struct netlink_ext_ack *extack)
{
bool is_can_fd = false;
+ int err;
/* Make sure that valid CAN FD configurations always consist of
* - nominal/arbitration bittiming
@@ -51,6 +65,15 @@ static int can_validate(struct nlattr *tb[], struct nlattr *data[],
if (!data)
return 0;
+ if (data[IFLA_CAN_BITTIMING]) {
+ struct can_bittiming bt;
+
+ memcpy(&bt, nla_data(data[IFLA_CAN_BITTIMING]), sizeof(bt));
+ err = can_validate_bittiming(&bt, extack);
+ if (err)
+ return err;
+ }
+
if (data[IFLA_CAN_CTRLMODE]) {
struct can_ctrlmode *cm = nla_data(data[IFLA_CAN_CTRLMODE]);
u32 tdc_flags = cm->flags & CAN_CTRLMODE_TDC_MASK;
@@ -71,7 +94,6 @@ static int can_validate(struct nlattr *tb[], struct nlattr *data[],
*/
if (data[IFLA_CAN_TDC]) {
struct nlattr *tb_tdc[IFLA_CAN_TDC_MAX + 1];
- int err;
err = nla_parse_nested(tb_tdc, IFLA_CAN_TDC_MAX,
data[IFLA_CAN_TDC],
@@ -102,6 +124,15 @@ static int can_validate(struct nlattr *tb[], struct nlattr *data[],
return -EOPNOTSUPP;
}
+ if (data[IFLA_CAN_DATA_BITTIMING]) {
+ struct can_bittiming bt;
+
+ memcpy(&bt, nla_data(data[IFLA_CAN_DATA_BITTIMING]), sizeof(bt));
+ err = can_validate_bittiming(&bt, extack);
+ if (err)
+ return err;
+ }
+
return 0;
}
@@ -184,13 +215,15 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
err = can_get_bittiming(dev, &bt,
priv->bittiming_const,
priv->bitrate_const,
- priv->bitrate_const_cnt);
+ priv->bitrate_const_cnt,
+ extack);
if (err)
return err;
if (priv->bitrate_max && bt.bitrate > priv->bitrate_max) {
- netdev_err(dev, "arbitration bitrate surpasses transceiver capabilities of %d bps\n",
- priv->bitrate_max);
+ NL_SET_ERR_MSG_FMT(extack,
+ "arbitration bitrate %u bps surpasses transceiver capabilities of %u bps",
+ bt.bitrate, priv->bitrate_max);
return -EINVAL;
}
@@ -288,13 +321,15 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
err = can_get_bittiming(dev, &dbt,
priv->data_bittiming_const,
priv->data_bitrate_const,
- priv->data_bitrate_const_cnt);
+ priv->data_bitrate_const_cnt,
+ extack);
if (err)
return err;
if (priv->bitrate_max && dbt.bitrate > priv->bitrate_max) {
- netdev_err(dev, "canfd data bitrate surpasses transceiver capabilities of %d bps\n",
- priv->bitrate_max);
+ NL_SET_ERR_MSG_FMT(extack,
+ "CANFD data bitrate %u bps surpasses transceiver capabilities of %u bps",
+ dbt.bitrate, priv->bitrate_max);
return -EINVAL;
}
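As the comment in can_validate_bittiming() notes, sample_point travels over netlink in tenths of a percent, so the CiA-recommended 87.5% sample point is encoded as 875 (an illustration, not part of the patch):

struct can_bittiming bt = {
	.sample_point = 875,	/* 87.5%, accepted */
};
/* .sample_point = 1000 would mean 100% and is now rejected early with
 * "sample point must be between 0 and 100%".
 */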
diff --git a/drivers/net/can/rcar/rcar_canfd.c b/drivers/net/can/rcar/rcar_canfd.c
index f6fa7157b99b..ef4e1b9a9e1e 100644
--- a/drivers/net/can/rcar/rcar_canfd.c
+++ b/drivers/net/can/rcar/rcar_canfd.c
@@ -21,23 +21,23 @@
* wherever it is modified to a readable name.
*/
-#include <linux/module.h>
-#include <linux/moduleparam.h>
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/interrupt.h>
+#include <linux/bitmap.h>
+#include <linux/bitops.h>
+#include <linux/can/dev.h>
+#include <linux/clk.h>
#include <linux/errno.h>
#include <linux/ethtool.h>
+#include <linux/interrupt.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
#include <linux/netdevice.h>
-#include <linux/platform_device.h>
-#include <linux/can/dev.h>
-#include <linux/clk.h>
#include <linux/of.h>
#include <linux/of_device.h>
-#include <linux/bitmap.h>
-#include <linux/bitops.h>
-#include <linux/iopoll.h>
+#include <linux/platform_device.h>
#include <linux/reset.h>
+#include <linux/types.h>
#define RCANFD_DRV_NAME "rcar_canfd"
@@ -82,8 +82,8 @@
#define RCANFD_GERFL_DEF BIT(0)
#define RCANFD_GERFL_ERR(gpriv, x) \
- ((x) & (reg_v3u(gpriv, RCANFD_GERFL_EEF0_7, \
- RCANFD_GERFL_EEF(0) | RCANFD_GERFL_EEF(1)) | \
+ ((x) & (reg_gen4(gpriv, RCANFD_GERFL_EEF0_7, \
+ RCANFD_GERFL_EEF(0) | RCANFD_GERFL_EEF(1)) | \
RCANFD_GERFL_MES | \
((gpriv)->fdmode ? RCANFD_GERFL_CMPOF : 0)))
@@ -91,16 +91,16 @@
/* RSCFDnCFDGAFLCFG0 / RSCFDnGAFLCFG0 */
#define RCANFD_GAFLCFG_SETRNC(gpriv, n, x) \
- (((x) & reg_v3u(gpriv, 0x1ff, 0xff)) << \
- (reg_v3u(gpriv, 16, 24) - (n) * reg_v3u(gpriv, 16, 8)))
+ (((x) & reg_gen4(gpriv, 0x1ff, 0xff)) << \
+ (reg_gen4(gpriv, 16, 24) - ((n) & 1) * reg_gen4(gpriv, 16, 8)))
#define RCANFD_GAFLCFG_GETRNC(gpriv, n, x) \
- (((x) >> (reg_v3u(gpriv, 16, 24) - (n) * reg_v3u(gpriv, 16, 8))) & \
- reg_v3u(gpriv, 0x1ff, 0xff))
+ (((x) >> (reg_gen4(gpriv, 16, 24) - ((n) & 1) * reg_gen4(gpriv, 16, 8))) & \
+ reg_gen4(gpriv, 0x1ff, 0xff))
/* RSCFDnCFDGAFLECTR / RSCFDnGAFLECTR */
#define RCANFD_GAFLECTR_AFLDAE BIT(8)
-#define RCANFD_GAFLECTR_AFLPN(gpriv, x) ((x) & reg_v3u(gpriv, 0x7f, 0x1f))
+#define RCANFD_GAFLECTR_AFLPN(gpriv, x) ((x) & reg_gen4(gpriv, 0x7f, 0x1f))
/* RSCFDnCFDGAFLIDj / RSCFDnGAFLIDj */
#define RCANFD_GAFLID_GAFLLB BIT(29)
@@ -118,13 +118,13 @@
/* RSCFDnCFDCmNCFG - CAN FD only */
#define RCANFD_NCFG_NTSEG2(gpriv, x) \
- (((x) & reg_v3u(gpriv, 0x7f, 0x1f)) << reg_v3u(gpriv, 25, 24))
+ (((x) & reg_gen4(gpriv, 0x7f, 0x1f)) << reg_gen4(gpriv, 25, 24))
#define RCANFD_NCFG_NTSEG1(gpriv, x) \
- (((x) & reg_v3u(gpriv, 0xff, 0x7f)) << reg_v3u(gpriv, 17, 16))
+ (((x) & reg_gen4(gpriv, 0xff, 0x7f)) << reg_gen4(gpriv, 17, 16))
#define RCANFD_NCFG_NSJW(gpriv, x) \
- (((x) & reg_v3u(gpriv, 0x7f, 0x1f)) << reg_v3u(gpriv, 10, 11))
+ (((x) & reg_gen4(gpriv, 0x7f, 0x1f)) << reg_gen4(gpriv, 10, 11))
#define RCANFD_NCFG_NBRP(x) (((x) & 0x3ff) << 0)
@@ -186,19 +186,19 @@
#define RCANFD_CERFL_ERR(x) ((x) & (0x7fff)) /* above bits 14:0 */
/* RSCFDnCFDCmDCFG */
-#define RCANFD_DCFG_DSJW(x) (((x) & 0x7) << 24)
+#define RCANFD_DCFG_DSJW(gpriv, x) (((x) & reg_gen4(gpriv, 0xf, 0x7)) << 24)
#define RCANFD_DCFG_DTSEG2(gpriv, x) \
- (((x) & reg_v3u(gpriv, 0x0f, 0x7)) << reg_v3u(gpriv, 16, 20))
+ (((x) & reg_gen4(gpriv, 0x0f, 0x7)) << reg_gen4(gpriv, 16, 20))
#define RCANFD_DCFG_DTSEG1(gpriv, x) \
- (((x) & reg_v3u(gpriv, 0x1f, 0xf)) << reg_v3u(gpriv, 8, 16))
+ (((x) & reg_gen4(gpriv, 0x1f, 0xf)) << reg_gen4(gpriv, 8, 16))
#define RCANFD_DCFG_DBRP(x) (((x) & 0xff) << 0)
/* RSCFDnCFDCmFDCFG */
-#define RCANFD_FDCFG_CLOE BIT(30)
-#define RCANFD_FDCFG_FDOE BIT(28)
+#define RCANFD_GEN4_FDCFG_CLOE BIT(30)
+#define RCANFD_GEN4_FDCFG_FDOE BIT(28)
#define RCANFD_FDCFG_TDCE BIT(9)
#define RCANFD_FDCFG_TDCOC BIT(8)
#define RCANFD_FDCFG_TDCO(x) (((x) & 0x7f) >> 16)
@@ -233,10 +233,11 @@
/* Common FIFO bits */
/* RSCFDnCFDCFCCk */
-#define RCANFD_CFCC_CFTML(gpriv, x) (((x) & 0xf) << reg_v3u(gpriv, 16, 20))
-#define RCANFD_CFCC_CFM(gpriv, x) (((x) & 0x3) << reg_v3u(gpriv, 8, 16))
+#define RCANFD_CFCC_CFTML(gpriv, x) \
+ (((x) & reg_gen4(gpriv, 0x1f, 0xf)) << reg_gen4(gpriv, 16, 20))
+#define RCANFD_CFCC_CFM(gpriv, x) (((x) & 0x3) << reg_gen4(gpriv, 8, 16))
#define RCANFD_CFCC_CFIM BIT(12)
-#define RCANFD_CFCC_CFDC(gpriv, x) (((x) & 0x7) << reg_v3u(gpriv, 21, 8))
+#define RCANFD_CFCC_CFDC(gpriv, x) (((x) & 0x7) << reg_gen4(gpriv, 21, 8))
#define RCANFD_CFCC_CFPLS(x) (((x) & 0x7) << 4)
#define RCANFD_CFCC_CFTXIE BIT(2)
#define RCANFD_CFCC_CFE BIT(0)
@@ -304,7 +305,7 @@
#define RCANFD_RMND(y) (0x00a8 + (0x04 * (y)))
/* RSCFDnCFDRFCCx / RSCFDnRFCCx */
-#define RCANFD_RFCC(gpriv, x) (reg_v3u(gpriv, 0x00c0, 0x00b8) + (0x04 * (x)))
+#define RCANFD_RFCC(gpriv, x) (reg_gen4(gpriv, 0x00c0, 0x00b8) + (0x04 * (x)))
/* RSCFDnCFDRFSTSx / RSCFDnRFSTSx */
#define RCANFD_RFSTS(gpriv, x) (RCANFD_RFCC(gpriv, x) + 0x20)
/* RSCFDnCFDRFPCTRx / RSCFDnRFPCTRx */
@@ -314,13 +315,13 @@
/* RSCFDnCFDCFCCx / RSCFDnCFCCx */
#define RCANFD_CFCC(gpriv, ch, idx) \
- (reg_v3u(gpriv, 0x0120, 0x0118) + (0x0c * (ch)) + (0x04 * (idx)))
+ (reg_gen4(gpriv, 0x0120, 0x0118) + (0x0c * (ch)) + (0x04 * (idx)))
/* RSCFDnCFDCFSTSx / RSCFDnCFSTSx */
#define RCANFD_CFSTS(gpriv, ch, idx) \
- (reg_v3u(gpriv, 0x01e0, 0x0178) + (0x0c * (ch)) + (0x04 * (idx)))
+ (reg_gen4(gpriv, 0x01e0, 0x0178) + (0x0c * (ch)) + (0x04 * (idx)))
/* RSCFDnCFDCFPCTRx / RSCFDnCFPCTRx */
#define RCANFD_CFPCTR(gpriv, ch, idx) \
- (reg_v3u(gpriv, 0x0240, 0x01d8) + (0x0c * (ch)) + (0x04 * (idx)))
+ (reg_gen4(gpriv, 0x0240, 0x01d8) + (0x0c * (ch)) + (0x04 * (idx)))
/* RSCFDnCFDFESTS / RSCFDnFESTS */
#define RCANFD_FESTS (0x0238)
@@ -428,16 +429,15 @@
/* RSCFDnRPGACCr */
#define RCANFD_C_RPGACC(r) (0x1900 + (0x04 * (r)))
-/* R-Car V3U Classical and CAN FD mode specific register map */
-#define RCANFD_V3U_CFDCFG (0x1314)
-#define RCANFD_V3U_DCFG(m) (0x1400 + (0x20 * (m)))
+/* R-Car Gen4 Classical and CAN FD mode specific register map */
+#define RCANFD_GEN4_FDCFG(m) (0x1404 + (0x20 * (m)))
-#define RCANFD_V3U_GAFL_OFFSET (0x1800)
+#define RCANFD_GEN4_GAFL_OFFSET (0x1800)
/* CAN FD mode specific register map */
/* RSCFDnCFDCmXXX -> RCANFD_F_XXX(m) */
-#define RCANFD_F_DCFG(m) (0x0500 + (0x20 * (m)))
+#define RCANFD_F_DCFG(gpriv, m) (reg_gen4(gpriv, 0x1400, 0x0500) + (0x20 * (m)))
#define RCANFD_F_CFDCFG(m) (0x0504 + (0x20 * (m)))
#define RCANFD_F_CFDCTR(m) (0x0508 + (0x20 * (m)))
#define RCANFD_F_CFDSTS(m) (0x050c + (0x20 * (m)))
@@ -453,7 +453,7 @@
#define RCANFD_F_RMDF(q, b) (0x200c + (0x04 * (b)) + (0x20 * (q)))
/* RSCFDnCFDRFXXx -> RCANFD_F_RFXX(x) */
-#define RCANFD_F_RFOFFSET(gpriv) reg_v3u(gpriv, 0x6000, 0x3000)
+#define RCANFD_F_RFOFFSET(gpriv) reg_gen4(gpriv, 0x6000, 0x3000)
#define RCANFD_F_RFID(gpriv, x) (RCANFD_F_RFOFFSET(gpriv) + (0x80 * (x)))
#define RCANFD_F_RFPTR(gpriv, x) (RCANFD_F_RFOFFSET(gpriv) + 0x04 + (0x80 * (x)))
#define RCANFD_F_RFFDSTS(gpriv, x) (RCANFD_F_RFOFFSET(gpriv) + 0x08 + (0x80 * (x)))
@@ -461,7 +461,7 @@
(RCANFD_F_RFOFFSET(gpriv) + 0x0c + (0x80 * (x)) + (0x04 * (df)))
/* RSCFDnCFDCFXXk -> RCANFD_F_CFXX(ch, k) */
-#define RCANFD_F_CFOFFSET(gpriv) reg_v3u(gpriv, 0x6400, 0x3400)
+#define RCANFD_F_CFOFFSET(gpriv) reg_gen4(gpriv, 0x6400, 0x3400)
#define RCANFD_F_CFID(gpriv, ch, idx) \
(RCANFD_F_CFOFFSET(gpriv) + (0x180 * (ch)) + (0x80 * (idx)))
@@ -597,28 +597,28 @@ static const struct rcar_canfd_hw_info rcar_gen3_hw_info = {
.shared_global_irqs = 1,
};
+static const struct rcar_canfd_hw_info rcar_gen4_hw_info = {
+ .max_channels = 8,
+ .postdiv = 2,
+ .shared_global_irqs = 1,
+};
+
static const struct rcar_canfd_hw_info rzg2l_hw_info = {
.max_channels = 2,
.postdiv = 1,
.multi_channel_irqs = 1,
};
-static const struct rcar_canfd_hw_info r8a779a0_hw_info = {
- .max_channels = 8,
- .postdiv = 2,
- .shared_global_irqs = 1,
-};
-
/* Helper functions */
-static inline bool is_v3u(struct rcar_canfd_global *gpriv)
+static inline bool is_gen4(struct rcar_canfd_global *gpriv)
{
- return gpriv->info == &r8a779a0_hw_info;
+ return gpriv->info == &rcar_gen4_hw_info;
}
-static inline u32 reg_v3u(struct rcar_canfd_global *gpriv,
- u32 v3u, u32 not_v3u)
+static inline u32 reg_gen4(struct rcar_canfd_global *gpriv,
+ u32 gen4, u32 not_gen4)
{
- return is_v3u(gpriv) ? v3u : not_v3u;
+ return is_gen4(gpriv) ? gen4 : not_gen4;
}
static inline void rcar_canfd_update(u32 mask, u32 val, u32 __iomem *reg)
@@ -688,13 +688,14 @@ static void rcar_canfd_tx_failure_cleanup(struct net_device *ndev)
static void rcar_canfd_set_mode(struct rcar_canfd_global *gpriv)
{
- if (is_v3u(gpriv)) {
- if (gpriv->fdmode)
- rcar_canfd_set_bit(gpriv->base, RCANFD_V3U_CFDCFG,
- RCANFD_FDCFG_FDOE);
- else
- rcar_canfd_set_bit(gpriv->base, RCANFD_V3U_CFDCFG,
- RCANFD_FDCFG_CLOE);
+ if (is_gen4(gpriv)) {
+ u32 ch, val = gpriv->fdmode ? RCANFD_GEN4_FDCFG_FDOE
+ : RCANFD_GEN4_FDCFG_CLOE;
+
+ for_each_set_bit(ch, &gpriv->channels_mask,
+ gpriv->info->max_channels)
+ rcar_canfd_set_bit(gpriv->base, RCANFD_GEN4_FDCFG(ch),
+ val);
} else {
if (gpriv->fdmode)
rcar_canfd_set_bit(gpriv->base, RCANFD_GRMCFG,
@@ -814,8 +815,8 @@ static void rcar_canfd_configure_afl_rules(struct rcar_canfd_global *gpriv,
/* Write number of rules for channel */
rcar_canfd_set_bit(gpriv->base, RCANFD_GAFLCFG(ch),
RCANFD_GAFLCFG_SETRNC(gpriv, ch, num_rules));
- if (is_v3u(gpriv))
- offset = RCANFD_V3U_GAFL_OFFSET;
+ if (is_gen4(gpriv))
+ offset = RCANFD_GEN4_GAFL_OFFSET;
else if (gpriv->fdmode)
offset = RCANFD_F_GAFL_OFFSET;
else
@@ -1343,17 +1344,14 @@ static void rcar_canfd_set_bittiming(struct net_device *dev)
tseg2 = dbt->phase_seg2 - 1;
cfg = (RCANFD_DCFG_DTSEG1(gpriv, tseg1) | RCANFD_DCFG_DBRP(brp) |
- RCANFD_DCFG_DSJW(sjw) | RCANFD_DCFG_DTSEG2(gpriv, tseg2));
+ RCANFD_DCFG_DSJW(gpriv, sjw) | RCANFD_DCFG_DTSEG2(gpriv, tseg2));
- if (is_v3u(gpriv))
- rcar_canfd_write(priv->base, RCANFD_V3U_DCFG(ch), cfg);
- else
- rcar_canfd_write(priv->base, RCANFD_F_DCFG(ch), cfg);
+ rcar_canfd_write(priv->base, RCANFD_F_DCFG(gpriv, ch), cfg);
netdev_dbg(priv->ndev, "drate: brp %u, sjw %u, tseg1 %u, tseg2 %u\n",
brp, sjw, tseg1, tseg2);
} else {
/* Classical CAN only mode */
- if (is_v3u(gpriv)) {
+ if (is_gen4(gpriv)) {
cfg = (RCANFD_NCFG_NTSEG1(gpriv, tseg1) |
RCANFD_NCFG_NBRP(brp) |
RCANFD_NCFG_NSJW(gpriv, sjw) |
@@ -1510,7 +1508,7 @@ static netdev_tx_t rcar_canfd_start_xmit(struct sk_buff *skb,
dlc = RCANFD_CFPTR_CFDLC(can_fd_len2dlc(cf->len));
- if ((priv->can.ctrlmode & CAN_CTRLMODE_FD) || is_v3u(gpriv)) {
+ if ((priv->can.ctrlmode & CAN_CTRLMODE_FD) || is_gen4(gpriv)) {
rcar_canfd_write(priv->base,
RCANFD_F_CFID(gpriv, ch, RCANFD_CFFIFO_IDX), id);
rcar_canfd_write(priv->base,
@@ -1569,7 +1567,7 @@ static void rcar_canfd_rx_pkt(struct rcar_canfd_channel *priv)
u32 ch = priv->channel;
u32 ridx = ch + RCANFD_RFFIFO_IDX;
- if ((priv->can.ctrlmode & CAN_CTRLMODE_FD) || is_v3u(gpriv)) {
+ if ((priv->can.ctrlmode & CAN_CTRLMODE_FD) || is_gen4(gpriv)) {
id = rcar_canfd_read(priv->base, RCANFD_F_RFID(gpriv, ridx));
dlc = rcar_canfd_read(priv->base, RCANFD_F_RFPTR(gpriv, ridx));
@@ -1620,7 +1618,7 @@ static void rcar_canfd_rx_pkt(struct rcar_canfd_channel *priv)
cf->len = can_cc_dlc2len(RCANFD_RFPTR_RFDLC(dlc));
if (id & RCANFD_RFID_RFRTR)
cf->can_id |= CAN_RTR_FLAG;
- else if (is_v3u(gpriv))
+ else if (is_gen4(gpriv))
rcar_canfd_get_data(priv, cf, RCANFD_F_RFDF(gpriv, ridx, 0));
else
rcar_canfd_get_data(priv, cf, RCANFD_C_RFDF(ridx, 0));
@@ -1717,13 +1715,14 @@ static int rcar_canfd_channel_probe(struct rcar_canfd_global *gpriv, u32 ch,
{
const struct rcar_canfd_hw_info *info = gpriv->info;
struct platform_device *pdev = gpriv->pdev;
+ struct device *dev = &pdev->dev;
struct rcar_canfd_channel *priv;
struct net_device *ndev;
int err = -ENODEV;
ndev = alloc_candev(sizeof(*priv), RCANFD_FIFO_DEPTH);
if (!ndev) {
- dev_err(&pdev->dev, "alloc_candev() failed\n");
+ dev_err(dev, "alloc_candev() failed\n");
return -ENOMEM;
}
priv = netdev_priv(ndev);
@@ -1736,7 +1735,7 @@ static int rcar_canfd_channel_probe(struct rcar_canfd_global *gpriv, u32 ch,
priv->channel = ch;
priv->gpriv = gpriv;
priv->can.clock.freq = fcan_freq;
- dev_info(&pdev->dev, "can_clk rate is %u\n", priv->can.clock.freq);
+ dev_info(dev, "can_clk rate is %u\n", priv->can.clock.freq);
if (info->multi_channel_irqs) {
char *irq_name;
@@ -1755,31 +1754,31 @@ static int rcar_canfd_channel_probe(struct rcar_canfd_global *gpriv, u32 ch,
goto fail;
}
- irq_name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
- "canfd.ch%d_err", ch);
+ irq_name = devm_kasprintf(dev, GFP_KERNEL, "canfd.ch%d_err",
+ ch);
if (!irq_name) {
err = -ENOMEM;
goto fail;
}
- err = devm_request_irq(&pdev->dev, err_irq,
+ err = devm_request_irq(dev, err_irq,
rcar_canfd_channel_err_interrupt, 0,
irq_name, priv);
if (err) {
- dev_err(&pdev->dev, "devm_request_irq CH Err(%d) failed, error %d\n",
+ dev_err(dev, "devm_request_irq CH Err(%d) failed, error %d\n",
err_irq, err);
goto fail;
}
- irq_name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
- "canfd.ch%d_trx", ch);
+ irq_name = devm_kasprintf(dev, GFP_KERNEL, "canfd.ch%d_trx",
+ ch);
if (!irq_name) {
err = -ENOMEM;
goto fail;
}
- err = devm_request_irq(&pdev->dev, tx_irq,
+ err = devm_request_irq(dev, tx_irq,
rcar_canfd_channel_tx_interrupt, 0,
irq_name, priv);
if (err) {
- dev_err(&pdev->dev, "devm_request_irq Tx (%d) failed, error %d\n",
+ dev_err(dev, "devm_request_irq Tx (%d) failed, error %d\n",
tx_irq, err);
goto fail;
}
@@ -1803,7 +1802,7 @@ static int rcar_canfd_channel_probe(struct rcar_canfd_global *gpriv, u32 ch,
priv->can.do_set_mode = rcar_canfd_do_set_mode;
priv->can.do_get_berr_counter = rcar_canfd_get_berr_counter;
- SET_NETDEV_DEV(ndev, &pdev->dev);
+ SET_NETDEV_DEV(ndev, dev);
netif_napi_add_weight(ndev, &priv->napi, rcar_canfd_rx_poll,
RCANFD_NAPI_WEIGHT);
@@ -1811,11 +1810,10 @@ static int rcar_canfd_channel_probe(struct rcar_canfd_global *gpriv, u32 ch,
gpriv->ch[priv->channel] = priv;
err = register_candev(ndev);
if (err) {
- dev_err(&pdev->dev,
- "register_candev() failed, error %d\n", err);
+ dev_err(dev, "register_candev() failed, error %d\n", err);
goto fail_candev;
}
- dev_info(&pdev->dev, "device registered (channel %u)\n", priv->channel);
+ dev_info(dev, "device registered (channel %u)\n", priv->channel);
return 0;
fail_candev:
@@ -1839,6 +1837,7 @@ static void rcar_canfd_channel_remove(struct rcar_canfd_global *gpriv, u32 ch)
static int rcar_canfd_probe(struct platform_device *pdev)
{
const struct rcar_canfd_hw_info *info;
+ struct device *dev = &pdev->dev;
void __iomem *addr;
u32 sts, ch, fcan_freq;
struct rcar_canfd_global *gpriv;
@@ -1850,14 +1849,14 @@ static int rcar_canfd_probe(struct platform_device *pdev)
char name[9] = "channelX";
int i;
- info = of_device_get_match_data(&pdev->dev);
+ info = of_device_get_match_data(dev);
- if (of_property_read_bool(pdev->dev.of_node, "renesas,no-can-fd"))
+ if (of_property_read_bool(dev->of_node, "renesas,no-can-fd"))
fdmode = false; /* Classical CAN only mode */
for (i = 0; i < info->max_channels; ++i) {
name[7] = '0' + i;
- of_child = of_get_child_by_name(pdev->dev.of_node, name);
+ of_child = of_get_child_by_name(dev->of_node, name);
if (of_child && of_device_is_available(of_child))
channels_mask |= BIT(i);
of_node_put(of_child);
@@ -1890,7 +1889,7 @@ static int rcar_canfd_probe(struct platform_device *pdev)
}
/* Global controller context */
- gpriv = devm_kzalloc(&pdev->dev, sizeof(*gpriv), GFP_KERNEL);
+ gpriv = devm_kzalloc(dev, sizeof(*gpriv), GFP_KERNEL);
if (!gpriv)
return -ENOMEM;
@@ -1899,32 +1898,30 @@ static int rcar_canfd_probe(struct platform_device *pdev)
gpriv->fdmode = fdmode;
gpriv->info = info;
- gpriv->rstc1 = devm_reset_control_get_optional_exclusive(&pdev->dev,
- "rstp_n");
+ gpriv->rstc1 = devm_reset_control_get_optional_exclusive(dev, "rstp_n");
if (IS_ERR(gpriv->rstc1))
- return dev_err_probe(&pdev->dev, PTR_ERR(gpriv->rstc1),
+ return dev_err_probe(dev, PTR_ERR(gpriv->rstc1),
"failed to get rstp_n\n");
- gpriv->rstc2 = devm_reset_control_get_optional_exclusive(&pdev->dev,
- "rstc_n");
+ gpriv->rstc2 = devm_reset_control_get_optional_exclusive(dev, "rstc_n");
if (IS_ERR(gpriv->rstc2))
- return dev_err_probe(&pdev->dev, PTR_ERR(gpriv->rstc2),
+ return dev_err_probe(dev, PTR_ERR(gpriv->rstc2),
"failed to get rstc_n\n");
/* Peripheral clock */
- gpriv->clkp = devm_clk_get(&pdev->dev, "fck");
+ gpriv->clkp = devm_clk_get(dev, "fck");
if (IS_ERR(gpriv->clkp))
- return dev_err_probe(&pdev->dev, PTR_ERR(gpriv->clkp),
+ return dev_err_probe(dev, PTR_ERR(gpriv->clkp),
"cannot get peripheral clock\n");
/* fCAN clock: pick the external clock; if not available, fall back to
 * the CANFD clock
*/
- gpriv->can_clk = devm_clk_get(&pdev->dev, "can_clk");
+ gpriv->can_clk = devm_clk_get(dev, "can_clk");
if (IS_ERR(gpriv->can_clk) || (clk_get_rate(gpriv->can_clk) == 0)) {
- gpriv->can_clk = devm_clk_get(&pdev->dev, "canfd");
+ gpriv->can_clk = devm_clk_get(dev, "canfd");
if (IS_ERR(gpriv->can_clk))
- return dev_err_probe(&pdev->dev, PTR_ERR(gpriv->can_clk),
+ return dev_err_probe(dev, PTR_ERR(gpriv->can_clk),
"cannot get canfd clock\n");
gpriv->fcan = RCANFD_CANFDCLK;
@@ -1947,39 +1944,38 @@ static int rcar_canfd_probe(struct platform_device *pdev)
/* Request IRQ that's common for both channels */
if (info->shared_global_irqs) {
- err = devm_request_irq(&pdev->dev, ch_irq,
+ err = devm_request_irq(dev, ch_irq,
rcar_canfd_channel_interrupt, 0,
"canfd.ch_int", gpriv);
if (err) {
- dev_err(&pdev->dev, "devm_request_irq(%d) failed, error %d\n",
+ dev_err(dev, "devm_request_irq(%d) failed, error %d\n",
ch_irq, err);
goto fail_dev;
}
- err = devm_request_irq(&pdev->dev, g_irq,
- rcar_canfd_global_interrupt, 0,
- "canfd.g_int", gpriv);
+ err = devm_request_irq(dev, g_irq, rcar_canfd_global_interrupt,
+ 0, "canfd.g_int", gpriv);
if (err) {
- dev_err(&pdev->dev, "devm_request_irq(%d) failed, error %d\n",
+ dev_err(dev, "devm_request_irq(%d) failed, error %d\n",
g_irq, err);
goto fail_dev;
}
} else {
- err = devm_request_irq(&pdev->dev, g_recc_irq,
+ err = devm_request_irq(dev, g_recc_irq,
rcar_canfd_global_receive_fifo_interrupt, 0,
"canfd.g_recc", gpriv);
if (err) {
- dev_err(&pdev->dev, "devm_request_irq(%d) failed, error %d\n",
+ dev_err(dev, "devm_request_irq(%d) failed, error %d\n",
g_recc_irq, err);
goto fail_dev;
}
- err = devm_request_irq(&pdev->dev, g_err_irq,
+ err = devm_request_irq(dev, g_err_irq,
rcar_canfd_global_err_interrupt, 0,
"canfd.g_err", gpriv);
if (err) {
- dev_err(&pdev->dev, "devm_request_irq(%d) failed, error %d\n",
+ dev_err(dev, "devm_request_irq(%d) failed, error %d\n",
g_err_irq, err);
goto fail_dev;
}
@@ -1997,14 +1993,14 @@ static int rcar_canfd_probe(struct platform_device *pdev)
/* Enable peripheral clock for register access */
err = clk_prepare_enable(gpriv->clkp);
if (err) {
- dev_err(&pdev->dev,
- "failed to enable peripheral clock, error %d\n", err);
+ dev_err(dev, "failed to enable peripheral clock, error %d\n",
+ err);
goto fail_reset;
}
err = rcar_canfd_reset_controller(gpriv);
if (err) {
- dev_err(&pdev->dev, "reset controller failed\n");
+ dev_err(dev, "reset controller failed\n");
goto fail_clk;
}
@@ -2034,7 +2030,7 @@ static int rcar_canfd_probe(struct platform_device *pdev)
err = readl_poll_timeout((gpriv->base + RCANFD_GSTS), sts,
!(sts & RCANFD_GSTS_GNOPM), 2, 500000);
if (err) {
- dev_err(&pdev->dev, "global operational mode failed\n");
+ dev_err(dev, "global operational mode failed\n");
goto fail_mode;
}
@@ -2045,7 +2041,7 @@ static int rcar_canfd_probe(struct platform_device *pdev)
}
platform_set_drvdata(pdev, gpriv);
- dev_info(&pdev->dev, "global operational state (clk %d, fdmode %d)\n",
+ dev_info(dev, "global operational state (clk %d, fdmode %d)\n",
gpriv->fcan, gpriv->fdmode);
return 0;
@@ -2099,9 +2095,10 @@ static SIMPLE_DEV_PM_OPS(rcar_canfd_pm_ops, rcar_canfd_suspend,
rcar_canfd_resume);
static const __maybe_unused struct of_device_id rcar_canfd_of_table[] = {
+ { .compatible = "renesas,r8a779a0-canfd", .data = &rcar_gen4_hw_info },
{ .compatible = "renesas,rcar-gen3-canfd", .data = &rcar_gen3_hw_info },
+ { .compatible = "renesas,rcar-gen4-canfd", .data = &rcar_gen4_hw_info },
{ .compatible = "renesas,rzg2l-canfd", .data = &rzg2l_hw_info },
- { .compatible = "renesas,r8a779a0-canfd", .data = &r8a779a0_hw_info },
{ }
};
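The reg_v3u() -> reg_gen4() rename reflects that R-Car V3U is only the first of the Gen4 parts; the helper still just picks one of two per-generation constants. For example, using the RCANFD_RFCC() macro above:

/* RCANFD_RFCC(gpriv, x) = reg_gen4(gpriv, 0x00c0, 0x00b8) + 0x04 * (x)
 *
 *   R-Car Gen4:      RCANFD_RFCC(gpriv, 2) = 0x00c0 + 0x08 = 0x00c8
 *   Gen3 / RZ/G2L:   RCANFD_RFCC(gpriv, 2) = 0x00b8 + 0x08 = 0x00c0
 */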
diff --git a/drivers/net/can/sja1000/ems_pci.c b/drivers/net/can/sja1000/ems_pci.c
index 4ab91759a5c6..c56e27223e5f 100644
--- a/drivers/net/can/sja1000/ems_pci.c
+++ b/drivers/net/can/sja1000/ems_pci.c
@@ -3,6 +3,7 @@
* Copyright (C) 2007 Wolfgang Grandegger <wg@grandegger.com>
* Copyright (C) 2008 Markus Plessing <plessing@ems-wuensche.com>
* Copyright (C) 2008 Sebastian Haas <haas@ems-wuensche.com>
+ * Copyright (C) 2023 EMS Dr. Thomas Wuensche
*/
#include <linux/kernel.h>
@@ -19,12 +20,14 @@
#define DRV_NAME "ems_pci"
-MODULE_AUTHOR("Sebastian Haas <haas@ems-wuenche.com>");
+MODULE_AUTHOR("Sebastian Haas <support@ems-wuensche.com>");
+MODULE_AUTHOR("Gerhard Uttenthaler <uttenthaler@ems-wuensche.com>");
MODULE_DESCRIPTION("Socket-CAN driver for EMS CPC-PCI/PCIe/104P CAN cards");
MODULE_LICENSE("GPL v2");
#define EMS_PCI_V1_MAX_CHAN 2
#define EMS_PCI_V2_MAX_CHAN 4
+#define EMS_PCI_V3_MAX_CHAN 4
#define EMS_PCI_MAX_CHAN EMS_PCI_V2_MAX_CHAN
struct ems_pci_card {
@@ -40,8 +43,7 @@ struct ems_pci_card {
#define EMS_PCI_CAN_CLOCK (16000000 / 2)
-/*
- * Register definitions and descriptions are from LinCAN 0.3.3.
+/* Register definitions and descriptions are from LinCAN 0.3.3.
*
* PSB4610 PITA-2 bridge control registers
*/
@@ -52,8 +54,7 @@ struct ems_pci_card {
#define PITA2_MISC 0x1c /* Miscellaneous Register */
#define PITA2_MISC_CONFIG 0x04000000 /* Multiplexed parallel interface */
-/*
- * Register definitions for the PLX 9030
+/* Register definitions for the PLX 9030
*/
#define PLX_ICSR 0x4c /* Interrupt Control/Status register */
#define PLX_ICSR_LINTI1_ENA 0x0001 /* LINTi1 Enable */
@@ -62,8 +63,16 @@ struct ems_pci_card {
#define PLX_ICSR_ENA_CLR (PLX_ICSR_LINTI1_ENA | PLX_ICSR_PCIINT_ENA | \
PLX_ICSR_LINTI1_CLR)
-/*
- * The board configuration is probably following:
+/* Register definitions for the ASIX99100
+ */
+#define ASIX_LINTSR 0x28 /* Interrupt Control/Status register */
+#define ASIX_LINTSR_INT0AC BIT(0) /* Writing 1 enables or clears interrupt */
+
+#define ASIX_LIEMR 0x24 /* Local Interrupt Enable / Miscellaneous Register */
+#define ASIX_LIEMR_L0EINTEN BIT(16) /* Local INT0 input assertion enable */
+#define ASIX_LIEMR_LRST BIT(14) /* Local Reset assert */
+
+/* The board configuration is probably following:
* RX1 is connected to ground.
* TX1 is not connected.
* CLKO is not connected.
@@ -72,23 +81,40 @@ struct ems_pci_card {
*/
#define EMS_PCI_OCR (OCR_TX0_PUSHPULL | OCR_TX1_PUSHPULL)
-/*
- * In the CDR register, you should set CBP to 1.
+/* In the CDR register, you should set CBP to 1.
* You will probably also want to set the clock divider value to 7
* (meaning direct oscillator output) because the second SJA1000 chip
* is driven by the first one CLKOUT output.
*/
#define EMS_PCI_CDR (CDR_CBP | CDR_CLKOUT_MASK)
-#define EMS_PCI_V1_BASE_BAR 1
-#define EMS_PCI_V1_CONF_SIZE 4096 /* size of PITA control area */
-#define EMS_PCI_V2_BASE_BAR 2
-#define EMS_PCI_V2_CONF_SIZE 128 /* size of PLX control area */
-#define EMS_PCI_CAN_BASE_OFFSET 0x400 /* offset where the controllers starts */
-#define EMS_PCI_CAN_CTRL_SIZE 0x200 /* memory size for each controller */
+#define EMS_PCI_V1_BASE_BAR 1
+#define EMS_PCI_V1_CONF_BAR 0
+#define EMS_PCI_V1_CONF_SIZE 4096 /* size of PITA control area */
+#define EMS_PCI_V1_CAN_BASE_OFFSET 0x400 /* offset where the controllers start */
+#define EMS_PCI_V1_CAN_CTRL_SIZE 0x200 /* memory size for each controller */
+
+#define EMS_PCI_V2_BASE_BAR 2
+#define EMS_PCI_V2_CONF_BAR 0
+#define EMS_PCI_V2_CONF_SIZE 128 /* size of PLX control area */
+#define EMS_PCI_V2_CAN_BASE_OFFSET 0x400 /* offset where the controllers start */
+#define EMS_PCI_V2_CAN_CTRL_SIZE 0x200 /* memory size for each controller */
+
+#define EMS_PCI_V3_BASE_BAR 0
+#define EMS_PCI_V3_CONF_BAR 5
+#define EMS_PCI_V3_CONF_SIZE 128 /* size of ASIX control area */
+#define EMS_PCI_V3_CAN_BASE_OFFSET 0x00 /* offset where the controllers start */
+#define EMS_PCI_V3_CAN_CTRL_SIZE 0x100 /* memory size for each controller */
#define EMS_PCI_BASE_SIZE 4096 /* size of controller area */
+#ifndef PCI_VENDOR_ID_ASIX
+#define PCI_VENDOR_ID_ASIX 0x125b
+#define PCI_DEVICE_ID_ASIX_9110 0x9110
+#define PCI_SUBVENDOR_ID_ASIX 0xa000
+#endif
+#define PCI_SUBDEVICE_ID_EMS 0x4010
+
static const struct pci_device_id ems_pci_tbl[] = {
/* CPC-PCI v1 */
{PCI_VENDOR_ID_SIEMENS, 0x2104, PCI_ANY_ID, PCI_ANY_ID,},
@@ -96,12 +122,13 @@ static const struct pci_device_id ems_pci_tbl[] = {
{PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_9030, PCI_VENDOR_ID_PLX, 0x4000},
/* CPC-104P v2 */
{PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_9030, PCI_VENDOR_ID_PLX, 0x4002},
+ /* CPC-PCIe v3 */
+ {PCI_VENDOR_ID_ASIX, PCI_DEVICE_ID_ASIX_9110, PCI_SUBVENDOR_ID_ASIX, PCI_SUBDEVICE_ID_EMS},
{0,}
};
MODULE_DEVICE_TABLE(pci, ems_pci_tbl);
-/*
- * Helper to read internal registers from card logic (not CAN)
+/* Helper to read internal registers from card logic (not CAN)
*/
static u8 ems_pci_v1_readb(struct ems_pci_card *card, unsigned int port)
{
@@ -146,8 +173,25 @@ static void ems_pci_v2_post_irq(const struct sja1000_priv *priv)
writel(PLX_ICSR_ENA_CLR, card->conf_addr + PLX_ICSR);
}
-/*
- * Check if a CAN controller is present at the specified location
+static u8 ems_pci_v3_read_reg(const struct sja1000_priv *priv, int port)
+{
+ return readb(priv->reg_base + port);
+}
+
+static void ems_pci_v3_write_reg(const struct sja1000_priv *priv,
+ int port, u8 val)
+{
+ writeb(val, priv->reg_base + port);
+}
+
+static void ems_pci_v3_post_irq(const struct sja1000_priv *priv)
+{
+ struct ems_pci_card *card = (struct ems_pci_card *)priv->priv;
+
+ writel(ASIX_LINTSR_INT0AC, card->conf_addr + ASIX_LINTSR);
+}
+
+/* Check if a CAN controller is present at the specified location
 * by trying to switch it into PeliCAN mode
*/
static inline int ems_pci_check_chan(const struct sja1000_priv *priv)
@@ -185,10 +229,10 @@ static void ems_pci_del_card(struct pci_dev *pdev)
free_sja1000dev(dev);
}
- if (card->base_addr != NULL)
+ if (card->base_addr)
pci_iounmap(card->pci_dev, card->base_addr);
- if (card->conf_addr != NULL)
+ if (card->conf_addr)
pci_iounmap(card->pci_dev, card->conf_addr);
kfree(card);
@@ -202,8 +246,7 @@ static void ems_pci_card_reset(struct ems_pci_card *card)
writeb(0, card->base_addr);
}
-/*
- * Probe PCI device for EMS CAN signature and register each available
+/* Probe PCI device for EMS CAN signature and register each available
* CAN channel to SJA1000 Socket-CAN subsystem.
*/
static int ems_pci_add_card(struct pci_dev *pdev,
@@ -212,7 +255,7 @@ static int ems_pci_add_card(struct pci_dev *pdev,
struct sja1000_priv *priv;
struct net_device *dev;
struct ems_pci_card *card;
- int max_chan, conf_size, base_bar;
+ int max_chan, conf_size, base_bar, conf_bar;
int err, i;
/* Enabling PCI device */
@@ -222,8 +265,8 @@ static int ems_pci_add_card(struct pci_dev *pdev,
}
/* Allocating card structures to hold addresses, ... */
- card = kzalloc(sizeof(struct ems_pci_card), GFP_KERNEL);
- if (card == NULL) {
+ card = kzalloc(sizeof(*card), GFP_KERNEL);
+ if (!card) {
pci_disable_device(pdev);
return -ENOMEM;
}
@@ -234,27 +277,35 @@ static int ems_pci_add_card(struct pci_dev *pdev,
card->channels = 0;
- if (pdev->vendor == PCI_VENDOR_ID_PLX) {
+ if (pdev->vendor == PCI_VENDOR_ID_ASIX) {
+ card->version = 3; /* CPC-PCI v3 */
+ max_chan = EMS_PCI_V3_MAX_CHAN;
+ base_bar = EMS_PCI_V3_BASE_BAR;
+ conf_bar = EMS_PCI_V3_CONF_BAR;
+ conf_size = EMS_PCI_V3_CONF_SIZE;
+ } else if (pdev->vendor == PCI_VENDOR_ID_PLX) {
card->version = 2; /* CPC-PCI v2 */
max_chan = EMS_PCI_V2_MAX_CHAN;
base_bar = EMS_PCI_V2_BASE_BAR;
+ conf_bar = EMS_PCI_V2_CONF_BAR;
conf_size = EMS_PCI_V2_CONF_SIZE;
} else {
card->version = 1; /* CPC-PCI v1 */
max_chan = EMS_PCI_V1_MAX_CHAN;
base_bar = EMS_PCI_V1_BASE_BAR;
+ conf_bar = EMS_PCI_V1_CONF_BAR;
conf_size = EMS_PCI_V1_CONF_SIZE;
}
/* Remap configuration space and controller memory area */
- card->conf_addr = pci_iomap(pdev, 0, conf_size);
- if (card->conf_addr == NULL) {
+ card->conf_addr = pci_iomap(pdev, conf_bar, conf_size);
+ if (!card->conf_addr) {
err = -ENOMEM;
goto failure_cleanup;
}
card->base_addr = pci_iomap(pdev, base_bar, EMS_PCI_BASE_SIZE);
- if (card->base_addr == NULL) {
+ if (!card->base_addr) {
err = -ENOMEM;
goto failure_cleanup;
}
@@ -276,12 +327,20 @@ static int ems_pci_add_card(struct pci_dev *pdev,
}
}
+ if (card->version == 3) {
+ /* The ASIX bridge holds the CAN controllers in local reset
+ * after bootup until the driver deasserts it
+ */
+ writel(readl(card->conf_addr + ASIX_LIEMR) & ~ASIX_LIEMR_LRST,
+ card->conf_addr + ASIX_LIEMR);
+ }
+
ems_pci_card_reset(card);
/* Detect available channels */
for (i = 0; i < max_chan; i++) {
dev = alloc_sja1000dev(0);
- if (dev == NULL) {
+ if (!dev) {
err = -ENOMEM;
goto failure_cleanup;
}
@@ -292,16 +351,25 @@ static int ems_pci_add_card(struct pci_dev *pdev,
priv->irq_flags = IRQF_SHARED;
dev->irq = pdev->irq;
- priv->reg_base = card->base_addr + EMS_PCI_CAN_BASE_OFFSET
- + (i * EMS_PCI_CAN_CTRL_SIZE);
+
if (card->version == 1) {
priv->read_reg = ems_pci_v1_read_reg;
priv->write_reg = ems_pci_v1_write_reg;
priv->post_irq = ems_pci_v1_post_irq;
- } else {
+ priv->reg_base = card->base_addr + EMS_PCI_V1_CAN_BASE_OFFSET
+ + (i * EMS_PCI_V1_CAN_CTRL_SIZE);
+ } else if (card->version == 2) {
priv->read_reg = ems_pci_v2_read_reg;
priv->write_reg = ems_pci_v2_write_reg;
priv->post_irq = ems_pci_v2_post_irq;
+ priv->reg_base = card->base_addr + EMS_PCI_V2_CAN_BASE_OFFSET
+ + (i * EMS_PCI_V2_CAN_CTRL_SIZE);
+ } else {
+ priv->read_reg = ems_pci_v3_read_reg;
+ priv->write_reg = ems_pci_v3_write_reg;
+ priv->post_irq = ems_pci_v3_post_irq;
+ priv->reg_base = card->base_addr + EMS_PCI_V3_CAN_BASE_OFFSET
+ + (i * EMS_PCI_V3_CAN_CTRL_SIZE);
}
/* Check if channel is present */
@@ -313,20 +381,28 @@ static int ems_pci_add_card(struct pci_dev *pdev,
SET_NETDEV_DEV(dev, &pdev->dev);
dev->dev_id = i;
- if (card->version == 1)
+ if (card->version == 1) {
/* reset int flag of pita */
writel(PITA2_ICR_INT0_EN | PITA2_ICR_INT0,
card->conf_addr + PITA2_ICR);
- else
+ } else if (card->version == 2) {
/* enable IRQ in PLX 9030 */
writel(PLX_ICSR_ENA_CLR,
card->conf_addr + PLX_ICSR);
+ } else {
+ /* Enable IRQ in AX99100 */
+ writel(ASIX_LINTSR_INT0AC, card->conf_addr + ASIX_LINTSR);
+ /* Enable local INT0 input enable */
+ writel(readl(card->conf_addr + ASIX_LIEMR) | ASIX_LIEMR_L0EINTEN,
+ card->conf_addr + ASIX_LIEMR);
+ }
/* Register SJA1000 device */
err = register_sja1000dev(dev);
if (err) {
- dev_err(&pdev->dev, "Registering device failed "
- "(err=%d)\n", err);
+ dev_err(&pdev->dev,
+ "Registering device failed: %pe\n",
+ ERR_PTR(err));
free_sja1000dev(dev);
goto failure_cleanup;
}
@@ -334,7 +410,7 @@ static int ems_pci_add_card(struct pci_dev *pdev,
card->channels++;
dev_info(&pdev->dev, "Channel #%d at 0x%p, irq %d\n",
- i + 1, priv->reg_base, dev->irq);
+ i + 1, priv->reg_base, dev->irq);
} else {
free_sja1000dev(dev);
}
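Summarizing the per-version constants introduced above (a recap of the macros, no new information):

/*              conf BAR / size   CAN BAR   ctrl offset   ctrl size
 * v1 PITA-2:        0 / 4096        1         0x400        0x200
 * v2 PLX 9030:      0 /  128        2         0x400        0x200
 * v3 AX99100:       5 /  128        0         0x000        0x100
 */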
diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
index bf3f0f150199..bfe4caa0c99d 100644
--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
+++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
@@ -30,11 +30,23 @@ mcp251xfd_cmd_prepare_write_reg(const struct mcp251xfd_priv *priv,
last_byte = mcp251xfd_last_byte_set(mask);
len = last_byte - first_byte + 1;
- data = mcp251xfd_spi_cmd_write(priv, write_reg_buf, reg + first_byte);
+ data = mcp251xfd_spi_cmd_write(priv, write_reg_buf, reg + first_byte, len);
val_le32 = cpu_to_le32(val >> BITS_PER_BYTE * first_byte);
memcpy(data, &val_le32, len);
- if (priv->devtype_data.quirks & MCP251XFD_QUIRK_CRC_REG) {
+ if (!(priv->devtype_data.quirks & MCP251XFD_QUIRK_CRC_REG)) {
+ len += sizeof(write_reg_buf->nocrc.cmd);
+ } else if (len == 1) {
+ u16 crc;
+
+ /* CRC */
+ len += sizeof(write_reg_buf->safe.cmd);
+ crc = mcp251xfd_crc16_compute(&write_reg_buf->safe, len);
+ put_unaligned_be16(crc, (void *)write_reg_buf + len);
+
+ /* Total length */
+ len += sizeof(write_reg_buf->safe.crc);
+ } else {
u16 crc;
mcp251xfd_spi_cmd_crc_set_len_in_reg(&write_reg_buf->crc.cmd,
@@ -46,8 +58,6 @@ mcp251xfd_cmd_prepare_write_reg(const struct mcp251xfd_priv *priv,
/* Total length */
len += sizeof(write_reg_buf->crc.crc);
- } else {
- len += sizeof(write_reg_buf->nocrc.cmd);
}
return len;
diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
index 2b0309fedfac..7024ff0cc2c0 100644
--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
+++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
@@ -504,6 +504,11 @@ union mcp251xfd_write_reg_buf {
u8 data[4];
__be16 crc;
} crc;
+ struct __packed {
+ struct mcp251xfd_buf_cmd cmd;
+ u8 data[1];
+ __be16 crc;
+ } safe;
} ____cacheline_aligned;
struct mcp251xfd_tx_obj {
@@ -759,6 +764,13 @@ mcp251xfd_spi_cmd_write_crc_set_addr(struct mcp251xfd_buf_cmd_crc *cmd,
}
static inline void
+mcp251xfd_spi_cmd_write_safe_set_addr(struct mcp251xfd_buf_cmd *cmd,
+ u16 addr)
+{
+ cmd->cmd = cpu_to_be16(MCP251XFD_SPI_INSTRUCTION_WRITE_CRC_SAFE | addr);
+}
+
+static inline void
mcp251xfd_spi_cmd_write_crc(struct mcp251xfd_buf_cmd_crc *cmd,
u16 addr, u16 len)
{
@@ -769,14 +781,20 @@ mcp251xfd_spi_cmd_write_crc(struct mcp251xfd_buf_cmd_crc *cmd,
static inline u8 *
mcp251xfd_spi_cmd_write(const struct mcp251xfd_priv *priv,
union mcp251xfd_write_reg_buf *write_reg_buf,
- u16 addr)
+ u16 addr, u8 len)
{
u8 *data;
if (priv->devtype_data.quirks & MCP251XFD_QUIRK_CRC_REG) {
- mcp251xfd_spi_cmd_write_crc_set_addr(&write_reg_buf->crc.cmd,
- addr);
- data = write_reg_buf->crc.data;
+ if (len == 1) {
+ mcp251xfd_spi_cmd_write_safe_set_addr(&write_reg_buf->safe.cmd,
+ addr);
+ data = write_reg_buf->safe.data;
+ } else {
+ mcp251xfd_spi_cmd_write_crc_set_addr(&write_reg_buf->crc.cmd,
+ addr);
+ data = write_reg_buf->crc.data;
+ }
} else {
mcp251xfd_spi_cmd_write_nocrc(&write_reg_buf->nocrc.cmd,
addr);
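With the new 'safe' member, a single-byte register write under MCP251XFD_QUIRK_CRC_REG is sent as a WRITE_CRC_SAFE frame; the on-wire layout implied by the union and the length arithmetic above is:

/* [ 2-byte cmd: WRITE_CRC_SAFE | addr ][ 1 data byte ][ 2-byte CRC-16 ]
 *
 * where the CRC is computed over the command and data bytes
 * (mcp251xfd_crc16_compute() over sizeof(cmd) + 1 bytes).
 */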
diff --git a/drivers/net/can/usb/esd_usb.c b/drivers/net/can/usb/esd_usb.c
index 42323f5e6f3a..55b36973952d 100644
--- a/drivers/net/can/usb/esd_usb.c
+++ b/drivers/net/can/usb/esd_usb.c
@@ -127,7 +127,15 @@ struct rx_msg {
u8 dlc;
__le32 ts;
__le32 id; /* upper 3 bits contain flags */
- u8 data[8];
+ union {
+ u8 data[8];
+ struct {
+ u8 status; /* CAN Controller Status */
+ u8 ecc; /* Error Capture Register */
+ u8 rec; /* RX Error Counter */
+ u8 tec; /* TX Error Counter */
+ } ev_can_err_ext; /* For ESD_EV_CAN_ERROR_EXT */
+ };
};
struct tx_msg {
@@ -229,51 +237,52 @@ static void esd_usb_rx_event(struct esd_usb_net_priv *priv,
u32 id = le32_to_cpu(msg->msg.rx.id) & ESD_IDMASK;
if (id == ESD_EV_CAN_ERROR_EXT) {
- u8 state = msg->msg.rx.data[0];
- u8 ecc = msg->msg.rx.data[1];
- u8 rxerr = msg->msg.rx.data[2];
- u8 txerr = msg->msg.rx.data[3];
+ u8 state = msg->msg.rx.ev_can_err_ext.status;
+ u8 ecc = msg->msg.rx.ev_can_err_ext.ecc;
+ u8 rxerr = msg->msg.rx.ev_can_err_ext.rec;
+ u8 txerr = msg->msg.rx.ev_can_err_ext.tec;
netdev_dbg(priv->netdev,
"CAN_ERR_EV_EXT: dlc=%#02x state=%02x ecc=%02x rec=%02x tec=%02x\n",
msg->msg.rx.dlc, state, ecc, rxerr, txerr);
skb = alloc_can_err_skb(priv->netdev, &cf);
- if (skb == NULL) {
- stats->rx_dropped++;
- return;
- }
if (state != priv->old_state) {
+ enum can_state tx_state, rx_state;
+ enum can_state new_state = CAN_STATE_ERROR_ACTIVE;
+
priv->old_state = state;
switch (state & ESD_BUSSTATE_MASK) {
case ESD_BUSSTATE_BUSOFF:
- priv->can.state = CAN_STATE_BUS_OFF;
- cf->can_id |= CAN_ERR_BUSOFF;
- priv->can.can_stats.bus_off++;
+ new_state = CAN_STATE_BUS_OFF;
can_bus_off(priv->netdev);
break;
case ESD_BUSSTATE_WARN:
- priv->can.state = CAN_STATE_ERROR_WARNING;
- priv->can.can_stats.error_warning++;
+ new_state = CAN_STATE_ERROR_WARNING;
break;
case ESD_BUSSTATE_ERRPASSIVE:
- priv->can.state = CAN_STATE_ERROR_PASSIVE;
- priv->can.can_stats.error_passive++;
+ new_state = CAN_STATE_ERROR_PASSIVE;
break;
default:
- priv->can.state = CAN_STATE_ERROR_ACTIVE;
+ new_state = CAN_STATE_ERROR_ACTIVE;
txerr = 0;
rxerr = 0;
break;
}
- } else {
+
+ if (new_state != priv->can.state) {
+ tx_state = (txerr >= rxerr) ? new_state : 0;
+ rx_state = (txerr <= rxerr) ? new_state : 0;
+ can_change_state(priv->netdev, cf,
+ tx_state, rx_state);
+ }
+ } else if (skb) {
priv->can.can_stats.bus_error++;
stats->rx_errors++;
- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR |
- CAN_ERR_CNT;
+ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
switch (ecc & SJA1000_ECC_MASK) {
case SJA1000_ECC_BIT:
@@ -286,7 +295,6 @@ static void esd_usb_rx_event(struct esd_usb_net_priv *priv,
cf->data[2] |= CAN_ERR_PROT_STUFF;
break;
default:
- cf->data[3] = ecc & SJA1000_ECC_SEG;
break;
}
@@ -294,20 +302,22 @@ static void esd_usb_rx_event(struct esd_usb_net_priv *priv,
if (!(ecc & SJA1000_ECC_DIR))
cf->data[2] |= CAN_ERR_PROT_TX;
- if (priv->can.state == CAN_STATE_ERROR_WARNING ||
- priv->can.state == CAN_STATE_ERROR_PASSIVE) {
- cf->data[1] = (txerr > rxerr) ?
- CAN_ERR_CRTL_TX_PASSIVE :
- CAN_ERR_CRTL_RX_PASSIVE;
- }
- cf->data[6] = txerr;
- cf->data[7] = rxerr;
+ /* bit stream position in the CAN frame where the error was detected */
+ cf->data[3] = ecc & SJA1000_ECC_SEG;
}
priv->bec.txerr = txerr;
priv->bec.rxerr = rxerr;
- netif_rx(skb);
+ if (skb) {
+ cf->can_id |= CAN_ERR_CNT;
+ cf->data[6] = txerr;
+ cf->data[7] = rxerr;
+
+ netif_rx(skb);
+ } else {
+ stats->rx_dropped++;
+ }
}
}
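The tx_state/rx_state pair handed to can_change_state() attributes the new state to whichever error counter dominates; with illustrative counter values:

/* txerr = 136, rxerr =  40  -> tx_state = new_state, rx_state = 0
 * txerr =  40, rxerr = 136  -> rx_state = new_state, tx_state = 0
 * txerr == rxerr            -> both sides set to new_state
 */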
diff --git a/drivers/net/can/usb/peak_usb/pcan_usb.c b/drivers/net/can/usb/peak_usb/pcan_usb.c
index 687dd542f7f6..b211b6e283a2 100644
--- a/drivers/net/can/usb/peak_usb/pcan_usb.c
+++ b/drivers/net/can/usb/peak_usb/pcan_usb.c
@@ -9,10 +9,11 @@
* Many thanks to Klaus Hitschler <klaus.hitschler@gmx.de>
*/
#include <asm/unaligned.h>
+
+#include <linux/ethtool.h>
+#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/usb.h>
-#include <linux/module.h>
-#include <linux/ethtool.h>
#include <linux/can.h>
#include <linux/can/dev.h>
@@ -381,23 +382,42 @@ static int pcan_usb_get_serial(struct peak_usb_device *dev, u32 *serial_number)
}
/*
- * read device id from device
+ * read can channel id from device
*/
-static int pcan_usb_get_device_id(struct peak_usb_device *dev, u32 *device_id)
+static int pcan_usb_get_can_channel_id(struct peak_usb_device *dev, u32 *can_ch_id)
{
u8 args[PCAN_USB_CMD_ARGS_LEN];
int err;
err = pcan_usb_wait_rsp(dev, PCAN_USB_CMD_DEVID, PCAN_USB_GET, args);
if (err)
- netdev_err(dev->netdev, "getting device id failure: %d\n", err);
+ netdev_err(dev->netdev, "getting can channel id failure: %d\n", err);
else
- *device_id = args[0];
+ *can_ch_id = args[0];
return err;
}
+/* set a new CAN channel id in the flash memory of the device */
+static int pcan_usb_set_can_channel_id(struct peak_usb_device *dev, u32 can_ch_id)
+{
+ u8 args[PCAN_USB_CMD_ARGS_LEN];
+
+ /* this kind of device supports 8-bit values only */
+ if (can_ch_id > U8_MAX)
+ return -EINVAL;
+
+ /* during the flash process the device disconnects for ~1.25 s:
+ * prohibit access while the interface is UP
+ */
+ if (dev->netdev->flags & IFF_UP)
+ return -EBUSY;
+
+ args[0] = can_ch_id;
+ return pcan_usb_send_cmd(dev, PCAN_USB_CMD_DEVID, PCAN_USB_SET, args);
+}
+
/*
* update current time ref with received timestamp
*/
@@ -963,9 +983,18 @@ static int pcan_usb_set_phys_id(struct net_device *netdev,
return err;
}
+/* This device only handles 8-bit CAN channel id. */
+static int pcan_usb_get_eeprom_len(struct net_device *netdev)
+{
+ return sizeof(u8);
+}
+
static const struct ethtool_ops pcan_usb_ethtool_ops = {
.set_phys_id = pcan_usb_set_phys_id,
.get_ts_info = pcan_get_ts_info,
+ .get_eeprom_len = pcan_usb_get_eeprom_len,
+ .get_eeprom = peak_usb_get_eeprom,
+ .set_eeprom = peak_usb_set_eeprom,
};
/*
@@ -1017,7 +1046,8 @@ const struct peak_usb_adapter pcan_usb = {
.dev_init = pcan_usb_init,
.dev_set_bus = pcan_usb_write_mode,
.dev_set_bittiming = pcan_usb_set_bittiming,
- .dev_get_device_id = pcan_usb_get_device_id,
+ .dev_get_can_channel_id = pcan_usb_get_can_channel_id,
+ .dev_set_can_channel_id = pcan_usb_set_can_channel_id,
.dev_decode_buf = pcan_usb_decode_buf,
.dev_encode_msg = pcan_usb_encode_msg,
.dev_start = pcan_usb_start,
diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.c b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
index 1d996d3320fe..d881e1d30183 100644
--- a/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
@@ -8,13 +8,15 @@
*
* Many thanks to Klaus Hitschler <klaus.hitschler@gmx.de>
*/
+#include <linux/device.h>
+#include <linux/ethtool.h>
#include <linux/init.h>
-#include <linux/signal.h>
-#include <linux/slab.h>
#include <linux/module.h>
#include <linux/netdevice.h>
+#include <linux/signal.h>
+#include <linux/slab.h>
+#include <linux/sysfs.h>
#include <linux/usb.h>
-#include <linux/ethtool.h>
#include <linux/can.h>
#include <linux/can/dev.h>
@@ -53,6 +55,26 @@ static const struct usb_device_id peak_usb_table[] = {
MODULE_DEVICE_TABLE(usb, peak_usb_table);
+static ssize_t can_channel_id_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct net_device *netdev = to_net_dev(dev);
+ struct peak_usb_device *peak_dev = netdev_priv(netdev);
+
+ return sysfs_emit(buf, "%08X\n", peak_dev->can_channel_id);
+}
+static DEVICE_ATTR_RO(can_channel_id);
+
+/* mutable to avoid cast in attribute_group */
+static struct attribute *peak_usb_sysfs_attrs[] = {
+ &dev_attr_can_channel_id.attr,
+ NULL,
+};
+
+static const struct attribute_group peak_usb_sysfs_group = {
+ .name = "peak_usb",
+ .attrs = peak_usb_sysfs_attrs,
+};
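
Since the group is attached through the netdev's sysfs_groups below, the read-only attribute surfaces under the network interface's own sysfs directory, e.g. /sys/class/net/can0/peak_usb/can_channel_id (the interface name can0 is assumed here), and prints the cached id in the %08X format used above.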
+
/*
* dump memory
*/
@@ -808,6 +830,86 @@ static const struct net_device_ops peak_usb_netdev_ops = {
.ndo_change_mtu = can_change_mtu,
};
+/* CAN-USB devices generally handle 32-bit CAN channel IDs.
+ * In case one doesn't, it has to override this function.
+ */
+int peak_usb_get_eeprom_len(struct net_device *netdev)
+{
+ return sizeof(u32);
+}
+
+/* Every CAN-USB device exports the dev_get_can_channel_id() operation. It is used
+ * here to fill the data buffer with the user defined CAN channel ID.
+ */
+int peak_usb_get_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *eeprom, u8 *data)
+{
+ struct peak_usb_device *dev = netdev_priv(netdev);
+ u32 ch_id;
+ __le32 ch_id_le;
+ int err;
+
+ err = dev->adapter->dev_get_can_channel_id(dev, &ch_id);
+ if (err)
+ return err;
+
+ /* ethtool operates on individual bytes. The byte order of the CAN
+ * channel id in memory depends on the kernel architecture. We
+ * convert the CAN channel id back to the native byte order of the PEAK
+ * device itself to ensure that the order is consistent for all
+ * host architectures.
+ */
+ ch_id_le = cpu_to_le32(ch_id);
+ memcpy(data, (u8 *)&ch_id_le + eeprom->offset, eeprom->len);
+
+ /* update cached value */
+ dev->can_channel_id = ch_id;
+ return err;
+}
+
+/* Every CAN-USB device exports the dev_get_can_channel_id()/dev_set_can_channel_id()
+ * operations. They are used here to set the new user defined CAN channel ID.
+ */
+int peak_usb_set_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *eeprom, u8 *data)
+{
+ struct peak_usb_device *dev = netdev_priv(netdev);
+ u32 ch_id;
+ __le32 ch_id_le;
+ int err;
+
+ /* first, read the current user defined CAN channel ID */
+ err = dev->adapter->dev_get_can_channel_id(dev, &ch_id);
+ if (err) {
+ netdev_err(netdev, "Failed to init CAN channel id (err %d)\n", err);
+ return err;
+ }
+
+ /* Update the value with the user-given bytes.
+ * ethtool operates on individual bytes. The byte order of the CAN
+ * channel ID in memory depends on the kernel architecture. We
+ * convert the CAN channel ID back to the native byte order of the PEAK
+ * device itself to ensure that the order is consistent for all
+ * host architectures.
+ */
+ ch_id_le = cpu_to_le32(ch_id);
+ memcpy((u8 *)&ch_id_le + eeprom->offset, data, eeprom->len);
+ ch_id = le32_to_cpu(ch_id_le);
+
+ /* flash the new value now */
+ err = dev->adapter->dev_set_can_channel_id(dev, ch_id);
+ if (err) {
+ netdev_err(netdev, "Failed to write new CAN channel id (err %d)\n",
+ err);
+ return err;
+ }
+
+ /* update cached value with the new one */
+ dev->can_channel_id = ch_id;
+
+ return 0;
+}
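
These helpers back the standard ETHTOOL_GEEPROM/ETHTOOL_SEEPROM ioctls, so the channel id is reachable with plain ethtool -e / ethtool -E. A minimal user-space sketch of the read path, assuming one of the 32-bit adapters and an interface named can0 (the plain PCAN-USB reports a 1-byte EEPROM instead):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
	/* ethtool_eeprom ends in a flexible array; reserve 4 data bytes */
	struct {
		struct ethtool_eeprom ee;
		unsigned char data[4];
	} req = { .ee = { .cmd = ETHTOOL_GEEPROM, .offset = 0, .len = 4 } };
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "can0", IFNAMSIZ - 1);
	ifr.ifr_data = (char *)&req;

	if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
		perror("SIOCETHTOOL");
		return 1;
	}
	/* the device stores the id little endian; print it naturally */
	printf("can_channel_id: 0x%02x%02x%02x%02x\n",
	       req.data[3], req.data[2], req.data[1], req.data[0]);
	close(fd);
	return 0;
}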
+
int pcan_get_ts_info(struct net_device *dev, struct ethtool_ts_info *info)
{
info->so_timestamping =
@@ -881,6 +983,9 @@ static int peak_usb_create_dev(const struct peak_usb_adapter *peak_usb_adapter,
/* add ethtool support */
netdev->ethtool_ops = peak_usb_adapter->ethtool_ops;
+ /* register peak_usb sysfs files */
+ netdev->sysfs_groups[0] = &peak_usb_sysfs_group;
+
init_usb_anchor(&dev->rx_submitted);
init_usb_anchor(&dev->tx_submitted);
@@ -921,12 +1026,11 @@ static int peak_usb_create_dev(const struct peak_usb_adapter *peak_usb_adapter,
goto adap_dev_free;
}
- /* get device number early */
- if (dev->adapter->dev_get_device_id)
- dev->adapter->dev_get_device_id(dev, &dev->device_number);
+ /* get CAN channel id early */
+ dev->adapter->dev_get_can_channel_id(dev, &dev->can_channel_id);
- netdev_info(netdev, "attached to %s channel %u (device %u)\n",
- peak_usb_adapter->name, ctrl_idx, dev->device_number);
+ netdev_info(netdev, "attached to %s channel %u (device 0x%08X)\n",
+ peak_usb_adapter->name, ctrl_idx, dev->can_channel_id);
return 0;
@@ -964,7 +1068,7 @@ static void peak_usb_disconnect(struct usb_interface *intf)
dev->state &= ~PCAN_USB_STATE_CONNECTED;
strscpy(name, netdev->name, IFNAMSIZ);
- unregister_netdev(netdev);
+ unregister_candev(netdev);
kfree(dev->cmd_buf);
dev->next_siblings = NULL;
diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.h b/drivers/net/can/usb/peak_usb/pcan_usb_core.h
index f6bdd8b3f290..980e315186cf 100644
--- a/drivers/net/can/usb/peak_usb/pcan_usb_core.h
+++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.h
@@ -60,7 +60,8 @@ struct peak_usb_adapter {
int (*dev_set_data_bittiming)(struct peak_usb_device *dev,
struct can_bittiming *bt);
int (*dev_set_bus)(struct peak_usb_device *dev, u8 onoff);
- int (*dev_get_device_id)(struct peak_usb_device *dev, u32 *device_id);
+ int (*dev_get_can_channel_id)(struct peak_usb_device *dev, u32 *can_ch_id);
+ int (*dev_set_can_channel_id)(struct peak_usb_device *dev, u32 can_ch_id);
int (*dev_decode_buf)(struct peak_usb_device *dev, struct urb *urb);
int (*dev_encode_msg)(struct peak_usb_device *dev, struct sk_buff *skb,
u8 *obuf, size_t *size);
@@ -122,7 +123,8 @@ struct peak_usb_device {
u8 *cmd_buf;
struct usb_anchor rx_submitted;
- u32 device_number;
+ /* equivalent to the device ID in the Windows API */
+ u32 can_channel_id;
u8 device_rev;
u8 ep_msg_in;
@@ -147,4 +149,10 @@ void peak_usb_async_complete(struct urb *urb);
void peak_usb_restart_complete(struct peak_usb_device *dev);
int pcan_get_ts_info(struct net_device *dev, struct ethtool_ts_info *info);
+/* common 32-bit CAN channel ID ethtool management */
+int peak_usb_get_eeprom_len(struct net_device *netdev);
+int peak_usb_get_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *eeprom, u8 *data);
+int peak_usb_set_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *eeprom, u8 *data);
#endif
diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
index 2ea1500df393..4d85b29a17b7 100644
--- a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
+++ b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
@@ -4,10 +4,10 @@
*
* Copyright (C) 2013-2014 Stephane Grosjean <s.grosjean@peak-system.com>
*/
+#include <linux/ethtool.h>
+#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/usb.h>
-#include <linux/module.h>
-#include <linux/ethtool.h>
#include <linux/can.h>
#include <linux/can/dev.h>
@@ -147,6 +147,15 @@ struct __packed pcan_ufd_ovr_msg {
u8 unused[3];
};
+#define PCAN_UFD_CMD_DEVID_SET 0x81
+
+struct __packed pcan_ufd_device_id {
+ __le16 opcode_channel;
+
+ u16 unused;
+ __le32 device_id;
+};
+
static inline int pufd_omsg_get_channel(struct pcan_ufd_ovr_msg *om)
{
return om->channel & 0xf;
@@ -234,6 +243,15 @@ static int pcan_usb_fd_send_cmd(struct peak_usb_device *dev, void *cmd_tail)
return err;
}
+static int pcan_usb_fd_read_fwinfo(struct peak_usb_device *dev,
+ struct pcan_ufd_fw_info *fw_info)
+{
+ return pcan_usb_pro_send_req(dev, PCAN_USBPRO_REQ_INFO,
+ PCAN_USBPRO_INFO_FW,
+ fw_info,
+ sizeof(*fw_info));
+}
+
/* build the commands list in the given buffer, to enter operational mode */
static int pcan_usb_fd_build_restart_cmd(struct peak_usb_device *dev, u8 *buf)
{
@@ -434,6 +452,34 @@ static int pcan_usb_fd_set_bittiming_fast(struct peak_usb_device *dev,
return pcan_usb_fd_send_cmd(dev, ++cmd);
}
+/* read user CAN channel id from device */
+static int pcan_usb_fd_get_can_channel_id(struct peak_usb_device *dev,
+ u32 *can_ch_id)
+{
+ int err;
+ struct pcan_usb_fd_if *usb_if = pcan_usb_fd_dev_if(dev);
+
+ err = pcan_usb_fd_read_fwinfo(dev, &usb_if->fw_info);
+ if (err)
+ return err;
+
+ *can_ch_id = le32_to_cpu(usb_if->fw_info.dev_id[dev->ctrl_idx]);
+ return err;
+}
+
+/* set a new CAN channel id in the flash memory of the device */
+static int pcan_usb_fd_set_can_channel_id(struct peak_usb_device *dev, u32 can_ch_id)
+{
+ struct pcan_ufd_device_id *cmd = pcan_usb_fd_cmd_buffer(dev);
+
+ cmd->opcode_channel = pucan_cmd_opcode_channel(dev->ctrl_idx,
+ PCAN_UFD_CMD_DEVID_SET);
+ cmd->device_id = cpu_to_le32(can_ch_id);
+
+ /* send the command */
+ return pcan_usb_fd_send_cmd(dev, ++cmd);
+}
+
/* handle restart but in asynchronously way
* (uses PCAN-USB Pro code to complete asynchronous request)
*/
@@ -907,10 +953,7 @@ static int pcan_usb_fd_init(struct peak_usb_device *dev)
fw_info = &pdev->usb_if->fw_info;
- err = pcan_usb_pro_send_req(dev, PCAN_USBPRO_REQ_INFO,
- PCAN_USBPRO_INFO_FW,
- fw_info,
- sizeof(*fw_info));
+ err = pcan_usb_fd_read_fwinfo(dev, fw_info);
if (err) {
dev_err(dev->netdev->dev.parent,
"unable to read %s firmware info (err %d)\n",
@@ -972,7 +1015,7 @@ static int pcan_usb_fd_init(struct peak_usb_device *dev)
}
pdev->usb_if->dev[dev->ctrl_idx] = dev;
- dev->device_number =
+ dev->can_channel_id =
le32_to_cpu(pdev->usb_if->fw_info.dev_id[dev->ctrl_idx]);
/* if vendor rsp is of type 2, then it contains EP numbers to
@@ -1081,6 +1124,9 @@ static int pcan_usb_fd_set_phys_id(struct net_device *netdev,
static const struct ethtool_ops pcan_usb_fd_ethtool_ops = {
.set_phys_id = pcan_usb_fd_set_phys_id,
.get_ts_info = pcan_get_ts_info,
+ .get_eeprom_len = peak_usb_get_eeprom_len,
+ .get_eeprom = peak_usb_get_eeprom,
+ .set_eeprom = peak_usb_set_eeprom,
};
/* describes the PCAN-USB FD adapter */
@@ -1148,6 +1194,8 @@ const struct peak_usb_adapter pcan_usb_fd = {
.dev_set_bus = pcan_usb_fd_set_bus,
.dev_set_bittiming = pcan_usb_fd_set_bittiming_slow,
.dev_set_data_bittiming = pcan_usb_fd_set_bittiming_fast,
+ .dev_get_can_channel_id = pcan_usb_fd_get_can_channel_id,
+ .dev_set_can_channel_id = pcan_usb_fd_set_can_channel_id,
.dev_decode_buf = pcan_usb_fd_decode_buf,
.dev_start = pcan_usb_fd_start,
.dev_stop = pcan_usb_fd_stop,
@@ -1222,6 +1270,8 @@ const struct peak_usb_adapter pcan_usb_chip = {
.dev_set_bus = pcan_usb_fd_set_bus,
.dev_set_bittiming = pcan_usb_fd_set_bittiming_slow,
.dev_set_data_bittiming = pcan_usb_fd_set_bittiming_fast,
+ .dev_get_can_channel_id = pcan_usb_fd_get_can_channel_id,
+ .dev_set_can_channel_id = pcan_usb_fd_set_can_channel_id,
.dev_decode_buf = pcan_usb_fd_decode_buf,
.dev_start = pcan_usb_fd_start,
.dev_stop = pcan_usb_fd_stop,
@@ -1296,6 +1346,8 @@ const struct peak_usb_adapter pcan_usb_pro_fd = {
.dev_set_bus = pcan_usb_fd_set_bus,
.dev_set_bittiming = pcan_usb_fd_set_bittiming_slow,
.dev_set_data_bittiming = pcan_usb_fd_set_bittiming_fast,
+ .dev_get_can_channel_id = pcan_usb_fd_get_can_channel_id,
+ .dev_set_can_channel_id = pcan_usb_fd_set_can_channel_id,
.dev_decode_buf = pcan_usb_fd_decode_buf,
.dev_start = pcan_usb_fd_start,
.dev_stop = pcan_usb_fd_stop,
@@ -1370,6 +1422,8 @@ const struct peak_usb_adapter pcan_usb_x6 = {
.dev_set_bus = pcan_usb_fd_set_bus,
.dev_set_bittiming = pcan_usb_fd_set_bittiming_slow,
.dev_set_data_bittiming = pcan_usb_fd_set_bittiming_fast,
+ .dev_get_can_channel_id = pcan_usb_fd_get_can_channel_id,
+ .dev_set_can_channel_id = pcan_usb_fd_set_can_channel_id,
.dev_decode_buf = pcan_usb_fd_decode_buf,
.dev_start = pcan_usb_fd_start,
.dev_stop = pcan_usb_fd_stop,
diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_pro.c b/drivers/net/can/usb/peak_usb/pcan_usb_pro.c
index 5d8f6a40bb2c..f736196383ac 100644
--- a/drivers/net/can/usb/peak_usb/pcan_usb_pro.c
+++ b/drivers/net/can/usb/peak_usb/pcan_usb_pro.c
@@ -6,10 +6,10 @@
* Copyright (C) 2003-2011 PEAK System-Technik GmbH
* Copyright (C) 2011-2012 Stephane Grosjean <s.grosjean@peak-system.com>
*/
+#include <linux/ethtool.h>
+#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/usb.h>
-#include <linux/module.h>
-#include <linux/ethtool.h>
#include <linux/can.h>
#include <linux/can/dev.h>
@@ -76,6 +76,7 @@ static u16 pcan_usb_pro_sizeof_rec[256] = {
[PCAN_USBPRO_SETFILTR] = sizeof(struct pcan_usb_pro_filter),
[PCAN_USBPRO_SETTS] = sizeof(struct pcan_usb_pro_setts),
[PCAN_USBPRO_GETDEVID] = sizeof(struct pcan_usb_pro_devid),
+ [PCAN_USBPRO_SETDEVID] = sizeof(struct pcan_usb_pro_devid),
[PCAN_USBPRO_SETLED] = sizeof(struct pcan_usb_pro_setled),
[PCAN_USBPRO_RXMSG8] = sizeof(struct pcan_usb_pro_rxmsg),
[PCAN_USBPRO_RXMSG4] = sizeof(struct pcan_usb_pro_rxmsg) - 4,
@@ -149,6 +150,7 @@ static int pcan_msg_add_rec(struct pcan_usb_pro_msg *pm, int id, ...)
case PCAN_USBPRO_SETBTR:
case PCAN_USBPRO_GETDEVID:
+ case PCAN_USBPRO_SETDEVID:
*pc++ = va_arg(ap, int);
pc += 2;
*(__le32 *)pc = cpu_to_le32(va_arg(ap, u32));
@@ -419,8 +421,8 @@ static int pcan_usb_pro_set_led(struct peak_usb_device *dev, u8 mode,
return pcan_usb_pro_send_cmd(dev, &um);
}
-static int pcan_usb_pro_get_device_id(struct peak_usb_device *dev,
- u32 *device_id)
+static int pcan_usb_pro_get_can_channel_id(struct peak_usb_device *dev,
+ u32 *can_ch_id)
{
struct pcan_usb_pro_devid *pdn;
struct pcan_usb_pro_msg um;
@@ -439,11 +441,23 @@ static int pcan_usb_pro_get_device_id(struct peak_usb_device *dev,
return err;
pdn = (struct pcan_usb_pro_devid *)pc;
- *device_id = le32_to_cpu(pdn->dev_num);
+ *can_ch_id = le32_to_cpu(pdn->dev_num);
return err;
}
+static int pcan_usb_pro_set_can_channel_id(struct peak_usb_device *dev,
+ u32 can_ch_id)
+{
+ struct pcan_usb_pro_msg um;
+
+ pcan_msg_init_empty(&um, dev->cmd_buf, PCAN_USB_MAX_CMD_LEN);
+ pcan_msg_add_rec(&um, PCAN_USBPRO_SETDEVID, dev->ctrl_idx,
+ can_ch_id);
+
+ return pcan_usb_pro_send_cmd(dev, &um);
+}
+
static int pcan_usb_pro_set_bittiming(struct peak_usb_device *dev,
struct can_bittiming *bt)
{
@@ -1023,6 +1037,9 @@ static int pcan_usb_pro_set_phys_id(struct net_device *netdev,
static const struct ethtool_ops pcan_usb_pro_ethtool_ops = {
.set_phys_id = pcan_usb_pro_set_phys_id,
.get_ts_info = pcan_get_ts_info,
+ .get_eeprom_len = peak_usb_get_eeprom_len,
+ .get_eeprom = peak_usb_get_eeprom,
+ .set_eeprom = peak_usb_set_eeprom,
};
/*
@@ -1076,7 +1093,8 @@ const struct peak_usb_adapter pcan_usb_pro = {
.dev_free = pcan_usb_pro_free,
.dev_set_bus = pcan_usb_pro_set_bus,
.dev_set_bittiming = pcan_usb_pro_set_bittiming,
- .dev_get_device_id = pcan_usb_pro_get_device_id,
+ .dev_get_can_channel_id = pcan_usb_pro_get_can_channel_id,
+ .dev_set_can_channel_id = pcan_usb_pro_set_can_channel_id,
.dev_decode_buf = pcan_usb_pro_decode_buf,
.dev_encode_msg = pcan_usb_pro_encode_msg,
.dev_start = pcan_usb_pro_start,
diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_pro.h b/drivers/net/can/usb/peak_usb/pcan_usb_pro.h
index a34e0fc021c9..28e740af905d 100644
--- a/drivers/net/can/usb/peak_usb/pcan_usb_pro.h
+++ b/drivers/net/can/usb/peak_usb/pcan_usb_pro.h
@@ -62,6 +62,7 @@ struct __packed pcan_usb_pro_fwinfo {
#define PCAN_USBPRO_SETBTR 0x02
#define PCAN_USBPRO_SETBUSACT 0x04
#define PCAN_USBPRO_SETSILENT 0x05
+#define PCAN_USBPRO_SETDEVID 0x06
#define PCAN_USBPRO_SETFILTR 0x0a
#define PCAN_USBPRO_SETTS 0x10
#define PCAN_USBPRO_GETDEVID 0x12
diff --git a/drivers/net/dsa/lan9303-core.c b/drivers/net/dsa/lan9303-core.c
index 2e270b479143..cbe831875347 100644
--- a/drivers/net/dsa/lan9303-core.c
+++ b/drivers/net/dsa/lan9303-core.c
@@ -15,6 +15,9 @@
#include "lan9303.h"
+/* For the LAN9303 and LAN9354, only port 0 is an XMII port. */
+#define IS_PORT_XMII(port) ((port) == 0)
+
#define LAN9303_NUM_PORTS 3
/* 13.2 System Control and Status Registers
@@ -50,6 +53,9 @@
#define LAN9303_MANUAL_FC_1 0x68
#define LAN9303_MANUAL_FC_2 0x69
#define LAN9303_MANUAL_FC_0 0x6a
+# define LAN9303_BP_EN BIT(6)
+# define LAN9303_RX_FC_EN BIT(2)
+# define LAN9303_TX_FC_EN BIT(1)
#define LAN9303_SWITCH_CSR_DATA 0x6b
#define LAN9303_SWITCH_CSR_CMD 0x6c
#define LAN9303_SWITCH_CSR_CMD_BUSY BIT(31)
@@ -225,6 +231,13 @@ const struct regmap_access_table lan9303_register_set = {
};
EXPORT_SYMBOL(lan9303_register_set);
+/* Flow Control registers indexed by port number */
+static unsigned int flow_ctl_reg[] = {
+ LAN9303_MANUAL_FC_0,
+ LAN9303_MANUAL_FC_1,
+ LAN9303_MANUAL_FC_2
+};
+
static int lan9303_read(struct regmap *regmap, unsigned int offset, u32 *reg)
{
int ret, i;
@@ -902,6 +915,7 @@ static int lan9303_setup(struct dsa_switch *ds)
{
struct lan9303 *chip = ds->priv;
int ret;
+ u32 reg;
/* Make sure that port 0 is the cpu port */
if (!dsa_is_cpu_port(ds, 0)) {
@@ -909,6 +923,17 @@ static int lan9303_setup(struct dsa_switch *ds)
return -EINVAL;
}
+ /* Virtual Phy: Remove Turbo 200Mbit mode */
+ ret = lan9303_read(chip->regmap, LAN9303_VIRT_SPECIAL_CTRL, &reg);
+ if (ret)
+ return ret;
+
+ /* Clear the TURBO Mode bit if it was set. */
+ if (reg & LAN9303_VIRT_SPECIAL_TURBO) {
+ reg &= ~LAN9303_VIRT_SPECIAL_TURBO;
+ regmap_write(chip->regmap, LAN9303_VIRT_SPECIAL_CTRL, reg);
+ }
+
ret = lan9303_setup_tagging(chip);
if (ret)
dev_err(chip->dev, "failed to setup port tagging %d\n", ret);
@@ -1049,42 +1074,6 @@ static int lan9303_phy_write(struct dsa_switch *ds, int phy, int regnum,
return chip->ops->phy_write(chip, phy, regnum, val);
}
-static void lan9303_adjust_link(struct dsa_switch *ds, int port,
- struct phy_device *phydev)
-{
- struct lan9303 *chip = ds->priv;
- int ctl;
-
- if (!phy_is_pseudo_fixed_link(phydev))
- return;
-
- ctl = lan9303_phy_read(ds, port, MII_BMCR);
-
- ctl &= ~BMCR_ANENABLE;
-
- if (phydev->speed == SPEED_100)
- ctl |= BMCR_SPEED100;
- else if (phydev->speed == SPEED_10)
- ctl &= ~BMCR_SPEED100;
- else
- dev_err(ds->dev, "unsupported speed: %d\n", phydev->speed);
-
- if (phydev->duplex == DUPLEX_FULL)
- ctl |= BMCR_FULLDPLX;
- else
- ctl &= ~BMCR_FULLDPLX;
-
- lan9303_phy_write(ds, port, MII_BMCR, ctl);
-
- if (port == chip->phy_addr_base) {
- /* Virtual Phy: Remove Turbo 200Mbit mode */
- lan9303_read(chip->regmap, LAN9303_VIRT_SPECIAL_CTRL, &ctl);
-
- ctl &= ~LAN9303_VIRT_SPECIAL_TURBO;
- regmap_write(chip->regmap, LAN9303_VIRT_SPECIAL_CTRL, ctl);
- }
-}
-
static int lan9303_port_enable(struct dsa_switch *ds, int port,
struct phy_device *phy)
{
@@ -1281,26 +1270,96 @@ static int lan9303_port_mdb_del(struct dsa_switch *ds, int port,
return 0;
}
+static void lan9303_phylink_get_caps(struct dsa_switch *ds, int port,
+ struct phylink_config *config)
+{
+ struct lan9303 *chip = ds->priv;
+
+ dev_dbg(chip->dev, "%s(%d) entered.\n", __func__, port);
+
+ config->mac_capabilities = MAC_10 | MAC_100 | MAC_ASYM_PAUSE |
+ MAC_SYM_PAUSE;
+
+ if (port == 0) {
+ __set_bit(PHY_INTERFACE_MODE_RMII,
+ config->supported_interfaces);
+ __set_bit(PHY_INTERFACE_MODE_MII,
+ config->supported_interfaces);
+ } else {
+ __set_bit(PHY_INTERFACE_MODE_INTERNAL,
+ config->supported_interfaces);
+ /* Compatibility for phylib's default interface type when the
+ * phy-mode property is absent
+ */
+ __set_bit(PHY_INTERFACE_MODE_GMII,
+ config->supported_interfaces);
+ }
+
+ /* This driver does not make use of the speed, duplex, pause or the
+ * advertisement in its mac_config, so it is safe to mark this driver
+ * as non-legacy.
+ */
+ config->legacy_pre_march2020 = false;
+}
+
+static void lan9303_phylink_mac_link_up(struct dsa_switch *ds, int port,
+ unsigned int mode,
+ phy_interface_t interface,
+ struct phy_device *phydev, int speed,
+ int duplex, bool tx_pause,
+ bool rx_pause)
+{
+ struct lan9303 *chip = ds->priv;
+ u32 ctl;
+ u32 reg;
+
+ /* On this device, we are only interested in doing something here if
+ * this is the xMII port. All other ports are 10/100 PHYs using MDIO
+ * to control their link settings.
+ */
+ if (!IS_PORT_XMII(port))
+ return;
+
+ /* Disable auto-negotiation and force the speed/duplex settings. */
+ ctl = lan9303_phy_read(ds, port, MII_BMCR);
+ ctl &= ~(BMCR_ANENABLE | BMCR_SPEED100 | BMCR_FULLDPLX);
+ if (speed == SPEED_100)
+ ctl |= BMCR_SPEED100;
+ if (duplex == DUPLEX_FULL)
+ ctl |= BMCR_FULLDPLX;
+ lan9303_phy_write(ds, port, MII_BMCR, ctl);
+
+ /* Force the flow control settings. */
+ lan9303_read(chip->regmap, flow_ctl_reg[port], &reg);
+ reg &= ~(LAN9303_BP_EN | LAN9303_RX_FC_EN | LAN9303_TX_FC_EN);
+ if (rx_pause)
+ reg |= (LAN9303_RX_FC_EN | LAN9303_BP_EN);
+ if (tx_pause)
+ reg |= LAN9303_TX_FC_EN;
+ regmap_write(chip->regmap, flow_ctl_reg[port], reg);
+}
+
static const struct dsa_switch_ops lan9303_switch_ops = {
- .get_tag_protocol = lan9303_get_tag_protocol,
- .setup = lan9303_setup,
- .get_strings = lan9303_get_strings,
- .phy_read = lan9303_phy_read,
- .phy_write = lan9303_phy_write,
- .adjust_link = lan9303_adjust_link,
- .get_ethtool_stats = lan9303_get_ethtool_stats,
- .get_sset_count = lan9303_get_sset_count,
- .port_enable = lan9303_port_enable,
- .port_disable = lan9303_port_disable,
- .port_bridge_join = lan9303_port_bridge_join,
- .port_bridge_leave = lan9303_port_bridge_leave,
- .port_stp_state_set = lan9303_port_stp_state_set,
- .port_fast_age = lan9303_port_fast_age,
- .port_fdb_add = lan9303_port_fdb_add,
- .port_fdb_del = lan9303_port_fdb_del,
- .port_fdb_dump = lan9303_port_fdb_dump,
- .port_mdb_add = lan9303_port_mdb_add,
- .port_mdb_del = lan9303_port_mdb_del,
+ .get_tag_protocol = lan9303_get_tag_protocol,
+ .setup = lan9303_setup,
+ .get_strings = lan9303_get_strings,
+ .phy_read = lan9303_phy_read,
+ .phy_write = lan9303_phy_write,
+ .phylink_get_caps = lan9303_phylink_get_caps,
+ .phylink_mac_link_up = lan9303_phylink_mac_link_up,
+ .get_ethtool_stats = lan9303_get_ethtool_stats,
+ .get_sset_count = lan9303_get_sset_count,
+ .port_enable = lan9303_port_enable,
+ .port_disable = lan9303_port_disable,
+ .port_bridge_join = lan9303_port_bridge_join,
+ .port_bridge_leave = lan9303_port_bridge_leave,
+ .port_stp_state_set = lan9303_port_stp_state_set,
+ .port_fast_age = lan9303_port_fast_age,
+ .port_fdb_add = lan9303_port_fdb_add,
+ .port_fdb_del = lan9303_port_fdb_del,
+ .port_fdb_dump = lan9303_port_fdb_dump,
+ .port_mdb_add = lan9303_port_mdb_add,
+ .port_mdb_del = lan9303_port_mdb_del,
};
static int lan9303_register_switch(struct lan9303 *chip)
diff --git a/drivers/net/dsa/microchip/Kconfig b/drivers/net/dsa/microchip/Kconfig
index 913f83ef013c..394ca8678d2b 100644
--- a/drivers/net/dsa/microchip/Kconfig
+++ b/drivers/net/dsa/microchip/Kconfig
@@ -22,6 +22,16 @@ config NET_DSA_MICROCHIP_KSZ_SPI
help
Select to enable support for registering switches configured through SPI.
+config NET_DSA_MICROCHIP_KSZ_PTP
+ bool "Support for the PTP clock on the KSZ9563/LAN937x Ethernet Switch"
+ depends on NET_DSA_MICROCHIP_KSZ_COMMON && PTP_1588_CLOCK
+ depends on NET_DSA_MICROCHIP_KSZ_COMMON=m || PTP_1588_CLOCK=y
+ help
+ Select to enable support for timestamping & PTP clock manipulation in
+ the KSZ8563/KSZ9563/LAN937x series of switches. KSZ9563/KSZ8563
+ supports only one-step timestamping. The LAN937x switches support
+ both one-step and two-step timestamping.
+
config NET_DSA_MICROCHIP_KSZ8863_SMI
tristate "KSZ series SMI connected switch driver"
depends on NET_DSA_MICROCHIP_KSZ_COMMON
diff --git a/drivers/net/dsa/microchip/Makefile b/drivers/net/dsa/microchip/Makefile
index 28873559efc2..48360cc9fc68 100644
--- a/drivers/net/dsa/microchip/Makefile
+++ b/drivers/net/dsa/microchip/Makefile
@@ -4,6 +4,11 @@ ksz_switch-objs := ksz_common.o
ksz_switch-objs += ksz9477.o
ksz_switch-objs += ksz8795.o
ksz_switch-objs += lan937x_main.o
+
+ifdef CONFIG_NET_DSA_MICROCHIP_KSZ_PTP
+ksz_switch-objs += ksz_ptp.o
+endif
+
obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ9477_I2C) += ksz9477_i2c.o
obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ_SPI) += ksz_spi.o
obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ8863_SMI) += ksz8863_smi.o
diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
index 6178a96e389f..bf13d47c26cf 100644
--- a/drivers/net/dsa/microchip/ksz9477.c
+++ b/drivers/net/dsa/microchip/ksz9477.c
@@ -980,6 +980,22 @@ int ksz9477_set_ageing_time(struct ksz_device *dev, unsigned int msecs)
return ksz_write8(dev, REG_SW_LUE_CTRL_0, value);
}
+void ksz9477_port_queue_split(struct ksz_device *dev, int port)
+{
+ u8 data;
+
+ if (dev->info->num_tx_queues == 8)
+ data = PORT_EIGHT_QUEUE;
+ else if (dev->info->num_tx_queues == 4)
+ data = PORT_FOUR_QUEUE;
+ else if (dev->info->num_tx_queues == 2)
+ data = PORT_TWO_QUEUE;
+ else
+ data = PORT_SINGLE_QUEUE;
+
+ ksz_prmw8(dev, port, REG_PORT_CTRL_0, PORT_QUEUE_SPLIT_MASK, data);
+}
+
void ksz9477_port_setup(struct ksz_device *dev, int port, bool cpu_port)
{
struct dsa_switch *ds = dev->ds;
@@ -991,6 +1007,8 @@ void ksz9477_port_setup(struct ksz_device *dev, int port, bool cpu_port)
ksz_port_cfg(dev, port, REG_PORT_CTRL_0, PORT_TAIL_TAG_ENABLE,
true);
+ ksz9477_port_queue_split(dev, port);
+
ksz_port_cfg(dev, port, REG_PORT_CTRL_0, PORT_MAC_LOOPBACK, false);
/* set back pressure */
@@ -1166,6 +1184,13 @@ u32 ksz9477_get_port_addr(int port, int offset)
return PORT_CTRL_ADDR(port, offset);
}
+int ksz9477_tc_cbs_set_cinc(struct ksz_device *dev, int port, u32 val)
+{
+ val = val >> 8;
+
+ return ksz_pwrite16(dev, port, REG_PORT_MTI_CREDIT_INCREMENT, val);
+}
+
int ksz9477_switch_init(struct ksz_device *dev)
{
u8 data8;
diff --git a/drivers/net/dsa/microchip/ksz9477.h b/drivers/net/dsa/microchip/ksz9477.h
index 7c5bb3032772..b6f7e3c46e3f 100644
--- a/drivers/net/dsa/microchip/ksz9477.h
+++ b/drivers/net/dsa/microchip/ksz9477.h
@@ -51,10 +51,12 @@ int ksz9477_mdb_del(struct ksz_device *dev, int port,
const struct switchdev_obj_port_mdb *mdb, struct dsa_db db);
int ksz9477_change_mtu(struct ksz_device *dev, int port, int mtu);
void ksz9477_config_cpu_port(struct dsa_switch *ds);
+int ksz9477_tc_cbs_set_cinc(struct ksz_device *dev, int port, u32 val);
int ksz9477_enable_stp_addr(struct ksz_device *dev);
int ksz9477_reset_switch(struct ksz_device *dev);
int ksz9477_dsa_init(struct ksz_device *dev);
int ksz9477_switch_init(struct ksz_device *dev);
void ksz9477_switch_exit(struct ksz_device *dev);
+void ksz9477_port_queue_split(struct ksz_device *dev, int port);
#endif
diff --git a/drivers/net/dsa/microchip/ksz9477_reg.h b/drivers/net/dsa/microchip/ksz9477_reg.h
index cc457fa64939..cba3dba58bc3 100644
--- a/drivers/net/dsa/microchip/ksz9477_reg.h
+++ b/drivers/net/dsa/microchip/ksz9477_reg.h
@@ -850,7 +850,11 @@
#define PORT_FORCE_TX_FLOW_CTRL BIT(4)
#define PORT_FORCE_RX_FLOW_CTRL BIT(3)
#define PORT_TAIL_TAG_ENABLE BIT(2)
-#define PORT_QUEUE_SPLIT_ENABLE 0x3
+#define PORT_QUEUE_SPLIT_MASK GENMASK(1, 0)
+#define PORT_EIGHT_QUEUE 0x3
+#define PORT_FOUR_QUEUE 0x2
+#define PORT_TWO_QUEUE 0x1
+#define PORT_SINGLE_QUEUE 0x0
#define REG_PORT_CTRL_1 0x0021
@@ -1480,33 +1484,10 @@
/* 9 - Shaping */
-#define REG_PORT_MTI_QUEUE_INDEX__4 0x0900
+#define REG_PORT_MTI_QUEUE_CTRL_0__4 0x0904
-#define REG_PORT_MTI_QUEUE_CTRL_0__4 0x0904
+#define MTI_PVID_REPLACE BIT(0)
-#define MTI_PVID_REPLACE BIT(0)
-
-#define REG_PORT_MTI_QUEUE_CTRL_0 0x0914
-
-#define MTI_SCHEDULE_MODE_M 0x3
-#define MTI_SCHEDULE_MODE_S 6
-#define MTI_SCHEDULE_STRICT_PRIO 0
-#define MTI_SCHEDULE_WRR 2
-#define MTI_SHAPING_M 0x3
-#define MTI_SHAPING_S 4
-#define MTI_SHAPING_OFF 0
-#define MTI_SHAPING_SRP 1
-#define MTI_SHAPING_TIME_AWARE 2
-
-#define REG_PORT_MTI_QUEUE_CTRL_1 0x0915
-
-#define MTI_TX_RATIO_M (BIT(7) - 1)
-
-#define REG_PORT_MTI_QUEUE_CTRL_2__2 0x0916
-#define REG_PORT_MTI_HI_WATER_MARK 0x0916
-#define REG_PORT_MTI_QUEUE_CTRL_3__2 0x0918
-#define REG_PORT_MTI_LO_WATER_MARK 0x0918
-#define REG_PORT_MTI_QUEUE_CTRL_4__2 0x091A
#define REG_PORT_MTI_CREDIT_INCREMENT 0x091A
/* A - QM */
diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
index 9b20c2ee6d62..729b36eeb2c4 100644
--- a/drivers/net/dsa/microchip/ksz_common.c
+++ b/drivers/net/dsa/microchip/ksz_common.c
@@ -6,6 +6,7 @@
*/
#include <linux/delay.h>
+#include <linux/dsa/ksz_common.h>
#include <linux/export.h>
#include <linux/gpio/consumer.h>
#include <linux/kernel.h>
@@ -22,13 +23,19 @@
#include <linux/of_net.h>
#include <linux/micrel_phy.h>
#include <net/dsa.h>
+#include <net/pkt_cls.h>
#include <net/switchdev.h>
#include "ksz_common.h"
+#include "ksz_ptp.h"
#include "ksz8.h"
#include "ksz9477.h"
#include "lan937x.h"
+#define KSZ_CBS_ENABLE ((MTI_SCHEDULE_STRICT_PRIO << MTI_SCHEDULE_MODE_S) | \
+ (MTI_SHAPING_SRP << MTI_SHAPING_S))
+#define KSZ_CBS_DISABLE ((MTI_SCHEDULE_WRR << MTI_SCHEDULE_MODE_S) |\
+ (MTI_SHAPING_OFF << MTI_SHAPING_S))
#define MIB_COUNTER_NUM 0x20
struct ksz_stats_raw {
@@ -248,6 +255,7 @@ static const struct ksz_dev_ops ksz9477_dev_ops = {
.change_mtu = ksz9477_change_mtu,
.phylink_mac_link_up = ksz9477_phylink_mac_link_up,
.config_cpu_port = ksz9477_config_cpu_port,
+ .tc_cbs_set_cinc = ksz9477_tc_cbs_set_cinc,
.enable_stp_addr = ksz9477_enable_stp_addr,
.reset = ksz9477_reset_switch,
.init = ksz9477_switch_init,
@@ -284,6 +292,7 @@ static const struct ksz_dev_ops lan937x_dev_ops = {
.change_mtu = lan937x_change_mtu,
.phylink_mac_link_up = ksz9477_phylink_mac_link_up,
.config_cpu_port = lan937x_config_cpu_port,
+ .tc_cbs_set_cinc = lan937x_tc_cbs_set_cinc,
.enable_stp_addr = ksz9477_enable_stp_addr,
.reset = lan937x_reset_switch,
.init = lan937x_switch_init,
@@ -1078,6 +1087,8 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.cpu_ports = 0x07, /* can be configured as cpu port */
.port_cnt = 3, /* total port count */
.port_nirqs = 3,
+ .num_tx_queues = 4,
+ .tc_cbs_supported = true,
.ops = &ksz9477_dev_ops,
.mib_names = ksz9477_mib_names,
.mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
@@ -1104,6 +1115,7 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.num_statics = 8,
.cpu_ports = 0x10, /* can be configured as cpu port */
.port_cnt = 5, /* total cpu and user ports */
+ .num_tx_queues = 4,
.ops = &ksz8_dev_ops,
.ksz87xx_eee_link_erratum = true,
.mib_names = ksz9477_mib_names,
@@ -1142,6 +1154,7 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.num_statics = 8,
.cpu_ports = 0x10, /* can be configured as cpu port */
.port_cnt = 5, /* total cpu and user ports */
+ .num_tx_queues = 4,
.ops = &ksz8_dev_ops,
.ksz87xx_eee_link_erratum = true,
.mib_names = ksz9477_mib_names,
@@ -1166,6 +1179,7 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.num_statics = 8,
.cpu_ports = 0x10, /* can be configured as cpu port */
.port_cnt = 5, /* total cpu and user ports */
+ .num_tx_queues = 4,
.ops = &ksz8_dev_ops,
.ksz87xx_eee_link_erratum = true,
.mib_names = ksz9477_mib_names,
@@ -1190,6 +1204,7 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.num_statics = 8,
.cpu_ports = 0x4, /* can be configured as cpu port */
.port_cnt = 3,
+ .num_tx_queues = 4,
.ops = &ksz8_dev_ops,
.mib_names = ksz88xx_mib_names,
.mib_cnt = ARRAY_SIZE(ksz88xx_mib_names),
@@ -1211,6 +1226,8 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.cpu_ports = 0x7F, /* can be configured as cpu port */
.port_cnt = 7, /* total physical port count */
.port_nirqs = 4,
+ .num_tx_queues = 4,
+ .tc_cbs_supported = true,
.ops = &ksz9477_dev_ops,
.phy_errata_9477 = true,
.mib_names = ksz9477_mib_names,
@@ -1243,6 +1260,7 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.cpu_ports = 0x3F, /* can be configured as cpu port */
.port_cnt = 6, /* total physical port count */
.port_nirqs = 2,
+ .num_tx_queues = 4,
.ops = &ksz9477_dev_ops,
.phy_errata_9477 = true,
.mib_names = ksz9477_mib_names,
@@ -1275,6 +1293,7 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.cpu_ports = 0x7F, /* can be configured as cpu port */
.port_cnt = 7, /* total physical port count */
.port_nirqs = 2,
+ .num_tx_queues = 4,
.ops = &ksz9477_dev_ops,
.phy_errata_9477 = true,
.mib_names = ksz9477_mib_names,
@@ -1305,6 +1324,7 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.cpu_ports = 0x07, /* can be configured as cpu port */
.port_cnt = 3, /* total port count */
.port_nirqs = 2,
+ .num_tx_queues = 4,
.ops = &ksz9477_dev_ops,
.mib_names = ksz9477_mib_names,
.mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
@@ -1330,6 +1350,8 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.cpu_ports = 0x07, /* can be configured as cpu port */
.port_cnt = 3, /* total port count */
.port_nirqs = 3,
+ .num_tx_queues = 4,
+ .tc_cbs_supported = true,
.ops = &ksz9477_dev_ops,
.mib_names = ksz9477_mib_names,
.mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
@@ -1355,6 +1377,8 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.cpu_ports = 0x7F, /* can be configured as cpu port */
.port_cnt = 7, /* total physical port count */
.port_nirqs = 3,
+ .num_tx_queues = 4,
+ .tc_cbs_supported = true,
.ops = &ksz9477_dev_ops,
.phy_errata_9477 = true,
.mib_names = ksz9477_mib_names,
@@ -1385,6 +1409,8 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.cpu_ports = 0x10, /* can be configured as cpu port */
.port_cnt = 5, /* total physical port count */
.port_nirqs = 6,
+ .num_tx_queues = 8,
+ .tc_cbs_supported = true,
.ops = &lan937x_dev_ops,
.mib_names = ksz9477_mib_names,
.mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
@@ -1409,6 +1435,8 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.cpu_ports = 0x30, /* can be configured as cpu port */
.port_cnt = 6, /* total physical port count */
.port_nirqs = 6,
+ .num_tx_queues = 8,
+ .tc_cbs_supported = true,
.ops = &lan937x_dev_ops,
.mib_names = ksz9477_mib_names,
.mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
@@ -1433,6 +1461,8 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.cpu_ports = 0x30, /* can be configured as cpu port */
.port_cnt = 8, /* total physical port count */
.port_nirqs = 6,
+ .num_tx_queues = 8,
+ .tc_cbs_supported = true,
.ops = &lan937x_dev_ops,
.mib_names = ksz9477_mib_names,
.mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
@@ -1461,6 +1491,8 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.cpu_ports = 0x38, /* can be configured as cpu port */
.port_cnt = 5, /* total physical port count */
.port_nirqs = 6,
+ .num_tx_queues = 8,
+ .tc_cbs_supported = true,
.ops = &lan937x_dev_ops,
.mib_names = ksz9477_mib_names,
.mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
@@ -1489,6 +1521,8 @@ const struct ksz_chip_data ksz_switch_chips[] = {
.cpu_ports = 0x30, /* can be configured as cpu port */
.port_cnt = 8, /* total physical port count */
.port_nirqs = 6,
+ .num_tx_queues = 8,
+ .tc_cbs_supported = true,
.ops = &lan937x_dev_ops,
.mib_names = ksz9477_mib_names,
.mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
@@ -1775,9 +1809,6 @@ static int ksz_sw_mdio_read(struct mii_bus *bus, int addr, int regnum)
u16 val;
int ret;
- if (regnum & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
ret = dev->dev_ops->r_phy(dev, addr, regnum, &val);
if (ret < 0)
return ret;
@@ -1790,9 +1821,6 @@ static int ksz_sw_mdio_write(struct mii_bus *bus, int addr, int regnum,
{
struct ksz_device *dev = bus->priv;
- if (regnum & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
return dev->dev_ops->w_phy(dev, addr, regnum, val);
}
@@ -2069,6 +2097,8 @@ static int ksz_setup(struct dsa_switch *ds)
dev->dev_ops->enable_stp_addr(dev);
+ ds->num_tx_queues = dev->info->num_tx_queues;
+
regmap_update_bits(dev->regmap[0], regs[S_MULTICAST_CTRL],
MULTICAST_STORM_DISABLE, MULTICAST_STORM_DISABLE);
@@ -2099,13 +2129,23 @@ static int ksz_setup(struct dsa_switch *ds)
ret = ksz_pirq_setup(dev, dp->index);
if (ret)
goto out_girq;
+
+ ret = ksz_ptp_irq_setup(ds, dp->index);
+ if (ret)
+ goto out_pirq;
}
}
+ ret = ksz_ptp_clock_register(ds);
+ if (ret) {
+ dev_err(dev->dev, "Failed to register PTP clock: %d\n", ret);
+ goto out_ptpirq;
+ }
+
ret = ksz_mdio_register(dev);
if (ret < 0) {
dev_err(dev->dev, "failed to register the mdio");
- goto out_pirq;
+ goto out_ptp_clock_unregister;
}
/* start switch */
@@ -2114,6 +2154,12 @@ static int ksz_setup(struct dsa_switch *ds)
return 0;
+out_ptp_clock_unregister:
+ ksz_ptp_clock_unregister(ds);
+out_ptpirq:
+ if (dev->irq > 0)
+ dsa_switch_for_each_user_port(dp, dev->ds)
+ ksz_ptp_irq_free(ds, dp->index);
out_pirq:
if (dev->irq > 0)
dsa_switch_for_each_user_port(dp, dev->ds)
@@ -2130,9 +2176,14 @@ static void ksz_teardown(struct dsa_switch *ds)
struct ksz_device *dev = ds->priv;
struct dsa_port *dp;
+ ksz_ptp_clock_unregister(ds);
+
if (dev->irq > 0) {
- dsa_switch_for_each_user_port(dp, dev->ds)
+ dsa_switch_for_each_user_port(dp, dev->ds) {
+ ksz_ptp_irq_free(ds, dp->index);
+
ksz_irq_free(&dev->ports[dp->index].pirq);
+ }
ksz_irq_free(&dev->girq);
}
@@ -2517,6 +2568,17 @@ static enum dsa_tag_protocol ksz_get_tag_protocol(struct dsa_switch *ds,
return proto;
}
+static int ksz_connect_tag_protocol(struct dsa_switch *ds,
+ enum dsa_tag_protocol proto)
+{
+ struct ksz_tagger_data *tagger_data;
+
+ tagger_data = ksz_tagger_data(ds);
+ tagger_data->xmit_work_fn = ksz_port_deferred_xmit;
+
+ return 0;
+}
+
static int ksz_port_vlan_filtering(struct dsa_switch *ds, int port,
bool flag, struct netlink_ext_ack *extack)
{
@@ -2611,6 +2673,70 @@ static int ksz_max_mtu(struct dsa_switch *ds, int port)
return -EOPNOTSUPP;
}
+static int ksz_validate_eee(struct dsa_switch *ds, int port)
+{
+ struct ksz_device *dev = ds->priv;
+
+ if (!dev->info->internal_phy[port])
+ return -EOPNOTSUPP;
+
+ switch (dev->chip_id) {
+ case KSZ8563_CHIP_ID:
+ case KSZ9477_CHIP_ID:
+ case KSZ9563_CHIP_ID:
+ case KSZ9567_CHIP_ID:
+ case KSZ9893_CHIP_ID:
+ case KSZ9896_CHIP_ID:
+ case KSZ9897_CHIP_ID:
+ return 0;
+ }
+
+ return -EOPNOTSUPP;
+}
+
+static int ksz_get_mac_eee(struct dsa_switch *ds, int port,
+ struct ethtool_eee *e)
+{
+ int ret;
+
+ ret = ksz_validate_eee(ds, port);
+ if (ret)
+ return ret;
+
+ /* There is no documented control of Tx LPI configuration. */
+ e->tx_lpi_enabled = true;
+
+ /* There is no documented control of the Tx LPI timer. According to
+ * tests, the Tx LPI timer seems to be set by default to its minimal
+ * value.
+ */
+ e->tx_lpi_timer = 0;
+
+ return 0;
+}
+
+static int ksz_set_mac_eee(struct dsa_switch *ds, int port,
+ struct ethtool_eee *e)
+{
+ struct ksz_device *dev = ds->priv;
+ int ret;
+
+ ret = ksz_validate_eee(ds, port);
+ if (ret)
+ return ret;
+
+ if (!e->tx_lpi_enabled) {
+ dev_err(dev->dev, "Disabling EEE Tx LPI is not supported\n");
+ return -EINVAL;
+ }
+
+ if (e->tx_lpi_timer) {
+ dev_err(dev->dev, "Setting EEE Tx LPI timer is not supported\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
static void ksz_set_xmii(struct ksz_device *dev, int port,
phy_interface_t interface)
{
@@ -2930,8 +3056,104 @@ static int ksz_switch_detect(struct ksz_device *dev)
return 0;
}
+/* Bandwidth is calculated as idle slope / transmission rate. The ratio is
+ * then converted to a hexadecimal fraction using the successive
+ * multiplication method: on every step, the integer part is taken and the
+ * fractional part is carried forward.
+ */
+static int cinc_cal(s32 idle_slope, s32 send_slope, u32 *bw)
+{
+ u32 cinc = 0;
+ u32 txrate;
+ u32 rate;
+ u8 temp;
+ u8 i;
+
+ txrate = idle_slope - send_slope;
+
+ if (!txrate)
+ return -EINVAL;
+
+ rate = idle_slope;
+
+ /* 24-bit register */
+ for (i = 0; i < 6; i++) {
+ rate = rate * 16;
+
+ temp = rate / txrate;
+
+ rate %= txrate;
+
+ cinc = ((cinc << 4) | temp);
+ }
+
+ *bw = cinc;
+
+ return 0;
+}
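
cinc_cal() therefore encodes the ratio idle_slope / (idle_slope - send_slope) as a 24-bit hexadecimal fraction, one nibble per loop iteration. A self-contained re-derivation with a classic 20 % reservation (idleslope 20000, sendslope -80000, as tc's cbs qdisc would pass them; the numbers are illustrative):

#include <stdio.h>
#include <stdint.h>

/* Successive multiplication: multiply the remainder by 16, take the
 * integer part as the next hex digit, carry the fraction forward.
 * Six digits fill the 24-bit credit increment.
 */
static uint32_t cinc(int32_t idle_slope, int32_t send_slope)
{
	uint32_t txrate = idle_slope - send_slope;	/* port TX rate */
	uint32_t rate = idle_slope, result = 0;
	int i;

	for (i = 0; i < 6; i++) {
		rate *= 16;
		result = (result << 4) | (rate / txrate);
		rate %= txrate;
	}
	return result;
}

int main(void)
{
	/* 20000 / (20000 - (-80000)) = 0.2 -> prints 0x333333 */
	printf("0x%06x\n", cinc(20000, -80000));
	return 0;
}

On the KSZ9477 family, the upper 16 bits of that value (0x3333 here) end up in REG_PORT_MTI_CREDIT_INCREMENT via the val >> 8 in ksz9477_tc_cbs_set_cinc() above; the whole path is driven from user space with something like tc qdisc replace dev <port> parent ... cbs idleslope 20000 sendslope -80000 ... offload 1.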
+
+static int ksz_setup_tc_cbs(struct dsa_switch *ds, int port,
+ struct tc_cbs_qopt_offload *qopt)
+{
+ struct ksz_device *dev = ds->priv;
+ int ret;
+ u32 bw;
+
+ if (!dev->info->tc_cbs_supported)
+ return -EOPNOTSUPP;
+
+ if (qopt->queue > dev->info->num_tx_queues)
+ return -EINVAL;
+
+ /* Queue Selection */
+ ret = ksz_pwrite32(dev, port, REG_PORT_MTI_QUEUE_INDEX__4, qopt->queue);
+ if (ret)
+ return ret;
+
+ if (!qopt->enable)
+ return ksz_pwrite8(dev, port, REG_PORT_MTI_QUEUE_CTRL_0,
+ KSZ_CBS_DISABLE);
+
+ /* High Credit */
+ ret = ksz_pwrite16(dev, port, REG_PORT_MTI_HI_WATER_MARK,
+ qopt->hicredit);
+ if (ret)
+ return ret;
+
+ /* Low Credit */
+ ret = ksz_pwrite16(dev, port, REG_PORT_MTI_LO_WATER_MARK,
+ qopt->locredit);
+ if (ret)
+ return ret;
+
+ /* Credit Increment Register */
+ ret = cinc_cal(qopt->idleslope, qopt->sendslope, &bw);
+ if (ret)
+ return ret;
+
+ if (dev->dev_ops->tc_cbs_set_cinc) {
+ ret = dev->dev_ops->tc_cbs_set_cinc(dev, port, bw);
+ if (ret)
+ return ret;
+ }
+
+ return ksz_pwrite8(dev, port, REG_PORT_MTI_QUEUE_CTRL_0,
+ KSZ_CBS_ENABLE);
+}
+
+static int ksz_setup_tc(struct dsa_switch *ds, int port,
+ enum tc_setup_type type, void *type_data)
+{
+ switch (type) {
+ case TC_SETUP_QDISC_CBS:
+ return ksz_setup_tc_cbs(ds, port, type_data);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
static const struct dsa_switch_ops ksz_switch_ops = {
.get_tag_protocol = ksz_get_tag_protocol,
+ .connect_tag_protocol = ksz_connect_tag_protocol,
.get_phy_flags = ksz_get_phy_flags,
.setup = ksz_setup,
.teardown = ksz_teardown,
@@ -2966,6 +3188,14 @@ static const struct dsa_switch_ops ksz_switch_ops = {
.get_pause_stats = ksz_get_pause_stats,
.port_change_mtu = ksz_change_mtu,
.port_max_mtu = ksz_max_mtu,
+ .get_ts_info = ksz_get_ts_info,
+ .port_hwtstamp_get = ksz_hwtstamp_get,
+ .port_hwtstamp_set = ksz_hwtstamp_set,
+ .port_txtstamp = ksz_port_txtstamp,
+ .port_rxtstamp = ksz_port_rxtstamp,
+ .port_setup_tc = ksz_setup_tc,
+ .get_mac_eee = ksz_get_mac_eee,
+ .set_mac_eee = ksz_set_mac_eee,
};
struct ksz_device *ksz_switch_alloc(struct device *base, void *priv)
diff --git a/drivers/net/dsa/microchip/ksz_common.h b/drivers/net/dsa/microchip/ksz_common.h
index 055d61ff3fb8..d2d5761d58e9 100644
--- a/drivers/net/dsa/microchip/ksz_common.h
+++ b/drivers/net/dsa/microchip/ksz_common.h
@@ -15,9 +15,12 @@
#include <net/dsa.h>
#include <linux/irq.h>
+#include "ksz_ptp.h"
+
#define KSZ_MAX_NUM_PORTS 8
struct ksz_device;
+struct ksz_port;
struct vlan_table {
u32 table[3];
@@ -46,6 +49,8 @@ struct ksz_chip_data {
int cpu_ports;
int port_cnt;
u8 port_nirqs;
+ u8 num_tx_queues;
+ bool tc_cbs_supported;
const struct ksz_dev_ops *ops;
bool phy_errata_9477;
bool ksz87xx_eee_link_erratum;
@@ -81,6 +86,14 @@ struct ksz_irq {
struct ksz_device *dev;
};
+struct ksz_ptp_irq {
+ struct ksz_port *port;
+ u16 ts_reg;
+ bool ts_en;
+ char name[16];
+ int num;
+};
+
struct ksz_port {
bool remove_tag; /* Remove Tag flag set, for ksz8795 only */
bool learning;
@@ -100,6 +113,15 @@ struct ksz_port {
struct ksz_device *ksz_dev;
struct ksz_irq pirq;
u8 num;
+#if IS_ENABLED(CONFIG_NET_DSA_MICROCHIP_KSZ_PTP)
+ struct hwtstamp_config tstamp_config;
+ bool hwts_tx_en;
+ bool hwts_rx_en;
+ struct ksz_irq ptpirq;
+ struct ksz_ptp_irq ptpmsg_irq[3];
+ ktime_t tstamp_msg;
+ struct completion tstamp_msg_comp;
+#endif
};
struct ksz_device {
@@ -140,6 +162,7 @@ struct ksz_device {
u16 port_mask;
struct mutex lock_irq; /* IRQ Access */
struct ksz_irq girq;
+ struct ksz_ptp_data ptp_data;
};
/* List of supported models */
@@ -332,6 +355,7 @@ struct ksz_dev_ops {
struct phy_device *phydev, int speed,
int duplex, bool tx_pause, bool rx_pause);
void (*setup_rgmii_delay)(struct ksz_device *dev, int port);
+ int (*tc_cbs_set_cinc)(struct ksz_device *dev, int port, u32 val);
void (*config_cpu_port)(struct dsa_switch *ds);
int (*enable_stp_addr)(struct ksz_device *dev);
int (*reset)(struct ksz_device *dev);
@@ -443,6 +467,32 @@ static inline int ksz_write32(struct ksz_device *dev, u32 reg, u32 value)
return ret;
}
+static inline int ksz_rmw16(struct ksz_device *dev, u32 reg, u16 mask,
+ u16 value)
+{
+ int ret;
+
+ ret = regmap_update_bits(dev->regmap[1], reg, mask, value);
+ if (ret)
+ dev_err(dev->dev, "can't rmw 16bit reg 0x%x: %pe\n", reg,
+ ERR_PTR(ret));
+
+ return ret;
+}
+
+static inline int ksz_rmw32(struct ksz_device *dev, u32 reg, u32 mask,
+ u32 value)
+{
+ int ret;
+
+ ret = regmap_update_bits(dev->regmap[2], reg, mask, value);
+ if (ret)
+ dev_err(dev->dev, "can't rmw 32bit reg 0x%x: %pe\n", reg,
+ ERR_PTR(ret));
+
+ return ret;
+}
+
static inline int ksz_write64(struct ksz_device *dev, u32 reg, u64 value)
{
u32 val[2];
@@ -591,6 +641,7 @@ static inline int is_lan937x(struct ksz_device *dev)
#define REG_PORT_INT_MASK 0x001F
#define PORT_SRC_PHY_INT 1
+#define PORT_SRC_PTP_INT 2
#define KSZ8795_HUGE_PACKET_SIZE 2000
#define KSZ8863_HUGE_PACKET_SIZE 1916
@@ -598,6 +649,24 @@ static inline int is_lan937x(struct ksz_device *dev)
#define KSZ8_LEGAL_PACKET_SIZE 1518
#define KSZ9477_MAX_FRAME_SIZE 9000
+/* CBS related registers */
+#define REG_PORT_MTI_QUEUE_INDEX__4 0x0900
+
+#define REG_PORT_MTI_QUEUE_CTRL_0 0x0914
+
+#define MTI_SCHEDULE_MODE_M 0x3
+#define MTI_SCHEDULE_MODE_S 6
+#define MTI_SCHEDULE_STRICT_PRIO 0
+#define MTI_SCHEDULE_WRR 2
+#define MTI_SHAPING_M 0x3
+#define MTI_SHAPING_S 4
+#define MTI_SHAPING_OFF 0
+#define MTI_SHAPING_SRP 1
+#define MTI_SHAPING_TIME_AWARE 2
+
+#define REG_PORT_MTI_HI_WATER_MARK 0x0916
+#define REG_PORT_MTI_LO_WATER_MARK 0x0918
+
/* Regmap tables generation */
#define KSZ_SPI_OP_RD 3
#define KSZ_SPI_OP_WR 2
diff --git a/drivers/net/dsa/microchip/ksz_ptp.c b/drivers/net/dsa/microchip/ksz_ptp.c
new file mode 100644
index 000000000000..4e22a695a64c
--- /dev/null
+++ b/drivers/net/dsa/microchip/ksz_ptp.c
@@ -0,0 +1,1201 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Microchip KSZ PTP Implementation
+ *
+ * Copyright (C) 2020 ARRI Lighting
+ * Copyright (C) 2022 Microchip Technology Inc.
+ */
+
+#include <linux/dsa/ksz_common.h>
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
+#include <linux/kernel.h>
+#include <linux/ptp_classify.h>
+#include <linux/ptp_clock_kernel.h>
+
+#include "ksz_common.h"
+#include "ksz_ptp.h"
+#include "ksz_ptp_reg.h"
+
+#define ptp_caps_to_data(d) container_of((d), struct ksz_ptp_data, caps)
+#define ptp_data_to_ksz_dev(d) container_of((d), struct ksz_device, ptp_data)
+#define work_to_xmit_work(w) \
+ container_of((w), struct ksz_deferred_xmit_work, work)
+
+/* Maximum drift correction in ppb:
+ * sub-ns-adj,max / (2^32 sub-ns per ns) / (40 ns per clock tick) * 10^9
+ * = (2^30 - 1) / 2^32 / 40 * 10^9 = 6249999
+ */
+#define KSZ_MAX_DRIFT_CORR 6249999
+#define KSZ_MAX_PULSE_WIDTH 125000000LL
+
+#define KSZ_PTP_INC_NS 40ULL /* HW clock is incremented every 40 ns (by 40) */
+#define KSZ_PTP_SUBNS_BITS 32
+
+#define KSZ_PTP_INT_START 13
+
+static int ksz_ptp_tou_gpio(struct ksz_device *dev)
+{
+ int ret;
+
+ if (!is_lan937x(dev))
+ return 0;
+
+ ret = ksz_rmw32(dev, REG_PTP_CTRL_STAT__4, GPIO_OUT,
+ GPIO_OUT);
+ if (ret)
+ return ret;
+
+ ret = ksz_rmw32(dev, REG_SW_GLOBAL_LED_OVR__4, LED_OVR_1 | LED_OVR_2,
+ LED_OVR_1 | LED_OVR_2);
+ if (ret)
+ return ret;
+
+ return ksz_rmw32(dev, REG_SW_GLOBAL_LED_SRC__4,
+ LED_SRC_PTP_GPIO_1 | LED_SRC_PTP_GPIO_2,
+ LED_SRC_PTP_GPIO_1 | LED_SRC_PTP_GPIO_2);
+}
+
+static int ksz_ptp_tou_reset(struct ksz_device *dev, u8 unit)
+{
+ u32 data;
+ int ret;
+
+ /* Reset trigger unit (clears TRIGGER_EN, but not GPIOSTATx) */
+ ret = ksz_rmw32(dev, REG_PTP_CTRL_STAT__4, TRIG_RESET, TRIG_RESET);
+ if (ret)
+ return ret;
+
+ data = FIELD_PREP(TRIG_DONE_M, BIT(unit));
+ ret = ksz_write32(dev, REG_PTP_TRIG_STATUS__4, data);
+ if (ret)
+ return ret;
+
+ data = FIELD_PREP(TRIG_INT_M, BIT(unit));
+ ret = ksz_write32(dev, REG_PTP_INT_STATUS__4, data);
+ if (ret)
+ return ret;
+
+ /* Clear reset and set GPIO direction */
+ return ksz_rmw32(dev, REG_PTP_CTRL_STAT__4, (TRIG_RESET | TRIG_ENABLE),
+ 0);
+}
+
+static int ksz_ptp_tou_pulse_verify(u64 pulse_ns)
+{
+ u32 data;
+
+ if (pulse_ns & 0x3)
+ return -EINVAL;
+
+ data = (pulse_ns / 8);
+ if (!FIELD_FIT(TRIG_PULSE_WIDTH_M, data))
+ return -ERANGE;
+
+ return 0;
+}
+
+static int ksz_ptp_tou_target_time_set(struct ksz_device *dev,
+ struct timespec64 const *ts)
+{
+ int ret;
+
+ /* Hardware has only 32 bit */
+ if ((ts->tv_sec & 0xffffffff) != ts->tv_sec)
+ return -EINVAL;
+
+ ret = ksz_write32(dev, REG_TRIG_TARGET_NANOSEC, ts->tv_nsec);
+ if (ret)
+ return ret;
+
+ ret = ksz_write32(dev, REG_TRIG_TARGET_SEC, ts->tv_sec);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+static int ksz_ptp_tou_start(struct ksz_device *dev, u8 unit)
+{
+ u32 data;
+ int ret;
+
+ ret = ksz_rmw32(dev, REG_PTP_CTRL_STAT__4, TRIG_ENABLE, TRIG_ENABLE);
+ if (ret)
+ return ret;
+
+ /* Check error flag:
+ * - the ACTIVE flag is NOT cleared on an error!
+ */
+ ret = ksz_read32(dev, REG_PTP_TRIG_STATUS__4, &data);
+ if (ret)
+ return ret;
+
+ if (FIELD_GET(TRIG_ERROR_M, data) & (1 << unit)) {
+ dev_err(dev->dev, "%s: Trigger unit%d error!\n", __func__,
+ unit);
+ ret = -EIO;
+ /* Unit will be reset on next access */
+ return ret;
+ }
+
+ return 0;
+}
+
+static int ksz_ptp_configure_perout(struct ksz_device *dev,
+ u32 cycle_width_ns, u32 pulse_width_ns,
+ struct timespec64 const *target_time,
+ u8 index)
+{
+ u32 data;
+ int ret;
+
+ data = FIELD_PREP(TRIG_NOTIFY, 1) |
+ FIELD_PREP(TRIG_GPO_M, index) |
+ FIELD_PREP(TRIG_PATTERN_M, TRIG_POS_PERIOD);
+ ret = ksz_write32(dev, REG_TRIG_CTRL__4, data);
+ if (ret)
+ return ret;
+
+ ret = ksz_write32(dev, REG_TRIG_CYCLE_WIDTH, cycle_width_ns);
+ if (ret)
+ return ret;
+
+ /* Set cycle count 0 - Infinite */
+ ret = ksz_rmw32(dev, REG_TRIG_CYCLE_CNT, TRIG_CYCLE_CNT_M, 0);
+ if (ret)
+ return ret;
+
+ data = (pulse_width_ns / 8);
+ ret = ksz_write32(dev, REG_TRIG_PULSE_WIDTH__4, data);
+ if (ret)
+ return ret;
+
+ ret = ksz_ptp_tou_target_time_set(dev, target_time);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+static int ksz_ptp_enable_perout(struct ksz_device *dev,
+ struct ptp_perout_request const *request,
+ int on)
+{
+ struct ksz_ptp_data *ptp_data = &dev->ptp_data;
+ u64 req_pulse_width_ns;
+ u64 cycle_width_ns;
+ u64 pulse_width_ns;
+ int pin = 0;
+ u32 data32;
+ int ret;
+
+ if (request->flags & ~PTP_PEROUT_DUTY_CYCLE)
+ return -EOPNOTSUPP;
+
+ if (ptp_data->tou_mode != KSZ_PTP_TOU_PEROUT &&
+ ptp_data->tou_mode != KSZ_PTP_TOU_IDLE)
+ return -EBUSY;
+
+ pin = ptp_find_pin(ptp_data->clock, PTP_PF_PEROUT, request->index);
+ if (pin < 0)
+ return -EINVAL;
+
+ data32 = FIELD_PREP(PTP_GPIO_INDEX, pin) |
+ FIELD_PREP(PTP_TOU_INDEX, request->index);
+ ret = ksz_rmw32(dev, REG_PTP_UNIT_INDEX__4,
+ PTP_GPIO_INDEX | PTP_TOU_INDEX, data32);
+ if (ret)
+ return ret;
+
+ ret = ksz_ptp_tou_reset(dev, request->index);
+ if (ret)
+ return ret;
+
+ if (!on) {
+ ptp_data->tou_mode = KSZ_PTP_TOU_IDLE;
+ return 0;
+ }
+
+ ptp_data->perout_target_time_first.tv_sec = request->start.sec;
+ ptp_data->perout_target_time_first.tv_nsec = request->start.nsec;
+
+ ptp_data->perout_period.tv_sec = request->period.sec;
+ ptp_data->perout_period.tv_nsec = request->period.nsec;
+
+ cycle_width_ns = timespec64_to_ns(&ptp_data->perout_period);
+ if ((cycle_width_ns & TRIG_CYCLE_WIDTH_M) != cycle_width_ns)
+ return -EINVAL;
+
+ if (request->flags & PTP_PEROUT_DUTY_CYCLE) {
+ pulse_width_ns = request->on.sec * NSEC_PER_SEC +
+ request->on.nsec;
+ } else {
+ /* Use a duty cycle of 50%. Maximum pulse width supported by the
+ * hardware is a little bit more than 125 ms.
+ */
+ req_pulse_width_ns = (request->period.sec * NSEC_PER_SEC +
+ request->period.nsec) / 2;
+ pulse_width_ns = min_t(u64, req_pulse_width_ns,
+ KSZ_MAX_PULSE_WIDTH);
+ }
+
+ ret = ksz_ptp_tou_pulse_verify(pulse_width_ns);
+ if (ret)
+ return ret;
+
+ ret = ksz_ptp_configure_perout(dev, cycle_width_ns, pulse_width_ns,
+ &ptp_data->perout_target_time_first,
+ pin);
+ if (ret)
+ return ret;
+
+ ret = ksz_ptp_tou_gpio(dev);
+ if (ret)
+ return ret;
+
+ ret = ksz_ptp_tou_start(dev, request->index);
+ if (ret)
+ return ret;
+
+ ptp_data->tou_mode = KSZ_PTP_TOU_PEROUT;
+
+ return 0;
+}
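
Once the PHC is registered, this periodic-output path can be exercised from user space with the kernel's testptp tool, e.g. testptp -d /dev/ptp0 -p 1000000000 for a 1 s period (the ptp device index is an assumption; the PTP_PEROUT_DUTY_CYCLE branch corresponds to additionally passing a pulse width with testptp's -w option).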
+
+static int ksz_ptp_enable_mode(struct ksz_device *dev)
+{
+ struct ksz_tagger_data *tagger_data = ksz_tagger_data(dev->ds);
+ struct ksz_ptp_data *ptp_data = &dev->ptp_data;
+ struct ksz_port *prt;
+ struct dsa_port *dp;
+ bool tag_en = false;
+ int ret;
+
+ dsa_switch_for_each_user_port(dp, dev->ds) {
+ prt = &dev->ports[dp->index];
+ if (prt->hwts_tx_en || prt->hwts_rx_en) {
+ tag_en = true;
+ break;
+ }
+ }
+
+ if (tag_en) {
+ ret = ptp_schedule_worker(ptp_data->clock, 0);
+ if (ret)
+ return ret;
+ } else {
+ ptp_cancel_worker_sync(ptp_data->clock);
+ }
+
+ tagger_data->hwtstamp_set_state(dev->ds, tag_en);
+
+ return ksz_rmw16(dev, REG_PTP_MSG_CONF1, PTP_ENABLE,
+ tag_en ? PTP_ENABLE : 0);
+}
+
+/* This function returns the timestamping capabilities when requested
+ * through the ethtool -T <interface> utility
+ */
+int ksz_get_ts_info(struct dsa_switch *ds, int port, struct ethtool_ts_info *ts)
+{
+ struct ksz_device *dev = ds->priv;
+ struct ksz_ptp_data *ptp_data;
+
+ ptp_data = &dev->ptp_data;
+
+ if (!ptp_data->clock)
+ return -ENODEV;
+
+ ts->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
+ SOF_TIMESTAMPING_RX_HARDWARE |
+ SOF_TIMESTAMPING_RAW_HARDWARE;
+
+ ts->tx_types = BIT(HWTSTAMP_TX_OFF) | BIT(HWTSTAMP_TX_ONESTEP_P2P);
+
+ if (is_lan937x(dev))
+ ts->tx_types |= BIT(HWTSTAMP_TX_ON);
+
+ ts->rx_filters = BIT(HWTSTAMP_FILTER_NONE) |
+ BIT(HWTSTAMP_FILTER_PTP_V2_L4_EVENT) |
+ BIT(HWTSTAMP_FILTER_PTP_V2_L2_EVENT) |
+ BIT(HWTSTAMP_FILTER_PTP_V2_EVENT);
+
+ ts->phc_index = ptp_clock_index(ptp_data->clock);
+
+ return 0;
+}
+
+int ksz_hwtstamp_get(struct dsa_switch *ds, int port, struct ifreq *ifr)
+{
+ struct ksz_device *dev = ds->priv;
+ struct hwtstamp_config *config;
+ struct ksz_port *prt;
+
+ prt = &dev->ports[port];
+ config = &prt->tstamp_config;
+
+ return copy_to_user(ifr->ifr_data, config, sizeof(*config)) ?
+ -EFAULT : 0;
+}
+
+static int ksz_set_hwtstamp_config(struct ksz_device *dev,
+ struct ksz_port *prt,
+ struct hwtstamp_config *config)
+{
+ int ret;
+
+ if (config->flags)
+ return -EINVAL;
+
+ switch (config->tx_type) {
+ case HWTSTAMP_TX_OFF:
+ prt->ptpmsg_irq[KSZ_SYNC_MSG].ts_en = false;
+ prt->ptpmsg_irq[KSZ_XDREQ_MSG].ts_en = false;
+ prt->ptpmsg_irq[KSZ_PDRES_MSG].ts_en = false;
+ prt->hwts_tx_en = false;
+ break;
+ case HWTSTAMP_TX_ONESTEP_P2P:
+ prt->ptpmsg_irq[KSZ_SYNC_MSG].ts_en = false;
+ prt->ptpmsg_irq[KSZ_XDREQ_MSG].ts_en = true;
+ prt->ptpmsg_irq[KSZ_PDRES_MSG].ts_en = false;
+ prt->hwts_tx_en = true;
+
+ ret = ksz_rmw16(dev, REG_PTP_MSG_CONF1, PTP_1STEP, PTP_1STEP);
+ if (ret)
+ return ret;
+
+ break;
+ case HWTSTAMP_TX_ON:
+ if (!is_lan937x(dev))
+ return -ERANGE;
+
+ prt->ptpmsg_irq[KSZ_SYNC_MSG].ts_en = true;
+ prt->ptpmsg_irq[KSZ_XDREQ_MSG].ts_en = true;
+ prt->ptpmsg_irq[KSZ_PDRES_MSG].ts_en = true;
+ prt->hwts_tx_en = true;
+
+ ret = ksz_rmw16(dev, REG_PTP_MSG_CONF1, PTP_1STEP, 0);
+ if (ret)
+ return ret;
+
+ break;
+ default:
+ return -ERANGE;
+ }
+
+ switch (config->rx_filter) {
+ case HWTSTAMP_FILTER_NONE:
+ prt->hwts_rx_en = false;
+ break;
+ case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+ config->rx_filter = HWTSTAMP_FILTER_PTP_V2_L4_EVENT;
+ prt->hwts_rx_en = true;
+ break;
+ case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
+ config->rx_filter = HWTSTAMP_FILTER_PTP_V2_L2_EVENT;
+ prt->hwts_rx_en = true;
+ break;
+ case HWTSTAMP_FILTER_PTP_V2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_SYNC:
+ config->rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+ prt->hwts_rx_en = true;
+ break;
+ default:
+ config->rx_filter = HWTSTAMP_FILTER_NONE;
+ return -ERANGE;
+ }
+
+ return ksz_ptp_enable_mode(dev);
+}
+
+int ksz_hwtstamp_set(struct dsa_switch *ds, int port, struct ifreq *ifr)
+{
+ struct ksz_device *dev = ds->priv;
+ struct hwtstamp_config config;
+ struct ksz_port *prt;
+ int ret;
+
+ prt = &dev->ports[port];
+
+ if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
+ return -EFAULT;
+
+ ret = ksz_set_hwtstamp_config(dev, prt, &config);
+ if (ret)
+ return ret;
+
+ memcpy(&prt->tstamp_config, &config, sizeof(config));
+
+ if (copy_to_user(ifr->ifr_data, &config, sizeof(config)))
+ return -EFAULT;
+
+ return 0;
+}
+
+static ktime_t ksz_tstamp_reconstruct(struct ksz_device *dev, ktime_t tstamp)
+{
+ struct timespec64 ptp_clock_time;
+ struct ksz_ptp_data *ptp_data;
+ struct timespec64 diff;
+ struct timespec64 ts;
+
+ ptp_data = &dev->ptp_data;
+ ts = ktime_to_timespec64(tstamp);
+
+ spin_lock_bh(&ptp_data->clock_lock);
+ ptp_clock_time = ptp_data->clock_time;
+ spin_unlock_bh(&ptp_data->clock_lock);
+
+ /* calculate full time from partial time stamp */
+ ts.tv_sec = (ptp_clock_time.tv_sec & ~3) | ts.tv_sec;
+
+ /* find nearest possible point in time */
+ diff = timespec64_sub(ts, ptp_clock_time);
+ if (diff.tv_sec > 2)
+ ts.tv_sec -= 4;
+ else if (diff.tv_sec < -2)
+ ts.tv_sec += 4;
+
+ return timespec64_to_ktime(ts);
+}
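+
+/* Worked example for ksz_tstamp_reconstruct() (hypothetical values):
+ * the hardware stamp carries only the two low bits of the seconds
+ * counter. With ptp_clock_time.tv_sec = 1003 and a partial stamp of
+ * 0 s, (1003 & ~3) | 0 = 1000 and diff = -3 s < -2 s, so 4 s are
+ * added and the reconstructed value is 1004 s -- the point nearest
+ * to the reference clock whose low seconds bits match the stamp.
+ */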
+
+bool ksz_port_rxtstamp(struct dsa_switch *ds, int port, struct sk_buff *skb,
+ unsigned int type)
+{
+ struct skb_shared_hwtstamps *hwtstamps = skb_hwtstamps(skb);
+ struct ksz_device *dev = ds->priv;
+ struct ptp_header *ptp_hdr;
+ struct ksz_port *prt;
+ u8 ptp_msg_type;
+ ktime_t tstamp;
+ s64 correction;
+
+ prt = &dev->ports[port];
+
+ tstamp = KSZ_SKB_CB(skb)->tstamp;
+ memset(hwtstamps, 0, sizeof(*hwtstamps));
+ hwtstamps->hwtstamp = ksz_tstamp_reconstruct(dev, tstamp);
+
+ if (prt->tstamp_config.tx_type != HWTSTAMP_TX_ONESTEP_P2P)
+ goto out;
+
+ ptp_hdr = ptp_parse_header(skb, type);
+ if (!ptp_hdr)
+ goto out;
+
+ ptp_msg_type = ptp_get_msgtype(ptp_hdr, type);
+ if (ptp_msg_type != PTP_MSGTYPE_PDELAY_REQ)
+ goto out;
+
+ /* Subtract only the partial time stamp from the correction field.
+ * When the hardware adds the egress time stamp to the correction
+ * field of the PDelay_Resp message on tx, it will likewise add only
+ * the partial time stamp.
+ */
+ correction = (s64)get_unaligned_be64(&ptp_hdr->correction);
+ correction -= ktime_to_ns(tstamp) << 16;
+
+ ptp_header_update_correction(skb, type, ptp_hdr, correction);
+
+out:
+ return false;
+}
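+
+/* Worked example for the correction above (hypothetical values): the
+ * PTP correctionField is measured in nanoseconds scaled by 2^16,
+ * hence the 16-bit shift. With an ingress stamp t2 of 500 ns and a
+ * hardware-inserted egress stamp t3 of 1300 ns, rx subtracts
+ * 500 << 16 and tx adds 1300 << 16, leaving the 800 ns residence
+ * time, times 2^16, in the one-step PDelay_Resp.
+ */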
+
+void ksz_port_txtstamp(struct dsa_switch *ds, int port, struct sk_buff *skb)
+{
+ struct ksz_device *dev = ds->priv;
+ struct ptp_header *hdr;
+ struct sk_buff *clone;
+ struct ksz_port *prt;
+ unsigned int type;
+ u8 ptp_msg_type;
+
+ prt = &dev->ports[port];
+
+ if (!prt->hwts_tx_en)
+ return;
+
+ type = ptp_classify_raw(skb);
+ if (type == PTP_CLASS_NONE)
+ return;
+
+ hdr = ptp_parse_header(skb, type);
+ if (!hdr)
+ return;
+
+ ptp_msg_type = ptp_get_msgtype(hdr, type);
+
+ switch (ptp_msg_type) {
+ case PTP_MSGTYPE_SYNC:
+ if (prt->tstamp_config.tx_type == HWTSTAMP_TX_ONESTEP_P2P)
+ return;
+ break;
+ case PTP_MSGTYPE_PDELAY_REQ:
+ break;
+ case PTP_MSGTYPE_PDELAY_RESP:
+ if (prt->tstamp_config.tx_type == HWTSTAMP_TX_ONESTEP_P2P) {
+ KSZ_SKB_CB(skb)->ptp_type = type;
+ KSZ_SKB_CB(skb)->update_correction = true;
+ return;
+ }
+ break;
+
+ default:
+ return;
+ }
+
+ clone = skb_clone_sk(skb);
+ if (!clone)
+ return;
+
+ /* Cache the clone so it can be picked up in tag_ksz.c */
+ KSZ_SKB_CB(skb)->clone = clone;
+}
+
+static void ksz_ptp_txtstamp_skb(struct ksz_device *dev,
+ struct ksz_port *prt, struct sk_buff *skb)
+{
+ struct skb_shared_hwtstamps hwtstamps = {};
+ int ret;
+
+ /* The timeout must cover the time the DSA master needs to transmit
+ * the data, the time stamp latency, the IRQ latency and the time
+ * needed to read the time stamp.
+ */
+ ret = wait_for_completion_timeout(&prt->tstamp_msg_comp,
+ msecs_to_jiffies(100));
+ if (!ret)
+ return;
+
+ hwtstamps.hwtstamp = prt->tstamp_msg;
+ skb_complete_tx_timestamp(skb, &hwtstamps);
+}
+
+void ksz_port_deferred_xmit(struct kthread_work *work)
+{
+ struct ksz_deferred_xmit_work *xmit_work = work_to_xmit_work(work);
+ struct sk_buff *clone, *skb = xmit_work->skb;
+ struct dsa_switch *ds = xmit_work->dp->ds;
+ struct ksz_device *dev = ds->priv;
+ struct ksz_port *prt;
+
+ prt = &dev->ports[xmit_work->dp->index];
+
+ clone = KSZ_SKB_CB(skb)->clone;
+
+ skb_shinfo(clone)->tx_flags |= SKBTX_IN_PROGRESS;
+
+ reinit_completion(&prt->tstamp_msg_comp);
+
+ dsa_enqueue_skb(skb, skb->dev);
+
+ ksz_ptp_txtstamp_skb(dev, prt, clone);
+
+ kfree(xmit_work);
+}
+
+static int _ksz_ptp_gettime(struct ksz_device *dev, struct timespec64 *ts)
+{
+ u32 nanoseconds;
+ u32 seconds;
+ u8 phase;
+ int ret;
+
+ /* Copy current PTP clock into shadow registers and read */
+ ret = ksz_rmw16(dev, REG_PTP_CLK_CTRL, PTP_READ_TIME, PTP_READ_TIME);
+ if (ret)
+ return ret;
+
+ ret = ksz_read8(dev, REG_PTP_RTC_SUB_NANOSEC__2, &phase);
+ if (ret)
+ return ret;
+
+ ret = ksz_read32(dev, REG_PTP_RTC_NANOSEC, &nanoseconds);
+ if (ret)
+ return ret;
+
+ ret = ksz_read32(dev, REG_PTP_RTC_SEC, &seconds);
+ if (ret)
+ return ret;
+
+ ts->tv_sec = seconds;
+ ts->tv_nsec = nanoseconds + phase * 8;
+
+ return 0;
+}
+
+static int ksz_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts)
+{
+ struct ksz_ptp_data *ptp_data = ptp_caps_to_data(ptp);
+ struct ksz_device *dev = ptp_data_to_ksz_dev(ptp_data);
+ int ret;
+
+ mutex_lock(&ptp_data->lock);
+ ret = _ksz_ptp_gettime(dev, ts);
+ mutex_unlock(&ptp_data->lock);
+
+ return ret;
+}
+
+static int ksz_ptp_restart_perout(struct ksz_device *dev)
+{
+ struct ksz_ptp_data *ptp_data = &dev->ptp_data;
+ s64 now_ns, first_ns, period_ns, next_ns;
+ struct ptp_perout_request request;
+ struct timespec64 next;
+ struct timespec64 now;
+ unsigned int count;
+ int ret;
+
+ dev_info(dev->dev, "Restarting periodic output signal\n");
+
+ ret = _ksz_ptp_gettime(dev, &now);
+ if (ret)
+ return ret;
+
+ now_ns = timespec64_to_ns(&now);
+ first_ns = timespec64_to_ns(&ptp_data->perout_target_time_first);
+
+ /* Calculate next perout event based on start time and period */
+ period_ns = timespec64_to_ns(&ptp_data->perout_period);
+
+ if (first_ns < now_ns) {
+ count = div_u64(now_ns - first_ns, period_ns);
+ next_ns = first_ns + count * period_ns;
+ } else {
+ next_ns = first_ns;
+ }
+
+ /* Ensure a 100 ms guard time prior to the next event */
+ while (next_ns < now_ns + 100000000)
+ next_ns += period_ns;
+
+ /* Restart periodic output signal */
+ next = ns_to_timespec64(next_ns);
+ request.start.sec = next.tv_sec;
+ request.start.nsec = next.tv_nsec;
+ request.period.sec = ptp_data->perout_period.tv_sec;
+ request.period.nsec = ptp_data->perout_period.tv_nsec;
+ request.index = 0;
+ request.flags = 0;
+
+ return ksz_ptp_enable_perout(dev, &request, 1);
+}
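+
+/* Worked example for the computation above (hypothetical values):
+ * with first = 10.0 s, period = 1 s and now = 13.45 s, count =
+ * (13.45 - 10.0) / 1 = 3 and next = 13.0 s; the guard loop then
+ * advances past now + 100 ms = 13.55 s, so the signal is re-armed
+ * at 14.0 s, still phase-aligned with the original request.
+ */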
+
+static int ksz_ptp_settime(struct ptp_clock_info *ptp,
+ const struct timespec64 *ts)
+{
+ struct ksz_ptp_data *ptp_data = ptp_caps_to_data(ptp);
+ struct ksz_device *dev = ptp_data_to_ksz_dev(ptp_data);
+ int ret;
+
+ mutex_lock(&ptp_data->lock);
+
+ /* Write to the shadow registers and load the PTP clock */
+ ret = ksz_write16(dev, REG_PTP_RTC_SUB_NANOSEC__2, PTP_RTC_0NS);
+ if (ret)
+ goto unlock;
+
+ ret = ksz_write32(dev, REG_PTP_RTC_NANOSEC, ts->tv_nsec);
+ if (ret)
+ goto unlock;
+
+ ret = ksz_write32(dev, REG_PTP_RTC_SEC, ts->tv_sec);
+ if (ret)
+ goto unlock;
+
+ ret = ksz_rmw16(dev, REG_PTP_CLK_CTRL, PTP_LOAD_TIME, PTP_LOAD_TIME);
+ if (ret)
+ goto unlock;
+
+ switch (ptp_data->tou_mode) {
+ case KSZ_PTP_TOU_IDLE:
+ break;
+
+ case KSZ_PTP_TOU_PEROUT:
+ ret = ksz_ptp_restart_perout(dev);
+ if (ret)
+ goto unlock;
+
+ break;
+ }
+
+ spin_lock_bh(&ptp_data->clock_lock);
+ ptp_data->clock_time = *ts;
+ spin_unlock_bh(&ptp_data->clock_lock);
+
+unlock:
+ mutex_unlock(&ptp_data->lock);
+
+ return ret;
+}
+
+static int ksz_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
+{
+ struct ksz_ptp_data *ptp_data = ptp_caps_to_data(ptp);
+ struct ksz_device *dev = ptp_data_to_ksz_dev(ptp_data);
+ u64 base, adj;
+ bool negative;
+ u32 data32;
+ int ret;
+
+ mutex_lock(&ptp_data->lock);
+
+ if (scaled_ppm) {
+ base = KSZ_PTP_INC_NS << KSZ_PTP_SUBNS_BITS;
+ negative = diff_by_scaled_ppm(base, scaled_ppm, &adj);
+
+ data32 = (u32)adj;
+ data32 &= PTP_SUBNANOSEC_M;
+ if (!negative)
+ data32 |= PTP_RATE_DIR;
+
+ ret = ksz_write32(dev, REG_PTP_SUBNANOSEC_RATE, data32);
+ if (ret)
+ goto unlock;
+
+ ret = ksz_rmw16(dev, REG_PTP_CLK_CTRL, PTP_CLK_ADJ_ENABLE,
+ PTP_CLK_ADJ_ENABLE);
+ if (ret)
+ goto unlock;
+ } else {
+ ret = ksz_rmw16(dev, REG_PTP_CLK_CTRL, PTP_CLK_ADJ_ENABLE, 0);
+ if (ret)
+ goto unlock;
+ }
+
+unlock:
+ mutex_unlock(&ptp_data->lock);
+ return ret;
+}
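+
+/* Worked example for the rate adjustment above (hypothetical, and
+ * assuming a 40 ns nominal increment with 32 sub-nanosecond bits as
+ * defined earlier in this file): for scaled_ppm = 65536, i.e. +1 ppm,
+ * diff_by_scaled_ppm() yields
+ *
+ *   adj = (40 << 32) * 65536 / (1000000 << 16) = (40 << 32) / 10^6
+ *       ~= 171799 sub-nanosecond units,
+ *
+ * lengthening each 40 ns tick by 40e-6 ns -- exactly 1 ppm.
+ */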
+
+static int ksz_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+{
+ struct ksz_ptp_data *ptp_data = ptp_caps_to_data(ptp);
+ struct ksz_device *dev = ptp_data_to_ksz_dev(ptp_data);
+ struct timespec64 delta64 = ns_to_timespec64(delta);
+ s32 sec, nsec;
+ u16 data16;
+ int ret;
+
+ mutex_lock(&ptp_data->lock);
+
+ /* Do not use ns_to_timespec64() here: the hardware subtracts (or
+ * adds) the sec and nsec values separately, so both must keep the
+ * sign of delta.
+ */
+ sec = div_s64_rem(delta, NSEC_PER_SEC, &nsec);
+
+ ret = ksz_write32(dev, REG_PTP_RTC_NANOSEC, abs(nsec));
+ if (ret)
+ goto unlock;
+
+ ret = ksz_write32(dev, REG_PTP_RTC_SEC, abs(sec));
+ if (ret)
+ goto unlock;
+
+ ret = ksz_read16(dev, REG_PTP_CLK_CTRL, &data16);
+ if (ret)
+ goto unlock;
+
+ data16 |= PTP_STEP_ADJ;
+
+ /* PTP_STEP_DIR -- 0: subtract, 1: add */
+ if (delta < 0)
+ data16 &= ~PTP_STEP_DIR;
+ else
+ data16 |= PTP_STEP_DIR;
+
+ ret = ksz_write16(dev, REG_PTP_CLK_CTRL, data16);
+ if (ret)
+ goto unlock;
+
+ switch (ptp_data->tou_mode) {
+ case KSZ_PTP_TOU_IDLE:
+ break;
+
+ case KSZ_PTP_TOU_PEROUT:
+ ret = ksz_ptp_restart_perout(dev);
+ if (ret)
+ goto unlock;
+
+ break;
+ }
+
+ spin_lock_bh(&ptp_data->clock_lock);
+ ptp_data->clock_time = timespec64_add(ptp_data->clock_time, delta64);
+ spin_unlock_bh(&ptp_data->clock_lock);
+
+unlock:
+ mutex_unlock(&ptp_data->lock);
+ return ret;
+}
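+
+/* Worked example for the sign handling above (hypothetical values):
+ * the hardware consumes magnitudes plus a direction bit, so for
+ * delta = -1500000000 ns, div_s64_rem() gives sec = -1 and
+ * nsec = -500000000; abs(1) s and abs(500000000) ns are written and
+ * PTP_STEP_DIR is cleared, stepping the clock back by 1.5 s.
+ */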
+
+static int ksz_ptp_enable(struct ptp_clock_info *ptp,
+ struct ptp_clock_request *req, int on)
+{
+ struct ksz_ptp_data *ptp_data = ptp_caps_to_data(ptp);
+ struct ksz_device *dev = ptp_data_to_ksz_dev(ptp_data);
+ int ret;
+
+ switch (req->type) {
+ case PTP_CLK_REQ_PEROUT:
+ mutex_lock(&ptp_data->lock);
+ ret = ksz_ptp_enable_perout(dev, &req->perout, on);
+ mutex_unlock(&ptp_data->lock);
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ return ret;
+}
+
+static int ksz_ptp_verify_pin(struct ptp_clock_info *ptp, unsigned int pin,
+ enum ptp_pin_function func, unsigned int chan)
+{
+ int ret = 0;
+
+ switch (func) {
+ case PTP_PF_NONE:
+ case PTP_PF_PEROUT:
+ break;
+ default:
+ ret = -1;
+ break;
+ }
+
+ return ret;
+}
+
+/* This function is assigned to the do_aux_work member of the ptp_clock
+ * capabilities; it periodically caches the current PHC time in clock_time.
+ */
+static long ksz_ptp_do_aux_work(struct ptp_clock_info *ptp)
+{
+ struct ksz_ptp_data *ptp_data = ptp_caps_to_data(ptp);
+ struct ksz_device *dev = ptp_data_to_ksz_dev(ptp_data);
+ struct timespec64 ts;
+ int ret;
+
+ mutex_lock(&ptp_data->lock);
+ ret = _ksz_ptp_gettime(dev, &ts);
+ if (ret)
+ goto out;
+
+ spin_lock_bh(&ptp_data->clock_lock);
+ ptp_data->clock_time = ts;
+ spin_unlock_bh(&ptp_data->clock_lock);
+
+out:
+ mutex_unlock(&ptp_data->lock);
+
+ return HZ; /* reschedule in 1 second */
+}
+
+static int ksz_ptp_start_clock(struct ksz_device *dev)
+{
+ struct ksz_ptp_data *ptp_data = &dev->ptp_data;
+ int ret;
+
+ ret = ksz_rmw16(dev, REG_PTP_CLK_CTRL, PTP_CLK_ENABLE, PTP_CLK_ENABLE);
+ if (ret)
+ return ret;
+
+ ptp_data->clock_time.tv_sec = 0;
+ ptp_data->clock_time.tv_nsec = 0;
+
+ return 0;
+}
+
+int ksz_ptp_clock_register(struct dsa_switch *ds)
+{
+ struct ksz_device *dev = ds->priv;
+ struct ksz_ptp_data *ptp_data;
+ int ret;
+ u8 i;
+
+ ptp_data = &dev->ptp_data;
+ mutex_init(&ptp_data->lock);
+ spin_lock_init(&ptp_data->clock_lock);
+
+ ptp_data->caps.owner = THIS_MODULE;
+ snprintf(ptp_data->caps.name, 16, "Microchip Clock");
+ ptp_data->caps.max_adj = KSZ_MAX_DRIFT_CORR;
+ ptp_data->caps.gettime64 = ksz_ptp_gettime;
+ ptp_data->caps.settime64 = ksz_ptp_settime;
+ ptp_data->caps.adjfine = ksz_ptp_adjfine;
+ ptp_data->caps.adjtime = ksz_ptp_adjtime;
+ ptp_data->caps.do_aux_work = ksz_ptp_do_aux_work;
+ ptp_data->caps.enable = ksz_ptp_enable;
+ ptp_data->caps.verify = ksz_ptp_verify_pin;
+ ptp_data->caps.n_pins = KSZ_PTP_N_GPIO;
+ ptp_data->caps.n_per_out = 3;
+
+ ret = ksz_ptp_start_clock(dev);
+ if (ret)
+ return ret;
+
+ for (i = 0; i < KSZ_PTP_N_GPIO; i++) {
+ struct ptp_pin_desc *ptp_pin = &ptp_data->pin_config[i];
+
+ snprintf(ptp_pin->name,
+ sizeof(ptp_pin->name), "ksz_ptp_pin_%02d", i);
+ ptp_pin->index = i;
+ ptp_pin->func = PTP_PF_NONE;
+ }
+
+ ptp_data->caps.pin_config = ptp_data->pin_config;
+
+ /* Currently only P2P mode is supported. When the 802_1AS bit is set,
+ * the switch forwards all PTP packets to the host port and none to the
+ * other ports.
+ */
+ ret = ksz_rmw16(dev, REG_PTP_MSG_CONF1, PTP_TC_P2P | PTP_802_1AS,
+ PTP_TC_P2P | PTP_802_1AS);
+ if (ret)
+ return ret;
+
+ ptp_data->clock = ptp_clock_register(&ptp_data->caps, dev->dev);
+ if (IS_ERR_OR_NULL(ptp_data->clock))
+ return PTR_ERR(ptp_data->clock);
+
+ return 0;
+}
+
+void ksz_ptp_clock_unregister(struct dsa_switch *ds)
+{
+ struct ksz_device *dev = ds->priv;
+ struct ksz_ptp_data *ptp_data;
+
+ ptp_data = &dev->ptp_data;
+
+ if (ptp_data->clock)
+ ptp_clock_unregister(ptp_data->clock);
+}
+
+static irqreturn_t ksz_ptp_msg_thread_fn(int irq, void *dev_id)
+{
+ struct ksz_ptp_irq *ptpmsg_irq = dev_id;
+ struct ksz_device *dev;
+ struct ksz_port *port;
+ u32 tstamp_raw;
+ ktime_t tstamp;
+ int ret;
+
+ port = ptpmsg_irq->port;
+ dev = port->ksz_dev;
+
+ if (ptpmsg_irq->ts_en) {
+ ret = ksz_read32(dev, ptpmsg_irq->ts_reg, &tstamp_raw);
+ if (ret)
+ return IRQ_NONE;
+
+ tstamp = ksz_decode_tstamp(tstamp_raw);
+
+ port->tstamp_msg = ksz_tstamp_reconstruct(dev, tstamp);
+
+ complete(&port->tstamp_msg_comp);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t ksz_ptp_irq_thread_fn(int irq, void *dev_id)
+{
+ struct ksz_irq *ptpirq = dev_id;
+ unsigned int nhandled = 0;
+ struct ksz_device *dev;
+ unsigned int sub_irq;
+ u16 data;
+ int ret;
+ u8 n;
+
+ dev = ptpirq->dev;
+
+ ret = ksz_read16(dev, ptpirq->reg_status, &data);
+ if (ret)
+ goto out;
+
+ /* Clear the interrupts (write-1-to-clear) */
+ ret = ksz_write16(dev, ptpirq->reg_status, data);
+ if (ret)
+ return IRQ_NONE;
+
+ for (n = 0; n < ptpirq->nirqs; ++n) {
+ if (data & BIT(n + KSZ_PTP_INT_START)) {
+ sub_irq = irq_find_mapping(ptpirq->domain, n);
+ handle_nested_irq(sub_irq);
+ ++nhandled;
+ }
+ }
+
+out:
+ return (nhandled > 0 ? IRQ_HANDLED : IRQ_NONE);
+}
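+
+/* Worked example for the demultiplexing above (hypothetical status
+ * value): the per-message bits start at KSZ_PTP_INT_START, which,
+ * per the BIT(13..15) layout in ksz_ptp_reg.h, is presumably 13.
+ * A status word of 0x6000 (bits 13 and 14 set) would thus fire the
+ * nested handlers for hwirq 0 (pdresp-msg) and hwirq 1 (xdreq-msg).
+ */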
+
+static void ksz_ptp_irq_mask(struct irq_data *d)
+{
+ struct ksz_irq *kirq = irq_data_get_irq_chip_data(d);
+
+ kirq->masked &= ~BIT(d->hwirq + KSZ_PTP_INT_START);
+}
+
+static void ksz_ptp_irq_unmask(struct irq_data *d)
+{
+ struct ksz_irq *kirq = irq_data_get_irq_chip_data(d);
+
+ kirq->masked |= BIT(d->hwirq + KSZ_PTP_INT_START);
+}
+
+static void ksz_ptp_irq_bus_lock(struct irq_data *d)
+{
+ struct ksz_irq *kirq = irq_data_get_irq_chip_data(d);
+
+ mutex_lock(&kirq->dev->lock_irq);
+}
+
+static void ksz_ptp_irq_bus_sync_unlock(struct irq_data *d)
+{
+ struct ksz_irq *kirq = irq_data_get_irq_chip_data(d);
+ struct ksz_device *dev = kirq->dev;
+ int ret;
+
+ ret = ksz_write16(dev, kirq->reg_mask, kirq->masked);
+ if (ret)
+ dev_err(dev->dev, "failed to change IRQ mask\n");
+
+ mutex_unlock(&dev->lock_irq);
+}
+
+static const struct irq_chip ksz_ptp_irq_chip = {
+ .name = "ksz-irq",
+ .irq_mask = ksz_ptp_irq_mask,
+ .irq_unmask = ksz_ptp_irq_unmask,
+ .irq_bus_lock = ksz_ptp_irq_bus_lock,
+ .irq_bus_sync_unlock = ksz_ptp_irq_bus_sync_unlock,
+};
+
+static int ksz_ptp_irq_domain_map(struct irq_domain *d,
+ unsigned int irq, irq_hw_number_t hwirq)
+{
+ irq_set_chip_data(irq, d->host_data);
+ irq_set_chip_and_handler(irq, &ksz_ptp_irq_chip, handle_level_irq);
+ irq_set_noprobe(irq);
+
+ return 0;
+}
+
+static const struct irq_domain_ops ksz_ptp_irq_domain_ops = {
+ .map = ksz_ptp_irq_domain_map,
+ .xlate = irq_domain_xlate_twocell,
+};
+
+static void ksz_ptp_msg_irq_free(struct ksz_port *port, u8 n)
+{
+ struct ksz_ptp_irq *ptpmsg_irq;
+
+ ptpmsg_irq = &port->ptpmsg_irq[n];
+
+ free_irq(ptpmsg_irq->num, ptpmsg_irq);
+ irq_dispose_mapping(ptpmsg_irq->num);
+}
+
+static int ksz_ptp_msg_irq_setup(struct ksz_port *port, u8 n)
+{
+ u16 ts_reg[] = {REG_PTP_PORT_PDRESP_TS, REG_PTP_PORT_XDELAY_TS,
+ REG_PTP_PORT_SYNC_TS};
+ static const char * const name[] = {"pdresp-msg", "xdreq-msg",
+ "sync-msg"};
+ const struct ksz_dev_ops *ops = port->ksz_dev->dev_ops;
+ struct ksz_ptp_irq *ptpmsg_irq;
+
+ ptpmsg_irq = &port->ptpmsg_irq[n];
+
+ ptpmsg_irq->port = port;
+ ptpmsg_irq->ts_reg = ops->get_port_addr(port->num, ts_reg[n]);
+
+ snprintf(ptpmsg_irq->name, sizeof(ptpmsg_irq->name), name[n]);
+
+ ptpmsg_irq->num = irq_find_mapping(port->ptpirq.domain, n);
+ if (ptpmsg_irq->num < 0)
+ return ptpmsg_irq->num;
+
+ return request_threaded_irq(ptpmsg_irq->num, NULL,
+ ksz_ptp_msg_thread_fn, IRQF_ONESHOT,
+ ptpmsg_irq->name, ptpmsg_irq);
+}
+
+int ksz_ptp_irq_setup(struct dsa_switch *ds, u8 p)
+{
+ struct ksz_device *dev = ds->priv;
+ const struct ksz_dev_ops *ops = dev->dev_ops;
+ struct ksz_port *port = &dev->ports[p];
+ struct ksz_irq *ptpirq = &port->ptpirq;
+ int irq;
+ int ret;
+
+ ptpirq->dev = dev;
+ ptpirq->masked = 0;
+ ptpirq->nirqs = 3;
+ ptpirq->reg_mask = ops->get_port_addr(p, REG_PTP_PORT_TX_INT_ENABLE__2);
+ ptpirq->reg_status = ops->get_port_addr(p,
+ REG_PTP_PORT_TX_INT_STATUS__2);
+ snprintf(ptpirq->name, sizeof(ptpirq->name), "ptp-irq-%d", p);
+
+ init_completion(&port->tstamp_msg_comp);
+
+ ptpirq->domain = irq_domain_add_linear(dev->dev->of_node, ptpirq->nirqs,
+ &ksz_ptp_irq_domain_ops, ptpirq);
+ if (!ptpirq->domain)
+ return -ENOMEM;
+
+ for (irq = 0; irq < ptpirq->nirqs; irq++)
+ irq_create_mapping(ptpirq->domain, irq);
+
+ ptpirq->irq_num = irq_find_mapping(port->pirq.domain, PORT_SRC_PTP_INT);
+ if (ptpirq->irq_num < 0) {
+ ret = ptpirq->irq_num;
+ goto out;
+ }
+
+ ret = request_threaded_irq(ptpirq->irq_num, NULL, ksz_ptp_irq_thread_fn,
+ IRQF_ONESHOT, ptpirq->name, ptpirq);
+ if (ret)
+ goto out;
+
+ for (irq = 0; irq < ptpirq->nirqs; irq++) {
+ ret = ksz_ptp_msg_irq_setup(port, irq);
+ if (ret)
+ goto out_ptp_msg;
+ }
+
+ return 0;
+
+out_ptp_msg:
+ free_irq(ptpirq->irq_num, ptpirq);
+ while (irq--)
+ free_irq(port->ptpmsg_irq[irq].num, &port->ptpmsg_irq[irq]);
+out:
+ for (irq = 0; irq < ptpirq->nirqs; irq++)
+ irq_dispose_mapping(port->ptpmsg_irq[irq].num);
+
+ irq_domain_remove(ptpirq->domain);
+
+ return ret;
+}
+
+void ksz_ptp_irq_free(struct dsa_switch *ds, u8 p)
+{
+ struct ksz_device *dev = ds->priv;
+ struct ksz_port *port = &dev->ports[p];
+ struct ksz_irq *ptpirq = &port->ptpirq;
+ u8 n;
+
+ for (n = 0; n < ptpirq->nirqs; n++)
+ ksz_ptp_msg_irq_free(port, n);
+
+ free_irq(ptpirq->irq_num, ptpirq);
+ irq_dispose_mapping(ptpirq->irq_num);
+
+ irq_domain_remove(ptpirq->domain);
+}
+
+MODULE_AUTHOR("Christian Eggers <ceggers@arri.de>");
+MODULE_AUTHOR("Arun Ramadoss <arun.ramadoss@microchip.com>");
+MODULE_DESCRIPTION("PTP support for KSZ switch");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/dsa/microchip/ksz_ptp.h b/drivers/net/dsa/microchip/ksz_ptp.h
new file mode 100644
index 000000000000..0ca8ca4f804e
--- /dev/null
+++ b/drivers/net/dsa/microchip/ksz_ptp.h
@@ -0,0 +1,86 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Microchip KSZ PTP Implementation
+ *
+ * Copyright (C) 2020 ARRI Lighting
+ * Copyright (C) 2022 Microchip Technology Inc.
+ */
+
+#ifndef _NET_DSA_DRIVERS_KSZ_PTP_H
+#define _NET_DSA_DRIVERS_KSZ_PTP_H
+
+#if IS_ENABLED(CONFIG_NET_DSA_MICROCHIP_KSZ_PTP)
+
+#include <linux/ptp_clock_kernel.h>
+
+#define KSZ_PTP_N_GPIO 2
+
+enum ksz_ptp_tou_mode {
+ KSZ_PTP_TOU_IDLE,
+ KSZ_PTP_TOU_PEROUT,
+};
+
+struct ksz_ptp_data {
+ struct ptp_clock_info caps;
+ struct ptp_clock *clock;
+ struct ptp_pin_desc pin_config[KSZ_PTP_N_GPIO];
+ /* Serializes all operations on the PTP hardware clock */
+ struct mutex lock;
+ /* lock for accessing the clock_time */
+ spinlock_t clock_lock;
+ struct timespec64 clock_time;
+ enum ksz_ptp_tou_mode tou_mode;
+ struct timespec64 perout_target_time_first; /* start of first pulse */
+ struct timespec64 perout_period;
+};
+
+int ksz_ptp_clock_register(struct dsa_switch *ds);
+
+void ksz_ptp_clock_unregister(struct dsa_switch *ds);
+
+int ksz_get_ts_info(struct dsa_switch *ds, int port,
+ struct ethtool_ts_info *ts);
+int ksz_hwtstamp_get(struct dsa_switch *ds, int port, struct ifreq *ifr);
+int ksz_hwtstamp_set(struct dsa_switch *ds, int port, struct ifreq *ifr);
+void ksz_port_txtstamp(struct dsa_switch *ds, int port, struct sk_buff *skb);
+void ksz_port_deferred_xmit(struct kthread_work *work);
+bool ksz_port_rxtstamp(struct dsa_switch *ds, int port, struct sk_buff *skb,
+ unsigned int type);
+int ksz_ptp_irq_setup(struct dsa_switch *ds, u8 p);
+void ksz_ptp_irq_free(struct dsa_switch *ds, u8 p);
+
+#else
+
+struct ksz_ptp_data {
+ /* Serializes all operations on the PTP hardware clock */
+ struct mutex lock;
+};
+
+static inline int ksz_ptp_clock_register(struct dsa_switch *ds)
+{
+ return 0;
+}
+
+static inline void ksz_ptp_clock_unregister(struct dsa_switch *ds) { }
+
+static inline int ksz_ptp_irq_setup(struct dsa_switch *ds, u8 p)
+{
+ return 0;
+}
+
+static inline void ksz_ptp_irq_free(struct dsa_switch *ds, u8 p) {}
+
+#define ksz_get_ts_info NULL
+
+#define ksz_hwtstamp_get NULL
+
+#define ksz_hwtstamp_set NULL
+
+#define ksz_port_rxtstamp NULL
+
+#define ksz_port_txtstamp NULL
+
+#define ksz_port_deferred_xmit NULL
+
+#endif /* End of CONFIG_NET_DSA_MICROCHIP_KSZ_PTP */
+
+#endif
diff --git a/drivers/net/dsa/microchip/ksz_ptp_reg.h b/drivers/net/dsa/microchip/ksz_ptp_reg.h
new file mode 100644
index 000000000000..d71e85510cda
--- /dev/null
+++ b/drivers/net/dsa/microchip/ksz_ptp_reg.h
@@ -0,0 +1,142 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Microchip KSZ PTP register definitions
+ * Copyright (C) 2022 Microchip Technology Inc.
+ */
+
+#ifndef __KSZ_PTP_REGS_H
+#define __KSZ_PTP_REGS_H
+
+#define REG_SW_GLOBAL_LED_OVR__4 0x0120
+#define LED_OVR_2 BIT(1)
+#define LED_OVR_1 BIT(0)
+
+#define REG_SW_GLOBAL_LED_SRC__4 0x0128
+#define LED_SRC_PTP_GPIO_1 BIT(3)
+#define LED_SRC_PTP_GPIO_2 BIT(2)
+
+/* 5 - PTP Clock */
+#define REG_PTP_CLK_CTRL 0x0500
+
+#define PTP_STEP_ADJ BIT(6)
+#define PTP_STEP_DIR BIT(5)
+#define PTP_READ_TIME BIT(4)
+#define PTP_LOAD_TIME BIT(3)
+#define PTP_CLK_ADJ_ENABLE BIT(2)
+#define PTP_CLK_ENABLE BIT(1)
+#define PTP_CLK_RESET BIT(0)
+
+#define REG_PTP_RTC_SUB_NANOSEC__2 0x0502
+
+#define PTP_RTC_SUB_NANOSEC_M 0x0007
+#define PTP_RTC_0NS 0x00
+
+#define REG_PTP_RTC_NANOSEC 0x0504
+
+#define REG_PTP_RTC_SEC 0x0508
+
+#define REG_PTP_SUBNANOSEC_RATE 0x050C
+
+#define PTP_SUBNANOSEC_M 0x3FFFFFFF
+#define PTP_RATE_DIR BIT(31)
+#define PTP_TMP_RATE_ENABLE BIT(30)
+
+#define REG_PTP_SUBNANOSEC_RATE_L 0x050E
+
+#define REG_PTP_RATE_DURATION 0x0510
+#define REG_PTP_RATE_DURATION_H 0x0510
+#define REG_PTP_RATE_DURATION_L 0x0512
+
+#define REG_PTP_MSG_CONF1 0x0514
+
+#define PTP_802_1AS BIT(7)
+#define PTP_ENABLE BIT(6)
+#define PTP_ETH_ENABLE BIT(5)
+#define PTP_IPV4_UDP_ENABLE BIT(4)
+#define PTP_IPV6_UDP_ENABLE BIT(3)
+#define PTP_TC_P2P BIT(2)
+#define PTP_MASTER BIT(1)
+#define PTP_1STEP BIT(0)
+
+#define REG_PTP_UNIT_INDEX__4 0x0520
+
+#define PTP_GPIO_INDEX GENMASK(19, 16)
+#define PTP_TSI_INDEX BIT(8)
+#define PTP_TOU_INDEX GENMASK(1, 0)
+
+#define REG_PTP_TRIG_STATUS__4 0x0524
+
+#define TRIG_ERROR_M GENMASK(18, 16)
+#define TRIG_DONE_M GENMASK(2, 0)
+
+#define REG_PTP_INT_STATUS__4 0x0528
+
+#define TRIG_INT_M GENMASK(18, 16)
+#define TS_INT_M GENMASK(1, 0)
+
+#define REG_PTP_CTRL_STAT__4 0x052C
+
+#define GPIO_IN BIT(7)
+#define GPIO_OUT BIT(6)
+#define TS_INT_ENABLE BIT(5)
+#define TRIG_ACTIVE BIT(4)
+#define TRIG_ENABLE BIT(3)
+#define TRIG_RESET BIT(2)
+#define TS_ENABLE BIT(1)
+#define TS_RESET BIT(0)
+
+#define REG_TRIG_TARGET_NANOSEC 0x0530
+#define REG_TRIG_TARGET_SEC 0x0534
+
+#define REG_TRIG_CTRL__4 0x0538
+
+#define TRIG_CASCADE_ENABLE BIT(31)
+#define TRIG_CASCADE_TAIL BIT(30)
+#define TRIG_CASCADE_UPS_M GENMASK(29, 26)
+#define TRIG_NOW BIT(25)
+#define TRIG_NOTIFY BIT(24)
+#define TRIG_EDGE BIT(23)
+#define TRIG_PATTERN_M GENMASK(22, 20)
+#define TRIG_NEG_EDGE 0
+#define TRIG_POS_EDGE 1
+#define TRIG_NEG_PULSE 2
+#define TRIG_POS_PULSE 3
+#define TRIG_NEG_PERIOD 4
+#define TRIG_POS_PERIOD 5
+#define TRIG_REG_OUTPUT 6
+#define TRIG_GPO_M GENMASK(19, 16)
+#define TRIG_CASCADE_ITERATE_CNT_M GENMASK(15, 0)
+
+#define REG_TRIG_CYCLE_WIDTH 0x053C
+#define TRIG_CYCLE_WIDTH_M GENMASK(31, 0)
+
+#define REG_TRIG_CYCLE_CNT 0x0540
+
+#define TRIG_CYCLE_CNT_M GENMASK(31, 16)
+#define TRIG_BIT_PATTERN_M GENMASK(15, 0)
+
+#define REG_TRIG_ITERATE_TIME 0x0544
+
+#define REG_TRIG_PULSE_WIDTH__4 0x0548
+
+#define TRIG_PULSE_WIDTH_M GENMASK(23, 0)
+
+/* Port PTP Register */
+#define REG_PTP_PORT_RX_DELAY__2 0x0C00
+#define REG_PTP_PORT_TX_DELAY__2 0x0C02
+#define REG_PTP_PORT_ASYM_DELAY__2 0x0C04
+
+#define REG_PTP_PORT_XDELAY_TS 0x0C08
+#define REG_PTP_PORT_SYNC_TS 0x0C0C
+#define REG_PTP_PORT_PDRESP_TS 0x0C10
+
+#define REG_PTP_PORT_TX_INT_STATUS__2 0x0C14
+#define REG_PTP_PORT_TX_INT_ENABLE__2 0x0C16
+
+#define PTP_PORT_SYNC_INT BIT(15)
+#define PTP_PORT_XDELAY_REQ_INT BIT(14)
+#define PTP_PORT_PDELAY_RESP_INT BIT(13)
+#define KSZ_SYNC_MSG 2
+#define KSZ_XDREQ_MSG 1
+#define KSZ_PDRES_MSG 0
+
+#endif
diff --git a/drivers/net/dsa/microchip/lan937x.h b/drivers/net/dsa/microchip/lan937x.h
index 8e9e66d6728d..3388d91dbc44 100644
--- a/drivers/net/dsa/microchip/lan937x.h
+++ b/drivers/net/dsa/microchip/lan937x.h
@@ -20,4 +20,5 @@ void lan937x_phylink_get_caps(struct ksz_device *dev, int port,
struct phylink_config *config);
void lan937x_setup_rgmii_delay(struct ksz_device *dev, int port);
int lan937x_set_ageing_time(struct ksz_device *dev, unsigned int msecs);
+int lan937x_tc_cbs_set_cinc(struct ksz_device *dev, int port, u32 val);
#endif
diff --git a/drivers/net/dsa/microchip/lan937x_main.c b/drivers/net/dsa/microchip/lan937x_main.c
index 06d3d0308cba..399a3905e6ca 100644
--- a/drivers/net/dsa/microchip/lan937x_main.c
+++ b/drivers/net/dsa/microchip/lan937x_main.c
@@ -15,6 +15,7 @@
#include "lan937x_reg.h"
#include "ksz_common.h"
+#include "ksz9477.h"
#include "lan937x.h"
static int lan937x_cfg(struct ksz_device *dev, u32 addr, u8 bits, bool set)
@@ -180,6 +181,9 @@ void lan937x_port_setup(struct ksz_device *dev, int port, bool cpu_port)
lan937x_port_cfg(dev, port, REG_PORT_CTRL_0,
PORT_TAIL_TAG_ENABLE, true);
+ /* Enable the Port Queue split */
+ ksz9477_port_queue_split(dev, port);
+
/* set back pressure for half duplex */
lan937x_port_cfg(dev, port, REG_PORT_MAC_CTRL_1, PORT_BACK_PRESSURE,
true);
@@ -336,6 +340,11 @@ void lan937x_setup_rgmii_delay(struct ksz_device *dev, int port)
}
}
+int lan937x_tc_cbs_set_cinc(struct ksz_device *dev, int port, u32 val)
+{
+ return ksz_pwrite32(dev, port, REG_PORT_MTI_CREDIT_INCREMENT, val);
+}
+
int lan937x_switch_init(struct ksz_device *dev)
{
dev->port_mask = (1 << dev->info->port_cnt) - 1;
diff --git a/drivers/net/dsa/microchip/lan937x_reg.h b/drivers/net/dsa/microchip/lan937x_reg.h
index 5bc16a4c4441..45b606b6429f 100644
--- a/drivers/net/dsa/microchip/lan937x_reg.h
+++ b/drivers/net/dsa/microchip/lan937x_reg.h
@@ -185,6 +185,9 @@
#define P_PRIO_CTRL REG_PORT_MRI_PRIO_CTRL
+/* 9 - Shaping */
+#define REG_PORT_MTI_CREDIT_INCREMENT 0x091C
+
/* The port number as per the datasheet */
#define RGMII_2_PORT_NUM 5
#define RGMII_1_PORT_NUM 6
diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
index 338f238f2043..3a15015bc409 100644
--- a/drivers/net/dsa/mt7530.c
+++ b/drivers/net/dsa/mt7530.c
@@ -608,17 +608,29 @@ mt7530_mib_reset(struct dsa_switch *ds)
mt7530_write(priv, MT7530_MIB_CCR, CCR_MIB_ACTIVATE);
}
-static int mt7530_phy_read(struct mt7530_priv *priv, int port, int regnum)
+static int mt7530_phy_read_c22(struct mt7530_priv *priv, int port, int regnum)
{
return mdiobus_read_nested(priv->bus, port, regnum);
}
-static int mt7530_phy_write(struct mt7530_priv *priv, int port, int regnum,
- u16 val)
+static int mt7530_phy_write_c22(struct mt7530_priv *priv, int port, int regnum,
+ u16 val)
{
return mdiobus_write_nested(priv->bus, port, regnum, val);
}
+static int mt7530_phy_read_c45(struct mt7530_priv *priv, int port,
+ int devad, int regnum)
+{
+ return mdiobus_c45_read_nested(priv->bus, port, devad, regnum);
+}
+
+static int mt7530_phy_write_c45(struct mt7530_priv *priv, int port, int devad,
+ int regnum, u16 val)
+{
+ return mdiobus_c45_write_nested(priv->bus, port, devad, regnum, val);
+}
+
static int
mt7531_ind_c45_phy_read(struct mt7530_priv *priv, int port, int devad,
int regnum)
@@ -670,7 +682,7 @@ out:
static int
mt7531_ind_c45_phy_write(struct mt7530_priv *priv, int port, int devad,
- int regnum, u32 data)
+ int regnum, u16 data)
{
struct mii_bus *bus = priv->bus;
struct mt7530_dummy_poll p;
@@ -793,55 +805,36 @@ out:
}
static int
-mt7531_ind_phy_read(struct mt7530_priv *priv, int port, int regnum)
+mt753x_phy_read_c22(struct mii_bus *bus, int port, int regnum)
{
- int devad;
- int ret;
-
- if (regnum & MII_ADDR_C45) {
- devad = (regnum >> MII_DEVADDR_C45_SHIFT) & 0x1f;
- ret = mt7531_ind_c45_phy_read(priv, port, devad,
- regnum & MII_REGADDR_C45_MASK);
- } else {
- ret = mt7531_ind_c22_phy_read(priv, port, regnum);
- }
+ struct mt7530_priv *priv = bus->priv;
- return ret;
+ return priv->info->phy_read_c22(priv, port, regnum);
}
static int
-mt7531_ind_phy_write(struct mt7530_priv *priv, int port, int regnum,
- u16 data)
+mt753x_phy_read_c45(struct mii_bus *bus, int port, int devad, int regnum)
{
- int devad;
- int ret;
-
- if (regnum & MII_ADDR_C45) {
- devad = (regnum >> MII_DEVADDR_C45_SHIFT) & 0x1f;
- ret = mt7531_ind_c45_phy_write(priv, port, devad,
- regnum & MII_REGADDR_C45_MASK,
- data);
- } else {
- ret = mt7531_ind_c22_phy_write(priv, port, regnum, data);
- }
+ struct mt7530_priv *priv = bus->priv;
- return ret;
+ return priv->info->phy_read_c45(priv, port, devad, regnum);
}
static int
-mt753x_phy_read(struct mii_bus *bus, int port, int regnum)
+mt753x_phy_write_c22(struct mii_bus *bus, int port, int regnum, u16 val)
{
struct mt7530_priv *priv = bus->priv;
- return priv->info->phy_read(priv, port, regnum);
+ return priv->info->phy_write_c22(priv, port, regnum, val);
}
static int
-mt753x_phy_write(struct mii_bus *bus, int port, int regnum, u16 val)
+mt753x_phy_write_c45(struct mii_bus *bus, int port, int devad, int regnum,
+ u16 val)
{
struct mt7530_priv *priv = bus->priv;
- return priv->info->phy_write(priv, port, regnum, val);
+ return priv->info->phy_write_c45(priv, port, devad, regnum, val);
}
static void
@@ -2098,8 +2091,10 @@ mt7530_setup_mdio(struct mt7530_priv *priv)
bus->priv = priv;
bus->name = KBUILD_MODNAME "-mii";
snprintf(bus->id, MII_BUS_ID_SIZE, KBUILD_MODNAME "-%d", idx++);
- bus->read = mt753x_phy_read;
- bus->write = mt753x_phy_write;
+ bus->read = mt753x_phy_read_c22;
+ bus->write = mt753x_phy_write_c22;
+ bus->read_c45 = mt753x_phy_read_c45;
+ bus->write_c45 = mt753x_phy_write_c45;
bus->parent = dev;
bus->phy_mask = ~ds->phys_mii_mask;
@@ -3194,8 +3189,10 @@ static const struct mt753x_info mt753x_table[] = {
.id = ID_MT7621,
.pcs_ops = &mt7530_pcs_ops,
.sw_setup = mt7530_setup,
- .phy_read = mt7530_phy_read,
- .phy_write = mt7530_phy_write,
+ .phy_read_c22 = mt7530_phy_read_c22,
+ .phy_write_c22 = mt7530_phy_write_c22,
+ .phy_read_c45 = mt7530_phy_read_c45,
+ .phy_write_c45 = mt7530_phy_write_c45,
.pad_setup = mt7530_pad_clk_setup,
.mac_port_get_caps = mt7530_mac_port_get_caps,
.mac_port_config = mt7530_mac_config,
@@ -3204,8 +3201,10 @@ static const struct mt753x_info mt753x_table[] = {
.id = ID_MT7530,
.pcs_ops = &mt7530_pcs_ops,
.sw_setup = mt7530_setup,
- .phy_read = mt7530_phy_read,
- .phy_write = mt7530_phy_write,
+ .phy_read_c22 = mt7530_phy_read_c22,
+ .phy_write_c22 = mt7530_phy_write_c22,
+ .phy_read_c45 = mt7530_phy_read_c45,
+ .phy_write_c45 = mt7530_phy_write_c45,
.pad_setup = mt7530_pad_clk_setup,
.mac_port_get_caps = mt7530_mac_port_get_caps,
.mac_port_config = mt7530_mac_config,
@@ -3214,8 +3213,10 @@ static const struct mt753x_info mt753x_table[] = {
.id = ID_MT7531,
.pcs_ops = &mt7531_pcs_ops,
.sw_setup = mt7531_setup,
- .phy_read = mt7531_ind_phy_read,
- .phy_write = mt7531_ind_phy_write,
+ .phy_read_c22 = mt7531_ind_c22_phy_read,
+ .phy_write_c22 = mt7531_ind_c22_phy_write,
+ .phy_read_c45 = mt7531_ind_c45_phy_read,
+ .phy_write_c45 = mt7531_ind_c45_phy_write,
.pad_setup = mt7531_pad_setup,
.cpu_port_config = mt7531_cpu_port_config,
.mac_port_get_caps = mt7531_mac_port_get_caps,
@@ -3275,7 +3276,7 @@ mt7530_probe(struct mdio_device *mdiodev)
* properly.
*/
if (!priv->info->sw_setup || !priv->info->pad_setup ||
- !priv->info->phy_read || !priv->info->phy_write ||
+ !priv->info->phy_read_c22 || !priv->info->phy_write_c22 ||
!priv->info->mac_port_get_caps ||
!priv->info->mac_port_config)
return -EINVAL;
diff --git a/drivers/net/dsa/mt7530.h b/drivers/net/dsa/mt7530.h
index e8d966435350..6b2fc6290ea8 100644
--- a/drivers/net/dsa/mt7530.h
+++ b/drivers/net/dsa/mt7530.h
@@ -750,8 +750,10 @@ struct mt753x_pcs {
/* struct mt753x_info - This is the main data structure for holding the specific
* part for each supported device
* @sw_setup: Holding the handler to a device initialization
- * @phy_read: Holding the way reading PHY port
- * @phy_write: Holding the way writing PHY port
+ * @phy_read_c22: Holding the way reading PHY port using C22
+ * @phy_write_c22: Holding the way writing PHY port using C22
+ * @phy_read_c45: Holding the way reading PHY port using C45
+ * @phy_write_c45: Holding the way writing PHY port using C45
* @pad_setup: Holding the way setting up the bus pad for a certain
* MAC port
* @phy_mode_supported: Check if the PHY type is being supported on a certain
@@ -767,8 +769,13 @@ struct mt753x_info {
const struct phylink_pcs_ops *pcs_ops;
int (*sw_setup)(struct dsa_switch *ds);
- int (*phy_read)(struct mt7530_priv *priv, int port, int regnum);
- int (*phy_write)(struct mt7530_priv *priv, int port, int regnum, u16 val);
+ int (*phy_read_c22)(struct mt7530_priv *priv, int port, int regnum);
+ int (*phy_write_c22)(struct mt7530_priv *priv, int port, int regnum,
+ u16 val);
+ int (*phy_read_c45)(struct mt7530_priv *priv, int port, int devad,
+ int regnum);
+ int (*phy_write_c45)(struct mt7530_priv *priv, int port, int devad,
+ int regnum, u16 val);
int (*pad_setup)(struct dsa_switch *ds, phy_interface_t interface);
int (*cpu_port_config)(struct dsa_switch *ds, int port);
void (*mac_port_get_caps)(struct dsa_switch *ds, int port,
diff --git a/drivers/net/dsa/mv88e6xxx/Makefile b/drivers/net/dsa/mv88e6xxx/Makefile
index 49bf358b9c4f..1409e691ab77 100644
--- a/drivers/net/dsa/mv88e6xxx/Makefile
+++ b/drivers/net/dsa/mv88e6xxx/Makefile
@@ -15,6 +15,7 @@ mv88e6xxx-objs += port_hidden.o
mv88e6xxx-$(CONFIG_NET_DSA_MV88E6XXX_PTP) += ptp.o
mv88e6xxx-objs += serdes.o
mv88e6xxx-objs += smi.o
+mv88e6xxx-objs += switchdev.o
mv88e6xxx-objs += trace.o
# for tracing framework to find trace.h
diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
index 242b8b325504..0a5d6c7bb128 100644
--- a/drivers/net/dsa/mv88e6xxx/chip.c
+++ b/drivers/net/dsa/mv88e6xxx/chip.c
@@ -1728,11 +1728,11 @@ static int mv88e6xxx_vtu_get(struct mv88e6xxx_chip *chip, u16 vid,
return err;
}
-static int mv88e6xxx_vtu_walk(struct mv88e6xxx_chip *chip,
- int (*cb)(struct mv88e6xxx_chip *chip,
- const struct mv88e6xxx_vtu_entry *entry,
- void *priv),
- void *priv)
+int mv88e6xxx_vtu_walk(struct mv88e6xxx_chip *chip,
+ int (*cb)(struct mv88e6xxx_chip *chip,
+ const struct mv88e6xxx_vtu_entry *entry,
+ void *priv),
+ void *priv)
{
struct mv88e6xxx_vtu_entry entry = {
.vid = mv88e6xxx_max_vid(chip),
@@ -3884,6 +3884,24 @@ static int mv88e6xxx_mdio_read(struct mii_bus *bus, int phy, int reg)
return err ? err : val;
}
+static int mv88e6xxx_mdio_read_c45(struct mii_bus *bus, int phy, int devad,
+ int reg)
+{
+ struct mv88e6xxx_mdio_bus *mdio_bus = bus->priv;
+ struct mv88e6xxx_chip *chip = mdio_bus->chip;
+ u16 val;
+ int err;
+
+ if (!chip->info->ops->phy_read_c45)
+ return -EOPNOTSUPP;
+
+ mv88e6xxx_reg_lock(chip);
+ err = chip->info->ops->phy_read_c45(chip, bus, phy, devad, reg, &val);
+ mv88e6xxx_reg_unlock(chip);
+
+ return err ? err : val;
+}
+
static int mv88e6xxx_mdio_write(struct mii_bus *bus, int phy, int reg, u16 val)
{
struct mv88e6xxx_mdio_bus *mdio_bus = bus->priv;
@@ -3900,6 +3918,23 @@ static int mv88e6xxx_mdio_write(struct mii_bus *bus, int phy, int reg, u16 val)
return err;
}
+static int mv88e6xxx_mdio_write_c45(struct mii_bus *bus, int phy, int devad,
+ int reg, u16 val)
+{
+ struct mv88e6xxx_mdio_bus *mdio_bus = bus->priv;
+ struct mv88e6xxx_chip *chip = mdio_bus->chip;
+ int err;
+
+ if (!chip->info->ops->phy_write_c45)
+ return -EOPNOTSUPP;
+
+ mv88e6xxx_reg_lock(chip);
+ err = chip->info->ops->phy_write_c45(chip, bus, phy, devad, reg, val);
+ mv88e6xxx_reg_unlock(chip);
+
+ return err;
+}
+
static int mv88e6xxx_mdio_register(struct mv88e6xxx_chip *chip,
struct device_node *np,
bool external)
@@ -3938,6 +3973,8 @@ static int mv88e6xxx_mdio_register(struct mv88e6xxx_chip *chip,
bus->read = mv88e6xxx_mdio_read;
bus->write = mv88e6xxx_mdio_write;
+ bus->read_c45 = mv88e6xxx_mdio_read_c45;
+ bus->write_c45 = mv88e6xxx_mdio_write_c45;
bus->parent = chip->dev;
if (!external) {
@@ -4149,8 +4186,10 @@ static const struct mv88e6xxx_ops mv88e6097_ops = {
.ip_pri_map = mv88e6085_g1_ip_pri_map,
.irl_init_all = mv88e6352_g2_irl_init_all,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6185_port_sync_link,
.port_set_speed_duplex = mv88e6185_port_set_speed_duplex,
@@ -4198,8 +4237,10 @@ static const struct mv88e6xxx_ops mv88e6123_ops = {
.ip_pri_map = mv88e6085_g1_ip_pri_map,
.irl_init_all = mv88e6352_g2_irl_init_all,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_speed_duplex = mv88e6185_port_set_speed_duplex,
@@ -4279,8 +4320,10 @@ static const struct mv88e6xxx_ops mv88e6141_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom8,
.set_eeprom = mv88e6xxx_g2_set_eeprom8,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
@@ -4343,8 +4386,10 @@ static const struct mv88e6xxx_ops mv88e6161_ops = {
.ip_pri_map = mv88e6085_g1_ip_pri_map,
.irl_init_all = mv88e6352_g2_irl_init_all,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_speed_duplex = mv88e6185_port_set_speed_duplex,
@@ -4426,8 +4471,10 @@ static const struct mv88e6xxx_ops mv88e6171_ops = {
.ip_pri_map = mv88e6085_g1_ip_pri_map,
.irl_init_all = mv88e6352_g2_irl_init_all,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6352_port_set_rgmii_delay,
@@ -4472,8 +4519,10 @@ static const struct mv88e6xxx_ops mv88e6172_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom16,
.set_eeprom = mv88e6xxx_g2_set_eeprom16,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6352_port_set_rgmii_delay,
@@ -4527,8 +4576,10 @@ static const struct mv88e6xxx_ops mv88e6175_ops = {
.ip_pri_map = mv88e6085_g1_ip_pri_map,
.irl_init_all = mv88e6352_g2_irl_init_all,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6352_port_set_rgmii_delay,
@@ -4573,8 +4624,10 @@ static const struct mv88e6xxx_ops mv88e6176_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom16,
.set_eeprom = mv88e6xxx_g2_set_eeprom16,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6352_port_set_rgmii_delay,
@@ -4673,8 +4726,10 @@ static const struct mv88e6xxx_ops mv88e6190_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom8,
.set_eeprom = mv88e6xxx_g2_set_eeprom8,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
@@ -4736,8 +4791,10 @@ static const struct mv88e6xxx_ops mv88e6190x_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom8,
.set_eeprom = mv88e6xxx_g2_set_eeprom8,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
@@ -4799,8 +4856,10 @@ static const struct mv88e6xxx_ops mv88e6191_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom8,
.set_eeprom = mv88e6xxx_g2_set_eeprom8,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
@@ -4862,8 +4921,10 @@ static const struct mv88e6xxx_ops mv88e6240_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom16,
.set_eeprom = mv88e6xxx_g2_set_eeprom16,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6352_port_set_rgmii_delay,
@@ -4925,8 +4986,10 @@ static const struct mv88e6xxx_ops mv88e6250_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom16,
.set_eeprom = mv88e6xxx_g2_set_eeprom16,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6352_port_set_rgmii_delay,
@@ -4964,8 +5027,10 @@ static const struct mv88e6xxx_ops mv88e6290_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom8,
.set_eeprom = mv88e6xxx_g2_set_eeprom8,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
@@ -5017,7 +5082,7 @@ static const struct mv88e6xxx_ops mv88e6290_ops = {
.serdes_get_regs = mv88e6390_serdes_get_regs,
.gpio_ops = &mv88e6352_gpio_ops,
.avb_ops = &mv88e6390_avb_ops,
- .ptp_ops = &mv88e6352_ptp_ops,
+ .ptp_ops = &mv88e6390_ptp_ops,
.phylink_get_caps = mv88e6390_phylink_get_caps,
};
@@ -5029,8 +5094,10 @@ static const struct mv88e6xxx_ops mv88e6320_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom16,
.set_eeprom = mv88e6xxx_g2_set_eeprom16,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6320_port_set_rgmii_delay,
@@ -5074,8 +5141,10 @@ static const struct mv88e6xxx_ops mv88e6321_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom16,
.set_eeprom = mv88e6xxx_g2_set_eeprom16,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6320_port_set_rgmii_delay,
@@ -5117,8 +5186,10 @@ static const struct mv88e6xxx_ops mv88e6341_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom8,
.set_eeprom = mv88e6xxx_g2_set_eeprom8,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
@@ -5183,8 +5254,10 @@ static const struct mv88e6xxx_ops mv88e6350_ops = {
.ip_pri_map = mv88e6085_g1_ip_pri_map,
.irl_init_all = mv88e6352_g2_irl_init_all,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6352_port_set_rgmii_delay,
@@ -5227,8 +5300,10 @@ static const struct mv88e6xxx_ops mv88e6351_ops = {
.ip_pri_map = mv88e6085_g1_ip_pri_map,
.irl_init_all = mv88e6352_g2_irl_init_all,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6352_port_set_rgmii_delay,
@@ -5275,8 +5350,10 @@ static const struct mv88e6xxx_ops mv88e6352_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom16,
.set_eeprom = mv88e6xxx_g2_set_eeprom16,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6352_port_set_rgmii_delay,
@@ -5340,8 +5417,10 @@ static const struct mv88e6xxx_ops mv88e6390_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom8,
.set_eeprom = mv88e6xxx_g2_set_eeprom8,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
@@ -5391,7 +5470,7 @@ static const struct mv88e6xxx_ops mv88e6390_ops = {
.serdes_irq_status = mv88e6390_serdes_irq_status,
.gpio_ops = &mv88e6352_gpio_ops,
.avb_ops = &mv88e6390_avb_ops,
- .ptp_ops = &mv88e6352_ptp_ops,
+ .ptp_ops = &mv88e6390_ptp_ops,
.serdes_get_sset_count = mv88e6390_serdes_get_sset_count,
.serdes_get_strings = mv88e6390_serdes_get_strings,
.serdes_get_stats = mv88e6390_serdes_get_stats,
@@ -5407,8 +5486,10 @@ static const struct mv88e6xxx_ops mv88e6390x_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom8,
.set_eeprom = mv88e6xxx_g2_set_eeprom8,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
@@ -5462,7 +5543,7 @@ static const struct mv88e6xxx_ops mv88e6390x_ops = {
.serdes_get_regs = mv88e6390_serdes_get_regs,
.gpio_ops = &mv88e6352_gpio_ops,
.avb_ops = &mv88e6390_avb_ops,
- .ptp_ops = &mv88e6352_ptp_ops,
+ .ptp_ops = &mv88e6390_ptp_ops,
.phylink_get_caps = mv88e6390x_phylink_get_caps,
};
@@ -5473,8 +5554,10 @@ static const struct mv88e6xxx_ops mv88e6393x_ops = {
.get_eeprom = mv88e6xxx_g2_get_eeprom8,
.set_eeprom = mv88e6xxx_g2_set_eeprom8,
.set_switch_mac = mv88e6xxx_g2_set_switch_mac,
- .phy_read = mv88e6xxx_g2_smi_phy_read,
- .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .phy_read = mv88e6xxx_g2_smi_phy_read_c22,
+ .phy_write = mv88e6xxx_g2_smi_phy_write_c22,
+ .phy_read_c45 = mv88e6xxx_g2_smi_phy_read_c45,
+ .phy_write_c45 = mv88e6xxx_g2_smi_phy_write_c45,
.port_set_link = mv88e6xxx_port_set_link,
.port_sync_link = mv88e6xxx_port_sync_link,
.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
@@ -6526,7 +6609,7 @@ static int mv88e6xxx_port_pre_bridge_flags(struct dsa_switch *ds, int port,
const struct mv88e6xxx_ops *ops;
if (flags.mask & ~(BR_LEARNING | BR_FLOOD | BR_MCAST_FLOOD |
- BR_BCAST_FLOOD | BR_PORT_LOCKED))
+ BR_BCAST_FLOOD | BR_PORT_LOCKED | BR_PORT_MAB))
return -EINVAL;
ops = chip->info->ops;
@@ -6545,7 +6628,7 @@ static int mv88e6xxx_port_bridge_flags(struct dsa_switch *ds, int port,
struct netlink_ext_ack *extack)
{
struct mv88e6xxx_chip *chip = ds->priv;
- int err = -EOPNOTSUPP;
+ int err = 0;
mv88e6xxx_reg_lock(chip);
@@ -6584,6 +6667,12 @@ static int mv88e6xxx_port_bridge_flags(struct dsa_switch *ds, int port,
goto out;
}
+ if (flags.mask & BR_PORT_MAB) {
+ bool mab = !!(flags.val & BR_PORT_MAB);
+
+ mv88e6xxx_port_set_mab(chip, port, mab);
+ }
+
if (flags.mask & BR_PORT_LOCKED) {
bool locked = !!(flags.val & BR_PORT_LOCKED);
diff --git a/drivers/net/dsa/mv88e6xxx/chip.h b/drivers/net/dsa/mv88e6xxx/chip.h
index e693154cf803..da6e1339f809 100644
--- a/drivers/net/dsa/mv88e6xxx/chip.h
+++ b/drivers/net/dsa/mv88e6xxx/chip.h
@@ -280,6 +280,9 @@ struct mv88e6xxx_port {
unsigned int serdes_irq;
char serdes_irq_name[64];
struct devlink_region *region;
+
+ /* MacAuth Bypass control flag */
+ bool mab;
};
enum mv88e6xxx_region_id {
@@ -451,6 +454,13 @@ struct mv88e6xxx_ops {
struct mii_bus *bus,
int addr, int reg, u16 val);
+ int (*phy_read_c45)(struct mv88e6xxx_chip *chip,
+ struct mii_bus *bus,
+ int addr, int devad, int reg, u16 *val);
+ int (*phy_write_c45)(struct mv88e6xxx_chip *chip,
+ struct mii_bus *bus,
+ int addr, int devad, int reg, u16 val);
+
/* Priority Override Table operations */
int (*pot_clear)(struct mv88e6xxx_chip *chip);
@@ -705,6 +715,7 @@ struct mv88e6xxx_ptp_ops {
int (*port_disable)(struct mv88e6xxx_chip *chip, int port);
int (*global_enable)(struct mv88e6xxx_chip *chip);
int (*global_disable)(struct mv88e6xxx_chip *chip);
+ int (*set_ptp_cpu_port)(struct mv88e6xxx_chip *chip, int port);
int n_ext_ts;
int arr0_sts_reg;
int arr1_sts_reg;
@@ -784,6 +795,12 @@ static inline bool mv88e6xxx_is_invalid_port(struct mv88e6xxx_chip *chip, int po
return (chip->info->invalid_port_mask & BIT(port)) != 0;
}
+static inline void mv88e6xxx_port_set_mab(struct mv88e6xxx_chip *chip,
+ int port, bool mab)
+{
+ chip->ports[port].mab = mab;
+}
+
int mv88e6xxx_read(struct mv88e6xxx_chip *chip, int addr, int reg, u16 *val);
int mv88e6xxx_write(struct mv88e6xxx_chip *chip, int addr, int reg, u16 val);
int mv88e6xxx_wait_mask(struct mv88e6xxx_chip *chip, int addr, int reg,
@@ -802,6 +819,12 @@ static inline void mv88e6xxx_reg_unlock(struct mv88e6xxx_chip *chip)
mutex_unlock(&chip->reg_lock);
}
+int mv88e6xxx_vtu_walk(struct mv88e6xxx_chip *chip,
+ int (*cb)(struct mv88e6xxx_chip *chip,
+ const struct mv88e6xxx_vtu_entry *entry,
+ void *priv),
+ void *priv);
+
int mv88e6xxx_fid_map(struct mv88e6xxx_chip *chip, unsigned long *bitmap);
#endif /* _MV88E6XXX_CHIP_H */
diff --git a/drivers/net/dsa/mv88e6xxx/global1.c b/drivers/net/dsa/mv88e6xxx/global1.c
index 5848112036b0..2fa55a643591 100644
--- a/drivers/net/dsa/mv88e6xxx/global1.c
+++ b/drivers/net/dsa/mv88e6xxx/global1.c
@@ -403,6 +403,18 @@ int mv88e6390_g1_set_cpu_port(struct mv88e6xxx_chip *chip, int port)
return mv88e6390_g1_monitor_write(chip, ptr, port);
}
+int mv88e6390_g1_set_ptp_cpu_port(struct mv88e6xxx_chip *chip, int port)
+{
+ u16 ptr = MV88E6390_G1_MONITOR_MGMT_CTL_PTR_PTP_CPU_DEST;
+
+ /* Use the default high priority for PTP frames sent to
+ * the CPU.
+ */
+ port |= MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST_MGMTPRI;
+
+ return mv88e6390_g1_monitor_write(chip, ptr, port);
+}
+
int mv88e6390_g1_mgmt_rsvd2cpu(struct mv88e6xxx_chip *chip)
{
u16 ptr;
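A note on the register written above: the Monitor & MGMT Control register is a pointer/data pair, where the PTR constant selects the internal field and the low byte carries the data (see the DATA_MASK definition in global1.h below), which is why the MGMTPRI priority bits can simply be OR-ed into the port value. Presumed composition inside mv88e6390_g1_monitor_write(), as a sketch (the UPDATE strobe constant is assumed from global1.h):

	u16 reg = MV88E6390_G1_MONITOR_MGMT_CTL_UPDATE |
		  MV88E6390_G1_MONITOR_MGMT_CTL_PTR_PTP_CPU_DEST |
		  (port & MV88E6390_G1_MONITOR_MGMT_CTL_DATA_MASK);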
diff --git a/drivers/net/dsa/mv88e6xxx/global1.h b/drivers/net/dsa/mv88e6xxx/global1.h
index 65958b2a0d3a..c99ddd117fe6 100644
--- a/drivers/net/dsa/mv88e6xxx/global1.h
+++ b/drivers/net/dsa/mv88e6xxx/global1.h
@@ -214,6 +214,7 @@
#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_INGRESS_DEST 0x2000
#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_EGRESS_DEST 0x2100
#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST 0x3000
+#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_PTP_CPU_DEST 0x3200
#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST_MGMTPRI 0x00e0
#define MV88E6390_G1_MONITOR_MGMT_CTL_DATA_MASK 0x00ff
@@ -303,6 +304,7 @@ int mv88e6390_g1_set_egress_port(struct mv88e6xxx_chip *chip,
int port);
int mv88e6095_g1_set_cpu_port(struct mv88e6xxx_chip *chip, int port);
int mv88e6390_g1_set_cpu_port(struct mv88e6xxx_chip *chip, int port);
+int mv88e6390_g1_set_ptp_cpu_port(struct mv88e6xxx_chip *chip, int port);
int mv88e6390_g1_mgmt_rsvd2cpu(struct mv88e6xxx_chip *chip);
int mv88e6085_g1_ip_pri_map(struct mv88e6xxx_chip *chip);
diff --git a/drivers/net/dsa/mv88e6xxx/global1_atu.c b/drivers/net/dsa/mv88e6xxx/global1_atu.c
index 61ae2d61e25c..ce3b3690c3c0 100644
--- a/drivers/net/dsa/mv88e6xxx/global1_atu.c
+++ b/drivers/net/dsa/mv88e6xxx/global1_atu.c
@@ -12,6 +12,7 @@
#include "chip.h"
#include "global1.h"
+#include "switchdev.h"
#include "trace.h"
/* Offset 0x01: ATU FID Register */
@@ -409,23 +410,25 @@ static irqreturn_t mv88e6xxx_g1_atu_prob_irq_thread_fn(int irq, void *dev_id)
err = mv88e6xxx_g1_read_atu_violation(chip);
if (err)
- goto out;
+ goto out_unlock;
err = mv88e6xxx_g1_read(chip, MV88E6XXX_G1_ATU_OP, &val);
if (err)
- goto out;
+ goto out_unlock;
err = mv88e6xxx_g1_atu_fid_read(chip, &fid);
if (err)
- goto out;
+ goto out_unlock;
err = mv88e6xxx_g1_atu_data_read(chip, &entry);
if (err)
- goto out;
+ goto out_unlock;
err = mv88e6xxx_g1_atu_mac_read(chip, &entry);
if (err)
- goto out;
+ goto out_unlock;
+
+ mv88e6xxx_reg_unlock(chip);
spid = entry.state;
@@ -441,6 +444,13 @@ static irqreturn_t mv88e6xxx_g1_atu_prob_irq_thread_fn(int irq, void *dev_id)
entry.portvec, entry.mac,
fid);
chip->ports[spid].atu_miss_violation++;
+
+ if (fid != MV88E6XXX_FID_STANDALONE && chip->ports[spid].mab) {
+ err = mv88e6xxx_handle_miss_violation(chip, spid,
+ &entry, fid);
+ if (err)
+ goto out;
+ }
}
if (val & MV88E6XXX_G1_ATU_OP_FULL_VIOLATION) {
@@ -449,13 +459,13 @@ static irqreturn_t mv88e6xxx_g1_atu_prob_irq_thread_fn(int irq, void *dev_id)
fid);
chip->ports[spid].atu_full_violation++;
}
- mv88e6xxx_reg_unlock(chip);
return IRQ_HANDLED;
-out:
+out_unlock:
mv88e6xxx_reg_unlock(chip);
+out:
dev_err(chip->dev, "ATU problem: error %d while handling interrupt\n",
err);
return IRQ_HANDLED;
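Why the unlock moved before the miss-violation handling: the new MAB path ends in a switchdev notifier taken under rtnl_lock(), which may sleep and must not nest inside the chip's register mutex. The restructured handler therefore reads everything it needs first, then drops the lock; in outline (a reading of the code above, not new code):

	mv88e6xxx_reg_lock(chip);
	/* read violation cause, FID, ATU entry data and MAC */
	mv88e6xxx_reg_unlock(chip);

	/* now safe to sleep and take rtnl */
	if (fid != MV88E6XXX_FID_STANDALONE && chip->ports[spid].mab)
		mv88e6xxx_handle_miss_violation(chip, spid, &entry, fid);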
diff --git a/drivers/net/dsa/mv88e6xxx/global2.c b/drivers/net/dsa/mv88e6xxx/global2.c
index fa65ecd9cb85..ed3b2f88e783 100644
--- a/drivers/net/dsa/mv88e6xxx/global2.c
+++ b/drivers/net/dsa/mv88e6xxx/global2.c
@@ -739,20 +739,18 @@ static int mv88e6xxx_g2_smi_phy_read_data_c45(struct mv88e6xxx_chip *chip,
return mv88e6xxx_g2_read(chip, MV88E6XXX_G2_SMI_PHY_DATA, data);
}
-static int mv88e6xxx_g2_smi_phy_read_c45(struct mv88e6xxx_chip *chip,
- bool external, int port, int reg,
- u16 *data)
+static int _mv88e6xxx_g2_smi_phy_read_c45(struct mv88e6xxx_chip *chip,
+ bool external, int port, int devad,
+ int reg, u16 *data)
{
- int dev = (reg >> 16) & 0x1f;
- int addr = reg & 0xffff;
int err;
- err = mv88e6xxx_g2_smi_phy_write_addr_c45(chip, external, port, dev,
- addr);
+ err = mv88e6xxx_g2_smi_phy_write_addr_c45(chip, external, port, devad,
+ reg);
if (err)
return err;
- return mv88e6xxx_g2_smi_phy_read_data_c45(chip, external, port, dev,
+ return mv88e6xxx_g2_smi_phy_read_data_c45(chip, external, port, devad,
data);
}
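For readers unfamiliar with the encoding the deleted lines decoded: before this conversion, a Clause 45 access was packed into the single "reg" argument using the historical constants from include/uapi/linux/mdio.h (bit 30 flags C45, devad in bits 20:16, register in bits 15:0). Roughly, with a hypothetical helper name:

	#define MII_ADDR_C45		(1 << 30)
	#define MII_DEVADDR_C45_SHIFT	16
	#define MII_REGADDR_C45_MASK	GENMASK(15, 0)

	static inline int mii_pack_c45(int devad, int regnum)
	{
		return MII_ADDR_C45 | (devad << MII_DEVADDR_C45_SHIFT) |
		       (regnum & MII_REGADDR_C45_MASK);
	}

Passing devad and reg as separate arguments removes this packing, and the MII_ADDR_C45 flag with it.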
@@ -771,51 +769,65 @@ static int mv88e6xxx_g2_smi_phy_write_data_c45(struct mv88e6xxx_chip *chip,
return mv88e6xxx_g2_smi_phy_access_c45(chip, external, op, port, dev);
}
-static int mv88e6xxx_g2_smi_phy_write_c45(struct mv88e6xxx_chip *chip,
- bool external, int port, int reg,
- u16 data)
+static int _mv88e6xxx_g2_smi_phy_write_c45(struct mv88e6xxx_chip *chip,
+ bool external, int port, int devad,
+ int reg, u16 data)
{
- int dev = (reg >> 16) & 0x1f;
- int addr = reg & 0xffff;
int err;
- err = mv88e6xxx_g2_smi_phy_write_addr_c45(chip, external, port, dev,
- addr);
+ err = mv88e6xxx_g2_smi_phy_write_addr_c45(chip, external, port, devad,
+ reg);
if (err)
return err;
- return mv88e6xxx_g2_smi_phy_write_data_c45(chip, external, port, dev,
+ return mv88e6xxx_g2_smi_phy_write_data_c45(chip, external, port, devad,
data);
}
-int mv88e6xxx_g2_smi_phy_read(struct mv88e6xxx_chip *chip, struct mii_bus *bus,
- int addr, int reg, u16 *val)
+int mv88e6xxx_g2_smi_phy_read_c22(struct mv88e6xxx_chip *chip,
+ struct mii_bus *bus,
+ int addr, int reg, u16 *val)
{
struct mv88e6xxx_mdio_bus *mdio_bus = bus->priv;
bool external = mdio_bus->external;
- if (reg & MII_ADDR_C45)
- return mv88e6xxx_g2_smi_phy_read_c45(chip, external, addr, reg,
- val);
-
return mv88e6xxx_g2_smi_phy_read_data_c22(chip, external, addr, reg,
val);
}
-int mv88e6xxx_g2_smi_phy_write(struct mv88e6xxx_chip *chip, struct mii_bus *bus,
- int addr, int reg, u16 val)
+int mv88e6xxx_g2_smi_phy_read_c45(struct mv88e6xxx_chip *chip,
+ struct mii_bus *bus, int addr, int devad,
+ int reg, u16 *val)
{
struct mv88e6xxx_mdio_bus *mdio_bus = bus->priv;
bool external = mdio_bus->external;
- if (reg & MII_ADDR_C45)
- return mv88e6xxx_g2_smi_phy_write_c45(chip, external, addr, reg,
- val);
+ return _mv88e6xxx_g2_smi_phy_read_c45(chip, external, addr, devad, reg,
+ val);
+}
+
+int mv88e6xxx_g2_smi_phy_write_c22(struct mv88e6xxx_chip *chip,
+ struct mii_bus *bus, int addr, int reg,
+ u16 val)
+{
+ struct mv88e6xxx_mdio_bus *mdio_bus = bus->priv;
+ bool external = mdio_bus->external;
return mv88e6xxx_g2_smi_phy_write_data_c22(chip, external, addr, reg,
val);
}
+int mv88e6xxx_g2_smi_phy_write_c45(struct mv88e6xxx_chip *chip,
+ struct mii_bus *bus, int addr, int devad,
+ int reg, u16 val)
+{
+ struct mv88e6xxx_mdio_bus *mdio_bus = bus->priv;
+ bool external = mdio_bus->external;
+
+ return _mv88e6xxx_g2_smi_phy_write_c45(chip, external, addr, devad, reg,
+ val);
+}
+
/* Offset 0x1B: Watchdog Control */
static int mv88e6097_watchdog_action(struct mv88e6xxx_chip *chip, int irq)
{
diff --git a/drivers/net/dsa/mv88e6xxx/global2.h b/drivers/net/dsa/mv88e6xxx/global2.h
index 7536b8b0ad01..e973114d6890 100644
--- a/drivers/net/dsa/mv88e6xxx/global2.h
+++ b/drivers/net/dsa/mv88e6xxx/global2.h
@@ -314,12 +314,18 @@ int mv88e6xxx_g2_wait_bit(struct mv88e6xxx_chip *chip, int reg,
int mv88e6352_g2_irl_init_all(struct mv88e6xxx_chip *chip, int port);
int mv88e6390_g2_irl_init_all(struct mv88e6xxx_chip *chip, int port);
-int mv88e6xxx_g2_smi_phy_read(struct mv88e6xxx_chip *chip,
- struct mii_bus *bus,
- int addr, int reg, u16 *val);
-int mv88e6xxx_g2_smi_phy_write(struct mv88e6xxx_chip *chip,
- struct mii_bus *bus,
- int addr, int reg, u16 val);
+int mv88e6xxx_g2_smi_phy_read_c22(struct mv88e6xxx_chip *chip,
+ struct mii_bus *bus,
+ int addr, int reg, u16 *val);
+int mv88e6xxx_g2_smi_phy_write_c22(struct mv88e6xxx_chip *chip,
+ struct mii_bus *bus,
+ int addr, int reg, u16 val);
+int mv88e6xxx_g2_smi_phy_read_c45(struct mv88e6xxx_chip *chip,
+ struct mii_bus *bus,
+ int addr, int devad, int reg, u16 *val);
+int mv88e6xxx_g2_smi_phy_write_c45(struct mv88e6xxx_chip *chip,
+ struct mii_bus *bus,
+ int addr, int devad, int reg, u16 val);
int mv88e6xxx_g2_set_switch_mac(struct mv88e6xxx_chip *chip, u8 *addr);
int mv88e6xxx_g2_get_eeprom8(struct mv88e6xxx_chip *chip,
diff --git a/drivers/net/dsa/mv88e6xxx/phy.c b/drivers/net/dsa/mv88e6xxx/phy.c
index 252b5b3a3efe..8bb88b3d900d 100644
--- a/drivers/net/dsa/mv88e6xxx/phy.c
+++ b/drivers/net/dsa/mv88e6xxx/phy.c
@@ -55,6 +55,38 @@ int mv88e6xxx_phy_write(struct mv88e6xxx_chip *chip, int phy, int reg, u16 val)
return chip->info->ops->phy_write(chip, bus, addr, reg, val);
}
+int mv88e6xxx_phy_read_c45(struct mv88e6xxx_chip *chip, int phy, int devad,
+ int reg, u16 *val)
+{
+ int addr = phy; /* PHY device addresses start at 0x0 */
+ struct mii_bus *bus;
+
+ bus = mv88e6xxx_default_mdio_bus(chip);
+ if (!bus)
+ return -EOPNOTSUPP;
+
+ if (!chip->info->ops->phy_read_c45)
+ return -EOPNOTSUPP;
+
+ return chip->info->ops->phy_read_c45(chip, bus, addr, devad, reg, val);
+}
+
+int mv88e6xxx_phy_write_c45(struct mv88e6xxx_chip *chip, int phy, int devad,
+ int reg, u16 val)
+{
+ int addr = phy; /* PHY device addresses start at 0x0 */
+ struct mii_bus *bus;
+
+ bus = mv88e6xxx_default_mdio_bus(chip);
+ if (!bus)
+ return -EOPNOTSUPP;
+
+ if (!chip->info->ops->phy_write_c45)
+ return -EOPNOTSUPP;
+
+ return chip->info->ops->phy_write_c45(chip, bus, addr, devad, reg, val);
+}
+
static int mv88e6xxx_phy_page_get(struct mv88e6xxx_chip *chip, int phy, u8 page)
{
return mv88e6xxx_phy_write(chip, phy, MV88E6XXX_PHY_PAGE, page);
diff --git a/drivers/net/dsa/mv88e6xxx/phy.h b/drivers/net/dsa/mv88e6xxx/phy.h
index 05ea0d546969..5f47722364cc 100644
--- a/drivers/net/dsa/mv88e6xxx/phy.h
+++ b/drivers/net/dsa/mv88e6xxx/phy.h
@@ -28,6 +28,10 @@ int mv88e6xxx_phy_read(struct mv88e6xxx_chip *chip, int phy,
int reg, u16 *val);
int mv88e6xxx_phy_write(struct mv88e6xxx_chip *chip, int phy,
int reg, u16 val);
+int mv88e6xxx_phy_read_c45(struct mv88e6xxx_chip *chip, int phy, int devad,
+ int reg, u16 *val);
+int mv88e6xxx_phy_write_c45(struct mv88e6xxx_chip *chip, int phy, int devad,
+ int reg, u16 val);
int mv88e6xxx_phy_page_read(struct mv88e6xxx_chip *chip, int phy,
u8 page, int reg, u16 *val);
int mv88e6xxx_phy_page_write(struct mv88e6xxx_chip *chip, int phy,
diff --git a/drivers/net/dsa/mv88e6xxx/ptp.c b/drivers/net/dsa/mv88e6xxx/ptp.c
index d838c174dc0d..ea17231dc34e 100644
--- a/drivers/net/dsa/mv88e6xxx/ptp.c
+++ b/drivers/net/dsa/mv88e6xxx/ptp.c
@@ -11,6 +11,7 @@
*/
#include "chip.h"
+#include "global1.h"
#include "global2.h"
#include "hwtstamp.h"
#include "ptp.h"
@@ -419,6 +420,34 @@ const struct mv88e6xxx_ptp_ops mv88e6352_ptp_ops = {
.cc_mult_dem = MV88E6XXX_CC_MULT_DEM,
};
+const struct mv88e6xxx_ptp_ops mv88e6390_ptp_ops = {
+ .clock_read = mv88e6352_ptp_clock_read,
+ .ptp_enable = mv88e6352_ptp_enable,
+ .ptp_verify = mv88e6352_ptp_verify,
+ .event_work = mv88e6352_tai_event_work,
+ .port_enable = mv88e6352_hwtstamp_port_enable,
+ .port_disable = mv88e6352_hwtstamp_port_disable,
+ .set_ptp_cpu_port = mv88e6390_g1_set_ptp_cpu_port,
+ .n_ext_ts = 1,
+ .arr0_sts_reg = MV88E6XXX_PORT_PTP_ARR0_STS,
+ .arr1_sts_reg = MV88E6XXX_PORT_PTP_ARR1_STS,
+ .dep_sts_reg = MV88E6XXX_PORT_PTP_DEP_STS,
+ .rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L4_EVENT) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L4_SYNC) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L2_EVENT) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L2_SYNC) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ),
+ .cc_shift = MV88E6XXX_CC_SHIFT,
+ .cc_mult = MV88E6XXX_CC_MULT,
+ .cc_mult_num = MV88E6XXX_CC_MULT_NUM,
+ .cc_mult_dem = MV88E6XXX_CC_MULT_DEM,
+};
+
static u64 mv88e6xxx_ptp_clock_read(const struct cyclecounter *cc)
{
struct mv88e6xxx_chip *chip = cc_to_chip(cc);
@@ -491,6 +520,23 @@ int mv88e6xxx_ptp_setup(struct mv88e6xxx_chip *chip)
chip->ptp_clock_info.verify = ptp_ops->ptp_verify;
chip->ptp_clock_info.do_aux_work = mv88e6xxx_hwtstamp_work;
+ if (ptp_ops->set_ptp_cpu_port) {
+ struct dsa_port *dp;
+ int upstream = 0;
+ int err;
+
+ dsa_switch_for_each_user_port(dp, chip->ds) {
+ upstream = dsa_upstream_port(chip->ds, dp->index);
+ break;
+ }
+
+ err = ptp_ops->set_ptp_cpu_port(chip, upstream);
+ if (err) {
+ dev_err(chip->dev, "Failed to set PTP CPU destination port!\n");
+ return err;
+ }
+ }
+
chip->ptp_clock = ptp_clock_register(&chip->ptp_clock_info, chip->dev);
if (IS_ERR(chip->ptp_clock))
return PTR_ERR(chip->ptp_clock);
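A note on the port selection above: dsa_upstream_port() returns the port through which a given user port reaches its dedicated CPU port, so sampling it from the first user port yields the CPU-facing port, assuming (as the loop does) that all user ports share the same upstream; on cascaded trees it is the port that routes toward the CPU. On a single switch this reduces to the CPU port index, roughly (a sketch using standard DSA accessors, not the real helper):

	static inline int example_upstream(struct dsa_switch *ds, int port)
	{
		struct dsa_port *cpu_dp = dsa_to_port(ds, port)->cpu_dp;

		/* single-switch case: the CPU port itself */
		return cpu_dp ? cpu_dp->index : port;
	}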
diff --git a/drivers/net/dsa/mv88e6xxx/ptp.h b/drivers/net/dsa/mv88e6xxx/ptp.h
index 269d5d16a466..6c4d09adc93c 100644
--- a/drivers/net/dsa/mv88e6xxx/ptp.h
+++ b/drivers/net/dsa/mv88e6xxx/ptp.h
@@ -151,6 +151,7 @@ void mv88e6xxx_ptp_free(struct mv88e6xxx_chip *chip);
extern const struct mv88e6xxx_ptp_ops mv88e6165_ptp_ops;
extern const struct mv88e6xxx_ptp_ops mv88e6250_ptp_ops;
extern const struct mv88e6xxx_ptp_ops mv88e6352_ptp_ops;
+extern const struct mv88e6xxx_ptp_ops mv88e6390_ptp_ops;
#else /* !CONFIG_NET_DSA_MV88E6XXX_PTP */
@@ -171,6 +172,7 @@ static inline void mv88e6xxx_ptp_free(struct mv88e6xxx_chip *chip)
static const struct mv88e6xxx_ptp_ops mv88e6165_ptp_ops = {};
static const struct mv88e6xxx_ptp_ops mv88e6250_ptp_ops = {};
static const struct mv88e6xxx_ptp_ops mv88e6352_ptp_ops = {};
+static const struct mv88e6xxx_ptp_ops mv88e6390_ptp_ops = {};
#endif /* CONFIG_NET_DSA_MV88E6XXX_PTP */
diff --git a/drivers/net/dsa/mv88e6xxx/serdes.c b/drivers/net/dsa/mv88e6xxx/serdes.c
index d94150d8f3f4..72faec8f44dc 100644
--- a/drivers/net/dsa/mv88e6xxx/serdes.c
+++ b/drivers/net/dsa/mv88e6xxx/serdes.c
@@ -36,17 +36,13 @@ static int mv88e6352_serdes_write(struct mv88e6xxx_chip *chip, int reg,
static int mv88e6390_serdes_read(struct mv88e6xxx_chip *chip,
int lane, int device, int reg, u16 *val)
{
- int reg_c45 = MII_ADDR_C45 | device << 16 | reg;
-
- return mv88e6xxx_phy_read(chip, lane, reg_c45, val);
+ return mv88e6xxx_phy_read_c45(chip, lane, device, reg, val);
}
static int mv88e6390_serdes_write(struct mv88e6xxx_chip *chip,
int lane, int device, int reg, u16 val)
{
- int reg_c45 = MII_ADDR_C45 | device << 16 | reg;
-
- return mv88e6xxx_phy_write(chip, lane, reg_c45, val);
+ return mv88e6xxx_phy_write_c45(chip, lane, device, reg, val);
}
static int mv88e6xxx_serdes_pcs_get_state(struct mv88e6xxx_chip *chip,
diff --git a/drivers/net/dsa/mv88e6xxx/switchdev.c b/drivers/net/dsa/mv88e6xxx/switchdev.c
new file mode 100644
index 000000000000..4c346a884fb2
--- /dev/null
+++ b/drivers/net/dsa/mv88e6xxx/switchdev.c
@@ -0,0 +1,83 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * switchdev.c
+ *
+ * Authors:
+ * Hans J. Schultz <netdev@kapio-technology.com>
+ *
+ */
+
+#include <net/switchdev.h>
+#include "chip.h"
+#include "global1.h"
+#include "switchdev.h"
+
+struct mv88e6xxx_fid_search_ctx {
+ u16 fid_search;
+ u16 vid_found;
+};
+
+static int __mv88e6xxx_find_vid(struct mv88e6xxx_chip *chip,
+ const struct mv88e6xxx_vtu_entry *entry,
+ void *priv)
+{
+ struct mv88e6xxx_fid_search_ctx *ctx = priv;
+
+ if (ctx->fid_search == entry->fid) {
+ ctx->vid_found = entry->vid;
+ return 1;
+ }
+
+ return 0;
+}
+
+static int mv88e6xxx_find_vid(struct mv88e6xxx_chip *chip, u16 fid, u16 *vid)
+{
+ struct mv88e6xxx_fid_search_ctx ctx;
+ int err;
+
+ ctx.fid_search = fid;
+ mv88e6xxx_reg_lock(chip);
+ err = mv88e6xxx_vtu_walk(chip, __mv88e6xxx_find_vid, &ctx);
+ mv88e6xxx_reg_unlock(chip);
+ if (err < 0)
+ return err;
+ if (err == 1)
+ *vid = ctx.vid_found;
+ else
+ return -ENOENT;
+
+ return 0;
+}
+
+int mv88e6xxx_handle_miss_violation(struct mv88e6xxx_chip *chip, int port,
+ struct mv88e6xxx_atu_entry *entry, u16 fid)
+{
+ struct switchdev_notifier_fdb_info info = {
+ .addr = entry->mac,
+ .locked = true,
+ };
+ struct net_device *brport;
+ struct dsa_port *dp;
+ u16 vid;
+ int err;
+
+ err = mv88e6xxx_find_vid(chip, fid, &vid);
+ if (err)
+ return err;
+
+ info.vid = vid;
+ dp = dsa_to_port(chip->ds, port);
+
+ rtnl_lock();
+ brport = dsa_port_to_bridge_port(dp);
+ if (!brport) {
+ rtnl_unlock();
+ return -ENODEV;
+ }
+ err = call_switchdev_notifiers(SWITCHDEV_FDB_ADD_TO_BRIDGE,
+ brport, &info.info, NULL);
+ rtnl_unlock();
+
+ return err;
+}
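What consumes this notification: the bridge turns a locked SWITCHDEV_FDB_ADD_TO_BRIDGE event into a locked FDB entry, which an authenticator (e.g. an 802.1X/MAB daemon) can later replace with an authorized entry. A minimal illustrative listener, hypothetical and not part of the patch, registered with register_switchdev_notifier():

	static int mab_fdb_event(struct notifier_block *nb, unsigned long event,
				 void *ptr)
	{
		struct switchdev_notifier_fdb_info *info = ptr;

		if (event == SWITCHDEV_FDB_ADD_TO_BRIDGE && info->locked)
			pr_info("MAB candidate %pM vid %u\n",
				info->addr, info->vid);

		return NOTIFY_DONE;
	}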
diff --git a/drivers/net/dsa/mv88e6xxx/switchdev.h b/drivers/net/dsa/mv88e6xxx/switchdev.h
new file mode 100644
index 000000000000..62214f9d62b0
--- /dev/null
+++ b/drivers/net/dsa/mv88e6xxx/switchdev.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * switchdev.h
+ *
+ * Authors:
+ * Hans J. Schultz <netdev@kapio-technology.com>
+ *
+ */
+
+#ifndef _MV88E6XXX_SWITCHDEV_H_
+#define _MV88E6XXX_SWITCHDEV_H_
+
+#include "chip.h"
+
+int mv88e6xxx_handle_miss_violation(struct mv88e6xxx_chip *chip, int port,
+ struct mv88e6xxx_atu_entry *entry,
+ u16 fid);
+
+#endif /* _MV88E6XXX_SWITCHDEV_H_ */
diff --git a/drivers/net/dsa/ocelot/Kconfig b/drivers/net/dsa/ocelot/Kconfig
index 08db9cf76818..081e7a88ea02 100644
--- a/drivers/net/dsa/ocelot/Kconfig
+++ b/drivers/net/dsa/ocelot/Kconfig
@@ -1,4 +1,34 @@
# SPDX-License-Identifier: GPL-2.0-only
+config NET_DSA_MSCC_FELIX_DSA_LIB
+ tristate
+ help
+ This is an umbrella module for all network switches that are
+ register-compatible with Ocelot and that perform I/O to their host
+ CPU through an NPI (Node Processor Interface) Ethernet port.
+ Its name comes from the first hardware chip to make use of it
+ (VSC9959), code named Felix.
+
+config NET_DSA_MSCC_OCELOT_EXT
+ tristate "Ocelot External Ethernet switch support"
+ depends on NET_DSA && SPI
+ depends on NET_VENDOR_MICROSEMI
+ depends on PTP_1588_CLOCK_OPTIONAL
+ select MDIO_MSCC_MIIM
+ select MFD_OCELOT
+ select MSCC_OCELOT_SWITCH_LIB
+ select NET_DSA_MSCC_FELIX_DSA_LIB
+ select NET_DSA_TAG_OCELOT_8021Q
+ select NET_DSA_TAG_OCELOT
+ help
+ This driver supports the VSC7511, VSC7512, VSC7513 and VSC7514 chips
+ when controlled through SPI.
+
+ The Ocelot switch family is a set of multi-port networking chips. All
+ of these chips can be controlled externally through SPI or PCIe
+ interfaces.
+
+ Say "Y" here to enable external control to these chips.
+
config NET_DSA_MSCC_FELIX
tristate "Ocelot / Felix Ethernet switch support"
depends on NET_DSA && PCI
@@ -8,6 +38,7 @@ config NET_DSA_MSCC_FELIX
depends on PTP_1588_CLOCK_OPTIONAL
depends on NET_SCH_TAPRIO || NET_SCH_TAPRIO=n
select MSCC_OCELOT_SWITCH_LIB
+ select NET_DSA_MSCC_FELIX_DSA_LIB
select NET_DSA_TAG_OCELOT_8021Q
select NET_DSA_TAG_OCELOT
select FSL_ENETC_MDIO
@@ -24,6 +55,7 @@ config NET_DSA_MSCC_SEVILLE
depends on PTP_1588_CLOCK_OPTIONAL
select MDIO_MSCC_MIIM
select MSCC_OCELOT_SWITCH_LIB
+ select NET_DSA_MSCC_FELIX_DSA_LIB
select NET_DSA_TAG_OCELOT_8021Q
select NET_DSA_TAG_OCELOT
select PCS_LYNX
diff --git a/drivers/net/dsa/ocelot/Makefile b/drivers/net/dsa/ocelot/Makefile
index f6dd131e7491..ead868a293e3 100644
--- a/drivers/net/dsa/ocelot/Makefile
+++ b/drivers/net/dsa/ocelot/Makefile
@@ -1,11 +1,10 @@
# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_NET_DSA_MSCC_FELIX_DSA_LIB) += mscc_felix_dsa_lib.o
obj-$(CONFIG_NET_DSA_MSCC_FELIX) += mscc_felix.o
+obj-$(CONFIG_NET_DSA_MSCC_OCELOT_EXT) += mscc_ocelot_ext.o
obj-$(CONFIG_NET_DSA_MSCC_SEVILLE) += mscc_seville.o
-mscc_felix-objs := \
- felix.o \
- felix_vsc9959.o
-
-mscc_seville-objs := \
- felix.o \
- seville_vsc9953.o
+mscc_felix_dsa_lib-objs := felix.o
+mscc_felix-objs := felix_vsc9959.o
+mscc_ocelot_ext-objs := ocelot_ext.o
+mscc_seville-objs := seville_vsc9953.o
diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
index 3b738cb2ae6e..d4cc9e60f369 100644
--- a/drivers/net/dsa/ocelot/felix.c
+++ b/drivers/net/dsa/ocelot/felix.c
@@ -1075,9 +1075,12 @@ static void felix_phylink_mac_link_down(struct dsa_switch *ds, int port,
phy_interface_t interface)
{
struct ocelot *ocelot = ds->priv;
+ struct felix *felix;
+
+ felix = ocelot_to_felix(ocelot);
ocelot_phylink_mac_link_down(ocelot, port, link_an_mode, interface,
- FELIX_MAC_QUIRKS);
+ felix->info->quirks);
}
static void felix_phylink_mac_link_up(struct dsa_switch *ds, int port,
@@ -1092,7 +1095,7 @@ static void felix_phylink_mac_link_up(struct dsa_switch *ds, int port,
ocelot_phylink_mac_link_up(ocelot, port, phydev, link_an_mode,
interface, speed, duplex, tx_pause, rx_pause,
- FELIX_MAC_QUIRKS);
+ felix->info->quirks);
if (felix->info->port_sched_speed_set)
felix->info->port_sched_speed_set(ocelot, port, speed);
@@ -1270,10 +1273,15 @@ static int felix_parse_ports_node(struct felix *felix,
err = felix_validate_phy_mode(felix, port, phy_mode);
if (err < 0) {
- dev_err(dev, "Unsupported PHY mode %s on port %d\n",
- phy_modes(phy_mode), port);
+ dev_info(dev, "Unsupported PHY mode %s on port %d\n",
+ phy_modes(phy_mode), port);
of_node_put(child);
- return err;
+
+ /* Leave port_phy_modes[port] = 0, which is also
+ * PHY_INTERFACE_MODE_NA. This makes a best-effort
+ * attempt to bring up as many ports as possible.
+ */
+ continue;
}
port_phy_modes[port] = phy_mode;
@@ -1312,6 +1320,13 @@ static struct regmap *felix_request_regmap_by_name(struct felix *felix,
struct resource res;
int i;
+ /* In an MFD configuration, regmaps are registered directly to the
+ * parent device before the child devices are probed, so there is no
+ * need to initialize a new one.
+ */
+ if (!felix->info->resources)
+ return dev_get_regmap(ocelot->dev->parent, resource_name);
+
for (i = 0; i < felix->info->num_resources; i++) {
if (strcmp(resource_name, felix->info->resources[i].name))
continue;
@@ -2024,6 +2039,31 @@ static int felix_port_del_dscp_prio(struct dsa_switch *ds, int port, u8 dscp,
return ocelot_port_del_dscp_prio(ocelot, port, dscp, prio);
}
+static int felix_get_mm(struct dsa_switch *ds, int port,
+ struct ethtool_mm_state *state)
+{
+ struct ocelot *ocelot = ds->priv;
+
+ return ocelot_port_get_mm(ocelot, port, state);
+}
+
+static int felix_set_mm(struct dsa_switch *ds, int port,
+ struct ethtool_mm_cfg *cfg,
+ struct netlink_ext_ack *extack)
+{
+ struct ocelot *ocelot = ds->priv;
+
+ return ocelot_port_set_mm(ocelot, port, cfg, extack);
+}
+
+static void felix_get_mm_stats(struct dsa_switch *ds, int port,
+ struct ethtool_mm_stats *stats)
+{
+ struct ocelot *ocelot = ds->priv;
+
+ ocelot_port_get_mm_stats(ocelot, port, stats);
+}
+
const struct dsa_switch_ops felix_switch_ops = {
.get_tag_protocol = felix_get_tag_protocol,
.change_tag_protocol = felix_change_tag_protocol,
@@ -2031,6 +2071,9 @@ const struct dsa_switch_ops felix_switch_ops = {
.setup = felix_setup,
.teardown = felix_teardown,
.set_ageing_time = felix_set_ageing_time,
+ .get_mm = felix_get_mm,
+ .set_mm = felix_set_mm,
+ .get_mm_stats = felix_get_mm_stats,
.get_stats64 = felix_get_stats64,
.get_pause_stats = felix_get_pause_stats,
.get_rmon_stats = felix_get_rmon_stats,
@@ -2103,6 +2146,7 @@ const struct dsa_switch_ops felix_switch_ops = {
.port_set_host_flood = felix_port_set_host_flood,
.port_change_master = felix_port_change_master,
};
+EXPORT_SYMBOL_GPL(felix_switch_ops);
struct net_device *felix_port_to_netdev(struct ocelot *ocelot, int port)
{
@@ -2114,6 +2158,7 @@ struct net_device *felix_port_to_netdev(struct ocelot *ocelot, int port)
return dsa_to_port(ds, port)->slave;
}
+EXPORT_SYMBOL_GPL(felix_port_to_netdev);
int felix_netdev_to_port(struct net_device *dev)
{
@@ -2125,3 +2170,7 @@ int felix_netdev_to_port(struct net_device *dev)
return dp->index;
}
+EXPORT_SYMBOL_GPL(felix_netdev_to_port);
+
+MODULE_DESCRIPTION("Felix DSA library");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/dsa/ocelot/felix.h b/drivers/net/dsa/ocelot/felix.h
index be22d6ccd7c8..d5d0b30c0b75 100644
--- a/drivers/net/dsa/ocelot/felix.h
+++ b/drivers/net/dsa/ocelot/felix.h
@@ -7,6 +7,7 @@
#define ocelot_to_felix(o) container_of((o), struct felix, ocelot)
#define FELIX_MAC_QUIRKS OCELOT_QUIRK_PCS_PERFORMS_RATE_ADAPTATION
+#define OCELOT_PORT_MODE_NONE 0
#define OCELOT_PORT_MODE_INTERNAL BIT(0)
#define OCELOT_PORT_MODE_SGMII BIT(1)
#define OCELOT_PORT_MODE_QSGMII BIT(2)
@@ -36,6 +37,7 @@ struct felix_info {
u16 vcap_pol_base2;
u16 vcap_pol_max2;
const struct ptp_clock_info *ptp_caps;
+ unsigned long quirks;
/* Some Ocelot switches are integrated into the SoC without the
* extraction IRQ line connected to the ARM GIC. By enabling this
diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
index 01ac70fd7ddf..354aa3dbfde7 100644
--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
+++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
@@ -6,6 +6,7 @@
#include <soc/mscc/ocelot_qsys.h>
#include <soc/mscc/ocelot_vcap.h>
#include <soc/mscc/ocelot_ana.h>
+#include <soc/mscc/ocelot_dev.h>
#include <soc/mscc/ocelot_ptp.h>
#include <soc/mscc/ocelot_sys.h>
#include <net/tc_act/tc_gate.h>
@@ -318,6 +319,29 @@ static const u32 vsc9959_sys_regmap[] = {
REG(SYS_COUNT_RX_GREEN_PRIO_5, 0x0000a4),
REG(SYS_COUNT_RX_GREEN_PRIO_6, 0x0000a8),
REG(SYS_COUNT_RX_GREEN_PRIO_7, 0x0000ac),
+ REG(SYS_COUNT_RX_ASSEMBLY_ERRS, 0x0000b0),
+ REG(SYS_COUNT_RX_SMD_ERRS, 0x0000b4),
+ REG(SYS_COUNT_RX_ASSEMBLY_OK, 0x0000b8),
+ REG(SYS_COUNT_RX_MERGE_FRAGMENTS, 0x0000bc),
+ REG(SYS_COUNT_RX_PMAC_OCTETS, 0x0000c0),
+ REG(SYS_COUNT_RX_PMAC_UNICAST, 0x0000c4),
+ REG(SYS_COUNT_RX_PMAC_MULTICAST, 0x0000c8),
+ REG(SYS_COUNT_RX_PMAC_BROADCAST, 0x0000cc),
+ REG(SYS_COUNT_RX_PMAC_SHORTS, 0x0000d0),
+ REG(SYS_COUNT_RX_PMAC_FRAGMENTS, 0x0000d4),
+ REG(SYS_COUNT_RX_PMAC_JABBERS, 0x0000d8),
+ REG(SYS_COUNT_RX_PMAC_CRC_ALIGN_ERRS, 0x0000dc),
+ REG(SYS_COUNT_RX_PMAC_SYM_ERRS, 0x0000e0),
+ REG(SYS_COUNT_RX_PMAC_64, 0x0000e4),
+ REG(SYS_COUNT_RX_PMAC_65_127, 0x0000e8),
+ REG(SYS_COUNT_RX_PMAC_128_255, 0x0000ec),
+ REG(SYS_COUNT_RX_PMAC_256_511, 0x0000f0),
+ REG(SYS_COUNT_RX_PMAC_512_1023, 0x0000f4),
+ REG(SYS_COUNT_RX_PMAC_1024_1526, 0x0000f8),
+ REG(SYS_COUNT_RX_PMAC_1527_MAX, 0x0000fc),
+ REG(SYS_COUNT_RX_PMAC_PAUSE, 0x000100),
+ REG(SYS_COUNT_RX_PMAC_CONTROL, 0x000104),
+ REG(SYS_COUNT_RX_PMAC_LONGS, 0x000108),
REG(SYS_COUNT_TX_OCTETS, 0x000200),
REG(SYS_COUNT_TX_UNICAST, 0x000204),
REG(SYS_COUNT_TX_MULTICAST, 0x000208),
@@ -349,6 +373,20 @@ static const u32 vsc9959_sys_regmap[] = {
REG(SYS_COUNT_TX_GREEN_PRIO_6, 0x000270),
REG(SYS_COUNT_TX_GREEN_PRIO_7, 0x000274),
REG(SYS_COUNT_TX_AGED, 0x000278),
+ REG(SYS_COUNT_TX_MM_HOLD, 0x00027c),
+ REG(SYS_COUNT_TX_MERGE_FRAGMENTS, 0x000280),
+ REG(SYS_COUNT_TX_PMAC_OCTETS, 0x000284),
+ REG(SYS_COUNT_TX_PMAC_UNICAST, 0x000288),
+ REG(SYS_COUNT_TX_PMAC_MULTICAST, 0x00028c),
+ REG(SYS_COUNT_TX_PMAC_BROADCAST, 0x000290),
+ REG(SYS_COUNT_TX_PMAC_PAUSE, 0x000294),
+ REG(SYS_COUNT_TX_PMAC_64, 0x000298),
+ REG(SYS_COUNT_TX_PMAC_65_127, 0x00029c),
+ REG(SYS_COUNT_TX_PMAC_128_255, 0x0002a0),
+ REG(SYS_COUNT_TX_PMAC_256_511, 0x0002a4),
+ REG(SYS_COUNT_TX_PMAC_512_1023, 0x0002a8),
+ REG(SYS_COUNT_TX_PMAC_1024_1526, 0x0002ac),
+ REG(SYS_COUNT_TX_PMAC_1527_MAX, 0x0002b0),
REG(SYS_COUNT_DROP_LOCAL, 0x000400),
REG(SYS_COUNT_DROP_TAIL, 0x000404),
REG(SYS_COUNT_DROP_YELLOW_PRIO_0, 0x000408),
@@ -439,6 +477,9 @@ static const u32 vsc9959_dev_gmii_regmap[] = {
REG(DEV_MAC_FC_MAC_LOW_CFG, 0x3c),
REG(DEV_MAC_FC_MAC_HIGH_CFG, 0x40),
REG(DEV_MAC_STICKY, 0x44),
+ REG(DEV_MM_ENABLE_CONFIG, 0x48),
+ REG(DEV_MM_VERIF_CONFIG, 0x4C),
+ REG(DEV_MM_STATUS, 0x50),
REG_RESERVED(PCS1G_CFG),
REG_RESERVED(PCS1G_MODE_CFG),
REG_RESERVED(PCS1G_SD_CFG),
@@ -954,8 +995,10 @@ static int vsc9959_mdio_bus_alloc(struct ocelot *ocelot)
return -ENOMEM;
bus->name = "VSC9959 internal MDIO bus";
- bus->read = enetc_mdio_read;
- bus->write = enetc_mdio_write;
+ bus->read = enetc_mdio_read_c22;
+ bus->write = enetc_mdio_write_c22;
+ bus->read_c45 = enetc_mdio_read_c45;
+ bus->write_c45 = enetc_mdio_write_c45;
bus->parent = dev;
mdio_priv = bus->priv;
mdio_priv->hw = hw;
@@ -2550,6 +2593,7 @@ static const struct felix_info felix_info_vsc9959 = {
.num_mact_rows = 2048,
.num_ports = VSC9959_NUM_PORTS,
.num_tx_queues = OCELOT_NUM_TC,
+ .quirks = FELIX_MAC_QUIRKS,
.quirk_no_xtr_irq = true,
.ptp_caps = &vsc9959_ptp_caps,
.mdio_bus_alloc = vsc9959_mdio_bus_alloc,
@@ -2560,20 +2604,19 @@ static const struct felix_info felix_info_vsc9959 = {
.tas_guard_bands_update = vsc9959_tas_guard_bands_update,
};
+/* The INTB interrupt is shared between PTP TX timestamp availability
+ * notification and MAC Merge status change on each port.
+ */
static irqreturn_t felix_irq_handler(int irq, void *data)
{
struct ocelot *ocelot = (struct ocelot *)data;
-
- /* The INTB interrupt is used for both PTP TX timestamp interrupt
- * and preemption status change interrupt on each port.
- *
- * - Get txtstamp if have
- * - TODO: handle preemption. Without handling it, driver may get
- * interrupt storm.
- */
+ int port;
ocelot_get_txtstamp(ocelot);
+ for (port = 0; port < ocelot->num_phys_ports; port++)
+ ocelot_port_mm_irq(ocelot, port);
+
return IRQ_HANDLED;
}
@@ -2621,6 +2664,7 @@ static int felix_pci_probe(struct pci_dev *pdev,
}
ocelot->ptp = 1;
+ ocelot->mm_supported = true;
ds = kzalloc(sizeof(struct dsa_switch), GFP_KERNEL);
if (!ds) {
diff --git a/drivers/net/dsa/ocelot/ocelot_ext.c b/drivers/net/dsa/ocelot/ocelot_ext.c
new file mode 100644
index 000000000000..14efa6387bd7
--- /dev/null
+++ b/drivers/net/dsa/ocelot/ocelot_ext.c
@@ -0,0 +1,163 @@
+// SPDX-License-Identifier: (GPL-2.0 OR MIT)
+/*
+ * Copyright 2021-2022 Innovative Advantage Inc.
+ */
+
+#include <linux/mfd/ocelot.h>
+#include <linux/phylink.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+#include <soc/mscc/ocelot.h>
+#include <soc/mscc/vsc7514_regs.h>
+#include "felix.h"
+
+#define VSC7514_NUM_PORTS 11
+
+#define OCELOT_PORT_MODE_SERDES (OCELOT_PORT_MODE_SGMII | \
+ OCELOT_PORT_MODE_QSGMII)
+
+static const u32 vsc7512_port_modes[VSC7514_NUM_PORTS] = {
+ OCELOT_PORT_MODE_INTERNAL,
+ OCELOT_PORT_MODE_INTERNAL,
+ OCELOT_PORT_MODE_INTERNAL,
+ OCELOT_PORT_MODE_INTERNAL,
+ OCELOT_PORT_MODE_NONE,
+ OCELOT_PORT_MODE_NONE,
+ OCELOT_PORT_MODE_NONE,
+ OCELOT_PORT_MODE_NONE,
+ OCELOT_PORT_MODE_NONE,
+ OCELOT_PORT_MODE_NONE,
+ OCELOT_PORT_MODE_NONE,
+};
+
+static const struct ocelot_ops ocelot_ext_ops = {
+ .reset = ocelot_reset,
+ .wm_enc = ocelot_wm_enc,
+ .wm_dec = ocelot_wm_dec,
+ .wm_stat = ocelot_wm_stat,
+ .port_to_netdev = felix_port_to_netdev,
+ .netdev_to_port = felix_netdev_to_port,
+};
+
+static const char * const vsc7512_resource_names[TARGET_MAX] = {
+ [SYS] = "sys",
+ [REW] = "rew",
+ [S0] = "s0",
+ [S1] = "s1",
+ [S2] = "s2",
+ [QS] = "qs",
+ [QSYS] = "qsys",
+ [ANA] = "ana",
+};
+
+static const struct felix_info vsc7512_info = {
+ .resource_names = vsc7512_resource_names,
+ .regfields = vsc7514_regfields,
+ .map = vsc7514_regmap,
+ .ops = &ocelot_ext_ops,
+ .vcap = vsc7514_vcap_props,
+ .num_mact_rows = 1024,
+ .num_ports = VSC7514_NUM_PORTS,
+ .num_tx_queues = OCELOT_NUM_TC,
+ .port_modes = vsc7512_port_modes,
+};
+
+static int ocelot_ext_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct dsa_switch *ds;
+ struct ocelot *ocelot;
+ struct felix *felix;
+ int err;
+
+ felix = kzalloc(sizeof(*felix), GFP_KERNEL);
+ if (!felix)
+ return -ENOMEM;
+
+ dev_set_drvdata(dev, felix);
+
+ ocelot = &felix->ocelot;
+ ocelot->dev = dev;
+
+ ocelot->num_flooding_pgids = 1;
+
+ felix->info = &vsc7512_info;
+
+ ds = kzalloc(sizeof(*ds), GFP_KERNEL);
+ if (!ds) {
+ err = -ENOMEM;
+ dev_err_probe(dev, err, "Failed to allocate DSA switch\n");
+ goto err_free_felix;
+ }
+
+ ds->dev = dev;
+ ds->num_ports = felix->info->num_ports;
+ ds->num_tx_queues = felix->info->num_tx_queues;
+
+ ds->ops = &felix_switch_ops;
+ ds->priv = ocelot;
+ felix->ds = ds;
+ felix->tag_proto = DSA_TAG_PROTO_OCELOT;
+
+ err = dsa_register_switch(ds);
+ if (err) {
+ dev_err_probe(dev, err, "Failed to register DSA switch\n");
+ goto err_free_ds;
+ }
+
+ return 0;
+
+err_free_ds:
+ kfree(ds);
+err_free_felix:
+ kfree(felix);
+ return err;
+}
+
+static int ocelot_ext_remove(struct platform_device *pdev)
+{
+ struct felix *felix = dev_get_drvdata(&pdev->dev);
+
+ if (!felix)
+ return 0;
+
+ dsa_unregister_switch(felix->ds);
+
+ kfree(felix->ds);
+ kfree(felix);
+
+ return 0;
+}
+
+static void ocelot_ext_shutdown(struct platform_device *pdev)
+{
+ struct felix *felix = dev_get_drvdata(&pdev->dev);
+
+ if (!felix)
+ return;
+
+ dsa_switch_shutdown(felix->ds);
+
+ dev_set_drvdata(&pdev->dev, NULL);
+}
+
+static const struct of_device_id ocelot_ext_switch_of_match[] = {
+ { .compatible = "mscc,vsc7512-switch" },
+ { },
+};
+MODULE_DEVICE_TABLE(of, ocelot_ext_switch_of_match);
+
+static struct platform_driver ocelot_ext_switch_driver = {
+ .driver = {
+ .name = "ocelot-switch",
+ .of_match_table = of_match_ptr(ocelot_ext_switch_of_match),
+ },
+ .probe = ocelot_ext_probe,
+ .remove = ocelot_ext_remove,
+ .shutdown = ocelot_ext_shutdown,
+};
+module_platform_driver(ocelot_ext_switch_driver);
+
+MODULE_DESCRIPTION("External Ocelot Switch driver");
+MODULE_LICENSE("GPL");
+MODULE_IMPORT_NS(MFD_OCELOT);
diff --git a/drivers/net/dsa/ocelot/seville_vsc9953.c b/drivers/net/dsa/ocelot/seville_vsc9953.c
index 88ed3a2e487a..287b64b788db 100644
--- a/drivers/net/dsa/ocelot/seville_vsc9953.c
+++ b/drivers/net/dsa/ocelot/seville_vsc9953.c
@@ -971,6 +971,7 @@ static const struct felix_info seville_info_vsc9953 = {
.vcap_pol_max = VSC9953_VCAP_POLICER_MAX,
.vcap_pol_base2 = VSC9953_VCAP_POLICER_BASE2,
.vcap_pol_max2 = VSC9953_VCAP_POLICER_MAX2,
+ .quirks = FELIX_MAC_QUIRKS,
.num_mact_rows = 2048,
.num_ports = VSC9953_NUM_PORTS,
.num_tx_queues = OCELOT_NUM_TC,
diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
index 2f224b166bbb..55df4479ea30 100644
--- a/drivers/net/dsa/qca/qca8k-8xxx.c
+++ b/drivers/net/dsa/qca/qca8k-8xxx.c
@@ -425,16 +425,12 @@ qca8k_regmap_update_bits_eth(struct qca8k_priv *priv, u32 reg, u32 mask, u32 wri
}
static int
-qca8k_regmap_read(void *ctx, uint32_t reg, uint32_t *val)
+qca8k_read_mii(struct qca8k_priv *priv, uint32_t reg, uint32_t *val)
{
- struct qca8k_priv *priv = (struct qca8k_priv *)ctx;
struct mii_bus *bus = priv->bus;
u16 r1, r2, page;
int ret;
- if (!qca8k_read_eth(priv, reg, val, sizeof(*val)))
- return 0;
-
qca8k_split_addr(reg, &r1, &r2, &page);
mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
@@ -451,16 +447,12 @@ exit:
}
static int
-qca8k_regmap_write(void *ctx, uint32_t reg, uint32_t val)
+qca8k_write_mii(struct qca8k_priv *priv, uint32_t reg, uint32_t val)
{
- struct qca8k_priv *priv = (struct qca8k_priv *)ctx;
struct mii_bus *bus = priv->bus;
u16 r1, r2, page;
int ret;
- if (!qca8k_write_eth(priv, reg, &val, sizeof(val)))
- return 0;
-
qca8k_split_addr(reg, &r1, &r2, &page);
mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
@@ -477,17 +469,14 @@ exit:
}
static int
-qca8k_regmap_update_bits(void *ctx, uint32_t reg, uint32_t mask, uint32_t write_val)
+qca8k_regmap_update_bits_mii(struct qca8k_priv *priv, uint32_t reg,
+ uint32_t mask, uint32_t write_val)
{
- struct qca8k_priv *priv = (struct qca8k_priv *)ctx;
struct mii_bus *bus = priv->bus;
u16 r1, r2, page;
u32 val;
int ret;
- if (!qca8k_regmap_update_bits_eth(priv, reg, mask, write_val))
- return 0;
-
qca8k_split_addr(reg, &r1, &r2, &page);
mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
@@ -510,17 +499,84 @@ exit:
return ret;
}
+static int
+qca8k_bulk_read(void *ctx, const void *reg_buf, size_t reg_len,
+ void *val_buf, size_t val_len)
+{
+ int i, count = val_len / sizeof(u32), ret;
+ u32 reg = *(u32 *)reg_buf & U16_MAX;
+ struct qca8k_priv *priv = ctx;
+
+ if (priv->mgmt_master &&
+ !qca8k_read_eth(priv, reg, val_buf, val_len))
+ return 0;
+
+ /* loop count times, incrementing reg by 4 each iteration */
+ for (i = 0; i < count; i++, reg += sizeof(u32)) {
+ ret = qca8k_read_mii(priv, reg, val_buf + i);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+static int
+qca8k_bulk_gather_write(void *ctx, const void *reg_buf, size_t reg_len,
+ const void *val_buf, size_t val_len)
+{
+ int i, count = val_len / sizeof(u32), ret;
+ u32 reg = *(u32 *)reg_buf & U16_MAX;
+ struct qca8k_priv *priv = ctx;
+ u32 *val = (u32 *)val_buf;
+
+ if (priv->mgmt_master &&
+ !qca8k_write_eth(priv, reg, val, val_len))
+ return 0;
+
+ /* loop count times, incrementing reg by 4 and advancing val to
+ * the next value
+ */
+ for (i = 0; i < count; i++, reg += sizeof(u32), val++) {
+ ret = qca8k_write_mii(priv, reg, *val);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+static int
+qca8k_bulk_write(void *ctx, const void *data, size_t bytes)
+{
+ return qca8k_bulk_gather_write(ctx, data, sizeof(u16), data + sizeof(u16),
+ bytes - sizeof(u16));
+}
+
+static int
+qca8k_regmap_update_bits(void *ctx, uint32_t reg, uint32_t mask, uint32_t write_val)
+{
+ struct qca8k_priv *priv = ctx;
+
+ if (!qca8k_regmap_update_bits_eth(priv, reg, mask, write_val))
+ return 0;
+
+ return qca8k_regmap_update_bits_mii(priv, reg, mask, write_val);
+}
+
static struct regmap_config qca8k_regmap_config = {
.reg_bits = 16,
.val_bits = 32,
.reg_stride = 4,
.max_register = 0x16ac, /* end MIB - Port6 range */
- .reg_read = qca8k_regmap_read,
- .reg_write = qca8k_regmap_write,
+ .read = qca8k_bulk_read,
+ .write = qca8k_bulk_write,
.reg_update_bits = qca8k_regmap_update_bits,
.rd_table = &qca8k_readable_table,
.disable_locking = true, /* Locking is handled by qca8k read/write */
.cache_type = REGCACHE_NONE, /* Explicitly disable CACHE */
+ .max_raw_read = 32, /* mgmt eth can read/write up to 8 registers at a time */
+ .max_raw_write = 32,
};
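With bulk .read/.write callbacks in place and max_raw_read/max_raw_write capped at 32 bytes (8 consecutive u32 registers), regmap can coalesce multi-register accesses into one transfer, which the management-Ethernet path then services in a single frame. Usage as seen in qca8k-common.c below, reading the 3-word ATU table in one go:

	u32 reg[QCA8K_ATU_TABLE_SIZE];
	int err;

	err = regmap_bulk_read(priv->regmap, QCA8K_REG_ATU_DATA0, reg,
			       QCA8K_ATU_TABLE_SIZE);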
static int
@@ -2089,8 +2145,6 @@ static SIMPLE_DEV_PM_OPS(qca8k_pm_ops,
static const struct qca8k_info_ops qca8xxx_ops = {
.autocast_mib = qca8k_get_ethtool_stats_eth,
- .read_eth = qca8k_read_eth,
- .write_eth = qca8k_write_eth,
};
static const struct qca8k_match_data qca8327 = {
diff --git a/drivers/net/dsa/qca/qca8k-common.c b/drivers/net/dsa/qca/qca8k-common.c
index fb45b598847b..96773e432558 100644
--- a/drivers/net/dsa/qca/qca8k-common.c
+++ b/drivers/net/dsa/qca/qca8k-common.c
@@ -101,45 +101,6 @@ const struct regmap_access_table qca8k_readable_table = {
.n_yes_ranges = ARRAY_SIZE(qca8k_readable_ranges),
};
-/* TODO: remove these extra ops when we can support regmap bulk read/write */
-static int qca8k_bulk_read(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
-{
- int i, count = len / sizeof(u32), ret;
-
- if (priv->mgmt_master && priv->info->ops->read_eth &&
- !priv->info->ops->read_eth(priv, reg, val, len))
- return 0;
-
- for (i = 0; i < count; i++) {
- ret = regmap_read(priv->regmap, reg + (i * 4), val + i);
- if (ret < 0)
- return ret;
- }
-
- return 0;
-}
-
-/* TODO: remove these extra ops when we can support regmap bulk read/write */
-static int qca8k_bulk_write(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
-{
- int i, count = len / sizeof(u32), ret;
- u32 tmp;
-
- if (priv->mgmt_master && priv->info->ops->write_eth &&
- !priv->info->ops->write_eth(priv, reg, val, len))
- return 0;
-
- for (i = 0; i < count; i++) {
- tmp = val[i];
-
- ret = regmap_write(priv->regmap, reg + (i * 4), tmp);
- if (ret < 0)
- return ret;
- }
-
- return 0;
-}
-
static int qca8k_busy_wait(struct qca8k_priv *priv, u32 reg, u32 mask)
{
u32 val;
@@ -150,11 +111,12 @@ static int qca8k_busy_wait(struct qca8k_priv *priv, u32 reg, u32 mask)
static int qca8k_fdb_read(struct qca8k_priv *priv, struct qca8k_fdb *fdb)
{
- u32 reg[3];
+ u32 reg[QCA8K_ATU_TABLE_SIZE];
int ret;
/* load the ARL table into an array */
- ret = qca8k_bulk_read(priv, QCA8K_REG_ATU_DATA0, reg, sizeof(reg));
+ ret = regmap_bulk_read(priv->regmap, QCA8K_REG_ATU_DATA0, reg,
+ QCA8K_ATU_TABLE_SIZE);
if (ret)
return ret;
@@ -178,7 +140,7 @@ static int qca8k_fdb_read(struct qca8k_priv *priv, struct qca8k_fdb *fdb)
static void qca8k_fdb_write(struct qca8k_priv *priv, u16 vid, u8 port_mask,
const u8 *mac, u8 aging)
{
- u32 reg[3] = { 0 };
+ u32 reg[QCA8K_ATU_TABLE_SIZE] = { 0 };
/* vid - 83:72 */
reg[2] = FIELD_PREP(QCA8K_ATU_VID_MASK, vid);
@@ -195,7 +157,8 @@ static void qca8k_fdb_write(struct qca8k_priv *priv, u16 vid, u8 port_mask,
reg[0] |= FIELD_PREP(QCA8K_ATU_ADDR5_MASK, mac[5]);
/* load the array into the ARL table */
- qca8k_bulk_write(priv, QCA8K_REG_ATU_DATA0, reg, sizeof(reg));
+ regmap_bulk_write(priv->regmap, QCA8K_REG_ATU_DATA0, reg,
+ QCA8K_ATU_TABLE_SIZE);
}
static int qca8k_fdb_access(struct qca8k_priv *priv, enum qca8k_fdb_cmd cmd,
diff --git a/drivers/net/dsa/qca/qca8k.h b/drivers/net/dsa/qca/qca8k.h
index 03514f7a20be..7996975d29d3 100644
--- a/drivers/net/dsa/qca/qca8k.h
+++ b/drivers/net/dsa/qca/qca8k.h
@@ -148,6 +148,8 @@
#define QCA8K_REG_IPV4_PRI_ADDR_MASK 0x474
/* Lookup registers */
+#define QCA8K_ATU_TABLE_SIZE 3 /* 12 bytes wide table / sizeof(u32) */
+
#define QCA8K_REG_ATU_DATA0 0x600
#define QCA8K_ATU_ADDR2_MASK GENMASK(31, 24)
#define QCA8K_ATU_ADDR3_MASK GENMASK(23, 16)
@@ -328,9 +330,6 @@ struct qca8k_priv;
struct qca8k_info_ops {
int (*autocast_mib)(struct dsa_switch *ds, int port, u64 *data);
- /* TODO: remove these extra ops when we can support regmap bulk read/write */
- int (*read_eth)(struct qca8k_priv *priv, u32 reg, u32 *val, int len);
- int (*write_eth)(struct qca8k_priv *priv, u32 reg, u32 *val, int len);
};
struct qca8k_match_data {
diff --git a/drivers/net/dsa/rzn1_a5psw.c b/drivers/net/dsa/rzn1_a5psw.c
index ed413d555bec..919027cf2012 100644
--- a/drivers/net/dsa/rzn1_a5psw.c
+++ b/drivers/net/dsa/rzn1_a5psw.c
@@ -781,9 +781,6 @@ static int a5psw_mdio_read(struct mii_bus *bus, int phy_id, int phy_reg)
u32 cmd, status;
int ret;
- if (phy_reg & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
cmd = A5PSW_MDIO_COMMAND_READ;
cmd |= FIELD_PREP(A5PSW_MDIO_COMMAND_REG_ADDR, phy_reg);
cmd |= FIELD_PREP(A5PSW_MDIO_COMMAND_PHY_ADDR, phy_id);
@@ -809,9 +806,6 @@ static int a5psw_mdio_write(struct mii_bus *bus, int phy_id, int phy_reg,
struct a5psw *a5psw = bus->priv;
u32 cmd;
- if (phy_reg & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
cmd = FIELD_PREP(A5PSW_MDIO_COMMAND_REG_ADDR, phy_reg);
cmd |= FIELD_PREP(A5PSW_MDIO_COMMAND_PHY_ADDR, phy_id);
diff --git a/drivers/net/dsa/sja1105/sja1105.h b/drivers/net/dsa/sja1105/sja1105.h
index 9ba2ec2b966d..fb1549a5fe32 100644
--- a/drivers/net/dsa/sja1105/sja1105.h
+++ b/drivers/net/dsa/sja1105/sja1105.h
@@ -149,8 +149,10 @@ struct sja1105_info {
bool (*rxtstamp)(struct dsa_switch *ds, int port, struct sk_buff *skb);
void (*txtstamp)(struct dsa_switch *ds, int port, struct sk_buff *skb);
int (*clocking_setup)(struct sja1105_private *priv);
- int (*pcs_mdio_read)(struct mii_bus *bus, int phy, int reg);
- int (*pcs_mdio_write)(struct mii_bus *bus, int phy, int reg, u16 val);
+ int (*pcs_mdio_read_c45)(struct mii_bus *bus, int phy, int mmd,
+ int reg);
+ int (*pcs_mdio_write_c45)(struct mii_bus *bus, int phy, int mmd,
+ int reg, u16 val);
int (*disable_microcontroller)(struct sja1105_private *priv);
const char *name;
bool supports_mii[SJA1105_MAX_NUM_PORTS];
@@ -303,10 +305,12 @@ void sja1105_frame_memory_partitioning(struct sja1105_private *priv);
/* From sja1105_mdio.c */
int sja1105_mdiobus_register(struct dsa_switch *ds);
void sja1105_mdiobus_unregister(struct dsa_switch *ds);
-int sja1105_pcs_mdio_read(struct mii_bus *bus, int phy, int reg);
-int sja1105_pcs_mdio_write(struct mii_bus *bus, int phy, int reg, u16 val);
-int sja1110_pcs_mdio_read(struct mii_bus *bus, int phy, int reg);
-int sja1110_pcs_mdio_write(struct mii_bus *bus, int phy, int reg, u16 val);
+int sja1105_pcs_mdio_read_c45(struct mii_bus *bus, int phy, int mmd, int reg);
+int sja1105_pcs_mdio_write_c45(struct mii_bus *bus, int phy, int mmd, int reg,
+ u16 val);
+int sja1110_pcs_mdio_read_c45(struct mii_bus *bus, int phy, int mmd, int reg);
+int sja1110_pcs_mdio_write_c45(struct mii_bus *bus, int phy, int mmd, int reg,
+ u16 val);
/* From sja1105_devlink.c */
int sja1105_devlink_setup(struct dsa_switch *ds);
diff --git a/drivers/net/dsa/sja1105/sja1105_mdio.c b/drivers/net/dsa/sja1105/sja1105_mdio.c
index 4059fcc8c832..01f1cb719042 100644
--- a/drivers/net/dsa/sja1105/sja1105_mdio.c
+++ b/drivers/net/dsa/sja1105/sja1105_mdio.c
@@ -7,20 +7,15 @@
#define SJA1110_PCS_BANK_REG SJA1110_SPI_ADDR(0x3fc)
-int sja1105_pcs_mdio_read(struct mii_bus *bus, int phy, int reg)
+int sja1105_pcs_mdio_read_c45(struct mii_bus *bus, int phy, int mmd, int reg)
{
struct sja1105_mdio_private *mdio_priv = bus->priv;
struct sja1105_private *priv = mdio_priv->priv;
u64 addr;
u32 tmp;
- u16 mmd;
int rc;
- if (!(reg & MII_ADDR_C45))
- return -EINVAL;
-
- mmd = (reg >> MII_DEVADDR_C45_SHIFT) & 0x1f;
- addr = (mmd << 16) | (reg & GENMASK(15, 0));
+ addr = (mmd << 16) | reg;
if (mmd != MDIO_MMD_VEND1 && mmd != MDIO_MMD_VEND2)
return 0xffff;
@@ -37,19 +32,15 @@ int sja1105_pcs_mdio_read(struct mii_bus *bus, int phy, int reg)
return tmp & 0xffff;
}
-int sja1105_pcs_mdio_write(struct mii_bus *bus, int phy, int reg, u16 val)
+int sja1105_pcs_mdio_write_c45(struct mii_bus *bus, int phy, int mmd,
+ int reg, u16 val)
{
struct sja1105_mdio_private *mdio_priv = bus->priv;
struct sja1105_private *priv = mdio_priv->priv;
u64 addr;
u32 tmp;
- u16 mmd;
-
- if (!(reg & MII_ADDR_C45))
- return -EINVAL;
- mmd = (reg >> MII_DEVADDR_C45_SHIFT) & 0x1f;
- addr = (mmd << 16) | (reg & GENMASK(15, 0));
+ addr = (mmd << 16) | reg;
tmp = val;
if (mmd != MDIO_MMD_VEND1 && mmd != MDIO_MMD_VEND2)
@@ -58,7 +49,7 @@ int sja1105_pcs_mdio_write(struct mii_bus *bus, int phy, int reg, u16 val)
return sja1105_xfer_u32(priv, SPI_WRITE, addr, &tmp, NULL);
}
-int sja1110_pcs_mdio_read(struct mii_bus *bus, int phy, int reg)
+int sja1110_pcs_mdio_read_c45(struct mii_bus *bus, int phy, int mmd, int reg)
{
struct sja1105_mdio_private *mdio_priv = bus->priv;
struct sja1105_private *priv = mdio_priv->priv;
@@ -66,17 +57,12 @@ int sja1110_pcs_mdio_read(struct mii_bus *bus, int phy, int reg)
int offset, bank;
u64 addr;
u32 tmp;
- u16 mmd;
int rc;
- if (!(reg & MII_ADDR_C45))
- return -EINVAL;
-
if (regs->pcs_base[phy] == SJA1105_RSV_ADDR)
return -ENODEV;
- mmd = (reg >> MII_DEVADDR_C45_SHIFT) & 0x1f;
- addr = (mmd << 16) | (reg & GENMASK(15, 0));
+ addr = (mmd << 16) | reg;
if (mmd == MDIO_MMD_VEND2 && (reg & GENMASK(15, 0)) == MII_PHYSID1)
return NXP_SJA1110_XPCS_ID >> 16;
@@ -108,7 +94,8 @@ int sja1110_pcs_mdio_read(struct mii_bus *bus, int phy, int reg)
return tmp & 0xffff;
}
-int sja1110_pcs_mdio_write(struct mii_bus *bus, int phy, int reg, u16 val)
+int sja1110_pcs_mdio_write_c45(struct mii_bus *bus, int phy, int mmd, int reg,
+ u16 val)
{
struct sja1105_mdio_private *mdio_priv = bus->priv;
struct sja1105_private *priv = mdio_priv->priv;
@@ -116,17 +103,12 @@ int sja1110_pcs_mdio_write(struct mii_bus *bus, int phy, int reg, u16 val)
int offset, bank;
u64 addr;
u32 tmp;
- u16 mmd;
int rc;
- if (!(reg & MII_ADDR_C45))
- return -EINVAL;
-
if (regs->pcs_base[phy] == SJA1105_RSV_ADDR)
return -ENODEV;
- mmd = (reg >> MII_DEVADDR_C45_SHIFT) & 0x1f;
- addr = (mmd << 16) | (reg & GENMASK(15, 0));
+ addr = (mmd << 16) | reg;
bank = addr >> 8;
offset = addr & GENMASK(7, 0);
@@ -167,7 +149,7 @@ static u64 sja1105_base_t1_encode_addr(struct sja1105_private *priv,
return regs->mdio_100base_t1 | (phy << 7) | (op << 5) | (xad << 0);
}
-static int sja1105_base_t1_mdio_read(struct mii_bus *bus, int phy, int reg)
+static int sja1105_base_t1_mdio_read_c22(struct mii_bus *bus, int phy, int reg)
{
struct sja1105_mdio_private *mdio_priv = bus->priv;
struct sja1105_private *priv = mdio_priv->priv;
@@ -175,30 +157,31 @@ static int sja1105_base_t1_mdio_read(struct mii_bus *bus, int phy, int reg)
u32 tmp;
int rc;
- if (reg & MII_ADDR_C45) {
- u16 mmd = (reg >> MII_DEVADDR_C45_SHIFT) & 0x1f;
-
- addr = sja1105_base_t1_encode_addr(priv, phy, SJA1105_C45_ADDR,
- mmd);
+ addr = sja1105_base_t1_encode_addr(priv, phy, SJA1105_C22, reg & 0x1f);
- tmp = reg & MII_REGADDR_C45_MASK;
+ rc = sja1105_xfer_u32(priv, SPI_READ, addr, &tmp, NULL);
+ if (rc < 0)
+ return rc;
- rc = sja1105_xfer_u32(priv, SPI_WRITE, addr, &tmp, NULL);
- if (rc < 0)
- return rc;
+ return tmp & 0xffff;
+}
- addr = sja1105_base_t1_encode_addr(priv, phy, SJA1105_C45_DATA,
- mmd);
+static int sja1105_base_t1_mdio_read_c45(struct mii_bus *bus, int phy,
+ int mmd, int reg)
+{
+ struct sja1105_mdio_private *mdio_priv = bus->priv;
+ struct sja1105_private *priv = mdio_priv->priv;
+ u64 addr;
+ u32 tmp;
+ int rc;
- rc = sja1105_xfer_u32(priv, SPI_READ, addr, &tmp, NULL);
- if (rc < 0)
- return rc;
+ addr = sja1105_base_t1_encode_addr(priv, phy, SJA1105_C45_ADDR, mmd);
- return tmp & 0xffff;
- }
+ rc = sja1105_xfer_u32(priv, SPI_WRITE, addr, &reg, NULL);
+ if (rc < 0)
+ return rc;
- /* Clause 22 read */
- addr = sja1105_base_t1_encode_addr(priv, phy, SJA1105_C22, reg & 0x1f);
+ addr = sja1105_base_t1_encode_addr(priv, phy, SJA1105_C45_DATA, mmd);
rc = sja1105_xfer_u32(priv, SPI_READ, addr, &tmp, NULL);
if (rc < 0)
@@ -207,41 +190,37 @@ static int sja1105_base_t1_mdio_read(struct mii_bus *bus, int phy, int reg)
return tmp & 0xffff;
}
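The access implemented above is the standard Clause 45 indirect sequence, in outline (descriptive only, not new driver code):

	/* 1. write the target register number through the C45_ADDR window
	 *    of the selected MMD;
	 * 2. transfer the data through the C45_DATA window of the same MMD.
	 */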
-static int sja1105_base_t1_mdio_write(struct mii_bus *bus, int phy, int reg,
- u16 val)
+static int sja1105_base_t1_mdio_write_c22(struct mii_bus *bus, int phy, int reg,
+ u16 val)
{
struct sja1105_mdio_private *mdio_priv = bus->priv;
struct sja1105_private *priv = mdio_priv->priv;
u64 addr;
u32 tmp;
- int rc;
-
- if (reg & MII_ADDR_C45) {
- u16 mmd = (reg >> MII_DEVADDR_C45_SHIFT) & 0x1f;
-
- addr = sja1105_base_t1_encode_addr(priv, phy, SJA1105_C45_ADDR,
- mmd);
- tmp = reg & MII_REGADDR_C45_MASK;
+ addr = sja1105_base_t1_encode_addr(priv, phy, SJA1105_C22, reg & 0x1f);
- rc = sja1105_xfer_u32(priv, SPI_WRITE, addr, &tmp, NULL);
- if (rc < 0)
- return rc;
+ tmp = val & 0xffff;
- addr = sja1105_base_t1_encode_addr(priv, phy, SJA1105_C45_DATA,
- mmd);
+ return sja1105_xfer_u32(priv, SPI_WRITE, addr, &tmp, NULL);
+}
- tmp = val & 0xffff;
+static int sja1105_base_t1_mdio_write_c45(struct mii_bus *bus, int phy,
+ int mmd, int reg, u16 val)
+{
+ struct sja1105_mdio_private *mdio_priv = bus->priv;
+ struct sja1105_private *priv = mdio_priv->priv;
+ u64 addr;
+ u32 tmp;
+ int rc;
- rc = sja1105_xfer_u32(priv, SPI_WRITE, addr, &tmp, NULL);
- if (rc < 0)
- return rc;
+ addr = sja1105_base_t1_encode_addr(priv, phy, SJA1105_C45_ADDR, mmd);
- return 0;
- }
+ rc = sja1105_xfer_u32(priv, SPI_WRITE, addr, &reg, NULL);
+ if (rc < 0)
+ return rc;
- /* Clause 22 write */
- addr = sja1105_base_t1_encode_addr(priv, phy, SJA1105_C22, reg & 0x1f);
+ addr = sja1105_base_t1_encode_addr(priv, phy, SJA1105_C45_DATA, mmd);
tmp = val & 0xffff;
@@ -256,9 +235,6 @@ static int sja1105_base_tx_mdio_read(struct mii_bus *bus, int phy, int reg)
u32 tmp;
int rc;
- if (reg & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
rc = sja1105_xfer_u32(priv, SPI_READ, regs->mdio_100base_tx + reg,
&tmp, NULL);
if (rc < 0)
@@ -275,9 +251,6 @@ static int sja1105_base_tx_mdio_write(struct mii_bus *bus, int phy, int reg,
const struct sja1105_regs *regs = priv->info->regs;
u32 tmp = val;
- if (reg & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
return sja1105_xfer_u32(priv, SPI_WRITE, regs->mdio_100base_tx + reg,
&tmp, NULL);
}
@@ -360,8 +333,10 @@ static int sja1105_mdiobus_base_t1_register(struct sja1105_private *priv,
bus->name = "SJA1110 100base-T1 MDIO bus";
snprintf(bus->id, MII_BUS_ID_SIZE, "%s-base-t1",
dev_name(priv->ds->dev));
- bus->read = sja1105_base_t1_mdio_read;
- bus->write = sja1105_base_t1_mdio_write;
+ bus->read = sja1105_base_t1_mdio_read_c22;
+ bus->write = sja1105_base_t1_mdio_write_c22;
+ bus->read_c45 = sja1105_base_t1_mdio_read_c45;
+ bus->write_c45 = sja1105_base_t1_mdio_write_c45;
bus->parent = priv->ds->dev;
mdio_priv = bus->priv;
mdio_priv->priv = priv;
@@ -398,7 +373,7 @@ static int sja1105_mdiobus_pcs_register(struct sja1105_private *priv)
int rc = 0;
int port;
- if (!priv->info->pcs_mdio_read || !priv->info->pcs_mdio_write)
+ if (!priv->info->pcs_mdio_read_c45 || !priv->info->pcs_mdio_write_c45)
return 0;
bus = mdiobus_alloc_size(sizeof(*mdio_priv));
@@ -408,8 +383,8 @@ static int sja1105_mdiobus_pcs_register(struct sja1105_private *priv)
bus->name = "SJA1105 PCS MDIO bus";
snprintf(bus->id, MII_BUS_ID_SIZE, "%s-pcs",
dev_name(ds->dev));
- bus->read = priv->info->pcs_mdio_read;
- bus->write = priv->info->pcs_mdio_write;
+ bus->read_c45 = priv->info->pcs_mdio_read_c45;
+ bus->write_c45 = priv->info->pcs_mdio_write_c45;
bus->parent = ds->dev;
/* There is no PHY on this MDIO bus => mask out all PHY addresses
* from auto probing.
diff --git a/drivers/net/dsa/sja1105/sja1105_spi.c b/drivers/net/dsa/sja1105/sja1105_spi.c
index d3c9ad6d39d4..5ce29c8057a4 100644
--- a/drivers/net/dsa/sja1105/sja1105_spi.c
+++ b/drivers/net/dsa/sja1105/sja1105_spi.c
@@ -719,8 +719,8 @@ const struct sja1105_info sja1105r_info = {
.ptp_cmd_packing = sja1105pqrs_ptp_cmd_packing,
.rxtstamp = sja1105_rxtstamp,
.clocking_setup = sja1105_clocking_setup,
- .pcs_mdio_read = sja1105_pcs_mdio_read,
- .pcs_mdio_write = sja1105_pcs_mdio_write,
+ .pcs_mdio_read_c45 = sja1105_pcs_mdio_read_c45,
+ .pcs_mdio_write_c45 = sja1105_pcs_mdio_write_c45,
.regs = &sja1105pqrs_regs,
.port_speed = {
[SJA1105_SPEED_AUTO] = 0,
@@ -756,8 +756,8 @@ const struct sja1105_info sja1105s_info = {
.ptp_cmd_packing = sja1105pqrs_ptp_cmd_packing,
.rxtstamp = sja1105_rxtstamp,
.clocking_setup = sja1105_clocking_setup,
- .pcs_mdio_read = sja1105_pcs_mdio_read,
- .pcs_mdio_write = sja1105_pcs_mdio_write,
+ .pcs_mdio_read_c45 = sja1105_pcs_mdio_read_c45,
+ .pcs_mdio_write_c45 = sja1105_pcs_mdio_write_c45,
.port_speed = {
[SJA1105_SPEED_AUTO] = 0,
[SJA1105_SPEED_10MBPS] = 3,
@@ -794,8 +794,8 @@ const struct sja1105_info sja1110a_info = {
.rxtstamp = sja1110_rxtstamp,
.txtstamp = sja1110_txtstamp,
.disable_microcontroller = sja1110_disable_microcontroller,
- .pcs_mdio_read = sja1110_pcs_mdio_read,
- .pcs_mdio_write = sja1110_pcs_mdio_write,
+ .pcs_mdio_read_c45 = sja1110_pcs_mdio_read_c45,
+ .pcs_mdio_write_c45 = sja1110_pcs_mdio_write_c45,
.port_speed = {
[SJA1105_SPEED_AUTO] = 0,
[SJA1105_SPEED_10MBPS] = 4,
@@ -844,8 +844,8 @@ const struct sja1105_info sja1110b_info = {
.rxtstamp = sja1110_rxtstamp,
.txtstamp = sja1110_txtstamp,
.disable_microcontroller = sja1110_disable_microcontroller,
- .pcs_mdio_read = sja1110_pcs_mdio_read,
- .pcs_mdio_write = sja1110_pcs_mdio_write,
+ .pcs_mdio_read_c45 = sja1110_pcs_mdio_read_c45,
+ .pcs_mdio_write_c45 = sja1110_pcs_mdio_write_c45,
.port_speed = {
[SJA1105_SPEED_AUTO] = 0,
[SJA1105_SPEED_10MBPS] = 4,
@@ -894,8 +894,8 @@ const struct sja1105_info sja1110c_info = {
.rxtstamp = sja1110_rxtstamp,
.txtstamp = sja1110_txtstamp,
.disable_microcontroller = sja1110_disable_microcontroller,
- .pcs_mdio_read = sja1110_pcs_mdio_read,
- .pcs_mdio_write = sja1110_pcs_mdio_write,
+ .pcs_mdio_read_c45 = sja1110_pcs_mdio_read_c45,
+ .pcs_mdio_write_c45 = sja1110_pcs_mdio_write_c45,
.port_speed = {
[SJA1105_SPEED_AUTO] = 0,
[SJA1105_SPEED_10MBPS] = 4,
@@ -944,8 +944,8 @@ const struct sja1105_info sja1110d_info = {
.rxtstamp = sja1110_rxtstamp,
.txtstamp = sja1110_txtstamp,
.disable_microcontroller = sja1110_disable_microcontroller,
- .pcs_mdio_read = sja1110_pcs_mdio_read,
- .pcs_mdio_write = sja1110_pcs_mdio_write,
+ .pcs_mdio_read_c45 = sja1110_pcs_mdio_read_c45,
+ .pcs_mdio_write_c45 = sja1110_pcs_mdio_write_c45,
.port_speed = {
[SJA1105_SPEED_AUTO] = 0,
[SJA1105_SPEED_10MBPS] = 4,
diff --git a/drivers/net/ethernet/actions/owl-emac.c b/drivers/net/ethernet/actions/owl-emac.c
index cd4d71b83c33..c6f8f852bff1 100644
--- a/drivers/net/ethernet/actions/owl-emac.c
+++ b/drivers/net/ethernet/actions/owl-emac.c
@@ -1275,9 +1275,6 @@ static int owl_emac_mdio_read(struct mii_bus *bus, int addr, int regnum)
u32 data, tmp;
int ret;
- if (regnum & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
data = OWL_EMAC_BIT_MAC_CSR10_SB;
data |= OWL_EMAC_VAL_MAC_CSR10_OPCODE_RD << OWL_EMAC_OFF_MAC_CSR10_OPCODE;
@@ -1305,9 +1302,6 @@ owl_emac_mdio_write(struct mii_bus *bus, int addr, int regnum, u16 val)
struct owl_emac_priv *priv = bus->priv;
u32 data, tmp;
- if (regnum & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
data = OWL_EMAC_BIT_MAC_CSR10_SB;
data |= OWL_EMAC_VAL_MAC_CSR10_OPCODE_WR << OWL_EMAC_OFF_MAC_CSR10_OPCODE;
diff --git a/drivers/net/ethernet/adi/adin1110.c b/drivers/net/ethernet/adi/adin1110.c
index c26b8597945b..3f316a0f4158 100644
--- a/drivers/net/ethernet/adi/adin1110.c
+++ b/drivers/net/ethernet/adi/adin1110.c
@@ -523,7 +523,6 @@ static int adin1110_register_mdiobus(struct adin1110_priv *priv,
mii_bus->priv = priv;
mii_bus->parent = dev;
mii_bus->phy_mask = ~((u32)GENMASK(2, 0));
- mii_bus->probe_capabilities = MDIOBUS_C22;
snprintf(mii_bus->id, MII_BUS_ID_SIZE, "%s", dev_name(dev));
ret = devm_mdiobus_register(dev, mii_bus);
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index e8ad5ea31aff..d3999db7c6a2 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -597,7 +597,9 @@ static int ena_xdp_set(struct net_device *netdev, struct netdev_bpf *bpf)
if (rc)
return rc;
}
+ xdp_features_set_redirect_target(netdev, false);
} else if (old_bpf_prog) {
+ xdp_features_clear_redirect_target(netdev);
rc = ena_destroy_and_free_all_xdp_queues(adapter);
if (rc)
return rc;
@@ -4103,6 +4105,8 @@ static void ena_set_conf_feat_params(struct ena_adapter *adapter,
/* Set offload features */
ena_set_dev_offloads(feat, netdev);
+ netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;
+
adapter->max_mtu = feat->dev_attr.max_mtu;
netdev->max_mtu = adapter->max_mtu;
netdev->min_mtu = ENA_MIN_MTU;
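The ena hunks above use the XDP feature-advertising API that is new in this cycle: a driver reports its baseline capabilities in netdev->xdp_features at probe time and flips the redirect-target bit as a program is attached or detached. A minimal sketch of that pattern follows; the NETDEV_XDP_ACT_* flags and the xdp_features_{set,clear}_redirect_target() helpers are the real interface, while all foo_* names are invented for illustration.

#include <linux/netdevice.h>
#include <net/xdp.h>

/* At probe: advertise what the datapath can always do. */
static void foo_set_xdp_features(struct net_device *netdev)
{
	netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;
}

/* On attach/detach: the device is a valid XDP_REDIRECT target only while
 * a program is loaded; "false" means no multi-buffer support on the
 * ndo_xdp_xmit path.
 */
static void foo_switch_xdp_prog(struct net_device *netdev,
				struct bpf_prog *prog,
				struct bpf_prog *old_prog)
{
	if (prog)
		xdp_features_set_redirect_target(netdev, false);
	else if (old_prog)
		xdp_features_clear_redirect_target(netdev);
}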
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
index 466273b22f0a..3b70f6737633 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
@@ -1285,6 +1285,22 @@
#define MDIO_PMA_RX_CTRL1 0x8051
#endif
+#ifndef MDIO_PMA_RX_LSTS
+#define MDIO_PMA_RX_LSTS 0x018020
+#endif
+
+#ifndef MDIO_PMA_RX_EQ_CTRL4
+#define MDIO_PMA_RX_EQ_CTRL4 0x0001805C
+#endif
+
+#ifndef MDIO_PMA_MP_MISC_STS
+#define MDIO_PMA_MP_MISC_STS 0x0078
+#endif
+
+#ifndef MDIO_PMA_PHY_RX_EQ_CEU
+#define MDIO_PMA_PHY_RX_EQ_CEU 0x1800E
+#endif
+
#ifndef MDIO_PCS_DIG_CTRL
#define MDIO_PCS_DIG_CTRL 0x8000
#endif
@@ -1395,6 +1411,28 @@
#define XGBE_PMA_RX_RST_0_RESET_ON 0x10
#define XGBE_PMA_RX_RST_0_RESET_OFF 0x00
+#define XGBE_PMA_RX_SIG_DET_0_MASK BIT(4)
+#define XGBE_PMA_RX_SIG_DET_0_ENABLE BIT(4)
+#define XGBE_PMA_RX_SIG_DET_0_DISABLE 0x0000
+
+#define XGBE_PMA_RX_VALID_0_MASK BIT(12)
+#define XGBE_PMA_RX_VALID_0_ENABLE BIT(12)
+#define XGBE_PMA_RX_VALID_0_DISABLE 0x0000
+
+#define XGBE_PMA_RX_AD_REQ_MASK BIT(12)
+#define XGBE_PMA_RX_AD_REQ_ENABLE BIT(12)
+#define XGBE_PMA_RX_AD_REQ_DISABLE 0x0000
+
+#define XGBE_PMA_RX_ADPT_ACK_MASK BIT(12)
+#define XGBE_PMA_RX_ADPT_ACK BIT(12)
+
+#define XGBE_PMA_CFF_UPDTM1_VLD BIT(8)
+#define XGBE_PMA_CFF_UPDT0_VLD BIT(9)
+#define XGBE_PMA_CFF_UPDT1_VLD BIT(10)
+#define XGBE_PMA_CFF_UPDT_MASK (XGBE_PMA_CFF_UPDTM1_VLD |\
+ XGBE_PMA_CFF_UPDT0_VLD | \
+ XGBE_PMA_CFF_UPDT1_VLD)
+
#define XGBE_PMA_PLL_CTRL_MASK BIT(15)
#define XGBE_PMA_PLL_CTRL_ENABLE BIT(15)
#define XGBE_PMA_PLL_CTRL_DISABLE 0x0000
@@ -1699,20 +1737,21 @@ do { \
} while (0)
/* Macros for building, reading or writing register values or bits
- * using MDIO. Different from above because of the use of standardized
- * Linux include values. No shifting is performed with the bit
- * operations, everything works on mask values.
+ * using MDIO.
*/
+
+#define XGBE_ADDR_C45 BIT(30)
+
#define XMDIO_READ(_pdata, _mmd, _reg) \
((_pdata)->hw_if.read_mmd_regs((_pdata), 0, \
- MII_ADDR_C45 | (_mmd << 16) | ((_reg) & 0xffff)))
+ XGBE_ADDR_C45 | (_mmd << 16) | ((_reg) & 0xffff)))
#define XMDIO_READ_BITS(_pdata, _mmd, _reg, _mask) \
(XMDIO_READ((_pdata), _mmd, _reg) & _mask)
#define XMDIO_WRITE(_pdata, _mmd, _reg, _val) \
((_pdata)->hw_if.write_mmd_regs((_pdata), 0, \
- MII_ADDR_C45 | (_mmd << 16) | ((_reg) & 0xffff), (_val)))
+ XGBE_ADDR_C45 | (_mmd << 16) | ((_reg) & 0xffff), (_val)))
#define XMDIO_WRITE_BITS(_pdata, _mmd, _reg, _mask, _val) \
do { \
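With MII_ADDR_C45 removed from the common MDIO headers, the driver keeps a private XGBE_ADDR_C45 flag for its internal MMD accessors. The macros above fold the flag, the MMD number and the register offset into a single word; a worked illustration of that encoding, with an invented helper name:

#include <linux/bits.h>

#define XGBE_ADDR_C45	BIT(30)

/* bit 30 = Clause 45 flag, bits 20:16 = MMD device, bits 15:0 = register */
static unsigned int xgbe_pack_c45(unsigned int mmd, unsigned int reg)
{
	return XGBE_ADDR_C45 | (mmd << 16) | (reg & 0xffff);
}

/* e.g. MMD 3 (PCS), register 0x0001 (MDIO_STAT1) yields 0x40030001 */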
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index 4030d619e84f..f393228d41c7 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -814,6 +814,9 @@ static int xgbe_set_speed(struct xgbe_prv_data *pdata, int speed)
unsigned int ss;
switch (speed) {
+ case SPEED_10:
+ ss = 0x07;
+ break;
case SPEED_1000:
ss = 0x03;
break;
@@ -1154,8 +1157,8 @@ static int xgbe_read_mmd_regs_v2(struct xgbe_prv_data *pdata, int prtad,
unsigned int mmd_address, index, offset;
int mmd_data;
- if (mmd_reg & MII_ADDR_C45)
- mmd_address = mmd_reg & ~MII_ADDR_C45;
+ if (mmd_reg & XGBE_ADDR_C45)
+ mmd_address = mmd_reg & ~XGBE_ADDR_C45;
else
mmd_address = (pdata->mdio_mmd << 16) | (mmd_reg & 0xffff);
@@ -1186,8 +1189,8 @@ static void xgbe_write_mmd_regs_v2(struct xgbe_prv_data *pdata, int prtad,
unsigned long flags;
unsigned int mmd_address, index, offset;
- if (mmd_reg & MII_ADDR_C45)
- mmd_address = mmd_reg & ~MII_ADDR_C45;
+ if (mmd_reg & XGBE_ADDR_C45)
+ mmd_address = mmd_reg & ~XGBE_ADDR_C45;
else
mmd_address = (pdata->mdio_mmd << 16) | (mmd_reg & 0xffff);
@@ -1217,8 +1220,8 @@ static int xgbe_read_mmd_regs_v1(struct xgbe_prv_data *pdata, int prtad,
unsigned int mmd_address;
int mmd_data;
- if (mmd_reg & MII_ADDR_C45)
- mmd_address = mmd_reg & ~MII_ADDR_C45;
+ if (mmd_reg & XGBE_ADDR_C45)
+ mmd_address = mmd_reg & ~XGBE_ADDR_C45;
else
mmd_address = (pdata->mdio_mmd << 16) | (mmd_reg & 0xffff);
@@ -1245,8 +1248,8 @@ static void xgbe_write_mmd_regs_v1(struct xgbe_prv_data *pdata, int prtad,
unsigned int mmd_address;
unsigned long flags;
- if (mmd_reg & MII_ADDR_C45)
- mmd_address = mmd_reg & ~MII_ADDR_C45;
+ if (mmd_reg & XGBE_ADDR_C45)
+ mmd_address = mmd_reg & ~XGBE_ADDR_C45;
else
mmd_address = (pdata->mdio_mmd << 16) | (mmd_reg & 0xffff);
@@ -1291,11 +1294,20 @@ static void xgbe_write_mmd_regs(struct xgbe_prv_data *pdata, int prtad,
}
}
-static unsigned int xgbe_create_mdio_sca(int port, int reg)
+static unsigned int xgbe_create_mdio_sca_c22(int port, int reg)
{
- unsigned int mdio_sca, da;
+ unsigned int mdio_sca;
+
+ mdio_sca = 0;
+ XGMAC_SET_BITS(mdio_sca, MAC_MDIOSCAR, RA, reg);
+ XGMAC_SET_BITS(mdio_sca, MAC_MDIOSCAR, PA, port);
- da = (reg & MII_ADDR_C45) ? reg >> 16 : 0;
+ return mdio_sca;
+}
+
+static unsigned int xgbe_create_mdio_sca_c45(int port, unsigned int da, int reg)
+{
+ unsigned int mdio_sca;
mdio_sca = 0;
XGMAC_SET_BITS(mdio_sca, MAC_MDIOSCAR, RA, reg);
@@ -1305,14 +1317,13 @@ static unsigned int xgbe_create_mdio_sca(int port, int reg)
return mdio_sca;
}
-static int xgbe_write_ext_mii_regs(struct xgbe_prv_data *pdata, int addr,
- int reg, u16 val)
+static int xgbe_write_ext_mii_regs(struct xgbe_prv_data *pdata,
+ unsigned int mdio_sca, u16 val)
{
- unsigned int mdio_sca, mdio_sccd;
+ unsigned int mdio_sccd;
reinit_completion(&pdata->mdio_complete);
- mdio_sca = xgbe_create_mdio_sca(addr, reg);
XGMAC_IOWRITE(pdata, MAC_MDIOSCAR, mdio_sca);
mdio_sccd = 0;
@@ -1329,14 +1340,33 @@ static int xgbe_write_ext_mii_regs(struct xgbe_prv_data *pdata, int addr,
return 0;
}
-static int xgbe_read_ext_mii_regs(struct xgbe_prv_data *pdata, int addr,
- int reg)
+static int xgbe_write_ext_mii_regs_c22(struct xgbe_prv_data *pdata, int addr,
+ int reg, u16 val)
{
- unsigned int mdio_sca, mdio_sccd;
+ unsigned int mdio_sca;
+
+ mdio_sca = xgbe_create_mdio_sca_c22(addr, reg);
+
+ return xgbe_write_ext_mii_regs(pdata, mdio_sca, val);
+}
+
+static int xgbe_write_ext_mii_regs_c45(struct xgbe_prv_data *pdata, int addr,
+ int devad, int reg, u16 val)
+{
+ unsigned int mdio_sca;
+
+ mdio_sca = xgbe_create_mdio_sca_c45(addr, devad, reg);
+
+ return xgbe_write_ext_mii_regs(pdata, mdio_sca, val);
+}
+
+static int xgbe_read_ext_mii_regs(struct xgbe_prv_data *pdata,
+ unsigned int mdio_sca)
+{
+ unsigned int mdio_sccd;
reinit_completion(&pdata->mdio_complete);
- mdio_sca = xgbe_create_mdio_sca(addr, reg);
XGMAC_IOWRITE(pdata, MAC_MDIOSCAR, mdio_sca);
mdio_sccd = 0;
@@ -1352,6 +1382,26 @@ static int xgbe_read_ext_mii_regs(struct xgbe_prv_data *pdata, int addr,
return XGMAC_IOREAD_BITS(pdata, MAC_MDIOSCCDR, DATA);
}
+static int xgbe_read_ext_mii_regs_c22(struct xgbe_prv_data *pdata, int addr,
+ int reg)
+{
+ unsigned int mdio_sca;
+
+ mdio_sca = xgbe_create_mdio_sca_c22(addr, reg);
+
+ return xgbe_read_ext_mii_regs(pdata, mdio_sca);
+}
+
+static int xgbe_read_ext_mii_regs_c45(struct xgbe_prv_data *pdata, int addr,
+ int devad, int reg)
+{
+ unsigned int mdio_sca;
+
+ mdio_sca = xgbe_create_mdio_sca_c45(addr, devad, reg);
+
+ return xgbe_read_ext_mii_regs(pdata, mdio_sca);
+}
+
static int xgbe_set_ext_mii_mode(struct xgbe_prv_data *pdata, unsigned int port,
enum xgbe_mdio_mode mode)
{
@@ -3565,8 +3615,10 @@ void xgbe_init_function_ptrs_dev(struct xgbe_hw_if *hw_if)
hw_if->set_speed = xgbe_set_speed;
hw_if->set_ext_mii_mode = xgbe_set_ext_mii_mode;
- hw_if->read_ext_mii_regs = xgbe_read_ext_mii_regs;
- hw_if->write_ext_mii_regs = xgbe_write_ext_mii_regs;
+ hw_if->read_ext_mii_regs_c22 = xgbe_read_ext_mii_regs_c22;
+ hw_if->write_ext_mii_regs_c22 = xgbe_write_ext_mii_regs_c22;
+ hw_if->read_ext_mii_regs_c45 = xgbe_read_ext_mii_regs_c45;
+ hw_if->write_ext_mii_regs_c45 = xgbe_write_ext_mii_regs_c45;
hw_if->set_gpio = xgbe_set_gpio;
hw_if->clr_gpio = xgbe_clr_gpio;
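The MDIOSCAR builder is now split so that the C45 device address arrives as an explicit devad argument instead of being carved out of a flag-bearing register number, and both builders feed the shared xgbe_{read,write}_ext_mii_regs() core that kicks the command register and waits for completion. A sketch of the C45 builder, whose middle the hunk above elides, assuming the MAC_MDIOSCAR DA field carries the MMD and using the driver's own XGMAC_SET_BITS helper:

static unsigned int xgbe_sketch_mdio_sca_c45(int port, unsigned int da,
					     int reg)
{
	unsigned int mdio_sca = 0;

	XGMAC_SET_BITS(mdio_sca, MAC_MDIOSCAR, RA, reg);	/* register */
	XGMAC_SET_BITS(mdio_sca, MAC_MDIOSCAR, PA, port);	/* PHY/port */
	XGMAC_SET_BITS(mdio_sca, MAC_MDIOSCAR, DA, da);		/* MMD devad */

	return mdio_sca;
}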
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
index 43fdd111235a..33a9574e9e04 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
@@ -274,6 +274,15 @@ static void xgbe_sgmii_1000_mode(struct xgbe_prv_data *pdata)
pdata->phy_if.phy_impl.set_mode(pdata, XGBE_MODE_SGMII_1000);
}
+static void xgbe_sgmii_10_mode(struct xgbe_prv_data *pdata)
+{
+ /* Set MAC to 10M speed */
+ pdata->hw_if.set_speed(pdata, SPEED_10);
+
+ /* Call PHY implementation support to complete rate change */
+ pdata->phy_if.phy_impl.set_mode(pdata, XGBE_MODE_SGMII_10);
+}
+
static void xgbe_sgmii_100_mode(struct xgbe_prv_data *pdata)
{
/* Set MAC to 1G speed */
@@ -306,6 +315,9 @@ static void xgbe_change_mode(struct xgbe_prv_data *pdata,
case XGBE_MODE_KR:
xgbe_kr_mode(pdata);
break;
+ case XGBE_MODE_SGMII_10:
+ xgbe_sgmii_10_mode(pdata);
+ break;
case XGBE_MODE_SGMII_100:
xgbe_sgmii_100_mode(pdata);
break;
@@ -1077,6 +1089,8 @@ static const char *xgbe_phy_fc_string(struct xgbe_prv_data *pdata)
static const char *xgbe_phy_speed_string(int speed)
{
switch (speed) {
+ case SPEED_10:
+ return "10Mbps";
case SPEED_100:
return "100Mbps";
case SPEED_1000:
@@ -1164,6 +1178,7 @@ static int xgbe_phy_config_fixed(struct xgbe_prv_data *pdata)
case XGBE_MODE_KX_1000:
case XGBE_MODE_KX_2500:
case XGBE_MODE_KR:
+ case XGBE_MODE_SGMII_10:
case XGBE_MODE_SGMII_100:
case XGBE_MODE_SGMII_1000:
case XGBE_MODE_X:
@@ -1225,6 +1240,8 @@ static int __xgbe_phy_config_aneg(struct xgbe_prv_data *pdata, bool set_mode)
xgbe_set_mode(pdata, XGBE_MODE_SGMII_1000);
} else if (xgbe_use_mode(pdata, XGBE_MODE_SGMII_100)) {
xgbe_set_mode(pdata, XGBE_MODE_SGMII_100);
+ } else if (xgbe_use_mode(pdata, XGBE_MODE_SGMII_10)) {
+ xgbe_set_mode(pdata, XGBE_MODE_SGMII_10);
} else {
enable_irq(pdata->an_irq);
ret = -EINVAL;
@@ -1325,6 +1342,9 @@ static void xgbe_phy_status_result(struct xgbe_prv_data *pdata)
mode = xgbe_phy_status_aneg(pdata);
switch (mode) {
+ case XGBE_MODE_SGMII_10:
+ pdata->phy.speed = SPEED_10;
+ break;
case XGBE_MODE_SGMII_100:
pdata->phy.speed = SPEED_100;
break;
@@ -1467,6 +1487,8 @@ static int xgbe_phy_start(struct xgbe_prv_data *pdata)
xgbe_sgmii_1000_mode(pdata);
} else if (xgbe_use_mode(pdata, XGBE_MODE_SGMII_100)) {
xgbe_sgmii_100_mode(pdata);
+ } else if (xgbe_use_mode(pdata, XGBE_MODE_SGMII_10)) {
+ xgbe_sgmii_10_mode(pdata);
} else {
ret = -EINVAL;
goto err_irq;
@@ -1564,6 +1586,8 @@ static int xgbe_phy_best_advertised_speed(struct xgbe_prv_data *pdata)
return SPEED_1000;
else if (XGBE_ADV(lks, 100baseT_Full))
return SPEED_100;
+ else if (XGBE_ADV(lks, 10baseT_Full))
+ return SPEED_10;
return SPEED_UNKNOWN;
}
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
index c731a04731f8..16e7fb2c0dae 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
@@ -124,6 +124,7 @@
#include "xgbe.h"
#include "xgbe-common.h"
+#define XGBE_PHY_PORT_SPEED_10 BIT(0)
#define XGBE_PHY_PORT_SPEED_100 BIT(1)
#define XGBE_PHY_PORT_SPEED_1000 BIT(2)
#define XGBE_PHY_PORT_SPEED_2500 BIT(3)
@@ -387,6 +388,10 @@ struct xgbe_phy_data {
static DEFINE_MUTEX(xgbe_phy_comm_lock);
static enum xgbe_an_mode xgbe_phy_an_mode(struct xgbe_prv_data *pdata);
+static void xgbe_phy_rrc(struct xgbe_prv_data *pdata);
+static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
+ enum xgbe_mb_cmd cmd,
+ enum xgbe_mb_subcmd sub_cmd);
static int xgbe_phy_i2c_xfer(struct xgbe_prv_data *pdata,
struct xgbe_i2c_op *i2c_op)
@@ -599,20 +604,27 @@ static int xgbe_phy_get_comm_ownership(struct xgbe_prv_data *pdata)
return -ETIMEDOUT;
}
-static int xgbe_phy_mdio_mii_write(struct xgbe_prv_data *pdata, int addr,
- int reg, u16 val)
+static int xgbe_phy_mdio_mii_write_c22(struct xgbe_prv_data *pdata, int addr,
+ int reg, u16 val)
{
struct xgbe_phy_data *phy_data = pdata->phy_data;
- if (reg & MII_ADDR_C45) {
- if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL45)
- return -ENOTSUPP;
- } else {
- if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL22)
- return -ENOTSUPP;
- }
+ if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL22)
+ return -EOPNOTSUPP;
+
+ return pdata->hw_if.write_ext_mii_regs_c22(pdata, addr, reg, val);
+}
- return pdata->hw_if.write_ext_mii_regs(pdata, addr, reg, val);
+static int xgbe_phy_mdio_mii_write_c45(struct xgbe_prv_data *pdata, int addr,
+ int devad, int reg, u16 val)
+{
+ struct xgbe_phy_data *phy_data = pdata->phy_data;
+
+ if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL45)
+ return -EOPNOTSUPP;
+
+ return pdata->hw_if.write_ext_mii_regs_c45(pdata, addr, devad,
+ reg, val);
}
static int xgbe_phy_i2c_mii_write(struct xgbe_prv_data *pdata, int reg, u16 val)
@@ -637,7 +649,8 @@ static int xgbe_phy_i2c_mii_write(struct xgbe_prv_data *pdata, int reg, u16 val)
return ret;
}
-static int xgbe_phy_mii_write(struct mii_bus *mii, int addr, int reg, u16 val)
+static int xgbe_phy_mii_write_c22(struct mii_bus *mii, int addr, int reg,
+ u16 val)
{
struct xgbe_prv_data *pdata = mii->priv;
struct xgbe_phy_data *phy_data = pdata->phy_data;
@@ -650,29 +663,58 @@ static int xgbe_phy_mii_write(struct mii_bus *mii, int addr, int reg, u16 val)
if (phy_data->conn_type == XGBE_CONN_TYPE_SFP)
ret = xgbe_phy_i2c_mii_write(pdata, reg, val);
else if (phy_data->conn_type & XGBE_CONN_TYPE_MDIO)
- ret = xgbe_phy_mdio_mii_write(pdata, addr, reg, val);
+ ret = xgbe_phy_mdio_mii_write_c22(pdata, addr, reg, val);
else
- ret = -ENOTSUPP;
+ ret = -EOPNOTSUPP;
xgbe_phy_put_comm_ownership(pdata);
return ret;
}
-static int xgbe_phy_mdio_mii_read(struct xgbe_prv_data *pdata, int addr,
- int reg)
+static int xgbe_phy_mii_write_c45(struct mii_bus *mii, int addr, int devad,
+ int reg, u16 val)
{
+ struct xgbe_prv_data *pdata = mii->priv;
struct xgbe_phy_data *phy_data = pdata->phy_data;
+ int ret;
- if (reg & MII_ADDR_C45) {
- if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL45)
- return -ENOTSUPP;
- } else {
- if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL22)
- return -ENOTSUPP;
- }
+ ret = xgbe_phy_get_comm_ownership(pdata);
+ if (ret)
+ return ret;
- return pdata->hw_if.read_ext_mii_regs(pdata, addr, reg);
+ if (phy_data->conn_type == XGBE_CONN_TYPE_SFP)
+ ret = -EOPNOTSUPP;
+ else if (phy_data->conn_type & XGBE_CONN_TYPE_MDIO)
+ ret = xgbe_phy_mdio_mii_write_c45(pdata, addr, devad, reg, val);
+ else
+ ret = -EOPNOTSUPP;
+
+ xgbe_phy_put_comm_ownership(pdata);
+
+ return ret;
+}
+
+static int xgbe_phy_mdio_mii_read_c22(struct xgbe_prv_data *pdata, int addr,
+ int reg)
+{
+ struct xgbe_phy_data *phy_data = pdata->phy_data;
+
+ if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL22)
+ return -EOPNOTSUPP;
+
+ return pdata->hw_if.read_ext_mii_regs_c22(pdata, addr, reg);
+}
+
+static int xgbe_phy_mdio_mii_read_c45(struct xgbe_prv_data *pdata, int addr,
+ int devad, int reg)
+{
+ struct xgbe_phy_data *phy_data = pdata->phy_data;
+
+ if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL45)
+ return -EOPNOTSUPP;
+
+ return pdata->hw_if.read_ext_mii_regs_c45(pdata, addr, devad, reg);
}
static int xgbe_phy_i2c_mii_read(struct xgbe_prv_data *pdata, int reg)
@@ -697,7 +739,7 @@ static int xgbe_phy_i2c_mii_read(struct xgbe_prv_data *pdata, int reg)
return ret;
}
-static int xgbe_phy_mii_read(struct mii_bus *mii, int addr, int reg)
+static int xgbe_phy_mii_read_c22(struct mii_bus *mii, int addr, int reg)
{
struct xgbe_prv_data *pdata = mii->priv;
struct xgbe_phy_data *phy_data = pdata->phy_data;
@@ -710,7 +752,30 @@ static int xgbe_phy_mii_read(struct mii_bus *mii, int addr, int reg)
if (phy_data->conn_type == XGBE_CONN_TYPE_SFP)
ret = xgbe_phy_i2c_mii_read(pdata, reg);
else if (phy_data->conn_type & XGBE_CONN_TYPE_MDIO)
- ret = xgbe_phy_mdio_mii_read(pdata, addr, reg);
+ ret = xgbe_phy_mdio_mii_read_c22(pdata, addr, reg);
+ else
+ ret = -EOPNOTSUPP;
+
+ xgbe_phy_put_comm_ownership(pdata);
+
+ return ret;
+}
+
+static int xgbe_phy_mii_read_c45(struct mii_bus *mii, int addr, int devad,
+ int reg)
+{
+ struct xgbe_prv_data *pdata = mii->priv;
+ struct xgbe_phy_data *phy_data = pdata->phy_data;
+ int ret;
+
+ ret = xgbe_phy_get_comm_ownership(pdata);
+ if (ret)
+ return ret;
+
+ if (phy_data->conn_type == XGBE_CONN_TYPE_SFP)
+ ret = -EOPNOTSUPP;
+ else if (phy_data->conn_type & XGBE_CONN_TYPE_MDIO)
+ ret = xgbe_phy_mdio_mii_read_c45(pdata, addr, devad, reg);
else
ret = -EOPNOTSUPP;
@@ -759,6 +824,8 @@ static void xgbe_phy_sfp_phy_settings(struct xgbe_prv_data *pdata)
XGBE_SET_SUP(lks, Pause);
XGBE_SET_SUP(lks, Asym_Pause);
if (phy_data->sfp_base == XGBE_SFP_BASE_1000_T) {
+ if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10)
+ XGBE_SET_SUP(lks, 10baseT_Full);
if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100)
XGBE_SET_SUP(lks, 100baseT_Full);
if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000)
@@ -1542,6 +1609,16 @@ static enum xgbe_mode xgbe_phy_an37_sgmii_outcome(struct xgbe_prv_data *pdata)
xgbe_phy_phydev_flowctrl(pdata);
switch (pdata->an_status & XGBE_SGMII_AN_LINK_SPEED) {
+ case XGBE_SGMII_AN_LINK_SPEED_10:
+ if (pdata->an_status & XGBE_SGMII_AN_LINK_DUPLEX) {
+ XGBE_SET_LP_ADV(lks, 10baseT_Full);
+ mode = XGBE_MODE_SGMII_10;
+ } else {
+ /* Half-duplex not supported */
+ XGBE_SET_LP_ADV(lks, 10baseT_Half);
+ mode = XGBE_MODE_UNKNOWN;
+ }
+ break;
case XGBE_SGMII_AN_LINK_SPEED_100:
if (pdata->an_status & XGBE_SGMII_AN_LINK_DUPLEX) {
XGBE_SET_LP_ADV(lks, 100baseT_Full);
@@ -1658,7 +1735,10 @@ static enum xgbe_mode xgbe_phy_an73_redrv_outcome(struct xgbe_prv_data *pdata)
switch (phy_data->sfp_base) {
case XGBE_SFP_BASE_1000_T:
if (phy_data->phydev &&
- (phy_data->phydev->speed == SPEED_100))
+ (phy_data->phydev->speed == SPEED_10))
+ mode = XGBE_MODE_SGMII_10;
+ else if (phy_data->phydev &&
+ (phy_data->phydev->speed == SPEED_100))
mode = XGBE_MODE_SGMII_100;
else
mode = XGBE_MODE_SGMII_1000;
@@ -1673,7 +1753,10 @@ static enum xgbe_mode xgbe_phy_an73_redrv_outcome(struct xgbe_prv_data *pdata)
break;
default:
if (phy_data->phydev &&
- (phy_data->phydev->speed == SPEED_100))
+ (phy_data->phydev->speed == SPEED_10))
+ mode = XGBE_MODE_SGMII_10;
+ else if (phy_data->phydev &&
+ (phy_data->phydev->speed == SPEED_100))
mode = XGBE_MODE_SGMII_100;
else
mode = XGBE_MODE_SGMII_1000;
@@ -1803,6 +1886,9 @@ static void xgbe_phy_an_advertising(struct xgbe_prv_data *pdata,
if (phy_data->phydev &&
(phy_data->phydev->speed == SPEED_10000))
XGBE_SET_ADV(dlks, 10000baseKR_Full);
+ else if (phy_data->phydev &&
+ (phy_data->phydev->speed == SPEED_2500))
+ XGBE_SET_ADV(dlks, 2500baseX_Full);
else
XGBE_SET_ADV(dlks, 1000baseKX_Full);
break;
@@ -1910,8 +1996,8 @@ static int xgbe_phy_set_redrv_mode_mdio(struct xgbe_prv_data *pdata,
redrv_reg = XGBE_PHY_REDRV_MODE_REG + (phy_data->redrv_lane * 0x1000);
redrv_val = (u16)mode;
- return pdata->hw_if.write_ext_mii_regs(pdata, phy_data->redrv_addr,
- redrv_reg, redrv_val);
+ return pdata->hw_if.write_ext_mii_regs_c22(pdata, phy_data->redrv_addr,
+ redrv_reg, redrv_val);
}
static int xgbe_phy_set_redrv_mode_i2c(struct xgbe_prv_data *pdata,
@@ -1956,6 +2042,93 @@ static void xgbe_phy_set_redrv_mode(struct xgbe_prv_data *pdata)
xgbe_phy_put_comm_ownership(pdata);
}
+#define MAX_RX_ADAPT_RETRIES 1
+#define XGBE_PMA_RX_VAL_SIG_MASK (XGBE_PMA_RX_SIG_DET_0_MASK | \
+ XGBE_PMA_RX_VALID_0_MASK)
+
+static void xgbe_set_rx_adap_mode(struct xgbe_prv_data *pdata,
+ enum xgbe_mode mode)
+{
+ if (pdata->rx_adapt_retries++ >= MAX_RX_ADAPT_RETRIES) {
+ pdata->rx_adapt_retries = 0;
+ return;
+ }
+
+ xgbe_phy_perform_ratechange(pdata,
+ mode == XGBE_MODE_KR ?
+ XGBE_MB_CMD_SET_10G_KR :
+ XGBE_MB_CMD_SET_10G_SFI,
+ XGBE_MB_SUBCMD_RX_ADAP);
+}
+
+static void xgbe_rx_adaptation(struct xgbe_prv_data *pdata)
+{
+ struct xgbe_phy_data *phy_data = pdata->phy_data;
+ unsigned int reg;
+
+ /* Step 2: Force the PCS to send an RX_ADAPT request to the PHY */
+ XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_RX_EQ_CTRL4,
+ XGBE_PMA_RX_AD_REQ_MASK, XGBE_PMA_RX_AD_REQ_ENABLE);
+
+ /* Step 3: Wait for RX_ADAPT ACK from the PHY */
+ msleep(200);
+
+ /* Software polls for coefficient update command (given by local PHY) */
+ reg = XMDIO_READ(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_PHY_RX_EQ_CEU);
+
+ /* Clear the RX_AD_REQ bit */
+ XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_RX_EQ_CTRL4,
+ XGBE_PMA_RX_AD_REQ_MASK, XGBE_PMA_RX_AD_REQ_DISABLE);
+
+ /* Check if coefficient update command is set */
+ if ((reg & XGBE_PMA_CFF_UPDT_MASK) != XGBE_PMA_CFF_UPDT_MASK)
+ goto set_mode;
+
+ /* Step 4: Check for Block lock */
+
+ /* Link status is latched low, so read once to clear
+ * and then read again to get current state
+ */
+ reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
+ reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
+ if (reg & MDIO_STAT1_LSTATUS) {
+ /* If the block lock is found, update the helpers
+ * and declare the link up
+ */
+ netif_dbg(pdata, link, pdata->netdev, "Block_lock done");
+ pdata->rx_adapt_done = true;
+ pdata->mode_set = false;
+ return;
+ }
+
+set_mode:
+ xgbe_set_rx_adap_mode(pdata, phy_data->cur_mode);
+}
+
+static void xgbe_phy_rx_adaptation(struct xgbe_prv_data *pdata)
+{
+ unsigned int reg;
+
+rx_adapt_reinit:
+ reg = XMDIO_READ_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_RX_LSTS,
+ XGBE_PMA_RX_VAL_SIG_MASK);
+
+ /* Step 1: Check for RX_VALID and LF_SIGDET */
+ if ((reg & XGBE_PMA_RX_VAL_SIG_MASK) != XGBE_PMA_RX_VAL_SIG_MASK) {
+ netif_dbg(pdata, link, pdata->netdev,
+ "RX_VALID or LF_SIGDET is unset, issue rrc");
+ xgbe_phy_rrc(pdata);
+ if (pdata->rx_adapt_retries++ >= MAX_RX_ADAPT_RETRIES) {
+ pdata->rx_adapt_retries = 0;
+ return;
+ }
+ goto rx_adapt_reinit;
+ }
+
+ /* perform rx adaptation */
+ xgbe_rx_adaptation(pdata);
+}
+
static void xgbe_phy_rx_reset(struct xgbe_prv_data *pdata)
{
int reg;
@@ -2021,7 +2194,7 @@ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
wait = XGBE_RATECHANGE_COUNT;
while (wait--) {
if (!XP_IOREAD_BITS(pdata, XP_DRIVER_INT_RO, STATUS))
- goto reenable_pll;
+ goto do_rx_adaptation;
usleep_range(1000, 2000);
}
@@ -2031,6 +2204,20 @@ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
/* Reset on error */
xgbe_phy_rx_reset(pdata);
+ goto reenable_pll;
+
+do_rx_adaptation:
+ if (pdata->en_rx_adap && sub_cmd == XGBE_MB_SUBCMD_RX_ADAP &&
+ (cmd == XGBE_MB_CMD_SET_10G_KR || cmd == XGBE_MB_CMD_SET_10G_SFI)) {
+ netif_dbg(pdata, link, pdata->netdev,
+ "Enabling RX adaptation\n");
+ pdata->mode_set = true;
+ xgbe_phy_rx_adaptation(pdata);
+ /* return from here to avoid enabling PLL ctrl
+ * during adaptation phase
+ */
+ return;
+ }
reenable_pll:
/* Enable PLL re-initialization, not needed for PHY Power Off and RRC cmds */
@@ -2059,6 +2246,31 @@ static void xgbe_phy_power_off(struct xgbe_prv_data *pdata)
netif_dbg(pdata, link, pdata->netdev, "phy powered off\n");
}
+static bool enable_rx_adap(struct xgbe_prv_data *pdata, enum xgbe_mode mode)
+{
+ struct xgbe_phy_data *phy_data = pdata->phy_data;
+ unsigned int ver;
+
+ /* Rx-Adaptation is not supported on older platforms (ver < 0x30) */
+ ver = XGMAC_GET_BITS(pdata->hw_feat.version, MAC_VR, SNPSVER);
+ if (ver < 0x30)
+ return false;
+
+ /* Re-driver models 4223 and 4227 do not support Rx-Adaptation */
+ if (phy_data->redrv &&
+ (phy_data->redrv_model == XGBE_PHY_REDRV_MODEL_4223 ||
+ phy_data->redrv_model == XGBE_PHY_REDRV_MODEL_4227))
+ return false;
+
+ /* 10G KR mode with AN does not support Rx-Adaptation */
+ if (mode == XGBE_MODE_KR &&
+ phy_data->port_mode != XGBE_PORT_MODE_BACKPLANE_NO_AUTONEG)
+ return false;
+
+ pdata->en_rx_adap = 1;
+ return true;
+}
+
static void xgbe_phy_sfi_mode(struct xgbe_prv_data *pdata)
{
struct xgbe_phy_data *phy_data = pdata->phy_data;
@@ -2067,7 +2279,12 @@ static void xgbe_phy_sfi_mode(struct xgbe_prv_data *pdata)
/* 10G/SFI */
if (phy_data->sfp_cable != XGBE_SFP_CABLE_PASSIVE) {
+ pdata->en_rx_adap = 0;
xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI, XGBE_MB_SUBCMD_ACTIVE);
+ } else if ((phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE) &&
+ (enable_rx_adap(pdata, XGBE_MODE_SFI))) {
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI,
+ XGBE_MB_SUBCMD_RX_ADAP);
} else {
if (phy_data->sfp_cable_len <= 1)
xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI,
@@ -2127,6 +2344,20 @@ static void xgbe_phy_sgmii_100_mode(struct xgbe_prv_data *pdata)
netif_dbg(pdata, link, pdata->netdev, "100MbE SGMII mode set\n");
}
+static void xgbe_phy_sgmii_10_mode(struct xgbe_prv_data *pdata)
+{
+ struct xgbe_phy_data *phy_data = pdata->phy_data;
+
+ xgbe_phy_set_redrv_mode(pdata);
+
+ /* 10M/SGMII */
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_10MBITS);
+
+ phy_data->cur_mode = XGBE_MODE_SGMII_10;
+
+ netif_dbg(pdata, link, pdata->netdev, "10MbE SGMII mode set\n");
+}
+
static void xgbe_phy_kr_mode(struct xgbe_prv_data *pdata)
{
struct xgbe_phy_data *phy_data = pdata->phy_data;
@@ -2134,7 +2365,12 @@ static void xgbe_phy_kr_mode(struct xgbe_prv_data *pdata)
xgbe_phy_set_redrv_mode(pdata);
/* 10G/KR */
- xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_KR, XGBE_MB_SUBCMD_NONE);
+ if (enable_rx_adap(pdata, XGBE_MODE_KR))
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_KR,
+ XGBE_MB_SUBCMD_RX_ADAP);
+ else
+ xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_KR,
+ XGBE_MB_SUBCMD_NONE);
phy_data->cur_mode = XGBE_MODE_KR;
@@ -2185,12 +2421,15 @@ static enum xgbe_mode xgbe_phy_switch_baset_mode(struct xgbe_prv_data *pdata)
return xgbe_phy_cur_mode(pdata);
switch (xgbe_phy_cur_mode(pdata)) {
+ case XGBE_MODE_SGMII_10:
case XGBE_MODE_SGMII_100:
case XGBE_MODE_SGMII_1000:
return XGBE_MODE_KR;
+ case XGBE_MODE_KX_2500:
+ return XGBE_MODE_SGMII_1000;
case XGBE_MODE_KR:
default:
- return XGBE_MODE_SGMII_1000;
+ return XGBE_MODE_KX_2500;
}
}
@@ -2252,6 +2491,8 @@ static enum xgbe_mode xgbe_phy_get_baset_mode(struct xgbe_phy_data *phy_data,
int speed)
{
switch (speed) {
+ case SPEED_10:
+ return XGBE_MODE_SGMII_10;
case SPEED_100:
return XGBE_MODE_SGMII_100;
case SPEED_1000:
@@ -2269,6 +2510,8 @@ static enum xgbe_mode xgbe_phy_get_sfp_mode(struct xgbe_phy_data *phy_data,
int speed)
{
switch (speed) {
+ case SPEED_10:
+ return XGBE_MODE_SGMII_10;
case SPEED_100:
return XGBE_MODE_SGMII_100;
case SPEED_1000:
@@ -2343,6 +2586,9 @@ static void xgbe_phy_set_mode(struct xgbe_prv_data *pdata, enum xgbe_mode mode)
case XGBE_MODE_KR:
xgbe_phy_kr_mode(pdata);
break;
+ case XGBE_MODE_SGMII_10:
+ xgbe_phy_sgmii_10_mode(pdata);
+ break;
case XGBE_MODE_SGMII_100:
xgbe_phy_sgmii_100_mode(pdata);
break;
@@ -2399,6 +2645,9 @@ static bool xgbe_phy_use_baset_mode(struct xgbe_prv_data *pdata,
struct ethtool_link_ksettings *lks = &pdata->phy.lks;
switch (mode) {
+ case XGBE_MODE_SGMII_10:
+ return xgbe_phy_check_mode(pdata, mode,
+ XGBE_ADV(lks, 10baseT_Full));
case XGBE_MODE_SGMII_100:
return xgbe_phy_check_mode(pdata, mode,
XGBE_ADV(lks, 100baseT_Full));
@@ -2428,6 +2677,11 @@ static bool xgbe_phy_use_sfp_mode(struct xgbe_prv_data *pdata,
return false;
return xgbe_phy_check_mode(pdata, mode,
XGBE_ADV(lks, 1000baseX_Full));
+ case XGBE_MODE_SGMII_10:
+ if (phy_data->sfp_base != XGBE_SFP_BASE_1000_T)
+ return false;
+ return xgbe_phy_check_mode(pdata, mode,
+ XGBE_ADV(lks, 10baseT_Full));
case XGBE_MODE_SGMII_100:
if (phy_data->sfp_base != XGBE_SFP_BASE_1000_T)
return false;
@@ -2520,15 +2774,23 @@ static bool xgbe_phy_valid_speed_basex_mode(struct xgbe_phy_data *phy_data,
}
}
-static bool xgbe_phy_valid_speed_baset_mode(struct xgbe_phy_data *phy_data,
+static bool xgbe_phy_valid_speed_baset_mode(struct xgbe_prv_data *pdata,
int speed)
{
+ struct xgbe_phy_data *phy_data = pdata->phy_data;
+ unsigned int ver;
+
switch (speed) {
+ case SPEED_10:
+ /* Supported only on ver >= 0x30 */
+ ver = XGMAC_GET_BITS(pdata->hw_feat.version, MAC_VR, SNPSVER);
+ return ver >= 0x30;
case SPEED_100:
case SPEED_1000:
return true;
case SPEED_2500:
- return (phy_data->port_mode == XGBE_PORT_MODE_NBASE_T);
+ return ((phy_data->port_mode == XGBE_PORT_MODE_10GBASE_T) ||
+ (phy_data->port_mode == XGBE_PORT_MODE_NBASE_T));
case SPEED_10000:
return (phy_data->port_mode == XGBE_PORT_MODE_10GBASE_T);
default:
@@ -2536,10 +2798,17 @@ static bool xgbe_phy_valid_speed_baset_mode(struct xgbe_phy_data *phy_data,
}
}
-static bool xgbe_phy_valid_speed_sfp_mode(struct xgbe_phy_data *phy_data,
+static bool xgbe_phy_valid_speed_sfp_mode(struct xgbe_prv_data *pdata,
int speed)
{
+ struct xgbe_phy_data *phy_data = pdata->phy_data;
+ unsigned int ver;
+
switch (speed) {
+ case SPEED_10:
+ /* Supported only on ver >= 0x30 */
+ ver = XGMAC_GET_BITS(pdata->hw_feat.version, MAC_VR, SNPSVER);
+ return (ver >= 0x30) && (phy_data->sfp_speed == XGBE_SFP_SPEED_100_1000);
case SPEED_100:
return (phy_data->sfp_speed == XGBE_SFP_SPEED_100_1000);
case SPEED_1000:
@@ -2586,12 +2855,12 @@ static bool xgbe_phy_valid_speed(struct xgbe_prv_data *pdata, int speed)
case XGBE_PORT_MODE_1000BASE_T:
case XGBE_PORT_MODE_NBASE_T:
case XGBE_PORT_MODE_10GBASE_T:
- return xgbe_phy_valid_speed_baset_mode(phy_data, speed);
+ return xgbe_phy_valid_speed_baset_mode(pdata, speed);
case XGBE_PORT_MODE_1000BASE_X:
case XGBE_PORT_MODE_10GBASE_R:
return xgbe_phy_valid_speed_basex_mode(phy_data, speed);
case XGBE_PORT_MODE_SFP:
- return xgbe_phy_valid_speed_sfp_mode(phy_data, speed);
+ return xgbe_phy_valid_speed_sfp_mode(pdata, speed);
default:
return false;
}
@@ -2614,8 +2883,11 @@ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart)
return 0;
}
- if (phy_data->sfp_mod_absent || phy_data->sfp_rx_los)
+ if (phy_data->sfp_mod_absent || phy_data->sfp_rx_los) {
+ if (pdata->en_rx_adap)
+ pdata->rx_adapt_done = false;
return 0;
+ }
}
if (phy_data->phydev) {
@@ -2637,7 +2909,29 @@ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart)
*/
reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
- if (reg & MDIO_STAT1_LSTATUS)
+
+ if (pdata->en_rx_adap) {
+ /* if the link is available and adaptation is done,
+ * declare link up
+ */
+ if ((reg & MDIO_STAT1_LSTATUS) && pdata->rx_adapt_done)
+ return 1;
+ /* If either link is not available or adaptation is not done,
+ * retrigger the adaptation logic. (if the mode is not set,
+ * then issue mailbox command first)
+ */
+ if (pdata->mode_set) {
+ xgbe_phy_rx_adaptation(pdata);
+ } else {
+ pdata->rx_adapt_done = false;
+ xgbe_phy_set_mode(pdata, phy_data->cur_mode);
+ }
+
+ /* check again for the link and adaptation status */
+ reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
+ if ((reg & MDIO_STAT1_LSTATUS) && pdata->rx_adapt_done)
+ return 1;
+ } else if (reg & MDIO_STAT1_LSTATUS)
return 1;
if (pdata->phy.autoneg == AUTONEG_ENABLE &&
@@ -2862,6 +3156,12 @@ static int xgbe_phy_mdio_reset_setup(struct xgbe_prv_data *pdata)
static bool xgbe_phy_port_mode_mismatch(struct xgbe_prv_data *pdata)
{
struct xgbe_phy_data *phy_data = pdata->phy_data;
+ unsigned int ver;
+
+ /* 10 Mbps speed is not supported on ver < 0x30 */
+ ver = XGMAC_GET_BITS(pdata->hw_feat.version, MAC_VR, SNPSVER);
+ if (ver < 0x30 && (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10))
+ return true;
switch (phy_data->port_mode) {
case XGBE_PORT_MODE_BACKPLANE:
@@ -2875,7 +3175,8 @@ static bool xgbe_phy_port_mode_mismatch(struct xgbe_prv_data *pdata)
return false;
break;
case XGBE_PORT_MODE_1000BASE_T:
- if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) ||
+ if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) ||
+ (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) ||
(phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000))
return false;
break;
@@ -2884,14 +3185,17 @@ static bool xgbe_phy_port_mode_mismatch(struct xgbe_prv_data *pdata)
return false;
break;
case XGBE_PORT_MODE_NBASE_T:
- if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) ||
+ if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) ||
+ (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) ||
(phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000) ||
(phy_data->port_speeds & XGBE_PHY_PORT_SPEED_2500))
return false;
break;
case XGBE_PORT_MODE_10GBASE_T:
- if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) ||
+ if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) ||
+ (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) ||
(phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000) ||
+ (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_2500) ||
(phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000))
return false;
break;
@@ -2900,7 +3204,8 @@ static bool xgbe_phy_port_mode_mismatch(struct xgbe_prv_data *pdata)
return false;
break;
case XGBE_PORT_MODE_SFP:
- if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) ||
+ if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) ||
+ (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) ||
(phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000) ||
(phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000))
return false;
@@ -3269,6 +3574,10 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
XGBE_SET_SUP(lks, Pause);
XGBE_SET_SUP(lks, Asym_Pause);
XGBE_SET_SUP(lks, TP);
+ if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) {
+ XGBE_SET_SUP(lks, 10baseT_Full);
+ phy_data->start_mode = XGBE_MODE_SGMII_10;
+ }
if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) {
XGBE_SET_SUP(lks, 100baseT_Full);
phy_data->start_mode = XGBE_MODE_SGMII_100;
@@ -3299,6 +3608,10 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
XGBE_SET_SUP(lks, Pause);
XGBE_SET_SUP(lks, Asym_Pause);
XGBE_SET_SUP(lks, TP);
+ if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) {
+ XGBE_SET_SUP(lks, 10baseT_Full);
+ phy_data->start_mode = XGBE_MODE_SGMII_10;
+ }
if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) {
XGBE_SET_SUP(lks, 100baseT_Full);
phy_data->start_mode = XGBE_MODE_SGMII_100;
@@ -3321,6 +3634,10 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
XGBE_SET_SUP(lks, Pause);
XGBE_SET_SUP(lks, Asym_Pause);
XGBE_SET_SUP(lks, TP);
+ if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) {
+ XGBE_SET_SUP(lks, 10baseT_Full);
+ phy_data->start_mode = XGBE_MODE_SGMII_10;
+ }
if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) {
XGBE_SET_SUP(lks, 100baseT_Full);
phy_data->start_mode = XGBE_MODE_SGMII_100;
@@ -3329,6 +3646,10 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
XGBE_SET_SUP(lks, 1000baseT_Full);
phy_data->start_mode = XGBE_MODE_SGMII_1000;
}
+ if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_2500) {
+ XGBE_SET_SUP(lks, 2500baseT_Full);
+ phy_data->start_mode = XGBE_MODE_KX_2500;
+ }
if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000) {
XGBE_SET_SUP(lks, 10000baseT_Full);
phy_data->start_mode = XGBE_MODE_KR;
@@ -3361,6 +3682,8 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
XGBE_SET_SUP(lks, Asym_Pause);
XGBE_SET_SUP(lks, TP);
XGBE_SET_SUP(lks, FIBRE);
+ if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10)
+ phy_data->start_mode = XGBE_MODE_SGMII_10;
if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100)
phy_data->start_mode = XGBE_MODE_SGMII_100;
if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000)
@@ -3415,8 +3738,10 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
mii->priv = pdata;
mii->name = "amd-xgbe-mii";
- mii->read = xgbe_phy_mii_read;
- mii->write = xgbe_phy_mii_write;
+ mii->read = xgbe_phy_mii_read_c22;
+ mii->write = xgbe_phy_mii_write_c22;
+ mii->read_c45 = xgbe_phy_mii_read_c45;
+ mii->write_c45 = xgbe_phy_mii_write_c45;
mii->parent = pdata->dev;
mii->phy_mask = ~0;
snprintf(mii->id, sizeof(mii->id), "%s", dev_name(pdata->dev));
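This is the mii_bus contract after the C22/C45 separation: each transaction type gets its own callback, and a bus that cannot do Clause 45 simply leaves read_c45/write_c45 unset so the MDIO core fails such accesses cleanly instead of relying on each driver to reject a flag. A registration sketch; everything prefixed foo_ is hypothetical and the actual bus accesses are elided:

#include <linux/phy.h>

static int foo_mii_read_c22(struct mii_bus *bus, int addr, int regnum)
{
	return 0xffff;		/* Clause 22 bus access elided */
}

static int foo_mii_read_c45(struct mii_bus *bus, int addr, int devad,
			    int regnum)
{
	return 0xffff;		/* Clause 45 bus access elided */
}

static int foo_mdio_register(struct device *dev, void *priv)
{
	struct mii_bus *mii = devm_mdiobus_alloc(dev);

	if (!mii)
		return -ENOMEM;

	mii->priv = priv;
	mii->parent = dev;
	mii->name = "foo-mii";
	mii->read = foo_mii_read_c22;
	mii->read_c45 = foo_mii_read_c45;  /* leave NULL on a C22-only bus */
	/* write and write_c45 would be wired up the same way */
	snprintf(mii->id, MII_BUS_ID_SIZE, "%s", dev_name(dev));

	return devm_mdiobus_register(dev, mii);
}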
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index 7a41367c437d..ad136ed493ed 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -294,6 +294,7 @@
#define XGBE_SGMII_AN_LINK_STATUS BIT(1)
#define XGBE_SGMII_AN_LINK_SPEED (BIT(2) | BIT(3))
+#define XGBE_SGMII_AN_LINK_SPEED_10 0x00
#define XGBE_SGMII_AN_LINK_SPEED_100 0x04
#define XGBE_SGMII_AN_LINK_SPEED_1000 0x08
#define XGBE_SGMII_AN_LINK_DUPLEX BIT(4)
@@ -595,6 +596,7 @@ enum xgbe_mode {
XGBE_MODE_KX_2500,
XGBE_MODE_KR,
XGBE_MODE_X,
+ XGBE_MODE_SGMII_10,
XGBE_MODE_SGMII_100,
XGBE_MODE_SGMII_1000,
XGBE_MODE_SFI,
@@ -623,6 +625,7 @@ enum xgbe_mb_cmd {
enum xgbe_mb_subcmd {
XGBE_MB_SUBCMD_NONE = 0,
+ XGBE_MB_SUBCMD_RX_ADAP,
/* 10GbE SFP subcommands */
XGBE_MB_SUBCMD_ACTIVE = 0,
@@ -774,8 +777,11 @@ struct xgbe_hw_if {
int (*set_ext_mii_mode)(struct xgbe_prv_data *, unsigned int,
enum xgbe_mdio_mode);
- int (*read_ext_mii_regs)(struct xgbe_prv_data *, int, int);
- int (*write_ext_mii_regs)(struct xgbe_prv_data *, int, int, u16);
+ int (*read_ext_mii_regs_c22)(struct xgbe_prv_data *, int, int);
+ int (*write_ext_mii_regs_c22)(struct xgbe_prv_data *, int, int, u16);
+ int (*read_ext_mii_regs_c45)(struct xgbe_prv_data *, int, int, int);
+ int (*write_ext_mii_regs_c45)(struct xgbe_prv_data *, int, int, int,
+ u16);
int (*set_gpio)(struct xgbe_prv_data *, unsigned int);
int (*clr_gpio)(struct xgbe_prv_data *, unsigned int);
@@ -1311,6 +1317,10 @@ struct xgbe_prv_data {
bool debugfs_an_cdr_workaround;
bool debugfs_an_cdr_track_early;
+ bool en_rx_adap;
+ int rx_adapt_retries;
+ bool rx_adapt_done;
+ bool mode_set;
};
/* Function prototypes */
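The en_rx_adap, rx_adapt_retries, rx_adapt_done and mode_set fields added just above carry the receive-adaptation state machine from xgbe-phy-v2.c: mode_set records that the rate-change mailbox command has run, rx_adapt_done gates the link-up decision, and rx_adapt_retries bounds how often a failed attempt re-issues the rate change. Condensed into one function, a single attempt works roughly as below; the register names are the ones added in xgbe-common.h, but the flow is simplified from xgbe_rx_adaptation() and assumes the file's XMDIO_* accessors:

static bool xgbe_rx_adapt_once(struct xgbe_prv_data *pdata)
{
	unsigned int reg;

	/* ask the PCS to request adaptation, give the PHY time to answer */
	XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_RX_EQ_CTRL4,
			 XGBE_PMA_RX_AD_REQ_MASK, XGBE_PMA_RX_AD_REQ_ENABLE);
	msleep(200);

	reg = XMDIO_READ(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_PHY_RX_EQ_CEU);

	XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_RX_EQ_CTRL4,
			 XGBE_PMA_RX_AD_REQ_MASK, XGBE_PMA_RX_AD_REQ_DISABLE);

	/* all three coefficient-update bits must have come back ... */
	if ((reg & XGBE_PMA_CFF_UPDT_MASK) != XGBE_PMA_CFF_UPDT_MASK)
		return false;

	/* ... and PCS link status (latched low, so read twice) must be up */
	reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
	reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
	return !!(reg & MDIO_STAT1_LSTATUS);
}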
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_main.c b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
index 77609dc0a08d..0b2a52199914 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
@@ -21,6 +21,7 @@
#include <linux/ip.h>
#include <linux/udp.h>
#include <net/pkt_cls.h>
+#include <net/pkt_sched.h>
#include <linux/filter.h>
MODULE_LICENSE("GPL v2");
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
index 06508eebb585..d6d6d5d37ff3 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
@@ -384,6 +384,11 @@ void aq_nic_ndev_init(struct aq_nic_s *self)
self->ndev->mtu = aq_nic_cfg->mtu - ETH_HLEN;
self->ndev->max_mtu = aq_hw_caps->mtu - ETH_FCS_LEN - ETH_HLEN;
+ self->ndev->xdp_features = NETDEV_XDP_ACT_BASIC |
+ NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT |
+ NETDEV_XDP_ACT_RX_SG |
+ NETDEV_XDP_ACT_NDO_XMIT_SG;
}
void aq_nic_set_tx_ring(struct aq_nic_s *self, unsigned int idx,
diff --git a/drivers/net/ethernet/atheros/alx/main.c b/drivers/net/ethernet/atheros/alx/main.c
index d30d11872719..306393f8eeca 100644
--- a/drivers/net/ethernet/atheros/alx/main.c
+++ b/drivers/net/ethernet/atheros/alx/main.c
@@ -1905,7 +1905,6 @@ static void alx_remove(struct pci_dev *pdev)
free_netdev(alx->dev);
}
-#ifdef CONFIG_PM_SLEEP
static int alx_suspend(struct device *dev)
{
struct alx_priv *alx = dev_get_drvdata(dev);
@@ -1951,12 +1950,7 @@ unlock:
return err;
}
-static SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume);
-#define ALX_PM_OPS (&alx_pm_ops)
-#else
-#define ALX_PM_OPS NULL
-#endif
-
+static DEFINE_SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume);
static pci_ers_result_t alx_pci_error_detected(struct pci_dev *pdev,
pci_channel_state_t state)
@@ -2055,7 +2049,7 @@ static struct pci_driver alx_driver = {
.probe = alx_probe,
.remove = alx_remove,
.err_handler = &alx_err_handlers,
- .driver.pm = ALX_PM_OPS,
+ .driver.pm = pm_sleep_ptr(&alx_pm_ops),
};
module_pci_driver(alx_driver);
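The alx conversion above is the now-preferred shape for system-sleep hooks: define the dev_pm_ops unconditionally and let pm_sleep_ptr() turn the reference into NULL when CONFIG_PM_SLEEP is off, so the compiler and linker discard the unused callbacks without any #ifdef. In outline, with hypothetical foo_* names:

#include <linux/pm.h>
#include <linux/pci.h>

static int foo_suspend(struct device *dev)
{
	return 0;		/* quiesce the device here */
}

static int foo_resume(struct device *dev)
{
	return 0;		/* bring the device back up here */
}

static DEFINE_SIMPLE_DEV_PM_OPS(foo_pm_ops, foo_suspend, foo_resume);

static struct pci_driver foo_driver = {
	.name		= "foo",
	/* evaluates to NULL when CONFIG_PM_SLEEP is disabled */
	.driver.pm	= pm_sleep_ptr(&foo_pm_ops),
};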
diff --git a/drivers/net/ethernet/broadcom/Kconfig b/drivers/net/ethernet/broadcom/Kconfig
index f4ca0c6c0f51..948586bf1b5b 100644
--- a/drivers/net/ethernet/broadcom/Kconfig
+++ b/drivers/net/ethernet/broadcom/Kconfig
@@ -213,6 +213,7 @@ config BNXT
select NET_DEVLINK
select PAGE_POOL
select DIMLIB
+ select AUXILIARY_BUS
help
This driver supports Broadcom NetXtreme-C/E 10/25/40/50 gigabit
Ethernet cards. To compile this driver as a module, choose M here:
diff --git a/drivers/net/ethernet/broadcom/b44.c b/drivers/net/ethernet/broadcom/b44.c
index b751dc8486dc..392ec09a1d8a 100644
--- a/drivers/net/ethernet/broadcom/b44.c
+++ b/drivers/net/ethernet/broadcom/b44.c
@@ -196,28 +196,6 @@ static int b44_wait_bit(struct b44 *bp, unsigned long reg,
return 0;
}
-static inline void __b44_cam_read(struct b44 *bp, unsigned char *data, int index)
-{
- u32 val;
-
- bw32(bp, B44_CAM_CTRL, (CAM_CTRL_READ |
- (index << CAM_CTRL_INDEX_SHIFT)));
-
- b44_wait_bit(bp, B44_CAM_CTRL, CAM_CTRL_BUSY, 100, 1);
-
- val = br32(bp, B44_CAM_DATA_LO);
-
- data[2] = (val >> 24) & 0xFF;
- data[3] = (val >> 16) & 0xFF;
- data[4] = (val >> 8) & 0xFF;
- data[5] = (val >> 0) & 0xFF;
-
- val = br32(bp, B44_CAM_DATA_HI);
-
- data[0] = (val >> 8) & 0xFF;
- data[1] = (val >> 0) & 0xFF;
-}
-
static inline void __b44_cam_write(struct b44 *bp,
const unsigned char *data, int index)
{
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 6c32f5c427b5..5d4b1f2ebeac 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -2414,7 +2414,6 @@ static int bnxt_async_event_process(struct bnxt *bp,
}
bnxt_queue_sp_work(bp);
async_event_process_exit:
- bnxt_ulp_async_events(bp, cmpl);
return 0;
}
@@ -5538,7 +5537,7 @@ vnic_mru:
#endif
if ((bp->flags & BNXT_FLAG_STRIP_VLAN) || def_vlan)
req->flags |= cpu_to_le32(VNIC_CFG_REQ_FLAGS_VLAN_STRIP_MODE);
- if (!vnic_id && bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP))
+ if (!vnic_id && bnxt_ulp_registered(bp->edev))
req->flags |= cpu_to_le32(bnxt_get_roce_vnic_mode(bp));
return hwrm_req_send(bp, req);
@@ -13185,6 +13184,8 @@ static void bnxt_remove_one(struct pci_dev *pdev)
if (BNXT_PF(bp))
bnxt_sriov_disable(bp);
+ bnxt_rdma_aux_device_uninit(bp);
+
bnxt_ptp_clear(bp);
pci_disable_pcie_error_reporting(pdev);
unregister_netdev(dev);
@@ -13690,6 +13691,9 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
netif_set_tso_max_size(dev, GSO_MAX_SIZE);
+ dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_RX_SG;
+
#ifdef CONFIG_BNXT_SRIOV
init_waitqueue_head(&bp->sriov_cfg_wait);
#endif
@@ -13780,11 +13784,13 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
bnxt_dl_fw_reporters_create(bp);
+ bnxt_rdma_aux_device_init(bp);
+
bnxt_print_device_info(bp);
pci_save_state(pdev);
- return 0;
+ return 0;
init_err_cleanup:
bnxt_dl_unregister(bp);
init_err_dl:
@@ -13828,7 +13834,6 @@ static void bnxt_shutdown(struct pci_dev *pdev)
if (netif_running(dev))
dev_close(dev);
- bnxt_ulp_shutdown(bp);
bnxt_clear_int_mode(bp);
pci_disable_device(pdev);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 5163ef4a49ea..dcb09fbe4007 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -24,6 +24,7 @@
#include <linux/interrupt.h>
#include <linux/rhashtable.h>
#include <linux/crash_dump.h>
+#include <linux/auxiliary_bus.h>
#include <net/devlink.h>
#include <net/dst_metadata.h>
#include <net/xdp.h>
@@ -1631,6 +1632,12 @@ struct bnxt_fw_health {
#define BNXT_FW_IF_RETRY 10
#define BNXT_FW_SLOT_RESET_RETRY 4
+struct bnxt_aux_priv {
+ struct auxiliary_device aux_dev;
+ struct bnxt_en_dev *edev;
+ int id;
+};
+
enum board_idx {
BCM57301,
BCM57302,
@@ -1852,6 +1859,7 @@ struct bnxt {
#define BNXT_CHIP_P4_PLUS(bp) \
(BNXT_CHIP_P4(bp) || BNXT_CHIP_P5(bp))
+ struct bnxt_aux_priv *aux_priv;
struct bnxt_en_dev *edev;
struct bnxt_napi **bnapi;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
index 26913dc816d3..8b3e7697390f 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
@@ -1303,7 +1303,6 @@ int bnxt_dl_register(struct bnxt *bp)
if (rc)
goto err_dl_port_unreg;
- devlink_set_features(dl, DEVLINK_F_RELOAD);
out:
devlink_register(dl);
return 0;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
index a4cba7cb2783..3ed3a2b3b3a9 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
@@ -749,7 +749,6 @@ int bnxt_cfg_hw_sriov(struct bnxt *bp, int *num_vfs, bool reset)
*num_vfs = rc;
}
- bnxt_ulp_sriov_cfg(bp, *num_vfs);
return 0;
}
@@ -823,10 +822,8 @@ static int bnxt_sriov_enable(struct bnxt *bp, int *num_vfs)
goto err_out2;
rc = pci_enable_sriov(bp->pdev, *num_vfs);
- if (rc) {
- bnxt_ulp_sriov_cfg(bp, 0);
+ if (rc)
goto err_out2;
- }
return 0;
@@ -872,8 +869,6 @@ void bnxt_sriov_disable(struct bnxt *bp)
rtnl_lock();
bnxt_restore_pf_fw_resources(bp);
rtnl_unlock();
-
- bnxt_ulp_sriov_cfg(bp, 0);
}
int bnxt_sriov_configure(struct pci_dev *pdev, int num_vfs)
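The bnxt_ulp.c rewrite that follows drops the hand-rolled ULP registration table in favour of the kernel's auxiliary bus: the Ethernet driver publishes an "rdma" auxiliary device and the RoCE driver binds to it as an ordinary device/driver pair, which is why the per-ULP sriov/shutdown/async callbacks above can go away. The producer side in outline; foo_* names are hypothetical, only the auxiliary_device_* and ida_* calls are the real API:

#include <linux/auxiliary_bus.h>
#include <linux/idr.h>
#include <linux/pci.h>

static DEFINE_IDA(foo_aux_ids);

struct foo_priv {
	struct pci_dev *pdev;
	struct auxiliary_device aux_dev;
};

static void foo_aux_release(struct device *dev)
{
	struct auxiliary_device *adev = to_auxiliary_dev(dev);

	ida_free(&foo_aux_ids, adev->id);	/* last reference dropped */
}

static int foo_publish_rdma_dev(struct foo_priv *fp)
{
	struct auxiliary_device *adev = &fp->aux_dev;
	int id, rc;

	id = ida_alloc(&foo_aux_ids, GFP_KERNEL);
	if (id < 0)
		return id;

	adev->id = id;
	adev->name = "rdma";		/* matched as "<module>.rdma" */
	adev->dev.parent = &fp->pdev->dev;
	adev->dev.release = foo_aux_release;

	rc = auxiliary_device_init(adev);
	if (rc) {
		ida_free(&foo_aux_ids, id);
		return rc;
	}
	/* past init, teardown must go through auxiliary_device_uninit() */
	return auxiliary_device_add(adev);
}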
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
index 2e54bf4fc7a7..d4cc9c371e7b 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
@@ -19,89 +19,26 @@
#include <linux/irq.h>
#include <asm/byteorder.h>
#include <linux/bitmap.h>
+#include <linux/auxiliary_bus.h>
#include "bnxt_hsi.h"
#include "bnxt.h"
#include "bnxt_hwrm.h"
#include "bnxt_ulp.h"
-static int bnxt_register_dev(struct bnxt_en_dev *edev, unsigned int ulp_id,
- struct bnxt_ulp_ops *ulp_ops, void *handle)
-{
- struct net_device *dev = edev->net;
- struct bnxt *bp = netdev_priv(dev);
- struct bnxt_ulp *ulp;
-
- ASSERT_RTNL();
- if (ulp_id >= BNXT_MAX_ULP)
- return -EINVAL;
-
- ulp = &edev->ulp_tbl[ulp_id];
- if (rcu_access_pointer(ulp->ulp_ops)) {
- netdev_err(bp->dev, "ulp id %d already registered\n", ulp_id);
- return -EBUSY;
- }
- if (ulp_id == BNXT_ROCE_ULP) {
- unsigned int max_stat_ctxs;
-
- max_stat_ctxs = bnxt_get_max_func_stat_ctxs(bp);
- if (max_stat_ctxs <= BNXT_MIN_ROCE_STAT_CTXS ||
- bp->cp_nr_rings == max_stat_ctxs)
- return -ENOMEM;
- }
-
- atomic_set(&ulp->ref_count, 0);
- ulp->handle = handle;
- rcu_assign_pointer(ulp->ulp_ops, ulp_ops);
-
- if (ulp_id == BNXT_ROCE_ULP) {
- if (test_bit(BNXT_STATE_OPEN, &bp->state))
- bnxt_hwrm_vnic_cfg(bp, 0);
- }
-
- return 0;
-}
-
-static int bnxt_unregister_dev(struct bnxt_en_dev *edev, unsigned int ulp_id)
-{
- struct net_device *dev = edev->net;
- struct bnxt *bp = netdev_priv(dev);
- struct bnxt_ulp *ulp;
- int i = 0;
-
- ASSERT_RTNL();
- if (ulp_id >= BNXT_MAX_ULP)
- return -EINVAL;
-
- ulp = &edev->ulp_tbl[ulp_id];
- if (!rcu_access_pointer(ulp->ulp_ops)) {
- netdev_err(bp->dev, "ulp id %d not registered\n", ulp_id);
- return -EINVAL;
- }
- if (ulp_id == BNXT_ROCE_ULP && ulp->msix_requested)
- edev->en_ops->bnxt_free_msix(edev, ulp_id);
-
- if (ulp->max_async_event_id)
- bnxt_hwrm_func_drv_rgtr(bp, NULL, 0, true);
-
- RCU_INIT_POINTER(ulp->ulp_ops, NULL);
- synchronize_rcu();
- ulp->max_async_event_id = 0;
- ulp->async_events_bmap = NULL;
- while (atomic_read(&ulp->ref_count) != 0 && i < 10) {
- msleep(100);
- i++;
- }
- return 0;
-}
+static DEFINE_IDA(bnxt_aux_dev_ids);
static void bnxt_fill_msix_vecs(struct bnxt *bp, struct bnxt_msix_entry *ent)
{
struct bnxt_en_dev *edev = bp->edev;
int num_msix, idx, i;
- num_msix = edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested;
- idx = edev->ulp_tbl[BNXT_ROCE_ULP].msix_base;
+ if (!edev->ulp_tbl->msix_requested) {
+ netdev_warn(bp->dev, "Requested MSI-X vectors insufficient\n");
+ return;
+ }
+ num_msix = edev->ulp_tbl->msix_requested;
+ idx = edev->ulp_tbl->msix_base;
for (i = 0; i < num_msix; i++) {
ent[i].vector = bp->irq_tbl[idx + i].vector;
ent[i].ring_idx = idx + i;
@@ -115,125 +52,95 @@ static void bnxt_fill_msix_vecs(struct bnxt *bp, struct bnxt_msix_entry *ent)
}
}
-static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, unsigned int ulp_id,
- struct bnxt_msix_entry *ent, int num_msix)
+int bnxt_register_dev(struct bnxt_en_dev *edev,
+ struct bnxt_ulp_ops *ulp_ops,
+ void *handle)
{
struct net_device *dev = edev->net;
struct bnxt *bp = netdev_priv(dev);
- struct bnxt_hw_resc *hw_resc;
- int max_idx, max_cp_rings;
- int avail_msix, idx;
- int total_vecs;
- int rc = 0;
-
- ASSERT_RTNL();
- if (ulp_id != BNXT_ROCE_ULP)
- return -EINVAL;
-
- if (!(bp->flags & BNXT_FLAG_USING_MSIX))
- return -ENODEV;
+ unsigned int max_stat_ctxs;
+ struct bnxt_ulp *ulp;
- if (edev->ulp_tbl[ulp_id].msix_requested)
- return -EAGAIN;
+ max_stat_ctxs = bnxt_get_max_func_stat_ctxs(bp);
+ if (max_stat_ctxs <= BNXT_MIN_ROCE_STAT_CTXS ||
+ bp->cp_nr_rings == max_stat_ctxs)
+ return -ENOMEM;
- max_cp_rings = bnxt_get_max_func_cp_rings(bp);
- avail_msix = bnxt_get_avail_msix(bp, num_msix);
- if (!avail_msix)
+ ulp = edev->ulp_tbl;
+ if (!ulp)
return -ENOMEM;
- if (avail_msix > num_msix)
- avail_msix = num_msix;
-
- if (BNXT_NEW_RM(bp)) {
- idx = bp->cp_nr_rings;
- } else {
- max_idx = min_t(int, bp->total_irqs, max_cp_rings);
- idx = max_idx - avail_msix;
- }
- edev->ulp_tbl[ulp_id].msix_base = idx;
- edev->ulp_tbl[ulp_id].msix_requested = avail_msix;
- hw_resc = &bp->hw_resc;
- total_vecs = idx + avail_msix;
- if (bp->total_irqs < total_vecs ||
- (BNXT_NEW_RM(bp) && hw_resc->resv_irqs < total_vecs)) {
- if (netif_running(dev)) {
- bnxt_close_nic(bp, true, false);
- rc = bnxt_open_nic(bp, true, false);
- } else {
- rc = bnxt_reserve_rings(bp, true);
- }
- }
- if (rc) {
- edev->ulp_tbl[ulp_id].msix_requested = 0;
- return -EAGAIN;
- }
- if (BNXT_NEW_RM(bp)) {
- int resv_msix;
+ ulp->handle = handle;
+ rcu_assign_pointer(ulp->ulp_ops, ulp_ops);
+
+ if (test_bit(BNXT_STATE_OPEN, &bp->state))
+ bnxt_hwrm_vnic_cfg(bp, 0);
- resv_msix = hw_resc->resv_irqs - bp->cp_nr_rings;
- avail_msix = min_t(int, resv_msix, avail_msix);
- edev->ulp_tbl[ulp_id].msix_requested = avail_msix;
- }
- bnxt_fill_msix_vecs(bp, ent);
+ bnxt_fill_msix_vecs(bp, bp->edev->msix_entries);
edev->flags |= BNXT_EN_FLAG_MSIX_REQUESTED;
- return avail_msix;
+ return 0;
}
+EXPORT_SYMBOL(bnxt_register_dev);
-static int bnxt_free_msix_vecs(struct bnxt_en_dev *edev, unsigned int ulp_id)
+void bnxt_unregister_dev(struct bnxt_en_dev *edev)
{
struct net_device *dev = edev->net;
struct bnxt *bp = netdev_priv(dev);
+ struct bnxt_ulp *ulp;
+ int i = 0;
- ASSERT_RTNL();
- if (ulp_id != BNXT_ROCE_ULP)
- return -EINVAL;
+ ulp = edev->ulp_tbl;
+ if (ulp->msix_requested)
+ edev->flags &= ~BNXT_EN_FLAG_MSIX_REQUESTED;
- if (!(edev->flags & BNXT_EN_FLAG_MSIX_REQUESTED))
- return 0;
+ if (ulp->max_async_event_id)
+ bnxt_hwrm_func_drv_rgtr(bp, NULL, 0, true);
- edev->ulp_tbl[ulp_id].msix_requested = 0;
- edev->flags &= ~BNXT_EN_FLAG_MSIX_REQUESTED;
- if (netif_running(dev) && !(edev->flags & BNXT_EN_FLAG_ULP_STOPPED)) {
- bnxt_close_nic(bp, true, false);
- bnxt_open_nic(bp, true, false);
+ RCU_INIT_POINTER(ulp->ulp_ops, NULL);
+ synchronize_rcu();
+ ulp->max_async_event_id = 0;
+ ulp->async_events_bmap = NULL;
+ while (atomic_read(&ulp->ref_count) != 0 && i < 10) {
+ msleep(100);
+ i++;
}
- return 0;
+ return;
}
+EXPORT_SYMBOL(bnxt_unregister_dev);
int bnxt_get_ulp_msix_num(struct bnxt *bp)
{
- if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) {
- struct bnxt_en_dev *edev = bp->edev;
+ u32 roce_msix = BNXT_VF(bp) ?
+ BNXT_MAX_VF_ROCE_MSIX : BNXT_MAX_ROCE_MSIX;
- return edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested;
- }
- return 0;
+ return ((bp->flags & BNXT_FLAG_ROCE_CAP) ?
+ min_t(u32, roce_msix, num_online_cpus()) : 0);
}
int bnxt_get_ulp_msix_base(struct bnxt *bp)
{
- if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) {
+ if (bnxt_ulp_registered(bp->edev)) {
struct bnxt_en_dev *edev = bp->edev;
- if (edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested)
- return edev->ulp_tbl[BNXT_ROCE_ULP].msix_base;
+ if (edev->ulp_tbl->msix_requested)
+ return edev->ulp_tbl->msix_base;
}
return 0;
}
int bnxt_get_ulp_stat_ctxs(struct bnxt *bp)
{
- if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) {
+ if (bnxt_ulp_registered(bp->edev)) {
struct bnxt_en_dev *edev = bp->edev;
- if (edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested)
+ if (edev->ulp_tbl->msix_requested)
return BNXT_MIN_ROCE_STAT_CTXS;
}
return 0;
}
-static int bnxt_send_msg(struct bnxt_en_dev *edev, unsigned int ulp_id,
+int bnxt_send_msg(struct bnxt_en_dev *edev,
struct bnxt_fw_msg *fw_msg)
{
struct net_device *dev = edev->net;
@@ -243,7 +150,7 @@ static int bnxt_send_msg(struct bnxt_en_dev *edev, unsigned int ulp_id,
u32 resp_len;
int rc;
- if (ulp_id != BNXT_ROCE_ULP && bp->fw_reset_state)
+ if (bp->fw_reset_state)
return -EBUSY;
rc = hwrm_req_init(bp, req, 0 /* don't care */);
@@ -267,42 +174,36 @@ static int bnxt_send_msg(struct bnxt_en_dev *edev, unsigned int ulp_id,
hwrm_req_drop(bp, req);
return rc;
}
-
-static void bnxt_ulp_get(struct bnxt_ulp *ulp)
-{
- atomic_inc(&ulp->ref_count);
-}
-
-static void bnxt_ulp_put(struct bnxt_ulp *ulp)
-{
- atomic_dec(&ulp->ref_count);
-}
+EXPORT_SYMBOL(bnxt_send_msg);
void bnxt_ulp_stop(struct bnxt *bp)
{
+ struct bnxt_aux_priv *aux_priv = bp->aux_priv;
struct bnxt_en_dev *edev = bp->edev;
- struct bnxt_ulp_ops *ops;
- int i;
if (!edev)
return;
edev->flags |= BNXT_EN_FLAG_ULP_STOPPED;
- for (i = 0; i < BNXT_MAX_ULP; i++) {
- struct bnxt_ulp *ulp = &edev->ulp_tbl[i];
+ if (aux_priv) {
+ struct auxiliary_device *adev;
- ops = rtnl_dereference(ulp->ulp_ops);
- if (!ops || !ops->ulp_stop)
- continue;
- ops->ulp_stop(ulp->handle);
+ adev = &aux_priv->aux_dev;
+ if (adev->dev.driver) {
+ struct auxiliary_driver *adrv;
+ pm_message_t pm = {};
+
+ adrv = to_auxiliary_drv(adev->dev.driver);
+ edev->en_state = bp->state;
+ adrv->suspend(adev, pm);
+ }
}
}
void bnxt_ulp_start(struct bnxt *bp, int err)
{
+ struct bnxt_aux_priv *aux_priv = bp->aux_priv;
struct bnxt_en_dev *edev = bp->edev;
- struct bnxt_ulp_ops *ops;
- int i;
if (!edev)
return;
@@ -312,58 +213,19 @@ void bnxt_ulp_start(struct bnxt *bp, int err)
if (err)
return;
- for (i = 0; i < BNXT_MAX_ULP; i++) {
- struct bnxt_ulp *ulp = &edev->ulp_tbl[i];
-
- ops = rtnl_dereference(ulp->ulp_ops);
- if (!ops || !ops->ulp_start)
- continue;
- ops->ulp_start(ulp->handle);
- }
-}
-
-void bnxt_ulp_sriov_cfg(struct bnxt *bp, int num_vfs)
-{
- struct bnxt_en_dev *edev = bp->edev;
- struct bnxt_ulp_ops *ops;
- int i;
-
- if (!edev)
- return;
+ if (aux_priv) {
+ struct auxiliary_device *adev;
- for (i = 0; i < BNXT_MAX_ULP; i++) {
- struct bnxt_ulp *ulp = &edev->ulp_tbl[i];
+ adev = &aux_priv->aux_dev;
+ if (adev->dev.driver) {
+ struct auxiliary_driver *adrv;
- rcu_read_lock();
- ops = rcu_dereference(ulp->ulp_ops);
- if (!ops || !ops->ulp_sriov_config) {
- rcu_read_unlock();
- continue;
+ adrv = to_auxiliary_drv(adev->dev.driver);
+ edev->en_state = bp->state;
+ adrv->resume(adev);
}
- bnxt_ulp_get(ulp);
- rcu_read_unlock();
- ops->ulp_sriov_config(ulp->handle, num_vfs);
- bnxt_ulp_put(ulp);
}
-}
-void bnxt_ulp_shutdown(struct bnxt *bp)
-{
- struct bnxt_en_dev *edev = bp->edev;
- struct bnxt_ulp_ops *ops;
- int i;
-
- if (!edev)
- return;
-
- for (i = 0; i < BNXT_MAX_ULP; i++) {
- struct bnxt_ulp *ulp = &edev->ulp_tbl[i];
-
- ops = rtnl_dereference(ulp->ulp_ops);
- if (!ops || !ops->ulp_shutdown)
- continue;
- ops->ulp_shutdown(ulp->handle);
- }
}
void bnxt_ulp_irq_stop(struct bnxt *bp)
@@ -374,8 +236,8 @@ void bnxt_ulp_irq_stop(struct bnxt *bp)
if (!edev || !(edev->flags & BNXT_EN_FLAG_MSIX_REQUESTED))
return;
- if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) {
- struct bnxt_ulp *ulp = &edev->ulp_tbl[BNXT_ROCE_ULP];
+ if (bnxt_ulp_registered(bp->edev)) {
+ struct bnxt_ulp *ulp = edev->ulp_tbl;
if (!ulp->msix_requested)
return;
@@ -395,8 +257,8 @@ void bnxt_ulp_irq_restart(struct bnxt *bp, int err)
if (!edev || !(edev->flags & BNXT_EN_FLAG_MSIX_REQUESTED))
return;
- if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) {
- struct bnxt_ulp *ulp = &edev->ulp_tbl[BNXT_ROCE_ULP];
+ if (bnxt_ulp_registered(bp->edev)) {
+ struct bnxt_ulp *ulp = edev->ulp_tbl;
struct bnxt_msix_entry *ent = NULL;
if (!ulp->msix_requested)
@@ -418,46 +280,15 @@ void bnxt_ulp_irq_restart(struct bnxt *bp, int err)
}
}
-void bnxt_ulp_async_events(struct bnxt *bp, struct hwrm_async_event_cmpl *cmpl)
-{
- u16 event_id = le16_to_cpu(cmpl->event_id);
- struct bnxt_en_dev *edev = bp->edev;
- struct bnxt_ulp_ops *ops;
- int i;
-
- if (!edev)
- return;
-
- rcu_read_lock();
- for (i = 0; i < BNXT_MAX_ULP; i++) {
- struct bnxt_ulp *ulp = &edev->ulp_tbl[i];
-
- ops = rcu_dereference(ulp->ulp_ops);
- if (!ops || !ops->ulp_async_notifier)
- continue;
- if (!ulp->async_events_bmap ||
- event_id > ulp->max_async_event_id)
- continue;
-
- /* Read max_async_event_id first before testing the bitmap. */
- smp_rmb();
- if (test_bit(event_id, ulp->async_events_bmap))
- ops->ulp_async_notifier(ulp->handle, cmpl);
- }
- rcu_read_unlock();
-}
-
-static int bnxt_register_async_events(struct bnxt_en_dev *edev, unsigned int ulp_id,
- unsigned long *events_bmap, u16 max_id)
+int bnxt_register_async_events(struct bnxt_en_dev *edev,
+ unsigned long *events_bmap,
+ u16 max_id)
{
struct net_device *dev = edev->net;
struct bnxt *bp = netdev_priv(dev);
struct bnxt_ulp *ulp;
- if (ulp_id >= BNXT_MAX_ULP)
- return -EINVAL;
-
- ulp = &edev->ulp_tbl[ulp_id];
+ ulp = edev->ulp_tbl;
ulp->async_events_bmap = events_bmap;
/* Make sure bnxt_ulp_async_events() sees this order */
smp_wmb();
@@ -465,38 +296,121 @@ static int bnxt_register_async_events(struct bnxt_en_dev *edev, unsigned int ulp
bnxt_hwrm_func_drv_rgtr(bp, events_bmap, max_id + 1, true);
return 0;
}
+EXPORT_SYMBOL(bnxt_register_async_events);
-static const struct bnxt_en_ops bnxt_en_ops_tbl = {
- .bnxt_register_device = bnxt_register_dev,
- .bnxt_unregister_device = bnxt_unregister_dev,
- .bnxt_request_msix = bnxt_req_msix_vecs,
- .bnxt_free_msix = bnxt_free_msix_vecs,
- .bnxt_send_fw_msg = bnxt_send_msg,
- .bnxt_register_fw_async_events = bnxt_register_async_events,
-};
+void bnxt_rdma_aux_device_uninit(struct bnxt *bp)
+{
+ struct bnxt_aux_priv *aux_priv;
+ struct auxiliary_device *adev;
+
+ /* Skip if no auxiliary device init was done. */
+ if (!(bp->flags & BNXT_FLAG_ROCE_CAP))
+ return;
+
+ aux_priv = bp->aux_priv;
+ adev = &aux_priv->aux_dev;
+ auxiliary_device_delete(adev);
+ auxiliary_device_uninit(adev);
+}
-struct bnxt_en_dev *bnxt_ulp_probe(struct net_device *dev)
+static void bnxt_aux_dev_release(struct device *dev)
{
- struct bnxt *bp = netdev_priv(dev);
- struct bnxt_en_dev *edev;
+ struct bnxt_aux_priv *aux_priv =
+ container_of(dev, struct bnxt_aux_priv, aux_dev.dev);
+
+ ida_free(&bnxt_aux_dev_ids, aux_priv->id);
+ kfree(aux_priv->edev->ulp_tbl);
+ kfree(aux_priv->edev);
+ kfree(aux_priv);
+}
+
+static void bnxt_set_edev_info(struct bnxt_en_dev *edev, struct bnxt *bp)
+{
+ edev->net = bp->dev;
+ edev->pdev = bp->pdev;
+ edev->l2_db_size = bp->db_size;
+ edev->l2_db_size_nc = bp->db_size;
- edev = bp->edev;
- if (!edev) {
- edev = kzalloc(sizeof(*edev), GFP_KERNEL);
- if (!edev)
- return ERR_PTR(-ENOMEM);
- edev->en_ops = &bnxt_en_ops_tbl;
- edev->net = dev;
- edev->pdev = bp->pdev;
- edev->l2_db_size = bp->db_size;
- edev->l2_db_size_nc = bp->db_size;
- bp->edev = edev;
- }
- edev->flags &= ~BNXT_EN_FLAG_ROCE_CAP;
if (bp->flags & BNXT_FLAG_ROCEV1_CAP)
edev->flags |= BNXT_EN_FLAG_ROCEV1_CAP;
if (bp->flags & BNXT_FLAG_ROCEV2_CAP)
edev->flags |= BNXT_EN_FLAG_ROCEV2_CAP;
- return bp->edev;
+ if (bp->flags & BNXT_FLAG_VF)
+ edev->flags |= BNXT_EN_FLAG_VF;
+
+ edev->chip_num = bp->chip_num;
+ edev->hw_ring_stats_size = bp->hw_ring_stats_size;
+ edev->pf_port_id = bp->pf.port_id;
+ edev->en_state = bp->state;
+
+ edev->ulp_tbl->msix_requested = bnxt_get_ulp_msix_num(bp);
+}
+
+void bnxt_rdma_aux_device_init(struct bnxt *bp)
+{
+ struct auxiliary_device *aux_dev;
+ struct bnxt_aux_priv *aux_priv;
+ struct bnxt_en_dev *edev;
+ struct bnxt_ulp *ulp;
+ int rc;
+
+ if (!(bp->flags & BNXT_FLAG_ROCE_CAP))
+ return;
+
+ bp->aux_priv = kzalloc(sizeof(*bp->aux_priv), GFP_KERNEL);
+ if (!bp->aux_priv)
+ goto exit;
+
+ bp->aux_priv->id = ida_alloc(&bnxt_aux_dev_ids, GFP_KERNEL);
+ if (bp->aux_priv->id < 0) {
+ netdev_warn(bp->dev,
+ "ida alloc failed for ROCE auxiliary device\n");
+ kfree(bp->aux_priv);
+ goto exit;
+ }
+
+ aux_priv = bp->aux_priv;
+ aux_dev = &aux_priv->aux_dev;
+ aux_dev->id = aux_priv->id;
+ aux_dev->name = "rdma";
+ aux_dev->dev.parent = &bp->pdev->dev;
+ aux_dev->dev.release = bnxt_aux_dev_release;
+
+ rc = auxiliary_device_init(aux_dev);
+ if (rc) {
+ ida_free(&bnxt_aux_dev_ids, bp->aux_priv->id);
+ kfree(bp->aux_priv);
+ goto exit;
+ }
+
+ /* From this point, all cleanup will happen via the .release callback,
+ * and any error unwinding will need to include a call to
+ * auxiliary_device_uninit.
+ */
+ edev = kzalloc(sizeof(*edev), GFP_KERNEL);
+ if (!edev)
+ goto aux_dev_uninit;
+
+ ulp = kzalloc(sizeof(*ulp), GFP_KERNEL);
+ if (!ulp)
+ goto aux_dev_uninit;
+
+ edev->ulp_tbl = ulp;
+ aux_priv->edev = edev;
+ bp->edev = edev;
+ bnxt_set_edev_info(edev, bp);
+
+ rc = auxiliary_device_add(aux_dev);
+ if (rc) {
+ netdev_warn(bp->dev,
+ "Failed to add auxiliary device for ROCE\n");
+ goto aux_dev_uninit;
+ }
+
+ return;
+
+aux_dev_uninit:
+ auxiliary_device_uninit(aux_dev);
+exit:
+ bp->flags &= ~BNXT_FLAG_ROCE_CAP;
}
-EXPORT_SYMBOL(bnxt_ulp_probe);
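For context: the conversion above replaces bnxt's private ULP ops table with the auxiliary bus, so the RoCE driver binds to an "rdma" auxiliary device instead of being called through function pointers. A minimal sketch of the lifecycle this adopts; everything except the auxiliary bus API itself is illustrative:

#include <linux/auxiliary_bus.h>
#include <linux/slab.h>

struct my_aux_priv {
	struct auxiliary_device adev;
};

static void my_aux_release(struct device *dev)
{
	/* Final put on the device: now safe to free the container. */
	kfree(container_of(dev, struct my_aux_priv, adev.dev));
}

static int my_aux_create(struct device *parent)
{
	struct my_aux_priv *priv;
	int rc;

	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	priv->adev.name = "rdma";	/* matched against the driver's id_table */
	priv->adev.id = 0;		/* bnxt allocates this from an IDA */
	priv->adev.dev.parent = parent;
	priv->adev.dev.release = my_aux_release;

	rc = auxiliary_device_init(&priv->adev);
	if (rc) {
		kfree(priv);		/* release() is not armed yet */
		return rc;
	}

	rc = auxiliary_device_add(&priv->adev);
	if (rc)
		auxiliary_device_uninit(&priv->adev);	/* frees via release() */
	return rc;
}

Teardown mirrors bnxt_rdma_aux_device_uninit(): auxiliary_device_delete() unbinds the driver, auxiliary_device_uninit() drops the last reference, and release() frees the container.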
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
index 42b50abc3e91..80cbc4b6130a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
@@ -15,6 +15,8 @@
#define BNXT_MIN_ROCE_CP_RINGS 2
#define BNXT_MIN_ROCE_STAT_CTXS 1
+#define BNXT_MAX_ROCE_MSIX 9
+#define BNXT_MAX_VF_ROCE_MSIX 2
struct hwrm_async_event_cmpl;
struct bnxt;
@@ -26,12 +28,6 @@ struct bnxt_msix_entry {
};
struct bnxt_ulp_ops {
- /* async_notifier() cannot sleep (in BH context) */
- void (*ulp_async_notifier)(void *, struct hwrm_async_event_cmpl *);
- void (*ulp_stop)(void *);
- void (*ulp_start)(void *);
- void (*ulp_sriov_config)(void *, int);
- void (*ulp_shutdown)(void *);
void (*ulp_irq_stop)(void *);
void (*ulp_irq_restart)(void *, struct bnxt_msix_entry *);
};
@@ -57,6 +53,7 @@ struct bnxt_ulp {
struct bnxt_en_dev {
struct net_device *net;
struct pci_dev *pdev;
+ struct bnxt_msix_entry msix_entries[BNXT_MAX_ROCE_MSIX];
u32 flags;
#define BNXT_EN_FLAG_ROCEV1_CAP 0x1
#define BNXT_EN_FLAG_ROCEV2_CAP 0x2
@@ -64,8 +61,10 @@ struct bnxt_en_dev {
BNXT_EN_FLAG_ROCEV2_CAP)
#define BNXT_EN_FLAG_MSIX_REQUESTED 0x4
#define BNXT_EN_FLAG_ULP_STOPPED 0x8
- const struct bnxt_en_ops *en_ops;
- struct bnxt_ulp ulp_tbl[BNXT_MAX_ULP];
+ #define BNXT_EN_FLAG_VF 0x10
+#define BNXT_EN_VF(edev) ((edev)->flags & BNXT_EN_FLAG_VF)
+
+ struct bnxt_ulp *ulp_tbl;
int l2_db_size; /* Doorbell BAR size in
* bytes mapped by L2
* driver.
@@ -74,24 +73,19 @@ struct bnxt_en_dev {
* bytes mapped as non-
* cacheable.
*/
+ u16 chip_num;
+ u16 hw_ring_stats_size;
+ u16 pf_port_id;
+ unsigned long en_state; /* Consulted by the
+ * RoCE driver in
+ * suspend mode only;
+ * updated on resume.
+ */
};
-struct bnxt_en_ops {
- int (*bnxt_register_device)(struct bnxt_en_dev *, unsigned int,
- struct bnxt_ulp_ops *, void *);
- int (*bnxt_unregister_device)(struct bnxt_en_dev *, unsigned int);
- int (*bnxt_request_msix)(struct bnxt_en_dev *, unsigned int,
- struct bnxt_msix_entry *, int);
- int (*bnxt_free_msix)(struct bnxt_en_dev *, unsigned int);
- int (*bnxt_send_fw_msg)(struct bnxt_en_dev *, unsigned int,
- struct bnxt_fw_msg *);
- int (*bnxt_register_fw_async_events)(struct bnxt_en_dev *, unsigned int,
- unsigned long *, u16);
-};
-
-static inline bool bnxt_ulp_registered(struct bnxt_en_dev *edev, int ulp_id)
+static inline bool bnxt_ulp_registered(struct bnxt_en_dev *edev)
{
- if (edev && rcu_access_pointer(edev->ulp_tbl[ulp_id].ulp_ops))
+ if (edev && edev->ulp_tbl)
return true;
return false;
}
@@ -102,10 +96,15 @@ int bnxt_get_ulp_stat_ctxs(struct bnxt *bp);
void bnxt_ulp_stop(struct bnxt *bp);
void bnxt_ulp_start(struct bnxt *bp, int err);
void bnxt_ulp_sriov_cfg(struct bnxt *bp, int num_vfs);
-void bnxt_ulp_shutdown(struct bnxt *bp);
void bnxt_ulp_irq_stop(struct bnxt *bp);
void bnxt_ulp_irq_restart(struct bnxt *bp, int err);
void bnxt_ulp_async_events(struct bnxt *bp, struct hwrm_async_event_cmpl *cmpl);
-struct bnxt_en_dev *bnxt_ulp_probe(struct net_device *dev);
-
+void bnxt_rdma_aux_device_uninit(struct bnxt *bp);
+void bnxt_rdma_aux_device_init(struct bnxt *bp);
+int bnxt_register_dev(struct bnxt_en_dev *edev, struct bnxt_ulp_ops *ulp_ops,
+ void *handle);
+void bnxt_unregister_dev(struct bnxt_en_dev *edev);
+int bnxt_send_msg(struct bnxt_en_dev *edev, struct bnxt_fw_msg *fw_msg);
+int bnxt_register_async_events(struct bnxt_en_dev *edev,
+ unsigned long *events_bmap, u16 max_id);
#endif
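With the en_ops table removed, a hypothetical consumer calls the exported entry points directly from its auxiliary probe. A sketch under stated assumptions: my_ulp_ops, my_events_bmap and MY_MAX_ASYNC_EVENT_ID are illustrative, only the bnxt_* symbols come from this header (bnxt.h/bnxt_ulp.h assumed included):

#include <linux/auxiliary_bus.h>

static struct bnxt_ulp_ops my_ulp_ops;		/* irq_stop/irq_restart hooks */
static unsigned long my_events_bmap[1];		/* illustrative event bitmap */
#define MY_MAX_ASYNC_EVENT_ID	0x3f		/* illustrative */

static int my_rdma_probe(struct auxiliary_device *adev,
			 const struct auxiliary_device_id *id)
{
	struct bnxt_aux_priv *aux_priv =
		container_of(adev, struct bnxt_aux_priv, aux_dev);
	struct bnxt_en_dev *edev = aux_priv->edev;
	int rc;

	/* Direct call replaces edev->en_ops->bnxt_register_device(). */
	rc = bnxt_register_dev(edev, &my_ulp_ops, adev);
	if (rc)
		return rc;

	return bnxt_register_async_events(edev, my_events_bmap,
					  MY_MAX_ASYNC_EVENT_ID);
}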
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
index 36d5202c0aee..5843c93b1711 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
@@ -422,9 +422,11 @@ static int bnxt_xdp_set(struct bnxt *bp, struct bpf_prog *prog)
if (prog) {
bnxt_set_rx_skb_mode(bp, true);
+ xdp_features_set_redirect_target(dev, true);
} else {
int rx, tx;
+ xdp_features_clear_redirect_target(dev);
bnxt_set_rx_skb_mode(bp, false);
bnxt_get_max_rings(bp, &rx, &tx, true);
if (rx > 1) {
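The xdp_features hunk above uses the capability-advertisement API added this cycle; a sketch of the pattern, where my_update_xdp_features is illustrative:

#include <linux/netdevice.h>
#include <net/xdp.h>

static void my_update_xdp_features(struct net_device *netdev,
				   struct bpf_prog *prog)
{
	/* Static capabilities, typically set once at probe time. */
	netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;

	/* Redirect-target capability is only valid while a program is
	 * attached; the bool argument advertises multi-buffer support.
	 */
	if (prog)
		xdp_features_set_redirect_target(netdev, false);
	else
		xdp_features_clear_redirect_target(netdev);
}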
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index 21973046b12b..d937daa8ee88 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -2316,6 +2316,14 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
__func__, p_index, ring->c_index,
ring->read_ptr, dma_length_status);
+ if (unlikely(len > RX_BUF_LENGTH)) {
+ netif_err(priv, rx_status, dev, "oversized packet\n");
+ dev->stats.rx_length_errors++;
+ dev->stats.rx_errors++;
+ dev_kfree_skb_any(skb);
+ goto next;
+ }
+
if (unlikely(!(dma_flag & DMA_EOP) || !(dma_flag & DMA_SOP))) {
netif_err(priv, rx_status, dev,
"dropping fragmented packet!\n");
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
index f55d9d9c01a8..3a4b6cb7b7b9 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
@@ -77,14 +77,18 @@ int bcmgenet_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
if (wol->wolopts) {
device_set_wakeup_enable(kdev, 1);
/* Avoid unbalanced enable_irq_wake calls */
- if (priv->wol_irq_disabled)
+ if (priv->wol_irq_disabled) {
enable_irq_wake(priv->wol_irq);
+ enable_irq_wake(priv->irq0);
+ }
priv->wol_irq_disabled = false;
} else {
device_set_wakeup_enable(kdev, 0);
/* Avoid unbalanced disable_irq_wake calls */
- if (!priv->wol_irq_disabled)
+ if (!priv->wol_irq_disabled) {
disable_irq_wake(priv->wol_irq);
+ disable_irq_wake(priv->irq0);
+ }
priv->wol_irq_disabled = true;
}
diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
index b615176338b2..be042905ada2 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
@@ -176,15 +176,6 @@ void bcmgenet_phy_power_set(struct net_device *dev, bool enable)
static void bcmgenet_moca_phy_setup(struct bcmgenet_priv *priv)
{
- u32 reg;
-
- if (!GENET_IS_V5(priv)) {
- /* Speed settings are set in bcmgenet_mii_setup() */
- reg = bcmgenet_sys_readl(priv, SYS_PORT_CTRL);
- reg |= LED_ACT_SOURCE_MAC;
- bcmgenet_sys_writel(priv, reg, SYS_PORT_CTRL);
- }
-
if (priv->hw_params->flags & GENET_HAS_MOCA_LINK_DET)
fixed_phy_set_link_update(priv->dev->phydev,
bcmgenet_fixed_phy_link_update);
@@ -217,6 +208,8 @@ int bcmgenet_mii_config(struct net_device *dev, bool init)
if (!phy_name) {
phy_name = "MoCA";
+ if (!GENET_IS_V5(priv))
+ port_ctrl |= LED_ACT_SOURCE_MAC;
bcmgenet_moca_phy_setup(priv);
}
break;
diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
index 9c410f93a103..14dfec4db8f9 100644
--- a/drivers/net/ethernet/cadence/macb.h
+++ b/drivers/net/ethernet/cadence/macb.h
@@ -768,8 +768,6 @@
#define gem_readl_n(port, reg, idx) (port)->macb_reg_readl((port), GEM_##reg + idx * 4)
#define gem_writel_n(port, reg, idx, value) (port)->macb_reg_writel((port), GEM_##reg + idx * 4, (value))
-#define PTP_TS_BUFFER_SIZE 128 /* must be power of 2 */
-
/* Conditional GEM/MACB macros. These perform the operation to the correct
* register dependent on whether the device is a GEM or a MACB. For registers
* and bitfields that are common across both devices, use macb_{read,write}l
@@ -819,11 +817,6 @@ struct macb_dma_desc_ptp {
u32 ts_1;
u32 ts_2;
};
-
-struct gem_tx_ts {
- struct sk_buff *skb;
- struct macb_dma_desc_ptp desc_ptp;
-};
#endif
/* DMA descriptor bitfields */
@@ -1224,12 +1217,6 @@ struct macb_queue {
void *rx_buffers;
struct napi_struct napi_rx;
struct queue_stats stats;
-
-#ifdef CONFIG_MACB_USE_HWSTAMP
- struct work_struct tx_ts_task;
- unsigned int tx_ts_head, tx_ts_tail;
- struct gem_tx_ts tx_timestamps[PTP_TS_BUFFER_SIZE];
-#endif
};
struct ethtool_rx_fs_item {
@@ -1340,14 +1327,14 @@ enum macb_bd_control {
void gem_ptp_init(struct net_device *ndev);
void gem_ptp_remove(struct net_device *ndev);
-int gem_ptp_txstamp(struct macb_queue *queue, struct sk_buff *skb, struct macb_dma_desc *des);
+void gem_ptp_txstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc);
void gem_ptp_rxstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc);
-static inline int gem_ptp_do_txstamp(struct macb_queue *queue, struct sk_buff *skb, struct macb_dma_desc *desc)
+static inline void gem_ptp_do_txstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc)
{
- if (queue->bp->tstamp_config.tx_type == TSTAMP_DISABLED)
- return -ENOTSUPP;
+ if (bp->tstamp_config.tx_type == TSTAMP_DISABLED)
+ return;
- return gem_ptp_txstamp(queue, skb, desc);
+ gem_ptp_txstamp(bp, skb, desc);
}
static inline void gem_ptp_do_rxstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc)
@@ -1363,11 +1350,7 @@ int gem_set_hwtst(struct net_device *dev, struct ifreq *ifr, int cmd);
static inline void gem_ptp_init(struct net_device *ndev) { }
static inline void gem_ptp_remove(struct net_device *ndev) { }
-static inline int gem_ptp_do_txstamp(struct macb_queue *queue, struct sk_buff *skb, struct macb_dma_desc *desc)
-{
- return -1;
-}
-
+static inline void gem_ptp_do_txstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc) { }
static inline void gem_ptp_do_rxstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc) { }
#endif
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 6cda31520c42..6e141a8bbf43 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -334,7 +334,7 @@ static int macb_mdio_wait_for_idle(struct macb *bp)
1, MACB_MDIO_TIMEOUT);
}
-static int macb_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
+static int macb_mdio_read_c22(struct mii_bus *bus, int mii_id, int regnum)
{
struct macb *bp = bus->priv;
int status;
@@ -347,35 +347,62 @@ static int macb_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
if (status < 0)
goto mdio_read_exit;
- if (regnum & MII_ADDR_C45) {
- macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF)
- | MACB_BF(RW, MACB_MAN_C45_ADDR)
- | MACB_BF(PHYA, mii_id)
- | MACB_BF(REGA, (regnum >> 16) & 0x1F)
- | MACB_BF(DATA, regnum & 0xFFFF)
- | MACB_BF(CODE, MACB_MAN_C45_CODE)));
-
- status = macb_mdio_wait_for_idle(bp);
- if (status < 0)
- goto mdio_read_exit;
-
- macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF)
- | MACB_BF(RW, MACB_MAN_C45_READ)
- | MACB_BF(PHYA, mii_id)
- | MACB_BF(REGA, (regnum >> 16) & 0x1F)
- | MACB_BF(CODE, MACB_MAN_C45_CODE)));
- } else {
- macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C22_SOF)
- | MACB_BF(RW, MACB_MAN_C22_READ)
- | MACB_BF(PHYA, mii_id)
- | MACB_BF(REGA, regnum)
- | MACB_BF(CODE, MACB_MAN_C22_CODE)));
+ macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C22_SOF)
+ | MACB_BF(RW, MACB_MAN_C22_READ)
+ | MACB_BF(PHYA, mii_id)
+ | MACB_BF(REGA, regnum)
+ | MACB_BF(CODE, MACB_MAN_C22_CODE)));
+
+ status = macb_mdio_wait_for_idle(bp);
+ if (status < 0)
+ goto mdio_read_exit;
+
+ status = MACB_BFEXT(DATA, macb_readl(bp, MAN));
+
+mdio_read_exit:
+ pm_runtime_mark_last_busy(&bp->pdev->dev);
+ pm_runtime_put_autosuspend(&bp->pdev->dev);
+mdio_pm_exit:
+ return status;
+}
+
+static int macb_mdio_read_c45(struct mii_bus *bus, int mii_id, int devad,
+ int regnum)
+{
+ struct macb *bp = bus->priv;
+ int status;
+
+ status = pm_runtime_get_sync(&bp->pdev->dev);
+ if (status < 0) {
+ pm_runtime_put_noidle(&bp->pdev->dev);
+ goto mdio_pm_exit;
}
status = macb_mdio_wait_for_idle(bp);
if (status < 0)
goto mdio_read_exit;
+ macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF)
+ | MACB_BF(RW, MACB_MAN_C45_ADDR)
+ | MACB_BF(PHYA, mii_id)
+ | MACB_BF(REGA, devad & 0x1F)
+ | MACB_BF(DATA, regnum & 0xFFFF)
+ | MACB_BF(CODE, MACB_MAN_C45_CODE)));
+
+ status = macb_mdio_wait_for_idle(bp);
+ if (status < 0)
+ goto mdio_read_exit;
+
+ macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF)
+ | MACB_BF(RW, MACB_MAN_C45_READ)
+ | MACB_BF(PHYA, mii_id)
+ | MACB_BF(REGA, devad & 0x1F)
+ | MACB_BF(CODE, MACB_MAN_C45_CODE)));
+
+ status = macb_mdio_wait_for_idle(bp);
+ if (status < 0)
+ goto mdio_read_exit;
+
status = MACB_BFEXT(DATA, macb_readl(bp, MAN));
mdio_read_exit:
@@ -385,8 +412,8 @@ mdio_pm_exit:
return status;
}
-static int macb_mdio_write(struct mii_bus *bus, int mii_id, int regnum,
- u16 value)
+static int macb_mdio_write_c22(struct mii_bus *bus, int mii_id, int regnum,
+ u16 value)
{
struct macb *bp = bus->priv;
int status;
@@ -399,37 +426,63 @@ static int macb_mdio_write(struct mii_bus *bus, int mii_id, int regnum,
if (status < 0)
goto mdio_write_exit;
- if (regnum & MII_ADDR_C45) {
- macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF)
- | MACB_BF(RW, MACB_MAN_C45_ADDR)
- | MACB_BF(PHYA, mii_id)
- | MACB_BF(REGA, (regnum >> 16) & 0x1F)
- | MACB_BF(DATA, regnum & 0xFFFF)
- | MACB_BF(CODE, MACB_MAN_C45_CODE)));
-
- status = macb_mdio_wait_for_idle(bp);
- if (status < 0)
- goto mdio_write_exit;
-
- macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF)
- | MACB_BF(RW, MACB_MAN_C45_WRITE)
- | MACB_BF(PHYA, mii_id)
- | MACB_BF(REGA, (regnum >> 16) & 0x1F)
- | MACB_BF(CODE, MACB_MAN_C45_CODE)
- | MACB_BF(DATA, value)));
- } else {
- macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C22_SOF)
- | MACB_BF(RW, MACB_MAN_C22_WRITE)
- | MACB_BF(PHYA, mii_id)
- | MACB_BF(REGA, regnum)
- | MACB_BF(CODE, MACB_MAN_C22_CODE)
- | MACB_BF(DATA, value)));
+ macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C22_SOF)
+ | MACB_BF(RW, MACB_MAN_C22_WRITE)
+ | MACB_BF(PHYA, mii_id)
+ | MACB_BF(REGA, regnum)
+ | MACB_BF(CODE, MACB_MAN_C22_CODE)
+ | MACB_BF(DATA, value)));
+
+ status = macb_mdio_wait_for_idle(bp);
+ if (status < 0)
+ goto mdio_write_exit;
+
+mdio_write_exit:
+ pm_runtime_mark_last_busy(&bp->pdev->dev);
+ pm_runtime_put_autosuspend(&bp->pdev->dev);
+mdio_pm_exit:
+ return status;
+}
+
+static int macb_mdio_write_c45(struct mii_bus *bus, int mii_id,
+ int devad, int regnum,
+ u16 value)
+{
+ struct macb *bp = bus->priv;
+ int status;
+
+ status = pm_runtime_get_sync(&bp->pdev->dev);
+ if (status < 0) {
+ pm_runtime_put_noidle(&bp->pdev->dev);
+ goto mdio_pm_exit;
}
status = macb_mdio_wait_for_idle(bp);
if (status < 0)
goto mdio_write_exit;
+ macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF)
+ | MACB_BF(RW, MACB_MAN_C45_ADDR)
+ | MACB_BF(PHYA, mii_id)
+ | MACB_BF(REGA, devad & 0x1F)
+ | MACB_BF(DATA, regnum & 0xFFFF)
+ | MACB_BF(CODE, MACB_MAN_C45_CODE)));
+
+ status = macb_mdio_wait_for_idle(bp);
+ if (status < 0)
+ goto mdio_write_exit;
+
+ macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF)
+ | MACB_BF(RW, MACB_MAN_C45_WRITE)
+ | MACB_BF(PHYA, mii_id)
+ | MACB_BF(REGA, devad & 0x1F)
+ | MACB_BF(CODE, MACB_MAN_C45_CODE)
+ | MACB_BF(DATA, value)));
+
+ status = macb_mdio_wait_for_idle(bp);
+ if (status < 0)
+ goto mdio_write_exit;
+
mdio_write_exit:
pm_runtime_mark_last_busy(&bp->pdev->dev);
pm_runtime_put_autosuspend(&bp->pdev->dev);
@@ -902,8 +955,10 @@ static int macb_mii_init(struct macb *bp)
}
bp->mii_bus->name = "MACB_mii_bus";
- bp->mii_bus->read = &macb_mdio_read;
- bp->mii_bus->write = &macb_mdio_write;
+ bp->mii_bus->read = &macb_mdio_read_c22;
+ bp->mii_bus->write = &macb_mdio_write_c22;
+ bp->mii_bus->read_c45 = &macb_mdio_read_c45;
+ bp->mii_bus->write_c45 = &macb_mdio_write_c45;
snprintf(bp->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x",
bp->pdev->name, bp->pdev->id);
bp->mii_bus->priv = bp;
@@ -1191,13 +1246,9 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
/* First, update TX stats if needed */
if (skb) {
if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
- !ptp_one_step_sync(skb) &&
- gem_ptp_do_txstamp(queue, skb, desc) == 0) {
- /* skb now belongs to timestamp buffer
- * and will be removed later
- */
- tx_skb->skb = NULL;
- }
+ !ptp_one_step_sync(skb))
+ gem_ptp_do_txstamp(bp, skb, desc);
+
netdev_vdbg(bp->dev, "skb %u (data %p) TX complete\n",
macb_tx_ring_wrap(bp, tail),
skb->data);
@@ -2253,6 +2304,12 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
return ret;
}
+#ifdef CONFIG_MACB_USE_HWSTAMP
+ if ((skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
+ (bp->hw_dma_cap & HW_DMA_CAP_PTP))
+ skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+#endif
+
is_lso = (skb_shinfo(skb)->gso_size != 0);
if (is_lso) {
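The macb change follows the tree-wide C22/C45 MDIO separation mentioned in the pull: instead of testing regnum for MII_ADDR_C45, the bus registers distinct callbacks, and Clause 45 handlers receive devad as its own argument. A sketch of the resulting shape, with my_mdio_* illustrative:

#include <linux/phy.h>

static int my_mdio_read_c22(struct mii_bus *bus, int phy, int regnum)
{
	/* Clause 22: 5-bit register number, a single bus cycle. */
	return 0;	/* hardware access elided */
}

static int my_mdio_read_c45(struct mii_bus *bus, int phy, int devad,
			    int regnum)
{
	/* Clause 45: devad is no longer packed into regnum; issue an
	 * address cycle for (devad, regnum), then a read cycle.
	 */
	return 0;	/* hardware access elided */
}

static void my_mii_bus_setup(struct mii_bus *bus)
{
	bus->read = my_mdio_read_c22;
	bus->read_c45 = my_mdio_read_c45;
	/* write and write_c45 are assigned the same way */
}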
diff --git a/drivers/net/ethernet/cadence/macb_ptp.c b/drivers/net/ethernet/cadence/macb_ptp.c
index e6cb20aaa76a..f962a95068a0 100644
--- a/drivers/net/ethernet/cadence/macb_ptp.c
+++ b/drivers/net/ethernet/cadence/macb_ptp.c
@@ -292,79 +292,39 @@ void gem_ptp_rxstamp(struct macb *bp, struct sk_buff *skb,
}
}
-static void gem_tstamp_tx(struct macb *bp, struct sk_buff *skb,
- struct macb_dma_desc_ptp *desc_ptp)
+void gem_ptp_txstamp(struct macb *bp, struct sk_buff *skb,
+ struct macb_dma_desc *desc)
{
struct skb_shared_hwtstamps shhwtstamps;
- struct timespec64 ts;
-
- gem_hw_timestamp(bp, desc_ptp->ts_1, desc_ptp->ts_2, &ts);
- memset(&shhwtstamps, 0, sizeof(shhwtstamps));
- shhwtstamps.hwtstamp = ktime_set(ts.tv_sec, ts.tv_nsec);
- skb_tstamp_tx(skb, &shhwtstamps);
-}
-
-int gem_ptp_txstamp(struct macb_queue *queue, struct sk_buff *skb,
- struct macb_dma_desc *desc)
-{
- unsigned long tail = READ_ONCE(queue->tx_ts_tail);
- unsigned long head = queue->tx_ts_head;
struct macb_dma_desc_ptp *desc_ptp;
- struct gem_tx_ts *tx_timestamp;
-
- if (!GEM_BFEXT(DMA_TXVALID, desc->ctrl))
- return -EINVAL;
+ struct timespec64 ts;
- if (CIRC_SPACE(head, tail, PTP_TS_BUFFER_SIZE) == 0)
- return -ENOMEM;
+ if (!GEM_BFEXT(DMA_TXVALID, desc->ctrl)) {
+ dev_warn_ratelimited(&bp->pdev->dev,
+ "Timestamp not set in TX BD as expected\n");
+ return;
+ }
- desc_ptp = macb_ptp_desc(queue->bp, desc);
+ desc_ptp = macb_ptp_desc(bp, desc);
/* Unlikely but check */
- if (!desc_ptp)
- return -EINVAL;
- skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
- tx_timestamp = &queue->tx_timestamps[head];
- tx_timestamp->skb = skb;
+ if (!desc_ptp) {
+ dev_warn_ratelimited(&bp->pdev->dev,
+ "Timestamp not supported in BD\n");
+ return;
+ }
+
/* ensure ts_1/ts_2 is loaded after ctrl (TX_USED check) */
dma_rmb();
- tx_timestamp->desc_ptp.ts_1 = desc_ptp->ts_1;
- tx_timestamp->desc_ptp.ts_2 = desc_ptp->ts_2;
- /* move head */
- smp_store_release(&queue->tx_ts_head,
- (head + 1) & (PTP_TS_BUFFER_SIZE - 1));
-
- schedule_work(&queue->tx_ts_task);
- return 0;
-}
+ gem_hw_timestamp(bp, desc_ptp->ts_1, desc_ptp->ts_2, &ts);
-static void gem_tx_timestamp_flush(struct work_struct *work)
-{
- struct macb_queue *queue =
- container_of(work, struct macb_queue, tx_ts_task);
- unsigned long head, tail;
- struct gem_tx_ts *tx_ts;
-
- /* take current head */
- head = smp_load_acquire(&queue->tx_ts_head);
- tail = queue->tx_ts_tail;
-
- while (CIRC_CNT(head, tail, PTP_TS_BUFFER_SIZE)) {
- tx_ts = &queue->tx_timestamps[tail];
- gem_tstamp_tx(queue->bp, tx_ts->skb, &tx_ts->desc_ptp);
- /* cleanup */
- dev_kfree_skb_any(tx_ts->skb);
- /* remove old tail */
- smp_store_release(&queue->tx_ts_tail,
- (tail + 1) & (PTP_TS_BUFFER_SIZE - 1));
- tail = queue->tx_ts_tail;
- }
+ memset(&shhwtstamps, 0, sizeof(shhwtstamps));
+ shhwtstamps.hwtstamp = ktime_set(ts.tv_sec, ts.tv_nsec);
+ skb_tstamp_tx(skb, &shhwtstamps);
}
void gem_ptp_init(struct net_device *dev)
{
struct macb *bp = netdev_priv(dev);
- struct macb_queue *queue;
- unsigned int q;
bp->ptp_clock_info = gem_ptp_caps_template;
@@ -384,11 +344,6 @@ void gem_ptp_init(struct net_device *dev)
}
spin_lock_init(&bp->tsu_clk_lock);
- for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
- queue->tx_ts_head = 0;
- queue->tx_ts_tail = 0;
- INIT_WORK(&queue->tx_ts_task, gem_tx_timestamp_flush);
- }
gem_ptp_init_tsu(bp);
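The macb_ptp rework above drops the per-queue circular buffer and workqueue and delivers the TX timestamp synchronously from the completion path. A sketch of that delivery step, with my_deliver_tx_tstamp standing in for the tail of gem_ptp_txstamp():

#include <linux/skbuff.h>
#include <linux/ktime.h>

static void my_deliver_tx_tstamp(struct sk_buff *skb, u64 sec, u32 nsec)
{
	struct skb_shared_hwtstamps shhwtstamps;

	memset(&shhwtstamps, 0, sizeof(shhwtstamps));
	shhwtstamps.hwtstamp = ktime_set(sec, nsec);
	/* Clones the skb onto the socket error queue; the caller still
	 * frees the original skb on the completion path.
	 */
	skb_tstamp_tx(skb, &shhwtstamps);
}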
diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
index f2f95493ec89..8b25313c7f6b 100644
--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
@@ -2218,6 +2218,8 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
netdev->netdev_ops = &nicvf_netdev_ops;
netdev->watchdog_timeo = NICVF_TX_TIMEOUT;
+ netdev->xdp_features = NETDEV_XDP_ACT_BASIC;
+
/* MTU range: 64 - 9200 */
netdev->min_mtu = NIC_HW_MIN_FRS;
netdev->max_mtu = NIC_HW_MAX_FRS;
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index 9cbce1faab26..7db2403c4c9c 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -6490,21 +6490,21 @@ static const struct tlsdev_ops cxgb4_ktls_ops = {
#if IS_ENABLED(CONFIG_CHELSIO_IPSEC_INLINE)
-static int cxgb4_xfrm_add_state(struct xfrm_state *x)
+static int cxgb4_xfrm_add_state(struct xfrm_state *x,
+ struct netlink_ext_ack *extack)
{
struct adapter *adap = netdev2adap(x->xso.dev);
int ret;
if (!mutex_trylock(&uld_mutex)) {
- dev_dbg(adap->pdev_dev,
- "crypto uld critical resource is under use\n");
+ NL_SET_ERR_MSG_MOD(extack, "crypto uld critical resource is under use");
return -EBUSY;
}
ret = chcr_offload_state(adap, CXGB4_XFRMDEV_OPS);
if (ret)
goto out_unlock;
- ret = adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_add(x);
+ ret = adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_add(x, extack);
out_unlock:
mutex_unlock(&uld_mutex);
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.h
index be96f1dc0372..d4a862a9fd7d 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.h
@@ -4,7 +4,7 @@
#ifndef __CXGB4_TC_MQPRIO_H__
#define __CXGB4_TC_MQPRIO_H__
-#include <net/pkt_cls.h>
+#include <net/pkt_sched.h>
#define CXGB4_EOSW_TXQ_DEFAULT_DESC_NUM 128
diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
index ca21794281d6..3731c93f8f95 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
@@ -80,7 +80,8 @@ static void *ch_ipsec_uld_add(const struct cxgb4_lld_info *infop);
static void ch_ipsec_advance_esn_state(struct xfrm_state *x);
static void ch_ipsec_xfrm_free_state(struct xfrm_state *x);
static void ch_ipsec_xfrm_del_state(struct xfrm_state *x);
-static int ch_ipsec_xfrm_add_state(struct xfrm_state *x);
+static int ch_ipsec_xfrm_add_state(struct xfrm_state *x,
+ struct netlink_ext_ack *extack);
static const struct xfrmdev_ops ch_ipsec_xfrmdev_ops = {
.xdo_dev_state_add = ch_ipsec_xfrm_add_state,
@@ -226,65 +227,66 @@ out:
* returns 0 on success, negative error if failed to send message to FPGA
* positive error if FPGA returned a bad response
*/
-static int ch_ipsec_xfrm_add_state(struct xfrm_state *x)
+static int ch_ipsec_xfrm_add_state(struct xfrm_state *x,
+ struct netlink_ext_ack *extack)
{
struct ipsec_sa_entry *sa_entry;
int res = 0;
if (x->props.aalgo != SADB_AALG_NONE) {
- pr_debug("Cannot offload authenticated xfrm states\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload authenticated xfrm states");
return -EINVAL;
}
if (x->props.calgo != SADB_X_CALG_NONE) {
- pr_debug("Cannot offload compressed xfrm states\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload compressed xfrm states");
return -EINVAL;
}
if (x->props.family != AF_INET &&
x->props.family != AF_INET6) {
- pr_debug("Only IPv4/6 xfrm state offloaded\n");
+ NL_SET_ERR_MSG_MOD(extack, "Only IPv4/6 xfrm state offloaded");
return -EINVAL;
}
if (x->props.mode != XFRM_MODE_TRANSPORT &&
x->props.mode != XFRM_MODE_TUNNEL) {
- pr_debug("Only transport and tunnel xfrm offload\n");
+ NL_SET_ERR_MSG_MOD(extack, "Only transport and tunnel xfrm offload");
return -EINVAL;
}
if (x->id.proto != IPPROTO_ESP) {
- pr_debug("Only ESP xfrm state offloaded\n");
+ NL_SET_ERR_MSG_MOD(extack, "Only ESP xfrm state offloaded");
return -EINVAL;
}
if (x->encap) {
- pr_debug("Encapsulated xfrm state not offloaded\n");
+ NL_SET_ERR_MSG_MOD(extack, "Encapsulated xfrm state not offloaded");
return -EINVAL;
}
if (!x->aead) {
- pr_debug("Cannot offload xfrm states without aead\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states without aead");
return -EINVAL;
}
if (x->aead->alg_icv_len != 128 &&
x->aead->alg_icv_len != 96) {
- pr_debug("Cannot offload xfrm states with AEAD ICV length other than 96b & 128b\n");
- return -EINVAL;
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states with AEAD ICV length other than 96b & 128b");
+ return -EINVAL;
}
if ((x->aead->alg_key_len != 128 + 32) &&
(x->aead->alg_key_len != 256 + 32)) {
- pr_debug("cannot offload xfrm states with AEAD key length other than 128/256 bit\n");
+ NL_SET_ERR_MSG_MOD(extack, "cannot offload xfrm states with AEAD key length other than 128/256 bit");
return -EINVAL;
}
if (x->tfcpad) {
- pr_debug("Cannot offload xfrm states with tfc padding\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states with tfc padding");
return -EINVAL;
}
if (!x->geniv) {
- pr_debug("Cannot offload xfrm states without geniv\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states without geniv");
return -EINVAL;
}
if (strcmp(x->geniv, "seqiv")) {
- pr_debug("Cannot offload xfrm states with geniv other than seqiv\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states with geniv other than seqiv");
return -EINVAL;
}
if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
- pr_debug("Unsupported xfrm offload\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported xfrm offload");
return -EINVAL;
}
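Both conversions above feed offload rejections through the new extack argument of xdo_dev_state_add(), so the reason reaches userspace on the netlink reply instead of only the kernel log. A condensed sketch, with my_xfrm_add_state illustrative:

#include <net/xfrm.h>
#include <linux/netlink.h>
#include <linux/pfkeyv2.h>

static int my_xfrm_add_state(struct xfrm_state *x,
			     struct netlink_ext_ack *extack)
{
	if (x->props.aalgo != SADB_AALG_NONE) {
		/* `ip xfrm` can now print the real reason for -EINVAL. */
		NL_SET_ERR_MSG_MOD(extack,
				   "Cannot offload authenticated xfrm states");
		return -EINVAL;
	}
	return 0;
}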
diff --git a/drivers/net/ethernet/engleder/Makefile b/drivers/net/ethernet/engleder/Makefile
index b6e3b16623de..b98135f65eb7 100644
--- a/drivers/net/ethernet/engleder/Makefile
+++ b/drivers/net/ethernet/engleder/Makefile
@@ -6,5 +6,5 @@
obj-$(CONFIG_TSNEP) += tsnep.o
tsnep-objs := tsnep_main.o tsnep_ethtool.o tsnep_ptp.o tsnep_tc.o \
- tsnep_rxnfc.o $(tsnep-y)
+ tsnep_rxnfc.o tsnep_xdp.o
tsnep-$(CONFIG_TSNEP_SELFTESTS) += tsnep_selftests.o
diff --git a/drivers/net/ethernet/engleder/tsnep.h b/drivers/net/ethernet/engleder/tsnep.h
index f93ba48bac3f..058c2bcf31a7 100644
--- a/drivers/net/ethernet/engleder/tsnep.h
+++ b/drivers/net/ethernet/engleder/tsnep.h
@@ -65,7 +65,11 @@ struct tsnep_tx_entry {
u32 properties;
- struct sk_buff *skb;
+ u32 type;
+ union {
+ struct sk_buff *skb;
+ struct xdp_frame *xdpf;
+ };
size_t len;
DEFINE_DMA_UNMAP_ADDR(dma);
};
@@ -78,8 +82,6 @@ struct tsnep_tx {
void *page[TSNEP_RING_PAGE_COUNT];
dma_addr_t page_dma[TSNEP_RING_PAGE_COUNT];
- /* TX ring lock */
- spinlock_t lock;
struct tsnep_tx_entry entry[TSNEP_RING_SIZE];
int write;
int read;
@@ -107,6 +109,7 @@ struct tsnep_rx {
struct tsnep_adapter *adapter;
void __iomem *addr;
int queue_index;
+ int tx_queue_index;
void *page[TSNEP_RING_PAGE_COUNT];
dma_addr_t page_dma[TSNEP_RING_PAGE_COUNT];
@@ -123,6 +126,8 @@ struct tsnep_rx {
u32 dropped;
u32 multicast;
u32 alloc_failed;
+
+ struct xdp_rxq_info xdp_rxq;
};
struct tsnep_queue {
@@ -172,6 +177,8 @@ struct tsnep_adapter {
int rxnfc_count;
int rxnfc_max;
+ struct bpf_prog *xdp_prog;
+
int num_tx_queues;
struct tsnep_tx tx[TSNEP_MAX_QUEUES];
int num_rx_queues;
@@ -204,6 +211,9 @@ int tsnep_rxnfc_add_rule(struct tsnep_adapter *adapter,
int tsnep_rxnfc_del_rule(struct tsnep_adapter *adapter,
struct ethtool_rxnfc *cmd);
+int tsnep_xdp_setup_prog(struct tsnep_adapter *adapter, struct bpf_prog *prog,
+ struct netlink_ext_ack *extack);
+
#if IS_ENABLED(CONFIG_TSNEP_SELFTESTS)
int tsnep_ethtool_get_test_count(void);
void tsnep_ethtool_get_test_strings(u8 *data);
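The skb/xdpf union above needs a type tag so that TX completion frees each entry through the right path. A self-contained sketch of that bookkeeping, with the MY_* names illustrative:

#include <linux/bits.h>
#include <linux/skbuff.h>
#include <net/xdp.h>

#define MY_TX_TYPE_SKB	BIT(0)
#define MY_TX_TYPE_XDP	BIT(1)

struct my_tx_entry {
	u32 type;			/* discriminates the union */
	union {
		struct sk_buff *skb;
		struct xdp_frame *xdpf;
	};
};

static void my_tx_entry_free(struct my_tx_entry *entry, int napi_budget)
{
	if (entry->type & MY_TX_TYPE_SKB)
		napi_consume_skb(entry->skb, napi_budget);
	else
		xdp_return_frame_rx_napi(entry->xdpf);
	entry->skb = NULL;		/* clears xdpf too: same union */
}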
diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c
index 00e2108f2ca4..6982aaa928b5 100644
--- a/drivers/net/ethernet/engleder/tsnep_main.c
+++ b/drivers/net/ethernet/engleder/tsnep_main.c
@@ -26,9 +26,11 @@
#include <linux/etherdevice.h>
#include <linux/phy.h>
#include <linux/iopoll.h>
+#include <linux/bpf.h>
+#include <linux/bpf_trace.h>
-#define TSNEP_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
-#define TSNEP_HEADROOM ALIGN(TSNEP_SKB_PAD, 4)
+#define TSNEP_RX_OFFSET (max(NET_SKB_PAD, XDP_PACKET_HEADROOM) + NET_IP_ALIGN)
+#define TSNEP_HEADROOM ALIGN(TSNEP_RX_OFFSET, 4)
#define TSNEP_MAX_RX_BUF_SIZE (PAGE_SIZE - TSNEP_HEADROOM - \
SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
@@ -43,6 +45,14 @@
#define TSNEP_COALESCE_USECS_MAX ((ECM_INT_DELAY_MASK >> ECM_INT_DELAY_SHIFT) * \
ECM_INT_DELAY_BASE_US + ECM_INT_DELAY_BASE_US - 1)
+#define TSNEP_TX_TYPE_SKB BIT(0)
+#define TSNEP_TX_TYPE_SKB_FRAG BIT(1)
+#define TSNEP_TX_TYPE_XDP_TX BIT(2)
+#define TSNEP_TX_TYPE_XDP_NDO BIT(3)
+
+#define TSNEP_XDP_TX BIT(0)
+#define TSNEP_XDP_REDIRECT BIT(1)
+
static void tsnep_enable_irq(struct tsnep_adapter *adapter, u32 mask)
{
iowrite32(mask, adapter->addr + ECM_INT_ENABLE);
@@ -120,9 +130,6 @@ static int tsnep_mdiobus_read(struct mii_bus *bus, int addr, int regnum)
u32 md;
int retval;
- if (regnum & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
md = ECM_MD_READ;
if (!adapter->suppress_preamble)
md |= ECM_MD_PREAMBLE;
@@ -144,9 +151,6 @@ static int tsnep_mdiobus_write(struct mii_bus *bus, int addr, int regnum,
u32 md;
int retval;
- if (regnum & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
md = ECM_MD_WRITE;
if (!adapter->suppress_preamble)
md |= ECM_MD_PREAMBLE;
@@ -306,10 +310,12 @@ static void tsnep_tx_activate(struct tsnep_tx *tx, int index, int length,
struct tsnep_tx_entry *entry = &tx->entry[index];
entry->properties = 0;
+ /* xdpf is union with skb */
if (entry->skb) {
entry->properties = length & TSNEP_DESC_LENGTH_MASK;
entry->properties |= TSNEP_DESC_INTERRUPT_FLAG;
- if (skb_shinfo(entry->skb)->tx_flags & SKBTX_IN_PROGRESS)
+ if ((entry->type & TSNEP_TX_TYPE_SKB) &&
+ (skb_shinfo(entry->skb)->tx_flags & SKBTX_IN_PROGRESS))
entry->properties |= TSNEP_DESC_EXTENDED_WRITEBACK_FLAG;
/* toggle user flag to prevent false acknowledge
@@ -378,15 +384,19 @@ static int tsnep_tx_map(struct sk_buff *skb, struct tsnep_tx *tx, int count)
for (i = 0; i < count; i++) {
entry = &tx->entry[(tx->write + i) % TSNEP_RING_SIZE];
- if (i == 0) {
+ if (!i) {
len = skb_headlen(skb);
dma = dma_map_single(dmadev, skb->data, len,
DMA_TO_DEVICE);
+
+ entry->type = TSNEP_TX_TYPE_SKB;
} else {
len = skb_frag_size(&skb_shinfo(skb)->frags[i - 1]);
dma = skb_frag_dma_map(dmadev,
&skb_shinfo(skb)->frags[i - 1],
0, len, DMA_TO_DEVICE);
+
+ entry->type = TSNEP_TX_TYPE_SKB_FRAG;
}
if (dma_mapping_error(dmadev, dma))
return -ENOMEM;
@@ -413,12 +423,13 @@ static int tsnep_tx_unmap(struct tsnep_tx *tx, int index, int count)
entry = &tx->entry[(index + i) % TSNEP_RING_SIZE];
if (entry->len) {
- if (i == 0)
+ if (entry->type & TSNEP_TX_TYPE_SKB)
dma_unmap_single(dmadev,
dma_unmap_addr(entry, dma),
dma_unmap_len(entry, len),
DMA_TO_DEVICE);
- else
+ else if (entry->type &
+ (TSNEP_TX_TYPE_SKB_FRAG | TSNEP_TX_TYPE_XDP_NDO))
dma_unmap_page(dmadev,
dma_unmap_addr(entry, dma),
dma_unmap_len(entry, len),
@@ -434,7 +445,6 @@ static int tsnep_tx_unmap(struct tsnep_tx *tx, int index, int count)
static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
struct tsnep_tx *tx)
{
- unsigned long flags;
int count = 1;
struct tsnep_tx_entry *entry;
int length;
@@ -444,16 +454,12 @@ static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
if (skb_shinfo(skb)->nr_frags > 0)
count += skb_shinfo(skb)->nr_frags;
- spin_lock_irqsave(&tx->lock, flags);
-
if (tsnep_tx_desc_available(tx) < count) {
/* ring full, shall not happen because queue is stopped if full
* below
*/
netif_stop_subqueue(tx->adapter->netdev, tx->queue_index);
- spin_unlock_irqrestore(&tx->lock, flags);
-
return NETDEV_TX_BUSY;
}
@@ -468,10 +474,6 @@ static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
tx->dropped++;
- spin_unlock_irqrestore(&tx->lock, flags);
-
- netdev_err(tx->adapter->netdev, "TX DMA map failed\n");
-
return NETDEV_TX_OK;
}
length = retval;
@@ -481,7 +483,7 @@ static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
for (i = 0; i < count; i++)
tsnep_tx_activate(tx, (tx->write + i) % TSNEP_RING_SIZE, length,
- i == (count - 1));
+ i == count - 1);
tx->write = (tx->write + count) % TSNEP_RING_SIZE;
skb_tx_timestamp(skb);
@@ -496,23 +498,146 @@ static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
netif_stop_subqueue(tx->adapter->netdev, tx->queue_index);
}
- spin_unlock_irqrestore(&tx->lock, flags);
-
return NETDEV_TX_OK;
}
+static int tsnep_xdp_tx_map(struct xdp_frame *xdpf, struct tsnep_tx *tx,
+ struct skb_shared_info *shinfo, int count, u32 type)
+{
+ struct device *dmadev = tx->adapter->dmadev;
+ struct tsnep_tx_entry *entry;
+ struct page *page;
+ skb_frag_t *frag;
+ unsigned int len;
+ int map_len = 0;
+ dma_addr_t dma;
+ void *data;
+ int i;
+
+ frag = NULL;
+ len = xdpf->len;
+ for (i = 0; i < count; i++) {
+ entry = &tx->entry[(tx->write + i) % TSNEP_RING_SIZE];
+ if (type & TSNEP_TX_TYPE_XDP_NDO) {
+ data = unlikely(frag) ? skb_frag_address(frag) :
+ xdpf->data;
+ dma = dma_map_single(dmadev, data, len, DMA_TO_DEVICE);
+ if (dma_mapping_error(dmadev, dma))
+ return -ENOMEM;
+
+ entry->type = TSNEP_TX_TYPE_XDP_NDO;
+ } else {
+ page = unlikely(frag) ? skb_frag_page(frag) :
+ virt_to_page(xdpf->data);
+ dma = page_pool_get_dma_addr(page);
+ if (unlikely(frag))
+ dma += skb_frag_off(frag);
+ else
+ dma += sizeof(*xdpf) + xdpf->headroom;
+ dma_sync_single_for_device(dmadev, dma, len,
+ DMA_BIDIRECTIONAL);
+
+ entry->type = TSNEP_TX_TYPE_XDP_TX;
+ }
+
+ entry->len = len;
+ dma_unmap_addr_set(entry, dma, dma);
+
+ entry->desc->tx = __cpu_to_le64(dma);
+
+ map_len += len;
+
+ if (i + 1 < count) {
+ frag = &shinfo->frags[i];
+ len = skb_frag_size(frag);
+ }
+ }
+
+ return map_len;
+}
+
+/* This function requires __netif_tx_lock is held by the caller. */
+static bool tsnep_xdp_xmit_frame_ring(struct xdp_frame *xdpf,
+ struct tsnep_tx *tx, u32 type)
+{
+ struct skb_shared_info *shinfo = xdp_get_shared_info_from_frame(xdpf);
+ struct tsnep_tx_entry *entry;
+ int count, length, retval, i;
+
+ count = 1;
+ if (unlikely(xdp_frame_has_frags(xdpf)))
+ count += shinfo->nr_frags;
+
+ /* ensure that the TX ring is not filled up by XDP; MAX_SKB_FRAGS + 1
+ * descriptors are always left available for the normal TX path, which
+ * stops the queue itself if necessary
+ */
+ if (tsnep_tx_desc_available(tx) < (MAX_SKB_FRAGS + 1 + count))
+ return false;
+
+ entry = &tx->entry[tx->write];
+ entry->xdpf = xdpf;
+
+ retval = tsnep_xdp_tx_map(xdpf, tx, shinfo, count, type);
+ if (retval < 0) {
+ tsnep_tx_unmap(tx, tx->write, count);
+ entry->xdpf = NULL;
+
+ tx->dropped++;
+
+ return false;
+ }
+ length = retval;
+
+ for (i = 0; i < count; i++)
+ tsnep_tx_activate(tx, (tx->write + i) % TSNEP_RING_SIZE, length,
+ i == count - 1);
+ tx->write = (tx->write + count) % TSNEP_RING_SIZE;
+
+ /* descriptor properties shall be valid before hardware is notified */
+ dma_wmb();
+
+ return true;
+}
+
+static void tsnep_xdp_xmit_flush(struct tsnep_tx *tx)
+{
+ iowrite32(TSNEP_CONTROL_TX_ENABLE, tx->addr + TSNEP_CONTROL);
+}
+
+static bool tsnep_xdp_xmit_back(struct tsnep_adapter *adapter,
+ struct xdp_buff *xdp,
+ struct netdev_queue *tx_nq, struct tsnep_tx *tx)
+{
+ struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);
+ bool xmit;
+
+ if (unlikely(!xdpf))
+ return false;
+
+ __netif_tx_lock(tx_nq, smp_processor_id());
+
+ xmit = tsnep_xdp_xmit_frame_ring(xdpf, tx, TSNEP_TX_TYPE_XDP_TX);
+
+ /* Avoid transmit queue timeout since we share it with the slow path */
+ if (xmit)
+ txq_trans_cond_update(tx_nq);
+
+ __netif_tx_unlock(tx_nq);
+
+ return xmit;
+}
+
static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget)
{
struct tsnep_tx_entry *entry;
struct netdev_queue *nq;
- unsigned long flags;
int budget = 128;
int length;
int count;
nq = netdev_get_tx_queue(tx->adapter->netdev, tx->queue_index);
-
- spin_lock_irqsave(&tx->lock, flags);
+ __netif_tx_lock(nq, smp_processor_id());
do {
if (tx->read == tx->write)
@@ -530,12 +655,17 @@ static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget)
dma_rmb();
count = 1;
- if (skb_shinfo(entry->skb)->nr_frags > 0)
+ if ((entry->type & TSNEP_TX_TYPE_SKB) &&
+ skb_shinfo(entry->skb)->nr_frags > 0)
count += skb_shinfo(entry->skb)->nr_frags;
+ else if (!(entry->type & TSNEP_TX_TYPE_SKB) &&
+ xdp_frame_has_frags(entry->xdpf))
+ count += xdp_get_shared_info_from_frame(entry->xdpf)->nr_frags;
length = tsnep_tx_unmap(tx, tx->read, count);
- if ((skb_shinfo(entry->skb)->tx_flags & SKBTX_IN_PROGRESS) &&
+ if ((entry->type & TSNEP_TX_TYPE_SKB) &&
+ (skb_shinfo(entry->skb)->tx_flags & SKBTX_IN_PROGRESS) &&
(__le32_to_cpu(entry->desc_wb->properties) &
TSNEP_DESC_EXTENDED_WRITEBACK_FLAG)) {
struct skb_shared_hwtstamps hwtstamps;
@@ -555,7 +685,11 @@ static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget)
skb_tstamp_tx(entry->skb, &hwtstamps);
}
- napi_consume_skb(entry->skb, budget);
+ if (entry->type & TSNEP_TX_TYPE_SKB)
+ napi_consume_skb(entry->skb, napi_budget);
+ else
+ xdp_return_frame_rx_napi(entry->xdpf);
+ /* xdpf is union with skb */
entry->skb = NULL;
tx->read = (tx->read + count) % TSNEP_RING_SIZE;
@@ -571,18 +705,19 @@ static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget)
netif_tx_wake_queue(nq);
}
- spin_unlock_irqrestore(&tx->lock, flags);
+ __netif_tx_unlock(nq);
- return (budget != 0);
+ return budget != 0;
}
static bool tsnep_tx_pending(struct tsnep_tx *tx)
{
- unsigned long flags;
struct tsnep_tx_entry *entry;
+ struct netdev_queue *nq;
bool pending = false;
- spin_lock_irqsave(&tx->lock, flags);
+ nq = netdev_get_tx_queue(tx->adapter->netdev, tx->queue_index);
+ __netif_tx_lock(nq, smp_processor_id());
if (tx->read != tx->write) {
entry = &tx->entry[tx->read];
@@ -592,7 +727,7 @@ static bool tsnep_tx_pending(struct tsnep_tx *tx)
pending = true;
}
- spin_unlock_irqrestore(&tx->lock, flags);
+ __netif_tx_unlock(nq);
return pending;
}
@@ -618,8 +753,6 @@ static int tsnep_tx_open(struct tsnep_adapter *adapter, void __iomem *addr,
tx->owner_counter = 1;
tx->increment_owner_counter = TSNEP_RING_SIZE - 1;
- spin_lock_init(&tx->lock);
-
return 0;
}
@@ -695,9 +828,9 @@ static int tsnep_rx_ring_init(struct tsnep_rx *rx)
pp_params.pool_size = TSNEP_RING_SIZE;
pp_params.nid = dev_to_node(dmadev);
pp_params.dev = dmadev;
- pp_params.dma_dir = DMA_FROM_DEVICE;
+ pp_params.dma_dir = DMA_BIDIRECTIONAL;
pp_params.max_len = TSNEP_MAX_RX_BUF_SIZE;
- pp_params.offset = TSNEP_SKB_PAD;
+ pp_params.offset = TSNEP_RX_OFFSET;
rx->page_pool = page_pool_create(&pp_params);
if (IS_ERR(rx->page_pool)) {
retval = PTR_ERR(rx->page_pool);
@@ -732,7 +865,7 @@ static void tsnep_rx_set_page(struct tsnep_rx *rx, struct tsnep_rx_entry *entry,
entry->page = page;
entry->len = TSNEP_MAX_RX_BUF_SIZE;
entry->dma = page_pool_get_dma_addr(entry->page);
- entry->desc->rx = __cpu_to_le64(entry->dma + TSNEP_SKB_PAD);
+ entry->desc->rx = __cpu_to_le64(entry->dma + TSNEP_RX_OFFSET);
}
static int tsnep_rx_alloc_buffer(struct tsnep_rx *rx, int index)
@@ -826,6 +959,62 @@ static int tsnep_rx_refill(struct tsnep_rx *rx, int count, bool reuse)
return i;
}
+static bool tsnep_xdp_run_prog(struct tsnep_rx *rx, struct bpf_prog *prog,
+ struct xdp_buff *xdp, int *status,
+ struct netdev_queue *tx_nq, struct tsnep_tx *tx)
+{
+ unsigned int length;
+ unsigned int sync;
+ u32 act;
+
+ length = xdp->data_end - xdp->data_hard_start - XDP_PACKET_HEADROOM;
+
+ act = bpf_prog_run_xdp(prog, xdp);
+
+ /* Due to xdp_adjust_tail: sync for_device must cover the max length the CPU touched */
+ sync = xdp->data_end - xdp->data_hard_start - XDP_PACKET_HEADROOM;
+ sync = max(sync, length);
+
+ switch (act) {
+ case XDP_PASS:
+ return false;
+ case XDP_TX:
+ if (!tsnep_xdp_xmit_back(rx->adapter, xdp, tx_nq, tx))
+ goto out_failure;
+ *status |= TSNEP_XDP_TX;
+ return true;
+ case XDP_REDIRECT:
+ if (xdp_do_redirect(rx->adapter->netdev, xdp, prog) < 0)
+ goto out_failure;
+ *status |= TSNEP_XDP_REDIRECT;
+ return true;
+ default:
+ bpf_warn_invalid_xdp_action(rx->adapter->netdev, prog, act);
+ fallthrough;
+ case XDP_ABORTED:
+out_failure:
+ trace_xdp_exception(rx->adapter->netdev, prog, act);
+ fallthrough;
+ case XDP_DROP:
+ page_pool_put_page(rx->page_pool, virt_to_head_page(xdp->data),
+ sync, true);
+ return true;
+ }
+}
+
+static void tsnep_finalize_xdp(struct tsnep_adapter *adapter, int status,
+ struct netdev_queue *tx_nq, struct tsnep_tx *tx)
+{
+ if (status & TSNEP_XDP_TX) {
+ __netif_tx_lock(tx_nq, smp_processor_id());
+ tsnep_xdp_xmit_flush(tx);
+ __netif_tx_unlock(tx_nq);
+ }
+
+ if (status & TSNEP_XDP_REDIRECT)
+ xdp_do_flush();
+}
+
static struct sk_buff *tsnep_build_skb(struct tsnep_rx *rx, struct page *page,
int length)
{
@@ -836,14 +1025,14 @@ static struct sk_buff *tsnep_build_skb(struct tsnep_rx *rx, struct page *page,
return NULL;
/* update pointers within the skb to store the data */
- skb_reserve(skb, TSNEP_SKB_PAD + TSNEP_RX_INLINE_METADATA_SIZE);
- __skb_put(skb, length - TSNEP_RX_INLINE_METADATA_SIZE - ETH_FCS_LEN);
+ skb_reserve(skb, TSNEP_RX_OFFSET + TSNEP_RX_INLINE_METADATA_SIZE);
+ __skb_put(skb, length - ETH_FCS_LEN);
if (rx->adapter->hwtstamp_config.rx_filter == HWTSTAMP_FILTER_ALL) {
struct skb_shared_hwtstamps *hwtstamps = skb_hwtstamps(skb);
struct tsnep_rx_inline *rx_inline =
(struct tsnep_rx_inline *)(page_address(page) +
- TSNEP_SKB_PAD);
+ TSNEP_RX_OFFSET);
skb_shinfo(skb)->tx_flags |=
SKBTX_HW_TSTAMP_NETDEV;
@@ -861,15 +1050,28 @@ static int tsnep_rx_poll(struct tsnep_rx *rx, struct napi_struct *napi,
int budget)
{
struct device *dmadev = rx->adapter->dmadev;
- int desc_available;
- int done = 0;
enum dma_data_direction dma_dir;
struct tsnep_rx_entry *entry;
+ struct netdev_queue *tx_nq;
+ struct bpf_prog *prog;
+ struct xdp_buff xdp;
struct sk_buff *skb;
+ struct tsnep_tx *tx;
+ int desc_available;
+ int xdp_status = 0;
+ int done = 0;
int length;
desc_available = tsnep_rx_desc_available(rx);
dma_dir = page_pool_get_dma_dir(rx->page_pool);
+ prog = READ_ONCE(rx->adapter->xdp_prog);
+ if (prog) {
+ tx_nq = netdev_get_tx_queue(rx->adapter->netdev,
+ rx->tx_queue_index);
+ tx = &rx->adapter->tx[rx->tx_queue_index];
+
+ xdp_init_buff(&xdp, PAGE_SIZE, &rx->xdp_rxq);
+ }
while (likely(done < budget) && (rx->read != rx->write)) {
entry = &rx->entry[rx->read];
@@ -903,21 +1105,47 @@ static int tsnep_rx_poll(struct tsnep_rx *rx, struct napi_struct *napi,
*/
dma_rmb();
- prefetch(page_address(entry->page) + TSNEP_SKB_PAD);
+ prefetch(page_address(entry->page) + TSNEP_RX_OFFSET);
length = __le32_to_cpu(entry->desc_wb->properties) &
TSNEP_DESC_LENGTH_MASK;
- dma_sync_single_range_for_cpu(dmadev, entry->dma, TSNEP_SKB_PAD,
- length, dma_dir);
+ dma_sync_single_range_for_cpu(dmadev, entry->dma,
+ TSNEP_RX_OFFSET, length, dma_dir);
+
+ /* RX metadata with timestamps sits in front of the actual data;
+ * subtract the metadata size to get the length of the actual data,
+ * and treat the metadata size as the offset of the actual data
+ * during RX processing
+ */
+ length -= TSNEP_RX_INLINE_METADATA_SIZE;
rx->read = (rx->read + 1) % TSNEP_RING_SIZE;
desc_available++;
+ if (prog) {
+ bool consume;
+
+ xdp_prepare_buff(&xdp, page_address(entry->page),
+ XDP_PACKET_HEADROOM + TSNEP_RX_INLINE_METADATA_SIZE,
+ length, false);
+
+ consume = tsnep_xdp_run_prog(rx, prog, &xdp,
+ &xdp_status, tx_nq, tx);
+ if (consume) {
+ rx->packets++;
+ rx->bytes += length;
+
+ entry->page = NULL;
+
+ continue;
+ }
+ }
+
skb = tsnep_build_skb(rx, entry->page, length);
if (skb) {
page_pool_release_page(rx->page_pool, entry->page);
rx->packets++;
- rx->bytes += length - TSNEP_RX_INLINE_METADATA_SIZE;
+ rx->bytes += length;
if (skb->pkt_type == PACKET_MULTICAST)
rx->multicast++;
@@ -930,6 +1158,9 @@ static int tsnep_rx_poll(struct tsnep_rx *rx, struct napi_struct *napi,
entry->page = NULL;
}
+ if (xdp_status)
+ tsnep_finalize_xdp(rx->adapter, xdp_status, tx_nq, tx);
+
if (desc_available)
tsnep_rx_refill(rx, desc_available, false);
@@ -1086,17 +1317,73 @@ static void tsnep_free_irq(struct tsnep_queue *queue, bool first)
memset(queue->name, 0, sizeof(queue->name));
}
+static void tsnep_queue_close(struct tsnep_queue *queue, bool first)
+{
+ struct tsnep_rx *rx = queue->rx;
+
+ tsnep_free_irq(queue, first);
+
+ if (rx && xdp_rxq_info_is_reg(&rx->xdp_rxq))
+ xdp_rxq_info_unreg(&rx->xdp_rxq);
+
+ netif_napi_del(&queue->napi);
+}
+
+static int tsnep_queue_open(struct tsnep_adapter *adapter,
+ struct tsnep_queue *queue, bool first)
+{
+ struct tsnep_rx *rx = queue->rx;
+ struct tsnep_tx *tx = queue->tx;
+ int retval;
+
+ queue->adapter = adapter;
+
+ netif_napi_add(adapter->netdev, &queue->napi, tsnep_poll);
+
+ if (rx) {
+ /* choose TX queue for XDP_TX */
+ if (tx)
+ rx->tx_queue_index = tx->queue_index;
+ else if (rx->queue_index < adapter->num_tx_queues)
+ rx->tx_queue_index = rx->queue_index;
+ else
+ rx->tx_queue_index = 0;
+
+ retval = xdp_rxq_info_reg(&rx->xdp_rxq, adapter->netdev,
+ rx->queue_index, queue->napi.napi_id);
+ if (retval)
+ goto failed;
+ retval = xdp_rxq_info_reg_mem_model(&rx->xdp_rxq,
+ MEM_TYPE_PAGE_POOL,
+ rx->page_pool);
+ if (retval)
+ goto failed;
+ }
+
+ retval = tsnep_request_irq(queue, first);
+ if (retval) {
+ netif_err(adapter, drv, adapter->netdev,
+ "can't get assigned irq %d.\n", queue->irq);
+ goto failed;
+ }
+
+ return 0;
+
+failed:
+ tsnep_queue_close(queue, first);
+
+ return retval;
+}
+
static int tsnep_netdev_open(struct net_device *netdev)
{
struct tsnep_adapter *adapter = netdev_priv(netdev);
- int i;
- void __iomem *addr;
int tx_queue_index = 0;
int rx_queue_index = 0;
- int retval;
+ void __iomem *addr;
+ int i, retval;
for (i = 0; i < adapter->num_queues; i++) {
- adapter->queue[i].adapter = adapter;
if (adapter->queue[i].tx) {
addr = adapter->addr + TSNEP_QUEUE(tx_queue_index);
retval = tsnep_tx_open(adapter, addr, tx_queue_index,
@@ -1107,21 +1394,16 @@ static int tsnep_netdev_open(struct net_device *netdev)
}
if (adapter->queue[i].rx) {
addr = adapter->addr + TSNEP_QUEUE(rx_queue_index);
- retval = tsnep_rx_open(adapter, addr,
- rx_queue_index,
+ retval = tsnep_rx_open(adapter, addr, rx_queue_index,
adapter->queue[i].rx);
if (retval)
goto failed;
rx_queue_index++;
}
- retval = tsnep_request_irq(&adapter->queue[i], i == 0);
- if (retval) {
- netif_err(adapter, drv, adapter->netdev,
- "can't get assigned irq %d.\n",
- adapter->queue[i].irq);
+ retval = tsnep_queue_open(adapter, &adapter->queue[i], i == 0);
+ if (retval)
goto failed;
- }
}
retval = netif_set_real_num_tx_queues(adapter->netdev,
@@ -1139,8 +1421,6 @@ static int tsnep_netdev_open(struct net_device *netdev)
goto phy_failed;
for (i = 0; i < adapter->num_queues; i++) {
- netif_napi_add(adapter->netdev, &adapter->queue[i].napi,
- tsnep_poll);
napi_enable(&adapter->queue[i].napi);
tsnep_enable_irq(adapter, adapter->queue[i].irq_mask);
@@ -1150,10 +1430,9 @@ static int tsnep_netdev_open(struct net_device *netdev)
phy_failed:
tsnep_disable_irq(adapter, ECM_INT_LINK);
- tsnep_phy_close(adapter);
failed:
for (i = 0; i < adapter->num_queues; i++) {
- tsnep_free_irq(&adapter->queue[i], i == 0);
+ tsnep_queue_close(&adapter->queue[i], i == 0);
if (adapter->queue[i].rx)
tsnep_rx_close(adapter->queue[i].rx);
@@ -1175,9 +1454,8 @@ static int tsnep_netdev_close(struct net_device *netdev)
tsnep_disable_irq(adapter, adapter->queue[i].irq_mask);
napi_disable(&adapter->queue[i].napi);
- netif_napi_del(&adapter->queue[i].napi);
- tsnep_free_irq(&adapter->queue[i], i == 0);
+ tsnep_queue_close(&adapter->queue[i], i == 0);
if (adapter->queue[i].rx)
tsnep_rx_close(adapter->queue[i].rx);
@@ -1330,6 +1608,67 @@ static ktime_t tsnep_netdev_get_tstamp(struct net_device *netdev,
return ns_to_ktime(timestamp);
}
+static int tsnep_netdev_bpf(struct net_device *dev, struct netdev_bpf *bpf)
+{
+ struct tsnep_adapter *adapter = netdev_priv(dev);
+
+ switch (bpf->command) {
+ case XDP_SETUP_PROG:
+ return tsnep_xdp_setup_prog(adapter, bpf->prog, bpf->extack);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static struct tsnep_tx *tsnep_xdp_get_tx(struct tsnep_adapter *adapter, u32 cpu)
+{
+ if (cpu >= TSNEP_MAX_QUEUES)
+ cpu &= TSNEP_MAX_QUEUES - 1;
+
+ while (cpu >= adapter->num_tx_queues)
+ cpu -= adapter->num_tx_queues;
+
+ return &adapter->tx[cpu];
+}
+
+static int tsnep_netdev_xdp_xmit(struct net_device *dev, int n,
+ struct xdp_frame **xdp, u32 flags)
+{
+ struct tsnep_adapter *adapter = netdev_priv(dev);
+ u32 cpu = smp_processor_id();
+ struct netdev_queue *nq;
+ struct tsnep_tx *tx;
+ int nxmit;
+ bool xmit;
+
+ if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+ return -EINVAL;
+
+ tx = tsnep_xdp_get_tx(adapter, cpu);
+ nq = netdev_get_tx_queue(adapter->netdev, tx->queue_index);
+
+ __netif_tx_lock(nq, cpu);
+
+ for (nxmit = 0; nxmit < n; nxmit++) {
+ xmit = tsnep_xdp_xmit_frame_ring(xdp[nxmit], tx,
+ TSNEP_TX_TYPE_XDP_NDO);
+ if (!xmit)
+ break;
+
+ /* avoid transmit queue timeout since we share it with the slow
+ * path
+ */
+ txq_trans_cond_update(nq);
+ }
+
+ if (flags & XDP_XMIT_FLUSH)
+ tsnep_xdp_xmit_flush(tx);
+
+ __netif_tx_unlock(nq);
+
+ return nxmit;
+}
+
static const struct net_device_ops tsnep_netdev_ops = {
.ndo_open = tsnep_netdev_open,
.ndo_stop = tsnep_netdev_close,
@@ -1341,6 +1680,8 @@ static const struct net_device_ops tsnep_netdev_ops = {
.ndo_set_features = tsnep_netdev_set_features,
.ndo_get_tstamp = tsnep_netdev_get_tstamp,
.ndo_setup_tc = tsnep_tc_setup,
+ .ndo_bpf = tsnep_netdev_bpf,
+ .ndo_xdp_xmit = tsnep_netdev_xdp_xmit,
};
static int tsnep_mac_init(struct tsnep_adapter *adapter)
@@ -1585,6 +1926,10 @@ static int tsnep_probe(struct platform_device *pdev)
netdev->features = NETIF_F_SG;
netdev->hw_features = netdev->features | NETIF_F_LOOPBACK;
+ netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT |
+ NETDEV_XDP_ACT_NDO_XMIT_SG;
+
/* carrier off reporting is important to ethtool even BEFORE open */
netif_carrier_off(netdev);
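The tsnep XDP datapath added above follows the usual verdict-dispatch shape: XDP_PASS falls through to the skb path, every other verdict consumes the buffer, and TX/REDIRECT results are batched and flushed once per NAPI poll. A condensed sketch, with MY_XDP_REDIRECT and my_run_xdp illustrative:

#include <linux/bits.h>
#include <linux/filter.h>
#include <linux/bpf_trace.h>
#include <net/xdp.h>

#define MY_XDP_REDIRECT	BIT(1)	/* illustrative per-poll status bit */

static bool my_run_xdp(struct net_device *dev, struct bpf_prog *prog,
		       struct xdp_buff *xdp, int *status)
{
	u32 act = bpf_prog_run_xdp(prog, xdp);

	switch (act) {
	case XDP_PASS:
		return false;		/* continue on the regular skb path */
	case XDP_REDIRECT:
		if (xdp_do_redirect(dev, xdp, prog) < 0)
			goto drop;
		*status |= MY_XDP_REDIRECT;	/* xdp_do_flush() once per poll */
		return true;
	default:
		bpf_warn_invalid_xdp_action(dev, prog, act);
		fallthrough;
	case XDP_ABORTED:
drop:
		trace_xdp_exception(dev, prog, act);
		fallthrough;
	case XDP_DROP:
		/* driver recycles the page back to its page pool here */
		return true;
	}
}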
diff --git a/drivers/net/ethernet/engleder/tsnep_tc.c b/drivers/net/ethernet/engleder/tsnep_tc.c
index c4c6e1357317..d083e6684f12 100644
--- a/drivers/net/ethernet/engleder/tsnep_tc.c
+++ b/drivers/net/ethernet/engleder/tsnep_tc.c
@@ -403,12 +403,33 @@ static int tsnep_taprio(struct tsnep_adapter *adapter,
return 0;
}
+static int tsnep_tc_query_caps(struct tsnep_adapter *adapter,
+ struct tc_query_caps_base *base)
+{
+ switch (base->type) {
+ case TC_SETUP_QDISC_TAPRIO: {
+ struct tc_taprio_caps *caps = base->caps;
+
+ if (!adapter->gate_control)
+ return -EOPNOTSUPP;
+
+ caps->gate_mask_per_txq = true;
+
+ return 0;
+ }
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
int tsnep_tc_setup(struct net_device *netdev, enum tc_setup_type type,
void *type_data)
{
struct tsnep_adapter *adapter = netdev_priv(netdev);
switch (type) {
+ case TC_QUERY_CAPS:
+ return tsnep_tc_query_caps(adapter, type_data);
case TC_SETUP_QDISC_TAPRIO:
return tsnep_taprio(adapter, type_data);
default:
diff --git a/drivers/net/ethernet/engleder/tsnep_xdp.c b/drivers/net/ethernet/engleder/tsnep_xdp.c
new file mode 100644
index 000000000000..4d14cb1fd772
--- /dev/null
+++ b/drivers/net/ethernet/engleder/tsnep_xdp.c
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2022 Gerhard Engleder <gerhard@engleder-embedded.com> */
+
+#include <linux/if_vlan.h>
+#include <net/xdp_sock_drv.h>
+
+#include "tsnep.h"
+
+int tsnep_xdp_setup_prog(struct tsnep_adapter *adapter, struct bpf_prog *prog,
+ struct netlink_ext_ack *extack)
+{
+ struct bpf_prog *old_prog;
+
+ old_prog = xchg(&adapter->xdp_prog, prog);
+ if (old_prog)
+ bpf_prog_put(old_prog);
+
+ return 0;
+}
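The attach path relies on the datapath sampling the program once per poll with READ_ONCE() (see tsnep_rx_poll above), so the pointer can be swapped without quiescing traffic. A sketch of the same pattern outside tsnep, with my_dev illustrative:

#include <linux/bpf.h>

struct my_dev {
	struct bpf_prog *xdp_prog;
};

static int my_xdp_attach(struct my_dev *dev, struct bpf_prog *prog)
{
	struct bpf_prog *old_prog;

	/* Publish atomically; readers pick up the new program at their
	 * next READ_ONCE() of dev->xdp_prog.
	 */
	old_prog = xchg(&dev->xdp_prog, prog);
	if (old_prog)
		bpf_prog_put(old_prog);	/* drop the reference we held */

	return 0;
}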
diff --git a/drivers/net/ethernet/faraday/ftmac100.c b/drivers/net/ethernet/faraday/ftmac100.c
index 6c8c78018ce6..139fe66f8bcd 100644
--- a/drivers/net/ethernet/faraday/ftmac100.c
+++ b/drivers/net/ethernet/faraday/ftmac100.c
@@ -182,6 +182,12 @@ static int ftmac100_start_hw(struct ftmac100 *priv)
if (netdev->mtu > ETH_DATA_LEN)
maccr |= FTMAC100_MACCR_RX_FTL;
+ /* Add other bits as needed */
+ if (netdev->flags & IFF_PROMISC)
+ maccr |= FTMAC100_MACCR_RCV_ALL;
+ if (netdev->flags & IFF_ALLMULTI)
+ maccr |= FTMAC100_MACCR_RX_MULTIPKT;
+
iowrite32(maccr, priv->base + FTMAC100_OFFSET_MACCR);
return 0;
}
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 027fff9f7db0..9318a2554056 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -244,6 +244,10 @@ static int dpaa_netdev_init(struct net_device *net_dev,
net_dev->features |= net_dev->hw_features;
net_dev->vlan_features = net_dev->features;
+ net_dev->xdp_features = NETDEV_XDP_ACT_BASIC |
+ NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT;
+
if (is_valid_ether_addr(mac_addr)) {
memcpy(net_dev->perm_addr, mac_addr, net_dev->addr_len);
eth_hw_addr_set(net_dev, mac_addr);
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
index 2e79d18fc3c7..a62cffaf6ff1 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
@@ -4596,6 +4596,12 @@ static int dpaa2_eth_netdev_init(struct net_device *net_dev)
NETIF_F_LLTX | NETIF_F_HW_TC | NETIF_F_TSO;
net_dev->gso_max_segs = DPAA2_ETH_ENQUEUE_MAX_FDS;
net_dev->hw_features = net_dev->features;
+ net_dev->xdp_features = NETDEV_XDP_ACT_BASIC |
+ NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT;
+ if (priv->dpni_attrs.wriop_version >= DPAA2_WRIOP_VERSION(3, 0, 0) &&
+ priv->dpni_attrs.num_queues <= 8)
+ net_dev->xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
if (priv->dpni_attrs.vlan_filter_entries)
net_dev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER;
diff --git a/drivers/net/ethernet/freescale/enetc/Kconfig b/drivers/net/ethernet/freescale/enetc/Kconfig
index cdc0ff89388a..9bc099cf3cb1 100644
--- a/drivers/net/ethernet/freescale/enetc/Kconfig
+++ b/drivers/net/ethernet/freescale/enetc/Kconfig
@@ -1,7 +1,16 @@
# SPDX-License-Identifier: GPL-2.0
+config FSL_ENETC_CORE
+ tristate
+ help
+ This module supports common functionality between the PF and VF
+ drivers for the NXP ENETC controller.
+
+ If compiled as module (M), the module name is fsl-enetc-core.
+
config FSL_ENETC
tristate "ENETC PF driver"
- depends on PCI && PCI_MSI
+ depends on PCI_MSI
+ select FSL_ENETC_CORE
select FSL_ENETC_IERB
select FSL_ENETC_MDIO
select PHYLINK
@@ -16,7 +25,8 @@ config FSL_ENETC
config FSL_ENETC_VF
tristate "ENETC VF driver"
- depends on PCI && PCI_MSI
+ depends on PCI_MSI
+ select FSL_ENETC_CORE
select FSL_ENETC_MDIO
select PHYLINK
select DIMLIB
diff --git a/drivers/net/ethernet/freescale/enetc/Makefile b/drivers/net/ethernet/freescale/enetc/Makefile
index e0e8dfd13793..b13cbbabb2ea 100644
--- a/drivers/net/ethernet/freescale/enetc/Makefile
+++ b/drivers/net/ethernet/freescale/enetc/Makefile
@@ -1,14 +1,15 @@
# SPDX-License-Identifier: GPL-2.0
-common-objs := enetc.o enetc_cbdr.o enetc_ethtool.o
+obj-$(CONFIG_FSL_ENETC_CORE) += fsl-enetc-core.o
+fsl-enetc-core-y := enetc.o enetc_cbdr.o enetc_ethtool.o
obj-$(CONFIG_FSL_ENETC) += fsl-enetc.o
-fsl-enetc-y := enetc_pf.o $(common-objs)
+fsl-enetc-y := enetc_pf.o
fsl-enetc-$(CONFIG_PCI_IOV) += enetc_msg.o
fsl-enetc-$(CONFIG_FSL_ENETC_QOS) += enetc_qos.o
obj-$(CONFIG_FSL_ENETC_VF) += fsl-enetc-vf.o
-fsl-enetc-vf-y := enetc_vf.o $(common-objs)
+fsl-enetc-vf-y := enetc_vf.o
obj-$(CONFIG_FSL_ENETC_IERB) += fsl-enetc-ierb.o
fsl-enetc-ierb-y := enetc_ierb.o
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index e96449eedfb5..2fc712b24d12 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -11,14 +11,26 @@
#include <net/pkt_sched.h>
#include <net/tso.h>
+u32 enetc_port_mac_rd(struct enetc_si *si, u32 reg)
+{
+ return enetc_port_rd(&si->hw, reg);
+}
+EXPORT_SYMBOL_GPL(enetc_port_mac_rd);
+
+void enetc_port_mac_wr(struct enetc_si *si, u32 reg, u32 val)
+{
+ enetc_port_wr(&si->hw, reg, val);
+ if (si->hw_features & ENETC_SI_F_QBU)
+ enetc_port_wr(&si->hw, reg + ENETC_PMAC_OFFSET, val);
+}
+EXPORT_SYMBOL_GPL(enetc_port_mac_wr);
+
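enetc_port_mac_wr() mirrors each write into the pMAC register block on frame-preemption capable ports. A worked example with the pause threshold register (offsets per the enetc_hw.h changes further down, where ENETC_PM0_PAUSE_THRESH is 0x8064 and ENETC_PMAC_OFFSET is 0x1000):

	enetc_port_mac_wr(si, ENETC_PM0_PAUSE_THRESH, val);
	/* expands to:
	 *	enetc_port_wr(hw, 0x8064, val);				eMAC
	 *	enetc_port_wr(hw, 0x8064 + ENETC_PMAC_OFFSET, val);	pMAC,
	 *	the latter only when si->hw_features has ENETC_SI_F_QBU
	 */
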
static int enetc_num_stack_tx_queues(struct enetc_ndev_priv *priv)
{
int num_tx_rings = priv->num_tx_rings;
- int i;
- for (i = 0; i < priv->num_rx_rings; i++)
- if (priv->rx_ring[i]->xdp.prog)
- return num_tx_rings - num_possible_cpus();
+ if (priv->xdp_prog)
+ return num_tx_rings - num_possible_cpus();
return num_tx_rings;
}
@@ -243,8 +255,8 @@ static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
if (udp)
val |= ENETC_PM0_SINGLE_STEP_CH;
- enetc_port_wr(hw, ENETC_PM0_SINGLE_STEP, val);
- enetc_port_wr(hw, ENETC_PM1_SINGLE_STEP, val);
+ enetc_port_mac_wr(priv->si, ENETC_PM0_SINGLE_STEP,
+ val);
} else if (do_twostep_tstamp) {
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
e_flags |= ENETC_TXBD_E_FLAGS_TWO_STEP_PTP;
@@ -651,6 +663,7 @@ netdev_tx_t enetc_xmit(struct sk_buff *skb, struct net_device *ndev)
return enetc_start_xmit(skb, ndev);
}
+EXPORT_SYMBOL_GPL(enetc_xmit);
static irqreturn_t enetc_msix(int irq, void *data)
{
@@ -1305,6 +1318,10 @@ static int enetc_xdp_frame_to_xdp_tx_swbd(struct enetc_bdr *tx_ring,
xdp_tx_swbd->xdp_frame = NULL;
n++;
+
+ if (!xdp_frame_has_frags(xdp_frame))
+ goto out;
+
xdp_tx_swbd = &xdp_tx_arr[n];
shinfo = xdp_get_shared_info_from_frame(xdp_frame);
@@ -1334,7 +1351,7 @@ static int enetc_xdp_frame_to_xdp_tx_swbd(struct enetc_bdr *tx_ring,
n++;
xdp_tx_swbd = &xdp_tx_arr[n];
}
-
+out:
xdp_tx_arr[n - 1].is_eof = true;
xdp_tx_arr[n - 1].xdp_frame = xdp_frame;
@@ -1384,22 +1401,19 @@ int enetc_xdp_xmit(struct net_device *ndev, int num_frames,
return xdp_tx_frm_cnt;
}
+EXPORT_SYMBOL_GPL(enetc_xdp_xmit);
static void enetc_map_rx_buff_to_xdp(struct enetc_bdr *rx_ring, int i,
struct xdp_buff *xdp_buff, u16 size)
{
struct enetc_rx_swbd *rx_swbd = enetc_get_rx_buff(rx_ring, i, size);
void *hard_start = page_address(rx_swbd->page) + rx_swbd->page_offset;
- struct skb_shared_info *shinfo;
/* To be used for XDP_TX */
rx_swbd->len = size;
xdp_prepare_buff(xdp_buff, hard_start - rx_ring->buffer_offset,
rx_ring->buffer_offset, size, false);
-
- shinfo = xdp_get_shared_info_from_buff(xdp_buff);
- shinfo->nr_frags = 0;
}
static void enetc_add_rx_buff_to_xdp(struct enetc_bdr *rx_ring, int i,
@@ -1407,11 +1421,23 @@ static void enetc_add_rx_buff_to_xdp(struct enetc_bdr *rx_ring, int i,
{
struct skb_shared_info *shinfo = xdp_get_shared_info_from_buff(xdp_buff);
struct enetc_rx_swbd *rx_swbd = enetc_get_rx_buff(rx_ring, i, size);
- skb_frag_t *frag = &shinfo->frags[shinfo->nr_frags];
+ skb_frag_t *frag;
/* To be used for XDP_TX */
rx_swbd->len = size;
+ if (!xdp_buff_has_frags(xdp_buff)) {
+ xdp_buff_set_frags_flag(xdp_buff);
+ shinfo->xdp_frags_size = size;
+ shinfo->nr_frags = 0;
+ } else {
+ shinfo->xdp_frags_size += size;
+ }
+
+ if (page_is_pfmemalloc(rx_swbd->page))
+ xdp_buff_set_frag_pfmemalloc(xdp_buff);
+
+ frag = &shinfo->frags[shinfo->nr_frags];
skb_frag_off_set(frag, rx_swbd->page_offset);
skb_frag_size_set(frag, size);
__skb_frag_set_page(frag, rx_swbd->page);
@@ -1584,20 +1610,6 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
}
break;
case XDP_REDIRECT:
- /* xdp_return_frame does not support S/G in the sense
- * that it leaks the fragments (__xdp_return should not
- * call page_frag_free only for the initial buffer).
- * Until XDP_REDIRECT gains support for S/G let's keep
- * the code structure in place, but dead. We drop the
- * S/G frames ourselves to avoid memory leaks which
- * would otherwise leave the kernel OOM.
- */
- if (unlikely(cleaned_cnt - orig_cleaned_cnt != 1)) {
- enetc_xdp_drop(rx_ring, orig_i, i);
- rx_ring->stats.xdp_redirect_sg++;
- break;
- }
-
err = xdp_do_redirect(rx_ring->ndev, &xdp_buff, prog);
if (unlikely(err)) {
enetc_xdp_drop(rx_ring, orig_i, i);
@@ -1713,204 +1725,263 @@ void enetc_get_si_caps(struct enetc_si *si)
if (val & ENETC_SIPCAPR0_QBV)
si->hw_features |= ENETC_SI_F_QBV;
+ if (val & ENETC_SIPCAPR0_QBU)
+ si->hw_features |= ENETC_SI_F_QBU;
+
if (val & ENETC_SIPCAPR0_PSFP)
si->hw_features |= ENETC_SI_F_PSFP;
}
+EXPORT_SYMBOL_GPL(enetc_get_si_caps);
-static int enetc_dma_alloc_bdr(struct enetc_bdr *r, size_t bd_size)
+static int enetc_dma_alloc_bdr(struct enetc_bdr_resource *res)
{
- r->bd_base = dma_alloc_coherent(r->dev, r->bd_count * bd_size,
- &r->bd_dma_base, GFP_KERNEL);
- if (!r->bd_base)
+ size_t bd_base_size = res->bd_count * res->bd_size;
+
+ res->bd_base = dma_alloc_coherent(res->dev, bd_base_size,
+ &res->bd_dma_base, GFP_KERNEL);
+ if (!res->bd_base)
return -ENOMEM;
/* h/w requires 128B alignment */
- if (!IS_ALIGNED(r->bd_dma_base, 128)) {
- dma_free_coherent(r->dev, r->bd_count * bd_size, r->bd_base,
- r->bd_dma_base);
+ if (!IS_ALIGNED(res->bd_dma_base, 128)) {
+ dma_free_coherent(res->dev, bd_base_size, res->bd_base,
+ res->bd_dma_base);
return -EINVAL;
}
return 0;
}
-static int enetc_alloc_txbdr(struct enetc_bdr *txr)
+static void enetc_dma_free_bdr(const struct enetc_bdr_resource *res)
+{
+ size_t bd_base_size = res->bd_count * res->bd_size;
+
+ dma_free_coherent(res->dev, bd_base_size, res->bd_base,
+ res->bd_dma_base);
+}
+
+static int enetc_alloc_tx_resource(struct enetc_bdr_resource *res,
+ struct device *dev, size_t bd_count)
{
int err;
- txr->tx_swbd = vzalloc(txr->bd_count * sizeof(struct enetc_tx_swbd));
- if (!txr->tx_swbd)
+ res->dev = dev;
+ res->bd_count = bd_count;
+ res->bd_size = sizeof(union enetc_tx_bd);
+
+ res->tx_swbd = vzalloc(bd_count * sizeof(*res->tx_swbd));
+ if (!res->tx_swbd)
return -ENOMEM;
- err = enetc_dma_alloc_bdr(txr, sizeof(union enetc_tx_bd));
+ err = enetc_dma_alloc_bdr(res);
if (err)
goto err_alloc_bdr;
- txr->tso_headers = dma_alloc_coherent(txr->dev,
- txr->bd_count * TSO_HEADER_SIZE,
- &txr->tso_headers_dma,
+ res->tso_headers = dma_alloc_coherent(dev, bd_count * TSO_HEADER_SIZE,
+ &res->tso_headers_dma,
GFP_KERNEL);
- if (!txr->tso_headers) {
+ if (!res->tso_headers) {
err = -ENOMEM;
goto err_alloc_tso;
}
- txr->next_to_clean = 0;
- txr->next_to_use = 0;
-
return 0;
err_alloc_tso:
- dma_free_coherent(txr->dev, txr->bd_count * sizeof(union enetc_tx_bd),
- txr->bd_base, txr->bd_dma_base);
- txr->bd_base = NULL;
+ enetc_dma_free_bdr(res);
err_alloc_bdr:
- vfree(txr->tx_swbd);
- txr->tx_swbd = NULL;
+ vfree(res->tx_swbd);
+ res->tx_swbd = NULL;
return err;
}
-static void enetc_free_txbdr(struct enetc_bdr *txr)
+static void enetc_free_tx_resource(const struct enetc_bdr_resource *res)
{
- int size, i;
-
- for (i = 0; i < txr->bd_count; i++)
- enetc_free_tx_frame(txr, &txr->tx_swbd[i]);
-
- size = txr->bd_count * sizeof(union enetc_tx_bd);
-
- dma_free_coherent(txr->dev, txr->bd_count * TSO_HEADER_SIZE,
- txr->tso_headers, txr->tso_headers_dma);
- txr->tso_headers = NULL;
-
- dma_free_coherent(txr->dev, size, txr->bd_base, txr->bd_dma_base);
- txr->bd_base = NULL;
-
- vfree(txr->tx_swbd);
- txr->tx_swbd = NULL;
+ dma_free_coherent(res->dev, res->bd_count * TSO_HEADER_SIZE,
+ res->tso_headers, res->tso_headers_dma);
+ enetc_dma_free_bdr(res);
+ vfree(res->tx_swbd);
}
-static int enetc_alloc_tx_resources(struct enetc_ndev_priv *priv)
+static struct enetc_bdr_resource *
+enetc_alloc_tx_resources(struct enetc_ndev_priv *priv)
{
+ struct enetc_bdr_resource *tx_res;
int i, err;
+ tx_res = kcalloc(priv->num_tx_rings, sizeof(*tx_res), GFP_KERNEL);
+ if (!tx_res)
+ return ERR_PTR(-ENOMEM);
+
for (i = 0; i < priv->num_tx_rings; i++) {
- err = enetc_alloc_txbdr(priv->tx_ring[i]);
+ struct enetc_bdr *tx_ring = priv->tx_ring[i];
+ err = enetc_alloc_tx_resource(&tx_res[i], tx_ring->dev,
+ tx_ring->bd_count);
if (err)
goto fail;
}
- return 0;
+ return tx_res;
fail:
while (i-- > 0)
- enetc_free_txbdr(priv->tx_ring[i]);
+ enetc_free_tx_resource(&tx_res[i]);
- return err;
+ kfree(tx_res);
+
+ return ERR_PTR(err);
}
-static void enetc_free_tx_resources(struct enetc_ndev_priv *priv)
+static void enetc_free_tx_resources(const struct enetc_bdr_resource *tx_res,
+ size_t num_resources)
{
- int i;
+ size_t i;
- for (i = 0; i < priv->num_tx_rings; i++)
- enetc_free_txbdr(priv->tx_ring[i]);
+ for (i = 0; i < num_resources; i++)
+ enetc_free_tx_resource(&tx_res[i]);
+
+ kfree(tx_res);
}
-static int enetc_alloc_rxbdr(struct enetc_bdr *rxr, bool extended)
+static int enetc_alloc_rx_resource(struct enetc_bdr_resource *res,
+ struct device *dev, size_t bd_count,
+ bool extended)
{
- size_t size = sizeof(union enetc_rx_bd);
int err;
- rxr->rx_swbd = vzalloc(rxr->bd_count * sizeof(struct enetc_rx_swbd));
- if (!rxr->rx_swbd)
- return -ENOMEM;
-
+ res->dev = dev;
+ res->bd_count = bd_count;
+ res->bd_size = sizeof(union enetc_rx_bd);
if (extended)
- size *= 2;
+ res->bd_size *= 2;
+
+ res->rx_swbd = vzalloc(bd_count * sizeof(struct enetc_rx_swbd));
+ if (!res->rx_swbd)
+ return -ENOMEM;
- err = enetc_dma_alloc_bdr(rxr, size);
+ err = enetc_dma_alloc_bdr(res);
if (err) {
- vfree(rxr->rx_swbd);
+ vfree(res->rx_swbd);
return err;
}
- rxr->next_to_clean = 0;
- rxr->next_to_use = 0;
- rxr->next_to_alloc = 0;
- rxr->ext_en = extended;
-
return 0;
}
-static void enetc_free_rxbdr(struct enetc_bdr *rxr)
+static void enetc_free_rx_resource(const struct enetc_bdr_resource *res)
{
- int size;
-
- size = rxr->bd_count * sizeof(union enetc_rx_bd);
-
- dma_free_coherent(rxr->dev, size, rxr->bd_base, rxr->bd_dma_base);
- rxr->bd_base = NULL;
-
- vfree(rxr->rx_swbd);
- rxr->rx_swbd = NULL;
+ enetc_dma_free_bdr(res);
+ vfree(res->rx_swbd);
}
-static int enetc_alloc_rx_resources(struct enetc_ndev_priv *priv)
+static struct enetc_bdr_resource *
+enetc_alloc_rx_resources(struct enetc_ndev_priv *priv, bool extended)
{
- bool extended = !!(priv->active_offloads & ENETC_F_RX_TSTAMP);
+ struct enetc_bdr_resource *rx_res;
int i, err;
+ rx_res = kcalloc(priv->num_rx_rings, sizeof(*rx_res), GFP_KERNEL);
+ if (!rx_res)
+ return ERR_PTR(-ENOMEM);
+
for (i = 0; i < priv->num_rx_rings; i++) {
- err = enetc_alloc_rxbdr(priv->rx_ring[i], extended);
+ struct enetc_bdr *rx_ring = priv->rx_ring[i];
+ err = enetc_alloc_rx_resource(&rx_res[i], rx_ring->dev,
+ rx_ring->bd_count, extended);
if (err)
goto fail;
}
- return 0;
+ return rx_res;
fail:
while (i-- > 0)
- enetc_free_rxbdr(priv->rx_ring[i]);
+ enetc_free_rx_resource(&rx_res[i]);
- return err;
+ kfree(rx_res);
+
+ return ERR_PTR(err);
}
-static void enetc_free_rx_resources(struct enetc_ndev_priv *priv)
+static void enetc_free_rx_resources(const struct enetc_bdr_resource *rx_res,
+ size_t num_resources)
+{
+ size_t i;
+
+ for (i = 0; i < num_resources; i++)
+ enetc_free_rx_resource(&rx_res[i]);
+
+ kfree(rx_res);
+}
+
+static void enetc_assign_tx_resource(struct enetc_bdr *tx_ring,
+ const struct enetc_bdr_resource *res)
+{
+ tx_ring->bd_base = res ? res->bd_base : NULL;
+ tx_ring->bd_dma_base = res ? res->bd_dma_base : 0;
+ tx_ring->tx_swbd = res ? res->tx_swbd : NULL;
+ tx_ring->tso_headers = res ? res->tso_headers : NULL;
+ tx_ring->tso_headers_dma = res ? res->tso_headers_dma : 0;
+}
+
+static void enetc_assign_rx_resource(struct enetc_bdr *rx_ring,
+ const struct enetc_bdr_resource *res)
+{
+ rx_ring->bd_base = res ? res->bd_base : NULL;
+ rx_ring->bd_dma_base = res ? res->bd_dma_base : 0;
+ rx_ring->rx_swbd = res ? res->rx_swbd : NULL;
+}
+
+static void enetc_assign_tx_resources(struct enetc_ndev_priv *priv,
+ const struct enetc_bdr_resource *res)
{
int i;
- for (i = 0; i < priv->num_rx_rings; i++)
- enetc_free_rxbdr(priv->rx_ring[i]);
+ if (priv->tx_res)
+ enetc_free_tx_resources(priv->tx_res, priv->num_tx_rings);
+
+ for (i = 0; i < priv->num_tx_rings; i++) {
+ enetc_assign_tx_resource(priv->tx_ring[i],
+ res ? &res[i] : NULL);
+ }
+
+ priv->tx_res = res;
}
-static void enetc_free_tx_ring(struct enetc_bdr *tx_ring)
+static void enetc_assign_rx_resources(struct enetc_ndev_priv *priv,
+ const struct enetc_bdr_resource *res)
{
int i;
- if (!tx_ring->tx_swbd)
- return;
+ if (priv->rx_res)
+ enetc_free_rx_resources(priv->rx_res, priv->num_rx_rings);
+
+ for (i = 0; i < priv->num_rx_rings; i++) {
+ enetc_assign_rx_resource(priv->rx_ring[i],
+ res ? &res[i] : NULL);
+ }
+
+ priv->rx_res = res;
+}
+
+static void enetc_free_tx_ring(struct enetc_bdr *tx_ring)
+{
+ int i;
for (i = 0; i < tx_ring->bd_count; i++) {
struct enetc_tx_swbd *tx_swbd = &tx_ring->tx_swbd[i];
enetc_free_tx_frame(tx_ring, tx_swbd);
}
-
- tx_ring->next_to_clean = 0;
- tx_ring->next_to_use = 0;
}
static void enetc_free_rx_ring(struct enetc_bdr *rx_ring)
{
int i;
- if (!rx_ring->rx_swbd)
- return;
-
for (i = 0; i < rx_ring->bd_count; i++) {
struct enetc_rx_swbd *rx_swbd = &rx_ring->rx_swbd[i];
@@ -1922,10 +1993,6 @@ static void enetc_free_rx_ring(struct enetc_bdr *rx_ring)
__free_page(rx_swbd->page);
rx_swbd->page = NULL;
}
-
- rx_ring->next_to_clean = 0;
- rx_ring->next_to_use = 0;
- rx_ring->next_to_alloc = 0;
}
static void enetc_free_rxtx_rings(struct enetc_ndev_priv *priv)
@@ -1980,6 +2047,7 @@ int enetc_configure_si(struct enetc_ndev_priv *priv)
return 0;
}
+EXPORT_SYMBOL_GPL(enetc_configure_si);
void enetc_init_si_rings_params(struct enetc_ndev_priv *priv)
{
@@ -1999,6 +2067,7 @@ void enetc_init_si_rings_params(struct enetc_ndev_priv *priv)
priv->ic_mode = ENETC_IC_RX_ADAPTIVE | ENETC_IC_TX_MANUAL;
priv->tx_ictt = ENETC_TXIC_TIMETHR;
}
+EXPORT_SYMBOL_GPL(enetc_init_si_rings_params);
int enetc_alloc_si_resources(struct enetc_ndev_priv *priv)
{
@@ -2011,11 +2080,13 @@ int enetc_alloc_si_resources(struct enetc_ndev_priv *priv)
return 0;
}
+EXPORT_SYMBOL_GPL(enetc_alloc_si_resources);
void enetc_free_si_resources(struct enetc_ndev_priv *priv)
{
kfree(priv->cls_rules);
}
+EXPORT_SYMBOL_GPL(enetc_free_si_resources);
static void enetc_setup_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
{
@@ -2039,7 +2110,7 @@ static void enetc_setup_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
/* enable Tx ints by setting pkt thr to 1 */
enetc_txbdr_wr(hw, idx, ENETC_TBICR0, ENETC_TBICR0_ICEN | 0x1);
- tbmr = ENETC_TBMR_EN | ENETC_TBMR_SET_PRIO(tx_ring->prio);
+ tbmr = ENETC_TBMR_SET_PRIO(tx_ring->prio);
if (tx_ring->ndev->features & NETIF_F_HW_VLAN_CTAG_TX)
tbmr |= ENETC_TBMR_VIH;
@@ -2051,10 +2122,11 @@ static void enetc_setup_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
tx_ring->idr = hw->reg + ENETC_SITXIDR;
}
-static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
+static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring,
+ bool extended)
{
int idx = rx_ring->index;
- u32 rbmr;
+ u32 rbmr = 0;
enetc_rxbdr_wr(hw, idx, ENETC_RBBAR0,
lower_32_bits(rx_ring->bd_dma_base));
@@ -2081,8 +2153,7 @@ static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
/* enable Rx ints by setting pkt thr to 1 */
enetc_rxbdr_wr(hw, idx, ENETC_RBICR0, ENETC_RBICR0_ICEN | 0x1);
- rbmr = ENETC_RBMR_EN;
-
+ rx_ring->ext_en = extended;
if (rx_ring->ext_en)
rbmr |= ENETC_RBMR_BDS;
@@ -2092,15 +2163,18 @@ static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
rx_ring->rcir = hw->reg + ENETC_BDR(RX, idx, ENETC_RBCIR);
rx_ring->idr = hw->reg + ENETC_SIRXIDR;
+ rx_ring->next_to_clean = 0;
+ rx_ring->next_to_use = 0;
+ rx_ring->next_to_alloc = 0;
+
enetc_lock_mdio();
enetc_refill_rx_ring(rx_ring, enetc_bd_unused(rx_ring));
enetc_unlock_mdio();
- /* enable ring */
enetc_rxbdr_wr(hw, idx, ENETC_RBMR, rbmr);
}
-static void enetc_setup_bdrs(struct enetc_ndev_priv *priv)
+static void enetc_setup_bdrs(struct enetc_ndev_priv *priv, bool extended)
{
struct enetc_hw *hw = &priv->si->hw;
int i;
@@ -2109,10 +2183,42 @@ static void enetc_setup_bdrs(struct enetc_ndev_priv *priv)
enetc_setup_txbdr(hw, priv->tx_ring[i]);
for (i = 0; i < priv->num_rx_rings; i++)
- enetc_setup_rxbdr(hw, priv->rx_ring[i]);
+ enetc_setup_rxbdr(hw, priv->rx_ring[i], extended);
+}
+
+static void enetc_enable_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
+{
+ int idx = tx_ring->index;
+ u32 tbmr;
+
+ tbmr = enetc_txbdr_rd(hw, idx, ENETC_TBMR);
+ tbmr |= ENETC_TBMR_EN;
+ enetc_txbdr_wr(hw, idx, ENETC_TBMR, tbmr);
+}
+
+static void enetc_enable_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
+{
+ int idx = rx_ring->index;
+ u32 rbmr;
+
+ rbmr = enetc_rxbdr_rd(hw, idx, ENETC_RBMR);
+ rbmr |= ENETC_RBMR_EN;
+ enetc_rxbdr_wr(hw, idx, ENETC_RBMR, rbmr);
+}
+
+static void enetc_enable_bdrs(struct enetc_ndev_priv *priv)
+{
+ struct enetc_hw *hw = &priv->si->hw;
+ int i;
+
+ for (i = 0; i < priv->num_tx_rings; i++)
+ enetc_enable_txbdr(hw, priv->tx_ring[i]);
+
+ for (i = 0; i < priv->num_rx_rings; i++)
+ enetc_enable_rxbdr(hw, priv->rx_ring[i]);
}
-static void enetc_clear_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
+static void enetc_disable_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
{
int idx = rx_ring->index;
@@ -2120,13 +2226,30 @@ static void enetc_clear_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
enetc_rxbdr_wr(hw, idx, ENETC_RBMR, 0);
}
-static void enetc_clear_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
+static void enetc_disable_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
{
- int delay = 8, timeout = 100;
int idx = tx_ring->index;
/* disable EN bit on ring */
enetc_txbdr_wr(hw, idx, ENETC_TBMR, 0);
+}
+
+static void enetc_disable_bdrs(struct enetc_ndev_priv *priv)
+{
+ struct enetc_hw *hw = &priv->si->hw;
+ int i;
+
+ for (i = 0; i < priv->num_tx_rings; i++)
+ enetc_disable_txbdr(hw, priv->tx_ring[i]);
+
+ for (i = 0; i < priv->num_rx_rings; i++)
+ enetc_disable_rxbdr(hw, priv->rx_ring[i]);
+}
+
+static void enetc_wait_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
+{
+ int delay = 8, timeout = 100;
+ int idx = tx_ring->index;
/* wait for busy to clear */
while (delay < timeout &&
@@ -2140,18 +2263,13 @@ static void enetc_clear_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
idx);
}
-static void enetc_clear_bdrs(struct enetc_ndev_priv *priv)
+static void enetc_wait_bdrs(struct enetc_ndev_priv *priv)
{
struct enetc_hw *hw = &priv->si->hw;
int i;
for (i = 0; i < priv->num_tx_rings; i++)
- enetc_clear_txbdr(hw, priv->tx_ring[i]);
-
- for (i = 0; i < priv->num_rx_rings; i++)
- enetc_clear_rxbdr(hw, priv->rx_ring[i]);
-
- udelay(1);
+ enetc_wait_txbdr(hw, priv->tx_ring[i]);
}
static int enetc_setup_irqs(struct enetc_ndev_priv *priv)
@@ -2267,8 +2385,11 @@ static int enetc_phylink_connect(struct net_device *ndev)
struct ethtool_eee edata;
int err;
- if (!priv->phylink)
- return 0; /* phy-less mode */
+ if (!priv->phylink) {
+ /* phy-less mode */
+ netif_carrier_on(ndev);
+ return 0;
+ }
err = phylink_of_phy_connect(priv->phylink, priv->dev->of_node, 0);
if (err) {
@@ -2280,6 +2401,8 @@ static int enetc_phylink_connect(struct net_device *ndev)
memset(&edata, 0, sizeof(struct ethtool_eee));
phylink_ethtool_set_eee(priv->phylink, &edata);
+ phylink_start(priv->phylink);
+
return 0;
}
@@ -2321,20 +2444,21 @@ void enetc_start(struct net_device *ndev)
enable_irq(irq);
}
- if (priv->phylink)
- phylink_start(priv->phylink);
- else
- netif_carrier_on(ndev);
+ enetc_enable_bdrs(priv);
netif_tx_start_all_queues(ndev);
}
+EXPORT_SYMBOL_GPL(enetc_start);
int enetc_open(struct net_device *ndev)
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
- int num_stack_tx_queues;
+ struct enetc_bdr_resource *tx_res, *rx_res;
+ bool extended;
int err;
+ extended = !!(priv->active_offloads & ENETC_F_RX_TSTAMP);
+
err = enetc_setup_irqs(priv);
if (err)
return err;
@@ -2343,34 +2467,28 @@ int enetc_open(struct net_device *ndev)
if (err)
goto err_phy_connect;
- err = enetc_alloc_tx_resources(priv);
- if (err)
+ tx_res = enetc_alloc_tx_resources(priv);
+ if (IS_ERR(tx_res)) {
+ err = PTR_ERR(tx_res);
goto err_alloc_tx;
+ }
- err = enetc_alloc_rx_resources(priv);
- if (err)
+ rx_res = enetc_alloc_rx_resources(priv, extended);
+ if (IS_ERR(rx_res)) {
+ err = PTR_ERR(rx_res);
goto err_alloc_rx;
-
- num_stack_tx_queues = enetc_num_stack_tx_queues(priv);
-
- err = netif_set_real_num_tx_queues(ndev, num_stack_tx_queues);
- if (err)
- goto err_set_queues;
-
- err = netif_set_real_num_rx_queues(ndev, priv->num_rx_rings);
- if (err)
- goto err_set_queues;
+ }
enetc_tx_onestep_tstamp_init(priv);
- enetc_setup_bdrs(priv);
+ enetc_assign_tx_resources(priv, tx_res);
+ enetc_assign_rx_resources(priv, rx_res);
+ enetc_setup_bdrs(priv, extended);
enetc_start(ndev);
return 0;
-err_set_queues:
- enetc_free_rx_resources(priv);
err_alloc_rx:
- enetc_free_tx_resources(priv);
+ enetc_free_tx_resources(tx_res, priv->num_tx_rings);
err_alloc_tx:
if (priv->phylink)
phylink_disconnect_phy(priv->phylink);
@@ -2379,6 +2497,7 @@ err_phy_connect:
return err;
}
+EXPORT_SYMBOL_GPL(enetc_open);
void enetc_stop(struct net_device *ndev)
{
@@ -2387,6 +2506,8 @@ void enetc_stop(struct net_device *ndev)
netif_tx_stop_all_queues(ndev);
+ enetc_disable_bdrs(priv);
+
for (i = 0; i < priv->bdr_int_num; i++) {
int irq = pci_irq_vector(priv->si->pdev,
ENETC_BDR_INT_BASE_IDX + i);
@@ -2396,104 +2517,205 @@ void enetc_stop(struct net_device *ndev)
napi_disable(&priv->int_vector[i]->napi);
}
- if (priv->phylink)
- phylink_stop(priv->phylink);
- else
- netif_carrier_off(ndev);
+ enetc_wait_bdrs(priv);
enetc_clear_interrupts(priv);
}
+EXPORT_SYMBOL_GPL(enetc_stop);
int enetc_close(struct net_device *ndev)
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
enetc_stop(ndev);
- enetc_clear_bdrs(priv);
- if (priv->phylink)
+ if (priv->phylink) {
+ phylink_stop(priv->phylink);
phylink_disconnect_phy(priv->phylink);
+ } else {
+ netif_carrier_off(ndev);
+ }
+
enetc_free_rxtx_rings(priv);
- enetc_free_rx_resources(priv);
- enetc_free_tx_resources(priv);
+
+ /* Avoids dangling pointers and also frees old resources */
+ enetc_assign_rx_resources(priv, NULL);
+ enetc_assign_tx_resources(priv, NULL);
+
enetc_free_irqs(priv);
return 0;
}
+EXPORT_SYMBOL_GPL(enetc_close);
-int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data)
+static int enetc_reconfigure(struct enetc_ndev_priv *priv, bool extended,
+ int (*cb)(struct enetc_ndev_priv *priv, void *ctx),
+ void *ctx)
+{
+ struct enetc_bdr_resource *tx_res, *rx_res;
+ int err;
+
+ ASSERT_RTNL();
+
+ /* If the interface is down, run the callback right away,
+ * without reconfiguration.
+ */
+ if (!netif_running(priv->ndev)) {
+ if (cb) {
+ err = cb(priv, ctx);
+ if (err)
+ return err;
+ }
+
+ return 0;
+ }
+
+ tx_res = enetc_alloc_tx_resources(priv);
+ if (IS_ERR(tx_res)) {
+ err = PTR_ERR(tx_res);
+ goto out;
+ }
+
+ rx_res = enetc_alloc_rx_resources(priv, extended);
+ if (IS_ERR(rx_res)) {
+ err = PTR_ERR(rx_res);
+ goto out_free_tx_res;
+ }
+
+ enetc_stop(priv->ndev);
+ enetc_free_rxtx_rings(priv);
+
+ /* Interface is down, run optional callback now */
+ if (cb) {
+ err = cb(priv, ctx);
+ if (err)
+ goto out_restart;
+ }
+
+ enetc_assign_tx_resources(priv, tx_res);
+ enetc_assign_rx_resources(priv, rx_res);
+ enetc_setup_bdrs(priv, extended);
+ enetc_start(priv->ndev);
+
+ return 0;
+
+out_restart:
+ enetc_setup_bdrs(priv, extended);
+ enetc_start(priv->ndev);
+ enetc_free_rx_resources(rx_res, priv->num_rx_rings);
+out_free_tx_res:
+ enetc_free_tx_resources(tx_res, priv->num_tx_rings);
+out:
+ return err;
+}
+
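enetc_reconfigure() follows an allocate-new, stop, swap, start sequence, so a failed allocation never leaves the interface without usable rings. A hypothetical callback illustrating the contract (name and context payload below are illustrative, not part of this diff):

	/* cb runs under RTNL while the interface is quiesced: the old
	 * rings are drained and freed, the new resources not yet
	 * assigned. A nonzero return makes enetc_reconfigure() restart
	 * on the old resources instead.
	 */
	static int enetc_example_offloads_cb(struct enetc_ndev_priv *priv,
					     void *ctx)
	{
		unsigned int *new_offloads = ctx;	/* illustrative */

		priv->active_offloads = *new_offloads;

		return 0;
	}

	/* caller side:
	 *	err = enetc_reconfigure(priv, extended,
	 *				enetc_example_offloads_cb,
	 *				&new_offloads);
	 */
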
+static void enetc_debug_tx_ring_prios(struct enetc_ndev_priv *priv)
+{
+ int i;
+
+ for (i = 0; i < priv->num_tx_rings; i++)
+ netdev_dbg(priv->ndev, "TX ring %d prio %d\n", i,
+ priv->tx_ring[i]->prio);
+}
+
+static void enetc_reset_tc_mqprio(struct net_device *ndev)
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
- struct tc_mqprio_qopt *mqprio = type_data;
struct enetc_hw *hw = &priv->si->hw;
struct enetc_bdr *tx_ring;
int num_stack_tx_queues;
- u8 num_tc;
int i;
num_stack_tx_queues = enetc_num_stack_tx_queues(priv);
- mqprio->hw = TC_MQPRIO_HW_OFFLOAD_TCS;
- num_tc = mqprio->num_tc;
- if (!num_tc) {
- netdev_reset_tc(ndev);
- netif_set_real_num_tx_queues(ndev, num_stack_tx_queues);
+ netdev_reset_tc(ndev);
+ netif_set_real_num_tx_queues(ndev, num_stack_tx_queues);
+ priv->min_num_stack_tx_queues = num_possible_cpus();
- /* Reset all ring priorities to 0 */
- for (i = 0; i < priv->num_tx_rings; i++) {
- tx_ring = priv->tx_ring[i];
- tx_ring->prio = 0;
- enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio);
- }
+ /* Reset all ring priorities to 0 */
+ for (i = 0; i < priv->num_tx_rings; i++) {
+ tx_ring = priv->tx_ring[i];
+ tx_ring->prio = 0;
+ enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio);
+ }
+
+ enetc_debug_tx_ring_prios(priv);
+}
+
+int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data)
+{
+ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ struct tc_mqprio_qopt *mqprio = type_data;
+ struct enetc_hw *hw = &priv->si->hw;
+ int num_stack_tx_queues = 0;
+ u8 num_tc = mqprio->num_tc;
+ struct enetc_bdr *tx_ring;
+ int offset, count;
+ int err, tc, q;
+ if (!num_tc) {
+ enetc_reset_tc_mqprio(ndev);
return 0;
}
- /* Check if we have enough BD rings available to accommodate all TCs */
- if (num_tc > num_stack_tx_queues) {
- netdev_err(ndev, "Max %d traffic classes supported\n",
- priv->num_tx_rings);
- return -EINVAL;
- }
+ err = netdev_set_num_tc(ndev, num_tc);
+ if (err)
+ return err;
- /* For the moment, we use only one BD ring per TC.
- *
- * Configure num_tc BD rings with increasing priorities.
- */
- for (i = 0; i < num_tc; i++) {
- tx_ring = priv->tx_ring[i];
- tx_ring->prio = i;
- enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio);
+ for (tc = 0; tc < num_tc; tc++) {
+ offset = mqprio->offset[tc];
+ count = mqprio->count[tc];
+ num_stack_tx_queues += count;
+
+ err = netdev_set_tc_queue(ndev, tc, count, offset);
+ if (err)
+ goto err_reset_tc;
+
+ for (q = offset; q < offset + count; q++) {
+ tx_ring = priv->tx_ring[q];
+ /* The prio_tc_map is skb_tx_hash()'s way of selecting
+ * between TX queues based on skb->priority. As such,
+ * there's nothing to offload based on it.
+ * Make the mqprio "traffic class" be the priority of
+ * this ring group, and leave the Tx IPV to traffic
+ * class mapping as its default mapping value of 1:1.
+ */
+ tx_ring->prio = tc;
+ enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio);
+ }
}
- /* Reset the number of netdev queues based on the TC count */
- netif_set_real_num_tx_queues(ndev, num_tc);
+ err = netif_set_real_num_tx_queues(ndev, num_stack_tx_queues);
+ if (err)
+ goto err_reset_tc;
- netdev_set_num_tc(ndev, num_tc);
+ priv->min_num_stack_tx_queues = num_stack_tx_queues;
- /* Each TC is associated with one netdev queue */
- for (i = 0; i < num_tc; i++)
- netdev_set_tc_queue(ndev, i, 1, i);
+ enetc_debug_tx_ring_prios(priv);
return 0;
+
+err_reset_tc:
+ enetc_reset_tc_mqprio(ndev);
+ return err;
}
+EXPORT_SYMBOL_GPL(enetc_setup_tc_mqprio);
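
A worked example of the queue-to-TC bookkeeping above, assuming a typical mqprio request (the command line and initializer are illustrative):

	/* tc qdisc add dev eth0 root mqprio num_tc 2 \
	 *	map 0 0 0 0 1 1 1 1 queues 4@0 4@4 hw 1
	 * reaches this handler roughly as:
	 */
	struct tc_mqprio_qopt qopt = {
		.num_tc = 2,
		.count  = { 4, 4 },	/* queues per traffic class */
		.offset = { 0, 4 },	/* first queue of each class */
	};

Rings 0..3 then get prio 0 and rings 4..7 get prio 1 via enetc_set_bdr_prio(), real_num_tx_queues becomes 8, and min_num_stack_tx_queues is raised to match.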
-static int enetc_setup_xdp_prog(struct net_device *dev, struct bpf_prog *prog,
- struct netlink_ext_ack *extack)
+static int enetc_reconfigure_xdp_cb(struct enetc_ndev_priv *priv, void *ctx)
{
- struct enetc_ndev_priv *priv = netdev_priv(dev);
- struct bpf_prog *old_prog;
- bool is_up;
- int i;
-
- /* The buffer layout is changing, so we need to drain the old
- * RX buffers and seed new ones.
- */
- is_up = netif_running(dev);
- if (is_up)
- dev_close(dev);
+ struct bpf_prog *old_prog, *prog = ctx;
+ int num_stack_tx_queues;
+ int err, i;
old_prog = xchg(&priv->xdp_prog, prog);
+
+ num_stack_tx_queues = enetc_num_stack_tx_queues(priv);
+ err = netif_set_real_num_tx_queues(priv->ndev, num_stack_tx_queues);
+ if (err) {
+ xchg(&priv->xdp_prog, old_prog);
+ return err;
+ }
+
if (old_prog)
bpf_prog_put(old_prog);
@@ -2508,23 +2730,46 @@ static int enetc_setup_xdp_prog(struct net_device *dev, struct bpf_prog *prog,
rx_ring->buffer_offset = ENETC_RXB_PAD;
}
- if (is_up)
- return dev_open(dev, extack);
-
return 0;
}
-int enetc_setup_bpf(struct net_device *dev, struct netdev_bpf *xdp)
+static int enetc_setup_xdp_prog(struct net_device *ndev, struct bpf_prog *prog,
+ struct netlink_ext_ack *extack)
{
- switch (xdp->command) {
+ int num_xdp_tx_queues = prog ? num_possible_cpus() : 0;
+ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ bool extended;
+
+ if (priv->min_num_stack_tx_queues + num_xdp_tx_queues >
+ priv->num_tx_rings) {
+ NL_SET_ERR_MSG_FMT_MOD(extack,
+ "Reserving %d XDP TXQs does not leave a minimum of %d TXQs for network stack (total %d available)",
+ num_xdp_tx_queues,
+ priv->min_num_stack_tx_queues,
+ priv->num_tx_rings);
+ return -EBUSY;
+ }
+
+ extended = !!(priv->active_offloads & ENETC_F_RX_TSTAMP);
+
+ /* The buffer layout is changing, so we need to drain the old
+ * RX buffers and seed new ones.
+ */
+ return enetc_reconfigure(priv, extended, enetc_reconfigure_xdp_cb, prog);
+}
+
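To make the TXQ budget check above concrete: with 16 TX rings and 8 possible CPUs, attaching an XDP program reserves 8 rings, leaving 8 for the stack. If an earlier mqprio configuration raised min_num_stack_tx_queues to, say, 12, then 12 + 8 > 16 and the attach is rejected with -EBUSY rather than silently shrinking the stack's queues.
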
+int enetc_setup_bpf(struct net_device *ndev, struct netdev_bpf *bpf)
+{
+ switch (bpf->command) {
case XDP_SETUP_PROG:
- return enetc_setup_xdp_prog(dev, xdp->prog, xdp->extack);
+ return enetc_setup_xdp_prog(ndev, bpf->prog, bpf->extack);
default:
return -EINVAL;
}
return 0;
}
+EXPORT_SYMBOL_GPL(enetc_setup_bpf);
struct net_device_stats *enetc_get_stats(struct net_device *ndev)
{
@@ -2556,6 +2801,7 @@ struct net_device_stats *enetc_get_stats(struct net_device *ndev)
return stats;
}
+EXPORT_SYMBOL_GPL(enetc_get_stats);
static int enetc_set_rss(struct net_device *ndev, int en)
{
@@ -2608,48 +2854,53 @@ void enetc_set_features(struct net_device *ndev, netdev_features_t features)
enetc_enable_txvlan(ndev,
!!(features & NETIF_F_HW_VLAN_CTAG_TX));
}
+EXPORT_SYMBOL_GPL(enetc_set_features);
#ifdef CONFIG_FSL_ENETC_PTP_CLOCK
static int enetc_hwtstamp_set(struct net_device *ndev, struct ifreq *ifr)
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ int err, new_offloads = priv->active_offloads;
struct hwtstamp_config config;
- int ao;
if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
return -EFAULT;
switch (config.tx_type) {
case HWTSTAMP_TX_OFF:
- priv->active_offloads &= ~ENETC_F_TX_TSTAMP_MASK;
+ new_offloads &= ~ENETC_F_TX_TSTAMP_MASK;
break;
case HWTSTAMP_TX_ON:
- priv->active_offloads &= ~ENETC_F_TX_TSTAMP_MASK;
- priv->active_offloads |= ENETC_F_TX_TSTAMP;
+ new_offloads &= ~ENETC_F_TX_TSTAMP_MASK;
+ new_offloads |= ENETC_F_TX_TSTAMP;
break;
case HWTSTAMP_TX_ONESTEP_SYNC:
- priv->active_offloads &= ~ENETC_F_TX_TSTAMP_MASK;
- priv->active_offloads |= ENETC_F_TX_ONESTEP_SYNC_TSTAMP;
+ new_offloads &= ~ENETC_F_TX_TSTAMP_MASK;
+ new_offloads |= ENETC_F_TX_ONESTEP_SYNC_TSTAMP;
break;
default:
return -ERANGE;
}
- ao = priv->active_offloads;
switch (config.rx_filter) {
case HWTSTAMP_FILTER_NONE:
- priv->active_offloads &= ~ENETC_F_RX_TSTAMP;
+ new_offloads &= ~ENETC_F_RX_TSTAMP;
break;
default:
- priv->active_offloads |= ENETC_F_RX_TSTAMP;
+ new_offloads |= ENETC_F_RX_TSTAMP;
config.rx_filter = HWTSTAMP_FILTER_ALL;
}
- if (netif_running(ndev) && ao != priv->active_offloads) {
- enetc_close(ndev);
- enetc_open(ndev);
+ if ((new_offloads ^ priv->active_offloads) & ENETC_F_RX_TSTAMP) {
+ bool extended = !!(new_offloads & ENETC_F_RX_TSTAMP);
+
+ err = enetc_reconfigure(priv, extended, NULL, NULL);
+ if (err)
+ return err;
}
+ priv->active_offloads = new_offloads;
+
return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ?
-EFAULT : 0;
}
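
Note that only a flip of ENETC_F_RX_TSTAMP triggers a full enetc_reconfigure() here: extended RX timestamping doubles the RX BD size (see enetc_alloc_rx_resource()), so the rings must be rebuilt, whereas the TX timestamp types only change per-packet BD flags and can be switched live.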
@@ -2691,10 +2942,12 @@ int enetc_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd)
return phylink_mii_ioctl(priv->phylink, rq, cmd);
}
+EXPORT_SYMBOL_GPL(enetc_ioctl);
int enetc_alloc_msix(struct enetc_ndev_priv *priv)
{
struct pci_dev *pdev = priv->si->pdev;
+ int num_stack_tx_queues;
int first_xdp_tx_ring;
int i, n, err, nvec;
int v_tx_rings;
@@ -2771,6 +3024,17 @@ int enetc_alloc_msix(struct enetc_ndev_priv *priv)
}
}
+ num_stack_tx_queues = enetc_num_stack_tx_queues(priv);
+
+ err = netif_set_real_num_tx_queues(priv->ndev, num_stack_tx_queues);
+ if (err)
+ goto fail;
+
+ err = netif_set_real_num_rx_queues(priv->ndev, priv->num_rx_rings);
+ if (err)
+ goto fail;
+
+ priv->min_num_stack_tx_queues = num_possible_cpus();
first_xdp_tx_ring = priv->num_tx_rings - num_possible_cpus();
priv->xdp_tx_ring = &priv->tx_ring[first_xdp_tx_ring];
@@ -2792,6 +3056,7 @@ fail:
return err;
}
+EXPORT_SYMBOL_GPL(enetc_alloc_msix);
void enetc_free_msix(struct enetc_ndev_priv *priv)
{
@@ -2821,6 +3086,7 @@ void enetc_free_msix(struct enetc_ndev_priv *priv)
/* disable all MSIX for this device */
pci_free_irq_vectors(priv->si->pdev);
}
+EXPORT_SYMBOL_GPL(enetc_free_msix);
static void enetc_kfree_si(struct enetc_si *si)
{
@@ -2910,6 +3176,7 @@ err_dma:
return err;
}
+EXPORT_SYMBOL_GPL(enetc_pci_probe);
void enetc_pci_remove(struct pci_dev *pdev)
{
@@ -2921,3 +3188,6 @@ void enetc_pci_remove(struct pci_dev *pdev)
pci_release_mem_regions(pdev);
pci_disable_device(pdev);
}
+EXPORT_SYMBOL_GPL(enetc_pci_remove);
+
+MODULE_LICENSE("Dual BSD/GPL");
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.h b/drivers/net/ethernet/freescale/enetc/enetc.h
index c6d8cc15c270..8010f31cd10d 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.h
+++ b/drivers/net/ethernet/freescale/enetc/enetc.h
@@ -70,7 +70,6 @@ struct enetc_ring_stats {
unsigned int xdp_tx_drops;
unsigned int xdp_redirect;
unsigned int xdp_redirect_failures;
- unsigned int xdp_redirect_sg;
unsigned int recycles;
unsigned int recycle_failures;
unsigned int win_drop;
@@ -86,6 +85,23 @@ struct enetc_xdp_data {
#define ENETC_TX_RING_DEFAULT_SIZE 2048
#define ENETC_DEFAULT_TX_WORK (ENETC_TX_RING_DEFAULT_SIZE / 2)
+struct enetc_bdr_resource {
+ /* Input arguments saved for teardown */
+ struct device *dev; /* for DMA mapping */
+ size_t bd_count;
+ size_t bd_size;
+
+ /* Resource proper */
+ void *bd_base; /* points to Rx or Tx BD ring */
+ dma_addr_t bd_dma_base;
+ union {
+ struct enetc_tx_swbd *tx_swbd;
+ struct enetc_rx_swbd *rx_swbd;
+ };
+ char *tso_headers;
+ dma_addr_t tso_headers_dma;
+};
+
struct enetc_bdr {
struct device *dev; /* for DMA mapping */
struct net_device *ndev;
@@ -213,8 +229,9 @@ enum enetc_errata {
ENETC_ERR_UCMCSWP = BIT(1),
};
-#define ENETC_SI_F_QBV BIT(0)
-#define ENETC_SI_F_PSFP BIT(1)
+#define ENETC_SI_F_PSFP BIT(0)
+#define ENETC_SI_F_QBV BIT(1)
+#define ENETC_SI_F_QBU BIT(2)
/* PCI IEP device data */
struct enetc_si {
@@ -297,7 +314,6 @@ struct psfp_cap {
};
#define ENETC_F_TX_TSTAMP_MASK 0xff
-/* TODO: more hardware offloads */
enum enetc_active_offloads {
/* 8 bits reserved for TX timestamp types (hwtstamp_tx_types) */
ENETC_F_TX_TSTAMP = BIT(0),
@@ -306,6 +322,7 @@ enum enetc_active_offloads {
ENETC_F_RX_TSTAMP = BIT(8),
ENETC_F_QBV = BIT(9),
ENETC_F_QCI = BIT(10),
+ ENETC_F_QBU = BIT(11),
};
enum enetc_flags_bit {
@@ -345,11 +362,16 @@ struct enetc_ndev_priv {
struct enetc_bdr **xdp_tx_ring;
struct enetc_bdr *tx_ring[16];
struct enetc_bdr *rx_ring[16];
+ const struct enetc_bdr_resource *tx_res;
+ const struct enetc_bdr_resource *rx_res;
struct enetc_cls_rule *cls_rules;
struct psfp_cap psfp_cap;
+ /* Minimum number of TX queues required by the network stack */
+ unsigned int min_num_stack_tx_queues;
+
struct phylink *phylink;
int ic_mode;
u32 tx_ictt;
@@ -360,6 +382,11 @@ struct enetc_ndev_priv {
struct work_struct tx_onestep_tstamp;
struct sk_buff_head tx_skbs;
+
+ /* Serialize access to MAC Merge state between ethtool requests
+ * and link state updates
+ */
+ struct mutex mm_lock;
};
/* Messaging */
@@ -378,6 +405,8 @@ struct enetc_msg_cmd_set_primary_mac {
extern int enetc_phc_index;
/* SI common */
+u32 enetc_port_mac_rd(struct enetc_si *si, u32 reg);
+void enetc_port_mac_wr(struct enetc_si *si, u32 reg, u32 val);
int enetc_pci_probe(struct pci_dev *pdev, const char *name, int sizeof_priv);
void enetc_pci_remove(struct pci_dev *pdev);
int enetc_alloc_msix(struct enetc_ndev_priv *priv);
@@ -397,12 +426,13 @@ struct net_device_stats *enetc_get_stats(struct net_device *ndev);
void enetc_set_features(struct net_device *ndev, netdev_features_t features);
int enetc_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd);
int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data);
-int enetc_setup_bpf(struct net_device *dev, struct netdev_bpf *xdp);
+int enetc_setup_bpf(struct net_device *ndev, struct netdev_bpf *bpf);
int enetc_xdp_xmit(struct net_device *ndev, int num_frames,
struct xdp_frame **frames, u32 flags);
/* ethtool */
void enetc_set_ethtool_ops(struct net_device *ndev);
+void enetc_mm_link_state_update(struct enetc_ndev_priv *priv, bool link);
/* control buffer descriptor ring (CBDR) */
int enetc_setup_cbdr(struct device *dev, struct enetc_hw *hw, int bd_count,
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_cbdr.c b/drivers/net/ethernet/freescale/enetc/enetc_cbdr.c
index af68dc46a795..20bfdf7fb4b4 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_cbdr.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_cbdr.c
@@ -44,6 +44,7 @@ int enetc_setup_cbdr(struct device *dev, struct enetc_hw *hw, int bd_count,
return 0;
}
+EXPORT_SYMBOL_GPL(enetc_setup_cbdr);
void enetc_teardown_cbdr(struct enetc_cbdr *cbdr)
{
@@ -57,6 +58,7 @@ void enetc_teardown_cbdr(struct enetc_cbdr *cbdr)
cbdr->bd_base = NULL;
cbdr->dma_dev = NULL;
}
+EXPORT_SYMBOL_GPL(enetc_teardown_cbdr);
static void enetc_clean_cbdr(struct enetc_cbdr *ring)
{
@@ -127,6 +129,7 @@ int enetc_send_cmd(struct enetc_si *si, struct enetc_cbd *cbd)
return 0;
}
+EXPORT_SYMBOL_GPL(enetc_send_cmd);
int enetc_clear_mac_flt_entry(struct enetc_si *si, int index)
{
@@ -140,6 +143,7 @@ int enetc_clear_mac_flt_entry(struct enetc_si *si, int index)
return enetc_send_cmd(si, &cbd);
}
+EXPORT_SYMBOL_GPL(enetc_clear_mac_flt_entry);
int enetc_set_mac_flt_entry(struct enetc_si *si, int index,
char *mac_addr, int si_map)
@@ -165,6 +169,7 @@ int enetc_set_mac_flt_entry(struct enetc_si *si, int index,
return enetc_send_cmd(si, &cbd);
}
+EXPORT_SYMBOL_GPL(enetc_set_mac_flt_entry);
/* Set entry in RFS table */
int enetc_set_fs_entry(struct enetc_si *si, struct enetc_cmd_rfse *rfse,
@@ -197,6 +202,7 @@ int enetc_set_fs_entry(struct enetc_si *si, struct enetc_cmd_rfse *rfse,
return err;
}
+EXPORT_SYMBOL_GPL(enetc_set_fs_entry);
static int enetc_cmd_rss_table(struct enetc_si *si, u32 *table, int count,
bool read)
@@ -242,9 +248,11 @@ int enetc_get_rss_table(struct enetc_si *si, u32 *table, int count)
{
return enetc_cmd_rss_table(si, table, count, true);
}
+EXPORT_SYMBOL_GPL(enetc_get_rss_table);
/* Set RSS table */
int enetc_set_rss_table(struct enetc_si *si, const u32 *table, int count)
{
return enetc_cmd_rss_table(si, (u32 *)table, count, false);
}
+EXPORT_SYMBOL_GPL(enetc_set_rss_table);
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
index c8369e3752b0..bca68edfbe9c 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
/* Copyright 2017-2019 NXP */
+#include <linux/ethtool_netlink.h>
#include <linux/net_tstamp.h>
#include <linux/module.h>
#include "enetc.h"
@@ -197,7 +198,6 @@ static const char rx_ring_stats[][ETH_GSTRING_LEN] = {
"Rx ring %2d recycle failures",
"Rx ring %2d redirects",
"Rx ring %2d redirect failures",
- "Rx ring %2d redirect S/G",
};
static const char tx_ring_stats[][ETH_GSTRING_LEN] = {
@@ -291,7 +291,6 @@ static void enetc_get_ethtool_stats(struct net_device *ndev,
data[o++] = priv->rx_ring[i]->stats.recycle_failures;
data[o++] = priv->rx_ring[i]->stats.xdp_redirect;
data[o++] = priv->rx_ring[i]->stats.xdp_redirect_failures;
- data[o++] = priv->rx_ring[i]->stats.xdp_redirect_sg;
}
if (!enetc_si_is_pf(priv->si))
@@ -301,14 +300,32 @@ static void enetc_get_ethtool_stats(struct net_device *ndev,
data[o++] = enetc_port_rd(hw, enetc_port_counters[i].reg);
}
+static void enetc_pause_stats(struct enetc_hw *hw, int mac,
+ struct ethtool_pause_stats *pause_stats)
+{
+ pause_stats->tx_pause_frames = enetc_port_rd(hw, ENETC_PM_TXPF(mac));
+ pause_stats->rx_pause_frames = enetc_port_rd(hw, ENETC_PM_RXPF(mac));
+}
+
static void enetc_get_pause_stats(struct net_device *ndev,
struct ethtool_pause_stats *pause_stats)
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
struct enetc_hw *hw = &priv->si->hw;
+ struct enetc_si *si = priv->si;
- pause_stats->tx_pause_frames = enetc_port_rd(hw, ENETC_PM_TXPF(0));
- pause_stats->rx_pause_frames = enetc_port_rd(hw, ENETC_PM_RXPF(0));
+ switch (pause_stats->src) {
+ case ETHTOOL_MAC_STATS_SRC_EMAC:
+ enetc_pause_stats(hw, 0, pause_stats);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_PMAC:
+ if (si->hw_features & ENETC_SI_F_QBU)
+ enetc_pause_stats(hw, 1, pause_stats);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_AGGREGATE:
+ ethtool_aggregate_pause_stats(ndev, pause_stats);
+ break;
+ }
}
static void enetc_mac_stats(struct enetc_hw *hw, int mac,
@@ -385,8 +402,20 @@ static void enetc_get_eth_mac_stats(struct net_device *ndev,
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
struct enetc_hw *hw = &priv->si->hw;
+ struct enetc_si *si = priv->si;
- enetc_mac_stats(hw, 0, mac_stats);
+ switch (mac_stats->src) {
+ case ETHTOOL_MAC_STATS_SRC_EMAC:
+ enetc_mac_stats(hw, 0, mac_stats);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_PMAC:
+ if (si->hw_features & ENETC_SI_F_QBU)
+ enetc_mac_stats(hw, 1, mac_stats);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_AGGREGATE:
+ ethtool_aggregate_mac_stats(ndev, mac_stats);
+ break;
+ }
}
static void enetc_get_eth_ctrl_stats(struct net_device *ndev,
@@ -394,8 +423,20 @@ static void enetc_get_eth_ctrl_stats(struct net_device *ndev,
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
struct enetc_hw *hw = &priv->si->hw;
+ struct enetc_si *si = priv->si;
- enetc_ctrl_stats(hw, 0, ctrl_stats);
+ switch (ctrl_stats->src) {
+ case ETHTOOL_MAC_STATS_SRC_EMAC:
+ enetc_ctrl_stats(hw, 0, ctrl_stats);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_PMAC:
+ if (si->hw_features & ENETC_SI_F_QBU)
+ enetc_ctrl_stats(hw, 1, ctrl_stats);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_AGGREGATE:
+ ethtool_aggregate_ctrl_stats(ndev, ctrl_stats);
+ break;
+ }
}
static void enetc_get_rmon_stats(struct net_device *ndev,
@@ -404,8 +445,20 @@ static void enetc_get_rmon_stats(struct net_device *ndev,
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
struct enetc_hw *hw = &priv->si->hw;
+ struct enetc_si *si = priv->si;
- enetc_rmon_stats(hw, 0, rmon_stats, ranges);
+ switch (rmon_stats->src) {
+ case ETHTOOL_MAC_STATS_SRC_EMAC:
+ enetc_rmon_stats(hw, 0, rmon_stats, ranges);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_PMAC:
+ if (si->hw_features & ENETC_SI_F_QBU)
+ enetc_rmon_stats(hw, 1, rmon_stats, ranges);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_AGGREGATE:
+ ethtool_aggregate_rmon_stats(ndev, rmon_stats);
+ break;
+ }
}
#define ENETC_RSSHASH_L3 (RXH_L2DA | RXH_VLAN | RXH_L3_PROTO | RXH_IP_SRC | \
@@ -651,6 +704,7 @@ void enetc_set_rss_key(struct enetc_hw *hw, const u8 *bytes)
for (i = 0; i < ENETC_RSSHASH_KEY_SIZE / 4; i++)
enetc_port_wr(hw, ENETC_PRSSK(i), ((u32 *)bytes)[i]);
}
+EXPORT_SYMBOL_GPL(enetc_set_rss_key);
static int enetc_set_rxfh(struct net_device *ndev, const u32 *indir,
const u8 *key, const u8 hfunc)
@@ -864,6 +918,166 @@ static int enetc_set_link_ksettings(struct net_device *dev,
return phylink_ethtool_ksettings_set(priv->phylink, cmd);
}
+static void enetc_get_mm_stats(struct net_device *ndev,
+ struct ethtool_mm_stats *s)
+{
+ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ struct enetc_hw *hw = &priv->si->hw;
+ struct enetc_si *si = priv->si;
+
+ if (!(si->hw_features & ENETC_SI_F_QBU))
+ return;
+
+ s->MACMergeFrameAssErrorCount = enetc_port_rd(hw, ENETC_MMFAECR);
+ s->MACMergeFrameSmdErrorCount = enetc_port_rd(hw, ENETC_MMFSECR);
+ s->MACMergeFrameAssOkCount = enetc_port_rd(hw, ENETC_MMFAOCR);
+ s->MACMergeFragCountRx = enetc_port_rd(hw, ENETC_MMFCRXR);
+ s->MACMergeFragCountTx = enetc_port_rd(hw, ENETC_MMFCTXR);
+ s->MACMergeHoldCount = enetc_port_rd(hw, ENETC_MMHCR);
+}
+
+static int enetc_get_mm(struct net_device *ndev, struct ethtool_mm_state *state)
+{
+ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ struct enetc_si *si = priv->si;
+ struct enetc_hw *hw = &si->hw;
+ u32 lafs, rafs, val;
+
+ if (!(si->hw_features & ENETC_SI_F_QBU))
+ return -EOPNOTSUPP;
+
+ mutex_lock(&priv->mm_lock);
+
+ val = enetc_port_rd(hw, ENETC_PFPMR);
+ state->pmac_enabled = !!(val & ENETC_PFPMR_PMACE);
+
+ val = enetc_port_rd(hw, ENETC_MMCSR);
+
+ switch (ENETC_MMCSR_GET_VSTS(val)) {
+ case 0:
+ state->verify_status = ETHTOOL_MM_VERIFY_STATUS_DISABLED;
+ break;
+ case 2:
+ state->verify_status = ETHTOOL_MM_VERIFY_STATUS_VERIFYING;
+ break;
+ case 3:
+ state->verify_status = ETHTOOL_MM_VERIFY_STATUS_SUCCEEDED;
+ break;
+ case 4:
+ state->verify_status = ETHTOOL_MM_VERIFY_STATUS_FAILED;
+ break;
+ case 5:
+ default:
+ state->verify_status = ETHTOOL_MM_VERIFY_STATUS_UNKNOWN;
+ break;
+ }
+
+ rafs = ENETC_MMCSR_GET_RAFS(val);
+ state->tx_min_frag_size = ethtool_mm_frag_size_add_to_min(rafs);
+ lafs = ENETC_MMCSR_GET_LAFS(val);
+ state->rx_min_frag_size = ethtool_mm_frag_size_add_to_min(lafs);
+ state->tx_enabled = !!(val & ENETC_MMCSR_LPE); /* mirror of MMCSR_ME */
+ state->tx_active = !!(val & ENETC_MMCSR_LPA);
+ state->verify_enabled = !(val & ENETC_MMCSR_VDIS);
+ state->verify_time = ENETC_MMCSR_GET_VT(val);
+ /* A verifyTime of 128 ms would exceed the 7 bit width
+ * of the ENETC_MMCSR_VT field
+ */
+ state->max_verify_time = 127;
+
+ mutex_unlock(&priv->mm_lock);
+
+ return 0;
+}
+
+static int enetc_set_mm(struct net_device *ndev, struct ethtool_mm_cfg *cfg,
+ struct netlink_ext_ack *extack)
+{
+ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ struct enetc_hw *hw = &priv->si->hw;
+ struct enetc_si *si = priv->si;
+ u32 val, add_frag_size;
+ int err;
+
+ if (!(si->hw_features & ENETC_SI_F_QBU))
+ return -EOPNOTSUPP;
+
+ err = ethtool_mm_frag_size_min_to_add(cfg->tx_min_frag_size,
+ &add_frag_size, extack);
+ if (err)
+ return err;
+
+ mutex_lock(&priv->mm_lock);
+
+ val = enetc_port_rd(hw, ENETC_PFPMR);
+ if (cfg->pmac_enabled)
+ val |= ENETC_PFPMR_PMACE;
+ else
+ val &= ~ENETC_PFPMR_PMACE;
+ enetc_port_wr(hw, ENETC_PFPMR, val);
+
+ val = enetc_port_rd(hw, ENETC_MMCSR);
+
+ if (cfg->verify_enabled)
+ val &= ~ENETC_MMCSR_VDIS;
+ else
+ val |= ENETC_MMCSR_VDIS;
+
+ if (cfg->tx_enabled)
+ priv->active_offloads |= ENETC_F_QBU;
+ else
+ priv->active_offloads &= ~ENETC_F_QBU;
+
+ /* If link is up, enable MAC Merge right away */
+ if (!!(priv->active_offloads & ENETC_F_QBU) &&
+ !(val & ENETC_MMCSR_LINK_FAIL))
+ val |= ENETC_MMCSR_ME;
+
+ val &= ~ENETC_MMCSR_VT_MASK;
+ val |= ENETC_MMCSR_VT(cfg->verify_time);
+
+ val &= ~ENETC_MMCSR_RAFS_MASK;
+ val |= ENETC_MMCSR_RAFS(add_frag_size);
+
+ enetc_port_wr(hw, ENETC_MMCSR, val);
+
+ mutex_unlock(&priv->mm_lock);
+
+ return 0;
+}
+
+/* When the link is lost, the verification state machine goes to the FAILED
+ * state and doesn't restart on its own after a new link up event.
+ * According to 802.3 Figure 99-8 - Verify state diagram, the LINK_FAIL bit
+ * should have been sufficient to re-trigger verification, but for ENETC it
+ * doesn't. As a workaround, we need to toggle the Merge Enable bit to
+ * re-trigger verification when link comes up.
+ */
+void enetc_mm_link_state_update(struct enetc_ndev_priv *priv, bool link)
+{
+ struct enetc_hw *hw = &priv->si->hw;
+ u32 val;
+
+ mutex_lock(&priv->mm_lock);
+
+ val = enetc_port_rd(hw, ENETC_MMCSR);
+
+ if (link) {
+ val &= ~ENETC_MMCSR_LINK_FAIL;
+ if (priv->active_offloads & ENETC_F_QBU)
+ val |= ENETC_MMCSR_ME;
+ } else {
+ val |= ENETC_MMCSR_LINK_FAIL;
+ if (priv->active_offloads & ENETC_F_QBU)
+ val &= ~ENETC_MMCSR_ME;
+ }
+
+ enetc_port_wr(hw, ENETC_MMCSR, val);
+
+ mutex_unlock(&priv->mm_lock);
+}
+EXPORT_SYMBOL_GPL(enetc_mm_link_state_update);
+
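The expected callers of enetc_mm_link_state_update() are the PF driver's phylink handlers, which are outside this excerpt. A minimal sketch, assuming the standard mac_link_up() signature and the usual container_of() lookup of the PF private data:

	static void enetc_pl_mac_link_up(struct phylink_config *config,
					 struct phy_device *phy,
					 unsigned int mode,
					 phy_interface_t interface,
					 int speed, int duplex,
					 bool tx_pause, bool rx_pause)
	{
		struct enetc_pf *pf = container_of(config, struct enetc_pf,
						   phylink_config);
		struct enetc_ndev_priv *priv = netdev_priv(pf->si->ndev);

		/* re-arm MAC Merge verification now that link is up */
		enetc_mm_link_state_update(priv, true);

		/* ... speed/duplex/pause programming as before ... */
	}
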
static const struct ethtool_ops enetc_pf_ethtool_ops = {
.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
ETHTOOL_COALESCE_MAX_FRAMES |
@@ -894,6 +1108,9 @@ static const struct ethtool_ops enetc_pf_ethtool_ops = {
.set_wol = enetc_set_wol,
.get_pauseparam = enetc_get_pauseparam,
.set_pauseparam = enetc_set_pauseparam,
+ .get_mm = enetc_get_mm,
+ .set_mm = enetc_set_mm,
+ .get_mm_stats = enetc_get_mm_stats,
};
static const struct ethtool_ops enetc_vf_ethtool_ops = {
@@ -926,3 +1143,4 @@ void enetc_set_ethtool_ops(struct net_device *ndev)
else
ndev->ethtool_ops = &enetc_vf_ethtool_ops;
}
+EXPORT_SYMBOL_GPL(enetc_set_ethtool_ops);
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_hw.h b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
index 18ca1f42b1f7..de2e0ee8cdcb 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+++ b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
@@ -18,9 +18,10 @@
#define ENETC_SICTR0 0x18
#define ENETC_SICTR1 0x1c
#define ENETC_SIPCAPR0 0x20
-#define ENETC_SIPCAPR0_QBV BIT(4)
#define ENETC_SIPCAPR0_PSFP BIT(9)
#define ENETC_SIPCAPR0_RSS BIT(8)
+#define ENETC_SIPCAPR0_QBV BIT(4)
+#define ENETC_SIPCAPR0_QBU BIT(3)
#define ENETC_SIPCAPR1 0x24
#define ENETC_SITGTGR 0x30
#define ENETC_SIRBGCR 0x38
@@ -213,7 +214,6 @@ enum enetc_bdr_type {TX, RX};
#define ENETC_PSIRFSCFGR(n) (0x1814 + (n) * 4) /* n = SI index */
#define ENETC_PFPMR 0x1900
#define ENETC_PFPMR_PMACE BIT(1)
-#define ENETC_PFPMR_MWLM BIT(0)
#define ENETC_EMDIO_BASE 0x1c00
#define ENETC_PSIUMHFR0(n, err) (((err) ? 0x1d08 : 0x1d00) + (n) * 0x10)
#define ENETC_PSIUMHFR1(n) (0x1d04 + (n) * 0x10)
@@ -222,11 +222,35 @@ enum enetc_bdr_type {TX, RX};
#define ENETC_PSIVHFR0(n) (0x1e00 + (n) * 8) /* n = SI index */
#define ENETC_PSIVHFR1(n) (0x1e04 + (n) * 8) /* n = SI index */
#define ENETC_MMCSR 0x1f00
-#define ENETC_MMCSR_ME BIT(16)
+#define ENETC_MMCSR_LINK_FAIL BIT(31)
+#define ENETC_MMCSR_VT_MASK GENMASK(29, 23) /* Verify Time */
+#define ENETC_MMCSR_VT(x) (((x) << 23) & ENETC_MMCSR_VT_MASK)
+#define ENETC_MMCSR_GET_VT(x) (((x) & ENETC_MMCSR_VT_MASK) >> 23)
+#define ENETC_MMCSR_TXSTS_MASK GENMASK(22, 21) /* Merge Status */
+#define ENETC_MMCSR_GET_TXSTS(x) (((x) & ENETC_MMCSR_TXSTS_MASK) >> 21)
+#define ENETC_MMCSR_VSTS_MASK GENMASK(20, 18) /* Verify Status */
+#define ENETC_MMCSR_GET_VSTS(x) (((x) & ENETC_MMCSR_VSTS_MASK) >> 18)
+#define ENETC_MMCSR_VDIS BIT(17) /* Verify Disabled */
+#define ENETC_MMCSR_ME BIT(16) /* Merge Enabled */
+#define ENETC_MMCSR_RAFS_MASK GENMASK(9, 8) /* Remote Additional Fragment Size */
+#define ENETC_MMCSR_RAFS(x) (((x) << 8) & ENETC_MMCSR_RAFS_MASK)
+#define ENETC_MMCSR_GET_RAFS(x) (((x) & ENETC_MMCSR_RAFS_MASK) >> 8)
+#define ENETC_MMCSR_LAFS_MASK GENMASK(4, 3) /* Local Additional Fragment Size */
+#define ENETC_MMCSR_GET_LAFS(x) (((x) & ENETC_MMCSR_LAFS_MASK) >> 3)
+#define ENETC_MMCSR_LPA BIT(2) /* Local Preemption Active */
+#define ENETC_MMCSR_LPE BIT(1) /* Local Preemption Enabled */
+#define ENETC_MMCSR_LPS BIT(0) /* Local Preemption Supported */
+#define ENETC_MMFAECR 0x1f08
+#define ENETC_MMFSECR 0x1f0c
+#define ENETC_MMFAOCR 0x1f10
+#define ENETC_MMFCRXR 0x1f14
+#define ENETC_MMFCTXR 0x1f18
+#define ENETC_MMHCR 0x1f1c
#define ENETC_PTCMSDUR(n) (0x2020 + (n) * 4) /* n = TC index [0..7] */
+#define ENETC_PMAC_OFFSET 0x1000
+
#define ENETC_PM0_CMD_CFG 0x8008
-#define ENETC_PM1_CMD_CFG 0x9008
#define ENETC_PM0_TX_EN BIT(0)
#define ENETC_PM0_RX_EN BIT(1)
#define ENETC_PM0_PROMISC BIT(4)
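
The hand-rolled ENETC_MMCSR shift/mask accessors above behave exactly like the generic bitfield helpers, since the *_MASK macros are GENMASK()s. A minimal equivalence sketch, assuming <linux/bitfield.h>:

	u32 val = enetc_port_rd(hw, ENETC_MMCSR);

	WARN_ON(FIELD_GET(ENETC_MMCSR_VT_MASK, val) !=
		ENETC_MMCSR_GET_VT(val));

	val &= ~ENETC_MMCSR_VT_MASK;
	val |= FIELD_PREP(ENETC_MMCSR_VT_MASK, 10); /* == ENETC_MMCSR_VT(10) */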
@@ -245,11 +269,8 @@ enum enetc_bdr_type {TX, RX};
#define ENETC_PM0_PAUSE_QUANTA 0x8054
#define ENETC_PM0_PAUSE_THRESH 0x8064
-#define ENETC_PM1_PAUSE_QUANTA 0x9054
-#define ENETC_PM1_PAUSE_THRESH 0x9064
#define ENETC_PM0_SINGLE_STEP 0x80c0
-#define ENETC_PM1_SINGLE_STEP 0x90c0
#define ENETC_PM0_SINGLE_STEP_CH BIT(7)
#define ENETC_PM0_SINGLE_STEP_EN BIT(31)
#define ENETC_SET_SINGLE_STEP_OFFSET(v) (((v) & 0xff) << 8)
@@ -279,57 +300,57 @@ enum enetc_bdr_type {TX, RX};
/* Port MAC counters: Port MAC 0 corresponds to the eMAC and
* Port MAC 1 to the pMAC.
*/
-#define ENETC_PM_REOCT(mac) (0x8100 + 0x1000 * (mac))
-#define ENETC_PM_RALN(mac) (0x8110 + 0x1000 * (mac))
-#define ENETC_PM_RXPF(mac) (0x8118 + 0x1000 * (mac))
-#define ENETC_PM_RFRM(mac) (0x8120 + 0x1000 * (mac))
-#define ENETC_PM_RFCS(mac) (0x8128 + 0x1000 * (mac))
-#define ENETC_PM_RVLAN(mac) (0x8130 + 0x1000 * (mac))
-#define ENETC_PM_RERR(mac) (0x8138 + 0x1000 * (mac))
-#define ENETC_PM_RUCA(mac) (0x8140 + 0x1000 * (mac))
-#define ENETC_PM_RMCA(mac) (0x8148 + 0x1000 * (mac))
-#define ENETC_PM_RBCA(mac) (0x8150 + 0x1000 * (mac))
-#define ENETC_PM_RDRP(mac) (0x8158 + 0x1000 * (mac))
-#define ENETC_PM_RPKT(mac) (0x8160 + 0x1000 * (mac))
-#define ENETC_PM_RUND(mac) (0x8168 + 0x1000 * (mac))
-#define ENETC_PM_R64(mac) (0x8170 + 0x1000 * (mac))
-#define ENETC_PM_R127(mac) (0x8178 + 0x1000 * (mac))
-#define ENETC_PM_R255(mac) (0x8180 + 0x1000 * (mac))
-#define ENETC_PM_R511(mac) (0x8188 + 0x1000 * (mac))
-#define ENETC_PM_R1023(mac) (0x8190 + 0x1000 * (mac))
-#define ENETC_PM_R1522(mac) (0x8198 + 0x1000 * (mac))
-#define ENETC_PM_R1523X(mac) (0x81A0 + 0x1000 * (mac))
-#define ENETC_PM_ROVR(mac) (0x81A8 + 0x1000 * (mac))
-#define ENETC_PM_RJBR(mac) (0x81B0 + 0x1000 * (mac))
-#define ENETC_PM_RFRG(mac) (0x81B8 + 0x1000 * (mac))
-#define ENETC_PM_RCNP(mac) (0x81C0 + 0x1000 * (mac))
-#define ENETC_PM_RDRNTP(mac) (0x81C8 + 0x1000 * (mac))
-#define ENETC_PM_TEOCT(mac) (0x8200 + 0x1000 * (mac))
-#define ENETC_PM_TOCT(mac) (0x8208 + 0x1000 * (mac))
-#define ENETC_PM_TCRSE(mac) (0x8210 + 0x1000 * (mac))
-#define ENETC_PM_TXPF(mac) (0x8218 + 0x1000 * (mac))
-#define ENETC_PM_TFRM(mac) (0x8220 + 0x1000 * (mac))
-#define ENETC_PM_TFCS(mac) (0x8228 + 0x1000 * (mac))
-#define ENETC_PM_TVLAN(mac) (0x8230 + 0x1000 * (mac))
-#define ENETC_PM_TERR(mac) (0x8238 + 0x1000 * (mac))
-#define ENETC_PM_TUCA(mac) (0x8240 + 0x1000 * (mac))
-#define ENETC_PM_TMCA(mac) (0x8248 + 0x1000 * (mac))
-#define ENETC_PM_TBCA(mac) (0x8250 + 0x1000 * (mac))
-#define ENETC_PM_TPKT(mac) (0x8260 + 0x1000 * (mac))
-#define ENETC_PM_TUND(mac) (0x8268 + 0x1000 * (mac))
-#define ENETC_PM_T64(mac) (0x8270 + 0x1000 * (mac))
-#define ENETC_PM_T127(mac) (0x8278 + 0x1000 * (mac))
-#define ENETC_PM_T255(mac) (0x8280 + 0x1000 * (mac))
-#define ENETC_PM_T511(mac) (0x8288 + 0x1000 * (mac))
-#define ENETC_PM_T1023(mac) (0x8290 + 0x1000 * (mac))
-#define ENETC_PM_T1522(mac) (0x8298 + 0x1000 * (mac))
-#define ENETC_PM_T1523X(mac) (0x82A0 + 0x1000 * (mac))
-#define ENETC_PM_TCNP(mac) (0x82C0 + 0x1000 * (mac))
-#define ENETC_PM_TDFR(mac) (0x82D0 + 0x1000 * (mac))
-#define ENETC_PM_TMCOL(mac) (0x82D8 + 0x1000 * (mac))
-#define ENETC_PM_TSCOL(mac) (0x82E0 + 0x1000 * (mac))
-#define ENETC_PM_TLCOL(mac) (0x82E8 + 0x1000 * (mac))
-#define ENETC_PM_TECOL(mac) (0x82F0 + 0x1000 * (mac))
+#define ENETC_PM_REOCT(mac) (0x8100 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RALN(mac) (0x8110 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RXPF(mac) (0x8118 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RFRM(mac) (0x8120 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RFCS(mac) (0x8128 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RVLAN(mac) (0x8130 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RERR(mac) (0x8138 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RUCA(mac) (0x8140 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RMCA(mac) (0x8148 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RBCA(mac) (0x8150 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RDRP(mac) (0x8158 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RPKT(mac) (0x8160 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RUND(mac) (0x8168 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_R64(mac) (0x8170 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_R127(mac) (0x8178 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_R255(mac) (0x8180 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_R511(mac) (0x8188 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_R1023(mac) (0x8190 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_R1522(mac) (0x8198 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_R1523X(mac) (0x81A0 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_ROVR(mac) (0x81A8 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RJBR(mac) (0x81B0 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RFRG(mac) (0x81B8 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RCNP(mac) (0x81C0 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_RDRNTP(mac) (0x81C8 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TEOCT(mac) (0x8200 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TOCT(mac) (0x8208 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TCRSE(mac) (0x8210 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TXPF(mac) (0x8218 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TFRM(mac) (0x8220 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TFCS(mac) (0x8228 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TVLAN(mac) (0x8230 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TERR(mac) (0x8238 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TUCA(mac) (0x8240 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TMCA(mac) (0x8248 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TBCA(mac) (0x8250 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TPKT(mac) (0x8260 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TUND(mac) (0x8268 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_T64(mac) (0x8270 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_T127(mac) (0x8278 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_T255(mac) (0x8280 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_T511(mac) (0x8288 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_T1023(mac) (0x8290 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_T1522(mac) (0x8298 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_T1523X(mac) (0x82A0 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TCNP(mac) (0x82C0 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TDFR(mac) (0x82D0 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TMCOL(mac) (0x82D8 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TSCOL(mac) (0x82E0 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TLCOL(mac) (0x82E8 + ENETC_PMAC_OFFSET * (mac))
+#define ENETC_PM_TECOL(mac) (0x82F0 + ENETC_PMAC_OFFSET * (mac))
/* Port counters */
#define ENETC_PICDR(n) (0x0700 + (n) * 8) /* n = [0..3] */
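
The counter macros above take a `mac` index scaled by ENETC_PMAC_OFFSET (0x1000), so index 0 addresses the express MAC (eMAC) register block and index 1 its preemptible MAC (pMAC) shadow. A minimal sketch of the write-mirroring helper this parametrization enables, assuming the driver's enetc_port_wr() accessor and the ENETC_SI_F_QBU feature flag (the in-tree helper may differ in detail):

static void port_mac_wr_sketch(struct enetc_si *si, u32 reg, u32 val)
{
	/* program the eMAC copy of the register */
	enetc_port_wr(&si->hw, reg, val);

	/* mirror to the pMAC copy, 0x1000 higher, when frame preemption
	 * (802.1Qbu) is supported, so both MACs stay in sync
	 */
	if (si->hw_features & ENETC_SI_F_QBU)
		enetc_port_wr(&si->hw, reg + ENETC_PMAC_OFFSET, val);
}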
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_mdio.c b/drivers/net/ethernet/freescale/enetc/enetc_mdio.c
index 1c8f5cc6dec4..998aaa394e9c 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_mdio.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_mdio.c
@@ -55,7 +55,8 @@ static int enetc_mdio_wait_complete(struct enetc_mdio_priv *mdio_priv)
is_busy, !is_busy, 10, 10 * 1000);
}
-int enetc_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 value)
+int enetc_mdio_write_c22(struct mii_bus *bus, int phy_id, int regnum,
+ u16 value)
{
struct enetc_mdio_priv *mdio_priv = bus->priv;
u32 mdio_ctl, mdio_cfg;
@@ -63,14 +64,39 @@ int enetc_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 value)
int ret;
mdio_cfg = ENETC_EMDIO_CFG;
- if (regnum & MII_ADDR_C45) {
- dev_addr = (regnum >> 16) & 0x1f;
- mdio_cfg |= MDIO_CFG_ENC45;
- } else {
- /* clause 22 (ie 1G) */
- dev_addr = regnum & 0x1f;
- mdio_cfg &= ~MDIO_CFG_ENC45;
- }
+ dev_addr = regnum & 0x1f;
+ mdio_cfg &= ~MDIO_CFG_ENC45;
+
+ enetc_mdio_wr(mdio_priv, ENETC_MDIO_CFG, mdio_cfg);
+
+ ret = enetc_mdio_wait_complete(mdio_priv);
+ if (ret)
+ return ret;
+
+ /* set port and dev addr */
+ mdio_ctl = MDIO_CTL_PORT_ADDR(phy_id) | MDIO_CTL_DEV_ADDR(dev_addr);
+ enetc_mdio_wr(mdio_priv, ENETC_MDIO_CTL, mdio_ctl);
+
+ /* write the value */
+ enetc_mdio_wr(mdio_priv, ENETC_MDIO_DATA, value);
+
+ ret = enetc_mdio_wait_complete(mdio_priv);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(enetc_mdio_write_c22);
+
+int enetc_mdio_write_c45(struct mii_bus *bus, int phy_id, int dev_addr,
+ int regnum, u16 value)
+{
+ struct enetc_mdio_priv *mdio_priv = bus->priv;
+ u32 mdio_ctl, mdio_cfg;
+ int ret;
+
+ mdio_cfg = ENETC_EMDIO_CFG;
+ mdio_cfg |= MDIO_CFG_ENC45;
enetc_mdio_wr(mdio_priv, ENETC_MDIO_CFG, mdio_cfg);
@@ -83,13 +109,11 @@ int enetc_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 value)
enetc_mdio_wr(mdio_priv, ENETC_MDIO_CTL, mdio_ctl);
/* set the register address */
- if (regnum & MII_ADDR_C45) {
- enetc_mdio_wr(mdio_priv, ENETC_MDIO_ADDR, regnum & 0xffff);
+ enetc_mdio_wr(mdio_priv, ENETC_MDIO_ADDR, regnum & 0xffff);
- ret = enetc_mdio_wait_complete(mdio_priv);
- if (ret)
- return ret;
- }
+ ret = enetc_mdio_wait_complete(mdio_priv);
+ if (ret)
+ return ret;
/* write the value */
enetc_mdio_wr(mdio_priv, ENETC_MDIO_DATA, value);
@@ -100,9 +124,9 @@ int enetc_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 value)
return 0;
}
-EXPORT_SYMBOL_GPL(enetc_mdio_write);
+EXPORT_SYMBOL_GPL(enetc_mdio_write_c45);
-int enetc_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
+int enetc_mdio_read_c22(struct mii_bus *bus, int phy_id, int regnum)
{
struct enetc_mdio_priv *mdio_priv = bus->priv;
u32 mdio_ctl, mdio_cfg;
@@ -110,14 +134,51 @@ int enetc_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
int ret;
mdio_cfg = ENETC_EMDIO_CFG;
- if (regnum & MII_ADDR_C45) {
- dev_addr = (regnum >> 16) & 0x1f;
- mdio_cfg |= MDIO_CFG_ENC45;
- } else {
- dev_addr = regnum & 0x1f;
- mdio_cfg &= ~MDIO_CFG_ENC45;
+ dev_addr = regnum & 0x1f;
+ mdio_cfg &= ~MDIO_CFG_ENC45;
+
+ enetc_mdio_wr(mdio_priv, ENETC_MDIO_CFG, mdio_cfg);
+
+ ret = enetc_mdio_wait_complete(mdio_priv);
+ if (ret)
+ return ret;
+
+ /* set port and device addr */
+ mdio_ctl = MDIO_CTL_PORT_ADDR(phy_id) | MDIO_CTL_DEV_ADDR(dev_addr);
+ enetc_mdio_wr(mdio_priv, ENETC_MDIO_CTL, mdio_ctl);
+
+ /* initiate the read */
+ enetc_mdio_wr(mdio_priv, ENETC_MDIO_CTL, mdio_ctl | MDIO_CTL_READ);
+
+ ret = enetc_mdio_wait_complete(mdio_priv);
+ if (ret)
+ return ret;
+
+ /* return all Fs if nothing was there */
+ if (enetc_mdio_rd(mdio_priv, ENETC_MDIO_CFG) & MDIO_CFG_RD_ER) {
+ dev_dbg(&bus->dev,
+ "Error while reading PHY%d reg at %d.%d\n",
+ phy_id, dev_addr, regnum);
+ return 0xffff;
}
+ value = enetc_mdio_rd(mdio_priv, ENETC_MDIO_DATA) & 0xffff;
+
+ return value;
+}
+EXPORT_SYMBOL_GPL(enetc_mdio_read_c22);
+
+int enetc_mdio_read_c45(struct mii_bus *bus, int phy_id, int dev_addr,
+ int regnum)
+{
+ struct enetc_mdio_priv *mdio_priv = bus->priv;
+ u32 mdio_ctl, mdio_cfg;
+ u16 value;
+ int ret;
+
+ mdio_cfg = ENETC_EMDIO_CFG;
+ mdio_cfg |= MDIO_CFG_ENC45;
+
enetc_mdio_wr(mdio_priv, ENETC_MDIO_CFG, mdio_cfg);
ret = enetc_mdio_wait_complete(mdio_priv);
@@ -129,13 +190,11 @@ int enetc_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
enetc_mdio_wr(mdio_priv, ENETC_MDIO_CTL, mdio_ctl);
/* set the register address */
- if (regnum & MII_ADDR_C45) {
- enetc_mdio_wr(mdio_priv, ENETC_MDIO_ADDR, regnum & 0xffff);
+ enetc_mdio_wr(mdio_priv, ENETC_MDIO_ADDR, regnum & 0xffff);
- ret = enetc_mdio_wait_complete(mdio_priv);
- if (ret)
- return ret;
- }
+ ret = enetc_mdio_wait_complete(mdio_priv);
+ if (ret)
+ return ret;
/* initiate the read */
enetc_mdio_wr(mdio_priv, ENETC_MDIO_CTL, mdio_ctl | MDIO_CTL_READ);
@@ -156,7 +215,7 @@ int enetc_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
return value;
}
-EXPORT_SYMBOL_GPL(enetc_mdio_read);
+EXPORT_SYMBOL_GPL(enetc_mdio_read_c45);
struct enetc_hw *enetc_hw_alloc(struct device *dev, void __iomem *port_regs)
{
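
For contrast with the split ops exported above, the removed combined entry points encoded Clause 45 accesses into regnum using the now-retired MII_ADDR_C45 flag. A sketch of how the old encoding maps onto the new API (the shim name is hypothetical, for illustration only):

static int mdio_read_compat(struct mii_bus *bus, int phy_id, int regnum)
{
	if (regnum & MII_ADDR_C45) {
		int devad = (regnum >> 16) & 0x1f; /* was packed into regnum */

		return enetc_mdio_read_c45(bus, phy_id, devad,
					   regnum & 0xffff);
	}

	return enetc_mdio_read_c22(bus, phy_id, regnum);
}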
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c b/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c
index dafb26f81f95..a1b595bd7993 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c
@@ -39,8 +39,10 @@ static int enetc_pci_mdio_probe(struct pci_dev *pdev,
}
bus->name = ENETC_MDIO_BUS_NAME;
- bus->read = enetc_mdio_read;
- bus->write = enetc_mdio_write;
+ bus->read = enetc_mdio_read_c22;
+ bus->write = enetc_mdio_write_c22;
+ bus->read_c45 = enetc_mdio_read_c45;
+ bus->write_c45 = enetc_mdio_write_c45;
bus->parent = dev;
mdio_priv = bus->priv;
mdio_priv->hw = hw;
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
index 9f6c4f5c0a6c..7cd22d370caa 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
@@ -319,24 +319,23 @@ static int enetc_vlan_rx_del_vid(struct net_device *ndev, __be16 prot, u16 vid)
static void enetc_set_loopback(struct net_device *ndev, bool en)
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
- struct enetc_hw *hw = &priv->si->hw;
+ struct enetc_si *si = priv->si;
u32 reg;
- reg = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
+ reg = enetc_port_mac_rd(si, ENETC_PM0_IF_MODE);
if (reg & ENETC_PM0_IFM_RG) {
/* RGMII mode */
reg = (reg & ~ENETC_PM0_IFM_RLP) |
(en ? ENETC_PM0_IFM_RLP : 0);
- enetc_port_wr(hw, ENETC_PM0_IF_MODE, reg);
+ enetc_port_mac_wr(si, ENETC_PM0_IF_MODE, reg);
} else {
/* assume SGMII mode */
- reg = enetc_port_rd(hw, ENETC_PM0_CMD_CFG);
+ reg = enetc_port_mac_rd(si, ENETC_PM0_CMD_CFG);
reg = (reg & ~ENETC_PM0_CMD_XGLP) |
(en ? ENETC_PM0_CMD_XGLP : 0);
reg = (reg & ~ENETC_PM0_CMD_PHY_TX_EN) |
(en ? ENETC_PM0_CMD_PHY_TX_EN : 0);
- enetc_port_wr(hw, ENETC_PM0_CMD_CFG, reg);
- enetc_port_wr(hw, ENETC_PM1_CMD_CFG, reg);
+ enetc_port_mac_wr(si, ENETC_PM0_CMD_CFG, reg);
}
}
@@ -538,65 +537,50 @@ void enetc_reset_ptcmsdur(struct enetc_hw *hw)
enetc_port_wr(hw, ENETC_PTCMSDUR(tc), ENETC_MAC_MAXFRM_SIZE);
}
-static void enetc_configure_port_mac(struct enetc_hw *hw)
+static void enetc_configure_port_mac(struct enetc_si *si)
{
- enetc_port_wr(hw, ENETC_PM0_MAXFRM,
- ENETC_SET_MAXFRM(ENETC_RX_MAXFRM_SIZE));
+ struct enetc_hw *hw = &si->hw;
- enetc_reset_ptcmsdur(hw);
+ enetc_port_mac_wr(si, ENETC_PM0_MAXFRM,
+ ENETC_SET_MAXFRM(ENETC_RX_MAXFRM_SIZE));
- enetc_port_wr(hw, ENETC_PM0_CMD_CFG, ENETC_PM0_CMD_PHY_TX_EN |
- ENETC_PM0_CMD_TXP | ENETC_PM0_PROMISC);
+ enetc_reset_ptcmsdur(hw);
- enetc_port_wr(hw, ENETC_PM1_CMD_CFG, ENETC_PM0_CMD_PHY_TX_EN |
- ENETC_PM0_CMD_TXP | ENETC_PM0_PROMISC);
+ enetc_port_mac_wr(si, ENETC_PM0_CMD_CFG, ENETC_PM0_CMD_PHY_TX_EN |
+ ENETC_PM0_CMD_TXP | ENETC_PM0_PROMISC);
/* On LS1028A, the MAC RX FIFO defaults to 2, which is too high
* and may lead to RX lock-up under traffic. Set it to 1 instead,
* as recommended by the hardware team.
*/
- enetc_port_wr(hw, ENETC_PM0_RX_FIFO, ENETC_PM0_RX_FIFO_VAL);
+ enetc_port_mac_wr(si, ENETC_PM0_RX_FIFO, ENETC_PM0_RX_FIFO_VAL);
}
-static void enetc_mac_config(struct enetc_hw *hw, phy_interface_t phy_mode)
+static void enetc_mac_config(struct enetc_si *si, phy_interface_t phy_mode)
{
u32 val;
if (phy_interface_mode_is_rgmii(phy_mode)) {
- val = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
+ val = enetc_port_mac_rd(si, ENETC_PM0_IF_MODE);
val &= ~(ENETC_PM0_IFM_EN_AUTO | ENETC_PM0_IFM_IFMODE_MASK);
val |= ENETC_PM0_IFM_IFMODE_GMII | ENETC_PM0_IFM_RG;
- enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
+ enetc_port_mac_wr(si, ENETC_PM0_IF_MODE, val);
}
if (phy_mode == PHY_INTERFACE_MODE_USXGMII) {
val = ENETC_PM0_IFM_FULL_DPX | ENETC_PM0_IFM_IFMODE_XGMII;
- enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
+ enetc_port_mac_wr(si, ENETC_PM0_IF_MODE, val);
}
}
-static void enetc_mac_enable(struct enetc_hw *hw, bool en)
+static void enetc_mac_enable(struct enetc_si *si, bool en)
{
- u32 val = enetc_port_rd(hw, ENETC_PM0_CMD_CFG);
+ u32 val = enetc_port_mac_rd(si, ENETC_PM0_CMD_CFG);
val &= ~(ENETC_PM0_TX_EN | ENETC_PM0_RX_EN);
val |= en ? (ENETC_PM0_TX_EN | ENETC_PM0_RX_EN) : 0;
- enetc_port_wr(hw, ENETC_PM0_CMD_CFG, val);
- enetc_port_wr(hw, ENETC_PM1_CMD_CFG, val);
-}
-
-static void enetc_configure_port_pmac(struct enetc_hw *hw)
-{
- u32 temp;
-
- /* Set pMAC step lock */
- temp = enetc_port_rd(hw, ENETC_PFPMR);
- enetc_port_wr(hw, ENETC_PFPMR,
- temp | ENETC_PFPMR_PMACE | ENETC_PFPMR_MWLM);
-
- temp = enetc_port_rd(hw, ENETC_MMCSR);
- enetc_port_wr(hw, ENETC_MMCSR, temp | ENETC_MMCSR_ME);
+ enetc_port_mac_wr(si, ENETC_PM0_CMD_CFG, val);
}
static void enetc_configure_port(struct enetc_pf *pf)
@@ -604,9 +588,7 @@ static void enetc_configure_port(struct enetc_pf *pf)
u8 hash_key[ENETC_RSSHASH_KEY_SIZE];
struct enetc_hw *hw = &pf->si->hw;
- enetc_configure_port_pmac(hw);
-
- enetc_configure_port_mac(hw);
+ enetc_configure_port_mac(pf->si);
enetc_port_si_configure(pf->si);
@@ -825,6 +807,9 @@ static void enetc_pf_netdev_setup(struct enetc_si *si, struct net_device *ndev,
ndev->hw_features |= NETIF_F_RXHASH;
ndev->priv_flags |= IFF_UNICAST_FLT;
+ ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG |
+ NETDEV_XDP_ACT_NDO_XMIT_SG;
if (si->hw_features & ENETC_SI_F_PSFP && !enetc_psfp_enable(priv)) {
priv->active_offloads |= ENETC_F_QCI;
@@ -848,8 +833,10 @@ static int enetc_mdio_probe(struct enetc_pf *pf, struct device_node *np)
return -ENOMEM;
bus->name = "Freescale ENETC MDIO Bus";
- bus->read = enetc_mdio_read;
- bus->write = enetc_mdio_write;
+ bus->read = enetc_mdio_read_c22;
+ bus->write = enetc_mdio_write_c22;
+ bus->read_c45 = enetc_mdio_read_c45;
+ bus->write_c45 = enetc_mdio_write_c45;
bus->parent = dev;
mdio_priv = bus->priv;
mdio_priv->hw = &pf->si->hw;
@@ -885,8 +872,10 @@ static int enetc_imdio_create(struct enetc_pf *pf)
return -ENOMEM;
bus->name = "Freescale ENETC internal MDIO Bus";
- bus->read = enetc_mdio_read;
- bus->write = enetc_mdio_write;
+ bus->read = enetc_mdio_read_c22;
+ bus->write = enetc_mdio_write_c22;
+ bus->read_c45 = enetc_mdio_read_c45;
+ bus->write_c45 = enetc_mdio_write_c45;
bus->parent = dev;
bus->phy_mask = ~0;
mdio_priv = bus->priv;
@@ -994,14 +983,14 @@ static void enetc_pl_mac_config(struct phylink_config *config,
{
struct enetc_pf *pf = phylink_to_enetc_pf(config);
- enetc_mac_config(&pf->si->hw, state->interface);
+ enetc_mac_config(pf->si, state->interface);
}
-static void enetc_force_rgmii_mac(struct enetc_hw *hw, int speed, int duplex)
+static void enetc_force_rgmii_mac(struct enetc_si *si, int speed, int duplex)
{
u32 old_val, val;
- old_val = val = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
+ old_val = val = enetc_port_mac_rd(si, ENETC_PM0_IF_MODE);
if (speed == SPEED_1000) {
val &= ~ENETC_PM0_IFM_SSP_MASK;
@@ -1022,7 +1011,7 @@ static void enetc_force_rgmii_mac(struct enetc_hw *hw, int speed, int duplex)
if (val == old_val)
return;
- enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
+ enetc_port_mac_wr(si, ENETC_PM0_IF_MODE, val);
}
static void enetc_pl_mac_link_up(struct phylink_config *config,
@@ -1034,6 +1023,7 @@ static void enetc_pl_mac_link_up(struct phylink_config *config,
u32 pause_off_thresh = 0, pause_on_thresh = 0;
u32 init_quanta = 0, refresh_quanta = 0;
struct enetc_hw *hw = &pf->si->hw;
+ struct enetc_si *si = pf->si;
struct enetc_ndev_priv *priv;
u32 rbmr, cmd_cfg;
int idx;
@@ -1045,7 +1035,7 @@ static void enetc_pl_mac_link_up(struct phylink_config *config,
if (!phylink_autoneg_inband(mode) &&
phy_interface_mode_is_rgmii(interface))
- enetc_force_rgmii_mac(hw, speed, duplex);
+ enetc_force_rgmii_mac(si, speed, duplex);
/* Flow control */
for (idx = 0; idx < priv->num_rx_rings; idx++) {
@@ -1081,24 +1071,24 @@ static void enetc_pl_mac_link_up(struct phylink_config *config,
pause_off_thresh = 1 * ENETC_MAC_MAXFRM_SIZE;
}
- enetc_port_wr(hw, ENETC_PM0_PAUSE_QUANTA, init_quanta);
- enetc_port_wr(hw, ENETC_PM1_PAUSE_QUANTA, init_quanta);
- enetc_port_wr(hw, ENETC_PM0_PAUSE_THRESH, refresh_quanta);
- enetc_port_wr(hw, ENETC_PM1_PAUSE_THRESH, refresh_quanta);
+ enetc_port_mac_wr(si, ENETC_PM0_PAUSE_QUANTA, init_quanta);
+ enetc_port_mac_wr(si, ENETC_PM0_PAUSE_THRESH, refresh_quanta);
enetc_port_wr(hw, ENETC_PPAUONTR, pause_on_thresh);
enetc_port_wr(hw, ENETC_PPAUOFFTR, pause_off_thresh);
- cmd_cfg = enetc_port_rd(hw, ENETC_PM0_CMD_CFG);
+ cmd_cfg = enetc_port_mac_rd(si, ENETC_PM0_CMD_CFG);
if (rx_pause)
cmd_cfg &= ~ENETC_PM0_PAUSE_IGN;
else
cmd_cfg |= ENETC_PM0_PAUSE_IGN;
- enetc_port_wr(hw, ENETC_PM0_CMD_CFG, cmd_cfg);
- enetc_port_wr(hw, ENETC_PM1_CMD_CFG, cmd_cfg);
+ enetc_port_mac_wr(si, ENETC_PM0_CMD_CFG, cmd_cfg);
+
+ enetc_mac_enable(si, true);
- enetc_mac_enable(hw, true);
+ if (si->hw_features & ENETC_SI_F_QBU)
+ enetc_mm_link_state_update(priv, true);
}
static void enetc_pl_mac_link_down(struct phylink_config *config,
@@ -1106,8 +1096,15 @@ static void enetc_pl_mac_link_down(struct phylink_config *config,
phy_interface_t interface)
{
struct enetc_pf *pf = phylink_to_enetc_pf(config);
+ struct enetc_si *si = pf->si;
+ struct enetc_ndev_priv *priv;
- enetc_mac_enable(&pf->si->hw, false);
+ priv = netdev_priv(si->ndev);
+
+ if (si->hw_features & ENETC_SI_F_QBU)
+ enetc_mm_link_state_update(priv, false);
+
+ enetc_mac_enable(si, false);
}
static const struct phylink_mac_ops enetc_mac_phylink_ops = {
@@ -1300,6 +1297,8 @@ static int enetc_pf_probe(struct pci_dev *pdev,
priv = netdev_priv(ndev);
+ mutex_init(&priv->mm_lock);
+
enetc_init_si_rings_params(priv);
err = enetc_alloc_si_resources(priv);
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
index fcebb54224c0..130ebf6853e6 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
@@ -136,29 +136,21 @@ int enetc_setup_tc_taprio(struct net_device *ndev, void *type_data)
{
struct tc_taprio_qopt_offload *taprio = type_data;
struct enetc_ndev_priv *priv = netdev_priv(ndev);
- struct enetc_hw *hw = &priv->si->hw;
- struct enetc_bdr *tx_ring;
- int err;
- int i;
+ int err, i;
/* TSD and Qbv are mutually exclusive in hardware */
for (i = 0; i < priv->num_tx_rings; i++)
if (priv->tx_ring[i]->tsd_enable)
return -EBUSY;
- for (i = 0; i < priv->num_tx_rings; i++) {
- tx_ring = priv->tx_ring[i];
- tx_ring->prio = taprio->enable ? i : 0;
- enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio);
- }
+ err = enetc_setup_tc_mqprio(ndev, &taprio->mqprio);
+ if (err)
+ return err;
err = enetc_setup_taprio(ndev, taprio);
if (err) {
- for (i = 0; i < priv->num_tx_rings; i++) {
- tx_ring = priv->tx_ring[i];
- tx_ring->prio = taprio->enable ? 0 : i;
- enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio);
- }
+ taprio->mqprio.qopt.num_tc = 0;
+ enetc_setup_tc_mqprio(ndev, &taprio->mqprio);
}
return err;
@@ -1611,6 +1603,13 @@ int enetc_qos_query_caps(struct net_device *ndev, void *type_data)
struct enetc_si *si = priv->si;
switch (base->type) {
+ case TC_SETUP_QDISC_MQPRIO: {
+ struct tc_mqprio_caps *caps = base->caps;
+
+ caps->validate_queue_counts = true;
+
+ return 0;
+ }
case TC_SETUP_QDISC_TAPRIO: {
struct tc_taprio_caps *caps = base->caps;
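
The rework above replaces the hand-rolled TX ring priority loops with a call into the mqprio handler, and rolls back a failed taprio offload by replaying that same handler with num_tc forced to 0. Condensed, the idiom is (a sketch, not the full function):

	err = enetc_setup_tc_mqprio(ndev, &taprio->mqprio); /* TCs to rings */
	if (err)
		return err;

	err = enetc_setup_taprio(ndev, taprio);
	if (err) {
		/* undo by re-running the same helper in "reset" mode */
		taprio->mqprio.qopt.num_tc = 0;
		enetc_setup_tc_mqprio(ndev, &taprio->mqprio);
	}

Reusing the configuration path for the rollback keeps the undo logic from drifting out of sync with the setup logic.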
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index 2341597408d1..c73e25f8995e 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -56,12 +56,12 @@
#include <linux/fec.h>
#include <linux/of.h>
#include <linux/of_device.h>
-#include <linux/of_gpio.h>
#include <linux/of_mdio.h>
#include <linux/of_net.h>
#include <linux/regulator/consumer.h>
#include <linux/if_vlan.h>
#include <linux/pinctrl/consumer.h>
+#include <linux/gpio/consumer.h>
#include <linux/prefetch.h>
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>
@@ -1987,47 +1987,74 @@ static int fec_enet_mdio_wait(struct fec_enet_private *fep)
return ret;
}
-static int fec_enet_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
+static int fec_enet_mdio_read_c22(struct mii_bus *bus, int mii_id, int regnum)
{
struct fec_enet_private *fep = bus->priv;
struct device *dev = &fep->pdev->dev;
int ret = 0, frame_start, frame_addr, frame_op;
- bool is_c45 = !!(regnum & MII_ADDR_C45);
ret = pm_runtime_resume_and_get(dev);
if (ret < 0)
return ret;
- if (is_c45) {
- frame_start = FEC_MMFR_ST_C45;
+ /* C22 read */
+ frame_op = FEC_MMFR_OP_READ;
+ frame_start = FEC_MMFR_ST;
+ frame_addr = regnum;
- /* write address */
- frame_addr = (regnum >> 16);
- writel(frame_start | FEC_MMFR_OP_ADDR_WRITE |
- FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(frame_addr) |
- FEC_MMFR_TA | (regnum & 0xFFFF),
- fep->hwp + FEC_MII_DATA);
+ /* start a read op */
+ writel(frame_start | frame_op |
+ FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(frame_addr) |
+ FEC_MMFR_TA, fep->hwp + FEC_MII_DATA);
- /* wait for end of transfer */
- ret = fec_enet_mdio_wait(fep);
- if (ret) {
- netdev_err(fep->netdev, "MDIO address write timeout\n");
- goto out;
- }
+ /* wait for end of transfer */
+ ret = fec_enet_mdio_wait(fep);
+ if (ret) {
+ netdev_err(fep->netdev, "MDIO read timeout\n");
+ goto out;
+ }
- frame_op = FEC_MMFR_OP_READ_C45;
+ ret = FEC_MMFR_DATA(readl(fep->hwp + FEC_MII_DATA));
- } else {
- /* C22 read */
- frame_op = FEC_MMFR_OP_READ;
- frame_start = FEC_MMFR_ST;
- frame_addr = regnum;
+out:
+ pm_runtime_mark_last_busy(dev);
+ pm_runtime_put_autosuspend(dev);
+
+ return ret;
+}
+
+static int fec_enet_mdio_read_c45(struct mii_bus *bus, int mii_id,
+ int devad, int regnum)
+{
+ struct fec_enet_private *fep = bus->priv;
+ struct device *dev = &fep->pdev->dev;
+ int ret = 0, frame_start, frame_op;
+
+ ret = pm_runtime_resume_and_get(dev);
+ if (ret < 0)
+ return ret;
+
+ frame_start = FEC_MMFR_ST_C45;
+
+ /* write address */
+ writel(frame_start | FEC_MMFR_OP_ADDR_WRITE |
+ FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(devad) |
+ FEC_MMFR_TA | (regnum & 0xFFFF),
+ fep->hwp + FEC_MII_DATA);
+
+ /* wait for end of transfer */
+ ret = fec_enet_mdio_wait(fep);
+ if (ret) {
+ netdev_err(fep->netdev, "MDIO address write timeout\n");
+ goto out;
}
+ frame_op = FEC_MMFR_OP_READ_C45;
+
/* start a read op */
writel(frame_start | frame_op |
- FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(frame_addr) |
- FEC_MMFR_TA, fep->hwp + FEC_MII_DATA);
+ FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(devad) |
+ FEC_MMFR_TA, fep->hwp + FEC_MII_DATA);
/* wait for end of transfer */
ret = fec_enet_mdio_wait(fep);
@@ -2045,45 +2072,69 @@ out:
return ret;
}
-static int fec_enet_mdio_write(struct mii_bus *bus, int mii_id, int regnum,
- u16 value)
+static int fec_enet_mdio_write_c22(struct mii_bus *bus, int mii_id, int regnum,
+ u16 value)
{
struct fec_enet_private *fep = bus->priv;
struct device *dev = &fep->pdev->dev;
int ret, frame_start, frame_addr;
- bool is_c45 = !!(regnum & MII_ADDR_C45);
ret = pm_runtime_resume_and_get(dev);
if (ret < 0)
return ret;
- if (is_c45) {
- frame_start = FEC_MMFR_ST_C45;
+ /* C22 write */
+ frame_start = FEC_MMFR_ST;
+ frame_addr = regnum;
+
+ /* start a write op */
+ writel(frame_start | FEC_MMFR_OP_WRITE |
+ FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(frame_addr) |
+ FEC_MMFR_TA | FEC_MMFR_DATA(value),
+ fep->hwp + FEC_MII_DATA);
+
+ /* wait for end of transfer */
+ ret = fec_enet_mdio_wait(fep);
+ if (ret)
+ netdev_err(fep->netdev, "MDIO write timeout\n");
+
+ pm_runtime_mark_last_busy(dev);
+ pm_runtime_put_autosuspend(dev);
+
+ return ret;
+}
+
+static int fec_enet_mdio_write_c45(struct mii_bus *bus, int mii_id,
+ int devad, int regnum, u16 value)
+{
+ struct fec_enet_private *fep = bus->priv;
+ struct device *dev = &fep->pdev->dev;
+ int ret, frame_start;
+
+ ret = pm_runtime_resume_and_get(dev);
+ if (ret < 0)
+ return ret;
+
+ frame_start = FEC_MMFR_ST_C45;
- /* write address */
- frame_addr = (regnum >> 16);
- writel(frame_start | FEC_MMFR_OP_ADDR_WRITE |
- FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(frame_addr) |
- FEC_MMFR_TA | (regnum & 0xFFFF),
- fep->hwp + FEC_MII_DATA);
+ /* write address */
+ writel(frame_start | FEC_MMFR_OP_ADDR_WRITE |
+ FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(devad) |
+ FEC_MMFR_TA | (regnum & 0xFFFF),
+ fep->hwp + FEC_MII_DATA);
- /* wait for end of transfer */
- ret = fec_enet_mdio_wait(fep);
- if (ret) {
- netdev_err(fep->netdev, "MDIO address write timeout\n");
- goto out;
- }
- } else {
- /* C22 write */
- frame_start = FEC_MMFR_ST;
- frame_addr = regnum;
+ /* wait for end of transfer */
+ ret = fec_enet_mdio_wait(fep);
+ if (ret) {
+ netdev_err(fep->netdev, "MDIO address write timeout\n");
+ goto out;
}
/* start a write op */
writel(frame_start | FEC_MMFR_OP_WRITE |
- FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(frame_addr) |
- FEC_MMFR_TA | FEC_MMFR_DATA(value),
- fep->hwp + FEC_MII_DATA);
+ FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(devad) |
+ FEC_MMFR_TA | FEC_MMFR_DATA(value),
+ fep->hwp + FEC_MII_DATA);
/* wait for end of transfer */
ret = fec_enet_mdio_wait(fep);
@@ -2381,8 +2432,10 @@ static int fec_enet_mii_init(struct platform_device *pdev)
}
fep->mii_bus->name = "fec_enet_mii_bus";
- fep->mii_bus->read = fec_enet_mdio_read;
- fep->mii_bus->write = fec_enet_mdio_write;
+ fep->mii_bus->read = fec_enet_mdio_read_c22;
+ fep->mii_bus->write = fec_enet_mdio_write_c22;
+ fep->mii_bus->read_c45 = fec_enet_mdio_read_c45;
+ fep->mii_bus->write_c45 = fec_enet_mdio_write_c45;
snprintf(fep->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x",
pdev->name, fep->dev_id + 1);
fep->mii_bus->priv = fep;
@@ -3982,10 +4035,10 @@ free_queue_mem:
#ifdef CONFIG_OF
static int fec_reset_phy(struct platform_device *pdev)
{
- int err, phy_reset;
- bool active_high = false;
+ struct gpio_desc *phy_reset;
int msec = 1, phy_post_delay = 0;
struct device_node *np = pdev->dev.of_node;
+ int err;
if (!np)
return 0;
@@ -3995,33 +4048,26 @@ static int fec_reset_phy(struct platform_device *pdev)
if (!err && msec > 1000)
msec = 1;
- phy_reset = of_get_named_gpio(np, "phy-reset-gpios", 0);
- if (phy_reset == -EPROBE_DEFER)
- return phy_reset;
- else if (!gpio_is_valid(phy_reset))
- return 0;
-
err = of_property_read_u32(np, "phy-reset-post-delay", &phy_post_delay);
/* valid reset duration should be less than 1s */
if (!err && phy_post_delay > 1000)
return -EINVAL;
- active_high = of_property_read_bool(np, "phy-reset-active-high");
+ phy_reset = devm_gpiod_get_optional(&pdev->dev, "phy-reset",
+ GPIOD_OUT_HIGH);
+ if (IS_ERR(phy_reset))
+ return dev_err_probe(&pdev->dev, PTR_ERR(phy_reset),
+ "failed to get phy-reset-gpios\n");
- err = devm_gpio_request_one(&pdev->dev, phy_reset,
- active_high ? GPIOF_OUT_INIT_HIGH : GPIOF_OUT_INIT_LOW,
- "phy-reset");
- if (err) {
- dev_err(&pdev->dev, "failed to get phy-reset-gpios: %d\n", err);
- return err;
- }
+ if (!phy_reset)
+ return 0;
if (msec > 20)
msleep(msec);
else
usleep_range(msec * 1000, msec * 1000 + 1000);
- gpio_set_value_cansleep(phy_reset, !active_high);
+ gpiod_set_value_cansleep(phy_reset, 0);
if (!phy_post_delay)
return 0;
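
The fec_reset_phy() conversion above is the standard gpiolib migration: devm_gpiod_get_optional() replaces of_get_named_gpio() plus devm_gpio_request_one(), and the reset polarity now comes from the GPIO flags in the device tree instead of a separate phy-reset-active-high property. A minimal consumer sketch of the same pattern (function name hypothetical):

#include <linux/delay.h>
#include <linux/gpio/consumer.h>

static int phy_reset_sketch(struct device *dev, unsigned int assert_ms)
{
	struct gpio_desc *reset;

	/* GPIOD_OUT_HIGH asserts the line at request time, with active-high
	 * vs. active-low resolved from the DT phandle flags
	 */
	reset = devm_gpiod_get_optional(dev, "phy-reset", GPIOD_OUT_HIGH);
	if (IS_ERR(reset))
		return dev_err_probe(dev, PTR_ERR(reset),
				     "failed to get phy-reset-gpios\n");
	if (!reset)
		return 0;	/* no reset line described in DT */

	msleep(assert_ms);
	gpiod_set_value_cansleep(reset, 0);	/* deassert */

	return 0;
}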
diff --git a/drivers/net/ethernet/freescale/xgmac_mdio.c b/drivers/net/ethernet/freescale/xgmac_mdio.c
index d7d39a58cd80..a13b4ba4d6e1 100644
--- a/drivers/net/ethernet/freescale/xgmac_mdio.c
+++ b/drivers/net/ethernet/freescale/xgmac_mdio.c
@@ -128,30 +128,49 @@ static int xgmac_wait_until_done(struct device *dev,
return 0;
}
-/*
- * Write value to the PHY for this device to the register at regnum,waiting
- * until the write is done before it returns. All PHY configuration has to be
- * done through the TSEC1 MIIM regs.
- */
-static int xgmac_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 value)
+static int xgmac_mdio_write_c22(struct mii_bus *bus, int phy_id, int regnum,
+ u16 value)
{
struct mdio_fsl_priv *priv = (struct mdio_fsl_priv *)bus->priv;
struct tgec_mdio_controller __iomem *regs = priv->mdio_base;
- uint16_t dev_addr;
+ bool endian = priv->is_little_endian;
+ u16 dev_addr = regnum & 0x1f;
u32 mdio_ctl, mdio_stat;
int ret;
+
+ mdio_stat = xgmac_read32(&regs->mdio_stat, endian);
+ mdio_stat &= ~MDIO_STAT_ENC;
+ xgmac_write32(mdio_stat, &regs->mdio_stat, endian);
+
+ ret = xgmac_wait_until_free(&bus->dev, regs, endian);
+ if (ret)
+ return ret;
+
+ /* Set the port and dev addr */
+ mdio_ctl = MDIO_CTL_PORT_ADDR(phy_id) | MDIO_CTL_DEV_ADDR(dev_addr);
+ xgmac_write32(mdio_ctl, &regs->mdio_ctl, endian);
+
+ /* Write the value to the register */
+ xgmac_write32(MDIO_DATA(value), &regs->mdio_data, endian);
+
+ ret = xgmac_wait_until_done(&bus->dev, regs, endian);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+static int xgmac_mdio_write_c45(struct mii_bus *bus, int phy_id, int dev_addr,
+ int regnum, u16 value)
+{
+ struct mdio_fsl_priv *priv = (struct mdio_fsl_priv *)bus->priv;
+ struct tgec_mdio_controller __iomem *regs = priv->mdio_base;
bool endian = priv->is_little_endian;
+ u32 mdio_ctl, mdio_stat;
+ int ret;
mdio_stat = xgmac_read32(&regs->mdio_stat, endian);
- if (regnum & MII_ADDR_C45) {
- /* Clause 45 (ie 10G) */
- dev_addr = (regnum >> 16) & 0x1f;
- mdio_stat |= MDIO_STAT_ENC;
- } else {
- /* Clause 22 (ie 1G) */
- dev_addr = regnum & 0x1f;
- mdio_stat &= ~MDIO_STAT_ENC;
- }
+ mdio_stat |= MDIO_STAT_ENC;
xgmac_write32(mdio_stat, &regs->mdio_stat, endian);
@@ -164,13 +183,11 @@ static int xgmac_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 val
xgmac_write32(mdio_ctl, &regs->mdio_ctl, endian);
/* Set the register address */
- if (regnum & MII_ADDR_C45) {
- xgmac_write32(regnum & 0xffff, &regs->mdio_addr, endian);
+ xgmac_write32(regnum & 0xffff, &regs->mdio_addr, endian);
- ret = xgmac_wait_until_free(&bus->dev, regs, endian);
- if (ret)
- return ret;
- }
+ ret = xgmac_wait_until_free(&bus->dev, regs, endian);
+ if (ret)
+ return ret;
/* Write the value to the register */
xgmac_write32(MDIO_DATA(value), &regs->mdio_data, endian);
@@ -182,31 +199,82 @@ static int xgmac_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 val
return 0;
}
-/*
- * Reads from register regnum in the PHY for device dev, returning the value.
+/* Reads from register regnum in the PHY for device dev, returning the value.
* Clears miimcom first. All PHY configuration has to be done through the
* TSEC1 MIIM regs.
*/
-static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
+static int xgmac_mdio_read_c22(struct mii_bus *bus, int phy_id, int regnum)
{
struct mdio_fsl_priv *priv = (struct mdio_fsl_priv *)bus->priv;
struct tgec_mdio_controller __iomem *regs = priv->mdio_base;
+ bool endian = priv->is_little_endian;
+ u16 dev_addr = regnum & 0x1f;
unsigned long flags;
- uint16_t dev_addr;
uint32_t mdio_stat;
uint32_t mdio_ctl;
int ret;
- bool endian = priv->is_little_endian;
mdio_stat = xgmac_read32(&regs->mdio_stat, endian);
- if (regnum & MII_ADDR_C45) {
- dev_addr = (regnum >> 16) & 0x1f;
- mdio_stat |= MDIO_STAT_ENC;
+ mdio_stat &= ~MDIO_STAT_ENC;
+ xgmac_write32(mdio_stat, &regs->mdio_stat, endian);
+
+ ret = xgmac_wait_until_free(&bus->dev, regs, endian);
+ if (ret)
+ return ret;
+
+ /* Set the Port and Device Addrs */
+ mdio_ctl = MDIO_CTL_PORT_ADDR(phy_id) | MDIO_CTL_DEV_ADDR(dev_addr);
+ xgmac_write32(mdio_ctl, &regs->mdio_ctl, endian);
+
+ if (priv->has_a009885)
+ /* Once the operation completes, i.e. MDIO_STAT_BSY clears, we
+ * must read back the data register within 16 MDC cycles.
+ */
+ local_irq_save(flags);
+
+ /* Initiate the read */
+ xgmac_write32(mdio_ctl | MDIO_CTL_READ, &regs->mdio_ctl, endian);
+
+ ret = xgmac_wait_until_done(&bus->dev, regs, endian);
+ if (ret)
+ goto irq_restore;
+
+ /* Return all Fs if nothing was there */
+ if ((xgmac_read32(&regs->mdio_stat, endian) & MDIO_STAT_RD_ER) &&
+ !priv->has_a011043) {
+ dev_dbg(&bus->dev,
+ "Error while reading PHY%d reg at %d.%d\n",
+ phy_id, dev_addr, regnum);
+ ret = 0xffff;
} else {
- dev_addr = regnum & 0x1f;
- mdio_stat &= ~MDIO_STAT_ENC;
+ ret = xgmac_read32(&regs->mdio_data, endian) & 0xffff;
+ dev_dbg(&bus->dev, "read %04x\n", ret);
}
+irq_restore:
+ if (priv->has_a009885)
+ local_irq_restore(flags);
+
+ return ret;
+}
+
+/* Reads from register regnum in the PHY for device dev, returning the value.
+ * Clears miimcom first. All PHY configuration has to be done through the
+ * TSEC1 MIIM regs.
+ */
+static int xgmac_mdio_read_c45(struct mii_bus *bus, int phy_id, int dev_addr,
+ int regnum)
+{
+ struct mdio_fsl_priv *priv = (struct mdio_fsl_priv *)bus->priv;
+ struct tgec_mdio_controller __iomem *regs = priv->mdio_base;
+ bool endian = priv->is_little_endian;
+ u32 mdio_stat, mdio_ctl;
+ unsigned long flags;
+ int ret;
+
+ mdio_stat = xgmac_read32(&regs->mdio_stat, endian);
+ mdio_stat |= MDIO_STAT_ENC;
+
xgmac_write32(mdio_stat, &regs->mdio_stat, endian);
ret = xgmac_wait_until_free(&bus->dev, regs, endian);
@@ -218,13 +286,11 @@ static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
xgmac_write32(mdio_ctl, &regs->mdio_ctl, endian);
/* Set the register address */
- if (regnum & MII_ADDR_C45) {
- xgmac_write32(regnum & 0xffff, &regs->mdio_addr, endian);
+ xgmac_write32(regnum & 0xffff, &regs->mdio_addr, endian);
- ret = xgmac_wait_until_free(&bus->dev, regs, endian);
- if (ret)
- return ret;
- }
+ ret = xgmac_wait_until_free(&bus->dev, regs, endian);
+ if (ret)
+ return ret;
if (priv->has_a009885)
/* Once the operation completes, i.e. MDIO_STAT_BSY clears, we
@@ -326,10 +392,11 @@ static int xgmac_mdio_probe(struct platform_device *pdev)
return -ENOMEM;
bus->name = "Freescale XGMAC MDIO Bus";
- bus->read = xgmac_mdio_read;
- bus->write = xgmac_mdio_write;
+ bus->read = xgmac_mdio_read_c22;
+ bus->write = xgmac_mdio_write_c22;
+ bus->read_c45 = xgmac_mdio_read_c45;
+ bus->write_c45 = xgmac_mdio_write_c45;
bus->parent = &pdev->dev;
- bus->probe_capabilities = MDIOBUS_C22_C45;
snprintf(bus->id, MII_BUS_ID_SIZE, "%pa", &res->start);
priv = bus->priv;
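
Note how the c22/c45 split preserves the erratum A-009885 workaround in both read paths: once MDIO_STAT_BSY clears, the data register must be read back within 16 MDC cycles, so the completion poll and the data read are kept in one interrupt-free window. The shape of the guard, condensed from the functions above (variables as in those functions):

	unsigned long flags;

	if (priv->has_a009885)
		local_irq_save(flags); /* keep the 16-MDC-cycle window intact */

	xgmac_write32(mdio_ctl | MDIO_CTL_READ, &regs->mdio_ctl, endian);
	ret = xgmac_wait_until_done(&bus->dev, regs, endian);
	if (!ret)
		ret = xgmac_read32(&regs->mdio_data, endian) & 0xffff;

	if (priv->has_a009885)
		local_irq_restore(flags);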
diff --git a/drivers/net/ethernet/fungible/funeth/Kconfig b/drivers/net/ethernet/fungible/funeth/Kconfig
index c72ad9386400..e742e7663449 100644
--- a/drivers/net/ethernet/fungible/funeth/Kconfig
+++ b/drivers/net/ethernet/fungible/funeth/Kconfig
@@ -5,7 +5,7 @@
config FUN_ETH
tristate "Fungible Ethernet device driver"
- depends on PCI && PCI_MSI
+ depends on PCI_MSI
depends on TLS && TLS_DEVICE || TLS_DEVICE=n
select NET_DEVLINK
select FUN_CORE
diff --git a/drivers/net/ethernet/fungible/funeth/funeth_main.c b/drivers/net/ethernet/fungible/funeth/funeth_main.c
index b4cce30e526a..df86770731ad 100644
--- a/drivers/net/ethernet/fungible/funeth/funeth_main.c
+++ b/drivers/net/ethernet/fungible/funeth/funeth_main.c
@@ -1160,6 +1160,11 @@ static int fun_xdp_setup(struct net_device *dev, struct netdev_bpf *xdp)
WRITE_ONCE(rxqs[i]->xdp_prog, prog);
}
+ if (prog)
+ xdp_features_set_redirect_target(dev, true);
+ else
+ xdp_features_clear_redirect_target(dev);
+
dev->max_mtu = prog ? XDP_MAX_MTU : FUN_MAX_MTU;
old_prog = xchg(&fp->xdp_prog, prog);
if (old_prog)
@@ -1765,6 +1770,7 @@ static int fun_create_netdev(struct fun_ethdev *ed, unsigned int portid)
netdev->vlan_features = netdev->features & VLAN_FEAT;
netdev->mpls_features = netdev->vlan_features;
netdev->hw_enc_features = netdev->hw_features;
+ netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;
netdev->min_mtu = ETH_MIN_MTU;
netdev->max_mtu = FUN_MAX_MTU;
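
funeth advertises its XDP capabilities through the new netdev->xdp_features bitmap: the base and redirect flags are set once at netdev creation, while the redirect-target bit is toggled at attach/detach time, since the driver can only serve as an XDP_REDIRECT destination while a program is loaded. The pattern, reduced to its two halves (the helpers are the ones introduced by this series; the bool argument indicates scatter-gather support for ndo_xdp_xmit):

	/* at netdev creation */
	netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;

	/* at program attach/detach */
	if (prog)
		xdp_features_set_redirect_target(netdev, true);
	else
		xdp_features_clear_redirect_target(netdev);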
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 5b40f9c53196..07111c241e0e 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -327,7 +327,6 @@ static int gve_napi_poll_dqo(struct napi_struct *napi, int budget)
static int gve_alloc_notify_blocks(struct gve_priv *priv)
{
int num_vecs_requested = priv->num_ntfy_blks + 1;
- char *name = priv->dev->name;
unsigned int active_cpus;
int vecs_enabled;
int i, j;
@@ -371,8 +370,8 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
active_cpus = min_t(int, priv->num_ntfy_blks / 2, num_online_cpus());
/* Setup Management Vector - the last vector */
- snprintf(priv->mgmt_msix_name, sizeof(priv->mgmt_msix_name), "%s-mgmnt",
- name);
+ snprintf(priv->mgmt_msix_name, sizeof(priv->mgmt_msix_name), "gve-mgmnt@pci:%s",
+ pci_name(priv->pdev));
err = request_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector,
gve_mgmnt_intr, 0, priv->mgmt_msix_name, priv);
if (err) {
@@ -401,8 +400,8 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
struct gve_notify_block *block = &priv->ntfy_blocks[i];
int msix_idx = i;
- snprintf(block->name, sizeof(block->name), "%s-ntfy-block.%d",
- name, i);
+ snprintf(block->name, sizeof(block->name), "gve-ntfy-blk%d@pci:%s",
+ i, pci_name(priv->pdev));
block->priv = priv;
err = request_irq(priv->msix_vectors[msix_idx].vector,
gve_is_gqi(priv) ? gve_intr : gve_intr_dqo,
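
The gve rename keys IRQ names to the immutable PCI address instead of the netdev name, which userspace can change at any time (e.g. udev renames), leaving stale entries in /proc/interrupts. The resulting convention:

	/* "gve-ntfy-blk3@pci:0000:65:00.0" rather than "eth0-ntfy-block.3" */
	snprintf(block->name, sizeof(block->name), "gve-ntfy-blk%d@pci:%s",
		 i, pci_name(priv->pdev));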
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c
index 740850b64aff..5df19c604d09 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c
+++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c
@@ -554,11 +554,11 @@ static phy_interface_t hns_mac_get_phy_if_acpi(struct hns_mac_cb *mac_cb)
argv4.package.count = 1;
argv4.package.elements = &obj_args;
- obj = acpi_evaluate_dsm(ACPI_HANDLE(mac_cb->dev),
- &hns_dsaf_acpi_dsm_guid, 0,
- HNS_OP_GET_PORT_TYPE_FUNC, &argv4);
-
- if (!obj || obj->type != ACPI_TYPE_INTEGER)
+ obj = acpi_evaluate_dsm_typed(ACPI_HANDLE(mac_cb->dev),
+ &hns_dsaf_acpi_dsm_guid, 0,
+ HNS_OP_GET_PORT_TYPE_FUNC, &argv4,
+ ACPI_TYPE_INTEGER);
+ if (!obj)
return phy_if;
phy_if = obj->integer.value ?
@@ -601,11 +601,11 @@ static int hns_mac_get_sfp_prsnt_acpi(struct hns_mac_cb *mac_cb, int *sfp_prsnt)
argv4.package.count = 1;
argv4.package.elements = &obj_args;
- obj = acpi_evaluate_dsm(ACPI_HANDLE(mac_cb->dev),
- &hns_dsaf_acpi_dsm_guid, 0,
- HNS_OP_GET_SFP_STAT_FUNC, &argv4);
-
- if (!obj || obj->type != ACPI_TYPE_INTEGER)
+ obj = acpi_evaluate_dsm_typed(ACPI_HANDLE(mac_cb->dev),
+ &hns_dsaf_acpi_dsm_guid, 0,
+ HNS_OP_GET_SFP_STAT_FUNC, &argv4,
+ ACPI_TYPE_INTEGER);
+ if (!obj)
return -ENODEV;
*sfp_prsnt = obj->integer.value;
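
acpi_evaluate_dsm_typed() folds the result-type check into the evaluation itself, returning NULL when either the _DSM call fails or the returned object is not of the requested type, so each call site shrinks to a single NULL test. A sketch of the call shape (handle, guid, func and the consumer are placeholders):

#include <linux/acpi.h>

	union acpi_object *obj;

	obj = acpi_evaluate_dsm_typed(handle, &guid, 0 /* rev */, func,
				      &argv4, ACPI_TYPE_INTEGER);
	if (!obj)
		return -ENODEV;	/* evaluation failed or wrong result type */

	use_integer(obj->integer.value);	/* hypothetical consumer */
	ACPI_FREE(obj);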
diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
index 17137de9338c..40f4306449eb 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
@@ -32,6 +32,7 @@
#include <linux/pkt_sched.h>
#include <linux/types.h>
#include <net/pkt_cls.h>
+#include <net/pkt_sched.h>
#define HNAE3_MOD_VERSION "1.0"
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index b4c4fb873568..25be7f8ac7cd 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -20,6 +20,7 @@
#include <net/gro.h>
#include <net/ip6_checksum.h>
#include <net/pkt_cls.h>
+#include <net/pkt_sched.h>
#include <net/tcp.h>
#include <net/vxlan.h>
#include <net/geneve.h>
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
index 142415c84c6b..a0b46e7d863e 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
@@ -2,6 +2,7 @@
/* Copyright (c) 2018-2019 Hisilicon Limited. */
#include <linux/device.h>
+#include <linux/sched/clock.h>
#include "hclge_debugfs.h"
#include "hclge_err.h"
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_devlink.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_devlink.c
index 3d3b69605423..9a939c0b217f 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_devlink.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_devlink.c
@@ -114,7 +114,6 @@ int hclge_devlink_init(struct hclge_dev *hdev)
priv->hdev = hdev;
hdev->devlink = devlink;
- devlink_set_features(devlink, DEVLINK_F_RELOAD);
devlink_register(devlink);
return 0;
}
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
index 6efd768cc07c..3f35227ef1fa 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
@@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0+
/* Copyright (c) 2016-2017 Hisilicon Limited. */
+#include <linux/sched/clock.h>
+
#include "hclge_err.h"
static const struct hclge_hw_error hclge_imp_tcm_ecc_int[] = {
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_devlink.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_devlink.c
index a6c3c5e8f0ab..1b535142c65a 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_devlink.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_devlink.c
@@ -116,7 +116,6 @@ int hclgevf_devlink_init(struct hclgevf_dev *hdev)
priv->hdev = hdev;
hdev->devlink = devlink;
- devlink_set_features(devlink, DEVLINK_F_RELOAD);
devlink_register(devlink);
return 0;
}
diff --git a/drivers/net/ethernet/hisilicon/hns_mdio.c b/drivers/net/ethernet/hisilicon/hns_mdio.c
index c2ae1b4f9a5f..9232caaf0bdc 100644
--- a/drivers/net/ethernet/hisilicon/hns_mdio.c
+++ b/drivers/net/ethernet/hisilicon/hns_mdio.c
@@ -206,7 +206,7 @@ static void hns_mdio_cmd_write(struct hns_mdio_device *mdio_dev,
}
/**
- * hns_mdio_write - access phy register
+ * hns_mdio_write_c22 - access phy register
* @bus: mdio bus
* @phy_id: phy id
* @regnum: register num
@@ -214,21 +214,19 @@ static void hns_mdio_cmd_write(struct hns_mdio_device *mdio_dev,
*
* Return 0 on success, negative on failure
*/
-static int hns_mdio_write(struct mii_bus *bus,
- int phy_id, int regnum, u16 data)
+static int hns_mdio_write_c22(struct mii_bus *bus,
+ int phy_id, int regnum, u16 data)
{
- int ret;
struct hns_mdio_device *mdio_dev = (struct hns_mdio_device *)bus->priv;
- u8 devad = ((regnum >> 16) & 0x1f);
- u8 is_c45 = !!(regnum & MII_ADDR_C45);
u16 reg = (u16)(regnum & 0xffff);
- u8 op;
u16 cmd_reg_cfg;
+ int ret;
+ u8 op;
dev_dbg(&bus->dev, "mdio write %s,base is %p\n",
bus->id, mdio_dev->vbase);
- dev_dbg(&bus->dev, "phy id=%d, is_c45=%d, devad=%d, reg=%#x, write data=%d\n",
- phy_id, is_c45, devad, reg, data);
+ dev_dbg(&bus->dev, "phy id=%d, reg=%#x, write data=%d\n",
+ phy_id, reg, data);
/* wait for ready */
ret = hns_mdio_wait_ready(bus);
@@ -237,58 +235,91 @@ static int hns_mdio_write(struct mii_bus *bus,
return ret;
}
- if (!is_c45) {
- cmd_reg_cfg = reg;
- op = MDIO_C22_WRITE;
- } else {
- /* config the cmd-reg to write addr*/
- MDIO_SET_REG_FIELD(mdio_dev, MDIO_ADDR_REG, MDIO_ADDR_DATA_M,
- MDIO_ADDR_DATA_S, reg);
+ cmd_reg_cfg = reg;
+ op = MDIO_C22_WRITE;
- hns_mdio_cmd_write(mdio_dev, is_c45,
- MDIO_C45_WRITE_ADDR, phy_id, devad);
+ MDIO_SET_REG_FIELD(mdio_dev, MDIO_WDATA_REG, MDIO_WDATA_DATA_M,
+ MDIO_WDATA_DATA_S, data);
- /* check for read or write opt is finished */
- ret = hns_mdio_wait_ready(bus);
- if (ret) {
- dev_err(&bus->dev, "MDIO bus is busy\n");
- return ret;
- }
+ hns_mdio_cmd_write(mdio_dev, false, op, phy_id, cmd_reg_cfg);
+
+ return 0;
+}
+
+/**
+ * hns_mdio_write_c45 - access phy register
+ * @bus: mdio bus
+ * @phy_id: phy id
+ * @devad: device address to write to
+ * @regnum: register num
+ * @data: register value
+ *
+ * Return 0 on success, negative on failure
+ */
+static int hns_mdio_write_c45(struct mii_bus *bus, int phy_id, int devad,
+ int regnum, u16 data)
+{
+ struct hns_mdio_device *mdio_dev = (struct hns_mdio_device *)bus->priv;
+ u16 reg = (u16)(regnum & 0xffff);
+ u16 cmd_reg_cfg;
+ int ret;
+ u8 op;
+
+ dev_dbg(&bus->dev, "mdio write %s,base is %p\n",
+ bus->id, mdio_dev->vbase);
+ dev_dbg(&bus->dev, "phy id=%d, devad=%d, reg=%#x, write data=%d\n",
+ phy_id, devad, reg, data);
+
+ /* wait for ready */
+ ret = hns_mdio_wait_ready(bus);
+ if (ret) {
+ dev_err(&bus->dev, "MDIO bus is busy\n");
+ return ret;
+ }
+
+ /* config the cmd-reg to write addr */
+ MDIO_SET_REG_FIELD(mdio_dev, MDIO_ADDR_REG, MDIO_ADDR_DATA_M,
+ MDIO_ADDR_DATA_S, reg);
- /* config the data needed writing */
- cmd_reg_cfg = devad;
- op = MDIO_C45_WRITE_DATA;
+ hns_mdio_cmd_write(mdio_dev, true, MDIO_C45_WRITE_ADDR, phy_id, devad);
+
+ /* check for read or write opt is finished */
+ ret = hns_mdio_wait_ready(bus);
+ if (ret) {
+ dev_err(&bus->dev, "MDIO bus is busy\n");
+ return ret;
}
+ /* config the data needed writing */
+ cmd_reg_cfg = devad;
+ op = MDIO_C45_WRITE_DATA;
+
MDIO_SET_REG_FIELD(mdio_dev, MDIO_WDATA_REG, MDIO_WDATA_DATA_M,
MDIO_WDATA_DATA_S, data);
- hns_mdio_cmd_write(mdio_dev, is_c45, op, phy_id, cmd_reg_cfg);
+ hns_mdio_cmd_write(mdio_dev, true, op, phy_id, cmd_reg_cfg);
return 0;
}
/**
- * hns_mdio_read - access phy register
+ * hns_mdio_read_c22 - access phy register
* @bus: mdio bus
* @phy_id: phy id
* @regnum: register num
*
* Return phy register value
*/
-static int hns_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
+static int hns_mdio_read_c22(struct mii_bus *bus, int phy_id, int regnum)
{
- int ret;
- u16 reg_val;
- u8 devad = ((regnum >> 16) & 0x1f);
- u8 is_c45 = !!(regnum & MII_ADDR_C45);
- u16 reg = (u16)(regnum & 0xffff);
struct hns_mdio_device *mdio_dev = (struct hns_mdio_device *)bus->priv;
+ u16 reg = (u16)(regnum & 0xffff);
+ u16 reg_val;
+ int ret;
dev_dbg(&bus->dev, "mdio read %s,base is %p\n",
bus->id, mdio_dev->vbase);
- dev_dbg(&bus->dev, "phy id=%d, is_c45=%d, devad=%d, reg=%#x!\n",
- phy_id, is_c45, devad, reg);
+ dev_dbg(&bus->dev, "phy id=%d, reg=%#x!\n", phy_id, reg);
/* Step 1: wait for ready */
ret = hns_mdio_wait_ready(bus);
@@ -297,29 +328,74 @@ static int hns_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
return ret;
}
- if (!is_c45) {
- hns_mdio_cmd_write(mdio_dev, is_c45,
- MDIO_C22_READ, phy_id, reg);
- } else {
- MDIO_SET_REG_FIELD(mdio_dev, MDIO_ADDR_REG, MDIO_ADDR_DATA_M,
- MDIO_ADDR_DATA_S, reg);
+ hns_mdio_cmd_write(mdio_dev, false, MDIO_C22_READ, phy_id, reg);
- /* Step 2; config the cmd-reg to write addr*/
- hns_mdio_cmd_write(mdio_dev, is_c45,
- MDIO_C45_WRITE_ADDR, phy_id, devad);
+ /* Step 2: wait for MDIO_COMMAND_REG's mdio_start == 0, */
+ /* i.e. check that the read or write op has finished */
+ ret = hns_mdio_wait_ready(bus);
+ if (ret) {
+ dev_err(&bus->dev, "MDIO bus is busy\n");
+ return ret;
+ }
- /* Step 3: check for read or write opt is finished */
- ret = hns_mdio_wait_ready(bus);
- if (ret) {
- dev_err(&bus->dev, "MDIO bus is busy\n");
- return ret;
- }
+ reg_val = MDIO_GET_REG_BIT(mdio_dev, MDIO_STA_REG, MDIO_STATE_STA_B);
+ if (reg_val) {
+ dev_err(&bus->dev, " ERROR! MDIO Read failed!\n");
+ return -EBUSY;
+ }
- hns_mdio_cmd_write(mdio_dev, is_c45,
- MDIO_C45_READ, phy_id, devad);
+ /* Step 3: read out the data */
+ reg_val = (u16)MDIO_GET_REG_FIELD(mdio_dev, MDIO_RDATA_REG,
+ MDIO_RDATA_DATA_M, MDIO_RDATA_DATA_S);
+
+ return reg_val;
+}
+
+/**
+ * hns_mdio_read_c45 - access phy register
+ * @bus: mdio bus
+ * @phy_id: phy id
+ * @devad: device address to read
+ * @regnum: register num
+ *
+ * Return phy register value
+ */
+static int hns_mdio_read_c45(struct mii_bus *bus, int phy_id, int devad,
+ int regnum)
+{
+ struct hns_mdio_device *mdio_dev = (struct hns_mdio_device *)bus->priv;
+ u16 reg = (u16)(regnum & 0xffff);
+ u16 reg_val;
+ int ret;
+
+ dev_dbg(&bus->dev, "mdio read %s,base is %p\n",
+ bus->id, mdio_dev->vbase);
+ dev_dbg(&bus->dev, "phy id=%d, devad=%d, reg=%#x!\n",
+ phy_id, devad, reg);
+
+ /* Step 1: wait for ready */
+ ret = hns_mdio_wait_ready(bus);
+ if (ret) {
+ dev_err(&bus->dev, "MDIO bus is busy\n");
+ return ret;
+ }
+
+ MDIO_SET_REG_FIELD(mdio_dev, MDIO_ADDR_REG, MDIO_ADDR_DATA_M,
+ MDIO_ADDR_DATA_S, reg);
+
+ /* Step 2: config the cmd-reg to write the address */
+ hns_mdio_cmd_write(mdio_dev, true, MDIO_C45_WRITE_ADDR, phy_id, devad);
+
+ /* Step 3: check for read or write opt is finished */
+ ret = hns_mdio_wait_ready(bus);
+ if (ret) {
+ dev_err(&bus->dev, "MDIO bus is busy\n");
+ return ret;
}
- /* Step 5: waiting for MDIO_COMMAND_REG's mdio_start==0,*/
+ hns_mdio_cmd_write(mdio_dev, true, MDIO_C45_READ, phy_id, devad);
+
+ /* Step 5: wait for MDIO_COMMAND_REG's mdio_start == 0, */
/* check for read or write opt is finished */
ret = hns_mdio_wait_ready(bus);
if (ret) {
@@ -438,8 +514,10 @@ static int hns_mdio_probe(struct platform_device *pdev)
}
new_bus->name = MDIO_BUS_NAME;
- new_bus->read = hns_mdio_read;
- new_bus->write = hns_mdio_write;
+ new_bus->read = hns_mdio_read_c22;
+ new_bus->write = hns_mdio_write_c22;
+ new_bus->read_c45 = hns_mdio_read_c45;
+ new_bus->write_c45 = hns_mdio_write_c45;
new_bus->reset = hns_mdio_reset;
new_bus->priv = mdio_dev;
new_bus->parent = &pdev->dev;
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index e19a6bb3f444..146ca1d8031b 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -250,10 +250,11 @@ static void ibmvnic_set_affinity(struct ibmvnic_adapter *adapter)
struct ibmvnic_sub_crq_queue **rxqs = adapter->rx_scrq;
struct ibmvnic_sub_crq_queue **txqs = adapter->tx_scrq;
struct ibmvnic_sub_crq_queue *queue;
- int num_rxqs = adapter->num_active_rx_scrqs;
- int num_txqs = adapter->num_active_tx_scrqs;
+ int num_rxqs = adapter->num_active_rx_scrqs, i_rxqs = 0;
+ int num_txqs = adapter->num_active_tx_scrqs, i_txqs = 0;
int total_queues, stride, stragglers, i;
unsigned int num_cpu, cpu;
+ bool is_rx_queue;
int rc = 0;
netdev_dbg(adapter->netdev, "%s: Setting irq affinity hints", __func__);
@@ -273,14 +274,24 @@ static void ibmvnic_set_affinity(struct ibmvnic_adapter *adapter)
/* next available cpu to assign irq to */
cpu = cpumask_next(-1, cpu_online_mask);
- for (i = 0; i < num_txqs; i++) {
- queue = txqs[i];
+ for (i = 0; i < total_queues; i++) {
+ is_rx_queue = false;
+ /* balance core load by alternating rx and tx assignments
+ * ex: TX0 -> RX0 -> TX1 -> RX1 etc.
+ */
+ if ((i % 2 == 1 && i_rxqs < num_rxqs) || i_txqs == num_txqs) {
+ queue = rxqs[i_rxqs++];
+ is_rx_queue = true;
+ } else {
+ queue = txqs[i_txqs++];
+ }
+
rc = ibmvnic_set_queue_affinity(queue, &cpu, &stragglers,
stride);
if (rc)
goto out;
- if (!queue)
+ if (!queue || is_rx_queue)
continue;
rc = __netif_set_xps_queue(adapter->netdev,
@@ -291,14 +302,6 @@ static void ibmvnic_set_affinity(struct ibmvnic_adapter *adapter)
__func__, i, rc);
}
- for (i = 0; i < num_rxqs; i++) {
- queue = rxqs[i];
- rc = ibmvnic_set_queue_affinity(queue, &cpu, &stragglers,
- stride);
- if (rc)
- goto out;
- }
-
out:
if (rc) {
netdev_warn(adapter->netdev,
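
The ibmvnic change merges the two per-type affinity loops into one walk over total_queues that interleaves assignments, so consecutive CPUs serve a TX/RX pair instead of all TX queues being bound first. The selection rule, isolated (XPS is still programmed for TX queues only):

	/* odd slots take the next RX queue while any remain; everything else
	 * takes the next TX queue: TX0, RX0, TX1, RX1, ... with a fallback
	 * once either side is exhausted
	 */
	for (i = 0; i < num_txqs + num_rxqs; i++) {
		bool is_rx = (i % 2 == 1 && i_rxqs < num_rxqs) ||
			     i_txqs == num_txqs;

		queue = is_rx ? rxqs[i_rxqs++] : txqs[i_txqs++];
		/* ... bind queue's IRQ to the next CPU; set XPS for TX only */
	}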
diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
index 3facb55b7161..a3c84bf05e44 100644
--- a/drivers/net/ethernet/intel/Kconfig
+++ b/drivers/net/ethernet/intel/Kconfig
@@ -337,6 +337,9 @@ config ICE_HWTS
the PTP clock driver precise cross-timestamp ioctl
(PTP_SYS_OFFSET_PRECISE).
+config ICE_GNSS
+ def_bool GNSS = y || GNSS = ICE
+
config FM10K
tristate "Intel(R) FM10000 Ethernet Switch Host Interface Support"
default n
diff --git a/drivers/net/ethernet/intel/e1000e/ethtool.c b/drivers/net/ethernet/intel/e1000e/ethtool.c
index 59e82d131d88..721f86fd5802 100644
--- a/drivers/net/ethernet/intel/e1000e/ethtool.c
+++ b/drivers/net/ethernet/intel/e1000e/ethtool.c
@@ -110,9 +110,9 @@ static const char e1000_gstrings_test[][ETH_GSTRING_LEN] = {
static int e1000_get_link_ksettings(struct net_device *netdev,
struct ethtool_link_ksettings *cmd)
{
+ u32 speed, supported, advertising, lp_advertising, lpa_t;
struct e1000_adapter *adapter = netdev_priv(netdev);
struct e1000_hw *hw = &adapter->hw;
- u32 speed, supported, advertising;
if (hw->phy.media_type == e1000_media_type_copper) {
supported = (SUPPORTED_10baseT_Half |
@@ -120,7 +120,9 @@ static int e1000_get_link_ksettings(struct net_device *netdev,
SUPPORTED_100baseT_Half |
SUPPORTED_100baseT_Full |
SUPPORTED_1000baseT_Full |
+ SUPPORTED_Asym_Pause |
SUPPORTED_Autoneg |
+ SUPPORTED_Pause |
SUPPORTED_TP);
if (hw->phy.type == e1000_phy_ife)
supported &= ~SUPPORTED_1000baseT_Full;
@@ -192,10 +194,16 @@ static int e1000_get_link_ksettings(struct net_device *netdev,
if (hw->phy.media_type != e1000_media_type_copper)
cmd->base.eth_tp_mdix_ctrl = ETH_TP_MDI_INVALID;
+ lpa_t = mii_stat1000_to_ethtool_lpa_t(adapter->phy_regs.stat1000);
+ lp_advertising = lpa_t |
+ mii_lpa_to_ethtool_lpa_t(adapter->phy_regs.lpa);
+
ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported,
supported);
ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.advertising,
advertising);
+ ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.lp_advertising,
+ lp_advertising);
return 0;
}
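
e1000e now also reports link-partner abilities: the cached MII_LPA and MII_STAT1000 values are decoded with the generic helpers from linux/mii.h and converted into the ksettings link-mode bitmap. The conversion step in isolation:

#include <linux/mii.h>

	u32 lp_advertising;

	lp_advertising = mii_lpa_to_ethtool_lpa_t(adapter->phy_regs.lpa) |
			 mii_stat1000_to_ethtool_lpa_t(adapter->phy_regs.stat1000);

	ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.lp_advertising,
						lp_advertising);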
diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
index 04acd1a992fa..e1eb1de88bf9 100644
--- a/drivers/net/ethernet/intel/e1000e/netdev.c
+++ b/drivers/net/ethernet/intel/e1000e/netdev.c
@@ -7418,9 +7418,6 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (err)
goto err_pci_reg;
- /* AER (Advanced Error Reporting) hooks */
- pci_enable_pcie_error_reporting(pdev);
-
pci_set_master(pdev);
/* PCI config space info */
err = pci_save_state(pdev);
@@ -7708,7 +7705,6 @@ err_flashmap:
err_ioremap:
free_netdev(netdev);
err_alloc_etherdev:
- pci_disable_pcie_error_reporting(pdev);
pci_release_mem_regions(pdev);
err_pci_reg:
err_dma:
@@ -7775,9 +7771,6 @@ static void e1000_remove(struct pci_dev *pdev)
free_netdev(netdev);
- /* AER disable */
- pci_disable_pcie_error_reporting(pdev);
-
pci_disable_device(pdev);
}
diff --git a/drivers/net/ethernet/intel/e1000e/phy.c b/drivers/net/ethernet/intel/e1000e/phy.c
index 060b263348ce..08c3d477dd6f 100644
--- a/drivers/net/ethernet/intel/e1000e/phy.c
+++ b/drivers/net/ethernet/intel/e1000e/phy.c
@@ -2,6 +2,7 @@
/* Copyright(c) 1999 - 2018 Intel Corporation. */
#include "e1000.h"
+#include <linux/ethtool.h>
static s32 e1000_wait_autoneg(struct e1000_hw *hw);
static s32 e1000_access_phy_wakeup_reg_bm(struct e1000_hw *hw, u32 offset,
@@ -1011,6 +1012,8 @@ static s32 e1000_phy_setup_autoneg(struct e1000_hw *hw)
*/
mii_autoneg_adv_reg &=
~(ADVERTISE_PAUSE_ASYM | ADVERTISE_PAUSE_CAP);
+ phy->autoneg_advertised &=
+ ~(ADVERTISED_Pause | ADVERTISED_Asym_Pause);
break;
case e1000_fc_rx_pause:
/* Rx Flow control is enabled, and Tx Flow control is
@@ -1024,6 +1027,8 @@ static s32 e1000_phy_setup_autoneg(struct e1000_hw *hw)
*/
mii_autoneg_adv_reg |=
(ADVERTISE_PAUSE_ASYM | ADVERTISE_PAUSE_CAP);
+ phy->autoneg_advertised |=
+ (ADVERTISED_Pause | ADVERTISED_Asym_Pause);
break;
case e1000_fc_tx_pause:
/* Tx Flow control is enabled, and Rx Flow control is
@@ -1031,6 +1036,8 @@ static s32 e1000_phy_setup_autoneg(struct e1000_hw *hw)
*/
mii_autoneg_adv_reg |= ADVERTISE_PAUSE_ASYM;
mii_autoneg_adv_reg &= ~ADVERTISE_PAUSE_CAP;
+ phy->autoneg_advertised |= ADVERTISED_Asym_Pause;
+ phy->autoneg_advertised &= ~ADVERTISED_Pause;
break;
case e1000_fc_full:
/* Flow control (both Rx and Tx) is enabled by a software
@@ -1038,6 +1045,8 @@ static s32 e1000_phy_setup_autoneg(struct e1000_hw *hw)
*/
mii_autoneg_adv_reg |=
(ADVERTISE_PAUSE_ASYM | ADVERTISE_PAUSE_CAP);
+ phy->autoneg_advertised |=
+ (ADVERTISED_Pause | ADVERTISED_Asym_Pause);
break;
default:
e_dbg("Flow control param set incorrectly\n");
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
index b473cb7d7c57..027d721feb18 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
@@ -2127,8 +2127,6 @@ static int fm10k_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_pci_reg;
}
- pci_enable_pcie_error_reporting(pdev);
-
pci_set_master(pdev);
pci_save_state(pdev);
@@ -2227,7 +2225,6 @@ err_sw_init:
err_ioremap:
free_netdev(netdev);
err_alloc_netdev:
- pci_disable_pcie_error_reporting(pdev);
pci_release_mem_regions(pdev);
err_pci_reg:
err_dma:
@@ -2281,8 +2278,6 @@ static void fm10k_remove(struct pci_dev *pdev)
pci_release_mem_regions(pdev);
- pci_disable_pcie_error_reporting(pdev);
-
pci_disable_device(pdev);
}
diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index 3a1c28ca5bb4..60ce4d15d82a 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -33,6 +33,7 @@
#include <linux/net_tstamp.h>
#include <linux/ptp_clock_kernel.h>
#include <net/pkt_cls.h>
+#include <net/pkt_sched.h>
#include <net/tc_act/tc_gact.h>
#include <net/tc_act/tc_mirred.h>
#include <net/udp_tunnel.h>
@@ -1287,9 +1288,9 @@ void i40e_ptp_stop(struct i40e_pf *pf);
int i40e_ptp_alloc_pins(struct i40e_pf *pf);
int i40e_update_adq_vsi_queues(struct i40e_vsi *vsi, int vsi_offset);
int i40e_is_vsi_uplink_mode_veb(struct i40e_vsi *vsi);
-i40e_status i40e_get_partition_bw_setting(struct i40e_pf *pf);
-i40e_status i40e_set_partition_bw_setting(struct i40e_pf *pf);
-i40e_status i40e_commit_partition_bw_setting(struct i40e_pf *pf);
+int i40e_get_partition_bw_setting(struct i40e_pf *pf);
+int i40e_set_partition_bw_setting(struct i40e_pf *pf);
+int i40e_commit_partition_bw_setting(struct i40e_pf *pf);
void i40e_print_link_message(struct i40e_vsi *vsi, bool isup);
void i40e_set_fec_in_flags(u8 fec_cfg, u32 *flags);
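
The i40e.h hunk above is the visible start of a driver-wide cleanup: the private i40e_status return type gives way to plain int, so these helpers report errors the same way as the rest of the kernel. A hypothetical caller (example_commit_bw is made up) showing the resulting convention:

static int example_commit_bw(struct i40e_pf *pf)
{
	int err;

	err = i40e_set_partition_bw_setting(pf);
	if (err)
		return err;	/* propagates like any other int error code */

	return i40e_commit_partition_bw_setting(pf);
}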
diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq.c b/drivers/net/ethernet/intel/i40e/i40e_adminq.c
index 42439f725aa4..86fac8f959bb 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq.c
@@ -47,9 +47,9 @@ static void i40e_adminq_init_regs(struct i40e_hw *hw)
* i40e_alloc_adminq_asq_ring - Allocate Admin Queue send rings
* @hw: pointer to the hardware structure
**/
-static i40e_status i40e_alloc_adminq_asq_ring(struct i40e_hw *hw)
+static int i40e_alloc_adminq_asq_ring(struct i40e_hw *hw)
{
- i40e_status ret_code;
+ int ret_code;
ret_code = i40e_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
i40e_mem_atq_ring,
@@ -74,9 +74,9 @@ static i40e_status i40e_alloc_adminq_asq_ring(struct i40e_hw *hw)
* i40e_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
* @hw: pointer to the hardware structure
**/
-static i40e_status i40e_alloc_adminq_arq_ring(struct i40e_hw *hw)
+static int i40e_alloc_adminq_arq_ring(struct i40e_hw *hw)
{
- i40e_status ret_code;
+ int ret_code;
ret_code = i40e_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
i40e_mem_arq_ring,
@@ -115,11 +115,11 @@ static void i40e_free_adminq_arq(struct i40e_hw *hw)
* i40e_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
* @hw: pointer to the hardware structure
**/
-static i40e_status i40e_alloc_arq_bufs(struct i40e_hw *hw)
+static int i40e_alloc_arq_bufs(struct i40e_hw *hw)
{
- i40e_status ret_code;
struct i40e_aq_desc *desc;
struct i40e_dma_mem *bi;
+ int ret_code;
int i;
/* We'll be allocating the buffer info memory first, then we can
@@ -182,10 +182,10 @@ unwind_alloc_arq_bufs:
* i40e_alloc_asq_bufs - Allocate empty buffer structs for the send queue
* @hw: pointer to the hardware structure
**/
-static i40e_status i40e_alloc_asq_bufs(struct i40e_hw *hw)
+static int i40e_alloc_asq_bufs(struct i40e_hw *hw)
{
- i40e_status ret_code;
struct i40e_dma_mem *bi;
+ int ret_code;
int i;
/* No mapped memory needed yet, just the buffer info structures */
@@ -266,9 +266,9 @@ static void i40e_free_asq_bufs(struct i40e_hw *hw)
*
* Configure base address and length registers for the transmit queue
**/
-static i40e_status i40e_config_asq_regs(struct i40e_hw *hw)
+static int i40e_config_asq_regs(struct i40e_hw *hw)
{
- i40e_status ret_code = 0;
+ int ret_code = 0;
u32 reg = 0;
/* Clear Head and Tail */
@@ -295,9 +295,9 @@ static i40e_status i40e_config_asq_regs(struct i40e_hw *hw)
*
* Configure base address and length registers for the receive (event queue)
**/
-static i40e_status i40e_config_arq_regs(struct i40e_hw *hw)
+static int i40e_config_arq_regs(struct i40e_hw *hw)
{
- i40e_status ret_code = 0;
+ int ret_code = 0;
u32 reg = 0;
/* Clear Head and Tail */
@@ -334,9 +334,9 @@ static i40e_status i40e_config_arq_regs(struct i40e_hw *hw)
* Do *NOT* hold the lock when calling this as the memory allocation routines
* called are not going to be atomic context safe
**/
-static i40e_status i40e_init_asq(struct i40e_hw *hw)
+static int i40e_init_asq(struct i40e_hw *hw)
{
- i40e_status ret_code = 0;
+ int ret_code = 0;
if (hw->aq.asq.count > 0) {
/* queue already initialized */
@@ -393,9 +393,9 @@ init_adminq_exit:
* Do *NOT* hold the lock when calling this as the memory allocation routines
* called are not going to be atomic context safe
**/
-static i40e_status i40e_init_arq(struct i40e_hw *hw)
+static int i40e_init_arq(struct i40e_hw *hw)
{
- i40e_status ret_code = 0;
+ int ret_code = 0;
if (hw->aq.arq.count > 0) {
/* queue already initialized */
@@ -445,9 +445,9 @@ init_adminq_exit:
*
* The main shutdown routine for the Admin Send Queue
**/
-static i40e_status i40e_shutdown_asq(struct i40e_hw *hw)
+static int i40e_shutdown_asq(struct i40e_hw *hw)
{
- i40e_status ret_code = 0;
+ int ret_code = 0;
mutex_lock(&hw->aq.asq_mutex);
@@ -479,9 +479,9 @@ shutdown_asq_out:
*
* The main shutdown routine for the Admin Receive Queue
**/
-static i40e_status i40e_shutdown_arq(struct i40e_hw *hw)
+static int i40e_shutdown_arq(struct i40e_hw *hw)
{
- i40e_status ret_code = 0;
+ int ret_code = 0;
mutex_lock(&hw->aq.arq_mutex);
@@ -582,12 +582,12 @@ static void i40e_set_hw_flags(struct i40e_hw *hw)
* - hw->aq.arq_buf_size
* - hw->aq.asq_buf_size
**/
-i40e_status i40e_init_adminq(struct i40e_hw *hw)
+int i40e_init_adminq(struct i40e_hw *hw)
{
u16 cfg_ptr, oem_hi, oem_lo;
u16 eetrack_lo, eetrack_hi;
- i40e_status ret_code;
int retry = 0;
+ int ret_code;
/* verify input for valid configuration */
if ((hw->aq.num_arq_entries == 0) ||
@@ -780,7 +780,7 @@ static bool i40e_asq_done(struct i40e_hw *hw)
* This is the main send command driver routine for the Admin Queue send
* queue. It runs the queue, cleans the queue, etc
**/
-static i40e_status
+static int
i40e_asq_send_command_atomic_exec(struct i40e_hw *hw,
struct i40e_aq_desc *desc,
void *buff, /* can be NULL */
@@ -788,12 +788,12 @@ i40e_asq_send_command_atomic_exec(struct i40e_hw *hw,
struct i40e_asq_cmd_details *cmd_details,
bool is_atomic_context)
{
- i40e_status status = 0;
struct i40e_dma_mem *dma_buff = NULL;
struct i40e_asq_cmd_details *details;
struct i40e_aq_desc *desc_on_ring;
bool cmd_completed = false;
u16 retval = 0;
+ int status = 0;
u32 val = 0;
if (hw->aq.asq.count == 0) {
@@ -984,7 +984,7 @@ asq_send_command_error:
* Acquires the lock and calls the main send command execution
* routine.
**/
-i40e_status
+int
i40e_asq_send_command_atomic(struct i40e_hw *hw,
struct i40e_aq_desc *desc,
void *buff, /* can be NULL */
@@ -992,7 +992,7 @@ i40e_asq_send_command_atomic(struct i40e_hw *hw,
struct i40e_asq_cmd_details *cmd_details,
bool is_atomic_context)
{
- i40e_status status;
+ int status;
mutex_lock(&hw->aq.asq_mutex);
status = i40e_asq_send_command_atomic_exec(hw, desc, buff, buff_size,
@@ -1003,7 +1003,7 @@ i40e_asq_send_command_atomic(struct i40e_hw *hw,
return status;
}
-i40e_status
+int
i40e_asq_send_command(struct i40e_hw *hw, struct i40e_aq_desc *desc,
void *buff, /* can be NULL */ u16 buff_size,
struct i40e_asq_cmd_details *cmd_details)
@@ -1026,7 +1026,7 @@ i40e_asq_send_command(struct i40e_hw *hw, struct i40e_aq_desc *desc,
* routine. Returns the last Admin Queue status in aq_status
* to avoid race conditions in access to hw->aq.asq_last_status.
**/
-i40e_status
+int
i40e_asq_send_command_atomic_v2(struct i40e_hw *hw,
struct i40e_aq_desc *desc,
void *buff, /* can be NULL */
@@ -1035,7 +1035,7 @@ i40e_asq_send_command_atomic_v2(struct i40e_hw *hw,
bool is_atomic_context,
enum i40e_admin_queue_err *aq_status)
{
- i40e_status status;
+ int status;
mutex_lock(&hw->aq.asq_mutex);
status = i40e_asq_send_command_atomic_exec(hw, desc, buff,
@@ -1048,7 +1048,7 @@ i40e_asq_send_command_atomic_v2(struct i40e_hw *hw,
return status;
}
-i40e_status
+int
i40e_asq_send_command_v2(struct i40e_hw *hw, struct i40e_aq_desc *desc,
void *buff, /* can be NULL */ u16 buff_size,
struct i40e_asq_cmd_details *cmd_details,
@@ -1084,14 +1084,14 @@ void i40e_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc,
* the contents through e. It can also return how many events are
* left to process through 'pending'
**/
-i40e_status i40e_clean_arq_element(struct i40e_hw *hw,
- struct i40e_arq_event_info *e,
- u16 *pending)
+int i40e_clean_arq_element(struct i40e_hw *hw,
+ struct i40e_arq_event_info *e,
+ u16 *pending)
{
- i40e_status ret_code = 0;
u16 ntc = hw->aq.arq.next_to_clean;
struct i40e_aq_desc *desc;
struct i40e_dma_mem *bi;
+ int ret_code = 0;
u16 desc_idx;
u16 datalen;
u16 flags;
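
All of the adminq helpers converted above keep their zero-on-success contract; only the declared type changes. A sketch of the calling convention on the receive side; example_drain_arq is illustrative, not driver code:

static int example_drain_arq(struct i40e_hw *hw)
{
	struct i40e_arq_event_info event = {};
	u16 pending = 0;
	int ret;

	do {
		ret = i40e_clean_arq_element(hw, &event, &pending);
		if (ret == I40E_ERR_ADMIN_QUEUE_NO_WORK)
			return 0;	/* queue drained, not a failure */
		if (ret)
			return ret;	/* genuine error, now a plain int */
	} while (pending);

	return 0;
}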
diff --git a/drivers/net/ethernet/intel/i40e/i40e_alloc.h b/drivers/net/ethernet/intel/i40e/i40e_alloc.h
index cb8689222c8b..a6c9a9e343d1 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_alloc.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_alloc.h
@@ -20,16 +20,16 @@ enum i40e_memory_type {
};
/* prototype for functions used for dynamic memory allocation */
-i40e_status i40e_allocate_dma_mem(struct i40e_hw *hw,
- struct i40e_dma_mem *mem,
- enum i40e_memory_type type,
- u64 size, u32 alignment);
-i40e_status i40e_free_dma_mem(struct i40e_hw *hw,
- struct i40e_dma_mem *mem);
-i40e_status i40e_allocate_virt_mem(struct i40e_hw *hw,
- struct i40e_virt_mem *mem,
- u32 size);
-i40e_status i40e_free_virt_mem(struct i40e_hw *hw,
- struct i40e_virt_mem *mem);
+int i40e_allocate_dma_mem(struct i40e_hw *hw,
+ struct i40e_dma_mem *mem,
+ enum i40e_memory_type type,
+ u64 size, u32 alignment);
+int i40e_free_dma_mem(struct i40e_hw *hw,
+ struct i40e_dma_mem *mem);
+int i40e_allocate_virt_mem(struct i40e_hw *hw,
+ struct i40e_virt_mem *mem,
+ u32 size);
+int i40e_free_virt_mem(struct i40e_hw *hw,
+ struct i40e_virt_mem *mem);
#endif /* _I40E_ALLOC_H_ */
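
These prototypes are the driver's OS-abstraction layer over dma_alloc_coherent() and kzalloc(); only the return type changes here. A rough usage sketch, assuming the I40E_ADMINQ_DESC_ALIGNMENT macro from i40e_adminq.h and a made-up 4 KiB size:

	struct i40e_dma_mem ring = {};
	int status;

	status = i40e_allocate_dma_mem(hw, &ring, i40e_mem_arq_ring,
				       4096, I40E_ADMINQ_DESC_ALIGNMENT);
	if (status)
		return status;
	/* descriptor memory is now at ring.va (CPU) / ring.pa (device) */
	i40e_free_dma_mem(hw, &ring);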
diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.c b/drivers/net/ethernet/intel/i40e/i40e_client.c
index 10d7a982a5b9..639c5a1ca853 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_client.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_client.c
@@ -541,9 +541,9 @@ static int i40e_client_virtchnl_send(struct i40e_info *ldev,
{
struct i40e_pf *pf = ldev->pf;
struct i40e_hw *hw = &pf->hw;
- i40e_status err;
+ int err;
- err = i40e_aq_send_msg_to_vf(hw, vf_id, VIRTCHNL_OP_IWARP,
+ err = i40e_aq_send_msg_to_vf(hw, vf_id, VIRTCHNL_OP_RDMA,
0, msg, len, NULL);
if (err)
dev_err(&pf->pdev->dev, "Unable to send iWarp message to VF, error %d, aq status %d\n",
@@ -674,7 +674,7 @@ static int i40e_client_update_vsi_ctxt(struct i40e_info *ldev,
struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi];
struct i40e_vsi_context ctxt;
bool update = true;
- i40e_status err;
+ int err;
/* TODO: for now do not allow setting VF's VSI setting */
if (is_vf)
@@ -686,8 +686,8 @@ static int i40e_client_update_vsi_ctxt(struct i40e_info *ldev,
ctxt.flags = I40E_AQ_VSI_TYPE_PF;
if (err) {
dev_info(&pf->pdev->dev,
- "couldn't get PF vsi config, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, err),
+ "couldn't get PF vsi config, err %pe aq_err %s\n",
+ ERR_PTR(err),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
return -ENOENT;
@@ -714,8 +714,8 @@ static int i40e_client_update_vsi_ctxt(struct i40e_info *ldev,
err = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL);
if (err) {
dev_info(&pf->pdev->dev,
- "update VSI ctxt for PE failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, err),
+ "update VSI ctxt for PE failed, err %pe aq_err %s\n",
+ ERR_PTR(err),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
}
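
Two things happen in the i40e_client.c hunks: the virtchnl opcode is renamed from VIRTCHNL_OP_IWARP to VIRTCHNL_OP_RDMA, and error logging moves from the driver-private i40e_stat_str() table to the generic %pe printk specifier, with ERR_PTR() turning the int status into the error pointer %pe expects. The idiom in isolation, with an illustrative message and error code:

	int err = -EIO;

	dev_info(&pf->pdev->dev, "command failed, err %pe aq_err %s\n",
		 ERR_PTR(err),
		 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));

With symbolic error names enabled in the kernel config, this prints "err -EIO" rather than a bare number, which is why the large i40e_stat_str() switch removed below can go without losing debuggability for standard codes.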
diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
index 8f764ff5c990..ed88e38d488b 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
@@ -14,9 +14,9 @@
* This function sets the mac type of the adapter based on the
* vendor ID and device ID stored in the hw structure.
**/
-i40e_status i40e_set_mac_type(struct i40e_hw *hw)
+int i40e_set_mac_type(struct i40e_hw *hw)
{
- i40e_status status = 0;
+ int status = 0;
if (hw->vendor_id == PCI_VENDOR_ID_INTEL) {
switch (hw->device_id) {
@@ -125,154 +125,6 @@ const char *i40e_aq_str(struct i40e_hw *hw, enum i40e_admin_queue_err aq_err)
}
/**
- * i40e_stat_str - convert status err code to a string
- * @hw: pointer to the HW structure
- * @stat_err: the status error code to convert
- **/
-const char *i40e_stat_str(struct i40e_hw *hw, i40e_status stat_err)
-{
- switch (stat_err) {
- case 0:
- return "OK";
- case I40E_ERR_NVM:
- return "I40E_ERR_NVM";
- case I40E_ERR_NVM_CHECKSUM:
- return "I40E_ERR_NVM_CHECKSUM";
- case I40E_ERR_PHY:
- return "I40E_ERR_PHY";
- case I40E_ERR_CONFIG:
- return "I40E_ERR_CONFIG";
- case I40E_ERR_PARAM:
- return "I40E_ERR_PARAM";
- case I40E_ERR_MAC_TYPE:
- return "I40E_ERR_MAC_TYPE";
- case I40E_ERR_UNKNOWN_PHY:
- return "I40E_ERR_UNKNOWN_PHY";
- case I40E_ERR_LINK_SETUP:
- return "I40E_ERR_LINK_SETUP";
- case I40E_ERR_ADAPTER_STOPPED:
- return "I40E_ERR_ADAPTER_STOPPED";
- case I40E_ERR_INVALID_MAC_ADDR:
- return "I40E_ERR_INVALID_MAC_ADDR";
- case I40E_ERR_DEVICE_NOT_SUPPORTED:
- return "I40E_ERR_DEVICE_NOT_SUPPORTED";
- case I40E_ERR_PRIMARY_REQUESTS_PENDING:
- return "I40E_ERR_PRIMARY_REQUESTS_PENDING";
- case I40E_ERR_INVALID_LINK_SETTINGS:
- return "I40E_ERR_INVALID_LINK_SETTINGS";
- case I40E_ERR_AUTONEG_NOT_COMPLETE:
- return "I40E_ERR_AUTONEG_NOT_COMPLETE";
- case I40E_ERR_RESET_FAILED:
- return "I40E_ERR_RESET_FAILED";
- case I40E_ERR_SWFW_SYNC:
- return "I40E_ERR_SWFW_SYNC";
- case I40E_ERR_NO_AVAILABLE_VSI:
- return "I40E_ERR_NO_AVAILABLE_VSI";
- case I40E_ERR_NO_MEMORY:
- return "I40E_ERR_NO_MEMORY";
- case I40E_ERR_BAD_PTR:
- return "I40E_ERR_BAD_PTR";
- case I40E_ERR_RING_FULL:
- return "I40E_ERR_RING_FULL";
- case I40E_ERR_INVALID_PD_ID:
- return "I40E_ERR_INVALID_PD_ID";
- case I40E_ERR_INVALID_QP_ID:
- return "I40E_ERR_INVALID_QP_ID";
- case I40E_ERR_INVALID_CQ_ID:
- return "I40E_ERR_INVALID_CQ_ID";
- case I40E_ERR_INVALID_CEQ_ID:
- return "I40E_ERR_INVALID_CEQ_ID";
- case I40E_ERR_INVALID_AEQ_ID:
- return "I40E_ERR_INVALID_AEQ_ID";
- case I40E_ERR_INVALID_SIZE:
- return "I40E_ERR_INVALID_SIZE";
- case I40E_ERR_INVALID_ARP_INDEX:
- return "I40E_ERR_INVALID_ARP_INDEX";
- case I40E_ERR_INVALID_FPM_FUNC_ID:
- return "I40E_ERR_INVALID_FPM_FUNC_ID";
- case I40E_ERR_QP_INVALID_MSG_SIZE:
- return "I40E_ERR_QP_INVALID_MSG_SIZE";
- case I40E_ERR_QP_TOOMANY_WRS_POSTED:
- return "I40E_ERR_QP_TOOMANY_WRS_POSTED";
- case I40E_ERR_INVALID_FRAG_COUNT:
- return "I40E_ERR_INVALID_FRAG_COUNT";
- case I40E_ERR_QUEUE_EMPTY:
- return "I40E_ERR_QUEUE_EMPTY";
- case I40E_ERR_INVALID_ALIGNMENT:
- return "I40E_ERR_INVALID_ALIGNMENT";
- case I40E_ERR_FLUSHED_QUEUE:
- return "I40E_ERR_FLUSHED_QUEUE";
- case I40E_ERR_INVALID_PUSH_PAGE_INDEX:
- return "I40E_ERR_INVALID_PUSH_PAGE_INDEX";
- case I40E_ERR_INVALID_IMM_DATA_SIZE:
- return "I40E_ERR_INVALID_IMM_DATA_SIZE";
- case I40E_ERR_TIMEOUT:
- return "I40E_ERR_TIMEOUT";
- case I40E_ERR_OPCODE_MISMATCH:
- return "I40E_ERR_OPCODE_MISMATCH";
- case I40E_ERR_CQP_COMPL_ERROR:
- return "I40E_ERR_CQP_COMPL_ERROR";
- case I40E_ERR_INVALID_VF_ID:
- return "I40E_ERR_INVALID_VF_ID";
- case I40E_ERR_INVALID_HMCFN_ID:
- return "I40E_ERR_INVALID_HMCFN_ID";
- case I40E_ERR_BACKING_PAGE_ERROR:
- return "I40E_ERR_BACKING_PAGE_ERROR";
- case I40E_ERR_NO_PBLCHUNKS_AVAILABLE:
- return "I40E_ERR_NO_PBLCHUNKS_AVAILABLE";
- case I40E_ERR_INVALID_PBLE_INDEX:
- return "I40E_ERR_INVALID_PBLE_INDEX";
- case I40E_ERR_INVALID_SD_INDEX:
- return "I40E_ERR_INVALID_SD_INDEX";
- case I40E_ERR_INVALID_PAGE_DESC_INDEX:
- return "I40E_ERR_INVALID_PAGE_DESC_INDEX";
- case I40E_ERR_INVALID_SD_TYPE:
- return "I40E_ERR_INVALID_SD_TYPE";
- case I40E_ERR_MEMCPY_FAILED:
- return "I40E_ERR_MEMCPY_FAILED";
- case I40E_ERR_INVALID_HMC_OBJ_INDEX:
- return "I40E_ERR_INVALID_HMC_OBJ_INDEX";
- case I40E_ERR_INVALID_HMC_OBJ_COUNT:
- return "I40E_ERR_INVALID_HMC_OBJ_COUNT";
- case I40E_ERR_INVALID_SRQ_ARM_LIMIT:
- return "I40E_ERR_INVALID_SRQ_ARM_LIMIT";
- case I40E_ERR_SRQ_ENABLED:
- return "I40E_ERR_SRQ_ENABLED";
- case I40E_ERR_ADMIN_QUEUE_ERROR:
- return "I40E_ERR_ADMIN_QUEUE_ERROR";
- case I40E_ERR_ADMIN_QUEUE_TIMEOUT:
- return "I40E_ERR_ADMIN_QUEUE_TIMEOUT";
- case I40E_ERR_BUF_TOO_SHORT:
- return "I40E_ERR_BUF_TOO_SHORT";
- case I40E_ERR_ADMIN_QUEUE_FULL:
- return "I40E_ERR_ADMIN_QUEUE_FULL";
- case I40E_ERR_ADMIN_QUEUE_NO_WORK:
- return "I40E_ERR_ADMIN_QUEUE_NO_WORK";
- case I40E_ERR_BAD_IWARP_CQE:
- return "I40E_ERR_BAD_IWARP_CQE";
- case I40E_ERR_NVM_BLANK_MODE:
- return "I40E_ERR_NVM_BLANK_MODE";
- case I40E_ERR_NOT_IMPLEMENTED:
- return "I40E_ERR_NOT_IMPLEMENTED";
- case I40E_ERR_PE_DOORBELL_NOT_ENABLED:
- return "I40E_ERR_PE_DOORBELL_NOT_ENABLED";
- case I40E_ERR_DIAG_TEST_FAILED:
- return "I40E_ERR_DIAG_TEST_FAILED";
- case I40E_ERR_NOT_READY:
- return "I40E_ERR_NOT_READY";
- case I40E_NOT_SUPPORTED:
- return "I40E_NOT_SUPPORTED";
- case I40E_ERR_FIRMWARE_API_VERSION:
- return "I40E_ERR_FIRMWARE_API_VERSION";
- case I40E_ERR_ADMIN_QUEUE_CRITICAL_ERROR:
- return "I40E_ERR_ADMIN_QUEUE_CRITICAL_ERROR";
- }
-
- snprintf(hw->err_str, sizeof(hw->err_str), "%d", stat_err);
- return hw->err_str;
-}
-
-/**
* i40e_debug_aq
* @hw: debug mask related to admin queue
* @mask: debug mask
@@ -355,13 +207,13 @@ bool i40e_check_asq_alive(struct i40e_hw *hw)
* Tell the Firmware that we're shutting down the AdminQ and whether
* or not the driver is unloading as well.
**/
-i40e_status i40e_aq_queue_shutdown(struct i40e_hw *hw,
- bool unloading)
+int i40e_aq_queue_shutdown(struct i40e_hw *hw,
+ bool unloading)
{
struct i40e_aq_desc desc;
struct i40e_aqc_queue_shutdown *cmd =
(struct i40e_aqc_queue_shutdown *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_queue_shutdown);
@@ -384,15 +236,15 @@ i40e_status i40e_aq_queue_shutdown(struct i40e_hw *hw,
*
* Internal function to get or set RSS look up table
**/
-static i40e_status i40e_aq_get_set_rss_lut(struct i40e_hw *hw,
- u16 vsi_id, bool pf_lut,
- u8 *lut, u16 lut_size,
- bool set)
+static int i40e_aq_get_set_rss_lut(struct i40e_hw *hw,
+ u16 vsi_id, bool pf_lut,
+ u8 *lut, u16 lut_size,
+ bool set)
{
- i40e_status status;
struct i40e_aq_desc desc;
struct i40e_aqc_get_set_rss_lut *cmd_resp =
(struct i40e_aqc_get_set_rss_lut *)&desc.params.raw;
+ int status;
if (set)
i40e_fill_default_direct_cmd_desc(&desc,
@@ -437,8 +289,8 @@ static i40e_status i40e_aq_get_set_rss_lut(struct i40e_hw *hw,
*
* get the RSS lookup table, PF or VSI type
**/
-i40e_status i40e_aq_get_rss_lut(struct i40e_hw *hw, u16 vsi_id,
- bool pf_lut, u8 *lut, u16 lut_size)
+int i40e_aq_get_rss_lut(struct i40e_hw *hw, u16 vsi_id,
+ bool pf_lut, u8 *lut, u16 lut_size)
{
return i40e_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size,
false);
@@ -454,8 +306,8 @@ i40e_status i40e_aq_get_rss_lut(struct i40e_hw *hw, u16 vsi_id,
*
* set the RSS lookup table, PF or VSI type
**/
-i40e_status i40e_aq_set_rss_lut(struct i40e_hw *hw, u16 vsi_id,
- bool pf_lut, u8 *lut, u16 lut_size)
+int i40e_aq_set_rss_lut(struct i40e_hw *hw, u16 vsi_id,
+ bool pf_lut, u8 *lut, u16 lut_size)
{
return i40e_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
}
@@ -469,16 +321,16 @@ i40e_status i40e_aq_set_rss_lut(struct i40e_hw *hw, u16 vsi_id,
*
* get the RSS key per VSI
**/
-static i40e_status i40e_aq_get_set_rss_key(struct i40e_hw *hw,
- u16 vsi_id,
- struct i40e_aqc_get_set_rss_key_data *key,
- bool set)
+static int i40e_aq_get_set_rss_key(struct i40e_hw *hw,
+ u16 vsi_id,
+ struct i40e_aqc_get_set_rss_key_data *key,
+ bool set)
{
- i40e_status status;
struct i40e_aq_desc desc;
struct i40e_aqc_get_set_rss_key *cmd_resp =
(struct i40e_aqc_get_set_rss_key *)&desc.params.raw;
u16 key_size = sizeof(struct i40e_aqc_get_set_rss_key_data);
+ int status;
if (set)
i40e_fill_default_direct_cmd_desc(&desc,
@@ -509,9 +361,9 @@ static i40e_status i40e_aq_get_set_rss_key(struct i40e_hw *hw,
* @key: pointer to key info struct
*
**/
-i40e_status i40e_aq_get_rss_key(struct i40e_hw *hw,
- u16 vsi_id,
- struct i40e_aqc_get_set_rss_key_data *key)
+int i40e_aq_get_rss_key(struct i40e_hw *hw,
+ u16 vsi_id,
+ struct i40e_aqc_get_set_rss_key_data *key)
{
return i40e_aq_get_set_rss_key(hw, vsi_id, key, false);
}
@@ -524,9 +376,9 @@ i40e_status i40e_aq_get_rss_key(struct i40e_hw *hw,
*
* set the RSS key per VSI
**/
-i40e_status i40e_aq_set_rss_key(struct i40e_hw *hw,
- u16 vsi_id,
- struct i40e_aqc_get_set_rss_key_data *key)
+int i40e_aq_set_rss_key(struct i40e_hw *hw,
+ u16 vsi_id,
+ struct i40e_aqc_get_set_rss_key_data *key)
{
return i40e_aq_get_set_rss_key(hw, vsi_id, key, true);
}
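
The four RSS wrappers above are thin shims over a shared get/set worker selected by a bool. A short sketch of programming a fresh RSS key through the converted API; hw, vsi_id and pf are assumed locals belonging to the VSI being configured:

	struct i40e_aqc_get_set_rss_key_data key = {};
	int status;

	netdev_rss_key_fill(key.standard_rss_key,
			    sizeof(key.standard_rss_key));
	status = i40e_aq_set_rss_key(hw, vsi_id, &key);
	if (status)
		dev_info(&pf->pdev->dev, "set RSS key failed, err %pe\n",
			 ERR_PTR(status));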
@@ -796,10 +648,10 @@ struct i40e_rx_ptype_decoded i40e_ptype_lookup[BIT(8)] = {
* hw_addr, back, device_id, vendor_id, subsystem_device_id,
* subsystem_vendor_id, and revision_id
**/
-i40e_status i40e_init_shared_code(struct i40e_hw *hw)
+int i40e_init_shared_code(struct i40e_hw *hw)
{
- i40e_status status = 0;
u32 port, ari, func_rid;
+ int status = 0;
i40e_set_mac_type(hw);
@@ -836,15 +688,16 @@ i40e_status i40e_init_shared_code(struct i40e_hw *hw)
* @addrs: the requestor's mac addr store
* @cmd_details: pointer to command details structure or NULL
**/
-static i40e_status i40e_aq_mac_address_read(struct i40e_hw *hw,
- u16 *flags,
- struct i40e_aqc_mac_address_read_data *addrs,
- struct i40e_asq_cmd_details *cmd_details)
+static int
+i40e_aq_mac_address_read(struct i40e_hw *hw,
+ u16 *flags,
+ struct i40e_aqc_mac_address_read_data *addrs,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_mac_address_read *cmd_data =
(struct i40e_aqc_mac_address_read *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_mac_address_read);
desc.flags |= cpu_to_le16(I40E_AQ_FLAG_BUF);
@@ -863,14 +716,14 @@ static i40e_status i40e_aq_mac_address_read(struct i40e_hw *hw,
* @mac_addr: address to write
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_mac_address_write(struct i40e_hw *hw,
- u16 flags, u8 *mac_addr,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_mac_address_write(struct i40e_hw *hw,
+ u16 flags, u8 *mac_addr,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_mac_address_write *cmd_data =
(struct i40e_aqc_mac_address_write *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_mac_address_write);
@@ -893,11 +746,11 @@ i40e_status i40e_aq_mac_address_write(struct i40e_hw *hw,
*
* Reads the adapter's MAC address from register
**/
-i40e_status i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr)
+int i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr)
{
struct i40e_aqc_mac_address_read_data addrs;
- i40e_status status;
u16 flags = 0;
+ int status;
status = i40e_aq_mac_address_read(hw, &flags, &addrs, NULL);
@@ -914,11 +767,11 @@ i40e_status i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr)
*
* Reads the adapter's Port MAC address
**/
-i40e_status i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr)
+int i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr)
{
struct i40e_aqc_mac_address_read_data addrs;
- i40e_status status;
u16 flags = 0;
+ int status;
status = i40e_aq_mac_address_read(hw, &flags, &addrs, NULL);
if (status)
@@ -972,13 +825,13 @@ void i40e_pre_tx_queue_cfg(struct i40e_hw *hw, u32 queue, bool enable)
*
* Reads the part number string from the EEPROM.
**/
-i40e_status i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num,
- u32 pba_num_size)
+int i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num,
+ u32 pba_num_size)
{
- i40e_status status = 0;
u16 pba_word = 0;
u16 pba_size = 0;
u16 pba_ptr = 0;
+ int status = 0;
u16 i = 0;
status = i40e_read_nvm_word(hw, I40E_SR_PBA_FLAGS, &pba_word);
@@ -1087,8 +940,8 @@ static enum i40e_media_type i40e_get_media_type(struct i40e_hw *hw)
* @hw: pointer to the hardware structure
* @retry_limit: how many times to retry before failure
**/
-static i40e_status i40e_poll_globr(struct i40e_hw *hw,
- u32 retry_limit)
+static int i40e_poll_globr(struct i40e_hw *hw,
+ u32 retry_limit)
{
u32 cnt, reg = 0;
@@ -1114,7 +967,7 @@ static i40e_status i40e_poll_globr(struct i40e_hw *hw,
* Assuming someone else has triggered a global reset,
* assure the global reset is complete and then reset the PF
**/
-i40e_status i40e_pf_reset(struct i40e_hw *hw)
+int i40e_pf_reset(struct i40e_hw *hw)
{
u32 cnt = 0;
u32 cnt1 = 0;
@@ -1453,15 +1306,16 @@ void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink)
*
* Returns the various PHY abilities supported on the Port.
**/
-i40e_status i40e_aq_get_phy_capabilities(struct i40e_hw *hw,
- bool qualified_modules, bool report_init,
- struct i40e_aq_get_phy_abilities_resp *abilities,
- struct i40e_asq_cmd_details *cmd_details)
+int
+i40e_aq_get_phy_capabilities(struct i40e_hw *hw,
+ bool qualified_modules, bool report_init,
+ struct i40e_aq_get_phy_abilities_resp *abilities,
+ struct i40e_asq_cmd_details *cmd_details)
{
- struct i40e_aq_desc desc;
- i40e_status status;
u16 abilities_size = sizeof(struct i40e_aq_get_phy_abilities_resp);
u16 max_delay = I40E_MAX_PHY_TIMEOUT, total_delay = 0;
+ struct i40e_aq_desc desc;
+ int status;
if (!abilities)
return I40E_ERR_PARAM;
@@ -1532,14 +1386,14 @@ i40e_status i40e_aq_get_phy_capabilities(struct i40e_hw *hw,
* of the PHY Config parameters. This status will be indicated by the
* command response.
**/
-enum i40e_status_code i40e_aq_set_phy_config(struct i40e_hw *hw,
- struct i40e_aq_set_phy_config *config,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_set_phy_config(struct i40e_hw *hw,
+ struct i40e_aq_set_phy_config *config,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aq_set_phy_config *cmd =
(struct i40e_aq_set_phy_config *)&desc.params.raw;
- enum i40e_status_code status;
+ int status;
if (!config)
return I40E_ERR_PARAM;
@@ -1554,7 +1408,7 @@ enum i40e_status_code i40e_aq_set_phy_config(struct i40e_hw *hw,
return status;
}
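
i40e_aq_set_phy_config() is the primitive beneath i40e_set_fc(), converted just below; the aq_failures out-parameter tells the caller which of the get/set/update admin-queue steps failed. A hedged usage sketch; example_enable_pause is not driver code:

static int example_enable_pause(struct i40e_hw *hw)
{
	u8 aq_failures = 0;
	int status;

	hw->fc.requested_mode = I40E_FC_FULL;	/* request Rx and Tx pause */
	status = i40e_set_fc(hw, &aq_failures, false);
	if (status)
		return status;	/* aq_failures narrows the failing step */
	return 0;
}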
-static noinline_for_stack enum i40e_status_code
+static noinline_for_stack int
i40e_set_fc_status(struct i40e_hw *hw,
struct i40e_aq_get_phy_abilities_resp *abilities,
bool atomic_restart)
@@ -1612,11 +1466,11 @@ i40e_set_fc_status(struct i40e_hw *hw,
*
* Set the requested flow control mode using set_phy_config.
**/
-enum i40e_status_code i40e_set_fc(struct i40e_hw *hw, u8 *aq_failures,
- bool atomic_restart)
+int i40e_set_fc(struct i40e_hw *hw, u8 *aq_failures,
+ bool atomic_restart)
{
struct i40e_aq_get_phy_abilities_resp abilities;
- enum i40e_status_code status;
+ int status;
*aq_failures = 0x0;
@@ -1655,13 +1509,13 @@ enum i40e_status_code i40e_set_fc(struct i40e_hw *hw, u8 *aq_failures,
*
* Tell the firmware that the driver is taking over from PXE
**/
-i40e_status i40e_aq_clear_pxe_mode(struct i40e_hw *hw,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_clear_pxe_mode(struct i40e_hw *hw,
+ struct i40e_asq_cmd_details *cmd_details)
{
- i40e_status status;
struct i40e_aq_desc desc;
struct i40e_aqc_clear_pxe *cmd =
(struct i40e_aqc_clear_pxe *)&desc.params.raw;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_clear_pxe_mode);
@@ -1683,14 +1537,14 @@ i40e_status i40e_aq_clear_pxe_mode(struct i40e_hw *hw,
*
* Sets up the link and restarts the Auto-Negotiation over the link.
**/
-i40e_status i40e_aq_set_link_restart_an(struct i40e_hw *hw,
- bool enable_link,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_set_link_restart_an(struct i40e_hw *hw,
+ bool enable_link,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_link_restart_an *cmd =
(struct i40e_aqc_set_link_restart_an *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_link_restart_an);
@@ -1715,17 +1569,17 @@ i40e_status i40e_aq_set_link_restart_an(struct i40e_hw *hw,
*
* Returns the link status of the adapter.
**/
-i40e_status i40e_aq_get_link_info(struct i40e_hw *hw,
- bool enable_lse, struct i40e_link_status *link,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_get_link_info(struct i40e_hw *hw,
+ bool enable_lse, struct i40e_link_status *link,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_get_link_status *resp =
(struct i40e_aqc_get_link_status *)&desc.params.raw;
struct i40e_link_status *hw_link_info = &hw->phy.link_info;
- i40e_status status;
bool tx_pause, rx_pause;
u16 command_flags;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_get_link_status);
@@ -1811,14 +1665,14 @@ aq_get_link_info_exit:
*
* Set link interrupt mask.
**/
-i40e_status i40e_aq_set_phy_int_mask(struct i40e_hw *hw,
- u16 mask,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_set_phy_int_mask(struct i40e_hw *hw,
+ u16 mask,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_phy_int_mask *cmd =
(struct i40e_aqc_set_phy_int_mask *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_phy_int_mask);
@@ -1838,8 +1692,8 @@ i40e_status i40e_aq_set_phy_int_mask(struct i40e_hw *hw,
*
* Enable/disable loopback on a given port
*/
-i40e_status i40e_aq_set_mac_loopback(struct i40e_hw *hw, bool ena_lpbk,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_set_mac_loopback(struct i40e_hw *hw, bool ena_lpbk,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_lb_mode *cmd =
@@ -1864,13 +1718,13 @@ i40e_status i40e_aq_set_mac_loopback(struct i40e_hw *hw, bool ena_lpbk,
*
* Reset the external PHY.
**/
-i40e_status i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_phy_debug *cmd =
(struct i40e_aqc_set_phy_debug *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_phy_debug);
@@ -1905,9 +1759,9 @@ static bool i40e_is_aq_api_ver_ge(struct i40e_adminq_info *aq, u16 maj,
*
* Add a VSI context to the hardware.
**/
-i40e_status i40e_aq_add_vsi(struct i40e_hw *hw,
- struct i40e_vsi_context *vsi_ctx,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_add_vsi(struct i40e_hw *hw,
+ struct i40e_vsi_context *vsi_ctx,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_add_get_update_vsi *cmd =
@@ -1915,7 +1769,7 @@ i40e_status i40e_aq_add_vsi(struct i40e_hw *hw,
struct i40e_aqc_add_get_update_vsi_completion *resp =
(struct i40e_aqc_add_get_update_vsi_completion *)
&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_add_vsi);
@@ -1949,15 +1803,15 @@ aq_add_vsi_exit:
* @seid: vsi number
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_set_default_vsi(struct i40e_hw *hw,
- u16 seid,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_set_default_vsi(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
(struct i40e_aqc_set_vsi_promiscuous_modes *)
&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_vsi_promiscuous_modes);
@@ -1977,15 +1831,15 @@ i40e_status i40e_aq_set_default_vsi(struct i40e_hw *hw,
* @seid: vsi number
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_clear_default_vsi(struct i40e_hw *hw,
- u16 seid,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_clear_default_vsi(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
(struct i40e_aqc_set_vsi_promiscuous_modes *)
&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_vsi_promiscuous_modes);
@@ -2007,16 +1861,16 @@ i40e_status i40e_aq_clear_default_vsi(struct i40e_hw *hw,
* @cmd_details: pointer to command details structure or NULL
* @rx_only_promisc: flag to decide if egress traffic gets mirrored in promisc
**/
-i40e_status i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw,
- u16 seid, bool set,
- struct i40e_asq_cmd_details *cmd_details,
- bool rx_only_promisc)
+int i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw,
+ u16 seid, bool set,
+ struct i40e_asq_cmd_details *cmd_details,
+ bool rx_only_promisc)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
(struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
- i40e_status status;
u16 flags = 0;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_vsi_promiscuous_modes);
@@ -2047,14 +1901,15 @@ i40e_status i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw,
* @set: set multicast promiscuous enable/disable
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw,
- u16 seid, bool set, struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw,
+ u16 seid, bool set,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
(struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
- i40e_status status;
u16 flags = 0;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_vsi_promiscuous_modes);
@@ -2080,16 +1935,16 @@ i40e_status i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw,
* @vid: The VLAN tag filter - capture any multicast packet with this VLAN tag
* @cmd_details: pointer to command details structure or NULL
**/
-enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
- u16 seid, bool enable,
- u16 vid,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
+ u16 seid, bool enable,
+ u16 vid,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
(struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
- enum i40e_status_code status;
u16 flags = 0;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_vsi_promiscuous_modes);
@@ -2116,16 +1971,16 @@ enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
* @vid: The VLAN tag filter - capture any unicast packet with this VLAN tag
* @cmd_details: pointer to command details structure or NULL
**/
-enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw,
- u16 seid, bool enable,
- u16 vid,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw,
+ u16 seid, bool enable,
+ u16 vid,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
(struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
- enum i40e_status_code status;
u16 flags = 0;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_vsi_promiscuous_modes);
@@ -2158,15 +2013,15 @@ enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw,
* @vid: The VLAN tag filter - capture any broadcast packet with this VLAN tag
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_set_vsi_bc_promisc_on_vlan(struct i40e_hw *hw,
- u16 seid, bool enable, u16 vid,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_set_vsi_bc_promisc_on_vlan(struct i40e_hw *hw,
+ u16 seid, bool enable, u16 vid,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
(struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
- i40e_status status;
u16 flags = 0;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_vsi_promiscuous_modes);
@@ -2193,14 +2048,14 @@ i40e_status i40e_aq_set_vsi_bc_promisc_on_vlan(struct i40e_hw *hw,
*
* Set or clear the broadcast promiscuous flag (filter) for a given VSI.
**/
-i40e_status i40e_aq_set_vsi_broadcast(struct i40e_hw *hw,
- u16 seid, bool set_filter,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_set_vsi_broadcast(struct i40e_hw *hw,
+ u16 seid, bool set_filter,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
(struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_vsi_promiscuous_modes);
@@ -2226,15 +2081,15 @@ i40e_status i40e_aq_set_vsi_broadcast(struct i40e_hw *hw,
* @enable: set MAC L2 layer unicast promiscuous enable/disable for a given VLAN
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw,
- u16 seid, bool enable,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw,
+ u16 seid, bool enable,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
(struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
- i40e_status status;
u16 flags = 0;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_vsi_promiscuous_modes);
@@ -2256,9 +2111,9 @@ i40e_status i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw,
* @vsi_ctx: pointer to a vsi context struct
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_get_vsi_params(struct i40e_hw *hw,
- struct i40e_vsi_context *vsi_ctx,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_get_vsi_params(struct i40e_hw *hw,
+ struct i40e_vsi_context *vsi_ctx,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_add_get_update_vsi *cmd =
@@ -2266,7 +2121,7 @@ i40e_status i40e_aq_get_vsi_params(struct i40e_hw *hw,
struct i40e_aqc_add_get_update_vsi_completion *resp =
(struct i40e_aqc_add_get_update_vsi_completion *)
&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_get_vsi_parameters);
@@ -2298,9 +2153,9 @@ aq_get_vsi_params_exit:
*
* Update a VSI context.
**/
-i40e_status i40e_aq_update_vsi_params(struct i40e_hw *hw,
- struct i40e_vsi_context *vsi_ctx,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_update_vsi_params(struct i40e_hw *hw,
+ struct i40e_vsi_context *vsi_ctx,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_add_get_update_vsi *cmd =
@@ -2308,7 +2163,7 @@ i40e_status i40e_aq_update_vsi_params(struct i40e_hw *hw,
struct i40e_aqc_add_get_update_vsi_completion *resp =
(struct i40e_aqc_add_get_update_vsi_completion *)
&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_update_vsi_parameters);
@@ -2336,15 +2191,15 @@ i40e_status i40e_aq_update_vsi_params(struct i40e_hw *hw,
*
* Fill the buf with switch configuration returned from AdminQ command
**/
-i40e_status i40e_aq_get_switch_config(struct i40e_hw *hw,
- struct i40e_aqc_get_switch_config_resp *buf,
- u16 buf_size, u16 *start_seid,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_get_switch_config(struct i40e_hw *hw,
+ struct i40e_aqc_get_switch_config_resp *buf,
+ u16 buf_size, u16 *start_seid,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_switch_seid *scfg =
(struct i40e_aqc_switch_seid *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_get_switch_config);
@@ -2370,15 +2225,15 @@ i40e_status i40e_aq_get_switch_config(struct i40e_hw *hw,
*
* Set switch configuration bits
**/
-enum i40e_status_code i40e_aq_set_switch_config(struct i40e_hw *hw,
- u16 flags,
- u16 valid_flags, u8 mode,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_set_switch_config(struct i40e_hw *hw,
+ u16 flags,
+ u16 valid_flags, u8 mode,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_switch_config *scfg =
(struct i40e_aqc_set_switch_config *)&desc.params.raw;
- enum i40e_status_code status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_switch_config);
@@ -2407,16 +2262,16 @@ enum i40e_status_code i40e_aq_set_switch_config(struct i40e_hw *hw,
*
* Get the firmware version from the admin queue commands
**/
-i40e_status i40e_aq_get_firmware_version(struct i40e_hw *hw,
- u16 *fw_major_version, u16 *fw_minor_version,
- u32 *fw_build,
- u16 *api_major_version, u16 *api_minor_version,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_get_firmware_version(struct i40e_hw *hw,
+ u16 *fw_major_version, u16 *fw_minor_version,
+ u32 *fw_build,
+ u16 *api_major_version, u16 *api_minor_version,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_get_version *resp =
(struct i40e_aqc_get_version *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_get_version);
@@ -2446,14 +2301,14 @@ i40e_status i40e_aq_get_firmware_version(struct i40e_hw *hw,
*
* Send the driver version to the firmware
**/
-i40e_status i40e_aq_send_driver_version(struct i40e_hw *hw,
+int i40e_aq_send_driver_version(struct i40e_hw *hw,
struct i40e_driver_version *dv,
struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_driver_version *cmd =
(struct i40e_aqc_driver_version *)&desc.params.raw;
- i40e_status status;
+ int status;
u16 len;
if (dv == NULL)
@@ -2488,9 +2343,9 @@ i40e_status i40e_aq_send_driver_version(struct i40e_hw *hw,
*
* Side effect: LinkStatusEvent reporting becomes enabled
**/
-i40e_status i40e_get_link_status(struct i40e_hw *hw, bool *link_up)
+int i40e_get_link_status(struct i40e_hw *hw, bool *link_up)
{
- i40e_status status = 0;
+ int status = 0;
if (hw->phy.get_link_info) {
status = i40e_update_link_info(hw);
@@ -2509,10 +2364,10 @@ i40e_status i40e_get_link_status(struct i40e_hw *hw, bool *link_up)
* i40e_update_link_info - update status of the HW network link
* @hw: pointer to the hw struct
**/
-noinline_for_stack i40e_status i40e_update_link_info(struct i40e_hw *hw)
+noinline_for_stack int i40e_update_link_info(struct i40e_hw *hw)
{
struct i40e_aq_get_phy_abilities_resp abilities;
- i40e_status status = 0;
+ int status = 0;
status = i40e_aq_get_link_info(hw, true, NULL, NULL);
if (status)
@@ -2559,19 +2414,19 @@ noinline_for_stack i40e_status i40e_update_link_info(struct i40e_hw *hw)
* This asks the FW to add a VEB between the uplink and downlink
* elements. If the uplink SEID is 0, this will be a floating VEB.
**/
-i40e_status i40e_aq_add_veb(struct i40e_hw *hw, u16 uplink_seid,
- u16 downlink_seid, u8 enabled_tc,
- bool default_port, u16 *veb_seid,
- bool enable_stats,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_add_veb(struct i40e_hw *hw, u16 uplink_seid,
+ u16 downlink_seid, u8 enabled_tc,
+ bool default_port, u16 *veb_seid,
+ bool enable_stats,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_add_veb *cmd =
(struct i40e_aqc_add_veb *)&desc.params.raw;
struct i40e_aqc_add_veb_completion *resp =
(struct i40e_aqc_add_veb_completion *)&desc.params.raw;
- i40e_status status;
u16 veb_flags = 0;
+ int status;
/* SEIDs need to either both be set or both be 0 for floating VEB */
if (!!uplink_seid != !!downlink_seid)
@@ -2617,17 +2472,17 @@ i40e_status i40e_aq_add_veb(struct i40e_hw *hw, u16 uplink_seid,
* This retrieves the parameters for a particular VEB, specified by
* uplink_seid, and returns them to the caller.
**/
-i40e_status i40e_aq_get_veb_parameters(struct i40e_hw *hw,
- u16 veb_seid, u16 *switch_id,
- bool *floating, u16 *statistic_index,
- u16 *vebs_used, u16 *vebs_free,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_get_veb_parameters(struct i40e_hw *hw,
+ u16 veb_seid, u16 *switch_id,
+ bool *floating, u16 *statistic_index,
+ u16 *vebs_used, u16 *vebs_free,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_get_veb_parameters_completion *cmd_resp =
(struct i40e_aqc_get_veb_parameters_completion *)
&desc.params.raw;
- i40e_status status;
+ int status;
if (veb_seid == 0)
return I40E_ERR_PARAM;
@@ -2711,7 +2566,7 @@ i40e_prepare_add_macvlan(struct i40e_aqc_add_macvlan_element_data *mv_list,
*
* Add MAC/VLAN addresses to the HW filtering
**/
-i40e_status
+int
i40e_aq_add_macvlan(struct i40e_hw *hw, u16 seid,
struct i40e_aqc_add_macvlan_element_data *mv_list,
u16 count, struct i40e_asq_cmd_details *cmd_details)
@@ -2743,7 +2598,7 @@ i40e_aq_add_macvlan(struct i40e_hw *hw, u16 seid,
* It also calls _v2 versions of asq_send_command functions to
* get the aq_status on the stack.
**/
-i40e_status
+int
i40e_aq_add_macvlan_v2(struct i40e_hw *hw, u16 seid,
struct i40e_aqc_add_macvlan_element_data *mv_list,
u16 count, struct i40e_asq_cmd_details *cmd_details,
@@ -2771,15 +2626,16 @@ i40e_aq_add_macvlan_v2(struct i40e_hw *hw, u16 seid,
*
* Remove MAC/VLAN addresses from the HW filtering
**/
-i40e_status i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 seid,
- struct i40e_aqc_remove_macvlan_element_data *mv_list,
- u16 count, struct i40e_asq_cmd_details *cmd_details)
+int
+i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 seid,
+ struct i40e_aqc_remove_macvlan_element_data *mv_list,
+ u16 count, struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_macvlan *cmd =
(struct i40e_aqc_macvlan *)&desc.params.raw;
- i40e_status status;
u16 buf_size;
+ int status;
if (count == 0 || !mv_list || !hw)
return I40E_ERR_PARAM;
@@ -2818,7 +2674,7 @@ i40e_status i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 seid,
* It also calls _v2 versions of asq_send_command functions to
* get the aq_status on the stack.
**/
-i40e_status
+int
i40e_aq_remove_macvlan_v2(struct i40e_hw *hw, u16 seid,
struct i40e_aqc_remove_macvlan_element_data *mv_list,
u16 count, struct i40e_asq_cmd_details *cmd_details,
@@ -2866,19 +2722,19 @@ i40e_aq_remove_macvlan_v2(struct i40e_hw *hw, u16 seid,
* Add/Delete a mirror rule to a specific switch. Mirror rules are supported for
* VEBs/VEPA elements only
**/
-static i40e_status i40e_mirrorrule_op(struct i40e_hw *hw,
- u16 opcode, u16 sw_seid, u16 rule_type, u16 id,
- u16 count, __le16 *mr_list,
- struct i40e_asq_cmd_details *cmd_details,
- u16 *rule_id, u16 *rules_used, u16 *rules_free)
+static int i40e_mirrorrule_op(struct i40e_hw *hw,
+ u16 opcode, u16 sw_seid, u16 rule_type, u16 id,
+ u16 count, __le16 *mr_list,
+ struct i40e_asq_cmd_details *cmd_details,
+ u16 *rule_id, u16 *rules_used, u16 *rules_free)
{
struct i40e_aq_desc desc;
struct i40e_aqc_add_delete_mirror_rule *cmd =
(struct i40e_aqc_add_delete_mirror_rule *)&desc.params.raw;
struct i40e_aqc_add_delete_mirror_rule_completion *resp =
(struct i40e_aqc_add_delete_mirror_rule_completion *)&desc.params.raw;
- i40e_status status;
u16 buf_size;
+ int status;
buf_size = count * sizeof(*mr_list);
@@ -2926,10 +2782,11 @@ static i40e_status i40e_mirrorrule_op(struct i40e_hw *hw,
*
* Add mirror rule. Mirror rules are supported for VEBs or VEPA elements only
**/
-i40e_status i40e_aq_add_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
- u16 rule_type, u16 dest_vsi, u16 count, __le16 *mr_list,
- struct i40e_asq_cmd_details *cmd_details,
- u16 *rule_id, u16 *rules_used, u16 *rules_free)
+int i40e_aq_add_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
+ u16 rule_type, u16 dest_vsi, u16 count,
+ __le16 *mr_list,
+ struct i40e_asq_cmd_details *cmd_details,
+ u16 *rule_id, u16 *rules_used, u16 *rules_free)
{
if (!(rule_type == I40E_AQC_MIRROR_RULE_TYPE_ALL_INGRESS ||
rule_type == I40E_AQC_MIRROR_RULE_TYPE_ALL_EGRESS)) {
@@ -2957,10 +2814,11 @@ i40e_status i40e_aq_add_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
*
* Delete a mirror rule. Mirror rules are supported for VEBs/VEPA elements only
**/
-i40e_status i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
- u16 rule_type, u16 rule_id, u16 count, __le16 *mr_list,
- struct i40e_asq_cmd_details *cmd_details,
- u16 *rules_used, u16 *rules_free)
+int i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
+ u16 rule_type, u16 rule_id, u16 count,
+ __le16 *mr_list,
+ struct i40e_asq_cmd_details *cmd_details,
+ u16 *rules_used, u16 *rules_free)
{
/* Rule ID has to be valid except rule_type: INGRESS VLAN mirroring */
if (rule_type == I40E_AQC_MIRROR_RULE_TYPE_VLAN) {
@@ -2989,14 +2847,14 @@ i40e_status i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
*
* send msg to vf
**/
-i40e_status i40e_aq_send_msg_to_vf(struct i40e_hw *hw, u16 vfid,
- u32 v_opcode, u32 v_retval, u8 *msg, u16 msglen,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_send_msg_to_vf(struct i40e_hw *hw, u16 vfid,
+ u32 v_opcode, u32 v_retval, u8 *msg, u16 msglen,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_pf_vf_message *cmd =
(struct i40e_aqc_pf_vf_message *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_send_msg_to_vf);
cmd->id = cpu_to_le32(vfid);
@@ -3024,14 +2882,14 @@ i40e_status i40e_aq_send_msg_to_vf(struct i40e_hw *hw, u16 vfid,
*
* Read the register using the admin queue commands
**/
-i40e_status i40e_aq_debug_read_register(struct i40e_hw *hw,
+int i40e_aq_debug_read_register(struct i40e_hw *hw,
u32 reg_addr, u64 *reg_val,
struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_debug_reg_read_write *cmd_resp =
(struct i40e_aqc_debug_reg_read_write *)&desc.params.raw;
- i40e_status status;
+ int status;
if (reg_val == NULL)
return I40E_ERR_PARAM;
@@ -3059,14 +2917,14 @@ i40e_status i40e_aq_debug_read_register(struct i40e_hw *hw,
*
* Write to a register using the admin queue commands
**/
-i40e_status i40e_aq_debug_write_register(struct i40e_hw *hw,
- u32 reg_addr, u64 reg_val,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_debug_write_register(struct i40e_hw *hw,
+ u32 reg_addr, u64 reg_val,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_debug_reg_read_write *cmd =
(struct i40e_aqc_debug_reg_read_write *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_debug_write_reg);
@@ -3090,16 +2948,16 @@ i40e_status i40e_aq_debug_write_register(struct i40e_hw *hw,
*
* requests common resource using the admin queue commands
**/
-i40e_status i40e_aq_request_resource(struct i40e_hw *hw,
- enum i40e_aq_resources_ids resource,
- enum i40e_aq_resource_access_type access,
- u8 sdp_number, u64 *timeout,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_request_resource(struct i40e_hw *hw,
+ enum i40e_aq_resources_ids resource,
+ enum i40e_aq_resource_access_type access,
+ u8 sdp_number, u64 *timeout,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_request_resource *cmd_resp =
(struct i40e_aqc_request_resource *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_request_resource);
@@ -3129,15 +2987,15 @@ i40e_status i40e_aq_request_resource(struct i40e_hw *hw,
*
* release common resource using the admin queue commands
**/
-i40e_status i40e_aq_release_resource(struct i40e_hw *hw,
- enum i40e_aq_resources_ids resource,
- u8 sdp_number,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_release_resource(struct i40e_hw *hw,
+ enum i40e_aq_resources_ids resource,
+ u8 sdp_number,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_request_resource *cmd =
(struct i40e_aqc_request_resource *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_release_resource);
@@ -3161,15 +3019,15 @@ i40e_status i40e_aq_release_resource(struct i40e_hw *hw,
*
* Read the NVM using the admin queue commands
**/
-i40e_status i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer,
- u32 offset, u16 length, void *data,
- bool last_command,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer,
+ u32 offset, u16 length, void *data,
+ bool last_command,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_nvm_update *cmd =
(struct i40e_aqc_nvm_update *)&desc.params.raw;
- i40e_status status;
+ int status;
/* In offset the highest byte must be zeroed. */
if (offset & 0xFF000000) {
@@ -3207,14 +3065,14 @@ i40e_aq_read_nvm_exit:
*
* Erase the NVM sector using the admin queue commands
**/
-i40e_status i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer,
- u32 offset, u16 length, bool last_command,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer,
+ u32 offset, u16 length, bool last_command,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_nvm_update *cmd =
(struct i40e_aqc_nvm_update *)&desc.params.raw;
- i40e_status status;
+ int status;
/* In offset the highest byte must be zeroed. */
if (offset & 0xFF000000) {
@@ -3255,8 +3113,8 @@ static void i40e_parse_discover_capabilities(struct i40e_hw *hw, void *buff,
u32 number, logical_id, phys_id;
struct i40e_hw_capabilities *p;
u16 id, ocp_cfg_word0;
- i40e_status status;
u8 major_rev;
+ int status;
u32 i = 0;
cap = (struct i40e_aqc_list_capabilities_element_resp *) buff;
@@ -3497,14 +3355,14 @@ static void i40e_parse_discover_capabilities(struct i40e_hw *hw, void *buff,
*
* Get the device capabilities descriptions from the firmware
**/
-i40e_status i40e_aq_discover_capabilities(struct i40e_hw *hw,
- void *buff, u16 buff_size, u16 *data_size,
- enum i40e_admin_queue_opc list_type_opc,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_discover_capabilities(struct i40e_hw *hw,
+ void *buff, u16 buff_size, u16 *data_size,
+ enum i40e_admin_queue_opc list_type_opc,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aqc_list_capabilites *cmd;
struct i40e_aq_desc desc;
- i40e_status status = 0;
+ int status = 0;
cmd = (struct i40e_aqc_list_capabilites *)&desc.params.raw;
@@ -3546,15 +3404,15 @@ exit:
*
* Update the NVM using the admin queue commands
**/
-i40e_status i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer,
- u32 offset, u16 length, void *data,
- bool last_command, u8 preservation_flags,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer,
+ u32 offset, u16 length, void *data,
+ bool last_command, u8 preservation_flags,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_nvm_update *cmd =
(struct i40e_aqc_nvm_update *)&desc.params.raw;
- i40e_status status;
+ int status;
/* In offset the highest byte must be zeroed. */
if (offset & 0xFF000000) {
@@ -3599,13 +3457,13 @@ i40e_aq_update_nvm_exit:
*
* Rearrange NVM structure, available only for transition FW
**/
-i40e_status i40e_aq_rearrange_nvm(struct i40e_hw *hw,
- u8 rearrange_nvm,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_rearrange_nvm(struct i40e_hw *hw,
+ u8 rearrange_nvm,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aqc_nvm_update *cmd;
- i40e_status status;
struct i40e_aq_desc desc;
+ int status;
cmd = (struct i40e_aqc_nvm_update *)&desc.params.raw;
@@ -3639,17 +3497,17 @@ i40e_aq_rearrange_nvm_exit:
*
* Requests the complete LLDP MIB (entire packet).
**/
-i40e_status i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type,
- u8 mib_type, void *buff, u16 buff_size,
- u16 *local_len, u16 *remote_len,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type,
+ u8 mib_type, void *buff, u16 buff_size,
+ u16 *local_len, u16 *remote_len,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_lldp_get_mib *cmd =
(struct i40e_aqc_lldp_get_mib *)&desc.params.raw;
struct i40e_aqc_lldp_get_mib *resp =
(struct i40e_aqc_lldp_get_mib *)&desc.params.raw;
- i40e_status status;
+ int status;
if (buff_size == 0 || !buff)
return I40E_ERR_PARAM;
@@ -3689,14 +3547,14 @@ i40e_status i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type,
*
* Set the LLDP MIB.
**/
-enum i40e_status_code
+int
i40e_aq_set_lldp_mib(struct i40e_hw *hw,
u8 mib_type, void *buff, u16 buff_size,
struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aqc_lldp_set_local_mib *cmd;
- enum i40e_status_code status;
struct i40e_aq_desc desc;
+ int status;
cmd = (struct i40e_aqc_lldp_set_local_mib *)&desc.params.raw;
if (buff_size == 0 || !buff)
@@ -3728,14 +3586,14 @@ i40e_aq_set_lldp_mib(struct i40e_hw *hw,
* Enable or Disable posting of an event on ARQ when LLDP MIB
* associated with the interface changes
**/
-i40e_status i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw,
- bool enable_update,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw,
+ bool enable_update,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_lldp_update_mib *cmd =
(struct i40e_aqc_lldp_update_mib *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_update_mib);
@@ -3757,14 +3615,14 @@ i40e_status i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw,
* Restore LLDP Agent factory settings if @restore is set to true. Otherwise,
* only returns the factory setting in the AQ response.
**/
-enum i40e_status_code
+int
i40e_aq_restore_lldp(struct i40e_hw *hw, u8 *setting, bool restore,
struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_lldp_restore *cmd =
(struct i40e_aqc_lldp_restore *)&desc.params.raw;
- i40e_status status;
+ int status;
if (!(hw->flags & I40E_HW_FLAG_FW_LLDP_PERSISTENT)) {
i40e_debug(hw, I40E_DEBUG_ALL,
@@ -3794,14 +3652,14 @@ i40e_aq_restore_lldp(struct i40e_hw *hw, u8 *setting, bool restore,
*
* Stop or Shutdown the embedded LLDP Agent
**/
-i40e_status i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent,
- bool persist,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent,
+ bool persist,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_lldp_stop *cmd =
(struct i40e_aqc_lldp_stop *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_stop);
@@ -3829,13 +3687,13 @@ i40e_status i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent,
*
* Start the embedded LLDP Agent on all ports.
**/
-i40e_status i40e_aq_start_lldp(struct i40e_hw *hw, bool persist,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_start_lldp(struct i40e_hw *hw, bool persist,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_lldp_start *cmd =
(struct i40e_aqc_lldp_start *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_start);
@@ -3861,14 +3719,14 @@ i40e_status i40e_aq_start_lldp(struct i40e_hw *hw, bool persist,
* @dcb_enable: True if DCB configuration needs to be applied
*
**/
-enum i40e_status_code
+int
i40e_aq_set_dcb_parameters(struct i40e_hw *hw, bool dcb_enable,
struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_dcb_parameters *cmd =
(struct i40e_aqc_set_dcb_parameters *)&desc.params.raw;
- i40e_status status;
+ int status;
if (!(hw->flags & I40E_HW_FLAG_FW_LLDP_STOPPABLE))
return I40E_ERR_DEVICE_NOT_SUPPORTED;
@@ -3894,12 +3752,12 @@ i40e_aq_set_dcb_parameters(struct i40e_hw *hw, bool dcb_enable,
*
* Get CEE DCBX mode operational configuration from firmware
**/
-i40e_status i40e_aq_get_cee_dcb_config(struct i40e_hw *hw,
- void *buff, u16 buff_size,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_get_cee_dcb_config(struct i40e_hw *hw,
+ void *buff, u16 buff_size,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
- i40e_status status;
+ int status;
if (buff_size == 0 || !buff)
return I40E_ERR_PARAM;
@@ -3925,17 +3783,17 @@ i40e_status i40e_aq_get_cee_dcb_config(struct i40e_hw *hw,
* and this function will call cpu_to_le16 to convert from Host byte order to
* Little Endian order.
**/
-i40e_status i40e_aq_add_udp_tunnel(struct i40e_hw *hw,
- u16 udp_port, u8 protocol_index,
- u8 *filter_index,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_add_udp_tunnel(struct i40e_hw *hw,
+ u16 udp_port, u8 protocol_index,
+ u8 *filter_index,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_add_udp_tunnel *cmd =
(struct i40e_aqc_add_udp_tunnel *)&desc.params.raw;
struct i40e_aqc_del_udp_tunnel_completion *resp =
(struct i40e_aqc_del_udp_tunnel_completion *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_udp_tunnel);
@@ -3956,13 +3814,13 @@ i40e_status i40e_aq_add_udp_tunnel(struct i40e_hw *hw,
* @index: filter index
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_remove_udp_tunnel *cmd =
(struct i40e_aqc_remove_udp_tunnel *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_del_udp_tunnel);
@@ -3981,13 +3839,13 @@ i40e_status i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index,
*
* This deletes a switch element from the switch.
**/
-i40e_status i40e_aq_delete_element(struct i40e_hw *hw, u16 seid,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_delete_element(struct i40e_hw *hw, u16 seid,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_switch_seid *cmd =
(struct i40e_aqc_switch_seid *)&desc.params.raw;
- i40e_status status;
+ int status;
if (seid == 0)
return I40E_ERR_PARAM;
@@ -4011,11 +3869,11 @@ i40e_status i40e_aq_delete_element(struct i40e_hw *hw, u16 seid,
* recomputed and modified. The retval field in the descriptor
* will be set to 0 when RPB is modified.
**/
-i40e_status i40e_aq_dcb_updated(struct i40e_hw *hw,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_dcb_updated(struct i40e_hw *hw,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_dcb_updated);
@@ -4035,15 +3893,15 @@ i40e_status i40e_aq_dcb_updated(struct i40e_hw *hw,
*
* Generic command handler for Tx scheduler AQ commands
**/
-static i40e_status i40e_aq_tx_sched_cmd(struct i40e_hw *hw, u16 seid,
+static int i40e_aq_tx_sched_cmd(struct i40e_hw *hw, u16 seid,
void *buff, u16 buff_size,
- enum i40e_admin_queue_opc opcode,
+ enum i40e_admin_queue_opc opcode,
struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_tx_sched_ind *cmd =
(struct i40e_aqc_tx_sched_ind *)&desc.params.raw;
- i40e_status status;
+ int status;
bool cmd_param_flag = false;
switch (opcode) {
@@ -4093,14 +3951,14 @@ static i40e_status i40e_aq_tx_sched_cmd(struct i40e_hw *hw, u16 seid,
* @max_credit: Max BW limit credits
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw,
+int i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw,
u16 seid, u16 credit, u8 max_credit,
struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_configure_vsi_bw_limit *cmd =
(struct i40e_aqc_configure_vsi_bw_limit *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_configure_vsi_bw_limit);
@@ -4121,10 +3979,10 @@ i40e_status i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw,
* @bw_data: Buffer holding enabled TCs, relative TC BW limit/credits
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_configure_vsi_tc_bw_data *bw_data,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_aqc_configure_vsi_tc_bw_data *bw_data,
+ struct i40e_asq_cmd_details *cmd_details)
{
return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data),
i40e_aqc_opc_configure_vsi_tc_bw,
@@ -4139,11 +3997,12 @@ i40e_status i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw,
* @opcode: Tx scheduler AQ command opcode
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_config_switch_comp_ets(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_configure_switching_comp_ets_data *ets_data,
- enum i40e_admin_queue_opc opcode,
- struct i40e_asq_cmd_details *cmd_details)
+int
+i40e_aq_config_switch_comp_ets(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_aqc_configure_switching_comp_ets_data *ets_data,
+ enum i40e_admin_queue_opc opcode,
+ struct i40e_asq_cmd_details *cmd_details)
{
return i40e_aq_tx_sched_cmd(hw, seid, (void *)ets_data,
sizeof(*ets_data), opcode, cmd_details);
@@ -4156,7 +4015,8 @@ i40e_status i40e_aq_config_switch_comp_ets(struct i40e_hw *hw,
* @bw_data: Buffer holding enabled TCs, relative/absolute TC BW limit/credits
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_config_switch_comp_bw_config(struct i40e_hw *hw,
+int
+i40e_aq_config_switch_comp_bw_config(struct i40e_hw *hw,
u16 seid,
struct i40e_aqc_configure_switching_comp_bw_config_data *bw_data,
struct i40e_asq_cmd_details *cmd_details)
@@ -4173,10 +4033,11 @@ i40e_status i40e_aq_config_switch_comp_bw_config(struct i40e_hw *hw,
* @bw_data: Buffer to hold VSI BW configuration
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_query_vsi_bw_config(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_query_vsi_bw_config_resp *bw_data,
- struct i40e_asq_cmd_details *cmd_details)
+int
+i40e_aq_query_vsi_bw_config(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_aqc_query_vsi_bw_config_resp *bw_data,
+ struct i40e_asq_cmd_details *cmd_details)
{
return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data),
i40e_aqc_opc_query_vsi_bw_config,
@@ -4190,10 +4051,11 @@ i40e_status i40e_aq_query_vsi_bw_config(struct i40e_hw *hw,
* @bw_data: Buffer to hold VSI BW configuration per TC
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_query_vsi_ets_sla_config(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_query_vsi_ets_sla_config_resp *bw_data,
- struct i40e_asq_cmd_details *cmd_details)
+int
+i40e_aq_query_vsi_ets_sla_config(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_aqc_query_vsi_ets_sla_config_resp *bw_data,
+ struct i40e_asq_cmd_details *cmd_details)
{
return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data),
i40e_aqc_opc_query_vsi_ets_sla_config,
@@ -4207,10 +4069,11 @@ i40e_status i40e_aq_query_vsi_ets_sla_config(struct i40e_hw *hw,
* @bw_data: Buffer to hold switching component's per TC BW config
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_query_switching_comp_ets_config_resp *bw_data,
- struct i40e_asq_cmd_details *cmd_details)
+int
+i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_aqc_query_switching_comp_ets_config_resp *bw_data,
+ struct i40e_asq_cmd_details *cmd_details)
{
return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data),
i40e_aqc_opc_query_switching_comp_ets_config,
@@ -4224,10 +4087,11 @@ i40e_status i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw,
* @bw_data: Buffer to hold current ETS configuration for the Physical Port
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_query_port_ets_config(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_query_port_ets_config_resp *bw_data,
- struct i40e_asq_cmd_details *cmd_details)
+int
+i40e_aq_query_port_ets_config(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_aqc_query_port_ets_config_resp *bw_data,
+ struct i40e_asq_cmd_details *cmd_details)
{
return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data),
i40e_aqc_opc_query_port_ets_config,
@@ -4241,10 +4105,11 @@ i40e_status i40e_aq_query_port_ets_config(struct i40e_hw *hw,
* @bw_data: Buffer to hold switching component's BW configuration
* @cmd_details: pointer to command details structure or NULL
**/
-i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_query_switching_comp_bw_config_resp *bw_data,
- struct i40e_asq_cmd_details *cmd_details)
+int
+i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_aqc_query_switching_comp_bw_config_resp *bw_data,
+ struct i40e_asq_cmd_details *cmd_details)
{
return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data),
i40e_aqc_opc_query_switching_comp_bw_config,
@@ -4263,8 +4128,9 @@ i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
* Returns 0 if the values passed are valid and within
* range else returns an error.
**/
-static i40e_status i40e_validate_filter_settings(struct i40e_hw *hw,
- struct i40e_filter_control_settings *settings)
+static int
+i40e_validate_filter_settings(struct i40e_hw *hw,
+ struct i40e_filter_control_settings *settings)
{
u32 fcoe_cntx_size, fcoe_filt_size;
u32 fcoe_fmax;
@@ -4350,11 +4216,11 @@ static i40e_status i40e_validate_filter_settings(struct i40e_hw *hw,
* for a single PF. It is expected that these settings are programmed
* at the driver initialization time.
**/
-i40e_status i40e_set_filter_control(struct i40e_hw *hw,
- struct i40e_filter_control_settings *settings)
+int i40e_set_filter_control(struct i40e_hw *hw,
+ struct i40e_filter_control_settings *settings)
{
- i40e_status ret = 0;
u32 hash_lut_size = 0;
+ int ret = 0;
u32 val;
if (!settings)
@@ -4424,11 +4290,11 @@ i40e_status i40e_set_filter_control(struct i40e_hw *hw,
* In return it will update the total number of perfect filter count in
* the stats member.
**/
-i40e_status i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw,
- u8 *mac_addr, u16 ethtype, u16 flags,
- u16 vsi_seid, u16 queue, bool is_add,
- struct i40e_control_filter_stats *stats,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw,
+ u8 *mac_addr, u16 ethtype, u16 flags,
+ u16 vsi_seid, u16 queue, bool is_add,
+ struct i40e_control_filter_stats *stats,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_add_remove_control_packet_filter *cmd =
@@ -4437,7 +4303,7 @@ i40e_status i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw,
struct i40e_aqc_add_remove_control_packet_filter_completion *resp =
(struct i40e_aqc_add_remove_control_packet_filter_completion *)
&desc.params.raw;
- i40e_status status;
+ int status;
if (vsi_seid == 0)
return I40E_ERR_PARAM;
@@ -4483,7 +4349,7 @@ void i40e_add_filter_to_drop_tx_flow_control_frames(struct i40e_hw *hw,
I40E_AQC_ADD_CONTROL_PACKET_FLAGS_DROP |
I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TX;
u16 ethtype = I40E_FLOW_CONTROL_ETHTYPE;
- i40e_status status;
+ int status;
status = i40e_aq_add_rem_control_packet_filter(hw, NULL, ethtype, flag,
seid, 0, true, NULL,
@@ -4505,14 +4371,14 @@ void i40e_add_filter_to_drop_tx_flow_control_frames(struct i40e_hw *hw,
* is not passed then only the register at 'reg_addr0' is read.
*
**/
-static i40e_status i40e_aq_alternate_read(struct i40e_hw *hw,
- u32 reg_addr0, u32 *reg_val0,
- u32 reg_addr1, u32 *reg_val1)
+static int i40e_aq_alternate_read(struct i40e_hw *hw,
+ u32 reg_addr0, u32 *reg_val0,
+ u32 reg_addr1, u32 *reg_val1)
{
struct i40e_aq_desc desc;
struct i40e_aqc_alternate_write *cmd_resp =
(struct i40e_aqc_alternate_write *)&desc.params.raw;
- i40e_status status;
+ int status;
if (!reg_val0)
return I40E_ERR_PARAM;
@@ -4541,12 +4407,12 @@ static i40e_status i40e_aq_alternate_read(struct i40e_hw *hw,
*
* Suspend port's Tx traffic
**/
-i40e_status i40e_aq_suspend_port_tx(struct i40e_hw *hw, u16 seid,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_suspend_port_tx(struct i40e_hw *hw, u16 seid,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aqc_tx_sched_ind *cmd;
struct i40e_aq_desc desc;
- i40e_status status;
+ int status;
cmd = (struct i40e_aqc_tx_sched_ind *)&desc.params.raw;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_suspend_port_tx);
@@ -4563,11 +4429,11 @@ i40e_status i40e_aq_suspend_port_tx(struct i40e_hw *hw, u16 seid,
*
* Resume port's Tx traffic
**/
-i40e_status i40e_aq_resume_port_tx(struct i40e_hw *hw,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_resume_port_tx(struct i40e_hw *hw,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_resume_port_tx);
@@ -4637,18 +4503,18 @@ void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status)
* Dump internal FW/HW data for debug purposes.
*
**/
-i40e_status i40e_aq_debug_dump(struct i40e_hw *hw, u8 cluster_id,
- u8 table_id, u32 start_index, u16 buff_size,
- void *buff, u16 *ret_buff_size,
- u8 *ret_next_table, u32 *ret_next_index,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_debug_dump(struct i40e_hw *hw, u8 cluster_id,
+ u8 table_id, u32 start_index, u16 buff_size,
+ void *buff, u16 *ret_buff_size,
+ u8 *ret_next_table, u32 *ret_next_index,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_debug_dump_internals *cmd =
(struct i40e_aqc_debug_dump_internals *)&desc.params.raw;
struct i40e_aqc_debug_dump_internals *resp =
(struct i40e_aqc_debug_dump_internals *)&desc.params.raw;
- i40e_status status;
+ int status;
if (buff_size == 0 || !buff)
return I40E_ERR_PARAM;
@@ -4689,12 +4555,12 @@ i40e_status i40e_aq_debug_dump(struct i40e_hw *hw, u8 cluster_id,
*
* Read bw from the alternate ram for the given pf
**/
-i40e_status i40e_read_bw_from_alt_ram(struct i40e_hw *hw,
- u32 *max_bw, u32 *min_bw,
- bool *min_valid, bool *max_valid)
+int i40e_read_bw_from_alt_ram(struct i40e_hw *hw,
+ u32 *max_bw, u32 *min_bw,
+ bool *min_valid, bool *max_valid)
{
- i40e_status status;
u32 max_bw_addr, min_bw_addr;
+ int status;
/* Calculate the address of the min/max bw registers */
max_bw_addr = I40E_ALT_STRUCT_FIRST_PF_OFFSET +
@@ -4729,13 +4595,14 @@ i40e_status i40e_read_bw_from_alt_ram(struct i40e_hw *hw,
*
* Configure partitions guaranteed/max bw
**/
-i40e_status i40e_aq_configure_partition_bw(struct i40e_hw *hw,
- struct i40e_aqc_configure_partition_bw_data *bw_data,
- struct i40e_asq_cmd_details *cmd_details)
+int
+i40e_aq_configure_partition_bw(struct i40e_hw *hw,
+ struct i40e_aqc_configure_partition_bw_data *bw_data,
+ struct i40e_asq_cmd_details *cmd_details)
{
- i40e_status status;
- struct i40e_aq_desc desc;
u16 bwd_size = sizeof(*bw_data);
+ struct i40e_aq_desc desc;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_configure_partition_bw);
@@ -4764,11 +4631,11 @@ i40e_status i40e_aq_configure_partition_bw(struct i40e_hw *hw,
*
* Reads specified PHY register value
**/
-i40e_status i40e_read_phy_register_clause22(struct i40e_hw *hw,
- u16 reg, u8 phy_addr, u16 *value)
+int i40e_read_phy_register_clause22(struct i40e_hw *hw,
+ u16 reg, u8 phy_addr, u16 *value)
{
- i40e_status status = I40E_ERR_TIMEOUT;
u8 port_num = (u8)hw->func_caps.mdio_port_num;
+ int status = I40E_ERR_TIMEOUT;
u32 command = 0;
u16 retry = 1000;
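
The clause-22 and clause-45 PHY helpers in the surrounding hunks share one polling idiom: write the command word to the port's GLGEN_MSCA register, then spin until the MDICMD bit clears. Below is a condensed, illustrative fragment of that loop, assuming the register and mask names shown in these hunks; the command setup, error handling, and the data read on completion are omitted, and the exact delay is an assumption.

        /* Kick off the MDIO transaction, then poll for completion
         * (condensed sketch; rd32()/wr32() are the driver's register
         * accessors, and command/retry/status come from the caller).
         */
        wr32(hw, I40E_GLGEN_MSCA(port_num), command);
        do {
                command = rd32(hw, I40E_GLGEN_MSCA(port_num));
                if (!(command & I40E_GLGEN_MSCA_MDICMD_MASK)) {
                        status = 0;     /* transaction completed */
                        break;
                }
                udelay(10);             /* delay value is an assumption */
        } while (retry--);
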
@@ -4809,11 +4676,11 @@ i40e_status i40e_read_phy_register_clause22(struct i40e_hw *hw,
*
* Writes specified PHY register value
**/
-i40e_status i40e_write_phy_register_clause22(struct i40e_hw *hw,
- u16 reg, u8 phy_addr, u16 value)
+int i40e_write_phy_register_clause22(struct i40e_hw *hw,
+ u16 reg, u8 phy_addr, u16 value)
{
- i40e_status status = I40E_ERR_TIMEOUT;
u8 port_num = (u8)hw->func_caps.mdio_port_num;
+ int status = I40E_ERR_TIMEOUT;
u32 command = 0;
u16 retry = 1000;
@@ -4850,13 +4717,13 @@ i40e_status i40e_write_phy_register_clause22(struct i40e_hw *hw,
*
* Reads specified PHY register value
**/
-i40e_status i40e_read_phy_register_clause45(struct i40e_hw *hw,
- u8 page, u16 reg, u8 phy_addr, u16 *value)
+int i40e_read_phy_register_clause45(struct i40e_hw *hw,
+ u8 page, u16 reg, u8 phy_addr, u16 *value)
{
- i40e_status status = I40E_ERR_TIMEOUT;
+ u8 port_num = hw->func_caps.mdio_port_num;
+ int status = I40E_ERR_TIMEOUT;
u32 command = 0;
u16 retry = 1000;
- u8 port_num = hw->func_caps.mdio_port_num;
command = (reg << I40E_GLGEN_MSCA_MDIADD_SHIFT) |
(page << I40E_GLGEN_MSCA_DEVADD_SHIFT) |
@@ -4924,13 +4791,13 @@ phy_read_end:
*
* Writes value to specified PHY register
**/
-i40e_status i40e_write_phy_register_clause45(struct i40e_hw *hw,
- u8 page, u16 reg, u8 phy_addr, u16 value)
+int i40e_write_phy_register_clause45(struct i40e_hw *hw,
+ u8 page, u16 reg, u8 phy_addr, u16 value)
{
- i40e_status status = I40E_ERR_TIMEOUT;
- u32 command = 0;
- u16 retry = 1000;
u8 port_num = hw->func_caps.mdio_port_num;
+ int status = I40E_ERR_TIMEOUT;
+ u16 retry = 1000;
+ u32 command = 0;
command = (reg << I40E_GLGEN_MSCA_MDIADD_SHIFT) |
(page << I40E_GLGEN_MSCA_DEVADD_SHIFT) |
@@ -4991,10 +4858,10 @@ phy_write_end:
*
* Writes value to specified PHY register
**/
-i40e_status i40e_write_phy_register(struct i40e_hw *hw,
- u8 page, u16 reg, u8 phy_addr, u16 value)
+int i40e_write_phy_register(struct i40e_hw *hw,
+ u8 page, u16 reg, u8 phy_addr, u16 value)
{
- i40e_status status;
+ int status;
switch (hw->device_id) {
case I40E_DEV_ID_1G_BASE_T_X722:
@@ -5030,10 +4897,10 @@ i40e_status i40e_write_phy_register(struct i40e_hw *hw,
*
* Reads specified PHY register value
**/
-i40e_status i40e_read_phy_register(struct i40e_hw *hw,
- u8 page, u16 reg, u8 phy_addr, u16 *value)
+int i40e_read_phy_register(struct i40e_hw *hw,
+ u8 page, u16 reg, u8 phy_addr, u16 *value)
{
- i40e_status status;
+ int status;
switch (hw->device_id) {
case I40E_DEV_ID_1G_BASE_T_X722:
@@ -5082,17 +4949,17 @@ u8 i40e_get_phy_address(struct i40e_hw *hw, u8 dev_num)
*
* Blinks PHY link LED
**/
-i40e_status i40e_blink_phy_link_led(struct i40e_hw *hw,
- u32 time, u32 interval)
+int i40e_blink_phy_link_led(struct i40e_hw *hw,
+ u32 time, u32 interval)
{
- i40e_status status = 0;
- u32 i;
- u16 led_ctl;
- u16 gpio_led_port;
- u16 led_reg;
u16 led_addr = I40E_PHY_LED_PROV_REG_1;
+ u16 gpio_led_port;
u8 phy_addr = 0;
+ int status = 0;
+ u16 led_ctl;
u8 port_num;
+ u16 led_reg;
+ u32 i;
i = rd32(hw, I40E_PFGEN_PORTNUM);
port_num = (u8)(i & I40E_PFGEN_PORTNUM_PORT_NUM_MASK);
@@ -5154,12 +5021,12 @@ phy_blinking_end:
* @led_addr: LED register address
* @reg_val: read register value
**/
-static enum i40e_status_code i40e_led_get_reg(struct i40e_hw *hw, u16 led_addr,
- u32 *reg_val)
+static int i40e_led_get_reg(struct i40e_hw *hw, u16 led_addr,
+ u32 *reg_val)
{
- enum i40e_status_code status;
u8 phy_addr = 0;
u8 port_num;
+ int status;
u32 i;
*reg_val = 0;
@@ -5188,12 +5055,12 @@ static enum i40e_status_code i40e_led_get_reg(struct i40e_hw *hw, u16 led_addr,
* @led_addr: LED register address
* @reg_val: register value to write
**/
-static enum i40e_status_code i40e_led_set_reg(struct i40e_hw *hw, u16 led_addr,
- u32 reg_val)
+static int i40e_led_set_reg(struct i40e_hw *hw, u16 led_addr,
+ u32 reg_val)
{
- enum i40e_status_code status;
u8 phy_addr = 0;
u8 port_num;
+ int status;
u32 i;
if (hw->flags & I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE) {
@@ -5223,17 +5090,17 @@ static enum i40e_status_code i40e_led_set_reg(struct i40e_hw *hw, u16 led_addr,
* @val: original value of register to use
*
**/
-i40e_status i40e_led_get_phy(struct i40e_hw *hw, u16 *led_addr,
- u16 *val)
+int i40e_led_get_phy(struct i40e_hw *hw, u16 *led_addr,
+ u16 *val)
{
- i40e_status status = 0;
u16 gpio_led_port;
u8 phy_addr = 0;
- u16 reg_val;
+ u32 reg_val_aq;
+ int status = 0;
u16 temp_addr;
+ u16 reg_val;
u8 port_num;
u32 i;
- u32 reg_val_aq;
if (hw->flags & I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE) {
status =
@@ -5278,12 +5145,12 @@ i40e_status i40e_led_get_phy(struct i40e_hw *hw, u16 *led_addr,
* Set LEDs on or off when controlled by the PHY
*
**/
-i40e_status i40e_led_set_phy(struct i40e_hw *hw, bool on,
- u16 led_addr, u32 mode)
+int i40e_led_set_phy(struct i40e_hw *hw, bool on,
+ u16 led_addr, u32 mode)
{
- i40e_status status = 0;
u32 led_ctl = 0;
u32 led_reg = 0;
+ int status = 0;
status = i40e_led_get_reg(hw, led_addr, &led_reg);
if (status)
@@ -5327,14 +5194,14 @@ restore_config:
* Use the firmware to read the Rx control register,
* especially useful if the Rx unit is under heavy pressure
**/
-i40e_status i40e_aq_rx_ctl_read_register(struct i40e_hw *hw,
- u32 reg_addr, u32 *reg_val,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_rx_ctl_read_register(struct i40e_hw *hw,
+ u32 reg_addr, u32 *reg_val,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_rx_ctl_reg_read_write *cmd_resp =
(struct i40e_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
- i40e_status status;
+ int status;
if (!reg_val)
return I40E_ERR_PARAM;
@@ -5358,8 +5225,8 @@ i40e_status i40e_aq_rx_ctl_read_register(struct i40e_hw *hw,
**/
u32 i40e_read_rx_ctl(struct i40e_hw *hw, u32 reg_addr)
{
- i40e_status status = 0;
bool use_register;
+ int status = 0;
int retry = 5;
u32 val = 0;
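
i40e_read_rx_ctl() above layers a retry-and-fallback policy over the AQ read: retry while firmware reports EBUSY, and drop back to a direct MMIO read if the AQ path fails (use_register short-circuits to MMIO on firmware API versions known to need it). A condensed sketch of that policy, with the use_register check elided:

        do_retry:
                status = i40e_aq_rx_ctl_read_register(hw, reg_addr, &val, NULL);
                if (hw->aq.asq_last_status == I40E_AQ_RC_EBUSY && retry) {
                        usleep_range(1000, 2000);
                        retry--;
                        goto do_retry;
                }
                if (status)             /* AQ failed: fall back to MMIO */
                        val = rd32(hw, reg_addr);
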
@@ -5393,14 +5260,14 @@ do_retry:
* Use the firmware to write to an Rx control register,
* especially useful if the Rx unit is under heavy pressure
**/
-i40e_status i40e_aq_rx_ctl_write_register(struct i40e_hw *hw,
- u32 reg_addr, u32 reg_val,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_rx_ctl_write_register(struct i40e_hw *hw,
+ u32 reg_addr, u32 reg_val,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_rx_ctl_reg_read_write *cmd =
(struct i40e_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_rx_ctl_reg_write);
@@ -5420,8 +5287,8 @@ i40e_status i40e_aq_rx_ctl_write_register(struct i40e_hw *hw,
**/
void i40e_write_rx_ctl(struct i40e_hw *hw, u32 reg_addr, u32 reg_val)
{
- i40e_status status = 0;
bool use_register;
+ int status = 0;
int retry = 5;
use_register = (((hw->aq.api_maj_ver == 1) &&
@@ -5483,16 +5350,16 @@ static void i40e_mdio_if_number_selection(struct i40e_hw *hw, bool set_mdio,
* NOTE: In common cases MDIO I/F number should not be changed, that's why you
* may use simple wrapper i40e_aq_set_phy_register.
**/
-enum i40e_status_code i40e_aq_set_phy_register_ext(struct i40e_hw *hw,
- u8 phy_select, u8 dev_addr, bool page_change,
- bool set_mdio, u8 mdio_num,
- u32 reg_addr, u32 reg_val,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_set_phy_register_ext(struct i40e_hw *hw,
+ u8 phy_select, u8 dev_addr, bool page_change,
+ bool set_mdio, u8 mdio_num,
+ u32 reg_addr, u32 reg_val,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_phy_register_access *cmd =
(struct i40e_aqc_phy_register_access *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_phy_register);
@@ -5528,16 +5395,16 @@ enum i40e_status_code i40e_aq_set_phy_register_ext(struct i40e_hw *hw,
* NOTE: In common cases MDIO I/F number should not be changed, that's why you
* may use simple wrapper i40e_aq_get_phy_register.
**/
-enum i40e_status_code i40e_aq_get_phy_register_ext(struct i40e_hw *hw,
- u8 phy_select, u8 dev_addr, bool page_change,
- bool set_mdio, u8 mdio_num,
- u32 reg_addr, u32 *reg_val,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_get_phy_register_ext(struct i40e_hw *hw,
+ u8 phy_select, u8 dev_addr, bool page_change,
+ bool set_mdio, u8 mdio_num,
+ u32 reg_addr, u32 *reg_val,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_phy_register_access *cmd =
(struct i40e_aqc_phy_register_access *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_get_phy_register);
@@ -5568,18 +5435,17 @@ enum i40e_status_code i40e_aq_get_phy_register_ext(struct i40e_hw *hw,
* @error_info: returns error information
* @cmd_details: pointer to command details structure or NULL
**/
-enum
-i40e_status_code i40e_aq_write_ddp(struct i40e_hw *hw, void *buff,
- u16 buff_size, u32 track_id,
- u32 *error_offset, u32 *error_info,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_write_ddp(struct i40e_hw *hw, void *buff,
+ u16 buff_size, u32 track_id,
+ u32 *error_offset, u32 *error_info,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_write_personalization_profile *cmd =
(struct i40e_aqc_write_personalization_profile *)
&desc.params.raw;
struct i40e_aqc_write_ddp_resp *resp;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_write_personalization_profile);
@@ -5612,15 +5478,14 @@ i40e_status_code i40e_aq_write_ddp(struct i40e_hw *hw, void *buff,
* @flags: AdminQ command flags
* @cmd_details: pointer to command details structure or NULL
**/
-enum
-i40e_status_code i40e_aq_get_ddp_list(struct i40e_hw *hw, void *buff,
- u16 buff_size, u8 flags,
- struct i40e_asq_cmd_details *cmd_details)
+int i40e_aq_get_ddp_list(struct i40e_hw *hw, void *buff,
+ u16 buff_size, u8 flags,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_get_applied_profiles *cmd =
(struct i40e_aqc_get_applied_profiles *)&desc.params.raw;
- i40e_status status;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_get_personalization_profile_list);
@@ -5719,14 +5584,13 @@ i40e_find_section_in_profile(u32 section_type,
* @hw: pointer to the hw struct
* @aq: command buffer containing all data to execute AQ
**/
-static enum
-i40e_status_code i40e_ddp_exec_aq_section(struct i40e_hw *hw,
- struct i40e_profile_aq_section *aq)
+static int i40e_ddp_exec_aq_section(struct i40e_hw *hw,
+ struct i40e_profile_aq_section *aq)
{
- i40e_status status;
struct i40e_aq_desc desc;
u8 *msg = NULL;
u16 msglen;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc, aq->opcode);
desc.flags |= cpu_to_le16(aq->flags);
@@ -5766,14 +5630,14 @@ i40e_status_code i40e_ddp_exec_aq_section(struct i40e_hw *hw,
*
* Validates supported devices and profile's sections.
*/
-static enum i40e_status_code
+static int
i40e_validate_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile,
u32 track_id, bool rollback)
{
struct i40e_profile_section_header *sec = NULL;
- i40e_status status = 0;
struct i40e_section_table *sec_tbl;
u32 vendor_dev_id;
+ int status = 0;
u32 dev_cnt;
u32 sec_off;
u32 i;
@@ -5831,16 +5695,16 @@ i40e_validate_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile,
*
* Handles the download of a complete package.
*/
-enum i40e_status_code
+int
i40e_write_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile,
u32 track_id)
{
- i40e_status status = 0;
- struct i40e_section_table *sec_tbl;
struct i40e_profile_section_header *sec = NULL;
struct i40e_profile_aq_section *ddp_aq;
- u32 section_size = 0;
+ struct i40e_section_table *sec_tbl;
u32 offset = 0, info = 0;
+ u32 section_size = 0;
+ int status = 0;
u32 sec_off;
u32 i;
@@ -5894,15 +5758,15 @@ i40e_write_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile,
*
* Rolls back previously loaded package.
*/
-enum i40e_status_code
+int
i40e_rollback_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile,
u32 track_id)
{
struct i40e_profile_section_header *sec = NULL;
- i40e_status status = 0;
struct i40e_section_table *sec_tbl;
u32 offset = 0, info = 0;
u32 section_size = 0;
+ int status = 0;
u32 sec_off;
int i;
@@ -5946,15 +5810,15 @@ i40e_rollback_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile,
*
* Register a profile to the list of loaded profiles.
*/
-enum i40e_status_code
+int
i40e_add_pinfo_to_list(struct i40e_hw *hw,
struct i40e_profile_segment *profile,
u8 *profile_info_sec, u32 track_id)
{
- i40e_status status = 0;
struct i40e_profile_section_header *sec = NULL;
struct i40e_profile_info *pinfo;
u32 offset = 0, info = 0;
+ int status = 0;
sec = (struct i40e_profile_section_header *)profile_info_sec;
sec->tbl_size = 1;
@@ -5988,7 +5852,7 @@ i40e_add_pinfo_to_list(struct i40e_hw *hw,
* of the function.
*
**/
-enum i40e_status_code
+int
i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid,
struct i40e_aqc_cloud_filters_element_data *filters,
u8 filter_count)
@@ -5996,8 +5860,8 @@ i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid,
struct i40e_aq_desc desc;
struct i40e_aqc_add_remove_cloud_filters *cmd =
(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
- enum i40e_status_code status;
u16 buff_len;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_add_cloud_filters);
@@ -6025,7 +5889,7 @@ i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid,
* function.
*
**/
-enum i40e_status_code
+int
i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
struct i40e_aqc_cloud_filters_element_bb *filters,
u8 filter_count)
@@ -6033,8 +5897,8 @@ i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
struct i40e_aq_desc desc;
struct i40e_aqc_add_remove_cloud_filters *cmd =
(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
- i40e_status status;
u16 buff_len;
+ int status;
int i;
i40e_fill_default_direct_cmd_desc(&desc,
@@ -6082,7 +5946,7 @@ i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
* of the function.
*
**/
-enum i40e_status_code
+int
i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
struct i40e_aqc_cloud_filters_element_data *filters,
u8 filter_count)
@@ -6090,8 +5954,8 @@ i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
struct i40e_aq_desc desc;
struct i40e_aqc_add_remove_cloud_filters *cmd =
(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
- enum i40e_status_code status;
u16 buff_len;
+ int status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_remove_cloud_filters);
@@ -6119,7 +5983,7 @@ i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
* function.
*
**/
-enum i40e_status_code
+int
i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
struct i40e_aqc_cloud_filters_element_bb *filters,
u8 filter_count)
@@ -6127,8 +5991,8 @@ i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
struct i40e_aq_desc desc;
struct i40e_aqc_add_remove_cloud_filters *cmd =
(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
- i40e_status status;
u16 buff_len;
+ int status;
int i;
i40e_fill_default_direct_cmd_desc(&desc,
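
Every i40e_common.c hunk above applies the same mechanical conversion: wrappers that returned the driver-private i40e_status (or enum i40e_status_code) now return a plain int, and the unchanged I40E_ERR_* constants still flow through them. A minimal sketch of the post-conversion shape of a typical AQ wrapper; the function name and opcode here are hypothetical, not taken from the patch.

/* Post-conversion shape of a typical AQ wrapper -- illustrative only. */
int i40e_aq_example_cmd(struct i40e_hw *hw, u16 seid,
                        struct i40e_asq_cmd_details *cmd_details)
{
        struct i40e_aq_desc desc;
        int status;                     /* was: i40e_status status; */

        if (seid == 0)
                return I40E_ERR_PARAM;  /* same error constants as before */

        i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_example);
        status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);

        return status;
}
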
diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb.c b/drivers/net/ethernet/intel/i40e/i40e_dcb.c
index 673f341f4c0c..90638b67f8dc 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_dcb.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_dcb.c
@@ -12,7 +12,7 @@
*
* Get the DCBX status from the Firmware
**/
-i40e_status i40e_get_dcbx_status(struct i40e_hw *hw, u16 *status)
+int i40e_get_dcbx_status(struct i40e_hw *hw, u16 *status)
{
u32 reg;
@@ -497,15 +497,15 @@ static void i40e_parse_org_tlv(struct i40e_lldp_org_tlv *tlv,
*
* Parse DCB configuration from the LLDPDU
**/
-i40e_status i40e_lldp_to_dcb_config(u8 *lldpmib,
- struct i40e_dcbx_config *dcbcfg)
+int i40e_lldp_to_dcb_config(u8 *lldpmib,
+ struct i40e_dcbx_config *dcbcfg)
{
- i40e_status ret = 0;
struct i40e_lldp_org_tlv *tlv;
- u16 type;
- u16 length;
u16 typelength;
u16 offset = 0;
+ int ret = 0;
+ u16 length;
+ u16 type;
if (!lldpmib || !dcbcfg)
return I40E_ERR_PARAM;
@@ -551,12 +551,12 @@ i40e_status i40e_lldp_to_dcb_config(u8 *lldpmib,
*
* Query DCB configuration from the Firmware
**/
-i40e_status i40e_aq_get_dcb_config(struct i40e_hw *hw, u8 mib_type,
- u8 bridgetype,
- struct i40e_dcbx_config *dcbcfg)
+int i40e_aq_get_dcb_config(struct i40e_hw *hw, u8 mib_type,
+ u8 bridgetype,
+ struct i40e_dcbx_config *dcbcfg)
{
- i40e_status ret = 0;
struct i40e_virt_mem mem;
+ int ret = 0;
u8 *lldpmib;
/* Allocate the LLDPDU */
@@ -767,9 +767,9 @@ static void i40e_cee_to_dcb_config(
*
* Get IEEE mode DCB configuration from the Firmware
**/
-static i40e_status i40e_get_ieee_dcb_config(struct i40e_hw *hw)
+static int i40e_get_ieee_dcb_config(struct i40e_hw *hw)
{
- i40e_status ret = 0;
+ int ret = 0;
/* IEEE mode */
hw->local_dcbx_config.dcbx_mode = I40E_DCBX_MODE_IEEE;
@@ -797,11 +797,11 @@ out:
*
* Get DCB configuration from the Firmware
**/
-i40e_status i40e_get_dcb_config(struct i40e_hw *hw)
+int i40e_get_dcb_config(struct i40e_hw *hw)
{
- i40e_status ret = 0;
- struct i40e_aqc_get_cee_dcb_cfg_resp cee_cfg;
struct i40e_aqc_get_cee_dcb_cfg_v1_resp cee_v1_cfg;
+ struct i40e_aqc_get_cee_dcb_cfg_resp cee_cfg;
+ int ret = 0;
/* If Firmware version < v4.33 on X710/XL710, IEEE only */
if ((hw->mac.type == I40E_MAC_XL710) &&
@@ -867,11 +867,11 @@ out:
*
* Update DCB configuration from the Firmware
**/
-i40e_status i40e_init_dcb(struct i40e_hw *hw, bool enable_mib_change)
+int i40e_init_dcb(struct i40e_hw *hw, bool enable_mib_change)
{
- i40e_status ret = 0;
struct i40e_lldp_variables lldp_cfg;
u8 adminstatus = 0;
+ int ret = 0;
if (!hw->func_caps.dcb)
return I40E_NOT_SUPPORTED;
@@ -940,13 +940,13 @@ i40e_status i40e_init_dcb(struct i40e_hw *hw, bool enable_mib_change)
* Get status of FW Link Layer Discovery Protocol (LLDP) Agent.
* Status of agent is reported via @lldp_status parameter.
**/
-enum i40e_status_code
+int
i40e_get_fw_lldp_status(struct i40e_hw *hw,
enum i40e_get_fw_lldp_status_resp *lldp_status)
{
struct i40e_virt_mem mem;
- i40e_status ret;
u8 *lldpmib;
+ int ret;
if (!lldp_status)
return I40E_ERR_PARAM;
@@ -1238,13 +1238,13 @@ static void i40e_add_dcb_tlv(struct i40e_lldp_org_tlv *tlv,
*
* Set DCB configuration to the Firmware
**/
-i40e_status i40e_set_dcb_config(struct i40e_hw *hw)
+int i40e_set_dcb_config(struct i40e_hw *hw)
{
struct i40e_dcbx_config *dcbcfg;
struct i40e_virt_mem mem;
u8 mib_type, *lldpmib;
- i40e_status ret;
u16 miblen;
+ int ret;
/* update the hw local config */
dcbcfg = &hw->local_dcbx_config;
@@ -1274,8 +1274,8 @@ i40e_status i40e_set_dcb_config(struct i40e_hw *hw)
*
* send DCB configuration to FW
**/
-i40e_status i40e_dcb_config_to_lldp(u8 *lldpmib, u16 *miblen,
- struct i40e_dcbx_config *dcbcfg)
+int i40e_dcb_config_to_lldp(u8 *lldpmib, u16 *miblen,
+ struct i40e_dcbx_config *dcbcfg)
{
u16 length, offset = 0, tlvid, typelength;
struct i40e_lldp_org_tlv *tlv;
@@ -1888,13 +1888,13 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw,
*
* Reads the LLDP configuration data from NVM using passed addresses
**/
-static i40e_status _i40e_read_lldp_cfg(struct i40e_hw *hw,
- struct i40e_lldp_variables *lldp_cfg,
- u8 module, u32 word_offset)
+static int _i40e_read_lldp_cfg(struct i40e_hw *hw,
+ struct i40e_lldp_variables *lldp_cfg,
+ u8 module, u32 word_offset)
{
u32 address, offset = (2 * word_offset);
- i40e_status ret;
__le16 raw_mem;
+ int ret;
u16 mem;
ret = i40e_acquire_nvm(hw, I40E_RESOURCE_READ);
@@ -1950,10 +1950,10 @@ err_lldp_cfg:
*
* Reads the LLDP configuration data from NVM
**/
-i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw,
- struct i40e_lldp_variables *lldp_cfg)
+int i40e_read_lldp_cfg(struct i40e_hw *hw,
+ struct i40e_lldp_variables *lldp_cfg)
{
- i40e_status ret = 0;
+ int ret = 0;
u32 mem;
if (!lldp_cfg)
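
Besides the return-type swap, the i40e_dcb.c hunks reorder the touched local declarations into the netdev tree's customary reverse-Christmas-tree layout, longest line first, as in the locals of i40e_lldp_to_dcb_config() above:

        /* Reverse Christmas tree: longest declaration first. */
        struct i40e_lldp_org_tlv *tlv;
        u16 typelength;
        u16 offset = 0;
        int ret = 0;
        u16 length;
        u16 type;
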
diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb.h b/drivers/net/ethernet/intel/i40e/i40e_dcb.h
index 2370ceecb061..6b60dc9b7736 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_dcb.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_dcb.h
@@ -264,20 +264,20 @@ void i40e_dcb_hw_calculate_pool_sizes(struct i40e_hw *hw,
void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw,
struct i40e_rx_pb_config *old_pb_cfg,
struct i40e_rx_pb_config *new_pb_cfg);
-i40e_status i40e_get_dcbx_status(struct i40e_hw *hw,
- u16 *status);
-i40e_status i40e_lldp_to_dcb_config(u8 *lldpmib,
- struct i40e_dcbx_config *dcbcfg);
-i40e_status i40e_aq_get_dcb_config(struct i40e_hw *hw, u8 mib_type,
- u8 bridgetype,
- struct i40e_dcbx_config *dcbcfg);
-i40e_status i40e_get_dcb_config(struct i40e_hw *hw);
-i40e_status i40e_init_dcb(struct i40e_hw *hw,
- bool enable_mib_change);
-enum i40e_status_code
+int i40e_get_dcbx_status(struct i40e_hw *hw,
+ u16 *status);
+int i40e_lldp_to_dcb_config(u8 *lldpmib,
+ struct i40e_dcbx_config *dcbcfg);
+int i40e_aq_get_dcb_config(struct i40e_hw *hw, u8 mib_type,
+ u8 bridgetype,
+ struct i40e_dcbx_config *dcbcfg);
+int i40e_get_dcb_config(struct i40e_hw *hw);
+int i40e_init_dcb(struct i40e_hw *hw,
+ bool enable_mib_change);
+int
i40e_get_fw_lldp_status(struct i40e_hw *hw,
enum i40e_get_fw_lldp_status_resp *lldp_status);
-i40e_status i40e_set_dcb_config(struct i40e_hw *hw);
-i40e_status i40e_dcb_config_to_lldp(u8 *lldpmib, u16 *miblen,
- struct i40e_dcbx_config *dcbcfg);
+int i40e_set_dcb_config(struct i40e_hw *hw);
+int i40e_dcb_config_to_lldp(u8 *lldpmib, u16 *miblen,
+ struct i40e_dcbx_config *dcbcfg);
#endif /* _I40E_DCB_H_ */
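
For context on why this header's prototype swap is value-compatible: the removed return type was an enum typedef over errno-style codes, so an int carries the same values unchanged. A hedged reconstruction, with representative members only:

/* Reconstructed for illustration -- treat the exact member values as an
 * assumption rather than a verbatim copy of i40e_status.h.
 */
enum i40e_status_code {
        I40E_SUCCESS    = 0,
        I40E_ERR_NVM    = -1,
        I40E_ERR_PARAM  = -5,
        /* ... */
};
typedef enum i40e_status_code i40e_status;
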
diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c b/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c
index e32c61909b31..195421d863ab 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c
@@ -135,8 +135,8 @@ static int i40e_dcbnl_ieee_setets(struct net_device *netdev,
ret = i40e_hw_dcb_config(pf, &pf->tmp_cfg);
if (ret) {
dev_info(&pf->pdev->dev,
- "Failed setting DCB ETS configuration err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Failed setting DCB ETS configuration err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
return -EINVAL;
}
@@ -174,8 +174,8 @@ static int i40e_dcbnl_ieee_setpfc(struct net_device *netdev,
ret = i40e_hw_dcb_config(pf, &pf->tmp_cfg);
if (ret) {
dev_info(&pf->pdev->dev,
- "Failed setting DCB PFC configuration err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Failed setting DCB PFC configuration err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
return -EINVAL;
}
@@ -225,8 +225,8 @@ static int i40e_dcbnl_ieee_setapp(struct net_device *netdev,
ret = i40e_hw_dcb_config(pf, &pf->tmp_cfg);
if (ret) {
dev_info(&pf->pdev->dev,
- "Failed setting DCB configuration err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Failed setting DCB configuration err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
return -EINVAL;
}
@@ -290,8 +290,8 @@ static int i40e_dcbnl_ieee_delapp(struct net_device *netdev,
ret = i40e_hw_dcb_config(pf, &pf->tmp_cfg);
if (ret) {
dev_info(&pf->pdev->dev,
- "Failed setting DCB configuration err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Failed setting DCB configuration err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
return -EINVAL;
}
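
The i40e_dcb_nl.c hunks replace the driver-private i40e_stat_str() with the kernel's %pe vsprintf extension: wrap the now-plain-int status in ERR_PTR() and %pe prints a symbolic error name (e.g. -EINVAL) when the value is a recognized errno. A self-contained sketch of the idiom; the helper and message are hypothetical.

#include <linux/device.h>
#include <linux/err.h>

/* Hypothetical helper showing the %pe logging idiom used above. */
static void example_log_status(struct device *dev, int status)
{
        if (status)
                dev_info(dev, "command failed, err %pe\n", ERR_PTR(status));
}
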
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ddp.c b/drivers/net/ethernet/intel/i40e/i40e_ddp.c
index e1069ae658ad..7e8183762fd9 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ddp.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ddp.c
@@ -36,7 +36,7 @@ static int i40e_ddp_does_profile_exist(struct i40e_hw *hw,
{
struct i40e_ddp_profile_list *profile_list;
u8 buff[I40E_PROFILE_LIST_SIZE];
- i40e_status status;
+ int status;
int i;
status = i40e_aq_get_ddp_list(hw, buff, I40E_PROFILE_LIST_SIZE, 0,
@@ -91,7 +91,7 @@ static int i40e_ddp_does_profile_overlap(struct i40e_hw *hw,
{
struct i40e_ddp_profile_list *profile_list;
u8 buff[I40E_PROFILE_LIST_SIZE];
- i40e_status status;
+ int status;
int i;
status = i40e_aq_get_ddp_list(hw, buff, I40E_PROFILE_LIST_SIZE, 0,
@@ -117,14 +117,14 @@ static int i40e_ddp_does_profile_overlap(struct i40e_hw *hw,
*
* Register a profile to the list of loaded profiles.
*/
-static enum i40e_status_code
+static int
i40e_add_pinfo(struct i40e_hw *hw, struct i40e_profile_segment *profile,
u8 *profile_info_sec, u32 track_id)
{
struct i40e_profile_section_header *sec;
struct i40e_profile_info *pinfo;
- i40e_status status;
u32 offset = 0, info = 0;
+ int status;
sec = (struct i40e_profile_section_header *)profile_info_sec;
sec->tbl_size = 1;
@@ -157,14 +157,14 @@ i40e_add_pinfo(struct i40e_hw *hw, struct i40e_profile_segment *profile,
*
* Removes DDP profile from the NIC.
**/
-static enum i40e_status_code
+static int
i40e_del_pinfo(struct i40e_hw *hw, struct i40e_profile_segment *profile,
u8 *profile_info_sec, u32 track_id)
{
struct i40e_profile_section_header *sec;
struct i40e_profile_info *pinfo;
- i40e_status status;
u32 offset = 0, info = 0;
+ int status;
sec = (struct i40e_profile_section_header *)profile_info_sec;
sec->tbl_size = 1;
@@ -270,12 +270,12 @@ int i40e_ddp_load(struct net_device *netdev, const u8 *data, size_t size,
struct i40e_profile_segment *profile_hdr;
struct i40e_profile_info pinfo;
struct i40e_package_header *pkg_hdr;
- i40e_status status;
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_vsi *vsi = np->vsi;
struct i40e_pf *pf = vsi->back;
u32 track_id;
int istatus;
+ int status;
pkg_hdr = (struct i40e_package_header *)data;
if (!i40e_ddp_is_pkg_hdr_valid(netdev, pkg_hdr, size))
diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
index c9dcd6d92c83..9954493cd448 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
@@ -918,9 +918,9 @@ static ssize_t i40e_dbg_command_write(struct file *filp,
dev_info(&pf->pdev->dev, "deleting relay %d\n", veb_seid);
i40e_veb_release(pf->veb[i]);
} else if (strncmp(cmd_buf, "add pvid", 8) == 0) {
- i40e_status ret;
- u16 vid;
unsigned int v;
+ int ret;
+ u16 vid;
cnt = sscanf(&cmd_buf[8], "%i %u", &vsi_seid, &v);
if (cnt != 2) {
@@ -1284,7 +1284,7 @@ static ssize_t i40e_dbg_command_write(struct file *filp,
}
} else if (strncmp(cmd_buf, "send aq_cmd", 11) == 0) {
struct i40e_aq_desc *desc;
- i40e_status ret;
+ int ret;
desc = kzalloc(sizeof(struct i40e_aq_desc), GFP_KERNEL);
if (!desc)
@@ -1330,9 +1330,9 @@ static ssize_t i40e_dbg_command_write(struct file *filp,
desc = NULL;
} else if (strncmp(cmd_buf, "send indirect aq_cmd", 20) == 0) {
struct i40e_aq_desc *desc;
- i40e_status ret;
u16 buffer_len;
u8 *buff;
+ int ret;
desc = kzalloc(sizeof(struct i40e_aq_desc), GFP_KERNEL);
if (!desc)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_diag.c b/drivers/net/ethernet/intel/i40e/i40e_diag.c
index ef4d3762bf37..5b3519c6e362 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_diag.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_diag.c
@@ -10,8 +10,8 @@
* @reg: reg to be tested
* @mask: bits to be touched
**/
-static i40e_status i40e_diag_reg_pattern_test(struct i40e_hw *hw,
- u32 reg, u32 mask)
+static int i40e_diag_reg_pattern_test(struct i40e_hw *hw,
+ u32 reg, u32 mask)
{
static const u32 patterns[] = {
0x5A5A5A5A, 0xA5A5A5A5, 0x00000000, 0xFFFFFFFF
@@ -74,9 +74,9 @@ struct i40e_diag_reg_test_info i40e_reg_list[] = {
*
* Perform registers diagnostic test
**/
-i40e_status i40e_diag_reg_test(struct i40e_hw *hw)
+int i40e_diag_reg_test(struct i40e_hw *hw)
{
- i40e_status ret_code = 0;
+ int ret_code = 0;
u32 reg, mask;
u32 i, j;
@@ -114,9 +114,9 @@ i40e_status i40e_diag_reg_test(struct i40e_hw *hw)
*
* Perform EEPROM diagnostic test
**/
-i40e_status i40e_diag_eeprom_test(struct i40e_hw *hw)
+int i40e_diag_eeprom_test(struct i40e_hw *hw)
{
- i40e_status ret_code;
+ int ret_code;
u16 reg_val;
/* read NVM control word and if NVM valid, validate EEPROM checksum*/
diff --git a/drivers/net/ethernet/intel/i40e/i40e_diag.h b/drivers/net/ethernet/intel/i40e/i40e_diag.h
index c3340f320a18..e641035c7297 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_diag.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_diag.h
@@ -22,7 +22,7 @@ struct i40e_diag_reg_test_info {
extern struct i40e_diag_reg_test_info i40e_reg_list[];
-i40e_status i40e_diag_reg_test(struct i40e_hw *hw);
-i40e_status i40e_diag_eeprom_test(struct i40e_hw *hw);
+int i40e_diag_reg_test(struct i40e_hw *hw);
+int i40e_diag_eeprom_test(struct i40e_hw *hw);
#endif /* _I40E_DIAG_H_ */
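
i40e_diag_reg_pattern_test() above is a classic masked register self-test: write known bit patterns through the mask, read each back, and restore the original value. A condensed sketch under those assumptions; the write-and-verify of the register's default value done by the real test is trimmed.

/* Condensed sketch of a masked register pattern test; accessors and the
 * error code follow the driver's conventions.
 */
static int example_reg_pattern_test(struct i40e_hw *hw, u32 reg, u32 mask)
{
        static const u32 patterns[] = {
                0x5A5A5A5A, 0xA5A5A5A5, 0x00000000, 0xFFFFFFFF
        };
        u32 orig = rd32(hw, reg);
        int i;

        for (i = 0; i < ARRAY_SIZE(patterns); i++) {
                u32 pat = patterns[i] & mask;

                wr32(hw, reg, pat);
                if ((rd32(hw, reg) & mask) != pat) {
                        wr32(hw, reg, orig);    /* restore before failing */
                        return I40E_ERR_DIAG_TEST_FAILED;
                }
        }

        wr32(hw, reg, orig);    /* restore the original value */
        return 0;
}
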
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index 887a735fe2a7..4934ff58332c 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -1226,8 +1226,8 @@ static int i40e_set_link_ksettings(struct net_device *netdev,
struct i40e_vsi *vsi = np->vsi;
struct i40e_hw *hw = &pf->hw;
bool autoneg_changed = false;
- i40e_status status = 0;
int timeout = 50;
+ int status = 0;
int err = 0;
__u32 speed;
u8 autoneg;
@@ -1455,8 +1455,8 @@ static int i40e_set_link_ksettings(struct net_device *netdev,
status = i40e_aq_set_phy_config(hw, &config, NULL);
if (status) {
netdev_info(netdev,
- "Set phy config failed, err %s aq_err %s\n",
- i40e_stat_str(hw, status),
+ "Set phy config failed, err %pe aq_err %s\n",
+ ERR_PTR(status),
i40e_aq_str(hw, hw->aq.asq_last_status));
err = -EAGAIN;
goto done;
@@ -1465,8 +1465,8 @@ static int i40e_set_link_ksettings(struct net_device *netdev,
status = i40e_update_link_info(hw);
if (status)
netdev_dbg(netdev,
- "Updating link info failed with err %s aq_err %s\n",
- i40e_stat_str(hw, status),
+ "Updating link info failed with err %pe aq_err %s\n",
+ ERR_PTR(status),
i40e_aq_str(hw, hw->aq.asq_last_status));
} else {
@@ -1485,7 +1485,7 @@ static int i40e_set_fec_cfg(struct net_device *netdev, u8 fec_cfg)
struct i40e_aq_get_phy_abilities_resp abilities;
struct i40e_pf *pf = np->vsi->back;
struct i40e_hw *hw = &pf->hw;
- i40e_status status = 0;
+ int status = 0;
u32 flags = 0;
int err = 0;
@@ -1517,8 +1517,8 @@ static int i40e_set_fec_cfg(struct net_device *netdev, u8 fec_cfg)
status = i40e_aq_set_phy_config(hw, &config, NULL);
if (status) {
netdev_info(netdev,
- "Set phy config failed, err %s aq_err %s\n",
- i40e_stat_str(hw, status),
+ "Set phy config failed, err %pe aq_err %s\n",
+ ERR_PTR(status),
i40e_aq_str(hw, hw->aq.asq_last_status));
err = -EAGAIN;
goto done;
@@ -1531,8 +1531,8 @@ static int i40e_set_fec_cfg(struct net_device *netdev, u8 fec_cfg)
* (e.g. no physical connection etc.)
*/
netdev_dbg(netdev,
- "Updating link info failed with err %s aq_err %s\n",
- i40e_stat_str(hw, status),
+ "Updating link info failed with err %pe aq_err %s\n",
+ ERR_PTR(status),
i40e_aq_str(hw, hw->aq.asq_last_status));
}
@@ -1547,7 +1547,7 @@ static int i40e_get_fec_param(struct net_device *netdev,
struct i40e_aq_get_phy_abilities_resp abilities;
struct i40e_pf *pf = np->vsi->back;
struct i40e_hw *hw = &pf->hw;
- i40e_status status = 0;
+ int status = 0;
int err = 0;
u8 fec_cfg;
@@ -1634,12 +1634,12 @@ static int i40e_nway_reset(struct net_device *netdev)
struct i40e_pf *pf = np->vsi->back;
struct i40e_hw *hw = &pf->hw;
bool link_up = hw->phy.link_info.link_info & I40E_AQ_LINK_UP;
- i40e_status ret = 0;
+ int ret = 0;
ret = i40e_aq_set_link_restart_an(hw, link_up, NULL);
if (ret) {
- netdev_info(netdev, "link restart failed, err %s aq_err %s\n",
- i40e_stat_str(hw, ret),
+ netdev_info(netdev, "link restart failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
return -EIO;
}
@@ -1699,9 +1699,9 @@ static int i40e_set_pauseparam(struct net_device *netdev,
struct i40e_link_status *hw_link_info = &hw->phy.link_info;
struct i40e_dcbx_config *dcbx_cfg = &hw->local_dcbx_config;
bool link_up = hw_link_info->link_info & I40E_AQ_LINK_UP;
- i40e_status status;
u8 aq_failures;
int err = 0;
+ int status;
u32 is_an;
/* Changing the port's flow control is not supported if this isn't the
@@ -1755,20 +1755,20 @@ static int i40e_set_pauseparam(struct net_device *netdev,
status = i40e_set_fc(hw, &aq_failures, link_up);
if (aq_failures & I40E_SET_FC_AQ_FAIL_GET) {
- netdev_info(netdev, "Set fc failed on the get_phy_capabilities call with err %s aq_err %s\n",
- i40e_stat_str(hw, status),
+ netdev_info(netdev, "Set fc failed on the get_phy_capabilities call with err %pe aq_err %s\n",
+ ERR_PTR(status),
i40e_aq_str(hw, hw->aq.asq_last_status));
err = -EAGAIN;
}
if (aq_failures & I40E_SET_FC_AQ_FAIL_SET) {
- netdev_info(netdev, "Set fc failed on the set_phy_config call with err %s aq_err %s\n",
- i40e_stat_str(hw, status),
+ netdev_info(netdev, "Set fc failed on the set_phy_config call with err %pe aq_err %s\n",
+ ERR_PTR(status),
i40e_aq_str(hw, hw->aq.asq_last_status));
err = -EAGAIN;
}
if (aq_failures & I40E_SET_FC_AQ_FAIL_UPDATE) {
- netdev_info(netdev, "Set fc failed on the get_link_info call with err %s aq_err %s\n",
- i40e_stat_str(hw, status),
+ netdev_info(netdev, "Set fc failed on the get_link_info call with err %pe aq_err %s\n",
+ ERR_PTR(status),
i40e_aq_str(hw, hw->aq.asq_last_status));
err = -EAGAIN;
}
@@ -2583,8 +2583,8 @@ static u64 i40e_link_test(struct net_device *netdev, u64 *data)
{
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_pf *pf = np->vsi->back;
- i40e_status status;
bool link_up = false;
+ int status;
netif_info(pf, hw, netdev, "link test\n");
status = i40e_get_link_status(&pf->hw, &link_up);
@@ -2807,11 +2807,11 @@ static int i40e_set_phys_id(struct net_device *netdev,
enum ethtool_phys_id_state state)
{
struct i40e_netdev_priv *np = netdev_priv(netdev);
- i40e_status ret = 0;
struct i40e_pf *pf = np->vsi->back;
struct i40e_hw *hw = &pf->hw;
int blink_freq = 2;
u16 temp_status;
+ int ret = 0;
switch (state) {
case ETHTOOL_ID_ACTIVE:
@@ -5247,7 +5247,7 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags)
struct i40e_vsi *vsi = np->vsi;
struct i40e_pf *pf = vsi->back;
u32 reset_needed = 0;
- i40e_status status;
+ int status;
u32 i, j;
orig_flags = READ_ONCE(pf->flags);
@@ -5362,8 +5362,8 @@ flags_complete:
0, NULL);
if (ret && pf->hw.aq.asq_last_status != I40E_AQ_RC_ESRCH) {
dev_info(&pf->pdev->dev,
- "couldn't set switch config bits, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "couldn't set switch config bits, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
/* not a fatal problem, just keep going */
@@ -5435,9 +5435,8 @@ flags_complete:
return -EBUSY;
default:
dev_warn(&pf->pdev->dev,
- "Starting FW LLDP agent failed: error: %s, %s\n",
- i40e_stat_str(&pf->hw,
- status),
+ "Starting FW LLDP agent failed: error: %pe, %s\n",
+ ERR_PTR(status),
i40e_aq_str(&pf->hw,
adq_err));
return -EINVAL;
@@ -5477,8 +5476,8 @@ static int i40e_get_module_info(struct net_device *netdev,
u32 sff8472_comp = 0;
u32 sff8472_swap = 0;
u32 sff8636_rev = 0;
- i40e_status status;
u32 type = 0;
+ int status;
/* Check if firmware supports reading module EEPROM. */
if (!(hw->flags & I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE)) {
@@ -5582,8 +5581,8 @@ static int i40e_get_module_eeprom(struct net_device *netdev,
struct i40e_pf *pf = vsi->back;
struct i40e_hw *hw = &pf->hw;
bool is_sfp = false;
- i40e_status status;
u32 value = 0;
+ int status;
int i;
if (!ee || !ee->len || !data)
@@ -5624,10 +5623,10 @@ static int i40e_get_eee(struct net_device *netdev, struct ethtool_eee *edata)
{
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_aq_get_phy_abilities_resp phy_cfg;
- enum i40e_status_code status = 0;
struct i40e_vsi *vsi = np->vsi;
struct i40e_pf *pf = vsi->back;
struct i40e_hw *hw = &pf->hw;
+ int status = 0;
/* Get initial PHY capabilities */
status = i40e_aq_get_phy_capabilities(hw, false, true, &phy_cfg, NULL);
@@ -5689,11 +5688,11 @@ static int i40e_set_eee(struct net_device *netdev, struct ethtool_eee *edata)
{
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_aq_get_phy_abilities_resp abilities;
- enum i40e_status_code status = I40E_SUCCESS;
struct i40e_aq_set_phy_config config;
struct i40e_vsi *vsi = np->vsi;
struct i40e_pf *pf = vsi->back;
struct i40e_hw *hw = &pf->hw;
+ int status = I40E_SUCCESS;
__le16 eee_capability;
/* Deny parameters we don't support */
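
A recurring shape in the i40e_ethtool.c hunks: log the AQ status symbolically with %pe alongside the firmware's last AQ return string, then hand the ethtool core a standard errno rather than a driver-private code. Condensed sketch; the helper name is hypothetical.

/* Hypothetical condensation of the log-then-map-to-errno pattern. */
static int i40e_example_apply_phy_config(struct net_device *netdev,
                                         struct i40e_hw *hw,
                                         struct i40e_aq_set_phy_config *config)
{
        int status = i40e_aq_set_phy_config(hw, config, NULL);

        if (status) {
                netdev_info(netdev, "Set phy config failed, err %pe aq_err %s\n",
                            ERR_PTR(status),
                            i40e_aq_str(hw, hw->aq.asq_last_status));
                return -EAGAIN; /* ethtool callers expect a kernel errno */
        }

        return 0;
}
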
diff --git a/drivers/net/ethernet/intel/i40e/i40e_hmc.c b/drivers/net/ethernet/intel/i40e/i40e_hmc.c
index 163ee8c6311c..46f7950a0049 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_hmc.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_hmc.c
@@ -17,17 +17,17 @@
* @type: what type of segment descriptor we're manipulating
* @direct_mode_sz: size to alloc in direct mode
**/
-i40e_status i40e_add_sd_table_entry(struct i40e_hw *hw,
- struct i40e_hmc_info *hmc_info,
- u32 sd_index,
- enum i40e_sd_entry_type type,
- u64 direct_mode_sz)
+int i40e_add_sd_table_entry(struct i40e_hw *hw,
+ struct i40e_hmc_info *hmc_info,
+ u32 sd_index,
+ enum i40e_sd_entry_type type,
+ u64 direct_mode_sz)
{
enum i40e_memory_type mem_type __attribute__((unused));
struct i40e_hmc_sd_entry *sd_entry;
bool dma_mem_alloc_done = false;
+ int ret_code = I40E_SUCCESS;
struct i40e_dma_mem mem;
- i40e_status ret_code = I40E_SUCCESS;
u64 alloc_len;
if (NULL == hmc_info->sd_table.sd_entry) {
@@ -106,19 +106,19 @@ exit:
* aligned on 4K boundary and zeroed memory.
* 2. It should be 4K in size.
**/
-i40e_status i40e_add_pd_table_entry(struct i40e_hw *hw,
- struct i40e_hmc_info *hmc_info,
- u32 pd_index,
- struct i40e_dma_mem *rsrc_pg)
+int i40e_add_pd_table_entry(struct i40e_hw *hw,
+ struct i40e_hmc_info *hmc_info,
+ u32 pd_index,
+ struct i40e_dma_mem *rsrc_pg)
{
- i40e_status ret_code = 0;
struct i40e_hmc_pd_table *pd_table;
struct i40e_hmc_pd_entry *pd_entry;
struct i40e_dma_mem mem;
struct i40e_dma_mem *page = &mem;
u32 sd_idx, rel_pd_idx;
- u64 *pd_addr;
+ int ret_code = 0;
u64 page_desc;
+ u64 *pd_addr;
if (pd_index / I40E_HMC_PD_CNT_IN_SD >= hmc_info->sd_table.sd_cnt) {
ret_code = I40E_ERR_INVALID_PAGE_DESC_INDEX;
@@ -185,15 +185,15 @@ exit:
* 1. Caller can deallocate the memory used by backing storage after this
* function returns.
**/
-i40e_status i40e_remove_pd_bp(struct i40e_hw *hw,
- struct i40e_hmc_info *hmc_info,
- u32 idx)
+int i40e_remove_pd_bp(struct i40e_hw *hw,
+ struct i40e_hmc_info *hmc_info,
+ u32 idx)
{
- i40e_status ret_code = 0;
struct i40e_hmc_pd_entry *pd_entry;
struct i40e_hmc_pd_table *pd_table;
struct i40e_hmc_sd_entry *sd_entry;
u32 sd_idx, rel_pd_idx;
+ int ret_code = 0;
u64 *pd_addr;
/* calculate index */
@@ -241,11 +241,11 @@ exit:
* @hmc_info: pointer to the HMC configuration information structure
* @idx: the page index
**/
-i40e_status i40e_prep_remove_sd_bp(struct i40e_hmc_info *hmc_info,
- u32 idx)
+int i40e_prep_remove_sd_bp(struct i40e_hmc_info *hmc_info,
+ u32 idx)
{
- i40e_status ret_code = 0;
struct i40e_hmc_sd_entry *sd_entry;
+ int ret_code = 0;
/* get the entry and decrease its ref counter */
sd_entry = &hmc_info->sd_table.sd_entry[idx];
@@ -269,9 +269,9 @@ exit:
* @idx: the page index
* @is_pf: used to distinguish between VF and PF
**/
-i40e_status i40e_remove_sd_bp_new(struct i40e_hw *hw,
- struct i40e_hmc_info *hmc_info,
- u32 idx, bool is_pf)
+int i40e_remove_sd_bp_new(struct i40e_hw *hw,
+ struct i40e_hmc_info *hmc_info,
+ u32 idx, bool is_pf)
{
struct i40e_hmc_sd_entry *sd_entry;
@@ -290,11 +290,11 @@ i40e_status i40e_remove_sd_bp_new(struct i40e_hw *hw,
* @hmc_info: pointer to the HMC configuration information structure
* @idx: segment descriptor index to find the relevant page descriptor
**/
-i40e_status i40e_prep_remove_pd_page(struct i40e_hmc_info *hmc_info,
- u32 idx)
+int i40e_prep_remove_pd_page(struct i40e_hmc_info *hmc_info,
+ u32 idx)
{
- i40e_status ret_code = 0;
struct i40e_hmc_sd_entry *sd_entry;
+ int ret_code = 0;
sd_entry = &hmc_info->sd_table.sd_entry[idx];
@@ -318,9 +318,9 @@ exit:
* @idx: segment descriptor index to find the relevant page descriptor
* @is_pf: used to distinguish between VF and PF
**/
-i40e_status i40e_remove_pd_page_new(struct i40e_hw *hw,
- struct i40e_hmc_info *hmc_info,
- u32 idx, bool is_pf)
+int i40e_remove_pd_page_new(struct i40e_hw *hw,
+ struct i40e_hmc_info *hmc_info,
+ u32 idx, bool is_pf)
{
struct i40e_hmc_sd_entry *sd_entry;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_hmc.h b/drivers/net/ethernet/intel/i40e/i40e_hmc.h
index 3113792afaff..9960da07a573 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_hmc.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_hmc.h
@@ -187,28 +187,28 @@ struct i40e_hmc_info {
/* add one more to the limit to correct our range */ \
*(pd_limit) += 1; \
}
-i40e_status i40e_add_sd_table_entry(struct i40e_hw *hw,
- struct i40e_hmc_info *hmc_info,
- u32 sd_index,
- enum i40e_sd_entry_type type,
- u64 direct_mode_sz);
-
-i40e_status i40e_add_pd_table_entry(struct i40e_hw *hw,
- struct i40e_hmc_info *hmc_info,
- u32 pd_index,
- struct i40e_dma_mem *rsrc_pg);
-i40e_status i40e_remove_pd_bp(struct i40e_hw *hw,
- struct i40e_hmc_info *hmc_info,
- u32 idx);
-i40e_status i40e_prep_remove_sd_bp(struct i40e_hmc_info *hmc_info,
- u32 idx);
-i40e_status i40e_remove_sd_bp_new(struct i40e_hw *hw,
- struct i40e_hmc_info *hmc_info,
- u32 idx, bool is_pf);
-i40e_status i40e_prep_remove_pd_page(struct i40e_hmc_info *hmc_info,
- u32 idx);
-i40e_status i40e_remove_pd_page_new(struct i40e_hw *hw,
- struct i40e_hmc_info *hmc_info,
- u32 idx, bool is_pf);
+
+int i40e_add_sd_table_entry(struct i40e_hw *hw,
+ struct i40e_hmc_info *hmc_info,
+ u32 sd_index,
+ enum i40e_sd_entry_type type,
+ u64 direct_mode_sz);
+int i40e_add_pd_table_entry(struct i40e_hw *hw,
+ struct i40e_hmc_info *hmc_info,
+ u32 pd_index,
+ struct i40e_dma_mem *rsrc_pg);
+int i40e_remove_pd_bp(struct i40e_hw *hw,
+ struct i40e_hmc_info *hmc_info,
+ u32 idx);
+int i40e_prep_remove_sd_bp(struct i40e_hmc_info *hmc_info,
+ u32 idx);
+int i40e_remove_sd_bp_new(struct i40e_hw *hw,
+ struct i40e_hmc_info *hmc_info,
+ u32 idx, bool is_pf);
+int i40e_prep_remove_pd_page(struct i40e_hmc_info *hmc_info,
+ u32 idx);
+int i40e_remove_pd_page_new(struct i40e_hw *hw,
+ struct i40e_hmc_info *hmc_info,
+ u32 idx, bool is_pf);
#endif /* _I40E_HMC_H_ */
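Several of the i40e_hmc.c hunks above reorder local declarations while retyping them (for example, ret_code moving past struct i40e_dma_mem mem). This preserves the netdev "reverse Christmas tree" convention: locals are declared longest line first, shortest last. A contrived sketch of that ordering, with hypothetical names:

#include <linux/errno.h>

static int example_ordering(void)
{
	struct example_private *longest_declaration_first = NULL; /* longest first */
	unsigned int medium_length_next = 0;
	int ret = 0;                                              /* shortest last */

	if (longest_declaration_first || medium_length_next)
		ret = -EINVAL;
	return ret;
}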
diff --git a/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.c b/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.c
index d6e92ecddfbd..40c101f286d1 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.c
@@ -74,12 +74,12 @@ static u64 i40e_calculate_l2fpm_size(u32 txq_num, u32 rxq_num,
* Assumptions:
* - HMC Resource Profile has been selected before calling this function.
**/
-i40e_status i40e_init_lan_hmc(struct i40e_hw *hw, u32 txq_num,
- u32 rxq_num, u32 fcoe_cntx_num,
- u32 fcoe_filt_num)
+int i40e_init_lan_hmc(struct i40e_hw *hw, u32 txq_num,
+ u32 rxq_num, u32 fcoe_cntx_num,
+ u32 fcoe_filt_num)
{
struct i40e_hmc_obj_info *obj, *full_obj;
- i40e_status ret_code = 0;
+ int ret_code = 0;
u64 l2fpm_size;
u32 size_exp;
@@ -229,11 +229,11 @@ init_lan_hmc_out:
* 1. caller can deallocate the memory used by pd after this function
* returns.
**/
-static i40e_status i40e_remove_pd_page(struct i40e_hw *hw,
- struct i40e_hmc_info *hmc_info,
- u32 idx)
+static int i40e_remove_pd_page(struct i40e_hw *hw,
+ struct i40e_hmc_info *hmc_info,
+ u32 idx)
{
- i40e_status ret_code = 0;
+ int ret_code = 0;
if (!i40e_prep_remove_pd_page(hmc_info, idx))
ret_code = i40e_remove_pd_page_new(hw, hmc_info, idx, true);
@@ -256,11 +256,11 @@ static i40e_status i40e_remove_pd_page(struct i40e_hw *hw,
* 1. caller can deallocate the memory used by backing storage after this
* function returns.
**/
-static i40e_status i40e_remove_sd_bp(struct i40e_hw *hw,
- struct i40e_hmc_info *hmc_info,
- u32 idx)
+static int i40e_remove_sd_bp(struct i40e_hw *hw,
+ struct i40e_hmc_info *hmc_info,
+ u32 idx)
{
- i40e_status ret_code = 0;
+ int ret_code = 0;
if (!i40e_prep_remove_sd_bp(hmc_info, idx))
ret_code = i40e_remove_sd_bp_new(hw, hmc_info, idx, true);
@@ -276,15 +276,15 @@ static i40e_status i40e_remove_sd_bp(struct i40e_hw *hw,
* This will allocate memory for PDs and backing pages and populate
* the sd and pd entries.
**/
-static i40e_status i40e_create_lan_hmc_object(struct i40e_hw *hw,
- struct i40e_hmc_lan_create_obj_info *info)
+static int i40e_create_lan_hmc_object(struct i40e_hw *hw,
+ struct i40e_hmc_lan_create_obj_info *info)
{
- i40e_status ret_code = 0;
struct i40e_hmc_sd_entry *sd_entry;
u32 pd_idx1 = 0, pd_lmt1 = 0;
u32 pd_idx = 0, pd_lmt = 0;
bool pd_error = false;
u32 sd_idx, sd_lmt;
+ int ret_code = 0;
u64 sd_size;
u32 i, j;
@@ -435,13 +435,13 @@ exit:
* - This function will be called after i40e_init_lan_hmc() and before
* any LAN/FCoE HMC objects can be created.
**/
-i40e_status i40e_configure_lan_hmc(struct i40e_hw *hw,
- enum i40e_hmc_model model)
+int i40e_configure_lan_hmc(struct i40e_hw *hw,
+ enum i40e_hmc_model model)
{
struct i40e_hmc_lan_create_obj_info info;
- i40e_status ret_code = 0;
u8 hmc_fn_id = hw->hmc.hmc_fn_id;
struct i40e_hmc_obj_info *obj;
+ int ret_code = 0;
/* Initialize part of the create object info struct */
info.hmc_info = &hw->hmc;
@@ -520,13 +520,13 @@ configure_lan_hmc_out:
* caller should deallocate memory allocated previously for
* book-keeping information about PDs and backing storage.
**/
-static i40e_status i40e_delete_lan_hmc_object(struct i40e_hw *hw,
- struct i40e_hmc_lan_delete_obj_info *info)
+static int i40e_delete_lan_hmc_object(struct i40e_hw *hw,
+ struct i40e_hmc_lan_delete_obj_info *info)
{
- i40e_status ret_code = 0;
struct i40e_hmc_pd_table *pd_table;
u32 pd_idx, pd_lmt, rel_pd_idx;
u32 sd_idx, sd_lmt;
+ int ret_code = 0;
u32 i, j;
if (NULL == info) {
@@ -632,10 +632,10 @@ exit:
* This must be called by drivers as they are shutting down and being
* removed from the OS.
**/
-i40e_status i40e_shutdown_lan_hmc(struct i40e_hw *hw)
+int i40e_shutdown_lan_hmc(struct i40e_hw *hw)
{
struct i40e_hmc_lan_delete_obj_info info;
- i40e_status ret_code;
+ int ret_code;
info.hmc_info = &hw->hmc;
info.rsrc_type = I40E_HMC_LAN_FULL;
@@ -915,9 +915,9 @@ static void i40e_write_qword(u8 *hmc_bits,
* @context_bytes: pointer to the context bit array (DMA memory)
* @hmc_type: the type of HMC resource
**/
-static i40e_status i40e_clear_hmc_context(struct i40e_hw *hw,
- u8 *context_bytes,
- enum i40e_hmc_lan_rsrc_type hmc_type)
+static int i40e_clear_hmc_context(struct i40e_hw *hw,
+ u8 *context_bytes,
+ enum i40e_hmc_lan_rsrc_type hmc_type)
{
/* clean the bit array */
memset(context_bytes, 0, (u32)hw->hmc.hmc_obj[hmc_type].size);
@@ -931,9 +931,9 @@ static i40e_status i40e_clear_hmc_context(struct i40e_hw *hw,
* @ce_info: a description of the struct to be filled
* @dest: the struct to be filled
**/
-static i40e_status i40e_set_hmc_context(u8 *context_bytes,
- struct i40e_context_ele *ce_info,
- u8 *dest)
+static int i40e_set_hmc_context(u8 *context_bytes,
+ struct i40e_context_ele *ce_info,
+ u8 *dest)
{
int f;
@@ -973,18 +973,18 @@ static i40e_status i40e_set_hmc_context(u8 *context_bytes,
* base pointer. This function is used for LAN Queue contexts.
**/
static
-i40e_status i40e_hmc_get_object_va(struct i40e_hw *hw, u8 **object_base,
- enum i40e_hmc_lan_rsrc_type rsrc_type,
- u32 obj_idx)
+int i40e_hmc_get_object_va(struct i40e_hw *hw, u8 **object_base,
+ enum i40e_hmc_lan_rsrc_type rsrc_type,
+ u32 obj_idx)
{
struct i40e_hmc_info *hmc_info = &hw->hmc;
u32 obj_offset_in_sd, obj_offset_in_pd;
struct i40e_hmc_sd_entry *sd_entry;
struct i40e_hmc_pd_entry *pd_entry;
u32 pd_idx, pd_lmt, rel_pd_idx;
- i40e_status ret_code = 0;
u64 obj_offset_in_fpm;
u32 sd_idx, sd_lmt;
+ int ret_code = 0;
if (NULL == hmc_info) {
ret_code = I40E_ERR_BAD_PTR;
@@ -1042,11 +1042,11 @@ exit:
* @hw: the hardware struct
* @queue: the queue we care about
**/
-i40e_status i40e_clear_lan_tx_queue_context(struct i40e_hw *hw,
- u16 queue)
+int i40e_clear_lan_tx_queue_context(struct i40e_hw *hw,
+ u16 queue)
{
- i40e_status err;
u8 *context_bytes;
+ int err;
err = i40e_hmc_get_object_va(hw, &context_bytes,
I40E_HMC_LAN_TX, queue);
@@ -1062,12 +1062,12 @@ i40e_status i40e_clear_lan_tx_queue_context(struct i40e_hw *hw,
* @queue: the queue we care about
* @s: the struct to be filled
**/
-i40e_status i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
- u16 queue,
- struct i40e_hmc_obj_txq *s)
+int i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
+ u16 queue,
+ struct i40e_hmc_obj_txq *s)
{
- i40e_status err;
u8 *context_bytes;
+ int err;
err = i40e_hmc_get_object_va(hw, &context_bytes,
I40E_HMC_LAN_TX, queue);
@@ -1083,11 +1083,11 @@ i40e_status i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
* @hw: the hardware struct
* @queue: the queue we care about
**/
-i40e_status i40e_clear_lan_rx_queue_context(struct i40e_hw *hw,
- u16 queue)
+int i40e_clear_lan_rx_queue_context(struct i40e_hw *hw,
+ u16 queue)
{
- i40e_status err;
u8 *context_bytes;
+ int err;
err = i40e_hmc_get_object_va(hw, &context_bytes,
I40E_HMC_LAN_RX, queue);
@@ -1103,12 +1103,12 @@ i40e_status i40e_clear_lan_rx_queue_context(struct i40e_hw *hw,
* @queue: the queue we care about
* @s: the struct to be filled
**/
-i40e_status i40e_set_lan_rx_queue_context(struct i40e_hw *hw,
- u16 queue,
- struct i40e_hmc_obj_rxq *s)
+int i40e_set_lan_rx_queue_context(struct i40e_hw *hw,
+ u16 queue,
+ struct i40e_hmc_obj_rxq *s)
{
- i40e_status err;
u8 *context_bytes;
+ int err;
err = i40e_hmc_get_object_va(hw, &context_bytes,
I40E_HMC_LAN_RX, queue);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.h b/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.h
index c46a2c449e60..9f960404c2b3 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.h
@@ -137,22 +137,22 @@ struct i40e_hmc_lan_delete_obj_info {
u32 count;
};
-i40e_status i40e_init_lan_hmc(struct i40e_hw *hw, u32 txq_num,
- u32 rxq_num, u32 fcoe_cntx_num,
- u32 fcoe_filt_num);
-i40e_status i40e_configure_lan_hmc(struct i40e_hw *hw,
- enum i40e_hmc_model model);
-i40e_status i40e_shutdown_lan_hmc(struct i40e_hw *hw);
-
-i40e_status i40e_clear_lan_tx_queue_context(struct i40e_hw *hw,
- u16 queue);
-i40e_status i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
- u16 queue,
- struct i40e_hmc_obj_txq *s);
-i40e_status i40e_clear_lan_rx_queue_context(struct i40e_hw *hw,
- u16 queue);
-i40e_status i40e_set_lan_rx_queue_context(struct i40e_hw *hw,
- u16 queue,
- struct i40e_hmc_obj_rxq *s);
+int i40e_init_lan_hmc(struct i40e_hw *hw, u32 txq_num,
+ u32 rxq_num, u32 fcoe_cntx_num,
+ u32 fcoe_filt_num);
+int i40e_configure_lan_hmc(struct i40e_hw *hw,
+ enum i40e_hmc_model model);
+int i40e_shutdown_lan_hmc(struct i40e_hw *hw);
+
+int i40e_clear_lan_tx_queue_context(struct i40e_hw *hw,
+ u16 queue);
+int i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
+ u16 queue,
+ struct i40e_hmc_obj_txq *s);
+int i40e_clear_lan_rx_queue_context(struct i40e_hw *hw,
+ u16 queue);
+int i40e_set_lan_rx_queue_context(struct i40e_hw *hw,
+ u16 queue,
+ struct i40e_hmc_obj_rxq *s);
#endif /* _I40E_LAN_HMC_H_ */
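With int returns, the LAN HMC queue-context helpers compose with ordinary kernel error handling. This hypothetical caller, which assumes the driver's own headers, mirrors how i40e_configure_tx_ring() in i40e_main.c below uses the Tx pair (clear the context, then program it):

static int example_setup_tx_context(struct i40e_hw *hw, u16 queue,
				    struct i40e_hmc_obj_txq *ctx)
{
	int err;

	/* Zero the HMC-backed context before writing the new one. */
	err = i40e_clear_lan_tx_queue_context(hw, queue);
	if (err)
		return err;

	return i40e_set_lan_tx_queue_context(hw, queue, ctx);
}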
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 52eec0a50492..467001db5070 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -1817,13 +1817,13 @@ static int i40e_set_mac(struct net_device *netdev, void *p)
spin_unlock_bh(&vsi->mac_filter_hash_lock);
if (vsi->type == I40E_VSI_MAIN) {
- i40e_status ret;
+ int ret;
ret = i40e_aq_mac_address_write(hw, I40E_AQC_WRITE_TYPE_LAA_WOL,
addr->sa_data, NULL);
if (ret)
- netdev_info(netdev, "Ignoring error from firmware on LAA update, status %s, AQ ret %s\n",
- i40e_stat_str(hw, ret),
+ netdev_info(netdev, "Ignoring error from firmware on LAA update, status %pe, AQ ret %s\n",
+ ERR_PTR(ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
}
@@ -1854,8 +1854,8 @@ static int i40e_config_rss_aq(struct i40e_vsi *vsi, const u8 *seed,
ret = i40e_aq_set_rss_key(hw, vsi->id, seed_dw);
if (ret) {
dev_info(&pf->pdev->dev,
- "Cannot set RSS key, err %s aq_err %s\n",
- i40e_stat_str(hw, ret),
+ "Cannot set RSS key, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
return ret;
}
@@ -1866,8 +1866,8 @@ static int i40e_config_rss_aq(struct i40e_vsi *vsi, const u8 *seed,
ret = i40e_aq_set_rss_lut(hw, vsi->id, pf_lut, lut, lut_size);
if (ret) {
dev_info(&pf->pdev->dev,
- "Cannot set RSS lut, err %s aq_err %s\n",
- i40e_stat_str(hw, ret),
+ "Cannot set RSS lut, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
return ret;
}
@@ -2349,7 +2349,7 @@ void i40e_aqc_del_filters(struct i40e_vsi *vsi, const char *vsi_name,
{
struct i40e_hw *hw = &vsi->back->hw;
enum i40e_admin_queue_err aq_status;
- i40e_status aq_ret;
+ int aq_ret;
aq_ret = i40e_aq_remove_macvlan_v2(hw, vsi->seid, list, num_del, NULL,
&aq_status);
@@ -2358,8 +2358,8 @@ void i40e_aqc_del_filters(struct i40e_vsi *vsi, const char *vsi_name,
if (aq_ret && !(aq_status == I40E_AQ_RC_ENOENT)) {
*retval = -EIO;
dev_info(&vsi->back->pdev->dev,
- "ignoring delete macvlan error on %s, err %s, aq_err %s\n",
- vsi_name, i40e_stat_str(hw, aq_ret),
+ "ignoring delete macvlan error on %s, err %pe, aq_err %s\n",
+ vsi_name, ERR_PTR(aq_ret),
i40e_aq_str(hw, aq_status));
}
}
@@ -2423,13 +2423,13 @@ void i40e_aqc_add_filters(struct i40e_vsi *vsi, const char *vsi_name,
*
* Returns status indicating success or failure;
**/
-static i40e_status
+static int
i40e_aqc_broadcast_filter(struct i40e_vsi *vsi, const char *vsi_name,
struct i40e_mac_filter *f)
{
bool enable = f->state == I40E_FILTER_NEW;
struct i40e_hw *hw = &vsi->back->hw;
- i40e_status aq_ret;
+ int aq_ret;
if (f->vlan == I40E_VLAN_ANY) {
aq_ret = i40e_aq_set_vsi_broadcast(hw,
@@ -2468,7 +2468,7 @@ static int i40e_set_promiscuous(struct i40e_pf *pf, bool promisc)
{
struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi];
struct i40e_hw *hw = &pf->hw;
- i40e_status aq_ret;
+ int aq_ret;
if (vsi->type == I40E_VSI_MAIN &&
pf->lan_veb != I40E_NO_VEB &&
@@ -2488,8 +2488,8 @@ static int i40e_set_promiscuous(struct i40e_pf *pf, bool promisc)
NULL);
if (aq_ret) {
dev_info(&pf->pdev->dev,
- "Set default VSI failed, err %s, aq_err %s\n",
- i40e_stat_str(hw, aq_ret),
+ "Set default VSI failed, err %pe, aq_err %s\n",
+ ERR_PTR(aq_ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
}
} else {
@@ -2500,8 +2500,8 @@ static int i40e_set_promiscuous(struct i40e_pf *pf, bool promisc)
true);
if (aq_ret) {
dev_info(&pf->pdev->dev,
- "set unicast promisc failed, err %s, aq_err %s\n",
- i40e_stat_str(hw, aq_ret),
+ "set unicast promisc failed, err %pe, aq_err %s\n",
+ ERR_PTR(aq_ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
}
aq_ret = i40e_aq_set_vsi_multicast_promiscuous(
@@ -2510,8 +2510,8 @@ static int i40e_set_promiscuous(struct i40e_pf *pf, bool promisc)
promisc, NULL);
if (aq_ret) {
dev_info(&pf->pdev->dev,
- "set multicast promisc failed, err %s, aq_err %s\n",
- i40e_stat_str(hw, aq_ret),
+ "set multicast promisc failed, err %pe, aq_err %s\n",
+ ERR_PTR(aq_ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
}
}
@@ -2541,12 +2541,12 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
unsigned int vlan_filters = 0;
char vsi_name[16] = "PF";
int filter_list_len = 0;
- i40e_status aq_ret = 0;
u32 changed_flags = 0;
struct hlist_node *h;
struct i40e_pf *pf;
int num_add = 0;
int num_del = 0;
+ int aq_ret = 0;
int retval = 0;
u16 cmd_flags;
int list_size;
@@ -2814,9 +2814,9 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
retval = i40e_aq_rc_to_posix(aq_ret,
hw->aq.asq_last_status);
dev_info(&pf->pdev->dev,
- "set multi promisc failed on %s, err %s aq_err %s\n",
+ "set multi promisc failed on %s, err %pe aq_err %s\n",
vsi_name,
- i40e_stat_str(hw, aq_ret),
+ ERR_PTR(aq_ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
} else {
dev_info(&pf->pdev->dev, "%s allmulti mode.\n",
@@ -2834,10 +2834,10 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
retval = i40e_aq_rc_to_posix(aq_ret,
hw->aq.asq_last_status);
dev_info(&pf->pdev->dev,
- "Setting promiscuous %s failed on %s, err %s aq_err %s\n",
+ "Setting promiscuous %s failed on %s, err %pe aq_err %s\n",
cur_promisc ? "on" : "off",
vsi_name,
- i40e_stat_str(hw, aq_ret),
+ ERR_PTR(aq_ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
}
}
@@ -2965,7 +2965,7 @@ int i40e_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
void i40e_vlan_stripping_enable(struct i40e_vsi *vsi)
{
struct i40e_vsi_context ctxt;
- i40e_status ret;
+ int ret;
/* Don't modify stripping options if a port VLAN is active */
if (vsi->info.pvid)
@@ -2985,8 +2985,8 @@ void i40e_vlan_stripping_enable(struct i40e_vsi *vsi)
ret = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL);
if (ret) {
dev_info(&vsi->back->pdev->dev,
- "update vlan stripping failed, err %s aq_err %s\n",
- i40e_stat_str(&vsi->back->hw, ret),
+ "update vlan stripping failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&vsi->back->hw,
vsi->back->hw.aq.asq_last_status));
}
@@ -2999,7 +2999,7 @@ void i40e_vlan_stripping_enable(struct i40e_vsi *vsi)
void i40e_vlan_stripping_disable(struct i40e_vsi *vsi)
{
struct i40e_vsi_context ctxt;
- i40e_status ret;
+ int ret;
/* Don't modify stripping options if a port VLAN is active */
if (vsi->info.pvid)
@@ -3020,8 +3020,8 @@ void i40e_vlan_stripping_disable(struct i40e_vsi *vsi)
ret = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL);
if (ret) {
dev_info(&vsi->back->pdev->dev,
- "update vlan stripping failed, err %s aq_err %s\n",
- i40e_stat_str(&vsi->back->hw, ret),
+ "update vlan stripping failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&vsi->back->hw,
vsi->back->hw.aq.asq_last_status));
}
@@ -3252,7 +3252,7 @@ static void i40e_restore_vlan(struct i40e_vsi *vsi)
int i40e_vsi_add_pvid(struct i40e_vsi *vsi, u16 vid)
{
struct i40e_vsi_context ctxt;
- i40e_status ret;
+ int ret;
vsi->info.valid_sections = cpu_to_le16(I40E_AQ_VSI_PROP_VLAN_VALID);
vsi->info.pvid = cpu_to_le16(vid);
@@ -3265,8 +3265,8 @@ int i40e_vsi_add_pvid(struct i40e_vsi *vsi, u16 vid)
ret = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL);
if (ret) {
dev_info(&vsi->back->pdev->dev,
- "add pvid failed, err %s aq_err %s\n",
- i40e_stat_str(&vsi->back->hw, ret),
+ "add pvid failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&vsi->back->hw,
vsi->back->hw.aq.asq_last_status));
return -ENOENT;
@@ -3429,8 +3429,8 @@ static int i40e_configure_tx_ring(struct i40e_ring *ring)
u16 pf_q = vsi->base_queue + ring->queue_index;
struct i40e_hw *hw = &vsi->back->hw;
struct i40e_hmc_obj_txq tx_ctx;
- i40e_status err = 0;
u32 qtx_ctl = 0;
+ int err = 0;
if (ring_is_xdp(ring))
ring->xsk_pool = i40e_xsk_pool(ring);
@@ -3554,7 +3554,7 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
u16 pf_q = vsi->base_queue + ring->queue_index;
struct i40e_hw *hw = &vsi->back->hw;
struct i40e_hmc_obj_rxq rx_ctx;
- i40e_status err = 0;
+ int err = 0;
bool ok;
int ret;
@@ -5525,16 +5525,16 @@ static int i40e_vsi_get_bw_info(struct i40e_vsi *vsi)
struct i40e_aqc_query_vsi_bw_config_resp bw_config = {0};
struct i40e_pf *pf = vsi->back;
struct i40e_hw *hw = &pf->hw;
- i40e_status ret;
u32 tc_bw_max;
+ int ret;
int i;
/* Get the VSI level BW configuration */
ret = i40e_aq_query_vsi_bw_config(hw, vsi->seid, &bw_config, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "couldn't get PF vsi bw config, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "couldn't get PF vsi bw config, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
return -EINVAL;
}
@@ -5544,8 +5544,8 @@ static int i40e_vsi_get_bw_info(struct i40e_vsi *vsi)
NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "couldn't get PF vsi ets bw config, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "couldn't get PF vsi ets bw config, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
return -EINVAL;
}
@@ -5586,7 +5586,7 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
{
struct i40e_aqc_configure_vsi_tc_bw_data bw_data;
struct i40e_pf *pf = vsi->back;
- i40e_status ret;
+ int ret;
int i;
/* There is no need to reset BW when mqprio mode is on. */
@@ -5734,8 +5734,8 @@ int i40e_update_adq_vsi_queues(struct i40e_vsi *vsi, int vsi_offset)
ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
if (ret) {
- dev_info(&pf->pdev->dev, "Update vsi config failed, err %s aq_err %s\n",
- i40e_stat_str(hw, ret),
+ dev_info(&pf->pdev->dev, "Update vsi config failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
return ret;
}
@@ -5790,8 +5790,8 @@ static int i40e_vsi_config_tc(struct i40e_vsi *vsi, u8 enabled_tc)
&bw_config, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "Failed querying vsi bw info, err %s aq_err %s\n",
- i40e_stat_str(hw, ret),
+ "Failed querying vsi bw info, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
goto out;
}
@@ -5857,8 +5857,8 @@ static int i40e_vsi_config_tc(struct i40e_vsi *vsi, u8 enabled_tc)
ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "Update vsi tc config failed, err %s aq_err %s\n",
- i40e_stat_str(hw, ret),
+ "Update vsi tc config failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
goto out;
}
@@ -5870,8 +5870,8 @@ static int i40e_vsi_config_tc(struct i40e_vsi *vsi, u8 enabled_tc)
ret = i40e_vsi_get_bw_info(vsi);
if (ret) {
dev_info(&pf->pdev->dev,
- "Failed updating vsi bw info, err %s aq_err %s\n",
- i40e_stat_str(hw, ret),
+ "Failed updating vsi bw info, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
goto out;
}
@@ -5962,8 +5962,8 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
I40E_MAX_BW_INACTIVE_ACCUM, NULL);
if (ret)
dev_err(&pf->pdev->dev,
- "Failed set tx rate (%llu Mbps) for vsi->seid %u, err %s aq_err %s\n",
- max_tx_rate, seid, i40e_stat_str(&pf->hw, ret),
+ "Failed set tx rate (%llu Mbps) for vsi->seid %u, err %pe aq_err %s\n",
+ max_tx_rate, seid, ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
return ret;
}
@@ -6038,8 +6038,8 @@ static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
last_aq_status = pf->hw.aq.asq_last_status;
if (ret)
dev_info(&pf->pdev->dev,
- "Failed to delete cloud filter, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Failed to delete cloud filter, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, last_aq_status));
kfree(cfilter);
}
@@ -6173,8 +6173,8 @@ static int i40e_vsi_reconfig_rss(struct i40e_vsi *vsi, u16 rss_size)
ret = i40e_config_rss(vsi, seed, lut, vsi->rss_table_size);
if (ret) {
dev_info(&pf->pdev->dev,
- "Cannot set RSS lut, err %s aq_err %s\n",
- i40e_stat_str(hw, ret),
+ "Cannot set RSS lut, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
kfree(lut);
return ret;
@@ -6272,8 +6272,8 @@ static int i40e_add_channel(struct i40e_pf *pf, u16 uplink_seid,
ret = i40e_aq_add_vsi(hw, &ctxt, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "add new vsi failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "add new vsi failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
return -ENOENT;
@@ -6304,7 +6304,7 @@ static int i40e_channel_config_bw(struct i40e_vsi *vsi, struct i40e_channel *ch,
u8 *bw_share)
{
struct i40e_aqc_configure_vsi_tc_bw_data bw_data;
- i40e_status ret;
+ int ret;
int i;
memset(&bw_data, 0, sizeof(bw_data));
@@ -6340,9 +6340,9 @@ static int i40e_channel_config_tx_ring(struct i40e_pf *pf,
struct i40e_vsi *vsi,
struct i40e_channel *ch)
{
- i40e_status ret;
- int i;
u8 bw_share[I40E_MAX_TRAFFIC_CLASS] = {0};
+ int ret;
+ int i;
/* Enable ETS TCs with equal BW Share for now across all VSIs */
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
@@ -6518,8 +6518,8 @@ static int i40e_validate_and_set_switch_mode(struct i40e_vsi *vsi)
mode, NULL);
if (ret && hw->aq.asq_last_status != I40E_AQ_RC_ESRCH)
dev_err(&pf->pdev->dev,
- "couldn't set switch config bits, err %s aq_err %s\n",
- i40e_stat_str(hw, ret),
+ "couldn't set switch config bits, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(hw,
hw->aq.asq_last_status));
@@ -6719,8 +6719,8 @@ int i40e_veb_config_tc(struct i40e_veb *veb, u8 enabled_tc)
&bw_data, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "VEB bw config failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "VEB bw config failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
goto out;
}
@@ -6729,8 +6729,8 @@ int i40e_veb_config_tc(struct i40e_veb *veb, u8 enabled_tc)
ret = i40e_veb_get_bw_info(veb);
if (ret) {
dev_info(&pf->pdev->dev,
- "Failed getting veb bw config, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Failed getting veb bw config, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
}
@@ -6813,8 +6813,8 @@ static int i40e_resume_port_tx(struct i40e_pf *pf)
ret = i40e_aq_resume_port_tx(hw, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "Resume Port Tx failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Resume Port Tx failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
/* Schedule PF reset to recover */
set_bit(__I40E_PF_RESET_REQUESTED, pf->state);
@@ -6838,8 +6838,8 @@ static int i40e_suspend_port_tx(struct i40e_pf *pf)
ret = i40e_aq_suspend_port_tx(hw, pf->mac_seid, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "Suspend Port Tx failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Suspend Port Tx failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
/* Schedule PF reset to recover */
set_bit(__I40E_PF_RESET_REQUESTED, pf->state);
@@ -6878,8 +6878,8 @@ static int i40e_hw_set_dcb_config(struct i40e_pf *pf,
ret = i40e_set_dcb_config(&pf->hw);
if (ret) {
dev_info(&pf->pdev->dev,
- "Set DCB Config failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Set DCB Config failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
goto out;
}
@@ -6995,8 +6995,8 @@ int i40e_hw_dcb_config(struct i40e_pf *pf, struct i40e_dcbx_config *new_cfg)
i40e_aqc_opc_modify_switching_comp_ets, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "Modify Port ETS failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Modify Port ETS failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
goto out;
}
@@ -7033,8 +7033,8 @@ int i40e_hw_dcb_config(struct i40e_pf *pf, struct i40e_dcbx_config *new_cfg)
ret = i40e_aq_dcb_updated(&pf->hw, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "DCB Updated failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "DCB Updated failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
goto out;
}
@@ -7117,8 +7117,8 @@ int i40e_dcb_sw_default_config(struct i40e_pf *pf)
i40e_aqc_opc_enable_switching_comp_ets, NULL);
if (err) {
dev_info(&pf->pdev->dev,
- "Enable Port ETS failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, err),
+ "Enable Port ETS failed, err %pe aq_err %s\n",
+ ERR_PTR(err),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
err = -ENOENT;
goto out;
@@ -7197,8 +7197,8 @@ static int i40e_init_pf_dcb(struct i40e_pf *pf)
pf->flags |= I40E_FLAG_DISABLE_FW_LLDP;
} else {
dev_info(&pf->pdev->dev,
- "Query for DCB configuration failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, err),
+ "Query for DCB configuration failed, err %pe aq_err %s\n",
+ ERR_PTR(err),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
}
@@ -7416,15 +7416,15 @@ static void i40e_vsi_reinit_locked(struct i40e_vsi *vsi)
* @pf: board private structure
* @is_up: whether the link state should be forced up or down
**/
-static i40e_status i40e_force_link_state(struct i40e_pf *pf, bool is_up)
+static int i40e_force_link_state(struct i40e_pf *pf, bool is_up)
{
struct i40e_aq_get_phy_abilities_resp abilities;
struct i40e_aq_set_phy_config config = {0};
bool non_zero_phy_type = is_up;
struct i40e_hw *hw = &pf->hw;
- i40e_status err;
u64 mask;
u8 speed;
+ int err;
/* Card might've been put in an unstable state by other drivers
* and applications, which causes incorrect speed values being
@@ -7436,8 +7436,8 @@ static i40e_status i40e_force_link_state(struct i40e_pf *pf, bool is_up)
NULL);
if (err) {
dev_err(&pf->pdev->dev,
- "failed to get phy cap., ret = %s last_status = %s\n",
- i40e_stat_str(hw, err),
+ "failed to get phy cap., ret = %pe last_status = %s\n",
+ ERR_PTR(err),
i40e_aq_str(hw, hw->aq.asq_last_status));
return err;
}
@@ -7448,8 +7448,8 @@ static i40e_status i40e_force_link_state(struct i40e_pf *pf, bool is_up)
NULL);
if (err) {
dev_err(&pf->pdev->dev,
- "failed to get phy cap., ret = %s last_status = %s\n",
- i40e_stat_str(hw, err),
+ "failed to get phy cap., ret = %pe last_status = %s\n",
+ ERR_PTR(err),
i40e_aq_str(hw, hw->aq.asq_last_status));
return err;
}
@@ -7493,8 +7493,8 @@ static i40e_status i40e_force_link_state(struct i40e_pf *pf, bool is_up)
if (err) {
dev_err(&pf->pdev->dev,
- "set phy config ret = %s last_status = %s\n",
- i40e_stat_str(&pf->hw, err),
+ "set phy config ret = %pe last_status = %s\n",
+ ERR_PTR(err),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
return err;
}
@@ -7657,11 +7657,11 @@ static void i40e_vsi_set_default_tc_config(struct i40e_vsi *vsi)
* This function deletes a mac filter on the channel VSI which serves as the
* macvlan. Returns 0 on success.
**/
-static i40e_status i40e_del_macvlan_filter(struct i40e_hw *hw, u16 seid,
- const u8 *macaddr, int *aq_err)
+static int i40e_del_macvlan_filter(struct i40e_hw *hw, u16 seid,
+ const u8 *macaddr, int *aq_err)
{
struct i40e_aqc_remove_macvlan_element_data element;
- i40e_status status;
+ int status;
memset(&element, 0, sizeof(element));
ether_addr_copy(element.mac_addr, macaddr);
@@ -7683,12 +7683,12 @@ static i40e_status i40e_del_macvlan_filter(struct i40e_hw *hw, u16 seid,
* This function adds a mac filter on the channel VSI which serves as the
* macvlan. Returns 0 on success.
**/
-static i40e_status i40e_add_macvlan_filter(struct i40e_hw *hw, u16 seid,
- const u8 *macaddr, int *aq_err)
+static int i40e_add_macvlan_filter(struct i40e_hw *hw, u16 seid,
+ const u8 *macaddr, int *aq_err)
{
struct i40e_aqc_add_macvlan_element_data element;
- i40e_status status;
u16 cmd_flags = 0;
+ int status;
ether_addr_copy(element.mac_addr, macaddr);
element.vlan_tag = 0;
@@ -7834,8 +7834,8 @@ static int i40e_fwd_ring_up(struct i40e_vsi *vsi, struct net_device *vdev,
rx_ring->netdev = NULL;
}
dev_info(&pf->pdev->dev,
- "Error adding mac filter on macvlan err %s, aq_err %s\n",
- i40e_stat_str(hw, ret),
+ "Error adding mac filter on macvlan err %pe, aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(hw, aq_err));
netdev_err(vdev, "L2fwd offload disabled to L2 filter error\n");
}
@@ -7907,8 +7907,8 @@ static int i40e_setup_macvlans(struct i40e_vsi *vsi, u16 macvlan_cnt, u16 qcnt,
ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "Update vsi tc config failed, err %s aq_err %s\n",
- i40e_stat_str(hw, ret),
+ "Update vsi tc config failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
return ret;
}
@@ -8123,8 +8123,8 @@ static void i40e_fwd_del(struct net_device *netdev, void *vdev)
ch->fwd = NULL;
} else {
dev_info(&pf->pdev->dev,
- "Error deleting mac filter on macvlan err %s, aq_err %s\n",
- i40e_stat_str(hw, ret),
+ "Error deleting mac filter on macvlan err %pe, aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(hw, aq_err));
}
break;
@@ -8875,8 +8875,8 @@ static int i40e_delete_clsflower(struct i40e_vsi *vsi,
kfree(filter);
if (err) {
dev_err(&pf->pdev->dev,
- "Failed to delete cloud filter, err %s\n",
- i40e_stat_str(&pf->hw, err));
+ "Failed to delete cloud filter, err %pe\n",
+ ERR_PTR(err));
return i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
}
@@ -9438,8 +9438,8 @@ static int i40e_handle_lldp_event(struct i40e_pf *pf,
pf->flags &= ~I40E_FLAG_DCB_CAPABLE;
} else {
dev_info(&pf->pdev->dev,
- "Failed querying DCB configuration data from firmware, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Failed querying DCB configuration data from firmware, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
}
@@ -9887,8 +9887,8 @@ static void i40e_link_event(struct i40e_pf *pf)
{
struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi];
u8 new_link_speed, old_link_speed;
- i40e_status status;
bool new_link, old_link;
+ int status;
#ifdef CONFIG_I40E_DCB
int err;
#endif /* CONFIG_I40E_DCB */
@@ -10099,9 +10099,9 @@ static void i40e_clean_adminq_subtask(struct i40e_pf *pf)
struct i40e_arq_event_info event;
struct i40e_hw *hw = &pf->hw;
u16 pending, i = 0;
- i40e_status ret;
u16 opcode;
u32 oldval;
+ int ret;
u32 val;
/* Do not run clean AQ when PF reset fails */
@@ -10265,8 +10265,8 @@ static void i40e_enable_pf_switch_lb(struct i40e_pf *pf)
ret = i40e_aq_get_vsi_params(&pf->hw, &ctxt, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "couldn't get PF vsi config, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "couldn't get PF vsi config, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
return;
}
@@ -10277,8 +10277,8 @@ static void i40e_enable_pf_switch_lb(struct i40e_pf *pf)
ret = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "update vsi switch failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "update vsi switch failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
}
}
@@ -10301,8 +10301,8 @@ static void i40e_disable_pf_switch_lb(struct i40e_pf *pf)
ret = i40e_aq_get_vsi_params(&pf->hw, &ctxt, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "couldn't get PF vsi config, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "couldn't get PF vsi config, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
return;
}
@@ -10313,8 +10313,8 @@ static void i40e_disable_pf_switch_lb(struct i40e_pf *pf)
ret = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "update vsi switch failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "update vsi switch failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
}
}
@@ -10458,8 +10458,8 @@ static int i40e_get_capabilities(struct i40e_pf *pf,
buf_len = data_size;
} else if (pf->hw.aq.asq_last_status != I40E_AQ_RC_OK || err) {
dev_info(&pf->pdev->dev,
- "capability discovery failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, err),
+ "capability discovery failed, err %pe aq_err %s\n",
+ ERR_PTR(err),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
return -ENODEV;
@@ -10580,7 +10580,7 @@ static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
struct i40e_cloud_filter *cfilter;
struct i40e_pf *pf = vsi->back;
struct hlist_node *node;
- i40e_status ret;
+ int ret;
/* Add cloud filters back if they exist */
hlist_for_each_entry_safe(cfilter, node, &pf->cloud_filter_list,
@@ -10596,8 +10596,8 @@ static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
if (ret) {
dev_dbg(&pf->pdev->dev,
- "Failed to rebuild cloud filter, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Failed to rebuild cloud filter, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
return ret;
@@ -10615,7 +10615,7 @@ static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
static int i40e_rebuild_channels(struct i40e_vsi *vsi)
{
struct i40e_channel *ch, *ch_tmp;
- i40e_status ret;
+ int ret;
if (list_empty(&vsi->ch_list))
return 0;
@@ -10691,7 +10691,7 @@ static void i40e_clean_xps_state(struct i40e_vsi *vsi)
static void i40e_prep_for_reset(struct i40e_pf *pf)
{
struct i40e_hw *hw = &pf->hw;
- i40e_status ret = 0;
+ int ret = 0;
u32 v;
clear_bit(__I40E_RESET_INTR_RECEIVED, pf->state);
@@ -10796,7 +10796,7 @@ static void i40e_get_oem_version(struct i40e_hw *hw)
static int i40e_reset(struct i40e_pf *pf)
{
struct i40e_hw *hw = &pf->hw;
- i40e_status ret;
+ int ret;
ret = i40e_pf_reset(hw);
if (ret) {
@@ -10821,7 +10821,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
const bool is_recovery_mode_reported = i40e_check_recovery_mode(pf);
struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi];
struct i40e_hw *hw = &pf->hw;
- i40e_status ret;
+ int ret;
u32 val;
int v;
@@ -10837,8 +10837,8 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
/* rebuild the basics for the AdminQ, HMC, and initial HW switch */
ret = i40e_init_adminq(&pf->hw);
if (ret) {
- dev_info(&pf->pdev->dev, "Rebuild AdminQ failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ dev_info(&pf->pdev->dev, "Rebuild AdminQ failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
goto clear_recovery;
}
@@ -10949,8 +10949,8 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
I40E_AQ_EVENT_MEDIA_NA |
I40E_AQ_EVENT_MODULE_QUAL_FAIL), NULL);
if (ret)
- dev_info(&pf->pdev->dev, "set phy mask fail, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ dev_info(&pf->pdev->dev, "set phy mask fail, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
/* Rebuild the VSIs and VEBs that existed before reset.
@@ -11053,8 +11053,8 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
msleep(75);
ret = i40e_aq_set_link_restart_an(&pf->hw, true, NULL);
if (ret)
- dev_info(&pf->pdev->dev, "link restart failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ dev_info(&pf->pdev->dev, "link restart failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
}
@@ -11082,9 +11082,9 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
ret = i40e_set_promiscuous(pf, pf->cur_promisc);
if (ret)
dev_warn(&pf->pdev->dev,
- "Failed to restore promiscuous setting: %s, err %s aq_err %s\n",
+ "Failed to restore promiscuous setting: %s, err %pe aq_err %s\n",
pf->cur_promisc ? "on" : "off",
- i40e_stat_str(&pf->hw, ret),
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
i40e_reset_all_vfs(pf, true);
@@ -12218,8 +12218,8 @@ static int i40e_get_rss_aq(struct i40e_vsi *vsi, const u8 *seed,
(struct i40e_aqc_get_set_rss_key_data *)seed);
if (ret) {
dev_info(&pf->pdev->dev,
- "Cannot get RSS key, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Cannot get RSS key, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
return ret;
@@ -12232,8 +12232,8 @@ static int i40e_get_rss_aq(struct i40e_vsi *vsi, const u8 *seed,
ret = i40e_aq_get_rss_lut(hw, vsi->id, pf_lut, lut, lut_size);
if (ret) {
dev_info(&pf->pdev->dev,
- "Cannot get RSS lut, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Cannot get RSS lut, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
return ret;
@@ -12508,11 +12508,11 @@ int i40e_reconfig_rss_queues(struct i40e_pf *pf, int queue_count)
* i40e_get_partition_bw_setting - Retrieve BW settings for this PF partition
* @pf: board private structure
**/
-i40e_status i40e_get_partition_bw_setting(struct i40e_pf *pf)
+int i40e_get_partition_bw_setting(struct i40e_pf *pf)
{
- i40e_status status;
bool min_valid, max_valid;
u32 max_bw, min_bw;
+ int status;
status = i40e_read_bw_from_alt_ram(&pf->hw, &max_bw, &min_bw,
&min_valid, &max_valid);
@@ -12531,10 +12531,10 @@ i40e_status i40e_get_partition_bw_setting(struct i40e_pf *pf)
* i40e_set_partition_bw_setting - Set BW settings for this PF partition
* @pf: board private structure
**/
-i40e_status i40e_set_partition_bw_setting(struct i40e_pf *pf)
+int i40e_set_partition_bw_setting(struct i40e_pf *pf)
{
struct i40e_aqc_configure_partition_bw_data bw_data;
- i40e_status status;
+ int status;
memset(&bw_data, 0, sizeof(bw_data));
@@ -12553,12 +12553,12 @@ i40e_status i40e_set_partition_bw_setting(struct i40e_pf *pf)
* i40e_commit_partition_bw_setting - Commit BW settings for this PF partition
* @pf: board private structure
**/
-i40e_status i40e_commit_partition_bw_setting(struct i40e_pf *pf)
+int i40e_commit_partition_bw_setting(struct i40e_pf *pf)
{
/* Commit temporary BW setting to permanent NVM image */
enum i40e_admin_queue_err last_aq_status;
- i40e_status ret;
u16 nvm_word;
+ int ret;
if (pf->hw.partition_id != 1) {
dev_info(&pf->pdev->dev,
@@ -12573,8 +12573,8 @@ i40e_status i40e_commit_partition_bw_setting(struct i40e_pf *pf)
last_aq_status = pf->hw.aq.asq_last_status;
if (ret) {
dev_info(&pf->pdev->dev,
- "Cannot acquire NVM for read access, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Cannot acquire NVM for read access, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, last_aq_status));
goto bw_commit_out;
}
@@ -12590,8 +12590,8 @@ i40e_status i40e_commit_partition_bw_setting(struct i40e_pf *pf)
last_aq_status = pf->hw.aq.asq_last_status;
i40e_release_nvm(&pf->hw);
if (ret) {
- dev_info(&pf->pdev->dev, "NVM read error, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ dev_info(&pf->pdev->dev, "NVM read error, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, last_aq_status));
goto bw_commit_out;
}
@@ -12604,8 +12604,8 @@ i40e_status i40e_commit_partition_bw_setting(struct i40e_pf *pf)
last_aq_status = pf->hw.aq.asq_last_status;
if (ret) {
dev_info(&pf->pdev->dev,
- "Cannot acquire NVM for write access, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "Cannot acquire NVM for write access, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, last_aq_status));
goto bw_commit_out;
}
@@ -12624,8 +12624,8 @@ i40e_status i40e_commit_partition_bw_setting(struct i40e_pf *pf)
i40e_release_nvm(&pf->hw);
if (ret)
dev_info(&pf->pdev->dev,
- "BW settings NOT SAVED, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "BW settings NOT SAVED, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, last_aq_status));
bw_commit_out:
@@ -12646,7 +12646,7 @@ static bool i40e_is_total_port_shutdown_enabled(struct i40e_pf *pf)
#define I40E_LINK_BEHAVIOR_WORD_LENGTH 0x1
#define I40E_LINK_BEHAVIOR_OS_FORCED_ENABLED BIT(0)
#define I40E_LINK_BEHAVIOR_PORT_BIT_LENGTH 4
- i40e_status read_status = I40E_SUCCESS;
+ int read_status = I40E_SUCCESS;
u16 sr_emp_sr_settings_ptr = 0;
u16 features_enable = 0;
u16 link_behavior = 0;
@@ -12679,8 +12679,8 @@ static bool i40e_is_total_port_shutdown_enabled(struct i40e_pf *pf)
err_nvm:
dev_warn(&pf->pdev->dev,
- "total-port-shutdown feature is off due to read nvm error: %s\n",
- i40e_stat_str(&pf->hw, read_status));
+ "total-port-shutdown feature is off due to read nvm error: %pe\n",
+ ERR_PTR(read_status));
return ret;
}
@@ -13025,7 +13025,7 @@ static int i40e_udp_tunnel_set_port(struct net_device *netdev,
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_hw *hw = &np->vsi->back->hw;
u8 type, filter_index;
- i40e_status ret;
+ int ret;
type = ti->type == UDP_TUNNEL_TYPE_VXLAN ? I40E_AQC_TUNNEL_TYPE_VXLAN :
I40E_AQC_TUNNEL_TYPE_NGE;
@@ -13033,8 +13033,8 @@ static int i40e_udp_tunnel_set_port(struct net_device *netdev,
ret = i40e_aq_add_udp_tunnel(hw, ntohs(ti->port), type, &filter_index,
NULL);
if (ret) {
- netdev_info(netdev, "add UDP port failed, err %s aq_err %s\n",
- i40e_stat_str(hw, ret),
+ netdev_info(netdev, "add UDP port failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
return -EIO;
}
@@ -13049,12 +13049,12 @@ static int i40e_udp_tunnel_unset_port(struct net_device *netdev,
{
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_hw *hw = &np->vsi->back->hw;
- i40e_status ret;
+ int ret;
ret = i40e_aq_del_udp_tunnel(hw, ti->hw_priv, NULL);
if (ret) {
- netdev_info(netdev, "delete UDP port failed, err %s aq_err %s\n",
- i40e_stat_str(hw, ret),
+ netdev_info(netdev, "delete UDP port failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
return -EIO;
}
@@ -13341,9 +13341,11 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi, struct bpf_prog *prog,
old_prog = xchg(&vsi->xdp_prog, prog);
if (need_reset) {
- if (!prog)
+ if (!prog) {
+ xdp_features_clear_redirect_target(vsi->netdev);
/* Wait until ndo_xsk_wakeup completes. */
synchronize_rcu();
+ }
i40e_reset_and_rebuild(pf, true, true);
}
@@ -13364,11 +13366,13 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi, struct bpf_prog *prog,
/* Kick start the NAPI context if there is an AF_XDP socket open
 * on that queue id. This is so that receiving will start.
*/
- if (need_reset && prog)
+ if (need_reset && prog) {
for (i = 0; i < vsi->num_queue_pairs; i++)
if (vsi->xdp_rings[i]->xsk_pool)
(void)i40e_xsk_wakeup(vsi->netdev, i,
XDP_WAKEUP_RX);
+ xdp_features_set_redirect_target(vsi->netdev, true);
+ }
return 0;
}
@@ -13803,6 +13807,10 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
spin_lock_bh(&vsi->mac_filter_hash_lock);
i40e_add_mac_filter(vsi, mac_addr);
spin_unlock_bh(&vsi->mac_filter_hash_lock);
+
+ netdev->xdp_features = NETDEV_XDP_ACT_BASIC |
+ NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_XSK_ZEROCOPY;
} else {
/* Relate the VSI_VMDQ name to the VSI_MAIN name. Note that we
* are still limited by IFNAMSIZ, but we're adding 'v%d\0' to
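The two XDP hunks above are the one functional change mixed into this file: they wire i40e into the new XDP feature-advertisement API ("Add capability to export the XDP features supported by the NIC" in the pull summary). The static capabilities are declared once when the netdev is configured, while the redirect-target bit is toggled as a program is attached or detached. A condensed, driver-agnostic sketch of that pattern (the function name is illustrative):

#include <linux/netdevice.h>
#include <net/xdp.h>

static void example_advertise_xdp(struct net_device *netdev, bool prog_attached)
{
	/* Capabilities the datapath always supports. */
	netdev->xdp_features = NETDEV_XDP_ACT_BASIC |
			       NETDEV_XDP_ACT_REDIRECT |
			       NETDEV_XDP_ACT_XSK_ZEROCOPY;

	/* Advertise as a redirect target only while a program is loaded. */
	if (prog_attached)
		xdp_features_set_redirect_target(netdev, true);
	else
		xdp_features_clear_redirect_target(netdev);
}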
@@ -13943,8 +13951,8 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
ctxt.flags = I40E_AQ_VSI_TYPE_PF;
if (ret) {
dev_info(&pf->pdev->dev,
- "couldn't get PF vsi config, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "couldn't get PF vsi config, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
return -ENOENT;
@@ -13973,8 +13981,8 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "update vsi failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "update vsi failed, err %d aq_err %s\n",
+ ret,
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
ret = -ENOENT;
@@ -13993,8 +14001,8 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "update vsi failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "update vsi failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
ret = -ENOENT;
@@ -14016,9 +14024,9 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
* message and continue
*/
dev_info(&pf->pdev->dev,
- "failed to configure TCs for main VSI tc_map 0x%08x, err %s aq_err %s\n",
+ "failed to configure TCs for main VSI tc_map 0x%08x, err %pe aq_err %s\n",
enabled_tc,
- i40e_stat_str(&pf->hw, ret),
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
}
@@ -14112,8 +14120,8 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
ret = i40e_aq_add_vsi(hw, &ctxt, NULL);
if (ret) {
dev_info(&vsi->back->pdev->dev,
- "add vsi failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "add vsi failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
ret = -ENOENT;
@@ -14144,8 +14152,8 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
ret = i40e_vsi_get_bw_info(vsi);
if (ret) {
dev_info(&pf->pdev->dev,
- "couldn't get vsi bw info, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "couldn't get vsi bw info, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
/* VSI is already added so not tearing that up */
ret = 0;
@@ -14591,8 +14599,8 @@ static int i40e_veb_get_bw_info(struct i40e_veb *veb)
&bw_data, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "query veb bw config failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "query veb bw config failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, hw->aq.asq_last_status));
goto out;
}
@@ -14601,8 +14609,8 @@ static int i40e_veb_get_bw_info(struct i40e_veb *veb)
&ets_data, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "query veb bw ets config failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "query veb bw ets config failed, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, hw->aq.asq_last_status));
goto out;
}
@@ -14798,8 +14806,8 @@ static int i40e_add_veb(struct i40e_veb *veb, struct i40e_vsi *vsi)
/* get a VEB from the hardware */
if (ret) {
dev_info(&pf->pdev->dev,
- "couldn't add VEB, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "couldn't add VEB, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
return -EPERM;
}
@@ -14809,16 +14817,16 @@ static int i40e_add_veb(struct i40e_veb *veb, struct i40e_vsi *vsi)
&veb->stats_idx, NULL, NULL, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "couldn't get VEB statistics idx, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "couldn't get VEB statistics idx, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
return -EPERM;
}
ret = i40e_veb_get_bw_info(veb);
if (ret) {
dev_info(&pf->pdev->dev,
- "couldn't get VEB bw info, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "couldn't get VEB bw info, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
i40e_aq_delete_element(&pf->hw, veb->seid, NULL);
return -ENOENT;
@@ -15028,8 +15036,8 @@ int i40e_fetch_switch_configuration(struct i40e_pf *pf, bool printconfig)
&next_seid, NULL);
if (ret) {
dev_info(&pf->pdev->dev,
- "get switch config failed err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "get switch config failed err %d aq_err %s\n",
+ ret,
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
kfree(aq_buf);
@@ -15074,8 +15082,8 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit, bool lock_acqui
ret = i40e_fetch_switch_configuration(pf, false);
if (ret) {
dev_info(&pf->pdev->dev,
- "couldn't fetch switch config, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "couldn't fetch switch config, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
return ret;
}
@@ -15101,8 +15109,8 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit, bool lock_acqui
NULL);
if (ret && pf->hw.aq.asq_last_status != I40E_AQ_RC_ESRCH) {
dev_info(&pf->pdev->dev,
- "couldn't set switch config bits, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, ret),
+ "couldn't set switch config bits, err %pe aq_err %s\n",
+ ERR_PTR(ret),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
/* not a fatal problem, just keep going */
@@ -15439,13 +15447,12 @@ static bool i40e_check_recovery_mode(struct i40e_pf *pf)
*
* Return 0 on success, negative on failure.
**/
-static i40e_status i40e_pf_loop_reset(struct i40e_pf *pf)
+static int i40e_pf_loop_reset(struct i40e_pf *pf)
{
/* wait max 10 seconds for PF reset to succeed */
const unsigned long time_end = jiffies + 10 * HZ;
-
struct i40e_hw *hw = &pf->hw;
- i40e_status ret;
+ int ret;
ret = i40e_pf_reset(hw);
while (ret != I40E_SUCCESS && time_before(jiffies, time_end)) {
@@ -15491,9 +15498,9 @@ static bool i40e_check_fw_empr(struct i40e_pf *pf)
* Return 0 if NIC is healthy or negative value when there are issues
* with resets
**/
-static i40e_status i40e_handle_resets(struct i40e_pf *pf)
+static int i40e_handle_resets(struct i40e_pf *pf)
{
- const i40e_status pfr = i40e_pf_loop_reset(pf);
+ const int pfr = i40e_pf_loop_reset(pf);
const bool is_empr = i40e_check_fw_empr(pf);
if (is_empr || pfr != I40E_SUCCESS)
@@ -15591,7 +15598,6 @@ err_switch_setup:
timer_shutdown_sync(&pf->service_timer);
i40e_shutdown_adminq(hw);
iounmap(hw->hw_addr);
- pci_disable_pcie_error_reporting(pf->pdev);
pci_release_mem_regions(pf->pdev);
pci_disable_device(pf->pdev);
kfree(pf);
@@ -15631,13 +15637,15 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
struct i40e_aq_get_phy_abilities_resp abilities;
#ifdef CONFIG_I40E_DCB
enum i40e_get_fw_lldp_status_resp lldp_status;
- i40e_status status;
#endif /* CONFIG_I40E_DCB */
struct i40e_pf *pf;
struct i40e_hw *hw;
static u16 pfs_found;
u16 wol_nvm_bits;
u16 link_status;
+#ifdef CONFIG_I40E_DCB
+ int status;
+#endif /* CONFIG_I40E_DCB */
int err;
u32 val;
u32 i;
@@ -15662,7 +15670,6 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_pci_reg;
}
- pci_enable_pcie_error_reporting(pdev);
pci_set_master(pdev);
/* Now that we have a PCI connection, we need to do the
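The pci_enable_pcie_error_reporting() / pci_disable_pcie_error_reporting() deletions in the probe, remove, and error-unwind paths are independent of the status-type cleanup: the PCI core now enables AER reporting itself during device setup, so these per-driver calls became redundant and are being removed tree-wide. A sketch of the resulting probe opening, assuming nothing beyond the core PCI API:

#include <linux/pci.h>

static int example_probe_start(struct pci_dev *pdev)
{
	int err;

	err = pci_enable_device_mem(pdev);
	if (err)
		return err;

	/* No pci_enable_pcie_error_reporting() here: the PCI core has
	 * already configured AER reporting for this device. */
	pci_set_master(pdev);
	return 0;
}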
@@ -16006,8 +16013,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
I40E_AQ_EVENT_MEDIA_NA |
I40E_AQ_EVENT_MODULE_QUAL_FAIL), NULL);
if (err)
- dev_info(&pf->pdev->dev, "set phy mask fail, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, err),
+ dev_info(&pf->pdev->dev, "set phy mask fail, err %pe aq_err %s\n",
+ ERR_PTR(err),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
/* Reconfigure hardware for allowing smaller MSS in the case
@@ -16025,8 +16032,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
msleep(75);
err = i40e_aq_set_link_restart_an(&pf->hw, true, NULL);
if (err)
- dev_info(&pf->pdev->dev, "link restart failed, err %s aq_err %s\n",
- i40e_stat_str(&pf->hw, err),
+ dev_info(&pf->pdev->dev, "link restart failed, err %pe aq_err %s\n",
+ ERR_PTR(err),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
}
@@ -16158,8 +16165,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* get the requested speeds from the fw */
err = i40e_aq_get_phy_capabilities(hw, false, false, &abilities, NULL);
if (err)
- dev_dbg(&pf->pdev->dev, "get requested speeds ret = %s last_status = %s\n",
- i40e_stat_str(&pf->hw, err),
+ dev_dbg(&pf->pdev->dev, "get requested speeds ret = %pe last_status = %s\n",
+ ERR_PTR(err),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
pf->hw.phy.link_info.requested_speeds = abilities.link_speed;
@@ -16169,8 +16176,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* get the supported phy types from the fw */
err = i40e_aq_get_phy_capabilities(hw, false, true, &abilities, NULL);
if (err)
- dev_dbg(&pf->pdev->dev, "get supported phy types ret = %s last_status = %s\n",
- i40e_stat_str(&pf->hw, err),
+ dev_dbg(&pf->pdev->dev, "get supported phy types ret = %pe last_status = %s\n",
+ ERR_PTR(err),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
/* make sure the MFS hasn't been set lower than the default */
@@ -16220,7 +16227,6 @@ err_pf_reset:
err_ioremap:
kfree(pf);
err_pf_alloc:
- pci_disable_pcie_error_reporting(pdev);
pci_release_mem_regions(pdev);
err_pci_reg:
err_dma:
@@ -16241,7 +16247,7 @@ static void i40e_remove(struct pci_dev *pdev)
{
struct i40e_pf *pf = pci_get_drvdata(pdev);
struct i40e_hw *hw = &pf->hw;
- i40e_status ret_code;
+ int ret_code;
int i;
i40e_dbg_pf_exit(pf);
@@ -16368,7 +16374,6 @@ unmap:
kfree(pf);
pci_release_mem_regions(pdev);
- pci_disable_pcie_error_reporting(pdev);
pci_disable_device(pdev);
}
@@ -16489,9 +16494,9 @@ static void i40e_pci_error_resume(struct pci_dev *pdev)
static void i40e_enable_mc_magic_wake(struct i40e_pf *pf)
{
struct i40e_hw *hw = &pf->hw;
- i40e_status ret;
u8 mac_addr[6];
u16 flags = 0;
+ int ret;
/* Get current MAC address in case it's an LAA */
if (pf->vsi[pf->lan_vsi] && pf->vsi[pf->lan_vsi]->netdev) {
diff --git a/drivers/net/ethernet/intel/i40e/i40e_nvm.c b/drivers/net/ethernet/intel/i40e/i40e_nvm.c
index 3a38bf8bcde7..9da0c87f0328 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_nvm.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_nvm.c
@@ -13,10 +13,10 @@
* in this file) as an equivalent of the FLASH part mapped into the SR.
* We always access the FLASH through the Shadow RAM.
**/
-i40e_status i40e_init_nvm(struct i40e_hw *hw)
+int i40e_init_nvm(struct i40e_hw *hw)
{
struct i40e_nvm_info *nvm = &hw->nvm;
- i40e_status ret_code = 0;
+ int ret_code = 0;
u32 fla, gens;
u8 sr_size;
@@ -52,12 +52,12 @@ i40e_status i40e_init_nvm(struct i40e_hw *hw)
* This function will request NVM ownership for reading
* via the proper Admin Command.
**/
-i40e_status i40e_acquire_nvm(struct i40e_hw *hw,
- enum i40e_aq_resource_access_type access)
+int i40e_acquire_nvm(struct i40e_hw *hw,
+ enum i40e_aq_resource_access_type access)
{
- i40e_status ret_code = 0;
u64 gtime, timeout;
u64 time_left = 0;
+ int ret_code = 0;
if (hw->nvm.blank_nvm_mode)
goto i40e_i40e_acquire_nvm_exit;
@@ -111,7 +111,7 @@ i40e_i40e_acquire_nvm_exit:
**/
void i40e_release_nvm(struct i40e_hw *hw)
{
- i40e_status ret_code = I40E_SUCCESS;
+ int ret_code = I40E_SUCCESS;
u32 total_delay = 0;
if (hw->nvm.blank_nvm_mode)
@@ -138,9 +138,9 @@ void i40e_release_nvm(struct i40e_hw *hw)
*
* Polls the SRCTL Shadow RAM register done bit.
**/
-static i40e_status i40e_poll_sr_srctl_done_bit(struct i40e_hw *hw)
+static int i40e_poll_sr_srctl_done_bit(struct i40e_hw *hw)
{
- i40e_status ret_code = I40E_ERR_TIMEOUT;
+ int ret_code = I40E_ERR_TIMEOUT;
u32 srctl, wait_cnt;
/* Poll the I40E_GLNVM_SRCTL until the done bit is set */
@@ -165,10 +165,10 @@ static i40e_status i40e_poll_sr_srctl_done_bit(struct i40e_hw *hw)
*
* Reads one 16 bit word from the Shadow RAM using the GLNVM_SRCTL register.
**/
-static i40e_status i40e_read_nvm_word_srctl(struct i40e_hw *hw, u16 offset,
- u16 *data)
+static int i40e_read_nvm_word_srctl(struct i40e_hw *hw, u16 offset,
+ u16 *data)
{
- i40e_status ret_code = I40E_ERR_TIMEOUT;
+ int ret_code = I40E_ERR_TIMEOUT;
u32 sr_reg;
if (offset >= hw->nvm.sr_size) {
@@ -216,13 +216,13 @@ read_nvm_exit:
*
* Reads a 16 bit word buffer from the Shadow RAM using the admin command.
**/
-static i40e_status i40e_read_nvm_aq(struct i40e_hw *hw,
- u8 module_pointer, u32 offset,
- u16 words, void *data,
- bool last_command)
+static int i40e_read_nvm_aq(struct i40e_hw *hw,
+ u8 module_pointer, u32 offset,
+ u16 words, void *data,
+ bool last_command)
{
- i40e_status ret_code = I40E_ERR_NVM;
struct i40e_asq_cmd_details cmd_details;
+ int ret_code = I40E_ERR_NVM;
memset(&cmd_details, 0, sizeof(cmd_details));
cmd_details.wb_desc = &hw->nvm_wb_desc;
@@ -264,10 +264,10 @@ static i40e_status i40e_read_nvm_aq(struct i40e_hw *hw,
*
* Reads one 16 bit word from the Shadow RAM using the AdminQ.
**/
-static i40e_status i40e_read_nvm_word_aq(struct i40e_hw *hw, u16 offset,
- u16 *data)
+static int i40e_read_nvm_word_aq(struct i40e_hw *hw, u16 offset,
+ u16 *data)
{
- i40e_status ret_code = I40E_ERR_TIMEOUT;
+ int ret_code = I40E_ERR_TIMEOUT;
ret_code = i40e_read_nvm_aq(hw, 0x0, offset, 1, data, true);
*data = le16_to_cpu(*(__le16 *)data);
@@ -286,8 +286,8 @@ static i40e_status i40e_read_nvm_word_aq(struct i40e_hw *hw, u16 offset,
* Do not use this function except in cases where the nvm lock is already
* taken via i40e_acquire_nvm().
**/
-static i40e_status __i40e_read_nvm_word(struct i40e_hw *hw,
- u16 offset, u16 *data)
+static int __i40e_read_nvm_word(struct i40e_hw *hw,
+ u16 offset, u16 *data)
{
if (hw->flags & I40E_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE)
return i40e_read_nvm_word_aq(hw, offset, data);
@@ -303,10 +303,10 @@ static i40e_status __i40e_read_nvm_word(struct i40e_hw *hw,
*
* Reads one 16 bit word from the Shadow RAM.
**/
-i40e_status i40e_read_nvm_word(struct i40e_hw *hw, u16 offset,
- u16 *data)
+int i40e_read_nvm_word(struct i40e_hw *hw, u16 offset,
+ u16 *data)
{
- i40e_status ret_code = 0;
+ int ret_code = 0;
if (hw->flags & I40E_HW_FLAG_NVM_READ_REQUIRES_LOCK)
ret_code = i40e_acquire_nvm(hw, I40E_RESOURCE_READ);
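The hunk above keeps the conditional-locking shape of i40e_read_nvm_word(): when the hardware flag demands it, the word read is bracketed by an acquire/release pair. A simplified sketch of that shape, mirroring the function body rather than copying it:

/* Sketch only: mirrors the structure of i40e_read_nvm_word() above. */
static int read_word_locked(struct i40e_hw *hw, u16 offset, u16 *data)
{
	int ret_code = 0;

	if (hw->flags & I40E_HW_FLAG_NVM_READ_REQUIRES_LOCK)
		ret_code = i40e_acquire_nvm(hw, I40E_RESOURCE_READ);
	if (ret_code)
		return ret_code;

	ret_code = __i40e_read_nvm_word(hw, offset, data);

	if (hw->flags & I40E_HW_FLAG_NVM_READ_REQUIRES_LOCK)
		i40e_release_nvm(hw);

	return ret_code;
}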
@@ -330,17 +330,17 @@ i40e_status i40e_read_nvm_word(struct i40e_hw *hw, u16 offset,
* @words_data_size: Words to read from NVM
* @data_ptr: Pointer to memory location where resulting buffer will be stored
**/
-enum i40e_status_code i40e_read_nvm_module_data(struct i40e_hw *hw,
- u8 module_ptr,
- u16 module_offset,
- u16 data_offset,
- u16 words_data_size,
- u16 *data_ptr)
+int i40e_read_nvm_module_data(struct i40e_hw *hw,
+ u8 module_ptr,
+ u16 module_offset,
+ u16 data_offset,
+ u16 words_data_size,
+ u16 *data_ptr)
{
- i40e_status status;
u16 specific_ptr = 0;
u16 ptr_value = 0;
u32 offset = 0;
+ int status;
if (module_ptr != 0) {
status = i40e_read_nvm_word(hw, module_ptr, &ptr_value);
@@ -406,10 +406,10 @@ enum i40e_status_code i40e_read_nvm_module_data(struct i40e_hw *hw,
* method. The buffer read is preceded by taking NVM ownership
* and followed by its release.
**/
-static i40e_status i40e_read_nvm_buffer_srctl(struct i40e_hw *hw, u16 offset,
- u16 *words, u16 *data)
+static int i40e_read_nvm_buffer_srctl(struct i40e_hw *hw, u16 offset,
+ u16 *words, u16 *data)
{
- i40e_status ret_code = 0;
+ int ret_code = 0;
u16 index, word;
/* Loop through the selected region */
@@ -437,13 +437,13 @@ static i40e_status i40e_read_nvm_buffer_srctl(struct i40e_hw *hw, u16 offset,
* method. The buffer read is preceded by taking NVM ownership
* and followed by its release.
**/
-static i40e_status i40e_read_nvm_buffer_aq(struct i40e_hw *hw, u16 offset,
- u16 *words, u16 *data)
+static int i40e_read_nvm_buffer_aq(struct i40e_hw *hw, u16 offset,
+ u16 *words, u16 *data)
{
- i40e_status ret_code;
- u16 read_size;
bool last_cmd = false;
u16 words_read = 0;
+ u16 read_size;
+ int ret_code;
u16 i = 0;
do {
@@ -493,9 +493,9 @@ read_nvm_buffer_aq_exit:
* Reads 16 bit words (data buffer) from the SR using either the AdminQ
* or the SRCTL method, depending on the hardware access flag.
**/
-static i40e_status __i40e_read_nvm_buffer(struct i40e_hw *hw,
- u16 offset, u16 *words,
- u16 *data)
+static int __i40e_read_nvm_buffer(struct i40e_hw *hw,
+ u16 offset, u16 *words,
+ u16 *data)
{
if (hw->flags & I40E_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE)
return i40e_read_nvm_buffer_aq(hw, offset, words, data);
@@ -514,10 +514,10 @@ static i40e_status __i40e_read_nvm_buffer(struct i40e_hw *hw,
* method. The buffer read is preceded by taking NVM ownership
* and followed by its release.
**/
-i40e_status i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset,
- u16 *words, u16 *data)
+int i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset,
+ u16 *words, u16 *data)
{
- i40e_status ret_code = 0;
+ int ret_code = 0;
if (hw->flags & I40E_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE) {
ret_code = i40e_acquire_nvm(hw, I40E_RESOURCE_READ);
@@ -544,12 +544,12 @@ i40e_status i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset,
*
* Writes a 16 bit words buffer to the Shadow RAM using the admin command.
**/
-static i40e_status i40e_write_nvm_aq(struct i40e_hw *hw, u8 module_pointer,
- u32 offset, u16 words, void *data,
- bool last_command)
+static int i40e_write_nvm_aq(struct i40e_hw *hw, u8 module_pointer,
+ u32 offset, u16 words, void *data,
+ bool last_command)
{
- i40e_status ret_code = I40E_ERR_NVM;
struct i40e_asq_cmd_details cmd_details;
+ int ret_code = I40E_ERR_NVM;
memset(&cmd_details, 0, sizeof(cmd_details));
cmd_details.wb_desc = &hw->nvm_wb_desc;
@@ -594,14 +594,14 @@ static i40e_status i40e_write_nvm_aq(struct i40e_hw *hw, u8 module_pointer,
* is customer specific and unknown. Therefore, this function skips the entire
* maximum possible VPD size (1kB).
**/
-static i40e_status i40e_calc_nvm_checksum(struct i40e_hw *hw,
- u16 *checksum)
+static int i40e_calc_nvm_checksum(struct i40e_hw *hw,
+ u16 *checksum)
{
- i40e_status ret_code;
struct i40e_virt_mem vmem;
u16 pcie_alt_module = 0;
u16 checksum_local = 0;
u16 vpd_module = 0;
+ int ret_code;
u16 *data;
u16 i = 0;
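The kdoc above explains why the calculation excludes the VPD area wholesale: its real size is customer specific, so the full 1kB window is skipped. A self-contained sketch of that skip logic; the constants here are assumptions standing in for the driver's defines, and the in-tree function additionally skips the PCIe ALT auto-load module:

#include <linux/types.h>

#define SR_SW_CHECKSUM_WORD	0x3f		/* assumed checksum word offset */
#define VPD_MAX_WORDS		(1024 / 2)	/* 1kB VPD window, in 16-bit words */
#define SW_CHECKSUM_BASE	0xbaba		/* assumed checksum base constant */

/* Illustrative only: sums Shadow RAM words, skipping the checksum word
 * itself and the whole possible VPD window, then derives the checksum.
 */
static u16 calc_sr_checksum(const u16 *sr, u16 sr_size, u16 vpd_start)
{
	u16 sum = 0, i;

	for (i = 0; i < sr_size; i++) {
		if (i == SR_SW_CHECKSUM_WORD)
			continue;
		if (i >= vpd_start && i < vpd_start + VPD_MAX_WORDS)
			continue;
		sum += sr[i];
	}
	return SW_CHECKSUM_BASE - sum;
}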
@@ -675,11 +675,11 @@ i40e_calc_nvm_checksum_exit:
* on ARQ completion event reception by caller.
* This function will commit SR to NVM.
**/
-i40e_status i40e_update_nvm_checksum(struct i40e_hw *hw)
+int i40e_update_nvm_checksum(struct i40e_hw *hw)
{
- i40e_status ret_code;
- u16 checksum;
__le16 le_sum;
+ int ret_code;
+ u16 checksum;
ret_code = i40e_calc_nvm_checksum(hw, &checksum);
if (!ret_code) {
@@ -699,12 +699,12 @@ i40e_status i40e_update_nvm_checksum(struct i40e_hw *hw)
* Performs checksum calculation and validates the NVM SW checksum. If the
* caller does not need checksum, the value can be NULL.
**/
-i40e_status i40e_validate_nvm_checksum(struct i40e_hw *hw,
- u16 *checksum)
+int i40e_validate_nvm_checksum(struct i40e_hw *hw,
+ u16 *checksum)
{
- i40e_status ret_code = 0;
- u16 checksum_sr = 0;
u16 checksum_local = 0;
+ u16 checksum_sr = 0;
+ int ret_code = 0;
/* We must acquire the NVM lock in order to correctly synchronize the
* NVM accesses across multiple PFs. Without doing so it is possible
@@ -733,36 +733,36 @@ i40e_status i40e_validate_nvm_checksum(struct i40e_hw *hw,
return ret_code;
}
-static i40e_status i40e_nvmupd_state_init(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno);
-static i40e_status i40e_nvmupd_state_reading(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno);
-static i40e_status i40e_nvmupd_state_writing(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *errno);
+static int i40e_nvmupd_state_init(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno);
+static int i40e_nvmupd_state_reading(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno);
+static int i40e_nvmupd_state_writing(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *errno);
static enum i40e_nvmupd_cmd i40e_nvmupd_validate_command(struct i40e_hw *hw,
struct i40e_nvm_access *cmd,
int *perrno);
-static i40e_status i40e_nvmupd_nvm_erase(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- int *perrno);
-static i40e_status i40e_nvmupd_nvm_write(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno);
-static i40e_status i40e_nvmupd_nvm_read(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno);
-static i40e_status i40e_nvmupd_exec_aq(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno);
-static i40e_status i40e_nvmupd_get_aq_result(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno);
-static i40e_status i40e_nvmupd_get_aq_event(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno);
+static int i40e_nvmupd_nvm_erase(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ int *perrno);
+static int i40e_nvmupd_nvm_write(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno);
+static int i40e_nvmupd_nvm_read(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno);
+static int i40e_nvmupd_exec_aq(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno);
+static int i40e_nvmupd_get_aq_result(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno);
+static int i40e_nvmupd_get_aq_event(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno);
static inline u8 i40e_nvmupd_get_module(u32 val)
{
return (u8)(val & I40E_NVM_MOD_PNT_MASK);
@@ -807,12 +807,12 @@ static const char * const i40e_nvm_update_state_str[] = {
*
* Dispatches command depending on what update state is current
**/
-i40e_status i40e_nvmupd_command(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
+int i40e_nvmupd_command(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno)
{
- i40e_status status;
enum i40e_nvmupd_cmd upd_cmd;
+ int status;
/* assume success */
*perrno = 0;
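i40e_nvmupd_command() is a small state machine: the kdoc above says it dispatches on the current update state, and the static helpers converted earlier in this file are its per-state handlers. Roughly, the core of the dispatch looks like the sketch below (the real function also has wait states):

switch (hw->nvmupd_state) {
case I40E_NVMUPD_STATE_INIT:
	status = i40e_nvmupd_state_init(hw, cmd, bytes, perrno);
	break;
case I40E_NVMUPD_STATE_READING:
	status = i40e_nvmupd_state_reading(hw, cmd, bytes, perrno);
	break;
case I40E_NVMUPD_STATE_WRITING:
	status = i40e_nvmupd_state_writing(hw, cmd, bytes, perrno);
	break;
default:
	/* invalid state; reject the request */
	status = I40E_NOT_SUPPORTED;
	*perrno = -ESRCH;
	break;
}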
@@ -923,12 +923,12 @@ i40e_status i40e_nvmupd_command(struct i40e_hw *hw,
* Process legitimate commands of the Init state and conditionally set next
* state. Reject all other commands.
**/
-static i40e_status i40e_nvmupd_state_init(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
+static int i40e_nvmupd_state_init(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno)
{
- i40e_status status = 0;
enum i40e_nvmupd_cmd upd_cmd;
+ int status = 0;
upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno);
@@ -1062,12 +1062,12 @@ static i40e_status i40e_nvmupd_state_init(struct i40e_hw *hw,
* NVM ownership is already held. Process legitimate commands and set any
* change in state; reject all other commands.
**/
-static i40e_status i40e_nvmupd_state_reading(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
+static int i40e_nvmupd_state_reading(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno)
{
- i40e_status status = 0;
enum i40e_nvmupd_cmd upd_cmd;
+ int status = 0;
upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno);
@@ -1104,13 +1104,13 @@ static i40e_status i40e_nvmupd_state_reading(struct i40e_hw *hw,
* NVM ownership is already held. Process legitimate commands and set any
* change in state; reject all other commands
**/
-static i40e_status i40e_nvmupd_state_writing(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
+static int i40e_nvmupd_state_writing(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno)
{
- i40e_status status = 0;
enum i40e_nvmupd_cmd upd_cmd;
bool retry_attempt = false;
+ int status = 0;
upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno);
@@ -1187,8 +1187,8 @@ retry:
*/
if (status && (hw->aq.asq_last_status == I40E_AQ_RC_EBUSY) &&
!retry_attempt) {
- i40e_status old_status = status;
u32 old_asq_status = hw->aq.asq_last_status;
+ int old_status = status;
u32 gtime;
gtime = rd32(hw, I40E_GLVFGEN_TIMER);
@@ -1370,17 +1370,17 @@ static enum i40e_nvmupd_cmd i40e_nvmupd_validate_command(struct i40e_hw *hw,
*
* cmd structure contains identifiers and data buffer
**/
-static i40e_status i40e_nvmupd_exec_aq(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
+static int i40e_nvmupd_exec_aq(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno)
{
struct i40e_asq_cmd_details cmd_details;
- i40e_status status;
struct i40e_aq_desc *aq_desc;
u32 buff_size = 0;
u8 *buff = NULL;
u32 aq_desc_len;
u32 aq_data_len;
+ int status;
i40e_debug(hw, I40E_DEBUG_NVM, "NVMUPD: %s\n", __func__);
if (cmd->offset == 0xffff)
@@ -1429,8 +1429,8 @@ static i40e_status i40e_nvmupd_exec_aq(struct i40e_hw *hw,
buff_size, &cmd_details);
if (status) {
i40e_debug(hw, I40E_DEBUG_NVM,
- "i40e_nvmupd_exec_aq err %s aq_err %s\n",
- i40e_stat_str(hw, status),
+ "%s err %pe aq_err %s\n",
+ __func__, ERR_PTR(status),
i40e_aq_str(hw, hw->aq.asq_last_status));
*perrno = i40e_aq_rc_to_posix(status, hw->aq.asq_last_status);
return status;
@@ -1454,9 +1454,9 @@ static i40e_status i40e_nvmupd_exec_aq(struct i40e_hw *hw,
*
* cmd structure contains identifiers and data buffer
**/
-static i40e_status i40e_nvmupd_get_aq_result(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
+static int i40e_nvmupd_get_aq_result(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno)
{
u32 aq_total_len;
u32 aq_desc_len;
@@ -1523,9 +1523,9 @@ static i40e_status i40e_nvmupd_get_aq_result(struct i40e_hw *hw,
*
* cmd structure contains identifiers and data buffer
**/
-static i40e_status i40e_nvmupd_get_aq_event(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
+static int i40e_nvmupd_get_aq_event(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno)
{
u32 aq_total_len;
u32 aq_desc_len;
@@ -1557,13 +1557,13 @@ static i40e_status i40e_nvmupd_get_aq_event(struct i40e_hw *hw,
*
* cmd structure contains identifiers and data buffer
**/
-static i40e_status i40e_nvmupd_nvm_read(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
+static int i40e_nvmupd_nvm_read(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno)
{
struct i40e_asq_cmd_details cmd_details;
- i40e_status status;
u8 module, transaction;
+ int status;
bool last;
transaction = i40e_nvmupd_get_transaction(cmd->config);
@@ -1596,13 +1596,13 @@ static i40e_status i40e_nvmupd_nvm_read(struct i40e_hw *hw,
*
* module, offset, data_size and data are in cmd structure
**/
-static i40e_status i40e_nvmupd_nvm_erase(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- int *perrno)
+static int i40e_nvmupd_nvm_erase(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ int *perrno)
{
- i40e_status status = 0;
struct i40e_asq_cmd_details cmd_details;
u8 module, transaction;
+ int status = 0;
bool last;
transaction = i40e_nvmupd_get_transaction(cmd->config);
@@ -1636,14 +1636,14 @@ static i40e_status i40e_nvmupd_nvm_erase(struct i40e_hw *hw,
*
* module, offset, data_size and data are in cmd structure
**/
-static i40e_status i40e_nvmupd_nvm_write(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
+static int i40e_nvmupd_nvm_write(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *perrno)
{
- i40e_status status = 0;
struct i40e_asq_cmd_details cmd_details;
u8 module, transaction;
u8 preservation_flags;
+ int status = 0;
bool last;
transaction = i40e_nvmupd_get_transaction(cmd->config);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_osdep.h b/drivers/net/ethernet/intel/i40e/i40e_osdep.h
index 2f6815b2f8df..2bd4de03dafa 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_osdep.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_osdep.h
@@ -56,5 +56,4 @@ do { \
(h)->bus.func, ##__VA_ARGS__); \
} while (0)
-typedef enum i40e_status_code i40e_status;
#endif /* _I40E_OSDEP_H_ */
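Removing this typedef is the pivot of the whole patch: with it gone, every status-returning helper follows the usual kernel convention of a plain int, 0 on success and a negative code on failure. The signature change, taken from the prototype conversion below:

/* Before: driver-private status type */
i40e_status i40e_init_nvm(struct i40e_hw *hw);

/* After: plain int, 0 on success or a negative error code */
int i40e_init_nvm(struct i40e_hw *hw);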
diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
index 9a71121420c3..fe845987d99a 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
@@ -16,29 +16,29 @@
*/
/* adminq functions */
-i40e_status i40e_init_adminq(struct i40e_hw *hw);
+int i40e_init_adminq(struct i40e_hw *hw);
void i40e_shutdown_adminq(struct i40e_hw *hw);
void i40e_adminq_init_ring_data(struct i40e_hw *hw);
-i40e_status i40e_clean_arq_element(struct i40e_hw *hw,
- struct i40e_arq_event_info *e,
- u16 *events_pending);
-i40e_status
+int i40e_clean_arq_element(struct i40e_hw *hw,
+ struct i40e_arq_event_info *e,
+ u16 *events_pending);
+int
i40e_asq_send_command(struct i40e_hw *hw, struct i40e_aq_desc *desc,
void *buff, /* can be NULL */ u16 buff_size,
struct i40e_asq_cmd_details *cmd_details);
-i40e_status
+int
i40e_asq_send_command_v2(struct i40e_hw *hw,
struct i40e_aq_desc *desc,
void *buff, /* can be NULL */
u16 buff_size,
struct i40e_asq_cmd_details *cmd_details,
enum i40e_admin_queue_err *aq_status);
-i40e_status
+int
i40e_asq_send_command_atomic(struct i40e_hw *hw, struct i40e_aq_desc *desc,
void *buff, /* can be NULL */ u16 buff_size,
struct i40e_asq_cmd_details *cmd_details,
bool is_atomic_context);
-i40e_status
+int
i40e_asq_send_command_atomic_v2(struct i40e_hw *hw,
struct i40e_aq_desc *desc,
void *buff, /* can be NULL */
@@ -53,327 +53,332 @@ void i40e_debug_aq(struct i40e_hw *hw, enum i40e_debug_mask mask,
void i40e_idle_aq(struct i40e_hw *hw);
bool i40e_check_asq_alive(struct i40e_hw *hw);
-i40e_status i40e_aq_queue_shutdown(struct i40e_hw *hw, bool unloading);
+int i40e_aq_queue_shutdown(struct i40e_hw *hw, bool unloading);
const char *i40e_aq_str(struct i40e_hw *hw, enum i40e_admin_queue_err aq_err);
-const char *i40e_stat_str(struct i40e_hw *hw, i40e_status stat_err);
-i40e_status i40e_aq_get_rss_lut(struct i40e_hw *hw, u16 seid,
- bool pf_lut, u8 *lut, u16 lut_size);
-i40e_status i40e_aq_set_rss_lut(struct i40e_hw *hw, u16 seid,
- bool pf_lut, u8 *lut, u16 lut_size);
-i40e_status i40e_aq_get_rss_key(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_get_set_rss_key_data *key);
-i40e_status i40e_aq_set_rss_key(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_get_set_rss_key_data *key);
+int i40e_aq_get_rss_lut(struct i40e_hw *hw, u16 seid,
+ bool pf_lut, u8 *lut, u16 lut_size);
+int i40e_aq_set_rss_lut(struct i40e_hw *hw, u16 seid,
+ bool pf_lut, u8 *lut, u16 lut_size);
+int i40e_aq_get_rss_key(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_aqc_get_set_rss_key_data *key);
+int i40e_aq_set_rss_key(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_aqc_get_set_rss_key_data *key);
u32 i40e_led_get(struct i40e_hw *hw);
void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink);
-i40e_status i40e_led_set_phy(struct i40e_hw *hw, bool on,
- u16 led_addr, u32 mode);
-i40e_status i40e_led_get_phy(struct i40e_hw *hw, u16 *led_addr,
- u16 *val);
-i40e_status i40e_blink_phy_link_led(struct i40e_hw *hw,
- u32 time, u32 interval);
+int i40e_led_set_phy(struct i40e_hw *hw, bool on,
+ u16 led_addr, u32 mode);
+int i40e_led_get_phy(struct i40e_hw *hw, u16 *led_addr,
+ u16 *val);
+int i40e_blink_phy_link_led(struct i40e_hw *hw,
+ u32 time, u32 interval);
/* admin send queue commands */
-i40e_status i40e_aq_get_firmware_version(struct i40e_hw *hw,
- u16 *fw_major_version, u16 *fw_minor_version,
- u32 *fw_build,
- u16 *api_major_version, u16 *api_minor_version,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_debug_write_register(struct i40e_hw *hw,
- u32 reg_addr, u64 reg_val,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_debug_read_register(struct i40e_hw *hw,
+int i40e_aq_get_firmware_version(struct i40e_hw *hw,
+ u16 *fw_major_version, u16 *fw_minor_version,
+ u32 *fw_build,
+ u16 *api_major_version, u16 *api_minor_version,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_debug_write_register(struct i40e_hw *hw,
+ u32 reg_addr, u64 reg_val,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_debug_read_register(struct i40e_hw *hw,
u32 reg_addr, u64 *reg_val,
struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_set_default_vsi(struct i40e_hw *hw, u16 vsi_id,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_clear_default_vsi(struct i40e_hw *hw, u16 vsi_id,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_get_phy_capabilities(struct i40e_hw *hw,
- bool qualified_modules, bool report_init,
- struct i40e_aq_get_phy_abilities_resp *abilities,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_phy_config(struct i40e_hw *hw,
- struct i40e_aq_set_phy_config *config,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_set_fc(struct i40e_hw *hw, u8 *aq_failures,
- bool atomic_reset);
-i40e_status i40e_aq_set_mac_loopback(struct i40e_hw *hw,
- bool ena_lpbk,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_set_phy_int_mask(struct i40e_hw *hw, u16 mask,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_clear_pxe_mode(struct i40e_hw *hw,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_set_link_restart_an(struct i40e_hw *hw,
- bool enable_link,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_get_link_info(struct i40e_hw *hw,
- bool enable_lse, struct i40e_link_status *link,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_set_local_advt_reg(struct i40e_hw *hw,
- u64 advt_reg,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_send_driver_version(struct i40e_hw *hw,
+int i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_set_default_vsi(struct i40e_hw *hw, u16 vsi_id,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_clear_default_vsi(struct i40e_hw *hw, u16 vsi_id,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_get_phy_capabilities(struct i40e_hw *hw,
+ bool qualified_modules, bool report_init,
+ struct i40e_aq_get_phy_abilities_resp *abilities,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_set_phy_config(struct i40e_hw *hw,
+ struct i40e_aq_set_phy_config *config,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_set_fc(struct i40e_hw *hw, u8 *aq_failures,
+ bool atomic_reset);
+int i40e_aq_set_mac_loopback(struct i40e_hw *hw,
+ bool ena_lpbk,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_set_phy_int_mask(struct i40e_hw *hw, u16 mask,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_clear_pxe_mode(struct i40e_hw *hw,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_set_link_restart_an(struct i40e_hw *hw,
+ bool enable_link,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_get_link_info(struct i40e_hw *hw,
+ bool enable_lse, struct i40e_link_status *link,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_set_local_advt_reg(struct i40e_hw *hw,
+ u64 advt_reg,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_send_driver_version(struct i40e_hw *hw,
struct i40e_driver_version *dv,
struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_add_vsi(struct i40e_hw *hw,
- struct i40e_vsi_context *vsi_ctx,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_set_vsi_broadcast(struct i40e_hw *hw,
- u16 vsi_id, bool set_filter,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw,
- u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details,
- bool rx_only_promisc);
-i40e_status i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw,
- u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
- u16 seid, bool enable,
- u16 vid,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw,
- u16 seid, bool enable,
- u16 vid,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_set_vsi_bc_promisc_on_vlan(struct i40e_hw *hw,
- u16 seid, bool enable, u16 vid,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw,
- u16 seid, bool enable,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_get_vsi_params(struct i40e_hw *hw,
- struct i40e_vsi_context *vsi_ctx,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_update_vsi_params(struct i40e_hw *hw,
- struct i40e_vsi_context *vsi_ctx,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_add_veb(struct i40e_hw *hw, u16 uplink_seid,
- u16 downlink_seid, u8 enabled_tc,
- bool default_port, u16 *pveb_seid,
- bool enable_stats,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_get_veb_parameters(struct i40e_hw *hw,
- u16 veb_seid, u16 *switch_id, bool *floating,
- u16 *statistic_index, u16 *vebs_used,
- u16 *vebs_free,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_add_macvlan(struct i40e_hw *hw, u16 vsi_id,
+int i40e_aq_add_vsi(struct i40e_hw *hw,
+ struct i40e_vsi_context *vsi_ctx,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_set_vsi_broadcast(struct i40e_hw *hw,
+ u16 vsi_id, bool set_filter,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw, u16 vsi_id, bool set,
+ struct i40e_asq_cmd_details *cmd_details,
+ bool rx_only_promisc);
+int i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw, u16 vsi_id, bool set,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
+ u16 seid, bool enable,
+ u16 vid,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw,
+ u16 seid, bool enable,
+ u16 vid,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_set_vsi_bc_promisc_on_vlan(struct i40e_hw *hw,
+ u16 seid, bool enable, u16 vid,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw,
+ u16 seid, bool enable,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_get_vsi_params(struct i40e_hw *hw,
+ struct i40e_vsi_context *vsi_ctx,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_update_vsi_params(struct i40e_hw *hw,
+ struct i40e_vsi_context *vsi_ctx,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_add_veb(struct i40e_hw *hw, u16 uplink_seid,
+ u16 downlink_seid, u8 enabled_tc,
+ bool default_port, u16 *pveb_seid,
+ bool enable_stats,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_get_veb_parameters(struct i40e_hw *hw,
+ u16 veb_seid, u16 *switch_id, bool *floating,
+ u16 *statistic_index, u16 *vebs_used,
+ u16 *vebs_free,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_add_macvlan(struct i40e_hw *hw, u16 vsi_id,
struct i40e_aqc_add_macvlan_element_data *mv_list,
u16 count, struct i40e_asq_cmd_details *cmd_details);
-i40e_status
+int
i40e_aq_add_macvlan_v2(struct i40e_hw *hw, u16 seid,
struct i40e_aqc_add_macvlan_element_data *mv_list,
u16 count, struct i40e_asq_cmd_details *cmd_details,
enum i40e_admin_queue_err *aq_status);
-i40e_status i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 vsi_id,
- struct i40e_aqc_remove_macvlan_element_data *mv_list,
- u16 count, struct i40e_asq_cmd_details *cmd_details);
-i40e_status
+int i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 vsi_id,
+ struct i40e_aqc_remove_macvlan_element_data *mv_list,
+ u16 count, struct i40e_asq_cmd_details *cmd_details);
+int
i40e_aq_remove_macvlan_v2(struct i40e_hw *hw, u16 seid,
struct i40e_aqc_remove_macvlan_element_data *mv_list,
u16 count, struct i40e_asq_cmd_details *cmd_details,
enum i40e_admin_queue_err *aq_status);
-i40e_status i40e_aq_add_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
- u16 rule_type, u16 dest_vsi, u16 count, __le16 *mr_list,
- struct i40e_asq_cmd_details *cmd_details,
- u16 *rule_id, u16 *rules_used, u16 *rules_free);
-i40e_status i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
- u16 rule_type, u16 rule_id, u16 count, __le16 *mr_list,
- struct i40e_asq_cmd_details *cmd_details,
- u16 *rules_used, u16 *rules_free);
+int i40e_aq_add_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
+ u16 rule_type, u16 dest_vsi, u16 count, __le16 *mr_list,
+ struct i40e_asq_cmd_details *cmd_details,
+ u16 *rule_id, u16 *rules_used, u16 *rules_free);
+int i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
+ u16 rule_type, u16 rule_id, u16 count, __le16 *mr_list,
+ struct i40e_asq_cmd_details *cmd_details,
+ u16 *rules_used, u16 *rules_free);
-i40e_status i40e_aq_send_msg_to_vf(struct i40e_hw *hw, u16 vfid,
- u32 v_opcode, u32 v_retval, u8 *msg, u16 msglen,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_get_switch_config(struct i40e_hw *hw,
- struct i40e_aqc_get_switch_config_resp *buf,
- u16 buf_size, u16 *start_seid,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_switch_config(struct i40e_hw *hw,
- u16 flags,
- u16 valid_flags, u8 mode,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_request_resource(struct i40e_hw *hw,
- enum i40e_aq_resources_ids resource,
- enum i40e_aq_resource_access_type access,
- u8 sdp_number, u64 *timeout,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_release_resource(struct i40e_hw *hw,
- enum i40e_aq_resources_ids resource,
- u8 sdp_number,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer,
- u32 offset, u16 length, void *data,
- bool last_command,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer,
- u32 offset, u16 length, bool last_command,
+int i40e_aq_send_msg_to_vf(struct i40e_hw *hw, u16 vfid,
+ u32 v_opcode, u32 v_retval, u8 *msg, u16 msglen,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_get_switch_config(struct i40e_hw *hw,
+ struct i40e_aqc_get_switch_config_resp *buf,
+ u16 buf_size, u16 *start_seid,
struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_discover_capabilities(struct i40e_hw *hw,
- void *buff, u16 buff_size, u16 *data_size,
- enum i40e_admin_queue_opc list_type_opc,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer,
- u32 offset, u16 length, void *data,
- bool last_command, u8 preservation_flags,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_rearrange_nvm(struct i40e_hw *hw,
- u8 rearrange_nvm,
+int i40e_aq_set_switch_config(struct i40e_hw *hw,
+ u16 flags,
+ u16 valid_flags, u8 mode,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_request_resource(struct i40e_hw *hw,
+ enum i40e_aq_resources_ids resource,
+ enum i40e_aq_resource_access_type access,
+ u8 sdp_number, u64 *timeout,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_release_resource(struct i40e_hw *hw,
+ enum i40e_aq_resources_ids resource,
+ u8 sdp_number,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer,
+ u32 offset, u16 length, void *data,
+ bool last_command,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer,
+ u32 offset, u16 length, bool last_command,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_discover_capabilities(struct i40e_hw *hw,
+ void *buff, u16 buff_size, u16 *data_size,
+ enum i40e_admin_queue_opc list_type_opc,
struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type,
- u8 mib_type, void *buff, u16 buff_size,
- u16 *local_len, u16 *remote_len,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code
+int i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer,
+ u32 offset, u16 length, void *data,
+ bool last_command, u8 preservation_flags,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_rearrange_nvm(struct i40e_hw *hw,
+ u8 rearrange_nvm,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type,
+ u8 mib_type, void *buff, u16 buff_size,
+ u16 *local_len, u16 *remote_len,
+ struct i40e_asq_cmd_details *cmd_details);
+int
i40e_aq_set_lldp_mib(struct i40e_hw *hw,
u8 mib_type, void *buff, u16 buff_size,
struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw,
- bool enable_update,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code
+int i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw,
+ bool enable_update,
+ struct i40e_asq_cmd_details *cmd_details);
+int
i40e_aq_restore_lldp(struct i40e_hw *hw, u8 *setting, bool restore,
struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent,
- bool persist,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_set_dcb_parameters(struct i40e_hw *hw,
- bool dcb_enable,
- struct i40e_asq_cmd_details
- *cmd_details);
-i40e_status i40e_aq_start_lldp(struct i40e_hw *hw, bool persist,
+int i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent,
+ bool persist,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_set_dcb_parameters(struct i40e_hw *hw,
+ bool dcb_enable,
+ struct i40e_asq_cmd_details
+ *cmd_details);
+int i40e_aq_start_lldp(struct i40e_hw *hw, bool persist,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_get_cee_dcb_config(struct i40e_hw *hw,
+ void *buff, u16 buff_size,
struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_get_cee_dcb_config(struct i40e_hw *hw,
- void *buff, u16 buff_size,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_add_udp_tunnel(struct i40e_hw *hw,
- u16 udp_port, u8 protocol_index,
- u8 *filter_index,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_delete_element(struct i40e_hw *hw, u16 seid,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_mac_address_write(struct i40e_hw *hw,
- u16 flags, u8 *mac_addr,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw,
+int i40e_aq_add_udp_tunnel(struct i40e_hw *hw,
+ u16 udp_port, u8 protocol_index,
+ u8 *filter_index,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_delete_element(struct i40e_hw *hw, u16 seid,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_mac_address_write(struct i40e_hw *hw,
+ u16 flags, u8 *mac_addr,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw,
u16 seid, u16 credit, u8 max_credit,
struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_dcb_updated(struct i40e_hw *hw,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_config_switch_comp_bw_limit(struct i40e_hw *hw,
- u16 seid, u16 credit, u8 max_bw,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw, u16 seid,
- struct i40e_aqc_configure_vsi_tc_bw_data *bw_data,
+int i40e_aq_dcb_updated(struct i40e_hw *hw,
struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_config_switch_comp_ets(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_configure_switching_comp_ets_data *ets_data,
- enum i40e_admin_queue_opc opcode,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_config_switch_comp_bw_config(struct i40e_hw *hw,
+int i40e_aq_config_switch_comp_bw_limit(struct i40e_hw *hw,
+ u16 seid, u16 credit, u8 max_bw,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw, u16 seid,
+ struct i40e_aqc_configure_vsi_tc_bw_data *bw_data,
+ struct i40e_asq_cmd_details *cmd_details);
+int
+i40e_aq_config_switch_comp_ets(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_aqc_configure_switching_comp_ets_data *ets_data,
+ enum i40e_admin_queue_opc opcode,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_config_switch_comp_bw_config(struct i40e_hw *hw,
u16 seid,
struct i40e_aqc_configure_switching_comp_bw_config_data *bw_data,
struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_query_vsi_bw_config(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_query_vsi_bw_config_resp *bw_data,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_query_vsi_ets_sla_config(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_query_vsi_ets_sla_config_resp *bw_data,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_query_switching_comp_ets_config_resp *bw_data,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_query_port_ets_config(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_query_port_ets_config_resp *bw_data,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_query_switching_comp_bw_config_resp *bw_data,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_resume_port_tx(struct i40e_hw *hw,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code
+int i40e_aq_query_vsi_bw_config(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_aqc_query_vsi_bw_config_resp *bw_data,
+ struct i40e_asq_cmd_details *cmd_details);
+int
+i40e_aq_query_vsi_ets_sla_config(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_aqc_query_vsi_ets_sla_config_resp *bw_data,
+ struct i40e_asq_cmd_details *cmd_details);
+int
+i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_aqc_query_switching_comp_ets_config_resp *bw_data,
+ struct i40e_asq_cmd_details *cmd_details);
+int
+i40e_aq_query_port_ets_config(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_aqc_query_port_ets_config_resp *bw_data,
+ struct i40e_asq_cmd_details *cmd_details);
+int
+i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
+ u16 seid,
+ struct i40e_aqc_query_switching_comp_bw_config_resp *bw_data,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_resume_port_tx(struct i40e_hw *hw,
+ struct i40e_asq_cmd_details *cmd_details);
+int
i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
struct i40e_aqc_cloud_filters_element_bb *filters,
u8 filter_count);
-enum i40e_status_code
+int
i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 vsi,
struct i40e_aqc_cloud_filters_element_data *filters,
u8 filter_count);
-enum i40e_status_code
+int
i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 vsi,
struct i40e_aqc_cloud_filters_element_data *filters,
u8 filter_count);
-enum i40e_status_code
+int
i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
struct i40e_aqc_cloud_filters_element_bb *filters,
u8 filter_count);
-i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw,
- struct i40e_lldp_variables *lldp_cfg);
-enum i40e_status_code
+int i40e_read_lldp_cfg(struct i40e_hw *hw,
+ struct i40e_lldp_variables *lldp_cfg);
+int
i40e_aq_suspend_port_tx(struct i40e_hw *hw, u16 seid,
struct i40e_asq_cmd_details *cmd_details);
/* i40e_common */
-i40e_status i40e_init_shared_code(struct i40e_hw *hw);
-i40e_status i40e_pf_reset(struct i40e_hw *hw);
+int i40e_init_shared_code(struct i40e_hw *hw);
+int i40e_pf_reset(struct i40e_hw *hw);
void i40e_clear_hw(struct i40e_hw *hw);
void i40e_clear_pxe_mode(struct i40e_hw *hw);
-i40e_status i40e_get_link_status(struct i40e_hw *hw, bool *link_up);
-i40e_status i40e_update_link_info(struct i40e_hw *hw);
-i40e_status i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr);
-i40e_status i40e_read_bw_from_alt_ram(struct i40e_hw *hw,
- u32 *max_bw, u32 *min_bw, bool *min_valid,
- bool *max_valid);
-i40e_status i40e_aq_configure_partition_bw(struct i40e_hw *hw,
- struct i40e_aqc_configure_partition_bw_data *bw_data,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr);
-i40e_status i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num,
- u32 pba_num_size);
-i40e_status i40e_validate_mac_addr(u8 *mac_addr);
+int i40e_get_link_status(struct i40e_hw *hw, bool *link_up);
+int i40e_update_link_info(struct i40e_hw *hw);
+int i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr);
+int i40e_read_bw_from_alt_ram(struct i40e_hw *hw,
+ u32 *max_bw, u32 *min_bw, bool *min_valid,
+ bool *max_valid);
+int
+i40e_aq_configure_partition_bw(struct i40e_hw *hw,
+ struct i40e_aqc_configure_partition_bw_data *bw_data,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr);
+int i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num,
+ u32 pba_num_size);
+int i40e_validate_mac_addr(u8 *mac_addr);
void i40e_pre_tx_queue_cfg(struct i40e_hw *hw, u32 queue, bool enable);
/* prototype for functions used for NVM access */
-i40e_status i40e_init_nvm(struct i40e_hw *hw);
-i40e_status i40e_acquire_nvm(struct i40e_hw *hw,
- enum i40e_aq_resource_access_type access);
+int i40e_init_nvm(struct i40e_hw *hw);
+int i40e_acquire_nvm(struct i40e_hw *hw,
+ enum i40e_aq_resource_access_type access);
void i40e_release_nvm(struct i40e_hw *hw);
-i40e_status i40e_read_nvm_word(struct i40e_hw *hw, u16 offset,
- u16 *data);
-enum i40e_status_code i40e_read_nvm_module_data(struct i40e_hw *hw,
- u8 module_ptr,
- u16 module_offset,
- u16 data_offset,
- u16 words_data_size,
- u16 *data_ptr);
-i40e_status i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset,
- u16 *words, u16 *data);
-i40e_status i40e_update_nvm_checksum(struct i40e_hw *hw);
-i40e_status i40e_validate_nvm_checksum(struct i40e_hw *hw,
- u16 *checksum);
-i40e_status i40e_nvmupd_command(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *);
+int i40e_read_nvm_word(struct i40e_hw *hw, u16 offset,
+ u16 *data);
+int i40e_read_nvm_module_data(struct i40e_hw *hw,
+ u8 module_ptr,
+ u16 module_offset,
+ u16 data_offset,
+ u16 words_data_size,
+ u16 *data_ptr);
+int i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset,
+ u16 *words, u16 *data);
+int i40e_update_nvm_checksum(struct i40e_hw *hw);
+int i40e_validate_nvm_checksum(struct i40e_hw *hw,
+ u16 *checksum);
+int i40e_nvmupd_command(struct i40e_hw *hw,
+ struct i40e_nvm_access *cmd,
+ u8 *bytes, int *errno);
void i40e_nvmupd_check_wait_event(struct i40e_hw *hw, u16 opcode,
struct i40e_aq_desc *desc);
void i40e_nvmupd_clear_wait_state(struct i40e_hw *hw);
void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status);
-i40e_status i40e_set_mac_type(struct i40e_hw *hw);
+int i40e_set_mac_type(struct i40e_hw *hw);
extern struct i40e_rx_ptype_decoded i40e_ptype_lookup[];
@@ -422,41 +427,41 @@ i40e_virtchnl_link_speed(enum i40e_aq_link_speed link_speed)
/* i40e_common for VF drivers*/
void i40e_vf_parse_hw_config(struct i40e_hw *hw,
struct virtchnl_vf_resource *msg);
-i40e_status i40e_vf_reset(struct i40e_hw *hw);
-i40e_status i40e_aq_send_msg_to_pf(struct i40e_hw *hw,
- enum virtchnl_ops v_opcode,
- i40e_status v_retval,
- u8 *msg, u16 msglen,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_set_filter_control(struct i40e_hw *hw,
- struct i40e_filter_control_settings *settings);
-i40e_status i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw,
- u8 *mac_addr, u16 ethtype, u16 flags,
- u16 vsi_seid, u16 queue, bool is_add,
- struct i40e_control_filter_stats *stats,
- struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_debug_dump(struct i40e_hw *hw, u8 cluster_id,
- u8 table_id, u32 start_index, u16 buff_size,
- void *buff, u16 *ret_buff_size,
- u8 *ret_next_table, u32 *ret_next_index,
- struct i40e_asq_cmd_details *cmd_details);
+int i40e_vf_reset(struct i40e_hw *hw);
+int i40e_aq_send_msg_to_pf(struct i40e_hw *hw,
+ enum virtchnl_ops v_opcode,
+ int v_retval,
+ u8 *msg, u16 msglen,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_set_filter_control(struct i40e_hw *hw,
+ struct i40e_filter_control_settings *settings);
+int i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw,
+ u8 *mac_addr, u16 ethtype, u16 flags,
+ u16 vsi_seid, u16 queue, bool is_add,
+ struct i40e_control_filter_stats *stats,
+ struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_debug_dump(struct i40e_hw *hw, u8 cluster_id,
+ u8 table_id, u32 start_index, u16 buff_size,
+ void *buff, u16 *ret_buff_size,
+ u8 *ret_next_table, u32 *ret_next_index,
+ struct i40e_asq_cmd_details *cmd_details);
void i40e_add_filter_to_drop_tx_flow_control_frames(struct i40e_hw *hw,
u16 vsi_seid);
-i40e_status i40e_aq_rx_ctl_read_register(struct i40e_hw *hw,
- u32 reg_addr, u32 *reg_val,
- struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_rx_ctl_read_register(struct i40e_hw *hw,
+ u32 reg_addr, u32 *reg_val,
+ struct i40e_asq_cmd_details *cmd_details);
u32 i40e_read_rx_ctl(struct i40e_hw *hw, u32 reg_addr);
-i40e_status i40e_aq_rx_ctl_write_register(struct i40e_hw *hw,
- u32 reg_addr, u32 reg_val,
- struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_rx_ctl_write_register(struct i40e_hw *hw,
+ u32 reg_addr, u32 reg_val,
+ struct i40e_asq_cmd_details *cmd_details);
void i40e_write_rx_ctl(struct i40e_hw *hw, u32 reg_addr, u32 reg_val);
-enum i40e_status_code
+int
i40e_aq_set_phy_register_ext(struct i40e_hw *hw,
u8 phy_select, u8 dev_addr, bool page_change,
bool set_mdio, u8 mdio_num,
u32 reg_addr, u32 reg_val,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code
+int
i40e_aq_get_phy_register_ext(struct i40e_hw *hw,
u8 phy_select, u8 dev_addr, bool page_change,
bool set_mdio, u8 mdio_num,
@@ -469,43 +474,43 @@ i40e_aq_get_phy_register_ext(struct i40e_hw *hw,
#define i40e_aq_get_phy_register(hw, ps, da, pc, ra, rv, cd) \
i40e_aq_get_phy_register_ext(hw, ps, da, pc, false, 0, ra, rv, cd)
-i40e_status i40e_read_phy_register_clause22(struct i40e_hw *hw,
- u16 reg, u8 phy_addr, u16 *value);
-i40e_status i40e_write_phy_register_clause22(struct i40e_hw *hw,
- u16 reg, u8 phy_addr, u16 value);
-i40e_status i40e_read_phy_register_clause45(struct i40e_hw *hw,
- u8 page, u16 reg, u8 phy_addr, u16 *value);
-i40e_status i40e_write_phy_register_clause45(struct i40e_hw *hw,
- u8 page, u16 reg, u8 phy_addr, u16 value);
-i40e_status i40e_read_phy_register(struct i40e_hw *hw, u8 page, u16 reg,
- u8 phy_addr, u16 *value);
-i40e_status i40e_write_phy_register(struct i40e_hw *hw, u8 page, u16 reg,
- u8 phy_addr, u16 value);
+int i40e_read_phy_register_clause22(struct i40e_hw *hw,
+ u16 reg, u8 phy_addr, u16 *value);
+int i40e_write_phy_register_clause22(struct i40e_hw *hw,
+ u16 reg, u8 phy_addr, u16 value);
+int i40e_read_phy_register_clause45(struct i40e_hw *hw,
+ u8 page, u16 reg, u8 phy_addr, u16 *value);
+int i40e_write_phy_register_clause45(struct i40e_hw *hw,
+ u8 page, u16 reg, u8 phy_addr, u16 value);
+int i40e_read_phy_register(struct i40e_hw *hw, u8 page, u16 reg,
+ u8 phy_addr, u16 *value);
+int i40e_write_phy_register(struct i40e_hw *hw, u8 page, u16 reg,
+ u8 phy_addr, u16 value);
u8 i40e_get_phy_address(struct i40e_hw *hw, u8 dev_num);
-i40e_status i40e_blink_phy_link_led(struct i40e_hw *hw,
- u32 time, u32 interval);
-i40e_status i40e_aq_write_ddp(struct i40e_hw *hw, void *buff,
- u16 buff_size, u32 track_id,
- u32 *error_offset, u32 *error_info,
- struct i40e_asq_cmd_details *
- cmd_details);
-i40e_status i40e_aq_get_ddp_list(struct i40e_hw *hw, void *buff,
- u16 buff_size, u8 flags,
- struct i40e_asq_cmd_details *
- cmd_details);
+int i40e_blink_phy_link_led(struct i40e_hw *hw,
+ u32 time, u32 interval);
+int i40e_aq_write_ddp(struct i40e_hw *hw, void *buff,
+ u16 buff_size, u32 track_id,
+ u32 *error_offset, u32 *error_info,
+ struct i40e_asq_cmd_details *
+ cmd_details);
+int i40e_aq_get_ddp_list(struct i40e_hw *hw, void *buff,
+ u16 buff_size, u8 flags,
+ struct i40e_asq_cmd_details *
+ cmd_details);
struct i40e_generic_seg_header *
i40e_find_segment_in_package(u32 segment_type,
struct i40e_package_header *pkg_header);
struct i40e_profile_section_header *
i40e_find_section_in_profile(u32 section_type,
struct i40e_profile_segment *profile);
-enum i40e_status_code
+int
i40e_write_profile(struct i40e_hw *hw, struct i40e_profile_segment *i40e_seg,
u32 track_id);
-enum i40e_status_code
+int
i40e_rollback_profile(struct i40e_hw *hw, struct i40e_profile_segment *i40e_seg,
u32 track_id);
-enum i40e_status_code
+int
i40e_add_pinfo_to_list(struct i40e_hw *hw,
struct i40e_profile_segment *profile,
u8 *profile_info_sec, u32 track_id);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_status.h b/drivers/net/ethernet/intel/i40e/i40e_status.h
index db3714a65dc7..4d2782e76038 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_status.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_status.h
@@ -9,65 +9,30 @@ enum i40e_status_code {
I40E_SUCCESS = 0,
I40E_ERR_NVM = -1,
I40E_ERR_NVM_CHECKSUM = -2,
- I40E_ERR_PHY = -3,
I40E_ERR_CONFIG = -4,
I40E_ERR_PARAM = -5,
- I40E_ERR_MAC_TYPE = -6,
I40E_ERR_UNKNOWN_PHY = -7,
- I40E_ERR_LINK_SETUP = -8,
- I40E_ERR_ADAPTER_STOPPED = -9,
I40E_ERR_INVALID_MAC_ADDR = -10,
I40E_ERR_DEVICE_NOT_SUPPORTED = -11,
- I40E_ERR_PRIMARY_REQUESTS_PENDING = -12,
- I40E_ERR_INVALID_LINK_SETTINGS = -13,
- I40E_ERR_AUTONEG_NOT_COMPLETE = -14,
I40E_ERR_RESET_FAILED = -15,
- I40E_ERR_SWFW_SYNC = -16,
I40E_ERR_NO_AVAILABLE_VSI = -17,
I40E_ERR_NO_MEMORY = -18,
I40E_ERR_BAD_PTR = -19,
- I40E_ERR_RING_FULL = -20,
- I40E_ERR_INVALID_PD_ID = -21,
- I40E_ERR_INVALID_QP_ID = -22,
- I40E_ERR_INVALID_CQ_ID = -23,
- I40E_ERR_INVALID_CEQ_ID = -24,
- I40E_ERR_INVALID_AEQ_ID = -25,
I40E_ERR_INVALID_SIZE = -26,
- I40E_ERR_INVALID_ARP_INDEX = -27,
- I40E_ERR_INVALID_FPM_FUNC_ID = -28,
- I40E_ERR_QP_INVALID_MSG_SIZE = -29,
- I40E_ERR_QP_TOOMANY_WRS_POSTED = -30,
- I40E_ERR_INVALID_FRAG_COUNT = -31,
I40E_ERR_QUEUE_EMPTY = -32,
- I40E_ERR_INVALID_ALIGNMENT = -33,
- I40E_ERR_FLUSHED_QUEUE = -34,
- I40E_ERR_INVALID_PUSH_PAGE_INDEX = -35,
- I40E_ERR_INVALID_IMM_DATA_SIZE = -36,
I40E_ERR_TIMEOUT = -37,
- I40E_ERR_OPCODE_MISMATCH = -38,
- I40E_ERR_CQP_COMPL_ERROR = -39,
- I40E_ERR_INVALID_VF_ID = -40,
- I40E_ERR_INVALID_HMCFN_ID = -41,
- I40E_ERR_BACKING_PAGE_ERROR = -42,
- I40E_ERR_NO_PBLCHUNKS_AVAILABLE = -43,
- I40E_ERR_INVALID_PBLE_INDEX = -44,
I40E_ERR_INVALID_SD_INDEX = -45,
I40E_ERR_INVALID_PAGE_DESC_INDEX = -46,
I40E_ERR_INVALID_SD_TYPE = -47,
- I40E_ERR_MEMCPY_FAILED = -48,
I40E_ERR_INVALID_HMC_OBJ_INDEX = -49,
I40E_ERR_INVALID_HMC_OBJ_COUNT = -50,
- I40E_ERR_INVALID_SRQ_ARM_LIMIT = -51,
- I40E_ERR_SRQ_ENABLED = -52,
I40E_ERR_ADMIN_QUEUE_ERROR = -53,
I40E_ERR_ADMIN_QUEUE_TIMEOUT = -54,
I40E_ERR_BUF_TOO_SHORT = -55,
I40E_ERR_ADMIN_QUEUE_FULL = -56,
I40E_ERR_ADMIN_QUEUE_NO_WORK = -57,
- I40E_ERR_BAD_IWARP_CQE = -58,
I40E_ERR_NVM_BLANK_MODE = -59,
I40E_ERR_NOT_IMPLEMENTED = -60,
- I40E_ERR_PE_DOORBELL_NOT_ENABLED = -61,
I40E_ERR_DIAG_TEST_FAILED = -62,
I40E_ERR_NOT_READY = -63,
I40E_NOT_SUPPORTED = -64,
diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
index 635f93d60318..8a4587585acd 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
@@ -17,7 +17,7 @@
**/
static void i40e_vc_vf_broadcast(struct i40e_pf *pf,
enum virtchnl_ops v_opcode,
- i40e_status v_retval, u8 *msg,
+ int v_retval, u8 *msg,
u16 msglen)
{
struct i40e_hw *hw = &pf->hw;
@@ -441,14 +441,14 @@ irq_list_done:
}
/**
- * i40e_release_iwarp_qvlist
+ * i40e_release_rdma_qvlist
* @vf: pointer to the VF.
*
**/
-static void i40e_release_iwarp_qvlist(struct i40e_vf *vf)
+static void i40e_release_rdma_qvlist(struct i40e_vf *vf)
{
struct i40e_pf *pf = vf->pf;
- struct virtchnl_iwarp_qvlist_info *qvlist_info = vf->qvlist_info;
+ struct virtchnl_rdma_qvlist_info *qvlist_info = vf->qvlist_info;
u32 msix_vf;
u32 i;
@@ -457,7 +457,7 @@ static void i40e_release_iwarp_qvlist(struct i40e_vf *vf)
msix_vf = pf->hw.func_caps.num_msix_vectors_vf;
for (i = 0; i < qvlist_info->num_vectors; i++) {
- struct virtchnl_iwarp_qv_info *qv_info;
+ struct virtchnl_rdma_qv_info *qv_info;
u32 next_q_index, next_q_type;
struct i40e_hw *hw = &pf->hw;
u32 v_idx, reg_idx, reg;
@@ -491,18 +491,19 @@ static void i40e_release_iwarp_qvlist(struct i40e_vf *vf)
}
/**
- * i40e_config_iwarp_qvlist
+ * i40e_config_rdma_qvlist
* @vf: pointer to the VF info
* @qvlist_info: queue and vector list
*
* Return 0 on success or < 0 on error
**/
-static int i40e_config_iwarp_qvlist(struct i40e_vf *vf,
- struct virtchnl_iwarp_qvlist_info *qvlist_info)
+static int
+i40e_config_rdma_qvlist(struct i40e_vf *vf,
+ struct virtchnl_rdma_qvlist_info *qvlist_info)
{
struct i40e_pf *pf = vf->pf;
struct i40e_hw *hw = &pf->hw;
- struct virtchnl_iwarp_qv_info *qv_info;
+ struct virtchnl_rdma_qv_info *qv_info;
u32 v_idx, i, reg_idx, reg;
u32 next_q_idx, next_q_type;
u32 msix_vf;
@@ -1246,13 +1247,13 @@ err:
* @vl: List of VLANs - apply filter for given VLANs
* @num_vlans: Number of elements in @vl
**/
-static i40e_status
+static int
i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable,
bool unicast_enable, s16 *vl, u16 num_vlans)
{
- i40e_status aq_ret, aq_tmp = 0;
struct i40e_pf *pf = vf->pf;
struct i40e_hw *hw = &pf->hw;
+ int aq_ret, aq_tmp = 0;
int i;
/* No VLAN to set promisc on, set on VSI */
@@ -1264,9 +1265,9 @@ i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable,
int aq_err = pf->hw.aq.asq_last_status;
dev_err(&pf->pdev->dev,
- "VF %d failed to set multicast promiscuous mode err %s aq_err %s\n",
+ "VF %d failed to set multicast promiscuous mode err %pe aq_err %s\n",
vf->vf_id,
- i40e_stat_str(&pf->hw, aq_ret),
+ ERR_PTR(aq_ret),
i40e_aq_str(&pf->hw, aq_err));
return aq_ret;
@@ -1280,9 +1281,9 @@ i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable,
int aq_err = pf->hw.aq.asq_last_status;
dev_err(&pf->pdev->dev,
- "VF %d failed to set unicast promiscuous mode err %s aq_err %s\n",
+ "VF %d failed to set unicast promiscuous mode err %pe aq_err %s\n",
vf->vf_id,
- i40e_stat_str(&pf->hw, aq_ret),
+ ERR_PTR(aq_ret),
i40e_aq_str(&pf->hw, aq_err));
}
@@ -1297,9 +1298,9 @@ i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable,
int aq_err = pf->hw.aq.asq_last_status;
dev_err(&pf->pdev->dev,
- "VF %d failed to set multicast promiscuous mode err %s aq_err %s\n",
+ "VF %d failed to set multicast promiscuous mode err %pe aq_err %s\n",
vf->vf_id,
- i40e_stat_str(&pf->hw, aq_ret),
+ ERR_PTR(aq_ret),
i40e_aq_str(&pf->hw, aq_err));
if (!aq_tmp)
@@ -1313,9 +1314,9 @@ i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable,
int aq_err = pf->hw.aq.asq_last_status;
dev_err(&pf->pdev->dev,
- "VF %d failed to set unicast promiscuous mode err %s aq_err %s\n",
+ "VF %d failed to set unicast promiscuous mode err %pe aq_err %s\n",
vf->vf_id,
- i40e_stat_str(&pf->hw, aq_ret),
+ ERR_PTR(aq_ret),
i40e_aq_str(&pf->hw, aq_err));
if (!aq_tmp)
@@ -1339,13 +1340,13 @@ i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable,
* Called from the VF to configure the promiscuous mode of
* VF vsis and from the VF reset path to reset promiscuous mode.
**/
-static i40e_status i40e_config_vf_promiscuous_mode(struct i40e_vf *vf,
- u16 vsi_id,
- bool allmulti,
- bool alluni)
+static int i40e_config_vf_promiscuous_mode(struct i40e_vf *vf,
+ u16 vsi_id,
+ bool allmulti,
+ bool alluni)
{
- i40e_status aq_ret = I40E_SUCCESS;
struct i40e_pf *pf = vf->pf;
+ int aq_ret = I40E_SUCCESS;
struct i40e_vsi *vsi;
u16 num_vlans;
s16 *vl;
@@ -1955,7 +1956,7 @@ static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
struct i40e_pf *pf;
struct i40e_hw *hw;
int abs_vf_id;
- i40e_status aq_ret;
+ int aq_ret;
/* validate the request */
if (!vf || vf->vf_id >= vf->pf->num_alloc_vfs)
@@ -1987,7 +1988,7 @@ static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
**/
static int i40e_vc_send_resp_to_vf(struct i40e_vf *vf,
enum virtchnl_ops opcode,
- i40e_status retval)
+ int retval)
{
return i40e_vc_send_msg_to_vf(vf, opcode, retval, NULL, 0);
}
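Every virtchnl handler in this file follows the same shape around this helper: validate the VF state, act on the request, and always answer the VF. A condensed sketch of that recurring pattern (the handler name and the VIRTCHNL_OP_EXAMPLE opcode are hypothetical):

static int i40e_vc_example_msg(struct i40e_vf *vf, u8 *msg)
{
	int aq_ret = 0;

	/* reject the request unless the VF has reached ACTIVE state */
	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
		aq_ret = I40E_ERR_PARAM;
		goto error_param;
	}

	/* ... act on the request carried in msg ... */

error_param:
	/* the VF always gets a response, success or failure */
	return i40e_vc_send_resp_to_vf(vf, VIRTCHNL_OP_EXAMPLE, aq_ret);
}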
@@ -2091,9 +2092,9 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
{
struct virtchnl_vf_resource *vfres = NULL;
struct i40e_pf *pf = vf->pf;
- i40e_status aq_ret = 0;
struct i40e_vsi *vsi;
int num_vsis = 1;
+ int aq_ret = 0;
size_t len = 0;
int ret;
@@ -2123,11 +2124,11 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN;
if (i40e_vf_client_capable(pf, vf->vf_id) &&
- (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_IWARP)) {
- vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_IWARP;
- set_bit(I40E_VF_STATE_IWARPENA, &vf->vf_states);
+ (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RDMA)) {
+ vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_RDMA;
+ set_bit(I40E_VF_STATE_RDMAENA, &vf->vf_states);
} else {
- clear_bit(I40E_VF_STATE_IWARPENA, &vf->vf_states);
+ clear_bit(I40E_VF_STATE_RDMAENA, &vf->vf_states);
}
if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
@@ -2221,9 +2222,9 @@ static int i40e_vc_config_promiscuous_mode_msg(struct i40e_vf *vf, u8 *msg)
struct virtchnl_promisc_info *info =
(struct virtchnl_promisc_info *)msg;
struct i40e_pf *pf = vf->pf;
- i40e_status aq_ret = 0;
bool allmulti = false;
bool alluni = false;
+ int aq_ret = 0;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
aq_ret = I40E_ERR_PARAM;
@@ -2308,10 +2309,10 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
struct virtchnl_queue_pair_info *qpi;
u16 vsi_id, vsi_queue_id = 0;
struct i40e_pf *pf = vf->pf;
- i40e_status aq_ret = 0;
int i, j = 0, idx = 0;
struct i40e_vsi *vsi;
u16 num_qps_all = 0;
+ int aq_ret = 0;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
aq_ret = I40E_ERR_PARAM;
@@ -2458,8 +2459,8 @@ static int i40e_vc_config_irq_map_msg(struct i40e_vf *vf, u8 *msg)
struct virtchnl_irq_map_info *irqmap_info =
(struct virtchnl_irq_map_info *)msg;
struct virtchnl_vector_map *map;
+ int aq_ret = 0;
u16 vsi_id;
- i40e_status aq_ret = 0;
int i;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
@@ -2574,7 +2575,7 @@ static int i40e_vc_enable_queues_msg(struct i40e_vf *vf, u8 *msg)
struct virtchnl_queue_select *vqs =
(struct virtchnl_queue_select *)msg;
struct i40e_pf *pf = vf->pf;
- i40e_status aq_ret = 0;
+ int aq_ret = 0;
int i;
if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
@@ -2632,7 +2633,7 @@ static int i40e_vc_disable_queues_msg(struct i40e_vf *vf, u8 *msg)
struct virtchnl_queue_select *vqs =
(struct virtchnl_queue_select *)msg;
struct i40e_pf *pf = vf->pf;
- i40e_status aq_ret = 0;
+ int aq_ret = 0;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
aq_ret = I40E_ERR_PARAM;
@@ -2783,7 +2784,7 @@ static int i40e_vc_get_stats_msg(struct i40e_vf *vf, u8 *msg)
(struct virtchnl_queue_select *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_eth_stats stats;
- i40e_status aq_ret = 0;
+ int aq_ret = 0;
struct i40e_vsi *vsi;
memset(&stats, 0, sizeof(struct i40e_eth_stats));
@@ -2926,7 +2927,7 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
(struct virtchnl_ether_addr_list *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
- i40e_status ret = 0;
+ int ret = 0;
int i;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
@@ -2998,7 +2999,7 @@ static int i40e_vc_del_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
bool was_unimac_deleted = false;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
- i40e_status ret = 0;
+ int ret = 0;
int i;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
@@ -3071,7 +3072,7 @@ static int i40e_vc_add_vlan_msg(struct i40e_vf *vf, u8 *msg)
(struct virtchnl_vlan_filter_list *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
- i40e_status aq_ret = 0;
+ int aq_ret = 0;
int i;
if ((vf->num_vlan >= I40E_VC_MAX_VLAN_PER_VF) &&
@@ -3142,7 +3143,7 @@ static int i40e_vc_remove_vlan_msg(struct i40e_vf *vf, u8 *msg)
(struct virtchnl_vlan_filter_list *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
- i40e_status aq_ret = 0;
+ int aq_ret = 0;
int i;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
@@ -3187,21 +3188,21 @@ error_param:
}
/**
- * i40e_vc_iwarp_msg
+ * i40e_vc_rdma_msg
* @vf: pointer to the VF info
* @msg: pointer to the msg buffer
* @msglen: msg length
*
- * called from the VF for the iwarp msgs
+ * called from the VF for the RDMA msgs
**/
-static int i40e_vc_iwarp_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
+static int i40e_vc_rdma_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
{
struct i40e_pf *pf = vf->pf;
int abs_vf_id = vf->vf_id + pf->hw.func_caps.vf_base_id;
- i40e_status aq_ret = 0;
+ int aq_ret = 0;
if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
- !test_bit(I40E_VF_STATE_IWARPENA, &vf->vf_states)) {
+ !test_bit(I40E_VF_STATE_RDMAENA, &vf->vf_states)) {
aq_ret = I40E_ERR_PARAM;
goto error_param;
}
@@ -3211,42 +3212,42 @@ static int i40e_vc_iwarp_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
error_param:
/* send the response to the VF */
- return i40e_vc_send_resp_to_vf(vf, VIRTCHNL_OP_IWARP,
+ return i40e_vc_send_resp_to_vf(vf, VIRTCHNL_OP_RDMA,
aq_ret);
}
/**
- * i40e_vc_iwarp_qvmap_msg
+ * i40e_vc_rdma_qvmap_msg
* @vf: pointer to the VF info
* @msg: pointer to the msg buffer
* @config: config qvmap or release it
*
- * called from the VF for the iwarp msgs
+ * called from the VF for the RDMA msgs
**/
-static int i40e_vc_iwarp_qvmap_msg(struct i40e_vf *vf, u8 *msg, bool config)
+static int i40e_vc_rdma_qvmap_msg(struct i40e_vf *vf, u8 *msg, bool config)
{
- struct virtchnl_iwarp_qvlist_info *qvlist_info =
- (struct virtchnl_iwarp_qvlist_info *)msg;
- i40e_status aq_ret = 0;
+ struct virtchnl_rdma_qvlist_info *qvlist_info =
+ (struct virtchnl_rdma_qvlist_info *)msg;
+ int aq_ret = 0;
if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
- !test_bit(I40E_VF_STATE_IWARPENA, &vf->vf_states)) {
+ !test_bit(I40E_VF_STATE_RDMAENA, &vf->vf_states)) {
aq_ret = I40E_ERR_PARAM;
goto error_param;
}
if (config) {
- if (i40e_config_iwarp_qvlist(vf, qvlist_info))
+ if (i40e_config_rdma_qvlist(vf, qvlist_info))
aq_ret = I40E_ERR_PARAM;
} else {
- i40e_release_iwarp_qvlist(vf);
+ i40e_release_rdma_qvlist(vf);
}
error_param:
/* send the response to the VF */
return i40e_vc_send_resp_to_vf(vf,
- config ? VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP :
- VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP,
+ config ? VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP :
+ VIRTCHNL_OP_RELEASE_RDMA_IRQ_MAP,
aq_ret);
}
@@ -3263,7 +3264,7 @@ static int i40e_vc_config_rss_key(struct i40e_vf *vf, u8 *msg)
(struct virtchnl_rss_key *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
- i40e_status aq_ret = 0;
+ int aq_ret = 0;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
!i40e_vc_isvalid_vsi_id(vf, vrk->vsi_id) ||
@@ -3293,7 +3294,7 @@ static int i40e_vc_config_rss_lut(struct i40e_vf *vf, u8 *msg)
(struct virtchnl_rss_lut *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
- i40e_status aq_ret = 0;
+ int aq_ret = 0;
u16 i;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
@@ -3328,7 +3329,7 @@ static int i40e_vc_get_rss_hena(struct i40e_vf *vf, u8 *msg)
{
struct virtchnl_rss_hena *vrh = NULL;
struct i40e_pf *pf = vf->pf;
- i40e_status aq_ret = 0;
+ int aq_ret = 0;
int len = 0;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
@@ -3365,7 +3366,7 @@ static int i40e_vc_set_rss_hena(struct i40e_vf *vf, u8 *msg)
(struct virtchnl_rss_hena *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_hw *hw = &pf->hw;
- i40e_status aq_ret = 0;
+ int aq_ret = 0;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
aq_ret = I40E_ERR_PARAM;
@@ -3389,8 +3390,8 @@ err:
**/
static int i40e_vc_enable_vlan_stripping(struct i40e_vf *vf, u8 *msg)
{
- i40e_status aq_ret = 0;
struct i40e_vsi *vsi;
+ int aq_ret = 0;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
aq_ret = I40E_ERR_PARAM;
@@ -3415,8 +3416,8 @@ err:
**/
static int i40e_vc_disable_vlan_stripping(struct i40e_vf *vf, u8 *msg)
{
- i40e_status aq_ret = 0;
struct i40e_vsi *vsi;
+ int aq_ret = 0;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
aq_ret = I40E_ERR_PARAM;
@@ -3615,8 +3616,8 @@ static void i40e_del_all_cloud_filters(struct i40e_vf *vf)
ret = i40e_add_del_cloud_filter(vsi, cfilter, false);
if (ret)
dev_err(&pf->pdev->dev,
- "VF %d: Failed to delete cloud filter, err %s aq_err %s\n",
- vf->vf_id, i40e_stat_str(&pf->hw, ret),
+ "VF %d: Failed to delete cloud filter, err %pe aq_err %s\n",
+ vf->vf_id, ERR_PTR(ret),
i40e_aq_str(&pf->hw,
pf->hw.aq.asq_last_status));
@@ -3642,7 +3643,7 @@ static int i40e_vc_del_cloud_filter(struct i40e_vf *vf, u8 *msg)
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
struct hlist_node *node;
- i40e_status aq_ret = 0;
+ int aq_ret = 0;
int i, ret;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
@@ -3718,8 +3719,8 @@ static int i40e_vc_del_cloud_filter(struct i40e_vf *vf, u8 *msg)
ret = i40e_add_del_cloud_filter(vsi, &cfilter, false);
if (ret) {
dev_err(&pf->pdev->dev,
- "VF %d: Failed to delete cloud filter, err %s aq_err %s\n",
- vf->vf_id, i40e_stat_str(&pf->hw, ret),
+ "VF %d: Failed to delete cloud filter, err %pe aq_err %s\n",
+ vf->vf_id, ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
goto err;
}
@@ -3773,7 +3774,7 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg)
struct i40e_cloud_filter *cfilter = NULL;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
- i40e_status aq_ret = 0;
+ int aq_ret = 0;
int i, ret;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
@@ -3852,8 +3853,8 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg)
ret = i40e_add_del_cloud_filter(vsi, cfilter, true);
if (ret) {
dev_err(&pf->pdev->dev,
- "VF %d: Failed to add cloud filter, err %s aq_err %s\n",
- vf->vf_id, i40e_stat_str(&pf->hw, ret),
+ "VF %d: Failed to add cloud filter, err %pe aq_err %s\n",
+ vf->vf_id, ERR_PTR(ret),
i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
goto err_free;
}
@@ -3882,7 +3883,7 @@ static int i40e_vc_add_qch_msg(struct i40e_vf *vf, u8 *msg)
struct i40e_pf *pf = vf->pf;
struct i40e_link_status *ls = &pf->hw.phy.link_info;
int i, adq_request_qps = 0;
- i40e_status aq_ret = 0;
+ int aq_ret = 0;
u64 speed = 0;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
@@ -3994,7 +3995,7 @@ err:
static int i40e_vc_del_qch_msg(struct i40e_vf *vf, u8 *msg)
{
struct i40e_pf *pf = vf->pf;
- i40e_status aq_ret = 0;
+ int aq_ret = 0;
if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
aq_ret = I40E_ERR_PARAM;
@@ -4112,14 +4113,14 @@ int i40e_vc_process_vf_msg(struct i40e_pf *pf, s16 vf_id, u32 v_opcode,
case VIRTCHNL_OP_GET_STATS:
ret = i40e_vc_get_stats_msg(vf, msg);
break;
- case VIRTCHNL_OP_IWARP:
- ret = i40e_vc_iwarp_msg(vf, msg, msglen);
+ case VIRTCHNL_OP_RDMA:
+ ret = i40e_vc_rdma_msg(vf, msg, msglen);
break;
- case VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP:
- ret = i40e_vc_iwarp_qvmap_msg(vf, msg, true);
+ case VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP:
+ ret = i40e_vc_rdma_qvmap_msg(vf, msg, true);
break;
- case VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP:
- ret = i40e_vc_iwarp_qvmap_msg(vf, msg, false);
+ case VIRTCHNL_OP_RELEASE_RDMA_IRQ_MAP:
+ ret = i40e_vc_rdma_qvmap_msg(vf, msg, false);
break;
case VIRTCHNL_OP_CONFIG_RSS_KEY:
ret = i40e_vc_config_rss_key(vf, msg);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
index 358bbdb58795..895b8feb2567 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
@@ -34,7 +34,7 @@ enum i40e_queue_ctrl {
enum i40e_vf_states {
I40E_VF_STATE_INIT = 0,
I40E_VF_STATE_ACTIVE,
- I40E_VF_STATE_IWARPENA,
+ I40E_VF_STATE_RDMAENA,
I40E_VF_STATE_DISABLED,
I40E_VF_STATE_MC_PROMISC,
I40E_VF_STATE_UC_PROMISC,
@@ -46,7 +46,7 @@ enum i40e_vf_states {
enum i40e_vf_capabilities {
I40E_VIRTCHNL_VF_CAP_PRIVILEGE = 0,
I40E_VIRTCHNL_VF_CAP_L2,
- I40E_VIRTCHNL_VF_CAP_IWARP,
+ I40E_VIRTCHNL_VF_CAP_RDMA,
};
/* In ADq, max 4 VSI's can be allocated per VF including primary VF VSI.
@@ -108,7 +108,7 @@ struct i40e_vf {
u16 num_cloud_filters;
/* RDMA Client */
- struct virtchnl_iwarp_qvlist_info *qvlist_info;
+ struct virtchnl_rdma_qvlist_info *qvlist_info;
};
void i40e_free_vfs(struct i40e_pf *pf);
diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
index 2a9f1eeeb701..232bc61d9eee 100644
--- a/drivers/net/ethernet/intel/iavf/iavf.h
+++ b/drivers/net/ethernet/intel/iavf/iavf.h
@@ -30,6 +30,7 @@
#include <linux/jiffies.h>
#include <net/ip6_checksum.h>
#include <net/pkt_cls.h>
+#include <net/pkt_sched.h>
#include <net/udp.h>
#include <net/tc_act/tc_gact.h>
#include <net/tc_act/tc_mirred.h>
@@ -276,8 +277,8 @@ struct iavf_adapter {
u64 hw_csum_rx_error;
u32 rx_desc_count;
int num_msix_vectors;
- int num_iwarp_msix;
- int iwarp_base_vector;
+ int num_rdma_msix;
+ int rdma_base_vector;
u32 client_pending;
struct iavf_client_instance *cinst;
struct msix_entry *msix_entries;
@@ -384,7 +385,7 @@ struct iavf_adapter {
enum virtchnl_ops current_op;
#define CLIENT_ALLOWED(_a) ((_a)->vf_res ? \
(_a)->vf_res->vf_cap_flags & \
- VIRTCHNL_VF_OFFLOAD_IWARP : \
+ VIRTCHNL_VF_OFFLOAD_RDMA : \
0)
#define CLIENT_ENABLED(_a) ((_a)->cinst)
/* RSS by the PF should be preferred over RSS via other methods. */
diff --git a/drivers/net/ethernet/intel/iavf/iavf_client.c b/drivers/net/ethernet/intel/iavf/iavf_client.c
index 0c77e4171808..93c903c02c64 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_client.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_client.c
@@ -127,7 +127,7 @@ void iavf_notify_client_open(struct iavf_vsi *vsi)
}
/**
- * iavf_client_release_qvlist - send a message to the PF to release iwarp qv map
+ * iavf_client_release_qvlist - send a message to the PF to release rdma qv map
* @ldev: pointer to L2 context.
*
* Return 0 on success or < 0 on error
@@ -141,12 +141,12 @@ static int iavf_client_release_qvlist(struct iavf_info *ldev)
return -EAGAIN;
err = iavf_aq_send_msg_to_pf(&adapter->hw,
- VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP,
+ VIRTCHNL_OP_RELEASE_RDMA_IRQ_MAP,
IAVF_SUCCESS, NULL, 0, NULL);
if (err)
dev_err(&adapter->pdev->dev,
- "Unable to send iWarp vector release message to PF, error %d, aq status %d\n",
+ "Unable to send RDMA vector release message to PF, error %d, aq status %d\n",
err, adapter->hw.aq.asq_last_status);
return err;
@@ -215,9 +215,9 @@ iavf_client_add_instance(struct iavf_adapter *adapter)
cinst->lan_info.params = params;
set_bit(__IAVF_CLIENT_INSTANCE_NONE, &cinst->state);
- cinst->lan_info.msix_count = adapter->num_iwarp_msix;
+ cinst->lan_info.msix_count = adapter->num_rdma_msix;
cinst->lan_info.msix_entries =
- &adapter->msix_entries[adapter->iwarp_base_vector];
+ &adapter->msix_entries[adapter->rdma_base_vector];
mac = list_first_entry(&cinst->lan_info.netdev->dev_addrs.list,
struct netdev_hw_addr, list);
@@ -425,17 +425,17 @@ static u32 iavf_client_virtchnl_send(struct iavf_info *ldev,
if (adapter->aq_required)
return -EAGAIN;
- err = iavf_aq_send_msg_to_pf(&adapter->hw, VIRTCHNL_OP_IWARP,
+ err = iavf_aq_send_msg_to_pf(&adapter->hw, VIRTCHNL_OP_RDMA,
IAVF_SUCCESS, msg, len, NULL);
if (err)
- dev_err(&adapter->pdev->dev, "Unable to send iWarp message to PF, error %d, aq status %d\n",
+ dev_err(&adapter->pdev->dev, "Unable to send RDMA message to PF, error %d, aq status %d\n",
err, adapter->hw.aq.asq_last_status);
return err;
}
/**
- * iavf_client_setup_qvlist - send a message to the PF to setup iwarp qv map
+ * iavf_client_setup_qvlist - send a message to the PF to setup rdma qv map
* @ldev: pointer to L2 context.
* @client: Client pointer.
* @qvlist_info: queue and vector list
@@ -446,7 +446,7 @@ static int iavf_client_setup_qvlist(struct iavf_info *ldev,
struct iavf_client *client,
struct iavf_qvlist_info *qvlist_info)
{
- struct virtchnl_iwarp_qvlist_info *v_qvlist_info;
+ struct virtchnl_rdma_qvlist_info *v_qvlist_info;
struct iavf_adapter *adapter = ldev->vf;
struct iavf_qv_info *qv_info;
enum iavf_status err;
@@ -463,23 +463,23 @@ static int iavf_client_setup_qvlist(struct iavf_info *ldev,
continue;
v_idx = qv_info->v_idx;
if ((v_idx >=
- (adapter->iwarp_base_vector + adapter->num_iwarp_msix)) ||
- (v_idx < adapter->iwarp_base_vector))
+ (adapter->rdma_base_vector + adapter->num_rdma_msix)) ||
+ (v_idx < adapter->rdma_base_vector))
return -EINVAL;
}
- v_qvlist_info = (struct virtchnl_iwarp_qvlist_info *)qvlist_info;
+ v_qvlist_info = (struct virtchnl_rdma_qvlist_info *)qvlist_info;
msg_size = struct_size(v_qvlist_info, qv_info,
v_qvlist_info->num_vectors - 1);
- adapter->client_pending |= BIT(VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP);
+ adapter->client_pending |= BIT(VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP);
err = iavf_aq_send_msg_to_pf(&adapter->hw,
- VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP, IAVF_SUCCESS,
+ VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP, IAVF_SUCCESS,
(u8 *)v_qvlist_info, msg_size, NULL);
if (err) {
dev_err(&adapter->pdev->dev,
- "Unable to send iWarp vector config message to PF, error %d, aq status %d\n",
+ "Unable to send RDMA vector config message to PF, error %d, aq status %d\n",
err, adapter->hw.aq.asq_last_status);
goto out;
}
@@ -488,7 +488,7 @@ static int iavf_client_setup_qvlist(struct iavf_info *ldev,
for (i = 0; i < 5; i++) {
msleep(100);
if (!(adapter->client_pending &
- BIT(VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP))) {
+ BIT(VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP))) {
err = 0;
break;
}
diff --git a/drivers/net/ethernet/intel/iavf/iavf_client.h b/drivers/net/ethernet/intel/iavf/iavf_client.h
index 9a7cf39ea75a..c5d51d7dc7cc 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_client.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_client.h
@@ -159,7 +159,7 @@ struct iavf_client {
#define IAVF_CLIENT_FLAGS_LAUNCH_ON_PROBE BIT(0)
#define IAVF_TX_FLAGS_NOTIFY_OTHER_EVENTS BIT(2)
u8 type;
-#define IAVF_CLIENT_IWARP 0
+#define IAVF_CLIENT_RDMA 0
struct iavf_client_ops *ops; /* client ops provided by the client */
};
diff --git a/drivers/net/ethernet/intel/iavf/iavf_common.c b/drivers/net/ethernet/intel/iavf/iavf_common.c
index 34e46a23894f..16c490965b61 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_common.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_common.c
@@ -223,8 +223,8 @@ const char *iavf_stat_str(struct iavf_hw *hw, enum iavf_status stat_err)
return "IAVF_ERR_ADMIN_QUEUE_FULL";
case IAVF_ERR_ADMIN_QUEUE_NO_WORK:
return "IAVF_ERR_ADMIN_QUEUE_NO_WORK";
- case IAVF_ERR_BAD_IWARP_CQE:
- return "IAVF_ERR_BAD_IWARP_CQE";
+ case IAVF_ERR_BAD_RDMA_CQE:
+ return "IAVF_ERR_BAD_RDMA_CQE";
case IAVF_ERR_NVM_BLANK_MODE:
return "IAVF_ERR_NVM_BLANK_MODE";
case IAVF_ERR_NOT_IMPLEMENTED:
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 4b09785d2147..3273aeb8fa67 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -105,7 +105,7 @@ int iavf_status_to_errno(enum iavf_status status)
case IAVF_ERR_SRQ_ENABLED:
case IAVF_ERR_ADMIN_QUEUE_ERROR:
case IAVF_ERR_ADMIN_QUEUE_FULL:
- case IAVF_ERR_BAD_IWARP_CQE:
+ case IAVF_ERR_BAD_RDMA_CQE:
case IAVF_ERR_NVM_BLANK_MODE:
case IAVF_ERR_PE_DOORBELL_NOT_ENABLED:
case IAVF_ERR_DIAG_TEST_FAILED:
@@ -4868,8 +4868,6 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_pci_reg;
}
- pci_enable_pcie_error_reporting(pdev);
-
pci_set_master(pdev);
netdev = alloc_etherdev_mq(sizeof(struct iavf_adapter),
@@ -4957,7 +4955,6 @@ err_ioremap:
err_alloc_wq:
free_netdev(netdev);
err_alloc_etherdev:
- pci_disable_pcie_error_reporting(pdev);
pci_release_regions(pdev);
err_pci_reg:
err_dma:
@@ -5175,8 +5172,6 @@ static void iavf_remove(struct pci_dev *pdev)
free_netdev(netdev);
- pci_disable_pcie_error_reporting(pdev);
-
pci_disable_device(pdev);
}
diff --git a/drivers/net/ethernet/intel/iavf/iavf_status.h b/drivers/net/ethernet/intel/iavf/iavf_status.h
index 2ea5c7c339bc..0e493ee9e9d1 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_status.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_status.h
@@ -64,7 +64,7 @@ enum iavf_status {
IAVF_ERR_BUF_TOO_SHORT = -55,
IAVF_ERR_ADMIN_QUEUE_FULL = -56,
IAVF_ERR_ADMIN_QUEUE_NO_WORK = -57,
- IAVF_ERR_BAD_IWARP_CQE = -58,
+ IAVF_ERR_BAD_RDMA_CQE = -58,
IAVF_ERR_NVM_BLANK_MODE = -59,
IAVF_ERR_NOT_IMPLEMENTED = -60,
IAVF_ERR_PE_DOORBELL_NOT_ENABLED = -61,
diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
index 365ca0c710c4..6d23338604bb 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
@@ -2298,7 +2298,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
if (v_opcode != adapter->current_op)
return;
break;
- case VIRTCHNL_OP_IWARP:
+ case VIRTCHNL_OP_RDMA:
/* Gobble zero-length replies from the PF. They indicate that
* a previous message was received OK, and the client doesn't
* care about that.
@@ -2307,9 +2307,9 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
iavf_notify_client_message(&adapter->vsi, msg, msglen);
break;
- case VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP:
+ case VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP:
adapter->client_pending &=
- ~(BIT(VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP));
+ ~(BIT(VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP));
break;
case VIRTCHNL_OP_GET_RSS_HENA_CAPS: {
struct virtchnl_rss_hena *vrh = (struct virtchnl_rss_hena *)msg;
diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile
index 9183d480b70b..f269952d207d 100644
--- a/drivers/net/ethernet/intel/ice/Makefile
+++ b/drivers/net/ethernet/intel/ice/Makefile
@@ -28,6 +28,7 @@ ice-y := ice_main.o \
ice_flow.o \
ice_idc.o \
ice_devlink.o \
+ ice_ddp.o \
ice_fw_update.o \
ice_lag.o \
ice_ethtool.o \
@@ -42,8 +43,8 @@ ice-$(CONFIG_PCI_IOV) += \
ice_vf_vsi_vlan_ops.o \
ice_vf_lib.o
ice-$(CONFIG_PTP_1588_CLOCK) += ice_ptp.o ice_ptp_hw.o
-ice-$(CONFIG_TTY) += ice_gnss.o
ice-$(CONFIG_DCB) += ice_dcb.o ice_dcb_nl.o ice_dcb_lib.o
ice-$(CONFIG_RFS_ACCEL) += ice_arfs.o
ice-$(CONFIG_XDP_SOCKETS) += ice_xsk.o
ice-$(CONFIG_ICE_SWITCHDEV) += ice_eswitch.o
+ice-$(CONFIG_ICE_GNSS) += ice_gnss.o
diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 713069f809ec..b0e29e342401 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -39,7 +39,9 @@
#include <linux/avf/virtchnl.h>
#include <linux/cpu_rmap.h>
#include <linux/dim.h>
+#include <linux/gnss.h>
#include <net/pkt_cls.h>
+#include <net/pkt_sched.h>
#include <net/tc_act/tc_mirred.h>
#include <net/tc_act/tc_gact.h>
#include <net/ip.h>
@@ -121,6 +123,8 @@
#define ICE_MAX_MTU (ICE_AQ_SET_MAC_FRAME_SIZE_MAX - ICE_ETH_PKT_HDR_PAD)
+#define ICE_MAX_TSO_SIZE 131072
+
#define ICE_UP_TABLE_TRANSLATE(val, i) \
(((val) << ICE_AQ_VSI_UP_TABLE_UP##i##_S) & \
ICE_AQ_VSI_UP_TABLE_UP##i##_M)
@@ -352,7 +356,6 @@ struct ice_vsi {
struct ice_vf *vf; /* VF associated with this VSI */
- u16 ethtype; /* Ethernet protocol for pause frame */
u16 num_gfltr;
u16 num_bfltr;
@@ -565,9 +568,8 @@ struct ice_pf {
struct mutex adev_mutex; /* lock to protect aux device access */
u32 msg_enable;
struct ice_ptp ptp;
- struct tty_driver *ice_gnss_tty_driver;
- struct tty_port *gnss_tty_port[ICE_GNSS_TTY_MINOR_DEVICES];
- struct gnss_serial *gnss_serial[ICE_GNSS_TTY_MINOR_DEVICES];
+ struct gnss_serial *gnss_serial;
+ struct gnss_device *gnss_dev;
u16 num_rdma_msix; /* Total MSIX vectors for RDMA driver */
u16 rdma_base_vector;
@@ -889,7 +891,7 @@ ice_fetch_u64_stats_per_ring(struct u64_stats_sync *syncp,
int ice_up(struct ice_vsi *vsi);
int ice_down(struct ice_vsi *vsi);
int ice_down_up(struct ice_vsi *vsi);
-int ice_vsi_cfg(struct ice_vsi *vsi);
+int ice_vsi_cfg_lan(struct ice_vsi *vsi);
struct ice_vsi *ice_lb_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi);
int ice_vsi_determine_xdp_res(struct ice_vsi *vsi);
int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog);
@@ -907,6 +909,7 @@ void ice_print_link_msg(struct ice_vsi *vsi, bool isup);
int ice_plug_aux_dev(struct ice_pf *pf);
void ice_unplug_aux_dev(struct ice_pf *pf);
int ice_init_rdma(struct ice_pf *pf);
+void ice_deinit_rdma(struct ice_pf *pf);
const char *ice_aq_str(enum ice_aq_err aq_err);
bool ice_is_wol_supported(struct ice_hw *hw);
void ice_fdir_del_all_fltrs(struct ice_vsi *vsi);
@@ -931,6 +934,8 @@ int ice_open(struct net_device *netdev);
int ice_open_internal(struct net_device *netdev);
int ice_stop(struct net_device *netdev);
void ice_service_task_schedule(struct ice_pf *pf);
+int ice_load(struct ice_pf *pf);
+void ice_unload(struct ice_pf *pf);
/**
* ice_set_rdma_cap - enable RDMA support
diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 958c1e435232..838d9b274d68 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -1659,14 +1659,24 @@ struct ice_aqc_lldp_get_mib {
#define ICE_AQ_LLDP_TX_ACTIVE 0
#define ICE_AQ_LLDP_TX_SUSPENDED 1
#define ICE_AQ_LLDP_TX_FLUSHED 3
+/* DCBX mode */
+#define ICE_AQ_LLDP_DCBX_M GENMASK(7, 6)
+#define ICE_AQ_LLDP_DCBX_NA 0
+#define ICE_AQ_LLDP_DCBX_CEE 1
+#define ICE_AQ_LLDP_DCBX_IEEE 2
+
+ u8 state;
+#define ICE_AQ_LLDP_MIB_CHANGE_STATE_M BIT(0)
+#define ICE_AQ_LLDP_MIB_CHANGE_EXECUTED 0
+#define ICE_AQ_LLDP_MIB_CHANGE_PENDING 1
+
/* The following bytes are reserved for the Get LLDP MIB command (0x0A00)
* and in the LLDP MIB Change Event (0x0A01). They are valid for the
* Get LLDP MIB (0x0A00) response only.
*/
- u8 reserved1;
__le16 local_len;
__le16 remote_len;
- u8 reserved2[2];
+ u8 reserved[2];
__le32 addr_high;
__le32 addr_low;
};
@@ -1677,6 +1687,9 @@ struct ice_aqc_lldp_set_mib_change {
u8 command;
#define ICE_AQ_LLDP_MIB_UPDATE_ENABLE 0x0
#define ICE_AQ_LLDP_MIB_UPDATE_DIS 0x1
+#define ICE_AQ_LLDP_MIB_PENDING_M BIT(1)
+#define ICE_AQ_LLDP_MIB_PENDING_DISABLE 0
+#define ICE_AQ_LLDP_MIB_PENDING_ENABLE 1
u8 reserved[15];
};
@@ -2329,6 +2342,7 @@ enum ice_adminq_opc {
ice_aqc_opc_lldp_set_local_mib = 0x0A08,
ice_aqc_opc_lldp_stop_start_specific_agent = 0x0A09,
ice_aqc_opc_lldp_filter_ctrl = 0x0A0A,
+ ice_aqc_opc_lldp_execute_pending_mib = 0x0A0B,
/* RSS commands */
ice_aqc_opc_set_rss_key = 0x0B02,
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 554095b25f44..1911d644dfa8 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -355,9 +355,6 @@ static unsigned int ice_rx_offset(struct ice_rx_ring *rx_ring)
{
if (ice_ring_uses_build_skb(rx_ring))
return ICE_SKB_PAD;
- else if (ice_is_xdp_ena_vsi(rx_ring->vsi))
- return XDP_PACKET_HEADROOM;
-
return 0;
}
@@ -495,7 +492,7 @@ static int ice_setup_rx_ctx(struct ice_rx_ring *ring)
int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
{
struct device *dev = ice_pf_to_dev(ring->vsi->back);
- u16 num_bufs = ICE_DESC_UNUSED(ring);
+ u32 num_bufs = ICE_RX_DESC_UNUSED(ring);
int err;
ring->rx_buf_len = ring->vsi->rx_buf_len;
@@ -503,8 +500,10 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
if (ring->vsi->type == ICE_VSI_PF) {
if (!xdp_rxq_info_is_reg(&ring->xdp_rxq))
/* coverity[check_return] */
- xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
- ring->q_index, ring->q_vector->napi.napi_id);
+ __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
+ ring->q_index,
+ ring->q_vector->napi.napi_id,
+ ring->vsi->rx_buf_len);
ring->xsk_pool = ice_xsk_pool(ring);
if (ring->xsk_pool) {
@@ -524,9 +523,11 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
} else {
if (!xdp_rxq_info_is_reg(&ring->xdp_rxq))
/* coverity[check_return] */
- xdp_rxq_info_reg(&ring->xdp_rxq,
- ring->netdev,
- ring->q_index, ring->q_vector->napi.napi_id);
+ __xdp_rxq_info_reg(&ring->xdp_rxq,
+ ring->netdev,
+ ring->q_index,
+ ring->q_vector->napi.napi_id,
+ ring->vsi->rx_buf_len);
err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
MEM_TYPE_PAGE_SHARED,
@@ -536,6 +537,8 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
}
}
+ xdp_init_buff(&ring->xdp, ice_rx_pg_size(ring) / 2, &ring->xdp_rxq);
+ ring->xdp.data = NULL;
err = ice_setup_rx_ctx(ring);
if (err) {
dev_err(dev, "ice_setup_rx_ctx failed for RxQ %d, err %d\n",
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 3e08847505ce..c2fda4fa4188 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -208,6 +208,31 @@ bool ice_is_e810t(struct ice_hw *hw)
}
/**
+ * ice_is_e823
+ * @hw: pointer to the hardware structure
+ *
+ * returns true if the device is E823-L or E823-C based, false if not.
+ */
+bool ice_is_e823(struct ice_hw *hw)
+{
+ switch (hw->device_id) {
+ case ICE_DEV_ID_E823L_BACKPLANE:
+ case ICE_DEV_ID_E823L_SFP:
+ case ICE_DEV_ID_E823L_10G_BASE_T:
+ case ICE_DEV_ID_E823L_1GBE:
+ case ICE_DEV_ID_E823L_QSFP:
+ case ICE_DEV_ID_E823C_BACKPLANE:
+ case ICE_DEV_ID_E823C_QSFP:
+ case ICE_DEV_ID_E823C_SFP:
+ case ICE_DEV_ID_E823C_10G_BASE_T:
+ case ICE_DEV_ID_E823C_SGMII:
+ return true;
+ default:
+ return false;
+ }
+}
+
+/**
* ice_clear_pf_cfg - Clear PF configuration
* @hw: pointer to the hardware structure
*
@@ -1088,8 +1113,10 @@ int ice_init_hw(struct ice_hw *hw)
if (status)
goto err_unroll_cqinit;
- hw->port_info = devm_kzalloc(ice_hw_to_dev(hw),
- sizeof(*hw->port_info), GFP_KERNEL);
+ if (!hw->port_info)
+ hw->port_info = devm_kzalloc(ice_hw_to_dev(hw),
+ sizeof(*hw->port_info),
+ GFP_KERNEL);
if (!hw->port_info) {
status = -ENOMEM;
goto err_unroll_cqinit;
@@ -1217,11 +1244,6 @@ void ice_deinit_hw(struct ice_hw *hw)
ice_free_hw_tbls(hw);
mutex_destroy(&hw->tnl_lock);
- if (hw->port_info) {
- devm_kfree(ice_hw_to_dev(hw), hw->port_info);
- hw->port_info = NULL;
- }
-
/* Attempt to disable FW logging before shutting down control queues */
ice_cfg_fw_log(hw, false);
ice_destroy_all_ctrlq(hw);
@@ -5504,6 +5526,19 @@ ice_lldp_fltr_add_remove(struct ice_hw *hw, u16 vsi_num, bool add)
}
/**
+ * ice_lldp_execute_pending_mib - execute LLDP pending MIB request
+ * @hw: pointer to HW struct
+ */
+int ice_lldp_execute_pending_mib(struct ice_hw *hw)
+{
+ struct ice_aq_desc desc;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_lldp_execute_pending_mib);
+
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
* ice_fw_supports_report_dflt_cfg
* @hw: pointer to the hardware structure
*
diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h
index 4c6a0b5c9304..8ba5f935a092 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.h
+++ b/drivers/net/ethernet/intel/ice/ice_common.h
@@ -122,7 +122,7 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
bool ena_auto_link_update);
int
ice_cfg_phy_fc(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg,
- enum ice_fc_mode fc);
+ enum ice_fc_mode req_mode);
bool
ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *caps,
struct ice_aqc_set_phy_cfg_data *cfg);
@@ -199,6 +199,7 @@ void
ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
u64 *prev_stat, u64 *cur_stat);
bool ice_is_e810t(struct ice_hw *hw);
+bool ice_is_e823(struct ice_hw *hw);
int
ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
struct ice_aqc_txsched_elem_data *buf);
@@ -221,6 +222,7 @@ ice_aq_set_lldp_mib(struct ice_hw *hw, u8 mib_type, void *buf, u16 buf_size,
bool ice_fw_supports_lldp_fltr_ctrl(struct ice_hw *hw);
int
ice_lldp_fltr_add_remove(struct ice_hw *hw, u16 vsi_num, bool add);
+int ice_lldp_execute_pending_mib(struct ice_hw *hw);
int
ice_aq_read_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr,
u16 bus_addr, __le16 addr, u8 params, u8 *data,
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb.c b/drivers/net/ethernet/intel/ice/ice_dcb.c
index 6be02f9b0b8c..c557dfc50aad 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb.c
+++ b/drivers/net/ethernet/intel/ice/ice_dcb.c
@@ -73,6 +73,9 @@ ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update,
if (!ena_update)
cmd->command |= ICE_AQ_LLDP_MIB_UPDATE_DIS;
+ else
+ cmd->command |= FIELD_PREP(ICE_AQ_LLDP_MIB_PENDING_M,
+ ICE_AQ_LLDP_MIB_PENDING_ENABLE);
return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
}
@@ -566,7 +569,7 @@ ice_parse_cee_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
* @tlv: Organization specific TLV
* @dcbcfg: Local store to update ETS REC data
*
- * Currently only IEEE 802.1Qaz TLV is supported, all others
+ * Currently IEEE 802.1Qaz and CEE DCBX TLV are supported, others
* will be returned
*/
static void
@@ -585,7 +588,7 @@ ice_parse_org_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
ice_parse_cee_tlv(tlv, dcbcfg);
break;
default:
- break;
+ break; /* Other OUIs not supported */
}
}
@@ -964,6 +967,42 @@ int ice_get_dcb_cfg(struct ice_port_info *pi)
}
/**
+ * ice_get_dcb_cfg_from_mib_change
+ * @pi: port information structure
+ * @event: pointer to the admin queue receive event
+ *
+ * Set DCB configuration from received MIB Change event
+ */
+void ice_get_dcb_cfg_from_mib_change(struct ice_port_info *pi,
+ struct ice_rq_event_info *event)
+{
+ struct ice_dcbx_cfg *dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
+ struct ice_aqc_lldp_get_mib *mib;
+ u8 change_type, dcbx_mode;
+
+ mib = (struct ice_aqc_lldp_get_mib *)&event->desc.params.raw;
+
+ change_type = FIELD_GET(ICE_AQ_LLDP_MIB_TYPE_M, mib->type);
+ if (change_type == ICE_AQ_LLDP_MIB_REMOTE)
+ dcbx_cfg = &pi->qos_cfg.remote_dcbx_cfg;
+
+ dcbx_mode = FIELD_GET(ICE_AQ_LLDP_DCBX_M, mib->type);
+
+ switch (dcbx_mode) {
+ case ICE_AQ_LLDP_DCBX_IEEE:
+ dcbx_cfg->dcbx_mode = ICE_DCBX_MODE_IEEE;
+ ice_lldp_to_dcb_cfg(event->msg_buf, dcbx_cfg);
+ break;
+
+ case ICE_AQ_LLDP_DCBX_CEE:
+ pi->qos_cfg.desired_dcbx_cfg = pi->qos_cfg.local_dcbx_cfg;
+ ice_cee_to_dcb_cfg((struct ice_aqc_get_cee_dcb_cfg_resp *)
+ event->msg_buf, pi);
+ break;
+ }
+}
+
+/**
* ice_init_dcb
* @hw: pointer to the HW struct
* @enable_mib_change: enable MIB change event
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb.h b/drivers/net/ethernet/intel/ice/ice_dcb.h
index 6abf28a14291..be34650a77d5 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb.h
+++ b/drivers/net/ethernet/intel/ice/ice_dcb.h
@@ -144,6 +144,8 @@ ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
struct ice_dcbx_cfg *dcbcfg);
int ice_get_dcb_cfg(struct ice_port_info *pi);
int ice_set_dcb_cfg(struct ice_port_info *pi);
+void ice_get_dcb_cfg_from_mib_change(struct ice_port_info *pi,
+ struct ice_rq_event_info *event);
int ice_init_dcb(struct ice_hw *hw, bool enable_mib_change);
int
ice_query_port_ets(struct ice_port_info *pi,
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
index 0a55c552189a..c6d4926f0fcf 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
@@ -862,7 +862,7 @@ int ice_init_pf_dcb(struct ice_pf *pf, bool locked)
if (err)
goto dcb_init_err;
- return err;
+ return 0;
dcb_init_err:
dev_err(dev, "DCB init failed\n");
@@ -947,6 +947,16 @@ ice_tx_prepare_vlan_flags_dcb(struct ice_tx_ring *tx_ring,
}
/**
+ * ice_dcb_is_mib_change_pending - Check if MIB change is pending
+ * @state: MIB change state
+ */
+static bool ice_dcb_is_mib_change_pending(u8 state)
+{
+ return ICE_AQ_LLDP_MIB_CHANGE_PENDING ==
+ FIELD_GET(ICE_AQ_LLDP_MIB_CHANGE_STATE_M, state);
+}
+
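The DCB rework above consistently uses the <linux/bitfield.h> helpers instead of open-coded shift/mask pairs: FIELD_PREP() places a value into a mask-defined field and FIELD_GET() extracts it again. A minimal sketch using the masks added in this patch (the helper name is hypothetical):

#include <linux/bitfield.h>

/* Hypothetical helper: build the MIB-change command byte. */
static u8 example_mib_change_cmd(void)
{
	/* FIELD_PREP() shifts the value into the mask's bit position;
	 * FIELD_GET() with the same mask recovers it when decoding.
	 */
	return FIELD_PREP(ICE_AQ_LLDP_MIB_PENDING_M,
			  ICE_AQ_LLDP_MIB_PENDING_ENABLE);
}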
+/**
* ice_dcb_process_lldp_set_mib_change - Process MIB change
* @pf: ptr to ice_pf
* @event: pointer to the admin queue receive event
@@ -959,6 +969,7 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf,
struct device *dev = ice_pf_to_dev(pf);
struct ice_aqc_lldp_get_mib *mib;
struct ice_dcbx_cfg tmp_dcbx_cfg;
+ bool pending_handled = true;
bool need_reconfig = false;
struct ice_port_info *pi;
u8 mib_type;
@@ -975,41 +986,58 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf,
pi = pf->hw.port_info;
mib = (struct ice_aqc_lldp_get_mib *)&event->desc.params.raw;
+
/* Ignore if event is not for Nearest Bridge */
- mib_type = ((mib->type >> ICE_AQ_LLDP_BRID_TYPE_S) &
- ICE_AQ_LLDP_BRID_TYPE_M);
+ mib_type = FIELD_GET(ICE_AQ_LLDP_BRID_TYPE_M, mib->type);
dev_dbg(dev, "LLDP event MIB bridge type 0x%x\n", mib_type);
if (mib_type != ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID)
return;
+ /* A pending change event contains accurate config information, and
+ * the FW setting has not been updated yet, so detect if a change is
+ * pending to determine where to pull config information from
+ * (FW vs event)
+ */
+ if (ice_dcb_is_mib_change_pending(mib->state))
+ pending_handled = false;
+
/* Check MIB Type and return if event for Remote MIB update */
- mib_type = mib->type & ICE_AQ_LLDP_MIB_TYPE_M;
+ mib_type = FIELD_GET(ICE_AQ_LLDP_MIB_TYPE_M, mib->type);
dev_dbg(dev, "LLDP event mib type %s\n", mib_type ? "remote" : "local");
if (mib_type == ICE_AQ_LLDP_MIB_REMOTE) {
/* Update the remote cached instance and return */
- ret = ice_aq_get_dcb_cfg(pi->hw, ICE_AQ_LLDP_MIB_REMOTE,
- ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID,
- &pi->qos_cfg.remote_dcbx_cfg);
- if (ret) {
- dev_err(dev, "Failed to get remote DCB config\n");
- return;
+ if (!pending_handled) {
+ ice_get_dcb_cfg_from_mib_change(pi, event);
+ } else {
+ ret =
+ ice_aq_get_dcb_cfg(pi->hw, ICE_AQ_LLDP_MIB_REMOTE,
+ ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID,
+ &pi->qos_cfg.remote_dcbx_cfg);
+ if (ret)
+ dev_dbg(dev, "Failed to get remote DCB config\n");
}
+ return;
}
+ /* At this point a DCB change has definitely happened */
mutex_lock(&pf->tc_mutex);
/* store the old configuration */
- tmp_dcbx_cfg = pf->hw.port_info->qos_cfg.local_dcbx_cfg;
+ tmp_dcbx_cfg = pi->qos_cfg.local_dcbx_cfg;
/* Reset the old DCBX configuration data */
memset(&pi->qos_cfg.local_dcbx_cfg, 0,
sizeof(pi->qos_cfg.local_dcbx_cfg));
/* Get updated DCBX data from firmware */
- ret = ice_get_dcb_cfg(pf->hw.port_info);
- if (ret) {
- dev_err(dev, "Failed to get DCB config\n");
- goto out;
+ if (!pending_handled) {
+ ice_get_dcb_cfg_from_mib_change(pi, event);
+ } else {
+ ret = ice_get_dcb_cfg(pi);
+ if (ret) {
+ dev_err(dev, "Failed to get DCB config\n");
+ goto out;
+ }
}
/* No change detected in DCBX configs */
@@ -1036,11 +1064,17 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf,
clear_bit(ICE_FLAG_DCB_ENA, pf->flags);
}
+ /* Send Execute Pending MIB Change event if it is a Pending event */
+ if (!pending_handled) {
+ ice_lldp_execute_pending_mib(&pf->hw);
+ pending_handled = true;
+ }
+
rtnl_lock();
/* disable VSIs affected by DCB changes */
ice_dcb_ena_dis_vsi(pf, false, true);
- ret = ice_query_port_ets(pf->hw.port_info, &buf, sizeof(buf), NULL);
+ ret = ice_query_port_ets(pi, &buf, sizeof(buf), NULL);
if (ret) {
dev_err(dev, "Query Port ETS failed\n");
goto unlock_rtnl;
@@ -1055,4 +1089,8 @@ unlock_rtnl:
rtnl_unlock();
out:
mutex_unlock(&pf->tc_mutex);
+
+ /* Send Execute Pending MIB Change event if it is a Pending event */
+ if (!pending_handled)
+ ice_lldp_execute_pending_mib(&pf->hw);
}
diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.c b/drivers/net/ethernet/intel/ice/ice_ddp.c
new file mode 100644
index 000000000000..d71ed210f9c4
--- /dev/null
+++ b/drivers/net/ethernet/intel/ice/ice_ddp.c
@@ -0,0 +1,1897 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022, Intel Corporation. */
+
+#include "ice_common.h"
+#include "ice.h"
+#include "ice_ddp.h"
+
+/* For supporting double VLAN mode, it is necessary to enable or disable certain
+ * boost tcam entries. The metadata labels names that match the following
+ * prefixes will be saved to allow enabling double VLAN mode.
+ */
+#define ICE_DVM_PRE "BOOST_MAC_VLAN_DVM" /* enable these entries */
+#define ICE_SVM_PRE "BOOST_MAC_VLAN_SVM" /* disable these entries */
+
+/* To support tunneling entries by PF, the package will append the PF number to
+ * the label; for example TNL_VXLAN_PF0, TNL_VXLAN_PF1, TNL_VXLAN_PF2, etc.
+ */
+#define ICE_TNL_PRE "TNL_"
+static const struct ice_tunnel_type_scan tnls[] = {
+ { TNL_VXLAN, "TNL_VXLAN_PF" },
+ { TNL_GENEVE, "TNL_GENEVE_PF" },
+ { TNL_LAST, "" }
+};
+
+/**
+ * ice_verify_pkg - verify package
+ * @pkg: pointer to the package buffer
+ * @len: size of the package buffer
+ *
+ * Verifies various attributes of the package file, including length, format
+ * version, and the requirement of at least one segment.
+ */
+enum ice_ddp_state ice_verify_pkg(struct ice_pkg_hdr *pkg, u32 len)
+{
+ u32 seg_count;
+ u32 i;
+
+ if (len < struct_size(pkg, seg_offset, 1))
+ return ICE_DDP_PKG_INVALID_FILE;
+
+ if (pkg->pkg_format_ver.major != ICE_PKG_FMT_VER_MAJ ||
+ pkg->pkg_format_ver.minor != ICE_PKG_FMT_VER_MNR ||
+ pkg->pkg_format_ver.update != ICE_PKG_FMT_VER_UPD ||
+ pkg->pkg_format_ver.draft != ICE_PKG_FMT_VER_DFT)
+ return ICE_DDP_PKG_INVALID_FILE;
+
+ /* pkg must have at least one segment */
+ seg_count = le32_to_cpu(pkg->seg_count);
+ if (seg_count < 1)
+ return ICE_DDP_PKG_INVALID_FILE;
+
+ /* make sure segment array fits in package length */
+ if (len < struct_size(pkg, seg_offset, seg_count))
+ return ICE_DDP_PKG_INVALID_FILE;
+
+ /* all segments must fit within length */
+ for (i = 0; i < seg_count; i++) {
+ u32 off = le32_to_cpu(pkg->seg_offset[i]);
+ struct ice_generic_seg_hdr *seg;
+
+ /* segment header must fit */
+ if (len < off + sizeof(*seg))
+ return ICE_DDP_PKG_INVALID_FILE;
+
+ seg = (struct ice_generic_seg_hdr *)((u8 *)pkg + off);
+
+ /* segment body must fit */
+ if (len < off + le32_to_cpu(seg->seg_size))
+ return ICE_DDP_PKG_INVALID_FILE;
+ }
+
+ return ICE_DDP_PKG_SUCCESS;
+}
+
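ice_verify_pkg() bounds-checks the flexible seg_offset[] array with struct_size() from <linux/overflow.h>, which computes sizeof(*pkg) plus N trailing array elements with overflow protection. A tiny sketch of the same check on a hypothetical header layout:

#include <linux/overflow.h>
#include <linux/types.h>

struct example_hdr {
	__le32 count;
	__le32 offsets[];	/* flexible array, one entry per segment */
};

/* Hypothetical helper: reject a buffer too short to hold n offsets. */
static bool example_hdr_fits(const struct example_hdr *hdr, u32 len, u32 n)
{
	return len >= struct_size(hdr, offsets, n);
}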
+/**
+ * ice_free_seg - free package segment pointer
+ * @hw: pointer to the hardware structure
+ *
+ * Frees the package segment pointer in the proper manner, depending on if the
+ * segment was allocated or just the passed in pointer was stored.
+ */
+void ice_free_seg(struct ice_hw *hw)
+{
+ if (hw->pkg_copy) {
+ devm_kfree(ice_hw_to_dev(hw), hw->pkg_copy);
+ hw->pkg_copy = NULL;
+ hw->pkg_size = 0;
+ }
+ hw->seg = NULL;
+}
+
+/**
+ * ice_chk_pkg_version - check package version for compatibility with driver
+ * @pkg_ver: pointer to a version structure to check
+ *
+ * Check to make sure that the package about to be downloaded is compatible with
+ * the driver. To be compatible, the major and minor components of the package
+ * version must match our ICE_PKG_SUPP_VER_MAJ and ICE_PKG_SUPP_VER_MNR
+ * definitions.
+ */
+static enum ice_ddp_state ice_chk_pkg_version(struct ice_pkg_ver *pkg_ver)
+{
+ if (pkg_ver->major > ICE_PKG_SUPP_VER_MAJ ||
+ (pkg_ver->major == ICE_PKG_SUPP_VER_MAJ &&
+ pkg_ver->minor > ICE_PKG_SUPP_VER_MNR))
+ return ICE_DDP_PKG_FILE_VERSION_TOO_HIGH;
+ else if (pkg_ver->major < ICE_PKG_SUPP_VER_MAJ ||
+ (pkg_ver->major == ICE_PKG_SUPP_VER_MAJ &&
+ pkg_ver->minor < ICE_PKG_SUPP_VER_MNR))
+ return ICE_DDP_PKG_FILE_VERSION_TOO_LOW;
+
+ return ICE_DDP_PKG_SUCCESS;
+}
+
+/**
+ * ice_pkg_val_buf
+ * @buf: pointer to the ice buffer
+ *
+ * This helper function validates a buffer's header.
+ */
+struct ice_buf_hdr *ice_pkg_val_buf(struct ice_buf *buf)
+{
+ struct ice_buf_hdr *hdr;
+ u16 section_count;
+ u16 data_end;
+
+ hdr = (struct ice_buf_hdr *)buf->buf;
+ /* verify data */
+ section_count = le16_to_cpu(hdr->section_count);
+ if (section_count < ICE_MIN_S_COUNT || section_count > ICE_MAX_S_COUNT)
+ return NULL;
+
+ data_end = le16_to_cpu(hdr->data_end);
+ if (data_end < ICE_MIN_S_DATA_END || data_end > ICE_MAX_S_DATA_END)
+ return NULL;
+
+ return hdr;
+}
+
+/**
+ * ice_find_buf_table
+ * @ice_seg: pointer to the ice segment
+ *
+ * Returns the address of the buffer table within the ice segment.
+ */
+static struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg)
+{
+ struct ice_nvm_table *nvms = (struct ice_nvm_table *)
+ (ice_seg->device_table + le32_to_cpu(ice_seg->device_table_count));
+
+ return (__force struct ice_buf_table *)(nvms->vers +
+ le32_to_cpu(nvms->table_count));
+}
+
+/**
+ * ice_pkg_enum_buf
+ * @ice_seg: pointer to the ice segment (or NULL on subsequent calls)
+ * @state: pointer to the enum state
+ *
+ * This function will enumerate all the buffers in the ice segment. The first
+ * call is made with the ice_seg parameter non-NULL; on subsequent calls,
+ * ice_seg is set to NULL which continues the enumeration. When the function
+ * returns a NULL pointer, then the end of the buffers has been reached, or an
+ * unexpected value has been detected (for example an invalid section count or
+ * an invalid buffer end value).
+ */
+static struct ice_buf_hdr *ice_pkg_enum_buf(struct ice_seg *ice_seg,
+ struct ice_pkg_enum *state)
+{
+ if (ice_seg) {
+ state->buf_table = ice_find_buf_table(ice_seg);
+ if (!state->buf_table)
+ return NULL;
+
+ state->buf_idx = 0;
+ return ice_pkg_val_buf(state->buf_table->buf_array);
+ }
+
+ if (++state->buf_idx < le32_to_cpu(state->buf_table->buf_count))
+ return ice_pkg_val_buf(state->buf_table->buf_array +
+ state->buf_idx);
+ else
+ return NULL;
+}
+
+/**
+ * ice_pkg_advance_sect
+ * @ice_seg: pointer to the ice segment (or NULL on subsequent calls)
+ * @state: pointer to the enum state
+ *
+ * This helper function will advance the section within the ice segment,
+ * also advancing the buffer if needed.
+ */
+static bool ice_pkg_advance_sect(struct ice_seg *ice_seg,
+ struct ice_pkg_enum *state)
+{
+ if (!ice_seg && !state->buf)
+ return false;
+
+ if (!ice_seg && state->buf)
+ if (++state->sect_idx < le16_to_cpu(state->buf->section_count))
+ return true;
+
+ state->buf = ice_pkg_enum_buf(ice_seg, state);
+ if (!state->buf)
+ return false;
+
+ /* start of new buffer, reset section index */
+ state->sect_idx = 0;
+ return true;
+}
+
+/**
+ * ice_pkg_enum_section
+ * @ice_seg: pointer to the ice segment (or NULL on subsequent calls)
+ * @state: pointer to the enum state
+ * @sect_type: section type to enumerate
+ *
+ * This function will enumerate all the sections of a particular type in the
+ * ice segment. The first call is made with the ice_seg parameter non-NULL;
+ * on subsequent calls, ice_seg is set to NULL which continues the enumeration.
+ * When the function returns a NULL pointer, then the end of the matching
+ * sections has been reached.
+ */
+void *ice_pkg_enum_section(struct ice_seg *ice_seg, struct ice_pkg_enum *state,
+ u32 sect_type)
+{
+ u16 offset, size;
+
+ if (ice_seg)
+ state->type = sect_type;
+
+ if (!ice_pkg_advance_sect(ice_seg, state))
+ return NULL;
+
+ /* scan for next matching section */
+ while (state->buf->section_entry[state->sect_idx].type !=
+ cpu_to_le32(state->type))
+ if (!ice_pkg_advance_sect(NULL, state))
+ return NULL;
+
+ /* validate section */
+ offset = le16_to_cpu(state->buf->section_entry[state->sect_idx].offset);
+ if (offset < ICE_MIN_S_OFF || offset > ICE_MAX_S_OFF)
+ return NULL;
+
+ size = le16_to_cpu(state->buf->section_entry[state->sect_idx].size);
+ if (size < ICE_MIN_S_SZ || size > ICE_MAX_S_SZ)
+ return NULL;
+
+ /* make sure the section fits in the buffer */
+ if (offset + size > ICE_PKG_BUF_SIZE)
+ return NULL;
+
+ state->sect_type =
+ le32_to_cpu(state->buf->section_entry[state->sect_idx].type);
+
+ /* calc pointer to this section */
+ state->sect =
+ ((u8 *)state->buf) +
+ le16_to_cpu(state->buf->section_entry[state->sect_idx].offset);
+
+ return state->sect;
+}
+
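These enumeration helpers share a start/continue calling convention: the first call passes the ice segment, and every later call passes NULL to resume from the saved state, exactly as ice_get_prof_index_max() below does via ice_pkg_enum_entry(). A minimal usage sketch for section enumeration (error handling elided):

struct ice_pkg_enum state = {};
void *sect;

/* first call: a non-NULL segment primes the iterator */
sect = ice_pkg_enum_section(hw->seg, &state, ICE_SID_FLD_VEC_SW);
while (sect) {
	/* ... inspect the section ... */

	/* subsequent calls: a NULL segment continues the walk */
	sect = ice_pkg_enum_section(NULL, &state, 0);
}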
+/**
+ * ice_pkg_enum_entry
+ * @ice_seg: pointer to the ice segment (or NULL on subsequent calls)
+ * @state: pointer to the enum state
+ * @sect_type: section type to enumerate
+ * @offset: pointer to variable that receives the offset in the table (optional)
+ * @handler: function that handles access to the entries into the section type
+ *
+ * This function will enumerate all the entries in a particular section type in
+ * the ice segment. The first call is made with the ice_seg parameter non-NULL;
+ * on subsequent calls, ice_seg is set to NULL which continues the enumeration.
+ * When the function returns a NULL pointer, then the end of the entries has
+ * been reached.
+ *
+ * Since each section may have a different header and entry size, the handler
+ * function is needed to determine the number and location of entries in each
+ * section.
+ *
+ * The offset parameter is optional, but should be used for sections that
+ * contain an offset for each section table. For such cases, the section handler
+ * function must return the appropriate offset + index to give the absolute
+ * offset for each entry. For example, if the base for a section's header
+ * indicates a base offset of 10, and the index for the entry is 2, then the
+ * section handler function should set the offset to 10 + 2 = 12.
+ */
+static void *ice_pkg_enum_entry(struct ice_seg *ice_seg,
+ struct ice_pkg_enum *state, u32 sect_type,
+ u32 *offset,
+ void *(*handler)(u32 sect_type, void *section,
+ u32 index, u32 *offset))
+{
+ void *entry;
+
+ if (ice_seg) {
+ if (!handler)
+ return NULL;
+
+ if (!ice_pkg_enum_section(ice_seg, state, sect_type))
+ return NULL;
+
+ state->entry_idx = 0;
+ state->handler = handler;
+ } else {
+ state->entry_idx++;
+ }
+
+ if (!state->handler)
+ return NULL;
+
+ /* get entry */
+ entry = state->handler(state->sect_type, state->sect, state->entry_idx,
+ offset);
+ if (!entry) {
+ /* end of a section, look for another section of this type */
+ if (!ice_pkg_enum_section(NULL, state, 0))
+ return NULL;
+
+ state->entry_idx = 0;
+ entry = state->handler(state->sect_type, state->sect,
+ state->entry_idx, offset);
+ }
+
+ return entry;
+}
+
+/**
+ * ice_sw_fv_handler
+ * @sect_type: section type
+ * @section: pointer to section
+ * @index: index of the field vector entry to be returned
+ * @offset: ptr to variable that receives the offset in the field vector table
+ *
+ * This is a callback function that can be passed to ice_pkg_enum_entry.
+ * This function treats the given section as of type ice_sw_fv_section and
+ * enumerates offset field. "offset" is an index into the field vector table.
+ */
+static void *ice_sw_fv_handler(u32 sect_type, void *section, u32 index,
+ u32 *offset)
+{
+ struct ice_sw_fv_section *fv_section = section;
+
+ if (!section || sect_type != ICE_SID_FLD_VEC_SW)
+ return NULL;
+ if (index >= le16_to_cpu(fv_section->count))
+ return NULL;
+ if (offset)
+ /* "index" passed in to this function is relative to a given
+ * 4k block. To get to the true index into the field vector
+ * table, we need to add the relative index to the base_offset
+ * field of this section
+ */
+ *offset = le16_to_cpu(fv_section->base_offset) + index;
+ return fv_section->fv + index;
+}
+
+/**
+ * ice_get_prof_index_max - get the max profile index for used profile
+ * @hw: pointer to the HW struct
+ *
+ * Calling this function will get the max profile index for used profile
+ * and store the index number in struct ice_switch_info *switch_info
+ * in HW for following use.
+ */
+static int ice_get_prof_index_max(struct ice_hw *hw)
+{
+ u16 prof_index = 0, j, max_prof_index = 0;
+ struct ice_pkg_enum state;
+ struct ice_seg *ice_seg;
+ bool flag = false;
+ struct ice_fv *fv;
+ u32 offset;
+
+ memset(&state, 0, sizeof(state));
+
+ if (!hw->seg)
+ return -EINVAL;
+
+ ice_seg = hw->seg;
+
+ do {
+ fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW,
+ &offset, ice_sw_fv_handler);
+ if (!fv)
+ break;
+ ice_seg = NULL;
+
+ /* in a profile that is not used, the prot_id is set to 0xff
+ * and the off is set to 0x1ff for all the field vectors.
+ */
+ for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++)
+ if (fv->ew[j].prot_id != ICE_PROT_INVALID ||
+ fv->ew[j].off != ICE_FV_OFFSET_INVAL)
+ flag = true;
+ if (flag && prof_index > max_prof_index)
+ max_prof_index = prof_index;
+
+ prof_index++;
+ flag = false;
+ } while (fv);
+
+ hw->switch_info->max_used_prof_index = max_prof_index;
+
+ return 0;
+}
+
+/**
+ * ice_get_ddp_pkg_state - get DDP pkg state after download
+ * @hw: pointer to the HW struct
+ * @already_loaded: indicates if pkg was already loaded onto the device
+ */
+static enum ice_ddp_state ice_get_ddp_pkg_state(struct ice_hw *hw,
+ bool already_loaded)
+{
+ if (hw->pkg_ver.major == hw->active_pkg_ver.major &&
+ hw->pkg_ver.minor == hw->active_pkg_ver.minor &&
+ hw->pkg_ver.update == hw->active_pkg_ver.update &&
+ hw->pkg_ver.draft == hw->active_pkg_ver.draft &&
+ !memcmp(hw->pkg_name, hw->active_pkg_name, sizeof(hw->pkg_name))) {
+ if (already_loaded)
+ return ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED;
+ else
+ return ICE_DDP_PKG_SUCCESS;
+ } else if (hw->active_pkg_ver.major != ICE_PKG_SUPP_VER_MAJ ||
+ hw->active_pkg_ver.minor != ICE_PKG_SUPP_VER_MNR) {
+ return ICE_DDP_PKG_ALREADY_LOADED_NOT_SUPPORTED;
+ } else if (hw->active_pkg_ver.major == ICE_PKG_SUPP_VER_MAJ &&
+ hw->active_pkg_ver.minor == ICE_PKG_SUPP_VER_MNR) {
+ return ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED;
+ } else {
+ return ICE_DDP_PKG_ERR;
+ }
+}
+
+/**
+ * ice_init_pkg_regs - initialize additional package registers
+ * @hw: pointer to the hardware structure
+ */
+static void ice_init_pkg_regs(struct ice_hw *hw)
+{
+#define ICE_SW_BLK_INP_MASK_L 0xFFFFFFFF
+#define ICE_SW_BLK_INP_MASK_H 0x0000FFFF
+#define ICE_SW_BLK_IDX 0
+
+ /* setup Switch block input mask, which is 48-bits in two parts */
+ wr32(hw, GL_PREEXT_L2_PMASK0(ICE_SW_BLK_IDX), ICE_SW_BLK_INP_MASK_L);
+ wr32(hw, GL_PREEXT_L2_PMASK1(ICE_SW_BLK_IDX), ICE_SW_BLK_INP_MASK_H);
+}
+
+/**
+ * ice_marker_ptype_tcam_handler
+ * @sect_type: section type
+ * @section: pointer to section
+ * @index: index of the Marker PType TCAM entry to be returned
+ * @offset: pointer to receive absolute offset, always 0 for ptype TCAM sections
+ *
+ * This is a callback function that can be passed to ice_pkg_enum_entry.
+ * Handles enumeration of individual Marker PType TCAM entries.
+ */
+static void *ice_marker_ptype_tcam_handler(u32 sect_type, void *section,
+ u32 index, u32 *offset)
+{
+ struct ice_marker_ptype_tcam_section *marker_ptype;
+
+ if (sect_type != ICE_SID_RXPARSER_MARKER_PTYPE)
+ return NULL;
+
+ if (index > ICE_MAX_MARKER_PTYPE_TCAMS_IN_BUF)
+ return NULL;
+
+ if (offset)
+ *offset = 0;
+
+ marker_ptype = section;
+ if (index >= le16_to_cpu(marker_ptype->count))
+ return NULL;
+
+ return marker_ptype->tcam + index;
+}
+
+/**
+ * ice_add_dvm_hint
+ * @hw: pointer to the HW structure
+ * @val: value of the boost entry
+ * @enable: true if entry needs to be enabled, or false if needs to be disabled
+ */
+static void ice_add_dvm_hint(struct ice_hw *hw, u16 val, bool enable)
+{
+ if (hw->dvm_upd.count < ICE_DVM_MAX_ENTRIES) {
+ hw->dvm_upd.tbl[hw->dvm_upd.count].boost_addr = val;
+ hw->dvm_upd.tbl[hw->dvm_upd.count].enable = enable;
+ hw->dvm_upd.count++;
+ }
+}
+
+/**
+ * ice_add_tunnel_hint
+ * @hw: pointer to the HW structure
+ * @label_name: label text
+ * @val: value of the tunnel port boost entry
+ */
+static void ice_add_tunnel_hint(struct ice_hw *hw, char *label_name, u16 val)
+{
+ if (hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) {
+ u16 i;
+
+ for (i = 0; tnls[i].type != TNL_LAST; i++) {
+ size_t len = strlen(tnls[i].label_prefix);
+
+ /* Look for matching label start, before continuing */
+ if (strncmp(label_name, tnls[i].label_prefix, len))
+ continue;
+
+ /* Make sure this label matches our PF. Note that the PF
+ * character ('0' - '7') will be located where our
+ * prefix string's null terminator is located.
+ */
+ if ((label_name[len] - '0') == hw->pf_id) {
+ hw->tnl.tbl[hw->tnl.count].type = tnls[i].type;
+ hw->tnl.tbl[hw->tnl.count].valid = false;
+ hw->tnl.tbl[hw->tnl.count].boost_addr = val;
+ hw->tnl.tbl[hw->tnl.count].port = 0;
+ hw->tnl.count++;
+ break;
+ }
+ }
+ }
+}
+
+/**
+ * ice_label_enum_handler
+ * @sect_type: section type
+ * @section: pointer to section
+ * @index: index of the label entry to be returned
+ * @offset: pointer to receive absolute offset, always zero for label sections
+ *
+ * This is a callback function that can be passed to ice_pkg_enum_entry.
+ * Handles enumeration of individual label entries.
+ */
+static void *ice_label_enum_handler(u32 __always_unused sect_type,
+ void *section, u32 index, u32 *offset)
+{
+ struct ice_label_section *labels;
+
+ if (!section)
+ return NULL;
+
+ if (index > ICE_MAX_LABELS_IN_BUF)
+ return NULL;
+
+ if (offset)
+ *offset = 0;
+
+ labels = section;
+ if (index >= le16_to_cpu(labels->count))
+ return NULL;
+
+ return labels->label + index;
+}
+
+/**
+ * ice_enum_labels
+ * @ice_seg: pointer to the ice segment (NULL on subsequent calls)
+ * @type: the section type that will contain the label (0 on subsequent calls)
+ * @state: ice_pkg_enum structure that will hold the state of the enumeration
+ * @value: pointer to a value that will return the label's value if found
+ *
+ * Enumerates a list of labels in the package. The caller will call
+ * ice_enum_labels(ice_seg, type, ...) to start the enumeration, then call
+ * ice_enum_labels(NULL, 0, ...) to continue. When the function returns NULL,
+ * the end of the list has been reached.
+ */
+static char *ice_enum_labels(struct ice_seg *ice_seg, u32 type,
+ struct ice_pkg_enum *state, u16 *value)
+{
+ struct ice_label *label;
+
+ /* Check for valid label section on first call */
+ if (type && !(type >= ICE_SID_LBL_FIRST && type <= ICE_SID_LBL_LAST))
+ return NULL;
+
+ label = ice_pkg_enum_entry(ice_seg, state, type, NULL,
+ ice_label_enum_handler);
+ if (!label)
+ return NULL;
+
+ *value = le16_to_cpu(label->value);
+ return label->name;
+}
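+
+/* A minimal sketch of the start/continue pattern described above, mirroring
+ * how ice_init_pkg_hints() below consumes labels:
+ *
+ * struct ice_pkg_enum state = {};
+ * char *name;
+ * u16 val;
+ *
+ * name = ice_enum_labels(ice_seg, ICE_SID_LBL_RXPARSER_TMEM, &state, &val);
+ * while (name) {
+ * (use name and val here)
+ * name = ice_enum_labels(NULL, 0, &state, &val);
+ * }
+ */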
+
+/**
+ * ice_boost_tcam_handler
+ * @sect_type: section type
+ * @section: pointer to section
+ * @index: index of the boost TCAM entry to be returned
+ * @offset: pointer to receive absolute offset, always 0 for boost TCAM sections
+ *
+ * This is a callback function that can be passed to ice_pkg_enum_entry.
+ * Handles enumeration of individual boost TCAM entries.
+ */
+static void *ice_boost_tcam_handler(u32 sect_type, void *section, u32 index,
+ u32 *offset)
+{
+ struct ice_boost_tcam_section *boost;
+
+ if (!section)
+ return NULL;
+
+ if (sect_type != ICE_SID_RXPARSER_BOOST_TCAM)
+ return NULL;
+
+ if (index > ICE_MAX_BST_TCAMS_IN_BUF)
+ return NULL;
+
+ if (offset)
+ *offset = 0;
+
+ boost = section;
+ if (index >= le16_to_cpu(boost->count))
+ return NULL;
+
+ return boost->tcam + index;
+}
+
+/**
+ * ice_find_boost_entry
+ * @ice_seg: pointer to the ice segment (non-NULL)
+ * @addr: Boost TCAM address of entry to search for
+ * @entry: returns pointer to the entry
+ *
+ * Finds a particular Boost TCAM entry and returns a pointer to that entry
+ * if it is found. The ice_seg parameter must not be NULL since the first call
+ * to ice_pkg_enum_entry requires a pointer to an actual ice_segment structure.
+ */
+static int ice_find_boost_entry(struct ice_seg *ice_seg, u16 addr,
+ struct ice_boost_tcam_entry **entry)
+{
+ struct ice_boost_tcam_entry *tcam;
+ struct ice_pkg_enum state;
+
+ memset(&state, 0, sizeof(state));
+
+ if (!ice_seg)
+ return -EINVAL;
+
+ do {
+ tcam = ice_pkg_enum_entry(ice_seg, &state,
+ ICE_SID_RXPARSER_BOOST_TCAM, NULL,
+ ice_boost_tcam_handler);
+ if (tcam && le16_to_cpu(tcam->addr) == addr) {
+ *entry = tcam;
+ return 0;
+ }
+
+ ice_seg = NULL;
+ } while (tcam);
+
+ *entry = NULL;
+ return -EIO;
+}
+
+/**
+ * ice_is_init_pkg_successful - check if DDP init was successful
+ * @state: state of the DDP pkg after download
+ */
+bool ice_is_init_pkg_successful(enum ice_ddp_state state)
+{
+ switch (state) {
+ case ICE_DDP_PKG_SUCCESS:
+ case ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED:
+ case ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED:
+ return true;
+ default:
+ return false;
+ }
+}
+
+/**
+ * ice_pkg_buf_alloc
+ * @hw: pointer to the HW structure
+ *
+ * Allocates a package buffer and returns a pointer to the buffer header.
+ * Note: all package contents must be in Little Endian form.
+ */
+struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw)
+{
+ struct ice_buf_build *bld;
+ struct ice_buf_hdr *buf;
+
+ bld = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*bld), GFP_KERNEL);
+ if (!bld)
+ return NULL;
+
+ buf = (struct ice_buf_hdr *)bld;
+ buf->data_end =
+ cpu_to_le16(offsetof(struct ice_buf_hdr, section_entry));
+ return bld;
+}
+
+static bool ice_is_gtp_u_profile(u16 prof_idx)
+{
+ return (prof_idx >= ICE_PROFID_IPV6_GTPU_TEID &&
+ prof_idx <= ICE_PROFID_IPV6_GTPU_IPV6_TCP_INNER) ||
+ prof_idx == ICE_PROFID_IPV4_GTPU_TEID;
+}
+
+static bool ice_is_gtp_c_profile(u16 prof_idx)
+{
+ switch (prof_idx) {
+ case ICE_PROFID_IPV4_GTPC_TEID:
+ case ICE_PROFID_IPV4_GTPC_NO_TEID:
+ case ICE_PROFID_IPV6_GTPC_TEID:
+ case ICE_PROFID_IPV6_GTPC_NO_TEID:
+ return true;
+ default:
+ return false;
+ }
+}
+
+/**
+ * ice_get_sw_prof_type - determine switch profile type
+ * @hw: pointer to the HW structure
+ * @fv: pointer to the switch field vector
+ * @prof_idx: profile index to check
+ */
+static enum ice_prof_type ice_get_sw_prof_type(struct ice_hw *hw,
+ struct ice_fv *fv, u32 prof_idx)
+{
+ u16 i;
+
+ if (ice_is_gtp_c_profile(prof_idx))
+ return ICE_PROF_TUN_GTPC;
+
+ if (ice_is_gtp_u_profile(prof_idx))
+ return ICE_PROF_TUN_GTPU;
+
+ for (i = 0; i < hw->blk[ICE_BLK_SW].es.fvw; i++) {
+ /* UDP tunnel will have UDP_OF protocol ID and VNI offset */
+ if (fv->ew[i].prot_id == (u8)ICE_PROT_UDP_OF &&
+ fv->ew[i].off == ICE_VNI_OFFSET)
+ return ICE_PROF_TUN_UDP;
+
+ /* GRE tunnel will have GRE protocol */
+ if (fv->ew[i].prot_id == (u8)ICE_PROT_GRE_OF)
+ return ICE_PROF_TUN_GRE;
+ }
+
+ return ICE_PROF_NON_TUN;
+}
+
+/**
+ * ice_get_sw_fv_bitmap - Get switch field vector bitmap based on profile type
+ * @hw: pointer to hardware structure
+ * @req_profs: type of profiles requested
+ * @bm: pointer to memory for returning the bitmap of field vectors
+ */
+void ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs,
+ unsigned long *bm)
+{
+ struct ice_pkg_enum state;
+ struct ice_seg *ice_seg;
+ struct ice_fv *fv;
+
+ if (req_profs == ICE_PROF_ALL) {
+ bitmap_set(bm, 0, ICE_MAX_NUM_PROFILES);
+ return;
+ }
+
+ memset(&state, 0, sizeof(state));
+ bitmap_zero(bm, ICE_MAX_NUM_PROFILES);
+ ice_seg = hw->seg;
+ do {
+ enum ice_prof_type prof_type;
+ u32 offset;
+
+ fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW,
+ &offset, ice_sw_fv_handler);
+ ice_seg = NULL;
+
+ if (fv) {
+ /* Determine field vector type */
+ prof_type = ice_get_sw_prof_type(hw, fv, offset);
+
+ if (req_profs & prof_type)
+ set_bit((u16)offset, bm);
+ }
+ } while (fv);
+}
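+
+/* A caller typically pairs this with ice_get_sw_fv_list(), e.g. (sketch):
+ *
+ * DECLARE_BITMAP(fv_bitmap, ICE_MAX_NUM_PROFILES);
+ *
+ * ice_get_sw_fv_bitmap(hw, ICE_PROF_TUN_UDP, fv_bitmap);
+ * (then hand fv_bitmap to ice_get_sw_fv_list() as its @bm argument)
+ */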
+
+/**
+ * ice_get_sw_fv_list
+ * @hw: pointer to the HW structure
+ * @lkups: list of protocol types
+ * @bm: bitmap of field vectors to consider
+ * @fv_list: Head of a list
+ *
+ * Finds all the field vector entries from the switch block that contain
+ * a given protocol ID and offset and returns a list of structures of type
+ * "ice_sw_fv_list_entry". Every structure in the list has a field vector
+ * definition and profile ID information.
+ * NOTE: The caller of the function is responsible for freeing the memory
+ * allocated for every list entry.
+ */
+int ice_get_sw_fv_list(struct ice_hw *hw, struct ice_prot_lkup_ext *lkups,
+ unsigned long *bm, struct list_head *fv_list)
+{
+ struct ice_sw_fv_list_entry *fvl;
+ struct ice_sw_fv_list_entry *tmp;
+ struct ice_pkg_enum state;
+ struct ice_seg *ice_seg;
+ struct ice_fv *fv;
+ u32 offset;
+
+ memset(&state, 0, sizeof(state));
+
+ if (!lkups->n_val_words || !hw->seg)
+ return -EINVAL;
+
+ ice_seg = hw->seg;
+ do {
+ u16 i;
+
+ fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW,
+ &offset, ice_sw_fv_handler);
+ if (!fv)
+ break;
+ ice_seg = NULL;
+
+ /* If field vector is not in the bitmap list, then skip this
+ * profile.
+ */
+ if (!test_bit((u16)offset, bm))
+ continue;
+
+ for (i = 0; i < lkups->n_val_words; i++) {
+ int j;
+
+ for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++)
+ if (fv->ew[j].prot_id ==
+ lkups->fv_words[i].prot_id &&
+ fv->ew[j].off == lkups->fv_words[i].off)
+ break;
+ if (j >= hw->blk[ICE_BLK_SW].es.fvw)
+ break;
+ if (i + 1 == lkups->n_val_words) {
+ fvl = devm_kzalloc(ice_hw_to_dev(hw),
+ sizeof(*fvl), GFP_KERNEL);
+ if (!fvl)
+ goto err;
+ fvl->fv_ptr = fv;
+ fvl->profile_id = offset;
+ list_add(&fvl->list_entry, fv_list);
+ break;
+ }
+ }
+ } while (fv);
+ if (list_empty(fv_list)) {
+ dev_warn(ice_hw_to_dev(hw),
+ "Required profiles not found in currently loaded DDP package");
+ return -EIO;
+ }
+
+ return 0;
+
+err:
+ list_for_each_entry_safe(fvl, tmp, fv_list, list_entry) {
+ list_del(&fvl->list_entry);
+ devm_kfree(ice_hw_to_dev(hw), fvl);
+ }
+
+ return -ENOMEM;
+}
+
+/**
+ * ice_init_prof_result_bm - Initialize the profile result index bitmap
+ * @hw: pointer to hardware structure
+ */
+void ice_init_prof_result_bm(struct ice_hw *hw)
+{
+ struct ice_pkg_enum state;
+ struct ice_seg *ice_seg;
+ struct ice_fv *fv;
+
+ memset(&state, 0, sizeof(state));
+
+ if (!hw->seg)
+ return;
+
+ ice_seg = hw->seg;
+ do {
+ u32 off;
+ u16 i;
+
+ fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW,
+ &off, ice_sw_fv_handler);
+ ice_seg = NULL;
+ if (!fv)
+ break;
+
+ bitmap_zero(hw->switch_info->prof_res_bm[off],
+ ICE_MAX_FV_WORDS);
+
+ /* Determine empty field vector indices; these can be
+ * used for recipe results. Skip index 0, since it is
+ * always used for Switch ID.
+ */
+ for (i = 1; i < ICE_MAX_FV_WORDS; i++)
+ if (fv->ew[i].prot_id == ICE_PROT_INVALID &&
+ fv->ew[i].off == ICE_FV_OFFSET_INVAL)
+ set_bit(i, hw->switch_info->prof_res_bm[off]);
+ } while (fv);
+}
+
+/**
+ * ice_pkg_buf_free
+ * @hw: pointer to the HW structure
+ * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
+ *
+ * Frees a package buffer
+ */
+void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld)
+{
+ devm_kfree(ice_hw_to_dev(hw), bld);
+}
+
+/**
+ * ice_pkg_buf_reserve_section
+ * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
+ * @count: the number of sections to reserve
+ *
+ * Reserves one or more section table entries in a package buffer. This routine
+ * can be called multiple times as long as all calls are made before
+ * ice_pkg_buf_alloc_section() is called. Once ice_pkg_buf_alloc_section() has
+ * been called, the number of allocatable sections can no longer be increased;
+ * not using all reserved sections is fine, but this will result in some
+ * wasted space in the buffer.
+ * Note: all package contents must be in Little Endian form.
+ */
+int ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count)
+{
+ struct ice_buf_hdr *buf;
+ u16 section_count;
+ u16 data_end;
+
+ if (!bld)
+ return -EINVAL;
+
+ buf = (struct ice_buf_hdr *)&bld->buf;
+
+ /* already an active section, can't increase table size */
+ section_count = le16_to_cpu(buf->section_count);
+ if (section_count > 0)
+ return -EIO;
+
+ if (bld->reserved_section_table_entries + count > ICE_MAX_S_COUNT)
+ return -EIO;
+ bld->reserved_section_table_entries += count;
+
+ data_end = le16_to_cpu(buf->data_end) +
+ flex_array_size(buf, section_entry, count);
+ buf->data_end = cpu_to_le16(data_end);
+
+ return 0;
+}
+
+/**
+ * ice_pkg_buf_alloc_section
+ * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
+ * @type: the section type value
+ * @size: the size of the section to reserve (in bytes)
+ *
+ * Reserves memory in the buffer for a section's content and updates the
+ * buffer's status accordingly. This routine returns a pointer to the first
+ * byte of the section start within the buffer, which is used to fill in the
+ * section contents.
+ * Note: all package contents must be in Little Endian form.
+ */
+void *ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size)
+{
+ struct ice_buf_hdr *buf;
+ u16 sect_count;
+ u16 data_end;
+
+ if (!bld || !type || !size)
+ return NULL;
+
+ buf = (struct ice_buf_hdr *)&bld->buf;
+
+ /* check for enough space left in buffer */
+ data_end = le16_to_cpu(buf->data_end);
+
+ /* section start must align on 4 byte boundary */
+ data_end = ALIGN(data_end, 4);
+
+ if ((data_end + size) > ICE_MAX_S_DATA_END)
+ return NULL;
+
+ /* check for more available section table entries */
+ sect_count = le16_to_cpu(buf->section_count);
+ if (sect_count < bld->reserved_section_table_entries) {
+ void *section_ptr = ((u8 *)buf) + data_end;
+
+ buf->section_entry[sect_count].offset = cpu_to_le16(data_end);
+ buf->section_entry[sect_count].size = cpu_to_le16(size);
+ buf->section_entry[sect_count].type = cpu_to_le32(type);
+
+ data_end += size;
+ buf->data_end = cpu_to_le16(data_end);
+
+ buf->section_count = cpu_to_le16(sect_count + 1);
+ return section_ptr;
+ }
+
+ /* no free section table entries */
+ return NULL;
+}
+
+/**
+ * ice_pkg_buf_alloc_single_section
+ * @hw: pointer to the HW structure
+ * @type: the section type value
+ * @size: the size of the section to reserve (in bytes)
+ * @section: returns pointer to the section
+ *
+ * Allocates a package buffer with a single section.
+ * Note: all package contents must be in Little Endian form.
+ */
+struct ice_buf_build *ice_pkg_buf_alloc_single_section(struct ice_hw *hw,
+ u32 type, u16 size,
+ void **section)
+{
+ struct ice_buf_build *buf;
+
+ if (!section)
+ return NULL;
+
+ buf = ice_pkg_buf_alloc(hw);
+ if (!buf)
+ return NULL;
+
+ if (ice_pkg_buf_reserve_section(buf, 1))
+ goto ice_pkg_buf_alloc_single_section_err;
+
+ *section = ice_pkg_buf_alloc_section(buf, type, size);
+ if (!*section)
+ goto ice_pkg_buf_alloc_single_section_err;
+
+ return buf;
+
+ice_pkg_buf_alloc_single_section_err:
+ ice_pkg_buf_free(hw, buf);
+ return NULL;
+}
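+
+/* Taken together, the buffer-build helpers support a round trip like the
+ * following sketch (@type and @size depend on the section being written):
+ *
+ * struct ice_buf_build *bld;
+ * void *sect;
+ *
+ * bld = ice_pkg_buf_alloc_single_section(hw, type, size, &sect);
+ * if (bld) {
+ * (fill *sect with the little-endian section payload)
+ * ice_update_pkg(hw, ice_pkg_buf(bld), 1);
+ * ice_pkg_buf_free(hw, bld);
+ * }
+ */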
+
+/**
+ * ice_pkg_buf_get_active_sections
+ * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
+ *
+ * Returns the number of active sections. Before using the package buffer
+ * in an update package command, the caller should make sure that there is at
+ * least one active section - otherwise, the buffer is not legal and should
+ * not be used.
+ * Note: all package contents must be in Little Endian form.
+ */
+u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld)
+{
+ struct ice_buf_hdr *buf;
+
+ if (!bld)
+ return 0;
+
+ buf = (struct ice_buf_hdr *)&bld->buf;
+ return le16_to_cpu(buf->section_count);
+}
+
+/**
+ * ice_pkg_buf
+ * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
+ *
+ * Return a pointer to the buffer's header
+ */
+struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
+{
+ if (!bld)
+ return NULL;
+
+ return &bld->buf;
+}
+
+static enum ice_ddp_state ice_map_aq_err_to_ddp_state(enum ice_aq_err aq_err)
+{
+ switch (aq_err) {
+ case ICE_AQ_RC_ENOSEC:
+ case ICE_AQ_RC_EBADSIG:
+ return ICE_DDP_PKG_FILE_SIGNATURE_INVALID;
+ case ICE_AQ_RC_ESVN:
+ return ICE_DDP_PKG_FILE_REVISION_TOO_LOW;
+ case ICE_AQ_RC_EBADMAN:
+ case ICE_AQ_RC_EBADBUF:
+ return ICE_DDP_PKG_LOAD_ERROR;
+ default:
+ return ICE_DDP_PKG_ERR;
+ }
+}
+
+/**
+ * ice_acquire_global_cfg_lock
+ * @hw: pointer to the HW structure
+ * @access: access type (read or write)
+ *
+ * This function will request ownership of the global config lock for reading
+ * or writing of the package. When attempting to obtain write access, the
+ * caller must check for the following two return values:
+ *
+ * 0 - Means the caller has acquired the global config lock
+ * and can perform writing of the package.
+ * -EALREADY - Indicates another driver has already written the
+ * package or has found that no update was necessary; in
+ * this case, the caller can just skip performing any
+ * update of the package.
+ */
+static int ice_acquire_global_cfg_lock(struct ice_hw *hw,
+ enum ice_aq_res_access_type access)
+{
+ int status;
+
+ status = ice_acquire_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID, access,
+ ICE_GLOBAL_CFG_LOCK_TIMEOUT);
+
+ if (!status)
+ mutex_lock(&ice_global_cfg_lock_sw);
+ else if (status == -EALREADY)
+ ice_debug(hw, ICE_DBG_PKG,
+ "Global config lock: No work to do\n");
+
+ return status;
+}
+
+/**
+ * ice_release_global_cfg_lock
+ * @hw: pointer to the HW structure
+ *
+ * This function will release the global config lock.
+ */
+static void ice_release_global_cfg_lock(struct ice_hw *hw)
+{
+ mutex_unlock(&ice_global_cfg_lock_sw);
+ ice_release_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID);
+}
+
+/**
+ * ice_dwnld_cfg_bufs
+ * @hw: pointer to the hardware structure
+ * @bufs: pointer to an array of buffers
+ * @count: the number of buffers in the array
+ *
+ * Obtains the global config lock and downloads the package configuration buffers
+ * to the firmware. Metadata buffers are skipped, and the first metadata buffer
+ * found indicates that the rest of the buffers are all metadata buffers.
+ */
+static enum ice_ddp_state ice_dwnld_cfg_bufs(struct ice_hw *hw,
+ struct ice_buf *bufs, u32 count)
+{
+ enum ice_ddp_state state = ICE_DDP_PKG_SUCCESS;
+ struct ice_buf_hdr *bh;
+ enum ice_aq_err err;
+ u32 offset, info, i;
+ int status;
+
+ if (!bufs || !count)
+ return ICE_DDP_PKG_ERR;
+
+ /* If the first buffer's first section has its metadata bit set
+ * then there are no buffers to be downloaded, and the operation is
+ * considered a success.
+ */
+ bh = (struct ice_buf_hdr *)bufs;
+ if (le32_to_cpu(bh->section_entry[0].type) & ICE_METADATA_BUF)
+ return ICE_DDP_PKG_SUCCESS;
+
+ status = ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE);
+ if (status) {
+ if (status == -EALREADY)
+ return ICE_DDP_PKG_ALREADY_LOADED;
+ return ice_map_aq_err_to_ddp_state(hw->adminq.sq_last_status);
+ }
+
+ for (i = 0; i < count; i++) {
+ bool last = ((i + 1) == count);
+
+ if (!last) {
+ /* check next buffer for metadata flag */
+ bh = (struct ice_buf_hdr *)(bufs + i + 1);
+
+ /* A set metadata flag in the next buffer will signal
+ * that the current buffer will be the last buffer
+ * downloaded
+ */
+ if (le16_to_cpu(bh->section_count))
+ if (le32_to_cpu(bh->section_entry[0].type) &
+ ICE_METADATA_BUF)
+ last = true;
+ }
+
+ bh = (struct ice_buf_hdr *)(bufs + i);
+
+ status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, last,
+ &offset, &info, NULL);
+
+ /* Save AQ status from download package */
+ if (status) {
+ ice_debug(hw, ICE_DBG_PKG,
+ "Pkg download failed: err %d off %d inf %d\n",
+ status, offset, info);
+ err = hw->adminq.sq_last_status;
+ state = ice_map_aq_err_to_ddp_state(err);
+ break;
+ }
+
+ if (last)
+ break;
+ }
+
+ if (!status) {
+ status = ice_set_vlan_mode(hw);
+ if (status)
+ ice_debug(hw, ICE_DBG_PKG,
+ "Failed to set VLAN mode: err %d\n", status);
+ }
+
+ ice_release_global_cfg_lock(hw);
+
+ return state;
+}
+
+/**
+ * ice_aq_get_pkg_info_list
+ * @hw: pointer to the hardware structure
+ * @pkg_info: the buffer which will receive the information list
+ * @buf_size: the size of the pkg_info information buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get Package Info List (0x0C43)
+ */
+static int ice_aq_get_pkg_info_list(struct ice_hw *hw,
+ struct ice_aqc_get_pkg_info_resp *pkg_info,
+ u16 buf_size, struct ice_sq_cd *cd)
+{
+ struct ice_aq_desc desc;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_pkg_info_list);
+
+ return ice_aq_send_cmd(hw, &desc, pkg_info, buf_size, cd);
+}
+
+/**
+ * ice_download_pkg
+ * @hw: pointer to the hardware structure
+ * @ice_seg: pointer to the segment of the package to be downloaded
+ *
+ * Handles the download of a complete package.
+ */
+static enum ice_ddp_state ice_download_pkg(struct ice_hw *hw,
+ struct ice_seg *ice_seg)
+{
+ struct ice_buf_table *ice_buf_tbl;
+ int status;
+
+ ice_debug(hw, ICE_DBG_PKG, "Segment format version: %d.%d.%d.%d\n",
+ ice_seg->hdr.seg_format_ver.major,
+ ice_seg->hdr.seg_format_ver.minor,
+ ice_seg->hdr.seg_format_ver.update,
+ ice_seg->hdr.seg_format_ver.draft);
+
+ ice_debug(hw, ICE_DBG_PKG, "Seg: type 0x%X, size %d, name %s\n",
+ le32_to_cpu(ice_seg->hdr.seg_type),
+ le32_to_cpu(ice_seg->hdr.seg_size), ice_seg->hdr.seg_id);
+
+ ice_buf_tbl = ice_find_buf_table(ice_seg);
+
+ ice_debug(hw, ICE_DBG_PKG, "Seg buf count: %d\n",
+ le32_to_cpu(ice_buf_tbl->buf_count));
+
+ status = ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array,
+ le32_to_cpu(ice_buf_tbl->buf_count));
+
+ ice_post_pkg_dwnld_vlan_mode_cfg(hw);
+
+ return status;
+}
+
+/**
+ * ice_aq_download_pkg
+ * @hw: pointer to the hardware structure
+ * @pkg_buf: the package buffer to transfer
+ * @buf_size: the size of the package buffer
+ * @last_buf: last buffer indicator
+ * @error_offset: returns error offset
+ * @error_info: returns error information
+ * @cd: pointer to command details structure or NULL
+ *
+ * Download Package (0x0C40)
+ */
+int ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
+ u16 buf_size, bool last_buf, u32 *error_offset,
+ u32 *error_info, struct ice_sq_cd *cd)
+{
+ struct ice_aqc_download_pkg *cmd;
+ struct ice_aq_desc desc;
+ int status;
+
+ if (error_offset)
+ *error_offset = 0;
+ if (error_info)
+ *error_info = 0;
+
+ cmd = &desc.params.download_pkg;
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_download_pkg);
+ desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+
+ if (last_buf)
+ cmd->flags |= ICE_AQC_DOWNLOAD_PKG_LAST_BUF;
+
+ status = ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd);
+ if (status == -EIO) {
+ /* Read error from buffer only when the FW returned an error */
+ struct ice_aqc_download_pkg_resp *resp;
+
+ resp = (struct ice_aqc_download_pkg_resp *)pkg_buf;
+ if (error_offset)
+ *error_offset = le32_to_cpu(resp->error_offset);
+ if (error_info)
+ *error_info = le32_to_cpu(resp->error_info);
+ }
+
+ return status;
+}
+
+/**
+ * ice_aq_upload_section
+ * @hw: pointer to the hardware structure
+ * @pkg_buf: the package buffer which will receive the section
+ * @buf_size: the size of the package buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Upload Section (0x0C41)
+ */
+int ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
+ u16 buf_size, struct ice_sq_cd *cd)
+{
+ struct ice_aq_desc desc;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_upload_section);
+ desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+
+ return ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd);
+}
+
+/**
+ * ice_aq_update_pkg
+ * @hw: pointer to the hardware structure
+ * @pkg_buf: the package cmd buffer
+ * @buf_size: the size of the package cmd buffer
+ * @last_buf: last buffer indicator
+ * @error_offset: returns error offset
+ * @error_info: returns error information
+ * @cd: pointer to command details structure or NULL
+ *
+ * Update Package (0x0C42)
+ */
+static int ice_aq_update_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
+ u16 buf_size, bool last_buf, u32 *error_offset,
+ u32 *error_info, struct ice_sq_cd *cd)
+{
+ struct ice_aqc_download_pkg *cmd;
+ struct ice_aq_desc desc;
+ int status;
+
+ if (error_offset)
+ *error_offset = 0;
+ if (error_info)
+ *error_info = 0;
+
+ cmd = &desc.params.download_pkg;
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_update_pkg);
+ desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+
+ if (last_buf)
+ cmd->flags |= ICE_AQC_DOWNLOAD_PKG_LAST_BUF;
+
+ status = ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd);
+ if (status == -EIO) {
+ /* Read error from buffer only when the FW returned an error */
+ struct ice_aqc_download_pkg_resp *resp;
+
+ resp = (struct ice_aqc_download_pkg_resp *)pkg_buf;
+ if (error_offset)
+ *error_offset = le32_to_cpu(resp->error_offset);
+ if (error_info)
+ *error_info = le32_to_cpu(resp->error_info);
+ }
+
+ return status;
+}
+
+/**
+ * ice_update_pkg_no_lock
+ * @hw: pointer to the hardware structure
+ * @bufs: pointer to an array of buffers
+ * @count: the number of buffers in the array
+ */
+int ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
+{
+ int status = 0;
+ u32 i;
+
+ for (i = 0; i < count; i++) {
+ struct ice_buf_hdr *bh = (struct ice_buf_hdr *)(bufs + i);
+ bool last = ((i + 1) == count);
+ u32 offset, info;
+
+ status = ice_aq_update_pkg(hw, bh, le16_to_cpu(bh->data_end),
+ last, &offset, &info, NULL);
+
+ if (status) {
+ ice_debug(hw, ICE_DBG_PKG,
+ "Update pkg failed: err %d off %d inf %d\n",
+ status, offset, info);
+ break;
+ }
+ }
+
+ return status;
+}
+
+/**
+ * ice_update_pkg
+ * @hw: pointer to the hardware structure
+ * @bufs: pointer to an array of buffers
+ * @count: the number of buffers in the array
+ *
+ * Obtains change lock and updates package.
+ */
+int ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
+{
+ int status;
+
+ status = ice_acquire_change_lock(hw, ICE_RES_WRITE);
+ if (status)
+ return status;
+
+ status = ice_update_pkg_no_lock(hw, bufs, count);
+
+ ice_release_change_lock(hw);
+
+ return status;
+}
+
+/**
+ * ice_find_seg_in_pkg
+ * @hw: pointer to the hardware structure
+ * @seg_type: the segment type to search for (e.g., SEGMENT_TYPE_ICE)
+ * @pkg_hdr: pointer to the package header to be searched
+ *
+ * This function searches a package file for a particular segment type. On
+ * success it returns a pointer to the segment header, otherwise it will
+ * return NULL.
+ */
+struct ice_generic_seg_hdr *ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
+ struct ice_pkg_hdr *pkg_hdr)
+{
+ u32 i;
+
+ ice_debug(hw, ICE_DBG_PKG, "Package format version: %d.%d.%d.%d\n",
+ pkg_hdr->pkg_format_ver.major, pkg_hdr->pkg_format_ver.minor,
+ pkg_hdr->pkg_format_ver.update,
+ pkg_hdr->pkg_format_ver.draft);
+
+ /* Search all package segments for the requested segment type */
+ for (i = 0; i < le32_to_cpu(pkg_hdr->seg_count); i++) {
+ struct ice_generic_seg_hdr *seg;
+
+ seg = (struct ice_generic_seg_hdr *)
+ ((u8 *)pkg_hdr + le32_to_cpu(pkg_hdr->seg_offset[i]));
+
+ if (le32_to_cpu(seg->seg_type) == seg_type)
+ return seg;
+ }
+
+ return NULL;
+}
+
+/**
+ * ice_init_pkg_info
+ * @hw: pointer to the hardware structure
+ * @pkg_hdr: pointer to the driver's package hdr
+ *
+ * Saves off the package details into the HW structure.
+ */
+static enum ice_ddp_state ice_init_pkg_info(struct ice_hw *hw,
+ struct ice_pkg_hdr *pkg_hdr)
+{
+ struct ice_generic_seg_hdr *seg_hdr;
+
+ if (!pkg_hdr)
+ return ICE_DDP_PKG_ERR;
+
+ seg_hdr = ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, pkg_hdr);
+ if (seg_hdr) {
+ struct ice_meta_sect *meta;
+ struct ice_pkg_enum state;
+
+ memset(&state, 0, sizeof(state));
+
+ /* Get package information from the Metadata Section */
+ meta = ice_pkg_enum_section((struct ice_seg *)seg_hdr, &state,
+ ICE_SID_METADATA);
+ if (!meta) {
+ ice_debug(hw, ICE_DBG_INIT,
+ "Did not find ice metadata section in package\n");
+ return ICE_DDP_PKG_INVALID_FILE;
+ }
+
+ hw->pkg_ver = meta->ver;
+ memcpy(hw->pkg_name, meta->name, sizeof(meta->name));
+
+ ice_debug(hw, ICE_DBG_PKG, "Pkg: %d.%d.%d.%d, %s\n",
+ meta->ver.major, meta->ver.minor, meta->ver.update,
+ meta->ver.draft, meta->name);
+
+ hw->ice_seg_fmt_ver = seg_hdr->seg_format_ver;
+ memcpy(hw->ice_seg_id, seg_hdr->seg_id, sizeof(hw->ice_seg_id));
+
+ ice_debug(hw, ICE_DBG_PKG, "Ice Seg: %d.%d.%d.%d, %s\n",
+ seg_hdr->seg_format_ver.major,
+ seg_hdr->seg_format_ver.minor,
+ seg_hdr->seg_format_ver.update,
+ seg_hdr->seg_format_ver.draft, seg_hdr->seg_id);
+ } else {
+ ice_debug(hw, ICE_DBG_INIT,
+ "Did not find ice segment in driver package\n");
+ return ICE_DDP_PKG_INVALID_FILE;
+ }
+
+ return ICE_DDP_PKG_SUCCESS;
+}
+
+/**
+ * ice_get_pkg_info
+ * @hw: pointer to the hardware structure
+ *
+ * Store details of the package currently loaded in HW into the HW structure.
+ */
+static enum ice_ddp_state ice_get_pkg_info(struct ice_hw *hw)
+{
+ enum ice_ddp_state state = ICE_DDP_PKG_SUCCESS;
+ struct ice_aqc_get_pkg_info_resp *pkg_info;
+ u16 size;
+ u32 i;
+
+ size = struct_size(pkg_info, pkg_info, ICE_PKG_CNT);
+ pkg_info = kzalloc(size, GFP_KERNEL);
+ if (!pkg_info)
+ return ICE_DDP_PKG_ERR;
+
+ if (ice_aq_get_pkg_info_list(hw, pkg_info, size, NULL)) {
+ state = ICE_DDP_PKG_ERR;
+ goto init_pkg_free_alloc;
+ }
+
+ for (i = 0; i < le32_to_cpu(pkg_info->count); i++) {
+#define ICE_PKG_FLAG_COUNT 4
+ char flags[ICE_PKG_FLAG_COUNT + 1] = { 0 };
+ u8 place = 0;
+
+ if (pkg_info->pkg_info[i].is_active) {
+ flags[place++] = 'A';
+ hw->active_pkg_ver = pkg_info->pkg_info[i].ver;
+ hw->active_track_id =
+ le32_to_cpu(pkg_info->pkg_info[i].track_id);
+ memcpy(hw->active_pkg_name, pkg_info->pkg_info[i].name,
+ sizeof(pkg_info->pkg_info[i].name));
+ hw->active_pkg_in_nvm = pkg_info->pkg_info[i].is_in_nvm;
+ }
+ if (pkg_info->pkg_info[i].is_active_at_boot)
+ flags[place++] = 'B';
+ if (pkg_info->pkg_info[i].is_modified)
+ flags[place++] = 'M';
+ if (pkg_info->pkg_info[i].is_in_nvm)
+ flags[place++] = 'N';
+
+ ice_debug(hw, ICE_DBG_PKG, "Pkg[%d]: %d.%d.%d.%d,%s,%s\n", i,
+ pkg_info->pkg_info[i].ver.major,
+ pkg_info->pkg_info[i].ver.minor,
+ pkg_info->pkg_info[i].ver.update,
+ pkg_info->pkg_info[i].ver.draft,
+ pkg_info->pkg_info[i].name, flags);
+ }
+
+init_pkg_free_alloc:
+ kfree(pkg_info);
+
+ return state;
+}
+
+/**
+ * ice_chk_pkg_compat
+ * @hw: pointer to the hardware structure
+ * @ospkg: pointer to the package hdr
+ * @seg: pointer to the package segment hdr
+ *
+ * This function checks the package version for compatibility with the driver and NVM.
+ */
+static enum ice_ddp_state ice_chk_pkg_compat(struct ice_hw *hw,
+ struct ice_pkg_hdr *ospkg,
+ struct ice_seg **seg)
+{
+ struct ice_aqc_get_pkg_info_resp *pkg;
+ enum ice_ddp_state state;
+ u16 size;
+ u32 i;
+
+ /* Check package version compatibility */
+ state = ice_chk_pkg_version(&hw->pkg_ver);
+ if (state) {
+ ice_debug(hw, ICE_DBG_INIT, "Package version check failed.\n");
+ return state;
+ }
+
+ /* find ICE segment in given package */
+ *seg = (struct ice_seg *)ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE,
+ ospkg);
+ if (!*seg) {
+ ice_debug(hw, ICE_DBG_INIT, "no ice segment in package.\n");
+ return ICE_DDP_PKG_INVALID_FILE;
+ }
+
+ /* Check if FW is compatible with the OS package */
+ size = struct_size(pkg, pkg_info, ICE_PKG_CNT);
+ pkg = kzalloc(size, GFP_KERNEL);
+ if (!pkg)
+ return ICE_DDP_PKG_ERR;
+
+ if (ice_aq_get_pkg_info_list(hw, pkg, size, NULL)) {
+ state = ICE_DDP_PKG_LOAD_ERROR;
+ goto fw_ddp_compat_free_alloc;
+ }
+
+ for (i = 0; i < le32_to_cpu(pkg->count); i++) {
+ /* loop until we find the NVM package */
+ if (!pkg->pkg_info[i].is_in_nvm)
+ continue;
+ if ((*seg)->hdr.seg_format_ver.major !=
+ pkg->pkg_info[i].ver.major ||
+ (*seg)->hdr.seg_format_ver.minor >
+ pkg->pkg_info[i].ver.minor) {
+ state = ICE_DDP_PKG_FW_MISMATCH;
+ ice_debug(hw, ICE_DBG_INIT,
+ "OS package is not compatible with NVM.\n");
+ }
+ /* done processing NVM package so break */
+ break;
+ }
+fw_ddp_compat_free_alloc:
+ kfree(pkg);
+ return state;
+}
+
+/**
+ * ice_init_pkg_hints
+ * @hw: pointer to the HW structure
+ * @ice_seg: pointer to the segment of the package to scan (non-NULL)
+ *
+ * This function will scan the package and save off relevant information
+ * (hints or metadata) for driver use. The ice_seg parameter must not be NULL
+ * since the first call to ice_enum_labels requires a pointer to an actual
+ * ice_seg structure.
+ */
+static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg)
+{
+ struct ice_pkg_enum state;
+ char *label_name;
+ u16 val;
+ int i;
+
+ memset(&hw->tnl, 0, sizeof(hw->tnl));
+ memset(&state, 0, sizeof(state));
+
+ if (!ice_seg)
+ return;
+
+ label_name = ice_enum_labels(ice_seg, ICE_SID_LBL_RXPARSER_TMEM, &state,
+ &val);
+
+ while (label_name) {
+ if (!strncmp(label_name, ICE_TNL_PRE, strlen(ICE_TNL_PRE)))
+ /* check for a tunnel entry */
+ ice_add_tunnel_hint(hw, label_name, val);
+
+ /* check for a dvm mode entry */
+ else if (!strncmp(label_name, ICE_DVM_PRE, strlen(ICE_DVM_PRE)))
+ ice_add_dvm_hint(hw, val, true);
+
+ /* check for a svm mode entry */
+ else if (!strncmp(label_name, ICE_SVM_PRE, strlen(ICE_SVM_PRE)))
+ ice_add_dvm_hint(hw, val, false);
+
+ label_name = ice_enum_labels(NULL, 0, &state, &val);
+ }
+
+ /* Cache the appropriate boost TCAM entry pointers for tunnels */
+ for (i = 0; i < hw->tnl.count; i++) {
+ ice_find_boost_entry(ice_seg, hw->tnl.tbl[i].boost_addr,
+ &hw->tnl.tbl[i].boost_entry);
+ if (hw->tnl.tbl[i].boost_entry) {
+ hw->tnl.tbl[i].valid = true;
+ if (hw->tnl.tbl[i].type < __TNL_TYPE_CNT)
+ hw->tnl.valid_count[hw->tnl.tbl[i].type]++;
+ }
+ }
+
+ /* Cache the appropriate boost TCAM entry pointers for DVM and SVM */
+ for (i = 0; i < hw->dvm_upd.count; i++)
+ ice_find_boost_entry(ice_seg, hw->dvm_upd.tbl[i].boost_addr,
+ &hw->dvm_upd.tbl[i].boost_entry);
+}
+
+/**
+ * ice_fill_hw_ptype - fill the enabled PTYPE bit information
+ * @hw: pointer to the HW structure
+ */
+static void ice_fill_hw_ptype(struct ice_hw *hw)
+{
+ struct ice_marker_ptype_tcam_entry *tcam;
+ struct ice_seg *seg = hw->seg;
+ struct ice_pkg_enum state;
+
+ bitmap_zero(hw->hw_ptype, ICE_FLOW_PTYPE_MAX);
+ if (!seg)
+ return;
+
+ memset(&state, 0, sizeof(state));
+
+ do {
+ tcam = ice_pkg_enum_entry(seg, &state,
+ ICE_SID_RXPARSER_MARKER_PTYPE, NULL,
+ ice_marker_ptype_tcam_handler);
+ if (tcam &&
+ le16_to_cpu(tcam->addr) < ICE_MARKER_PTYPE_TCAM_ADDR_MAX &&
+ le16_to_cpu(tcam->ptype) < ICE_FLOW_PTYPE_MAX)
+ set_bit(le16_to_cpu(tcam->ptype), hw->hw_ptype);
+
+ seg = NULL;
+ } while (tcam);
+}
+
+/**
+ * ice_init_pkg - initialize/download package
+ * @hw: pointer to the hardware structure
+ * @buf: pointer to the package buffer
+ * @len: size of the package buffer
+ *
+ * This function initializes a package. The package contains HW tables
+ * required to do packet processing. First, the function extracts package
+ * information such as version. Then it finds the ice configuration segment
+ * within the package; this function then saves a copy of the segment pointer
+ * within the supplied package buffer. Next, the function will cache any hints
+ * from the package, followed by downloading the package itself. Note that if
+ * a previous PF driver has already downloaded the package successfully, then
+ * the current driver will not have to download the package again.
+ *
+ * The local package contents will be used to query default behavior and to
+ * update specific sections of the HW's version of the package (e.g. to update
+ * the parse graph to understand new protocols).
+ *
+ * This function stores a pointer to the package buffer memory, and it is
+ * expected that the supplied buffer will not be freed immediately. If the
+ * package buffer needs to be freed, such as when read from a file, use
+ * ice_copy_and_init_pkg() instead of directly calling ice_init_pkg() in this
+ * case.
+ */
+enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
+{
+ bool already_loaded = false;
+ enum ice_ddp_state state;
+ struct ice_pkg_hdr *pkg;
+ struct ice_seg *seg;
+
+ if (!buf || !len)
+ return ICE_DDP_PKG_ERR;
+
+ pkg = (struct ice_pkg_hdr *)buf;
+ state = ice_verify_pkg(pkg, len);
+ if (state) {
+ ice_debug(hw, ICE_DBG_INIT, "failed to verify pkg (err: %d)\n",
+ state);
+ return state;
+ }
+
+ /* initialize package info */
+ state = ice_init_pkg_info(hw, pkg);
+ if (state)
+ return state;
+
+ /* before downloading the package, check package version for
+ * compatibility with driver
+ */
+ state = ice_chk_pkg_compat(hw, pkg, &seg);
+ if (state)
+ return state;
+
+ /* initialize package hints and then download package */
+ ice_init_pkg_hints(hw, seg);
+ state = ice_download_pkg(hw, seg);
+ if (state == ICE_DDP_PKG_ALREADY_LOADED) {
+ ice_debug(hw, ICE_DBG_INIT,
+ "package previously loaded - no work.\n");
+ already_loaded = true;
+ }
+
+ /* Get information on the package currently loaded in HW, then make sure
+ * the driver is compatible with this version.
+ */
+ if (!state || state == ICE_DDP_PKG_ALREADY_LOADED) {
+ state = ice_get_pkg_info(hw);
+ if (!state)
+ state = ice_get_ddp_pkg_state(hw, already_loaded);
+ }
+
+ if (ice_is_init_pkg_successful(state)) {
+ hw->seg = seg;
+ /* on successful package download update other required
+ * registers to support the package and fill HW tables
+ * with package content.
+ */
+ ice_init_pkg_regs(hw);
+ ice_fill_blk_tbls(hw);
+ ice_fill_hw_ptype(hw);
+ ice_get_prof_index_max(hw);
+ } else {
+ ice_debug(hw, ICE_DBG_INIT, "package load failed, %d\n", state);
+ }
+
+ return state;
+}
+
+/**
+ * ice_copy_and_init_pkg - initialize/download a copy of the package
+ * @hw: pointer to the hardware structure
+ * @buf: pointer to the package buffer
+ * @len: size of the package buffer
+ *
+ * This function copies the package buffer, and then calls ice_init_pkg() to
+ * initialize the copied package contents.
+ *
+ * The copying is necessary if the package buffer supplied is constant, or if
+ * the memory may disappear shortly after calling this function, such as a
+ * buffer read from a file.
+ *
+ * If the package buffer resides in the data segment and can be modified, the
+ * caller is free to use ice_init_pkg() instead of ice_copy_and_init_pkg().
+ *
+ * The caller is free to immediately destroy the original package buffer, as
+ * the new copy is managed by this function and related routines.
+ */
+enum ice_ddp_state ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf,
+ u32 len)
+{
+ enum ice_ddp_state state;
+ u8 *buf_copy;
+
+ if (!buf || !len)
+ return ICE_DDP_PKG_ERR;
+
+ buf_copy = devm_kmemdup(ice_hw_to_dev(hw), buf, len, GFP_KERNEL);
+
+ state = ice_init_pkg(hw, buf_copy, len);
+ if (!ice_is_init_pkg_successful(state)) {
+ /* Free the copy, since we failed to initialize the package */
+ devm_kfree(ice_hw_to_dev(hw), buf_copy);
+ } else {
+ /* Track the copied pkg so we can free it later */
+ hw->pkg_copy = buf_copy;
+ hw->pkg_size = len;
+ }
+
+ return state;
+}
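+
+/* The expected consumer of this entry point is the firmware-loading path in
+ * the PF driver, along the lines of (sketch; variable names illustrative):
+ *
+ * const struct firmware *fw;
+ *
+ * if (!request_firmware(&fw, pkg_file_name, dev)) {
+ * state = ice_copy_and_init_pkg(hw, fw->data, fw->size);
+ * release_firmware(fw);
+ * }
+ */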
diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.h b/drivers/net/ethernet/intel/ice/ice_ddp.h
new file mode 100644
index 000000000000..37eadb3d27a8
--- /dev/null
+++ b/drivers/net/ethernet/intel/ice/ice_ddp.h
@@ -0,0 +1,445 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2022, Intel Corporation. */
+
+#ifndef _ICE_DDP_H_
+#define _ICE_DDP_H_
+
+#include "ice_type.h"
+
+/* Minimum supported package version */
+#define ICE_PKG_SUPP_VER_MAJ 1
+#define ICE_PKG_SUPP_VER_MNR 3
+
+/* Package format version */
+#define ICE_PKG_FMT_VER_MAJ 1
+#define ICE_PKG_FMT_VER_MNR 0
+#define ICE_PKG_FMT_VER_UPD 0
+#define ICE_PKG_FMT_VER_DFT 0
+
+#define ICE_PKG_CNT 4
+
+#define ICE_FV_OFFSET_INVAL 0x1FF
+
+/* Extraction Sequence (Field Vector) Table */
+struct ice_fv_word {
+ u8 prot_id;
+ u16 off; /* Offset within the protocol header */
+ u8 resvrd;
+} __packed;
+
+#define ICE_MAX_NUM_PROFILES 256
+
+#define ICE_MAX_FV_WORDS 48
+struct ice_fv {
+ struct ice_fv_word ew[ICE_MAX_FV_WORDS];
+};
+
+enum ice_ddp_state {
+ /* Indicates that this call to ice_init_pkg
+ * successfully loaded the requested DDP package
+ */
+ ICE_DDP_PKG_SUCCESS = 0,
+
+ /* Generic error for "already loaded" cases; it is mapped later to
+ * a more specific one (one of the next three)
+ */
+ ICE_DDP_PKG_ALREADY_LOADED = -1,
+
+ /* Indicates that a DDP package of the same version has already been
+ * loaded onto the device by a previous call or by another PF
+ */
+ ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED = -2,
+
+ /* The device has a DDP package that is not supported by the driver */
+ ICE_DDP_PKG_ALREADY_LOADED_NOT_SUPPORTED = -3,
+
+ /* The device has a compatible package
+ * (but different from the request) already loaded
+ */
+ ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED = -4,
+
+ /* The firmware loaded on the device is not compatible with
+ * the DDP package loaded
+ */
+ ICE_DDP_PKG_FW_MISMATCH = -5,
+
+ /* The DDP package file is invalid */
+ ICE_DDP_PKG_INVALID_FILE = -6,
+
+ /* The version of the DDP package provided is higher than
+ * the driver supports
+ */
+ ICE_DDP_PKG_FILE_VERSION_TOO_HIGH = -7,
+
+ /* The version of the DDP package provided is lower than the
+ * driver supports
+ */
+ ICE_DDP_PKG_FILE_VERSION_TOO_LOW = -8,
+
+ /* The signature of the DDP package file provided is invalid */
+ ICE_DDP_PKG_FILE_SIGNATURE_INVALID = -9,
+
+ /* The DDP package file security revision is too low and not
+ * supported by firmware
+ */
+ ICE_DDP_PKG_FILE_REVISION_TOO_LOW = -10,
+
+ /* An error occurred in firmware while loading the DDP package */
+ ICE_DDP_PKG_LOAD_ERROR = -11,
+
+ /* Other errors */
+ ICE_DDP_PKG_ERR = -12
+};
+
+/* Package and segment headers and tables */
+struct ice_pkg_hdr {
+ struct ice_pkg_ver pkg_format_ver;
+ __le32 seg_count;
+ __le32 seg_offset[];
+};
+
+/* generic segment */
+struct ice_generic_seg_hdr {
+#define SEGMENT_TYPE_METADATA 0x00000001
+#define SEGMENT_TYPE_ICE 0x00000010
+ __le32 seg_type;
+ struct ice_pkg_ver seg_format_ver;
+ __le32 seg_size;
+ char seg_id[ICE_PKG_NAME_SIZE];
+};
+
+/* ice specific segment */
+
+union ice_device_id {
+ struct {
+ __le16 device_id;
+ __le16 vendor_id;
+ } dev_vend_id;
+ __le32 id;
+};
+
+struct ice_device_id_entry {
+ union ice_device_id device;
+ union ice_device_id sub_device;
+};
+
+struct ice_seg {
+ struct ice_generic_seg_hdr hdr;
+ __le32 device_table_count;
+ struct ice_device_id_entry device_table[];
+};
+
+struct ice_nvm_table {
+ __le32 table_count;
+ __le32 vers[];
+};
+
+struct ice_buf {
+#define ICE_PKG_BUF_SIZE 4096
+ u8 buf[ICE_PKG_BUF_SIZE];
+};
+
+struct ice_buf_table {
+ __le32 buf_count;
+ struct ice_buf buf_array[];
+};
+
+struct ice_run_time_cfg_seg {
+ struct ice_generic_seg_hdr hdr;
+ u8 rsvd[8];
+ struct ice_buf_table buf_table;
+};
+
+/* global metadata specific segment */
+struct ice_global_metadata_seg {
+ struct ice_generic_seg_hdr hdr;
+ struct ice_pkg_ver pkg_ver;
+ __le32 rsvd;
+ char pkg_name[ICE_PKG_NAME_SIZE];
+};
+
+#define ICE_MIN_S_OFF 12
+#define ICE_MAX_S_OFF 4095
+#define ICE_MIN_S_SZ 1
+#define ICE_MAX_S_SZ 4084
+
+/* section information */
+struct ice_section_entry {
+ __le32 type;
+ __le16 offset;
+ __le16 size;
+};
+
+#define ICE_MIN_S_COUNT 1
+#define ICE_MAX_S_COUNT 511
+#define ICE_MIN_S_DATA_END 12
+#define ICE_MAX_S_DATA_END 4096
+
+#define ICE_METADATA_BUF 0x80000000
+
+struct ice_buf_hdr {
+ __le16 section_count;
+ __le16 data_end;
+ struct ice_section_entry section_entry[];
+};
+
+#define ICE_MAX_ENTRIES_IN_BUF(hd_sz, ent_sz) \
+ ((ICE_PKG_BUF_SIZE - \
+ struct_size((struct ice_buf_hdr *)0, section_entry, 1) - (hd_sz)) / \
+ (ent_sz))
+
+/* ice package section IDs */
+#define ICE_SID_METADATA 1
+#define ICE_SID_XLT0_SW 10
+#define ICE_SID_XLT_KEY_BUILDER_SW 11
+#define ICE_SID_XLT1_SW 12
+#define ICE_SID_XLT2_SW 13
+#define ICE_SID_PROFID_TCAM_SW 14
+#define ICE_SID_PROFID_REDIR_SW 15
+#define ICE_SID_FLD_VEC_SW 16
+#define ICE_SID_CDID_KEY_BUILDER_SW 17
+
+struct ice_meta_sect {
+ struct ice_pkg_ver ver;
+#define ICE_META_SECT_NAME_SIZE 28
+ char name[ICE_META_SECT_NAME_SIZE];
+ __le32 track_id;
+};
+
+#define ICE_SID_CDID_REDIR_SW 18
+
+#define ICE_SID_XLT0_ACL 20
+#define ICE_SID_XLT_KEY_BUILDER_ACL 21
+#define ICE_SID_XLT1_ACL 22
+#define ICE_SID_XLT2_ACL 23
+#define ICE_SID_PROFID_TCAM_ACL 24
+#define ICE_SID_PROFID_REDIR_ACL 25
+#define ICE_SID_FLD_VEC_ACL 26
+#define ICE_SID_CDID_KEY_BUILDER_ACL 27
+#define ICE_SID_CDID_REDIR_ACL 28
+
+#define ICE_SID_XLT0_FD 30
+#define ICE_SID_XLT_KEY_BUILDER_FD 31
+#define ICE_SID_XLT1_FD 32
+#define ICE_SID_XLT2_FD 33
+#define ICE_SID_PROFID_TCAM_FD 34
+#define ICE_SID_PROFID_REDIR_FD 35
+#define ICE_SID_FLD_VEC_FD 36
+#define ICE_SID_CDID_KEY_BUILDER_FD 37
+#define ICE_SID_CDID_REDIR_FD 38
+
+#define ICE_SID_XLT0_RSS 40
+#define ICE_SID_XLT_KEY_BUILDER_RSS 41
+#define ICE_SID_XLT1_RSS 42
+#define ICE_SID_XLT2_RSS 43
+#define ICE_SID_PROFID_TCAM_RSS 44
+#define ICE_SID_PROFID_REDIR_RSS 45
+#define ICE_SID_FLD_VEC_RSS 46
+#define ICE_SID_CDID_KEY_BUILDER_RSS 47
+#define ICE_SID_CDID_REDIR_RSS 48
+
+#define ICE_SID_RXPARSER_MARKER_PTYPE 55
+#define ICE_SID_RXPARSER_BOOST_TCAM 56
+#define ICE_SID_RXPARSER_METADATA_INIT 58
+#define ICE_SID_TXPARSER_BOOST_TCAM 66
+
+#define ICE_SID_XLT0_PE 80
+#define ICE_SID_XLT_KEY_BUILDER_PE 81
+#define ICE_SID_XLT1_PE 82
+#define ICE_SID_XLT2_PE 83
+#define ICE_SID_PROFID_TCAM_PE 84
+#define ICE_SID_PROFID_REDIR_PE 85
+#define ICE_SID_FLD_VEC_PE 86
+#define ICE_SID_CDID_KEY_BUILDER_PE 87
+#define ICE_SID_CDID_REDIR_PE 88
+
+/* Label Metadata section IDs */
+#define ICE_SID_LBL_FIRST 0x80000010
+#define ICE_SID_LBL_RXPARSER_TMEM 0x80000018
+/* The following define MUST be updated to reflect the last label section ID */
+#define ICE_SID_LBL_LAST 0x80000038
+
+/* Label ICE runtime configuration section IDs */
+#define ICE_SID_TX_5_LAYER_TOPO 0x10
+
+enum ice_block {
+ ICE_BLK_SW = 0,
+ ICE_BLK_ACL,
+ ICE_BLK_FD,
+ ICE_BLK_RSS,
+ ICE_BLK_PE,
+ ICE_BLK_COUNT
+};
+
+enum ice_sect {
+ ICE_XLT0 = 0,
+ ICE_XLT_KB,
+ ICE_XLT1,
+ ICE_XLT2,
+ ICE_PROF_TCAM,
+ ICE_PROF_REDIR,
+ ICE_VEC_TBL,
+ ICE_CDID_KB,
+ ICE_CDID_REDIR,
+ ICE_SECT_COUNT
+};
+
+/* package labels */
+struct ice_label {
+ __le16 value;
+#define ICE_PKG_LABEL_SIZE 64
+ char name[ICE_PKG_LABEL_SIZE];
+};
+
+struct ice_label_section {
+ __le16 count;
+ struct ice_label label[];
+};
+
+#define ICE_MAX_LABELS_IN_BUF \
+ ICE_MAX_ENTRIES_IN_BUF(struct_size((struct ice_label_section *)0, \
+ label, 1) - \
+ sizeof(struct ice_label), \
+ sizeof(struct ice_label))
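+
+/* Worked example for the macro above, assuming no structure padding:
+ * sizeof(struct ice_label) is 66 bytes (a __le16 value plus a 64-byte name),
+ * the per-section header overhead passed in is 68 - 66 = 2 bytes, and a
+ * buffer header with one section entry is 12 bytes, so one 4096-byte buffer
+ * holds (4096 - 12 - 2) / 66 = 61 labels.
+ */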
+
+struct ice_sw_fv_section {
+ __le16 count;
+ __le16 base_offset;
+ struct ice_fv fv[];
+};
+
+struct ice_sw_fv_list_entry {
+ struct list_head list_entry;
+ u32 profile_id;
+ struct ice_fv *fv_ptr;
+};
+
+/* The BOOST TCAM stores the match packet header in reverse order, so the
+ * fields are reversed; as a consequence, the normally big-endian fields of
+ * the packet become little endian.
+ */
+struct ice_boost_key_value {
+#define ICE_BOOST_REMAINING_HV_KEY 15
+ u8 remaining_hv_key[ICE_BOOST_REMAINING_HV_KEY];
+ __le16 hv_dst_port_key;
+ __le16 hv_src_port_key;
+ u8 tcam_search_key;
+} __packed;
+
+struct ice_boost_key {
+ struct ice_boost_key_value key;
+ struct ice_boost_key_value key2;
+};
+
+/* package Boost TCAM entry */
+struct ice_boost_tcam_entry {
+ __le16 addr;
+ __le16 reserved;
+ /* break up the 40 bytes of key into different fields */
+ struct ice_boost_key key;
+ u8 boost_hit_index_group;
+ /* The following contains bitfields which are not on byte boundaries.
+ * These fields are currently unused by driver software.
+ */
+#define ICE_BOOST_BIT_FIELDS 43
+ u8 bit_fields[ICE_BOOST_BIT_FIELDS];
+};
+
+struct ice_boost_tcam_section {
+ __le16 count;
+ __le16 reserved;
+ struct ice_boost_tcam_entry tcam[];
+};
+
+#define ICE_MAX_BST_TCAMS_IN_BUF \
+ ICE_MAX_ENTRIES_IN_BUF(struct_size((struct ice_boost_tcam_section *)0, \
+ tcam, 1) - \
+ sizeof(struct ice_boost_tcam_entry), \
+ sizeof(struct ice_boost_tcam_entry))
+
+/* package Marker Ptype TCAM entry */
+struct ice_marker_ptype_tcam_entry {
+#define ICE_MARKER_PTYPE_TCAM_ADDR_MAX 1024
+ __le16 addr;
+ __le16 ptype;
+ u8 keys[20];
+};
+
+struct ice_marker_ptype_tcam_section {
+ __le16 count;
+ __le16 reserved;
+ struct ice_marker_ptype_tcam_entry tcam[];
+};
+
+#define ICE_MAX_MARKER_PTYPE_TCAMS_IN_BUF \
+ ICE_MAX_ENTRIES_IN_BUF( \
+ struct_size((struct ice_marker_ptype_tcam_section *)0, tcam, \
+ 1) - \
+ sizeof(struct ice_marker_ptype_tcam_entry), \
+ sizeof(struct ice_marker_ptype_tcam_entry))
+
+struct ice_xlt1_section {
+ __le16 count;
+ __le16 offset;
+ u8 value[];
+};
+
+struct ice_xlt2_section {
+ __le16 count;
+ __le16 offset;
+ __le16 value[];
+};
+
+struct ice_prof_redir_section {
+ __le16 count;
+ __le16 offset;
+ u8 redir_value[];
+};
+
+/* package buffer building */
+
+struct ice_buf_build {
+ struct ice_buf buf;
+ u16 reserved_section_table_entries;
+};
+
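+/* Iteration state used by the package enumeration helpers
+ * (ice_pkg_enum_section() / ice_pkg_enum_entry()). Callers zero the
+ * structure before the first call, pass a non-NULL ice_seg on the first call
+ * and NULL afterwards; the fields track the current buffer, section and
+ * entry between calls.
+ */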
+struct ice_pkg_enum {
+ struct ice_buf_table *buf_table;
+ u32 buf_idx;
+
+ u32 type;
+ struct ice_buf_hdr *buf;
+ u32 sect_idx;
+ void *sect;
+ u32 sect_type;
+
+ u32 entry_idx;
+ void *(*handler)(u32 sect_type, void *section, u32 index, u32 *offset);
+};
+
+int ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
+ u16 buf_size, bool last_buf, u32 *error_offset,
+ u32 *error_info, struct ice_sq_cd *cd);
+int ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
+ u16 buf_size, struct ice_sq_cd *cd);
+
+void *ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size);
+
+enum ice_ddp_state ice_verify_pkg(struct ice_pkg_hdr *pkg, u32 len);
+
+struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw);
+
+struct ice_generic_seg_hdr *ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
+ struct ice_pkg_hdr *pkg_hdr);
+
+int ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 count);
+int ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count);
+
+int ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count);
+u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld);
+void *ice_pkg_enum_section(struct ice_seg *ice_seg, struct ice_pkg_enum *state,
+ u32 sect_type);
+
+struct ice_buf_hdr *ice_pkg_val_buf(struct ice_buf *buf);
+
+#endif
diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.c b/drivers/net/ethernet/intel/ice/ice_devlink.c
index 0fae0186bd85..05f216af8c81 100644
--- a/drivers/net/ethernet/intel/ice/ice_devlink.c
+++ b/drivers/net/ethernet/intel/ice/ice_devlink.c
@@ -371,10 +371,7 @@ out_free_ctx:
/**
* ice_devlink_reload_empr_start - Start EMP reset to activate new firmware
- * @devlink: pointer to the devlink instance to reload
- * @netns_change: if true, the network namespace is changing
- * @action: the action to perform. Must be DEVLINK_RELOAD_ACTION_FW_ACTIVATE
- * @limit: limits on what reload should do, such as not resetting
+ * @pf: pointer to the pf instance
* @extack: netlink extended ACK structure
*
* Allow user to activate new Embedded Management Processor firmware by
@@ -387,12 +384,9 @@ out_free_ctx:
* any source.
*/
static int
-ice_devlink_reload_empr_start(struct devlink *devlink, bool netns_change,
- enum devlink_reload_action action,
- enum devlink_reload_limit limit,
+ice_devlink_reload_empr_start(struct ice_pf *pf,
struct netlink_ext_ack *extack)
{
- struct ice_pf *pf = devlink_priv(devlink);
struct device *dev = ice_pf_to_dev(pf);
struct ice_hw *hw = &pf->hw;
u8 pending;
@@ -431,11 +425,51 @@ ice_devlink_reload_empr_start(struct devlink *devlink, bool netns_change,
}
/**
+ * ice_devlink_reload_down - prepare for reload
+ * @devlink: pointer to the devlink instance to reload
+ * @netns_change: if true, the network namespace is changing
+ * @action: the action to perform
+ * @limit: limits on what reload should do, such as not resetting
+ * @extack: netlink extended ACK structure
+ */
+static int
+ice_devlink_reload_down(struct devlink *devlink, bool netns_change,
+ enum devlink_reload_action action,
+ enum devlink_reload_limit limit,
+ struct netlink_ext_ack *extack)
+{
+ struct ice_pf *pf = devlink_priv(devlink);
+
+ switch (action) {
+ case DEVLINK_RELOAD_ACTION_DRIVER_REINIT:
+ if (ice_is_eswitch_mode_switchdev(pf)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Go to legacy mode before doing reinit\n");
+ return -EOPNOTSUPP;
+ }
+ if (ice_is_adq_active(pf)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Turn off ADQ before doing reinit\n");
+ return -EOPNOTSUPP;
+ }
+ if (ice_has_vfs(pf)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Remove all VFs before doing reinit\n");
+ return -EOPNOTSUPP;
+ }
+ ice_unload(pf);
+ return 0;
+ case DEVLINK_RELOAD_ACTION_FW_ACTIVATE:
+ return ice_devlink_reload_empr_start(pf, extack);
+ default:
+ WARN_ON(1);
+ return -EOPNOTSUPP;
+ }
+}
+
+/**
* ice_devlink_reload_empr_finish - Wait for EMP reset to finish
- * @devlink: pointer to the devlink instance reloading
- * @action: the action requested
- * @limit: limits imposed by userspace, such as not resetting
- * @actions_performed: on return, indicate what actions actually performed
+ * @pf: pointer to the pf instance
* @extack: netlink extended ACK structure
*
* Wait for driver to finish rebuilding after EMP reset is completed. This
@@ -443,17 +477,11 @@ ice_devlink_reload_empr_start(struct devlink *devlink, bool netns_change,
* for the driver's rebuild to complete.
*/
static int
-ice_devlink_reload_empr_finish(struct devlink *devlink,
- enum devlink_reload_action action,
- enum devlink_reload_limit limit,
- u32 *actions_performed,
+ice_devlink_reload_empr_finish(struct ice_pf *pf,
struct netlink_ext_ack *extack)
{
- struct ice_pf *pf = devlink_priv(devlink);
int err;
- *actions_performed = BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE);
-
err = ice_wait_for_reset(pf, 60 * HZ);
if (err) {
NL_SET_ERR_MSG_MOD(extack, "Device still resetting after 1 minute");
@@ -1192,12 +1220,43 @@ static int ice_devlink_set_parent(struct devlink_rate *devlink_rate,
return status;
}
+/**
+ * ice_devlink_reload_up - do reload up after reinit
+ * @devlink: pointer to the devlink instance reloading
+ * @action: the action requested
+ * @limit: limits imposed by userspace, such as not resetting
+ * @actions_performed: on return, indicate what actions actually performed
+ * @extack: netlink extended ACK structure
+ */
+static int
+ice_devlink_reload_up(struct devlink *devlink,
+ enum devlink_reload_action action,
+ enum devlink_reload_limit limit,
+ u32 *actions_performed,
+ struct netlink_ext_ack *extack)
+{
+ struct ice_pf *pf = devlink_priv(devlink);
+
+ switch (action) {
+ case DEVLINK_RELOAD_ACTION_DRIVER_REINIT:
+ *actions_performed = BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT);
+ return ice_load(pf);
+ case DEVLINK_RELOAD_ACTION_FW_ACTIVATE:
+ *actions_performed = BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE);
+ return ice_devlink_reload_empr_finish(pf, extack);
+ default:
+ WARN_ON(1);
+ return -EOPNOTSUPP;
+ }
+}
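+
+/* With both reload actions wired up, userspace can exercise either path with
+ * iproute2's devlink tool, e.g. (device address illustrative):
+ *
+ * devlink dev reload pci/0000:18:00.0 action driver_reinit
+ * devlink dev reload pci/0000:18:00.0 action fw_activate
+ */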
+
static const struct devlink_ops ice_devlink_ops = {
.supported_flash_update_params = DEVLINK_SUPPORT_FLASH_UPDATE_OVERWRITE_MASK,
- .reload_actions = BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE),
+ .reload_actions = BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) |
+ BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE),
- /* The ice driver currently does not support driver reinit */
- .reload_down = ice_devlink_reload_empr_start,
- .reload_up = ice_devlink_reload_empr_finish,
+ .reload_down = ice_devlink_reload_down,
+ .reload_up = ice_devlink_reload_up,
.port_split = ice_devlink_port_split,
.port_unsplit = ice_devlink_port_unsplit,
.eswitch_mode_get = ice_eswitch_mode_get,
@@ -1376,7 +1435,6 @@ void ice_devlink_register(struct ice_pf *pf)
{
struct devlink *devlink = priv_to_devlink(pf);
- devlink_set_features(devlink, DEVLINK_F_RELOAD);
devlink_register(devlink);
}
@@ -1411,25 +1469,9 @@ ice_devlink_set_switch_id(struct ice_pf *pf, struct netdev_phys_item_id *ppid)
int ice_devlink_register_params(struct ice_pf *pf)
{
struct devlink *devlink = priv_to_devlink(pf);
- union devlink_param_value value;
- int err;
- err = devlink_params_register(devlink, ice_devlink_params,
- ARRAY_SIZE(ice_devlink_params));
- if (err)
- return err;
-
- value.vbool = false;
- devlink_param_driverinit_value_set(devlink,
- DEVLINK_PARAM_GENERIC_ID_ENABLE_IWARP,
- value);
-
- value.vbool = test_bit(ICE_FLAG_RDMA_ENA, pf->flags) ? true : false;
- devlink_param_driverinit_value_set(devlink,
- DEVLINK_PARAM_GENERIC_ID_ENABLE_ROCE,
- value);
-
- return 0;
+ return devlink_params_register(devlink, ice_devlink_params,
+ ARRAY_SIZE(ice_devlink_params));
}
void ice_devlink_unregister_params(struct ice_pf *pf)
diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c
index f9f15acae90a..f6dd3f8fd936 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch.c
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c
@@ -71,17 +71,17 @@ void ice_eswitch_replay_vf_mac_rule(struct ice_vf *vf)
if (!ice_is_switchdev_running(vf->pf))
return;
- if (is_valid_ether_addr(vf->hw_lan_addr.addr)) {
+ if (is_valid_ether_addr(vf->hw_lan_addr)) {
err = ice_eswitch_add_vf_mac_rule(vf->pf, vf,
- vf->hw_lan_addr.addr);
+ vf->hw_lan_addr);
if (err) {
dev_err(ice_pf_to_dev(vf->pf), "Failed to add MAC %pM for VF %d\n, error %d\n",
- vf->hw_lan_addr.addr, vf->vf_id, err);
+ vf->hw_lan_addr, vf->vf_id, err);
return;
}
vf->num_mac++;
- ether_addr_copy(vf->dev_lan_addr.addr, vf->hw_lan_addr.addr);
+ ether_addr_copy(vf->dev_lan_addr, vf->hw_lan_addr);
}
}
@@ -237,7 +237,7 @@ ice_eswitch_release_reprs(struct ice_pf *pf, struct ice_vsi *ctrl_vsi)
ice_vsi_update_security(vsi, ice_vsi_ctx_set_antispoof);
metadata_dst_free(vf->repr->dst);
vf->repr->dst = NULL;
- ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr.addr,
+ ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr,
ICE_FWD_TO_VSI);
netif_napi_del(&vf->repr->q_vector->napi);
@@ -265,14 +265,14 @@ static int ice_eswitch_setup_reprs(struct ice_pf *pf)
GFP_KERNEL);
if (!vf->repr->dst) {
ice_fltr_add_mac_and_broadcast(vsi,
- vf->hw_lan_addr.addr,
+ vf->hw_lan_addr,
ICE_FWD_TO_VSI);
goto err;
}
if (ice_vsi_update_security(vsi, ice_vsi_ctx_clear_antispoof)) {
ice_fltr_add_mac_and_broadcast(vsi,
- vf->hw_lan_addr.addr,
+ vf->hw_lan_addr,
ICE_FWD_TO_VSI);
metadata_dst_free(vf->repr->dst);
vf->repr->dst = NULL;
@@ -281,7 +281,7 @@ static int ice_eswitch_setup_reprs(struct ice_pf *pf)
if (ice_vsi_add_vlan_zero(vsi)) {
ice_fltr_add_mac_and_broadcast(vsi,
- vf->hw_lan_addr.addr,
+ vf->hw_lan_addr,
ICE_FWD_TO_VSI);
metadata_dst_free(vf->repr->dst);
vf->repr->dst = NULL;
@@ -338,7 +338,7 @@ void ice_eswitch_update_repr(struct ice_vsi *vsi)
ret = ice_vsi_update_security(vsi, ice_vsi_ctx_clear_antispoof);
if (ret) {
- ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr.addr, ICE_FWD_TO_VSI);
+ ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr, ICE_FWD_TO_VSI);
dev_err(ice_pf_to_dev(pf), "Failed to update VF %d port representor\n",
vsi->vf->vf_id);
}
@@ -425,7 +425,13 @@ static void ice_eswitch_release_env(struct ice_pf *pf)
static struct ice_vsi *
ice_eswitch_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi)
{
- return ice_vsi_setup(pf, pi, ICE_VSI_SWITCHDEV_CTRL, NULL, NULL);
+ struct ice_vsi_cfg_params params = {};
+
+ params.type = ICE_VSI_SWITCHDEV_CTRL;
+ params.pi = pi;
+ params.flags = ICE_VSI_FLAG_INIT;
+
+ return ice_vsi_setup(pf, &params);
}
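The new parameter-struct pattern generalizes to the driver's other VSI types; a hypothetical PF-VSI caller would look like this (using only the fields shown in the hunk above):

	struct ice_vsi_cfg_params params = {};
	struct ice_vsi *vsi;

	params.type = ICE_VSI_PF;
	params.pi = pf->hw.port_info;
	params.flags = ICE_VSI_FLAG_INIT;

	vsi = ice_vsi_setup(pf, &params);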
/**
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index a359f1610fc1..b360bd8f1599 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -656,7 +656,7 @@ static int ice_lbtest_prepare_rings(struct ice_vsi *vsi)
if (status)
goto err_setup_rx_ring;
- status = ice_vsi_cfg(vsi);
+ status = ice_vsi_cfg_lan(vsi);
if (status)
goto err_setup_rx_ring;
@@ -664,7 +664,7 @@ static int ice_lbtest_prepare_rings(struct ice_vsi *vsi)
if (status)
goto err_start_rx_ring;
- return status;
+ return 0;
err_start_rx_ring:
ice_vsi_free_rx_rings(vsi);
@@ -1950,8 +1950,7 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
ICE_PHY_TYPE_LOW_100G_CAUI4 |
ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC |
ICE_PHY_TYPE_LOW_100G_AUI4 |
- ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4 |
- ICE_PHY_TYPE_LOW_100GBASE_CP2;
+ ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4;
phy_type_mask_hi = ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC |
ICE_PHY_TYPE_HIGH_100G_CAUI2 |
ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC |
@@ -1964,15 +1963,27 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
100000baseCR4_Full);
}
- phy_type_mask_lo = ICE_PHY_TYPE_LOW_100GBASE_SR4 |
- ICE_PHY_TYPE_LOW_100GBASE_SR2;
- if (phy_types_low & phy_type_mask_lo) {
+ if (phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_CP2) {
+ ethtool_link_ksettings_add_link_mode(ks, supported,
+ 100000baseCR2_Full);
+ ice_ethtool_advertise_link_mode(ICE_AQ_LINK_SPEED_100GB,
+ 100000baseCR2_Full);
+ }
+
+ if (phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_SR4) {
ethtool_link_ksettings_add_link_mode(ks, supported,
100000baseSR4_Full);
ice_ethtool_advertise_link_mode(ICE_AQ_LINK_SPEED_100GB,
100000baseSR4_Full);
}
+ if (phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_SR2) {
+ ethtool_link_ksettings_add_link_mode(ks, supported,
+ 100000baseSR2_Full);
+ ice_ethtool_advertise_link_mode(ICE_AQ_LINK_SPEED_100GB,
+ 100000baseSR2_Full);
+ }
+
phy_type_mask_lo = ICE_PHY_TYPE_LOW_100GBASE_LR4 |
ICE_PHY_TYPE_LOW_100GBASE_DR;
if (phy_types_low & phy_type_mask_lo) {
@@ -1984,14 +1995,20 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_type_mask_lo = ICE_PHY_TYPE_LOW_100GBASE_KR4 |
ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4;
- phy_type_mask_hi = ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4;
- if (phy_types_low & phy_type_mask_lo ||
- phy_types_high & phy_type_mask_hi) {
+ if (phy_types_low & phy_type_mask_lo) {
ethtool_link_ksettings_add_link_mode(ks, supported,
100000baseKR4_Full);
ice_ethtool_advertise_link_mode(ICE_AQ_LINK_SPEED_100GB,
100000baseKR4_Full);
}
+
+ if (phy_types_high & ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4) {
+ ethtool_link_ksettings_add_link_mode(ks, supported,
+ 100000baseKR2_Full);
+ ice_ethtool_advertise_link_mode(ICE_AQ_LINK_SPEED_100GB,
+ 100000baseKR2_Full);
+ }
}
#define TEST_SET_BITS_TIMEOUT 50
@@ -2242,17 +2259,15 @@ ice_ksettings_find_adv_link_speed(const struct ethtool_link_ksettings *ks)
100baseT_Full))
adv_link_speed |= ICE_AQ_LINK_SPEED_100MB;
if (ethtool_link_ksettings_test_link_mode(ks, advertising,
- 1000baseX_Full))
- adv_link_speed |= ICE_AQ_LINK_SPEED_1000MB;
- if (ethtool_link_ksettings_test_link_mode(ks, advertising,
+ 1000baseX_Full) ||
+ ethtool_link_ksettings_test_link_mode(ks, advertising,
1000baseT_Full) ||
ethtool_link_ksettings_test_link_mode(ks, advertising,
1000baseKX_Full))
adv_link_speed |= ICE_AQ_LINK_SPEED_1000MB;
if (ethtool_link_ksettings_test_link_mode(ks, advertising,
- 2500baseT_Full))
- adv_link_speed |= ICE_AQ_LINK_SPEED_2500MB;
- if (ethtool_link_ksettings_test_link_mode(ks, advertising,
+ 2500baseT_Full) ||
+ ethtool_link_ksettings_test_link_mode(ks, advertising,
2500baseX_Full))
adv_link_speed |= ICE_AQ_LINK_SPEED_2500MB;
if (ethtool_link_ksettings_test_link_mode(ks, advertising,
@@ -2261,9 +2276,8 @@ ice_ksettings_find_adv_link_speed(const struct ethtool_link_ksettings *ks)
if (ethtool_link_ksettings_test_link_mode(ks, advertising,
10000baseT_Full) ||
ethtool_link_ksettings_test_link_mode(ks, advertising,
- 10000baseKR_Full))
- adv_link_speed |= ICE_AQ_LINK_SPEED_10GB;
- if (ethtool_link_ksettings_test_link_mode(ks, advertising,
+ 10000baseKR_Full) ||
+ ethtool_link_ksettings_test_link_mode(ks, advertising,
10000baseSR_Full) ||
ethtool_link_ksettings_test_link_mode(ks, advertising,
10000baseLR_Full))
@@ -2287,9 +2301,8 @@ ice_ksettings_find_adv_link_speed(const struct ethtool_link_ksettings *ks)
if (ethtool_link_ksettings_test_link_mode(ks, advertising,
50000baseCR2_Full) ||
ethtool_link_ksettings_test_link_mode(ks, advertising,
- 50000baseKR2_Full))
- adv_link_speed |= ICE_AQ_LINK_SPEED_50GB;
- if (ethtool_link_ksettings_test_link_mode(ks, advertising,
+ 50000baseKR2_Full) ||
+ ethtool_link_ksettings_test_link_mode(ks, advertising,
50000baseSR2_Full))
adv_link_speed |= ICE_AQ_LINK_SPEED_50GB;
if (ethtool_link_ksettings_test_link_mode(ks, advertising,
@@ -2299,7 +2312,13 @@ ice_ksettings_find_adv_link_speed(const struct ethtool_link_ksettings *ks)
ethtool_link_ksettings_test_link_mode(ks, advertising,
100000baseLR4_ER4_Full) ||
ethtool_link_ksettings_test_link_mode(ks, advertising,
- 100000baseKR4_Full))
+ 100000baseKR4_Full) ||
+ ethtool_link_ksettings_test_link_mode(ks, advertising,
+ 100000baseCR2_Full) ||
+ ethtool_link_ksettings_test_link_mode(ks, advertising,
+ 100000baseSR2_Full) ||
+ ethtool_link_ksettings_test_link_mode(ks, advertising,
+ 100000baseKR2_Full))
adv_link_speed |= ICE_AQ_LINK_SPEED_100GB;
return adv_link_speed;
@@ -3027,8 +3046,6 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring,
/* clone ring and setup updated count */
xdp_rings[i] = *vsi->xdp_rings[i];
xdp_rings[i].count = new_tx_cnt;
- xdp_rings[i].next_dd = ICE_RING_QUARTER(&xdp_rings[i]) - 1;
- xdp_rings[i].next_rs = ICE_RING_QUARTER(&xdp_rings[i]) - 1;
xdp_rings[i].desc = NULL;
xdp_rings[i].tx_buf = NULL;
err = ice_setup_tx_ring(&xdp_rings[i]);
@@ -3073,7 +3090,7 @@ process_rx:
/* allocate Rx buffers */
err = ice_alloc_rx_bufs(&rx_rings[i],
- ICE_DESC_UNUSED(&rx_rings[i]));
+ ICE_RX_DESC_UNUSED(&rx_rings[i]));
rx_unwind:
if (err) {
while (i) {
diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
index 4b3bb19e1d06..5ce413965930 100644
--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
@@ -6,23 +6,6 @@
#include "ice_flow.h"
#include "ice.h"
-/* For supporting double VLAN mode, it is necessary to enable or disable certain
- * boost tcam entries. The metadata label names that match the following
- * prefixes will be saved to allow enabling double VLAN mode.
- */
-#define ICE_DVM_PRE "BOOST_MAC_VLAN_DVM" /* enable these entries */
-#define ICE_SVM_PRE "BOOST_MAC_VLAN_SVM" /* disable these entries */
-
-/* To support tunneling entries by PF, the package will append the PF number to
- * the label; for example TNL_VXLAN_PF0, TNL_VXLAN_PF1, TNL_VXLAN_PF2, etc.
- */
-#define ICE_TNL_PRE "TNL_"
-static const struct ice_tunnel_type_scan tnls[] = {
- { TNL_VXLAN, "TNL_VXLAN_PF" },
- { TNL_GENEVE, "TNL_GENEVE_PF" },
- { TNL_LAST, "" }
-};
-
static const u32 ice_sect_lkup[ICE_BLK_COUNT][ICE_SECT_COUNT] = {
/* SWITCH */
{
@@ -104,225 +87,6 @@ static u32 ice_sect_id(enum ice_block blk, enum ice_sect sect)
}
/**
- * ice_pkg_val_buf
- * @buf: pointer to the ice buffer
- *
- * This helper function validates a buffer's header.
- */
-static struct ice_buf_hdr *ice_pkg_val_buf(struct ice_buf *buf)
-{
- struct ice_buf_hdr *hdr;
- u16 section_count;
- u16 data_end;
-
- hdr = (struct ice_buf_hdr *)buf->buf;
- /* verify data */
- section_count = le16_to_cpu(hdr->section_count);
- if (section_count < ICE_MIN_S_COUNT || section_count > ICE_MAX_S_COUNT)
- return NULL;
-
- data_end = le16_to_cpu(hdr->data_end);
- if (data_end < ICE_MIN_S_DATA_END || data_end > ICE_MAX_S_DATA_END)
- return NULL;
-
- return hdr;
-}
-
-/**
- * ice_find_buf_table
- * @ice_seg: pointer to the ice segment
- *
- * Returns the address of the buffer table within the ice segment.
- */
-static struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg)
-{
- struct ice_nvm_table *nvms;
-
- nvms = (struct ice_nvm_table *)
- (ice_seg->device_table +
- le32_to_cpu(ice_seg->device_table_count));
-
- return (__force struct ice_buf_table *)
- (nvms->vers + le32_to_cpu(nvms->table_count));
-}
-
-/**
- * ice_pkg_enum_buf
- * @ice_seg: pointer to the ice segment (or NULL on subsequent calls)
- * @state: pointer to the enum state
- *
- * This function will enumerate all the buffers in the ice segment. The first
- * call is made with the ice_seg parameter non-NULL; on subsequent calls,
- * ice_seg is set to NULL which continues the enumeration. When the function
- * returns a NULL pointer, then the end of the buffers has been reached, or an
- * unexpected value has been detected (for example an invalid section count or
- * an invalid buffer end value).
- */
-static struct ice_buf_hdr *
-ice_pkg_enum_buf(struct ice_seg *ice_seg, struct ice_pkg_enum *state)
-{
- if (ice_seg) {
- state->buf_table = ice_find_buf_table(ice_seg);
- if (!state->buf_table)
- return NULL;
-
- state->buf_idx = 0;
- return ice_pkg_val_buf(state->buf_table->buf_array);
- }
-
- if (++state->buf_idx < le32_to_cpu(state->buf_table->buf_count))
- return ice_pkg_val_buf(state->buf_table->buf_array +
- state->buf_idx);
- else
- return NULL;
-}
-
-/**
- * ice_pkg_advance_sect
- * @ice_seg: pointer to the ice segment (or NULL on subsequent calls)
- * @state: pointer to the enum state
- *
- * This helper function will advance the section within the ice segment,
- * also advancing the buffer if needed.
- */
-static bool
-ice_pkg_advance_sect(struct ice_seg *ice_seg, struct ice_pkg_enum *state)
-{
- if (!ice_seg && !state->buf)
- return false;
-
- if (!ice_seg && state->buf)
- if (++state->sect_idx < le16_to_cpu(state->buf->section_count))
- return true;
-
- state->buf = ice_pkg_enum_buf(ice_seg, state);
- if (!state->buf)
- return false;
-
- /* start of new buffer, reset section index */
- state->sect_idx = 0;
- return true;
-}
-
-/**
- * ice_pkg_enum_section
- * @ice_seg: pointer to the ice segment (or NULL on subsequent calls)
- * @state: pointer to the enum state
- * @sect_type: section type to enumerate
- *
- * This function will enumerate all the sections of a particular type in the
- * ice segment. The first call is made with the ice_seg parameter non-NULL;
- * on subsequent calls, ice_seg is set to NULL which continues the enumeration.
- * When the function returns a NULL pointer, then the end of the matching
- * sections has been reached.
- */
-static void *
-ice_pkg_enum_section(struct ice_seg *ice_seg, struct ice_pkg_enum *state,
- u32 sect_type)
-{
- u16 offset, size;
-
- if (ice_seg)
- state->type = sect_type;
-
- if (!ice_pkg_advance_sect(ice_seg, state))
- return NULL;
-
- /* scan for next matching section */
- while (state->buf->section_entry[state->sect_idx].type !=
- cpu_to_le32(state->type))
- if (!ice_pkg_advance_sect(NULL, state))
- return NULL;
-
- /* validate section */
- offset = le16_to_cpu(state->buf->section_entry[state->sect_idx].offset);
- if (offset < ICE_MIN_S_OFF || offset > ICE_MAX_S_OFF)
- return NULL;
-
- size = le16_to_cpu(state->buf->section_entry[state->sect_idx].size);
- if (size < ICE_MIN_S_SZ || size > ICE_MAX_S_SZ)
- return NULL;
-
- /* make sure the section fits in the buffer */
- if (offset + size > ICE_PKG_BUF_SIZE)
- return NULL;
-
- state->sect_type =
- le32_to_cpu(state->buf->section_entry[state->sect_idx].type);
-
- /* calc pointer to this section */
- state->sect = ((u8 *)state->buf) +
- le16_to_cpu(state->buf->section_entry[state->sect_idx].offset);
-
- return state->sect;
-}
-
-/**
- * ice_pkg_enum_entry
- * @ice_seg: pointer to the ice segment (or NULL on subsequent calls)
- * @state: pointer to the enum state
- * @sect_type: section type to enumerate
- * @offset: pointer to variable that receives the offset in the table (optional)
- * @handler: function that handles access to the entries of the given section type
- *
- * This function will enumerate all the entries in particular section type in
- * the ice segment. The first call is made with the ice_seg parameter non-NULL;
- * on subsequent calls, ice_seg is set to NULL which continues the enumeration.
- * When the function returns a NULL pointer, then the end of the entries has
- * been reached.
- *
- * Since each section may have a different header and entry size, the handler
- * function is needed to determine the number and location of entries in each
- * section.
- *
- * The offset parameter is optional, but should be used for sections that
- * contain an offset for each section table. For such cases, the section handler
- * function must return the appropriate offset + index to give the absolute
- * offset for each entry. For example, if the base for a section's header
- * indicates a base offset of 10, and the index for the entry is 2, then the
- * section handler function should set the offset to 10 + 2 = 12.
- */
-static void *
-ice_pkg_enum_entry(struct ice_seg *ice_seg, struct ice_pkg_enum *state,
- u32 sect_type, u32 *offset,
- void *(*handler)(u32 sect_type, void *section,
- u32 index, u32 *offset))
-{
- void *entry;
-
- if (ice_seg) {
- if (!handler)
- return NULL;
-
- if (!ice_pkg_enum_section(ice_seg, state, sect_type))
- return NULL;
-
- state->entry_idx = 0;
- state->handler = handler;
- } else {
- state->entry_idx++;
- }
-
- if (!state->handler)
- return NULL;
-
- /* get entry */
- entry = state->handler(state->sect_type, state->sect, state->entry_idx,
- offset);
- if (!entry) {
- /* end of a section, look for another section of this type */
- if (!ice_pkg_enum_section(NULL, state, 0))
- return NULL;
-
- state->entry_idx = 0;
- entry = state->handler(state->sect_type, state->sect,
- state->entry_idx, offset);
- }
-
- return entry;
-}
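The doc comments above all describe the same start-then-continue contract; a sketch of a typical caller (sect_type and handler are placeholders):

	struct ice_pkg_enum state;
	struct ice_seg *seg = hw->seg;	/* non-NULL only for the first call */
	void *entry;

	memset(&state, 0, sizeof(state));
	do {
		entry = ice_pkg_enum_entry(seg, &state, sect_type, NULL, handler);
		seg = NULL;	/* subsequent calls continue the same walk */
		/* ... consume entry ... */
	} while (entry);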
-
-/**
* ice_hw_ptype_ena - check if the PTYPE is enabled or not
* @hw: pointer to the HW structure
* @ptype: the hardware PTYPE
@@ -333,312 +97,6 @@ bool ice_hw_ptype_ena(struct ice_hw *hw, u16 ptype)
test_bit(ptype, hw->hw_ptype);
}
-/**
- * ice_marker_ptype_tcam_handler
- * @sect_type: section type
- * @section: pointer to section
- * @index: index of the Marker PType TCAM entry to be returned
- * @offset: pointer to receive absolute offset, always 0 for ptype TCAM sections
- *
- * This is a callback function that can be passed to ice_pkg_enum_entry.
- * Handles enumeration of individual Marker PType TCAM entries.
- */
-static void *
-ice_marker_ptype_tcam_handler(u32 sect_type, void *section, u32 index,
- u32 *offset)
-{
- struct ice_marker_ptype_tcam_section *marker_ptype;
-
- if (sect_type != ICE_SID_RXPARSER_MARKER_PTYPE)
- return NULL;
-
- if (index > ICE_MAX_MARKER_PTYPE_TCAMS_IN_BUF)
- return NULL;
-
- if (offset)
- *offset = 0;
-
- marker_ptype = section;
- if (index >= le16_to_cpu(marker_ptype->count))
- return NULL;
-
- return marker_ptype->tcam + index;
-}
-
-/**
- * ice_fill_hw_ptype - fill the enabled PTYPE bit information
- * @hw: pointer to the HW structure
- */
-static void ice_fill_hw_ptype(struct ice_hw *hw)
-{
- struct ice_marker_ptype_tcam_entry *tcam;
- struct ice_seg *seg = hw->seg;
- struct ice_pkg_enum state;
-
- bitmap_zero(hw->hw_ptype, ICE_FLOW_PTYPE_MAX);
- if (!seg)
- return;
-
- memset(&state, 0, sizeof(state));
-
- do {
- tcam = ice_pkg_enum_entry(seg, &state,
- ICE_SID_RXPARSER_MARKER_PTYPE, NULL,
- ice_marker_ptype_tcam_handler);
- if (tcam &&
- le16_to_cpu(tcam->addr) < ICE_MARKER_PTYPE_TCAM_ADDR_MAX &&
- le16_to_cpu(tcam->ptype) < ICE_FLOW_PTYPE_MAX)
- set_bit(le16_to_cpu(tcam->ptype), hw->hw_ptype);
-
- seg = NULL;
- } while (tcam);
-}
-
-/**
- * ice_boost_tcam_handler
- * @sect_type: section type
- * @section: pointer to section
- * @index: index of the boost TCAM entry to be returned
- * @offset: pointer to receive absolute offset, always 0 for boost TCAM sections
- *
- * This is a callback function that can be passed to ice_pkg_enum_entry.
- * Handles enumeration of individual boost TCAM entries.
- */
-static void *
-ice_boost_tcam_handler(u32 sect_type, void *section, u32 index, u32 *offset)
-{
- struct ice_boost_tcam_section *boost;
-
- if (!section)
- return NULL;
-
- if (sect_type != ICE_SID_RXPARSER_BOOST_TCAM)
- return NULL;
-
- /* cppcheck-suppress nullPointer */
- if (index > ICE_MAX_BST_TCAMS_IN_BUF)
- return NULL;
-
- if (offset)
- *offset = 0;
-
- boost = section;
- if (index >= le16_to_cpu(boost->count))
- return NULL;
-
- return boost->tcam + index;
-}
-
-/**
- * ice_find_boost_entry
- * @ice_seg: pointer to the ice segment (non-NULL)
- * @addr: Boost TCAM address of entry to search for
- * @entry: returns pointer to the entry
- *
- * Finds a particular Boost TCAM entry and returns a pointer to that entry
- * if it is found. The ice_seg parameter must not be NULL since the first call
- * to ice_pkg_enum_entry requires a pointer to an actual ice_segment structure.
- */
-static int
-ice_find_boost_entry(struct ice_seg *ice_seg, u16 addr,
- struct ice_boost_tcam_entry **entry)
-{
- struct ice_boost_tcam_entry *tcam;
- struct ice_pkg_enum state;
-
- memset(&state, 0, sizeof(state));
-
- if (!ice_seg)
- return -EINVAL;
-
- do {
- tcam = ice_pkg_enum_entry(ice_seg, &state,
- ICE_SID_RXPARSER_BOOST_TCAM, NULL,
- ice_boost_tcam_handler);
- if (tcam && le16_to_cpu(tcam->addr) == addr) {
- *entry = tcam;
- return 0;
- }
-
- ice_seg = NULL;
- } while (tcam);
-
- *entry = NULL;
- return -EIO;
-}
-
-/**
- * ice_label_enum_handler
- * @sect_type: section type
- * @section: pointer to section
- * @index: index of the label entry to be returned
- * @offset: pointer to receive absolute offset, always zero for label sections
- *
- * This is a callback function that can be passed to ice_pkg_enum_entry.
- * Handles enumeration of individual label entries.
- */
-static void *
-ice_label_enum_handler(u32 __always_unused sect_type, void *section, u32 index,
- u32 *offset)
-{
- struct ice_label_section *labels;
-
- if (!section)
- return NULL;
-
- /* cppcheck-suppress nullPointer */
- if (index > ICE_MAX_LABELS_IN_BUF)
- return NULL;
-
- if (offset)
- *offset = 0;
-
- labels = section;
- if (index >= le16_to_cpu(labels->count))
- return NULL;
-
- return labels->label + index;
-}
-
-/**
- * ice_enum_labels
- * @ice_seg: pointer to the ice segment (NULL on subsequent calls)
- * @type: the section type that will contain the label (0 on subsequent calls)
- * @state: ice_pkg_enum structure that will hold the state of the enumeration
- * @value: pointer to a value that will return the label's value if found
- *
- * Enumerates a list of labels in the package. The caller will call
- * ice_enum_labels(ice_seg, type, ...) to start the enumeration, then call
- * ice_enum_labels(NULL, 0, ...) to continue. When the function returns NULL,
- * the end of the list has been reached.
- */
-static char *
-ice_enum_labels(struct ice_seg *ice_seg, u32 type, struct ice_pkg_enum *state,
- u16 *value)
-{
- struct ice_label *label;
-
- /* Check for valid label section on first call */
- if (type && !(type >= ICE_SID_LBL_FIRST && type <= ICE_SID_LBL_LAST))
- return NULL;
-
- label = ice_pkg_enum_entry(ice_seg, state, type, NULL,
- ice_label_enum_handler);
- if (!label)
- return NULL;
-
- *value = le16_to_cpu(label->value);
- return label->name;
-}
-
-/**
- * ice_add_tunnel_hint
- * @hw: pointer to the HW structure
- * @label_name: label text
- * @val: value of the tunnel port boost entry
- */
-static void ice_add_tunnel_hint(struct ice_hw *hw, char *label_name, u16 val)
-{
- if (hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) {
- u16 i;
-
- for (i = 0; tnls[i].type != TNL_LAST; i++) {
- size_t len = strlen(tnls[i].label_prefix);
-
- /* Look for matching label start, before continuing */
- if (strncmp(label_name, tnls[i].label_prefix, len))
- continue;
-
- /* Make sure this label matches our PF. Note that the PF
- * character ('0' - '7') will be located where our
- * prefix string's null terminator is located.
- */
- if ((label_name[len] - '0') == hw->pf_id) {
- hw->tnl.tbl[hw->tnl.count].type = tnls[i].type;
- hw->tnl.tbl[hw->tnl.count].valid = false;
- hw->tnl.tbl[hw->tnl.count].boost_addr = val;
- hw->tnl.tbl[hw->tnl.count].port = 0;
- hw->tnl.count++;
- break;
- }
- }
- }
-}
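For example, with hw->pf_id == 2, the label "TNL_VXLAN_PF2" matches the "TNL_VXLAN_PF" prefix, the character following the prefix satisfies '2' - '0' == 2, and a TNL_VXLAN hint is recorded for that boost TCAM address.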
-
-/**
- * ice_add_dvm_hint
- * @hw: pointer to the HW structure
- * @val: value of the boost entry
- * @enable: true if entry needs to be enabled, or false if needs to be disabled
- */
-static void ice_add_dvm_hint(struct ice_hw *hw, u16 val, bool enable)
-{
- if (hw->dvm_upd.count < ICE_DVM_MAX_ENTRIES) {
- hw->dvm_upd.tbl[hw->dvm_upd.count].boost_addr = val;
- hw->dvm_upd.tbl[hw->dvm_upd.count].enable = enable;
- hw->dvm_upd.count++;
- }
-}
-
-/**
- * ice_init_pkg_hints
- * @hw: pointer to the HW structure
- * @ice_seg: pointer to the segment of the package scan (non-NULL)
- *
- * This function will scan the package and save off relevant information
- * (hints or metadata) for driver use. The ice_seg parameter must not be NULL
- * since the first call to ice_enum_labels requires a pointer to an actual
- * ice_seg structure.
- */
-static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg)
-{
- struct ice_pkg_enum state;
- char *label_name;
- u16 val;
- int i;
-
- memset(&hw->tnl, 0, sizeof(hw->tnl));
- memset(&state, 0, sizeof(state));
-
- if (!ice_seg)
- return;
-
- label_name = ice_enum_labels(ice_seg, ICE_SID_LBL_RXPARSER_TMEM, &state,
- &val);
-
- while (label_name) {
- if (!strncmp(label_name, ICE_TNL_PRE, strlen(ICE_TNL_PRE)))
- /* check for a tunnel entry */
- ice_add_tunnel_hint(hw, label_name, val);
-
- /* check for a dvm mode entry */
- else if (!strncmp(label_name, ICE_DVM_PRE, strlen(ICE_DVM_PRE)))
- ice_add_dvm_hint(hw, val, true);
-
- /* check for a svm mode entry */
- else if (!strncmp(label_name, ICE_SVM_PRE, strlen(ICE_SVM_PRE)))
- ice_add_dvm_hint(hw, val, false);
-
- label_name = ice_enum_labels(NULL, 0, &state, &val);
- }
-
- /* Cache the appropriate boost TCAM entry pointers for tunnels */
- for (i = 0; i < hw->tnl.count; i++) {
- ice_find_boost_entry(ice_seg, hw->tnl.tbl[i].boost_addr,
- &hw->tnl.tbl[i].boost_entry);
- if (hw->tnl.tbl[i].boost_entry) {
- hw->tnl.tbl[i].valid = true;
- if (hw->tnl.tbl[i].type < __TNL_TYPE_CNT)
- hw->tnl.valid_count[hw->tnl.tbl[i].type]++;
- }
- }
-
- /* Cache the appropriate boost TCAM entry pointers for DVM and SVM */
- for (i = 0; i < hw->dvm_upd.count; i++)
- ice_find_boost_entry(ice_seg, hw->dvm_upd.tbl[i].boost_addr,
- &hw->dvm_upd.tbl[i].boost_entry);
-}
-
/* Key creation */
#define ICE_DC_KEY 0x1 /* don't care */
@@ -810,51 +268,6 @@ ice_set_key(u8 *key, u16 size, u8 *val, u8 *upd, u8 *dc, u8 *nm, u16 off,
}
/**
- * ice_acquire_global_cfg_lock
- * @hw: pointer to the HW structure
- * @access: access type (read or write)
- *
- * This function will request ownership of the global config lock for reading
- * or writing of the package. When attempting to obtain write access, the
- * caller must check for the following two return values:
- *
- * 0 - Means the caller has acquired the global config lock
- * and can perform writing of the package.
- * -EALREADY - Indicates another driver has already written the
- * package or has found that no update was necessary; in
- * this case, the caller can just skip performing any
- * update of the package.
- */
-static int
-ice_acquire_global_cfg_lock(struct ice_hw *hw,
- enum ice_aq_res_access_type access)
-{
- int status;
-
- status = ice_acquire_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID, access,
- ICE_GLOBAL_CFG_LOCK_TIMEOUT);
-
- if (!status)
- mutex_lock(&ice_global_cfg_lock_sw);
- else if (status == -EALREADY)
- ice_debug(hw, ICE_DBG_PKG, "Global config lock: No work to do\n");
-
- return status;
-}
-
-/**
- * ice_release_global_cfg_lock
- * @hw: pointer to the HW structure
- *
- * This function will release the global config lock.
- */
-static void ice_release_global_cfg_lock(struct ice_hw *hw)
-{
- mutex_unlock(&ice_global_cfg_lock_sw);
- ice_release_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID);
-}
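The contract above means a write-side caller must treat -EALREADY as "no work to do" rather than as a failure; ice_dwnld_cfg_bufs() below follows this pattern, roughly:

	status = ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE);
	if (status == -EALREADY)
		return 0;	/* another PF already wrote the package */
	if (status)
		return status;
	/* ... download or update the package ... */
	ice_release_global_cfg_lock(hw);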
-
-/**
* ice_acquire_change_lock
* @hw: pointer to the HW structure
* @access: access type (read or write)
@@ -880,1325 +293,6 @@ void ice_release_change_lock(struct ice_hw *hw)
}
/**
- * ice_aq_download_pkg
- * @hw: pointer to the hardware structure
- * @pkg_buf: the package buffer to transfer
- * @buf_size: the size of the package buffer
- * @last_buf: last buffer indicator
- * @error_offset: returns error offset
- * @error_info: returns error information
- * @cd: pointer to command details structure or NULL
- *
- * Download Package (0x0C40)
- */
-static int
-ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
- u16 buf_size, bool last_buf, u32 *error_offset,
- u32 *error_info, struct ice_sq_cd *cd)
-{
- struct ice_aqc_download_pkg *cmd;
- struct ice_aq_desc desc;
- int status;
-
- if (error_offset)
- *error_offset = 0;
- if (error_info)
- *error_info = 0;
-
- cmd = &desc.params.download_pkg;
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_download_pkg);
- desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
-
- if (last_buf)
- cmd->flags |= ICE_AQC_DOWNLOAD_PKG_LAST_BUF;
-
- status = ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd);
- if (status == -EIO) {
- /* Read error from buffer only when the FW returned an error */
- struct ice_aqc_download_pkg_resp *resp;
-
- resp = (struct ice_aqc_download_pkg_resp *)pkg_buf;
- if (error_offset)
- *error_offset = le32_to_cpu(resp->error_offset);
- if (error_info)
- *error_info = le32_to_cpu(resp->error_info);
- }
-
- return status;
-}
-
-/**
- * ice_aq_upload_section
- * @hw: pointer to the hardware structure
- * @pkg_buf: the package buffer which will receive the section
- * @buf_size: the size of the package buffer
- * @cd: pointer to command details structure or NULL
- *
- * Upload Section (0x0C41)
- */
-int
-ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
- u16 buf_size, struct ice_sq_cd *cd)
-{
- struct ice_aq_desc desc;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_upload_section);
- desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
-
- return ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd);
-}
-
-/**
- * ice_aq_update_pkg
- * @hw: pointer to the hardware structure
- * @pkg_buf: the package cmd buffer
- * @buf_size: the size of the package cmd buffer
- * @last_buf: last buffer indicator
- * @error_offset: returns error offset
- * @error_info: returns error information
- * @cd: pointer to command details structure or NULL
- *
- * Update Package (0x0C42)
- */
-static int
-ice_aq_update_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, u16 buf_size,
- bool last_buf, u32 *error_offset, u32 *error_info,
- struct ice_sq_cd *cd)
-{
- struct ice_aqc_download_pkg *cmd;
- struct ice_aq_desc desc;
- int status;
-
- if (error_offset)
- *error_offset = 0;
- if (error_info)
- *error_info = 0;
-
- cmd = &desc.params.download_pkg;
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_update_pkg);
- desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
-
- if (last_buf)
- cmd->flags |= ICE_AQC_DOWNLOAD_PKG_LAST_BUF;
-
- status = ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd);
- if (status == -EIO) {
- /* Read error from buffer only when the FW returned an error */
- struct ice_aqc_download_pkg_resp *resp;
-
- resp = (struct ice_aqc_download_pkg_resp *)pkg_buf;
- if (error_offset)
- *error_offset = le32_to_cpu(resp->error_offset);
- if (error_info)
- *error_info = le32_to_cpu(resp->error_info);
- }
-
- return status;
-}
-
-/**
- * ice_find_seg_in_pkg
- * @hw: pointer to the hardware structure
- * @seg_type: the segment type to search for (e.g., SEGMENT_TYPE_CPK)
- * @pkg_hdr: pointer to the package header to be searched
- *
- * This function searches a package file for a particular segment type. On
- * success it returns a pointer to the segment header, otherwise it will
- * return NULL.
- */
-static struct ice_generic_seg_hdr *
-ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
- struct ice_pkg_hdr *pkg_hdr)
-{
- u32 i;
-
- ice_debug(hw, ICE_DBG_PKG, "Package format version: %d.%d.%d.%d\n",
- pkg_hdr->pkg_format_ver.major, pkg_hdr->pkg_format_ver.minor,
- pkg_hdr->pkg_format_ver.update,
- pkg_hdr->pkg_format_ver.draft);
-
- /* Search all package segments for the requested segment type */
- for (i = 0; i < le32_to_cpu(pkg_hdr->seg_count); i++) {
- struct ice_generic_seg_hdr *seg;
-
- seg = (struct ice_generic_seg_hdr *)
- ((u8 *)pkg_hdr + le32_to_cpu(pkg_hdr->seg_offset[i]));
-
- if (le32_to_cpu(seg->seg_type) == seg_type)
- return seg;
- }
-
- return NULL;
-}
-
-/**
- * ice_update_pkg_no_lock
- * @hw: pointer to the hardware structure
- * @bufs: pointer to an array of buffers
- * @count: the number of buffers in the array
- */
-static int
-ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
-{
- int status = 0;
- u32 i;
-
- for (i = 0; i < count; i++) {
- struct ice_buf_hdr *bh = (struct ice_buf_hdr *)(bufs + i);
- bool last = ((i + 1) == count);
- u32 offset, info;
-
- status = ice_aq_update_pkg(hw, bh, le16_to_cpu(bh->data_end),
- last, &offset, &info, NULL);
-
- if (status) {
- ice_debug(hw, ICE_DBG_PKG, "Update pkg failed: err %d off %d inf %d\n",
- status, offset, info);
- break;
- }
- }
-
- return status;
-}
-
-/**
- * ice_update_pkg
- * @hw: pointer to the hardware structure
- * @bufs: pointer to an array of buffers
- * @count: the number of buffers in the array
- *
- * Obtains change lock and updates package.
- */
-static int ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
-{
- int status;
-
- status = ice_acquire_change_lock(hw, ICE_RES_WRITE);
- if (status)
- return status;
-
- status = ice_update_pkg_no_lock(hw, bufs, count);
-
- ice_release_change_lock(hw);
-
- return status;
-}
-
-static enum ice_ddp_state ice_map_aq_err_to_ddp_state(enum ice_aq_err aq_err)
-{
- switch (aq_err) {
- case ICE_AQ_RC_ENOSEC:
- case ICE_AQ_RC_EBADSIG:
- return ICE_DDP_PKG_FILE_SIGNATURE_INVALID;
- case ICE_AQ_RC_ESVN:
- return ICE_DDP_PKG_FILE_REVISION_TOO_LOW;
- case ICE_AQ_RC_EBADMAN:
- case ICE_AQ_RC_EBADBUF:
- return ICE_DDP_PKG_LOAD_ERROR;
- default:
- return ICE_DDP_PKG_ERR;
- }
-}
-
-/**
- * ice_dwnld_cfg_bufs
- * @hw: pointer to the hardware structure
- * @bufs: pointer to an array of buffers
- * @count: the number of buffers in the array
- *
- * Obtains global config lock and downloads the package configuration buffers
- * to the firmware. Metadata buffers are skipped, and the first metadata buffer
- * found indicates that the rest of the buffers are all metadata buffers.
- */
-static enum ice_ddp_state
-ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
-{
- enum ice_ddp_state state = ICE_DDP_PKG_SUCCESS;
- struct ice_buf_hdr *bh;
- enum ice_aq_err err;
- u32 offset, info, i;
- int status;
-
- if (!bufs || !count)
- return ICE_DDP_PKG_ERR;
-
- /* If the first buffer's first section has its metadata bit set
- * then there are no buffers to be downloaded, and the operation is
- * considered a success.
- */
- bh = (struct ice_buf_hdr *)bufs;
- if (le32_to_cpu(bh->section_entry[0].type) & ICE_METADATA_BUF)
- return ICE_DDP_PKG_SUCCESS;
-
- status = ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE);
- if (status) {
- if (status == -EALREADY)
- return ICE_DDP_PKG_ALREADY_LOADED;
- return ice_map_aq_err_to_ddp_state(hw->adminq.sq_last_status);
- }
-
- for (i = 0; i < count; i++) {
- bool last = ((i + 1) == count);
-
- if (!last) {
- /* check next buffer for metadata flag */
- bh = (struct ice_buf_hdr *)(bufs + i + 1);
-
- /* A set metadata flag in the next buffer will signal
- * that the current buffer will be the last buffer
- * downloaded
- */
- if (le16_to_cpu(bh->section_count))
- if (le32_to_cpu(bh->section_entry[0].type) &
- ICE_METADATA_BUF)
- last = true;
- }
-
- bh = (struct ice_buf_hdr *)(bufs + i);
-
- status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, last,
- &offset, &info, NULL);
-
- /* Save AQ status from download package */
- if (status) {
- ice_debug(hw, ICE_DBG_PKG, "Pkg download failed: err %d off %d inf %d\n",
- status, offset, info);
- err = hw->adminq.sq_last_status;
- state = ice_map_aq_err_to_ddp_state(err);
- break;
- }
-
- if (last)
- break;
- }
-
- if (!status) {
- status = ice_set_vlan_mode(hw);
- if (status)
- ice_debug(hw, ICE_DBG_PKG, "Failed to set VLAN mode: err %d\n",
- status);
- }
-
- ice_release_global_cfg_lock(hw);
-
- return state;
-}
-
-/**
- * ice_aq_get_pkg_info_list
- * @hw: pointer to the hardware structure
- * @pkg_info: the buffer which will receive the information list
- * @buf_size: the size of the pkg_info information buffer
- * @cd: pointer to command details structure or NULL
- *
- * Get Package Info List (0x0C43)
- */
-static int
-ice_aq_get_pkg_info_list(struct ice_hw *hw,
- struct ice_aqc_get_pkg_info_resp *pkg_info,
- u16 buf_size, struct ice_sq_cd *cd)
-{
- struct ice_aq_desc desc;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_pkg_info_list);
-
- return ice_aq_send_cmd(hw, &desc, pkg_info, buf_size, cd);
-}
-
-/**
- * ice_download_pkg
- * @hw: pointer to the hardware structure
- * @ice_seg: pointer to the segment of the package to be downloaded
- *
- * Handles the download of a complete package.
- */
-static enum ice_ddp_state
-ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
-{
- struct ice_buf_table *ice_buf_tbl;
- int status;
-
- ice_debug(hw, ICE_DBG_PKG, "Segment format version: %d.%d.%d.%d\n",
- ice_seg->hdr.seg_format_ver.major,
- ice_seg->hdr.seg_format_ver.minor,
- ice_seg->hdr.seg_format_ver.update,
- ice_seg->hdr.seg_format_ver.draft);
-
- ice_debug(hw, ICE_DBG_PKG, "Seg: type 0x%X, size %d, name %s\n",
- le32_to_cpu(ice_seg->hdr.seg_type),
- le32_to_cpu(ice_seg->hdr.seg_size), ice_seg->hdr.seg_id);
-
- ice_buf_tbl = ice_find_buf_table(ice_seg);
-
- ice_debug(hw, ICE_DBG_PKG, "Seg buf count: %d\n",
- le32_to_cpu(ice_buf_tbl->buf_count));
-
- status = ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array,
- le32_to_cpu(ice_buf_tbl->buf_count));
-
- ice_post_pkg_dwnld_vlan_mode_cfg(hw);
-
- return status;
-}
-
-/**
- * ice_init_pkg_info
- * @hw: pointer to the hardware structure
- * @pkg_hdr: pointer to the driver's package hdr
- *
- * Saves off the package details into the HW structure.
- */
-static enum ice_ddp_state
-ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
-{
- struct ice_generic_seg_hdr *seg_hdr;
-
- if (!pkg_hdr)
- return ICE_DDP_PKG_ERR;
-
- seg_hdr = ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, pkg_hdr);
- if (seg_hdr) {
- struct ice_meta_sect *meta;
- struct ice_pkg_enum state;
-
- memset(&state, 0, sizeof(state));
-
- /* Get package information from the Metadata Section */
- meta = ice_pkg_enum_section((struct ice_seg *)seg_hdr, &state,
- ICE_SID_METADATA);
- if (!meta) {
- ice_debug(hw, ICE_DBG_INIT, "Did not find ice metadata section in package\n");
- return ICE_DDP_PKG_INVALID_FILE;
- }
-
- hw->pkg_ver = meta->ver;
- memcpy(hw->pkg_name, meta->name, sizeof(meta->name));
-
- ice_debug(hw, ICE_DBG_PKG, "Pkg: %d.%d.%d.%d, %s\n",
- meta->ver.major, meta->ver.minor, meta->ver.update,
- meta->ver.draft, meta->name);
-
- hw->ice_seg_fmt_ver = seg_hdr->seg_format_ver;
- memcpy(hw->ice_seg_id, seg_hdr->seg_id,
- sizeof(hw->ice_seg_id));
-
- ice_debug(hw, ICE_DBG_PKG, "Ice Seg: %d.%d.%d.%d, %s\n",
- seg_hdr->seg_format_ver.major,
- seg_hdr->seg_format_ver.minor,
- seg_hdr->seg_format_ver.update,
- seg_hdr->seg_format_ver.draft,
- seg_hdr->seg_id);
- } else {
- ice_debug(hw, ICE_DBG_INIT, "Did not find ice segment in driver package\n");
- return ICE_DDP_PKG_INVALID_FILE;
- }
-
- return ICE_DDP_PKG_SUCCESS;
-}
-
-/**
- * ice_get_pkg_info
- * @hw: pointer to the hardware structure
- *
- * Store details of the package currently loaded in HW into the HW structure.
- */
-static enum ice_ddp_state ice_get_pkg_info(struct ice_hw *hw)
-{
- enum ice_ddp_state state = ICE_DDP_PKG_SUCCESS;
- struct ice_aqc_get_pkg_info_resp *pkg_info;
- u16 size;
- u32 i;
-
- size = struct_size(pkg_info, pkg_info, ICE_PKG_CNT);
- pkg_info = kzalloc(size, GFP_KERNEL);
- if (!pkg_info)
- return ICE_DDP_PKG_ERR;
-
- if (ice_aq_get_pkg_info_list(hw, pkg_info, size, NULL)) {
- state = ICE_DDP_PKG_ERR;
- goto init_pkg_free_alloc;
- }
-
- for (i = 0; i < le32_to_cpu(pkg_info->count); i++) {
-#define ICE_PKG_FLAG_COUNT 4
- char flags[ICE_PKG_FLAG_COUNT + 1] = { 0 };
- u8 place = 0;
-
- if (pkg_info->pkg_info[i].is_active) {
- flags[place++] = 'A';
- hw->active_pkg_ver = pkg_info->pkg_info[i].ver;
- hw->active_track_id =
- le32_to_cpu(pkg_info->pkg_info[i].track_id);
- memcpy(hw->active_pkg_name,
- pkg_info->pkg_info[i].name,
- sizeof(pkg_info->pkg_info[i].name));
- hw->active_pkg_in_nvm = pkg_info->pkg_info[i].is_in_nvm;
- }
- if (pkg_info->pkg_info[i].is_active_at_boot)
- flags[place++] = 'B';
- if (pkg_info->pkg_info[i].is_modified)
- flags[place++] = 'M';
- if (pkg_info->pkg_info[i].is_in_nvm)
- flags[place++] = 'N';
-
- ice_debug(hw, ICE_DBG_PKG, "Pkg[%d]: %d.%d.%d.%d,%s,%s\n",
- i, pkg_info->pkg_info[i].ver.major,
- pkg_info->pkg_info[i].ver.minor,
- pkg_info->pkg_info[i].ver.update,
- pkg_info->pkg_info[i].ver.draft,
- pkg_info->pkg_info[i].name, flags);
- }
-
-init_pkg_free_alloc:
- kfree(pkg_info);
-
- return state;
-}
-
-/**
- * ice_verify_pkg - verify package
- * @pkg: pointer to the package buffer
- * @len: size of the package buffer
- *
- * Verifies various attributes of the package file, including length, format
- * version, and the requirement of at least one segment.
- */
-static enum ice_ddp_state ice_verify_pkg(struct ice_pkg_hdr *pkg, u32 len)
-{
- u32 seg_count;
- u32 i;
-
- if (len < struct_size(pkg, seg_offset, 1))
- return ICE_DDP_PKG_INVALID_FILE;
-
- if (pkg->pkg_format_ver.major != ICE_PKG_FMT_VER_MAJ ||
- pkg->pkg_format_ver.minor != ICE_PKG_FMT_VER_MNR ||
- pkg->pkg_format_ver.update != ICE_PKG_FMT_VER_UPD ||
- pkg->pkg_format_ver.draft != ICE_PKG_FMT_VER_DFT)
- return ICE_DDP_PKG_INVALID_FILE;
-
- /* pkg must have at least one segment */
- seg_count = le32_to_cpu(pkg->seg_count);
- if (seg_count < 1)
- return ICE_DDP_PKG_INVALID_FILE;
-
- /* make sure segment array fits in package length */
- if (len < struct_size(pkg, seg_offset, seg_count))
- return ICE_DDP_PKG_INVALID_FILE;
-
- /* all segments must fit within length */
- for (i = 0; i < seg_count; i++) {
- u32 off = le32_to_cpu(pkg->seg_offset[i]);
- struct ice_generic_seg_hdr *seg;
-
- /* segment header must fit */
- if (len < off + sizeof(*seg))
- return ICE_DDP_PKG_INVALID_FILE;
-
- seg = (struct ice_generic_seg_hdr *)((u8 *)pkg + off);
-
- /* segment body must fit */
- if (len < off + le32_to_cpu(seg->seg_size))
- return ICE_DDP_PKG_INVALID_FILE;
- }
-
- return ICE_DDP_PKG_SUCCESS;
-}
-
-/**
- * ice_free_seg - free package segment pointer
- * @hw: pointer to the hardware structure
- *
- * Frees the package segment pointer in the proper manner, depending on if the
- * segment was allocated or just the passed in pointer was stored.
- */
-void ice_free_seg(struct ice_hw *hw)
-{
- if (hw->pkg_copy) {
- devm_kfree(ice_hw_to_dev(hw), hw->pkg_copy);
- hw->pkg_copy = NULL;
- hw->pkg_size = 0;
- }
- hw->seg = NULL;
-}
-
-/**
- * ice_init_pkg_regs - initialize additional package registers
- * @hw: pointer to the hardware structure
- */
-static void ice_init_pkg_regs(struct ice_hw *hw)
-{
-#define ICE_SW_BLK_INP_MASK_L 0xFFFFFFFF
-#define ICE_SW_BLK_INP_MASK_H 0x0000FFFF
-#define ICE_SW_BLK_IDX 0
-
- /* setup Switch block input mask, which is 48-bits in two parts */
- wr32(hw, GL_PREEXT_L2_PMASK0(ICE_SW_BLK_IDX), ICE_SW_BLK_INP_MASK_L);
- wr32(hw, GL_PREEXT_L2_PMASK1(ICE_SW_BLK_IDX), ICE_SW_BLK_INP_MASK_H);
-}
-
-/**
- * ice_chk_pkg_version - check package version for compatibility with driver
- * @pkg_ver: pointer to a version structure to check
- *
- * Check to make sure that the package about to be downloaded is compatible with
- * the driver. To be compatible, the major and minor components of the package
- * version must match our ICE_PKG_SUPP_VER_MAJ and ICE_PKG_SUPP_VER_MNR
- * definitions.
- */
-static enum ice_ddp_state ice_chk_pkg_version(struct ice_pkg_ver *pkg_ver)
-{
- if (pkg_ver->major > ICE_PKG_SUPP_VER_MAJ ||
- (pkg_ver->major == ICE_PKG_SUPP_VER_MAJ &&
- pkg_ver->minor > ICE_PKG_SUPP_VER_MNR))
- return ICE_DDP_PKG_FILE_VERSION_TOO_HIGH;
- else if (pkg_ver->major < ICE_PKG_SUPP_VER_MAJ ||
- (pkg_ver->major == ICE_PKG_SUPP_VER_MAJ &&
- pkg_ver->minor < ICE_PKG_SUPP_VER_MNR))
- return ICE_DDP_PKG_FILE_VERSION_TOO_LOW;
-
- return ICE_DDP_PKG_SUCCESS;
-}
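Concretely, if the supported version were 1.3, then a 1.4 (or 2.x) package is rejected as ICE_DDP_PKG_FILE_VERSION_TOO_HIGH, a 1.2 (or 0.x) package as ICE_DDP_PKG_FILE_VERSION_TOO_LOW, and only 1.3.x.y packages pass; the update and draft components are not checked here.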
-
-/**
- * ice_chk_pkg_compat
- * @hw: pointer to the hardware structure
- * @ospkg: pointer to the package hdr
- * @seg: pointer to the package segment hdr
- *
- * This function checks the package version compatibility with the driver and NVM
- */
-static enum ice_ddp_state
-ice_chk_pkg_compat(struct ice_hw *hw, struct ice_pkg_hdr *ospkg,
- struct ice_seg **seg)
-{
- struct ice_aqc_get_pkg_info_resp *pkg;
- enum ice_ddp_state state;
- u16 size;
- u32 i;
-
- /* Check package version compatibility */
- state = ice_chk_pkg_version(&hw->pkg_ver);
- if (state) {
- ice_debug(hw, ICE_DBG_INIT, "Package version check failed.\n");
- return state;
- }
-
- /* find ICE segment in given package */
- *seg = (struct ice_seg *)ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE,
- ospkg);
- if (!*seg) {
- ice_debug(hw, ICE_DBG_INIT, "no ice segment in package.\n");
- return ICE_DDP_PKG_INVALID_FILE;
- }
-
- /* Check if FW is compatible with the OS package */
- size = struct_size(pkg, pkg_info, ICE_PKG_CNT);
- pkg = kzalloc(size, GFP_KERNEL);
- if (!pkg)
- return ICE_DDP_PKG_ERR;
-
- if (ice_aq_get_pkg_info_list(hw, pkg, size, NULL)) {
- state = ICE_DDP_PKG_LOAD_ERROR;
- goto fw_ddp_compat_free_alloc;
- }
-
- for (i = 0; i < le32_to_cpu(pkg->count); i++) {
- /* loop till we find the NVM package */
- if (!pkg->pkg_info[i].is_in_nvm)
- continue;
- if ((*seg)->hdr.seg_format_ver.major !=
- pkg->pkg_info[i].ver.major ||
- (*seg)->hdr.seg_format_ver.minor >
- pkg->pkg_info[i].ver.minor) {
- state = ICE_DDP_PKG_FW_MISMATCH;
- ice_debug(hw, ICE_DBG_INIT, "OS package is not compatible with NVM.\n");
- }
- /* done processing NVM package so break */
- break;
- }
-fw_ddp_compat_free_alloc:
- kfree(pkg);
- return state;
-}
-
-/**
- * ice_sw_fv_handler
- * @sect_type: section type
- * @section: pointer to section
- * @index: index of the field vector entry to be returned
- * @offset: ptr to variable that receives the offset in the field vector table
- *
- * This is a callback function that can be passed to ice_pkg_enum_entry.
- * This function treats the given section as of type ice_sw_fv_section and
- * enumerates offset field. "offset" is an index into the field vector table.
- */
-static void *
-ice_sw_fv_handler(u32 sect_type, void *section, u32 index, u32 *offset)
-{
- struct ice_sw_fv_section *fv_section = section;
-
- if (!section || sect_type != ICE_SID_FLD_VEC_SW)
- return NULL;
- if (index >= le16_to_cpu(fv_section->count))
- return NULL;
- if (offset)
- /* "index" passed in to this function is relative to a given
- * 4k block. To get the true index into the field vector
- * table, add the relative index to the base_offset
- * field of this section.
- */
- *offset = le16_to_cpu(fv_section->base_offset) + index;
- return fv_section->fv + index;
-}
-
-/**
- * ice_get_prof_index_max - get the max profile index of the profiles in use
- * @hw: pointer to the HW struct
- *
- * Calling this function will get the max profile index of the profiles in
- * use and store that index in struct ice_switch_info *switch_info in HW
- * for later use.
- */
-static int ice_get_prof_index_max(struct ice_hw *hw)
-{
- u16 prof_index = 0, j, max_prof_index = 0;
- struct ice_pkg_enum state;
- struct ice_seg *ice_seg;
- bool flag = false;
- struct ice_fv *fv;
- u32 offset;
-
- memset(&state, 0, sizeof(state));
-
- if (!hw->seg)
- return -EINVAL;
-
- ice_seg = hw->seg;
-
- do {
- fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW,
- &offset, ice_sw_fv_handler);
- if (!fv)
- break;
- ice_seg = NULL;
-
- /* In profiles that are not used, the prot_id is set to 0xff
- * and the off is set to 0x1ff for all field vectors.
- */
- for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++)
- if (fv->ew[j].prot_id != ICE_PROT_INVALID ||
- fv->ew[j].off != ICE_FV_OFFSET_INVAL)
- flag = true;
- if (flag && prof_index > max_prof_index)
- max_prof_index = prof_index;
-
- prof_index++;
- flag = false;
- } while (fv);
-
- hw->switch_info->max_used_prof_index = max_prof_index;
-
- return 0;
-}
-
-/**
- * ice_get_ddp_pkg_state - get DDP pkg state after download
- * @hw: pointer to the HW struct
- * @already_loaded: indicates if pkg was already loaded onto the device
- */
-static enum ice_ddp_state
-ice_get_ddp_pkg_state(struct ice_hw *hw, bool already_loaded)
-{
- if (hw->pkg_ver.major == hw->active_pkg_ver.major &&
- hw->pkg_ver.minor == hw->active_pkg_ver.minor &&
- hw->pkg_ver.update == hw->active_pkg_ver.update &&
- hw->pkg_ver.draft == hw->active_pkg_ver.draft &&
- !memcmp(hw->pkg_name, hw->active_pkg_name, sizeof(hw->pkg_name))) {
- if (already_loaded)
- return ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED;
- else
- return ICE_DDP_PKG_SUCCESS;
- } else if (hw->active_pkg_ver.major != ICE_PKG_SUPP_VER_MAJ ||
- hw->active_pkg_ver.minor != ICE_PKG_SUPP_VER_MNR) {
- return ICE_DDP_PKG_ALREADY_LOADED_NOT_SUPPORTED;
- } else if (hw->active_pkg_ver.major == ICE_PKG_SUPP_VER_MAJ &&
- hw->active_pkg_ver.minor == ICE_PKG_SUPP_VER_MNR) {
- return ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED;
- } else {
- return ICE_DDP_PKG_ERR;
- }
-}
-
-/**
- * ice_init_pkg - initialize/download package
- * @hw: pointer to the hardware structure
- * @buf: pointer to the package buffer
- * @len: size of the package buffer
- *
- * This function initializes a package. The package contains HW tables
- * required to do packet processing. First, the function extracts package
- * information such as version. Then it finds the ice configuration segment
- * within the package; this function then saves a copy of the segment pointer
- * within the supplied package buffer. Next, the function will cache any hints
- * from the package, followed by downloading the package itself. Note that if
- * a previous PF driver has already downloaded the package successfully, then
- * the current driver will not have to download the package again.
- *
- * The local package contents will be used to query default behavior and to
- * update specific sections of the HW's version of the package (e.g. to update
- * the parse graph to understand new protocols).
- *
- * This function stores a pointer to the package buffer memory, and it is
- * expected that the supplied buffer will not be freed immediately. If the
- * package buffer needs to be freed, such as when read from a file, use
- * ice_copy_and_init_pkg() instead of directly calling ice_init_pkg() in this
- * case.
- */
-enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
-{
- bool already_loaded = false;
- enum ice_ddp_state state;
- struct ice_pkg_hdr *pkg;
- struct ice_seg *seg;
-
- if (!buf || !len)
- return ICE_DDP_PKG_ERR;
-
- pkg = (struct ice_pkg_hdr *)buf;
- state = ice_verify_pkg(pkg, len);
- if (state) {
- ice_debug(hw, ICE_DBG_INIT, "failed to verify pkg (err: %d)\n",
- state);
- return state;
- }
-
- /* initialize package info */
- state = ice_init_pkg_info(hw, pkg);
- if (state)
- return state;
-
- /* before downloading the package, check package version for
- * compatibility with driver
- */
- state = ice_chk_pkg_compat(hw, pkg, &seg);
- if (state)
- return state;
-
- /* initialize package hints and then download package */
- ice_init_pkg_hints(hw, seg);
- state = ice_download_pkg(hw, seg);
- if (state == ICE_DDP_PKG_ALREADY_LOADED) {
- ice_debug(hw, ICE_DBG_INIT, "package previously loaded - no work.\n");
- already_loaded = true;
- }
-
- /* Get information on the package currently loaded in HW, then make sure
- * the driver is compatible with this version.
- */
- if (!state || state == ICE_DDP_PKG_ALREADY_LOADED) {
- state = ice_get_pkg_info(hw);
- if (!state)
- state = ice_get_ddp_pkg_state(hw, already_loaded);
- }
-
- if (ice_is_init_pkg_successful(state)) {
- hw->seg = seg;
- /* on successful package download update other required
- * registers to support the package and fill HW tables
- * with package content.
- */
- ice_init_pkg_regs(hw);
- ice_fill_blk_tbls(hw);
- ice_fill_hw_ptype(hw);
- ice_get_prof_index_max(hw);
- } else {
- ice_debug(hw, ICE_DBG_INIT, "package load failed, %d\n",
- state);
- }
-
- return state;
-}
-
-/**
- * ice_copy_and_init_pkg - initialize/download a copy of the package
- * @hw: pointer to the hardware structure
- * @buf: pointer to the package buffer
- * @len: size of the package buffer
- *
- * This function copies the package buffer, and then calls ice_init_pkg() to
- * initialize the copied package contents.
- *
- * The copying is necessary if the package buffer supplied is constant, or if
- * the memory may disappear shortly after calling this function.
- *
- * If the package buffer resides in the data segment and can be modified, the
- * caller is free to use ice_init_pkg() instead of ice_copy_and_init_pkg();
- * use this function when the buffer must be copied first, such as when it is
- * read from a file.
- *
- * The caller is free to immediately destroy the original package buffer, as
- * the new copy will be managed by this function and related routines.
- */
-enum ice_ddp_state
-ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len)
-{
- enum ice_ddp_state state;
- u8 *buf_copy;
-
- if (!buf || !len)
- return ICE_DDP_PKG_ERR;
-
- buf_copy = devm_kmemdup(ice_hw_to_dev(hw), buf, len, GFP_KERNEL);
-
- state = ice_init_pkg(hw, buf_copy, len);
- if (!ice_is_init_pkg_successful(state)) {
- /* Free the copy, since we failed to initialize the package */
- devm_kfree(ice_hw_to_dev(hw), buf_copy);
- } else {
- /* Track the copied pkg so we can free it later */
- hw->pkg_copy = buf_copy;
- hw->pkg_size = len;
- }
-
- return state;
-}
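A sketch of the consumer flow these comments describe, with the package read from a firmware file (the surrounding plumbing names are assumptions; the real request path lives in the driver's probe code):

	const struct firmware *fw;
	enum ice_ddp_state state;

	if (request_firmware(&fw, ICE_DDP_PKG_FILE, dev))
		return;	/* no DDP package available */

	/* fw->data is const and is released below, so a copy is required */
	state = ice_copy_and_init_pkg(hw, fw->data, fw->size);
	release_firmware(fw);

	if (!ice_is_init_pkg_successful(state))
		dev_err(dev, "DDP package failed to load, state %d\n", state);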
-
-/**
- * ice_is_init_pkg_successful - check if DDP init was successful
- * @state: state of the DDP pkg after download
- */
-bool ice_is_init_pkg_successful(enum ice_ddp_state state)
-{
- switch (state) {
- case ICE_DDP_PKG_SUCCESS:
- case ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED:
- case ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED:
- return true;
- default:
- return false;
- }
-}
-
-/**
- * ice_pkg_buf_alloc
- * @hw: pointer to the HW structure
- *
- * Allocates a package buffer and returns a pointer to the buffer header.
- * Note: all package contents must be in Little Endian form.
- */
-static struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw)
-{
- struct ice_buf_build *bld;
- struct ice_buf_hdr *buf;
-
- bld = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*bld), GFP_KERNEL);
- if (!bld)
- return NULL;
-
- buf = (struct ice_buf_hdr *)bld;
- buf->data_end = cpu_to_le16(offsetof(struct ice_buf_hdr,
- section_entry));
- return bld;
-}
-
-static bool ice_is_gtp_u_profile(u16 prof_idx)
-{
- return (prof_idx >= ICE_PROFID_IPV6_GTPU_TEID &&
- prof_idx <= ICE_PROFID_IPV6_GTPU_IPV6_TCP_INNER) ||
- prof_idx == ICE_PROFID_IPV4_GTPU_TEID;
-}
-
-static bool ice_is_gtp_c_profile(u16 prof_idx)
-{
- switch (prof_idx) {
- case ICE_PROFID_IPV4_GTPC_TEID:
- case ICE_PROFID_IPV4_GTPC_NO_TEID:
- case ICE_PROFID_IPV6_GTPC_TEID:
- case ICE_PROFID_IPV6_GTPC_NO_TEID:
- return true;
- default:
- return false;
- }
-}
-
-/**
- * ice_get_sw_prof_type - determine switch profile type
- * @hw: pointer to the HW structure
- * @fv: pointer to the switch field vector
- * @prof_idx: profile index to check
- */
-static enum ice_prof_type
-ice_get_sw_prof_type(struct ice_hw *hw, struct ice_fv *fv, u32 prof_idx)
-{
- u16 i;
-
- if (ice_is_gtp_c_profile(prof_idx))
- return ICE_PROF_TUN_GTPC;
-
- if (ice_is_gtp_u_profile(prof_idx))
- return ICE_PROF_TUN_GTPU;
-
- for (i = 0; i < hw->blk[ICE_BLK_SW].es.fvw; i++) {
- /* UDP tunnel will have UDP_OF protocol ID and VNI offset */
- if (fv->ew[i].prot_id == (u8)ICE_PROT_UDP_OF &&
- fv->ew[i].off == ICE_VNI_OFFSET)
- return ICE_PROF_TUN_UDP;
-
- /* GRE tunnel will have GRE protocol */
- if (fv->ew[i].prot_id == (u8)ICE_PROT_GRE_OF)
- return ICE_PROF_TUN_GRE;
- }
-
- return ICE_PROF_NON_TUN;
-}
-
-/**
- * ice_get_sw_fv_bitmap - Get switch field vector bitmap based on profile type
- * @hw: pointer to hardware structure
- * @req_profs: type of profiles requested
- * @bm: pointer to memory for returning the bitmap of field vectors
- */
-void
-ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs,
- unsigned long *bm)
-{
- struct ice_pkg_enum state;
- struct ice_seg *ice_seg;
- struct ice_fv *fv;
-
- if (req_profs == ICE_PROF_ALL) {
- bitmap_set(bm, 0, ICE_MAX_NUM_PROFILES);
- return;
- }
-
- memset(&state, 0, sizeof(state));
- bitmap_zero(bm, ICE_MAX_NUM_PROFILES);
- ice_seg = hw->seg;
- do {
- enum ice_prof_type prof_type;
- u32 offset;
-
- fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW,
- &offset, ice_sw_fv_handler);
- ice_seg = NULL;
-
- if (fv) {
- /* Determine field vector type */
- prof_type = ice_get_sw_prof_type(hw, fv, offset);
-
- if (req_profs & prof_type)
- set_bit((u16)offset, bm);
- }
- } while (fv);
-}
-
-/**
- * ice_get_sw_fv_list
- * @hw: pointer to the HW structure
- * @lkups: list of protocol types
- * @bm: bitmap of field vectors to consider
- * @fv_list: Head of a list
- *
- * Finds all the field vector entries from switch block that contain
- * a given protocol ID and offset and returns a list of structures of type
- * "ice_sw_fv_list_entry". Every structure in the list has a field vector
- * definition and profile ID information
- * NOTE: The caller of the function is responsible for freeing the memory
- * allocated for every list entry.
- */
-int
-ice_get_sw_fv_list(struct ice_hw *hw, struct ice_prot_lkup_ext *lkups,
- unsigned long *bm, struct list_head *fv_list)
-{
- struct ice_sw_fv_list_entry *fvl;
- struct ice_sw_fv_list_entry *tmp;
- struct ice_pkg_enum state;
- struct ice_seg *ice_seg;
- struct ice_fv *fv;
- u32 offset;
-
- memset(&state, 0, sizeof(state));
-
- if (!lkups->n_val_words || !hw->seg)
- return -EINVAL;
-
- ice_seg = hw->seg;
- do {
- u16 i;
-
- fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW,
- &offset, ice_sw_fv_handler);
- if (!fv)
- break;
- ice_seg = NULL;
-
- /* If field vector is not in the bitmap list, then skip this
- * profile.
- */
- if (!test_bit((u16)offset, bm))
- continue;
-
- for (i = 0; i < lkups->n_val_words; i++) {
- int j;
-
- for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++)
- if (fv->ew[j].prot_id ==
- lkups->fv_words[i].prot_id &&
- fv->ew[j].off == lkups->fv_words[i].off)
- break;
- if (j >= hw->blk[ICE_BLK_SW].es.fvw)
- break;
- if (i + 1 == lkups->n_val_words) {
- fvl = devm_kzalloc(ice_hw_to_dev(hw),
- sizeof(*fvl), GFP_KERNEL);
- if (!fvl)
- goto err;
- fvl->fv_ptr = fv;
- fvl->profile_id = offset;
- list_add(&fvl->list_entry, fv_list);
- break;
- }
- }
- } while (fv);
- if (list_empty(fv_list)) {
- dev_warn(ice_hw_to_dev(hw), "Required profiles not found in currently loaded DDP package\n");
- return -EIO;
- }
-
- return 0;
-
-err:
- list_for_each_entry_safe(fvl, tmp, fv_list, list_entry) {
- list_del(&fvl->list_entry);
- devm_kfree(ice_hw_to_dev(hw), fvl);
- }
-
- return -ENOMEM;
-}
-
-/**
- * ice_init_prof_result_bm - Initialize the profile result index bitmap
- * @hw: pointer to hardware structure
- */
-void ice_init_prof_result_bm(struct ice_hw *hw)
-{
- struct ice_pkg_enum state;
- struct ice_seg *ice_seg;
- struct ice_fv *fv;
-
- memset(&state, 0, sizeof(state));
-
- if (!hw->seg)
- return;
-
- ice_seg = hw->seg;
- do {
- u32 off;
- u16 i;
-
- fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW,
- &off, ice_sw_fv_handler);
- ice_seg = NULL;
- if (!fv)
- break;
-
- bitmap_zero(hw->switch_info->prof_res_bm[off],
- ICE_MAX_FV_WORDS);
-
- /* Determine empty field vector indices, these can be
- * used for recipe results. Skip index 0, since it is
- * always used for Switch ID.
- */
- for (i = 1; i < ICE_MAX_FV_WORDS; i++)
- if (fv->ew[i].prot_id == ICE_PROT_INVALID &&
- fv->ew[i].off == ICE_FV_OFFSET_INVAL)
- set_bit(i, hw->switch_info->prof_res_bm[off]);
- } while (fv);
-}
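A sketch of how the result bitmap might be consumed later (hypothetical: prof is a profile ID from the caller, and use_result_index() is a placeholder for the recipe code that needs a free result slot):

	unsigned long idx;

	idx = find_first_bit(hw->switch_info->prof_res_bm[prof],
			     ICE_MAX_FV_WORDS);
	if (idx < ICE_MAX_FV_WORDS)
		use_result_index(prof, idx);	/* hypothetical helper */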
-
-/**
- * ice_pkg_buf_free
- * @hw: pointer to the HW structure
- * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
- *
- * Frees a package buffer
- */
-void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld)
-{
- devm_kfree(ice_hw_to_dev(hw), bld);
-}
-
-/**
- * ice_pkg_buf_reserve_section
- * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
- * @count: the number of sections to reserve
- *
- * Reserves one or more section table entries in a package buffer. This routine
- * can be called multiple times, but every call must be made before the first
- * call to ice_pkg_buf_alloc_section(). Once ice_pkg_buf_alloc_section() has
- * been called, the number of reserved sections can no longer be increased; not
- * using all reserved sections is fine, but it wastes space in the buffer.
- * Note: all package contents must be in Little Endian form.
- */
-static int
-ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count)
-{
- struct ice_buf_hdr *buf;
- u16 section_count;
- u16 data_end;
-
- if (!bld)
- return -EINVAL;
-
- buf = (struct ice_buf_hdr *)&bld->buf;
-
- /* already an active section, can't increase table size */
- section_count = le16_to_cpu(buf->section_count);
- if (section_count > 0)
- return -EIO;
-
- if (bld->reserved_section_table_entries + count > ICE_MAX_S_COUNT)
- return -EIO;
- bld->reserved_section_table_entries += count;
-
- data_end = le16_to_cpu(buf->data_end) +
- flex_array_size(buf, section_entry, count);
- buf->data_end = cpu_to_le16(data_end);
-
- return 0;
-}
-
-/**
- * ice_pkg_buf_alloc_section
- * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
- * @type: the section type value
- * @size: the size of the section to reserve (in bytes)
- *
- * Reserves memory in the buffer for a section's content and updates the
- * buffer's status accordingly. This routine returns a pointer to the first
- * byte of the section start within the buffer, which is used to fill in the
- * section contents.
- * Note: all package contents must be in Little Endian form.
- */
-static void *
-ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size)
-{
- struct ice_buf_hdr *buf;
- u16 sect_count;
- u16 data_end;
-
- if (!bld || !type || !size)
- return NULL;
-
- buf = (struct ice_buf_hdr *)&bld->buf;
-
- /* check for enough space left in buffer */
- data_end = le16_to_cpu(buf->data_end);
-
- /* section start must align on 4 byte boundary */
- data_end = ALIGN(data_end, 4);
-
- if ((data_end + size) > ICE_MAX_S_DATA_END)
- return NULL;
-
- /* check for more available section table entries */
- sect_count = le16_to_cpu(buf->section_count);
- if (sect_count < bld->reserved_section_table_entries) {
- void *section_ptr = ((u8 *)buf) + data_end;
-
- buf->section_entry[sect_count].offset = cpu_to_le16(data_end);
- buf->section_entry[sect_count].size = cpu_to_le16(size);
- buf->section_entry[sect_count].type = cpu_to_le32(type);
-
- data_end += size;
- buf->data_end = cpu_to_le16(data_end);
-
- buf->section_count = cpu_to_le16(sect_count + 1);
- return section_ptr;
- }
-
- /* no free section table entries */
- return NULL;
-}
-
-/**
- * ice_pkg_buf_alloc_single_section
- * @hw: pointer to the HW structure
- * @type: the section type value
- * @size: the size of the section to reserve (in bytes)
- * @section: returns pointer to the section
- *
- * Allocates a package buffer with a single section.
- * Note: all package contents must be in Little Endian form.
- */
-struct ice_buf_build *
-ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size,
- void **section)
-{
- struct ice_buf_build *buf;
-
- if (!section)
- return NULL;
-
- buf = ice_pkg_buf_alloc(hw);
- if (!buf)
- return NULL;
-
- if (ice_pkg_buf_reserve_section(buf, 1))
- goto ice_pkg_buf_alloc_single_section_err;
-
- *section = ice_pkg_buf_alloc_section(buf, type, size);
- if (!*section)
- goto ice_pkg_buf_alloc_single_section_err;
-
- return buf;
-
-ice_pkg_buf_alloc_single_section_err:
- ice_pkg_buf_free(hw, buf);
- return NULL;
-}
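Taken together, the helpers above implement an alloc -> reserve -> fill pattern; a sketch for a buffer with two sections (illustrative only: sect_size and the section type are placeholder choices):

	struct ice_buf_build *bld;
	void *sect;

	bld = ice_pkg_buf_alloc(hw);
	if (!bld)
		return NULL;

	/* all reservations must precede the first allocation */
	if (ice_pkg_buf_reserve_section(bld, 2)) {
		ice_pkg_buf_free(hw, bld);
		return NULL;
	}

	sect = ice_pkg_buf_alloc_section(bld, ICE_SID_RXPARSER_BOOST_TCAM,
					 sect_size);
	if (!sect) {
		ice_pkg_buf_free(hw, bld);
		return NULL;
	}
	/* fill *sect, allocate the second section the same way, then hand
	 * ice_pkg_buf(bld) to the update-package AQ command
	 */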
-
-/**
- * ice_pkg_buf_get_active_sections
- * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
- *
- * Returns the number of active sections. Before using the package buffer
- * in an update package command, the caller should make sure that there is at
- * least one active section - otherwise, the buffer is not legal and should
- * not be used.
- * Note: all package contents must be in Little Endian form.
- */
-static u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld)
-{
- struct ice_buf_hdr *buf;
-
- if (!bld)
- return 0;
-
- buf = (struct ice_buf_hdr *)&bld->buf;
- return le16_to_cpu(buf->section_count);
-}
-
-/**
- * ice_pkg_buf
- * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
- *
- * Return a pointer to the buffer's header
- */
-struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
-{
- if (!bld)
- return NULL;
-
- return &bld->buf;
-}
-
-/**
* ice_get_open_tunnel_port - retrieve an open tunnel port
* @hw: pointer to the HW structure
* @port: returns open port
@@ -2297,10 +391,11 @@ ice_upd_dvm_boost_entry_err:
*/
int ice_set_dvm_boost_entries(struct ice_hw *hw)
{
- int status;
u16 i;
for (i = 0; i < hw->dvm_upd.count; i++) {
+ int status;
+
status = ice_upd_dvm_boost_entry(hw, &hw->dvm_upd.tbl[i]);
if (status)
return status;
@@ -2757,7 +852,6 @@ ice_match_prop_lst(struct list_head *list1, struct list_head *list2)
count++;
list_for_each_entry(tmp2, list2, list)
chk_count++;
- /* cppcheck-suppress knownConditionTrueFalse */
if (!count || count != chk_count)
return false;
@@ -5102,12 +3196,13 @@ ice_rem_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig,
u16 idx = vsig & ICE_VSIG_IDX_M;
struct ice_vsig_vsi *vsi_cur;
struct ice_vsig_prof *d, *t;
- int status;
/* remove TCAM entries */
list_for_each_entry_safe(d, t,
&hw->blk[blk].xlt2.vsig_tbl[idx].prop_lst,
list) {
+ int status;
+
status = ice_rem_prof_id(hw, blk, d);
if (status)
return status;
@@ -5158,12 +3253,13 @@ ice_rem_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, u64 hdl,
{
u16 idx = vsig & ICE_VSIG_IDX_M;
struct ice_vsig_prof *p, *t;
- int status;
list_for_each_entry_safe(p, t,
&hw->blk[blk].xlt2.vsig_tbl[idx].prop_lst,
list)
if (p->profile_cookie == hdl) {
+ int status;
+
if (ice_vsig_prof_id_count(hw, blk, vsig) == 1)
/* this is the last profile, remove the VSIG */
return ice_rem_vsig(hw, blk, vsig, chg);
diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.h b/drivers/net/ethernet/intel/ice/ice_flex_pipe.h
index 9c530c86703e..7af7c8e9aa4e 100644
--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.h
+++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.h
@@ -6,75 +6,6 @@
#include "ice_type.h"
-/* Package minimal version supported */
-#define ICE_PKG_SUPP_VER_MAJ 1
-#define ICE_PKG_SUPP_VER_MNR 3
-
-/* Package format version */
-#define ICE_PKG_FMT_VER_MAJ 1
-#define ICE_PKG_FMT_VER_MNR 0
-#define ICE_PKG_FMT_VER_UPD 0
-#define ICE_PKG_FMT_VER_DFT 0
-
-#define ICE_PKG_CNT 4
-
-enum ice_ddp_state {
- /* Indicates that this call to ice_init_pkg
- * successfully loaded the requested DDP package
- */
- ICE_DDP_PKG_SUCCESS = 0,
-
- /* Generic error for "already loaded" cases; it is mapped later to
- * one of the three more specific states below
- */
- ICE_DDP_PKG_ALREADY_LOADED = -1,
-
- /* Indicates that a DDP package of the same version has already been
- * loaded onto the device by a previous call or by another PF
- */
- ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED = -2,
-
- /* The device has a DDP package that is not supported by the driver */
- ICE_DDP_PKG_ALREADY_LOADED_NOT_SUPPORTED = -3,
-
- /* The device has a compatible package
- * (but different from the request) already loaded
- */
- ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED = -4,
-
- /* The firmware loaded on the device is not compatible with
- * the DDP package loaded
- */
- ICE_DDP_PKG_FW_MISMATCH = -5,
-
- /* The DDP package file is invalid */
- ICE_DDP_PKG_INVALID_FILE = -6,
-
- /* The version of the DDP package provided is higher than
- * the driver supports
- */
- ICE_DDP_PKG_FILE_VERSION_TOO_HIGH = -7,
-
- /* The version of the DDP package provided is lower than the
- * driver supports
- */
- ICE_DDP_PKG_FILE_VERSION_TOO_LOW = -8,
-
- /* The signature of the DDP package file provided is invalid */
- ICE_DDP_PKG_FILE_SIGNATURE_INVALID = -9,
-
- /* The DDP package file security revision is too low and not
- * supported by firmware
- */
- ICE_DDP_PKG_FILE_REVISION_TOO_LOW = -10,
-
- /* An error occurred in firmware while loading the DDP package */
- ICE_DDP_PKG_LOAD_ERROR = -11,
-
- /* Other errors */
- ICE_DDP_PKG_ERR = -12
-};
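Several of these negative states still leave a usable package on the device; a plausible success predicate (the function name is this sketch's own, but the three "usable" states follow from the comments above):

	static bool ice_ddp_state_is_usable(enum ice_ddp_state state)
	{
		switch (state) {
		case ICE_DDP_PKG_SUCCESS:
		case ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED:
		case ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED:
			return true;
		default:
			return false;
		}
	}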
-
int
ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type access);
void ice_release_change_lock(struct ice_hw *hw);
diff --git a/drivers/net/ethernet/intel/ice/ice_flex_type.h b/drivers/net/ethernet/intel/ice/ice_flex_type.h
index 974d14a83b2e..4f42e14ed3ae 100644
--- a/drivers/net/ethernet/intel/ice/ice_flex_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_flex_type.h
@@ -3,205 +3,7 @@
#ifndef _ICE_FLEX_TYPE_H_
#define _ICE_FLEX_TYPE_H_
-
-#define ICE_FV_OFFSET_INVAL 0x1FF
-
-/* Extraction Sequence (Field Vector) Table */
-struct ice_fv_word {
- u8 prot_id;
- u16 off; /* Offset within the protocol header */
- u8 resvrd;
-} __packed;
-
-#define ICE_MAX_NUM_PROFILES 256
-
-#define ICE_MAX_FV_WORDS 48
-struct ice_fv {
- struct ice_fv_word ew[ICE_MAX_FV_WORDS];
-};
-
-/* Package and segment headers and tables */
-struct ice_pkg_hdr {
- struct ice_pkg_ver pkg_format_ver;
- __le32 seg_count;
- __le32 seg_offset[];
-};
-
-/* generic segment */
-struct ice_generic_seg_hdr {
-#define SEGMENT_TYPE_METADATA 0x00000001
-#define SEGMENT_TYPE_ICE 0x00000010
- __le32 seg_type;
- struct ice_pkg_ver seg_format_ver;
- __le32 seg_size;
- char seg_id[ICE_PKG_NAME_SIZE];
-};
-
-/* ice specific segment */
-
-union ice_device_id {
- struct {
- __le16 device_id;
- __le16 vendor_id;
- } dev_vend_id;
- __le32 id;
-};
-
-struct ice_device_id_entry {
- union ice_device_id device;
- union ice_device_id sub_device;
-};
-
-struct ice_seg {
- struct ice_generic_seg_hdr hdr;
- __le32 device_table_count;
- struct ice_device_id_entry device_table[];
-};
-
-struct ice_nvm_table {
- __le32 table_count;
- __le32 vers[];
-};
-
-struct ice_buf {
-#define ICE_PKG_BUF_SIZE 4096
- u8 buf[ICE_PKG_BUF_SIZE];
-};
-
-struct ice_buf_table {
- __le32 buf_count;
- struct ice_buf buf_array[];
-};
-
-/* global metadata specific segment */
-struct ice_global_metadata_seg {
- struct ice_generic_seg_hdr hdr;
- struct ice_pkg_ver pkg_ver;
- __le32 rsvd;
- char pkg_name[ICE_PKG_NAME_SIZE];
-};
-
-#define ICE_MIN_S_OFF 12
-#define ICE_MAX_S_OFF 4095
-#define ICE_MIN_S_SZ 1
-#define ICE_MAX_S_SZ 4084
-
-/* section information */
-struct ice_section_entry {
- __le32 type;
- __le16 offset;
- __le16 size;
-};
-
-#define ICE_MIN_S_COUNT 1
-#define ICE_MAX_S_COUNT 511
-#define ICE_MIN_S_DATA_END 12
-#define ICE_MAX_S_DATA_END 4096
-
-#define ICE_METADATA_BUF 0x80000000
-
-struct ice_buf_hdr {
- __le16 section_count;
- __le16 data_end;
- struct ice_section_entry section_entry[];
-};
-
-#define ICE_MAX_ENTRIES_IN_BUF(hd_sz, ent_sz) ((ICE_PKG_BUF_SIZE - \
- struct_size((struct ice_buf_hdr *)0, section_entry, 1) - (hd_sz)) /\
- (ent_sz))
-
-/* ice package section IDs */
-#define ICE_SID_METADATA 1
-#define ICE_SID_XLT0_SW 10
-#define ICE_SID_XLT_KEY_BUILDER_SW 11
-#define ICE_SID_XLT1_SW 12
-#define ICE_SID_XLT2_SW 13
-#define ICE_SID_PROFID_TCAM_SW 14
-#define ICE_SID_PROFID_REDIR_SW 15
-#define ICE_SID_FLD_VEC_SW 16
-#define ICE_SID_CDID_KEY_BUILDER_SW 17
-
-struct ice_meta_sect {
- struct ice_pkg_ver ver;
-#define ICE_META_SECT_NAME_SIZE 28
- char name[ICE_META_SECT_NAME_SIZE];
- __le32 track_id;
-};
-
-#define ICE_SID_CDID_REDIR_SW 18
-
-#define ICE_SID_XLT0_ACL 20
-#define ICE_SID_XLT_KEY_BUILDER_ACL 21
-#define ICE_SID_XLT1_ACL 22
-#define ICE_SID_XLT2_ACL 23
-#define ICE_SID_PROFID_TCAM_ACL 24
-#define ICE_SID_PROFID_REDIR_ACL 25
-#define ICE_SID_FLD_VEC_ACL 26
-#define ICE_SID_CDID_KEY_BUILDER_ACL 27
-#define ICE_SID_CDID_REDIR_ACL 28
-
-#define ICE_SID_XLT0_FD 30
-#define ICE_SID_XLT_KEY_BUILDER_FD 31
-#define ICE_SID_XLT1_FD 32
-#define ICE_SID_XLT2_FD 33
-#define ICE_SID_PROFID_TCAM_FD 34
-#define ICE_SID_PROFID_REDIR_FD 35
-#define ICE_SID_FLD_VEC_FD 36
-#define ICE_SID_CDID_KEY_BUILDER_FD 37
-#define ICE_SID_CDID_REDIR_FD 38
-
-#define ICE_SID_XLT0_RSS 40
-#define ICE_SID_XLT_KEY_BUILDER_RSS 41
-#define ICE_SID_XLT1_RSS 42
-#define ICE_SID_XLT2_RSS 43
-#define ICE_SID_PROFID_TCAM_RSS 44
-#define ICE_SID_PROFID_REDIR_RSS 45
-#define ICE_SID_FLD_VEC_RSS 46
-#define ICE_SID_CDID_KEY_BUILDER_RSS 47
-#define ICE_SID_CDID_REDIR_RSS 48
-
-#define ICE_SID_RXPARSER_MARKER_PTYPE 55
-#define ICE_SID_RXPARSER_BOOST_TCAM 56
-#define ICE_SID_RXPARSER_METADATA_INIT 58
-#define ICE_SID_TXPARSER_BOOST_TCAM 66
-
-#define ICE_SID_XLT0_PE 80
-#define ICE_SID_XLT_KEY_BUILDER_PE 81
-#define ICE_SID_XLT1_PE 82
-#define ICE_SID_XLT2_PE 83
-#define ICE_SID_PROFID_TCAM_PE 84
-#define ICE_SID_PROFID_REDIR_PE 85
-#define ICE_SID_FLD_VEC_PE 86
-#define ICE_SID_CDID_KEY_BUILDER_PE 87
-#define ICE_SID_CDID_REDIR_PE 88
-
-/* Label Metadata section IDs */
-#define ICE_SID_LBL_FIRST 0x80000010
-#define ICE_SID_LBL_RXPARSER_TMEM 0x80000018
-/* The following define MUST be updated to reflect the last label section ID */
-#define ICE_SID_LBL_LAST 0x80000038
-
-enum ice_block {
- ICE_BLK_SW = 0,
- ICE_BLK_ACL,
- ICE_BLK_FD,
- ICE_BLK_RSS,
- ICE_BLK_PE,
- ICE_BLK_COUNT
-};
-
-enum ice_sect {
- ICE_XLT0 = 0,
- ICE_XLT_KB,
- ICE_XLT1,
- ICE_XLT2,
- ICE_PROF_TCAM,
- ICE_PROF_REDIR,
- ICE_VEC_TBL,
- ICE_CDID_KB,
- ICE_CDID_REDIR,
- ICE_SECT_COUNT
-};
+#include "ice_ddp.h"
/* Packet Type (PTYPE) values */
#define ICE_PTYPE_MAC_PAY 1
@@ -283,134 +85,6 @@ struct ice_ptype_attributes {
enum ice_ptype_attrib_type attrib;
};
-/* package labels */
-struct ice_label {
- __le16 value;
-#define ICE_PKG_LABEL_SIZE 64
- char name[ICE_PKG_LABEL_SIZE];
-};
-
-struct ice_label_section {
- __le16 count;
- struct ice_label label[];
-};
-
-#define ICE_MAX_LABELS_IN_BUF ICE_MAX_ENTRIES_IN_BUF( \
- struct_size((struct ice_label_section *)0, label, 1) - \
- sizeof(struct ice_label), sizeof(struct ice_label))
-
-struct ice_sw_fv_section {
- __le16 count;
- __le16 base_offset;
- struct ice_fv fv[];
-};
-
-struct ice_sw_fv_list_entry {
- struct list_head list_entry;
- u32 profile_id;
- struct ice_fv *fv_ptr;
-};
-
-/* The BOOST TCAM stores the match packet header in reverse order, meaning
- * the fields are reversed; in addition, this means that the normally big endian
- * fields of the packet are now little endian.
- */
-struct ice_boost_key_value {
-#define ICE_BOOST_REMAINING_HV_KEY 15
- u8 remaining_hv_key[ICE_BOOST_REMAINING_HV_KEY];
- __le16 hv_dst_port_key;
- __le16 hv_src_port_key;
- u8 tcam_search_key;
-} __packed;
-
-struct ice_boost_key {
- struct ice_boost_key_value key;
- struct ice_boost_key_value key2;
-};
-
-/* package Boost TCAM entry */
-struct ice_boost_tcam_entry {
- __le16 addr;
- __le16 reserved;
- /* break up the 40 bytes of key into different fields */
- struct ice_boost_key key;
- u8 boost_hit_index_group;
- /* The following contains bitfields which are not on byte boundaries.
- * These fields are currently unused by driver software.
- */
-#define ICE_BOOST_BIT_FIELDS 43
- u8 bit_fields[ICE_BOOST_BIT_FIELDS];
-};
-
-struct ice_boost_tcam_section {
- __le16 count;
- __le16 reserved;
- struct ice_boost_tcam_entry tcam[];
-};
-
-#define ICE_MAX_BST_TCAMS_IN_BUF ICE_MAX_ENTRIES_IN_BUF( \
- struct_size((struct ice_boost_tcam_section *)0, tcam, 1) - \
- sizeof(struct ice_boost_tcam_entry), \
- sizeof(struct ice_boost_tcam_entry))
-
-/* package Marker Ptype TCAM entry */
-struct ice_marker_ptype_tcam_entry {
-#define ICE_MARKER_PTYPE_TCAM_ADDR_MAX 1024
- __le16 addr;
- __le16 ptype;
- u8 keys[20];
-};
-
-struct ice_marker_ptype_tcam_section {
- __le16 count;
- __le16 reserved;
- struct ice_marker_ptype_tcam_entry tcam[];
-};
-
-#define ICE_MAX_MARKER_PTYPE_TCAMS_IN_BUF \
- ICE_MAX_ENTRIES_IN_BUF(struct_size((struct ice_marker_ptype_tcam_section *)0, tcam, 1) - \
- sizeof(struct ice_marker_ptype_tcam_entry), \
- sizeof(struct ice_marker_ptype_tcam_entry))
-
-struct ice_xlt1_section {
- __le16 count;
- __le16 offset;
- u8 value[];
-};
-
-struct ice_xlt2_section {
- __le16 count;
- __le16 offset;
- __le16 value[];
-};
-
-struct ice_prof_redir_section {
- __le16 count;
- __le16 offset;
- u8 redir_value[];
-};
-
-/* package buffer building */
-
-struct ice_buf_build {
- struct ice_buf buf;
- u16 reserved_section_table_entries;
-};
-
-struct ice_pkg_enum {
- struct ice_buf_table *buf_table;
- u32 buf_idx;
-
- u32 type;
- struct ice_buf_hdr *buf;
- u32 sect_idx;
- void *sect;
- u32 sect_type;
-
- u32 entry_idx;
- void *(*handler)(u32 sect_type, void *section, u32 index, u32 *offset);
-};
-
/* Tunnel enabling */
enum ice_tunnel_type {
diff --git a/drivers/net/ethernet/intel/ice/ice_fltr.c b/drivers/net/ethernet/intel/ice/ice_fltr.c
index 40e678cfb507..aff7a141c30d 100644
--- a/drivers/net/ethernet/intel/ice/ice_fltr.c
+++ b/drivers/net/ethernet/intel/ice/ice_fltr.c
@@ -208,6 +208,11 @@ static int ice_fltr_remove_eth_list(struct ice_vsi *vsi, struct list_head *list)
void ice_fltr_remove_all(struct ice_vsi *vsi)
{
ice_remove_vsi_fltr(&vsi->back->hw, vsi->idx);
+ /* sync netdev filters if they exist */
+ if (vsi->netdev) {
+ __dev_uc_unsync(vsi->netdev, NULL);
+ __dev_mc_unsync(vsi->netdev, NULL);
+ }
}
/**
diff --git a/drivers/net/ethernet/intel/ice/ice_gnss.c b/drivers/net/ethernet/intel/ice/ice_gnss.c
index 43e199b5b513..8dec748bb53a 100644
--- a/drivers/net/ethernet/intel/ice/ice_gnss.c
+++ b/drivers/net/ethernet/intel/ice/ice_gnss.c
@@ -3,15 +3,18 @@
#include "ice.h"
#include "ice_lib.h"
-#include <linux/tty_driver.h>
/**
- * ice_gnss_do_write - Write data to internal GNSS
+ * ice_gnss_do_write - Write data to internal GNSS receiver
* @pf: board private structure
* @buf: command buffer
* @size: command buffer size
*
* Write UBX command data to the GNSS receiver
+ *
+ * Return:
+ * * number of bytes written - success
+ * * negative - error code
*/
static unsigned int
ice_gnss_do_write(struct ice_pf *pf, unsigned char *buf, unsigned int size)
@@ -82,6 +85,12 @@ static void ice_gnss_write_pending(struct kthread_work *work)
write_work);
struct ice_pf *pf = gnss->back;
+ if (!pf)
+ return;
+
+ if (!test_bit(ICE_FLAG_GNSS, pf->flags))
+ return;
+
if (!list_empty(&gnss->queue)) {
struct gnss_write_buf *write_buf = NULL;
unsigned int bytes;
@@ -102,16 +111,14 @@ static void ice_gnss_write_pending(struct kthread_work *work)
* ice_gnss_read - Read data from internal GNSS module
* @work: GNSS read work structure
*
- * Read the data from internal GNSS receiver, number of bytes read will be
- * returned in *read_data parameter.
+ * Read the data from the internal GNSS receiver and write it to gnss_dev.
*/
static void ice_gnss_read(struct kthread_work *work)
{
struct gnss_serial *gnss = container_of(work, struct gnss_serial,
read_work.work);
+ unsigned int i, bytes_read, data_len, count;
struct ice_aqc_link_topo_addr link_topo;
- unsigned int i, bytes_read, data_len;
- struct tty_port *port;
struct ice_pf *pf;
struct ice_hw *hw;
__be16 data_len_b;
@@ -120,14 +127,15 @@ static void ice_gnss_read(struct kthread_work *work)
int err = 0;
pf = gnss->back;
- if (!pf || !gnss->tty || !gnss->tty->port) {
+ if (!pf) {
err = -EFAULT;
goto exit;
}
- hw = &pf->hw;
- port = gnss->tty->port;
+ if (!test_bit(ICE_FLAG_GNSS, pf->flags))
+ return;
+ hw = &pf->hw;
buf = (char *)get_zeroed_page(GFP_KERNEL);
if (!buf) {
err = -ENOMEM;
@@ -159,7 +167,6 @@ static void ice_gnss_read(struct kthread_work *work)
}
data_len = min_t(typeof(data_len), data_len, PAGE_SIZE);
- data_len = tty_buffer_request_room(port, data_len);
if (!data_len) {
err = -ENOMEM;
goto exit_buf;
@@ -179,12 +186,11 @@ static void ice_gnss_read(struct kthread_work *work)
goto exit_buf;
}
- /* Send the data to the tty layer for users to read. This doesn't
- * actually push the data through unless tty->low_latency is set.
- */
- tty_insert_flip_string(port, buf, i);
- tty_flip_buffer_push(port);
-
+ count = gnss_insert_raw(pf->gnss_dev, buf, i);
+ if (count != i)
+ dev_warn(ice_pf_to_dev(pf),
+ "gnss_insert_raw ret=%d size=%d\n",
+ count, i);
exit_buf:
free_page((unsigned long)buf);
kthread_queue_delayed_work(gnss->kworker, &gnss->read_work,
@@ -195,11 +201,16 @@ exit:
}
/**
- * ice_gnss_struct_init - Initialize GNSS structure for the TTY
+ * ice_gnss_struct_init - Initialize GNSS receiver
* @pf: Board private structure
- * @index: TTY device index
+ *
+ * Initialize GNSS structures and workers.
+ *
+ * Return:
+ * * pointer to initialized gnss_serial struct - success
+ * * NULL - error
*/
-static struct gnss_serial *ice_gnss_struct_init(struct ice_pf *pf, int index)
+static struct gnss_serial *ice_gnss_struct_init(struct ice_pf *pf)
{
struct device *dev = ice_pf_to_dev(pf);
struct kthread_worker *kworker;
@@ -209,17 +220,12 @@ static struct gnss_serial *ice_gnss_struct_init(struct ice_pf *pf, int index)
if (!gnss)
return NULL;
- mutex_init(&gnss->gnss_mutex);
- gnss->open_count = 0;
gnss->back = pf;
- pf->gnss_serial[index] = gnss;
+ pf->gnss_serial = gnss;
kthread_init_delayed_work(&gnss->read_work, ice_gnss_read);
INIT_LIST_HEAD(&gnss->queue);
kthread_init_work(&gnss->write_work, ice_gnss_write_pending);
- /* Allocate a kworker for handling work required for the GNSS TTY
- * writes.
- */
kworker = kthread_create_worker(0, "ice-gnss-%s", dev_name(dev));
if (IS_ERR(kworker)) {
kfree(gnss);
@@ -232,140 +238,100 @@ static struct gnss_serial *ice_gnss_struct_init(struct ice_pf *pf, int index)
}
/**
- * ice_gnss_tty_open - Initialize GNSS structures on TTY device open
- * @tty: pointer to the tty_struct
- * @filp: pointer to the file
+ * ice_gnss_open - Open GNSS device
+ * @gdev: pointer to the gnss device struct
+ *
+ * Open GNSS device and start filling the read buffer for consumer.
*
- * This routine is mandatory. If this routine is not filled in, the attempted
- * open will fail with ENODEV.
+ * Return:
+ * * 0 - success
+ * * negative - error code
*/
-static int ice_gnss_tty_open(struct tty_struct *tty, struct file *filp)
+static int ice_gnss_open(struct gnss_device *gdev)
{
+ struct ice_pf *pf = gnss_get_drvdata(gdev);
struct gnss_serial *gnss;
- struct ice_pf *pf;
- pf = (struct ice_pf *)tty->driver->driver_state;
if (!pf)
return -EFAULT;
- /* Clear the pointer in case something fails */
- tty->driver_data = NULL;
-
- /* Get the serial object associated with this tty pointer */
- gnss = pf->gnss_serial[tty->index];
- if (!gnss) {
- /* Initialize GNSS struct on the first device open */
- gnss = ice_gnss_struct_init(pf, tty->index);
- if (!gnss)
- return -ENOMEM;
- }
+ if (!test_bit(ICE_FLAG_GNSS, pf->flags))
+ return -EFAULT;
- mutex_lock(&gnss->gnss_mutex);
+ gnss = pf->gnss_serial;
+ if (!gnss)
+ return -ENODEV;
- /* Save our structure within the tty structure */
- tty->driver_data = gnss;
- gnss->tty = tty;
- gnss->open_count++;
kthread_queue_delayed_work(gnss->kworker, &gnss->read_work, 0);
- mutex_unlock(&gnss->gnss_mutex);
-
return 0;
}
/**
- * ice_gnss_tty_close - Cleanup GNSS structures on tty device close
- * @tty: pointer to the tty_struct
- * @filp: pointer to the file
+ * ice_gnss_close - Close GNSS device
+ * @gdev: pointer to the gnss device struct
+ *
+ * Close GNSS device, cancel worker, stop filling the read buffer.
*/
-static void ice_gnss_tty_close(struct tty_struct *tty, struct file *filp)
+static void ice_gnss_close(struct gnss_device *gdev)
{
- struct gnss_serial *gnss = tty->driver_data;
- struct ice_pf *pf;
-
- if (!gnss)
- return;
+ struct ice_pf *pf = gnss_get_drvdata(gdev);
+ struct gnss_serial *gnss;
- pf = (struct ice_pf *)tty->driver->driver_state;
if (!pf)
return;
- mutex_lock(&gnss->gnss_mutex);
-
- if (!gnss->open_count) {
- /* Port was never opened */
- dev_err(ice_pf_to_dev(pf), "GNSS port not opened\n");
- goto exit;
- }
+ gnss = pf->gnss_serial;
+ if (!gnss)
+ return;
- gnss->open_count--;
- if (gnss->open_count <= 0) {
- /* Port is in shutdown state */
- kthread_cancel_delayed_work_sync(&gnss->read_work);
- }
-exit:
- mutex_unlock(&gnss->gnss_mutex);
+ kthread_cancel_work_sync(&gnss->write_work);
+ kthread_cancel_delayed_work_sync(&gnss->read_work);
}
/**
- * ice_gnss_tty_write - Write GNSS data
- * @tty: pointer to the tty_struct
+ * ice_gnss_write - Write to GNSS device
+ * @gdev: pointer to the gnss device struct
* @buf: pointer to the user data
- * @count: the number of characters queued to be sent to the HW
+ * @count: size of the buffer to be sent to the GNSS device
*
- * The write function call is called by the user when there is data to be sent
- * to the hardware. First the tty core receives the call, and then it passes the
- * data on to the tty driver's write function. The tty core also tells the tty
- * driver the size of the data being sent.
- * If any errors happen during the write call, a negative error value should be
- * returned instead of the number of characters queued to be written.
+ * Return:
+ * * number of written bytes - success
+ * * negative - error code
*/
static int
-ice_gnss_tty_write(struct tty_struct *tty, const unsigned char *buf, int count)
+ice_gnss_write(struct gnss_device *gdev, const unsigned char *buf,
+ size_t count)
{
+ struct ice_pf *pf = gnss_get_drvdata(gdev);
struct gnss_write_buf *write_buf;
struct gnss_serial *gnss;
unsigned char *cmd_buf;
- struct ice_pf *pf;
int err = count;
/* We cannot write a single byte using our I2C implementation. */
if (count <= 1 || count > ICE_GNSS_TTY_WRITE_BUF)
return -EINVAL;
- gnss = tty->driver_data;
- if (!gnss)
- return -EFAULT;
-
- pf = (struct ice_pf *)tty->driver->driver_state;
if (!pf)
return -EFAULT;
- /* Only allow to write on TTY 0 */
- if (gnss != pf->gnss_serial[0])
- return -EIO;
-
- mutex_lock(&gnss->gnss_mutex);
+ if (!test_bit(ICE_FLAG_GNSS, pf->flags))
+ return -EFAULT;
- if (!gnss->open_count) {
- err = -EINVAL;
- goto exit;
- }
+ gnss = pf->gnss_serial;
+ if (!gnss)
+ return -ENODEV;
cmd_buf = kcalloc(count, sizeof(*buf), GFP_KERNEL);
- if (!cmd_buf) {
- err = -ENOMEM;
- goto exit;
- }
+ if (!cmd_buf)
+ return -ENOMEM;
memcpy(cmd_buf, buf, count);
-
- /* Send the data out to a hardware port */
write_buf = kzalloc(sizeof(*write_buf), GFP_KERNEL);
if (!write_buf) {
kfree(cmd_buf);
- err = -ENOMEM;
- goto exit;
+ return -ENOMEM;
}
write_buf->buf = cmd_buf;
@@ -373,141 +339,89 @@ ice_gnss_tty_write(struct tty_struct *tty, const unsigned char *buf, int count)
INIT_LIST_HEAD(&write_buf->queue);
list_add_tail(&write_buf->queue, &gnss->queue);
kthread_queue_work(gnss->kworker, &gnss->write_work);
-exit:
- mutex_unlock(&gnss->gnss_mutex);
+
return err;
}
+static const struct gnss_operations ice_gnss_ops = {
+ .open = ice_gnss_open,
+ .close = ice_gnss_close,
+ .write_raw = ice_gnss_write,
+};
+
/**
- * ice_gnss_tty_write_room - Returns the number of characters to be written.
- * @tty: pointer to the tty_struct
+ * ice_gnss_register - Register GNSS receiver
+ * @pf: Board private structure
+ *
+ * Allocate and register GNSS receiver in the Linux GNSS subsystem.
*
- * This routine returns the number of characters the tty driver will accept
- * for queuing to be written, or 0 if either the TTY is not open or the user
- * tries to write to a TTY other than the first.
+ * Return:
+ * * 0 - success
+ * * negative - error code
*/
-static unsigned int ice_gnss_tty_write_room(struct tty_struct *tty)
+static int ice_gnss_register(struct ice_pf *pf)
{
- struct gnss_serial *gnss = tty->driver_data;
-
- /* Only allow to write on TTY 0 */
- if (!gnss || gnss != gnss->back->gnss_serial[0])
- return 0;
-
- mutex_lock(&gnss->gnss_mutex);
+ struct gnss_device *gdev;
+ int ret;
+
+ gdev = gnss_allocate_device(ice_pf_to_dev(pf));
+ if (!gdev) {
+ dev_err(ice_pf_to_dev(pf),
+ "gnss_allocate_device returns NULL\n");
+ return -ENOMEM;
+ }
- if (!gnss->open_count) {
- mutex_unlock(&gnss->gnss_mutex);
- return 0;
+ gdev->ops = &ice_gnss_ops;
+ gdev->type = GNSS_TYPE_UBX;
+ gnss_set_drvdata(gdev, pf);
+ ret = gnss_register_device(gdev);
+ if (ret) {
+ dev_err(ice_pf_to_dev(pf), "gnss_register_device err=%d\n",
+ ret);
+ gnss_put_device(gdev);
+ } else {
+ pf->gnss_dev = gdev;
}
- mutex_unlock(&gnss->gnss_mutex);
- return ICE_GNSS_TTY_WRITE_BUF;
+ return ret;
}
-static const struct tty_operations tty_gps_ops = {
- .open = ice_gnss_tty_open,
- .close = ice_gnss_tty_close,
- .write = ice_gnss_tty_write,
- .write_room = ice_gnss_tty_write_room,
-};
-
/**
- * ice_gnss_create_tty_driver - Create a TTY driver for GNSS
+ * ice_gnss_deregister - Deregister GNSS receiver
* @pf: Board private structure
+ *
+ * Deregister GNSS receiver from the Linux GNSS subsystem and
+ * release its resources.
*/
-static struct tty_driver *ice_gnss_create_tty_driver(struct ice_pf *pf)
+static void ice_gnss_deregister(struct ice_pf *pf)
{
- struct device *dev = ice_pf_to_dev(pf);
- const int ICE_TTYDRV_NAME_MAX = 14;
- struct tty_driver *tty_driver;
- char *ttydrv_name;
- unsigned int i;
- int err;
-
- tty_driver = tty_alloc_driver(ICE_GNSS_TTY_MINOR_DEVICES,
- TTY_DRIVER_REAL_RAW);
- if (IS_ERR(tty_driver)) {
- dev_err(dev, "Failed to allocate memory for GNSS TTY\n");
- return NULL;
- }
-
- ttydrv_name = kzalloc(ICE_TTYDRV_NAME_MAX, GFP_KERNEL);
- if (!ttydrv_name) {
- tty_driver_kref_put(tty_driver);
- return NULL;
+ if (pf->gnss_dev) {
+ gnss_deregister_device(pf->gnss_dev);
+ gnss_put_device(pf->gnss_dev);
+ pf->gnss_dev = NULL;
}
-
- snprintf(ttydrv_name, ICE_TTYDRV_NAME_MAX, "ttyGNSS_%02x%02x_",
- (u8)pf->pdev->bus->number, (u8)PCI_SLOT(pf->pdev->devfn));
-
- /* Initialize the tty driver*/
- tty_driver->owner = THIS_MODULE;
- tty_driver->driver_name = dev_driver_string(dev);
- tty_driver->name = (const char *)ttydrv_name;
- tty_driver->type = TTY_DRIVER_TYPE_SERIAL;
- tty_driver->subtype = SERIAL_TYPE_NORMAL;
- tty_driver->init_termios = tty_std_termios;
- tty_driver->init_termios.c_iflag &= ~INLCR;
- tty_driver->init_termios.c_iflag |= IGNCR;
- tty_driver->init_termios.c_oflag &= ~OPOST;
- tty_driver->init_termios.c_lflag &= ~ICANON;
- tty_driver->init_termios.c_cflag &= ~(CSIZE | CBAUD | CBAUDEX);
- /* baud rate 9600 */
- tty_termios_encode_baud_rate(&tty_driver->init_termios, 9600, 9600);
- tty_driver->driver_state = pf;
- tty_set_operations(tty_driver, &tty_gps_ops);
-
- for (i = 0; i < ICE_GNSS_TTY_MINOR_DEVICES; i++) {
- pf->gnss_tty_port[i] = kzalloc(sizeof(*pf->gnss_tty_port[i]),
- GFP_KERNEL);
- if (!pf->gnss_tty_port[i])
- goto err_out;
-
- pf->gnss_serial[i] = NULL;
-
- tty_port_init(pf->gnss_tty_port[i]);
- tty_port_link_device(pf->gnss_tty_port[i], tty_driver, i);
- }
-
- err = tty_register_driver(tty_driver);
- if (err) {
- dev_err(dev, "Failed to register TTY driver err=%d\n", err);
- goto err_out;
- }
-
- for (i = 0; i < ICE_GNSS_TTY_MINOR_DEVICES; i++)
- dev_info(dev, "%s%d registered\n", ttydrv_name, i);
-
- return tty_driver;
-
-err_out:
- while (i--) {
- tty_port_destroy(pf->gnss_tty_port[i]);
- kfree(pf->gnss_tty_port[i]);
- }
- kfree(ttydrv_name);
- tty_driver_kref_put(pf->ice_gnss_tty_driver);
-
- return NULL;
}
/**
- * ice_gnss_init - Initialize GNSS TTY support
+ * ice_gnss_init - Initialize GNSS support
* @pf: Board private structure
*/
void ice_gnss_init(struct ice_pf *pf)
{
- struct tty_driver *tty_driver;
+ int ret;
- tty_driver = ice_gnss_create_tty_driver(pf);
- if (!tty_driver)
+ pf->gnss_serial = ice_gnss_struct_init(pf);
+ if (!pf->gnss_serial)
return;
- pf->ice_gnss_tty_driver = tty_driver;
-
- set_bit(ICE_FLAG_GNSS, pf->flags);
- dev_info(ice_pf_to_dev(pf), "GNSS TTY init successful\n");
+ ret = ice_gnss_register(pf);
+ if (!ret) {
+ set_bit(ICE_FLAG_GNSS, pf->flags);
+ dev_info(ice_pf_to_dev(pf), "GNSS init successful\n");
+ } else {
+ ice_gnss_exit(pf);
+ dev_err(ice_pf_to_dev(pf), "GNSS init failure\n");
+ }
}
/**
@@ -516,31 +430,20 @@ void ice_gnss_init(struct ice_pf *pf)
*/
void ice_gnss_exit(struct ice_pf *pf)
{
- unsigned int i;
+ ice_gnss_deregister(pf);
+ clear_bit(ICE_FLAG_GNSS, pf->flags);
- if (!test_bit(ICE_FLAG_GNSS, pf->flags) || !pf->ice_gnss_tty_driver)
- return;
-
- for (i = 0; i < ICE_GNSS_TTY_MINOR_DEVICES; i++) {
- if (pf->gnss_tty_port[i]) {
- tty_port_destroy(pf->gnss_tty_port[i]);
- kfree(pf->gnss_tty_port[i]);
- }
+ if (pf->gnss_serial) {
+ struct gnss_serial *gnss = pf->gnss_serial;
- if (pf->gnss_serial[i]) {
- struct gnss_serial *gnss = pf->gnss_serial[i];
+ kthread_cancel_work_sync(&gnss->write_work);
+ kthread_cancel_delayed_work_sync(&gnss->read_work);
+ kthread_destroy_worker(gnss->kworker);
+ gnss->kworker = NULL;
- kthread_cancel_work_sync(&gnss->write_work);
- kthread_cancel_delayed_work_sync(&gnss->read_work);
- kfree(gnss);
- pf->gnss_serial[i] = NULL;
- }
+ kfree(gnss);
+ pf->gnss_serial = NULL;
}
-
- tty_unregister_driver(pf->ice_gnss_tty_driver);
- kfree(pf->ice_gnss_tty_driver->name);
- tty_driver_kref_put(pf->ice_gnss_tty_driver);
- pf->ice_gnss_tty_driver = NULL;
}
/**
diff --git a/drivers/net/ethernet/intel/ice/ice_gnss.h b/drivers/net/ethernet/intel/ice/ice_gnss.h
index f454dd1d9285..31db0701d13f 100644
--- a/drivers/net/ethernet/intel/ice/ice_gnss.h
+++ b/drivers/net/ethernet/intel/ice/ice_gnss.h
@@ -4,15 +4,8 @@
#ifndef _ICE_GNSS_H_
#define _ICE_GNSS_H_
-#include <linux/tty.h>
-#include <linux/tty_flip.h>
-
#define ICE_E810T_GNSS_I2C_BUS 0x2
#define ICE_GNSS_TIMER_DELAY_TIME (HZ / 10) /* 0.1 second per message */
-/* Create 2 minor devices, both using the same GNSS module. First one is RW,
- * second one RO.
- */
-#define ICE_GNSS_TTY_MINOR_DEVICES 2
#define ICE_GNSS_TTY_WRITE_BUF 250
#define ICE_MAX_I2C_DATA_SIZE FIELD_MAX(ICE_AQC_I2C_DATA_SIZE_M)
#define ICE_MAX_I2C_WRITE_BYTES 4
@@ -36,13 +29,9 @@ struct gnss_write_buf {
unsigned char *buf;
};
-
/**
* struct gnss_serial - data used to initialize the GNSS receiver
* @back: back pointer to PF
- * @tty: pointer to the tty for this device
- * @open_count: number of times this port has been opened
- * @gnss_mutex: gnss_mutex used to protect GNSS serial operations
* @kworker: kwork thread for handling periodic work
* @read_work: read_work function for handling GNSS reads
* @write_work: write_work function for handling GNSS writes
@@ -50,16 +39,13 @@ struct gnss_write_buf {
*/
struct gnss_serial {
struct ice_pf *back;
- struct tty_struct *tty;
- int open_count;
- struct mutex gnss_mutex; /* protects GNSS serial structure */
struct kthread_worker *kworker;
struct kthread_delayed_work read_work;
struct kthread_work write_work;
struct list_head queue;
};
-#if IS_ENABLED(CONFIG_TTY)
+#if IS_ENABLED(CONFIG_ICE_GNSS)
void ice_gnss_init(struct ice_pf *pf);
void ice_gnss_exit(struct ice_pf *pf);
bool ice_gnss_is_gps_present(struct ice_hw *hw);
@@ -70,5 +56,5 @@ static inline bool ice_gnss_is_gps_present(struct ice_hw *hw)
{
return false;
}
-#endif /* IS_ENABLED(CONFIG_TTY) */
+#endif /* IS_ENABLED(CONFIG_ICE_GNSS) */
#endif /* _ICE_GNSS_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c
index 895c32bcc8b5..e6bc2285071e 100644
--- a/drivers/net/ethernet/intel/ice/ice_idc.c
+++ b/drivers/net/ethernet/intel/ice/ice_idc.c
@@ -6,6 +6,8 @@
#include "ice_lib.h"
#include "ice_dcb_lib.h"
+static DEFINE_XARRAY_ALLOC1(ice_aux_id);
+
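DEFINE_XARRAY_ALLOC1() declares an XArray whose allocating API starts handing out IDs at 1, so ID 0 is never used; a micro-sketch of the allocate/erase pairing relied on below (illustrative only):

	u32 id;

	if (!xa_alloc(&ice_aux_id, &id, NULL, XA_LIMIT(1, INT_MAX),
		      GFP_KERNEL)) {
		/* id is now reserved (id >= 1) until released */
		xa_erase(&ice_aux_id, id);
	}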
/**
* ice_get_auxiliary_drv - retrieve iidc_auxiliary_drv struct
* @pf: pointer to PF struct
@@ -246,6 +248,17 @@ static int ice_reserve_rdma_qvector(struct ice_pf *pf)
}
/**
+ * ice_free_rdma_qvector - free vector resources reserved for RDMA driver
+ * @pf: board private structure to initialize
+ */
+static void ice_free_rdma_qvector(struct ice_pf *pf)
+{
+ pf->num_avail_sw_msix -= pf->num_rdma_msix;
+ ice_free_res(pf->irq_tracker, pf->rdma_base_vector,
+ ICE_RES_RDMA_VEC_ID);
+}
+
+/**
* ice_adev_release - function to be mapped to AUX dev's release op
* @dev: pointer to device to free
*/
@@ -331,12 +344,48 @@ int ice_init_rdma(struct ice_pf *pf)
struct device *dev = &pf->pdev->dev;
int ret;
+ if (!ice_is_rdma_ena(pf)) {
+ dev_warn(dev, "RDMA is not supported on this device\n");
+ return 0;
+ }
+
+ ret = xa_alloc(&ice_aux_id, &pf->aux_idx, NULL, XA_LIMIT(1, INT_MAX),
+ GFP_KERNEL);
+ if (ret) {
+ dev_err(dev, "Failed to allocate device ID for AUX driver\n");
+ return -ENOMEM;
+ }
+
/* Reserve vector resources */
ret = ice_reserve_rdma_qvector(pf);
if (ret < 0) {
dev_err(dev, "failed to reserve vectors for RDMA\n");
- return ret;
+ goto err_reserve_rdma_qvector;
}
pf->rdma_mode |= IIDC_RDMA_PROTOCOL_ROCEV2;
- return ice_plug_aux_dev(pf);
+ ret = ice_plug_aux_dev(pf);
+ if (ret)
+ goto err_plug_aux_dev;
+ return 0;
+
+err_plug_aux_dev:
+ ice_free_rdma_qvector(pf);
+err_reserve_rdma_qvector:
+ pf->adev = NULL;
+ xa_erase(&ice_aux_id, pf->aux_idx);
+ return ret;
+}
+
+/**
+ * ice_deinit_rdma - deinitialize RDMA on PF
+ * @pf: ptr to ice_pf
+ */
+void ice_deinit_rdma(struct ice_pf *pf)
+{
+ if (!ice_is_rdma_ena(pf))
+ return;
+
+ ice_unplug_aux_dev(pf);
+ ice_free_rdma_qvector(pf);
+ xa_erase(&ice_aux_id, pf->aux_idx);
}
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index a596e07b3ce9..781475480ff2 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -166,14 +166,14 @@ static void ice_vsi_set_num_desc(struct ice_vsi *vsi)
/**
* ice_vsi_set_num_qs - Set number of queues, descriptors and vectors for a VSI
* @vsi: the VSI being configured
- * @vf: the VF associated with this VSI, if any
*
*/
-static void ice_vsi_set_num_qs(struct ice_vsi *vsi, struct ice_vf *vf)
+static void ice_vsi_set_num_qs(struct ice_vsi *vsi)
{
enum ice_vsi_type vsi_type = vsi->type;
struct ice_pf *pf = vsi->back;
+ struct ice_vf *vf = vsi->vf;
if (WARN_ON(vsi_type == ICE_VSI_VF && !vf))
return;
@@ -282,10 +282,10 @@ static int ice_get_free_slot(void *array, int size, int curr)
}
/**
- * ice_vsi_delete - delete a VSI from the switch
+ * ice_vsi_delete_from_hw - delete a VSI from the switch
* @vsi: pointer to VSI being removed
*/
-void ice_vsi_delete(struct ice_vsi *vsi)
+static void ice_vsi_delete_from_hw(struct ice_vsi *vsi)
{
struct ice_pf *pf = vsi->back;
struct ice_vsi_ctx *ctxt;
@@ -348,47 +348,144 @@ static void ice_vsi_free_arrays(struct ice_vsi *vsi)
}
/**
- * ice_vsi_clear - clean up and deallocate the provided VSI
+ * ice_vsi_free_stats - Free the ring statistics structures
+ * @vsi: VSI pointer
+ */
+static void ice_vsi_free_stats(struct ice_vsi *vsi)
+{
+ struct ice_vsi_stats *vsi_stat;
+ struct ice_pf *pf = vsi->back;
+ int i;
+
+ if (vsi->type == ICE_VSI_CHNL)
+ return;
+ if (!pf->vsi_stats)
+ return;
+
+ vsi_stat = pf->vsi_stats[vsi->idx];
+ if (!vsi_stat)
+ return;
+
+ ice_for_each_alloc_txq(vsi, i) {
+ if (vsi_stat->tx_ring_stats[i]) {
+ kfree_rcu(vsi_stat->tx_ring_stats[i], rcu);
+ WRITE_ONCE(vsi_stat->tx_ring_stats[i], NULL);
+ }
+ }
+
+ ice_for_each_alloc_rxq(vsi, i) {
+ if (vsi_stat->rx_ring_stats[i]) {
+ kfree_rcu(vsi_stat->rx_ring_stats[i], rcu);
+ WRITE_ONCE(vsi_stat->rx_ring_stats[i], NULL);
+ }
+ }
+
+ kfree(vsi_stat->tx_ring_stats);
+ kfree(vsi_stat->rx_ring_stats);
+ kfree(vsi_stat);
+ pf->vsi_stats[vsi->idx] = NULL;
+}
+
+/**
+ * ice_vsi_alloc_ring_stats - Allocates Tx and Rx ring stats for the VSI
+ * @vsi: VSI which is having stats allocated
+ */
+static int ice_vsi_alloc_ring_stats(struct ice_vsi *vsi)
+{
+ struct ice_ring_stats **tx_ring_stats;
+ struct ice_ring_stats **rx_ring_stats;
+ struct ice_vsi_stats *vsi_stats;
+ struct ice_pf *pf = vsi->back;
+ u16 i;
+
+ vsi_stats = pf->vsi_stats[vsi->idx];
+ tx_ring_stats = vsi_stats->tx_ring_stats;
+ rx_ring_stats = vsi_stats->rx_ring_stats;
+
+ /* Allocate Tx ring stats */
+ ice_for_each_alloc_txq(vsi, i) {
+ struct ice_ring_stats *ring_stats;
+ struct ice_tx_ring *ring;
+
+ ring = vsi->tx_rings[i];
+ ring_stats = tx_ring_stats[i];
+
+ if (!ring_stats) {
+ ring_stats = kzalloc(sizeof(*ring_stats), GFP_KERNEL);
+ if (!ring_stats)
+ goto err_out;
+
+ WRITE_ONCE(tx_ring_stats[i], ring_stats);
+ }
+
+ ring->ring_stats = ring_stats;
+ }
+
+ /* Allocate Rx ring stats */
+ ice_for_each_alloc_rxq(vsi, i) {
+ struct ice_ring_stats *ring_stats;
+ struct ice_rx_ring *ring;
+
+ ring = vsi->rx_rings[i];
+ ring_stats = rx_ring_stats[i];
+
+ if (!ring_stats) {
+ ring_stats = kzalloc(sizeof(*ring_stats), GFP_KERNEL);
+ if (!ring_stats)
+ goto err_out;
+
+ WRITE_ONCE(rx_ring_stats[i], ring_stats);
+ }
+
+ ring->ring_stats = ring_stats;
+ }
+
+ return 0;
+
+err_out:
+ ice_vsi_free_stats(vsi);
+ return -ENOMEM;
+}
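The WRITE_ONCE()/kfree_rcu() pairing lets statistics readers run without locks; a hypothetical reader sketch (q, vsi_stat and the stats.pkts field name are assumed context from struct ice_ring_stats):

	struct ice_ring_stats *stats;
	u64 pkts = 0;

	rcu_read_lock();
	stats = READ_ONCE(vsi_stat->rx_ring_stats[q]);
	if (stats)
		pkts = stats->stats.pkts;
	rcu_read_unlock();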
+
+/**
+ * ice_vsi_free - clean up and deallocate the provided VSI
* @vsi: pointer to VSI being cleared
*
* This deallocates the VSI's queue resources, removes it from the PF's
* VSI array if necessary, and deallocates the VSI
- *
- * Returns 0 on success, negative on failure
*/
-int ice_vsi_clear(struct ice_vsi *vsi)
+static void ice_vsi_free(struct ice_vsi *vsi)
{
struct ice_pf *pf = NULL;
struct device *dev;
- if (!vsi)
- return 0;
-
- if (!vsi->back)
- return -EINVAL;
+ if (!vsi || !vsi->back)
+ return;
pf = vsi->back;
dev = ice_pf_to_dev(pf);
if (!pf->vsi[vsi->idx] || pf->vsi[vsi->idx] != vsi) {
dev_dbg(dev, "vsi does not exist at pf->vsi[%d]\n", vsi->idx);
- return -EINVAL;
+ return;
}
mutex_lock(&pf->sw_mutex);
/* updates the PF for this cleared VSI */
pf->vsi[vsi->idx] = NULL;
- if (vsi->idx < pf->next_vsi && vsi->type != ICE_VSI_CTRL)
- pf->next_vsi = vsi->idx;
- if (vsi->idx < pf->next_vsi && vsi->type == ICE_VSI_CTRL && vsi->vf)
- pf->next_vsi = vsi->idx;
+ pf->next_vsi = vsi->idx;
+ ice_vsi_free_stats(vsi);
ice_vsi_free_arrays(vsi);
mutex_unlock(&pf->sw_mutex);
devm_kfree(dev, vsi);
+}
- return 0;
+void ice_vsi_delete(struct ice_vsi *vsi)
+{
+ ice_vsi_delete_from_hw(vsi);
+ ice_vsi_free(vsi);
}
/**
@@ -461,6 +558,10 @@ static int ice_vsi_alloc_stat_arrays(struct ice_vsi *vsi)
if (!pf->vsi_stats)
return -ENOENT;
+ if (pf->vsi_stats[vsi->idx])
+ /* realloc will happen in rebuild path */
+ return 0;
+
vsi_stat = kzalloc(sizeof(*vsi_stat), GFP_KERNEL);
if (!vsi_stat)
return -ENOMEM;
@@ -491,128 +592,93 @@ err_alloc_tx:
}
/**
- * ice_vsi_alloc - Allocates the next available struct VSI in the PF
- * @pf: board private structure
- * @vsi_type: type of VSI
+ * ice_vsi_alloc_def - set default values for already allocated VSI
+ * @vsi: ptr to VSI
* @ch: ptr to channel
- * @vf: VF for ICE_VSI_VF and ICE_VSI_CTRL
- *
- * The VF pointer is used for ICE_VSI_VF and ICE_VSI_CTRL. For ICE_VSI_CTRL,
- * it may be NULL in the case there is no association with a VF. For
- * ICE_VSI_VF the VF pointer *must not* be NULL.
- *
- * returns a pointer to a VSI on success, NULL on failure.
*/
-static struct ice_vsi *
-ice_vsi_alloc(struct ice_pf *pf, enum ice_vsi_type vsi_type,
- struct ice_channel *ch, struct ice_vf *vf)
+static int
+ice_vsi_alloc_def(struct ice_vsi *vsi, struct ice_channel *ch)
{
- struct device *dev = ice_pf_to_dev(pf);
- struct ice_vsi *vsi = NULL;
-
- if (WARN_ON(vsi_type == ICE_VSI_VF && !vf))
- return NULL;
-
- /* Need to protect the allocation of the VSIs at the PF level */
- mutex_lock(&pf->sw_mutex);
-
- /* If we have already allocated our maximum number of VSIs,
- * pf->next_vsi will be ICE_NO_VSI. If not, pf->next_vsi index
- * is available to be populated
- */
- if (pf->next_vsi == ICE_NO_VSI) {
- dev_dbg(dev, "out of VSI slots!\n");
- goto unlock_pf;
+ if (vsi->type != ICE_VSI_CHNL) {
+ ice_vsi_set_num_qs(vsi);
+ if (ice_vsi_alloc_arrays(vsi))
+ return -ENOMEM;
}
- vsi = devm_kzalloc(dev, sizeof(*vsi), GFP_KERNEL);
- if (!vsi)
- goto unlock_pf;
-
- vsi->type = vsi_type;
- vsi->back = pf;
- set_bit(ICE_VSI_DOWN, vsi->state);
-
- if (vsi_type == ICE_VSI_VF)
- ice_vsi_set_num_qs(vsi, vf);
- else if (vsi_type != ICE_VSI_CHNL)
- ice_vsi_set_num_qs(vsi, NULL);
-
switch (vsi->type) {
case ICE_VSI_SWITCHDEV_CTRL:
- if (ice_vsi_alloc_arrays(vsi))
- goto err_rings;
-
/* Setup eswitch MSIX irq handler for VSI */
vsi->irq_handler = ice_eswitch_msix_clean_rings;
break;
case ICE_VSI_PF:
- if (ice_vsi_alloc_arrays(vsi))
- goto err_rings;
-
/* Setup default MSIX irq handler for VSI */
vsi->irq_handler = ice_msix_clean_rings;
break;
case ICE_VSI_CTRL:
- if (ice_vsi_alloc_arrays(vsi))
- goto err_rings;
-
/* Setup ctrl VSI MSIX irq handler */
vsi->irq_handler = ice_msix_clean_ctrl_vsi;
-
- /* For the PF control VSI this is NULL, for the VF control VSI
- * this will be the first VF to allocate it.
- */
- vsi->vf = vf;
- break;
- case ICE_VSI_VF:
- if (ice_vsi_alloc_arrays(vsi))
- goto err_rings;
- vsi->vf = vf;
break;
case ICE_VSI_CHNL:
if (!ch)
- goto err_rings;
+ return -EINVAL;
+
vsi->num_rxq = ch->num_rxq;
vsi->num_txq = ch->num_txq;
vsi->next_base_q = ch->base_q;
break;
+ case ICE_VSI_VF:
case ICE_VSI_LB:
- if (ice_vsi_alloc_arrays(vsi))
- goto err_rings;
break;
default:
- dev_warn(dev, "Unknown VSI type %d\n", vsi->type);
- goto unlock_pf;
+ ice_vsi_free_arrays(vsi);
+ return -EINVAL;
}
- if (vsi->type == ICE_VSI_CTRL && !vf) {
- /* Use the last VSI slot as the index for PF control VSI */
- vsi->idx = pf->num_alloc_vsi - 1;
- pf->ctrl_vsi_idx = vsi->idx;
- pf->vsi[vsi->idx] = vsi;
- } else {
- /* fill slot and make note of the index */
- vsi->idx = pf->next_vsi;
- pf->vsi[pf->next_vsi] = vsi;
+ return 0;
+}
+
+/**
+ * ice_vsi_alloc - Allocates the next available struct VSI in the PF
+ * @pf: board private structure
+ *
+ * Reserves a VSI index from the PF and allocates an empty VSI structure
+ * without a type. The VSI structure must later be initialized by calling
+ * ice_vsi_cfg().
+ *
+ * returns a pointer to a VSI on success, NULL on failure.
+ */
+static struct ice_vsi *ice_vsi_alloc(struct ice_pf *pf)
+{
+ struct device *dev = ice_pf_to_dev(pf);
+ struct ice_vsi *vsi = NULL;
- /* prepare pf->next_vsi for next use */
- pf->next_vsi = ice_get_free_slot(pf->vsi, pf->num_alloc_vsi,
- pf->next_vsi);
+ /* Need to protect the allocation of the VSIs at the PF level */
+ mutex_lock(&pf->sw_mutex);
+
+ /* If we have already allocated our maximum number of VSIs,
+ * pf->next_vsi will be ICE_NO_VSI. If not, pf->next_vsi index
+ * is available to be populated
+ */
+ if (pf->next_vsi == ICE_NO_VSI) {
+ dev_dbg(dev, "out of VSI slots!\n");
+ goto unlock_pf;
}
- if (vsi->type == ICE_VSI_CTRL && vf)
- vf->ctrl_vsi_idx = vsi->idx;
+ vsi = devm_kzalloc(dev, sizeof(*vsi), GFP_KERNEL);
+ if (!vsi)
+ goto unlock_pf;
- /* allocate memory for Tx/Rx ring stat pointers */
- if (ice_vsi_alloc_stat_arrays(vsi))
- goto err_rings;
+ vsi->back = pf;
+ set_bit(ICE_VSI_DOWN, vsi->state);
- goto unlock_pf;
+ /* fill slot and make note of the index */
+ vsi->idx = pf->next_vsi;
+ pf->vsi[pf->next_vsi] = vsi;
+
+ /* prepare pf->next_vsi for next use */
+ pf->next_vsi = ice_get_free_slot(pf->vsi, pf->num_alloc_vsi,
+ pf->next_vsi);
-err_rings:
- devm_kfree(dev, vsi);
- vsi = NULL;
unlock_pf:
mutex_unlock(&pf->sw_mutex);
return vsi;
@@ -1177,12 +1243,15 @@ ice_chnl_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
/**
* ice_vsi_init - Create and initialize a VSI
* @vsi: the VSI being configured
- * @init_vsi: is this call creating a VSI
+ * @vsi_flags: VSI configuration flags
+ *
+ * Set ICE_VSI_FLAG_INIT to initialize a new VSI context, clear it to
+ * reconfigure an existing context.
*
* This initializes a VSI context depending on the VSI type to be added and
* passes it down to the add_vsi aq command to create a new VSI.
*/
-static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi)
+static int ice_vsi_init(struct ice_vsi *vsi, u32 vsi_flags)
{
struct ice_pf *pf = vsi->back;
struct ice_hw *hw = &pf->hw;
@@ -1244,7 +1313,7 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi)
/* if updating VSI context, make sure to set valid_section:
* to indicate which section of VSI context being updated
*/
- if (!init_vsi)
+ if (!(vsi_flags & ICE_VSI_FLAG_INIT))
ctxt->info.valid_sections |=
cpu_to_le16(ICE_AQ_VSI_PROP_Q_OPT_VALID);
}
@@ -1257,7 +1326,8 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi)
if (ret)
goto out;
- if (!init_vsi) /* means VSI being updated */
+ if (!(vsi_flags & ICE_VSI_FLAG_INIT))
+ /* means VSI being updated */
/* we must indicate which sections of the VSI context are
* being modified
*/
@@ -1272,7 +1342,7 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi)
cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID);
}
- if (init_vsi) {
+ if (vsi_flags & ICE_VSI_FLAG_INIT) {
ret = ice_add_vsi(hw, vsi->idx, ctxt, NULL);
if (ret) {
dev_err(dev, "Add VSI failed, err %d\n", ret);
@@ -1436,7 +1506,7 @@ static int ice_get_vf_ctrl_res(struct ice_pf *pf, struct ice_vsi *vsi)
* ice_vsi_setup_vector_base - Set up the base vector for the given VSI
* @vsi: ptr to the VSI
*
- * This should only be called after ice_vsi_alloc() which allocates the
+ * This should only be called after ice_vsi_alloc_def() which allocates the
* corresponding SW VSI structure and initializes num_queue_pairs for the
* newly allocated VSI.
*
@@ -1584,106 +1654,6 @@ err_out:
}
/**
- * ice_vsi_free_stats - Free the ring statistics structures
- * @vsi: VSI pointer
- */
-static void ice_vsi_free_stats(struct ice_vsi *vsi)
-{
- struct ice_vsi_stats *vsi_stat;
- struct ice_pf *pf = vsi->back;
- int i;
-
- if (vsi->type == ICE_VSI_CHNL)
- return;
- if (!pf->vsi_stats)
- return;
-
- vsi_stat = pf->vsi_stats[vsi->idx];
- if (!vsi_stat)
- return;
-
- ice_for_each_alloc_txq(vsi, i) {
- if (vsi_stat->tx_ring_stats[i]) {
- kfree_rcu(vsi_stat->tx_ring_stats[i], rcu);
- WRITE_ONCE(vsi_stat->tx_ring_stats[i], NULL);
- }
- }
-
- ice_for_each_alloc_rxq(vsi, i) {
- if (vsi_stat->rx_ring_stats[i]) {
- kfree_rcu(vsi_stat->rx_ring_stats[i], rcu);
- WRITE_ONCE(vsi_stat->rx_ring_stats[i], NULL);
- }
- }
-
- kfree(vsi_stat->tx_ring_stats);
- kfree(vsi_stat->rx_ring_stats);
- kfree(vsi_stat);
- pf->vsi_stats[vsi->idx] = NULL;
-}
-
-/**
- * ice_vsi_alloc_ring_stats - Allocates Tx and Rx ring stats for the VSI
- * @vsi: VSI which is having stats allocated
- */
-static int ice_vsi_alloc_ring_stats(struct ice_vsi *vsi)
-{
- struct ice_ring_stats **tx_ring_stats;
- struct ice_ring_stats **rx_ring_stats;
- struct ice_vsi_stats *vsi_stats;
- struct ice_pf *pf = vsi->back;
- u16 i;
-
- vsi_stats = pf->vsi_stats[vsi->idx];
- tx_ring_stats = vsi_stats->tx_ring_stats;
- rx_ring_stats = vsi_stats->rx_ring_stats;
-
- /* Allocate Tx ring stats */
- ice_for_each_alloc_txq(vsi, i) {
- struct ice_ring_stats *ring_stats;
- struct ice_tx_ring *ring;
-
- ring = vsi->tx_rings[i];
- ring_stats = tx_ring_stats[i];
-
- if (!ring_stats) {
- ring_stats = kzalloc(sizeof(*ring_stats), GFP_KERNEL);
- if (!ring_stats)
- goto err_out;
-
- WRITE_ONCE(tx_ring_stats[i], ring_stats);
- }
-
- ring->ring_stats = ring_stats;
- }
-
- /* Allocate Rx ring stats */
- ice_for_each_alloc_rxq(vsi, i) {
- struct ice_ring_stats *ring_stats;
- struct ice_rx_ring *ring;
-
- ring = vsi->rx_rings[i];
- ring_stats = rx_ring_stats[i];
-
- if (!ring_stats) {
- ring_stats = kzalloc(sizeof(*ring_stats), GFP_KERNEL);
- if (!ring_stats)
- goto err_out;
-
- WRITE_ONCE(rx_ring_stats[i], ring_stats);
- }
-
- ring->ring_stats = ring_stats;
- }
-
- return 0;
-
-err_out:
- ice_vsi_free_stats(vsi);
- return -ENOMEM;
-}
-
-/**
* ice_vsi_manage_rss_lut - disable/enable RSS
* @vsi: the VSI being changed
* @ena: boolean value indicating if this is an enable or disable request
@@ -1992,8 +1962,8 @@ void ice_update_eth_stats(struct ice_vsi *vsi)
void ice_vsi_cfg_frame_size(struct ice_vsi *vsi)
{
if (!vsi->netdev || test_bit(ICE_FLAG_LEGACY_RX, vsi->back->flags)) {
- vsi->max_frame = ICE_AQ_SET_MAC_FRAME_SIZE_MAX;
- vsi->rx_buf_len = ICE_RXBUF_2048;
+ vsi->max_frame = ICE_MAX_FRAME_LEGACY_RX;
+ vsi->rx_buf_len = ICE_RXBUF_1664;
#if (PAGE_SIZE < 8192)
} else if (!ICE_2K_TOO_SMALL_WITH_PADDING &&
(vsi->netdev->mtu <= ETH_DATA_LEN)) {
@@ -2002,11 +1972,7 @@ void ice_vsi_cfg_frame_size(struct ice_vsi *vsi)
#endif
} else {
vsi->max_frame = ICE_AQ_SET_MAC_FRAME_SIZE_MAX;
-#if (PAGE_SIZE < 8192)
vsi->rx_buf_len = ICE_RXBUF_3072;
-#else
- vsi->rx_buf_len = ICE_RXBUF_2048;
-#endif
}
}
@@ -2645,54 +2611,97 @@ static void ice_set_agg_vsi(struct ice_vsi *vsi)
}
/**
- * ice_vsi_setup - Set up a VSI by a given type
- * @pf: board private structure
- * @pi: pointer to the port_info instance
- * @vsi_type: VSI type
- * @vf: pointer to VF to which this VSI connects. This field is used primarily
- * for the ICE_VSI_VF type. Other VSI types should pass NULL.
- * @ch: ptr to channel
- *
- * This allocates the sw VSI structure and its queue resources.
+ * ice_free_vf_ctrl_res - Free the VF control VSI resource
+ * @pf: pointer to PF structure
+ * @vsi: the VSI to free resources for
*
- * Returns pointer to the successfully allocated and configured VSI sw struct on
- * success, NULL on failure.
+ * Check if the VF control VSI resource is still in use. If no VF is using it
+ * any more, release the VSI resource. Otherwise, leave it to be cleaned up
+ * once no other VF uses it.
*/
-struct ice_vsi *
-ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
- enum ice_vsi_type vsi_type, struct ice_vf *vf,
- struct ice_channel *ch)
+static void ice_free_vf_ctrl_res(struct ice_pf *pf, struct ice_vsi *vsi)
+{
+ struct ice_vf *vf;
+ unsigned int bkt;
+
+ rcu_read_lock();
+ ice_for_each_vf_rcu(pf, bkt, vf) {
+ if (vf != vsi->vf && vf->ctrl_vsi_idx != ICE_NO_VSI) {
+ rcu_read_unlock();
+ return;
+ }
+ }
+ rcu_read_unlock();
+
+ /* No other VFs left that have control VSI. It is now safe to reclaim
+ * SW interrupts back to the common pool.
+ */
+ ice_free_res(pf->irq_tracker, vsi->base_vector,
+ ICE_RES_VF_CTRL_VEC_ID);
+ pf->num_avail_sw_msix += vsi->num_q_vectors;
+}
+
+static int ice_vsi_cfg_tc_lan(struct ice_pf *pf, struct ice_vsi *vsi)
{
u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
struct device *dev = ice_pf_to_dev(pf);
- struct ice_vsi *vsi;
int ret, i;
- if (vsi_type == ICE_VSI_CHNL)
- vsi = ice_vsi_alloc(pf, vsi_type, ch, NULL);
- else if (vsi_type == ICE_VSI_VF || vsi_type == ICE_VSI_CTRL)
- vsi = ice_vsi_alloc(pf, vsi_type, NULL, vf);
- else
- vsi = ice_vsi_alloc(pf, vsi_type, NULL, NULL);
+ /* configure VSI nodes based on number of queues and TC's */
+ ice_for_each_traffic_class(i) {
+ if (!(vsi->tc_cfg.ena_tc & BIT(i)))
+ continue;
- if (!vsi) {
- dev_err(dev, "could not allocate VSI\n");
- return NULL;
+ if (vsi->type == ICE_VSI_CHNL) {
+ if (!vsi->alloc_txq && vsi->num_txq)
+ max_txqs[i] = vsi->num_txq;
+ else
+ max_txqs[i] = pf->num_lan_tx;
+ } else {
+ max_txqs[i] = vsi->alloc_txq;
+ }
}
- vsi->port_info = pi;
+ dev_dbg(dev, "vsi->tc_cfg.ena_tc = %d\n", vsi->tc_cfg.ena_tc);
+ ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc,
+ max_txqs);
+ if (ret) {
+ dev_err(dev, "VSI %d failed lan queue config, error %d\n",
+ vsi->vsi_num, ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+/**
+ * ice_vsi_cfg_def - configure default VSI based on the type
+ * @vsi: pointer to VSI
+ * @params: the parameters to configure this VSI with
+ */
+static int
+ice_vsi_cfg_def(struct ice_vsi *vsi, struct ice_vsi_cfg_params *params)
+{
+ struct device *dev = ice_pf_to_dev(vsi->back);
+ struct ice_pf *pf = vsi->back;
+ int ret;
+
vsi->vsw = pf->first_sw;
- if (vsi->type == ICE_VSI_PF)
- vsi->ethtype = ETH_P_PAUSE;
+
+ ret = ice_vsi_alloc_def(vsi, params->ch);
+ if (ret)
+ return ret;
+
+ /* allocate memory for Tx/Rx ring stat pointers */
+ if (ice_vsi_alloc_stat_arrays(vsi))
+ goto unroll_vsi_alloc;
ice_alloc_fd_res(vsi);
- if (vsi_type != ICE_VSI_CHNL) {
- if (ice_vsi_get_qs(vsi)) {
- dev_err(dev, "Failed to allocate queues. vsi->idx = %d\n",
- vsi->idx);
- goto unroll_vsi_alloc;
- }
+ if (ice_vsi_get_qs(vsi)) {
+ dev_err(dev, "Failed to allocate queues. vsi->idx = %d\n",
+ vsi->idx);
+ goto unroll_vsi_alloc_stat;
}
/* set RSS capabilities */
@@ -2702,7 +2711,7 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
ice_vsi_set_tc_cfg(vsi);
/* create the VSI */
- ret = ice_vsi_init(vsi, true);
+ ret = ice_vsi_init(vsi, params->flags);
if (ret)
goto unroll_get_qs;
@@ -2733,6 +2742,14 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
goto unroll_vector_base;
ice_vsi_map_rings_to_vectors(vsi);
+ if (ice_is_xdp_ena_vsi(vsi)) {
+ ret = ice_vsi_determine_xdp_res(vsi);
+ if (ret)
+ goto unroll_vector_base;
+ ret = ice_prepare_xdp_rings(vsi, vsi->xdp_prog);
+ if (ret)
+ goto unroll_vector_base;
+ }
/* ICE_VSI_CTRL does not need RSS so skip RSS processing */
if (vsi->type != ICE_VSI_CTRL)
@@ -2797,30 +2814,156 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
goto unroll_vsi_init;
}
- /* configure VSI nodes based on number of queues and TC's */
- ice_for_each_traffic_class(i) {
- if (!(vsi->tc_cfg.ena_tc & BIT(i)))
- continue;
+ return 0;
- if (vsi->type == ICE_VSI_CHNL) {
- if (!vsi->alloc_txq && vsi->num_txq)
- max_txqs[i] = vsi->num_txq;
- else
- max_txqs[i] = pf->num_lan_tx;
+unroll_vector_base:
+ /* reclaim SW interrupts back to the common pool */
+ ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx);
+ pf->num_avail_sw_msix += vsi->num_q_vectors;
+unroll_alloc_q_vector:
+ ice_vsi_free_q_vectors(vsi);
+unroll_vsi_init:
+ ice_vsi_delete_from_hw(vsi);
+unroll_get_qs:
+ ice_vsi_put_qs(vsi);
+unroll_vsi_alloc_stat:
+ ice_vsi_free_stats(vsi);
+unroll_vsi_alloc:
+ ice_vsi_free_arrays(vsi);
+ return ret;
+}
+
+/**
+ * ice_vsi_cfg - configure a previously allocated VSI
+ * @vsi: pointer to VSI
+ * @params: parameters used to configure this VSI
+ */
+int ice_vsi_cfg(struct ice_vsi *vsi, struct ice_vsi_cfg_params *params)
+{
+ struct ice_pf *pf = vsi->back;
+ int ret;
+
+ if (WARN_ON(params->type == ICE_VSI_VF && !params->vf))
+ return -EINVAL;
+
+ vsi->type = params->type;
+ vsi->port_info = params->pi;
+
+ /* For VSIs which don't have a connected VF, this will be NULL */
+ vsi->vf = params->vf;
+
+ ret = ice_vsi_cfg_def(vsi, params);
+ if (ret)
+ return ret;
+
+ ret = ice_vsi_cfg_tc_lan(vsi->back, vsi);
+ if (ret)
+ ice_vsi_decfg(vsi);
+
+ if (vsi->type == ICE_VSI_CTRL) {
+ if (vsi->vf) {
+ WARN_ON(vsi->vf->ctrl_vsi_idx != ICE_NO_VSI);
+ vsi->vf->ctrl_vsi_idx = vsi->idx;
} else {
- max_txqs[i] = vsi->alloc_txq;
+ WARN_ON(pf->ctrl_vsi_idx != ICE_NO_VSI);
+ pf->ctrl_vsi_idx = vsi->idx;
}
}
- dev_dbg(dev, "vsi->tc_cfg.ena_tc = %d\n", vsi->tc_cfg.ena_tc);
- ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc,
- max_txqs);
- if (ret) {
- dev_err(dev, "VSI %d failed lan queue config, error %d\n",
- vsi->vsi_num, ret);
- goto unroll_clear_rings;
+ return ret;
+}
+
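For reference, a minimal sketch of a caller driving the new parameter-based entry point for a fresh VSI; the wrapper name is hypothetical, error handling is elided, and the pattern mirrors the ice_*_vsi_setup() wrappers further down in this patch:

	static int example_cfg_new_pf_vsi(struct ice_pf *pf, struct ice_vsi *vsi)
	{
		struct ice_vsi_cfg_params params = {};

		params.type = ICE_VSI_PF;
		params.pi = pf->hw.port_info;
		params.flags = ICE_VSI_FLAG_INIT;	/* create the VSI in FW */

		return ice_vsi_cfg(vsi, &params);
	}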
+/**
+ * ice_vsi_decfg - remove all VSI configuration
+ * @vsi: pointer to VSI
+ */
+void ice_vsi_decfg(struct ice_vsi *vsi)
+{
+ struct ice_pf *pf = vsi->back;
+ int err;
+
+ /* The Rx rule to be removed only exists if the LLDP FW
+ * engine is currently stopped
+ */
+ if (!ice_is_safe_mode(pf) && vsi->type == ICE_VSI_PF &&
+ !test_bit(ICE_FLAG_FW_LLDP_AGENT, pf->flags))
+ ice_cfg_sw_lldp(vsi, false, false);
+
+ ice_fltr_remove_all(vsi);
+ ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
+ err = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx);
+ if (err)
+ dev_err(ice_pf_to_dev(pf), "Failed to remove RDMA scheduler config for VSI %u, err %d\n",
+ vsi->vsi_num, err);
+
+ if (ice_is_xdp_ena_vsi(vsi))
+ /* the return value check can be skipped here; it always
+ * returns 0 while a reset is in progress
+ */
+ ice_destroy_xdp_rings(vsi);
+
+ ice_vsi_clear_rings(vsi);
+ ice_vsi_free_q_vectors(vsi);
+ ice_vsi_put_qs(vsi);
+ ice_vsi_free_arrays(vsi);
+
+ /* SR-IOV determines needed MSIX resources all at once instead of per
+ * VSI since when VFs are spawned we know how many VFs there are and how
+ * many interrupts each VF needs. SR-IOV MSIX resources are also
+ * cleared in the same manner.
+ */
+ if (vsi->type == ICE_VSI_CTRL && vsi->vf) {
+ ice_free_vf_ctrl_res(pf, vsi);
+ } else if (vsi->type != ICE_VSI_VF) {
+ /* reclaim SW interrupts back to the common pool */
+ ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx);
+ pf->num_avail_sw_msix += vsi->num_q_vectors;
+ vsi->base_vector = 0;
}
+ if (vsi->type == ICE_VSI_VF &&
+ vsi->agg_node && vsi->agg_node->valid)
+ vsi->agg_node->num_vsis--;
+ if (vsi->agg_node) {
+ vsi->agg_node->valid = false;
+ vsi->agg_node->agg_id = 0;
+ }
+}
+
+/**
+ * ice_vsi_setup - Set up a VSI by a given type
+ * @pf: board private structure
+ * @params: parameters to use when creating the VSI
+ *
+ * This allocates the sw VSI structure and its queue resources.
+ *
+ * Returns a pointer to the allocated and configured sw VSI struct on success,
+ * NULL on failure.
+ */
+struct ice_vsi *
+ice_vsi_setup(struct ice_pf *pf, struct ice_vsi_cfg_params *params)
+{
+ struct device *dev = ice_pf_to_dev(pf);
+ struct ice_vsi *vsi;
+ int ret;
+
+ /* ice_vsi_setup can only initialize a new VSI, and we must have
+ * a port_info structure for it.
+ */
+ if (WARN_ON(!(params->flags & ICE_VSI_FLAG_INIT)) ||
+ WARN_ON(!params->pi))
+ return NULL;
+
+ vsi = ice_vsi_alloc(pf);
+ if (!vsi) {
+ dev_err(dev, "could not allocate VSI\n");
+ return NULL;
+ }
+
+ ret = ice_vsi_cfg(vsi, params);
+ if (ret)
+ goto err_vsi_cfg;
+
/* Add a switch rule to drop all Tx Flow Control Frames, with lookup
* type ETHERTYPE, from VSIs, and restrict a malicious VF from sending
* out PAUSE or PFC frames. If enabled, FW can still send FC frames.
@@ -2830,34 +2973,21 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
* be dropped so that VFs cannot send LLDP packets to reconfigure DCB
* settings in the HW.
*/
- if (!ice_is_safe_mode(pf))
- if (vsi->type == ICE_VSI_PF) {
- ice_fltr_add_eth(vsi, ETH_P_PAUSE, ICE_FLTR_TX,
- ICE_DROP_PACKET);
- ice_cfg_sw_lldp(vsi, true, true);
- }
+ if (!ice_is_safe_mode(pf) && vsi->type == ICE_VSI_PF) {
+ ice_fltr_add_eth(vsi, ETH_P_PAUSE, ICE_FLTR_TX,
+ ICE_DROP_PACKET);
+ ice_cfg_sw_lldp(vsi, true, true);
+ }
if (!vsi->agg_node)
ice_set_agg_vsi(vsi);
+
return vsi;
-unroll_clear_rings:
- ice_vsi_clear_rings(vsi);
-unroll_vector_base:
- /* reclaim SW interrupts back to the common pool */
- ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx);
- pf->num_avail_sw_msix += vsi->num_q_vectors;
-unroll_alloc_q_vector:
- ice_vsi_free_q_vectors(vsi);
-unroll_vsi_init:
- ice_vsi_free_stats(vsi);
- ice_vsi_delete(vsi);
-unroll_get_qs:
- ice_vsi_put_qs(vsi);
-unroll_vsi_alloc:
- if (vsi_type == ICE_VSI_VF)
+err_vsi_cfg:
+ if (params->type == ICE_VSI_VF)
ice_enable_lag(pf->lag);
- ice_vsi_clear(vsi);
+ ice_vsi_free(vsi);
return NULL;
}
@@ -3121,37 +3251,6 @@ void ice_napi_del(struct ice_vsi *vsi)
}
/**
- * ice_free_vf_ctrl_res - Free the VF control VSI resource
- * @pf: pointer to PF structure
- * @vsi: the VSI to free resources for
- *
- * Check if the VF control VSI resource is still in use. If no VF is using it
- * any more, release the VSI resource. Otherwise, leave it to be cleaned up
- * once no other VF uses it.
- */
-static void ice_free_vf_ctrl_res(struct ice_pf *pf, struct ice_vsi *vsi)
-{
- struct ice_vf *vf;
- unsigned int bkt;
-
- rcu_read_lock();
- ice_for_each_vf_rcu(pf, bkt, vf) {
- if (vf != vsi->vf && vf->ctrl_vsi_idx != ICE_NO_VSI) {
- rcu_read_unlock();
- return;
- }
- }
- rcu_read_unlock();
-
- /* No other VFs left that have control VSI. It is now safe to reclaim
- * SW interrupts back to the common pool.
- */
- ice_free_res(pf->irq_tracker, vsi->base_vector,
- ICE_RES_VF_CTRL_VEC_ID);
- pf->num_avail_sw_msix += vsi->num_q_vectors;
-}
-
-/**
* ice_vsi_release - Delete a VSI and free its resources
* @vsi: the VSI being removed
*
@@ -3160,7 +3259,6 @@ static void ice_free_vf_ctrl_res(struct ice_pf *pf, struct ice_vsi *vsi)
int ice_vsi_release(struct ice_vsi *vsi)
{
struct ice_pf *pf;
- int err;
if (!vsi->back)
return -ENODEV;
@@ -3178,50 +3276,14 @@ int ice_vsi_release(struct ice_vsi *vsi)
clear_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state);
}
+ if (vsi->type == ICE_VSI_PF)
+ ice_devlink_destroy_pf_port(pf);
+
if (test_bit(ICE_FLAG_RSS_ENA, pf->flags))
ice_rss_clean(vsi);
- /* Disable VSI and free resources */
- if (vsi->type != ICE_VSI_LB)
- ice_vsi_dis_irq(vsi);
ice_vsi_close(vsi);
-
- /* SR-IOV determines needed MSIX resources all at once instead of per
- * VSI since when VFs are spawned we know how many VFs there are and how
- * many interrupts each VF needs. SR-IOV MSIX resources are also
- * cleared in the same manner.
- */
- if (vsi->type == ICE_VSI_CTRL && vsi->vf) {
- ice_free_vf_ctrl_res(pf, vsi);
- } else if (vsi->type != ICE_VSI_VF) {
- /* reclaim SW interrupts back to the common pool */
- ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx);
- pf->num_avail_sw_msix += vsi->num_q_vectors;
- }
-
- if (!ice_is_safe_mode(pf)) {
- if (vsi->type == ICE_VSI_PF) {
- ice_fltr_remove_eth(vsi, ETH_P_PAUSE, ICE_FLTR_TX,
- ICE_DROP_PACKET);
- ice_cfg_sw_lldp(vsi, true, false);
- /* The Rx rule will only exist to remove if the LLDP FW
- * engine is currently stopped
- */
- if (!test_bit(ICE_FLAG_FW_LLDP_AGENT, pf->flags))
- ice_cfg_sw_lldp(vsi, false, false);
- }
- }
-
- if (ice_is_vsi_dflt_vsi(vsi))
- ice_clear_dflt_vsi(vsi);
- ice_fltr_remove_all(vsi);
- ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
- err = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx);
- if (err)
- dev_err(ice_pf_to_dev(vsi->back), "Failed to remove RDMA scheduler config for VSI %u, err %d\n",
- vsi->vsi_num, err);
- ice_vsi_delete(vsi);
- ice_vsi_free_q_vectors(vsi);
+ ice_vsi_decfg(vsi);
if (vsi->netdev) {
if (test_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state)) {
@@ -3235,19 +3297,12 @@ int ice_vsi_release(struct ice_vsi *vsi)
}
}
- if (vsi->type == ICE_VSI_VF &&
- vsi->agg_node && vsi->agg_node->valid)
- vsi->agg_node->num_vsis--;
- ice_vsi_clear_rings(vsi);
- ice_vsi_free_stats(vsi);
- ice_vsi_put_qs(vsi);
-
/* retain SW VSI data structure since it is needed to unregister and
* free VSI netdev when PF is not in reset recovery pending state,
* for example during rmmod.
*/
if (!ice_is_reset_in_progress(pf->state))
- ice_vsi_clear(vsi);
+ ice_vsi_delete(vsi);
return 0;
}
@@ -3372,7 +3427,7 @@ ice_vsi_rebuild_set_coalesce(struct ice_vsi *vsi,
* @prev_txq: Number of Tx rings before ring reallocation
* @prev_rxq: Number of Rx rings before ring reallocation
*/
-static int
+static void
ice_vsi_realloc_stat_arrays(struct ice_vsi *vsi, int prev_txq, int prev_rxq)
{
struct ice_vsi_stats *vsi_stat;
@@ -3380,9 +3435,9 @@ ice_vsi_realloc_stat_arrays(struct ice_vsi *vsi, int prev_txq, int prev_rxq)
int i;
if (!prev_txq || !prev_rxq)
- return 0;
+ return;
if (vsi->type == ICE_VSI_CHNL)
- return 0;
+ return;
vsi_stat = pf->vsi_stats[vsi->idx];
@@ -3403,36 +3458,36 @@ ice_vsi_realloc_stat_arrays(struct ice_vsi *vsi, int prev_txq, int prev_rxq)
}
}
}
-
- return 0;
}
/**
* ice_vsi_rebuild - Rebuild VSI after reset
* @vsi: VSI to be rebuilt
- * @init_vsi: is this an initialization or a reconfigure of the VSI
+ * @vsi_flags: flags used for VSI rebuild flow
+ *
+ * Set vsi_flags to ICE_VSI_FLAG_INIT to initialize a new VSI, or
+ * ICE_VSI_FLAG_NO_INIT to rebuild an existing VSI in hardware.
*
* Returns 0 on success and negative value on failure
*/
-int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
+int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags)
{
- u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
+ struct ice_vsi_cfg_params params = {};
struct ice_coalesce_stored *coalesce;
- int ret, i, prev_txq, prev_rxq;
+ int ret, prev_txq, prev_rxq;
int prev_num_q_vectors = 0;
- enum ice_vsi_type vtype;
struct ice_pf *pf;
if (!vsi)
return -EINVAL;
+ params = ice_vsi_to_params(vsi);
+ params.flags = vsi_flags;
+
pf = vsi->back;
- vtype = vsi->type;
- if (WARN_ON(vtype == ICE_VSI_VF && !vsi->vf))
+ if (WARN_ON(vsi->type == ICE_VSI_VF && !vsi->vf))
return -EINVAL;
- ice_vsi_init_vlan_ops(vsi);
-
coalesce = kcalloc(vsi->num_q_vectors,
sizeof(struct ice_coalesce_stored), GFP_KERNEL);
if (!coalesce)
@@ -3443,188 +3498,32 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
prev_txq = vsi->num_txq;
prev_rxq = vsi->num_rxq;
- ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
- ret = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx);
+ ice_vsi_decfg(vsi);
+ ret = ice_vsi_cfg_def(vsi, &params);
if (ret)
- dev_err(ice_pf_to_dev(vsi->back), "Failed to remove RDMA scheduler config for VSI %u, err %d\n",
- vsi->vsi_num, ret);
- ice_vsi_free_q_vectors(vsi);
-
- /* SR-IOV determines needed MSIX resources all at once instead of per
- * VSI since when VFs are spawned we know how many VFs there are and how
- * many interrupts each VF needs. SR-IOV MSIX resources are also
- * cleared in the same manner.
- */
- if (vtype != ICE_VSI_VF) {
- /* reclaim SW interrupts back to the common pool */
- ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx);
- pf->num_avail_sw_msix += vsi->num_q_vectors;
- vsi->base_vector = 0;
- }
-
- if (ice_is_xdp_ena_vsi(vsi))
- /* return value check can be skipped here, it always returns
- * 0 if reset is in progress
- */
- ice_destroy_xdp_rings(vsi);
- ice_vsi_put_qs(vsi);
- ice_vsi_clear_rings(vsi);
- ice_vsi_free_arrays(vsi);
- if (vtype == ICE_VSI_VF)
- ice_vsi_set_num_qs(vsi, vsi->vf);
- else
- ice_vsi_set_num_qs(vsi, NULL);
-
- ret = ice_vsi_alloc_arrays(vsi);
- if (ret < 0)
- goto err_vsi;
-
- ice_vsi_get_qs(vsi);
-
- ice_alloc_fd_res(vsi);
- ice_vsi_set_tc_cfg(vsi);
-
- /* Initialize VSI struct elements and create VSI in FW */
- ret = ice_vsi_init(vsi, init_vsi);
- if (ret < 0)
- goto err_vsi;
-
- switch (vtype) {
- case ICE_VSI_CTRL:
- case ICE_VSI_SWITCHDEV_CTRL:
- case ICE_VSI_PF:
- ret = ice_vsi_alloc_q_vectors(vsi);
- if (ret)
- goto err_rings;
-
- ret = ice_vsi_setup_vector_base(vsi);
- if (ret)
- goto err_vectors;
-
- ret = ice_vsi_set_q_vectors_reg_idx(vsi);
- if (ret)
- goto err_vectors;
-
- ret = ice_vsi_alloc_rings(vsi);
- if (ret)
- goto err_vectors;
-
- ret = ice_vsi_alloc_ring_stats(vsi);
- if (ret)
- goto err_vectors;
-
- ice_vsi_map_rings_to_vectors(vsi);
-
- vsi->stat_offsets_loaded = false;
- if (ice_is_xdp_ena_vsi(vsi)) {
- ret = ice_vsi_determine_xdp_res(vsi);
- if (ret)
- goto err_vectors;
- ret = ice_prepare_xdp_rings(vsi, vsi->xdp_prog);
- if (ret)
- goto err_vectors;
- }
- /* ICE_VSI_CTRL does not need RSS so skip RSS processing */
- if (vtype != ICE_VSI_CTRL)
- /* Do not exit if configuring RSS had an issue, at
- * least receive traffic on first queue. Hence no
- * need to capture return value
- */
- if (test_bit(ICE_FLAG_RSS_ENA, pf->flags))
- ice_vsi_cfg_rss_lut_key(vsi);
-
- /* disable or enable CRC stripping */
- if (vsi->netdev)
- ice_vsi_cfg_crc_strip(vsi, !!(vsi->netdev->features &
- NETIF_F_RXFCS));
-
- break;
- case ICE_VSI_VF:
- ret = ice_vsi_alloc_q_vectors(vsi);
- if (ret)
- goto err_rings;
-
- ret = ice_vsi_set_q_vectors_reg_idx(vsi);
- if (ret)
- goto err_vectors;
-
- ret = ice_vsi_alloc_rings(vsi);
- if (ret)
- goto err_vectors;
-
- ret = ice_vsi_alloc_ring_stats(vsi);
- if (ret)
- goto err_vectors;
-
- vsi->stat_offsets_loaded = false;
- break;
- case ICE_VSI_CHNL:
- if (test_bit(ICE_FLAG_RSS_ENA, pf->flags)) {
- ice_vsi_cfg_rss_lut_key(vsi);
- ice_vsi_set_rss_flow_fld(vsi);
- }
- break;
- default:
- break;
- }
-
- /* configure VSI nodes based on number of queues and TC's */
- for (i = 0; i < vsi->tc_cfg.numtc; i++) {
- /* configure VSI nodes based on number of queues and TC's.
- * ADQ creates VSIs for each TC/Channel but doesn't
- * allocate queues instead it reconfigures the PF queues
- * as per the TC command. So max_txqs should point to the
- * PF Tx queues.
- */
- if (vtype == ICE_VSI_CHNL)
- max_txqs[i] = pf->num_lan_tx;
- else
- max_txqs[i] = vsi->alloc_txq;
-
- if (ice_is_xdp_ena_vsi(vsi))
- max_txqs[i] += vsi->num_xdp_txq;
- }
-
- if (test_bit(ICE_FLAG_TC_MQPRIO, pf->flags))
- /* If MQPRIO is set, means channel code path, hence for main
- * VSI's, use TC as 1
- */
- ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, 1, max_txqs);
- else
- ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx,
- vsi->tc_cfg.ena_tc, max_txqs);
+ goto err_vsi_cfg;
+ ret = ice_vsi_cfg_tc_lan(pf, vsi);
if (ret) {
- dev_err(ice_pf_to_dev(pf), "VSI %d failed lan queue config, error %d\n",
- vsi->vsi_num, ret);
- if (init_vsi) {
+ if (vsi_flags & ICE_VSI_FLAG_INIT) {
ret = -EIO;
- goto err_vectors;
+ goto err_vsi_cfg_tc_lan;
} else {
+ kfree(coalesce);
return ice_schedule_reset(pf, ICE_RESET_PFR);
}
}
- if (ice_vsi_realloc_stat_arrays(vsi, prev_txq, prev_rxq))
- goto err_vectors;
+ ice_vsi_realloc_stat_arrays(vsi, prev_txq, prev_rxq);
ice_vsi_rebuild_set_coalesce(vsi, coalesce, prev_num_q_vectors);
kfree(coalesce);
return 0;
-err_vectors:
- ice_vsi_free_q_vectors(vsi);
-err_rings:
- if (vsi->netdev) {
- vsi->current_netdev_flags = 0;
- unregister_netdev(vsi->netdev);
- free_netdev(vsi->netdev);
- vsi->netdev = NULL;
- }
-err_vsi:
- ice_vsi_clear(vsi);
- set_bit(ICE_RESET_FAILED, pf->state);
+err_vsi_cfg_tc_lan:
+ ice_vsi_decfg(vsi);
+err_vsi_cfg:
kfree(coalesce);
return ret;
}
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
index dcdf69a693e9..75221478f2dc 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_lib.h
@@ -7,6 +7,47 @@
#include "ice.h"
#include "ice_vlan.h"
+/* Flags used for VSI configuration and rebuild */
+#define ICE_VSI_FLAG_INIT BIT(0)
+#define ICE_VSI_FLAG_NO_INIT 0
+
+/**
+ * struct ice_vsi_cfg_params - VSI configuration parameters
+ * @pi: pointer to the port_info instance for the VSI
+ * @ch: pointer to the channel structure for the VSI, may be NULL
+ * @vf: pointer to the VF associated with this VSI, may be NULL
+ * @type: the type of VSI to configure
+ * @flags: VSI flags used for rebuild and configuration
+ *
+ * Parameter structure used when configuring a new VSI.
+ */
+struct ice_vsi_cfg_params {
+ struct ice_port_info *pi;
+ struct ice_channel *ch;
+ struct ice_vf *vf;
+ enum ice_vsi_type type;
+ u32 flags;
+};
+
+/**
+ * ice_vsi_to_params - Get parameters for an existing VSI
+ * @vsi: the VSI to get parameters for
+ *
+ * Fill a parameter structure for reconfiguring a VSI with its current
+ * parameters, such as during a rebuild operation.
+ */
+static inline struct ice_vsi_cfg_params ice_vsi_to_params(struct ice_vsi *vsi)
+{
+ struct ice_vsi_cfg_params params = {};
+
+ params.pi = vsi->port_info;
+ params.ch = vsi->ch;
+ params.vf = vsi->vf;
+ params.type = vsi->type;
+
+ return params;
+}
+
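The intended pairing is: capture the live parameters, adjust only the flags, then reconfigure — which is what ice_vsi_rebuild() does in this patch. A condensed sketch (function name hypothetical):

	static int example_soft_rebuild(struct ice_vsi *vsi)
	{
		struct ice_vsi_cfg_params params = ice_vsi_to_params(vsi);

		params.flags = ICE_VSI_FLAG_NO_INIT;	/* reuse the existing HW VSI */

		return ice_vsi_cfg(vsi, &params);
	}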
const char *ice_vsi_type_str(enum ice_vsi_type vsi_type);
bool ice_pf_state_is_nominal(struct ice_pf *pf);
@@ -42,7 +83,6 @@ void ice_cfg_sw_lldp(struct ice_vsi *vsi, bool tx, bool create);
int ice_set_link(struct ice_vsi *vsi, bool ena);
void ice_vsi_delete(struct ice_vsi *vsi);
-int ice_vsi_clear(struct ice_vsi *vsi);
int ice_vsi_cfg_tc(struct ice_vsi *vsi, u8 ena_tc);
@@ -51,9 +91,7 @@ int ice_vsi_cfg_rss_lut_key(struct ice_vsi *vsi);
void ice_vsi_cfg_netdev_tc(struct ice_vsi *vsi, u8 ena_tc);
struct ice_vsi *
-ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
- enum ice_vsi_type vsi_type, struct ice_vf *vf,
- struct ice_channel *ch);
+ice_vsi_setup(struct ice_pf *pf, struct ice_vsi_cfg_params *params);
void ice_napi_del(struct ice_vsi *vsi);
@@ -63,6 +101,7 @@ void ice_vsi_close(struct ice_vsi *vsi);
int ice_ena_vsi(struct ice_vsi *vsi, bool locked);
+void ice_vsi_decfg(struct ice_vsi *vsi);
void ice_dis_vsi(struct ice_vsi *vsi, bool locked);
int ice_free_res(struct ice_res_tracker *res, u16 index, u16 id);
@@ -70,7 +109,8 @@ int ice_free_res(struct ice_res_tracker *res, u16 index, u16 id);
int
ice_get_res(struct ice_pf *pf, struct ice_res_tracker *res, u16 needed, u16 id);
-int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi);
+int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags);
+int ice_vsi_cfg(struct ice_vsi *vsi, struct ice_vsi_cfg_params *params);
bool ice_is_reset_in_progress(unsigned long *state);
int ice_wait_for_reset(struct ice_pf *pf, unsigned long timeout);
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 8ec24f6cf6be..567694bf098b 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -22,6 +22,7 @@
#include "ice_eswitch.h"
#include "ice_tc_lib.h"
#include "ice_vsi_vlan_ops.h"
+#include <net/xdp_sock_drv.h>
#define DRV_SUMMARY "Intel(R) Ethernet Connection E800 Series Linux Driver"
static const char ice_driver_string[] = DRV_SUMMARY;
@@ -44,7 +45,6 @@ MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all), hw debug_mask (0x8XXXX
MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all)");
#endif /* !CONFIG_DYNAMIC_DEBUG */
-static DEFINE_IDA(ice_aux_ida);
DEFINE_STATIC_KEY_FALSE(ice_xdp_locking_key);
EXPORT_SYMBOL(ice_xdp_locking_key);
@@ -564,7 +564,7 @@ ice_prepare_for_reset(struct ice_pf *pf, enum ice_reset_req reset_type)
/* Disable VFs until reset is completed */
mutex_lock(&pf->vfs.table_lock);
ice_for_each_vf(pf, bkt, vf)
- ice_set_vf_state_qs_dis(vf);
+ ice_set_vf_state_dis(vf);
mutex_unlock(&pf->vfs.table_lock);
if (ice_is_eswitch_mode_switchdev(pf)) {
@@ -2596,8 +2596,6 @@ static int ice_xdp_alloc_setup_rings(struct ice_vsi *vsi)
xdp_ring->netdev = NULL;
xdp_ring->dev = dev;
xdp_ring->count = vsi->num_tx_desc;
- xdp_ring->next_dd = ICE_RING_QUARTER(xdp_ring) - 1;
- xdp_ring->next_rs = ICE_RING_QUARTER(xdp_ring) - 1;
WRITE_ONCE(vsi->xdp_rings[i], xdp_ring);
if (ice_setup_tx_ring(xdp_ring))
goto free_xdp_rings;
@@ -2889,6 +2887,18 @@ int ice_vsi_determine_xdp_res(struct ice_vsi *vsi)
}
/**
+ * ice_max_xdp_frame_size - returns the maximum allowed frame size for XDP
+ * @vsi: Pointer to VSI structure
+ */
+static int ice_max_xdp_frame_size(struct ice_vsi *vsi)
+{
+ if (test_bit(ICE_FLAG_LEGACY_RX, vsi->back->flags))
+ return ICE_RXBUF_1664;
+ else
+ return ICE_RXBUF_3072;
+}
+
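The MTU check in ice_xdp_setup_prog() below then reduces to: a program without multi-buffer (frags) support must fit the whole frame into one linear Rx buffer. A condensed sketch of that predicate (helper name hypothetical, assumes driver context):

	static bool example_xdp_mtu_ok(struct ice_vsi *vsi, struct bpf_prog *prog)
	{
		unsigned int frame_size = vsi->netdev->mtu + ICE_ETH_PKT_HDR_PAD;

		/* multi-buffer aware programs bypass the linear-frame limit */
		return !prog || prog->aux->xdp_has_frags ||
		       frame_size <= ice_max_xdp_frame_size(vsi);
	}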
+/**
* ice_xdp_setup_prog - Add or remove XDP eBPF program
* @vsi: VSI to setup XDP for
* @prog: XDP program
@@ -2898,13 +2908,16 @@ static int
ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
struct netlink_ext_ack *extack)
{
- int frame_size = vsi->netdev->mtu + ICE_ETH_PKT_HDR_PAD;
+ unsigned int frame_size = vsi->netdev->mtu + ICE_ETH_PKT_HDR_PAD;
bool if_running = netif_running(vsi->netdev);
int ret = 0, xdp_ring_err = 0;
- if (frame_size > vsi->rx_buf_len) {
- NL_SET_ERR_MSG_MOD(extack, "MTU too large for loading XDP");
- return -EOPNOTSUPP;
+ if (prog && !prog->aux->xdp_has_frags) {
+ if (frame_size > ice_max_xdp_frame_size(vsi)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "MTU is too large for linear frames and XDP prog does not support frags");
+ return -EOPNOTSUPP;
+ }
}
/* need to stop netdev while setting up the program for Rx rings */
@@ -2925,11 +2938,13 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
if (xdp_ring_err)
NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Tx resources failed");
}
+ xdp_features_set_redirect_target(vsi->netdev, true);
/* reallocate Rx queues that are used for zero-copy */
xdp_ring_err = ice_realloc_zc_buf(vsi, true);
if (xdp_ring_err)
NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Rx resources failed");
} else if (ice_is_xdp_ena_vsi(vsi) && !prog) {
+ xdp_features_clear_redirect_target(vsi->netdev);
xdp_ring_err = ice_destroy_xdp_rings(vsi);
if (xdp_ring_err)
NL_SET_ERR_MSG_MOD(extack, "Freeing XDP Tx resources failed");
@@ -3344,10 +3359,11 @@ static void ice_napi_add(struct ice_vsi *vsi)
/**
* ice_set_ops - set netdev and ethtools ops for the given netdev
- * @netdev: netdev instance
+ * @vsi: the VSI associated with the new netdev
*/
-static void ice_set_ops(struct net_device *netdev)
+static void ice_set_ops(struct ice_vsi *vsi)
{
+ struct net_device *netdev = vsi->netdev;
struct ice_pf *pf = ice_netdev_to_pf(netdev);
if (ice_is_safe_mode(pf)) {
@@ -3359,6 +3375,13 @@ static void ice_set_ops(struct net_device *netdev)
netdev->netdev_ops = &ice_netdev_ops;
netdev->udp_tunnel_nic_info = &pf->hw.udp_tunnel_nic;
ice_set_ethtool_ops(netdev);
+
+ if (vsi->type != ICE_VSI_PF)
+ return;
+
+ netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_XSK_ZEROCOPY |
+ NETDEV_XDP_ACT_RX_SG;
}
/**
@@ -3447,53 +3470,8 @@ static void ice_set_netdev_features(struct net_device *netdev)
* be changed at runtime
*/
netdev->hw_features |= NETIF_F_RXFCS;
-}
-
-/**
- * ice_cfg_netdev - Allocate, configure and register a netdev
- * @vsi: the VSI associated with the new netdev
- *
- * Returns 0 on success, negative value on failure
- */
-static int ice_cfg_netdev(struct ice_vsi *vsi)
-{
- struct ice_netdev_priv *np;
- struct net_device *netdev;
- u8 mac_addr[ETH_ALEN];
-
- netdev = alloc_etherdev_mqs(sizeof(*np), vsi->alloc_txq,
- vsi->alloc_rxq);
- if (!netdev)
- return -ENOMEM;
-
- set_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state);
- vsi->netdev = netdev;
- np = netdev_priv(netdev);
- np->vsi = vsi;
-
- ice_set_netdev_features(netdev);
-
- ice_set_ops(netdev);
- if (vsi->type == ICE_VSI_PF) {
- SET_NETDEV_DEV(netdev, ice_pf_to_dev(vsi->back));
- ether_addr_copy(mac_addr, vsi->port_info->mac.perm_addr);
- eth_hw_addr_set(netdev, mac_addr);
- ether_addr_copy(netdev->perm_addr, mac_addr);
- }
-
- netdev->priv_flags |= IFF_UNICAST_FLT;
-
- /* Setup netdev TC information */
- ice_vsi_cfg_netdev_tc(vsi, vsi->tc_cfg.ena_tc);
-
- /* setup watchdog timeout value to be 5 second */
- netdev->watchdog_timeo = 5 * HZ;
-
- netdev->min_mtu = ETH_MIN_MTU;
- netdev->max_mtu = ICE_MAX_MTU;
-
- return 0;
+ netif_set_tso_max_size(netdev, ICE_MAX_TSO_SIZE);
}
/**
@@ -3521,14 +3499,27 @@ void ice_fill_rss_lut(u8 *lut, u16 rss_table_size, u16 rss_size)
static struct ice_vsi *
ice_pf_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi)
{
- return ice_vsi_setup(pf, pi, ICE_VSI_PF, NULL, NULL);
+ struct ice_vsi_cfg_params params = {};
+
+ params.type = ICE_VSI_PF;
+ params.pi = pi;
+ params.flags = ICE_VSI_FLAG_INIT;
+
+ return ice_vsi_setup(pf, &params);
}
static struct ice_vsi *
ice_chnl_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
struct ice_channel *ch)
{
- return ice_vsi_setup(pf, pi, ICE_VSI_CHNL, NULL, ch);
+ struct ice_vsi_cfg_params params = {};
+
+ params.type = ICE_VSI_CHNL;
+ params.pi = pi;
+ params.ch = ch;
+ params.flags = ICE_VSI_FLAG_INIT;
+
+ return ice_vsi_setup(pf, &params);
}
/**
@@ -3542,7 +3533,13 @@ ice_chnl_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
static struct ice_vsi *
ice_ctrl_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi)
{
- return ice_vsi_setup(pf, pi, ICE_VSI_CTRL, NULL, NULL);
+ struct ice_vsi_cfg_params params = {};
+
+ params.type = ICE_VSI_CTRL;
+ params.pi = pi;
+ params.flags = ICE_VSI_FLAG_INIT;
+
+ return ice_vsi_setup(pf, &params);
}
/**
@@ -3556,7 +3553,13 @@ ice_ctrl_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi)
struct ice_vsi *
ice_lb_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi)
{
- return ice_vsi_setup(pf, pi, ICE_VSI_LB, NULL, NULL);
+ struct ice_vsi_cfg_params params = {};
+
+ params.type = ICE_VSI_LB;
+ params.pi = pi;
+ params.flags = ICE_VSI_FLAG_INIT;
+
+ return ice_vsi_setup(pf, &params);
}
/**
@@ -3716,20 +3719,6 @@ static void ice_tc_indir_block_unregister(struct ice_vsi *vsi)
}
/**
- * ice_tc_indir_block_remove - clean indirect TC block notifications
- * @pf: PF structure
- */
-static void ice_tc_indir_block_remove(struct ice_pf *pf)
-{
- struct ice_vsi *pf_vsi = ice_get_main_vsi(pf);
-
- if (!pf_vsi)
- return;
-
- ice_tc_indir_block_unregister(pf_vsi);
-}
-
-/**
* ice_tc_indir_block_register - Register TC indirect block notifications
* @vsi: VSI struct which has the netdev
*
@@ -3749,78 +3738,6 @@ static int ice_tc_indir_block_register(struct ice_vsi *vsi)
}
/**
- * ice_setup_pf_sw - Setup the HW switch on startup or after reset
- * @pf: board private structure
- *
- * Returns 0 on success, negative value on failure
- */
-static int ice_setup_pf_sw(struct ice_pf *pf)
-{
- struct device *dev = ice_pf_to_dev(pf);
- bool dvm = ice_is_dvm_ena(&pf->hw);
- struct ice_vsi *vsi;
- int status;
-
- if (ice_is_reset_in_progress(pf->state))
- return -EBUSY;
-
- status = ice_aq_set_port_params(pf->hw.port_info, dvm, NULL);
- if (status)
- return -EIO;
-
- vsi = ice_pf_vsi_setup(pf, pf->hw.port_info);
- if (!vsi)
- return -ENOMEM;
-
- /* init channel list */
- INIT_LIST_HEAD(&vsi->ch_list);
-
- status = ice_cfg_netdev(vsi);
- if (status)
- goto unroll_vsi_setup;
- /* netdev has to be configured before setting frame size */
- ice_vsi_cfg_frame_size(vsi);
-
- /* init indirect block notifications */
- status = ice_tc_indir_block_register(vsi);
- if (status) {
- dev_err(dev, "Failed to register netdev notifier\n");
- goto unroll_cfg_netdev;
- }
-
- /* Setup DCB netlink interface */
- ice_dcbnl_setup(vsi);
-
- /* registering the NAPI handler requires both the queues and
- * netdev to be created, which are done in ice_pf_vsi_setup()
- * and ice_cfg_netdev() respectively
- */
- ice_napi_add(vsi);
-
- status = ice_init_mac_fltr(pf);
- if (status)
- goto unroll_napi_add;
-
- return 0;
-
-unroll_napi_add:
- ice_tc_indir_block_unregister(vsi);
-unroll_cfg_netdev:
- if (vsi) {
- ice_napi_del(vsi);
- if (vsi->netdev) {
- clear_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state);
- free_netdev(vsi->netdev);
- vsi->netdev = NULL;
- }
- }
-
-unroll_vsi_setup:
- ice_vsi_release(vsi);
- return status;
-}
-
-/**
* ice_get_avail_q_count - Get count of queues in use
* @pf_qmap: bitmap to get queue use count from
* @lock: pointer to a mutex that protects access to pf_qmap
@@ -4249,13 +4166,13 @@ int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx, bool locked)
/* set for the next time the netdev is started */
if (!netif_running(vsi->netdev)) {
- ice_vsi_rebuild(vsi, false);
+ ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT);
dev_dbg(ice_pf_to_dev(pf), "Link is down, queue count change happens when link is brought up\n");
goto done;
}
ice_vsi_close(vsi);
- ice_vsi_rebuild(vsi, false);
+ ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT);
ice_pf_dcb_recfg(pf, locked);
ice_vsi_open(vsi);
done:
@@ -4518,6 +4435,23 @@ err_vsi_open:
return err;
}
+static void ice_deinit_fdir(struct ice_pf *pf)
+{
+ struct ice_vsi *vsi = ice_get_ctrl_vsi(pf);
+
+ if (!vsi)
+ return;
+
+ ice_vsi_manage_fdir(vsi, false);
+ ice_vsi_release(vsi);
+ if (pf->ctrl_vsi_idx != ICE_NO_VSI) {
+ pf->vsi[pf->ctrl_vsi_idx] = NULL;
+ pf->ctrl_vsi_idx = ICE_NO_VSI;
+ }
+
+ mutex_destroy(&pf->hw.fdir_fltr_lock);
+}
+
/**
* ice_get_opt_fw_name - return optional firmware file name or NULL
* @pf: pointer to the PF instance
@@ -4618,116 +4552,171 @@ static void ice_print_wake_reason(struct ice_pf *pf)
/**
* ice_register_netdev - register netdev
- * @pf: pointer to the PF struct
+ * @vsi: pointer to the VSI struct
*/
-static int ice_register_netdev(struct ice_pf *pf)
+static int ice_register_netdev(struct ice_vsi *vsi)
{
- struct ice_vsi *vsi;
- int err = 0;
+ int err;
- vsi = ice_get_main_vsi(pf);
if (!vsi || !vsi->netdev)
return -EIO;
err = register_netdev(vsi->netdev);
if (err)
- goto err_register_netdev;
+ return err;
set_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state);
netif_carrier_off(vsi->netdev);
netif_tx_stop_all_queues(vsi->netdev);
return 0;
-err_register_netdev:
- free_netdev(vsi->netdev);
- vsi->netdev = NULL;
- clear_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state);
- return err;
+}
+
+static void ice_unregister_netdev(struct ice_vsi *vsi)
+{
+ if (!vsi || !vsi->netdev)
+ return;
+
+ unregister_netdev(vsi->netdev);
+ clear_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state);
}
/**
- * ice_probe - Device initialization routine
- * @pdev: PCI device information struct
- * @ent: entry in ice_pci_tbl
+ * ice_cfg_netdev - Allocate, configure and register a netdev
+ * @vsi: the VSI associated with the new netdev
*
- * Returns 0 on success, negative on failure
+ * Returns 0 on success, negative value on failure
*/
-static int
-ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
+static int ice_cfg_netdev(struct ice_vsi *vsi)
{
- struct device *dev = &pdev->dev;
- struct ice_vsi *vsi;
- struct ice_pf *pf;
- struct ice_hw *hw;
- int i, err;
+ struct ice_netdev_priv *np;
+ struct net_device *netdev;
+ u8 mac_addr[ETH_ALEN];
- if (pdev->is_virtfn) {
- dev_err(dev, "can't probe a virtual function\n");
- return -EINVAL;
+ netdev = alloc_etherdev_mqs(sizeof(*np), vsi->alloc_txq,
+ vsi->alloc_rxq);
+ if (!netdev)
+ return -ENOMEM;
+
+ set_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state);
+ vsi->netdev = netdev;
+ np = netdev_priv(netdev);
+ np->vsi = vsi;
+
+ ice_set_netdev_features(netdev);
+ ice_set_ops(vsi);
+
+ if (vsi->type == ICE_VSI_PF) {
+ SET_NETDEV_DEV(netdev, ice_pf_to_dev(vsi->back));
+ ether_addr_copy(mac_addr, vsi->port_info->mac.perm_addr);
+ eth_hw_addr_set(netdev, mac_addr);
}
- /* this driver uses devres, see
- * Documentation/driver-api/driver-model/devres.rst
- */
- err = pcim_enable_device(pdev);
+ netdev->priv_flags |= IFF_UNICAST_FLT;
+
+ /* Setup netdev TC information */
+ ice_vsi_cfg_netdev_tc(vsi, vsi->tc_cfg.ena_tc);
+
+ netdev->max_mtu = ICE_MAX_MTU;
+
+ return 0;
+}
+
+static void ice_decfg_netdev(struct ice_vsi *vsi)
+{
+ clear_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state);
+ free_netdev(vsi->netdev);
+ vsi->netdev = NULL;
+}
+
+static int ice_start_eth(struct ice_vsi *vsi)
+{
+ int err;
+
+ err = ice_init_mac_fltr(vsi->back);
if (err)
return err;
- err = pcim_iomap_regions(pdev, BIT(ICE_BAR0), dev_driver_string(dev));
- if (err) {
- dev_err(dev, "BAR0 I/O map error %d\n", err);
- return err;
- }
+ rtnl_lock();
+ err = ice_vsi_open(vsi);
+ rtnl_unlock();
- pf = ice_allocate_pf(dev);
- if (!pf)
- return -ENOMEM;
+ return err;
+}
- /* initialize Auxiliary index to invalid value */
- pf->aux_idx = -1;
+static int ice_init_eth(struct ice_pf *pf)
+{
+ struct ice_vsi *vsi = ice_get_main_vsi(pf);
+ int err;
- /* set up for high or low DMA */
- err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
- if (err) {
- dev_err(dev, "DMA configuration failed: 0x%x\n", err);
+ if (!vsi)
+ return -EINVAL;
+
+ /* init channel list */
+ INIT_LIST_HEAD(&vsi->ch_list);
+
+ err = ice_cfg_netdev(vsi);
+ if (err)
return err;
- }
+ /* Setup DCB netlink interface */
+ ice_dcbnl_setup(vsi);
- pci_enable_pcie_error_reporting(pdev);
- pci_set_master(pdev);
+ err = ice_init_mac_fltr(pf);
+ if (err)
+ goto err_init_mac_fltr;
- pf->pdev = pdev;
- pci_set_drvdata(pdev, pf);
- set_bit(ICE_DOWN, pf->state);
- /* Disable service task until DOWN bit is cleared */
- set_bit(ICE_SERVICE_DIS, pf->state);
+ err = ice_devlink_create_pf_port(pf);
+ if (err)
+ goto err_devlink_create_pf_port;
- hw = &pf->hw;
- hw->hw_addr = pcim_iomap_table(pdev)[ICE_BAR0];
- pci_save_state(pdev);
+ SET_NETDEV_DEVLINK_PORT(vsi->netdev, &pf->devlink_port);
- hw->back = pf;
- hw->vendor_id = pdev->vendor;
- hw->device_id = pdev->device;
- pci_read_config_byte(pdev, PCI_REVISION_ID, &hw->revision_id);
- hw->subsystem_vendor_id = pdev->subsystem_vendor;
- hw->subsystem_device_id = pdev->subsystem_device;
- hw->bus.device = PCI_SLOT(pdev->devfn);
- hw->bus.func = PCI_FUNC(pdev->devfn);
- ice_set_ctrlq_len(hw);
+ err = ice_register_netdev(vsi);
+ if (err)
+ goto err_register_netdev;
- pf->msg_enable = netif_msg_init(debug, ICE_DFLT_NETIF_M);
+ err = ice_tc_indir_block_register(vsi);
+ if (err)
+ goto err_tc_indir_block_register;
-#ifndef CONFIG_DYNAMIC_DEBUG
- if (debug < -1)
- hw->debug_mask = debug;
-#endif
+ ice_napi_add(vsi);
+
+ return 0;
+
+err_tc_indir_block_register:
+ ice_unregister_netdev(vsi);
+err_register_netdev:
+ ice_devlink_destroy_pf_port(pf);
+err_devlink_create_pf_port:
+err_init_mac_fltr:
+ ice_decfg_netdev(vsi);
+ return err;
+}
+
+static void ice_deinit_eth(struct ice_pf *pf)
+{
+ struct ice_vsi *vsi = ice_get_main_vsi(pf);
+
+ if (!vsi)
+ return;
+
+ ice_vsi_close(vsi);
+ ice_unregister_netdev(vsi);
+ ice_devlink_destroy_pf_port(pf);
+ ice_tc_indir_block_unregister(vsi);
+ ice_decfg_netdev(vsi);
+}
+
+static int ice_init_dev(struct ice_pf *pf)
+{
+ struct device *dev = ice_pf_to_dev(pf);
+ struct ice_hw *hw = &pf->hw;
+ int err;
err = ice_init_hw(hw);
if (err) {
dev_err(dev, "ice_init_hw failed: %d\n", err);
- err = -EIO;
- goto err_exit_unroll;
+ return err;
}
ice_init_feature_support(pf);
@@ -4750,62 +4739,31 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
err = ice_init_pf(pf);
if (err) {
dev_err(dev, "ice_init_pf failed: %d\n", err);
- goto err_init_pf_unroll;
+ goto err_init_pf;
}
- ice_devlink_init_regions(pf);
-
pf->hw.udp_tunnel_nic.set_port = ice_udp_tunnel_set_port;
pf->hw.udp_tunnel_nic.unset_port = ice_udp_tunnel_unset_port;
pf->hw.udp_tunnel_nic.flags = UDP_TUNNEL_NIC_INFO_MAY_SLEEP;
pf->hw.udp_tunnel_nic.shared = &pf->hw.udp_tunnel_shared;
- i = 0;
if (pf->hw.tnl.valid_count[TNL_VXLAN]) {
- pf->hw.udp_tunnel_nic.tables[i].n_entries =
+ pf->hw.udp_tunnel_nic.tables[0].n_entries =
pf->hw.tnl.valid_count[TNL_VXLAN];
- pf->hw.udp_tunnel_nic.tables[i].tunnel_types =
+ pf->hw.udp_tunnel_nic.tables[0].tunnel_types =
UDP_TUNNEL_TYPE_VXLAN;
- i++;
}
if (pf->hw.tnl.valid_count[TNL_GENEVE]) {
- pf->hw.udp_tunnel_nic.tables[i].n_entries =
+ pf->hw.udp_tunnel_nic.tables[1].n_entries =
pf->hw.tnl.valid_count[TNL_GENEVE];
- pf->hw.udp_tunnel_nic.tables[i].tunnel_types =
+ pf->hw.udp_tunnel_nic.tables[1].tunnel_types =
UDP_TUNNEL_TYPE_GENEVE;
- i++;
- }
-
- pf->num_alloc_vsi = hw->func_caps.guar_num_vsi;
- if (!pf->num_alloc_vsi) {
- err = -EIO;
- goto err_init_pf_unroll;
- }
- if (pf->num_alloc_vsi > UDP_TUNNEL_NIC_MAX_SHARING_DEVICES) {
- dev_warn(&pf->pdev->dev,
- "limiting the VSI count due to UDP tunnel limitation %d > %d\n",
- pf->num_alloc_vsi, UDP_TUNNEL_NIC_MAX_SHARING_DEVICES);
- pf->num_alloc_vsi = UDP_TUNNEL_NIC_MAX_SHARING_DEVICES;
- }
-
- pf->vsi = devm_kcalloc(dev, pf->num_alloc_vsi, sizeof(*pf->vsi),
- GFP_KERNEL);
- if (!pf->vsi) {
- err = -ENOMEM;
- goto err_init_pf_unroll;
- }
-
- pf->vsi_stats = devm_kcalloc(dev, pf->num_alloc_vsi,
- sizeof(*pf->vsi_stats), GFP_KERNEL);
- if (!pf->vsi_stats) {
- err = -ENOMEM;
- goto err_init_vsi_unroll;
}
err = ice_init_interrupt_scheme(pf);
if (err) {
dev_err(dev, "ice_init_interrupt_scheme failed: %d\n", err);
err = -EIO;
- goto err_init_vsi_stats_unroll;
+ goto err_init_interrupt_scheme;
}
/* In case of MSIX we are going to setup the misc vector right here
@@ -4816,49 +4774,94 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
err = ice_req_irq_msix_misc(pf);
if (err) {
dev_err(dev, "setup of misc vector failed: %d\n", err);
- goto err_init_interrupt_unroll;
+ goto err_req_irq_msix_misc;
}
- /* create switch struct for the switch element created by FW on boot */
- pf->first_sw = devm_kzalloc(dev, sizeof(*pf->first_sw), GFP_KERNEL);
- if (!pf->first_sw) {
- err = -ENOMEM;
- goto err_msix_misc_unroll;
- }
+ return 0;
- if (hw->evb_veb)
- pf->first_sw->bridge_mode = BRIDGE_MODE_VEB;
- else
- pf->first_sw->bridge_mode = BRIDGE_MODE_VEPA;
+err_req_irq_msix_misc:
+ ice_clear_interrupt_scheme(pf);
+err_init_interrupt_scheme:
+ ice_deinit_pf(pf);
+err_init_pf:
+ ice_deinit_hw(hw);
+ return err;
+}
- pf->first_sw->pf = pf;
+static void ice_deinit_dev(struct ice_pf *pf)
+{
+ ice_free_irq_msix_misc(pf);
+ ice_clear_interrupt_scheme(pf);
+ ice_deinit_pf(pf);
+ ice_deinit_hw(&pf->hw);
+}
- /* record the sw_id available for later use */
- pf->first_sw->sw_id = hw->port_info->sw_id;
+static void ice_init_features(struct ice_pf *pf)
+{
+ struct device *dev = ice_pf_to_dev(pf);
- err = ice_setup_pf_sw(pf);
- if (err) {
- dev_err(dev, "probe failed due to setup PF switch: %d\n", err);
- goto err_alloc_sw_unroll;
- }
+ if (ice_is_safe_mode(pf))
+ return;
- clear_bit(ICE_SERVICE_DIS, pf->state);
+ /* initialize DDP driven features */
+ if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags))
+ ice_ptp_init(pf);
- /* tell the firmware we are up */
- err = ice_send_version(pf);
- if (err) {
- dev_err(dev, "probe failed sending driver version %s. error: %d\n",
- UTS_RELEASE, err);
- goto err_send_version_unroll;
+ if (ice_is_feature_supported(pf, ICE_F_GNSS))
+ ice_gnss_init(pf);
+
+ /* Note: Flow director init failure is non-fatal to load */
+ if (ice_init_fdir(pf))
+ dev_err(dev, "could not initialize flow director\n");
+
+ /* Note: DCB init failure is non-fatal to load */
+ if (ice_init_pf_dcb(pf, false)) {
+ clear_bit(ICE_FLAG_DCB_CAPABLE, pf->flags);
+ clear_bit(ICE_FLAG_DCB_ENA, pf->flags);
+ } else {
+ ice_cfg_lldp_mib_change(&pf->hw, true);
}
- /* since everything is good, start the service timer */
- mod_timer(&pf->serv_tmr, round_jiffies(jiffies + pf->serv_tmr_period));
+ if (ice_init_lag(pf))
+ dev_warn(dev, "Failed to init link aggregation support\n");
+}
+
+static void ice_deinit_features(struct ice_pf *pf)
+{
+ ice_deinit_lag(pf);
+ if (test_bit(ICE_FLAG_DCB_CAPABLE, pf->flags))
+ ice_cfg_lldp_mib_change(&pf->hw, false);
+ ice_deinit_fdir(pf);
+ if (ice_is_feature_supported(pf, ICE_F_GNSS))
+ ice_gnss_exit(pf);
+ if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags))
+ ice_ptp_release(pf);
+}
+
+static void ice_init_wakeup(struct ice_pf *pf)
+{
+ /* Save wakeup reason register for later use */
+ pf->wakeup_reason = rd32(&pf->hw, PFPM_WUS);
+
+ /* check for a power management event */
+ ice_print_wake_reason(pf);
+
+ /* clear wake status, all bits */
+ wr32(&pf->hw, PFPM_WUS, U32_MAX);
+
+ /* Disable WoL at init, wait for user to enable */
+ device_set_wakeup_enable(ice_pf_to_dev(pf), false);
+}
+
+static int ice_init_link(struct ice_pf *pf)
+{
+ struct device *dev = ice_pf_to_dev(pf);
+ int err;
err = ice_init_link_events(pf->hw.port_info);
if (err) {
dev_err(dev, "ice_init_link_events failed: %d\n", err);
- goto err_send_version_unroll;
+ return err;
}
/* not a fatal error if this fails */
@@ -4894,123 +4897,350 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
set_bit(ICE_FLAG_NO_MEDIA, pf->flags);
}
- ice_verify_cacheline_size(pf);
+ return err;
+}
- /* Save wakeup reason register for later use */
- pf->wakeup_reason = rd32(hw, PFPM_WUS);
+static int ice_init_pf_sw(struct ice_pf *pf)
+{
+ bool dvm = ice_is_dvm_ena(&pf->hw);
+ struct ice_vsi *vsi;
+ int err;
- /* check for a power management event */
- ice_print_wake_reason(pf);
+ /* create switch struct for the switch element created by FW on boot */
+ pf->first_sw = kzalloc(sizeof(*pf->first_sw), GFP_KERNEL);
+ if (!pf->first_sw)
+ return -ENOMEM;
- /* clear wake status, all bits */
- wr32(hw, PFPM_WUS, U32_MAX);
+ if (pf->hw.evb_veb)
+ pf->first_sw->bridge_mode = BRIDGE_MODE_VEB;
+ else
+ pf->first_sw->bridge_mode = BRIDGE_MODE_VEPA;
- /* Disable WoL at init, wait for user to enable */
- device_set_wakeup_enable(dev, false);
+ pf->first_sw->pf = pf;
- if (ice_is_safe_mode(pf)) {
- ice_set_safe_mode_vlan_cfg(pf);
- goto probe_done;
+ /* record the sw_id available for later use */
+ pf->first_sw->sw_id = pf->hw.port_info->sw_id;
+
+ err = ice_aq_set_port_params(pf->hw.port_info, dvm, NULL);
+ if (err)
+ goto err_aq_set_port_params;
+
+ vsi = ice_pf_vsi_setup(pf, pf->hw.port_info);
+ if (!vsi) {
+ err = -ENOMEM;
+ goto err_pf_vsi_setup;
}
- /* initialize DDP driven features */
- if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags))
- ice_ptp_init(pf);
+ return 0;
- if (ice_is_feature_supported(pf, ICE_F_GNSS))
- ice_gnss_init(pf);
+err_pf_vsi_setup:
+err_aq_set_port_params:
+ kfree(pf->first_sw);
+ return err;
+}
- /* Note: Flow director init failure is non-fatal to load */
- if (ice_init_fdir(pf))
- dev_err(dev, "could not initialize flow director\n");
+static void ice_deinit_pf_sw(struct ice_pf *pf)
+{
+ struct ice_vsi *vsi = ice_get_main_vsi(pf);
- /* Note: DCB init failure is non-fatal to load */
- if (ice_init_pf_dcb(pf, false)) {
- clear_bit(ICE_FLAG_DCB_CAPABLE, pf->flags);
- clear_bit(ICE_FLAG_DCB_ENA, pf->flags);
- } else {
- ice_cfg_lldp_mib_change(&pf->hw, true);
+ if (!vsi)
+ return;
+
+ ice_vsi_release(vsi);
+ kfree(pf->first_sw);
+}
+
+static int ice_alloc_vsis(struct ice_pf *pf)
+{
+ struct device *dev = ice_pf_to_dev(pf);
+
+ pf->num_alloc_vsi = pf->hw.func_caps.guar_num_vsi;
+ if (!pf->num_alloc_vsi)
+ return -EIO;
+
+ if (pf->num_alloc_vsi > UDP_TUNNEL_NIC_MAX_SHARING_DEVICES) {
+ dev_warn(dev,
+ "limiting the VSI count due to UDP tunnel limitation %d > %d\n",
+ pf->num_alloc_vsi, UDP_TUNNEL_NIC_MAX_SHARING_DEVICES);
+ pf->num_alloc_vsi = UDP_TUNNEL_NIC_MAX_SHARING_DEVICES;
}
- if (ice_init_lag(pf))
- dev_warn(dev, "Failed to init link aggregation support\n");
+ pf->vsi = devm_kcalloc(dev, pf->num_alloc_vsi, sizeof(*pf->vsi),
+ GFP_KERNEL);
+ if (!pf->vsi)
+ return -ENOMEM;
- /* print PCI link speed and width */
- pcie_print_link_status(pf->pdev);
+ pf->vsi_stats = devm_kcalloc(dev, pf->num_alloc_vsi,
+ sizeof(*pf->vsi_stats), GFP_KERNEL);
+ if (!pf->vsi_stats) {
+ devm_kfree(dev, pf->vsi);
+ return -ENOMEM;
+ }
-probe_done:
- err = ice_devlink_create_pf_port(pf);
+ return 0;
+}
+
+static void ice_dealloc_vsis(struct ice_pf *pf)
+{
+ devm_kfree(ice_pf_to_dev(pf), pf->vsi_stats);
+ pf->vsi_stats = NULL;
+
+ pf->num_alloc_vsi = 0;
+ devm_kfree(ice_pf_to_dev(pf), pf->vsi);
+ pf->vsi = NULL;
+}
+
+static int ice_init_devlink(struct ice_pf *pf)
+{
+ int err;
+
+ err = ice_devlink_register_params(pf);
if (err)
- goto err_create_pf_port;
+ return err;
- vsi = ice_get_main_vsi(pf);
- if (!vsi || !vsi->netdev) {
- err = -EINVAL;
- goto err_netdev_reg;
- }
+ ice_devlink_init_regions(pf);
+ ice_devlink_register(pf);
- SET_NETDEV_DEVLINK_PORT(vsi->netdev, &pf->devlink_port);
+ return 0;
+}
+
+static void ice_deinit_devlink(struct ice_pf *pf)
+{
+ ice_devlink_unregister(pf);
+ ice_devlink_destroy_regions(pf);
+ ice_devlink_unregister_params(pf);
+}
+
+static int ice_init(struct ice_pf *pf)
+{
+ int err;
- err = ice_register_netdev(pf);
+ err = ice_init_dev(pf);
if (err)
- goto err_netdev_reg;
+ return err;
- err = ice_devlink_register_params(pf);
+ err = ice_alloc_vsis(pf);
+ if (err)
+ goto err_alloc_vsis;
+
+ err = ice_init_pf_sw(pf);
if (err)
- goto err_netdev_reg;
+ goto err_init_pf_sw;
+
+ ice_init_wakeup(pf);
+
+ err = ice_init_link(pf);
+ if (err)
+ goto err_init_link;
+
+ err = ice_send_version(pf);
+ if (err)
+ goto err_init_link;
+
+ ice_verify_cacheline_size(pf);
+
+ if (ice_is_safe_mode(pf))
+ ice_set_safe_mode_vlan_cfg(pf);
+ else
+ /* print PCI link speed and width */
+ pcie_print_link_status(pf->pdev);
/* ready to go, so clear down state bit */
clear_bit(ICE_DOWN, pf->state);
- if (ice_is_rdma_ena(pf)) {
- pf->aux_idx = ida_alloc(&ice_aux_ida, GFP_KERNEL);
- if (pf->aux_idx < 0) {
- dev_err(dev, "Failed to allocate device ID for AUX driver\n");
- err = -ENOMEM;
- goto err_devlink_reg_param;
- }
+ clear_bit(ICE_SERVICE_DIS, pf->state);
- err = ice_init_rdma(pf);
- if (err) {
- dev_err(dev, "Failed to initialize RDMA: %d\n", err);
- err = -EIO;
- goto err_init_aux_unroll;
- }
- } else {
- dev_warn(dev, "RDMA is not supported on this device\n");
- }
+ /* since everything is good, start the service timer */
+ mod_timer(&pf->serv_tmr, round_jiffies(jiffies + pf->serv_tmr_period));
- ice_devlink_register(pf);
return 0;
-err_init_aux_unroll:
- pf->adev = NULL;
- ida_free(&ice_aux_ida, pf->aux_idx);
-err_devlink_reg_param:
- ice_devlink_unregister_params(pf);
-err_netdev_reg:
- ice_devlink_destroy_pf_port(pf);
-err_create_pf_port:
-err_send_version_unroll:
- ice_vsi_release_all(pf);
-err_alloc_sw_unroll:
+err_init_link:
+ ice_deinit_pf_sw(pf);
+err_init_pf_sw:
+ ice_dealloc_vsis(pf);
+err_alloc_vsis:
+ ice_deinit_dev(pf);
+ return err;
+}
+
+static void ice_deinit(struct ice_pf *pf)
+{
set_bit(ICE_SERVICE_DIS, pf->state);
set_bit(ICE_DOWN, pf->state);
- devm_kfree(dev, pf->first_sw);
-err_msix_misc_unroll:
- ice_free_irq_msix_misc(pf);
-err_init_interrupt_unroll:
- ice_clear_interrupt_scheme(pf);
-err_init_vsi_stats_unroll:
- devm_kfree(dev, pf->vsi_stats);
- pf->vsi_stats = NULL;
-err_init_vsi_unroll:
- devm_kfree(dev, pf->vsi);
-err_init_pf_unroll:
- ice_deinit_pf(pf);
- ice_devlink_destroy_regions(pf);
- ice_deinit_hw(hw);
-err_exit_unroll:
- pci_disable_pcie_error_reporting(pdev);
+
+ ice_deinit_pf_sw(pf);
+ ice_dealloc_vsis(pf);
+ ice_deinit_dev(pf);
+}
+
+/**
+ * ice_load - load PF by initializing HW and starting the VSI
+ * @pf: pointer to the pf instance
+ */
+int ice_load(struct ice_pf *pf)
+{
+ struct ice_vsi_cfg_params params = {};
+ struct ice_vsi *vsi;
+ int err;
+
+ err = ice_reset(&pf->hw, ICE_RESET_PFR);
+ if (err)
+ return err;
+
+ err = ice_init_dev(pf);
+ if (err)
+ return err;
+
+ vsi = ice_get_main_vsi(pf);
+
+ params = ice_vsi_to_params(vsi);
+ params.flags = ICE_VSI_FLAG_INIT;
+
+ err = ice_vsi_cfg(vsi, &params);
+ if (err)
+ goto err_vsi_cfg;
+
+ err = ice_start_eth(ice_get_main_vsi(pf));
+ if (err)
+ goto err_start_eth;
+
+ err = ice_init_rdma(pf);
+ if (err)
+ goto err_init_rdma;
+
+ ice_init_features(pf);
+ ice_service_task_restart(pf);
+
+ clear_bit(ICE_DOWN, pf->state);
+
+ return 0;
+
+err_init_rdma:
+ ice_vsi_close(ice_get_main_vsi(pf));
+err_start_eth:
+ ice_vsi_decfg(ice_get_main_vsi(pf));
+err_vsi_cfg:
+ ice_deinit_dev(pf);
+ return err;
+}
+
+/**
+ * ice_unload - unload PF by stopping the VSI and deinitializing HW
+ * @pf: pointer to the pf instance
+ */
+void ice_unload(struct ice_pf *pf)
+{
+ ice_deinit_features(pf);
+ ice_deinit_rdma(pf);
+ ice_vsi_close(ice_get_main_vsi(pf));
+ ice_vsi_decfg(ice_get_main_vsi(pf));
+ ice_deinit_dev(pf);
+}
+
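ice_load()/ice_unload() are non-static, which suggests a reusable bring-up/tear-down pair (e.g. for a devlink-reload style flow; such a caller is an assumption here, not part of this hunk). The pairing would look like:

	/* Hypothetical: full reinit of a PF using the hooks defined above. */
	static int example_reinit(struct ice_pf *pf)
	{
		ice_unload(pf);
		return ice_load(pf);
	}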
+/**
+ * ice_probe - Device initialization routine
+ * @pdev: PCI device information struct
+ * @ent: entry in ice_pci_tbl
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int
+ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
+{
+ struct device *dev = &pdev->dev;
+ struct ice_pf *pf;
+ struct ice_hw *hw;
+ int err;
+
+ if (pdev->is_virtfn) {
+ dev_err(dev, "can't probe a virtual function\n");
+ return -EINVAL;
+ }
+
+ /* this driver uses devres, see
+ * Documentation/driver-api/driver-model/devres.rst
+ */
+ err = pcim_enable_device(pdev);
+ if (err)
+ return err;
+
+ err = pcim_iomap_regions(pdev, BIT(ICE_BAR0), dev_driver_string(dev));
+ if (err) {
+ dev_err(dev, "BAR0 I/O map error %d\n", err);
+ return err;
+ }
+
+ pf = ice_allocate_pf(dev);
+ if (!pf)
+ return -ENOMEM;
+
+ /* initialize Auxiliary index to invalid value */
+ pf->aux_idx = -1;
+
+ /* set up for high or low DMA */
+ err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+ if (err) {
+ dev_err(dev, "DMA configuration failed: 0x%x\n", err);
+ return err;
+ }
+
+ pci_set_master(pdev);
+
+ pf->pdev = pdev;
+ pci_set_drvdata(pdev, pf);
+ set_bit(ICE_DOWN, pf->state);
+ /* Disable service task until DOWN bit is cleared */
+ set_bit(ICE_SERVICE_DIS, pf->state);
+
+ hw = &pf->hw;
+ hw->hw_addr = pcim_iomap_table(pdev)[ICE_BAR0];
+ pci_save_state(pdev);
+
+ hw->back = pf;
+ hw->port_info = NULL;
+ hw->vendor_id = pdev->vendor;
+ hw->device_id = pdev->device;
+ pci_read_config_byte(pdev, PCI_REVISION_ID, &hw->revision_id);
+ hw->subsystem_vendor_id = pdev->subsystem_vendor;
+ hw->subsystem_device_id = pdev->subsystem_device;
+ hw->bus.device = PCI_SLOT(pdev->devfn);
+ hw->bus.func = PCI_FUNC(pdev->devfn);
+ ice_set_ctrlq_len(hw);
+
+ pf->msg_enable = netif_msg_init(debug, ICE_DFLT_NETIF_M);
+
+#ifndef CONFIG_DYNAMIC_DEBUG
+ if (debug < -1)
+ hw->debug_mask = debug;
+#endif
+
+ err = ice_init(pf);
+ if (err)
+ goto err_init;
+
+ err = ice_init_eth(pf);
+ if (err)
+ goto err_init_eth;
+
+ err = ice_init_rdma(pf);
+ if (err)
+ goto err_init_rdma;
+
+ err = ice_init_devlink(pf);
+ if (err)
+ goto err_init_devlink;
+
+ ice_init_features(pf);
+
+ return 0;
+
+err_init_devlink:
+ ice_deinit_rdma(pf);
+err_init_rdma:
+ ice_deinit_eth(pf);
+err_init_eth:
+ ice_deinit(pf);
+err_init:
pci_disable_device(pdev);
return err;
}
@@ -5085,52 +5315,33 @@ static void ice_remove(struct pci_dev *pdev)
struct ice_pf *pf = pci_get_drvdata(pdev);
int i;
- ice_devlink_unregister(pf);
for (i = 0; i < ICE_MAX_RESET_WAIT; i++) {
if (!ice_is_reset_in_progress(pf->state))
break;
msleep(100);
}
- ice_tc_indir_block_remove(pf);
-
if (test_bit(ICE_FLAG_SRIOV_ENA, pf->flags)) {
set_bit(ICE_VF_RESETS_DISABLED, pf->state);
ice_free_vfs(pf);
}
ice_service_task_stop(pf);
-
ice_aq_cancel_waiting_tasks(pf);
- ice_unplug_aux_dev(pf);
- if (pf->aux_idx >= 0)
- ida_free(&ice_aux_ida, pf->aux_idx);
- ice_devlink_unregister_params(pf);
set_bit(ICE_DOWN, pf->state);
- ice_deinit_lag(pf);
- if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags))
- ice_ptp_release(pf);
- if (ice_is_feature_supported(pf, ICE_F_GNSS))
- ice_gnss_exit(pf);
if (!ice_is_safe_mode(pf))
ice_remove_arfs(pf);
- ice_setup_mc_magic_wake(pf);
+ ice_deinit_features(pf);
+ ice_deinit_devlink(pf);
+ ice_deinit_rdma(pf);
+ ice_deinit_eth(pf);
+ ice_deinit(pf);
+
ice_vsi_release_all(pf);
- mutex_destroy(&(&pf->hw)->fdir_fltr_lock);
- ice_devlink_destroy_pf_port(pf);
+
+ ice_setup_mc_magic_wake(pf);
ice_set_wake(pf);
- ice_free_irq_msix_misc(pf);
- ice_for_each_vsi(pf, i) {
- if (!pf->vsi[i])
- continue;
- ice_vsi_free_q_vectors(pf->vsi[i]);
- }
- devm_kfree(&pdev->dev, pf->vsi_stats);
- pf->vsi_stats = NULL;
- ice_deinit_pf(pf);
- ice_devlink_destroy_regions(pf);
- ice_deinit_hw(&pf->hw);
/* Issue a PFR as part of the prescribed driver unload flow. Do not
* do it via ice_schedule_reset() since there is no need to rebuild
@@ -5138,8 +5349,6 @@ static void ice_remove(struct pci_dev *pdev)
*/
ice_reset(&pf->hw, ICE_RESET_PFR);
pci_wait_for_pending_transaction(pdev);
- ice_clear_interrupt_scheme(pf);
- pci_disable_pcie_error_reporting(pdev);
pci_disable_device(pdev);
}
@@ -6173,24 +6382,21 @@ static int ice_vsi_vlan_setup(struct ice_vsi *vsi)
}
/**
- * ice_vsi_cfg - Setup the VSI
+ * ice_vsi_cfg_lan - Setup the VSI LAN-related config
* @vsi: the VSI being configured
*
* Return 0 on success and negative value on error
*/
-int ice_vsi_cfg(struct ice_vsi *vsi)
+int ice_vsi_cfg_lan(struct ice_vsi *vsi)
{
int err;
- if (vsi->netdev) {
+ if (vsi->netdev && vsi->type == ICE_VSI_PF) {
ice_set_rx_mode(vsi->netdev);
- if (vsi->type != ICE_VSI_LB) {
- err = ice_vsi_vlan_setup(vsi);
-
- if (err)
- return err;
- }
+ err = ice_vsi_vlan_setup(vsi);
+ if (err)
+ return err;
}
ice_vsi_cfg_dcb_rings(vsi);
@@ -6371,7 +6577,7 @@ static int ice_up_complete(struct ice_vsi *vsi)
if (vsi->port_info &&
(vsi->port_info->phy.link_info.link_info & ICE_AQ_LINK_UP) &&
- vsi->netdev) {
+ vsi->netdev && vsi->type == ICE_VSI_PF) {
ice_print_link_msg(vsi, true);
netif_tx_start_all_queues(vsi->netdev);
netif_carrier_on(vsi->netdev);
@@ -6382,7 +6588,9 @@ static int ice_up_complete(struct ice_vsi *vsi)
* set the baseline so counters are ready when interface is up
*/
ice_update_eth_stats(vsi);
- ice_service_task_schedule(pf);
+
+ if (vsi->type == ICE_VSI_PF)
+ ice_service_task_schedule(pf);
return 0;
}
@@ -6395,7 +6603,7 @@ int ice_up(struct ice_vsi *vsi)
{
int err;
- err = ice_vsi_cfg(vsi);
+ err = ice_vsi_cfg_lan(vsi);
if (!err)
err = ice_up_complete(vsi);
@@ -6963,7 +7171,7 @@ int ice_vsi_open_ctrl(struct ice_vsi *vsi)
if (err)
goto err_setup_rx;
- err = ice_vsi_cfg(vsi);
+ err = ice_vsi_cfg_lan(vsi);
if (err)
goto err_setup_rx;
@@ -7017,7 +7225,7 @@ int ice_vsi_open(struct ice_vsi *vsi)
if (err)
goto err_setup_rx;
- err = ice_vsi_cfg(vsi);
+ err = ice_vsi_cfg_lan(vsi);
if (err)
goto err_setup_rx;
@@ -7102,7 +7310,7 @@ static int ice_vsi_rebuild_by_type(struct ice_pf *pf, enum ice_vsi_type type)
continue;
/* rebuild the VSI */
- err = ice_vsi_rebuild(vsi, true);
+ err = ice_vsi_rebuild(vsi, ICE_VSI_FLAG_INIT);
if (err) {
dev_err(dev, "rebuild VSI failed, err %d, VSI index %d, type %s\n",
err, vsi->idx, ice_vsi_type_str(type));
@@ -7358,18 +7566,6 @@ clear_recovery:
}
/**
- * ice_max_xdp_frame_size - returns the maximum allowed frame size for XDP
- * @vsi: Pointer to VSI structure
- */
-static int ice_max_xdp_frame_size(struct ice_vsi *vsi)
-{
- if (PAGE_SIZE >= 8192 || test_bit(ICE_FLAG_LEGACY_RX, vsi->back->flags))
- return ICE_RXBUF_2048 - XDP_PACKET_HEADROOM;
- else
- return ICE_RXBUF_3072;
-}
-
-/**
* ice_change_mtu - NDO callback to change the MTU
* @netdev: network interface device structure
* @new_mtu: new value for maximum frame size
@@ -7381,6 +7577,7 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
struct ice_netdev_priv *np = netdev_priv(netdev);
struct ice_vsi *vsi = np->vsi;
struct ice_pf *pf = vsi->back;
+ struct bpf_prog *prog;
u8 count = 0;
int err = 0;
@@ -7389,7 +7586,8 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
return 0;
}
- if (ice_is_xdp_ena_vsi(vsi)) {
+ prog = vsi->xdp_prog;
+ if (prog && !prog->aux->xdp_has_frags) {
int frame_size = ice_max_xdp_frame_size(vsi);
if (new_mtu + ICE_ETH_PKT_HDR_PAD > frame_size) {
@@ -7397,6 +7595,12 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
frame_size - ICE_ETH_PKT_HDR_PAD);
return -EINVAL;
}
+ } else if (test_bit(ICE_FLAG_LEGACY_RX, pf->flags)) {
+ if (new_mtu + ICE_ETH_PKT_HDR_PAD > ICE_MAX_FRAME_LEGACY_RX) {
+ netdev_err(netdev, "Too big MTU for legacy-rx; Max is %d\n",
+ ICE_MAX_FRAME_LEGACY_RX - ICE_ETH_PKT_HDR_PAD);
+ return -EINVAL;
+ }
}
/* if a reset is in progress, wait for some time for it to complete */
@@ -8447,12 +8651,9 @@ static void ice_remove_q_channels(struct ice_vsi *vsi, bool rem_fltr)
/* clear the VSI from scheduler tree */
ice_rm_vsi_lan_cfg(ch->ch_vsi->port_info, ch->ch_vsi->idx);
- /* Delete VSI from FW */
+ /* Delete VSI from FW, PF and HW VSI arrays */
ice_vsi_delete(ch->ch_vsi);
- /* Delete VSI from PF and HW VSI arrays */
- ice_vsi_clear(ch->ch_vsi);
-
/* free the channel */
kfree(ch);
}
@@ -8511,7 +8712,7 @@ static int ice_rebuild_channels(struct ice_pf *pf)
type = vsi->type;
/* rebuild ADQ VSI */
- err = ice_vsi_rebuild(vsi, true);
+ err = ice_vsi_rebuild(vsi, ICE_VSI_FLAG_INIT);
if (err) {
dev_err(dev, "VSI (type:%s) at index %d rebuild failed, err %d\n",
ice_vsi_type_str(type), vsi->idx, err);
@@ -8743,14 +8944,14 @@ config_tcf:
cur_rxq = vsi->num_rxq;
/* proceed with rebuild main VSI using correct number of queues */
- ret = ice_vsi_rebuild(vsi, false);
+ ret = ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT);
if (ret) {
/* fallback to current number of queues */
dev_info(dev, "Rebuild failed with new queues, try with current number of queues\n");
vsi->req_txq = cur_txq;
vsi->req_rxq = cur_rxq;
clear_bit(ICE_RESET_FAILED, pf->state);
- if (ice_vsi_rebuild(vsi, false)) {
+ if (ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT)) {
dev_err(dev, "Rebuild of main VSI failed again\n");
return ret;
}
diff --git a/drivers/net/ethernet/intel/ice/ice_nvm.c b/drivers/net/ethernet/intel/ice/ice_nvm.c
index c262dc886e6a..f6f52a248066 100644
--- a/drivers/net/ethernet/intel/ice/ice_nvm.c
+++ b/drivers/net/ethernet/intel/ice/ice_nvm.c
@@ -662,7 +662,6 @@ ice_get_orom_civd_data(struct ice_hw *hw, enum ice_bank_select bank,
/* Verify that the simple checksum is zero */
for (i = 0; i < sizeof(*tmp); i++)
- /* cppcheck-suppress objectIndex */
sum += ((u8 *)tmp)[i];
if (sum) {
diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
index d63161d73eb1..ac6f06f9a2ed 100644
--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
+++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
@@ -680,6 +680,7 @@ static bool ice_ptp_tx_tstamp(struct ice_ptp_tx *tx)
struct ice_pf *pf;
struct ice_hw *hw;
u64 tstamp_ready;
+ bool link_up;
int err;
u8 idx;
@@ -695,11 +696,14 @@ static bool ice_ptp_tx_tstamp(struct ice_ptp_tx *tx)
if (err)
return false;
+ /* Drop packets if the link went down */
+ link_up = ptp_port->link_up;
+
for_each_set_bit(idx, tx->in_use, tx->len) {
struct skb_shared_hwtstamps shhwtstamps = {};
u8 phy_idx = idx + tx->offset;
u64 raw_tstamp = 0, tstamp;
- bool drop_ts = false;
+ bool drop_ts = !link_up;
struct sk_buff *skb;
/* Drop packets which have waited for more than 2 seconds */
@@ -728,7 +732,7 @@ static bool ice_ptp_tx_tstamp(struct ice_ptp_tx *tx)
ice_trace(tx_tstamp_fw_req, tx->tstamps[idx].skb, idx);
err = ice_read_phy_tstamp(hw, tx->block, phy_idx, &raw_tstamp);
- if (err)
+ if (err && !drop_ts)
continue;
ice_trace(tx_tstamp_fw_done, tx->tstamps[idx].skb, idx);
@@ -1770,6 +1774,38 @@ ice_ptp_gpio_enable_e810(struct ptp_clock_info *info,
}
/**
+ * ice_ptp_gpio_enable_e823 - Enable/disable ancillary features of PHC
+ * @info: the driver's PTP info structure
+ * @rq: The requested feature to change
+ * @on: Enable/disable flag
+ */
+static int ice_ptp_gpio_enable_e823(struct ptp_clock_info *info,
+ struct ptp_clock_request *rq, int on)
+{
+ struct ice_pf *pf = ptp_info_to_pf(info);
+ struct ice_perout_channel clk_cfg = {0};
+ int err;
+
+ switch (rq->type) {
+ case PTP_CLK_REQ_PPS:
+ clk_cfg.gpio_pin = PPS_PIN_INDEX;
+ clk_cfg.period = NSEC_PER_SEC;
+ clk_cfg.ena = !!on;
+
+ err = ice_ptp_cfg_clkout(pf, PPS_CLK_GEN_CHAN, &clk_cfg, true);
+ break;
+ case PTP_CLK_REQ_EXTTS:
+ err = ice_ptp_cfg_extts(pf, !!on, rq->extts.index,
+ TIME_SYNC_PIN_INDEX, rq->extts.flags);
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ return err;
+}
+
+/**
* ice_ptp_gettimex64 - Get the time of the clock
* @info: the driver's PTP info structure
* @ts: timespec64 structure to hold the current time value
@@ -2221,6 +2257,19 @@ ice_ptp_setup_pins_e810(struct ice_pf *pf, struct ptp_clock_info *info)
}
/**
+ * ice_ptp_setup_pins_e823 - Setup PTP pins in sysfs
+ * @pf: pointer to the PF instance
+ * @info: PTP clock capabilities
+ */
+static void
+ice_ptp_setup_pins_e823(struct ice_pf *pf, struct ptp_clock_info *info)
+{
+ info->pps = 1;
+ info->n_per_out = 0;
+ info->n_ext_ts = 1;
+}
+
+/**
* ice_ptp_set_funcs_e822 - Set specialized functions for E822 support
* @pf: Board private structure
* @info: PTP info to fill
@@ -2258,6 +2307,23 @@ ice_ptp_set_funcs_e810(struct ice_pf *pf, struct ptp_clock_info *info)
}
/**
+ * ice_ptp_set_funcs_e823 - Set specialized functions for E823 support
+ * @pf: Board private structure
+ * @info: PTP info to fill
+ *
+ * Assign functions to the PTP capabilities structure for E823 devices.
+ * Functions which operate across all device families should be set directly
+ * in ice_ptp_set_caps. Only add functions here which are distinct for e823
+ * devices.
+ */
+static void
+ice_ptp_set_funcs_e823(struct ice_pf *pf, struct ptp_clock_info *info)
+{
+ info->enable = ice_ptp_gpio_enable_e823;
+ ice_ptp_setup_pins_e823(pf, info);
+}
+
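With the E823 ops wired up, the new PPS path can be exercised through the standard PTP character device; a minimal user-space sketch, assuming the clock is registered as /dev/ptp0 and the caller has CAP_SYS_TIME:

/* Enabling PPS on the PHC reaches the driver's .enable callback
 * (ice_ptp_gpio_enable_e823 on E823) with PTP_CLK_REQ_PPS.
 * The device path is an assumption.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/ptp_clock.h>

int main(void)
{
	int fd = open("/dev/ptp0", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, PTP_ENABLE_PPS, 1))
		perror("PTP_ENABLE_PPS");
	close(fd);
	return 0;
}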
+/**
* ice_ptp_set_caps - Set PTP capabilities
* @pf: Board private structure
*/
@@ -2269,7 +2335,7 @@ static void ice_ptp_set_caps(struct ice_pf *pf)
snprintf(info->name, sizeof(info->name) - 1, "%s-%s-clk",
dev_driver_string(dev), dev_name(dev));
info->owner = THIS_MODULE;
- info->max_adj = 999999999;
+ info->max_adj = 100000000;
info->adjtime = ice_ptp_adjtime;
info->adjfine = ice_ptp_adjfine;
info->gettimex64 = ice_ptp_gettimex64;
@@ -2277,6 +2343,8 @@ static void ice_ptp_set_caps(struct ice_pf *pf)
if (ice_is_e810(&pf->hw))
ice_ptp_set_funcs_e810(pf, info);
+ else if (ice_is_e823(&pf->hw))
+ ice_ptp_set_funcs_e823(pf, info);
else
ice_ptp_set_funcs_e822(pf, info);
}
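max_adj is expressed in parts per billion, while adjfine requests arrive in scaled ppm; a standalone sketch of the conversion and the bound check, mirroring the arithmetic the PTP core applies (1000/65536 == 125/8192):

#include <stdint.h>
#include <stdio.h>

/* scaled ppm carries a 16-bit binary fraction */
static long scaled_ppm_to_ppb(long ppm)
{
	int64_t ppb = 1 + ppm;

	ppb *= 125;
	ppb >>= 13;
	return (long)ppb;
}

int main(void)
{
	long max_adj = 100000000;	/* new ice limit: 10% */
	long req = 512L << 16;		/* request: +512 ppm */
	long ppb = scaled_ppm_to_ppb(req);

	printf("%ld ppb -> %s\n", ppb,
	       ppb > max_adj || ppb < -max_adj ? "-ERANGE" : "accepted");
	return 0;
}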
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index 6d08b397df2a..4eca8d195ef0 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -1063,7 +1063,6 @@ ice_sched_add_nodes_to_layer(struct ice_port_info *pi,
*num_nodes_added = 0;
while (*num_nodes_added < num_nodes) {
u16 max_child_nodes, num_added = 0;
- /* cppcheck-suppress unusedVariable */
u32 temp;
status = ice_sched_add_nodes_to_hw_layer(pi, tc_node, parent,
@@ -1655,12 +1654,13 @@ ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
u32 first_node_teid;
u16 num_added = 0;
u8 i, qgl, vsil;
- int status;
qgl = ice_sched_get_qgrp_layer(hw);
vsil = ice_sched_get_vsi_layer(hw);
parent = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
for (i = vsil + 1; i <= qgl; i++) {
+ int status;
+
if (!parent)
return -EIO;
@@ -1756,13 +1756,14 @@ ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_handle,
u32 first_node_teid;
u16 num_added = 0;
u8 i, vsil;
- int status;
if (!pi)
return -EINVAL;
vsil = ice_sched_get_vsi_layer(pi->hw);
for (i = pi->hw->sw_entry_point_layer; i <= vsil; i++) {
+ int status;
+
status = ice_sched_add_nodes_to_layer(pi, tc_node, parent,
i, num_nodes[i],
&first_node_teid,
diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
index 3ba1408c56a9..96a64c25e2ef 100644
--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
+++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
@@ -41,21 +41,6 @@ static void ice_free_vf_entries(struct ice_pf *pf)
}
/**
- * ice_vf_vsi_release - invalidate the VF's VSI after freeing it
- * @vf: invalidate this VF's VSI after freeing it
- */
-static void ice_vf_vsi_release(struct ice_vf *vf)
-{
- struct ice_vsi *vsi = ice_get_vf_vsi(vf);
-
- if (WARN_ON(!vsi))
- return;
-
- ice_vsi_release(vsi);
- ice_vf_invalidate_vsi(vf);
-}
-
-/**
* ice_free_vf_res - Free a VF's resources
* @vf: pointer to the VF info
*/
@@ -248,11 +233,16 @@ void ice_free_vfs(struct ice_pf *pf)
*/
static struct ice_vsi *ice_vf_vsi_setup(struct ice_vf *vf)
{
- struct ice_port_info *pi = ice_vf_get_port_info(vf);
+ struct ice_vsi_cfg_params params = {};
struct ice_pf *pf = vf->pf;
struct ice_vsi *vsi;
- vsi = ice_vsi_setup(pf, pi, ICE_VSI_VF, vf, NULL);
+ params.type = ICE_VSI_VF;
+ params.pi = ice_vf_get_port_info(vf);
+ params.vf = vf;
+ params.flags = ICE_VSI_FLAG_INIT;
+
+ vsi = ice_vsi_setup(pf, &params);
if (!vsi) {
dev_err(ice_pf_to_dev(pf), "Failed to create VF VSI\n");
@@ -583,51 +573,19 @@ static int ice_set_per_vf_res(struct ice_pf *pf, u16 num_vfs)
*/
static int ice_init_vf_vsi_res(struct ice_vf *vf)
{
- struct ice_vsi_vlan_ops *vlan_ops;
struct ice_pf *pf = vf->pf;
- u8 broadcast[ETH_ALEN];
struct ice_vsi *vsi;
- struct device *dev;
int err;
vf->first_vector_idx = ice_calc_vf_first_vector_idx(pf, vf);
- dev = ice_pf_to_dev(pf);
vsi = ice_vf_vsi_setup(vf);
if (!vsi)
return -ENOMEM;
- err = ice_vsi_add_vlan_zero(vsi);
- if (err) {
- dev_warn(dev, "Failed to add VLAN 0 filter for VF %d\n",
- vf->vf_id);
- goto release_vsi;
- }
-
- vlan_ops = ice_get_compat_vsi_vlan_ops(vsi);
- err = vlan_ops->ena_rx_filtering(vsi);
- if (err) {
- dev_warn(dev, "Failed to enable Rx VLAN filtering for VF %d\n",
- vf->vf_id);
- goto release_vsi;
- }
-
- eth_broadcast_addr(broadcast);
- err = ice_fltr_add_mac(vsi, broadcast, ICE_FWD_TO_VSI);
- if (err) {
- dev_err(dev, "Failed to add broadcast MAC filter for VF %d, error %d\n",
- vf->vf_id, err);
- goto release_vsi;
- }
-
- err = ice_vsi_apply_spoofchk(vsi, vf->spoofchk);
- if (err) {
- dev_warn(dev, "Failed to initialize spoofchk setting for VF %d\n",
- vf->vf_id);
+ err = ice_vf_init_host_cfg(vf, vsi);
+ if (err)
goto release_vsi;
- }
-
- vf->num_mac = 1;
return 0;
@@ -697,6 +655,21 @@ static void ice_sriov_free_vf(struct ice_vf *vf)
}
/**
+ * ice_sriov_clear_reset_state - clears VF Reset status register
+ * @vf: the vf to configure
+ */
+static void ice_sriov_clear_reset_state(struct ice_vf *vf)
+{
+ struct ice_hw *hw = &vf->pf->hw;
+
+ /* Clear the reset status register so that VF immediately sees that
+ * the device is resetting, even if hardware hasn't yet gotten around
+ * to clearing VFGEN_RSTAT for us.
+ */
+ wr32(hw, VFGEN_RSTAT(vf->vf_id), VIRTCHNL_VFR_INPROGRESS);
+}
+
+/**
* ice_sriov_clear_mbx_register - clears SRIOV VF's mailbox registers
* @vf: the vf to configure
*/
@@ -799,23 +772,19 @@ static void ice_sriov_clear_reset_trigger(struct ice_vf *vf)
}
/**
- * ice_sriov_vsi_rebuild - release and rebuild VF's VSI
- * @vf: VF to release and setup the VSI for
+ * ice_sriov_create_vsi - Create a new VSI for a VF
+ * @vf: VF to create the VSI for
*
- * This is only called when a single VF is being reset (i.e. VFR, VFLR, host VF
- * configuration change, etc.).
+ * This is called by ice_vf_recreate_vsi to create the new VSI after the old
+ * VSI has been released.
*/
-static int ice_sriov_vsi_rebuild(struct ice_vf *vf)
+static int ice_sriov_create_vsi(struct ice_vf *vf)
{
- struct ice_pf *pf = vf->pf;
+ struct ice_vsi *vsi;
- ice_vf_vsi_release(vf);
- if (!ice_vf_vsi_setup(vf)) {
- dev_err(ice_pf_to_dev(pf),
- "Failed to release and setup the VF%u's VSI\n",
- vf->vf_id);
+ vsi = ice_vf_vsi_setup(vf);
+ if (!vsi)
return -ENOMEM;
- }
return 0;
}
@@ -826,8 +795,6 @@ static int ice_sriov_vsi_rebuild(struct ice_vf *vf)
*/
static void ice_sriov_post_vsi_rebuild(struct ice_vf *vf)
{
- ice_vf_rebuild_host_cfg(vf);
- ice_vf_set_initialized(vf);
ice_ena_vf_mappings(vf);
wr32(&vf->pf->hw, VFGEN_RSTAT(vf->vf_id), VIRTCHNL_VFR_VFACTIVE);
}
@@ -835,11 +802,13 @@ static void ice_sriov_post_vsi_rebuild(struct ice_vf *vf)
static const struct ice_vf_ops ice_sriov_vf_ops = {
.reset_type = ICE_VF_RESET,
.free = ice_sriov_free_vf,
+ .clear_reset_state = ice_sriov_clear_reset_state,
.clear_mbx_register = ice_sriov_clear_mbx_register,
.trigger_reset_register = ice_sriov_trigger_reset_register,
.poll_reset_status = ice_sriov_poll_reset_status,
.clear_reset_trigger = ice_sriov_clear_reset_trigger,
- .vsi_rebuild = ice_sriov_vsi_rebuild,
+ .irq_close = NULL,
+ .create_vsi = ice_sriov_create_vsi,
.post_vsi_rebuild = ice_sriov_post_vsi_rebuild,
};
@@ -879,21 +848,9 @@ static int ice_create_vf_entries(struct ice_pf *pf, u16 num_vfs)
/* set sriov vf ops for VFs created during SRIOV flow */
vf->vf_ops = &ice_sriov_vf_ops;
- vf->vf_sw_id = pf->first_sw;
- /* assign default capabilities */
- vf->spoofchk = true;
- vf->num_vf_qs = pf->vfs.num_qps_per;
- ice_vc_set_default_allowlist(vf);
-
- /* ctrl_vsi_idx will be set to a valid value only when VF
- * creates its first fdir rule.
- */
- ice_vf_ctrl_invalidate_vsi(vf);
- ice_vf_fdir_init(vf);
-
- ice_virtchnl_set_dflt_ops(vf);
+ ice_initialize_vf_entry(vf);
- mutex_init(&vf->cfg_lock);
+ vf->vf_sw_id = pf->first_sw;
hash_add_rcu(vfs->table, &vf->entry, vf_id);
}
@@ -1285,7 +1242,7 @@ ice_get_vf_cfg(struct net_device *netdev, int vf_id, struct ifla_vf_info *ivi)
goto out_put_vf;
ivi->vf = vf_id;
- ether_addr_copy(ivi->mac, vf->hw_lan_addr.addr);
+ ether_addr_copy(ivi->mac, vf->hw_lan_addr);
/* VF configuration for VLAN and applicable QoS */
ivi->vlan = ice_vf_get_port_vlan_id(vf);
@@ -1333,8 +1290,8 @@ int ice_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
return -EINVAL;
/* nothing left to do, unicast MAC already set */
- if (ether_addr_equal(vf->dev_lan_addr.addr, mac) &&
- ether_addr_equal(vf->hw_lan_addr.addr, mac)) {
+ if (ether_addr_equal(vf->dev_lan_addr, mac) &&
+ ether_addr_equal(vf->hw_lan_addr, mac)) {
ret = 0;
goto out_put_vf;
}
@@ -1348,8 +1305,8 @@ int ice_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
/* VF is notified of its new MAC via the PF's response to the
* VIRTCHNL_OP_GET_VF_RESOURCES message after the VF has been reset
*/
- ether_addr_copy(vf->dev_lan_addr.addr, mac);
- ether_addr_copy(vf->hw_lan_addr.addr, mac);
+ ether_addr_copy(vf->dev_lan_addr, mac);
+ ether_addr_copy(vf->hw_lan_addr, mac);
if (is_zero_ether_addr(mac)) {
/* VF will send VIRTCHNL_OP_ADD_ETH_ADDR message with its MAC */
vf->pf_set_mac = false;
@@ -1750,7 +1707,7 @@ void ice_print_vf_rx_mdd_event(struct ice_vf *vf)
dev_info(dev, "%d Rx Malicious Driver Detection events detected on PF %d VF %d MAC %pM. mdd-auto-reset-vfs=%s\n",
vf->mdd_rx_events.count, pf->hw.pf_id, vf->vf_id,
- vf->dev_lan_addr.addr,
+ vf->dev_lan_addr,
test_bit(ICE_FLAG_MDD_AUTO_RESET_VF, pf->flags)
? "on" : "off");
}
@@ -1794,7 +1751,7 @@ void ice_print_vfs_mdd_events(struct ice_pf *pf)
dev_info(dev, "%d Tx Malicious Driver Detection events detected on PF %d VF %d MAC %pM.\n",
vf->mdd_tx_events.count, hw->pf_id, vf->vf_id,
- vf->dev_lan_addr.addr);
+ vf->dev_lan_addr);
}
}
mutex_unlock(&pf->vfs.table_lock);
@@ -1884,7 +1841,7 @@ ice_is_malicious_vf(struct ice_pf *pf, struct ice_rq_event_info *event,
if (pf_vsi)
dev_warn(dev, "VF MAC %pM on PF MAC %pM is generating asynchronous messages and may be overflowing the PF message queue. Please see the Adapter User Guide for more information\n",
- &vf->dev_lan_addr.addr[0],
+ &vf->dev_lan_addr[0],
pf_vsi->netdev->dev_addr);
}
}
diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.c b/drivers/net/ethernet/intel/ice/ice_tc_lib.c
index 95f392ab9670..6b48cbc049c6 100644
--- a/drivers/net/ethernet/intel/ice/ice_tc_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.c
@@ -792,7 +792,7 @@ static struct ice_vsi *
ice_tc_forward_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *tc_fltr)
{
struct ice_rx_ring *ring = NULL;
- struct ice_vsi *ch_vsi = NULL;
+ struct ice_vsi *dest_vsi = NULL;
struct ice_pf *pf = vsi->back;
struct device *dev;
u32 tc_class;
@@ -810,7 +810,7 @@ ice_tc_forward_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *tc_fltr)
return ERR_PTR(-EOPNOTSUPP);
}
/* Locate ADQ VSI depending on hw_tc number */
- ch_vsi = vsi->tc_map_vsi[tc_class];
+ dest_vsi = vsi->tc_map_vsi[tc_class];
break;
case ICE_FWD_TO_Q:
/* Locate the Rx queue */
@@ -824,7 +824,7 @@ ice_tc_forward_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *tc_fltr)
/* Determine destination VSI even though the action is
* FWD_TO_QUEUE, because QUEUE is associated with VSI
*/
- ch_vsi = tc_fltr->dest_vsi;
+ dest_vsi = tc_fltr->dest_vsi;
break;
default:
dev_err(dev,
@@ -832,13 +832,13 @@ ice_tc_forward_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *tc_fltr)
tc_fltr->action.fltr_act);
return ERR_PTR(-EINVAL);
}
- /* Must have valid ch_vsi (it could be main VSI or ADQ VSI) */
- if (!ch_vsi) {
+ /* Must have valid dest_vsi (it could be main VSI or ADQ VSI) */
+ if (!dest_vsi) {
dev_err(dev,
"Unable to add filter because specified destination VSI doesn't exist\n");
return ERR_PTR(-EINVAL);
}
- return ch_vsi;
+ return dest_vsi;
}
/**
@@ -860,7 +860,7 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi,
struct ice_pf *pf = vsi->back;
struct ice_hw *hw = &pf->hw;
u32 flags = tc_fltr->flags;
- struct ice_vsi *ch_vsi;
+ struct ice_vsi *dest_vsi;
struct device *dev;
u16 lkups_cnt = 0;
u16 l4_proto = 0;
@@ -883,9 +883,11 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi,
}
/* validate forwarding action VSI and queue */
- ch_vsi = ice_tc_forward_action(vsi, tc_fltr);
- if (IS_ERR(ch_vsi))
- return PTR_ERR(ch_vsi);
+ if (ice_is_forward_action(tc_fltr->action.fltr_act)) {
+ dest_vsi = ice_tc_forward_action(vsi, tc_fltr);
+ if (IS_ERR(dest_vsi))
+ return PTR_ERR(dest_vsi);
+ }
lkups_cnt = ice_tc_count_lkups(flags, headers, tc_fltr);
list = kcalloc(lkups_cnt, sizeof(*list), GFP_ATOMIC);
@@ -904,7 +906,7 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi,
switch (tc_fltr->action.fltr_act) {
case ICE_FWD_TO_VSI:
- rule_info.sw_act.vsi_handle = ch_vsi->idx;
+ rule_info.sw_act.vsi_handle = dest_vsi->idx;
rule_info.priority = ICE_SWITCH_FLTR_PRIO_VSI;
rule_info.sw_act.src = hw->pf_id;
rule_info.rx = true;
@@ -915,7 +917,7 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi,
case ICE_FWD_TO_Q:
/* HW queue number in global space */
rule_info.sw_act.fwd_id.q_id = tc_fltr->action.fwd.q.hw_queue;
- rule_info.sw_act.vsi_handle = ch_vsi->idx;
+ rule_info.sw_act.vsi_handle = dest_vsi->idx;
rule_info.priority = ICE_SWITCH_FLTR_PRIO_QUEUE;
rule_info.sw_act.src = hw->pf_id;
rule_info.rx = true;
@@ -923,14 +925,15 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi,
tc_fltr->action.fwd.q.queue,
tc_fltr->action.fwd.q.hw_queue, lkups_cnt);
break;
- default:
- rule_info.sw_act.flag |= ICE_FLTR_TX;
- /* In case of Tx (LOOKUP_TX), src needs to be src VSI */
- rule_info.sw_act.src = vsi->idx;
- /* 'Rx' is false, direction of rule(LOOKUPTRX) */
- rule_info.rx = false;
+ case ICE_DROP_PACKET:
+ rule_info.sw_act.flag |= ICE_FLTR_RX;
+ rule_info.sw_act.src = hw->pf_id;
+ rule_info.rx = true;
rule_info.priority = ICE_SWITCH_FLTR_PRIO_VSI;
break;
+ default:
+ ret = -EOPNOTSUPP;
+ goto exit;
}
ret = ice_add_adv_rule(hw, list, lkups_cnt, &rule_info, &rule_added);
@@ -953,11 +956,11 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi,
tc_fltr->dest_vsi_handle = rule_added.vsi_handle;
if (tc_fltr->action.fltr_act == ICE_FWD_TO_VSI ||
tc_fltr->action.fltr_act == ICE_FWD_TO_Q) {
- tc_fltr->dest_vsi = ch_vsi;
+ tc_fltr->dest_vsi = dest_vsi;
/* keep track of advanced switch filter for
* destination VSI
*/
- ch_vsi->num_chnl_fltr++;
+ dest_vsi->num_chnl_fltr++;
/* keeps track of channel filters for PF VSI */
if (vsi->type == ICE_VSI_PF &&
@@ -978,6 +981,10 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi,
tc_fltr->action.fwd.q.hw_queue, rule_added.rid,
rule_added.rule_id);
break;
+ case ICE_DROP_PACKET:
+ dev_dbg(dev, "added switch rule (lkups_cnt %u, flags 0x%x), action is drop, rid %u, rule_id %u\n",
+ lkups_cnt, flags, rule_added.rid, rule_added.rule_id);
+ break;
default:
break;
}
@@ -1712,6 +1719,9 @@ ice_tc_parse_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr,
case FLOW_ACTION_RX_QUEUE_MAPPING:
/* forward to queue */
return ice_tc_forward_to_queue(vsi, fltr, act);
+ case FLOW_ACTION_DROP:
+ fltr->action.fltr_act = ICE_DROP_PACKET;
+ return 0;
default:
NL_SET_ERR_MSG_MOD(fltr->extack, "Unsupported TC action");
return -EOPNOTSUPP;
diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.h b/drivers/net/ethernet/intel/ice/ice_tc_lib.h
index d916d1e92aa3..8d5e22ac7023 100644
--- a/drivers/net/ethernet/intel/ice/ice_tc_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.h
@@ -211,4 +211,14 @@ ice_del_cls_flower(struct ice_vsi *vsi, struct flow_cls_offload *cls_flower);
void ice_replay_tc_fltrs(struct ice_pf *pf);
bool ice_is_tunnel_supported(struct net_device *dev);
+static inline bool ice_is_forward_action(enum ice_sw_fwd_act_type fltr_act)
+{
+ switch (fltr_act) {
+ case ICE_FWD_TO_VSI:
+ case ICE_FWD_TO_Q:
+ return true;
+ default:
+ return false;
+ }
+}
#endif /* _ICE_TC_LIB_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 086f0b3ab68d..dfd22862e926 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -85,7 +85,7 @@ ice_prgm_fdir_fltr(struct ice_vsi *vsi, struct ice_fltr_desc *fdir_desc,
td_cmd = ICE_TXD_LAST_DESC_CMD | ICE_TX_DESC_CMD_DUMMY |
ICE_TX_DESC_CMD_RE;
- tx_buf->tx_flags = ICE_TX_FLAGS_DUMMY_PKT;
+ tx_buf->type = ICE_TX_BUF_DUMMY;
tx_buf->raw_buf = raw_packet;
tx_desc->cmd_type_offset_bsz =
@@ -112,27 +112,29 @@ ice_prgm_fdir_fltr(struct ice_vsi *vsi, struct ice_fltr_desc *fdir_desc,
static void
ice_unmap_and_free_tx_buf(struct ice_tx_ring *ring, struct ice_tx_buf *tx_buf)
{
- if (tx_buf->skb) {
- if (tx_buf->tx_flags & ICE_TX_FLAGS_DUMMY_PKT)
- devm_kfree(ring->dev, tx_buf->raw_buf);
- else if (ice_ring_is_xdp(ring))
- page_frag_free(tx_buf->raw_buf);
- else
- dev_kfree_skb_any(tx_buf->skb);
- if (dma_unmap_len(tx_buf, len))
- dma_unmap_single(ring->dev,
- dma_unmap_addr(tx_buf, dma),
- dma_unmap_len(tx_buf, len),
- DMA_TO_DEVICE);
- } else if (dma_unmap_len(tx_buf, len)) {
+ if (dma_unmap_len(tx_buf, len))
dma_unmap_page(ring->dev,
dma_unmap_addr(tx_buf, dma),
dma_unmap_len(tx_buf, len),
DMA_TO_DEVICE);
+
+ switch (tx_buf->type) {
+ case ICE_TX_BUF_DUMMY:
+ devm_kfree(ring->dev, tx_buf->raw_buf);
+ break;
+ case ICE_TX_BUF_SKB:
+ dev_kfree_skb_any(tx_buf->skb);
+ break;
+ case ICE_TX_BUF_XDP_TX:
+ page_frag_free(tx_buf->raw_buf);
+ break;
+ case ICE_TX_BUF_XDP_XMIT:
+ xdp_return_frame(tx_buf->xdpf);
+ break;
}
tx_buf->next_to_watch = NULL;
- tx_buf->skb = NULL;
+ tx_buf->type = ICE_TX_BUF_EMPTY;
dma_unmap_len_set(tx_buf, len, 0);
/* tx_buf must be completely set up in the transmit path */
}
@@ -174,8 +176,6 @@ tx_skip_free:
tx_ring->next_to_use = 0;
tx_ring->next_to_clean = 0;
- tx_ring->next_dd = ICE_RING_QUARTER(tx_ring) - 1;
- tx_ring->next_rs = ICE_RING_QUARTER(tx_ring) - 1;
if (!tx_ring->netdev)
return;
@@ -267,7 +267,7 @@ static bool ice_clean_tx_irq(struct ice_tx_ring *tx_ring, int napi_budget)
DMA_TO_DEVICE);
/* clear tx_buf data */
- tx_buf->skb = NULL;
+ tx_buf->type = ICE_TX_BUF_EMPTY;
dma_unmap_len_set(tx_buf, len, 0);
/* unmap remaining buffers */
@@ -382,6 +382,7 @@ err:
*/
void ice_clean_rx_ring(struct ice_rx_ring *rx_ring)
{
+ struct xdp_buff *xdp = &rx_ring->xdp;
struct device *dev = rx_ring->dev;
u32 size;
u16 i;
@@ -390,16 +391,16 @@ void ice_clean_rx_ring(struct ice_rx_ring *rx_ring)
if (!rx_ring->rx_buf)
return;
- if (rx_ring->skb) {
- dev_kfree_skb(rx_ring->skb);
- rx_ring->skb = NULL;
- }
-
if (rx_ring->xsk_pool) {
ice_xsk_clean_rx_ring(rx_ring);
goto rx_skip_free;
}
+ if (xdp->data) {
+ xdp_return_buff(xdp);
+ xdp->data = NULL;
+ }
+
/* Free all the Rx ring sk_buffs */
for (i = 0; i < rx_ring->count; i++) {
struct ice_rx_buf *rx_buf = &rx_ring->rx_buf[i];
@@ -437,6 +438,7 @@ rx_skip_free:
rx_ring->next_to_alloc = 0;
rx_ring->next_to_clean = 0;
+ rx_ring->first_desc = 0;
rx_ring->next_to_use = 0;
}
@@ -506,6 +508,7 @@ int ice_setup_rx_ring(struct ice_rx_ring *rx_ring)
rx_ring->next_to_use = 0;
rx_ring->next_to_clean = 0;
+ rx_ring->first_desc = 0;
if (ice_is_xdp_ena_vsi(rx_ring->vsi))
WRITE_ONCE(rx_ring->xdp_prog, rx_ring->vsi->xdp_prog);
@@ -523,8 +526,16 @@ err:
return -ENOMEM;
}
+/**
+ * ice_rx_frame_truesize - Calculate the truesize of an Rx frame
+ * @rx_ring: ptr to Rx ring
+ * @size: packet data length from the Rx descriptor
+ *
+ * Calculate the truesize, taking into account the PAGE_SIZE of the
+ * underlying arch.
+ */
static unsigned int
-ice_rx_frame_truesize(struct ice_rx_ring *rx_ring, unsigned int __maybe_unused size)
+ice_rx_frame_truesize(struct ice_rx_ring *rx_ring, const unsigned int size)
{
unsigned int truesize;
@@ -545,34 +556,39 @@ ice_rx_frame_truesize(struct ice_rx_ring *rx_ring, unsigned int __maybe_unused s
* @xdp: xdp_buff used as input to the XDP program
* @xdp_prog: XDP program to run
* @xdp_ring: ring to be used for XDP_TX action
+ * @rx_buf: Rx buffer to store the XDP action
*
- * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
+ * Stores the resulting XDP action (any of ICE_XDP_{PASS, CONSUMED, TX,
+ * REDIR}) in rx_buf->act.
*/
-static int
+static void
ice_run_xdp(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
- struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring)
+ struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring,
+ struct ice_rx_buf *rx_buf)
{
- int err;
+ unsigned int ret = ICE_XDP_PASS;
u32 act;
+ if (!xdp_prog)
+ goto exit;
+
act = bpf_prog_run_xdp(xdp_prog, xdp);
switch (act) {
case XDP_PASS:
- return ICE_XDP_PASS;
+ break;
case XDP_TX:
if (static_branch_unlikely(&ice_xdp_locking_key))
spin_lock(&xdp_ring->tx_lock);
- err = ice_xmit_xdp_ring(xdp->data, xdp->data_end - xdp->data, xdp_ring);
+ ret = __ice_xmit_xdp_ring(xdp, xdp_ring, false);
if (static_branch_unlikely(&ice_xdp_locking_key))
spin_unlock(&xdp_ring->tx_lock);
- if (err == ICE_XDP_CONSUMED)
+ if (ret == ICE_XDP_CONSUMED)
goto out_failure;
- return err;
+ break;
case XDP_REDIRECT:
- err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
- if (err)
+ if (xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog))
goto out_failure;
- return ICE_XDP_REDIR;
+ ret = ICE_XDP_REDIR;
+ break;
default:
bpf_warn_invalid_xdp_action(rx_ring->netdev, xdp_prog, act);
fallthrough;
@@ -581,8 +597,31 @@ out_failure:
trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
fallthrough;
case XDP_DROP:
- return ICE_XDP_CONSUMED;
+ ret = ICE_XDP_CONSUMED;
}
+exit:
+ rx_buf->act = ret;
+ if (unlikely(xdp_buff_has_frags(xdp)))
+ ice_set_rx_bufs_act(xdp, rx_ring, ret);
+}
+
+/**
+ * ice_xmit_xdp_ring - submit frame to XDP ring for transmission
+ * @xdpf: XDP frame that will be converted to XDP buff
+ * @xdp_ring: XDP ring for transmission
+ */
+static int ice_xmit_xdp_ring(const struct xdp_frame *xdpf,
+ struct ice_tx_ring *xdp_ring)
+{
+ struct xdp_buff xdp;
+
+ xdp.data_hard_start = (void *)xdpf;
+ xdp.data = xdpf->data;
+ xdp.data_end = xdp.data + xdpf->len;
+ xdp.frame_sz = xdpf->frame_sz;
+ xdp.flags = xdpf->flags;
+
+ return __ice_xmit_xdp_ring(&xdp, xdp_ring, true);
}
/**
@@ -605,6 +644,7 @@ ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
unsigned int queue_index = smp_processor_id();
struct ice_vsi *vsi = np->vsi;
struct ice_tx_ring *xdp_ring;
+ struct ice_tx_buf *tx_buf;
int nxmit = 0, i;
if (test_bit(ICE_VSI_DOWN, vsi->state))
@@ -627,16 +667,18 @@ ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
xdp_ring = vsi->xdp_rings[queue_index];
}
+ tx_buf = &xdp_ring->tx_buf[xdp_ring->next_to_use];
for (i = 0; i < n; i++) {
- struct xdp_frame *xdpf = frames[i];
+ const struct xdp_frame *xdpf = frames[i];
int err;
- err = ice_xmit_xdp_ring(xdpf->data, xdpf->len, xdp_ring);
+ err = ice_xmit_xdp_ring(xdpf, xdp_ring);
if (err != ICE_XDP_TX)
break;
nxmit++;
}
+ tx_buf->rs_idx = ice_set_rs_bit(xdp_ring);
if (unlikely(flags & XDP_XMIT_FLUSH))
ice_xdp_ring_update_tail(xdp_ring);
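ice_set_rs_bit() itself lands elsewhere in this series; judging from the call site above, a sketch of its assumed shape, which tags only the last written descriptor with RS so hardware produces one DD writeback per batch:

/* Assumed shape, not the exact helper: stamp RS on the final
 * descriptor of the burst and hand its index back so the cleaner
 * knows where the batch ends (stored in tx_buf->rs_idx).
 */
static u32 ice_set_rs_bit(const struct ice_tx_ring *xdp_ring)
{
	u32 rs_idx = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 :
					     xdp_ring->count - 1;
	struct ice_tx_desc *tx_desc = ICE_TX_DESC(xdp_ring, rs_idx);

	tx_desc->cmd_type_offset_bsz |=
		cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);

	return rs_idx;
}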
@@ -706,7 +748,7 @@ ice_alloc_mapped_page(struct ice_rx_ring *rx_ring, struct ice_rx_buf *bi)
* buffers. Then bump tail at most one time. Grouping like this lets us avoid
* multiple tail writes per call.
*/
-bool ice_alloc_rx_bufs(struct ice_rx_ring *rx_ring, u16 cleaned_count)
+bool ice_alloc_rx_bufs(struct ice_rx_ring *rx_ring, unsigned int cleaned_count)
{
union ice_32b_rx_flex_desc *rx_desc;
u16 ntu = rx_ring->next_to_use;
@@ -783,7 +825,6 @@ ice_rx_buf_adjust_pg_offset(struct ice_rx_buf *rx_buf, unsigned int size)
/**
* ice_can_reuse_rx_page - Determine if page can be reused for another Rx
* @rx_buf: buffer containing the page
- * @rx_buf_pgcnt: rx_buf page refcount pre xdp_do_redirect() call
*
* If page is reusable, we have a green light for calling ice_reuse_rx_page,
* which will assign the current buffer to the buffer that next_to_alloc is
@@ -791,7 +832,7 @@ ice_rx_buf_adjust_pg_offset(struct ice_rx_buf *rx_buf, unsigned int size)
* page freed
*/
static bool
-ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf, int rx_buf_pgcnt)
+ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf)
{
unsigned int pagecnt_bias = rx_buf->pagecnt_bias;
struct page *page = rx_buf->page;
@@ -802,7 +843,7 @@ ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf, int rx_buf_pgcnt)
#if (PAGE_SIZE < 8192)
/* if we are only owner of page we can reuse it */
- if (unlikely((rx_buf_pgcnt - pagecnt_bias) > 1))
+ if (unlikely(rx_buf->pgcnt - pagecnt_bias > 1))
return false;
#else
#define ICE_LAST_OFFSET \
@@ -824,33 +865,44 @@ ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf, int rx_buf_pgcnt)
}
/**
- * ice_add_rx_frag - Add contents of Rx buffer to sk_buff as a frag
+ * ice_add_xdp_frag - Add contents of Rx buffer to xdp buf as a frag
* @rx_ring: Rx descriptor ring to transact packets on
+ * @xdp: xdp buff to place the data into
* @rx_buf: buffer containing page to add
- * @skb: sk_buff to place the data into
* @size: packet length from rx_desc
*
- * This function will add the data contained in rx_buf->page to the skb.
- * It will just attach the page as a frag to the skb.
- * The function will then update the page offset.
+ * This function will add the data contained in rx_buf->page to the xdp buf.
+ * It will just attach the page as a frag.
*/
-static void
-ice_add_rx_frag(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf,
- struct sk_buff *skb, unsigned int size)
+static int
+ice_add_xdp_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
+ struct ice_rx_buf *rx_buf, const unsigned int size)
{
-#if (PAGE_SIZE >= 8192)
- unsigned int truesize = SKB_DATA_ALIGN(size + rx_ring->rx_offset);
-#else
- unsigned int truesize = ice_rx_pg_size(rx_ring) / 2;
-#endif
+ struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
if (!size)
- return;
- skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page,
- rx_buf->page_offset, size, truesize);
+ return 0;
+
+ if (!xdp_buff_has_frags(xdp)) {
+ sinfo->nr_frags = 0;
+ sinfo->xdp_frags_size = 0;
+ xdp_buff_set_frags_flag(xdp);
+ }
- /* page is being used so we must update the page offset */
- ice_rx_buf_adjust_pg_offset(rx_buf, truesize);
+ if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS)) {
+ if (unlikely(xdp_buff_has_frags(xdp)))
+ ice_set_rx_bufs_act(xdp, rx_ring, ICE_XDP_CONSUMED);
+ return -ENOMEM;
+ }
+
+ __skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, rx_buf->page,
+ rx_buf->page_offset, size);
+ sinfo->xdp_frags_size += size;
+
+ if (page_is_pfmemalloc(rx_buf->page))
+ xdp_buff_set_frag_pfmemalloc(xdp);
+
+ return 0;
}
/**
@@ -886,19 +938,18 @@ ice_reuse_rx_page(struct ice_rx_ring *rx_ring, struct ice_rx_buf *old_buf)
* ice_get_rx_buf - Fetch Rx buffer and synchronize data for use
* @rx_ring: Rx descriptor ring to transact packets on
* @size: size of buffer to add to skb
- * @rx_buf_pgcnt: rx_buf page refcount
+ * @ntc: index of the next to clean element
*
* This function will pull an Rx buffer from the ring and synchronize it
* for use by the CPU.
*/
static struct ice_rx_buf *
ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
- int *rx_buf_pgcnt)
+ const unsigned int ntc)
{
struct ice_rx_buf *rx_buf;
- rx_buf = &rx_ring->rx_buf[rx_ring->next_to_clean];
- *rx_buf_pgcnt =
+ rx_buf = &rx_ring->rx_buf[ntc];
+ rx_buf->pgcnt =
#if (PAGE_SIZE < 8192)
page_count(rx_buf->page);
#else
@@ -922,26 +973,25 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
/**
* ice_build_skb - Build skb around an existing buffer
* @rx_ring: Rx descriptor ring to transact packets on
- * @rx_buf: Rx buffer to pull data from
* @xdp: xdp_buff pointing to the data
*
- * This function builds an skb around an existing Rx buffer, taking care
- * to set up the skb correctly and avoid any memcpy overhead.
+ * This function builds an skb around an existing XDP buffer, taking care
+ * to set up the skb correctly and avoid any memcpy overhead. The driver
+ * has already combined any frags into the skb_shared_info.
*/
static struct sk_buff *
-ice_build_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf,
- struct xdp_buff *xdp)
+ice_build_skb(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)
{
u8 metasize = xdp->data - xdp->data_meta;
-#if (PAGE_SIZE < 8192)
- unsigned int truesize = ice_rx_pg_size(rx_ring) / 2;
-#else
- unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +
- SKB_DATA_ALIGN(xdp->data_end -
- xdp->data_hard_start);
-#endif
+ struct skb_shared_info *sinfo = NULL;
+ unsigned int nr_frags;
struct sk_buff *skb;
+ if (unlikely(xdp_buff_has_frags(xdp))) {
+ sinfo = xdp_get_shared_info_from_buff(xdp);
+ nr_frags = sinfo->nr_frags;
+ }
+
/* Prefetch first cache line of first page. If xdp->data_meta
* is unused, this points exactly as xdp->data, otherwise we
* likely have a consumer accessing first few bytes of meta
@@ -949,7 +999,7 @@ ice_build_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf,
*/
net_prefetch(xdp->data_meta);
/* build an skb around the page buffer */
- skb = napi_build_skb(xdp->data_hard_start, truesize);
+ skb = napi_build_skb(xdp->data_hard_start, xdp->frame_sz);
if (unlikely(!skb))
return NULL;
@@ -964,8 +1014,11 @@ ice_build_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf,
if (metasize)
skb_metadata_set(skb, metasize);
- /* buffer is used by skb, update page_offset */
- ice_rx_buf_adjust_pg_offset(rx_buf, truesize);
+ if (unlikely(xdp_buff_has_frags(xdp)))
+ xdp_update_skb_shared_info(skb, nr_frags,
+ sinfo->xdp_frags_size,
+ nr_frags * xdp->frame_sz,
+ xdp_buff_is_frag_pfmemalloc(xdp));
return skb;
}
@@ -981,24 +1034,30 @@ ice_build_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf,
* skb correctly.
*/
static struct sk_buff *
-ice_construct_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf,
- struct xdp_buff *xdp)
+ice_construct_skb(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)
{
- unsigned int metasize = xdp->data - xdp->data_meta;
unsigned int size = xdp->data_end - xdp->data;
+ struct skb_shared_info *sinfo = NULL;
+ struct ice_rx_buf *rx_buf;
+ unsigned int nr_frags = 0;
unsigned int headlen;
struct sk_buff *skb;
/* prefetch first cache line of first page */
- net_prefetch(xdp->data_meta);
+ net_prefetch(xdp->data);
+
+ if (unlikely(xdp_buff_has_frags(xdp))) {
+ sinfo = xdp_get_shared_info_from_buff(xdp);
+ nr_frags = sinfo->nr_frags;
+ }
/* allocate a skb to store the frags */
- skb = __napi_alloc_skb(&rx_ring->q_vector->napi,
- ICE_RX_HDR_SIZE + metasize,
+ skb = __napi_alloc_skb(&rx_ring->q_vector->napi, ICE_RX_HDR_SIZE,
GFP_ATOMIC | __GFP_NOWARN);
if (unlikely(!skb))
return NULL;
+ rx_buf = &rx_ring->rx_buf[rx_ring->first_desc];
skb_record_rx_queue(skb, rx_ring->q_index);
/* Determine available headroom for copy */
headlen = size;
@@ -1006,32 +1065,42 @@ ice_construct_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf,
headlen = eth_get_headlen(skb->dev, xdp->data, ICE_RX_HDR_SIZE);
/* align pull length to size of long to optimize memcpy performance */
- memcpy(__skb_put(skb, headlen + metasize), xdp->data_meta,
- ALIGN(headlen + metasize, sizeof(long)));
-
- if (metasize) {
- skb_metadata_set(skb, metasize);
- __skb_pull(skb, metasize);
- }
+ memcpy(__skb_put(skb, headlen), xdp->data, ALIGN(headlen,
+ sizeof(long)));
/* if we exhaust the linear part then add what is left as a frag */
size -= headlen;
if (size) {
-#if (PAGE_SIZE >= 8192)
- unsigned int truesize = SKB_DATA_ALIGN(size);
-#else
- unsigned int truesize = ice_rx_pg_size(rx_ring) / 2;
-#endif
+		/* besides adding a partial frag here, we are going to add
+		 * frags from the xdp_buff; make sure there is enough space
+		 * for them
+		 */
+ if (unlikely(nr_frags >= MAX_SKB_FRAGS - 1)) {
+ dev_kfree_skb(skb);
+ return NULL;
+ }
skb_add_rx_frag(skb, 0, rx_buf->page,
- rx_buf->page_offset + headlen, size, truesize);
- /* buffer is used by skb, update page_offset */
- ice_rx_buf_adjust_pg_offset(rx_buf, truesize);
+ rx_buf->page_offset + headlen, size,
+ xdp->frame_sz);
} else {
- /* buffer is unused, reset bias back to rx_buf; data was copied
- * onto skb's linear part so there's no need for adjusting
- * page offset and we can reuse this buffer as-is
+ /* buffer is unused, change the act that should be taken later
+ * on; data was copied onto skb's linear part so there's no
+ * need for adjusting page offset and we can reuse this buffer
+ * as-is
*/
- rx_buf->pagecnt_bias++;
+ rx_buf->act = ICE_SKB_CONSUMED;
+ }
+
+ if (unlikely(xdp_buff_has_frags(xdp))) {
+ struct skb_shared_info *skinfo = skb_shinfo(skb);
+
+ memcpy(&skinfo->frags[skinfo->nr_frags], &sinfo->frags[0],
+ sizeof(skb_frag_t) * nr_frags);
+
+ xdp_update_skb_shared_info(skb, skinfo->nr_frags + nr_frags,
+ sinfo->xdp_frags_size,
+ nr_frags * xdp->frame_sz,
+ xdp_buff_is_frag_pfmemalloc(xdp));
}
return skb;
@@ -1041,26 +1110,17 @@ ice_construct_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf,
* ice_put_rx_buf - Clean up used buffer and either recycle or free
* @rx_ring: Rx descriptor ring to transact packets on
* @rx_buf: Rx buffer to pull data from
- * @rx_buf_pgcnt: Rx buffer page count pre xdp_do_redirect()
*
- * This function will update next_to_clean and then clean up the contents
- * of the rx_buf. It will either recycle the buffer or unmap it and free
- * the associated resources.
+ * This function will clean up the contents of the rx_buf. It will either
+ * recycle the buffer or unmap it and free the associated resources.
*/
static void
-ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf,
- int rx_buf_pgcnt)
+ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf)
{
- u16 ntc = rx_ring->next_to_clean + 1;
-
- /* fetch, update, and store next to clean */
- ntc = (ntc < rx_ring->count) ? ntc : 0;
- rx_ring->next_to_clean = ntc;
-
if (!rx_buf)
return;
- if (ice_can_reuse_rx_page(rx_buf, rx_buf_pgcnt)) {
+ if (ice_can_reuse_rx_page(rx_buf)) {
/* hand second half of page back to the ring */
ice_reuse_rx_page(rx_ring, rx_buf);
} else {
@@ -1076,27 +1136,6 @@ ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf,
}
/**
- * ice_is_non_eop - process handling of non-EOP buffers
- * @rx_ring: Rx ring being processed
- * @rx_desc: Rx descriptor for current buffer
- *
- * If the buffer is an EOP buffer, this function exits returning false,
- * otherwise return true indicating that this is in fact a non-EOP buffer.
- */
-static bool
-ice_is_non_eop(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc)
-{
- /* if we are the last buffer then there is nothing else to do */
-#define ICE_RXD_EOF BIT(ICE_RX_FLEX_DESC_STATUS0_EOF_S)
- if (likely(ice_test_staterr(rx_desc->wb.status_error0, ICE_RXD_EOF)))
- return false;
-
- rx_ring->ring_stats->rx_stats.non_eop_descs++;
-
- return true;
-}
-
-/**
* ice_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
* @rx_ring: Rx descriptor ring to transact packets on
* @budget: Total limit on number of packets to process
@@ -1110,39 +1149,42 @@ ice_is_non_eop(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc)
*/
int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
{
- unsigned int total_rx_bytes = 0, total_rx_pkts = 0, frame_sz = 0;
- u16 cleaned_count = ICE_DESC_UNUSED(rx_ring);
+ unsigned int total_rx_bytes = 0, total_rx_pkts = 0;
unsigned int offset = rx_ring->rx_offset;
+ struct xdp_buff *xdp = &rx_ring->xdp;
struct ice_tx_ring *xdp_ring = NULL;
- unsigned int xdp_res, xdp_xmit = 0;
- struct sk_buff *skb = rx_ring->skb;
struct bpf_prog *xdp_prog = NULL;
- struct xdp_buff xdp;
+ u32 ntc = rx_ring->next_to_clean;
+ u32 cnt = rx_ring->count;
+ u32 cached_ntc = ntc;
+ u32 xdp_xmit = 0;
+ u32 cached_ntu;
bool failure;
+ u32 first;
/* Frame size depend on rx_ring setup when PAGE_SIZE=4K */
#if (PAGE_SIZE < 8192)
- frame_sz = ice_rx_frame_truesize(rx_ring, 0);
+ xdp->frame_sz = ice_rx_frame_truesize(rx_ring, 0);
#endif
- xdp_init_buff(&xdp, frame_sz, &rx_ring->xdp_rxq);
xdp_prog = READ_ONCE(rx_ring->xdp_prog);
- if (xdp_prog)
+ if (xdp_prog) {
xdp_ring = rx_ring->xdp_ring;
+ cached_ntu = xdp_ring->next_to_use;
+ }
/* start the loop to process Rx packets bounded by 'budget' */
while (likely(total_rx_pkts < (unsigned int)budget)) {
union ice_32b_rx_flex_desc *rx_desc;
struct ice_rx_buf *rx_buf;
- unsigned char *hard_start;
+ struct sk_buff *skb;
unsigned int size;
u16 stat_err_bits;
- int rx_buf_pgcnt;
u16 vlan_tag = 0;
u16 rx_ptype;
/* get the Rx desc from Rx ring based on 'next_to_clean' */
- rx_desc = ICE_RX_DESC(rx_ring, rx_ring->next_to_clean);
+ rx_desc = ICE_RX_DESC(rx_ring, ntc);
/* status_error_len will always be zero for unused descriptors
* because it's cleared in cleanup, and overlaps with hdr_addr
@@ -1166,8 +1208,8 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
if (rx_desc->wb.rxdid == FDIR_DESC_RXDID &&
ctrl_vsi->vf)
ice_vc_fdir_irq_handler(ctrl_vsi, rx_desc);
- ice_put_rx_buf(rx_ring, NULL, 0);
- cleaned_count++;
+ if (++ntc == cnt)
+ ntc = 0;
continue;
}
@@ -1175,65 +1217,56 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
ICE_RX_FLX_DESC_PKT_LEN_M;
/* retrieve a buffer from the ring */
- rx_buf = ice_get_rx_buf(rx_ring, size, &rx_buf_pgcnt);
+ rx_buf = ice_get_rx_buf(rx_ring, size, ntc);
- if (!size) {
- xdp.data = NULL;
- xdp.data_end = NULL;
- xdp.data_hard_start = NULL;
- xdp.data_meta = NULL;
- goto construct_skb;
- }
+ if (!xdp->data) {
+ void *hard_start;
- hard_start = page_address(rx_buf->page) + rx_buf->page_offset -
- offset;
- xdp_prepare_buff(&xdp, hard_start, offset, size, true);
+ hard_start = page_address(rx_buf->page) + rx_buf->page_offset -
+ offset;
+ xdp_prepare_buff(xdp, hard_start, offset, size, !!offset);
#if (PAGE_SIZE > 4096)
- /* At larger PAGE_SIZE, frame_sz depend on len size */
- xdp.frame_sz = ice_rx_frame_truesize(rx_ring, size);
+			/* At larger PAGE_SIZE, frame_sz depends on the packet length */
+ xdp->frame_sz = ice_rx_frame_truesize(rx_ring, size);
#endif
+ xdp_buff_clear_frags_flag(xdp);
+ } else if (ice_add_xdp_frag(rx_ring, xdp, rx_buf, size)) {
+ break;
+ }
+ if (++ntc == cnt)
+ ntc = 0;
- if (!xdp_prog)
- goto construct_skb;
+		/* not EOP yet: keep gathering buffers for this frame */
+ if (ice_is_non_eop(rx_ring, rx_desc))
+ continue;
- xdp_res = ice_run_xdp(rx_ring, &xdp, xdp_prog, xdp_ring);
- if (!xdp_res)
+ ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_buf);
+ if (rx_buf->act == ICE_XDP_PASS)
goto construct_skb;
- if (xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR)) {
- xdp_xmit |= xdp_res;
- ice_rx_buf_adjust_pg_offset(rx_buf, xdp.frame_sz);
- } else {
- rx_buf->pagecnt_bias++;
- }
- total_rx_bytes += size;
+ total_rx_bytes += xdp_get_buff_len(xdp);
total_rx_pkts++;
- cleaned_count++;
- ice_put_rx_buf(rx_ring, rx_buf, rx_buf_pgcnt);
+ xdp->data = NULL;
+ rx_ring->first_desc = ntc;
continue;
construct_skb:
- if (skb) {
- ice_add_rx_frag(rx_ring, rx_buf, skb, size);
- } else if (likely(xdp.data)) {
- if (ice_ring_uses_build_skb(rx_ring))
- skb = ice_build_skb(rx_ring, rx_buf, &xdp);
- else
- skb = ice_construct_skb(rx_ring, rx_buf, &xdp);
- }
+ if (likely(ice_ring_uses_build_skb(rx_ring)))
+ skb = ice_build_skb(rx_ring, xdp);
+ else
+ skb = ice_construct_skb(rx_ring, xdp);
/* exit if we failed to retrieve a buffer */
if (!skb) {
- rx_ring->ring_stats->rx_stats.alloc_buf_failed++;
- if (rx_buf)
- rx_buf->pagecnt_bias++;
+ rx_ring->ring_stats->rx_stats.alloc_page_failed++;
+ rx_buf->act = ICE_XDP_CONSUMED;
+ if (unlikely(xdp_buff_has_frags(xdp)))
+ ice_set_rx_bufs_act(xdp, rx_ring,
+ ICE_XDP_CONSUMED);
+ xdp->data = NULL;
+ rx_ring->first_desc = ntc;
break;
}
-
- ice_put_rx_buf(rx_ring, rx_buf, rx_buf_pgcnt);
- cleaned_count++;
-
- /* skip if it is NOP desc */
- if (ice_is_non_eop(rx_ring, rx_desc))
- continue;
+ xdp->data = NULL;
+ rx_ring->first_desc = ntc;
stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S);
if (unlikely(ice_test_staterr(rx_desc->wb.status_error0,
@@ -1245,10 +1278,8 @@ construct_skb:
vlan_tag = ice_get_vlan_tag_from_rx_desc(rx_desc);
/* pad the skb if needed, to make a valid ethernet frame */
- if (eth_skb_pad(skb)) {
- skb = NULL;
+ if (eth_skb_pad(skb))
continue;
- }
/* probably a little skewed due to removing CRC */
total_rx_bytes += skb->len;
@@ -1262,18 +1293,34 @@ construct_skb:
ice_trace(clean_rx_irq_indicate, rx_ring, rx_desc, skb);
/* send completed skb up the stack */
ice_receive_skb(rx_ring, skb, vlan_tag);
- skb = NULL;
/* update budget accounting */
total_rx_pkts++;
}
+ first = rx_ring->first_desc;
+ while (cached_ntc != first) {
+ struct ice_rx_buf *buf = &rx_ring->rx_buf[cached_ntc];
+
+ if (buf->act & (ICE_XDP_TX | ICE_XDP_REDIR)) {
+ ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
+ xdp_xmit |= buf->act;
+ } else if (buf->act & ICE_XDP_CONSUMED) {
+ buf->pagecnt_bias++;
+ } else if (buf->act == ICE_XDP_PASS) {
+ ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
+ }
+
+ ice_put_rx_buf(rx_ring, buf);
+ if (++cached_ntc >= cnt)
+ cached_ntc = 0;
+ }
+ rx_ring->next_to_clean = ntc;
/* return up to cleaned_count buffers to hardware */
- failure = ice_alloc_rx_bufs(rx_ring, cleaned_count);
+ failure = ice_alloc_rx_bufs(rx_ring, ICE_RX_DESC_UNUSED(rx_ring));
- if (xdp_prog)
- ice_finalize_xdp_rx(xdp_ring, xdp_xmit);
- rx_ring->skb = skb;
+ if (xdp_xmit)
+ ice_finalize_xdp_rx(xdp_ring, xdp_xmit, cached_ntu);
if (rx_ring->ring_stats)
ice_update_rx_ring_stats(rx_ring, total_rx_pkts,
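The reworked ice_finalize_xdp_rx() is not part of this excerpt; from the call site it now takes the next_to_use snapshot (cached_ntu) so the RS bit is placed once per NAPI poll rather than per frame. A plausible sketch, assuming it keeps its old flush and tail-bump duties:

/* Plausible shape only: flush redirects, bump the tail once, and
 * record the RS position in the first buffer of this poll's batch
 * (first_idx == cached_ntu at the caller).
 */
static void ice_finalize_xdp_rx(struct ice_tx_ring *xdp_ring, u32 xdp_res,
				u32 first_idx)
{
	struct ice_tx_buf *tx_buf = &xdp_ring->tx_buf[first_idx];

	if (xdp_res & ICE_XDP_REDIR)
		xdp_do_flush_map();

	if (xdp_res & ICE_XDP_TX) {
		if (static_branch_unlikely(&ice_xdp_locking_key))
			spin_lock(&xdp_ring->tx_lock);
		ice_xdp_ring_update_tail(xdp_ring);
		tx_buf->rs_idx = ice_set_rs_bit(xdp_ring);
		if (static_branch_unlikely(&ice_xdp_locking_key))
			spin_unlock(&xdp_ring->tx_lock);
	}
}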
@@ -1682,6 +1729,7 @@ ice_tx_map(struct ice_tx_ring *tx_ring, struct ice_tx_buf *first,
DMA_TO_DEVICE);
tx_buf = &tx_ring->tx_buf[i];
+ tx_buf->type = ICE_TX_BUF_FRAG;
}
/* record SW timestamp if HW timestamp is not available */
@@ -1996,7 +2044,6 @@ int ice_tso(struct ice_tx_buf *first, struct ice_tx_offload_params *off)
if (err < 0)
return err;
- /* cppcheck-suppress unreadVariable */
protocol = vlan_get_protocol(skb);
if (eth_p_mpls(protocol))
@@ -2033,8 +2080,6 @@ int ice_tso(struct ice_tx_buf *first, struct ice_tx_offload_params *off)
}
/* reset pointers to inner headers */
-
- /* cppcheck-suppress unreadVariable */
ip.hdr = skb_inner_network_header(skb);
l4.hdr = skb_inner_transport_header(skb);
@@ -2300,6 +2345,9 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_tx_ring *tx_ring)
ice_trace(xmit_frame_ring, tx_ring, skb);
+ if (unlikely(ipv6_hopopt_jumbo_remove(skb)))
+ goto out_drop;
+
count = ice_xmit_desc_count(skb);
if (ice_chk_linearize(skb, count)) {
if (__skb_linearize(skb))
@@ -2328,6 +2376,7 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_tx_ring *tx_ring)
/* record the location of the first descriptor for this packet */
first = &tx_ring->tx_buf[tx_ring->next_to_use];
first->skb = skb;
+ first->type = ICE_TX_BUF_SKB;
first->bytecount = max_t(unsigned int, skb->len, ETH_ZLEN);
first->gso_segs = 1;
first->tx_flags = 0;
@@ -2500,11 +2549,11 @@ void ice_clean_ctrl_tx_irq(struct ice_tx_ring *tx_ring)
dma_unmap_addr(tx_buf, dma),
dma_unmap_len(tx_buf, len),
DMA_TO_DEVICE);
- if (tx_buf->tx_flags & ICE_TX_FLAGS_DUMMY_PKT)
+ if (tx_buf->type == ICE_TX_BUF_DUMMY)
devm_kfree(tx_ring->dev, tx_buf->raw_buf);
/* clear next_to_watch to prevent false hangs */
- tx_buf->raw_buf = NULL;
+ tx_buf->type = ICE_TX_BUF_EMPTY;
tx_buf->tx_flags = 0;
tx_buf->next_to_watch = NULL;
dma_unmap_len_set(tx_buf, len, 0);
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index 4fd0e5d0a313..fff0efe28373 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -9,10 +9,12 @@
#define ICE_DFLT_IRQ_WORK 256
#define ICE_RXBUF_3072 3072
#define ICE_RXBUF_2048 2048
+#define ICE_RXBUF_1664 1664
#define ICE_RXBUF_1536 1536
#define ICE_MAX_CHAINED_RX_BUFS 5
#define ICE_MAX_BUF_TXD 8
#define ICE_MIN_TX_LEN 17
+#define ICE_MAX_FRAME_LEGACY_RX 8320
/* The size limit for a transmit buffer in a descriptor is (16K - 1).
* In order to align with the read requests we will align the value to
@@ -110,15 +112,16 @@ static inline int ice_skb_pad(void)
(u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
(R)->next_to_clean - (R)->next_to_use - 1)
+#define ICE_RX_DESC_UNUSED(R) \
+ ((((R)->first_desc > (R)->next_to_use) ? 0 : (R)->count) + \
+ (R)->first_desc - (R)->next_to_use - 1)
+
#define ICE_RING_QUARTER(R) ((R)->count >> 2)
#define ICE_TX_FLAGS_TSO BIT(0)
#define ICE_TX_FLAGS_HW_VLAN BIT(1)
#define ICE_TX_FLAGS_SW_VLAN BIT(2)
-/* ICE_TX_FLAGS_DUMMY_PKT is used to mark dummy packets that should be
- * freed instead of returned like skb packets.
- */
-#define ICE_TX_FLAGS_DUMMY_PKT BIT(3)
+/* Free, was ICE_TX_FLAGS_DUMMY_PKT */
#define ICE_TX_FLAGS_TSYN BIT(4)
#define ICE_TX_FLAGS_IPV4 BIT(5)
#define ICE_TX_FLAGS_IPV6 BIT(6)
@@ -134,6 +137,7 @@ static inline int ice_skb_pad(void)
#define ICE_XDP_TX BIT(1)
#define ICE_XDP_REDIR BIT(2)
#define ICE_XDP_EXIT BIT(3)
+#define ICE_SKB_CONSUMED ICE_XDP_CONSUMED
#define ICE_RX_DMA_ATTR \
(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
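The first_desc-based free count wraps around the ring; a standalone check of the ICE_RX_DESC_UNUSED() arithmetic with illustrative values:

#include <stdio.h>

/* field names match the macro above */
struct ring { unsigned int count, first_desc, next_to_use; };

static unsigned int rx_desc_unused(const struct ring *r)
{
	return (r->first_desc > r->next_to_use ? 0 : r->count) +
	       r->first_desc - r->next_to_use - 1;
}

int main(void)
{
	struct ring wrapped = { .count = 512, .first_desc = 10, .next_to_use = 500 };
	struct ring linear  = { .count = 512, .first_desc = 300, .next_to_use = 100 };

	printf("%u\n", rx_desc_unused(&wrapped));	/* 512 + 10 - 500 - 1 = 21 */
	printf("%u\n", rx_desc_unused(&linear));	/* 300 - 100 - 1 = 199 */
	return 0;
}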
@@ -142,15 +146,44 @@ static inline int ice_skb_pad(void)
#define ICE_TXD_LAST_DESC_CMD (ICE_TX_DESC_CMD_EOP | ICE_TX_DESC_CMD_RS)
+/**
+ * enum ice_tx_buf_type - type of &ice_tx_buf to act on Tx completion
+ * @ICE_TX_BUF_EMPTY: unused OR XSk frame, no action required
+ * @ICE_TX_BUF_DUMMY: dummy Flow Director packet, unmap and kfree()
+ * @ICE_TX_BUF_FRAG: mapped skb OR &xdp_buff frag, only unmap DMA
+ * @ICE_TX_BUF_SKB: &sk_buff, unmap and consume_skb(), update stats
+ * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free(), stats
+ * @ICE_TX_BUF_XDP_XMIT: &xdp_frame, unmap and xdp_return_frame(), stats
+ * @ICE_TX_BUF_XSK_TX: &xdp_buff on XSk queue, xsk_buff_free(), stats
+ */
+enum ice_tx_buf_type {
+ ICE_TX_BUF_EMPTY = 0U,
+ ICE_TX_BUF_DUMMY,
+ ICE_TX_BUF_FRAG,
+ ICE_TX_BUF_SKB,
+ ICE_TX_BUF_XDP_TX,
+ ICE_TX_BUF_XDP_XMIT,
+ ICE_TX_BUF_XSK_TX,
+};
+
struct ice_tx_buf {
- struct ice_tx_desc *next_to_watch;
union {
- struct sk_buff *skb;
- void *raw_buf; /* used for XDP */
+ struct ice_tx_desc *next_to_watch;
+ u32 rs_idx;
+ };
+ union {
+ void *raw_buf; /* used for XDP_TX and FDir rules */
+ struct sk_buff *skb; /* used for .ndo_start_xmit() */
+ struct xdp_frame *xdpf; /* used for .ndo_xdp_xmit() */
+ struct xdp_buff *xdp; /* used for XDP_TX ZC */
};
unsigned int bytecount;
- unsigned short gso_segs;
- u32 tx_flags;
+ union {
+ unsigned int gso_segs;
+ unsigned int nr_frags; /* used for mbuf XDP */
+ };
+ u32 type:16; /* &ice_tx_buf_type */
+ u32 tx_flags:16;
DEFINE_DMA_UNMAP_LEN(len);
DEFINE_DMA_UNMAP_ADDR(dma);
};
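The type and tx_flags fields above now share a single 32-bit word; a minimal standalone model of that packing (field widths match, surrounding fields omitted):

#include <stdint.h>
#include <stdio.h>

struct model {
	uint32_t type : 16;	/* holds an enum ice_tx_buf_type value */
	uint32_t tx_flags : 16;
};

int main(void)
{
	struct model m = { .type = 3 /* ICE_TX_BUF_SKB */, .tx_flags = 0x10 };

	/* both fields fit in one word, so the struct does not grow */
	printf("sizeof=%zu type=%u flags=0x%x\n", sizeof(m), m.type, m.tx_flags);
	return 0;
}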
@@ -170,7 +203,9 @@ struct ice_rx_buf {
dma_addr_t dma;
struct page *page;
unsigned int page_offset;
- u16 pagecnt_bias;
+ unsigned int pgcnt;
+ unsigned int act;
+ unsigned int pagecnt_bias;
};
struct ice_q_stats {
@@ -273,42 +308,44 @@ struct ice_rx_ring {
struct ice_vsi *vsi; /* Backreference to associated VSI */
struct ice_q_vector *q_vector; /* Backreference to associated vector */
u8 __iomem *tail;
+ u16 q_index; /* Queue number of ring */
+
+ u16 count; /* Number of descriptors */
+ u16 reg_idx; /* HW register index of the ring */
+ u16 next_to_alloc;
+ /* CL2 - 2nd cacheline starts here */
union {
struct ice_rx_buf *rx_buf;
struct xdp_buff **xdp_buf;
};
- /* CL2 - 2nd cacheline starts here */
- struct xdp_rxq_info xdp_rxq;
+ struct xdp_buff xdp;
/* CL3 - 3rd cacheline starts here */
- u16 q_index; /* Queue number of ring */
-
- u16 count; /* Number of descriptors */
- u16 reg_idx; /* HW register index of the ring */
+ struct bpf_prog *xdp_prog;
+ u16 rx_offset;
/* used in interrupt processing */
u16 next_to_use;
u16 next_to_clean;
- u16 next_to_alloc;
- u16 rx_offset;
- u16 rx_buf_len;
+ u16 first_desc;
/* stats structs */
struct ice_ring_stats *ring_stats;
struct rcu_head rcu; /* to avoid race on free */
- /* CL4 - 3rd cacheline starts here */
+ /* CL4 - 4th cacheline starts here */
struct ice_channel *ch;
- struct bpf_prog *xdp_prog;
struct ice_tx_ring *xdp_ring;
struct xsk_buff_pool *xsk_pool;
- struct sk_buff *skb;
dma_addr_t dma; /* physical address of ring */
u64 cached_phctime;
+ u16 rx_buf_len;
u8 dcb_tc; /* Traffic class of ring */
u8 ptp_rx;
#define ICE_RX_FLAGS_RING_BUILD_SKB BIT(1)
#define ICE_RX_FLAGS_CRC_STRIP_DIS BIT(2)
u8 flags;
+ /* CL5 - 5th cacheline starts here */
+ struct xdp_rxq_info xdp_rxq;
} ____cacheline_internodealigned_in_smp;
struct ice_tx_ring {
@@ -326,12 +363,11 @@ struct ice_tx_ring {
struct xsk_buff_pool *xsk_pool;
u16 next_to_use;
u16 next_to_clean;
- u16 next_rs;
- u16 next_dd;
u16 q_handle; /* Queue handle per TC */
u16 reg_idx; /* HW register index of the ring */
u16 count; /* Number of descriptors */
u16 q_index; /* Queue number of ring */
+ u16 xdp_tx_active;
/* stats structs */
struct ice_ring_stats *ring_stats;
/* CL3 - 3rd cacheline starts here */
@@ -342,7 +378,6 @@ struct ice_tx_ring {
spinlock_t tx_lock;
u32 txq_teid; /* Added Tx queue TEID */
/* CL4 - 4th cacheline starts here */
- u16 xdp_tx_active;
#define ICE_TX_FLAGS_RING_XDP BIT(0)
#define ICE_TX_FLAGS_RING_VLAN_L2TAG1 BIT(1)
#define ICE_TX_FLAGS_RING_VLAN_L2TAG2 BIT(2)
@@ -431,7 +466,7 @@ static inline unsigned int ice_rx_pg_order(struct ice_rx_ring *ring)
union ice_32b_rx_flex_desc;
-bool ice_alloc_rx_bufs(struct ice_rx_ring *rxr, u16 cleaned_count);
+bool ice_alloc_rx_bufs(struct ice_rx_ring *rxr, unsigned int cleaned_count);
netdev_tx_t ice_start_xmit(struct sk_buff *skb, struct net_device *netdev);
u16
ice_select_queue(struct net_device *dev, struct sk_buff *skb,
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
index 25f04266c668..7bc5aa340c7d 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
@@ -221,128 +221,217 @@ ice_receive_skb(struct ice_rx_ring *rx_ring, struct sk_buff *skb, u16 vlan_tag)
}
/**
+ * ice_clean_xdp_tx_buf - Free and unmap XDP Tx buffer
+ * @dev: device for DMA mapping
+ * @tx_buf: Tx buffer to clean
+ * @bq: XDP bulk flush struct
+ */
+static void
+ice_clean_xdp_tx_buf(struct device *dev, struct ice_tx_buf *tx_buf,
+ struct xdp_frame_bulk *bq)
+{
+ dma_unmap_single(dev, dma_unmap_addr(tx_buf, dma),
+ dma_unmap_len(tx_buf, len), DMA_TO_DEVICE);
+ dma_unmap_len_set(tx_buf, len, 0);
+
+ switch (tx_buf->type) {
+ case ICE_TX_BUF_XDP_TX:
+ page_frag_free(tx_buf->raw_buf);
+ break;
+ case ICE_TX_BUF_XDP_XMIT:
+ xdp_return_frame_bulk(tx_buf->xdpf, bq);
+ break;
+ }
+
+ tx_buf->type = ICE_TX_BUF_EMPTY;
+}
+
+/**
* ice_clean_xdp_irq - Reclaim resources after transmit completes on XDP ring
* @xdp_ring: XDP ring to clean
+ *
+ * Returns the number of Tx descriptors reclaimed, zero if none were ready.
 */
-static void ice_clean_xdp_irq(struct ice_tx_ring *xdp_ring)
+static u32 ice_clean_xdp_irq(struct ice_tx_ring *xdp_ring)
{
- unsigned int total_bytes = 0, total_pkts = 0;
- u16 tx_thresh = ICE_RING_QUARTER(xdp_ring);
- u16 ntc = xdp_ring->next_to_clean;
- struct ice_tx_desc *next_dd_desc;
- u16 next_dd = xdp_ring->next_dd;
- struct ice_tx_buf *tx_buf;
- int i;
+ int total_bytes = 0, total_pkts = 0;
+ struct device *dev = xdp_ring->dev;
+ u32 ntc = xdp_ring->next_to_clean;
+ struct ice_tx_desc *tx_desc;
+ u32 cnt = xdp_ring->count;
+ struct xdp_frame_bulk bq;
+ u32 frags, xdp_tx = 0;
+ u32 ready_frames = 0;
+ u32 idx;
+ u32 ret;
+
+ idx = xdp_ring->tx_buf[ntc].rs_idx;
+ tx_desc = ICE_TX_DESC(xdp_ring, idx);
+ if (tx_desc->cmd_type_offset_bsz &
+ cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE)) {
+ if (idx >= ntc)
+ ready_frames = idx - ntc + 1;
+ else
+ ready_frames = idx + cnt - ntc + 1;
+ }
- next_dd_desc = ICE_TX_DESC(xdp_ring, next_dd);
- if (!(next_dd_desc->cmd_type_offset_bsz &
- cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE)))
- return;
+ if (unlikely(!ready_frames))
+ return 0;
+ ret = ready_frames;
+
+ xdp_frame_bulk_init(&bq);
+ rcu_read_lock(); /* xdp_return_frame_bulk() */
- for (i = 0; i < tx_thresh; i++) {
- tx_buf = &xdp_ring->tx_buf[ntc];
+ while (ready_frames) {
+ struct ice_tx_buf *tx_buf = &xdp_ring->tx_buf[ntc];
+ struct ice_tx_buf *head = tx_buf;
+ /* bytecount holds size of head + frags */
total_bytes += tx_buf->bytecount;
- /* normally tx_buf->gso_segs was taken but at this point
- * it's always 1 for us
- */
+ frags = tx_buf->nr_frags;
total_pkts++;
-
- page_frag_free(tx_buf->raw_buf);
- dma_unmap_single(xdp_ring->dev, dma_unmap_addr(tx_buf, dma),
- dma_unmap_len(tx_buf, len), DMA_TO_DEVICE);
- dma_unmap_len_set(tx_buf, len, 0);
- tx_buf->raw_buf = NULL;
+ /* count head + frags */
+ ready_frames -= frags + 1;
+ xdp_tx++;
ntc++;
- if (ntc >= xdp_ring->count)
+ if (ntc == cnt)
ntc = 0;
+
+ for (int i = 0; i < frags; i++) {
+ tx_buf = &xdp_ring->tx_buf[ntc];
+
+ ice_clean_xdp_tx_buf(dev, tx_buf, &bq);
+ ntc++;
+ if (ntc == cnt)
+ ntc = 0;
+ }
+
+ ice_clean_xdp_tx_buf(dev, head, &bq);
}
- next_dd_desc->cmd_type_offset_bsz = 0;
- xdp_ring->next_dd = xdp_ring->next_dd + tx_thresh;
- if (xdp_ring->next_dd > xdp_ring->count)
- xdp_ring->next_dd = tx_thresh - 1;
+ xdp_flush_frame_bulk(&bq);
+ rcu_read_unlock();
+
+ tx_desc->cmd_type_offset_bsz = 0;
xdp_ring->next_to_clean = ntc;
+ xdp_ring->xdp_tx_active -= xdp_tx;
ice_update_tx_ring_stats(xdp_ring, total_pkts, total_bytes);
+
+ return ret;
}
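The ready_frames computation above must handle rs_idx wrapping past the end of the ring; a standalone check with made-up values:

#include <stdio.h>

int main(void)
{
	unsigned int cnt = 1024, ntc = 1019, idx = 4, ready;

	/* same branch as in ice_clean_xdp_irq() */
	ready = idx >= ntc ? idx - ntc + 1 : idx + cnt - ntc + 1;
	printf("%u\n", ready);	/* slots 1019..1023 plus 0..4 -> 10 */
	return 0;
}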
/**
- * ice_xmit_xdp_ring - submit single packet to XDP ring for transmission
- * @data: packet data pointer
- * @size: packet data size
+ * __ice_xmit_xdp_ring - submit frame to XDP ring for transmission
+ * @xdp: XDP buffer to be placed onto Tx descriptors
* @xdp_ring: XDP ring for transmission
+ * @frame: whether this comes from .ndo_xdp_xmit()
*/
-int ice_xmit_xdp_ring(void *data, u16 size, struct ice_tx_ring *xdp_ring)
+int __ice_xmit_xdp_ring(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring,
+ bool frame)
{
- u16 tx_thresh = ICE_RING_QUARTER(xdp_ring);
- u16 i = xdp_ring->next_to_use;
+ struct skb_shared_info *sinfo = NULL;
+ u32 size = xdp->data_end - xdp->data;
+ struct device *dev = xdp_ring->dev;
+ u32 ntu = xdp_ring->next_to_use;
struct ice_tx_desc *tx_desc;
+ struct ice_tx_buf *tx_head;
struct ice_tx_buf *tx_buf;
- dma_addr_t dma;
+ u32 cnt = xdp_ring->count;
+ void *data = xdp->data;
+ u32 nr_frags = 0;
+ u32 free_space;
+ u32 frag = 0;
+
+ free_space = ICE_DESC_UNUSED(xdp_ring);
+ if (free_space < ICE_RING_QUARTER(xdp_ring))
+ free_space += ice_clean_xdp_irq(xdp_ring);
+
+ if (unlikely(!free_space))
+ goto busy;
+
+ if (unlikely(xdp_buff_has_frags(xdp))) {
+ sinfo = xdp_get_shared_info_from_buff(xdp);
+ nr_frags = sinfo->nr_frags;
+ if (free_space < nr_frags + 1)
+ goto busy;
+ }
- if (ICE_DESC_UNUSED(xdp_ring) < tx_thresh)
- ice_clean_xdp_irq(xdp_ring);
+ tx_desc = ICE_TX_DESC(xdp_ring, ntu);
+ tx_head = &xdp_ring->tx_buf[ntu];
+ tx_buf = tx_head;
- if (!unlikely(ICE_DESC_UNUSED(xdp_ring))) {
- xdp_ring->ring_stats->tx_stats.tx_busy++;
- return ICE_XDP_CONSUMED;
- }
+ for (;;) {
+ dma_addr_t dma;
- dma = dma_map_single(xdp_ring->dev, data, size, DMA_TO_DEVICE);
- if (dma_mapping_error(xdp_ring->dev, dma))
- return ICE_XDP_CONSUMED;
+ dma = dma_map_single(dev, data, size, DMA_TO_DEVICE);
+ if (dma_mapping_error(dev, dma))
+ goto dma_unmap;
- tx_buf = &xdp_ring->tx_buf[i];
- tx_buf->bytecount = size;
- tx_buf->gso_segs = 1;
- tx_buf->raw_buf = data;
+ /* record length, and DMA address */
+ dma_unmap_len_set(tx_buf, len, size);
+ dma_unmap_addr_set(tx_buf, dma, dma);
- /* record length, and DMA address */
- dma_unmap_len_set(tx_buf, len, size);
- dma_unmap_addr_set(tx_buf, dma, dma);
+ if (frame) {
+ tx_buf->type = ICE_TX_BUF_FRAG;
+ } else {
+ tx_buf->type = ICE_TX_BUF_XDP_TX;
+ tx_buf->raw_buf = data;
+ }
- tx_desc = ICE_TX_DESC(xdp_ring, i);
- tx_desc->buf_addr = cpu_to_le64(dma);
- tx_desc->cmd_type_offset_bsz = ice_build_ctob(ICE_TX_DESC_CMD_EOP, 0,
- size, 0);
+ tx_desc->buf_addr = cpu_to_le64(dma);
+ tx_desc->cmd_type_offset_bsz = ice_build_ctob(0, 0, size, 0);
- xdp_ring->xdp_tx_active++;
- i++;
- if (i == xdp_ring->count) {
- i = 0;
- tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs);
- tx_desc->cmd_type_offset_bsz |=
- cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);
- xdp_ring->next_rs = tx_thresh - 1;
+ ntu++;
+ if (ntu == cnt)
+ ntu = 0;
+
+ if (frag == nr_frags)
+ break;
+
+ tx_desc = ICE_TX_DESC(xdp_ring, ntu);
+ tx_buf = &xdp_ring->tx_buf[ntu];
+
+ data = skb_frag_address(&sinfo->frags[frag]);
+ size = skb_frag_size(&sinfo->frags[frag]);
+ frag++;
}
- xdp_ring->next_to_use = i;
- if (i > xdp_ring->next_rs) {
- tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs);
- tx_desc->cmd_type_offset_bsz |=
- cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);
- xdp_ring->next_rs += tx_thresh;
+ /* store info about bytecount and frag count in first desc */
+ tx_head->bytecount = xdp_get_buff_len(xdp);
+ tx_head->nr_frags = nr_frags;
+
+ if (frame) {
+ tx_head->type = ICE_TX_BUF_XDP_XMIT;
+ tx_head->xdpf = xdp->data_hard_start;
}
+ /* update last descriptor from a frame with EOP */
+ tx_desc->cmd_type_offset_bsz |=
+ cpu_to_le64(ICE_TX_DESC_CMD_EOP << ICE_TXD_QW1_CMD_S);
+
+ xdp_ring->xdp_tx_active++;
+ xdp_ring->next_to_use = ntu;
+
return ICE_XDP_TX;
-}
-/**
- * ice_xmit_xdp_buff - convert an XDP buffer to an XDP frame and send it
- * @xdp: XDP buffer
- * @xdp_ring: XDP Tx ring
- *
- * Returns negative on failure, 0 on success.
- */
-int ice_xmit_xdp_buff(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring)
-{
- struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);
+dma_unmap:
+ for (;;) {
+ tx_buf = &xdp_ring->tx_buf[ntu];
+ dma_unmap_page(dev, dma_unmap_addr(tx_buf, dma),
+ dma_unmap_len(tx_buf, len), DMA_TO_DEVICE);
+ dma_unmap_len_set(tx_buf, len, 0);
+ if (tx_buf == tx_head)
+ break;
+
+ if (!ntu)
+ ntu += cnt;
+ ntu--;
+ }
+ return ICE_XDP_CONSUMED;
- if (unlikely(!xdpf))
- return ICE_XDP_CONSUMED;
+busy:
+ xdp_ring->ring_stats->tx_stats.tx_busy++;
- return ice_xmit_xdp_ring(xdpf->data, xdpf->len, xdp_ring);
+ return ICE_XDP_CONSUMED;
}
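Editor's note: on a DMA mapping failure mid-frame, the dma_unmap error path above walks backwards from the failing slot to the frame's head, undoing each mapping. A userspace sketch of that reverse walk, assuming printf() in place of dma_unmap_page() and an index comparison in place of the tx_buf pointer check:

    #include <stdio.h>

    #define RING_SIZE 8

    /* Walk backwards from the slot that failed to map (ntu) to the
     * head of the frame, undoing each slot, with wrap at index 0. */
    static void rollback(unsigned int ntu, unsigned int head,
                         unsigned int cnt)
    {
        for (;;) {
            printf("undo slot %u\n", ntu); /* stands in for dma_unmap_page() */
            if (ntu == head)
                break;
            if (!ntu)
                ntu += cnt;
            ntu--;
        }
    }

    int main(void)
    {
        rollback(1, 6, RING_SIZE);  /* frags wrapped: undoes 1,0,7,6 */
        return 0;
    }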
/**
@@ -354,14 +443,21 @@ int ice_xmit_xdp_buff(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring)
* should be called when a batch of packets has been processed in the
* napi loop.
*/
-void ice_finalize_xdp_rx(struct ice_tx_ring *xdp_ring, unsigned int xdp_res)
+void ice_finalize_xdp_rx(struct ice_tx_ring *xdp_ring, unsigned int xdp_res,
+ u32 first_idx)
{
+ struct ice_tx_buf *tx_buf = &xdp_ring->tx_buf[first_idx];
+
if (xdp_res & ICE_XDP_REDIR)
xdp_do_flush_map();
if (xdp_res & ICE_XDP_TX) {
if (static_branch_unlikely(&ice_xdp_locking_key))
spin_lock(&xdp_ring->tx_lock);
+ /* store index of descriptor with RS bit set in the first
+ * ice_tx_buf of given NAPI batch
+ */
+ tx_buf->rs_idx = ice_set_rs_bit(xdp_ring);
ice_xdp_ring_update_tail(xdp_ring);
if (static_branch_unlikely(&ice_xdp_locking_key))
spin_unlock(&xdp_ring->tx_lock);
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
index c7d2954dc9ea..115969ecdf7b 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
@@ -6,6 +6,36 @@
#include "ice.h"
/**
+ * ice_set_rx_bufs_act - propagate Rx buffer action to frags
+ * @xdp: XDP buffer representing the frame (linear and frag parts)
+ * @rx_ring: Rx ring struct
+ * @act: action to store onto Rx buffers related to XDP buffer parts
+ *
+ * Set the action to be taken before putting each Rx buffer, from the first
+ * frag up to one before the last. The last (EOP) frag is currently being
+ * processed and is handled by the caller of this function. This function
+ * should be called only when the XDP buffer contains frags.
+ */
+static inline void
+ice_set_rx_bufs_act(struct xdp_buff *xdp, const struct ice_rx_ring *rx_ring,
+ const unsigned int act)
+{
+ const struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+ u32 first = rx_ring->first_desc;
+ u32 nr_frags = sinfo->nr_frags;
+ u32 cnt = rx_ring->count;
+ struct ice_rx_buf *buf;
+
+ for (int i = 0; i < nr_frags; i++) {
+ buf = &rx_ring->rx_buf[first];
+ buf->act = act;
+
+ if (++first == cnt)
+ first = 0;
+ }
+}
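Editor's note: a compact sketch of the frag-action propagation above, stamping an action on nr_frags ring slots starting at the frame's first descriptor and wrapping at the ring end. The names and the plain int action type are illustrative only:

    #include <stdio.h>

    #define RING_SIZE 8

    /* Stamp an action on every frag buffer starting at the frame's
     * first descriptor, wrapping at the end of the ring; the EOP frag
     * is left to the caller, as in ice_set_rx_bufs_act(). */
    static void set_bufs_act(int *acts, unsigned int first,
                             unsigned int nr_frags, int act)
    {
        for (unsigned int i = 0; i < nr_frags; i++) {
            acts[first] = act;
            if (++first == RING_SIZE)
                first = 0;
        }
    }

    int main(void)
    {
        int acts[RING_SIZE] = { 0 };

        set_bufs_act(acts, 6, 3, 42);   /* stamps slots 6, 7, 0 */
        for (int i = 0; i < RING_SIZE; i++)
            printf("%d ", acts[i]);
        printf("\n");
        return 0;
    }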
+
+/**
* ice_test_staterr - tests bits in Rx descriptor status and error fields
* @status_err_n: Rx descriptor status_error0 or status_error1 bits
* @stat_err_bits: value to mask
@@ -21,6 +51,28 @@ ice_test_staterr(__le16 status_err_n, const u16 stat_err_bits)
return !!(status_err_n & cpu_to_le16(stat_err_bits));
}
+/**
+ * ice_is_non_eop - process handling of non-EOP buffers
+ * @rx_ring: Rx ring being processed
+ * @rx_desc: Rx descriptor for current buffer
+ *
+ * If the buffer is an EOP buffer, this function exits returning false;
+ * otherwise it returns true, indicating that this is in fact a non-EOP
+ * buffer.
+ */
+static inline bool
+ice_is_non_eop(const struct ice_rx_ring *rx_ring,
+ const union ice_32b_rx_flex_desc *rx_desc)
+{
+ /* if we are the last buffer then there is nothing else to do */
+#define ICE_RXD_EOF BIT(ICE_RX_FLEX_DESC_STATUS0_EOF_S)
+ if (likely(ice_test_staterr(rx_desc->wb.status_error0, ICE_RXD_EOF)))
+ return false;
+
+ rx_ring->ring_stats->rx_stats.non_eop_descs++;
+
+ return true;
+}
+
static inline __le64
ice_build_ctob(u64 td_cmd, u64 td_offset, unsigned int size, u64 td_tag)
{
@@ -70,9 +122,28 @@ static inline void ice_xdp_ring_update_tail(struct ice_tx_ring *xdp_ring)
writel_relaxed(xdp_ring->next_to_use, xdp_ring->tail);
}
-void ice_finalize_xdp_rx(struct ice_tx_ring *xdp_ring, unsigned int xdp_res);
+/**
+ * ice_set_rs_bit - set RS bit on last produced descriptor (one behind current NTU)
+ * @xdp_ring: XDP ring to produce the HW Tx descriptors on
+ *
+ * Returns the index of the descriptor on which the RS bit was set.
+ */
+static inline u32 ice_set_rs_bit(const struct ice_tx_ring *xdp_ring)
+{
+ u32 rs_idx = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : xdp_ring->count - 1;
+ struct ice_tx_desc *tx_desc;
+
+ tx_desc = ICE_TX_DESC(xdp_ring, rs_idx);
+ tx_desc->cmd_type_offset_bsz |=
+ cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);
+
+ return rs_idx;
+}
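Editor's note: the "one behind NTU" computation above is a common ring idiom. A trivial self-contained sketch of it, under the assumption of an unsigned ring index:

    #include <assert.h>
    #include <stdio.h>

    /* Index of the last produced descriptor: one behind next-to-use,
     * wrapping to the end of the ring when ntu is 0. */
    static unsigned int last_produced(unsigned int ntu, unsigned int cnt)
    {
        return ntu ? ntu - 1 : cnt - 1;
    }

    int main(void)
    {
        assert(last_produced(5, 16) == 4);
        assert(last_produced(0, 16) == 15);
        printf("ok\n");
        return 0;
    }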
+
+void ice_finalize_xdp_rx(struct ice_tx_ring *xdp_ring, unsigned int xdp_res, u32 first_idx);
int ice_xmit_xdp_buff(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring);
-int ice_xmit_xdp_ring(void *data, u16 size, struct ice_tx_ring *xdp_ring);
+int __ice_xmit_xdp_ring(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring,
+ bool frame);
void ice_release_rx_desc(struct ice_rx_ring *rx_ring, u16 val);
void
ice_process_skb_fields(struct ice_rx_ring *rx_ring,
diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
index 375eb6493f0f..0e57bd1b85fd 100644
--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
@@ -237,16 +237,49 @@ static void ice_vf_clear_counters(struct ice_vf *vf)
*/
static void ice_vf_pre_vsi_rebuild(struct ice_vf *vf)
{
+ /* Close any IRQ mapping now */
+ if (vf->vf_ops->irq_close)
+ vf->vf_ops->irq_close(vf);
+
ice_vf_clear_counters(vf);
vf->vf_ops->clear_reset_trigger(vf);
}
/**
+ * ice_vf_recreate_vsi - Release and re-create the VF's VSI
+ * @vf: VF to recreate the VSI for
+ *
+ * This is only called when a single VF is being reset (i.e. VFR, VFLR, host
+ * VF configuration change, etc.)
+ *
+ * It releases the VF's current VSI and then creates a new one.
+ */
+static int ice_vf_recreate_vsi(struct ice_vf *vf)
+{
+ struct ice_pf *pf = vf->pf;
+ int err;
+
+ ice_vf_vsi_release(vf);
+
+ err = vf->vf_ops->create_vsi(vf);
+ if (err) {
+ dev_err(ice_pf_to_dev(pf),
+ "Failed to recreate the VF%u's VSI, error %d\n",
+ vf->vf_id, err);
+ return err;
+ }
+
+ return 0;
+}
+
+/**
* ice_vf_rebuild_vsi - rebuild the VF's VSI
* @vf: VF to rebuild the VSI for
*
* This is only called when all VF(s) are being reset (i.e. PCIe Reset on the
* host, PFR, CORER, etc.).
+ *
+ * It reprograms the VSI configuration back into hardware.
*/
static int ice_vf_rebuild_vsi(struct ice_vf *vf)
{
@@ -256,7 +289,7 @@ static int ice_vf_rebuild_vsi(struct ice_vf *vf)
if (WARN_ON(!vsi))
return -EINVAL;
- if (ice_vsi_rebuild(vsi, true)) {
+ if (ice_vsi_rebuild(vsi, ICE_VSI_FLAG_INIT)) {
dev_err(ice_pf_to_dev(pf), "failed to rebuild VF %d VSI\n",
vf->vf_id);
return -EIO;
@@ -271,6 +304,21 @@ static int ice_vf_rebuild_vsi(struct ice_vf *vf)
}
/**
+ * ice_vf_post_vsi_rebuild - Reset tasks that occur after VSI rebuild
+ * @vf: the VF being reset
+ *
+ * Perform reset tasks which must occur after the VSI has been re-created or
+ * rebuilt during a VF reset.
+ */
+static void ice_vf_post_vsi_rebuild(struct ice_vf *vf)
+{
+ ice_vf_rebuild_host_cfg(vf);
+ ice_vf_set_initialized(vf);
+
+ vf->vf_ops->post_vsi_rebuild(vf);
+}
+
+/**
* ice_is_any_vf_in_unicast_promisc - check if any VF(s)
* are in unicast promiscuous mode
* @pf: PF structure for accessing VF(s)
@@ -495,7 +543,7 @@ void ice_reset_all_vfs(struct ice_pf *pf)
ice_vf_pre_vsi_rebuild(vf);
ice_vf_rebuild_vsi(vf);
- vf->vf_ops->post_vsi_rebuild(vf);
+ ice_vf_post_vsi_rebuild(vf);
mutex_unlock(&vf->cfg_lock);
}
@@ -639,14 +687,14 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags)
ice_vf_pre_vsi_rebuild(vf);
- if (vf->vf_ops->vsi_rebuild(vf)) {
+ if (ice_vf_recreate_vsi(vf)) {
dev_err(dev, "Failed to release and setup the VF%u's VSI\n",
vf->vf_id);
err = -EFAULT;
goto out_unlock;
}
- vf->vf_ops->post_vsi_rebuild(vf);
+ ice_vf_post_vsi_rebuild(vf);
vsi = ice_get_vf_vsi(vf);
if (WARN_ON(!vsi)) {
err = -EINVAL;
@@ -673,7 +721,7 @@ out_unlock:
* ice_set_vf_state_qs_dis - Set VF queues state to disabled
* @vf: pointer to the VF structure
*/
-void ice_set_vf_state_qs_dis(struct ice_vf *vf)
+static void ice_set_vf_state_qs_dis(struct ice_vf *vf)
{
/* Clear Rx/Tx enabled queues flag */
bitmap_zero(vf->txq_ena, ICE_MAX_RSS_QS_PER_VF);
@@ -681,9 +729,45 @@ void ice_set_vf_state_qs_dis(struct ice_vf *vf)
clear_bit(ICE_VF_STATE_QS_ENA, vf->vf_states);
}
+/**
+ * ice_set_vf_state_dis - Set VF state to disabled
+ * @vf: pointer to the VF structure
+ */
+void ice_set_vf_state_dis(struct ice_vf *vf)
+{
+ ice_set_vf_state_qs_dis(vf);
+ vf->vf_ops->clear_reset_state(vf);
+}
+
/* Private functions only accessed from other virtualization files */
/**
+ * ice_initialize_vf_entry - Initialize a VF entry
+ * @vf: pointer to the VF structure
+ */
+void ice_initialize_vf_entry(struct ice_vf *vf)
+{
+ struct ice_pf *pf = vf->pf;
+ struct ice_vfs *vfs;
+
+ vfs = &pf->vfs;
+
+ /* assign default capabilities */
+ vf->spoofchk = true;
+ vf->num_vf_qs = vfs->num_qps_per;
+ ice_vc_set_default_allowlist(vf);
+ ice_virtchnl_set_dflt_ops(vf);
+
+ /* ctrl_vsi_idx will be set to a valid value only when iAVF
+ * creates its first fdir rule.
+ */
+ ice_vf_ctrl_invalidate_vsi(vf);
+ ice_vf_fdir_init(vf);
+
+ mutex_init(&vf->cfg_lock);
+}
+
+/**
* ice_dis_vf_qs - Disable the VF queues
* @vf: pointer to the VF structure
*/
@@ -924,18 +1008,18 @@ static int ice_vf_rebuild_host_mac_cfg(struct ice_vf *vf)
vf->num_mac++;
- if (is_valid_ether_addr(vf->hw_lan_addr.addr)) {
- status = ice_fltr_add_mac(vsi, vf->hw_lan_addr.addr,
+ if (is_valid_ether_addr(vf->hw_lan_addr)) {
+ status = ice_fltr_add_mac(vsi, vf->hw_lan_addr,
ICE_FWD_TO_VSI);
if (status) {
dev_err(dev, "failed to add default unicast MAC filter %pM for VF %u, error %d\n",
- &vf->hw_lan_addr.addr[0], vf->vf_id,
+ &vf->hw_lan_addr[0], vf->vf_id,
status);
return status;
}
vf->num_mac++;
- ether_addr_copy(vf->dev_lan_addr.addr, vf->hw_lan_addr.addr);
+ ether_addr_copy(vf->dev_lan_addr, vf->hw_lan_addr);
}
return 0;
@@ -1115,11 +1199,16 @@ void ice_vf_ctrl_vsi_release(struct ice_vf *vf)
*/
struct ice_vsi *ice_vf_ctrl_vsi_setup(struct ice_vf *vf)
{
- struct ice_port_info *pi = ice_vf_get_port_info(vf);
+ struct ice_vsi_cfg_params params = {};
struct ice_pf *pf = vf->pf;
struct ice_vsi *vsi;
- vsi = ice_vsi_setup(pf, pi, ICE_VSI_CTRL, vf, NULL);
+ params.type = ICE_VSI_CTRL;
+ params.pi = ice_vf_get_port_info(vf);
+ params.vf = vf;
+ params.flags = ICE_VSI_FLAG_INIT;
+
+ vsi = ice_vsi_setup(pf, &params);
if (!vsi) {
dev_err(ice_pf_to_dev(pf), "Failed to create VF control VSI\n");
ice_vf_ctrl_invalidate_vsi(vf);
@@ -1129,6 +1218,60 @@ struct ice_vsi *ice_vf_ctrl_vsi_setup(struct ice_vf *vf)
}
/**
+ * ice_vf_init_host_cfg - Initialize host admin configuration
+ * @vf: VF to initialize
+ * @vsi: the VSI created at initialization
+ *
+ * Initialize the VF host configuration: set up VLAN 0, add the VF VSI
+ * broadcast filter, and configure spoof checking. It should only be
+ * called during VF creation.
+ */
+int ice_vf_init_host_cfg(struct ice_vf *vf, struct ice_vsi *vsi)
+{
+ struct ice_vsi_vlan_ops *vlan_ops;
+ struct ice_pf *pf = vf->pf;
+ u8 broadcast[ETH_ALEN];
+ struct device *dev;
+ int err;
+
+ dev = ice_pf_to_dev(pf);
+
+ err = ice_vsi_add_vlan_zero(vsi);
+ if (err) {
+ dev_warn(dev, "Failed to add VLAN 0 filter for VF %d\n",
+ vf->vf_id);
+ return err;
+ }
+
+ vlan_ops = ice_get_compat_vsi_vlan_ops(vsi);
+ err = vlan_ops->ena_rx_filtering(vsi);
+ if (err) {
+ dev_warn(dev, "Failed to enable Rx VLAN filtering for VF %d\n",
+ vf->vf_id);
+ return err;
+ }
+
+ eth_broadcast_addr(broadcast);
+ err = ice_fltr_add_mac(vsi, broadcast, ICE_FWD_TO_VSI);
+ if (err) {
+ dev_err(dev, "Failed to add broadcast MAC filter for VF %d, status %d\n",
+ vf->vf_id, err);
+ return err;
+ }
+
+ vf->num_mac = 1;
+
+ err = ice_vsi_apply_spoofchk(vsi, vf->spoofchk);
+ if (err) {
+ dev_warn(dev, "Failed to initialize spoofchk setting for VF %d\n",
+ vf->vf_id);
+ return err;
+ }
+
+ return 0;
+}
+
+/**
* ice_vf_invalidate_vsi - invalidate vsi_idx/vsi_num to remove VSI access
* @vf: VF to remove access to VSI for
*/
@@ -1139,6 +1282,24 @@ void ice_vf_invalidate_vsi(struct ice_vf *vf)
}
/**
+ * ice_vf_vsi_release - Release the VF VSI and invalidate indexes
+ * @vf: pointer to the VF structure
+ *
+ * Release the VSI associated with this VF and then invalidate the VSI
+ * indexes.
+ */
+void ice_vf_vsi_release(struct ice_vf *vf)
+{
+ struct ice_vsi *vsi = ice_get_vf_vsi(vf);
+
+ if (WARN_ON(!vsi))
+ return;
+
+ ice_vsi_release(vsi);
+ ice_vf_invalidate_vsi(vf);
+}
+
+/**
* ice_vf_set_initialized - VF is ready for VIRTCHNL communication
* @vf: VF to set in initialized state
*
diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
index 52bd9a3816bf..ef30f05b5d02 100644
--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
@@ -56,11 +56,13 @@ struct ice_mdd_vf_events {
struct ice_vf_ops {
enum ice_disq_rst_src reset_type;
void (*free)(struct ice_vf *vf);
+ void (*clear_reset_state)(struct ice_vf *vf);
void (*clear_mbx_register)(struct ice_vf *vf);
void (*trigger_reset_register)(struct ice_vf *vf, bool is_vflr);
bool (*poll_reset_status)(struct ice_vf *vf);
void (*clear_reset_trigger)(struct ice_vf *vf);
- int (*vsi_rebuild)(struct ice_vf *vf);
+ void (*irq_close)(struct ice_vf *vf);
+ int (*create_vsi)(struct ice_vf *vf);
void (*post_vsi_rebuild)(struct ice_vf *vf);
};
@@ -96,8 +98,8 @@ struct ice_vf {
struct ice_sw *vf_sw_id; /* switch ID the VF VSIs connect to */
struct virtchnl_version_info vf_ver;
u32 driver_caps; /* reported by VF driver */
- struct virtchnl_ether_addr dev_lan_addr;
- struct virtchnl_ether_addr hw_lan_addr;
+ u8 dev_lan_addr[ETH_ALEN];
+ u8 hw_lan_addr[ETH_ALEN];
struct ice_time_mac legacy_last_added_umac;
DECLARE_BITMAP(txq_ena, ICE_MAX_RSS_QS_PER_VF);
DECLARE_BITMAP(rxq_ena, ICE_MAX_RSS_QS_PER_VF);
@@ -213,7 +215,7 @@ u16 ice_get_num_vfs(struct ice_pf *pf);
struct ice_vsi *ice_get_vf_vsi(struct ice_vf *vf);
bool ice_is_vf_disabled(struct ice_vf *vf);
int ice_check_vf_ready_for_cfg(struct ice_vf *vf);
-void ice_set_vf_state_qs_dis(struct ice_vf *vf);
+void ice_set_vf_state_dis(struct ice_vf *vf);
bool ice_is_any_vf_in_unicast_promisc(struct ice_pf *pf);
void
ice_vf_get_promisc_masks(struct ice_vf *vf, struct ice_vsi *vsi,
@@ -259,7 +261,7 @@ static inline int ice_check_vf_ready_for_cfg(struct ice_vf *vf)
return -EOPNOTSUPP;
}
-static inline void ice_set_vf_state_qs_dis(struct ice_vf *vf)
+static inline void ice_set_vf_state_dis(struct ice_vf *vf)
{
}
diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h b/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
index 9c8ef2b01f0f..6f3293b793b5 100644
--- a/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
+++ b/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
@@ -23,6 +23,7 @@
#warning "Only include ice_vf_lib_private.h in CONFIG_PCI_IOV virtualization files"
#endif
+void ice_initialize_vf_entry(struct ice_vf *vf);
void ice_dis_vf_qs(struct ice_vf *vf);
int ice_check_vf_init(struct ice_vf *vf);
enum virtchnl_status_code ice_err_to_virt_err(int err);
@@ -35,7 +36,9 @@ void ice_vf_rebuild_host_cfg(struct ice_vf *vf);
void ice_vf_ctrl_invalidate_vsi(struct ice_vf *vf);
void ice_vf_ctrl_vsi_release(struct ice_vf *vf);
struct ice_vsi *ice_vf_ctrl_vsi_setup(struct ice_vf *vf);
+int ice_vf_init_host_cfg(struct ice_vf *vf, struct ice_vsi *vsi);
void ice_vf_invalidate_vsi(struct ice_vf *vf);
+void ice_vf_vsi_release(struct ice_vf *vf);
void ice_vf_set_initialized(struct ice_vf *vf);
#endif /* _ICE_VF_LIB_PRIVATE_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
index dab3cd5d300e..e24e3f5017ca 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
@@ -507,7 +507,7 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
vfres->vsi_res[0].vsi_type = VIRTCHNL_VSI_SRIOV;
vfres->vsi_res[0].num_queue_pairs = vsi->num_txq;
ether_addr_copy(vfres->vsi_res[0].default_mac_addr,
- vf->hw_lan_addr.addr);
+ vf->hw_lan_addr);
/* match guest capabilities */
vf->driver_caps = vfres->vf_cap_flags;
@@ -1802,10 +1802,10 @@ ice_vfhw_mac_add(struct ice_vf *vf, struct virtchnl_ether_addr *vc_ether_addr)
* was correctly specified over VIRTCHNL
*/
if ((ice_is_vc_addr_legacy(vc_ether_addr) &&
- is_zero_ether_addr(vf->hw_lan_addr.addr)) ||
+ is_zero_ether_addr(vf->hw_lan_addr)) ||
ice_is_vc_addr_primary(vc_ether_addr)) {
- ether_addr_copy(vf->dev_lan_addr.addr, mac_addr);
- ether_addr_copy(vf->hw_lan_addr.addr, mac_addr);
+ ether_addr_copy(vf->dev_lan_addr, mac_addr);
+ ether_addr_copy(vf->hw_lan_addr, mac_addr);
}
/* hardware and device MACs are already set, but it's possible that the
@@ -1836,7 +1836,7 @@ ice_vc_add_mac_addr(struct ice_vf *vf, struct ice_vsi *vsi,
int ret;
/* device MAC already added */
- if (ether_addr_equal(mac_addr, vf->dev_lan_addr.addr))
+ if (ether_addr_equal(mac_addr, vf->dev_lan_addr))
return 0;
if (is_unicast_ether_addr(mac_addr) && !ice_can_vf_change_mac(vf)) {
@@ -1891,8 +1891,8 @@ ice_update_legacy_cached_mac(struct ice_vf *vf,
ice_is_legacy_umac_expired(&vf->legacy_last_added_umac))
return;
- ether_addr_copy(vf->dev_lan_addr.addr, vf->legacy_last_added_umac.addr);
- ether_addr_copy(vf->hw_lan_addr.addr, vf->legacy_last_added_umac.addr);
+ ether_addr_copy(vf->dev_lan_addr, vf->legacy_last_added_umac.addr);
+ ether_addr_copy(vf->hw_lan_addr, vf->legacy_last_added_umac.addr);
}
/**
@@ -1906,15 +1906,15 @@ ice_vfhw_mac_del(struct ice_vf *vf, struct virtchnl_ether_addr *vc_ether_addr)
u8 *mac_addr = vc_ether_addr->addr;
if (!is_valid_ether_addr(mac_addr) ||
- !ether_addr_equal(vf->dev_lan_addr.addr, mac_addr))
+ !ether_addr_equal(vf->dev_lan_addr, mac_addr))
return;
/* allow the device MAC to be repopulated in the add flow and don't
- * clear the hardware MAC (i.e. hw_lan_addr.addr) here as that is meant
+ * clear the hardware MAC (i.e. hw_lan_addr) here as that is meant
* to be persistent on VM reboot and across driver unload/load, which
* won't work if we clear the hardware MAC here
*/
- eth_zero_addr(vf->dev_lan_addr.addr);
+ eth_zero_addr(vf->dev_lan_addr);
ice_update_legacy_cached_mac(vf, vc_ether_addr);
}
@@ -1934,7 +1934,7 @@ ice_vc_del_mac_addr(struct ice_vf *vf, struct ice_vsi *vsi,
int status;
if (!ice_can_vf_change_mac(vf) &&
- ether_addr_equal(vf->dev_lan_addr.addr, mac_addr))
+ ether_addr_equal(vf->dev_lan_addr, mac_addr))
return 0;
status = ice_fltr_remove_mac(vsi, mac_addr, ICE_FWD_TO_VSI);
@@ -3733,7 +3733,7 @@ static int ice_vc_repr_add_mac(struct ice_vf *vf, u8 *msg)
int result;
if (!is_unicast_ether_addr(mac_addr) ||
- ether_addr_equal(mac_addr, vf->hw_lan_addr.addr))
+ ether_addr_equal(mac_addr, vf->hw_lan_addr))
continue;
if (vf->pf_set_mac) {
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
index c6a58343d81d..e6ef6b303222 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
@@ -113,7 +113,7 @@ ice_vc_fdir_param_check(struct ice_vf *vf, u16 vsi_id)
if (!ice_vc_isvalid_vsi_id(vf, vsi_id))
return -EINVAL;
- if (!pf->vsi[vf->lan_vsi_idx])
+ if (!ice_get_vf_vsi(vf))
return -EINVAL;
return 0;
@@ -494,7 +494,7 @@ ice_vc_fdir_rem_prof(struct ice_vf *vf, enum ice_fltr_ptype flow, int tun)
vf_prof = fdir->fdir_prof[flow];
- vf_vsi = pf->vsi[vf->lan_vsi_idx];
+ vf_vsi = ice_get_vf_vsi(vf);
if (!vf_vsi) {
dev_dbg(dev, "NULL vf %d vsi pointer\n", vf->vf_id);
return;
@@ -572,7 +572,7 @@ ice_vc_fdir_write_flow_prof(struct ice_vf *vf, enum ice_fltr_ptype flow,
pf = vf->pf;
dev = ice_pf_to_dev(pf);
hw = &pf->hw;
- vf_vsi = pf->vsi[vf->lan_vsi_idx];
+ vf_vsi = ice_get_vf_vsi(vf);
if (!vf_vsi)
return -EINVAL;
@@ -1205,7 +1205,7 @@ static int ice_vc_fdir_write_fltr(struct ice_vf *vf,
pf = vf->pf;
dev = ice_pf_to_dev(pf);
hw = &pf->hw;
- vsi = pf->vsi[vf->lan_vsi_idx];
+ vsi = ice_get_vf_vsi(vf);
if (!vsi) {
dev_dbg(dev, "Invalid vsi for VF %d\n", vf->vf_id);
return -EINVAL;
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 374b7f10b549..31565bbafa22 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -598,6 +598,112 @@ ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)
}
/**
+ * ice_clean_xdp_irq_zc - produce AF_XDP descriptors to CQ
+ * @xdp_ring: XDP Tx ring
+ */
+static void ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
+{
+ u16 ntc = xdp_ring->next_to_clean;
+ struct ice_tx_desc *tx_desc;
+ u16 cnt = xdp_ring->count;
+ struct ice_tx_buf *tx_buf;
+ u16 completed_frames = 0;
+ u16 xsk_frames = 0;
+ u16 last_rs;
+ int i;
+
+ last_rs = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : cnt - 1;
+ tx_desc = ICE_TX_DESC(xdp_ring, last_rs);
+ if (tx_desc->cmd_type_offset_bsz &
+ cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE)) {
+ if (last_rs >= ntc)
+ completed_frames = last_rs - ntc + 1;
+ else
+ completed_frames = last_rs + cnt - ntc + 1;
+ }
+
+ if (!completed_frames)
+ return;
+
+ if (likely(!xdp_ring->xdp_tx_active)) {
+ xsk_frames = completed_frames;
+ goto skip;
+ }
+
+ ntc = xdp_ring->next_to_clean;
+ for (i = 0; i < completed_frames; i++) {
+ tx_buf = &xdp_ring->tx_buf[ntc];
+
+ if (tx_buf->type == ICE_TX_BUF_XSK_TX) {
+ tx_buf->type = ICE_TX_BUF_EMPTY;
+ xsk_buff_free(tx_buf->xdp);
+ xdp_ring->xdp_tx_active--;
+ } else {
+ xsk_frames++;
+ }
+
+ ntc++;
+ if (ntc >= xdp_ring->count)
+ ntc = 0;
+ }
+skip:
+ tx_desc->cmd_type_offset_bsz = 0;
+ xdp_ring->next_to_clean += completed_frames;
+ if (xdp_ring->next_to_clean >= cnt)
+ xdp_ring->next_to_clean -= cnt;
+ if (xsk_frames)
+ xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames);
+}
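Editor's note: the cleanup loop above frees XDP_TX buffers in place but only counts plain AF_XDP frames, reporting them to the completion queue in a single batched call. A hedged userspace sketch of that split; the enum values and the batched completion call are stand-ins for the driver's types and xsk_tx_completed():

    #include <stdio.h>

    enum buf_type { BUF_XSK_FRAME, BUF_XSK_TX };

    /* Walk completed slots once; XDP_TX buffers are freed in place
     * while plain AF_XDP frames are only counted and reported in one
     * batch, mirroring the xsk_frames accounting above. */
    static unsigned int complete(enum buf_type *ring, unsigned int ntc,
                                 unsigned int done, unsigned int cnt)
    {
        unsigned int xsk_frames = 0;

        for (unsigned int i = 0; i < done; i++) {
            if (ring[ntc] == BUF_XSK_TX)
                ;   /* stands in for xsk_buff_free() */
            else
                xsk_frames++;
            if (++ntc == cnt)
                ntc = 0;
        }
        return xsk_frames;  /* passed to xsk_tx_completed() in one call */
    }

    int main(void)
    {
        enum buf_type ring[4] = { BUF_XSK_FRAME, BUF_XSK_TX,
                                  BUF_XSK_FRAME, BUF_XSK_FRAME };

        printf("xsk_frames=%u\n", complete(ring, 0, 4, 4)); /* prints 3 */
        return 0;
    }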
+
+/**
+ * ice_xmit_xdp_tx_zc - AF_XDP ZC handler for XDP_TX
+ * @xdp: XDP buffer to xmit
+ * @xdp_ring: XDP ring to produce descriptor onto
+ *
+ * Note that this function works directly on the xdp_buff; there is no need
+ * to convert it to an xdp_frame. The xdp_buff pointer is stored in the
+ * ice_tx_buf so that the cleaning side can later xsk_buff_free() it.
+ *
+ * Returns ICE_XDP_TX for a successfully produced descriptor, or
+ * ICE_XDP_CONSUMED if there was not enough space on the XDP ring.
+ */
+static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
+ struct ice_tx_ring *xdp_ring)
+{
+ u32 size = xdp->data_end - xdp->data;
+ u32 ntu = xdp_ring->next_to_use;
+ struct ice_tx_desc *tx_desc;
+ struct ice_tx_buf *tx_buf;
+ dma_addr_t dma;
+
+ if (ICE_DESC_UNUSED(xdp_ring) < ICE_RING_QUARTER(xdp_ring)) {
+ ice_clean_xdp_irq_zc(xdp_ring);
+ if (!ICE_DESC_UNUSED(xdp_ring)) {
+ xdp_ring->ring_stats->tx_stats.tx_busy++;
+ return ICE_XDP_CONSUMED;
+ }
+ }
+
+ dma = xsk_buff_xdp_get_dma(xdp);
+ xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, size);
+
+ tx_buf = &xdp_ring->tx_buf[ntu];
+ tx_buf->xdp = xdp;
+ tx_buf->type = ICE_TX_BUF_XSK_TX;
+ tx_desc = ICE_TX_DESC(xdp_ring, ntu);
+ tx_desc->buf_addr = cpu_to_le64(dma);
+ tx_desc->cmd_type_offset_bsz = ice_build_ctob(ICE_TX_DESC_CMD_EOP,
+ 0, size, 0);
+ xdp_ring->xdp_tx_active++;
+
+ if (++ntu == xdp_ring->count)
+ ntu = 0;
+ xdp_ring->next_to_use = ntu;
+
+ return ICE_XDP_TX;
+}
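Editor's note: the function above uses a lazy-completion policy: reclamation is attempted only once free descriptors drop below a quarter of the ring, and the send is rejected as busy only if cleaning still yields no space. A minimal sketch of that policy, with an invented RING_SIZE and counter standing in for the ring state:

    #include <stdio.h>

    #define RING_SIZE 64
    #define RING_QUARTER (RING_SIZE / 4)

    static unsigned int used;   /* descriptors currently in flight */

    /* Stand-in for ice_clean_xdp_irq_zc(): reclaim completed slots. */
    static void clean_ring(void)
    {
        used = 0;
    }

    /* Only attempt reclaim once free space falls below a quarter of
     * the ring, mirroring the ICE_RING_QUARTER threshold above. */
    static int xmit_one(void)
    {
        if (RING_SIZE - used < RING_QUARTER) {
            clean_ring();
            if (used == RING_SIZE)
                return -1;  /* ring still full: tx_busy */
        }
        used++;
        return 0;
    }

    int main(void)
    {
        for (int i = 0; i < 100; i++)
            if (xmit_one())
                printf("busy at %d\n", i);
        printf("used=%u\n", used);
        return 0;
    }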
+
+/**
* ice_run_xdp_zc - Executes an XDP program in zero-copy path
* @rx_ring: Rx ring
* @xdp: xdp_buff used as input to the XDP program
@@ -630,7 +736,7 @@ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
case XDP_PASS:
break;
case XDP_TX:
- result = ice_xmit_xdp_buff(xdp, xdp_ring);
+ result = ice_xmit_xdp_tx_zc(xdp, xdp_ring);
if (result == ICE_XDP_CONSUMED)
goto out_failure;
break;
@@ -760,7 +866,7 @@ construct_skb:
if (entries_to_alloc > ICE_RING_QUARTER(rx_ring))
failure |= !ice_alloc_rx_bufs_zc(rx_ring, entries_to_alloc);
- ice_finalize_xdp_rx(xdp_ring, xdp_xmit);
+ ice_finalize_xdp_rx(xdp_ring, xdp_xmit, 0);
ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes);
if (xsk_uses_need_wakeup(rx_ring->xsk_pool)) {
@@ -776,78 +882,6 @@ construct_skb:
}
/**
- * ice_clean_xdp_tx_buf - Free and unmap XDP Tx buffer
- * @xdp_ring: XDP Tx ring
- * @tx_buf: Tx buffer to clean
- */
-static void
-ice_clean_xdp_tx_buf(struct ice_tx_ring *xdp_ring, struct ice_tx_buf *tx_buf)
-{
- page_frag_free(tx_buf->raw_buf);
- xdp_ring->xdp_tx_active--;
- dma_unmap_single(xdp_ring->dev, dma_unmap_addr(tx_buf, dma),
- dma_unmap_len(tx_buf, len), DMA_TO_DEVICE);
- dma_unmap_len_set(tx_buf, len, 0);
-}
-
-/**
- * ice_clean_xdp_irq_zc - produce AF_XDP descriptors to CQ
- * @xdp_ring: XDP Tx ring
- */
-static void ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
-{
- u16 ntc = xdp_ring->next_to_clean;
- struct ice_tx_desc *tx_desc;
- u16 cnt = xdp_ring->count;
- struct ice_tx_buf *tx_buf;
- u16 completed_frames = 0;
- u16 xsk_frames = 0;
- u16 last_rs;
- int i;
-
- last_rs = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : cnt - 1;
- tx_desc = ICE_TX_DESC(xdp_ring, last_rs);
- if ((tx_desc->cmd_type_offset_bsz &
- cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
- if (last_rs >= ntc)
- completed_frames = last_rs - ntc + 1;
- else
- completed_frames = last_rs + cnt - ntc + 1;
- }
-
- if (!completed_frames)
- return;
-
- if (likely(!xdp_ring->xdp_tx_active)) {
- xsk_frames = completed_frames;
- goto skip;
- }
-
- ntc = xdp_ring->next_to_clean;
- for (i = 0; i < completed_frames; i++) {
- tx_buf = &xdp_ring->tx_buf[ntc];
-
- if (tx_buf->raw_buf) {
- ice_clean_xdp_tx_buf(xdp_ring, tx_buf);
- tx_buf->raw_buf = NULL;
- } else {
- xsk_frames++;
- }
-
- ntc++;
- if (ntc >= xdp_ring->count)
- ntc = 0;
- }
-skip:
- tx_desc->cmd_type_offset_bsz = 0;
- xdp_ring->next_to_clean += completed_frames;
- if (xdp_ring->next_to_clean >= cnt)
- xdp_ring->next_to_clean -= cnt;
- if (xsk_frames)
- xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames);
-}
-
-/**
* ice_xmit_pkt - produce a single HW Tx descriptor out of AF_XDP descriptor
* @xdp_ring: XDP ring to produce the HW Tx descriptor on
* @desc: AF_XDP descriptor to pull the DMA address and length from
@@ -921,20 +955,6 @@ static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, struct xdp_desc *d
}
/**
- * ice_set_rs_bit - set RS bit on last produced descriptor (one behind current NTU)
- * @xdp_ring: XDP ring to produce the HW Tx descriptors on
- */
-static void ice_set_rs_bit(struct ice_tx_ring *xdp_ring)
-{
- u16 ntu = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : xdp_ring->count - 1;
- struct ice_tx_desc *tx_desc;
-
- tx_desc = ICE_TX_DESC(xdp_ring, ntu);
- tx_desc->cmd_type_offset_bsz |=
- cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);
-}
-
-/**
* ice_xmit_zc - take entries from XSK Tx ring and place them onto HW Tx ring
* @xdp_ring: XDP ring to produce the HW Tx descriptors on
*
@@ -1068,12 +1088,12 @@ void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring)
while (ntc != ntu) {
struct ice_tx_buf *tx_buf = &xdp_ring->tx_buf[ntc];
- if (tx_buf->raw_buf)
- ice_clean_xdp_tx_buf(xdp_ring, tx_buf);
- else
+ if (tx_buf->type == ICE_TX_BUF_XSK_TX) {
+ tx_buf->type = ICE_TX_BUF_EMPTY;
+ xsk_buff_free(tx_buf->xdp);
+ } else {
xsk_frames++;
-
- tx_buf->raw_buf = NULL;
+ }
ntc++;
if (ntc >= xdp_ring->count)
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index b5b443883da9..03bc1e8af575 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -2835,6 +2835,22 @@ static int igb_offload_txtime(struct igb_adapter *adapter,
return 0;
}
+static int igb_tc_query_caps(struct igb_adapter *adapter,
+ struct tc_query_caps_base *base)
+{
+ switch (base->type) {
+ case TC_SETUP_QDISC_TAPRIO: {
+ struct tc_taprio_caps *caps = base->caps;
+
+ caps->broken_mqprio = true;
+
+ return 0;
+ }
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
static LIST_HEAD(igb_block_cb_list);
static int igb_setup_tc(struct net_device *dev, enum tc_setup_type type,
@@ -2843,6 +2859,8 @@ static int igb_setup_tc(struct net_device *dev, enum tc_setup_type type,
struct igb_adapter *adapter = netdev_priv(dev);
switch (type) {
+ case TC_QUERY_CAPS:
+ return igb_tc_query_caps(adapter, type_data);
case TC_SETUP_QDISC_CBS:
return igb_offload_cbs(adapter, type_data);
case TC_SETUP_BLOCK:
@@ -2896,8 +2914,14 @@ static int igb_xdp_setup(struct net_device *dev, struct netdev_bpf *bpf)
bpf_prog_put(old_prog);
/* bpf is just replaced, RXQ and MTU are already setup */
- if (!need_reset)
+ if (!need_reset) {
return 0;
+ } else {
+ if (prog)
+ xdp_features_set_redirect_target(dev, true);
+ else
+ xdp_features_clear_redirect_target(dev);
+ }
if (running)
igb_open(dev);
@@ -3210,8 +3234,6 @@ static int igb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (err)
goto err_pci_reg;
- pci_enable_pcie_error_reporting(pdev);
-
pci_set_master(pdev);
pci_save_state(pdev);
@@ -3333,6 +3355,7 @@ static int igb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
netdev->priv_flags |= IFF_SUPP_NOFCS;
netdev->priv_flags |= IFF_UNICAST_FLT;
+ netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;
/* MTU range: 68 - 9216 */
netdev->min_mtu = ETH_MIN_MTU;
@@ -3648,7 +3671,6 @@ err_sw_init:
err_ioremap:
free_netdev(netdev);
err_alloc_etherdev:
- pci_disable_pcie_error_reporting(pdev);
pci_release_mem_regions(pdev);
err_pci_reg:
err_dma:
@@ -3859,8 +3881,6 @@ static void igb_remove(struct pci_dev *pdev)
kfree(adapter->shadow_vfta);
free_netdev(netdev);
- pci_disable_pcie_error_reporting(pdev);
-
pci_disable_device(pdev);
}
diff --git a/drivers/net/ethernet/intel/igc/igc_base.c b/drivers/net/ethernet/intel/igc/igc_base.c
index a15927e77272..a1d815af507d 100644
--- a/drivers/net/ethernet/intel/igc/igc_base.c
+++ b/drivers/net/ethernet/intel/igc/igc_base.c
@@ -396,6 +396,35 @@ void igc_rx_fifo_flush_base(struct igc_hw *hw)
rd32(IGC_MPC);
}
+bool igc_is_device_id_i225(struct igc_hw *hw)
+{
+ switch (hw->device_id) {
+ case IGC_DEV_ID_I225_LM:
+ case IGC_DEV_ID_I225_V:
+ case IGC_DEV_ID_I225_I:
+ case IGC_DEV_ID_I225_K:
+ case IGC_DEV_ID_I225_K2:
+ case IGC_DEV_ID_I225_LMVP:
+ case IGC_DEV_ID_I225_IT:
+ return true;
+ default:
+ return false;
+ }
+}
+
+bool igc_is_device_id_i226(struct igc_hw *hw)
+{
+ switch (hw->device_id) {
+ case IGC_DEV_ID_I226_LM:
+ case IGC_DEV_ID_I226_V:
+ case IGC_DEV_ID_I226_K:
+ case IGC_DEV_ID_I226_IT:
+ return true;
+ default:
+ return false;
+ }
+}
+
static struct igc_mac_operations igc_mac_ops_base = {
.init_hw = igc_init_hw_base,
.check_for_link = igc_check_for_copper_link,
diff --git a/drivers/net/ethernet/intel/igc/igc_base.h b/drivers/net/ethernet/intel/igc/igc_base.h
index ce530f5fd7bd..7a992befca24 100644
--- a/drivers/net/ethernet/intel/igc/igc_base.h
+++ b/drivers/net/ethernet/intel/igc/igc_base.h
@@ -7,6 +7,8 @@
/* forward declaration */
void igc_rx_fifo_flush_base(struct igc_hw *hw);
void igc_power_down_phy_copper_base(struct igc_hw *hw);
+bool igc_is_device_id_i225(struct igc_hw *hw);
+bool igc_is_device_id_i226(struct igc_hw *hw);
/* Transmit Descriptor - Advanced */
union igc_adv_tx_desc {
diff --git a/drivers/net/ethernet/intel/igc/igc_defines.h b/drivers/net/ethernet/intel/igc/igc_defines.h
index e9747ec5ac0b..9dec3563ce3a 100644
--- a/drivers/net/ethernet/intel/igc/igc_defines.h
+++ b/drivers/net/ethernet/intel/igc/igc_defines.h
@@ -524,6 +524,7 @@
/* Transmit Scheduling */
#define IGC_TQAVCTRL_TRANSMIT_MODE_TSN 0x00000001
#define IGC_TQAVCTRL_ENHANCED_QAV 0x00000008
+#define IGC_TQAVCTRL_FUTSCDDIS 0x00000080
#define IGC_TXQCTL_QUEUE_MODE_LAUNCHT 0x00000001
#define IGC_TXQCTL_STRICT_CYCLE 0x00000002
diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index 1dd2a7fee8d4..2928a6c73692 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -5978,6 +5978,7 @@ static bool validate_schedule(struct igc_adapter *adapter,
const struct tc_taprio_qopt_offload *qopt)
{
int queue_uses[IGC_MAX_TX_QUEUES] = { };
+ struct igc_hw *hw = &adapter->hw;
struct timespec64 now;
size_t n;
@@ -5990,8 +5991,10 @@ static bool validate_schedule(struct igc_adapter *adapter,
* in the future, it will hold all the packets until that
* time, causing a lot of TX Hangs, so to avoid that, we
* reject schedules that would start in the future.
+ * Note: The limitation above no longer applies to i226.
*/
- if (!is_base_time_past(qopt->base_time, &now))
+ if (!is_base_time_past(qopt->base_time, &now) &&
+ igc_is_device_id_i225(hw))
return false;
for (n = 0; n < qopt->num_entries; n++) {
@@ -6061,6 +6064,7 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
struct tc_taprio_qopt_offload *qopt)
{
bool queue_configured[IGC_MAX_TX_QUEUES] = { };
+ struct igc_hw *hw = &adapter->hw;
u32 start_time = 0, end_time = 0;
size_t n;
int i;
@@ -6073,7 +6077,7 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
if (qopt->base_time < 0)
return -ERANGE;
- if (adapter->base_time)
+ if (igc_is_device_id_i225(hw) && adapter->base_time)
return -EALREADY;
if (!validate_schedule(adapter, qopt))
@@ -6221,12 +6225,35 @@ static int igc_tsn_enable_cbs(struct igc_adapter *adapter,
return igc_tsn_offload_apply(adapter);
}
+static int igc_tc_query_caps(struct igc_adapter *adapter,
+ struct tc_query_caps_base *base)
+{
+ struct igc_hw *hw = &adapter->hw;
+
+ switch (base->type) {
+ case TC_SETUP_QDISC_TAPRIO: {
+ struct tc_taprio_caps *caps = base->caps;
+
+ caps->broken_mqprio = true;
+
+ if (hw->mac.type == igc_i225)
+ caps->gate_mask_per_txq = true;
+
+ return 0;
+ }
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
static int igc_setup_tc(struct net_device *dev, enum tc_setup_type type,
void *type_data)
{
struct igc_adapter *adapter = netdev_priv(dev);
switch (type) {
+ case TC_QUERY_CAPS:
+ return igc_tc_query_caps(adapter, type_data);
case TC_SETUP_QDISC_TAPRIO:
return igc_tsn_enable_qbv_scheduling(adapter, type_data);
@@ -6451,8 +6478,6 @@ static int igc_probe(struct pci_dev *pdev,
if (err)
goto err_pci_reg;
- pci_enable_pcie_error_reporting(pdev);
-
err = pci_enable_ptm(pdev, NULL);
if (err < 0)
dev_info(&pdev->dev, "PCIe PTM not supported by PCIe bus/controller\n");
@@ -6550,6 +6575,9 @@ static int igc_probe(struct pci_dev *pdev,
netdev->mpls_features |= NETIF_F_HW_CSUM;
netdev->hw_enc_features |= netdev->vlan_features;
+ netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_XSK_ZEROCOPY;
+
/* MTU range: 68 - 9216 */
netdev->min_mtu = ETH_MIN_MTU;
netdev->max_mtu = MAX_STD_JUMBO_FRAME_SIZE;
@@ -6657,7 +6685,6 @@ err_sw_init:
err_ioremap:
free_netdev(netdev);
err_alloc_etherdev:
- pci_disable_pcie_error_reporting(pdev);
pci_release_mem_regions(pdev);
err_pci_reg:
err_dma:
@@ -6705,8 +6732,6 @@ static void igc_remove(struct pci_dev *pdev)
free_netdev(netdev);
- pci_disable_pcie_error_reporting(pdev);
-
pci_disable_device(pdev);
}
diff --git a/drivers/net/ethernet/intel/igc/igc_tsn.c b/drivers/net/ethernet/intel/igc/igc_tsn.c
index bb10d7b65232..a386c8d61dbf 100644
--- a/drivers/net/ethernet/intel/igc/igc_tsn.c
+++ b/drivers/net/ethernet/intel/igc/igc_tsn.c
@@ -2,6 +2,7 @@
/* Copyright (c) 2019 Intel Corporation */
#include "igc.h"
+#include "igc_hw.h"
#include "igc_tsn.h"
static bool is_any_launchtime(struct igc_adapter *adapter)
@@ -92,7 +93,8 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter)
tqavctrl = rd32(IGC_TQAVCTRL);
tqavctrl &= ~(IGC_TQAVCTRL_TRANSMIT_MODE_TSN |
- IGC_TQAVCTRL_ENHANCED_QAV);
+ IGC_TQAVCTRL_ENHANCED_QAV | IGC_TQAVCTRL_FUTSCDDIS);
+
wr32(IGC_TQAVCTRL, tqavctrl);
for (i = 0; i < adapter->num_tx_queues; i++) {
@@ -117,20 +119,10 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter)
ktime_t base_time, systim;
int i;
- cycle = adapter->cycle_time;
- base_time = adapter->base_time;
-
wr32(IGC_TSAUXC, 0);
wr32(IGC_DTXMXPKTSZ, IGC_DTXMXPKTSZ_TSN);
wr32(IGC_TXPBS, IGC_TXPBSIZE_TSN);
- tqavctrl = rd32(IGC_TQAVCTRL);
- tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV;
- wr32(IGC_TQAVCTRL, tqavctrl);
-
- wr32(IGC_QBVCYCLET_S, cycle);
- wr32(IGC_QBVCYCLET, cycle);
-
for (i = 0; i < adapter->num_tx_queues; i++) {
struct igc_ring *ring = adapter->tx_ring[i];
u32 txqctl = 0;
@@ -233,21 +225,46 @@ skip_cbs:
wr32(IGC_TXQCTL(i), txqctl);
}
+ tqavctrl = rd32(IGC_TQAVCTRL) & ~IGC_TQAVCTRL_FUTSCDDIS;
+ tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV;
+
+ cycle = adapter->cycle_time;
+ base_time = adapter->base_time;
+
nsec = rd32(IGC_SYSTIML);
sec = rd32(IGC_SYSTIMH);
systim = ktime_set(sec, nsec);
-
if (ktime_compare(systim, base_time) > 0) {
- s64 n;
+ s64 n = div64_s64(ktime_sub_ns(systim, base_time), cycle);
- n = div64_s64(ktime_sub_ns(systim, base_time), cycle);
base_time = ktime_add_ns(base_time, (n + 1) * cycle);
+ } else {
+ /* According to datasheet section 7.5.2.9.3.3, the FutScdDis bit
+ * has to be configured before the cycle time and base time.
+ * Tx won't hang if a GCL is already running,
+ * so in this case we don't need to set FutScdDis.
+ */
+ if (igc_is_device_id_i226(hw) &&
+ !(rd32(IGC_BASET_H) || rd32(IGC_BASET_L)))
+ tqavctrl |= IGC_TQAVCTRL_FUTSCDDIS;
}
- baset_h = div_s64_rem(base_time, NSEC_PER_SEC, &baset_l);
+ wr32(IGC_TQAVCTRL, tqavctrl);
+
+ wr32(IGC_QBVCYCLET_S, cycle);
+ wr32(IGC_QBVCYCLET, cycle);
+ baset_h = div_s64_rem(base_time, NSEC_PER_SEC, &baset_l);
wr32(IGC_BASET_H, baset_h);
+
+ /* In i226, a future base time is only supported when the FutScdDis
+ * bit is enabled, and it only takes effect on re-configuration.
+ * In this case, initialize the base time with zero to create a
+ * "re-configuration" scenario, then set the desired base time.
+ */
+ if (tqavctrl & IGC_TQAVCTRL_FUTSCDDIS)
+ wr32(IGC_BASET_L, 0);
wr32(IGC_BASET_L, baset_l);
return 0;
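Editor's note: the base-time adjustment earlier in this function advances base_time to the first cycle boundary strictly after the current system time. The same arithmetic in a self-contained sketch, with plain int64_t nanoseconds in place of ktime_t:

    #include <assert.h>
    #include <inttypes.h>
    #include <stdio.h>

    /* Advance base_time to the first cycle boundary strictly after
     * systim, mirroring the div64_s64()/ktime_add_ns() arithmetic. */
    static int64_t next_base_time(int64_t base_time, int64_t systim,
                                  int64_t cycle)
    {
        if (systim <= base_time)
            return base_time;   /* already in the future */
        return base_time + ((systim - base_time) / cycle + 1) * cycle;
    }

    int main(void)
    {
        /* base 1000ns, cycle 300ns, now 1700ns -> next boundary 1900ns */
        assert(next_base_time(1000, 1700, 300) == 1900);
        printf("ok\n");
        return 0;
    }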
@@ -274,17 +291,14 @@ int igc_tsn_reset(struct igc_adapter *adapter)
int igc_tsn_offload_apply(struct igc_adapter *adapter)
{
- int err;
+ struct igc_hw *hw = &adapter->hw;
- if (netif_running(adapter->netdev)) {
+ if (netif_running(adapter->netdev) && igc_is_device_id_i225(hw)) {
schedule_work(&adapter->reset_task);
return 0;
}
- err = igc_tsn_enable_offload(adapter);
- if (err < 0)
- return err;
+ igc_tsn_reset(adapter);
- adapter->flags = igc_tsn_new_flags(adapter);
return 0;
}
diff --git a/drivers/net/ethernet/intel/igc/igc_xdp.c b/drivers/net/ethernet/intel/igc/igc_xdp.c
index aeeb34e64610..e27af72aada8 100644
--- a/drivers/net/ethernet/intel/igc/igc_xdp.c
+++ b/drivers/net/ethernet/intel/igc/igc_xdp.c
@@ -29,6 +29,11 @@ int igc_xdp_set_prog(struct igc_adapter *adapter, struct bpf_prog *prog,
if (old_prog)
bpf_prog_put(old_prog);
+ if (prog)
+ xdp_features_set_redirect_target(dev, true);
+ else
+ xdp_features_clear_redirect_target(dev);
+
if (if_running)
igc_open(dev);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
index 38c4609bd429..878dd8dff528 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
@@ -3292,13 +3292,14 @@ static bool ixgbe_need_crosstalk_fix(struct ixgbe_hw *hw)
s32 ixgbe_check_mac_link_generic(struct ixgbe_hw *hw, ixgbe_link_speed *speed,
bool *link_up, bool link_up_wait_to_complete)
{
+ bool crosstalk_fix_active = ixgbe_need_crosstalk_fix(hw);
u32 links_reg, links_orig;
u32 i;
/* If Crosstalk fix enabled do the sanity check of making sure
* the SFP+ cage is full.
*/
- if (ixgbe_need_crosstalk_fix(hw)) {
+ if (crosstalk_fix_active) {
u32 sfp_cage_full;
switch (hw->mac.type) {
@@ -3346,10 +3347,24 @@ s32 ixgbe_check_mac_link_generic(struct ixgbe_hw *hw, ixgbe_link_speed *speed,
links_reg = IXGBE_READ_REG(hw, IXGBE_LINKS);
}
} else {
- if (links_reg & IXGBE_LINKS_UP)
+ if (links_reg & IXGBE_LINKS_UP) {
+ if (crosstalk_fix_active) {
+ /* Check the link state again after a delay
+ * to filter out spurious link up
+ * notifications.
+ */
+ mdelay(5);
+ links_reg = IXGBE_READ_REG(hw, IXGBE_LINKS);
+ if (!(links_reg & IXGBE_LINKS_UP)) {
+ *link_up = false;
+ *speed = IXGBE_LINK_SPEED_UNKNOWN;
+ return 0;
+ }
+ }
*link_up = true;
- else
+ } else {
*link_up = false;
+ }
}
switch (links_reg & IXGBE_LINKS_SPEED_82599) {
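Editor's note: the crosstalk-fix path above re-samples the link register after a 5 ms delay so that a transient link-up is filtered out. A userspace sketch of the debounce pattern, with a fake register read that simulates a spurious notification:

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Stand-in for reading the LINKS register; flips to down on
     * re-read to simulate a spurious link-up notification. */
    static bool read_link_up(void)
    {
        static int calls;

        return calls++ == 0;
    }

    /* Re-sample the link state after a short delay so a transient
     * link-up is filtered out, mirroring the crosstalk-fix path. */
    static bool link_is_up(bool crosstalk_fix_active)
    {
        if (!read_link_up())
            return false;
        if (crosstalk_fix_active) {
            usleep(5000);       /* stands in for mdelay(5) */
            return read_link_up();
        }
        return true;
    }

    int main(void)
    {
        printf("link up: %d\n", link_is_up(true));  /* prints 0 */
        return 0;
    }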
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
index 53a969e34883..13a6fca31004 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
@@ -557,8 +557,10 @@ static int ixgbe_ipsec_check_mgmt_ip(struct xfrm_state *xs)
/**
* ixgbe_ipsec_add_sa - program device with a security association
* @xs: pointer to transformer state struct
+ * @extack: extack pointer to fill in the failure reason
**/
-static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
+static int ixgbe_ipsec_add_sa(struct xfrm_state *xs,
+ struct netlink_ext_ack *extack)
{
struct net_device *dev = xs->xso.real_dev;
struct ixgbe_adapter *adapter = netdev_priv(dev);
@@ -570,23 +572,22 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
int i;
if (xs->id.proto != IPPROTO_ESP && xs->id.proto != IPPROTO_AH) {
- netdev_err(dev, "Unsupported protocol 0x%04x for ipsec offload\n",
- xs->id.proto);
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported protocol for ipsec offload");
return -EINVAL;
}
if (xs->props.mode != XFRM_MODE_TRANSPORT) {
- netdev_err(dev, "Unsupported mode for ipsec offload\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported mode for ipsec offload");
return -EINVAL;
}
if (ixgbe_ipsec_check_mgmt_ip(xs)) {
- netdev_err(dev, "IPsec IP addr clash with mgmt filters\n");
+ NL_SET_ERR_MSG_MOD(extack, "IPsec IP addr clash with mgmt filters");
return -EINVAL;
}
if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
- netdev_err(dev, "Unsupported ipsec offload type\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported ipsec offload type");
return -EINVAL;
}
@@ -594,14 +595,14 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
struct rx_sa rsa;
if (xs->calg) {
- netdev_err(dev, "Compression offload not supported\n");
+ NL_SET_ERR_MSG_MOD(extack, "Compression offload not supported");
return -EINVAL;
}
/* find the first unused index */
ret = ixgbe_ipsec_find_empty_idx(ipsec, true);
if (ret < 0) {
- netdev_err(dev, "No space for SA in Rx table!\n");
+ NL_SET_ERR_MSG_MOD(extack, "No space for SA in Rx table!");
return ret;
}
sa_idx = (u16)ret;
@@ -616,7 +617,7 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
/* get the key and salt */
ret = ixgbe_ipsec_parse_proto_keys(xs, rsa.key, &rsa.salt);
if (ret) {
- netdev_err(dev, "Failed to get key data for Rx SA table\n");
+ NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for Rx SA table");
return ret;
}
@@ -676,7 +677,7 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
} else {
/* no match and no empty slot */
- netdev_err(dev, "No space for SA in Rx IP SA table\n");
+ NL_SET_ERR_MSG_MOD(extack, "No space for SA in Rx IP SA table");
memset(&rsa, 0, sizeof(rsa));
return -ENOSPC;
}
@@ -711,7 +712,7 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
/* find the first unused index */
ret = ixgbe_ipsec_find_empty_idx(ipsec, false);
if (ret < 0) {
- netdev_err(dev, "No space for SA in Tx table\n");
+ NL_SET_ERR_MSG_MOD(extack, "No space for SA in Tx table");
return ret;
}
sa_idx = (u16)ret;
@@ -725,7 +726,7 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
ret = ixgbe_ipsec_parse_proto_keys(xs, tsa.key, &tsa.salt);
if (ret) {
- netdev_err(dev, "Failed to get key data for Tx SA table\n");
+ NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for Tx SA table");
memset(&tsa, 0, sizeof(tsa));
return ret;
}
@@ -950,7 +951,7 @@ int ixgbe_ipsec_vf_add_sa(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
memcpy(xs->aead->alg_name, aes_gcm_name, sizeof(aes_gcm_name));
/* set up the HW offload */
- err = ixgbe_ipsec_add_sa(xs);
+ err = ixgbe_ipsec_add_sa(xs, NULL);
if (err)
goto err_aead;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 4507fba8747a..773c35fecace 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -6647,7 +6647,7 @@ int ixgbe_setup_rx_resources(struct ixgbe_adapter *adapter,
rx_ring->queue_index, ixgbe_rx_napi_id(rx_ring)) < 0)
goto err;
- rx_ring->xdp_prog = adapter->xdp_prog;
+ WRITE_ONCE(rx_ring->xdp_prog, adapter->xdp_prog);
return 0;
err:
@@ -8943,7 +8943,8 @@ ixgbe_mdio_read(struct net_device *netdev, int prtad, int devad, u16 addr)
int regnum = addr;
if (devad != MDIO_DEVAD_NONE)
- regnum |= (devad << 16) | MII_ADDR_C45;
+ return mdiobus_c45_read(adapter->mii_bus, prtad,
+ devad, regnum);
return mdiobus_read(adapter->mii_bus, prtad, regnum);
}
@@ -8966,7 +8967,8 @@ static int ixgbe_mdio_write(struct net_device *netdev, int prtad, int devad,
int regnum = addr;
if (devad != MDIO_DEVAD_NONE)
- regnum |= (devad << 16) | MII_ADDR_C45;
+ return mdiobus_c45_write(adapter->mii_bus, prtad, devad,
+ regnum, value);
return mdiobus_write(adapter->mii_bus, prtad, regnum, value);
}
@@ -10303,14 +10305,15 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
synchronize_rcu();
err = ixgbe_setup_tc(dev, adapter->hw_tcs);
- if (err) {
- rcu_assign_pointer(adapter->xdp_prog, old_prog);
+ if (err)
return -EINVAL;
- }
+ if (!prog)
+ xdp_features_clear_redirect_target(dev);
} else {
- for (i = 0; i < adapter->num_rx_queues; i++)
- (void)xchg(&adapter->rx_ring[i]->xdp_prog,
- adapter->xdp_prog);
+ for (i = 0; i < adapter->num_rx_queues; i++) {
+ WRITE_ONCE(adapter->rx_ring[i]->xdp_prog,
+ adapter->xdp_prog);
+ }
}
if (old_prog)
@@ -10326,6 +10329,7 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
if (adapter->xdp_ring[i]->xsk_pool)
(void)ixgbe_xsk_wakeup(adapter->netdev, i,
XDP_WAKEUP_RX);
+ xdp_features_set_redirect_target(dev, true);
}
return 0;
@@ -10814,8 +10818,6 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_pci_reg;
}
- pci_enable_pcie_error_reporting(pdev);
-
pci_set_master(pdev);
pci_save_state(pdev);
@@ -11023,6 +11025,9 @@ skip_sriov:
netdev->priv_flags |= IFF_UNICAST_FLT;
netdev->priv_flags |= IFF_SUPP_NOFCS;
+ netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_XSK_ZEROCOPY;
+
/* MTU range: 68 - 9710 */
netdev->min_mtu = ETH_MIN_MTU;
netdev->max_mtu = IXGBE_MAX_JUMBO_FRAME_SIZE - (ETH_HLEN + ETH_FCS_LEN);
@@ -11243,7 +11248,6 @@ err_ioremap:
disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state);
free_netdev(netdev);
err_alloc_etherdev:
- pci_disable_pcie_error_reporting(pdev);
pci_release_mem_regions(pdev);
err_pci_reg:
err_dma:
@@ -11332,8 +11336,6 @@ static void ixgbe_remove(struct pci_dev *pdev)
disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state);
free_netdev(netdev);
- pci_disable_pcie_error_reporting(pdev);
-
if (disable_dev)
pci_disable_device(pdev);
}
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
index 123dca9ce468..689470c1e8ad 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
@@ -680,14 +680,14 @@ static s32 ixgbe_msca_cmd(struct ixgbe_hw *hw, u32 cmd)
}
/**
- * ixgbe_mii_bus_read_generic - Read a clause 22/45 register with gssr flags
+ * ixgbe_mii_bus_read_generic_c22 - Read a clause 22 register with gssr flags
* @hw: pointer to hardware structure
* @addr: address
* @regnum: register number
* @gssr: semaphore flags to acquire
**/
-static s32 ixgbe_mii_bus_read_generic(struct ixgbe_hw *hw, int addr,
- int regnum, u32 gssr)
+static s32 ixgbe_mii_bus_read_generic_c22(struct ixgbe_hw *hw, int addr,
+ int regnum, u32 gssr)
{
u32 hwaddr, cmd;
s32 data;
@@ -696,31 +696,52 @@ static s32 ixgbe_mii_bus_read_generic(struct ixgbe_hw *hw, int addr,
return -EBUSY;
hwaddr = addr << IXGBE_MSCA_PHY_ADDR_SHIFT;
- if (regnum & MII_ADDR_C45) {
- hwaddr |= regnum & GENMASK(21, 0);
- cmd = hwaddr | IXGBE_MSCA_ADDR_CYCLE | IXGBE_MSCA_MDI_COMMAND;
- } else {
- hwaddr |= (regnum & GENMASK(5, 0)) << IXGBE_MSCA_DEV_TYPE_SHIFT;
- cmd = hwaddr | IXGBE_MSCA_OLD_PROTOCOL |
- IXGBE_MSCA_READ_AUTOINC | IXGBE_MSCA_MDI_COMMAND;
- }
+ hwaddr |= (regnum & GENMASK(5, 0)) << IXGBE_MSCA_DEV_TYPE_SHIFT;
+ cmd = hwaddr | IXGBE_MSCA_OLD_PROTOCOL |
+ IXGBE_MSCA_READ_AUTOINC | IXGBE_MSCA_MDI_COMMAND;
data = ixgbe_msca_cmd(hw, cmd);
if (data < 0)
goto mii_bus_read_done;
- /* For a clause 45 access the address cycle just completed, we still
- * need to do the read command, otherwise just get the data
- */
- if (!(regnum & MII_ADDR_C45))
- goto do_mii_bus_read;
+ data = IXGBE_READ_REG(hw, IXGBE_MSRWD);
+ data = (data >> IXGBE_MSRWD_READ_DATA_SHIFT) & GENMASK(16, 0);
+
+mii_bus_read_done:
+ hw->mac.ops.release_swfw_sync(hw, gssr);
+ return data;
+}
+
+/**
+ * ixgbe_mii_bus_read_generic_c45 - Read a clause 45 register with gssr flags
+ * @hw: pointer to hardware structure
+ * @addr: address
+ * @devad: device address to read
+ * @regnum: register number
+ * @gssr: semaphore flags to acquire
+ **/
+static s32 ixgbe_mii_bus_read_generic_c45(struct ixgbe_hw *hw, int addr,
+ int devad, int regnum, u32 gssr)
+{
+ u32 hwaddr, cmd;
+ s32 data;
+
+ if (hw->mac.ops.acquire_swfw_sync(hw, gssr))
+ return -EBUSY;
+
+ hwaddr = addr << IXGBE_MSCA_PHY_ADDR_SHIFT;
+ hwaddr |= devad << 16 | regnum;
+ cmd = hwaddr | IXGBE_MSCA_ADDR_CYCLE | IXGBE_MSCA_MDI_COMMAND;
+
+ data = ixgbe_msca_cmd(hw, cmd);
+ if (data < 0)
+ goto mii_bus_read_done;
cmd = hwaddr | IXGBE_MSCA_READ | IXGBE_MSCA_MDI_COMMAND;
data = ixgbe_msca_cmd(hw, cmd);
if (data < 0)
goto mii_bus_read_done;
-do_mii_bus_read:
data = IXGBE_READ_REG(hw, IXGBE_MSRWD);
data = (data >> IXGBE_MSRWD_READ_DATA_SHIFT) & GENMASK(16, 0);
@@ -730,15 +751,15 @@ mii_bus_read_done:
}
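Editor's note: the clause 45 helpers compose the MSCA address from the PHY address, the MMD device address in bits 20:16, and the 16-bit register number in bits 15:0. A sketch of that composition follows; the shift value is a hypothetical stand-in for IXGBE_MSCA_PHY_ADDR_SHIFT:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PHY_ADDR_SHIFT 21   /* hypothetical, stands in for
                                 * IXGBE_MSCA_PHY_ADDR_SHIFT */

    /* Clause 45 folds the MMD device address into bits 20:16 and the
     * 16-bit register number into bits 15:0, as in the c45 helpers. */
    static uint32_t c45_hwaddr(uint32_t phy, uint32_t devad,
                               uint32_t regnum)
    {
        return (phy << PHY_ADDR_SHIFT) | (devad << 16) | regnum;
    }

    int main(void)
    {
        uint32_t hwaddr = c45_hwaddr(3, 7, 0x1234);

        assert(((hwaddr >> 16) & 0x1f) == 7);
        assert((hwaddr & 0xffff) == 0x1234);
        printf("hwaddr=0x%x\n", hwaddr);
        return 0;
    }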
/**
- * ixgbe_mii_bus_write_generic - Write a clause 22/45 register with gssr flags
+ * ixgbe_mii_bus_write_generic_c22 - Write a clause 22 register with gssr flags
* @hw: pointer to hardware structure
* @addr: address
* @regnum: register number
* @val: value to write
* @gssr: semaphore flags to acquire
**/
-static s32 ixgbe_mii_bus_write_generic(struct ixgbe_hw *hw, int addr,
- int regnum, u16 val, u32 gssr)
+static s32 ixgbe_mii_bus_write_generic_c22(struct ixgbe_hw *hw, int addr,
+ int regnum, u16 val, u32 gssr)
{
u32 hwaddr, cmd;
s32 err;
@@ -749,20 +770,43 @@ static s32 ixgbe_mii_bus_write_generic(struct ixgbe_hw *hw, int addr,
IXGBE_WRITE_REG(hw, IXGBE_MSRWD, (u32)val);
hwaddr = addr << IXGBE_MSCA_PHY_ADDR_SHIFT;
- if (regnum & MII_ADDR_C45) {
- hwaddr |= regnum & GENMASK(21, 0);
- cmd = hwaddr | IXGBE_MSCA_ADDR_CYCLE | IXGBE_MSCA_MDI_COMMAND;
- } else {
- hwaddr |= (regnum & GENMASK(5, 0)) << IXGBE_MSCA_DEV_TYPE_SHIFT;
- cmd = hwaddr | IXGBE_MSCA_OLD_PROTOCOL | IXGBE_MSCA_WRITE |
- IXGBE_MSCA_MDI_COMMAND;
- }
+ hwaddr |= (regnum & GENMASK(5, 0)) << IXGBE_MSCA_DEV_TYPE_SHIFT;
+ cmd = hwaddr | IXGBE_MSCA_OLD_PROTOCOL | IXGBE_MSCA_WRITE |
+ IXGBE_MSCA_MDI_COMMAND;
+
+ err = ixgbe_msca_cmd(hw, cmd);
+
+ hw->mac.ops.release_swfw_sync(hw, gssr);
+ return err;
+}
+
+/**
+ * ixgbe_mii_bus_write_generic_c45 - Write a clause 45 register with gssr flags
+ * @hw: pointer to hardware structure
+ * @addr: address
+ * @devad: device address to write
+ * @regnum: register number
+ * @val: value to write
+ * @gssr: semaphore flags to acquire
+ **/
+static s32 ixgbe_mii_bus_write_generic_c45(struct ixgbe_hw *hw, int addr,
+ int devad, int regnum, u16 val,
+ u32 gssr)
+{
+ u32 hwaddr, cmd;
+ s32 err;
+
+ if (hw->mac.ops.acquire_swfw_sync(hw, gssr))
+ return -EBUSY;
+
+ IXGBE_WRITE_REG(hw, IXGBE_MSRWD, (u32)val);
+
+ hwaddr = addr << IXGBE_MSCA_PHY_ADDR_SHIFT;
+ hwaddr |= devad << 16 | regnum;
+ cmd = hwaddr | IXGBE_MSCA_ADDR_CYCLE | IXGBE_MSCA_MDI_COMMAND;
- /* For clause 45 this is an address cycle, for clause 22 this is the
- * entire transaction
- */
err = ixgbe_msca_cmd(hw, cmd);
- if (err < 0 || !(regnum & MII_ADDR_C45))
+ if (err < 0)
goto mii_bus_write_done;
cmd = hwaddr | IXGBE_MSCA_WRITE | IXGBE_MSCA_MDI_COMMAND;
@@ -774,70 +818,144 @@ mii_bus_write_done:
}
/**
- * ixgbe_mii_bus_read - Read a clause 22/45 register
+ * ixgbe_mii_bus_read_c22 - Read a clause 22 register
+ * @bus: pointer to mii_bus structure which points to our driver private
+ * @addr: address
+ * @regnum: register number
+ **/
+static s32 ixgbe_mii_bus_read_c22(struct mii_bus *bus, int addr, int regnum)
+{
+ struct ixgbe_adapter *adapter = bus->priv;
+ struct ixgbe_hw *hw = &adapter->hw;
+ u32 gssr = hw->phy.phy_semaphore_mask;
+
+ return ixgbe_mii_bus_read_generic_c22(hw, addr, regnum, gssr);
+}
+
+/**
+ * ixgbe_mii_bus_read_c45 - Read a clause 45 register
* @bus: pointer to mii_bus structure which points to our driver private
+ * @devad: device address to read
* @addr: address
* @regnum: register number
**/
-static s32 ixgbe_mii_bus_read(struct mii_bus *bus, int addr, int regnum)
+static s32 ixgbe_mii_bus_read_c45(struct mii_bus *bus, int devad, int addr,
+ int regnum)
+{
+ struct ixgbe_adapter *adapter = bus->priv;
+ struct ixgbe_hw *hw = &adapter->hw;
+ u32 gssr = hw->phy.phy_semaphore_mask;
+
+ return ixgbe_mii_bus_read_generic_c45(hw, addr, devad, regnum, gssr);
+}
+
+/**
+ * ixgbe_mii_bus_write_c22 - Write a clause 22 register
+ * @bus: pointer to mii_bus structure which points to our driver private
+ * @addr: address
+ * @regnum: register number
+ * @val: value to write
+ **/
+static s32 ixgbe_mii_bus_write_c22(struct mii_bus *bus, int addr, int regnum,
+ u16 val)
{
struct ixgbe_adapter *adapter = bus->priv;
struct ixgbe_hw *hw = &adapter->hw;
u32 gssr = hw->phy.phy_semaphore_mask;
- return ixgbe_mii_bus_read_generic(hw, addr, regnum, gssr);
+ return ixgbe_mii_bus_write_generic_c22(hw, addr, regnum, val, gssr);
}
/**
- * ixgbe_mii_bus_write - Write a clause 22/45 register
+ * ixgbe_mii_bus_write_c45 - Write a clause 45 register
* @bus: pointer to mii_bus structure which points to our driver private
* @addr: address
+ * @devad: device address to write
* @regnum: register number
* @val: value to write
**/
-static s32 ixgbe_mii_bus_write(struct mii_bus *bus, int addr, int regnum,
- u16 val)
+static s32 ixgbe_mii_bus_write_c45(struct mii_bus *bus, int addr, int devad,
+ int regnum, u16 val)
{
struct ixgbe_adapter *adapter = bus->priv;
struct ixgbe_hw *hw = &adapter->hw;
u32 gssr = hw->phy.phy_semaphore_mask;
- return ixgbe_mii_bus_write_generic(hw, addr, regnum, val, gssr);
+ return ixgbe_mii_bus_write_generic_c45(hw, addr, devad, regnum, val,
+ gssr);
}
/**
- * ixgbe_x550em_a_mii_bus_read - Read a clause 22/45 register on x550em_a
+ * ixgbe_x550em_a_mii_bus_read_c22 - Read a clause 22 register on x550em_a
* @bus: pointer to mii_bus structure which points to our driver private
* @addr: address
* @regnum: register number
**/
-static s32 ixgbe_x550em_a_mii_bus_read(struct mii_bus *bus, int addr,
- int regnum)
+static s32 ixgbe_x550em_a_mii_bus_read_c22(struct mii_bus *bus, int addr,
+ int regnum)
+{
+ struct ixgbe_adapter *adapter = bus->priv;
+ struct ixgbe_hw *hw = &adapter->hw;
+ u32 gssr = hw->phy.phy_semaphore_mask;
+
+ gssr |= IXGBE_GSSR_TOKEN_SM | IXGBE_GSSR_PHY0_SM;
+ return ixgbe_mii_bus_read_generic_c22(hw, addr, regnum, gssr);
+}
+
+/**
+ * ixgbe_x550em_a_mii_bus_read_c45 - Read a clause 45 register on x550em_a
+ * @bus: pointer to mii_bus structure which points to our driver private
+ * @addr: address
+ * @devad: device address to read
+ * @regnum: register number
+ **/
+static s32 ixgbe_x550em_a_mii_bus_read_c45(struct mii_bus *bus, int addr,
+ int devad, int regnum)
+{
+ struct ixgbe_adapter *adapter = bus->priv;
+ struct ixgbe_hw *hw = &adapter->hw;
+ u32 gssr = hw->phy.phy_semaphore_mask;
+
+ gssr |= IXGBE_GSSR_TOKEN_SM | IXGBE_GSSR_PHY0_SM;
+ return ixgbe_mii_bus_read_generic_c45(hw, addr, devad, regnum, gssr);
+}
+
+/**
+ * ixgbe_x550em_a_mii_bus_write_c22 - Write a clause 22 register on x550em_a
+ * @bus: pointer to mii_bus structure which points to our driver private
+ * @addr: address
+ * @regnum: register number
+ * @val: value to write
+ **/
+static s32 ixgbe_x550em_a_mii_bus_write_c22(struct mii_bus *bus, int addr,
+ int regnum, u16 val)
{
struct ixgbe_adapter *adapter = bus->priv;
struct ixgbe_hw *hw = &adapter->hw;
u32 gssr = hw->phy.phy_semaphore_mask;
gssr |= IXGBE_GSSR_TOKEN_SM | IXGBE_GSSR_PHY0_SM;
- return ixgbe_mii_bus_read_generic(hw, addr, regnum, gssr);
+ return ixgbe_mii_bus_write_generic_c22(hw, addr, regnum, val, gssr);
}
/**
- * ixgbe_x550em_a_mii_bus_write - Write a clause 22/45 register on x550em_a
+ * ixgbe_x550em_a_mii_bus_write_c45 - Write a clause 45 register on x550em_a
* @bus: pointer to mii_bus structure which points to our driver private
* @addr: address
+ * @devad: device address to write
* @regnum: register number
* @val: value to write
**/
-static s32 ixgbe_x550em_a_mii_bus_write(struct mii_bus *bus, int addr,
- int regnum, u16 val)
+static s32 ixgbe_x550em_a_mii_bus_write_c45(struct mii_bus *bus, int addr,
+ int devad, int regnum, u16 val)
{
struct ixgbe_adapter *adapter = bus->priv;
struct ixgbe_hw *hw = &adapter->hw;
u32 gssr = hw->phy.phy_semaphore_mask;
gssr |= IXGBE_GSSR_TOKEN_SM | IXGBE_GSSR_PHY0_SM;
- return ixgbe_mii_bus_write_generic(hw, addr, regnum, val, gssr);
+ return ixgbe_mii_bus_write_generic_c45(hw, addr, devad, regnum, val,
+ gssr);
}
/**
@@ -909,8 +1027,11 @@ out:
**/
s32 ixgbe_mii_bus_init(struct ixgbe_hw *hw)
{
- s32 (*write)(struct mii_bus *bus, int addr, int regnum, u16 val);
- s32 (*read)(struct mii_bus *bus, int addr, int regnum);
+ s32 (*write_c22)(struct mii_bus *bus, int addr, int regnum, u16 val);
+ s32 (*read_c22)(struct mii_bus *bus, int addr, int regnum);
+ s32 (*write_c45)(struct mii_bus *bus, int addr, int devad, int regnum,
+ u16 val);
+ s32 (*read_c45)(struct mii_bus *bus, int addr, int devad, int regnum);
struct ixgbe_adapter *adapter = hw->back;
struct pci_dev *pdev = adapter->pdev;
struct device *dev = &adapter->netdev->dev;
@@ -929,12 +1050,16 @@ s32 ixgbe_mii_bus_init(struct ixgbe_hw *hw)
case IXGBE_DEV_ID_X550EM_A_1G_T_L:
if (!ixgbe_x550em_a_has_mii(hw))
return 0;
- read = &ixgbe_x550em_a_mii_bus_read;
- write = &ixgbe_x550em_a_mii_bus_write;
+ read_c22 = ixgbe_x550em_a_mii_bus_read_c22;
+ write_c22 = ixgbe_x550em_a_mii_bus_write_c22;
+ read_c45 = ixgbe_x550em_a_mii_bus_read_c45;
+ write_c45 = ixgbe_x550em_a_mii_bus_write_c45;
break;
default:
- read = &ixgbe_mii_bus_read;
- write = &ixgbe_mii_bus_write;
+ read_c22 = ixgbe_mii_bus_read_c22;
+ write_c22 = ixgbe_mii_bus_write_c22;
+ read_c45 = ixgbe_mii_bus_read_c45;
+ write_c45 = ixgbe_mii_bus_write_c45;
break;
}
@@ -942,8 +1067,10 @@ s32 ixgbe_mii_bus_init(struct ixgbe_hw *hw)
if (!bus)
return -ENOMEM;
- bus->read = read;
- bus->write = write;
+ bus->read = read_c22;
+ bus->write = write_c22;
+ bus->read_c45 = read_c45;
+ bus->write_c45 = write_c45;
/* Use the position of the device in the PCI hierarchy as the id */
snprintf(bus->id, MII_BUS_ID_SIZE, "%s-mdio-%s", ixgbe_driver_name,
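The ixgbe conversion above illustrates the pattern repeated throughout this series: rather than multiplexing clause 22 and clause 45 through one callback keyed off a flag in regnum, a bus driver now publishes distinct accessors and the MDIO core dispatches on the transaction type. A minimal sketch of the wiring, with hypothetical foo_* callbacks standing in for a real driver's (not code from this patch):

    static int foo_read_c22(struct mii_bus *bus, int addr, int regnum);
    static int foo_write_c22(struct mii_bus *bus, int addr, int regnum, u16 val);
    static int foo_read_c45(struct mii_bus *bus, int addr, int devad, int regnum);
    static int foo_write_c45(struct mii_bus *bus, int addr, int devad,
                             int regnum, u16 val);

    bus->read = foo_read_c22;
    bus->write = foo_write_c22;
    /* Set only when the hardware can issue clause 45 frames; if these
     * stay NULL the core fails C45 accesses with -EOPNOTSUPP.
     */
    bus->read_c45 = foo_read_c45;
    bus->write_c45 = foo_write_c45;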
diff --git a/drivers/net/ethernet/intel/ixgbevf/ipsec.c b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
index c1cf540d162a..66cf17f19408 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
@@ -257,8 +257,10 @@ static int ixgbevf_ipsec_parse_proto_keys(struct xfrm_state *xs,
/**
* ixgbevf_ipsec_add_sa - program device with a security association
* @xs: pointer to transformer state struct
+ * @extack: extack pointer to fill the failure reason
**/
-static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
+static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs,
+ struct netlink_ext_ack *extack)
{
struct net_device *dev = xs->xso.real_dev;
struct ixgbevf_adapter *adapter;
@@ -270,18 +272,17 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
ipsec = adapter->ipsec;
if (xs->id.proto != IPPROTO_ESP && xs->id.proto != IPPROTO_AH) {
- netdev_err(dev, "Unsupported protocol 0x%04x for IPsec offload\n",
- xs->id.proto);
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported protocol for IPsec offload");
return -EINVAL;
}
if (xs->props.mode != XFRM_MODE_TRANSPORT) {
- netdev_err(dev, "Unsupported mode for ipsec offload\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported mode for ipsec offload");
return -EINVAL;
}
if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
- netdev_err(dev, "Unsupported ipsec offload type\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported ipsec offload type");
return -EINVAL;
}
@@ -289,14 +290,14 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
struct rx_sa rsa;
if (xs->calg) {
- netdev_err(dev, "Compression offload not supported\n");
+ NL_SET_ERR_MSG_MOD(extack, "Compression offload not supported");
return -EINVAL;
}
/* find the first unused index */
ret = ixgbevf_ipsec_find_empty_idx(ipsec, true);
if (ret < 0) {
- netdev_err(dev, "No space for SA in Rx table!\n");
+ NL_SET_ERR_MSG_MOD(extack, "No space for SA in Rx table!");
return ret;
}
sa_idx = (u16)ret;
@@ -311,7 +312,7 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
/* get the key and salt */
ret = ixgbevf_ipsec_parse_proto_keys(xs, rsa.key, &rsa.salt);
if (ret) {
- netdev_err(dev, "Failed to get key data for Rx SA table\n");
+ NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for Rx SA table");
return ret;
}
@@ -350,7 +351,7 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
/* find the first unused index */
ret = ixgbevf_ipsec_find_empty_idx(ipsec, false);
if (ret < 0) {
- netdev_err(dev, "No space for SA in Tx table\n");
+ NL_SET_ERR_MSG_MOD(extack, "No space for SA in Tx table");
return ret;
}
sa_idx = (u16)ret;
@@ -364,7 +365,7 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
ret = ixgbevf_ipsec_parse_proto_keys(xs, tsa.key, &tsa.salt);
if (ret) {
- netdev_err(dev, "Failed to get key data for Tx SA table\n");
+ NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for Tx SA table");
memset(&tsa, 0, sizeof(tsa));
return ret;
}
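These ixgbevf hunks are part of a tree-wide move of xfrm offload validation errors from netdev_err() to netlink extack, so the failure reason reaches the userspace caller (e.g. "ip xfrm state add") instead of only the kernel log. The shape of a converted callback, sketched with a hypothetical bar_ipsec_add_sa():

    static int bar_ipsec_add_sa(struct xfrm_state *xs,
                                struct netlink_ext_ack *extack)
    {
            if (xs->props.mode != XFRM_MODE_TRANSPORT) {
                    /* the string is propagated back over netlink */
                    NL_SET_ERR_MSG_MOD(extack, "Unsupported mode for ipsec offload");
                    return -EINVAL;
            }
            /* ... program the SA into hardware ... */
            return 0;
    }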
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index ea0a230c1153..a44e4bd56142 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -4634,6 +4634,7 @@ static int ixgbevf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
NETIF_F_HW_VLAN_CTAG_TX;
netdev->priv_flags |= IFF_UNICAST_FLT;
+ netdev->xdp_features = NETDEV_XDP_ACT_BASIC;
/* MTU range: 68 - 1504 or 9710 */
netdev->min_mtu = ETH_MIN_MTU;
diff --git a/drivers/net/ethernet/marvell/mvmdio.c b/drivers/net/ethernet/marvell/mvmdio.c
index ef878973b859..8662543ca5c8 100644
--- a/drivers/net/ethernet/marvell/mvmdio.c
+++ b/drivers/net/ethernet/marvell/mvmdio.c
@@ -146,9 +146,6 @@ static int orion_mdio_smi_read(struct mii_bus *bus, int mii_id,
u32 val;
int ret;
- if (regnum & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
ret = orion_mdio_wait_ready(&orion_mdio_smi_ops, bus);
if (ret < 0)
return ret;
@@ -177,9 +174,6 @@ static int orion_mdio_smi_write(struct mii_bus *bus, int mii_id,
struct orion_mdio_dev *dev = bus->priv;
int ret;
- if (regnum & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
ret = orion_mdio_wait_ready(&orion_mdio_smi_ops, bus);
if (ret < 0)
return ret;
@@ -204,21 +198,17 @@ static const struct orion_mdio_ops orion_mdio_xsmi_ops = {
.poll_interval_max = MVMDIO_XSMI_POLL_INTERVAL_MAX,
};
-static int orion_mdio_xsmi_read(struct mii_bus *bus, int mii_id,
- int regnum)
+static int orion_mdio_xsmi_read_c45(struct mii_bus *bus, int mii_id,
+ int dev_addr, int regnum)
{
struct orion_mdio_dev *dev = bus->priv;
- u16 dev_addr = (regnum >> 16) & GENMASK(4, 0);
int ret;
- if (!(regnum & MII_ADDR_C45))
- return -EOPNOTSUPP;
-
ret = orion_mdio_wait_ready(&orion_mdio_xsmi_ops, bus);
if (ret < 0)
return ret;
- writel(regnum & GENMASK(15, 0), dev->regs + MVMDIO_XSMI_ADDR_REG);
+ writel(regnum, dev->regs + MVMDIO_XSMI_ADDR_REG);
writel((mii_id << MVMDIO_XSMI_PHYADDR_SHIFT) |
(dev_addr << MVMDIO_XSMI_DEVADDR_SHIFT) |
MVMDIO_XSMI_READ_OPERATION,
@@ -237,21 +227,17 @@ static int orion_mdio_xsmi_read(struct mii_bus *bus, int mii_id,
return readl(dev->regs + MVMDIO_XSMI_MGNT_REG) & GENMASK(15, 0);
}
-static int orion_mdio_xsmi_write(struct mii_bus *bus, int mii_id,
- int regnum, u16 value)
+static int orion_mdio_xsmi_write_c45(struct mii_bus *bus, int mii_id,
+ int dev_addr, int regnum, u16 value)
{
struct orion_mdio_dev *dev = bus->priv;
- u16 dev_addr = (regnum >> 16) & GENMASK(4, 0);
int ret;
- if (!(regnum & MII_ADDR_C45))
- return -EOPNOTSUPP;
-
ret = orion_mdio_wait_ready(&orion_mdio_xsmi_ops, bus);
if (ret < 0)
return ret;
- writel(regnum & GENMASK(15, 0), dev->regs + MVMDIO_XSMI_ADDR_REG);
+ writel(regnum, dev->regs + MVMDIO_XSMI_ADDR_REG);
writel((mii_id << MVMDIO_XSMI_PHYADDR_SHIFT) |
(dev_addr << MVMDIO_XSMI_DEVADDR_SHIFT) |
MVMDIO_XSMI_WRITE_OPERATION | value,
@@ -302,8 +288,8 @@ static int orion_mdio_probe(struct platform_device *pdev)
bus->write = orion_mdio_smi_write;
break;
case BUS_TYPE_XSMI:
- bus->read = orion_mdio_xsmi_read;
- bus->write = orion_mdio_xsmi_write;
+ bus->read_c45 = orion_mdio_xsmi_read_c45;
+ bus->write_c45 = orion_mdio_xsmi_write_c45;
break;
}
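Before this series, clause 45 accesses were funneled through the clause 22 callbacks by setting bit 30 (MII_ADDR_C45) of regnum and packing the device address into bits 20:16; the lines removed above unpacked that by hand. With the dedicated _c45 callbacks the core passes devad separately and regnum holds just the 16-bit register address, which is why orion_mdio_xsmi_read_c45() can now write regnum to MVMDIO_XSMI_ADDR_REG without masking. The retired encoding, for reference (historical convention, no longer in the tree):

    #define MII_ADDR_C45   BIT(30)              /* "this is a C45 access" */
    devad = (regnum >> 16) & GENMASK(4, 0);     /* device address, 5 bits */
    regad =  regnum        & GENMASK(15, 0);    /* register address */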
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index f8925cac61e4..0e39d199ff06 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -38,7 +38,7 @@
#include <net/ipv6.h>
#include <net/tso.h>
#include <net/page_pool.h>
-#include <net/pkt_cls.h>
+#include <net/pkt_sched.h>
#include <linux/bpf_trace.h>
/* Registers */
@@ -5612,6 +5612,12 @@ static int mvneta_probe(struct platform_device *pdev)
NETIF_F_TSO | NETIF_F_RXCSUM;
dev->hw_features |= dev->features;
dev->vlan_features |= dev->features;
+ if (!pp->bm_priv)
+ dev->xdp_features = NETDEV_XDP_ACT_BASIC |
+ NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT |
+ NETDEV_XDP_ACT_RX_SG |
+ NETDEV_XDP_ACT_NDO_XMIT_SG;
dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
netif_set_tso_max_segs(dev, MVNETA_MAX_TSO_SEGS);
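The xdp_features assignments cropping up across these drivers implement this release's XDP capability advertisement: each netdev declares which XDP verdicts and datapaths it actually supports, and the flags are exported to userspace via the new netdev netlink family. mvneta only advertises them when the hardware buffer manager is unused, since its XDP path depends on page_pool-backed buffers. The flags used in this series (from include/uapi/linux/netdev.h):

    NETDEV_XDP_ACT_BASIC        /* XDP_ABORTED/DROP/PASS/TX */
    NETDEV_XDP_ACT_REDIRECT     /* XDP_REDIRECT out of the device */
    NETDEV_XDP_ACT_NDO_XMIT     /* can be a redirect target (ndo_xdp_xmit) */
    NETDEV_XDP_ACT_RX_SG        /* multi-buffer XDP on receive */
    NETDEV_XDP_ACT_NDO_XMIT_SG  /* multi-buffer via ndo_xdp_xmit */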
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index 4da45c5abba5..9b4ecbe4f36d 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -6866,6 +6866,10 @@ static int mvpp2_port_probe(struct platform_device *pdev,
dev->vlan_features |= features;
netif_set_tso_max_segs(dev, MVPP2_MAX_TSO_SEGS);
+
+ dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT;
+
dev->priv_flags |= IFF_UNICAST_FLT;
/* MTU range: 68 - 9704 */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index d2584ebb7a70..5727d67e0259 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -195,6 +195,9 @@ M(CPT_STATS, 0xA05, cpt_sts, cpt_sts_req, cpt_sts_rsp) \
M(CPT_RXC_TIME_CFG, 0xA06, cpt_rxc_time_cfg, cpt_rxc_time_cfg_req, \
msg_rsp) \
M(CPT_CTX_CACHE_SYNC, 0xA07, cpt_ctx_cache_sync, msg_req, msg_rsp) \
+M(CPT_LF_RESET, 0xA08, cpt_lf_reset, cpt_lf_rst_req, msg_rsp) \
+M(CPT_FLT_ENG_INFO, 0xA09, cpt_flt_eng_info, cpt_flt_eng_info_req, \
+ cpt_flt_eng_info_rsp) \
/* SDP mbox IDs (range 0x1000 - 0x11FF) */ \
M(SET_SDP_CHAN_INFO, 0x1000, set_sdp_chan_info, sdp_chan_info_msg, msg_rsp) \
M(GET_SDP_CHAN_INFO, 0x1001, get_sdp_chan_info, msg_req, sdp_get_chan_info_msg) \
@@ -297,6 +300,8 @@ M(NIX_BANDPROF_FREE, 0x801e, nix_bandprof_free, nix_bandprof_free_req, \
msg_rsp) \
M(NIX_BANDPROF_GET_HWINFO, 0x801f, nix_bandprof_get_hwinfo, msg_req, \
nix_bandprof_get_hwinfo_rsp) \
+M(NIX_READ_INLINE_IPSEC_CFG, 0x8023, nix_read_inline_ipsec_cfg, \
+ msg_req, nix_inline_ipsec_cfg) \
/* MCS mbox IDs (range 0xA000 - 0xBFFF) */ \
M(MCS_ALLOC_RESOURCES, 0xa000, mcs_alloc_resources, mcs_alloc_rsrc_req, \
mcs_alloc_rsrc_rsp) \
@@ -1196,7 +1201,7 @@ struct nix_inline_ipsec_cfg {
u32 cpt_credit;
struct {
u8 egrp;
- u8 opcode;
+ u16 opcode;
u16 param1;
u16 param2;
} gen_cfg;
@@ -1205,6 +1210,8 @@ struct nix_inline_ipsec_cfg {
u8 cpt_slot;
} inst_qsel;
u8 enable;
+ u16 bpid;
+ u32 credit_th;
};
/* Per NIX LF inline IPSec configuration */
@@ -1609,6 +1616,8 @@ struct cpt_lf_alloc_req_msg {
u16 sso_pf_func;
u16 eng_grpmsk;
int blkaddr;
+ u8 ctx_ilen_valid : 1;
+ u8 ctx_ilen : 7;
};
#define CPT_INLINE_INBOUND 0
@@ -1692,6 +1701,28 @@ struct cpt_inst_lmtst_req {
u64 rsvd;
};
+/* Mailbox message format to request a CPT LF reset */
+struct cpt_lf_rst_req {
+ struct mbox_msghdr hdr;
+ u32 slot;
+ u32 rsvd;
+};
+
+/* Mailbox message format to request info on CPT faulted engines */
+struct cpt_flt_eng_info_req {
+ struct mbox_msghdr hdr;
+ int blkaddr;
+ bool reset;
+ u32 rsvd;
+};
+
+struct cpt_flt_eng_info_rsp {
+ struct mbox_msghdr hdr;
+ u64 flt_eng_map[CPT_10K_AF_INT_VEC_RVU];
+ u64 rcvrd_eng_map[CPT_10K_AF_INT_VEC_RVU];
+ u64 rsvd;
+};
+
struct sdp_node_info {
/* Node to which this PF belongs */
u8 node_id;
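Each M() entry in this mailbox table both reserves the message ID and, via macro expansion, declares the AF-side handler the implementation must provide as rvu_mbox_handler_<name>(). The two CPT additions above therefore imply the following prototypes, which the rvu_cpt.c changes further down supply:

    int rvu_mbox_handler_cpt_lf_reset(struct rvu *rvu,
                                      struct cpt_lf_rst_req *req,
                                      struct msg_rsp *rsp);
    int rvu_mbox_handler_cpt_flt_eng_info(struct rvu *rvu,
                                          struct cpt_flt_eng_info_req *req,
                                          struct cpt_flt_eng_info_rsp *rsp);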
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index 3f5e09b77d4b..8683ce57ed3f 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -1164,8 +1164,16 @@ cpt:
goto nix_err;
}
+ err = rvu_cpt_init(rvu);
+ if (err) {
+ dev_err(rvu->dev, "%s: Failed to initialize cpt\n", __func__);
+ goto mcs_err;
+ }
+
return 0;
+mcs_err:
+ rvu_mcs_exit(rvu);
nix_err:
rvu_nix_freemem(rvu);
npa_err:
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 7f0a64731c67..389663a13d1d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -108,6 +108,8 @@ struct rvu_block {
u64 lfreset_reg;
unsigned char name[NAME_SIZE];
struct rvu *rvu;
+ u64 cpt_flt_eng_map[3];
+ u64 cpt_rcvrd_eng_map[3];
};
struct nix_mcast {
@@ -459,6 +461,7 @@ struct rvu {
struct rvu_pfvf *pf;
struct rvu_pfvf *hwvf;
struct mutex rsrc_lock; /* Serialize resource alloc/free */
+ struct mutex alias_lock; /* Serialize bar2 alias access */
int vfs; /* Number of VFs attached to RVU */
int nix_blkaddr[MAX_NIX_BLKS];
@@ -510,6 +513,7 @@ struct rvu {
struct ptp *ptp;
int mcs_blk_cnt;
+ int cpt_pf_num;
#ifdef CONFIG_DEBUG_FS
struct rvu_debugfs rvu_dbg;
@@ -524,6 +528,8 @@ struct rvu {
struct list_head mcs_intrq_head;
/* mcs interrupt queue lock */
spinlock_t mcs_intrq_lock;
+ /* CPT interrupt lock */
+ spinlock_t cpt_intr_lock;
};
static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
@@ -546,6 +552,17 @@ static inline u64 rvupf_read64(struct rvu *rvu, u64 offset)
return readq(rvu->pfreg_base + offset);
}
+static inline void rvu_bar2_sel_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
+{
+ /* HW requires a read back of the RVU_AF_BAR2_SEL register to ensure
+ * completion of the write operation.
+ */
+ rvu_write64(rvu, block, offset, val);
+ rvu_read64(rvu, block, offset);
+ /* Barrier to ensure read completes before accessing LF registers */
+ mb();
+}
+
/* Silicon revisions */
static inline bool is_rvu_pre_96xx_C0(struct rvu *rvu)
{
@@ -865,11 +882,15 @@ void rvu_cpt_unregister_interrupts(struct rvu *rvu);
int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf,
int slot);
int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc);
+int rvu_cpt_init(struct rvu *rvu);
/* CN10K RVU */
int rvu_set_channels_base(struct rvu *rvu);
void rvu_program_channels(struct rvu *rvu);
+/* CN10K NIX */
+void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw);
+
/* CN10K RVU - LMT*/
void rvu_reset_lmt_map_tbl(struct rvu *rvu, u16 pcifunc);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
index 7dbbc115cde4..4ad9ff025c96 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
@@ -538,3 +538,21 @@ void rvu_program_channels(struct rvu *rvu)
rvu_lbk_set_channels(rvu);
rvu_rpm_set_channels(rvu);
}
+
+void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw)
+{
+ int blkaddr = nix_hw->blkaddr;
+ u64 cfg;
+
+ /* Set AF vWQE timer interval to an LF-configurable range of
+ * 6.4us to 1.632ms.
+ */
+ rvu_write64(rvu, blkaddr, NIX_AF_VWQE_TIMER, 0x3FULL);
+
+ /* Enable NIX RX stream and global conditional clock to
+ * avoid multiple frees of NPA buffers.
+ */
+ cfg = rvu_read64(rvu, blkaddr, NIX_AF_CFG);
+ cfg |= BIT_ULL(1) | BIT_ULL(2);
+ rvu_write64(rvu, blkaddr, NIX_AF_CFG, cfg);
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
index 38bbae5d9ae0..f047185f38e0 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
@@ -17,7 +17,7 @@
#define PCI_DEVID_OTX2_CPT10K_PF 0xA0F2
/* Length of initial context fetch in 128 byte words */
-#define CPT_CTX_ILEN 2ULL
+#define CPT_CTX_ILEN 1ULL
#define cpt_get_eng_sts(e_min, e_max, rsp, etype) \
({ \
@@ -37,34 +37,68 @@
(_rsp)->free_sts_##etype = free_sts; \
})
-static irqreturn_t rvu_cpt_af_flt_intr_handler(int irq, void *ptr)
+static irqreturn_t cpt_af_flt_intr_handler(int vec, void *ptr)
{
struct rvu_block *block = ptr;
struct rvu *rvu = block->rvu;
int blkaddr = block->addr;
- u64 reg0, reg1, reg2;
-
- reg0 = rvu_read64(rvu, blkaddr, CPT_AF_FLTX_INT(0));
- reg1 = rvu_read64(rvu, blkaddr, CPT_AF_FLTX_INT(1));
- if (!is_rvu_otx2(rvu)) {
- reg2 = rvu_read64(rvu, blkaddr, CPT_AF_FLTX_INT(2));
- dev_err_ratelimited(rvu->dev,
- "Received CPTAF FLT irq : 0x%llx, 0x%llx, 0x%llx",
- reg0, reg1, reg2);
- } else {
- dev_err_ratelimited(rvu->dev,
- "Received CPTAF FLT irq : 0x%llx, 0x%llx",
- reg0, reg1);
+ u64 reg, val;
+ int i, eng;
+ u8 grp;
+
+ reg = rvu_read64(rvu, blkaddr, CPT_AF_FLTX_INT(vec));
+ dev_err_ratelimited(rvu->dev, "Received CPTAF FLT%d irq : 0x%llx", vec, reg);
+
+ i = -1;
+ while ((i = find_next_bit((unsigned long *)&reg, 64, i + 1)) < 64) {
+ switch (vec) {
+ case 0:
+ eng = i;
+ break;
+ case 1:
+ eng = i + 64;
+ break;
+ case 2:
+ eng = i + 128;
+ break;
+ }
+ grp = rvu_read64(rvu, blkaddr, CPT_AF_EXEX_CTL2(eng)) & 0xFF;
+ /* Disable and re-enable the engine that triggered the fault */
+ rvu_write64(rvu, blkaddr, CPT_AF_EXEX_CTL2(eng), 0x0);
+ val = rvu_read64(rvu, blkaddr, CPT_AF_EXEX_CTL(eng));
+ rvu_write64(rvu, blkaddr, CPT_AF_EXEX_CTL(eng), val & ~1ULL);
+
+ rvu_write64(rvu, blkaddr, CPT_AF_EXEX_CTL2(eng), grp);
+ rvu_write64(rvu, blkaddr, CPT_AF_EXEX_CTL(eng), val | 1ULL);
+
+ spin_lock(&rvu->cpt_intr_lock);
+ block->cpt_flt_eng_map[vec] |= BIT_ULL(i);
+ val = rvu_read64(rvu, blkaddr, CPT_AF_EXEX_STS(eng));
+ val = val & 0x3;
+ if (val == 0x1 || val == 0x2)
+ block->cpt_rcvrd_eng_map[vec] |= BIT_ULL(i);
+ spin_unlock(&rvu->cpt_intr_lock);
}
-
- rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(0), reg0);
- rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(1), reg1);
- if (!is_rvu_otx2(rvu))
- rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(2), reg2);
+ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(vec), reg);
return IRQ_HANDLED;
}
+static irqreturn_t rvu_cpt_af_flt0_intr_handler(int irq, void *ptr)
+{
+ return cpt_af_flt_intr_handler(CPT_AF_INT_VEC_FLT0, ptr);
+}
+
+static irqreturn_t rvu_cpt_af_flt1_intr_handler(int irq, void *ptr)
+{
+ return cpt_af_flt_intr_handler(CPT_AF_INT_VEC_FLT1, ptr);
+}
+
+static irqreturn_t rvu_cpt_af_flt2_intr_handler(int irq, void *ptr)
+{
+ return cpt_af_flt_intr_handler(CPT_10K_AF_INT_VEC_FLT2, ptr);
+}
+
static irqreturn_t rvu_cpt_af_rvu_intr_handler(int irq, void *ptr)
{
struct rvu_block *block = ptr;
@@ -119,8 +153,10 @@ static void cpt_10k_unregister_interrupts(struct rvu_block *block, int off)
int i;
/* Disable all CPT AF interrupts */
- for (i = 0; i < CPT_10K_AF_INT_VEC_RVU; i++)
- rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(i), 0x1);
+ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(0), ~0ULL);
+ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(1), ~0ULL);
+ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(2), 0xFFFF);
+
rvu_write64(rvu, blkaddr, CPT_AF_RVU_INT_ENA_W1C, 0x1);
rvu_write64(rvu, blkaddr, CPT_AF_RAS_INT_ENA_W1C, 0x1);
@@ -151,7 +187,7 @@ static void cpt_unregister_interrupts(struct rvu *rvu, int blkaddr)
/* Disable all CPT AF interrupts */
for (i = 0; i < CPT_AF_INT_VEC_RVU; i++)
- rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(i), 0x1);
+ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(i), ~0ULL);
rvu_write64(rvu, blkaddr, CPT_AF_RVU_INT_ENA_W1C, 0x1);
rvu_write64(rvu, blkaddr, CPT_AF_RAS_INT_ENA_W1C, 0x1);
@@ -172,16 +208,31 @@ static int cpt_10k_register_interrupts(struct rvu_block *block, int off)
{
struct rvu *rvu = block->rvu;
int blkaddr = block->addr;
+ irq_handler_t flt_fn;
int i, ret;
for (i = CPT_10K_AF_INT_VEC_FLT0; i < CPT_10K_AF_INT_VEC_RVU; i++) {
sprintf(&rvu->irq_name[(off + i) * NAME_SIZE], "CPTAF FLT%d", i);
+
+ switch (i) {
+ case CPT_10K_AF_INT_VEC_FLT0:
+ flt_fn = rvu_cpt_af_flt0_intr_handler;
+ break;
+ case CPT_10K_AF_INT_VEC_FLT1:
+ flt_fn = rvu_cpt_af_flt1_intr_handler;
+ break;
+ case CPT_10K_AF_INT_VEC_FLT2:
+ flt_fn = rvu_cpt_af_flt2_intr_handler;
+ break;
+ }
ret = rvu_cpt_do_register_interrupt(block, off + i,
- rvu_cpt_af_flt_intr_handler,
- &rvu->irq_name[(off + i) * NAME_SIZE]);
+ flt_fn, &rvu->irq_name[(off + i) * NAME_SIZE]);
if (ret)
goto err;
- rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), 0x1);
+ if (i == CPT_10K_AF_INT_VEC_FLT2)
+ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), 0xFFFF);
+ else
+ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), ~0ULL);
}
ret = rvu_cpt_do_register_interrupt(block, off + CPT_10K_AF_INT_VEC_RVU,
@@ -208,8 +259,8 @@ static int cpt_register_interrupts(struct rvu *rvu, int blkaddr)
{
struct rvu_hwinfo *hw = rvu->hw;
struct rvu_block *block;
+ irq_handler_t flt_fn;
int i, offs, ret = 0;
- char irq_name[16];
if (!is_block_implemented(rvu->hw, blkaddr))
return 0;
@@ -226,13 +277,20 @@ static int cpt_register_interrupts(struct rvu *rvu, int blkaddr)
return cpt_10k_register_interrupts(block, offs);
for (i = CPT_AF_INT_VEC_FLT0; i < CPT_AF_INT_VEC_RVU; i++) {
- snprintf(irq_name, sizeof(irq_name), "CPTAF FLT%d", i);
+ sprintf(&rvu->irq_name[(offs + i) * NAME_SIZE], "CPTAF FLT%d", i);
+ switch (i) {
+ case CPT_AF_INT_VEC_FLT0:
+ flt_fn = rvu_cpt_af_flt0_intr_handler;
+ break;
+ case CPT_AF_INT_VEC_FLT1:
+ flt_fn = rvu_cpt_af_flt1_intr_handler;
+ break;
+ }
ret = rvu_cpt_do_register_interrupt(block, offs + i,
- rvu_cpt_af_flt_intr_handler,
- irq_name);
+ flt_fn, &rvu->irq_name[(offs + i) * NAME_SIZE]);
if (ret)
goto err;
- rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), 0x1);
+ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), ~0ULL);
}
ret = rvu_cpt_do_register_interrupt(block, offs + CPT_AF_INT_VEC_RVU,
@@ -290,7 +348,7 @@ static int get_cpt_pf_num(struct rvu *rvu)
static bool is_cpt_pf(struct rvu *rvu, u16 pcifunc)
{
- int cpt_pf_num = get_cpt_pf_num(rvu);
+ int cpt_pf_num = rvu->cpt_pf_num;
if (rvu_get_pf(pcifunc) != cpt_pf_num)
return false;
@@ -302,7 +360,7 @@ static bool is_cpt_pf(struct rvu *rvu, u16 pcifunc)
static bool is_cpt_vf(struct rvu *rvu, u16 pcifunc)
{
- int cpt_pf_num = get_cpt_pf_num(rvu);
+ int cpt_pf_num = rvu->cpt_pf_num;
if (rvu_get_pf(pcifunc) != cpt_pf_num)
return false;
@@ -371,8 +429,12 @@ int rvu_mbox_handler_cpt_lf_alloc(struct rvu *rvu,
/* Set CPT LF group and priority */
val = (u64)req->eng_grpmsk << 48 | 1;
- if (!is_rvu_otx2(rvu))
- val |= (CPT_CTX_ILEN << 17);
+ if (!is_rvu_otx2(rvu)) {
+ if (req->ctx_ilen_valid)
+ val |= (req->ctx_ilen << 17);
+ else
+ val |= (CPT_CTX_ILEN << 17);
+ }
rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), val);
@@ -762,10 +824,21 @@ int rvu_mbox_handler_cpt_sts(struct rvu *rvu, struct cpt_sts_req *req,
#define RXC_ZOMBIE_COUNT GENMASK_ULL(60, 48)
static void cpt_rxc_time_cfg(struct rvu *rvu, struct cpt_rxc_time_cfg_req *req,
- int blkaddr)
+ int blkaddr, struct cpt_rxc_time_cfg_req *save)
{
u64 dfrg_reg;
+ if (save) {
+ /* Save older config */
+ dfrg_reg = rvu_read64(rvu, blkaddr, CPT_AF_RXC_DFRG);
+ save->zombie_thres = FIELD_GET(RXC_ZOMBIE_THRES, dfrg_reg);
+ save->zombie_limit = FIELD_GET(RXC_ZOMBIE_LIMIT, dfrg_reg);
+ save->active_thres = FIELD_GET(RXC_ACTIVE_THRES, dfrg_reg);
+ save->active_limit = FIELD_GET(RXC_ACTIVE_LIMIT, dfrg_reg);
+
+ save->step = rvu_read64(rvu, blkaddr, CPT_AF_RXC_TIME_CFG);
+ }
+
dfrg_reg = FIELD_PREP(RXC_ZOMBIE_THRES, req->zombie_thres);
dfrg_reg |= FIELD_PREP(RXC_ZOMBIE_LIMIT, req->zombie_limit);
dfrg_reg |= FIELD_PREP(RXC_ACTIVE_THRES, req->active_thres);
@@ -790,7 +863,7 @@ int rvu_mbox_handler_cpt_rxc_time_cfg(struct rvu *rvu,
!is_cpt_vf(rvu, req->hdr.pcifunc))
return CPT_AF_ERR_ACCESS_DENIED;
- cpt_rxc_time_cfg(rvu, req, blkaddr);
+ cpt_rxc_time_cfg(rvu, req, blkaddr, NULL);
return 0;
}
@@ -801,9 +874,67 @@ int rvu_mbox_handler_cpt_ctx_cache_sync(struct rvu *rvu, struct msg_req *req,
return rvu_cpt_ctx_flush(rvu, req->hdr.pcifunc);
}
+int rvu_mbox_handler_cpt_lf_reset(struct rvu *rvu, struct cpt_lf_rst_req *req,
+ struct msg_rsp *rsp)
+{
+ u16 pcifunc = req->hdr.pcifunc;
+ struct rvu_block *block;
+ int cptlf, blkaddr, ret;
+ u16 actual_slot;
+ u64 ctl, ctl2;
+
+ blkaddr = rvu_get_blkaddr_from_slot(rvu, BLKTYPE_CPT, pcifunc,
+ req->slot, &actual_slot);
+ if (blkaddr < 0)
+ return CPT_AF_ERR_LF_INVALID;
+
+ block = &rvu->hw->block[blkaddr];
+
+ cptlf = rvu_get_lf(rvu, block, pcifunc, actual_slot);
+ if (cptlf < 0)
+ return CPT_AF_ERR_LF_INVALID;
+ ctl = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf));
+ ctl2 = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL2(cptlf));
+
+ ret = rvu_lf_reset(rvu, block, cptlf);
+ if (ret)
+ dev_err(rvu->dev, "Failed to reset blkaddr %d LF%d\n",
+ block->addr, cptlf);
+
+ rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), ctl);
+ rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL2(cptlf), ctl2);
+
+ return 0;
+}
+
+int rvu_mbox_handler_cpt_flt_eng_info(struct rvu *rvu, struct cpt_flt_eng_info_req *req,
+ struct cpt_flt_eng_info_rsp *rsp)
+{
+ struct rvu_block *block;
+ unsigned long flags;
+ int blkaddr, vec;
+
+ blkaddr = validate_and_get_cpt_blkaddr(req->blkaddr);
+ if (blkaddr < 0)
+ return blkaddr;
+
+ block = &rvu->hw->block[blkaddr];
+ for (vec = 0; vec < CPT_10K_AF_INT_VEC_RVU; vec++) {
+ spin_lock_irqsave(&rvu->cpt_intr_lock, flags);
+ rsp->flt_eng_map[vec] = block->cpt_flt_eng_map[vec];
+ rsp->rcvrd_eng_map[vec] = block->cpt_rcvrd_eng_map[vec];
+ if (req->reset) {
+ block->cpt_flt_eng_map[vec] = 0x0;
+ block->cpt_rcvrd_eng_map[vec] = 0x0;
+ }
+ spin_unlock_irqrestore(&rvu->cpt_intr_lock, flags);
+ }
+ return 0;
+}
+
static void cpt_rxc_teardown(struct rvu *rvu, int blkaddr)
{
- struct cpt_rxc_time_cfg_req req;
+ struct cpt_rxc_time_cfg_req req, prev;
int timeout = 2000;
u64 reg;
@@ -819,7 +950,7 @@ static void cpt_rxc_teardown(struct rvu *rvu, int blkaddr)
req.active_thres = 1;
req.active_limit = 1;
- cpt_rxc_time_cfg(rvu, &req, blkaddr);
+ cpt_rxc_time_cfg(rvu, &req, blkaddr, &prev);
do {
reg = rvu_read64(rvu, blkaddr, CPT_AF_RXC_ACTIVE_STS);
@@ -845,70 +976,68 @@ static void cpt_rxc_teardown(struct rvu *rvu, int blkaddr)
if (timeout == 0)
dev_warn(rvu->dev, "Poll for RXC zombie count hits hard loop counter\n");
+
+ /* Restore config */
+ cpt_rxc_time_cfg(rvu, &prev, blkaddr, NULL);
}
-#define INPROG_INFLIGHT(reg) ((reg) & 0x1FF)
-#define INPROG_GRB_PARTIAL(reg) ((reg) & BIT_ULL(31))
-#define INPROG_GRB(reg) (((reg) >> 32) & 0xFF)
-#define INPROG_GWB(reg) (((reg) >> 40) & 0xFF)
+#define INFLIGHT GENMASK_ULL(8, 0)
+#define GRB_CNT GENMASK_ULL(39, 32)
+#define GWB_CNT GENMASK_ULL(47, 40)
+#define XQ_XOR GENMASK_ULL(63, 63)
+#define DQPTR GENMASK_ULL(19, 0)
+#define NQPTR GENMASK_ULL(51, 32)
static void cpt_lf_disable_iqueue(struct rvu *rvu, int blkaddr, int slot)
{
- int i = 0, hard_lp_ctr = 100000;
- u64 inprog, grp_ptr;
- u16 nq_ptr, dq_ptr;
+ int timeout = 1000000;
+ u64 inprog, inst_ptr;
+ u64 qsize, pending;
+ int i = 0;
/* Disable instructions enqueuing */
rvu_write64(rvu, blkaddr, CPT_AF_BAR2_ALIASX(slot, CPT_LF_CTL), 0x0);
- /* Disable executions in the LF's queue */
inprog = rvu_read64(rvu, blkaddr,
CPT_AF_BAR2_ALIASX(slot, CPT_LF_INPROG));
- inprog &= ~BIT_ULL(16);
+ inprog |= BIT_ULL(16);
rvu_write64(rvu, blkaddr,
CPT_AF_BAR2_ALIASX(slot, CPT_LF_INPROG), inprog);
- /* Wait for CPT queue to become execution-quiescent */
+ qsize = rvu_read64(rvu, blkaddr,
+ CPT_AF_BAR2_ALIASX(slot, CPT_LF_Q_SIZE)) & 0x7FFF;
do {
- inprog = rvu_read64(rvu, blkaddr,
- CPT_AF_BAR2_ALIASX(slot, CPT_LF_INPROG));
- if (INPROG_GRB_PARTIAL(inprog)) {
- i = 0;
- hard_lp_ctr--;
- } else {
- i++;
- }
-
- grp_ptr = rvu_read64(rvu, blkaddr,
- CPT_AF_BAR2_ALIASX(slot,
- CPT_LF_Q_GRP_PTR));
- nq_ptr = (grp_ptr >> 32) & 0x7FFF;
- dq_ptr = grp_ptr & 0x7FFF;
-
- } while (hard_lp_ctr && (i < 10) && (nq_ptr != dq_ptr));
+ inst_ptr = rvu_read64(rvu, blkaddr,
+ CPT_AF_BAR2_ALIASX(slot, CPT_LF_Q_INST_PTR));
+ pending = (FIELD_GET(XQ_XOR, inst_ptr) * qsize * 40) +
+ FIELD_GET(NQPTR, inst_ptr) -
+ FIELD_GET(DQPTR, inst_ptr);
+ udelay(1);
+ timeout--;
+ } while ((pending != 0) && (timeout != 0));
- if (hard_lp_ctr == 0)
- dev_warn(rvu->dev, "CPT FLR hits hard loop counter\n");
+ if (timeout == 0)
+ dev_warn(rvu->dev, "TIMEOUT: CPT poll on pending instructions\n");
- i = 0;
- hard_lp_ctr = 100000;
+ timeout = 1000000;
+ /* Wait for CPT queue to become execution-quiescent */
do {
inprog = rvu_read64(rvu, blkaddr,
CPT_AF_BAR2_ALIASX(slot, CPT_LF_INPROG));
- if ((INPROG_INFLIGHT(inprog) == 0) &&
- (INPROG_GWB(inprog) < 40) &&
- ((INPROG_GRB(inprog) == 0) ||
- (INPROG_GRB((inprog)) == 40))) {
+ if ((FIELD_GET(INFLIGHT, inprog) == 0) &&
+ (FIELD_GET(GRB_CNT, inprog) == 0)) {
i++;
} else {
i = 0;
- hard_lp_ctr--;
+ timeout--;
}
- } while (hard_lp_ctr && (i < 10));
+ } while ((timeout != 0) && (i < 10));
- if (hard_lp_ctr == 0)
- dev_warn(rvu->dev, "CPT FLR hits hard loop counter\n");
+ if (timeout == 0)
+ dev_warn(rvu->dev, "TIMEOUT: CPT poll on inflight count\n");
+ /* Wait for 2 us to flush all queue writes to memory */
+ udelay(2);
}
int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int slot)
@@ -918,18 +1047,15 @@ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int s
if (is_cpt_pf(rvu, pcifunc) || is_cpt_vf(rvu, pcifunc))
cpt_rxc_teardown(rvu, blkaddr);
+ mutex_lock(&rvu->alias_lock);
/* Enable BAR2 ALIAS for this pcifunc. */
reg = BIT_ULL(16) | pcifunc;
- rvu_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, reg);
+ rvu_bar2_sel_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, reg);
cpt_lf_disable_iqueue(rvu, blkaddr, slot);
- /* Set group drop to help clear out hardware */
- reg = rvu_read64(rvu, blkaddr, CPT_AF_BAR2_ALIASX(slot, CPT_LF_INPROG));
- reg |= BIT_ULL(17);
- rvu_write64(rvu, blkaddr, CPT_AF_BAR2_ALIASX(slot, CPT_LF_INPROG), reg);
-
- rvu_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, 0);
+ rvu_bar2_sel_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, 0);
+ mutex_unlock(&rvu->alias_lock);
return 0;
}
@@ -940,7 +1066,7 @@ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int s
static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr,
int nix_blkaddr)
{
- int cpt_pf_num = get_cpt_pf_num(rvu);
+ int cpt_pf_num = rvu->cpt_pf_num;
struct cpt_inst_lmtst_req *req;
dma_addr_t res_daddr;
int timeout = 3000;
@@ -1064,7 +1190,7 @@ int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc)
/* Enable BAR2 ALIAS for this pcifunc. */
reg = BIT_ULL(16) | pcifunc;
- rvu_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, reg);
+ rvu_bar2_sel_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, reg);
for (i = 0; i < max_ctx_entries; i++) {
cam_data = rvu_read64(rvu, blkaddr, CPT_AF_CTX_CAM_DATA(i));
@@ -1077,10 +1203,19 @@ int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc)
reg);
}
}
- rvu_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, 0);
+ rvu_bar2_sel_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, 0);
unlock:
mutex_unlock(&rvu->rsrc_lock);
return 0;
}
+
+int rvu_cpt_init(struct rvu *rvu)
+{
+ /* Retrieve CPT PF number */
+ rvu->cpt_pf_num = get_cpt_pf_num(rvu);
+ spin_lock_init(&rvu->cpt_intr_lock);
+
+ return 0;
+}
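The rewritten iqueue drain above computes the number of still-queued instructions directly from CPT_LF_Q_INST_PTR: enqueue and dequeue pointers plus a wrap bit, where a set XQ_XOR means the enqueue side has lapped the dequeue side by one full queue length (per the arithmetic, qsize counts blocks of 40 instructions). A compilable miniature of that arithmetic, using userspace stand-ins for the kernel's GENMASK_ULL/FIELD_GET helpers and a made-up register snapshot:

    #include <stdint.h>
    #include <stdio.h>

    #define GENMASK_ULL(h, l) ((~0ULL << (l)) & (~0ULL >> (63 - (h))))
    #define FIELD_GET(mask, val) (((val) & (mask)) / ((mask) & -(mask)))

    #define XQ_XOR GENMASK_ULL(63, 63)
    #define DQPTR  GENMASK_ULL(19, 0)
    #define NQPTR  GENMASK_ULL(51, 32)

    static uint64_t pending(uint64_t inst_ptr, uint64_t qsize)
    {
            /* add one full queue length when the enqueue pointer wrapped */
            return FIELD_GET(XQ_XOR, inst_ptr) * qsize * 40 +
                   FIELD_GET(NQPTR, inst_ptr) - FIELD_GET(DQPTR, inst_ptr);
    }

    int main(void)
    {
            uint64_t ip = (5ULL << 32) | 2;   /* nq = 5, dq = 2, no wrap */
            printf("%llu\n", (unsigned long long)pending(ip, 64)); /* 3 */
            return 0;
    }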
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 6b8747ebc08c..26e639e57dae 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -2058,6 +2058,13 @@ static int nix_smq_flush(struct rvu *rvu, int blkaddr,
int err, restore_tx_en = 0;
u64 cfg;
+ if (!is_rvu_otx2(rvu)) {
+ /* Skip SMQ flush if pkt count is zero */
+ cfg = rvu_read64(rvu, blkaddr, NIX_AF_MDQX_IN_MD_COUNT(smq));
+ if (!cfg)
+ return 0;
+ }
+
/* enable cgx tx if disabled */
if (is_pf_cgxmapped(rvu, pf)) {
rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
@@ -4309,6 +4316,9 @@ static int rvu_nix_block_init(struct rvu *rvu, struct nix_hw *nix_hw)
rvu_write64(rvu, blkaddr, NIX_AF_SEB_CFG, cfg);
+ if (!is_rvu_otx2(rvu))
+ rvu_nix_block_cn10k_init(rvu, nix_hw);
+
if (is_block_implemented(hw, blkaddr)) {
err = nix_setup_txschq(rvu, nix_hw, blkaddr);
if (err)
@@ -4731,6 +4741,10 @@ int rvu_mbox_handler_nix_lso_format_cfg(struct rvu *rvu,
#define CPT_INST_QSEL_PF_FUNC GENMASK_ULL(23, 8)
#define CPT_INST_QSEL_SLOT GENMASK_ULL(7, 0)
+#define CPT_INST_CREDIT_TH GENMASK_ULL(53, 32)
+#define CPT_INST_CREDIT_BPID GENMASK_ULL(30, 22)
+#define CPT_INST_CREDIT_CNT GENMASK_ULL(21, 0)
+
static void nix_inline_ipsec_cfg(struct rvu *rvu, struct nix_inline_ipsec_cfg *req,
int blkaddr)
{
@@ -4767,14 +4781,23 @@ static void nix_inline_ipsec_cfg(struct rvu *rvu, struct nix_inline_ipsec_cfg *r
val);
/* Set CPT credit */
- rvu_write64(rvu, blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx),
- req->cpt_credit);
+ val = rvu_read64(rvu, blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx));
+ if ((val & 0x3FFFFF) != 0x3FFFFF)
+ rvu_write64(rvu, blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx),
+ 0x3FFFFF - val);
+
+ val = FIELD_PREP(CPT_INST_CREDIT_CNT, req->cpt_credit);
+ val |= FIELD_PREP(CPT_INST_CREDIT_BPID, req->bpid);
+ val |= FIELD_PREP(CPT_INST_CREDIT_TH, req->credit_th);
+ rvu_write64(rvu, blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx), val);
} else {
rvu_write64(rvu, blkaddr, NIX_AF_RX_IPSEC_GEN_CFG, 0x0);
rvu_write64(rvu, blkaddr, NIX_AF_RX_CPTX_INST_QSEL(cpt_idx),
0x0);
- rvu_write64(rvu, blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx),
- 0x3FFFFF);
+ val = rvu_read64(rvu, blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx));
+ if ((val & 0x3FFFFF) != 0x3FFFFF)
+ rvu_write64(rvu, blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx),
+ 0x3FFFFF - val);
}
}
@@ -4792,6 +4815,30 @@ int rvu_mbox_handler_nix_inline_ipsec_cfg(struct rvu *rvu,
return 0;
}
+int rvu_mbox_handler_nix_read_inline_ipsec_cfg(struct rvu *rvu,
+ struct msg_req *req,
+ struct nix_inline_ipsec_cfg *rsp)
+
+{
+ u64 val;
+
+ if (!is_block_implemented(rvu->hw, BLKADDR_CPT0))
+ return 0;
+
+ val = rvu_read64(rvu, BLKADDR_NIX0, NIX_AF_RX_IPSEC_GEN_CFG);
+ rsp->gen_cfg.egrp = FIELD_GET(IPSEC_GEN_CFG_EGRP, val);
+ rsp->gen_cfg.opcode = FIELD_GET(IPSEC_GEN_CFG_OPCODE, val);
+ rsp->gen_cfg.param1 = FIELD_GET(IPSEC_GEN_CFG_PARAM1, val);
+ rsp->gen_cfg.param2 = FIELD_GET(IPSEC_GEN_CFG_PARAM2, val);
+
+ val = rvu_read64(rvu, BLKADDR_NIX0, NIX_AF_RX_CPTX_CREDIT(0));
+ rsp->cpt_credit = FIELD_GET(CPT_INST_CREDIT_CNT, val);
+ rsp->credit_th = FIELD_GET(CPT_INST_CREDIT_TH, val);
+ rsp->bpid = FIELD_GET(CPT_INST_CREDIT_BPID, val);
+
+ return 0;
+}
+
int rvu_mbox_handler_nix_inline_ipsec_lf_cfg(struct rvu *rvu,
struct nix_inline_ipsec_lf_cfg *req,
struct msg_rsp *rsp)
@@ -4835,6 +4882,7 @@ int rvu_mbox_handler_nix_inline_ipsec_lf_cfg(struct rvu *rvu,
return 0;
}
+
void rvu_nix_reset_mac(struct rvu_pfvf *pfvf, int pcifunc)
{
bool from_vf = !!(pcifunc & RVU_PFVF_FUNC_MASK);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
index f69102d20c90..20ebb9c95c73 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
@@ -200,10 +200,8 @@ void npc_config_secret_key(struct rvu *rvu, int blkaddr)
struct rvu_hwinfo *hw = rvu->hw;
u8 intf;
- if (!hwcap->npc_hash_extract) {
- dev_info(rvu->dev, "HW does not support secret key configuration\n");
+ if (!hwcap->npc_hash_extract)
return;
- }
for (intf = 0; intf < hw->npc_intfs; intf++) {
rvu_write64(rvu, blkaddr, NPC_AF_INTFX_SECRET_KEY0(intf),
@@ -221,10 +219,8 @@ void npc_program_mkex_hash(struct rvu *rvu, int blkaddr)
struct rvu_hwinfo *hw = rvu->hw;
u8 intf;
- if (!hwcap->npc_hash_extract) {
- dev_dbg(rvu->dev, "Field hash extract feature is not supported\n");
+ if (!hwcap->npc_hash_extract)
return;
- }
for (intf = 0; intf < hw->npc_intfs; intf++) {
npc_program_mkex_hash_rx(rvu, blkaddr, intf);
@@ -1853,19 +1849,13 @@ int rvu_npc_exact_init(struct rvu *rvu)
/* Check exact match feature is supported */
npc_const3 = rvu_read64(rvu, blkaddr, NPC_AF_CONST3);
- if (!(npc_const3 & BIT_ULL(62))) {
- dev_info(rvu->dev, "%s: No support for exact match support\n",
- __func__);
+ if (!(npc_const3 & BIT_ULL(62)))
return 0;
- }
/* Check if kex profile has enabled EXACT match nibble */
cfg = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_KEX_CFG(NIX_INTF_RX));
- if (!(cfg & NPC_EXACT_NIBBLE_HIT)) {
- dev_info(rvu->dev, "%s: NPC exact match nibble not enabled in KEX profile\n",
- __func__);
+ if (!(cfg & NPC_EXACT_NIBBLE_HIT))
return 0;
- }
/* Set capability to true */
rvu->hw->cap.npc_exact_match_enabled = true;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
index 0e0d536645ac..1729b22580ce 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
@@ -189,6 +189,7 @@
#define NIX_AF_RX_CFG (0x00D0)
#define NIX_AF_AVG_DELAY (0x00E0)
#define NIX_AF_CINT_DELAY (0x00F0)
+#define NIX_AF_VWQE_TIMER (0x00F8)
#define NIX_AF_RX_MCAST_BASE (0x0100)
#define NIX_AF_RX_MCAST_CFG (0x0110)
#define NIX_AF_RX_MCAST_BUF_BASE (0x0120)
@@ -426,6 +427,7 @@
#define NIX_AF_RX_NPC_MIRROR_DROP (0x4730)
#define NIX_AF_RX_ACTIVE_CYCLES_PCX(a) (0x4800 | (a) << 16)
#define NIX_AF_LINKX_CFG(a) (0x4010 | (a) << 17)
+#define NIX_AF_MDQX_IN_MD_COUNT(a) (0x14e0 | (a) << 16)
#define NIX_PRIV_AF_INT_CFG (0x8000000)
#define NIX_PRIV_LFX_CFG (0x8000010)
@@ -545,6 +547,8 @@
#define CPT_LF_CTL 0x10
#define CPT_LF_INPROG 0x40
+#define CPT_LF_Q_SIZE 0x100
+#define CPT_LF_Q_INST_PTR 0x110
#define CPT_LF_Q_GRP_PTR 0x120
#define CPT_LF_CTX_FLUSH 0x510
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index c1ea60bc2630..179433d0a54a 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -2512,10 +2512,13 @@ static int otx2_xdp_setup(struct otx2_nic *pf, struct bpf_prog *prog)
/* Network stack and XDP share the same rx queues.
* Use separate tx queues for XDP and network stack.
*/
- if (pf->xdp_prog)
+ if (pf->xdp_prog) {
pf->hw.xdp_queues = pf->hw.rx_queues;
- else
+ xdp_features_set_redirect_target(dev, false);
+ } else {
pf->hw.xdp_queues = 0;
+ xdp_features_clear_redirect_target(dev);
+ }
pf->hw.tot_tx_queues += pf->hw.xdp_queues;
@@ -2878,6 +2881,7 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)
netdev->watchdog_timeo = OTX2_TX_TIMEOUT;
netdev->netdev_ops = &otx2_netdev_ops;
+ netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;
netdev->min_mtu = OTX2_MIN_MTU;
netdev->max_mtu = otx2_get_max_mtu(pf);
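otx2 shows the dynamic half of the feature flags: whether the device can serve as a redirect target (NETDEV_XDP_ACT_NDO_XMIT) depends on the XDP tx queues that only exist while a program is attached, so the bit is toggled at attach/detach time rather than fixed at probe. The helpers used above, as added to include/net/xdp.h this cycle:

    void xdp_features_set_redirect_target(struct net_device *dev,
                                          bool support_sg);
    void xdp_features_clear_redirect_target(struct net_device *dev);

otx2 passes support_sg as false, i.e. it advertises no multi-buffer support on its ndo_xdp_xmit path.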
diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
index cf456d62677f..87fff539d39d 100644
--- a/drivers/net/ethernet/marvell/pxa168_eth.c
+++ b/drivers/net/ethernet/marvell/pxa168_eth.c
@@ -965,7 +965,7 @@ static int pxa168_init_phy(struct net_device *dev)
if (dev->phydev)
return 0;
- phy = mdiobus_scan(pep->smi_bus, pep->phy_addr);
+ phy = mdiobus_scan_c22(pep->smi_bus, pep->phy_addr);
if (IS_ERR(phy))
return PTR_ERR(phy);
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index e3123723522e..14be6ea51b88 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -51,6 +51,7 @@ static const struct mtk_reg_map mtk_reg_map = {
.delay_irq = 0x0a0c,
.irq_status = 0x0a20,
.irq_mask = 0x0a28,
+ .adma_rx_dbg0 = 0x0a38,
.int_grp = 0x0a50,
},
.qdma = {
@@ -82,6 +83,8 @@ static const struct mtk_reg_map mtk_reg_map = {
[0] = 0x2800,
[1] = 0x2c00,
},
+ .pse_iq_sta = 0x0110,
+ .pse_oq_sta = 0x0118,
};
static const struct mtk_reg_map mt7628_reg_map = {
@@ -112,6 +115,7 @@ static const struct mtk_reg_map mt7986_reg_map = {
.delay_irq = 0x620c,
.irq_status = 0x6220,
.irq_mask = 0x6228,
+ .adma_rx_dbg0 = 0x6238,
.int_grp = 0x6250,
},
.qdma = {
@@ -143,6 +147,8 @@ static const struct mtk_reg_map mt7986_reg_map = {
[0] = 0x4800,
[1] = 0x4c00,
},
+ .pse_iq_sta = 0x0180,
+ .pse_oq_sta = 0x01a0,
};
/* strings used by ethtool */
@@ -215,8 +221,8 @@ static int mtk_mdio_busy_wait(struct mtk_eth *eth)
return -ETIMEDOUT;
}
-static int _mtk_mdio_write(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg,
- u32 write_data)
+static int _mtk_mdio_write_c22(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg,
+ u32 write_data)
{
int ret;
@@ -224,35 +230,13 @@ static int _mtk_mdio_write(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg,
if (ret < 0)
return ret;
- if (phy_reg & MII_ADDR_C45) {
- mtk_w32(eth, PHY_IAC_ACCESS |
- PHY_IAC_START_C45 |
- PHY_IAC_CMD_C45_ADDR |
- PHY_IAC_REG(mdiobus_c45_devad(phy_reg)) |
- PHY_IAC_ADDR(phy_addr) |
- PHY_IAC_DATA(mdiobus_c45_regad(phy_reg)),
- MTK_PHY_IAC);
-
- ret = mtk_mdio_busy_wait(eth);
- if (ret < 0)
- return ret;
-
- mtk_w32(eth, PHY_IAC_ACCESS |
- PHY_IAC_START_C45 |
- PHY_IAC_CMD_WRITE |
- PHY_IAC_REG(mdiobus_c45_devad(phy_reg)) |
- PHY_IAC_ADDR(phy_addr) |
- PHY_IAC_DATA(write_data),
- MTK_PHY_IAC);
- } else {
- mtk_w32(eth, PHY_IAC_ACCESS |
- PHY_IAC_START_C22 |
- PHY_IAC_CMD_WRITE |
- PHY_IAC_REG(phy_reg) |
- PHY_IAC_ADDR(phy_addr) |
- PHY_IAC_DATA(write_data),
- MTK_PHY_IAC);
- }
+ mtk_w32(eth, PHY_IAC_ACCESS |
+ PHY_IAC_START_C22 |
+ PHY_IAC_CMD_WRITE |
+ PHY_IAC_REG(phy_reg) |
+ PHY_IAC_ADDR(phy_addr) |
+ PHY_IAC_DATA(write_data),
+ MTK_PHY_IAC);
ret = mtk_mdio_busy_wait(eth);
if (ret < 0)
@@ -261,7 +245,8 @@ static int _mtk_mdio_write(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg,
return 0;
}
-static int _mtk_mdio_read(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg)
+static int _mtk_mdio_write_c45(struct mtk_eth *eth, u32 phy_addr,
+ u32 devad, u32 phy_reg, u32 write_data)
{
int ret;
@@ -269,33 +254,47 @@ static int _mtk_mdio_read(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg)
if (ret < 0)
return ret;
- if (phy_reg & MII_ADDR_C45) {
- mtk_w32(eth, PHY_IAC_ACCESS |
- PHY_IAC_START_C45 |
- PHY_IAC_CMD_C45_ADDR |
- PHY_IAC_REG(mdiobus_c45_devad(phy_reg)) |
- PHY_IAC_ADDR(phy_addr) |
- PHY_IAC_DATA(mdiobus_c45_regad(phy_reg)),
- MTK_PHY_IAC);
-
- ret = mtk_mdio_busy_wait(eth);
- if (ret < 0)
- return ret;
-
- mtk_w32(eth, PHY_IAC_ACCESS |
- PHY_IAC_START_C45 |
- PHY_IAC_CMD_C45_READ |
- PHY_IAC_REG(mdiobus_c45_devad(phy_reg)) |
- PHY_IAC_ADDR(phy_addr),
- MTK_PHY_IAC);
- } else {
- mtk_w32(eth, PHY_IAC_ACCESS |
- PHY_IAC_START_C22 |
- PHY_IAC_CMD_C22_READ |
- PHY_IAC_REG(phy_reg) |
- PHY_IAC_ADDR(phy_addr),
- MTK_PHY_IAC);
- }
+ mtk_w32(eth, PHY_IAC_ACCESS |
+ PHY_IAC_START_C45 |
+ PHY_IAC_CMD_C45_ADDR |
+ PHY_IAC_REG(devad) |
+ PHY_IAC_ADDR(phy_addr) |
+ PHY_IAC_DATA(phy_reg),
+ MTK_PHY_IAC);
+
+ ret = mtk_mdio_busy_wait(eth);
+ if (ret < 0)
+ return ret;
+
+ mtk_w32(eth, PHY_IAC_ACCESS |
+ PHY_IAC_START_C45 |
+ PHY_IAC_CMD_WRITE |
+ PHY_IAC_REG(devad) |
+ PHY_IAC_ADDR(phy_addr) |
+ PHY_IAC_DATA(write_data),
+ MTK_PHY_IAC);
+
+ ret = mtk_mdio_busy_wait(eth);
+ if (ret < 0)
+ return ret;
+
+ return 0;
+}
+
+static int _mtk_mdio_read_c22(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg)
+{
+ int ret;
+
+ ret = mtk_mdio_busy_wait(eth);
+ if (ret < 0)
+ return ret;
+
+ mtk_w32(eth, PHY_IAC_ACCESS |
+ PHY_IAC_START_C22 |
+ PHY_IAC_CMD_C22_READ |
+ PHY_IAC_REG(phy_reg) |
+ PHY_IAC_ADDR(phy_addr),
+ MTK_PHY_IAC);
ret = mtk_mdio_busy_wait(eth);
if (ret < 0)
@@ -304,19 +303,70 @@ static int _mtk_mdio_read(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg)
return mtk_r32(eth, MTK_PHY_IAC) & PHY_IAC_DATA_MASK;
}
-static int mtk_mdio_write(struct mii_bus *bus, int phy_addr,
- int phy_reg, u16 val)
+static int _mtk_mdio_read_c45(struct mtk_eth *eth, u32 phy_addr,
+ u32 devad, u32 phy_reg)
+{
+ int ret;
+
+ ret = mtk_mdio_busy_wait(eth);
+ if (ret < 0)
+ return ret;
+
+ mtk_w32(eth, PHY_IAC_ACCESS |
+ PHY_IAC_START_C45 |
+ PHY_IAC_CMD_C45_ADDR |
+ PHY_IAC_REG(devad) |
+ PHY_IAC_ADDR(phy_addr) |
+ PHY_IAC_DATA(phy_reg),
+ MTK_PHY_IAC);
+
+ ret = mtk_mdio_busy_wait(eth);
+ if (ret < 0)
+ return ret;
+
+ mtk_w32(eth, PHY_IAC_ACCESS |
+ PHY_IAC_START_C45 |
+ PHY_IAC_CMD_C45_READ |
+ PHY_IAC_REG(devad) |
+ PHY_IAC_ADDR(phy_addr),
+ MTK_PHY_IAC);
+
+ ret = mtk_mdio_busy_wait(eth);
+ if (ret < 0)
+ return ret;
+
+ return mtk_r32(eth, MTK_PHY_IAC) & PHY_IAC_DATA_MASK;
+}
+
+static int mtk_mdio_write_c22(struct mii_bus *bus, int phy_addr,
+ int phy_reg, u16 val)
{
struct mtk_eth *eth = bus->priv;
- return _mtk_mdio_write(eth, phy_addr, phy_reg, val);
+ return _mtk_mdio_write_c22(eth, phy_addr, phy_reg, val);
}
-static int mtk_mdio_read(struct mii_bus *bus, int phy_addr, int phy_reg)
+static int mtk_mdio_write_c45(struct mii_bus *bus, int phy_addr,
+ int devad, int phy_reg, u16 val)
{
struct mtk_eth *eth = bus->priv;
- return _mtk_mdio_read(eth, phy_addr, phy_reg);
+ return _mtk_mdio_write_c45(eth, phy_addr, devad, phy_reg, val);
+}
+
+static int mtk_mdio_read_c22(struct mii_bus *bus, int phy_addr, int phy_reg)
+{
+ struct mtk_eth *eth = bus->priv;
+
+ return _mtk_mdio_read_c22(eth, phy_addr, phy_reg);
+}
+
+static int mtk_mdio_read_c45(struct mii_bus *bus, int phy_addr, int devad,
+ int phy_reg)
+{
+ struct mtk_eth *eth = bus->priv;
+
+ return _mtk_mdio_read_c45(eth, phy_addr, devad, phy_reg);
}
static int mt7621_gmac0_rgmii_adjust(struct mtk_eth *eth,
@@ -760,9 +810,10 @@ static int mtk_mdio_init(struct mtk_eth *eth)
}
eth->mii_bus->name = "mdio";
- eth->mii_bus->read = mtk_mdio_read;
- eth->mii_bus->write = mtk_mdio_write;
- eth->mii_bus->probe_capabilities = MDIOBUS_C22_C45;
+ eth->mii_bus->read = mtk_mdio_read_c22;
+ eth->mii_bus->write = mtk_mdio_write_c22;
+ eth->mii_bus->read_c45 = mtk_mdio_read_c45;
+ eth->mii_bus->write_c45 = mtk_mdio_write_c45;
eth->mii_bus->priv = eth;
eth->mii_bus->parent = eth->dev;
@@ -2988,14 +3039,29 @@ static void mtk_dma_free(struct mtk_eth *eth)
kfree(eth->scratch_head);
}
+static bool mtk_hw_reset_check(struct mtk_eth *eth)
+{
+ u32 val = mtk_r32(eth, MTK_INT_STATUS2);
+
+ return (val & MTK_FE_INT_FQ_EMPTY) || (val & MTK_FE_INT_RFIFO_UF) ||
+ (val & MTK_FE_INT_RFIFO_OV) || (val & MTK_FE_INT_TSO_FAIL) ||
+ (val & MTK_FE_INT_TSO_ALIGN) || (val & MTK_FE_INT_TSO_ILLEGAL);
+}
+
static void mtk_tx_timeout(struct net_device *dev, unsigned int txqueue)
{
struct mtk_mac *mac = netdev_priv(dev);
struct mtk_eth *eth = mac->hw;
+ if (test_bit(MTK_RESETTING, &eth->state))
+ return;
+
+ if (!mtk_hw_reset_check(eth))
+ return;
+
eth->netdev[mac->id]->stats.tx_errors++;
- netif_err(eth, tx_err, dev,
- "transmit timed out\n");
+ netif_err(eth, tx_err, dev, "transmit timed out\n");
+
schedule_work(&eth->pending_work);
}
@@ -3475,22 +3541,188 @@ static void mtk_set_mcr_max_rx(struct mtk_mac *mac, u32 val)
mtk_w32(mac->hw, mcr_new, MTK_MAC_MCR(mac->id));
}
-static int mtk_hw_init(struct mtk_eth *eth)
+static void mtk_hw_reset(struct mtk_eth *eth)
+{
+ u32 val;
+
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
+ regmap_write(eth->ethsys, ETHSYS_FE_RST_CHK_IDLE_EN, 0);
+ val = RSTCTRL_PPE0_V2;
+ } else {
+ val = RSTCTRL_PPE0;
+ }
+
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_RSTCTRL_PPE1))
+ val |= RSTCTRL_PPE1;
+
+ ethsys_reset(eth, RSTCTRL_ETH | RSTCTRL_FE | val);
+
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2))
+ regmap_write(eth->ethsys, ETHSYS_FE_RST_CHK_IDLE_EN,
+ 0x3ffffff);
+}
+
+static u32 mtk_hw_reset_read(struct mtk_eth *eth)
+{
+ u32 val;
+
+ regmap_read(eth->ethsys, ETHSYS_RSTCTRL, &val);
+ return val;
+}
+
+static void mtk_hw_warm_reset(struct mtk_eth *eth)
+{
+ u32 rst_mask, val;
+
+ regmap_update_bits(eth->ethsys, ETHSYS_RSTCTRL, RSTCTRL_FE,
+ RSTCTRL_FE);
+ if (readx_poll_timeout_atomic(mtk_hw_reset_read, eth, val,
+ val & RSTCTRL_FE, 1, 1000)) {
+ dev_err(eth->dev, "warm reset failed\n");
+ mtk_hw_reset(eth);
+ return;
+ }
+
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2))
+ rst_mask = RSTCTRL_ETH | RSTCTRL_PPE0_V2;
+ else
+ rst_mask = RSTCTRL_ETH | RSTCTRL_PPE0;
+
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_RSTCTRL_PPE1))
+ rst_mask |= RSTCTRL_PPE1;
+
+ regmap_update_bits(eth->ethsys, ETHSYS_RSTCTRL, rst_mask, rst_mask);
+
+ udelay(1);
+ val = mtk_hw_reset_read(eth);
+ if (!(val & rst_mask))
+ dev_err(eth->dev, "warm reset stage0 failed %08x (%08x)\n",
+ val, rst_mask);
+
+ rst_mask |= RSTCTRL_FE;
+ regmap_update_bits(eth->ethsys, ETHSYS_RSTCTRL, rst_mask, ~rst_mask);
+
+ udelay(1);
+ val = mtk_hw_reset_read(eth);
+ if (val & rst_mask)
+ dev_err(eth->dev, "warm reset stage1 failed %08x (%08x)\n",
+ val, rst_mask);
+}
+
+static bool mtk_hw_check_dma_hang(struct mtk_eth *eth)
+{
+ const struct mtk_reg_map *reg_map = eth->soc->reg_map;
+ bool gmac1_tx, gmac2_tx, gdm1_tx, gdm2_tx;
+ bool oq_hang, cdm1_busy, adma_busy;
+ bool wtx_busy, cdm_full, oq_free;
+ u32 wdidx, val, gdm1_fc, gdm2_fc;
+ bool qfsm_hang, qfwd_hang;
+ bool ret = false;
+
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628))
+ return false;
+
+ /* WDMA sanity checks */
+ wdidx = mtk_r32(eth, reg_map->wdma_base[0] + 0xc);
+
+ val = mtk_r32(eth, reg_map->wdma_base[0] + 0x204);
+ wtx_busy = FIELD_GET(MTK_TX_DMA_BUSY, val);
+
+ val = mtk_r32(eth, reg_map->wdma_base[0] + 0x230);
+ cdm_full = !FIELD_GET(MTK_CDM_TXFIFO_RDY, val);
+
+ oq_free = (!(mtk_r32(eth, reg_map->pse_oq_sta) & GENMASK(24, 16)) &&
+ !(mtk_r32(eth, reg_map->pse_oq_sta + 0x4) & GENMASK(8, 0)) &&
+ !(mtk_r32(eth, reg_map->pse_oq_sta + 0x10) & GENMASK(24, 16)));
+
+ if (wdidx == eth->reset.wdidx && wtx_busy && cdm_full && oq_free) {
+ if (++eth->reset.wdma_hang_count > 2) {
+ eth->reset.wdma_hang_count = 0;
+ ret = true;
+ }
+ goto out;
+ }
+
+ /* QDMA sanity checks */
+ qfsm_hang = !!mtk_r32(eth, reg_map->qdma.qtx_cfg + 0x234);
+ qfwd_hang = !mtk_r32(eth, reg_map->qdma.qtx_cfg + 0x308);
+
+ gdm1_tx = FIELD_GET(GENMASK(31, 16), mtk_r32(eth, MTK_FE_GDM1_FSM)) > 0;
+ gdm2_tx = FIELD_GET(GENMASK(31, 16), mtk_r32(eth, MTK_FE_GDM2_FSM)) > 0;
+ gmac1_tx = FIELD_GET(GENMASK(31, 24), mtk_r32(eth, MTK_MAC_FSM(0))) != 1;
+ gmac2_tx = FIELD_GET(GENMASK(31, 24), mtk_r32(eth, MTK_MAC_FSM(1))) != 1;
+ gdm1_fc = mtk_r32(eth, reg_map->gdm1_cnt + 0x24);
+ gdm2_fc = mtk_r32(eth, reg_map->gdm1_cnt + 0x64);
+
+ if (qfsm_hang && qfwd_hang &&
+ ((gdm1_tx && gmac1_tx && gdm1_fc < 1) ||
+ (gdm2_tx && gmac2_tx && gdm2_fc < 1))) {
+ if (++eth->reset.qdma_hang_count > 2) {
+ eth->reset.qdma_hang_count = 0;
+ ret = true;
+ }
+ goto out;
+ }
+
+ /* ADMA sanity checks */
+ oq_hang = !!(mtk_r32(eth, reg_map->pse_oq_sta) & GENMASK(8, 0));
+ cdm1_busy = !!(mtk_r32(eth, MTK_FE_CDM1_FSM) & GENMASK(31, 16));
+ adma_busy = !(mtk_r32(eth, reg_map->pdma.adma_rx_dbg0) & GENMASK(4, 0)) &&
+ !(mtk_r32(eth, reg_map->pdma.adma_rx_dbg0) & BIT(6));
+
+ if (oq_hang && cdm1_busy && adma_busy) {
+ if (++eth->reset.adma_hang_count > 2) {
+ eth->reset.adma_hang_count = 0;
+ ret = true;
+ }
+ goto out;
+ }
+
+ eth->reset.wdma_hang_count = 0;
+ eth->reset.qdma_hang_count = 0;
+ eth->reset.adma_hang_count = 0;
+out:
+ eth->reset.wdidx = wdidx;
+
+ return ret;
+}
+
+static void mtk_hw_reset_monitor_work(struct work_struct *work)
+{
+ struct delayed_work *del_work = to_delayed_work(work);
+ struct mtk_eth *eth = container_of(del_work, struct mtk_eth,
+ reset.monitor_work);
+
+ if (test_bit(MTK_RESETTING, &eth->state))
+ goto out;
+
+ /* DMA stuck checks */
+ if (mtk_hw_check_dma_hang(eth))
+ schedule_work(&eth->pending_work);
+
+out:
+ schedule_delayed_work(&eth->reset.monitor_work,
+ MTK_DMA_MONITOR_TIMEOUT);
+}
+
+static int mtk_hw_init(struct mtk_eth *eth, bool reset)
{
u32 dma_mask = ETHSYS_DMA_AG_MAP_PDMA | ETHSYS_DMA_AG_MAP_QDMA |
ETHSYS_DMA_AG_MAP_PPE;
const struct mtk_reg_map *reg_map = eth->soc->reg_map;
int i, val, ret;
- if (test_and_set_bit(MTK_HW_INIT, &eth->state))
+ if (!reset && test_and_set_bit(MTK_HW_INIT, &eth->state))
return 0;
- pm_runtime_enable(eth->dev);
- pm_runtime_get_sync(eth->dev);
+ if (!reset) {
+ pm_runtime_enable(eth->dev);
+ pm_runtime_get_sync(eth->dev);
- ret = mtk_clk_enable(eth);
- if (ret)
- goto err_disable_pm;
+ ret = mtk_clk_enable(eth);
+ if (ret)
+ goto err_disable_pm;
+ }
if (eth->ethsys)
regmap_update_bits(eth->ethsys, ETHSYS_DMA_AG_MAP, dma_mask,
@@ -3514,22 +3746,14 @@ static int mtk_hw_init(struct mtk_eth *eth)
return 0;
}
- if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
- regmap_write(eth->ethsys, ETHSYS_FE_RST_CHK_IDLE_EN, 0);
- val = RSTCTRL_PPE0_V2;
- } else {
- val = RSTCTRL_PPE0;
- }
-
- if (MTK_HAS_CAPS(eth->soc->caps, MTK_RSTCTRL_PPE1))
- val |= RSTCTRL_PPE1;
+ msleep(100);
- ethsys_reset(eth, RSTCTRL_ETH | RSTCTRL_FE | val);
+ if (reset)
+ mtk_hw_warm_reset(eth);
+ else
+ mtk_hw_reset(eth);
if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
- regmap_write(eth->ethsys, ETHSYS_FE_RST_CHK_IDLE_EN,
- 0x3ffffff);
-
/* Set FE to PDMAv2 if necessary */
val = mtk_r32(eth, MTK_FE_GLO_MISC);
mtk_w32(eth, val | BIT(4), MTK_FE_GLO_MISC);
@@ -3631,8 +3855,10 @@ static int mtk_hw_init(struct mtk_eth *eth)
return 0;
err_disable_pm:
- pm_runtime_put_sync(eth->dev);
- pm_runtime_disable(eth->dev);
+ if (!reset) {
+ pm_runtime_put_sync(eth->dev);
+ pm_runtime_disable(eth->dev);
+ }
return ret;
}
@@ -3711,52 +3937,86 @@ static int mtk_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
return -EOPNOTSUPP;
}
+static void mtk_prepare_for_reset(struct mtk_eth *eth)
+{
+ u32 val;
+ int i;
+
+ /* disable FE P3 and P4 */
+ val = mtk_r32(eth, MTK_FE_GLO_CFG) | MTK_FE_LINK_DOWN_P3;
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_RSTCTRL_PPE1))
+ val |= MTK_FE_LINK_DOWN_P4;
+ mtk_w32(eth, val, MTK_FE_GLO_CFG);
+
+ /* adjust PPE configurations to prepare for reset */
+ for (i = 0; i < ARRAY_SIZE(eth->ppe); i++)
+ mtk_ppe_prepare_reset(eth->ppe[i]);
+
+ /* disable NETSYS interrupts */
+ mtk_w32(eth, 0, MTK_FE_INT_ENABLE);
+
+ /* force link down GMAC */
+ for (i = 0; i < 2; i++) {
+ val = mtk_r32(eth, MTK_MAC_MCR(i)) & ~MAC_MCR_FORCE_LINK;
+ mtk_w32(eth, val, MTK_MAC_MCR(i));
+ }
+}
+
static void mtk_pending_work(struct work_struct *work)
{
struct mtk_eth *eth = container_of(work, struct mtk_eth, pending_work);
- int err, i;
unsigned long restart = 0;
+ u32 val;
+ int i;
rtnl_lock();
-
- dev_dbg(eth->dev, "[%s][%d] reset\n", __func__, __LINE__);
set_bit(MTK_RESETTING, &eth->state);
+ mtk_prepare_for_reset(eth);
+ mtk_wed_fe_reset();
+ /* Run the reset preliminary configuration again to avoid any possible
+ * race during the FE reset, since the reset can run with the RTNL
+ * lock released.
+ */
+ mtk_prepare_for_reset(eth);
+
/* stop all devices to make sure that dma is properly shut down */
for (i = 0; i < MTK_MAC_COUNT; i++) {
- if (!eth->netdev[i])
+ if (!eth->netdev[i] || !netif_running(eth->netdev[i]))
continue;
+
mtk_stop(eth->netdev[i]);
__set_bit(i, &restart);
}
- dev_dbg(eth->dev, "[%s][%d] mtk_stop ends\n", __func__, __LINE__);
- /* restart underlying hardware such as power, clock, pin mux
- * and the connected phy
- */
- mtk_hw_deinit(eth);
+ usleep_range(15000, 16000);
if (eth->dev->pins)
pinctrl_select_state(eth->dev->pins->p,
eth->dev->pins->default_state);
- mtk_hw_init(eth);
+ mtk_hw_init(eth, true);
/* restart DMA and enable IRQs */
for (i = 0; i < MTK_MAC_COUNT; i++) {
if (!test_bit(i, &restart))
continue;
- err = mtk_open(eth->netdev[i]);
- if (err) {
+
+ if (mtk_open(eth->netdev[i])) {
netif_alert(eth, ifup, eth->netdev[i],
- "Driver up/down cycle failed, closing device.\n");
+ "Driver up/down cycle failed\n");
dev_close(eth->netdev[i]);
}
}
- dev_dbg(eth->dev, "[%s][%d] reset done\n", __func__, __LINE__);
+ /* enable FE P3 and P4 */
+ val = mtk_r32(eth, MTK_FE_GLO_CFG) & ~MTK_FE_LINK_DOWN_P3;
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_RSTCTRL_PPE1))
+ val &= ~MTK_FE_LINK_DOWN_P4;
+ mtk_w32(eth, val, MTK_FE_GLO_CFG);
clear_bit(MTK_RESETTING, &eth->state);
+ mtk_wed_fe_reset_complete();
+
rtnl_unlock();
}
@@ -3801,6 +4061,7 @@ static int mtk_cleanup(struct mtk_eth *eth)
mtk_unreg_dev(eth);
mtk_free_dev(eth);
cancel_work_sync(&eth->pending_work);
+ cancel_delayed_work_sync(&eth->reset.monitor_work);
return 0;
}
@@ -4190,6 +4451,12 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
register_netdevice_notifier(&mac->device_notifier);
}
+ if (mtk_page_pool_enabled(eth))
+ eth->netdev[id]->xdp_features = NETDEV_XDP_ACT_BASIC |
+ NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT |
+ NETDEV_XDP_ACT_NDO_XMIT_SG;
+
return 0;
free_netdev:
@@ -4255,6 +4522,7 @@ static int mtk_probe(struct platform_device *pdev)
eth->rx_dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE;
INIT_WORK(&eth->rx_dim.work, mtk_dim_rx);
+ INIT_DELAYED_WORK(&eth->reset.monitor_work, mtk_hw_reset_monitor_work);
eth->tx_dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE;
INIT_WORK(&eth->tx_dim.work, mtk_dim_tx);
@@ -4368,7 +4636,7 @@ static int mtk_probe(struct platform_device *pdev)
eth->msg_enable = netif_msg_init(mtk_msg_level, MTK_DEFAULT_MSG_ENABLE);
INIT_WORK(&eth->pending_work, mtk_pending_work);
- err = mtk_hw_init(eth);
+ err = mtk_hw_init(eth, false);
if (err)
goto err_wed_exit;
@@ -4457,6 +4725,8 @@ static int mtk_probe(struct platform_device *pdev)
netif_napi_add(&eth->dummy_dev, &eth->rx_napi, mtk_napi_rx);
platform_set_drvdata(pdev, eth);
+ schedule_delayed_work(&eth->reset.monitor_work,
+ MTK_DMA_MONITOR_TIMEOUT);
return 0;
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index 2d9186d32bc0..afc9d52e79bf 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -77,12 +77,24 @@
#define MTK_HW_LRO_REPLACE_DELTA 1000
#define MTK_HW_LRO_SDL_REMAIN_ROOM 1522
+/* Frame Engine Global Configuration */
+#define MTK_FE_GLO_CFG 0x00
+#define MTK_FE_LINK_DOWN_P3 BIT(11)
+#define MTK_FE_LINK_DOWN_P4 BIT(12)
+
/* Frame Engine Global Reset Register */
#define MTK_RST_GL 0x04
#define RST_GL_PSE BIT(0)
/* Frame Engine Interrupt Status Register */
#define MTK_INT_STATUS2 0x08
+#define MTK_FE_INT_ENABLE 0x0c
+#define MTK_FE_INT_FQ_EMPTY BIT(8)
+#define MTK_FE_INT_TSO_FAIL BIT(12)
+#define MTK_FE_INT_TSO_ILLEGAL BIT(13)
+#define MTK_FE_INT_TSO_ALIGN BIT(14)
+#define MTK_FE_INT_RFIFO_OV BIT(18)
+#define MTK_FE_INT_RFIFO_UF BIT(19)
#define MTK_GDM1_AF BIT(28)
#define MTK_GDM2_AF BIT(29)
@@ -272,6 +284,8 @@
#define MTK_RX_DONE_INT_V2 BIT(14)
+#define MTK_CDM_TXFIFO_RDY BIT(7)
+
/* QDMA Interrupt grouping registers */
#define MTK_RLS_DONE_INT BIT(0)
@@ -562,6 +576,17 @@
#define MT7628_SDM_RBCNT (MT7628_SDM_OFFSET + 0x10c)
#define MT7628_SDM_CS_ERR (MT7628_SDM_OFFSET + 0x110)
+#define MTK_FE_CDM1_FSM 0x220
+#define MTK_FE_CDM2_FSM 0x224
+#define MTK_FE_CDM3_FSM 0x238
+#define MTK_FE_CDM4_FSM 0x298
+#define MTK_FE_CDM5_FSM 0x318
+#define MTK_FE_CDM6_FSM 0x328
+#define MTK_FE_GDM1_FSM 0x228
+#define MTK_FE_GDM2_FSM 0x22C
+
+#define MTK_MAC_FSM(x) (0x1010C + ((x) * 0x100))
+
struct mtk_rx_dma {
unsigned int rxd1;
unsigned int rxd2;
@@ -958,6 +983,7 @@ struct mtk_reg_map {
u32 delay_irq; /* delay interrupt */
u32 irq_status; /* interrupt status */
u32 irq_mask; /* interrupt mask */
+ u32 adma_rx_dbg0;
u32 int_grp;
} pdma;
struct {
@@ -986,6 +1012,8 @@ struct mtk_reg_map {
u32 gdma_to_ppe;
u32 ppe_base;
u32 wdma_base[2];
+ u32 pse_iq_sta;
+ u32 pse_oq_sta;
};
/* struct mtk_eth_data - This is the structure holding all differences
@@ -1028,6 +1056,8 @@ struct mtk_soc_data {
} txrx;
};
+#define MTK_DMA_MONITOR_TIMEOUT msecs_to_jiffies(1000)
+
/* currently no SoC has more than 2 macs */
#define MTK_MAX_DEVS 2
@@ -1154,6 +1184,14 @@ struct mtk_eth {
struct rhashtable flow_table;
struct bpf_prog __rcu *prog;
+
+ struct {
+ struct delayed_work monitor_work;
+ u32 wdidx;
+ u8 wdma_hang_count;
+ u8 qdma_hang_count;
+ u8 adma_hang_count;
+ } reset;
};
/* struct mtk_mac - the structure that holds the info about the MACs of the
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c
index 1ff024f42444..6883eb34cd8b 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.c
@@ -729,6 +729,33 @@ int mtk_foe_entry_idle_time(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
return __mtk_foe_entry_idle_time(ppe, entry->data.ib1);
}
+int mtk_ppe_prepare_reset(struct mtk_ppe *ppe)
+{
+ if (!ppe)
+ return -EINVAL;
+
+ /* disable KA */
+ ppe_clear(ppe, MTK_PPE_TB_CFG, MTK_PPE_TB_CFG_KEEPALIVE);
+ ppe_clear(ppe, MTK_PPE_BIND_LMT1, MTK_PPE_NTU_KEEPALIVE);
+ ppe_w32(ppe, MTK_PPE_KEEPALIVE, 0);
+ usleep_range(10000, 11000);
+
+ /* set KA timer to maximum */
+ ppe_set(ppe, MTK_PPE_BIND_LMT1, MTK_PPE_NTU_KEEPALIVE);
+ ppe_w32(ppe, MTK_PPE_KEEPALIVE, 0xffffffff);
+
+ /* set KA tick select */
+ ppe_set(ppe, MTK_PPE_TB_CFG, MTK_PPE_TB_TICK_SEL);
+ ppe_set(ppe, MTK_PPE_TB_CFG, MTK_PPE_TB_CFG_KEEPALIVE);
+ usleep_range(10000, 11000);
+
+ /* disable scan mode */
+ ppe_clear(ppe, MTK_PPE_TB_CFG, MTK_PPE_TB_CFG_SCAN_MODE);
+ usleep_range(10000, 11000);
+
+ return mtk_ppe_wait_busy(ppe);
+}
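
The ppe_set()/ppe_clear() calls above are the driver's read-modify-write accessors over the PPE register window. A sketch of what they plausibly look like, assuming ppe_r32()/ppe_w32() wrap MMIO reads and writes on the PPE base (the helper bodies here are illustrative, not quoted from the driver):

static u32 ppe_m32(struct mtk_ppe *ppe, u32 reg, u32 mask, u32 set)
{
	u32 val = ppe_r32(ppe, reg);

	val &= ~mask;
	val |= set;
	ppe_w32(ppe, reg, val);

	return val;
}

static u32 ppe_set(struct mtk_ppe *ppe, u32 reg, u32 val)
{
	return ppe_m32(ppe, reg, 0, val);	/* set the bits in val */
}

static u32 ppe_clear(struct mtk_ppe *ppe, u32 reg, u32 val)
{
	return ppe_m32(ppe, reg, val, 0);	/* clear the bits in val */
}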
+
struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base,
int version, int index)
{
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.h b/drivers/net/ethernet/mediatek/mtk_ppe.h
index b5e432031340..5e8bc48252b1 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.h
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.h
@@ -308,6 +308,7 @@ struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base,
void mtk_ppe_deinit(struct mtk_eth *eth);
void mtk_ppe_start(struct mtk_ppe *ppe);
int mtk_ppe_stop(struct mtk_ppe *ppe);
+int mtk_ppe_prepare_reset(struct mtk_ppe *ppe);
void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash);
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_regs.h b/drivers/net/ethernet/mediatek/mtk_ppe_regs.h
index 59596d823d8b..0fdb983b0a88 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe_regs.h
+++ b/drivers/net/ethernet/mediatek/mtk_ppe_regs.h
@@ -58,6 +58,12 @@
#define MTK_PPE_TB_CFG_SCAN_MODE GENMASK(17, 16)
#define MTK_PPE_TB_CFG_HASH_DEBUG GENMASK(19, 18)
#define MTK_PPE_TB_CFG_INFO_SEL BIT(20)
+#define MTK_PPE_TB_TICK_SEL BIT(24)
+
+#define MTK_PPE_BIND_LMT1 0x230
+#define MTK_PPE_NTU_KEEPALIVE GENMASK(23, 16)
+
+#define MTK_PPE_KEEPALIVE 0x234
enum {
MTK_PPE_SCAN_MODE_DISABLED,
diff --git a/drivers/net/ethernet/mediatek/mtk_star_emac.c b/drivers/net/ethernet/mediatek/mtk_star_emac.c
index 7050351250b7..02c03325911f 100644
--- a/drivers/net/ethernet/mediatek/mtk_star_emac.c
+++ b/drivers/net/ethernet/mediatek/mtk_star_emac.c
@@ -1378,9 +1378,6 @@ static int mtk_star_mdio_read(struct mii_bus *mii, int phy_id, int regnum)
unsigned int val, data;
int ret;
- if (regnum & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
mtk_star_mdio_rwok_clear(priv);
val = (regnum << MTK_STAR_OFF_PHY_CTRL0_PREG);
@@ -1407,9 +1404,6 @@ static int mtk_star_mdio_write(struct mii_bus *mii, int phy_id,
struct mtk_star_priv *priv = mii->priv;
unsigned int val;
- if (regnum & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
mtk_star_mdio_rwok_clear(priv);
val = data;
diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c
index a6271449617f..95d890870984 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed.c
+++ b/drivers/net/ethernet/mediatek/mtk_wed.c
@@ -206,6 +206,48 @@ mtk_wed_wo_reset(struct mtk_wed_device *dev)
iounmap(reg);
}
+void mtk_wed_fe_reset(void)
+{
+ int i;
+
+ mutex_lock(&hw_lock);
+
+ for (i = 0; i < ARRAY_SIZE(hw_list); i++) {
+ struct mtk_wed_hw *hw = hw_list[i];
+ struct mtk_wed_device *dev = hw->wed_dev;
+ int err;
+
+ if (!dev || !dev->wlan.reset)
+ continue;
+
+ /* reset callback blocks until WLAN reset is completed */
+ err = dev->wlan.reset(dev);
+ if (err)
+ dev_err(dev->dev, "wlan reset failed: %d\n", err);
+ }
+
+ mutex_unlock(&hw_lock);
+}
+
+void mtk_wed_fe_reset_complete(void)
+{
+ int i;
+
+ mutex_lock(&hw_lock);
+
+ for (i = 0; i < ARRAY_SIZE(hw_list); i++) {
+ struct mtk_wed_hw *hw = hw_list[i];
+ struct mtk_wed_device *dev = hw->wed_dev;
+
+ if (!dev || !dev->wlan.reset_complete)
+ continue;
+
+ dev->wlan.reset_complete(dev);
+ }
+
+ mutex_unlock(&hw_lock);
+}
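
Both walkers assume the WLAN driver populated the wlan.reset / wlan.reset_complete hooks on its struct mtk_wed_device before attaching to WED. A hedged sketch of the consumer side; only the two hook fields come from this patch, the my_wlan_* names are illustrative:

static int my_wlan_fe_reset(struct mtk_wed_device *wed)
{
	/* must block until WLAN-side DMA toward WED is quiesced */
	return 0;
}

static void my_wlan_fe_reset_done(struct mtk_wed_device *wed)
{
	/* FE is back up: safe to restart WLAN-side traffic */
}

static void my_wlan_setup_wed(struct mtk_wed_device *wed)
{
	wed->wlan.reset = my_wlan_fe_reset;
	wed->wlan.reset_complete = my_wlan_fe_reset_done;
}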
+
static struct mtk_wed_hw *
mtk_wed_assign(struct mtk_wed_device *dev)
{
@@ -745,7 +787,6 @@ mtk_wed_rro_ring_alloc(struct mtk_wed_device *dev, struct mtk_wed_ring *ring,
ring->desc_size = sizeof(*ring->desc);
ring->size = size;
- memset(ring->desc, 0, size);
return 0;
}
diff --git a/drivers/net/ethernet/mediatek/mtk_wed.h b/drivers/net/ethernet/mediatek/mtk_wed.h
index e012b8a82133..43ab77eaf683 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed.h
+++ b/drivers/net/ethernet/mediatek/mtk_wed.h
@@ -128,6 +128,8 @@ void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth,
void mtk_wed_exit(void);
int mtk_wed_flow_add(int index);
void mtk_wed_flow_remove(int index);
+void mtk_wed_fe_reset(void);
+void mtk_wed_fe_reset_complete(void);
#else
static inline void
mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth,
@@ -147,6 +149,13 @@ static inline void mtk_wed_flow_remove(int index)
{
}
+static inline void mtk_wed_fe_reset(void)
+{
+}
+
+static inline void mtk_wed_fe_reset_complete(void)
+{
+}
#endif
#ifdef CONFIG_DEBUG_FS
diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
index a0a39643caf7..69fba29055e9 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c
+++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
@@ -138,7 +138,6 @@ mtk_wed_wo_queue_refill(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q,
enum dma_data_direction dir = rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
int n_buf = 0;
- spin_lock_bh(&q->lock);
while (q->queued < q->n_desc) {
struct mtk_wed_wo_queue_entry *entry;
dma_addr_t addr;
@@ -172,7 +171,6 @@ mtk_wed_wo_queue_refill(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q,
q->queued++;
n_buf++;
}
- spin_unlock_bh(&q->lock);
return n_buf;
}
@@ -260,7 +258,6 @@ mtk_wed_wo_queue_alloc(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q,
int n_desc, int buf_size, int index,
struct mtk_wed_wo_queue_regs *regs)
{
- spin_lock_init(&q->lock);
q->regs = *regs;
q->n_desc = n_desc;
q->buf_size = buf_size;
@@ -292,7 +289,6 @@ mtk_wed_wo_queue_tx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
struct page *page;
int i;
- spin_lock_bh(&q->lock);
for (i = 0; i < q->n_desc; i++) {
struct mtk_wed_wo_queue_entry *entry = &q->entry[i];
@@ -301,7 +297,6 @@ mtk_wed_wo_queue_tx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
skb_free_frag(entry->buf);
entry->buf = NULL;
}
- spin_unlock_bh(&q->lock);
if (!q->cache.va)
return;
@@ -316,7 +311,6 @@ mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
{
struct page *page;
- spin_lock_bh(&q->lock);
for (;;) {
void *buf = mtk_wed_wo_dequeue(wo, q, NULL, true);
@@ -325,7 +319,6 @@ mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
skb_free_frag(buf);
}
- spin_unlock_bh(&q->lock);
if (!q->cache.va)
return;
@@ -351,8 +344,6 @@ int mtk_wed_wo_queue_tx_skb(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q,
int ret = 0, index;
u32 ctrl;
- spin_lock_bh(&q->lock);
-
q->tail = mtk_wed_mmio_r32(wo, q->regs.dma_idx);
index = (q->head + 1) % q->n_desc;
if (q->tail == index) {
@@ -383,8 +374,6 @@ int mtk_wed_wo_queue_tx_skb(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q,
mtk_wed_wo_queue_kick(wo, q, q->head);
mtk_wed_wo_kickout(wo);
out:
- spin_unlock_bh(&q->lock);
-
dev_kfree_skb(skb);
return ret;
diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.h b/drivers/net/ethernet/mediatek/mtk_wed_wo.h
index c8fb85795864..dbcf42ce9173 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed_wo.h
+++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.h
@@ -211,7 +211,6 @@ struct mtk_wed_wo_queue {
struct mtk_wed_wo_queue_regs regs;
struct page_frag_cache cache;
- spinlock_t lock;
struct mtk_wed_wo_queue_desc *desc;
dma_addr_t desc_dma;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_clock.c b/drivers/net/ethernet/mellanox/mlx4/en_clock.c
index 98b5ffb4d729..9e3b76182088 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_clock.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_clock.c
@@ -58,9 +58,7 @@ u64 mlx4_en_get_cqe_ts(struct mlx4_cqe *cqe)
return hi | lo;
}
-void mlx4_en_fill_hwtstamps(struct mlx4_en_dev *mdev,
- struct skb_shared_hwtstamps *hwts,
- u64 timestamp)
+u64 mlx4_en_get_hwtstamp(struct mlx4_en_dev *mdev, u64 timestamp)
{
unsigned int seq;
u64 nsec;
@@ -70,8 +68,15 @@ void mlx4_en_fill_hwtstamps(struct mlx4_en_dev *mdev,
nsec = timecounter_cyc2time(&mdev->clock, timestamp);
} while (read_seqretry(&mdev->clock_lock, seq));
+ return ns_to_ktime(nsec);
+}
+
+void mlx4_en_fill_hwtstamps(struct mlx4_en_dev *mdev,
+ struct skb_shared_hwtstamps *hwts,
+ u64 timestamp)
+{
memset(hwts, 0, sizeof(struct skb_shared_hwtstamps));
- hwts->hwtstamp = ns_to_ktime(nsec);
+ hwts->hwtstamp = mlx4_en_get_hwtstamp(mdev, timestamp);
}
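
mlx4_en_get_hwtstamp() keeps the existing seqlock read loop: the timecounter is read locklessly and the read is retried whenever a clock update raced it. The idiom in isolation, assuming a seqlock_t guarding some shared 64-bit sample:

#include <linux/seqlock.h>

static u64 read_sample(const seqlock_t *lock, const u64 *shared)
{
	unsigned int seq;
	u64 val;

	do {
		seq = read_seqbegin(lock);
		val = *shared;			/* unlocked read */
	} while (read_seqretry(lock, seq));	/* retry if a writer raced */

	return val;
}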
/**
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index 8800d3f1f55c..e11bc0ac880e 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -2889,6 +2889,11 @@ static const struct net_device_ops mlx4_netdev_ops_master = {
.ndo_bpf = mlx4_xdp,
};
+static const struct xdp_metadata_ops mlx4_xdp_metadata_ops = {
+ .xmo_rx_timestamp = mlx4_en_xdp_rx_timestamp,
+ .xmo_rx_hash = mlx4_en_xdp_rx_hash,
+};
+
struct mlx4_en_bond {
struct work_struct work;
struct mlx4_en_priv *priv;
@@ -3310,6 +3315,7 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
dev->netdev_ops = &mlx4_netdev_ops_master;
else
dev->netdev_ops = &mlx4_netdev_ops;
+ dev->xdp_metadata_ops = &mlx4_xdp_metadata_ops;
dev->watchdog_timeo = MLX4_EN_WATCHDOG_TIMEOUT;
netif_set_real_num_tx_queues(dev, priv->tx_ring_num[TX]);
netif_set_real_num_rx_queues(dev, priv->rx_ring_num);
@@ -3410,6 +3416,8 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
priv->rss_hash_fn = ETH_RSS_HASH_TOP;
}
+ dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;
+
/* MTU range: 68 - hw-specific max */
dev->min_mtu = ETH_MIN_MTU;
dev->max_mtu = priv->max_mtu;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index 8f762fc170b3..0869d4fff17b 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -661,9 +661,41 @@ static int check_csum(struct mlx4_cqe *cqe, struct sk_buff *skb, void *va,
#define MLX4_CQE_STATUS_IP_ANY (MLX4_CQE_STATUS_IPV4)
#endif
+struct mlx4_en_xdp_buff {
+ struct xdp_buff xdp;
+ struct mlx4_cqe *cqe;
+ struct mlx4_en_dev *mdev;
+ struct mlx4_en_rx_ring *ring;
+ struct net_device *dev;
+};
+
+int mlx4_en_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp)
+{
+ struct mlx4_en_xdp_buff *_ctx = (void *)ctx;
+
+ if (unlikely(_ctx->ring->hwtstamp_rx_filter != HWTSTAMP_FILTER_ALL))
+ return -EOPNOTSUPP;
+
+ *timestamp = mlx4_en_get_hwtstamp(_ctx->mdev,
+ mlx4_en_get_cqe_ts(_ctx->cqe));
+ return 0;
+}
+
+int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash)
+{
+ struct mlx4_en_xdp_buff *_ctx = (void *)ctx;
+
+ if (unlikely(!(_ctx->dev->features & NETIF_F_RXHASH)))
+ return -EOPNOTSUPP;
+
+ *hash = be32_to_cpu(_ctx->cqe->immed_rss_invalid);
+ return 0;
+}
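
These two hooks back the XDP-hints kfuncs exposed to BPF this cycle; programs resolve them as kernel symbols. A minimal consumer sketch, following the kfunc signatures of this series (error handling elided):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
					 __u64 *timestamp) __ksym;
extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx,
				    __u32 *hash) __ksym;

SEC("xdp")
int rx_hints(struct xdp_md *ctx)
{
	__u64 ts = 0;
	__u32 hash = 0;

	/* both return -EOPNOTSUPP when the feature is not enabled */
	bpf_xdp_metadata_rx_timestamp(ctx, &ts);
	bpf_xdp_metadata_rx_hash(ctx, &hash);

	return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";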
+
int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int budget)
{
struct mlx4_en_priv *priv = netdev_priv(dev);
+ struct mlx4_en_xdp_buff mxbuf = {};
int factor = priv->cqe_factor;
struct mlx4_en_rx_ring *ring;
struct bpf_prog *xdp_prog;
@@ -671,7 +703,6 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
bool doorbell_pending;
bool xdp_redir_flush;
struct mlx4_cqe *cqe;
- struct xdp_buff xdp;
int polled = 0;
int index;
@@ -681,7 +712,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
ring = priv->rx_ring[cq_ring];
xdp_prog = rcu_dereference_bh(ring->xdp_prog);
- xdp_init_buff(&xdp, priv->frag_info[0].frag_stride, &ring->xdp_rxq);
+ xdp_init_buff(&mxbuf.xdp, priv->frag_info[0].frag_stride, &ring->xdp_rxq);
doorbell_pending = false;
xdp_redir_flush = false;
@@ -776,24 +807,28 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
priv->frag_info[0].frag_size,
DMA_FROM_DEVICE);
- xdp_prepare_buff(&xdp, va - frags[0].page_offset,
- frags[0].page_offset, length, false);
- orig_data = xdp.data;
-
- act = bpf_prog_run_xdp(xdp_prog, &xdp);
-
- length = xdp.data_end - xdp.data;
- if (xdp.data != orig_data) {
- frags[0].page_offset = xdp.data -
- xdp.data_hard_start;
- va = xdp.data;
+ xdp_prepare_buff(&mxbuf.xdp, va - frags[0].page_offset,
+ frags[0].page_offset, length, true);
+ orig_data = mxbuf.xdp.data;
+ mxbuf.cqe = cqe;
+ mxbuf.mdev = priv->mdev;
+ mxbuf.ring = ring;
+ mxbuf.dev = dev;
+
+ act = bpf_prog_run_xdp(xdp_prog, &mxbuf.xdp);
+
+ length = mxbuf.xdp.data_end - mxbuf.xdp.data;
+ if (mxbuf.xdp.data != orig_data) {
+ frags[0].page_offset = mxbuf.xdp.data -
+ mxbuf.xdp.data_hard_start;
+ va = mxbuf.xdp.data;
}
switch (act) {
case XDP_PASS:
break;
case XDP_REDIRECT:
- if (likely(!xdp_do_redirect(dev, &xdp, xdp_prog))) {
+ if (likely(!xdp_do_redirect(dev, &mxbuf.xdp, xdp_prog))) {
ring->xdp_redirect++;
xdp_redir_flush = true;
frags[0].page = NULL;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
index c5758637b7be..2f79378fbf6e 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
@@ -699,32 +699,32 @@ static void build_inline_wqe(struct mlx4_en_tx_desc *tx_desc,
inl->byte_count = cpu_to_be32(1 << 31 | skb->len);
} else {
inl->byte_count = cpu_to_be32(1 << 31 | MIN_PKT_LEN);
- memset(((void *)(inl + 1)) + skb->len, 0,
+ memset(inl->data + skb->len, 0,
MIN_PKT_LEN - skb->len);
}
- skb_copy_from_linear_data(skb, inl + 1, hlen);
+ skb_copy_from_linear_data(skb, inl->data, hlen);
if (shinfo->nr_frags)
- memcpy(((void *)(inl + 1)) + hlen, fragptr,
+ memcpy(inl->data + hlen, fragptr,
skb_frag_size(&shinfo->frags[0]));
} else {
inl->byte_count = cpu_to_be32(1 << 31 | spc);
if (hlen <= spc) {
- skb_copy_from_linear_data(skb, inl + 1, hlen);
+ skb_copy_from_linear_data(skb, inl->data, hlen);
if (hlen < spc) {
- memcpy(((void *)(inl + 1)) + hlen,
+ memcpy(inl->data + hlen,
fragptr, spc - hlen);
fragptr += spc - hlen;
}
- inl = (void *) (inl + 1) + spc;
- memcpy(((void *)(inl + 1)), fragptr, skb->len - spc);
+ inl = (void *)inl->data + spc;
+ memcpy(inl->data, fragptr, skb->len - spc);
} else {
- skb_copy_from_linear_data(skb, inl + 1, spc);
- inl = (void *) (inl + 1) + spc;
- skb_copy_from_linear_data_offset(skb, spc, inl + 1,
+ skb_copy_from_linear_data(skb, inl->data, spc);
+ inl = (void *)inl->data + spc;
+ skb_copy_from_linear_data_offset(skb, spc, inl->data,
hlen - spc);
if (shinfo->nr_frags)
- memcpy(((void *)(inl + 1)) + hlen - spc,
+ memcpy(inl->data + hlen - spc,
fragptr,
skb_frag_size(&shinfo->frags[0]));
}
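
The inl->data accesses above replace (void *)(inl + 1) pointer arithmetic with a flexible array member, giving the compiler (and FORTIFY_SOURCE) a named, bounded destination instead of a pointer past the end of the struct. This assumes a header change along these lines (layout sketched, not quoted):

struct mlx4_wqe_inline_seg {
	__be32 byte_count;
	__u8 data[];	/* inline payload starts right after byte_count */
};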
diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
index 3ae246391549..277738c50c56 100644
--- a/drivers/net/ethernet/mellanox/mlx4/main.c
+++ b/drivers/net/ethernet/mellanox/mlx4/main.c
@@ -265,29 +265,29 @@ static void mlx4_devlink_set_params_init_values(struct devlink *devlink)
union devlink_param_value value;
value.vbool = !!mlx4_internal_err_reset;
- devlink_param_driverinit_value_set(devlink,
- DEVLINK_PARAM_GENERIC_ID_INT_ERR_RESET,
- value);
+ devl_param_driverinit_value_set(devlink,
+ DEVLINK_PARAM_GENERIC_ID_INT_ERR_RESET,
+ value);
value.vu32 = 1UL << log_num_mac;
- devlink_param_driverinit_value_set(devlink,
- DEVLINK_PARAM_GENERIC_ID_MAX_MACS,
- value);
+ devl_param_driverinit_value_set(devlink,
+ DEVLINK_PARAM_GENERIC_ID_MAX_MACS,
+ value);
value.vbool = enable_64b_cqe_eqe;
- devlink_param_driverinit_value_set(devlink,
- MLX4_DEVLINK_PARAM_ID_ENABLE_64B_CQE_EQE,
- value);
+ devl_param_driverinit_value_set(devlink,
+ MLX4_DEVLINK_PARAM_ID_ENABLE_64B_CQE_EQE,
+ value);
value.vbool = enable_4k_uar;
- devlink_param_driverinit_value_set(devlink,
- MLX4_DEVLINK_PARAM_ID_ENABLE_4K_UAR,
- value);
+ devl_param_driverinit_value_set(devlink,
+ MLX4_DEVLINK_PARAM_ID_ENABLE_4K_UAR,
+ value);
value.vbool = false;
- devlink_param_driverinit_value_set(devlink,
- DEVLINK_PARAM_GENERIC_ID_REGION_SNAPSHOT,
- value);
+ devl_param_driverinit_value_set(devlink,
+ DEVLINK_PARAM_GENERIC_ID_REGION_SNAPSHOT,
+ value);
}
static inline void mlx4_set_num_reserved_uars(struct mlx4_dev *dev,
@@ -3910,37 +3910,37 @@ static void mlx4_devlink_param_load_driverinit_values(struct devlink *devlink)
union devlink_param_value saved_value;
int err;
- err = devlink_param_driverinit_value_get(devlink,
- DEVLINK_PARAM_GENERIC_ID_INT_ERR_RESET,
- &saved_value);
+ err = devl_param_driverinit_value_get(devlink,
+ DEVLINK_PARAM_GENERIC_ID_INT_ERR_RESET,
+ &saved_value);
if (!err && mlx4_internal_err_reset != saved_value.vbool) {
mlx4_internal_err_reset = saved_value.vbool;
/* Notify on value changed on runtime configuration mode */
- devlink_param_value_changed(devlink,
- DEVLINK_PARAM_GENERIC_ID_INT_ERR_RESET);
+ devl_param_value_changed(devlink,
+ DEVLINK_PARAM_GENERIC_ID_INT_ERR_RESET);
}
- err = devlink_param_driverinit_value_get(devlink,
- DEVLINK_PARAM_GENERIC_ID_MAX_MACS,
- &saved_value);
+ err = devl_param_driverinit_value_get(devlink,
+ DEVLINK_PARAM_GENERIC_ID_MAX_MACS,
+ &saved_value);
if (!err)
log_num_mac = order_base_2(saved_value.vu32);
- err = devlink_param_driverinit_value_get(devlink,
- MLX4_DEVLINK_PARAM_ID_ENABLE_64B_CQE_EQE,
- &saved_value);
+ err = devl_param_driverinit_value_get(devlink,
+ MLX4_DEVLINK_PARAM_ID_ENABLE_64B_CQE_EQE,
+ &saved_value);
if (!err)
enable_64b_cqe_eqe = saved_value.vbool;
- err = devlink_param_driverinit_value_get(devlink,
- MLX4_DEVLINK_PARAM_ID_ENABLE_4K_UAR,
- &saved_value);
+ err = devl_param_driverinit_value_get(devlink,
+ MLX4_DEVLINK_PARAM_ID_ENABLE_4K_UAR,
+ &saved_value);
if (!err)
enable_4k_uar = saved_value.vbool;
- err = devlink_param_driverinit_value_get(devlink,
- DEVLINK_PARAM_GENERIC_ID_REGION_SNAPSHOT,
- &saved_value);
+ err = devl_param_driverinit_value_get(devlink,
+ DEVLINK_PARAM_GENERIC_ID_REGION_SNAPSHOT,
+ &saved_value);
if (!err && crdump->snapshot_enable != saved_value.vbool) {
crdump->snapshot_enable = saved_value.vbool;
- devlink_param_value_changed(devlink,
- DEVLINK_PARAM_GENERIC_ID_REGION_SNAPSHOT);
+ devl_param_value_changed(devlink,
+ DEVLINK_PARAM_GENERIC_ID_REGION_SNAPSHOT);
}
}
@@ -4021,8 +4021,8 @@ static int mlx4_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
mutex_init(&dev->persist->interface_state_mutex);
mutex_init(&dev->persist->pci_status_mutex);
- ret = devlink_params_register(devlink, mlx4_devlink_params,
- ARRAY_SIZE(mlx4_devlink_params));
+ ret = devl_params_register(devlink, mlx4_devlink_params,
+ ARRAY_SIZE(mlx4_devlink_params));
if (ret)
goto err_devlink_unregister;
mlx4_devlink_set_params_init_values(devlink);
@@ -4031,14 +4031,13 @@ static int mlx4_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
goto err_params_unregister;
pci_save_state(pdev);
- devlink_set_features(devlink, DEVLINK_F_RELOAD);
devl_unlock(devlink);
devlink_register(devlink);
return 0;
err_params_unregister:
- devlink_params_unregister(devlink, mlx4_devlink_params,
- ARRAY_SIZE(mlx4_devlink_params));
+ devl_params_unregister(devlink, mlx4_devlink_params,
+ ARRAY_SIZE(mlx4_devlink_params));
err_devlink_unregister:
kfree(dev->persist);
err_devlink_free:
@@ -4181,8 +4180,8 @@ static void mlx4_remove_one(struct pci_dev *pdev)
pci_release_regions(pdev);
mlx4_pci_disable_device(dev);
- devlink_params_unregister(devlink, mlx4_devlink_params,
- ARRAY_SIZE(mlx4_devlink_params));
+ devl_params_unregister(devlink, mlx4_devlink_params,
+ ARRAY_SIZE(mlx4_devlink_params));
kfree(dev->persist);
devl_unlock(devlink);
devlink_free(devlink);
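
The devlink_param_*() to devl_param_*()/devl_params_*() renames track the devlink API split: the devl_*() variants expect the caller to already hold the devlink instance lock, which mlx4_init_one() does around this region via devl_lock()/devl_unlock(). The contract in isolation, as a minimal sketch:

#include <net/devlink.h>

static int register_my_params(struct devlink *devlink,
			      const struct devlink_param *params,
			      size_t n)
{
	int ret;

	devl_lock(devlink);	/* devl_*() calls assume this is held */
	ret = devl_params_register(devlink, params, n);
	devl_unlock(devlink);

	return ret;
}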
diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
index 3d4226ddba5e..544e09b97483 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
@@ -796,10 +796,15 @@ void mlx4_en_update_pfc_stats_bitmap(struct mlx4_dev *dev,
int mlx4_en_netdev_event(struct notifier_block *this,
unsigned long event, void *ptr);
+struct xdp_md;
+int mlx4_en_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp);
+int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash);
+
/*
* Functions for time stamping
*/
u64 mlx4_en_get_cqe_ts(struct mlx4_cqe *cqe);
+u64 mlx4_en_get_hwtstamp(struct mlx4_en_dev *mdev, u64 timestamp);
void mlx4_en_fill_hwtstamps(struct mlx4_en_dev *mdev,
struct skb_shared_hwtstamps *hwts,
u64 timestamp);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
index 26685fd0fdaa..bb1d7b039a7e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
@@ -85,7 +85,7 @@ config MLX5_BRIDGE
config MLX5_CLS_ACT
bool "MLX5 TC classifier action support"
- depends on MLX5_ESWITCH && NET_CLS_ACT
+ depends on MLX5_ESWITCH && NET_CLS_ACT && NET_TC_SKB_EXT
default y
help
mlx5 ConnectX offloads support for TC classifier action (NET_CLS_ACT),
@@ -100,7 +100,7 @@ config MLX5_CLS_ACT
config MLX5_TC_CT
bool "MLX5 TC connection tracking offload support"
- depends on MLX5_CLS_ACT && NF_FLOW_TABLE && NET_ACT_CT && NET_TC_SKB_EXT
+ depends on MLX5_CLS_ACT && NF_FLOW_TABLE && NET_ACT_CT
default y
help
Say Y here if you want to support offloading connection tracking rules
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
index cd4a1ab0ea78..8d4e25cc54ea 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
@@ -47,7 +47,7 @@ mlx5_core-$(CONFIG_MLX5_CLS_ACT) += en_tc.o en/rep/tc.o en/rep/neigh.o \
en/tc_tun_vxlan.o en/tc_tun_gre.o en/tc_tun_geneve.o \
en/tc_tun_mplsoudp.o diag/en_tc_tracepoint.o \
en/tc/post_act.o en/tc/int_port.o en/tc/meter.o \
- en/tc/post_meter.o
+ en/tc/post_meter.o en/tc/act_stats.o
mlx5_core-$(CONFIG_MLX5_CLS_ACT) += en/tc/act/act.o en/tc/act/drop.o en/tc/act/trap.o \
en/tc/act/accept.o en/tc/act/mark.o en/tc/act/goto.o \
@@ -97,7 +97,7 @@ mlx5_core-$(CONFIG_MLX5_EN_MACSEC) += en_accel/macsec.o en_accel/macsec_fs.o \
mlx5_core-$(CONFIG_MLX5_EN_IPSEC) += en_accel/ipsec.o en_accel/ipsec_rxtx.o \
en_accel/ipsec_stats.o en_accel/ipsec_fs.o \
- en_accel/ipsec_offload.o
+ en_accel/ipsec_offload.o lib/ipsec_fs_roce.o
mlx5_core-$(CONFIG_MLX5_EN_TLS) += en_accel/ktls_stats.o \
en_accel/fs_tcp.o en_accel/ktls.o en_accel/ktls_txrx.o \
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
index c837103a9ee3..b00e33ed05e9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -47,6 +47,25 @@
#define CREATE_TRACE_POINTS
#include "diag/cmd_tracepoint.h"
+struct mlx5_ifc_mbox_out_bits {
+ u8 status[0x8];
+ u8 reserved_at_8[0x18];
+
+ u8 syndrome[0x20];
+
+ u8 reserved_at_40[0x40];
+};
+
+struct mlx5_ifc_mbox_in_bits {
+ u8 opcode[0x10];
+ u8 uid[0x10];
+
+ u8 reserved_at_20[0x10];
+ u8 op_mod[0x10];
+
+ u8 reserved_at_40[0x40];
+};
+
enum {
CMD_IF_REV = 5,
};
@@ -70,6 +89,27 @@ enum {
MLX5_CMD_DELIVERY_STAT_CMD_DESCR_ERR = 0x10,
};
+static u16 in_to_opcode(void *in)
+{
+ return MLX5_GET(mbox_in, in, opcode);
+}
+
+/* Returns true for opcodes that can be issued very frequently and would
+ * otherwise throttle the command interface; limit their command-slot usage.
+ */
+static bool mlx5_cmd_is_throttle_opcode(u16 op)
+{
+ switch (op) {
+ case MLX5_CMD_OP_CREATE_GENERAL_OBJECT:
+ case MLX5_CMD_OP_DESTROY_GENERAL_OBJECT:
+ case MLX5_CMD_OP_MODIFY_GENERAL_OBJECT:
+ case MLX5_CMD_OP_QUERY_GENERAL_OBJECT:
+ case MLX5_CMD_OP_SYNC_CRYPTO:
+ return true;
+ }
+ return false;
+}
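
This classifier pairs with the throttle_sem initialized later in the patch (to half of max_reg_cmds, rounded up), so the chatty general-object commands can never occupy every command slot. The mechanism reduced to its essentials:

#include <linux/kernel.h>
#include <linux/semaphore.h>

static struct semaphore throttle_sem;

static void throttle_init(int max_reg_cmds)
{
	/* throttled opcodes may hold at most half the command slots */
	sema_init(&throttle_sem, DIV_ROUND_UP(max_reg_cmds, 2));
}

static void exec_throttled_cmd(void)
{
	down(&throttle_sem);	/* may sleep: callers must not be atomic */
	/* ... post the command and wait for its completion ... */
	up(&throttle_sem);
}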
+
static struct mlx5_cmd_work_ent *
cmd_alloc_ent(struct mlx5_cmd *cmd, struct mlx5_cmd_msg *in,
struct mlx5_cmd_msg *out, void *uout, int uout_size,
@@ -91,6 +131,7 @@ cmd_alloc_ent(struct mlx5_cmd *cmd, struct mlx5_cmd_msg *in,
ent->context = context;
ent->cmd = cmd;
ent->page_queue = page_queue;
+ ent->op = in_to_opcode(in->first.data);
refcount_set(&ent->refcnt, 1);
return ent;
@@ -483,6 +524,7 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
case MLX5_CMD_OP_QUERY_VHCA_MIGRATION_STATE:
case MLX5_CMD_OP_SAVE_VHCA_STATE:
case MLX5_CMD_OP_LOAD_VHCA_STATE:
+ case MLX5_CMD_OP_SYNC_CRYPTO:
*status = MLX5_DRIVER_STATUS_ABORTED;
*synd = MLX5_DRIVER_SYND;
return -ENOLINK;
@@ -685,6 +727,7 @@ const char *mlx5_command_str(int command)
MLX5_COMMAND_STR_CASE(QUERY_VHCA_MIGRATION_STATE);
MLX5_COMMAND_STR_CASE(SAVE_VHCA_STATE);
MLX5_COMMAND_STR_CASE(LOAD_VHCA_STATE);
+ MLX5_COMMAND_STR_CASE(SYNC_CRYPTO);
default: return "unknown command opcode";
}
}
@@ -752,25 +795,6 @@ static int cmd_status_to_err(u8 status)
}
}
-struct mlx5_ifc_mbox_out_bits {
- u8 status[0x8];
- u8 reserved_at_8[0x18];
-
- u8 syndrome[0x20];
-
- u8 reserved_at_40[0x40];
-};
-
-struct mlx5_ifc_mbox_in_bits {
- u8 opcode[0x10];
- u8 uid[0x10];
-
- u8 reserved_at_20[0x10];
- u8 op_mod[0x10];
-
- u8 reserved_at_40[0x40];
-};
-
void mlx5_cmd_out_err(struct mlx5_core_dev *dev, u16 opcode, u16 op_mod, void *out)
{
u32 syndrome = MLX5_GET(mbox_out, out, syndrome);
@@ -788,11 +812,12 @@ static void cmd_status_print(struct mlx5_core_dev *dev, void *in, void *out)
u16 opcode, op_mod;
u16 uid;
- opcode = MLX5_GET(mbox_in, in, opcode);
+ opcode = in_to_opcode(in);
op_mod = MLX5_GET(mbox_in, in, op_mod);
uid = MLX5_GET(mbox_in, in, uid);
- if (!uid && opcode != MLX5_CMD_OP_DESTROY_MKEY)
+ if (!uid && opcode != MLX5_CMD_OP_DESTROY_MKEY &&
+ opcode != MLX5_CMD_OP_CREATE_UCTX)
mlx5_cmd_out_err(dev, opcode, op_mod, out);
}
@@ -800,7 +825,7 @@ int mlx5_cmd_check(struct mlx5_core_dev *dev, int err, void *in, void *out)
{
/* aborted due to PCI error or via reset flow mlx5_cmd_trigger_completions() */
if (err == -ENXIO) {
- u16 opcode = MLX5_GET(mbox_in, in, opcode);
+ u16 opcode = in_to_opcode(in);
u32 syndrome;
u8 status;
@@ -829,9 +854,9 @@ static void dump_command(struct mlx5_core_dev *dev,
struct mlx5_cmd_work_ent *ent, int input)
{
struct mlx5_cmd_msg *msg = input ? ent->in : ent->out;
- u16 op = MLX5_GET(mbox_in, ent->lay->in, opcode);
struct mlx5_cmd_mailbox *next = msg->next;
int n = mlx5_calc_cmd_blocks(msg);
+ u16 op = ent->op;
int data_only;
u32 offset = 0;
int dump_len;
@@ -883,11 +908,6 @@ static void dump_command(struct mlx5_core_dev *dev,
mlx5_core_dbg(dev, "cmd[%d]: end dump\n", ent->idx);
}
-static u16 msg_to_opcode(struct mlx5_cmd_msg *in)
-{
- return MLX5_GET(mbox_in, in->first.data, opcode);
-}
-
static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool forced);
static void cb_timeout_handler(struct work_struct *work)
@@ -905,13 +925,13 @@ static void cb_timeout_handler(struct work_struct *work)
/* Maybe got handled by eq recover ? */
if (!test_bit(MLX5_CMD_ENT_STATE_PENDING_COMP, &ent->state)) {
mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) Async, recovered after timeout\n", ent->idx,
- mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in));
+ mlx5_command_str(ent->op), ent->op);
goto out; /* phew, already handled */
}
ent->ret = -ETIMEDOUT;
mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) Async, timeout. Will cause a leak of a command resource\n",
- ent->idx, mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in));
+ ent->idx, mlx5_command_str(ent->op), ent->op);
mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true);
out:
@@ -985,7 +1005,6 @@ static void cmd_work_handler(struct work_struct *work)
ent->lay = lay;
memset(lay, 0, sizeof(*lay));
memcpy(lay->in, ent->in->first.data, sizeof(lay->in));
- ent->op = be32_to_cpu(lay->in[0]) >> 16;
if (ent->in->next)
lay->in_ptr = cpu_to_be64(ent->in->next->dma);
lay->inlen = cpu_to_be32(ent->in->len);
@@ -1098,12 +1117,12 @@ static void wait_func_handle_exec_timeout(struct mlx5_core_dev *dev,
*/
if (wait_for_completion_timeout(&ent->done, timeout)) {
mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) recovered after timeout\n", ent->idx,
- mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in));
+ mlx5_command_str(ent->op), ent->op);
return;
}
mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) No done completion\n", ent->idx,
- mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in));
+ mlx5_command_str(ent->op), ent->op);
ent->ret = -ETIMEDOUT;
mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true);
@@ -1130,12 +1149,10 @@ out_err:
if (err == -ETIMEDOUT) {
mlx5_core_warn(dev, "%s(0x%x) timeout. Will cause a leak of a command resource\n",
- mlx5_command_str(msg_to_opcode(ent->in)),
- msg_to_opcode(ent->in));
+ mlx5_command_str(ent->op), ent->op);
} else if (err == -ECANCELED) {
mlx5_core_warn(dev, "%s(0x%x) canceled on out of queue timeout.\n",
- mlx5_command_str(msg_to_opcode(ent->in)),
- msg_to_opcode(ent->in));
+ mlx5_command_str(ent->op), ent->op);
}
mlx5_core_dbg(dev, "err %d, delivery status %s(%d)\n",
err, deliv_status_to_str(ent->status), ent->status);
@@ -1169,7 +1186,6 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in,
u8 status = 0;
int err = 0;
s64 ds;
- u16 op;
if (callback && page_queue)
return -EINVAL;
@@ -1209,9 +1225,8 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in,
goto out_free;
ds = ent->ts2 - ent->ts1;
- op = MLX5_GET(mbox_in, in->first.data, opcode);
- if (op < MLX5_CMD_OP_MAX) {
- stats = &cmd->stats[op];
+ if (ent->op < MLX5_CMD_OP_MAX) {
+ stats = &cmd->stats[ent->op];
spin_lock_irq(&stats->lock);
stats->sum += ds;
++stats->n;
@@ -1219,7 +1234,7 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in,
}
mlx5_core_dbg_mask(dev, 1 << MLX5_CMD_TIME,
"fw exec time for %s is %lld nsec\n",
- mlx5_command_str(op), ds);
+ mlx5_command_str(ent->op), ds);
out_free:
status = ent->status;
@@ -1816,7 +1831,7 @@ cache_miss:
static int is_manage_pages(void *in)
{
- return MLX5_GET(mbox_in, in, opcode) == MLX5_CMD_OP_MANAGE_PAGES;
+ return in_to_opcode(in) == MLX5_CMD_OP_MANAGE_PAGES;
}
/* Notes:
@@ -1827,8 +1842,9 @@ static int cmd_exec(struct mlx5_core_dev *dev, void *in, int in_size, void *out,
int out_size, mlx5_cmd_cbk_t callback, void *context,
bool force_polling)
{
- u16 opcode = MLX5_GET(mbox_in, in, opcode);
struct mlx5_cmd_msg *inb, *outb;
+ u16 opcode = in_to_opcode(in);
+ bool throttle_op;
int pages_queue;
gfp_t gfp;
u8 token;
@@ -1837,13 +1853,21 @@ static int cmd_exec(struct mlx5_core_dev *dev, void *in, int in_size, void *out,
if (mlx5_cmd_is_down(dev) || !opcode_allowed(&dev->cmd, opcode))
return -ENXIO;
+ throttle_op = mlx5_cmd_is_throttle_opcode(opcode);
+ if (throttle_op) {
+ /* atomic context may not sleep */
+ if (callback)
+ return -EINVAL;
+ down(&dev->cmd.throttle_sem);
+ }
+
pages_queue = is_manage_pages(in);
gfp = callback ? GFP_ATOMIC : GFP_KERNEL;
inb = alloc_msg(dev, in_size, gfp);
if (IS_ERR(inb)) {
err = PTR_ERR(inb);
- return err;
+ goto out_up;
}
token = alloc_token(&dev->cmd);
@@ -1877,6 +1901,9 @@ out_out:
mlx5_free_cmd_msg(dev, outb);
out_in:
free_msg(dev, inb);
+out_up:
+ if (throttle_op)
+ up(&dev->cmd.throttle_sem);
return err;
}
@@ -1950,8 +1977,8 @@ static int cmd_status_err(struct mlx5_core_dev *dev, int err, u16 opcode, u16 op
int mlx5_cmd_do(struct mlx5_core_dev *dev, void *in, int in_size, void *out, int out_size)
{
int err = cmd_exec(dev, in, in_size, out, out_size, NULL, NULL, false);
- u16 opcode = MLX5_GET(mbox_in, in, opcode);
u16 op_mod = MLX5_GET(mbox_in, in, op_mod);
+ u16 opcode = in_to_opcode(in);
return cmd_status_err(dev, err, opcode, op_mod, out);
}
@@ -1996,8 +2023,8 @@ int mlx5_cmd_exec_polling(struct mlx5_core_dev *dev, void *in, int in_size,
void *out, int out_size)
{
int err = cmd_exec(dev, in, in_size, out, out_size, NULL, NULL, true);
- u16 opcode = MLX5_GET(mbox_in, in, opcode);
u16 op_mod = MLX5_GET(mbox_in, in, op_mod);
+ u16 opcode = in_to_opcode(in);
err = cmd_status_err(dev, err, opcode, op_mod, out);
return mlx5_cmd_check(dev, err, in, out);
@@ -2049,7 +2076,7 @@ int mlx5_cmd_exec_cb(struct mlx5_async_ctx *ctx, void *in, int in_size,
work->ctx = ctx;
work->user_callback = callback;
- work->opcode = MLX5_GET(mbox_in, in, opcode);
+ work->opcode = in_to_opcode(in);
work->op_mod = MLX5_GET(mbox_in, in, op_mod);
work->out = out;
if (WARN_ON(!atomic_inc_not_zero(&ctx->num_inflight)))
@@ -2220,6 +2247,7 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev)
sema_init(&cmd->sem, cmd->max_reg_cmds);
sema_init(&cmd->pages_sem, 1);
+ sema_init(&cmd->throttle_sem, DIV_ROUND_UP(cmd->max_reg_cmds, 2));
cmd_h = (u32)((u64)(cmd->dma) >> 32);
cmd_l = (u32)(cmd->dma);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
index 0571e40c6ee5..445fe30c3d0b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
@@ -59,6 +59,9 @@ bool mlx5_eth_supported(struct mlx5_core_dev *dev)
if (!IS_ENABLED(CONFIG_MLX5_CORE_EN))
return false;
+ if (mlx5_core_is_management_pf(dev))
+ return false;
+
if (MLX5_CAP_GEN(dev, port_type) != MLX5_CAP_PORT_TYPE_ETH)
return false;
@@ -111,9 +114,9 @@ static bool is_eth_enabled(struct mlx5_core_dev *dev)
union devlink_param_value val;
int err;
- err = devlink_param_driverinit_value_get(priv_to_devlink(dev),
- DEVLINK_PARAM_GENERIC_ID_ENABLE_ETH,
- &val);
+ err = devl_param_driverinit_value_get(priv_to_devlink(dev),
+ DEVLINK_PARAM_GENERIC_ID_ENABLE_ETH,
+ &val);
return err ? false : val.vbool;
}
@@ -144,9 +147,9 @@ static bool is_vnet_enabled(struct mlx5_core_dev *dev)
union devlink_param_value val;
int err;
- err = devlink_param_driverinit_value_get(priv_to_devlink(dev),
- DEVLINK_PARAM_GENERIC_ID_ENABLE_VNET,
- &val);
+ err = devl_param_driverinit_value_get(priv_to_devlink(dev),
+ DEVLINK_PARAM_GENERIC_ID_ENABLE_VNET,
+ &val);
return err ? false : val.vbool;
}
@@ -198,6 +201,9 @@ bool mlx5_rdma_supported(struct mlx5_core_dev *dev)
if (!IS_ENABLED(CONFIG_MLX5_INFINIBAND))
return false;
+ if (mlx5_core_is_management_pf(dev))
+ return false;
+
if (dev->priv.flags & MLX5_PRIV_FLAGS_DISABLE_IB_ADEV)
return false;
@@ -215,9 +221,9 @@ static bool is_ib_enabled(struct mlx5_core_dev *dev)
union devlink_param_value val;
int err;
- err = devlink_param_driverinit_value_get(priv_to_devlink(dev),
- DEVLINK_PARAM_GENERIC_ID_ENABLE_RDMA,
- &val);
+ err = devl_param_driverinit_value_get(priv_to_devlink(dev),
+ DEVLINK_PARAM_GENERIC_ID_ENABLE_RDMA,
+ &val);
return err ? false : val.vbool;
}
@@ -343,7 +349,6 @@ int mlx5_attach_device(struct mlx5_core_dev *dev)
devl_assert_locked(priv_to_devlink(dev));
mutex_lock(&mlx5_intf_mutex);
priv->flags &= ~MLX5_PRIV_FLAGS_DETACH;
- priv->flags |= MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW;
for (i = 0; i < ARRAY_SIZE(mlx5_adev_devices); i++) {
if (!priv->adev[i]) {
bool is_supported = false;
@@ -372,10 +377,6 @@ int mlx5_attach_device(struct mlx5_core_dev *dev)
/* Note that it is not the PCI driver that mlx5_core_dev
* is bound to here, but the auxiliary driver.
- *
- * Here we can race of module unload with devlink
- * reload, but we don't need to take extra lock because
- * we are holding global mlx5_intf_mutex.
*/
if (!adev->dev.driver)
continue;
@@ -391,12 +392,11 @@ int mlx5_attach_device(struct mlx5_core_dev *dev)
break;
}
}
- priv->flags &= ~MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW;
mutex_unlock(&mlx5_intf_mutex);
return ret;
}
-void mlx5_detach_device(struct mlx5_core_dev *dev)
+void mlx5_detach_device(struct mlx5_core_dev *dev, bool suspend)
{
struct mlx5_priv *priv = &dev->priv;
struct auxiliary_device *adev;
@@ -406,7 +406,6 @@ void mlx5_detach_device(struct mlx5_core_dev *dev)
devl_assert_locked(priv_to_devlink(dev));
mutex_lock(&mlx5_intf_mutex);
- priv->flags |= MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW;
for (i = ARRAY_SIZE(mlx5_adev_devices) - 1; i >= 0; i--) {
if (!priv->adev[i])
continue;
@@ -426,7 +425,7 @@ void mlx5_detach_device(struct mlx5_core_dev *dev)
adrv = to_auxiliary_drv(adev->dev.driver);
- if (adrv->suspend) {
+ if (adrv->suspend && suspend) {
adrv->suspend(adev, pm);
continue;
}
@@ -435,7 +434,6 @@ skip_suspend:
del_adev(&priv->adev[i]->adev);
priv->adev[i] = NULL;
}
- priv->flags &= ~MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW;
priv->flags |= MLX5_PRIV_FLAGS_DETACH;
mutex_unlock(&mlx5_intf_mutex);
}
@@ -534,22 +532,16 @@ del_adev:
int mlx5_rescan_drivers_locked(struct mlx5_core_dev *dev)
{
struct mlx5_priv *priv = &dev->priv;
- int err = 0;
lockdep_assert_held(&mlx5_intf_mutex);
if (priv->flags & MLX5_PRIV_FLAGS_DETACH)
return 0;
- priv->flags |= MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW;
delete_drivers(dev);
if (priv->flags & MLX5_PRIV_FLAGS_DISABLE_ALL_ADEV)
- goto out;
-
- err = add_drivers(dev);
+ return 0;
-out:
- priv->flags &= ~MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW;
- return err;
+ return add_drivers(dev);
}
bool mlx5_same_hw_devs(struct mlx5_core_dev *dev, struct mlx5_core_dev *peer_dev)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
index 5bd83c0275f8..c5d2fdcabd56 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
@@ -7,6 +7,7 @@
#include "fw_reset.h"
#include "fs_core.h"
#include "eswitch.h"
+#include "lag/lag.h"
#include "esw/qos.h"
#include "sf/dev/dev.h"
#include "sf/sf.h"
@@ -104,7 +105,7 @@ static int mlx5_devlink_reload_fw_activate(struct devlink *devlink, struct netli
if (err)
return err;
- mlx5_unload_one_devl_locked(dev);
+ mlx5_unload_one_devl_locked(dev, true);
err = mlx5_health_wait_pci_up(dev);
if (err)
NL_SET_ERR_MSG_MOD(extack, "FW activate aborted, PCI reads fail after reset");
@@ -156,13 +157,18 @@ static int mlx5_devlink_reload_down(struct devlink *devlink, bool netns_change,
return -EOPNOTSUPP;
}
+ if (mlx5_core_is_mp_slave(dev)) {
+ NL_SET_ERR_MSG_MOD(extack, "reload is unsupported for multi port slave");
+ return -EOPNOTSUPP;
+ }
+
if (pci_num_vf(pdev)) {
NL_SET_ERR_MSG_MOD(extack, "reload while VFs are present is unfavorable");
}
switch (action) {
case DEVLINK_RELOAD_ACTION_DRIVER_REINIT:
- mlx5_unload_one_devl_locked(dev);
+ mlx5_unload_one_devl_locked(dev, false);
break;
case DEVLINK_RELOAD_ACTION_FW_ACTIVATE:
if (limit == DEVLINK_RELOAD_LIMIT_NO_RESET)
@@ -263,9 +269,10 @@ static int mlx5_devlink_trap_action_set(struct devlink *devlink,
struct netlink_ext_ack *extack)
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
+ struct mlx5_devlink_trap_event_ctx trap_event_ctx;
enum devlink_trap_action action_orig;
struct mlx5_devlink_trap *dl_trap;
- int err = 0;
+ int err;
if (is_mdev_switchdev_mode(dev)) {
NL_SET_ERR_MSG_MOD(extack, "Devlink traps can't be set in switchdev mode");
@@ -275,26 +282,25 @@ static int mlx5_devlink_trap_action_set(struct devlink *devlink,
dl_trap = mlx5_find_trap_by_id(dev, trap->id);
if (!dl_trap) {
mlx5_core_err(dev, "Devlink trap: Set action on invalid trap id 0x%x", trap->id);
- err = -EINVAL;
- goto out;
+ return -EINVAL;
}
- if (action != DEVLINK_TRAP_ACTION_DROP && action != DEVLINK_TRAP_ACTION_TRAP) {
- err = -EOPNOTSUPP;
- goto out;
- }
+ if (action != DEVLINK_TRAP_ACTION_DROP && action != DEVLINK_TRAP_ACTION_TRAP)
+ return -EOPNOTSUPP;
if (action == dl_trap->trap.action)
- goto out;
+ return 0;
action_orig = dl_trap->trap.action;
dl_trap->trap.action = action;
+ trap_event_ctx.trap = &dl_trap->trap;
+ trap_event_ctx.err = 0;
err = mlx5_blocking_notifier_call_chain(dev, MLX5_DRIVER_EVENT_TYPE_TRAP,
- &dl_trap->trap);
- if (err)
+ &trap_event_ctx);
+ if (err == NOTIFY_BAD)
dl_trap->trap.action = action_orig;
-out:
- return err;
+
+ return trap_event_ctx.err;
}
static const struct devlink_ops mlx5_devlink_ops = {
@@ -396,70 +402,6 @@ void mlx5_devlink_free(struct devlink *devlink)
devlink_free(devlink);
}
-static int mlx5_devlink_fs_mode_validate(struct devlink *devlink, u32 id,
- union devlink_param_value val,
- struct netlink_ext_ack *extack)
-{
- struct mlx5_core_dev *dev = devlink_priv(devlink);
- char *value = val.vstr;
- int err = 0;
-
- if (!strcmp(value, "dmfs")) {
- return 0;
- } else if (!strcmp(value, "smfs")) {
- u8 eswitch_mode;
- bool smfs_cap;
-
- eswitch_mode = mlx5_eswitch_mode(dev);
- smfs_cap = mlx5_fs_dr_is_supported(dev);
-
- if (!smfs_cap) {
- err = -EOPNOTSUPP;
- NL_SET_ERR_MSG_MOD(extack,
- "Software managed steering is not supported by current device");
- }
-
- else if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) {
- NL_SET_ERR_MSG_MOD(extack,
- "Software managed steering is not supported when eswitch offloads enabled.");
- err = -EOPNOTSUPP;
- }
- } else {
- NL_SET_ERR_MSG_MOD(extack,
- "Bad parameter: supported values are [\"dmfs\", \"smfs\"]");
- err = -EINVAL;
- }
-
- return err;
-}
-
-static int mlx5_devlink_fs_mode_set(struct devlink *devlink, u32 id,
- struct devlink_param_gset_ctx *ctx)
-{
- struct mlx5_core_dev *dev = devlink_priv(devlink);
- enum mlx5_flow_steering_mode mode;
-
- if (!strcmp(ctx->val.vstr, "smfs"))
- mode = MLX5_FLOW_STEERING_MODE_SMFS;
- else
- mode = MLX5_FLOW_STEERING_MODE_DMFS;
- dev->priv.steering->mode = mode;
-
- return 0;
-}
-
-static int mlx5_devlink_fs_mode_get(struct devlink *devlink, u32 id,
- struct devlink_param_gset_ctx *ctx)
-{
- struct mlx5_core_dev *dev = devlink_priv(devlink);
-
- if (dev->priv.steering->mode == MLX5_FLOW_STEERING_MODE_SMFS)
- strcpy(ctx->val.vstr, "smfs");
- else
- strcpy(ctx->val.vstr, "dmfs");
- return 0;
-}
-
static int mlx5_devlink_enable_roce_validate(struct devlink *devlink, u32 id,
union devlink_param_value val,
struct netlink_ext_ack *extack)
@@ -496,68 +438,54 @@ static int mlx5_devlink_large_group_num_validate(struct devlink *devlink, u32 id
return 0;
}
-static int mlx5_devlink_esw_port_metadata_set(struct devlink *devlink, u32 id,
- struct devlink_param_gset_ctx *ctx)
+static int mlx5_devlink_esw_multiport_set(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
if (!MLX5_ESWITCH_MANAGER(dev))
return -EOPNOTSUPP;
- return mlx5_esw_offloads_vport_metadata_set(dev->priv.eswitch, ctx->val.vbool);
+ if (ctx->val.vbool)
+ return mlx5_lag_mpesw_enable(dev);
+
+ mlx5_lag_mpesw_disable(dev);
+ return 0;
}
-static int mlx5_devlink_esw_port_metadata_get(struct devlink *devlink, u32 id,
- struct devlink_param_gset_ctx *ctx)
+static int mlx5_devlink_esw_multiport_get(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
if (!MLX5_ESWITCH_MANAGER(dev))
return -EOPNOTSUPP;
- ctx->val.vbool = mlx5_eswitch_vport_match_metadata_enabled(dev->priv.eswitch);
+ ctx->val.vbool = mlx5_lag_is_mpesw(dev);
return 0;
}
-static int mlx5_devlink_esw_port_metadata_validate(struct devlink *devlink, u32 id,
- union devlink_param_value val,
- struct netlink_ext_ack *extack)
+static int mlx5_devlink_esw_multiport_validate(struct devlink *devlink, u32 id,
+ union devlink_param_value val,
+ struct netlink_ext_ack *extack)
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
- u8 esw_mode;
if (!MLX5_ESWITCH_MANAGER(dev)) {
NL_SET_ERR_MSG_MOD(extack, "E-Switch is unsupported");
return -EOPNOTSUPP;
}
- esw_mode = mlx5_eswitch_mode(dev);
- if (esw_mode == MLX5_ESWITCH_OFFLOADS) {
+
+ if (mlx5_eswitch_mode(dev) != MLX5_ESWITCH_OFFLOADS) {
NL_SET_ERR_MSG_MOD(extack,
- "E-Switch must either disabled or non switchdev mode");
+ "E-Switch must be in switchdev mode");
return -EBUSY;
}
- return 0;
-}
-
-#endif
-
-static int mlx5_devlink_enable_remote_dev_reset_set(struct devlink *devlink, u32 id,
- struct devlink_param_gset_ctx *ctx)
-{
- struct mlx5_core_dev *dev = devlink_priv(devlink);
- mlx5_fw_reset_enable_remote_dev_reset_set(dev, ctx->val.vbool);
return 0;
}
-static int mlx5_devlink_enable_remote_dev_reset_get(struct devlink *devlink, u32 id,
- struct devlink_param_gset_ctx *ctx)
-{
- struct mlx5_core_dev *dev = devlink_priv(devlink);
-
- ctx->val.vbool = mlx5_fw_reset_enable_remote_dev_reset_get(dev);
- return 0;
-}
+#endif
static int mlx5_devlink_eq_depth_validate(struct devlink *devlink, u32 id,
union devlink_param_value val,
@@ -567,11 +495,6 @@ static int mlx5_devlink_eq_depth_validate(struct devlink *devlink, u32 id,
}
static const struct devlink_param mlx5_devlink_params[] = {
- DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_FLOW_STEERING_MODE,
- "flow_steering_mode", DEVLINK_PARAM_TYPE_STRING,
- BIT(DEVLINK_PARAM_CMODE_RUNTIME),
- mlx5_devlink_fs_mode_get, mlx5_devlink_fs_mode_set,
- mlx5_devlink_fs_mode_validate),
DEVLINK_PARAM_GENERIC(ENABLE_ROCE, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT),
NULL, NULL, mlx5_devlink_enable_roce_validate),
#ifdef CONFIG_MLX5_ESWITCH
@@ -580,16 +503,13 @@ static const struct devlink_param mlx5_devlink_params[] = {
BIT(DEVLINK_PARAM_CMODE_DRIVERINIT),
NULL, NULL,
mlx5_devlink_large_group_num_validate),
- DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_ESW_PORT_METADATA,
- "esw_port_metadata", DEVLINK_PARAM_TYPE_BOOL,
+ DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_ESW_MULTIPORT,
+ "esw_multiport", DEVLINK_PARAM_TYPE_BOOL,
BIT(DEVLINK_PARAM_CMODE_RUNTIME),
- mlx5_devlink_esw_port_metadata_get,
- mlx5_devlink_esw_port_metadata_set,
- mlx5_devlink_esw_port_metadata_validate),
+ mlx5_devlink_esw_multiport_get,
+ mlx5_devlink_esw_multiport_set,
+ mlx5_devlink_esw_multiport_validate),
#endif
- DEVLINK_PARAM_GENERIC(ENABLE_REMOTE_DEV_RESET, BIT(DEVLINK_PARAM_CMODE_RUNTIME),
- mlx5_devlink_enable_remote_dev_reset_get,
- mlx5_devlink_enable_remote_dev_reset_set, NULL),
DEVLINK_PARAM_GENERIC(IO_EQ_SIZE, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT),
NULL, NULL, mlx5_devlink_eq_depth_validate),
DEVLINK_PARAM_GENERIC(EVENT_EQ_SIZE, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT),
@@ -602,33 +522,34 @@ static void mlx5_devlink_set_params_init_values(struct devlink *devlink)
union devlink_param_value value;
value.vbool = MLX5_CAP_GEN(dev, roce);
- devlink_param_driverinit_value_set(devlink,
- DEVLINK_PARAM_GENERIC_ID_ENABLE_ROCE,
- value);
+ devl_param_driverinit_value_set(devlink,
+ DEVLINK_PARAM_GENERIC_ID_ENABLE_ROCE,
+ value);
#ifdef CONFIG_MLX5_ESWITCH
value.vu32 = ESW_OFFLOADS_DEFAULT_NUM_GROUPS;
- devlink_param_driverinit_value_set(devlink,
- MLX5_DEVLINK_PARAM_ID_ESW_LARGE_GROUP_NUM,
- value);
+ devl_param_driverinit_value_set(devlink,
+ MLX5_DEVLINK_PARAM_ID_ESW_LARGE_GROUP_NUM,
+ value);
#endif
value.vu32 = MLX5_COMP_EQ_SIZE;
- devlink_param_driverinit_value_set(devlink,
- DEVLINK_PARAM_GENERIC_ID_IO_EQ_SIZE,
- value);
+ devl_param_driverinit_value_set(devlink,
+ DEVLINK_PARAM_GENERIC_ID_IO_EQ_SIZE,
+ value);
value.vu32 = MLX5_NUM_ASYNC_EQE;
- devlink_param_driverinit_value_set(devlink,
- DEVLINK_PARAM_GENERIC_ID_EVENT_EQ_SIZE,
- value);
+ devl_param_driverinit_value_set(devlink,
+ DEVLINK_PARAM_GENERIC_ID_EVENT_EQ_SIZE,
+ value);
}
-static const struct devlink_param enable_eth_param =
+static const struct devlink_param mlx5_devlink_eth_params[] = {
DEVLINK_PARAM_GENERIC(ENABLE_ETH, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT),
- NULL, NULL, NULL);
+ NULL, NULL, NULL),
+};
-static int mlx5_devlink_eth_param_register(struct devlink *devlink)
+static int mlx5_devlink_eth_params_register(struct devlink *devlink)
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
union devlink_param_value value;
@@ -637,25 +558,27 @@ static int mlx5_devlink_eth_param_register(struct devlink *devlink)
if (!mlx5_eth_supported(dev))
return 0;
- err = devlink_param_register(devlink, &enable_eth_param);
+ err = devl_params_register(devlink, mlx5_devlink_eth_params,
+ ARRAY_SIZE(mlx5_devlink_eth_params));
if (err)
return err;
value.vbool = true;
- devlink_param_driverinit_value_set(devlink,
- DEVLINK_PARAM_GENERIC_ID_ENABLE_ETH,
- value);
+ devl_param_driverinit_value_set(devlink,
+ DEVLINK_PARAM_GENERIC_ID_ENABLE_ETH,
+ value);
return 0;
}
-static void mlx5_devlink_eth_param_unregister(struct devlink *devlink)
+static void mlx5_devlink_eth_params_unregister(struct devlink *devlink)
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
if (!mlx5_eth_supported(dev))
return;
- devlink_param_unregister(devlink, &enable_eth_param);
+ devl_params_unregister(devlink, mlx5_devlink_eth_params,
+ ARRAY_SIZE(mlx5_devlink_eth_params));
}
static int mlx5_devlink_enable_rdma_validate(struct devlink *devlink, u32 id,
@@ -670,11 +593,12 @@ static int mlx5_devlink_enable_rdma_validate(struct devlink *devlink, u32 id,
return 0;
}
-static const struct devlink_param enable_rdma_param =
+static const struct devlink_param mlx5_devlink_rdma_params[] = {
DEVLINK_PARAM_GENERIC(ENABLE_RDMA, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT),
- NULL, NULL, mlx5_devlink_enable_rdma_validate);
+ NULL, NULL, mlx5_devlink_enable_rdma_validate),
+};
-static int mlx5_devlink_rdma_param_register(struct devlink *devlink)
+static int mlx5_devlink_rdma_params_register(struct devlink *devlink)
{
union devlink_param_value value;
int err;
@@ -682,30 +606,33 @@ static int mlx5_devlink_rdma_param_register(struct devlink *devlink)
if (!IS_ENABLED(CONFIG_MLX5_INFINIBAND))
return 0;
- err = devlink_param_register(devlink, &enable_rdma_param);
+ err = devl_params_register(devlink, mlx5_devlink_rdma_params,
+ ARRAY_SIZE(mlx5_devlink_rdma_params));
if (err)
return err;
value.vbool = true;
- devlink_param_driverinit_value_set(devlink,
- DEVLINK_PARAM_GENERIC_ID_ENABLE_RDMA,
- value);
+ devl_param_driverinit_value_set(devlink,
+ DEVLINK_PARAM_GENERIC_ID_ENABLE_RDMA,
+ value);
return 0;
}
-static void mlx5_devlink_rdma_param_unregister(struct devlink *devlink)
+static void mlx5_devlink_rdma_params_unregister(struct devlink *devlink)
{
if (!IS_ENABLED(CONFIG_MLX5_INFINIBAND))
return;
- devlink_param_unregister(devlink, &enable_rdma_param);
+ devl_params_unregister(devlink, mlx5_devlink_rdma_params,
+ ARRAY_SIZE(mlx5_devlink_rdma_params));
}
-static const struct devlink_param enable_vnet_param =
+static const struct devlink_param mlx5_devlink_vnet_params[] = {
DEVLINK_PARAM_GENERIC(ENABLE_VNET, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT),
- NULL, NULL, NULL);
+ NULL, NULL, NULL),
+};
-static int mlx5_devlink_vnet_param_register(struct devlink *devlink)
+static int mlx5_devlink_vnet_params_register(struct devlink *devlink)
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
union devlink_param_value value;
@@ -714,56 +641,58 @@ static int mlx5_devlink_vnet_param_register(struct devlink *devlink)
if (!mlx5_vnet_supported(dev))
return 0;
- err = devlink_param_register(devlink, &enable_vnet_param);
+ err = devl_params_register(devlink, mlx5_devlink_vnet_params,
+ ARRAY_SIZE(mlx5_devlink_vnet_params));
if (err)
return err;
value.vbool = true;
- devlink_param_driverinit_value_set(devlink,
- DEVLINK_PARAM_GENERIC_ID_ENABLE_VNET,
- value);
+ devl_param_driverinit_value_set(devlink,
+ DEVLINK_PARAM_GENERIC_ID_ENABLE_VNET,
+ value);
return 0;
}
-static void mlx5_devlink_vnet_param_unregister(struct devlink *devlink)
+static void mlx5_devlink_vnet_params_unregister(struct devlink *devlink)
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
if (!mlx5_vnet_supported(dev))
return;
- devlink_param_unregister(devlink, &enable_vnet_param);
+ devl_params_unregister(devlink, mlx5_devlink_vnet_params,
+ ARRAY_SIZE(mlx5_devlink_vnet_params));
}
static int mlx5_devlink_auxdev_params_register(struct devlink *devlink)
{
int err;
- err = mlx5_devlink_eth_param_register(devlink);
+ err = mlx5_devlink_eth_params_register(devlink);
if (err)
return err;
- err = mlx5_devlink_rdma_param_register(devlink);
+ err = mlx5_devlink_rdma_params_register(devlink);
if (err)
goto rdma_err;
- err = mlx5_devlink_vnet_param_register(devlink);
+ err = mlx5_devlink_vnet_params_register(devlink);
if (err)
goto vnet_err;
return 0;
vnet_err:
- mlx5_devlink_rdma_param_unregister(devlink);
+ mlx5_devlink_rdma_params_unregister(devlink);
rdma_err:
- mlx5_devlink_eth_param_unregister(devlink);
+ mlx5_devlink_eth_params_unregister(devlink);
return err;
}
static void mlx5_devlink_auxdev_params_unregister(struct devlink *devlink)
{
- mlx5_devlink_vnet_param_unregister(devlink);
- mlx5_devlink_rdma_param_unregister(devlink);
- mlx5_devlink_eth_param_unregister(devlink);
+ mlx5_devlink_vnet_params_unregister(devlink);
+ mlx5_devlink_rdma_params_unregister(devlink);
+ mlx5_devlink_eth_params_unregister(devlink);
}
static int mlx5_devlink_max_uc_list_validate(struct devlink *devlink, u32 id,
@@ -791,11 +720,12 @@ static int mlx5_devlink_max_uc_list_validate(struct devlink *devlink, u32 id,
return 0;
}
-static const struct devlink_param max_uc_list_param =
+static const struct devlink_param mlx5_devlink_max_uc_list_params[] = {
DEVLINK_PARAM_GENERIC(MAX_MACS, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT),
- NULL, NULL, mlx5_devlink_max_uc_list_validate);
+ NULL, NULL, mlx5_devlink_max_uc_list_validate),
+};
-static int mlx5_devlink_max_uc_list_param_register(struct devlink *devlink)
+static int mlx5_devlink_max_uc_list_params_register(struct devlink *devlink)
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
union devlink_param_value value;
@@ -804,26 +734,28 @@ static int mlx5_devlink_max_uc_list_param_register(struct devlink *devlink)
if (!MLX5_CAP_GEN_MAX(dev, log_max_current_uc_list_wr_supported))
return 0;
- err = devlink_param_register(devlink, &max_uc_list_param);
+ err = devl_params_register(devlink, mlx5_devlink_max_uc_list_params,
+ ARRAY_SIZE(mlx5_devlink_max_uc_list_params));
if (err)
return err;
value.vu32 = 1 << MLX5_CAP_GEN(dev, log_max_current_uc_list);
- devlink_param_driverinit_value_set(devlink,
- DEVLINK_PARAM_GENERIC_ID_MAX_MACS,
- value);
+ devl_param_driverinit_value_set(devlink,
+ DEVLINK_PARAM_GENERIC_ID_MAX_MACS,
+ value);
return 0;
}
static void
-mlx5_devlink_max_uc_list_param_unregister(struct devlink *devlink)
+mlx5_devlink_max_uc_list_params_unregister(struct devlink *devlink)
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
if (!MLX5_CAP_GEN_MAX(dev, log_max_current_uc_list_wr_supported))
return;
- devlink_param_unregister(devlink, &max_uc_list_param);
+ devl_params_unregister(devlink, mlx5_devlink_max_uc_list_params,
+ ARRAY_SIZE(mlx5_devlink_max_uc_list_params));
}
#define MLX5_TRAP_DROP(_id, _group_id) \
@@ -869,13 +801,12 @@ void mlx5_devlink_traps_unregister(struct devlink *devlink)
ARRAY_SIZE(mlx5_trap_groups_arr));
}
-int mlx5_devlink_register(struct devlink *devlink)
+int mlx5_devlink_params_register(struct devlink *devlink)
{
- struct mlx5_core_dev *dev = devlink_priv(devlink);
int err;
- err = devlink_params_register(devlink, mlx5_devlink_params,
- ARRAY_SIZE(mlx5_devlink_params));
+ err = devl_params_register(devlink, mlx5_devlink_params,
+ ARRAY_SIZE(mlx5_devlink_params));
if (err)
return err;
@@ -885,27 +816,24 @@ int mlx5_devlink_register(struct devlink *devlink)
if (err)
goto auxdev_reg_err;
- err = mlx5_devlink_max_uc_list_param_register(devlink);
+ err = mlx5_devlink_max_uc_list_params_register(devlink);
if (err)
goto max_uc_list_err;
- if (!mlx5_core_is_mp_slave(dev))
- devlink_set_features(devlink, DEVLINK_F_RELOAD);
-
return 0;
max_uc_list_err:
mlx5_devlink_auxdev_params_unregister(devlink);
auxdev_reg_err:
- devlink_params_unregister(devlink, mlx5_devlink_params,
- ARRAY_SIZE(mlx5_devlink_params));
+ devl_params_unregister(devlink, mlx5_devlink_params,
+ ARRAY_SIZE(mlx5_devlink_params));
return err;
}
-void mlx5_devlink_unregister(struct devlink *devlink)
+void mlx5_devlink_params_unregister(struct devlink *devlink)
{
- mlx5_devlink_max_uc_list_param_unregister(devlink);
+ mlx5_devlink_max_uc_list_params_unregister(devlink);
mlx5_devlink_auxdev_params_unregister(devlink);
- devlink_params_unregister(devlink, mlx5_devlink_params,
- ARRAY_SIZE(mlx5_devlink_params));
+ devl_params_unregister(devlink, mlx5_devlink_params,
+ ARRAY_SIZE(mlx5_devlink_params));
}
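The pattern repeated throughout this file — wrap each generic parameter in a one-element array, register it with the devlink-instance-locked devl_params_register(), then seed its driverinit default — reduces to the sketch below. The devl_* calls are the API the patch converts to; the my_driver_* names are hypothetical.

static const struct devlink_param my_driver_params[] = {
	DEVLINK_PARAM_GENERIC(ENABLE_ETH, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT),
			      NULL, NULL, NULL),
};

static int my_driver_params_register(struct devlink *devlink)
{
	union devlink_param_value value;
	int err;

	/* Register the whole array in one call; the devl_ prefix means the
	 * caller already holds the devlink instance lock.
	 */
	err = devl_params_register(devlink, my_driver_params,
				   ARRAY_SIZE(my_driver_params));
	if (err)
		return err;

	value.vbool = true;
	devl_param_driverinit_value_set(devlink,
					DEVLINK_PARAM_GENERIC_ID_ENABLE_ETH,
					value);
	return 0;
}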
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.h b/drivers/net/ethernet/mellanox/mlx5/core/devlink.h
index fd033df24856..212b12424146 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.h
@@ -11,6 +11,7 @@ enum mlx5_devlink_param_id {
MLX5_DEVLINK_PARAM_ID_FLOW_STEERING_MODE,
MLX5_DEVLINK_PARAM_ID_ESW_LARGE_GROUP_NUM,
MLX5_DEVLINK_PARAM_ID_ESW_PORT_METADATA,
+ MLX5_DEVLINK_PARAM_ID_ESW_MULTIPORT,
};
struct mlx5_trap_ctx {
@@ -24,6 +25,11 @@ struct mlx5_devlink_trap {
struct list_head list;
};
+struct mlx5_devlink_trap_event_ctx {
+ struct mlx5_trap_ctx *trap;
+ int err;
+};
+
struct mlx5_core_dev;
void mlx5_devlink_trap_report(struct mlx5_core_dev *dev, int trap_id, struct sk_buff *skb,
struct devlink_port *dl_port);
@@ -35,7 +41,7 @@ void mlx5_devlink_traps_unregister(struct devlink *devlink);
struct devlink *mlx5_devlink_alloc(struct device *dev);
void mlx5_devlink_free(struct devlink *devlink);
-int mlx5_devlink_register(struct devlink *devlink);
-void mlx5_devlink_unregister(struct devlink *devlink);
+int mlx5_devlink_params_register(struct devlink *devlink);
+void mlx5_devlink_params_unregister(struct devlink *devlink);
#endif /* __MLX5_DEVLINK_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.c
index 2732128e7a6e..6d73127b7217 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.c
@@ -275,6 +275,10 @@ const char *parse_fs_dst(struct trace_seq *p,
fs_dest_range_field_to_str(dst->range.field),
dst->range.min, dst->range.max);
break;
+ case MLX5_FLOW_DESTINATION_TYPE_TABLE_TYPE:
+ trace_seq_printf(p, "flow_table_type=%u id:%u\n", dst->ft->type,
+ dst->ft->id);
+ break;
case MLX5_FLOW_DESTINATION_TYPE_NONE:
trace_seq_printf(p, "none\n");
break;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
index 5b05b884b5fb..f40497823e65 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
@@ -234,6 +234,8 @@ static int mlx5_fw_tracer_allocate_strings_db(struct mlx5_fw_tracer *tracer)
int i;
for (i = 0; i < num_string_db; i++) {
+ if (!string_db_size_out[i])
+ continue;
tracer->str_db.buffer[i] = kzalloc(string_db_size_out[i], GFP_KERNEL);
if (!tracer->str_db.buffer[i])
goto free_strings_db;
@@ -279,6 +281,8 @@ static void mlx5_tracer_read_strings_db(struct work_struct *work)
}
for (i = 0; i < num_string_db; i++) {
+ if (!tracer->str_db.size_out[i])
+ continue;
offset = 0;
MLX5_SET(mtrc_stdb, in, string_db_index, i);
num_of_reads = tracer->str_db.size_out[i] /
@@ -385,6 +389,8 @@ static struct tracer_string_format *mlx5_tracer_get_string(struct mlx5_fw_tracer
str_ptr = tracer_event->string_event.string_param;
for (i = 0; i < tracer->str_db.num_string_db; i++) {
+ if (!tracer->str_db.size_out[i])
+ continue;
if (str_ptr > tracer->str_db.base_address_out[i] &&
str_ptr < tracer->str_db.base_address_out[i] +
tracer->str_db.size_out[i]) {
@@ -460,6 +466,7 @@ static void poll_trace(struct mlx5_fw_tracer *tracer,
tracer_event->event_id = MLX5_GET(tracer_event, trace, event_id);
tracer_event->lost_event = MLX5_GET(tracer_event, trace, lost);
+ tracer_event->out = trace;
switch (tracer_event->event_id) {
case TRACER_EVENT_TYPE_TIMESTAMP:
@@ -582,6 +589,26 @@ void mlx5_tracer_print_trace(struct tracer_string_format *str_frmt,
mlx5_tracer_clean_message(str_frmt);
}
+static int mlx5_tracer_handle_raw_string(struct mlx5_fw_tracer *tracer,
+ struct tracer_event *tracer_event)
+{
+ struct tracer_string_format *cur_string;
+
+ cur_string = mlx5_tracer_message_insert(tracer, tracer_event);
+ if (!cur_string)
+ return -1;
+
+ cur_string->event_id = tracer_event->event_id;
+ cur_string->timestamp = tracer_event->string_event.timestamp;
+ cur_string->lost = tracer_event->lost_event;
+ cur_string->string = "0x%08x%08x";
+ cur_string->num_of_params = 2;
+ cur_string->params[0] = upper_32_bits(*tracer_event->out);
+ cur_string->params[1] = lower_32_bits(*tracer_event->out);
+ list_add_tail(&cur_string->list, &tracer->ready_strings_list);
+ return 0;
+}
+
static int mlx5_tracer_handle_string_trace(struct mlx5_fw_tracer *tracer,
struct tracer_event *tracer_event)
{
@@ -590,7 +617,7 @@ static int mlx5_tracer_handle_string_trace(struct mlx5_fw_tracer *tracer,
if (tracer_event->string_event.tdsn == 0) {
cur_string = mlx5_tracer_get_string(tracer, tracer_event);
if (!cur_string)
- return -1;
+ return mlx5_tracer_handle_raw_string(tracer, tracer_event);
cur_string->num_of_params = mlx5_tracer_get_num_of_params(cur_string->string);
cur_string->last_param_num = 0;
@@ -603,9 +630,9 @@ static int mlx5_tracer_handle_string_trace(struct mlx5_fw_tracer *tracer,
} else {
cur_string = mlx5_tracer_message_get(tracer, tracer_event);
if (!cur_string) {
- pr_debug("%s Got string event for unknown string tdsm: %d\n",
+ pr_debug("%s Got string event for unknown string tmsn: %d\n",
__func__, tracer_event->string_event.tmsn);
- return -1;
+ return mlx5_tracer_handle_raw_string(tracer, tracer_event);
}
cur_string->last_param_num += 1;
if (cur_string->last_param_num > TRACER_MAX_PARAMS) {
@@ -931,6 +958,14 @@ unlock:
return err;
}
+static void mlx5_fw_tracer_update_db(struct work_struct *work)
+{
+ struct mlx5_fw_tracer *tracer =
+ container_of(work, struct mlx5_fw_tracer, update_db_work);
+
+ mlx5_fw_tracer_reload(tracer);
+}
+
/* Create software resources (Buffers, etc ..) */
struct mlx5_fw_tracer *mlx5_fw_tracer_create(struct mlx5_core_dev *dev)
{
@@ -958,6 +993,8 @@ struct mlx5_fw_tracer *mlx5_fw_tracer_create(struct mlx5_core_dev *dev)
INIT_WORK(&tracer->ownership_change_work, mlx5_fw_tracer_ownership_change);
INIT_WORK(&tracer->read_fw_strings_work, mlx5_tracer_read_strings_db);
INIT_WORK(&tracer->handle_traces_work, mlx5_fw_tracer_handle_traces);
+ INIT_WORK(&tracer->update_db_work, mlx5_fw_tracer_update_db);
+ mutex_init(&tracer->state_lock);
err = mlx5_query_mtrc_caps(tracer);
@@ -1004,11 +1041,15 @@ int mlx5_fw_tracer_init(struct mlx5_fw_tracer *tracer)
if (IS_ERR_OR_NULL(tracer))
return 0;
- dev = tracer->dev;
-
if (!tracer->str_db.loaded)
queue_work(tracer->work_queue, &tracer->read_fw_strings_work);
+ mutex_lock(&tracer->state_lock);
+ if (test_and_set_bit(MLX5_TRACER_STATE_UP, &tracer->state))
+ goto unlock;
+
+ dev = tracer->dev;
+
err = mlx5_core_alloc_pd(dev, &tracer->buff.pdn);
if (err) {
mlx5_core_warn(dev, "FWTracer: Failed to allocate PD %d\n", err);
@@ -1029,6 +1070,8 @@ int mlx5_fw_tracer_init(struct mlx5_fw_tracer *tracer)
mlx5_core_warn(dev, "FWTracer: Failed to start tracer %d\n", err);
goto err_notifier_unregister;
}
+unlock:
+ mutex_unlock(&tracer->state_lock);
return 0;
err_notifier_unregister:
@@ -1038,6 +1081,7 @@ err_dealloc_pd:
mlx5_core_dealloc_pd(dev, tracer->buff.pdn);
err_cancel_work:
cancel_work_sync(&tracer->read_fw_strings_work);
+ mutex_unlock(&tracer->state_lock);
return err;
}
@@ -1047,17 +1091,27 @@ void mlx5_fw_tracer_cleanup(struct mlx5_fw_tracer *tracer)
if (IS_ERR_OR_NULL(tracer))
return;
+ mutex_lock(&tracer->state_lock);
+ if (!test_and_clear_bit(MLX5_TRACER_STATE_UP, &tracer->state))
+ goto unlock;
+
mlx5_core_dbg(tracer->dev, "FWTracer: Cleanup, is owner ? (%d)\n",
tracer->owner);
mlx5_eq_notifier_unregister(tracer->dev, &tracer->nb);
cancel_work_sync(&tracer->ownership_change_work);
cancel_work_sync(&tracer->handle_traces_work);
+ /* It is valid to get here from update_db_work. Hence, don't wait for
+ * update_db_work to finish.
+ */
+ cancel_work(&tracer->update_db_work);
if (tracer->owner)
mlx5_fw_tracer_ownership_release(tracer);
mlx5_core_destroy_mkey(tracer->dev, tracer->buff.mkey);
mlx5_core_dealloc_pd(tracer->dev, tracer->buff.pdn);
+unlock:
+ mutex_unlock(&tracer->state_lock);
}
/* Free software resources (Buffers, etc ..) */
@@ -1074,6 +1128,7 @@ void mlx5_fw_tracer_destroy(struct mlx5_fw_tracer *tracer)
mlx5_fw_tracer_clean_saved_traces_array(tracer);
mlx5_fw_tracer_free_strings_db(tracer);
mlx5_fw_tracer_destroy_log_buf(tracer);
+ mutex_destroy(&tracer->state_lock);
destroy_workqueue(tracer->work_queue);
kvfree(tracer);
}
@@ -1083,6 +1138,8 @@ static int mlx5_fw_tracer_recreate_strings_db(struct mlx5_fw_tracer *tracer)
struct mlx5_core_dev *dev;
int err;
+ if (test_and_set_bit(MLX5_TRACER_RECREATE_DB, &tracer->state))
+ return 0;
cancel_work_sync(&tracer->read_fw_strings_work);
mlx5_fw_tracer_clean_ready_list(tracer);
mlx5_fw_tracer_clean_print_hash(tracer);
@@ -1093,17 +1150,18 @@ static int mlx5_fw_tracer_recreate_strings_db(struct mlx5_fw_tracer *tracer)
err = mlx5_query_mtrc_caps(tracer);
if (err) {
mlx5_core_dbg(dev, "FWTracer: Failed to query capabilities %d\n", err);
- return err;
+ goto out;
}
err = mlx5_fw_tracer_allocate_strings_db(tracer);
if (err) {
mlx5_core_warn(dev, "FWTracer: Allocate strings DB failed %d\n", err);
- return err;
+ goto out;
}
mlx5_fw_tracer_init_saved_traces_array(tracer);
-
- return 0;
+out:
+ clear_bit(MLX5_TRACER_RECREATE_DB, &tracer->state);
+ return err;
}
int mlx5_fw_tracer_reload(struct mlx5_fw_tracer *tracer)
@@ -1143,6 +1201,9 @@ static int fw_tracer_event(struct notifier_block *nb, unsigned long action, void
case MLX5_TRACER_SUBTYPE_TRACES_AVAILABLE:
queue_work(tracer->work_queue, &tracer->handle_traces_work);
break;
+ case MLX5_TRACER_SUBTYPE_STRINGS_DB_UPDATE:
+ queue_work(tracer->work_queue, &tracer->update_db_work);
+ break;
default:
mlx5_core_dbg(dev, "FWTracer: Event with unrecognized subtype: sub_type %d\n",
eqe->sub_type);
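The fw_tracer hunks above funnel init, cleanup, and the new update_db_work reload through one state mutex plus an UP bit, making both paths idempotent. A distilled sketch of that guard — start_hw_tracing()/stop_hw_tracing() stand in for the PD/mkey/start-tracer steps and are not real driver functions:

static int tracer_up(struct mlx5_fw_tracer *tracer)
{
	int err = 0;

	mutex_lock(&tracer->state_lock);
	if (test_and_set_bit(MLX5_TRACER_STATE_UP, &tracer->state))
		goto unlock;	/* already up: a second init is a no-op */
	err = start_hw_tracing(tracer);	/* placeholder */
	if (err)
		clear_bit(MLX5_TRACER_STATE_UP, &tracer->state);
unlock:
	mutex_unlock(&tracer->state_lock);
	return err;
}

static void tracer_down(struct mlx5_fw_tracer *tracer)
{
	mutex_lock(&tracer->state_lock);
	if (!test_and_clear_bit(MLX5_TRACER_STATE_UP, &tracer->state))
		goto unlock;	/* never started: nothing to undo */
	stop_hw_tracing(tracer);	/* placeholder */
unlock:
	mutex_unlock(&tracer->state_lock);
}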
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h
index 4762b55b0b0e..5c548bb74f07 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h
@@ -63,6 +63,11 @@ struct mlx5_fw_trace_data {
char msg[TRACE_STR_MSG];
};
+enum mlx5_fw_tracer_state {
+ MLX5_TRACER_STATE_UP = BIT(0),
+ MLX5_TRACER_RECREATE_DB = BIT(1),
+};
+
struct mlx5_fw_tracer {
struct mlx5_core_dev *dev;
struct mlx5_nb nb;
@@ -104,6 +109,9 @@ struct mlx5_fw_tracer {
struct work_struct handle_traces_work;
struct hlist_head hash[MESSAGE_HASH_SIZE];
struct list_head ready_strings_list;
+ struct work_struct update_db_work;
+ struct mutex state_lock; /* Synchronize update work with reload flows */
+ unsigned long state;
};
struct tracer_string_format {
@@ -158,6 +166,7 @@ struct tracer_event {
struct tracer_string_event string_event;
struct tracer_timestamp_event timestamp_event;
};
+ u64 *out;
};
struct mlx5_ifc_tracer_event_bits {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c b/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c
index cdc87ecae5d3..9a3878f9e582 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c
@@ -75,6 +75,10 @@ int mlx5_ec_init(struct mlx5_core_dev *dev)
if (!mlx5_core_is_ecpf(dev))
return 0;
+ /* The management PF doesn't have a peer PF */
+ if (mlx5_core_is_management_pf(dev))
+ return 0;
+
return mlx5_host_pf_init(dev);
}
@@ -85,6 +89,10 @@ void mlx5_ec_cleanup(struct mlx5_core_dev *dev)
if (!mlx5_core_is_ecpf(dev))
return;
+ /* The management PF doesn't have a peer PF */
+ if (mlx5_core_is_management_pf(dev))
+ return;
+
mlx5_host_pf_cleanup(dev);
err = mlx5_wait_for_pages(dev, &dev->priv.page_counters[MLX5_HOST_PF]);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 2d77fb8a8a01..88460b7796e5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -247,7 +247,7 @@ struct mlx5e_rx_wqe_ll {
};
struct mlx5e_rx_wqe_cyc {
- struct mlx5_wqe_data_seg data[0];
+ DECLARE_FLEX_ARRAY(struct mlx5_wqe_data_seg, data);
};
struct mlx5e_umr_wqe {
@@ -454,6 +454,7 @@ struct mlx5e_txqsq {
struct mlx5_clock *clock;
struct net_device *netdev;
struct mlx5_core_dev *mdev;
+ struct mlx5e_channel *channel;
struct mlx5e_priv *priv;
/* control path */
@@ -626,10 +627,11 @@ struct mlx5e_rq;
typedef void (*mlx5e_fp_handle_rx_cqe)(struct mlx5e_rq*, struct mlx5_cqe64*);
typedef struct sk_buff *
(*mlx5e_fp_skb_from_cqe_mpwrq)(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
- u16 cqe_bcnt, u32 head_offset, u32 page_idx);
+ struct mlx5_cqe64 *cqe, u16 cqe_bcnt,
+ u32 head_offset, u32 page_idx);
typedef struct sk_buff *
(*mlx5e_fp_skb_from_cqe)(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
- u32 cqe_bcnt);
+ struct mlx5_cqe64 *cqe, u32 cqe_bcnt);
typedef bool (*mlx5e_fp_post_rx_wqes)(struct mlx5e_rq *rq);
typedef void (*mlx5e_fp_dealloc_wqe)(struct mlx5e_rq*, u16);
typedef void (*mlx5e_fp_shampo_dealloc_hd)(struct mlx5e_rq*, u16, u16, bool);
@@ -968,6 +970,12 @@ struct mlx5e_priv {
struct mlx5e_scratchpad scratchpad;
struct mlx5e_htb *htb;
struct mlx5e_mqprio_rl *mqprio_rl;
+ struct dentry *dfs_root;
+};
+
+struct mlx5e_dev {
+ struct mlx5e_priv *priv;
+ struct devlink_port dl_port;
};
struct mlx5e_rx_handlers {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.c
index 83adaabf59f5..c6b6e290fd79 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.c
@@ -4,6 +4,31 @@
#include "en/devlink.h"
#include "eswitch.h"
+static const struct devlink_ops mlx5e_devlink_ops = {
+};
+
+struct mlx5e_dev *mlx5e_create_devlink(struct device *dev,
+ struct mlx5_core_dev *mdev)
+{
+ struct mlx5e_dev *mlx5e_dev;
+ struct devlink *devlink;
+
+ devlink = devlink_alloc_ns(&mlx5e_devlink_ops, sizeof(*mlx5e_dev),
+ devlink_net(priv_to_devlink(mdev)), dev);
+ if (!devlink)
+ return ERR_PTR(-ENOMEM);
+ devlink_register(devlink);
+ return devlink_priv(devlink);
+}
+
+void mlx5e_destroy_devlink(struct mlx5e_dev *mlx5e_dev)
+{
+ struct devlink *devlink = priv_to_devlink(mlx5e_dev);
+
+ devlink_unregister(devlink);
+ devlink_free(devlink);
+}
+
static void
mlx5e_devlink_get_port_parent_id(struct mlx5_core_dev *dev, struct netdev_phys_item_id *ppid)
{
@@ -14,51 +39,36 @@ mlx5e_devlink_get_port_parent_id(struct mlx5_core_dev *dev, struct netdev_phys_i
memcpy(ppid->id, &parent_id, sizeof(parent_id));
}
-int mlx5e_devlink_port_register(struct mlx5e_priv *priv)
+int mlx5e_devlink_port_register(struct mlx5e_dev *mlx5e_dev,
+ struct mlx5_core_dev *mdev)
{
- struct devlink *devlink = priv_to_devlink(priv->mdev);
+ struct devlink *devlink = priv_to_devlink(mlx5e_dev);
struct devlink_port_attrs attrs = {};
struct netdev_phys_item_id ppid = {};
- struct devlink_port *dl_port;
unsigned int dl_port_index;
- int ret;
- if (mlx5_core_is_pf(priv->mdev)) {
+ if (mlx5_core_is_pf(mdev)) {
attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
- attrs.phys.port_number = mlx5_get_dev_index(priv->mdev);
- if (MLX5_ESWITCH_MANAGER(priv->mdev)) {
- mlx5e_devlink_get_port_parent_id(priv->mdev, &ppid);
+ attrs.phys.port_number = mlx5_get_dev_index(mdev);
+ if (MLX5_ESWITCH_MANAGER(mdev)) {
+ mlx5e_devlink_get_port_parent_id(mdev, &ppid);
memcpy(attrs.switch_id.id, ppid.id, ppid.id_len);
attrs.switch_id.id_len = ppid.id_len;
}
- dl_port_index = mlx5_esw_vport_to_devlink_port_index(priv->mdev,
+ dl_port_index = mlx5_esw_vport_to_devlink_port_index(mdev,
MLX5_VPORT_UPLINK);
} else {
attrs.flavour = DEVLINK_PORT_FLAVOUR_VIRTUAL;
- dl_port_index = mlx5_esw_vport_to_devlink_port_index(priv->mdev, 0);
+ dl_port_index = mlx5_esw_vport_to_devlink_port_index(mdev, 0);
}
- dl_port = mlx5e_devlink_get_dl_port(priv);
- memset(dl_port, 0, sizeof(*dl_port));
- devlink_port_attrs_set(dl_port, &attrs);
+ devlink_port_attrs_set(&mlx5e_dev->dl_port, &attrs);
- if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW))
- devl_lock(devlink);
- ret = devl_port_register(devlink, dl_port, dl_port_index);
- if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW))
- devl_unlock(devlink);
-
- return ret;
+ return devlink_port_register(devlink, &mlx5e_dev->dl_port,
+ dl_port_index);
}
-void mlx5e_devlink_port_unregister(struct mlx5e_priv *priv)
+void mlx5e_devlink_port_unregister(struct mlx5e_dev *mlx5e_dev)
{
- struct devlink_port *dl_port = mlx5e_devlink_get_dl_port(priv);
- struct devlink *devlink = priv_to_devlink(priv->mdev);
-
- if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW))
- devl_lock(devlink);
- devl_port_unregister(dl_port);
- if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW))
- devl_unlock(devlink);
+ devlink_port_unregister(&mlx5e_dev->dl_port);
}
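Taken together, the helpers above give each mlx5e instance its own auxiliary devlink, allocated in the same net namespace as the parent mlx5 devlink. A condensed view of the intended call order (sketch; error unwinding abbreviated):

struct mlx5e_dev *mlx5e_dev;
int err;

mlx5e_dev = mlx5e_create_devlink(dev, mdev);	/* devlink_alloc_ns() + devlink_register() */
if (IS_ERR(mlx5e_dev))
	return PTR_ERR(mlx5e_dev);

err = mlx5e_devlink_port_register(mlx5e_dev, mdev);	/* devlink_port_register(&mlx5e_dev->dl_port) */
if (err)
	goto err_destroy;

/* ... netdev lifetime ... */

mlx5e_devlink_port_unregister(mlx5e_dev);
err_destroy:
	mlx5e_destroy_devlink(mlx5e_dev);	/* devlink_unregister() + devlink_free() */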
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.h b/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.h
index 4f238d4fff55..d5ec4461f300 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.h
@@ -7,13 +7,11 @@
#include <net/devlink.h>
#include "en.h"
-int mlx5e_devlink_port_register(struct mlx5e_priv *priv);
-void mlx5e_devlink_port_unregister(struct mlx5e_priv *priv);
-
-static inline struct devlink_port *
-mlx5e_devlink_get_dl_port(struct mlx5e_priv *priv)
-{
- return &priv->mdev->mlx5e_res.dl_port;
-}
+struct mlx5e_dev *mlx5e_create_devlink(struct device *dev,
+ struct mlx5_core_dev *mdev);
+void mlx5e_destroy_devlink(struct mlx5e_dev *mlx5e_dev);
+int mlx5e_devlink_port_register(struct mlx5e_dev *mlx5e_dev,
+ struct mlx5_core_dev *mdev);
+void mlx5e_devlink_port_unregister(struct mlx5e_dev *mlx5e_dev);
#endif
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
index 379c6dc9a3be..e5a44b0b9616 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
@@ -87,6 +87,7 @@ enum {
MLX5E_ACCEL_FS_POL_FT_LEVEL = MLX5E_INNER_TTC_FT_LEVEL + 1,
MLX5E_ACCEL_FS_ESP_FT_LEVEL,
MLX5E_ACCEL_FS_ESP_FT_ERR_LEVEL,
+ MLX5E_ACCEL_FS_ESP_FT_ROCE_LEVEL,
#endif
};
@@ -145,7 +146,8 @@ void mlx5e_destroy_flow_steering(struct mlx5e_flow_steering *fs, bool ntuple,
struct mlx5e_flow_steering *mlx5e_fs_init(const struct mlx5e_profile *profile,
struct mlx5_core_dev *mdev,
- bool state_destroy);
+ bool state_destroy,
+ struct dentry *dfs_root);
void mlx5e_fs_cleanup(struct mlx5e_flow_steering *fs);
struct mlx5e_vlan_table *mlx5e_fs_get_vlan(struct mlx5e_flow_steering *fs);
void mlx5e_fs_set_tc(struct mlx5e_flow_steering *fs, struct mlx5e_tc_table *tc);
@@ -189,6 +191,8 @@ int mlx5e_fs_vlan_rx_kill_vid(struct mlx5e_flow_steering *fs,
__be16 proto, u16 vid);
void mlx5e_fs_init_l2_addr(struct mlx5e_flow_steering *fs, struct net_device *netdev);
+struct dentry *mlx5e_fs_get_debugfs_root(struct mlx5e_flow_steering *fs);
+
#define fs_err(fs, fmt, ...) \
mlx5_core_err(mlx5e_fs_get_mdev(fs), fmt, ##__VA_ARGS__)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c b/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c
index 17325c5d6516..cf60f0a3ff23 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c
@@ -47,6 +47,7 @@ void mlx5e_mod_hdr_tbl_init(struct mod_hdr_tbl *tbl)
void mlx5e_mod_hdr_tbl_destroy(struct mod_hdr_tbl *tbl)
{
+ WARN_ON(!hash_empty(tbl->hlist));
mutex_destroy(&tbl->lock);
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index 4ad19c981294..a21bd1179477 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -411,9 +411,14 @@ u8 mlx5e_mpwqe_get_log_num_strides(struct mlx5_core_dev *mdev,
{
enum mlx5e_mpwrq_umr_mode umr_mode = mlx5e_mpwrq_umr_mode(mdev, xsk);
u8 page_shift = mlx5e_mpwrq_page_shift(mdev, xsk);
+ u8 log_wqe_size, log_stride_size;
- return mlx5e_mpwrq_log_wqe_sz(mdev, page_shift, umr_mode) -
- mlx5e_mpwqe_get_log_stride_size(mdev, params, xsk);
+ log_wqe_size = mlx5e_mpwrq_log_wqe_sz(mdev, page_shift, umr_mode);
+ log_stride_size = mlx5e_mpwqe_get_log_stride_size(mdev, params, xsk);
+ WARN(log_wqe_size < log_stride_size,
+ "Log WQE size %u < log stride size %u (page shift %u, umr mode %d, xsk on? %d)\n",
+ log_wqe_size, log_stride_size, page_shift, umr_mode, !!xsk);
+ return log_wqe_size - log_stride_size;
}
u8 mlx5e_mpwqe_get_min_wqe_bulk(unsigned int wq_sz)
@@ -580,11 +585,16 @@ int mlx5e_mpwrq_validate_xsk(struct mlx5_core_dev *mdev, struct mlx5e_params *pa
u8 page_shift = mlx5e_mpwrq_page_shift(mdev, xsk);
u16 max_mtu_pkts;
- if (!mlx5e_check_fragmented_striding_rq_cap(mdev, page_shift, umr_mode))
+ if (!mlx5e_check_fragmented_striding_rq_cap(mdev, page_shift, umr_mode)) {
+ mlx5_core_err(mdev, "Striding RQ for XSK can't be activated with page_shift %u and umr_mode %d\n",
+ page_shift, umr_mode);
return -EOPNOTSUPP;
+ }
- if (!mlx5e_rx_mpwqe_is_linear_skb(mdev, params, xsk))
+ if (!mlx5e_rx_mpwqe_is_linear_skb(mdev, params, xsk)) {
+ mlx5_core_err(mdev, "Striding RQ linear mode for XSK can't be activated with current params\n");
return -EINVAL;
+ }
/* Current RQ length is too big for the given frame size, the
* needed number of WQEs exceeds the maximum.
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
index 89510cac46c2..505ba41195b9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
@@ -287,6 +287,78 @@ int mlx5e_port_set_pbmc(struct mlx5_core_dev *mdev, void *in)
return err;
}
+int mlx5e_port_query_sbpr(struct mlx5_core_dev *mdev, u32 desc, u8 dir,
+ u8 pool_idx, void *out, int size_out)
+{
+ u32 in[MLX5_ST_SZ_DW(sbpr_reg)] = {};
+
+ MLX5_SET(sbpr_reg, in, desc, desc);
+ MLX5_SET(sbpr_reg, in, dir, dir);
+ MLX5_SET(sbpr_reg, in, pool, pool_idx);
+
+ return mlx5_core_access_reg(mdev, in, sizeof(in), out, size_out, MLX5_REG_SBPR, 0, 0);
+}
+
+int mlx5e_port_set_sbpr(struct mlx5_core_dev *mdev, u32 desc, u8 dir,
+ u8 pool_idx, u32 infi_size, u32 size)
+{
+ u32 out[MLX5_ST_SZ_DW(sbpr_reg)] = {};
+ u32 in[MLX5_ST_SZ_DW(sbpr_reg)] = {};
+
+ MLX5_SET(sbpr_reg, in, desc, desc);
+ MLX5_SET(sbpr_reg, in, dir, dir);
+ MLX5_SET(sbpr_reg, in, pool, pool_idx);
+ MLX5_SET(sbpr_reg, in, infi_size, infi_size);
+ MLX5_SET(sbpr_reg, in, size, size);
+ MLX5_SET(sbpr_reg, in, mode, 1);
+
+ return mlx5_core_access_reg(mdev, in, sizeof(in), out, sizeof(out), MLX5_REG_SBPR, 0, 1);
+}
+
+static int mlx5e_port_query_sbcm(struct mlx5_core_dev *mdev, u32 desc,
+ u8 pg_buff_idx, u8 dir, void *out,
+ int size_out)
+{
+ u32 in[MLX5_ST_SZ_DW(sbcm_reg)] = {};
+
+ MLX5_SET(sbcm_reg, in, desc, desc);
+ MLX5_SET(sbcm_reg, in, local_port, 1);
+ MLX5_SET(sbcm_reg, in, pg_buff, pg_buff_idx);
+ MLX5_SET(sbcm_reg, in, dir, dir);
+
+ return mlx5_core_access_reg(mdev, in, sizeof(in), out, size_out, MLX5_REG_SBCM, 0, 0);
+}
+
+int mlx5e_port_set_sbcm(struct mlx5_core_dev *mdev, u32 desc, u8 pg_buff_idx,
+ u8 dir, u8 infi_size, u32 max_buff, u8 pool_idx)
+{
+ u32 out[MLX5_ST_SZ_DW(sbcm_reg)] = {};
+ u32 in[MLX5_ST_SZ_DW(sbcm_reg)] = {};
+ u32 min_buff;
+ int err;
+ u8 exc;
+
+ err = mlx5e_port_query_sbcm(mdev, desc, pg_buff_idx, dir, out,
+ sizeof(out));
+ if (err)
+ return err;
+
+ exc = MLX5_GET(sbcm_reg, out, exc);
+ min_buff = MLX5_GET(sbcm_reg, out, min_buff);
+
+ MLX5_SET(sbcm_reg, in, desc, desc);
+ MLX5_SET(sbcm_reg, in, local_port, 1);
+ MLX5_SET(sbcm_reg, in, pg_buff, pg_buff_idx);
+ MLX5_SET(sbcm_reg, in, dir, dir);
+ MLX5_SET(sbcm_reg, in, exc, exc);
+ MLX5_SET(sbcm_reg, in, min_buff, min_buff);
+ MLX5_SET(sbcm_reg, in, infi_max, infi_size);
+ MLX5_SET(sbcm_reg, in, max_buff, max_buff);
+ MLX5_SET(sbcm_reg, in, pool, pool_idx);
+
+ return mlx5_core_access_reg(mdev, in, sizeof(in), out, sizeof(out), MLX5_REG_SBCM, 0, 1);
+}
+
/* buffer[i]: buffer that priority i mapped to */
int mlx5e_port_query_priority2buffer(struct mlx5_core_dev *mdev, u8 *buffer)
{
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port.h b/drivers/net/ethernet/mellanox/mlx5/core/en/port.h
index 7a7defe60792..3f474e370828 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port.h
@@ -57,6 +57,12 @@ u32 mlx5e_port_speed2linkmodes(struct mlx5_core_dev *mdev, u32 speed,
bool mlx5e_ptys_ext_supported(struct mlx5_core_dev *mdev);
int mlx5e_port_query_pbmc(struct mlx5_core_dev *mdev, void *out);
int mlx5e_port_set_pbmc(struct mlx5_core_dev *mdev, void *in);
+int mlx5e_port_query_sbpr(struct mlx5_core_dev *mdev, u32 desc, u8 dir,
+ u8 pool_idx, void *out, int size_out);
+int mlx5e_port_set_sbpr(struct mlx5_core_dev *mdev, u32 desc, u8 dir,
+ u8 pool_idx, u32 infi_size, u32 size);
+int mlx5e_port_set_sbcm(struct mlx5_core_dev *mdev, u32 desc, u8 pg_buff_idx,
+ u8 dir, u8 infi_size, u32 max_buff, u8 pool_idx);
int mlx5e_port_query_priority2buffer(struct mlx5_core_dev *mdev, u8 *buffer);
int mlx5e_port_set_priority2buffer(struct mlx5_core_dev *mdev, u8 *buffer);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
index c9d5d8d93994..7ac1ad9c46de 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
@@ -73,6 +73,7 @@ int mlx5e_port_query_buffer(struct mlx5e_priv *priv,
port_buffer->buffer[i].lossy);
}
+ port_buffer->headroom_size = total_used;
port_buffer->port_buffer_size =
MLX5_GET(pbmc_reg, out, port_buffer_size) * port_buff_cell_sz;
port_buffer->spare_buffer_size =
@@ -86,16 +87,204 @@ out:
return err;
}
+struct mlx5e_buffer_pool {
+ u32 infi_size;
+ u32 size;
+ u32 buff_occupancy;
+};
+
+static int mlx5e_port_query_pool(struct mlx5_core_dev *mdev,
+ struct mlx5e_buffer_pool *buffer_pool,
+ u32 desc, u8 dir, u8 pool_idx)
+{
+ u32 out[MLX5_ST_SZ_DW(sbpr_reg)] = {};
+ int err;
+
+ err = mlx5e_port_query_sbpr(mdev, desc, dir, pool_idx, out,
+ sizeof(out));
+ if (err)
+ return err;
+
+ buffer_pool->size = MLX5_GET(sbpr_reg, out, size);
+ buffer_pool->infi_size = MLX5_GET(sbpr_reg, out, infi_size);
+ buffer_pool->buff_occupancy = MLX5_GET(sbpr_reg, out, buff_occupancy);
+
+ return err;
+}
+
+enum {
+ MLX5_INGRESS_DIR = 0,
+ MLX5_EGRESS_DIR = 1,
+};
+
+enum {
+ MLX5_LOSSY_POOL = 0,
+ MLX5_LOSSLESS_POOL = 1,
+};
+
+/* No limit on usage of shared buffer pool (max_buff=0) */
+#define MLX5_SB_POOL_NO_THRESHOLD 0
+/* Shared buffer pool usage threshold when calculated
+ * dynamically in alpha units. alpha=13 is equivalent to
+ * HW_alpha of [(1/128) * 2 ^ (alpha-1)] = 32, where HW_alpha
+ * equates to the following portion of the shared buffer pool:
+ * [32 / (1 + n * 32)], where n is the number of buffers
+ * that are using the shared buffer pool.
+ */
+#define MLX5_SB_POOL_THRESHOLD 13
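+/* Example: with alpha=13 (HW_alpha=32) and n=8 buffers sharing the pool,
+ * each buffer may consume up to 32 / (1 + 8 * 32) ~= 12.5% of the shared
+ * buffer pool.
+ */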
+
+/* Shared buffer class management parameters */
+struct mlx5_sbcm_params {
+ u8 pool_idx;
+ u8 max_buff;
+ u8 infi_size;
+};
+
+static const struct mlx5_sbcm_params sbcm_default = {
+ .pool_idx = MLX5_LOSSY_POOL,
+ .max_buff = MLX5_SB_POOL_NO_THRESHOLD,
+ .infi_size = 0,
+};
+
+static const struct mlx5_sbcm_params sbcm_lossy = {
+ .pool_idx = MLX5_LOSSY_POOL,
+ .max_buff = MLX5_SB_POOL_NO_THRESHOLD,
+ .infi_size = 1,
+};
+
+static const struct mlx5_sbcm_params sbcm_lossless = {
+ .pool_idx = MLX5_LOSSLESS_POOL,
+ .max_buff = MLX5_SB_POOL_THRESHOLD,
+ .infi_size = 0,
+};
+
+static const struct mlx5_sbcm_params sbcm_lossless_no_threshold = {
+ .pool_idx = MLX5_LOSSLESS_POOL,
+ .max_buff = MLX5_SB_POOL_NO_THRESHOLD,
+ .infi_size = 1,
+};
+
+/**
+ * select_sbcm_params() - selects the shared buffer pool configuration
+ *
+ * @buffer: <input> port buffer to retrieve params of
+ * @lossless_buff_count: <input> number of lossless buffers in total
+ *
+ * The selection is based on the following rules:
+ * 1. If buffer size is 0, no shared buffer pool is used.
+ * 2. If buffer is lossy, use lossy shared buffer pool.
+ * 3. If there is more than one lossless buffer, use the lossless shared
+ *    buffer pool with a threshold.
+ * 4. If there is only one lossless buffer, use the lossless shared buffer
+ *    pool without a threshold.
+ *
+ * Return: the selected const struct mlx5_sbcm_params values.
+ */
+static const struct mlx5_sbcm_params *
+select_sbcm_params(struct mlx5e_bufferx_reg *buffer, u8 lossless_buff_count)
+{
+ if (buffer->size == 0)
+ return &sbcm_default;
+
+ if (buffer->lossy)
+ return &sbcm_lossy;
+
+ if (lossless_buff_count > 1)
+ return &sbcm_lossless;
+
+ return &sbcm_lossless_no_threshold;
+}
+
+static int port_update_pool_cfg(struct mlx5_core_dev *mdev,
+ struct mlx5e_port_buffer *port_buffer)
+{
+ const struct mlx5_sbcm_params *p;
+ u8 lossless_buff_count = 0;
+ int err;
+ int i;
+
+ if (!MLX5_CAP_GEN(mdev, sbcam_reg))
+ return 0;
+
+ for (i = 0; i < MLX5E_MAX_BUFFER; i++)
+ lossless_buff_count += ((port_buffer->buffer[i].size) &&
+ (!(port_buffer->buffer[i].lossy)));
+
+ for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
+ p = select_sbcm_params(&port_buffer->buffer[i], lossless_buff_count);
+ err = mlx5e_port_set_sbcm(mdev, 0, i,
+ MLX5_INGRESS_DIR,
+ p->infi_size,
+ p->max_buff,
+ p->pool_idx);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static int port_update_shared_buffer(struct mlx5_core_dev *mdev,
+ u32 current_headroom_size,
+ u32 new_headroom_size)
+{
+ struct mlx5e_buffer_pool lossless_ipool;
+ struct mlx5e_buffer_pool lossy_epool;
+ u32 lossless_ipool_size;
+ u32 shared_buffer_size;
+ u32 total_buffer_size;
+ u32 lossy_epool_size;
+ int err;
+
+ if (!MLX5_CAP_GEN(mdev, sbcam_reg))
+ return 0;
+
+ err = mlx5e_port_query_pool(mdev, &lossy_epool, 0, MLX5_EGRESS_DIR,
+ MLX5_LOSSY_POOL);
+ if (err)
+ return err;
+
+ err = mlx5e_port_query_pool(mdev, &lossless_ipool, 0, MLX5_INGRESS_DIR,
+ MLX5_LOSSLESS_POOL);
+ if (err)
+ return err;
+
+ total_buffer_size = current_headroom_size + lossy_epool.size +
+ lossless_ipool.size;
+ shared_buffer_size = total_buffer_size - new_headroom_size;
+
+ if (shared_buffer_size < 4) {
+ pr_err("Requested port buffer is too large, not enough space left for shared buffer\n");
+ return -EINVAL;
+ }
+
+ /* Total shared buffer size is split in a ratio of 3:1 between
+ * lossy and lossless pools respectively.
+ */
+ lossy_epool_size = (shared_buffer_size / 4) * 3;
+ lossless_ipool_size = shared_buffer_size / 4;
+
+ mlx5e_port_set_sbpr(mdev, 0, MLX5_EGRESS_DIR, MLX5_LOSSY_POOL, 0,
+ lossy_epool_size);
+ mlx5e_port_set_sbpr(mdev, 0, MLX5_INGRESS_DIR, MLX5_LOSSLESS_POOL, 0,
+ lossless_ipool_size);
+ return 0;
+}
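+
+/* Worked example of the split: with total_buffer_size = 4096 cells and
+ * new_headroom_size = 1024 cells, shared_buffer_size = 3072 cells, giving
+ * lossy_epool_size = 2304 and lossless_ipool_size = 768.
+ */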
+
static int port_set_buffer(struct mlx5e_priv *priv,
struct mlx5e_port_buffer *port_buffer)
{
u16 port_buff_cell_sz = priv->dcbx.port_buff_cell_sz;
struct mlx5_core_dev *mdev = priv->mdev;
int sz = MLX5_ST_SZ_BYTES(pbmc_reg);
+ u32 new_headroom_size = 0;
+ u32 current_headroom_size;
void *in;
int err;
int i;
+ current_headroom_size = port_buffer->headroom_size;
+
in = kzalloc(sz, GFP_KERNEL);
if (!in)
return -ENOMEM;
@@ -110,6 +299,7 @@ static int port_set_buffer(struct mlx5e_priv *priv,
u64 xoff = port_buffer->buffer[i].xoff;
u64 xon = port_buffer->buffer[i].xon;
+ new_headroom_size += size;
do_div(size, port_buff_cell_sz);
do_div(xoff, port_buff_cell_sz);
do_div(xon, port_buff_cell_sz);
@@ -119,6 +309,17 @@ static int port_set_buffer(struct mlx5e_priv *priv,
MLX5_SET(bufferx_reg, buffer, xon_threshold, xon);
}
+ new_headroom_size /= port_buff_cell_sz;
+ current_headroom_size /= port_buff_cell_sz;
+ err = port_update_shared_buffer(priv->mdev, current_headroom_size,
+ new_headroom_size);
+ if (err)
+ goto out;
+
+ err = port_update_pool_cfg(priv->mdev, port_buffer);
+ if (err)
+ goto out;
+
err = mlx5e_port_set_pbmc(mdev, in);
out:
kfree(in);
@@ -174,6 +375,7 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
/**
* update_buffer_lossy - Update buffer configuration based on pfc
+ * @mdev: port function core device
* @max_mtu: netdev's max_mtu
* @pfc_en: <input> current pfc configuration
* @buffer: <input> current prio to buffer mapping
@@ -192,7 +394,8 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
* @return: 0 if no error,
* sets change to true if buffer configuration was modified.
*/
-static int update_buffer_lossy(unsigned int max_mtu,
+static int update_buffer_lossy(struct mlx5_core_dev *mdev,
+ unsigned int max_mtu,
u8 pfc_en, u8 *buffer, u32 xoff, u16 port_buff_cell_sz,
struct mlx5e_port_buffer *port_buffer,
bool *change)
@@ -229,6 +432,10 @@ static int update_buffer_lossy(unsigned int max_mtu,
}
if (changed) {
+ err = port_update_pool_cfg(mdev, port_buffer);
+ if (err)
+ return err;
+
err = update_xoff_threshold(port_buffer, xoff, max_mtu, port_buff_cell_sz);
if (err)
return err;
@@ -293,23 +500,30 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
}
if (change & MLX5E_PORT_BUFFER_PFC) {
+ mlx5e_dbg(HW, priv, "%s: requested PFC per priority bitmask: 0x%x\n",
+ __func__, pfc->pfc_en);
err = mlx5e_port_query_priority2buffer(priv->mdev, buffer);
if (err)
return err;
- err = update_buffer_lossy(max_mtu, pfc->pfc_en, buffer, xoff, port_buff_cell_sz,
- &port_buffer, &update_buffer);
+ err = update_buffer_lossy(priv->mdev, max_mtu, pfc->pfc_en, buffer, xoff,
+ port_buff_cell_sz, &port_buffer,
+ &update_buffer);
if (err)
return err;
}
if (change & MLX5E_PORT_BUFFER_PRIO2BUFFER) {
update_prio2buffer = true;
+ for (i = 0; i < MLX5E_MAX_BUFFER; i++)
+ mlx5e_dbg(HW, priv, "%s: requested to map prio[%d] to buffer %d\n",
+ __func__, i, prio2buffer[i]);
+
err = fill_pfc_en(priv->mdev, &curr_pfc_en);
if (err)
return err;
- err = update_buffer_lossy(max_mtu, curr_pfc_en, prio2buffer, xoff,
+ err = update_buffer_lossy(priv->mdev, max_mtu, curr_pfc_en, prio2buffer, xoff,
port_buff_cell_sz, &port_buffer, &update_buffer);
if (err)
return err;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h
index 80af7a5ac604..a6ef118de758 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h
@@ -60,6 +60,7 @@ struct mlx5e_bufferx_reg {
struct mlx5e_port_buffer {
u32 port_buffer_size;
u32 spare_buffer_size;
+ u32 headroom_size;
struct mlx5e_bufferx_reg buffer[MLX5E_MAX_BUFFER];
};
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
index 8469e9c38670..9a1bc93b7dc6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
@@ -771,8 +771,8 @@ void mlx5e_ptp_activate_channel(struct mlx5e_ptp *c)
if (test_bit(MLX5E_PTP_STATE_RX, c->state)) {
mlx5e_ptp_rx_set_fs(c->priv);
mlx5e_activate_rq(&c->rq);
- mlx5e_trigger_napi_sched(&c->napi);
}
+ mlx5e_trigger_napi_sched(&c->napi);
}
void mlx5e_ptp_deactivate_channel(struct mlx5e_ptp *c)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
index b6f5c1bcdbcd..016a61c52c45 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
@@ -120,8 +120,8 @@ int mlx5e_rep_bond_enslave(struct mlx5_eswitch *esw, struct net_device *netdev,
priv = netdev_priv(netdev);
rpriv = priv->ppriv;
- err = mlx5_esw_acl_ingress_vport_bond_update(esw, rpriv->rep->vport,
- mdata->metadata_reg_c_0);
+ err = mlx5_esw_acl_ingress_vport_metadata_update(esw, rpriv->rep->vport,
+ mdata->metadata_reg_c_0);
if (err)
goto ingress_err;
@@ -167,7 +167,7 @@ void mlx5e_rep_bond_unslave(struct mlx5_eswitch *esw,
/* Reset bond_metadata to zero first then reset all ingress/egress
* acls and rx rules of unslave representor's vport
*/
- mlx5_esw_acl_ingress_vport_bond_update(esw, rpriv->rep->vport, 0);
+ mlx5_esw_acl_ingress_vport_metadata_update(esw, rpriv->rep->vport, 0);
mlx5_esw_acl_egress_vport_unbond(esw, rpriv->rep->vport);
mlx5e_rep_bond_update(priv, false);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
index b08339d986d5..e24b46953542 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
@@ -1,7 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
/* Copyright (c) 2020 Mellanox Technologies. */
-#include <net/dst_metadata.h>
#include <linux/netdevice.h>
#include <linux/if_macvlan.h>
#include <linux/list.h>
@@ -589,7 +588,7 @@ mlx5e_rep_indr_stats_act(struct mlx5e_rep_priv *rpriv,
act = mlx5e_tc_act_get(fl_act->id, ns_type);
if (!act || !act->stats_action)
- return -EOPNOTSUPP;
+ return mlx5e_tc_fill_action_stats(priv, fl_act);
return act->stats_action(priv, fl_act);
}
@@ -665,232 +664,54 @@ void mlx5e_rep_tc_netdevice_event_unregister(struct mlx5e_rep_priv *rpriv)
mlx5e_rep_indr_block_unbind);
}
-static bool mlx5e_restore_tunnel(struct mlx5e_priv *priv, struct sk_buff *skb,
- struct mlx5e_tc_update_priv *tc_priv,
- u32 tunnel_id)
-{
- struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
- struct tunnel_match_enc_opts enc_opts = {};
- struct mlx5_rep_uplink_priv *uplink_priv;
- struct mlx5e_rep_priv *uplink_rpriv;
- struct metadata_dst *tun_dst;
- struct tunnel_match_key key;
- u32 tun_id, enc_opts_id;
- struct net_device *dev;
- int err;
-
- enc_opts_id = tunnel_id & ENC_OPTS_BITS_MASK;
- tun_id = tunnel_id >> ENC_OPTS_BITS;
-
- if (!tun_id)
- return true;
-
- uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH);
- uplink_priv = &uplink_rpriv->uplink_priv;
-
- err = mapping_find(uplink_priv->tunnel_mapping, tun_id, &key);
- if (err) {
- netdev_dbg(priv->netdev,
- "Couldn't find tunnel for tun_id: %d, err: %d\n",
- tun_id, err);
- return false;
- }
-
- if (enc_opts_id) {
- err = mapping_find(uplink_priv->tunnel_enc_opts_mapping,
- enc_opts_id, &enc_opts);
- if (err) {
- netdev_dbg(priv->netdev,
- "Couldn't find tunnel (opts) for tun_id: %d, err: %d\n",
- enc_opts_id, err);
- return false;
- }
- }
-
- if (key.enc_control.addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
- tun_dst = __ip_tun_set_dst(key.enc_ipv4.src, key.enc_ipv4.dst,
- key.enc_ip.tos, key.enc_ip.ttl,
- key.enc_tp.dst, TUNNEL_KEY,
- key32_to_tunnel_id(key.enc_key_id.keyid),
- enc_opts.key.len);
- } else if (key.enc_control.addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
- tun_dst = __ipv6_tun_set_dst(&key.enc_ipv6.src, &key.enc_ipv6.dst,
- key.enc_ip.tos, key.enc_ip.ttl,
- key.enc_tp.dst, 0, TUNNEL_KEY,
- key32_to_tunnel_id(key.enc_key_id.keyid),
- enc_opts.key.len);
- } else {
- netdev_dbg(priv->netdev,
- "Couldn't restore tunnel, unsupported addr_type: %d\n",
- key.enc_control.addr_type);
- return false;
- }
-
- if (!tun_dst) {
- netdev_dbg(priv->netdev, "Couldn't restore tunnel, no tun_dst\n");
- return false;
- }
-
- tun_dst->u.tun_info.key.tp_src = key.enc_tp.src;
-
- if (enc_opts.key.len)
- ip_tunnel_info_opts_set(&tun_dst->u.tun_info,
- enc_opts.key.data,
- enc_opts.key.len,
- enc_opts.key.dst_opt_type);
-
- skb_dst_set(skb, (struct dst_entry *)tun_dst);
- dev = dev_get_by_index(&init_net, key.filter_ifindex);
- if (!dev) {
- netdev_dbg(priv->netdev,
- "Couldn't find tunnel device with ifindex: %d\n",
- key.filter_ifindex);
- return false;
- }
-
- /* Set fwd_dev so we do dev_put() after datapath */
- tc_priv->fwd_dev = dev;
-
- skb->dev = dev;
-
- return true;
-}
-
-static bool mlx5e_restore_skb_chain(struct sk_buff *skb, u32 chain, u32 reg_c1,
- struct mlx5e_tc_update_priv *tc_priv)
-{
- struct mlx5e_priv *priv = netdev_priv(skb->dev);
- u32 tunnel_id = (reg_c1 >> ESW_TUN_OFFSET) & TUNNEL_ID_MASK;
-
-#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT)
- if (chain) {
- struct mlx5_rep_uplink_priv *uplink_priv;
- struct mlx5e_rep_priv *uplink_rpriv;
- struct tc_skb_ext *tc_skb_ext;
- struct mlx5_eswitch *esw;
- u32 zone_restore_id;
-
- tc_skb_ext = tc_skb_ext_alloc(skb);
- if (!tc_skb_ext) {
- WARN_ON(1);
- return false;
- }
- tc_skb_ext->chain = chain;
- zone_restore_id = reg_c1 & ESW_ZONE_ID_MASK;
- esw = priv->mdev->priv.eswitch;
- uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH);
- uplink_priv = &uplink_rpriv->uplink_priv;
- if (!mlx5e_tc_ct_restore_flow(uplink_priv->ct_priv, skb,
- zone_restore_id))
- return false;
- }
-#endif /* CONFIG_NET_TC_SKB_EXT */
-
- return mlx5e_restore_tunnel(priv, skb, tc_priv, tunnel_id);
-}
-
-static void mlx5_rep_tc_post_napi_receive(struct mlx5e_tc_update_priv *tc_priv)
-{
- if (tc_priv->fwd_dev)
- dev_put(tc_priv->fwd_dev);
-}
-
-static void mlx5e_restore_skb_sample(struct mlx5e_priv *priv, struct sk_buff *skb,
- struct mlx5_mapped_obj *mapped_obj,
- struct mlx5e_tc_update_priv *tc_priv)
-{
- if (!mlx5e_restore_tunnel(priv, skb, tc_priv, mapped_obj->sample.tunnel_id)) {
- netdev_dbg(priv->netdev,
- "Failed to restore tunnel info for sampled packet\n");
- return;
- }
- mlx5e_tc_sample_skb(skb, mapped_obj);
- mlx5_rep_tc_post_napi_receive(tc_priv);
-}
-
-static bool mlx5e_restore_skb_int_port(struct mlx5e_priv *priv, struct sk_buff *skb,
- struct mlx5_mapped_obj *mapped_obj,
- struct mlx5e_tc_update_priv *tc_priv,
- bool *forward_tx,
- u32 reg_c1)
-{
- u32 tunnel_id = (reg_c1 >> ESW_TUN_OFFSET) & TUNNEL_ID_MASK;
- struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
- struct mlx5_rep_uplink_priv *uplink_priv;
- struct mlx5e_rep_priv *uplink_rpriv;
-
- /* Tunnel restore takes precedence over int port restore */
- if (tunnel_id)
- return mlx5e_restore_tunnel(priv, skb, tc_priv, tunnel_id);
-
- uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH);
- uplink_priv = &uplink_rpriv->uplink_priv;
-
- if (mlx5e_tc_int_port_dev_fwd(uplink_priv->int_port_priv, skb,
- mapped_obj->int_port_metadata, forward_tx)) {
- /* Set fwd_dev for future dev_put */
- tc_priv->fwd_dev = skb->dev;
-
- return true;
- }
-
- return false;
-}
-
void mlx5e_rep_tc_receive(struct mlx5_cqe64 *cqe, struct mlx5e_rq *rq,
struct sk_buff *skb)
{
- u32 reg_c1 = be32_to_cpu(cqe->ft_metadata);
+ u32 reg_c0, reg_c1, zone_restore_id, tunnel_id;
struct mlx5e_tc_update_priv tc_priv = {};
- struct mlx5_mapped_obj mapped_obj;
+ struct mlx5_rep_uplink_priv *uplink_priv;
+ struct mlx5e_rep_priv *uplink_rpriv;
+ struct mlx5_tc_ct_priv *ct_priv;
+ struct mapping_ctx *mapping_ctx;
struct mlx5_eswitch *esw;
- bool forward_tx = false;
struct mlx5e_priv *priv;
- u32 reg_c0;
- int err;
reg_c0 = (be32_to_cpu(cqe->sop_drop_qpn) & MLX5E_TC_FLOW_ID_MASK);
if (!reg_c0 || reg_c0 == MLX5_FS_DEFAULT_FLOW_TAG)
goto forward;
- /* If reg_c0 is not equal to the default flow tag then skb->mark
+ /* If mapped_obj_id is not equal to the default flow tag then skb->mark
* is not supported and must be reset back to 0.
*/
skb->mark = 0;
priv = netdev_priv(skb->dev);
esw = priv->mdev->priv.eswitch;
- err = mapping_find(esw->offloads.reg_c0_obj_pool, reg_c0, &mapped_obj);
- if (err) {
- netdev_dbg(priv->netdev,
- "Couldn't find mapped object for reg_c0: %d, err: %d\n",
- reg_c0, err);
- goto free_skb;
- }
+ mapping_ctx = esw->offloads.reg_c0_obj_pool;
+ reg_c1 = be32_to_cpu(cqe->ft_metadata);
+ zone_restore_id = reg_c1 & ESW_ZONE_ID_MASK;
+ tunnel_id = (reg_c1 >> ESW_TUN_OFFSET) & TUNNEL_ID_MASK;
- if (mapped_obj.type == MLX5_MAPPED_OBJ_CHAIN) {
- if (!mlx5e_restore_skb_chain(skb, mapped_obj.chain, reg_c1, &tc_priv) &&
- !mlx5_ipsec_is_rx_flow(cqe))
- goto free_skb;
- } else if (mapped_obj.type == MLX5_MAPPED_OBJ_SAMPLE) {
- mlx5e_restore_skb_sample(priv, skb, &mapped_obj, &tc_priv);
- goto free_skb;
- } else if (mapped_obj.type == MLX5_MAPPED_OBJ_INT_PORT_METADATA) {
- if (!mlx5e_restore_skb_int_port(priv, skb, &mapped_obj, &tc_priv,
- &forward_tx, reg_c1))
- goto free_skb;
- } else {
- netdev_dbg(priv->netdev, "Invalid mapped object type: %d\n", mapped_obj.type);
+ uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH);
+ uplink_priv = &uplink_rpriv->uplink_priv;
+ ct_priv = uplink_priv->ct_priv;
+
+ if (!mlx5_ipsec_is_rx_flow(cqe) &&
+ !mlx5e_tc_update_skb(cqe, skb, mapping_ctx, reg_c0, ct_priv, zone_restore_id, tunnel_id,
+ &tc_priv))
goto free_skb;
- }
forward:
- if (forward_tx)
+ if (tc_priv.skb_done)
+ goto free_skb;
+
+ if (tc_priv.forward_tx)
dev_queue_xmit(skb);
else
napi_gro_receive(rq->cq.napi, skb);
- mlx5_rep_tc_post_napi_receive(&tc_priv);
+ if (tc_priv.fwd_dev)
+ dev_put(tc_priv.fwd_dev);
return;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
index 1ae15b8536a8..c462fe76495b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
@@ -736,10 +736,10 @@ static const struct devlink_health_reporter_ops mlx5_rx_reporter_ops = {
void mlx5e_reporter_rx_create(struct mlx5e_priv *priv)
{
- struct devlink_port *dl_port = mlx5e_devlink_get_dl_port(priv);
struct devlink_health_reporter *reporter;
- reporter = devlink_port_health_reporter_create(dl_port, &mlx5_rx_reporter_ops,
+ reporter = devlink_port_health_reporter_create(priv->netdev->devlink_port,
+ &mlx5_rx_reporter_ops,
MLX5E_REPORTER_RX_GRACEFUL_PERIOD, priv);
if (IS_ERR(reporter)) {
netdev_warn(priv->netdev, "Failed to create rx reporter, err = %ld\n",
@@ -754,6 +754,6 @@ void mlx5e_reporter_rx_destroy(struct mlx5e_priv *priv)
if (!priv->rx_reporter)
return;
- devlink_port_health_reporter_destroy(priv->rx_reporter);
+ devlink_health_reporter_destroy(priv->rx_reporter);
priv->rx_reporter = NULL;
}
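With the reporters re-parented onto priv->netdev->devlink_port, both reporter files collapse to the generic port-reporter API. A minimal sketch, assuming a hypothetical my_reporter_ops and MY_GRACEFUL_PERIOD:

struct devlink_health_reporter *reporter;

reporter = devlink_port_health_reporter_create(priv->netdev->devlink_port,
					       &my_reporter_ops,	/* hypothetical */
					       MY_GRACEFUL_PERIOD,	/* hypothetical, msec */
					       priv);
if (IS_ERR(reporter))
	return PTR_ERR(reporter);

/* ... */

devlink_health_reporter_destroy(reporter);	/* replaces devlink_port_health_reporter_destroy() */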
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
index 60bc5b577ab9..34666e2b3871 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
@@ -81,6 +81,10 @@ static int mlx5e_tx_reporter_err_cqe_recover(void *ctx)
sq->stats->recover++;
clear_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state);
mlx5e_activate_txqsq(sq);
+ if (sq->channel)
+ mlx5e_trigger_napi_icosq(sq->channel);
+ else
+ mlx5e_trigger_napi_sched(sq->cq.napi);
return 0;
out:
@@ -590,10 +594,10 @@ static const struct devlink_health_reporter_ops mlx5_tx_reporter_ops = {
void mlx5e_reporter_tx_create(struct mlx5e_priv *priv)
{
- struct devlink_port *dl_port = mlx5e_devlink_get_dl_port(priv);
struct devlink_health_reporter *reporter;
- reporter = devlink_port_health_reporter_create(dl_port, &mlx5_tx_reporter_ops,
+ reporter = devlink_port_health_reporter_create(priv->netdev->devlink_port,
+ &mlx5_tx_reporter_ops,
MLX5_REPORTER_TX_GRACEFUL_PERIOD, priv);
if (IS_ERR(reporter)) {
netdev_warn(priv->netdev,
@@ -609,6 +613,6 @@ void mlx5e_reporter_tx_destroy(struct mlx5e_priv *priv)
if (!priv->tx_reporter)
return;
- devlink_port_health_reporter_destroy(priv->tx_reporter);
+ devlink_health_reporter_destroy(priv->tx_reporter);
priv->tx_reporter = NULL;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c
index 78c427b38048..07cc65596f89 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c
@@ -216,7 +216,6 @@ parse_mirred(struct mlx5e_tc_act_parse_state *parse_state,
struct net_device *uplink_dev;
struct mlx5e_priv *out_priv;
struct mlx5_eswitch *esw;
- bool is_uplink_rep;
int *ifindexes;
int if_count;
int err;
@@ -231,10 +230,9 @@ parse_mirred(struct mlx5e_tc_act_parse_state *parse_state,
parse_state->ifindexes[if_count] = out_dev->ifindex;
parse_state->if_count++;
- is_uplink_rep = mlx5e_eswitch_uplink_rep(out_dev);
- err = mlx5_lag_do_mirred(priv->mdev, out_dev);
- if (err)
- return err;
+
+ if (mlx5_lag_mpesw_do_mirred(priv->mdev, out_dev, extack))
+ return -EOPNOTSUPP;
out_dev = get_fdb_out_dev(uplink_dev, out_dev);
if (!out_dev)
@@ -275,13 +273,6 @@ parse_mirred(struct mlx5e_tc_act_parse_state *parse_state,
esw_attr->dests[esw_attr->out_count].rep = rpriv->rep;
esw_attr->dests[esw_attr->out_count].mdev = out_priv->mdev;
- /* If output device is bond master then rules are not explicit
- * so we don't attempt to count them.
- */
- if (is_uplink_rep && MLX5_CAP_PORT_SELECTION(priv->mdev, port_select_flow_table) &&
- MLX5_CAP_GEN(priv->mdev, create_lag_when_not_master_up))
- attr->lag.count = true;
-
esw_attr->out_count++;
return 0;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/vlan.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/vlan.c
index b86ac604d0c2..2e0d88b513aa 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/vlan.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/vlan.c
@@ -44,19 +44,17 @@ parse_tc_vlan_action(struct mlx5e_priv *priv,
return -EOPNOTSUPP;
}
+ if (!mlx5_eswitch_vlan_actions_supported(priv->mdev, vlan_idx)) {
+ NL_SET_ERR_MSG_MOD(extack, "firmware vlan actions are not supported");
+ return -EOPNOTSUPP;
+ }
+
switch (act->id) {
case FLOW_ACTION_VLAN_POP:
- if (vlan_idx) {
- if (!mlx5_eswitch_vlan_actions_supported(priv->mdev,
- MLX5_FS_VLAN_DEPTH)) {
- NL_SET_ERR_MSG_MOD(extack, "vlan pop action is not supported");
- return -EOPNOTSUPP;
- }
-
+ if (vlan_idx)
*action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_POP_2;
- } else {
+ else
*action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_POP;
- }
break;
case FLOW_ACTION_VLAN_PUSH:
attr->vlan_vid[vlan_idx] = act->vlan.vid;
@@ -65,25 +63,10 @@ parse_tc_vlan_action(struct mlx5e_priv *priv,
if (!attr->vlan_proto[vlan_idx])
attr->vlan_proto[vlan_idx] = htons(ETH_P_8021Q);
- if (vlan_idx) {
- if (!mlx5_eswitch_vlan_actions_supported(priv->mdev,
- MLX5_FS_VLAN_DEPTH)) {
- NL_SET_ERR_MSG_MOD(extack,
- "vlan push action is not supported for vlan depth > 1");
- return -EOPNOTSUPP;
- }
-
+ if (vlan_idx)
*action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2;
- } else {
- if (!mlx5_eswitch_vlan_actions_supported(priv->mdev, 1) &&
- (act->vlan.proto != htons(ETH_P_8021Q) ||
- act->vlan.prio)) {
- NL_SET_ERR_MSG_MOD(extack, "vlan push action is not supported");
- return -EOPNOTSUPP;
- }
-
+ else
*action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH;
- }
break;
case FLOW_ACTION_VLAN_POP_ETH:
parse_state->eth_pop = true;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act_stats.c
new file mode 100644
index 000000000000..f71766dca660
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act_stats.c
@@ -0,0 +1,197 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+// Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+
+#include <linux/rhashtable.h>
+#include <net/flow_offload.h>
+#include "en/tc_priv.h"
+#include "act_stats.h"
+#include "en/fs.h"
+
+struct mlx5e_tc_act_stats_handle {
+ struct rhashtable ht;
+ spinlock_t ht_lock; /* protects hashtable */
+};
+
+struct mlx5e_tc_act_stats {
+ unsigned long tc_act_cookie;
+
+ struct mlx5_fc *counter;
+ u64 lastpackets;
+ u64 lastbytes;
+
+ struct rhash_head hash;
+ struct rcu_head rcu_head;
+};
+
+static const struct rhashtable_params act_counters_ht_params = {
+ .head_offset = offsetof(struct mlx5e_tc_act_stats, hash),
+ .key_offset = 0,
+ .key_len = offsetof(struct mlx5e_tc_act_stats, counter),
+ .automatic_shrinking = true,
+};
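+
+/* With key_offset = 0 and key_len = offsetof(struct mlx5e_tc_act_stats,
+ * counter), the hash key is the leading tc_act_cookie field, so entries
+ * are indexed by the TC action cookie alone.
+ */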
+
+struct mlx5e_tc_act_stats_handle *
+mlx5e_tc_act_stats_create(void)
+{
+ struct mlx5e_tc_act_stats_handle *handle;
+ int err;
+
+ handle = kvzalloc(sizeof(*handle), GFP_KERNEL);
+ if (!handle) /* kvzalloc() returns NULL on failure, not an ERR_PTR */
+ return ERR_PTR(-ENOMEM);
+
+ err = rhashtable_init(&handle->ht, &act_counters_ht_params);
+ if (err)
+ goto err;
+
+ spin_lock_init(&handle->ht_lock);
+ return handle;
+err:
+ kvfree(handle);
+ return ERR_PTR(err);
+}
+
+void mlx5e_tc_act_stats_free(struct mlx5e_tc_act_stats_handle *handle)
+{
+ rhashtable_destroy(&handle->ht);
+ kvfree(handle);
+}
+
+static int
+mlx5e_tc_act_stats_add(struct mlx5e_tc_act_stats_handle *handle,
+ unsigned long act_cookie,
+ struct mlx5_fc *counter)
+{
+ struct mlx5e_tc_act_stats *act_stats, *old_act_stats;
+ struct rhashtable *ht = &handle->ht;
+ int err = 0;
+
+ act_stats = kvzalloc(sizeof(*act_stats), GFP_KERNEL);
+ if (!act_stats)
+ return -ENOMEM;
+
+ act_stats->tc_act_cookie = act_cookie;
+ act_stats->counter = counter;
+
+ rcu_read_lock();
+ old_act_stats = rhashtable_lookup_get_insert_fast(ht,
+ &act_stats->hash,
+ act_counters_ht_params);
+ if (IS_ERR(old_act_stats)) {
+ err = PTR_ERR(old_act_stats);
+ goto err_hash_insert;
+ } else if (old_act_stats) {
+ err = -EEXIST;
+ goto err_hash_insert;
+ }
+ rcu_read_unlock();
+
+ return 0;
+
+err_hash_insert:
+ rcu_read_unlock();
+ kvfree(act_stats);
+ return err;
+}
+
+void
+mlx5e_tc_act_stats_del_flow(struct mlx5e_tc_act_stats_handle *handle,
+ struct mlx5e_tc_flow *flow)
+{
+ struct mlx5_flow_attr *attr;
+ struct mlx5e_tc_act_stats *act_stats;
+ int i;
+
+ if (!flow_flag_test(flow, USE_ACT_STATS))
+ return;
+
+ list_for_each_entry(attr, &flow->attrs, list) {
+ for (i = 0; i < attr->tc_act_cookies_count; i++) {
+ struct rhashtable *ht = &handle->ht;
+
+ spin_lock(&handle->ht_lock);
+ act_stats = rhashtable_lookup_fast(ht,
+ &attr->tc_act_cookies[i],
+ act_counters_ht_params);
+ if (act_stats &&
+ rhashtable_remove_fast(ht, &act_stats->hash,
+ act_counters_ht_params) == 0)
+ kvfree_rcu(act_stats, rcu_head);
+
+ spin_unlock(&handle->ht_lock);
+ }
+ }
+}
+
+int
+mlx5e_tc_act_stats_add_flow(struct mlx5e_tc_act_stats_handle *handle,
+ struct mlx5e_tc_flow *flow)
+{
+ struct mlx5_fc *curr_counter = NULL;
+ unsigned long last_cookie = 0;
+ struct mlx5_flow_attr *attr;
+ int err;
+ int i;
+
+ if (!flow_flag_test(flow, USE_ACT_STATS))
+ return 0;
+
+ list_for_each_entry(attr, &flow->attrs, list) {
+ if (attr->counter)
+ curr_counter = attr->counter;
+
+ for (i = 0; i < attr->tc_act_cookies_count; i++) {
+ /* jump over identical ids (e.g. pedit) */
+ if (last_cookie == attr->tc_act_cookies[i])
+ continue;
+
+ err = mlx5e_tc_act_stats_add(handle, attr->tc_act_cookies[i], curr_counter);
+ if (err)
+ goto out_err;
+ last_cookie = attr->tc_act_cookies[i];
+ }
+ }
+
+ return 0;
+out_err:
+ mlx5e_tc_act_stats_del_flow(handle, flow);
+ return err;
+}
+
+int
+mlx5e_tc_act_stats_fill_stats(struct mlx5e_tc_act_stats_handle *handle,
+ struct flow_offload_action *fl_act)
+{
+ struct rhashtable *ht = &handle->ht;
+ struct mlx5e_tc_act_stats *item;
+ struct mlx5e_tc_act_stats key;
+ u64 pkts, bytes, lastused;
+ int err = 0;
+
+ key.tc_act_cookie = fl_act->cookie;
+
+ rcu_read_lock();
+ item = rhashtable_lookup(ht, &key, act_counters_ht_params);
+ if (!item) {
+ rcu_read_unlock();
+ err = -ENOENT;
+ goto err_out;
+ }
+
+ mlx5_fc_query_cached_raw(item->counter,
+ &bytes, &pkts, &lastused);
+
+ flow_stats_update(&fl_act->stats,
+ bytes - item->lastbytes,
+ pkts - item->lastpackets,
+ 0, lastused, FLOW_ACTION_HW_STATS_DELAYED);
+
+ item->lastpackets = pkts;
+ item->lastbytes = bytes;
+ rcu_read_unlock();
+
+ return 0;
+
+err_out:
+ return err;
+}
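
A minimal sketch of the rhashtable keying trick used by act_counters_ht_params above: with key_offset 0 and key_len equal to offsetof(..., counter), every byte of the struct before the counter member — here just tc_act_cookie — becomes the hash key (hypothetical names, not part of the patch):

	struct example_entry {
		unsigned long cookie;	/* key: everything before 'payload' */
		void *payload;		/* first non-key member */
		struct rhash_head hash;
	};

	static const struct rhashtable_params example_ht_params = {
		.head_offset = offsetof(struct example_entry, hash),
		.key_offset = 0,
		.key_len = offsetof(struct example_entry, payload),
		.automatic_shrinking = true,
	};

	/* lookups then pass a pointer to the key bytes, as
	 * mlx5e_tc_act_stats_fill_stats() does with &key above:
	 * entry = rhashtable_lookup(&ht, &cookie, example_ht_params);
	 */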
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act_stats.h
new file mode 100644
index 000000000000..002292c2567c
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act_stats.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
+
+#ifndef __MLX5_EN_ACT_STATS_H__
+#define __MLX5_EN_ACT_STATS_H__
+
+#include <net/flow_offload.h>
+#include "en/tc_priv.h"
+
+struct mlx5e_tc_act_stats_handle;
+
+struct mlx5e_tc_act_stats_handle *mlx5e_tc_act_stats_create(void);
+void mlx5e_tc_act_stats_free(struct mlx5e_tc_act_stats_handle *handle);
+
+int
+mlx5e_tc_act_stats_add_flow(struct mlx5e_tc_act_stats_handle *handle,
+ struct mlx5e_tc_flow *flow);
+
+void
+mlx5e_tc_act_stats_del_flow(struct mlx5e_tc_act_stats_handle *handle,
+ struct mlx5e_tc_flow *flow);
+
+int
+mlx5e_tc_act_stats_fill_stats(struct mlx5e_tc_act_stats_handle *handle,
+ struct flow_offload_action *fl_act);
+
+#endif /* __MLX5_EN_ACT_STATS_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c
index 78af8a3175bf..8218c892b161 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c
@@ -28,7 +28,7 @@ struct mlx5e_flow_meter_aso_obj {
int base_id;
int total_meters;
- unsigned long meters_map[0]; /* must be at the end of this struct */
+ unsigned long meters_map[]; /* must be at the end of this struct */
};
struct mlx5e_flow_meters {
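
The [0]-to-[] change above swaps the old GNU zero-length-array idiom for a C99 flexible array member, which bounds-checking tooling (FORTIFY_SOURCE, UBSAN) understands. A plausible allocation pairing it with struct_size() could look like this (a sketch; the count expression is hypothetical):

	struct mlx5e_flow_meter_aso_obj *meters_obj;

	meters_obj = kzalloc(struct_size(meters_obj, meters_map,
					 BITS_TO_LONGS(total_meters)),
			     GFP_KERNEL);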
@@ -204,13 +204,15 @@ mlx5e_flow_meter_create_aso_obj(struct mlx5e_flow_meters *flow_meters, int *obj_
u32 in[MLX5_ST_SZ_DW(create_flow_meter_aso_obj_in)] = {};
u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)];
struct mlx5_core_dev *mdev = flow_meters->mdev;
- void *obj;
+ void *obj, *param;
int err;
MLX5_SET(general_obj_in_cmd_hdr, in, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
MLX5_SET(general_obj_in_cmd_hdr, in, obj_type,
MLX5_GENERAL_OBJECT_TYPES_FLOW_METER_ASO);
- MLX5_SET(general_obj_in_cmd_hdr, in, log_obj_range, flow_meters->log_granularity);
+ param = MLX5_ADDR_OF(general_obj_in_cmd_hdr, in, op_param);
+ MLX5_SET(general_obj_create_param, param, log_obj_range,
+ flow_meters->log_granularity);
obj = MLX5_ADDR_OF(create_flow_meter_aso_obj_in, in, flow_meter_aso_obj);
MLX5_SET(flow_meter_aso_obj, obj, meter_aso_access_pd, flow_meters->pdn);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c
index f2c2c752bd1c..558a776359af 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c
@@ -237,7 +237,7 @@ sample_modify_hdr_get(struct mlx5_core_dev *mdev, u32 obj_id,
int err;
err = mlx5e_tc_match_to_reg_set(mdev, mod_acts, MLX5_FLOW_NAMESPACE_FDB,
- CHAIN_TO_REG, obj_id);
+ MAPPED_OBJ_TO_REG, obj_id);
if (err)
goto err_set_regc0;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
index 313df8232db7..314983bc6f08 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
@@ -35,6 +35,7 @@
#define MLX5_CT_STATE_REPLY_BIT BIT(4)
#define MLX5_CT_STATE_RELATED_BIT BIT(5)
#define MLX5_CT_STATE_INVALID_BIT BIT(6)
+#define MLX5_CT_STATE_NEW_BIT BIT(7)
#define MLX5_CT_LABELS_BITS MLX5_REG_MAPPING_MBITS(LABELS_TO_REG)
#define MLX5_CT_LABELS_MASK MLX5_REG_MAPPING_MASK(LABELS_TO_REG)
@@ -59,6 +60,7 @@ struct mlx5_tc_ct_debugfs {
struct mlx5_tc_ct_priv {
struct mlx5_core_dev *dev;
+ struct mlx5e_priv *priv;
const struct net_device *netdev;
struct mod_hdr_tbl *mod_hdr_tbl;
struct xarray tuple_ids;
@@ -85,7 +87,6 @@ struct mlx5_ct_flow {
struct mlx5_flow_attr *pre_ct_attr;
struct mlx5_flow_handle *pre_ct_rule;
struct mlx5_ct_ft *ft;
- u32 chain_mapping;
};
struct mlx5_ct_zone_rule {
@@ -721,12 +722,14 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv,
DECLARE_MOD_HDR_ACTS_ACTIONS(actions_arr, MLX5_CT_MIN_MOD_ACTS);
DECLARE_MOD_HDR_ACTS(mod_acts, actions_arr);
struct flow_action_entry *meta;
+ enum ip_conntrack_info ctinfo;
u16 ct_state = 0;
int err;
meta = mlx5_tc_ct_get_ct_metadata_action(flow_rule);
if (!meta)
return -EOPNOTSUPP;
+ ctinfo = meta->ct_metadata.cookie & NFCT_INFOMASK;
err = mlx5_get_label_mapping(ct_priv, meta->ct_metadata.labels,
&attr->ct_attr.ct_labels_id);
@@ -742,7 +745,8 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv,
ct_state |= MLX5_CT_STATE_NAT_BIT;
}
- ct_state |= MLX5_CT_STATE_ESTABLISHED_BIT | MLX5_CT_STATE_TRK_BIT;
+ ct_state |= MLX5_CT_STATE_TRK_BIT;
+ ct_state |= ctinfo == IP_CT_NEW ? MLX5_CT_STATE_NEW_BIT : MLX5_CT_STATE_ESTABLISHED_BIT;
ct_state |= meta->ct_metadata.orig_dir ? 0 : MLX5_CT_STATE_REPLY_BIT;
err = mlx5_tc_ct_entry_set_registers(ct_priv, &mod_acts,
ct_state,
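
The NFCT_INFOMASK extraction above works because act_ct hands drivers a cookie that packs the conntrack entry pointer and the ip_conntrack_info enum into one word, so the low bits recover ctinfo. Roughly (a sketch, assuming the upstream packing):

	u64 cookie = meta->ct_metadata.cookie;
	enum ip_conntrack_info ctinfo = cookie & NFCT_INFOMASK;	/* low 3 bits */
	struct nf_conn *ct = (struct nf_conn *)(unsigned long)(cookie & ~NFCT_INFOMASK);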
@@ -871,6 +875,68 @@ err_attr:
return err;
}
+static int
+mlx5_tc_ct_entry_replace_rule(struct mlx5_tc_ct_priv *ct_priv,
+ struct flow_rule *flow_rule,
+ struct mlx5_ct_entry *entry,
+ bool nat, u8 zone_restore_id)
+{
+ struct mlx5_ct_zone_rule *zone_rule = &entry->zone_rules[nat];
+ struct mlx5_flow_attr *attr = zone_rule->attr, *old_attr;
+ struct mlx5e_mod_hdr_handle *mh;
+ struct mlx5_ct_fs_rule *rule;
+ struct mlx5_flow_spec *spec;
+ int err;
+
+ spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
+ if (!spec)
+ return -ENOMEM;
+
+ old_attr = mlx5_alloc_flow_attr(ct_priv->ns_type);
+ if (!old_attr) {
+ err = -ENOMEM;
+ goto err_attr;
+ }
+ *old_attr = *attr;
+
+ err = mlx5_tc_ct_entry_create_mod_hdr(ct_priv, attr, flow_rule, &mh, zone_restore_id,
+ nat, mlx5_tc_ct_entry_has_nat(entry));
+ if (err) {
+ ct_dbg("Failed to create ct entry mod hdr");
+ goto err_mod_hdr;
+ }
+
+ mlx5_tc_ct_set_tuple_match(ct_priv, spec, flow_rule);
+ mlx5e_tc_match_to_reg_match(spec, ZONE_TO_REG, entry->tuple.zone, MLX5_CT_ZONE_MASK);
+
+ rule = ct_priv->fs_ops->ct_rule_add(ct_priv->fs, spec, attr, flow_rule);
+ if (IS_ERR(rule)) {
+ err = PTR_ERR(rule);
+ ct_dbg("Failed to add replacement ct entry rule, nat: %d", nat);
+ goto err_rule;
+ }
+
+ ct_priv->fs_ops->ct_rule_del(ct_priv->fs, zone_rule->rule);
+ zone_rule->rule = rule;
+ mlx5_tc_ct_entry_destroy_mod_hdr(ct_priv, old_attr, zone_rule->mh);
+ zone_rule->mh = mh;
+
+ kfree(old_attr);
+ kvfree(spec);
+ ct_dbg("Replaced ct entry rule in zone %d", entry->tuple.zone);
+
+ return 0;
+
+err_rule:
+ mlx5_tc_ct_entry_destroy_mod_hdr(ct_priv, zone_rule->attr, mh);
+ mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
+err_mod_hdr:
+ kfree(old_attr);
+err_attr:
+ kvfree(spec);
+ return err;
+}
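
mlx5_tc_ct_entry_replace_rule() above follows a make-before-break pattern: the replacement mod header and rule are fully created and installed first, and only then are the old rule and attr torn down, so any failure leaves the original offload intact. Reduced to its skeleton (hypothetical helpers):

	new = create_rule(...);		/* may fail; old rule still active */
	if (IS_ERR(new))
		return PTR_ERR(new);
	delete_rule(old);		/* safe: traffic already hits 'new' */
	zone_rule->rule = new;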
+
static bool
mlx5_tc_ct_entry_valid(struct mlx5_ct_entry *entry)
{
@@ -1066,6 +1132,52 @@ err_orig:
}
static int
+mlx5_tc_ct_entry_replace_rules(struct mlx5_tc_ct_priv *ct_priv,
+ struct flow_rule *flow_rule,
+ struct mlx5_ct_entry *entry,
+ u8 zone_restore_id)
+{
+ int err;
+
+ err = mlx5_tc_ct_entry_replace_rule(ct_priv, flow_rule, entry, false,
+ zone_restore_id);
+ if (err)
+ return err;
+
+ err = mlx5_tc_ct_entry_replace_rule(ct_priv, flow_rule, entry, true,
+ zone_restore_id);
+ if (err)
+ mlx5_tc_ct_entry_del_rule(ct_priv, entry, false);
+ return err;
+}
+
+static int
+mlx5_tc_ct_block_flow_offload_replace(struct mlx5_ct_ft *ft, struct flow_rule *flow_rule,
+ struct mlx5_ct_entry *entry, unsigned long cookie)
+{
+ struct mlx5_tc_ct_priv *ct_priv = ft->ct_priv;
+ int err;
+
+ err = mlx5_tc_ct_entry_replace_rules(ct_priv, flow_rule, entry, ft->zone_restore_id);
+ if (!err)
+ return 0;
+
+ /* If updating the entry failed, look it up again under ht_lock
+ * protection and properly delete it.
+ */
+ spin_lock_bh(&ct_priv->ht_lock);
+ entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie, cts_ht_params);
+ if (entry) {
+ rhashtable_remove_fast(&ft->ct_entries_ht, &entry->node, cts_ht_params);
+ spin_unlock_bh(&ct_priv->ht_lock);
+ mlx5_tc_ct_entry_put(entry);
+ } else {
+ spin_unlock_bh(&ct_priv->ht_lock);
+ }
+ return err;
+}
+
+static int
mlx5_tc_ct_block_flow_offload_add(struct mlx5_ct_ft *ft,
struct flow_cls_offload *flow)
{
@@ -1083,9 +1195,17 @@ mlx5_tc_ct_block_flow_offload_add(struct mlx5_ct_ft *ft,
spin_lock_bh(&ct_priv->ht_lock);
entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie, cts_ht_params);
if (entry && refcount_inc_not_zero(&entry->refcnt)) {
+ if (entry->restore_cookie == meta_action->ct_metadata.cookie) {
+ spin_unlock_bh(&ct_priv->ht_lock);
+ mlx5_tc_ct_entry_put(entry);
+ return -EEXIST;
+ }
+ entry->restore_cookie = meta_action->ct_metadata.cookie;
spin_unlock_bh(&ct_priv->ht_lock);
+
+ err = mlx5_tc_ct_block_flow_offload_replace(ft, flow_rule, entry, cookie);
mlx5_tc_ct_entry_put(entry);
- return -EEXIST;
+ return err;
}
spin_unlock_bh(&ct_priv->ht_lock);
@@ -1323,7 +1443,7 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv,
struct mlx5_ct_attr *ct_attr,
struct netlink_ext_ack *extack)
{
- bool trk, est, untrk, unest, new, rpl, unrpl, rel, unrel, inv, uninv;
+ bool trk, est, untrk, unnew, unest, new, rpl, unrpl, rel, unrel, inv, uninv;
struct flow_rule *rule = flow_cls_offload_flow_rule(f);
struct flow_dissector_key_ct *mask, *key;
u32 ctstate = 0, ctstate_mask = 0;
@@ -1369,15 +1489,18 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv,
rel = ct_state_on & TCA_FLOWER_KEY_CT_FLAGS_RELATED;
inv = ct_state_on & TCA_FLOWER_KEY_CT_FLAGS_INVALID;
untrk = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_TRACKED;
+ unnew = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_NEW;
unest = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_ESTABLISHED;
unrpl = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_REPLY;
unrel = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_RELATED;
uninv = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_INVALID;
ctstate |= trk ? MLX5_CT_STATE_TRK_BIT : 0;
+ ctstate |= new ? MLX5_CT_STATE_NEW_BIT : 0;
ctstate |= est ? MLX5_CT_STATE_ESTABLISHED_BIT : 0;
ctstate |= rpl ? MLX5_CT_STATE_REPLY_BIT : 0;
ctstate_mask |= (untrk || trk) ? MLX5_CT_STATE_TRK_BIT : 0;
+ ctstate_mask |= (unnew || new) ? MLX5_CT_STATE_NEW_BIT : 0;
ctstate_mask |= (unest || est) ? MLX5_CT_STATE_ESTABLISHED_BIT : 0;
ctstate_mask |= (unrpl || rpl) ? MLX5_CT_STATE_REPLY_BIT : 0;
ctstate_mask |= unrel ? MLX5_CT_STATE_RELATED_BIT : 0;
@@ -1395,12 +1518,6 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv,
return -EOPNOTSUPP;
}
- if (new) {
- NL_SET_ERR_MSG_MOD(extack,
- "matching on ct_state +new isn't supported");
- return -EOPNOTSUPP;
- }
-
if (mask->ct_zone)
mlx5e_tc_match_to_reg_match(spec, ZONE_TO_REG,
key->ct_zone, MLX5_CT_ZONE_MASK);
@@ -1441,6 +1558,7 @@ mlx5_tc_ct_parse_action(struct mlx5_tc_ct_priv *priv,
attr->ct_attr.zone = act->ct.zone;
attr->ct_attr.ct_action = act->ct.action;
attr->ct_attr.nf_ft = act->ct.flow_table;
+ attr->ct_attr.act_miss_cookie = act->miss_cookie;
return 0;
}
@@ -1778,7 +1896,7 @@ mlx5_tc_ct_del_ft_cb(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_ft *ft)
* + ft prio (tc chain) +
* + original match +
* +---------------------+
- * | set chain miss mapping
+ * | set act_miss_cookie mapping
* | set fte_id
* | set tunnel_id
* | do decap
@@ -1823,7 +1941,7 @@ __mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *ct_priv,
struct mlx5_flow_attr *pre_ct_attr;
struct mlx5_modify_hdr *mod_hdr;
struct mlx5_ct_flow *ct_flow;
- int chain_mapping = 0, err;
+ int act_miss_mapping = 0, err;
struct mlx5_ct_ft *ft;
u16 zone;
@@ -1858,22 +1976,18 @@ __mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *ct_priv,
pre_ct_attr->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST |
MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
- /* Write chain miss tag for miss in ct table as we
- * don't go though all prios of this chain as normal tc rules
- * miss.
- */
- err = mlx5_chains_get_chain_mapping(ct_priv->chains, attr->chain,
- &chain_mapping);
+ err = mlx5e_tc_action_miss_mapping_get(ct_priv->priv, attr, attr->ct_attr.act_miss_cookie,
+ &act_miss_mapping);
if (err) {
- ct_dbg("Failed to get chain register mapping for chain");
- goto err_get_chain;
+ ct_dbg("Failed to get register mapping for act miss");
+ goto err_get_act_miss;
}
- ct_flow->chain_mapping = chain_mapping;
+ attr->ct_attr.act_miss_mapping = act_miss_mapping;
err = mlx5e_tc_match_to_reg_set(priv->mdev, pre_mod_acts, ct_priv->ns_type,
- CHAIN_TO_REG, chain_mapping);
+ MAPPED_OBJ_TO_REG, act_miss_mapping);
if (err) {
- ct_dbg("Failed to set chain register mapping");
+ ct_dbg("Failed to set act miss register mapping");
goto err_mapping;
}
@@ -1937,8 +2051,8 @@ err_insert_orig:
mlx5_modify_header_dealloc(priv->mdev, pre_ct_attr->modify_hdr);
err_mapping:
mlx5e_mod_hdr_dealloc(pre_mod_acts);
- mlx5_chains_put_chain_mapping(ct_priv->chains, ct_flow->chain_mapping);
-err_get_chain:
+ mlx5e_tc_action_miss_mapping_put(ct_priv->priv, attr, act_miss_mapping);
+err_get_act_miss:
kfree(ct_flow->pre_ct_attr);
err_alloc_pre:
mlx5_tc_ct_del_ft_cb(ct_priv, ft);
@@ -1977,7 +2091,7 @@ __mlx5_tc_ct_delete_flow(struct mlx5_tc_ct_priv *ct_priv,
mlx5_tc_rule_delete(priv, ct_flow->pre_ct_rule, pre_ct_attr);
mlx5_modify_header_dealloc(priv->mdev, pre_ct_attr->modify_hdr);
- mlx5_chains_put_chain_mapping(ct_priv->chains, ct_flow->chain_mapping);
+ mlx5e_tc_action_miss_mapping_put(ct_priv->priv, attr, attr->ct_attr.act_miss_mapping);
mlx5_tc_ct_del_ft_cb(ct_priv, ct_flow->ft);
kfree(ct_flow->pre_ct_attr);
@@ -2074,13 +2188,6 @@ mlx5_tc_ct_init_check_support(struct mlx5e_priv *priv,
const char *err_msg = NULL;
int err = 0;
-#if !IS_ENABLED(CONFIG_NET_TC_SKB_EXT)
- /* cannot restore chain ID on HW miss */
-
- err_msg = "tc skb extension missing";
- err = -EOPNOTSUPP;
- goto out_err;
-#endif
if (IS_ERR_OR_NULL(post_act)) {
/* Ignore_flow_level support isn't supported by default for VFs and so post_act
* won't be supported. Skip showing error msg.
@@ -2157,6 +2264,7 @@ mlx5_tc_ct_init(struct mlx5e_priv *priv, struct mlx5_fs_chains *chains,
}
spin_lock_init(&ct_priv->ht_lock);
+ ct_priv->priv = priv;
ct_priv->ns_type = ns_type;
ct_priv->chains = chains;
ct_priv->netdev = priv->netdev;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
index 5bbd6b92840f..5c5ddaa83055 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
@@ -28,6 +28,8 @@ struct mlx5_ct_attr {
struct mlx5_ct_flow *ct_flow;
struct nf_flowtable *nf_ft;
u32 ct_labels_id;
+ u32 act_miss_mapping;
+ u64 act_miss_cookie;
};
#define zone_to_reg_ct {\
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h
index 2b7fd1c0e643..451fd4342a5a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h
@@ -30,6 +30,7 @@ enum {
MLX5E_TC_FLOW_FLAG_TUN_RX = MLX5E_TC_FLOW_BASE + 9,
MLX5E_TC_FLOW_FLAG_FAILED = MLX5E_TC_FLOW_BASE + 10,
MLX5E_TC_FLOW_FLAG_SAMPLE = MLX5E_TC_FLOW_BASE + 11,
+ MLX5E_TC_FLOW_FLAG_USE_ACT_STATS = MLX5E_TC_FLOW_BASE + 12,
};
struct mlx5e_tc_flow_parse_attr {
@@ -95,8 +96,6 @@ struct mlx5e_tc_flow {
*/
struct encap_flow_item encaps[MLX5_MAX_FLOW_FWD_VPORTS];
struct mlx5e_tc_flow *peer_flow;
- struct mlx5e_mod_hdr_handle *mh; /* attached mod header instance */
- struct mlx5e_mod_hdr_handle *slow_mh; /* attached mod header instance for slow path */
struct mlx5e_hairpin_entry *hpe; /* attached hairpin instance */
struct list_head hairpin; /* flows sharing the same hairpin */
struct list_head peer; /* flows with peer flow */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
index e6f64d890fb3..00a04fdd756f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
@@ -93,11 +93,11 @@ static int get_route_and_out_devs(struct mlx5e_priv *priv,
else
return -EOPNOTSUPP;
- if (!(mlx5e_eswitch_rep(*out_dev) &&
- mlx5e_is_uplink_rep(netdev_priv(*out_dev))))
+ if (!mlx5e_eswitch_uplink_rep(*out_dev))
return -EOPNOTSUPP;
- if (mlx5e_eswitch_uplink_rep(priv->netdev) && *out_dev != priv->netdev)
+ if (mlx5e_eswitch_uplink_rep(priv->netdev) && *out_dev != priv->netdev &&
+ !mlx5_lag_is_mpesw(priv->mdev))
return -EOPNOTSUPP;
return 0;
@@ -745,8 +745,6 @@ int mlx5e_tc_tun_route_lookup(struct mlx5e_priv *priv,
if (err)
goto out;
- esw_attr->rx_tun_attr->vni = MLX5_GET(fte_match_param, spec->match_value,
- misc_parameters.vxlan_vni);
esw_attr->rx_tun_attr->decap_vport = vport_num;
} else if (netif_is_ovs_master(attr.route_dev) && mlx5e_tc_int_port_supported(esw)) {
int_port = mlx5e_tc_int_port_get(mlx5e_get_int_port_priv(priv),
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
index 2aaf8ab857b8..780224fd67a1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
@@ -1349,7 +1349,8 @@ static void mlx5e_invalidate_encap(struct mlx5e_priv *priv,
mlx5e_tc_unoffload_from_slow_path(esw, flow);
else
mlx5e_tc_unoffload_fdb_rules(esw, flow, flow->attr);
- mlx5_modify_header_dealloc(priv->mdev, attr->modify_hdr);
+
+ mlx5e_tc_detach_mod_hdr(priv, flow, attr);
attr->modify_hdr = NULL;
esw_attr->dests[flow->tmp_entry_index].flags &=
@@ -1405,7 +1406,7 @@ static void mlx5e_reoffload_encap(struct mlx5e_priv *priv,
continue;
}
- err = mlx5e_tc_add_flow_mod_hdr(priv, flow, attr);
+ err = mlx5e_tc_attach_mod_hdr(priv, flow, attr);
if (err) {
mlx5_core_warn(priv->mdev, "Failed to update flow mod_hdr err=%d",
err);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
index 853f312cd757..c067d2efab51 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
@@ -73,6 +73,11 @@ int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget);
void mlx5e_free_rx_descs(struct mlx5e_rq *rq);
void mlx5e_free_rx_in_progress_descs(struct mlx5e_rq *rq);
+static inline bool mlx5e_rx_hw_stamp(struct hwtstamp_config *config)
+{
+ return config->rx_filter == HWTSTAMP_FILTER_ALL;
+}
+
/* TX */
netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev);
bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget);
@@ -315,7 +320,6 @@ mlx5e_tx_dma_unmap(struct device *pdev, struct mlx5e_sq_dma *dma)
}
}
-void mlx5e_sq_xmit_simple(struct mlx5e_txqsq *sq, struct sk_buff *skb, bool xmit_more);
void mlx5e_tx_mpwqe_ensure_complete(struct mlx5e_txqsq *sq);
static inline bool mlx5e_tx_mpwqe_is_full(struct mlx5e_tx_mpwqe *session, u8 max_sq_mpw_wqebbs)
@@ -445,7 +449,7 @@ mlx5e_set_eseg_swp(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg,
static inline u16 mlx5e_stop_room_for_wqe(struct mlx5_core_dev *mdev, u16 wqe_size)
{
- WARN_ON_ONCE(PAGE_SIZE / MLX5_SEND_WQE_BB < mlx5e_get_max_sq_wqebbs(mdev));
+ WARN_ON_ONCE(PAGE_SIZE / MLX5_SEND_WQE_BB < (u16)mlx5e_get_max_sq_wqebbs(mdev));
/* A WQE must not cross the page boundary, hence two conditions:
* 1. Its size must not exceed the page size.
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 20507ef2f956..bcd6370de440 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -57,8 +57,9 @@ int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk)
static inline bool
mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
- struct page *page, struct xdp_buff *xdp)
+ struct xdp_buff *xdp)
{
+ struct page *page = virt_to_page(xdp->data);
struct skb_shared_info *sinfo = NULL;
struct mlx5e_xmit_data xdptxd;
struct mlx5e_xdp_info xdpi;
@@ -156,10 +157,39 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
return true;
}
+static int mlx5e_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp)
+{
+ const struct mlx5e_xdp_buff *_ctx = (void *)ctx;
+
+ if (unlikely(!mlx5e_rx_hw_stamp(_ctx->rq->tstamp)))
+ return -EOPNOTSUPP;
+
+ *timestamp = mlx5e_cqe_ts_to_ns(_ctx->rq->ptp_cyc2time,
+ _ctx->rq->clock, get_cqe_ts(_ctx->cqe));
+ return 0;
+}
+
+static int mlx5e_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash)
+{
+ const struct mlx5e_xdp_buff *_ctx = (void *)ctx;
+
+ if (unlikely(!(_ctx->xdp.rxq->dev->features & NETIF_F_RXHASH)))
+ return -EOPNOTSUPP;
+
+ *hash = be32_to_cpu(_ctx->cqe->rss_hash_result);
+ return 0;
+}
+
+const struct xdp_metadata_ops mlx5e_xdp_metadata_ops = {
+ .xmo_rx_timestamp = mlx5e_xdp_rx_timestamp,
+ .xmo_rx_hash = mlx5e_xdp_rx_hash,
+};
+
/* returns true if packet was consumed by xdp */
-bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
- struct bpf_prog *prog, struct xdp_buff *xdp)
+bool mlx5e_xdp_handle(struct mlx5e_rq *rq,
+ struct bpf_prog *prog, struct mlx5e_xdp_buff *mxbuf)
{
+ struct xdp_buff *xdp = &mxbuf->xdp;
u32 act;
int err;
@@ -168,7 +198,7 @@ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
case XDP_PASS:
return false;
case XDP_TX:
- if (unlikely(!mlx5e_xmit_xdp_buff(rq->xdpsq, rq, page, xdp)))
+ if (unlikely(!mlx5e_xmit_xdp_buff(rq->xdpsq, rq, xdp)))
goto xdp_abort;
__set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags); /* non-atomic */
return true;
@@ -180,7 +210,7 @@ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
__set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags);
__set_bit(MLX5E_RQ_FLAG_XDP_REDIRECT, rq->flags);
if (xdp->rxq->mem.type != MEM_TYPE_XSK_BUFF_POOL)
- mlx5e_page_dma_unmap(rq, page);
+ mlx5e_page_dma_unmap(rq, virt_to_page(xdp->data));
rq->stats->xdp_redirect++;
return true;
default:
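
The two xmo callbacks above back the generic XDP-hints kfuncs added this cycle. A minimal BPF-side consumer might look like this (a sketch, assuming the 6.3-era kfunc signatures and libbpf's __ksym support):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
						 __u64 *timestamp) __ksym;
	extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx,
					    __u32 *hash) __ksym;

	SEC("xdp")
	int rx_hints(struct xdp_md *ctx)
	{
		__u64 ts;
		__u32 hash;

		if (!bpf_xdp_metadata_rx_timestamp(ctx, &ts))
			; /* ts = NIC RX time in ns, via mlx5e_xdp_rx_timestamp() */
		if (!bpf_xdp_metadata_rx_hash(ctx, &hash))
			; /* hash = RSS hash pulled from the CQE */
		return XDP_PASS;
	}

	char _license[] SEC("license") = "GPL";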
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index bc2d9034af5b..10bcfa6f88c1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -44,10 +44,16 @@
(MLX5E_XDP_INLINE_WQE_MAX_DS_CNT * MLX5_SEND_WQE_DS - \
sizeof(struct mlx5_wqe_inline_seg))
+struct mlx5e_xdp_buff {
+ struct xdp_buff xdp;
+ struct mlx5_cqe64 *cqe;
+ struct mlx5e_rq *rq;
+};
+
struct mlx5e_xsk_param;
int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk);
-bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
- struct bpf_prog *prog, struct xdp_buff *xdp);
+bool mlx5e_xdp_handle(struct mlx5e_rq *rq,
+ struct bpf_prog *prog, struct mlx5e_xdp_buff *mlctx);
void mlx5e_xdp_mpwqe_complete(struct mlx5e_xdpsq *sq);
bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq);
void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq);
@@ -56,6 +62,8 @@ void mlx5e_xdp_rx_poll_complete(struct mlx5e_rq *rq);
int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
u32 flags);
+extern const struct xdp_metadata_ops mlx5e_xdp_metadata_ops;
+
INDIRECT_CALLABLE_DECLARE(bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq,
struct mlx5e_xmit_data *xdptxd,
struct skb_shared_info *sinfo,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index c91b54d9ff27..fab787600459 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -8,6 +8,14 @@
/* RX data path */
+static struct mlx5e_xdp_buff *xsk_buff_to_mxbuf(struct xdp_buff *xdp)
+{
+ /* mlx5e_xdp_buff shares its layout with xdp_buff_xsk
+ * and private mlx5e_xdp_buff fields fall into xdp_buff_xsk->cb
+ */
+ return (struct mlx5e_xdp_buff *)xdp;
+}
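
The cast above is safe only because struct mlx5e_xdp_buff leads with the embedded xdp_buff and its private members fit inside xdp_buff_xsk's cb[] scratch area. XSK_CHECK_PRIV_TYPE(), invoked in mlx5e_xsk_alloc_rx_mpwqe() below, build-asserts exactly that; it expands to roughly:

	BUILD_BUG_ON(sizeof(struct mlx5e_xdp_buff) >
		     offsetofend(struct xdp_buff_xsk, cb));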
+
int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
{
struct mlx5e_mpw_info *wi = mlx5e_get_mpw_info(rq, ix);
@@ -22,6 +30,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
goto err;
BUILD_BUG_ON(sizeof(wi->alloc_units[0]) != sizeof(wi->alloc_units[0].xsk));
+ XSK_CHECK_PRIV_TYPE(struct mlx5e_xdp_buff);
batch = xsk_buff_alloc_batch(rq->xsk_pool, (struct xdp_buff **)wi->alloc_units,
rq->mpwqe.pages_per_wqe);
@@ -43,25 +52,30 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
if (likely(rq->mpwqe.umr_mode == MLX5E_MPWRQ_UMR_MODE_ALIGNED)) {
for (i = 0; i < batch; i++) {
+ struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[i].xsk);
dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
umr_wqe->inline_mtts[i] = (struct mlx5_mtt) {
.ptag = cpu_to_be64(addr | MLX5_EN_WR),
};
+ mxbuf->rq = rq;
}
} else if (unlikely(rq->mpwqe.umr_mode == MLX5E_MPWRQ_UMR_MODE_UNALIGNED)) {
for (i = 0; i < batch; i++) {
+ struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[i].xsk);
dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
umr_wqe->inline_ksms[i] = (struct mlx5_ksm) {
.key = rq->mkey_be,
.va = cpu_to_be64(addr),
};
+ mxbuf->rq = rq;
}
} else if (likely(rq->mpwqe.umr_mode == MLX5E_MPWRQ_UMR_MODE_TRIPLE)) {
u32 mapping_size = 1 << (rq->mpwqe.page_shift - 2);
for (i = 0; i < batch; i++) {
+ struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[i].xsk);
dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
umr_wqe->inline_ksms[i << 2] = (struct mlx5_ksm) {
@@ -80,6 +94,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
.key = rq->mkey_be,
.va = cpu_to_be64(rq->wqe_overflow.addr),
};
+ mxbuf->rq = rq;
}
} else {
__be32 pad_size = cpu_to_be32((1 << rq->mpwqe.page_shift) -
@@ -87,6 +102,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
__be32 frame_size = cpu_to_be32(rq->xsk_pool->chunk_size);
for (i = 0; i < batch; i++) {
+ struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[i].xsk);
dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
umr_wqe->inline_klms[i << 1] = (struct mlx5_klm) {
@@ -99,6 +115,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
.va = cpu_to_be64(rq->wqe_overflow.addr),
.bcount = pad_size,
};
+ mxbuf->rq = rq;
}
}
@@ -229,11 +246,12 @@ static struct sk_buff *mlx5e_xsk_construct_skb(struct mlx5e_rq *rq, struct xdp_b
struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
struct mlx5e_mpw_info *wi,
+ struct mlx5_cqe64 *cqe,
u16 cqe_bcnt,
u32 head_offset,
u32 page_idx)
{
- struct xdp_buff *xdp = wi->alloc_units[page_idx].xsk;
+ struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[page_idx].xsk);
struct bpf_prog *prog;
/* Check packet size. Note LRO doesn't use linear SKB */
@@ -249,9 +267,11 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
*/
WARN_ON_ONCE(head_offset);
- xsk_buff_set_size(xdp, cqe_bcnt);
- xsk_buff_dma_sync_for_cpu(xdp, rq->xsk_pool);
- net_prefetch(xdp->data);
+ /* mxbuf->rq is set on allocation, but cqe is per-packet so set it here */
+ mxbuf->cqe = cqe;
+ xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
+ xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool);
+ net_prefetch(mxbuf->xdp.data);
/* Possible flows:
* - XDP_REDIRECT to XSKMAP:
@@ -269,7 +289,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
*/
prog = rcu_dereference(rq->xdp_prog);
- if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, xdp))) {
+ if (likely(prog && mlx5e_xdp_handle(rq, prog, mxbuf))) {
if (likely(__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)))
__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
return NULL; /* page/packet was consumed by XDP */
@@ -278,14 +298,15 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
/* XDP_PASS: copy the data from the UMEM to a new SKB and reuse the
* frame. On SKB allocation failure, NULL is returned.
*/
- return mlx5e_xsk_construct_skb(rq, xdp);
+ return mlx5e_xsk_construct_skb(rq, &mxbuf->xdp);
}
struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
struct mlx5e_wqe_frag_info *wi,
+ struct mlx5_cqe64 *cqe,
u32 cqe_bcnt)
{
- struct xdp_buff *xdp = wi->au->xsk;
+ struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->au->xsk);
struct bpf_prog *prog;
/* wi->offset is not used in this function, because xdp->data and the
@@ -295,17 +316,19 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
*/
WARN_ON_ONCE(wi->offset);
- xsk_buff_set_size(xdp, cqe_bcnt);
- xsk_buff_dma_sync_for_cpu(xdp, rq->xsk_pool);
- net_prefetch(xdp->data);
+ /* mxbuf->rq is set on allocation, but cqe is per-packet so set it here */
+ mxbuf->cqe = cqe;
+ xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
+ xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool);
+ net_prefetch(mxbuf->xdp.data);
prog = rcu_dereference(rq->xdp_prog);
- if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, xdp)))
+ if (likely(prog && mlx5e_xdp_handle(rq, prog, mxbuf)))
return NULL; /* page/packet was consumed by XDP */
/* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse
* will be handled by mlx5e_free_rx_wqe.
* On SKB allocation failure, NULL is returned.
*/
- return mlx5e_xsk_construct_skb(rq, xdp);
+ return mlx5e_xsk_construct_skb(rq, &mxbuf->xdp);
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
index 087c943bd8e9..cefc0ef6105d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
@@ -13,11 +13,13 @@ int mlx5e_xsk_alloc_rx_wqes_batched(struct mlx5e_rq *rq, u16 ix, int wqe_bulk);
int mlx5e_xsk_alloc_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk);
struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
struct mlx5e_mpw_info *wi,
+ struct mlx5_cqe64 *cqe,
u16 cqe_bcnt,
u32 head_offset,
u32 page_idx);
struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
struct mlx5e_wqe_frag_info *wi,
+ struct mlx5_cqe64 *cqe,
u32 cqe_bcnt);
#endif /* __MLX5_EN_XSK_RX_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
index ff03c43833bb..81a567e17264 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
@@ -7,6 +7,18 @@
#include "en/health.h"
#include <net/xdp_sock_drv.h>
+static int mlx5e_legacy_rq_validate_xsk(struct mlx5_core_dev *mdev,
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk)
+{
+ if (!mlx5e_rx_is_linear_skb(mdev, params, xsk)) {
+ mlx5_core_err(mdev, "Legacy RQ linear mode for XSK can't be activated with current params\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
/* The limitation of 2048 can be altered, but shouldn't go beyond the minimal
* stride size of striding RQ.
*/
@@ -17,8 +29,11 @@ bool mlx5e_validate_xsk_param(struct mlx5e_params *params,
struct mlx5_core_dev *mdev)
{
/* AF_XDP doesn't support frames larger than PAGE_SIZE. */
- if (xsk->chunk_size > PAGE_SIZE || xsk->chunk_size < MLX5E_MIN_XSK_CHUNK_SIZE)
+ if (xsk->chunk_size > PAGE_SIZE || xsk->chunk_size < MLX5E_MIN_XSK_CHUNK_SIZE) {
+ mlx5_core_err(mdev, "XSK chunk size %u out of bounds [%u, %lu]\n", xsk->chunk_size,
+ MLX5E_MIN_XSK_CHUNK_SIZE, PAGE_SIZE);
return false;
+ }
/* frag_sz is different for regular and XSK RQs, so ensure that linear
* SKB mode is possible.
@@ -27,7 +42,7 @@ bool mlx5e_validate_xsk_param(struct mlx5e_params *params,
case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
return !mlx5e_mpwrq_validate_xsk(mdev, params, xsk);
default: /* MLX5_WQ_TYPE_CYCLIC */
- return mlx5e_rx_is_linear_skb(mdev, params, xsk);
+ return !mlx5e_legacy_rq_validate_xsk(mdev, params, xsk);
}
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
index 07187028f0d3..c964644ee866 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
@@ -124,7 +124,7 @@ static inline bool mlx5e_accel_tx_begin(struct net_device *dev,
mlx5e_udp_gso_handle_tx_skb(skb);
#ifdef CONFIG_MLX5_EN_TLS
- /* May send SKBs and WQEs. */
+ /* May send WQEs. */
if (mlx5e_ktls_skb_offloaded(skb))
if (unlikely(!mlx5e_ktls_handle_tx_skb(dev, sq, skb,
&state->tls)))
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c
index d7c020f72401..88a5aed9d678 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c
@@ -365,7 +365,7 @@ void mlx5e_accel_fs_tcp_destroy(struct mlx5e_flow_steering *fs)
for (i = 0; i < ACCEL_FS_TCP_NUM_TYPES; i++)
accel_fs_tcp_destroy_table(fs, i);
- kvfree(accel_tcp);
+ kfree(accel_tcp);
mlx5e_fs_set_accel_tcp(fs, NULL);
}
@@ -377,7 +377,7 @@ int mlx5e_accel_fs_tcp_create(struct mlx5e_flow_steering *fs)
if (!MLX5_CAP_FLOWTABLE_NIC_RX(mlx5e_fs_get_mdev(fs), ft_field_support.outer_ip_version))
return -EOPNOTSUPP;
- accel_tcp = kvzalloc(sizeof(*accel_tcp), GFP_KERNEL);
+ accel_tcp = kzalloc(sizeof(*accel_tcp), GFP_KERNEL);
if (!accel_tcp)
return -ENOMEM;
mlx5e_fs_set_accel_tcp(fs, accel_tcp);
@@ -397,7 +397,7 @@ int mlx5e_accel_fs_tcp_create(struct mlx5e_flow_steering *fs)
err_destroy_tables:
while (--i >= 0)
accel_fs_tcp_destroy_table(fs, i);
- kvfree(accel_tcp);
+ kfree(accel_tcp);
mlx5e_fs_set_accel_tcp(fs, NULL);
return err;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
index bb9023957f74..7b0d3de0ec6c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
@@ -158,95 +158,103 @@ void mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry,
attrs->family = x->props.family;
attrs->type = x->xso.type;
attrs->reqid = x->props.reqid;
+ attrs->upspec.dport = ntohs(x->sel.dport);
+ attrs->upspec.dport_mask = ntohs(x->sel.dport_mask);
+ attrs->upspec.sport = ntohs(x->sel.sport);
+ attrs->upspec.sport_mask = ntohs(x->sel.sport_mask);
+ attrs->upspec.proto = x->sel.proto;
mlx5e_ipsec_init_limits(sa_entry, attrs);
}
-static inline int mlx5e_xfrm_validate_state(struct xfrm_state *x)
+static int mlx5e_xfrm_validate_state(struct mlx5_core_dev *mdev,
+ struct xfrm_state *x,
+ struct netlink_ext_ack *extack)
{
- struct net_device *netdev = x->xso.real_dev;
- struct mlx5e_priv *priv;
-
- priv = netdev_priv(netdev);
-
if (x->props.aalgo != SADB_AALG_NONE) {
- netdev_info(netdev, "Cannot offload authenticated xfrm states\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload authenticated xfrm states");
return -EINVAL;
}
if (x->props.ealgo != SADB_X_EALG_AES_GCM_ICV16) {
- netdev_info(netdev, "Only AES-GCM-ICV16 xfrm state may be offloaded\n");
+ NL_SET_ERR_MSG_MOD(extack, "Only AES-GCM-ICV16 xfrm state may be offloaded");
return -EINVAL;
}
if (x->props.calgo != SADB_X_CALG_NONE) {
- netdev_info(netdev, "Cannot offload compressed xfrm states\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload compressed xfrm states");
return -EINVAL;
}
if (x->props.flags & XFRM_STATE_ESN &&
- !(mlx5_ipsec_device_caps(priv->mdev) & MLX5_IPSEC_CAP_ESN)) {
- netdev_info(netdev, "Cannot offload ESN xfrm states\n");
+ !(mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_ESN)) {
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload ESN xfrm states");
return -EINVAL;
}
if (x->props.family != AF_INET &&
x->props.family != AF_INET6) {
- netdev_info(netdev, "Only IPv4/6 xfrm states may be offloaded\n");
+ NL_SET_ERR_MSG_MOD(extack, "Only IPv4/6 xfrm states may be offloaded");
return -EINVAL;
}
if (x->id.proto != IPPROTO_ESP) {
- netdev_info(netdev, "Only ESP xfrm state may be offloaded\n");
+ NL_SET_ERR_MSG_MOD(extack, "Only ESP xfrm state may be offloaded");
return -EINVAL;
}
if (x->encap) {
- netdev_info(netdev, "Encapsulated xfrm state may not be offloaded\n");
+ NL_SET_ERR_MSG_MOD(extack, "Encapsulated xfrm state may not be offloaded");
return -EINVAL;
}
if (!x->aead) {
- netdev_info(netdev, "Cannot offload xfrm states without aead\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states without aead");
return -EINVAL;
}
if (x->aead->alg_icv_len != 128) {
- netdev_info(netdev, "Cannot offload xfrm states with AEAD ICV length other than 128bit\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states with AEAD ICV length other than 128bit");
return -EINVAL;
}
if ((x->aead->alg_key_len != 128 + 32) &&
(x->aead->alg_key_len != 256 + 32)) {
- netdev_info(netdev, "Cannot offload xfrm states with AEAD key length other than 128/256 bit\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states with AEAD key length other than 128/256 bit");
return -EINVAL;
}
if (x->tfcpad) {
- netdev_info(netdev, "Cannot offload xfrm states with tfc padding\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states with tfc padding");
return -EINVAL;
}
if (!x->geniv) {
- netdev_info(netdev, "Cannot offload xfrm states without geniv\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states without geniv");
return -EINVAL;
}
if (strcmp(x->geniv, "seqiv")) {
- netdev_info(netdev, "Cannot offload xfrm states with geniv other than seqiv\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states with geniv other than seqiv");
return -EINVAL;
}
+
+ if (x->sel.proto != IPPROTO_IP &&
+ (x->sel.proto != IPPROTO_UDP || x->xso.dir != XFRM_DEV_OFFLOAD_OUT)) {
+ NL_SET_ERR_MSG_MOD(extack, "Device does not support upper protocol other than UDP, and only Tx direction");
+ return -EINVAL;
+ }
+
switch (x->xso.type) {
case XFRM_DEV_OFFLOAD_CRYPTO:
- if (!(mlx5_ipsec_device_caps(priv->mdev) &
- MLX5_IPSEC_CAP_CRYPTO)) {
- netdev_info(netdev, "Crypto offload is not supported\n");
+ if (!(mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_CRYPTO)) {
+ NL_SET_ERR_MSG_MOD(extack, "Crypto offload is not supported");
return -EINVAL;
}
if (x->props.mode != XFRM_MODE_TRANSPORT &&
x->props.mode != XFRM_MODE_TUNNEL) {
- netdev_info(netdev, "Only transport and tunnel xfrm states may be offloaded\n");
+ NL_SET_ERR_MSG_MOD(extack, "Only transport and tunnel xfrm states may be offloaded");
return -EINVAL;
}
break;
case XFRM_DEV_OFFLOAD_PACKET:
- if (!(mlx5_ipsec_device_caps(priv->mdev) &
+ if (!(mlx5_ipsec_device_caps(mdev) &
MLX5_IPSEC_CAP_PACKET_OFFLOAD)) {
- netdev_info(netdev, "Packet offload is not supported\n");
+ NL_SET_ERR_MSG_MOD(extack, "Packet offload is not supported");
return -EINVAL;
}
if (x->props.mode != XFRM_MODE_TRANSPORT) {
- netdev_info(netdev, "Only transport xfrm states may be offloaded in packet mode\n");
+ NL_SET_ERR_MSG_MOD(extack, "Only transport xfrm states may be offloaded in packet mode");
return -EINVAL;
}
@@ -254,35 +262,30 @@ static inline int mlx5e_xfrm_validate_state(struct xfrm_state *x)
x->replay_esn->replay_window != 64 &&
x->replay_esn->replay_window != 128 &&
x->replay_esn->replay_window != 256) {
- netdev_info(netdev,
- "Unsupported replay window size %u\n",
- x->replay_esn->replay_window);
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported replay window size");
return -EINVAL;
}
if (!x->props.reqid) {
- netdev_info(netdev, "Cannot offload without reqid\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload without reqid");
return -EINVAL;
}
if (x->lft.hard_byte_limit != XFRM_INF ||
x->lft.soft_byte_limit != XFRM_INF) {
- netdev_info(netdev,
- "Device doesn't support limits in bytes\n");
+ NL_SET_ERR_MSG_MOD(extack, "Device doesn't support limits in bytes");
return -EINVAL;
}
if (x->lft.soft_packet_limit >= x->lft.hard_packet_limit &&
x->lft.hard_packet_limit != XFRM_INF) {
/* XFRM stack doesn't prevent such configuration :(. */
- netdev_info(netdev,
- "Hard packet limit must be greater than soft one\n");
+ NL_SET_ERR_MSG_MOD(extack, "Hard packet limit must be greater than soft one");
return -EINVAL;
}
break;
default:
- netdev_info(netdev, "Unsupported xfrm offload type %d\n",
- x->xso.type);
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported xfrm offload type");
return -EINVAL;
}
return 0;
@@ -298,7 +301,8 @@ static void _update_xfrm_state(struct work_struct *work)
mlx5_accel_esp_modify_xfrm(sa_entry, &modify_work->attrs);
}
-static int mlx5e_xfrm_add_state(struct xfrm_state *x)
+static int mlx5e_xfrm_add_state(struct xfrm_state *x,
+ struct netlink_ext_ack *extack)
{
struct mlx5e_ipsec_sa_entry *sa_entry = NULL;
struct net_device *netdev = x->xso.real_dev;
@@ -311,15 +315,13 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x)
return -EOPNOTSUPP;
ipsec = priv->ipsec;
- err = mlx5e_xfrm_validate_state(x);
+ err = mlx5e_xfrm_validate_state(priv->mdev, x, extack);
if (err)
return err;
sa_entry = kzalloc(sizeof(*sa_entry), GFP_KERNEL);
- if (!sa_entry) {
- err = -ENOMEM;
- goto out;
- }
+ if (!sa_entry)
+ return -ENOMEM;
sa_entry->x = x;
sa_entry->ipsec = ipsec;
@@ -360,7 +362,7 @@ err_hw_ctx:
mlx5_ipsec_free_sa_ctx(sa_entry);
err_xfrm:
kfree(sa_entry);
-out:
+ NL_SET_ERR_MSG_MOD(extack, "Device failed to offload this policy");
return err;
}
@@ -497,34 +499,39 @@ static void mlx5e_xfrm_update_curlft(struct xfrm_state *x)
mlx5e_ipsec_aso_update_curlft(sa_entry, &x->curlft.packets);
}
-static int mlx5e_xfrm_validate_policy(struct xfrm_policy *x)
+static int mlx5e_xfrm_validate_policy(struct xfrm_policy *x,
+ struct netlink_ext_ack *extack)
{
- struct net_device *netdev = x->xdo.real_dev;
-
if (x->type != XFRM_POLICY_TYPE_MAIN) {
- netdev_info(netdev, "Cannot offload non-main policy types\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload non-main policy types");
return -EINVAL;
}
/* Please pay attention that we support only one template */
if (x->xfrm_nr > 1) {
- netdev_info(netdev, "Cannot offload more than one template\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload more than one template");
return -EINVAL;
}
if (x->xdo.dir != XFRM_DEV_OFFLOAD_IN &&
x->xdo.dir != XFRM_DEV_OFFLOAD_OUT) {
- netdev_info(netdev, "Cannot offload forward policy\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload forward policy");
return -EINVAL;
}
if (!x->xfrm_vec[0].reqid) {
- netdev_info(netdev, "Cannot offload policy without reqid\n");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot offload policy without reqid");
return -EINVAL;
}
if (x->xdo.type != XFRM_DEV_OFFLOAD_PACKET) {
- netdev_info(netdev, "Unsupported xfrm offload type\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported xfrm offload type");
+ return -EINVAL;
+ }
+
+ if (x->selector.proto != IPPROTO_IP &&
+ (x->selector.proto != IPPROTO_UDP || x->xdo.dir != XFRM_DEV_OFFLOAD_OUT)) {
+ NL_SET_ERR_MSG_MOD(extack, "Device does not support upper protocol other than UDP, and only Tx direction");
return -EINVAL;
}
@@ -548,9 +555,15 @@ mlx5e_ipsec_build_accel_pol_attrs(struct mlx5e_ipsec_pol_entry *pol_entry,
attrs->action = x->action;
attrs->type = XFRM_DEV_OFFLOAD_PACKET;
attrs->reqid = x->xfrm_vec[0].reqid;
+ attrs->upspec.dport = ntohs(sel->dport);
+ attrs->upspec.dport_mask = ntohs(sel->dport_mask);
+ attrs->upspec.sport = ntohs(sel->sport);
+ attrs->upspec.sport_mask = ntohs(sel->sport_mask);
+ attrs->upspec.proto = sel->proto;
}
-static int mlx5e_xfrm_add_policy(struct xfrm_policy *x)
+static int mlx5e_xfrm_add_policy(struct xfrm_policy *x,
+ struct netlink_ext_ack *extack)
{
struct net_device *netdev = x->xdo.real_dev;
struct mlx5e_ipsec_pol_entry *pol_entry;
@@ -558,10 +571,12 @@ static int mlx5e_xfrm_add_policy(struct xfrm_policy *x)
int err;
priv = netdev_priv(netdev);
- if (!priv->ipsec)
+ if (!priv->ipsec) {
+ NL_SET_ERR_MSG_MOD(extack, "Device doesn't support IPsec packet offload");
return -EOPNOTSUPP;
+ }
- err = mlx5e_xfrm_validate_policy(x);
+ err = mlx5e_xfrm_validate_policy(x, extack);
if (err)
return err;
@@ -582,6 +597,7 @@ static int mlx5e_xfrm_add_policy(struct xfrm_policy *x)
err_fs:
kfree(pol_entry);
+ NL_SET_ERR_MSG_MOD(extack, "Device failed to offload this policy");
return err;
}
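
The netdev_info()-to-NL_SET_ERR_MSG_MOD() conversion throughout this file moves the rejection reason from dmesg into the netlink ack, so the tool that issued the request (e.g. ip-xfrm) prints it directly. The macro itself just prefixes the module name (from include/linux/netlink.h):

	#define NL_SET_ERR_MSG_MOD(extack, msg)	\
		NL_SET_ERR_MSG((extack), KBUILD_MODNAME ": " msg)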
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
index 8bed9c361075..12f044330639 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
@@ -52,6 +52,14 @@ struct aes_gcm_keymat {
u32 aes_key[256 / 32];
};
+struct upspec {
+ u16 dport;
+ u16 dport_mask;
+ u16 sport;
+ u16 sport_mask;
+ u8 proto;
+};
+
struct mlx5_accel_esp_xfrm_attrs {
u32 esn;
u32 spi;
@@ -68,6 +76,7 @@ struct mlx5_accel_esp_xfrm_attrs {
__be32 a6[4];
} daddr;
+ struct upspec upspec;
u8 dir : 2;
u8 esn_overlap : 1;
u8 esn_trigger : 1;
@@ -84,6 +93,7 @@ enum mlx5_ipsec_cap {
MLX5_IPSEC_CAP_CRYPTO = 1 << 0,
MLX5_IPSEC_CAP_ESN = 1 << 1,
MLX5_IPSEC_CAP_PACKET_OFFLOAD = 1 << 2,
+ MLX5_IPSEC_CAP_ROCE = 1 << 3,
};
struct mlx5e_priv;
@@ -119,7 +129,7 @@ struct mlx5e_ipsec_work {
};
struct mlx5e_ipsec_aso {
- u8 ctx[MLX5_ST_SZ_BYTES(ipsec_aso)];
+ u8 __aligned(64) ctx[MLX5_ST_SZ_BYTES(ipsec_aso)];
dma_addr_t dma_addr;
struct mlx5_aso *aso;
/* Protect ASO WQ access, as it is global to whole IPsec */
@@ -138,6 +148,7 @@ struct mlx5e_ipsec {
struct mlx5e_ipsec_tx *tx;
struct mlx5e_ipsec_aso *aso;
struct notifier_block nb;
+ struct mlx5_ipsec_fs *roce;
};
struct mlx5e_ipsec_esn_state {
@@ -181,6 +192,7 @@ struct mlx5_accel_pol_xfrm_attrs {
__be32 a6[4];
} daddr;
+ struct upspec upspec;
u8 family;
u8 action;
u8 type : 2;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
index 9f19f4b59a70..9871ba1b25ff 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
@@ -6,6 +6,7 @@
#include "en/fs.h"
#include "ipsec.h"
#include "fs_core.h"
+#include "lib/ipsec_fs_roce.h"
#define NUM_IPSEC_FTE BIT(15)
@@ -166,7 +167,8 @@ out:
return err;
}
-static void rx_destroy(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_rx *rx)
+static void rx_destroy(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
+ struct mlx5e_ipsec_rx *rx, u32 family)
{
mlx5_del_flow_rules(rx->pol.rule);
mlx5_destroy_flow_group(rx->pol.group);
@@ -179,6 +181,8 @@ static void rx_destroy(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_rx *rx)
mlx5_del_flow_rules(rx->status.rule);
mlx5_modify_header_dealloc(mdev, rx->status.modify_hdr);
mlx5_destroy_flow_table(rx->ft.status);
+
+ mlx5_ipsec_fs_roce_rx_destroy(ipsec->roce, family);
}
static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
@@ -186,18 +190,35 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
{
struct mlx5_flow_namespace *ns = mlx5e_fs_get_ns(ipsec->fs, false);
struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(ipsec->fs, false);
+ struct mlx5_flow_destination default_dest;
struct mlx5_flow_destination dest[2];
struct mlx5_flow_table *ft;
int err;
+ default_dest = mlx5_ttc_get_default_dest(ttc, family2tt(family));
+ err = mlx5_ipsec_fs_roce_rx_create(mdev, ipsec->roce, ns, &default_dest,
+ family, MLX5E_ACCEL_FS_ESP_FT_ROCE_LEVEL,
+ MLX5E_NIC_PRIO);
+ if (err)
+ return err;
+
ft = ipsec_ft_create(ns, MLX5E_ACCEL_FS_ESP_FT_ERR_LEVEL,
MLX5E_NIC_PRIO, 1);
- if (IS_ERR(ft))
- return PTR_ERR(ft);
+ if (IS_ERR(ft)) {
+ err = PTR_ERR(ft);
+ goto err_fs_ft_status;
+ }
rx->ft.status = ft;
- dest[0] = mlx5_ttc_get_default_dest(ttc, family2tt(family));
+ ft = mlx5_ipsec_fs_roce_ft_get(ipsec->roce, family);
+ if (ft) {
+ dest[0].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+ dest[0].ft = ft;
+ } else {
+ dest[0] = default_dest;
+ }
+
dest[1].type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
dest[1].counter_id = mlx5_fc_id(rx->fc->cnt);
err = ipsec_status_rule(mdev, rx, dest);
@@ -245,6 +266,8 @@ err_fs_ft:
mlx5_modify_header_dealloc(mdev, rx->status.modify_hdr);
err_add:
mlx5_destroy_flow_table(rx->ft.status);
+err_fs_ft_status:
+ mlx5_ipsec_fs_roce_rx_destroy(ipsec->roce, family);
return err;
}
@@ -304,14 +327,15 @@ static void rx_ft_put(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
mlx5_ttc_fwd_default_dest(ttc, family2tt(family));
/* remove FT */
- rx_destroy(mdev, rx);
+ rx_destroy(mdev, ipsec, rx, family);
out:
mutex_unlock(&rx->ft.mutex);
}
/* IPsec TX flow steering */
-static int tx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx)
+static int tx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx,
+ struct mlx5_ipsec_fs *roce)
{
struct mlx5_flow_destination dest = {};
struct mlx5_flow_table *ft;
@@ -334,8 +358,15 @@ static int tx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx)
err = ipsec_miss_create(mdev, tx->ft.pol, &tx->pol, &dest);
if (err)
goto err_pol_miss;
+
+ err = mlx5_ipsec_fs_roce_tx_create(mdev, roce, tx->ft.pol);
+ if (err)
+ goto err_roce;
return 0;
+err_roce:
+ mlx5_del_flow_rules(tx->pol.rule);
+ mlx5_destroy_flow_group(tx->pol.group);
err_pol_miss:
mlx5_destroy_flow_table(tx->ft.pol);
err_pol_ft:
@@ -353,9 +384,10 @@ static struct mlx5e_ipsec_tx *tx_ft_get(struct mlx5_core_dev *mdev,
if (tx->ft.refcnt)
goto skip;
- err = tx_create(mdev, tx);
+ err = tx_create(mdev, tx, ipsec->roce);
if (err)
goto out;
+
skip:
tx->ft.refcnt++;
out:
@@ -374,6 +406,7 @@ static void tx_ft_put(struct mlx5e_ipsec *ipsec)
if (tx->ft.refcnt)
goto out;
+ mlx5_ipsec_fs_roce_tx_destroy(ipsec->roce);
mlx5_del_flow_rules(tx->pol.rule);
mlx5_destroy_flow_group(tx->pol.group);
mlx5_destroy_flow_table(tx->ft.pol);
@@ -467,6 +500,27 @@ static void setup_fte_reg_c0(struct mlx5_flow_spec *spec, u32 reqid)
misc_parameters_2.metadata_reg_c_0, reqid);
}
+static void setup_fte_upper_proto_match(struct mlx5_flow_spec *spec, struct upspec *upspec)
+{
+ if (upspec->proto != IPPROTO_UDP)
+ return;
+
+ spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
+ MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, spec->match_criteria, ip_protocol);
+ MLX5_SET(fte_match_set_lyr_2_4, spec->match_value, ip_protocol, upspec->proto);
+ if (upspec->dport) {
+ MLX5_SET(fte_match_set_lyr_2_4, spec->match_criteria, udp_dport,
+ upspec->dport_mask);
+ MLX5_SET(fte_match_set_lyr_2_4, spec->match_value, udp_dport, upspec->dport);
+ }
+
+ if (upspec->sport) {
+ MLX5_SET(fte_match_set_lyr_2_4, spec->match_criteria, udp_sport,
+ upspec->sport_mask);
+ MLX5_SET(fte_match_set_lyr_2_4, spec->match_value, udp_sport, upspec->sport);
+ }
+}
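
In these flow-spec helpers, each field name is written twice via MLX5_SET() — once into spec->match_criteria (the mask) and once into spec->match_value (the value) — so a port match is always a mask/value pair. For instance, an exact match on UDP destination port 4789 would be (a sketch):

	MLX5_SET(fte_match_set_lyr_2_4, spec->match_criteria, udp_dport, 0xffff);
	MLX5_SET(fte_match_set_lyr_2_4, spec->match_value, udp_dport, 4789);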
+
static int setup_modify_header(struct mlx5_core_dev *mdev, u32 val, u8 dir,
struct mlx5_flow_act *flow_act)
{
@@ -654,6 +708,7 @@ static int tx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry)
setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6);
setup_fte_no_frags(spec);
+ setup_fte_upper_proto_match(spec, &attrs->upspec);
switch (attrs->type) {
case XFRM_DEV_OFFLOAD_CRYPTO:
@@ -728,6 +783,7 @@ static int tx_add_policy(struct mlx5e_ipsec_pol_entry *pol_entry)
setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6);
setup_fte_no_frags(spec);
+ setup_fte_upper_proto_match(spec, &attrs->upspec);
err = setup_modify_header(mdev, attrs->reqid, XFRM_DEV_OFFLOAD_OUT,
&flow_act);
@@ -1008,6 +1064,9 @@ void mlx5e_accel_ipsec_fs_cleanup(struct mlx5e_ipsec *ipsec)
if (!ipsec->tx)
return;
+ if (mlx5_ipsec_device_caps(ipsec->mdev) & MLX5_IPSEC_CAP_ROCE)
+ mlx5_ipsec_fs_roce_cleanup(ipsec->roce);
+
ipsec_fs_destroy_counters(ipsec);
mutex_destroy(&ipsec->tx->ft.mutex);
WARN_ON(ipsec->tx->ft.refcnt);
@@ -1024,6 +1083,7 @@ void mlx5e_accel_ipsec_fs_cleanup(struct mlx5e_ipsec *ipsec)
int mlx5e_accel_ipsec_fs_init(struct mlx5e_ipsec *ipsec)
{
+ struct mlx5_core_dev *mdev = ipsec->mdev;
struct mlx5_flow_namespace *ns;
int err = -ENOMEM;
@@ -1053,6 +1113,9 @@ int mlx5e_accel_ipsec_fs_init(struct mlx5e_ipsec *ipsec)
mutex_init(&ipsec->rx_ipv6->ft.mutex);
ipsec->tx->ns = ns;
+ if (mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_ROCE)
+ ipsec->roce = mlx5_ipsec_fs_roce_init(mdev);
+
return 0;
err_counters:
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
index 2461462b7b99..5fa7a4c40429 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
@@ -4,7 +4,7 @@
#include "mlx5_core.h"
#include "en.h"
#include "ipsec.h"
-#include "lib/mlx5.h"
+#include "lib/crypto.h"
enum {
MLX5_IPSEC_ASO_REMOVE_FLOW_PKT_CNT_OFFSET,
@@ -42,6 +42,11 @@ u32 mlx5_ipsec_device_caps(struct mlx5_core_dev *mdev)
MLX5_CAP_FLOWTABLE_NIC_RX(mdev, decap))
caps |= MLX5_IPSEC_CAP_PACKET_OFFLOAD;
+ if (mlx5_get_roce_state(mdev) &&
+ MLX5_CAP_GEN_2(mdev, flow_table_type_2_type) & MLX5_FT_NIC_RX_2_NIC_RX_RDMA &&
+ MLX5_CAP_GEN_2(mdev, flow_table_type_2_type) & MLX5_FT_NIC_TX_RDMA_2_NIC_TX)
+ caps |= MLX5_IPSEC_CAP_ROCE;
+
if (!caps)
return 0;
@@ -92,7 +97,6 @@ static void mlx5e_ipsec_packet_setup(void *obj, u32 pdn,
MLX5_SET(ipsec_aso, aso_ctx, remove_flow_pkt_cnt,
lower_32_bits(attrs->hard_packet_limit));
MLX5_SET(ipsec_aso, aso_ctx, hard_lft_arm, 1);
- MLX5_SET(ipsec_aso, aso_ctx, remove_flow_enable, 1);
}
if (attrs->soft_packet_limit != XFRM_INF) {
@@ -329,8 +333,7 @@ static void mlx5e_ipsec_handle_event(struct work_struct *_work)
if (attrs->soft_packet_limit != XFRM_INF)
if (!MLX5_GET(ipsec_aso, aso->ctx, soft_lft_arm) ||
- !MLX5_GET(ipsec_aso, aso->ctx, hard_lft_arm) ||
- !MLX5_GET(ipsec_aso, aso->ctx, remove_flow_enable))
+ !MLX5_GET(ipsec_aso, aso->ctx, hard_lft_arm))
xfrm_state_check_expire(sa_entry->x);
unlock:
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
index da2184c94203..cf704f106b7c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
@@ -1,18 +1,19 @@
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
// Copyright (c) 2019 Mellanox Technologies.
+#include <linux/debugfs.h>
#include "en.h"
#include "lib/mlx5.h"
+#include "lib/crypto.h"
#include "en_accel/ktls.h"
#include "en_accel/ktls_utils.h"
#include "en_accel/fs_tcp.h"
-int mlx5_ktls_create_key(struct mlx5_core_dev *mdev,
- struct tls_crypto_info *crypto_info,
- u32 *p_key_id)
+struct mlx5_crypto_dek *mlx5_ktls_create_key(struct mlx5_crypto_dek_pool *dek_pool,
+ struct tls_crypto_info *crypto_info)
{
+ const void *key;
u32 sz_bytes;
- void *key;
switch (crypto_info->cipher_type) {
case TLS_CIPHER_AES_GCM_128: {
@@ -32,17 +33,16 @@ int mlx5_ktls_create_key(struct mlx5_core_dev *mdev,
break;
}
default:
- return -EINVAL;
+ return ERR_PTR(-EINVAL);
}
- return mlx5_create_encryption_key(mdev, key, sz_bytes,
- MLX5_ACCEL_OBJ_TLS_KEY,
- p_key_id);
+ return mlx5_crypto_dek_create(dek_pool, key, sz_bytes);
}
-void mlx5_ktls_destroy_key(struct mlx5_core_dev *mdev, u32 key_id)
+void mlx5_ktls_destroy_key(struct mlx5_crypto_dek_pool *dek_pool,
+ struct mlx5_crypto_dek *dek)
{
- mlx5_destroy_encryption_key(mdev, key_id);
+ mlx5_crypto_dek_destroy(dek_pool, dek);
}
static int mlx5e_ktls_add(struct net_device *netdev, struct sock *sk,
@@ -177,8 +177,18 @@ void mlx5e_ktls_cleanup_rx(struct mlx5e_priv *priv)
destroy_workqueue(priv->tls->rx_wq);
}
+static void mlx5e_tls_debugfs_init(struct mlx5e_tls *tls,
+ struct dentry *dfs_root)
+{
+ if (IS_ERR_OR_NULL(dfs_root))
+ return;
+
+ tls->debugfs.dfs = debugfs_create_dir("tls", dfs_root);
+}
+
int mlx5e_ktls_init(struct mlx5e_priv *priv)
{
+ struct mlx5_crypto_dek_pool *dek_pool;
struct mlx5e_tls *tls;
if (!mlx5e_is_ktls_device(priv->mdev))
@@ -187,13 +197,32 @@ int mlx5e_ktls_init(struct mlx5e_priv *priv)
tls = kzalloc(sizeof(*tls), GFP_KERNEL);
if (!tls)
return -ENOMEM;
+ tls->mdev = priv->mdev;
+ dek_pool = mlx5_crypto_dek_pool_create(priv->mdev, MLX5_ACCEL_OBJ_TLS_KEY);
+ if (IS_ERR(dek_pool)) {
+ kfree(tls);
+ return PTR_ERR(dek_pool);
+ }
+ tls->dek_pool = dek_pool;
priv->tls = tls;
+
+ mlx5e_tls_debugfs_init(tls, priv->dfs_root);
+
return 0;
}
void mlx5e_ktls_cleanup(struct mlx5e_priv *priv)
{
+ struct mlx5e_tls *tls = priv->tls;
+
+ if (!mlx5e_is_ktls_device(priv->mdev))
+ return;
+
+ debugfs_remove_recursive(tls->debugfs.dfs);
+ tls->debugfs.dfs = NULL;
+
+ mlx5_crypto_dek_pool_destroy(tls->dek_pool);
kfree(priv->tls);
priv->tls = NULL;
}
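
[Note on the hunks above] The key-management API changes shape here: instead of returning an int and filling in a raw u32 key id, mlx5_ktls_create_key() now hands back a struct mlx5_crypto_dek * with errors encoded via ERR_PTR(). A minimal caller sketch under that convention; the function name and out-parameter below are made up, the mlx5 calls are from the patch:

static int example_set_key(struct mlx5_crypto_dek_pool *pool,
			   struct tls_crypto_info *info, u32 *hw_key_id)
{
	struct mlx5_crypto_dek *dek;

	dek = mlx5_ktls_create_key(pool, info);
	if (IS_ERR(dek))
		return PTR_ERR(dek);	/* -EINVAL, -ENOMEM, ... */

	*hw_key_id = mlx5_crypto_dek_get_id(dek);	/* id used in WQEs */
	mlx5_ktls_destroy_key(pool, dek);
	return 0;
}
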
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
index 1c35045e41fb..f11075e67658 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
@@ -4,15 +4,18 @@
#ifndef __MLX5E_KTLS_H__
#define __MLX5E_KTLS_H__
+#include <linux/debugfs.h>
#include <linux/tls.h>
#include <net/tls.h>
#include "en.h"
#ifdef CONFIG_MLX5_EN_TLS
-int mlx5_ktls_create_key(struct mlx5_core_dev *mdev,
- struct tls_crypto_info *crypto_info,
- u32 *p_key_id);
-void mlx5_ktls_destroy_key(struct mlx5_core_dev *mdev, u32 key_id);
+#include "lib/crypto.h"
+
+struct mlx5_crypto_dek *mlx5_ktls_create_key(struct mlx5_crypto_dek_pool *dek_pool,
+ struct tls_crypto_info *crypto_info);
+void mlx5_ktls_destroy_key(struct mlx5_crypto_dek_pool *dek_pool,
+ struct mlx5_crypto_dek *dek);
static inline bool mlx5e_is_ktls_device(struct mlx5_core_dev *mdev)
{
@@ -72,10 +75,18 @@ struct mlx5e_tls_sw_stats {
atomic64_t rx_tls_del;
};
+struct mlx5e_tls_debugfs {
+ struct dentry *dfs;
+ struct dentry *dfs_tx;
+};
+
struct mlx5e_tls {
+ struct mlx5_core_dev *mdev;
struct mlx5e_tls_sw_stats sw_stats;
struct workqueue_struct *rx_wq;
struct mlx5e_tls_tx_pool *tx_pool;
+ struct mlx5_crypto_dek_pool *dek_pool;
+ struct mlx5e_tls_debugfs debugfs;
};
int mlx5e_ktls_init(struct mlx5e_priv *priv);
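
[Note on the hunk above] struct mlx5e_tls now carries the mdev back-pointer, the per-device DEK pool, and the debugfs handles, so the pool's lifetime is tied to the TLS context. A sketch of the allocate-then-unwind ordering this implies, assuming only what the hunks above show (the wrapper is illustrative):

static struct mlx5e_tls *example_tls_alloc(struct mlx5_core_dev *mdev)
{
	struct mlx5_crypto_dek_pool *pool;
	struct mlx5e_tls *tls;

	tls = kzalloc(sizeof(*tls), GFP_KERNEL);
	if (!tls)
		return ERR_PTR(-ENOMEM);

	pool = mlx5_crypto_dek_pool_create(mdev, MLX5_ACCEL_OBJ_TLS_KEY);
	if (IS_ERR(pool)) {
		kfree(tls);		/* unwind in reverse order */
		return ERR_CAST(pool);
	}
	tls->mdev = mdev;
	tls->dek_pool = pool;
	return tls;
}
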
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
index 3e54834747ce..4be770443b0c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
@@ -50,7 +50,7 @@ struct mlx5e_ktls_offload_context_rx {
struct mlx5e_tls_sw_stats *sw_stats;
struct completion add_ctx;
struct mlx5e_tir tir;
- u32 key_id;
+ struct mlx5_crypto_dek *dek;
u32 rxq;
DECLARE_BITMAP(flags, MLX5E_NUM_PRIV_RX_FLAGS);
@@ -148,7 +148,8 @@ post_static_params(struct mlx5e_icosq *sq,
wqe = MLX5E_TLS_FETCH_SET_STATIC_PARAMS_WQE(sq, pi);
mlx5e_ktls_build_static_params(wqe, sq->pc, sq->sqn, &priv_rx->crypto_info,
mlx5e_tir_get_tirn(&priv_rx->tir),
- priv_rx->key_id, priv_rx->resync.seq, false,
+ mlx5_crypto_dek_get_id(priv_rx->dek),
+ priv_rx->resync.seq, false,
TLS_OFFLOAD_CTX_DIR_RX);
wi = (struct mlx5e_icosq_wqe_info) {
.wqe_type = MLX5E_ICOSQ_WQE_UMR_TLS,
@@ -610,20 +611,22 @@ int mlx5e_ktls_add_rx(struct net_device *netdev, struct sock *sk,
struct mlx5e_ktls_offload_context_rx *priv_rx;
struct mlx5e_ktls_rx_resync_ctx *resync;
struct tls_context *tls_ctx;
- struct mlx5_core_dev *mdev;
+ struct mlx5_crypto_dek *dek;
struct mlx5e_priv *priv;
int rxq, err;
tls_ctx = tls_get_ctx(sk);
priv = netdev_priv(netdev);
- mdev = priv->mdev;
priv_rx = kzalloc(sizeof(*priv_rx), GFP_KERNEL);
if (unlikely(!priv_rx))
return -ENOMEM;
- err = mlx5_ktls_create_key(mdev, crypto_info, &priv_rx->key_id);
- if (err)
+ dek = mlx5_ktls_create_key(priv->tls->dek_pool, crypto_info);
+ if (IS_ERR(dek)) {
+ err = PTR_ERR(dek);
goto err_create_key;
+ }
+ priv_rx->dek = dek;
INIT_LIST_HEAD(&priv_rx->list);
spin_lock_init(&priv_rx->lock);
@@ -673,7 +676,7 @@ int mlx5e_ktls_add_rx(struct net_device *netdev, struct sock *sk,
err_post_wqes:
mlx5e_tir_destroy(&priv_rx->tir);
err_create_tir:
- mlx5_ktls_destroy_key(mdev, priv_rx->key_id);
+ mlx5_ktls_destroy_key(priv->tls->dek_pool, priv_rx->dek);
err_create_key:
kfree(priv_rx);
return err;
@@ -683,11 +686,9 @@ void mlx5e_ktls_del_rx(struct net_device *netdev, struct tls_context *tls_ctx)
{
struct mlx5e_ktls_offload_context_rx *priv_rx;
struct mlx5e_ktls_rx_resync_ctx *resync;
- struct mlx5_core_dev *mdev;
struct mlx5e_priv *priv;
priv = netdev_priv(netdev);
- mdev = priv->mdev;
priv_rx = mlx5e_get_ktls_rx_priv_ctx(tls_ctx);
set_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags);
@@ -707,7 +708,7 @@ void mlx5e_ktls_del_rx(struct net_device *netdev, struct tls_context *tls_ctx)
mlx5e_accel_fs_del_sk(priv_rx->rule.rule);
mlx5e_tir_destroy(&priv_rx->tir);
- mlx5_ktls_destroy_key(mdev, priv_rx->key_id);
+ mlx5_ktls_destroy_key(priv->tls->dek_pool, priv_rx->dek);
/* priv_rx should normally be freed here, but if there is an outstanding
* GET_PSV, deallocation will be delayed until the CQE for GET_PSV is
* processed.
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index 78072bf93f3f..60b3e08a1028 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
// Copyright (c) 2019 Mellanox Technologies.
+#include <linux/debugfs.h>
#include "en_accel/ktls.h"
#include "en_accel/ktls_txrx.h"
#include "en_accel/ktls_utils.h"
@@ -97,7 +98,7 @@ struct mlx5e_ktls_offload_context_tx {
struct tls_offload_context_tx *tx_ctx;
struct mlx5_core_dev *mdev;
struct mlx5e_tls_sw_stats *sw_stats;
- u32 key_id;
+ struct mlx5_crypto_dek *dek;
u8 create_err : 1;
};
@@ -456,6 +457,7 @@ int mlx5e_ktls_add_tx(struct net_device *netdev, struct sock *sk,
struct mlx5e_ktls_offload_context_tx *priv_tx;
struct mlx5e_tls_tx_pool *pool;
struct tls_context *tls_ctx;
+ struct mlx5_crypto_dek *dek;
struct mlx5e_priv *priv;
int err;
@@ -467,9 +469,12 @@ int mlx5e_ktls_add_tx(struct net_device *netdev, struct sock *sk,
if (IS_ERR(priv_tx))
return PTR_ERR(priv_tx);
- err = mlx5_ktls_create_key(pool->mdev, crypto_info, &priv_tx->key_id);
- if (err)
+ dek = mlx5_ktls_create_key(priv->tls->dek_pool, crypto_info);
+ if (IS_ERR(dek)) {
+ err = PTR_ERR(dek);
goto err_create_key;
+ }
+ priv_tx->dek = dek;
priv_tx->expected_seq = start_offload_tcp_sn;
switch (crypto_info->cipher_type) {
@@ -511,7 +516,7 @@ void mlx5e_ktls_del_tx(struct net_device *netdev, struct tls_context *tls_ctx)
pool = priv->tls->tx_pool;
atomic64_inc(&priv_tx->sw_stats->tx_tls_del);
- mlx5_ktls_destroy_key(priv_tx->mdev, priv_tx->key_id);
+ mlx5_ktls_destroy_key(priv->tls->dek_pool, priv_tx->dek);
pool_push(pool, priv_tx);
}
@@ -550,8 +555,9 @@ post_static_params(struct mlx5e_txqsq *sq,
pi = mlx5e_txqsq_get_next_pi(sq, num_wqebbs);
wqe = MLX5E_TLS_FETCH_SET_STATIC_PARAMS_WQE(sq, pi);
mlx5e_ktls_build_static_params(wqe, sq->pc, sq->sqn, &priv_tx->crypto_info,
- priv_tx->tisn, priv_tx->key_id, 0, fence,
- TLS_OFFLOAD_CTX_DIR_TX);
+ priv_tx->tisn,
+ mlx5_crypto_dek_get_id(priv_tx->dek),
+ 0, fence, TLS_OFFLOAD_CTX_DIR_TX);
tx_fill_wi(sq, pi, num_wqebbs, 0, NULL);
sq->pc += num_wqebbs;
}
@@ -886,8 +892,22 @@ err_out:
return false;
}
+static void mlx5e_tls_tx_debugfs_init(struct mlx5e_tls *tls,
+ struct dentry *dfs_root)
+{
+ if (IS_ERR_OR_NULL(dfs_root))
+ return;
+
+ tls->debugfs.dfs_tx = debugfs_create_dir("tx", dfs_root);
+
+ debugfs_create_size_t("pool_size", 0400, tls->debugfs.dfs_tx,
+ &tls->tx_pool->size);
+}
+
int mlx5e_ktls_init_tx(struct mlx5e_priv *priv)
{
+ struct mlx5e_tls *tls = priv->tls;
+
if (!mlx5e_is_ktls_tx(priv->mdev))
return 0;
@@ -895,6 +915,8 @@ int mlx5e_ktls_init_tx(struct mlx5e_priv *priv)
if (!priv->tls->tx_pool)
return -ENOMEM;
+ mlx5e_tls_tx_debugfs_init(tls, tls->debugfs.dfs);
+
return 0;
}
@@ -903,6 +925,9 @@ void mlx5e_ktls_cleanup_tx(struct mlx5e_priv *priv)
if (!mlx5e_is_ktls_tx(priv->mdev))
return;
+ debugfs_remove_recursive(priv->tls->debugfs.dfs_tx);
+ priv->tls->debugfs.dfs_tx = NULL;
+
mlx5e_tls_tx_pool_cleanup(priv->tls->tx_pool);
priv->tls->tx_pool = NULL;
}
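
[Note on the hunks above] The debugfs wiring here is deliberately best-effort: an IS_ERR_OR_NULL parent turns the whole init into a no-op, and the return values of the debugfs_create_* helpers are ignored because they are themselves no-ops on ERR_PTR parents. A minimal sketch of the pattern, with made-up names:

static void example_debugfs_init(struct dentry *parent, size_t *pool_size)
{
	struct dentry *dir;

	if (IS_ERR_OR_NULL(parent))
		return;		/* debugfs disabled, or parent creation failed */

	dir = debugfs_create_dir("example", parent);
	/* no error check needed: helpers tolerate an ERR_PTR dir */
	debugfs_create_size_t("pool_size", 0400, dir, pool_size);
}
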
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
index 7f6b940830b3..08d0929e8260 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
@@ -7,7 +7,7 @@
#include "en.h"
#include "lib/aso.h"
-#include "lib/mlx5.h"
+#include "lib/crypto.h"
#include "en_accel/macsec.h"
#include "en_accel/macsec_fs.h"
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
index 68f19324db93..4c9a3210600c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
@@ -31,6 +31,7 @@
*/
#include "en.h"
+#include "lib/crypto.h"
/* mlx5e global resources should be placed in this file.
* Global resources are common to all the netdevices created on the same nic.
@@ -104,6 +105,13 @@ int mlx5e_create_mdev_resources(struct mlx5_core_dev *mdev)
INIT_LIST_HEAD(&res->td.tirs_list);
mutex_init(&res->td.list_lock);
+ mdev->mlx5e_res.dek_priv = mlx5_crypto_dek_init(mdev);
+ if (IS_ERR(mdev->mlx5e_res.dek_priv)) {
+ mlx5_core_err(mdev, "crypto dek init failed, %ld\n",
+ PTR_ERR(mdev->mlx5e_res.dek_priv));
+ mdev->mlx5e_res.dek_priv = NULL;
+ }
+
return 0;
err_destroy_mkey:
@@ -119,6 +127,8 @@ void mlx5e_destroy_mdev_resources(struct mlx5_core_dev *mdev)
{
struct mlx5e_hw_objs *res = &mdev->mlx5e_res.hw_objs;
+ mlx5_crypto_dek_cleanup(mdev->mlx5e_res.dek_priv);
+ mdev->mlx5e_res.dek_priv = NULL;
mlx5_free_bfreg(mdev, &res->bfreg);
mlx5_core_destroy_mkey(mdev, res->mkey);
mlx5_core_dealloc_transport_domain(mdev, res->td.tdn);
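
[Note on the hunks above] DEK-pool setup is treated as an optional subsystem: a failure is logged, the handle is reset to NULL, and device bring-up continues without the feature, which means the cleanup side must tolerate that NULL. A sketch restating the pattern from the hunks (the wrapper names are illustrative):

static void example_dek_init(struct mlx5_core_dev *mdev)
{
	mdev->mlx5e_res.dek_priv = mlx5_crypto_dek_init(mdev);
	if (IS_ERR(mdev->mlx5e_res.dek_priv)) {
		mlx5_core_err(mdev, "crypto dek init failed, %ld\n",
			      PTR_ERR(mdev->mlx5e_res.dek_priv));
		mdev->mlx5e_res.dek_priv = NULL; /* feature off, netdev still comes up */
	}
}

static void example_dek_cleanup(struct mlx5_core_dev *mdev)
{
	/* must tolerate the NULL left behind by a failed init */
	mlx5_crypto_dek_cleanup(mdev->mlx5e_res.dek_priv);
	mdev->mlx5e_res.dek_priv = NULL;
}
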
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
index 7cd36f4ac3ef..05796f8b1d7c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
@@ -30,6 +30,7 @@
* SOFTWARE.
*/
+#include <linux/debugfs.h>
#include <linux/list.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
@@ -67,6 +68,7 @@ struct mlx5e_flow_steering {
struct mlx5e_fs_udp *udp;
struct mlx5e_fs_any *any;
struct mlx5e_ptp_fs *ptp_fs;
+ struct dentry *dfs_root;
};
static int mlx5e_add_l2_flow_rule(struct mlx5e_flow_steering *fs,
@@ -104,6 +106,11 @@ static inline int mlx5e_hash_l2(const u8 *addr)
return addr[5];
}
+struct dentry *mlx5e_fs_get_debugfs_root(struct mlx5e_flow_steering *fs)
+{
+ return fs->dfs_root;
+}
+
static void mlx5e_add_l2_to_hash(struct hlist_head *hash, const u8 *addr)
{
struct mlx5e_l2_hash_node *hn;
@@ -1429,9 +1436,19 @@ static int mlx5e_fs_ethtool_alloc(struct mlx5e_flow_steering *fs)
static void mlx5e_fs_ethtool_free(struct mlx5e_flow_steering *fs) { }
#endif
+static void mlx5e_fs_debugfs_init(struct mlx5e_flow_steering *fs,
+ struct dentry *dfs_root)
+{
+ if (IS_ERR_OR_NULL(dfs_root))
+ return;
+
+ fs->dfs_root = debugfs_create_dir("fs", dfs_root);
+}
+
struct mlx5e_flow_steering *mlx5e_fs_init(const struct mlx5e_profile *profile,
struct mlx5_core_dev *mdev,
- bool state_destroy)
+ bool state_destroy,
+ struct dentry *dfs_root)
{
struct mlx5e_flow_steering *fs;
int err;
@@ -1458,6 +1475,8 @@ struct mlx5e_flow_steering *mlx5e_fs_init(const struct mlx5e_profile *profile,
if (err)
goto err_free_tc;
+ mlx5e_fs_debugfs_init(fs, dfs_root);
+
return fs;
err_free_tc:
mlx5e_fs_tc_free(fs);
@@ -1471,6 +1490,7 @@ err:
void mlx5e_fs_cleanup(struct mlx5e_flow_steering *fs)
{
+ debugfs_remove_recursive(fs->dfs_root);
mlx5e_fs_ethtool_free(fs);
mlx5e_fs_tc_free(fs);
mlx5e_fs_vlan_free(fs);
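
[Note on the hunks above] mlx5e_fs_get_debugfs_root() exists so other modules can parent their own directories under the flow-steering "fs" root without reaching into struct mlx5e_flow_steering, which stays private to this file. A hypothetical consumer (the directory name is made up; tc uses exactly this accessor later in the series):

static void example_submodule_debugfs(struct mlx5e_flow_steering *fs)
{
	struct dentry *root = mlx5e_fs_get_debugfs_root(fs);

	if (IS_ERR_OR_NULL(root))
		return;

	debugfs_create_dir("example_submodule", root);
}
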
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 6c24f33a5ea5..53feb0529943 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -35,9 +35,11 @@
#include <net/vxlan.h>
#include <net/geneve.h>
#include <linux/bpf.h>
+#include <linux/debugfs.h>
#include <linux/if_bridge.h>
#include <linux/filter.h>
#include <net/page_pool.h>
+#include <net/pkt_sched.h>
#include <net/xdp_sock_drv.h>
#include "eswitch.h"
#include "en.h"
@@ -179,17 +181,21 @@ static void mlx5e_disable_async_events(struct mlx5e_priv *priv)
static int blocking_event(struct notifier_block *nb, unsigned long event, void *data)
{
struct mlx5e_priv *priv = container_of(nb, struct mlx5e_priv, blocking_events_nb);
+ struct mlx5_devlink_trap_event_ctx *trap_event_ctx = data;
int err;
switch (event) {
case MLX5_DRIVER_EVENT_TYPE_TRAP:
- err = mlx5e_handle_trap_event(priv, data);
+ err = mlx5e_handle_trap_event(priv, trap_event_ctx->trap);
+ if (err) {
+ trap_event_ctx->err = err;
+ return NOTIFY_BAD;
+ }
break;
default:
- netdev_warn(priv->netdev, "Sync event: Unknown event %ld\n", event);
- err = -EINVAL;
+ return NOTIFY_DONE;
}
- return err;
+ return NOTIFY_OK;
}
static void mlx5e_enable_blocking_events(struct mlx5e_priv *priv)
@@ -1441,6 +1447,7 @@ static int mlx5e_alloc_txqsq(struct mlx5e_channel *c,
sq->mkey_be = c->mkey_be;
sq->netdev = c->netdev;
sq->mdev = c->mdev;
+ sq->channel = c;
sq->priv = c->priv;
sq->ch_ix = c->ix;
sq->txq_ix = txq_ix;
@@ -2453,8 +2460,6 @@ static void mlx5e_activate_channel(struct mlx5e_channel *c)
mlx5e_activate_xsk(c);
else
mlx5e_activate_rq(&c->rq);
-
- mlx5e_trigger_napi_icosq(c);
}
static void mlx5e_deactivate_channel(struct mlx5e_channel *c)
@@ -2546,13 +2551,19 @@ err_free:
return err;
}
-static void mlx5e_activate_channels(struct mlx5e_channels *chs)
+static void mlx5e_activate_channels(struct mlx5e_priv *priv, struct mlx5e_channels *chs)
{
int i;
for (i = 0; i < chs->num; i++)
mlx5e_activate_channel(chs->c[i]);
+ if (priv->htb)
+ mlx5e_qos_activate_queues(priv);
+
+ for (i = 0; i < chs->num; i++)
+ mlx5e_trigger_napi_icosq(chs->c[i]);
+
if (chs->ptp)
mlx5e_ptp_activate_channel(chs->ptp);
}
@@ -2859,9 +2870,7 @@ out:
void mlx5e_activate_priv_channels(struct mlx5e_priv *priv)
{
mlx5e_build_txq_maps(priv);
- mlx5e_activate_channels(&priv->channels);
- if (priv->htb)
- mlx5e_qos_activate_queues(priv);
+ mlx5e_activate_channels(priv, &priv->channels);
mlx5e_xdp_tx_enable(priv);
/* dev_watchdog() wants all TX queues to be started when the carrier is
@@ -2969,32 +2978,37 @@ int mlx5e_safe_switch_params(struct mlx5e_priv *priv,
mlx5e_fp_preactivate preactivate,
void *context, bool reset)
{
- struct mlx5e_channels new_chs = {};
+ struct mlx5e_channels *new_chs;
int err;
reset &= test_bit(MLX5E_STATE_OPENED, &priv->state);
if (!reset)
return mlx5e_switch_priv_params(priv, params, preactivate, context);
- new_chs.params = *params;
+ new_chs = kzalloc(sizeof(*new_chs), GFP_KERNEL);
+ if (!new_chs)
+ return -ENOMEM;
+ new_chs->params = *params;
- mlx5e_selq_prepare_params(&priv->selq, &new_chs.params);
+ mlx5e_selq_prepare_params(&priv->selq, &new_chs->params);
- err = mlx5e_open_channels(priv, &new_chs);
+ err = mlx5e_open_channels(priv, new_chs);
if (err)
goto err_cancel_selq;
- err = mlx5e_switch_priv_channels(priv, &new_chs, preactivate, context);
+ err = mlx5e_switch_priv_channels(priv, new_chs, preactivate, context);
if (err)
goto err_close;
+ kfree(new_chs);
return 0;
err_close:
- mlx5e_close_channels(&new_chs);
+ mlx5e_close_channels(new_chs);
err_cancel_selq:
mlx5e_selq_cancel(&priv->selq);
+ kfree(new_chs);
return err;
}
@@ -4727,6 +4741,13 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
if (old_prog)
bpf_prog_put(old_prog);
+ if (reset) {
+ if (prog)
+ xdp_features_set_redirect_target(netdev, true);
+ else
+ xdp_features_clear_redirect_target(netdev);
+ }
+
if (!test_bit(MLX5E_STATE_OPENED, &priv->state) || reset)
goto unlock;
@@ -5004,6 +5025,7 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
SET_NETDEV_DEV(netdev, mdev->device);
netdev->netdev_ops = &mlx5e_netdev_ops;
+ netdev->xdp_metadata_ops = &mlx5e_xdp_metadata_ops;
mlx5e_dcbnl_build_netdev(netdev);
@@ -5121,6 +5143,10 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
netdev->features |= NETIF_F_HIGHDMA;
netdev->features |= NETIF_F_HW_VLAN_STAG_FILTER;
+ netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_XSK_ZEROCOPY |
+ NETDEV_XDP_ACT_RX_SG;
+
netdev->priv_flags |= IFF_UNICAST_FLT;
netif_set_tso_max_size(netdev, GSO_MAX_SIZE);
@@ -5181,7 +5207,8 @@ static int mlx5e_nic_init(struct mlx5_core_dev *mdev,
mlx5e_timestamp_init(priv);
fs = mlx5e_fs_init(priv->profile, mdev,
- !test_bit(MLX5E_STATE_DESTROYING, &priv->state));
+ !test_bit(MLX5E_STATE_DESTROYING, &priv->state),
+ priv->dfs_root);
if (!fs) {
err = -ENOMEM;
mlx5_core_err(mdev, "FS initialization failed, %d\n", err);
@@ -5822,7 +5849,8 @@ void mlx5e_destroy_netdev(struct mlx5e_priv *priv)
static int mlx5e_resume(struct auxiliary_device *adev)
{
struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev);
- struct mlx5e_priv *priv = auxiliary_get_drvdata(adev);
+ struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev);
+ struct mlx5e_priv *priv = mlx5e_dev->priv;
struct net_device *netdev = priv->netdev;
struct mlx5_core_dev *mdev = edev->mdev;
int err;
@@ -5845,7 +5873,8 @@ static int mlx5e_resume(struct auxiliary_device *adev)
static int mlx5e_suspend(struct auxiliary_device *adev, pm_message_t state)
{
- struct mlx5e_priv *priv = auxiliary_get_drvdata(adev);
+ struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev);
+ struct mlx5e_priv *priv = mlx5e_dev->priv;
struct net_device *netdev = priv->netdev;
struct mlx5_core_dev *mdev = priv->mdev;
@@ -5863,35 +5892,46 @@ static int mlx5e_probe(struct auxiliary_device *adev,
struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev);
const struct mlx5e_profile *profile = &mlx5e_nic_profile;
struct mlx5_core_dev *mdev = edev->mdev;
+ struct mlx5e_dev *mlx5e_dev;
struct net_device *netdev;
pm_message_t state = {};
struct mlx5e_priv *priv;
int err;
+ mlx5e_dev = mlx5e_create_devlink(&adev->dev, mdev);
+ if (IS_ERR(mlx5e_dev))
+ return PTR_ERR(mlx5e_dev);
+ auxiliary_set_drvdata(adev, mlx5e_dev);
+
+ err = mlx5e_devlink_port_register(mlx5e_dev, mdev);
+ if (err) {
+ mlx5_core_err(mdev, "mlx5e_devlink_port_register failed, %d\n", err);
+ goto err_devlink_unregister;
+ }
+
netdev = mlx5e_create_netdev(mdev, profile);
if (!netdev) {
mlx5_core_err(mdev, "mlx5e_create_netdev failed\n");
- return -ENOMEM;
+ err = -ENOMEM;
+ goto err_devlink_port_unregister;
}
+ SET_NETDEV_DEVLINK_PORT(netdev, &mlx5e_dev->dl_port);
mlx5e_build_nic_netdev(netdev);
priv = netdev_priv(netdev);
- auxiliary_set_drvdata(adev, priv);
+ mlx5e_dev->priv = priv;
priv->profile = profile;
priv->ppriv = NULL;
- err = mlx5e_devlink_port_register(priv);
- if (err) {
- mlx5_core_err(mdev, "mlx5e_devlink_port_register failed, %d\n", err);
- goto err_destroy_netdev;
- }
+ priv->dfs_root = debugfs_create_dir("nic",
+ mlx5_debugfs_get_dev_root(priv->mdev));
err = profile->init(mdev, netdev);
if (err) {
mlx5_core_err(mdev, "mlx5e_nic_profile init failed, %d\n", err);
- goto err_devlink_cleanup;
+ goto err_destroy_netdev;
}
err = mlx5e_resume(adev);
@@ -5900,7 +5940,6 @@ static int mlx5e_probe(struct auxiliary_device *adev,
goto err_profile_cleanup;
}
- SET_NETDEV_DEVLINK_PORT(netdev, mlx5e_devlink_get_dl_port(priv));
err = register_netdev(netdev);
if (err) {
mlx5_core_err(mdev, "register_netdev failed, %d\n", err);
@@ -5908,7 +5947,7 @@ static int mlx5e_probe(struct auxiliary_device *adev,
}
mlx5e_dcbnl_init_app(priv);
- mlx5_uplink_netdev_set(mdev, netdev);
+ mlx5_core_uplink_netdev_set(mdev, netdev);
mlx5e_params_print_info(mdev, &priv->channels.params);
return 0;
@@ -5916,24 +5955,31 @@ err_resume:
mlx5e_suspend(adev, state);
err_profile_cleanup:
profile->cleanup(priv);
-err_devlink_cleanup:
- mlx5e_devlink_port_unregister(priv);
err_destroy_netdev:
+ debugfs_remove_recursive(priv->dfs_root);
mlx5e_destroy_netdev(priv);
+err_devlink_port_unregister:
+ mlx5e_devlink_port_unregister(mlx5e_dev);
+err_devlink_unregister:
+ mlx5e_destroy_devlink(mlx5e_dev);
return err;
}
static void mlx5e_remove(struct auxiliary_device *adev)
{
- struct mlx5e_priv *priv = auxiliary_get_drvdata(adev);
+ struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev);
+ struct mlx5e_priv *priv = mlx5e_dev->priv;
pm_message_t state = {};
+ mlx5_core_uplink_netdev_set(priv->mdev, NULL);
mlx5e_dcbnl_delete_app(priv);
unregister_netdev(priv->netdev);
mlx5e_suspend(adev, state);
priv->profile->cleanup(priv);
- mlx5e_devlink_port_unregister(priv);
+ debugfs_remove_recursive(priv->dfs_root);
mlx5e_destroy_netdev(priv);
+ mlx5e_devlink_port_unregister(mlx5e_dev);
+ mlx5e_destroy_devlink(mlx5e_dev);
}
static const struct auxiliary_device_id mlx5e_id_table[] = {
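
[Note on the blocking_event hunk above] The handler now follows the notifier-chain convention: it returns NOTIFY_OK / NOTIFY_BAD / NOTIFY_DONE rather than a raw errno, and the errno travels back to the event source inside the event context struct. A sketch of the shape, assuming only the types shown above (handle_trap is a hypothetical stand-in for mlx5e_handle_trap_event):

static int example_blocking_event(struct notifier_block *nb,
				  unsigned long event, void *data)
{
	struct mlx5_devlink_trap_event_ctx *ctx = data;
	int err;

	if (event != MLX5_DRIVER_EVENT_TYPE_TRAP)
		return NOTIFY_DONE;	/* not ours, let the chain continue */

	err = handle_trap(ctx->trap);	/* hypothetical helper */
	if (err) {
		ctx->err = err;		/* errno for the event source */
		return NOTIFY_BAD;	/* stop the chain */
	}
	return NOTIFY_OK;
}
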
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index 7d90e5b72854..9b9203443085 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -789,8 +789,10 @@ static int mlx5e_init_rep(struct mlx5_core_dev *mdev,
{
struct mlx5e_priv *priv = netdev_priv(netdev);
- priv->fs = mlx5e_fs_init(priv->profile, mdev,
- !test_bit(MLX5E_STATE_DESTROYING, &priv->state));
+ priv->fs =
+ mlx5e_fs_init(priv->profile, mdev,
+ !test_bit(MLX5E_STATE_DESTROYING, &priv->state),
+ priv->dfs_root);
if (!priv->fs) {
netdev_err(priv->netdev, "FS allocation failed\n");
return -ENOMEM;
@@ -808,7 +810,8 @@ static int mlx5e_init_ul_rep(struct mlx5_core_dev *mdev,
struct mlx5e_priv *priv = netdev_priv(netdev);
priv->fs = mlx5e_fs_init(priv->profile, mdev,
- !test_bit(MLX5E_STATE_DESTROYING, &priv->state));
+ !test_bit(MLX5E_STATE_DESTROYING, &priv->state),
+ priv->dfs_root);
if (!priv->fs) {
netdev_err(priv->netdev, "FS allocation failed\n");
return -ENOMEM;
@@ -1004,8 +1007,23 @@ static void mlx5e_cleanup_rep_rx(struct mlx5e_priv *priv)
priv->rx_res = NULL;
}
+static void mlx5e_rep_mpesw_work(struct work_struct *work)
+{
+ struct mlx5_rep_uplink_priv *uplink_priv =
+ container_of(work, struct mlx5_rep_uplink_priv,
+ mpesw_work);
+ struct mlx5e_rep_priv *rpriv =
+ container_of(uplink_priv, struct mlx5e_rep_priv,
+ uplink_priv);
+ struct mlx5e_priv *priv = netdev_priv(rpriv->netdev);
+
+ rep_vport_rx_rule_destroy(priv);
+ mlx5e_create_rep_vport_rx_rule(priv);
+}
+
static int mlx5e_init_ul_rep_rx(struct mlx5e_priv *priv)
{
+ struct mlx5e_rep_priv *rpriv = priv->ppriv;
int err;
mlx5e_create_q_counters(priv);
@@ -1015,12 +1033,17 @@ static int mlx5e_init_ul_rep_rx(struct mlx5e_priv *priv)
mlx5e_tc_int_port_init_rep_rx(priv);
+ INIT_WORK(&rpriv->uplink_priv.mpesw_work, mlx5e_rep_mpesw_work);
+
out:
return err;
}
static void mlx5e_cleanup_ul_rep_rx(struct mlx5e_priv *priv)
{
+ struct mlx5e_rep_priv *rpriv = priv->ppriv;
+
+ cancel_work_sync(&rpriv->uplink_priv.mpesw_work);
mlx5e_tc_int_port_cleanup_rep_rx(priv);
mlx5e_cleanup_rep_rx(priv);
mlx5e_destroy_q_counters(priv);
@@ -1129,6 +1152,19 @@ static int mlx5e_update_rep_rx(struct mlx5e_priv *priv)
return 0;
}
+static int mlx5e_rep_event_mpesw(struct mlx5e_priv *priv)
+{
+ struct mlx5e_rep_priv *rpriv = priv->ppriv;
+ struct mlx5_eswitch_rep *rep = rpriv->rep;
+
+ if (rep->vport != MLX5_VPORT_UPLINK)
+ return NOTIFY_DONE;
+
+ queue_work(priv->wq, &rpriv->uplink_priv.mpesw_work);
+
+ return NOTIFY_OK;
+}
+
static int uplink_rep_async_event(struct notifier_block *nb, unsigned long event, void *data)
{
struct mlx5e_priv *priv = container_of(nb, struct mlx5e_priv, events_nb);
@@ -1150,6 +1186,8 @@ static int uplink_rep_async_event(struct notifier_block *nb, unsigned long event
if (event == MLX5_DEV_EVENT_PORT_AFFINITY)
return mlx5e_rep_tc_event_port_affinity(priv);
+ else if (event == MLX5_DEV_EVENT_MULTIPORT_ESW)
+ return mlx5e_rep_event_mpesw(priv);
return NOTIFY_DONE;
}
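
[Note on the mpesw hunks above] The MULTIPORT_ESW event only queues work; the actual rule teardown/rebuild runs later in process context, and cleanup must cancel the work before the context it touches goes away. A self-contained sketch of that lifecycle under made-up names:

struct example_ctx {
	struct workqueue_struct *wq;
	struct work_struct work;
};

static void example_work_fn(struct work_struct *work)
{
	struct example_ctx *ctx = container_of(work, struct example_ctx, work);

	/* heavy rule rebuild runs here, in process context */
	(void)ctx;
}

static void example_init(struct example_ctx *ctx)
{
	INIT_WORK(&ctx->work, example_work_fn);
}

static int example_event(struct example_ctx *ctx)
{
	queue_work(ctx->wq, &ctx->work);	/* safe from notifier context */
	return NOTIFY_OK;
}

static void example_cleanup(struct example_ctx *ctx)
{
	cancel_work_sync(&ctx->work);	/* no work runs past this point */
}
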
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h
index b4e691760da9..dcfad0bf0f45 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h
@@ -100,6 +100,11 @@ struct mlx5_rep_uplink_priv {
struct mlx5e_tc_int_port_priv *int_port_priv;
struct mlx5e_flow_meters *flow_meters;
+
+ /* tc action stats */
+ struct mlx5e_tc_act_stats_handle *action_stats_handle;
+
+ struct work_struct mpesw_work;
};
struct mlx5e_rep_priv {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 3df455f6b168..3f7b63d6616b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -62,10 +62,12 @@
static struct sk_buff *
mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
- u16 cqe_bcnt, u32 head_offset, u32 page_idx);
+ struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
+ u32 page_idx);
static struct sk_buff *
mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
- u16 cqe_bcnt, u32 head_offset, u32 page_idx);
+ struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
+ u32 page_idx);
static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
@@ -76,11 +78,6 @@ const struct mlx5e_rx_handlers mlx5e_rx_handlers_nic = {
.handle_rx_cqe_mpwqe_shampo = mlx5e_handle_rx_cqe_mpwrq_shampo,
};
-static inline bool mlx5e_rx_hw_stamp(struct hwtstamp_config *config)
-{
- return config->rx_filter == HWTSTAMP_FILTER_ALL;
-}
-
static inline void mlx5e_read_cqe_slot(struct mlx5_cqwq *wq,
u32 cqcc, void *data)
{
@@ -1559,7 +1556,7 @@ struct sk_buff *mlx5e_build_linear_skb(struct mlx5e_rq *rq, void *va,
u32 frag_size, u16 headroom,
u32 cqe_bcnt, u32 metasize)
{
- struct sk_buff *skb = build_skb(va, frag_size);
+ struct sk_buff *skb = napi_build_skb(va, frag_size);
if (unlikely(!skb)) {
rq->stats->buff_alloc_err++;
@@ -1575,16 +1572,19 @@ struct sk_buff *mlx5e_build_linear_skb(struct mlx5e_rq *rq, void *va,
return skb;
}
-static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, void *va, u16 headroom,
- u32 len, struct xdp_buff *xdp)
+static void mlx5e_fill_mxbuf(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
+ void *va, u16 headroom, u32 len,
+ struct mlx5e_xdp_buff *mxbuf)
{
- xdp_init_buff(xdp, rq->buff.frame0_sz, &rq->xdp_rxq);
- xdp_prepare_buff(xdp, va, headroom, len, true);
+ xdp_init_buff(&mxbuf->xdp, rq->buff.frame0_sz, &rq->xdp_rxq);
+ xdp_prepare_buff(&mxbuf->xdp, va, headroom, len, true);
+ mxbuf->cqe = cqe;
+ mxbuf->rq = rq;
}
static struct sk_buff *
mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
- u32 cqe_bcnt)
+ struct mlx5_cqe64 *cqe, u32 cqe_bcnt)
{
union mlx5e_alloc_unit *au = wi->au;
u16 rx_headroom = rq->buff.headroom;
@@ -1606,16 +1606,16 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
prog = rcu_dereference(rq->xdp_prog);
if (prog) {
- struct xdp_buff xdp;
+ struct mlx5e_xdp_buff mxbuf;
net_prefetchw(va); /* xdp_frame data area */
- mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
- if (mlx5e_xdp_handle(rq, au->page, prog, &xdp))
+ mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, cqe_bcnt, &mxbuf);
+ if (mlx5e_xdp_handle(rq, prog, &mxbuf))
return NULL; /* page/packet was consumed by XDP */
- rx_headroom = xdp.data - xdp.data_hard_start;
- metasize = xdp.data - xdp.data_meta;
- cqe_bcnt = xdp.data_end - xdp.data;
+ rx_headroom = mxbuf.xdp.data - mxbuf.xdp.data_hard_start;
+ metasize = mxbuf.xdp.data - mxbuf.xdp.data_meta;
+ cqe_bcnt = mxbuf.xdp.data_end - mxbuf.xdp.data;
}
frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt, metasize);
@@ -1630,16 +1630,16 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
static struct sk_buff *
mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
- u32 cqe_bcnt)
+ struct mlx5_cqe64 *cqe, u32 cqe_bcnt)
{
struct mlx5e_rq_frag_info *frag_info = &rq->wqe.info.arr[0];
struct mlx5e_wqe_frag_info *head_wi = wi;
union mlx5e_alloc_unit *au = wi->au;
u16 rx_headroom = rq->buff.headroom;
struct skb_shared_info *sinfo;
+ struct mlx5e_xdp_buff mxbuf;
u32 frag_consumed_bytes;
struct bpf_prog *prog;
- struct xdp_buff xdp;
struct sk_buff *skb;
dma_addr_t addr;
u32 truesize;
@@ -1654,8 +1654,8 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
net_prefetchw(va); /* xdp_frame data area */
net_prefetch(va + rx_headroom);
- mlx5e_fill_xdp_buff(rq, va, rx_headroom, frag_consumed_bytes, &xdp);
- sinfo = xdp_get_shared_info_from_buff(&xdp);
+ mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, frag_consumed_bytes, &mxbuf);
+ sinfo = xdp_get_shared_info_from_buff(&mxbuf.xdp);
truesize = 0;
cqe_bcnt -= frag_consumed_bytes;
@@ -1673,13 +1673,13 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
dma_sync_single_for_cpu(rq->pdev, addr + wi->offset,
frag_consumed_bytes, rq->buff.map_dir);
- if (!xdp_buff_has_frags(&xdp)) {
+ if (!xdp_buff_has_frags(&mxbuf.xdp)) {
/* Init on the first fragment to avoid cold cache access
* when possible.
*/
sinfo->nr_frags = 0;
sinfo->xdp_frags_size = 0;
- xdp_buff_set_frags_flag(&xdp);
+ xdp_buff_set_frags_flag(&mxbuf.xdp);
}
frag = &sinfo->frags[sinfo->nr_frags++];
@@ -1688,7 +1688,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
skb_frag_size_set(frag, frag_consumed_bytes);
if (page_is_pfmemalloc(au->page))
- xdp_buff_set_frag_pfmemalloc(&xdp);
+ xdp_buff_set_frag_pfmemalloc(&mxbuf.xdp);
sinfo->xdp_frags_size += frag_consumed_bytes;
truesize += frag_info->frag_stride;
@@ -1698,10 +1698,8 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
wi++;
}
- au = head_wi->au;
-
prog = rcu_dereference(rq->xdp_prog);
- if (prog && mlx5e_xdp_handle(rq, au->page, prog, &xdp)) {
+ if (prog && mlx5e_xdp_handle(rq, prog, &mxbuf)) {
if (test_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
int i;
@@ -1711,22 +1709,22 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
return NULL; /* page/packet was consumed by XDP */
}
- skb = mlx5e_build_linear_skb(rq, xdp.data_hard_start, rq->buff.frame0_sz,
- xdp.data - xdp.data_hard_start,
- xdp.data_end - xdp.data,
- xdp.data - xdp.data_meta);
+ skb = mlx5e_build_linear_skb(rq, mxbuf.xdp.data_hard_start, rq->buff.frame0_sz,
+ mxbuf.xdp.data - mxbuf.xdp.data_hard_start,
+ mxbuf.xdp.data_end - mxbuf.xdp.data,
+ mxbuf.xdp.data - mxbuf.xdp.data_meta);
if (unlikely(!skb))
return NULL;
- page_ref_inc(au->page);
+ page_ref_inc(head_wi->au->page);
- if (unlikely(xdp_buff_has_frags(&xdp))) {
+ if (xdp_buff_has_frags(&mxbuf.xdp)) {
int i;
/* sinfo->nr_frags is reset by build_skb, calculate again. */
xdp_update_skb_shared_info(skb, wi - head_wi - 1,
sinfo->xdp_frags_size, truesize,
- xdp_buff_is_frag_pfmemalloc(&xdp));
+ xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp));
for (i = 0; i < sinfo->nr_frags; i++) {
skb_frag_t *frag = &sinfo->frags[i];
@@ -1777,7 +1775,7 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
mlx5e_skb_from_cqe_linear,
mlx5e_skb_from_cqe_nonlinear,
mlx5e_xsk_skb_from_cqe_linear,
- rq, wi, cqe_bcnt);
+ rq, wi, cqe, cqe_bcnt);
if (!skb) {
/* probably for XDP */
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
@@ -1792,7 +1790,7 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
if (mlx5e_cqe_regb_chain(cqe))
- if (!mlx5e_tc_update_skb(cqe, skb)) {
+ if (!mlx5e_tc_update_skb_nic(cqe, skb)) {
dev_kfree_skb_any(skb);
goto free_wqe;
}
@@ -1830,7 +1828,7 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
mlx5e_skb_from_cqe_linear,
mlx5e_skb_from_cqe_nonlinear,
- rq, wi, cqe_bcnt);
+ rq, wi, cqe, cqe_bcnt);
if (!skb) {
/* probably for XDP */
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
@@ -1889,7 +1887,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64
skb = INDIRECT_CALL_2(rq->mpwqe.skb_from_cqe_mpwrq,
mlx5e_skb_from_cqe_mpwrq_linear,
mlx5e_skb_from_cqe_mpwrq_nonlinear,
- rq, wi, cqe_bcnt, head_offset, page_idx);
+ rq, wi, cqe, cqe_bcnt, head_offset, page_idx);
if (!skb)
goto mpwrq_cqe_out;
@@ -1940,7 +1938,8 @@ mlx5e_fill_skb_data(struct sk_buff *skb, struct mlx5e_rq *rq,
static struct sk_buff *
mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
- u16 cqe_bcnt, u32 head_offset, u32 page_idx)
+ struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
+ u32 page_idx)
{
union mlx5e_alloc_unit *au = &wi->alloc_units[page_idx];
u16 headlen = min_t(u16, MLX5E_RX_MAX_HEAD, cqe_bcnt);
@@ -1979,7 +1978,8 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
static struct sk_buff *
mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
- u16 cqe_bcnt, u32 head_offset, u32 page_idx)
+ struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
+ u32 page_idx)
{
union mlx5e_alloc_unit *au = &wi->alloc_units[page_idx];
u16 rx_headroom = rq->buff.headroom;
@@ -2007,19 +2007,19 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
prog = rcu_dereference(rq->xdp_prog);
if (prog) {
- struct xdp_buff xdp;
+ struct mlx5e_xdp_buff mxbuf;
net_prefetchw(va); /* xdp_frame data area */
- mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
- if (mlx5e_xdp_handle(rq, au->page, prog, &xdp)) {
+ mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, cqe_bcnt, &mxbuf);
+ if (mlx5e_xdp_handle(rq, prog, &mxbuf)) {
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
return NULL; /* page/packet was consumed by XDP */
}
- rx_headroom = xdp.data - xdp.data_hard_start;
- metasize = xdp.data - xdp.data_meta;
- cqe_bcnt = xdp.data_end - xdp.data;
+ rx_headroom = mxbuf.xdp.data - mxbuf.xdp.data_hard_start;
+ metasize = mxbuf.xdp.data - mxbuf.xdp.data_meta;
+ cqe_bcnt = mxbuf.xdp.data_end - mxbuf.xdp.data;
}
frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt, metasize);
@@ -2174,8 +2174,8 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
if (likely(head_size))
*skb = mlx5e_skb_from_cqe_shampo(rq, wi, cqe, header_index);
else
- *skb = mlx5e_skb_from_cqe_mpwrq_nonlinear(rq, wi, cqe_bcnt, data_offset,
- page_idx);
+ *skb = mlx5e_skb_from_cqe_mpwrq_nonlinear(rq, wi, cqe, cqe_bcnt,
+ data_offset, page_idx);
if (unlikely(!*skb))
goto free_hd_entry;
@@ -2249,14 +2249,15 @@ static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cq
mlx5e_skb_from_cqe_mpwrq_linear,
mlx5e_skb_from_cqe_mpwrq_nonlinear,
mlx5e_xsk_skb_from_cqe_mpwrq_linear,
- rq, wi, cqe_bcnt, head_offset, page_idx);
+ rq, wi, cqe, cqe_bcnt, head_offset,
+ page_idx);
if (!skb)
goto mpwrq_cqe_out;
mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
if (mlx5e_cqe_regb_chain(cqe))
- if (!mlx5e_tc_update_skb(cqe, skb)) {
+ if (!mlx5e_tc_update_skb_nic(cqe, skb)) {
dev_kfree_skb_any(skb);
goto mpwrq_cqe_out;
}
@@ -2494,7 +2495,7 @@ static void mlx5i_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
mlx5e_skb_from_cqe_linear,
mlx5e_skb_from_cqe_nonlinear,
- rq, wi, cqe_bcnt);
+ rq, wi, cqe, cqe_bcnt);
if (!skb)
goto wq_free_wqe;
@@ -2567,10 +2568,8 @@ int mlx5e_rq_set_handlers(struct mlx5e_rq *rq, struct mlx5e_params *params, bool
static void mlx5e_trap_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
{
- struct mlx5e_priv *priv = netdev_priv(rq->netdev);
struct mlx5_wq_cyc *wq = &rq->wqe.wq;
struct mlx5e_wqe_frag_info *wi;
- struct devlink_port *dl_port;
struct sk_buff *skb;
u32 cqe_bcnt;
u16 trap_id;
@@ -2586,15 +2585,15 @@ static void mlx5e_trap_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe
goto free_wqe;
}
- skb = mlx5e_skb_from_cqe_nonlinear(rq, wi, cqe_bcnt);
+ skb = mlx5e_skb_from_cqe_nonlinear(rq, wi, cqe, cqe_bcnt);
if (!skb)
goto free_wqe;
mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
skb_push(skb, ETH_HLEN);
- dl_port = mlx5e_devlink_get_dl_port(priv);
- mlx5_devlink_trap_report(rq->mdev, trap_id, skb, dl_port);
+ mlx5_devlink_trap_report(rq->mdev, trap_id, skb,
+ rq->netdev->devlink_port);
dev_kfree_skb_any(skb);
free_wqe:
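
[Note on the mxbuf hunks above] Threading the CQE and RQ pointers through RX is what lets the XDP metadata kfuncs (RX hash/timestamp hints) recover driver state from a bare xdp_buff: the buff is embedded in a wrapper struct and container_of() walks back to it. A sketch of the layout implied by the usage above (the accessor is illustrative; the real struct lives in the driver's XDP header in this series):

struct mlx5e_xdp_buff {
	struct xdp_buff xdp;		/* must stay embedded, not a pointer */
	struct mlx5_cqe64 *cqe;
	struct mlx5e_rq *rq;
};

static inline struct mlx5e_xdp_buff *to_mxbuf(struct xdp_buff *xdp)
{
	return container_of(xdp, struct mlx5e_xdp_buff, xdp);
}
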
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 243d5d7750be..e34d9b5fb504 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -43,8 +43,10 @@
#include <net/ipv6_stubs.h>
#include <net/bareudp.h>
#include <net/bonding.h>
+#include <net/dst_metadata.h>
#include "en.h"
#include "en/tc/post_act.h"
+#include "en/tc/act_stats.h"
#include "en_rep.h"
#include "en/rep/tc.h"
#include "en/rep/neigh.h"
@@ -71,6 +73,12 @@
#define MLX5E_TC_TABLE_NUM_GROUPS 4
#define MLX5E_TC_TABLE_MAX_GROUP_SIZE BIT(18)
+struct mlx5e_hairpin_params {
+ struct mlx5_core_dev *mdev;
+ u32 num_queues;
+ u32 queue_size;
+};
+
struct mlx5e_tc_table {
/* Protects the dynamic assignment of the t parameter
* which is the nic tc root table.
@@ -93,10 +101,15 @@ struct mlx5e_tc_table {
struct mlx5_tc_ct_priv *ct;
struct mapping_ctx *mapping;
+ struct mlx5e_hairpin_params hairpin_params;
+ struct dentry *dfs_root;
+
+ /* tc action stats */
+ struct mlx5e_tc_act_stats_handle *action_stats_handle;
};
struct mlx5e_tc_attr_to_reg_mapping mlx5e_tc_attr_to_reg_mappings[] = {
- [CHAIN_TO_REG] = {
+ [MAPPED_OBJ_TO_REG] = {
.mfield = MLX5_ACTION_IN_FIELD_METADATA_REG_C_0,
.moffset = 0,
.mlen = 16,
@@ -123,7 +136,7 @@ struct mlx5e_tc_attr_to_reg_mapping mlx5e_tc_attr_to_reg_mappings[] = {
* into reg_b that is passed to SW since we don't
* jump between steering domains.
*/
- [NIC_CHAIN_TO_REG] = {
+ [NIC_MAPPED_OBJ_TO_REG] = {
.mfield = MLX5_ACTION_IN_FIELD_METADATA_REG_B,
.moffset = 0,
.mlen = 16,
@@ -278,6 +291,24 @@ mlx5e_tc_match_to_reg_set_and_get_id(struct mlx5_core_dev *mdev,
return err;
}
+static struct mlx5e_tc_act_stats_handle *
+get_act_stats_handle(struct mlx5e_priv *priv)
+{
+ struct mlx5e_tc_table *tc = mlx5e_fs_get_tc(priv->fs);
+ struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+ struct mlx5_rep_uplink_priv *uplink_priv;
+ struct mlx5e_rep_priv *uplink_rpriv;
+
+ if (is_mdev_switchdev_mode(priv->mdev)) {
+ uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH);
+ uplink_priv = &uplink_rpriv->uplink_priv;
+
+ return uplink_priv->action_stats_handle;
+ }
+
+ return tc->action_stats_handle;
+}
+
struct mlx5e_tc_int_port_priv *
mlx5e_get_int_port_priv(struct mlx5e_priv *priv)
{
@@ -639,36 +670,36 @@ get_mod_hdr_table(struct mlx5e_priv *priv, struct mlx5e_tc_flow *flow)
&tc->mod_hdr;
}
-static int mlx5e_attach_mod_hdr(struct mlx5e_priv *priv,
- struct mlx5e_tc_flow *flow,
- struct mlx5e_tc_flow_parse_attr *parse_attr)
+int mlx5e_tc_attach_mod_hdr(struct mlx5e_priv *priv,
+ struct mlx5e_tc_flow *flow,
+ struct mlx5_flow_attr *attr)
{
- struct mlx5_modify_hdr *modify_hdr;
struct mlx5e_mod_hdr_handle *mh;
mh = mlx5e_mod_hdr_attach(priv->mdev, get_mod_hdr_table(priv, flow),
mlx5e_get_flow_namespace(flow),
- &parse_attr->mod_hdr_acts);
+ &attr->parse_attr->mod_hdr_acts);
if (IS_ERR(mh))
return PTR_ERR(mh);
- modify_hdr = mlx5e_mod_hdr_get(mh);
- flow->attr->modify_hdr = modify_hdr;
- flow->mh = mh;
+ WARN_ON(attr->modify_hdr);
+ attr->modify_hdr = mlx5e_mod_hdr_get(mh);
+ attr->mh = mh;
return 0;
}
-static void mlx5e_detach_mod_hdr(struct mlx5e_priv *priv,
- struct mlx5e_tc_flow *flow)
+void mlx5e_tc_detach_mod_hdr(struct mlx5e_priv *priv,
+ struct mlx5e_tc_flow *flow,
+ struct mlx5_flow_attr *attr)
{
/* flow wasn't fully initialized */
- if (!flow->mh)
+ if (!attr->mh)
return;
mlx5e_mod_hdr_detach(priv->mdev, get_mod_hdr_table(priv, flow),
- flow->mh);
- flow->mh = NULL;
+ attr->mh);
+ attr->mh = NULL;
}
static
@@ -1017,6 +1048,136 @@ static int mlx5e_hairpin_get_prio(struct mlx5e_priv *priv,
return 0;
}
+static int debugfs_hairpin_queues_set(void *data, u64 val)
+{
+ struct mlx5e_hairpin_params *hp = data;
+
+ if (!val) {
+ mlx5_core_err(hp->mdev,
+ "Number of hairpin queues must be > 0\n");
+ return -EINVAL;
+ }
+
+ hp->num_queues = val;
+
+ return 0;
+}
+
+static int debugfs_hairpin_queues_get(void *data, u64 *val)
+{
+ struct mlx5e_hairpin_params *hp = data;
+
+ *val = hp->num_queues;
+
+ return 0;
+}
+DEFINE_DEBUGFS_ATTRIBUTE(fops_hairpin_queues, debugfs_hairpin_queues_get,
+ debugfs_hairpin_queues_set, "%llu\n");
+
+static int debugfs_hairpin_queue_size_set(void *data, u64 val)
+{
+ struct mlx5e_hairpin_params *hp = data;
+
+ if (val > BIT(MLX5_CAP_GEN(hp->mdev, log_max_hairpin_num_packets))) {
+ mlx5_core_err(hp->mdev,
+ "Invalid hairpin queue size, must be <= %lu\n",
+ BIT(MLX5_CAP_GEN(hp->mdev,
+ log_max_hairpin_num_packets)));
+ return -EINVAL;
+ }
+
+ hp->queue_size = roundup_pow_of_two(val);
+
+ return 0;
+}
+
+static int debugfs_hairpin_queue_size_get(void *data, u64 *val)
+{
+ struct mlx5e_hairpin_params *hp = data;
+
+ *val = hp->queue_size;
+
+ return 0;
+}
+DEFINE_DEBUGFS_ATTRIBUTE(fops_hairpin_queue_size,
+ debugfs_hairpin_queue_size_get,
+ debugfs_hairpin_queue_size_set, "%llu\n");
+
+static int debugfs_hairpin_num_active_get(void *data, u64 *val)
+{
+ struct mlx5e_tc_table *tc = data;
+ struct mlx5e_hairpin_entry *hpe;
+ u32 cnt = 0;
+ u32 bkt;
+
+ mutex_lock(&tc->hairpin_tbl_lock);
+ hash_for_each(tc->hairpin_tbl, bkt, hpe, hairpin_hlist)
+ cnt++;
+ mutex_unlock(&tc->hairpin_tbl_lock);
+
+ *val = cnt;
+
+ return 0;
+}
+DEFINE_DEBUGFS_ATTRIBUTE(fops_hairpin_num_active,
+ debugfs_hairpin_num_active_get, NULL, "%llu\n");
+
+static int debugfs_hairpin_table_dump_show(struct seq_file *file, void *priv)
+
+{
+ struct mlx5e_tc_table *tc = file->private;
+ struct mlx5e_hairpin_entry *hpe;
+ u32 bkt;
+
+ mutex_lock(&tc->hairpin_tbl_lock);
+ hash_for_each(tc->hairpin_tbl, bkt, hpe, hairpin_hlist)
+ seq_printf(file, "Hairpin peer_vhca_id %u prio %u refcnt %u\n",
+ hpe->peer_vhca_id, hpe->prio,
+ refcount_read(&hpe->refcnt));
+ mutex_unlock(&tc->hairpin_tbl_lock);
+
+ return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(debugfs_hairpin_table_dump);
+
+static void mlx5e_tc_debugfs_init(struct mlx5e_tc_table *tc,
+ struct dentry *dfs_root)
+{
+ if (IS_ERR_OR_NULL(dfs_root))
+ return;
+
+ tc->dfs_root = debugfs_create_dir("tc", dfs_root);
+
+ debugfs_create_file("hairpin_num_queues", 0644, tc->dfs_root,
+ &tc->hairpin_params, &fops_hairpin_queues);
+ debugfs_create_file("hairpin_queue_size", 0644, tc->dfs_root,
+ &tc->hairpin_params, &fops_hairpin_queue_size);
+ debugfs_create_file("hairpin_num_active", 0444, tc->dfs_root, tc,
+ &fops_hairpin_num_active);
+ debugfs_create_file("hairpin_table_dump", 0444, tc->dfs_root, tc,
+ &debugfs_hairpin_table_dump_fops);
+}
+
+static void
+mlx5e_hairpin_params_init(struct mlx5e_hairpin_params *hairpin_params,
+ struct mlx5_core_dev *mdev)
+{
+ u64 link_speed64;
+ u32 link_speed;
+
+ hairpin_params->mdev = mdev;
+ /* set hairpin pair per each 50Gbs share of the link */
+ mlx5e_port_max_linkspeed(mdev, &link_speed);
+ link_speed = max_t(u32, link_speed, 50000);
+ link_speed64 = link_speed;
+ do_div(link_speed64, 50000);
+ hairpin_params->num_queues = link_speed64;
+
+ hairpin_params->queue_size =
+ BIT(min_t(u32, 16 - MLX5_MPWRQ_MIN_LOG_STRIDE_SZ(mdev),
+ MLX5_CAP_GEN(mdev, log_max_hairpin_num_packets)));
+}
+
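
[Note on the init above] The defaults are one hairpin queue per 50Gbs share of the maximum link speed, with a floor of one queue, and a power-of-two queue size bounded by the device caps. Worked numbers, illustrative only:

/*
 * num_queues = max_t(u32, link_speed, 50000) / 50000
 *     25G port  -> max(25000, 50000)  / 50000 = 1 queue
 *    100G port  -> max(100000, 50000) / 50000 = 2 queues
 *    400G port  -> max(400000, 50000) / 50000 = 8 queues
 *
 * queue_size = BIT(min(16 - MLX5_MPWRQ_MIN_LOG_STRIDE_SZ(mdev),
 *                      log_max_hairpin_num_packets))
 * i.e. a power of two clamped by both the stride size and the
 * firmware limit; the debugfs setter re-rounds user input the
 * same way via roundup_pow_of_two().
 */
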
static int mlx5e_hairpin_flow_add(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow,
struct mlx5e_tc_flow_parse_attr *parse_attr,
@@ -1028,8 +1189,6 @@ static int mlx5e_hairpin_flow_add(struct mlx5e_priv *priv,
struct mlx5_core_dev *peer_mdev;
struct mlx5e_hairpin_entry *hpe;
struct mlx5e_hairpin *hp;
- u64 link_speed64;
- u32 link_speed;
u8 match_prio;
u16 peer_id;
int err;
@@ -1082,21 +1241,16 @@ static int mlx5e_hairpin_flow_add(struct mlx5e_priv *priv,
hash_hairpin_info(peer_id, match_prio));
mutex_unlock(&tc->hairpin_tbl_lock);
- params.log_data_size = clamp_t(u8, 16,
- MLX5_CAP_GEN(priv->mdev, log_min_hairpin_wq_data_sz),
- MLX5_CAP_GEN(priv->mdev, log_max_hairpin_wq_data_sz));
- params.log_num_packets = params.log_data_size -
- MLX5_MPWRQ_MIN_LOG_STRIDE_SZ(priv->mdev);
- params.log_num_packets = min_t(u8, params.log_num_packets,
- MLX5_CAP_GEN(priv->mdev, log_max_hairpin_num_packets));
+ params.log_num_packets = ilog2(tc->hairpin_params.queue_size);
+ params.log_data_size =
+ clamp_t(u32,
+ params.log_num_packets +
+ MLX5_MPWRQ_MIN_LOG_STRIDE_SZ(priv->mdev),
+ MLX5_CAP_GEN(priv->mdev, log_min_hairpin_wq_data_sz),
+ MLX5_CAP_GEN(priv->mdev, log_max_hairpin_wq_data_sz));
params.q_counter = priv->q_counter;
- /* set hairpin pair per each 50Gbs share of the link */
- mlx5e_port_max_linkspeed(priv->mdev, &link_speed);
- link_speed = max_t(u32, link_speed, 50000);
- link_speed64 = link_speed;
- do_div(link_speed64, 50000);
- params.num_channels = link_speed64;
+ params.num_channels = tc->hairpin_params.num_queues;
hp = mlx5e_hairpin_create(priv, &params, peer_ifindex);
hpe->hp = hp;
@@ -1301,7 +1455,7 @@ mlx5e_tc_add_nic_flow(struct mlx5e_priv *priv,
}
if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
- err = mlx5e_attach_mod_hdr(priv, flow, parse_attr);
+ err = mlx5e_tc_attach_mod_hdr(priv, flow, attr);
if (err)
return err;
}
@@ -1361,7 +1515,7 @@ static void mlx5e_tc_del_nic_flow(struct mlx5e_priv *priv,
if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
mlx5e_mod_hdr_dealloc(&attr->parse_attr->mod_hdr_acts);
- mlx5e_detach_mod_hdr(priv, flow);
+ mlx5e_tc_detach_mod_hdr(priv, flow, attr);
}
if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT)
@@ -1451,7 +1605,7 @@ mlx5e_tc_offload_to_slow_path(struct mlx5_eswitch *esw,
goto err_get_chain;
err = mlx5e_tc_match_to_reg_set(esw->dev, &mod_acts, MLX5_FLOW_NAMESPACE_FDB,
- CHAIN_TO_REG, chain_mapping);
+ MAPPED_OBJ_TO_REG, chain_mapping);
if (err)
goto err_reg_set;
@@ -1472,7 +1626,7 @@ skip_restore:
goto err_offload;
}
- flow->slow_mh = mh;
+ flow->attr->slow_mh = mh;
flow->chain_mapping = chain_mapping;
flow_flag_set(flow, SLOW);
@@ -1497,6 +1651,7 @@ err_get_chain:
void mlx5e_tc_unoffload_from_slow_path(struct mlx5_eswitch *esw,
struct mlx5e_tc_flow *flow)
{
+ struct mlx5e_mod_hdr_handle *slow_mh = flow->attr->slow_mh;
struct mlx5_flow_attr *slow_attr;
slow_attr = mlx5_alloc_flow_attr(MLX5_FLOW_NAMESPACE_FDB);
@@ -1509,16 +1664,16 @@ void mlx5e_tc_unoffload_from_slow_path(struct mlx5_eswitch *esw,
slow_attr->action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
slow_attr->esw_attr->split_count = 0;
slow_attr->flags |= MLX5_ATTR_FLAG_SLOW_PATH;
- if (flow->slow_mh) {
+ if (slow_mh) {
slow_attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
- slow_attr->modify_hdr = mlx5e_mod_hdr_get(flow->slow_mh);
+ slow_attr->modify_hdr = mlx5e_mod_hdr_get(slow_mh);
}
mlx5e_tc_unoffload_fdb_rules(esw, flow, slow_attr);
- if (flow->slow_mh) {
- mlx5e_mod_hdr_detach(esw->dev, get_mod_hdr_table(flow->priv, flow), flow->slow_mh);
+ if (slow_mh) {
+ mlx5e_mod_hdr_detach(esw->dev, get_mod_hdr_table(flow->priv, flow), slow_mh);
mlx5_chains_put_chain_mapping(esw_chains(esw), flow->chain_mapping);
flow->chain_mapping = 0;
- flow->slow_mh = NULL;
+ flow->attr->slow_mh = NULL;
}
flow_flag_clear(flow, SLOW);
kfree(slow_attr);
@@ -1629,26 +1784,6 @@ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *ro
return err;
}
-int mlx5e_tc_add_flow_mod_hdr(struct mlx5e_priv *priv,
- struct mlx5e_tc_flow *flow,
- struct mlx5_flow_attr *attr)
-{
- struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts = &attr->parse_attr->mod_hdr_acts;
- struct mlx5_modify_hdr *mod_hdr;
-
- mod_hdr = mlx5_modify_header_alloc(priv->mdev,
- mlx5e_get_flow_namespace(flow),
- mod_hdr_acts->num_actions,
- mod_hdr_acts->actions);
- if (IS_ERR(mod_hdr))
- return PTR_ERR(mod_hdr);
-
- WARN_ON(attr->modify_hdr);
- attr->modify_hdr = mod_hdr;
-
- return 0;
-}
-
static int
set_encap_dests(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow,
@@ -1768,10 +1903,8 @@ verify_attr_actions(u32 actions, struct netlink_ext_ack *extack)
static int
post_process_attr(struct mlx5e_tc_flow *flow,
struct mlx5_flow_attr *attr,
- bool is_post_act_attr,
struct netlink_ext_ack *extack)
{
- struct mlx5_eswitch *esw = flow->priv->mdev->priv.eswitch;
bool vf_tun;
int err = 0;
@@ -1783,34 +1916,22 @@ post_process_attr(struct mlx5e_tc_flow *flow,
if (err)
goto err_out;
- if (mlx5e_is_eswitch_flow(flow)) {
- err = mlx5_eswitch_add_vlan_action(esw, attr);
+ if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
+ err = mlx5e_tc_attach_mod_hdr(flow->priv, flow, attr);
if (err)
goto err_out;
}
- if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
- if (vf_tun || is_post_act_attr) {
- err = mlx5e_tc_add_flow_mod_hdr(flow->priv, flow, attr);
- if (err)
- goto err_out;
- } else {
- err = mlx5e_attach_mod_hdr(flow->priv, flow, attr->parse_attr);
- if (err)
- goto err_out;
- }
- }
-
if (attr->branch_true &&
attr->branch_true->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
- err = mlx5e_tc_add_flow_mod_hdr(flow->priv, flow, attr->branch_true);
+ err = mlx5e_tc_attach_mod_hdr(flow->priv, flow, attr->branch_true);
if (err)
goto err_out;
}
if (attr->branch_false &&
attr->branch_false->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
- err = mlx5e_tc_add_flow_mod_hdr(flow->priv, flow, attr->branch_false);
+ err = mlx5e_tc_attach_mod_hdr(flow->priv, flow, attr->branch_false);
if (err)
goto err_out;
}
@@ -1924,7 +2045,11 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
esw_attr->int_port = int_port;
}
- err = post_process_attr(flow, attr, false, extack);
+ err = post_process_attr(flow, attr, extack);
+ if (err)
+ goto err_out;
+
+ err = mlx5e_tc_act_stats_add_flow(get_act_stats_handle(priv), flow);
if (err)
goto err_out;
@@ -1998,8 +2123,6 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
if (mlx5_flow_has_geneve_opt(flow))
mlx5_geneve_tlv_option_del(priv->mdev->geneve);
- mlx5_eswitch_del_vlan_action(esw, attr);
-
if (flow->decap_route)
mlx5e_detach_decap_route(priv, flow);
@@ -2009,10 +2132,7 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
mlx5e_mod_hdr_dealloc(&attr->parse_attr->mod_hdr_acts);
- if (vf_tun && attr->modify_hdr)
- mlx5_modify_header_dealloc(priv->mdev, attr->modify_hdr);
- else
- mlx5e_detach_mod_hdr(priv, flow);
+ mlx5e_tc_detach_mod_hdr(priv, flow, attr);
}
if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT)
@@ -2027,13 +2147,12 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
if (flow_flag_test(flow, L3_TO_L2_DECAP))
mlx5e_detach_decap(priv, flow);
+ mlx5e_tc_act_stats_del_flow(get_act_stats_handle(priv), flow);
+
free_flow_post_acts(flow);
free_branch_attr(flow, attr->branch_true);
free_branch_attr(flow, attr->branch_false);
- if (flow->attr->lag.count)
- mlx5_lag_del_mpesw_rule(esw->dev);
-
kvfree(attr->esw_attr->rx_tun_attr);
kvfree(attr->parse_attr);
kfree(flow->attr);
@@ -2492,13 +2611,13 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv,
err = mlx5e_tc_set_attr_rx_tun(flow, spec);
if (err)
return err;
- } else if (tunnel && tunnel->tunnel_type == MLX5E_TC_TUNNEL_TYPE_VXLAN) {
+ } else if (tunnel) {
struct mlx5_flow_spec *tmp_spec;
tmp_spec = kvzalloc(sizeof(*tmp_spec), GFP_KERNEL);
if (!tmp_spec) {
- NL_SET_ERR_MSG_MOD(extack, "Failed to allocate memory for vxlan tmp spec");
- netdev_warn(priv->netdev, "Failed to allocate memory for vxlan tmp spec");
+ NL_SET_ERR_MSG_MOD(extack, "Failed to allocate memory for tunnel tmp spec");
+ netdev_warn(priv->netdev, "Failed to allocate memory for tunnel tmp spec");
return -ENOMEM;
}
memcpy(tmp_spec, spec, sizeof(*tmp_spec));
@@ -3560,7 +3679,6 @@ out_ok:
static bool
actions_match_supported_fdb(struct mlx5e_priv *priv,
- struct mlx5e_tc_flow_parse_attr *parse_attr,
struct mlx5e_tc_flow *flow,
struct netlink_ext_ack *extack)
{
@@ -3609,7 +3727,7 @@ actions_match_supported(struct mlx5e_priv *priv,
return false;
if (mlx5e_is_eswitch_flow(flow) &&
- !actions_match_supported_fdb(priv, parse_attr, flow, extack))
+ !actions_match_supported_fdb(priv, flow, extack))
return false;
return true;
@@ -3692,10 +3810,13 @@ mlx5e_clone_flow_attr_for_post_act(struct mlx5_flow_attr *attr,
INIT_LIST_HEAD(&attr2->list);
parse_attr->filter_dev = attr->parse_attr->filter_dev;
attr2->action = 0;
+ attr2->counter = NULL;
+ attr->tc_act_cookies_count = 0;
attr2->flags = 0;
attr2->parse_attr = parse_attr;
attr2->dest_chain = 0;
attr2->dest_ft = NULL;
+ attr2->act_id_restore_rule = NULL;
if (ns_type == MLX5_FLOW_NAMESPACE_FDB) {
attr2->esw_attr->out_count = 0;
@@ -3831,7 +3952,7 @@ alloc_flow_post_acts(struct mlx5e_tc_flow *flow, struct netlink_ext_ack *extack)
if (err)
goto out_free;
- err = post_process_attr(flow, attr, true, extack);
+ err = post_process_attr(flow, attr, extack);
if (err)
goto out_free;
@@ -3991,6 +4112,11 @@ parse_branch_ctrl(struct flow_action_entry *act, struct mlx5e_tc_act *tc_act,
jump_state->jumping_attr = attr->branch_false;
jump_state->jump_count = jump_count;
+
+ /* branching action requires its own counter */
+ attr->action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;
+ flow_flag_set(flow, USE_ACT_STATS);
+
return 0;
err_branch_false:
@@ -4051,6 +4177,8 @@ parse_tc_actions(struct mlx5e_tc_act_parse_state *parse_state,
goto out_free;
parse_state->actions |= attr->action;
+ if (!tc_act->stats_action)
+ attr->tc_act_cookies[attr->tc_act_cookies_count++] = act->cookie;
/* Split attr for multi table act if not the last act. */
if (jump_state.jump_target ||
@@ -4184,12 +4312,7 @@ static bool is_lag_dev(struct mlx5e_priv *priv,
static bool is_multiport_eligible(struct mlx5e_priv *priv, struct net_device *out_dev)
{
- if (same_hw_reps(priv, out_dev) &&
- MLX5_CAP_PORT_SELECTION(priv->mdev, port_select_flow_table) &&
- MLX5_CAP_GEN(priv->mdev, create_lag_when_not_master_up))
- return true;
-
- return false;
+ return same_hw_reps(priv, out_dev) && mlx5_lag_is_mpesw(priv->mdev);
}
bool mlx5e_is_valid_eswitch_fwd_dev(struct mlx5e_priv *priv,
@@ -4360,6 +4483,9 @@ static bool is_peer_flow_needed(struct mlx5e_tc_flow *flow)
(is_rep_ingress || act_is_encap))
return true;
+ if (mlx5_lag_is_mpesw(esw_attr->in_mdev))
+ return true;
+
return false;
}
@@ -4398,8 +4524,7 @@ mlx5_free_flow_attr(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *attr)
if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
mlx5e_mod_hdr_dealloc(&attr->parse_attr->mod_hdr_acts);
- if (attr->modify_hdr)
- mlx5_modify_header_dealloc(flow->priv->mdev, attr->modify_hdr);
+ mlx5e_tc_detach_mod_hdr(flow->priv, flow, attr);
}
}
@@ -4492,7 +4617,6 @@ __mlx5e_add_fdb_flow(struct mlx5e_priv *priv,
struct mlx5_core_dev *in_mdev)
{
struct flow_rule *rule = flow_cls_offload_flow_rule(f);
- struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
struct netlink_ext_ack *extack = f->common.extack;
struct mlx5e_tc_flow_parse_attr *parse_attr;
struct mlx5e_tc_flow *flow;
@@ -4521,33 +4645,21 @@ __mlx5e_add_fdb_flow(struct mlx5e_priv *priv,
if (err)
goto err_free;
- /* always set IP version for indirect table handling */
- flow->attr->ip_version = mlx5e_tc_get_ip_version(&parse_attr->spec, true);
-
err = parse_tc_fdb_actions(priv, &rule->action, flow, extack);
if (err)
goto err_free;
- if (flow->attr->lag.count) {
- err = mlx5_lag_add_mpesw_rule(esw->dev);
- if (err)
- goto err_free;
- }
-
err = mlx5e_tc_add_fdb_flow(priv, flow, extack);
complete_all(&flow->init_done);
if (err) {
if (!(err == -ENETUNREACH && mlx5_lag_is_multipath(in_mdev)))
- goto err_lag;
+ goto err_free;
add_unready_flow(flow);
}
return flow;
-err_lag:
- if (flow->attr->lag.count)
- mlx5_lag_del_mpesw_rule(esw->dev);
err_free:
mlx5e_flow_put(priv, flow);
out:
@@ -4579,8 +4691,10 @@ static int mlx5e_tc_add_fdb_peer_flow(struct flow_cls_offload *f,
* So packets redirected to uplink use the same mdev of the
* original flow and packets redirected from uplink use the
* peer mdev.
+ * In multiport eswitch it's a special case where we need to
+ * keep the original mdev.
*/
- if (attr->in_rep->vport == MLX5_VPORT_UPLINK)
+ if (attr->in_rep->vport == MLX5_VPORT_UPLINK && !mlx5_lag_is_mpesw(priv->mdev))
in_mdev = peer_priv->mdev;
else
in_mdev = priv->mdev;
@@ -4842,6 +4956,12 @@ errout:
return err;
}
+int mlx5e_tc_fill_action_stats(struct mlx5e_priv *priv,
+ struct flow_offload_action *fl_act)
+{
+ return mlx5e_tc_act_stats_fill_stats(get_act_stats_handle(priv), fl_act);
+}
+
int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
struct flow_cls_offload *f, unsigned long flags)
{
@@ -4868,11 +4988,15 @@ int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
}
if (mlx5e_is_offloaded_flow(flow) || flow_flag_test(flow, CT)) {
- counter = mlx5e_tc_get_counter(flow);
- if (!counter)
- goto errout;
+ if (flow_flag_test(flow, USE_ACT_STATS)) {
+ f->use_act_stats = true;
+ } else {
+ counter = mlx5e_tc_get_counter(flow);
+ if (!counter)
+ goto errout;
- mlx5_fc_query_cached(counter, &bytes, &packets, &lastuse);
+ mlx5_fc_query_cached(counter, &bytes, &packets, &lastuse);
+ }
}
/* Under multipath it's possible for one rule to be currently
@@ -4888,14 +5012,18 @@ int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
u64 packets2;
u64 lastuse2;
- counter = mlx5e_tc_get_counter(flow->peer_flow);
- if (!counter)
- goto no_peer_counter;
- mlx5_fc_query_cached(counter, &bytes2, &packets2, &lastuse2);
-
- bytes += bytes2;
- packets += packets2;
- lastuse = max_t(u64, lastuse, lastuse2);
+ if (flow_flag_test(flow, USE_ACT_STATS)) {
+ f->use_act_stats = true;
+ } else {
+ counter = mlx5e_tc_get_counter(flow->peer_flow);
+ if (!counter)
+ goto no_peer_counter;
+ mlx5_fc_query_cached(counter, &bytes2, &packets2, &lastuse2);
+
+ bytes += bytes2;
+ packets += packets2;
+ lastuse = max_t(u64, lastuse, lastuse2);
+ }
}
no_peer_counter:
@@ -5220,6 +5348,8 @@ int mlx5e_tc_nic_init(struct mlx5e_priv *priv)
tc->ct = mlx5_tc_ct_init(priv, tc->chains, &tc->mod_hdr,
MLX5_FLOW_NAMESPACE_KERNEL, tc->post_act);
+ mlx5e_hairpin_params_init(&tc->hairpin_params, dev);
+
tc->netdevice_nb.notifier_call = mlx5e_tc_netdev_event;
err = register_netdevice_notifier_dev_net(priv->netdev,
&tc->netdevice_nb,
@@ -5230,8 +5360,18 @@ int mlx5e_tc_nic_init(struct mlx5e_priv *priv)
goto err_reg;
}
+ mlx5e_tc_debugfs_init(tc, mlx5e_fs_get_debugfs_root(priv->fs));
+
+ tc->action_stats_handle = mlx5e_tc_act_stats_create();
+ if (IS_ERR(tc->action_stats_handle)) {
+ err = PTR_ERR(tc->action_stats_handle);
+ goto err_act_stats;
+ }
+
return 0;
+err_act_stats:
+ unregister_netdevice_notifier_dev_net(priv->netdev,
+ &tc->netdevice_nb,
+ &tc->netdevice_nn);
err_reg:
mlx5_tc_ct_clean(tc->ct);
mlx5e_tc_post_act_destroy(tc->post_act);
@@ -5258,6 +5398,8 @@ void mlx5e_tc_nic_cleanup(struct mlx5e_priv *priv)
{
struct mlx5e_tc_table *tc = mlx5e_fs_get_tc(priv->fs);
+ debugfs_remove_recursive(tc->dfs_root);
+
if (tc->netdevice_nb.notifier_call)
unregister_netdevice_notifier_dev_net(priv->netdev,
&tc->netdevice_nb,
@@ -5279,6 +5421,7 @@ void mlx5e_tc_nic_cleanup(struct mlx5e_priv *priv)
mapping_destroy(tc->mapping);
mlx5_chains_destroy(tc->chains);
mlx5e_tc_nic_destroy_miss_table(priv);
+ mlx5e_tc_act_stats_free(tc->action_stats_handle);
}
int mlx5e_tc_ht_init(struct rhashtable *tc_ht)
@@ -5355,8 +5498,14 @@ int mlx5e_tc_esw_init(struct mlx5_rep_uplink_priv *uplink_priv)
goto err_register_fib_notifier;
}
+ uplink_priv->action_stats_handle = mlx5e_tc_act_stats_create();
+ if (IS_ERR(uplink_priv->action_stats_handle)) {
+ err = PTR_ERR(uplink_priv->action_stats_handle);
+ goto err_action_counter;
+ }
+
return 0;
+err_action_counter:
+ mlx5e_tc_tun_cleanup(uplink_priv->encap);
err_register_fib_notifier:
mapping_destroy(uplink_priv->tunnel_enc_opts_mapping);
err_enc_opts_mapping:
@@ -5383,6 +5532,7 @@ void mlx5e_tc_esw_cleanup(struct mlx5_rep_uplink_priv *uplink_priv)
mlx5_tc_ct_clean(uplink_priv->ct_priv);
mlx5e_flow_meters_cleanup(uplink_priv->flow_meters);
mlx5e_tc_post_act_destroy(uplink_priv->post_act);
+ mlx5e_tc_act_stats_free(uplink_priv->action_stats_handle);
}
int mlx5e_tc_num_filters(struct mlx5e_priv *priv, unsigned long flags)
@@ -5456,48 +5606,268 @@ int mlx5e_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
}
}
-bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe,
- struct sk_buff *skb)
+static bool mlx5e_tc_restore_tunnel(struct mlx5e_priv *priv, struct sk_buff *skb,
+ struct mlx5e_tc_update_priv *tc_priv,
+ u32 tunnel_id)
{
-#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT)
- u32 chain = 0, chain_tag, reg_b, zone_restore_id;
- struct mlx5e_priv *priv = netdev_priv(skb->dev);
- struct mlx5_mapped_obj mapped_obj;
- struct tc_skb_ext *tc_skb_ext;
- struct mlx5e_tc_table *tc;
+ struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+ struct tunnel_match_enc_opts enc_opts = {};
+ struct mlx5_rep_uplink_priv *uplink_priv;
+ struct mlx5e_rep_priv *uplink_rpriv;
+ struct metadata_dst *tun_dst;
+ struct tunnel_match_key key;
+ u32 tun_id, enc_opts_id;
+ struct net_device *dev;
int err;
- reg_b = be32_to_cpu(cqe->ft_metadata);
- tc = mlx5e_fs_get_tc(priv->fs);
- chain_tag = reg_b & MLX5E_TC_TABLE_CHAIN_TAG_MASK;
+ enc_opts_id = tunnel_id & ENC_OPTS_BITS_MASK;
+ tun_id = tunnel_id >> ENC_OPTS_BITS;
- err = mapping_find(tc->mapping, chain_tag, &mapped_obj);
+ if (!tun_id)
+ return true;
+
+ uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH);
+ uplink_priv = &uplink_rpriv->uplink_priv;
+
+ err = mapping_find(uplink_priv->tunnel_mapping, tun_id, &key);
if (err) {
netdev_dbg(priv->netdev,
- "Couldn't find chain for chain tag: %d, err: %d\n",
- chain_tag, err);
+ "Couldn't find tunnel for tun_id: %d, err: %d\n",
+ tun_id, err);
return false;
}
- if (mapped_obj.type == MLX5_MAPPED_OBJ_CHAIN) {
- chain = mapped_obj.chain;
- tc_skb_ext = tc_skb_ext_alloc(skb);
- if (WARN_ON(!tc_skb_ext))
+ if (enc_opts_id) {
+ err = mapping_find(uplink_priv->tunnel_enc_opts_mapping,
+ enc_opts_id, &enc_opts);
+ if (err) {
+ netdev_dbg(priv->netdev,
+ "Couldn't find tunnel (opts) for tun_id: %d, err: %d\n",
+ enc_opts_id, err);
return false;
+ }
+ }
+
+ switch (key.enc_control.addr_type) {
+ case FLOW_DISSECTOR_KEY_IPV4_ADDRS:
+ tun_dst = __ip_tun_set_dst(key.enc_ipv4.src, key.enc_ipv4.dst,
+ key.enc_ip.tos, key.enc_ip.ttl,
+ key.enc_tp.dst, TUNNEL_KEY,
+ key32_to_tunnel_id(key.enc_key_id.keyid),
+ enc_opts.key.len);
+ break;
+ case FLOW_DISSECTOR_KEY_IPV6_ADDRS:
+ tun_dst = __ipv6_tun_set_dst(&key.enc_ipv6.src, &key.enc_ipv6.dst,
+ key.enc_ip.tos, key.enc_ip.ttl,
+ key.enc_tp.dst, 0, TUNNEL_KEY,
+ key32_to_tunnel_id(key.enc_key_id.keyid),
+ enc_opts.key.len);
+ break;
+ default:
+ netdev_dbg(priv->netdev,
+ "Couldn't restore tunnel, unsupported addr_type: %d\n",
+ key.enc_control.addr_type);
+ return false;
+ }
+
+ if (!tun_dst) {
+ netdev_dbg(priv->netdev, "Couldn't restore tunnel, no tun_dst\n");
+ return false;
+ }
+
+ tun_dst->u.tun_info.key.tp_src = key.enc_tp.src;
+
+ if (enc_opts.key.len)
+ ip_tunnel_info_opts_set(&tun_dst->u.tun_info,
+ enc_opts.key.data,
+ enc_opts.key.len,
+ enc_opts.key.dst_opt_type);
+
+ skb_dst_set(skb, (struct dst_entry *)tun_dst);
+ dev = dev_get_by_index(&init_net, key.filter_ifindex);
+ if (!dev) {
+ netdev_dbg(priv->netdev,
+ "Couldn't find tunnel device with ifindex: %d\n",
+ key.filter_ifindex);
+ return false;
+ }
- tc_skb_ext->chain = chain;
+ /* Set fwd_dev so we can dev_put() it after the datapath */
+ tc_priv->fwd_dev = dev;
- zone_restore_id = (reg_b >> MLX5_REG_MAPPING_MOFFSET(NIC_ZONE_RESTORE_TO_REG)) &
- ESW_ZONE_ID_MASK;
+ skb->dev = dev;
- if (!mlx5e_tc_ct_restore_flow(tc->ct, skb,
- zone_restore_id))
+ return true;
+}
+
+static bool mlx5e_tc_restore_skb_tc_meta(struct sk_buff *skb, struct mlx5_tc_ct_priv *ct_priv,
+ struct mlx5_mapped_obj *mapped_obj, u32 zone_restore_id,
+ u32 tunnel_id, struct mlx5e_tc_update_priv *tc_priv)
+{
+ struct mlx5e_priv *priv = netdev_priv(skb->dev);
+ struct tc_skb_ext *tc_skb_ext;
+ u64 act_miss_cookie;
+ u32 chain;
+
+ chain = mapped_obj->type == MLX5_MAPPED_OBJ_CHAIN ? mapped_obj->chain : 0;
+ act_miss_cookie = mapped_obj->type == MLX5_MAPPED_OBJ_ACT_MISS ?
+ mapped_obj->act_miss_cookie : 0;
+ if (chain || act_miss_cookie) {
+ if (!mlx5e_tc_ct_restore_flow(ct_priv, skb, zone_restore_id))
return false;
- } else {
+
+ tc_skb_ext = tc_skb_ext_alloc(skb);
+ if (!tc_skb_ext) {
+ WARN_ON(1);
+ return false;
+ }
+
+ if (act_miss_cookie) {
+ tc_skb_ext->act_miss_cookie = act_miss_cookie;
+ tc_skb_ext->act_miss = 1;
+ } else {
+ tc_skb_ext->chain = chain;
+ }
+ }
+
+ if (tc_priv)
+ return mlx5e_tc_restore_tunnel(priv, skb, tc_priv, tunnel_id);
+
+ return true;
+}
+
+static void mlx5e_tc_restore_skb_sample(struct mlx5e_priv *priv, struct sk_buff *skb,
+ struct mlx5_mapped_obj *mapped_obj,
+ struct mlx5e_tc_update_priv *tc_priv)
+{
+ if (!mlx5e_tc_restore_tunnel(priv, skb, tc_priv, mapped_obj->sample.tunnel_id)) {
+ netdev_dbg(priv->netdev,
+ "Failed to restore tunnel info for sampled packet\n");
+ return;
+ }
+ mlx5e_tc_sample_skb(skb, mapped_obj);
+}
+
+static bool mlx5e_tc_restore_skb_int_port(struct mlx5e_priv *priv, struct sk_buff *skb,
+ struct mlx5_mapped_obj *mapped_obj,
+ struct mlx5e_tc_update_priv *tc_priv,
+ u32 tunnel_id)
+{
+ struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+ struct mlx5_rep_uplink_priv *uplink_priv;
+ struct mlx5e_rep_priv *uplink_rpriv;
+ bool forward_tx = false;
+
+ /* Tunnel restore takes precedence over int port restore */
+ if (tunnel_id)
+ return mlx5e_tc_restore_tunnel(priv, skb, tc_priv, tunnel_id);
+
+ uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH);
+ uplink_priv = &uplink_rpriv->uplink_priv;
+
+ if (mlx5e_tc_int_port_dev_fwd(uplink_priv->int_port_priv, skb,
+ mapped_obj->int_port_metadata, &forward_tx)) {
+ /* Set fwd_dev for future dev_put */
+ tc_priv->fwd_dev = skb->dev;
+ tc_priv->forward_tx = forward_tx;
+
+ return true;
+ }
+
+ return false;
+}
+
+bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, struct sk_buff *skb,
+ struct mapping_ctx *mapping_ctx, u32 mapped_obj_id,
+ struct mlx5_tc_ct_priv *ct_priv,
+ u32 zone_restore_id, u32 tunnel_id,
+ struct mlx5e_tc_update_priv *tc_priv)
+{
+ struct mlx5e_priv *priv = netdev_priv(skb->dev);
+ struct mlx5_mapped_obj mapped_obj;
+ int err;
+
+ err = mapping_find(mapping_ctx, mapped_obj_id, &mapped_obj);
+ if (err) {
+ netdev_dbg(skb->dev,
+ "Couldn't find mapped object for mapped_obj_id: %d, err: %d\n",
+ mapped_obj_id, err);
+ return false;
+ }
+
+ switch (mapped_obj.type) {
+ case MLX5_MAPPED_OBJ_CHAIN:
+ case MLX5_MAPPED_OBJ_ACT_MISS:
+ return mlx5e_tc_restore_skb_tc_meta(skb, ct_priv, &mapped_obj, zone_restore_id,
+ tunnel_id, tc_priv);
+ case MLX5_MAPPED_OBJ_SAMPLE:
+ mlx5e_tc_restore_skb_sample(priv, skb, &mapped_obj, tc_priv);
+ tc_priv->skb_done = true;
+ return true;
+ case MLX5_MAPPED_OBJ_INT_PORT_METADATA:
+ return mlx5e_tc_restore_skb_int_port(priv, skb, &mapped_obj, tc_priv, tunnel_id);
+ default:
netdev_dbg(priv->netdev, "Invalid mapped object type: %d\n", mapped_obj.type);
return false;
}
-#endif /* CONFIG_NET_TC_SKB_EXT */
- return true;
+ return false;
+}
+
+bool mlx5e_tc_update_skb_nic(struct mlx5_cqe64 *cqe, struct sk_buff *skb)
+{
+ struct mlx5e_priv *priv = netdev_priv(skb->dev);
+ u32 mapped_obj_id, reg_b, zone_restore_id;
+ struct mlx5_tc_ct_priv *ct_priv;
+ struct mapping_ctx *mapping_ctx;
+ struct mlx5e_tc_table *tc;
+
+ reg_b = be32_to_cpu(cqe->ft_metadata);
+ tc = mlx5e_fs_get_tc(priv->fs);
+ mapped_obj_id = reg_b & MLX5E_TC_TABLE_CHAIN_TAG_MASK;
+ zone_restore_id = (reg_b >> MLX5_REG_MAPPING_MOFFSET(NIC_ZONE_RESTORE_TO_REG)) &
+ ESW_ZONE_ID_MASK;
+ ct_priv = tc->ct;
+ mapping_ctx = tc->mapping;
+
+ return mlx5e_tc_update_skb(cqe, skb, mapping_ctx, mapped_obj_id, ct_priv, zone_restore_id,
+ 0, NULL);
+}
+
+int mlx5e_tc_action_miss_mapping_get(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr,
+ u64 act_miss_cookie, u32 *act_miss_mapping)
+{
+ struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+ struct mlx5_mapped_obj mapped_obj = {};
+ struct mapping_ctx *ctx;
+ int err;
+
+ ctx = esw->offloads.reg_c0_obj_pool;
+
+ mapped_obj.type = MLX5_MAPPED_OBJ_ACT_MISS;
+ mapped_obj.act_miss_cookie = act_miss_cookie;
+ err = mapping_add(ctx, &mapped_obj, act_miss_mapping);
+ if (err)
+ return err;
+
+ attr->act_id_restore_rule = esw_add_restore_rule(esw, *act_miss_mapping);
+ if (IS_ERR(attr->act_id_restore_rule)) {
+ err = PTR_ERR(attr->act_id_restore_rule);
+ goto err_rule;
+ }
+
+ return 0;
+
+err_rule:
+ mapping_remove(ctx, *act_miss_mapping);
+ return err;
+}
+
+void mlx5e_tc_action_miss_mapping_put(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr,
+ u32 act_miss_mapping)
+{
+ struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+ struct mapping_ctx *ctx;
+
+ ctx = esw->offloads.reg_c0_obj_pool;
+ mlx5_del_flow_rules(attr->act_id_restore_rule);
+ mapping_remove(ctx, act_miss_mapping);
}
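
The two helpers above are meant to be used as a get/put pair around the
lifetime of an offloaded rule that can miss in hardware. A minimal sketch of a
hypothetical caller follows; the function name and the elided rule setup are
assumptions, only the helper signatures come from the patch above:

/* Sketch: pair mlx5e_tc_action_miss_mapping_get/_put around a rule that
 * may miss. The mapping id is what the restore rule writes into reg_c0,
 * so mlx5e_tc_update_skb() can later recover the tc action cookie.
 */
static int example_offload_with_act_miss(struct mlx5e_priv *priv,
					 struct mlx5_flow_attr *attr,
					 u64 act_cookie)
{
	u32 act_miss_mapping;
	int err;

	err = mlx5e_tc_action_miss_mapping_get(priv, attr, act_cookie,
					       &act_miss_mapping);
	if (err)
		return err;

	/* ... install the flow rule using act_miss_mapping as its miss id ... */

	/* on rule teardown (or install failure), release the restore rule
	 * and the mapping
	 */
	mlx5e_tc_action_miss_mapping_put(priv, attr, act_miss_mapping);
	return 0;
}
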
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
index 50af70ef22f3..adb39e30f90f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
@@ -59,6 +59,8 @@ int mlx5e_tc_num_filters(struct mlx5e_priv *priv, unsigned long flags);
struct mlx5e_tc_update_priv {
struct net_device *fwd_dev;
+ bool skb_done;
+ bool forward_tx;
};
struct mlx5_nic_flow_attr {
@@ -69,35 +71,33 @@ struct mlx5_nic_flow_attr {
struct mlx5_flow_attr {
u32 action;
+ unsigned long tc_act_cookies[TCA_ACT_MAX_PRIO];
struct mlx5_fc *counter;
struct mlx5_modify_hdr *modify_hdr;
+ struct mlx5e_mod_hdr_handle *mh; /* attached mod header instance */
+ struct mlx5e_mod_hdr_handle *slow_mh; /* attached mod header instance for slow path */
struct mlx5_ct_attr ct_attr;
struct mlx5e_sample_attr sample_attr;
struct mlx5e_meter_attr meter_attr;
struct mlx5e_tc_flow_parse_attr *parse_attr;
u32 chain;
u16 prio;
+ u16 tc_act_cookies_count;
u32 dest_chain;
struct mlx5_flow_table *ft;
struct mlx5_flow_table *dest_ft;
u8 inner_match_level;
u8 outer_match_level;
- u8 ip_version;
u8 tun_ip_version;
int tunnel_id; /* mapped tunnel id */
u32 flags;
u32 exe_aso_type;
struct list_head list;
struct mlx5e_post_act_handle *post_act_handle;
- struct {
- /* Indicate whether the parsed flow should be counted for lag mode decision
- * making
- */
- bool count;
- } lag;
struct mlx5_flow_attr *branch_true;
struct mlx5_flow_attr *branch_false;
struct mlx5_flow_attr *jumping_attr;
+ struct mlx5_flow_handle *act_id_restore_rule;
/* keep this union last */
union {
DECLARE_FLEX_ARRAY(struct mlx5_esw_flow_attr, esw_attr);
@@ -134,7 +134,6 @@ struct mlx5_rx_tun_attr {
__be32 v4;
struct in6_addr v6;
} dst_ip; /* Valid if decap_vport is not zero */
- u32 vni;
};
#define MLX5E_TC_TABLE_CHAIN_TAG_BITS 16
@@ -197,6 +196,8 @@ int mlx5e_delete_flower(struct net_device *dev, struct mlx5e_priv *priv,
int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
struct flow_cls_offload *f, unsigned long flags);
+int mlx5e_tc_fill_action_stats(struct mlx5e_priv *priv,
+ struct flow_offload_action *fl_act);
int mlx5e_tc_configure_matchall(struct mlx5e_priv *priv,
struct tc_cls_matchall_offload *f);
@@ -227,7 +228,7 @@ void mlx5e_tc_update_neigh_used_value(struct mlx5e_neigh_hash_entry *nhe);
void mlx5e_tc_reoffload_flows_work(struct work_struct *work);
enum mlx5e_tc_attr_to_reg {
- CHAIN_TO_REG,
+ MAPPED_OBJ_TO_REG,
VPORT_TO_REG,
TUNNEL_TO_REG,
CTSTATE_TO_REG,
@@ -236,7 +237,7 @@ enum mlx5e_tc_attr_to_reg {
MARK_TO_REG,
LABELS_TO_REG,
FTEID_TO_REG,
- NIC_CHAIN_TO_REG,
+ NIC_MAPPED_OBJ_TO_REG,
NIC_ZONE_RESTORE_TO_REG,
PACKET_COLOR_TO_REG,
};
@@ -285,9 +286,13 @@ int mlx5e_tc_match_to_reg_set_and_get_id(struct mlx5_core_dev *mdev,
enum mlx5e_tc_attr_to_reg type,
u32 data);
-int mlx5e_tc_add_flow_mod_hdr(struct mlx5e_priv *priv,
- struct mlx5e_tc_flow *flow,
- struct mlx5_flow_attr *attr);
+int mlx5e_tc_attach_mod_hdr(struct mlx5e_priv *priv,
+ struct mlx5e_tc_flow *flow,
+ struct mlx5_flow_attr *attr);
+
+void mlx5e_tc_detach_mod_hdr(struct mlx5e_priv *priv,
+ struct mlx5e_tc_flow *flow,
+ struct mlx5_flow_attr *attr);
void mlx5e_tc_set_ethertype(struct mlx5_core_dev *mdev,
struct flow_match_basic *match, bool outer,
@@ -366,7 +371,6 @@ struct mlx5e_tc_table *mlx5e_tc_table_alloc(void);
void mlx5e_tc_table_free(struct mlx5e_tc_table *tc);
static inline bool mlx5e_cqe_regb_chain(struct mlx5_cqe64 *cqe)
{
-#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT)
u32 chain, reg_b;
reg_b = be32_to_cpu(cqe->ft_metadata);
@@ -377,20 +381,29 @@ static inline bool mlx5e_cqe_regb_chain(struct mlx5_cqe64 *cqe)
chain = reg_b & MLX5E_TC_TABLE_CHAIN_TAG_MASK;
if (chain)
return true;
-#endif
return false;
}
-bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, struct sk_buff *skb);
+bool mlx5e_tc_update_skb_nic(struct mlx5_cqe64 *cqe, struct sk_buff *skb);
+bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, struct sk_buff *skb,
+ struct mapping_ctx *mapping_ctx, u32 mapped_obj_id,
+ struct mlx5_tc_ct_priv *ct_priv,
+ u32 zone_restore_id, u32 tunnel_id,
+ struct mlx5e_tc_update_priv *tc_priv);
#else /* CONFIG_MLX5_CLS_ACT */
static inline struct mlx5e_tc_table *mlx5e_tc_table_alloc(void) { return NULL; }
static inline void mlx5e_tc_table_free(struct mlx5e_tc_table *tc) {}
static inline bool mlx5e_cqe_regb_chain(struct mlx5_cqe64 *cqe)
{ return false; }
static inline bool
-mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, struct sk_buff *skb)
+mlx5e_tc_update_skb_nic(struct mlx5_cqe64 *cqe, struct sk_buff *skb)
{ return true; }
#endif
+int mlx5e_tc_action_miss_mapping_get(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr,
+ u64 act_miss_cookie, u32 *act_miss_mapping);
+void mlx5e_tc_action_miss_mapping_put(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr,
+ u32 act_miss_mapping);
+
#endif /* __MLX5_EN_TC_H__ */
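
The new skb_done and forward_tx fields extend mlx5e_tc_update_priv so the rep
RX path can tell how to finish a restored packet. A minimal consumer sketch,
assuming the usual rep datapath context; the function name and hooks are
illustrative, only the field semantics come from this patch:

static void example_finish_restored_skb(struct napi_struct *napi,
					struct sk_buff *skb,
					struct mlx5e_tc_update_priv *tc_priv)
{
	if (tc_priv->skb_done)
		dev_kfree_skb_any(skb);		/* e.g. sampled: psample took a copy */
	else if (tc_priv->forward_tx)
		dev_queue_xmit(skb);		/* int-port restore asked us to tx */
	else
		napi_gro_receive(napi, skb);	/* normal receive */

	if (tc_priv->fwd_dev)
		dev_put(tc_priv->fwd_dev);	/* ref taken during tunnel restore */
}
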
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
index f7897ddb29c5..df5e780e8e6a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
@@ -720,21 +720,6 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev)
return NETDEV_TX_OK;
}
-void mlx5e_sq_xmit_simple(struct mlx5e_txqsq *sq, struct sk_buff *skb, bool xmit_more)
-{
- struct mlx5e_tx_wqe_attr wqe_attr;
- struct mlx5e_tx_attr attr;
- struct mlx5e_tx_wqe *wqe;
- u16 pi;
-
- mlx5e_sq_xmit_prepare(sq, skb, NULL, &attr);
- mlx5e_sq_calc_wqe_attr(skb, &attr, &wqe_attr);
- pi = mlx5e_txqsq_get_next_pi(sq, wqe_attr.num_wqebbs);
- wqe = MLX5E_TX_FETCH_WQE(sq, pi);
- mlx5e_txwqe_build_eseg_csum(sq, skb, NULL, &wqe->eth);
- mlx5e_sq_xmit_wqe(sq, skb, &attr, &wqe_attr, wqe, pi, xmit_more);
-}
-
static void mlx5e_tx_wi_dma_unmap(struct mlx5e_txqsq *sq, struct mlx5e_tx_wqe_info *wi,
u32 *dma_fifo_cc)
{
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
index 8f7580fec193..38b32e98f3bd 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
@@ -629,9 +629,9 @@ static u16 async_eq_depth_devlink_param_get(struct mlx5_core_dev *dev)
union devlink_param_value val;
int err;
- err = devlink_param_driverinit_value_get(devlink,
- DEVLINK_PARAM_GENERIC_ID_EVENT_EQ_SIZE,
- &val);
+ err = devl_param_driverinit_value_get(devlink,
+ DEVLINK_PARAM_GENERIC_ID_EVENT_EQ_SIZE,
+ &val);
if (!err)
return val.vu32;
mlx5_core_dbg(dev, "Failed to get param. using default. err = %d\n", err);
@@ -817,9 +817,12 @@ static void comp_irqs_release(struct mlx5_core_dev *dev)
static int comp_irqs_request(struct mlx5_core_dev *dev)
{
struct mlx5_eq_table *table = dev->priv.eq_table;
+ const struct cpumask *prev = cpu_none_mask;
+ const struct cpumask *mask;
int ncomp_eqs = table->num_comp_eqs;
u16 *cpus;
int ret;
+ int cpu;
int i;
ncomp_eqs = table->num_comp_eqs;
@@ -838,8 +841,19 @@ static int comp_irqs_request(struct mlx5_core_dev *dev)
ret = -ENOMEM;
goto free_irqs;
}
- for (i = 0; i < ncomp_eqs; i++)
- cpus[i] = cpumask_local_spread(i, dev->priv.numa_node);
+
+ i = 0;
+ rcu_read_lock();
+ for_each_numa_hop_mask(mask, dev->priv.numa_node) {
+ for_each_cpu_andnot(cpu, mask, prev) {
+ cpus[i] = cpu;
+ if (++i == ncomp_eqs)
+ goto spread_done;
+ }
+ prev = mask;
+ }
+spread_done:
+ rcu_read_unlock();
ret = mlx5_irqs_request_vectors(dev, cpus, ncomp_eqs, table->comp_irqs);
kfree(cpus);
if (ret < 0)
@@ -874,9 +888,9 @@ static u16 comp_eq_depth_devlink_param_get(struct mlx5_core_dev *dev)
union devlink_param_value val;
int err;
- err = devlink_param_driverinit_value_get(devlink,
- DEVLINK_PARAM_GENERIC_ID_IO_EQ_SIZE,
- &val);
+ err = devl_param_driverinit_value_get(devlink,
+ DEVLINK_PARAM_GENERIC_ID_IO_EQ_SIZE,
+ &val);
if (!err)
return val.vu32;
mlx5_core_dbg(dev, "Failed to get param. using default. err = %d\n", err);
@@ -946,11 +960,11 @@ static int vector2eqnirqn(struct mlx5_core_dev *dev, int vector, int *eqn,
unsigned int *irqn)
{
struct mlx5_eq_table *table = dev->priv.eq_table;
- struct mlx5_eq_comp *eq, *n;
+ struct mlx5_eq_comp *eq;
int err = -ENOENT;
int i = 0;
- list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) {
+ list_for_each_entry(eq, &table->comp_eqs_list, list) {
if (i++ == vector) {
if (irqn)
*irqn = eq->core.irqn;
@@ -985,10 +999,10 @@ struct cpumask *
mlx5_comp_irq_get_affinity_mask(struct mlx5_core_dev *dev, int vector)
{
struct mlx5_eq_table *table = dev->priv.eq_table;
- struct mlx5_eq_comp *eq, *n;
+ struct mlx5_eq_comp *eq;
int i = 0;
- list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) {
+ list_for_each_entry(eq, &table->comp_eqs_list, list) {
if (i++ == vector)
break;
}
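
The new IRQ spreading loop in comp_irqs_request() visits CPUs in order of
increasing NUMA distance from the device's node: each hop mask is a superset
of the previous one, so (mask AND NOT prev) yields only the CPUs added at that
hop, without repeats. A restatement of just that loop, assuming the kernel's
for_each_numa_hop_mask()/for_each_cpu_andnot() iterators as used above:

static void example_spread_cpus(u16 *cpus, int ncomp_eqs, int node)
{
	const struct cpumask *prev = cpu_none_mask;
	const struct cpumask *mask;
	int cpu, i = 0;

	rcu_read_lock();	/* hop masks are RCU-protected */
	for_each_numa_hop_mask(mask, node) {
		for_each_cpu_andnot(cpu, mask, prev) {
			cpus[i] = cpu;
			if (++i == ncomp_eqs)
				goto done;
		}
		prev = mask;
	}
done:
	rcu_read_unlock();
}
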
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c
index a994e71e05c1..d55775627a47 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c
@@ -356,8 +356,8 @@ void esw_acl_ingress_ofld_cleanup(struct mlx5_eswitch *esw,
}
/* Caller must hold rtnl_lock */
-int mlx5_esw_acl_ingress_vport_bond_update(struct mlx5_eswitch *esw, u16 vport_num,
- u32 metadata)
+int mlx5_esw_acl_ingress_vport_metadata_update(struct mlx5_eswitch *esw, u16 vport_num,
+ u32 metadata)
{
struct mlx5_vport *vport = mlx5_eswitch_get_vport(esw, vport_num);
int err;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ofld.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ofld.h
index 11d3d3978848..c9f8469e9a47 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ofld.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ofld.h
@@ -24,8 +24,8 @@ static inline bool mlx5_esw_acl_egress_fwd2vport_supported(struct mlx5_eswitch *
/* Eswitch acl ingress external APIs */
int esw_acl_ingress_ofld_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport);
void esw_acl_ingress_ofld_cleanup(struct mlx5_eswitch *esw, struct mlx5_vport *vport);
-int mlx5_esw_acl_ingress_vport_bond_update(struct mlx5_eswitch *esw, u16 vport_num,
- u32 metadata);
+int mlx5_esw_acl_ingress_vport_metadata_update(struct mlx5_eswitch *esw, u16 vport_num,
+ u32 metadata);
void mlx5_esw_acl_ingress_vport_drop_rule_destroy(struct mlx5_eswitch *esw, u16 vport_num);
int mlx5_esw_acl_ingress_vport_drop_rule_create(struct mlx5_eswitch *esw, u16 vport_num);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c
index c9a91158e99c..9959e9fd15a1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c
@@ -16,18 +16,12 @@
#include "lib/fs_chains.h"
#include "en/mod_hdr.h"
-#define MLX5_ESW_INDIR_TABLE_SIZE 128
-#define MLX5_ESW_INDIR_TABLE_RECIRC_IDX_MAX (MLX5_ESW_INDIR_TABLE_SIZE - 2)
+#define MLX5_ESW_INDIR_TABLE_SIZE 2
+#define MLX5_ESW_INDIR_TABLE_RECIRC_IDX (MLX5_ESW_INDIR_TABLE_SIZE - 2)
#define MLX5_ESW_INDIR_TABLE_FWD_IDX (MLX5_ESW_INDIR_TABLE_SIZE - 1)
struct mlx5_esw_indir_table_rule {
- struct list_head list;
struct mlx5_flow_handle *handle;
- union {
- __be32 v4;
- struct in6_addr v6;
- } dst_ip;
- u32 vni;
struct mlx5_modify_hdr *mh;
refcount_t refcnt;
};
@@ -38,12 +32,10 @@ struct mlx5_esw_indir_table_entry {
struct mlx5_flow_group *recirc_grp;
struct mlx5_flow_group *fwd_grp;
struct mlx5_flow_handle *fwd_rule;
- struct list_head recirc_rules;
- int recirc_cnt;
+ struct mlx5_esw_indir_table_rule *recirc_rule;
int fwd_ref;
u16 vport;
- u8 ip_version;
};
struct mlx5_esw_indir_table {
@@ -89,7 +81,6 @@ mlx5_esw_indir_table_needed(struct mlx5_eswitch *esw,
return esw_attr->in_rep->vport == MLX5_VPORT_UPLINK &&
vf_sf_vport &&
esw->dev == dest_mdev &&
- attr->ip_version &&
attr->flags & MLX5_ATTR_FLAG_SRC_REWRITE;
}
@@ -101,27 +92,8 @@ mlx5_esw_indir_table_decap_vport(struct mlx5_flow_attr *attr)
return esw_attr->rx_tun_attr ? esw_attr->rx_tun_attr->decap_vport : 0;
}
-static struct mlx5_esw_indir_table_rule *
-mlx5_esw_indir_table_rule_lookup(struct mlx5_esw_indir_table_entry *e,
- struct mlx5_esw_flow_attr *attr)
-{
- struct mlx5_esw_indir_table_rule *rule;
-
- list_for_each_entry(rule, &e->recirc_rules, list)
- if (rule->vni == attr->rx_tun_attr->vni &&
- !memcmp(&rule->dst_ip, &attr->rx_tun_attr->dst_ip,
- sizeof(attr->rx_tun_attr->dst_ip)))
- goto found;
- return NULL;
-
-found:
- refcount_inc(&rule->refcnt);
- return rule;
-}
-
static int mlx5_esw_indir_table_rule_get(struct mlx5_eswitch *esw,
struct mlx5_flow_attr *attr,
- struct mlx5_flow_spec *spec,
struct mlx5_esw_indir_table_entry *e)
{
struct mlx5_esw_flow_attr *esw_attr = attr->esw_attr;
@@ -130,73 +102,18 @@ static int mlx5_esw_indir_table_rule_get(struct mlx5_eswitch *esw,
struct mlx5_flow_destination dest = {};
struct mlx5_esw_indir_table_rule *rule;
struct mlx5_flow_act flow_act = {};
- struct mlx5_flow_spec *rule_spec;
struct mlx5_flow_handle *handle;
int err = 0;
u32 data;
- rule = mlx5_esw_indir_table_rule_lookup(e, esw_attr);
- if (rule)
+ if (e->recirc_rule) {
+ refcount_inc(&e->recirc_rule->refcnt);
return 0;
-
- if (e->recirc_cnt == MLX5_ESW_INDIR_TABLE_RECIRC_IDX_MAX)
- return -EINVAL;
-
- rule_spec = kvzalloc(sizeof(*rule_spec), GFP_KERNEL);
- if (!rule_spec)
- return -ENOMEM;
-
- rule = kzalloc(sizeof(*rule), GFP_KERNEL);
- if (!rule) {
- err = -ENOMEM;
- goto out;
- }
-
- rule_spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS |
- MLX5_MATCH_MISC_PARAMETERS |
- MLX5_MATCH_MISC_PARAMETERS_2;
- if (MLX5_CAP_FLOWTABLE_NIC_RX(esw->dev, ft_field_support.outer_ip_version)) {
- MLX5_SET(fte_match_param, rule_spec->match_criteria,
- outer_headers.ip_version, 0xf);
- MLX5_SET(fte_match_param, rule_spec->match_value, outer_headers.ip_version,
- attr->ip_version);
- } else if (attr->ip_version) {
- MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria,
- outer_headers.ethertype);
- MLX5_SET(fte_match_param, rule_spec->match_value, outer_headers.ethertype,
- (attr->ip_version == 4 ? ETH_P_IP : ETH_P_IPV6));
- } else {
- err = -EOPNOTSUPP;
- goto err_ethertype;
}
- if (attr->ip_version == 4) {
- MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria,
- outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4);
- MLX5_SET(fte_match_param, rule_spec->match_value,
- outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4,
- ntohl(esw_attr->rx_tun_attr->dst_ip.v4));
- } else if (attr->ip_version == 6) {
- int len = sizeof(struct in6_addr);
-
- memset(MLX5_ADDR_OF(fte_match_param, rule_spec->match_criteria,
- outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
- 0xff, len);
- memcpy(MLX5_ADDR_OF(fte_match_param, rule_spec->match_value,
- outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
- &esw_attr->rx_tun_attr->dst_ip.v6, len);
- }
-
- MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria,
- misc_parameters.vxlan_vni);
- MLX5_SET(fte_match_param, rule_spec->match_value, misc_parameters.vxlan_vni,
- MLX5_GET(fte_match_param, spec->match_value, misc_parameters.vxlan_vni));
-
- MLX5_SET(fte_match_param, rule_spec->match_criteria,
- misc_parameters_2.metadata_reg_c_0, mlx5_eswitch_get_vport_metadata_mask());
- MLX5_SET(fte_match_param, rule_spec->match_value, misc_parameters_2.metadata_reg_c_0,
- mlx5_eswitch_get_vport_metadata_for_match(esw_attr->in_mdev->priv.eswitch,
- MLX5_VPORT_UPLINK));
+ rule = kzalloc(sizeof(*rule), GFP_KERNEL);
+ if (!rule)
+ return -ENOMEM;
/* Modify flow source to recirculate packet */
data = mlx5_eswitch_get_vport_metadata_for_set(esw, esw_attr->rx_tun_attr->decap_vport);
@@ -219,13 +136,14 @@ static int mlx5_esw_indir_table_rule_get(struct mlx5_eswitch *esw,
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
flow_act.flags = FLOW_ACT_IGNORE_FLOW_LEVEL | FLOW_ACT_NO_APPEND;
+ flow_act.fg = e->recirc_grp;
dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
dest.ft = mlx5_chains_get_table(chains, 0, 1, 0);
if (IS_ERR(dest.ft)) {
err = PTR_ERR(dest.ft);
goto err_table;
}
- handle = mlx5_add_flow_rules(e->ft, rule_spec, &flow_act, &dest, 1);
+ handle = mlx5_add_flow_rules(e->ft, NULL, &flow_act, &dest, 1);
if (IS_ERR(handle)) {
err = PTR_ERR(handle);
goto err_handle;
@@ -233,14 +151,10 @@ static int mlx5_esw_indir_table_rule_get(struct mlx5_eswitch *esw,
mlx5e_mod_hdr_dealloc(&mod_acts);
rule->handle = handle;
- rule->vni = esw_attr->rx_tun_attr->vni;
rule->mh = flow_act.modify_hdr;
- memcpy(&rule->dst_ip, &esw_attr->rx_tun_attr->dst_ip,
- sizeof(esw_attr->rx_tun_attr->dst_ip));
refcount_set(&rule->refcnt, 1);
- list_add(&rule->list, &e->recirc_rules);
- e->recirc_cnt++;
- goto out;
+ e->recirc_rule = rule;
+ return 0;
err_handle:
mlx5_chains_put_table(chains, 0, 1, 0);
@@ -250,89 +164,44 @@ err_mod_hdr_alloc:
err_mod_hdr_regc1:
mlx5e_mod_hdr_dealloc(&mod_acts);
err_mod_hdr_regc0:
-err_ethertype:
kfree(rule);
-out:
- kvfree(rule_spec);
return err;
}
static void mlx5_esw_indir_table_rule_put(struct mlx5_eswitch *esw,
- struct mlx5_flow_attr *attr,
struct mlx5_esw_indir_table_entry *e)
{
- struct mlx5_esw_flow_attr *esw_attr = attr->esw_attr;
+ struct mlx5_esw_indir_table_rule *rule = e->recirc_rule;
struct mlx5_fs_chains *chains = esw_chains(esw);
- struct mlx5_esw_indir_table_rule *rule;
- list_for_each_entry(rule, &e->recirc_rules, list)
- if (rule->vni == esw_attr->rx_tun_attr->vni &&
- !memcmp(&rule->dst_ip, &esw_attr->rx_tun_attr->dst_ip,
- sizeof(esw_attr->rx_tun_attr->dst_ip)))
- goto found;
-
- return;
+ if (!rule)
+ return;
-found:
if (!refcount_dec_and_test(&rule->refcnt))
return;
mlx5_del_flow_rules(rule->handle);
mlx5_chains_put_table(chains, 0, 1, 0);
mlx5_modify_header_dealloc(esw->dev, rule->mh);
- list_del(&rule->list);
kfree(rule);
- e->recirc_cnt--;
+ e->recirc_rule = NULL;
}
-static int mlx5_create_indir_recirc_group(struct mlx5_eswitch *esw,
- struct mlx5_flow_attr *attr,
- struct mlx5_flow_spec *spec,
- struct mlx5_esw_indir_table_entry *e)
+static int mlx5_create_indir_recirc_group(struct mlx5_esw_indir_table_entry *e)
{
int err = 0, inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
- u32 *in, *match;
+ u32 *in;
in = kvzalloc(inlen, GFP_KERNEL);
if (!in)
return -ENOMEM;
- MLX5_SET(create_flow_group_in, in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS |
- MLX5_MATCH_MISC_PARAMETERS | MLX5_MATCH_MISC_PARAMETERS_2);
- match = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
-
- if (MLX5_CAP_FLOWTABLE_NIC_RX(esw->dev, ft_field_support.outer_ip_version))
- MLX5_SET(fte_match_param, match, outer_headers.ip_version, 0xf);
- else
- MLX5_SET_TO_ONES(fte_match_param, match, outer_headers.ethertype);
-
- if (attr->ip_version == 4) {
- MLX5_SET_TO_ONES(fte_match_param, match,
- outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4);
- } else if (attr->ip_version == 6) {
- memset(MLX5_ADDR_OF(fte_match_param, match,
- outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
- 0xff, sizeof(struct in6_addr));
- } else {
- err = -EOPNOTSUPP;
- goto out;
- }
-
- MLX5_SET_TO_ONES(fte_match_param, match, misc_parameters.vxlan_vni);
- MLX5_SET(fte_match_param, match, misc_parameters_2.metadata_reg_c_0,
- mlx5_eswitch_get_vport_metadata_mask());
MLX5_SET(create_flow_group_in, in, start_flow_index, 0);
- MLX5_SET(create_flow_group_in, in, end_flow_index, MLX5_ESW_INDIR_TABLE_RECIRC_IDX_MAX);
+ MLX5_SET(create_flow_group_in, in, end_flow_index, MLX5_ESW_INDIR_TABLE_RECIRC_IDX);
e->recirc_grp = mlx5_create_flow_group(e->ft, in);
- if (IS_ERR(e->recirc_grp)) {
+ if (IS_ERR(e->recirc_grp))
err = PTR_ERR(e->recirc_grp);
- goto out;
- }
- INIT_LIST_HEAD(&e->recirc_rules);
- e->recirc_cnt = 0;
-
-out:
kvfree(in);
return err;
}
@@ -343,19 +212,12 @@ static int mlx5_create_indir_fwd_group(struct mlx5_eswitch *esw,
int err = 0, inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
struct mlx5_flow_destination dest = {};
struct mlx5_flow_act flow_act = {};
- struct mlx5_flow_spec *spec;
u32 *in;
in = kvzalloc(inlen, GFP_KERNEL);
if (!in)
return -ENOMEM;
- spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
- if (!spec) {
- kvfree(in);
- return -ENOMEM;
- }
-
/* Hold one entry */
MLX5_SET(create_flow_group_in, in, start_flow_index, MLX5_ESW_INDIR_TABLE_FWD_IDX);
MLX5_SET(create_flow_group_in, in, end_flow_index, MLX5_ESW_INDIR_TABLE_FWD_IDX);
@@ -366,25 +228,25 @@ static int mlx5_create_indir_fwd_group(struct mlx5_eswitch *esw,
}
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+ flow_act.fg = e->fwd_grp;
dest.type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
dest.vport.num = e->vport;
dest.vport.vhca_id = MLX5_CAP_GEN(esw->dev, vhca_id);
dest.vport.flags = MLX5_FLOW_DEST_VPORT_VHCA_ID;
- e->fwd_rule = mlx5_add_flow_rules(e->ft, spec, &flow_act, &dest, 1);
+ e->fwd_rule = mlx5_add_flow_rules(e->ft, NULL, &flow_act, &dest, 1);
if (IS_ERR(e->fwd_rule)) {
mlx5_destroy_flow_group(e->fwd_grp);
err = PTR_ERR(e->fwd_rule);
}
err_out:
- kvfree(spec);
kvfree(in);
return err;
}
static struct mlx5_esw_indir_table_entry *
mlx5_esw_indir_table_entry_create(struct mlx5_eswitch *esw, struct mlx5_flow_attr *attr,
- struct mlx5_flow_spec *spec, u16 vport, bool decap)
+ u16 vport, bool decap)
{
struct mlx5_flow_table_attr ft_attr = {};
struct mlx5_flow_namespace *root_ns;
@@ -412,15 +274,14 @@ mlx5_esw_indir_table_entry_create(struct mlx5_eswitch *esw, struct mlx5_flow_att
}
e->ft = ft;
e->vport = vport;
- e->ip_version = attr->ip_version;
e->fwd_ref = !decap;
- err = mlx5_create_indir_recirc_group(esw, attr, spec, e);
+ err = mlx5_create_indir_recirc_group(e);
if (err)
goto recirc_grp_err;
if (decap) {
- err = mlx5_esw_indir_table_rule_get(esw, attr, spec, e);
+ err = mlx5_esw_indir_table_rule_get(esw, attr, e);
if (err)
goto recirc_rule_err;
}
@@ -430,13 +291,13 @@ mlx5_esw_indir_table_entry_create(struct mlx5_eswitch *esw, struct mlx5_flow_att
goto fwd_grp_err;
hash_add(esw->fdb_table.offloads.indir->table, &e->hlist,
- vport << 16 | attr->ip_version);
+ vport << 16);
return e;
fwd_grp_err:
if (decap)
- mlx5_esw_indir_table_rule_put(esw, attr, e);
+ mlx5_esw_indir_table_rule_put(esw, e);
recirc_rule_err:
mlx5_destroy_flow_group(e->recirc_grp);
recirc_grp_err:
@@ -447,13 +308,13 @@ tbl_err:
}
static struct mlx5_esw_indir_table_entry *
-mlx5_esw_indir_table_entry_lookup(struct mlx5_eswitch *esw, u16 vport, u8 ip_version)
+mlx5_esw_indir_table_entry_lookup(struct mlx5_eswitch *esw, u16 vport)
{
struct mlx5_esw_indir_table_entry *e;
- u32 key = vport << 16 | ip_version;
+ u32 key = vport << 16;
hash_for_each_possible(esw->fdb_table.offloads.indir->table, e, hlist, key)
- if (e->vport == vport && e->ip_version == ip_version)
+ if (e->vport == vport)
return e;
return NULL;
@@ -461,24 +322,23 @@ mlx5_esw_indir_table_entry_lookup(struct mlx5_eswitch *esw, u16 vport, u8 ip_ver
struct mlx5_flow_table *mlx5_esw_indir_table_get(struct mlx5_eswitch *esw,
struct mlx5_flow_attr *attr,
- struct mlx5_flow_spec *spec,
u16 vport, bool decap)
{
struct mlx5_esw_indir_table_entry *e;
int err;
mutex_lock(&esw->fdb_table.offloads.indir->lock);
- e = mlx5_esw_indir_table_entry_lookup(esw, vport, attr->ip_version);
+ e = mlx5_esw_indir_table_entry_lookup(esw, vport);
if (e) {
if (!decap) {
e->fwd_ref++;
} else {
- err = mlx5_esw_indir_table_rule_get(esw, attr, spec, e);
+ err = mlx5_esw_indir_table_rule_get(esw, attr, e);
if (err)
goto out_err;
}
} else {
- e = mlx5_esw_indir_table_entry_create(esw, attr, spec, vport, decap);
+ e = mlx5_esw_indir_table_entry_create(esw, attr, vport, decap);
if (IS_ERR(e)) {
err = PTR_ERR(e);
esw_warn(esw->dev, "Failed to create indirection table, err %d.\n", err);
@@ -494,22 +354,21 @@ out_err:
}
void mlx5_esw_indir_table_put(struct mlx5_eswitch *esw,
- struct mlx5_flow_attr *attr,
u16 vport, bool decap)
{
struct mlx5_esw_indir_table_entry *e;
mutex_lock(&esw->fdb_table.offloads.indir->lock);
- e = mlx5_esw_indir_table_entry_lookup(esw, vport, attr->ip_version);
+ e = mlx5_esw_indir_table_entry_lookup(esw, vport);
if (!e)
goto out;
if (!decap)
e->fwd_ref--;
else
- mlx5_esw_indir_table_rule_put(esw, attr, e);
+ mlx5_esw_indir_table_rule_put(esw, e);
- if (e->fwd_ref || e->recirc_cnt)
+ if (e->fwd_ref || e->recirc_rule)
goto out;
hash_del(&e->hlist);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.h
index 21d56b49d14b..036f5b3a341b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.h
@@ -13,10 +13,8 @@ mlx5_esw_indir_table_destroy(struct mlx5_esw_indir_table *indir);
struct mlx5_flow_table *mlx5_esw_indir_table_get(struct mlx5_eswitch *esw,
struct mlx5_flow_attr *attr,
- struct mlx5_flow_spec *spec,
u16 vport, bool decap);
void mlx5_esw_indir_table_put(struct mlx5_eswitch *esw,
- struct mlx5_flow_attr *attr,
u16 vport, bool decap);
bool
@@ -44,7 +42,6 @@ mlx5_esw_indir_table_destroy(struct mlx5_esw_indir_table *indir)
static inline struct mlx5_flow_table *
mlx5_esw_indir_table_get(struct mlx5_eswitch *esw,
struct mlx5_flow_attr *attr,
- struct mlx5_flow_spec *spec,
u16 vport, bool decap)
{
return ERR_PTR(-EOPNOTSUPP);
@@ -52,7 +49,6 @@ mlx5_esw_indir_table_get(struct mlx5_eswitch *esw,
static inline void
mlx5_esw_indir_table_put(struct mlx5_eswitch *esw,
- struct mlx5_flow_attr *attr,
u16 vport, bool decap)
{
}
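
With the rewrite above, each indirect table entry carries at most one
recirculation rule, so the old per-(VNI, dst_ip) lookup list collapses into a
refcounted singleton. A sketch of the resulting get/put pattern, mirroring
mlx5_esw_indir_table_rule_get/_put; the function names are illustrative and
the entry is assumed to be protected by the indir table lock:

static int example_recirc_rule_hold(struct mlx5_esw_indir_table_entry *e)
{
	/* entry already has its single recirc rule: just take a reference */
	if (e->recirc_rule) {
		refcount_inc(&e->recirc_rule->refcnt);
		return 0;
	}
	/* ... otherwise allocate and install the one rule with refcount 1 ... */
	return 0;
}

static void example_recirc_rule_release(struct mlx5_esw_indir_table_entry *e)
{
	struct mlx5_esw_indir_table_rule *rule = e->recirc_rule;

	if (!rule || !refcount_dec_and_test(&rule->refcnt))
		return;

	/* last reference: tear down the rule and clear the singleton slot */
	e->recirc_rule = NULL;
}
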
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 9daf55e90367..0f052513fefa 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1190,9 +1190,9 @@ static void mlx5_eswitch_get_devlink_param(struct mlx5_eswitch *esw)
union devlink_param_value val;
int err;
- err = devlink_param_driverinit_value_get(devlink,
- MLX5_DEVLINK_PARAM_ID_ESW_LARGE_GROUP_NUM,
- &val);
+ err = devl_param_driverinit_value_get(devlink,
+ MLX5_DEVLINK_PARAM_ID_ESW_LARGE_GROUP_NUM,
+ &val);
if (!err) {
esw->params.large_group_num = val.vu32;
} else {
@@ -1250,7 +1250,7 @@ static int mlx5_esw_acls_ns_init(struct mlx5_eswitch *esw)
if (err)
return err;
} else {
- esw_warn(dev, "engress ACL is not supported by FW\n");
+ esw_warn(dev, "egress ACL is not supported by FW\n");
}
if (MLX5_CAP_ESW_INGRESS_ACL(dev, ft_support)) {
@@ -1406,9 +1406,7 @@ void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw, bool clear_vf)
mlx5_eswitch_unload_vf_vports(esw, esw->esw_funcs.num_vfs);
if (clear_vf)
mlx5_eswitch_clear_vf_vports_info(esw);
- /* If disabling sriov in switchdev mode, free meta rules here
- * because it depends on num_vfs.
- */
+
if (esw->mode == MLX5_ESWITCH_OFFLOADS) {
struct devlink *devlink = priv_to_devlink(esw->dev);
@@ -1489,7 +1487,7 @@ int mlx5_esw_sf_max_hpf_functions(struct mlx5_core_dev *dev, u16 *max_sfs, u16 *
void *hca_caps;
int err;
- if (!mlx5_core_is_ecpf(dev)) {
+ if (!mlx5_core_is_ecpf(dev) || mlx5_core_is_management_pf(dev)) {
*max_sfs = 0;
return 0;
}
@@ -1642,7 +1640,7 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
if (err)
goto abort;
- err = esw_offloads_init_reps(esw);
+ err = esw_offloads_init(esw);
if (err)
goto reps_err;
@@ -1708,7 +1706,7 @@ void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw)
mlx5e_mod_hdr_tbl_destroy(&esw->offloads.mod_hdr);
mutex_destroy(&esw->offloads.encap_tbl_lock);
mutex_destroy(&esw->offloads.decap_tbl_lock);
- esw_offloads_cleanup_reps(esw);
+ esw_offloads_cleanup(esw);
mlx5_esw_vports_cleanup(esw);
kfree(esw);
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 92644fbb5081..19e9a77c4633 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -52,12 +52,14 @@ enum mlx5_mapped_obj_type {
MLX5_MAPPED_OBJ_CHAIN,
MLX5_MAPPED_OBJ_SAMPLE,
MLX5_MAPPED_OBJ_INT_PORT_METADATA,
+ MLX5_MAPPED_OBJ_ACT_MISS,
};
struct mlx5_mapped_obj {
enum mlx5_mapped_obj_type type;
union {
u32 chain;
+ u64 act_miss_cookie;
struct {
u32 group_id;
u32 rate;
@@ -222,7 +224,6 @@ struct mlx5_eswitch_fdb {
struct mlx5_flow_handle **send_to_vport_meta_rules;
struct mlx5_flow_handle *miss_rule_uni;
struct mlx5_flow_handle *miss_rule_multi;
- int vlan_push_pop_refcount;
struct mlx5_fs_chains *esw_chains_priv;
struct {
@@ -346,8 +347,8 @@ struct mlx5_eswitch {
void esw_offloads_disable(struct mlx5_eswitch *esw);
int esw_offloads_enable(struct mlx5_eswitch *esw);
-void esw_offloads_cleanup_reps(struct mlx5_eswitch *esw);
-int esw_offloads_init_reps(struct mlx5_eswitch *esw);
+void esw_offloads_cleanup(struct mlx5_eswitch *esw);
+int esw_offloads_init(struct mlx5_eswitch *esw);
struct mlx5_flow_handle *
mlx5_eswitch_add_send_to_vport_meta_rule(struct mlx5_eswitch *esw, u16 vport_num);
@@ -520,10 +521,6 @@ int mlx5_devlink_port_fn_migratable_set(struct devlink_port *port, bool enable,
struct netlink_ext_ack *extack);
void *mlx5_eswitch_get_uplink_priv(struct mlx5_eswitch *esw, u8 rep_type);
-int mlx5_eswitch_add_vlan_action(struct mlx5_eswitch *esw,
- struct mlx5_flow_attr *attr);
-int mlx5_eswitch_del_vlan_action(struct mlx5_eswitch *esw,
- struct mlx5_flow_attr *attr);
int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
u16 vport, u16 vlan, u8 qos, u8 set_flags);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index c981fa77f439..2a98375a0abf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -179,15 +179,14 @@ mlx5_eswitch_set_rule_source_port(struct mlx5_eswitch *esw,
static int
esw_setup_decap_indir(struct mlx5_eswitch *esw,
- struct mlx5_flow_attr *attr,
- struct mlx5_flow_spec *spec)
+ struct mlx5_flow_attr *attr)
{
struct mlx5_flow_table *ft;
if (!(attr->flags & MLX5_ATTR_FLAG_SRC_REWRITE))
return -EOPNOTSUPP;
- ft = mlx5_esw_indir_table_get(esw, attr, spec,
+ ft = mlx5_esw_indir_table_get(esw, attr,
mlx5_esw_indir_table_decap_vport(attr), true);
return PTR_ERR_OR_ZERO(ft);
}
@@ -197,7 +196,7 @@ esw_cleanup_decap_indir(struct mlx5_eswitch *esw,
struct mlx5_flow_attr *attr)
{
if (mlx5_esw_indir_table_decap_vport(attr))
- mlx5_esw_indir_table_put(esw, attr,
+ mlx5_esw_indir_table_put(esw,
mlx5_esw_indir_table_decap_vport(attr),
true);
}
@@ -235,7 +234,6 @@ esw_setup_ft_dest(struct mlx5_flow_destination *dest,
struct mlx5_flow_act *flow_act,
struct mlx5_eswitch *esw,
struct mlx5_flow_attr *attr,
- struct mlx5_flow_spec *spec,
int i)
{
flow_act->flags |= FLOW_ACT_IGNORE_FLOW_LEVEL;
@@ -243,7 +241,7 @@ esw_setup_ft_dest(struct mlx5_flow_destination *dest,
dest[i].ft = attr->dest_ft;
if (mlx5_esw_indir_table_decap_vport(attr))
- return esw_setup_decap_indir(esw, attr, spec);
+ return esw_setup_decap_indir(esw, attr);
return 0;
}
@@ -298,7 +296,7 @@ static void esw_put_dest_tables_loop(struct mlx5_eswitch *esw, struct mlx5_flow_
mlx5_chains_put_table(chains, 0, 1, 0);
else if (mlx5_esw_indir_table_needed(esw, attr, esw_attr->dests[i].rep->vport,
esw_attr->dests[i].mdev))
- mlx5_esw_indir_table_put(esw, attr, esw_attr->dests[i].rep->vport,
+ mlx5_esw_indir_table_put(esw, esw_attr->dests[i].rep->vport,
false);
}
@@ -384,7 +382,6 @@ esw_setup_indir_table(struct mlx5_flow_destination *dest,
struct mlx5_flow_act *flow_act,
struct mlx5_eswitch *esw,
struct mlx5_flow_attr *attr,
- struct mlx5_flow_spec *spec,
bool ignore_flow_lvl,
int *i)
{
@@ -399,7 +396,7 @@ esw_setup_indir_table(struct mlx5_flow_destination *dest,
flow_act->flags |= FLOW_ACT_IGNORE_FLOW_LEVEL;
dest[*i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
- dest[*i].ft = mlx5_esw_indir_table_get(esw, attr, spec,
+ dest[*i].ft = mlx5_esw_indir_table_get(esw, attr,
esw_attr->dests[j].rep->vport, false);
if (IS_ERR(dest[*i].ft)) {
err = PTR_ERR(dest[*i].ft);
@@ -408,7 +405,7 @@ esw_setup_indir_table(struct mlx5_flow_destination *dest,
}
if (mlx5_esw_indir_table_decap_vport(attr)) {
- err = esw_setup_decap_indir(esw, attr, spec);
+ err = esw_setup_decap_indir(esw, attr);
if (err)
goto err_indir_tbl_get;
}
@@ -446,7 +443,7 @@ esw_setup_vport_dest(struct mlx5_flow_destination *dest, struct mlx5_flow_act *f
MLX5_CAP_GEN(esw_attr->dests[attr_idx].mdev, vhca_id);
dest[dest_idx].vport.flags |= MLX5_FLOW_DEST_VPORT_VHCA_ID;
if (dest[dest_idx].vport.num == MLX5_VPORT_UPLINK &&
- mlx5_lag_mpesw_is_activated(esw->dev))
+ mlx5_lag_is_mpesw(esw->dev))
dest[dest_idx].type = MLX5_FLOW_DESTINATION_TYPE_UPLINK;
}
if (esw_attr->dests[attr_idx].flags & MLX5_ESW_DEST_ENCAP_VALID) {
@@ -511,14 +508,14 @@ esw_setup_dests(struct mlx5_flow_destination *dest,
err = esw_setup_mtu_dest(dest, &attr->meter_attr, *i);
(*i)++;
} else if (esw_is_indir_table(esw, attr)) {
- err = esw_setup_indir_table(dest, flow_act, esw, attr, spec, true, i);
+ err = esw_setup_indir_table(dest, flow_act, esw, attr, true, i);
} else if (esw_is_chain_src_port_rewrite(esw, esw_attr)) {
err = esw_setup_chain_src_port_rewrite(dest, flow_act, esw, chains, attr, i);
} else {
*i = esw_setup_vport_dests(dest, flow_act, esw, esw_attr, *i);
if (attr->dest_ft) {
- err = esw_setup_ft_dest(dest, flow_act, esw, attr, spec, *i);
+ err = esw_setup_ft_dest(dest, flow_act, esw, attr, *i);
(*i)++;
} else if (attr->dest_chain) {
err = esw_setup_chain_dest(dest, flow_act, chains, attr->dest_chain,
@@ -582,16 +579,16 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
if (esw->mode != MLX5_ESWITCH_OFFLOADS)
return ERR_PTR(-EOPNOTSUPP);
+ if (!mlx5_eswitch_vlan_actions_supported(esw->dev, 1))
+ return ERR_PTR(-EOPNOTSUPP);
+
dest = kcalloc(MLX5_MAX_FLOW_FWD_VPORTS + 1, sizeof(*dest), GFP_KERNEL);
if (!dest)
return ERR_PTR(-ENOMEM);
flow_act.action = attr->action;
- /* if per flow vlan pop/push is emulated, don't set that into the firmware */
- if (!mlx5_eswitch_vlan_actions_supported(esw->dev, 1))
- flow_act.action &= ~(MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH |
- MLX5_FLOW_CONTEXT_ACTION_VLAN_POP);
- else if (flow_act.action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) {
+
+ if (flow_act.action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) {
flow_act.vlan[0].ethtype = ntohs(esw_attr->vlan_proto[0]);
flow_act.vlan[0].vid = esw_attr->vlan_vid[0];
flow_act.vlan[0].prio = esw_attr->vlan_prio[0];
@@ -727,7 +724,7 @@ mlx5_eswitch_add_fwd_rule(struct mlx5_eswitch *esw,
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
for (i = 0; i < esw_attr->split_count; i++) {
if (esw_is_indir_table(esw, attr))
- err = esw_setup_indir_table(dest, &flow_act, esw, attr, spec, false, &i);
+ err = esw_setup_indir_table(dest, &flow_act, esw, attr, false, &i);
else if (esw_is_chain_src_port_rewrite(esw, esw_attr))
err = esw_setup_chain_src_port_rewrite(dest, &flow_act, esw, chains, attr,
&i);
@@ -832,204 +829,6 @@ mlx5_eswitch_del_fwd_rule(struct mlx5_eswitch *esw,
__mlx5_eswitch_del_rule(esw, rule, attr, true);
}
-static int esw_set_global_vlan_pop(struct mlx5_eswitch *esw, u8 val)
-{
- struct mlx5_eswitch_rep *rep;
- unsigned long i;
- int err = 0;
-
- esw_debug(esw->dev, "%s applying global %s policy\n", __func__, val ? "pop" : "none");
- mlx5_esw_for_each_host_func_vport(esw, i, rep, esw->esw_funcs.num_vfs) {
- if (atomic_read(&rep->rep_data[REP_ETH].state) != REP_LOADED)
- continue;
-
- err = __mlx5_eswitch_set_vport_vlan(esw, rep->vport, 0, 0, val);
- if (err)
- goto out;
- }
-
-out:
- return err;
-}
-
-static struct mlx5_eswitch_rep *
-esw_vlan_action_get_vport(struct mlx5_esw_flow_attr *attr, bool push, bool pop)
-{
- struct mlx5_eswitch_rep *in_rep, *out_rep, *vport = NULL;
-
- in_rep = attr->in_rep;
- out_rep = attr->dests[0].rep;
-
- if (push)
- vport = in_rep;
- else if (pop)
- vport = out_rep;
- else
- vport = in_rep;
-
- return vport;
-}
-
-static int esw_add_vlan_action_check(struct mlx5_esw_flow_attr *attr,
- bool push, bool pop, bool fwd)
-{
- struct mlx5_eswitch_rep *in_rep, *out_rep;
-
- if ((push || pop) && !fwd)
- goto out_notsupp;
-
- in_rep = attr->in_rep;
- out_rep = attr->dests[0].rep;
-
- if (push && in_rep->vport == MLX5_VPORT_UPLINK)
- goto out_notsupp;
-
- if (pop && out_rep->vport == MLX5_VPORT_UPLINK)
- goto out_notsupp;
-
- /* vport has vlan push configured, can't offload VF --> wire rules w.o it */
- if (!push && !pop && fwd)
- if (in_rep->vlan && out_rep->vport == MLX5_VPORT_UPLINK)
- goto out_notsupp;
-
- /* protects against (1) setting rules with different vlans to push and
- * (2) setting rules w.o vlans (attr->vlan = 0) && w. vlans to push (!= 0)
- */
- if (push && in_rep->vlan_refcount && (in_rep->vlan != attr->vlan_vid[0]))
- goto out_notsupp;
-
- return 0;
-
-out_notsupp:
- return -EOPNOTSUPP;
-}
-
-int mlx5_eswitch_add_vlan_action(struct mlx5_eswitch *esw,
- struct mlx5_flow_attr *attr)
-{
- struct offloads_fdb *offloads = &esw->fdb_table.offloads;
- struct mlx5_esw_flow_attr *esw_attr = attr->esw_attr;
- struct mlx5_eswitch_rep *vport = NULL;
- bool push, pop, fwd;
- int err = 0;
-
- /* nop if we're on the vlan push/pop non emulation mode */
- if (mlx5_eswitch_vlan_actions_supported(esw->dev, 1))
- return 0;
-
- push = !!(attr->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH);
- pop = !!(attr->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_POP);
- fwd = !!((attr->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) &&
- !attr->dest_chain);
-
- mutex_lock(&esw->state_lock);
-
- err = esw_add_vlan_action_check(esw_attr, push, pop, fwd);
- if (err)
- goto unlock;
-
- attr->flags &= ~MLX5_ATTR_FLAG_VLAN_HANDLED;
-
- vport = esw_vlan_action_get_vport(esw_attr, push, pop);
-
- if (!push && !pop && fwd) {
- /* tracks VF --> wire rules without vlan push action */
- if (esw_attr->dests[0].rep->vport == MLX5_VPORT_UPLINK) {
- vport->vlan_refcount++;
- attr->flags |= MLX5_ATTR_FLAG_VLAN_HANDLED;
- }
-
- goto unlock;
- }
-
- if (!push && !pop)
- goto unlock;
-
- if (!(offloads->vlan_push_pop_refcount)) {
- /* it's the 1st vlan rule, apply global vlan pop policy */
- err = esw_set_global_vlan_pop(esw, SET_VLAN_STRIP);
- if (err)
- goto out;
- }
- offloads->vlan_push_pop_refcount++;
-
- if (push) {
- if (vport->vlan_refcount)
- goto skip_set_push;
-
- err = __mlx5_eswitch_set_vport_vlan(esw, vport->vport, esw_attr->vlan_vid[0],
- 0, SET_VLAN_INSERT | SET_VLAN_STRIP);
- if (err)
- goto out;
- vport->vlan = esw_attr->vlan_vid[0];
-skip_set_push:
- vport->vlan_refcount++;
- }
-out:
- if (!err)
- attr->flags |= MLX5_ATTR_FLAG_VLAN_HANDLED;
-unlock:
- mutex_unlock(&esw->state_lock);
- return err;
-}
-
-int mlx5_eswitch_del_vlan_action(struct mlx5_eswitch *esw,
- struct mlx5_flow_attr *attr)
-{
- struct offloads_fdb *offloads = &esw->fdb_table.offloads;
- struct mlx5_esw_flow_attr *esw_attr = attr->esw_attr;
- struct mlx5_eswitch_rep *vport = NULL;
- bool push, pop, fwd;
- int err = 0;
-
- /* nop if we're on the vlan push/pop non emulation mode */
- if (mlx5_eswitch_vlan_actions_supported(esw->dev, 1))
- return 0;
-
- if (!(attr->flags & MLX5_ATTR_FLAG_VLAN_HANDLED))
- return 0;
-
- push = !!(attr->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH);
- pop = !!(attr->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_POP);
- fwd = !!(attr->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST);
-
- mutex_lock(&esw->state_lock);
-
- vport = esw_vlan_action_get_vport(esw_attr, push, pop);
-
- if (!push && !pop && fwd) {
- /* tracks VF --> wire rules without vlan push action */
- if (esw_attr->dests[0].rep->vport == MLX5_VPORT_UPLINK)
- vport->vlan_refcount--;
-
- goto out;
- }
-
- if (push) {
- vport->vlan_refcount--;
- if (vport->vlan_refcount)
- goto skip_unset_push;
-
- vport->vlan = 0;
- err = __mlx5_eswitch_set_vport_vlan(esw, vport->vport,
- 0, 0, SET_VLAN_STRIP);
- if (err)
- goto out;
- }
-
-skip_unset_push:
- offloads->vlan_push_pop_refcount--;
- if (offloads->vlan_push_pop_refcount)
- goto out;
-
- /* no more vlan rules, stop global vlan pop policy */
- err = esw_set_global_vlan_pop(esw, 0);
-
-out:
- mutex_unlock(&esw->state_lock);
- return err;
-}
-
struct mlx5_flow_handle *
mlx5_eswitch_add_send_to_vport_rule(struct mlx5_eswitch *on_esw,
struct mlx5_eswitch *from_esw,
@@ -2406,7 +2205,7 @@ static void mlx5_esw_offloads_rep_cleanup(struct mlx5_eswitch *esw,
kfree(rep);
}
-void esw_offloads_cleanup_reps(struct mlx5_eswitch *esw)
+static void esw_offloads_cleanup_reps(struct mlx5_eswitch *esw)
{
struct mlx5_eswitch_rep *rep;
unsigned long i;
@@ -2416,7 +2215,7 @@ void esw_offloads_cleanup_reps(struct mlx5_eswitch *esw)
xa_destroy(&esw->offloads.vport_reps);
}
-int esw_offloads_init_reps(struct mlx5_eswitch *esw)
+static int esw_offloads_init_reps(struct mlx5_eswitch *esw)
{
struct mlx5_vport *vport;
unsigned long i;
@@ -2436,6 +2235,94 @@ err:
return err;
}
+static int esw_port_metadata_set(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct mlx5_core_dev *dev = devlink_priv(devlink);
+ struct mlx5_eswitch *esw = dev->priv.eswitch;
+ int err = 0;
+
+ down_write(&esw->mode_lock);
+ if (mlx5_esw_is_fdb_created(esw)) {
+ err = -EBUSY;
+ goto done;
+ }
+ if (!mlx5_esw_vport_match_metadata_supported(esw)) {
+ err = -EOPNOTSUPP;
+ goto done;
+ }
+ if (ctx->val.vbool)
+ esw->flags |= MLX5_ESWITCH_VPORT_MATCH_METADATA;
+ else
+ esw->flags &= ~MLX5_ESWITCH_VPORT_MATCH_METADATA;
+done:
+ up_write(&esw->mode_lock);
+ return err;
+}
+
+static int esw_port_metadata_get(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct mlx5_core_dev *dev = devlink_priv(devlink);
+
+ ctx->val.vbool = mlx5_eswitch_vport_match_metadata_enabled(dev->priv.eswitch);
+ return 0;
+}
+
+static int esw_port_metadata_validate(struct devlink *devlink, u32 id,
+ union devlink_param_value val,
+ struct netlink_ext_ack *extack)
+{
+ struct mlx5_core_dev *dev = devlink_priv(devlink);
+ u8 esw_mode;
+
+ esw_mode = mlx5_eswitch_mode(dev);
+ if (esw_mode == MLX5_ESWITCH_OFFLOADS) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "E-Switch must either disabled or non switchdev mode");
+ return -EBUSY;
+ }
+ return 0;
+}
+
+static const struct devlink_param esw_devlink_params[] = {
+ DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_ESW_PORT_METADATA,
+ "esw_port_metadata", DEVLINK_PARAM_TYPE_BOOL,
+ BIT(DEVLINK_PARAM_CMODE_RUNTIME),
+ esw_port_metadata_get,
+ esw_port_metadata_set,
+ esw_port_metadata_validate),
+};
+
+int esw_offloads_init(struct mlx5_eswitch *esw)
+{
+ int err;
+
+ err = esw_offloads_init_reps(esw);
+ if (err)
+ return err;
+
+ err = devl_params_register(priv_to_devlink(esw->dev),
+ esw_devlink_params,
+ ARRAY_SIZE(esw_devlink_params));
+ if (err)
+ goto err_params;
+
+ return 0;
+
+err_params:
+ esw_offloads_cleanup_reps(esw);
+ return err;
+}
+
+void esw_offloads_cleanup(struct mlx5_eswitch *esw)
+{
+ devl_params_unregister(priv_to_devlink(esw->dev),
+ esw_devlink_params,
+ ARRAY_SIZE(esw_devlink_params));
+ esw_offloads_cleanup_reps(esw);
+}
+
static void __esw_offloads_unload_rep(struct mlx5_eswitch *esw,
struct mlx5_eswitch_rep *rep, u8 rep_type)
{
@@ -3575,9 +3462,9 @@ int mlx5_devlink_eswitch_mode_get(struct devlink *devlink, u16 *mode)
if (IS_ERR(esw))
return PTR_ERR(esw);
- down_write(&esw->mode_lock);
+ down_read(&esw->mode_lock);
err = esw_mode_to_devlink(esw->mode, mode);
- up_write(&esw->mode_lock);
+ up_read(&esw->mode_lock);
return err;
}
@@ -3675,9 +3562,9 @@ int mlx5_devlink_eswitch_inline_mode_get(struct devlink *devlink, u8 *mode)
if (IS_ERR(esw))
return PTR_ERR(esw);
- down_write(&esw->mode_lock);
+ down_read(&esw->mode_lock);
err = esw_inline_mode_to_devlink(esw->offloads.inline_mode, mode);
- up_write(&esw->mode_lock);
+ up_read(&esw->mode_lock);
return err;
}
@@ -3749,9 +3636,9 @@ int mlx5_devlink_eswitch_encap_mode_get(struct devlink *devlink,
if (IS_ERR(esw))
return PTR_ERR(esw);
- down_write(&esw->mode_lock);
+ down_read(&esw->mode_lock);
*encap = esw->offloads.encap;
- up_write(&esw->mode_lock);
+ up_read(&esw->mode_lock);
return 0;
}
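
Several call sites in this patch switch from devlink_param_driverinit_value_get()
to the devl_param_* variants, which expect the devlink instance lock to already
be held by the caller. A condensed sketch of the read pattern they all share;
the helper name and fallback argument are illustrative:

static u32 example_driverinit_u32(struct mlx5_core_dev *dev, u32 param_id,
				  u32 fallback)
{
	struct devlink *devlink = priv_to_devlink(dev);
	union devlink_param_value val;
	int err;

	/* devl_* variant: devlink instance lock must be held by the caller */
	err = devl_param_driverinit_value_get(devlink, param_id, &val);
	if (err) {
		mlx5_core_dbg(dev, "Failed to get param, using default, err = %d\n", err);
		return fallback;
	}
	return val.vu32;
}
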
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/events.c b/drivers/net/ethernet/mellanox/mlx5/core/events.c
index 9459e56ee90a..718cf09c28ce 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/events.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/events.c
@@ -424,6 +424,7 @@ int mlx5_blocking_notifier_register(struct mlx5_core_dev *dev, struct notifier_b
return blocking_notifier_chain_register(&events->sw_nh, nb);
}
+EXPORT_SYMBOL(mlx5_blocking_notifier_register);
int mlx5_blocking_notifier_unregister(struct mlx5_core_dev *dev, struct notifier_block *nb)
{
@@ -431,6 +432,7 @@ int mlx5_blocking_notifier_unregister(struct mlx5_core_dev *dev, struct notifier
return blocking_notifier_chain_unregister(&events->sw_nh, nb);
}
+EXPORT_SYMBOL(mlx5_blocking_notifier_unregister);
int mlx5_blocking_notifier_call_chain(struct mlx5_core_dev *dev, unsigned int event,
void *data)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
index 32d4c967469c..144e59480686 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
@@ -272,8 +272,6 @@ static int mlx5_cmd_create_flow_table(struct mlx5_flow_root_namespace *ns,
unsigned int size;
int err;
- if (ft_attr->max_fte != POOL_NEXT_SIZE)
- size = roundup_pow_of_two(ft_attr->max_fte);
size = mlx5_ft_pool_get_avail_sz(dev, ft->type, ft_attr->max_fte);
if (!size)
return -ENOSPC;
@@ -412,11 +410,6 @@ static int mlx5_cmd_create_flow_group(struct mlx5_flow_root_namespace *ns,
MLX5_CMD_OP_CREATE_FLOW_GROUP);
MLX5_SET(create_flow_group_in, in, table_type, ft->type);
MLX5_SET(create_flow_group_in, in, table_id, ft->id);
- if (ft->vport) {
- MLX5_SET(create_flow_group_in, in, vport_number, ft->vport);
- MLX5_SET(create_flow_group_in, in, other_vport, 1);
- }
-
MLX5_SET(create_flow_group_in, in, vport_number, ft->vport);
MLX5_SET(create_flow_group_in, in, other_vport,
!!(ft->flags & MLX5_FLOW_TABLE_OTHER_VPORT));
@@ -653,6 +646,12 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev,
id = dst->dest_attr.sampler_id;
ifc_type = MLX5_IFC_FLOW_DESTINATION_TYPE_FLOW_SAMPLER;
break;
+ case MLX5_FLOW_DESTINATION_TYPE_TABLE_TYPE:
+ MLX5_SET(dest_format_struct, in_dests,
+ destination_table_type, dst->dest_attr.ft->type);
+ id = dst->dest_attr.ft->id;
+ ifc_type = MLX5_IFC_FLOW_DESTINATION_TYPE_TABLE_TYPE;
+ break;
default:
id = dst->dest_attr.tir_num;
ifc_type = MLX5_IFC_FLOW_DESTINATION_TYPE_TIR;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index 5a85d8c1e797..731acbe22dc7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -34,12 +34,14 @@
#include <linux/mlx5/driver.h>
#include <linux/mlx5/vport.h>
#include <linux/mlx5/eswitch.h>
+#include <net/devlink.h>
#include "mlx5_core.h"
#include "fs_core.h"
#include "fs_cmd.h"
#include "fs_ft_pool.h"
#include "diag/fs_tracepoint.h"
+#include "devlink.h"
#define INIT_TREE_NODE_ARRAY_SIZE(...) (sizeof((struct init_tree_node[]){__VA_ARGS__}) /\
sizeof(struct init_tree_node))
@@ -111,8 +113,10 @@
#define ETHTOOL_PRIO_NUM_LEVELS 1
#define ETHTOOL_NUM_PRIOS 11
#define ETHTOOL_MIN_LEVEL (KERNEL_MIN_LEVEL + ETHTOOL_NUM_PRIOS)
-/* Promiscuous, Vlan, mac, ttc, inner ttc, {UDP/ANY/aRFS/accel/{esp, esp_err}}, IPsec policy */
-#define KERNEL_NIC_PRIO_NUM_LEVELS 8
+/* Promiscuous, Vlan, mac, ttc, inner ttc, {UDP/ANY/aRFS/accel/{esp, esp_err}}, IPsec policy,
+ * IPsec RoCE policy
+ */
+#define KERNEL_NIC_PRIO_NUM_LEVELS 9
#define KERNEL_NIC_NUM_PRIOS 1
/* One more level for tc */
#define KERNEL_MIN_LEVEL (KERNEL_NIC_PRIO_NUM_LEVELS + 1)
@@ -219,19 +223,30 @@ static struct init_tree_node egress_root_fs = {
};
enum {
+ RDMA_RX_IPSEC_PRIO,
RDMA_RX_COUNTERS_PRIO,
RDMA_RX_BYPASS_PRIO,
RDMA_RX_KERNEL_PRIO,
};
+#define RDMA_RX_IPSEC_NUM_PRIOS 1
+#define RDMA_RX_IPSEC_NUM_LEVELS 2
+#define RDMA_RX_IPSEC_MIN_LEVEL (RDMA_RX_IPSEC_NUM_LEVELS)
+
#define RDMA_RX_BYPASS_MIN_LEVEL MLX5_BY_PASS_NUM_REGULAR_PRIOS
#define RDMA_RX_KERNEL_MIN_LEVEL (RDMA_RX_BYPASS_MIN_LEVEL + 1)
#define RDMA_RX_COUNTERS_MIN_LEVEL (RDMA_RX_KERNEL_MIN_LEVEL + 2)
static struct init_tree_node rdma_rx_root_fs = {
.type = FS_TYPE_NAMESPACE,
- .ar_size = 3,
+ .ar_size = 4,
.children = (struct init_tree_node[]) {
+ [RDMA_RX_IPSEC_PRIO] =
+ ADD_PRIO(0, RDMA_RX_IPSEC_MIN_LEVEL, 0,
+ FS_CHAINING_CAPS,
+ ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF,
+ ADD_MULTIPLE_PRIO(RDMA_RX_IPSEC_NUM_PRIOS,
+ RDMA_RX_IPSEC_NUM_LEVELS))),
[RDMA_RX_COUNTERS_PRIO] =
ADD_PRIO(0, RDMA_RX_COUNTERS_MIN_LEVEL, 0,
FS_CHAINING_CAPS,
@@ -254,15 +269,20 @@ static struct init_tree_node rdma_rx_root_fs = {
enum {
RDMA_TX_COUNTERS_PRIO,
+ RDMA_TX_IPSEC_PRIO,
RDMA_TX_BYPASS_PRIO,
};
#define RDMA_TX_BYPASS_MIN_LEVEL MLX5_BY_PASS_NUM_PRIOS
#define RDMA_TX_COUNTERS_MIN_LEVEL (RDMA_TX_BYPASS_MIN_LEVEL + 1)
+#define RDMA_TX_IPSEC_NUM_PRIOS 1
+#define RDMA_TX_IPSEC_PRIO_NUM_LEVELS 1
+#define RDMA_TX_IPSEC_MIN_LEVEL (RDMA_TX_COUNTERS_MIN_LEVEL + RDMA_TX_IPSEC_NUM_PRIOS)
+
static struct init_tree_node rdma_tx_root_fs = {
.type = FS_TYPE_NAMESPACE,
- .ar_size = 2,
+ .ar_size = 3,
.children = (struct init_tree_node[]) {
[RDMA_TX_COUNTERS_PRIO] =
ADD_PRIO(0, RDMA_TX_COUNTERS_MIN_LEVEL, 0,
@@ -270,6 +290,13 @@ static struct init_tree_node rdma_tx_root_fs = {
ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF,
ADD_MULTIPLE_PRIO(MLX5_RDMA_TX_NUM_COUNTERS_PRIOS,
RDMA_TX_COUNTERS_PRIO_NUM_LEVELS))),
+ [RDMA_TX_IPSEC_PRIO] =
+ ADD_PRIO(0, RDMA_TX_IPSEC_MIN_LEVEL, 0,
+ FS_CHAINING_CAPS,
+ ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF,
+ ADD_MULTIPLE_PRIO(RDMA_TX_IPSEC_NUM_PRIOS,
+ RDMA_TX_IPSEC_PRIO_NUM_LEVELS))),
+
[RDMA_TX_BYPASS_PRIO] =
ADD_PRIO(0, RDMA_TX_BYPASS_MIN_LEVEL, 0,
FS_CHAINING_CAPS_RDMA_TX,
@@ -449,7 +476,8 @@ static bool is_fwd_dest_type(enum mlx5_flow_destination_type type)
type == MLX5_FLOW_DESTINATION_TYPE_VPORT ||
type == MLX5_FLOW_DESTINATION_TYPE_FLOW_SAMPLER ||
type == MLX5_FLOW_DESTINATION_TYPE_TIR ||
- type == MLX5_FLOW_DESTINATION_TYPE_RANGE;
+ type == MLX5_FLOW_DESTINATION_TYPE_RANGE ||
+ type == MLX5_FLOW_DESTINATION_TYPE_TABLE_TYPE;
}
static bool check_valid_spec(const struct mlx5_flow_spec *spec)
@@ -1774,7 +1802,6 @@ static int build_match_list(struct match_list *match_head,
{
struct rhlist_head *tmp, *list;
struct mlx5_flow_group *g;
- int err = 0;
rcu_read_lock();
INIT_LIST_HEAD(&match_head->list);
@@ -1800,7 +1827,7 @@ static int build_match_list(struct match_list *match_head,
list_add_tail(&curr_match->list, &match_head->list);
}
rcu_read_unlock();
- return err;
+ return 0;
}
static u64 matched_fgs_get_version(struct list_head *match_head)
@@ -2367,6 +2394,14 @@ struct mlx5_flow_namespace *mlx5_get_flow_namespace(struct mlx5_core_dev *dev,
root_ns = steering->rdma_tx_root_ns;
prio = RDMA_TX_COUNTERS_PRIO;
break;
+ case MLX5_FLOW_NAMESPACE_RDMA_RX_IPSEC:
+ root_ns = steering->rdma_rx_root_ns;
+ prio = RDMA_RX_IPSEC_PRIO;
+ break;
+ case MLX5_FLOW_NAMESPACE_RDMA_TX_IPSEC:
+ root_ns = steering->rdma_tx_root_ns;
+ prio = RDMA_TX_IPSEC_PRIO;
+ break;
default: /* Must be NIC RX */
WARN_ON(!is_nic_rx_ns(type));
root_ns = steering->root_ns;
@@ -3143,6 +3178,78 @@ cleanup:
return err;
}
+static int mlx5_fs_mode_validate(struct devlink *devlink, u32 id,
+ union devlink_param_value val,
+ struct netlink_ext_ack *extack)
+{
+ struct mlx5_core_dev *dev = devlink_priv(devlink);
+ char *value = val.vstr;
+ int err = 0;
+
+ if (!strcmp(value, "dmfs")) {
+ return 0;
+ } else if (!strcmp(value, "smfs")) {
+ u8 eswitch_mode;
+ bool smfs_cap;
+
+ eswitch_mode = mlx5_eswitch_mode(dev);
+ smfs_cap = mlx5_fs_dr_is_supported(dev);
+
+ if (!smfs_cap) {
+ err = -EOPNOTSUPP;
+ NL_SET_ERR_MSG_MOD(extack,
+ "Software managed steering is not supported by current device");
+ } else if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Software managed steering is not supported when eswitch offloads enabled.");
+ err = -EOPNOTSUPP;
+ }
+ } else {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Bad parameter: supported values are [\"dmfs\", \"smfs\"]");
+ err = -EINVAL;
+ }
+
+ return err;
+}
+
+static int mlx5_fs_mode_set(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct mlx5_core_dev *dev = devlink_priv(devlink);
+ enum mlx5_flow_steering_mode mode;
+
+ if (!strcmp(ctx->val.vstr, "smfs"))
+ mode = MLX5_FLOW_STEERING_MODE_SMFS;
+ else
+ mode = MLX5_FLOW_STEERING_MODE_DMFS;
+ dev->priv.steering->mode = mode;
+
+ return 0;
+}
+
+static int mlx5_fs_mode_get(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct mlx5_core_dev *dev = devlink_priv(devlink);
+
+ if (dev->priv.steering->mode == MLX5_FLOW_STEERING_MODE_SMFS)
+ strcpy(ctx->val.vstr, "smfs");
+ else
+ strcpy(ctx->val.vstr, "dmfs");
+ return 0;
+}
+
+static const struct devlink_param mlx5_fs_params[] = {
+ DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_FLOW_STEERING_MODE,
+ "flow_steering_mode", DEVLINK_PARAM_TYPE_STRING,
+ BIT(DEVLINK_PARAM_CMODE_RUNTIME),
+ mlx5_fs_mode_get, mlx5_fs_mode_set,
+ mlx5_fs_mode_validate),
+};
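The string parameter above gates the switch between device-managed ("dmfs") and software-managed ("smfs") steering, and its validate callback encodes both preconditions. A compact user-space restatement of that decision table (names hypothetical, illustrative only):

#include <errno.h>
#include <stdio.h>
#include <string.h>

static int fs_mode_validate(const char *value, int esw_offloads, int smfs_cap)
{
	if (!strcmp(value, "dmfs"))
		return 0;
	if (!strcmp(value, "smfs")) {
		if (!smfs_cap)
			return -EOPNOTSUPP;	/* device lacks SW steering */
		if (esw_offloads)
			return -EOPNOTSUPP;	/* switchdev excludes smfs */
		return 0;
	}
	return -EINVAL;				/* only "dmfs"/"smfs" accepted */
}

int main(void)
{
	printf("%d %d %d\n",
	       fs_mode_validate("smfs", 0, 1),	/* 0: allowed */
	       fs_mode_validate("smfs", 1, 1),	/* -EOPNOTSUPP */
	       fs_mode_validate("fdb", 0, 1));	/* -EINVAL */
	return 0;
}

The runtime knob is then reachable as, e.g., devlink dev param set pci/0000:06:00.0 name flow_steering_mode value smfs cmode runtime (device address hypothetical).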
+
void mlx5_fs_core_cleanup(struct mlx5_core_dev *dev)
{
struct mlx5_flow_steering *steering = dev->priv.steering;
@@ -3155,12 +3262,20 @@ void mlx5_fs_core_cleanup(struct mlx5_core_dev *dev)
cleanup_root_ns(steering->rdma_rx_root_ns);
cleanup_root_ns(steering->rdma_tx_root_ns);
cleanup_root_ns(steering->egress_root_ns);
+
+ devl_params_unregister(priv_to_devlink(dev), mlx5_fs_params,
+ ARRAY_SIZE(mlx5_fs_params));
}
int mlx5_fs_core_init(struct mlx5_core_dev *dev)
{
struct mlx5_flow_steering *steering = dev->priv.steering;
- int err = 0;
+ int err;
+
+ err = devl_params_register(priv_to_devlink(dev), mlx5_fs_params,
+ ARRAY_SIZE(mlx5_fs_params));
+ if (err)
+ return err;
if ((((MLX5_CAP_GEN(dev, port_type) == MLX5_CAP_PORT_TYPE_ETH) &&
(MLX5_CAP_GEN(dev, nic_flow_table))) ||
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
index b406e0367af6..17fe30a4c06c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
@@ -504,6 +504,16 @@ void mlx5_fc_query_cached(struct mlx5_fc *counter,
counter->lastpackets = c.packets;
}
+void mlx5_fc_query_cached_raw(struct mlx5_fc *counter,
+ u64 *bytes, u64 *packets, u64 *lastuse)
+{
+ struct mlx5_fc_cache c = counter->cache;
+
+ *bytes = c.bytes;
+ *packets = c.packets;
+ *lastuse = c.lastuse;
+}
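Unlike mlx5_fc_query_cached(), which returns deltas and advances the counter's lastbytes/lastpackets bookmarks, the new _raw variant hands back the absolute cached values without touching any state, so multiple readers can sample the same counter. A user-space sketch of the two access styles (struct layout simplified and hypothetical):

#include <stdio.h>

struct fc_cache { unsigned long long bytes, packets, lastuse; };
struct fc { struct fc_cache cache; unsigned long long lastbytes, lastpackets; };

static void query_cached(struct fc *c, unsigned long long *db,
			 unsigned long long *dp)
{
	*db = c->cache.bytes - c->lastbytes;	/* delta since last call */
	*dp = c->cache.packets - c->lastpackets;
	c->lastbytes = c->cache.bytes;		/* consume the delta */
	c->lastpackets = c->cache.packets;
}

static void query_cached_raw(const struct fc *c, unsigned long long *b,
			     unsigned long long *p)
{
	*b = c->cache.bytes;			/* absolute, state untouched */
	*p = c->cache.packets;
}

int main(void)
{
	struct fc c = { .cache = { .bytes = 1500, .packets = 10 } };
	unsigned long long b, p;

	query_cached(&c, &b, &p);
	printf("delta: %llu/%llu\n", b, p);	/* 1500/10; 0/0 on a repeat */
	query_cached_raw(&c, &b, &p);
	printf("raw:   %llu/%llu\n", b, p);	/* still 1500/10 */
	return 0;
}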
+
void mlx5_fc_queue_stats_work(struct mlx5_core_dev *dev,
struct delayed_work *dwork,
unsigned long delay)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw.c b/drivers/net/ethernet/mellanox/mlx5/core/fw.c
index f34e758a2f1f..7bb7be01225a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fw.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fw.c
@@ -267,6 +267,12 @@ int mlx5_query_hca_caps(struct mlx5_core_dev *dev)
return err;
}
+ if (MLX5_CAP_GEN(dev, crypto)) {
+ err = mlx5_core_get_caps(dev, MLX5_CAP_CRYPTO);
+ if (err)
+ return err;
+ }
+
if (MLX5_CAP_GEN(dev, shampo)) {
err = mlx5_core_get_caps(dev, MLX5_CAP_DEV_SHAMPO);
if (err)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
index 1e46f9afa40e..4c2dad9d7cfb 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
@@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
/* Copyright (c) 2020, Mellanox Technologies inc. All rights reserved. */
+#include <devlink.h>
+
#include "fw_reset.h"
#include "diag/fw_tracer.h"
#include "lib/tout.h"
@@ -28,21 +30,32 @@ struct mlx5_fw_reset {
int ret;
};
-void mlx5_fw_reset_enable_remote_dev_reset_set(struct mlx5_core_dev *dev, bool enable)
+static int mlx5_fw_reset_enable_remote_dev_reset_set(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
{
- struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+ struct mlx5_core_dev *dev = devlink_priv(devlink);
+ struct mlx5_fw_reset *fw_reset;
- if (enable)
+ fw_reset = dev->priv.fw_reset;
+
+ if (ctx->val.vbool)
clear_bit(MLX5_FW_RESET_FLAGS_NACK_RESET_REQUEST, &fw_reset->reset_flags);
else
set_bit(MLX5_FW_RESET_FLAGS_NACK_RESET_REQUEST, &fw_reset->reset_flags);
+ return 0;
}
-bool mlx5_fw_reset_enable_remote_dev_reset_get(struct mlx5_core_dev *dev)
+static int mlx5_fw_reset_enable_remote_dev_reset_get(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
{
- struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+ struct mlx5_core_dev *dev = devlink_priv(devlink);
+ struct mlx5_fw_reset *fw_reset;
+
+ fw_reset = dev->priv.fw_reset;
- return !test_bit(MLX5_FW_RESET_FLAGS_NACK_RESET_REQUEST, &fw_reset->reset_flags);
+ ctx->val.vbool = !test_bit(MLX5_FW_RESET_FLAGS_NACK_RESET_REQUEST,
+ &fw_reset->reset_flags);
+ return 0;
}
static int mlx5_reg_mfrl_set(struct mlx5_core_dev *dev, u8 reset_level,
@@ -150,11 +163,11 @@ static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev)
if (test_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags)) {
complete(&fw_reset->done);
} else {
- mlx5_unload_one(dev);
+ mlx5_unload_one(dev, false);
if (mlx5_health_wait_pci_up(dev))
mlx5_core_err(dev, "reset reload flow aborted, PCI reads still not working\n");
else
- mlx5_load_one(dev, false);
+ mlx5_load_one(dev);
devlink_remote_reload_actions_performed(priv_to_devlink(dev), 0,
BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) |
BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE));
@@ -358,6 +371,7 @@ static int mlx5_pci_link_toggle(struct mlx5_core_dev *dev)
mlx5_core_err(dev, "PCI link not ready (0x%04x) after %llu ms\n",
reg16, mlx5_tout_ms(dev, PCI_TOGGLE));
err = -ETIMEDOUT;
+ goto restore;
}
do {
@@ -484,7 +498,7 @@ int mlx5_fw_reset_wait_reset_done(struct mlx5_core_dev *dev)
}
err = fw_reset->ret;
if (test_and_clear_bit(MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED, &fw_reset->reset_flags)) {
- mlx5_unload_one_devl_locked(dev);
+ mlx5_unload_one_devl_locked(dev, false);
mlx5_load_one_devl_locked(dev, false);
}
out:
@@ -517,9 +531,16 @@ void mlx5_drain_fw_reset(struct mlx5_core_dev *dev)
cancel_work_sync(&fw_reset->reset_abort_work);
}
+static const struct devlink_param mlx5_fw_reset_devlink_params[] = {
+ DEVLINK_PARAM_GENERIC(ENABLE_REMOTE_DEV_RESET, BIT(DEVLINK_PARAM_CMODE_RUNTIME),
+ mlx5_fw_reset_enable_remote_dev_reset_get,
+ mlx5_fw_reset_enable_remote_dev_reset_set, NULL),
+};
+
int mlx5_fw_reset_init(struct mlx5_core_dev *dev)
{
struct mlx5_fw_reset *fw_reset = kzalloc(sizeof(*fw_reset), GFP_KERNEL);
+ int err;
if (!fw_reset)
return -ENOMEM;
@@ -532,6 +553,15 @@ int mlx5_fw_reset_init(struct mlx5_core_dev *dev)
fw_reset->dev = dev;
dev->priv.fw_reset = fw_reset;
+ err = devl_params_register(priv_to_devlink(dev),
+ mlx5_fw_reset_devlink_params,
+ ARRAY_SIZE(mlx5_fw_reset_devlink_params));
+ if (err) {
+ destroy_workqueue(fw_reset->wq);
+ kfree(fw_reset);
+ return err;
+ }
+
INIT_WORK(&fw_reset->fw_live_patch_work, mlx5_fw_live_patch_event);
INIT_WORK(&fw_reset->reset_request_work, mlx5_sync_reset_request_event);
INIT_WORK(&fw_reset->reset_reload_work, mlx5_sync_reset_reload_work);
@@ -546,6 +576,9 @@ void mlx5_fw_reset_cleanup(struct mlx5_core_dev *dev)
{
struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+ devl_params_unregister(priv_to_devlink(dev),
+ mlx5_fw_reset_devlink_params,
+ ARRAY_SIZE(mlx5_fw_reset_devlink_params));
destroy_workqueue(fw_reset->wq);
kfree(dev->priv.fw_reset);
}
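Note the shape of the init path above: once devl_params_register() can fail after the workqueue exists, the error leg must unwind everything allocated so far, in reverse order, before propagating the error. A generic, self-contained sketch of that constructor-with-rollback idiom (all names hypothetical):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct ctx { void *wq; int params_registered; };

static int params_register(struct ctx *c) { c->params_registered = 1; return 0; }

static int ctx_init(struct ctx *c, int fail_params)
{
	int err;

	c->wq = malloc(64);			/* stands in for the workqueue */
	if (!c->wq)
		return -ENOMEM;

	err = fail_params ? -EOPNOTSUPP : params_register(c);
	if (err) {
		free(c->wq);			/* undo step 1 before returning */
		c->wq = NULL;
		return err;
	}
	return 0;
}

int main(void)
{
	struct ctx ok = { 0 }, bad = { 0 };

	printf("init ok:   %d\n", ctx_init(&ok, 0));
	printf("init fail: %d (wq=%p)\n", ctx_init(&bad, 1), bad.wq);
	free(ok.wq);
	return 0;
}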
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h
index dc141c7e641a..c57465595f7c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h
@@ -6,8 +6,6 @@
#include "mlx5_core.h"
-void mlx5_fw_reset_enable_remote_dev_reset_set(struct mlx5_core_dev *dev, bool enable);
-bool mlx5_fw_reset_enable_remote_dev_reset_get(struct mlx5_core_dev *dev);
int mlx5_fw_reset_query(struct mlx5_core_dev *dev, u8 *reset_level, u8 *reset_type);
int mlx5_fw_reset_set_reset_sync(struct mlx5_core_dev *dev, u8 reset_type_sel,
struct netlink_ext_ack *extack);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
index 879555ba847d..f9438d4e43ca 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
@@ -62,7 +62,7 @@ enum {
};
enum {
- MLX5_DROP_NEW_HEALTH_WORK,
+ MLX5_DROP_HEALTH_WORK,
};
enum {
@@ -675,7 +675,7 @@ static void mlx5_fw_fatal_reporter_err_work(struct work_struct *work)
devlink = priv_to_devlink(dev);
mutex_lock(&dev->intf_state_mutex);
- if (test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags)) {
+ if (test_bit(MLX5_DROP_HEALTH_WORK, &health->flags)) {
mlx5_core_err(dev, "health works are not permitted at this stage\n");
mutex_unlock(&dev->intf_state_mutex);
return;
@@ -699,7 +699,7 @@ static void mlx5_fw_fatal_reporter_err_work(struct work_struct *work)
* requests from the kernel.
*/
mlx5_core_err(dev, "Driver is in error state. Unloading\n");
- mlx5_unload_one(dev);
+ mlx5_unload_one(dev, false);
}
}
@@ -771,14 +771,8 @@ static unsigned long get_next_poll_jiffies(struct mlx5_core_dev *dev)
void mlx5_trigger_health_work(struct mlx5_core_dev *dev)
{
struct mlx5_core_health *health = &dev->priv.health;
- unsigned long flags;
- spin_lock_irqsave(&health->wq_lock, flags);
- if (!test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags))
- queue_work(health->wq, &health->fatal_report_work);
- else
- mlx5_core_err(dev, "new health works are not permitted at this stage\n");
- spin_unlock_irqrestore(&health->wq_lock, flags);
+ queue_work(health->wq, &health->fatal_report_work);
}
#define MLX5_MSEC_PER_HOUR (MSEC_PER_SEC * 60 * 60)
@@ -858,7 +852,7 @@ void mlx5_start_health_poll(struct mlx5_core_dev *dev)
timer_setup(&health->timer, poll_health, 0);
health->fatal_error = MLX5_SENSOR_NO_ERR;
- clear_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
+ clear_bit(MLX5_DROP_HEALTH_WORK, &health->flags);
health->health = &dev->iseg->health;
health->health_counter = &dev->iseg->health_counter;
@@ -869,13 +863,9 @@ void mlx5_start_health_poll(struct mlx5_core_dev *dev)
void mlx5_stop_health_poll(struct mlx5_core_dev *dev, bool disable_health)
{
struct mlx5_core_health *health = &dev->priv.health;
- unsigned long flags;
- if (disable_health) {
- spin_lock_irqsave(&health->wq_lock, flags);
- set_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
- spin_unlock_irqrestore(&health->wq_lock, flags);
- }
+ if (disable_health)
+ set_bit(MLX5_DROP_HEALTH_WORK, &health->flags);
del_timer_sync(&health->timer);
}
@@ -891,11 +881,8 @@ void mlx5_start_health_fw_log_up(struct mlx5_core_dev *dev)
void mlx5_drain_health_wq(struct mlx5_core_dev *dev)
{
struct mlx5_core_health *health = &dev->priv.health;
- unsigned long flags;
- spin_lock_irqsave(&health->wq_lock, flags);
- set_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
- spin_unlock_irqrestore(&health->wq_lock, flags);
+ set_bit(MLX5_DROP_HEALTH_WORK, &health->flags);
cancel_delayed_work_sync(&health->update_fw_log_ts_work);
cancel_work_sync(&health->report_work);
cancel_work_sync(&health->fatal_report_work);
@@ -928,7 +915,6 @@ int mlx5_health_init(struct mlx5_core_dev *dev)
kfree(name);
if (!health->wq)
goto out_err;
- spin_lock_init(&health->wq_lock);
INIT_WORK(&health->fatal_report_work, mlx5_fw_fatal_reporter_err_work);
INIT_WORK(&health->report_work, mlx5_fw_reporter_err_work);
INIT_DELAYED_WORK(&health->update_fw_log_ts_work, mlx5_health_log_ts_update);
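The health.c hunks can drop wq_lock because set_bit()/test_bit() on the flags word are already atomic, and the remaining drop check happens inside the work handler under intf_state_mutex, so the spinlock added no ordering that matters here. A user-space analogue of the lock-free flag using C11 atomics (illustrative only):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define DROP_HEALTH_WORK 0

static atomic_ulong health_flags;

static void set_bit_atomic(int nr) { atomic_fetch_or(&health_flags, 1UL << nr); }
static bool test_bit_atomic(int nr) { return atomic_load(&health_flags) & (1UL << nr); }

static void fatal_report_work(void)
{
	if (test_bit_atomic(DROP_HEALTH_WORK)) {	/* checked in the handler */
		puts("health works are not permitted at this stage");
		return;
	}
	puts("handling fatal report");
}

int main(void)
{
	fatal_report_work();		/* runs */
	set_bit_atomic(DROP_HEALTH_WORK);
	fatal_report_work();		/* dropped */
	return 0;
}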
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
index e09518f887a0..779d92b762d3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
@@ -172,6 +172,7 @@ enum mlx5_ptys_rate {
MLX5_PTYS_RATE_EDR = 1 << 5,
MLX5_PTYS_RATE_HDR = 1 << 6,
MLX5_PTYS_RATE_NDR = 1 << 7,
+ MLX5_PTYS_RATE_XDR = 1 << 8,
};
static inline int mlx5_ptys_rate_enum_to_int(enum mlx5_ptys_rate rate)
@@ -185,6 +186,7 @@ static inline int mlx5_ptys_rate_enum_to_int(enum mlx5_ptys_rate rate)
case MLX5_PTYS_RATE_EDR: return 25000;
case MLX5_PTYS_RATE_HDR: return 50000;
case MLX5_PTYS_RATE_NDR: return 100000;
+ case MLX5_PTYS_RATE_XDR: return 200000;
default: return -1;
}
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
index 911cf4d23964..c2a4f86bc890 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
@@ -412,7 +412,8 @@ static int mlx5i_init_rx(struct mlx5e_priv *priv)
int err;
priv->fs = mlx5e_fs_init(priv->profile, mdev,
- !test_bit(MLX5E_STATE_DESTROYING, &priv->state));
+ !test_bit(MLX5E_STATE_DESTROYING, &priv->state),
+ priv->dfs_root);
if (!priv->fs) {
netdev_err(priv->netdev, "FS allocation failed\n");
return -ENOMEM;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/debugfs.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/debugfs.c
index b8feaf0f5c4c..f4b777d4e108 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/debugfs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/debugfs.c
@@ -22,7 +22,7 @@ static int type_show(struct seq_file *file, void *priv)
struct mlx5_lag *ldev;
char *mode = NULL;
- ldev = dev->priv.lag;
+ ldev = mlx5_lag_dev(dev);
mutex_lock(&ldev->lock);
if (__mlx5_lag_is_active(ldev))
mode = get_str_mode_type(ldev);
@@ -41,7 +41,7 @@ static int port_sel_mode_show(struct seq_file *file, void *priv)
int ret = 0;
char *mode;
- ldev = dev->priv.lag;
+ ldev = mlx5_lag_dev(dev);
mutex_lock(&ldev->lock);
if (__mlx5_lag_is_active(ldev))
mode = mlx5_get_str_port_sel_mode(ldev->mode, ldev->mode_flags);
@@ -61,7 +61,7 @@ static int state_show(struct seq_file *file, void *priv)
struct mlx5_lag *ldev;
bool active;
- ldev = dev->priv.lag;
+ ldev = mlx5_lag_dev(dev);
mutex_lock(&ldev->lock);
active = __mlx5_lag_is_active(ldev);
mutex_unlock(&ldev->lock);
@@ -77,7 +77,7 @@ static int flags_show(struct seq_file *file, void *priv)
bool shared_fdb;
bool lag_active;
- ldev = dev->priv.lag;
+ ldev = mlx5_lag_dev(dev);
mutex_lock(&ldev->lock);
lag_active = __mlx5_lag_is_active(ldev);
if (!lag_active)
@@ -108,7 +108,7 @@ static int mapping_show(struct seq_file *file, void *priv)
int num_ports;
int i;
- ldev = dev->priv.lag;
+ ldev = mlx5_lag_dev(dev);
mutex_lock(&ldev->lock);
lag_active = __mlx5_lag_is_active(ldev);
if (lag_active) {
@@ -142,7 +142,7 @@ static int members_show(struct seq_file *file, void *priv)
struct mlx5_lag *ldev;
int i;
- ldev = dev->priv.lag;
+ ldev = mlx5_lag_dev(dev);
mutex_lock(&ldev->lock);
for (i = 0; i < ldev->ports; i++) {
if (!ldev->pf[i].dev)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
index ad32b80e8501..5d331b940f4d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
@@ -230,7 +230,6 @@ static void mlx5_ldev_free(struct kref *ref)
mlx5_lag_mp_cleanup(ldev);
cancel_delayed_work_sync(&ldev->bond_work);
destroy_workqueue(ldev->wq);
- mlx5_lag_mpesw_cleanup(ldev);
mutex_destroy(&ldev->lock);
kfree(ldev);
}
@@ -276,7 +275,6 @@ static struct mlx5_lag *mlx5_lag_dev_alloc(struct mlx5_core_dev *dev)
mlx5_core_err(dev, "Failed to init multipath lag err=%d\n",
err);
- mlx5_lag_mpesw_init(ldev);
ldev->ports = MLX5_CAP_GEN(dev, num_lag_ports);
ldev->buckets = 1;
@@ -646,7 +644,7 @@ int mlx5_activate_lag(struct mlx5_lag *ldev,
return 0;
}
-static int mlx5_deactivate_lag(struct mlx5_lag *ldev)
+int mlx5_deactivate_lag(struct mlx5_lag *ldev)
{
struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev;
struct mlx5_core_dev *dev1 = ldev->pf[MLX5_LAG_P2].dev;
@@ -688,7 +686,7 @@ static int mlx5_deactivate_lag(struct mlx5_lag *ldev)
}
#define MLX5_LAG_OFFLOADS_SUPPORTED_PORTS 2
-static bool mlx5_lag_check_prereq(struct mlx5_lag *ldev)
+bool mlx5_lag_check_prereq(struct mlx5_lag *ldev)
{
#ifdef CONFIG_MLX5_ESWITCH
struct mlx5_core_dev *dev;
@@ -723,7 +721,7 @@ static bool mlx5_lag_check_prereq(struct mlx5_lag *ldev)
return true;
}
-static void mlx5_lag_add_devices(struct mlx5_lag *ldev)
+void mlx5_lag_add_devices(struct mlx5_lag *ldev)
{
int i;
@@ -740,7 +738,7 @@ static void mlx5_lag_add_devices(struct mlx5_lag *ldev)
}
}
-static void mlx5_lag_remove_devices(struct mlx5_lag *ldev)
+void mlx5_lag_remove_devices(struct mlx5_lag *ldev)
{
int i;
@@ -1187,7 +1185,7 @@ static int __mlx5_lag_dev_add_mdev(struct mlx5_core_dev *dev)
tmp_dev = mlx5_get_next_phys_dev_lag(dev);
if (tmp_dev)
- ldev = tmp_dev->priv.lag;
+ ldev = mlx5_lag_dev(tmp_dev);
if (!ldev) {
ldev = mlx5_lag_dev_alloc(dev);
@@ -1386,8 +1384,7 @@ bool mlx5_lag_is_shared_fdb(struct mlx5_core_dev *dev)
spin_lock_irqsave(&lag_lock, flags);
ldev = mlx5_lag_dev(dev);
- res = ldev && __mlx5_lag_is_sriov(ldev) &&
- test_bit(MLX5_LAG_MODE_FLAG_SHARED_FDB, &ldev->mode_flags);
+ res = ldev && test_bit(MLX5_LAG_MODE_FLAG_SHARED_FDB, &ldev->mode_flags);
spin_unlock_irqrestore(&lag_lock, flags);
return res;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
index f30ac2de639f..bc1f1dd3e283 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
@@ -50,19 +50,6 @@ struct lag_tracker {
enum netdev_lag_hash hash_type;
};
-enum mpesw_op {
- MLX5_MPESW_OP_ENABLE,
- MLX5_MPESW_OP_DISABLE,
-};
-
-struct mlx5_mpesw_work_st {
- struct work_struct work;
- struct mlx5_lag *lag;
- enum mpesw_op op;
- struct completion comp;
- int result;
-};
-
/* LAG data of a ConnectX card.
* It serves both its phys functions.
*/
@@ -115,6 +102,7 @@ mlx5_lag_is_ready(struct mlx5_lag *ldev)
return test_bit(MLX5_LAG_FLAG_NDEVS_READY, &ldev->state_flags);
}
+bool mlx5_lag_check_prereq(struct mlx5_lag *ldev);
void mlx5_modify_lag(struct mlx5_lag *ldev,
struct lag_tracker *tracker);
int mlx5_activate_lag(struct mlx5_lag *ldev,
@@ -124,8 +112,6 @@ int mlx5_activate_lag(struct mlx5_lag *ldev,
int mlx5_lag_dev_get_netdev_idx(struct mlx5_lag *ldev,
struct net_device *ndev);
bool mlx5_shared_fdb_supported(struct mlx5_lag *ldev);
-void mlx5_lag_del_mpesw_rule(struct mlx5_core_dev *dev);
-int mlx5_lag_add_mpesw_rule(struct mlx5_core_dev *dev);
char *mlx5_get_str_port_sel_mode(enum mlx5_lag_mode mode, unsigned long flags);
void mlx5_infer_tx_enabled(struct lag_tracker *tracker, u8 num_ports,
@@ -134,5 +120,8 @@ void mlx5_infer_tx_enabled(struct lag_tracker *tracker, u8 num_ports,
void mlx5_ldev_add_debugfs(struct mlx5_core_dev *dev);
void mlx5_ldev_remove_debugfs(struct dentry *dbg);
void mlx5_disable_lag(struct mlx5_lag *ldev);
+void mlx5_lag_remove_devices(struct mlx5_lag *ldev);
+int mlx5_deactivate_lag(struct mlx5_lag *ldev);
+void mlx5_lag_add_devices(struct mlx5_lag *ldev);
#endif /* __MLX5_LAG_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c
index d9fcb9ed726f..d85a8dfc153d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c
@@ -28,13 +28,9 @@ static bool mlx5_lag_multipath_check_prereq(struct mlx5_lag *ldev)
bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev)
{
- struct mlx5_lag *ldev;
- bool res;
-
- ldev = mlx5_lag_dev(dev);
- res = ldev && __mlx5_lag_is_multipath(ldev);
+ struct mlx5_lag *ldev = mlx5_lag_dev(dev);
- return res;
+ return ldev && __mlx5_lag_is_multipath(ldev);
}
/**
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c
index c17e8f1ec914..0c0ef600f643 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c
@@ -5,39 +5,121 @@
#include <net/nexthop.h>
#include "lag/lag.h"
#include "eswitch.h"
+#include "esw/acl/ofld.h"
#include "lib/mlx5.h"
-static int add_mpesw_rule(struct mlx5_lag *ldev)
+static void mlx5_mpesw_metadata_cleanup(struct mlx5_lag *ldev)
{
- struct mlx5_core_dev *dev = ldev->pf[MLX5_LAG_P1].dev;
- int err;
+ struct mlx5_core_dev *dev;
+ struct mlx5_eswitch *esw;
+ u32 pf_metadata;
+ int i;
+
+ for (i = 0; i < ldev->ports; i++) {
+ dev = ldev->pf[i].dev;
+ esw = dev->priv.eswitch;
+ pf_metadata = ldev->lag_mpesw.pf_metadata[i];
+ if (!pf_metadata)
+ continue;
+ mlx5_esw_acl_ingress_vport_metadata_update(esw, MLX5_VPORT_UPLINK, 0);
+ mlx5_notifier_call_chain(dev->priv.events, MLX5_DEV_EVENT_MULTIPORT_ESW,
+ (void *)0);
+ mlx5_esw_match_metadata_free(esw, pf_metadata);
+ ldev->lag_mpesw.pf_metadata[i] = 0;
+ }
+}
- if (atomic_add_return(1, &ldev->lag_mpesw.mpesw_rule_count) != 1)
- return 0;
+static int mlx5_mpesw_metadata_set(struct mlx5_lag *ldev)
+{
+ struct mlx5_core_dev *dev;
+ struct mlx5_eswitch *esw;
+ u32 pf_metadata;
+ int i, err;
+
+ for (i = 0; i < ldev->ports; i++) {
+ dev = ldev->pf[i].dev;
+ esw = dev->priv.eswitch;
+ pf_metadata = mlx5_esw_match_metadata_alloc(esw);
+ if (!pf_metadata) {
+ err = -ENOSPC;
+ goto err_metadata;
+ }
+
+ ldev->lag_mpesw.pf_metadata[i] = pf_metadata;
+ err = mlx5_esw_acl_ingress_vport_metadata_update(esw, MLX5_VPORT_UPLINK,
+ pf_metadata);
+ if (err)
+ goto err_metadata;
+ }
- if (ldev->mode != MLX5_LAG_MODE_NONE) {
- err = -EINVAL;
- goto out_err;
+ for (i = 0; i < ldev->ports; i++) {
+ dev = ldev->pf[i].dev;
+ mlx5_notifier_call_chain(dev->priv.events, MLX5_DEV_EVENT_MULTIPORT_ESW,
+ (void *)0);
}
- err = mlx5_activate_lag(ldev, NULL, MLX5_LAG_MODE_MPESW, false);
+ return 0;
+
+err_metadata:
+ mlx5_mpesw_metadata_cleanup(ldev);
+ return err;
+}
+
+static int enable_mpesw(struct mlx5_lag *ldev)
+{
+ struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev;
+ struct mlx5_core_dev *dev1 = ldev->pf[MLX5_LAG_P2].dev;
+ int err;
+
+ if (ldev->mode != MLX5_LAG_MODE_NONE)
+ return -EINVAL;
+
+ if (mlx5_eswitch_mode(dev0) != MLX5_ESWITCH_OFFLOADS ||
+ !MLX5_CAP_PORT_SELECTION(dev0, port_select_flow_table) ||
+ !MLX5_CAP_GEN(dev0, create_lag_when_not_master_up) ||
+ !mlx5_lag_check_prereq(ldev))
+ return -EOPNOTSUPP;
+
+ err = mlx5_mpesw_metadata_set(ldev);
+ if (err)
+ return err;
+
+ mlx5_lag_remove_devices(ldev);
+
+ err = mlx5_activate_lag(ldev, NULL, MLX5_LAG_MODE_MPESW, true);
if (err) {
- mlx5_core_warn(dev, "Failed to create LAG in MPESW mode (%d)\n", err);
- goto out_err;
+ mlx5_core_warn(dev0, "Failed to create LAG in MPESW mode (%d)\n", err);
+ goto err_add_devices;
}
+ dev0->priv.flags &= ~MLX5_PRIV_FLAGS_DISABLE_IB_ADEV;
+ mlx5_rescan_drivers_locked(dev0);
+ err = mlx5_eswitch_reload_reps(dev0->priv.eswitch);
+ if (!err)
+ err = mlx5_eswitch_reload_reps(dev1->priv.eswitch);
+ if (err)
+ goto err_rescan_drivers;
+
return 0;
-out_err:
- atomic_dec(&ldev->lag_mpesw.mpesw_rule_count);
+err_rescan_drivers:
+ dev0->priv.flags |= MLX5_PRIV_FLAGS_DISABLE_IB_ADEV;
+ mlx5_rescan_drivers_locked(dev0);
+ mlx5_deactivate_lag(ldev);
+err_add_devices:
+ mlx5_lag_add_devices(ldev);
+ mlx5_eswitch_reload_reps(dev0->priv.eswitch);
+ mlx5_eswitch_reload_reps(dev1->priv.eswitch);
+ mlx5_mpesw_metadata_cleanup(ldev);
return err;
}
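enable_mpesw() above is a textbook staged bring-up: metadata allocation, device removal, LAG activation, rep reload, with goto labels unwinding the completed stages in reverse on failure. A skeletal, self-contained rendering of that control flow (step names hypothetical, stages reduced to three for brevity):

#include <stdio.h>

static int do_step(const char *name, int fail)
{
	printf("do   %s\n", name);
	return fail ? -1 : 0;
}

static void undo_step(const char *name) { printf("undo %s\n", name); }

static int enable(int fail_at)
{
	if (do_step("metadata_set", fail_at == 1))
		return -1;
	if (do_step("activate_lag", fail_at == 2))
		goto err_metadata;
	if (do_step("reload_reps", fail_at == 3))
		goto err_lag;
	return 0;

err_lag:
	undo_step("activate_lag");
err_metadata:
	undo_step("metadata_set");
	return -1;
}

int main(void)
{
	printf("-> %d\n", enable(3));	/* unwinds activate_lag, metadata_set */
	return 0;
}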
-static void del_mpesw_rule(struct mlx5_lag *ldev)
+static void disable_mpesw(struct mlx5_lag *ldev)
{
- if (!atomic_dec_return(&ldev->lag_mpesw.mpesw_rule_count) &&
- ldev->mode == MLX5_LAG_MODE_MPESW)
+ if (ldev->mode == MLX5_LAG_MODE_MPESW) {
+ mlx5_mpesw_metadata_cleanup(ldev);
mlx5_disable_lag(ldev);
+ }
}
static void mlx5_mpesw_work(struct work_struct *work)
@@ -45,20 +127,27 @@ static void mlx5_mpesw_work(struct work_struct *work)
struct mlx5_mpesw_work_st *mpesww = container_of(work, struct mlx5_mpesw_work_st, work);
struct mlx5_lag *ldev = mpesww->lag;
+ mlx5_dev_list_lock();
mutex_lock(&ldev->lock);
+ if (ldev->mode_changes_in_progress) {
+ mpesww->result = -EAGAIN;
+ goto unlock;
+ }
+
if (mpesww->op == MLX5_MPESW_OP_ENABLE)
- mpesww->result = add_mpesw_rule(ldev);
+ mpesww->result = enable_mpesw(ldev);
else if (mpesww->op == MLX5_MPESW_OP_DISABLE)
- del_mpesw_rule(ldev);
+ disable_mpesw(ldev);
+unlock:
mutex_unlock(&ldev->lock);
-
+ mlx5_dev_list_unlock();
complete(&mpesww->comp);
}
static int mlx5_lag_mpesw_queue_work(struct mlx5_core_dev *dev,
enum mpesw_op op)
{
- struct mlx5_lag *ldev = dev->priv.lag;
+ struct mlx5_lag *ldev = mlx5_lag_dev(dev);
struct mlx5_mpesw_work_st *work;
int err = 0;
@@ -86,43 +175,36 @@ out:
return err;
}
-void mlx5_lag_del_mpesw_rule(struct mlx5_core_dev *dev)
+void mlx5_lag_mpesw_disable(struct mlx5_core_dev *dev)
{
mlx5_lag_mpesw_queue_work(dev, MLX5_MPESW_OP_DISABLE);
}
-int mlx5_lag_add_mpesw_rule(struct mlx5_core_dev *dev)
+int mlx5_lag_mpesw_enable(struct mlx5_core_dev *dev)
{
return mlx5_lag_mpesw_queue_work(dev, MLX5_MPESW_OP_ENABLE);
}
-int mlx5_lag_do_mirred(struct mlx5_core_dev *mdev, struct net_device *out_dev)
+int mlx5_lag_mpesw_do_mirred(struct mlx5_core_dev *mdev,
+ struct net_device *out_dev,
+ struct netlink_ext_ack *extack)
{
- struct mlx5_lag *ldev = mdev->priv.lag;
+ struct mlx5_lag *ldev = mlx5_lag_dev(mdev);
if (!netif_is_bond_master(out_dev) || !ldev)
return 0;
- if (ldev->mode == MLX5_LAG_MODE_MPESW)
- return -EOPNOTSUPP;
-
- return 0;
-}
-
-bool mlx5_lag_mpesw_is_activated(struct mlx5_core_dev *dev)
-{
- bool ret;
+ if (ldev->mode != MLX5_LAG_MODE_MPESW)
+ return 0;
- ret = dev->priv.lag && dev->priv.lag->mode == MLX5_LAG_MODE_MPESW;
- return ret;
+ NL_SET_ERR_MSG_MOD(extack, "can't forward to bond in mpesw mode");
+ return -EOPNOTSUPP;
}
-void mlx5_lag_mpesw_init(struct mlx5_lag *ldev)
+bool mlx5_lag_is_mpesw(struct mlx5_core_dev *dev)
{
- atomic_set(&ldev->lag_mpesw.mpesw_rule_count, 0);
-}
+ struct mlx5_lag *ldev = mlx5_lag_dev(dev);
-void mlx5_lag_mpesw_cleanup(struct mlx5_lag *ldev)
-{
- WARN_ON(atomic_read(&ldev->lag_mpesw.mpesw_rule_count));
+ return ldev && ldev->mode == MLX5_LAG_MODE_MPESW;
}
+EXPORT_SYMBOL(mlx5_lag_is_mpesw);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.h b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.h
index 88e8daffcf92..02520f27a033 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.h
@@ -9,17 +9,27 @@
struct lag_mpesw {
struct work_struct mpesw_work;
- atomic_t mpesw_rule_count;
+ u32 pf_metadata[MLX5_MAX_PORTS];
};
-int mlx5_lag_do_mirred(struct mlx5_core_dev *mdev, struct net_device *out_dev);
-bool mlx5_lag_mpesw_is_activated(struct mlx5_core_dev *dev);
-#if IS_ENABLED(CONFIG_MLX5_ESWITCH)
-void mlx5_lag_mpesw_init(struct mlx5_lag *ldev);
-void mlx5_lag_mpesw_cleanup(struct mlx5_lag *ldev);
-#else
-static inline void mlx5_lag_mpesw_init(struct mlx5_lag *ldev) {}
-static inline void mlx5_lag_mpesw_cleanup(struct mlx5_lag *ldev) {}
-#endif
+enum mpesw_op {
+ MLX5_MPESW_OP_ENABLE,
+ MLX5_MPESW_OP_DISABLE,
+};
+
+struct mlx5_mpesw_work_st {
+ struct work_struct work;
+ struct mlx5_lag *lag;
+ enum mpesw_op op;
+ struct completion comp;
+ int result;
+};
+
+int mlx5_lag_mpesw_do_mirred(struct mlx5_core_dev *mdev,
+ struct net_device *out_dev,
+ struct netlink_ext_ack *extack);
+bool mlx5_lag_is_mpesw(struct mlx5_core_dev *dev);
+void mlx5_lag_mpesw_disable(struct mlx5_core_dev *dev);
+int mlx5_lag_mpesw_enable(struct mlx5_core_dev *dev);
#endif /* __MLX5_LAG_MPESW_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
index 69318b143268..4c9a40211059 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
@@ -69,6 +69,13 @@ enum {
MLX5_MTPPS_FS_OUT_PULSE_DURATION_NS = BIT(0xa),
};
+enum {
+ MLX5_MTUTC_OPERATION_ADJUST_TIME_MIN = S16_MIN,
+ MLX5_MTUTC_OPERATION_ADJUST_TIME_MAX = S16_MAX,
+ MLX5_MTUTC_OPERATION_ADJUST_TIME_EXTENDED_MIN = -200000,
+ MLX5_MTUTC_OPERATION_ADJUST_TIME_EXTENDED_MAX = 200000,
+};
+
static bool mlx5_real_time_mode(struct mlx5_core_dev *mdev)
{
return (mlx5_is_real_time_rq(mdev) || mlx5_is_real_time_sq(mdev));
@@ -86,6 +93,22 @@ static bool mlx5_modify_mtutc_allowed(struct mlx5_core_dev *mdev)
return MLX5_CAP_MCAM_FEATURE(mdev, ptpcyc2realtime_modify);
}
+static bool mlx5_is_mtutc_time_adj_cap(struct mlx5_core_dev *mdev, s64 delta)
+{
+ s64 min = MLX5_MTUTC_OPERATION_ADJUST_TIME_MIN;
+ s64 max = MLX5_MTUTC_OPERATION_ADJUST_TIME_MAX;
+
+ if (MLX5_CAP_MCAM_FEATURE(mdev, mtutc_time_adjustment_extended_range)) {
+ min = MLX5_MTUTC_OPERATION_ADJUST_TIME_EXTENDED_MIN;
+ max = MLX5_MTUTC_OPERATION_ADJUST_TIME_EXTENDED_MAX;
+ }
+
+ if (delta < min || delta > max)
+ return false;
+
+ return true;
+}
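mlx5_is_mtutc_time_adj_cap() widens the direct-adjustment window from the legacy signed 16-bit field (-32768..32767 ns) to +/-200 us when the extended-range capability bit is set; anything outside falls back to a full settime. A worked check of those bounds (illustrative user-space code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool adj_in_range(int64_t delta_ns, bool extended)
{
	int64_t min = extended ? -200000 : INT16_MIN;	/* -32768 */
	int64_t max = extended ?  200000 : INT16_MAX;	/*  32767 */

	return delta_ns >= min && delta_ns <= max;
}

int main(void)
{
	/* 50 us: rejected on legacy parts, direct-adjustable with the cap */
	printf("legacy: %d, extended: %d\n",
	       adj_in_range(50000, false), adj_in_range(50000, true));
	return 0;
}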
+
static int mlx5_set_mtutc(struct mlx5_core_dev *dev, u32 *mtutc, u32 size)
{
u32 out[MLX5_ST_SZ_DW(mtutc_reg)] = {};
@@ -288,8 +311,8 @@ static int mlx5_ptp_adjtime_real_time(struct mlx5_core_dev *mdev, s64 delta)
if (!mlx5_modify_mtutc_allowed(mdev))
return 0;
- /* HW time adjustment range is s16. If out of range, settime instead */
- if (delta < S16_MIN || delta > S16_MAX) {
+ /* HW time adjustment range is checked. If out of range, settime instead */
+ if (!mlx5_is_mtutc_time_adj_cap(mdev, delta)) {
struct timespec64 ts;
s64 ns;
@@ -326,7 +349,20 @@ static int mlx5_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
return 0;
}
-static int mlx5_ptp_adjfreq_real_time(struct mlx5_core_dev *mdev, s32 freq)
+static int mlx5_ptp_adjphase(struct ptp_clock_info *ptp, s32 delta)
+{
+ struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info);
+ struct mlx5_core_dev *mdev;
+
+ mdev = container_of(clock, struct mlx5_core_dev, clock);
+
+ if (!mlx5_is_mtutc_time_adj_cap(mdev, delta))
+ return -ERANGE;
+
+ return mlx5_ptp_adjtime(ptp, delta);
+}
+
+static int mlx5_ptp_freq_adj_real_time(struct mlx5_core_dev *mdev, long scaled_ppm)
{
u32 in[MLX5_ST_SZ_DW(mtutc_reg)] = {};
@@ -334,7 +370,15 @@ static int mlx5_ptp_adjfreq_real_time(struct mlx5_core_dev *mdev, s32 freq)
return 0;
MLX5_SET(mtutc_reg, in, operation, MLX5_MTUTC_OPERATION_ADJUST_FREQ_UTC);
- MLX5_SET(mtutc_reg, in, freq_adjustment, freq);
+
+ if (MLX5_CAP_MCAM_FEATURE(mdev, mtutc_freq_adj_units)) {
+ MLX5_SET(mtutc_reg, in, freq_adj_units,
+ MLX5_MTUTC_FREQ_ADJ_UNITS_SCALED_PPM);
+ MLX5_SET(mtutc_reg, in, freq_adjustment, scaled_ppm);
+ } else {
+ MLX5_SET(mtutc_reg, in, freq_adj_units, MLX5_MTUTC_FREQ_ADJ_UNITS_PPB);
+ MLX5_SET(mtutc_reg, in, freq_adjustment, scaled_ppm_to_ppb(scaled_ppm));
+ }
return mlx5_set_mtutc(mdev, in, sizeof(in));
}
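The capability branch above matters for precision: scaled_ppm carries a 16-bit binary fraction, so devices that accept MLX5_MTUTC_FREQ_ADJ_UNITS_SCALED_PPM keep sub-ppb resolution, while the fallback truncates through the kernel's scaled_ppm_to_ppb() conversion. A user-space reproduction of that conversion, mirroring the helper's arithmetic (multiply by 125, shift right by 13, i.e. * 1000 / 65536):

#include <stdint.h>
#include <stdio.h>

static long scaled_ppm_to_ppb(long scaled_ppm)
{
	int64_t ppb = 1 + (int64_t)scaled_ppm;	/* ppm with 16 fractional bits */
	ppb *= 125;
	ppb >>= 13;				/* i.e. * 1000 / 65536 */
	return (long)ppb;
}

int main(void)
{
	printf("%ld\n", scaled_ppm_to_ppb(65536));	/* 1 ppm -> 1000 ppb */
	printf("%ld\n", scaled_ppm_to_ppb(1));		/* sub-ppb, truncates to 0 */
	return 0;
}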
@@ -349,7 +393,8 @@ static int mlx5_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
int err;
mdev = container_of(clock, struct mlx5_core_dev, clock);
- err = mlx5_ptp_adjfreq_real_time(mdev, scaled_ppm_to_ppb(scaled_ppm));
+
+ err = mlx5_ptp_freq_adj_real_time(mdev, scaled_ppm);
if (err)
return err;
@@ -688,6 +733,7 @@ static const struct ptp_clock_info mlx5_ptp_clock_info = {
.n_pins = 0,
.pps = 0,
.adjfine = mlx5_ptp_adjfine,
+ .adjphase = mlx5_ptp_adjphase,
.adjtime = mlx5_ptp_adjtime,
.gettimex64 = mlx5_ptp_gettimex,
.settime64 = mlx5_ptp_settime,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.c
index e995f8378df7..3a94b8f8031e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.c
@@ -2,53 +2,253 @@
// Copyright (c) 2019 Mellanox Technologies.
#include "mlx5_core.h"
-#include "lib/mlx5.h"
+#include "lib/crypto.h"
-int mlx5_create_encryption_key(struct mlx5_core_dev *mdev,
- void *key, u32 sz_bytes,
- u32 key_type, u32 *p_key_id)
-{
- u32 in[MLX5_ST_SZ_DW(create_encryption_key_in)] = {};
- u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)];
- u32 sz_bits = sz_bytes * BITS_PER_BYTE;
- u8 general_obj_key_size;
- u64 general_obj_types;
- void *obj, *key_p;
- int err;
+#define MLX5_CRYPTO_DEK_POOLS_NUM (MLX5_ACCEL_OBJ_TYPE_KEY_NUM - 1)
+#define type2idx(type) ((type) - 1)
- obj = MLX5_ADDR_OF(create_encryption_key_in, in, encryption_key_object);
- key_p = MLX5_ADDR_OF(encryption_key_obj, obj, key);
+#define MLX5_CRYPTO_DEK_POOL_SYNC_THRESH 128
- general_obj_types = MLX5_CAP_GEN_64(mdev, general_obj_types);
- if (!(general_obj_types &
- MLX5_HCA_CAP_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY))
- return -EINVAL;
+/* Calculate the number of DEKs that were freed by any user
+ * (for example, TLS) since the last revalidation of a pool or a bulk.
+ */
+#define MLX5_CRYPTO_DEK_CALC_FREED(a) \
+ ({ typeof(a) _a = (a); \
+ _a->num_deks - _a->avail_deks - _a->in_use_deks; })
+
+#define MLX5_CRYPTO_DEK_POOL_CALC_FREED(pool) MLX5_CRYPTO_DEK_CALC_FREED(pool)
+#define MLX5_CRYPTO_DEK_BULK_CALC_FREED(bulk) MLX5_CRYPTO_DEK_CALC_FREED(bulk)
+
+#define MLX5_CRYPTO_DEK_BULK_IDLE(bulk) \
+ ({ typeof(bulk) _bulk = (bulk); \
+ _bulk->avail_deks == _bulk->num_deks; })
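The accounting above derives the number of keys freed but not yet synced purely from three counters: freed = num_deks - avail_deks - in_use_deks, and a sync is kicked once that exceeds MLX5_CRYPTO_DEK_POOL_SYNC_THRESH (128). Worked numbers, as a runnable check:

#include <stdio.h>

struct pool { int num_deks, avail_deks, in_use_deks; };

#define CALC_FREED(p)	((p)->num_deks - (p)->avail_deks - (p)->in_use_deks)
#define SYNC_THRESH	128

int main(void)
{
	struct pool p = { .num_deks = 1024, .avail_deks = 800, .in_use_deks = 90 };
	int freed = CALC_FREED(&p);

	printf("freed=%d, sync=%s\n", freed,
	       freed > SYNC_THRESH ? "scheduled" : "not yet");	/* freed=134 */
	return 0;
}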
+
+enum {
+ MLX5_CRYPTO_DEK_ALL_TYPE = BIT(0),
+};
+
+struct mlx5_crypto_dek_pool {
+ struct mlx5_core_dev *mdev;
+ u32 key_purpose;
+ int num_deks; /* the total number of keys in this pool */
+ int avail_deks; /* the number of available keys in this pool */
+ int in_use_deks; /* the number of keys currently in use in this pool */
+ struct mutex lock; /* protect the following lists, and the bulks */
+ struct list_head partial_list; /* some of keys are available */
+ struct list_head full_list; /* no available keys */
+ struct list_head avail_list; /* all keys are available to use */
+
+ /* No in-use keys, and all need to be synced.
+ * These bulks will be moved to the avail list after sync.
+ */
+ struct list_head sync_list;
+
+ bool syncing;
+ struct list_head wait_for_free;
+ struct work_struct sync_work;
+
+ spinlock_t destroy_lock; /* protect destroy_list */
+ struct list_head destroy_list;
+ struct work_struct destroy_work;
+};
+
+struct mlx5_crypto_dek_bulk {
+ struct mlx5_core_dev *mdev;
+ int base_obj_id;
+ int avail_start; /* the bit to start search */
+ int num_deks; /* the total number of keys in a bulk */
+ int avail_deks; /* the number of keys available, with need_sync bit 0 */
+ int in_use_deks; /* the number of keys being used, with in_use bit 1 */
+ struct list_head entry;
+
+ /* 0: not being used by any user, 1: otherwise */
+ unsigned long *in_use;
+
+ /* The bits are set when they are used, and reset after crypto_sync
+ * is executed. So, the value 0 means the key is newly created, or not
+ * used after sync, and 1 means it is in use, or freed but not yet synced.
+ */
+ unsigned long *need_sync;
+};
+
+struct mlx5_crypto_dek_priv {
+ struct mlx5_core_dev *mdev;
+ int log_dek_obj_range;
+};
+
+struct mlx5_crypto_dek {
+ struct mlx5_crypto_dek_bulk *bulk;
+ struct list_head entry;
+ u32 obj_id;
+};
+
+u32 mlx5_crypto_dek_get_id(struct mlx5_crypto_dek *dek)
+{
+ return dek->obj_id;
+}
+
+static int mlx5_crypto_dek_get_key_sz(struct mlx5_core_dev *mdev,
+ u32 sz_bytes, u8 *key_sz_p)
+{
+ u32 sz_bits = sz_bytes * BITS_PER_BYTE;
switch (sz_bits) {
case 128:
- general_obj_key_size =
- MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_KEY_SIZE_128;
- key_p += sz_bytes;
+ *key_sz_p = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_KEY_SIZE_128;
break;
case 256:
- general_obj_key_size =
- MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_KEY_SIZE_256;
+ *key_sz_p = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_KEY_SIZE_256;
break;
default:
+ mlx5_core_err(mdev, "Crypto offload error, invalid key size (%u bits)\n",
+ sz_bits);
return -EINVAL;
}
- memcpy(key_p, key, sz_bytes);
+ return 0;
+}
+
+static int mlx5_crypto_dek_fill_key(struct mlx5_core_dev *mdev, u8 *key_obj,
+ const void *key, u32 sz_bytes)
+{
+ void *dst;
+ u8 key_sz;
+ int err;
+
+ err = mlx5_crypto_dek_get_key_sz(mdev, sz_bytes, &key_sz);
+ if (err)
+ return err;
+
+ MLX5_SET(encryption_key_obj, key_obj, key_size, key_sz);
+
+ if (sz_bytes == 16)
+ /* For key size of 128b the MSBs are reserved. */
+ dst = MLX5_ADDR_OF(encryption_key_obj, key_obj, key[1]);
+ else
+ dst = MLX5_ADDR_OF(encryption_key_obj, key_obj, key);
+
+ memcpy(dst, key, sz_bytes);
+
+ return 0;
+}
+
+static int mlx5_crypto_cmd_sync_crypto(struct mlx5_core_dev *mdev,
+ int crypto_type)
+{
+ u32 in[MLX5_ST_SZ_DW(sync_crypto_in)] = {};
+ int err;
+
+ mlx5_core_dbg(mdev,
+ "Execute SYNC_CRYPTO command with crypto_type(0x%x)\n",
+ crypto_type);
+
+ MLX5_SET(sync_crypto_in, in, opcode, MLX5_CMD_OP_SYNC_CRYPTO);
+ MLX5_SET(sync_crypto_in, in, crypto_type, crypto_type);
+
+ err = mlx5_cmd_exec_in(mdev, sync_crypto, in);
+ if (err)
+ mlx5_core_err(mdev,
+ "Failed to exec sync crypto, type=%d, err=%d\n",
+ crypto_type, err);
+
+ return err;
+}
+
+static int mlx5_crypto_create_dek_bulk(struct mlx5_core_dev *mdev,
+ u32 key_purpose, int log_obj_range,
+ u32 *obj_id)
+{
+ u32 in[MLX5_ST_SZ_DW(create_encryption_key_in)] = {};
+ u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)];
+ void *obj, *param;
+ int err;
- MLX5_SET(encryption_key_obj, obj, key_size, general_obj_key_size);
- MLX5_SET(encryption_key_obj, obj, key_type, key_type);
MLX5_SET(general_obj_in_cmd_hdr, in, opcode,
MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
MLX5_SET(general_obj_in_cmd_hdr, in, obj_type,
MLX5_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY);
+ param = MLX5_ADDR_OF(general_obj_in_cmd_hdr, in, op_param);
+ MLX5_SET(general_obj_create_param, param, log_obj_range, log_obj_range);
+
+ obj = MLX5_ADDR_OF(create_encryption_key_in, in, encryption_key_object);
+ MLX5_SET(encryption_key_obj, obj, key_purpose, key_purpose);
MLX5_SET(encryption_key_obj, obj, pd, mdev->mlx5e_res.hw_objs.pdn);
err = mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out));
+ if (err)
+ return err;
+
+ *obj_id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+ mlx5_core_dbg(mdev, "DEK objects created, bulk=%d, obj_id=%d\n",
+ 1 << log_obj_range, *obj_id);
+
+ return 0;
+}
+
+static int mlx5_crypto_modify_dek_key(struct mlx5_core_dev *mdev,
+ const void *key, u32 sz_bytes, u32 key_purpose,
+ u32 obj_id, u32 obj_offset)
+{
+ u32 in[MLX5_ST_SZ_DW(modify_encryption_key_in)] = {};
+ u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)];
+ void *obj, *param;
+ int err;
+
+ MLX5_SET(general_obj_in_cmd_hdr, in, opcode,
+ MLX5_CMD_OP_MODIFY_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr, in, obj_type,
+ MLX5_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY);
+ MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, obj_id);
+
+ param = MLX5_ADDR_OF(general_obj_in_cmd_hdr, in, op_param);
+ MLX5_SET(general_obj_query_param, param, obj_offset, obj_offset);
+
+ obj = MLX5_ADDR_OF(modify_encryption_key_in, in, encryption_key_object);
+ MLX5_SET64(encryption_key_obj, obj, modify_field_select, 1);
+ MLX5_SET(encryption_key_obj, obj, key_purpose, key_purpose);
+ MLX5_SET(encryption_key_obj, obj, pd, mdev->mlx5e_res.hw_objs.pdn);
+
+ err = mlx5_crypto_dek_fill_key(mdev, obj, key, sz_bytes);
+ if (err)
+ return err;
+
+ err = mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out));
+
+ /* avoid leaking key on the stack */
+ memzero_explicit(in, sizeof(in));
+
+ return err;
+}
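The modify path wipes the command buffer with memzero_explicit() rather than memset() because a compiler may elide a plain memset of memory that is about to go out of scope, leaving key bytes behind on the stack. A user-space sketch of the usual gcc/clang-style compiler-barrier workaround (the kernel helper achieves the equivalent):

#include <stdio.h>
#include <string.h>

static void wipe(void *p, size_t n)
{
	memset(p, 0, n);
	/* The barrier tells the compiler the zeroed bytes are observed,
	 * so the store above cannot be removed as dead. */
	__asm__ __volatile__("" : : "r"(p) : "memory");
}

int main(void)
{
	char key[32] = "secret key material";

	wipe(key, sizeof(key));
	printf("key[0]=%d\n", key[0]);	/* 0: buffer really cleared */
	return 0;
}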
+
+static int mlx5_crypto_create_dek_key(struct mlx5_core_dev *mdev,
+ const void *key, u32 sz_bytes,
+ u32 key_purpose, u32 *p_key_id)
+{
+ u32 in[MLX5_ST_SZ_DW(create_encryption_key_in)] = {};
+ u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)];
+ u64 general_obj_types;
+ void *obj;
+ int err;
+
+ general_obj_types = MLX5_CAP_GEN_64(mdev, general_obj_types);
+ if (!(general_obj_types &
+ MLX5_HCA_CAP_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY))
+ return -EINVAL;
+
+ MLX5_SET(general_obj_in_cmd_hdr, in, opcode,
+ MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr, in, obj_type,
+ MLX5_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY);
+
+ obj = MLX5_ADDR_OF(create_encryption_key_in, in, encryption_key_object);
+ MLX5_SET(encryption_key_obj, obj, key_purpose, key_purpose);
+ MLX5_SET(encryption_key_obj, obj, pd, mdev->mlx5e_res.hw_objs.pdn);
+
+ err = mlx5_crypto_dek_fill_key(mdev, obj, key, sz_bytes);
+ if (err)
+ return err;
+
+ err = mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out));
if (!err)
*p_key_id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
@@ -58,7 +258,7 @@ int mlx5_create_encryption_key(struct mlx5_core_dev *mdev,
return err;
}
-void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id)
+static void mlx5_crypto_destroy_dek_key(struct mlx5_core_dev *mdev, u32 key_id)
{
u32 in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {};
u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)];
@@ -71,3 +271,504 @@ void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id)
mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out));
}
+
+int mlx5_create_encryption_key(struct mlx5_core_dev *mdev,
+ const void *key, u32 sz_bytes,
+ u32 key_type, u32 *p_key_id)
+{
+ return mlx5_crypto_create_dek_key(mdev, key, sz_bytes, key_type, p_key_id);
+}
+
+void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id)
+{
+ mlx5_crypto_destroy_dek_key(mdev, key_id);
+}
+
+static struct mlx5_crypto_dek_bulk *
+mlx5_crypto_dek_bulk_create(struct mlx5_crypto_dek_pool *pool)
+{
+ struct mlx5_crypto_dek_priv *dek_priv = pool->mdev->mlx5e_res.dek_priv;
+ struct mlx5_core_dev *mdev = pool->mdev;
+ struct mlx5_crypto_dek_bulk *bulk;
+ int num_deks, base_obj_id;
+ int err;
+
+ bulk = kzalloc(sizeof(*bulk), GFP_KERNEL);
+ if (!bulk)
+ return ERR_PTR(-ENOMEM);
+
+ num_deks = 1 << dek_priv->log_dek_obj_range;
+ bulk->need_sync = bitmap_zalloc(num_deks, GFP_KERNEL);
+ if (!bulk->need_sync) {
+ err = -ENOMEM;
+ goto err_out;
+ }
+
+ bulk->in_use = bitmap_zalloc(num_deks, GFP_KERNEL);
+ if (!bulk->in_use) {
+ err = -ENOMEM;
+ goto err_out;
+ }
+
+ err = mlx5_crypto_create_dek_bulk(mdev, pool->key_purpose,
+ dek_priv->log_dek_obj_range,
+ &base_obj_id);
+ if (err)
+ goto err_out;
+
+ bulk->base_obj_id = base_obj_id;
+ bulk->num_deks = num_deks;
+ bulk->avail_deks = num_deks;
+ bulk->mdev = mdev;
+
+ return bulk;
+
+err_out:
+ bitmap_free(bulk->in_use);
+ bitmap_free(bulk->need_sync);
+ kfree(bulk);
+ return ERR_PTR(err);
+}
+
+static struct mlx5_crypto_dek_bulk *
+mlx5_crypto_dek_pool_add_bulk(struct mlx5_crypto_dek_pool *pool)
+{
+ struct mlx5_crypto_dek_bulk *bulk;
+
+ bulk = mlx5_crypto_dek_bulk_create(pool);
+ if (IS_ERR(bulk))
+ return bulk;
+
+ pool->avail_deks += bulk->num_deks;
+ pool->num_deks += bulk->num_deks;
+ list_add(&bulk->entry, &pool->partial_list);
+
+ return bulk;
+}
+
+static void mlx5_crypto_dek_bulk_free(struct mlx5_crypto_dek_bulk *bulk)
+{
+ mlx5_crypto_destroy_dek_key(bulk->mdev, bulk->base_obj_id);
+ bitmap_free(bulk->need_sync);
+ bitmap_free(bulk->in_use);
+ kfree(bulk);
+}
+
+static void mlx5_crypto_dek_pool_remove_bulk(struct mlx5_crypto_dek_pool *pool,
+ struct mlx5_crypto_dek_bulk *bulk,
+ bool delay)
+{
+ pool->num_deks -= bulk->num_deks;
+ pool->avail_deks -= bulk->avail_deks;
+ pool->in_use_deks -= bulk->in_use_deks;
+ list_del(&bulk->entry);
+ if (!delay)
+ mlx5_crypto_dek_bulk_free(bulk);
+}
+
+static struct mlx5_crypto_dek_bulk *
+mlx5_crypto_dek_pool_pop(struct mlx5_crypto_dek_pool *pool, u32 *obj_offset)
+{
+ struct mlx5_crypto_dek_bulk *bulk;
+ int pos;
+
+ mutex_lock(&pool->lock);
+ bulk = list_first_entry_or_null(&pool->partial_list,
+ struct mlx5_crypto_dek_bulk, entry);
+
+ if (bulk) {
+ pos = find_next_zero_bit(bulk->need_sync, bulk->num_deks,
+ bulk->avail_start);
+ if (pos == bulk->num_deks) {
+ mlx5_core_err(pool->mdev, "Wrong DEK bulk avail_start.\n");
+ pos = find_first_zero_bit(bulk->need_sync, bulk->num_deks);
+ }
+ WARN_ON(pos == bulk->num_deks);
+ } else {
+ bulk = list_first_entry_or_null(&pool->avail_list,
+ struct mlx5_crypto_dek_bulk,
+ entry);
+ if (bulk) {
+ list_move(&bulk->entry, &pool->partial_list);
+ } else {
+ bulk = mlx5_crypto_dek_pool_add_bulk(pool);
+ if (IS_ERR(bulk))
+ goto out;
+ }
+ pos = 0;
+ }
+
+ *obj_offset = pos;
+ bitmap_set(bulk->need_sync, pos, 1);
+ bitmap_set(bulk->in_use, pos, 1);
+ bulk->in_use_deks++;
+ bulk->avail_deks--;
+ if (!bulk->avail_deks) {
+ list_move(&bulk->entry, &pool->full_list);
+ bulk->avail_start = bulk->num_deks;
+ } else {
+ bulk->avail_start = pos + 1;
+ }
+ pool->avail_deks--;
+ pool->in_use_deks++;
+
+out:
+ mutex_unlock(&pool->lock);
+ return bulk;
+}
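The pop path above is a bitmap allocator with a resume hint: it scans need_sync from avail_start for a clear bit, falls back to the avail list, and only then grows the pool with a new bulk. A minimal user-space model of the bit-scan step (find_next_zero_bit reimplemented for illustration, single word only):

#include <stdio.h>

static int find_next_zero(unsigned long map, int nbits, int start)
{
	for (int i = start; i < nbits; i++)
		if (!(map & (1UL << i)))
			return i;
	return nbits;			/* none free, like pos == num_deks */
}

int main(void)
{
	unsigned long need_sync = 0x07;	/* offsets 0-2 taken or pending sync */
	int nbits = 8, avail_start = 1;

	int pos = find_next_zero(need_sync, nbits, avail_start);
	need_sync |= 1UL << pos;	/* hand the key out, advance the hint */
	printf("allocated offset %d, next hint %d\n", pos, pos + 1); /* 3, 4 */
	return 0;
}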
+
+static bool mlx5_crypto_dek_need_sync(struct mlx5_crypto_dek_pool *pool)
+{
+ return !pool->syncing &&
+ MLX5_CRYPTO_DEK_POOL_CALC_FREED(pool) > MLX5_CRYPTO_DEK_POOL_SYNC_THRESH;
+}
+
+static int mlx5_crypto_dek_free_locked(struct mlx5_crypto_dek_pool *pool,
+ struct mlx5_crypto_dek *dek)
+{
+ struct mlx5_crypto_dek_bulk *bulk = dek->bulk;
+ int obj_offset;
+ bool old_val;
+ int err = 0;
+
+ obj_offset = dek->obj_id - bulk->base_obj_id;
+ old_val = test_and_clear_bit(obj_offset, bulk->in_use);
+ WARN_ON_ONCE(!old_val);
+ if (!old_val) {
+ err = -ENOENT;
+ goto out_free;
+ }
+ pool->in_use_deks--;
+ bulk->in_use_deks--;
+ if (!bulk->avail_deks && !bulk->in_use_deks)
+ list_move(&bulk->entry, &pool->sync_list);
+
+ if (mlx5_crypto_dek_need_sync(pool) && schedule_work(&pool->sync_work))
+ pool->syncing = true;
+
+out_free:
+ kfree(dek);
+ return err;
+}
+
+static int mlx5_crypto_dek_pool_push(struct mlx5_crypto_dek_pool *pool,
+ struct mlx5_crypto_dek *dek)
+{
+ int err = 0;
+
+ mutex_lock(&pool->lock);
+ if (pool->syncing)
+ list_add(&dek->entry, &pool->wait_for_free);
+ else
+ err = mlx5_crypto_dek_free_locked(pool, dek);
+ mutex_unlock(&pool->lock);
+
+ return err;
+}
+
+/* Update the bits for a bulk during sync, and avail_start for the next search.
+ * Since the combinations of (need_sync, in_use) for one DEK are
+ * - (0,0) the key is ready for use,
+ * - (1,1) the key is currently being used by a user,
+ * - (1,0) the key is freed and waiting to be synced,
+ * - (0,1) an invalid state,
+ * the number of revalidated DEKs can be calculated as
+ * hweight_long(need_sync XOR in_use), and the need_sync bits can be reset
+ * by simply copying from the in_use bits.
+ */
+static void mlx5_crypto_dek_bulk_reset_synced(struct mlx5_crypto_dek_pool *pool,
+ struct mlx5_crypto_dek_bulk *bulk)
+{
+ unsigned long *need_sync = bulk->need_sync;
+ unsigned long *in_use = bulk->in_use;
+ int i, freed, reused, avail_next;
+ bool first = true;
+
+ freed = MLX5_CRYPTO_DEK_BULK_CALC_FREED(bulk);
+
+ for (i = 0; freed && i < BITS_TO_LONGS(bulk->num_deks);
+ i++, need_sync++, in_use++) {
+ reused = hweight_long((*need_sync) ^ (*in_use));
+ if (!reused)
+ continue;
+
+ bulk->avail_deks += reused;
+ pool->avail_deks += reused;
+ *need_sync = *in_use;
+ if (first) {
+ avail_next = i * BITS_PER_TYPE(long);
+ if (bulk->avail_start > avail_next)
+ bulk->avail_start = avail_next;
+ first = false;
+ }
+
+ freed -= reused;
+ }
+}
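A runnable restatement of the rule in the comment above: freed keys are exactly the (need_sync=1, in_use=0) positions, so the population count of need_sync ^ in_use counts them, and assigning in_use to need_sync revalidates all of them in one store per word (user-space, gcc/clang __builtin_popcountl standing in for hweight_long, single word for brevity):

#include <stdio.h>

int main(void)
{
	unsigned long need_sync = 0x0f;	/* keys 0-3 were handed out */
	unsigned long in_use    = 0x05;	/* keys 0 and 2 still held */

	int reused = __builtin_popcountl(need_sync ^ in_use);	/* keys 1, 3 */

	need_sync = in_use;	/* freed keys become allocatable again */
	printf("revalidated %d keys, need_sync=%#lx\n", reused, need_sync);
	return 0;
}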
+
+/* Return true if the bulk is reused, false if destroyed with delay */
+static bool mlx5_crypto_dek_bulk_handle_avail(struct mlx5_crypto_dek_pool *pool,
+ struct mlx5_crypto_dek_bulk *bulk,
+ struct list_head *destroy_list)
+{
+ if (list_empty(&pool->avail_list)) {
+ list_move(&bulk->entry, &pool->avail_list);
+ return true;
+ }
+
+ mlx5_crypto_dek_pool_remove_bulk(pool, bulk, true);
+ list_add(&bulk->entry, destroy_list);
+ return false;
+}
+
+static void mlx5_crypto_dek_pool_splice_destroy_list(struct mlx5_crypto_dek_pool *pool,
+ struct list_head *list,
+ struct list_head *head)
+{
+ spin_lock(&pool->destroy_lock);
+ list_splice_init(list, head);
+ spin_unlock(&pool->destroy_lock);
+}
+
+static void mlx5_crypto_dek_pool_free_wait_keys(struct mlx5_crypto_dek_pool *pool)
+{
+ struct mlx5_crypto_dek *dek, *next;
+
+ list_for_each_entry_safe(dek, next, &pool->wait_for_free, entry) {
+ list_del(&dek->entry);
+ mlx5_crypto_dek_free_locked(pool, dek);
+ }
+}
+
+/* For all the bulks in each list, reset the bits during sync.
+ * Move them to different lists according to the number of available DEKs.
+ * Destroy all the idle bulks, except one kept for quick service.
+ * Finally, free the DEKs in the waiting list.
+ */
+static void mlx5_crypto_dek_pool_reset_synced(struct mlx5_crypto_dek_pool *pool)
+{
+ struct mlx5_crypto_dek_bulk *bulk, *tmp;
+ LIST_HEAD(destroy_list);
+
+ list_for_each_entry_safe(bulk, tmp, &pool->partial_list, entry) {
+ mlx5_crypto_dek_bulk_reset_synced(pool, bulk);
+ if (MLX5_CRYPTO_DEK_BULK_IDLE(bulk))
+ mlx5_crypto_dek_bulk_handle_avail(pool, bulk, &destroy_list);
+ }
+
+ list_for_each_entry_safe(bulk, tmp, &pool->full_list, entry) {
+ mlx5_crypto_dek_bulk_reset_synced(pool, bulk);
+
+ if (!bulk->avail_deks)
+ continue;
+
+ if (MLX5_CRYPTO_DEK_BULK_IDLE(bulk))
+ mlx5_crypto_dek_bulk_handle_avail(pool, bulk, &destroy_list);
+ else
+ list_move(&bulk->entry, &pool->partial_list);
+ }
+
+ list_for_each_entry_safe(bulk, tmp, &pool->sync_list, entry) {
+ bulk->avail_deks = bulk->num_deks;
+ pool->avail_deks += bulk->num_deks;
+ if (mlx5_crypto_dek_bulk_handle_avail(pool, bulk, &destroy_list)) {
+ memset(bulk->need_sync, 0, BITS_TO_BYTES(bulk->num_deks));
+ bulk->avail_start = 0;
+ }
+ }
+
+ mlx5_crypto_dek_pool_free_wait_keys(pool);
+
+ if (!list_empty(&destroy_list)) {
+ mlx5_crypto_dek_pool_splice_destroy_list(pool, &destroy_list,
+ &pool->destroy_list);
+ schedule_work(&pool->destroy_work);
+ }
+}
+
+static void mlx5_crypto_dek_sync_work_fn(struct work_struct *work)
+{
+ struct mlx5_crypto_dek_pool *pool =
+ container_of(work, struct mlx5_crypto_dek_pool, sync_work);
+ int err;
+
+ err = mlx5_crypto_cmd_sync_crypto(pool->mdev, BIT(pool->key_purpose));
+ mutex_lock(&pool->lock);
+ if (!err)
+ mlx5_crypto_dek_pool_reset_synced(pool);
+ pool->syncing = false;
+ mutex_unlock(&pool->lock);
+}
+
+struct mlx5_crypto_dek *mlx5_crypto_dek_create(struct mlx5_crypto_dek_pool *dek_pool,
+ const void *key, u32 sz_bytes)
+{
+ struct mlx5_crypto_dek_priv *dek_priv = dek_pool->mdev->mlx5e_res.dek_priv;
+ struct mlx5_core_dev *mdev = dek_pool->mdev;
+ u32 key_purpose = dek_pool->key_purpose;
+ struct mlx5_crypto_dek_bulk *bulk;
+ struct mlx5_crypto_dek *dek;
+ int obj_offset;
+ int err;
+
+ dek = kzalloc(sizeof(*dek), GFP_KERNEL);
+ if (!dek)
+ return ERR_PTR(-ENOMEM);
+
+ if (!dek_priv) {
+ err = mlx5_crypto_create_dek_key(mdev, key, sz_bytes,
+ key_purpose, &dek->obj_id);
+ goto out;
+ }
+
+ bulk = mlx5_crypto_dek_pool_pop(dek_pool, &obj_offset);
+ if (IS_ERR(bulk)) {
+ err = PTR_ERR(bulk);
+ goto out;
+ }
+
+ dek->bulk = bulk;
+ dek->obj_id = bulk->base_obj_id + obj_offset;
+ err = mlx5_crypto_modify_dek_key(mdev, key, sz_bytes, key_purpose,
+ bulk->base_obj_id, obj_offset);
+ if (err) {
+ mlx5_crypto_dek_pool_push(dek_pool, dek);
+ return ERR_PTR(err);
+ }
+
+out:
+ if (err) {
+ kfree(dek);
+ return ERR_PTR(err);
+ }
+
+ return dek;
+}
+
+void mlx5_crypto_dek_destroy(struct mlx5_crypto_dek_pool *dek_pool,
+ struct mlx5_crypto_dek *dek)
+{
+ struct mlx5_crypto_dek_priv *dek_priv = dek_pool->mdev->mlx5e_res.dek_priv;
+ struct mlx5_core_dev *mdev = dek_pool->mdev;
+
+ if (!dek_priv) {
+ mlx5_crypto_destroy_dek_key(mdev, dek->obj_id);
+ kfree(dek);
+ } else {
+ mlx5_crypto_dek_pool_push(dek_pool, dek);
+ }
+}
+
+static void mlx5_crypto_dek_free_destroy_list(struct list_head *destroy_list)
+{
+ struct mlx5_crypto_dek_bulk *bulk, *tmp;
+
+ list_for_each_entry_safe(bulk, tmp, destroy_list, entry)
+ mlx5_crypto_dek_bulk_free(bulk);
+}
+
+static void mlx5_crypto_dek_destroy_work_fn(struct work_struct *work)
+{
+ struct mlx5_crypto_dek_pool *pool =
+ container_of(work, struct mlx5_crypto_dek_pool, destroy_work);
+ LIST_HEAD(destroy_list);
+
+ mlx5_crypto_dek_pool_splice_destroy_list(pool, &pool->destroy_list,
+ &destroy_list);
+ mlx5_crypto_dek_free_destroy_list(&destroy_list);
+}
+
+struct mlx5_crypto_dek_pool *
+mlx5_crypto_dek_pool_create(struct mlx5_core_dev *mdev, int key_purpose)
+{
+ struct mlx5_crypto_dek_pool *pool;
+
+ pool = kzalloc(sizeof(*pool), GFP_KERNEL);
+ if (!pool)
+ return ERR_PTR(-ENOMEM);
+
+ pool->mdev = mdev;
+ pool->key_purpose = key_purpose;
+
+ mutex_init(&pool->lock);
+ INIT_LIST_HEAD(&pool->avail_list);
+ INIT_LIST_HEAD(&pool->partial_list);
+ INIT_LIST_HEAD(&pool->full_list);
+ INIT_LIST_HEAD(&pool->sync_list);
+ INIT_LIST_HEAD(&pool->wait_for_free);
+ INIT_WORK(&pool->sync_work, mlx5_crypto_dek_sync_work_fn);
+ spin_lock_init(&pool->destroy_lock);
+ INIT_LIST_HEAD(&pool->destroy_list);
+ INIT_WORK(&pool->destroy_work, mlx5_crypto_dek_destroy_work_fn);
+
+ return pool;
+}
+
+void mlx5_crypto_dek_pool_destroy(struct mlx5_crypto_dek_pool *pool)
+{
+ struct mlx5_crypto_dek_bulk *bulk, *tmp;
+
+ cancel_work_sync(&pool->sync_work);
+ cancel_work_sync(&pool->destroy_work);
+
+ mlx5_crypto_dek_pool_free_wait_keys(pool);
+
+ list_for_each_entry_safe(bulk, tmp, &pool->avail_list, entry)
+ mlx5_crypto_dek_pool_remove_bulk(pool, bulk, false);
+
+ list_for_each_entry_safe(bulk, tmp, &pool->full_list, entry)
+ mlx5_crypto_dek_pool_remove_bulk(pool, bulk, false);
+
+ list_for_each_entry_safe(bulk, tmp, &pool->sync_list, entry)
+ mlx5_crypto_dek_pool_remove_bulk(pool, bulk, false);
+
+ list_for_each_entry_safe(bulk, tmp, &pool->partial_list, entry)
+ mlx5_crypto_dek_pool_remove_bulk(pool, bulk, false);
+
+ mlx5_crypto_dek_free_destroy_list(&pool->destroy_list);
+
+ mutex_destroy(&pool->lock);
+
+ kfree(pool);
+}
+
+void mlx5_crypto_dek_cleanup(struct mlx5_crypto_dek_priv *dek_priv)
+{
+ if (!dek_priv)
+ return;
+
+ kfree(dek_priv);
+}
+
+struct mlx5_crypto_dek_priv *mlx5_crypto_dek_init(struct mlx5_core_dev *mdev)
+{
+ struct mlx5_crypto_dek_priv *dek_priv;
+ int err;
+
+ if (!MLX5_CAP_CRYPTO(mdev, log_dek_max_alloc))
+ return NULL;
+
+ dek_priv = kzalloc(sizeof(*dek_priv), GFP_KERNEL);
+ if (!dek_priv)
+ return ERR_PTR(-ENOMEM);
+
+ dek_priv->mdev = mdev;
+ dek_priv->log_dek_obj_range = min_t(int, 12,
+ MLX5_CAP_CRYPTO(mdev, log_dek_max_alloc));
+
+ /* sync all types of objects */
+ err = mlx5_crypto_cmd_sync_crypto(mdev, MLX5_CRYPTO_DEK_ALL_TYPE);
+ if (err)
+ goto err_sync_crypto;
+
+ mlx5_core_dbg(mdev, "Crypto DEK enabled, %d deks per alloc (max %d), total %d\n",
+ 1 << dek_priv->log_dek_obj_range,
+ 1 << MLX5_CAP_CRYPTO(mdev, log_dek_max_alloc),
+ 1 << MLX5_CAP_CRYPTO(mdev, log_max_num_deks));
+
+ return dek_priv;
+
+err_sync_crypto:
+ kfree(dek_priv);
+ return ERR_PTR(err);
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.h
new file mode 100644
index 000000000000..c819c047bb9c
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
+
+#ifndef __MLX5_LIB_CRYPTO_H__
+#define __MLX5_LIB_CRYPTO_H__
+
+enum {
+ MLX5_ACCEL_OBJ_TLS_KEY = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_PURPOSE_TLS,
+ MLX5_ACCEL_OBJ_IPSEC_KEY = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_PURPOSE_IPSEC,
+ MLX5_ACCEL_OBJ_MACSEC_KEY = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_PURPOSE_MACSEC,
+ MLX5_ACCEL_OBJ_TYPE_KEY_NUM,
+};
+
+int mlx5_create_encryption_key(struct mlx5_core_dev *mdev,
+ const void *key, u32 sz_bytes,
+ u32 key_type, u32 *p_key_id);
+
+void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id);
+
+struct mlx5_crypto_dek_pool;
+struct mlx5_crypto_dek;
+
+struct mlx5_crypto_dek_pool *mlx5_crypto_dek_pool_create(struct mlx5_core_dev *mdev,
+ int key_purpose);
+void mlx5_crypto_dek_pool_destroy(struct mlx5_crypto_dek_pool *pool);
+struct mlx5_crypto_dek *mlx5_crypto_dek_create(struct mlx5_crypto_dek_pool *dek_pool,
+ const void *key, u32 sz_bytes);
+void mlx5_crypto_dek_destroy(struct mlx5_crypto_dek_pool *dek_pool,
+ struct mlx5_crypto_dek *dek);
+u32 mlx5_crypto_dek_get_id(struct mlx5_crypto_dek *dek);
+
+struct mlx5_crypto_dek_priv *mlx5_crypto_dek_init(struct mlx5_core_dev *mdev);
+void mlx5_crypto_dek_cleanup(struct mlx5_crypto_dek_priv *dek_priv);
+#endif /* __MLX5_LIB_CRYPTO_H__ */
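Taken together, the API above suggests the following caller flow. A minimal usage sketch (error handling abbreviated; the key bytes and the mdev handle are placeholders):

	struct mlx5_crypto_dek_pool *pool;
	struct mlx5_crypto_dek *dek;
	u8 key[32] = {}; /* key material from the upper layer */
	u32 obj_id;

	pool = mlx5_crypto_dek_pool_create(mdev, MLX5_ACCEL_OBJ_TLS_KEY);
	if (IS_ERR(pool))
		return PTR_ERR(pool);

	dek = mlx5_crypto_dek_create(pool, key, sizeof(key));
	if (IS_ERR(dek)) {
		mlx5_crypto_dek_pool_destroy(pool);
		return PTR_ERR(dek);
	}

	/* program the DEK object id into the offload context */
	obj_id = mlx5_crypto_dek_get_id(dek);

	mlx5_crypto_dek_destroy(pool, dek);
	mlx5_crypto_dek_pool_destroy(pool);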
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
index df58cba37930..81ed91fee59b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
@@ -214,7 +214,7 @@ create_chain_restore(struct fs_chain *chain)
struct mlx5_eswitch *esw = chain->chains->dev->priv.eswitch;
u8 modact[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)] = {};
struct mlx5_fs_chains *chains = chain->chains;
- enum mlx5e_tc_attr_to_reg chain_to_reg;
+ enum mlx5e_tc_attr_to_reg mapped_obj_to_reg;
struct mlx5_modify_hdr *mod_hdr;
u32 index;
int err;
@@ -242,7 +242,7 @@ create_chain_restore(struct fs_chain *chain)
chain->id = index;
if (chains->ns == MLX5_FLOW_NAMESPACE_FDB) {
- chain_to_reg = CHAIN_TO_REG;
+ mapped_obj_to_reg = MAPPED_OBJ_TO_REG;
chain->restore_rule = esw_add_restore_rule(esw, chain->id);
if (IS_ERR(chain->restore_rule)) {
err = PTR_ERR(chain->restore_rule);
@@ -253,7 +253,7 @@ create_chain_restore(struct fs_chain *chain)
* since we write the metadata to reg_b
* that is passed to SW directly.
*/
- chain_to_reg = NIC_CHAIN_TO_REG;
+ mapped_obj_to_reg = NIC_MAPPED_OBJ_TO_REG;
} else {
err = -EINVAL;
goto err_rule;
@@ -261,12 +261,12 @@ create_chain_restore(struct fs_chain *chain)
MLX5_SET(set_action_in, modact, action_type, MLX5_ACTION_TYPE_SET);
MLX5_SET(set_action_in, modact, field,
- mlx5e_tc_attr_to_reg_mappings[chain_to_reg].mfield);
+ mlx5e_tc_attr_to_reg_mappings[mapped_obj_to_reg].mfield);
MLX5_SET(set_action_in, modact, offset,
- mlx5e_tc_attr_to_reg_mappings[chain_to_reg].moffset);
+ mlx5e_tc_attr_to_reg_mappings[mapped_obj_to_reg].moffset);
MLX5_SET(set_action_in, modact, length,
- mlx5e_tc_attr_to_reg_mappings[chain_to_reg].mlen == 32 ?
- 0 : mlx5e_tc_attr_to_reg_mappings[chain_to_reg].mlen);
+ mlx5e_tc_attr_to_reg_mappings[mapped_obj_to_reg].mlen == 32 ?
+ 0 : mlx5e_tc_attr_to_reg_mappings[mapped_obj_to_reg].mlen);
MLX5_SET(set_action_in, modact, data, chain->id);
mod_hdr = mlx5_modify_header_alloc(chains->dev, chains->ns,
1, modact);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c
new file mode 100644
index 000000000000..2c53589b765d
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c
@@ -0,0 +1,368 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
+
+#include "fs_core.h"
+#include "lib/ipsec_fs_roce.h"
+#include "mlx5_core.h"
+
+struct mlx5_ipsec_miss {
+ struct mlx5_flow_group *group;
+ struct mlx5_flow_handle *rule;
+};
+
+struct mlx5_ipsec_rx_roce {
+ struct mlx5_flow_group *g;
+ struct mlx5_flow_table *ft;
+ struct mlx5_flow_handle *rule;
+ struct mlx5_ipsec_miss roce_miss;
+
+ struct mlx5_flow_table *ft_rdma;
+ struct mlx5_flow_namespace *ns_rdma;
+};
+
+struct mlx5_ipsec_tx_roce {
+ struct mlx5_flow_group *g;
+ struct mlx5_flow_table *ft;
+ struct mlx5_flow_handle *rule;
+ struct mlx5_flow_namespace *ns;
+};
+
+struct mlx5_ipsec_fs {
+ struct mlx5_ipsec_rx_roce ipv4_rx;
+ struct mlx5_ipsec_rx_roce ipv6_rx;
+ struct mlx5_ipsec_tx_roce tx;
+};
+
+static void ipsec_fs_roce_setup_udp_dport(struct mlx5_flow_spec *spec,
+ u16 dport)
+{
+ spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
+ MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.ip_protocol);
+ MLX5_SET(fte_match_param, spec->match_value, outer_headers.ip_protocol, IPPROTO_UDP);
+ MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.udp_dport);
+ MLX5_SET(fte_match_param, spec->match_value, outer_headers.udp_dport, dport);
+}
+
+static int
+ipsec_fs_roce_rx_rule_setup(struct mlx5_core_dev *mdev,
+ struct mlx5_flow_destination *default_dst,
+ struct mlx5_ipsec_rx_roce *roce)
+{
+ struct mlx5_flow_destination dst = {};
+ MLX5_DECLARE_FLOW_ACT(flow_act);
+ struct mlx5_flow_handle *rule;
+ struct mlx5_flow_spec *spec;
+ int err = 0;
+
+ spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
+ if (!spec)
+ return -ENOMEM;
+
+ ipsec_fs_roce_setup_udp_dport(spec, ROCE_V2_UDP_DPORT);
+
+ flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+ dst.type = MLX5_FLOW_DESTINATION_TYPE_TABLE_TYPE;
+ dst.ft = roce->ft_rdma;
+ rule = mlx5_add_flow_rules(roce->ft, spec, &flow_act, &dst, 1);
+ if (IS_ERR(rule)) {
+ err = PTR_ERR(rule);
+ mlx5_core_err(mdev, "Fail to add RX RoCE IPsec rule err=%d\n",
+ err);
+ goto fail_add_rule;
+ }
+
+ roce->rule = rule;
+
+ memset(spec, 0, sizeof(*spec));
+ rule = mlx5_add_flow_rules(roce->ft, spec, &flow_act, default_dst, 1);
+ if (IS_ERR(rule)) {
+ err = PTR_ERR(rule);
+ mlx5_core_err(mdev, "Fail to add RX RoCE IPsec miss rule err=%d\n",
+ err);
+ goto fail_add_default_rule;
+ }
+
+ roce->roce_miss.rule = rule;
+
+ kvfree(spec);
+ return 0;
+
+fail_add_default_rule:
+ mlx5_del_flow_rules(roce->rule);
+fail_add_rule:
+ kvfree(spec);
+ return err;
+}
+
+static int ipsec_fs_roce_tx_rule_setup(struct mlx5_core_dev *mdev,
+ struct mlx5_ipsec_tx_roce *roce,
+ struct mlx5_flow_table *pol_ft)
+{
+ struct mlx5_flow_destination dst = {};
+ MLX5_DECLARE_FLOW_ACT(flow_act);
+ struct mlx5_flow_handle *rule;
+ int err = 0;
+
+ flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+ dst.type = MLX5_FLOW_DESTINATION_TYPE_TABLE_TYPE;
+ dst.ft = pol_ft;
+ rule = mlx5_add_flow_rules(roce->ft, NULL, &flow_act, &dst,
+ 1);
+ if (IS_ERR(rule)) {
+ err = PTR_ERR(rule);
+ mlx5_core_err(mdev, "Fail to add TX RoCE IPsec rule err=%d\n",
+ err);
+ goto out;
+ }
+ roce->rule = rule;
+
+out:
+ return err;
+}
+
+void mlx5_ipsec_fs_roce_tx_destroy(struct mlx5_ipsec_fs *ipsec_roce)
+{
+ struct mlx5_ipsec_tx_roce *tx_roce;
+
+ if (!ipsec_roce)
+ return;
+
+ tx_roce = &ipsec_roce->tx;
+
+ mlx5_del_flow_rules(tx_roce->rule);
+ mlx5_destroy_flow_group(tx_roce->g);
+ mlx5_destroy_flow_table(tx_roce->ft);
+}
+
+#define MLX5_TX_ROCE_GROUP_SIZE BIT(0)
+
+int mlx5_ipsec_fs_roce_tx_create(struct mlx5_core_dev *mdev,
+ struct mlx5_ipsec_fs *ipsec_roce,
+ struct mlx5_flow_table *pol_ft)
+{
+ struct mlx5_flow_table_attr ft_attr = {};
+ struct mlx5_ipsec_tx_roce *roce;
+ struct mlx5_flow_table *ft;
+ struct mlx5_flow_group *g;
+ int ix = 0;
+ int err;
+ u32 *in;
+
+ if (!ipsec_roce)
+ return 0;
+
+ roce = &ipsec_roce->tx;
+
+ in = kvzalloc(MLX5_ST_SZ_BYTES(create_flow_group_in), GFP_KERNEL);
+ if (!in)
+ return -ENOMEM;
+
+ ft_attr.max_fte = 1;
+ ft = mlx5_create_flow_table(roce->ns, &ft_attr);
+ if (IS_ERR(ft)) {
+ err = PTR_ERR(ft);
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec tx ft err=%d\n", err);
+ return err;
+ }
+
+ roce->ft = ft;
+
+ MLX5_SET_CFG(in, start_flow_index, ix);
+ ix += MLX5_TX_ROCE_GROUP_SIZE;
+ MLX5_SET_CFG(in, end_flow_index, ix - 1);
+ g = mlx5_create_flow_group(ft, in);
+ if (IS_ERR(g)) {
+ err = PTR_ERR(g);
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec tx group err=%d\n", err);
+ goto fail;
+ }
+ roce->g = g;
+
+ err = ipsec_fs_roce_tx_rule_setup(mdev, roce, pol_ft);
+ if (err) {
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec tx rules err=%d\n", err);
+ goto rule_fail;
+ }
+
+ return 0;
+
+rule_fail:
+ mlx5_destroy_flow_group(roce->g);
+fail:
+ mlx5_destroy_flow_table(ft);
+ return err;
+}
+
+struct mlx5_flow_table *mlx5_ipsec_fs_roce_ft_get(struct mlx5_ipsec_fs *ipsec_roce, u32 family)
+{
+ struct mlx5_ipsec_rx_roce *rx_roce;
+
+ if (!ipsec_roce)
+ return NULL;
+
+ rx_roce = (family == AF_INET) ? &ipsec_roce->ipv4_rx :
+ &ipsec_roce->ipv6_rx;
+
+ return rx_roce->ft;
+}
+
+void mlx5_ipsec_fs_roce_rx_destroy(struct mlx5_ipsec_fs *ipsec_roce, u32 family)
+{
+ struct mlx5_ipsec_rx_roce *rx_roce;
+
+ if (!ipsec_roce)
+ return;
+
+ rx_roce = (family == AF_INET) ? &ipsec_roce->ipv4_rx :
+ &ipsec_roce->ipv6_rx;
+
+ mlx5_del_flow_rules(rx_roce->roce_miss.rule);
+ mlx5_del_flow_rules(rx_roce->rule);
+ mlx5_destroy_flow_table(rx_roce->ft_rdma);
+ mlx5_destroy_flow_group(rx_roce->roce_miss.group);
+ mlx5_destroy_flow_group(rx_roce->g);
+ mlx5_destroy_flow_table(rx_roce->ft);
+}
+
+#define MLX5_RX_ROCE_GROUP_SIZE BIT(0)
+
+int mlx5_ipsec_fs_roce_rx_create(struct mlx5_core_dev *mdev,
+ struct mlx5_ipsec_fs *ipsec_roce,
+ struct mlx5_flow_namespace *ns,
+ struct mlx5_flow_destination *default_dst,
+ u32 family, u32 level, u32 prio)
+{
+ struct mlx5_flow_table_attr ft_attr = {};
+ struct mlx5_ipsec_rx_roce *roce;
+ struct mlx5_flow_table *ft;
+ struct mlx5_flow_group *g;
+ void *outer_headers_c;
+ int ix = 0;
+ u32 *in;
+ int err;
+ u8 *mc;
+
+ if (!ipsec_roce)
+ return 0;
+
+ roce = (family == AF_INET) ? &ipsec_roce->ipv4_rx :
+ &ipsec_roce->ipv6_rx;
+
+ ft_attr.max_fte = 2;
+ ft_attr.level = level;
+ ft_attr.prio = prio;
+ ft = mlx5_create_flow_table(ns, &ft_attr);
+ if (IS_ERR(ft)) {
+ err = PTR_ERR(ft);
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec rx ft at nic err=%d\n", err);
+ return err;
+ }
+
+ roce->ft = ft;
+
+ in = kvzalloc(MLX5_ST_SZ_BYTES(create_flow_group_in), GFP_KERNEL);
+ if (!in) {
+ err = -ENOMEM;
+ goto fail_nomem;
+ }
+
+ mc = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
+ outer_headers_c = MLX5_ADDR_OF(fte_match_param, mc, outer_headers);
+ MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, ip_protocol);
+ MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, udp_dport);
+
+ MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
+ MLX5_SET_CFG(in, start_flow_index, ix);
+ ix += MLX5_RX_ROCE_GROUP_SIZE;
+ MLX5_SET_CFG(in, end_flow_index, ix - 1);
+ g = mlx5_create_flow_group(ft, in);
+ if (IS_ERR(g)) {
+ err = PTR_ERR(g);
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec rx group at nic err=%d\n", err);
+ goto fail_group;
+ }
+ roce->g = g;
+
+ memset(in, 0, MLX5_ST_SZ_BYTES(create_flow_group_in));
+ MLX5_SET_CFG(in, start_flow_index, ix);
+ ix += MLX5_RX_ROCE_GROUP_SIZE;
+ MLX5_SET_CFG(in, end_flow_index, ix - 1);
+ g = mlx5_create_flow_group(ft, in);
+ if (IS_ERR(g)) {
+ err = PTR_ERR(g);
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec rx miss group at nic err=%d\n", err);
+ goto fail_mgroup;
+ }
+ roce->roce_miss.group = g;
+
+ memset(&ft_attr, 0, sizeof(ft_attr));
+ if (family == AF_INET)
+ ft_attr.level = 1;
+ ft = mlx5_create_flow_table(roce->ns_rdma, &ft_attr);
+ if (IS_ERR(ft)) {
+ err = PTR_ERR(ft);
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec rx ft at rdma err=%d\n", err);
+ goto fail_rdma_table;
+ }
+
+ roce->ft_rdma = ft;
+
+ err = ipsec_fs_roce_rx_rule_setup(mdev, default_dst, roce);
+ if (err) {
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec rx rules err=%d\n", err);
+ goto fail_setup_rule;
+ }
+
+ kvfree(in);
+ return 0;
+
+fail_setup_rule:
+ mlx5_destroy_flow_table(roce->ft_rdma);
+fail_rdma_table:
+ mlx5_destroy_flow_group(roce->roce_miss.group);
+fail_mgroup:
+ mlx5_destroy_flow_group(roce->g);
+fail_group:
+ kvfree(in);
+fail_nomem:
+ mlx5_destroy_flow_table(roce->ft);
+ return err;
+}
+
+void mlx5_ipsec_fs_roce_cleanup(struct mlx5_ipsec_fs *ipsec_roce)
+{
+ kfree(ipsec_roce);
+}
+
+struct mlx5_ipsec_fs *mlx5_ipsec_fs_roce_init(struct mlx5_core_dev *mdev)
+{
+ struct mlx5_ipsec_fs *roce_ipsec;
+ struct mlx5_flow_namespace *ns;
+
+ ns = mlx5_get_flow_namespace(mdev, MLX5_FLOW_NAMESPACE_RDMA_RX_IPSEC);
+ if (!ns) {
+ mlx5_core_err(mdev, "Failed to get RoCE rx ns\n");
+ return NULL;
+ }
+
+ roce_ipsec = kzalloc(sizeof(*roce_ipsec), GFP_KERNEL);
+ if (!roce_ipsec)
+ return NULL;
+
+ roce_ipsec->ipv4_rx.ns_rdma = ns;
+ roce_ipsec->ipv6_rx.ns_rdma = ns;
+
+ ns = mlx5_get_flow_namespace(mdev, MLX5_FLOW_NAMESPACE_RDMA_TX_IPSEC);
+ if (!ns) {
+ mlx5_core_err(mdev, "Failed to get RoCE tx ns\n");
+ goto err_tx;
+ }
+
+ roce_ipsec->tx.ns = ns;
+
+ return roce_ipsec;
+
+err_tx:
+ kfree(roce_ipsec);
+ return NULL;
+}
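The RX half of this file builds a small two-entry classifier in the NIC namespace in front of a table in the RDMA_RX_IPSEC namespace. A sketch of the resulting steering layout, inferred from the code above:

NIC RX table (roce->ft, max_fte = 2)
  entry 0: ip_protocol == UDP && udp_dport == 4791 (RoCE v2)
           -> forward to roce->ft_rdma (RDMA_RX_IPSEC namespace)
  entry 1: match-all miss
           -> forward to default_dst (the regular IPsec RX path)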
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.h
new file mode 100644
index 000000000000..9712d705fe48
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
+
+#ifndef __MLX5_LIB_IPSEC_H__
+#define __MLX5_LIB_IPSEC_H__
+
+struct mlx5_ipsec_fs;
+
+struct mlx5_flow_table *
+mlx5_ipsec_fs_roce_ft_get(struct mlx5_ipsec_fs *ipsec_roce, u32 family);
+void mlx5_ipsec_fs_roce_rx_destroy(struct mlx5_ipsec_fs *ipsec_roce,
+ u32 family);
+int mlx5_ipsec_fs_roce_rx_create(struct mlx5_core_dev *mdev,
+ struct mlx5_ipsec_fs *ipsec_roce,
+ struct mlx5_flow_namespace *ns,
+ struct mlx5_flow_destination *default_dst,
+ u32 family, u32 level, u32 prio);
+void mlx5_ipsec_fs_roce_tx_destroy(struct mlx5_ipsec_fs *ipsec_roce);
+int mlx5_ipsec_fs_roce_tx_create(struct mlx5_core_dev *mdev,
+ struct mlx5_ipsec_fs *ipsec_roce,
+ struct mlx5_flow_table *pol_ft);
+void mlx5_ipsec_fs_roce_cleanup(struct mlx5_ipsec_fs *ipsec_roce);
+struct mlx5_ipsec_fs *mlx5_ipsec_fs_roce_init(struct mlx5_core_dev *mdev);
+
+#endif /* __MLX5_LIB_IPSEC_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
index 032adb21ad4b..ccf12f7db6f0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
@@ -79,28 +79,11 @@ struct mlx5_pme_stats {
void mlx5_get_pme_stats(struct mlx5_core_dev *dev, struct mlx5_pme_stats *stats);
int mlx5_notifier_call_chain(struct mlx5_events *events, unsigned int event, void *data);
-/* Crypto */
-enum {
- MLX5_ACCEL_OBJ_TLS_KEY = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_TYPE_TLS,
- MLX5_ACCEL_OBJ_IPSEC_KEY = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_TYPE_IPSEC,
- MLX5_ACCEL_OBJ_MACSEC_KEY = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_TYPE_MACSEC,
-};
-
-int mlx5_create_encryption_key(struct mlx5_core_dev *mdev,
- void *key, u32 sz_bytes,
- u32 key_type, u32 *p_key_id);
-void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id);
-
static inline struct net *mlx5_core_net(struct mlx5_core_dev *dev)
{
return devlink_net(priv_to_devlink(dev));
}
-static inline void mlx5_uplink_netdev_set(struct mlx5_core_dev *mdev, struct net_device *netdev)
-{
- mdev->mlx5e_res.uplink_netdev = netdev;
-}
-
static inline struct net_device *mlx5_uplink_netdev_get(struct mlx5_core_dev *mdev)
{
return mdev->mlx5e_res.uplink_netdev;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 4e1b5757528a..540840e80493 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -336,6 +336,24 @@ static u16 to_fw_pkey_sz(struct mlx5_core_dev *dev, u32 size)
}
}
+void mlx5_core_uplink_netdev_set(struct mlx5_core_dev *dev, struct net_device *netdev)
+{
+ mutex_lock(&dev->mlx5e_res.uplink_netdev_lock);
+ dev->mlx5e_res.uplink_netdev = netdev;
+ mlx5_blocking_notifier_call_chain(dev, MLX5_DRIVER_EVENT_UPLINK_NETDEV,
+ netdev);
+ mutex_unlock(&dev->mlx5e_res.uplink_netdev_lock);
+}
+
+void mlx5_core_uplink_netdev_event_replay(struct mlx5_core_dev *dev)
+{
+ mutex_lock(&dev->mlx5e_res.uplink_netdev_lock);
+ mlx5_blocking_notifier_call_chain(dev, MLX5_DRIVER_EVENT_UPLINK_NETDEV,
+ dev->mlx5e_res.uplink_netdev);
+ mutex_unlock(&dev->mlx5e_res.uplink_netdev_lock);
+}
+EXPORT_SYMBOL(mlx5_core_uplink_netdev_event_replay);
+
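Consumers can observe uplink netdev changes through the mlx5 blocking notifier chain and then request a replay to pick up the current state. A minimal sketch (assumes the existing mlx5_blocking_notifier_register() helper; the callback and variable names are hypothetical):

static int uplink_netdev_event(struct notifier_block *nb,
			       unsigned long event, void *data)
{
	struct net_device *netdev = data;

	if (event != MLX5_DRIVER_EVENT_UPLINK_NETDEV)
		return NOTIFY_DONE;

	/* react to the (possibly NULL) uplink netdev */
	pr_debug("uplink netdev now %s\n", netdev ? netdev->name : "(none)");
	return NOTIFY_OK;
}

static struct notifier_block uplink_nb = {
	.notifier_call = uplink_netdev_event,
};

/* in an auxiliary driver's init path: */
mlx5_blocking_notifier_register(mdev, &uplink_nb);
mlx5_core_uplink_netdev_event_replay(mdev);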
static int mlx5_core_get_caps_mode(struct mlx5_core_dev *dev,
enum mlx5_cap_type cap_type,
enum mlx5_cap_mode cap_mode)
@@ -484,9 +502,9 @@ static int max_uc_list_get_devlink_param(struct mlx5_core_dev *dev)
union devlink_param_value val;
int err;
- err = devlink_param_driverinit_value_get(devlink,
- DEVLINK_PARAM_GENERIC_ID_MAX_MACS,
- &val);
+ err = devl_param_driverinit_value_get(devlink,
+ DEVLINK_PARAM_GENERIC_ID_MAX_MACS,
+ &val);
if (!err)
return val.vu32;
mlx5_core_dbg(dev, "Failed to get param. err = %d\n", err);
@@ -499,9 +517,9 @@ bool mlx5_is_roce_on(struct mlx5_core_dev *dev)
union devlink_param_value val;
int err;
- err = devlink_param_driverinit_value_get(devlink,
- DEVLINK_PARAM_GENERIC_ID_ENABLE_ROCE,
- &val);
+ err = devl_param_driverinit_value_get(devlink,
+ DEVLINK_PARAM_GENERIC_ID_ENABLE_ROCE,
+ &val);
if (!err)
return val.vbool;
@@ -1390,9 +1408,9 @@ int mlx5_init_one(struct mlx5_core_dev *dev)
set_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state);
- err = mlx5_devlink_register(priv_to_devlink(dev));
+ err = mlx5_devlink_params_register(priv_to_devlink(dev));
if (err)
- goto err_devlink_reg;
+ goto err_devlink_params_reg;
err = mlx5_register_device(dev);
if (err)
@@ -1403,8 +1421,8 @@ int mlx5_init_one(struct mlx5_core_dev *dev)
return 0;
err_register:
- mlx5_devlink_unregister(priv_to_devlink(dev));
-err_devlink_reg:
+ mlx5_devlink_params_unregister(priv_to_devlink(dev));
+err_devlink_params_reg:
clear_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state);
mlx5_unload(dev);
err_load:
@@ -1426,7 +1444,7 @@ void mlx5_uninit_one(struct mlx5_core_dev *dev)
mutex_lock(&dev->intf_state_mutex);
mlx5_unregister_device(dev);
- mlx5_devlink_unregister(priv_to_devlink(dev));
+ mlx5_devlink_params_unregister(priv_to_devlink(dev));
if (!test_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state)) {
mlx5_core_warn(dev, "%s: interface is down, NOP\n",
@@ -1491,23 +1509,23 @@ out:
return err;
}
-int mlx5_load_one(struct mlx5_core_dev *dev, bool recovery)
+int mlx5_load_one(struct mlx5_core_dev *dev)
{
struct devlink *devlink = priv_to_devlink(dev);
int ret;
devl_lock(devlink);
- ret = mlx5_load_one_devl_locked(dev, recovery);
+ ret = mlx5_load_one_devl_locked(dev, false);
devl_unlock(devlink);
return ret;
}
-void mlx5_unload_one_devl_locked(struct mlx5_core_dev *dev)
+void mlx5_unload_one_devl_locked(struct mlx5_core_dev *dev, bool suspend)
{
devl_assert_locked(priv_to_devlink(dev));
mutex_lock(&dev->intf_state_mutex);
- mlx5_detach_device(dev);
+ mlx5_detach_device(dev, suspend);
if (!test_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state)) {
mlx5_core_warn(dev, "%s: interface is down, NOP\n",
@@ -1522,12 +1540,12 @@ out:
mutex_unlock(&dev->intf_state_mutex);
}
-void mlx5_unload_one(struct mlx5_core_dev *dev)
+void mlx5_unload_one(struct mlx5_core_dev *dev, bool suspend)
{
struct devlink *devlink = priv_to_devlink(dev);
devl_lock(devlink);
- mlx5_unload_one_devl_locked(dev);
+ mlx5_unload_one_devl_locked(dev, suspend);
devl_unlock(devlink);
}
@@ -1555,6 +1573,7 @@ static const int types[] = {
MLX5_CAP_DEV_SHAMPO,
MLX5_CAP_MACSEC,
MLX5_CAP_ADV_VIRTUALIZATION,
+ MLX5_CAP_CRYPTO,
};
static void mlx5_hca_caps_free(struct mlx5_core_dev *dev)
@@ -1608,6 +1627,7 @@ int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx)
lockdep_register_key(&dev->lock_key);
mutex_init(&dev->intf_state_mutex);
lockdep_set_class(&dev->intf_state_mutex, &dev->lock_key);
+ mutex_init(&dev->mlx5e_res.uplink_netdev_lock);
mutex_init(&priv->bfregs.reg_head.lock);
mutex_init(&priv->bfregs.wc_head.lock);
@@ -1696,6 +1716,7 @@ void mlx5_mdev_uninit(struct mlx5_core_dev *dev)
mutex_destroy(&priv->alloc_mutex);
mutex_destroy(&priv->bfregs.wc_head.lock);
mutex_destroy(&priv->bfregs.reg_head.lock);
+ mutex_destroy(&dev->mlx5e_res.uplink_netdev_lock);
mutex_destroy(&dev->intf_state_mutex);
lockdep_unregister_key(&dev->lock_key);
}
@@ -1809,7 +1830,7 @@ static pci_ers_result_t mlx5_pci_err_detected(struct pci_dev *pdev,
mlx5_enter_error_state(dev, false);
mlx5_error_sw_reset(dev);
- mlx5_unload_one(dev);
+ mlx5_unload_one(dev, true);
mlx5_drain_health_wq(dev);
mlx5_pci_disable_device(dev);
@@ -1891,8 +1912,7 @@ static void mlx5_pci_resume(struct pci_dev *pdev)
mlx5_pci_trace(dev, "Enter, loading driver..\n");
- err = mlx5_load_one(dev, false);
-
+ err = mlx5_load_one(dev);
if (!err)
devlink_health_reporter_state_update(dev->priv.health.fw_fatal_reporter,
DEVLINK_HEALTH_REPORTER_STATE_HEALTHY);
@@ -1966,7 +1986,7 @@ static void shutdown(struct pci_dev *pdev)
set_bit(MLX5_BREAK_FW_WAIT, &dev->intf_state);
err = mlx5_try_fast_unload(dev);
if (err)
- mlx5_unload_one(dev);
+ mlx5_unload_one(dev, false);
mlx5_pci_disable_device(dev);
}
@@ -1974,7 +1994,7 @@ static int mlx5_suspend(struct pci_dev *pdev, pm_message_t state)
{
struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
- mlx5_unload_one(dev);
+ mlx5_unload_one(dev, true);
return 0;
}
@@ -1983,7 +2003,7 @@ static int mlx5_resume(struct pci_dev *pdev)
{
struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
- return mlx5_load_one(dev, false);
+ return mlx5_load_one(dev);
}
static const struct pci_device_id mlx5_core_pci_table[] = {
@@ -2017,7 +2037,7 @@ MODULE_DEVICE_TABLE(pci, mlx5_core_pci_table);
void mlx5_disable_device(struct mlx5_core_dev *dev)
{
mlx5_error_sw_reset(dev);
- mlx5_unload_one_devl_locked(dev);
+ mlx5_unload_one_devl_locked(dev, false);
}
int mlx5_recover_device(struct mlx5_core_dev *dev)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
index 029305a8b80a..be0785f83083 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
@@ -236,7 +236,7 @@ void mlx5_adev_cleanup(struct mlx5_core_dev *dev);
int mlx5_adev_init(struct mlx5_core_dev *dev);
int mlx5_attach_device(struct mlx5_core_dev *dev);
-void mlx5_detach_device(struct mlx5_core_dev *dev);
+void mlx5_detach_device(struct mlx5_core_dev *dev, bool suspend);
int mlx5_register_device(struct mlx5_core_dev *dev);
void mlx5_unregister_device(struct mlx5_core_dev *dev);
struct mlx5_core_dev *mlx5_get_next_phys_dev_lag(struct mlx5_core_dev *dev);
@@ -319,9 +319,9 @@ int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx);
void mlx5_mdev_uninit(struct mlx5_core_dev *dev);
int mlx5_init_one(struct mlx5_core_dev *dev);
void mlx5_uninit_one(struct mlx5_core_dev *dev);
-void mlx5_unload_one(struct mlx5_core_dev *dev);
-void mlx5_unload_one_devl_locked(struct mlx5_core_dev *dev);
-int mlx5_load_one(struct mlx5_core_dev *dev, bool recovery);
+void mlx5_unload_one(struct mlx5_core_dev *dev, bool suspend);
+void mlx5_unload_one_devl_locked(struct mlx5_core_dev *dev, bool suspend);
+int mlx5_load_one(struct mlx5_core_dev *dev);
int mlx5_load_one_devl_locked(struct mlx5_core_dev *dev, bool recovery);
int mlx5_vport_set_other_func_cap(struct mlx5_core_dev *dev, const void *hca_cap, u16 function_id,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
index 0eb50be175cc..64d4e7125e9b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
@@ -219,7 +219,8 @@ static int alloc_4k(struct mlx5_core_dev *dev, u64 *addr, u32 function)
n = find_first_bit(&fp->bitmask, 8 * sizeof(fp->bitmask));
if (n >= MLX5_NUM_4K_IN_PAGE) {
- mlx5_core_warn(dev, "alloc 4k bug\n");
+ mlx5_core_warn(dev, "alloc 4k bug: fw page = 0x%llx, n = %u, bitmask: %lu, max num of 4K pages: %d\n",
+ fp->addr, n, fp->bitmask, MLX5_NUM_4K_IN_PAGE);
return -ENOENT;
}
clear_bit(n, &fp->bitmask);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
index 7b4783ce213e..a7377619ba6f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
@@ -74,7 +74,7 @@ static void mlx5_sf_dev_shutdown(struct auxiliary_device *adev)
{
struct mlx5_sf_dev *sf_dev = container_of(adev, struct mlx5_sf_dev, adev);
- mlx5_unload_one(sf_dev->mdev);
+ mlx5_unload_one(sf_dev->mdev, false);
}
static const struct auxiliary_device_id mlx5_sf_dev_id_table[] = {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
index a4476cb4c3b3..fd2d31cdbcf9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
@@ -724,7 +724,6 @@ int mlx5dr_send_postsend_action(struct mlx5dr_domain *dmn,
struct mlx5dr_action *action)
{
struct postsend_info send_info = {};
- int ret;
send_info.write.addr = (uintptr_t)action->rewrite->data;
send_info.write.length = action->rewrite->num_of_actions *
@@ -734,9 +733,7 @@ int mlx5dr_send_postsend_action(struct mlx5dr_domain *dmn,
mlx5dr_icm_pool_get_chunk_mr_addr(action->rewrite->chunk);
send_info.rkey = mlx5dr_icm_pool_get_chunk_rkey(action->rewrite->chunk);
- ret = dr_postsend_icm_data(dmn, &send_info);
-
- return ret;
+ return dr_postsend_icm_data(dmn, &send_info);
}
static int dr_modify_qp_rst2init(struct mlx5_core_dev *mdev,
diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h
index 5a1027b07215..a453b9cd9033 100644
--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h
+++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h
@@ -14,6 +14,7 @@
#include <linux/irqreturn.h>
#include <linux/netdevice.h>
#include <linux/irq.h>
+#include <linux/phy.h>
/* The silicon design supports a maximum RX ring size of
* 32K entries. Based on current testing this maximum size
@@ -67,6 +68,29 @@ struct mlxbf_gige_stats {
u64 rx_filter_discard_pkts;
};
+struct mlxbf_gige_reg_param {
+ u32 mask;
+ u32 shift;
+};
+
+struct mlxbf_gige_mdio_gw {
+ u32 gw_address;
+ u32 read_data_address;
+ struct mlxbf_gige_reg_param busy;
+ struct mlxbf_gige_reg_param write_data;
+ struct mlxbf_gige_reg_param read_data;
+ struct mlxbf_gige_reg_param devad;
+ struct mlxbf_gige_reg_param partad;
+ struct mlxbf_gige_reg_param opcode;
+ struct mlxbf_gige_reg_param st1;
+};
+
+struct mlxbf_gige_link_cfg {
+ void (*set_phy_link_mode)(struct phy_device *phydev);
+ void (*adjust_link)(struct net_device *netdev);
+ phy_interface_t phy_mode;
+};
+
struct mlxbf_gige {
void __iomem *base;
void __iomem *llu_base;
@@ -102,6 +126,9 @@ struct mlxbf_gige {
u8 valid_polarity;
struct napi_struct napi;
struct mlxbf_gige_stats stats;
+ u8 hw_version;
+ struct mlxbf_gige_mdio_gw *mdio_gw;
+ int prev_speed;
};
/* Rx Work Queue Element definitions */
diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_ethtool.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_ethtool.c
index 41ebef25a930..253d7ad9b809 100644
--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_ethtool.c
@@ -135,4 +135,5 @@ const struct ethtool_ops mlxbf_gige_ethtool_ops = {
.nway_reset = phy_ethtool_nway_reset,
.get_pauseparam = mlxbf_gige_get_pauseparam,
.get_link_ksettings = phy_ethtool_get_link_ksettings,
+ .set_link_ksettings = phy_ethtool_set_link_ksettings,
};
diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
index 2292d63a279c..694de9513b9f 100644
--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
+++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
@@ -205,7 +205,7 @@ static int mlxbf_gige_stop(struct net_device *netdev)
}
static int mlxbf_gige_eth_ioctl(struct net_device *netdev,
- struct ifreq *ifr, int cmd)
+ struct ifreq *ifr, int cmd)
{
if (!(netif_running(netdev)))
return -EINVAL;
@@ -263,13 +263,99 @@ static const struct net_device_ops mlxbf_gige_netdev_ops = {
.ndo_get_stats64 = mlxbf_gige_get_stats64,
};
-static void mlxbf_gige_adjust_link(struct net_device *netdev)
+static void mlxbf_gige_bf2_adjust_link(struct net_device *netdev)
{
struct phy_device *phydev = netdev->phydev;
phy_print_status(phydev);
}
+static void mlxbf_gige_bf3_adjust_link(struct net_device *netdev)
+{
+ struct mlxbf_gige *priv = netdev_priv(netdev);
+ struct phy_device *phydev = netdev->phydev;
+ u8 sgmii_mode;
+ u16 ipg_size;
+ u32 val;
+
+ if (phydev->link && phydev->speed != priv->prev_speed) {
+ switch (phydev->speed) {
+ case 1000:
+ ipg_size = MLXBF_GIGE_1G_IPG_SIZE;
+ sgmii_mode = MLXBF_GIGE_1G_SGMII_MODE;
+ break;
+ case 100:
+ ipg_size = MLXBF_GIGE_100M_IPG_SIZE;
+ sgmii_mode = MLXBF_GIGE_100M_SGMII_MODE;
+ break;
+ case 10:
+ ipg_size = MLXBF_GIGE_10M_IPG_SIZE;
+ sgmii_mode = MLXBF_GIGE_10M_SGMII_MODE;
+ break;
+ default:
+ return;
+ }
+
+ val = readl(priv->plu_base + MLXBF_GIGE_PLU_TX_REG0);
+ val &= ~(MLXBF_GIGE_PLU_TX_IPG_SIZE_MASK | MLXBF_GIGE_PLU_TX_SGMII_MODE_MASK);
+ val |= FIELD_PREP(MLXBF_GIGE_PLU_TX_IPG_SIZE_MASK, ipg_size);
+ val |= FIELD_PREP(MLXBF_GIGE_PLU_TX_SGMII_MODE_MASK, sgmii_mode);
+ writel(val, priv->plu_base + MLXBF_GIGE_PLU_TX_REG0);
+
+ val = readl(priv->plu_base + MLXBF_GIGE_PLU_RX_REG0);
+ val &= ~MLXBF_GIGE_PLU_RX_SGMII_MODE_MASK;
+ val |= FIELD_PREP(MLXBF_GIGE_PLU_RX_SGMII_MODE_MASK, sgmii_mode);
+ writel(val, priv->plu_base + MLXBF_GIGE_PLU_RX_REG0);
+
+ priv->prev_speed = phydev->speed;
+ }
+
+ phy_print_status(phydev);
+}
+
+static void mlxbf_gige_bf2_set_phy_link_mode(struct phy_device *phydev)
+{
+ /* MAC only supports 1000T full duplex mode */
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Half_BIT);
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Full_BIT);
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT);
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Full_BIT);
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Half_BIT);
+
+ /* Only symmetric pause with flow control enabled is supported so no
+ * need to negotiate pause.
+ */
+ linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->advertising);
+ linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, phydev->advertising);
+}
+
+static void mlxbf_gige_bf3_set_phy_link_mode(struct phy_device *phydev)
+{
+ /* MAC only supports full duplex mode */
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Half_BIT);
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT);
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Half_BIT);
+
+ /* Only symmetric pause with flow control enabled is supported so no
+ * need to negotiate pause.
+ */
+ linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->advertising);
+ linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, phydev->advertising);
+}
+
+static struct mlxbf_gige_link_cfg mlxbf_gige_link_cfgs[] = {
+ [MLXBF_GIGE_VERSION_BF2] = {
+ .set_phy_link_mode = mlxbf_gige_bf2_set_phy_link_mode,
+ .adjust_link = mlxbf_gige_bf2_adjust_link,
+ .phy_mode = PHY_INTERFACE_MODE_GMII
+ },
+ [MLXBF_GIGE_VERSION_BF3] = {
+ .set_phy_link_mode = mlxbf_gige_bf3_set_phy_link_mode,
+ .adjust_link = mlxbf_gige_bf3_adjust_link,
+ .phy_mode = PHY_INTERFACE_MODE_SGMII
+ }
+};
+
static int mlxbf_gige_probe(struct platform_device *pdev)
{
struct phy_device *phydev;
@@ -315,6 +401,8 @@ static int mlxbf_gige_probe(struct platform_device *pdev)
spin_lock_init(&priv->lock);
+ priv->hw_version = readq(base + MLXBF_GIGE_VERSION);
+
/* Attach MDIO device */
err = mlxbf_gige_mdio_probe(pdev, priv);
if (err)
@@ -357,25 +445,14 @@ static int mlxbf_gige_probe(struct platform_device *pdev)
phydev->irq = phy_irq;
err = phy_connect_direct(netdev, phydev,
- mlxbf_gige_adjust_link,
- PHY_INTERFACE_MODE_GMII);
+ mlxbf_gige_link_cfgs[priv->hw_version].adjust_link,
+ mlxbf_gige_link_cfgs[priv->hw_version].phy_mode);
if (err) {
dev_err(&pdev->dev, "Could not attach to PHY\n");
goto out;
}
- /* MAC only supports 1000T full duplex mode */
- phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Half_BIT);
- phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Full_BIT);
- phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT);
- phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Full_BIT);
- phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Half_BIT);
-
- /* Only symmetric pause with flow control enabled is supported so no
- * need to negotiate pause.
- */
- linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->advertising);
- linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, phydev->advertising);
+ mlxbf_gige_link_cfgs[priv->hw_version].set_phy_link_mode(phydev);
/* Display information about attached PHY device */
phy_attached_info(phydev);
diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c
index aa780b1614a3..654190263535 100644
--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c
+++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c
@@ -23,9 +23,75 @@
#include "mlxbf_gige.h"
#include "mlxbf_gige_regs.h"
+#include "mlxbf_gige_mdio_bf2.h"
+#include "mlxbf_gige_mdio_bf3.h"
-#define MLXBF_GIGE_MDIO_GW_OFFSET 0x0
-#define MLXBF_GIGE_MDIO_CFG_OFFSET 0x4
+static struct mlxbf_gige_mdio_gw mlxbf_gige_mdio_gw_t[] = {
+ [MLXBF_GIGE_VERSION_BF2] = {
+ .gw_address = MLXBF2_GIGE_MDIO_GW_OFFSET,
+ .read_data_address = MLXBF2_GIGE_MDIO_GW_OFFSET,
+ .busy = {
+ .mask = MLXBF2_GIGE_MDIO_GW_BUSY_MASK,
+ .shift = MLXBF2_GIGE_MDIO_GW_BUSY_SHIFT,
+ },
+ .read_data = {
+ .mask = MLXBF2_GIGE_MDIO_GW_AD_MASK,
+ .shift = MLXBF2_GIGE_MDIO_GW_AD_SHIFT,
+ },
+ .write_data = {
+ .mask = MLXBF2_GIGE_MDIO_GW_AD_MASK,
+ .shift = MLXBF2_GIGE_MDIO_GW_AD_SHIFT,
+ },
+ .devad = {
+ .mask = MLXBF2_GIGE_MDIO_GW_DEVAD_MASK,
+ .shift = MLXBF2_GIGE_MDIO_GW_DEVAD_SHIFT,
+ },
+ .partad = {
+ .mask = MLXBF2_GIGE_MDIO_GW_PARTAD_MASK,
+ .shift = MLXBF2_GIGE_MDIO_GW_PARTAD_SHIFT,
+ },
+ .opcode = {
+ .mask = MLXBF2_GIGE_MDIO_GW_OPCODE_MASK,
+ .shift = MLXBF2_GIGE_MDIO_GW_OPCODE_SHIFT,
+ },
+ .st1 = {
+ .mask = MLXBF2_GIGE_MDIO_GW_ST1_MASK,
+ .shift = MLXBF2_GIGE_MDIO_GW_ST1_SHIFT,
+ },
+ },
+ [MLXBF_GIGE_VERSION_BF3] = {
+ .gw_address = MLXBF3_GIGE_MDIO_GW_OFFSET,
+ .read_data_address = MLXBF3_GIGE_MDIO_DATA_READ,
+ .busy = {
+ .mask = MLXBF3_GIGE_MDIO_GW_BUSY_MASK,
+ .shift = MLXBF3_GIGE_MDIO_GW_BUSY_SHIFT,
+ },
+ .read_data = {
+ .mask = MLXBF3_GIGE_MDIO_GW_DATA_READ_MASK,
+ .shift = MLXBF3_GIGE_MDIO_GW_DATA_READ_SHIFT,
+ },
+ .write_data = {
+ .mask = MLXBF3_GIGE_MDIO_GW_DATA_MASK,
+ .shift = MLXBF3_GIGE_MDIO_GW_DATA_SHIFT,
+ },
+ .devad = {
+ .mask = MLXBF3_GIGE_MDIO_GW_DEVAD_MASK,
+ .shift = MLXBF3_GIGE_MDIO_GW_DEVAD_SHIFT,
+ },
+ .partad = {
+ .mask = MLXBF3_GIGE_MDIO_GW_PARTAD_MASK,
+ .shift = MLXBF3_GIGE_MDIO_GW_PARTAD_SHIFT,
+ },
+ .opcode = {
+ .mask = MLXBF3_GIGE_MDIO_GW_OPCODE_MASK,
+ .shift = MLXBF3_GIGE_MDIO_GW_OPCODE_SHIFT,
+ },
+ .st1 = {
+ .mask = MLXBF3_GIGE_MDIO_GW_ST1_MASK,
+ .shift = MLXBF3_GIGE_MDIO_GW_ST1_SHIFT,
+ },
+ },
+};
#define MLXBF_GIGE_MDIO_FREQ_REFERENCE 156250000ULL
#define MLXBF_GIGE_MDIO_COREPLL_CONST 16384ULL
@@ -47,30 +113,10 @@
/* Busy bit is set by software and cleared by hardware */
#define MLXBF_GIGE_MDIO_SET_BUSY 0x1
-/* MDIO GW register bits */
-#define MLXBF_GIGE_MDIO_GW_AD_MASK GENMASK(15, 0)
-#define MLXBF_GIGE_MDIO_GW_DEVAD_MASK GENMASK(20, 16)
-#define MLXBF_GIGE_MDIO_GW_PARTAD_MASK GENMASK(25, 21)
-#define MLXBF_GIGE_MDIO_GW_OPCODE_MASK GENMASK(27, 26)
-#define MLXBF_GIGE_MDIO_GW_ST1_MASK GENMASK(28, 28)
-#define MLXBF_GIGE_MDIO_GW_BUSY_MASK GENMASK(30, 30)
-
-/* MDIO config register bits */
-#define MLXBF_GIGE_MDIO_CFG_MDIO_MODE_MASK GENMASK(1, 0)
-#define MLXBF_GIGE_MDIO_CFG_MDIO3_3_MASK GENMASK(2, 2)
-#define MLXBF_GIGE_MDIO_CFG_MDIO_FULL_DRIVE_MASK GENMASK(4, 4)
-#define MLXBF_GIGE_MDIO_CFG_MDC_PERIOD_MASK GENMASK(15, 8)
-#define MLXBF_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK GENMASK(23, 16)
-#define MLXBF_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK GENMASK(31, 24)
-
-#define MLXBF_GIGE_MDIO_CFG_VAL (FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_MODE_MASK, 1) | \
- FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO3_3_MASK, 1) | \
- FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_FULL_DRIVE_MASK, 1) | \
- FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK, 6) | \
- FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK, 13))
-
#define MLXBF_GIGE_BF2_COREPLL_ADDR 0x02800c30
#define MLXBF_GIGE_BF2_COREPLL_SIZE 0x0000000c
+#define MLXBF_GIGE_BF3_COREPLL_ADDR 0x13409824
+#define MLXBF_GIGE_BF3_COREPLL_SIZE 0x00000010
static struct resource corepll_params[] = {
[MLXBF_GIGE_VERSION_BF2] = {
@@ -78,6 +124,11 @@ static struct resource corepll_params[] = {
.end = MLXBF_GIGE_BF2_COREPLL_ADDR + MLXBF_GIGE_BF2_COREPLL_SIZE - 1,
.name = "COREPLL_RES"
},
+ [MLXBF_GIGE_VERSION_BF3] = {
+ .start = MLXBF_GIGE_BF3_COREPLL_ADDR,
+ .end = MLXBF_GIGE_BF3_COREPLL_ADDR + MLXBF_GIGE_BF3_COREPLL_SIZE - 1,
+ .name = "COREPLL_RES"
+ }
};
/* Returns core clock i1clk in Hz */
@@ -134,19 +185,23 @@ static u8 mdio_period_map(struct mlxbf_gige *priv)
return mdio_period;
}
-static u32 mlxbf_gige_mdio_create_cmd(u16 data, int phy_add,
+static u32 mlxbf_gige_mdio_create_cmd(struct mlxbf_gige_mdio_gw *mdio_gw, u16 data, int phy_add,
int phy_reg, u32 opcode)
{
u32 gw_reg = 0;
- gw_reg |= FIELD_PREP(MLXBF_GIGE_MDIO_GW_AD_MASK, data);
- gw_reg |= FIELD_PREP(MLXBF_GIGE_MDIO_GW_DEVAD_MASK, phy_reg);
- gw_reg |= FIELD_PREP(MLXBF_GIGE_MDIO_GW_PARTAD_MASK, phy_add);
- gw_reg |= FIELD_PREP(MLXBF_GIGE_MDIO_GW_OPCODE_MASK, opcode);
- gw_reg |= FIELD_PREP(MLXBF_GIGE_MDIO_GW_ST1_MASK,
- MLXBF_GIGE_MDIO_CL22_ST1);
- gw_reg |= FIELD_PREP(MLXBF_GIGE_MDIO_GW_BUSY_MASK,
- MLXBF_GIGE_MDIO_SET_BUSY);
+ gw_reg |= ((data << mdio_gw->write_data.shift) &
+ mdio_gw->write_data.mask);
+ gw_reg |= ((phy_reg << mdio_gw->devad.shift) &
+ mdio_gw->devad.mask);
+ gw_reg |= ((phy_add << mdio_gw->partad.shift) &
+ mdio_gw->partad.mask);
+ gw_reg |= ((opcode << mdio_gw->opcode.shift) &
+ mdio_gw->opcode.mask);
+ gw_reg |= ((MLXBF_GIGE_MDIO_CL22_ST1 << mdio_gw->st1.shift) &
+ mdio_gw->st1.mask);
+ gw_reg |= ((MLXBF_GIGE_MDIO_SET_BUSY << mdio_gw->busy.shift) &
+ mdio_gw->busy.mask);
return gw_reg;
}
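FIELD_PREP() requires a compile-time-constant mask, which is why the rework above open-codes the shift-and-mask for the per-generation register parameters. A small helper could factor out the repetition (a hypothetical refactor, not part of this patch):

static u32 mlxbf_gige_reg_field_prep(const struct mlxbf_gige_reg_param *p,
				     u32 val)
{
	return (val << p->shift) & p->mask;
}

/* e.g.: gw_reg |= mlxbf_gige_reg_field_prep(&mdio_gw->write_data, data); */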
@@ -158,29 +213,27 @@ static int mlxbf_gige_mdio_read(struct mii_bus *bus, int phy_add, int phy_reg)
int ret;
u32 val;
- if (phy_reg & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
/* Send mdio read request */
- cmd = mlxbf_gige_mdio_create_cmd(0, phy_add, phy_reg, MLXBF_GIGE_MDIO_CL22_READ);
+ cmd = mlxbf_gige_mdio_create_cmd(priv->mdio_gw, 0, phy_add, phy_reg,
+ MLXBF_GIGE_MDIO_CL22_READ);
- writel(cmd, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET);
+ writel(cmd, priv->mdio_io + priv->mdio_gw->gw_address);
- ret = readl_poll_timeout_atomic(priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET,
- val, !(val & MLXBF_GIGE_MDIO_GW_BUSY_MASK),
+ ret = readl_poll_timeout_atomic(priv->mdio_io + priv->mdio_gw->gw_address,
+ val, !(val & priv->mdio_gw->busy.mask),
5, 1000000);
if (ret) {
- writel(0, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET);
+ writel(0, priv->mdio_io + priv->mdio_gw->gw_address);
return ret;
}
- ret = readl(priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET);
+ ret = readl(priv->mdio_io + priv->mdio_gw->read_data_address);
/* Only return ad bits of the gw register */
- ret &= MLXBF_GIGE_MDIO_GW_AD_MASK;
+ ret &= priv->mdio_gw->read_data.mask;
/* The MDIO lock is set on read. To release it, clear gw register */
- writel(0, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET);
+ writel(0, priv->mdio_io + priv->mdio_gw->gw_address);
return ret;
}
@@ -193,21 +246,18 @@ static int mlxbf_gige_mdio_write(struct mii_bus *bus, int phy_add,
u32 cmd;
int ret;
- if (phy_reg & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
/* Send mdio write request */
- cmd = mlxbf_gige_mdio_create_cmd(val, phy_add, phy_reg,
+ cmd = mlxbf_gige_mdio_create_cmd(priv->mdio_gw, val, phy_add, phy_reg,
MLXBF_GIGE_MDIO_CL22_WRITE);
- writel(cmd, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET);
+ writel(cmd, priv->mdio_io + priv->mdio_gw->gw_address);
/* If the poll timed out, drop the request */
- ret = readl_poll_timeout_atomic(priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET,
- temp, !(temp & MLXBF_GIGE_MDIO_GW_BUSY_MASK),
+ ret = readl_poll_timeout_atomic(priv->mdio_io + priv->mdio_gw->gw_address,
+ temp, !(temp & priv->mdio_gw->busy.mask),
5, 1000000);
/* The MDIO lock is set on read. To release it, clear gw register */
- writel(0, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET);
+ writel(0, priv->mdio_io + priv->mdio_gw->gw_address);
return ret;
}
@@ -219,9 +269,20 @@ static void mlxbf_gige_mdio_cfg(struct mlxbf_gige *priv)
mdio_period = mdio_period_map(priv);
- val = MLXBF_GIGE_MDIO_CFG_VAL;
- val |= FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDC_PERIOD_MASK, mdio_period);
- writel(val, priv->mdio_io + MLXBF_GIGE_MDIO_CFG_OFFSET);
+ if (priv->hw_version == MLXBF_GIGE_VERSION_BF2) {
+ val = MLXBF2_GIGE_MDIO_CFG_VAL;
+ val |= FIELD_PREP(MLXBF2_GIGE_MDIO_CFG_MDC_PERIOD_MASK, mdio_period);
+ writel(val, priv->mdio_io + MLXBF2_GIGE_MDIO_CFG_OFFSET);
+ } else {
+ val = FIELD_PREP(MLXBF3_GIGE_MDIO_CFG_MDIO_MODE_MASK, 1) |
+ FIELD_PREP(MLXBF3_GIGE_MDIO_CFG_MDIO_FULL_DRIVE_MASK, 1);
+ writel(val, priv->mdio_io + MLXBF3_GIGE_MDIO_CFG_REG0);
+ val = FIELD_PREP(MLXBF3_GIGE_MDIO_CFG_MDC_PERIOD_MASK, mdio_period);
+ writel(val, priv->mdio_io + MLXBF3_GIGE_MDIO_CFG_REG1);
+ val = FIELD_PREP(MLXBF3_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK, 6) |
+ FIELD_PREP(MLXBF3_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK, 13);
+ writel(val, priv->mdio_io + MLXBF3_GIGE_MDIO_CFG_REG2);
+ }
}
int mlxbf_gige_mdio_probe(struct platform_device *pdev, struct mlxbf_gige *priv)
@@ -230,6 +291,9 @@ int mlxbf_gige_mdio_probe(struct platform_device *pdev, struct mlxbf_gige *priv)
struct resource *res;
int ret;
+ if (priv->hw_version > MLXBF_GIGE_VERSION_BF3)
+ return -ENODEV;
+
priv->mdio_io = devm_platform_ioremap_resource(pdev, MLXBF_GIGE_RES_MDIO9);
if (IS_ERR(priv->mdio_io))
return PTR_ERR(priv->mdio_io);
@@ -242,13 +306,15 @@ int mlxbf_gige_mdio_probe(struct platform_device *pdev, struct mlxbf_gige *priv)
/* For backward compatibility with older ACPI tables, also keep
* CLK resource internal to the driver.
*/
- res = &corepll_params[MLXBF_GIGE_VERSION_BF2];
+ res = &corepll_params[priv->hw_version];
}
priv->clk_io = devm_ioremap(dev, res->start, resource_size(res));
if (!priv->clk_io)
return -ENOMEM;
+ priv->mdio_gw = &mlxbf_gige_mdio_gw_t[priv->hw_version];
+
mlxbf_gige_mdio_cfg(priv);
priv->mdiobus = devm_mdiobus_alloc(dev);
diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio_bf2.h b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio_bf2.h
new file mode 100644
index 000000000000..7f1ff0ac7699
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio_bf2.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0-only OR BSD-3-Clause */
+
+/* MDIO support for Mellanox Gigabit Ethernet driver
+ *
+ * Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES, ALL RIGHTS RESERVED.
+ *
+ * This software product is a proprietary product of NVIDIA CORPORATION &
+ * AFFILIATES (the "Company") and all right, title, and interest in and to the
+ * software product, including all associated intellectual property rights, are
+ * and shall remain exclusively with the Company.
+ *
+ * This software product is governed by the End User License Agreement
+ * provided with the software product.
+ */
+
+#ifndef __MLXBF_GIGE_MDIO_BF2_H__
+#define __MLXBF_GIGE_MDIO_BF2_H__
+
+#include <linux/bitfield.h>
+
+#define MLXBF2_GIGE_MDIO_GW_OFFSET 0x0
+#define MLXBF2_GIGE_MDIO_CFG_OFFSET 0x4
+
+/* MDIO GW register bits */
+#define MLXBF2_GIGE_MDIO_GW_AD_MASK GENMASK(15, 0)
+#define MLXBF2_GIGE_MDIO_GW_DEVAD_MASK GENMASK(20, 16)
+#define MLXBF2_GIGE_MDIO_GW_PARTAD_MASK GENMASK(25, 21)
+#define MLXBF2_GIGE_MDIO_GW_OPCODE_MASK GENMASK(27, 26)
+#define MLXBF2_GIGE_MDIO_GW_ST1_MASK GENMASK(28, 28)
+#define MLXBF2_GIGE_MDIO_GW_BUSY_MASK GENMASK(30, 30)
+
+#define MLXBF2_GIGE_MDIO_GW_AD_SHIFT 0
+#define MLXBF2_GIGE_MDIO_GW_DEVAD_SHIFT 16
+#define MLXBF2_GIGE_MDIO_GW_PARTAD_SHIFT 21
+#define MLXBF2_GIGE_MDIO_GW_OPCODE_SHIFT 26
+#define MLXBF2_GIGE_MDIO_GW_ST1_SHIFT 28
+#define MLXBF2_GIGE_MDIO_GW_BUSY_SHIFT 30
+
+/* MDIO config register bits */
+#define MLXBF2_GIGE_MDIO_CFG_MDIO_MODE_MASK GENMASK(1, 0)
+#define MLXBF2_GIGE_MDIO_CFG_MDIO3_3_MASK GENMASK(2, 2)
+#define MLXBF2_GIGE_MDIO_CFG_MDIO_FULL_DRIVE_MASK GENMASK(4, 4)
+#define MLXBF2_GIGE_MDIO_CFG_MDC_PERIOD_MASK GENMASK(15, 8)
+#define MLXBF2_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK GENMASK(23, 16)
+#define MLXBF2_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK GENMASK(31, 24)
+
+#define MLXBF2_GIGE_MDIO_CFG_VAL (FIELD_PREP(MLXBF2_GIGE_MDIO_CFG_MDIO_MODE_MASK, 1) | \
+ FIELD_PREP(MLXBF2_GIGE_MDIO_CFG_MDIO3_3_MASK, 1) | \
+ FIELD_PREP(MLXBF2_GIGE_MDIO_CFG_MDIO_FULL_DRIVE_MASK, 1) | \
+ FIELD_PREP(MLXBF2_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK, 6) | \
+ FIELD_PREP(MLXBF2_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK, 13))
+
+#endif /* __MLXBF_GIGE_MDIO_BF2_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio_bf3.h b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio_bf3.h
new file mode 100644
index 000000000000..9dd9144b9173
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio_bf3.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0-only OR BSD-3-Clause */
+
+/* MDIO support for Mellanox Gigabit Ethernet driver
+ *
+ * Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES, ALL RIGHTS RESERVED.
+ *
+ * This software product is a proprietary product of NVIDIA CORPORATION &
+ * AFFILIATES (the "Company") and all right, title, and interest in and to the
+ * software product, including all associated intellectual property rights, are
+ * and shall remain exclusively with the Company.
+ *
+ * This software product is governed by the End User License Agreement
+ * provided with the software product.
+ */
+
+#ifndef __MLXBF_GIGE_MDIO_BF3_H__
+#define __MLXBF_GIGE_MDIO_BF3_H__
+
+#include <linux/bitfield.h>
+
+#define MLXBF3_GIGE_MDIO_GW_OFFSET 0x80
+#define MLXBF3_GIGE_MDIO_DATA_READ 0x8c
+#define MLXBF3_GIGE_MDIO_CFG_REG0 0x100
+#define MLXBF3_GIGE_MDIO_CFG_REG1 0x104
+#define MLXBF3_GIGE_MDIO_CFG_REG2 0x108
+
+/* MDIO GW register bits */
+#define MLXBF3_GIGE_MDIO_GW_ST1_MASK GENMASK(1, 1)
+#define MLXBF3_GIGE_MDIO_GW_OPCODE_MASK GENMASK(3, 2)
+#define MLXBF3_GIGE_MDIO_GW_PARTAD_MASK GENMASK(8, 4)
+#define MLXBF3_GIGE_MDIO_GW_DEVAD_MASK GENMASK(13, 9)
+/* For BlueField-3, this field is only used for MDIO write */
+#define MLXBF3_GIGE_MDIO_GW_DATA_MASK GENMASK(29, 14)
+#define MLXBF3_GIGE_MDIO_GW_BUSY_MASK GENMASK(30, 30)
+
+#define MLXBF3_GIGE_MDIO_GW_DATA_READ_MASK GENMASK(15, 0)
+
+#define MLXBF3_GIGE_MDIO_GW_ST1_SHIFT 1
+#define MLXBF3_GIGE_MDIO_GW_OPCODE_SHIFT 2
+#define MLXBF3_GIGE_MDIO_GW_PARTAD_SHIFT 4
+#define MLXBF3_GIGE_MDIO_GW_DEVAD_SHIFT 9
+#define MLXBF3_GIGE_MDIO_GW_DATA_SHIFT 14
+#define MLXBF3_GIGE_MDIO_GW_BUSY_SHIFT 30
+
+#define MLXBF3_GIGE_MDIO_GW_DATA_READ_SHIFT 0
+
+/* MDIO config register bits */
+#define MLXBF3_GIGE_MDIO_CFG_MDIO_MODE_MASK GENMASK(1, 0)
+#define MLXBF3_GIGE_MDIO_CFG_MDIO_FULL_DRIVE_MASK GENMASK(2, 2)
+#define MLXBF3_GIGE_MDIO_CFG_MDC_PERIOD_MASK GENMASK(7, 0)
+#define MLXBF3_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK GENMASK(7, 0)
+#define MLXBF3_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK GENMASK(15, 8)
+
+#endif /* __MLXBF_GIGE_MDIO_BF3_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h
index 7be3a793984d..cd0973229c9b 100644
--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h
+++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h
@@ -8,8 +8,11 @@
#ifndef __MLXBF_GIGE_REGS_H__
#define __MLXBF_GIGE_REGS_H__
+#include <linux/bitfield.h>
+
#define MLXBF_GIGE_VERSION 0x0000
#define MLXBF_GIGE_VERSION_BF2 0x0
+#define MLXBF_GIGE_VERSION_BF3 0x1
#define MLXBF_GIGE_STATUS 0x0010
#define MLXBF_GIGE_STATUS_READY BIT(0)
#define MLXBF_GIGE_INT_STATUS 0x0028
@@ -77,4 +80,23 @@
*/
#define MLXBF_GIGE_MMIO_REG_SZ (MLXBF_GIGE_MAC_CFG + 8)
+#define MLXBF_GIGE_PLU_TX_REG0 0x80
+#define MLXBF_GIGE_PLU_TX_IPG_SIZE_MASK GENMASK(11, 0)
+#define MLXBF_GIGE_PLU_TX_SGMII_MODE_MASK GENMASK(15, 14)
+
+#define MLXBF_GIGE_PLU_RX_REG0 0x10
+#define MLXBF_GIGE_PLU_RX_SGMII_MODE_MASK GENMASK(25, 24)
+
+#define MLXBF_GIGE_1G_SGMII_MODE 0x0
+#define MLXBF_GIGE_10M_SGMII_MODE 0x1
+#define MLXBF_GIGE_100M_SGMII_MODE 0x2
+
+/* The default ipg_size for 1G is fixed by HW to 11 + End = 12.
+ * So for 100M it is 12 * 10 - 1 = 119.
+ * For 10M it is 12 * 100 - 1 = 1199.
+ */
+#define MLXBF_GIGE_1G_IPG_SIZE 11
+#define MLXBF_GIGE_100M_IPG_SIZE 119
+#define MLXBF_GIGE_10M_IPG_SIZE 1199
+
#endif /* !defined(__MLXBF_GIGE_REGS_H__) */
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
index a0a06e2eff82..22db0bb15c45 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
@@ -78,6 +78,7 @@ struct mlxsw_core {
spinlock_t trans_list_lock; /* protects trans_list writes */
bool use_emad;
bool enable_string_tlv;
+ bool enable_latency_tlv;
} emad;
struct {
u16 *mapping; /* lag_id+port_index to local_port mapping */
@@ -378,6 +379,22 @@ MLXSW_ITEM32(emad, string_tlv, len, 0x00, 16, 11);
MLXSW_ITEM_BUF(emad, string_tlv, string, 0x04,
MLXSW_EMAD_STRING_TLV_STRING_LEN);
+/* emad_latency_tlv_type
+ * Type of the TLV.
+ * Must be set to 0x4 (latency TLV).
+ */
+MLXSW_ITEM32(emad, latency_tlv, type, 0x00, 27, 5);
+
+/* emad_latency_tlv_len
+ * Length of the latency TLV in u32.
+ */
+MLXSW_ITEM32(emad, latency_tlv, len, 0x00, 16, 11);
+
+/* emad_latency_tlv_latency_time
+ * EMAD latency time in units of uSec.
+ */
+MLXSW_ITEM32(emad, latency_tlv, latency_time, 0x04, 0, 32);
+
/* emad_reg_tlv_type
* Type of the TLV.
* Must be set to 0x3 (register TLV).
@@ -461,6 +478,12 @@ static void mlxsw_emad_pack_op_tlv(char *op_tlv,
mlxsw_emad_op_tlv_tid_set(op_tlv, tid);
}
+static void mlxsw_emad_pack_latency_tlv(char *latency_tlv)
+{
+ mlxsw_emad_latency_tlv_type_set(latency_tlv, MLXSW_EMAD_TLV_TYPE_LATENCY);
+ mlxsw_emad_latency_tlv_len_set(latency_tlv, MLXSW_EMAD_LATENCY_TLV_LEN);
+}
+
static int mlxsw_emad_construct_eth_hdr(struct sk_buff *skb)
{
char *eth_hdr = skb_push(skb, MLXSW_EMAD_ETH_HDR_LEN);
@@ -476,11 +499,11 @@ static int mlxsw_emad_construct_eth_hdr(struct sk_buff *skb)
return 0;
}
-static void mlxsw_emad_construct(struct sk_buff *skb,
+static void mlxsw_emad_construct(const struct mlxsw_core *mlxsw_core,
+ struct sk_buff *skb,
const struct mlxsw_reg_info *reg,
char *payload,
- enum mlxsw_core_reg_access_type type,
- u64 tid, bool enable_string_tlv)
+ enum mlxsw_core_reg_access_type type, u64 tid)
{
char *buf;
@@ -490,7 +513,12 @@ static void mlxsw_emad_construct(struct sk_buff *skb,
buf = skb_push(skb, reg->len + sizeof(u32));
mlxsw_emad_pack_reg_tlv(buf, reg, payload);
- if (enable_string_tlv) {
+ if (mlxsw_core->emad.enable_latency_tlv) {
+ buf = skb_push(skb, MLXSW_EMAD_LATENCY_TLV_LEN * sizeof(u32));
+ mlxsw_emad_pack_latency_tlv(buf);
+ }
+
+ if (mlxsw_core->emad.enable_string_tlv) {
buf = skb_push(skb, MLXSW_EMAD_STRING_TLV_LEN * sizeof(u32));
mlxsw_emad_pack_string_tlv(buf);
}
@@ -504,6 +532,7 @@ static void mlxsw_emad_construct(struct sk_buff *skb,
struct mlxsw_emad_tlv_offsets {
u16 op_tlv;
u16 string_tlv;
+ u16 latency_tlv;
u16 reg_tlv;
};
@@ -514,6 +543,13 @@ static bool mlxsw_emad_tlv_is_string_tlv(const char *tlv)
return tlv_type == MLXSW_EMAD_TLV_TYPE_STRING;
}
+static bool mlxsw_emad_tlv_is_latency_tlv(const char *tlv)
+{
+ u8 tlv_type = mlxsw_emad_latency_tlv_type_get(tlv);
+
+ return tlv_type == MLXSW_EMAD_TLV_TYPE_LATENCY;
+}
+
static void mlxsw_emad_tlv_parse(struct sk_buff *skb)
{
struct mlxsw_emad_tlv_offsets *offsets =
@@ -521,6 +557,8 @@ static void mlxsw_emad_tlv_parse(struct sk_buff *skb)
offsets->op_tlv = MLXSW_EMAD_ETH_HDR_LEN;
offsets->string_tlv = 0;
+ offsets->latency_tlv = 0;
+
offsets->reg_tlv = MLXSW_EMAD_ETH_HDR_LEN +
MLXSW_EMAD_OP_TLV_LEN * sizeof(u32);
@@ -529,6 +567,11 @@ static void mlxsw_emad_tlv_parse(struct sk_buff *skb)
offsets->string_tlv = offsets->reg_tlv;
offsets->reg_tlv += MLXSW_EMAD_STRING_TLV_LEN * sizeof(u32);
}
+
+ if (mlxsw_emad_tlv_is_latency_tlv(skb->data + offsets->reg_tlv)) {
+ offsets->latency_tlv = offsets->reg_tlv;
+ offsets->reg_tlv += MLXSW_EMAD_LATENCY_TLV_LEN * sizeof(u32);
+ }
}
static char *mlxsw_emad_op_tlv(const struct sk_buff *skb)
@@ -794,6 +837,32 @@ static const struct mlxsw_listener mlxsw_emad_rx_listener =
MLXSW_RXL(mlxsw_emad_rx_listener_func, ETHEMAD, TRAP_TO_CPU, false,
EMAD, DISCARD);
+static int mlxsw_emad_tlv_enable(struct mlxsw_core *mlxsw_core)
+{
+ char mgir_pl[MLXSW_REG_MGIR_LEN];
+ bool string_tlv, latency_tlv;
+ int err;
+
+ mlxsw_reg_mgir_pack(mgir_pl);
+ err = mlxsw_reg_query(mlxsw_core, MLXSW_REG(mgir), mgir_pl);
+ if (err)
+ return err;
+
+ string_tlv = mlxsw_reg_mgir_fw_info_string_tlv_get(mgir_pl);
+ mlxsw_core->emad.enable_string_tlv = string_tlv;
+
+ latency_tlv = mlxsw_reg_mgir_fw_info_latency_tlv_get(mgir_pl);
+ mlxsw_core->emad.enable_latency_tlv = latency_tlv;
+
+ return 0;
+}
+
+static void mlxsw_emad_tlv_disable(struct mlxsw_core *mlxsw_core)
+{
+ mlxsw_core->emad.enable_latency_tlv = false;
+ mlxsw_core->emad.enable_string_tlv = false;
+}
+
static int mlxsw_emad_init(struct mlxsw_core *mlxsw_core)
{
struct workqueue_struct *emad_wq;
@@ -824,10 +893,17 @@ static int mlxsw_emad_init(struct mlxsw_core *mlxsw_core)
if (err)
goto err_trap_register;
+ err = mlxsw_emad_tlv_enable(mlxsw_core);
+ if (err)
+ goto err_emad_tlv_enable;
+
mlxsw_core->emad.use_emad = true;
return 0;
+err_emad_tlv_enable:
+ mlxsw_core_trap_unregister(mlxsw_core, &mlxsw_emad_rx_listener,
+ mlxsw_core);
err_trap_register:
destroy_workqueue(mlxsw_core->emad_wq);
return err;
@@ -840,13 +916,14 @@ static void mlxsw_emad_fini(struct mlxsw_core *mlxsw_core)
return;
mlxsw_core->emad.use_emad = false;
+ mlxsw_emad_tlv_disable(mlxsw_core);
mlxsw_core_trap_unregister(mlxsw_core, &mlxsw_emad_rx_listener,
mlxsw_core);
destroy_workqueue(mlxsw_core->emad_wq);
}
static struct sk_buff *mlxsw_emad_alloc(const struct mlxsw_core *mlxsw_core,
- u16 reg_len, bool enable_string_tlv)
+ u16 reg_len)
{
struct sk_buff *skb;
u16 emad_len;
@@ -854,8 +931,10 @@ static struct sk_buff *mlxsw_emad_alloc(const struct mlxsw_core *mlxsw_core,
emad_len = (reg_len + sizeof(u32) + MLXSW_EMAD_ETH_HDR_LEN +
(MLXSW_EMAD_OP_TLV_LEN + MLXSW_EMAD_END_TLV_LEN) *
sizeof(u32) + mlxsw_core->driver->txhdr_len);
- if (enable_string_tlv)
+ if (mlxsw_core->emad.enable_string_tlv)
emad_len += MLXSW_EMAD_STRING_TLV_LEN * sizeof(u32);
+ if (mlxsw_core->emad.enable_latency_tlv)
+ emad_len += MLXSW_EMAD_LATENCY_TLV_LEN * sizeof(u32);
if (emad_len > MLXSW_EMAD_MAX_FRAME_LEN)
return NULL;
@@ -877,7 +956,6 @@ static int mlxsw_emad_reg_access(struct mlxsw_core *mlxsw_core,
mlxsw_reg_trans_cb_t *cb,
unsigned long cb_priv, u64 tid)
{
- bool enable_string_tlv;
struct sk_buff *skb;
int err;
@@ -885,12 +963,7 @@ static int mlxsw_emad_reg_access(struct mlxsw_core *mlxsw_core,
tid, reg->id, mlxsw_reg_id_str(reg->id),
mlxsw_core_reg_access_type_str(type));
- /* Since this can be changed during emad_reg_access, read it once and
- * use the value all the way.
- */
- enable_string_tlv = mlxsw_core->emad.enable_string_tlv;
-
- skb = mlxsw_emad_alloc(mlxsw_core, reg->len, enable_string_tlv);
+ skb = mlxsw_emad_alloc(mlxsw_core, reg->len);
if (!skb)
return -ENOMEM;
@@ -907,8 +980,7 @@ static int mlxsw_emad_reg_access(struct mlxsw_core *mlxsw_core,
trans->reg = reg;
trans->type = type;
- mlxsw_emad_construct(skb, reg, payload, type, trans->tid,
- enable_string_tlv);
+ mlxsw_emad_construct(mlxsw_core, skb, reg, payload, type, trans->tid);
mlxsw_core->driver->txhdr_construct(skb, &trans->tx_info);
spin_lock_bh(&mlxsw_core->emad.trans_list_lock);
@@ -1171,9 +1243,9 @@ static int mlxsw_core_fw_rev_validate(struct mlxsw_core *mlxsw_core,
return 0;
/* Don't check if devlink 'fw_load_policy' param is 'flash' */
- err = devlink_param_driverinit_value_get(priv_to_devlink(mlxsw_core),
- DEVLINK_PARAM_GENERIC_ID_FW_LOAD_POLICY,
- &value);
+ err = devl_param_driverinit_value_get(priv_to_devlink(mlxsw_core),
+ DEVLINK_PARAM_GENERIC_ID_FW_LOAD_POLICY,
+ &value);
if (err)
return err;
if (value.vu8 == DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_FLASH)
@@ -1244,20 +1316,22 @@ static int mlxsw_core_fw_params_register(struct mlxsw_core *mlxsw_core)
union devlink_param_value value;
int err;
- err = devlink_params_register(devlink, mlxsw_core_fw_devlink_params,
- ARRAY_SIZE(mlxsw_core_fw_devlink_params));
+ err = devl_params_register(devlink, mlxsw_core_fw_devlink_params,
+ ARRAY_SIZE(mlxsw_core_fw_devlink_params));
if (err)
return err;
value.vu8 = DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_DRIVER;
- devlink_param_driverinit_value_set(devlink, DEVLINK_PARAM_GENERIC_ID_FW_LOAD_POLICY, value);
+ devl_param_driverinit_value_set(devlink,
+ DEVLINK_PARAM_GENERIC_ID_FW_LOAD_POLICY,
+ value);
return 0;
}
static void mlxsw_core_fw_params_unregister(struct mlxsw_core *mlxsw_core)
{
- devlink_params_unregister(priv_to_devlink(mlxsw_core), mlxsw_core_fw_devlink_params,
- ARRAY_SIZE(mlxsw_core_fw_devlink_params));
+ devl_params_unregister(priv_to_devlink(mlxsw_core), mlxsw_core_fw_devlink_params,
+ ARRAY_SIZE(mlxsw_core_fw_devlink_params));
}
static void *__dl_port(struct devlink_port *devlink_port)
@@ -1676,29 +1750,12 @@ static const struct devlink_ops mlxsw_devlink_ops = {
static int mlxsw_core_params_register(struct mlxsw_core *mlxsw_core)
{
- int err;
-
- err = mlxsw_core_fw_params_register(mlxsw_core);
- if (err)
- return err;
-
- if (mlxsw_core->driver->params_register) {
- err = mlxsw_core->driver->params_register(mlxsw_core);
- if (err)
- goto err_params_register;
- }
- return 0;
-
-err_params_register:
- mlxsw_core_fw_params_unregister(mlxsw_core);
- return err;
+ return mlxsw_core_fw_params_register(mlxsw_core);
}
static void mlxsw_core_params_unregister(struct mlxsw_core *mlxsw_core)
{
mlxsw_core_fw_params_unregister(mlxsw_core);
- if (mlxsw_core->driver->params_register)
- mlxsw_core->driver->params_unregister(mlxsw_core);
}
struct mlxsw_core_health_event {
@@ -2051,8 +2108,8 @@ static int mlxsw_core_health_init(struct mlxsw_core *mlxsw_core)
if (!(mlxsw_core->bus->features & MLXSW_BUS_F_TXRX))
return 0;
- fw_fatal = devlink_health_reporter_create(devlink, &mlxsw_core_health_fw_fatal_ops,
- 0, mlxsw_core);
+ fw_fatal = devl_health_reporter_create(devlink, &mlxsw_core_health_fw_fatal_ops,
+ 0, mlxsw_core);
if (IS_ERR(fw_fatal)) {
dev_err(mlxsw_core->bus_info->dev, "Failed to create fw fatal reporter");
return PTR_ERR(fw_fatal);
@@ -2072,7 +2129,7 @@ static int mlxsw_core_health_init(struct mlxsw_core *mlxsw_core)
err_fw_fatal_config:
mlxsw_core_trap_unregister(mlxsw_core, &mlxsw_core_health_listener, mlxsw_core);
err_trap_register:
- devlink_health_reporter_destroy(mlxsw_core->health.fw_fatal);
+ devl_health_reporter_destroy(mlxsw_core->health.fw_fatal);
return err;
}
@@ -2085,7 +2142,7 @@ static void mlxsw_core_health_fini(struct mlxsw_core *mlxsw_core)
mlxsw_core_trap_unregister(mlxsw_core, &mlxsw_core_health_listener, mlxsw_core);
/* Make sure there is no more event work scheduled */
mlxsw_core_flush_owq();
- devlink_health_reporter_destroy(mlxsw_core->health.fw_fatal);
+ devl_health_reporter_destroy(mlxsw_core->health.fw_fatal);
}
static void mlxsw_core_irq_event_handler_init(struct mlxsw_core *mlxsw_core)
@@ -2127,6 +2184,7 @@ __mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
goto err_devlink_alloc;
}
devl_lock(devlink);
+ devl_register(devlink);
}
mlxsw_core = devlink_priv(devlink);
@@ -2210,11 +2268,8 @@ __mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
goto err_driver_init;
}
- if (!reload) {
- devlink_set_features(devlink, DEVLINK_F_RELOAD);
+ if (!reload)
devl_unlock(devlink);
- devlink_register(devlink);
- }
return 0;
err_driver_init:
@@ -2246,6 +2301,7 @@ err_register_resources:
err_bus_init:
mlxsw_core_irq_event_handler_fini(mlxsw_core);
if (!reload) {
+ devl_unregister(devlink);
devl_unlock(devlink);
devlink_free(devlink);
}
@@ -2284,10 +2340,8 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
{
struct devlink *devlink = priv_to_devlink(mlxsw_core);
- if (!reload) {
- devlink_unregister(devlink);
+ if (!reload)
devl_lock(devlink);
- }
if (devlink_is_reload_failed(devlink)) {
if (!reload)
@@ -2316,6 +2370,7 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
mlxsw_core->bus->fini(mlxsw_core->bus_priv);
mlxsw_core_irq_event_handler_fini(mlxsw_core);
if (!reload) {
+ devl_unregister(devlink);
devl_unlock(devlink);
devlink_free(devlink);
}
@@ -2325,6 +2380,7 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
reload_fail_deinit:
mlxsw_core_params_unregister(mlxsw_core);
devl_resources_unregister(devlink);
+ devl_unregister(devlink);
devl_unlock(devlink);
devlink_free(devlink);
}
@@ -3377,12 +3433,6 @@ bool mlxsw_core_sdq_supports_cqe_v2(struct mlxsw_core *mlxsw_core)
}
EXPORT_SYMBOL(mlxsw_core_sdq_supports_cqe_v2);
-void mlxsw_core_emad_string_tlv_enable(struct mlxsw_core *mlxsw_core)
-{
- mlxsw_core->emad.enable_string_tlv = true;
-}
-EXPORT_SYMBOL(mlxsw_core_emad_string_tlv_enable);
-
static int __init mlxsw_core_module_init(void)
{
int err;
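The latency TLV slots in between the optional string TLV and the register TLV, so both mlxsw_emad_alloc() and mlxsw_emad_tlv_parse() treat it as one more variable-length prefix. A standalone sketch of the resulting offset arithmetic, with constants mirroring the lengths used above (the Ethernet header length is an assumption here):

#include <stdbool.h>
#include <stdint.h>

#define EXAMPLE_ETH_HDR_LEN	0x10	/* assumed, bytes */
#define EXAMPLE_OP_TLV_LEN	4	/* lengths below are in u32 */
#define EXAMPLE_STRING_TLV_LEN	33
#define EXAMPLE_LATENCY_TLV_LEN	7

/* Optional TLVs sit between the op TLV and the reg TLV, so each one that is
 * present pushes the reg TLV further into the frame.
 */
static unsigned int emad_reg_tlv_offset(bool has_string_tlv,
					bool has_latency_tlv)
{
	unsigned int off = EXAMPLE_ETH_HDR_LEN +
			   EXAMPLE_OP_TLV_LEN * sizeof(uint32_t);

	if (has_string_tlv)
		off += EXAMPLE_STRING_TLV_LEN * sizeof(uint32_t);
	if (has_latency_tlv)
		off += EXAMPLE_LATENCY_TLV_LEN * sizeof(uint32_t);
	return off;
}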
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.h b/drivers/net/ethernet/mellanox/mlxsw/core.h
index e0a6fcbbcb19..e5474d3e34db 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.h
@@ -421,8 +421,6 @@ struct mlxsw_driver {
const struct mlxsw_config_profile *profile,
u64 *p_single_size, u64 *p_double_size,
u64 *p_linear_size);
- int (*params_register)(struct mlxsw_core *mlxsw_core);
- void (*params_unregister)(struct mlxsw_core *mlxsw_core);
/* Notify a driver that a timestamped packet was transmitted. Driver
* is responsible for freeing the passed-in SKB.
@@ -448,8 +446,6 @@ u32 mlxsw_core_read_utc_nsec(struct mlxsw_core *mlxsw_core);
bool mlxsw_core_sdq_supports_cqe_v2(struct mlxsw_core *mlxsw_core);
-void mlxsw_core_emad_string_tlv_enable(struct mlxsw_core *mlxsw_core);
-
bool mlxsw_core_res_valid(struct mlxsw_core *mlxsw_core,
enum mlxsw_res_id res_id);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_linecards.c b/drivers/net/ethernet/mellanox/mlxsw/core_linecards.c
index 83d2dc91ba2c..025e0db983fe 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core_linecards.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core_linecards.c
@@ -1259,9 +1259,9 @@ static int mlxsw_linecard_init(struct mlxsw_core *mlxsw_core,
linecard->linecards = linecards;
mutex_init(&linecard->lock);
- devlink_linecard = devlink_linecard_create(priv_to_devlink(mlxsw_core),
- slot_index, &mlxsw_linecard_ops,
- linecard);
+ devlink_linecard = devl_linecard_create(priv_to_devlink(mlxsw_core),
+ slot_index, &mlxsw_linecard_ops,
+ linecard);
if (IS_ERR(devlink_linecard))
return PTR_ERR(devlink_linecard);
@@ -1285,7 +1285,7 @@ static void mlxsw_linecard_fini(struct mlxsw_core *mlxsw_core,
if (linecard->active)
mlxsw_linecard_active_clear(linecard);
mlxsw_linecard_bdev_del(linecard);
- devlink_linecard_destroy(linecard->devlink_linecard);
+ devl_linecard_destroy(linecard->devlink_linecard);
mutex_destroy(&linecard->lock);
}
diff --git a/drivers/net/ethernet/mellanox/mlxsw/emad.h b/drivers/net/ethernet/mellanox/mlxsw/emad.h
index acfbbec52424..c51a61aa19b7 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/emad.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/emad.h
@@ -21,6 +21,7 @@ enum {
MLXSW_EMAD_TLV_TYPE_OP,
MLXSW_EMAD_TLV_TYPE_STRING,
MLXSW_EMAD_TLV_TYPE_REG,
+ MLXSW_EMAD_TLV_TYPE_LATENCY,
};
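Assuming the enum opens with MLXSW_EMAD_TLV_TYPE_END = 0 (its first entry is outside the hunk context), the appended MLXSW_EMAD_TLV_TYPE_LATENCY evaluates to 4, matching the "must be set to 0x4" comment on emad_latency_tlv_type. A hypothetical compile-time guard would make that coupling explicit:

#include <linux/build_bug.h>

/* Illustrative guard tying the enum position to the wire value. */
static_assert(MLXSW_EMAD_TLV_TYPE_LATENCY == 0x4);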
/* OP TLV */
@@ -90,6 +91,9 @@ enum {
/* STRING TLV */
#define MLXSW_EMAD_STRING_TLV_LEN 33 /* Length in u32 */
+/* LATENCY TLV */
+#define MLXSW_EMAD_LATENCY_TLV_LEN 7 /* Length in u32 */
+
/* END TLV */
#define MLXSW_EMAD_END_TLV_LEN 1 /* Length in u32 */
diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
index f2d6f8654e04..8165bf31a99a 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
@@ -10009,6 +10009,18 @@ MLXSW_REG_DEFINE(mgir, MLXSW_REG_MGIR_ID, MLXSW_REG_MGIR_LEN);
*/
MLXSW_ITEM32(reg, mgir, hw_info_device_hw_revision, 0x0, 16, 16);
+/* reg_mgir_fw_info_latency_tlv
+ * When set, latency-TLV is supported.
+ * Access: RO
+ */
+MLXSW_ITEM32(reg, mgir, fw_info_latency_tlv, 0x20, 29, 1);
+
+/* reg_mgir_fw_info_string_tlv
+ * When set, string-TLV is supported.
+ * Access: RO
+ */
+MLXSW_ITEM32(reg, mgir, fw_info_string_tlv, 0x20, 28, 1);
+
#define MLXSW_REG_MGIR_FW_INFO_PSID_SIZE 16
/* reg_mgir_fw_info_psid
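MLXSW_ITEM32(reg, mgir, fw_info_latency_tlv, 0x20, 29, 1) generates a getter that tests bit 29 of the big-endian u32 at byte offset 0x20 of the MGIR payload. A standalone sketch of what such a getter reduces to, with the byte-order handling written out by hand:

#include <stdbool.h>
#include <stdint.h>

static bool example_mgir_fw_info_latency_tlv(const uint8_t *payload)
{
	/* Assemble the big-endian u32 at byte offset 0x20, then test bit 29. */
	uint32_t word = (uint32_t)payload[0x20] << 24 |
			(uint32_t)payload[0x21] << 16 |
			(uint32_t)payload[0x22] << 8 |
			(uint32_t)payload[0x23];

	return (word >> 29) & 0x1;
}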
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index f5b2d965d476..a8f94b7544ee 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -3092,7 +3092,6 @@ static int mlxsw_sp_init(struct mlxsw_core *mlxsw_core,
mlxsw_sp->bus_info = mlxsw_bus_info;
mlxsw_sp_parsing_init(mlxsw_sp);
- mlxsw_core_emad_string_tlv_enable(mlxsw_core);
err = mlxsw_sp_base_mac_get(mlxsw_sp);
if (err) {
@@ -3862,62 +3861,6 @@ static int mlxsw_sp_kvd_sizes_get(struct mlxsw_core *mlxsw_core,
return 0;
}
-static int
-mlxsw_sp_params_acl_region_rehash_intrvl_get(struct devlink *devlink, u32 id,
- struct devlink_param_gset_ctx *ctx)
-{
- struct mlxsw_core *mlxsw_core = devlink_priv(devlink);
- struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
-
- ctx->val.vu32 = mlxsw_sp_acl_region_rehash_intrvl_get(mlxsw_sp);
- return 0;
-}
-
-static int
-mlxsw_sp_params_acl_region_rehash_intrvl_set(struct devlink *devlink, u32 id,
- struct devlink_param_gset_ctx *ctx)
-{
- struct mlxsw_core *mlxsw_core = devlink_priv(devlink);
- struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
-
- return mlxsw_sp_acl_region_rehash_intrvl_set(mlxsw_sp, ctx->val.vu32);
-}
-
-static const struct devlink_param mlxsw_sp2_devlink_params[] = {
- DEVLINK_PARAM_DRIVER(MLXSW_DEVLINK_PARAM_ID_ACL_REGION_REHASH_INTERVAL,
- "acl_region_rehash_interval",
- DEVLINK_PARAM_TYPE_U32,
- BIT(DEVLINK_PARAM_CMODE_RUNTIME),
- mlxsw_sp_params_acl_region_rehash_intrvl_get,
- mlxsw_sp_params_acl_region_rehash_intrvl_set,
- NULL),
-};
-
-static int mlxsw_sp2_params_register(struct mlxsw_core *mlxsw_core)
-{
- struct devlink *devlink = priv_to_devlink(mlxsw_core);
- union devlink_param_value value;
- int err;
-
- err = devlink_params_register(devlink, mlxsw_sp2_devlink_params,
- ARRAY_SIZE(mlxsw_sp2_devlink_params));
- if (err)
- return err;
-
- value.vu32 = 0;
- devlink_param_driverinit_value_set(devlink,
- MLXSW_DEVLINK_PARAM_ID_ACL_REGION_REHASH_INTERVAL,
- value);
- return 0;
-}
-
-static void mlxsw_sp2_params_unregister(struct mlxsw_core *mlxsw_core)
-{
- devlink_params_unregister(priv_to_devlink(mlxsw_core),
- mlxsw_sp2_devlink_params,
- ARRAY_SIZE(mlxsw_sp2_devlink_params));
-}
-
static void mlxsw_sp_ptp_transmitted(struct mlxsw_core *mlxsw_core,
struct sk_buff *skb, u16 local_port)
{
@@ -3995,8 +3938,6 @@ static struct mlxsw_driver mlxsw_sp2_driver = {
.trap_policer_counter_get = mlxsw_sp_trap_policer_counter_get,
.txhdr_construct = mlxsw_sp_txhdr_construct,
.resources_register = mlxsw_sp2_resources_register,
- .params_register = mlxsw_sp2_params_register,
- .params_unregister = mlxsw_sp2_params_unregister,
.ptp_transmitted = mlxsw_sp_ptp_transmitted,
.txhdr_len = MLXSW_TXHDR_LEN,
.profile = &mlxsw_sp2_config_profile,
@@ -4034,8 +3975,6 @@ static struct mlxsw_driver mlxsw_sp3_driver = {
.trap_policer_counter_get = mlxsw_sp_trap_policer_counter_get,
.txhdr_construct = mlxsw_sp_txhdr_construct,
.resources_register = mlxsw_sp2_resources_register,
- .params_register = mlxsw_sp2_params_register,
- .params_unregister = mlxsw_sp2_params_unregister,
.ptp_transmitted = mlxsw_sp_ptp_transmitted,
.txhdr_len = MLXSW_TXHDR_LEN,
.profile = &mlxsw_sp2_config_profile,
@@ -4071,8 +4010,6 @@ static struct mlxsw_driver mlxsw_sp4_driver = {
.trap_policer_counter_get = mlxsw_sp_trap_policer_counter_get,
.txhdr_construct = mlxsw_sp_txhdr_construct,
.resources_register = mlxsw_sp2_resources_register,
- .params_register = mlxsw_sp2_params_register,
- .params_unregister = mlxsw_sp2_params_unregister,
.ptp_transmitted = mlxsw_sp_ptp_transmitted,
.txhdr_len = MLXSW_TXHDR_LEN,
.profile = &mlxsw_sp4_config_profile,
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
index bbc73324451d..4c22f8004514 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
@@ -973,6 +973,7 @@ enum mlxsw_sp_acl_profile {
};
struct mlxsw_afk *mlxsw_sp_acl_afk(struct mlxsw_sp_acl *acl);
+struct mlxsw_sp_acl_tcam *mlxsw_sp_acl_to_tcam(struct mlxsw_sp_acl *acl);
int mlxsw_sp_acl_ruleset_bind(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_flow_block *block,
@@ -1096,8 +1097,6 @@ mlxsw_sp_acl_act_cookie_lookup(struct mlxsw_sp *mlxsw_sp, u32 cookie_index)
int mlxsw_sp_acl_init(struct mlxsw_sp *mlxsw_sp);
void mlxsw_sp_acl_fini(struct mlxsw_sp *mlxsw_sp);
-u32 mlxsw_sp_acl_region_rehash_intrvl_get(struct mlxsw_sp *mlxsw_sp);
-int mlxsw_sp_acl_region_rehash_intrvl_set(struct mlxsw_sp *mlxsw_sp, u32 val);
struct mlxsw_sp_acl_mangle_action;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
index 6c5af018546f..0423ac262d89 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
@@ -40,6 +40,11 @@ struct mlxsw_afk *mlxsw_sp_acl_afk(struct mlxsw_sp_acl *acl)
return acl->afk;
}
+struct mlxsw_sp_acl_tcam *mlxsw_sp_acl_to_tcam(struct mlxsw_sp_acl *acl)
+{
+ return &acl->tcam;
+}
+
struct mlxsw_sp_acl_ruleset_ht_key {
struct mlxsw_sp_flow_block *block;
u32 chain_index;
@@ -1099,22 +1104,6 @@ void mlxsw_sp_acl_fini(struct mlxsw_sp *mlxsw_sp)
kfree(acl);
}
-u32 mlxsw_sp_acl_region_rehash_intrvl_get(struct mlxsw_sp *mlxsw_sp)
-{
- struct mlxsw_sp_acl *acl = mlxsw_sp->acl;
-
- return mlxsw_sp_acl_tcam_vregion_rehash_intrvl_get(mlxsw_sp,
- &acl->tcam);
-}
-
-int mlxsw_sp_acl_region_rehash_intrvl_set(struct mlxsw_sp *mlxsw_sp, u32 val)
-{
- struct mlxsw_sp_acl *acl = mlxsw_sp->acl;
-
- return mlxsw_sp_acl_tcam_vregion_rehash_intrvl_set(mlxsw_sp,
- &acl->tcam, val);
-}
-
struct mlxsw_sp_acl_rulei_ops mlxsw_sp1_acl_rulei_ops = {
.act_mangle_field = mlxsw_sp1_acl_rulei_act_mangle_field,
};
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
index 3b9ba8fa247a..d50786b0a6ce 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
@@ -9,6 +9,7 @@
#include <linux/rhashtable.h>
#include <linux/netdevice.h>
#include <linux/mutex.h>
+#include <net/devlink.h>
#include <trace/events/mlxsw.h>
#include "reg.h"
@@ -29,67 +30,6 @@ size_t mlxsw_sp_acl_tcam_priv_size(struct mlxsw_sp *mlxsw_sp)
#define MLXSW_SP_ACL_TCAM_VREGION_REHASH_INTRVL_MIN 3000 /* ms */
#define MLXSW_SP_ACL_TCAM_VREGION_REHASH_CREDITS 100 /* number of entries */
-int mlxsw_sp_acl_tcam_init(struct mlxsw_sp *mlxsw_sp,
- struct mlxsw_sp_acl_tcam *tcam)
-{
- const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops;
- u64 max_tcam_regions;
- u64 max_regions;
- u64 max_groups;
- int err;
-
- mutex_init(&tcam->lock);
- tcam->vregion_rehash_intrvl =
- MLXSW_SP_ACL_TCAM_VREGION_REHASH_INTRVL_DFLT;
- INIT_LIST_HEAD(&tcam->vregion_list);
-
- max_tcam_regions = MLXSW_CORE_RES_GET(mlxsw_sp->core,
- ACL_MAX_TCAM_REGIONS);
- max_regions = MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_REGIONS);
-
- /* Use 1:1 mapping between ACL region and TCAM region */
- if (max_tcam_regions < max_regions)
- max_regions = max_tcam_regions;
-
- tcam->used_regions = bitmap_zalloc(max_regions, GFP_KERNEL);
- if (!tcam->used_regions)
- return -ENOMEM;
- tcam->max_regions = max_regions;
-
- max_groups = MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_GROUPS);
- tcam->used_groups = bitmap_zalloc(max_groups, GFP_KERNEL);
- if (!tcam->used_groups) {
- err = -ENOMEM;
- goto err_alloc_used_groups;
- }
- tcam->max_groups = max_groups;
- tcam->max_group_size = MLXSW_CORE_RES_GET(mlxsw_sp->core,
- ACL_MAX_GROUP_SIZE);
-
- err = ops->init(mlxsw_sp, tcam->priv, tcam);
- if (err)
- goto err_tcam_init;
-
- return 0;
-
-err_tcam_init:
- bitmap_free(tcam->used_groups);
-err_alloc_used_groups:
- bitmap_free(tcam->used_regions);
- return err;
-}
-
-void mlxsw_sp_acl_tcam_fini(struct mlxsw_sp *mlxsw_sp,
- struct mlxsw_sp_acl_tcam *tcam)
-{
- const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops;
-
- mutex_destroy(&tcam->lock);
- ops->fini(mlxsw_sp, tcam->priv);
- bitmap_free(tcam->used_groups);
- bitmap_free(tcam->used_regions);
-}
-
int mlxsw_sp_acl_tcam_priority_get(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_rule_info *rulei,
u32 *priority, bool fillup_priority)
@@ -893,41 +833,6 @@ mlxsw_sp_acl_tcam_vregion_destroy(struct mlxsw_sp *mlxsw_sp,
kfree(vregion);
}
-u32 mlxsw_sp_acl_tcam_vregion_rehash_intrvl_get(struct mlxsw_sp *mlxsw_sp,
- struct mlxsw_sp_acl_tcam *tcam)
-{
- const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops;
- u32 vregion_rehash_intrvl;
-
- if (WARN_ON(!ops->region_rehash_hints_get))
- return 0;
- vregion_rehash_intrvl = tcam->vregion_rehash_intrvl;
- return vregion_rehash_intrvl;
-}
-
-int mlxsw_sp_acl_tcam_vregion_rehash_intrvl_set(struct mlxsw_sp *mlxsw_sp,
- struct mlxsw_sp_acl_tcam *tcam,
- u32 val)
-{
- const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops;
- struct mlxsw_sp_acl_tcam_vregion *vregion;
-
- if (val < MLXSW_SP_ACL_TCAM_VREGION_REHASH_INTRVL_MIN && val)
- return -EINVAL;
- if (WARN_ON(!ops->region_rehash_hints_get))
- return -EOPNOTSUPP;
- tcam->vregion_rehash_intrvl = val;
- mutex_lock(&tcam->lock);
- list_for_each_entry(vregion, &tcam->vregion_list, tlist) {
- if (val)
- mlxsw_core_schedule_dw(&vregion->rehash.dw, 0);
- else
- cancel_delayed_work_sync(&vregion->rehash.dw);
- }
- mutex_unlock(&tcam->lock);
- return 0;
-}
-
static struct mlxsw_sp_acl_tcam_vregion *
mlxsw_sp_acl_tcam_vregion_get(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_tcam_vgroup *vgroup,
@@ -1542,6 +1447,153 @@ mlxsw_sp_acl_tcam_vregion_rehash(struct mlxsw_sp *mlxsw_sp,
mlxsw_sp_acl_tcam_vregion_rehash_end(mlxsw_sp, vregion, ctx);
}
+static int
+mlxsw_sp_acl_tcam_region_rehash_intrvl_get(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct mlxsw_core *mlxsw_core = devlink_priv(devlink);
+ struct mlxsw_sp_acl_tcam *tcam;
+ struct mlxsw_sp *mlxsw_sp;
+
+ mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
+ tcam = mlxsw_sp_acl_to_tcam(mlxsw_sp->acl);
+ ctx->val.vu32 = tcam->vregion_rehash_intrvl;
+
+ return 0;
+}
+
+static int
+mlxsw_sp_acl_tcam_region_rehash_intrvl_set(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct mlxsw_core *mlxsw_core = devlink_priv(devlink);
+ struct mlxsw_sp_acl_tcam_vregion *vregion;
+ struct mlxsw_sp_acl_tcam *tcam;
+ struct mlxsw_sp *mlxsw_sp;
+ u32 val = ctx->val.vu32;
+
+ if (val < MLXSW_SP_ACL_TCAM_VREGION_REHASH_INTRVL_MIN && val)
+ return -EINVAL;
+
+ mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
+ tcam = mlxsw_sp_acl_to_tcam(mlxsw_sp->acl);
+ tcam->vregion_rehash_intrvl = val;
+ mutex_lock(&tcam->lock);
+ list_for_each_entry(vregion, &tcam->vregion_list, tlist) {
+ if (val)
+ mlxsw_core_schedule_dw(&vregion->rehash.dw, 0);
+ else
+ cancel_delayed_work_sync(&vregion->rehash.dw);
+ }
+ mutex_unlock(&tcam->lock);
+ return 0;
+}
+
+static const struct devlink_param mlxsw_sp_acl_tcam_rehash_params[] = {
+ DEVLINK_PARAM_DRIVER(MLXSW_DEVLINK_PARAM_ID_ACL_REGION_REHASH_INTERVAL,
+ "acl_region_rehash_interval",
+ DEVLINK_PARAM_TYPE_U32,
+ BIT(DEVLINK_PARAM_CMODE_RUNTIME),
+ mlxsw_sp_acl_tcam_region_rehash_intrvl_get,
+ mlxsw_sp_acl_tcam_region_rehash_intrvl_set,
+ NULL),
+};
+
+static int mlxsw_sp_acl_tcam_rehash_params_register(struct mlxsw_sp *mlxsw_sp)
+{
+ struct devlink *devlink = priv_to_devlink(mlxsw_sp->core);
+
+ if (!mlxsw_sp->acl_tcam_ops->region_rehash_hints_get)
+ return 0;
+
+ return devl_params_register(devlink, mlxsw_sp_acl_tcam_rehash_params,
+ ARRAY_SIZE(mlxsw_sp_acl_tcam_rehash_params));
+}
+
+static void
+mlxsw_sp_acl_tcam_rehash_params_unregister(struct mlxsw_sp *mlxsw_sp)
+{
+ struct devlink *devlink = priv_to_devlink(mlxsw_sp->core);
+
+ if (!mlxsw_sp->acl_tcam_ops->region_rehash_hints_get)
+ return;
+
+ devl_params_unregister(devlink, mlxsw_sp_acl_tcam_rehash_params,
+ ARRAY_SIZE(mlxsw_sp_acl_tcam_rehash_params));
+}
+
+int mlxsw_sp_acl_tcam_init(struct mlxsw_sp *mlxsw_sp,
+ struct mlxsw_sp_acl_tcam *tcam)
+{
+ const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops;
+ u64 max_tcam_regions;
+ u64 max_regions;
+ u64 max_groups;
+ int err;
+
+ mutex_init(&tcam->lock);
+ tcam->vregion_rehash_intrvl =
+ MLXSW_SP_ACL_TCAM_VREGION_REHASH_INTRVL_DFLT;
+ INIT_LIST_HEAD(&tcam->vregion_list);
+
+ err = mlxsw_sp_acl_tcam_rehash_params_register(mlxsw_sp);
+ if (err)
+ goto err_rehash_params_register;
+
+ max_tcam_regions = MLXSW_CORE_RES_GET(mlxsw_sp->core,
+ ACL_MAX_TCAM_REGIONS);
+ max_regions = MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_REGIONS);
+
+ /* Use 1:1 mapping between ACL region and TCAM region */
+ if (max_tcam_regions < max_regions)
+ max_regions = max_tcam_regions;
+
+ tcam->used_regions = bitmap_zalloc(max_regions, GFP_KERNEL);
+ if (!tcam->used_regions) {
+ err = -ENOMEM;
+ goto err_alloc_used_regions;
+ }
+ tcam->max_regions = max_regions;
+
+ max_groups = MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_GROUPS);
+ tcam->used_groups = bitmap_zalloc(max_groups, GFP_KERNEL);
+ if (!tcam->used_groups) {
+ err = -ENOMEM;
+ goto err_alloc_used_groups;
+ }
+ tcam->max_groups = max_groups;
+ tcam->max_group_size = MLXSW_CORE_RES_GET(mlxsw_sp->core,
+ ACL_MAX_GROUP_SIZE);
+
+ err = ops->init(mlxsw_sp, tcam->priv, tcam);
+ if (err)
+ goto err_tcam_init;
+
+ return 0;
+
+err_tcam_init:
+ bitmap_free(tcam->used_groups);
+err_alloc_used_groups:
+ bitmap_free(tcam->used_regions);
+err_alloc_used_regions:
+ mlxsw_sp_acl_tcam_rehash_params_unregister(mlxsw_sp);
+err_rehash_params_register:
+ mutex_destroy(&tcam->lock);
+ return err;
+}
+
+void mlxsw_sp_acl_tcam_fini(struct mlxsw_sp *mlxsw_sp,
+ struct mlxsw_sp_acl_tcam *tcam)
+{
+ const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops;
+
+ ops->fini(mlxsw_sp, tcam->priv);
+ bitmap_free(tcam->used_groups);
+ bitmap_free(tcam->used_regions);
+ mlxsw_sp_acl_tcam_rehash_params_unregister(mlxsw_sp);
+ mutex_destroy(&tcam->lock);
+}
+
static const enum mlxsw_afk_element mlxsw_sp_acl_tcam_pattern_ipv4[] = {
MLXSW_AFK_ELEMENT_SRC_SYS_PORT,
MLXSW_AFK_ELEMENT_DMAC_32_47,
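Moving the rehash parameter into the TCAM code also reorders init: the devlink param is registered first, and every later failure unwinds through labels in reverse, with mlxsw_sp_acl_tcam_fini() repeating the same teardown sequence. A generic sketch of that unwind shape, with hypothetical step names standing in for params_register, the bitmap allocations, and ops->init:

struct ctx { int dummy; };
static int step_a(struct ctx *c) { (void)c; return 0; }
static int step_b(struct ctx *c) { (void)c; return 0; }
static int step_c(struct ctx *c) { (void)c; return 0; }
static void undo_a(struct ctx *c) { (void)c; }
static void undo_b(struct ctx *c) { (void)c; }

static int example_init(struct ctx *c)
{
	int err;

	err = step_a(c);		/* e.g. rehash params register */
	if (err)
		return err;
	err = step_b(c);		/* e.g. bitmap allocations */
	if (err)
		goto err_b;
	err = step_c(c);		/* e.g. ops->init */
	if (err)
		goto err_c;
	return 0;

err_c:
	undo_b(c);			/* undo in reverse order of init */
err_b:
	undo_a(c);
	return err;
}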
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h
index edbbc89e7a71..462bf448497d 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h
@@ -29,11 +29,6 @@ int mlxsw_sp_acl_tcam_init(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_tcam *tcam);
void mlxsw_sp_acl_tcam_fini(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_tcam *tcam);
-u32 mlxsw_sp_acl_tcam_vregion_rehash_intrvl_get(struct mlxsw_sp *mlxsw_sp,
- struct mlxsw_sp_acl_tcam *tcam);
-int mlxsw_sp_acl_tcam_vregion_rehash_intrvl_set(struct mlxsw_sp *mlxsw_sp,
- struct mlxsw_sp_acl_tcam *tcam,
- u32 val);
int mlxsw_sp_acl_tcam_priority_get(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_rule_info *rulei,
u32 *priority, bool fillup_priority);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
index e91fb205e0b4..594cdcb90b3d 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
@@ -103,7 +103,7 @@ static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp,
}
ingress = mlxsw_sp_flow_block_is_ingress_bound(block);
err = mlxsw_sp_acl_rulei_act_drop(rulei, ingress,
- act->cookie, extack);
+ act->user_cookie, extack);
if (err) {
NL_SET_ERR_MSG_MOD(extack, "Cannot append drop action");
return err;
diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
index 534840f9a7ca..7e0871b631e4 100644
--- a/drivers/net/ethernet/microchip/lan743x_main.c
+++ b/drivers/net/ethernet/microchip/lan743x_main.c
@@ -792,7 +792,7 @@ static int lan743x_mac_mii_wait_till_not_busy(struct lan743x_adapter *adapter)
!(data & MAC_MII_ACC_MII_BUSY_), 0, 1000000);
}
-static int lan743x_mdiobus_read(struct mii_bus *bus, int phy_id, int index)
+static int lan743x_mdiobus_read_c22(struct mii_bus *bus, int phy_id, int index)
{
struct lan743x_adapter *adapter = bus->priv;
u32 val, mii_access;
@@ -814,8 +814,8 @@ static int lan743x_mdiobus_read(struct mii_bus *bus, int phy_id, int index)
return (int)(val & 0xFFFF);
}
-static int lan743x_mdiobus_write(struct mii_bus *bus,
- int phy_id, int index, u16 regval)
+static int lan743x_mdiobus_write_c22(struct mii_bus *bus,
+ int phy_id, int index, u16 regval)
{
struct lan743x_adapter *adapter = bus->priv;
u32 val, mii_access;
@@ -835,12 +835,10 @@ static int lan743x_mdiobus_write(struct mii_bus *bus,
return ret;
}
-static u32 lan743x_mac_mmd_access(int id, int index, int op)
+static u32 lan743x_mac_mmd_access(int id, int dev_addr, int op)
{
- u16 dev_addr;
u32 ret;
- dev_addr = (index >> 16) & 0x1f;
ret = (id << MAC_MII_ACC_PHY_ADDR_SHIFT_) &
MAC_MII_ACC_PHY_ADDR_MASK_;
ret |= (dev_addr << MAC_MII_ACC_MIIMMD_SHIFT_) &
@@ -858,7 +856,8 @@ static u32 lan743x_mac_mmd_access(int id, int index, int op)
return ret;
}
-static int lan743x_mdiobus_c45_read(struct mii_bus *bus, int phy_id, int index)
+static int lan743x_mdiobus_read_c45(struct mii_bus *bus, int phy_id,
+ int dev_addr, int index)
{
struct lan743x_adapter *adapter = bus->priv;
u32 mmd_access;
@@ -868,32 +867,30 @@ static int lan743x_mdiobus_c45_read(struct mii_bus *bus, int phy_id, int index)
ret = lan743x_mac_mii_wait_till_not_busy(adapter);
if (ret < 0)
return ret;
- if (index & MII_ADDR_C45) {
- /* Load Register Address */
- lan743x_csr_write(adapter, MAC_MII_DATA, (u32)(index & 0xffff));
- mmd_access = lan743x_mac_mmd_access(phy_id, index,
- MMD_ACCESS_ADDRESS);
- lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access);
- ret = lan743x_mac_mii_wait_till_not_busy(adapter);
- if (ret < 0)
- return ret;
- /* Read Data */
- mmd_access = lan743x_mac_mmd_access(phy_id, index,
- MMD_ACCESS_READ);
- lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access);
- ret = lan743x_mac_mii_wait_till_not_busy(adapter);
- if (ret < 0)
- return ret;
- ret = lan743x_csr_read(adapter, MAC_MII_DATA);
- return (int)(ret & 0xFFFF);
- }
- ret = lan743x_mdiobus_read(bus, phy_id, index);
- return ret;
+ /* Load Register Address */
+ lan743x_csr_write(adapter, MAC_MII_DATA, index);
+ mmd_access = lan743x_mac_mmd_access(phy_id, dev_addr,
+ MMD_ACCESS_ADDRESS);
+ lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access);
+ ret = lan743x_mac_mii_wait_till_not_busy(adapter);
+ if (ret < 0)
+ return ret;
+
+ /* Read Data */
+ mmd_access = lan743x_mac_mmd_access(phy_id, dev_addr,
+ MMD_ACCESS_READ);
+ lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access);
+ ret = lan743x_mac_mii_wait_till_not_busy(adapter);
+ if (ret < 0)
+ return ret;
+
+ ret = lan743x_csr_read(adapter, MAC_MII_DATA);
+ return (int)(ret & 0xFFFF);
}
-static int lan743x_mdiobus_c45_write(struct mii_bus *bus,
- int phy_id, int index, u16 regval)
+static int lan743x_mdiobus_write_c45(struct mii_bus *bus, int phy_id,
+ int dev_addr, int index, u16 regval)
{
struct lan743x_adapter *adapter = bus->priv;
u32 mmd_access;
@@ -903,26 +900,23 @@ static int lan743x_mdiobus_c45_write(struct mii_bus *bus,
ret = lan743x_mac_mii_wait_till_not_busy(adapter);
if (ret < 0)
return ret;
- if (index & MII_ADDR_C45) {
- /* Load Register Address */
- lan743x_csr_write(adapter, MAC_MII_DATA, (u32)(index & 0xffff));
- mmd_access = lan743x_mac_mmd_access(phy_id, index,
- MMD_ACCESS_ADDRESS);
- lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access);
- ret = lan743x_mac_mii_wait_till_not_busy(adapter);
- if (ret < 0)
- return ret;
- /* Write Data */
- lan743x_csr_write(adapter, MAC_MII_DATA, (u32)regval);
- mmd_access = lan743x_mac_mmd_access(phy_id, index,
- MMD_ACCESS_WRITE);
- lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access);
- ret = lan743x_mac_mii_wait_till_not_busy(adapter);
- } else {
- ret = lan743x_mdiobus_write(bus, phy_id, index, regval);
- }
- return ret;
+ /* Load Register Address */
+ lan743x_csr_write(adapter, MAC_MII_DATA, (u32)index);
+ mmd_access = lan743x_mac_mmd_access(phy_id, dev_addr,
+ MMD_ACCESS_ADDRESS);
+ lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access);
+ ret = lan743x_mac_mii_wait_till_not_busy(adapter);
+ if (ret < 0)
+ return ret;
+
+ /* Write Data */
+ lan743x_csr_write(adapter, MAC_MII_DATA, (u32)regval);
+ mmd_access = lan743x_mac_mmd_access(phy_id, dev_addr,
+ MMD_ACCESS_WRITE);
+ lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access);
+
+ return lan743x_mac_mii_wait_till_not_busy(adapter);
}
static int lan743x_sgmii_wait_till_not_busy(struct lan743x_adapter *adapter)
@@ -1424,14 +1418,6 @@ static void lan743x_phy_link_status_change(struct net_device *netdev)
data = lan743x_csr_read(adapter, MAC_CR);
- /* set interface mode */
- if (phy_interface_is_rgmii(phydev))
- /* RGMII */
- data &= ~MAC_CR_MII_EN_;
- else
- /* GMII */
- data |= MAC_CR_MII_EN_;
-
/* set duplex mode */
if (phydev->duplex)
data |= MAC_CR_DPX_;
@@ -1483,10 +1469,33 @@ static void lan743x_phy_close(struct lan743x_adapter *adapter)
netdev->phydev = NULL;
}
+static void lan743x_phy_interface_select(struct lan743x_adapter *adapter)
+{
+ u32 id_rev;
+ u32 data;
+
+ data = lan743x_csr_read(adapter, MAC_CR);
+ id_rev = adapter->csr.id_rev & ID_REV_ID_MASK_;
+
+ if (adapter->is_pci11x1x && adapter->is_sgmii_en)
+ adapter->phy_interface = PHY_INTERFACE_MODE_SGMII;
+ else if (id_rev == ID_REV_ID_LAN7430_)
+ adapter->phy_interface = PHY_INTERFACE_MODE_GMII;
+ else if ((id_rev == ID_REV_ID_LAN7431_) && (data & MAC_CR_MII_EN_))
+ adapter->phy_interface = PHY_INTERFACE_MODE_MII;
+ else
+ adapter->phy_interface = PHY_INTERFACE_MODE_RGMII;
+}
+
static int lan743x_phy_open(struct lan743x_adapter *adapter)
{
struct net_device *netdev = adapter->netdev;
struct lan743x_phy *phy = &adapter->phy;
+ struct fixed_phy_status fphy_status = {
+ .link = 1,
+ .speed = SPEED_1000,
+ .duplex = DUPLEX_FULL,
+ };
struct phy_device *phydev;
int ret = -EIO;
@@ -1497,17 +1506,25 @@ static int lan743x_phy_open(struct lan743x_adapter *adapter)
if (!phydev) {
/* try internal phy */
phydev = phy_find_first(adapter->mdiobus);
- if (!phydev)
- goto return_error;
+ if (!phydev) {
+ if ((adapter->csr.id_rev & ID_REV_ID_MASK_) ==
+ ID_REV_ID_LAN7431_) {
+ phydev = fixed_phy_register(PHY_POLL,
+ &fphy_status, NULL);
+ if (IS_ERR(phydev)) {
+ netdev_err(netdev, "No PHY/fixed_PHY found\n");
+ return -EIO;
+ }
+ } else {
+ goto return_error;
+ }
+ }
- if (adapter->is_pci11x1x)
- ret = phy_connect_direct(netdev, phydev,
- lan743x_phy_link_status_change,
- PHY_INTERFACE_MODE_RGMII);
- else
- ret = phy_connect_direct(netdev, phydev,
- lan743x_phy_link_status_change,
- PHY_INTERFACE_MODE_GMII);
+ lan743x_phy_interface_select(adapter);
+
+ ret = phy_connect_direct(netdev, phydev,
+ lan743x_phy_link_status_change,
+ adapter->phy_interface);
if (ret)
goto return_error;
}
@@ -3285,9 +3302,10 @@ static int lan743x_mdiobus_init(struct lan743x_adapter *adapter)
lan743x_csr_write(adapter, SGMII_CTL, sgmii_ctl);
netif_dbg(adapter, drv, adapter->netdev,
"SGMII operation\n");
- adapter->mdiobus->probe_capabilities = MDIOBUS_C22_C45;
- adapter->mdiobus->read = lan743x_mdiobus_c45_read;
- adapter->mdiobus->write = lan743x_mdiobus_c45_write;
+ adapter->mdiobus->read = lan743x_mdiobus_read_c22;
+ adapter->mdiobus->write = lan743x_mdiobus_write_c22;
+ adapter->mdiobus->read_c45 = lan743x_mdiobus_read_c45;
+ adapter->mdiobus->write_c45 = lan743x_mdiobus_write_c45;
adapter->mdiobus->name = "lan743x-mdiobus-c45";
netif_dbg(adapter, drv, adapter->netdev,
"lan743x-mdiobus-c45\n");
@@ -3299,16 +3317,15 @@ static int lan743x_mdiobus_init(struct lan743x_adapter *adapter)
netif_dbg(adapter, drv, adapter->netdev,
"RGMII operation\n");
// Only C22 support when RGMII I/F
- adapter->mdiobus->probe_capabilities = MDIOBUS_C22;
- adapter->mdiobus->read = lan743x_mdiobus_read;
- adapter->mdiobus->write = lan743x_mdiobus_write;
+ adapter->mdiobus->read = lan743x_mdiobus_read_c22;
+ adapter->mdiobus->write = lan743x_mdiobus_write_c22;
adapter->mdiobus->name = "lan743x-mdiobus";
netif_dbg(adapter, drv, adapter->netdev,
"lan743x-mdiobus\n");
}
} else {
- adapter->mdiobus->read = lan743x_mdiobus_read;
- adapter->mdiobus->write = lan743x_mdiobus_write;
+ adapter->mdiobus->read = lan743x_mdiobus_read_c22;
+ adapter->mdiobus->write = lan743x_mdiobus_write_c22;
adapter->mdiobus->name = "lan743x-mdiobus";
netif_dbg(adapter, drv, adapter->netdev, "lan743x-mdiobus\n");
}
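With the conversion above, clause 22 and clause 45 transactions no longer share one callback keyed off MII_ADDR_C45; the mii_bus gets distinct op pointers and the device address is passed explicitly. A minimal sketch of the resulting wiring, with hypothetical accessor names (a real driver would talk to its MDIO hardware in the stubs):

#include <linux/phy.h>

static int foo_read_c22(struct mii_bus *bus, int phy_id, int reg)
{ return 0; }
static int foo_write_c22(struct mii_bus *bus, int phy_id, int reg, u16 val)
{ return 0; }
static int foo_read_c45(struct mii_bus *bus, int phy_id, int devad, int reg)
{ return 0; }
static int foo_write_c45(struct mii_bus *bus, int phy_id, int devad, int reg,
			 u16 val)
{ return 0; }

static void foo_mdiobus_setup(struct mii_bus *bus)
{
	bus->read = foo_read_c22;	/* clause 22 only */
	bus->write = foo_write_c22;
	bus->read_c45 = foo_read_c45;	/* clause 45, devad passed directly */
	bus->write_c45 = foo_write_c45;
}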
diff --git a/drivers/net/ethernet/microchip/lan743x_main.h b/drivers/net/ethernet/microchip/lan743x_main.h
index 8438c3dbcf36..52609fc13ad9 100644
--- a/drivers/net/ethernet/microchip/lan743x_main.h
+++ b/drivers/net/ethernet/microchip/lan743x_main.h
@@ -1042,6 +1042,7 @@ struct lan743x_adapter {
#define LAN743X_ADAPTER_FLAG_OTP BIT(0)
u32 flags;
u32 hw_cfg;
+ phy_interface_t phy_interface;
};
#define LAN743X_COMPONENT_FLAG_RX(channel) BIT(20 + (channel))
diff --git a/drivers/net/ethernet/microchip/lan966x/Makefile b/drivers/net/ethernet/microchip/lan966x/Makefile
index 56afd694f3c7..7b0cda4ffa6b 100644
--- a/drivers/net/ethernet/microchip/lan966x/Makefile
+++ b/drivers/net/ethernet/microchip/lan966x/Makefile
@@ -15,5 +15,7 @@ lan966x-switch-objs := lan966x_main.o lan966x_phylink.o lan966x_port.o \
lan966x_xdp.o lan966x_vcap_impl.o lan966x_vcap_ag_api.o \
lan966x_tc_flower.o lan966x_goto.o
+lan966x-switch-$(CONFIG_DEBUG_FS) += lan966x_vcap_debugfs.o
+
# Provide include files
ccflags-y += -I$(srctree)/drivers/net/ethernet/microchip/vcap
diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_goto.c b/drivers/net/ethernet/microchip/lan966x/lan966x_goto.c
index bf0cfe24a8fc..9b18156eea1a 100644
--- a/drivers/net/ethernet/microchip/lan966x/lan966x_goto.c
+++ b/drivers/net/ethernet/microchip/lan966x/lan966x_goto.c
@@ -4,7 +4,7 @@
#include "vcap_api_client.h"
int lan966x_goto_port_add(struct lan966x_port *port,
- struct flow_action_entry *act,
+ int from_cid, int to_cid,
unsigned long goto_id,
struct netlink_ext_ack *extack)
{
@@ -12,7 +12,7 @@ int lan966x_goto_port_add(struct lan966x_port *port,
int err;
err = vcap_enable_lookups(lan966x->vcap_ctrl, port->dev,
- act->chain_index, goto_id,
+ from_cid, to_cid, goto_id,
true);
if (err == -EFAULT) {
NL_SET_ERR_MSG_MOD(extack, "Unsupported goto chain");
@@ -29,8 +29,6 @@ int lan966x_goto_port_add(struct lan966x_port *port,
return err;
}
- port->tc.goto_id = goto_id;
-
return 0;
}
@@ -41,14 +39,12 @@ int lan966x_goto_port_del(struct lan966x_port *port,
struct lan966x *lan966x = port->lan966x;
int err;
- err = vcap_enable_lookups(lan966x->vcap_ctrl, port->dev, 0,
+ err = vcap_enable_lookups(lan966x->vcap_ctrl, port->dev, 0, 0,
goto_id, false);
if (err) {
NL_SET_ERR_MSG_MOD(extack, "Could not disable VCAP lookups");
return err;
}
- port->tc.goto_id = 0;
-
return 0;
}
diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
index 580c91d24a52..8b89de0541ff 100644
--- a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
+++ b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
@@ -823,6 +823,11 @@ static int lan966x_probe_port(struct lan966x *lan966x, u32 p,
port->phylink = phylink;
+ if (lan966x->fdma)
+ dev->xdp_features = NETDEV_XDP_ACT_BASIC |
+ NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT;
+
err = register_netdev(dev);
if (err) {
dev_err(lan966x->dev, "register_netdev failed\n");
@@ -1035,6 +1040,8 @@ static int lan966x_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, lan966x);
lan966x->dev = &pdev->dev;
+ lan966x->debugfs_root = debugfs_create_dir("lan966x", NULL);
+
if (!device_get_mac_address(&pdev->dev, mac_addr)) {
ether_addr_copy(lan966x->base_mac, mac_addr);
} else {
@@ -1223,6 +1230,8 @@ static int lan966x_remove(struct platform_device *pdev)
lan966x_fdb_deinit(lan966x);
lan966x_ptp_deinit(lan966x);
+ debugfs_remove_recursive(lan966x->debugfs_root);
+
return 0;
}
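The debugfs root created at probe time is handed to the VCAP debugfs code and torn down as a whole on remove. A small sketch of that lifetime, assuming nothing beyond the stock <linux/debugfs.h> API; debugfs calls are designed so callers need not check for errors:

#include <linux/debugfs.h>

static struct dentry *example_root;

static void example_debugfs_init(void)
{
	/* Created once; child files/dirs hang off this dentry. */
	example_root = debugfs_create_dir("lan966x", NULL);
}

static void example_debugfs_exit(void)
{
	/* Removes the directory and everything beneath it. */
	debugfs_remove_recursive(example_root);
}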
diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_main.h b/drivers/net/ethernet/microchip/lan966x/lan966x_main.h
index 3491f1961835..49f5159afbf3 100644
--- a/drivers/net/ethernet/microchip/lan966x/lan966x_main.h
+++ b/drivers/net/ethernet/microchip/lan966x/lan966x_main.h
@@ -3,6 +3,7 @@
#ifndef __LAN966X_MAIN_H__
#define __LAN966X_MAIN_H__
+#include <linux/debugfs.h>
#include <linux/etherdevice.h>
#include <linux/if_vlan.h>
#include <linux/jiffies.h>
@@ -14,6 +15,9 @@
#include <net/pkt_sched.h>
#include <net/switchdev.h>
+#include <vcap_api.h>
+#include <vcap_api_client.h>
+
#include "lan966x_regs.h"
#include "lan966x_ifh.h"
@@ -128,6 +132,13 @@ enum LAN966X_PORT_MASK_MODE {
LAN966X_PMM_REDIRECT,
};
+enum vcap_is2_port_sel_ipv6 {
+ VCAP_IS2_PS_IPV6_TCPUDP_OTHER,
+ VCAP_IS2_PS_IPV6_STD,
+ VCAP_IS2_PS_IPV6_IP4_TCPUDP_IP4_OTHER,
+ VCAP_IS2_PS_IPV6_MAC_ETYPE,
+};
+
struct lan966x_port;
struct lan966x_db {
@@ -315,6 +326,9 @@ struct lan966x {
/* vcap */
struct vcap_control *vcap_ctrl;
+
+ /* debugfs */
+ struct dentry *debugfs_root;
};
struct lan966x_port_config {
@@ -332,7 +346,6 @@ struct lan966x_port_tc {
unsigned long police_id;
unsigned long ingress_mirror_id;
unsigned long egress_mirror_id;
- unsigned long goto_id;
struct flow_stats police_stat;
struct flow_stats mirror_stat;
};
@@ -602,12 +615,25 @@ static inline bool lan966x_xdp_port_present(struct lan966x_port *port)
int lan966x_vcap_init(struct lan966x *lan966x);
void lan966x_vcap_deinit(struct lan966x *lan966x);
+#if defined(CONFIG_DEBUG_FS)
+int lan966x_vcap_port_info(struct net_device *dev,
+ struct vcap_admin *admin,
+ struct vcap_output_print *out);
+#else
+static inline int lan966x_vcap_port_info(struct net_device *dev,
+ struct vcap_admin *admin,
+ struct vcap_output_print *out)
+{
+ return 0;
+}
+#endif
int lan966x_tc_flower(struct lan966x_port *port,
- struct flow_cls_offload *f);
+ struct flow_cls_offload *f,
+ bool ingress);
int lan966x_goto_port_add(struct lan966x_port *port,
- struct flow_action_entry *act,
+ int from_cid, int to_cid,
unsigned long goto_id,
struct netlink_ext_ack *extack);
int lan966x_goto_port_del(struct lan966x_port *port,
diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c b/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c
index a8348437dd87..931e37b9a0ad 100644
--- a/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c
+++ b/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c
@@ -83,8 +83,7 @@ static int lan966x_ptp_add_trap(struct lan966x_port *port,
if (err)
goto free_rule;
- err = vcap_set_rule_set_actionset(vrule, VCAP_AFS_BASE_TYPE);
- err |= vcap_rule_add_action_bit(vrule, VCAP_AF_CPU_COPY_ENA, VCAP_BIT_1);
+ err = vcap_rule_add_action_bit(vrule, VCAP_AF_CPU_COPY_ENA, VCAP_BIT_1);
err |= vcap_rule_add_action_u32(vrule, VCAP_AF_MASK_MODE, LAN966X_PMM_REPLACE);
err |= vcap_val_rule(vrule, proto);
if (err)
@@ -524,9 +523,9 @@ irqreturn_t lan966x_ptp_irq_handler(int irq, void *args)
if (WARN_ON(!skb_match))
continue;
- spin_lock(&lan966x->ptp_ts_id_lock);
+ spin_lock_irqsave(&lan966x->ptp_ts_id_lock, flags);
lan966x->ptp_skbs--;
- spin_unlock(&lan966x->ptp_ts_id_lock);
+ spin_unlock_irqrestore(&lan966x->ptp_ts_id_lock, flags);
/* Get the h/w timestamp */
lan966x_get_hwtimestamp(lan966x, &ts, delay);
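The switch to spin_lock_irqsave() matters because the plain spin_unlock() variant would unconditionally re-enable interrupts, which is wrong if the caller already held them disabled. A minimal sketch of the pattern, assuming only the stock spinlock API:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);
static int example_counter;

static void example_update(void)
{
	unsigned long flags;

	/* Saves the current interrupt state in 'flags', disables IRQs,
	 * takes the lock; the unlock restores the saved state instead of
	 * blindly enabling interrupts.
	 */
	spin_lock_irqsave(&example_lock, flags);
	example_counter--;
	spin_unlock_irqrestore(&example_lock, flags);
}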
diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_tc.c b/drivers/net/ethernet/microchip/lan966x/lan966x_tc.c
index 01072121c999..cf0cc7562d04 100644
--- a/drivers/net/ethernet/microchip/lan966x/lan966x_tc.c
+++ b/drivers/net/ethernet/microchip/lan966x/lan966x_tc.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0+
#include <net/pkt_cls.h>
+#include <net/pkt_sched.h>
#include "lan966x_main.h"
@@ -70,7 +71,7 @@ static int lan966x_tc_block_cb(enum tc_setup_type type, void *type_data,
case TC_SETUP_CLSMATCHALL:
return lan966x_tc_matchall(port, type_data, ingress);
case TC_SETUP_CLSFLOWER:
- return lan966x_tc_flower(port, type_data);
+ return lan966x_tc_flower(port, type_data, ingress);
default:
return -EOPNOTSUPP;
}
diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_tc_flower.c b/drivers/net/ethernet/microchip/lan966x/lan966x_tc_flower.c
index ba3fa917d6b7..f960727ecaee 100644
--- a/drivers/net/ethernet/microchip/lan966x/lan966x_tc_flower.c
+++ b/drivers/net/ethernet/microchip/lan966x/lan966x_tc_flower.c
@@ -3,53 +3,135 @@
#include "lan966x_main.h"
#include "vcap_api.h"
#include "vcap_api_client.h"
+#include "vcap_tc.h"
-struct lan966x_tc_flower_parse_usage {
- struct flow_cls_offload *f;
- struct flow_rule *frule;
- struct vcap_rule *vrule;
- unsigned int used_keys;
- u16 l3_proto;
-};
+static bool lan966x_tc_is_known_etype(u16 etype)
+{
+ switch (etype) {
+ case ETH_P_ALL:
+ case ETH_P_ARP:
+ case ETH_P_IP:
+ case ETH_P_IPV6:
+ return true;
+ }
+
+ return false;
+}
-static int lan966x_tc_flower_handler_ethaddr_usage(struct lan966x_tc_flower_parse_usage *st)
+static int
+lan966x_tc_flower_handler_control_usage(struct vcap_tc_flower_parse_usage *st)
{
- enum vcap_key_field smac_key = VCAP_KF_L2_SMAC;
- enum vcap_key_field dmac_key = VCAP_KF_L2_DMAC;
- struct flow_match_eth_addrs match;
- struct vcap_u48_key smac, dmac;
+ struct flow_match_control match;
int err = 0;
- flow_rule_match_eth_addrs(st->frule, &match);
-
- if (!is_zero_ether_addr(match.mask->src)) {
- vcap_netbytes_copy(smac.value, match.key->src, ETH_ALEN);
- vcap_netbytes_copy(smac.mask, match.mask->src, ETH_ALEN);
- err = vcap_rule_add_key_u48(st->vrule, smac_key, &smac);
+ flow_rule_match_control(st->frule, &match);
+ if (match.mask->flags & FLOW_DIS_IS_FRAGMENT) {
+ if (match.key->flags & FLOW_DIS_IS_FRAGMENT)
+ err = vcap_rule_add_key_bit(st->vrule,
+ VCAP_KF_L3_FRAGMENT,
+ VCAP_BIT_1);
+ else
+ err = vcap_rule_add_key_bit(st->vrule,
+ VCAP_KF_L3_FRAGMENT,
+ VCAP_BIT_0);
if (err)
goto out;
}
- if (!is_zero_ether_addr(match.mask->dst)) {
- vcap_netbytes_copy(dmac.value, match.key->dst, ETH_ALEN);
- vcap_netbytes_copy(dmac.mask, match.mask->dst, ETH_ALEN);
- err = vcap_rule_add_key_u48(st->vrule, dmac_key, &dmac);
+ if (match.mask->flags & FLOW_DIS_FIRST_FRAG) {
+ if (match.key->flags & FLOW_DIS_FIRST_FRAG)
+ err = vcap_rule_add_key_bit(st->vrule,
+ VCAP_KF_L3_FRAG_OFS_GT0,
+ VCAP_BIT_0);
+ else
+ err = vcap_rule_add_key_bit(st->vrule,
+ VCAP_KF_L3_FRAG_OFS_GT0,
+ VCAP_BIT_1);
if (err)
goto out;
}
- st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS);
+ st->used_keys |= BIT(FLOW_DISSECTOR_KEY_CONTROL);
return err;
out:
- NL_SET_ERR_MSG_MOD(st->f->common.extack, "eth_addr parse error");
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_frag parse error");
return err;
}
static int
-(*lan966x_tc_flower_handlers_usage[])(struct lan966x_tc_flower_parse_usage *st) = {
- [FLOW_DISSECTOR_KEY_ETH_ADDRS] = lan966x_tc_flower_handler_ethaddr_usage,
+lan966x_tc_flower_handler_basic_usage(struct vcap_tc_flower_parse_usage *st)
+{
+ struct flow_match_basic match;
+ int err = 0;
+
+ flow_rule_match_basic(st->frule, &match);
+ if (match.mask->n_proto) {
+ st->l3_proto = be16_to_cpu(match.key->n_proto);
+ if (!lan966x_tc_is_known_etype(st->l3_proto)) {
+ err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_ETYPE,
+ st->l3_proto, ~0);
+ if (err)
+ goto out;
+ } else if (st->l3_proto == ETH_P_IP) {
+ err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_IP4_IS,
+ VCAP_BIT_1);
+ if (err)
+ goto out;
+ }
+ }
+ if (match.mask->ip_proto) {
+ st->l4_proto = match.key->ip_proto;
+
+ if (st->l4_proto == IPPROTO_TCP) {
+ err = vcap_rule_add_key_bit(st->vrule,
+ VCAP_KF_TCP_IS,
+ VCAP_BIT_1);
+ if (err)
+ goto out;
+ } else if (st->l4_proto == IPPROTO_UDP) {
+ err = vcap_rule_add_key_bit(st->vrule,
+ VCAP_KF_TCP_IS,
+ VCAP_BIT_0);
+ if (err)
+ goto out;
+ } else {
+ err = vcap_rule_add_key_u32(st->vrule,
+ VCAP_KF_L3_IP_PROTO,
+ st->l4_proto, ~0);
+ if (err)
+ goto out;
+ }
+ }
+
+ st->used_keys |= BIT(FLOW_DISSECTOR_KEY_BASIC);
+ return err;
+out:
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_proto parse error");
+ return err;
+}
+
+static int
+lan966x_tc_flower_handler_vlan_usage(struct vcap_tc_flower_parse_usage *st)
+{
+ return vcap_tc_flower_handler_vlan_usage(st,
+ VCAP_KF_8021Q_VID_CLS,
+ VCAP_KF_8021Q_PCP_CLS);
+}
+
+static int
+(*lan966x_tc_flower_handlers_usage[])(struct vcap_tc_flower_parse_usage *st) = {
+ [FLOW_DISSECTOR_KEY_ETH_ADDRS] = vcap_tc_flower_handler_ethaddr_usage,
+ [FLOW_DISSECTOR_KEY_IPV4_ADDRS] = vcap_tc_flower_handler_ipv4_usage,
+ [FLOW_DISSECTOR_KEY_IPV6_ADDRS] = vcap_tc_flower_handler_ipv6_usage,
+ [FLOW_DISSECTOR_KEY_CONTROL] = lan966x_tc_flower_handler_control_usage,
+ [FLOW_DISSECTOR_KEY_PORTS] = vcap_tc_flower_handler_portnum_usage,
+ [FLOW_DISSECTOR_KEY_BASIC] = lan966x_tc_flower_handler_basic_usage,
+ [FLOW_DISSECTOR_KEY_VLAN] = lan966x_tc_flower_handler_vlan_usage,
+ [FLOW_DISSECTOR_KEY_TCP] = vcap_tc_flower_handler_tcp_usage,
+ [FLOW_DISSECTOR_KEY_ARP] = vcap_tc_flower_handler_arp_usage,
+ [FLOW_DISSECTOR_KEY_IP] = vcap_tc_flower_handler_ip_usage,
};
static int lan966x_tc_flower_use_dissectors(struct flow_cls_offload *f,
@@ -57,8 +139,8 @@ static int lan966x_tc_flower_use_dissectors(struct flow_cls_offload *f,
struct vcap_rule *vrule,
u16 *l3_proto)
{
- struct lan966x_tc_flower_parse_usage state = {
- .f = f,
+ struct vcap_tc_flower_parse_usage state = {
+ .fco = f,
.vrule = vrule,
.l3_proto = ETH_P_ALL,
};
@@ -82,8 +164,9 @@ static int lan966x_tc_flower_use_dissectors(struct flow_cls_offload *f,
}
static int lan966x_tc_flower_action_check(struct vcap_control *vctrl,
+ struct net_device *dev,
struct flow_cls_offload *fco,
- struct vcap_admin *admin)
+ bool ingress)
{
struct flow_rule *rule = flow_cls_offload_flow_rule(fco);
struct flow_action_entry *actent, *last_actent = NULL;
@@ -109,21 +192,24 @@ static int lan966x_tc_flower_action_check(struct vcap_control *vctrl,
last_actent = actent; /* Save last action for later check */
}
- /* Check that last action is a goto */
- if (last_actent->id != FLOW_ACTION_GOTO) {
+ /* Check that the last action is a goto.
+ * The last chain/lookup does not need to have a goto action.
+ */
+ if (last_actent->id == FLOW_ACTION_GOTO) {
+ /* Check if the destination chain is in one of the VCAPs */
+ if (!vcap_is_next_lookup(vctrl, fco->common.chain_index,
+ last_actent->chain_index)) {
+ NL_SET_ERR_MSG_MOD(fco->common.extack,
+ "Invalid goto chain");
+ return -EINVAL;
+ }
+ } else if (!vcap_is_last_chain(vctrl, fco->common.chain_index,
+ ingress)) {
NL_SET_ERR_MSG_MOD(fco->common.extack,
"Last action must be 'goto'");
return -EINVAL;
}
- /* Check if the goto chain is in the next lookup */
- if (!vcap_is_next_lookup(vctrl, fco->common.chain_index,
- last_actent->chain_index)) {
- NL_SET_ERR_MSG_MOD(fco->common.extack,
- "Invalid goto chain");
- return -EINVAL;
- }
-
/* Catch unsupported combinations of actions */
if (action_mask & BIT(FLOW_ACTION_TRAP) &&
action_mask & BIT(FLOW_ACTION_ACCEPT)) {
@@ -137,7 +223,8 @@ static int lan966x_tc_flower_action_check(struct vcap_control *vctrl,
static int lan966x_tc_flower_add(struct lan966x_port *port,
struct flow_cls_offload *f,
- struct vcap_admin *admin)
+ struct vcap_admin *admin,
+ bool ingress)
{
struct flow_action_entry *act;
u16 l3_proto = ETH_P_ALL;
@@ -145,8 +232,8 @@ static int lan966x_tc_flower_add(struct lan966x_port *port,
struct vcap_rule *vrule;
int err, idx;
- err = lan966x_tc_flower_action_check(port->lan966x->vcap_ctrl, f,
- admin);
+ err = lan966x_tc_flower_action_check(port->lan966x->vcap_ctrl,
+ port->dev, f, ingress);
if (err)
return err;
@@ -174,8 +261,6 @@ static int lan966x_tc_flower_add(struct lan966x_port *port,
0);
err |= vcap_rule_add_action_u32(vrule, VCAP_AF_MASK_MODE,
LAN966X_PMM_REPLACE);
- err |= vcap_set_rule_set_actionset(vrule,
- VCAP_AFS_BASE_TYPE);
if (err)
goto out;
@@ -229,8 +314,27 @@ static int lan966x_tc_flower_del(struct lan966x_port *port,
return err;
}
+static int lan966x_tc_flower_stats(struct lan966x_port *port,
+ struct flow_cls_offload *f,
+ struct vcap_admin *admin)
+{
+ struct vcap_counter count = {};
+ int err;
+
+ err = vcap_get_rule_count_by_cookie(port->lan966x->vcap_ctrl,
+ &count, f->cookie);
+ if (err)
+ return err;
+
+ flow_stats_update(&f->stats, 0x0, count.value, 0, 0,
+ FLOW_ACTION_HW_STATS_IMMEDIATE);
+
+ return err;
+}
+
int lan966x_tc_flower(struct lan966x_port *port,
- struct flow_cls_offload *f)
+ struct flow_cls_offload *f,
+ bool ingress)
{
struct vcap_admin *admin;
@@ -243,9 +347,11 @@ int lan966x_tc_flower(struct lan966x_port *port,
switch (f->command) {
case FLOW_CLS_REPLACE:
- return lan966x_tc_flower_add(port, f, admin);
+ return lan966x_tc_flower_add(port, f, admin, ingress);
case FLOW_CLS_DESTROY:
return lan966x_tc_flower_del(port, f, admin);
+ case FLOW_CLS_STATS:
+ return lan966x_tc_flower_stats(port, f, admin);
default:
return -EOPNOTSUPP;
}
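The handler table above is indexed by dissector key; the dispatch loop (only partially visible in this hunk) checks which keys the classifier rule actually uses and calls the matching parser. A sketch of that shape, with illustrative names on top of the shared vcap_tc state (struct vcap_tc_flower_parse_usage comes from vcap_tc.h):

#include <net/flow_offload.h>
#include "vcap_tc.h"

static int
example_use_dissectors(struct vcap_tc_flower_parse_usage *st,
		       int (*const handlers[])(struct vcap_tc_flower_parse_usage *),
		       size_t num_handlers)
{
	size_t i;
	int err;

	for (i = 0; i < num_handlers; i++) {
		/* Skip keys the rule does not match on, and key ids that
		 * have no parser in the table.
		 */
		if (!flow_rule_match_key(st->frule, i) || !handlers[i])
			continue;
		err = handlers[i](st);
		if (err)
			return err;
	}
	return 0;
}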
diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_tc_matchall.c b/drivers/net/ethernet/microchip/lan966x/lan966x_tc_matchall.c
index a539abaad9b6..20627323d656 100644
--- a/drivers/net/ethernet/microchip/lan966x/lan966x_tc_matchall.c
+++ b/drivers/net/ethernet/microchip/lan966x/lan966x_tc_matchall.c
@@ -24,7 +24,8 @@ static int lan966x_tc_matchall_add(struct lan966x_port *port,
return lan966x_mirror_port_add(port, act, f->cookie,
ingress, f->common.extack);
case FLOW_ACTION_GOTO:
- return lan966x_goto_port_add(port, act, f->cookie,
+ return lan966x_goto_port_add(port, f->common.chain_index,
+ act->chain_index, f->cookie,
f->common.extack);
default:
NL_SET_ERR_MSG_MOD(f->common.extack,
@@ -46,13 +47,8 @@ static int lan966x_tc_matchall_del(struct lan966x_port *port,
f->cookie == port->tc.egress_mirror_id) {
return lan966x_mirror_port_del(port, ingress,
f->common.extack);
- } else if (f->cookie == port->tc.goto_id) {
- return lan966x_goto_port_del(port, f->cookie,
- f->common.extack);
} else {
- NL_SET_ERR_MSG_MOD(f->common.extack,
- "Unsupported action");
- return -EOPNOTSUPP;
+ return lan966x_goto_port_del(port, f->cookie, f->common.extack);
}
return 0;
@@ -80,12 +76,6 @@ int lan966x_tc_matchall(struct lan966x_port *port,
struct tc_cls_matchall_offload *f,
bool ingress)
{
- if (!tc_cls_can_offload_and_chain0(port->dev, &f->common)) {
- NL_SET_ERR_MSG_MOD(f->common.extack,
- "Only chain zero is supported");
- return -EOPNOTSUPP;
- }
-
switch (f->command) {
case TC_CLSMATCHALL_REPLACE:
return lan966x_tc_matchall_add(port, f, ingress);
diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_vcap_debugfs.c b/drivers/net/ethernet/microchip/lan966x/lan966x_vcap_debugfs.c
new file mode 100644
index 000000000000..7a0db58f5513
--- /dev/null
+++ b/drivers/net/ethernet/microchip/lan966x/lan966x_vcap_debugfs.c
@@ -0,0 +1,94 @@
+// SPDX-License-Identifier: GPL-2.0+
+
+#include "lan966x_main.h"
+#include "lan966x_vcap_ag_api.h"
+#include "vcap_api.h"
+#include "vcap_api_client.h"
+
+static void lan966x_vcap_port_keys(struct lan966x_port *port,
+ struct vcap_admin *admin,
+ struct vcap_output_print *out)
+{
+ struct lan966x *lan966x = port->lan966x;
+ u32 val;
+
+ out->prf(out->dst, " port[%d] (%s): ", port->chip_port,
+ netdev_name(port->dev));
+
+ val = lan_rd(lan966x, ANA_VCAP_S2_CFG(port->chip_port));
+ out->prf(out->dst, "\n state: ");
+ if (ANA_VCAP_S2_CFG_ENA_GET(val))
+ out->prf(out->dst, "on");
+ else
+ out->prf(out->dst, "off");
+
+ for (int l = 0; l < admin->lookups; ++l) {
+ out->prf(out->dst, "\n Lookup %d: ", l);
+
+ out->prf(out->dst, "\n snap: ");
+ if (ANA_VCAP_S2_CFG_SNAP_DIS_GET(val) & (BIT(0) << l))
+ out->prf(out->dst, "mac_llc");
+ else
+ out->prf(out->dst, "mac_snap");
+
+ out->prf(out->dst, "\n oam: ");
+ if (ANA_VCAP_S2_CFG_OAM_DIS_GET(val) & (BIT(0) << l))
+ out->prf(out->dst, "mac_etype");
+ else
+ out->prf(out->dst, "mac_oam");
+
+ out->prf(out->dst, "\n arp: ");
+ if (ANA_VCAP_S2_CFG_ARP_DIS_GET(val) & (BIT(0) << l))
+ out->prf(out->dst, "mac_etype");
+ else
+ out->prf(out->dst, "mac_arp");
+
+ out->prf(out->dst, "\n ipv4_other: ");
+ if (ANA_VCAP_S2_CFG_IP_OTHER_DIS_GET(val) & (BIT(0) << l))
+ out->prf(out->dst, "mac_etype");
+ else
+ out->prf(out->dst, "ip4_other");
+
+ out->prf(out->dst, "\n ipv4_tcp_udp: ");
+ if (ANA_VCAP_S2_CFG_IP_TCPUDP_DIS_GET(val) & (BIT(0) << l))
+ out->prf(out->dst, "mac_etype");
+ else
+ out->prf(out->dst, "ipv4_tcp_udp");
+
+ out->prf(out->dst, "\n ipv6: ");
+ switch (ANA_VCAP_S2_CFG_IP6_CFG_GET(val) & (0x3 << l)) {
+ case VCAP_IS2_PS_IPV6_TCPUDP_OTHER:
+ out->prf(out->dst, "ipv6_tcp_udp ipv6_tcp_udp");
+ break;
+ case VCAP_IS2_PS_IPV6_STD:
+ out->prf(out->dst, "ipv6_std");
+ break;
+ case VCAP_IS2_PS_IPV6_IP4_TCPUDP_IP4_OTHER:
+ out->prf(out->dst, "ipv4_tcp_udp ipv4_tcp_udp");
+ break;
+ case VCAP_IS2_PS_IPV6_MAC_ETYPE:
+ out->prf(out->dst, "mac_etype");
+ break;
+ }
+ }
+
+ out->prf(out->dst, "\n");
+}
+
+int lan966x_vcap_port_info(struct net_device *dev,
+ struct vcap_admin *admin,
+ struct vcap_output_print *out)
+{
+ struct lan966x_port *port = netdev_priv(dev);
+ struct lan966x *lan966x = port->lan966x;
+ const struct vcap_info *vcap;
+ struct vcap_control *vctrl;
+
+ vctrl = lan966x->vcap_ctrl;
+ vcap = &vctrl->vcaps[admin->vtype];
+
+ out->prf(out->dst, "%s:\n", vcap->name);
+ lan966x_vcap_port_keys(port, admin, out);
+
+ return 0;
+}
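The new file only supplies the .port_info callback; the debugfs plumbing lives in the shared VCAP library. A sketch of how vcap_port_debugfs() is assumed to call back into it, with vcap_output_print pointed at a seq_file (struct and field names here are assumptions based on vcap_api_debugfs.c):

    static int vcap_port_debugfs_show(struct seq_file *m, void *unused)
    {
        struct vcap_port_debugfs_info *info = m->private;
        struct vcap_output_print out = {
            .prf = (void *)seq_printf,   /* out->prf(out->dst, ...) */
            .dst = m,
        };
        struct vcap_admin *admin;

        /* Walk the admin instances and let the driver print its
         * per-port key selection for each VCAP.
         */
        list_for_each_entry(admin, &info->vctrl->list, list) {
            if (admin->vinst)    /* only the first instance per VCAP */
                continue;
            info->vctrl->ops->port_info(info->ndev, admin, &out);
        }
        return 0;
    }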
diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_vcap_impl.c b/drivers/net/ethernet/microchip/lan966x/lan966x_vcap_impl.c
index a54c0426a35f..68f9d69fd37b 100644
--- a/drivers/net/ethernet/microchip/lan966x/lan966x_vcap_impl.c
+++ b/drivers/net/ethernet/microchip/lan966x/lan966x_vcap_impl.c
@@ -4,18 +4,12 @@
#include "lan966x_vcap_ag_api.h"
#include "vcap_api.h"
#include "vcap_api_client.h"
+#include "vcap_api_debugfs.h"
#define STREAMSIZE (64 * 4)
#define LAN966X_IS2_LOOKUPS 2
-enum vcap_is2_port_sel_ipv6 {
- VCAP_IS2_PS_IPV6_TCPUDP_OTHER,
- VCAP_IS2_PS_IPV6_STD,
- VCAP_IS2_PS_IPV6_IP4_TCPUDP_IP4_OTHER,
- VCAP_IS2_PS_IPV6_MAC_ETYPE,
-};
-
static struct lan966x_vcap_inst {
enum vcap_type vtype; /* type of vcap */
int tgt_inst; /* hardware instance number */
@@ -23,6 +17,7 @@ static struct lan966x_vcap_inst {
int first_cid; /* first chain id in this vcap */
int last_cid; /* last chain id in this vcap */
int count; /* number of available addresses */
+ bool ingress; /* is vcap in the ingress path */
} lan966x_vcap_inst_cfg[] = {
{
.vtype = VCAP_TYPE_IS2, /* IS2-0 */
@@ -31,6 +26,7 @@ static struct lan966x_vcap_inst {
.first_cid = LAN966X_VCAP_CID_IS2_L0,
.last_cid = LAN966X_VCAP_CID_IS2_MAX,
.count = 256,
+ .ingress = true,
},
};
@@ -383,27 +379,6 @@ static void lan966x_vcap_move(struct net_device *dev,
lan966x_vcap_wait_update(lan966x, admin->tgt_inst);
}
-static int lan966x_vcap_port_info(struct net_device *dev,
- struct vcap_admin *admin,
- struct vcap_output_print *out)
-{
- return 0;
-}
-
-static int lan966x_vcap_enable(struct net_device *dev,
- struct vcap_admin *admin,
- bool enable)
-{
- struct lan966x_port *port = netdev_priv(dev);
- struct lan966x *lan966x = port->lan966x;
-
- lan_rmw(ANA_VCAP_S2_CFG_ENA_SET(enable),
- ANA_VCAP_S2_CFG_ENA,
- lan966x, ANA_VCAP_S2_CFG(port->chip_port));
-
- return 0;
-}
-
static struct vcap_operations lan966x_vcap_ops = {
.validate_keyset = lan966x_vcap_validate_keyset,
.add_default_fields = lan966x_vcap_add_default_fields,
@@ -414,7 +389,6 @@ static struct vcap_operations lan966x_vcap_ops = {
.update = lan966x_vcap_update,
.move = lan966x_vcap_move,
.port_info = lan966x_vcap_port_info,
- .enable = lan966x_vcap_enable,
};
static void lan966x_vcap_admin_free(struct vcap_admin *admin)
@@ -446,6 +420,7 @@ lan966x_vcap_admin_alloc(struct lan966x *lan966x, struct vcap_control *ctrl,
admin->vtype = cfg->vtype;
admin->vinst = 0;
+ admin->ingress = cfg->ingress;
admin->w32be = true;
admin->tgt_inst = cfg->tgt_inst;
@@ -498,6 +473,7 @@ int lan966x_vcap_init(struct lan966x *lan966x)
struct lan966x_vcap_inst *cfg;
struct vcap_control *ctrl;
struct vcap_admin *admin;
+ struct dentry *dir;
ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL);
if (!ctrl)
@@ -521,6 +497,18 @@ int lan966x_vcap_init(struct lan966x *lan966x)
list_add_tail(&admin->list, &ctrl->list);
}
+ dir = vcap_debugfs(lan966x->dev, lan966x->debugfs_root, ctrl);
+ for (int p = 0; p < lan966x->num_phys_ports; ++p) {
+ if (lan966x->ports[p]) {
+ vcap_port_debugfs(lan966x->dev, dir, ctrl,
+ lan966x->ports[p]->dev);
+
+ lan_rmw(ANA_VCAP_S2_CFG_ENA_SET(true),
+ ANA_VCAP_S2_CFG_ENA, lan966x,
+ ANA_VCAP_S2_CFG(lan966x->ports[p]->chip_port));
+ }
+ }
+
lan966x->vcap_ctrl = ctrl;
return 0;
diff --git a/drivers/net/ethernet/microchip/sparx5/Makefile b/drivers/net/ethernet/microchip/sparx5/Makefile
index d0ed7090aa54..1cb1cc3f1a85 100644
--- a/drivers/net/ethernet/microchip/sparx5/Makefile
+++ b/drivers/net/ethernet/microchip/sparx5/Makefile
@@ -9,7 +9,8 @@ sparx5-switch-y := sparx5_main.o sparx5_packet.o \
sparx5_netdev.o sparx5_phylink.o sparx5_port.o sparx5_mactable.o sparx5_vlan.o \
sparx5_switchdev.o sparx5_calendar.o sparx5_ethtool.o sparx5_fdma.o \
sparx5_ptp.o sparx5_pgid.o sparx5_tc.o sparx5_qos.o \
- sparx5_vcap_impl.o sparx5_vcap_ag_api.o sparx5_tc_flower.o sparx5_tc_matchall.o
+ sparx5_vcap_impl.o sparx5_vcap_ag_api.o sparx5_tc_flower.o \
+ sparx5_tc_matchall.o sparx5_pool.o sparx5_sdlb.o sparx5_police.o sparx5_psfp.o
sparx5-switch-$(CONFIG_SPARX5_DCB) += sparx5_dcb.o
sparx5-switch-$(CONFIG_DEBUG_FS) += sparx5_vcap_debugfs.o
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_dcb.c b/drivers/net/ethernet/microchip/sparx5/sparx5_dcb.c
index 74abb946b2a3..871a3e62f852 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_dcb.c
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_dcb.c
@@ -133,12 +133,17 @@ static bool sparx5_dcb_apptrust_contains(int portno, u8 selector)
static int sparx5_dcb_app_update(struct net_device *dev)
{
+ struct dcb_ieee_app_prio_map dscp_rewr_map = {0};
+ struct dcb_rewr_prio_pcp_map pcp_rewr_map = {0};
struct sparx5_port *port = netdev_priv(dev);
struct sparx5_port_qos_dscp_map *dscp_map;
struct sparx5_port_qos_pcp_map *pcp_map;
struct sparx5_port_qos qos = {0};
struct dcb_app app_itr = {0};
int portno = port->portno;
+ bool dscp_rewr = false;
+ bool pcp_rewr = false;
+ u16 dscp;
int i;
dscp_map = &qos.dscp.map;
@@ -163,31 +168,72 @@ static int sparx5_dcb_app_update(struct net_device *dev)
pcp_map->map[i] = dcb_getapp(dev, &app_itr);
}
+ /* Get PCP rewrite mapping */
+ dcb_getrewr_prio_pcp_mask_map(dev, &pcp_rewr_map);
+ for (i = 0; i < ARRAY_SIZE(pcp_rewr_map.map); i++) {
+ if (!pcp_rewr_map.map[i])
+ continue;
+ pcp_rewr = true;
+ qos.pcp_rewr.map.map[i] = fls(pcp_rewr_map.map[i]) - 1;
+ }
+
+ /* Get DSCP rewrite mapping */
+ dcb_getrewr_prio_dscp_mask_map(dev, &dscp_rewr_map);
+ for (i = 0; i < ARRAY_SIZE(dscp_rewr_map.map); i++) {
+ if (!dscp_rewr_map.map[i])
+ continue;
+
+ /* The rewrite table of the switch has 32 entries: one for each
+ * priority for each DP level. Currently, the rewrite map does
+ * not indicate DP level, so we map classified QoS class to
+ * classified DSCP, for each classified DP level. Rewrite of
+ * DSCP is only enabled if we have active mappings.
+ */
+ dscp_rewr = true;
+ dscp = fls64(dscp_rewr_map.map[i]) - 1;
+ qos.dscp_rewr.map.map[i] = dscp; /* DP 0 */
+ qos.dscp_rewr.map.map[i + 8] = dscp; /* DP 1 */
+ qos.dscp_rewr.map.map[i + 16] = dscp; /* DP 2 */
+ qos.dscp_rewr.map.map[i + 24] = dscp; /* DP 3 */
+ }
+
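Since each rewrite map entry is a bitmask of DSCP values, fls64() - 1 selects the numerically highest DSCP configured for a given priority. For example:

    u64 mask = BIT_ULL(12) | BIT_ULL(40); /* priority maps to DSCP 12 and 40 */
    u16 dscp = fls64(mask) - 1;           /* dscp == 40: the highest one wins */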
/* Enable use of PCP for queue classification? */
if (sparx5_dcb_apptrust_contains(portno, DCB_APP_SEL_PCP)) {
qos.pcp.qos_enable = true;
qos.pcp.dp_enable = qos.pcp.qos_enable;
+ /* Enable rewrite of PCP and DEI if PCP is trusted *and* rewrite
+ * table is not empty.
+ */
+ if (pcp_rewr)
+ qos.pcp_rewr.enable = true;
}
/* Enable use of DSCP for queue classification? */
if (sparx5_dcb_apptrust_contains(portno, IEEE_8021QAZ_APP_SEL_DSCP)) {
qos.dscp.qos_enable = true;
qos.dscp.dp_enable = qos.dscp.qos_enable;
+ /* Do not enable rewrite if no mappings are active, as
+ * classified DSCP will then be zero for all classified
+ * QoS class and DP combinations.
+ */
+ if (dscp_rewr)
+ qos.dscp_rewr.enable = true;
}
return sparx5_port_qos_set(port, &qos);
}
-/* Set or delete dscp app entry.
+/* Set or delete DSCP app entry.
*
- * Dscp mapping is global for all ports, so set and delete app entries are
+ * DSCP mapping is global for all ports, so set and delete app entries are
* replicated for each port.
*/
-static int sparx5_dcb_ieee_dscp_setdel_app(struct net_device *dev,
- struct dcb_app *app, bool del)
+static int sparx5_dcb_ieee_dscp_setdel(struct net_device *dev,
+ struct dcb_app *app,
+ int (*setdel)(struct net_device *,
+ struct dcb_app *))
{
struct sparx5_port *port = netdev_priv(dev);
- struct dcb_app apps[SPX5_PORTS];
struct sparx5_port *port_itr;
int err, i;
@@ -195,11 +241,7 @@ static int sparx5_dcb_ieee_dscp_setdel_app(struct net_device *dev,
port_itr = port->sparx5->ports[i];
if (!port_itr)
continue;
- memcpy(&apps[i], app, sizeof(struct dcb_app));
- if (del)
- err = dcb_ieee_delapp(port_itr->ndev, &apps[i]);
- else
- err = dcb_ieee_setapp(port_itr->ndev, &apps[i]);
+ err = setdel(port_itr->ndev, app);
if (err)
return err;
}
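The refactor works because all four DCB helpers that get passed in share one prototype, so the bool flag and the on-stack apps[] copy are no longer needed. Roughly, from include/net/dcbnl.h (dcb_setrewr/dcb_delrewr being new in this series):

    int dcb_ieee_setapp(struct net_device *dev, struct dcb_app *app);
    int dcb_ieee_delapp(struct net_device *dev, struct dcb_app *app);
    int dcb_setrewr(struct net_device *dev, struct dcb_app *rewr);
    int dcb_delrewr(struct net_device *dev, struct dcb_app *rewr);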
@@ -226,7 +268,7 @@ static int sparx5_dcb_ieee_setapp(struct net_device *dev, struct dcb_app *app)
}
if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP)
- err = sparx5_dcb_ieee_dscp_setdel_app(dev, app, false);
+ err = sparx5_dcb_ieee_dscp_setdel(dev, app, dcb_ieee_setapp);
else
err = dcb_ieee_setapp(dev, app);
@@ -244,7 +286,7 @@ static int sparx5_dcb_ieee_delapp(struct net_device *dev, struct dcb_app *app)
int err;
if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP)
- err = sparx5_dcb_ieee_dscp_setdel_app(dev, app, true);
+ err = sparx5_dcb_ieee_dscp_setdel(dev, app, dcb_ieee_delapp);
else
err = dcb_ieee_delapp(dev, app);
@@ -283,11 +325,60 @@ static int sparx5_dcb_getapptrust(struct net_device *dev, u8 *selectors,
return 0;
}
+static int sparx5_dcb_delrewr(struct net_device *dev, struct dcb_app *app)
+{
+ int err;
+
+ if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP)
+ err = sparx5_dcb_ieee_dscp_setdel(dev, app, dcb_delrewr);
+ else
+ err = dcb_delrewr(dev, app);
+
+ if (err < 0)
+ return err;
+
+ return sparx5_dcb_app_update(dev);
+}
+
+static int sparx5_dcb_setrewr(struct net_device *dev, struct dcb_app *app)
+{
+ struct dcb_app app_itr;
+ int err = 0;
+ u16 proto;
+
+ err = sparx5_dcb_app_validate(dev, app);
+ if (err)
+ goto out;
+
+ /* Delete current mapping, if it exists. */
+ proto = dcb_getrewr(dev, app);
+ if (proto) {
+ app_itr = *app;
+ app_itr.protocol = proto;
+ sparx5_dcb_delrewr(dev, &app_itr);
+ }
+
+ if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP)
+ err = sparx5_dcb_ieee_dscp_setdel(dev, app, dcb_setrewr);
+ else
+ err = dcb_setrewr(dev, app);
+
+ if (err)
+ goto out;
+
+ sparx5_dcb_app_update(dev);
+
+out:
+ return err;
+}
+
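sparx5_dcb_setrewr() implements replace semantics: the rewrite table is keyed on {selector, priority}, so any existing protocol mapping is looked up and deleted before the new one is inserted. A small usage sketch of the lookup helper, assuming the semantics introduced with the rewrite table:

    static u16 example_current_rewrite(struct net_device *dev)
    {
        struct dcb_app query = {
            .selector = IEEE_8021QAZ_APP_SEL_DSCP,
            .priority = 3,
        };

        /* Returns the mapped protocol (a DSCP value here), or 0 when
         * no rewrite entry exists for this {selector, priority}.
         */
        return dcb_getrewr(dev, &query);
    }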
const struct dcbnl_rtnl_ops sparx5_dcbnl_ops = {
.ieee_setapp = sparx5_dcb_ieee_setapp,
.ieee_delapp = sparx5_dcb_ieee_delapp,
.dcbnl_setapptrust = sparx5_dcb_setapptrust,
.dcbnl_getapptrust = sparx5_dcb_getapptrust,
+ .dcbnl_setrewr = sparx5_dcb_setrewr,
+ .dcbnl_delrewr = sparx5_dcb_delrewr,
};
int sparx5_dcb_init(struct sparx5 *sparx5)
@@ -304,6 +395,12 @@ int sparx5_dcb_init(struct sparx5 *sparx5)
sparx5_port_apptrust[port->portno] =
&sparx5_dcb_apptrust_policies
[SPARX5_DCB_APPTRUST_DSCP_PCP];
+
+ /* Enable DSCP classification based on classified QoS class and
+ * DP, for all DSCP values, for all ports.
+ */
+ sparx5_port_qos_dscp_rewr_mode_set(port,
+ SPARX5_PORT_REW_DSCP_ALL);
}
return 0;
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_main.c b/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
index 3c5d4fe99373..42b77ba9b572 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
@@ -198,12 +198,15 @@ static const struct sparx5_main_io_resource sparx5_main_iomap[] = {
{ TARGET_QSYS, 0x110a0000, 2 }, /* 0x6110a0000 */
{ TARGET_QFWD, 0x110b0000, 2 }, /* 0x6110b0000 */
{ TARGET_XQS, 0x110c0000, 2 }, /* 0x6110c0000 */
+ { TARGET_VCAP_ES2, 0x110d0000, 2 }, /* 0x6110d0000 */
+ { TARGET_VCAP_ES0, 0x110e0000, 2 }, /* 0x6110e0000 */
{ TARGET_CLKGEN, 0x11100000, 2 }, /* 0x611100000 */
{ TARGET_ANA_AC_POL, 0x11200000, 2 }, /* 0x611200000 */
{ TARGET_QRES, 0x11280000, 2 }, /* 0x611280000 */
{ TARGET_EACL, 0x112c0000, 2 }, /* 0x6112c0000 */
{ TARGET_ANA_CL, 0x11400000, 2 }, /* 0x611400000 */
{ TARGET_ANA_L3, 0x11480000, 2 }, /* 0x611480000 */
+ { TARGET_ANA_AC_SDLB, 0x11500000, 2 }, /* 0x611500000 */
{ TARGET_HSCH, 0x11580000, 2 }, /* 0x611580000 */
{ TARGET_REW, 0x11600000, 2 }, /* 0x611600000 */
{ TARGET_ANA_L2, 0x11800000, 2 }, /* 0x611800000 */
@@ -500,8 +503,8 @@ static int sparx5_init_coreclock(struct sparx5 *sparx5)
clk_period = sparx5_clk_period(freq);
- spx5_rmw(HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS_SET(clk_period / 100),
- HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS,
+ spx5_rmw(HSCH_SYS_CLK_PER_100PS_SET(clk_period / 100),
+ HSCH_SYS_CLK_PER_100PS,
sparx5,
HSCH_SYS_CLK_PER);
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_main.h b/drivers/net/ethernet/microchip/sparx5/sparx5_main.h
index 4a574cdcb584..72e7928912eb 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_main.h
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_main.h
@@ -396,6 +396,7 @@ int sparx5_ptp_txtstamp_request(struct sparx5_port *port,
void sparx5_ptp_txtstamp_release(struct sparx5_port *port,
struct sk_buff *skb);
irqreturn_t sparx5_ptp_irq_handler(int irq, void *args);
+int sparx5_ptp_gettime64(struct ptp_clock_info *ptp, struct timespec64 *ts);
/* sparx5_vcap_impl.c */
int sparx5_vcap_init(struct sparx5 *sparx5);
@@ -413,6 +414,129 @@ int sparx5_pgid_alloc_glag(struct sparx5 *spx5, u16 *idx);
int sparx5_pgid_alloc_mcast(struct sparx5 *spx5, u16 *idx);
int sparx5_pgid_free(struct sparx5 *spx5, u16 idx);
+/* sparx5_pool.c */
+struct sparx5_pool_entry {
+ u16 ref_cnt;
+ u32 idx; /* tc index */
+};
+
+u32 sparx5_pool_idx_to_id(u32 idx);
+int sparx5_pool_put(struct sparx5_pool_entry *pool, int size, u32 id);
+int sparx5_pool_get(struct sparx5_pool_entry *pool, int size, u32 *id);
+int sparx5_pool_get_with_idx(struct sparx5_pool_entry *pool, int size, u32 idx,
+ u32 *id);
+
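The pool declared above is a simple ref-counted allocator that lets several tc entities share one hardware resource: get() hands out a free entry, get_with_idx() takes a reference on the entry already bound to a given tc index, and put() drops a reference. A usage sketch under those assumed semantics:

    /* Sketch: allocate a service resource, program it, release it.
     * Assumes sparx5_pool_get() returns the entry index (>= 0) or a
     * negative errno such as -ENOSPC when the pool is exhausted.
     */
    static int example_pool_cycle(struct sparx5_pool_entry *pool, int size)
    {
        u32 id;
        int ret;

        ret = sparx5_pool_get(pool, size, &id);
        if (ret < 0)
            return ret;

        /* ... program the hardware object identified by "id" ... */

        return sparx5_pool_put(pool, size, id);
    }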
+/* sparx5_sdlb.c */
+#define SPX5_SDLB_PUP_TOKEN_DISABLE 0x1FFF
+#define SPX5_SDLB_PUP_TOKEN_MAX (SPX5_SDLB_PUP_TOKEN_DISABLE - 1)
+#define SPX5_SDLB_GROUP_RATE_MAX 25000000000ULL
+#define SPX5_SDLB_2CYCLES_TYPE2_THRES_OFFSET 13
+#define SPX5_SDLB_CNT 4096
+#define SPX5_SDLB_GROUP_CNT 10
+#define SPX5_CLK_PER_100PS_DEFAULT 16
+
+struct sparx5_sdlb_group {
+ u64 max_rate;
+ u32 min_burst;
+ u32 frame_size;
+ u32 pup_interval;
+ u32 nsets;
+};
+
+extern struct sparx5_sdlb_group sdlb_groups[SPX5_SDLB_GROUP_CNT];
+int sparx5_sdlb_pup_token_get(struct sparx5 *sparx5, u32 pup_interval,
+ u64 rate);
+
+int sparx5_sdlb_clk_hz_get(struct sparx5 *sparx5);
+int sparx5_sdlb_group_get_by_rate(struct sparx5 *sparx5, u32 rate, u32 burst);
+int sparx5_sdlb_group_get_by_index(struct sparx5 *sparx5, u32 idx, u32 *group);
+
+int sparx5_sdlb_group_add(struct sparx5 *sparx5, u32 group, u32 idx);
+int sparx5_sdlb_group_del(struct sparx5 *sparx5, u32 group, u32 idx);
+
+void sparx5_sdlb_group_init(struct sparx5 *sparx5, u64 max_rate, u32 min_burst,
+ u32 frame_size, u32 idx);
+
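Service dual leaky buckets (SDLBs) are organized into groups by rate; each group is topped up every pup_interval clock cycles, and a policer's rate is encoded as the number of byte tokens added per update. A back-of-envelope sketch of that relation (the driver's exact rounding in sparx5_sdlb_pup_token_get() may differ):

    /* tokens per update ~= rate[bps] * interval[s] / 8 */
    static u64 sdlb_tokens_per_pup(u64 rate_bps, u32 pup_interval,
                                   u64 clk_hz)
    {
        u64 interval_ns = div64_u64((u64)pup_interval * NSEC_PER_SEC,
                                    clk_hz);

        return div64_u64(rate_bps * interval_ns, (u64)NSEC_PER_SEC * 8);
    }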
+/* sparx5_police.c */
+enum {
+ /* More policer types will be added later */
+ SPX5_POL_SERVICE
+};
+
+struct sparx5_policer {
+ u32 type;
+ u32 idx;
+ u64 rate;
+ u32 burst;
+ u32 group;
+ u8 event_mask;
+};
+
+int sparx5_policer_conf_set(struct sparx5 *sparx5, struct sparx5_policer *pol);
+
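A sketch of how a tc police action is assumed to map onto the service policer type declared above; the field names on the tc side come from include/net/flow_offload.h (FLOW_ACTION_POLICE), and the kbps conversion is an assumption about the hardware's rate unit:

    static void example_police_from_act(const struct flow_action_entry *act,
                                        u32 pol_idx,
                                        struct sparx5_policer *pol)
    {
        pol->type = SPX5_POL_SERVICE;
        pol->idx = pol_idx;
        /* tc reports bytes/s; convert to kbit/s for the hardware. */
        pol->rate = div_u64(act->police.rate_bytes_ps, 1000) * 8;
        pol->burst = act->police.burst;
    }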
+/* sparx5_psfp.c */
+#define SPX5_PSFP_GCE_CNT 4
+#define SPX5_PSFP_SG_CNT 1024
+#define SPX5_PSFP_SG_MIN_CYCLE_TIME_NS (1 * NSEC_PER_USEC)
+#define SPX5_PSFP_SG_MAX_CYCLE_TIME_NS ((1 * NSEC_PER_SEC) - 1)
+#define SPX5_PSFP_SG_MAX_IPV (SPX5_PRIOS - 1)
+#define SPX5_PSFP_SG_OPEN (SPX5_PSFP_SG_CNT - 1)
+#define SPX5_PSFP_SG_CYCLE_TIME_DEFAULT 1000000
+#define SPX5_PSFP_SF_MAX_SDU 16383
+
+struct sparx5_psfp_fm {
+ struct sparx5_policer pol;
+};
+
+struct sparx5_psfp_gce {
+ bool gate_state; /* StreamGateState */
+ u32 interval; /* TimeInterval */
+ u32 ipv; /* InternalPriorityValue */
+ u32 maxoctets; /* IntervalOctetMax */
+};
+
+struct sparx5_psfp_sg {
+ bool gate_state; /* PSFPAdminGateStates */
+ bool gate_enabled; /* PSFPGateEnabled */
+ u32 ipv; /* PSFPAdminIPV */
+ struct timespec64 basetime; /* PSFPAdminBaseTime */
+ u32 cycletime; /* PSFPAdminCycleTime */
+ u32 cycletimeext; /* PSFPAdminCycleTimeExtension */
+ u32 num_entries; /* PSFPAdminControlListLength */
+ struct sparx5_psfp_gce gce[SPX5_PSFP_GCE_CNT];
+};
+
+struct sparx5_psfp_sf {
+ bool sblock_osize_ena;
+ bool sblock_osize;
+ u32 max_sdu;
+ u32 sgid; /* Gate id */
+ u32 fmid; /* Flow meter id */
+};
+
+int sparx5_psfp_fm_add(struct sparx5 *sparx5, u32 uidx,
+ struct sparx5_psfp_fm *fm, u32 *id);
+int sparx5_psfp_fm_del(struct sparx5 *sparx5, u32 id);
+
+int sparx5_psfp_sg_add(struct sparx5 *sparx5, u32 uidx,
+ struct sparx5_psfp_sg *sg, u32 *id);
+int sparx5_psfp_sg_del(struct sparx5 *sparx5, u32 id);
+
+int sparx5_psfp_sf_add(struct sparx5 *sparx5, const struct sparx5_psfp_sf *sf,
+ u32 *id);
+int sparx5_psfp_sf_del(struct sparx5 *sparx5, u32 id);
+
+u32 sparx5_psfp_isdx_get_sf(struct sparx5 *sparx5, u32 isdx);
+u32 sparx5_psfp_isdx_get_fm(struct sparx5 *sparx5, u32 isdx);
+u32 sparx5_psfp_sf_get_sg(struct sparx5 *sparx5, u32 sfid);
+void sparx5_isdx_conf_set(struct sparx5 *sparx5, u32 isdx, u32 sfid, u32 fmid);
+
+void sparx5_psfp_init(struct sparx5 *sparx5);
+
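These structures mirror the 802.1Qci managed objects (the per-field comments give the standard names) and are filled from a tc gate action. A sketch of that translation, assuming the FLOW_ACTION_GATE fields in include/net/flow_offload.h:

    static void example_sg_from_gate(const struct flow_action_entry *act,
                                     struct sparx5_psfp_sg *sg)
    {
        u32 i;

        sg->gate_enabled = true;
        sg->gate_state = true;                  /* start open */
        sg->ipv = act->gate.prio < 0 ? 0 : act->gate.prio; /* -1 = wildcard */
        sg->basetime = ktime_to_timespec64(act->gate.basetime);
        sg->cycletime = act->gate.cycletime;
        sg->cycletimeext = act->gate.cycletimeext;
        sg->num_entries = min_t(u32, act->gate.num_entries,
                                SPX5_PSFP_GCE_CNT);

        for (i = 0; i < sg->num_entries; i++) {
            sg->gce[i].gate_state = !!act->gate.entries[i].gate_state;
            sg->gce[i].interval = act->gate.entries[i].interval;
            sg->gce[i].ipv = act->gate.entries[i].ipv;
            sg->gce[i].maxoctets = act->gate.entries[i].maxoctets;
        }
    }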
+/* sparx5_qos.c */
+void sparx5_new_base_time(struct sparx5 *sparx5, const u32 cycle_time,
+ const ktime_t org_base_time, ktime_t *new_base_time);
+
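sparx5_new_base_time() exists because a PSFP/TAS base time that lies in the past must be pushed forward a whole number of cycles before being handed to hardware. The invariant it is expected to maintain:

    /* If org_base_time is already in the past relative to PHC time:
     *
     *   n   = ceil((now - org_base_time) / cycle_time)
     *   new = org_base_time + n * cycle_time
     *
     * so the gate control list still starts on a cycle boundary.
     */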
/* Clock period in picoseconds */
static inline u32 sparx5_clk_period(enum sparx5_core_clockfreq cclock)
{
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_main_regs.h b/drivers/net/ethernet/microchip/sparx5/sparx5_main_regs.h
index 6c93dd6b01b0..bd03a0a3c1da 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_main_regs.h
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_main_regs.h
@@ -4,8 +4,8 @@
* Copyright (c) 2021 Microchip Technology Inc.
*/
-/* This file is autogenerated by cml-utils 2022-09-28 11:17:02 +0200.
- * Commit ID: 385c8a11d71a9f6a60368d3a3cb648fa257b479a
+/* This file is autogenerated by cml-utils 2023-02-10 11:18:53 +0100.
+ * Commit ID: c30fb4bf0281cd4a7133bdab6682f9e43c872ada
*/
#ifndef _SPARX5_MAIN_REGS_H_
@@ -19,6 +19,7 @@ enum sparx5_target {
TARGET_ANA_AC = 1,
TARGET_ANA_ACL = 2,
TARGET_ANA_AC_POL = 4,
+ TARGET_ANA_AC_SDLB = 5,
TARGET_ANA_CL = 6,
TARGET_ANA_L2 = 7,
TARGET_ANA_L3 = 8,
@@ -46,6 +47,8 @@ enum sparx5_target {
TARGET_QS = 177,
TARGET_QSYS = 178,
TARGET_REW = 179,
+ TARGET_VCAP_ES0 = 323,
+ TARGET_VCAP_ES2 = 324,
TARGET_VCAP_SUPER = 326,
TARGET_VOP = 327,
TARGET_XQS = 331,
@@ -55,7 +58,8 @@ enum sparx5_target {
#define __REG(...) __VA_ARGS__
/* ANA_AC:RAM_CTRL:RAM_INIT */
-#define ANA_AC_RAM_INIT __REG(TARGET_ANA_AC, 0, 1, 839108, 0, 1, 4, 0, 0, 1, 4)
+#define ANA_AC_RAM_INIT __REG(TARGET_ANA_AC,\
+ 0, 1, 839108, 0, 1, 4, 0, 0, 1, 4)
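The reflowed register defines below are purely cosmetic (the autogenerated file now wraps the argument list), but the argument order is worth decoding once; under the assumption that __REG() feeds spx5_addr() in sparx5_main.h, the ANA_AC_RAM_INIT example above reads:

    /* __REG(TARGET_ANA_AC,  id                  - register target
     *       0, 1,           tinst, tcnt         - target replication
     *       839108,         gbase               - group byte offset
     *       0, 1, 4,        ginst, gcnt, gwidth - group replication
     *       0,              raddr               - offset within group
     *       0, 1, 4)        rinst, rcnt, rwidth - register replication
     */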
#define ANA_AC_RAM_INIT_RAM_INIT BIT(1)
#define ANA_AC_RAM_INIT_RAM_INIT_SET(x)\
@@ -70,7 +74,8 @@ enum sparx5_target {
FIELD_GET(ANA_AC_RAM_INIT_RAM_CFG_HOOK, x)
/* ANA_AC:PS_COMMON:OWN_UPSID */
-#define ANA_AC_OWN_UPSID(r) __REG(TARGET_ANA_AC, 0, 1, 894472, 0, 1, 352, 52, r, 3, 4)
+#define ANA_AC_OWN_UPSID(r) __REG(TARGET_ANA_AC,\
+ 0, 1, 894472, 0, 1, 352, 52, r, 3, 4)
#define ANA_AC_OWN_UPSID_OWN_UPSID GENMASK(4, 0)
#define ANA_AC_OWN_UPSID_OWN_UPSID_SET(x)\
@@ -79,13 +84,16 @@ enum sparx5_target {
FIELD_GET(ANA_AC_OWN_UPSID_OWN_UPSID, x)
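Every field gets the same _SET/_GET pair built on FIELD_PREP/FIELD_GET, which composes with the driver's read-modify-write helper exactly as in the HSCH_SYS_CLK_PER update earlier in this diff:

    static void example_set_own_upsid(struct sparx5 *sparx5, u32 upsid)
    {
        /* Update only the OWN_UPSID field; the mask argument keeps
         * the rest of the register intact.
         */
        spx5_rmw(ANA_AC_OWN_UPSID_OWN_UPSID_SET(upsid),
                 ANA_AC_OWN_UPSID_OWN_UPSID,
                 sparx5, ANA_AC_OWN_UPSID(0));
    }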
/* ANA_AC:SRC:SRC_CFG */
-#define ANA_AC_SRC_CFG(g) __REG(TARGET_ANA_AC, 0, 1, 849920, g, 102, 16, 0, 0, 1, 4)
+#define ANA_AC_SRC_CFG(g) __REG(TARGET_ANA_AC,\
+ 0, 1, 849920, g, 102, 16, 0, 0, 1, 4)
/* ANA_AC:SRC:SRC_CFG1 */
-#define ANA_AC_SRC_CFG1(g) __REG(TARGET_ANA_AC, 0, 1, 849920, g, 102, 16, 4, 0, 1, 4)
+#define ANA_AC_SRC_CFG1(g) __REG(TARGET_ANA_AC,\
+ 0, 1, 849920, g, 102, 16, 4, 0, 1, 4)
/* ANA_AC:SRC:SRC_CFG2 */
-#define ANA_AC_SRC_CFG2(g) __REG(TARGET_ANA_AC, 0, 1, 849920, g, 102, 16, 8, 0, 1, 4)
+#define ANA_AC_SRC_CFG2(g) __REG(TARGET_ANA_AC,\
+ 0, 1, 849920, g, 102, 16, 8, 0, 1, 4)
#define ANA_AC_SRC_CFG2_PORT_MASK2 BIT(0)
#define ANA_AC_SRC_CFG2_PORT_MASK2_SET(x)\
@@ -94,13 +102,16 @@ enum sparx5_target {
FIELD_GET(ANA_AC_SRC_CFG2_PORT_MASK2, x)
/* ANA_AC:PGID:PGID_CFG */
-#define ANA_AC_PGID_CFG(g) __REG(TARGET_ANA_AC, 0, 1, 786432, g, 3290, 16, 0, 0, 1, 4)
+#define ANA_AC_PGID_CFG(g) __REG(TARGET_ANA_AC,\
+ 0, 1, 786432, g, 3290, 16, 0, 0, 1, 4)
/* ANA_AC:PGID:PGID_CFG1 */
-#define ANA_AC_PGID_CFG1(g) __REG(TARGET_ANA_AC, 0, 1, 786432, g, 3290, 16, 4, 0, 1, 4)
+#define ANA_AC_PGID_CFG1(g) __REG(TARGET_ANA_AC,\
+ 0, 1, 786432, g, 3290, 16, 4, 0, 1, 4)
/* ANA_AC:PGID:PGID_CFG2 */
-#define ANA_AC_PGID_CFG2(g) __REG(TARGET_ANA_AC, 0, 1, 786432, g, 3290, 16, 8, 0, 1, 4)
+#define ANA_AC_PGID_CFG2(g) __REG(TARGET_ANA_AC,\
+ 0, 1, 786432, g, 3290, 16, 8, 0, 1, 4)
#define ANA_AC_PGID_CFG2_PORT_MASK2 BIT(0)
#define ANA_AC_PGID_CFG2_PORT_MASK2_SET(x)\
@@ -109,7 +120,8 @@ enum sparx5_target {
FIELD_GET(ANA_AC_PGID_CFG2_PORT_MASK2, x)
/* ANA_AC:PGID:PGID_MISC_CFG */
-#define ANA_AC_PGID_MISC_CFG(g) __REG(TARGET_ANA_AC, 0, 1, 786432, g, 3290, 16, 12, 0, 1, 4)
+#define ANA_AC_PGID_MISC_CFG(g) __REG(TARGET_ANA_AC,\
+ 0, 1, 786432, g, 3290, 16, 12, 0, 1, 4)
#define ANA_AC_PGID_MISC_CFG_PGID_CPU_QU GENMASK(6, 4)
#define ANA_AC_PGID_MISC_CFG_PGID_CPU_QU_SET(x)\
@@ -129,8 +141,257 @@ enum sparx5_target {
#define ANA_AC_PGID_MISC_CFG_PGID_CPU_COPY_ENA_GET(x)\
FIELD_GET(ANA_AC_PGID_MISC_CFG_PGID_CPU_COPY_ENA, x)
+/* ANA_AC:TSN_SF:TSN_SF */
+#define ANA_AC_TSN_SF __REG(TARGET_ANA_AC,\
+ 0, 1, 839136, 0, 1, 4, 0, 0, 1, 4)
+
+#define ANA_AC_TSN_SF_TSN_STREAM_BLOCK_OVERSIZE_STICKY BIT(9)
+#define ANA_AC_TSN_SF_TSN_STREAM_BLOCK_OVERSIZE_STICKY_SET(x)\
+ FIELD_PREP(ANA_AC_TSN_SF_TSN_STREAM_BLOCK_OVERSIZE_STICKY, x)
+#define ANA_AC_TSN_SF_TSN_STREAM_BLOCK_OVERSIZE_STICKY_GET(x)\
+ FIELD_GET(ANA_AC_TSN_SF_TSN_STREAM_BLOCK_OVERSIZE_STICKY, x)
+
+#define ANA_AC_TSN_SF_PORT_NUM GENMASK(8, 0)
+#define ANA_AC_TSN_SF_PORT_NUM_SET(x)\
+ FIELD_PREP(ANA_AC_TSN_SF_PORT_NUM, x)
+#define ANA_AC_TSN_SF_PORT_NUM_GET(x)\
+ FIELD_GET(ANA_AC_TSN_SF_PORT_NUM, x)
+
+/* ANA_AC:TSN_SF_CFG:TSN_SF_CFG */
+#define ANA_AC_TSN_SF_CFG(g) __REG(TARGET_ANA_AC,\
+ 0, 1, 839680, g, 1024, 4, 0, 0, 1, 4)
+
+#define ANA_AC_TSN_SF_CFG_TSN_SGID GENMASK(25, 16)
+#define ANA_AC_TSN_SF_CFG_TSN_SGID_SET(x)\
+ FIELD_PREP(ANA_AC_TSN_SF_CFG_TSN_SGID, x)
+#define ANA_AC_TSN_SF_CFG_TSN_SGID_GET(x)\
+ FIELD_GET(ANA_AC_TSN_SF_CFG_TSN_SGID, x)
+
+#define ANA_AC_TSN_SF_CFG_TSN_MAX_SDU GENMASK(15, 2)
+#define ANA_AC_TSN_SF_CFG_TSN_MAX_SDU_SET(x)\
+ FIELD_PREP(ANA_AC_TSN_SF_CFG_TSN_MAX_SDU, x)
+#define ANA_AC_TSN_SF_CFG_TSN_MAX_SDU_GET(x)\
+ FIELD_GET(ANA_AC_TSN_SF_CFG_TSN_MAX_SDU, x)
+
+#define ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_ENA BIT(1)
+#define ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_ENA_SET(x)\
+ FIELD_PREP(ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_ENA, x)
+#define ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_ENA_GET(x)\
+ FIELD_GET(ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_ENA, x)
+
+#define ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_STATE BIT(0)
+#define ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_STATE_SET(x)\
+ FIELD_PREP(ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_STATE, x)
+#define ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_STATE_GET(x)\
+ FIELD_GET(ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_STATE, x)
+
+/* ANA_AC:TSN_SF_STATUS:TSN_SF_STATUS */
+#define ANA_AC_TSN_SF_STATUS __REG(TARGET_ANA_AC,\
+ 0, 1, 839072, 0, 1, 16, 0, 0, 1, 4)
+
+#define ANA_AC_TSN_SF_STATUS_FRM_LEN GENMASK(25, 12)
+#define ANA_AC_TSN_SF_STATUS_FRM_LEN_SET(x)\
+ FIELD_PREP(ANA_AC_TSN_SF_STATUS_FRM_LEN, x)
+#define ANA_AC_TSN_SF_STATUS_FRM_LEN_GET(x)\
+ FIELD_GET(ANA_AC_TSN_SF_STATUS_FRM_LEN, x)
+
+#define ANA_AC_TSN_SF_STATUS_DLB_DROP BIT(11)
+#define ANA_AC_TSN_SF_STATUS_DLB_DROP_SET(x)\
+ FIELD_PREP(ANA_AC_TSN_SF_STATUS_DLB_DROP, x)
+#define ANA_AC_TSN_SF_STATUS_DLB_DROP_GET(x)\
+ FIELD_GET(ANA_AC_TSN_SF_STATUS_DLB_DROP, x)
+
+#define ANA_AC_TSN_SF_STATUS_TSN_SFID GENMASK(10, 1)
+#define ANA_AC_TSN_SF_STATUS_TSN_SFID_SET(x)\
+ FIELD_PREP(ANA_AC_TSN_SF_STATUS_TSN_SFID, x)
+#define ANA_AC_TSN_SF_STATUS_TSN_SFID_GET(x)\
+ FIELD_GET(ANA_AC_TSN_SF_STATUS_TSN_SFID, x)
+
+#define ANA_AC_TSN_SF_STATUS_TSTAMP_VLD BIT(0)
+#define ANA_AC_TSN_SF_STATUS_TSTAMP_VLD_SET(x)\
+ FIELD_PREP(ANA_AC_TSN_SF_STATUS_TSTAMP_VLD, x)
+#define ANA_AC_TSN_SF_STATUS_TSTAMP_VLD_GET(x)\
+ FIELD_GET(ANA_AC_TSN_SF_STATUS_TSTAMP_VLD, x)
+
+/* ANA_AC:SG_ACCESS:SG_ACCESS_CTRL */
+#define ANA_AC_SG_ACCESS_CTRL __REG(TARGET_ANA_AC,\
+ 0, 1, 839140, 0, 1, 12, 0, 0, 1, 4)
+
+#define ANA_AC_SG_ACCESS_CTRL_SGID GENMASK(9, 0)
+#define ANA_AC_SG_ACCESS_CTRL_SGID_SET(x)\
+ FIELD_PREP(ANA_AC_SG_ACCESS_CTRL_SGID, x)
+#define ANA_AC_SG_ACCESS_CTRL_SGID_GET(x)\
+ FIELD_GET(ANA_AC_SG_ACCESS_CTRL_SGID, x)
+
+#define ANA_AC_SG_ACCESS_CTRL_CONFIG_CHANGE BIT(28)
+#define ANA_AC_SG_ACCESS_CTRL_CONFIG_CHANGE_SET(x)\
+ FIELD_PREP(ANA_AC_SG_ACCESS_CTRL_CONFIG_CHANGE, x)
+#define ANA_AC_SG_ACCESS_CTRL_CONFIG_CHANGE_GET(x)\
+ FIELD_GET(ANA_AC_SG_ACCESS_CTRL_CONFIG_CHANGE, x)
+
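SGID selects which of the 1024 stream gates the SG_CONFIG_REG_* window addresses, and CONFIG_CHANGE is the usual latch bit: program the admin values, then raise it so hardware applies them at the configured base time. A hedged sketch of the sequence (the exact writes live in sparx5_psfp.c):

    static void example_sg_commit(struct sparx5 *sparx5, u32 sgid)
    {
        /* Select the gate the SG_CONFIG window addresses. */
        spx5_wr(ANA_AC_SG_ACCESS_CTRL_SGID_SET(sgid),
                sparx5, ANA_AC_SG_ACCESS_CTRL);

        /* ... program SG_CONFIG_REG_1..5 and the SG_GCL_* entries ... */

        /* Latch: hardware applies the admin values at base time. */
        spx5_wr(ANA_AC_SG_ACCESS_CTRL_SGID_SET(sgid) |
                ANA_AC_SG_ACCESS_CTRL_CONFIG_CHANGE_SET(1),
                sparx5, ANA_AC_SG_ACCESS_CTRL);
    }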
+/* ANA_AC:SG_ACCESS:SG_CYCLETIME_UPDATE_PERIOD */
+#define ANA_AC_SG_CYCLETIME_UPDATE_PERIOD __REG(TARGET_ANA_AC,\
+ 0, 1, 839140, 0, 1, 12, 8, 0, 1, 4)
+
+#define ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_CLKS GENMASK(15, 0)
+#define ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_CLKS_SET(x)\
+ FIELD_PREP(ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_CLKS, x)
+#define ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_CLKS_GET(x)\
+ FIELD_GET(ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_CLKS, x)
+
+#define ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_UPDATE_ENA BIT(31)
+#define ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_UPDATE_ENA_SET(x)\
+ FIELD_PREP(ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_UPDATE_ENA, x)
+#define ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_UPDATE_ENA_GET(x)\
+ FIELD_GET(ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_UPDATE_ENA, x)
+
+/* ANA_AC:SG_CONFIG:SG_CONFIG_REG_1 */
+#define ANA_AC_SG_CONFIG_REG_1 __REG(TARGET_ANA_AC,\
+ 0, 1, 851584, 0, 1, 128, 48, 0, 1, 4)
+
+/* ANA_AC:SG_CONFIG:SG_CONFIG_REG_2 */
+#define ANA_AC_SG_CONFIG_REG_2 __REG(TARGET_ANA_AC,\
+ 0, 1, 851584, 0, 1, 128, 52, 0, 1, 4)
+
+/* ANA_AC:SG_CONFIG:SG_CONFIG_REG_3 */
+#define ANA_AC_SG_CONFIG_REG_3 __REG(TARGET_ANA_AC,\
+ 0, 1, 851584, 0, 1, 128, 56, 0, 1, 4)
+
+#define ANA_AC_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB GENMASK(15, 0)
+#define ANA_AC_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB_SET(x)\
+ FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB, x)
+#define ANA_AC_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB_GET(x)\
+ FIELD_GET(ANA_AC_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB, x)
+
+#define ANA_AC_SG_CONFIG_REG_3_LIST_LENGTH GENMASK(18, 16)
+#define ANA_AC_SG_CONFIG_REG_3_LIST_LENGTH_SET(x)\
+ FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_LIST_LENGTH, x)
+#define ANA_AC_SG_CONFIG_REG_3_LIST_LENGTH_GET(x)\
+ FIELD_GET(ANA_AC_SG_CONFIG_REG_3_LIST_LENGTH, x)
+
+#define ANA_AC_SG_CONFIG_REG_3_GATE_ENABLE BIT(20)
+#define ANA_AC_SG_CONFIG_REG_3_GATE_ENABLE_SET(x)\
+ FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_GATE_ENABLE, x)
+#define ANA_AC_SG_CONFIG_REG_3_GATE_ENABLE_GET(x)\
+ FIELD_GET(ANA_AC_SG_CONFIG_REG_3_GATE_ENABLE, x)
+
+#define ANA_AC_SG_CONFIG_REG_3_INIT_IPS GENMASK(24, 21)
+#define ANA_AC_SG_CONFIG_REG_3_INIT_IPS_SET(x)\
+ FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_INIT_IPS, x)
+#define ANA_AC_SG_CONFIG_REG_3_INIT_IPS_GET(x)\
+ FIELD_GET(ANA_AC_SG_CONFIG_REG_3_INIT_IPS, x)
+
+#define ANA_AC_SG_CONFIG_REG_3_INIT_GATE_STATE BIT(25)
+#define ANA_AC_SG_CONFIG_REG_3_INIT_GATE_STATE_SET(x)\
+ FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_INIT_GATE_STATE, x)
+#define ANA_AC_SG_CONFIG_REG_3_INIT_GATE_STATE_GET(x)\
+ FIELD_GET(ANA_AC_SG_CONFIG_REG_3_INIT_GATE_STATE, x)
+
+#define ANA_AC_SG_CONFIG_REG_3_INVALID_RX_ENA BIT(26)
+#define ANA_AC_SG_CONFIG_REG_3_INVALID_RX_ENA_SET(x)\
+ FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_INVALID_RX_ENA, x)
+#define ANA_AC_SG_CONFIG_REG_3_INVALID_RX_ENA_GET(x)\
+ FIELD_GET(ANA_AC_SG_CONFIG_REG_3_INVALID_RX_ENA, x)
+
+#define ANA_AC_SG_CONFIG_REG_3_INVALID_RX BIT(27)
+#define ANA_AC_SG_CONFIG_REG_3_INVALID_RX_SET(x)\
+ FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_INVALID_RX, x)
+#define ANA_AC_SG_CONFIG_REG_3_INVALID_RX_GET(x)\
+ FIELD_GET(ANA_AC_SG_CONFIG_REG_3_INVALID_RX, x)
+
+#define ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED_ENA BIT(28)
+#define ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED_ENA_SET(x)\
+ FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED_ENA, x)
+#define ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED_ENA_GET(x)\
+ FIELD_GET(ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED_ENA, x)
+
+#define ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED BIT(29)
+#define ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED_SET(x)\
+ FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED, x)
+#define ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED_GET(x)\
+ FIELD_GET(ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED, x)
+
+/* ANA_AC:SG_CONFIG:SG_CONFIG_REG_4 */
+#define ANA_AC_SG_CONFIG_REG_4 __REG(TARGET_ANA_AC,\
+ 0, 1, 851584, 0, 1, 128, 60, 0, 1, 4)
+
+/* ANA_AC:SG_CONFIG:SG_CONFIG_REG_5 */
+#define ANA_AC_SG_CONFIG_REG_5 __REG(TARGET_ANA_AC,\
+ 0, 1, 851584, 0, 1, 128, 64, 0, 1, 4)
+
+/* ANA_AC:SG_CONFIG:SG_GCL_GS_CONFIG */
+#define ANA_AC_SG_GCL_GS_CONFIG(r) __REG(TARGET_ANA_AC,\
+ 0, 1, 851584, 0, 1, 128, 0, r, 4, 4)
+
+#define ANA_AC_SG_GCL_GS_CONFIG_IPS GENMASK(3, 0)
+#define ANA_AC_SG_GCL_GS_CONFIG_IPS_SET(x)\
+ FIELD_PREP(ANA_AC_SG_GCL_GS_CONFIG_IPS, x)
+#define ANA_AC_SG_GCL_GS_CONFIG_IPS_GET(x)\
+ FIELD_GET(ANA_AC_SG_GCL_GS_CONFIG_IPS, x)
+
+#define ANA_AC_SG_GCL_GS_CONFIG_GATE_STATE BIT(4)
+#define ANA_AC_SG_GCL_GS_CONFIG_GATE_STATE_SET(x)\
+ FIELD_PREP(ANA_AC_SG_GCL_GS_CONFIG_GATE_STATE, x)
+#define ANA_AC_SG_GCL_GS_CONFIG_GATE_STATE_GET(x)\
+ FIELD_GET(ANA_AC_SG_GCL_GS_CONFIG_GATE_STATE, x)
+
+/* ANA_AC:SG_CONFIG:SG_GCL_TI_CONFIG */
+#define ANA_AC_SG_GCL_TI_CONFIG(r) __REG(TARGET_ANA_AC,\
+ 0, 1, 851584, 0, 1, 128, 16, r, 4, 4)
+
+/* ANA_AC:SG_CONFIG:SG_GCL_OCT_CONFIG */
+#define ANA_AC_SG_GCL_OCT_CONFIG(r) __REG(TARGET_ANA_AC,\
+ 0, 1, 851584, 0, 1, 128, 32, r, 4, 4)
+
+/* ANA_AC:SG_STATUS:SG_STATUS_REG_1 */
+#define ANA_AC_SG_STATUS_REG_1 __REG(TARGET_ANA_AC,\
+ 0, 1, 839088, 0, 1, 16, 0, 0, 1, 4)
+
+/* ANA_AC:SG_STATUS:SG_STATUS_REG_2 */
+#define ANA_AC_SG_STATUS_REG_2 __REG(TARGET_ANA_AC,\
+ 0, 1, 839088, 0, 1, 16, 4, 0, 1, 4)
+
+/* ANA_AC:SG_STATUS:SG_STATUS_REG_3 */
+#define ANA_AC_SG_STATUS_REG_3 __REG(TARGET_ANA_AC,\
+ 0, 1, 839088, 0, 1, 16, 8, 0, 1, 4)
+
+#define ANA_AC_SG_STATUS_REG_3_CFG_CHG_TIME_SEC_MSB GENMASK(15, 0)
+#define ANA_AC_SG_STATUS_REG_3_CFG_CHG_TIME_SEC_MSB_SET(x)\
+ FIELD_PREP(ANA_AC_SG_STATUS_REG_3_CFG_CHG_TIME_SEC_MSB, x)
+#define ANA_AC_SG_STATUS_REG_3_CFG_CHG_TIME_SEC_MSB_GET(x)\
+ FIELD_GET(ANA_AC_SG_STATUS_REG_3_CFG_CHG_TIME_SEC_MSB, x)
+
+#define ANA_AC_SG_STATUS_REG_3_GATE_STATE BIT(16)
+#define ANA_AC_SG_STATUS_REG_3_GATE_STATE_SET(x)\
+ FIELD_PREP(ANA_AC_SG_STATUS_REG_3_GATE_STATE, x)
+#define ANA_AC_SG_STATUS_REG_3_GATE_STATE_GET(x)\
+ FIELD_GET(ANA_AC_SG_STATUS_REG_3_GATE_STATE, x)
+
+#define ANA_AC_SG_STATUS_REG_3_IPS GENMASK(23, 20)
+#define ANA_AC_SG_STATUS_REG_3_IPS_SET(x)\
+ FIELD_PREP(ANA_AC_SG_STATUS_REG_3_IPS, x)
+#define ANA_AC_SG_STATUS_REG_3_IPS_GET(x)\
+ FIELD_GET(ANA_AC_SG_STATUS_REG_3_IPS, x)
+
+#define ANA_AC_SG_STATUS_REG_3_CONFIG_PENDING BIT(24)
+#define ANA_AC_SG_STATUS_REG_3_CONFIG_PENDING_SET(x)\
+ FIELD_PREP(ANA_AC_SG_STATUS_REG_3_CONFIG_PENDING, x)
+#define ANA_AC_SG_STATUS_REG_3_CONFIG_PENDING_GET(x)\
+ FIELD_GET(ANA_AC_SG_STATUS_REG_3_CONFIG_PENDING, x)
+
+#define ANA_AC_SG_STATUS_REG_3_GCL_OCTET_INDEX GENMASK(27, 25)
+#define ANA_AC_SG_STATUS_REG_3_GCL_OCTET_INDEX_SET(x)\
+ FIELD_PREP(ANA_AC_SG_STATUS_REG_3_GCL_OCTET_INDEX, x)
+#define ANA_AC_SG_STATUS_REG_3_GCL_OCTET_INDEX_GET(x)\
+ FIELD_GET(ANA_AC_SG_STATUS_REG_3_GCL_OCTET_INDEX, x)
+
+/* ANA_AC:SG_STATUS:SG_STATUS_REG_4 */
+#define ANA_AC_SG_STATUS_REG_4 __REG(TARGET_ANA_AC,\
+ 0, 1, 839088, 0, 1, 16, 12, 0, 1, 4)
+
/* ANA_AC:STAT_GLOBAL_CFG_PORT:STAT_GLOBAL_EVENT_MASK */
-#define ANA_AC_PORT_SGE_CFG(r) __REG(TARGET_ANA_AC, 0, 1, 851552, 0, 1, 20, 0, r, 4, 4)
+#define ANA_AC_PORT_SGE_CFG(r) __REG(TARGET_ANA_AC,\
+ 0, 1, 851552, 0, 1, 20, 0, r, 4, 4)
#define ANA_AC_PORT_SGE_CFG_MASK GENMASK(15, 0)
#define ANA_AC_PORT_SGE_CFG_MASK_SET(x)\
@@ -139,7 +400,8 @@ enum sparx5_target {
FIELD_GET(ANA_AC_PORT_SGE_CFG_MASK, x)
/* ANA_AC:STAT_GLOBAL_CFG_PORT:STAT_RESET */
-#define ANA_AC_STAT_RESET __REG(TARGET_ANA_AC, 0, 1, 851552, 0, 1, 20, 16, 0, 1, 4)
+#define ANA_AC_STAT_RESET __REG(TARGET_ANA_AC,\
+ 0, 1, 851552, 0, 1, 20, 16, 0, 1, 4)
#define ANA_AC_STAT_RESET_RESET BIT(0)
#define ANA_AC_STAT_RESET_RESET_SET(x)\
@@ -148,7 +410,8 @@ enum sparx5_target {
FIELD_GET(ANA_AC_STAT_RESET_RESET, x)
/* ANA_AC:STAT_CNT_CFG_PORT:STAT_CFG */
-#define ANA_AC_PORT_STAT_CFG(g, r) __REG(TARGET_ANA_AC, 0, 1, 843776, g, 70, 64, 4, r, 4, 4)
+#define ANA_AC_PORT_STAT_CFG(g, r) __REG(TARGET_ANA_AC,\
+ 0, 1, 843776, g, 70, 64, 4, r, 4, 4)
#define ANA_AC_PORT_STAT_CFG_CFG_PRIO_MASK GENMASK(11, 4)
#define ANA_AC_PORT_STAT_CFG_CFG_PRIO_MASK_SET(x)\
@@ -169,10 +432,42 @@ enum sparx5_target {
FIELD_GET(ANA_AC_PORT_STAT_CFG_CFG_CNT_BYTE, x)
/* ANA_AC:STAT_CNT_CFG_PORT:STAT_LSB_CNT */
-#define ANA_AC_PORT_STAT_LSB_CNT(g, r) __REG(TARGET_ANA_AC, 0, 1, 843776, g, 70, 64, 20, r, 4, 4)
+#define ANA_AC_PORT_STAT_LSB_CNT(g, r) __REG(TARGET_ANA_AC,\
+ 0, 1, 843776, g, 70, 64, 20, r, 4, 4)
+
+/* ANA_AC:STAT_GLOBAL_CFG_ACL:GLOBAL_CNT_FRM_TYPE_CFG */
+#define ANA_AC_ACL_GLOBAL_CNT_FRM_TYPE_CFG(r) __REG(TARGET_ANA_AC,\
+ 0, 1, 893792, 0, 1, 24, 0, r, 2, 4)
+
+#define ANA_AC_ACL_GLOBAL_CNT_FRM_TYPE_CFG_GLOBAL_CFG_CNT_FRM_TYPE GENMASK(2, 0)
+#define ANA_AC_ACL_GLOBAL_CNT_FRM_TYPE_CFG_GLOBAL_CFG_CNT_FRM_TYPE_SET(x)\
+ FIELD_PREP(ANA_AC_ACL_GLOBAL_CNT_FRM_TYPE_CFG_GLOBAL_CFG_CNT_FRM_TYPE, x)
+#define ANA_AC_ACL_GLOBAL_CNT_FRM_TYPE_CFG_GLOBAL_CFG_CNT_FRM_TYPE_GET(x)\
+ FIELD_GET(ANA_AC_ACL_GLOBAL_CNT_FRM_TYPE_CFG_GLOBAL_CFG_CNT_FRM_TYPE, x)
+
+/* ANA_AC:STAT_GLOBAL_CFG_ACL:STAT_GLOBAL_CFG */
+#define ANA_AC_ACL_STAT_GLOBAL_CFG(r) __REG(TARGET_ANA_AC,\
+ 0, 1, 893792, 0, 1, 24, 8, r, 2, 4)
+
+#define ANA_AC_ACL_STAT_GLOBAL_CFG_GLOBAL_CFG_CNT_BYTE BIT(0)
+#define ANA_AC_ACL_STAT_GLOBAL_CFG_GLOBAL_CFG_CNT_BYTE_SET(x)\
+ FIELD_PREP(ANA_AC_ACL_STAT_GLOBAL_CFG_GLOBAL_CFG_CNT_BYTE, x)
+#define ANA_AC_ACL_STAT_GLOBAL_CFG_GLOBAL_CFG_CNT_BYTE_GET(x)\
+ FIELD_GET(ANA_AC_ACL_STAT_GLOBAL_CFG_GLOBAL_CFG_CNT_BYTE, x)
+
+/* ANA_AC:STAT_GLOBAL_CFG_ACL:STAT_GLOBAL_EVENT_MASK */
+#define ANA_AC_ACL_STAT_GLOBAL_EVENT_MASK(r) __REG(TARGET_ANA_AC,\
+ 0, 1, 893792, 0, 1, 24, 16, r, 2, 4)
+
+#define ANA_AC_ACL_STAT_GLOBAL_EVENT_MASK_GLOBAL_EVENT_MASK GENMASK(3, 0)
+#define ANA_AC_ACL_STAT_GLOBAL_EVENT_MASK_GLOBAL_EVENT_MASK_SET(x)\
+ FIELD_PREP(ANA_AC_ACL_STAT_GLOBAL_EVENT_MASK_GLOBAL_EVENT_MASK, x)
+#define ANA_AC_ACL_STAT_GLOBAL_EVENT_MASK_GLOBAL_EVENT_MASK_GET(x)\
+ FIELD_GET(ANA_AC_ACL_STAT_GLOBAL_EVENT_MASK_GLOBAL_EVENT_MASK, x)
/* ANA_ACL:COMMON:VCAP_S2_CFG */
-#define ANA_ACL_VCAP_S2_CFG(r) __REG(TARGET_ANA_ACL, 0, 1, 32768, 0, 1, 592, 0, r, 70, 4)
+#define ANA_ACL_VCAP_S2_CFG(r) __REG(TARGET_ANA_ACL,\
+ 0, 1, 32768, 0, 1, 592, 0, r, 70, 4)
#define ANA_ACL_VCAP_S2_CFG_SEC_ROUTE_HANDLING_ENA BIT(28)
#define ANA_ACL_VCAP_S2_CFG_SEC_ROUTE_HANDLING_ENA_SET(x)\
@@ -259,7 +554,8 @@ enum sparx5_target {
FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_ENA, x)
/* ANA_ACL:COMMON:SWAP_IP_CTRL */
-#define ANA_ACL_SWAP_IP_CTRL __REG(TARGET_ANA_ACL, 0, 1, 32768, 0, 1, 592, 412, 0, 1, 4)
+#define ANA_ACL_SWAP_IP_CTRL __REG(TARGET_ANA_ACL,\
+ 0, 1, 32768, 0, 1, 592, 412, 0, 1, 4)
#define ANA_ACL_SWAP_IP_CTRL_DMAC_REPL_OFFSET_VAL GENMASK(23, 18)
#define ANA_ACL_SWAP_IP_CTRL_DMAC_REPL_OFFSET_VAL_SET(x)\
@@ -292,7 +588,8 @@ enum sparx5_target {
FIELD_GET(ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP4_TTL_ENA, x)
/* ANA_ACL:COMMON:VCAP_S2_RLEG_STAT */
-#define ANA_ACL_VCAP_S2_RLEG_STAT(r) __REG(TARGET_ANA_ACL, 0, 1, 32768, 0, 1, 592, 424, r, 4, 4)
+#define ANA_ACL_VCAP_S2_RLEG_STAT(r) __REG(TARGET_ANA_ACL,\
+ 0, 1, 32768, 0, 1, 592, 424, r, 4, 4)
#define ANA_ACL_VCAP_S2_RLEG_STAT_IRLEG_STAT_MASK GENMASK(12, 6)
#define ANA_ACL_VCAP_S2_RLEG_STAT_IRLEG_STAT_MASK_SET(x)\
@@ -307,7 +604,8 @@ enum sparx5_target {
FIELD_GET(ANA_ACL_VCAP_S2_RLEG_STAT_ERLEG_STAT_MASK, x)
/* ANA_ACL:COMMON:VCAP_S2_FRAGMENT_CFG */
-#define ANA_ACL_VCAP_S2_FRAGMENT_CFG __REG(TARGET_ANA_ACL, 0, 1, 32768, 0, 1, 592, 440, 0, 1, 4)
+#define ANA_ACL_VCAP_S2_FRAGMENT_CFG __REG(TARGET_ANA_ACL,\
+ 0, 1, 32768, 0, 1, 592, 440, 0, 1, 4)
#define ANA_ACL_VCAP_S2_FRAGMENT_CFG_L4_MIN_LEN GENMASK(9, 5)
#define ANA_ACL_VCAP_S2_FRAGMENT_CFG_L4_MIN_LEN_SET(x)\
@@ -328,7 +626,8 @@ enum sparx5_target {
FIELD_GET(ANA_ACL_VCAP_S2_FRAGMENT_CFG_FRAGMENT_OFFSET_THRES, x)
/* ANA_ACL:COMMON:OWN_UPSID */
-#define ANA_ACL_OWN_UPSID(r) __REG(TARGET_ANA_ACL, 0, 1, 32768, 0, 1, 592, 580, r, 3, 4)
+#define ANA_ACL_OWN_UPSID(r) __REG(TARGET_ANA_ACL,\
+ 0, 1, 32768, 0, 1, 592, 580, r, 3, 4)
#define ANA_ACL_OWN_UPSID_OWN_UPSID GENMASK(4, 0)
#define ANA_ACL_OWN_UPSID_OWN_UPSID_SET(x)\
@@ -337,7 +636,8 @@ enum sparx5_target {
FIELD_GET(ANA_ACL_OWN_UPSID_OWN_UPSID, x)
/* ANA_ACL:KEY_SEL:VCAP_S2_KEY_SEL */
-#define ANA_ACL_VCAP_S2_KEY_SEL(g, r) __REG(TARGET_ANA_ACL, 0, 1, 34200, g, 134, 16, 0, r, 4, 4)
+#define ANA_ACL_VCAP_S2_KEY_SEL(g, r) __REG(TARGET_ANA_ACL,\
+ 0, 1, 34200, g, 134, 16, 0, r, 4, 4)
#define ANA_ACL_VCAP_S2_KEY_SEL_KEY_SEL_ENA BIT(13)
#define ANA_ACL_VCAP_S2_KEY_SEL_KEY_SEL_ENA_SET(x)\
@@ -388,13 +688,16 @@ enum sparx5_target {
FIELD_GET(ANA_ACL_VCAP_S2_KEY_SEL_ARP_KEY_SEL, x)
/* ANA_ACL:CNT_A:CNT_A */
-#define ANA_ACL_CNT_A(g) __REG(TARGET_ANA_ACL, 0, 1, 0, g, 4096, 4, 0, 0, 1, 4)
+#define ANA_ACL_CNT_A(g) __REG(TARGET_ANA_ACL,\
+ 0, 1, 0, g, 4096, 4, 0, 0, 1, 4)
/* ANA_ACL:CNT_B:CNT_B */
-#define ANA_ACL_CNT_B(g) __REG(TARGET_ANA_ACL, 0, 1, 16384, g, 4096, 4, 0, 0, 1, 4)
+#define ANA_ACL_CNT_B(g) __REG(TARGET_ANA_ACL,\
+ 0, 1, 16384, g, 4096, 4, 0, 0, 1, 4)
/* ANA_ACL:STICKY:SEC_LOOKUP_STICKY */
-#define ANA_ACL_SEC_LOOKUP_STICKY(r) __REG(TARGET_ANA_ACL, 0, 1, 36408, 0, 1, 16, 0, r, 4, 4)
+#define ANA_ACL_SEC_LOOKUP_STICKY(r) __REG(TARGET_ANA_ACL,\
+ 0, 1, 36408, 0, 1, 16, 0, r, 4, 4)
#define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_CLM_STICKY BIT(17)
#define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_CLM_STICKY_SET(x)\
@@ -505,7 +808,8 @@ enum sparx5_target {
FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY, x)
/* ANA_AC_POL:POL_ALL_CFG:POL_UPD_INT_CFG */
-#define ANA_AC_POL_POL_UPD_INT_CFG __REG(TARGET_ANA_AC_POL, 0, 1, 75968, 0, 1, 1160, 1148, 0, 1, 4)
+#define ANA_AC_POL_POL_UPD_INT_CFG __REG(TARGET_ANA_AC_POL,\
+ 0, 1, 75968, 0, 1, 1160, 1148, 0, 1, 4)
#define ANA_AC_POL_POL_UPD_INT_CFG_POL_UPD_INT GENMASK(9, 0)
#define ANA_AC_POL_POL_UPD_INT_CFG_POL_UPD_INT_SET(x)\
@@ -514,7 +818,8 @@ enum sparx5_target {
FIELD_GET(ANA_AC_POL_POL_UPD_INT_CFG_POL_UPD_INT, x)
/* ANA_AC_POL:COMMON_BDLB:DLB_CTRL */
-#define ANA_AC_POL_BDLB_DLB_CTRL __REG(TARGET_ANA_AC_POL, 0, 1, 79048, 0, 1, 8, 0, 0, 1, 4)
+#define ANA_AC_POL_BDLB_DLB_CTRL __REG(TARGET_ANA_AC_POL,\
+ 0, 1, 79048, 0, 1, 8, 0, 0, 1, 4)
#define ANA_AC_POL_BDLB_DLB_CTRL_CLK_PERIOD_01NS GENMASK(26, 19)
#define ANA_AC_POL_BDLB_DLB_CTRL_CLK_PERIOD_01NS_SET(x)\
@@ -541,7 +846,8 @@ enum sparx5_target {
FIELD_GET(ANA_AC_POL_BDLB_DLB_CTRL_DLB_ADD_ENA, x)
/* ANA_AC_POL:COMMON_BUM_SLB:DLB_CTRL */
-#define ANA_AC_POL_SLB_DLB_CTRL __REG(TARGET_ANA_AC_POL, 0, 1, 79056, 0, 1, 20, 0, 0, 1, 4)
+#define ANA_AC_POL_SLB_DLB_CTRL __REG(TARGET_ANA_AC_POL,\
+ 0, 1, 79056, 0, 1, 20, 0, 0, 1, 4)
#define ANA_AC_POL_SLB_DLB_CTRL_CLK_PERIOD_01NS GENMASK(26, 19)
#define ANA_AC_POL_SLB_DLB_CTRL_CLK_PERIOD_01NS_SET(x)\
@@ -567,8 +873,235 @@ enum sparx5_target {
#define ANA_AC_POL_SLB_DLB_CTRL_DLB_ADD_ENA_GET(x)\
FIELD_GET(ANA_AC_POL_SLB_DLB_CTRL_DLB_ADD_ENA, x)
+/* ANA_AC_SDLB:LBGRP_TBL:XLB_START */
+#define ANA_AC_SDLB_XLB_START(g) __REG(TARGET_ANA_AC_SDLB,\
+ 0, 1, 295468, g, 10, 24, 0, 0, 1, 4)
+
+#define ANA_AC_SDLB_XLB_START_LBSET_START GENMASK(12, 0)
+#define ANA_AC_SDLB_XLB_START_LBSET_START_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_XLB_START_LBSET_START, x)
+#define ANA_AC_SDLB_XLB_START_LBSET_START_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_XLB_START_LBSET_START, x)
+
+/* ANA_AC_SDLB:LBGRP_TBL:PUP_INTERVAL */
+#define ANA_AC_SDLB_PUP_INTERVAL(g) __REG(TARGET_ANA_AC_SDLB,\
+ 0, 1, 295468, g, 10, 24, 4, 0, 1, 4)
+
+#define ANA_AC_SDLB_PUP_INTERVAL_PUP_INTERVAL GENMASK(19, 0)
+#define ANA_AC_SDLB_PUP_INTERVAL_PUP_INTERVAL_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_PUP_INTERVAL_PUP_INTERVAL, x)
+#define ANA_AC_SDLB_PUP_INTERVAL_PUP_INTERVAL_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_PUP_INTERVAL_PUP_INTERVAL, x)
+
+/* ANA_AC_SDLB:LBGRP_TBL:PUP_CTRL */
+#define ANA_AC_SDLB_PUP_CTRL(g) __REG(TARGET_ANA_AC_SDLB,\
+ 0, 1, 295468, g, 10, 24, 8, 0, 1, 4)
+
+#define ANA_AC_SDLB_PUP_CTRL_PUP_LB_DT GENMASK(18, 0)
+#define ANA_AC_SDLB_PUP_CTRL_PUP_LB_DT_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_PUP_CTRL_PUP_LB_DT, x)
+#define ANA_AC_SDLB_PUP_CTRL_PUP_LB_DT_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_PUP_CTRL_PUP_LB_DT, x)
+
+#define ANA_AC_SDLB_PUP_CTRL_PUP_ENA BIT(24)
+#define ANA_AC_SDLB_PUP_CTRL_PUP_ENA_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_PUP_CTRL_PUP_ENA, x)
+#define ANA_AC_SDLB_PUP_CTRL_PUP_ENA_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_PUP_CTRL_PUP_ENA, x)
+
+/* ANA_AC_SDLB:LBGRP_TBL:LBGRP_MISC */
+#define ANA_AC_SDLB_LBGRP_MISC(g) __REG(TARGET_ANA_AC_SDLB,\
+ 0, 1, 295468, g, 10, 24, 12, 0, 1, 4)
+
+#define ANA_AC_SDLB_LBGRP_MISC_THRES_SHIFT GENMASK(12, 8)
+#define ANA_AC_SDLB_LBGRP_MISC_THRES_SHIFT_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_LBGRP_MISC_THRES_SHIFT, x)
+#define ANA_AC_SDLB_LBGRP_MISC_THRES_SHIFT_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_LBGRP_MISC_THRES_SHIFT, x)
+
+/* ANA_AC_SDLB:LBGRP_TBL:FRM_RATE_TOKENS */
+#define ANA_AC_SDLB_FRM_RATE_TOKENS(g) __REG(TARGET_ANA_AC_SDLB,\
+ 0, 1, 295468, g, 10, 24, 16, 0, 1, 4)
+
+#define ANA_AC_SDLB_FRM_RATE_TOKENS_FRM_RATE_TOKENS GENMASK(12, 0)
+#define ANA_AC_SDLB_FRM_RATE_TOKENS_FRM_RATE_TOKENS_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_FRM_RATE_TOKENS_FRM_RATE_TOKENS, x)
+#define ANA_AC_SDLB_FRM_RATE_TOKENS_FRM_RATE_TOKENS_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_FRM_RATE_TOKENS_FRM_RATE_TOKENS, x)
+
+/* ANA_AC_SDLB:LBGRP_TBL:LBGRP_STATE_TBL */
+#define ANA_AC_SDLB_LBGRP_STATE_TBL(g) __REG(TARGET_ANA_AC_SDLB,\
+ 0, 1, 295468, g, 10, 24, 20, 0, 1, 4)
+
+#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_ONGOING BIT(0)
+#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_ONGOING_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_ONGOING, x)
+#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_ONGOING_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_ONGOING, x)
+
+#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_WAIT_ACK BIT(1)
+#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_WAIT_ACK_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_WAIT_ACK, x)
+#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_WAIT_ACK_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_WAIT_ACK, x)
+
+#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_LBSET_NEXT GENMASK(28, 16)
+#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_LBSET_NEXT_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_LBSET_NEXT, x)
+#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_LBSET_NEXT_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_LBSET_NEXT, x)
+
+/* ANA_AC_SDLB:LBSET_TBL:PUP_TOKENS */
+#define ANA_AC_SDLB_PUP_TOKENS(g, r) __REG(TARGET_ANA_AC_SDLB,\
+ 0, 1, 0, g, 4616, 64, 0, r, 2, 4)
+
+#define ANA_AC_SDLB_PUP_TOKENS_PUP_TOKENS GENMASK(12, 0)
+#define ANA_AC_SDLB_PUP_TOKENS_PUP_TOKENS_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_PUP_TOKENS_PUP_TOKENS, x)
+#define ANA_AC_SDLB_PUP_TOKENS_PUP_TOKENS_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_PUP_TOKENS_PUP_TOKENS, x)
+
+/* ANA_AC_SDLB:LBSET_TBL:THRES */
+#define ANA_AC_SDLB_THRES(g, r) __REG(TARGET_ANA_AC_SDLB,\
+ 0, 1, 0, g, 4616, 64, 8, r, 2, 4)
+
+#define ANA_AC_SDLB_THRES_THRES GENMASK(9, 0)
+#define ANA_AC_SDLB_THRES_THRES_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_THRES_THRES, x)
+#define ANA_AC_SDLB_THRES_THRES_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_THRES_THRES, x)
+
+#define ANA_AC_SDLB_THRES_THRES_HYS GENMASK(25, 16)
+#define ANA_AC_SDLB_THRES_THRES_HYS_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_THRES_THRES_HYS, x)
+#define ANA_AC_SDLB_THRES_THRES_HYS_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_THRES_THRES_HYS, x)
+
+/* ANA_AC_SDLB:LBSET_TBL:XLB_NEXT */
+#define ANA_AC_SDLB_XLB_NEXT(g) __REG(TARGET_ANA_AC_SDLB,\
+ 0, 1, 0, g, 4616, 64, 16, 0, 1, 4)
+
+#define ANA_AC_SDLB_XLB_NEXT_LBSET_NEXT GENMASK(12, 0)
+#define ANA_AC_SDLB_XLB_NEXT_LBSET_NEXT_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_XLB_NEXT_LBSET_NEXT, x)
+#define ANA_AC_SDLB_XLB_NEXT_LBSET_NEXT_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_XLB_NEXT_LBSET_NEXT, x)
+
+#define ANA_AC_SDLB_XLB_NEXT_LBGRP GENMASK(27, 24)
+#define ANA_AC_SDLB_XLB_NEXT_LBGRP_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_XLB_NEXT_LBGRP, x)
+#define ANA_AC_SDLB_XLB_NEXT_LBGRP_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_XLB_NEXT_LBGRP, x)
+
+/* ANA_AC_SDLB:LBSET_TBL:INH_CTRL */
+#define ANA_AC_SDLB_INH_CTRL(g, r) __REG(TARGET_ANA_AC_SDLB,\
+ 0, 1, 0, g, 4616, 64, 20, r, 2, 4)
+
+#define ANA_AC_SDLB_INH_CTRL_PUP_TOKENS_MAX GENMASK(12, 0)
+#define ANA_AC_SDLB_INH_CTRL_PUP_TOKENS_MAX_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_INH_CTRL_PUP_TOKENS_MAX, x)
+#define ANA_AC_SDLB_INH_CTRL_PUP_TOKENS_MAX_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_INH_CTRL_PUP_TOKENS_MAX, x)
+
+#define ANA_AC_SDLB_INH_CTRL_INH_MODE GENMASK(21, 20)
+#define ANA_AC_SDLB_INH_CTRL_INH_MODE_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_INH_CTRL_INH_MODE, x)
+#define ANA_AC_SDLB_INH_CTRL_INH_MODE_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_INH_CTRL_INH_MODE, x)
+
+#define ANA_AC_SDLB_INH_CTRL_INH_LB BIT(24)
+#define ANA_AC_SDLB_INH_CTRL_INH_LB_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_INH_CTRL_INH_LB, x)
+#define ANA_AC_SDLB_INH_CTRL_INH_LB_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_INH_CTRL_INH_LB, x)
+
+/* ANA_AC_SDLB:LBSET_TBL:INH_LBSET_ADDR */
+#define ANA_AC_SDLB_INH_LBSET_ADDR(g) __REG(TARGET_ANA_AC_SDLB,\
+ 0, 1, 0, g, 4616, 64, 28, 0, 1, 4)
+
+#define ANA_AC_SDLB_INH_LBSET_ADDR_INH_LBSET_ADDR GENMASK(12, 0)
+#define ANA_AC_SDLB_INH_LBSET_ADDR_INH_LBSET_ADDR_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_INH_LBSET_ADDR_INH_LBSET_ADDR, x)
+#define ANA_AC_SDLB_INH_LBSET_ADDR_INH_LBSET_ADDR_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_INH_LBSET_ADDR_INH_LBSET_ADDR, x)
+
+/* ANA_AC_SDLB:LBSET_TBL:DLB_MISC */
+#define ANA_AC_SDLB_DLB_MISC(g) __REG(TARGET_ANA_AC_SDLB,\
+ 0, 1, 0, g, 4616, 64, 32, 0, 1, 4)
+
+#define ANA_AC_SDLB_DLB_MISC_DLB_FRM_RATE_ENA BIT(0)
+#define ANA_AC_SDLB_DLB_MISC_DLB_FRM_RATE_ENA_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_DLB_MISC_DLB_FRM_RATE_ENA, x)
+#define ANA_AC_SDLB_DLB_MISC_DLB_FRM_RATE_ENA_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_DLB_MISC_DLB_FRM_RATE_ENA, x)
+
+#define ANA_AC_SDLB_DLB_MISC_MARK_ALL_FRMS_RED_ENA BIT(6)
+#define ANA_AC_SDLB_DLB_MISC_MARK_ALL_FRMS_RED_ENA_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_DLB_MISC_MARK_ALL_FRMS_RED_ENA, x)
+#define ANA_AC_SDLB_DLB_MISC_MARK_ALL_FRMS_RED_ENA_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_DLB_MISC_MARK_ALL_FRMS_RED_ENA, x)
+
+#define ANA_AC_SDLB_DLB_MISC_DLB_FRM_ADJ GENMASK(14, 8)
+#define ANA_AC_SDLB_DLB_MISC_DLB_FRM_ADJ_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_DLB_MISC_DLB_FRM_ADJ, x)
+#define ANA_AC_SDLB_DLB_MISC_DLB_FRM_ADJ_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_DLB_MISC_DLB_FRM_ADJ, x)
+
+/* ANA_AC_SDLB:LBSET_TBL:DLB_CFG */
+#define ANA_AC_SDLB_DLB_CFG(g) __REG(TARGET_ANA_AC_SDLB,\
+ 0, 1, 0, g, 4616, 64, 36, 0, 1, 4)
+
+#define ANA_AC_SDLB_DLB_CFG_DROP_ON_YELLOW_ENA BIT(11)
+#define ANA_AC_SDLB_DLB_CFG_DROP_ON_YELLOW_ENA_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_DLB_CFG_DROP_ON_YELLOW_ENA, x)
+#define ANA_AC_SDLB_DLB_CFG_DROP_ON_YELLOW_ENA_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_DLB_CFG_DROP_ON_YELLOW_ENA, x)
+
+#define ANA_AC_SDLB_DLB_CFG_DP_BYPASS_LVL GENMASK(10, 9)
+#define ANA_AC_SDLB_DLB_CFG_DP_BYPASS_LVL_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_DLB_CFG_DP_BYPASS_LVL, x)
+#define ANA_AC_SDLB_DLB_CFG_DP_BYPASS_LVL_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_DLB_CFG_DP_BYPASS_LVL, x)
+
+#define ANA_AC_SDLB_DLB_CFG_HIER_DLB_DIS BIT(8)
+#define ANA_AC_SDLB_DLB_CFG_HIER_DLB_DIS_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_DLB_CFG_HIER_DLB_DIS, x)
+#define ANA_AC_SDLB_DLB_CFG_HIER_DLB_DIS_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_DLB_CFG_HIER_DLB_DIS, x)
+
+#define ANA_AC_SDLB_DLB_CFG_ENCAP_DATA_DIS BIT(7)
+#define ANA_AC_SDLB_DLB_CFG_ENCAP_DATA_DIS_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_DLB_CFG_ENCAP_DATA_DIS, x)
+#define ANA_AC_SDLB_DLB_CFG_ENCAP_DATA_DIS_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_DLB_CFG_ENCAP_DATA_DIS, x)
+
+#define ANA_AC_SDLB_DLB_CFG_COLOR_AWARE_LVL GENMASK(6, 5)
+#define ANA_AC_SDLB_DLB_CFG_COLOR_AWARE_LVL_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_DLB_CFG_COLOR_AWARE_LVL, x)
+#define ANA_AC_SDLB_DLB_CFG_COLOR_AWARE_LVL_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_DLB_CFG_COLOR_AWARE_LVL, x)
+
+#define ANA_AC_SDLB_DLB_CFG_CIR_INC_DP_VAL GENMASK(4, 3)
+#define ANA_AC_SDLB_DLB_CFG_CIR_INC_DP_VAL_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_DLB_CFG_CIR_INC_DP_VAL, x)
+#define ANA_AC_SDLB_DLB_CFG_CIR_INC_DP_VAL_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_DLB_CFG_CIR_INC_DP_VAL, x)
+
+#define ANA_AC_SDLB_DLB_CFG_DLB_MODE BIT(2)
+#define ANA_AC_SDLB_DLB_CFG_DLB_MODE_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_DLB_CFG_DLB_MODE, x)
+#define ANA_AC_SDLB_DLB_CFG_DLB_MODE_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_DLB_CFG_DLB_MODE, x)
+
+#define ANA_AC_SDLB_DLB_CFG_TRAFFIC_TYPE_MASK GENMASK(1, 0)
+#define ANA_AC_SDLB_DLB_CFG_TRAFFIC_TYPE_MASK_SET(x)\
+ FIELD_PREP(ANA_AC_SDLB_DLB_CFG_TRAFFIC_TYPE_MASK, x)
+#define ANA_AC_SDLB_DLB_CFG_TRAFFIC_TYPE_MASK_GET(x)\
+ FIELD_GET(ANA_AC_SDLB_DLB_CFG_TRAFFIC_TYPE_MASK, x)
+
/* ANA_CL:PORT:FILTER_CTRL */
-#define ANA_CL_FILTER_CTRL(g) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 4, 0, 1, 4)
+#define ANA_CL_FILTER_CTRL(g) __REG(TARGET_ANA_CL,\
+ 0, 1, 131072, g, 70, 512, 4, 0, 1, 4)
#define ANA_CL_FILTER_CTRL_FILTER_SMAC_MC_DIS BIT(2)
#define ANA_CL_FILTER_CTRL_FILTER_SMAC_MC_DIS_SET(x)\
@@ -589,7 +1122,8 @@ enum sparx5_target {
FIELD_GET(ANA_CL_FILTER_CTRL_FORCE_FCS_UPDATE_ENA, x)
/* ANA_CL:PORT:VLAN_FILTER_CTRL */
-#define ANA_CL_VLAN_FILTER_CTRL(g, r) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 8, r, 3, 4)
+#define ANA_CL_VLAN_FILTER_CTRL(g, r) __REG(TARGET_ANA_CL,\
+ 0, 1, 131072, g, 70, 512, 8, r, 3, 4)
#define ANA_CL_VLAN_FILTER_CTRL_TAG_REQUIRED_ENA BIT(10)
#define ANA_CL_VLAN_FILTER_CTRL_TAG_REQUIRED_ENA_SET(x)\
@@ -658,7 +1192,8 @@ enum sparx5_target {
FIELD_GET(ANA_CL_VLAN_FILTER_CTRL_CUST3_STAG_DIS, x)
/* ANA_CL:PORT:ETAG_FILTER_CTRL */
-#define ANA_CL_ETAG_FILTER_CTRL(g) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 20, 0, 1, 4)
+#define ANA_CL_ETAG_FILTER_CTRL(g) __REG(TARGET_ANA_CL,\
+ 0, 1, 131072, g, 70, 512, 20, 0, 1, 4)
#define ANA_CL_ETAG_FILTER_CTRL_ETAG_REQUIRED_ENA BIT(1)
#define ANA_CL_ETAG_FILTER_CTRL_ETAG_REQUIRED_ENA_SET(x)\
@@ -673,7 +1208,8 @@ enum sparx5_target {
FIELD_GET(ANA_CL_ETAG_FILTER_CTRL_ETAG_DIS, x)
/* ANA_CL:PORT:VLAN_CTRL */
-#define ANA_CL_VLAN_CTRL(g) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 32, 0, 1, 4)
+#define ANA_CL_VLAN_CTRL(g) __REG(TARGET_ANA_CL,\
+ 0, 1, 131072, g, 70, 512, 32, 0, 1, 4)
#define ANA_CL_VLAN_CTRL_PORT_VOE_TPID_AWARE_DIS GENMASK(30, 26)
#define ANA_CL_VLAN_CTRL_PORT_VOE_TPID_AWARE_DIS_SET(x)\
@@ -742,7 +1278,8 @@ enum sparx5_target {
FIELD_GET(ANA_CL_VLAN_CTRL_PORT_VID, x)
/* ANA_CL:PORT:VLAN_CTRL_2 */
-#define ANA_CL_VLAN_CTRL_2(g) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 36, 0, 1, 4)
+#define ANA_CL_VLAN_CTRL_2(g) __REG(TARGET_ANA_CL,\
+ 0, 1, 131072, g, 70, 512, 36, 0, 1, 4)
#define ANA_CL_VLAN_CTRL_2_VLAN_PUSH_CNT GENMASK(1, 0)
#define ANA_CL_VLAN_CTRL_2_VLAN_PUSH_CNT_SET(x)\
@@ -751,7 +1288,8 @@ enum sparx5_target {
FIELD_GET(ANA_CL_VLAN_CTRL_2_VLAN_PUSH_CNT, x)
/* ANA_CL:PORT:PCP_DEI_MAP_CFG */
-#define ANA_CL_PCP_DEI_MAP_CFG(g, r) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 108, r, 16, 4)
+#define ANA_CL_PCP_DEI_MAP_CFG(g, r) __REG(TARGET_ANA_CL,\
+ 0, 1, 131072, g, 70, 512, 108, r, 16, 4)
#define ANA_CL_PCP_DEI_MAP_CFG_PCP_DEI_DP_VAL GENMASK(4, 3)
#define ANA_CL_PCP_DEI_MAP_CFG_PCP_DEI_DP_VAL_SET(x)\
@@ -766,7 +1304,8 @@ enum sparx5_target {
FIELD_GET(ANA_CL_PCP_DEI_MAP_CFG_PCP_DEI_QOS_VAL, x)
/* ANA_CL:PORT:QOS_CFG */
-#define ANA_CL_QOS_CFG(g) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 172, 0, 1, 4)
+#define ANA_CL_QOS_CFG(g) __REG(TARGET_ANA_CL,\
+ 0, 1, 131072, g, 70, 512, 172, 0, 1, 4)
#define ANA_CL_QOS_CFG_DEFAULT_COSID_ENA BIT(17)
#define ANA_CL_QOS_CFG_DEFAULT_COSID_ENA_SET(x)\
@@ -841,10 +1380,74 @@ enum sparx5_target {
FIELD_GET(ANA_CL_QOS_CFG_DEFAULT_QOS_VAL, x)
/* ANA_CL:PORT:CAPTURE_BPDU_CFG */
-#define ANA_CL_CAPTURE_BPDU_CFG(g) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 196, 0, 1, 4)
+#define ANA_CL_CAPTURE_BPDU_CFG(g) __REG(TARGET_ANA_CL,\
+ 0, 1, 131072, g, 70, 512, 196, 0, 1, 4)
+
+/* ANA_CL:PORT:ADV_CL_CFG_2 */
+#define ANA_CL_ADV_CL_CFG_2(g, r) __REG(TARGET_ANA_CL,\
+ 0, 1, 131072, g, 70, 512, 200, r, 6, 4)
+
+#define ANA_CL_ADV_CL_CFG_2_USE_CL_TCI0_ENA BIT(1)
+#define ANA_CL_ADV_CL_CFG_2_USE_CL_TCI0_ENA_SET(x)\
+ FIELD_PREP(ANA_CL_ADV_CL_CFG_2_USE_CL_TCI0_ENA, x)
+#define ANA_CL_ADV_CL_CFG_2_USE_CL_TCI0_ENA_GET(x)\
+ FIELD_GET(ANA_CL_ADV_CL_CFG_2_USE_CL_TCI0_ENA, x)
+
+#define ANA_CL_ADV_CL_CFG_2_USE_CL_DSCP_ENA BIT(0)
+#define ANA_CL_ADV_CL_CFG_2_USE_CL_DSCP_ENA_SET(x)\
+ FIELD_PREP(ANA_CL_ADV_CL_CFG_2_USE_CL_DSCP_ENA, x)
+#define ANA_CL_ADV_CL_CFG_2_USE_CL_DSCP_ENA_GET(x)\
+ FIELD_GET(ANA_CL_ADV_CL_CFG_2_USE_CL_DSCP_ENA, x)
+
+/* ANA_CL:PORT:ADV_CL_CFG */
+#define ANA_CL_ADV_CL_CFG(g, r) __REG(TARGET_ANA_CL,\
+ 0, 1, 131072, g, 70, 512, 224, r, 6, 4)
+
+#define ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL GENMASK(30, 26)
+#define ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL_SET(x)\
+ FIELD_PREP(ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL, x)
+#define ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL_GET(x)\
+ FIELD_GET(ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL, x)
+
+#define ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL GENMASK(25, 21)
+#define ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL_SET(x)\
+ FIELD_PREP(ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL, x)
+#define ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL_GET(x)\
+ FIELD_GET(ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL, x)
+
+#define ANA_CL_ADV_CL_CFG_MPLS_UC_CLM_KEY_SEL GENMASK(20, 16)
+#define ANA_CL_ADV_CL_CFG_MPLS_UC_CLM_KEY_SEL_SET(x)\
+ FIELD_PREP(ANA_CL_ADV_CL_CFG_MPLS_UC_CLM_KEY_SEL, x)
+#define ANA_CL_ADV_CL_CFG_MPLS_UC_CLM_KEY_SEL_GET(x)\
+ FIELD_GET(ANA_CL_ADV_CL_CFG_MPLS_UC_CLM_KEY_SEL, x)
+
+#define ANA_CL_ADV_CL_CFG_MPLS_MC_CLM_KEY_SEL GENMASK(15, 11)
+#define ANA_CL_ADV_CL_CFG_MPLS_MC_CLM_KEY_SEL_SET(x)\
+ FIELD_PREP(ANA_CL_ADV_CL_CFG_MPLS_MC_CLM_KEY_SEL, x)
+#define ANA_CL_ADV_CL_CFG_MPLS_MC_CLM_KEY_SEL_GET(x)\
+ FIELD_GET(ANA_CL_ADV_CL_CFG_MPLS_MC_CLM_KEY_SEL, x)
+
+#define ANA_CL_ADV_CL_CFG_MLBS_CLM_KEY_SEL GENMASK(10, 6)
+#define ANA_CL_ADV_CL_CFG_MLBS_CLM_KEY_SEL_SET(x)\
+ FIELD_PREP(ANA_CL_ADV_CL_CFG_MLBS_CLM_KEY_SEL, x)
+#define ANA_CL_ADV_CL_CFG_MLBS_CLM_KEY_SEL_GET(x)\
+ FIELD_GET(ANA_CL_ADV_CL_CFG_MLBS_CLM_KEY_SEL, x)
+
+#define ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL GENMASK(5, 1)
+#define ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL_SET(x)\
+ FIELD_PREP(ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL, x)
+#define ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL_GET(x)\
+ FIELD_GET(ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL, x)
+
+#define ANA_CL_ADV_CL_CFG_LOOKUP_ENA BIT(0)
+#define ANA_CL_ADV_CL_CFG_LOOKUP_ENA_SET(x)\
+ FIELD_PREP(ANA_CL_ADV_CL_CFG_LOOKUP_ENA, x)
+#define ANA_CL_ADV_CL_CFG_LOOKUP_ENA_GET(x)\
+ FIELD_GET(ANA_CL_ADV_CL_CFG_LOOKUP_ENA, x)
/* ANA_CL:COMMON:OWN_UPSID */
-#define ANA_CL_OWN_UPSID(r) __REG(TARGET_ANA_CL, 0, 1, 166912, 0, 1, 756, 0, r, 3, 4)
+#define ANA_CL_OWN_UPSID(r) __REG(TARGET_ANA_CL,\
+ 0, 1, 166912, 0, 1, 756, 0, r, 3, 4)
#define ANA_CL_OWN_UPSID_OWN_UPSID GENMASK(4, 0)
#define ANA_CL_OWN_UPSID_OWN_UPSID_SET(x)\
@@ -853,7 +1456,8 @@ enum sparx5_target {
FIELD_GET(ANA_CL_OWN_UPSID_OWN_UPSID, x)
/* ANA_CL:COMMON:DSCP_CFG */
-#define ANA_CL_DSCP_CFG(r) __REG(TARGET_ANA_CL, 0, 1, 166912, 0, 1, 756, 256, r, 64, 4)
+#define ANA_CL_DSCP_CFG(r) __REG(TARGET_ANA_CL,\
+ 0, 1, 166912, 0, 1, 756, 256, r, 64, 4)
#define ANA_CL_DSCP_CFG_DSCP_TRANSLATE_VAL GENMASK(12, 7)
#define ANA_CL_DSCP_CFG_DSCP_TRANSLATE_VAL_SET(x)\
@@ -885,14 +1489,103 @@ enum sparx5_target {
#define ANA_CL_DSCP_CFG_DSCP_TRUST_ENA_GET(x)\
FIELD_GET(ANA_CL_DSCP_CFG_DSCP_TRUST_ENA, x)
+/* ANA_CL:COMMON:QOS_MAP_CFG */
+#define ANA_CL_QOS_MAP_CFG(r) __REG(TARGET_ANA_CL,\
+ 0, 1, 166912, 0, 1, 756, 512, r, 32, 4)
+
+#define ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL GENMASK(9, 4)
+#define ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL_SET(x)\
+ FIELD_PREP(ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL, x)
+#define ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL_GET(x)\
+ FIELD_GET(ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL, x)
+
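This is the hardware end of the 32-entry rewrite table described in sparx5_dcb_app_update(): one DSCP_REWR_VAL per {QoS class, DP level} pair. A sketch of programming one entry, assuming entries are indexed as qos_class + 8 * dp to match the map[i], map[i + 8], ... layout used by the DCB code:

    static void example_set_dscp_rewr(struct sparx5 *sparx5, u32 qos_class,
                                      u32 dp, u32 dscp)
    {
        spx5_rmw(ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL_SET(dscp),
                 ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL,
                 sparx5, ANA_CL_QOS_MAP_CFG(qos_class + 8 * dp));
    }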
+/* ANA_L2:COMMON:FWD_CFG */
+#define ANA_L2_FWD_CFG __REG(TARGET_ANA_L2,\
+ 0, 1, 566024, 0, 1, 700, 0, 0, 1, 4)
+
+#define ANA_L2_FWD_CFG_MAC_TBL_SPLIT_SEL GENMASK(21, 20)
+#define ANA_L2_FWD_CFG_MAC_TBL_SPLIT_SEL_SET(x)\
+ FIELD_PREP(ANA_L2_FWD_CFG_MAC_TBL_SPLIT_SEL, x)
+#define ANA_L2_FWD_CFG_MAC_TBL_SPLIT_SEL_GET(x)\
+ FIELD_GET(ANA_L2_FWD_CFG_MAC_TBL_SPLIT_SEL, x)
+
+#define ANA_L2_FWD_CFG_PORT_DEFAULT_BDLB_ENA BIT(18)
+#define ANA_L2_FWD_CFG_PORT_DEFAULT_BDLB_ENA_SET(x)\
+ FIELD_PREP(ANA_L2_FWD_CFG_PORT_DEFAULT_BDLB_ENA, x)
+#define ANA_L2_FWD_CFG_PORT_DEFAULT_BDLB_ENA_GET(x)\
+ FIELD_GET(ANA_L2_FWD_CFG_PORT_DEFAULT_BDLB_ENA, x)
+
+#define ANA_L2_FWD_CFG_QUEUE_DEFAULT_SDLB_ENA BIT(17)
+#define ANA_L2_FWD_CFG_QUEUE_DEFAULT_SDLB_ENA_SET(x)\
+ FIELD_PREP(ANA_L2_FWD_CFG_QUEUE_DEFAULT_SDLB_ENA, x)
+#define ANA_L2_FWD_CFG_QUEUE_DEFAULT_SDLB_ENA_GET(x)\
+ FIELD_GET(ANA_L2_FWD_CFG_QUEUE_DEFAULT_SDLB_ENA, x)
+
+#define ANA_L2_FWD_CFG_ISDX_LOOKUP_ENA BIT(16)
+#define ANA_L2_FWD_CFG_ISDX_LOOKUP_ENA_SET(x)\
+ FIELD_PREP(ANA_L2_FWD_CFG_ISDX_LOOKUP_ENA, x)
+#define ANA_L2_FWD_CFG_ISDX_LOOKUP_ENA_GET(x)\
+ FIELD_GET(ANA_L2_FWD_CFG_ISDX_LOOKUP_ENA, x)
+
+#define ANA_L2_FWD_CFG_CPU_DMAC_QU GENMASK(10, 8)
+#define ANA_L2_FWD_CFG_CPU_DMAC_QU_SET(x)\
+ FIELD_PREP(ANA_L2_FWD_CFG_CPU_DMAC_QU, x)
+#define ANA_L2_FWD_CFG_CPU_DMAC_QU_GET(x)\
+ FIELD_GET(ANA_L2_FWD_CFG_CPU_DMAC_QU, x)
+
+#define ANA_L2_FWD_CFG_LOOPBACK_ENA BIT(7)
+#define ANA_L2_FWD_CFG_LOOPBACK_ENA_SET(x)\
+ FIELD_PREP(ANA_L2_FWD_CFG_LOOPBACK_ENA, x)
+#define ANA_L2_FWD_CFG_LOOPBACK_ENA_GET(x)\
+ FIELD_GET(ANA_L2_FWD_CFG_LOOPBACK_ENA, x)
+
+#define ANA_L2_FWD_CFG_CPU_DMAC_COPY_ENA BIT(6)
+#define ANA_L2_FWD_CFG_CPU_DMAC_COPY_ENA_SET(x)\
+ FIELD_PREP(ANA_L2_FWD_CFG_CPU_DMAC_COPY_ENA, x)
+#define ANA_L2_FWD_CFG_CPU_DMAC_COPY_ENA_GET(x)\
+ FIELD_GET(ANA_L2_FWD_CFG_CPU_DMAC_COPY_ENA, x)
+
+#define ANA_L2_FWD_CFG_FILTER_MODE_SEL BIT(4)
+#define ANA_L2_FWD_CFG_FILTER_MODE_SEL_SET(x)\
+ FIELD_PREP(ANA_L2_FWD_CFG_FILTER_MODE_SEL, x)
+#define ANA_L2_FWD_CFG_FILTER_MODE_SEL_GET(x)\
+ FIELD_GET(ANA_L2_FWD_CFG_FILTER_MODE_SEL, x)
+
+#define ANA_L2_FWD_CFG_FLOOD_MIRROR_ENA BIT(3)
+#define ANA_L2_FWD_CFG_FLOOD_MIRROR_ENA_SET(x)\
+ FIELD_PREP(ANA_L2_FWD_CFG_FLOOD_MIRROR_ENA, x)
+#define ANA_L2_FWD_CFG_FLOOD_MIRROR_ENA_GET(x)\
+ FIELD_GET(ANA_L2_FWD_CFG_FLOOD_MIRROR_ENA, x)
+
+#define ANA_L2_FWD_CFG_FLOOD_IGNORE_VLAN_ENA BIT(2)
+#define ANA_L2_FWD_CFG_FLOOD_IGNORE_VLAN_ENA_SET(x)\
+ FIELD_PREP(ANA_L2_FWD_CFG_FLOOD_IGNORE_VLAN_ENA, x)
+#define ANA_L2_FWD_CFG_FLOOD_IGNORE_VLAN_ENA_GET(x)\
+ FIELD_GET(ANA_L2_FWD_CFG_FLOOD_IGNORE_VLAN_ENA, x)
+
+#define ANA_L2_FWD_CFG_FLOOD_CPU_COPY_ENA BIT(1)
+#define ANA_L2_FWD_CFG_FLOOD_CPU_COPY_ENA_SET(x)\
+ FIELD_PREP(ANA_L2_FWD_CFG_FLOOD_CPU_COPY_ENA, x)
+#define ANA_L2_FWD_CFG_FLOOD_CPU_COPY_ENA_GET(x)\
+ FIELD_GET(ANA_L2_FWD_CFG_FLOOD_CPU_COPY_ENA, x)
+
+#define ANA_L2_FWD_CFG_FWD_ENA BIT(0)
+#define ANA_L2_FWD_CFG_FWD_ENA_SET(x)\
+ FIELD_PREP(ANA_L2_FWD_CFG_FWD_ENA, x)
+#define ANA_L2_FWD_CFG_FWD_ENA_GET(x)\
+ FIELD_GET(ANA_L2_FWD_CFG_FWD_ENA, x)
+
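/* The _SET()/_GET() helpers above wrap FIELD_PREP() and FIELD_GET()
 * from <linux/bitfield.h>: FIELD_PREP() shifts a value into the mask
 * position, FIELD_GET() extracts and right-aligns it. A minimal usage
 * sketch for the FWD_CFG fields (spx5_rmw() and its argument order are
 * assumed here for illustration):
 *
 *   u32 val = ANA_L2_FWD_CFG_FWD_ENA_SET(1) |
 *             ANA_L2_FWD_CFG_FLOOD_CPU_COPY_ENA_SET(0);
 *   u32 msk = ANA_L2_FWD_CFG_FWD_ENA |
 *             ANA_L2_FWD_CFG_FLOOD_CPU_COPY_ENA;
 *
 *   spx5_rmw(val, msk, sparx5, ANA_L2_FWD_CFG);
 */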
/* ANA_L2:COMMON:AUTO_LRN_CFG */
-#define ANA_L2_AUTO_LRN_CFG __REG(TARGET_ANA_L2, 0, 1, 566024, 0, 1, 700, 24, 0, 1, 4)
+#define ANA_L2_AUTO_LRN_CFG __REG(TARGET_ANA_L2,\
+ 0, 1, 566024, 0, 1, 700, 24, 0, 1, 4)
/* ANA_L2:COMMON:AUTO_LRN_CFG1 */
-#define ANA_L2_AUTO_LRN_CFG1 __REG(TARGET_ANA_L2, 0, 1, 566024, 0, 1, 700, 28, 0, 1, 4)
+#define ANA_L2_AUTO_LRN_CFG1 __REG(TARGET_ANA_L2,\
+ 0, 1, 566024, 0, 1, 700, 28, 0, 1, 4)
/* ANA_L2:COMMON:AUTO_LRN_CFG2 */
-#define ANA_L2_AUTO_LRN_CFG2 __REG(TARGET_ANA_L2, 0, 1, 566024, 0, 1, 700, 32, 0, 1, 4)
+#define ANA_L2_AUTO_LRN_CFG2 __REG(TARGET_ANA_L2,\
+ 0, 1, 566024, 0, 1, 700, 32, 0, 1, 4)
#define ANA_L2_AUTO_LRN_CFG2_AUTO_LRN_ENA2 BIT(0)
#define ANA_L2_AUTO_LRN_CFG2_AUTO_LRN_ENA2_SET(x)\
@@ -901,7 +1594,8 @@ enum sparx5_target {
FIELD_GET(ANA_L2_AUTO_LRN_CFG2_AUTO_LRN_ENA2, x)
/* ANA_L2:COMMON:OWN_UPSID */
-#define ANA_L2_OWN_UPSID(r) __REG(TARGET_ANA_L2, 0, 1, 566024, 0, 1, 700, 672, r, 3, 4)
+#define ANA_L2_OWN_UPSID(r) __REG(TARGET_ANA_L2,\
+ 0, 1, 566024, 0, 1, 700, 672, r, 3, 4)
#define ANA_L2_OWN_UPSID_OWN_UPSID GENMASK(4, 0)
#define ANA_L2_OWN_UPSID_OWN_UPSID_SET(x)\
@@ -909,8 +1603,29 @@ enum sparx5_target {
#define ANA_L2_OWN_UPSID_OWN_UPSID_GET(x)\
FIELD_GET(ANA_L2_OWN_UPSID_OWN_UPSID, x)
+/* ANA_L2:ISDX:DLB_CFG */
+#define ANA_L2_DLB_CFG(g) __REG(TARGET_ANA_L2,\
+ 0, 1, 0, g, 4096, 128, 56, 0, 1, 4)
+
+#define ANA_L2_DLB_CFG_DLB_IDX GENMASK(12, 0)
+#define ANA_L2_DLB_CFG_DLB_IDX_SET(x)\
+ FIELD_PREP(ANA_L2_DLB_CFG_DLB_IDX, x)
+#define ANA_L2_DLB_CFG_DLB_IDX_GET(x)\
+ FIELD_GET(ANA_L2_DLB_CFG_DLB_IDX, x)
+
+/* ANA_L2:ISDX:TSN_CFG */
+#define ANA_L2_TSN_CFG(g) __REG(TARGET_ANA_L2,\
+ 0, 1, 0, g, 4096, 128, 100, 0, 1, 4)
+
+#define ANA_L2_TSN_CFG_TSN_SFID GENMASK(9, 0)
+#define ANA_L2_TSN_CFG_TSN_SFID_SET(x)\
+ FIELD_PREP(ANA_L2_TSN_CFG_TSN_SFID, x)
+#define ANA_L2_TSN_CFG_TSN_SFID_GET(x)\
+ FIELD_GET(ANA_L2_TSN_CFG_TSN_SFID, x)
+
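/* DLB_CFG and TSN_CFG above live in the ANA_L2:ISDX group: group base
 * 0, 4096 groups of 128 bytes. Those figures come straight from the
 * __REG() arguments; reading g as the ISDX (ingress service index)
 * would give one dual-leaky-bucket policer index and one TSN stream
 * filter id (SFID) per service, but that interpretation is an
 * assumption, not something this patch states.
 */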
/* ANA_L3:COMMON:VLAN_CTRL */
-#define ANA_L3_VLAN_CTRL __REG(TARGET_ANA_L3, 0, 1, 493632, 0, 1, 184, 4, 0, 1, 4)
+#define ANA_L3_VLAN_CTRL __REG(TARGET_ANA_L3,\
+ 0, 1, 493632, 0, 1, 184, 4, 0, 1, 4)
#define ANA_L3_VLAN_CTRL_VLAN_ENA BIT(0)
#define ANA_L3_VLAN_CTRL_VLAN_ENA_SET(x)\
@@ -919,7 +1634,8 @@ enum sparx5_target {
FIELD_GET(ANA_L3_VLAN_CTRL_VLAN_ENA, x)
/* ANA_L3:VLAN:VLAN_CFG */
-#define ANA_L3_VLAN_CFG(g) __REG(TARGET_ANA_L3, 0, 1, 0, g, 5120, 64, 8, 0, 1, 4)
+#define ANA_L3_VLAN_CFG(g) __REG(TARGET_ANA_L3,\
+ 0, 1, 0, g, 5120, 64, 8, 0, 1, 4)
#define ANA_L3_VLAN_CFG_VLAN_MSTP_PTR GENMASK(30, 24)
#define ANA_L3_VLAN_CFG_VLAN_MSTP_PTR_SET(x)\
@@ -976,13 +1692,16 @@ enum sparx5_target {
FIELD_GET(ANA_L3_VLAN_CFG_VLAN_MIRROR_ENA, x)
/* ANA_L3:VLAN:VLAN_MASK_CFG */
-#define ANA_L3_VLAN_MASK_CFG(g) __REG(TARGET_ANA_L3, 0, 1, 0, g, 5120, 64, 16, 0, 1, 4)
+#define ANA_L3_VLAN_MASK_CFG(g) __REG(TARGET_ANA_L3,\
+ 0, 1, 0, g, 5120, 64, 16, 0, 1, 4)
/* ANA_L3:VLAN:VLAN_MASK_CFG1 */
-#define ANA_L3_VLAN_MASK_CFG1(g) __REG(TARGET_ANA_L3, 0, 1, 0, g, 5120, 64, 20, 0, 1, 4)
+#define ANA_L3_VLAN_MASK_CFG1(g) __REG(TARGET_ANA_L3,\
+ 0, 1, 0, g, 5120, 64, 20, 0, 1, 4)
/* ANA_L3:VLAN:VLAN_MASK_CFG2 */
-#define ANA_L3_VLAN_MASK_CFG2(g) __REG(TARGET_ANA_L3, 0, 1, 0, g, 5120, 64, 24, 0, 1, 4)
+#define ANA_L3_VLAN_MASK_CFG2(g) __REG(TARGET_ANA_L3,\
+ 0, 1, 0, g, 5120, 64, 24, 0, 1, 4)
#define ANA_L3_VLAN_MASK_CFG2_VLAN_PORT_MASK2 BIT(0)
#define ANA_L3_VLAN_MASK_CFG2_VLAN_PORT_MASK2_SET(x)\
@@ -991,274 +1710,364 @@ enum sparx5_target {
FIELD_GET(ANA_L3_VLAN_MASK_CFG2_VLAN_PORT_MASK2, x)
/* ASM:DEV_STATISTICS:RX_IN_BYTES_CNT */
-#define ASM_RX_IN_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 0, 0, 1, 4)
+#define ASM_RX_IN_BYTES_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 0, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_SYMBOL_ERR_CNT */
-#define ASM_RX_SYMBOL_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 4, 0, 1, 4)
+#define ASM_RX_SYMBOL_ERR_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 4, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_PAUSE_CNT */
-#define ASM_RX_PAUSE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 8, 0, 1, 4)
+#define ASM_RX_PAUSE_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 8, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_UNSUP_OPCODE_CNT */
-#define ASM_RX_UNSUP_OPCODE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 12, 0, 1, 4)
+#define ASM_RX_UNSUP_OPCODE_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 12, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_OK_BYTES_CNT */
-#define ASM_RX_OK_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 16, 0, 1, 4)
+#define ASM_RX_OK_BYTES_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 16, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_BAD_BYTES_CNT */
-#define ASM_RX_BAD_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 20, 0, 1, 4)
+#define ASM_RX_BAD_BYTES_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 20, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_UC_CNT */
-#define ASM_RX_UC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 24, 0, 1, 4)
+#define ASM_RX_UC_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 24, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_MC_CNT */
-#define ASM_RX_MC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 28, 0, 1, 4)
+#define ASM_RX_MC_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 28, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_BC_CNT */
-#define ASM_RX_BC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 32, 0, 1, 4)
+#define ASM_RX_BC_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 32, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_CRC_ERR_CNT */
-#define ASM_RX_CRC_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 36, 0, 1, 4)
+#define ASM_RX_CRC_ERR_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 36, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_UNDERSIZE_CNT */
-#define ASM_RX_UNDERSIZE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 40, 0, 1, 4)
+#define ASM_RX_UNDERSIZE_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 40, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_FRAGMENTS_CNT */
-#define ASM_RX_FRAGMENTS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 44, 0, 1, 4)
+#define ASM_RX_FRAGMENTS_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 44, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_IN_RANGE_LEN_ERR_CNT */
-#define ASM_RX_IN_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 48, 0, 1, 4)
+#define ASM_RX_IN_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 48, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_OUT_OF_RANGE_LEN_ERR_CNT */
-#define ASM_RX_OUT_OF_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 52, 0, 1, 4)
+#define ASM_RX_OUT_OF_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 52, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_OVERSIZE_CNT */
-#define ASM_RX_OVERSIZE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 56, 0, 1, 4)
+#define ASM_RX_OVERSIZE_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 56, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_JABBERS_CNT */
-#define ASM_RX_JABBERS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 60, 0, 1, 4)
+#define ASM_RX_JABBERS_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 60, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_SIZE64_CNT */
-#define ASM_RX_SIZE64_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 64, 0, 1, 4)
+#define ASM_RX_SIZE64_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 64, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_SIZE65TO127_CNT */
-#define ASM_RX_SIZE65TO127_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 68, 0, 1, 4)
+#define ASM_RX_SIZE65TO127_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 68, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_SIZE128TO255_CNT */
-#define ASM_RX_SIZE128TO255_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 72, 0, 1, 4)
+#define ASM_RX_SIZE128TO255_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 72, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_SIZE256TO511_CNT */
-#define ASM_RX_SIZE256TO511_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 76, 0, 1, 4)
+#define ASM_RX_SIZE256TO511_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 76, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_SIZE512TO1023_CNT */
-#define ASM_RX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 80, 0, 1, 4)
+#define ASM_RX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 80, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_SIZE1024TO1518_CNT */
-#define ASM_RX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 84, 0, 1, 4)
+#define ASM_RX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 84, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_SIZE1519TOMAX_CNT */
-#define ASM_RX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 88, 0, 1, 4)
+#define ASM_RX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 88, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_IPG_SHRINK_CNT */
-#define ASM_RX_IPG_SHRINK_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 92, 0, 1, 4)
+#define ASM_RX_IPG_SHRINK_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 92, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_OUT_BYTES_CNT */
-#define ASM_TX_OUT_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 96, 0, 1, 4)
+#define ASM_TX_OUT_BYTES_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 96, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_PAUSE_CNT */
-#define ASM_TX_PAUSE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 100, 0, 1, 4)
+#define ASM_TX_PAUSE_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 100, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_OK_BYTES_CNT */
-#define ASM_TX_OK_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 104, 0, 1, 4)
+#define ASM_TX_OK_BYTES_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 104, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_UC_CNT */
-#define ASM_TX_UC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 108, 0, 1, 4)
+#define ASM_TX_UC_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 108, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_MC_CNT */
-#define ASM_TX_MC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 112, 0, 1, 4)
+#define ASM_TX_MC_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 112, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_BC_CNT */
-#define ASM_TX_BC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 116, 0, 1, 4)
+#define ASM_TX_BC_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 116, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_SIZE64_CNT */
-#define ASM_TX_SIZE64_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 120, 0, 1, 4)
+#define ASM_TX_SIZE64_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 120, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_SIZE65TO127_CNT */
-#define ASM_TX_SIZE65TO127_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 124, 0, 1, 4)
+#define ASM_TX_SIZE65TO127_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 124, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_SIZE128TO255_CNT */
-#define ASM_TX_SIZE128TO255_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 128, 0, 1, 4)
+#define ASM_TX_SIZE128TO255_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 128, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_SIZE256TO511_CNT */
-#define ASM_TX_SIZE256TO511_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 132, 0, 1, 4)
+#define ASM_TX_SIZE256TO511_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 132, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_SIZE512TO1023_CNT */
-#define ASM_TX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 136, 0, 1, 4)
+#define ASM_TX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 136, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_SIZE1024TO1518_CNT */
-#define ASM_TX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 140, 0, 1, 4)
+#define ASM_TX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 140, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_SIZE1519TOMAX_CNT */
-#define ASM_TX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 144, 0, 1, 4)
+#define ASM_TX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 144, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_ALIGNMENT_LOST_CNT */
-#define ASM_RX_ALIGNMENT_LOST_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 148, 0, 1, 4)
+#define ASM_RX_ALIGNMENT_LOST_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 148, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_TAGGED_FRMS_CNT */
-#define ASM_RX_TAGGED_FRMS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 152, 0, 1, 4)
+#define ASM_RX_TAGGED_FRMS_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 152, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_UNTAGGED_FRMS_CNT */
-#define ASM_RX_UNTAGGED_FRMS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 156, 0, 1, 4)
+#define ASM_RX_UNTAGGED_FRMS_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 156, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_TAGGED_FRMS_CNT */
-#define ASM_TX_TAGGED_FRMS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 160, 0, 1, 4)
+#define ASM_TX_TAGGED_FRMS_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 160, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_UNTAGGED_FRMS_CNT */
-#define ASM_TX_UNTAGGED_FRMS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 164, 0, 1, 4)
+#define ASM_TX_UNTAGGED_FRMS_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 164, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_SYMBOL_ERR_CNT */
-#define ASM_PMAC_RX_SYMBOL_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 168, 0, 1, 4)
+#define ASM_PMAC_RX_SYMBOL_ERR_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 168, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_PAUSE_CNT */
-#define ASM_PMAC_RX_PAUSE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 172, 0, 1, 4)
+#define ASM_PMAC_RX_PAUSE_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 172, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_UNSUP_OPCODE_CNT */
-#define ASM_PMAC_RX_UNSUP_OPCODE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 176, 0, 1, 4)
+#define ASM_PMAC_RX_UNSUP_OPCODE_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 176, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_OK_BYTES_CNT */
-#define ASM_PMAC_RX_OK_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 180, 0, 1, 4)
+#define ASM_PMAC_RX_OK_BYTES_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 180, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_BAD_BYTES_CNT */
-#define ASM_PMAC_RX_BAD_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 184, 0, 1, 4)
+#define ASM_PMAC_RX_BAD_BYTES_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 184, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_UC_CNT */
-#define ASM_PMAC_RX_UC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 188, 0, 1, 4)
+#define ASM_PMAC_RX_UC_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 188, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_MC_CNT */
-#define ASM_PMAC_RX_MC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 192, 0, 1, 4)
+#define ASM_PMAC_RX_MC_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 192, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_BC_CNT */
-#define ASM_PMAC_RX_BC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 196, 0, 1, 4)
+#define ASM_PMAC_RX_BC_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 196, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_CRC_ERR_CNT */
-#define ASM_PMAC_RX_CRC_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 200, 0, 1, 4)
+#define ASM_PMAC_RX_CRC_ERR_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 200, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_UNDERSIZE_CNT */
-#define ASM_PMAC_RX_UNDERSIZE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 204, 0, 1, 4)
+#define ASM_PMAC_RX_UNDERSIZE_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 204, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_FRAGMENTS_CNT */
-#define ASM_PMAC_RX_FRAGMENTS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 208, 0, 1, 4)
+#define ASM_PMAC_RX_FRAGMENTS_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 208, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_IN_RANGE_LEN_ERR_CNT */
-#define ASM_PMAC_RX_IN_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 212, 0, 1, 4)
+#define ASM_PMAC_RX_IN_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 212, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_OUT_OF_RANGE_LEN_ERR_CNT */
-#define ASM_PMAC_RX_OUT_OF_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 216, 0, 1, 4)
+#define ASM_PMAC_RX_OUT_OF_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 216, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_OVERSIZE_CNT */
-#define ASM_PMAC_RX_OVERSIZE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 220, 0, 1, 4)
+#define ASM_PMAC_RX_OVERSIZE_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 220, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_JABBERS_CNT */
-#define ASM_PMAC_RX_JABBERS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 224, 0, 1, 4)
+#define ASM_PMAC_RX_JABBERS_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 224, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_SIZE64_CNT */
-#define ASM_PMAC_RX_SIZE64_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 228, 0, 1, 4)
+#define ASM_PMAC_RX_SIZE64_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 228, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_SIZE65TO127_CNT */
-#define ASM_PMAC_RX_SIZE65TO127_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 232, 0, 1, 4)
+#define ASM_PMAC_RX_SIZE65TO127_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 232, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_SIZE128TO255_CNT */
-#define ASM_PMAC_RX_SIZE128TO255_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 236, 0, 1, 4)
+#define ASM_PMAC_RX_SIZE128TO255_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 236, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_SIZE256TO511_CNT */
-#define ASM_PMAC_RX_SIZE256TO511_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 240, 0, 1, 4)
+#define ASM_PMAC_RX_SIZE256TO511_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 240, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_SIZE512TO1023_CNT */
-#define ASM_PMAC_RX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 244, 0, 1, 4)
+#define ASM_PMAC_RX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 244, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_SIZE1024TO1518_CNT */
-#define ASM_PMAC_RX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 248, 0, 1, 4)
+#define ASM_PMAC_RX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 248, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_SIZE1519TOMAX_CNT */
-#define ASM_PMAC_RX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 252, 0, 1, 4)
+#define ASM_PMAC_RX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 252, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_TX_PAUSE_CNT */
-#define ASM_PMAC_TX_PAUSE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 256, 0, 1, 4)
+#define ASM_PMAC_TX_PAUSE_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 256, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_TX_OK_BYTES_CNT */
-#define ASM_PMAC_TX_OK_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 260, 0, 1, 4)
+#define ASM_PMAC_TX_OK_BYTES_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 260, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_TX_UC_CNT */
-#define ASM_PMAC_TX_UC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 264, 0, 1, 4)
+#define ASM_PMAC_TX_UC_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 264, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_TX_MC_CNT */
-#define ASM_PMAC_TX_MC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 268, 0, 1, 4)
+#define ASM_PMAC_TX_MC_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 268, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_TX_BC_CNT */
-#define ASM_PMAC_TX_BC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 272, 0, 1, 4)
+#define ASM_PMAC_TX_BC_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 272, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_TX_SIZE64_CNT */
-#define ASM_PMAC_TX_SIZE64_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 276, 0, 1, 4)
+#define ASM_PMAC_TX_SIZE64_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 276, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_TX_SIZE65TO127_CNT */
-#define ASM_PMAC_TX_SIZE65TO127_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 280, 0, 1, 4)
+#define ASM_PMAC_TX_SIZE65TO127_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 280, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_TX_SIZE128TO255_CNT */
-#define ASM_PMAC_TX_SIZE128TO255_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 284, 0, 1, 4)
+#define ASM_PMAC_TX_SIZE128TO255_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 284, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_TX_SIZE256TO511_CNT */
-#define ASM_PMAC_TX_SIZE256TO511_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 288, 0, 1, 4)
+#define ASM_PMAC_TX_SIZE256TO511_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 288, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_TX_SIZE512TO1023_CNT */
-#define ASM_PMAC_TX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 292, 0, 1, 4)
+#define ASM_PMAC_TX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 292, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_TX_SIZE1024TO1518_CNT */
-#define ASM_PMAC_TX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 296, 0, 1, 4)
+#define ASM_PMAC_TX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 296, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_TX_SIZE1519TOMAX_CNT */
-#define ASM_PMAC_TX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 300, 0, 1, 4)
+#define ASM_PMAC_TX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 300, 0, 1, 4)
/* ASM:DEV_STATISTICS:PMAC_RX_ALIGNMENT_LOST_CNT */
-#define ASM_PMAC_RX_ALIGNMENT_LOST_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 304, 0, 1, 4)
+#define ASM_PMAC_RX_ALIGNMENT_LOST_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 304, 0, 1, 4)
/* ASM:DEV_STATISTICS:MM_RX_ASSEMBLY_ERR_CNT */
-#define ASM_MM_RX_ASSEMBLY_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 308, 0, 1, 4)
+#define ASM_MM_RX_ASSEMBLY_ERR_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 308, 0, 1, 4)
/* ASM:DEV_STATISTICS:MM_RX_SMD_ERR_CNT */
-#define ASM_MM_RX_SMD_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 312, 0, 1, 4)
+#define ASM_MM_RX_SMD_ERR_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 312, 0, 1, 4)
/* ASM:DEV_STATISTICS:MM_RX_ASSEMBLY_OK_CNT */
-#define ASM_MM_RX_ASSEMBLY_OK_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 316, 0, 1, 4)
+#define ASM_MM_RX_ASSEMBLY_OK_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 316, 0, 1, 4)
/* ASM:DEV_STATISTICS:MM_RX_MERGE_FRAG_CNT */
-#define ASM_MM_RX_MERGE_FRAG_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 320, 0, 1, 4)
+#define ASM_MM_RX_MERGE_FRAG_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 320, 0, 1, 4)
/* ASM:DEV_STATISTICS:MM_TX_PFRAGMENT_CNT */
-#define ASM_MM_TX_PFRAGMENT_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 324, 0, 1, 4)
+#define ASM_MM_TX_PFRAGMENT_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 324, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_MULTI_COLL_CNT */
-#define ASM_TX_MULTI_COLL_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 328, 0, 1, 4)
+#define ASM_TX_MULTI_COLL_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 328, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_LATE_COLL_CNT */
-#define ASM_TX_LATE_COLL_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 332, 0, 1, 4)
+#define ASM_TX_LATE_COLL_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 332, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_XCOLL_CNT */
-#define ASM_TX_XCOLL_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 336, 0, 1, 4)
+#define ASM_TX_XCOLL_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 336, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_DEFER_CNT */
-#define ASM_TX_DEFER_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 340, 0, 1, 4)
+#define ASM_TX_DEFER_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 340, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_XDEFER_CNT */
-#define ASM_TX_XDEFER_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 344, 0, 1, 4)
+#define ASM_TX_XDEFER_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 344, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_BACKOFF1_CNT */
-#define ASM_TX_BACKOFF1_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 348, 0, 1, 4)
+#define ASM_TX_BACKOFF1_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 348, 0, 1, 4)
/* ASM:DEV_STATISTICS:TX_CSENSE_CNT */
-#define ASM_TX_CSENSE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 352, 0, 1, 4)
+#define ASM_TX_CSENSE_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 352, 0, 1, 4)
/* ASM:DEV_STATISTICS:RX_IN_BYTES_MSB_CNT */
-#define ASM_RX_IN_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 356, 0, 1, 4)
+#define ASM_RX_IN_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 356, 0, 1, 4)
#define ASM_RX_IN_BYTES_MSB_CNT_RX_IN_BYTES_MSB_CNT GENMASK(3, 0)
#define ASM_RX_IN_BYTES_MSB_CNT_RX_IN_BYTES_MSB_CNT_SET(x)\
@@ -1267,7 +2076,8 @@ enum sparx5_target {
FIELD_GET(ASM_RX_IN_BYTES_MSB_CNT_RX_IN_BYTES_MSB_CNT, x)
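/* The byte counters in this statistics block are wider than 32 bits:
 * each *_MSB_CNT register above holds the top GENMASK(3, 0) bits, so a
 * full value spans two reads. A hedged sketch of combining them
 * (spx5_rd() returning the raw 32-bit register value is an assumption):
 *
 *   u64 bytes = spx5_rd(sparx5, ASM_RX_IN_BYTES_CNT(port));
 *   bytes |= (u64)ASM_RX_IN_BYTES_MSB_CNT_RX_IN_BYTES_MSB_CNT_GET(
 *                  spx5_rd(sparx5, ASM_RX_IN_BYTES_MSB_CNT(port))) << 32;
 *
 * Whether the hardware latches the MSB part on the LSB read (so the
 * LSB register must be read first) is not stated here.
 */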
/* ASM:DEV_STATISTICS:RX_OK_BYTES_MSB_CNT */
-#define ASM_RX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 360, 0, 1, 4)
+#define ASM_RX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 360, 0, 1, 4)
#define ASM_RX_OK_BYTES_MSB_CNT_RX_OK_BYTES_MSB_CNT GENMASK(3, 0)
#define ASM_RX_OK_BYTES_MSB_CNT_RX_OK_BYTES_MSB_CNT_SET(x)\
@@ -1276,7 +2086,8 @@ enum sparx5_target {
FIELD_GET(ASM_RX_OK_BYTES_MSB_CNT_RX_OK_BYTES_MSB_CNT, x)
/* ASM:DEV_STATISTICS:PMAC_RX_OK_BYTES_MSB_CNT */
-#define ASM_PMAC_RX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 364, 0, 1, 4)
+#define ASM_PMAC_RX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 364, 0, 1, 4)
#define ASM_PMAC_RX_OK_BYTES_MSB_CNT_PMAC_RX_OK_BYTES_MSB_CNT GENMASK(3, 0)
#define ASM_PMAC_RX_OK_BYTES_MSB_CNT_PMAC_RX_OK_BYTES_MSB_CNT_SET(x)\
@@ -1285,7 +2096,8 @@ enum sparx5_target {
FIELD_GET(ASM_PMAC_RX_OK_BYTES_MSB_CNT_PMAC_RX_OK_BYTES_MSB_CNT, x)
/* ASM:DEV_STATISTICS:RX_BAD_BYTES_MSB_CNT */
-#define ASM_RX_BAD_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 368, 0, 1, 4)
+#define ASM_RX_BAD_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 368, 0, 1, 4)
#define ASM_RX_BAD_BYTES_MSB_CNT_RX_BAD_BYTES_MSB_CNT GENMASK(3, 0)
#define ASM_RX_BAD_BYTES_MSB_CNT_RX_BAD_BYTES_MSB_CNT_SET(x)\
@@ -1294,7 +2106,8 @@ enum sparx5_target {
FIELD_GET(ASM_RX_BAD_BYTES_MSB_CNT_RX_BAD_BYTES_MSB_CNT, x)
/* ASM:DEV_STATISTICS:PMAC_RX_BAD_BYTES_MSB_CNT */
-#define ASM_PMAC_RX_BAD_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 372, 0, 1, 4)
+#define ASM_PMAC_RX_BAD_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 372, 0, 1, 4)
#define ASM_PMAC_RX_BAD_BYTES_MSB_CNT_PMAC_RX_BAD_BYTES_MSB_CNT GENMASK(3, 0)
#define ASM_PMAC_RX_BAD_BYTES_MSB_CNT_PMAC_RX_BAD_BYTES_MSB_CNT_SET(x)\
@@ -1303,7 +2116,8 @@ enum sparx5_target {
FIELD_GET(ASM_PMAC_RX_BAD_BYTES_MSB_CNT_PMAC_RX_BAD_BYTES_MSB_CNT, x)
/* ASM:DEV_STATISTICS:TX_OUT_BYTES_MSB_CNT */
-#define ASM_TX_OUT_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 376, 0, 1, 4)
+#define ASM_TX_OUT_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 376, 0, 1, 4)
#define ASM_TX_OUT_BYTES_MSB_CNT_TX_OUT_BYTES_MSB_CNT GENMASK(3, 0)
#define ASM_TX_OUT_BYTES_MSB_CNT_TX_OUT_BYTES_MSB_CNT_SET(x)\
@@ -1312,7 +2126,8 @@ enum sparx5_target {
FIELD_GET(ASM_TX_OUT_BYTES_MSB_CNT_TX_OUT_BYTES_MSB_CNT, x)
/* ASM:DEV_STATISTICS:TX_OK_BYTES_MSB_CNT */
-#define ASM_TX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 380, 0, 1, 4)
+#define ASM_TX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 380, 0, 1, 4)
#define ASM_TX_OK_BYTES_MSB_CNT_TX_OK_BYTES_MSB_CNT GENMASK(3, 0)
#define ASM_TX_OK_BYTES_MSB_CNT_TX_OK_BYTES_MSB_CNT_SET(x)\
@@ -1321,7 +2136,8 @@ enum sparx5_target {
FIELD_GET(ASM_TX_OK_BYTES_MSB_CNT_TX_OK_BYTES_MSB_CNT, x)
/* ASM:DEV_STATISTICS:PMAC_TX_OK_BYTES_MSB_CNT */
-#define ASM_PMAC_TX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 384, 0, 1, 4)
+#define ASM_PMAC_TX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 384, 0, 1, 4)
#define ASM_PMAC_TX_OK_BYTES_MSB_CNT_PMAC_TX_OK_BYTES_MSB_CNT GENMASK(3, 0)
#define ASM_PMAC_TX_OK_BYTES_MSB_CNT_PMAC_TX_OK_BYTES_MSB_CNT_SET(x)\
@@ -1330,10 +2146,12 @@ enum sparx5_target {
FIELD_GET(ASM_PMAC_TX_OK_BYTES_MSB_CNT_PMAC_TX_OK_BYTES_MSB_CNT, x)
/* ASM:DEV_STATISTICS:RX_SYNC_LOST_ERR_CNT */
-#define ASM_RX_SYNC_LOST_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 388, 0, 1, 4)
+#define ASM_RX_SYNC_LOST_ERR_CNT(g) __REG(TARGET_ASM,\
+ 0, 1, 0, g, 65, 512, 388, 0, 1, 4)
/* ASM:CFG:STAT_CFG */
-#define ASM_STAT_CFG __REG(TARGET_ASM, 0, 1, 33280, 0, 1, 1088, 0, 0, 1, 4)
+#define ASM_STAT_CFG __REG(TARGET_ASM,\
+ 0, 1, 33280, 0, 1, 1088, 0, 0, 1, 4)
#define ASM_STAT_CFG_STAT_CNT_CLR_SHOT BIT(0)
#define ASM_STAT_CFG_STAT_CNT_CLR_SHOT_SET(x)\
@@ -1342,7 +2160,8 @@ enum sparx5_target {
FIELD_GET(ASM_STAT_CFG_STAT_CNT_CLR_SHOT, x)
/* ASM:CFG:PORT_CFG */
-#define ASM_PORT_CFG(r) __REG(TARGET_ASM, 0, 1, 33280, 0, 1, 1088, 540, r, 67, 4)
+#define ASM_PORT_CFG(r) __REG(TARGET_ASM,\
+ 0, 1, 33280, 0, 1, 1088, 540, r, 67, 4)
#define ASM_PORT_CFG_CSC_STAT_DIS BIT(12)
#define ASM_PORT_CFG_CSC_STAT_DIS_SET(x)\
@@ -1411,7 +2230,8 @@ enum sparx5_target {
FIELD_GET(ASM_PORT_CFG_PFRM_FLUSH, x)
/* ASM:RAM_CTRL:RAM_INIT */
-#define ASM_RAM_INIT __REG(TARGET_ASM, 0, 1, 34832, 0, 1, 4, 0, 0, 1, 4)
+#define ASM_RAM_INIT __REG(TARGET_ASM,\
+ 0, 1, 34832, 0, 1, 4, 0, 0, 1, 4)
#define ASM_RAM_INIT_RAM_INIT BIT(1)
#define ASM_RAM_INIT_RAM_INIT_SET(x)\
@@ -1426,7 +2246,8 @@ enum sparx5_target {
FIELD_GET(ASM_RAM_INIT_RAM_CFG_HOOK, x)
/* CLKGEN:LCPLL1:LCPLL1_CORE_CLK_CFG */
-#define CLKGEN_LCPLL1_CORE_CLK_CFG __REG(TARGET_CLKGEN, 0, 1, 12, 0, 1, 36, 0, 0, 1, 4)
+#define CLKGEN_LCPLL1_CORE_CLK_CFG __REG(TARGET_CLKGEN,\
+ 0, 1, 12, 0, 1, 36, 0, 0, 1, 4)
#define CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_CLK_DIV GENMASK(7, 0)
#define CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_CLK_DIV_SET(x)\
@@ -1465,7 +2286,8 @@ enum sparx5_target {
FIELD_GET(CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_CLK_ENA, x)
/* CPU:CPU_REGS:PROC_CTRL */
-#define CPU_PROC_CTRL __REG(TARGET_CPU, 0, 1, 0, 0, 1, 204, 176, 0, 1, 4)
+#define CPU_PROC_CTRL __REG(TARGET_CPU,\
+ 0, 1, 0, 0, 1, 204, 176, 0, 1, 4)
#define CPU_PROC_CTRL_AARCH64_MODE_ENA BIT(12)
#define CPU_PROC_CTRL_AARCH64_MODE_ENA_SET(x)\
@@ -1546,7 +2368,8 @@ enum sparx5_target {
FIELD_GET(CPU_PROC_CTRL_ACP_DISABLE, x)
/* DEV10G:MAC_CFG_STATUS:MAC_ENA_CFG */
-#define DEV10G_MAC_ENA_CFG(t) __REG(TARGET_DEV10G, t, 12, 0, 0, 1, 60, 0, 0, 1, 4)
+#define DEV10G_MAC_ENA_CFG(t) __REG(TARGET_DEV10G,\
+ t, 12, 0, 0, 1, 60, 0, 0, 1, 4)
#define DEV10G_MAC_ENA_CFG_RX_ENA BIT(4)
#define DEV10G_MAC_ENA_CFG_RX_ENA_SET(x)\
@@ -1561,7 +2384,8 @@ enum sparx5_target {
FIELD_GET(DEV10G_MAC_ENA_CFG_TX_ENA, x)
/* DEV10G:MAC_CFG_STATUS:MAC_MAXLEN_CFG */
-#define DEV10G_MAC_MAXLEN_CFG(t) __REG(TARGET_DEV10G, t, 12, 0, 0, 1, 60, 8, 0, 1, 4)
+#define DEV10G_MAC_MAXLEN_CFG(t) __REG(TARGET_DEV10G,\
+ t, 12, 0, 0, 1, 60, 8, 0, 1, 4)
#define DEV10G_MAC_MAXLEN_CFG_MAX_LEN_TAG_CHK BIT(16)
#define DEV10G_MAC_MAXLEN_CFG_MAX_LEN_TAG_CHK_SET(x)\
@@ -1576,7 +2400,8 @@ enum sparx5_target {
FIELD_GET(DEV10G_MAC_MAXLEN_CFG_MAX_LEN, x)
/* DEV10G:MAC_CFG_STATUS:MAC_NUM_TAGS_CFG */
-#define DEV10G_MAC_NUM_TAGS_CFG(t) __REG(TARGET_DEV10G, t, 12, 0, 0, 1, 60, 12, 0, 1, 4)
+#define DEV10G_MAC_NUM_TAGS_CFG(t) __REG(TARGET_DEV10G,\
+ t, 12, 0, 0, 1, 60, 12, 0, 1, 4)
#define DEV10G_MAC_NUM_TAGS_CFG_NUM_TAGS GENMASK(1, 0)
#define DEV10G_MAC_NUM_TAGS_CFG_NUM_TAGS_SET(x)\
@@ -1585,7 +2410,8 @@ enum sparx5_target {
FIELD_GET(DEV10G_MAC_NUM_TAGS_CFG_NUM_TAGS, x)
/* DEV10G:MAC_CFG_STATUS:MAC_TAGS_CFG */
-#define DEV10G_MAC_TAGS_CFG(t, r) __REG(TARGET_DEV10G, t, 12, 0, 0, 1, 60, 16, r, 3, 4)
+#define DEV10G_MAC_TAGS_CFG(t, r) __REG(TARGET_DEV10G,\
+ t, 12, 0, 0, 1, 60, 16, r, 3, 4)
#define DEV10G_MAC_TAGS_CFG_TAG_ID GENMASK(31, 16)
#define DEV10G_MAC_TAGS_CFG_TAG_ID_SET(x)\
@@ -1600,7 +2426,8 @@ enum sparx5_target {
FIELD_GET(DEV10G_MAC_TAGS_CFG_TAG_ENA, x)
/* DEV10G:MAC_CFG_STATUS:MAC_ADV_CHK_CFG */
-#define DEV10G_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV10G, t, 12, 0, 0, 1, 60, 28, 0, 1, 4)
+#define DEV10G_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV10G,\
+ t, 12, 0, 0, 1, 60, 28, 0, 1, 4)
#define DEV10G_MAC_ADV_CHK_CFG_EXT_EOP_CHK_ENA BIT(24)
#define DEV10G_MAC_ADV_CHK_CFG_EXT_EOP_CHK_ENA_SET(x)\
@@ -1645,7 +2472,8 @@ enum sparx5_target {
FIELD_GET(DEV10G_MAC_ADV_CHK_CFG_INR_ERR_ENA, x)
/* DEV10G:MAC_CFG_STATUS:MAC_TX_MONITOR_STICKY */
-#define DEV10G_MAC_TX_MONITOR_STICKY(t) __REG(TARGET_DEV10G, t, 12, 0, 0, 1, 60, 48, 0, 1, 4)
+#define DEV10G_MAC_TX_MONITOR_STICKY(t) __REG(TARGET_DEV10G,\
+ t, 12, 0, 0, 1, 60, 48, 0, 1, 4)
#define DEV10G_MAC_TX_MONITOR_STICKY_LOCAL_ERR_STATE_STICKY BIT(4)
#define DEV10G_MAC_TX_MONITOR_STICKY_LOCAL_ERR_STATE_STICKY_SET(x)\
@@ -1678,7 +2506,8 @@ enum sparx5_target {
FIELD_GET(DEV10G_MAC_TX_MONITOR_STICKY_DIS_STATE_STICKY, x)
/* DEV10G:DEV_CFG_STATUS:DEV_RST_CTRL */
-#define DEV10G_DEV_RST_CTRL(t) __REG(TARGET_DEV10G, t, 12, 436, 0, 1, 52, 0, 0, 1, 4)
+#define DEV10G_DEV_RST_CTRL(t) __REG(TARGET_DEV10G,\
+ t, 12, 436, 0, 1, 52, 0, 0, 1, 4)
#define DEV10G_DEV_RST_CTRL_PARDET_MODE_ENA BIT(28)
#define DEV10G_DEV_RST_CTRL_PARDET_MODE_ENA_SET(x)\
@@ -1735,7 +2564,8 @@ enum sparx5_target {
FIELD_GET(DEV10G_DEV_RST_CTRL_MAC_RX_RST, x)
/* DEV10G:PCS25G_CFG_STATUS:PCS25G_CFG */
-#define DEV10G_PCS25G_CFG(t) __REG(TARGET_DEV10G, t, 12, 488, 0, 1, 32, 0, 0, 1, 4)
+#define DEV10G_PCS25G_CFG(t) __REG(TARGET_DEV10G,\
+ t, 12, 488, 0, 1, 32, 0, 0, 1, 4)
#define DEV10G_PCS25G_CFG_PCS25G_ENA BIT(0)
#define DEV10G_PCS25G_CFG_PCS25G_ENA_SET(x)\
@@ -1744,7 +2574,8 @@ enum sparx5_target {
FIELD_GET(DEV10G_PCS25G_CFG_PCS25G_ENA, x)
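/* Note on the comments below: the DEV25G_* (and later DEV5G_*) macros
 * keep "DEV10G:..." group names because those devices reuse the DEV10G
 * register layout; only the target and the instance count change
 * (t, 12 for DEV10G; t, 8 for DEV25G; t, 13 for DEV5G), while the
 * per-register offsets match wherever both targets define the same
 * register. This is inferred from the identical offsets, not stated
 * explicitly.
 */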
/* DEV10G:MAC_CFG_STATUS:MAC_ENA_CFG */
-#define DEV25G_MAC_ENA_CFG(t) __REG(TARGET_DEV25G, t, 8, 0, 0, 1, 60, 0, 0, 1, 4)
+#define DEV25G_MAC_ENA_CFG(t) __REG(TARGET_DEV25G,\
+ t, 8, 0, 0, 1, 60, 0, 0, 1, 4)
#define DEV25G_MAC_ENA_CFG_RX_ENA BIT(4)
#define DEV25G_MAC_ENA_CFG_RX_ENA_SET(x)\
@@ -1759,7 +2590,8 @@ enum sparx5_target {
FIELD_GET(DEV25G_MAC_ENA_CFG_TX_ENA, x)
/* DEV10G:MAC_CFG_STATUS:MAC_MAXLEN_CFG */
-#define DEV25G_MAC_MAXLEN_CFG(t) __REG(TARGET_DEV25G, t, 8, 0, 0, 1, 60, 8, 0, 1, 4)
+#define DEV25G_MAC_MAXLEN_CFG(t) __REG(TARGET_DEV25G,\
+ t, 8, 0, 0, 1, 60, 8, 0, 1, 4)
#define DEV25G_MAC_MAXLEN_CFG_MAX_LEN_TAG_CHK BIT(16)
#define DEV25G_MAC_MAXLEN_CFG_MAX_LEN_TAG_CHK_SET(x)\
@@ -1774,7 +2606,8 @@ enum sparx5_target {
FIELD_GET(DEV25G_MAC_MAXLEN_CFG_MAX_LEN, x)
/* DEV10G:MAC_CFG_STATUS:MAC_ADV_CHK_CFG */
-#define DEV25G_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV25G, t, 8, 0, 0, 1, 60, 28, 0, 1, 4)
+#define DEV25G_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV25G,\
+ t, 8, 0, 0, 1, 60, 28, 0, 1, 4)
#define DEV25G_MAC_ADV_CHK_CFG_EXT_EOP_CHK_ENA BIT(24)
#define DEV25G_MAC_ADV_CHK_CFG_EXT_EOP_CHK_ENA_SET(x)\
@@ -1819,7 +2652,8 @@ enum sparx5_target {
FIELD_GET(DEV25G_MAC_ADV_CHK_CFG_INR_ERR_ENA, x)
/* DEV10G:DEV_CFG_STATUS:DEV_RST_CTRL */
-#define DEV25G_DEV_RST_CTRL(t) __REG(TARGET_DEV25G, t, 8, 436, 0, 1, 52, 0, 0, 1, 4)
+#define DEV25G_DEV_RST_CTRL(t) __REG(TARGET_DEV25G,\
+ t, 8, 436, 0, 1, 52, 0, 0, 1, 4)
#define DEV25G_DEV_RST_CTRL_PARDET_MODE_ENA BIT(28)
#define DEV25G_DEV_RST_CTRL_PARDET_MODE_ENA_SET(x)\
@@ -1876,7 +2710,8 @@ enum sparx5_target {
FIELD_GET(DEV25G_DEV_RST_CTRL_MAC_RX_RST, x)
/* DEV10G:PCS25G_CFG_STATUS:PCS25G_CFG */
-#define DEV25G_PCS25G_CFG(t) __REG(TARGET_DEV25G, t, 8, 488, 0, 1, 32, 0, 0, 1, 4)
+#define DEV25G_PCS25G_CFG(t) __REG(TARGET_DEV25G,\
+ t, 8, 488, 0, 1, 32, 0, 0, 1, 4)
#define DEV25G_PCS25G_CFG_PCS25G_ENA BIT(0)
#define DEV25G_PCS25G_CFG_PCS25G_ENA_SET(x)\
@@ -1885,7 +2720,8 @@ enum sparx5_target {
FIELD_GET(DEV25G_PCS25G_CFG_PCS25G_ENA, x)
/* DEV10G:PCS25G_CFG_STATUS:PCS25G_SD_CFG */
-#define DEV25G_PCS25G_SD_CFG(t) __REG(TARGET_DEV25G, t, 8, 488, 0, 1, 32, 4, 0, 1, 4)
+#define DEV25G_PCS25G_SD_CFG(t) __REG(TARGET_DEV25G,\
+ t, 8, 488, 0, 1, 32, 4, 0, 1, 4)
#define DEV25G_PCS25G_SD_CFG_SD_SEL BIT(8)
#define DEV25G_PCS25G_SD_CFG_SD_SEL_SET(x)\
@@ -1906,7 +2742,8 @@ enum sparx5_target {
FIELD_GET(DEV25G_PCS25G_SD_CFG_SD_ENA, x)
/* DEV1G:DEV_CFG_STATUS:DEV_RST_CTRL */
-#define DEV2G5_DEV_RST_CTRL(t) __REG(TARGET_DEV2G5, t, 65, 0, 0, 1, 36, 0, 0, 1, 4)
+#define DEV2G5_DEV_RST_CTRL(t) __REG(TARGET_DEV2G5,\
+ t, 65, 0, 0, 1, 36, 0, 0, 1, 4)
#define DEV2G5_DEV_RST_CTRL_USXGMII_OSET_FILTER_DIS BIT(23)
#define DEV2G5_DEV_RST_CTRL_USXGMII_OSET_FILTER_DIS_SET(x)\
@@ -1957,7 +2794,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_DEV_RST_CTRL_MAC_RX_RST, x)
/* DEV1G:MAC_CFG_STATUS:MAC_ENA_CFG */
-#define DEV2G5_MAC_ENA_CFG(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 0, 0, 1, 4)
+#define DEV2G5_MAC_ENA_CFG(t) __REG(TARGET_DEV2G5,\
+ t, 65, 52, 0, 1, 36, 0, 0, 1, 4)
#define DEV2G5_MAC_ENA_CFG_RX_ENA BIT(4)
#define DEV2G5_MAC_ENA_CFG_RX_ENA_SET(x)\
@@ -1972,7 +2810,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_MAC_ENA_CFG_TX_ENA, x)
/* DEV1G:MAC_CFG_STATUS:MAC_MODE_CFG */
-#define DEV2G5_MAC_MODE_CFG(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 4, 0, 1, 4)
+#define DEV2G5_MAC_MODE_CFG(t) __REG(TARGET_DEV2G5,\
+ t, 65, 52, 0, 1, 36, 4, 0, 1, 4)
#define DEV2G5_MAC_MODE_CFG_FC_WORD_SYNC_ENA BIT(8)
#define DEV2G5_MAC_MODE_CFG_FC_WORD_SYNC_ENA_SET(x)\
@@ -1993,7 +2832,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_MAC_MODE_CFG_FDX_ENA, x)
/* DEV1G:MAC_CFG_STATUS:MAC_MAXLEN_CFG */
-#define DEV2G5_MAC_MAXLEN_CFG(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 8, 0, 1, 4)
+#define DEV2G5_MAC_MAXLEN_CFG(t) __REG(TARGET_DEV2G5,\
+ t, 65, 52, 0, 1, 36, 8, 0, 1, 4)
#define DEV2G5_MAC_MAXLEN_CFG_MAX_LEN GENMASK(15, 0)
#define DEV2G5_MAC_MAXLEN_CFG_MAX_LEN_SET(x)\
@@ -2002,7 +2842,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_MAC_MAXLEN_CFG_MAX_LEN, x)
/* DEV1G:MAC_CFG_STATUS:MAC_TAGS_CFG */
-#define DEV2G5_MAC_TAGS_CFG(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 12, 0, 1, 4)
+#define DEV2G5_MAC_TAGS_CFG(t) __REG(TARGET_DEV2G5,\
+ t, 65, 52, 0, 1, 36, 12, 0, 1, 4)
#define DEV2G5_MAC_TAGS_CFG_TAG_ID GENMASK(31, 16)
#define DEV2G5_MAC_TAGS_CFG_TAG_ID_SET(x)\
@@ -2029,7 +2870,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_MAC_TAGS_CFG_VLAN_AWR_ENA, x)
/* DEV1G:MAC_CFG_STATUS:MAC_TAGS_CFG2 */
-#define DEV2G5_MAC_TAGS_CFG2(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 16, 0, 1, 4)
+#define DEV2G5_MAC_TAGS_CFG2(t) __REG(TARGET_DEV2G5,\
+ t, 65, 52, 0, 1, 36, 16, 0, 1, 4)
#define DEV2G5_MAC_TAGS_CFG2_TAG_ID3 GENMASK(31, 16)
#define DEV2G5_MAC_TAGS_CFG2_TAG_ID3_SET(x)\
@@ -2044,7 +2886,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_MAC_TAGS_CFG2_TAG_ID2, x)
/* DEV1G:MAC_CFG_STATUS:MAC_ADV_CHK_CFG */
-#define DEV2G5_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 20, 0, 1, 4)
+#define DEV2G5_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV2G5,\
+ t, 65, 52, 0, 1, 36, 20, 0, 1, 4)
#define DEV2G5_MAC_ADV_CHK_CFG_LEN_DROP_ENA BIT(0)
#define DEV2G5_MAC_ADV_CHK_CFG_LEN_DROP_ENA_SET(x)\
@@ -2053,7 +2896,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_MAC_ADV_CHK_CFG_LEN_DROP_ENA, x)
/* DEV1G:MAC_CFG_STATUS:MAC_IFG_CFG */
-#define DEV2G5_MAC_IFG_CFG(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 24, 0, 1, 4)
+#define DEV2G5_MAC_IFG_CFG(t) __REG(TARGET_DEV2G5,\
+ t, 65, 52, 0, 1, 36, 24, 0, 1, 4)
#define DEV2G5_MAC_IFG_CFG_RESTORE_OLD_IPG_CHECK BIT(17)
#define DEV2G5_MAC_IFG_CFG_RESTORE_OLD_IPG_CHECK_SET(x)\
@@ -2080,7 +2924,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_MAC_IFG_CFG_RX_IFG1, x)
/* DEV1G:MAC_CFG_STATUS:MAC_HDX_CFG */
-#define DEV2G5_MAC_HDX_CFG(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 28, 0, 1, 4)
+#define DEV2G5_MAC_HDX_CFG(t) __REG(TARGET_DEV2G5,\
+ t, 65, 52, 0, 1, 36, 28, 0, 1, 4)
#define DEV2G5_MAC_HDX_CFG_BYPASS_COL_SYNC BIT(26)
#define DEV2G5_MAC_HDX_CFG_BYPASS_COL_SYNC_SET(x)\
@@ -2113,7 +2958,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_MAC_HDX_CFG_LATE_COL_POS, x)
/* DEV1G:PCS1G_CFG_STATUS:PCS1G_CFG */
-#define DEV2G5_PCS1G_CFG(t) __REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 0, 0, 1, 4)
+#define DEV2G5_PCS1G_CFG(t) __REG(TARGET_DEV2G5,\
+ t, 65, 88, 0, 1, 68, 0, 0, 1, 4)
#define DEV2G5_PCS1G_CFG_LINK_STATUS_TYPE BIT(4)
#define DEV2G5_PCS1G_CFG_LINK_STATUS_TYPE_SET(x)\
@@ -2134,7 +2980,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_PCS1G_CFG_PCS_ENA, x)
/* DEV1G:PCS1G_CFG_STATUS:PCS1G_MODE_CFG */
-#define DEV2G5_PCS1G_MODE_CFG(t) __REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 4, 0, 1, 4)
+#define DEV2G5_PCS1G_MODE_CFG(t) __REG(TARGET_DEV2G5,\
+ t, 65, 88, 0, 1, 68, 4, 0, 1, 4)
#define DEV2G5_PCS1G_MODE_CFG_UNIDIR_MODE_ENA BIT(4)
#define DEV2G5_PCS1G_MODE_CFG_UNIDIR_MODE_ENA_SET(x)\
@@ -2155,7 +3002,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_PCS1G_MODE_CFG_SGMII_MODE_ENA, x)
/* DEV1G:PCS1G_CFG_STATUS:PCS1G_SD_CFG */
-#define DEV2G5_PCS1G_SD_CFG(t) __REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 8, 0, 1, 4)
+#define DEV2G5_PCS1G_SD_CFG(t) __REG(TARGET_DEV2G5,\
+ t, 65, 88, 0, 1, 68, 8, 0, 1, 4)
#define DEV2G5_PCS1G_SD_CFG_SD_SEL BIT(8)
#define DEV2G5_PCS1G_SD_CFG_SD_SEL_SET(x)\
@@ -2176,7 +3024,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_PCS1G_SD_CFG_SD_ENA, x)
/* DEV1G:PCS1G_CFG_STATUS:PCS1G_ANEG_CFG */
-#define DEV2G5_PCS1G_ANEG_CFG(t) __REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 12, 0, 1, 4)
+#define DEV2G5_PCS1G_ANEG_CFG(t) __REG(TARGET_DEV2G5,\
+ t, 65, 88, 0, 1, 68, 12, 0, 1, 4)
#define DEV2G5_PCS1G_ANEG_CFG_ADV_ABILITY GENMASK(31, 16)
#define DEV2G5_PCS1G_ANEG_CFG_ADV_ABILITY_SET(x)\
@@ -2203,7 +3052,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_PCS1G_ANEG_CFG_ANEG_ENA, x)
/* DEV1G:PCS1G_CFG_STATUS:PCS1G_LB_CFG */
-#define DEV2G5_PCS1G_LB_CFG(t) __REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 20, 0, 1, 4)
+#define DEV2G5_PCS1G_LB_CFG(t) __REG(TARGET_DEV2G5,\
+ t, 65, 88, 0, 1, 68, 20, 0, 1, 4)
#define DEV2G5_PCS1G_LB_CFG_RA_ENA BIT(4)
#define DEV2G5_PCS1G_LB_CFG_RA_ENA_SET(x)\
@@ -2224,7 +3074,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_PCS1G_LB_CFG_TBI_HOST_LB_ENA, x)
/* DEV1G:PCS1G_CFG_STATUS:PCS1G_ANEG_STATUS */
-#define DEV2G5_PCS1G_ANEG_STATUS(t) __REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 32, 0, 1, 4)
+#define DEV2G5_PCS1G_ANEG_STATUS(t) __REG(TARGET_DEV2G5,\
+ t, 65, 88, 0, 1, 68, 32, 0, 1, 4)
#define DEV2G5_PCS1G_ANEG_STATUS_LP_ADV_ABILITY GENMASK(31, 16)
#define DEV2G5_PCS1G_ANEG_STATUS_LP_ADV_ABILITY_SET(x)\
@@ -2251,7 +3102,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_PCS1G_ANEG_STATUS_ANEG_COMPLETE, x)
/* DEV1G:PCS1G_CFG_STATUS:PCS1G_LINK_STATUS */
-#define DEV2G5_PCS1G_LINK_STATUS(t) __REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 40, 0, 1, 4)
+#define DEV2G5_PCS1G_LINK_STATUS(t) __REG(TARGET_DEV2G5,\
+ t, 65, 88, 0, 1, 68, 40, 0, 1, 4)
#define DEV2G5_PCS1G_LINK_STATUS_DELAY_VAR GENMASK(15, 12)
#define DEV2G5_PCS1G_LINK_STATUS_DELAY_VAR_SET(x)\
@@ -2278,7 +3130,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_PCS1G_LINK_STATUS_SYNC_STATUS, x)
/* DEV1G:PCS1G_CFG_STATUS:PCS1G_STICKY */
-#define DEV2G5_PCS1G_STICKY(t) __REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 48, 0, 1, 4)
+#define DEV2G5_PCS1G_STICKY(t) __REG(TARGET_DEV2G5,\
+ t, 65, 88, 0, 1, 68, 48, 0, 1, 4)
#define DEV2G5_PCS1G_STICKY_LINK_DOWN_STICKY BIT(4)
#define DEV2G5_PCS1G_STICKY_LINK_DOWN_STICKY_SET(x)\
@@ -2293,7 +3146,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_PCS1G_STICKY_OUT_OF_SYNC_STICKY, x)
/* DEV1G:PCS_FX100_CONFIGURATION:PCS_FX100_CFG */
-#define DEV2G5_PCS_FX100_CFG(t) __REG(TARGET_DEV2G5, t, 65, 164, 0, 1, 4, 0, 0, 1, 4)
+#define DEV2G5_PCS_FX100_CFG(t) __REG(TARGET_DEV2G5,\
+ t, 65, 164, 0, 1, 4, 0, 0, 1, 4)
#define DEV2G5_PCS_FX100_CFG_SD_SEL BIT(26)
#define DEV2G5_PCS_FX100_CFG_SD_SEL_SET(x)\
@@ -2374,7 +3228,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_PCS_FX100_CFG_PCS_ENA, x)
/* DEV1G:PCS_FX100_STATUS:PCS_FX100_STATUS */
-#define DEV2G5_PCS_FX100_STATUS(t) __REG(TARGET_DEV2G5, t, 65, 168, 0, 1, 4, 0, 0, 1, 4)
+#define DEV2G5_PCS_FX100_STATUS(t) __REG(TARGET_DEV2G5,\
+ t, 65, 168, 0, 1, 4, 0, 0, 1, 4)
#define DEV2G5_PCS_FX100_STATUS_EDGE_POS_PTP GENMASK(11, 8)
#define DEV2G5_PCS_FX100_STATUS_EDGE_POS_PTP_SET(x)\
@@ -2425,7 +3280,8 @@ enum sparx5_target {
FIELD_GET(DEV2G5_PCS_FX100_STATUS_SYNC_STATUS, x)
/* DEV10G:MAC_CFG_STATUS:MAC_ENA_CFG */
-#define DEV5G_MAC_ENA_CFG(t) __REG(TARGET_DEV5G, t, 13, 0, 0, 1, 60, 0, 0, 1, 4)
+#define DEV5G_MAC_ENA_CFG(t) __REG(TARGET_DEV5G,\
+ t, 13, 0, 0, 1, 60, 0, 0, 1, 4)
#define DEV5G_MAC_ENA_CFG_RX_ENA BIT(4)
#define DEV5G_MAC_ENA_CFG_RX_ENA_SET(x)\
@@ -2440,7 +3296,8 @@ enum sparx5_target {
FIELD_GET(DEV5G_MAC_ENA_CFG_TX_ENA, x)
/* DEV10G:MAC_CFG_STATUS:MAC_MAXLEN_CFG */
-#define DEV5G_MAC_MAXLEN_CFG(t) __REG(TARGET_DEV5G, t, 13, 0, 0, 1, 60, 8, 0, 1, 4)
+#define DEV5G_MAC_MAXLEN_CFG(t) __REG(TARGET_DEV5G,\
+ t, 13, 0, 0, 1, 60, 8, 0, 1, 4)
#define DEV5G_MAC_MAXLEN_CFG_MAX_LEN_TAG_CHK BIT(16)
#define DEV5G_MAC_MAXLEN_CFG_MAX_LEN_TAG_CHK_SET(x)\
@@ -2455,7 +3312,8 @@ enum sparx5_target {
FIELD_GET(DEV5G_MAC_MAXLEN_CFG_MAX_LEN, x)
/* DEV10G:MAC_CFG_STATUS:MAC_ADV_CHK_CFG */
-#define DEV5G_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV5G, t, 13, 0, 0, 1, 60, 28, 0, 1, 4)
+#define DEV5G_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV5G,\
+ t, 13, 0, 0, 1, 60, 28, 0, 1, 4)
#define DEV5G_MAC_ADV_CHK_CFG_EXT_EOP_CHK_ENA BIT(24)
#define DEV5G_MAC_ADV_CHK_CFG_EXT_EOP_CHK_ENA_SET(x)\
@@ -2500,142 +3358,188 @@ enum sparx5_target {
FIELD_GET(DEV5G_MAC_ADV_CHK_CFG_INR_ERR_ENA, x)
/* DEV10G:DEV_STATISTICS_32BIT:RX_SYMBOL_ERR_CNT */
-#define DEV5G_RX_SYMBOL_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 0, 0, 1, 4)
+#define DEV5G_RX_SYMBOL_ERR_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 0, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_PAUSE_CNT */
-#define DEV5G_RX_PAUSE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 4, 0, 1, 4)
+#define DEV5G_RX_PAUSE_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 4, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_UNSUP_OPCODE_CNT */
-#define DEV5G_RX_UNSUP_OPCODE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 8, 0, 1, 4)
+#define DEV5G_RX_UNSUP_OPCODE_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 8, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_UC_CNT */
-#define DEV5G_RX_UC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 12, 0, 1, 4)
+#define DEV5G_RX_UC_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 12, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_MC_CNT */
-#define DEV5G_RX_MC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 16, 0, 1, 4)
+#define DEV5G_RX_MC_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 16, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_BC_CNT */
-#define DEV5G_RX_BC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 20, 0, 1, 4)
+#define DEV5G_RX_BC_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 20, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_CRC_ERR_CNT */
-#define DEV5G_RX_CRC_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 24, 0, 1, 4)
+#define DEV5G_RX_CRC_ERR_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 24, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_UNDERSIZE_CNT */
-#define DEV5G_RX_UNDERSIZE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 28, 0, 1, 4)
+#define DEV5G_RX_UNDERSIZE_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 28, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_FRAGMENTS_CNT */
-#define DEV5G_RX_FRAGMENTS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 32, 0, 1, 4)
+#define DEV5G_RX_FRAGMENTS_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 32, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_IN_RANGE_LEN_ERR_CNT */
-#define DEV5G_RX_IN_RANGE_LEN_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 36, 0, 1, 4)
+#define DEV5G_RX_IN_RANGE_LEN_ERR_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 36, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_OUT_OF_RANGE_LEN_ERR_CNT */
-#define DEV5G_RX_OUT_OF_RANGE_LEN_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 40, 0, 1, 4)
+#define DEV5G_RX_OUT_OF_RANGE_LEN_ERR_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 40, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_OVERSIZE_CNT */
-#define DEV5G_RX_OVERSIZE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 44, 0, 1, 4)
+#define DEV5G_RX_OVERSIZE_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 44, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_JABBERS_CNT */
-#define DEV5G_RX_JABBERS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 48, 0, 1, 4)
+#define DEV5G_RX_JABBERS_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 48, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_SIZE64_CNT */
-#define DEV5G_RX_SIZE64_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 52, 0, 1, 4)
+#define DEV5G_RX_SIZE64_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 52, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_SIZE65TO127_CNT */
-#define DEV5G_RX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 56, 0, 1, 4)
+#define DEV5G_RX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 56, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_SIZE128TO255_CNT */
-#define DEV5G_RX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 60, 0, 1, 4)
+#define DEV5G_RX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 60, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_SIZE256TO511_CNT */
-#define DEV5G_RX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 64, 0, 1, 4)
+#define DEV5G_RX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 64, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_SIZE512TO1023_CNT */
-#define DEV5G_RX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 68, 0, 1, 4)
+#define DEV5G_RX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 68, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_SIZE1024TO1518_CNT */
-#define DEV5G_RX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 72, 0, 1, 4)
+#define DEV5G_RX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 72, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_SIZE1519TOMAX_CNT */
-#define DEV5G_RX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 76, 0, 1, 4)
+#define DEV5G_RX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 76, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_IPG_SHRINK_CNT */
-#define DEV5G_RX_IPG_SHRINK_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 80, 0, 1, 4)
+#define DEV5G_RX_IPG_SHRINK_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 80, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:TX_PAUSE_CNT */
-#define DEV5G_TX_PAUSE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 84, 0, 1, 4)
+#define DEV5G_TX_PAUSE_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 84, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:TX_UC_CNT */
-#define DEV5G_TX_UC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 88, 0, 1, 4)
+#define DEV5G_TX_UC_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 88, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:TX_MC_CNT */
-#define DEV5G_TX_MC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 92, 0, 1, 4)
+#define DEV5G_TX_MC_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 92, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:TX_BC_CNT */
-#define DEV5G_TX_BC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 96, 0, 1, 4)
+#define DEV5G_TX_BC_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 96, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:TX_SIZE64_CNT */
-#define DEV5G_TX_SIZE64_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 100, 0, 1, 4)
+#define DEV5G_TX_SIZE64_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 100, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:TX_SIZE65TO127_CNT */
-#define DEV5G_TX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 104, 0, 1, 4)
+#define DEV5G_TX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 104, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:TX_SIZE128TO255_CNT */
-#define DEV5G_TX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 108, 0, 1, 4)
+#define DEV5G_TX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 108, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:TX_SIZE256TO511_CNT */
-#define DEV5G_TX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 112, 0, 1, 4)
+#define DEV5G_TX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 112, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:TX_SIZE512TO1023_CNT */
-#define DEV5G_TX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 116, 0, 1, 4)
+#define DEV5G_TX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 116, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:TX_SIZE1024TO1518_CNT */
-#define DEV5G_TX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 120, 0, 1, 4)
+#define DEV5G_TX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 120, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:TX_SIZE1519TOMAX_CNT */
-#define DEV5G_TX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 124, 0, 1, 4)
+#define DEV5G_TX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 124, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_ALIGNMENT_LOST_CNT */
-#define DEV5G_RX_ALIGNMENT_LOST_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 128, 0, 1, 4)
+#define DEV5G_RX_ALIGNMENT_LOST_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 128, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_TAGGED_FRMS_CNT */
-#define DEV5G_RX_TAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 132, 0, 1, 4)
+#define DEV5G_RX_TAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 132, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_UNTAGGED_FRMS_CNT */
-#define DEV5G_RX_UNTAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 136, 0, 1, 4)
+#define DEV5G_RX_UNTAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 136, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:TX_TAGGED_FRMS_CNT */
-#define DEV5G_TX_TAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 140, 0, 1, 4)
+#define DEV5G_TX_TAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 140, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:TX_UNTAGGED_FRMS_CNT */
-#define DEV5G_TX_UNTAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 144, 0, 1, 4)
+#define DEV5G_TX_UNTAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 144, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SYMBOL_ERR_CNT */
-#define DEV5G_PMAC_RX_SYMBOL_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 148, 0, 1, 4)
+#define DEV5G_PMAC_RX_SYMBOL_ERR_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 148, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_PAUSE_CNT */
-#define DEV5G_PMAC_RX_PAUSE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 152, 0, 1, 4)
+#define DEV5G_PMAC_RX_PAUSE_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 152, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_UNSUP_OPCODE_CNT */
-#define DEV5G_PMAC_RX_UNSUP_OPCODE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 156, 0, 1, 4)
+#define DEV5G_PMAC_RX_UNSUP_OPCODE_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 156, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_UC_CNT */
-#define DEV5G_PMAC_RX_UC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 160, 0, 1, 4)
+#define DEV5G_PMAC_RX_UC_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 160, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_MC_CNT */
-#define DEV5G_PMAC_RX_MC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 164, 0, 1, 4)
+#define DEV5G_PMAC_RX_MC_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 164, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_BC_CNT */
-#define DEV5G_PMAC_RX_BC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 168, 0, 1, 4)
+#define DEV5G_PMAC_RX_BC_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 168, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_CRC_ERR_CNT */
-#define DEV5G_PMAC_RX_CRC_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 172, 0, 1, 4)
+#define DEV5G_PMAC_RX_CRC_ERR_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 172, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_UNDERSIZE_CNT */
-#define DEV5G_PMAC_RX_UNDERSIZE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 176, 0, 1, 4)
+#define DEV5G_PMAC_RX_UNDERSIZE_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 176, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_FRAGMENTS_CNT */
-#define DEV5G_PMAC_RX_FRAGMENTS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 180, 0, 1, 4)
+#define DEV5G_PMAC_RX_FRAGMENTS_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 180, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_IN_RANGE_LEN_ERR_CNT */
#define DEV5G_PMAC_RX_IN_RANGE_LEN_ERR_CNT(t) __REG(TARGET_DEV5G,\
@@ -2646,100 +3550,132 @@ enum sparx5_target {
t, 13, 60, 0, 1, 312, 188, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_OVERSIZE_CNT */
-#define DEV5G_PMAC_RX_OVERSIZE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 192, 0, 1, 4)
+#define DEV5G_PMAC_RX_OVERSIZE_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 192, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_JABBERS_CNT */
-#define DEV5G_PMAC_RX_JABBERS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 196, 0, 1, 4)
+#define DEV5G_PMAC_RX_JABBERS_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 196, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SIZE64_CNT */
-#define DEV5G_PMAC_RX_SIZE64_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 200, 0, 1, 4)
+#define DEV5G_PMAC_RX_SIZE64_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 200, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SIZE65TO127_CNT */
-#define DEV5G_PMAC_RX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 204, 0, 1, 4)
+#define DEV5G_PMAC_RX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 204, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SIZE128TO255_CNT */
-#define DEV5G_PMAC_RX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 208, 0, 1, 4)
+#define DEV5G_PMAC_RX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 208, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SIZE256TO511_CNT */
-#define DEV5G_PMAC_RX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 212, 0, 1, 4)
+#define DEV5G_PMAC_RX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 212, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SIZE512TO1023_CNT */
-#define DEV5G_PMAC_RX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 216, 0, 1, 4)
+#define DEV5G_PMAC_RX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 216, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SIZE1024TO1518_CNT */
-#define DEV5G_PMAC_RX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 220, 0, 1, 4)
+#define DEV5G_PMAC_RX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 220, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SIZE1519TOMAX_CNT */
-#define DEV5G_PMAC_RX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 224, 0, 1, 4)
+#define DEV5G_PMAC_RX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 224, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_PAUSE_CNT */
-#define DEV5G_PMAC_TX_PAUSE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 228, 0, 1, 4)
+#define DEV5G_PMAC_TX_PAUSE_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 228, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_UC_CNT */
-#define DEV5G_PMAC_TX_UC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 232, 0, 1, 4)
+#define DEV5G_PMAC_TX_UC_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 232, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_MC_CNT */
-#define DEV5G_PMAC_TX_MC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 236, 0, 1, 4)
+#define DEV5G_PMAC_TX_MC_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 236, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_BC_CNT */
-#define DEV5G_PMAC_TX_BC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 240, 0, 1, 4)
+#define DEV5G_PMAC_TX_BC_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 240, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_SIZE64_CNT */
-#define DEV5G_PMAC_TX_SIZE64_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 244, 0, 1, 4)
+#define DEV5G_PMAC_TX_SIZE64_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 244, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_SIZE65TO127_CNT */
-#define DEV5G_PMAC_TX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 248, 0, 1, 4)
+#define DEV5G_PMAC_TX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 248, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_SIZE128TO255_CNT */
-#define DEV5G_PMAC_TX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 252, 0, 1, 4)
+#define DEV5G_PMAC_TX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 252, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_SIZE256TO511_CNT */
-#define DEV5G_PMAC_TX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 256, 0, 1, 4)
+#define DEV5G_PMAC_TX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 256, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_SIZE512TO1023_CNT */
-#define DEV5G_PMAC_TX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 260, 0, 1, 4)
+#define DEV5G_PMAC_TX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 260, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_SIZE1024TO1518_CNT */
-#define DEV5G_PMAC_TX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 264, 0, 1, 4)
+#define DEV5G_PMAC_TX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 264, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_SIZE1519TOMAX_CNT */
-#define DEV5G_PMAC_TX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 268, 0, 1, 4)
+#define DEV5G_PMAC_TX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 268, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_ALIGNMENT_LOST_CNT */
-#define DEV5G_PMAC_RX_ALIGNMENT_LOST_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 272, 0, 1, 4)
+#define DEV5G_PMAC_RX_ALIGNMENT_LOST_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 272, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:MM_RX_ASSEMBLY_ERR_CNT */
-#define DEV5G_MM_RX_ASSEMBLY_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 276, 0, 1, 4)
+#define DEV5G_MM_RX_ASSEMBLY_ERR_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 276, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:MM_RX_SMD_ERR_CNT */
-#define DEV5G_MM_RX_SMD_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 280, 0, 1, 4)
+#define DEV5G_MM_RX_SMD_ERR_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 280, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:MM_RX_ASSEMBLY_OK_CNT */
-#define DEV5G_MM_RX_ASSEMBLY_OK_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 284, 0, 1, 4)
+#define DEV5G_MM_RX_ASSEMBLY_OK_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 284, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:MM_RX_MERGE_FRAG_CNT */
-#define DEV5G_MM_RX_MERGE_FRAG_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 288, 0, 1, 4)
+#define DEV5G_MM_RX_MERGE_FRAG_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 288, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:MM_TX_PFRAGMENT_CNT */
-#define DEV5G_MM_TX_PFRAGMENT_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 292, 0, 1, 4)
+#define DEV5G_MM_TX_PFRAGMENT_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 292, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_HIH_CKSM_ERR_CNT */
-#define DEV5G_RX_HIH_CKSM_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 296, 0, 1, 4)
+#define DEV5G_RX_HIH_CKSM_ERR_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 296, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:RX_XGMII_PROT_ERR_CNT */
-#define DEV5G_RX_XGMII_PROT_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 300, 0, 1, 4)
+#define DEV5G_RX_XGMII_PROT_ERR_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 300, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_HIH_CKSM_ERR_CNT */
-#define DEV5G_PMAC_RX_HIH_CKSM_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 304, 0, 1, 4)
+#define DEV5G_PMAC_RX_HIH_CKSM_ERR_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 304, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_XGMII_PROT_ERR_CNT */
-#define DEV5G_PMAC_RX_XGMII_PROT_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 308, 0, 1, 4)
+#define DEV5G_PMAC_RX_XGMII_PROT_ERR_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 60, 0, 1, 312, 308, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_40BIT:RX_IN_BYTES_CNT */
-#define DEV5G_RX_IN_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 0, 0, 1, 4)
+#define DEV5G_RX_IN_BYTES_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 0, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_40BIT:RX_IN_BYTES_MSB_CNT */
-#define DEV5G_RX_IN_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 4, 0, 1, 4)
+#define DEV5G_RX_IN_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 4, 0, 1, 4)
#define DEV5G_RX_IN_BYTES_MSB_CNT_RX_IN_BYTES_MSB_CNT GENMASK(7, 0)
#define DEV5G_RX_IN_BYTES_MSB_CNT_RX_IN_BYTES_MSB_CNT_SET(x)\
@@ -2748,10 +3684,12 @@ enum sparx5_target {
FIELD_GET(DEV5G_RX_IN_BYTES_MSB_CNT_RX_IN_BYTES_MSB_CNT, x)
/* DEV10G:DEV_STATISTICS_40BIT:RX_OK_BYTES_CNT */
-#define DEV5G_RX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 8, 0, 1, 4)
+#define DEV5G_RX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 8, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_40BIT:RX_OK_BYTES_MSB_CNT */
-#define DEV5G_RX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 12, 0, 1, 4)
+#define DEV5G_RX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 12, 0, 1, 4)
#define DEV5G_RX_OK_BYTES_MSB_CNT_RX_OK_BYTES_MSB_CNT GENMASK(7, 0)
#define DEV5G_RX_OK_BYTES_MSB_CNT_RX_OK_BYTES_MSB_CNT_SET(x)\
@@ -2760,10 +3698,12 @@ enum sparx5_target {
FIELD_GET(DEV5G_RX_OK_BYTES_MSB_CNT_RX_OK_BYTES_MSB_CNT, x)
/* DEV10G:DEV_STATISTICS_40BIT:RX_BAD_BYTES_CNT */
-#define DEV5G_RX_BAD_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 16, 0, 1, 4)
+#define DEV5G_RX_BAD_BYTES_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 16, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_40BIT:RX_BAD_BYTES_MSB_CNT */
-#define DEV5G_RX_BAD_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 20, 0, 1, 4)
+#define DEV5G_RX_BAD_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 20, 0, 1, 4)
#define DEV5G_RX_BAD_BYTES_MSB_CNT_RX_BAD_BYTES_MSB_CNT GENMASK(7, 0)
#define DEV5G_RX_BAD_BYTES_MSB_CNT_RX_BAD_BYTES_MSB_CNT_SET(x)\
@@ -2772,10 +3712,12 @@ enum sparx5_target {
FIELD_GET(DEV5G_RX_BAD_BYTES_MSB_CNT_RX_BAD_BYTES_MSB_CNT, x)
/* DEV10G:DEV_STATISTICS_40BIT:TX_OUT_BYTES_CNT */
-#define DEV5G_TX_OUT_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 24, 0, 1, 4)
+#define DEV5G_TX_OUT_BYTES_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 24, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_40BIT:TX_OUT_BYTES_MSB_CNT */
-#define DEV5G_TX_OUT_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 28, 0, 1, 4)
+#define DEV5G_TX_OUT_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 28, 0, 1, 4)
#define DEV5G_TX_OUT_BYTES_MSB_CNT_TX_OUT_BYTES_MSB_CNT GENMASK(7, 0)
#define DEV5G_TX_OUT_BYTES_MSB_CNT_TX_OUT_BYTES_MSB_CNT_SET(x)\
@@ -2784,10 +3726,12 @@ enum sparx5_target {
FIELD_GET(DEV5G_TX_OUT_BYTES_MSB_CNT_TX_OUT_BYTES_MSB_CNT, x)
/* DEV10G:DEV_STATISTICS_40BIT:TX_OK_BYTES_CNT */
-#define DEV5G_TX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 32, 0, 1, 4)
+#define DEV5G_TX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 32, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_40BIT:TX_OK_BYTES_MSB_CNT */
-#define DEV5G_TX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 36, 0, 1, 4)
+#define DEV5G_TX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 36, 0, 1, 4)
#define DEV5G_TX_OK_BYTES_MSB_CNT_TX_OK_BYTES_MSB_CNT GENMASK(7, 0)
#define DEV5G_TX_OK_BYTES_MSB_CNT_TX_OK_BYTES_MSB_CNT_SET(x)\
@@ -2796,10 +3740,12 @@ enum sparx5_target {
FIELD_GET(DEV5G_TX_OK_BYTES_MSB_CNT_TX_OK_BYTES_MSB_CNT, x)
/* DEV10G:DEV_STATISTICS_40BIT:PMAC_RX_OK_BYTES_CNT */
-#define DEV5G_PMAC_RX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 40, 0, 1, 4)
+#define DEV5G_PMAC_RX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 40, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_40BIT:PMAC_RX_OK_BYTES_MSB_CNT */
-#define DEV5G_PMAC_RX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 44, 0, 1, 4)
+#define DEV5G_PMAC_RX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 44, 0, 1, 4)
#define DEV5G_PMAC_RX_OK_BYTES_MSB_CNT_PMAC_RX_OK_BYTES_MSB_CNT GENMASK(7, 0)
#define DEV5G_PMAC_RX_OK_BYTES_MSB_CNT_PMAC_RX_OK_BYTES_MSB_CNT_SET(x)\
@@ -2808,10 +3754,12 @@ enum sparx5_target {
FIELD_GET(DEV5G_PMAC_RX_OK_BYTES_MSB_CNT_PMAC_RX_OK_BYTES_MSB_CNT, x)
/* DEV10G:DEV_STATISTICS_40BIT:PMAC_RX_BAD_BYTES_CNT */
-#define DEV5G_PMAC_RX_BAD_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 48, 0, 1, 4)
+#define DEV5G_PMAC_RX_BAD_BYTES_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 48, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_40BIT:PMAC_RX_BAD_BYTES_MSB_CNT */
-#define DEV5G_PMAC_RX_BAD_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 52, 0, 1, 4)
+#define DEV5G_PMAC_RX_BAD_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 52, 0, 1, 4)
#define DEV5G_PMAC_RX_BAD_BYTES_MSB_CNT_PMAC_RX_BAD_BYTES_MSB_CNT GENMASK(7, 0)
#define DEV5G_PMAC_RX_BAD_BYTES_MSB_CNT_PMAC_RX_BAD_BYTES_MSB_CNT_SET(x)\
@@ -2820,10 +3768,12 @@ enum sparx5_target {
FIELD_GET(DEV5G_PMAC_RX_BAD_BYTES_MSB_CNT_PMAC_RX_BAD_BYTES_MSB_CNT, x)
/* DEV10G:DEV_STATISTICS_40BIT:PMAC_TX_OK_BYTES_CNT */
-#define DEV5G_PMAC_TX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 56, 0, 1, 4)
+#define DEV5G_PMAC_TX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 56, 0, 1, 4)
/* DEV10G:DEV_STATISTICS_40BIT:PMAC_TX_OK_BYTES_MSB_CNT */
-#define DEV5G_PMAC_TX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 60, 0, 1, 4)
+#define DEV5G_PMAC_TX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\
+ t, 13, 372, 0, 1, 64, 60, 0, 1, 4)
#define DEV5G_PMAC_TX_OK_BYTES_MSB_CNT_PMAC_TX_OK_BYTES_MSB_CNT GENMASK(7, 0)
#define DEV5G_PMAC_TX_OK_BYTES_MSB_CNT_PMAC_TX_OK_BYTES_MSB_CNT_SET(x)\
@@ -2832,7 +3782,8 @@ enum sparx5_target {
FIELD_GET(DEV5G_PMAC_TX_OK_BYTES_MSB_CNT_PMAC_TX_OK_BYTES_MSB_CNT, x)
/* DEV10G:DEV_CFG_STATUS:DEV_RST_CTRL */
-#define DEV5G_DEV_RST_CTRL(t) __REG(TARGET_DEV5G, t, 13, 436, 0, 1, 52, 0, 0, 1, 4)
+#define DEV5G_DEV_RST_CTRL(t) __REG(TARGET_DEV5G,\
+ t, 13, 436, 0, 1, 52, 0, 0, 1, 4)
#define DEV5G_DEV_RST_CTRL_PARDET_MODE_ENA BIT(28)
#define DEV5G_DEV_RST_CTRL_PARDET_MODE_ENA_SET(x)\
@@ -2889,7 +3840,8 @@ enum sparx5_target {
FIELD_GET(DEV5G_DEV_RST_CTRL_MAC_RX_RST, x)
/* DSM:RAM_CTRL:RAM_INIT */
-#define DSM_RAM_INIT __REG(TARGET_DSM, 0, 1, 0, 0, 1, 4, 0, 0, 1, 4)
+#define DSM_RAM_INIT __REG(TARGET_DSM,\
+ 0, 1, 0, 0, 1, 4, 0, 0, 1, 4)
#define DSM_RAM_INIT_RAM_INIT BIT(1)
#define DSM_RAM_INIT_RAM_INIT_SET(x)\
@@ -2904,7 +3856,8 @@ enum sparx5_target {
FIELD_GET(DSM_RAM_INIT_RAM_CFG_HOOK, x)
/* DSM:CFG:BUF_CFG */
-#define DSM_BUF_CFG(r) __REG(TARGET_DSM, 0, 1, 20, 0, 1, 3528, 0, r, 67, 4)
+#define DSM_BUF_CFG(r) __REG(TARGET_DSM,\
+ 0, 1, 20, 0, 1, 3528, 0, r, 67, 4)
#define DSM_BUF_CFG_CSC_STAT_DIS BIT(13)
#define DSM_BUF_CFG_CSC_STAT_DIS_SET(x)\
@@ -2931,7 +3884,8 @@ enum sparx5_target {
FIELD_GET(DSM_BUF_CFG_UNDERFLOW_WATCHDOG_TIMEOUT, x)
/* DSM:CFG:DEV_TX_STOP_WM_CFG */
-#define DSM_DEV_TX_STOP_WM_CFG(r) __REG(TARGET_DSM, 0, 1, 20, 0, 1, 3528, 1360, r, 67, 4)
+#define DSM_DEV_TX_STOP_WM_CFG(r) __REG(TARGET_DSM,\
+ 0, 1, 20, 0, 1, 3528, 1360, r, 67, 4)
#define DSM_DEV_TX_STOP_WM_CFG_FAST_STARTUP_ENA BIT(9)
#define DSM_DEV_TX_STOP_WM_CFG_FAST_STARTUP_ENA_SET(x)\
@@ -2958,7 +3912,8 @@ enum sparx5_target {
FIELD_GET(DSM_DEV_TX_STOP_WM_CFG_DEV_TX_CNT_CLR, x)
/* DSM:CFG:RX_PAUSE_CFG */
-#define DSM_RX_PAUSE_CFG(r) __REG(TARGET_DSM, 0, 1, 20, 0, 1, 3528, 1628, r, 67, 4)
+#define DSM_RX_PAUSE_CFG(r) __REG(TARGET_DSM,\
+ 0, 1, 20, 0, 1, 3528, 1628, r, 67, 4)
#define DSM_RX_PAUSE_CFG_RX_PAUSE_EN BIT(1)
#define DSM_RX_PAUSE_CFG_RX_PAUSE_EN_SET(x)\
@@ -2973,7 +3928,8 @@ enum sparx5_target {
FIELD_GET(DSM_RX_PAUSE_CFG_FC_OBEY_LOCAL, x)
/* DSM:CFG:MAC_CFG */
-#define DSM_MAC_CFG(r) __REG(TARGET_DSM, 0, 1, 20, 0, 1, 3528, 2432, r, 67, 4)
+#define DSM_MAC_CFG(r) __REG(TARGET_DSM,\
+ 0, 1, 20, 0, 1, 3528, 2432, r, 67, 4)
#define DSM_MAC_CFG_TX_PAUSE_VAL GENMASK(31, 16)
#define DSM_MAC_CFG_TX_PAUSE_VAL_SET(x)\
@@ -3000,7 +3956,8 @@ enum sparx5_target {
FIELD_GET(DSM_MAC_CFG_TX_PAUSE_XON_XOFF, x)
/* DSM:CFG:MAC_ADDR_BASE_HIGH_CFG */
-#define DSM_MAC_ADDR_BASE_HIGH_CFG(r) __REG(TARGET_DSM, 0, 1, 20, 0, 1, 3528, 2700, r, 65, 4)
+#define DSM_MAC_ADDR_BASE_HIGH_CFG(r) __REG(TARGET_DSM,\
+ 0, 1, 20, 0, 1, 3528, 2700, r, 65, 4)
#define DSM_MAC_ADDR_BASE_HIGH_CFG_MAC_ADDR_HIGH GENMASK(23, 0)
#define DSM_MAC_ADDR_BASE_HIGH_CFG_MAC_ADDR_HIGH_SET(x)\
@@ -3009,7 +3966,8 @@ enum sparx5_target {
FIELD_GET(DSM_MAC_ADDR_BASE_HIGH_CFG_MAC_ADDR_HIGH, x)
/* DSM:CFG:MAC_ADDR_BASE_LOW_CFG */
-#define DSM_MAC_ADDR_BASE_LOW_CFG(r) __REG(TARGET_DSM, 0, 1, 20, 0, 1, 3528, 2960, r, 65, 4)
+#define DSM_MAC_ADDR_BASE_LOW_CFG(r) __REG(TARGET_DSM,\
+ 0, 1, 20, 0, 1, 3528, 2960, r, 65, 4)
#define DSM_MAC_ADDR_BASE_LOW_CFG_MAC_ADDR_LOW GENMASK(23, 0)
#define DSM_MAC_ADDR_BASE_LOW_CFG_MAC_ADDR_LOW_SET(x)\
@@ -3018,7 +3976,8 @@ enum sparx5_target {
FIELD_GET(DSM_MAC_ADDR_BASE_LOW_CFG_MAC_ADDR_LOW, x)
/* DSM:CFG:TAXI_CAL_CFG */
-#define DSM_TAXI_CAL_CFG(r) __REG(TARGET_DSM, 0, 1, 20, 0, 1, 3528, 3224, r, 9, 4)
+#define DSM_TAXI_CAL_CFG(r) __REG(TARGET_DSM,\
+ 0, 1, 20, 0, 1, 3528, 3224, r, 9, 4)
#define DSM_TAXI_CAL_CFG_CAL_IDX GENMASK(20, 15)
#define DSM_TAXI_CAL_CFG_CAL_IDX_SET(x)\
@@ -3050,8 +4009,41 @@ enum sparx5_target {
#define DSM_TAXI_CAL_CFG_CAL_PGM_ENA_GET(x)\
FIELD_GET(DSM_TAXI_CAL_CFG_CAL_PGM_ENA, x)
+/* EACL:ES2_KEY_SELECT_PROFILE:VCAP_ES2_KEY_SEL */
+#define EACL_VCAP_ES2_KEY_SEL(g, r) __REG(TARGET_EACL,\
+ 0, 1, 149504, g, 138, 8, 0, r, 2, 4)
+
+#define EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL GENMASK(7, 5)
+#define EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL_SET(x)\
+ FIELD_PREP(EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL, x)
+#define EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL_GET(x)\
+ FIELD_GET(EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL, x)
+
+#define EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL GENMASK(4, 2)
+#define EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL_SET(x)\
+ FIELD_PREP(EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL, x)
+#define EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL_GET(x)\
+ FIELD_GET(EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL, x)
+
+#define EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL BIT(1)
+#define EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL_SET(x)\
+ FIELD_PREP(EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL, x)
+#define EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL_GET(x)\
+ FIELD_GET(EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL, x)
+
+#define EACL_VCAP_ES2_KEY_SEL_KEY_ENA BIT(0)
+#define EACL_VCAP_ES2_KEY_SEL_KEY_ENA_SET(x)\
+ FIELD_PREP(EACL_VCAP_ES2_KEY_SEL_KEY_ENA, x)
+#define EACL_VCAP_ES2_KEY_SEL_KEY_ENA_GET(x)\
+ FIELD_GET(EACL_VCAP_ES2_KEY_SEL_KEY_ENA, x)
+
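
/*
 * Editorial sketch, not part of the patch above: the new VCAP_ES2_KEY_SEL
 * helpers compose in the driver's usual FIELD_PREP/FIELD_GET style. A
 * minimal, assumed use -- enabling ES2 lookup `r` on port `g` with an IPv4
 * key -- could look roughly like this, using the spx5_rmw() accessor and
 * struct sparx5 from the driver's sparx5_main.h:
 */
static void es2_key_sel_sketch(struct sparx5 *sparx5, u32 g, u32 r, u32 ip4_sel)
{
	/* read-modify-write only the IP4 key-select and lookup-enable fields */
	spx5_rmw(EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL_SET(ip4_sel) |
		 EACL_VCAP_ES2_KEY_SEL_KEY_ENA_SET(1),
		 EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL |
		 EACL_VCAP_ES2_KEY_SEL_KEY_ENA,
		 sparx5, EACL_VCAP_ES2_KEY_SEL(g, r));
}
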
+/* EACL:CNT_TBL:ES2_CNT */
+#define EACL_ES2_CNT(g) __REG(TARGET_EACL,\
+ 0, 1, 122880, g, 2048, 4, 0, 0, 1, 4)
+
/* EACL:POL_CFG:POL_EACL_CFG */
-#define EACL_POL_EACL_CFG __REG(TARGET_EACL, 0, 1, 150608, 0, 1, 780, 768, 0, 1, 4)
+#define EACL_POL_EACL_CFG __REG(TARGET_EACL,\
+ 0, 1, 150608, 0, 1, 780, 768, 0, 1, 4)
#define EACL_POL_EACL_CFG_EACL_CNT_MARKED_AS_DROPPED BIT(5)
#define EACL_POL_EACL_CFG_EACL_CNT_MARKED_AS_DROPPED_SET(x)\
@@ -3089,8 +4081,61 @@ enum sparx5_target {
#define EACL_POL_EACL_CFG_EACL_FORCE_INIT_GET(x)\
FIELD_GET(EACL_POL_EACL_CFG_EACL_FORCE_INIT, x)
+/* EACL:ES2_STICKY:SEC_LOOKUP_STICKY */
+#define EACL_SEC_LOOKUP_STICKY(r) __REG(TARGET_EACL,\
+ 0, 1, 118696, 0, 1, 8, 0, r, 2, 4)
+
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY BIT(7)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY_SET(x)\
+ FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY, x)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY_GET(x)\
+ FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY, x)
+
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY BIT(6)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY_SET(x)\
+ FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY, x)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY_GET(x)\
+ FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY, x)
+
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY BIT(5)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY_SET(x)\
+ FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY, x)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY_GET(x)\
+ FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY, x)
+
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY BIT(4)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY_SET(x)\
+ FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY, x)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY_GET(x)\
+ FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY, x)
+
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY BIT(3)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY_SET(x)\
+ FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY, x)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY_GET(x)\
+ FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY, x)
+
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY BIT(2)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY_SET(x)\
+ FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY, x)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY_GET(x)\
+ FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY, x)
+
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY BIT(1)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY_SET(x)\
+ FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY, x)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY_GET(x)\
+ FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY, x)
+
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY BIT(0)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY_SET(x)\
+ FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY, x)
+#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY_GET(x)\
+ FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY, x)
+
/* EACL:RAM_CTRL:RAM_INIT */
-#define EACL_RAM_INIT __REG(TARGET_EACL, 0, 1, 118736, 0, 1, 4, 0, 0, 1, 4)
+#define EACL_RAM_INIT __REG(TARGET_EACL,\
+ 0, 1, 118736, 0, 1, 4, 0, 0, 1, 4)
#define EACL_RAM_INIT_RAM_INIT BIT(1)
#define EACL_RAM_INIT_RAM_INIT_SET(x)\
@@ -3105,7 +4150,8 @@ enum sparx5_target {
FIELD_GET(EACL_RAM_INIT_RAM_CFG_HOOK, x)
/* FDMA:FDMA:FDMA_CH_ACTIVATE */
-#define FDMA_CH_ACTIVATE __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 0, 0, 1, 4)
+#define FDMA_CH_ACTIVATE __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 0, 0, 1, 4)
#define FDMA_CH_ACTIVATE_CH_ACTIVATE GENMASK(7, 0)
#define FDMA_CH_ACTIVATE_CH_ACTIVATE_SET(x)\
@@ -3114,7 +4160,8 @@ enum sparx5_target {
FIELD_GET(FDMA_CH_ACTIVATE_CH_ACTIVATE, x)
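
/*
 * Illustrative sketch (not in the patch): CH_ACTIVATE holds one trigger bit
 * per FDMA channel, so starting channel `ch` is a single write; spx5_wr()
 * is assumed to be the driver's write accessor from sparx5_main.h.
 */
static void fdma_activate_sketch(struct sparx5 *sparx5, int ch)
{
	/* set only this channel's activate bit in the 8-bit field */
	spx5_wr(FDMA_CH_ACTIVATE_CH_ACTIVATE_SET(BIT(ch)),
		sparx5, FDMA_CH_ACTIVATE);
}
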
/* FDMA:FDMA:FDMA_CH_RELOAD */
-#define FDMA_CH_RELOAD __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 4, 0, 1, 4)
+#define FDMA_CH_RELOAD __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 4, 0, 1, 4)
#define FDMA_CH_RELOAD_CH_RELOAD GENMASK(7, 0)
#define FDMA_CH_RELOAD_CH_RELOAD_SET(x)\
@@ -3123,7 +4170,8 @@ enum sparx5_target {
FIELD_GET(FDMA_CH_RELOAD_CH_RELOAD, x)
/* FDMA:FDMA:FDMA_CH_DISABLE */
-#define FDMA_CH_DISABLE __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 8, 0, 1, 4)
+#define FDMA_CH_DISABLE __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 8, 0, 1, 4)
#define FDMA_CH_DISABLE_CH_DISABLE GENMASK(7, 0)
#define FDMA_CH_DISABLE_CH_DISABLE_SET(x)\
@@ -3132,19 +4180,24 @@ enum sparx5_target {
FIELD_GET(FDMA_CH_DISABLE_CH_DISABLE, x)
/* FDMA:FDMA:FDMA_DCB_LLP */
-#define FDMA_DCB_LLP(r) __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 52, r, 8, 4)
+#define FDMA_DCB_LLP(r) __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 52, r, 8, 4)
/* FDMA:FDMA:FDMA_DCB_LLP1 */
-#define FDMA_DCB_LLP1(r) __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 84, r, 8, 4)
+#define FDMA_DCB_LLP1(r) __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 84, r, 8, 4)
/* FDMA:FDMA:FDMA_DCB_LLP_PREV */
-#define FDMA_DCB_LLP_PREV(r) __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 116, r, 8, 4)
+#define FDMA_DCB_LLP_PREV(r) __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 116, r, 8, 4)
/* FDMA:FDMA:FDMA_DCB_LLP_PREV1 */
-#define FDMA_DCB_LLP_PREV1(r) __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 148, r, 8, 4)
+#define FDMA_DCB_LLP_PREV1(r) __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 148, r, 8, 4)
/* FDMA:FDMA:FDMA_CH_CFG */
-#define FDMA_CH_CFG(r) __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 224, r, 8, 4)
+#define FDMA_CH_CFG(r) __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 224, r, 8, 4)
#define FDMA_CH_CFG_CH_XTR_STATUS_MODE BIT(7)
#define FDMA_CH_CFG_CH_XTR_STATUS_MODE_SET(x)\
@@ -3177,7 +4230,8 @@ enum sparx5_target {
FIELD_GET(FDMA_CH_CFG_CH_MEM, x)
/* FDMA:FDMA:FDMA_CH_TRANSLATE */
-#define FDMA_CH_TRANSLATE(r) __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 256, r, 8, 4)
+#define FDMA_CH_TRANSLATE(r) __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 256, r, 8, 4)
#define FDMA_CH_TRANSLATE_OFFSET GENMASK(15, 0)
#define FDMA_CH_TRANSLATE_OFFSET_SET(x)\
@@ -3186,7 +4240,8 @@ enum sparx5_target {
FIELD_GET(FDMA_CH_TRANSLATE_OFFSET, x)
/* FDMA:FDMA:FDMA_XTR_CFG */
-#define FDMA_XTR_CFG __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 364, 0, 1, 4)
+#define FDMA_XTR_CFG __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 364, 0, 1, 4)
#define FDMA_XTR_CFG_XTR_FIFO_WM GENMASK(15, 11)
#define FDMA_XTR_CFG_XTR_FIFO_WM_SET(x)\
@@ -3201,7 +4256,8 @@ enum sparx5_target {
FIELD_GET(FDMA_XTR_CFG_XTR_ARB_SAT, x)
/* FDMA:FDMA:FDMA_PORT_CTRL */
-#define FDMA_PORT_CTRL(r) __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 376, r, 2, 4)
+#define FDMA_PORT_CTRL(r) __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 376, r, 2, 4)
#define FDMA_PORT_CTRL_INJ_STOP BIT(4)
#define FDMA_PORT_CTRL_INJ_STOP_SET(x)\
@@ -3234,7 +4290,8 @@ enum sparx5_target {
FIELD_GET(FDMA_PORT_CTRL_XTR_BUF_RST, x)
/* FDMA:FDMA:FDMA_INTR_DCB */
-#define FDMA_INTR_DCB __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 384, 0, 1, 4)
+#define FDMA_INTR_DCB __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 384, 0, 1, 4)
#define FDMA_INTR_DCB_INTR_DCB GENMASK(7, 0)
#define FDMA_INTR_DCB_INTR_DCB_SET(x)\
@@ -3243,7 +4300,8 @@ enum sparx5_target {
FIELD_GET(FDMA_INTR_DCB_INTR_DCB, x)
/* FDMA:FDMA:FDMA_INTR_DCB_ENA */
-#define FDMA_INTR_DCB_ENA __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 388, 0, 1, 4)
+#define FDMA_INTR_DCB_ENA __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 388, 0, 1, 4)
#define FDMA_INTR_DCB_ENA_INTR_DCB_ENA GENMASK(7, 0)
#define FDMA_INTR_DCB_ENA_INTR_DCB_ENA_SET(x)\
@@ -3252,7 +4310,8 @@ enum sparx5_target {
FIELD_GET(FDMA_INTR_DCB_ENA_INTR_DCB_ENA, x)
/* FDMA:FDMA:FDMA_INTR_DB */
-#define FDMA_INTR_DB __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 392, 0, 1, 4)
+#define FDMA_INTR_DB __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 392, 0, 1, 4)
#define FDMA_INTR_DB_INTR_DB GENMASK(7, 0)
#define FDMA_INTR_DB_INTR_DB_SET(x)\
@@ -3261,7 +4320,8 @@ enum sparx5_target {
FIELD_GET(FDMA_INTR_DB_INTR_DB, x)
/* FDMA:FDMA:FDMA_INTR_DB_ENA */
-#define FDMA_INTR_DB_ENA __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 396, 0, 1, 4)
+#define FDMA_INTR_DB_ENA __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 396, 0, 1, 4)
#define FDMA_INTR_DB_ENA_INTR_DB_ENA GENMASK(7, 0)
#define FDMA_INTR_DB_ENA_INTR_DB_ENA_SET(x)\
@@ -3270,7 +4330,8 @@ enum sparx5_target {
FIELD_GET(FDMA_INTR_DB_ENA_INTR_DB_ENA, x)
/* FDMA:FDMA:FDMA_INTR_ERR */
-#define FDMA_INTR_ERR __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 400, 0, 1, 4)
+#define FDMA_INTR_ERR __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 400, 0, 1, 4)
#define FDMA_INTR_ERR_INTR_PORT_ERR GENMASK(9, 8)
#define FDMA_INTR_ERR_INTR_PORT_ERR_SET(x)\
@@ -3285,7 +4346,8 @@ enum sparx5_target {
FIELD_GET(FDMA_INTR_ERR_INTR_CH_ERR, x)
/* FDMA:FDMA:FDMA_ERRORS */
-#define FDMA_ERRORS __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 412, 0, 1, 4)
+#define FDMA_ERRORS __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 412, 0, 1, 4)
#define FDMA_ERRORS_ERR_XTR_WR GENMASK(31, 30)
#define FDMA_ERRORS_ERR_XTR_WR_SET(x)\
@@ -3336,7 +4398,8 @@ enum sparx5_target {
FIELD_GET(FDMA_ERRORS_ERR_CH_WR, x)
/* FDMA:FDMA:FDMA_ERRORS_2 */
-#define FDMA_ERRORS_2 __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 416, 0, 1, 4)
+#define FDMA_ERRORS_2 __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 416, 0, 1, 4)
#define FDMA_ERRORS_2_ERR_XTR_FRAG GENMASK(1, 0)
#define FDMA_ERRORS_2_ERR_XTR_FRAG_SET(x)\
@@ -3345,7 +4408,8 @@ enum sparx5_target {
FIELD_GET(FDMA_ERRORS_2_ERR_XTR_FRAG, x)
/* FDMA:FDMA:FDMA_CTRL */
-#define FDMA_CTRL __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 424, 0, 1, 4)
+#define FDMA_CTRL __REG(TARGET_FDMA,\
+ 0, 1, 8, 0, 1, 428, 424, 0, 1, 4)
#define FDMA_CTRL_NRESET BIT(0)
#define FDMA_CTRL_NRESET_SET(x)\
@@ -3354,7 +4418,8 @@ enum sparx5_target {
FIELD_GET(FDMA_CTRL_NRESET, x)
/* DEVCPU_GCB:CHIP_REGS:CHIP_ID */
-#define GCB_CHIP_ID __REG(TARGET_GCB, 0, 1, 0, 0, 1, 424, 0, 0, 1, 4)
+#define GCB_CHIP_ID __REG(TARGET_GCB,\
+ 0, 1, 0, 0, 1, 424, 0, 0, 1, 4)
#define GCB_CHIP_ID_REV_ID GENMASK(31, 28)
#define GCB_CHIP_ID_REV_ID_SET(x)\
@@ -3381,7 +4446,8 @@ enum sparx5_target {
FIELD_GET(GCB_CHIP_ID_ONE, x)
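
/*
 * Sketch only: reading the chip revision is a plain register read plus a
 * field extract; spx5_rd() is assumed from sparx5_main.h, and the _GET()
 * helper follows the same uniform pattern as the _SET() above.
 */
static u32 chip_rev_sketch(struct sparx5 *sparx5)
{
	/* extract REV_ID (bits 31:28) from the raw CHIP_ID value */
	return GCB_CHIP_ID_REV_ID_GET(spx5_rd(sparx5, GCB_CHIP_ID));
}
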
/* DEVCPU_GCB:CHIP_REGS:SOFT_RST */
-#define GCB_SOFT_RST __REG(TARGET_GCB, 0, 1, 0, 0, 1, 424, 8, 0, 1, 4)
+#define GCB_SOFT_RST __REG(TARGET_GCB,\
+ 0, 1, 0, 0, 1, 424, 8, 0, 1, 4)
#define GCB_SOFT_RST_SOFT_NON_CFG_RST BIT(2)
#define GCB_SOFT_RST_SOFT_NON_CFG_RST_SET(x)\
@@ -3402,7 +4468,8 @@ enum sparx5_target {
FIELD_GET(GCB_SOFT_RST_SOFT_CHIP_RST, x)
/* DEVCPU_GCB:CHIP_REGS:HW_SGPIO_SD_CFG */
-#define GCB_HW_SGPIO_SD_CFG __REG(TARGET_GCB, 0, 1, 0, 0, 1, 424, 20, 0, 1, 4)
+#define GCB_HW_SGPIO_SD_CFG __REG(TARGET_GCB,\
+ 0, 1, 0, 0, 1, 424, 20, 0, 1, 4)
#define GCB_HW_SGPIO_SD_CFG_SD_HIGH_ENA BIT(1)
#define GCB_HW_SGPIO_SD_CFG_SD_HIGH_ENA_SET(x)\
@@ -3417,7 +4484,8 @@ enum sparx5_target {
FIELD_GET(GCB_HW_SGPIO_SD_CFG_SD_MAP_SEL, x)
/* DEVCPU_GCB:CHIP_REGS:HW_SGPIO_TO_SD_MAP_CFG */
-#define GCB_HW_SGPIO_TO_SD_MAP_CFG(r) __REG(TARGET_GCB, 0, 1, 0, 0, 1, 424, 24, r, 65, 4)
+#define GCB_HW_SGPIO_TO_SD_MAP_CFG(r) __REG(TARGET_GCB,\
+ 0, 1, 0, 0, 1, 424, 24, r, 65, 4)
#define GCB_HW_SGPIO_TO_SD_MAP_CFG_SGPIO_TO_SD_SEL GENMASK(8, 0)
#define GCB_HW_SGPIO_TO_SD_MAP_CFG_SGPIO_TO_SD_SEL_SET(x)\
@@ -3426,7 +4494,8 @@ enum sparx5_target {
FIELD_GET(GCB_HW_SGPIO_TO_SD_MAP_CFG_SGPIO_TO_SD_SEL, x)
/* DEVCPU_GCB:SIO_CTRL:SIO_CLOCK */
-#define GCB_SIO_CLOCK(g) __REG(TARGET_GCB, 0, 1, 876, g, 3, 280, 20, 0, 1, 4)
+#define GCB_SIO_CLOCK(g) __REG(TARGET_GCB,\
+ 0, 1, 876, g, 3, 280, 20, 0, 1, 4)
#define GCB_SIO_CLOCK_SIO_CLK_FREQ GENMASK(19, 8)
#define GCB_SIO_CLOCK_SIO_CLK_FREQ_SET(x)\
@@ -3441,7 +4510,8 @@ enum sparx5_target {
FIELD_GET(GCB_SIO_CLOCK_SYS_CLK_PERIOD, x)
/* HSCH:HSCH_CFG:CIR_CFG */
-#define HSCH_CIR_CFG(g) __REG(TARGET_HSCH, 0, 1, 0, g, 5040, 32, 0, 0, 1, 4)
+#define HSCH_CIR_CFG(g) __REG(TARGET_HSCH,\
+ 0, 1, 0, g, 5040, 32, 0, 0, 1, 4)
#define HSCH_CIR_CFG_CIR_RATE GENMASK(22, 6)
#define HSCH_CIR_CFG_CIR_RATE_SET(x)\
@@ -3456,7 +4526,8 @@ enum sparx5_target {
FIELD_GET(HSCH_CIR_CFG_CIR_BURST, x)
/* HSCH:HSCH_CFG:EIR_CFG */
-#define HSCH_EIR_CFG(g) __REG(TARGET_HSCH, 0, 1, 0, g, 5040, 32, 4, 0, 1, 4)
+#define HSCH_EIR_CFG(g) __REG(TARGET_HSCH,\
+ 0, 1, 0, g, 5040, 32, 4, 0, 1, 4)
#define HSCH_EIR_CFG_EIR_RATE GENMASK(22, 6)
#define HSCH_EIR_CFG_EIR_RATE_SET(x)\
@@ -3471,7 +4542,8 @@ enum sparx5_target {
FIELD_GET(HSCH_EIR_CFG_EIR_BURST, x)
/* HSCH:HSCH_CFG:SE_CFG */
-#define HSCH_SE_CFG(g) __REG(TARGET_HSCH, 0, 1, 0, g, 5040, 32, 8, 0, 1, 4)
+#define HSCH_SE_CFG(g) __REG(TARGET_HSCH,\
+ 0, 1, 0, g, 5040, 32, 8, 0, 1, 4)
#define HSCH_SE_CFG_SE_DWRR_CNT GENMASK(12, 6)
#define HSCH_SE_CFG_SE_DWRR_CNT_SET(x)\
@@ -3504,7 +4576,8 @@ enum sparx5_target {
FIELD_GET(HSCH_SE_CFG_SE_STOP, x)
/* HSCH:HSCH_CFG:SE_CONNECT */
-#define HSCH_SE_CONNECT(g) __REG(TARGET_HSCH, 0, 1, 0, g, 5040, 32, 12, 0, 1, 4)
+#define HSCH_SE_CONNECT(g) __REG(TARGET_HSCH,\
+ 0, 1, 0, g, 5040, 32, 12, 0, 1, 4)
#define HSCH_SE_CONNECT_SE_LEAK_LINK GENMASK(15, 0)
#define HSCH_SE_CONNECT_SE_LEAK_LINK_SET(x)\
@@ -3513,7 +4586,8 @@ enum sparx5_target {
FIELD_GET(HSCH_SE_CONNECT_SE_LEAK_LINK, x)
/* HSCH:HSCH_CFG:SE_DLB_SENSE */
-#define HSCH_SE_DLB_SENSE(g) __REG(TARGET_HSCH, 0, 1, 0, g, 5040, 32, 16, 0, 1, 4)
+#define HSCH_SE_DLB_SENSE(g) __REG(TARGET_HSCH,\
+ 0, 1, 0, g, 5040, 32, 16, 0, 1, 4)
#define HSCH_SE_DLB_SENSE_SE_DLB_PRIO GENMASK(12, 10)
#define HSCH_SE_DLB_SENSE_SE_DLB_PRIO_SET(x)\
@@ -3546,7 +4620,8 @@ enum sparx5_target {
FIELD_GET(HSCH_SE_DLB_SENSE_SE_DLB_DPORT_ENA, x)
/* HSCH:HSCH_DWRR:DWRR_ENTRY */
-#define HSCH_DWRR_ENTRY(g) __REG(TARGET_HSCH, 0, 1, 162816, g, 72, 4, 0, 0, 1, 4)
+#define HSCH_DWRR_ENTRY(g) __REG(TARGET_HSCH,\
+ 0, 1, 162816, g, 72, 4, 0, 0, 1, 4)
#define HSCH_DWRR_ENTRY_DWRR_COST GENMASK(24, 20)
#define HSCH_DWRR_ENTRY_DWRR_COST_SET(x)\
@@ -3561,7 +4636,8 @@ enum sparx5_target {
FIELD_GET(HSCH_DWRR_ENTRY_DWRR_BALANCE, x)
/* HSCH:HSCH_MISC:HSCH_CFG_CFG */
-#define HSCH_HSCH_CFG_CFG __REG(TARGET_HSCH, 0, 1, 163104, 0, 1, 648, 284, 0, 1, 4)
+#define HSCH_HSCH_CFG_CFG __REG(TARGET_HSCH,\
+ 0, 1, 163104, 0, 1, 648, 284, 0, 1, 4)
#define HSCH_HSCH_CFG_CFG_CFG_SE_IDX GENMASK(26, 14)
#define HSCH_HSCH_CFG_CFG_CFG_SE_IDX_SET(x)\
@@ -3582,16 +4658,18 @@ enum sparx5_target {
FIELD_GET(HSCH_HSCH_CFG_CFG_CSR_GRANT, x)
/* HSCH:HSCH_MISC:SYS_CLK_PER */
-#define HSCH_SYS_CLK_PER __REG(TARGET_HSCH, 0, 1, 163104, 0, 1, 648, 640, 0, 1, 4)
+#define HSCH_SYS_CLK_PER __REG(TARGET_HSCH,\
+ 0, 1, 163104, 0, 1, 648, 640, 0, 1, 4)
-#define HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS GENMASK(7, 0)
-#define HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS_SET(x)\
- FIELD_PREP(HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS, x)
-#define HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS_GET(x)\
- FIELD_GET(HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS, x)
+#define HSCH_SYS_CLK_PER_100PS GENMASK(7, 0)
+#define HSCH_SYS_CLK_PER_100PS_SET(x)\
+ FIELD_PREP(HSCH_SYS_CLK_PER_100PS, x)
+#define HSCH_SYS_CLK_PER_100PS_GET(x)\
+ FIELD_GET(HSCH_SYS_CLK_PER_100PS, x)
/* HSCH:HSCH_LEAK_LISTS:HSCH_TIMER_CFG */
-#define HSCH_HSCH_TIMER_CFG(g, r) __REG(TARGET_HSCH, 0, 1, 161664, g, 4, 32, 0, r, 4, 4)
+#define HSCH_HSCH_TIMER_CFG(g, r) __REG(TARGET_HSCH,\
+ 0, 1, 161664, g, 4, 32, 0, r, 4, 4)
#define HSCH_HSCH_TIMER_CFG_LEAK_TIME GENMASK(17, 0)
#define HSCH_HSCH_TIMER_CFG_LEAK_TIME_SET(x)\
@@ -3600,7 +4678,8 @@ enum sparx5_target {
FIELD_GET(HSCH_HSCH_TIMER_CFG_LEAK_TIME, x)
/* HSCH:HSCH_LEAK_LISTS:HSCH_LEAK_CFG */
-#define HSCH_HSCH_LEAK_CFG(g, r) __REG(TARGET_HSCH, 0, 1, 161664, g, 4, 32, 16, r, 4, 4)
+#define HSCH_HSCH_LEAK_CFG(g, r) __REG(TARGET_HSCH,\
+ 0, 1, 161664, g, 4, 32, 16, r, 4, 4)
#define HSCH_HSCH_LEAK_CFG_LEAK_FIRST GENMASK(16, 1)
#define HSCH_HSCH_LEAK_CFG_LEAK_FIRST_SET(x)\
@@ -3615,7 +4694,8 @@ enum sparx5_target {
FIELD_GET(HSCH_HSCH_LEAK_CFG_LEAK_ERR, x)
/* HSCH:SYSTEM:FLUSH_CTRL */
-#define HSCH_FLUSH_CTRL __REG(TARGET_HSCH, 0, 1, 184000, 0, 1, 312, 4, 0, 1, 4)
+#define HSCH_FLUSH_CTRL __REG(TARGET_HSCH,\
+ 0, 1, 184000, 0, 1, 312, 4, 0, 1, 4)
#define HSCH_FLUSH_CTRL_FLUSH_ENA BIT(27)
#define HSCH_FLUSH_CTRL_FLUSH_ENA_SET(x)\
@@ -3660,7 +4740,8 @@ enum sparx5_target {
FIELD_GET(HSCH_FLUSH_CTRL_FLUSH_HIER, x)
/* HSCH:SYSTEM:PORT_MODE */
-#define HSCH_PORT_MODE(r) __REG(TARGET_HSCH, 0, 1, 184000, 0, 1, 312, 8, r, 70, 4)
+#define HSCH_PORT_MODE(r) __REG(TARGET_HSCH,\
+ 0, 1, 184000, 0, 1, 312, 8, r, 70, 4)
#define HSCH_PORT_MODE_DEQUEUE_DIS BIT(4)
#define HSCH_PORT_MODE_DEQUEUE_DIS_SET(x)\
@@ -3693,7 +4774,8 @@ enum sparx5_target {
FIELD_GET(HSCH_PORT_MODE_CPU_PRIO_MODE, x)
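
/*
 * Sketch only: HSCH_PORT_MODE is replicated per port (70 instances), so a
 * per-port dequeue stop is a masked read-modify-write on that instance,
 * assuming spx5_rmw() from sparx5_main.h.
 */
static void hsch_dequeue_dis_sketch(struct sparx5 *sparx5, u32 portno, bool dis)
{
	spx5_rmw(HSCH_PORT_MODE_DEQUEUE_DIS_SET(dis),
		 HSCH_PORT_MODE_DEQUEUE_DIS,
		 sparx5, HSCH_PORT_MODE(portno));
}
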
/* HSCH:SYSTEM:OUTB_SHARE_ENA */
-#define HSCH_OUTB_SHARE_ENA(r) __REG(TARGET_HSCH, 0, 1, 184000, 0, 1, 312, 288, r, 5, 4)
+#define HSCH_OUTB_SHARE_ENA(r) __REG(TARGET_HSCH,\
+ 0, 1, 184000, 0, 1, 312, 288, r, 5, 4)
#define HSCH_OUTB_SHARE_ENA_OUTB_SHARE_ENA GENMASK(7, 0)
#define HSCH_OUTB_SHARE_ENA_OUTB_SHARE_ENA_SET(x)\
@@ -3702,7 +4784,8 @@ enum sparx5_target {
FIELD_GET(HSCH_OUTB_SHARE_ENA_OUTB_SHARE_ENA, x)
/* HSCH:MMGT:RESET_CFG */
-#define HSCH_RESET_CFG __REG(TARGET_HSCH, 0, 1, 162368, 0, 1, 16, 8, 0, 1, 4)
+#define HSCH_RESET_CFG __REG(TARGET_HSCH,\
+ 0, 1, 162368, 0, 1, 16, 8, 0, 1, 4)
#define HSCH_RESET_CFG_CORE_ENA BIT(0)
#define HSCH_RESET_CFG_CORE_ENA_SET(x)\
@@ -3711,7 +4794,8 @@ enum sparx5_target {
FIELD_GET(HSCH_RESET_CFG_CORE_ENA, x)
/* HSCH:TAS_CONFIG:TAS_STATEMACHINE_CFG */
-#define HSCH_TAS_STATEMACHINE_CFG __REG(TARGET_HSCH, 0, 1, 162384, 0, 1, 12, 8, 0, 1, 4)
+#define HSCH_TAS_STATEMACHINE_CFG __REG(TARGET_HSCH,\
+ 0, 1, 162384, 0, 1, 12, 8, 0, 1, 4)
#define HSCH_TAS_STATEMACHINE_CFG_REVISIT_DLY GENMASK(7, 0)
#define HSCH_TAS_STATEMACHINE_CFG_REVISIT_DLY_SET(x)\
@@ -3720,7 +4804,8 @@ enum sparx5_target {
FIELD_GET(HSCH_TAS_STATEMACHINE_CFG_REVISIT_DLY, x)
/* LRN:COMMON:COMMON_ACCESS_CTRL */
-#define LRN_COMMON_ACCESS_CTRL __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 0, 0, 1, 4)
+#define LRN_COMMON_ACCESS_CTRL __REG(TARGET_LRN,\
+ 0, 1, 0, 0, 1, 72, 0, 0, 1, 4)
#define LRN_COMMON_ACCESS_CTRL_CPU_ACCESS_DIRECT_COL GENMASK(21, 20)
#define LRN_COMMON_ACCESS_CTRL_CPU_ACCESS_DIRECT_COL_SET(x)\
@@ -3753,7 +4838,8 @@ enum sparx5_target {
FIELD_GET(LRN_COMMON_ACCESS_CTRL_MAC_TABLE_ACCESS_SHOT, x)
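
/*
 * Sketch, with assumptions: the driver's MAC-table code treats
 * MAC_TABLE_ACCESS_SHOT as a self-clearing one-shot trigger, so completion
 * can be detected by reading the bit back; spx5_rd() is assumed from
 * sparx5_main.h.
 */
static bool mact_busy_sketch(struct sparx5 *sparx5)
{
	u32 val = spx5_rd(sparx5, LRN_COMMON_ACCESS_CTRL);

	/* non-zero while the one-shot table access is still pending */
	return LRN_COMMON_ACCESS_CTRL_MAC_TABLE_ACCESS_SHOT_GET(val);
}
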
/* LRN:COMMON:MAC_ACCESS_CFG_0 */
-#define LRN_MAC_ACCESS_CFG_0 __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 4, 0, 1, 4)
+#define LRN_MAC_ACCESS_CFG_0 __REG(TARGET_LRN,\
+ 0, 1, 0, 0, 1, 72, 4, 0, 1, 4)
#define LRN_MAC_ACCESS_CFG_0_MAC_ENTRY_FID GENMASK(28, 16)
#define LRN_MAC_ACCESS_CFG_0_MAC_ENTRY_FID_SET(x)\
@@ -3768,10 +4854,12 @@ enum sparx5_target {
FIELD_GET(LRN_MAC_ACCESS_CFG_0_MAC_ENTRY_MAC_MSB, x)
/* LRN:COMMON:MAC_ACCESS_CFG_1 */
-#define LRN_MAC_ACCESS_CFG_1 __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 8, 0, 1, 4)
+#define LRN_MAC_ACCESS_CFG_1 __REG(TARGET_LRN,\
+ 0, 1, 0, 0, 1, 72, 8, 0, 1, 4)
/* LRN:COMMON:MAC_ACCESS_CFG_2 */
-#define LRN_MAC_ACCESS_CFG_2 __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 12, 0, 1, 4)
+#define LRN_MAC_ACCESS_CFG_2 __REG(TARGET_LRN,\
+ 0, 1, 0, 0, 1, 72, 12, 0, 1, 4)
#define LRN_MAC_ACCESS_CFG_2_MAC_ENTRY_SRC_KILL_FWD BIT(28)
#define LRN_MAC_ACCESS_CFG_2_MAC_ENTRY_SRC_KILL_FWD_SET(x)\
@@ -3846,7 +4934,8 @@ enum sparx5_target {
FIELD_GET(LRN_MAC_ACCESS_CFG_2_MAC_ENTRY_ADDR, x)
/* LRN:COMMON:MAC_ACCESS_CFG_3 */
-#define LRN_MAC_ACCESS_CFG_3 __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 16, 0, 1, 4)
+#define LRN_MAC_ACCESS_CFG_3 __REG(TARGET_LRN,\
+ 0, 1, 0, 0, 1, 72, 16, 0, 1, 4)
#define LRN_MAC_ACCESS_CFG_3_MAC_ENTRY_ISDX_LIMIT_IDX GENMASK(10, 0)
#define LRN_MAC_ACCESS_CFG_3_MAC_ENTRY_ISDX_LIMIT_IDX_SET(x)\
@@ -3855,7 +4944,8 @@ enum sparx5_target {
FIELD_GET(LRN_MAC_ACCESS_CFG_3_MAC_ENTRY_ISDX_LIMIT_IDX, x)
/* LRN:COMMON:SCAN_NEXT_CFG */
-#define LRN_SCAN_NEXT_CFG __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 20, 0, 1, 4)
+#define LRN_SCAN_NEXT_CFG __REG(TARGET_LRN,\
+ 0, 1, 0, 0, 1, 72, 20, 0, 1, 4)
#define LRN_SCAN_NEXT_CFG_SCAN_AGE_FLAG_UPDATE_SEL GENMASK(21, 19)
#define LRN_SCAN_NEXT_CFG_SCAN_AGE_FLAG_UPDATE_SEL_SET(x)\
@@ -3948,7 +5038,8 @@ enum sparx5_target {
FIELD_GET(LRN_SCAN_NEXT_CFG_ADDR_FILTER_ENA, x)
/* LRN:COMMON:SCAN_NEXT_CFG_1 */
-#define LRN_SCAN_NEXT_CFG_1 __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 24, 0, 1, 4)
+#define LRN_SCAN_NEXT_CFG_1 __REG(TARGET_LRN,\
+ 0, 1, 0, 0, 1, 72, 24, 0, 1, 4)
#define LRN_SCAN_NEXT_CFG_1_PORT_MOVE_NEW_ADDR GENMASK(30, 16)
#define LRN_SCAN_NEXT_CFG_1_PORT_MOVE_NEW_ADDR_SET(x)\
@@ -3963,7 +5054,8 @@ enum sparx5_target {
FIELD_GET(LRN_SCAN_NEXT_CFG_1_SCAN_ENTRY_ADDR_MASK, x)
/* LRN:COMMON:AUTOAGE_CFG */
-#define LRN_AUTOAGE_CFG(r) __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 36, r, 4, 4)
+#define LRN_AUTOAGE_CFG(r) __REG(TARGET_LRN,\
+ 0, 1, 0, 0, 1, 72, 36, r, 4, 4)
#define LRN_AUTOAGE_CFG_UNIT_SIZE GENMASK(29, 28)
#define LRN_AUTOAGE_CFG_UNIT_SIZE_SET(x)\
@@ -3978,7 +5070,8 @@ enum sparx5_target {
FIELD_GET(LRN_AUTOAGE_CFG_PERIOD_VAL, x)
/* LRN:COMMON:AUTOAGE_CFG_1 */
-#define LRN_AUTOAGE_CFG_1 __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 52, 0, 1, 4)
+#define LRN_AUTOAGE_CFG_1 __REG(TARGET_LRN,\
+ 0, 1, 0, 0, 1, 72, 52, 0, 1, 4)
#define LRN_AUTOAGE_CFG_1_PAUSE_AUTO_AGE_ENA BIT(25)
#define LRN_AUTOAGE_CFG_1_PAUSE_AUTO_AGE_ENA_SET(x)\
@@ -4023,7 +5116,8 @@ enum sparx5_target {
FIELD_GET(LRN_AUTOAGE_CFG_1_FORCE_IDLE_ENA, x)
/* LRN:COMMON:AUTOAGE_CFG_2 */
-#define LRN_AUTOAGE_CFG_2 __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 56, 0, 1, 4)
+#define LRN_AUTOAGE_CFG_2 __REG(TARGET_LRN,\
+ 0, 1, 0, 0, 1, 72, 56, 0, 1, 4)
#define LRN_AUTOAGE_CFG_2_NEXT_ROW GENMASK(17, 4)
#define LRN_AUTOAGE_CFG_2_NEXT_ROW_SET(x)\
@@ -4038,7 +5132,8 @@ enum sparx5_target {
FIELD_GET(LRN_AUTOAGE_CFG_2_SCAN_ONGOING_STATUS, x)
/* PCIE_DM_EP:PF0_ATU_CAP:IATU_REGION_CTRL_2_OFF_OUTBOUND_0 */
-#define PCEP_RCTRL_2_OUT_0 __REG(TARGET_PCEP, 0, 1, 3145728, 0, 1, 130852, 4, 0, 1, 4)
+#define PCEP_RCTRL_2_OUT_0 __REG(TARGET_PCEP,\
+ 0, 1, 3145728, 0, 1, 130852, 4, 0, 1, 4)
#define PCEP_RCTRL_2_OUT_0_MSG_CODE GENMASK(7, 0)
#define PCEP_RCTRL_2_OUT_0_MSG_CODE_SET(x)\
@@ -4101,7 +5196,8 @@ enum sparx5_target {
FIELD_GET(PCEP_RCTRL_2_OUT_0_REGION_EN, x)
/* PCIE_DM_EP:PF0_ATU_CAP:IATU_LWR_BASE_ADDR_OFF_OUTBOUND_0 */
-#define PCEP_ADDR_LWR_OUT_0 __REG(TARGET_PCEP, 0, 1, 3145728, 0, 1, 130852, 8, 0, 1, 4)
+#define PCEP_ADDR_LWR_OUT_0 __REG(TARGET_PCEP,\
+ 0, 1, 3145728, 0, 1, 130852, 8, 0, 1, 4)
#define PCEP_ADDR_LWR_OUT_0_LWR_BASE_HW GENMASK(15, 0)
#define PCEP_ADDR_LWR_OUT_0_LWR_BASE_HW_SET(x)\
@@ -4116,10 +5212,12 @@ enum sparx5_target {
FIELD_GET(PCEP_ADDR_LWR_OUT_0_LWR_BASE_RW, x)
/* PCIE_DM_EP:PF0_ATU_CAP:IATU_UPPER_BASE_ADDR_OFF_OUTBOUND_0 */
-#define PCEP_ADDR_UPR_OUT_0 __REG(TARGET_PCEP, 0, 1, 3145728, 0, 1, 130852, 12, 0, 1, 4)
+#define PCEP_ADDR_UPR_OUT_0 __REG(TARGET_PCEP,\
+ 0, 1, 3145728, 0, 1, 130852, 12, 0, 1, 4)
/* PCIE_DM_EP:PF0_ATU_CAP:IATU_LIMIT_ADDR_OFF_OUTBOUND_0 */
-#define PCEP_ADDR_LIM_OUT_0 __REG(TARGET_PCEP, 0, 1, 3145728, 0, 1, 130852, 16, 0, 1, 4)
+#define PCEP_ADDR_LIM_OUT_0 __REG(TARGET_PCEP,\
+ 0, 1, 3145728, 0, 1, 130852, 16, 0, 1, 4)
#define PCEP_ADDR_LIM_OUT_0_LIMIT_ADDR_HW GENMASK(15, 0)
#define PCEP_ADDR_LIM_OUT_0_LIMIT_ADDR_HW_SET(x)\
@@ -4134,13 +5232,16 @@ enum sparx5_target {
FIELD_GET(PCEP_ADDR_LIM_OUT_0_LIMIT_ADDR_RW, x)
/* PCIE_DM_EP:PF0_ATU_CAP:IATU_LWR_TARGET_ADDR_OFF_OUTBOUND_0 */
-#define PCEP_ADDR_LWR_TGT_OUT_0 __REG(TARGET_PCEP, 0, 1, 3145728, 0, 1, 130852, 20, 0, 1, 4)
+#define PCEP_ADDR_LWR_TGT_OUT_0 __REG(TARGET_PCEP,\
+ 0, 1, 3145728, 0, 1, 130852, 20, 0, 1, 4)
/* PCIE_DM_EP:PF0_ATU_CAP:IATU_UPPER_TARGET_ADDR_OFF_OUTBOUND_0 */
-#define PCEP_ADDR_UPR_TGT_OUT_0 __REG(TARGET_PCEP, 0, 1, 3145728, 0, 1, 130852, 24, 0, 1, 4)
+#define PCEP_ADDR_UPR_TGT_OUT_0 __REG(TARGET_PCEP,\
+ 0, 1, 3145728, 0, 1, 130852, 24, 0, 1, 4)
/* PCIE_DM_EP:PF0_ATU_CAP:IATU_UPPR_LIMIT_ADDR_OFF_OUTBOUND_0 */
-#define PCEP_ADDR_UPR_LIM_OUT_0 __REG(TARGET_PCEP, 0, 1, 3145728, 0, 1, 130852, 32, 0, 1, 4)
+#define PCEP_ADDR_UPR_LIM_OUT_0 __REG(TARGET_PCEP,\
+ 0, 1, 3145728, 0, 1, 130852, 32, 0, 1, 4)
#define PCEP_ADDR_UPR_LIM_OUT_0_UPPR_LIMIT_ADDR_RW GENMASK(1, 0)
#define PCEP_ADDR_UPR_LIM_OUT_0_UPPR_LIMIT_ADDR_RW_SET(x)\
@@ -4155,7 +5256,8 @@ enum sparx5_target {
FIELD_GET(PCEP_ADDR_UPR_LIM_OUT_0_UPPR_LIMIT_ADDR_HW, x)
/* PCS_10GBASE_R:PCS_10GBR_CFG:PCS_CFG */
-#define PCS10G_BR_PCS_CFG(t) __REG(TARGET_PCS10G_BR, t, 12, 0, 0, 1, 56, 0, 0, 1, 4)
+#define PCS10G_BR_PCS_CFG(t) __REG(TARGET_PCS10G_BR,\
+ t, 12, 0, 0, 1, 56, 0, 0, 1, 4)
#define PCS10G_BR_PCS_CFG_PCS_ENA BIT(31)
#define PCS10G_BR_PCS_CFG_PCS_ENA_SET(x)\
@@ -4230,7 +5332,8 @@ enum sparx5_target {
FIELD_GET(PCS10G_BR_PCS_CFG_TX_SCR_DISABLE, x)
/* PCS_10GBASE_R:PCS_10GBR_CFG:PCS_SD_CFG */
-#define PCS10G_BR_PCS_SD_CFG(t) __REG(TARGET_PCS10G_BR, t, 12, 0, 0, 1, 56, 4, 0, 1, 4)
+#define PCS10G_BR_PCS_SD_CFG(t) __REG(TARGET_PCS10G_BR,\
+ t, 12, 0, 0, 1, 56, 4, 0, 1, 4)
#define PCS10G_BR_PCS_SD_CFG_SD_SEL BIT(8)
#define PCS10G_BR_PCS_SD_CFG_SD_SEL_SET(x)\
@@ -4251,7 +5354,8 @@ enum sparx5_target {
FIELD_GET(PCS10G_BR_PCS_SD_CFG_SD_ENA, x)
/* PCS_10GBASE_R:PCS_10GBR_CFG:PCS_CFG */
-#define PCS25G_BR_PCS_CFG(t) __REG(TARGET_PCS25G_BR, t, 8, 0, 0, 1, 56, 0, 0, 1, 4)
+#define PCS25G_BR_PCS_CFG(t) __REG(TARGET_PCS25G_BR,\
+ t, 8, 0, 0, 1, 56, 0, 0, 1, 4)
#define PCS25G_BR_PCS_CFG_PCS_ENA BIT(31)
#define PCS25G_BR_PCS_CFG_PCS_ENA_SET(x)\
@@ -4326,7 +5430,8 @@ enum sparx5_target {
FIELD_GET(PCS25G_BR_PCS_CFG_TX_SCR_DISABLE, x)
/* PCS_10GBASE_R:PCS_10GBR_CFG:PCS_SD_CFG */
-#define PCS25G_BR_PCS_SD_CFG(t) __REG(TARGET_PCS25G_BR, t, 8, 0, 0, 1, 56, 4, 0, 1, 4)
+#define PCS25G_BR_PCS_SD_CFG(t) __REG(TARGET_PCS25G_BR,\
+ t, 8, 0, 0, 1, 56, 4, 0, 1, 4)
#define PCS25G_BR_PCS_SD_CFG_SD_SEL BIT(8)
#define PCS25G_BR_PCS_SD_CFG_SD_SEL_SET(x)\
@@ -4347,7 +5452,8 @@ enum sparx5_target {
FIELD_GET(PCS25G_BR_PCS_SD_CFG_SD_ENA, x)
/* PCS_10GBASE_R:PCS_10GBR_CFG:PCS_CFG */
-#define PCS5G_BR_PCS_CFG(t) __REG(TARGET_PCS5G_BR, t, 13, 0, 0, 1, 56, 0, 0, 1, 4)
+#define PCS5G_BR_PCS_CFG(t) __REG(TARGET_PCS5G_BR,\
+ t, 13, 0, 0, 1, 56, 0, 0, 1, 4)
#define PCS5G_BR_PCS_CFG_PCS_ENA BIT(31)
#define PCS5G_BR_PCS_CFG_PCS_ENA_SET(x)\
@@ -4422,7 +5528,8 @@ enum sparx5_target {
FIELD_GET(PCS5G_BR_PCS_CFG_TX_SCR_DISABLE, x)
/* PCS_10GBASE_R:PCS_10GBR_CFG:PCS_SD_CFG */
-#define PCS5G_BR_PCS_SD_CFG(t) __REG(TARGET_PCS5G_BR, t, 13, 0, 0, 1, 56, 4, 0, 1, 4)
+#define PCS5G_BR_PCS_SD_CFG(t) __REG(TARGET_PCS5G_BR,\
+ t, 13, 0, 0, 1, 56, 4, 0, 1, 4)
#define PCS5G_BR_PCS_SD_CFG_SD_SEL BIT(8)
#define PCS5G_BR_PCS_SD_CFG_SD_SEL_SET(x)\
@@ -4443,7 +5550,8 @@ enum sparx5_target {
FIELD_GET(PCS5G_BR_PCS_SD_CFG_SD_ENA, x)
/* PORT_CONF:HW_CFG:DEV5G_MODES */
-#define PORT_CONF_DEV5G_MODES __REG(TARGET_PORT_CONF, 0, 1, 0, 0, 1, 24, 0, 0, 1, 4)
+#define PORT_CONF_DEV5G_MODES __REG(TARGET_PORT_CONF,\
+ 0, 1, 0, 0, 1, 24, 0, 0, 1, 4)
#define PORT_CONF_DEV5G_MODES_DEV5G_D0_MODE BIT(0)
#define PORT_CONF_DEV5G_MODES_DEV5G_D0_MODE_SET(x)\
@@ -4524,7 +5632,8 @@ enum sparx5_target {
FIELD_GET(PORT_CONF_DEV5G_MODES_DEV5G_D64_MODE, x)
/* PORT_CONF:HW_CFG:DEV10G_MODES */
-#define PORT_CONF_DEV10G_MODES __REG(TARGET_PORT_CONF, 0, 1, 0, 0, 1, 24, 4, 0, 1, 4)
+#define PORT_CONF_DEV10G_MODES __REG(TARGET_PORT_CONF,\
+ 0, 1, 0, 0, 1, 24, 4, 0, 1, 4)
#define PORT_CONF_DEV10G_MODES_DEV10G_D12_MODE BIT(0)
#define PORT_CONF_DEV10G_MODES_DEV10G_D12_MODE_SET(x)\
@@ -4599,7 +5708,8 @@ enum sparx5_target {
FIELD_GET(PORT_CONF_DEV10G_MODES_DEV10G_D55_MODE, x)
/* PORT_CONF:HW_CFG:DEV25G_MODES */
-#define PORT_CONF_DEV25G_MODES __REG(TARGET_PORT_CONF, 0, 1, 0, 0, 1, 24, 8, 0, 1, 4)
+#define PORT_CONF_DEV25G_MODES __REG(TARGET_PORT_CONF,\
+ 0, 1, 0, 0, 1, 24, 8, 0, 1, 4)
#define PORT_CONF_DEV25G_MODES_DEV25G_D56_MODE BIT(0)
#define PORT_CONF_DEV25G_MODES_DEV25G_D56_MODE_SET(x)\
@@ -4650,7 +5760,8 @@ enum sparx5_target {
FIELD_GET(PORT_CONF_DEV25G_MODES_DEV25G_D63_MODE, x)
/* PORT_CONF:HW_CFG:QSGMII_ENA */
-#define PORT_CONF_QSGMII_ENA __REG(TARGET_PORT_CONF, 0, 1, 0, 0, 1, 24, 12, 0, 1, 4)
+#define PORT_CONF_QSGMII_ENA __REG(TARGET_PORT_CONF,\
+ 0, 1, 0, 0, 1, 24, 12, 0, 1, 4)
#define PORT_CONF_QSGMII_ENA_QSGMII_ENA_0 BIT(0)
#define PORT_CONF_QSGMII_ENA_QSGMII_ENA_0_SET(x)\
@@ -4725,7 +5836,8 @@ enum sparx5_target {
FIELD_GET(PORT_CONF_QSGMII_ENA_QSGMII_ENA_11, x)
/* PORT_CONF:USGMII_CFG_STAT:USGMII_CFG */
-#define PORT_CONF_USGMII_CFG(g) __REG(TARGET_PORT_CONF, 0, 1, 72, g, 6, 8, 0, 0, 1, 4)
+#define PORT_CONF_USGMII_CFG(g) __REG(TARGET_PORT_CONF,\
+ 0, 1, 72, g, 6, 8, 0, 0, 1, 4)
#define PORT_CONF_USGMII_CFG_BYPASS_SCRAM BIT(9)
#define PORT_CONF_USGMII_CFG_BYPASS_SCRAM_SET(x)\
@@ -4770,7 +5882,8 @@ enum sparx5_target {
FIELD_GET(PORT_CONF_USGMII_CFG_QUAD_MODE, x)
/* DEVCPU_PTP:PTP_CFG:PTP_PIN_INTR */
-#define PTP_PTP_PIN_INTR __REG(TARGET_PTP, 0, 1, 320, 0, 1, 16, 0, 0, 1, 4)
+#define PTP_PTP_PIN_INTR __REG(TARGET_PTP,\
+ 0, 1, 320, 0, 1, 16, 0, 0, 1, 4)
#define PTP_PTP_PIN_INTR_INTR_PTP GENMASK(4, 0)
#define PTP_PTP_PIN_INTR_INTR_PTP_SET(x)\
@@ -4779,7 +5892,8 @@ enum sparx5_target {
FIELD_GET(PTP_PTP_PIN_INTR_INTR_PTP, x)
/* DEVCPU_PTP:PTP_CFG:PTP_PIN_INTR_ENA */
-#define PTP_PTP_PIN_INTR_ENA __REG(TARGET_PTP, 0, 1, 320, 0, 1, 16, 4, 0, 1, 4)
+#define PTP_PTP_PIN_INTR_ENA __REG(TARGET_PTP,\
+ 0, 1, 320, 0, 1, 16, 4, 0, 1, 4)
#define PTP_PTP_PIN_INTR_ENA_INTR_PTP_ENA GENMASK(4, 0)
#define PTP_PTP_PIN_INTR_ENA_INTR_PTP_ENA_SET(x)\
@@ -4788,7 +5902,8 @@ enum sparx5_target {
FIELD_GET(PTP_PTP_PIN_INTR_ENA_INTR_PTP_ENA, x)
/* DEVCPU_PTP:PTP_CFG:PTP_INTR_IDENT */
-#define PTP_PTP_INTR_IDENT __REG(TARGET_PTP, 0, 1, 320, 0, 1, 16, 8, 0, 1, 4)
+#define PTP_PTP_INTR_IDENT __REG(TARGET_PTP,\
+ 0, 1, 320, 0, 1, 16, 8, 0, 1, 4)
#define PTP_PTP_INTR_IDENT_INTR_PTP_IDENT GENMASK(4, 0)
#define PTP_PTP_INTR_IDENT_INTR_PTP_IDENT_SET(x)\
@@ -4797,7 +5912,8 @@ enum sparx5_target {
FIELD_GET(PTP_PTP_INTR_IDENT_INTR_PTP_IDENT, x)
/* DEVCPU_PTP:PTP_CFG:PTP_DOM_CFG */
-#define PTP_PTP_DOM_CFG __REG(TARGET_PTP, 0, 1, 320, 0, 1, 16, 12, 0, 1, 4)
+#define PTP_PTP_DOM_CFG __REG(TARGET_PTP,\
+ 0, 1, 320, 0, 1, 16, 12, 0, 1, 4)
#define PTP_PTP_DOM_CFG_PTP_ENA GENMASK(11, 9)
#define PTP_PTP_DOM_CFG_PTP_ENA_SET(x)\
@@ -4824,10 +5940,12 @@ enum sparx5_target {
FIELD_GET(PTP_PTP_DOM_CFG_PTP_CLKCFG_DIS, x)
/* DEVCPU_PTP:PTP_TOD_DOMAINS:CLK_PER_CFG */
-#define PTP_CLK_PER_CFG(g, r) __REG(TARGET_PTP, 0, 1, 336, g, 3, 28, 0, r, 2, 4)
+#define PTP_CLK_PER_CFG(g, r) __REG(TARGET_PTP,\
+ 0, 1, 336, g, 3, 28, 0, r, 2, 4)
/* DEVCPU_PTP:PTP_TOD_DOMAINS:PTP_CUR_NSEC */
-#define PTP_PTP_CUR_NSEC(g) __REG(TARGET_PTP, 0, 1, 336, g, 3, 28, 8, 0, 1, 4)
+#define PTP_PTP_CUR_NSEC(g) __REG(TARGET_PTP,\
+ 0, 1, 336, g, 3, 28, 8, 0, 1, 4)
#define PTP_PTP_CUR_NSEC_PTP_CUR_NSEC GENMASK(29, 0)
#define PTP_PTP_CUR_NSEC_PTP_CUR_NSEC_SET(x)\
@@ -4836,7 +5954,8 @@ enum sparx5_target {
FIELD_GET(PTP_PTP_CUR_NSEC_PTP_CUR_NSEC, x)
/* DEVCPU_PTP:PTP_TOD_DOMAINS:PTP_CUR_NSEC_FRAC */
-#define PTP_PTP_CUR_NSEC_FRAC(g) __REG(TARGET_PTP, 0, 1, 336, g, 3, 28, 12, 0, 1, 4)
+#define PTP_PTP_CUR_NSEC_FRAC(g) __REG(TARGET_PTP,\
+ 0, 1, 336, g, 3, 28, 12, 0, 1, 4)
#define PTP_PTP_CUR_NSEC_FRAC_PTP_CUR_NSEC_FRAC GENMASK(7, 0)
#define PTP_PTP_CUR_NSEC_FRAC_PTP_CUR_NSEC_FRAC_SET(x)\
@@ -4845,10 +5964,12 @@ enum sparx5_target {
FIELD_GET(PTP_PTP_CUR_NSEC_FRAC_PTP_CUR_NSEC_FRAC, x)
/* DEVCPU_PTP:PTP_TOD_DOMAINS:PTP_CUR_SEC_LSB */
-#define PTP_PTP_CUR_SEC_LSB(g) __REG(TARGET_PTP, 0, 1, 336, g, 3, 28, 16, 0, 1, 4)
+#define PTP_PTP_CUR_SEC_LSB(g) __REG(TARGET_PTP,\
+ 0, 1, 336, g, 3, 28, 16, 0, 1, 4)
/* DEVCPU_PTP:PTP_TOD_DOMAINS:PTP_CUR_SEC_MSB */
-#define PTP_PTP_CUR_SEC_MSB(g) __REG(TARGET_PTP, 0, 1, 336, g, 3, 28, 20, 0, 1, 4)
+#define PTP_PTP_CUR_SEC_MSB(g) __REG(TARGET_PTP,\
+ 0, 1, 336, g, 3, 28, 20, 0, 1, 4)
#define PTP_PTP_CUR_SEC_MSB_PTP_CUR_SEC_MSB GENMASK(15, 0)
#define PTP_PTP_CUR_SEC_MSB_PTP_CUR_SEC_MSB_SET(x)\
@@ -4857,10 +5978,12 @@ enum sparx5_target {
FIELD_GET(PTP_PTP_CUR_SEC_MSB_PTP_CUR_SEC_MSB, x)
/* DEVCPU_PTP:PTP_TOD_DOMAINS:NTP_CUR_NSEC */
-#define PTP_NTP_CUR_NSEC(g) __REG(TARGET_PTP, 0, 1, 336, g, 3, 28, 24, 0, 1, 4)
+#define PTP_NTP_CUR_NSEC(g) __REG(TARGET_PTP,\
+ 0, 1, 336, g, 3, 28, 24, 0, 1, 4)
/* DEVCPU_PTP:PTP_PINS:PTP_PIN_CFG */
-#define PTP_PTP_PIN_CFG(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 0, 0, 1, 4)
+#define PTP_PTP_PIN_CFG(g) __REG(TARGET_PTP,\
+ 0, 1, 0, g, 5, 64, 0, 0, 1, 4)
#define PTP_PTP_PIN_CFG_PTP_PIN_ACTION GENMASK(28, 26)
#define PTP_PTP_PIN_CFG_PTP_PIN_ACTION_SET(x)\
@@ -4917,7 +6040,8 @@ enum sparx5_target {
FIELD_GET(PTP_PTP_PIN_CFG_PTP_PIN_OUTP_OFS, x)
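
/*
 * Sketch only: per-pin PTP configuration goes through the replicated
 * PTP_PTP_PIN_CFG(g) register; updating just the ACTION field is a masked
 * read-modify-write, assuming spx5_rmw() from sparx5_main.h.
 */
static void ptp_pin_action_sketch(struct sparx5 *sparx5, u32 pin, u32 action)
{
	spx5_rmw(PTP_PTP_PIN_CFG_PTP_PIN_ACTION_SET(action),
		 PTP_PTP_PIN_CFG_PTP_PIN_ACTION,
		 sparx5, PTP_PTP_PIN_CFG(pin));
}
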
/* DEVCPU_PTP:PTP_PINS:PTP_TOD_SEC_MSB */
-#define PTP_PTP_TOD_SEC_MSB(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 4, 0, 1, 4)
+#define PTP_PTP_TOD_SEC_MSB(g) __REG(TARGET_PTP,\
+ 0, 1, 0, g, 5, 64, 4, 0, 1, 4)
#define PTP_PTP_TOD_SEC_MSB_PTP_TOD_SEC_MSB GENMASK(15, 0)
#define PTP_PTP_TOD_SEC_MSB_PTP_TOD_SEC_MSB_SET(x)\
@@ -4926,10 +6050,12 @@ enum sparx5_target {
FIELD_GET(PTP_PTP_TOD_SEC_MSB_PTP_TOD_SEC_MSB, x)
/* DEVCPU_PTP:PTP_PINS:PTP_TOD_SEC_LSB */
-#define PTP_PTP_TOD_SEC_LSB(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 8, 0, 1, 4)
+#define PTP_PTP_TOD_SEC_LSB(g) __REG(TARGET_PTP,\
+ 0, 1, 0, g, 5, 64, 8, 0, 1, 4)
/* DEVCPU_PTP:PTP_PINS:PTP_TOD_NSEC */
-#define PTP_PTP_TOD_NSEC(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 12, 0, 1, 4)
+#define PTP_PTP_TOD_NSEC(g) __REG(TARGET_PTP,\
+ 0, 1, 0, g, 5, 64, 12, 0, 1, 4)
#define PTP_PTP_TOD_NSEC_PTP_TOD_NSEC GENMASK(29, 0)
#define PTP_PTP_TOD_NSEC_PTP_TOD_NSEC_SET(x)\
@@ -4938,7 +6064,8 @@ enum sparx5_target {
FIELD_GET(PTP_PTP_TOD_NSEC_PTP_TOD_NSEC, x)
/* DEVCPU_PTP:PTP_PINS:PTP_TOD_NSEC_FRAC */
-#define PTP_PTP_TOD_NSEC_FRAC(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 16, 0, 1, 4)
+#define PTP_PTP_TOD_NSEC_FRAC(g) __REG(TARGET_PTP,\
+ 0, 1, 0, g, 5, 64, 16, 0, 1, 4)
#define PTP_PTP_TOD_NSEC_FRAC_PTP_TOD_NSEC_FRAC GENMASK(7, 0)
#define PTP_PTP_TOD_NSEC_FRAC_PTP_TOD_NSEC_FRAC_SET(x)\
@@ -4947,10 +6074,12 @@ enum sparx5_target {
FIELD_GET(PTP_PTP_TOD_NSEC_FRAC_PTP_TOD_NSEC_FRAC, x)
/* DEVCPU_PTP:PTP_PINS:NTP_NSEC */
-#define PTP_NTP_NSEC(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 20, 0, 1, 4)
+#define PTP_NTP_NSEC(g) __REG(TARGET_PTP,\
+ 0, 1, 0, g, 5, 64, 20, 0, 1, 4)
/* DEVCPU_PTP:PTP_PINS:PIN_WF_HIGH_PERIOD */
-#define PTP_PIN_WF_HIGH_PERIOD(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 24, 0, 1, 4)
+#define PTP_PIN_WF_HIGH_PERIOD(g) __REG(TARGET_PTP,\
+ 0, 1, 0, g, 5, 64, 24, 0, 1, 4)
#define PTP_PIN_WF_HIGH_PERIOD_PIN_WFH GENMASK(29, 0)
#define PTP_PIN_WF_HIGH_PERIOD_PIN_WFH_SET(x)\
@@ -4959,7 +6088,8 @@ enum sparx5_target {
FIELD_GET(PTP_PIN_WF_HIGH_PERIOD_PIN_WFH, x)
/* DEVCPU_PTP:PTP_PINS:PIN_WF_LOW_PERIOD */
-#define PTP_PIN_WF_LOW_PERIOD(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 28, 0, 1, 4)
+#define PTP_PIN_WF_LOW_PERIOD(g) __REG(TARGET_PTP,\
+ 0, 1, 0, g, 5, 64, 28, 0, 1, 4)
#define PTP_PIN_WF_LOW_PERIOD_PIN_WFL GENMASK(29, 0)
#define PTP_PIN_WF_LOW_PERIOD_PIN_WFL_SET(x)\
@@ -4968,7 +6098,8 @@ enum sparx5_target {
FIELD_GET(PTP_PIN_WF_LOW_PERIOD_PIN_WFL, x)
/* DEVCPU_PTP:PTP_PINS:PIN_IOBOUNCH_DELAY */
-#define PTP_PIN_IOBOUNCH_DELAY(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 32, 0, 1, 4)
+#define PTP_PIN_IOBOUNCH_DELAY(g) __REG(TARGET_PTP,\
+ 0, 1, 0, g, 5, 64, 32, 0, 1, 4)
#define PTP_PIN_IOBOUNCH_DELAY_PIN_IOBOUNCH_VAL GENMASK(18, 3)
#define PTP_PIN_IOBOUNCH_DELAY_PIN_IOBOUNCH_VAL_SET(x)\
@@ -4983,7 +6114,8 @@ enum sparx5_target {
FIELD_GET(PTP_PIN_IOBOUNCH_DELAY_PIN_IOBOUNCH_CFG, x)
/* DEVCPU_PTP:PHASE_DETECTOR_CTRL:PHAD_CTRL */
-#define PTP_PHAD_CTRL(g) __REG(TARGET_PTP, 0, 1, 420, g, 5, 8, 0, 0, 1, 4)
+#define PTP_PHAD_CTRL(g) __REG(TARGET_PTP,\
+ 0, 1, 420, g, 5, 8, 0, 0, 1, 4)
#define PTP_PHAD_CTRL_PHAD_ENA BIT(7)
#define PTP_PHAD_CTRL_PHAD_ENA_SET(x)\
@@ -5010,10 +6142,12 @@ enum sparx5_target {
FIELD_GET(PTP_PHAD_CTRL_LOCK_ACC, x)
/* DEVCPU_PTP:PHASE_DETECTOR_CTRL:PHAD_CYC_STAT */
-#define PTP_PHAD_CYC_STAT(g) __REG(TARGET_PTP, 0, 1, 420, g, 5, 8, 4, 0, 1, 4)
+#define PTP_PHAD_CYC_STAT(g) __REG(TARGET_PTP,\
+ 0, 1, 420, g, 5, 8, 4, 0, 1, 4)
/* QFWD:SYSTEM:SWITCH_PORT_MODE */
-#define QFWD_SWITCH_PORT_MODE(r) __REG(TARGET_QFWD, 0, 1, 0, 0, 1, 340, 0, r, 70, 4)
+#define QFWD_SWITCH_PORT_MODE(r) __REG(TARGET_QFWD,\
+ 0, 1, 0, 0, 1, 340, 0, r, 70, 4)
#define QFWD_SWITCH_PORT_MODE_PORT_ENA BIT(19)
#define QFWD_SWITCH_PORT_MODE_PORT_ENA_SET(x)\
@@ -5070,7 +6204,8 @@ enum sparx5_target {
FIELD_GET(QFWD_SWITCH_PORT_MODE_LEARNALL_MORE, x)
/* QRES:RES_CTRL:RES_CFG */
-#define QRES_RES_CFG(g) __REG(TARGET_QRES, 0, 1, 0, g, 5120, 16, 0, 0, 1, 4)
+#define QRES_RES_CFG(g) __REG(TARGET_QRES,\
+ 0, 1, 0, g, 5120, 16, 0, 0, 1, 4)
#define QRES_RES_CFG_WM_HIGH GENMASK(11, 0)
#define QRES_RES_CFG_WM_HIGH_SET(x)\
@@ -5079,7 +6214,8 @@ enum sparx5_target {
FIELD_GET(QRES_RES_CFG_WM_HIGH, x)
/* QRES:RES_CTRL:RES_STAT */
-#define QRES_RES_STAT(g) __REG(TARGET_QRES, 0, 1, 0, g, 5120, 16, 4, 0, 1, 4)
+#define QRES_RES_STAT(g) __REG(TARGET_QRES,\
+ 0, 1, 0, g, 5120, 16, 4, 0, 1, 4)
#define QRES_RES_STAT_MAXUSE GENMASK(20, 0)
#define QRES_RES_STAT_MAXUSE_SET(x)\
@@ -5088,7 +6224,8 @@ enum sparx5_target {
FIELD_GET(QRES_RES_STAT_MAXUSE, x)
/* QRES:RES_CTRL:RES_STAT_CUR */
-#define QRES_RES_STAT_CUR(g) __REG(TARGET_QRES, 0, 1, 0, g, 5120, 16, 8, 0, 1, 4)
+#define QRES_RES_STAT_CUR(g) __REG(TARGET_QRES,\
+ 0, 1, 0, g, 5120, 16, 8, 0, 1, 4)
#define QRES_RES_STAT_CUR_INUSE GENMASK(20, 0)
#define QRES_RES_STAT_CUR_INUSE_SET(x)\
@@ -5097,7 +6234,8 @@ enum sparx5_target {
FIELD_GET(QRES_RES_STAT_CUR_INUSE, x)
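
/*
 * Sketch only: the QRES status registers are replicated per resource index
 * (5120 instances), so a current-usage read is one spx5_rd() (assumed from
 * sparx5_main.h) plus a field extract.
 */
static u32 qres_inuse_sketch(struct sparx5 *sparx5, u32 idx)
{
	return QRES_RES_STAT_CUR_INUSE_GET(spx5_rd(sparx5,
						   QRES_RES_STAT_CUR(idx)));
}
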
/* DEVCPU_QS:XTR:XTR_GRP_CFG */
-#define QS_XTR_GRP_CFG(r) __REG(TARGET_QS, 0, 1, 0, 0, 1, 36, 0, r, 2, 4)
+#define QS_XTR_GRP_CFG(r) __REG(TARGET_QS,\
+ 0, 1, 0, 0, 1, 36, 0, r, 2, 4)
#define QS_XTR_GRP_CFG_MODE GENMASK(3, 2)
#define QS_XTR_GRP_CFG_MODE_SET(x)\
@@ -5118,10 +6256,12 @@ enum sparx5_target {
FIELD_GET(QS_XTR_GRP_CFG_BYTE_SWAP, x)
/* DEVCPU_QS:XTR:XTR_RD */
-#define QS_XTR_RD(r) __REG(TARGET_QS, 0, 1, 0, 0, 1, 36, 8, r, 2, 4)
+#define QS_XTR_RD(r) __REG(TARGET_QS,\
+ 0, 1, 0, 0, 1, 36, 8, r, 2, 4)
/* DEVCPU_QS:XTR:XTR_FLUSH */
-#define QS_XTR_FLUSH __REG(TARGET_QS, 0, 1, 0, 0, 1, 36, 24, 0, 1, 4)
+#define QS_XTR_FLUSH __REG(TARGET_QS,\
+ 0, 1, 0, 0, 1, 36, 24, 0, 1, 4)
#define QS_XTR_FLUSH_FLUSH GENMASK(1, 0)
#define QS_XTR_FLUSH_FLUSH_SET(x)\
@@ -5130,7 +6270,8 @@ enum sparx5_target {
FIELD_GET(QS_XTR_FLUSH_FLUSH, x)
/* DEVCPU_QS:XTR:XTR_DATA_PRESENT */
-#define QS_XTR_DATA_PRESENT __REG(TARGET_QS, 0, 1, 0, 0, 1, 36, 28, 0, 1, 4)
+#define QS_XTR_DATA_PRESENT __REG(TARGET_QS,\
+ 0, 1, 0, 0, 1, 36, 28, 0, 1, 4)
#define QS_XTR_DATA_PRESENT_DATA_PRESENT GENMASK(1, 0)
#define QS_XTR_DATA_PRESENT_DATA_PRESENT_SET(x)\
@@ -5139,7 +6280,8 @@ enum sparx5_target {
FIELD_GET(QS_XTR_DATA_PRESENT_DATA_PRESENT, x)
/* DEVCPU_QS:INJ:INJ_GRP_CFG */
-#define QS_INJ_GRP_CFG(r) __REG(TARGET_QS, 0, 1, 36, 0, 1, 40, 0, r, 2, 4)
+#define QS_INJ_GRP_CFG(r) __REG(TARGET_QS,\
+ 0, 1, 36, 0, 1, 40, 0, r, 2, 4)
#define QS_INJ_GRP_CFG_MODE GENMASK(3, 2)
#define QS_INJ_GRP_CFG_MODE_SET(x)\
@@ -5154,10 +6296,12 @@ enum sparx5_target {
FIELD_GET(QS_INJ_GRP_CFG_BYTE_SWAP, x)
/* DEVCPU_QS:INJ:INJ_WR */
-#define QS_INJ_WR(r) __REG(TARGET_QS, 0, 1, 36, 0, 1, 40, 8, r, 2, 4)
+#define QS_INJ_WR(r) __REG(TARGET_QS,\
+ 0, 1, 36, 0, 1, 40, 8, r, 2, 4)
/* DEVCPU_QS:INJ:INJ_CTRL */
-#define QS_INJ_CTRL(r) __REG(TARGET_QS, 0, 1, 36, 0, 1, 40, 16, r, 2, 4)
+#define QS_INJ_CTRL(r) __REG(TARGET_QS,\
+ 0, 1, 36, 0, 1, 40, 16, r, 2, 4)
#define QS_INJ_CTRL_GAP_SIZE GENMASK(24, 21)
#define QS_INJ_CTRL_GAP_SIZE_SET(x)\
@@ -5190,7 +6334,8 @@ enum sparx5_target {
FIELD_GET(QS_INJ_CTRL_VLD_BYTES, x)
/* DEVCPU_QS:INJ:INJ_STATUS */
-#define QS_INJ_STATUS __REG(TARGET_QS, 0, 1, 36, 0, 1, 40, 24, 0, 1, 4)
+#define QS_INJ_STATUS __REG(TARGET_QS,\
+ 0, 1, 36, 0, 1, 40, 24, 0, 1, 4)
#define QS_INJ_STATUS_WMARK_REACHED GENMASK(5, 4)
#define QS_INJ_STATUS_WMARK_REACHED_SET(x)\
@@ -5211,7 +6356,8 @@ enum sparx5_target {
FIELD_GET(QS_INJ_STATUS_INJ_IN_PROGRESS, x)
/* QSYS:PAUSE_CFG:PAUSE_CFG */
-#define QSYS_PAUSE_CFG(r) __REG(TARGET_QSYS, 0, 1, 544, 0, 1, 1128, 0, r, 70, 4)
+#define QSYS_PAUSE_CFG(r) __REG(TARGET_QSYS,\
+ 0, 1, 544, 0, 1, 1128, 0, r, 70, 4)
#define QSYS_PAUSE_CFG_PAUSE_START GENMASK(25, 14)
#define QSYS_PAUSE_CFG_PAUSE_START_SET(x)\
@@ -5238,7 +6384,8 @@ enum sparx5_target {
FIELD_GET(QSYS_PAUSE_CFG_AGGRESSIVE_TAILDROP_ENA, x)
/* QSYS:PAUSE_CFG:ATOP */
-#define QSYS_ATOP(r) __REG(TARGET_QSYS, 0, 1, 544, 0, 1, 1128, 284, r, 70, 4)
+#define QSYS_ATOP(r) __REG(TARGET_QSYS,\
+ 0, 1, 544, 0, 1, 1128, 284, r, 70, 4)
#define QSYS_ATOP_ATOP GENMASK(11, 0)
#define QSYS_ATOP_ATOP_SET(x)\
@@ -5247,7 +6394,8 @@ enum sparx5_target {
FIELD_GET(QSYS_ATOP_ATOP, x)
/* QSYS:PAUSE_CFG:FWD_PRESSURE */
-#define QSYS_FWD_PRESSURE(r) __REG(TARGET_QSYS, 0, 1, 544, 0, 1, 1128, 564, r, 70, 4)
+#define QSYS_FWD_PRESSURE(r) __REG(TARGET_QSYS,\
+ 0, 1, 544, 0, 1, 1128, 564, r, 70, 4)
#define QSYS_FWD_PRESSURE_FWD_PRESSURE GENMASK(11, 1)
#define QSYS_FWD_PRESSURE_FWD_PRESSURE_SET(x)\
@@ -5262,7 +6410,8 @@ enum sparx5_target {
FIELD_GET(QSYS_FWD_PRESSURE_FWD_PRESSURE_DIS, x)
/* QSYS:PAUSE_CFG:ATOP_TOT_CFG */
-#define QSYS_ATOP_TOT_CFG __REG(TARGET_QSYS, 0, 1, 544, 0, 1, 1128, 844, 0, 1, 4)
+#define QSYS_ATOP_TOT_CFG __REG(TARGET_QSYS,\
+ 0, 1, 544, 0, 1, 1128, 844, 0, 1, 4)
#define QSYS_ATOP_TOT_CFG_ATOP_TOT GENMASK(11, 0)
#define QSYS_ATOP_TOT_CFG_ATOP_TOT_SET(x)\
@@ -5271,7 +6420,8 @@ enum sparx5_target {
FIELD_GET(QSYS_ATOP_TOT_CFG_ATOP_TOT, x)
/* QSYS:CALCFG:CAL_AUTO */
-#define QSYS_CAL_AUTO(r) __REG(TARGET_QSYS, 0, 1, 2304, 0, 1, 40, 0, r, 7, 4)
+#define QSYS_CAL_AUTO(r) __REG(TARGET_QSYS,\
+ 0, 1, 2304, 0, 1, 40, 0, r, 7, 4)
#define QSYS_CAL_AUTO_CAL_AUTO GENMASK(29, 0)
#define QSYS_CAL_AUTO_CAL_AUTO_SET(x)\
@@ -5280,7 +6430,8 @@ enum sparx5_target {
FIELD_GET(QSYS_CAL_AUTO_CAL_AUTO, x)
/* QSYS:CALCFG:CAL_CTRL */
-#define QSYS_CAL_CTRL __REG(TARGET_QSYS, 0, 1, 2304, 0, 1, 40, 36, 0, 1, 4)
+#define QSYS_CAL_CTRL __REG(TARGET_QSYS,\
+ 0, 1, 2304, 0, 1, 40, 36, 0, 1, 4)
#define QSYS_CAL_CTRL_CAL_MODE GENMASK(14, 11)
#define QSYS_CAL_CTRL_CAL_MODE_SET(x)\
@@ -5301,7 +6452,8 @@ enum sparx5_target {
FIELD_GET(QSYS_CAL_CTRL_CAL_AUTO_ERROR, x)
/* QSYS:RAM_CTRL:RAM_INIT */
-#define QSYS_RAM_INIT __REG(TARGET_QSYS, 0, 1, 2344, 0, 1, 4, 0, 0, 1, 4)
+#define QSYS_RAM_INIT __REG(TARGET_QSYS,\
+ 0, 1, 2344, 0, 1, 4, 0, 0, 1, 4)
#define QSYS_RAM_INIT_RAM_INIT BIT(1)
#define QSYS_RAM_INIT_RAM_INIT_SET(x)\
@@ -5316,7 +6468,8 @@ enum sparx5_target {
FIELD_GET(QSYS_RAM_INIT_RAM_CFG_HOOK, x)
/* REW:COMMON:OWN_UPSID */
-#define REW_OWN_UPSID(r) __REG(TARGET_REW, 0, 1, 387264, 0, 1, 1232, 0, r, 3, 4)
+#define REW_OWN_UPSID(r) __REG(TARGET_REW,\
+ 0, 1, 387264, 0, 1, 1232, 0, r, 3, 4)
#define REW_OWN_UPSID_OWN_UPSID GENMASK(4, 0)
#define REW_OWN_UPSID_OWN_UPSID_SET(x)\
@@ -5324,8 +6477,71 @@ enum sparx5_target {
#define REW_OWN_UPSID_OWN_UPSID_GET(x)\
FIELD_GET(REW_OWN_UPSID_OWN_UPSID, x)
+/* REW:COMMON:RTAG_ETAG_CTRL */
+#define REW_RTAG_ETAG_CTRL(r) __REG(TARGET_REW,\
+ 0, 1, 387264, 0, 1, 1232, 560, r, 70, 4)
+
+#define REW_RTAG_ETAG_CTRL_IPE_TBL GENMASK(9, 3)
+#define REW_RTAG_ETAG_CTRL_IPE_TBL_SET(x)\
+ FIELD_PREP(REW_RTAG_ETAG_CTRL_IPE_TBL, x)
+#define REW_RTAG_ETAG_CTRL_IPE_TBL_GET(x)\
+ FIELD_GET(REW_RTAG_ETAG_CTRL_IPE_TBL, x)
+
+#define REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA GENMASK(2, 1)
+#define REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA_SET(x)\
+ FIELD_PREP(REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA, x)
+#define REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA_GET(x)\
+ FIELD_GET(REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA, x)
+
+#define REW_RTAG_ETAG_CTRL_KEEP_ETAG BIT(0)
+#define REW_RTAG_ETAG_CTRL_KEEP_ETAG_SET(x)\
+ FIELD_PREP(REW_RTAG_ETAG_CTRL_KEEP_ETAG, x)
+#define REW_RTAG_ETAG_CTRL_KEEP_ETAG_GET(x)\
+ FIELD_GET(REW_RTAG_ETAG_CTRL_KEEP_ETAG, x)
+
+/* REW:COMMON:ES0_CTRL */
+#define REW_ES0_CTRL __REG(TARGET_REW,\
+ 0, 1, 387264, 0, 1, 1232, 852, 0, 1, 4)
+
+#define REW_ES0_CTRL_ES0_BY_RT_FWD BIT(5)
+#define REW_ES0_CTRL_ES0_BY_RT_FWD_SET(x)\
+ FIELD_PREP(REW_ES0_CTRL_ES0_BY_RT_FWD, x)
+#define REW_ES0_CTRL_ES0_BY_RT_FWD_GET(x)\
+ FIELD_GET(REW_ES0_CTRL_ES0_BY_RT_FWD, x)
+
+#define REW_ES0_CTRL_ES0_BY_RLEG BIT(4)
+#define REW_ES0_CTRL_ES0_BY_RLEG_SET(x)\
+ FIELD_PREP(REW_ES0_CTRL_ES0_BY_RLEG, x)
+#define REW_ES0_CTRL_ES0_BY_RLEG_GET(x)\
+ FIELD_GET(REW_ES0_CTRL_ES0_BY_RLEG, x)
+
+#define REW_ES0_CTRL_ES0_DPORT_ENA BIT(3)
+#define REW_ES0_CTRL_ES0_DPORT_ENA_SET(x)\
+ FIELD_PREP(REW_ES0_CTRL_ES0_DPORT_ENA, x)
+#define REW_ES0_CTRL_ES0_DPORT_ENA_GET(x)\
+ FIELD_GET(REW_ES0_CTRL_ES0_DPORT_ENA, x)
+
+#define REW_ES0_CTRL_ES0_FRM_LBK_CFG BIT(2)
+#define REW_ES0_CTRL_ES0_FRM_LBK_CFG_SET(x)\
+ FIELD_PREP(REW_ES0_CTRL_ES0_FRM_LBK_CFG, x)
+#define REW_ES0_CTRL_ES0_FRM_LBK_CFG_GET(x)\
+ FIELD_GET(REW_ES0_CTRL_ES0_FRM_LBK_CFG, x)
+
+#define REW_ES0_CTRL_ES0_VD2_ENCAP_ID_ENA BIT(1)
+#define REW_ES0_CTRL_ES0_VD2_ENCAP_ID_ENA_SET(x)\
+ FIELD_PREP(REW_ES0_CTRL_ES0_VD2_ENCAP_ID_ENA, x)
+#define REW_ES0_CTRL_ES0_VD2_ENCAP_ID_ENA_GET(x)\
+ FIELD_GET(REW_ES0_CTRL_ES0_VD2_ENCAP_ID_ENA, x)
+
+#define REW_ES0_CTRL_ES0_LU_ENA BIT(0)
+#define REW_ES0_CTRL_ES0_LU_ENA_SET(x)\
+ FIELD_PREP(REW_ES0_CTRL_ES0_LU_ENA, x)
+#define REW_ES0_CTRL_ES0_LU_ENA_GET(x)\
+ FIELD_GET(REW_ES0_CTRL_ES0_LU_ENA, x)
+
/* REW:PORT:PORT_VLAN_CFG */
-#define REW_PORT_VLAN_CFG(g) __REG(TARGET_REW, 0, 1, 360448, g, 70, 256, 0, 0, 1, 4)
+#define REW_PORT_VLAN_CFG(g) __REG(TARGET_REW,\
+ 0, 1, 360448, g, 70, 256, 0, 0, 1, 4)
#define REW_PORT_VLAN_CFG_PORT_PCP GENMASK(15, 13)
#define REW_PORT_VLAN_CFG_PORT_PCP_SET(x)\
@@ -5345,8 +6561,49 @@ enum sparx5_target {
#define REW_PORT_VLAN_CFG_PORT_VID_GET(x)\
FIELD_GET(REW_PORT_VLAN_CFG_PORT_VID, x)
+/* REW:PORT:PCP_MAP_DE0 */
+#define REW_PCP_MAP_DE0(g, r) __REG(TARGET_REW,\
+ 0, 1, 360448, g, 70, 256, 4, r, 8, 4)
+
+#define REW_PCP_MAP_DE0_PCP_DE0 GENMASK(2, 0)
+#define REW_PCP_MAP_DE0_PCP_DE0_SET(x)\
+ FIELD_PREP(REW_PCP_MAP_DE0_PCP_DE0, x)
+#define REW_PCP_MAP_DE0_PCP_DE0_GET(x)\
+ FIELD_GET(REW_PCP_MAP_DE0_PCP_DE0, x)
+
+/* REW:PORT:PCP_MAP_DE1 */
+#define REW_PCP_MAP_DE1(g, r) __REG(TARGET_REW,\
+ 0, 1, 360448, g, 70, 256, 36, r, 8, 4)
+
+#define REW_PCP_MAP_DE1_PCP_DE1 GENMASK(2, 0)
+#define REW_PCP_MAP_DE1_PCP_DE1_SET(x)\
+ FIELD_PREP(REW_PCP_MAP_DE1_PCP_DE1, x)
+#define REW_PCP_MAP_DE1_PCP_DE1_GET(x)\
+ FIELD_GET(REW_PCP_MAP_DE1_PCP_DE1, x)
+
+/* REW:PORT:DEI_MAP_DE0 */
+#define REW_DEI_MAP_DE0(g, r) __REG(TARGET_REW,\
+ 0, 1, 360448, g, 70, 256, 68, r, 8, 4)
+
+#define REW_DEI_MAP_DE0_DEI_DE0 BIT(0)
+#define REW_DEI_MAP_DE0_DEI_DE0_SET(x)\
+ FIELD_PREP(REW_DEI_MAP_DE0_DEI_DE0, x)
+#define REW_DEI_MAP_DE0_DEI_DE0_GET(x)\
+ FIELD_GET(REW_DEI_MAP_DE0_DEI_DE0, x)
+
+/* REW:PORT:DEI_MAP_DE1 */
+#define REW_DEI_MAP_DE1(g, r) __REG(TARGET_REW,\
+ 0, 1, 360448, g, 70, 256, 100, r, 8, 4)
+
+#define REW_DEI_MAP_DE1_DEI_DE1 BIT(0)
+#define REW_DEI_MAP_DE1_DEI_DE1_SET(x)\
+ FIELD_PREP(REW_DEI_MAP_DE1_DEI_DE1, x)
+#define REW_DEI_MAP_DE1_DEI_DE1_GET(x)\
+ FIELD_GET(REW_DEI_MAP_DE1_DEI_DE1, x)
+
/* REW:PORT:TAG_CTRL */
-#define REW_TAG_CTRL(g) __REG(TARGET_REW, 0, 1, 360448, g, 70, 256, 132, 0, 1, 4)
+#define REW_TAG_CTRL(g) __REG(TARGET_REW,\
+ 0, 1, 360448, g, 70, 256, 132, 0, 1, 4)
#define REW_TAG_CTRL_TAG_CFG_OBEY_WAS_TAGGED BIT(13)
#define REW_TAG_CTRL_TAG_CFG_OBEY_WAS_TAGGED_SET(x)\
@@ -5384,8 +6641,25 @@ enum sparx5_target {
#define REW_TAG_CTRL_TAG_DEI_CFG_GET(x)\
FIELD_GET(REW_TAG_CTRL_TAG_DEI_CFG, x)
+/* REW:PORT:DSCP_MAP */
+#define REW_DSCP_MAP(g) __REG(TARGET_REW,\
+ 0, 1, 360448, g, 70, 256, 136, 0, 1, 4)
+
+#define REW_DSCP_MAP_DSCP_UPDATE_ENA BIT(1)
+#define REW_DSCP_MAP_DSCP_UPDATE_ENA_SET(x)\
+ FIELD_PREP(REW_DSCP_MAP_DSCP_UPDATE_ENA, x)
+#define REW_DSCP_MAP_DSCP_UPDATE_ENA_GET(x)\
+ FIELD_GET(REW_DSCP_MAP_DSCP_UPDATE_ENA, x)
+
+#define REW_DSCP_MAP_DSCP_REMAP_ENA BIT(0)
+#define REW_DSCP_MAP_DSCP_REMAP_ENA_SET(x)\
+ FIELD_PREP(REW_DSCP_MAP_DSCP_REMAP_ENA, x)
+#define REW_DSCP_MAP_DSCP_REMAP_ENA_GET(x)\
+ FIELD_GET(REW_DSCP_MAP_DSCP_REMAP_ENA, x)
+
/* REW:PTP_CTRL:PTP_TWOSTEP_CTRL */
-#define REW_PTP_TWOSTEP_CTRL __REG(TARGET_REW, 0, 1, 378368, 0, 1, 40, 0, 0, 1, 4)
+#define REW_PTP_TWOSTEP_CTRL __REG(TARGET_REW,\
+ 0, 1, 378368, 0, 1, 40, 0, 0, 1, 4)
#define REW_PTP_TWOSTEP_CTRL_PTP_OVWR_ENA BIT(12)
#define REW_PTP_TWOSTEP_CTRL_PTP_OVWR_ENA_SET(x)\
@@ -5424,7 +6698,8 @@ enum sparx5_target {
FIELD_GET(REW_PTP_TWOSTEP_CTRL_PTP_OVFL, x)
/* REW:PTP_CTRL:PTP_TWOSTEP_STAMP */
-#define REW_PTP_TWOSTEP_STAMP __REG(TARGET_REW, 0, 1, 378368, 0, 1, 40, 4, 0, 1, 4)
+#define REW_PTP_TWOSTEP_STAMP __REG(TARGET_REW,\
+ 0, 1, 378368, 0, 1, 40, 4, 0, 1, 4)
#define REW_PTP_TWOSTEP_STAMP_STAMP_NSEC GENMASK(29, 0)
#define REW_PTP_TWOSTEP_STAMP_STAMP_NSEC_SET(x)\
@@ -5433,7 +6708,8 @@ enum sparx5_target {
FIELD_GET(REW_PTP_TWOSTEP_STAMP_STAMP_NSEC, x)
/* REW:PTP_CTRL:PTP_TWOSTEP_STAMP_SUBNS */
-#define REW_PTP_TWOSTEP_STAMP_SUBNS __REG(TARGET_REW, 0, 1, 378368, 0, 1, 40, 8, 0, 1, 4)
+#define REW_PTP_TWOSTEP_STAMP_SUBNS __REG(TARGET_REW,\
+ 0, 1, 378368, 0, 1, 40, 8, 0, 1, 4)
#define REW_PTP_TWOSTEP_STAMP_SUBNS_STAMP_SUB_NSEC GENMASK(7, 0)
#define REW_PTP_TWOSTEP_STAMP_SUBNS_STAMP_SUB_NSEC_SET(x)\
@@ -5442,13 +6718,16 @@ enum sparx5_target {
FIELD_GET(REW_PTP_TWOSTEP_STAMP_SUBNS_STAMP_SUB_NSEC, x)
/* REW:PTP_CTRL:PTP_RSRV_NOT_ZERO */
-#define REW_PTP_RSRV_NOT_ZERO __REG(TARGET_REW, 0, 1, 378368, 0, 1, 40, 12, 0, 1, 4)
+#define REW_PTP_RSRV_NOT_ZERO __REG(TARGET_REW,\
+ 0, 1, 378368, 0, 1, 40, 12, 0, 1, 4)
/* REW:PTP_CTRL:PTP_RSRV_NOT_ZERO1 */
-#define REW_PTP_RSRV_NOT_ZERO1 __REG(TARGET_REW, 0, 1, 378368, 0, 1, 40, 16, 0, 1, 4)
+#define REW_PTP_RSRV_NOT_ZERO1 __REG(TARGET_REW,\
+ 0, 1, 378368, 0, 1, 40, 16, 0, 1, 4)
/* REW:PTP_CTRL:PTP_RSRV_NOT_ZERO2 */
-#define REW_PTP_RSRV_NOT_ZERO2 __REG(TARGET_REW, 0, 1, 378368, 0, 1, 40, 20, 0, 1, 4)
+#define REW_PTP_RSRV_NOT_ZERO2 __REG(TARGET_REW,\
+ 0, 1, 378368, 0, 1, 40, 20, 0, 1, 4)
#define REW_PTP_RSRV_NOT_ZERO2_PTP_RSRV_NOT_ZERO2 GENMASK(5, 0)
#define REW_PTP_RSRV_NOT_ZERO2_PTP_RSRV_NOT_ZERO2_SET(x)\
@@ -5457,7 +6736,8 @@ enum sparx5_target {
FIELD_GET(REW_PTP_RSRV_NOT_ZERO2_PTP_RSRV_NOT_ZERO2, x)
/* REW:PTP_CTRL:PTP_GEN_STAMP_FMT */
-#define REW_PTP_GEN_STAMP_FMT(r) __REG(TARGET_REW, 0, 1, 378368, 0, 1, 40, 24, r, 4, 4)
+#define REW_PTP_GEN_STAMP_FMT(r) __REG(TARGET_REW,\
+ 0, 1, 378368, 0, 1, 40, 24, r, 4, 4)
#define REW_PTP_GEN_STAMP_FMT_RT_OFS GENMASK(6, 2)
#define REW_PTP_GEN_STAMP_FMT_RT_OFS_SET(x)\
@@ -5472,7 +6752,8 @@ enum sparx5_target {
FIELD_GET(REW_PTP_GEN_STAMP_FMT_RT_FMT, x)
/* REW:RAM_CTRL:RAM_INIT */
-#define REW_RAM_INIT __REG(TARGET_REW, 0, 1, 378696, 0, 1, 4, 0, 0, 1, 4)
+#define REW_RAM_INIT __REG(TARGET_REW,\
+ 0, 1, 378696, 0, 1, 4, 0, 0, 1, 4)
#define REW_RAM_INIT_RAM_INIT BIT(1)
#define REW_RAM_INIT_RAM_INIT_SET(x)\
@@ -5486,8 +6767,333 @@ enum sparx5_target {
#define REW_RAM_INIT_RAM_CFG_HOOK_GET(x)\
FIELD_GET(REW_RAM_INIT_RAM_CFG_HOOK, x)
+/* VCAP_ES0:VCAP_CORE_CFG:VCAP_UPDATE_CTRL */
+#define VCAP_ES0_CTRL __REG(TARGET_VCAP_ES0,\
+ 0, 1, 0, 0, 1, 8, 0, 0, 1, 4)
+
+#define VCAP_ES0_CTRL_UPDATE_CMD GENMASK(24, 22)
+#define VCAP_ES0_CTRL_UPDATE_CMD_SET(x)\
+ FIELD_PREP(VCAP_ES0_CTRL_UPDATE_CMD, x)
+#define VCAP_ES0_CTRL_UPDATE_CMD_GET(x)\
+ FIELD_GET(VCAP_ES0_CTRL_UPDATE_CMD, x)
+
+#define VCAP_ES0_CTRL_UPDATE_ENTRY_DIS BIT(21)
+#define VCAP_ES0_CTRL_UPDATE_ENTRY_DIS_SET(x)\
+ FIELD_PREP(VCAP_ES0_CTRL_UPDATE_ENTRY_DIS, x)
+#define VCAP_ES0_CTRL_UPDATE_ENTRY_DIS_GET(x)\
+ FIELD_GET(VCAP_ES0_CTRL_UPDATE_ENTRY_DIS, x)
+
+#define VCAP_ES0_CTRL_UPDATE_ACTION_DIS BIT(20)
+#define VCAP_ES0_CTRL_UPDATE_ACTION_DIS_SET(x)\
+ FIELD_PREP(VCAP_ES0_CTRL_UPDATE_ACTION_DIS, x)
+#define VCAP_ES0_CTRL_UPDATE_ACTION_DIS_GET(x)\
+ FIELD_GET(VCAP_ES0_CTRL_UPDATE_ACTION_DIS, x)
+
+#define VCAP_ES0_CTRL_UPDATE_CNT_DIS BIT(19)
+#define VCAP_ES0_CTRL_UPDATE_CNT_DIS_SET(x)\
+ FIELD_PREP(VCAP_ES0_CTRL_UPDATE_CNT_DIS, x)
+#define VCAP_ES0_CTRL_UPDATE_CNT_DIS_GET(x)\
+ FIELD_GET(VCAP_ES0_CTRL_UPDATE_CNT_DIS, x)
+
+#define VCAP_ES0_CTRL_UPDATE_ADDR GENMASK(18, 3)
+#define VCAP_ES0_CTRL_UPDATE_ADDR_SET(x)\
+ FIELD_PREP(VCAP_ES0_CTRL_UPDATE_ADDR, x)
+#define VCAP_ES0_CTRL_UPDATE_ADDR_GET(x)\
+ FIELD_GET(VCAP_ES0_CTRL_UPDATE_ADDR, x)
+
+#define VCAP_ES0_CTRL_UPDATE_SHOT BIT(2)
+#define VCAP_ES0_CTRL_UPDATE_SHOT_SET(x)\
+ FIELD_PREP(VCAP_ES0_CTRL_UPDATE_SHOT, x)
+#define VCAP_ES0_CTRL_UPDATE_SHOT_GET(x)\
+ FIELD_GET(VCAP_ES0_CTRL_UPDATE_SHOT, x)
+
+#define VCAP_ES0_CTRL_CLEAR_CACHE BIT(1)
+#define VCAP_ES0_CTRL_CLEAR_CACHE_SET(x)\
+ FIELD_PREP(VCAP_ES0_CTRL_CLEAR_CACHE, x)
+#define VCAP_ES0_CTRL_CLEAR_CACHE_GET(x)\
+ FIELD_GET(VCAP_ES0_CTRL_CLEAR_CACHE, x)
+
+#define VCAP_ES0_CTRL_MV_TRAFFIC_IGN BIT(0)
+#define VCAP_ES0_CTRL_MV_TRAFFIC_IGN_SET(x)\
+ FIELD_PREP(VCAP_ES0_CTRL_MV_TRAFFIC_IGN, x)
+#define VCAP_ES0_CTRL_MV_TRAFFIC_IGN_GET(x)\
+ FIELD_GET(VCAP_ES0_CTRL_MV_TRAFFIC_IGN, x)
+
+/* VCAP_ES0:VCAP_CORE_CFG:VCAP_MV_CFG */
+#define VCAP_ES0_CFG __REG(TARGET_VCAP_ES0,\
+ 0, 1, 0, 0, 1, 8, 4, 0, 1, 4)
+
+#define VCAP_ES0_CFG_MV_NUM_POS GENMASK(31, 16)
+#define VCAP_ES0_CFG_MV_NUM_POS_SET(x)\
+ FIELD_PREP(VCAP_ES0_CFG_MV_NUM_POS, x)
+#define VCAP_ES0_CFG_MV_NUM_POS_GET(x)\
+ FIELD_GET(VCAP_ES0_CFG_MV_NUM_POS, x)
+
+#define VCAP_ES0_CFG_MV_SIZE GENMASK(15, 0)
+#define VCAP_ES0_CFG_MV_SIZE_SET(x)\
+ FIELD_PREP(VCAP_ES0_CFG_MV_SIZE, x)
+#define VCAP_ES0_CFG_MV_SIZE_GET(x)\
+ FIELD_GET(VCAP_ES0_CFG_MV_SIZE, x)
+
+/* VCAP_ES0:VCAP_CORE_CACHE:VCAP_ENTRY_DAT */
+#define VCAP_ES0_VCAP_ENTRY_DAT(r) __REG(TARGET_VCAP_ES0,\
+ 0, 1, 8, 0, 1, 904, 0, r, 64, 4)
+
+/* VCAP_ES0:VCAP_CORE_CACHE:VCAP_MASK_DAT */
+#define VCAP_ES0_VCAP_MASK_DAT(r) __REG(TARGET_VCAP_ES0,\
+ 0, 1, 8, 0, 1, 904, 256, r, 64, 4)
+
+/* VCAP_ES0:VCAP_CORE_CACHE:VCAP_ACTION_DAT */
+#define VCAP_ES0_VCAP_ACTION_DAT(r) __REG(TARGET_VCAP_ES0,\
+ 0, 1, 8, 0, 1, 904, 512, r, 64, 4)
+
+/* VCAP_ES0:VCAP_CORE_CACHE:VCAP_CNT_DAT */
+#define VCAP_ES0_VCAP_CNT_DAT(r) __REG(TARGET_VCAP_ES0,\
+ 0, 1, 8, 0, 1, 904, 768, r, 32, 4)
+
+/* VCAP_ES0:VCAP_CORE_CACHE:VCAP_CNT_FW_DAT */
+#define VCAP_ES0_VCAP_CNT_FW_DAT __REG(TARGET_VCAP_ES0,\
+ 0, 1, 8, 0, 1, 904, 896, 0, 1, 4)
+
+/* VCAP_ES0:VCAP_CORE_CACHE:VCAP_TG_DAT */
+#define VCAP_ES0_VCAP_TG_DAT __REG(TARGET_VCAP_ES0,\
+ 0, 1, 8, 0, 1, 904, 900, 0, 1, 4)
+
+/* VCAP_ES0:VCAP_CORE_MAP:VCAP_CORE_IDX */
+#define VCAP_ES0_IDX __REG(TARGET_VCAP_ES0,\
+ 0, 1, 912, 0, 1, 8, 0, 0, 1, 4)
+
+#define VCAP_ES0_IDX_CORE_IDX GENMASK(3, 0)
+#define VCAP_ES0_IDX_CORE_IDX_SET(x)\
+ FIELD_PREP(VCAP_ES0_IDX_CORE_IDX, x)
+#define VCAP_ES0_IDX_CORE_IDX_GET(x)\
+ FIELD_GET(VCAP_ES0_IDX_CORE_IDX, x)
+
+/* VCAP_ES0:VCAP_CORE_MAP:VCAP_CORE_MAP */
+#define VCAP_ES0_MAP __REG(TARGET_VCAP_ES0,\
+ 0, 1, 912, 0, 1, 8, 4, 0, 1, 4)
+
+#define VCAP_ES0_MAP_CORE_MAP GENMASK(2, 0)
+#define VCAP_ES0_MAP_CORE_MAP_SET(x)\
+ FIELD_PREP(VCAP_ES0_MAP_CORE_MAP, x)
+#define VCAP_ES0_MAP_CORE_MAP_GET(x)\
+ FIELD_GET(VCAP_ES0_MAP_CORE_MAP, x)
+
+/* VCAP_ES0:VCAP_CORE_STICKY:VCAP_STICKY */
+#define VCAP_ES0_VCAP_STICKY __REG(TARGET_VCAP_ES0,\
+ 0, 1, 920, 0, 1, 4, 0, 0, 1, 4)
+
+#define VCAP_ES0_VCAP_STICKY_VCAP_ROW_DELETED_STICKY BIT(0)
+#define VCAP_ES0_VCAP_STICKY_VCAP_ROW_DELETED_STICKY_SET(x)\
+ FIELD_PREP(VCAP_ES0_VCAP_STICKY_VCAP_ROW_DELETED_STICKY, x)
+#define VCAP_ES0_VCAP_STICKY_VCAP_ROW_DELETED_STICKY_GET(x)\
+ FIELD_GET(VCAP_ES0_VCAP_STICKY_VCAP_ROW_DELETED_STICKY, x)
+
+/* VCAP_ES0:VCAP_CONST:VCAP_VER */
+#define VCAP_ES0_VCAP_VER __REG(TARGET_VCAP_ES0,\
+ 0, 1, 924, 0, 1, 40, 0, 0, 1, 4)
+
+/* VCAP_ES0:VCAP_CONST:ENTRY_WIDTH */
+#define VCAP_ES0_ENTRY_WIDTH __REG(TARGET_VCAP_ES0,\
+ 0, 1, 924, 0, 1, 40, 4, 0, 1, 4)
+
+/* VCAP_ES0:VCAP_CONST:ENTRY_CNT */
+#define VCAP_ES0_ENTRY_CNT __REG(TARGET_VCAP_ES0,\
+ 0, 1, 924, 0, 1, 40, 8, 0, 1, 4)
+
+/* VCAP_ES0:VCAP_CONST:ENTRY_SWCNT */
+#define VCAP_ES0_ENTRY_SWCNT __REG(TARGET_VCAP_ES0,\
+ 0, 1, 924, 0, 1, 40, 12, 0, 1, 4)
+
+/* VCAP_ES0:VCAP_CONST:ENTRY_TG_WIDTH */
+#define VCAP_ES0_ENTRY_TG_WIDTH __REG(TARGET_VCAP_ES0,\
+ 0, 1, 924, 0, 1, 40, 16, 0, 1, 4)
+
+/* VCAP_ES0:VCAP_CONST:ACTION_DEF_CNT */
+#define VCAP_ES0_ACTION_DEF_CNT __REG(TARGET_VCAP_ES0,\
+ 0, 1, 924, 0, 1, 40, 20, 0, 1, 4)
+
+/* VCAP_ES0:VCAP_CONST:ACTION_WIDTH */
+#define VCAP_ES0_ACTION_WIDTH __REG(TARGET_VCAP_ES0,\
+ 0, 1, 924, 0, 1, 40, 24, 0, 1, 4)
+
+/* VCAP_ES0:VCAP_CONST:CNT_WIDTH */
+#define VCAP_ES0_CNT_WIDTH __REG(TARGET_VCAP_ES0,\
+ 0, 1, 924, 0, 1, 40, 28, 0, 1, 4)
+
+/* VCAP_ES0:VCAP_CONST:CORE_CNT */
+#define VCAP_ES0_CORE_CNT __REG(TARGET_VCAP_ES0,\
+ 0, 1, 924, 0, 1, 40, 32, 0, 1, 4)
+
+/* VCAP_ES0:VCAP_CONST:IF_CNT */
+#define VCAP_ES0_IF_CNT __REG(TARGET_VCAP_ES0,\
+ 0, 1, 924, 0, 1, 40, 36, 0, 1, 4)
+
+/* VCAP_ES2:VCAP_CORE_CFG:VCAP_UPDATE_CTRL */
+#define VCAP_ES2_CTRL __REG(TARGET_VCAP_ES2,\
+ 0, 1, 0, 0, 1, 8, 0, 0, 1, 4)
+
+#define VCAP_ES2_CTRL_UPDATE_CMD GENMASK(24, 22)
+#define VCAP_ES2_CTRL_UPDATE_CMD_SET(x)\
+ FIELD_PREP(VCAP_ES2_CTRL_UPDATE_CMD, x)
+#define VCAP_ES2_CTRL_UPDATE_CMD_GET(x)\
+ FIELD_GET(VCAP_ES2_CTRL_UPDATE_CMD, x)
+
+#define VCAP_ES2_CTRL_UPDATE_ENTRY_DIS BIT(21)
+#define VCAP_ES2_CTRL_UPDATE_ENTRY_DIS_SET(x)\
+ FIELD_PREP(VCAP_ES2_CTRL_UPDATE_ENTRY_DIS, x)
+#define VCAP_ES2_CTRL_UPDATE_ENTRY_DIS_GET(x)\
+ FIELD_GET(VCAP_ES2_CTRL_UPDATE_ENTRY_DIS, x)
+
+#define VCAP_ES2_CTRL_UPDATE_ACTION_DIS BIT(20)
+#define VCAP_ES2_CTRL_UPDATE_ACTION_DIS_SET(x)\
+ FIELD_PREP(VCAP_ES2_CTRL_UPDATE_ACTION_DIS, x)
+#define VCAP_ES2_CTRL_UPDATE_ACTION_DIS_GET(x)\
+ FIELD_GET(VCAP_ES2_CTRL_UPDATE_ACTION_DIS, x)
+
+#define VCAP_ES2_CTRL_UPDATE_CNT_DIS BIT(19)
+#define VCAP_ES2_CTRL_UPDATE_CNT_DIS_SET(x)\
+ FIELD_PREP(VCAP_ES2_CTRL_UPDATE_CNT_DIS, x)
+#define VCAP_ES2_CTRL_UPDATE_CNT_DIS_GET(x)\
+ FIELD_GET(VCAP_ES2_CTRL_UPDATE_CNT_DIS, x)
+
+#define VCAP_ES2_CTRL_UPDATE_ADDR GENMASK(18, 3)
+#define VCAP_ES2_CTRL_UPDATE_ADDR_SET(x)\
+ FIELD_PREP(VCAP_ES2_CTRL_UPDATE_ADDR, x)
+#define VCAP_ES2_CTRL_UPDATE_ADDR_GET(x)\
+ FIELD_GET(VCAP_ES2_CTRL_UPDATE_ADDR, x)
+
+#define VCAP_ES2_CTRL_UPDATE_SHOT BIT(2)
+#define VCAP_ES2_CTRL_UPDATE_SHOT_SET(x)\
+ FIELD_PREP(VCAP_ES2_CTRL_UPDATE_SHOT, x)
+#define VCAP_ES2_CTRL_UPDATE_SHOT_GET(x)\
+ FIELD_GET(VCAP_ES2_CTRL_UPDATE_SHOT, x)
+
+#define VCAP_ES2_CTRL_CLEAR_CACHE BIT(1)
+#define VCAP_ES2_CTRL_CLEAR_CACHE_SET(x)\
+ FIELD_PREP(VCAP_ES2_CTRL_CLEAR_CACHE, x)
+#define VCAP_ES2_CTRL_CLEAR_CACHE_GET(x)\
+ FIELD_GET(VCAP_ES2_CTRL_CLEAR_CACHE, x)
+
+#define VCAP_ES2_CTRL_MV_TRAFFIC_IGN BIT(0)
+#define VCAP_ES2_CTRL_MV_TRAFFIC_IGN_SET(x)\
+ FIELD_PREP(VCAP_ES2_CTRL_MV_TRAFFIC_IGN, x)
+#define VCAP_ES2_CTRL_MV_TRAFFIC_IGN_GET(x)\
+ FIELD_GET(VCAP_ES2_CTRL_MV_TRAFFIC_IGN, x)
+
+/* VCAP_ES2:VCAP_CORE_CFG:VCAP_MV_CFG */
+#define VCAP_ES2_CFG __REG(TARGET_VCAP_ES2,\
+ 0, 1, 0, 0, 1, 8, 4, 0, 1, 4)
+
+#define VCAP_ES2_CFG_MV_NUM_POS GENMASK(31, 16)
+#define VCAP_ES2_CFG_MV_NUM_POS_SET(x)\
+ FIELD_PREP(VCAP_ES2_CFG_MV_NUM_POS, x)
+#define VCAP_ES2_CFG_MV_NUM_POS_GET(x)\
+ FIELD_GET(VCAP_ES2_CFG_MV_NUM_POS, x)
+
+#define VCAP_ES2_CFG_MV_SIZE GENMASK(15, 0)
+#define VCAP_ES2_CFG_MV_SIZE_SET(x)\
+ FIELD_PREP(VCAP_ES2_CFG_MV_SIZE, x)
+#define VCAP_ES2_CFG_MV_SIZE_GET(x)\
+ FIELD_GET(VCAP_ES2_CFG_MV_SIZE, x)
+
+/* VCAP_ES2:VCAP_CORE_CACHE:VCAP_ENTRY_DAT */
+#define VCAP_ES2_VCAP_ENTRY_DAT(r) __REG(TARGET_VCAP_ES2,\
+ 0, 1, 8, 0, 1, 904, 0, r, 64, 4)
+
+/* VCAP_ES2:VCAP_CORE_CACHE:VCAP_MASK_DAT */
+#define VCAP_ES2_VCAP_MASK_DAT(r) __REG(TARGET_VCAP_ES2,\
+ 0, 1, 8, 0, 1, 904, 256, r, 64, 4)
+
+/* VCAP_ES2:VCAP_CORE_CACHE:VCAP_ACTION_DAT */
+#define VCAP_ES2_VCAP_ACTION_DAT(r) __REG(TARGET_VCAP_ES2,\
+ 0, 1, 8, 0, 1, 904, 512, r, 64, 4)
+
+/* VCAP_ES2:VCAP_CORE_CACHE:VCAP_CNT_DAT */
+#define VCAP_ES2_VCAP_CNT_DAT(r) __REG(TARGET_VCAP_ES2,\
+ 0, 1, 8, 0, 1, 904, 768, r, 32, 4)
+
+/* VCAP_ES2:VCAP_CORE_CACHE:VCAP_CNT_FW_DAT */
+#define VCAP_ES2_VCAP_CNT_FW_DAT __REG(TARGET_VCAP_ES2,\
+ 0, 1, 8, 0, 1, 904, 896, 0, 1, 4)
+
+/* VCAP_ES2:VCAP_CORE_CACHE:VCAP_TG_DAT */
+#define VCAP_ES2_VCAP_TG_DAT __REG(TARGET_VCAP_ES2,\
+ 0, 1, 8, 0, 1, 904, 900, 0, 1, 4)
+
+/* VCAP_ES2:VCAP_CORE_MAP:VCAP_CORE_IDX */
+#define VCAP_ES2_IDX __REG(TARGET_VCAP_ES2,\
+ 0, 1, 912, 0, 1, 8, 0, 0, 1, 4)
+
+#define VCAP_ES2_IDX_CORE_IDX GENMASK(3, 0)
+#define VCAP_ES2_IDX_CORE_IDX_SET(x)\
+ FIELD_PREP(VCAP_ES2_IDX_CORE_IDX, x)
+#define VCAP_ES2_IDX_CORE_IDX_GET(x)\
+ FIELD_GET(VCAP_ES2_IDX_CORE_IDX, x)
+
+/* VCAP_ES2:VCAP_CORE_MAP:VCAP_CORE_MAP */
+#define VCAP_ES2_MAP __REG(TARGET_VCAP_ES2,\
+ 0, 1, 912, 0, 1, 8, 4, 0, 1, 4)
+
+#define VCAP_ES2_MAP_CORE_MAP GENMASK(2, 0)
+#define VCAP_ES2_MAP_CORE_MAP_SET(x)\
+ FIELD_PREP(VCAP_ES2_MAP_CORE_MAP, x)
+#define VCAP_ES2_MAP_CORE_MAP_GET(x)\
+ FIELD_GET(VCAP_ES2_MAP_CORE_MAP, x)
+
+/* VCAP_ES2:VCAP_CORE_STICKY:VCAP_STICKY */
+#define VCAP_ES2_VCAP_STICKY __REG(TARGET_VCAP_ES2,\
+ 0, 1, 920, 0, 1, 4, 0, 0, 1, 4)
+
+#define VCAP_ES2_VCAP_STICKY_VCAP_ROW_DELETED_STICKY BIT(0)
+#define VCAP_ES2_VCAP_STICKY_VCAP_ROW_DELETED_STICKY_SET(x)\
+ FIELD_PREP(VCAP_ES2_VCAP_STICKY_VCAP_ROW_DELETED_STICKY, x)
+#define VCAP_ES2_VCAP_STICKY_VCAP_ROW_DELETED_STICKY_GET(x)\
+ FIELD_GET(VCAP_ES2_VCAP_STICKY_VCAP_ROW_DELETED_STICKY, x)
+
+/* VCAP_ES2:VCAP_CONST:VCAP_VER */
+#define VCAP_ES2_VCAP_VER __REG(TARGET_VCAP_ES2,\
+ 0, 1, 924, 0, 1, 40, 0, 0, 1, 4)
+
+/* VCAP_ES2:VCAP_CONST:ENTRY_WIDTH */
+#define VCAP_ES2_ENTRY_WIDTH __REG(TARGET_VCAP_ES2,\
+ 0, 1, 924, 0, 1, 40, 4, 0, 1, 4)
+
+/* VCAP_ES2:VCAP_CONST:ENTRY_CNT */
+#define VCAP_ES2_ENTRY_CNT __REG(TARGET_VCAP_ES2,\
+ 0, 1, 924, 0, 1, 40, 8, 0, 1, 4)
+
+/* VCAP_ES2:VCAP_CONST:ENTRY_SWCNT */
+#define VCAP_ES2_ENTRY_SWCNT __REG(TARGET_VCAP_ES2,\
+ 0, 1, 924, 0, 1, 40, 12, 0, 1, 4)
+
+/* VCAP_ES2:VCAP_CONST:ENTRY_TG_WIDTH */
+#define VCAP_ES2_ENTRY_TG_WIDTH __REG(TARGET_VCAP_ES2,\
+ 0, 1, 924, 0, 1, 40, 16, 0, 1, 4)
+
+/* VCAP_ES2:VCAP_CONST:ACTION_DEF_CNT */
+#define VCAP_ES2_ACTION_DEF_CNT __REG(TARGET_VCAP_ES2,\
+ 0, 1, 924, 0, 1, 40, 20, 0, 1, 4)
+
+/* VCAP_ES2:VCAP_CONST:ACTION_WIDTH */
+#define VCAP_ES2_ACTION_WIDTH __REG(TARGET_VCAP_ES2,\
+ 0, 1, 924, 0, 1, 40, 24, 0, 1, 4)
+
+/* VCAP_ES2:VCAP_CONST:CNT_WIDTH */
+#define VCAP_ES2_CNT_WIDTH __REG(TARGET_VCAP_ES2,\
+ 0, 1, 924, 0, 1, 40, 28, 0, 1, 4)
+
+/* VCAP_ES2:VCAP_CONST:CORE_CNT */
+#define VCAP_ES2_CORE_CNT __REG(TARGET_VCAP_ES2,\
+ 0, 1, 924, 0, 1, 40, 32, 0, 1, 4)
+
+/* VCAP_ES2:VCAP_CONST:IF_CNT */
+#define VCAP_ES2_IF_CNT __REG(TARGET_VCAP_ES2,\
+ 0, 1, 924, 0, 1, 40, 36, 0, 1, 4)
+
/* VCAP_SUPER:VCAP_CORE_CFG:VCAP_UPDATE_CTRL */
-#define VCAP_SUPER_CTRL __REG(TARGET_VCAP_SUPER, 0, 1, 0, 0, 1, 8, 0, 0, 1, 4)
+#define VCAP_SUPER_CTRL __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 0, 0, 1, 8, 0, 0, 1, 4)
#define VCAP_SUPER_CTRL_UPDATE_CMD GENMASK(24, 22)
#define VCAP_SUPER_CTRL_UPDATE_CMD_SET(x)\
@@ -5538,7 +7144,8 @@ enum sparx5_target {
FIELD_GET(VCAP_SUPER_CTRL_MV_TRAFFIC_IGN, x)
/* VCAP_SUPER:VCAP_CORE_CFG:VCAP_MV_CFG */
-#define VCAP_SUPER_CFG __REG(TARGET_VCAP_SUPER, 0, 1, 0, 0, 1, 8, 4, 0, 1, 4)
+#define VCAP_SUPER_CFG __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 0, 0, 1, 8, 4, 0, 1, 4)
#define VCAP_SUPER_CFG_MV_NUM_POS GENMASK(31, 16)
#define VCAP_SUPER_CFG_MV_NUM_POS_SET(x)\
@@ -5553,25 +7160,32 @@ enum sparx5_target {
FIELD_GET(VCAP_SUPER_CFG_MV_SIZE, x)
/* VCAP_SUPER:VCAP_CORE_CACHE:VCAP_ENTRY_DAT */
-#define VCAP_SUPER_VCAP_ENTRY_DAT(r) __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 0, r, 64, 4)
+#define VCAP_SUPER_VCAP_ENTRY_DAT(r) __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 8, 0, 1, 904, 0, r, 64, 4)
/* VCAP_SUPER:VCAP_CORE_CACHE:VCAP_MASK_DAT */
-#define VCAP_SUPER_VCAP_MASK_DAT(r) __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 256, r, 64, 4)
+#define VCAP_SUPER_VCAP_MASK_DAT(r) __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 8, 0, 1, 904, 256, r, 64, 4)
/* VCAP_SUPER:VCAP_CORE_CACHE:VCAP_ACTION_DAT */
-#define VCAP_SUPER_VCAP_ACTION_DAT(r) __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 512, r, 64, 4)
+#define VCAP_SUPER_VCAP_ACTION_DAT(r) __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 8, 0, 1, 904, 512, r, 64, 4)
/* VCAP_SUPER:VCAP_CORE_CACHE:VCAP_CNT_DAT */
-#define VCAP_SUPER_VCAP_CNT_DAT(r) __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 768, r, 32, 4)
+#define VCAP_SUPER_VCAP_CNT_DAT(r) __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 8, 0, 1, 904, 768, r, 32, 4)
/* VCAP_SUPER:VCAP_CORE_CACHE:VCAP_CNT_FW_DAT */
-#define VCAP_SUPER_VCAP_CNT_FW_DAT __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 896, 0, 1, 4)
+#define VCAP_SUPER_VCAP_CNT_FW_DAT __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 8, 0, 1, 904, 896, 0, 1, 4)
/* VCAP_SUPER:VCAP_CORE_CACHE:VCAP_TG_DAT */
-#define VCAP_SUPER_VCAP_TG_DAT __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 900, 0, 1, 4)
+#define VCAP_SUPER_VCAP_TG_DAT __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 8, 0, 1, 904, 900, 0, 1, 4)
/* VCAP_SUPER:VCAP_CORE_MAP:VCAP_CORE_IDX */
-#define VCAP_SUPER_IDX __REG(TARGET_VCAP_SUPER, 0, 1, 912, 0, 1, 8, 0, 0, 1, 4)
+#define VCAP_SUPER_IDX __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 912, 0, 1, 8, 0, 0, 1, 4)
#define VCAP_SUPER_IDX_CORE_IDX GENMASK(3, 0)
#define VCAP_SUPER_IDX_CORE_IDX_SET(x)\
@@ -5580,7 +7194,8 @@ enum sparx5_target {
FIELD_GET(VCAP_SUPER_IDX_CORE_IDX, x)
/* VCAP_SUPER:VCAP_CORE_MAP:VCAP_CORE_MAP */
-#define VCAP_SUPER_MAP __REG(TARGET_VCAP_SUPER, 0, 1, 912, 0, 1, 8, 4, 0, 1, 4)
+#define VCAP_SUPER_MAP __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 912, 0, 1, 8, 4, 0, 1, 4)
#define VCAP_SUPER_MAP_CORE_MAP GENMASK(2, 0)
#define VCAP_SUPER_MAP_CORE_MAP_SET(x)\
@@ -5589,37 +7204,48 @@ enum sparx5_target {
FIELD_GET(VCAP_SUPER_MAP_CORE_MAP, x)
/* VCAP_SUPER:VCAP_CONST:VCAP_VER */
-#define VCAP_SUPER_VCAP_VER __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 0, 0, 1, 4)
+#define VCAP_SUPER_VCAP_VER __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 924, 0, 1, 40, 0, 0, 1, 4)
/* VCAP_SUPER:VCAP_CONST:ENTRY_WIDTH */
-#define VCAP_SUPER_ENTRY_WIDTH __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 4, 0, 1, 4)
+#define VCAP_SUPER_ENTRY_WIDTH __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 924, 0, 1, 40, 4, 0, 1, 4)
/* VCAP_SUPER:VCAP_CONST:ENTRY_CNT */
-#define VCAP_SUPER_ENTRY_CNT __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 8, 0, 1, 4)
+#define VCAP_SUPER_ENTRY_CNT __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 924, 0, 1, 40, 8, 0, 1, 4)
/* VCAP_SUPER:VCAP_CONST:ENTRY_SWCNT */
-#define VCAP_SUPER_ENTRY_SWCNT __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 12, 0, 1, 4)
+#define VCAP_SUPER_ENTRY_SWCNT __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 924, 0, 1, 40, 12, 0, 1, 4)
/* VCAP_SUPER:VCAP_CONST:ENTRY_TG_WIDTH */
-#define VCAP_SUPER_ENTRY_TG_WIDTH __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 16, 0, 1, 4)
+#define VCAP_SUPER_ENTRY_TG_WIDTH __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 924, 0, 1, 40, 16, 0, 1, 4)
/* VCAP_SUPER:VCAP_CONST:ACTION_DEF_CNT */
-#define VCAP_SUPER_ACTION_DEF_CNT __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 20, 0, 1, 4)
+#define VCAP_SUPER_ACTION_DEF_CNT __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 924, 0, 1, 40, 20, 0, 1, 4)
/* VCAP_SUPER:VCAP_CONST:ACTION_WIDTH */
-#define VCAP_SUPER_ACTION_WIDTH __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 24, 0, 1, 4)
+#define VCAP_SUPER_ACTION_WIDTH __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 924, 0, 1, 40, 24, 0, 1, 4)
/* VCAP_SUPER:VCAP_CONST:CNT_WIDTH */
-#define VCAP_SUPER_CNT_WIDTH __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 28, 0, 1, 4)
+#define VCAP_SUPER_CNT_WIDTH __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 924, 0, 1, 40, 28, 0, 1, 4)
/* VCAP_SUPER:VCAP_CONST:CORE_CNT */
-#define VCAP_SUPER_CORE_CNT __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 32, 0, 1, 4)
+#define VCAP_SUPER_CORE_CNT __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 924, 0, 1, 40, 32, 0, 1, 4)
/* VCAP_SUPER:VCAP_CONST:IF_CNT */
-#define VCAP_SUPER_IF_CNT __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 36, 0, 1, 4)
+#define VCAP_SUPER_IF_CNT __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 924, 0, 1, 40, 36, 0, 1, 4)
/* VCAP_SUPER:RAM_CTRL:RAM_INIT */
-#define VCAP_SUPER_RAM_INIT __REG(TARGET_VCAP_SUPER, 0, 1, 1120, 0, 1, 4, 0, 0, 1, 4)
+#define VCAP_SUPER_RAM_INIT __REG(TARGET_VCAP_SUPER,\
+ 0, 1, 1120, 0, 1, 4, 0, 0, 1, 4)
#define VCAP_SUPER_RAM_INIT_RAM_INIT BIT(1)
#define VCAP_SUPER_RAM_INIT_RAM_INIT_SET(x)\
@@ -5634,7 +7260,8 @@ enum sparx5_target {
FIELD_GET(VCAP_SUPER_RAM_INIT_RAM_CFG_HOOK, x)
/* VOP:RAM_CTRL:RAM_INIT */
-#define VOP_RAM_INIT __REG(TARGET_VOP, 0, 1, 279176, 0, 1, 4, 0, 0, 1, 4)
+#define VOP_RAM_INIT __REG(TARGET_VOP,\
+ 0, 1, 279176, 0, 1, 4, 0, 0, 1, 4)
#define VOP_RAM_INIT_RAM_INIT BIT(1)
#define VOP_RAM_INIT_RAM_INIT_SET(x)\
@@ -5649,7 +7276,8 @@ enum sparx5_target {
FIELD_GET(VOP_RAM_INIT_RAM_CFG_HOOK, x)
/* XQS:SYSTEM:STAT_CFG */
-#define XQS_STAT_CFG __REG(TARGET_XQS, 0, 1, 6768, 0, 1, 872, 860, 0, 1, 4)
+#define XQS_STAT_CFG __REG(TARGET_XQS,\
+ 0, 1, 6768, 0, 1, 872, 860, 0, 1, 4)
#define XQS_STAT_CFG_STAT_CLEAR_SHOT GENMASK(21, 18)
#define XQS_STAT_CFG_STAT_CLEAR_SHOT_SET(x)\
@@ -5676,7 +7304,8 @@ enum sparx5_target {
FIELD_GET(XQS_STAT_CFG_STAT_WRAP_DIS, x)
/* XQS:QLIMIT_SHR:QLIMIT_SHR_TOP_CFG */
-#define XQS_QLIMIT_SHR_TOP_CFG(g) __REG(TARGET_XQS, 0, 1, 7936, g, 4, 48, 0, 0, 1, 4)
+#define XQS_QLIMIT_SHR_TOP_CFG(g) __REG(TARGET_XQS,\
+ 0, 1, 7936, g, 4, 48, 0, 0, 1, 4)
#define XQS_QLIMIT_SHR_TOP_CFG_QLIMIT_SHR_TOP GENMASK(14, 0)
#define XQS_QLIMIT_SHR_TOP_CFG_QLIMIT_SHR_TOP_SET(x)\
@@ -5685,7 +7314,8 @@ enum sparx5_target {
FIELD_GET(XQS_QLIMIT_SHR_TOP_CFG_QLIMIT_SHR_TOP, x)
/* XQS:QLIMIT_SHR:QLIMIT_SHR_ATOP_CFG */
-#define XQS_QLIMIT_SHR_ATOP_CFG(g) __REG(TARGET_XQS, 0, 1, 7936, g, 4, 48, 4, 0, 1, 4)
+#define XQS_QLIMIT_SHR_ATOP_CFG(g) __REG(TARGET_XQS,\
+ 0, 1, 7936, g, 4, 48, 4, 0, 1, 4)
#define XQS_QLIMIT_SHR_ATOP_CFG_QLIMIT_SHR_ATOP GENMASK(14, 0)
#define XQS_QLIMIT_SHR_ATOP_CFG_QLIMIT_SHR_ATOP_SET(x)\
@@ -5694,7 +7324,8 @@ enum sparx5_target {
FIELD_GET(XQS_QLIMIT_SHR_ATOP_CFG_QLIMIT_SHR_ATOP, x)
/* XQS:QLIMIT_SHR:QLIMIT_SHR_CTOP_CFG */
-#define XQS_QLIMIT_SHR_CTOP_CFG(g) __REG(TARGET_XQS, 0, 1, 7936, g, 4, 48, 8, 0, 1, 4)
+#define XQS_QLIMIT_SHR_CTOP_CFG(g) __REG(TARGET_XQS,\
+ 0, 1, 7936, g, 4, 48, 8, 0, 1, 4)
#define XQS_QLIMIT_SHR_CTOP_CFG_QLIMIT_SHR_CTOP GENMASK(14, 0)
#define XQS_QLIMIT_SHR_CTOP_CFG_QLIMIT_SHR_CTOP_SET(x)\
@@ -5703,7 +7334,8 @@ enum sparx5_target {
FIELD_GET(XQS_QLIMIT_SHR_CTOP_CFG_QLIMIT_SHR_CTOP, x)
/* XQS:QLIMIT_SHR:QLIMIT_SHR_QLIM_CFG */
-#define XQS_QLIMIT_SHR_QLIM_CFG(g) __REG(TARGET_XQS, 0, 1, 7936, g, 4, 48, 12, 0, 1, 4)
+#define XQS_QLIMIT_SHR_QLIM_CFG(g) __REG(TARGET_XQS,\
+ 0, 1, 7936, g, 4, 48, 12, 0, 1, 4)
#define XQS_QLIMIT_SHR_QLIM_CFG_QLIMIT_SHR_QLIM GENMASK(14, 0)
#define XQS_QLIMIT_SHR_QLIM_CFG_QLIMIT_SHR_QLIM_SET(x)\
@@ -5712,6 +7344,7 @@ enum sparx5_target {
FIELD_GET(XQS_QLIMIT_SHR_QLIM_CFG_QLIMIT_SHR_QLIM, x)
/* XQS:STAT:CNT */
-#define XQS_CNT(g) __REG(TARGET_XQS, 0, 1, 0, g, 1024, 4, 0, 0, 1, 4)
+#define XQS_CNT(g) __REG(TARGET_XQS,\
+ 0, 1, 0, g, 1024, 4, 0, 0, 1, 4)
#endif /* _SPARX5_MAIN_REGS_H_ */
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_police.c b/drivers/net/ethernet/microchip/sparx5/sparx5_police.c
new file mode 100644
index 000000000000..8ada5cee1342
--- /dev/null
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_police.c
@@ -0,0 +1,53 @@
+// SPDX-License-Identifier: GPL-2.0+
+/* Microchip Sparx5 Switch driver
+ *
+ * Copyright (c) 2023 Microchip Technology Inc. and its subsidiaries.
+ */
+
+#include "sparx5_main_regs.h"
+#include "sparx5_main.h"
+
+static int sparx5_policer_service_conf_set(struct sparx5 *sparx5,
+ struct sparx5_policer *pol)
+{
+ u32 idx, pup_tokens, max_pup_tokens, burst, thres;
+ struct sparx5_sdlb_group *g;
+ u64 rate;
+
+ g = &sdlb_groups[pol->group];
+ idx = pol->idx;
+
+ rate = pol->rate * 1000;
+ burst = pol->burst;
+
+ pup_tokens = sparx5_sdlb_pup_token_get(sparx5, g->pup_interval, rate);
+ max_pup_tokens =
+ sparx5_sdlb_pup_token_get(sparx5, g->pup_interval, g->max_rate);
+
+ thres = DIV_ROUND_UP(burst, g->min_burst);
+
+ spx5_wr(ANA_AC_SDLB_PUP_TOKENS_PUP_TOKENS_SET(pup_tokens), sparx5,
+ ANA_AC_SDLB_PUP_TOKENS(idx, 0));
+
+ spx5_rmw(ANA_AC_SDLB_INH_CTRL_PUP_TOKENS_MAX_SET(max_pup_tokens),
+ ANA_AC_SDLB_INH_CTRL_PUP_TOKENS_MAX, sparx5,
+ ANA_AC_SDLB_INH_CTRL(idx, 0));
+
+ spx5_rmw(ANA_AC_SDLB_THRES_THRES_SET(thres), ANA_AC_SDLB_THRES_THRES,
+ sparx5, ANA_AC_SDLB_THRES(idx, 0));
+
+ return 0;
+}
+
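For orientation, a worked pass through the arithmetic above, with illustrative
numbers only (min_burst comes from the selected sdlb_groups[] entry; 8192 is an
example value, not driver output):

/* pol->rate = 1000000 (kbps), pol->burst = 65536 bytes, min_burst = 8192:
 *   rate  = 1000000 * 1000            = 1 Gbps
 *   thres = DIV_ROUND_UP(65536, 8192) = 8   -> ANA_AC_SDLB_THRES
 * pup_tokens and max_pup_tokens are then derived from the group's update
 * interval by sparx5_sdlb_pup_token_get().
 */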
+int sparx5_policer_conf_set(struct sparx5 *sparx5, struct sparx5_policer *pol)
+{
+ /* More policer types will be added later */
+ switch (pol->type) {
+ case SPX5_POL_SERVICE:
+ return sparx5_policer_service_conf_set(sparx5, pol);
+ default:
+ break;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_pool.c b/drivers/net/ethernet/microchip/sparx5/sparx5_pool.c
new file mode 100644
index 000000000000..b4b280c6138b
--- /dev/null
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_pool.c
@@ -0,0 +1,81 @@
+// SPDX-License-Identifier: GPL-2.0+
+/* Microchip Sparx5 Switch driver
+ *
+ * Copyright (c) 2023 Microchip Technology Inc. and its subsidiaries.
+ */
+
+#include "sparx5_main_regs.h"
+#include "sparx5_main.h"
+
+static u32 sparx5_pool_id_to_idx(u32 id)
+{
+ return id - 1;
+}
+
+u32 sparx5_pool_idx_to_id(u32 idx)
+{
+ return idx + 1;
+}
+
+/* Release resource from pool.
+ * Return reference count on success, otherwise return error.
+ */
+int sparx5_pool_put(struct sparx5_pool_entry *pool, int size, u32 id)
+{
+ struct sparx5_pool_entry *e_itr;
+
+ e_itr = (pool + sparx5_pool_id_to_idx(id));
+ if (e_itr->ref_cnt == 0)
+ return -EINVAL;
+
+ return --e_itr->ref_cnt;
+}
+
+/* Get resource from pool.
+ * Return reference count on success, otherwise return error.
+ */
+int sparx5_pool_get(struct sparx5_pool_entry *pool, int size, u32 *id)
+{
+ struct sparx5_pool_entry *e_itr;
+ int i;
+
+ for (i = 0, e_itr = pool; i < size; i++, e_itr++) {
+ if (e_itr->ref_cnt == 0) {
+ *id = sparx5_pool_idx_to_id(i);
+ return ++e_itr->ref_cnt;
+ }
+ }
+
+ return -ENOSPC;
+}
+
+/* Get resource from pool that matches index.
+ * Return reference count on success, otherwise return error.
+ */
+int sparx5_pool_get_with_idx(struct sparx5_pool_entry *pool, int size, u32 idx,
+ u32 *id)
+{
+ struct sparx5_pool_entry *e_itr;
+ int i, ret = -ENOSPC;
+
+ for (i = 0, e_itr = pool; i < size; i++, e_itr++) {
+ /* Pool index of first free entry */
+ if (e_itr->ref_cnt == 0 && ret == -ENOSPC)
+ ret = i;
+ /* tc index already in use? */
+ if (e_itr->idx == idx && e_itr->ref_cnt > 0) {
+ ret = i;
+ break;
+ }
+ }
+
+ /* Did we find a free entry? */
+ if (ret >= 0) {
+ *id = sparx5_pool_idx_to_id(ret);
+ e_itr = (pool + ret);
+ e_itr->idx = idx;
+ return ++e_itr->ref_cnt;
+ }
+
+ return ret;
+}
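A minimal usage sketch of this pool API; the pool array and its size are
illustrative, not part of the patch:

static struct sparx5_pool_entry example_pool[64];	/* hypothetical */

static int example_alloc(void)
{
	u32 id;
	int ret;

	ret = sparx5_pool_get(example_pool, 64, &id);
	if (ret < 0)
		return ret;	/* -ENOSPC: every entry is in use */
	/* ret is the new reference count; 1 means first user of "id" */

	/* ... program the hardware resource identified by "id" ... */

	/* put returns the remaining reference count, 0 when last user */
	return sparx5_pool_put(example_pool, 64, id);
}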
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_port.c b/drivers/net/ethernet/microchip/sparx5/sparx5_port.c
index 107b9cd931c0..3a1b1a1f5a19 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_port.c
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_port.c
@@ -1071,6 +1071,11 @@ int sparx5_port_init(struct sparx5 *sparx5,
/* Discard pause frame 01-80-C2-00-00-01 */
spx5_wr(PAUSE_DISCARD, sparx5, ANA_CL_CAPTURE_BPDU_CFG(port->portno));
+ /* Discard SMAC multicast */
+ spx5_rmw(ANA_CL_FILTER_CTRL_FILTER_SMAC_MC_DIS_SET(0),
+ ANA_CL_FILTER_CTRL_FILTER_SMAC_MC_DIS,
+ sparx5, ANA_CL_FILTER_CTRL(port->portno));
+
if (conf->portmode == PHY_INTERFACE_MODE_QSGMII ||
conf->portmode == PHY_INTERFACE_MODE_SGMII) {
err = sparx5_serdes_set(sparx5, port, conf);
@@ -1151,11 +1156,69 @@ int sparx5_port_qos_set(struct sparx5_port *port,
{
sparx5_port_qos_dscp_set(port, &qos->dscp);
sparx5_port_qos_pcp_set(port, &qos->pcp);
+ sparx5_port_qos_pcp_rewr_set(port, &qos->pcp_rewr);
+ sparx5_port_qos_dscp_rewr_set(port, &qos->dscp_rewr);
sparx5_port_qos_default_set(port, qos);
return 0;
}
+int sparx5_port_qos_pcp_rewr_set(const struct sparx5_port *port,
+ struct sparx5_port_qos_pcp_rewr *qos)
+{
+ int i, mode = SPARX5_PORT_REW_TAG_CTRL_CLASSIFIED;
+ struct sparx5 *sparx5 = port->sparx5;
+ u8 pcp, dei;
+
+ /* Use mapping table, with classified QoS as index, to map QoS and DP
+ * to tagged PCP and DEI, if PCP is trusted. Otherwise use classified
+ * PCP. Classified PCP equals frame PCP.
+ */
+ if (qos->enable)
+ mode = SPARX5_PORT_REW_TAG_CTRL_MAPPED;
+
+ spx5_rmw(REW_TAG_CTRL_TAG_PCP_CFG_SET(mode) |
+ REW_TAG_CTRL_TAG_DEI_CFG_SET(mode),
+ REW_TAG_CTRL_TAG_PCP_CFG | REW_TAG_CTRL_TAG_DEI_CFG,
+ port->sparx5, REW_TAG_CTRL(port->portno));
+
+ for (i = 0; i < ARRAY_SIZE(qos->map.map); i++) {
+ /* Extract PCP and DEI */
+ pcp = qos->map.map[i];
+ if (pcp > SPARX5_PORT_QOS_PCP_COUNT)
+ dei = 1;
+ else
+ dei = 0;
+
+ /* Rewrite PCP and DEI, for each classified QoS class and DP
+ * level. This table is only used if tag ctrl mode is set to
+ * 'mapped'.
+ *
+ * 0:0nd - prio=0 and dp:0 => pcp=0 and dei=0
+ * 0:0de - prio=0 and dp:1 => pcp=0 and dei=1
+ */
+ if (dei) {
+ spx5_rmw(REW_PCP_MAP_DE1_PCP_DE1_SET(pcp),
+ REW_PCP_MAP_DE1_PCP_DE1, sparx5,
+ REW_PCP_MAP_DE1(port->portno, i));
+
+ spx5_rmw(REW_DEI_MAP_DE1_DEI_DE1_SET(dei),
+ REW_DEI_MAP_DE1_DEI_DE1, port->sparx5,
+ REW_DEI_MAP_DE1(port->portno, i));
+ } else {
+ spx5_rmw(REW_PCP_MAP_DE0_PCP_DE0_SET(pcp),
+ REW_PCP_MAP_DE0_PCP_DE0, sparx5,
+ REW_PCP_MAP_DE0(port->portno, i));
+
+ spx5_rmw(REW_DEI_MAP_DE0_DEI_DE0_SET(dei),
+ REW_DEI_MAP_DE0_DEI_DE0, port->sparx5,
+ REW_DEI_MAP_DE0(port->portno, i));
+ }
+ }
+
+ return 0;
+}
+
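A usage sketch for the PCP rewrite map, assuming a configured struct
sparx5_port *port; the identity mapping is illustrative:

struct sparx5_port_qos_pcp_rewr rewr = { .enable = true };
int i, err;

/* Map classified QoS class i to tagged PCP i with DEI 0; values above
 * SPARX5_PORT_QOS_PCP_COUNT select the DE1 (DEI = 1) tables instead.
 */
for (i = 0; i < ARRAY_SIZE(rewr.map.map); i++)
	rewr.map.map[i] = i;

err = sparx5_port_qos_pcp_rewr_set(port, &rewr);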
int sparx5_port_qos_pcp_set(const struct sparx5_port *port,
struct sparx5_port_qos_pcp *qos)
{
@@ -1184,6 +1247,45 @@ int sparx5_port_qos_pcp_set(const struct sparx5_port *port,
return 0;
}
+void sparx5_port_qos_dscp_rewr_mode_set(const struct sparx5_port *port,
+ int mode)
+{
+ spx5_rmw(ANA_CL_QOS_CFG_DSCP_REWR_MODE_SEL_SET(mode),
+ ANA_CL_QOS_CFG_DSCP_REWR_MODE_SEL, port->sparx5,
+ ANA_CL_QOS_CFG(port->portno));
+}
+
+int sparx5_port_qos_dscp_rewr_set(const struct sparx5_port *port,
+ struct sparx5_port_qos_dscp_rewr *qos)
+{
+ struct sparx5 *sparx5 = port->sparx5;
+ bool rewr = false;
+ u16 dscp;
+ int i;
+
+ /* On egress, rewrite the DSCP value to either the classified DSCP or
+ * the frame DSCP: the classified DSCP when enabled, the frame DSCP
+ * when disabled.
+ */
+ if (qos->enable)
+ rewr = true;
+
+ spx5_rmw(REW_DSCP_MAP_DSCP_UPDATE_ENA_SET(rewr),
+ REW_DSCP_MAP_DSCP_UPDATE_ENA, sparx5,
+ REW_DSCP_MAP(port->portno));
+
+ /* On ingress, map each classified QoS class and DP to classified DSCP
+ * value. This mapping table is global for all ports.
+ */
+ for (i = 0; i < ARRAY_SIZE(qos->map.map); i++) {
+ dscp = qos->map.map[i];
+ spx5_rmw(ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL_SET(dscp),
+ ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL, sparx5,
+ ANA_CL_QOS_MAP_CFG(i));
+ }
+
+ return 0;
+}
+
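And a corresponding sketch for the DSCP path, again assuming a configured
struct sparx5_port *port; the constant DSCP value is illustrative:

struct sparx5_port_qos_dscp_rewr rewr = { .enable = true };
int i, err;

/* Rewrite egress DSCP from the classified value and map every
 * (QoS class, DP) combination to DSCP 46 (EF).
 */
for (i = 0; i < ARRAY_SIZE(rewr.map.map); i++)
	rewr.map.map[i] = 46;

err = sparx5_port_qos_dscp_rewr_set(port, &rewr);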
int sparx5_port_qos_dscp_set(const struct sparx5_port *port,
struct sparx5_port_qos_dscp *qos)
{
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_port.h b/drivers/net/ethernet/microchip/sparx5/sparx5_port.h
index fbafe22e25cc..607c4ff1df6b 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_port.h
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_port.h
@@ -9,6 +9,17 @@
#include "sparx5_main.h"
+/* Port PCP rewrite mode */
+#define SPARX5_PORT_REW_TAG_CTRL_CLASSIFIED 0
+#define SPARX5_PORT_REW_TAG_CTRL_DEFAULT 1
+#define SPARX5_PORT_REW_TAG_CTRL_MAPPED 2
+
+/* Port DSCP rewrite mode */
+#define SPARX5_PORT_REW_DSCP_NONE 0
+#define SPARX5_PORT_REW_DSCP_IF_ZERO 1
+#define SPARX5_PORT_REW_DSCP_SELECTED 2
+#define SPARX5_PORT_REW_DSCP_ALL 3
+
static inline bool sparx5_port_is_2g5(int portno)
{
return portno >= 16 && portno <= 47;
@@ -99,6 +110,15 @@ struct sparx5_port_qos_pcp_map {
u8 map[SPARX5_PORT_QOS_PCP_DEI_COUNT];
};
+struct sparx5_port_qos_pcp_rewr_map {
+ u16 map[SPX5_PRIOS];
+};
+
+#define SPARX5_PORT_QOS_DP_NUM 4
+struct sparx5_port_qos_dscp_rewr_map {
+ u16 map[SPX5_PRIOS * SPARX5_PORT_QOS_DP_NUM];
+};
+
#define SPARX5_PORT_QOS_DSCP_COUNT 64
struct sparx5_port_qos_dscp_map {
u8 map[SPARX5_PORT_QOS_DSCP_COUNT];
@@ -110,15 +130,27 @@ struct sparx5_port_qos_pcp {
bool dp_enable;
};
+struct sparx5_port_qos_pcp_rewr {
+ struct sparx5_port_qos_pcp_rewr_map map;
+ bool enable;
+};
+
struct sparx5_port_qos_dscp {
struct sparx5_port_qos_dscp_map map;
bool qos_enable;
bool dp_enable;
};
+struct sparx5_port_qos_dscp_rewr {
+ struct sparx5_port_qos_dscp_rewr_map map;
+ bool enable;
+};
+
struct sparx5_port_qos {
struct sparx5_port_qos_pcp pcp;
+ struct sparx5_port_qos_pcp_rewr pcp_rewr;
struct sparx5_port_qos_dscp dscp;
+ struct sparx5_port_qos_dscp_rewr dscp_rewr;
u8 default_prio;
};
@@ -127,9 +159,18 @@ int sparx5_port_qos_set(struct sparx5_port *port, struct sparx5_port_qos *qos);
int sparx5_port_qos_pcp_set(const struct sparx5_port *port,
struct sparx5_port_qos_pcp *qos);
+int sparx5_port_qos_pcp_rewr_set(const struct sparx5_port *port,
+ struct sparx5_port_qos_pcp_rewr *qos);
+
int sparx5_port_qos_dscp_set(const struct sparx5_port *port,
struct sparx5_port_qos_dscp *qos);
+void sparx5_port_qos_dscp_rewr_mode_set(const struct sparx5_port *port,
+ int mode);
+
+int sparx5_port_qos_dscp_rewr_set(const struct sparx5_port *port,
+ struct sparx5_port_qos_dscp_rewr *qos);
+
int sparx5_port_qos_default_set(const struct sparx5_port *port,
const struct sparx5_port_qos *qos);
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_psfp.c b/drivers/net/ethernet/microchip/sparx5/sparx5_psfp.c
new file mode 100644
index 000000000000..8dee1ab1fa75
--- /dev/null
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_psfp.c
@@ -0,0 +1,332 @@
+// SPDX-License-Identifier: GPL-2.0+
+/* Microchip Sparx5 Switch driver
+ *
+ * Copyright (c) 2023 Microchip Technology Inc. and its subsidiaries.
+ */
+
+#include "sparx5_main_regs.h"
+#include "sparx5_main.h"
+
+#define SPX5_PSFP_SF_CNT 1024
+#define SPX5_PSFP_SG_CONFIG_CHANGE_SLEEP 1000
+#define SPX5_PSFP_SG_CONFIG_CHANGE_TIMEO 100000
+
+/* Pool of available service policers */
+static struct sparx5_pool_entry sparx5_psfp_fm_pool[SPX5_SDLB_CNT];
+
+/* Pool of available stream gates */
+static struct sparx5_pool_entry sparx5_psfp_sg_pool[SPX5_PSFP_SG_CNT];
+
+/* Pool of available stream filters */
+static struct sparx5_pool_entry sparx5_psfp_sf_pool[SPX5_PSFP_SF_CNT];
+
+static int sparx5_psfp_sf_get(u32 *id)
+{
+ return sparx5_pool_get(sparx5_psfp_sf_pool, SPX5_PSFP_SF_CNT, id);
+}
+
+static int sparx5_psfp_sf_put(u32 id)
+{
+ return sparx5_pool_put(sparx5_psfp_sf_pool, SPX5_PSFP_SF_CNT, id);
+}
+
+static int sparx5_psfp_sg_get(u32 idx, u32 *id)
+{
+ return sparx5_pool_get_with_idx(sparx5_psfp_sg_pool, SPX5_PSFP_SG_CNT,
+ idx, id);
+}
+
+static int sparx5_psfp_sg_put(u32 id)
+{
+ return sparx5_pool_put(sparx5_psfp_sg_pool, SPX5_PSFP_SG_CNT, id);
+}
+
+static int sparx5_psfp_fm_get(u32 idx, u32 *id)
+{
+ return sparx5_pool_get_with_idx(sparx5_psfp_fm_pool, SPX5_SDLB_CNT, idx,
+ id);
+}
+
+static int sparx5_psfp_fm_put(u32 id)
+{
+ return sparx5_pool_put(sparx5_psfp_fm_pool, SPX5_SDLB_CNT, id);
+}
+
+u32 sparx5_psfp_isdx_get_sf(struct sparx5 *sparx5, u32 isdx)
+{
+ return ANA_L2_TSN_CFG_TSN_SFID_GET(spx5_rd(sparx5,
+ ANA_L2_TSN_CFG(isdx)));
+}
+
+u32 sparx5_psfp_isdx_get_fm(struct sparx5 *sparx5, u32 isdx)
+{
+ return ANA_L2_DLB_CFG_DLB_IDX_GET(spx5_rd(sparx5,
+ ANA_L2_DLB_CFG(isdx)));
+}
+
+u32 sparx5_psfp_sf_get_sg(struct sparx5 *sparx5, u32 sfid)
+{
+ return ANA_AC_TSN_SF_CFG_TSN_SGID_GET(spx5_rd(sparx5,
+ ANA_AC_TSN_SF_CFG(sfid)));
+}
+
+void sparx5_isdx_conf_set(struct sparx5 *sparx5, u32 isdx, u32 sfid, u32 fmid)
+{
+ spx5_rmw(ANA_L2_TSN_CFG_TSN_SFID_SET(sfid), ANA_L2_TSN_CFG_TSN_SFID,
+ sparx5, ANA_L2_TSN_CFG(isdx));
+
+ spx5_rmw(ANA_L2_DLB_CFG_DLB_IDX_SET(fmid), ANA_L2_DLB_CFG_DLB_IDX,
+ sparx5, ANA_L2_DLB_CFG(isdx));
+}
+
+/* Internal priority value to internal priority selector */
+static u32 sparx5_psfp_ipv_to_ips(s32 ipv)
+{
+ return ipv > 0 ? (ipv | BIT(3)) : 0;
+}
+
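The selector encoding packs a 3-bit priority with an enable bit; a few sample
values of the helper above:

/* sparx5_psfp_ipv_to_ips(3)  == 0xb  (BIT(3) "use IPV" | priority 3)
 * sparx5_psfp_ipv_to_ips(-1) == 0    (no priority override)
 * sparx5_psfp_ipv_to_ips(0)  == 0    (IPV 0 also maps to "no override")
 */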
+static int sparx5_psfp_sgid_get_status(struct sparx5 *sparx5)
+{
+ return spx5_rd(sparx5, ANA_AC_SG_ACCESS_CTRL);
+}
+
+static int sparx5_psfp_sgid_wait_for_completion(struct sparx5 *sparx5)
+{
+ u32 val;
+
+ return readx_poll_timeout(sparx5_psfp_sgid_get_status, sparx5, val,
+ !ANA_AC_SG_ACCESS_CTRL_CONFIG_CHANGE_GET(val),
+ SPX5_PSFP_SG_CONFIG_CHANGE_SLEEP,
+ SPX5_PSFP_SG_CONFIG_CHANGE_TIMEO);
+}
+
+static void sparx5_psfp_sg_config_change(struct sparx5 *sparx5, u32 id)
+{
+ spx5_wr(ANA_AC_SG_ACCESS_CTRL_SGID_SET(id), sparx5,
+ ANA_AC_SG_ACCESS_CTRL);
+
+ spx5_wr(ANA_AC_SG_ACCESS_CTRL_CONFIG_CHANGE_SET(1) |
+ ANA_AC_SG_ACCESS_CTRL_SGID_SET(id),
+ sparx5, ANA_AC_SG_ACCESS_CTRL);
+
+ if (sparx5_psfp_sgid_wait_for_completion(sparx5) < 0)
+ pr_debug("%s:%d timed out waiting for sgid completion",
+ __func__, __LINE__);
+}
+
+static void sparx5_psfp_sf_set(struct sparx5 *sparx5, u32 id,
+ const struct sparx5_psfp_sf *sf)
+{
+ /* Configure stream filter */
+ spx5_rmw(ANA_AC_TSN_SF_CFG_TSN_SGID_SET(sf->sgid) |
+ ANA_AC_TSN_SF_CFG_TSN_MAX_SDU_SET(sf->max_sdu) |
+ ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_STATE_SET(sf->sblock_osize) |
+ ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_ENA_SET(sf->sblock_osize_ena),
+ ANA_AC_TSN_SF_CFG_TSN_SGID | ANA_AC_TSN_SF_CFG_TSN_MAX_SDU |
+ ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_STATE |
+ ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_ENA,
+ sparx5, ANA_AC_TSN_SF_CFG(id));
+}
+
+static int sparx5_psfp_sg_set(struct sparx5 *sparx5, u32 id,
+ const struct sparx5_psfp_sg *sg)
+{
+ u32 ips, base_lsb, base_msb, accum_time_interval = 0;
+ const struct sparx5_psfp_gce *gce;
+ int i;
+
+ ips = sparx5_psfp_ipv_to_ips(sg->ipv);
+ base_lsb = sg->basetime.tv_sec & 0xffffffff;
+ base_msb = sg->basetime.tv_sec >> 32;
+
+ /* Set stream gate id */
+ spx5_wr(ANA_AC_SG_ACCESS_CTRL_SGID_SET(id), sparx5,
+ ANA_AC_SG_ACCESS_CTRL);
+
+ /* Write AdminPSFP values */
+ spx5_wr(sg->basetime.tv_nsec, sparx5, ANA_AC_SG_CONFIG_REG_1);
+ spx5_wr(base_lsb, sparx5, ANA_AC_SG_CONFIG_REG_2);
+
+ spx5_rmw(ANA_AC_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB_SET(base_msb) |
+ ANA_AC_SG_CONFIG_REG_3_INIT_IPS_SET(ips) |
+ ANA_AC_SG_CONFIG_REG_3_LIST_LENGTH_SET(sg->num_entries) |
+ ANA_AC_SG_CONFIG_REG_3_INIT_GATE_STATE_SET(sg->gate_state) |
+ ANA_AC_SG_CONFIG_REG_3_GATE_ENABLE_SET(1),
+ ANA_AC_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB |
+ ANA_AC_SG_CONFIG_REG_3_INIT_IPS |
+ ANA_AC_SG_CONFIG_REG_3_LIST_LENGTH |
+ ANA_AC_SG_CONFIG_REG_3_INIT_GATE_STATE |
+ ANA_AC_SG_CONFIG_REG_3_GATE_ENABLE,
+ sparx5, ANA_AC_SG_CONFIG_REG_3);
+
+ spx5_wr(sg->cycletime, sparx5, ANA_AC_SG_CONFIG_REG_4);
+ spx5_wr(sg->cycletimeext, sparx5, ANA_AC_SG_CONFIG_REG_5);
+
+ /* For each scheduling entry */
+ for (i = 0; i < sg->num_entries; i++) {
+ gce = &sg->gce[i];
+ ips = sparx5_psfp_ipv_to_ips(gce->ipv);
+ /* hardware needs TimeInterval to be cumulative */
+ accum_time_interval += gce->interval;
+ /* Set gate state */
+ spx5_wr(ANA_AC_SG_GCL_GS_CONFIG_IPS_SET(ips) |
+ ANA_AC_SG_GCL_GS_CONFIG_GATE_STATE_SET(gce->gate_state),
+ sparx5, ANA_AC_SG_GCL_GS_CONFIG(i));
+
+ /* Set time interval */
+ spx5_wr(accum_time_interval, sparx5,
+ ANA_AC_SG_GCL_TI_CONFIG(i));
+
+ /* Set maximum octets */
+ spx5_wr(gce->maxoctets, sparx5, ANA_AC_SG_GCL_OCT_CONFIG(i));
+ }
+
+ return 0;
+}
+
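To make the cumulative programming concrete, a worked example with illustrative
intervals:

/* Three gate entries with intervals {100, 200, 300} ns: the loop above
 * writes cumulative end times rather than the raw intervals:
 *   ANA_AC_SG_GCL_TI_CONFIG(0) = 100
 *   ANA_AC_SG_GCL_TI_CONFIG(1) = 100 + 200 = 300
 *   ANA_AC_SG_GCL_TI_CONFIG(2) = 300 + 300 = 600
 */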
+static int sparx5_sdlb_conf_set(struct sparx5 *sparx5,
+ struct sparx5_psfp_fm *fm)
+{
+ int (*sparx5_sdlb_group_action)(struct sparx5 *sparx5, u32 group,
+ u32 idx);
+
+ if (!fm->pol.rate && !fm->pol.burst)
+ sparx5_sdlb_group_action = &sparx5_sdlb_group_del;
+ else
+ sparx5_sdlb_group_action = &sparx5_sdlb_group_add;
+
+ sparx5_policer_conf_set(sparx5, &fm->pol);
+
+ return sparx5_sdlb_group_action(sparx5, fm->pol.group, fm->pol.idx);
+}
+
+int sparx5_psfp_sf_add(struct sparx5 *sparx5, const struct sparx5_psfp_sf *sf,
+ u32 *id)
+{
+ int ret;
+
+ ret = sparx5_psfp_sf_get(id);
+ if (ret < 0)
+ return ret;
+
+ sparx5_psfp_sf_set(sparx5, *id, sf);
+
+ return 0;
+}
+
+int sparx5_psfp_sf_del(struct sparx5 *sparx5, u32 id)
+{
+ const struct sparx5_psfp_sf sf = { 0 };
+
+ sparx5_psfp_sf_set(sparx5, id, &sf);
+
+ return sparx5_psfp_sf_put(id);
+}
+
+int sparx5_psfp_sg_add(struct sparx5 *sparx5, u32 uidx,
+ struct sparx5_psfp_sg *sg, u32 *id)
+{
+ ktime_t basetime;
+ int ret;
+
+ ret = sparx5_psfp_sg_get(uidx, id);
+ if (ret < 0)
+ return ret;
+ /* Was already in use, no need to reconfigure */
+ if (ret > 1)
+ return 0;
+
+ /* Calculate basetime for this stream gate */
+ sparx5_new_base_time(sparx5, sg->cycletime, 0, &basetime);
+ sg->basetime = ktime_to_timespec64(basetime);
+
+ sparx5_psfp_sg_set(sparx5, *id, sg);
+
+ /* Signal hardware to copy AdminPSFP values into OperPSFP values */
+ sparx5_psfp_sg_config_change(sparx5, *id);
+
+ return 0;
+}
+
+int sparx5_psfp_sg_del(struct sparx5 *sparx5, u32 id)
+{
+ const struct sparx5_psfp_sg sg = { 0 };
+ int ret;
+
+ ret = sparx5_psfp_sg_put(id);
+ if (ret < 0)
+ return ret;
+ /* Stream gate still in use? */
+ if (ret > 0)
+ return 0;
+
+ return sparx5_psfp_sg_set(sparx5, id, &sg);
+}
+
+int sparx5_psfp_fm_add(struct sparx5 *sparx5, u32 uidx,
+ struct sparx5_psfp_fm *fm, u32 *id)
+{
+ struct sparx5_policer *pol = &fm->pol;
+ int ret;
+
+ /* Get flow meter */
+ ret = sparx5_psfp_fm_get(uidx, &fm->pol.idx);
+ if (ret < 0)
+ return ret;
+ /* Was already in use, no need to reconfigure */
+ if (ret > 1)
+ return 0;
+
+ ret = sparx5_sdlb_group_get_by_rate(sparx5, pol->rate, pol->burst);
+ if (ret < 0)
+ return ret;
+
+ fm->pol.group = ret;
+
+ ret = sparx5_sdlb_conf_set(sparx5, fm);
+ if (ret < 0)
+ return ret;
+
+ *id = fm->pol.idx;
+
+ return 0;
+}
+
+int sparx5_psfp_fm_del(struct sparx5 *sparx5, u32 id)
+{
+ struct sparx5_psfp_fm fm = { .pol.idx = id,
+ .pol.type = SPX5_POL_SERVICE };
+ int ret;
+
+ /* Find the group that this lb belongs to */
+ ret = sparx5_sdlb_group_get_by_index(sparx5, id, &fm.pol.group);
+ if (ret < 0)
+ return ret;
+
+ ret = sparx5_psfp_fm_put(id);
+ if (ret < 0)
+ return ret;
+ /* Do not reset flow-meter if still in use. */
+ if (ret > 0)
+ return 0;
+
+ return sparx5_sdlb_conf_set(sparx5, &fm);
+}
+
+void sparx5_psfp_init(struct sparx5 *sparx5)
+{
+ const struct sparx5_sdlb_group *group;
+ int i;
+
+ for (i = 0; i < SPX5_SDLB_GROUP_CNT; i++) {
+ group = &sdlb_groups[i];
+ sparx5_sdlb_group_init(sparx5, group->max_rate,
+ group->min_burst, group->frame_size, i);
+ }
+
+ spx5_wr(ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_UPDATE_ENA_SET(1),
+ sparx5, ANA_AC_SG_CYCLETIME_UPDATE_PERIOD);
+
+ spx5_rmw(ANA_L2_FWD_CFG_ISDX_LOOKUP_ENA_SET(1),
+ ANA_L2_FWD_CFG_ISDX_LOOKUP_ENA, sparx5, ANA_L2_FWD_CFG);
+}
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c b/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c
index 69e76634f9aa..0edb98cef7e4 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c
@@ -476,8 +476,7 @@ static int sparx5_ptp_settime64(struct ptp_clock_info *ptp,
return 0;
}
-static int sparx5_ptp_gettime64(struct ptp_clock_info *ptp,
- struct timespec64 *ts)
+int sparx5_ptp_gettime64(struct ptp_clock_info *ptp, struct timespec64 *ts)
{
struct sparx5_phc *phc = container_of(ptp, struct sparx5_phc, info);
struct sparx5 *sparx5 = phc->sparx5;
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_qos.c b/drivers/net/ethernet/microchip/sparx5/sparx5_qos.c
index 379e540e5e6a..5f34febaee6b 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_qos.c
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_qos.c
@@ -9,6 +9,63 @@
#include "sparx5_main.h"
#include "sparx5_qos.h"
+/* Calculate new base_time based on cycle_time.
+ *
+ * The hardware requires a base_time that is always in the future.
+ * We define threshold_time as current_time + (2 * cycle_time).
+ * If base_time is below threshold_time this function recalculates it to be in
+ * the interval:
+ * threshold_time <= base_time < (threshold_time + cycle_time)
+ *
+ * A very simple algorithm could be like this:
+ * new_base_time = org_base_time + N * cycle_time
+ * using the lowest N so that new_base_time >= threshold_time.
+ */
+void sparx5_new_base_time(struct sparx5 *sparx5, const u32 cycle_time,
+ const ktime_t org_base_time, ktime_t *new_base_time)
+{
+ ktime_t current_time, threshold_time, new_time;
+ struct timespec64 ts;
+ u64 nr_of_cycles_p2;
+ u64 nr_of_cycles;
+ u64 diff_time;
+
+ new_time = org_base_time;
+
+ sparx5_ptp_gettime64(&sparx5->phc[SPARX5_PHC_PORT].info, &ts);
+ current_time = timespec64_to_ktime(ts);
+ threshold_time = current_time + (2 * cycle_time);
+ diff_time = threshold_time - new_time;
+ nr_of_cycles = div_u64(diff_time, cycle_time);
+ nr_of_cycles_p2 = 1; /* Use 2^0 as start value */
+
+ if (new_time >= threshold_time) {
+ *new_base_time = new_time;
+ return;
+ }
+
+ /* Calculate the smallest power of 2 (nr_of_cycles_p2)
+ * that is larger than nr_of_cycles.
+ */
+ while (nr_of_cycles_p2 < nr_of_cycles)
+ nr_of_cycles_p2 <<= 1; /* Next (higher) power of 2 */
+
+ /* Add as big chunks (power of 2 * cycle_time)
+ * as possible for each power of 2
+ */
+ while (nr_of_cycles_p2) {
+ if (new_time < threshold_time) {
+ new_time += cycle_time * nr_of_cycles_p2;
+ while (new_time < threshold_time)
+ new_time += cycle_time * nr_of_cycles_p2;
+ new_time -= cycle_time * nr_of_cycles_p2;
+ }
+ nr_of_cycles_p2 >>= 1; /* Next (lower) power of 2 */
+ }
+ new_time += cycle_time;
+ *new_base_time = new_time;
+}
+
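The doubling/halving loop sidesteps a 64-by-64 division; a direct, divide-based
equivalent is sketched below only to pin down what the loop computes (the
helper name is hypothetical):

#include <linux/ktime.h>
#include <linux/math64.h>

/* Smallest new_time = org + N * cycle_time with new_time >= threshold */
static ktime_t base_time_direct(ktime_t org, ktime_t threshold, u32 cycle_time)
{
	u64 n;

	if (org >= threshold)
		return org;

	n = DIV64_U64_ROUND_UP((u64)(threshold - org), cycle_time);
	return org + (ktime_t)(n * cycle_time);
}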
/* Max rates for leak groups */
static const u32 spx5_hsch_max_group_rate[SPX5_HSCH_LEAK_GRP_CNT] = {
1048568, /* 1.049 Gbps */
@@ -393,6 +450,8 @@ int sparx5_qos_init(struct sparx5 *sparx5)
if (ret < 0)
return ret;
+ sparx5_psfp_init(sparx5);
+
return 0;
}
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_sdlb.c b/drivers/net/ethernet/microchip/sparx5/sparx5_sdlb.c
new file mode 100644
index 000000000000..f5267218caeb
--- /dev/null
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_sdlb.c
@@ -0,0 +1,335 @@
+// SPDX-License-Identifier: GPL-2.0+
+/* Microchip Sparx5 Switch driver
+ *
+ * Copyright (c) 2023 Microchip Technology Inc. and its subsidiaries.
+ */
+
+#include "sparx5_main_regs.h"
+#include "sparx5_main.h"
+
+struct sparx5_sdlb_group sdlb_groups[SPX5_SDLB_GROUP_CNT] = {
+ { SPX5_SDLB_GROUP_RATE_MAX, 8192 / 1, 64 }, /* 25 G */
+ { 15000000000ULL, 8192 / 1, 64 }, /* 15 G */
+ { 10000000000ULL, 8192 / 1, 64 }, /* 10 G */
+ { 5000000000ULL, 8192 / 1, 64 }, /* 5 G */
+ { 2500000000ULL, 8192 / 1, 64 }, /* 2.5 G */
+ { 1000000000ULL, 8192 / 2, 64 }, /* 1 G */
+ { 500000000ULL, 8192 / 2, 64 }, /* 500 M */
+ { 100000000ULL, 8192 / 4, 64 }, /* 100 M */
+ { 50000000ULL, 8192 / 4, 64 }, /* 50 M */
+ { 5000000ULL, 8192 / 8, 64 } /* 5 M */
+};
+
+int sparx5_sdlb_clk_hz_get(struct sparx5 *sparx5)
+{
+ u32 clk_per_100ps;
+ u64 clk_hz;
+
+ clk_per_100ps = HSCH_SYS_CLK_PER_100PS_GET(spx5_rd(sparx5,
+ HSCH_SYS_CLK_PER));
+ if (!clk_per_100ps)
+ clk_per_100ps = SPX5_CLK_PER_100PS_DEFAULT;
+
+ clk_hz = (10 * 1000 * 1000) / clk_per_100ps;
+ return clk_hz * 1000;
+}
+
+static int sparx5_sdlb_pup_interval_get(struct sparx5 *sparx5, u32 max_token,
+ u64 max_rate)
+{
+ u64 clk_hz;
+
+ clk_hz = sparx5_sdlb_clk_hz_get(sparx5);
+
+ return div64_u64((8 * clk_hz * max_token), max_rate);
+}
+
+int sparx5_sdlb_pup_token_get(struct sparx5 *sparx5, u32 pup_interval, u64 rate)
+{
+ u64 clk_hz;
+
+ if (!rate)
+ return SPX5_SDLB_PUP_TOKEN_DISABLE;
+
+ clk_hz = sparx5_sdlb_clk_hz_get(sparx5);
+
+ return DIV64_U64_ROUND_UP((rate * pup_interval), (clk_hz * 8));
+}
+
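A worked instance of the token formula, assuming the 625 MHz core clock implied
by the default clock period; the interval value is illustrative:

/* rate = 1 Gbps, pup_interval = 8192 clock cycles, clk_hz = 625 MHz:
 *   pup_tokens = DIV64_U64_ROUND_UP(rate * pup_interval, clk_hz * 8)
 *              = DIV64_U64_ROUND_UP(8192 * 10^9, 5 * 10^9) = 1639
 * i.e. about 1639 bytes of credit are added to the bucket per update.
 */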
+static void sparx5_sdlb_group_disable(struct sparx5 *sparx5, u32 group)
+{
+ spx5_rmw(ANA_AC_SDLB_PUP_CTRL_PUP_ENA_SET(0),
+ ANA_AC_SDLB_PUP_CTRL_PUP_ENA, sparx5,
+ ANA_AC_SDLB_PUP_CTRL(group));
+}
+
+static void sparx5_sdlb_group_enable(struct sparx5 *sparx5, u32 group)
+{
+ spx5_rmw(ANA_AC_SDLB_PUP_CTRL_PUP_ENA_SET(1),
+ ANA_AC_SDLB_PUP_CTRL_PUP_ENA, sparx5,
+ ANA_AC_SDLB_PUP_CTRL(group));
+}
+
+static u32 sparx5_sdlb_group_get_first(struct sparx5 *sparx5, u32 group)
+{
+ u32 val;
+
+ val = spx5_rd(sparx5, ANA_AC_SDLB_XLB_START(group));
+
+ return ANA_AC_SDLB_XLB_START_LBSET_START_GET(val);
+}
+
+static u32 sparx5_sdlb_group_get_next(struct sparx5 *sparx5, u32 group,
+ u32 lb)
+{
+ u32 val;
+
+ val = spx5_rd(sparx5, ANA_AC_SDLB_XLB_NEXT(lb));
+
+ return ANA_AC_SDLB_XLB_NEXT_LBSET_NEXT_GET(val);
+}
+
+static bool sparx5_sdlb_group_is_first(struct sparx5 *sparx5, u32 group,
+ u32 lb)
+{
+ return lb == sparx5_sdlb_group_get_first(sparx5, group);
+}
+
+static bool sparx5_sdlb_group_is_last(struct sparx5 *sparx5, u32 group,
+ u32 lb)
+{
+ return lb == sparx5_sdlb_group_get_next(sparx5, group, lb);
+}
+
+static bool sparx5_sdlb_group_is_empty(struct sparx5 *sparx5, u32 group)
+{
+ u32 val;
+
+ val = spx5_rd(sparx5, ANA_AC_SDLB_PUP_CTRL(group));
+
+ return ANA_AC_SDLB_PUP_CTRL_PUP_ENA_GET(val) == 0;
+}
+
+static u32 sparx5_sdlb_group_get_last(struct sparx5 *sparx5, u32 group)
+{
+ u32 itr, next;
+
+ itr = sparx5_sdlb_group_get_first(sparx5, group);
+
+ for (;;) {
+ next = sparx5_sdlb_group_get_next(sparx5, group, itr);
+ if (itr == next)
+ return itr;
+
+ itr = next;
+ }
+}
+
+static bool sparx5_sdlb_group_is_singular(struct sparx5 *sparx5, u32 group)
+{
+ if (sparx5_sdlb_group_is_empty(sparx5, group))
+ return false;
+
+ return sparx5_sdlb_group_get_first(sparx5, group) ==
+ sparx5_sdlb_group_get_last(sparx5, group);
+}
+
+static int sparx5_sdlb_group_get_adjacent(struct sparx5 *sparx5, u32 group,
+ u32 idx, u32 *prev, u32 *next,
+ u32 *first)
+{
+ u32 itr;
+
+ *first = sparx5_sdlb_group_get_first(sparx5, group);
+ *prev = *first;
+ *next = *first;
+ itr = *first;
+
+ for (;;) {
+ *next = sparx5_sdlb_group_get_next(sparx5, group, itr);
+
+ if (itr == idx)
+ return 0; /* Found it */
+
+ if (itr == *next)
+ return -EINVAL; /* Was not found */
+
+ *prev = itr;
+ itr = *next;
+ }
+}
+
+static int sparx5_sdlb_group_get_count(struct sparx5 *sparx5, u32 group)
+{
+ u32 itr, next;
+ int count = 0;
+
+ itr = sparx5_sdlb_group_get_first(sparx5, group);
+
+ for (;;) {
+ next = sparx5_sdlb_group_get_next(sparx5, group, itr);
+ if (itr == next)
+ return count;
+
+ itr = next;
+ count++;
+ }
+}
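
Every traversal helper above relies on the same convention: the XLB list is terminated by an element whose NEXT register points back at itself, so "next == itr" means last. A toy model with array indices standing in for the ANA_AC_SDLB_XLB_NEXT registers (the list contents are made up):

	#include <stdio.h>

	/* next[i] == i marks the last element, mirroring XLB_NEXT */
	static const unsigned int next[8] = { 3, 1, 2, 5, 4, 5, 6, 7 };

	static int list_count(unsigned int first)
	{
		unsigned int itr = first;
		int count = 0;

		for (;;) {
			unsigned int nxt = next[itr];

			if (nxt == itr)
				return count; /* counts hops, not nodes */
			itr = nxt;
			count++;
		}
	}

	int main(void)
	{
		printf("%d\n", list_count(0)); /* 0 -> 3 -> 5 (self): 2 */
		return 0;
	}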
+
+int sparx5_sdlb_group_get_by_rate(struct sparx5 *sparx5, u32 rate, u32 burst)
+{
+ const struct sparx5_sdlb_group *group;
+ u64 rate_bps;
+ int i, count;
+
+ rate_bps = (u64)rate * 1000;
+
+ for (i = SPX5_SDLB_GROUP_CNT - 1; i >= 0; i--) {
+ group = &sdlb_groups[i];
+
+ count = sparx5_sdlb_group_get_count(sparx5, i);
+
+ /* Check that this group is not full.
+ * According to LB group configuration rules: the number of XLBs
+ * in a group must not exceed PUP_INTERVAL/4 - 1.
+ */
+ if (count > ((group->pup_interval / 4) - 1))
+ continue;
+
+ if (rate_bps < group->max_rate)
+ return i;
+ }
+
+ return -ENOSPC;
+}
+
+int sparx5_sdlb_group_get_by_index(struct sparx5 *sparx5, u32 idx, u32 *group)
+{
+ u32 itr, next;
+ int i;
+
+ for (i = 0; i < SPX5_SDLB_GROUP_CNT; i++) {
+ if (sparx5_sdlb_group_is_empty(sparx5, i))
+ continue;
+
+ itr = sparx5_sdlb_group_get_first(sparx5, i);
+
+ for (;;) {
+ next = sparx5_sdlb_group_get_next(sparx5, i, itr);
+
+ if (itr == idx) {
+ *group = i;
+ return 0; /* Found it */
+ }
+ if (itr == next)
+ break; /* Was not found */
+
+ itr = next;
+ }
+ }
+
+ return -EINVAL;
+}
+
+static int sparx5_sdlb_group_link(struct sparx5 *sparx5, u32 group, u32 idx,
+ u32 first, u32 next, bool empty)
+{
+ /* Stop leaking */
+ sparx5_sdlb_group_disable(sparx5, group);
+
+ if (empty)
+ return 0;
+
+ /* Link insertion lb to next lb */
+ spx5_wr(ANA_AC_SDLB_XLB_NEXT_LBSET_NEXT_SET(next) |
+ ANA_AC_SDLB_XLB_NEXT_LBGRP_SET(group),
+ sparx5, ANA_AC_SDLB_XLB_NEXT(idx));
+
+ /* Set the first lb */
+ spx5_wr(ANA_AC_SDLB_XLB_START_LBSET_START_SET(first), sparx5,
+ ANA_AC_SDLB_XLB_START(group));
+
+ /* Start leaking */
+ sparx5_sdlb_group_enable(sparx5, group);
+
+ return 0;
+};
+
+int sparx5_sdlb_group_add(struct sparx5 *sparx5, u32 group, u32 idx)
+{
+ u32 first, next;
+
+ /* We always add to the head of the list */
+ first = idx;
+
+ if (sparx5_sdlb_group_is_empty(sparx5, group))
+ next = idx;
+ else
+ next = sparx5_sdlb_group_get_first(sparx5, group);
+
+ return sparx5_sdlb_group_link(sparx5, group, idx, first, next, false);
+}
+
+int sparx5_sdlb_group_del(struct sparx5 *sparx5, u32 group, u32 idx)
+{
+ u32 first, next, prev;
+ bool empty = false;
+
+ if (sparx5_sdlb_group_get_adjacent(sparx5, group, idx, &prev, &next,
+ &first) < 0) {
+ pr_err("%s:%d Could not find idx: %d in group: %d", __func__,
+ __LINE__, idx, group);
+ return -EINVAL;
+ }
+
+ if (sparx5_sdlb_group_is_singular(sparx5, group)) {
+ empty = true;
+ } else if (sparx5_sdlb_group_is_last(sparx5, group, idx)) {
+ /* idx is removed, prev is now last */
+ idx = prev;
+ next = prev;
+ } else if (sparx5_sdlb_group_is_first(sparx5, group, idx)) {
+ /* idx is removed and points to itself, first is next */
+ first = next;
+ next = idx;
+ } else {
+ /* Next is not touched */
+ idx = prev;
+ }
+
+ return sparx5_sdlb_group_link(sparx5, group, idx, first, next, empty);
+}
+
+void sparx5_sdlb_group_init(struct sparx5 *sparx5, u64 max_rate, u32 min_burst,
+ u32 frame_size, u32 idx)
+{
+ u32 thres_shift, mask = 0x01, power = 0;
+ struct sparx5_sdlb_group *group;
+ u64 max_token;
+
+ group = &sdlb_groups[idx];
+
+ /* Number of positions to right-shift LB's threshold value. */
+ while ((min_burst & mask) == 0) {
+ power++;
+ mask <<= 1;
+ }
+ thres_shift = SPX5_SDLB_2CYCLES_TYPE2_THRES_OFFSET - power;
+
+ max_token = (min_burst > SPX5_SDLB_PUP_TOKEN_MAX) ?
+ SPX5_SDLB_PUP_TOKEN_MAX :
+ min_burst;
+ group->pup_interval =
+ sparx5_sdlb_pup_interval_get(sparx5, max_token, max_rate);
+
+ group->frame_size = frame_size;
+
+ spx5_wr(ANA_AC_SDLB_PUP_INTERVAL_PUP_INTERVAL_SET(group->pup_interval),
+ sparx5, ANA_AC_SDLB_PUP_INTERVAL(idx));
+
+ spx5_wr(ANA_AC_SDLB_FRM_RATE_TOKENS_FRM_RATE_TOKENS_SET(frame_size),
+ sparx5, ANA_AC_SDLB_FRM_RATE_TOKENS(idx));
+
+ spx5_wr(ANA_AC_SDLB_LBGRP_MISC_THRES_SHIFT_SET(thres_shift), sparx5,
+ ANA_AC_SDLB_LBGRP_MISC(idx));
+}
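
The shift loop in sparx5_sdlb_group_init() is simply a trailing-zero count of min_burst, which is expected to be a power of two; in kernel code __ffs() would compute the same value. A standalone check, where the THRES_OFFSET constant's value is assumed purely to make the arithmetic concrete:

	#include <stdio.h>

	int main(void)
	{
		unsigned int min_burst = 4096;	/* example burst, power of 2 */
		unsigned int thres_offset = 13;	/* assumed offset constant */
		unsigned int power = __builtin_ctz(min_burst);	/* 12 */

		printf("thres_shift=%u\n", thres_offset - power);	/* 1 */
		return 0;
	}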
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_tc.c b/drivers/net/ethernet/microchip/sparx5/sparx5_tc.c
index 205246b5af82..e80f3166db7d 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_tc.c
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_tc.c
@@ -5,6 +5,7 @@
*/
#include <net/pkt_cls.h>
+#include <net/pkt_sched.h>
#include "sparx5_tc.h"
#include "sparx5_main.h"
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_tc.h b/drivers/net/ethernet/microchip/sparx5/sparx5_tc.h
index adab88e6b21f..7ef470b28566 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_tc.h
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_tc.h
@@ -21,6 +21,80 @@ enum SPX5_PORT_MASK_MODE {
SPX5_PMM_OR_PGID_MASK,
};
+/* Controls ES0 forwarding */
+enum SPX5_FORWARDING_SEL {
+ SPX5_FWSEL_NO_ACTION,
+ SPX5_FWSEL_COPY_TO_LOOPBACK,
+ SPX5_FWSEL_REDIRECT_TO_LOOPBACK,
+ SPX5_FWSEL_DISCARD,
+};
+
+/* Controls tag A (outer tagging) */
+enum SPX5_OUTER_TAG_SEL {
+ SPX5_OTAG_PORT,
+ SPX5_OTAG_TAG_A,
+ SPX5_OTAG_FORCED_PORT,
+ SPX5_OTAG_UNTAG,
+};
+
+/* Selects TPID for ES0 tag A */
+enum SPX5_TPID_A_SEL {
+ SPX5_TPID_A_8100,
+ SPX5_TPID_A_88A8,
+ SPX5_TPID_A_CUST1,
+ SPX5_TPID_A_CUST2,
+ SPX5_TPID_A_CUST3,
+ SPX5_TPID_A_CLASSIFIED,
+};
+
+/* Selects VID for ES0 tag A */
+enum SPX5_VID_A_SEL {
+ SPX5_VID_A_CLASSIFIED,
+ SPX5_VID_A_VAL,
+ SPX5_VID_A_IFH,
+ SPX5_VID_A_RESERVED,
+};
+
+/* Select PCP source for ES0 tag A */
+enum SPX5_PCP_A_SEL {
+ SPX5_PCP_A_CLASSIFIED,
+ SPX5_PCP_A_VAL,
+ SPX5_PCP_A_RESERVED,
+ SPX5_PCP_A_POPPED,
+ SPX5_PCP_A_MAPPED_0,
+ SPX5_PCP_A_MAPPED_1,
+ SPX5_PCP_A_MAPPED_2,
+ SPX5_PCP_A_MAPPED_3,
+};
+
+/* Select DEI source for ES0 tag A */
+enum SPX5_DEI_A_SEL {
+ SPX5_DEI_A_CLASSIFIED,
+ SPX5_DEI_A_VAL,
+ SPX5_DEI_A_REW,
+ SPX5_DEI_A_POPPED,
+ SPX5_DEI_A_MAPPED_0,
+ SPX5_DEI_A_MAPPED_1,
+ SPX5_DEI_A_MAPPED_2,
+ SPX5_DEI_A_MAPPED_3,
+};
+
+/* Controls tag B (inner tagging) */
+enum SPX5_INNER_TAG_SEL {
+ SPX5_ITAG_NO_PUSH,
+ SPX5_ITAG_PUSH_B_TAG,
+};
+
+/* Selects TPID for ES0 tag B. */
+enum SPX5_TPID_B_SEL {
+ SPX5_TPID_B_8100,
+ SPX5_TPID_B_88A8,
+ SPX5_TPID_B_CUST1,
+ SPX5_TPID_B_CUST2,
+ SPX5_TPID_B_CUST3,
+ SPX5_TPID_B_CLASSIFIED,
+};
+
int sparx5_port_setup_tc(struct net_device *ndev, enum tc_setup_type type,
void *type_data);
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c b/drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c
index 1ed304a816cc..b36819aafaca 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c
@@ -4,11 +4,13 @@
* Copyright (c) 2022 Microchip Technology Inc. and its subsidiaries.
*/
+#include <net/tc_act/tc_gate.h>
#include <net/tcp.h>
#include "sparx5_tc.h"
#include "vcap_api.h"
#include "vcap_api_client.h"
+#include "vcap_tc.h"
#include "sparx5_main.h"
#include "sparx5_vcap_impl.h"
@@ -26,249 +28,33 @@ struct sparx5_multiple_rules {
struct sparx5_wildcard_rule rule[SPX5_MAX_RULE_SIZE];
};
-struct sparx5_tc_flower_parse_usage {
- struct flow_cls_offload *fco;
- struct flow_rule *frule;
- struct vcap_rule *vrule;
- u16 l3_proto;
- u8 l4_proto;
- unsigned int used_keys;
-};
-
-struct sparx5_tc_rule_pkt_cnt {
- u64 cookie;
- u32 pkts;
-};
-
-/* These protocols have dedicated keysets in IS2 and a TC dissector
- * ETH_P_ARP does not have a TC dissector
- */
-static u16 sparx5_tc_known_etypes[] = {
- ETH_P_ALL,
- ETH_P_ARP,
- ETH_P_IP,
- ETH_P_IPV6,
-};
-
-enum sparx5_is2_arp_opcode {
- SPX5_IS2_ARP_REQUEST,
- SPX5_IS2_ARP_REPLY,
- SPX5_IS2_RARP_REQUEST,
- SPX5_IS2_RARP_REPLY,
-};
-
-enum tc_arp_opcode {
- TC_ARP_OP_RESERVED,
- TC_ARP_OP_REQUEST,
- TC_ARP_OP_REPLY,
-};
-
-static bool sparx5_tc_is_known_etype(u16 etype)
-{
- int idx;
-
- /* For now this only knows about IS2 traffic classification */
- for (idx = 0; idx < ARRAY_SIZE(sparx5_tc_known_etypes); ++idx)
- if (sparx5_tc_known_etypes[idx] == etype)
- return true;
-
- return false;
-}
-
-static int sparx5_tc_flower_handler_ethaddr_usage(struct sparx5_tc_flower_parse_usage *st)
-{
- enum vcap_key_field smac_key = VCAP_KF_L2_SMAC;
- enum vcap_key_field dmac_key = VCAP_KF_L2_DMAC;
- struct flow_match_eth_addrs match;
- struct vcap_u48_key smac, dmac;
- int err = 0;
-
- flow_rule_match_eth_addrs(st->frule, &match);
-
- if (!is_zero_ether_addr(match.mask->src)) {
- vcap_netbytes_copy(smac.value, match.key->src, ETH_ALEN);
- vcap_netbytes_copy(smac.mask, match.mask->src, ETH_ALEN);
- err = vcap_rule_add_key_u48(st->vrule, smac_key, &smac);
- if (err)
- goto out;
- }
-
- if (!is_zero_ether_addr(match.mask->dst)) {
- vcap_netbytes_copy(dmac.value, match.key->dst, ETH_ALEN);
- vcap_netbytes_copy(dmac.mask, match.mask->dst, ETH_ALEN);
- err = vcap_rule_add_key_u48(st->vrule, dmac_key, &dmac);
- if (err)
- goto out;
- }
-
- st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS);
-
- return err;
-
-out:
- NL_SET_ERR_MSG_MOD(st->fco->common.extack, "eth_addr parse error");
- return err;
-}
-
-static int
-sparx5_tc_flower_handler_ipv4_usage(struct sparx5_tc_flower_parse_usage *st)
-{
- int err = 0;
-
- if (st->l3_proto == ETH_P_IP) {
- struct flow_match_ipv4_addrs mt;
-
- flow_rule_match_ipv4_addrs(st->frule, &mt);
- if (mt.mask->src) {
- err = vcap_rule_add_key_u32(st->vrule,
- VCAP_KF_L3_IP4_SIP,
- be32_to_cpu(mt.key->src),
- be32_to_cpu(mt.mask->src));
- if (err)
- goto out;
- }
- if (mt.mask->dst) {
- err = vcap_rule_add_key_u32(st->vrule,
- VCAP_KF_L3_IP4_DIP,
- be32_to_cpu(mt.key->dst),
- be32_to_cpu(mt.mask->dst));
- if (err)
- goto out;
- }
- }
-
- st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS);
-
- return err;
-
-out:
- NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ipv4_addr parse error");
- return err;
-}
-
-static int
-sparx5_tc_flower_handler_ipv6_usage(struct sparx5_tc_flower_parse_usage *st)
-{
- int err = 0;
-
- if (st->l3_proto == ETH_P_IPV6) {
- struct flow_match_ipv6_addrs mt;
- struct vcap_u128_key sip;
- struct vcap_u128_key dip;
-
- flow_rule_match_ipv6_addrs(st->frule, &mt);
- /* Check if address masks are non-zero */
- if (!ipv6_addr_any(&mt.mask->src)) {
- vcap_netbytes_copy(sip.value, mt.key->src.s6_addr, 16);
- vcap_netbytes_copy(sip.mask, mt.mask->src.s6_addr, 16);
- err = vcap_rule_add_key_u128(st->vrule,
- VCAP_KF_L3_IP6_SIP, &sip);
- if (err)
- goto out;
- }
- if (!ipv6_addr_any(&mt.mask->dst)) {
- vcap_netbytes_copy(dip.value, mt.key->dst.s6_addr, 16);
- vcap_netbytes_copy(dip.mask, mt.mask->dst.s6_addr, 16);
- err = vcap_rule_add_key_u128(st->vrule,
- VCAP_KF_L3_IP6_DIP, &dip);
- if (err)
- goto out;
- }
- }
- st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS);
- return err;
-out:
- NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ipv6_addr parse error");
- return err;
-}
-
static int
-sparx5_tc_flower_handler_control_usage(struct sparx5_tc_flower_parse_usage *st)
+sparx5_tc_flower_es0_tpid(struct vcap_tc_flower_parse_usage *st)
{
- struct flow_match_control mt;
- u32 value, mask;
int err = 0;
- flow_rule_match_control(st->frule, &mt);
-
- if (mt.mask->flags) {
- if (mt.mask->flags & FLOW_DIS_FIRST_FRAG) {
- if (mt.key->flags & FLOW_DIS_FIRST_FRAG) {
- value = 1; /* initial fragment */
- mask = 0x3;
- } else {
- if (mt.mask->flags & FLOW_DIS_IS_FRAGMENT) {
- value = 3; /* follow up fragment */
- mask = 0x3;
- } else {
- value = 0; /* no fragment */
- mask = 0x3;
- }
- }
- } else {
- if (mt.mask->flags & FLOW_DIS_IS_FRAGMENT) {
- value = 3; /* follow up fragment */
- mask = 0x3;
- } else {
- value = 0; /* no fragment */
- mask = 0x3;
- }
- }
-
+ switch (st->tpid) {
+ case ETH_P_8021Q:
err = vcap_rule_add_key_u32(st->vrule,
- VCAP_KF_L3_FRAGMENT_TYPE,
- value, mask);
- if (err)
- goto out;
- }
-
- st->used_keys |= BIT(FLOW_DISSECTOR_KEY_CONTROL);
-
- return err;
-
-out:
- NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_frag parse error");
- return err;
-}
-
-static int
-sparx5_tc_flower_handler_portnum_usage(struct sparx5_tc_flower_parse_usage *st)
-{
- struct flow_match_ports mt;
- u16 value, mask;
- int err = 0;
-
- flow_rule_match_ports(st->frule, &mt);
-
- if (mt.mask->src) {
- value = be16_to_cpu(mt.key->src);
- mask = be16_to_cpu(mt.mask->src);
- err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L4_SPORT, value,
- mask);
- if (err)
- goto out;
- }
-
- if (mt.mask->dst) {
- value = be16_to_cpu(mt.key->dst);
- mask = be16_to_cpu(mt.mask->dst);
- err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L4_DPORT, value,
- mask);
- if (err)
- goto out;
+ VCAP_KF_8021Q_TPID,
+ SPX5_TPID_SEL_8100, ~0);
+ break;
+ case ETH_P_8021AD:
+ err = vcap_rule_add_key_u32(st->vrule,
+ VCAP_KF_8021Q_TPID,
+ SPX5_TPID_SEL_88A8, ~0);
+ break;
+ default:
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack,
+ "Invalid vlan proto");
+ err = -EINVAL;
+ break;
}
-
- st->used_keys |= BIT(FLOW_DISSECTOR_KEY_PORTS);
-
- return err;
-
-out:
- NL_SET_ERR_MSG_MOD(st->fco->common.extack, "port parse error");
return err;
}
static int
-sparx5_tc_flower_handler_basic_usage(struct sparx5_tc_flower_parse_usage *st)
+sparx5_tc_flower_handler_basic_usage(struct vcap_tc_flower_parse_usage *st)
{
struct flow_match_basic mt;
int err = 0;
@@ -277,7 +63,7 @@ sparx5_tc_flower_handler_basic_usage(struct sparx5_tc_flower_parse_usage *st)
if (mt.mask->n_proto) {
st->l3_proto = be16_to_cpu(mt.key->n_proto);
- if (!sparx5_tc_is_known_etype(st->l3_proto)) {
+ if (!sparx5_vcap_is_known_etype(st->admin, st->l3_proto)) {
err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_ETYPE,
st->l3_proto, ~0);
if (err)
@@ -292,6 +78,13 @@ sparx5_tc_flower_handler_basic_usage(struct sparx5_tc_flower_parse_usage *st)
VCAP_BIT_0);
if (err)
goto out;
+ if (st->admin->vtype == VCAP_TYPE_IS0) {
+ err = vcap_rule_add_key_bit(st->vrule,
+ VCAP_KF_IP_SNAP_IS,
+ VCAP_BIT_1);
+ if (err)
+ goto out;
+ }
}
}
@@ -309,6 +102,13 @@ sparx5_tc_flower_handler_basic_usage(struct sparx5_tc_flower_parse_usage *st)
VCAP_BIT_0);
if (err)
goto out;
+ if (st->admin->vtype == VCAP_TYPE_IS0) {
+ err = vcap_rule_add_key_bit(st->vrule,
+ VCAP_KF_TCP_UDP_IS,
+ VCAP_BIT_1);
+ if (err)
+ goto out;
+ }
} else {
err = vcap_rule_add_key_u32(st->vrule,
VCAP_KF_L3_IP_PROTO,
@@ -328,253 +128,131 @@ out:
}
static int
-sparx5_tc_flower_handler_vlan_usage(struct sparx5_tc_flower_parse_usage *st)
+sparx5_tc_flower_handler_control_usage(struct vcap_tc_flower_parse_usage *st)
{
- enum vcap_key_field vid_key = VCAP_KF_8021Q_VID_CLS;
- enum vcap_key_field pcp_key = VCAP_KF_8021Q_PCP_CLS;
- struct flow_match_vlan mt;
- int err;
-
- flow_rule_match_vlan(st->frule, &mt);
-
- if (mt.mask->vlan_id) {
- err = vcap_rule_add_key_u32(st->vrule, vid_key,
- mt.key->vlan_id,
- mt.mask->vlan_id);
- if (err)
- goto out;
- }
-
- if (mt.mask->vlan_priority) {
- err = vcap_rule_add_key_u32(st->vrule, pcp_key,
- mt.key->vlan_priority,
- mt.mask->vlan_priority);
- if (err)
- goto out;
- }
-
- st->used_keys |= BIT(FLOW_DISSECTOR_KEY_VLAN);
-
- return 0;
-out:
- NL_SET_ERR_MSG_MOD(st->fco->common.extack, "vlan parse error");
- return err;
-}
-
-static int
-sparx5_tc_flower_handler_tcp_usage(struct sparx5_tc_flower_parse_usage *st)
-{
- struct flow_match_tcp mt;
- u16 tcp_flags_mask;
- u16 tcp_flags_key;
- enum vcap_bit val;
+ struct flow_match_control mt;
+ u32 value, mask;
int err = 0;
- flow_rule_match_tcp(st->frule, &mt);
- tcp_flags_key = be16_to_cpu(mt.key->flags);
- tcp_flags_mask = be16_to_cpu(mt.mask->flags);
-
- if (tcp_flags_mask & TCPHDR_FIN) {
- val = VCAP_BIT_0;
- if (tcp_flags_key & TCPHDR_FIN)
- val = VCAP_BIT_1;
- err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_FIN, val);
- if (err)
- goto out;
- }
-
- if (tcp_flags_mask & TCPHDR_SYN) {
- val = VCAP_BIT_0;
- if (tcp_flags_key & TCPHDR_SYN)
- val = VCAP_BIT_1;
- err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_SYN, val);
- if (err)
- goto out;
- }
-
- if (tcp_flags_mask & TCPHDR_RST) {
- val = VCAP_BIT_0;
- if (tcp_flags_key & TCPHDR_RST)
- val = VCAP_BIT_1;
- err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_RST, val);
- if (err)
- goto out;
- }
-
- if (tcp_flags_mask & TCPHDR_PSH) {
- val = VCAP_BIT_0;
- if (tcp_flags_key & TCPHDR_PSH)
- val = VCAP_BIT_1;
- err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_PSH, val);
- if (err)
- goto out;
- }
+ flow_rule_match_control(st->frule, &mt);
- if (tcp_flags_mask & TCPHDR_ACK) {
- val = VCAP_BIT_0;
- if (tcp_flags_key & TCPHDR_ACK)
- val = VCAP_BIT_1;
- err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_ACK, val);
- if (err)
- goto out;
- }
+ if (mt.mask->flags) {
+ if (mt.mask->flags & FLOW_DIS_FIRST_FRAG) {
+ if (mt.key->flags & FLOW_DIS_FIRST_FRAG) {
+ value = 1; /* initial fragment */
+ mask = 0x3;
+ } else {
+ if (mt.mask->flags & FLOW_DIS_IS_FRAGMENT) {
+ value = 3; /* follow up fragment */
+ mask = 0x3;
+ } else {
+ value = 0; /* no fragment */
+ mask = 0x3;
+ }
+ }
+ } else {
+ if (mt.mask->flags & FLOW_DIS_IS_FRAGMENT) {
+ value = 3; /* follow up fragment */
+ mask = 0x3;
+ } else {
+ value = 0; /* no fragment */
+ mask = 0x3;
+ }
+ }
- if (tcp_flags_mask & TCPHDR_URG) {
- val = VCAP_BIT_0;
- if (tcp_flags_key & TCPHDR_URG)
- val = VCAP_BIT_1;
- err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_URG, val);
+ err = vcap_rule_add_key_u32(st->vrule,
+ VCAP_KF_L3_FRAGMENT_TYPE,
+ value, mask);
if (err)
goto out;
}
- st->used_keys |= BIT(FLOW_DISSECTOR_KEY_TCP);
+ st->used_keys |= BIT(FLOW_DISSECTOR_KEY_CONTROL);
return err;
out:
- NL_SET_ERR_MSG_MOD(st->fco->common.extack, "tcp_flags parse error");
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_frag parse error");
return err;
}
static int
-sparx5_tc_flower_handler_arp_usage(struct sparx5_tc_flower_parse_usage *st)
+sparx5_tc_flower_handler_cvlan_usage(struct vcap_tc_flower_parse_usage *st)
{
- struct flow_match_arp mt;
- u16 value, mask;
- u32 ipval, ipmsk;
- int err;
-
- flow_rule_match_arp(st->frule, &mt);
-
- if (mt.mask->op) {
- mask = 0x3;
- if (st->l3_proto == ETH_P_ARP) {
- value = mt.key->op == TC_ARP_OP_REQUEST ?
- SPX5_IS2_ARP_REQUEST :
- SPX5_IS2_ARP_REPLY;
- } else { /* RARP */
- value = mt.key->op == TC_ARP_OP_REQUEST ?
- SPX5_IS2_RARP_REQUEST :
- SPX5_IS2_RARP_REPLY;
- }
- err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_ARP_OPCODE,
- value, mask);
- if (err)
- goto out;
- }
-
- /* The IS2 ARP keyset does not support ARP hardware addresses */
- if (!is_zero_ether_addr(mt.mask->sha) ||
- !is_zero_ether_addr(mt.mask->tha)) {
- err = -EINVAL;
- goto out;
- }
-
- if (mt.mask->sip) {
- ipval = be32_to_cpu((__force __be32)mt.key->sip);
- ipmsk = be32_to_cpu((__force __be32)mt.mask->sip);
-
- err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_IP4_SIP,
- ipval, ipmsk);
- if (err)
- goto out;
- }
-
- if (mt.mask->tip) {
- ipval = be32_to_cpu((__force __be32)mt.key->tip);
- ipmsk = be32_to_cpu((__force __be32)mt.mask->tip);
-
- err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_IP4_DIP,
- ipval, ipmsk);
- if (err)
- goto out;
+ if (st->admin->vtype != VCAP_TYPE_IS0) {
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack,
+ "cvlan not supported in this VCAP");
+ return -EINVAL;
}
- st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ARP);
-
- return 0;
-
-out:
- NL_SET_ERR_MSG_MOD(st->fco->common.extack, "arp parse error");
- return err;
+ return vcap_tc_flower_handler_cvlan_usage(st);
}
static int
-sparx5_tc_flower_handler_ip_usage(struct sparx5_tc_flower_parse_usage *st)
+sparx5_tc_flower_handler_vlan_usage(struct vcap_tc_flower_parse_usage *st)
{
- struct flow_match_ip mt;
- int err = 0;
-
- flow_rule_match_ip(st->frule, &mt);
+ enum vcap_key_field vid_key = VCAP_KF_8021Q_VID_CLS;
+ enum vcap_key_field pcp_key = VCAP_KF_8021Q_PCP_CLS;
+ int err;
- if (mt.mask->tos) {
- err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_TOS,
- mt.key->tos,
- mt.mask->tos);
- if (err)
- goto out;
+ if (st->admin->vtype == VCAP_TYPE_IS0) {
+ vid_key = VCAP_KF_8021Q_VID0;
+ pcp_key = VCAP_KF_8021Q_PCP0;
}
- st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IP);
+ err = vcap_tc_flower_handler_vlan_usage(st, vid_key, pcp_key);
+ if (err)
+ return err;
- return err;
+ if (st->admin->vtype == VCAP_TYPE_ES0 && st->tpid)
+ err = sparx5_tc_flower_es0_tpid(st);
-out:
- NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_tos parse error");
return err;
}
-static int (*sparx5_tc_flower_usage_handlers[])(struct sparx5_tc_flower_parse_usage *st) = {
- [FLOW_DISSECTOR_KEY_ETH_ADDRS] = sparx5_tc_flower_handler_ethaddr_usage,
- [FLOW_DISSECTOR_KEY_IPV4_ADDRS] = sparx5_tc_flower_handler_ipv4_usage,
- [FLOW_DISSECTOR_KEY_IPV6_ADDRS] = sparx5_tc_flower_handler_ipv6_usage,
+static int (*sparx5_tc_flower_usage_handlers[])(struct vcap_tc_flower_parse_usage *st) = {
+ [FLOW_DISSECTOR_KEY_ETH_ADDRS] = vcap_tc_flower_handler_ethaddr_usage,
+ [FLOW_DISSECTOR_KEY_IPV4_ADDRS] = vcap_tc_flower_handler_ipv4_usage,
+ [FLOW_DISSECTOR_KEY_IPV6_ADDRS] = vcap_tc_flower_handler_ipv6_usage,
[FLOW_DISSECTOR_KEY_CONTROL] = sparx5_tc_flower_handler_control_usage,
- [FLOW_DISSECTOR_KEY_PORTS] = sparx5_tc_flower_handler_portnum_usage,
+ [FLOW_DISSECTOR_KEY_PORTS] = vcap_tc_flower_handler_portnum_usage,
[FLOW_DISSECTOR_KEY_BASIC] = sparx5_tc_flower_handler_basic_usage,
+ [FLOW_DISSECTOR_KEY_CVLAN] = sparx5_tc_flower_handler_cvlan_usage,
[FLOW_DISSECTOR_KEY_VLAN] = sparx5_tc_flower_handler_vlan_usage,
- [FLOW_DISSECTOR_KEY_TCP] = sparx5_tc_flower_handler_tcp_usage,
- [FLOW_DISSECTOR_KEY_ARP] = sparx5_tc_flower_handler_arp_usage,
- [FLOW_DISSECTOR_KEY_IP] = sparx5_tc_flower_handler_ip_usage,
+ [FLOW_DISSECTOR_KEY_TCP] = vcap_tc_flower_handler_tcp_usage,
+ [FLOW_DISSECTOR_KEY_ARP] = vcap_tc_flower_handler_arp_usage,
+ [FLOW_DISSECTOR_KEY_IP] = vcap_tc_flower_handler_ip_usage,
};
-static int sparx5_tc_use_dissectors(struct flow_cls_offload *fco,
+static int sparx5_tc_use_dissectors(struct vcap_tc_flower_parse_usage *st,
struct vcap_admin *admin,
- struct vcap_rule *vrule,
- u16 *l3_proto)
+ struct vcap_rule *vrule)
{
- struct sparx5_tc_flower_parse_usage state = {
- .fco = fco,
- .vrule = vrule,
- .l3_proto = ETH_P_ALL,
- };
int idx, err = 0;
- state.frule = flow_cls_offload_flow_rule(fco);
for (idx = 0; idx < ARRAY_SIZE(sparx5_tc_flower_usage_handlers); ++idx) {
- if (!flow_rule_match_key(state.frule, idx))
+ if (!flow_rule_match_key(st->frule, idx))
continue;
if (!sparx5_tc_flower_usage_handlers[idx])
continue;
- err = sparx5_tc_flower_usage_handlers[idx](&state);
+ err = sparx5_tc_flower_usage_handlers[idx](st);
if (err)
return err;
}
- if (state.frule->match.dissector->used_keys ^ state.used_keys) {
- NL_SET_ERR_MSG_MOD(fco->common.extack,
+ if (st->frule->match.dissector->used_keys ^ st->used_keys) {
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack,
"Unsupported match item");
return -ENOENT;
}
- if (l3_proto)
- *l3_proto = state.l3_proto;
return err;
}
static int sparx5_tc_flower_action_check(struct vcap_control *vctrl,
+ struct net_device *ndev,
struct flow_cls_offload *fco,
- struct vcap_admin *admin)
+ bool ingress)
{
struct flow_rule *rule = flow_cls_offload_flow_rule(fco);
struct flow_action_entry *actent, *last_actent = NULL;
@@ -600,21 +278,24 @@ static int sparx5_tc_flower_action_check(struct vcap_control *vctrl,
last_actent = actent; /* Save last action for later check */
}
- /* Check that last action is a goto */
- if (last_actent->id != FLOW_ACTION_GOTO) {
+ /* Check if last action is a goto
+ * The last chain/lookup does not need to have a goto action
+ */
+ if (last_actent->id == FLOW_ACTION_GOTO) {
+ /* Check if the destination chain is in one of the VCAPs */
+ if (!vcap_is_next_lookup(vctrl, fco->common.chain_index,
+ last_actent->chain_index)) {
+ NL_SET_ERR_MSG_MOD(fco->common.extack,
+ "Invalid goto chain");
+ return -EINVAL;
+ }
+ } else if (!vcap_is_last_chain(vctrl, fco->common.chain_index,
+ ingress)) {
NL_SET_ERR_MSG_MOD(fco->common.extack,
"Last action must be 'goto'");
return -EINVAL;
}
- /* Check if the goto chain is in the next lookup */
- if (!vcap_is_next_lookup(vctrl, fco->common.chain_index,
- last_actent->chain_index)) {
- NL_SET_ERR_MSG_MOD(fco->common.extack,
- "Invalid goto chain");
- return -EINVAL;
- }
-
/* Catch unsupported combinations of actions */
if (action_mask & BIT(FLOW_ACTION_TRAP) &&
action_mask & BIT(FLOW_ACTION_ACCEPT)) {
@@ -623,21 +304,60 @@ static int sparx5_tc_flower_action_check(struct vcap_control *vctrl,
return -EOPNOTSUPP;
}
+ if (action_mask & BIT(FLOW_ACTION_VLAN_PUSH) &&
+ action_mask & BIT(FLOW_ACTION_VLAN_POP)) {
+ NL_SET_ERR_MSG_MOD(fco->common.extack,
+ "Cannot combine vlan push and pop action");
+ return -EOPNOTSUPP;
+ }
+
+ if (action_mask & BIT(FLOW_ACTION_VLAN_PUSH) &&
+ action_mask & BIT(FLOW_ACTION_VLAN_MANGLE)) {
+ NL_SET_ERR_MSG_MOD(fco->common.extack,
+ "Cannot combine vlan push and modify action");
+ return -EOPNOTSUPP;
+ }
+
+ if (action_mask & BIT(FLOW_ACTION_VLAN_POP) &&
+ action_mask & BIT(FLOW_ACTION_VLAN_MANGLE)) {
+ NL_SET_ERR_MSG_MOD(fco->common.extack,
+ "Cannot combine vlan pop and modify action");
+ return -EOPNOTSUPP;
+ }
+
return 0;
}
-/* Add a rule counter action - only IS2 is considered for now */
+/* Add a rule counter action */
static int sparx5_tc_add_rule_counter(struct vcap_admin *admin,
struct vcap_rule *vrule)
{
int err;
- err = vcap_rule_mod_action_u32(vrule, VCAP_AF_CNT_ID, vrule->id);
- if (err)
- return err;
-
- vcap_rule_set_counter_id(vrule, vrule->id);
- return err;
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ break;
+ case VCAP_TYPE_ES0:
+ err = vcap_rule_mod_action_u32(vrule, VCAP_AF_ESDX,
+ vrule->id);
+ if (err)
+ return err;
+ vcap_rule_set_counter_id(vrule, vrule->id);
+ break;
+ case VCAP_TYPE_IS2:
+ case VCAP_TYPE_ES2:
+ err = vcap_rule_mod_action_u32(vrule, VCAP_AF_CNT_ID,
+ vrule->id);
+ if (err)
+ return err;
+ vcap_rule_set_counter_id(vrule, vrule->id);
+ break;
+ default:
+ pr_err("%s:%d: vcap type: %d not supported\n",
+ __func__, __LINE__, admin->vtype);
+ break;
+ }
+ return 0;
}
/* Collect all port keysets and apply the first of them, possibly wildcarded */
@@ -818,22 +538,490 @@ static int sparx5_tc_add_remaining_rules(struct vcap_control *vctrl,
return err;
}
+/* Add the actionset that is the default for the VCAP type */
+static int sparx5_tc_set_actionset(struct vcap_admin *admin,
+ struct vcap_rule *vrule)
+{
+ enum vcap_actionfield_set aset;
+ int err = 0;
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ aset = VCAP_AFS_CLASSIFICATION;
+ break;
+ case VCAP_TYPE_IS2:
+ aset = VCAP_AFS_BASE_TYPE;
+ break;
+ case VCAP_TYPE_ES0:
+ aset = VCAP_AFS_ES0;
+ break;
+ case VCAP_TYPE_ES2:
+ aset = VCAP_AFS_BASE_TYPE;
+ break;
+ default:
+ pr_err("%s:%d: %s\n", __func__, __LINE__, "Invalid VCAP type");
+ return -EINVAL;
+ }
+ /* Do not overwrite any current actionset */
+ if (vrule->actionset == VCAP_AFS_NO_VALUE)
+ err = vcap_set_rule_set_actionset(vrule, aset);
+ return err;
+}
+
+/* Add the VCAP key to match on for a rule target value */
+static int sparx5_tc_add_rule_link_target(struct vcap_admin *admin,
+ struct vcap_rule *vrule,
+ int target_cid)
+{
+ int link_val = target_cid % VCAP_CID_LOOKUP_SIZE;
+ int err;
+
+ if (!link_val)
+ return 0;
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ /* Add NXT_IDX key for chaining rules between IS0 instances */
+ err = vcap_rule_add_key_u32(vrule, VCAP_KF_LOOKUP_GEN_IDX_SEL,
+ 1, /* enable */
+ ~0);
+ if (err)
+ return err;
+ return vcap_rule_add_key_u32(vrule, VCAP_KF_LOOKUP_GEN_IDX,
+ link_val, /* target */
+ ~0);
+ case VCAP_TYPE_IS2:
+ /* Add PAG key for chaining rules from IS0 */
+ return vcap_rule_add_key_u32(vrule, VCAP_KF_LOOKUP_PAG,
+ link_val, /* target */
+ ~0);
+ case VCAP_TYPE_ES0:
+ case VCAP_TYPE_ES2:
+ /* Add ISDX key for chaining rules from IS0 */
+ return vcap_rule_add_key_u32(vrule, VCAP_KF_ISDX_CLS, link_val,
+ ~0);
+ default:
+ break;
+ }
+ return 0;
+}
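
In tc terms, these target keys are what make goto-chain links between lookups work: a rule installed in a non-zero chain only matches frames that an earlier lookup explicitly steered there. A rough sketch of such a pair of filters; the device name and chain numbers are illustrative only, since real chain ids follow the driver's VCAP layout:

	# first lookup: classify and jump
	tc filter add dev eth0 ingress chain 0 protocol ip flower \
		dst_ip 10.0.0.1 action goto chain 8000000
	# second lookup: the link-target key is added automatically
	tc filter add dev eth0 ingress chain 8000000 protocol ip flower \
		dst_ip 10.0.0.1 action trap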
+
+/* Add the VCAP action that adds a target value to a rule */
+static int sparx5_tc_add_rule_link(struct vcap_control *vctrl,
+ struct vcap_admin *admin,
+ struct vcap_rule *vrule,
+ int from_cid, int to_cid)
+{
+ struct vcap_admin *to_admin = vcap_find_admin(vctrl, to_cid);
+ int diff, err = 0;
+
+ if (!to_admin) {
+ pr_err("%s:%d: unsupported chain direction: %d\n",
+ __func__, __LINE__, to_cid);
+ return -EINVAL;
+ }
+
+ diff = vcap_chain_offset(vctrl, from_cid, to_cid);
+ if (!diff)
+ return 0;
+
+ if (admin->vtype == VCAP_TYPE_IS0 &&
+ to_admin->vtype == VCAP_TYPE_IS0) {
+ /* Between IS0 instances the G_IDX value is used */
+ err = vcap_rule_add_action_u32(vrule, VCAP_AF_NXT_IDX, diff);
+ if (err)
+ goto out;
+ err = vcap_rule_add_action_u32(vrule, VCAP_AF_NXT_IDX_CTRL,
+ 1); /* Replace */
+ if (err)
+ goto out;
+ } else if (admin->vtype == VCAP_TYPE_IS0 &&
+ to_admin->vtype == VCAP_TYPE_IS2) {
+ /* Between IS0 and IS2 the PAG value is used */
+ err = vcap_rule_add_action_u32(vrule, VCAP_AF_PAG_VAL, diff);
+ if (err)
+ goto out;
+ err = vcap_rule_add_action_u32(vrule,
+ VCAP_AF_PAG_OVERRIDE_MASK,
+ 0xff);
+ if (err)
+ goto out;
+ } else if (admin->vtype == VCAP_TYPE_IS0 &&
+ (to_admin->vtype == VCAP_TYPE_ES0 ||
+ to_admin->vtype == VCAP_TYPE_ES2)) {
+ /* Between IS0 and ES0/ES2 the ISDX value is used */
+ err = vcap_rule_add_action_u32(vrule, VCAP_AF_ISDX_VAL,
+ diff);
+ if (err)
+ goto out;
+ err = vcap_rule_add_action_bit(vrule,
+ VCAP_AF_ISDX_ADD_REPLACE_SEL,
+ VCAP_BIT_1);
+ if (err)
+ goto out;
+ } else {
+ pr_err("%s:%d: unsupported chain destination: %d\n",
+ __func__, __LINE__, to_cid);
+ err = -EOPNOTSUPP;
+ }
+out:
+ return err;
+}
+
+static int sparx5_tc_flower_parse_act_gate(struct sparx5_psfp_sg *sg,
+ struct flow_action_entry *act,
+ struct netlink_ext_ack *extack)
+{
+ int i;
+
+ if (act->gate.prio < -1 || act->gate.prio > SPX5_PSFP_SG_MAX_IPV) {
+ NL_SET_ERR_MSG_MOD(extack, "Invalid gate priority");
+ return -EINVAL;
+ }
+
+ if (act->gate.cycletime < SPX5_PSFP_SG_MIN_CYCLE_TIME_NS ||
+ act->gate.cycletime > SPX5_PSFP_SG_MAX_CYCLE_TIME_NS) {
+ NL_SET_ERR_MSG_MOD(extack, "Invalid gate cycletime");
+ return -EINVAL;
+ }
+
+ if (act->gate.cycletimeext > SPX5_PSFP_SG_MAX_CYCLE_TIME_NS) {
+ NL_SET_ERR_MSG_MOD(extack, "Invalid gate cycletimeext");
+ return -EINVAL;
+ }
+
+ if (act->gate.num_entries >= SPX5_PSFP_GCE_CNT) {
+ NL_SET_ERR_MSG_MOD(extack, "Invalid number of gate entries");
+ return -EINVAL;
+ }
+
+ sg->gate_state = true;
+ sg->ipv = act->gate.prio;
+ sg->num_entries = act->gate.num_entries;
+ sg->cycletime = act->gate.cycletime;
+ sg->cycletimeext = act->gate.cycletimeext;
+
+ for (i = 0; i < sg->num_entries; i++) {
+ sg->gce[i].gate_state = !!act->gate.entries[i].gate_state;
+ sg->gce[i].interval = act->gate.entries[i].interval;
+ sg->gce[i].ipv = act->gate.entries[i].ipv;
+ sg->gce[i].maxoctets = act->gate.entries[i].maxoctets;
+ }
+
+ return 0;
+}
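
This consumes the tc gate action one-to-one. As a hedged usage sketch, a gate with a 1 ms cycle, half open at internal priority 2 and half closed, could be requested like this (device and match values are illustrative):

	tc filter add dev eth0 ingress protocol ip flower dst_ip 10.0.0.1 \
		action gate index 1 clockid CLOCK_TAI base-time 0 \
		sched-entry open 500000 2 -1 sched-entry close 500000 -1 -1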
+
+static int sparx5_tc_flower_parse_act_police(struct sparx5_policer *pol,
+ struct flow_action_entry *act,
+ struct netlink_ext_ack *extack)
+{
+ pol->type = SPX5_POL_SERVICE;
+ pol->rate = div_u64(act->police.rate_bytes_ps, 1000) * 8;
+ pol->burst = act->police.burst;
+ pol->idx = act->hw_index;
+
+ /* rate is now in kbit */
+ if (pol->rate > DIV_ROUND_UP(SPX5_SDLB_GROUP_RATE_MAX, 1000)) {
+ NL_SET_ERR_MSG_MOD(extack, "Maximum rate exceeded");
+ return -EINVAL;
+ }
+
+ if (act->police.exceed.act_id != FLOW_ACTION_DROP) {
+ NL_SET_ERR_MSG_MOD(extack, "Offload not supported when exceed action is not drop");
+ return -EOPNOTSUPP;
+ }
+
+ if (act->police.notexceed.act_id != FLOW_ACTION_PIPE &&
+ act->police.notexceed.act_id != FLOW_ACTION_ACCEPT) {
+ NL_SET_ERR_MSG_MOD(extack, "Offload not supported when conform action is not pipe or ok");
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
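
Note the unit conversion above: tc delivers the rate in bytes per second and pol->rate ends up in kbit, so 125000 bytes/s becomes 1000 kbit. Given the exceed/notexceed checks, an offloadable policer is restricted to the classic drop-on-exceed form, roughly (values illustrative):

	tc filter add dev eth0 ingress protocol ip flower dst_ip 10.0.0.1 \
		action police rate 1mbit burst 8k conform-exceed drop/pipe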
+
+static int sparx5_tc_flower_psfp_setup(struct sparx5 *sparx5,
+ struct vcap_rule *vrule, int sg_idx,
+ int pol_idx, struct sparx5_psfp_sg *sg,
+ struct sparx5_psfp_fm *fm,
+ struct sparx5_psfp_sf *sf)
+{
+ u32 psfp_sfid = 0, psfp_fmid = 0, psfp_sgid = 0;
+ int ret;
+
+ /* A stream gate is always required: max SDU (a stream filter option)
+ * is evaluated after frames have passed the gate, so when only a
+ * policer is requested, we allocate a stream gate that is always open.
+ */
+ if (sg_idx < 0) {
+ sg_idx = sparx5_pool_idx_to_id(SPX5_PSFP_SG_OPEN);
+ sg->ipv = 0; /* Disabled */
+ sg->cycletime = SPX5_PSFP_SG_CYCLE_TIME_DEFAULT;
+ sg->num_entries = 1;
+ sg->gate_state = 1; /* Open */
+ sg->gate_enabled = 1;
+ sg->gce[0].gate_state = 1;
+ sg->gce[0].interval = SPX5_PSFP_SG_CYCLE_TIME_DEFAULT;
+ sg->gce[0].ipv = 0;
+ sg->gce[0].maxoctets = 0; /* Disabled */
+ }
+
+ ret = sparx5_psfp_sg_add(sparx5, sg_idx, sg, &psfp_sgid);
+ if (ret < 0)
+ return ret;
+
+ if (pol_idx >= 0) {
+ /* Add new flow-meter */
+ ret = sparx5_psfp_fm_add(sparx5, pol_idx, fm, &psfp_fmid);
+ if (ret < 0)
+ return ret;
+ }
+
+ /* Map stream filter to stream gate */
+ sf->sgid = psfp_sgid;
+
+ /* Add a new stream filter and map it to a stream gate */
+ ret = sparx5_psfp_sf_add(sparx5, sf, &psfp_sfid);
+ if (ret < 0)
+ return ret;
+
+ /* Streams are classified by ISDX - map ISDX 1:1 to sfid for now. */
+ sparx5_isdx_conf_set(sparx5, psfp_sfid, psfp_sfid, psfp_fmid);
+
+ ret = vcap_rule_add_action_bit(vrule, VCAP_AF_ISDX_ADD_REPLACE_SEL,
+ VCAP_BIT_1);
+ if (ret)
+ return ret;
+
+ ret = vcap_rule_add_action_u32(vrule, VCAP_AF_ISDX_VAL, psfp_sfid);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+/* Handle the action trap for a VCAP rule */
+static int sparx5_tc_action_trap(struct vcap_admin *admin,
+ struct vcap_rule *vrule,
+ struct flow_cls_offload *fco)
+{
+ int err = 0;
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS2:
+ err = vcap_rule_add_action_bit(vrule,
+ VCAP_AF_CPU_COPY_ENA,
+ VCAP_BIT_1);
+ if (err)
+ break;
+ err = vcap_rule_add_action_u32(vrule,
+ VCAP_AF_CPU_QUEUE_NUM, 0);
+ if (err)
+ break;
+ err = vcap_rule_add_action_u32(vrule,
+ VCAP_AF_MASK_MODE,
+ SPX5_PMM_REPLACE_ALL);
+ break;
+ case VCAP_TYPE_ES0:
+ err = vcap_rule_add_action_u32(vrule,
+ VCAP_AF_FWD_SEL,
+ SPX5_FWSEL_REDIRECT_TO_LOOPBACK);
+ break;
+ case VCAP_TYPE_ES2:
+ err = vcap_rule_add_action_bit(vrule,
+ VCAP_AF_CPU_COPY_ENA,
+ VCAP_BIT_1);
+ if (err)
+ break;
+ err = vcap_rule_add_action_u32(vrule,
+ VCAP_AF_CPU_QUEUE_NUM, 0);
+ break;
+ default:
+ NL_SET_ERR_MSG_MOD(fco->common.extack,
+ "Trap action not supported in this VCAP");
+ err = -EOPNOTSUPP;
+ break;
+ }
+ return err;
+}
+
+static int sparx5_tc_action_vlan_pop(struct vcap_admin *admin,
+ struct vcap_rule *vrule,
+ struct flow_cls_offload *fco,
+ u16 tpid)
+{
+ int err = 0;
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_ES0:
+ break;
+ default:
+ NL_SET_ERR_MSG_MOD(fco->common.extack,
+ "VLAN pop action not supported in this VCAP");
+ return -EOPNOTSUPP;
+ }
+
+ switch (tpid) {
+ case ETH_P_8021Q:
+ case ETH_P_8021AD:
+ err = vcap_rule_add_action_u32(vrule,
+ VCAP_AF_PUSH_OUTER_TAG,
+ SPX5_OTAG_UNTAG);
+ break;
+ default:
+ NL_SET_ERR_MSG_MOD(fco->common.extack,
+ "Invalid vlan proto");
+ err = -EINVAL;
+ }
+ return err;
+}
+
+static int sparx5_tc_action_vlan_modify(struct vcap_admin *admin,
+ struct vcap_rule *vrule,
+ struct flow_cls_offload *fco,
+ struct flow_action_entry *act,
+ u16 tpid)
+{
+ int err = 0;
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_ES0:
+ err = vcap_rule_add_action_u32(vrule,
+ VCAP_AF_PUSH_OUTER_TAG,
+ SPX5_OTAG_TAG_A);
+ if (err)
+ return err;
+ break;
+ default:
+ NL_SET_ERR_MSG_MOD(fco->common.extack,
+ "VLAN modify action not supported in this VCAP");
+ return -EOPNOTSUPP;
+ }
+
+ switch (tpid) {
+ case ETH_P_8021Q:
+ err = vcap_rule_add_action_u32(vrule,
+ VCAP_AF_TAG_A_TPID_SEL,
+ SPX5_TPID_A_8100);
+ break;
+ case ETH_P_8021AD:
+ err = vcap_rule_add_action_u32(vrule,
+ VCAP_AF_TAG_A_TPID_SEL,
+ SPX5_TPID_A_88A8);
+ break;
+ default:
+ NL_SET_ERR_MSG_MOD(fco->common.extack,
+ "Invalid vlan proto");
+ err = -EINVAL;
+ }
+ if (err)
+ return err;
+
+ err = vcap_rule_add_action_u32(vrule,
+ VCAP_AF_TAG_A_VID_SEL,
+ SPX5_VID_A_VAL);
+ if (err)
+ return err;
+
+ err = vcap_rule_add_action_u32(vrule,
+ VCAP_AF_VID_A_VAL,
+ act->vlan.vid);
+ if (err)
+ return err;
+
+ err = vcap_rule_add_action_u32(vrule,
+ VCAP_AF_TAG_A_PCP_SEL,
+ SPX5_PCP_A_VAL);
+ if (err)
+ return err;
+
+ err = vcap_rule_add_action_u32(vrule,
+ VCAP_AF_PCP_A_VAL,
+ act->vlan.prio);
+ if (err)
+ return err;
+
+ return vcap_rule_add_action_u32(vrule,
+ VCAP_AF_TAG_A_DEI_SEL,
+ SPX5_DEI_A_CLASSIFIED);
+}
+
+static int sparx5_tc_action_vlan_push(struct vcap_admin *admin,
+ struct vcap_rule *vrule,
+ struct flow_cls_offload *fco,
+ struct flow_action_entry *act,
+ u16 tpid)
+{
+ u16 act_tpid = be16_to_cpu(act->vlan.proto);
+ int err = 0;
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_ES0:
+ break;
+ default:
+ NL_SET_ERR_MSG_MOD(fco->common.extack,
+ "VLAN push action not supported in this VCAP");
+ return -EOPNOTSUPP;
+ }
+
+ if (tpid == ETH_P_8021AD) {
+ NL_SET_ERR_MSG_MOD(fco->common.extack,
+ "Cannot push on double tagged frames");
+ return -EOPNOTSUPP;
+ }
+
+ err = sparx5_tc_action_vlan_modify(admin, vrule, fco, act, act_tpid);
+ if (err)
+ return err;
+
+ switch (act_tpid) {
+ case ETH_P_8021Q:
+ break;
+ case ETH_P_8021AD:
+ /* Push classified tag as inner tag */
+ err = vcap_rule_add_action_u32(vrule,
+ VCAP_AF_PUSH_INNER_TAG,
+ SPX5_ITAG_PUSH_B_TAG);
+ if (err)
+ break;
+ err = vcap_rule_add_action_u32(vrule,
+ VCAP_AF_TAG_B_TPID_SEL,
+ SPX5_TPID_B_CLASSIFIED);
+ break;
+ default:
+ NL_SET_ERR_MSG_MOD(fco->common.extack,
+ "Invalid vlan proto");
+ err = -EINVAL;
+ }
+ return err;
+}
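
Together these three handlers cover the vlan action variants ES0 can rewrite on egress. Illustrative tc counterparts (device and VLAN ids made up):

	# rewrite the outer tag
	tc filter add dev eth0 egress protocol 802.1q flower vlan_id 100 \
		action vlan modify id 200 priority 3
	# pop the outer tag
	tc filter add dev eth0 egress protocol 802.1q flower vlan_id 100 \
		action vlan pop
	# push a new tag on a single-tagged frame
	tc filter add dev eth0 egress protocol 802.1q flower vlan_id 100 \
		action vlan push id 300 protocol 802.1q priority 0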
+
static int sparx5_tc_flower_replace(struct net_device *ndev,
struct flow_cls_offload *fco,
- struct vcap_admin *admin)
+ struct vcap_admin *admin,
+ bool ingress)
{
+ struct sparx5_psfp_sf sf = { .max_sdu = SPX5_PSFP_SF_MAX_SDU };
+ struct netlink_ext_ack *extack = fco->common.extack;
+ int err, idx, tc_sg_idx = -1, tc_pol_idx = -1;
+ struct vcap_tc_flower_parse_usage state = {
+ .fco = fco,
+ .l3_proto = ETH_P_ALL,
+ .admin = admin,
+ };
struct sparx5_port *port = netdev_priv(ndev);
struct sparx5_multiple_rules multi = {};
+ struct sparx5 *sparx5 = port->sparx5;
+ struct sparx5_psfp_sg sg = { 0 };
+ struct sparx5_psfp_fm fm = { 0 };
struct flow_action_entry *act;
struct vcap_control *vctrl;
struct flow_rule *frule;
struct vcap_rule *vrule;
- u16 l3_proto;
- int err, idx;
vctrl = port->sparx5->vcap_ctrl;
- err = sparx5_tc_flower_action_check(vctrl, fco, admin);
+ err = sparx5_tc_flower_action_check(vctrl, ndev, fco, ingress);
if (err)
return err;
@@ -844,8 +1032,9 @@ static int sparx5_tc_flower_replace(struct net_device *ndev,
vrule->cookie = fco->cookie;
- l3_proto = ETH_P_ALL;
- err = sparx5_tc_use_dissectors(fco, admin, vrule, &l3_proto);
+ state.vrule = vrule;
+ state.frule = flow_cls_offload_flow_rule(fco);
+ err = sparx5_tc_use_dissectors(&state, admin, vrule);
if (err)
goto out;
@@ -853,38 +1042,69 @@ static int sparx5_tc_flower_replace(struct net_device *ndev,
if (err)
goto out;
+ err = sparx5_tc_add_rule_link_target(admin, vrule,
+ fco->common.chain_index);
+ if (err)
+ goto out;
+
frule = flow_cls_offload_flow_rule(fco);
flow_action_for_each(idx, act, &frule->action) {
switch (act->id) {
+ case FLOW_ACTION_GATE: {
+ err = sparx5_tc_flower_parse_act_gate(&sg, act, extack);
+ if (err < 0)
+ goto out;
+
+ tc_sg_idx = act->hw_index;
+
+ break;
+ }
+ case FLOW_ACTION_POLICE: {
+ err = sparx5_tc_flower_parse_act_police(&fm.pol, act,
+ extack);
+ if (err < 0)
+ goto out;
+
+ tc_pol_idx = fm.pol.idx;
+ sf.max_sdu = act->police.mtu;
+
+ break;
+ }
case FLOW_ACTION_TRAP:
- err = vcap_rule_add_action_bit(vrule,
- VCAP_AF_CPU_COPY_ENA,
- VCAP_BIT_1);
+ err = sparx5_tc_action_trap(admin, vrule, fco);
if (err)
goto out;
- err = vcap_rule_add_action_u32(vrule,
- VCAP_AF_CPU_QUEUE_NUM, 0);
+ break;
+ case FLOW_ACTION_ACCEPT:
+ err = sparx5_tc_set_actionset(admin, vrule);
if (err)
goto out;
- err = vcap_rule_add_action_u32(vrule, VCAP_AF_MASK_MODE,
- SPX5_PMM_REPLACE_ALL);
+ break;
+ case FLOW_ACTION_GOTO:
+ err = sparx5_tc_set_actionset(admin, vrule);
if (err)
goto out;
- /* For now the actionset is hardcoded */
- err = vcap_set_rule_set_actionset(vrule,
- VCAP_AFS_BASE_TYPE);
+ sparx5_tc_add_rule_link(vctrl, admin, vrule,
+ fco->common.chain_index,
+ act->chain_index);
+ break;
+ case FLOW_ACTION_VLAN_POP:
+ err = sparx5_tc_action_vlan_pop(admin, vrule, fco,
+ state.tpid);
if (err)
goto out;
break;
- case FLOW_ACTION_ACCEPT:
- /* For now the actionset is hardcoded */
- err = vcap_set_rule_set_actionset(vrule,
- VCAP_AFS_BASE_TYPE);
+ case FLOW_ACTION_VLAN_PUSH:
+ err = sparx5_tc_action_vlan_push(admin, vrule, fco,
+ act, state.tpid);
if (err)
goto out;
break;
- case FLOW_ACTION_GOTO:
- /* Links between VCAPs will be added later */
+ case FLOW_ACTION_VLAN_MANGLE:
+ err = sparx5_tc_action_vlan_modify(admin, vrule, fco,
+ act, state.tpid);
+ if (err)
+ goto out;
break;
default:
NL_SET_ERR_MSG_MOD(fco->common.extack,
@@ -894,8 +1114,16 @@ static int sparx5_tc_flower_replace(struct net_device *ndev,
}
}
- err = sparx5_tc_select_protocol_keyset(ndev, vrule, admin, l3_proto,
- &multi);
+ /* Setup PSFP */
+ if (tc_sg_idx >= 0 || tc_pol_idx >= 0) {
+ err = sparx5_tc_flower_psfp_setup(sparx5, vrule, tc_sg_idx,
+ tc_pol_idx, &sg, &fm, &sf);
+ if (err)
+ goto out;
+ }
+
+ err = sparx5_tc_select_protocol_keyset(ndev, vrule, admin,
+ state.l3_proto, &multi);
if (err) {
NL_SET_ERR_MSG_MOD(fco->common.extack,
"No matching port keyset for filter protocol and keys");
@@ -903,7 +1131,7 @@ static int sparx5_tc_flower_replace(struct net_device *ndev,
}
/* provide the l3 protocol to guide the keyset selection */
- err = vcap_val_rule(vrule, l3_proto);
+ err = vcap_val_rule(vrule, state.l3_proto);
if (err) {
vcap_set_tc_exterr(fco, vrule);
goto out;
@@ -913,7 +1141,7 @@ static int sparx5_tc_flower_replace(struct net_device *ndev,
NL_SET_ERR_MSG_MOD(fco->common.extack,
"Could not add the filter");
- if (l3_proto == ETH_P_ALL)
+ if (state.l3_proto == ETH_P_ALL)
err = sparx5_tc_add_remaining_rules(vctrl, fco, vrule, admin,
&multi);
@@ -922,19 +1150,86 @@ out:
return err;
}
+static void sparx5_tc_free_psfp_resources(struct sparx5 *sparx5,
+ struct vcap_rule *vrule)
+{
+ struct vcap_client_actionfield *afield;
+ u32 isdx, sfid, sgid, fmid;
+
+ /* Check if VCAP_AF_ISDX_VAL action is set for this rule - and if
+ * it is used for stream and/or flow-meter classification.
+ */
+ afield = vcap_find_actionfield(vrule, VCAP_AF_ISDX_VAL);
+ if (!afield)
+ return;
+
+ isdx = afield->data.u32.value;
+ sfid = sparx5_psfp_isdx_get_sf(sparx5, isdx);
+
+ if (!sfid)
+ return;
+
+ fmid = sparx5_psfp_isdx_get_fm(sparx5, isdx);
+ sgid = sparx5_psfp_sf_get_sg(sparx5, sfid);
+
+ if (fmid && sparx5_psfp_fm_del(sparx5, fmid) < 0)
+ pr_err("%s:%d Could not delete invalid fmid: %d", __func__,
+ __LINE__, fmid);
+
+ if (sgid && sparx5_psfp_sg_del(sparx5, sgid) < 0)
+ pr_err("%s:%d Could not delete invalid sgid: %d", __func__,
+ __LINE__, sgid);
+
+ if (sparx5_psfp_sf_del(sparx5, sfid) < 0)
+ pr_err("%s:%d Could not delete invalid sfid: %d", __func__,
+ __LINE__, sfid);
+
+ sparx5_isdx_conf_set(sparx5, isdx, 0, 0);
+}
+
+static int sparx5_tc_free_rule_resources(struct net_device *ndev,
+ struct vcap_control *vctrl,
+ int rule_id)
+{
+ struct sparx5_port *port = netdev_priv(ndev);
+ struct sparx5 *sparx5 = port->sparx5;
+ struct vcap_rule *vrule;
+ int ret = 0;
+
+ vrule = vcap_get_rule(vctrl, rule_id);
+ if (!vrule || IS_ERR(vrule))
+ return -EINVAL;
+
+ sparx5_tc_free_psfp_resources(sparx5, vrule);
+
+ vcap_free_rule(vrule);
+ return ret;
+}
+
static int sparx5_tc_flower_destroy(struct net_device *ndev,
struct flow_cls_offload *fco,
struct vcap_admin *admin)
{
struct sparx5_port *port = netdev_priv(ndev);
+ int err = -ENOENT, count = 0, rule_id;
struct vcap_control *vctrl;
- int err = -ENOENT, rule_id;
vctrl = port->sparx5->vcap_ctrl;
while (true) {
rule_id = vcap_lookup_rule_by_cookie(vctrl, fco->cookie);
if (rule_id <= 0)
break;
+ if (count == 0) {
+ /* Resources are attached to the first rule of
+ * a set of rules. Only works if the rules are
+ * in the correct order.
+ */
+ err = sparx5_tc_free_rule_resources(ndev, vctrl,
+ rule_id);
+ if (err)
+ pr_err("%s:%d: could not free resources %d\n",
+ __func__, __LINE__, rule_id);
+ }
err = vcap_del_rule(vctrl, ndev, rule_id);
if (err) {
pr_err("%s:%d: could not delete rule %d\n",
@@ -945,44 +1240,21 @@ static int sparx5_tc_flower_destroy(struct net_device *ndev,
return err;
}
-/* Collect packet counts from all rules with the same cookie */
-static int sparx5_tc_rule_counter_cb(void *arg, struct vcap_rule *rule)
-{
- struct sparx5_tc_rule_pkt_cnt *rinfo = arg;
- struct vcap_counter counter;
- int err = 0;
-
- if (rule->cookie == rinfo->cookie) {
- err = vcap_rule_get_counter(rule, &counter);
- if (err)
- return err;
- rinfo->pkts += counter.value;
- /* Reset the rule counter */
- counter.value = 0;
- vcap_rule_set_counter(rule, &counter);
- }
- return err;
-}
-
static int sparx5_tc_flower_stats(struct net_device *ndev,
struct flow_cls_offload *fco,
struct vcap_admin *admin)
{
struct sparx5_port *port = netdev_priv(ndev);
- struct sparx5_tc_rule_pkt_cnt rinfo = {};
+ struct vcap_counter ctr = {};
struct vcap_control *vctrl;
ulong lastused = 0;
- u64 drops = 0;
- u32 pkts = 0;
int err;
- rinfo.cookie = fco->cookie;
vctrl = port->sparx5->vcap_ctrl;
- err = vcap_rule_iter(vctrl, sparx5_tc_rule_counter_cb, &rinfo);
+ err = vcap_get_rule_count_by_cookie(vctrl, &ctr, fco->cookie);
if (err)
return err;
- pkts = rinfo.pkts;
- flow_stats_update(&fco->stats, 0x0, pkts, drops, lastused,
+ flow_stats_update(&fco->stats, 0x0, ctr.value, 0, lastused,
FLOW_ACTION_HW_STATS_IMMEDIATE);
return err;
}
@@ -1005,7 +1277,7 @@ int sparx5_tc_flower(struct net_device *ndev, struct flow_cls_offload *fco,
switch (fco->command) {
case FLOW_CLS_REPLACE:
- return sparx5_tc_flower_replace(ndev, fco, admin);
+ return sparx5_tc_flower_replace(ndev, fco, admin, ingress);
case FLOW_CLS_DESTROY:
return sparx5_tc_flower_destroy(ndev, fco, admin);
case FLOW_CLS_STATS:
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_tc_matchall.c b/drivers/net/ethernet/microchip/sparx5/sparx5_tc_matchall.c
index 30dd61e5d150..d88a93f22606 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_tc_matchall.c
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_tc_matchall.c
@@ -31,6 +31,7 @@ static int sparx5_tc_matchall_replace(struct net_device *ndev,
switch (action->id) {
case FLOW_ACTION_GOTO:
err = vcap_enable_lookups(sparx5->vcap_ctrl, ndev,
+ tmo->common.chain_index,
action->chain_index, tmo->cookie,
true);
if (err == -EFAULT) {
@@ -43,6 +44,11 @@ static int sparx5_tc_matchall_replace(struct net_device *ndev,
"VCAP already enabled");
return -EOPNOTSUPP;
}
+ if (err == -EADDRNOTAVAIL) {
+ NL_SET_ERR_MSG_MOD(tmo->common.extack,
+ "Already matching this chain");
+ return -EOPNOTSUPP;
+ }
if (err) {
NL_SET_ERR_MSG_MOD(tmo->common.extack,
"Could not enable VCAP lookups");
@@ -66,8 +72,8 @@ static int sparx5_tc_matchall_destroy(struct net_device *ndev,
sparx5 = port->sparx5;
if (!tmo->rule && tmo->cookie) {
- err = vcap_enable_lookups(sparx5->vcap_ctrl, ndev, 0,
- tmo->cookie, false);
+ err = vcap_enable_lookups(sparx5->vcap_ctrl, ndev,
+ 0, 0, tmo->cookie, false);
if (err)
return err;
return 0;
@@ -80,12 +86,6 @@ int sparx5_tc_matchall(struct net_device *ndev,
struct tc_cls_matchall_offload *tmo,
bool ingress)
{
- if (!tc_cls_can_offload_and_chain0(ndev, &tmo->common)) {
- NL_SET_ERR_MSG_MOD(tmo->common.extack,
- "Only chain zero is supported");
- return -EOPNOTSUPP;
- }
-
switch (tmo->command) {
case TC_CLSMATCHALL_REPLACE:
return sparx5_tc_matchall_replace(ndev, tmo, ingress);
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_ag_api.c b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_ag_api.c
index 1bd987c664e8..556d6ea0acd1 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_ag_api.c
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_ag_api.c
@@ -1,10 +1,10 @@
// SPDX-License-Identifier: BSD-3-Clause
-/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries.
+/* Copyright (C) 2023 Microchip Technology Inc. and its subsidiaries.
* Microchip VCAP API
*/
-/* This file is autogenerated by cml-utils 2022-10-13 10:04:41 +0200.
- * Commit ID: fd7cafd175899f0672c73afb3a30fc872500ae86
+/* This file is autogenerated by cml-utils 2023-02-10 11:15:56 +0100.
+ * Commit ID: c30fb4bf0281cd4a7133bdab6682f9e43c872ada
*/
#include <linux/types.h>
@@ -14,6 +14,372 @@
#include "sparx5_vcap_ag_api.h"
/* keyfields */
+static const struct vcap_field is0_normal_7tuple_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 0,
+ .width = 1,
+ },
+ [VCAP_KF_LOOKUP_FIRST_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 1,
+ .width = 1,
+ },
+ [VCAP_KF_LOOKUP_GEN_IDX_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 2,
+ .width = 2,
+ },
+ [VCAP_KF_LOOKUP_GEN_IDX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 4,
+ .width = 12,
+ },
+ [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 16,
+ .width = 2,
+ },
+ [VCAP_KF_IF_IGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U72,
+ .offset = 18,
+ .width = 65,
+ },
+ [VCAP_KF_L2_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 83,
+ .width = 1,
+ },
+ [VCAP_KF_L2_BC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 84,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 85,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_TPID0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 88,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_PCP0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 91,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_DEI0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 94,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 95,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_TPID1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 107,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_PCP1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 110,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_DEI1] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 113,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 114,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_TPID2] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 126,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_PCP2] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 129,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_DEI2] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 132,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID2] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 133,
+ .width = 12,
+ },
+ [VCAP_KF_L2_DMAC] = {
+ .type = VCAP_FIELD_U48,
+ .offset = 145,
+ .width = 48,
+ },
+ [VCAP_KF_L2_SMAC] = {
+ .type = VCAP_FIELD_U48,
+ .offset = 193,
+ .width = 48,
+ },
+ [VCAP_KF_IP_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 241,
+ .width = 1,
+ },
+ [VCAP_KF_ETYPE_LEN_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 242,
+ .width = 1,
+ },
+ [VCAP_KF_ETYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 243,
+ .width = 16,
+ },
+ [VCAP_KF_IP_SNAP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 259,
+ .width = 1,
+ },
+ [VCAP_KF_IP4_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 260,
+ .width = 1,
+ },
+ [VCAP_KF_L3_FRAGMENT_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 261,
+ .width = 2,
+ },
+ [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 263,
+ .width = 1,
+ },
+ [VCAP_KF_L3_OPTIONS_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 264,
+ .width = 1,
+ },
+ [VCAP_KF_L3_DSCP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 265,
+ .width = 6,
+ },
+ [VCAP_KF_L3_IP6_DIP] = {
+ .type = VCAP_FIELD_U128,
+ .offset = 271,
+ .width = 128,
+ },
+ [VCAP_KF_L3_IP6_SIP] = {
+ .type = VCAP_FIELD_U128,
+ .offset = 399,
+ .width = 128,
+ },
+ [VCAP_KF_TCP_UDP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 527,
+ .width = 1,
+ },
+ [VCAP_KF_TCP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 528,
+ .width = 1,
+ },
+ [VCAP_KF_L4_SPORT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 529,
+ .width = 16,
+ },
+ [VCAP_KF_L4_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 545,
+ .width = 8,
+ },
+};
+
+static const struct vcap_field is0_normal_5tuple_ip4_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 0,
+ .width = 2,
+ },
+ [VCAP_KF_LOOKUP_FIRST_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 2,
+ .width = 1,
+ },
+ [VCAP_KF_LOOKUP_GEN_IDX_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 3,
+ .width = 2,
+ },
+ [VCAP_KF_LOOKUP_GEN_IDX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 5,
+ .width = 12,
+ },
+ [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 17,
+ .width = 2,
+ },
+ [VCAP_KF_IF_IGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U72,
+ .offset = 19,
+ .width = 65,
+ },
+ [VCAP_KF_L2_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 84,
+ .width = 1,
+ },
+ [VCAP_KF_L2_BC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 85,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 86,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_TPID0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 89,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_PCP0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 92,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_DEI0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 95,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 96,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_TPID1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 108,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_PCP1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 111,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_DEI1] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 114,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 115,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_TPID2] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 127,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_PCP2] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 130,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_DEI2] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 133,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID2] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 134,
+ .width = 12,
+ },
+ [VCAP_KF_IP_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 146,
+ .width = 1,
+ },
+ [VCAP_KF_IP4_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 147,
+ .width = 1,
+ },
+ [VCAP_KF_L3_FRAGMENT_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 148,
+ .width = 2,
+ },
+ [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 150,
+ .width = 1,
+ },
+ [VCAP_KF_L3_OPTIONS_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 151,
+ .width = 1,
+ },
+ [VCAP_KF_L3_DSCP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 152,
+ .width = 6,
+ },
+ [VCAP_KF_L3_IP4_DIP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 158,
+ .width = 32,
+ },
+ [VCAP_KF_L3_IP4_SIP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 190,
+ .width = 32,
+ },
+ [VCAP_KF_L3_IP_PROTO] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 222,
+ .width = 8,
+ },
+ [VCAP_KF_TCP_UDP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 230,
+ .width = 1,
+ },
+ [VCAP_KF_TCP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 231,
+ .width = 1,
+ },
+ [VCAP_KF_L4_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 232,
+ .width = 8,
+ },
+ [VCAP_KF_IP_PAYLOAD_5TUPLE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 240,
+ .width = 32,
+ },
+};
+
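/* A minimal sketch (not part of this patch) of how a keyfield table such
 * as is0_normal_5tuple_ip4_keyfield above is meant to be read: it is
 * indexed by the VCAP_KF_* enum, and each entry gives the bit offset and
 * width of that key inside the hardware entry. The example_* name is
 * hypothetical.
 */
static inline const struct vcap_field *
example_vcap_field(const struct vcap_field *fields, int count, int id)
{
	/* Unused enum slots are zero-initialized, so width == 0 means
	 * the field is not part of this keyset
	 */
	if (id < 0 || id >= count || fields[id].width == 0)
		return NULL;
	return &fields[id];
}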
static const struct vcap_field is2_mac_etype_keyfield[] = {
[VCAP_KF_TYPE] = {
.type = VCAP_FIELD_U32,
@@ -967,7 +1333,971 @@ static const struct vcap_field is2_ip_7tuple_keyfield[] = {
},
};
+static const struct vcap_field es0_isdx_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 0,
+ .width = 1,
+ },
+ [VCAP_KF_IF_EGR_PORT_NO] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 1,
+ .width = 7,
+ },
+ [VCAP_KF_8021Q_VID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 8,
+ .width = 13,
+ },
+ [VCAP_KF_COSID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 21,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_TPID] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 24,
+ .width = 3,
+ },
+ [VCAP_KF_L3_DPL_CLS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 27,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_GT0_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 28,
+ .width = 1,
+ },
+ [VCAP_KF_PROT_ACTIVE] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 29,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 39,
+ .width = 12,
+ },
+};
+
+static const struct vcap_field es2_mac_etype_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 0,
+ .width = 3,
+ },
+ [VCAP_KF_LOOKUP_FIRST_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 3,
+ .width = 1,
+ },
+ [VCAP_KF_L2_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 13,
+ .width = 1,
+ },
+ [VCAP_KF_L2_BC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 14,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_GT0_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 15,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 16,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 28,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 29,
+ .width = 13,
+ },
+ [VCAP_KF_IF_EGR_PORT_MASK_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 42,
+ .width = 3,
+ },
+ [VCAP_KF_IF_EGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 45,
+ .width = 32,
+ },
+ [VCAP_KF_IF_IGR_PORT_SEL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 77,
+ .width = 1,
+ },
+ [VCAP_KF_IF_IGR_PORT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 78,
+ .width = 9,
+ },
+ [VCAP_KF_8021Q_PCP_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 87,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_DEI_CLS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 90,
+ .width = 1,
+ },
+ [VCAP_KF_COSID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 91,
+ .width = 3,
+ },
+ [VCAP_KF_L3_DPL_CLS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 94,
+ .width = 1,
+ },
+ [VCAP_KF_L3_RT_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 95,
+ .width = 1,
+ },
+ [VCAP_KF_L2_DMAC] = {
+ .type = VCAP_FIELD_U48,
+ .offset = 99,
+ .width = 48,
+ },
+ [VCAP_KF_L2_SMAC] = {
+ .type = VCAP_FIELD_U48,
+ .offset = 147,
+ .width = 48,
+ },
+ [VCAP_KF_ETYPE_LEN_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 195,
+ .width = 1,
+ },
+ [VCAP_KF_ETYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 196,
+ .width = 16,
+ },
+ [VCAP_KF_L2_PAYLOAD_ETYPE] = {
+ .type = VCAP_FIELD_U64,
+ .offset = 212,
+ .width = 64,
+ },
+ [VCAP_KF_OAM_CCM_CNTS_EQ0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 276,
+ .width = 1,
+ },
+ [VCAP_KF_OAM_Y1731_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 277,
+ .width = 1,
+ },
+};
+
+static const struct vcap_field es2_arp_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 0,
+ .width = 3,
+ },
+ [VCAP_KF_LOOKUP_FIRST_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 3,
+ .width = 1,
+ },
+ [VCAP_KF_L2_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 13,
+ .width = 1,
+ },
+ [VCAP_KF_L2_BC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 14,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_GT0_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 15,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 16,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 28,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 29,
+ .width = 13,
+ },
+ [VCAP_KF_IF_EGR_PORT_MASK_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 42,
+ .width = 3,
+ },
+ [VCAP_KF_IF_EGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 45,
+ .width = 32,
+ },
+ [VCAP_KF_IF_IGR_PORT_SEL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 77,
+ .width = 1,
+ },
+ [VCAP_KF_IF_IGR_PORT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 78,
+ .width = 9,
+ },
+ [VCAP_KF_8021Q_PCP_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 87,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_DEI_CLS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 90,
+ .width = 1,
+ },
+ [VCAP_KF_COSID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 91,
+ .width = 3,
+ },
+ [VCAP_KF_L3_DPL_CLS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 94,
+ .width = 1,
+ },
+ [VCAP_KF_L2_SMAC] = {
+ .type = VCAP_FIELD_U48,
+ .offset = 98,
+ .width = 48,
+ },
+ [VCAP_KF_ARP_ADDR_SPACE_OK_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 146,
+ .width = 1,
+ },
+ [VCAP_KF_ARP_PROTO_SPACE_OK_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 147,
+ .width = 1,
+ },
+ [VCAP_KF_ARP_LEN_OK_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 148,
+ .width = 1,
+ },
+ [VCAP_KF_ARP_TGT_MATCH_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 149,
+ .width = 1,
+ },
+ [VCAP_KF_ARP_SENDER_MATCH_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 150,
+ .width = 1,
+ },
+ [VCAP_KF_ARP_OPCODE_UNKNOWN_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 151,
+ .width = 1,
+ },
+ [VCAP_KF_ARP_OPCODE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 152,
+ .width = 2,
+ },
+ [VCAP_KF_L3_IP4_DIP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 154,
+ .width = 32,
+ },
+ [VCAP_KF_L3_IP4_SIP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 186,
+ .width = 32,
+ },
+ [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 218,
+ .width = 1,
+ },
+};
+
+static const struct vcap_field es2_ip4_tcp_udp_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 0,
+ .width = 3,
+ },
+ [VCAP_KF_LOOKUP_FIRST_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 3,
+ .width = 1,
+ },
+ [VCAP_KF_L2_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 13,
+ .width = 1,
+ },
+ [VCAP_KF_L2_BC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 14,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_GT0_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 15,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 16,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 28,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 29,
+ .width = 13,
+ },
+ [VCAP_KF_IF_EGR_PORT_MASK_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 42,
+ .width = 3,
+ },
+ [VCAP_KF_IF_EGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 45,
+ .width = 32,
+ },
+ [VCAP_KF_IF_IGR_PORT_SEL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 77,
+ .width = 1,
+ },
+ [VCAP_KF_IF_IGR_PORT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 78,
+ .width = 9,
+ },
+ [VCAP_KF_8021Q_PCP_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 87,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_DEI_CLS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 90,
+ .width = 1,
+ },
+ [VCAP_KF_COSID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 91,
+ .width = 3,
+ },
+ [VCAP_KF_L3_DPL_CLS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 94,
+ .width = 1,
+ },
+ [VCAP_KF_L3_RT_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 95,
+ .width = 1,
+ },
+ [VCAP_KF_IP4_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 99,
+ .width = 1,
+ },
+ [VCAP_KF_L3_FRAGMENT_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 100,
+ .width = 2,
+ },
+ [VCAP_KF_L3_OPTIONS_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 102,
+ .width = 1,
+ },
+ [VCAP_KF_L3_TTL_GT0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 103,
+ .width = 1,
+ },
+ [VCAP_KF_L3_TOS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 104,
+ .width = 8,
+ },
+ [VCAP_KF_L3_IP4_DIP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 112,
+ .width = 32,
+ },
+ [VCAP_KF_L3_IP4_SIP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 144,
+ .width = 32,
+ },
+ [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 176,
+ .width = 1,
+ },
+ [VCAP_KF_TCP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 177,
+ .width = 1,
+ },
+ [VCAP_KF_L4_DPORT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 178,
+ .width = 16,
+ },
+ [VCAP_KF_L4_SPORT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 194,
+ .width = 16,
+ },
+ [VCAP_KF_L4_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 210,
+ .width = 16,
+ },
+ [VCAP_KF_L4_SPORT_EQ_DPORT_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 226,
+ .width = 1,
+ },
+ [VCAP_KF_L4_SEQUENCE_EQ0_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 227,
+ .width = 1,
+ },
+ [VCAP_KF_L4_FIN] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 228,
+ .width = 1,
+ },
+ [VCAP_KF_L4_SYN] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 229,
+ .width = 1,
+ },
+ [VCAP_KF_L4_RST] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 230,
+ .width = 1,
+ },
+ [VCAP_KF_L4_PSH] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 231,
+ .width = 1,
+ },
+ [VCAP_KF_L4_ACK] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 232,
+ .width = 1,
+ },
+ [VCAP_KF_L4_URG] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 233,
+ .width = 1,
+ },
+ [VCAP_KF_L4_PAYLOAD] = {
+ .type = VCAP_FIELD_U64,
+ .offset = 234,
+ .width = 64,
+ },
+};
+
+static const struct vcap_field es2_ip4_other_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 0,
+ .width = 3,
+ },
+ [VCAP_KF_LOOKUP_FIRST_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 3,
+ .width = 1,
+ },
+ [VCAP_KF_L2_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 13,
+ .width = 1,
+ },
+ [VCAP_KF_L2_BC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 14,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_GT0_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 15,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 16,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 28,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 29,
+ .width = 13,
+ },
+ [VCAP_KF_IF_EGR_PORT_MASK_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 42,
+ .width = 3,
+ },
+ [VCAP_KF_IF_EGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 45,
+ .width = 32,
+ },
+ [VCAP_KF_IF_IGR_PORT_SEL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 77,
+ .width = 1,
+ },
+ [VCAP_KF_IF_IGR_PORT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 78,
+ .width = 9,
+ },
+ [VCAP_KF_8021Q_PCP_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 87,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_DEI_CLS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 90,
+ .width = 1,
+ },
+ [VCAP_KF_COSID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 91,
+ .width = 3,
+ },
+ [VCAP_KF_L3_DPL_CLS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 94,
+ .width = 1,
+ },
+ [VCAP_KF_L3_RT_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 95,
+ .width = 1,
+ },
+ [VCAP_KF_IP4_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 99,
+ .width = 1,
+ },
+ [VCAP_KF_L3_FRAGMENT_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 100,
+ .width = 2,
+ },
+ [VCAP_KF_L3_OPTIONS_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 102,
+ .width = 1,
+ },
+ [VCAP_KF_L3_TTL_GT0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 103,
+ .width = 1,
+ },
+ [VCAP_KF_L3_TOS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 104,
+ .width = 8,
+ },
+ [VCAP_KF_L3_IP4_DIP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 112,
+ .width = 32,
+ },
+ [VCAP_KF_L3_IP4_SIP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 144,
+ .width = 32,
+ },
+ [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 176,
+ .width = 1,
+ },
+ [VCAP_KF_L3_IP_PROTO] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 177,
+ .width = 8,
+ },
+ [VCAP_KF_L3_PAYLOAD] = {
+ .type = VCAP_FIELD_U112,
+ .offset = 185,
+ .width = 96,
+ },
+};
+
+static const struct vcap_field es2_ip_7tuple_keyfield[] = {
+ [VCAP_KF_LOOKUP_FIRST_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 0,
+ .width = 1,
+ },
+ [VCAP_KF_L2_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 10,
+ .width = 1,
+ },
+ [VCAP_KF_L2_BC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 11,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_GT0_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 12,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 13,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 25,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 26,
+ .width = 13,
+ },
+ [VCAP_KF_IF_EGR_PORT_MASK_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 39,
+ .width = 3,
+ },
+ [VCAP_KF_IF_EGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 42,
+ .width = 32,
+ },
+ [VCAP_KF_IF_IGR_PORT_SEL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 74,
+ .width = 1,
+ },
+ [VCAP_KF_IF_IGR_PORT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 75,
+ .width = 9,
+ },
+ [VCAP_KF_8021Q_PCP_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 84,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_DEI_CLS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 87,
+ .width = 1,
+ },
+ [VCAP_KF_COSID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 88,
+ .width = 3,
+ },
+ [VCAP_KF_L3_DPL_CLS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 91,
+ .width = 1,
+ },
+ [VCAP_KF_L3_RT_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 92,
+ .width = 1,
+ },
+ [VCAP_KF_L2_DMAC] = {
+ .type = VCAP_FIELD_U48,
+ .offset = 96,
+ .width = 48,
+ },
+ [VCAP_KF_L2_SMAC] = {
+ .type = VCAP_FIELD_U48,
+ .offset = 144,
+ .width = 48,
+ },
+ [VCAP_KF_IP4_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 192,
+ .width = 1,
+ },
+ [VCAP_KF_L3_TTL_GT0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 193,
+ .width = 1,
+ },
+ [VCAP_KF_L3_TOS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 194,
+ .width = 8,
+ },
+ [VCAP_KF_L3_IP6_DIP] = {
+ .type = VCAP_FIELD_U128,
+ .offset = 202,
+ .width = 128,
+ },
+ [VCAP_KF_L3_IP6_SIP] = {
+ .type = VCAP_FIELD_U128,
+ .offset = 330,
+ .width = 128,
+ },
+ [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 458,
+ .width = 1,
+ },
+ [VCAP_KF_TCP_UDP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 459,
+ .width = 1,
+ },
+ [VCAP_KF_TCP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 460,
+ .width = 1,
+ },
+ [VCAP_KF_L4_DPORT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 461,
+ .width = 16,
+ },
+ [VCAP_KF_L4_SPORT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 477,
+ .width = 16,
+ },
+ [VCAP_KF_L4_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 493,
+ .width = 16,
+ },
+ [VCAP_KF_L4_SPORT_EQ_DPORT_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 509,
+ .width = 1,
+ },
+ [VCAP_KF_L4_SEQUENCE_EQ0_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 510,
+ .width = 1,
+ },
+ [VCAP_KF_L4_FIN] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 511,
+ .width = 1,
+ },
+ [VCAP_KF_L4_SYN] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 512,
+ .width = 1,
+ },
+ [VCAP_KF_L4_RST] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 513,
+ .width = 1,
+ },
+ [VCAP_KF_L4_PSH] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 514,
+ .width = 1,
+ },
+ [VCAP_KF_L4_ACK] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 515,
+ .width = 1,
+ },
+ [VCAP_KF_L4_URG] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 516,
+ .width = 1,
+ },
+ [VCAP_KF_L4_PAYLOAD] = {
+ .type = VCAP_FIELD_U64,
+ .offset = 517,
+ .width = 64,
+ },
+};
+
+static const struct vcap_field es2_ip6_std_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 0,
+ .width = 3,
+ },
+ [VCAP_KF_LOOKUP_FIRST_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 3,
+ .width = 1,
+ },
+ [VCAP_KF_L2_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 13,
+ .width = 1,
+ },
+ [VCAP_KF_L2_BC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 14,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_GT0_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 15,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 16,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 28,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 29,
+ .width = 13,
+ },
+ [VCAP_KF_IF_EGR_PORT_MASK_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 42,
+ .width = 3,
+ },
+ [VCAP_KF_IF_EGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 45,
+ .width = 32,
+ },
+ [VCAP_KF_IF_IGR_PORT_SEL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 77,
+ .width = 1,
+ },
+ [VCAP_KF_IF_IGR_PORT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 78,
+ .width = 9,
+ },
+ [VCAP_KF_8021Q_PCP_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 87,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_DEI_CLS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 90,
+ .width = 1,
+ },
+ [VCAP_KF_COSID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 91,
+ .width = 3,
+ },
+ [VCAP_KF_L3_DPL_CLS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 94,
+ .width = 1,
+ },
+ [VCAP_KF_L3_RT_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 95,
+ .width = 1,
+ },
+ [VCAP_KF_L3_TTL_GT0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 99,
+ .width = 1,
+ },
+ [VCAP_KF_L3_IP6_SIP] = {
+ .type = VCAP_FIELD_U128,
+ .offset = 100,
+ .width = 128,
+ },
+ [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 228,
+ .width = 1,
+ },
+ [VCAP_KF_L3_IP_PROTO] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 229,
+ .width = 8,
+ },
+ [VCAP_KF_L4_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 237,
+ .width = 16,
+ },
+ [VCAP_KF_L3_PAYLOAD] = {
+ .type = VCAP_FIELD_U48,
+ .offset = 253,
+ .width = 40,
+ },
+};
+
/* keyfield_set */
+static const struct vcap_set is0_keyfield_set[] = {
+ [VCAP_KFS_NORMAL_7TUPLE] = {
+ .type_id = 0,
+ .sw_per_item = 12,
+ .sw_cnt = 1,
+ },
+ [VCAP_KFS_NORMAL_5TUPLE_IP4] = {
+ .type_id = 2,
+ .sw_per_item = 6,
+ .sw_cnt = 2,
+ },
+};
+
static const struct vcap_set is2_keyfield_set[] = {
[VCAP_KFS_MAC_ETYPE] = {
.type_id = 0,
@@ -1001,7 +2331,53 @@ static const struct vcap_set is2_keyfield_set[] = {
},
};
+static const struct vcap_set es0_keyfield_set[] = {
+ [VCAP_KFS_ISDX] = {
+ .type_id = 0,
+ .sw_per_item = 1,
+ .sw_cnt = 1,
+ },
+};
+
+static const struct vcap_set es2_keyfield_set[] = {
+ [VCAP_KFS_MAC_ETYPE] = {
+ .type_id = 0,
+ .sw_per_item = 6,
+ .sw_cnt = 2,
+ },
+ [VCAP_KFS_ARP] = {
+ .type_id = 1,
+ .sw_per_item = 6,
+ .sw_cnt = 2,
+ },
+ [VCAP_KFS_IP4_TCP_UDP] = {
+ .type_id = 2,
+ .sw_per_item = 6,
+ .sw_cnt = 2,
+ },
+ [VCAP_KFS_IP4_OTHER] = {
+ .type_id = 3,
+ .sw_per_item = 6,
+ .sw_cnt = 2,
+ },
+ [VCAP_KFS_IP_7TUPLE] = {
+ .type_id = -1,
+ .sw_per_item = 12,
+ .sw_cnt = 1,
+ },
+ [VCAP_KFS_IP6_STD] = {
+ .type_id = 4,
+ .sw_per_item = 6,
+ .sw_cnt = 2,
+ },
+};
+
/* keyfield_set map */
+static const struct vcap_field *is0_keyfield_set_map[] = {
+ [VCAP_KFS_NORMAL_7TUPLE] = is0_normal_7tuple_keyfield,
+ [VCAP_KFS_NORMAL_5TUPLE_IP4] = is0_normal_5tuple_ip4_keyfield,
+};
+
static const struct vcap_field *is2_keyfield_set_map[] = {
[VCAP_KFS_MAC_ETYPE] = is2_mac_etype_keyfield,
[VCAP_KFS_ARP] = is2_arp_keyfield,
@@ -1011,7 +2387,25 @@ static const struct vcap_field *is2_keyfield_set_map[] = {
[VCAP_KFS_IP_7TUPLE] = is2_ip_7tuple_keyfield,
};
+static const struct vcap_field *es0_keyfield_set_map[] = {
+ [VCAP_KFS_ISDX] = es0_isdx_keyfield,
+};
+
+static const struct vcap_field *es2_keyfield_set_map[] = {
+ [VCAP_KFS_MAC_ETYPE] = es2_mac_etype_keyfield,
+ [VCAP_KFS_ARP] = es2_arp_keyfield,
+ [VCAP_KFS_IP4_TCP_UDP] = es2_ip4_tcp_udp_keyfield,
+ [VCAP_KFS_IP4_OTHER] = es2_ip4_other_keyfield,
+ [VCAP_KFS_IP_7TUPLE] = es2_ip_7tuple_keyfield,
+ [VCAP_KFS_IP6_STD] = es2_ip6_std_keyfield,
+};
+
/* keyfield_set map sizes */
+static int is0_keyfield_set_map_size[] = {
+ [VCAP_KFS_NORMAL_7TUPLE] = ARRAY_SIZE(is0_normal_7tuple_keyfield),
+ [VCAP_KFS_NORMAL_5TUPLE_IP4] = ARRAY_SIZE(is0_normal_5tuple_ip4_keyfield),
+};
+
static int is2_keyfield_set_map_size[] = {
[VCAP_KFS_MAC_ETYPE] = ARRAY_SIZE(is2_mac_etype_keyfield),
[VCAP_KFS_ARP] = ARRAY_SIZE(is2_arp_keyfield),
@@ -1021,7 +2415,319 @@ static int is2_keyfield_set_map_size[] = {
[VCAP_KFS_IP_7TUPLE] = ARRAY_SIZE(is2_ip_7tuple_keyfield),
};
+static int es0_keyfield_set_map_size[] = {
+ [VCAP_KFS_ISDX] = ARRAY_SIZE(es0_isdx_keyfield),
+};
+
+static int es2_keyfield_set_map_size[] = {
+ [VCAP_KFS_MAC_ETYPE] = ARRAY_SIZE(es2_mac_etype_keyfield),
+ [VCAP_KFS_ARP] = ARRAY_SIZE(es2_arp_keyfield),
+ [VCAP_KFS_IP4_TCP_UDP] = ARRAY_SIZE(es2_ip4_tcp_udp_keyfield),
+ [VCAP_KFS_IP4_OTHER] = ARRAY_SIZE(es2_ip4_other_keyfield),
+ [VCAP_KFS_IP_7TUPLE] = ARRAY_SIZE(es2_ip_7tuple_keyfield),
+ [VCAP_KFS_IP6_STD] = ARRAY_SIZE(es2_ip6_std_keyfield),
+};
+
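/* Sketch only, not part of this patch: each VCAP instance carries a
 * triple of tables - the vcap_set array describes the keyset geometry,
 * the *_keyfield_set_map array selects the field table for that keyset,
 * and the *_map_size array holds its ARRAY_SIZE() so iteration stays
 * bounded. A hypothetical dump of one ES2 keyset:
 */
static void example_es2_dump_keyset(enum vcap_keyfield_set kfs)
{
	const struct vcap_field *fields = es2_keyfield_set_map[kfs];
	int count = es2_keyfield_set_map_size[kfs];
	int id;

	for (id = 0; id < count; id++)
		if (fields[id].width)
			pr_info("key %d: offset %d width %d\n",
				id, fields[id].offset, fields[id].width);
}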
/* actionfields */
+static const struct vcap_field is0_classification_actionfield[] = {
+ [VCAP_AF_TYPE] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 0,
+ .width = 1,
+ },
+ [VCAP_AF_DSCP_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 1,
+ .width = 1,
+ },
+ [VCAP_AF_DSCP_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 2,
+ .width = 6,
+ },
+ [VCAP_AF_QOS_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 12,
+ .width = 1,
+ },
+ [VCAP_AF_QOS_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 13,
+ .width = 3,
+ },
+ [VCAP_AF_DP_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 16,
+ .width = 1,
+ },
+ [VCAP_AF_DP_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 17,
+ .width = 2,
+ },
+ [VCAP_AF_DEI_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 19,
+ .width = 1,
+ },
+ [VCAP_AF_DEI_VAL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 20,
+ .width = 1,
+ },
+ [VCAP_AF_PCP_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 21,
+ .width = 1,
+ },
+ [VCAP_AF_PCP_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 22,
+ .width = 3,
+ },
+ [VCAP_AF_MAP_LOOKUP_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 25,
+ .width = 2,
+ },
+ [VCAP_AF_MAP_KEY] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 27,
+ .width = 3,
+ },
+ [VCAP_AF_MAP_IDX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 30,
+ .width = 9,
+ },
+ [VCAP_AF_CLS_VID_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 39,
+ .width = 3,
+ },
+ [VCAP_AF_VID_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 45,
+ .width = 13,
+ },
+ [VCAP_AF_ISDX_ADD_REPLACE_SEL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 68,
+ .width = 1,
+ },
+ [VCAP_AF_ISDX_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 69,
+ .width = 12,
+ },
+ [VCAP_AF_PAG_OVERRIDE_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 109,
+ .width = 8,
+ },
+ [VCAP_AF_PAG_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 117,
+ .width = 8,
+ },
+ [VCAP_AF_NXT_IDX_CTRL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 171,
+ .width = 3,
+ },
+ [VCAP_AF_NXT_IDX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 174,
+ .width = 12,
+ },
+};
+
+static const struct vcap_field is0_full_actionfield[] = {
+ [VCAP_AF_DSCP_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 0,
+ .width = 1,
+ },
+ [VCAP_AF_DSCP_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 1,
+ .width = 6,
+ },
+ [VCAP_AF_QOS_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 11,
+ .width = 1,
+ },
+ [VCAP_AF_QOS_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 12,
+ .width = 3,
+ },
+ [VCAP_AF_DP_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 15,
+ .width = 1,
+ },
+ [VCAP_AF_DP_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 16,
+ .width = 2,
+ },
+ [VCAP_AF_DEI_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 18,
+ .width = 1,
+ },
+ [VCAP_AF_DEI_VAL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 19,
+ .width = 1,
+ },
+ [VCAP_AF_PCP_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 20,
+ .width = 1,
+ },
+ [VCAP_AF_PCP_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 21,
+ .width = 3,
+ },
+ [VCAP_AF_MAP_LOOKUP_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 24,
+ .width = 2,
+ },
+ [VCAP_AF_MAP_KEY] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 26,
+ .width = 3,
+ },
+ [VCAP_AF_MAP_IDX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 29,
+ .width = 9,
+ },
+ [VCAP_AF_CLS_VID_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 38,
+ .width = 3,
+ },
+ [VCAP_AF_VID_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 44,
+ .width = 13,
+ },
+ [VCAP_AF_ISDX_ADD_REPLACE_SEL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 67,
+ .width = 1,
+ },
+ [VCAP_AF_ISDX_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 68,
+ .width = 12,
+ },
+ [VCAP_AF_MASK_MODE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 80,
+ .width = 3,
+ },
+ [VCAP_AF_PORT_MASK] = {
+ .type = VCAP_FIELD_U72,
+ .offset = 83,
+ .width = 65,
+ },
+ [VCAP_AF_PAG_OVERRIDE_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 204,
+ .width = 8,
+ },
+ [VCAP_AF_PAG_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 212,
+ .width = 8,
+ },
+ [VCAP_AF_NXT_IDX_CTRL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 298,
+ .width = 3,
+ },
+ [VCAP_AF_NXT_IDX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 301,
+ .width = 12,
+ },
+};
+
+static const struct vcap_field is0_class_reduced_actionfield[] = {
+ [VCAP_AF_TYPE] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 0,
+ .width = 1,
+ },
+ [VCAP_AF_QOS_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 5,
+ .width = 1,
+ },
+ [VCAP_AF_QOS_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 6,
+ .width = 3,
+ },
+ [VCAP_AF_DP_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 9,
+ .width = 1,
+ },
+ [VCAP_AF_DP_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 10,
+ .width = 2,
+ },
+ [VCAP_AF_MAP_LOOKUP_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 12,
+ .width = 2,
+ },
+ [VCAP_AF_MAP_KEY] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 14,
+ .width = 3,
+ },
+ [VCAP_AF_CLS_VID_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 17,
+ .width = 3,
+ },
+ [VCAP_AF_VID_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 23,
+ .width = 13,
+ },
+ [VCAP_AF_ISDX_ADD_REPLACE_SEL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 46,
+ .width = 1,
+ },
+ [VCAP_AF_ISDX_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 47,
+ .width = 12,
+ },
+ [VCAP_AF_NXT_IDX_CTRL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 90,
+ .width = 3,
+ },
+ [VCAP_AF_NXT_IDX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 93,
+ .width = 12,
+ },
+};
+
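/* Sketch only, not part of this patch: the IS0 action sets above pair an
 * .._ENA bit with a .._VAL field (DSCP, QOS, DP, DEI, PCP), and a value
 * only takes effect when its enable bit is set. Assuming the VCAP client
 * API helpers from vcap_api_client.h, forcing DSCP 46 would look like:
 */
static int example_force_dscp(struct vcap_rule *rule)
{
	int err;

	err = vcap_rule_add_action_bit(rule, VCAP_AF_DSCP_ENA, VCAP_BIT_1);
	if (err)
		return err;
	return vcap_rule_add_action_u32(rule, VCAP_AF_DSCP_VAL, 46);
}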
static const struct vcap_field is2_base_type_actionfield[] = {
[VCAP_AF_PIPELINE_FORCE_ENA] = {
.type = VCAP_FIELD_BIT,
@@ -1110,7 +2816,276 @@ static const struct vcap_field is2_base_type_actionfield[] = {
},
};
+static const struct vcap_field es0_es0_actionfield[] = {
+ [VCAP_AF_PUSH_OUTER_TAG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 0,
+ .width = 2,
+ },
+ [VCAP_AF_PUSH_INNER_TAG] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 2,
+ .width = 1,
+ },
+ [VCAP_AF_TAG_A_TPID_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 3,
+ .width = 3,
+ },
+ [VCAP_AF_TAG_A_VID_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 6,
+ .width = 2,
+ },
+ [VCAP_AF_TAG_A_PCP_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 8,
+ .width = 3,
+ },
+ [VCAP_AF_TAG_A_DEI_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 11,
+ .width = 3,
+ },
+ [VCAP_AF_TAG_B_TPID_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 14,
+ .width = 3,
+ },
+ [VCAP_AF_TAG_B_VID_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 17,
+ .width = 2,
+ },
+ [VCAP_AF_TAG_B_PCP_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 19,
+ .width = 3,
+ },
+ [VCAP_AF_TAG_B_DEI_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 22,
+ .width = 3,
+ },
+ [VCAP_AF_TAG_C_TPID_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 25,
+ .width = 3,
+ },
+ [VCAP_AF_TAG_C_PCP_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 28,
+ .width = 3,
+ },
+ [VCAP_AF_TAG_C_DEI_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 31,
+ .width = 3,
+ },
+ [VCAP_AF_VID_A_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 34,
+ .width = 12,
+ },
+ [VCAP_AF_PCP_A_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 46,
+ .width = 3,
+ },
+ [VCAP_AF_DEI_A_VAL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 49,
+ .width = 1,
+ },
+ [VCAP_AF_VID_B_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 50,
+ .width = 12,
+ },
+ [VCAP_AF_PCP_B_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 62,
+ .width = 3,
+ },
+ [VCAP_AF_DEI_B_VAL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 65,
+ .width = 1,
+ },
+ [VCAP_AF_VID_C_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 66,
+ .width = 12,
+ },
+ [VCAP_AF_PCP_C_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 78,
+ .width = 3,
+ },
+ [VCAP_AF_DEI_C_VAL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 81,
+ .width = 1,
+ },
+ [VCAP_AF_POP_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 82,
+ .width = 2,
+ },
+ [VCAP_AF_UNTAG_VID_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 84,
+ .width = 1,
+ },
+ [VCAP_AF_PUSH_CUSTOMER_TAG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 85,
+ .width = 2,
+ },
+ [VCAP_AF_TAG_C_VID_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 87,
+ .width = 2,
+ },
+ [VCAP_AF_DSCP_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 127,
+ .width = 3,
+ },
+ [VCAP_AF_DSCP_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 130,
+ .width = 6,
+ },
+ [VCAP_AF_ESDX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 323,
+ .width = 13,
+ },
+ [VCAP_AF_FWD_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 459,
+ .width = 2,
+ },
+ [VCAP_AF_CPU_QU] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 461,
+ .width = 3,
+ },
+ [VCAP_AF_PIPELINE_PT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 464,
+ .width = 2,
+ },
+ [VCAP_AF_PIPELINE_ACT] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 466,
+ .width = 1,
+ },
+ [VCAP_AF_SWAP_MACS_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 475,
+ .width = 1,
+ },
+ [VCAP_AF_LOOP_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 476,
+ .width = 1,
+ },
+};
+
+static const struct vcap_field es2_base_type_actionfield[] = {
+ [VCAP_AF_HIT_ME_ONCE] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 0,
+ .width = 1,
+ },
+ [VCAP_AF_INTR_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 1,
+ .width = 1,
+ },
+ [VCAP_AF_FWD_MODE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 2,
+ .width = 2,
+ },
+ [VCAP_AF_COPY_QUEUE_NUM] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 4,
+ .width = 16,
+ },
+ [VCAP_AF_COPY_PORT_NUM] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 20,
+ .width = 7,
+ },
+ [VCAP_AF_MIRROR_PROBE_ID] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 27,
+ .width = 2,
+ },
+ [VCAP_AF_CPU_COPY_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 29,
+ .width = 1,
+ },
+ [VCAP_AF_CPU_QUEUE_NUM] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 30,
+ .width = 3,
+ },
+ [VCAP_AF_POLICE_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 33,
+ .width = 1,
+ },
+ [VCAP_AF_POLICE_REMARK] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 34,
+ .width = 1,
+ },
+ [VCAP_AF_POLICE_IDX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 35,
+ .width = 6,
+ },
+ [VCAP_AF_ES2_REW_CMD] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 41,
+ .width = 3,
+ },
+ [VCAP_AF_CNT_ID] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 44,
+ .width = 11,
+ },
+ [VCAP_AF_IGNORE_PIPELINE_CTRL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 55,
+ .width = 1,
+ },
+};
+
/* actionfield_set */
+static const struct vcap_set is0_actionfield_set[] = {
+ [VCAP_AFS_CLASSIFICATION] = {
+ .type_id = 1,
+ .sw_per_item = 2,
+ .sw_cnt = 6,
+ },
+ [VCAP_AFS_FULL] = {
+ .type_id = -1,
+ .sw_per_item = 3,
+ .sw_cnt = 4,
+ },
+ [VCAP_AFS_CLASS_REDUCED] = {
+ .type_id = 1,
+ .sw_per_item = 1,
+ .sw_cnt = 12,
+ },
+};
+
static const struct vcap_set is2_actionfield_set[] = {
[VCAP_AFS_BASE_TYPE] = {
.type_id = -1,
@@ -1119,17 +3094,196 @@ static const struct vcap_set is2_actionfield_set[] = {
},
};
+static const struct vcap_set es0_actionfield_set[] = {
+ [VCAP_AFS_ES0] = {
+ .type_id = -1,
+ .sw_per_item = 1,
+ .sw_cnt = 1,
+ },
+};
+
+static const struct vcap_set es2_actionfield_set[] = {
+ [VCAP_AFS_BASE_TYPE] = {
+ .type_id = -1,
+ .sw_per_item = 3,
+ .sw_cnt = 4,
+ },
+};
+
/* actionfield_set map */
+static const struct vcap_field *is0_actionfield_set_map[] = {
+ [VCAP_AFS_CLASSIFICATION] = is0_classification_actionfield,
+ [VCAP_AFS_FULL] = is0_full_actionfield,
+ [VCAP_AFS_CLASS_REDUCED] = is0_class_reduced_actionfield,
+};
+
static const struct vcap_field *is2_actionfield_set_map[] = {
[VCAP_AFS_BASE_TYPE] = is2_base_type_actionfield,
};
+static const struct vcap_field *es0_actionfield_set_map[] = {
+ [VCAP_AFS_ES0] = es0_es0_actionfield,
+};
+
+static const struct vcap_field *es2_actionfield_set_map[] = {
+ [VCAP_AFS_BASE_TYPE] = es2_base_type_actionfield,
+};
+
/* actionfield_set map size */
+static int is0_actionfield_set_map_size[] = {
+ [VCAP_AFS_CLASSIFICATION] = ARRAY_SIZE(is0_classification_actionfield),
+ [VCAP_AFS_FULL] = ARRAY_SIZE(is0_full_actionfield),
+ [VCAP_AFS_CLASS_REDUCED] = ARRAY_SIZE(is0_class_reduced_actionfield),
+};
+
static int is2_actionfield_set_map_size[] = {
[VCAP_AFS_BASE_TYPE] = ARRAY_SIZE(is2_base_type_actionfield),
};
+static int es0_actionfield_set_map_size[] = {
+ [VCAP_AFS_ES0] = ARRAY_SIZE(es0_es0_actionfield),
+};
+
+static int es2_actionfield_set_map_size[] = {
+ [VCAP_AFS_BASE_TYPE] = ARRAY_SIZE(es2_base_type_actionfield),
+};
+
/* Type Groups */
+static const struct vcap_typegroup is0_x12_keyfield_set_typegroups[] = {
+ {
+ .offset = 0,
+ .width = 5,
+ .value = 16,
+ },
+ {
+ .offset = 52,
+ .width = 1,
+ .value = 0,
+ },
+ {
+ .offset = 104,
+ .width = 2,
+ .value = 0,
+ },
+ {
+ .offset = 156,
+ .width = 3,
+ .value = 0,
+ },
+ {
+ .offset = 208,
+ .width = 2,
+ .value = 0,
+ },
+ {
+ .offset = 260,
+ .width = 1,
+ .value = 0,
+ },
+ {
+ .offset = 312,
+ .width = 4,
+ .value = 0,
+ },
+ {
+ .offset = 364,
+ .width = 1,
+ .value = 0,
+ },
+ {
+ .offset = 416,
+ .width = 2,
+ .value = 0,
+ },
+ {
+ .offset = 468,
+ .width = 3,
+ .value = 0,
+ },
+ {
+ .offset = 520,
+ .width = 2,
+ .value = 0,
+ },
+ {
+ .offset = 572,
+ .width = 1,
+ .value = 0,
+ },
+ {}
+};
+
+static const struct vcap_typegroup is0_x6_keyfield_set_typegroups[] = {
+ {
+ .offset = 0,
+ .width = 4,
+ .value = 8,
+ },
+ {
+ .offset = 52,
+ .width = 1,
+ .value = 0,
+ },
+ {
+ .offset = 104,
+ .width = 2,
+ .value = 0,
+ },
+ {
+ .offset = 156,
+ .width = 3,
+ .value = 0,
+ },
+ {
+ .offset = 208,
+ .width = 2,
+ .value = 0,
+ },
+ {
+ .offset = 260,
+ .width = 1,
+ .value = 0,
+ },
+ {}
+};
+
+static const struct vcap_typegroup is0_x3_keyfield_set_typegroups[] = {
+ {
+ .offset = 0,
+ .width = 3,
+ .value = 4,
+ },
+ {
+ .offset = 52,
+ .width = 2,
+ .value = 0,
+ },
+ {
+ .offset = 104,
+ .width = 2,
+ .value = 0,
+ },
+ {}
+};
+
+static const struct vcap_typegroup is0_x2_keyfield_set_typegroups[] = {
+ {
+ .offset = 0,
+ .width = 2,
+ .value = 2,
+ },
+ {
+ .offset = 52,
+ .width = 1,
+ .value = 0,
+ },
+ {}
+};
+
+static const struct vcap_typegroup is0_x1_keyfield_set_typegroups[] = {
+ {}
+};
+
static const struct vcap_typegroup is2_x12_keyfield_set_typegroups[] = {
{
.offset = 0,
@@ -1176,6 +3330,70 @@ static const struct vcap_typegroup is2_x1_keyfield_set_typegroups[] = {
{}
};
+static const struct vcap_typegroup es0_x1_keyfield_set_typegroups[] = {
+ {}
+};
+
+static const struct vcap_typegroup es2_x12_keyfield_set_typegroups[] = {
+ {
+ .offset = 0,
+ .width = 3,
+ .value = 4,
+ },
+ {
+ .offset = 156,
+ .width = 1,
+ .value = 0,
+ },
+ {
+ .offset = 312,
+ .width = 2,
+ .value = 0,
+ },
+ {
+ .offset = 468,
+ .width = 1,
+ .value = 0,
+ },
+ {}
+};
+
+static const struct vcap_typegroup es2_x6_keyfield_set_typegroups[] = {
+ {
+ .offset = 0,
+ .width = 2,
+ .value = 2,
+ },
+ {
+ .offset = 156,
+ .width = 1,
+ .value = 0,
+ },
+ {}
+};
+
+static const struct vcap_typegroup es2_x3_keyfield_set_typegroups[] = {
+ {
+ .offset = 0,
+ .width = 1,
+ .value = 1,
+ },
+ {}
+};
+
+static const struct vcap_typegroup es2_x1_keyfield_set_typegroups[] = {
+ {}
+};
+
+static const struct vcap_typegroup *is0_keyfield_set_typegroups[] = {
+ [12] = is0_x12_keyfield_set_typegroups,
+ [6] = is0_x6_keyfield_set_typegroups,
+ [3] = is0_x3_keyfield_set_typegroups,
+ [2] = is0_x2_keyfield_set_typegroups,
+ [1] = is0_x1_keyfield_set_typegroups,
+ [13] = NULL,
+};
+
static const struct vcap_typegroup *is2_keyfield_set_typegroups[] = {
[12] = is2_x12_keyfield_set_typegroups,
[6] = is2_x6_keyfield_set_typegroups,
@@ -1184,6 +3402,61 @@ static const struct vcap_typegroup *is2_keyfield_set_typegroups[] = {
[13] = NULL,
};
+static const struct vcap_typegroup *es0_keyfield_set_typegroups[] = {
+ [1] = es0_x1_keyfield_set_typegroups,
+ [2] = NULL,
+};
+
+static const struct vcap_typegroup *es2_keyfield_set_typegroups[] = {
+ [12] = es2_x12_keyfield_set_typegroups,
+ [6] = es2_x6_keyfield_set_typegroups,
+ [3] = es2_x3_keyfield_set_typegroups,
+ [1] = es2_x1_keyfield_set_typegroups,
+ [13] = NULL,
+};
+
+static const struct vcap_typegroup is0_x3_actionfield_set_typegroups[] = {
+ {
+ .offset = 0,
+ .width = 3,
+ .value = 4,
+ },
+ {
+ .offset = 110,
+ .width = 2,
+ .value = 0,
+ },
+ {
+ .offset = 220,
+ .width = 2,
+ .value = 0,
+ },
+ {}
+};
+
+static const struct vcap_typegroup is0_x2_actionfield_set_typegroups[] = {
+ {
+ .offset = 0,
+ .width = 2,
+ .value = 2,
+ },
+ {
+ .offset = 110,
+ .width = 1,
+ .value = 0,
+ },
+ {}
+};
+
+static const struct vcap_typegroup is0_x1_actionfield_set_typegroups[] = {
+ {
+ .offset = 0,
+ .width = 1,
+ .value = 1,
+ },
+ {}
+};
+
static const struct vcap_typegroup is2_x3_actionfield_set_typegroups[] = {
{
.offset = 0,
@@ -1207,36 +3480,122 @@ static const struct vcap_typegroup is2_x1_actionfield_set_typegroups[] = {
{}
};
+static const struct vcap_typegroup es0_x1_actionfield_set_typegroups[] = {
+ {}
+};
+
+static const struct vcap_typegroup es2_x3_actionfield_set_typegroups[] = {
+ {
+ .offset = 0,
+ .width = 2,
+ .value = 2,
+ },
+ {
+ .offset = 21,
+ .width = 1,
+ .value = 0,
+ },
+ {
+ .offset = 42,
+ .width = 1,
+ .value = 0,
+ },
+ {}
+};
+
+static const struct vcap_typegroup es2_x1_actionfield_set_typegroups[] = {
+ {}
+};
+
+static const struct vcap_typegroup *is0_actionfield_set_typegroups[] = {
+ [3] = is0_x3_actionfield_set_typegroups,
+ [2] = is0_x2_actionfield_set_typegroups,
+ [1] = is0_x1_actionfield_set_typegroups,
+ [13] = NULL,
+};
+
static const struct vcap_typegroup *is2_actionfield_set_typegroups[] = {
[3] = is2_x3_actionfield_set_typegroups,
[1] = is2_x1_actionfield_set_typegroups,
[13] = NULL,
};
+static const struct vcap_typegroup *es0_actionfield_set_typegroups[] = {
+ [1] = es0_x1_actionfield_set_typegroups,
+ [2] = NULL,
+};
+
+static const struct vcap_typegroup *es2_actionfield_set_typegroups[] = {
+ [3] = es2_x3_actionfield_set_typegroups,
+ [1] = es2_x1_actionfield_set_typegroups,
+ [13] = NULL,
+};
+
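/* Sketch only, not part of this patch: a typegroup table lists the marker
 * bits the encoder writes at subword boundaries (the offsets above are
 * all multiples of the 52-bit subword width) so the hardware can tell an
 * X12/X6/X3/X1 entry apart from smaller entries sharing the same row;
 * each entry writes 'value' into 'width' bits at 'offset', and the table
 * ends with an all-zero terminator. The markers reduce the net key width:
 */
static int example_typegroup_bits(const struct vcap_typegroup *tg)
{
	int bits = 0;

	for (; tg->width; tg++)
		bits += tg->width;
	return bits;
}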
/* Keyfieldset names */
static const char * const vcap_keyfield_set_names[] = {
[VCAP_KFS_NO_VALUE] = "(None)",
[VCAP_KFS_ARP] = "VCAP_KFS_ARP",
+ [VCAP_KFS_ETAG] = "VCAP_KFS_ETAG",
[VCAP_KFS_IP4_OTHER] = "VCAP_KFS_IP4_OTHER",
[VCAP_KFS_IP4_TCP_UDP] = "VCAP_KFS_IP4_TCP_UDP",
+ [VCAP_KFS_IP4_VID] = "VCAP_KFS_IP4_VID",
+ [VCAP_KFS_IP6_OTHER] = "VCAP_KFS_IP6_OTHER",
[VCAP_KFS_IP6_STD] = "VCAP_KFS_IP6_STD",
+ [VCAP_KFS_IP6_TCP_UDP] = "VCAP_KFS_IP6_TCP_UDP",
+ [VCAP_KFS_IP6_VID] = "VCAP_KFS_IP6_VID",
[VCAP_KFS_IP_7TUPLE] = "VCAP_KFS_IP_7TUPLE",
+ [VCAP_KFS_ISDX] = "VCAP_KFS_ISDX",
+ [VCAP_KFS_LL_FULL] = "VCAP_KFS_LL_FULL",
[VCAP_KFS_MAC_ETYPE] = "VCAP_KFS_MAC_ETYPE",
+ [VCAP_KFS_MAC_LLC] = "VCAP_KFS_MAC_LLC",
+ [VCAP_KFS_MAC_SNAP] = "VCAP_KFS_MAC_SNAP",
+ [VCAP_KFS_NORMAL_5TUPLE_IP4] = "VCAP_KFS_NORMAL_5TUPLE_IP4",
+ [VCAP_KFS_NORMAL_7TUPLE] = "VCAP_KFS_NORMAL_7TUPLE",
+ [VCAP_KFS_OAM] = "VCAP_KFS_OAM",
+ [VCAP_KFS_PURE_5TUPLE_IP4] = "VCAP_KFS_PURE_5TUPLE_IP4",
+ [VCAP_KFS_SMAC_SIP4] = "VCAP_KFS_SMAC_SIP4",
+ [VCAP_KFS_SMAC_SIP6] = "VCAP_KFS_SMAC_SIP6",
};

/* Actionfieldset names */
static const char * const vcap_actionfield_set_names[] = {
[VCAP_AFS_NO_VALUE] = "(None)",
[VCAP_AFS_BASE_TYPE] = "VCAP_AFS_BASE_TYPE",
+ [VCAP_AFS_CLASSIFICATION] = "VCAP_AFS_CLASSIFICATION",
+ [VCAP_AFS_CLASS_REDUCED] = "VCAP_AFS_CLASS_REDUCED",
+ [VCAP_AFS_ES0] = "VCAP_AFS_ES0",
+ [VCAP_AFS_FULL] = "VCAP_AFS_FULL",
+ [VCAP_AFS_SMAC_SIP] = "VCAP_AFS_SMAC_SIP",
};

/* Keyfield names */
static const char * const vcap_keyfield_names[] = {
[VCAP_KF_NO_VALUE] = "(None)",
+ [VCAP_KF_8021BR_ECID_BASE] = "8021BR_ECID_BASE",
+ [VCAP_KF_8021BR_ECID_EXT] = "8021BR_ECID_EXT",
+ [VCAP_KF_8021BR_E_TAGGED] = "8021BR_E_TAGGED",
+ [VCAP_KF_8021BR_GRP] = "8021BR_GRP",
+ [VCAP_KF_8021BR_IGR_ECID_BASE] = "8021BR_IGR_ECID_BASE",
+ [VCAP_KF_8021BR_IGR_ECID_EXT] = "8021BR_IGR_ECID_EXT",
+ [VCAP_KF_8021Q_DEI0] = "8021Q_DEI0",
+ [VCAP_KF_8021Q_DEI1] = "8021Q_DEI1",
+ [VCAP_KF_8021Q_DEI2] = "8021Q_DEI2",
[VCAP_KF_8021Q_DEI_CLS] = "8021Q_DEI_CLS",
+ [VCAP_KF_8021Q_PCP0] = "8021Q_PCP0",
+ [VCAP_KF_8021Q_PCP1] = "8021Q_PCP1",
+ [VCAP_KF_8021Q_PCP2] = "8021Q_PCP2",
[VCAP_KF_8021Q_PCP_CLS] = "8021Q_PCP_CLS",
+ [VCAP_KF_8021Q_TPID] = "8021Q_TPID",
+ [VCAP_KF_8021Q_TPID0] = "8021Q_TPID0",
+ [VCAP_KF_8021Q_TPID1] = "8021Q_TPID1",
+ [VCAP_KF_8021Q_TPID2] = "8021Q_TPID2",
+ [VCAP_KF_8021Q_VID0] = "8021Q_VID0",
+ [VCAP_KF_8021Q_VID1] = "8021Q_VID1",
+ [VCAP_KF_8021Q_VID2] = "8021Q_VID2",
[VCAP_KF_8021Q_VID_CLS] = "8021Q_VID_CLS",
[VCAP_KF_8021Q_VLAN_TAGGED_IS] = "8021Q_VLAN_TAGGED_IS",
+ [VCAP_KF_8021Q_VLAN_TAGS] = "8021Q_VLAN_TAGS",
+ [VCAP_KF_ACL_GRP_ID] = "ACL_GRP_ID",
[VCAP_KF_ARP_ADDR_SPACE_OK_IS] = "ARP_ADDR_SPACE_OK_IS",
[VCAP_KF_ARP_LEN_OK_IS] = "ARP_LEN_OK_IS",
[VCAP_KF_ARP_OPCODE] = "ARP_OPCODE",
@@ -1244,25 +3603,46 @@ static const char * const vcap_keyfield_names[] = {
[VCAP_KF_ARP_PROTO_SPACE_OK_IS] = "ARP_PROTO_SPACE_OK_IS",
[VCAP_KF_ARP_SENDER_MATCH_IS] = "ARP_SENDER_MATCH_IS",
[VCAP_KF_ARP_TGT_MATCH_IS] = "ARP_TGT_MATCH_IS",
+ [VCAP_KF_COSID_CLS] = "COSID_CLS",
+ [VCAP_KF_ES0_ISDX_KEY_ENA] = "ES0_ISDX_KEY_ENA",
[VCAP_KF_ETYPE] = "ETYPE",
[VCAP_KF_ETYPE_LEN_IS] = "ETYPE_LEN_IS",
+ [VCAP_KF_HOST_MATCH] = "HOST_MATCH",
+ [VCAP_KF_IF_EGR_PORT_MASK] = "IF_EGR_PORT_MASK",
+ [VCAP_KF_IF_EGR_PORT_MASK_RNG] = "IF_EGR_PORT_MASK_RNG",
+ [VCAP_KF_IF_EGR_PORT_NO] = "IF_EGR_PORT_NO",
+ [VCAP_KF_IF_IGR_PORT] = "IF_IGR_PORT",
[VCAP_KF_IF_IGR_PORT_MASK] = "IF_IGR_PORT_MASK",
[VCAP_KF_IF_IGR_PORT_MASK_L3] = "IF_IGR_PORT_MASK_L3",
[VCAP_KF_IF_IGR_PORT_MASK_RNG] = "IF_IGR_PORT_MASK_RNG",
[VCAP_KF_IF_IGR_PORT_MASK_SEL] = "IF_IGR_PORT_MASK_SEL",
+ [VCAP_KF_IF_IGR_PORT_SEL] = "IF_IGR_PORT_SEL",
[VCAP_KF_IP4_IS] = "IP4_IS",
+ [VCAP_KF_IP_MC_IS] = "IP_MC_IS",
+ [VCAP_KF_IP_PAYLOAD_5TUPLE] = "IP_PAYLOAD_5TUPLE",
+ [VCAP_KF_IP_SNAP_IS] = "IP_SNAP_IS",
[VCAP_KF_ISDX_CLS] = "ISDX_CLS",
[VCAP_KF_ISDX_GT0_IS] = "ISDX_GT0_IS",
[VCAP_KF_L2_BC_IS] = "L2_BC_IS",
[VCAP_KF_L2_DMAC] = "L2_DMAC",
+ [VCAP_KF_L2_FRM_TYPE] = "L2_FRM_TYPE",
[VCAP_KF_L2_FWD_IS] = "L2_FWD_IS",
+ [VCAP_KF_L2_LLC] = "L2_LLC",
[VCAP_KF_L2_MC_IS] = "L2_MC_IS",
+ [VCAP_KF_L2_PAYLOAD0] = "L2_PAYLOAD0",
+ [VCAP_KF_L2_PAYLOAD1] = "L2_PAYLOAD1",
+ [VCAP_KF_L2_PAYLOAD2] = "L2_PAYLOAD2",
[VCAP_KF_L2_PAYLOAD_ETYPE] = "L2_PAYLOAD_ETYPE",
[VCAP_KF_L2_SMAC] = "L2_SMAC",
+ [VCAP_KF_L2_SNAP] = "L2_SNAP",
[VCAP_KF_L3_DIP_EQ_SIP_IS] = "L3_DIP_EQ_SIP_IS",
+ [VCAP_KF_L3_DPL_CLS] = "L3_DPL_CLS",
+ [VCAP_KF_L3_DSCP] = "L3_DSCP",
[VCAP_KF_L3_DST_IS] = "L3_DST_IS",
+ [VCAP_KF_L3_FRAGMENT] = "L3_FRAGMENT",
[VCAP_KF_L3_FRAGMENT_TYPE] = "L3_FRAGMENT_TYPE",
[VCAP_KF_L3_FRAG_INVLD_L4_LEN] = "L3_FRAG_INVLD_L4_LEN",
+ [VCAP_KF_L3_FRAG_OFS_GT0] = "L3_FRAG_OFS_GT0",
[VCAP_KF_L3_IP4_DIP] = "L3_IP4_DIP",
[VCAP_KF_L3_IP4_SIP] = "L3_IP4_SIP",
[VCAP_KF_L3_IP6_DIP] = "L3_IP6_DIP",
@@ -1273,6 +3653,8 @@ static const char * const vcap_keyfield_names[] = {
[VCAP_KF_L3_RT_IS] = "L3_RT_IS",
[VCAP_KF_L3_TOS] = "L3_TOS",
[VCAP_KF_L3_TTL_GT0] = "L3_TTL_GT0",
+ [VCAP_KF_L4_1588_DOM] = "L4_1588_DOM",
+ [VCAP_KF_L4_1588_VER] = "L4_1588_VER",
[VCAP_KF_L4_ACK] = "L4_ACK",
[VCAP_KF_L4_DPORT] = "L4_DPORT",
[VCAP_KF_L4_FIN] = "L4_FIN",
@@ -1286,9 +3668,19 @@ static const char * const vcap_keyfield_names[] = {
[VCAP_KF_L4_SYN] = "L4_SYN",
[VCAP_KF_L4_URG] = "L4_URG",
[VCAP_KF_LOOKUP_FIRST_IS] = "LOOKUP_FIRST_IS",
+ [VCAP_KF_LOOKUP_GEN_IDX] = "LOOKUP_GEN_IDX",
+ [VCAP_KF_LOOKUP_GEN_IDX_SEL] = "LOOKUP_GEN_IDX_SEL",
[VCAP_KF_LOOKUP_PAG] = "LOOKUP_PAG",
+ [VCAP_KF_MIRROR_PROBE] = "MIRROR_PROBE",
[VCAP_KF_OAM_CCM_CNTS_EQ0] = "OAM_CCM_CNTS_EQ0",
+ [VCAP_KF_OAM_DETECTED] = "OAM_DETECTED",
+ [VCAP_KF_OAM_FLAGS] = "OAM_FLAGS",
+ [VCAP_KF_OAM_MEL_FLAGS] = "OAM_MEL_FLAGS",
+ [VCAP_KF_OAM_MEPID] = "OAM_MEPID",
+ [VCAP_KF_OAM_OPCODE] = "OAM_OPCODE",
+ [VCAP_KF_OAM_VER] = "OAM_VER",
[VCAP_KF_OAM_Y1731_IS] = "OAM_Y1731_IS",
+ [VCAP_KF_PROT_ACTIVE] = "PROT_ACTIVE",
[VCAP_KF_TCP_IS] = "TCP_IS",
[VCAP_KF_TCP_UDP_IS] = "TCP_UDP_IS",
[VCAP_KF_TYPE] = "TYPE",
@@ -1297,27 +3689,116 @@ static const char * const vcap_keyfield_names[] = {
/* Actionfield names */
static const char * const vcap_actionfield_names[] = {
[VCAP_AF_NO_VALUE] = "(None)",
+ [VCAP_AF_ACL_ID] = "ACL_ID",
+ [VCAP_AF_CLS_VID_SEL] = "CLS_VID_SEL",
[VCAP_AF_CNT_ID] = "CNT_ID",
+ [VCAP_AF_COPY_PORT_NUM] = "COPY_PORT_NUM",
+ [VCAP_AF_COPY_QUEUE_NUM] = "COPY_QUEUE_NUM",
[VCAP_AF_CPU_COPY_ENA] = "CPU_COPY_ENA",
+ [VCAP_AF_CPU_QU] = "CPU_QU",
[VCAP_AF_CPU_QUEUE_NUM] = "CPU_QUEUE_NUM",
+ [VCAP_AF_DEI_A_VAL] = "DEI_A_VAL",
+ [VCAP_AF_DEI_B_VAL] = "DEI_B_VAL",
+ [VCAP_AF_DEI_C_VAL] = "DEI_C_VAL",
+ [VCAP_AF_DEI_ENA] = "DEI_ENA",
+ [VCAP_AF_DEI_VAL] = "DEI_VAL",
+ [VCAP_AF_DP_ENA] = "DP_ENA",
+ [VCAP_AF_DP_VAL] = "DP_VAL",
+ [VCAP_AF_DSCP_ENA] = "DSCP_ENA",
+ [VCAP_AF_DSCP_SEL] = "DSCP_SEL",
+ [VCAP_AF_DSCP_VAL] = "DSCP_VAL",
+ [VCAP_AF_ES2_REW_CMD] = "ES2_REW_CMD",
+ [VCAP_AF_ESDX] = "ESDX",
+ [VCAP_AF_FWD_KILL_ENA] = "FWD_KILL_ENA",
+ [VCAP_AF_FWD_MODE] = "FWD_MODE",
+ [VCAP_AF_FWD_SEL] = "FWD_SEL",
[VCAP_AF_HIT_ME_ONCE] = "HIT_ME_ONCE",
+ [VCAP_AF_HOST_MATCH] = "HOST_MATCH",
[VCAP_AF_IGNORE_PIPELINE_CTRL] = "IGNORE_PIPELINE_CTRL",
[VCAP_AF_INTR_ENA] = "INTR_ENA",
+ [VCAP_AF_ISDX_ADD_REPLACE_SEL] = "ISDX_ADD_REPLACE_SEL",
+ [VCAP_AF_ISDX_ENA] = "ISDX_ENA",
+ [VCAP_AF_ISDX_VAL] = "ISDX_VAL",
+ [VCAP_AF_LOOP_ENA] = "LOOP_ENA",
[VCAP_AF_LRN_DIS] = "LRN_DIS",
+ [VCAP_AF_MAP_IDX] = "MAP_IDX",
+ [VCAP_AF_MAP_KEY] = "MAP_KEY",
+ [VCAP_AF_MAP_LOOKUP_SEL] = "MAP_LOOKUP_SEL",
[VCAP_AF_MASK_MODE] = "MASK_MODE",
[VCAP_AF_MATCH_ID] = "MATCH_ID",
[VCAP_AF_MATCH_ID_MASK] = "MATCH_ID_MASK",
+ [VCAP_AF_MIRROR_ENA] = "MIRROR_ENA",
[VCAP_AF_MIRROR_PROBE] = "MIRROR_PROBE",
+ [VCAP_AF_MIRROR_PROBE_ID] = "MIRROR_PROBE_ID",
+ [VCAP_AF_NXT_IDX] = "NXT_IDX",
+ [VCAP_AF_NXT_IDX_CTRL] = "NXT_IDX_CTRL",
+ [VCAP_AF_PAG_OVERRIDE_MASK] = "PAG_OVERRIDE_MASK",
+ [VCAP_AF_PAG_VAL] = "PAG_VAL",
+ [VCAP_AF_PCP_A_VAL] = "PCP_A_VAL",
+ [VCAP_AF_PCP_B_VAL] = "PCP_B_VAL",
+ [VCAP_AF_PCP_C_VAL] = "PCP_C_VAL",
+ [VCAP_AF_PCP_ENA] = "PCP_ENA",
+ [VCAP_AF_PCP_VAL] = "PCP_VAL",
+ [VCAP_AF_PIPELINE_ACT] = "PIPELINE_ACT",
[VCAP_AF_PIPELINE_FORCE_ENA] = "PIPELINE_FORCE_ENA",
[VCAP_AF_PIPELINE_PT] = "PIPELINE_PT",
[VCAP_AF_POLICE_ENA] = "POLICE_ENA",
[VCAP_AF_POLICE_IDX] = "POLICE_IDX",
+ [VCAP_AF_POLICE_REMARK] = "POLICE_REMARK",
+ [VCAP_AF_POLICE_VCAP_ONLY] = "POLICE_VCAP_ONLY",
+ [VCAP_AF_POP_VAL] = "POP_VAL",
[VCAP_AF_PORT_MASK] = "PORT_MASK",
+ [VCAP_AF_PUSH_CUSTOMER_TAG] = "PUSH_CUSTOMER_TAG",
+ [VCAP_AF_PUSH_INNER_TAG] = "PUSH_INNER_TAG",
+ [VCAP_AF_PUSH_OUTER_TAG] = "PUSH_OUTER_TAG",
+ [VCAP_AF_QOS_ENA] = "QOS_ENA",
+ [VCAP_AF_QOS_VAL] = "QOS_VAL",
+ [VCAP_AF_REW_OP] = "REW_OP",
[VCAP_AF_RT_DIS] = "RT_DIS",
+ [VCAP_AF_SWAP_MACS_ENA] = "SWAP_MACS_ENA",
+ [VCAP_AF_TAG_A_DEI_SEL] = "TAG_A_DEI_SEL",
+ [VCAP_AF_TAG_A_PCP_SEL] = "TAG_A_PCP_SEL",
+ [VCAP_AF_TAG_A_TPID_SEL] = "TAG_A_TPID_SEL",
+ [VCAP_AF_TAG_A_VID_SEL] = "TAG_A_VID_SEL",
+ [VCAP_AF_TAG_B_DEI_SEL] = "TAG_B_DEI_SEL",
+ [VCAP_AF_TAG_B_PCP_SEL] = "TAG_B_PCP_SEL",
+ [VCAP_AF_TAG_B_TPID_SEL] = "TAG_B_TPID_SEL",
+ [VCAP_AF_TAG_B_VID_SEL] = "TAG_B_VID_SEL",
+ [VCAP_AF_TAG_C_DEI_SEL] = "TAG_C_DEI_SEL",
+ [VCAP_AF_TAG_C_PCP_SEL] = "TAG_C_PCP_SEL",
+ [VCAP_AF_TAG_C_TPID_SEL] = "TAG_C_TPID_SEL",
+ [VCAP_AF_TAG_C_VID_SEL] = "TAG_C_VID_SEL",
+ [VCAP_AF_TYPE] = "TYPE",
+ [VCAP_AF_UNTAG_VID_ENA] = "UNTAG_VID_ENA",
+ [VCAP_AF_VID_A_VAL] = "VID_A_VAL",
+ [VCAP_AF_VID_B_VAL] = "VID_B_VAL",
+ [VCAP_AF_VID_C_VAL] = "VID_C_VAL",
+ [VCAP_AF_VID_VAL] = "VID_VAL",
};

/* VCAPs */
const struct vcap_info sparx5_vcaps[] = {
+ [VCAP_TYPE_IS0] = {
+ .name = "is0",
+ .rows = 1024,
+ .sw_count = 12,
+ .sw_width = 52,
+ .sticky_width = 1,
+ .act_width = 110,
+ .default_cnt = 140,
+ .require_cnt_dis = 0,
+ .version = 1,
+ .keyfield_set = is0_keyfield_set,
+ .keyfield_set_size = ARRAY_SIZE(is0_keyfield_set),
+ .actionfield_set = is0_actionfield_set,
+ .actionfield_set_size = ARRAY_SIZE(is0_actionfield_set),
+ .keyfield_set_map = is0_keyfield_set_map,
+ .keyfield_set_map_size = is0_keyfield_set_map_size,
+ .actionfield_set_map = is0_actionfield_set_map,
+ .actionfield_set_map_size = is0_actionfield_set_map_size,
+ .keyfield_set_typegroups = is0_keyfield_set_typegroups,
+ .actionfield_set_typegroups = is0_actionfield_set_typegroups,
+ },
[VCAP_TYPE_IS2] = {
.name = "is2",
.rows = 256,
@@ -1339,11 +3820,53 @@ const struct vcap_info sparx5_vcaps[] = {
.keyfield_set_typegroups = is2_keyfield_set_typegroups,
.actionfield_set_typegroups = is2_actionfield_set_typegroups,
},
+ [VCAP_TYPE_ES0] = {
+ .name = "es0",
+ .rows = 4096,
+ .sw_count = 1,
+ .sw_width = 52,
+ .sticky_width = 1,
+ .act_width = 489,
+ .default_cnt = 70,
+ .require_cnt_dis = 0,
+ .version = 1,
+ .keyfield_set = es0_keyfield_set,
+ .keyfield_set_size = ARRAY_SIZE(es0_keyfield_set),
+ .actionfield_set = es0_actionfield_set,
+ .actionfield_set_size = ARRAY_SIZE(es0_actionfield_set),
+ .keyfield_set_map = es0_keyfield_set_map,
+ .keyfield_set_map_size = es0_keyfield_set_map_size,
+ .actionfield_set_map = es0_actionfield_set_map,
+ .actionfield_set_map_size = es0_actionfield_set_map_size,
+ .keyfield_set_typegroups = es0_keyfield_set_typegroups,
+ .actionfield_set_typegroups = es0_actionfield_set_typegroups,
+ },
+ [VCAP_TYPE_ES2] = {
+ .name = "es2",
+ .rows = 1024,
+ .sw_count = 12,
+ .sw_width = 52,
+ .sticky_width = 1,
+ .act_width = 21,
+ .default_cnt = 74,
+ .require_cnt_dis = 0,
+ .version = 1,
+ .keyfield_set = es2_keyfield_set,
+ .keyfield_set_size = ARRAY_SIZE(es2_keyfield_set),
+ .actionfield_set = es2_actionfield_set,
+ .actionfield_set_size = ARRAY_SIZE(es2_actionfield_set),
+ .keyfield_set_map = es2_keyfield_set_map,
+ .keyfield_set_map_size = es2_keyfield_set_map_size,
+ .actionfield_set_map = es2_actionfield_set_map,
+ .actionfield_set_map_size = es2_actionfield_set_map_size,
+ .keyfield_set_typegroups = es2_keyfield_set_typegroups,
+ .actionfield_set_typegroups = es2_actionfield_set_typegroups,
+ },
};

const struct vcap_statistics sparx5_vcap_stats = {
.name = "sparx5",
- .count = 1,
+ .count = 4,
.keyfield_set_names = vcap_keyfield_set_names,
.actionfield_set_names = vcap_actionfield_set_names,
.keyfield_names = vcap_keyfield_names,
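/* Sketch only, not part of this diff: with the IS0, ES0 and ES2 instances
 * registered alongside IS2, the gross key width of a keyset follows from
 * the instance geometry, e.g. an X12 ES2 entry spans 12 subwords of 52
 * bits (624 bits), part of which the typegroup marker bits consume:
 */
static int example_keyset_bits(const struct vcap_info *vcap,
			       const struct vcap_set *set)
{
	return set->sw_per_item * vcap->sw_width;
}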
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_debugfs.c b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_debugfs.c
index b91e05ffe2f4..07b472c84a47 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_debugfs.c
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_debugfs.c
@@ -13,10 +13,113 @@
#include "sparx5_vcap_impl.h"
#include "sparx5_vcap_ag_api.h"
-static void sparx5_vcap_port_keys(struct sparx5 *sparx5,
- struct vcap_admin *admin,
- struct sparx5_port *port,
- struct vcap_output_print *out)
+static const char *sparx5_vcap_is0_etype_str(u32 value)
+{
+ switch (value) {
+ case VCAP_IS0_PS_ETYPE_DEFAULT:
+ return "default";
+ case VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE:
+ return "normal_7tuple";
+ case VCAP_IS0_PS_ETYPE_NORMAL_5TUPLE_IP4:
+ return "normal_5tuple_ip4";
+ case VCAP_IS0_PS_ETYPE_MLL:
+ return "mll";
+ case VCAP_IS0_PS_ETYPE_LL_FULL:
+ return "ll_full";
+ case VCAP_IS0_PS_ETYPE_PURE_5TUPLE_IP4:
+ return "pure_5tuple_ip4";
+ case VCAP_IS0_PS_ETYPE_ETAG:
+ return "etag";
+ case VCAP_IS0_PS_ETYPE_NO_LOOKUP:
+ return "no lookup";
+ default:
+ return "unknown";
+ }
+}
+
+static const char *sparx5_vcap_is0_mpls_str(u32 value)
+{
+ switch (value) {
+ case VCAP_IS0_PS_MPLS_FOLLOW_ETYPE:
+ return "follow_etype";
+ case VCAP_IS0_PS_MPLS_NORMAL_7TUPLE:
+ return "normal_7tuple";
+ case VCAP_IS0_PS_MPLS_NORMAL_5TUPLE_IP4:
+ return "normal_5tuple_ip4";
+ case VCAP_IS0_PS_MPLS_MLL:
+ return "mll";
+ case VCAP_IS0_PS_MPLS_LL_FULL:
+ return "ll_full";
+ case VCAP_IS0_PS_MPLS_PURE_5TUPLE_IP4:
+ return "pure_5tuple_ip4";
+ case VCAP_IS0_PS_MPLS_ETAG:
+ return "etag";
+ case VCAP_IS0_PS_MPLS_NO_LOOKUP:
+ return "no lookup";
+ default:
+ return "unknown";
+ }
+}
+
+static const char *sparx5_vcap_is0_mlbs_str(u32 value)
+{
+ switch (value) {
+ case VCAP_IS0_PS_MLBS_FOLLOW_ETYPE:
+ return "follow_etype";
+ case VCAP_IS0_PS_MLBS_NO_LOOKUP:
+ return "no lookup";
+ default:
+ return "unknown";
+ }
+}
+
+static void sparx5_vcap_is0_port_keys(struct sparx5 *sparx5,
+ struct vcap_admin *admin,
+ struct sparx5_port *port,
+ struct vcap_output_print *out)
+{
+ int lookup;
+ u32 value, val;
+
+ out->prf(out->dst, " port[%02d] (%s): ", port->portno,
+ netdev_name(port->ndev));
+ for (lookup = 0; lookup < admin->lookups; ++lookup) {
+ out->prf(out->dst, "\n Lookup %d: ", lookup);
+
+ /* Get lookup state */
+ value = spx5_rd(sparx5,
+ ANA_CL_ADV_CL_CFG(port->portno, lookup));
+ out->prf(out->dst, "\n state: ");
+ if (ANA_CL_ADV_CL_CFG_LOOKUP_ENA_GET(value))
+ out->prf(out->dst, "on");
+ else
+ out->prf(out->dst, "off");
+ val = ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL_GET(value);
+ out->prf(out->dst, "\n etype: %s",
+ sparx5_vcap_is0_etype_str(val));
+ val = ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL_GET(value);
+ out->prf(out->dst, "\n ipv4: %s",
+ sparx5_vcap_is0_etype_str(val));
+ val = ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL_GET(value);
+ out->prf(out->dst, "\n ipv6: %s",
+ sparx5_vcap_is0_etype_str(val));
+ val = ANA_CL_ADV_CL_CFG_MPLS_UC_CLM_KEY_SEL_GET(value);
+ out->prf(out->dst, "\n mpls_uc: %s",
+ sparx5_vcap_is0_mpls_str(val));
+ val = ANA_CL_ADV_CL_CFG_MPLS_MC_CLM_KEY_SEL_GET(value);
+ out->prf(out->dst, "\n mpls_mc: %s",
+ sparx5_vcap_is0_mpls_str(val));
+ val = ANA_CL_ADV_CL_CFG_MLBS_CLM_KEY_SEL_GET(value);
+ out->prf(out->dst, "\n mlbs: %s",
+ sparx5_vcap_is0_mlbs_str(val));
+ }
+ out->prf(out->dst, "\n");
+}
+
+static void sparx5_vcap_is2_port_keys(struct sparx5 *sparx5,
+ struct vcap_admin *admin,
+ struct sparx5_port *port,
+ struct vcap_output_print *out)
{
int lookup;
u32 value;
@@ -29,7 +132,7 @@ static void sparx5_vcap_port_keys(struct sparx5 *sparx5,
/* Get lookup state */
value = spx5_rd(sparx5, ANA_ACL_VCAP_S2_CFG(port->portno));
out->prf(out->dst, "\n state: ");
- if (ANA_ACL_VCAP_S2_CFG_SEC_ENA_GET(value))
+ if (ANA_ACL_VCAP_S2_CFG_SEC_ENA_GET(value) & BIT(lookup))
out->prf(out->dst, "on");
else
out->prf(out->dst, "off");
@@ -126,9 +229,9 @@ static void sparx5_vcap_port_keys(struct sparx5 *sparx5,
out->prf(out->dst, "\n");
}
-static void sparx5_vcap_port_stickies(struct sparx5 *sparx5,
- struct vcap_admin *admin,
- struct vcap_output_print *out)
+static void sparx5_vcap_is2_port_stickies(struct sparx5 *sparx5,
+ struct vcap_admin *admin,
+ struct vcap_output_print *out)
{
int lookup;
u32 value;
@@ -181,6 +284,157 @@ static void sparx5_vcap_port_stickies(struct sparx5 *sparx5,
out->prf(out->dst, "\n");
}
+static void sparx5_vcap_es0_port_keys(struct sparx5 *sparx5,
+ struct vcap_admin *admin,
+ struct sparx5_port *port,
+ struct vcap_output_print *out)
+{
+ u32 value;
+
+ out->prf(out->dst, " port[%02d] (%s): ", port->portno,
+ netdev_name(port->ndev));
+ out->prf(out->dst, "\n Lookup 0: ");
+
+ /* Get lookup state */
+ value = spx5_rd(sparx5, REW_ES0_CTRL);
+ out->prf(out->dst, "\n state: ");
+ if (REW_ES0_CTRL_ES0_LU_ENA_GET(value))
+ out->prf(out->dst, "on");
+ else
+ out->prf(out->dst, "off");
+
+ out->prf(out->dst, "\n keyset: ");
+ value = spx5_rd(sparx5, REW_RTAG_ETAG_CTRL(port->portno));
+ switch (REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA_GET(value)) {
+ case VCAP_ES0_PS_NORMAL_SELECTION:
+ out->prf(out->dst, "normal");
+ break;
+ case VCAP_ES0_PS_FORCE_ISDX_LOOKUPS:
+ out->prf(out->dst, "isdx");
+ break;
+ case VCAP_ES0_PS_FORCE_VID_LOOKUPS:
+ out->prf(out->dst, "vid");
+ break;
+ case VCAP_ES0_PS_RESERVED:
+ out->prf(out->dst, "reserved");
+ break;
+ }
+ out->prf(out->dst, "\n");
+}
+
+static void sparx5_vcap_es2_port_keys(struct sparx5 *sparx5,
+ struct vcap_admin *admin,
+ struct sparx5_port *port,
+ struct vcap_output_print *out)
+{
+ int lookup;
+ u32 value;
+
+ out->prf(out->dst, " port[%02d] (%s): ", port->portno,
+ netdev_name(port->ndev));
+ for (lookup = 0; lookup < admin->lookups; ++lookup) {
+ out->prf(out->dst, "\n Lookup %d: ", lookup);
+
+ /* Get lookup state */
+ value = spx5_rd(sparx5, EACL_VCAP_ES2_KEY_SEL(port->portno,
+ lookup));
+ out->prf(out->dst, "\n state: ");
+ if (EACL_VCAP_ES2_KEY_SEL_KEY_ENA_GET(value))
+ out->prf(out->dst, "on");
+ else
+ out->prf(out->dst, "off");
+
+ out->prf(out->dst, "\n arp: ");
+ switch (EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL_GET(value)) {
+ case VCAP_ES2_PS_ARP_MAC_ETYPE:
+ out->prf(out->dst, "mac_etype");
+ break;
+ case VCAP_ES2_PS_ARP_ARP:
+ out->prf(out->dst, "arp");
+ break;
+ }
+ out->prf(out->dst, "\n ipv4: ");
+ switch (EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL_GET(value)) {
+ case VCAP_ES2_PS_IPV4_MAC_ETYPE:
+ out->prf(out->dst, "mac_etype");
+ break;
+ case VCAP_ES2_PS_IPV4_IP_7TUPLE:
+ out->prf(out->dst, "ip_7tuple");
+ break;
+ case VCAP_ES2_PS_IPV4_IP4_TCP_UDP_VID:
+ out->prf(out->dst, "ip4_tcp_udp ip4_vid");
+ break;
+ case VCAP_ES2_PS_IPV4_IP4_TCP_UDP_OTHER:
+ out->prf(out->dst, "ip4_tcp_udp ip4_other");
+ break;
+ case VCAP_ES2_PS_IPV4_IP4_VID:
+ out->prf(out->dst, "ip4_vid");
+ break;
+ case VCAP_ES2_PS_IPV4_IP4_OTHER:
+ out->prf(out->dst, "ip4_other");
+ break;
+ }
+ out->prf(out->dst, "\n ipv6: ");
+ switch (EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL_GET(value)) {
+ case VCAP_ES2_PS_IPV6_MAC_ETYPE:
+ out->prf(out->dst, "mac_etype");
+ break;
+ case VCAP_ES2_PS_IPV6_IP_7TUPLE:
+ out->prf(out->dst, "ip_7tuple");
+ break;
+ case VCAP_ES2_PS_IPV6_IP_7TUPLE_VID:
+ out->prf(out->dst, "ip_7tuple ip6_vid");
+ break;
+ case VCAP_ES2_PS_IPV6_IP_7TUPLE_STD:
+ out->prf(out->dst, "ip_7tuple ip6_std");
+ break;
+ case VCAP_ES2_PS_IPV6_IP6_VID:
+ out->prf(out->dst, "ip6_vid");
+ break;
+ case VCAP_ES2_PS_IPV6_IP6_STD:
+ out->prf(out->dst, "ip6_std");
+ break;
+ case VCAP_ES2_PS_IPV6_IP4_DOWNGRADE:
+ out->prf(out->dst, "ip4_downgrade");
+ break;
+ }
+ }
+ out->prf(out->dst, "\n");
+}
+
+static void sparx5_vcap_es2_port_stickies(struct sparx5 *sparx5,
+ struct vcap_admin *admin,
+ struct vcap_output_print *out)
+{
+ int lookup;
+ u32 value;
+
+ out->prf(out->dst, " Sticky bits: ");
+ for (lookup = 0; lookup < admin->lookups; ++lookup) {
+ value = spx5_rd(sparx5, EACL_SEC_LOOKUP_STICKY(lookup));
+ out->prf(out->dst, "\n Lookup %d: ", lookup);
+ if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY_GET(value))
+ out->prf(out->dst, " ip_7tuple");
+ if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY_GET(value))
+ out->prf(out->dst, " ip6_vid");
+ if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY_GET(value))
+ out->prf(out->dst, " ip6_std");
+ if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY_GET(value))
+ out->prf(out->dst, " ip4_tcp_udp");
+ if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY_GET(value))
+ out->prf(out->dst, " ip4_vid");
+ if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY_GET(value))
+ out->prf(out->dst, " ip4_other");
+ if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY_GET(value))
+ out->prf(out->dst, " arp");
+ if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY_GET(value))
+ out->prf(out->dst, " mac_etype");
+ /* Clear stickies */
+ spx5_wr(value, sparx5, EACL_SEC_LOOKUP_STICKY(lookup));
+ }
+ out->prf(out->dst, "\n");
+}
+
/* Provide port information via a callback interface */
int sparx5_port_info(struct net_device *ndev,
struct vcap_admin *admin,
@@ -194,7 +448,24 @@ int sparx5_port_info(struct net_device *ndev,
vctrl = sparx5->vcap_ctrl;
vcap = &vctrl->vcaps[admin->vtype];
out->prf(out->dst, "%s:\n", vcap->name);
- sparx5_vcap_port_keys(sparx5, admin, port, out);
- sparx5_vcap_port_stickies(sparx5, admin, out);
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ sparx5_vcap_is0_port_keys(sparx5, admin, port, out);
+ break;
+ case VCAP_TYPE_IS2:
+ sparx5_vcap_is2_port_keys(sparx5, admin, port, out);
+ sparx5_vcap_is2_port_stickies(sparx5, admin, out);
+ break;
+ case VCAP_TYPE_ES0:
+ sparx5_vcap_es0_port_keys(sparx5, admin, port, out);
+ break;
+ case VCAP_TYPE_ES2:
+ sparx5_vcap_es2_port_keys(sparx5, admin, port, out);
+ sparx5_vcap_es2_port_stickies(sparx5, admin, out);
+ break;
+ default:
+ out->prf(out->dst, " no info\n");
+ break;
+ }
return 0;
}
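For orientation, the callback above produces debugfs output of roughly the following shape (an illustrative sketch assembled from the prf format strings; the exact indentation and VCAP name string come from the vcap model, not from this hunk):

/*
 * es0:
 *   port[01] (eth1):
 *     Lookup 0:
 *       state: on
 *       keyset: isdx
 */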
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.c b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.c
index a0c126ba9a87..d0d4e0385ac7 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.c
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.c
@@ -27,6 +27,28 @@
ANA_ACL_VCAP_S2_KEY_SEL_IP6_UC_KEY_SEL_SET(_v6_uc) | \
ANA_ACL_VCAP_S2_KEY_SEL_ARP_KEY_SEL_SET(_arp))
+#define SPARX5_IS0_LOOKUPS 6
+#define VCAP_IS0_KEYSEL(_ena, _etype, _ipv4, _ipv6, _mpls_uc, _mpls_mc, _mlbs) \
+ (ANA_CL_ADV_CL_CFG_LOOKUP_ENA_SET(_ena) | \
+ ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL_SET(_etype) | \
+ ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL_SET(_ipv4) | \
+ ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL_SET(_ipv6) | \
+ ANA_CL_ADV_CL_CFG_MPLS_UC_CLM_KEY_SEL_SET(_mpls_uc) | \
+ ANA_CL_ADV_CL_CFG_MPLS_MC_CLM_KEY_SEL_SET(_mpls_mc) | \
+ ANA_CL_ADV_CL_CFG_MLBS_CLM_KEY_SEL_SET(_mlbs))
+
+#define SPARX5_ES0_LOOKUPS 1
+#define VCAP_ES0_KEYSEL(_key) (REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA_SET(_key))
+#define SPARX5_STAT_ESDX_GRN_PKTS 0x300
+#define SPARX5_STAT_ESDX_YEL_PKTS 0x301
+
+#define SPARX5_ES2_LOOKUPS 2
+#define VCAP_ES2_KEYSEL(_ena, _arp, _ipv4, _ipv6) \
+ (EACL_VCAP_ES2_KEY_SEL_KEY_ENA_SET(_ena) | \
+ EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL_SET(_arp) | \
+ EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL_SET(_ipv4) | \
+ EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL_SET(_ipv6))
+
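As a quick illustration of how these helper macros compose a per-port key-selection word (a sketch; portno and lookup are assumed locals), the ES2 variant used later in sparx5_vcap_es2_port_key_selection() boils down to a single register write:

u32 keysel = VCAP_ES2_KEYSEL(true, VCAP_ES2_PS_ARP_MAC_ETYPE,
			     VCAP_ES2_PS_IPV4_IP4_TCP_UDP_OTHER,
			     VCAP_ES2_PS_IPV6_IP_7TUPLE);

/* Select these keysets for one port/lookup pair */
spx5_wr(keysel, sparx5, EACL_VCAP_ES2_KEY_SEL(portno, lookup));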
static struct sparx5_vcap_inst {
enum vcap_type vtype; /* type of vcap */
int vinst; /* instance number within the same type */
@@ -38,8 +60,45 @@ static struct sparx5_vcap_inst {
int map_id; /* id in the super vcap block mapping (if applicable) */
int blockno; /* starting block in super vcap (if applicable) */
int blocks; /* number of blocks in super vcap (if applicable) */
+ bool ingress; /* is vcap in the ingress path */
} sparx5_vcap_inst_cfg[] = {
{
+ .vtype = VCAP_TYPE_IS0, /* CLM-0 */
+ .vinst = 0,
+ .map_id = 1,
+ .lookups = SPARX5_IS0_LOOKUPS,
+ .lookups_per_instance = SPARX5_IS0_LOOKUPS / 3,
+ .first_cid = SPARX5_VCAP_CID_IS0_L0,
+ .last_cid = SPARX5_VCAP_CID_IS0_L2 - 1,
+ .blockno = 8, /* Maps block 8-9 */
+ .blocks = 2,
+ .ingress = true,
+ },
+ {
+ .vtype = VCAP_TYPE_IS0, /* CLM-1 */
+ .vinst = 1,
+ .map_id = 2,
+ .lookups = SPARX5_IS0_LOOKUPS,
+ .lookups_per_instance = SPARX5_IS0_LOOKUPS / 3,
+ .first_cid = SPARX5_VCAP_CID_IS0_L2,
+ .last_cid = SPARX5_VCAP_CID_IS0_L4 - 1,
+ .blockno = 6, /* Maps block 6-7 */
+ .blocks = 2,
+ .ingress = true,
+ },
+ {
+ .vtype = VCAP_TYPE_IS0, /* CLM-2 */
+ .vinst = 2,
+ .map_id = 3,
+ .lookups = SPARX5_IS0_LOOKUPS,
+ .lookups_per_instance = SPARX5_IS0_LOOKUPS / 3,
+ .first_cid = SPARX5_VCAP_CID_IS0_L4,
+ .last_cid = SPARX5_VCAP_CID_IS0_MAX,
+ .blockno = 4, /* Maps block 4-5 */
+ .blocks = 2,
+ .ingress = true,
+ },
+ {
.vtype = VCAP_TYPE_IS2, /* IS2-0 */
.vinst = 0,
.map_id = 4,
@@ -49,6 +108,7 @@ static struct sparx5_vcap_inst {
.last_cid = SPARX5_VCAP_CID_IS2_L2 - 1,
.blockno = 0, /* Maps block 0-1 */
.blocks = 2,
+ .ingress = true,
},
{
.vtype = VCAP_TYPE_IS2, /* IS2-1 */
@@ -60,9 +120,59 @@ static struct sparx5_vcap_inst {
.last_cid = SPARX5_VCAP_CID_IS2_MAX,
.blockno = 2, /* Maps block 2-3 */
.blocks = 2,
+ .ingress = true,
+ },
+ {
+ .vtype = VCAP_TYPE_ES0,
+ .lookups = SPARX5_ES0_LOOKUPS,
+ .lookups_per_instance = SPARX5_ES0_LOOKUPS,
+ .first_cid = SPARX5_VCAP_CID_ES0_L0,
+ .last_cid = SPARX5_VCAP_CID_ES0_MAX,
+ .count = 4096, /* Addresses according to datasheet */
+ .ingress = false,
+ },
+ {
+ .vtype = VCAP_TYPE_ES2,
+ .lookups = SPARX5_ES2_LOOKUPS,
+ .lookups_per_instance = SPARX5_ES2_LOOKUPS,
+ .first_cid = SPARX5_VCAP_CID_ES2_L0,
+ .last_cid = SPARX5_VCAP_CID_ES2_MAX,
+ .count = 12288, /* Addresses according to datasheet */
+ .ingress = false,
},
};
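A note on the IS0 carve-up above (derived from the table entries, not new configuration): the six CLM lookups are spread over three instances, two lookups each:

/*
 * CLM-0 (vinst 0): lookups 0-1, chain ids [IS0_L0, IS0_L2)
 * CLM-1 (vinst 1): lookups 2-3, chain ids [IS0_L2, IS0_L4)
 * CLM-2 (vinst 2): lookups 4-5, chain ids [IS0_L4, IS0_MAX]
 */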
+/* These protocols have dedicated keysets in IS0 and a TC dissector */
+static u16 sparx5_vcap_is0_known_etypes[] = {
+ ETH_P_ALL,
+ ETH_P_IP,
+ ETH_P_IPV6,
+};
+
+/* These protocols have dedicated keysets in IS2 and a TC dissector */
+static u16 sparx5_vcap_is2_known_etypes[] = {
+ ETH_P_ALL,
+ ETH_P_ARP,
+ ETH_P_IP,
+ ETH_P_IPV6,
+};
+
+/* These protocols have dedicated keysets in ES2 and a TC dissector */
+static u16 sparx5_vcap_es2_known_etypes[] = {
+ ETH_P_ALL,
+ ETH_P_ARP,
+ ETH_P_IP,
+ ETH_P_IPV6,
+};
+
+static void sparx5_vcap_type_err(struct sparx5 *sparx5,
+ struct vcap_admin *admin,
+ const char *fname)
+{
+ pr_err("%s: vcap type: %s not supported\n",
+ fname, sparx5_vcaps[admin->vtype].name);
+}
+
/* Await the super VCAP completion of the current operation */
static void sparx5_vcap_wait_super_update(struct sparx5 *sparx5)
{
@@ -73,25 +183,81 @@ static void sparx5_vcap_wait_super_update(struct sparx5 *sparx5)
false, sparx5, VCAP_SUPER_CTRL);
}
-/* Initializing a VCAP address range: only IS2 for now */
+/* Await the ES0 VCAP completion of the current operation */
+static void sparx5_vcap_wait_es0_update(struct sparx5 *sparx5)
+{
+ u32 value;
+
+ read_poll_timeout(spx5_rd, value,
+ !VCAP_ES0_CTRL_UPDATE_SHOT_GET(value), 500, 10000,
+ false, sparx5, VCAP_ES0_CTRL);
+}
+
+/* Await the ES2 VCAP completion of the current operation */
+static void sparx5_vcap_wait_es2_update(struct sparx5 *sparx5)
+{
+ u32 value;
+
+ read_poll_timeout(spx5_rd, value,
+ !VCAP_ES2_CTRL_UPDATE_SHOT_GET(value), 500, 10000,
+ false, sparx5, VCAP_ES2_CTRL);
+}
+
+/* Initializing a VCAP address range */
static void _sparx5_vcap_range_init(struct sparx5 *sparx5,
struct vcap_admin *admin,
u32 addr, u32 count)
{
u32 size = count - 1;
- spx5_wr(VCAP_SUPER_CFG_MV_NUM_POS_SET(0) |
- VCAP_SUPER_CFG_MV_SIZE_SET(size),
- sparx5, VCAP_SUPER_CFG);
- spx5_wr(VCAP_SUPER_CTRL_UPDATE_CMD_SET(VCAP_CMD_INITIALIZE) |
- VCAP_SUPER_CTRL_UPDATE_ENTRY_DIS_SET(0) |
- VCAP_SUPER_CTRL_UPDATE_ACTION_DIS_SET(0) |
- VCAP_SUPER_CTRL_UPDATE_CNT_DIS_SET(0) |
- VCAP_SUPER_CTRL_UPDATE_ADDR_SET(addr) |
- VCAP_SUPER_CTRL_CLEAR_CACHE_SET(true) |
- VCAP_SUPER_CTRL_UPDATE_SHOT_SET(true),
- sparx5, VCAP_SUPER_CTRL);
- sparx5_vcap_wait_super_update(sparx5);
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ case VCAP_TYPE_IS2:
+ spx5_wr(VCAP_SUPER_CFG_MV_NUM_POS_SET(0) |
+ VCAP_SUPER_CFG_MV_SIZE_SET(size),
+ sparx5, VCAP_SUPER_CFG);
+ spx5_wr(VCAP_SUPER_CTRL_UPDATE_CMD_SET(VCAP_CMD_INITIALIZE) |
+ VCAP_SUPER_CTRL_UPDATE_ENTRY_DIS_SET(0) |
+ VCAP_SUPER_CTRL_UPDATE_ACTION_DIS_SET(0) |
+ VCAP_SUPER_CTRL_UPDATE_CNT_DIS_SET(0) |
+ VCAP_SUPER_CTRL_UPDATE_ADDR_SET(addr) |
+ VCAP_SUPER_CTRL_CLEAR_CACHE_SET(true) |
+ VCAP_SUPER_CTRL_UPDATE_SHOT_SET(true),
+ sparx5, VCAP_SUPER_CTRL);
+ sparx5_vcap_wait_super_update(sparx5);
+ break;
+ case VCAP_TYPE_ES0:
+ spx5_wr(VCAP_ES0_CFG_MV_NUM_POS_SET(0) |
+ VCAP_ES0_CFG_MV_SIZE_SET(size),
+ sparx5, VCAP_ES0_CFG);
+ spx5_wr(VCAP_ES0_CTRL_UPDATE_CMD_SET(VCAP_CMD_INITIALIZE) |
+ VCAP_ES0_CTRL_UPDATE_ENTRY_DIS_SET(0) |
+ VCAP_ES0_CTRL_UPDATE_ACTION_DIS_SET(0) |
+ VCAP_ES0_CTRL_UPDATE_CNT_DIS_SET(0) |
+ VCAP_ES0_CTRL_UPDATE_ADDR_SET(addr) |
+ VCAP_ES0_CTRL_CLEAR_CACHE_SET(true) |
+ VCAP_ES0_CTRL_UPDATE_SHOT_SET(true),
+ sparx5, VCAP_ES0_CTRL);
+ sparx5_vcap_wait_es0_update(sparx5);
+ break;
+ case VCAP_TYPE_ES2:
+ spx5_wr(VCAP_ES2_CFG_MV_NUM_POS_SET(0) |
+ VCAP_ES2_CFG_MV_SIZE_SET(size),
+ sparx5, VCAP_ES2_CFG);
+ spx5_wr(VCAP_ES2_CTRL_UPDATE_CMD_SET(VCAP_CMD_INITIALIZE) |
+ VCAP_ES2_CTRL_UPDATE_ENTRY_DIS_SET(0) |
+ VCAP_ES2_CTRL_UPDATE_ACTION_DIS_SET(0) |
+ VCAP_ES2_CTRL_UPDATE_CNT_DIS_SET(0) |
+ VCAP_ES2_CTRL_UPDATE_ADDR_SET(addr) |
+ VCAP_ES2_CTRL_CLEAR_CACHE_SET(true) |
+ VCAP_ES2_CTRL_UPDATE_SHOT_SET(true),
+ sparx5, VCAP_ES2_CTRL);
+ sparx5_vcap_wait_es2_update(sparx5);
+ break;
+ default:
+ sparx5_vcap_type_err(sparx5, admin, __func__);
+ break;
+ }
}
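All three branches follow the same hardware handshake, summarized here as a comment sketch (assumption: the SHOT bit self-clears on completion, which is what the wait helpers above poll for):

/*
 * 1. Program the operation size:   CFG.MV_SIZE = count - 1
 * 2. Issue VCAP_CMD_INITIALIZE with CLEAR_CACHE and UPDATE_SHOT set
 * 3. Poll CTRL.UPDATE_SHOT until it clears (sparx5_vcap_wait_*_update)
 */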
/* Initializing VCAP rule data area */
@@ -112,6 +278,17 @@ static const char *sparx5_vcap_keyset_name(struct net_device *ndev,
return vcap_keyset_name(port->sparx5->vcap_ctrl, keyset);
}
+/* Check if this is the first lookup of IS0 */
+static bool sparx5_vcap_is0_is_first_chain(struct vcap_rule *rule)
+{
+ return (rule->vcap_chain_id >= SPARX5_VCAP_CID_IS0_L0 &&
+ rule->vcap_chain_id < SPARX5_VCAP_CID_IS0_L1) ||
+ ((rule->vcap_chain_id >= SPARX5_VCAP_CID_IS0_L2 &&
+ rule->vcap_chain_id < SPARX5_VCAP_CID_IS0_L3)) ||
+ ((rule->vcap_chain_id >= SPARX5_VCAP_CID_IS0_L4 &&
+ rule->vcap_chain_id < SPARX5_VCAP_CID_IS0_L5));
+}
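Reading these ranges against the instance table earlier in this patch, they correspond to lookups 0, 2 and 4, i.e. the first of the two lookups served by each CLM instance, which is what the LOOKUP_FIRST_IS key later encodes (an observation, not a behavioural change):

/*
 * [IS0_L0, IS0_L1) -> lookup 0 (first in CLM-0)
 * [IS0_L2, IS0_L3) -> lookup 2 (first in CLM-1)
 * [IS0_L4, IS0_L5) -> lookup 4 (first in CLM-2)
 */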
+
/* Check if this is the first lookup of IS2 */
static bool sparx5_vcap_is2_is_first_chain(struct vcap_rule *rule)
{
@@ -121,9 +298,15 @@ static bool sparx5_vcap_is2_is_first_chain(struct vcap_rule *rule)
rule->vcap_chain_id < SPARX5_VCAP_CID_IS2_L3));
}
+static bool sparx5_vcap_es2_is_first_chain(struct vcap_rule *rule)
+{
+ return (rule->vcap_chain_id >= SPARX5_VCAP_CID_ES2_L0 &&
+ rule->vcap_chain_id < SPARX5_VCAP_CID_ES2_L1);
+}
+
/* Set the narrow range ingress port mask on a rule */
-static void sparx5_vcap_add_range_port_mask(struct vcap_rule *rule,
- struct net_device *ndev)
+static void sparx5_vcap_add_ingress_range_port_mask(struct vcap_rule *rule,
+ struct net_device *ndev)
{
struct sparx5_port *port = netdev_priv(ndev);
u32 port_mask;
@@ -153,12 +336,51 @@ static void sparx5_vcap_add_wide_port_mask(struct vcap_rule *rule,
vcap_rule_add_key_u72(rule, VCAP_KF_IF_IGR_PORT_MASK, &port_mask);
}
-/* Convert chain id to vcap lookup id */
-static int sparx5_vcap_cid_to_lookup(int cid)
+static void sparx5_vcap_add_egress_range_port_mask(struct vcap_rule *rule,
+ struct net_device *ndev)
+{
+ struct sparx5_port *port = netdev_priv(ndev);
+ u32 port_mask;
+ u32 range;
+
+ /* Mask range selects:
+ * 0-2: Physical/Logical egress port number 0-31, 32-63, 64.
+ * 3-5: Virtual Interface Number 0-31, 32-63, 64.
+ * 6: CPU queue Number 0-7.
+ *
+ * Use physical/logical port ranges (0-2)
+ */
+ range = port->portno / BITS_PER_TYPE(u32);
+ /* Port bit set to match-any */
+ port_mask = ~BIT(port->portno % BITS_PER_TYPE(u32));
+ vcap_rule_add_key_u32(rule, VCAP_KF_IF_EGR_PORT_MASK_RNG, range, 0xf);
+ vcap_rule_add_key_u32(rule, VCAP_KF_IF_EGR_PORT_MASK, 0, port_mask);
+}
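A worked example of the inverted mask (illustrative numbers):

/*
 * Egress port 35: range = 35 / 32 = 1 (ports 32-63),
 * port_mask = ~BIT(35 % 32) = ~BIT(3).
 * With value 0 and this mask, bit 3 (port 35) is don't-care while
 * every other port bit in the range must be 0, so the rule matches
 * frames egressing port 35 only.
 */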
+
+/* Convert IS0 chain id to vcap lookup id */
+static int sparx5_vcap_is0_cid_to_lookup(int cid)
+{
+ int lookup = 0;
+
+ if (cid >= SPARX5_VCAP_CID_IS0_L1 && cid < SPARX5_VCAP_CID_IS0_L2)
+ lookup = 1;
+ else if (cid >= SPARX5_VCAP_CID_IS0_L2 && cid < SPARX5_VCAP_CID_IS0_L3)
+ lookup = 2;
+ else if (cid >= SPARX5_VCAP_CID_IS0_L3 && cid < SPARX5_VCAP_CID_IS0_L4)
+ lookup = 3;
+ else if (cid >= SPARX5_VCAP_CID_IS0_L4 && cid < SPARX5_VCAP_CID_IS0_L5)
+ lookup = 4;
+ else if (cid >= SPARX5_VCAP_CID_IS0_L5 && cid < SPARX5_VCAP_CID_IS0_MAX)
+ lookup = 5;
+
+ return lookup;
+}
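For example (using the chain id ranges from sparx5_vcap_impl.h):

/*
 * cid = SPARX5_VCAP_CID_IS0_L3 + 1 falls in [IS0_L3, IS0_L4),
 * so the rule lands in IS0 lookup 3.
 */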
+
+/* Convert IS2 chain id to vcap lookup id */
+static int sparx5_vcap_is2_cid_to_lookup(int cid)
{
int lookup = 0;
- /* For now only handle IS2 */
if (cid >= SPARX5_VCAP_CID_IS2_L1 && cid < SPARX5_VCAP_CID_IS2_L2)
lookup = 1;
else if (cid >= SPARX5_VCAP_CID_IS2_L2 && cid < SPARX5_VCAP_CID_IS2_L3)
@@ -169,6 +391,86 @@ static int sparx5_vcap_cid_to_lookup(int cid)
return lookup;
}
+/* Convert ES2 chain id to vcap lookup id */
+static int sparx5_vcap_es2_cid_to_lookup(int cid)
+{
+ int lookup = 0;
+
+ if (cid >= SPARX5_VCAP_CID_ES2_L1)
+ lookup = 1;
+
+ return lookup;
+}
+
+/* Add ethernet type IS0 keyset to a list */
+static void
+sparx5_vcap_is0_get_port_etype_keysets(struct vcap_keyset_list *keysetlist,
+ u32 value)
+{
+ switch (ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL_GET(value)) {
+ case VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_NORMAL_7TUPLE);
+ break;
+ case VCAP_IS0_PS_ETYPE_NORMAL_5TUPLE_IP4:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_NORMAL_5TUPLE_IP4);
+ break;
+ }
+}
+
+/* Return the list of keysets for the vcap port configuration */
+static int sparx5_vcap_is0_get_port_keysets(struct net_device *ndev,
+ int lookup,
+ struct vcap_keyset_list *keysetlist,
+ u16 l3_proto)
+{
+ struct sparx5_port *port = netdev_priv(ndev);
+ struct sparx5 *sparx5 = port->sparx5;
+ int portno = port->portno;
+ u32 value;
+
+ value = spx5_rd(sparx5, ANA_CL_ADV_CL_CFG(portno, lookup));
+
+ /* Collect all keysets for the port in a list */
+ if (l3_proto == ETH_P_ALL)
+ sparx5_vcap_is0_get_port_etype_keysets(keysetlist, value);
+
+ if (l3_proto == ETH_P_ALL || l3_proto == ETH_P_IP)
+ switch (ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL_GET(value)) {
+ case VCAP_IS0_PS_ETYPE_DEFAULT:
+ sparx5_vcap_is0_get_port_etype_keysets(keysetlist,
+ value);
+ break;
+ case VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE:
+ vcap_keyset_list_add(keysetlist,
+ VCAP_KFS_NORMAL_7TUPLE);
+ break;
+ case VCAP_IS0_PS_ETYPE_NORMAL_5TUPLE_IP4:
+ vcap_keyset_list_add(keysetlist,
+ VCAP_KFS_NORMAL_5TUPLE_IP4);
+ break;
+ }
+
+ if (l3_proto == ETH_P_ALL || l3_proto == ETH_P_IPV6)
+ switch (ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL_GET(value)) {
+ case VCAP_IS0_PS_ETYPE_DEFAULT:
+ sparx5_vcap_is0_get_port_etype_keysets(keysetlist,
+ value);
+ break;
+ case VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE:
+ vcap_keyset_list_add(keysetlist,
+ VCAP_KFS_NORMAL_7TUPLE);
+ break;
+ case VCAP_IS0_PS_ETYPE_NORMAL_5TUPLE_IP4:
+ vcap_keyset_list_add(keysetlist,
+ VCAP_KFS_NORMAL_5TUPLE_IP4);
+ break;
+ }
+
+ if (l3_proto != ETH_P_IP && l3_proto != ETH_P_IPV6)
+ sparx5_vcap_is0_get_port_etype_keysets(keysetlist, value);
+ return 0;
+}
+
/* Return the list of keysets for the vcap port configuration */
static int sparx5_vcap_is2_get_port_keysets(struct net_device *ndev,
int lookup,
@@ -180,10 +482,7 @@ static int sparx5_vcap_is2_get_port_keysets(struct net_device *ndev,
int portno = port->portno;
u32 value;
- /* Check if the port keyset selection is enabled */
value = spx5_rd(sparx5, ANA_ACL_VCAP_S2_KEY_SEL(portno, lookup));
- if (!ANA_ACL_VCAP_S2_KEY_SEL_KEY_SEL_ENA_GET(value))
- return -ENOENT;
/* Collect all keysets for the port in a list */
if (l3_proto == ETH_P_ALL || l3_proto == ETH_P_ARP) {
@@ -274,6 +573,121 @@ static int sparx5_vcap_is2_get_port_keysets(struct net_device *ndev,
return 0;
}
+/* Return the keysets for the vcap port IP4 traffic class configuration */
+static void
+sparx5_vcap_es2_get_port_ipv4_keysets(struct vcap_keyset_list *keysetlist,
+ u32 value)
+{
+ switch (EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL_GET(value)) {
+ case VCAP_ES2_PS_IPV4_MAC_ETYPE:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_MAC_ETYPE);
+ break;
+ case VCAP_ES2_PS_IPV4_IP_7TUPLE:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_IP_7TUPLE);
+ break;
+ case VCAP_ES2_PS_IPV4_IP4_TCP_UDP_VID:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_IP4_TCP_UDP);
+ break;
+ case VCAP_ES2_PS_IPV4_IP4_TCP_UDP_OTHER:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_IP4_TCP_UDP);
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_IP4_OTHER);
+ break;
+ case VCAP_ES2_PS_IPV4_IP4_VID:
+ /* Not used */
+ break;
+ case VCAP_ES2_PS_IPV4_IP4_OTHER:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_IP4_OTHER);
+ break;
+ }
+}
+
+/* Return the list of keysets for the vcap port configuration */
+static int sparx5_vcap_es0_get_port_keysets(struct net_device *ndev,
+ struct vcap_keyset_list *keysetlist,
+ u16 l3_proto)
+{
+ struct sparx5_port *port = netdev_priv(ndev);
+ struct sparx5 *sparx5 = port->sparx5;
+ int portno = port->portno;
+ u32 value;
+
+ value = spx5_rd(sparx5, REW_RTAG_ETAG_CTRL(portno));
+
+ /* Collect all keysets for the port in a list */
+ switch (REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA_GET(value)) {
+ case VCAP_ES0_PS_NORMAL_SELECTION:
+ case VCAP_ES0_PS_FORCE_ISDX_LOOKUPS:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_ISDX);
+ break;
+ default:
+ break;
+ }
+ return 0;
+}
+
+/* Return the list of keysets for the vcap port configuration */
+static int sparx5_vcap_es2_get_port_keysets(struct net_device *ndev,
+ int lookup,
+ struct vcap_keyset_list *keysetlist,
+ u16 l3_proto)
+{
+ struct sparx5_port *port = netdev_priv(ndev);
+ struct sparx5 *sparx5 = port->sparx5;
+ int portno = port->portno;
+ u32 value;
+
+ value = spx5_rd(sparx5, EACL_VCAP_ES2_KEY_SEL(portno, lookup));
+
+ /* Collect all keysets for the port in a list */
+ if (l3_proto == ETH_P_ALL || l3_proto == ETH_P_ARP) {
+ switch (EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL_GET(value)) {
+ case VCAP_ES2_PS_ARP_MAC_ETYPE:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_MAC_ETYPE);
+ break;
+ case VCAP_ES2_PS_ARP_ARP:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_ARP);
+ break;
+ }
+ }
+
+ if (l3_proto == ETH_P_ALL || l3_proto == ETH_P_IP)
+ sparx5_vcap_es2_get_port_ipv4_keysets(keysetlist, value);
+
+ if (l3_proto == ETH_P_ALL || l3_proto == ETH_P_IPV6) {
+ switch (EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL_GET(value)) {
+ case VCAP_ES2_PS_IPV6_MAC_ETYPE:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_MAC_ETYPE);
+ break;
+ case VCAP_ES2_PS_IPV6_IP_7TUPLE:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_IP_7TUPLE);
+ break;
+ case VCAP_ES2_PS_IPV6_IP_7TUPLE_VID:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_IP_7TUPLE);
+ break;
+ case VCAP_ES2_PS_IPV6_IP_7TUPLE_STD:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_IP_7TUPLE);
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_IP6_STD);
+ break;
+ case VCAP_ES2_PS_IPV6_IP6_VID:
+ /* Not used */
+ break;
+ case VCAP_ES2_PS_IPV6_IP6_STD:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_IP6_STD);
+ break;
+ case VCAP_ES2_PS_IPV6_IP4_DOWNGRADE:
+ sparx5_vcap_es2_get_port_ipv4_keysets(keysetlist,
+ value);
+ break;
+ }
+ }
+
+ if (l3_proto != ETH_P_ARP && l3_proto != ETH_P_IP &&
+ l3_proto != ETH_P_IPV6) {
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_MAC_ETYPE);
+ }
+ return 0;
+}
+
/* Get the port keyset for the vcap lookup */
int sparx5_vcap_get_port_keyset(struct net_device *ndev,
struct vcap_admin *admin,
@@ -281,10 +695,64 @@ int sparx5_vcap_get_port_keyset(struct net_device *ndev,
u16 l3_proto,
struct vcap_keyset_list *kslist)
{
- int lookup;
+ int lookup, err = -EINVAL;
+ struct sparx5_port *port;
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ lookup = sparx5_vcap_is0_cid_to_lookup(cid);
+ err = sparx5_vcap_is0_get_port_keysets(ndev, lookup, kslist,
+ l3_proto);
+ break;
+ case VCAP_TYPE_IS2:
+ lookup = sparx5_vcap_is2_cid_to_lookup(cid);
+ err = sparx5_vcap_is2_get_port_keysets(ndev, lookup, kslist,
+ l3_proto);
+ break;
+ case VCAP_TYPE_ES0:
+ err = sparx5_vcap_es0_get_port_keysets(ndev, kslist, l3_proto);
+ break;
+ case VCAP_TYPE_ES2:
+ lookup = sparx5_vcap_es2_cid_to_lookup(cid);
+ err = sparx5_vcap_es2_get_port_keysets(ndev, lookup, kslist,
+ l3_proto);
+ break;
+ default:
+ port = netdev_priv(ndev);
+ sparx5_vcap_type_err(port->sparx5, admin, __func__);
+ break;
+ }
+ return err;
+}
+
+/* Check if the ethertype is supported by the vcap port classification */
+bool sparx5_vcap_is_known_etype(struct vcap_admin *admin, u16 etype)
+{
+ const u16 *known_etypes;
+ int size, idx;
- lookup = sparx5_vcap_cid_to_lookup(cid);
- return sparx5_vcap_is2_get_port_keysets(ndev, lookup, kslist, l3_proto);
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ known_etypes = sparx5_vcap_is0_known_etypes;
+ size = ARRAY_SIZE(sparx5_vcap_is0_known_etypes);
+ break;
+ case VCAP_TYPE_IS2:
+ known_etypes = sparx5_vcap_is2_known_etypes;
+ size = ARRAY_SIZE(sparx5_vcap_is2_known_etypes);
+ break;
+ case VCAP_TYPE_ES0:
+ return true;
+ case VCAP_TYPE_ES2:
+ known_etypes = sparx5_vcap_es2_known_etypes;
+ size = ARRAY_SIZE(sparx5_vcap_es2_known_etypes);
+ break;
+ default:
+ return false;
+ }
+ for (idx = 0; idx < size; ++idx)
+ if (known_etypes[idx] == etype)
+ return true;
+ return false;
}
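A minimal caller-side sketch (assumed usage by the TC flower glue; the match variable and error handling are hypothetical, not part of this hunk):

if (!sparx5_vcap_is_known_etype(admin, ntohs(match.key->n_proto)))
	return -EOPNOTSUPP; /* no dedicated keyset for this ethertype */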
/* API callback used for validating a field keyset (check the port keysets) */
@@ -297,16 +765,40 @@ sparx5_vcap_validate_keyset(struct net_device *ndev,
{
struct vcap_keyset_list keysetlist = {};
enum vcap_keyfield_set keysets[10] = {};
+ struct sparx5_port *port;
int idx, jdx, lookup;
if (!kslist || kslist->cnt == 0)
return VCAP_KFS_NO_VALUE;
- /* Get a list of currently configured keysets in the lookups */
- lookup = sparx5_vcap_cid_to_lookup(rule->vcap_chain_id);
keysetlist.max = ARRAY_SIZE(keysets);
keysetlist.keysets = keysets;
- sparx5_vcap_is2_get_port_keysets(ndev, lookup, &keysetlist, l3_proto);
+
+ /* Get a list of currently configured keysets in the lookups */
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ lookup = sparx5_vcap_is0_cid_to_lookup(rule->vcap_chain_id);
+ sparx5_vcap_is0_get_port_keysets(ndev, lookup, &keysetlist,
+ l3_proto);
+ break;
+ case VCAP_TYPE_IS2:
+ lookup = sparx5_vcap_is2_cid_to_lookup(rule->vcap_chain_id);
+ sparx5_vcap_is2_get_port_keysets(ndev, lookup, &keysetlist,
+ l3_proto);
+ break;
+ case VCAP_TYPE_ES0:
+ sparx5_vcap_es0_get_port_keysets(ndev, &keysetlist, l3_proto);
+ break;
+ case VCAP_TYPE_ES2:
+ lookup = sparx5_vcap_es2_cid_to_lookup(rule->vcap_chain_id);
+ sparx5_vcap_es2_get_port_keysets(ndev, lookup, &keysetlist,
+ l3_proto);
+ break;
+ default:
+ port = netdev_priv(ndev);
+ sparx5_vcap_type_err(port->sparx5, admin, __func__);
+ break;
+ }
/* Check if there is a match and return the match */
for (idx = 0; idx < kslist->cnt; ++idx)
@@ -321,27 +813,97 @@ sparx5_vcap_validate_keyset(struct net_device *ndev,
return -ENOENT;
}
-/* API callback used for adding default fields to a rule */
-static void sparx5_vcap_add_default_fields(struct net_device *ndev,
- struct vcap_admin *admin,
- struct vcap_rule *rule)
+static void sparx5_vcap_ingress_add_default_fields(struct net_device *ndev,
+ struct vcap_admin *admin,
+ struct vcap_rule *rule)
{
const struct vcap_field *field;
+ bool is_first;
+ /* Add ingress port mask matching the net device */
field = vcap_lookup_keyfield(rule, VCAP_KF_IF_IGR_PORT_MASK);
if (field && field->width == SPX5_PORTS)
sparx5_vcap_add_wide_port_mask(rule, ndev);
else if (field && field->width == BITS_PER_TYPE(u32))
- sparx5_vcap_add_range_port_mask(rule, ndev);
+ sparx5_vcap_add_ingress_range_port_mask(rule, ndev);
else
pr_err("%s:%d: %s: could not add an ingress port mask for: %s\n",
__func__, __LINE__, netdev_name(ndev),
sparx5_vcap_keyset_name(ndev, rule->keyset));
- /* add the lookup bit */
- if (sparx5_vcap_is2_is_first_chain(rule))
- vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS, VCAP_BIT_1);
+
+ if (admin->vtype == VCAP_TYPE_IS0)
+ is_first = sparx5_vcap_is0_is_first_chain(rule);
else
- vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS, VCAP_BIT_0);
+ is_first = sparx5_vcap_is2_is_first_chain(rule);
+
+ /* Add key that selects the first/second lookup */
+ if (is_first)
+ vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS,
+ VCAP_BIT_1);
+ else
+ vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS,
+ VCAP_BIT_0);
+}
+
+static void sparx5_vcap_es0_add_default_fields(struct net_device *ndev,
+ struct vcap_admin *admin,
+ struct vcap_rule *rule)
+{
+ struct sparx5_port *port = netdev_priv(ndev);
+
+ vcap_rule_add_key_u32(rule, VCAP_KF_IF_EGR_PORT_NO, port->portno, ~0);
+ /* Match untagged frames if there was no VLAN key */
+ vcap_rule_add_key_u32(rule, VCAP_KF_8021Q_TPID, SPX5_TPID_SEL_UNTAGGED,
+ ~0);
+}
+
+static void sparx5_vcap_es2_add_default_fields(struct net_device *ndev,
+ struct vcap_admin *admin,
+ struct vcap_rule *rule)
+{
+ const struct vcap_field *field;
+ bool is_first;
+
+ /* Add egress port mask matching the net device */
+ field = vcap_lookup_keyfield(rule, VCAP_KF_IF_EGR_PORT_MASK);
+ if (field)
+ sparx5_vcap_add_egress_range_port_mask(rule, ndev);
+
+ /* Add key that selects the first/second lookup */
+ is_first = sparx5_vcap_es2_is_first_chain(rule);
+
+ if (is_first)
+ vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS,
+ VCAP_BIT_1);
+ else
+ vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS,
+ VCAP_BIT_0);
+}
+
+/* API callback used for adding default fields to a rule */
+static void sparx5_vcap_add_default_fields(struct net_device *ndev,
+ struct vcap_admin *admin,
+ struct vcap_rule *rule)
+{
+ struct sparx5_port *port;
+
+ /* add the lookup bit */
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ case VCAP_TYPE_IS2:
+ sparx5_vcap_ingress_add_default_fields(ndev, admin, rule);
+ break;
+ case VCAP_TYPE_ES0:
+ sparx5_vcap_es0_add_default_fields(ndev, admin, rule);
+ break;
+ case VCAP_TYPE_ES2:
+ sparx5_vcap_es2_add_default_fields(ndev, admin, rule);
+ break;
+ default:
+ port = netdev_priv(ndev);
+ sparx5_vcap_type_err(port->sparx5, admin, __func__);
+ break;
+ }
}
/* API callback used for erasing the vcap cache area (not the register area) */
@@ -353,21 +915,60 @@ static void sparx5_vcap_cache_erase(struct vcap_admin *admin)
memset(&admin->cache.counter, 0, sizeof(admin->cache.counter));
}
-/* API callback used for writing to the VCAP cache */
-static void sparx5_vcap_cache_write(struct net_device *ndev,
- struct vcap_admin *admin,
- enum vcap_selection sel,
- u32 start,
- u32 count)
+static void sparx5_vcap_is0_cache_write(struct sparx5 *sparx5,
+ struct vcap_admin *admin,
+ enum vcap_selection sel,
+ u32 start,
+ u32 count)
+{
+ u32 *keystr, *mskstr, *actstr;
+ int idx;
+
+ keystr = &admin->cache.keystream[start];
+ mskstr = &admin->cache.maskstream[start];
+ actstr = &admin->cache.actionstream[start];
+
+ switch (sel) {
+ case VCAP_SEL_ENTRY:
+ for (idx = 0; idx < count; ++idx) {
+ /* Avoid 'match-off' by setting value & mask */
+ spx5_wr(keystr[idx] & mskstr[idx], sparx5,
+ VCAP_SUPER_VCAP_ENTRY_DAT(idx));
+ spx5_wr(~mskstr[idx], sparx5,
+ VCAP_SUPER_VCAP_MASK_DAT(idx));
+ }
+ break;
+ case VCAP_SEL_ACTION:
+ for (idx = 0; idx < count; ++idx)
+ spx5_wr(actstr[idx], sparx5,
+ VCAP_SUPER_VCAP_ACTION_DAT(idx));
+ break;
+ case VCAP_SEL_ALL:
+ pr_err("%s:%d: cannot write all streams at once\n",
+ __func__, __LINE__);
+ break;
+ default:
+ break;
+ }
+
+ if (sel & VCAP_SEL_COUNTER)
+ spx5_wr(admin->cache.counter, sparx5,
+ VCAP_SUPER_VCAP_CNT_DAT(0));
+}
+
+static void sparx5_vcap_is2_cache_write(struct sparx5 *sparx5,
+ struct vcap_admin *admin,
+ enum vcap_selection sel,
+ u32 start,
+ u32 count)
{
- struct sparx5_port *port = netdev_priv(ndev);
- struct sparx5 *sparx5 = port->sparx5;
u32 *keystr, *mskstr, *actstr;
int idx;
keystr = &admin->cache.keystream[start];
mskstr = &admin->cache.maskstream[start];
actstr = &admin->cache.actionstream[start];
+
switch (sel) {
case VCAP_SEL_ENTRY:
for (idx = 0; idx < count; ++idx) {
@@ -403,21 +1004,143 @@ static void sparx5_vcap_cache_write(struct net_device *ndev,
}
}
-/* API callback used for reading from the VCAP into the VCAP cache */
-static void sparx5_vcap_cache_read(struct net_device *ndev,
- struct vcap_admin *admin,
- enum vcap_selection sel,
- u32 start,
- u32 count)
+/* Use ESDX counters located in the XQS */
+static void sparx5_es0_write_esdx_counter(struct sparx5 *sparx5,
+ struct vcap_admin *admin, u32 id)
+{
+ mutex_lock(&sparx5->queue_stats_lock);
+ spx5_wr(XQS_STAT_CFG_STAT_VIEW_SET(id), sparx5, XQS_STAT_CFG);
+ spx5_wr(admin->cache.counter, sparx5,
+ XQS_CNT(SPARX5_STAT_ESDX_GRN_PKTS));
+ spx5_wr(0, sparx5, XQS_CNT(SPARX5_STAT_ESDX_YEL_PKTS));
+ mutex_unlock(&sparx5->queue_stats_lock);
+}
+
+static void sparx5_vcap_es0_cache_write(struct sparx5 *sparx5,
+ struct vcap_admin *admin,
+ enum vcap_selection sel,
+ u32 start,
+ u32 count)
+{
+ u32 *keystr, *mskstr, *actstr;
+ int idx;
+
+ keystr = &admin->cache.keystream[start];
+ mskstr = &admin->cache.maskstream[start];
+ actstr = &admin->cache.actionstream[start];
+
+ switch (sel) {
+ case VCAP_SEL_ENTRY:
+ for (idx = 0; idx < count; ++idx) {
+ /* Avoid 'match-off' by setting value & mask */
+ spx5_wr(keystr[idx] & mskstr[idx], sparx5,
+ VCAP_ES0_VCAP_ENTRY_DAT(idx));
+ spx5_wr(~mskstr[idx], sparx5,
+ VCAP_ES0_VCAP_MASK_DAT(idx));
+ }
+ break;
+ case VCAP_SEL_ACTION:
+ for (idx = 0; idx < count; ++idx)
+ spx5_wr(actstr[idx], sparx5,
+ VCAP_ES0_VCAP_ACTION_DAT(idx));
+ break;
+ case VCAP_SEL_ALL:
+ pr_err("%s:%d: cannot write all streams at once\n",
+ __func__, __LINE__);
+ break;
+ default:
+ break;
+ }
+ if (sel & VCAP_SEL_COUNTER) {
+ spx5_wr(admin->cache.counter, sparx5, VCAP_ES0_VCAP_CNT_DAT(0));
+ sparx5_es0_write_esdx_counter(sparx5, admin, start);
+ }
+}
+
+static void sparx5_vcap_es2_cache_write(struct sparx5 *sparx5,
+ struct vcap_admin *admin,
+ enum vcap_selection sel,
+ u32 start,
+ u32 count)
+{
+ u32 *keystr, *mskstr, *actstr;
+ int idx;
+
+ keystr = &admin->cache.keystream[start];
+ mskstr = &admin->cache.maskstream[start];
+ actstr = &admin->cache.actionstream[start];
+
+ switch (sel) {
+ case VCAP_SEL_ENTRY:
+ for (idx = 0; idx < count; ++idx) {
+ /* Avoid 'match-off' by setting value & mask */
+ spx5_wr(keystr[idx] & mskstr[idx], sparx5,
+ VCAP_ES2_VCAP_ENTRY_DAT(idx));
+ spx5_wr(~mskstr[idx], sparx5,
+ VCAP_ES2_VCAP_MASK_DAT(idx));
+ }
+ break;
+ case VCAP_SEL_ACTION:
+ for (idx = 0; idx < count; ++idx)
+ spx5_wr(actstr[idx], sparx5,
+ VCAP_ES2_VCAP_ACTION_DAT(idx));
+ break;
+ case VCAP_SEL_ALL:
+ pr_err("%s:%d: cannot write all streams at once\n",
+ __func__, __LINE__);
+ break;
+ default:
+ break;
+ }
+ if (sel & VCAP_SEL_COUNTER) {
+ start = start & 0x7ff; /* counter limit */
+ spx5_wr(admin->cache.counter, sparx5, EACL_ES2_CNT(start));
+ spx5_wr(admin->cache.sticky, sparx5, VCAP_ES2_VCAP_CNT_DAT(0));
+ }
+}
+
+/* API callback used for writing to the VCAP cache */
+static void sparx5_vcap_cache_write(struct net_device *ndev,
+ struct vcap_admin *admin,
+ enum vcap_selection sel,
+ u32 start,
+ u32 count)
{
struct sparx5_port *port = netdev_priv(ndev);
struct sparx5 *sparx5 = port->sparx5;
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ sparx5_vcap_is0_cache_write(sparx5, admin, sel, start, count);
+ break;
+ case VCAP_TYPE_IS2:
+ sparx5_vcap_is2_cache_write(sparx5, admin, sel, start, count);
+ break;
+ case VCAP_TYPE_ES0:
+ sparx5_vcap_es0_cache_write(sparx5, admin, sel, start, count);
+ break;
+ case VCAP_TYPE_ES2:
+ sparx5_vcap_es2_cache_write(sparx5, admin, sel, start, count);
+ break;
+ default:
+ sparx5_vcap_type_err(sparx5, admin, __func__);
+ break;
+ }
+}
+
+static void sparx5_vcap_is0_cache_read(struct sparx5 *sparx5,
+ struct vcap_admin *admin,
+ enum vcap_selection sel,
+ u32 start,
+ u32 count)
+{
u32 *keystr, *mskstr, *actstr;
int idx;
keystr = &admin->cache.keystream[start];
mskstr = &admin->cache.maskstream[start];
actstr = &admin->cache.actionstream[start];
+
if (sel & VCAP_SEL_ENTRY) {
for (idx = 0; idx < count; ++idx) {
keystr[idx] = spx5_rd(sparx5,
@@ -426,11 +1149,47 @@ static void sparx5_vcap_cache_read(struct net_device *ndev,
VCAP_SUPER_VCAP_MASK_DAT(idx));
}
}
- if (sel & VCAP_SEL_ACTION) {
+
+ if (sel & VCAP_SEL_ACTION)
for (idx = 0; idx < count; ++idx)
actstr[idx] = spx5_rd(sparx5,
VCAP_SUPER_VCAP_ACTION_DAT(idx));
+
+ if (sel & VCAP_SEL_COUNTER) {
+ admin->cache.counter =
+ spx5_rd(sparx5, VCAP_SUPER_VCAP_CNT_DAT(0));
+ admin->cache.sticky =
+ spx5_rd(sparx5, VCAP_SUPER_VCAP_CNT_DAT(0));
+ }
+}
+
+static void sparx5_vcap_is2_cache_read(struct sparx5 *sparx5,
+ struct vcap_admin *admin,
+ enum vcap_selection sel,
+ u32 start,
+ u32 count)
+{
+ u32 *keystr, *mskstr, *actstr;
+ int idx;
+
+ keystr = &admin->cache.keystream[start];
+ mskstr = &admin->cache.maskstream[start];
+ actstr = &admin->cache.actionstream[start];
+
+ if (sel & VCAP_SEL_ENTRY) {
+ for (idx = 0; idx < count; ++idx) {
+ keystr[idx] = spx5_rd(sparx5,
+ VCAP_SUPER_VCAP_ENTRY_DAT(idx));
+ mskstr[idx] = ~spx5_rd(sparx5,
+ VCAP_SUPER_VCAP_MASK_DAT(idx));
+ }
}
+
+ if (sel & VCAP_SEL_ACTION)
+ for (idx = 0; idx < count; ++idx)
+ actstr[idx] = spx5_rd(sparx5,
+ VCAP_SUPER_VCAP_ACTION_DAT(idx));
+
if (sel & VCAP_SEL_COUNTER) {
start = start & 0xfff; /* counter limit */
if (admin->vinst == 0)
@@ -444,6 +1203,121 @@ static void sparx5_vcap_cache_read(struct net_device *ndev,
}
}
+/* Use ESDX counters located in the XQS */
+static void sparx5_es0_read_esdx_counter(struct sparx5 *sparx5,
+ struct vcap_admin *admin, u32 id)
+{
+ u32 counter;
+
+ mutex_lock(&sparx5->queue_stats_lock);
+ spx5_wr(XQS_STAT_CFG_STAT_VIEW_SET(id), sparx5, XQS_STAT_CFG);
+ counter = spx5_rd(sparx5, XQS_CNT(SPARX5_STAT_ESDX_GRN_PKTS)) +
+ spx5_rd(sparx5, XQS_CNT(SPARX5_STAT_ESDX_YEL_PKTS));
+ mutex_unlock(&sparx5->queue_stats_lock);
+ if (counter)
+ admin->cache.counter = counter;
+}
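Two details worth noting in the ESDX helpers (both visible in the code above): XQS_STAT_CFG selects a shared counter view, hence the queue_stats_lock serialization, and the green and yellow packet counts are summed. Sketched as a sequence:

/*
 * read sequence (under queue_stats_lock):
 *   XQS_STAT_CFG.STAT_VIEW = id
 *   counter = GRN_PKTS + YEL_PKTS
 * a zero read leaves the cached VCAP counter untouched
 */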
+
+static void sparx5_vcap_es0_cache_read(struct sparx5 *sparx5,
+ struct vcap_admin *admin,
+ enum vcap_selection sel,
+ u32 start,
+ u32 count)
+{
+ u32 *keystr, *mskstr, *actstr;
+ int idx;
+
+ keystr = &admin->cache.keystream[start];
+ mskstr = &admin->cache.maskstream[start];
+ actstr = &admin->cache.actionstream[start];
+
+ if (sel & VCAP_SEL_ENTRY) {
+ for (idx = 0; idx < count; ++idx) {
+ keystr[idx] =
+ spx5_rd(sparx5, VCAP_ES0_VCAP_ENTRY_DAT(idx));
+ mskstr[idx] =
+ ~spx5_rd(sparx5, VCAP_ES0_VCAP_MASK_DAT(idx));
+ }
+ }
+
+ if (sel & VCAP_SEL_ACTION)
+ for (idx = 0; idx < count; ++idx)
+ actstr[idx] =
+ spx5_rd(sparx5, VCAP_ES0_VCAP_ACTION_DAT(idx));
+
+ if (sel & VCAP_SEL_COUNTER) {
+ admin->cache.counter =
+ spx5_rd(sparx5, VCAP_ES0_VCAP_CNT_DAT(0));
+ admin->cache.sticky = admin->cache.counter;
+ sparx5_es0_read_esdx_counter(sparx5, admin, start);
+ }
+}
+
+static void sparx5_vcap_es2_cache_read(struct sparx5 *sparx5,
+ struct vcap_admin *admin,
+ enum vcap_selection sel,
+ u32 start,
+ u32 count)
+{
+ u32 *keystr, *mskstr, *actstr;
+ int idx;
+
+ keystr = &admin->cache.keystream[start];
+ mskstr = &admin->cache.maskstream[start];
+ actstr = &admin->cache.actionstream[start];
+
+ if (sel & VCAP_SEL_ENTRY) {
+ for (idx = 0; idx < count; ++idx) {
+ keystr[idx] =
+ spx5_rd(sparx5, VCAP_ES2_VCAP_ENTRY_DAT(idx));
+ mskstr[idx] =
+ ~spx5_rd(sparx5, VCAP_ES2_VCAP_MASK_DAT(idx));
+ }
+ }
+
+ if (sel & VCAP_SEL_ACTION)
+ for (idx = 0; idx < count; ++idx)
+ actstr[idx] =
+ spx5_rd(sparx5, VCAP_ES2_VCAP_ACTION_DAT(idx));
+
+ if (sel & VCAP_SEL_COUNTER) {
+ start = start & 0x7ff; /* counter limit */
+ admin->cache.counter =
+ spx5_rd(sparx5, EACL_ES2_CNT(start));
+ admin->cache.sticky =
+ spx5_rd(sparx5, VCAP_ES2_VCAP_CNT_DAT(0));
+ }
+}
+
+/* API callback used for reading from the VCAP into the VCAP cache */
+static void sparx5_vcap_cache_read(struct net_device *ndev,
+ struct vcap_admin *admin,
+ enum vcap_selection sel,
+ u32 start,
+ u32 count)
+{
+ struct sparx5_port *port = netdev_priv(ndev);
+ struct sparx5 *sparx5 = port->sparx5;
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ sparx5_vcap_is0_cache_read(sparx5, admin, sel, start, count);
+ break;
+ case VCAP_TYPE_IS2:
+ sparx5_vcap_is2_cache_read(sparx5, admin, sel, start, count);
+ break;
+ case VCAP_TYPE_ES0:
+ sparx5_vcap_es0_cache_read(sparx5, admin, sel, start, count);
+ break;
+ case VCAP_TYPE_ES2:
+ sparx5_vcap_es2_cache_read(sparx5, admin, sel, start, count);
+ break;
+ default:
+ sparx5_vcap_type_err(sparx5, admin, __func__);
+ break;
+ }
+}
+
/* API callback used for initializing a VCAP address range */
static void sparx5_vcap_range_init(struct net_device *ndev,
struct vcap_admin *admin, u32 addr,
@@ -455,16 +1329,12 @@ static void sparx5_vcap_range_init(struct net_device *ndev,
_sparx5_vcap_range_init(sparx5, admin, addr, count);
}
-/* API callback used for updating the VCAP cache */
-static void sparx5_vcap_update(struct net_device *ndev,
- struct vcap_admin *admin, enum vcap_command cmd,
- enum vcap_selection sel, u32 addr)
+static void sparx5_vcap_super_update(struct sparx5 *sparx5,
+ enum vcap_command cmd,
+ enum vcap_selection sel, u32 addr)
{
- struct sparx5_port *port = netdev_priv(ndev);
- struct sparx5 *sparx5 = port->sparx5;
- bool clear;
+ bool clear = (cmd == VCAP_CMD_INITIALIZE);
- clear = (cmd == VCAP_CMD_INITIALIZE);
spx5_wr(VCAP_SUPER_CFG_MV_NUM_POS_SET(0) |
VCAP_SUPER_CFG_MV_SIZE_SET(0), sparx5, VCAP_SUPER_CFG);
spx5_wr(VCAP_SUPER_CTRL_UPDATE_CMD_SET(cmd) |
@@ -478,24 +1348,75 @@ static void sparx5_vcap_update(struct net_device *ndev,
sparx5_vcap_wait_super_update(sparx5);
}
-/* API callback used for moving a block of rules in the VCAP */
-static void sparx5_vcap_move(struct net_device *ndev, struct vcap_admin *admin,
- u32 addr, int offset, int count)
+static void sparx5_vcap_es0_update(struct sparx5 *sparx5,
+ enum vcap_command cmd,
+ enum vcap_selection sel, u32 addr)
+{
+ bool clear = (cmd == VCAP_CMD_INITIALIZE);
+
+ spx5_wr(VCAP_ES0_CFG_MV_NUM_POS_SET(0) |
+ VCAP_ES0_CFG_MV_SIZE_SET(0), sparx5, VCAP_ES0_CFG);
+ spx5_wr(VCAP_ES0_CTRL_UPDATE_CMD_SET(cmd) |
+ VCAP_ES0_CTRL_UPDATE_ENTRY_DIS_SET((VCAP_SEL_ENTRY & sel) == 0) |
+ VCAP_ES0_CTRL_UPDATE_ACTION_DIS_SET((VCAP_SEL_ACTION & sel) == 0) |
+ VCAP_ES0_CTRL_UPDATE_CNT_DIS_SET((VCAP_SEL_COUNTER & sel) == 0) |
+ VCAP_ES0_CTRL_UPDATE_ADDR_SET(addr) |
+ VCAP_ES0_CTRL_CLEAR_CACHE_SET(clear) |
+ VCAP_ES0_CTRL_UPDATE_SHOT_SET(true),
+ sparx5, VCAP_ES0_CTRL);
+ sparx5_vcap_wait_es0_update(sparx5);
+}
+
+static void sparx5_vcap_es2_update(struct sparx5 *sparx5,
+ enum vcap_command cmd,
+ enum vcap_selection sel, u32 addr)
+{
+ bool clear = (cmd == VCAP_CMD_INITIALIZE);
+
+ spx5_wr(VCAP_ES2_CFG_MV_NUM_POS_SET(0) |
+ VCAP_ES2_CFG_MV_SIZE_SET(0), sparx5, VCAP_ES2_CFG);
+ spx5_wr(VCAP_ES2_CTRL_UPDATE_CMD_SET(cmd) |
+ VCAP_ES2_CTRL_UPDATE_ENTRY_DIS_SET((VCAP_SEL_ENTRY & sel) == 0) |
+ VCAP_ES2_CTRL_UPDATE_ACTION_DIS_SET((VCAP_SEL_ACTION & sel) == 0) |
+ VCAP_ES2_CTRL_UPDATE_CNT_DIS_SET((VCAP_SEL_COUNTER & sel) == 0) |
+ VCAP_ES2_CTRL_UPDATE_ADDR_SET(addr) |
+ VCAP_ES2_CTRL_CLEAR_CACHE_SET(clear) |
+ VCAP_ES2_CTRL_UPDATE_SHOT_SET(true),
+ sparx5, VCAP_ES2_CTRL);
+ sparx5_vcap_wait_es2_update(sparx5);
+}
+
+/* API callback used for updating the VCAP cache */
+static void sparx5_vcap_update(struct net_device *ndev,
+ struct vcap_admin *admin, enum vcap_command cmd,
+ enum vcap_selection sel, u32 addr)
{
struct sparx5_port *port = netdev_priv(ndev);
struct sparx5 *sparx5 = port->sparx5;
- enum vcap_command cmd;
- u16 mv_num_pos;
- u16 mv_size;
- mv_size = count - 1;
- if (offset > 0) {
- mv_num_pos = offset - 1;
- cmd = VCAP_CMD_MOVE_DOWN;
- } else {
- mv_num_pos = -offset - 1;
- cmd = VCAP_CMD_MOVE_UP;
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ case VCAP_TYPE_IS2:
+ sparx5_vcap_super_update(sparx5, cmd, sel, addr);
+ break;
+ case VCAP_TYPE_ES0:
+ sparx5_vcap_es0_update(sparx5, cmd, sel, addr);
+ break;
+ case VCAP_TYPE_ES2:
+ sparx5_vcap_es2_update(sparx5, cmd, sel, addr);
+ break;
+ default:
+ sparx5_vcap_type_err(sparx5, admin, __func__);
+ break;
}
+}
+
+static void sparx5_vcap_super_move(struct sparx5 *sparx5,
+ u32 addr,
+ enum vcap_command cmd,
+ u16 mv_num_pos,
+ u16 mv_size)
+{
spx5_wr(VCAP_SUPER_CFG_MV_NUM_POS_SET(mv_num_pos) |
VCAP_SUPER_CFG_MV_SIZE_SET(mv_size),
sparx5, VCAP_SUPER_CFG);
@@ -510,29 +1431,82 @@ static void sparx5_vcap_move(struct net_device *ndev, struct vcap_admin *admin,
sparx5_vcap_wait_super_update(sparx5);
}
-/* Enable all lookups in the VCAP instance */
-static int sparx5_vcap_enable(struct net_device *ndev,
- struct vcap_admin *admin,
- bool enable)
+static void sparx5_vcap_es0_move(struct sparx5 *sparx5,
+ u32 addr,
+ enum vcap_command cmd,
+ u16 mv_num_pos,
+ u16 mv_size)
+{
+ spx5_wr(VCAP_ES0_CFG_MV_NUM_POS_SET(mv_num_pos) |
+ VCAP_ES0_CFG_MV_SIZE_SET(mv_size),
+ sparx5, VCAP_ES0_CFG);
+ spx5_wr(VCAP_ES0_CTRL_UPDATE_CMD_SET(cmd) |
+ VCAP_ES0_CTRL_UPDATE_ENTRY_DIS_SET(0) |
+ VCAP_ES0_CTRL_UPDATE_ACTION_DIS_SET(0) |
+ VCAP_ES0_CTRL_UPDATE_CNT_DIS_SET(0) |
+ VCAP_ES0_CTRL_UPDATE_ADDR_SET(addr) |
+ VCAP_ES0_CTRL_CLEAR_CACHE_SET(false) |
+ VCAP_ES0_CTRL_UPDATE_SHOT_SET(true),
+ sparx5, VCAP_ES0_CTRL);
+ sparx5_vcap_wait_es0_update(sparx5);
+}
+
+static void sparx5_vcap_es2_move(struct sparx5 *sparx5,
+ u32 addr,
+ enum vcap_command cmd,
+ u16 mv_num_pos,
+ u16 mv_size)
+{
+ spx5_wr(VCAP_ES2_CFG_MV_NUM_POS_SET(mv_num_pos) |
+ VCAP_ES2_CFG_MV_SIZE_SET(mv_size),
+ sparx5, VCAP_ES2_CFG);
+ spx5_wr(VCAP_ES2_CTRL_UPDATE_CMD_SET(cmd) |
+ VCAP_ES2_CTRL_UPDATE_ENTRY_DIS_SET(0) |
+ VCAP_ES2_CTRL_UPDATE_ACTION_DIS_SET(0) |
+ VCAP_ES2_CTRL_UPDATE_CNT_DIS_SET(0) |
+ VCAP_ES2_CTRL_UPDATE_ADDR_SET(addr) |
+ VCAP_ES2_CTRL_CLEAR_CACHE_SET(false) |
+ VCAP_ES2_CTRL_UPDATE_SHOT_SET(true),
+ sparx5, VCAP_ES2_CTRL);
+ sparx5_vcap_wait_es2_update(sparx5);
+}
+
+/* API callback used for moving a block of rules in the VCAP */
+static void sparx5_vcap_move(struct net_device *ndev, struct vcap_admin *admin,
+ u32 addr, int offset, int count)
{
struct sparx5_port *port = netdev_priv(ndev);
- struct sparx5 *sparx5;
- int portno;
+ struct sparx5 *sparx5 = port->sparx5;
+ enum vcap_command cmd;
+ u16 mv_num_pos;
+ u16 mv_size;
- sparx5 = port->sparx5;
- portno = port->portno;
+ mv_size = count - 1;
+ if (offset > 0) {
+ mv_num_pos = offset - 1;
+ cmd = VCAP_CMD_MOVE_DOWN;
+ } else {
+ mv_num_pos = -offset - 1;
+ cmd = VCAP_CMD_MOVE_UP;
+ }
- /* For now we only consider IS2 */
- if (enable)
- spx5_wr(ANA_ACL_VCAP_S2_CFG_SEC_ENA_SET(0xf), sparx5,
- ANA_ACL_VCAP_S2_CFG(portno));
- else
- spx5_wr(ANA_ACL_VCAP_S2_CFG_SEC_ENA_SET(0), sparx5,
- ANA_ACL_VCAP_S2_CFG(portno));
- return 0;
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ case VCAP_TYPE_IS2:
+ sparx5_vcap_super_move(sparx5, addr, cmd, mv_num_pos, mv_size);
+ break;
+ case VCAP_TYPE_ES0:
+ sparx5_vcap_es0_move(sparx5, addr, cmd, mv_num_pos, mv_size);
+ break;
+ case VCAP_TYPE_ES2:
+ sparx5_vcap_es2_move(sparx5, addr, cmd, mv_num_pos, mv_size);
+ break;
+ default:
+ sparx5_vcap_type_err(sparx5, admin, __func__);
+ break;
+ }
}
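A worked example of the offset-to-command translation (illustrative numbers):

/*
 * Moving count = 5 rules by offset = +3 addresses:
 *   cmd = VCAP_CMD_MOVE_DOWN, mv_num_pos = 2, mv_size = 4
 * Moving by offset = -3 instead:
 *   cmd = VCAP_CMD_MOVE_UP,   mv_num_pos = 2, mv_size = 4
 */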
-/* API callback operations: only IS2 is supported for now */
static struct vcap_operations sparx5_vcap_ops = {
.validate_keyset = sparx5_vcap_validate_keyset,
.add_default_fields = sparx5_vcap_add_default_fields,
@@ -543,19 +1517,41 @@ static struct vcap_operations sparx5_vcap_ops = {
.update = sparx5_vcap_update,
.move = sparx5_vcap_move,
.port_info = sparx5_port_info,
- .enable = sparx5_vcap_enable,
};
-/* Enable lookups per port and set the keyset generation: only IS2 for now */
-static void sparx5_vcap_port_key_selection(struct sparx5 *sparx5,
- struct vcap_admin *admin)
+/* Enable IS0 lookups per port and set the keyset generation */
+static void sparx5_vcap_is0_port_key_selection(struct sparx5 *sparx5,
+ struct vcap_admin *admin)
+{
+ int portno, lookup;
+ u32 keysel;
+
+ keysel = VCAP_IS0_KEYSEL(false,
+ VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE,
+ VCAP_IS0_PS_ETYPE_NORMAL_5TUPLE_IP4,
+ VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE,
+ VCAP_IS0_PS_MPLS_FOLLOW_ETYPE,
+ VCAP_IS0_PS_MPLS_FOLLOW_ETYPE,
+ VCAP_IS0_PS_MLBS_FOLLOW_ETYPE);
+ for (lookup = 0; lookup < admin->lookups; ++lookup) {
+ for (portno = 0; portno < SPX5_PORTS; ++portno) {
+ spx5_wr(keysel, sparx5,
+ ANA_CL_ADV_CL_CFG(portno, lookup));
+ spx5_rmw(ANA_CL_ADV_CL_CFG_LOOKUP_ENA,
+ ANA_CL_ADV_CL_CFG_LOOKUP_ENA,
+ sparx5,
+ ANA_CL_ADV_CL_CFG(portno, lookup));
+ }
+ }
+}
+
+/* Enable IS2 lookups per port and set the keyset generation */
+static void sparx5_vcap_is2_port_key_selection(struct sparx5 *sparx5,
+ struct vcap_admin *admin)
{
int portno, lookup;
u32 keysel;
- /* all traffic types generate the MAC_ETYPE keyset for now in all
- * lookups on all ports
- */
keysel = VCAP_IS2_KEYSEL(true, VCAP_IS2_PS_NONETH_MAC_ETYPE,
VCAP_IS2_PS_IPV4_MC_IP4_TCP_UDP_OTHER,
VCAP_IS2_PS_IPV4_UC_IP4_TCP_UDP_OTHER,
@@ -568,19 +1564,107 @@ static void sparx5_vcap_port_key_selection(struct sparx5 *sparx5,
ANA_ACL_VCAP_S2_KEY_SEL(portno, lookup));
}
}
+ /* IS2 lookups are enabled in bits 0-3 */
+ for (portno = 0; portno < SPX5_PORTS; ++portno)
+ spx5_rmw(ANA_ACL_VCAP_S2_CFG_SEC_ENA_SET(0xf),
+ ANA_ACL_VCAP_S2_CFG_SEC_ENA,
+ sparx5,
+ ANA_ACL_VCAP_S2_CFG(portno));
}
-/* Disable lookups per port and set the keyset generation: only IS2 for now */
-static void sparx5_vcap_port_key_deselection(struct sparx5 *sparx5,
- struct vcap_admin *admin)
+/* Enable ES0 lookups per port and set the keyset generation */
+static void sparx5_vcap_es0_port_key_selection(struct sparx5 *sparx5,
+ struct vcap_admin *admin)
{
int portno;
+ u32 keysel;
+ keysel = VCAP_ES0_KEYSEL(VCAP_ES0_PS_FORCE_ISDX_LOOKUPS);
for (portno = 0; portno < SPX5_PORTS; ++portno)
- spx5_rmw(ANA_ACL_VCAP_S2_CFG_SEC_ENA_SET(0),
- ANA_ACL_VCAP_S2_CFG_SEC_ENA,
- sparx5,
- ANA_ACL_VCAP_S2_CFG(portno));
+ spx5_rmw(keysel, REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA,
+ sparx5, REW_RTAG_ETAG_CTRL(portno));
+
+ spx5_rmw(REW_ES0_CTRL_ES0_LU_ENA_SET(1), REW_ES0_CTRL_ES0_LU_ENA,
+ sparx5, REW_ES0_CTRL);
+}
+
+/* Enable ES2 lookups per port and set the keyset generation */
+static void sparx5_vcap_es2_port_key_selection(struct sparx5 *sparx5,
+ struct vcap_admin *admin)
+{
+ int portno, lookup;
+ u32 keysel;
+
+ keysel = VCAP_ES2_KEYSEL(true, VCAP_ES2_PS_ARP_MAC_ETYPE,
+ VCAP_ES2_PS_IPV4_IP4_TCP_UDP_OTHER,
+ VCAP_ES2_PS_IPV6_IP_7TUPLE);
+ for (lookup = 0; lookup < admin->lookups; ++lookup)
+ for (portno = 0; portno < SPX5_PORTS; ++portno)
+ spx5_wr(keysel, sparx5,
+ EACL_VCAP_ES2_KEY_SEL(portno, lookup));
+}
+
+/* Enable lookups per port and set the keyset generation */
+static void sparx5_vcap_port_key_selection(struct sparx5 *sparx5,
+ struct vcap_admin *admin)
+{
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ sparx5_vcap_is0_port_key_selection(sparx5, admin);
+ break;
+ case VCAP_TYPE_IS2:
+ sparx5_vcap_is2_port_key_selection(sparx5, admin);
+ break;
+ case VCAP_TYPE_ES0:
+ sparx5_vcap_es0_port_key_selection(sparx5, admin);
+ break;
+ case VCAP_TYPE_ES2:
+ sparx5_vcap_es2_port_key_selection(sparx5, admin);
+ break;
+ default:
+ sparx5_vcap_type_err(sparx5, admin, __func__);
+ break;
+ }
+}
+
+/* Disable lookups per port */
+static void sparx5_vcap_port_key_deselection(struct sparx5 *sparx5,
+ struct vcap_admin *admin)
+{
+ int portno, lookup;
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ for (lookup = 0; lookup < admin->lookups; ++lookup)
+ for (portno = 0; portno < SPX5_PORTS; ++portno)
+ spx5_rmw(ANA_CL_ADV_CL_CFG_LOOKUP_ENA_SET(0),
+ ANA_CL_ADV_CL_CFG_LOOKUP_ENA,
+ sparx5,
+ ANA_CL_ADV_CL_CFG(portno, lookup));
+ break;
+ case VCAP_TYPE_IS2:
+ for (portno = 0; portno < SPX5_PORTS; ++portno)
+ spx5_rmw(ANA_ACL_VCAP_S2_CFG_SEC_ENA_SET(0),
+ ANA_ACL_VCAP_S2_CFG_SEC_ENA,
+ sparx5,
+ ANA_ACL_VCAP_S2_CFG(portno));
+ break;
+ case VCAP_TYPE_ES0:
+ spx5_rmw(REW_ES0_CTRL_ES0_LU_ENA_SET(0),
+ REW_ES0_CTRL_ES0_LU_ENA, sparx5, REW_ES0_CTRL);
+ break;
+ case VCAP_TYPE_ES2:
+ for (lookup = 0; lookup < admin->lookups; ++lookup)
+ for (portno = 0; portno < SPX5_PORTS; ++portno)
+ spx5_rmw(EACL_VCAP_ES2_KEY_SEL_KEY_ENA_SET(0),
+ EACL_VCAP_ES2_KEY_SEL_KEY_ENA,
+ sparx5,
+ EACL_VCAP_ES2_KEY_SEL(portno, lookup));
+ break;
+ default:
+ sparx5_vcap_type_err(sparx5, admin, __func__);
+ break;
+ }
}
static void sparx5_vcap_admin_free(struct vcap_admin *admin)
@@ -610,6 +1694,7 @@ sparx5_vcap_admin_alloc(struct sparx5 *sparx5, struct vcap_control *ctrl,
mutex_init(&admin->lock);
admin->vtype = cfg->vtype;
admin->vinst = cfg->vinst;
+ admin->ingress = cfg->ingress;
admin->lookups = cfg->lookups;
admin->lookups_per_instance = cfg->lookups_per_instance;
admin->first_cid = cfg->first_cid;
@@ -633,22 +1718,55 @@ static void sparx5_vcap_block_alloc(struct sparx5 *sparx5,
struct vcap_admin *admin,
const struct sparx5_vcap_inst *cfg)
{
- int idx;
-
- /* Super VCAP block mapping and address configuration. Block 0
- * is assigned addresses 0 through 3071, block 1 is assigned
- * addresses 3072 though 6143, and so on.
- */
- for (idx = cfg->blockno; idx < cfg->blockno + cfg->blocks; ++idx) {
- spx5_wr(VCAP_SUPER_IDX_CORE_IDX_SET(idx), sparx5,
- VCAP_SUPER_IDX);
- spx5_wr(VCAP_SUPER_MAP_CORE_MAP_SET(cfg->map_id), sparx5,
- VCAP_SUPER_MAP);
- }
- admin->first_valid_addr = cfg->blockno * SUPER_VCAP_BLK_SIZE;
- admin->last_used_addr = admin->first_valid_addr +
- cfg->blocks * SUPER_VCAP_BLK_SIZE;
- admin->last_valid_addr = admin->last_used_addr - 1;
+ int idx, cores;
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ case VCAP_TYPE_IS2:
+ /* Super VCAP block mapping and address configuration. Block 0
+ * is assigned addresses 0 through 3071, block 1 is assigned
+ * addresses 3072 through 6143, and so on.
+ */
+ for (idx = cfg->blockno; idx < cfg->blockno + cfg->blocks;
+ ++idx) {
+ spx5_wr(VCAP_SUPER_IDX_CORE_IDX_SET(idx), sparx5,
+ VCAP_SUPER_IDX);
+ spx5_wr(VCAP_SUPER_MAP_CORE_MAP_SET(cfg->map_id),
+ sparx5, VCAP_SUPER_MAP);
+ }
+ admin->first_valid_addr = cfg->blockno * SUPER_VCAP_BLK_SIZE;
+ admin->last_used_addr = admin->first_valid_addr +
+ cfg->blocks * SUPER_VCAP_BLK_SIZE;
+ admin->last_valid_addr = admin->last_used_addr - 1;
+ break;
+ case VCAP_TYPE_ES0:
+ admin->first_valid_addr = 0;
+ admin->last_used_addr = cfg->count;
+ admin->last_valid_addr = cfg->count - 1;
+ cores = spx5_rd(sparx5, VCAP_ES0_CORE_CNT);
+ for (idx = 0; idx < cores; ++idx) {
+ spx5_wr(VCAP_ES0_IDX_CORE_IDX_SET(idx), sparx5,
+ VCAP_ES0_IDX);
+ spx5_wr(VCAP_ES0_MAP_CORE_MAP_SET(1), sparx5,
+ VCAP_ES0_MAP);
+ }
+ break;
+ case VCAP_TYPE_ES2:
+ admin->first_valid_addr = 0;
+ admin->last_used_addr = cfg->count;
+ admin->last_valid_addr = cfg->count - 1;
+ cores = spx5_rd(sparx5, VCAP_ES2_CORE_CNT);
+ for (idx = 0; idx < cores; ++idx) {
+ spx5_wr(VCAP_ES2_IDX_CORE_IDX_SET(idx), sparx5,
+ VCAP_ES2_IDX);
+ spx5_wr(VCAP_ES2_MAP_CORE_MAP_SET(1), sparx5,
+ VCAP_ES2_MAP);
+ }
+ break;
+ default:
+ sparx5_vcap_type_err(sparx5, admin, __func__);
+ break;
+ }
}
/* Allocate a vcap control and vcap instances and configure the system */
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.h b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.h
index 0a0f2412c980..3260ab5e3a82 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.h
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.h
@@ -16,6 +16,15 @@
#include "vcap_api.h"
#include "vcap_api_client.h"
+#define SPARX5_VCAP_CID_IS0_L0 VCAP_CID_INGRESS_L0 /* IS0/CLM lookup 0 */
+#define SPARX5_VCAP_CID_IS0_L1 VCAP_CID_INGRESS_L1 /* IS0/CLM lookup 1 */
+#define SPARX5_VCAP_CID_IS0_L2 VCAP_CID_INGRESS_L2 /* IS0/CLM lookup 2 */
+#define SPARX5_VCAP_CID_IS0_L3 VCAP_CID_INGRESS_L3 /* IS0/CLM lookup 3 */
+#define SPARX5_VCAP_CID_IS0_L4 VCAP_CID_INGRESS_L4 /* IS0/CLM lookup 4 */
+#define SPARX5_VCAP_CID_IS0_L5 VCAP_CID_INGRESS_L5 /* IS0/CLM lookup 5 */
+#define SPARX5_VCAP_CID_IS0_MAX \
+ (VCAP_CID_INGRESS_L5 + VCAP_CID_LOOKUP_SIZE - 1) /* IS0/CLM Max */
+
#define SPARX5_VCAP_CID_IS2_L0 VCAP_CID_INGRESS_STAGE2_L0 /* IS2 lookup 0 */
#define SPARX5_VCAP_CID_IS2_L1 VCAP_CID_INGRESS_STAGE2_L1 /* IS2 lookup 1 */
#define SPARX5_VCAP_CID_IS2_L2 VCAP_CID_INGRESS_STAGE2_L2 /* IS2 lookup 2 */
@@ -23,6 +32,63 @@
#define SPARX5_VCAP_CID_IS2_MAX \
(VCAP_CID_INGRESS_STAGE2_L3 + VCAP_CID_LOOKUP_SIZE - 1) /* IS2 Max */
+#define SPARX5_VCAP_CID_ES0_L0 VCAP_CID_EGRESS_L0 /* ES0 lookup 0 */
+#define SPARX5_VCAP_CID_ES0_MAX (VCAP_CID_EGRESS_L1 - 1) /* ES0 Max */
+
+#define SPARX5_VCAP_CID_ES2_L0 VCAP_CID_EGRESS_STAGE2_L0 /* ES2 lookup 0 */
+#define SPARX5_VCAP_CID_ES2_L1 VCAP_CID_EGRESS_STAGE2_L1 /* ES2 lookup 1 */
+#define SPARX5_VCAP_CID_ES2_MAX \
+ (VCAP_CID_EGRESS_STAGE2_L1 + VCAP_CID_LOOKUP_SIZE - 1) /* ES2 Max */
+
+/* IS0 port keyset selection control */
+
+/* IS0 ethernet, IPv4, IPv6 traffic type keyset generation */
+enum vcap_is0_port_sel_etype {
+ VCAP_IS0_PS_ETYPE_DEFAULT, /* None or follow depending on class */
+ VCAP_IS0_PS_ETYPE_MLL,
+ VCAP_IS0_PS_ETYPE_SGL_MLBS,
+ VCAP_IS0_PS_ETYPE_DBL_MLBS,
+ VCAP_IS0_PS_ETYPE_TRI_MLBS,
+ VCAP_IS0_PS_ETYPE_TRI_VID,
+ VCAP_IS0_PS_ETYPE_LL_FULL,
+ VCAP_IS0_PS_ETYPE_NORMAL_SRC,
+ VCAP_IS0_PS_ETYPE_NORMAL_DST,
+ VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE,
+ VCAP_IS0_PS_ETYPE_NORMAL_5TUPLE_IP4,
+ VCAP_IS0_PS_ETYPE_PURE_5TUPLE_IP4,
+ VCAP_IS0_PS_ETYPE_DBL_VID_IDX,
+ VCAP_IS0_PS_ETYPE_ETAG,
+ VCAP_IS0_PS_ETYPE_NO_LOOKUP,
+};
+
+/* IS0 MPLS traffic type keyset generation */
+enum vcap_is0_port_sel_mpls_uc_mc {
+ VCAP_IS0_PS_MPLS_FOLLOW_ETYPE,
+ VCAP_IS0_PS_MPLS_MLL,
+ VCAP_IS0_PS_MPLS_SGL_MLBS,
+ VCAP_IS0_PS_MPLS_DBL_MLBS,
+ VCAP_IS0_PS_MPLS_TRI_MLBS,
+ VCAP_IS0_PS_MPLS_TRI_VID,
+ VCAP_IS0_PS_MPLS_LL_FULL,
+ VCAP_IS0_PS_MPLS_NORMAL_SRC,
+ VCAP_IS0_PS_MPLS_NORMAL_DST,
+ VCAP_IS0_PS_MPLS_NORMAL_7TUPLE,
+ VCAP_IS0_PS_MPLS_NORMAL_5TUPLE_IP4,
+ VCAP_IS0_PS_MPLS_PURE_5TUPLE_IP4,
+ VCAP_IS0_PS_MPLS_DBL_VID_IDX,
+ VCAP_IS0_PS_MPLS_ETAG,
+ VCAP_IS0_PS_MPLS_NO_LOOKUP,
+};
+
+/* IS0 MLBS traffic type keyset generation */
+enum vcap_is0_port_sel_mlbs {
+ VCAP_IS0_PS_MLBS_FOLLOW_ETYPE,
+ VCAP_IS0_PS_MLBS_SGL_MLBS,
+ VCAP_IS0_PS_MLBS_DBL_MLBS,
+ VCAP_IS0_PS_MLBS_TRI_MLBS,
+ VCAP_IS0_PS_MLBS_NO_LOOKUP = 17,
+};
+
/* IS2 port keyset selection control */
/* IS2 non-ethernet traffic type keyset generation */
@@ -71,6 +137,57 @@ enum vcap_is2_port_sel_arp {
VCAP_IS2_PS_ARP_ARP,
};
+/* ES0 port keyset selection control */
+
+/* ES0 Egress port traffic type classification */
+enum vcap_es0_port_sel {
+ VCAP_ES0_PS_NORMAL_SELECTION,
+ VCAP_ES0_PS_FORCE_ISDX_LOOKUPS,
+ VCAP_ES0_PS_FORCE_VID_LOOKUPS,
+ VCAP_ES0_PS_RESERVED,
+};
+
+/* ES2 port keyset selection control */
+
+/* ES2 IPv4 traffic type keyset generation */
+enum vcap_es2_port_sel_ipv4 {
+ VCAP_ES2_PS_IPV4_MAC_ETYPE,
+ VCAP_ES2_PS_IPV4_IP_7TUPLE,
+ VCAP_ES2_PS_IPV4_IP4_TCP_UDP_VID,
+ VCAP_ES2_PS_IPV4_IP4_TCP_UDP_OTHER,
+ VCAP_ES2_PS_IPV4_IP4_VID,
+ VCAP_ES2_PS_IPV4_IP4_OTHER,
+};
+
+/* ES2 IPv6 traffic type keyset generation */
+enum vcap_es2_port_sel_ipv6 {
+ VCAP_ES2_PS_IPV6_MAC_ETYPE,
+ VCAP_ES2_PS_IPV6_IP_7TUPLE,
+ VCAP_ES2_PS_IPV6_IP_7TUPLE_VID,
+ VCAP_ES2_PS_IPV6_IP_7TUPLE_STD,
+ VCAP_ES2_PS_IPV6_IP6_VID,
+ VCAP_ES2_PS_IPV6_IP6_STD,
+ VCAP_ES2_PS_IPV6_IP4_DOWNGRADE,
+};
+
+/* ES2 ARP traffic type keyset generation */
+enum vcap_es2_port_sel_arp {
+ VCAP_ES2_PS_ARP_MAC_ETYPE,
+ VCAP_ES2_PS_ARP_ARP,
+};
+
+/* Selects TPID for ES0 matching */
+enum SPX5_TPID_SEL {
+ SPX5_TPID_SEL_UNTAGGED,
+ SPX5_TPID_SEL_8100,
+ SPX5_TPID_SEL_UNUSED_0,
+ SPX5_TPID_SEL_UNUSED_1,
+ SPX5_TPID_SEL_88A8,
+ SPX5_TPID_SEL_TPIDCFG_1,
+ SPX5_TPID_SEL_TPIDCFG_2,
+ SPX5_TPID_SEL_TPIDCFG_3,
+};
+
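Cross-referencing the ES0 default fields added earlier in this patch (an observation, not new code):

/*
 * sparx5_vcap_es0_add_default_fields() uses SPX5_TPID_SEL_UNTAGGED
 * (value 0) as the 8021Q_TPID key, so an ES0 rule without an explicit
 * VLAN key matches untagged frames only.
 */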
/* Get the port keyset for the vcap lookup */
int sparx5_vcap_get_port_keyset(struct net_device *ndev,
struct vcap_admin *admin,
@@ -78,4 +195,7 @@ int sparx5_vcap_get_port_keyset(struct net_device *ndev,
u16 l3_proto,
struct vcap_keyset_list *kslist);
+/* Check if the ethertype is supported by the vcap port classification */
+bool sparx5_vcap_is_known_etype(struct vcap_admin *admin, u16 etype);
+
#endif /* __SPARX5_VCAP_IMPL_H__ */
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_vlan.c b/drivers/net/ethernet/microchip/sparx5/sparx5_vlan.c
index 34f954bbf815..ac001ae59a38 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_vlan.c
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_vlan.c
@@ -219,8 +219,8 @@ void sparx5_vlan_port_apply(struct sparx5 *sparx5,
spx5_wr(val, sparx5,
ANA_CL_VLAN_FILTER_CTRL(port->portno, 0));
- /* Egress configuration (REW_TAG_CFG): VLAN tag type to 8021Q */
- val = REW_TAG_CTRL_TAG_TPID_CFG_SET(0);
+ /* Egress configuration (REW_TAG_CFG): VLAN tag selected via IFH */
+ val = REW_TAG_CTRL_TAG_TPID_CFG_SET(5);
if (port->vlan_aware) {
if (port->vid)
/* Tag all frames except when VID == DEFAULT_VLAN */
diff --git a/drivers/net/ethernet/microchip/vcap/Makefile b/drivers/net/ethernet/microchip/vcap/Makefile
index 0adb8f5a8735..c86f20e6491f 100644
--- a/drivers/net/ethernet/microchip/vcap/Makefile
+++ b/drivers/net/ethernet/microchip/vcap/Makefile
@@ -7,4 +7,4 @@ obj-$(CONFIG_VCAP) += vcap.o
obj-$(CONFIG_VCAP_KUNIT_TEST) += vcap_model_kunit.o
vcap-$(CONFIG_DEBUG_FS) += vcap_api_debugfs.o
-vcap-y += vcap_api.o
+vcap-y += vcap_api.o vcap_tc.o
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_ag_api.h b/drivers/net/ethernet/microchip/vcap/vcap_ag_api.h
index 84de2aee4169..0844fcaeee68 100644
--- a/drivers/net/ethernet/microchip/vcap/vcap_ag_api.h
+++ b/drivers/net/ethernet/microchip/vcap/vcap_ag_api.h
@@ -1,16 +1,17 @@
/* SPDX-License-Identifier: BSD-3-Clause */
-/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries.
+/* Copyright (C) 2023 Microchip Technology Inc. and its subsidiaries.
* Microchip VCAP API
*/
-/* This file is autogenerated by cml-utils 2022-10-13 10:04:41 +0200.
- * Commit ID: fd7cafd175899f0672c73afb3a30fc872500ae86
+/* This file is autogenerated by cml-utils 2023-02-10 11:15:56 +0100.
+ * Commit ID: c30fb4bf0281cd4a7133bdab6682f9e43c872ada
*/
#ifndef __VCAP_AG_API__
#define __VCAP_AG_API__
enum vcap_type {
+ VCAP_TYPE_ES0,
VCAP_TYPE_ES2,
VCAP_TYPE_IS0,
VCAP_TYPE_IS2,
@@ -20,27 +21,25 @@ enum vcap_type {
/* Keyfieldset names with origin information */
enum vcap_keyfield_set {
VCAP_KFS_NO_VALUE, /* initial value */
- VCAP_KFS_ARP, /* sparx5 is2 X6, sparx5 es2 X6 */
+ VCAP_KFS_ARP, /* sparx5 is2 X6, sparx5 es2 X6, lan966x is2 X2 */
VCAP_KFS_ETAG, /* sparx5 is0 X2 */
- VCAP_KFS_IP4_OTHER, /* sparx5 is2 X6, sparx5 es2 X6 */
- VCAP_KFS_IP4_TCP_UDP, /* sparx5 is2 X6, sparx5 es2 X6 */
+ VCAP_KFS_IP4_OTHER, /* sparx5 is2 X6, sparx5 es2 X6, lan966x is2 X2 */
+ VCAP_KFS_IP4_TCP_UDP, /* sparx5 is2 X6, sparx5 es2 X6, lan966x is2 X2 */
VCAP_KFS_IP4_VID, /* sparx5 es2 X3 */
- VCAP_KFS_IP6_STD, /* sparx5 is2 X6 */
- VCAP_KFS_IP6_VID, /* sparx5 is2 X6, sparx5 es2 X6 */
+ VCAP_KFS_IP6_OTHER, /* lan966x is2 X4 */
+ VCAP_KFS_IP6_STD, /* sparx5 is2 X6, sparx5 es2 X6, lan966x is2 X2 */
+ VCAP_KFS_IP6_TCP_UDP, /* lan966x is2 X4 */
+ VCAP_KFS_IP6_VID, /* sparx5 es2 X6 */
VCAP_KFS_IP_7TUPLE, /* sparx5 is2 X12, sparx5 es2 X12 */
+ VCAP_KFS_ISDX, /* sparx5 es0 X1 */
VCAP_KFS_LL_FULL, /* sparx5 is0 X6 */
- VCAP_KFS_MAC_ETYPE, /* sparx5 is2 X6, sparx5 es2 X6 */
- VCAP_KFS_MLL, /* sparx5 is0 X3 */
- VCAP_KFS_NORMAL, /* sparx5 is0 X6 */
- VCAP_KFS_NORMAL_5TUPLE_IP4, /* sparx5 is0 X6 */
- VCAP_KFS_NORMAL_7TUPLE, /* sparx5 is0 X12 */
- VCAP_KFS_PURE_5TUPLE_IP4, /* sparx5 is0 X3 */
- VCAP_KFS_TRI_VID, /* sparx5 is0 X2 */
+ VCAP_KFS_MAC_ETYPE, /* sparx5 is2 X6, sparx5 es2 X6, lan966x is2 X2 */
VCAP_KFS_MAC_LLC, /* lan966x is2 X2 */
VCAP_KFS_MAC_SNAP, /* lan966x is2 X2 */
+ VCAP_KFS_NORMAL_5TUPLE_IP4, /* sparx5 is0 X6 */
+ VCAP_KFS_NORMAL_7TUPLE, /* sparx5 is0 X12 */
VCAP_KFS_OAM, /* lan966x is2 X2 */
- VCAP_KFS_IP6_TCP_UDP, /* lan966x is2 X4 */
- VCAP_KFS_IP6_OTHER, /* lan966x is2 X4 */
+ VCAP_KFS_PURE_5TUPLE_IP4, /* sparx5 is0 X3 */
VCAP_KFS_SMAC_SIP4, /* lan966x is2 X1 */
VCAP_KFS_SMAC_SIP6, /* lan966x is2 X2 */
};
@@ -78,6 +77,8 @@ enum vcap_keyfield_set {
* Third PCP in multiple vlan tags (not always available)
* VCAP_KF_8021Q_PCP_CLS: W3, sparx5: is2/es2, lan966x: is2
* Classified PCP
+ * VCAP_KF_8021Q_TPID: W3, sparx5: es0
+ * TPID for outer tag: 0: Customer TPID. 1: Service TPID (88A8 or programmable)
* VCAP_KF_8021Q_TPID0: W3, sparx5: is0
 * First TPID in multiple vlan tags (outer tag or default port tag)
* VCAP_KF_8021Q_TPID1: W3, sparx5: is0
@@ -90,7 +91,8 @@ enum vcap_keyfield_set {
* Second VID in multiple vlan tags (inner tag)
* VCAP_KF_8021Q_VID2: W12, sparx5: is0
* Third VID in multiple vlan tags (not always available)
- * VCAP_KF_8021Q_VID_CLS: W13, sparx5: is2/es2, lan966x is2 W12
+ * VCAP_KF_8021Q_VID_CLS: sparx5 is2 W13, sparx5 es0 W13, sparx5 es2 W13,
+ * lan966x is2 W12
* Classified VID
* VCAP_KF_8021Q_VLAN_TAGGED_IS: W1, sparx5: is2/es2, lan966x: is2
* Sparx5: Set if frame was received with a VLAN tag, LAN966x: Set if frame has
@@ -104,7 +106,7 @@ enum vcap_keyfield_set {
* Set if hardware address is Ethernet
* VCAP_KF_ARP_LEN_OK_IS: W1, sparx5: is2/es2, lan966x: is2
* Set if hardware address length = 6 (Ethernet) and IP address length = 4 (IP).
- * VCAP_KF_ARP_OPCODE: W2, sparx5: is2/es2, lan966x: i2
+ * VCAP_KF_ARP_OPCODE: W2, sparx5: is2/es2, lan966x: is2
* ARP opcode
* VCAP_KF_ARP_OPCODE_UNKNOWN_IS: W1, sparx5: is2/es2, lan966x: is2
* Set if not one of the codes defined in VCAP_KF_ARP_OPCODE
@@ -114,25 +116,25 @@ enum vcap_keyfield_set {
* Sender Hardware Address = SMAC (ARP)
* VCAP_KF_ARP_TGT_MATCH_IS: W1, sparx5: is2/es2, lan966x: is2
* Target Hardware Address = SMAC (RARP)
- * VCAP_KF_COSID_CLS: W3, sparx5: es2
+ * VCAP_KF_COSID_CLS: W3, sparx5: es0/es2
* Class of service
- * VCAP_KF_DST_ENTRY: W1, sparx5: is0
- * Selects whether the frame’s destination or source information is used for
- * fields L2_SMAC and L3_IP4_SIP
* VCAP_KF_ES0_ISDX_KEY_ENA: W1, sparx5: es2
* The value taken from the IFH .FWD.ES0_ISDX_KEY_ENA
* VCAP_KF_ETYPE: W16, sparx5: is0/is2/es2, lan966x: is2
* Ethernet type
* VCAP_KF_ETYPE_LEN_IS: W1, sparx5: is0/is2/es2
* Set if frame has EtherType >= 0x600
- * VCAP_KF_ETYPE_MPLS: W2, sparx5: is0
- * Type of MPLS Ethertype (or not)
+ * VCAP_KF_HOST_MATCH: W1, lan966x: is2
+ * The action from the SMAC_SIP4 or SMAC_SIP6 lookups. Used for IP source
+ * guarding.
* VCAP_KF_IF_EGR_PORT_MASK: W32, sparx5: es2
* Egress port mask, one bit per port
* VCAP_KF_IF_EGR_PORT_MASK_RNG: W3, sparx5: es2
* Select which 32 port group is available in IF_EGR_PORT (or virtual ports or
* CPU queue)
- * VCAP_KF_IF_IGR_PORT: sparx5 is0 W7, sparx5 es2 W9
+ * VCAP_KF_IF_EGR_PORT_NO: W7, sparx5: es0
+ * Egress port number
+ * VCAP_KF_IF_IGR_PORT: sparx5 is0 W7, sparx5 es2 W9, lan966x is2 W4
* Sparx5: Logical ingress port number retrieved from
 * ANA_CL::PORT_ID_CFG.LPORT_NUM or ERLEG, LAN966x: ingress port number
* VCAP_KF_IF_IGR_PORT_MASK: sparx5 is0 W65, sparx5 is2 W32, sparx5 is2 W65,
@@ -152,45 +154,61 @@ enum vcap_keyfield_set {
* VCAP_KF_IP4_IS: W1, sparx5: is0/is2/es2, lan966x: is2
* Set if frame has EtherType = 0x800 and IP version = 4
* VCAP_KF_IP_MC_IS: W1, sparx5: is0
- * Set if frame is IPv4 frame and frame’s destination MAC address is an IPv4
- * multicast address (0x01005E0 /25). Set if frame is IPv6 frame and frame’s
+ * Set if frame is IPv4 frame and frame's destination MAC address is an IPv4
+ * multicast address (0x01005E0 /25). Set if frame is IPv6 frame and frame's
* destination MAC address is an IPv6 multicast address (0x3333/16).
* VCAP_KF_IP_PAYLOAD_5TUPLE: W32, sparx5: is0
* Payload bytes after IP header
* VCAP_KF_IP_SNAP_IS: W1, sparx5: is0
* Set if frame is IPv4, IPv6, or SNAP frame
- * VCAP_KF_ISDX_CLS: W12, sparx5: is2/es2
+ * VCAP_KF_ISDX_CLS: W12, sparx5: is2/es0/es2
* Classified ISDX
- * VCAP_KF_ISDX_GT0_IS: W1, sparx5: is2/es2, lan966x: is2
+ * VCAP_KF_ISDX_GT0_IS: W1, sparx5: is2/es0/es2, lan966x: is2
* Set if classified ISDX > 0
* VCAP_KF_L2_BC_IS: W1, sparx5: is0/is2/es2, lan966x: is2
- * Set if frame’s destination MAC address is the broadcast address
+ * Set if frame's destination MAC address is the broadcast address
* (FF-FF-FF-FF-FF-FF).
* VCAP_KF_L2_DMAC: W48, sparx5: is0/is2/es2, lan966x: is2
* Destination MAC address
+ * VCAP_KF_L2_FRM_TYPE: W4, lan966x: is2
+ * Frame subtype for specific EtherTypes (MRP, DLR)
* VCAP_KF_L2_FWD_IS: W1, sparx5: is2
* Set if the frame is allowed to be forwarded to front ports
- * VCAP_KF_L2_MC_IS: W1, sparx5: is0/is2/es2, lan9966x is2
- * Set if frame’s destination MAC address is a multicast address (bit 40 = 1).
+ * VCAP_KF_L2_LLC: W40, lan966x: is2
+ * LLC header and data after up to two VLAN tags and the type/length field
+ * VCAP_KF_L2_MC_IS: W1, sparx5: is0/is2/es2, lan966x: is2
+ * Set if frame's destination MAC address is a multicast address (bit 40 = 1).
+ * VCAP_KF_L2_PAYLOAD0: W16, lan966x: is2
+ * Payload bytes 0-1 after the frame's EtherType
+ * VCAP_KF_L2_PAYLOAD1: W8, lan966x: is2
+ * Payload byte 4 after the frame's EtherType. This is specifically for PTP
+ * frames.
+ * VCAP_KF_L2_PAYLOAD2: W3, lan966x: is2
+ * Bits 7, 2, and 1 from payload byte 6 after the frame's EtherType. This is
+ * specifically for PTP frames.
* VCAP_KF_L2_PAYLOAD_ETYPE: W64, sparx5: is2/es2
* Byte 0-7 of L2 payload after Type/Len field and overloading for OAM
- * VCAP_KF_L2_SMAC: W48, sparx5: is0/is2/es2, lan966x is2
+ * VCAP_KF_L2_SMAC: W48, sparx5: is0/is2/es2, lan966x: is2
* Source MAC address
+ * VCAP_KF_L2_SNAP: W40, lan966x: is2
+ * SNAP header after LLC header (AA-AA-03)
* VCAP_KF_L3_DIP_EQ_SIP_IS: W1, sparx5: is2/es2, lan966x: is2
* Set if Src IP matches Dst IP address
- * VCAP_KF_L3_DMAC_DIP_MATCH: W1, sparx5: is2
- * Match found in DIP security lookup in ANA_L3
- * VCAP_KF_L3_DPL_CLS: W1, sparx5: es2
+ * VCAP_KF_L3_DPL_CLS: W1, sparx5: es0/es2
* The frames drop precedence level
* VCAP_KF_L3_DSCP: W6, sparx5: is0
- * Frame’s DSCP value
+ * Frame's DSCP value
* VCAP_KF_L3_DST_IS: W1, sparx5: is2
* Set if lookup is done for egress router leg
+ * VCAP_KF_L3_FRAGMENT: W1, lan966x: is2
+ * Set if IPv4 frame is fragmented
* VCAP_KF_L3_FRAGMENT_TYPE: W2, sparx5: is0/is2/es2
* L3 Fragmentation type (none, initial, suspicious, valid follow up)
* VCAP_KF_L3_FRAG_INVLD_L4_LEN: W1, sparx5: is0/is2
 * Set if frame's L4 length is less than
 * ANA_CL:COMMON:CLM_FRAGMENT_CFG.L4_MIN_LEN
+ * VCAP_KF_L3_FRAG_OFS_GT0: W1, lan966x: is2
+ * Set if IPv4 frame is fragmented and it is not the first fragment
* VCAP_KF_L3_IP4_DIP: W32, sparx5: is0/is2/es2, lan966x: is2
* Destination IPv4 Address
* VCAP_KF_L3_IP4_SIP: W32, sparx5: is0/is2/es2, lan966x: is2
@@ -205,36 +223,38 @@ enum vcap_keyfield_set {
* IPv4 frames: IP protocol. IPv6 frames: Next header, same as for IPV4
* VCAP_KF_L3_OPTIONS_IS: W1, sparx5: is0/is2/es2, lan966x: is2
* Set if IPv4 frame contains options (IP len > 5)
- * VCAP_KF_L3_PAYLOAD: sparx5 is2 W96, sparx5 is2 W40, sparx5 es2 W96,
- * lan966x is2 W56
+ * VCAP_KF_L3_PAYLOAD: sparx5 is2 W96, sparx5 is2 W40, sparx5 es2 W96, sparx5
+ * es2 W40, lan966x is2 W56
* Sparx5: Payload bytes after IP header. IPv4: IPv4 options are not parsed so
* payload is always taken 20 bytes after the start of the IPv4 header, LAN966x:
* Bytes 0-6 after IP header
* VCAP_KF_L3_RT_IS: W1, sparx5: is2/es2
* Set if frame has hit a router leg
- * VCAP_KF_L3_SMAC_SIP_MATCH: W1, sparx5: is2
- * Match found in SIP security lookup in ANA_L3
* VCAP_KF_L3_TOS: W8, sparx5: is2/es2, lan966x: is2
* Sparx5: Frame's IPv4/IPv6 DSCP and ECN fields, LAN966x: IP TOS field
* VCAP_KF_L3_TTL_GT0: W1, sparx5: is2/es2, lan966x: is2
* Set if IPv4 TTL / IPv6 hop limit is greater than 0
+ * VCAP_KF_L4_1588_DOM: W8, lan966x: is2
+ * PTP over UDP: domainNumber
+ * VCAP_KF_L4_1588_VER: W4, lan966x: is2
+ * PTP over UDP: version
* VCAP_KF_L4_ACK: W1, sparx5: is2/es2, lan966x: is2
* Sparx5 and LAN966x: TCP flag ACK, LAN966x only: PTP over UDP: flagField bit 2
* (unicastFlag)
* VCAP_KF_L4_DPORT: W16, sparx5: is2/es2, lan966x: is2
* Sparx5: TCP/UDP destination port. Overloading for IP_7TUPLE: Non-TCP/UDP IP
* frames: L4_DPORT = L3_IP_PROTO, LAN966x: TCP/UDP destination port
- * VCAP_KF_L4_FIN: W1, sparx5: is2/es2
+ * VCAP_KF_L4_FIN: W1, sparx5: is2/es2, lan966x: is2
* TCP flag FIN, LAN966x: TCP flag FIN, and for PTP over UDP: messageType bit 1
* VCAP_KF_L4_PAYLOAD: W64, sparx5: is2/es2
* Payload bytes after TCP/UDP header Overloading for IP_7TUPLE: Non TCP/UDP
- * frames: Payload bytes 0–7 after IP header. IPv4 options are not parsed so
+ * frames: Payload bytes 0-7 after IP header. IPv4 options are not parsed so
* payload is always taken 20 bytes after the start of the IPv4 header for non
* TCP/UDP IPv4 frames
* VCAP_KF_L4_PSH: W1, sparx5: is2/es2, lan966x: is2
* Sparx5: TCP flag PSH, LAN966x: TCP: TCP flag PSH. PTP over UDP: flagField bit
* 1 (twoStepFlag)
- * VCAP_KF_L4_RNG: sparx5 is0 W8, sparx5 is2 W16, sparx5 es2 W16, lan966x: is2
+ * VCAP_KF_L4_RNG: sparx5 is0 W8, sparx5 is2 W16, sparx5 es2 W16, lan966x is2 W8
* Range checker bitmask (one for each range checker). Input into range checkers
* is taken from classified results (VID, DSCP) and frame (SPORT, DPORT, ETYPE,
* outer VID, inner VID)
@@ -263,58 +283,35 @@ enum vcap_keyfield_set {
* Select the mode of the Generic Index
* VCAP_KF_LOOKUP_PAG: W8, sparx5: is2, lan966x: is2
* Classified Policy Association Group: chains rules from IS1/CLM to IS2
+ * VCAP_KF_MIRROR_PROBE: W2, sparx5: es2
+ * Identifies frame copies generated as a result of mirroring
* VCAP_KF_OAM_CCM_CNTS_EQ0: W1, sparx5: is2/es2, lan966x: is2
* Dual-ended loss measurement counters in CCM frames are all zero
- * VCAP_KF_OAM_MEL_FLAGS: W7, sparx5: is0, lan966x: is2
+ * VCAP_KF_OAM_DETECTED: W1, lan966x: is2
+ * This is missing in the datasheet, but present in the OAM keyset in XML
+ * VCAP_KF_OAM_FLAGS: W8, lan966x: is2
+ * Frame's OAM flags
+ * VCAP_KF_OAM_MEL_FLAGS: W7, lan966x: is2
* Encoding of MD level/MEG level (MEL)
- * VCAP_KF_OAM_Y1731_IS: W1, sparx5: is0/is2/es2, lan966x: is2
- * Set if frame’s EtherType = 0x8902
- * VCAP_KF_PROT_ACTIVE: W1, sparx5: es2
+ * VCAP_KF_OAM_MEPID: W16, lan966x: is2
+ * CCM frame's OAM MEP ID
+ * VCAP_KF_OAM_OPCODE: W8, lan966x: is2
+ * Frame's OAM opcode
+ * VCAP_KF_OAM_VER: W5, lan966x: is2
+ * Frame's OAM version
+ * VCAP_KF_OAM_Y1731_IS: W1, sparx5: is2/es2, lan966x: is2
+ * Set if frame's EtherType = 0x8902
+ * VCAP_KF_PROT_ACTIVE: W1, sparx5: es0/es2
* Protection is active
* VCAP_KF_TCP_IS: W1, sparx5: is0/is2/es2, lan966x: is2
* Set if frame is IPv4 TCP frame (IP protocol = 6) or IPv6 TCP frames (Next
* header = 6)
- * VCAP_KF_TCP_UDP_IS: W1, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_TCP_UDP_IS: W1, sparx5: is0/is2/es2
* Set if frame is IPv4/IPv6 TCP or UDP frame (IP protocol/next header equals 6
* or 17)
* VCAP_KF_TYPE: sparx5 is0 W2, sparx5 is0 W1, sparx5 is2 W4, sparx5 is2 W2,
- * sparx5 es2 W3, lan966x: is2
+ * sparx5 es0 W1, sparx5 es2 W3, lan966x is2 W4, lan966x is2 W2
* Keyset type id - set by the API
- * VCAP_KF_HOST_MATCH: W1, lan966x: is2
- * The action from the SMAC_SIP4 or SMAC_SIP6 lookups. Used for IP source
- * guarding.
- * VCAP_KF_L2_FRM_TYPE: W4, lan966x: is2
- * Frame subtype for specific EtherTypes (MRP, DLR)
- * VCAP_KF_L2_PAYLOAD0: W16, lan966x: is2
- * Payload bytes 0-1 after the frame’s EtherType
- * VCAP_KF_L2_PAYLOAD1: W8, lan966x: is2
- * Payload byte 4 after the frame’s EtherType. This is specifically for PTP
- * frames.
- * VCAP_KF_L2_PAYLOAD2: W3, lan966x: is2
- * Bits 7, 2, and 1 from payload byte 6 after the frame’s EtherType. This is
- * specifically for PTP frames.
- * VCAP_KF_L2_LLC: W40, lan966x: is2
- * LLC header and data after up to two VLAN tags and the type/length field
- * VCAP_KF_L3_FRAGMENT: W1, lan966x: is2
- * Set if IPv4 frame is fragmented
- * VCAP_KF_L3_FRAG_OFS_GT0: W1, lan966x: is2
- * Set if IPv4 frame is fragmented and it is not the first fragment
- * VCAP_KF_L2_SNAP: W40, lan966x: is2
- * SNAP header after LLC header (AA-AA-03)
- * VCAP_KF_L4_1588_DOM: W8, lan966x: is2
- * PTP over UDP: domainNumber
- * VCAP_KF_L4_1588_VER: W4, lan966x: is2
- * PTP over UDP: version
- * VCAP_KF_OAM_MEPID: W16, lan966x: is2
- * CCM frame’s OAM MEP ID
- * VCAP_KF_OAM_OPCODE: W8, lan966x: is2
- * Frame’s OAM opcode
- * VCAP_KF_OAM_VER: W5, lan966x: is2
- * Frame’s OAM version
- * VCAP_KF_OAM_FLAGS: W8, lan966x: is2
- * Frame’s OAM flags
- * VCAP_KF_OAM_DETECTED: W1, lan966x: is2
- * This is missing in the datasheet, but present in the OAM keyset in XML
*/
/* Keyfield names */
@@ -334,6 +331,7 @@ enum vcap_key_field {
VCAP_KF_8021Q_PCP1,
VCAP_KF_8021Q_PCP2,
VCAP_KF_8021Q_PCP_CLS,
+ VCAP_KF_8021Q_TPID,
VCAP_KF_8021Q_TPID0,
VCAP_KF_8021Q_TPID1,
VCAP_KF_8021Q_TPID2,
@@ -352,13 +350,13 @@ enum vcap_key_field {
VCAP_KF_ARP_SENDER_MATCH_IS,
VCAP_KF_ARP_TGT_MATCH_IS,
VCAP_KF_COSID_CLS,
- VCAP_KF_DST_ENTRY,
VCAP_KF_ES0_ISDX_KEY_ENA,
VCAP_KF_ETYPE,
VCAP_KF_ETYPE_LEN_IS,
- VCAP_KF_ETYPE_MPLS,
+ VCAP_KF_HOST_MATCH,
VCAP_KF_IF_EGR_PORT_MASK,
VCAP_KF_IF_EGR_PORT_MASK_RNG,
+ VCAP_KF_IF_EGR_PORT_NO,
VCAP_KF_IF_IGR_PORT,
VCAP_KF_IF_IGR_PORT_MASK,
VCAP_KF_IF_IGR_PORT_MASK_L3,
@@ -373,17 +371,24 @@ enum vcap_key_field {
VCAP_KF_ISDX_GT0_IS,
VCAP_KF_L2_BC_IS,
VCAP_KF_L2_DMAC,
+ VCAP_KF_L2_FRM_TYPE,
VCAP_KF_L2_FWD_IS,
+ VCAP_KF_L2_LLC,
VCAP_KF_L2_MC_IS,
+ VCAP_KF_L2_PAYLOAD0,
+ VCAP_KF_L2_PAYLOAD1,
+ VCAP_KF_L2_PAYLOAD2,
VCAP_KF_L2_PAYLOAD_ETYPE,
VCAP_KF_L2_SMAC,
+ VCAP_KF_L2_SNAP,
VCAP_KF_L3_DIP_EQ_SIP_IS,
- VCAP_KF_L3_DMAC_DIP_MATCH,
VCAP_KF_L3_DPL_CLS,
VCAP_KF_L3_DSCP,
VCAP_KF_L3_DST_IS,
+ VCAP_KF_L3_FRAGMENT,
VCAP_KF_L3_FRAGMENT_TYPE,
VCAP_KF_L3_FRAG_INVLD_L4_LEN,
+ VCAP_KF_L3_FRAG_OFS_GT0,
VCAP_KF_L3_IP4_DIP,
VCAP_KF_L3_IP4_SIP,
VCAP_KF_L3_IP6_DIP,
@@ -392,9 +397,10 @@ enum vcap_key_field {
VCAP_KF_L3_OPTIONS_IS,
VCAP_KF_L3_PAYLOAD,
VCAP_KF_L3_RT_IS,
- VCAP_KF_L3_SMAC_SIP_MATCH,
VCAP_KF_L3_TOS,
VCAP_KF_L3_TTL_GT0,
+ VCAP_KF_L4_1588_DOM,
+ VCAP_KF_L4_1588_VER,
VCAP_KF_L4_ACK,
VCAP_KF_L4_DPORT,
VCAP_KF_L4_FIN,
@@ -411,30 +417,19 @@ enum vcap_key_field {
VCAP_KF_LOOKUP_GEN_IDX,
VCAP_KF_LOOKUP_GEN_IDX_SEL,
VCAP_KF_LOOKUP_PAG,
- VCAP_KF_MIRROR_ENA,
+ VCAP_KF_MIRROR_PROBE,
VCAP_KF_OAM_CCM_CNTS_EQ0,
+ VCAP_KF_OAM_DETECTED,
+ VCAP_KF_OAM_FLAGS,
VCAP_KF_OAM_MEL_FLAGS,
+ VCAP_KF_OAM_MEPID,
+ VCAP_KF_OAM_OPCODE,
+ VCAP_KF_OAM_VER,
VCAP_KF_OAM_Y1731_IS,
VCAP_KF_PROT_ACTIVE,
VCAP_KF_TCP_IS,
VCAP_KF_TCP_UDP_IS,
VCAP_KF_TYPE,
- VCAP_KF_HOST_MATCH,
- VCAP_KF_L2_FRM_TYPE,
- VCAP_KF_L2_PAYLOAD0,
- VCAP_KF_L2_PAYLOAD1,
- VCAP_KF_L2_PAYLOAD2,
- VCAP_KF_L2_LLC,
- VCAP_KF_L3_FRAGMENT,
- VCAP_KF_L3_FRAG_OFS_GT0,
- VCAP_KF_L2_SNAP,
- VCAP_KF_L4_1588_DOM,
- VCAP_KF_L4_1588_VER,
- VCAP_KF_OAM_MEPID,
- VCAP_KF_OAM_OPCODE,
- VCAP_KF_OAM_VER,
- VCAP_KF_OAM_FLAGS,
- VCAP_KF_OAM_DETECTED,
};
/* Actionset names with origin information */
@@ -443,14 +438,17 @@ enum vcap_actionfield_set {
VCAP_AFS_BASE_TYPE, /* sparx5 is2 X3, sparx5 es2 X3, lan966x is2 X2 */
VCAP_AFS_CLASSIFICATION, /* sparx5 is0 X2 */
VCAP_AFS_CLASS_REDUCED, /* sparx5 is0 X1 */
+ VCAP_AFS_ES0, /* sparx5 es0 X1 */
VCAP_AFS_FULL, /* sparx5 is0 X3 */
- VCAP_AFS_MLBS, /* sparx5 is0 X2 */
- VCAP_AFS_MLBS_REDUCED, /* sparx5 is0 X1 */
- VCAP_AFS_SMAC_SIP, /* lan966x is2 x1 */
+ VCAP_AFS_SMAC_SIP, /* lan966x is2 X1 */
};
/* List of actionfields with description
*
+ * VCAP_AF_ACL_ID: W6, lan966x: is2
+ * Logical ID for the entry. This ID is extracted together with the frame in the
+ * CPU extraction header. Only applicable to actions with CPU_COPY_ENA or
+ * HIT_ME_ONCE set.
* VCAP_AF_CLS_VID_SEL: W3, sparx5: is0
* Controls the classified VID: 0: VID_NONE: No action. 1: VID_ADD: New VID =
* old VID + VID_VAL. 2: VID_REPLACE: New VID = VID_VAL. 3: VID_FIRST_TAG: New
@@ -468,8 +466,16 @@ enum vcap_actionfield_set {
* VCAP_AF_CPU_COPY_ENA: W1, sparx5: is2/es2, lan966x: is2
* Setting this bit to 1 causes all frames that hit this action to be copied to
* the CPU extraction queue specified in CPU_QUEUE_NUM.
+ * VCAP_AF_CPU_QU: W3, sparx5: es0
+ * CPU extraction queue. Used when FWD_SEL >0 and PIPELINE_ACT = XTR.
* VCAP_AF_CPU_QUEUE_NUM: W3, sparx5: is2/es2, lan966x: is2
* CPU queue number. Used when CPU_COPY_ENA is set.
+ * VCAP_AF_DEI_A_VAL: W1, sparx5: es0
+ * DEI used in ES0 tag A. See TAG_A_DEI_SEL.
+ * VCAP_AF_DEI_B_VAL: W1, sparx5: es0
+ * DEI used in ES0 tag B. See TAG_B_DEI_SEL.
+ * VCAP_AF_DEI_C_VAL: W1, sparx5: es0
+ * DEI used in ES0 tag C. See TAG_C_DEI_SEL.
* VCAP_AF_DEI_ENA: W1, sparx5: is0
* If set, use DEI_VAL as classified DEI value. Otherwise, DEI from basic
* classification is used
@@ -483,19 +489,38 @@ enum vcap_actionfield_set {
* VCAP_AF_DSCP_ENA: W1, sparx5: is0
* If set, use DSCP_VAL as classified DSCP value. Otherwise, DSCP value from
* basic classification is used.
- * VCAP_AF_DSCP_VAL: W6, sparx5: is0
+ * VCAP_AF_DSCP_SEL: W3, sparx5: es0
+ * Selects source for DSCP. 0: Controlled by port configuration and IFH. 1:
+ * Classified DSCP via IFH. 2: DSCP_VAL. 3: Reserved. 4: Mapped using mapping
+ * table 0, otherwise use DSCP_VAL. 5: Mapped using mapping table 1, otherwise
+ * use mapping table 0. 6: Mapped using mapping table 2, otherwise use DSCP_VAL.
+ * 7: Mapped using mapping table 3, otherwise use mapping table 2
+ * VCAP_AF_DSCP_VAL: W6, sparx5: is0/es0
* See DSCP_ENA.
* VCAP_AF_ES2_REW_CMD: W3, sparx5: es2
* Command forwarded to REW: 0: No action. 1: SWAP MAC addresses. 2: Do L2CP
* DMAC translation when entering or leaving a tunnel.
+ * VCAP_AF_ESDX: W13, sparx5: es0
+ * Egress counter index. Used to index egress counter set as defined in
+ * REW::STAT_CFG.
+ * VCAP_AF_FWD_KILL_ENA: W1, lan966x: is2
+ * Setting this bit to 1 denies forwarding of the frame to any front port. The
+ * frame can still be copied to the CPU by other actions.
* VCAP_AF_FWD_MODE: W2, sparx5: es2
* Forward selector: 0: Forward. 1: Discard. 2: Redirect. 3: Copy.
+ * VCAP_AF_FWD_SEL: W2, sparx5: es0
+ * ES0 Forward selector. 0: No action. 1: Copy to loopback interface. 2:
+ * Redirect to loopback interface. 3: Discard
* VCAP_AF_HIT_ME_ONCE: W1, sparx5: is2/es2, lan966x: is2
* Setting this bit to 1 causes the first frame that hits this action where the
* HIT_CNT counter is zero to be copied to the CPU extraction queue specified in
* CPU_QUEUE_NUM. The HIT_CNT counter is then incremented and any frames that
* hit this action later are not copied to the CPU. To re-enable the HIT_ME_ONCE
* functionality, the HIT_CNT counter must be cleared.
+ * VCAP_AF_HOST_MATCH: W1, lan966x: is2
+ * Used for IP source guarding. If set, it signals that the host is valid (for
+ * instance a valid combination of source MAC address and source IP address).
+ * HOST_MATCH is input to the IS2 keys.
* VCAP_AF_IGNORE_PIPELINE_CTRL: W1, sparx5: is2/es2
* Ignore ingress pipeline control. This enforces the use of the VCAP IS2 action
* even when the pipeline control has terminated the frame before VCAP IS2.
@@ -504,8 +529,13 @@ enum vcap_actionfield_set {
* VCAP_AF_ISDX_ADD_REPLACE_SEL: W1, sparx5: is0
* Controls the classified ISDX. 0: New ISDX = old ISDX + ISDX_VAL. 1: New ISDX
* = ISDX_VAL.
+ * VCAP_AF_ISDX_ENA: W1, lan966x: is2
+ * Setting this bit to 1 causes the classified ISDX to be set to the value of
+ * POLICE_IDX[8:0].
* VCAP_AF_ISDX_VAL: W12, sparx5: is0
* See isdx_add_replace_sel
+ * VCAP_AF_LOOP_ENA: W1, sparx5: es0
+ * 0: Forward based on PIPELINE_PT and FWD_SEL
* VCAP_AF_LRN_DIS: W1, sparx5: is2, lan966x: is2
* Setting this bit to 1 disables learning of frames hitting this action.
* VCAP_AF_MAP_IDX: W9, sparx5: is0
@@ -521,136 +551,190 @@ enum vcap_actionfield_set {
* are applied to. 0: No changes to the QoS Mapping Table lookup. 1: Update key
* type and index for QoS Mapping Table lookup #0. 2: Update key type and index
* for QoS Mapping Table lookup #1. 3: Reserved.
- * VCAP_AF_MASK_MODE: W3, sparx5: is0/is2, lan966x is2 W2
+ * VCAP_AF_MASK_MODE: sparx5 is0 W3, sparx5 is2 W3, lan966x is2 W2
* Controls the PORT_MASK use. Sparx5: 0: OR_DSTMASK, 1: AND_VLANMASK, 2:
* REPLACE_PGID, 3: REPLACE_ALL, 4: REDIR_PGID, 5: OR_PGID_MASK, 6: VSTAX, 7:
* Not applicable. LAN966X: 0: No action, 1: Permit/deny (AND), 2: Policy
* forwarding (DMAC lookup), 3: Redirect. The CPU port is untouched by
* MASK_MODE.
- * VCAP_AF_MATCH_ID: W16, sparx5: is0/is2
+ * VCAP_AF_MATCH_ID: W16, sparx5: is2
* Logical ID for the entry. The MATCH_ID is extracted together with the frame
* if the frame is forwarded to the CPU (CPU_COPY_ENA). The result is placed in
* IFH.CL_RSLT.
- * VCAP_AF_MATCH_ID_MASK: W16, sparx5: is0/is2
+ * VCAP_AF_MATCH_ID_MASK: W16, sparx5: is2
* Mask used by MATCH_ID.
+ * VCAP_AF_MIRROR_ENA: W1, lan966x: is2
+ * Setting this bit to 1 causes frames to be mirrored to the mirror target port
+ * (ANA::MIRRORPORTS).
* VCAP_AF_MIRROR_PROBE: W2, sparx5: is2
* Mirroring performed according to configuration of a mirror probe. 0: No
* mirroring. 1: Mirror probe 0. 2: Mirror probe 1. 3: Mirror probe 2
* VCAP_AF_MIRROR_PROBE_ID: W2, sparx5: es2
* Signals a mirror probe to be placed in the IFH. Only possible when FWD_MODE
- * is copy. 0: No mirroring. 1–3: Use mirror probe 0-2.
+ * is copy. 0: No mirroring. 1-3: Use mirror probe 0-2.
* VCAP_AF_NXT_IDX: W12, sparx5: is0
* Index used as part of key (field G_IDX) in the next lookup.
* VCAP_AF_NXT_IDX_CTRL: W3, sparx5: is0
* Controls the generation of the G_IDX used in the VCAP CLM next lookup
* VCAP_AF_PAG_OVERRIDE_MASK: W8, sparx5: is0
- * Bits set in this mask will override PAG_VAL from port profile.  New PAG =
- * (PAG (input) AND ~PAG_OVERRIDE_MASK) OR (PAG_VAL AND PAG_OVERRIDE_MASK)
+ * Bits set in this mask will override PAG_VAL from port profile. New PAG = (PAG
+ * (input) AND ~PAG_OVERRIDE_MASK) OR (PAG_VAL AND PAG_OVERRIDE_MASK)
* VCAP_AF_PAG_VAL: W8, sparx5: is0
* See PAG_OVERRIDE_MASK.
+ * VCAP_AF_PCP_A_VAL: W3, sparx5: es0
+ * PCP used in ES0 tag A. See TAG_A_PCP_SEL.
+ * VCAP_AF_PCP_B_VAL: W3, sparx5: es0
+ * PCP used in ES0 tag B. See TAG_B_PCP_SEL.
+ * VCAP_AF_PCP_C_VAL: W3, sparx5: es0
+ * PCP used in ES0 tag C. See TAG_C_PCP_SEL.
* VCAP_AF_PCP_ENA: W1, sparx5: is0
* If set, use PCP_VAL as classified PCP value. Otherwise, PCP from basic
* classification is used.
* VCAP_AF_PCP_VAL: W3, sparx5: is0
* See PCP_ENA.
- * VCAP_AF_PIPELINE_FORCE_ENA: sparx5 is0 W2, sparx5 is2 W1
+ * VCAP_AF_PIPELINE_ACT: W1, sparx5: es0
+ * Pipeline action when FWD_SEL > 0. 0: XTR (CPU_QU selects the CPU extraction
+ * queue). 1: LBK_ASM.
+ * VCAP_AF_PIPELINE_FORCE_ENA: W1, sparx5: is2
* If set, use PIPELINE_PT unconditionally and set PIPELINE_ACT = NONE if
* PIPELINE_PT == NONE. Overrules previous settings of pipeline point.
- * VCAP_AF_PIPELINE_PT: W5, sparx5: is0/is2
+ * VCAP_AF_PIPELINE_PT: sparx5 is2 W5, sparx5 es0 W2
* Pipeline point used if PIPELINE_FORCE_ENA is set
* VCAP_AF_POLICE_ENA: W1, sparx5: is2/es2, lan966x: is2
* Setting this bit to 1 causes frames that hit this action to be policed by the
* ACL policer specified in POLICE_IDX. Only applies to the first lookup.
- * VCAP_AF_POLICE_IDX: W6, sparx5: is2/es2, lan966x: is2 W9
+ * VCAP_AF_POLICE_IDX: sparx5 is2 W6, sparx5 es2 W6, lan966x is2 W9
* Selects VCAP policer used when policing frames (POLICE_ENA)
* VCAP_AF_POLICE_REMARK: W1, sparx5: es2
* If set, frames exceeding policer rates are marked as yellow but not
* discarded.
+ * VCAP_AF_POLICE_VCAP_ONLY: W1, lan966x: is2
+ * Disable policing from QoS, and port policers. Only the VCAP policer selected
+ * by POLICE_IDX is active. Only applies to the second lookup.
+ * VCAP_AF_POP_VAL: W2, sparx5: es0
+ * Controls popping of Q-tags. The final number of Q-tags popped is calculated
+ * as shown in section 4.28.7.2 VLAN Pop Decision.
* VCAP_AF_PORT_MASK: sparx5 is0 W65, sparx5 is2 W68, lan966x is2 W8
* Port mask applied to the forwarding decision based on MASK_MODE.
+ * VCAP_AF_PUSH_CUSTOMER_TAG: W2, sparx5: es0
+ * Selects tag C mode: 0: Do not push tag C. 1: Push tag C if
+ * IFH.VSTAX.TAG.WAS_TAGGED = 1. 2: Push tag C if IFH.VSTAX.TAG.WAS_TAGGED = 0.
+ * 3: Push tag C if UNTAG_VID_ENA = 0 or (C-TAG.VID != VID_C_VAL).
+ * VCAP_AF_PUSH_INNER_TAG: W1, sparx5: es0
+ * Controls inner tagging. 0: Do not push ES0 tag B as inner tag. 1: Push ES0
+ * tag B as inner tag.
+ * VCAP_AF_PUSH_OUTER_TAG: W2, sparx5: es0
+ * Controls outer tagging. 0: No ES0 tag A: Port tag is allowed if enabled on
+ * port. 1: ES0 tag A: Push ES0 tag A. No port tag. 2: Force port tag: Always
+ * push port tag. No ES0 tag A. 3: Force untag: Never push port tag or ES0 tag
+ * A.
* VCAP_AF_QOS_ENA: W1, sparx5: is0
* If set, use QOS_VAL as classified QoS class. Otherwise, QoS class from basic
* classification is used.
* VCAP_AF_QOS_VAL: W3, sparx5: is0
* See QOS_ENA.
+ * VCAP_AF_REW_OP: W16, lan966x: is2
+ * Rewriter operation command.
* VCAP_AF_RT_DIS: W1, sparx5: is2
* If set, routing is disallowed. Only applies when IS_INNER_ACL is 0. See also
* IGR_ACL_ENA, EGR_ACL_ENA, and RLEG_STAT_IDX.
+ * VCAP_AF_SWAP_MACS_ENA: W1, sparx5: es0
+ * This setting is only active when FWD_SEL = 1 or FWD_SEL = 2 and PIPELINE_ACT
+ * = LBK_ASM. 0: No action. 1: Swap MACs and clear bit 40 in new SMAC.
+ * VCAP_AF_TAG_A_DEI_SEL: W3, sparx5: es0
+ * Selects DEI for ES0 tag A. 0: Classified DEI. 1: DEI_A_VAL. 2: DP and QoS
+ * mapped to DEI (per port table). 3: DP.
+ * VCAP_AF_TAG_A_PCP_SEL: W3, sparx5: es0
+ * Selects PCP for ES0 tag A. 0: Classified PCP. 1: PCP_A_VAL. 2: DP and QoS
+ * mapped to PCP (per port table). 3: QoS class.
+ * VCAP_AF_TAG_A_TPID_SEL: W3, sparx5: es0
+ * Selects TPID for ES0 tag A: 0: 0x8100. 1: 0x88A8. 2: Custom
+ * (REW:PORT:PORT_VLAN_CFG.PORT_TPID). 3: If IFH.TAG_TYPE = 0 then 0x8100 else
+ * custom.
+ * VCAP_AF_TAG_A_VID_SEL: W2, sparx5: es0
+ * Selects VID for ES0 tag A. 0: Classified VID + VID_A_VAL. 1: VID_A_VAL.
+ * VCAP_AF_TAG_B_DEI_SEL: W3, sparx5: es0
+ * Selects DEI for ES0 tag B. 0: Classified DEI. 1: DEI_B_VAL. 2: DP and QoS
+ * mapped to DEI (per port table). 3: DP.
+ * VCAP_AF_TAG_B_PCP_SEL: W3, sparx5: es0
+ * Selects PCP for ES0 tag B. 0: Classified PCP. 1: PCP_B_VAL. 2: DP and QoS
+ * mapped to PCP (per port table). 3: QoS class.
+ * VCAP_AF_TAG_B_TPID_SEL: W3, sparx5: es0
+ * Selects TPID for ES0 tag B. 0: 0x8100. 1: 0x88A8. 2: Custom
+ * (REW:PORT:PORT_VLAN_CFG.PORT_TPID). 3: If IFH.TAG_TYPE = 0 then 0x8100 else
+ * custom.
+ * VCAP_AF_TAG_B_VID_SEL: W2, sparx5: es0
+ * Selects VID for ES0 tag B. 0: Classified VID + VID_B_VAL. 1: VID_B_VAL.
+ * VCAP_AF_TAG_C_DEI_SEL: W3, sparx5: es0
+ * Selects DEI source for ES0 tag C. 0: Classified DEI. 1: DEI_C_VAL. 2:
+ * REW::DP_MAP.DP [IFH.VSTAX.QOS.DP]. 3: DEI of popped VLAN tag if available
+ * (IFH.VSTAX.TAG.WAS_TAGGED = 1 and tot_pop_cnt>0) else DEI_C_VAL. 4: Mapped
+ * using mapping table 0, otherwise use DEI_C_VAL. 5: Mapped using mapping table
+ * 1, otherwise use mapping table 0. 6: Mapped using mapping table 2, otherwise
+ * use DEI_C_VAL. 7: Mapped using mapping table 3, otherwise use mapping table
+ * 2.
+ * VCAP_AF_TAG_C_PCP_SEL: W3, sparx5: es0
+ * Selects PCP source for ES0 tag C. 0: Classified PCP. 1: PCP_C_VAL. 2:
+ * Reserved. 3: PCP of popped VLAN tag if available (IFH.VSTAX.TAG.WAS_TAGGED=1
+ * and tot_pop_cnt>0) else PCP_C_VAL. 4: Mapped using mapping table 0, otherwise
+ * use PCP_C_VAL. 5: Mapped using mapping table 1, otherwise use mapping table
+ * 0. 6: Mapped using mapping table 2, otherwise use PCP_C_VAL. 7: Mapped using
+ * mapping table 3, otherwise use mapping table 2.
+ * VCAP_AF_TAG_C_TPID_SEL: W3, sparx5: es0
+ * Selects TPID for ES0 tag C. 0: 0x8100. 1: 0x88A8. 2: Custom 1. 3: Custom 2.
+ * 4: Custom 3. 5: See TAG_A_TPID_SEL.
+ * VCAP_AF_TAG_C_VID_SEL: W2, sparx5: es0
+ * Selects VID for ES0 tag C. The resulting VID is termed C-TAG.VID. 0:
+ * Classified VID. 1: VID_C_VAL. 2: IFH.ENCAP.GVID. 3: Reserved.
* VCAP_AF_TYPE: W1, sparx5: is0
* Actionset type id - Set by the API
+ * VCAP_AF_UNTAG_VID_ENA: W1, sparx5: es0
+ * Controls insertion of tag C. Untag or insert mode can be selected. See
+ * PUSH_CUSTOMER_TAG.
+ * VCAP_AF_VID_A_VAL: W12, sparx5: es0
+ * VID used in ES0 tag A. See TAG_A_VID_SEL.
+ * VCAP_AF_VID_B_VAL: W12, sparx5: es0
+ * VID used in ES0 tag B. See TAG_B_VID_SEL.
+ * VCAP_AF_VID_C_VAL: W12, sparx5: es0
+ * VID used in ES0 tag C. See TAG_C_VID_SEL.
* VCAP_AF_VID_VAL: W13, sparx5: is0
* New VID Value
- * VCAP_AF_MIRROR_ENA: W1, lan966x: is2
- * Setting this bit to 1 causes frames to be mirrored to the mirror target
- * port (ANA::MIRRPORPORTS).
- * VCAP_AF_POLICE_VCAP_ONLY: W1, lan966x: is2
- * Disable policing from QoS, and port policers. Only the VCAP policer
- * selected by POLICE_IDX is active. Only applies to the second lookup.
- * VCAP_AF_REW_OP: W16, lan966x: is2
- * Rewriter operation command.
- * VCAP_AF_ISDX_ENA: W1, lan966x: is2
- * Setting this bit to 1 causes the classified ISDX to be set to the value of
- * POLICE_IDX[8:0].
- * VCAP_AF_ACL_ID: W6, lan966x: is2
- * Logical ID for the entry. This ID is extracted together with the frame in
- * the CPU extraction header. Only applicable to actions with CPU_COPY_ENA or
- * HIT_ME_ONCE set.
- * VCAP_AF_FWD_KILL_ENA: W1, lan966x: is2
- * Setting this bit to 1 denies forwarding of the frame forwarding to any
- * front port. The frame can still be copied to the CPU by other actions.
- * VCAP_AF_HOST_MATCH: W1, lan966x: is2
- * Used for IP source guarding. If set, it signals that the host is a valid
- * (for instance a valid combination of source MAC address and source IP
- * address). HOST_MATCH is input to the IS2 keys.
*/
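
The ES0 action fields documented above are what a client sets when building an
egress tagging rule. A rough sketch using the vcap_rule_add_action_u32() helper
that this patch itself calls elsewhere; error handling is omitted and the rule
is assumed to have been allocated against the ES0 VCAP:

	/* Push ES0 tag A with TPID 0x88A8 and a fixed VID of 100 */
	vcap_rule_add_action_u32(rule, VCAP_AF_PUSH_OUTER_TAG, 1); /* tag A, no port tag */
	vcap_rule_add_action_u32(rule, VCAP_AF_TAG_A_TPID_SEL, 1); /* TPID 0x88A8 */
	vcap_rule_add_action_u32(rule, VCAP_AF_TAG_A_VID_SEL, 1);  /* VID from VID_A_VAL */
	vcap_rule_add_action_u32(rule, VCAP_AF_VID_A_VAL, 100);
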
/* Actionfield names */
enum vcap_action_field {
VCAP_AF_NO_VALUE, /* initial value */
- VCAP_AF_ACL_MAC,
- VCAP_AF_ACL_RT_MODE,
+ VCAP_AF_ACL_ID,
VCAP_AF_CLS_VID_SEL,
VCAP_AF_CNT_ID,
VCAP_AF_COPY_PORT_NUM,
VCAP_AF_COPY_QUEUE_NUM,
- VCAP_AF_COSID_ENA,
- VCAP_AF_COSID_VAL,
VCAP_AF_CPU_COPY_ENA,
- VCAP_AF_CPU_DIS,
- VCAP_AF_CPU_ENA,
- VCAP_AF_CPU_Q,
+ VCAP_AF_CPU_QU,
VCAP_AF_CPU_QUEUE_NUM,
- VCAP_AF_CUSTOM_ACE_ENA,
- VCAP_AF_CUSTOM_ACE_OFFSET,
+ VCAP_AF_DEI_A_VAL,
+ VCAP_AF_DEI_B_VAL,
+ VCAP_AF_DEI_C_VAL,
VCAP_AF_DEI_ENA,
VCAP_AF_DEI_VAL,
- VCAP_AF_DLB_OFFSET,
- VCAP_AF_DMAC_OFFSET_ENA,
VCAP_AF_DP_ENA,
VCAP_AF_DP_VAL,
VCAP_AF_DSCP_ENA,
+ VCAP_AF_DSCP_SEL,
VCAP_AF_DSCP_VAL,
- VCAP_AF_EGR_ACL_ENA,
VCAP_AF_ES2_REW_CMD,
- VCAP_AF_FWD_DIS,
+ VCAP_AF_ESDX,
+ VCAP_AF_FWD_KILL_ENA,
VCAP_AF_FWD_MODE,
- VCAP_AF_FWD_TYPE,
- VCAP_AF_GVID_ADD_REPLACE_SEL,
+ VCAP_AF_FWD_SEL,
VCAP_AF_HIT_ME_ONCE,
+ VCAP_AF_HOST_MATCH,
VCAP_AF_IGNORE_PIPELINE_CTRL,
- VCAP_AF_IGR_ACL_ENA,
- VCAP_AF_INJ_MASQ_ENA,
- VCAP_AF_INJ_MASQ_LPORT,
- VCAP_AF_INJ_MASQ_PORT,
VCAP_AF_INTR_ENA,
VCAP_AF_ISDX_ADD_REPLACE_SEL,
+ VCAP_AF_ISDX_ENA,
VCAP_AF_ISDX_VAL,
- VCAP_AF_IS_INNER_ACL,
- VCAP_AF_L3_MAC_UPDATE_DIS,
- VCAP_AF_LOG_MSG_INTERVAL,
- VCAP_AF_LPM_AFFIX_ENA,
- VCAP_AF_LPM_AFFIX_VAL,
- VCAP_AF_LPORT_ENA,
+ VCAP_AF_LOOP_ENA,
VCAP_AF_LRN_DIS,
VCAP_AF_MAP_IDX,
VCAP_AF_MAP_KEY,
@@ -658,78 +742,53 @@ enum vcap_action_field {
VCAP_AF_MASK_MODE,
VCAP_AF_MATCH_ID,
VCAP_AF_MATCH_ID_MASK,
- VCAP_AF_MIP_SEL,
+ VCAP_AF_MIRROR_ENA,
VCAP_AF_MIRROR_PROBE,
VCAP_AF_MIRROR_PROBE_ID,
- VCAP_AF_MPLS_IP_CTRL_ENA,
- VCAP_AF_MPLS_MEP_ENA,
- VCAP_AF_MPLS_MIP_ENA,
- VCAP_AF_MPLS_OAM_FLAVOR,
- VCAP_AF_MPLS_OAM_TYPE,
- VCAP_AF_NUM_VLD_LABELS,
VCAP_AF_NXT_IDX,
VCAP_AF_NXT_IDX_CTRL,
- VCAP_AF_NXT_KEY_TYPE,
- VCAP_AF_NXT_NORMALIZE,
- VCAP_AF_NXT_NORM_W16_OFFSET,
- VCAP_AF_NXT_NORM_W32_OFFSET,
- VCAP_AF_NXT_OFFSET_FROM_TYPE,
- VCAP_AF_NXT_TYPE_AFTER_OFFSET,
- VCAP_AF_OAM_IP_BFD_ENA,
- VCAP_AF_OAM_TWAMP_ENA,
- VCAP_AF_OAM_Y1731_SEL,
VCAP_AF_PAG_OVERRIDE_MASK,
VCAP_AF_PAG_VAL,
+ VCAP_AF_PCP_A_VAL,
+ VCAP_AF_PCP_B_VAL,
+ VCAP_AF_PCP_C_VAL,
VCAP_AF_PCP_ENA,
VCAP_AF_PCP_VAL,
- VCAP_AF_PIPELINE_ACT_SEL,
+ VCAP_AF_PIPELINE_ACT,
VCAP_AF_PIPELINE_FORCE_ENA,
VCAP_AF_PIPELINE_PT,
- VCAP_AF_PIPELINE_PT_REDUCED,
VCAP_AF_POLICE_ENA,
VCAP_AF_POLICE_IDX,
VCAP_AF_POLICE_REMARK,
+ VCAP_AF_POLICE_VCAP_ONLY,
+ VCAP_AF_POP_VAL,
VCAP_AF_PORT_MASK,
- VCAP_AF_PTP_MASTER_SEL,
+ VCAP_AF_PUSH_CUSTOMER_TAG,
+ VCAP_AF_PUSH_INNER_TAG,
+ VCAP_AF_PUSH_OUTER_TAG,
VCAP_AF_QOS_ENA,
VCAP_AF_QOS_VAL,
- VCAP_AF_REW_CMD,
- VCAP_AF_RLEG_DMAC_CHK_DIS,
- VCAP_AF_RLEG_STAT_IDX,
- VCAP_AF_RSDX_ENA,
- VCAP_AF_RSDX_VAL,
- VCAP_AF_RSVD_LBL_VAL,
+ VCAP_AF_REW_OP,
VCAP_AF_RT_DIS,
- VCAP_AF_RT_SEL,
- VCAP_AF_S2_KEY_SEL_ENA,
- VCAP_AF_S2_KEY_SEL_IDX,
- VCAP_AF_SAM_SEQ_ENA,
- VCAP_AF_SIP_IDX,
- VCAP_AF_SWAP_MAC_ENA,
- VCAP_AF_TCP_UDP_DPORT,
- VCAP_AF_TCP_UDP_ENA,
- VCAP_AF_TCP_UDP_SPORT,
- VCAP_AF_TC_ENA,
- VCAP_AF_TC_LABEL,
- VCAP_AF_TPID_SEL,
- VCAP_AF_TTL_DECR_DIS,
- VCAP_AF_TTL_ENA,
- VCAP_AF_TTL_LABEL,
- VCAP_AF_TTL_UPDATE_ENA,
+ VCAP_AF_SWAP_MACS_ENA,
+ VCAP_AF_TAG_A_DEI_SEL,
+ VCAP_AF_TAG_A_PCP_SEL,
+ VCAP_AF_TAG_A_TPID_SEL,
+ VCAP_AF_TAG_A_VID_SEL,
+ VCAP_AF_TAG_B_DEI_SEL,
+ VCAP_AF_TAG_B_PCP_SEL,
+ VCAP_AF_TAG_B_TPID_SEL,
+ VCAP_AF_TAG_B_VID_SEL,
+ VCAP_AF_TAG_C_DEI_SEL,
+ VCAP_AF_TAG_C_PCP_SEL,
+ VCAP_AF_TAG_C_TPID_SEL,
+ VCAP_AF_TAG_C_VID_SEL,
VCAP_AF_TYPE,
+ VCAP_AF_UNTAG_VID_ENA,
+ VCAP_AF_VID_A_VAL,
+ VCAP_AF_VID_B_VAL,
+ VCAP_AF_VID_C_VAL,
VCAP_AF_VID_VAL,
- VCAP_AF_VLAN_POP_CNT,
- VCAP_AF_VLAN_POP_CNT_ENA,
- VCAP_AF_VLAN_PUSH_CNT,
- VCAP_AF_VLAN_PUSH_CNT_ENA,
- VCAP_AF_VLAN_WAS_TAGGED,
- VCAP_AF_MIRROR_ENA,
- VCAP_AF_POLICE_VCAP_ONLY,
- VCAP_AF_REW_OP,
- VCAP_AF_ISDX_ENA,
- VCAP_AF_ACL_ID,
- VCAP_AF_FWD_KILL_ENA,
- VCAP_AF_HOST_MATCH,
};
#endif /* __VCAP_AG_API__ */
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api.c b/drivers/net/ethernet/microchip/vcap/vcap_api.c
index 664aae3e2acd..4847d0d99ec9 100644
--- a/drivers/net/ethernet/microchip/vcap/vcap_api.c
+++ b/drivers/net/ethernet/microchip/vcap/vcap_api.c
@@ -37,11 +37,13 @@ struct vcap_rule_move {
int count; /* blocksize of addresses to move */
};
-/* Stores the filter cookie that enabled the port */
+/* Stores the filter cookie and chain id that enabled the port */
struct vcap_enabled_port {
struct list_head list; /* for insertion in enabled ports list */
struct net_device *ndev; /* the enabled port */
unsigned long cookie; /* filter that enabled the port */
+ int src_cid; /* source chain id */
+ int dst_cid; /* destination chain id */
};
void vcap_iter_set(struct vcap_stream_iter *itr, int sw_width,
@@ -508,10 +510,133 @@ static void vcap_encode_keyfield_typegroups(struct vcap_control *vctrl,
vcap_encode_typegroups(cache->maskstream, sw_width, tgt, true);
}
+/* Copy data from src to dst but reverse the data in chunks of 32bits.
+ * For example if src is 00:11:22:33:44:55 where 55 is LSB the dst will
+ * have the value 22:33:44:55:00:11.
+ */
+static void vcap_copy_to_w32be(u8 *dst, const u8 *src, int size)
+{
+ for (int idx = 0; idx < size; ++idx) {
+ int first_byte_index = 0;
+ int nidx;
+
+ first_byte_index = size - (((idx >> 2) + 1) << 2);
+ if (first_byte_index < 0)
+ first_byte_index = 0;
+ nidx = idx + first_byte_index - (idx & ~0x3);
+ dst[nidx] = src[idx];
+ }
+}
+
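The chunked reversal above is easiest to verify by tracing the example from the
comment. A standalone user-space check with the same loop body, with the bytes
stored least significant first as the comment assumes:

	#include <stdio.h>

	static void copy_to_w32be(unsigned char *dst, const unsigned char *src, int size)
	{
		for (int idx = 0; idx < size; ++idx) {
			int first_byte_index = size - (((idx >> 2) + 1) << 2);

			if (first_byte_index < 0)
				first_byte_index = 0;
			dst[idx + first_byte_index - (idx & ~0x3)] = src[idx];
		}
	}

	int main(void)
	{
		/* 00:11:22:33:44:55 with 55 as LSB, i.e. stored LSB first */
		const unsigned char src[6] = { 0x55, 0x44, 0x33, 0x22, 0x11, 0x00 };
		unsigned char dst[6];

		copy_to_w32be(dst, src, 6);
		for (int i = 5; i >= 0; --i)	/* print MSB first */
			printf("%02x%s", dst[i], i ? ":" : "\n");
		return 0;	/* prints 22:33:44:55:00:11 */
	}
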
+static void
+vcap_copy_from_client_keyfield(struct vcap_rule *rule,
+ struct vcap_client_keyfield *dst,
+ const struct vcap_client_keyfield *src)
+{
+ struct vcap_rule_internal *ri = to_intrule(rule);
+ const struct vcap_client_keyfield_data *sdata;
+ struct vcap_client_keyfield_data *ddata;
+ int size;
+
+ dst->ctrl.type = src->ctrl.type;
+ dst->ctrl.key = src->ctrl.key;
+ INIT_LIST_HEAD(&dst->ctrl.list);
+ sdata = &src->data;
+ ddata = &dst->data;
+
+ if (!ri->admin->w32be) {
+ memcpy(ddata, sdata, sizeof(dst->data));
+ return;
+ }
+
+ size = keyfield_size_table[dst->ctrl.type] / 2;
+
+ switch (dst->ctrl.type) {
+ case VCAP_FIELD_BIT:
+ case VCAP_FIELD_U32:
+ memcpy(ddata, sdata, sizeof(dst->data));
+ break;
+ case VCAP_FIELD_U48:
+ vcap_copy_to_w32be(ddata->u48.value, sdata->u48.value, size);
+ vcap_copy_to_w32be(ddata->u48.mask, sdata->u48.mask, size);
+ break;
+ case VCAP_FIELD_U56:
+ vcap_copy_to_w32be(ddata->u56.value, sdata->u56.value, size);
+ vcap_copy_to_w32be(ddata->u56.mask, sdata->u56.mask, size);
+ break;
+ case VCAP_FIELD_U64:
+ vcap_copy_to_w32be(ddata->u64.value, sdata->u64.value, size);
+ vcap_copy_to_w32be(ddata->u64.mask, sdata->u64.mask, size);
+ break;
+ case VCAP_FIELD_U72:
+ vcap_copy_to_w32be(ddata->u72.value, sdata->u72.value, size);
+ vcap_copy_to_w32be(ddata->u72.mask, sdata->u72.mask, size);
+ break;
+ case VCAP_FIELD_U112:
+ vcap_copy_to_w32be(ddata->u112.value, sdata->u112.value, size);
+ vcap_copy_to_w32be(ddata->u112.mask, sdata->u112.mask, size);
+ break;
+ case VCAP_FIELD_U128:
+ vcap_copy_to_w32be(ddata->u128.value, sdata->u128.value, size);
+ vcap_copy_to_w32be(ddata->u128.mask, sdata->u128.mask, size);
+ break;
+ }
+}
+
+static void
+vcap_copy_from_client_actionfield(struct vcap_rule *rule,
+ struct vcap_client_actionfield *dst,
+ const struct vcap_client_actionfield *src)
+{
+ struct vcap_rule_internal *ri = to_intrule(rule);
+ const struct vcap_client_actionfield_data *sdata;
+ struct vcap_client_actionfield_data *ddata;
+ int size;
+
+ dst->ctrl.type = src->ctrl.type;
+ dst->ctrl.action = src->ctrl.action;
+ INIT_LIST_HEAD(&dst->ctrl.list);
+ sdata = &src->data;
+ ddata = &dst->data;
+
+ if (!ri->admin->w32be) {
+ memcpy(ddata, sdata, sizeof(dst->data));
+ return;
+ }
+
+ size = actionfield_size_table[dst->ctrl.type];
+
+ switch (dst->ctrl.type) {
+ case VCAP_FIELD_BIT:
+ case VCAP_FIELD_U32:
+ memcpy(ddata, sdata, sizeof(dst->data));
+ break;
+ case VCAP_FIELD_U48:
+ vcap_copy_to_w32be(ddata->u48.value, sdata->u48.value, size);
+ break;
+ case VCAP_FIELD_U56:
+ vcap_copy_to_w32be(ddata->u56.value, sdata->u56.value, size);
+ break;
+ case VCAP_FIELD_U64:
+ vcap_copy_to_w32be(ddata->u64.value, sdata->u64.value, size);
+ break;
+ case VCAP_FIELD_U72:
+ vcap_copy_to_w32be(ddata->u72.value, sdata->u72.value, size);
+ break;
+ case VCAP_FIELD_U112:
+ vcap_copy_to_w32be(ddata->u112.value, sdata->u112.value, size);
+ break;
+ case VCAP_FIELD_U128:
+ vcap_copy_to_w32be(ddata->u128.value, sdata->u128.value, size);
+ break;
+ }
+}
+
static int vcap_encode_rule_keyset(struct vcap_rule_internal *ri)
{
const struct vcap_client_keyfield *ckf;
const struct vcap_typegroup *tg_table;
+ struct vcap_client_keyfield tempkf;
const struct vcap_field *kf_table;
int keyset_size;
@@ -552,7 +677,9 @@ static int vcap_encode_rule_keyset(struct vcap_rule_internal *ri)
__func__, __LINE__, ckf->ctrl.key);
return -EINVAL;
}
- vcap_encode_keyfield(ri, ckf, &kf_table[ckf->ctrl.key], tg_table);
+ vcap_copy_from_client_keyfield(&ri->data, &tempkf, ckf);
+ vcap_encode_keyfield(ri, &tempkf, &kf_table[ckf->ctrl.key],
+ tg_table);
}
/* Add typegroup bits to the key/mask bitstreams */
vcap_encode_keyfield_typegroups(ri->vctrl, ri, tg_table);
@@ -667,6 +794,7 @@ static int vcap_encode_rule_actionset(struct vcap_rule_internal *ri)
{
const struct vcap_client_actionfield *caf;
const struct vcap_typegroup *tg_table;
+ struct vcap_client_actionfield tempaf;
const struct vcap_field *af_table;
int actionset_size;
@@ -707,8 +835,9 @@ static int vcap_encode_rule_actionset(struct vcap_rule_internal *ri)
__func__, __LINE__, caf->ctrl.action);
return -EINVAL;
}
- vcap_encode_actionfield(ri, caf, &af_table[caf->ctrl.action],
- tg_table);
+ vcap_copy_from_client_actionfield(&ri->data, &tempaf, caf);
+ vcap_encode_actionfield(ri, &tempaf,
+ &af_table[caf->ctrl.action], tg_table);
}
/* Add typegroup bits to the entry bitstreams */
vcap_encode_actionfield_typegroups(ri, tg_table);
@@ -738,7 +867,7 @@ int vcap_api_check(struct vcap_control *ctrl)
!ctrl->ops->add_default_fields || !ctrl->ops->cache_erase ||
!ctrl->ops->cache_write || !ctrl->ops->cache_read ||
!ctrl->ops->init || !ctrl->ops->update || !ctrl->ops->move ||
- !ctrl->ops->port_info || !ctrl->ops->enable) {
+ !ctrl->ops->port_info) {
pr_err("%s:%d: client operations are missing\n",
__func__, __LINE__);
return -ENOENT;
@@ -791,9 +920,8 @@ int vcap_set_rule_set_actionset(struct vcap_rule *rule,
}
EXPORT_SYMBOL_GPL(vcap_set_rule_set_actionset);
-/* Find a rule with a provided rule id */
-static struct vcap_rule_internal *vcap_lookup_rule(struct vcap_control *vctrl,
- u32 id)
+/* Check if a rule with this id exists */
+static bool vcap_rule_exists(struct vcap_control *vctrl, u32 id)
{
struct vcap_rule_internal *ri;
struct vcap_admin *admin;
@@ -802,7 +930,25 @@ static struct vcap_rule_internal *vcap_lookup_rule(struct vcap_control *vctrl,
list_for_each_entry(admin, &vctrl->list, list)
list_for_each_entry(ri, &admin->rules, list)
if (ri->data.id == id)
+ return true;
+ return false;
+}
+
+/* Find a rule with a provided rule id and return a locked vcap */
+static struct vcap_rule_internal *
+vcap_get_locked_rule(struct vcap_control *vctrl, u32 id)
+{
+ struct vcap_rule_internal *ri;
+ struct vcap_admin *admin;
+
+ /* Look for the rule id in all vcaps */
+ list_for_each_entry(admin, &vctrl->list, list) {
+ mutex_lock(&admin->lock);
+ list_for_each_entry(ri, &admin->rules, list)
+ if (ri->data.id == id)
return ri;
+ mutex_unlock(&admin->lock);
+ }
return NULL;
}
@@ -811,19 +957,31 @@ int vcap_lookup_rule_by_cookie(struct vcap_control *vctrl, u64 cookie)
{
struct vcap_rule_internal *ri;
struct vcap_admin *admin;
+ int id = 0;
/* Look for the rule id in all vcaps */
- list_for_each_entry(admin, &vctrl->list, list)
- list_for_each_entry(ri, &admin->rules, list)
- if (ri->data.cookie == cookie)
- return ri->data.id;
+ list_for_each_entry(admin, &vctrl->list, list) {
+ mutex_lock(&admin->lock);
+ list_for_each_entry(ri, &admin->rules, list) {
+ if (ri->data.cookie == cookie) {
+ id = ri->data.id;
+ break;
+ }
+ }
+ mutex_unlock(&admin->lock);
+ if (id)
+ return id;
+ }
return -ENOENT;
}
EXPORT_SYMBOL_GPL(vcap_lookup_rule_by_cookie);
-/* Make a shallow copy of the rule without the fields */
-struct vcap_rule_internal *vcap_dup_rule(struct vcap_rule_internal *ri)
+/* Make a copy of the rule, shallow or full */
+static struct vcap_rule_internal *vcap_dup_rule(struct vcap_rule_internal *ri,
+ bool full)
{
+ struct vcap_client_actionfield *caf, *newcaf;
+ struct vcap_client_keyfield *ckf, *newckf;
struct vcap_rule_internal *duprule;
/* Allocate the client part */
@@ -836,6 +994,25 @@ struct vcap_rule_internal *vcap_dup_rule(struct vcap_rule_internal *ri)
/* No elements in these lists */
INIT_LIST_HEAD(&duprule->data.keyfields);
INIT_LIST_HEAD(&duprule->data.actionfields);
+
+ /* A full rule copy includes keys and actions */
+ if (!full)
+ return duprule;
+
+ list_for_each_entry(ckf, &ri->data.keyfields, ctrl.list) {
+ newckf = kmemdup(ckf, sizeof(*newckf), GFP_KERNEL);
+ if (!newckf)
+ return ERR_PTR(-ENOMEM);
+ list_add_tail(&newckf->ctrl.list, &duprule->data.keyfields);
+ }
+
+ list_for_each_entry(caf, &ri->data.actionfields, ctrl.list) {
+ newcaf = kmemdup(caf, sizeof(*newcaf), GFP_KERNEL);
+ if (!newcaf)
+ return ERR_PTR(-ENOMEM);
+ list_add_tail(&newcaf->ctrl.list, &duprule->data.actionfields);
+ }
+
return duprule;
}
@@ -1424,39 +1601,65 @@ struct vcap_admin *vcap_find_admin(struct vcap_control *vctrl, int cid)
}
EXPORT_SYMBOL_GPL(vcap_find_admin);
-/* Is the next chain id in the following lookup, possible in another VCAP */
-bool vcap_is_next_lookup(struct vcap_control *vctrl, int cur_cid, int next_cid)
+/* Is this the last admin instance ordered by chain id and direction */
+static bool vcap_admin_is_last(struct vcap_control *vctrl,
+ struct vcap_admin *admin,
+ bool ingress)
{
- struct vcap_admin *admin, *next_admin;
- int lookup, next_lookup;
+ struct vcap_admin *iter, *last = NULL;
+ int max_cid = 0;
- /* The offset must be at least one lookup */
- if (next_cid < cur_cid + VCAP_CID_LOOKUP_SIZE)
+ list_for_each_entry(iter, &vctrl->list, list) {
+ if (iter->first_cid > max_cid &&
+ iter->ingress == ingress) {
+ last = iter;
+ max_cid = iter->first_cid;
+ }
+ }
+ if (!last)
return false;
- if (vcap_api_check(vctrl))
- return false;
+ return admin == last;
+}
- admin = vcap_find_admin(vctrl, cur_cid);
- if (!admin)
+/* Calculate the value used for chaining VCAP rules */
+int vcap_chain_offset(struct vcap_control *vctrl, int from_cid, int to_cid)
+{
+ int diff = to_cid - from_cid;
+
+ if (diff < 0) /* Wrong direction */
+ return diff;
+ to_cid %= VCAP_CID_LOOKUP_SIZE;
+ if (to_cid == 0) /* Destination aligned to a lookup == no chaining */
+ return 0;
+ diff %= VCAP_CID_LOOKUP_SIZE; /* Limit to a value within a lookup */
+ return diff;
+}
+EXPORT_SYMBOL_GPL(vcap_chain_offset);
+
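A worked example: chain ids are allocated in fixed-size blocks per lookup, so
the chaining value is simply the destination's position inside its lookup
block. A standalone check of the arithmetic, assuming the 100000
chains-per-lookup block size that the VCAP chain ids are built from:

	#include <stdio.h>

	#define VCAP_CID_LOOKUP_SIZE 100000	/* chains per lookup (assumed) */

	static int chain_offset(int from_cid, int to_cid)
	{
		int diff = to_cid - from_cid;

		if (diff < 0)			/* wrong direction */
			return diff;
		to_cid %= VCAP_CID_LOOKUP_SIZE;
		if (to_cid == 0)		/* lookup-aligned: no chaining */
			return 0;
		return diff % VCAP_CID_LOOKUP_SIZE;
	}

	int main(void)
	{
		/* destination aligned to the next lookup: no chaining value */
		printf("%d\n", chain_offset(1000000, 1100000));	/* 0 */
		/* destination 10 chains into its lookup block */
		printf("%d\n", chain_offset(1000000, 1000010));	/* 10 */
		return 0;
	}
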
+/* Is the next chain id in one of the following lookups
+ * For now this does not support filters linked to other filters using
+ * keys and actions. That will be added later.
+ */
+bool vcap_is_next_lookup(struct vcap_control *vctrl, int src_cid, int dst_cid)
+{
+ struct vcap_admin *admin;
+ int next_cid;
+
+ if (vcap_api_check(vctrl))
return false;
- /* If no VCAP contains the next chain, the next chain must be beyond
- * the last chain in the current VCAP
- */
- next_admin = vcap_find_admin(vctrl, next_cid);
- if (!next_admin)
- return next_cid > admin->last_cid;
+ /* The offset must be at least one lookup so round up one chain */
+ next_cid = roundup(src_cid + 1, VCAP_CID_LOOKUP_SIZE);
- lookup = vcap_chain_id_to_lookup(admin, cur_cid);
- next_lookup = vcap_chain_id_to_lookup(next_admin, next_cid);
+ if (dst_cid < next_cid)
+ return false;
- /* Next lookup must be the following lookup */
- if (admin == next_admin || admin->vtype == next_admin->vtype)
- return next_lookup == lookup + 1;
+ admin = vcap_find_admin(vctrl, dst_cid);
+ if (!admin)
+ return false;
- /* Must be the first lookup in the next VCAP instance */
- return next_lookup == 0;
+ return true;
}
EXPORT_SYMBOL_GPL(vcap_is_next_lookup);
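The boundary arithmetic in the rewritten check is worth seeing in isolation:
rounding src_cid + 1 up to the block size forces the destination to sit at
least one full lookup beyond the source, even when the source lies exactly on
a lookup boundary. A small sketch with roundup() expanded as in the kernel and
the block size as above:

	#include <stdio.h>

	#define VCAP_CID_LOOKUP_SIZE 100000
	#define roundup(x, y) ((((x) + (y) - 1) / (y)) * (y))

	int main(void)
	{
		/* src inside lookup 0: dst must be at or beyond lookup 1 */
		printf("%d\n", roundup(1000005 + 1, VCAP_CID_LOOKUP_SIZE)); /* 1100000 */
		/* src exactly on a boundary still moves a full lookup ahead */
		printf("%d\n", roundup(1000000 + 1, VCAP_CID_LOOKUP_SIZE)); /* 1100000 */
		return 0;
	}
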
@@ -1504,6 +1707,39 @@ static int vcap_add_type_keyfield(struct vcap_rule *rule)
return 0;
}
+/* Add the actionset typefield to the list of rule actionfields */
+static int vcap_add_type_actionfield(struct vcap_rule *rule)
+{
+ enum vcap_actionfield_set actionset = rule->actionset;
+ struct vcap_rule_internal *ri = to_intrule(rule);
+ enum vcap_type vt = ri->admin->vtype;
+ const struct vcap_field *fields;
+ const struct vcap_set *aset;
+ int ret = -EINVAL;
+
+ aset = vcap_actionfieldset(ri->vctrl, vt, actionset);
+ if (!aset)
+ return ret;
+ if (aset->type_id == (u8)-1) /* No type field is needed */
+ return 0;
+
+ fields = vcap_actionfields(ri->vctrl, vt, actionset);
+ if (!fields)
+ return -EINVAL;
+ if (fields[VCAP_AF_TYPE].width > 1) {
+ ret = vcap_rule_add_action_u32(rule, VCAP_AF_TYPE,
+ aset->type_id);
+ } else {
+ if (aset->type_id)
+ ret = vcap_rule_add_action_bit(rule, VCAP_AF_TYPE,
+ VCAP_BIT_1);
+ else
+ ret = vcap_rule_add_action_bit(rule, VCAP_AF_TYPE,
+ VCAP_BIT_0);
+ }
+ return ret;
+}
+
/* Add a keyset to a keyset list */
bool vcap_keyset_list_add(struct vcap_keyset_list *keysetlist,
enum vcap_keyfield_set keyset)
@@ -1521,6 +1757,22 @@ bool vcap_keyset_list_add(struct vcap_keyset_list *keysetlist,
}
EXPORT_SYMBOL_GPL(vcap_keyset_list_add);
+/* Add an actionset to an actionset list */
+static bool vcap_actionset_list_add(struct vcap_actionset_list *actionsetlist,
+ enum vcap_actionfield_set actionset)
+{
+ int idx;
+
+ if (actionsetlist->cnt < actionsetlist->max) {
+ /* Avoid duplicates */
+ for (idx = 0; idx < actionsetlist->cnt; ++idx)
+ if (actionsetlist->actionsets[idx] == actionset)
+ return actionsetlist->cnt < actionsetlist->max;
+ actionsetlist->actionsets[actionsetlist->cnt++] = actionset;
+ }
+ return actionsetlist->cnt < actionsetlist->max;
+}
+
/* map keyset id to a string with the keyset name */
const char *vcap_keyset_name(struct vcap_control *vctrl,
enum vcap_keyfield_set keyset)
@@ -1629,6 +1881,75 @@ bool vcap_rule_find_keysets(struct vcap_rule *rule,
}
EXPORT_SYMBOL_GPL(vcap_rule_find_keysets);
+/* Return the actionfield that matches an action in an actionset */
+static const struct vcap_field *
+vcap_find_actionset_actionfield(struct vcap_control *vctrl,
+ enum vcap_type vtype,
+ enum vcap_actionfield_set actionset,
+ enum vcap_action_field action)
+{
+ const struct vcap_field *fields;
+ int idx, count;
+
+ fields = vcap_actionfields(vctrl, vtype, actionset);
+ if (!fields)
+ return NULL;
+
+ /* Iterate the actionfields of the actionset */
+ count = vcap_actionfield_count(vctrl, vtype, actionset);
+ for (idx = 0; idx < count; ++idx) {
+ if (fields[idx].width == 0)
+ continue;
+
+ if (action == idx)
+ return &fields[idx];
+ }
+
+ return NULL;
+}
+
+/* Match a list of actions against the actionsets available in a vcap type */
+static bool vcap_rule_find_actionsets(struct vcap_rule_internal *ri,
+ struct vcap_actionset_list *matches)
+{
+ int actionset, found, actioncount, map_size;
+ const struct vcap_client_actionfield *ckf;
+ const struct vcap_field **map;
+ enum vcap_type vtype;
+
+ vtype = ri->admin->vtype;
+ map = ri->vctrl->vcaps[vtype].actionfield_set_map;
+ map_size = ri->vctrl->vcaps[vtype].actionfield_set_size;
+
+ /* Get a count of the actionfields we want to match */
+ actioncount = 0;
+ list_for_each_entry(ckf, &ri->data.actionfields, ctrl.list)
+ ++actioncount;
+
+ matches->cnt = 0;
+ /* Iterate the actionsets of the VCAP */
+ for (actionset = 0; actionset < map_size; ++actionset) {
+ if (!map[actionset])
+ continue;
+
+ /* Iterate the actions in the rule */
+ found = 0;
+ list_for_each_entry(ckf, &ri->data.actionfields, ctrl.list)
+ if (vcap_find_actionset_actionfield(ri->vctrl, vtype,
+ actionset,
+ ckf->ctrl.action))
+ ++found;
+
+ /* Save the actionset if all actionfields were found */
+ if (found == actioncount)
+ if (!vcap_actionset_list_add(matches, actionset))
+ /* bail out when the quota is filled */
+ break;
+ }
+
+ return matches->cnt > 0;
+}
+
/* Validate a rule with respect to available port keys */
int vcap_val_rule(struct vcap_rule *rule, u16 l3_proto)
{
@@ -1680,13 +2001,26 @@ int vcap_val_rule(struct vcap_rule *rule, u16 l3_proto)
return ret;
}
if (ri->data.actionset == VCAP_AFS_NO_VALUE) {
- /* Later also actionsets will be matched against actions in
- * the rule, and the type will be set accordingly
- */
- ri->data.exterr = VCAP_ERR_NO_ACTIONSET_MATCH;
- return -EINVAL;
+ struct vcap_actionset_list matches = {};
+ enum vcap_actionfield_set actionsets[10];
+
+ matches.actionsets = actionsets;
+ matches.max = ARRAY_SIZE(actionsets);
+
+ /* Find an actionset that fits the rule actions */
+ if (!vcap_rule_find_actionsets(ri, &matches)) {
+ ri->data.exterr = VCAP_ERR_NO_ACTIONSET_MATCH;
+ return -EINVAL;
+ }
+ ret = vcap_set_rule_set_actionset(rule, actionsets[0]);
+ if (ret < 0) {
+ pr_err("%s:%d: actionset was not updated: %d\n",
+ __func__, __LINE__, ret);
+ return ret;
+ }
}
vcap_add_type_keyfield(rule);
+ vcap_add_type_actionfield(rule);
/* Add default fields to this rule */
ri->vctrl->ops->add_default_fields(ri->ndev, ri->admin, rule);
@@ -1721,7 +2055,7 @@ static u32 vcap_set_rule_id(struct vcap_rule_internal *ri)
return ri->data.id;
for (u32 next_id = 1; next_id < ~0; ++next_id) {
- if (!vcap_lookup_rule(ri->vctrl, next_id)) {
+ if (!vcap_rule_exists(ri->vctrl, next_id)) {
ri->data.id = next_id;
break;
}
@@ -1756,8 +2090,8 @@ static int vcap_insert_rule(struct vcap_rule_internal *ri,
ri->addr = vcap_next_rule_addr(admin->last_used_addr, ri);
admin->last_used_addr = ri->addr;
- /* Add a shallow copy of the rule to the VCAP list */
- duprule = vcap_dup_rule(ri);
+ /* Add a copy of the rule to the VCAP list */
+ duprule = vcap_dup_rule(ri, ri->state == VCAP_RS_DISABLED);
if (IS_ERR(duprule))
return PTR_ERR(duprule);
@@ -1770,8 +2104,8 @@ static int vcap_insert_rule(struct vcap_rule_internal *ri,
ri->addr = vcap_next_rule_addr(addr, ri);
addr = ri->addr;
- /* Add a shallow copy of the rule to the VCAP list */
- duprule = vcap_dup_rule(ri);
+ /* Add a copy of the rule to the VCAP list */
+ duprule = vcap_dup_rule(ri, ri->state == VCAP_RS_DISABLED);
if (IS_ERR(duprule))
return PTR_ERR(duprule);
@@ -1803,11 +2137,96 @@ static void vcap_move_rules(struct vcap_rule_internal *ri,
move->offset, move->count);
}
+/* Check if the chain is already used to enable a VCAP lookup for this port */
+static bool vcap_is_chain_used(struct vcap_control *vctrl,
+ struct net_device *ndev, int src_cid)
+{
+ struct vcap_enabled_port *eport;
+ struct vcap_admin *admin;
+
+ list_for_each_entry(admin, &vctrl->list, list)
+ list_for_each_entry(eport, &admin->enabled, list)
+ if (eport->src_cid == src_cid && eport->ndev == ndev)
+ return true;
+
+ return false;
+}
+
+/* Fetch the next chain in the enabled list for the port */
+static int vcap_get_next_chain(struct vcap_control *vctrl,
+ struct net_device *ndev,
+ int dst_cid)
+{
+ struct vcap_enabled_port *eport;
+ struct vcap_admin *admin;
+
+ list_for_each_entry(admin, &vctrl->list, list) {
+ list_for_each_entry(eport, &admin->enabled, list) {
+ if (eport->ndev != ndev)
+ continue;
+ if (eport->src_cid == dst_cid)
+ return eport->dst_cid;
+ }
+ }
+
+ return 0;
+}
+
+static bool vcap_path_exist(struct vcap_control *vctrl, struct net_device *ndev,
+ int dst_cid)
+{
+ int cid = rounddown(dst_cid, VCAP_CID_LOOKUP_SIZE);
+ struct vcap_enabled_port *eport = NULL;
+ struct vcap_enabled_port *elem;
+ struct vcap_admin *admin;
+ int tmp;
+
+ if (cid == 0) /* Chain zero is always available */
+ return true;
+
+ /* Find first entry that starts from chain 0 */
+ list_for_each_entry(admin, &vctrl->list, list) {
+ list_for_each_entry(elem, &admin->enabled, list) {
+ if (elem->src_cid == 0 && elem->ndev == ndev) {
+ eport = elem;
+ break;
+ }
+ }
+ if (eport)
+ break;
+ }
+
+ if (!eport)
+ return false;
+
+ tmp = eport->dst_cid;
+ while (tmp != cid && tmp != 0)
+ tmp = vcap_get_next_chain(vctrl, ndev, tmp);
+
+ return !!tmp;
+}
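
vcap_get_next_chain() and vcap_path_exist() treat the enabled (src_cid, dst_cid) pairs as a per-port linked path: a chain is reachable only if a walk starting at chain 0 lands on it. A runnable user-space model (the driver additionally rounds the target down to a lookup boundary; all names below are illustrative):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct link { int src_cid, dst_cid; };

    static int next_chain(const struct link *l, size_t n, int cid)
    {
            for (size_t i = 0; i < n; i++)
                    if (l[i].src_cid == cid)
                            return l[i].dst_cid;
            return 0; /* no link continues from this chain */
    }

    static bool path_exists(const struct link *l, size_t n, int cid)
    {
            int tmp = next_chain(l, n, 0); /* path must start at chain 0 */

            if (cid == 0)
                    return true;
            while (tmp != cid && tmp != 0)
                    tmp = next_chain(l, n, tmp);
            return tmp == cid;
    }

    int main(void)
    {
            /* same topology as the kunit chain-path test below */
            struct link links[] = {
                    { 0, 1000000 }, { 1000000, 1100000 }, { 1100000, 8000000 },
            };

            printf("%d %d\n", path_exists(links, 3, 1100000),  /* 1 */
                   path_exists(links, 3, 1200000));            /* 0 */
            return 0;
    }
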
+
+/* Internal clients can always store their rules in HW.
+ * External clients can store their rules if the chain is enabled all
+ * the way from chain 0; otherwise the rule is cached until the chain
+ * is enabled.
+ */
+static void vcap_rule_set_state(struct vcap_rule_internal *ri)
+{
+ if (ri->data.user <= VCAP_USER_QOS)
+ ri->state = VCAP_RS_PERMANENT;
+ else if (vcap_path_exist(ri->vctrl, ri->ndev, ri->data.vcap_chain_id))
+ ri->state = VCAP_RS_ENABLED;
+ else
+ ri->state = VCAP_RS_DISABLED;
+}
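
The three storage states reduce to a two-input decision; a minimal sketch (the enum mirrors vcap_rule_state in vcap_api_private.h, the helper name is made up):

    #include <stdbool.h>

    enum rule_state { RS_PERMANENT, RS_ENABLED, RS_DISABLED };

    static enum rule_state classify(bool internal_user, bool path_complete)
    {
            if (internal_user) /* VCAP_USER_QOS and below */
                    return RS_PERMANENT;
            return path_complete ? RS_ENABLED : RS_DISABLED;
    }
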
+
/* Encode and write a validated rule to the VCAP */
int vcap_add_rule(struct vcap_rule *rule)
{
struct vcap_rule_internal *ri = to_intrule(rule);
struct vcap_rule_move move = {0};
+ struct vcap_counter ctr = {0};
int ret;
ret = vcap_api_check(ri->vctrl);
@@ -1815,6 +2234,8 @@ int vcap_add_rule(struct vcap_rule *rule)
return ret;
/* Insert the new rule in the list of vcap rules */
mutex_lock(&ri->admin->lock);
+
+ vcap_rule_set_state(ri);
ret = vcap_insert_rule(ri, &move);
if (ret < 0) {
pr_err("%s:%d: could not insert rule in vcap list: %d\n",
@@ -1823,6 +2244,19 @@ int vcap_add_rule(struct vcap_rule *rule)
}
if (move.count > 0)
vcap_move_rules(ri, &move);
+
+ /* Set the counter to zero */
+ ret = vcap_write_counter(ri, &ctr);
+ if (ret)
+ goto out;
+
+ if (ri->state == VCAP_RS_DISABLED) {
+ /* Erase the rule area */
+ ri->vctrl->ops->init(ri->ndev, ri->admin, ri->addr, ri->size);
+ goto out;
+ }
+
+ vcap_erase_cache(ri);
ret = vcap_encode_rule(ri);
if (ret) {
pr_err("%s:%d: rule encoding error: %d\n", __func__, __LINE__, ret);
@@ -1830,8 +2264,10 @@ int vcap_add_rule(struct vcap_rule *rule)
}
ret = vcap_write_rule(ri);
- if (ret)
+ if (ret) {
pr_err("%s:%d: rule write error: %d\n", __func__, __LINE__, ret);
+ goto out;
+ }
out:
mutex_unlock(&ri->admin->lock);
return ret;
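
For clients the call order is unchanged; a hedged sketch of the flow, modeled on the kunit rule creator further down (chain id, priority, rule id and the l3 protocol are placeholder values):

    struct vcap_rule *rule;
    int err;

    rule = vcap_alloc_rule(vctrl, ndev, 8000000, VCAP_USER_VCAP_UTIL, 10, 100);
    if (IS_ERR(rule))
            return PTR_ERR(rule);

    err = vcap_rule_add_action_u32(rule, VCAP_AF_ISDX_VAL, 0);
    if (!err)
            err = vcap_val_rule(rule, ETH_P_ALL);
    if (!err)
            err = vcap_add_rule(rule); /* zeroes the counter first, then
                                        * writes or caches the rule */
    vcap_free_rule(rule); /* the rule list keeps its own copy */
    return err;
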
@@ -1860,17 +2296,28 @@ struct vcap_rule *vcap_alloc_rule(struct vcap_control *vctrl,
/* Sanity check that this VCAP is supported on this platform */
if (vctrl->vcaps[admin->vtype].rows == 0)
return ERR_PTR(-EINVAL);
+
+ mutex_lock(&admin->lock);
/* Check if a rule with this id already exists */
- if (vcap_lookup_rule(vctrl, id))
- return ERR_PTR(-EEXIST);
+ if (vcap_rule_exists(vctrl, id)) {
+ err = -EINVAL;
+ goto out_unlock;
+ }
+
/* Check if there is room for the rule in the block(s) of the VCAP */
maxsize = vctrl->vcaps[admin->vtype].sw_count; /* worst case rule size */
- if (vcap_rule_space(admin, maxsize))
- return ERR_PTR(-ENOSPC);
+ if (vcap_rule_space(admin, maxsize)) {
+ err = -ENOSPC;
+ goto out_unlock;
+ }
+
/* Create a container for the rule and return it */
ri = kzalloc(sizeof(*ri), GFP_KERNEL);
- if (!ri)
- return ERR_PTR(-ENOMEM);
+ if (!ri) {
+ err = -ENOMEM;
+ goto out_unlock;
+ }
+
ri->data.vcap_chain_id = vcap_chain_id;
ri->data.user = user;
ri->data.priority = priority;
@@ -1883,14 +2330,21 @@ struct vcap_rule *vcap_alloc_rule(struct vcap_control *vctrl,
ri->ndev = ndev;
ri->admin = admin; /* refer to the vcap instance */
ri->vctrl = vctrl; /* refer to the client */
- if (vcap_set_rule_id(ri) == 0)
+
+ if (vcap_set_rule_id(ri) == 0) {
+ err = -EINVAL;
goto out_free;
- vcap_erase_cache(ri);
+ }
+
+ mutex_unlock(&admin->lock);
return (struct vcap_rule *)ri;
out_free:
kfree(ri);
- return ERR_PTR(-EINVAL);
+out_unlock:
+ mutex_unlock(&admin->lock);
+ return ERR_PTR(err);
}
EXPORT_SYMBOL_GPL(vcap_alloc_rule);
@@ -1915,43 +2369,52 @@ void vcap_free_rule(struct vcap_rule *rule)
}
EXPORT_SYMBOL_GPL(vcap_free_rule);
-struct vcap_rule *vcap_get_rule(struct vcap_control *vctrl, u32 id)
+/* Decode a rule from the VCAP cache and return a copy */
+struct vcap_rule *vcap_decode_rule(struct vcap_rule_internal *elem)
{
- struct vcap_rule_internal *elem;
struct vcap_rule_internal *ri;
int err;
- ri = NULL;
+ ri = vcap_dup_rule(elem, elem->state == VCAP_RS_DISABLED);
+ if (IS_ERR(ri))
+ return ERR_PTR(PTR_ERR(ri));
+
+ if (ri->state == VCAP_RS_DISABLED)
+ goto out;
+
+ err = vcap_read_rule(ri);
+ if (err)
+ return ERR_PTR(err);
+
+ err = vcap_decode_keyset(ri);
+ if (err)
+ return ERR_PTR(err);
+
+ err = vcap_decode_actionset(ri);
+ if (err)
+ return ERR_PTR(err);
+
+out:
+ return &ri->data;
+}
+
+struct vcap_rule *vcap_get_rule(struct vcap_control *vctrl, u32 id)
+{
+ struct vcap_rule_internal *elem;
+ struct vcap_rule *rule;
+ int err;
err = vcap_api_check(vctrl);
if (err)
return ERR_PTR(err);
- elem = vcap_lookup_rule(vctrl, id);
+
+ elem = vcap_get_locked_rule(vctrl, id);
if (!elem)
return NULL;
- mutex_lock(&elem->admin->lock);
- ri = vcap_dup_rule(elem);
- if (IS_ERR(ri))
- goto unlock;
- err = vcap_read_rule(ri);
- if (err) {
- ri = ERR_PTR(err);
- goto unlock;
- }
- err = vcap_decode_keyset(ri);
- if (err) {
- ri = ERR_PTR(err);
- goto unlock;
- }
- err = vcap_decode_actionset(ri);
- if (err) {
- ri = ERR_PTR(err);
- goto unlock;
- }
-unlock:
+ rule = vcap_decode_rule(elem);
mutex_unlock(&elem->admin->lock);
- return (struct vcap_rule *)ri;
+ return rule;
}
EXPORT_SYMBOL_GPL(vcap_get_rule);
@@ -1966,10 +2429,13 @@ int vcap_mod_rule(struct vcap_rule *rule)
if (err)
return err;
- if (!vcap_lookup_rule(ri->vctrl, ri->data.id))
+ if (!vcap_get_locked_rule(ri->vctrl, ri->data.id))
return -ENOENT;
- mutex_lock(&ri->admin->lock);
+ vcap_rule_set_state(ri);
+ if (ri->state == VCAP_RS_DISABLED)
+ goto out;
+
/* Encode the bitstreams to the VCAP cache */
vcap_erase_cache(ri);
err = vcap_encode_rule(ri);
@@ -1982,8 +2448,6 @@ int vcap_mod_rule(struct vcap_rule *rule)
memset(&ctr, 0, sizeof(ctr));
err = vcap_write_counter(ri, &ctr);
- if (err)
- goto out;
out:
mutex_unlock(&ri->admin->lock);
@@ -2050,20 +2514,19 @@ int vcap_del_rule(struct vcap_control *vctrl, struct net_device *ndev, u32 id)
if (err)
return err;
/* Look for the rule id in all vcaps */
- ri = vcap_lookup_rule(vctrl, id);
+ ri = vcap_get_locked_rule(vctrl, id);
if (!ri)
- return -EINVAL;
+ return -ENOENT;
+
admin = ri->admin;
if (ri->addr > admin->last_used_addr)
gap = vcap_fill_rule_gap(ri);
/* Delete the rule from the list of rules and the cache */
- mutex_lock(&admin->lock);
list_del(&ri->list);
vctrl->ops->init(ndev, admin, admin->last_used_addr, ri->size + gap);
- kfree(ri);
- mutex_unlock(&admin->lock);
+ vcap_free_rule(&ri->data);
/* Update the last used address, set to default when no rules */
if (list_empty(&admin->rules)) {
@@ -2073,7 +2536,9 @@ int vcap_del_rule(struct vcap_control *vctrl, struct net_device *ndev, u32 id)
list);
admin->last_used_addr = elem->addr;
}
- return 0;
+
+ mutex_unlock(&admin->lock);
+ return err;
}
EXPORT_SYMBOL_GPL(vcap_del_rule);
@@ -2091,7 +2556,7 @@ int vcap_del_rules(struct vcap_control *vctrl, struct vcap_admin *admin)
list_for_each_entry_safe(ri, next_ri, &admin->rules, list) {
vctrl->ops->init(ri->ndev, admin, ri->addr, ri->size);
list_del(&ri->list);
- kfree(ri);
+ vcap_free_rule(&ri->data);
}
admin->last_used_addr = admin->last_valid_addr;
@@ -2137,69 +2602,6 @@ const struct vcap_field *vcap_lookup_keyfield(struct vcap_rule *rule,
}
EXPORT_SYMBOL_GPL(vcap_lookup_keyfield);
-/* Copy data from src to dst but reverse the data in chunks of 32bits.
- * For example if src is 00:11:22:33:44:55 where 55 is LSB the dst will
- * have the value 22:33:44:55:00:11.
- */
-static void vcap_copy_to_w32be(u8 *dst, u8 *src, int size)
-{
- for (int idx = 0; idx < size; ++idx) {
- int first_byte_index = 0;
- int nidx;
-
- first_byte_index = size - (((idx >> 2) + 1) << 2);
- if (first_byte_index < 0)
- first_byte_index = 0;
- nidx = idx + first_byte_index - (idx & ~0x3);
- dst[nidx] = src[idx];
- }
-}
-
-static void vcap_copy_from_client_keyfield(struct vcap_rule *rule,
- struct vcap_client_keyfield *field,
- struct vcap_client_keyfield_data *data)
-{
- struct vcap_rule_internal *ri = to_intrule(rule);
- int size;
-
- if (!ri->admin->w32be) {
- memcpy(&field->data, data, sizeof(field->data));
- return;
- }
-
- size = keyfield_size_table[field->ctrl.type] / 2;
- switch (field->ctrl.type) {
- case VCAP_FIELD_BIT:
- case VCAP_FIELD_U32:
- memcpy(&field->data, data, sizeof(field->data));
- break;
- case VCAP_FIELD_U48:
- vcap_copy_to_w32be(field->data.u48.value, data->u48.value, size);
- vcap_copy_to_w32be(field->data.u48.mask, data->u48.mask, size);
- break;
- case VCAP_FIELD_U56:
- vcap_copy_to_w32be(field->data.u56.value, data->u56.value, size);
- vcap_copy_to_w32be(field->data.u56.mask, data->u56.mask, size);
- break;
- case VCAP_FIELD_U64:
- vcap_copy_to_w32be(field->data.u64.value, data->u64.value, size);
- vcap_copy_to_w32be(field->data.u64.mask, data->u64.mask, size);
- break;
- case VCAP_FIELD_U72:
- vcap_copy_to_w32be(field->data.u72.value, data->u72.value, size);
- vcap_copy_to_w32be(field->data.u72.mask, data->u72.mask, size);
- break;
- case VCAP_FIELD_U112:
- vcap_copy_to_w32be(field->data.u112.value, data->u112.value, size);
- vcap_copy_to_w32be(field->data.u112.mask, data->u112.mask, size);
- break;
- case VCAP_FIELD_U128:
- vcap_copy_to_w32be(field->data.u128.value, data->u128.value, size);
- vcap_copy_to_w32be(field->data.u128.mask, data->u128.mask, size);
- break;
- }
-}
-
/* Check if the keyfield is already in the rule */
static bool vcap_keyfield_unique(struct vcap_rule *rule,
enum vcap_key_field key)
@@ -2257,9 +2659,9 @@ static int vcap_rule_add_key(struct vcap_rule *rule,
field = kzalloc(sizeof(*field), GFP_KERNEL);
if (!field)
return -ENOMEM;
+ memcpy(&field->data, data, sizeof(field->data));
field->ctrl.key = key;
field->ctrl.type = ftype;
- vcap_copy_from_client_keyfield(rule, field, data);
list_add_tail(&field->ctrl.list, &rule->keyfields);
return 0;
}
@@ -2355,7 +2757,7 @@ int vcap_rule_get_key_u32(struct vcap_rule *rule, enum vcap_key_field key,
EXPORT_SYMBOL_GPL(vcap_rule_get_key_u32);
/* Find a client action field in a rule */
-static struct vcap_client_actionfield *
+struct vcap_client_actionfield *
vcap_find_actionfield(struct vcap_rule *rule, enum vcap_action_field act)
{
struct vcap_rule_internal *ri = (struct vcap_rule_internal *)rule;
@@ -2366,45 +2768,7 @@ vcap_find_actionfield(struct vcap_rule *rule, enum vcap_action_field act)
return caf;
return NULL;
}
-
-static void vcap_copy_from_client_actionfield(struct vcap_rule *rule,
- struct vcap_client_actionfield *field,
- struct vcap_client_actionfield_data *data)
-{
- struct vcap_rule_internal *ri = to_intrule(rule);
- int size;
-
- if (!ri->admin->w32be) {
- memcpy(&field->data, data, sizeof(field->data));
- return;
- }
-
- size = actionfield_size_table[field->ctrl.type];
- switch (field->ctrl.type) {
- case VCAP_FIELD_BIT:
- case VCAP_FIELD_U32:
- memcpy(&field->data, data, sizeof(field->data));
- break;
- case VCAP_FIELD_U48:
- vcap_copy_to_w32be(field->data.u48.value, data->u48.value, size);
- break;
- case VCAP_FIELD_U56:
- vcap_copy_to_w32be(field->data.u56.value, data->u56.value, size);
- break;
- case VCAP_FIELD_U64:
- vcap_copy_to_w32be(field->data.u64.value, data->u64.value, size);
- break;
- case VCAP_FIELD_U72:
- vcap_copy_to_w32be(field->data.u72.value, data->u72.value, size);
- break;
- case VCAP_FIELD_U112:
- vcap_copy_to_w32be(field->data.u112.value, data->u112.value, size);
- break;
- case VCAP_FIELD_U128:
- vcap_copy_to_w32be(field->data.u128.value, data->u128.value, size);
- break;
- }
-}
+EXPORT_SYMBOL_GPL(vcap_find_actionfield);
/* Check if the actionfield is already in the rule */
static bool vcap_actionfield_unique(struct vcap_rule *rule,
@@ -2463,9 +2827,9 @@ static int vcap_rule_add_action(struct vcap_rule *rule,
field = kzalloc(sizeof(*field), GFP_KERNEL);
if (!field)
return -ENOMEM;
+ memcpy(&field->data, data, sizeof(field->data));
field->ctrl.action = action;
field->ctrl.type = ftype;
- vcap_copy_from_client_actionfield(rule, field, data);
list_add_tail(&field->ctrl.list, &rule->actionfields);
return 0;
}
@@ -2564,24 +2928,157 @@ void vcap_set_tc_exterr(struct flow_cls_offload *fco, struct vcap_rule *vrule)
}
EXPORT_SYMBOL_GPL(vcap_set_tc_exterr);
+/* Write a rule to VCAP HW to enable it */
+static int vcap_enable_rule(struct vcap_rule_internal *ri)
+{
+ struct vcap_client_actionfield *af, *naf;
+ struct vcap_client_keyfield *kf, *nkf;
+ int err;
+
+ vcap_erase_cache(ri);
+ err = vcap_encode_rule(ri);
+ if (err)
+ goto out;
+ err = vcap_write_rule(ri);
+ if (err)
+ goto out;
+
+ /* Deallocate the list of keys and actions */
+ list_for_each_entry_safe(kf, nkf, &ri->data.keyfields, ctrl.list) {
+ list_del(&kf->ctrl.list);
+ kfree(kf);
+ }
+ list_for_each_entry_safe(af, naf, &ri->data.actionfields, ctrl.list) {
+ list_del(&af->ctrl.list);
+ kfree(af);
+ }
+ ri->state = VCAP_RS_ENABLED;
+out:
+ return err;
+}
+
+/* Enable all disabled rules for a specific chain/port in the VCAP HW */
+static int vcap_enable_rules(struct vcap_control *vctrl,
+ struct net_device *ndev, int chain)
+{
+ int next_chain = chain + VCAP_CID_LOOKUP_SIZE;
+ struct vcap_rule_internal *ri;
+ struct vcap_admin *admin;
+ int err = 0;
+
+ list_for_each_entry(admin, &vctrl->list, list) {
+ if (!(chain >= admin->first_cid && chain <= admin->last_cid))
+ continue;
+
+ /* Found the admin, now find the offloadable rules */
+ mutex_lock(&admin->lock);
+ list_for_each_entry(ri, &admin->rules, list) {
+			/* Is the rule in the lookup defined by the chain? */
+ if (!(ri->data.vcap_chain_id >= chain &&
+ ri->data.vcap_chain_id < next_chain)) {
+ continue;
+ }
+
+ if (ri->ndev != ndev)
+ continue;
+
+ if (ri->state != VCAP_RS_DISABLED)
+ continue;
+
+ err = vcap_enable_rule(ri);
+ if (err)
+ break;
+ }
+ mutex_unlock(&admin->lock);
+ if (err)
+ break;
+ }
+ return err;
+}
+
+/* Read and erase a rule from VCAP HW to disable it */
+static int vcap_disable_rule(struct vcap_rule_internal *ri)
+{
+ int err;
+
+ err = vcap_read_rule(ri);
+ if (err)
+ return err;
+ err = vcap_decode_keyset(ri);
+ if (err)
+ return err;
+ err = vcap_decode_actionset(ri);
+ if (err)
+ return err;
+
+ ri->state = VCAP_RS_DISABLED;
+ ri->vctrl->ops->init(ri->ndev, ri->admin, ri->addr, ri->size);
+ return 0;
+}
+
+/* Disable all enabled rules for a specific chain/port in the VCAP HW */
+static int vcap_disable_rules(struct vcap_control *vctrl,
+ struct net_device *ndev, int chain)
+{
+ struct vcap_rule_internal *ri;
+ struct vcap_admin *admin;
+ int err = 0;
+
+ list_for_each_entry(admin, &vctrl->list, list) {
+ if (!(chain >= admin->first_cid && chain <= admin->last_cid))
+ continue;
+
+ /* Found the admin, now find the rules on the chain */
+ mutex_lock(&admin->lock);
+ list_for_each_entry(ri, &admin->rules, list) {
+ if (ri->data.vcap_chain_id != chain)
+ continue;
+
+ if (ri->ndev != ndev)
+ continue;
+
+ if (ri->state != VCAP_RS_ENABLED)
+ continue;
+
+ err = vcap_disable_rule(ri);
+ if (err)
+ break;
+ }
+ mutex_unlock(&admin->lock);
+ if (err)
+ break;
+ }
+ return err;
+}
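
Note the asymmetry between the two walkers: enabling promotes every cached rule whose chain id falls inside the lookup window, while disabling only matches the exact chain id. The window test, in illustrative standalone form:

    static bool in_lookup_window(int rule_cid, int chain, int lookup_size)
    {
            /* lookup_size plays the role of VCAP_CID_LOOKUP_SIZE */
            return rule_cid >= chain && rule_cid < chain + lookup_size;
    }
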
+
 /* Check if this port and destination chain are already enabled in a VCAP instance */
-static bool vcap_is_enabled(struct vcap_admin *admin, struct net_device *ndev,
- unsigned long cookie)
+static bool vcap_is_enabled(struct vcap_control *vctrl, struct net_device *ndev,
+ int dst_cid)
{
struct vcap_enabled_port *eport;
+ struct vcap_admin *admin;
- list_for_each_entry(eport, &admin->enabled, list)
- if (eport->cookie == cookie || eport->ndev == ndev)
- return true;
+ list_for_each_entry(admin, &vctrl->list, list)
+ list_for_each_entry(eport, &admin->enabled, list)
+ if (eport->dst_cid == dst_cid && eport->ndev == ndev)
+ return true;
return false;
}
-/* Enable this port for this VCAP instance */
-static int vcap_enable(struct vcap_admin *admin, struct net_device *ndev,
- unsigned long cookie)
+/* Enable this port and chain id in a VCAP instance */
+static int vcap_enable(struct vcap_control *vctrl, struct net_device *ndev,
+ unsigned long cookie, int src_cid, int dst_cid)
{
struct vcap_enabled_port *eport;
+ struct vcap_admin *admin;
+
+ if (src_cid >= dst_cid)
+ return -EFAULT;
+
+ admin = vcap_find_admin(vctrl, dst_cid);
+ if (!admin)
+ return -ENOENT;
eport = kzalloc(sizeof(*eport), GFP_KERNEL);
if (!eport)
@@ -2589,48 +3086,72 @@ static int vcap_enable(struct vcap_admin *admin, struct net_device *ndev,
eport->ndev = ndev;
eport->cookie = cookie;
+ eport->src_cid = src_cid;
+ eport->dst_cid = dst_cid;
+ mutex_lock(&admin->lock);
list_add_tail(&eport->list, &admin->enabled);
+ mutex_unlock(&admin->lock);
+ if (vcap_path_exist(vctrl, ndev, src_cid)) {
+ /* Enable chained lookups */
+ while (dst_cid) {
+ admin = vcap_find_admin(vctrl, dst_cid);
+ if (!admin)
+ return -ENOENT;
+
+ vcap_enable_rules(vctrl, ndev, dst_cid);
+ dst_cid = vcap_get_next_chain(vctrl, ndev, dst_cid);
+ }
+ }
return 0;
}
-/* Disable this port for this VCAP instance */
-static int vcap_disable(struct vcap_admin *admin, struct net_device *ndev,
+/* Disable this port and chain id for a VCAP instance */
+static int vcap_disable(struct vcap_control *vctrl, struct net_device *ndev,
unsigned long cookie)
{
- struct vcap_enabled_port *eport;
+ struct vcap_enabled_port *elem, *eport = NULL;
+ struct vcap_admin *found = NULL, *admin;
+ int dst_cid;
- list_for_each_entry(eport, &admin->enabled, list) {
- if (eport->cookie == cookie && eport->ndev == ndev) {
- list_del(&eport->list);
- kfree(eport);
- return 0;
+ list_for_each_entry(admin, &vctrl->list, list) {
+ list_for_each_entry(elem, &admin->enabled, list) {
+ if (elem->cookie == cookie && elem->ndev == ndev) {
+ eport = elem;
+ found = admin;
+ break;
+ }
}
+ if (eport)
+ break;
}
- return -ENOENT;
-}
+ if (!eport)
+ return -ENOENT;
-/* Find the VCAP instance that enabled the port using a specific filter */
-static struct vcap_admin *vcap_find_admin_by_cookie(struct vcap_control *vctrl,
- unsigned long cookie)
-{
- struct vcap_enabled_port *eport;
- struct vcap_admin *admin;
+ /* Disable chained lookups */
+ dst_cid = eport->dst_cid;
+ while (dst_cid) {
+ admin = vcap_find_admin(vctrl, dst_cid);
+ if (!admin)
+ return -ENOENT;
- list_for_each_entry(admin, &vctrl->list, list)
- list_for_each_entry(eport, &admin->enabled, list)
- if (eport->cookie == cookie)
- return admin;
+ vcap_disable_rules(vctrl, ndev, dst_cid);
+ dst_cid = vcap_get_next_chain(vctrl, ndev, dst_cid);
+ }
- return NULL;
+ mutex_lock(&found->lock);
+ list_del(&eport->list);
+ mutex_unlock(&found->lock);
+ kfree(eport);
+ return 0;
}
-/* Enable/Disable the VCAP instance lookups. Chain id 0 means disable */
+/* Enable/Disable the VCAP instance lookups */
int vcap_enable_lookups(struct vcap_control *vctrl, struct net_device *ndev,
- int chain_id, unsigned long cookie, bool enable)
+ int src_cid, int dst_cid, unsigned long cookie,
+ bool enable)
{
- struct vcap_admin *admin;
int err;
err = vcap_api_check(vctrl);
@@ -2640,36 +3161,48 @@ int vcap_enable_lookups(struct vcap_control *vctrl, struct net_device *ndev,
if (!ndev)
return -ENODEV;
- if (chain_id)
- admin = vcap_find_admin(vctrl, chain_id);
- else
- admin = vcap_find_admin_by_cookie(vctrl, cookie);
- if (!admin)
- return -ENOENT;
-
- /* first instance and first chain */
- if (admin->vinst || chain_id > admin->first_cid)
+ /* Source and destination must be the first chain in a lookup */
+ if (src_cid % VCAP_CID_LOOKUP_SIZE)
+ return -EFAULT;
+ if (dst_cid % VCAP_CID_LOOKUP_SIZE)
return -EFAULT;
- err = vctrl->ops->enable(ndev, admin, enable);
- if (err)
- return err;
-
- if (chain_id) {
- if (vcap_is_enabled(admin, ndev, cookie))
+ if (enable) {
+ if (vcap_is_enabled(vctrl, ndev, dst_cid))
return -EADDRINUSE;
- mutex_lock(&admin->lock);
- vcap_enable(admin, ndev, cookie);
+ if (vcap_is_chain_used(vctrl, ndev, src_cid))
+ return -EADDRNOTAVAIL;
+ err = vcap_enable(vctrl, ndev, cookie, src_cid, dst_cid);
} else {
- mutex_lock(&admin->lock);
- vcap_disable(admin, ndev, cookie);
+ err = vcap_disable(vctrl, ndev, cookie);
}
- mutex_unlock(&admin->lock);
- return 0;
+ return err;
}
EXPORT_SYMBOL_GPL(vcap_enable_lookups);
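
A hedged usage sketch of the reworked entry point (the cookie is a placeholder; both cids must be lookup-aligned and the enable path rejects src_cid >= dst_cid):

    /* link chain 0 into the first IS0 lookup for this port */
    err = vcap_enable_lookups(vctrl, ndev, 0, 1000000, cookie, true);

    /* ...later: tear the link down again; only the cookie is matched */
    err = vcap_enable_lookups(vctrl, ndev, 0, 0, cookie, false);
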
+/* Is this chain id the last lookup of all VCAPs */
+bool vcap_is_last_chain(struct vcap_control *vctrl, int cid, bool ingress)
+{
+ struct vcap_admin *admin;
+ int lookup;
+
+ if (vcap_api_check(vctrl))
+ return false;
+
+ admin = vcap_find_admin(vctrl, cid);
+ if (!admin)
+ return false;
+
+ if (!vcap_admin_is_last(vctrl, admin, ingress))
+ return false;
+
+ /* This must be the last lookup in this VCAP type */
+ lookup = vcap_chain_id_to_lookup(admin, cid);
+ return lookup == admin->lookups - 1;
+}
+EXPORT_SYMBOL_GPL(vcap_is_last_chain);
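
vcap_chain_id_to_lookup() is not part of this hunk, but with the 100000-wide lookup windows used throughout the tests the index arithmetic comes out as below; a hedged sketch assuming VCAP_CID_LOOKUP_SIZE is 100000:

    #define CID_LOOKUP_SIZE 100000 /* assumed VCAP_CID_LOOKUP_SIZE */

    static int chain_to_lookup(int vinst, int lookups_per_instance,
                               int first_cid, int cid)
    {
            return vinst * lookups_per_instance +
                   (cid - first_cid) / CID_LOOKUP_SIZE;
    }

    /* IS2 instance 1 (first_cid 8200000, 2 lookups/instance), cid 8300000:
     * 1 * 2 + 1 = 3 == lookups - 1, i.e. the last of the 4 IS2 lookups. */
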
+
/* Set a rule counter id (for certain vcaps only) */
void vcap_rule_set_counter_id(struct vcap_rule *rule, u32 counter_id)
{
@@ -2679,31 +3212,6 @@ void vcap_rule_set_counter_id(struct vcap_rule *rule, u32 counter_id)
}
EXPORT_SYMBOL_GPL(vcap_rule_set_counter_id);
-/* Provide all rules via a callback interface */
-int vcap_rule_iter(struct vcap_control *vctrl,
- int (*callback)(void *, struct vcap_rule *), void *arg)
-{
- struct vcap_rule_internal *ri;
- struct vcap_admin *admin;
- int ret;
-
- ret = vcap_api_check(vctrl);
- if (ret)
- return ret;
-
- /* Iterate all rules in each VCAP instance */
- list_for_each_entry(admin, &vctrl->list, list) {
- list_for_each_entry(ri, &admin->rules, list) {
- ret = callback(arg, &ri->data);
- if (ret)
- return ret;
- }
- }
-
- return 0;
-}
-EXPORT_SYMBOL_GPL(vcap_rule_iter);
-
int vcap_rule_set_counter(struct vcap_rule *rule, struct vcap_counter *ctr)
{
struct vcap_rule_internal *ri = to_intrule(rule);
@@ -2716,7 +3224,12 @@ int vcap_rule_set_counter(struct vcap_rule *rule, struct vcap_counter *ctr)
pr_err("%s:%d: counter is missing\n", __func__, __LINE__);
return -EINVAL;
}
- return vcap_write_counter(ri, ctr);
+
+ mutex_lock(&ri->admin->lock);
+ err = vcap_write_counter(ri, ctr);
+ mutex_unlock(&ri->admin->lock);
+
+ return err;
}
EXPORT_SYMBOL_GPL(vcap_rule_set_counter);
@@ -2732,10 +3245,138 @@ int vcap_rule_get_counter(struct vcap_rule *rule, struct vcap_counter *ctr)
pr_err("%s:%d: counter is missing\n", __func__, __LINE__);
return -EINVAL;
}
- return vcap_read_counter(ri, ctr);
+
+ mutex_lock(&ri->admin->lock);
+ err = vcap_read_counter(ri, ctr);
+ mutex_unlock(&ri->admin->lock);
+
+ return err;
}
EXPORT_SYMBOL_GPL(vcap_rule_get_counter);
+/* Get a copy of a client key field */
+static int vcap_rule_get_key(struct vcap_rule *rule,
+ enum vcap_key_field key,
+ struct vcap_client_keyfield *ckf)
+{
+ struct vcap_client_keyfield *field;
+
+ field = vcap_find_keyfield(rule, key);
+ if (!field)
+ return -EINVAL;
+ memcpy(ckf, field, sizeof(*ckf));
+ INIT_LIST_HEAD(&ckf->ctrl.list);
+ return 0;
+}
+
+/* Find a keyset having the same size as the provided rule, where the keyset
+ * does not have a type id.
+ */
+static int vcap_rule_get_untyped_keyset(struct vcap_rule_internal *ri,
+ struct vcap_keyset_list *matches)
+{
+ struct vcap_control *vctrl = ri->vctrl;
+ enum vcap_type vt = ri->admin->vtype;
+ const struct vcap_set *keyfield_set;
+ int idx;
+
+ keyfield_set = vctrl->vcaps[vt].keyfield_set;
+ for (idx = 0; idx < vctrl->vcaps[vt].keyfield_set_size; ++idx) {
+ if (keyfield_set[idx].sw_per_item == ri->keyset_sw &&
+ keyfield_set[idx].type_id == (u8)-1) {
+ vcap_keyset_list_add(matches, idx);
+ return 0;
+ }
+ }
+ return -EINVAL;
+}
+
+/* Get the keysets that matches the rule key type/mask */
+int vcap_rule_get_keysets(struct vcap_rule_internal *ri,
+ struct vcap_keyset_list *matches)
+{
+ struct vcap_control *vctrl = ri->vctrl;
+ enum vcap_type vt = ri->admin->vtype;
+ const struct vcap_set *keyfield_set;
+ struct vcap_client_keyfield kf = {};
+ u32 value, mask;
+ int err, idx;
+
+ err = vcap_rule_get_key(&ri->data, VCAP_KF_TYPE, &kf);
+ if (err)
+ return vcap_rule_get_untyped_keyset(ri, matches);
+
+ if (kf.ctrl.type == VCAP_FIELD_BIT) {
+ value = kf.data.u1.value;
+ mask = kf.data.u1.mask;
+ } else if (kf.ctrl.type == VCAP_FIELD_U32) {
+ value = kf.data.u32.value;
+ mask = kf.data.u32.mask;
+ } else {
+ return -EINVAL;
+ }
+
+ keyfield_set = vctrl->vcaps[vt].keyfield_set;
+ for (idx = 0; idx < vctrl->vcaps[vt].keyfield_set_size; ++idx) {
+ if (keyfield_set[idx].sw_per_item != ri->keyset_sw)
+ continue;
+
+ if (keyfield_set[idx].type_id == (u8)-1) {
+ vcap_keyset_list_add(matches, idx);
+ continue;
+ }
+
+ if ((keyfield_set[idx].type_id & mask) == value)
+ vcap_keyset_list_add(matches, idx);
+ }
+ if (matches->cnt > 0)
+ return 0;
+
+ return -EINVAL;
+}
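
The matching rule above in isolation: an untyped keyset (type_id == (u8)-1) is always a candidate, otherwise the keyset's type id must survive the rule's type mask. A standalone model:

    #include <stdbool.h>
    #include <stdint.h>

    static bool keyset_matches(uint8_t type_id, uint32_t value, uint32_t mask)
    {
            if (type_id == (uint8_t)-1) /* untyped: always a candidate */
                    return true;
            return (type_id & mask) == value;
    }
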
+
+/* Collect packet counts from all rules with the same cookie */
+int vcap_get_rule_count_by_cookie(struct vcap_control *vctrl,
+ struct vcap_counter *ctr, u64 cookie)
+{
+ struct vcap_rule_internal *ri;
+ struct vcap_counter temp = {};
+ struct vcap_admin *admin;
+ int err;
+
+ err = vcap_api_check(vctrl);
+ if (err)
+ return err;
+
+ /* Iterate all rules in each VCAP instance */
+ list_for_each_entry(admin, &vctrl->list, list) {
+ mutex_lock(&admin->lock);
+ list_for_each_entry(ri, &admin->rules, list) {
+ if (ri->data.cookie != cookie)
+ continue;
+
+ err = vcap_read_counter(ri, &temp);
+ if (err)
+ goto unlock;
+ ctr->value += temp.value;
+
+ /* Reset the rule counter */
+ temp.value = 0;
+ temp.sticky = 0;
+ err = vcap_write_counter(ri, &temp);
+ if (err)
+ goto unlock;
+ }
+ mutex_unlock(&admin->lock);
+ }
+ return err;
+
+unlock:
+ mutex_unlock(&admin->lock);
+ return err;
+}
+EXPORT_SYMBOL_GPL(vcap_get_rule_count_by_cookie);
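
A standalone model of the aggregate-and-reset loop (one counter per rule; the names are illustrative):

    #include <stdint.h>

    struct rule { uint64_t cookie; uint64_t counter; };

    static uint64_t count_by_cookie(struct rule *rules, int n, uint64_t cookie)
    {
            uint64_t sum = 0;

            for (int i = 0; i < n; i++) {
                    if (rules[i].cookie != cookie)
                            continue;
                    sum += rules[i].counter;
                    rules[i].counter = 0; /* read-and-clear, like the driver */
            }
            return sum;
    }
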
+
static int vcap_rule_mod_key(struct vcap_rule *rule,
enum vcap_key_field key,
enum vcap_field_type ftype,
@@ -2746,7 +3387,7 @@ static int vcap_rule_mod_key(struct vcap_rule *rule,
field = vcap_find_keyfield(rule, key);
if (!field)
return vcap_rule_add_key(rule, key, ftype, data);
- vcap_copy_from_client_keyfield(rule, field, data);
+ memcpy(&field->data, data, sizeof(field->data));
return 0;
}
@@ -2772,7 +3413,7 @@ static int vcap_rule_mod_action(struct vcap_rule *rule,
field = vcap_find_actionfield(rule, action);
if (!field)
return vcap_rule_add_action(rule, action, ftype, data);
- vcap_copy_from_client_actionfield(rule, field, data);
+ memcpy(&field->data, data, sizeof(field->data));
return 0;
}
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api.h b/drivers/net/ethernet/microchip/vcap/vcap_api.h
index 689c7270f2a8..62db270f65af 100644
--- a/drivers/net/ethernet/microchip/vcap/vcap_api.h
+++ b/drivers/net/ethernet/microchip/vcap/vcap_api.h
@@ -176,6 +176,7 @@ struct vcap_admin {
int first_valid_addr; /* bottom of address range to be used */
int last_used_addr; /* address of lowest added rule */
bool w32be; /* vcap uses "32bit-word big-endian" encoding */
+ bool ingress; /* chain traffic direction */
struct vcap_cache_data cache; /* encoded rule data */
};
@@ -201,6 +202,13 @@ struct vcap_keyset_list {
enum vcap_keyfield_set *keysets; /* the list of keysets */
};
+/* List of actionsets */
+struct vcap_actionset_list {
+ int max; /* size of the actionset list */
+ int cnt; /* count of actionsets actually in the list */
+ enum vcap_actionfield_set *actionsets; /* the list of actionsets */
+};
+
/* Client output printf-like function with destination */
struct vcap_output_print {
__printf(2, 3)
@@ -259,11 +267,6 @@ struct vcap_operations {
(struct net_device *ndev,
struct vcap_admin *admin,
struct vcap_output_print *out);
- /* enable/disable the lookups in a vcap instance */
- int (*enable)
- (struct net_device *ndev,
- struct vcap_admin *admin,
- bool enable);
};
/* VCAP API Client control interface */
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_client.h b/drivers/net/ethernet/microchip/vcap/vcap_api_client.h
index 0319866f9c94..417af9754bcc 100644
--- a/drivers/net/ethernet/microchip/vcap/vcap_api_client.h
+++ b/drivers/net/ethernet/microchip/vcap/vcap_api_client.h
@@ -148,9 +148,10 @@ struct vcap_counter {
bool sticky;
};
-/* Enable/Disable the VCAP instance lookups. Chain id 0 means disable */
+/* Enable/Disable the VCAP instance lookups */
int vcap_enable_lookups(struct vcap_control *vctrl, struct net_device *ndev,
- int chain_id, unsigned long cookie, bool enable);
+ int from_cid, int to_cid, unsigned long cookie,
+ bool enable);
/* VCAP rule operations */
/* Allocate a rule and fill in the basic information */
@@ -201,6 +202,8 @@ int vcap_rule_add_action_u32(struct vcap_rule *rule,
enum vcap_action_field action, u32 value);
/* VCAP rule counter operations */
+int vcap_get_rule_count_by_cookie(struct vcap_control *vctrl,
+ struct vcap_counter *ctr, u64 cookie);
int vcap_rule_set_counter(struct vcap_rule *rule, struct vcap_counter *ctr);
int vcap_rule_get_counter(struct vcap_rule *rule, struct vcap_counter *ctr);
@@ -214,8 +217,12 @@ const struct vcap_field *vcap_lookup_keyfield(struct vcap_rule *rule,
enum vcap_key_field key);
/* Find a rule id with a provided cookie */
int vcap_lookup_rule_by_cookie(struct vcap_control *vctrl, u64 cookie);
+/* Calculate the value used for chaining VCAP rules */
+int vcap_chain_offset(struct vcap_control *vctrl, int from_cid, int to_cid);
 /* Is the next chain id in the following lookup, possibly in another VCAP */
bool vcap_is_next_lookup(struct vcap_control *vctrl, int cur_cid, int next_cid);
+/* Is this chain id the last lookup of all VCAPs */
+bool vcap_is_last_chain(struct vcap_control *vctrl, int cid, bool ingress);
/* Provide all rules via a callback interface */
int vcap_rule_iter(struct vcap_control *vctrl,
int (*callback)(void *, struct vcap_rule *), void *arg);
@@ -262,4 +269,6 @@ int vcap_rule_mod_action_u32(struct vcap_rule *rule,
int vcap_rule_get_key_u32(struct vcap_rule *rule, enum vcap_key_field key,
u32 *value, u32 *mask);
+struct vcap_client_actionfield *
+vcap_find_actionfield(struct vcap_rule *rule, enum vcap_action_field act);
#endif /* __VCAP_API_CLIENT__ */
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs.c b/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs.c
index e0b206247f2e..c2c3397c5898 100644
--- a/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs.c
+++ b/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs.c
@@ -44,11 +44,14 @@ static void vcap_debugfs_show_rule_keyfield(struct vcap_control *vctrl,
out->prf(out->dst, "%pI4h/%pI4h", &data->u32.value,
&data->u32.mask);
} else if (key == VCAP_KF_ETYPE ||
- key == VCAP_KF_IF_IGR_PORT_MASK) {
+ key == VCAP_KF_IF_IGR_PORT_MASK ||
+ key == VCAP_KF_IF_EGR_PORT_MASK) {
hex = true;
} else {
u32 fmsk = (1 << keyfield[key].width) - 1;
+ if (keyfield[key].width == 32)
+ fmsk = ~0;
out->prf(out->dst, "%u/%u", data->u32.value & fmsk,
data->u32.mask & fmsk);
}
@@ -152,37 +155,48 @@ vcap_debugfs_show_rule_actionfield(struct vcap_control *vctrl,
out->prf(out->dst, "\n");
}
-static int vcap_debugfs_show_rule_keyset(struct vcap_rule_internal *ri,
- struct vcap_output_print *out)
+static int vcap_debugfs_show_keysets(struct vcap_rule_internal *ri,
+ struct vcap_output_print *out)
{
- struct vcap_control *vctrl = ri->vctrl;
struct vcap_admin *admin = ri->admin;
enum vcap_keyfield_set keysets[10];
- const struct vcap_field *keyfield;
- enum vcap_type vt = admin->vtype;
- struct vcap_client_keyfield *ckf;
struct vcap_keyset_list matches;
- u32 *maskstream;
- u32 *keystream;
- int res;
+ int err;
- keystream = admin->cache.keystream;
- maskstream = admin->cache.maskstream;
matches.keysets = keysets;
matches.cnt = 0;
matches.max = ARRAY_SIZE(keysets);
- res = vcap_find_keystream_keysets(vctrl, vt, keystream, maskstream,
- false, 0, &matches);
- if (res < 0) {
+
+ if (ri->state == VCAP_RS_DISABLED)
+ err = vcap_rule_get_keysets(ri, &matches);
+ else
+ err = vcap_find_keystream_keysets(ri->vctrl, admin->vtype,
+ admin->cache.keystream,
+ admin->cache.maskstream,
+ false, 0, &matches);
+ if (err) {
pr_err("%s:%d: could not find valid keysets: %d\n",
- __func__, __LINE__, res);
- return -EINVAL;
+ __func__, __LINE__, err);
+ return err;
}
+
out->prf(out->dst, " keysets:");
for (int idx = 0; idx < matches.cnt; ++idx)
out->prf(out->dst, " %s",
- vcap_keyset_name(vctrl, matches.keysets[idx]));
+ vcap_keyset_name(ri->vctrl, matches.keysets[idx]));
out->prf(out->dst, "\n");
+ return 0;
+}
+
+static int vcap_debugfs_show_rule_keyset(struct vcap_rule_internal *ri,
+ struct vcap_output_print *out)
+{
+ struct vcap_control *vctrl = ri->vctrl;
+ struct vcap_admin *admin = ri->admin;
+ const struct vcap_field *keyfield;
+ struct vcap_client_keyfield *ckf;
+
+ vcap_debugfs_show_keysets(ri, out);
out->prf(out->dst, " keyset_sw: %d\n", ri->keyset_sw);
out->prf(out->dst, " keyset_sw_regs: %d\n", ri->keyset_sw_regs);
@@ -233,6 +247,18 @@ static void vcap_show_admin_rule(struct vcap_control *vctrl,
out->prf(out->dst, " chain_id: %d\n", ri->data.vcap_chain_id);
out->prf(out->dst, " user: %d\n", ri->data.user);
out->prf(out->dst, " priority: %d\n", ri->data.priority);
+ out->prf(out->dst, " state: ");
+ switch (ri->state) {
+ case VCAP_RS_PERMANENT:
+ out->prf(out->dst, "permanent\n");
+ break;
+ case VCAP_RS_DISABLED:
+ out->prf(out->dst, "disabled\n");
+ break;
+ case VCAP_RS_ENABLED:
+ out->prf(out->dst, "enabled\n");
+ break;
+ }
vcap_debugfs_show_rule_keyset(ri, out);
vcap_debugfs_show_rule_actionset(ri, out);
}
@@ -254,6 +280,7 @@ static void vcap_show_admin_info(struct vcap_control *vctrl,
out->prf(out->dst, "version: %d\n", vcap->version);
out->prf(out->dst, "vtype: %d\n", admin->vtype);
out->prf(out->dst, "vinst: %d\n", admin->vinst);
+ out->prf(out->dst, "ingress: %d\n", admin->ingress);
out->prf(out->dst, "first_cid: %d\n", admin->first_cid);
out->prf(out->dst, "last_cid: %d\n", admin->last_cid);
out->prf(out->dst, "lookups: %d\n", admin->lookups);
@@ -272,7 +299,7 @@ static int vcap_show_admin(struct vcap_control *vctrl,
vcap_show_admin_info(vctrl, admin, out);
list_for_each_entry(elem, &admin->rules, list) {
- vrule = vcap_get_rule(vctrl, elem->data.id);
+ vrule = vcap_decode_rule(elem);
if (IS_ERR_OR_NULL(vrule)) {
ret = PTR_ERR(vrule);
break;
@@ -381,8 +408,12 @@ static int vcap_debugfs_show(struct seq_file *m, void *unused)
.prf = (void *)seq_printf,
.dst = m,
};
+ int ret;
- return vcap_show_admin(info->vctrl, info->admin, &out);
+ mutex_lock(&info->admin->lock);
+ ret = vcap_show_admin(info->vctrl, info->admin, &out);
+ mutex_unlock(&info->admin->lock);
+ return ret;
}
DEFINE_SHOW_ATTRIBUTE(vcap_debugfs);
@@ -394,8 +425,12 @@ static int vcap_raw_debugfs_show(struct seq_file *m, void *unused)
.prf = (void *)seq_printf,
.dst = m,
};
+ int ret;
- return vcap_show_admin_raw(info->vctrl, info->admin, &out);
+ mutex_lock(&info->admin->lock);
+ ret = vcap_show_admin_raw(info->vctrl, info->admin, &out);
+ mutex_unlock(&info->admin->lock);
+ return ret;
}
DEFINE_SHOW_ATTRIBUTE(vcap_raw_debugfs);
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs_kunit.c
index cf594668d5d9..0de3f677135a 100644
--- a/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs_kunit.c
+++ b/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs_kunit.c
@@ -221,13 +221,6 @@ static int vcap_test_port_info(struct net_device *ndev,
return 0;
}
-static int vcap_test_enable(struct net_device *ndev,
- struct vcap_admin *admin,
- bool enable)
-{
- return 0;
-}
-
static struct vcap_operations test_callbacks = {
.validate_keyset = test_val_keyset,
.add_default_fields = test_add_def_fields,
@@ -238,7 +231,6 @@ static struct vcap_operations test_callbacks = {
.update = test_cache_update,
.move = test_cache_move,
.port_info = vcap_test_port_info,
- .enable = vcap_test_enable,
};
static struct vcap_control test_vctrl = {
@@ -253,6 +245,8 @@ static void vcap_test_api_init(struct vcap_admin *admin)
INIT_LIST_HEAD(&test_vctrl.list);
INIT_LIST_HEAD(&admin->list);
INIT_LIST_HEAD(&admin->rules);
+ INIT_LIST_HEAD(&admin->enabled);
+ mutex_init(&admin->lock);
list_add_tail(&admin->list, &test_vctrl.list);
memset(test_updateaddr, 0, sizeof(test_updateaddr));
test_updateaddridx = 0;
@@ -393,8 +387,9 @@ static const char * const test_admin_info_expect[] = {
"default_cnt: 73\n",
"require_cnt_dis: 0\n",
"version: 1\n",
- "vtype: 2\n",
+ "vtype: 3\n",
"vinst: 0\n",
+ "ingress: 1\n",
"first_cid: 10000\n",
"last_cid: 19999\n",
"lookups: 4\n",
@@ -413,6 +408,7 @@ static void vcap_api_show_admin_test(struct kunit *test)
.last_valid_addr = 3071,
.first_valid_addr = 0,
.last_used_addr = 794,
+ .ingress = true,
};
struct vcap_output_print out = {
.prf = (void *)test_prf,
@@ -439,8 +435,9 @@ static const char * const test_admin_expect[] = {
"default_cnt: 73\n",
"require_cnt_dis: 0\n",
"version: 1\n",
- "vtype: 2\n",
+ "vtype: 3\n",
"vinst: 0\n",
+ "ingress: 1\n",
"first_cid: 8000000\n",
"last_cid: 8199999\n",
"lookups: 4\n",
@@ -452,6 +449,7 @@ static const char * const test_admin_expect[] = {
" chain_id: 0\n",
" user: 0\n",
" priority: 0\n",
+ " state: permanent\n",
" keysets: VCAP_KFS_MAC_ETYPE\n",
" keyset_sw: 6\n",
" keyset_sw_regs: 2\n",
@@ -501,6 +499,7 @@ static void vcap_api_show_admin_rule_test(struct kunit *test)
.last_valid_addr = 3071,
.first_valid_addr = 0,
.last_used_addr = 794,
+ .ingress = true,
.cache = {
.keystream = keydata,
.maskstream = mskdata,
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
index 76a31215ebfb..c07f25e791c7 100644
--- a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
+++ b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
@@ -211,13 +211,6 @@ static int vcap_test_port_info(struct net_device *ndev,
return 0;
}
-static int vcap_test_enable(struct net_device *ndev,
- struct vcap_admin *admin,
- bool enable)
-{
- return 0;
-}
-
static struct vcap_operations test_callbacks = {
.validate_keyset = test_val_keyset,
.add_default_fields = test_add_def_fields,
@@ -228,7 +221,6 @@ static struct vcap_operations test_callbacks = {
.update = test_cache_update,
.move = test_cache_move,
.port_info = vcap_test_port_info,
- .enable = vcap_test_enable,
};
static struct vcap_control test_vctrl = {
@@ -243,6 +235,8 @@ static void vcap_test_api_init(struct vcap_admin *admin)
INIT_LIST_HEAD(&test_vctrl.list);
INIT_LIST_HEAD(&admin->list);
INIT_LIST_HEAD(&admin->rules);
+ INIT_LIST_HEAD(&admin->enabled);
+ mutex_init(&admin->lock);
list_add_tail(&admin->list, &test_vctrl.list);
memset(test_updateaddr, 0, sizeof(test_updateaddr));
test_updateaddridx = 0;
@@ -302,7 +296,7 @@ test_vcap_xn_rule_creator(struct kunit *test, int cid, enum vcap_user user,
ret = vcap_set_rule_set_keyset(rule, keyset);
/* Add rule actions : there must be at least one action */
- ret = vcap_rule_add_action_u32(rule, VCAP_AF_COSID_VAL, 0);
+ ret = vcap_rule_add_action_u32(rule, VCAP_AF_ISDX_VAL, 0);
/* Override rule actionset */
ret = vcap_set_rule_set_actionset(rule, actionset);
@@ -1312,8 +1306,8 @@ static void vcap_api_encode_rule_test(struct kunit *test)
struct vcap_admin is2_admin = {
.vtype = VCAP_TYPE_IS2,
- .first_cid = 10000,
- .last_cid = 19999,
+ .first_cid = 8000000,
+ .last_cid = 8099999,
.lookups = 4,
.last_valid_addr = 3071,
.first_valid_addr = 0,
@@ -1326,7 +1320,7 @@ static void vcap_api_encode_rule_test(struct kunit *test)
};
struct vcap_rule *rule;
struct vcap_rule_internal *ri;
- int vcap_chain_id = 10005;
+ int vcap_chain_id = 8000000;
enum vcap_user user = VCAP_USER_VCAP_UTIL;
u16 priority = 10;
int id = 100;
@@ -1343,8 +1337,8 @@ static void vcap_api_encode_rule_test(struct kunit *test)
u32 port_mask_rng_mask = 0x0f;
u32 igr_port_mask_value = 0xffabcd01;
u32 igr_port_mask_mask = ~0;
- /* counter is not written yet, so it is not in expwriteaddr */
- u32 expwriteaddr[] = {792, 793, 794, 795, 796, 797, 0};
+ /* counter is written as the first operation */
+ u32 expwriteaddr[] = {792, 792, 793, 794, 795, 796, 797};
int idx;
vcap_test_api_init(&is2_admin);
@@ -1398,6 +1392,11 @@ static void vcap_api_encode_rule_test(struct kunit *test)
KUNIT_EXPECT_EQ(test, 2, ri->keyset_sw_regs);
KUNIT_EXPECT_EQ(test, 4, ri->actionset_sw_regs);
+ /* Enable lookup, so the rule will be written */
+ ret = vcap_enable_lookups(&test_vctrl, &test_netdev, 0,
+ rule->vcap_chain_id, rule->cookie, true);
+ KUNIT_EXPECT_EQ(test, 0, ret);
+
/* Add rule with write callback */
ret = vcap_add_rule(rule);
KUNIT_EXPECT_EQ(test, 0, ret);
@@ -1872,58 +1871,56 @@ static void vcap_api_next_lookup_basic_test(struct kunit *test)
ret = vcap_is_next_lookup(&test_vctrl, 8300000, 8301000);
KUNIT_EXPECT_EQ(test, false, ret);
ret = vcap_is_next_lookup(&test_vctrl, 8300000, 8401000);
- KUNIT_EXPECT_EQ(test, true, ret);
+ KUNIT_EXPECT_EQ(test, false, ret);
}
static void vcap_api_next_lookup_advanced_test(struct kunit *test)
{
- struct vcap_admin admin1 = {
+ struct vcap_admin admin[] = {
+ {
.vtype = VCAP_TYPE_IS0,
.vinst = 0,
.first_cid = 1000000,
.last_cid = 1199999,
.lookups = 6,
.lookups_per_instance = 2,
- };
- struct vcap_admin admin2 = {
+ }, {
.vtype = VCAP_TYPE_IS0,
.vinst = 1,
.first_cid = 1200000,
.last_cid = 1399999,
.lookups = 6,
.lookups_per_instance = 2,
- };
- struct vcap_admin admin3 = {
+ }, {
.vtype = VCAP_TYPE_IS0,
.vinst = 2,
.first_cid = 1400000,
.last_cid = 1599999,
.lookups = 6,
.lookups_per_instance = 2,
- };
- struct vcap_admin admin4 = {
+ }, {
.vtype = VCAP_TYPE_IS2,
.vinst = 0,
.first_cid = 8000000,
.last_cid = 8199999,
.lookups = 4,
.lookups_per_instance = 2,
- };
- struct vcap_admin admin5 = {
+ }, {
.vtype = VCAP_TYPE_IS2,
.vinst = 1,
.first_cid = 8200000,
.last_cid = 8399999,
.lookups = 4,
.lookups_per_instance = 2,
+ }
};
bool ret;
- vcap_test_api_init(&admin1);
- list_add_tail(&admin2.list, &test_vctrl.list);
- list_add_tail(&admin3.list, &test_vctrl.list);
- list_add_tail(&admin4.list, &test_vctrl.list);
- list_add_tail(&admin5.list, &test_vctrl.list);
+ vcap_test_api_init(&admin[0]);
+ list_add_tail(&admin[1].list, &test_vctrl.list);
+ list_add_tail(&admin[2].list, &test_vctrl.list);
+ list_add_tail(&admin[3].list, &test_vctrl.list);
+ list_add_tail(&admin[4].list, &test_vctrl.list);
ret = vcap_is_next_lookup(&test_vctrl, 1000000, 1001000);
KUNIT_EXPECT_EQ(test, false, ret);
@@ -1933,9 +1930,9 @@ static void vcap_api_next_lookup_advanced_test(struct kunit *test)
ret = vcap_is_next_lookup(&test_vctrl, 1100000, 1201000);
KUNIT_EXPECT_EQ(test, true, ret);
ret = vcap_is_next_lookup(&test_vctrl, 1100000, 1301000);
- KUNIT_EXPECT_EQ(test, false, ret);
+ KUNIT_EXPECT_EQ(test, true, ret);
ret = vcap_is_next_lookup(&test_vctrl, 1100000, 8101000);
- KUNIT_EXPECT_EQ(test, false, ret);
+ KUNIT_EXPECT_EQ(test, true, ret);
ret = vcap_is_next_lookup(&test_vctrl, 1300000, 1401000);
KUNIT_EXPECT_EQ(test, true, ret);
ret = vcap_is_next_lookup(&test_vctrl, 1400000, 1501000);
@@ -1951,7 +1948,7 @@ static void vcap_api_next_lookup_advanced_test(struct kunit *test)
ret = vcap_is_next_lookup(&test_vctrl, 8300000, 8301000);
KUNIT_EXPECT_EQ(test, false, ret);
ret = vcap_is_next_lookup(&test_vctrl, 8300000, 8401000);
- KUNIT_EXPECT_EQ(test, true, ret);
+ KUNIT_EXPECT_EQ(test, false, ret);
}
static void vcap_api_filter_unsupported_keys_test(struct kunit *test)
@@ -2146,6 +2143,71 @@ static void vcap_api_filter_keylist_test(struct kunit *test)
KUNIT_EXPECT_EQ(test, 26, idx);
}
+static void vcap_api_rule_chain_path_test(struct kunit *test)
+{
+ struct vcap_admin admin1 = {
+ .vtype = VCAP_TYPE_IS0,
+ .vinst = 0,
+ .first_cid = 1000000,
+ .last_cid = 1199999,
+ .lookups = 6,
+ .lookups_per_instance = 2,
+ };
+ struct vcap_enabled_port eport3 = {
+ .ndev = &test_netdev,
+ .cookie = 0x100,
+ .src_cid = 0,
+ .dst_cid = 1000000,
+ };
+ struct vcap_enabled_port eport2 = {
+ .ndev = &test_netdev,
+ .cookie = 0x200,
+ .src_cid = 1000000,
+ .dst_cid = 1100000,
+ };
+ struct vcap_enabled_port eport1 = {
+ .ndev = &test_netdev,
+ .cookie = 0x300,
+ .src_cid = 1100000,
+ .dst_cid = 8000000,
+ };
+ bool ret;
+ int chain;
+
+ vcap_test_api_init(&admin1);
+ list_add_tail(&eport1.list, &admin1.enabled);
+ list_add_tail(&eport2.list, &admin1.enabled);
+ list_add_tail(&eport3.list, &admin1.enabled);
+
+ ret = vcap_path_exist(&test_vctrl, &test_netdev, 1000000);
+ KUNIT_EXPECT_EQ(test, true, ret);
+
+ ret = vcap_path_exist(&test_vctrl, &test_netdev, 1100000);
+ KUNIT_EXPECT_EQ(test, true, ret);
+
+ ret = vcap_path_exist(&test_vctrl, &test_netdev, 1200000);
+ KUNIT_EXPECT_EQ(test, false, ret);
+
+ chain = vcap_get_next_chain(&test_vctrl, &test_netdev, 0);
+ KUNIT_EXPECT_EQ(test, 1000000, chain);
+
+ chain = vcap_get_next_chain(&test_vctrl, &test_netdev, 1000000);
+ KUNIT_EXPECT_EQ(test, 1100000, chain);
+
+ chain = vcap_get_next_chain(&test_vctrl, &test_netdev, 1100000);
+ KUNIT_EXPECT_EQ(test, 8000000, chain);
+}
+
+static struct kunit_case vcap_api_rule_enable_test_cases[] = {
+ KUNIT_CASE(vcap_api_rule_chain_path_test),
+ {}
+};
+
+static struct kunit_suite vcap_api_rule_enable_test_suite = {
+ .name = "VCAP_API_Rule_Enable_Testsuite",
+ .test_cases = vcap_api_rule_enable_test_cases,
+};
+
static struct kunit_suite vcap_api_rule_remove_test_suite = {
.name = "VCAP_API_Rule_Remove_Testsuite",
.test_cases = vcap_api_rule_remove_test_cases,
@@ -2236,6 +2298,7 @@ static struct kunit_suite vcap_api_encoding_test_suite = {
.test_cases = vcap_api_encoding_test_cases,
};
+kunit_test_suite(vcap_api_rule_enable_test_suite);
kunit_test_suite(vcap_api_rule_remove_test_suite);
kunit_test_suite(vcap_api_rule_insert_test_suite);
kunit_test_suite(vcap_api_rule_counter_test_suite);
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_private.h b/drivers/net/ethernet/microchip/vcap/vcap_api_private.h
index 4fd21da97679..df81d9ff502b 100644
--- a/drivers/net/ethernet/microchip/vcap/vcap_api_private.h
+++ b/drivers/net/ethernet/microchip/vcap/vcap_api_private.h
@@ -13,6 +13,12 @@
#define to_intrule(rule) container_of((rule), struct vcap_rule_internal, data)
+enum vcap_rule_state {
+ VCAP_RS_PERMANENT, /* the rule is always stored in HW */
+ VCAP_RS_ENABLED, /* enabled in HW but can be disabled */
+ VCAP_RS_DISABLED, /* disabled (stored in SW) and can be enabled */
+};
+
/* Private VCAP API rule data */
struct vcap_rule_internal {
struct vcap_rule data; /* provided by the client */
@@ -29,6 +35,7 @@ struct vcap_rule_internal {
u32 addr; /* address in the VCAP at insertion */
u32 counter_id; /* counter id (if a dedicated counter is available) */
struct vcap_counter counter; /* last read counter value */
+ enum vcap_rule_state state; /* rule storage state */
};
/* Bit iterator for the VCAP cache streams */
@@ -43,8 +50,6 @@ struct vcap_stream_iter {
/* Check that the control has a valid set of callbacks */
int vcap_api_check(struct vcap_control *ctrl);
-/* Make a shallow copy of the rule without the fields */
-struct vcap_rule_internal *vcap_dup_rule(struct vcap_rule_internal *ri);
 /* Erase the VCAP cache area used for encoding and decoding */
void vcap_erase_cache(struct vcap_rule_internal *ri);
@@ -110,4 +115,10 @@ int vcap_find_keystream_keysets(struct vcap_control *vctrl, enum vcap_type vt,
u32 *keystream, u32 *mskstream, bool mask,
int sw_max, struct vcap_keyset_list *kslist);
+/* Get the keysets that matches the rule key type/mask */
+int vcap_rule_get_keysets(struct vcap_rule_internal *ri,
+ struct vcap_keyset_list *matches);
+/* Decode a rule from the VCAP cache and return a copy */
+struct vcap_rule *vcap_decode_rule(struct vcap_rule_internal *elem);
+
#endif /* __VCAP_API_PRIVATE__ */
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.c
index 5d681d2697cd..5dbfc0d0c369 100644
--- a/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.c
+++ b/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.c
@@ -1,6 +1,10 @@
// SPDX-License-Identifier: BSD-3-Clause
-/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries.
- * Microchip VCAP API Test VCAP Model Data
+/* Copyright (C) 2023 Microchip Technology Inc. and its subsidiaries.
+ * Microchip VCAP test model interface for kunit testing
+ */
+
+/* This file is autogenerated by cml-utils 2023-02-10 11:16:00 +0100.
+ * Commit ID: c30fb4bf0281cd4a7133bdab6682f9e43c872ada
*/
#include <linux/types.h>
@@ -10,177 +14,6 @@
#include "vcap_model_kunit.h"
/* keyfields */
-static const struct vcap_field is0_mll_keyfield[] = {
- [VCAP_KF_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 0,
- .width = 2,
- },
- [VCAP_KF_LOOKUP_FIRST_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 2,
- .width = 1,
- },
- [VCAP_KF_IF_IGR_PORT] = {
- .type = VCAP_FIELD_U32,
- .offset = 3,
- .width = 7,
- },
- [VCAP_KF_8021Q_VLAN_TAGS] = {
- .type = VCAP_FIELD_U32,
- .offset = 10,
- .width = 3,
- },
- [VCAP_KF_8021Q_TPID0] = {
- .type = VCAP_FIELD_U32,
- .offset = 13,
- .width = 3,
- },
- [VCAP_KF_8021Q_VID0] = {
- .type = VCAP_FIELD_U32,
- .offset = 16,
- .width = 12,
- },
- [VCAP_KF_8021Q_TPID1] = {
- .type = VCAP_FIELD_U32,
- .offset = 28,
- .width = 3,
- },
- [VCAP_KF_8021Q_VID1] = {
- .type = VCAP_FIELD_U32,
- .offset = 31,
- .width = 12,
- },
- [VCAP_KF_L2_DMAC] = {
- .type = VCAP_FIELD_U48,
- .offset = 43,
- .width = 48,
- },
- [VCAP_KF_L2_SMAC] = {
- .type = VCAP_FIELD_U48,
- .offset = 91,
- .width = 48,
- },
- [VCAP_KF_ETYPE_MPLS] = {
- .type = VCAP_FIELD_U32,
- .offset = 139,
- .width = 2,
- },
- [VCAP_KF_L4_RNG] = {
- .type = VCAP_FIELD_U32,
- .offset = 141,
- .width = 8,
- },
-};
-
-static const struct vcap_field is0_tri_vid_keyfield[] = {
- [VCAP_KF_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 0,
- .width = 2,
- },
- [VCAP_KF_LOOKUP_FIRST_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 2,
- .width = 1,
- },
- [VCAP_KF_IF_IGR_PORT] = {
- .type = VCAP_FIELD_U32,
- .offset = 3,
- .width = 7,
- },
- [VCAP_KF_LOOKUP_GEN_IDX_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 10,
- .width = 2,
- },
- [VCAP_KF_LOOKUP_GEN_IDX] = {
- .type = VCAP_FIELD_U32,
- .offset = 12,
- .width = 12,
- },
- [VCAP_KF_8021Q_VLAN_TAGS] = {
- .type = VCAP_FIELD_U32,
- .offset = 24,
- .width = 3,
- },
- [VCAP_KF_8021Q_TPID0] = {
- .type = VCAP_FIELD_U32,
- .offset = 27,
- .width = 3,
- },
- [VCAP_KF_8021Q_PCP0] = {
- .type = VCAP_FIELD_U32,
- .offset = 30,
- .width = 3,
- },
- [VCAP_KF_8021Q_DEI0] = {
- .type = VCAP_FIELD_BIT,
- .offset = 33,
- .width = 1,
- },
- [VCAP_KF_8021Q_VID0] = {
- .type = VCAP_FIELD_U32,
- .offset = 34,
- .width = 12,
- },
- [VCAP_KF_8021Q_TPID1] = {
- .type = VCAP_FIELD_U32,
- .offset = 46,
- .width = 3,
- },
- [VCAP_KF_8021Q_PCP1] = {
- .type = VCAP_FIELD_U32,
- .offset = 49,
- .width = 3,
- },
- [VCAP_KF_8021Q_DEI1] = {
- .type = VCAP_FIELD_BIT,
- .offset = 52,
- .width = 1,
- },
- [VCAP_KF_8021Q_VID1] = {
- .type = VCAP_FIELD_U32,
- .offset = 53,
- .width = 12,
- },
- [VCAP_KF_8021Q_TPID2] = {
- .type = VCAP_FIELD_U32,
- .offset = 65,
- .width = 3,
- },
- [VCAP_KF_8021Q_PCP2] = {
- .type = VCAP_FIELD_U32,
- .offset = 68,
- .width = 3,
- },
- [VCAP_KF_8021Q_DEI2] = {
- .type = VCAP_FIELD_BIT,
- .offset = 71,
- .width = 1,
- },
- [VCAP_KF_8021Q_VID2] = {
- .type = VCAP_FIELD_U32,
- .offset = 72,
- .width = 12,
- },
- [VCAP_KF_L4_RNG] = {
- .type = VCAP_FIELD_U32,
- .offset = 84,
- .width = 8,
- },
- [VCAP_KF_OAM_Y1731_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 92,
- .width = 1,
- },
- [VCAP_KF_OAM_MEL_FLAGS] = {
- .type = VCAP_FIELD_U32,
- .offset = 93,
- .width = 7,
- },
-};
-
static const struct vcap_field is0_ll_full_keyfield[] = {
[VCAP_KF_TYPE] = {
.type = VCAP_FIELD_U32,
@@ -344,194 +177,6 @@ static const struct vcap_field is0_ll_full_keyfield[] = {
},
};
-static const struct vcap_field is0_normal_keyfield[] = {
- [VCAP_KF_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 0,
- .width = 2,
- },
- [VCAP_KF_LOOKUP_FIRST_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 2,
- .width = 1,
- },
- [VCAP_KF_LOOKUP_GEN_IDX_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 3,
- .width = 2,
- },
- [VCAP_KF_LOOKUP_GEN_IDX] = {
- .type = VCAP_FIELD_U32,
- .offset = 5,
- .width = 12,
- },
- [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 17,
- .width = 2,
- },
- [VCAP_KF_IF_IGR_PORT_MASK] = {
- .type = VCAP_FIELD_U72,
- .offset = 19,
- .width = 65,
- },
- [VCAP_KF_L2_MC_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 84,
- .width = 1,
- },
- [VCAP_KF_L2_BC_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 85,
- .width = 1,
- },
- [VCAP_KF_8021Q_VLAN_TAGS] = {
- .type = VCAP_FIELD_U32,
- .offset = 86,
- .width = 3,
- },
- [VCAP_KF_8021Q_TPID0] = {
- .type = VCAP_FIELD_U32,
- .offset = 89,
- .width = 3,
- },
- [VCAP_KF_8021Q_PCP0] = {
- .type = VCAP_FIELD_U32,
- .offset = 92,
- .width = 3,
- },
- [VCAP_KF_8021Q_DEI0] = {
- .type = VCAP_FIELD_BIT,
- .offset = 95,
- .width = 1,
- },
- [VCAP_KF_8021Q_VID0] = {
- .type = VCAP_FIELD_U32,
- .offset = 96,
- .width = 12,
- },
- [VCAP_KF_8021Q_TPID1] = {
- .type = VCAP_FIELD_U32,
- .offset = 108,
- .width = 3,
- },
- [VCAP_KF_8021Q_PCP1] = {
- .type = VCAP_FIELD_U32,
- .offset = 111,
- .width = 3,
- },
- [VCAP_KF_8021Q_DEI1] = {
- .type = VCAP_FIELD_BIT,
- .offset = 114,
- .width = 1,
- },
- [VCAP_KF_8021Q_VID1] = {
- .type = VCAP_FIELD_U32,
- .offset = 115,
- .width = 12,
- },
- [VCAP_KF_8021Q_TPID2] = {
- .type = VCAP_FIELD_U32,
- .offset = 127,
- .width = 3,
- },
- [VCAP_KF_8021Q_PCP2] = {
- .type = VCAP_FIELD_U32,
- .offset = 130,
- .width = 3,
- },
- [VCAP_KF_8021Q_DEI2] = {
- .type = VCAP_FIELD_BIT,
- .offset = 133,
- .width = 1,
- },
- [VCAP_KF_8021Q_VID2] = {
- .type = VCAP_FIELD_U32,
- .offset = 134,
- .width = 12,
- },
- [VCAP_KF_DST_ENTRY] = {
- .type = VCAP_FIELD_BIT,
- .offset = 146,
- .width = 1,
- },
- [VCAP_KF_L2_SMAC] = {
- .type = VCAP_FIELD_U48,
- .offset = 147,
- .width = 48,
- },
- [VCAP_KF_IP_MC_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 195,
- .width = 1,
- },
- [VCAP_KF_ETYPE_LEN_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 196,
- .width = 1,
- },
- [VCAP_KF_ETYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 197,
- .width = 16,
- },
- [VCAP_KF_IP_SNAP_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 213,
- .width = 1,
- },
- [VCAP_KF_IP4_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 214,
- .width = 1,
- },
- [VCAP_KF_L3_FRAGMENT_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 215,
- .width = 2,
- },
- [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = {
- .type = VCAP_FIELD_BIT,
- .offset = 217,
- .width = 1,
- },
- [VCAP_KF_L3_OPTIONS_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 218,
- .width = 1,
- },
- [VCAP_KF_L3_DSCP] = {
- .type = VCAP_FIELD_U32,
- .offset = 219,
- .width = 6,
- },
- [VCAP_KF_L3_IP4_SIP] = {
- .type = VCAP_FIELD_U32,
- .offset = 225,
- .width = 32,
- },
- [VCAP_KF_TCP_UDP_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 257,
- .width = 1,
- },
- [VCAP_KF_TCP_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 258,
- .width = 1,
- },
- [VCAP_KF_L4_SPORT] = {
- .type = VCAP_FIELD_U32,
- .offset = 259,
- .width = 16,
- },
- [VCAP_KF_L4_RNG] = {
- .type = VCAP_FIELD_U32,
- .offset = 275,
- .width = 8,
- },
-};
-
static const struct vcap_field is0_normal_7tuple_keyfield[] = {
[VCAP_KF_TYPE] = {
.type = VCAP_FIELD_BIT,
@@ -1095,16 +740,6 @@ static const struct vcap_field is2_mac_etype_keyfield[] = {
.offset = 85,
.width = 1,
},
- [VCAP_KF_L3_SMAC_SIP_MATCH] = {
- .type = VCAP_FIELD_BIT,
- .offset = 86,
- .width = 1,
- },
- [VCAP_KF_L3_DMAC_DIP_MATCH] = {
- .type = VCAP_FIELD_BIT,
- .offset = 87,
- .width = 1,
- },
[VCAP_KF_L3_RT_IS] = {
.type = VCAP_FIELD_BIT,
.offset = 88,
@@ -1381,16 +1016,6 @@ static const struct vcap_field is2_ip4_tcp_udp_keyfield[] = {
.offset = 85,
.width = 1,
},
- [VCAP_KF_L3_SMAC_SIP_MATCH] = {
- .type = VCAP_FIELD_BIT,
- .offset = 86,
- .width = 1,
- },
- [VCAP_KF_L3_DMAC_DIP_MATCH] = {
- .type = VCAP_FIELD_BIT,
- .offset = 87,
- .width = 1,
- },
[VCAP_KF_L3_RT_IS] = {
.type = VCAP_FIELD_BIT,
.offset = 88,
@@ -1594,16 +1219,6 @@ static const struct vcap_field is2_ip4_other_keyfield[] = {
.offset = 85,
.width = 1,
},
- [VCAP_KF_L3_SMAC_SIP_MATCH] = {
- .type = VCAP_FIELD_BIT,
- .offset = 86,
- .width = 1,
- },
- [VCAP_KF_L3_DMAC_DIP_MATCH] = {
- .type = VCAP_FIELD_BIT,
- .offset = 87,
- .width = 1,
- },
[VCAP_KF_L3_RT_IS] = {
.type = VCAP_FIELD_BIT,
.offset = 88,
@@ -1757,26 +1372,11 @@ static const struct vcap_field is2_ip6_std_keyfield[] = {
.offset = 85,
.width = 1,
},
- [VCAP_KF_L3_SMAC_SIP_MATCH] = {
- .type = VCAP_FIELD_BIT,
- .offset = 86,
- .width = 1,
- },
- [VCAP_KF_L3_DMAC_DIP_MATCH] = {
- .type = VCAP_FIELD_BIT,
- .offset = 87,
- .width = 1,
- },
[VCAP_KF_L3_RT_IS] = {
.type = VCAP_FIELD_BIT,
.offset = 88,
.width = 1,
},
- [VCAP_KF_L3_DST_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 89,
- .width = 1,
- },
[VCAP_KF_L3_TTL_GT0] = {
.type = VCAP_FIELD_BIT,
.offset = 90,
@@ -1890,16 +1490,6 @@ static const struct vcap_field is2_ip_7tuple_keyfield[] = {
.offset = 116,
.width = 1,
},
- [VCAP_KF_L3_SMAC_SIP_MATCH] = {
- .type = VCAP_FIELD_BIT,
- .offset = 117,
- .width = 1,
- },
- [VCAP_KF_L3_DMAC_DIP_MATCH] = {
- .type = VCAP_FIELD_BIT,
- .offset = 118,
- .width = 1,
- },
[VCAP_KF_L3_RT_IS] = {
.type = VCAP_FIELD_BIT,
.offset = 119,
@@ -2022,69 +1612,6 @@ static const struct vcap_field is2_ip_7tuple_keyfield[] = {
},
};
-static const struct vcap_field is2_ip6_vid_keyfield[] = {
- [VCAP_KF_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 0,
- .width = 4,
- },
- [VCAP_KF_LOOKUP_FIRST_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 4,
- .width = 1,
- },
- [VCAP_KF_LOOKUP_PAG] = {
- .type = VCAP_FIELD_U32,
- .offset = 5,
- .width = 8,
- },
- [VCAP_KF_ISDX_GT0_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 13,
- .width = 1,
- },
- [VCAP_KF_ISDX_CLS] = {
- .type = VCAP_FIELD_U32,
- .offset = 14,
- .width = 12,
- },
- [VCAP_KF_8021Q_VID_CLS] = {
- .type = VCAP_FIELD_U32,
- .offset = 26,
- .width = 13,
- },
- [VCAP_KF_L3_SMAC_SIP_MATCH] = {
- .type = VCAP_FIELD_BIT,
- .offset = 39,
- .width = 1,
- },
- [VCAP_KF_L3_DMAC_DIP_MATCH] = {
- .type = VCAP_FIELD_BIT,
- .offset = 40,
- .width = 1,
- },
- [VCAP_KF_L3_RT_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 41,
- .width = 1,
- },
- [VCAP_KF_L3_DST_IS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 42,
- .width = 1,
- },
- [VCAP_KF_L3_IP6_DIP] = {
- .type = VCAP_FIELD_U128,
- .offset = 43,
- .width = 128,
- },
- [VCAP_KF_L3_IP6_SIP] = {
- .type = VCAP_FIELD_U128,
- .offset = 171,
- .width = 128,
- },
-};
-
static const struct vcap_field es2_mac_etype_keyfield[] = {
[VCAP_KF_TYPE] = {
.type = VCAP_FIELD_U32,
@@ -2096,16 +1623,6 @@ static const struct vcap_field es2_mac_etype_keyfield[] = {
.offset = 3,
.width = 1,
},
- [VCAP_KF_ACL_GRP_ID] = {
- .type = VCAP_FIELD_U32,
- .offset = 4,
- .width = 8,
- },
- [VCAP_KF_PROT_ACTIVE] = {
- .type = VCAP_FIELD_BIT,
- .offset = 12,
- .width = 1,
- },
[VCAP_KF_L2_MC_IS] = {
.type = VCAP_FIELD_BIT,
.offset = 13,
@@ -2181,16 +1698,6 @@ static const struct vcap_field es2_mac_etype_keyfield[] = {
.offset = 95,
.width = 1,
},
- [VCAP_KF_ES0_ISDX_KEY_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 96,
- .width = 1,
- },
- [VCAP_KF_MIRROR_ENA] = {
- .type = VCAP_FIELD_U32,
- .offset = 97,
- .width = 2,
- },
[VCAP_KF_L2_DMAC] = {
.type = VCAP_FIELD_U48,
.offset = 99,
@@ -2239,16 +1746,6 @@ static const struct vcap_field es2_arp_keyfield[] = {
.offset = 3,
.width = 1,
},
- [VCAP_KF_ACL_GRP_ID] = {
- .type = VCAP_FIELD_U32,
- .offset = 4,
- .width = 8,
- },
- [VCAP_KF_PROT_ACTIVE] = {
- .type = VCAP_FIELD_BIT,
- .offset = 12,
- .width = 1,
- },
[VCAP_KF_L2_MC_IS] = {
.type = VCAP_FIELD_BIT,
.offset = 13,
@@ -2319,16 +1816,6 @@ static const struct vcap_field es2_arp_keyfield[] = {
.offset = 94,
.width = 1,
},
- [VCAP_KF_ES0_ISDX_KEY_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 95,
- .width = 1,
- },
- [VCAP_KF_MIRROR_ENA] = {
- .type = VCAP_FIELD_U32,
- .offset = 96,
- .width = 2,
- },
[VCAP_KF_L2_SMAC] = {
.type = VCAP_FIELD_U48,
.offset = 98,
@@ -2397,16 +1884,6 @@ static const struct vcap_field es2_ip4_tcp_udp_keyfield[] = {
.offset = 3,
.width = 1,
},
- [VCAP_KF_ACL_GRP_ID] = {
- .type = VCAP_FIELD_U32,
- .offset = 4,
- .width = 8,
- },
- [VCAP_KF_PROT_ACTIVE] = {
- .type = VCAP_FIELD_BIT,
- .offset = 12,
- .width = 1,
- },
[VCAP_KF_L2_MC_IS] = {
.type = VCAP_FIELD_BIT,
.offset = 13,
@@ -2482,16 +1959,6 @@ static const struct vcap_field es2_ip4_tcp_udp_keyfield[] = {
.offset = 95,
.width = 1,
},
- [VCAP_KF_ES0_ISDX_KEY_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 96,
- .width = 1,
- },
- [VCAP_KF_MIRROR_ENA] = {
- .type = VCAP_FIELD_U32,
- .offset = 97,
- .width = 2,
- },
[VCAP_KF_IP4_IS] = {
.type = VCAP_FIELD_BIT,
.offset = 99,
@@ -2610,16 +2077,6 @@ static const struct vcap_field es2_ip4_other_keyfield[] = {
.offset = 3,
.width = 1,
},
- [VCAP_KF_ACL_GRP_ID] = {
- .type = VCAP_FIELD_U32,
- .offset = 4,
- .width = 8,
- },
- [VCAP_KF_PROT_ACTIVE] = {
- .type = VCAP_FIELD_BIT,
- .offset = 12,
- .width = 1,
- },
[VCAP_KF_L2_MC_IS] = {
.type = VCAP_FIELD_BIT,
.offset = 13,
@@ -2695,16 +2152,6 @@ static const struct vcap_field es2_ip4_other_keyfield[] = {
.offset = 95,
.width = 1,
},
- [VCAP_KF_ES0_ISDX_KEY_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 96,
- .width = 1,
- },
- [VCAP_KF_MIRROR_ENA] = {
- .type = VCAP_FIELD_U32,
- .offset = 97,
- .width = 2,
- },
[VCAP_KF_IP4_IS] = {
.type = VCAP_FIELD_BIT,
.offset = 99,
@@ -2763,16 +2210,6 @@ static const struct vcap_field es2_ip_7tuple_keyfield[] = {
.offset = 0,
.width = 1,
},
- [VCAP_KF_ACL_GRP_ID] = {
- .type = VCAP_FIELD_U32,
- .offset = 1,
- .width = 8,
- },
- [VCAP_KF_PROT_ACTIVE] = {
- .type = VCAP_FIELD_BIT,
- .offset = 9,
- .width = 1,
- },
[VCAP_KF_L2_MC_IS] = {
.type = VCAP_FIELD_BIT,
.offset = 10,
@@ -2848,16 +2285,6 @@ static const struct vcap_field es2_ip_7tuple_keyfield[] = {
.offset = 92,
.width = 1,
},
- [VCAP_KF_ES0_ISDX_KEY_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 93,
- .width = 1,
- },
- [VCAP_KF_MIRROR_ENA] = {
- .type = VCAP_FIELD_U32,
- .offset = 94,
- .width = 2,
- },
[VCAP_KF_L2_DMAC] = {
.type = VCAP_FIELD_U48,
.offset = 96,
@@ -2970,6 +2397,124 @@ static const struct vcap_field es2_ip_7tuple_keyfield[] = {
},
};
+static const struct vcap_field es2_ip6_std_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 0,
+ .width = 3,
+ },
+ [VCAP_KF_LOOKUP_FIRST_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 3,
+ .width = 1,
+ },
+ [VCAP_KF_L2_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 13,
+ .width = 1,
+ },
+ [VCAP_KF_L2_BC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 14,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_GT0_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 15,
+ .width = 1,
+ },
+ [VCAP_KF_ISDX_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 16,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 28,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 29,
+ .width = 13,
+ },
+ [VCAP_KF_IF_EGR_PORT_MASK_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 42,
+ .width = 3,
+ },
+ [VCAP_KF_IF_EGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 45,
+ .width = 32,
+ },
+ [VCAP_KF_IF_IGR_PORT_SEL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 77,
+ .width = 1,
+ },
+ [VCAP_KF_IF_IGR_PORT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 78,
+ .width = 9,
+ },
+ [VCAP_KF_8021Q_PCP_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 87,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_DEI_CLS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 90,
+ .width = 1,
+ },
+ [VCAP_KF_COSID_CLS] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 91,
+ .width = 3,
+ },
+ [VCAP_KF_L3_DPL_CLS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 94,
+ .width = 1,
+ },
+ [VCAP_KF_L3_RT_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 95,
+ .width = 1,
+ },
+ [VCAP_KF_L3_TTL_GT0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 99,
+ .width = 1,
+ },
+ [VCAP_KF_L3_IP6_SIP] = {
+ .type = VCAP_FIELD_U128,
+ .offset = 100,
+ .width = 128,
+ },
+ [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 228,
+ .width = 1,
+ },
+ [VCAP_KF_L3_IP_PROTO] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 229,
+ .width = 8,
+ },
+ [VCAP_KF_L4_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 237,
+ .width = 16,
+ },
+ [VCAP_KF_L3_PAYLOAD] = {
+ .type = VCAP_FIELD_U48,
+ .offset = 253,
+ .width = 40,
+ },
+};
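(Aside: each entry in these autogenerated tables records where a key lives inside the packed keyset word: .offset is the first bit position and .width the number of bits. A toy extractor over a bitstream makes the encoding concrete — illustrative only; the real driver goes through the vcap_api encode/decode paths, and this helper name is hypothetical:

	static u32 toy_get_field(const unsigned long *stream,
				 const struct vcap_field *f)
	{
		u32 val = 0;
		int i;

		/* Collect up to 32 bits starting at the field's bit offset */
		for (i = 0; i < f->width && i < 32; ++i)
			if (test_bit(f->offset + i, stream))
				val |= BIT(i);
		return val;
	}

So, for example, VCAP_KF_L3_IP6_SIP above occupies bits 100..227 of the ES2 IP6_STD key.)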
+
static const struct vcap_field es2_ip4_vid_keyfield[] = {
[VCAP_KF_LOOKUP_FIRST_IS] = {
.type = VCAP_FIELD_BIT,
@@ -3046,7 +2591,7 @@ static const struct vcap_field es2_ip4_vid_keyfield[] = {
.offset = 48,
.width = 1,
},
- [VCAP_KF_MIRROR_ENA] = {
+ [VCAP_KF_MIRROR_PROBE] = {
.type = VCAP_FIELD_U32,
.offset = 49,
.width = 2,
@@ -3143,26 +2688,11 @@ static const struct vcap_field es2_ip6_vid_keyfield[] = {
/* keyfield_set */
static const struct vcap_set is0_keyfield_set[] = {
- [VCAP_KFS_MLL] = {
- .type_id = 0,
- .sw_per_item = 3,
- .sw_cnt = 4,
- },
- [VCAP_KFS_TRI_VID] = {
- .type_id = 0,
- .sw_per_item = 2,
- .sw_cnt = 6,
- },
[VCAP_KFS_LL_FULL] = {
.type_id = 0,
.sw_per_item = 6,
.sw_cnt = 2,
},
- [VCAP_KFS_NORMAL] = {
- .type_id = 1,
- .sw_per_item = 6,
- .sw_cnt = 2,
- },
[VCAP_KFS_NORMAL_7TUPLE] = {
.type_id = 0,
.sw_per_item = 12,
@@ -3216,11 +2746,6 @@ static const struct vcap_set is2_keyfield_set[] = {
.sw_per_item = 12,
.sw_cnt = 1,
},
- [VCAP_KFS_IP6_VID] = {
- .type_id = 9,
- .sw_per_item = 6,
- .sw_cnt = 2,
- },
};
static const struct vcap_set es2_keyfield_set[] = {
@@ -3249,6 +2774,11 @@ static const struct vcap_set es2_keyfield_set[] = {
.sw_per_item = 12,
.sw_cnt = 1,
},
+ [VCAP_KFS_IP6_STD] = {
+ .type_id = 4,
+ .sw_per_item = 6,
+ .sw_cnt = 2,
+ },
[VCAP_KFS_IP4_VID] = {
.type_id = -1,
.sw_per_item = 3,
@@ -3263,10 +2793,7 @@ static const struct vcap_set es2_keyfield_set[] = {
/* keyfield_set map */
static const struct vcap_field *is0_keyfield_set_map[] = {
- [VCAP_KFS_MLL] = is0_mll_keyfield,
- [VCAP_KFS_TRI_VID] = is0_tri_vid_keyfield,
[VCAP_KFS_LL_FULL] = is0_ll_full_keyfield,
- [VCAP_KFS_NORMAL] = is0_normal_keyfield,
[VCAP_KFS_NORMAL_7TUPLE] = is0_normal_7tuple_keyfield,
[VCAP_KFS_NORMAL_5TUPLE_IP4] = is0_normal_5tuple_ip4_keyfield,
[VCAP_KFS_PURE_5TUPLE_IP4] = is0_pure_5tuple_ip4_keyfield,
@@ -3280,7 +2807,6 @@ static const struct vcap_field *is2_keyfield_set_map[] = {
[VCAP_KFS_IP4_OTHER] = is2_ip4_other_keyfield,
[VCAP_KFS_IP6_STD] = is2_ip6_std_keyfield,
[VCAP_KFS_IP_7TUPLE] = is2_ip_7tuple_keyfield,
- [VCAP_KFS_IP6_VID] = is2_ip6_vid_keyfield,
};
static const struct vcap_field *es2_keyfield_set_map[] = {
@@ -3289,16 +2815,14 @@ static const struct vcap_field *es2_keyfield_set_map[] = {
[VCAP_KFS_IP4_TCP_UDP] = es2_ip4_tcp_udp_keyfield,
[VCAP_KFS_IP4_OTHER] = es2_ip4_other_keyfield,
[VCAP_KFS_IP_7TUPLE] = es2_ip_7tuple_keyfield,
+ [VCAP_KFS_IP6_STD] = es2_ip6_std_keyfield,
[VCAP_KFS_IP4_VID] = es2_ip4_vid_keyfield,
[VCAP_KFS_IP6_VID] = es2_ip6_vid_keyfield,
};
/* keyfield_set map sizes */
static int is0_keyfield_set_map_size[] = {
- [VCAP_KFS_MLL] = ARRAY_SIZE(is0_mll_keyfield),
- [VCAP_KFS_TRI_VID] = ARRAY_SIZE(is0_tri_vid_keyfield),
[VCAP_KFS_LL_FULL] = ARRAY_SIZE(is0_ll_full_keyfield),
- [VCAP_KFS_NORMAL] = ARRAY_SIZE(is0_normal_keyfield),
[VCAP_KFS_NORMAL_7TUPLE] = ARRAY_SIZE(is0_normal_7tuple_keyfield),
[VCAP_KFS_NORMAL_5TUPLE_IP4] = ARRAY_SIZE(is0_normal_5tuple_ip4_keyfield),
[VCAP_KFS_PURE_5TUPLE_IP4] = ARRAY_SIZE(is0_pure_5tuple_ip4_keyfield),
@@ -3312,7 +2836,6 @@ static int is2_keyfield_set_map_size[] = {
[VCAP_KFS_IP4_OTHER] = ARRAY_SIZE(is2_ip4_other_keyfield),
[VCAP_KFS_IP6_STD] = ARRAY_SIZE(is2_ip6_std_keyfield),
[VCAP_KFS_IP_7TUPLE] = ARRAY_SIZE(is2_ip_7tuple_keyfield),
- [VCAP_KFS_IP6_VID] = ARRAY_SIZE(is2_ip6_vid_keyfield),
};
static int es2_keyfield_set_map_size[] = {
@@ -3321,387 +2844,12 @@ static int es2_keyfield_set_map_size[] = {
[VCAP_KFS_IP4_TCP_UDP] = ARRAY_SIZE(es2_ip4_tcp_udp_keyfield),
[VCAP_KFS_IP4_OTHER] = ARRAY_SIZE(es2_ip4_other_keyfield),
[VCAP_KFS_IP_7TUPLE] = ARRAY_SIZE(es2_ip_7tuple_keyfield),
+ [VCAP_KFS_IP6_STD] = ARRAY_SIZE(es2_ip6_std_keyfield),
[VCAP_KFS_IP4_VID] = ARRAY_SIZE(es2_ip4_vid_keyfield),
[VCAP_KFS_IP6_VID] = ARRAY_SIZE(es2_ip6_vid_keyfield),
};
/* actionfields */
-static const struct vcap_field is0_mlbs_actionfield[] = {
- [VCAP_AF_TYPE] = {
- .type = VCAP_FIELD_BIT,
- .offset = 0,
- .width = 1,
- },
- [VCAP_AF_COSID_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 1,
- .width = 1,
- },
- [VCAP_AF_COSID_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 2,
- .width = 3,
- },
- [VCAP_AF_QOS_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 5,
- .width = 1,
- },
- [VCAP_AF_QOS_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 6,
- .width = 3,
- },
- [VCAP_AF_DP_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 9,
- .width = 1,
- },
- [VCAP_AF_DP_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 10,
- .width = 2,
- },
- [VCAP_AF_MAP_LOOKUP_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 12,
- .width = 2,
- },
- [VCAP_AF_MAP_KEY] = {
- .type = VCAP_FIELD_U32,
- .offset = 14,
- .width = 3,
- },
- [VCAP_AF_MAP_IDX] = {
- .type = VCAP_FIELD_U32,
- .offset = 17,
- .width = 9,
- },
- [VCAP_AF_CLS_VID_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 26,
- .width = 3,
- },
- [VCAP_AF_GVID_ADD_REPLACE_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 29,
- .width = 3,
- },
- [VCAP_AF_VID_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 32,
- .width = 13,
- },
- [VCAP_AF_ISDX_ADD_REPLACE_SEL] = {
- .type = VCAP_FIELD_BIT,
- .offset = 45,
- .width = 1,
- },
- [VCAP_AF_ISDX_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 46,
- .width = 12,
- },
- [VCAP_AF_FWD_DIS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 58,
- .width = 1,
- },
- [VCAP_AF_CPU_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 59,
- .width = 1,
- },
- [VCAP_AF_CPU_Q] = {
- .type = VCAP_FIELD_U32,
- .offset = 60,
- .width = 3,
- },
- [VCAP_AF_OAM_Y1731_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 63,
- .width = 3,
- },
- [VCAP_AF_OAM_TWAMP_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 66,
- .width = 1,
- },
- [VCAP_AF_OAM_IP_BFD_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 67,
- .width = 1,
- },
- [VCAP_AF_TC_LABEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 68,
- .width = 3,
- },
- [VCAP_AF_TTL_LABEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 71,
- .width = 3,
- },
- [VCAP_AF_NUM_VLD_LABELS] = {
- .type = VCAP_FIELD_U32,
- .offset = 74,
- .width = 2,
- },
- [VCAP_AF_FWD_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 76,
- .width = 3,
- },
- [VCAP_AF_MPLS_OAM_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 79,
- .width = 3,
- },
- [VCAP_AF_MPLS_MEP_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 82,
- .width = 1,
- },
- [VCAP_AF_MPLS_MIP_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 83,
- .width = 1,
- },
- [VCAP_AF_MPLS_OAM_FLAVOR] = {
- .type = VCAP_FIELD_BIT,
- .offset = 84,
- .width = 1,
- },
- [VCAP_AF_MPLS_IP_CTRL_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 85,
- .width = 1,
- },
- [VCAP_AF_PAG_OVERRIDE_MASK] = {
- .type = VCAP_FIELD_U32,
- .offset = 86,
- .width = 8,
- },
- [VCAP_AF_PAG_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 94,
- .width = 8,
- },
- [VCAP_AF_S2_KEY_SEL_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 102,
- .width = 1,
- },
- [VCAP_AF_S2_KEY_SEL_IDX] = {
- .type = VCAP_FIELD_U32,
- .offset = 103,
- .width = 6,
- },
- [VCAP_AF_PIPELINE_FORCE_ENA] = {
- .type = VCAP_FIELD_U32,
- .offset = 109,
- .width = 2,
- },
- [VCAP_AF_PIPELINE_ACT_SEL] = {
- .type = VCAP_FIELD_BIT,
- .offset = 111,
- .width = 1,
- },
- [VCAP_AF_PIPELINE_PT] = {
- .type = VCAP_FIELD_U32,
- .offset = 112,
- .width = 5,
- },
- [VCAP_AF_NXT_KEY_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 117,
- .width = 5,
- },
- [VCAP_AF_NXT_NORM_W16_OFFSET] = {
- .type = VCAP_FIELD_U32,
- .offset = 122,
- .width = 5,
- },
- [VCAP_AF_NXT_OFFSET_FROM_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 127,
- .width = 2,
- },
- [VCAP_AF_NXT_TYPE_AFTER_OFFSET] = {
- .type = VCAP_FIELD_U32,
- .offset = 129,
- .width = 2,
- },
- [VCAP_AF_NXT_NORMALIZE] = {
- .type = VCAP_FIELD_BIT,
- .offset = 131,
- .width = 1,
- },
- [VCAP_AF_NXT_IDX_CTRL] = {
- .type = VCAP_FIELD_U32,
- .offset = 132,
- .width = 3,
- },
- [VCAP_AF_NXT_IDX] = {
- .type = VCAP_FIELD_U32,
- .offset = 135,
- .width = 12,
- },
-};
-
-static const struct vcap_field is0_mlbs_reduced_actionfield[] = {
- [VCAP_AF_TYPE] = {
- .type = VCAP_FIELD_BIT,
- .offset = 0,
- .width = 1,
- },
- [VCAP_AF_COSID_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 1,
- .width = 1,
- },
- [VCAP_AF_COSID_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 2,
- .width = 3,
- },
- [VCAP_AF_QOS_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 5,
- .width = 1,
- },
- [VCAP_AF_QOS_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 6,
- .width = 3,
- },
- [VCAP_AF_DP_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 9,
- .width = 1,
- },
- [VCAP_AF_DP_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 10,
- .width = 2,
- },
- [VCAP_AF_MAP_LOOKUP_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 12,
- .width = 2,
- },
- [VCAP_AF_ISDX_ADD_REPLACE_SEL] = {
- .type = VCAP_FIELD_BIT,
- .offset = 14,
- .width = 1,
- },
- [VCAP_AF_ISDX_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 15,
- .width = 12,
- },
- [VCAP_AF_FWD_DIS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 27,
- .width = 1,
- },
- [VCAP_AF_CPU_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 28,
- .width = 1,
- },
- [VCAP_AF_CPU_Q] = {
- .type = VCAP_FIELD_U32,
- .offset = 29,
- .width = 3,
- },
- [VCAP_AF_TC_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 32,
- .width = 1,
- },
- [VCAP_AF_TTL_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 33,
- .width = 1,
- },
- [VCAP_AF_FWD_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 34,
- .width = 3,
- },
- [VCAP_AF_MPLS_OAM_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 37,
- .width = 3,
- },
- [VCAP_AF_MPLS_MEP_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 40,
- .width = 1,
- },
- [VCAP_AF_MPLS_MIP_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 41,
- .width = 1,
- },
- [VCAP_AF_MPLS_OAM_FLAVOR] = {
- .type = VCAP_FIELD_BIT,
- .offset = 42,
- .width = 1,
- },
- [VCAP_AF_MPLS_IP_CTRL_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 43,
- .width = 1,
- },
- [VCAP_AF_PIPELINE_FORCE_ENA] = {
- .type = VCAP_FIELD_U32,
- .offset = 44,
- .width = 2,
- },
- [VCAP_AF_PIPELINE_ACT_SEL] = {
- .type = VCAP_FIELD_BIT,
- .offset = 46,
- .width = 1,
- },
- [VCAP_AF_PIPELINE_PT_REDUCED] = {
- .type = VCAP_FIELD_U32,
- .offset = 47,
- .width = 3,
- },
- [VCAP_AF_NXT_KEY_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 50,
- .width = 5,
- },
- [VCAP_AF_NXT_NORM_W32_OFFSET] = {
- .type = VCAP_FIELD_U32,
- .offset = 55,
- .width = 2,
- },
- [VCAP_AF_NXT_TYPE_AFTER_OFFSET] = {
- .type = VCAP_FIELD_U32,
- .offset = 57,
- .width = 2,
- },
- [VCAP_AF_NXT_NORMALIZE] = {
- .type = VCAP_FIELD_BIT,
- .offset = 59,
- .width = 1,
- },
- [VCAP_AF_NXT_IDX_CTRL] = {
- .type = VCAP_FIELD_U32,
- .offset = 60,
- .width = 3,
- },
- [VCAP_AF_NXT_IDX] = {
- .type = VCAP_FIELD_U32,
- .offset = 63,
- .width = 12,
- },
-};
-
static const struct vcap_field is0_classification_actionfield[] = {
[VCAP_AF_TYPE] = {
.type = VCAP_FIELD_BIT,
@@ -3718,16 +2866,6 @@ static const struct vcap_field is0_classification_actionfield[] = {
.offset = 2,
.width = 6,
},
- [VCAP_AF_COSID_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 8,
- .width = 1,
- },
- [VCAP_AF_COSID_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 9,
- .width = 3,
- },
[VCAP_AF_QOS_ENA] = {
.type = VCAP_FIELD_BIT,
.offset = 12,
@@ -3788,46 +2926,11 @@ static const struct vcap_field is0_classification_actionfield[] = {
.offset = 39,
.width = 3,
},
- [VCAP_AF_GVID_ADD_REPLACE_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 42,
- .width = 3,
- },
[VCAP_AF_VID_VAL] = {
.type = VCAP_FIELD_U32,
.offset = 45,
.width = 13,
},
- [VCAP_AF_VLAN_POP_CNT_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 58,
- .width = 1,
- },
- [VCAP_AF_VLAN_POP_CNT] = {
- .type = VCAP_FIELD_U32,
- .offset = 59,
- .width = 2,
- },
- [VCAP_AF_VLAN_PUSH_CNT_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 61,
- .width = 1,
- },
- [VCAP_AF_VLAN_PUSH_CNT] = {
- .type = VCAP_FIELD_U32,
- .offset = 62,
- .width = 2,
- },
- [VCAP_AF_TPID_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 64,
- .width = 2,
- },
- [VCAP_AF_VLAN_WAS_TAGGED] = {
- .type = VCAP_FIELD_U32,
- .offset = 66,
- .width = 2,
- },
[VCAP_AF_ISDX_ADD_REPLACE_SEL] = {
.type = VCAP_FIELD_BIT,
.offset = 68,
@@ -3838,71 +2941,6 @@ static const struct vcap_field is0_classification_actionfield[] = {
.offset = 69,
.width = 12,
},
- [VCAP_AF_RT_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 81,
- .width = 2,
- },
- [VCAP_AF_LPM_AFFIX_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 83,
- .width = 1,
- },
- [VCAP_AF_LPM_AFFIX_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 84,
- .width = 10,
- },
- [VCAP_AF_RLEG_DMAC_CHK_DIS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 94,
- .width = 1,
- },
- [VCAP_AF_TTL_DECR_DIS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 95,
- .width = 1,
- },
- [VCAP_AF_L3_MAC_UPDATE_DIS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 96,
- .width = 1,
- },
- [VCAP_AF_FWD_DIS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 97,
- .width = 1,
- },
- [VCAP_AF_CPU_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 98,
- .width = 1,
- },
- [VCAP_AF_CPU_Q] = {
- .type = VCAP_FIELD_U32,
- .offset = 99,
- .width = 3,
- },
- [VCAP_AF_MIP_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 102,
- .width = 2,
- },
- [VCAP_AF_OAM_Y1731_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 104,
- .width = 3,
- },
- [VCAP_AF_OAM_TWAMP_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 107,
- .width = 1,
- },
- [VCAP_AF_OAM_IP_BFD_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 108,
- .width = 1,
- },
[VCAP_AF_PAG_OVERRIDE_MASK] = {
.type = VCAP_FIELD_U32,
.offset = 109,
@@ -3913,76 +2951,6 @@ static const struct vcap_field is0_classification_actionfield[] = {
.offset = 117,
.width = 8,
},
- [VCAP_AF_S2_KEY_SEL_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 125,
- .width = 1,
- },
- [VCAP_AF_S2_KEY_SEL_IDX] = {
- .type = VCAP_FIELD_U32,
- .offset = 126,
- .width = 6,
- },
- [VCAP_AF_INJ_MASQ_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 132,
- .width = 1,
- },
- [VCAP_AF_INJ_MASQ_PORT] = {
- .type = VCAP_FIELD_U32,
- .offset = 133,
- .width = 7,
- },
- [VCAP_AF_LPORT_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 140,
- .width = 1,
- },
- [VCAP_AF_INJ_MASQ_LPORT] = {
- .type = VCAP_FIELD_U32,
- .offset = 141,
- .width = 7,
- },
- [VCAP_AF_PIPELINE_FORCE_ENA] = {
- .type = VCAP_FIELD_U32,
- .offset = 148,
- .width = 2,
- },
- [VCAP_AF_PIPELINE_ACT_SEL] = {
- .type = VCAP_FIELD_BIT,
- .offset = 150,
- .width = 1,
- },
- [VCAP_AF_PIPELINE_PT] = {
- .type = VCAP_FIELD_U32,
- .offset = 151,
- .width = 5,
- },
- [VCAP_AF_NXT_KEY_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 156,
- .width = 5,
- },
- [VCAP_AF_NXT_NORM_W16_OFFSET] = {
- .type = VCAP_FIELD_U32,
- .offset = 161,
- .width = 5,
- },
- [VCAP_AF_NXT_OFFSET_FROM_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 166,
- .width = 2,
- },
- [VCAP_AF_NXT_TYPE_AFTER_OFFSET] = {
- .type = VCAP_FIELD_U32,
- .offset = 168,
- .width = 2,
- },
- [VCAP_AF_NXT_NORMALIZE] = {
- .type = VCAP_FIELD_BIT,
- .offset = 170,
- .width = 1,
- },
[VCAP_AF_NXT_IDX_CTRL] = {
.type = VCAP_FIELD_U32,
.offset = 171,
@@ -4006,16 +2974,6 @@ static const struct vcap_field is0_full_actionfield[] = {
.offset = 1,
.width = 6,
},
- [VCAP_AF_COSID_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 7,
- .width = 1,
- },
- [VCAP_AF_COSID_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 8,
- .width = 3,
- },
[VCAP_AF_QOS_ENA] = {
.type = VCAP_FIELD_BIT,
.offset = 11,
@@ -4076,46 +3034,11 @@ static const struct vcap_field is0_full_actionfield[] = {
.offset = 38,
.width = 3,
},
- [VCAP_AF_GVID_ADD_REPLACE_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 41,
- .width = 3,
- },
[VCAP_AF_VID_VAL] = {
.type = VCAP_FIELD_U32,
.offset = 44,
.width = 13,
},
- [VCAP_AF_VLAN_POP_CNT_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 57,
- .width = 1,
- },
- [VCAP_AF_VLAN_POP_CNT] = {
- .type = VCAP_FIELD_U32,
- .offset = 58,
- .width = 2,
- },
- [VCAP_AF_VLAN_PUSH_CNT_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 60,
- .width = 1,
- },
- [VCAP_AF_VLAN_PUSH_CNT] = {
- .type = VCAP_FIELD_U32,
- .offset = 61,
- .width = 2,
- },
- [VCAP_AF_TPID_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 63,
- .width = 2,
- },
- [VCAP_AF_VLAN_WAS_TAGGED] = {
- .type = VCAP_FIELD_U32,
- .offset = 65,
- .width = 2,
- },
[VCAP_AF_ISDX_ADD_REPLACE_SEL] = {
.type = VCAP_FIELD_BIT,
.offset = 67,
@@ -4136,126 +3059,6 @@ static const struct vcap_field is0_full_actionfield[] = {
.offset = 83,
.width = 65,
},
- [VCAP_AF_RT_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 148,
- .width = 2,
- },
- [VCAP_AF_LPM_AFFIX_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 150,
- .width = 1,
- },
- [VCAP_AF_LPM_AFFIX_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 151,
- .width = 10,
- },
- [VCAP_AF_RLEG_DMAC_CHK_DIS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 161,
- .width = 1,
- },
- [VCAP_AF_TTL_DECR_DIS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 162,
- .width = 1,
- },
- [VCAP_AF_L3_MAC_UPDATE_DIS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 163,
- .width = 1,
- },
- [VCAP_AF_CPU_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 164,
- .width = 1,
- },
- [VCAP_AF_CPU_Q] = {
- .type = VCAP_FIELD_U32,
- .offset = 165,
- .width = 3,
- },
- [VCAP_AF_MIP_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 168,
- .width = 2,
- },
- [VCAP_AF_OAM_Y1731_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 170,
- .width = 3,
- },
- [VCAP_AF_OAM_TWAMP_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 173,
- .width = 1,
- },
- [VCAP_AF_OAM_IP_BFD_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 174,
- .width = 1,
- },
- [VCAP_AF_RSVD_LBL_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 175,
- .width = 4,
- },
- [VCAP_AF_TC_LABEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 179,
- .width = 3,
- },
- [VCAP_AF_TTL_LABEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 182,
- .width = 3,
- },
- [VCAP_AF_NUM_VLD_LABELS] = {
- .type = VCAP_FIELD_U32,
- .offset = 185,
- .width = 2,
- },
- [VCAP_AF_FWD_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 187,
- .width = 3,
- },
- [VCAP_AF_MPLS_OAM_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 190,
- .width = 3,
- },
- [VCAP_AF_MPLS_MEP_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 193,
- .width = 1,
- },
- [VCAP_AF_MPLS_MIP_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 194,
- .width = 1,
- },
- [VCAP_AF_MPLS_OAM_FLAVOR] = {
- .type = VCAP_FIELD_BIT,
- .offset = 195,
- .width = 1,
- },
- [VCAP_AF_MPLS_IP_CTRL_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 196,
- .width = 1,
- },
- [VCAP_AF_CUSTOM_ACE_ENA] = {
- .type = VCAP_FIELD_U32,
- .offset = 197,
- .width = 5,
- },
- [VCAP_AF_CUSTOM_ACE_OFFSET] = {
- .type = VCAP_FIELD_U32,
- .offset = 202,
- .width = 2,
- },
[VCAP_AF_PAG_OVERRIDE_MASK] = {
.type = VCAP_FIELD_U32,
.offset = 204,
@@ -4266,86 +3069,6 @@ static const struct vcap_field is0_full_actionfield[] = {
.offset = 212,
.width = 8,
},
- [VCAP_AF_S2_KEY_SEL_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 220,
- .width = 1,
- },
- [VCAP_AF_S2_KEY_SEL_IDX] = {
- .type = VCAP_FIELD_U32,
- .offset = 221,
- .width = 6,
- },
- [VCAP_AF_INJ_MASQ_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 227,
- .width = 1,
- },
- [VCAP_AF_INJ_MASQ_PORT] = {
- .type = VCAP_FIELD_U32,
- .offset = 228,
- .width = 7,
- },
- [VCAP_AF_LPORT_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 235,
- .width = 1,
- },
- [VCAP_AF_INJ_MASQ_LPORT] = {
- .type = VCAP_FIELD_U32,
- .offset = 236,
- .width = 7,
- },
- [VCAP_AF_MATCH_ID] = {
- .type = VCAP_FIELD_U32,
- .offset = 243,
- .width = 16,
- },
- [VCAP_AF_MATCH_ID_MASK] = {
- .type = VCAP_FIELD_U32,
- .offset = 259,
- .width = 16,
- },
- [VCAP_AF_PIPELINE_FORCE_ENA] = {
- .type = VCAP_FIELD_U32,
- .offset = 275,
- .width = 2,
- },
- [VCAP_AF_PIPELINE_ACT_SEL] = {
- .type = VCAP_FIELD_BIT,
- .offset = 277,
- .width = 1,
- },
- [VCAP_AF_PIPELINE_PT] = {
- .type = VCAP_FIELD_U32,
- .offset = 278,
- .width = 5,
- },
- [VCAP_AF_NXT_KEY_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 283,
- .width = 5,
- },
- [VCAP_AF_NXT_NORM_W16_OFFSET] = {
- .type = VCAP_FIELD_U32,
- .offset = 288,
- .width = 5,
- },
- [VCAP_AF_NXT_OFFSET_FROM_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 293,
- .width = 2,
- },
- [VCAP_AF_NXT_TYPE_AFTER_OFFSET] = {
- .type = VCAP_FIELD_U32,
- .offset = 295,
- .width = 2,
- },
- [VCAP_AF_NXT_NORMALIZE] = {
- .type = VCAP_FIELD_BIT,
- .offset = 297,
- .width = 1,
- },
[VCAP_AF_NXT_IDX_CTRL] = {
.type = VCAP_FIELD_U32,
.offset = 298,
@@ -4364,16 +3087,6 @@ static const struct vcap_field is0_class_reduced_actionfield[] = {
.offset = 0,
.width = 1,
},
- [VCAP_AF_COSID_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 1,
- .width = 1,
- },
- [VCAP_AF_COSID_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 2,
- .width = 3,
- },
[VCAP_AF_QOS_ENA] = {
.type = VCAP_FIELD_BIT,
.offset = 5,
@@ -4409,46 +3122,11 @@ static const struct vcap_field is0_class_reduced_actionfield[] = {
.offset = 17,
.width = 3,
},
- [VCAP_AF_GVID_ADD_REPLACE_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 20,
- .width = 3,
- },
[VCAP_AF_VID_VAL] = {
.type = VCAP_FIELD_U32,
.offset = 23,
.width = 13,
},
- [VCAP_AF_VLAN_POP_CNT_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 36,
- .width = 1,
- },
- [VCAP_AF_VLAN_POP_CNT] = {
- .type = VCAP_FIELD_U32,
- .offset = 37,
- .width = 2,
- },
- [VCAP_AF_VLAN_PUSH_CNT_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 39,
- .width = 1,
- },
- [VCAP_AF_VLAN_PUSH_CNT] = {
- .type = VCAP_FIELD_U32,
- .offset = 40,
- .width = 2,
- },
- [VCAP_AF_TPID_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 42,
- .width = 2,
- },
- [VCAP_AF_VLAN_WAS_TAGGED] = {
- .type = VCAP_FIELD_U32,
- .offset = 44,
- .width = 2,
- },
[VCAP_AF_ISDX_ADD_REPLACE_SEL] = {
.type = VCAP_FIELD_BIT,
.offset = 46,
@@ -4459,61 +3137,6 @@ static const struct vcap_field is0_class_reduced_actionfield[] = {
.offset = 47,
.width = 12,
},
- [VCAP_AF_FWD_DIS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 59,
- .width = 1,
- },
- [VCAP_AF_CPU_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 60,
- .width = 1,
- },
- [VCAP_AF_CPU_Q] = {
- .type = VCAP_FIELD_U32,
- .offset = 61,
- .width = 3,
- },
- [VCAP_AF_MIP_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 64,
- .width = 2,
- },
- [VCAP_AF_OAM_Y1731_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 66,
- .width = 3,
- },
- [VCAP_AF_LPORT_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 69,
- .width = 1,
- },
- [VCAP_AF_INJ_MASQ_LPORT] = {
- .type = VCAP_FIELD_U32,
- .offset = 70,
- .width = 7,
- },
- [VCAP_AF_PIPELINE_FORCE_ENA] = {
- .type = VCAP_FIELD_U32,
- .offset = 77,
- .width = 2,
- },
- [VCAP_AF_PIPELINE_ACT_SEL] = {
- .type = VCAP_FIELD_BIT,
- .offset = 79,
- .width = 1,
- },
- [VCAP_AF_PIPELINE_PT] = {
- .type = VCAP_FIELD_U32,
- .offset = 80,
- .width = 5,
- },
- [VCAP_AF_NXT_KEY_TYPE] = {
- .type = VCAP_FIELD_U32,
- .offset = 85,
- .width = 5,
- },
[VCAP_AF_NXT_IDX_CTRL] = {
.type = VCAP_FIELD_U32,
.offset = 90,
@@ -4527,11 +3150,6 @@ static const struct vcap_field is0_class_reduced_actionfield[] = {
};
static const struct vcap_field is2_base_type_actionfield[] = {
- [VCAP_AF_IS_INNER_ACL] = {
- .type = VCAP_FIELD_BIT,
- .offset = 0,
- .width = 1,
- },
[VCAP_AF_PIPELINE_FORCE_ENA] = {
.type = VCAP_FIELD_BIT,
.offset = 1,
@@ -4562,11 +3180,6 @@ static const struct vcap_field is2_base_type_actionfield[] = {
.offset = 10,
.width = 3,
},
- [VCAP_AF_CPU_DIS] = {
- .type = VCAP_FIELD_BIT,
- .offset = 13,
- .width = 1,
- },
[VCAP_AF_LRN_DIS] = {
.type = VCAP_FIELD_BIT,
.offset = 14,
@@ -4592,11 +3205,6 @@ static const struct vcap_field is2_base_type_actionfield[] = {
.offset = 23,
.width = 1,
},
- [VCAP_AF_DLB_OFFSET] = {
- .type = VCAP_FIELD_U32,
- .offset = 24,
- .width = 3,
- },
[VCAP_AF_MASK_MODE] = {
.type = VCAP_FIELD_U32,
.offset = 27,
@@ -4607,51 +3215,11 @@ static const struct vcap_field is2_base_type_actionfield[] = {
.offset = 30,
.width = 68,
},
- [VCAP_AF_RSDX_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 98,
- .width = 1,
- },
- [VCAP_AF_RSDX_VAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 99,
- .width = 12,
- },
[VCAP_AF_MIRROR_PROBE] = {
.type = VCAP_FIELD_U32,
.offset = 111,
.width = 2,
},
- [VCAP_AF_REW_CMD] = {
- .type = VCAP_FIELD_U32,
- .offset = 113,
- .width = 11,
- },
- [VCAP_AF_TTL_UPDATE_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 124,
- .width = 1,
- },
- [VCAP_AF_SAM_SEQ_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 125,
- .width = 1,
- },
- [VCAP_AF_TCP_UDP_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 126,
- .width = 1,
- },
- [VCAP_AF_TCP_UDP_DPORT] = {
- .type = VCAP_FIELD_U32,
- .offset = 127,
- .width = 16,
- },
- [VCAP_AF_TCP_UDP_SPORT] = {
- .type = VCAP_FIELD_U32,
- .offset = 143,
- .width = 16,
- },
[VCAP_AF_MATCH_ID] = {
.type = VCAP_FIELD_U32,
.offset = 159,
@@ -4667,56 +3235,6 @@ static const struct vcap_field is2_base_type_actionfield[] = {
.offset = 191,
.width = 12,
},
- [VCAP_AF_SWAP_MAC_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 203,
- .width = 1,
- },
- [VCAP_AF_ACL_RT_MODE] = {
- .type = VCAP_FIELD_U32,
- .offset = 204,
- .width = 4,
- },
- [VCAP_AF_ACL_MAC] = {
- .type = VCAP_FIELD_U48,
- .offset = 208,
- .width = 48,
- },
- [VCAP_AF_DMAC_OFFSET_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 256,
- .width = 1,
- },
- [VCAP_AF_PTP_MASTER_SEL] = {
- .type = VCAP_FIELD_U32,
- .offset = 257,
- .width = 2,
- },
- [VCAP_AF_LOG_MSG_INTERVAL] = {
- .type = VCAP_FIELD_U32,
- .offset = 259,
- .width = 4,
- },
- [VCAP_AF_SIP_IDX] = {
- .type = VCAP_FIELD_U32,
- .offset = 263,
- .width = 5,
- },
- [VCAP_AF_RLEG_STAT_IDX] = {
- .type = VCAP_FIELD_U32,
- .offset = 268,
- .width = 3,
- },
- [VCAP_AF_IGR_ACL_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 271,
- .width = 1,
- },
- [VCAP_AF_EGR_ACL_ENA] = {
- .type = VCAP_FIELD_BIT,
- .offset = 272,
- .width = 1,
- },
};
static const struct vcap_field es2_base_type_actionfield[] = {
@@ -4794,16 +3312,6 @@ static const struct vcap_field es2_base_type_actionfield[] = {
/* actionfield_set */
static const struct vcap_set is0_actionfield_set[] = {
- [VCAP_AFS_MLBS] = {
- .type_id = 0,
- .sw_per_item = 2,
- .sw_cnt = 6,
- },
- [VCAP_AFS_MLBS_REDUCED] = {
- .type_id = 0,
- .sw_per_item = 1,
- .sw_cnt = 12,
- },
[VCAP_AFS_CLASSIFICATION] = {
.type_id = 1,
.sw_per_item = 2,
@@ -4839,8 +3347,6 @@ static const struct vcap_set es2_actionfield_set[] = {
/* actionfield_set map */
static const struct vcap_field *is0_actionfield_set_map[] = {
- [VCAP_AFS_MLBS] = is0_mlbs_actionfield,
- [VCAP_AFS_MLBS_REDUCED] = is0_mlbs_reduced_actionfield,
[VCAP_AFS_CLASSIFICATION] = is0_classification_actionfield,
[VCAP_AFS_FULL] = is0_full_actionfield,
[VCAP_AFS_CLASS_REDUCED] = is0_class_reduced_actionfield,
@@ -4856,8 +3362,6 @@ static const struct vcap_field *es2_actionfield_set_map[] = {
/* actionfield_set map size */
static int is0_actionfield_set_map_size[] = {
- [VCAP_AFS_MLBS] = ARRAY_SIZE(is0_mlbs_actionfield),
- [VCAP_AFS_MLBS_REDUCED] = ARRAY_SIZE(is0_mlbs_reduced_actionfield),
[VCAP_AFS_CLASSIFICATION] = ARRAY_SIZE(is0_classification_actionfield),
[VCAP_AFS_FULL] = ARRAY_SIZE(is0_full_actionfield),
[VCAP_AFS_CLASS_REDUCED] = ARRAY_SIZE(is0_class_reduced_actionfield),
@@ -5244,17 +3748,22 @@ static const char * const vcap_keyfield_set_names[] = {
[VCAP_KFS_IP4_OTHER] = "VCAP_KFS_IP4_OTHER",
[VCAP_KFS_IP4_TCP_UDP] = "VCAP_KFS_IP4_TCP_UDP",
[VCAP_KFS_IP4_VID] = "VCAP_KFS_IP4_VID",
+ [VCAP_KFS_IP6_OTHER] = "VCAP_KFS_IP6_OTHER",
[VCAP_KFS_IP6_STD] = "VCAP_KFS_IP6_STD",
+ [VCAP_KFS_IP6_TCP_UDP] = "VCAP_KFS_IP6_TCP_UDP",
[VCAP_KFS_IP6_VID] = "VCAP_KFS_IP6_VID",
[VCAP_KFS_IP_7TUPLE] = "VCAP_KFS_IP_7TUPLE",
+ [VCAP_KFS_ISDX] = "VCAP_KFS_ISDX",
[VCAP_KFS_LL_FULL] = "VCAP_KFS_LL_FULL",
[VCAP_KFS_MAC_ETYPE] = "VCAP_KFS_MAC_ETYPE",
- [VCAP_KFS_MLL] = "VCAP_KFS_MLL",
- [VCAP_KFS_NORMAL] = "VCAP_KFS_NORMAL",
+ [VCAP_KFS_MAC_LLC] = "VCAP_KFS_MAC_LLC",
+ [VCAP_KFS_MAC_SNAP] = "VCAP_KFS_MAC_SNAP",
[VCAP_KFS_NORMAL_5TUPLE_IP4] = "VCAP_KFS_NORMAL_5TUPLE_IP4",
[VCAP_KFS_NORMAL_7TUPLE] = "VCAP_KFS_NORMAL_7TUPLE",
+ [VCAP_KFS_OAM] = "VCAP_KFS_OAM",
[VCAP_KFS_PURE_5TUPLE_IP4] = "VCAP_KFS_PURE_5TUPLE_IP4",
- [VCAP_KFS_TRI_VID] = "VCAP_KFS_TRI_VID",
+ [VCAP_KFS_SMAC_SIP4] = "VCAP_KFS_SMAC_SIP4",
+ [VCAP_KFS_SMAC_SIP6] = "VCAP_KFS_SMAC_SIP6",
};
/* Actionfieldset names */
@@ -5263,9 +3772,9 @@ static const char * const vcap_actionfield_set_names[] = {
[VCAP_AFS_BASE_TYPE] = "VCAP_AFS_BASE_TYPE",
[VCAP_AFS_CLASSIFICATION] = "VCAP_AFS_CLASSIFICATION",
[VCAP_AFS_CLASS_REDUCED] = "VCAP_AFS_CLASS_REDUCED",
+ [VCAP_AFS_ES0] = "VCAP_AFS_ES0",
[VCAP_AFS_FULL] = "VCAP_AFS_FULL",
- [VCAP_AFS_MLBS] = "VCAP_AFS_MLBS",
- [VCAP_AFS_MLBS_REDUCED] = "VCAP_AFS_MLBS_REDUCED",
+ [VCAP_AFS_SMAC_SIP] = "VCAP_AFS_SMAC_SIP",
};
/* Keyfield names */
@@ -5285,6 +3794,7 @@ static const char * const vcap_keyfield_names[] = {
[VCAP_KF_8021Q_PCP1] = "8021Q_PCP1",
[VCAP_KF_8021Q_PCP2] = "8021Q_PCP2",
[VCAP_KF_8021Q_PCP_CLS] = "8021Q_PCP_CLS",
+ [VCAP_KF_8021Q_TPID] = "8021Q_TPID",
[VCAP_KF_8021Q_TPID0] = "8021Q_TPID0",
[VCAP_KF_8021Q_TPID1] = "8021Q_TPID1",
[VCAP_KF_8021Q_TPID2] = "8021Q_TPID2",
@@ -5303,13 +3813,13 @@ static const char * const vcap_keyfield_names[] = {
[VCAP_KF_ARP_SENDER_MATCH_IS] = "ARP_SENDER_MATCH_IS",
[VCAP_KF_ARP_TGT_MATCH_IS] = "ARP_TGT_MATCH_IS",
[VCAP_KF_COSID_CLS] = "COSID_CLS",
- [VCAP_KF_DST_ENTRY] = "DST_ENTRY",
[VCAP_KF_ES0_ISDX_KEY_ENA] = "ES0_ISDX_KEY_ENA",
[VCAP_KF_ETYPE] = "ETYPE",
[VCAP_KF_ETYPE_LEN_IS] = "ETYPE_LEN_IS",
- [VCAP_KF_ETYPE_MPLS] = "ETYPE_MPLS",
+ [VCAP_KF_HOST_MATCH] = "HOST_MATCH",
[VCAP_KF_IF_EGR_PORT_MASK] = "IF_EGR_PORT_MASK",
[VCAP_KF_IF_EGR_PORT_MASK_RNG] = "IF_EGR_PORT_MASK_RNG",
+ [VCAP_KF_IF_EGR_PORT_NO] = "IF_EGR_PORT_NO",
[VCAP_KF_IF_IGR_PORT] = "IF_IGR_PORT",
[VCAP_KF_IF_IGR_PORT_MASK] = "IF_IGR_PORT_MASK",
[VCAP_KF_IF_IGR_PORT_MASK_L3] = "IF_IGR_PORT_MASK_L3",
@@ -5324,17 +3834,24 @@ static const char * const vcap_keyfield_names[] = {
[VCAP_KF_ISDX_GT0_IS] = "ISDX_GT0_IS",
[VCAP_KF_L2_BC_IS] = "L2_BC_IS",
[VCAP_KF_L2_DMAC] = "L2_DMAC",
+ [VCAP_KF_L2_FRM_TYPE] = "L2_FRM_TYPE",
[VCAP_KF_L2_FWD_IS] = "L2_FWD_IS",
+ [VCAP_KF_L2_LLC] = "L2_LLC",
[VCAP_KF_L2_MC_IS] = "L2_MC_IS",
+ [VCAP_KF_L2_PAYLOAD0] = "L2_PAYLOAD0",
+ [VCAP_KF_L2_PAYLOAD1] = "L2_PAYLOAD1",
+ [VCAP_KF_L2_PAYLOAD2] = "L2_PAYLOAD2",
[VCAP_KF_L2_PAYLOAD_ETYPE] = "L2_PAYLOAD_ETYPE",
[VCAP_KF_L2_SMAC] = "L2_SMAC",
+ [VCAP_KF_L2_SNAP] = "L2_SNAP",
[VCAP_KF_L3_DIP_EQ_SIP_IS] = "L3_DIP_EQ_SIP_IS",
- [VCAP_KF_L3_DMAC_DIP_MATCH] = "L3_DMAC_DIP_MATCH",
[VCAP_KF_L3_DPL_CLS] = "L3_DPL_CLS",
[VCAP_KF_L3_DSCP] = "L3_DSCP",
[VCAP_KF_L3_DST_IS] = "L3_DST_IS",
+ [VCAP_KF_L3_FRAGMENT] = "L3_FRAGMENT",
[VCAP_KF_L3_FRAGMENT_TYPE] = "L3_FRAGMENT_TYPE",
[VCAP_KF_L3_FRAG_INVLD_L4_LEN] = "L3_FRAG_INVLD_L4_LEN",
+ [VCAP_KF_L3_FRAG_OFS_GT0] = "L3_FRAG_OFS_GT0",
[VCAP_KF_L3_IP4_DIP] = "L3_IP4_DIP",
[VCAP_KF_L3_IP4_SIP] = "L3_IP4_SIP",
[VCAP_KF_L3_IP6_DIP] = "L3_IP6_DIP",
@@ -5343,9 +3860,10 @@ static const char * const vcap_keyfield_names[] = {
[VCAP_KF_L3_OPTIONS_IS] = "L3_OPTIONS_IS",
[VCAP_KF_L3_PAYLOAD] = "L3_PAYLOAD",
[VCAP_KF_L3_RT_IS] = "L3_RT_IS",
- [VCAP_KF_L3_SMAC_SIP_MATCH] = "L3_SMAC_SIP_MATCH",
[VCAP_KF_L3_TOS] = "L3_TOS",
[VCAP_KF_L3_TTL_GT0] = "L3_TTL_GT0",
+ [VCAP_KF_L4_1588_DOM] = "L4_1588_DOM",
+ [VCAP_KF_L4_1588_VER] = "L4_1588_VER",
[VCAP_KF_L4_ACK] = "L4_ACK",
[VCAP_KF_L4_DPORT] = "L4_DPORT",
[VCAP_KF_L4_FIN] = "L4_FIN",
@@ -5362,9 +3880,14 @@ static const char * const vcap_keyfield_names[] = {
[VCAP_KF_LOOKUP_GEN_IDX] = "LOOKUP_GEN_IDX",
[VCAP_KF_LOOKUP_GEN_IDX_SEL] = "LOOKUP_GEN_IDX_SEL",
[VCAP_KF_LOOKUP_PAG] = "LOOKUP_PAG",
- [VCAP_KF_MIRROR_ENA] = "MIRROR_ENA",
+ [VCAP_KF_MIRROR_PROBE] = "MIRROR_PROBE",
[VCAP_KF_OAM_CCM_CNTS_EQ0] = "OAM_CCM_CNTS_EQ0",
+ [VCAP_KF_OAM_DETECTED] = "OAM_DETECTED",
+ [VCAP_KF_OAM_FLAGS] = "OAM_FLAGS",
[VCAP_KF_OAM_MEL_FLAGS] = "OAM_MEL_FLAGS",
+ [VCAP_KF_OAM_MEPID] = "OAM_MEPID",
+ [VCAP_KF_OAM_OPCODE] = "OAM_OPCODE",
+ [VCAP_KF_OAM_VER] = "OAM_VER",
[VCAP_KF_OAM_Y1731_IS] = "OAM_Y1731_IS",
[VCAP_KF_PROT_ACTIVE] = "PROT_ACTIVE",
[VCAP_KF_TCP_IS] = "TCP_IS",
@@ -5375,50 +3898,37 @@ static const char * const vcap_keyfield_names[] = {
/* Actionfield names */
static const char * const vcap_actionfield_names[] = {
[VCAP_AF_NO_VALUE] = "(None)",
- [VCAP_AF_ACL_MAC] = "ACL_MAC",
- [VCAP_AF_ACL_RT_MODE] = "ACL_RT_MODE",
+ [VCAP_AF_ACL_ID] = "ACL_ID",
[VCAP_AF_CLS_VID_SEL] = "CLS_VID_SEL",
[VCAP_AF_CNT_ID] = "CNT_ID",
[VCAP_AF_COPY_PORT_NUM] = "COPY_PORT_NUM",
[VCAP_AF_COPY_QUEUE_NUM] = "COPY_QUEUE_NUM",
- [VCAP_AF_COSID_ENA] = "COSID_ENA",
- [VCAP_AF_COSID_VAL] = "COSID_VAL",
[VCAP_AF_CPU_COPY_ENA] = "CPU_COPY_ENA",
- [VCAP_AF_CPU_DIS] = "CPU_DIS",
- [VCAP_AF_CPU_ENA] = "CPU_ENA",
- [VCAP_AF_CPU_Q] = "CPU_Q",
+ [VCAP_AF_CPU_QU] = "CPU_QU",
[VCAP_AF_CPU_QUEUE_NUM] = "CPU_QUEUE_NUM",
- [VCAP_AF_CUSTOM_ACE_ENA] = "CUSTOM_ACE_ENA",
- [VCAP_AF_CUSTOM_ACE_OFFSET] = "CUSTOM_ACE_OFFSET",
+ [VCAP_AF_DEI_A_VAL] = "DEI_A_VAL",
+ [VCAP_AF_DEI_B_VAL] = "DEI_B_VAL",
+ [VCAP_AF_DEI_C_VAL] = "DEI_C_VAL",
[VCAP_AF_DEI_ENA] = "DEI_ENA",
[VCAP_AF_DEI_VAL] = "DEI_VAL",
- [VCAP_AF_DLB_OFFSET] = "DLB_OFFSET",
- [VCAP_AF_DMAC_OFFSET_ENA] = "DMAC_OFFSET_ENA",
[VCAP_AF_DP_ENA] = "DP_ENA",
[VCAP_AF_DP_VAL] = "DP_VAL",
[VCAP_AF_DSCP_ENA] = "DSCP_ENA",
+ [VCAP_AF_DSCP_SEL] = "DSCP_SEL",
[VCAP_AF_DSCP_VAL] = "DSCP_VAL",
- [VCAP_AF_EGR_ACL_ENA] = "EGR_ACL_ENA",
[VCAP_AF_ES2_REW_CMD] = "ES2_REW_CMD",
- [VCAP_AF_FWD_DIS] = "FWD_DIS",
+ [VCAP_AF_ESDX] = "ESDX",
+ [VCAP_AF_FWD_KILL_ENA] = "FWD_KILL_ENA",
[VCAP_AF_FWD_MODE] = "FWD_MODE",
- [VCAP_AF_FWD_TYPE] = "FWD_TYPE",
- [VCAP_AF_GVID_ADD_REPLACE_SEL] = "GVID_ADD_REPLACE_SEL",
+ [VCAP_AF_FWD_SEL] = "FWD_SEL",
[VCAP_AF_HIT_ME_ONCE] = "HIT_ME_ONCE",
+ [VCAP_AF_HOST_MATCH] = "HOST_MATCH",
[VCAP_AF_IGNORE_PIPELINE_CTRL] = "IGNORE_PIPELINE_CTRL",
- [VCAP_AF_IGR_ACL_ENA] = "IGR_ACL_ENA",
- [VCAP_AF_INJ_MASQ_ENA] = "INJ_MASQ_ENA",
- [VCAP_AF_INJ_MASQ_LPORT] = "INJ_MASQ_LPORT",
- [VCAP_AF_INJ_MASQ_PORT] = "INJ_MASQ_PORT",
[VCAP_AF_INTR_ENA] = "INTR_ENA",
[VCAP_AF_ISDX_ADD_REPLACE_SEL] = "ISDX_ADD_REPLACE_SEL",
+ [VCAP_AF_ISDX_ENA] = "ISDX_ENA",
[VCAP_AF_ISDX_VAL] = "ISDX_VAL",
- [VCAP_AF_IS_INNER_ACL] = "IS_INNER_ACL",
- [VCAP_AF_L3_MAC_UPDATE_DIS] = "L3_MAC_UPDATE_DIS",
- [VCAP_AF_LOG_MSG_INTERVAL] = "LOG_MSG_INTERVAL",
- [VCAP_AF_LPM_AFFIX_ENA] = "LPM_AFFIX_ENA",
- [VCAP_AF_LPM_AFFIX_VAL] = "LPM_AFFIX_VAL",
- [VCAP_AF_LPORT_ENA] = "LPORT_ENA",
+ [VCAP_AF_LOOP_ENA] = "LOOP_ENA",
[VCAP_AF_LRN_DIS] = "LRN_DIS",
[VCAP_AF_MAP_IDX] = "MAP_IDX",
[VCAP_AF_MAP_KEY] = "MAP_KEY",
@@ -5426,71 +3936,53 @@ static const char * const vcap_actionfield_names[] = {
[VCAP_AF_MASK_MODE] = "MASK_MODE",
[VCAP_AF_MATCH_ID] = "MATCH_ID",
[VCAP_AF_MATCH_ID_MASK] = "MATCH_ID_MASK",
- [VCAP_AF_MIP_SEL] = "MIP_SEL",
+ [VCAP_AF_MIRROR_ENA] = "MIRROR_ENA",
[VCAP_AF_MIRROR_PROBE] = "MIRROR_PROBE",
[VCAP_AF_MIRROR_PROBE_ID] = "MIRROR_PROBE_ID",
- [VCAP_AF_MPLS_IP_CTRL_ENA] = "MPLS_IP_CTRL_ENA",
- [VCAP_AF_MPLS_MEP_ENA] = "MPLS_MEP_ENA",
- [VCAP_AF_MPLS_MIP_ENA] = "MPLS_MIP_ENA",
- [VCAP_AF_MPLS_OAM_FLAVOR] = "MPLS_OAM_FLAVOR",
- [VCAP_AF_MPLS_OAM_TYPE] = "MPLS_OAM_TYPE",
- [VCAP_AF_NUM_VLD_LABELS] = "NUM_VLD_LABELS",
[VCAP_AF_NXT_IDX] = "NXT_IDX",
[VCAP_AF_NXT_IDX_CTRL] = "NXT_IDX_CTRL",
- [VCAP_AF_NXT_KEY_TYPE] = "NXT_KEY_TYPE",
- [VCAP_AF_NXT_NORMALIZE] = "NXT_NORMALIZE",
- [VCAP_AF_NXT_NORM_W16_OFFSET] = "NXT_NORM_W16_OFFSET",
- [VCAP_AF_NXT_NORM_W32_OFFSET] = "NXT_NORM_W32_OFFSET",
- [VCAP_AF_NXT_OFFSET_FROM_TYPE] = "NXT_OFFSET_FROM_TYPE",
- [VCAP_AF_NXT_TYPE_AFTER_OFFSET] = "NXT_TYPE_AFTER_OFFSET",
- [VCAP_AF_OAM_IP_BFD_ENA] = "OAM_IP_BFD_ENA",
- [VCAP_AF_OAM_TWAMP_ENA] = "OAM_TWAMP_ENA",
- [VCAP_AF_OAM_Y1731_SEL] = "OAM_Y1731_SEL",
[VCAP_AF_PAG_OVERRIDE_MASK] = "PAG_OVERRIDE_MASK",
[VCAP_AF_PAG_VAL] = "PAG_VAL",
+ [VCAP_AF_PCP_A_VAL] = "PCP_A_VAL",
+ [VCAP_AF_PCP_B_VAL] = "PCP_B_VAL",
+ [VCAP_AF_PCP_C_VAL] = "PCP_C_VAL",
[VCAP_AF_PCP_ENA] = "PCP_ENA",
[VCAP_AF_PCP_VAL] = "PCP_VAL",
- [VCAP_AF_PIPELINE_ACT_SEL] = "PIPELINE_ACT_SEL",
+ [VCAP_AF_PIPELINE_ACT] = "PIPELINE_ACT",
[VCAP_AF_PIPELINE_FORCE_ENA] = "PIPELINE_FORCE_ENA",
[VCAP_AF_PIPELINE_PT] = "PIPELINE_PT",
- [VCAP_AF_PIPELINE_PT_REDUCED] = "PIPELINE_PT_REDUCED",
[VCAP_AF_POLICE_ENA] = "POLICE_ENA",
[VCAP_AF_POLICE_IDX] = "POLICE_IDX",
[VCAP_AF_POLICE_REMARK] = "POLICE_REMARK",
+ [VCAP_AF_POLICE_VCAP_ONLY] = "POLICE_VCAP_ONLY",
+ [VCAP_AF_POP_VAL] = "POP_VAL",
[VCAP_AF_PORT_MASK] = "PORT_MASK",
- [VCAP_AF_PTP_MASTER_SEL] = "PTP_MASTER_SEL",
+ [VCAP_AF_PUSH_CUSTOMER_TAG] = "PUSH_CUSTOMER_TAG",
+ [VCAP_AF_PUSH_INNER_TAG] = "PUSH_INNER_TAG",
+ [VCAP_AF_PUSH_OUTER_TAG] = "PUSH_OUTER_TAG",
[VCAP_AF_QOS_ENA] = "QOS_ENA",
[VCAP_AF_QOS_VAL] = "QOS_VAL",
- [VCAP_AF_REW_CMD] = "REW_CMD",
- [VCAP_AF_RLEG_DMAC_CHK_DIS] = "RLEG_DMAC_CHK_DIS",
- [VCAP_AF_RLEG_STAT_IDX] = "RLEG_STAT_IDX",
- [VCAP_AF_RSDX_ENA] = "RSDX_ENA",
- [VCAP_AF_RSDX_VAL] = "RSDX_VAL",
- [VCAP_AF_RSVD_LBL_VAL] = "RSVD_LBL_VAL",
+ [VCAP_AF_REW_OP] = "REW_OP",
[VCAP_AF_RT_DIS] = "RT_DIS",
- [VCAP_AF_RT_SEL] = "RT_SEL",
- [VCAP_AF_S2_KEY_SEL_ENA] = "S2_KEY_SEL_ENA",
- [VCAP_AF_S2_KEY_SEL_IDX] = "S2_KEY_SEL_IDX",
- [VCAP_AF_SAM_SEQ_ENA] = "SAM_SEQ_ENA",
- [VCAP_AF_SIP_IDX] = "SIP_IDX",
- [VCAP_AF_SWAP_MAC_ENA] = "SWAP_MAC_ENA",
- [VCAP_AF_TCP_UDP_DPORT] = "TCP_UDP_DPORT",
- [VCAP_AF_TCP_UDP_ENA] = "TCP_UDP_ENA",
- [VCAP_AF_TCP_UDP_SPORT] = "TCP_UDP_SPORT",
- [VCAP_AF_TC_ENA] = "TC_ENA",
- [VCAP_AF_TC_LABEL] = "TC_LABEL",
- [VCAP_AF_TPID_SEL] = "TPID_SEL",
- [VCAP_AF_TTL_DECR_DIS] = "TTL_DECR_DIS",
- [VCAP_AF_TTL_ENA] = "TTL_ENA",
- [VCAP_AF_TTL_LABEL] = "TTL_LABEL",
- [VCAP_AF_TTL_UPDATE_ENA] = "TTL_UPDATE_ENA",
+ [VCAP_AF_SWAP_MACS_ENA] = "SWAP_MACS_ENA",
+ [VCAP_AF_TAG_A_DEI_SEL] = "TAG_A_DEI_SEL",
+ [VCAP_AF_TAG_A_PCP_SEL] = "TAG_A_PCP_SEL",
+ [VCAP_AF_TAG_A_TPID_SEL] = "TAG_A_TPID_SEL",
+ [VCAP_AF_TAG_A_VID_SEL] = "TAG_A_VID_SEL",
+ [VCAP_AF_TAG_B_DEI_SEL] = "TAG_B_DEI_SEL",
+ [VCAP_AF_TAG_B_PCP_SEL] = "TAG_B_PCP_SEL",
+ [VCAP_AF_TAG_B_TPID_SEL] = "TAG_B_TPID_SEL",
+ [VCAP_AF_TAG_B_VID_SEL] = "TAG_B_VID_SEL",
+ [VCAP_AF_TAG_C_DEI_SEL] = "TAG_C_DEI_SEL",
+ [VCAP_AF_TAG_C_PCP_SEL] = "TAG_C_PCP_SEL",
+ [VCAP_AF_TAG_C_TPID_SEL] = "TAG_C_TPID_SEL",
+ [VCAP_AF_TAG_C_VID_SEL] = "TAG_C_VID_SEL",
[VCAP_AF_TYPE] = "TYPE",
+ [VCAP_AF_UNTAG_VID_ENA] = "UNTAG_VID_ENA",
+ [VCAP_AF_VID_A_VAL] = "VID_A_VAL",
+ [VCAP_AF_VID_B_VAL] = "VID_B_VAL",
+ [VCAP_AF_VID_C_VAL] = "VID_C_VAL",
[VCAP_AF_VID_VAL] = "VID_VAL",
- [VCAP_AF_VLAN_POP_CNT] = "VLAN_POP_CNT",
- [VCAP_AF_VLAN_POP_CNT_ENA] = "VLAN_POP_CNT_ENA",
- [VCAP_AF_VLAN_PUSH_CNT] = "VLAN_PUSH_CNT",
- [VCAP_AF_VLAN_PUSH_CNT_ENA] = "VLAN_PUSH_CNT_ENA",
- [VCAP_AF_VLAN_WAS_TAGGED] = "VLAN_WAS_TAGGED",
};
/* VCAPs */
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.h b/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.h
index b5a74f0eef9b..55762f24e196 100644
--- a/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.h
+++ b/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.h
@@ -1,10 +1,18 @@
/* SPDX-License-Identifier: BSD-3-Clause */
-/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries.
+/* Copyright (C) 2023 Microchip Technology Inc. and its subsidiaries.
* Microchip VCAP test model interface for kunit testing
*/
+/* This file is autogenerated by cml-utils 2023-02-10 11:16:00 +0100.
+ * Commit ID: c30fb4bf0281cd4a7133bdab6682f9e43c872ada
+ */
+
#ifndef __VCAP_MODEL_KUNIT_H__
#define __VCAP_MODEL_KUNIT_H__
+
+/* VCAPs */
extern const struct vcap_info kunit_test_vcaps[];
extern const struct vcap_statistics kunit_test_vcap_stats;
+
#endif /* __VCAP_MODEL_KUNIT_H__ */
+
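(Aside: the two externs above are the whole surface of the generated test model. A minimal sketch of a KUnit case consuming them — 'count' and 'name' follow the vcap_statistics/vcap_info layout this model populates; the test and suite names are hypothetical:

	#include <kunit/test.h>

	#include "vcap_api.h"
	#include "vcap_model_kunit.h"

	static void vcap_model_sanity(struct kunit *test)
	{
		/* The model should describe at least one VCAP instance */
		KUNIT_EXPECT_GT(test, kunit_test_vcap_stats.count, 0);
		KUNIT_EXPECT_NOT_NULL(test, kunit_test_vcaps[0].name);
	}

	static struct kunit_case vcap_model_cases[] = {
		KUNIT_CASE(vcap_model_sanity),
		{}
	};

	static struct kunit_suite vcap_model_suite = {
		.name = "vcap-model-sanity",
		.test_cases = vcap_model_cases,
	};
	kunit_test_suite(vcap_model_suite);
)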
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_tc.c b/drivers/net/ethernet/microchip/vcap/vcap_tc.c
new file mode 100644
index 000000000000..09abe7944af6
--- /dev/null
+++ b/drivers/net/ethernet/microchip/vcap/vcap_tc.c
@@ -0,0 +1,412 @@
+// SPDX-License-Identifier: GPL-2.0+
+/* Microchip VCAP TC
+ *
+ * Copyright (c) 2023 Microchip Technology Inc. and its subsidiaries.
+ */
+
+#include <net/flow_offload.h>
+#include <net/ipv6.h>
+#include <net/tcp.h>
+
+#include "vcap_api_client.h"
+#include "vcap_tc.h"
+
+enum vcap_is2_arp_opcode {
+ VCAP_IS2_ARP_REQUEST,
+ VCAP_IS2_ARP_REPLY,
+ VCAP_IS2_RARP_REQUEST,
+ VCAP_IS2_RARP_REPLY,
+};
+
+enum vcap_arp_opcode {
+ VCAP_ARP_OP_RESERVED,
+ VCAP_ARP_OP_REQUEST,
+ VCAP_ARP_OP_REPLY,
+};
+
+int vcap_tc_flower_handler_ethaddr_usage(struct vcap_tc_flower_parse_usage *st)
+{
+ enum vcap_key_field smac_key = VCAP_KF_L2_SMAC;
+ enum vcap_key_field dmac_key = VCAP_KF_L2_DMAC;
+ struct flow_match_eth_addrs match;
+ struct vcap_u48_key smac, dmac;
+ int err = 0;
+
+ flow_rule_match_eth_addrs(st->frule, &match);
+
+ if (!is_zero_ether_addr(match.mask->src)) {
+ vcap_netbytes_copy(smac.value, match.key->src, ETH_ALEN);
+ vcap_netbytes_copy(smac.mask, match.mask->src, ETH_ALEN);
+ err = vcap_rule_add_key_u48(st->vrule, smac_key, &smac);
+ if (err)
+ goto out;
+ }
+
+ if (!is_zero_ether_addr(match.mask->dst)) {
+ vcap_netbytes_copy(dmac.value, match.key->dst, ETH_ALEN);
+ vcap_netbytes_copy(dmac.mask, match.mask->dst, ETH_ALEN);
+ err = vcap_rule_add_key_u48(st->vrule, dmac_key, &dmac);
+ if (err)
+ goto out;
+ }
+
+ st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS);
+
+ return err;
+
+out:
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack, "eth_addr parse error");
+ return err;
+}
+EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_ethaddr_usage);
+
+int vcap_tc_flower_handler_ipv4_usage(struct vcap_tc_flower_parse_usage *st)
+{
+ int err = 0;
+
+ if (st->l3_proto == ETH_P_IP) {
+ struct flow_match_ipv4_addrs mt;
+
+ flow_rule_match_ipv4_addrs(st->frule, &mt);
+ if (mt.mask->src) {
+ err = vcap_rule_add_key_u32(st->vrule,
+ VCAP_KF_L3_IP4_SIP,
+ be32_to_cpu(mt.key->src),
+ be32_to_cpu(mt.mask->src));
+ if (err)
+ goto out;
+ }
+ if (mt.mask->dst) {
+ err = vcap_rule_add_key_u32(st->vrule,
+ VCAP_KF_L3_IP4_DIP,
+ be32_to_cpu(mt.key->dst),
+ be32_to_cpu(mt.mask->dst));
+ if (err)
+ goto out;
+ }
+ }
+
+ st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS);
+
+ return err;
+
+out:
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ipv4_addr parse error");
+ return err;
+}
+EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_ipv4_usage);
+
+int vcap_tc_flower_handler_ipv6_usage(struct vcap_tc_flower_parse_usage *st)
+{
+ int err = 0;
+
+ if (st->l3_proto == ETH_P_IPV6) {
+ struct flow_match_ipv6_addrs mt;
+ struct vcap_u128_key sip;
+ struct vcap_u128_key dip;
+
+ flow_rule_match_ipv6_addrs(st->frule, &mt);
+ /* Check if address masks are non-zero */
+ if (!ipv6_addr_any(&mt.mask->src)) {
+ vcap_netbytes_copy(sip.value, mt.key->src.s6_addr, 16);
+ vcap_netbytes_copy(sip.mask, mt.mask->src.s6_addr, 16);
+ err = vcap_rule_add_key_u128(st->vrule,
+ VCAP_KF_L3_IP6_SIP, &sip);
+ if (err)
+ goto out;
+ }
+ if (!ipv6_addr_any(&mt.mask->dst)) {
+ vcap_netbytes_copy(dip.value, mt.key->dst.s6_addr, 16);
+ vcap_netbytes_copy(dip.mask, mt.mask->dst.s6_addr, 16);
+ err = vcap_rule_add_key_u128(st->vrule,
+ VCAP_KF_L3_IP6_DIP, &dip);
+ if (err)
+ goto out;
+ }
+ }
+ st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS);
+ return err;
+out:
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ipv6_addr parse error");
+ return err;
+}
+EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_ipv6_usage);
+
+int vcap_tc_flower_handler_portnum_usage(struct vcap_tc_flower_parse_usage *st)
+{
+ struct flow_match_ports mt;
+ u16 value, mask;
+ int err = 0;
+
+ flow_rule_match_ports(st->frule, &mt);
+
+ if (mt.mask->src) {
+ value = be16_to_cpu(mt.key->src);
+ mask = be16_to_cpu(mt.mask->src);
+ err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L4_SPORT, value,
+ mask);
+ if (err)
+ goto out;
+ }
+
+ if (mt.mask->dst) {
+ value = be16_to_cpu(mt.key->dst);
+ mask = be16_to_cpu(mt.mask->dst);
+ err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L4_DPORT, value,
+ mask);
+ if (err)
+ goto out;
+ }
+
+ st->used_keys |= BIT(FLOW_DISSECTOR_KEY_PORTS);
+
+ return err;
+
+out:
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack, "port parse error");
+ return err;
+}
+EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_portnum_usage);
+
+int vcap_tc_flower_handler_cvlan_usage(struct vcap_tc_flower_parse_usage *st)
+{
+ enum vcap_key_field vid_key = VCAP_KF_8021Q_VID0;
+ enum vcap_key_field pcp_key = VCAP_KF_8021Q_PCP0;
+ struct flow_match_vlan mt;
+ u16 tpid;
+ int err;
+
+ flow_rule_match_cvlan(st->frule, &mt);
+
+ tpid = be16_to_cpu(mt.key->vlan_tpid);
+
+ if (tpid == ETH_P_8021Q) {
+ vid_key = VCAP_KF_8021Q_VID1;
+ pcp_key = VCAP_KF_8021Q_PCP1;
+ }
+
+ if (mt.mask->vlan_id) {
+ err = vcap_rule_add_key_u32(st->vrule, vid_key,
+ mt.key->vlan_id,
+ mt.mask->vlan_id);
+ if (err)
+ goto out;
+ }
+
+ if (mt.mask->vlan_priority) {
+ err = vcap_rule_add_key_u32(st->vrule, pcp_key,
+ mt.key->vlan_priority,
+ mt.mask->vlan_priority);
+ if (err)
+ goto out;
+ }
+
+ st->used_keys |= BIT(FLOW_DISSECTOR_KEY_CVLAN);
+
+ return 0;
+out:
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack, "cvlan parse error");
+ return err;
+}
+EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_cvlan_usage);
+
+int vcap_tc_flower_handler_vlan_usage(struct vcap_tc_flower_parse_usage *st,
+ enum vcap_key_field vid_key,
+ enum vcap_key_field pcp_key)
+{
+ struct flow_match_vlan mt;
+ int err;
+
+ flow_rule_match_vlan(st->frule, &mt);
+
+ if (mt.mask->vlan_id) {
+ err = vcap_rule_add_key_u32(st->vrule, vid_key,
+ mt.key->vlan_id,
+ mt.mask->vlan_id);
+ if (err)
+ goto out;
+ }
+
+ if (mt.mask->vlan_priority) {
+ err = vcap_rule_add_key_u32(st->vrule, pcp_key,
+ mt.key->vlan_priority,
+ mt.mask->vlan_priority);
+ if (err)
+ goto out;
+ }
+
+ if (mt.mask->vlan_tpid)
+ st->tpid = be16_to_cpu(mt.key->vlan_tpid);
+
+ st->used_keys |= BIT(FLOW_DISSECTOR_KEY_VLAN);
+
+ return 0;
+out:
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack, "vlan parse error");
+ return err;
+}
+EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_vlan_usage);
+
+int vcap_tc_flower_handler_tcp_usage(struct vcap_tc_flower_parse_usage *st)
+{
+ struct flow_match_tcp mt;
+ u16 tcp_flags_mask;
+ u16 tcp_flags_key;
+ enum vcap_bit val;
+ int err = 0;
+
+ flow_rule_match_tcp(st->frule, &mt);
+ tcp_flags_key = be16_to_cpu(mt.key->flags);
+ tcp_flags_mask = be16_to_cpu(mt.mask->flags);
+
+ if (tcp_flags_mask & TCPHDR_FIN) {
+ val = VCAP_BIT_0;
+ if (tcp_flags_key & TCPHDR_FIN)
+ val = VCAP_BIT_1;
+ err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_FIN, val);
+ if (err)
+ goto out;
+ }
+
+ if (tcp_flags_mask & TCPHDR_SYN) {
+ val = VCAP_BIT_0;
+ if (tcp_flags_key & TCPHDR_SYN)
+ val = VCAP_BIT_1;
+ err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_SYN, val);
+ if (err)
+ goto out;
+ }
+
+ if (tcp_flags_mask & TCPHDR_RST) {
+ val = VCAP_BIT_0;
+ if (tcp_flags_key & TCPHDR_RST)
+ val = VCAP_BIT_1;
+ err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_RST, val);
+ if (err)
+ goto out;
+ }
+
+ if (tcp_flags_mask & TCPHDR_PSH) {
+ val = VCAP_BIT_0;
+ if (tcp_flags_key & TCPHDR_PSH)
+ val = VCAP_BIT_1;
+ err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_PSH, val);
+ if (err)
+ goto out;
+ }
+
+ if (tcp_flags_mask & TCPHDR_ACK) {
+ val = VCAP_BIT_0;
+ if (tcp_flags_key & TCPHDR_ACK)
+ val = VCAP_BIT_1;
+ err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_ACK, val);
+ if (err)
+ goto out;
+ }
+
+ if (tcp_flags_mask & TCPHDR_URG) {
+ val = VCAP_BIT_0;
+ if (tcp_flags_key & TCPHDR_URG)
+ val = VCAP_BIT_1;
+ err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_URG, val);
+ if (err)
+ goto out;
+ }
+
+ st->used_keys |= BIT(FLOW_DISSECTOR_KEY_TCP);
+
+ return err;
+
+out:
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack, "tcp_flags parse error");
+ return err;
+}
+EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_tcp_usage);
+
+int vcap_tc_flower_handler_arp_usage(struct vcap_tc_flower_parse_usage *st)
+{
+ struct flow_match_arp mt;
+ u16 value, mask;
+ u32 ipval, ipmsk;
+ int err;
+
+ flow_rule_match_arp(st->frule, &mt);
+
+ if (mt.mask->op) {
+ mask = 0x3;
+ if (st->l3_proto == ETH_P_ARP) {
+ value = mt.key->op == VCAP_ARP_OP_REQUEST ?
+ VCAP_IS2_ARP_REQUEST :
+ VCAP_IS2_ARP_REPLY;
+ } else { /* RARP */
+ value = mt.key->op == VCAP_ARP_OP_REQUEST ?
+ VCAP_IS2_RARP_REQUEST :
+ VCAP_IS2_RARP_REPLY;
+ }
+ err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_ARP_OPCODE,
+ value, mask);
+ if (err)
+ goto out;
+ }
+
+ /* The IS2 ARP keyset does not support ARP hardware addresses */
+ if (!is_zero_ether_addr(mt.mask->sha) ||
+ !is_zero_ether_addr(mt.mask->tha)) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ if (mt.mask->sip) {
+ ipval = be32_to_cpu((__force __be32)mt.key->sip);
+ ipmsk = be32_to_cpu((__force __be32)mt.mask->sip);
+
+ err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_IP4_SIP,
+ ipval, ipmsk);
+ if (err)
+ goto out;
+ }
+
+ if (mt.mask->tip) {
+ ipval = be32_to_cpu((__force __be32)mt.key->tip);
+ ipmsk = be32_to_cpu((__force __be32)mt.mask->tip);
+
+ err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_IP4_DIP,
+ ipval, ipmsk);
+ if (err)
+ goto out;
+ }
+
+ st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ARP);
+
+ return 0;
+
+out:
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack, "arp parse error");
+ return err;
+}
+EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_arp_usage);
+
+int vcap_tc_flower_handler_ip_usage(struct vcap_tc_flower_parse_usage *st)
+{
+ struct flow_match_ip mt;
+ int err = 0;
+
+ flow_rule_match_ip(st->frule, &mt);
+
+ if (mt.mask->tos) {
+ err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_TOS,
+ mt.key->tos,
+ mt.mask->tos);
+ if (err)
+ goto out;
+ }
+
+ st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IP);
+
+ return err;
+
+out:
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_tos parse error");
+ return err;
+}
+EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_ip_usage);
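(Aside: taken together, these exported handlers form a shared flower-parsing layer. A driver's offload code walks the dissector keys present in a rule and calls the matching vcap_tc_flower_handler_*() on a parse-usage state; a sketch of such a dispatch loop, with the table and function name illustrative while the handler calls match the signatures above. The vlan handler is called separately because it takes keyset-specific vid/pcp field ids:

	static int (*const example_handlers[])(struct vcap_tc_flower_parse_usage *) = {
		[FLOW_DISSECTOR_KEY_ETH_ADDRS] = vcap_tc_flower_handler_ethaddr_usage,
		[FLOW_DISSECTOR_KEY_IPV4_ADDRS] = vcap_tc_flower_handler_ipv4_usage,
		[FLOW_DISSECTOR_KEY_IPV6_ADDRS] = vcap_tc_flower_handler_ipv6_usage,
		[FLOW_DISSECTOR_KEY_PORTS] = vcap_tc_flower_handler_portnum_usage,
		[FLOW_DISSECTOR_KEY_CVLAN] = vcap_tc_flower_handler_cvlan_usage,
		[FLOW_DISSECTOR_KEY_TCP] = vcap_tc_flower_handler_tcp_usage,
		[FLOW_DISSECTOR_KEY_ARP] = vcap_tc_flower_handler_arp_usage,
		[FLOW_DISSECTOR_KEY_IP] = vcap_tc_flower_handler_ip_usage,
	};

	static int example_tc_use_dissectors(struct vcap_tc_flower_parse_usage *st)
	{
		int i, err;

		for (i = 0; i < ARRAY_SIZE(example_handlers); ++i) {
			if (!example_handlers[i] ||
			    !flow_rule_match_key(st->frule, i))
				continue;
			err = example_handlers[i](st);
			if (err)
				return err;
		}
		return 0;
	}
)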
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_tc.h b/drivers/net/ethernet/microchip/vcap/vcap_tc.h
new file mode 100644
index 000000000000..071f892f9aa4
--- /dev/null
+++ b/drivers/net/ethernet/microchip/vcap/vcap_tc.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright (C) 2023 Microchip Technology Inc. and its subsidiaries.
+ * Microchip VCAP TC
+ */
+
+#ifndef __VCAP_TC__
+#define __VCAP_TC__
+
+struct vcap_tc_flower_parse_usage {
+ struct flow_cls_offload *fco;
+ struct flow_rule *frule;
+ struct vcap_rule *vrule;
+ struct vcap_admin *admin;
+ u16 l3_proto;
+ u8 l4_proto;
+ u16 tpid;
+ unsigned int used_keys;
+};
+
+int vcap_tc_flower_handler_ethaddr_usage(struct vcap_tc_flower_parse_usage *st);
+int vcap_tc_flower_handler_ipv4_usage(struct vcap_tc_flower_parse_usage *st);
+int vcap_tc_flower_handler_ipv6_usage(struct vcap_tc_flower_parse_usage *st);
+int vcap_tc_flower_handler_portnum_usage(struct vcap_tc_flower_parse_usage *st);
+int vcap_tc_flower_handler_cvlan_usage(struct vcap_tc_flower_parse_usage *st);
+int vcap_tc_flower_handler_vlan_usage(struct vcap_tc_flower_parse_usage *st,
+ enum vcap_key_field vid_key,
+ enum vcap_key_field pcp_key);
+int vcap_tc_flower_handler_tcp_usage(struct vcap_tc_flower_parse_usage *st);
+int vcap_tc_flower_handler_arp_usage(struct vcap_tc_flower_parse_usage *st);
+int vcap_tc_flower_handler_ip_usage(struct vcap_tc_flower_parse_usage *st);
+
+#endif /* __VCAP_TC__ */
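Editor's note: each handler records the flow-dissector key it consumed in st->used_keys; a caller can then diff that bitmap against the keys present in the rule and reject filters with leftovers. A standalone sketch of the bookkeeping, with simplified bit positions standing in for the real FLOW_DISSECTOR_KEY_* enum:

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for FLOW_DISSECTOR_KEY_* positions */
enum { KEY_BASIC, KEY_IPV4_ADDRS, KEY_TCP, KEY_ARP };
#define BIT(n) (1u << (n))

int main(void)
{
        uint32_t rule_keys = BIT(KEY_BASIC) | BIT(KEY_TCP) | BIT(KEY_ARP);
        uint32_t used_keys = 0;

        /* each successful handler ORs in its key, as in the code above */
        used_keys |= BIT(KEY_BASIC);
        used_keys |= BIT(KEY_TCP);

        if (rule_keys & ~used_keys)     /* KEY_ARP had no handler here */
                printf("unsupported keys left over: 0x%x\n",
                       (unsigned int)(rule_keys & ~used_keys));  /* 0x8 */
        return 0;
}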
diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index 2f6a048dee90..6120f2b6684f 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -2160,6 +2160,8 @@ static int mana_probe_port(struct mana_context *ac, int port_idx,
ndev->hw_features |= NETIF_F_RXHASH;
ndev->features = ndev->hw_features;
ndev->vlan_features = 0;
+ ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT;
err = register_netdev(ndev);
if (err) {
diff --git a/drivers/net/ethernet/mscc/Kconfig b/drivers/net/ethernet/mscc/Kconfig
index 8dd8c7f425d2..81e605691bb8 100644
--- a/drivers/net/ethernet/mscc/Kconfig
+++ b/drivers/net/ethernet/mscc/Kconfig
@@ -13,6 +13,7 @@ if NET_VENDOR_MICROSEMI
# Users should depend on NET_SWITCHDEV, HAS_IOMEM, BRIDGE
config MSCC_OCELOT_SWITCH_LIB
+ depends on PTP_1588_CLOCK_OPTIONAL
select NET_DEVLINK
select REGMAP_MMIO
select PACKING
diff --git a/drivers/net/ethernet/mscc/Makefile b/drivers/net/ethernet/mscc/Makefile
index 5d435a565d4c..16987b72dfc0 100644
--- a/drivers/net/ethernet/mscc/Makefile
+++ b/drivers/net/ethernet/mscc/Makefile
@@ -5,6 +5,7 @@ mscc_ocelot_switch_lib-y := \
ocelot_devlink.o \
ocelot_flower.o \
ocelot_io.o \
+ ocelot_mm.o \
ocelot_police.o \
ocelot_ptp.o \
ocelot_stats.o \
diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
index da56f9bfeaf0..08acb7b89086 100644
--- a/drivers/net/ethernet/mscc/ocelot.c
+++ b/drivers/net/ethernet/mscc/ocelot.c
@@ -6,12 +6,16 @@
*/
#include <linux/dsa/ocelot.h>
#include <linux/if_bridge.h>
+#include <linux/iopoll.h>
#include <soc/mscc/ocelot_vcap.h>
#include "ocelot.h"
#include "ocelot_vcap.h"
-#define TABLE_UPDATE_SLEEP_US 10
-#define TABLE_UPDATE_TIMEOUT_US 100000
+#define TABLE_UPDATE_SLEEP_US 10
+#define TABLE_UPDATE_TIMEOUT_US 100000
+#define MEM_INIT_SLEEP_US 1000
+#define MEM_INIT_TIMEOUT_US 100000
+
#define OCELOT_RSV_VLAN_RANGE_START 4000
struct ocelot_mact_entry {
@@ -2713,6 +2717,46 @@ static void ocelot_detect_features(struct ocelot *ocelot)
ocelot->num_frame_refs = QSYS_MMGT_EQ_CTRL_FP_FREE_CNT(eq_ctrl);
}
+static int ocelot_mem_init_status(struct ocelot *ocelot)
+{
+ unsigned int val;
+ int err;
+
+ err = regmap_field_read(ocelot->regfields[SYS_RESET_CFG_MEM_INIT],
+ &val);
+
+ return err ?: val;
+}
+
+int ocelot_reset(struct ocelot *ocelot)
+{
+ int err;
+ u32 val;
+
+ err = regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_INIT], 1);
+ if (err)
+ return err;
+
+ err = regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_ENA], 1);
+ if (err)
+ return err;
+
+ /* MEM_INIT is a self-clearing bit. Wait for it to be cleared (should be
+ * 100us) before enabling the switch core.
+ */
+ err = readx_poll_timeout(ocelot_mem_init_status, ocelot, val, !val,
+ MEM_INIT_SLEEP_US, MEM_INIT_TIMEOUT_US);
+ if (err)
+ return err;
+
+ err = regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_ENA], 1);
+ if (err)
+ return err;
+
+ return regmap_field_write(ocelot->regfields[SYS_RESET_CFG_CORE_ENA], 1);
+}
+EXPORT_SYMBOL(ocelot_reset);
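Editor's note: ocelot_reset() leans on readx_poll_timeout() to wait out the self-clearing MEM_INIT bit. A userspace model of that polling contract, assuming the kernel semantics of sleeping between reads and giving up once the timeout budget is spent:

#include <stdio.h>
#include <unistd.h>

#define SLEEP_US        1000    /* MEM_INIT_SLEEP_US */
#define TIMEOUT_US      100000  /* MEM_INIT_TIMEOUT_US */

static int fake_mem_init_reads = 3;     /* bit reads as set three times, then clears */

static int read_mem_init(void)
{
        return fake_mem_init_reads-- > 0;       /* 1 = still initializing */
}

/* Model of readx_poll_timeout(read, ..., val, !val, sleep_us, timeout_us) */
static int poll_mem_init(void)
{
        long waited = 0;

        for (;;) {
                int val = read_mem_init();

                if (val < 0)
                        return val;     /* propagate read errors */
                if (!val)
                        return 0;       /* condition met: bit cleared */
                if (waited >= TIMEOUT_US)
                        return -1;      /* the kernel returns -ETIMEDOUT here */
                usleep(SLEEP_US);
                waited += SLEEP_US;
        }
}

int main(void)
{
        printf("poll result: %d\n", poll_mem_init());   /* 0 after ~3 ms */
        return 0;
}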
+
int ocelot_init(struct ocelot *ocelot)
{
int i, ret;
@@ -2738,10 +2782,8 @@ int ocelot_init(struct ocelot *ocelot)
return -ENOMEM;
ret = ocelot_stats_init(ocelot);
- if (ret) {
- destroy_workqueue(ocelot->owq);
- return ret;
- }
+ if (ret)
+ goto err_stats_init;
INIT_LIST_HEAD(&ocelot->multicast);
INIT_LIST_HEAD(&ocelot->pgids);
@@ -2756,6 +2798,12 @@ int ocelot_init(struct ocelot *ocelot)
if (ocelot->ops->psfp_init)
ocelot->ops->psfp_init(ocelot);
+ if (ocelot->mm_supported) {
+ ret = ocelot_mm_init(ocelot);
+ if (ret)
+ goto err_mm_init;
+ }
+
for (port = 0; port < ocelot->num_phys_ports; port++) {
/* Clear all counters (5 groups) */
ocelot_write(ocelot, SYS_STAT_CFG_STAT_VIEW(port) |
@@ -2853,6 +2901,12 @@ int ocelot_init(struct ocelot *ocelot)
ANA_CPUQ_8021_CFG, i);
return 0;
+
+err_mm_init:
+ ocelot_stats_deinit(ocelot);
+err_stats_init:
+ destroy_workqueue(ocelot->owq);
+ return ret;
}
EXPORT_SYMBOL(ocelot_init);
diff --git a/drivers/net/ethernet/mscc/ocelot.h b/drivers/net/ethernet/mscc/ocelot.h
index 70dbd9c4e512..e9a0179448bf 100644
--- a/drivers/net/ethernet/mscc/ocelot.h
+++ b/drivers/net/ethernet/mscc/ocelot.h
@@ -109,6 +109,8 @@ void ocelot_mirror_put(struct ocelot *ocelot);
int ocelot_stats_init(struct ocelot *ocelot);
void ocelot_stats_deinit(struct ocelot *ocelot);
+int ocelot_mm_init(struct ocelot *ocelot);
+
extern struct notifier_block ocelot_netdevice_nb;
extern struct notifier_block ocelot_switchdev_nb;
extern struct notifier_block ocelot_switchdev_blocking_nb;
diff --git a/drivers/net/ethernet/mscc/ocelot_devlink.c b/drivers/net/ethernet/mscc/ocelot_devlink.c
index b8737efd2a85..d9ea75a14f2f 100644
--- a/drivers/net/ethernet/mscc/ocelot_devlink.c
+++ b/drivers/net/ethernet/mscc/ocelot_devlink.c
@@ -487,6 +487,37 @@ static void ocelot_watermark_init(struct ocelot *ocelot)
ocelot_setup_sharing_watermarks(ocelot);
}
+/* Watermark encode
+ * Bit 8: unit (0: multiply by 1, 1: multiply by 16)
+ * Bits 7-0: value to be multiplied by the unit
+ */
+u16 ocelot_wm_enc(u16 value)
+{
+ WARN_ON(value >= 16 * BIT(8));
+
+ if (value >= BIT(8))
+ return BIT(8) | (value / 16);
+
+ return value;
+}
+EXPORT_SYMBOL(ocelot_wm_enc);
+
+u16 ocelot_wm_dec(u16 wm)
+{
+ if (wm & BIT(8))
+ return (wm & GENMASK(7, 0)) * 16;
+
+ return wm;
+}
+EXPORT_SYMBOL(ocelot_wm_dec);
+
+void ocelot_wm_stat(u32 val, u32 *inuse, u32 *maxuse)
+{
+ *inuse = (val & GENMASK(23, 12)) >> 12;
+ *maxuse = val & GENMASK(11, 0);
+}
+EXPORT_SYMBOL(ocelot_wm_stat);
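Editor's note: the encoding gives 1-cell granularity below 256 and 16-cell granularity (rounded down) above, so decoded watermarks can come back smaller than what was requested. A standalone round-trip of the exact logic above:

#include <stdint.h>
#include <stdio.h>

#define BIT(n)          (1u << (n))
#define GENMASK(h, l)   (((~0u) << (l)) & (~0u >> (31 - (h))))

static uint16_t wm_enc(uint16_t value)
{
        if (value >= BIT(8))
                return BIT(8) | (value / 16);   /* 16-cell unit, truncates */
        return value;                           /* 1-cell unit, exact */
}

static uint16_t wm_dec(uint16_t wm)
{
        if (wm & BIT(8))
                return (wm & GENMASK(7, 0)) * 16;
        return wm;
}

int main(void)
{
        /* 300 encodes as BIT(8) | 18 = 0x112 and decodes to 288: values
         * above 255 can lose up to 15 cells to the coarser unit. */
        printf("enc(300) = 0x%x, dec = %u\n", wm_enc(300), wm_dec(wm_enc(300)));
        printf("enc(100) = 0x%x, dec = %u\n", wm_enc(100), wm_dec(wm_enc(100)));
        return 0;
}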
+
/* Pool size and type are fixed up at runtime. Keeping this structure to
* look up the cell size multipliers.
*/
diff --git a/drivers/net/ethernet/mscc/ocelot_mm.c b/drivers/net/ethernet/mscc/ocelot_mm.c
new file mode 100644
index 000000000000..0a8f21ae23f0
--- /dev/null
+++ b/drivers/net/ethernet/mscc/ocelot_mm.c
@@ -0,0 +1,215 @@
+// SPDX-License-Identifier: (GPL-2.0 OR MIT)
+/*
+ * Hardware library for MAC Merge Layer and Frame Preemption on TSN-capable
+ * switches (VSC9959)
+ *
+ * Copyright 2022-2023 NXP
+ */
+#include <linux/ethtool.h>
+#include <soc/mscc/ocelot.h>
+#include <soc/mscc/ocelot_dev.h>
+#include <soc/mscc/ocelot_qsys.h>
+
+#include "ocelot.h"
+
+static const char *
+mm_verify_state_to_string(enum ethtool_mm_verify_status state)
+{
+ switch (state) {
+ case ETHTOOL_MM_VERIFY_STATUS_INITIAL:
+ return "INITIAL";
+ case ETHTOOL_MM_VERIFY_STATUS_VERIFYING:
+ return "VERIFYING";
+ case ETHTOOL_MM_VERIFY_STATUS_SUCCEEDED:
+ return "SUCCEEDED";
+ case ETHTOOL_MM_VERIFY_STATUS_FAILED:
+ return "FAILED";
+ case ETHTOOL_MM_VERIFY_STATUS_DISABLED:
+ return "DISABLED";
+ default:
+ return "UNKNOWN";
+ }
+}
+
+static enum ethtool_mm_verify_status ocelot_mm_verify_status(u32 val)
+{
+ switch (DEV_MM_STAT_MM_STATUS_PRMPT_VERIFY_STATE_X(val)) {
+ case 0:
+ return ETHTOOL_MM_VERIFY_STATUS_INITIAL;
+ case 1:
+ return ETHTOOL_MM_VERIFY_STATUS_VERIFYING;
+ case 2:
+ return ETHTOOL_MM_VERIFY_STATUS_SUCCEEDED;
+ case 3:
+ return ETHTOOL_MM_VERIFY_STATUS_FAILED;
+ case 4:
+ return ETHTOOL_MM_VERIFY_STATUS_DISABLED;
+ default:
+ return ETHTOOL_MM_VERIFY_STATUS_UNKNOWN;
+ }
+}
+
+void ocelot_port_mm_irq(struct ocelot *ocelot, int port)
+{
+ struct ocelot_port *ocelot_port = ocelot->ports[port];
+ struct ocelot_mm_state *mm = &ocelot->mm[port];
+ enum ethtool_mm_verify_status verify_status;
+ u32 val;
+
+ mutex_lock(&mm->lock);
+
+ val = ocelot_port_readl(ocelot_port, DEV_MM_STATUS);
+
+ verify_status = ocelot_mm_verify_status(val);
+ if (mm->verify_status != verify_status) {
+ dev_dbg(ocelot->dev,
+ "Port %d MAC Merge verification state %s\n",
+ port, mm_verify_state_to_string(verify_status));
+ mm->verify_status = verify_status;
+ }
+
+ if (val & DEV_MM_STAT_MM_STATUS_PRMPT_ACTIVE_STICKY) {
+ mm->tx_active = !!(val & DEV_MM_STAT_MM_STATUS_PRMPT_ACTIVE_STATUS);
+
+ dev_dbg(ocelot->dev, "Port %d TX preemption %s\n",
+ port, mm->tx_active ? "active" : "inactive");
+ }
+
+ if (val & DEV_MM_STAT_MM_STATUS_UNEXP_RX_PFRM_STICKY) {
+ dev_err(ocelot->dev,
+ "Unexpected P-frame received on port %d while verification was unsuccessful or not yet verified\n",
+ port);
+ }
+
+ if (val & DEV_MM_STAT_MM_STATUS_UNEXP_TX_PFRM_STICKY) {
+ dev_err(ocelot->dev,
+ "Unexpected P-frame requested to be transmitted on port %d while verification was unsuccessful or not yet verified, or MM_TX_ENA=0\n",
+ port);
+ }
+
+ ocelot_port_writel(ocelot_port, val, DEV_MM_STATUS);
+
+ mutex_unlock(&mm->lock);
+}
+EXPORT_SYMBOL_GPL(ocelot_port_mm_irq);
+
+int ocelot_port_set_mm(struct ocelot *ocelot, int port,
+ struct ethtool_mm_cfg *cfg,
+ struct netlink_ext_ack *extack)
+{
+ struct ocelot_port *ocelot_port = ocelot->ports[port];
+ u32 mm_enable = 0, verify_disable = 0, add_frag_size;
+ struct ocelot_mm_state *mm;
+ int err;
+
+ if (!ocelot->mm_supported)
+ return -EOPNOTSUPP;
+
+ mm = &ocelot->mm[port];
+
+ err = ethtool_mm_frag_size_min_to_add(cfg->tx_min_frag_size,
+ &add_frag_size, extack);
+ if (err)
+ return err;
+
+ if (cfg->pmac_enabled)
+ mm_enable |= DEV_MM_CONFIG_ENABLE_CONFIG_MM_RX_ENA;
+
+ if (cfg->tx_enabled)
+ mm_enable |= DEV_MM_CONFIG_ENABLE_CONFIG_MM_TX_ENA;
+
+ if (!cfg->verify_enabled)
+ verify_disable = DEV_MM_CONFIG_VERIF_CONFIG_PRM_VERIFY_DIS;
+
+ mutex_lock(&mm->lock);
+
+ ocelot_port_rmwl(ocelot_port, mm_enable,
+ DEV_MM_CONFIG_ENABLE_CONFIG_MM_TX_ENA |
+ DEV_MM_CONFIG_ENABLE_CONFIG_MM_RX_ENA,
+ DEV_MM_ENABLE_CONFIG);
+
+ ocelot_port_rmwl(ocelot_port, verify_disable |
+ DEV_MM_CONFIG_VERIF_CONFIG_PRM_VERIFY_TIME(cfg->verify_time),
+ DEV_MM_CONFIG_VERIF_CONFIG_PRM_VERIFY_DIS |
+ DEV_MM_CONFIG_VERIF_CONFIG_PRM_VERIFY_TIME_M,
+ DEV_MM_VERIF_CONFIG);
+
+ ocelot_rmw_rix(ocelot,
+ QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE(add_frag_size),
+ QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE_M,
+ QSYS_PREEMPTION_CFG,
+ port);
+
+ mutex_unlock(&mm->lock);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ocelot_port_set_mm);
+
+int ocelot_port_get_mm(struct ocelot *ocelot, int port,
+ struct ethtool_mm_state *state)
+{
+ struct ocelot_port *ocelot_port = ocelot->ports[port];
+ struct ocelot_mm_state *mm;
+ u32 val, add_frag_size;
+
+ if (!ocelot->mm_supported)
+ return -EOPNOTSUPP;
+
+ mm = &ocelot->mm[port];
+
+ mutex_lock(&mm->lock);
+
+ val = ocelot_port_readl(ocelot_port, DEV_MM_ENABLE_CONFIG);
+ state->pmac_enabled = !!(val & DEV_MM_CONFIG_ENABLE_CONFIG_MM_RX_ENA);
+ state->tx_enabled = !!(val & DEV_MM_CONFIG_ENABLE_CONFIG_MM_TX_ENA);
+
+ val = ocelot_port_readl(ocelot_port, DEV_MM_VERIF_CONFIG);
+ state->verify_enabled = !(val & DEV_MM_CONFIG_VERIF_CONFIG_PRM_VERIFY_DIS);
+ state->verify_time = DEV_MM_CONFIG_VERIF_CONFIG_PRM_VERIFY_TIME_X(val);
+ state->max_verify_time = 128;
+
+ val = ocelot_read_rix(ocelot, QSYS_PREEMPTION_CFG, port);
+ add_frag_size = QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE_X(val);
+ state->tx_min_frag_size = ethtool_mm_frag_size_add_to_min(add_frag_size);
+ state->rx_min_frag_size = ETH_ZLEN;
+
+ state->verify_status = mm->verify_status;
+ state->tx_active = mm->tx_active;
+
+ mutex_unlock(&mm->lock);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ocelot_port_get_mm);
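Editor's note: tx_min_frag_size is derived from the hardware addFragSize field via ethtool_mm_frag_size_add_to_min(). Assuming that helper implements the 802.3br mapping of (add + 1) * 64 - 4 octets (the 4-byte mCRC is left out of the count), the four possible values are easy to tabulate:

#include <stdio.h>

/* Assumed formula of ethtool_mm_frag_size_add_to_min(): addFragSize selects
 * a minimum non-final fragment size of (add + 1) * 64 - 4 octets. */
static unsigned int frag_size_add_to_min(unsigned int add_frag_size)
{
        return (add_frag_size + 1) * 64 - 4;
}

int main(void)
{
        for (unsigned int add = 0; add < 4; add++)
                printf("addFragSize %u -> min frag %u octets\n",
                       add, frag_size_add_to_min(add));
        return 0;       /* 60, 124, 188, 252 */
}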
+
+int ocelot_mm_init(struct ocelot *ocelot)
+{
+ struct ocelot_port *ocelot_port;
+ struct ocelot_mm_state *mm;
+ int port;
+
+ if (!ocelot->mm_supported)
+ return 0;
+
+ ocelot->mm = devm_kcalloc(ocelot->dev, ocelot->num_phys_ports,
+ sizeof(*ocelot->mm), GFP_KERNEL);
+ if (!ocelot->mm)
+ return -ENOMEM;
+
+ for (port = 0; port < ocelot->num_phys_ports; port++) {
+ u32 val;
+
+ mm = &ocelot->mm[port];
+ mutex_init(&mm->lock);
+ ocelot_port = ocelot->ports[port];
+
+ /* Update initial status variable for the
+ * verification state machine
+ */
+ val = ocelot_port_readl(ocelot_port, DEV_MM_STATUS);
+ mm->verify_status = ocelot_mm_verify_status(val);
+ }
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/mscc/ocelot_stats.c b/drivers/net/ethernet/mscc/ocelot_stats.c
index 1478c3b21af1..bdb893476832 100644
--- a/drivers/net/ethernet/mscc/ocelot_stats.c
+++ b/drivers/net/ethernet/mscc/ocelot_stats.c
@@ -4,6 +4,7 @@
* Copyright (c) 2017 Microsemi Corporation
* Copyright 2022 NXP
*/
+#include <linux/ethtool_netlink.h>
#include <linux/spinlock.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>
@@ -54,6 +55,29 @@ enum ocelot_stat {
OCELOT_STAT_RX_GREEN_PRIO_5,
OCELOT_STAT_RX_GREEN_PRIO_6,
OCELOT_STAT_RX_GREEN_PRIO_7,
+ OCELOT_STAT_RX_ASSEMBLY_ERRS,
+ OCELOT_STAT_RX_SMD_ERRS,
+ OCELOT_STAT_RX_ASSEMBLY_OK,
+ OCELOT_STAT_RX_MERGE_FRAGMENTS,
+ OCELOT_STAT_RX_PMAC_OCTETS,
+ OCELOT_STAT_RX_PMAC_UNICAST,
+ OCELOT_STAT_RX_PMAC_MULTICAST,
+ OCELOT_STAT_RX_PMAC_BROADCAST,
+ OCELOT_STAT_RX_PMAC_SHORTS,
+ OCELOT_STAT_RX_PMAC_FRAGMENTS,
+ OCELOT_STAT_RX_PMAC_JABBERS,
+ OCELOT_STAT_RX_PMAC_CRC_ALIGN_ERRS,
+ OCELOT_STAT_RX_PMAC_SYM_ERRS,
+ OCELOT_STAT_RX_PMAC_64,
+ OCELOT_STAT_RX_PMAC_65_127,
+ OCELOT_STAT_RX_PMAC_128_255,
+ OCELOT_STAT_RX_PMAC_256_511,
+ OCELOT_STAT_RX_PMAC_512_1023,
+ OCELOT_STAT_RX_PMAC_1024_1526,
+ OCELOT_STAT_RX_PMAC_1527_MAX,
+ OCELOT_STAT_RX_PMAC_PAUSE,
+ OCELOT_STAT_RX_PMAC_CONTROL,
+ OCELOT_STAT_RX_PMAC_LONGS,
OCELOT_STAT_TX_OCTETS,
OCELOT_STAT_TX_UNICAST,
OCELOT_STAT_TX_MULTICAST,
@@ -85,6 +109,20 @@ enum ocelot_stat {
OCELOT_STAT_TX_GREEN_PRIO_6,
OCELOT_STAT_TX_GREEN_PRIO_7,
OCELOT_STAT_TX_AGED,
+ OCELOT_STAT_TX_MM_HOLD,
+ OCELOT_STAT_TX_MERGE_FRAGMENTS,
+ OCELOT_STAT_TX_PMAC_OCTETS,
+ OCELOT_STAT_TX_PMAC_UNICAST,
+ OCELOT_STAT_TX_PMAC_MULTICAST,
+ OCELOT_STAT_TX_PMAC_BROADCAST,
+ OCELOT_STAT_TX_PMAC_PAUSE,
+ OCELOT_STAT_TX_PMAC_64,
+ OCELOT_STAT_TX_PMAC_65_127,
+ OCELOT_STAT_TX_PMAC_128_255,
+ OCELOT_STAT_TX_PMAC_256_511,
+ OCELOT_STAT_TX_PMAC_512_1023,
+ OCELOT_STAT_TX_PMAC_1024_1526,
+ OCELOT_STAT_TX_PMAC_1527_MAX,
OCELOT_STAT_DROP_LOCAL,
OCELOT_STAT_DROP_TAIL,
OCELOT_STAT_DROP_YELLOW_PRIO_0,
@@ -228,6 +266,55 @@ static const struct ocelot_stat_layout ocelot_stats_layout[OCELOT_NUM_STATS] = {
OCELOT_COMMON_STATS,
};
+static const struct ocelot_stat_layout ocelot_mm_stats_layout[OCELOT_NUM_STATS] = {
+ OCELOT_COMMON_STATS,
+ OCELOT_STAT(RX_ASSEMBLY_ERRS),
+ OCELOT_STAT(RX_SMD_ERRS),
+ OCELOT_STAT(RX_ASSEMBLY_OK),
+ OCELOT_STAT(RX_MERGE_FRAGMENTS),
+ OCELOT_STAT(TX_MERGE_FRAGMENTS),
+ OCELOT_STAT(RX_PMAC_OCTETS),
+ OCELOT_STAT(RX_PMAC_UNICAST),
+ OCELOT_STAT(RX_PMAC_MULTICAST),
+ OCELOT_STAT(RX_PMAC_BROADCAST),
+ OCELOT_STAT(RX_PMAC_SHORTS),
+ OCELOT_STAT(RX_PMAC_FRAGMENTS),
+ OCELOT_STAT(RX_PMAC_JABBERS),
+ OCELOT_STAT(RX_PMAC_CRC_ALIGN_ERRS),
+ OCELOT_STAT(RX_PMAC_SYM_ERRS),
+ OCELOT_STAT(RX_PMAC_64),
+ OCELOT_STAT(RX_PMAC_65_127),
+ OCELOT_STAT(RX_PMAC_128_255),
+ OCELOT_STAT(RX_PMAC_256_511),
+ OCELOT_STAT(RX_PMAC_512_1023),
+ OCELOT_STAT(RX_PMAC_1024_1526),
+ OCELOT_STAT(RX_PMAC_1527_MAX),
+ OCELOT_STAT(RX_PMAC_PAUSE),
+ OCELOT_STAT(RX_PMAC_CONTROL),
+ OCELOT_STAT(RX_PMAC_LONGS),
+ OCELOT_STAT(TX_PMAC_OCTETS),
+ OCELOT_STAT(TX_PMAC_UNICAST),
+ OCELOT_STAT(TX_PMAC_MULTICAST),
+ OCELOT_STAT(TX_PMAC_BROADCAST),
+ OCELOT_STAT(TX_PMAC_PAUSE),
+ OCELOT_STAT(TX_PMAC_64),
+ OCELOT_STAT(TX_PMAC_65_127),
+ OCELOT_STAT(TX_PMAC_128_255),
+ OCELOT_STAT(TX_PMAC_256_511),
+ OCELOT_STAT(TX_PMAC_512_1023),
+ OCELOT_STAT(TX_PMAC_1024_1526),
+ OCELOT_STAT(TX_PMAC_1527_MAX),
+};
+
+static const struct ocelot_stat_layout *
+ocelot_get_stats_layout(struct ocelot *ocelot)
+{
+ if (ocelot->mm_supported)
+ return ocelot_mm_stats_layout;
+
+ return ocelot_stats_layout;
+}
+
/* Read the counters from hardware and keep them in region->buf.
* Caller must hold &ocelot->stat_view_lock.
*/
@@ -306,17 +393,20 @@ static void ocelot_check_stats_work(struct work_struct *work)
void ocelot_get_strings(struct ocelot *ocelot, int port, u32 sset, u8 *data)
{
+ const struct ocelot_stat_layout *layout;
int i;
if (sset != ETH_SS_STATS)
return;
+ layout = ocelot_get_stats_layout(ocelot);
+
for (i = 0; i < OCELOT_NUM_STATS; i++) {
- if (ocelot_stats_layout[i].name[0] == '\0')
+ if (layout[i].name[0] == '\0')
continue;
- memcpy(data + i * ETH_GSTRING_LEN, ocelot_stats_layout[i].name,
- ETH_GSTRING_LEN);
+ memcpy(data, layout[i].name, ETH_GSTRING_LEN);
+ data += ETH_GSTRING_LEN;
}
}
EXPORT_SYMBOL(ocelot_get_strings);
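Editor's note: ocelot_get_strings() now packs names back-to-back instead of indexing by stat position, so layout entries with empty names leave no holes, and the string count must line up exactly with ocelot_get_sset_count(). A standalone sketch of the packing:

#include <stdio.h>
#include <string.h>

#define GSTRING_LEN 32  /* stand-in for ETH_GSTRING_LEN */

int main(void)
{
        const char *names[] = { "rx_octets", "", "tx_octets" }; /* "" = unexported */
        char buf[sizeof(names) / sizeof(names[0]) * GSTRING_LEN];
        char *data = buf;
        size_t i, count = 0;

        for (i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
                if (names[i][0] == '\0')
                        continue;       /* skipped entirely: no hole in the table */
                strncpy(data, names[i], GSTRING_LEN);
                data += GSTRING_LEN;
                count++;
        }
        printf("%zu strings packed\n", count);  /* 2, matching get_sset_count */
        return 0;
}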
@@ -350,13 +440,16 @@ out_unlock:
int ocelot_get_sset_count(struct ocelot *ocelot, int port, int sset)
{
+ const struct ocelot_stat_layout *layout;
int i, num_stats = 0;
if (sset != ETH_SS_STATS)
return -EOPNOTSUPP;
+ layout = ocelot_get_stats_layout(ocelot);
+
for (i = 0; i < OCELOT_NUM_STATS; i++)
- if (ocelot_stats_layout[i].name[0] != '\0')
+ if (layout[i].name[0] != '\0')
num_stats++;
return num_stats;
@@ -366,14 +459,17 @@ EXPORT_SYMBOL(ocelot_get_sset_count);
static void ocelot_port_ethtool_stats_cb(struct ocelot *ocelot, int port,
void *priv)
{
+ const struct ocelot_stat_layout *layout;
u64 *data = priv;
int i;
+ layout = ocelot_get_stats_layout(ocelot);
+
/* Copy all supported counters */
for (i = 0; i < OCELOT_NUM_STATS; i++) {
int index = port * OCELOT_NUM_STATS + i;
- if (ocelot_stats_layout[i].name[0] == '\0')
+ if (layout[i].name[0] == '\0')
continue;
*data++ = ocelot->stats[index];
@@ -395,14 +491,63 @@ static void ocelot_port_pause_stats_cb(struct ocelot *ocelot, int port, void *pr
pause_stats->rx_pause_frames = s[OCELOT_STAT_RX_PAUSE];
}
+static void ocelot_port_pmac_pause_stats_cb(struct ocelot *ocelot, int port,
+ void *priv)
+{
+ u64 *s = &ocelot->stats[port * OCELOT_NUM_STATS];
+ struct ethtool_pause_stats *pause_stats = priv;
+
+ pause_stats->tx_pause_frames = s[OCELOT_STAT_TX_PMAC_PAUSE];
+ pause_stats->rx_pause_frames = s[OCELOT_STAT_RX_PMAC_PAUSE];
+}
+
+static void ocelot_port_mm_stats_cb(struct ocelot *ocelot, int port,
+ void *priv)
+{
+ u64 *s = &ocelot->stats[port * OCELOT_NUM_STATS];
+ struct ethtool_mm_stats *stats = priv;
+
+ stats->MACMergeFrameAssErrorCount = s[OCELOT_STAT_RX_ASSEMBLY_ERRS];
+ stats->MACMergeFrameSmdErrorCount = s[OCELOT_STAT_RX_SMD_ERRS];
+ stats->MACMergeFrameAssOkCount = s[OCELOT_STAT_RX_ASSEMBLY_OK];
+ stats->MACMergeFragCountRx = s[OCELOT_STAT_RX_MERGE_FRAGMENTS];
+ stats->MACMergeFragCountTx = s[OCELOT_STAT_TX_MERGE_FRAGMENTS];
+ stats->MACMergeHoldCount = s[OCELOT_STAT_TX_MM_HOLD];
+}
+
void ocelot_port_get_pause_stats(struct ocelot *ocelot, int port,
struct ethtool_pause_stats *pause_stats)
{
- ocelot_port_stats_run(ocelot, port, pause_stats,
- ocelot_port_pause_stats_cb);
+ struct net_device *dev;
+
+ switch (pause_stats->src) {
+ case ETHTOOL_MAC_STATS_SRC_EMAC:
+ ocelot_port_stats_run(ocelot, port, pause_stats,
+ ocelot_port_pause_stats_cb);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_PMAC:
+ if (ocelot->mm_supported)
+ ocelot_port_stats_run(ocelot, port, pause_stats,
+ ocelot_port_pmac_pause_stats_cb);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_AGGREGATE:
+ dev = ocelot->ops->port_to_netdev(ocelot, port);
+ ethtool_aggregate_pause_stats(dev, pause_stats);
+ break;
+ }
}
EXPORT_SYMBOL_GPL(ocelot_port_get_pause_stats);
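Editor's note: the same EMAC/PMAC/AGGREGATE dispatch repeats for the RMON, ctrl, MAC and PHY stats below. For the aggregate case the driver defers to the ethtool_aggregate_*() helpers; the working assumption here (their implementation is not part of this diff) is a per-field sum of the eMAC and pMAC views, roughly:

#include <stdint.h>
#include <stdio.h>

struct pause_stats {
        uint64_t tx_pause_frames;
        uint64_t rx_pause_frames;
};

/* Toy model of the aggregate view: eMAC and pMAC counters summed per
 * field (an assumption based on the helper's name and usage here). */
static struct pause_stats aggregate(struct pause_stats emac, struct pause_stats pmac)
{
        return (struct pause_stats){
                .tx_pause_frames = emac.tx_pause_frames + pmac.tx_pause_frames,
                .rx_pause_frames = emac.rx_pause_frames + pmac.rx_pause_frames,
        };
}

int main(void)
{
        struct pause_stats e = { 10, 4 }, p = { 2, 1 };
        struct pause_stats a = aggregate(e, p);

        printf("tx %llu rx %llu\n",
               (unsigned long long)a.tx_pause_frames,
               (unsigned long long)a.rx_pause_frames);  /* tx 12 rx 5 */
        return 0;
}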
+void ocelot_port_get_mm_stats(struct ocelot *ocelot, int port,
+ struct ethtool_mm_stats *stats)
+{
+ if (!ocelot->mm_supported)
+ return;
+
+ ocelot_port_stats_run(ocelot, port, stats, ocelot_port_mm_stats_cb);
+}
+EXPORT_SYMBOL_GPL(ocelot_port_get_mm_stats);
+
static const struct ethtool_rmon_hist_range ocelot_rmon_ranges[] = {
{ 64, 64 },
{ 65, 127 },
@@ -441,14 +586,57 @@ static void ocelot_port_rmon_stats_cb(struct ocelot *ocelot, int port, void *pri
rmon_stats->hist_tx[6] = s[OCELOT_STAT_TX_1024_1526];
}
+static void ocelot_port_pmac_rmon_stats_cb(struct ocelot *ocelot, int port,
+ void *priv)
+{
+ u64 *s = &ocelot->stats[port * OCELOT_NUM_STATS];
+ struct ethtool_rmon_stats *rmon_stats = priv;
+
+ rmon_stats->undersize_pkts = s[OCELOT_STAT_RX_PMAC_SHORTS];
+ rmon_stats->oversize_pkts = s[OCELOT_STAT_RX_PMAC_LONGS];
+ rmon_stats->fragments = s[OCELOT_STAT_RX_PMAC_FRAGMENTS];
+ rmon_stats->jabbers = s[OCELOT_STAT_RX_PMAC_JABBERS];
+
+ rmon_stats->hist[0] = s[OCELOT_STAT_RX_PMAC_64];
+ rmon_stats->hist[1] = s[OCELOT_STAT_RX_PMAC_65_127];
+ rmon_stats->hist[2] = s[OCELOT_STAT_RX_PMAC_128_255];
+ rmon_stats->hist[3] = s[OCELOT_STAT_RX_PMAC_256_511];
+ rmon_stats->hist[4] = s[OCELOT_STAT_RX_PMAC_512_1023];
+ rmon_stats->hist[5] = s[OCELOT_STAT_RX_PMAC_1024_1526];
+ rmon_stats->hist[6] = s[OCELOT_STAT_RX_PMAC_1527_MAX];
+
+ rmon_stats->hist_tx[0] = s[OCELOT_STAT_TX_PMAC_64];
+ rmon_stats->hist_tx[1] = s[OCELOT_STAT_TX_PMAC_65_127];
+ rmon_stats->hist_tx[2] = s[OCELOT_STAT_TX_PMAC_128_255];
+ rmon_stats->hist_tx[3] = s[OCELOT_STAT_TX_PMAC_256_511];
+ rmon_stats->hist_tx[4] = s[OCELOT_STAT_TX_PMAC_512_1023];
+ rmon_stats->hist_tx[5] = s[OCELOT_STAT_TX_PMAC_1024_1526];
+ rmon_stats->hist_tx[6] = s[OCELOT_STAT_TX_PMAC_1527_MAX];
+}
+
void ocelot_port_get_rmon_stats(struct ocelot *ocelot, int port,
struct ethtool_rmon_stats *rmon_stats,
const struct ethtool_rmon_hist_range **ranges)
{
+ struct net_device *dev;
+
*ranges = ocelot_rmon_ranges;
- ocelot_port_stats_run(ocelot, port, rmon_stats,
- ocelot_port_rmon_stats_cb);
+ switch (rmon_stats->src) {
+ case ETHTOOL_MAC_STATS_SRC_EMAC:
+ ocelot_port_stats_run(ocelot, port, rmon_stats,
+ ocelot_port_rmon_stats_cb);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_PMAC:
+ if (ocelot->mm_supported)
+ ocelot_port_stats_run(ocelot, port, rmon_stats,
+ ocelot_port_pmac_rmon_stats_cb);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_AGGREGATE:
+ dev = ocelot->ops->port_to_netdev(ocelot, port);
+ ethtool_aggregate_rmon_stats(dev, rmon_stats);
+ break;
+ }
}
EXPORT_SYMBOL_GPL(ocelot_port_get_rmon_stats);
@@ -460,11 +648,35 @@ static void ocelot_port_ctrl_stats_cb(struct ocelot *ocelot, int port, void *pri
ctrl_stats->MACControlFramesReceived = s[OCELOT_STAT_RX_CONTROL];
}
+static void ocelot_port_pmac_ctrl_stats_cb(struct ocelot *ocelot, int port,
+ void *priv)
+{
+ u64 *s = &ocelot->stats[port * OCELOT_NUM_STATS];
+ struct ethtool_eth_ctrl_stats *ctrl_stats = priv;
+
+ ctrl_stats->MACControlFramesReceived = s[OCELOT_STAT_RX_PMAC_CONTROL];
+}
+
void ocelot_port_get_eth_ctrl_stats(struct ocelot *ocelot, int port,
struct ethtool_eth_ctrl_stats *ctrl_stats)
{
- ocelot_port_stats_run(ocelot, port, ctrl_stats,
- ocelot_port_ctrl_stats_cb);
+ struct net_device *dev;
+
+ switch (ctrl_stats->src) {
+ case ETHTOOL_MAC_STATS_SRC_EMAC:
+ ocelot_port_stats_run(ocelot, port, ctrl_stats,
+ ocelot_port_ctrl_stats_cb);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_PMAC:
+ if (ocelot->mm_supported)
+ ocelot_port_stats_run(ocelot, port, ctrl_stats,
+ ocelot_port_pmac_ctrl_stats_cb);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_AGGREGATE:
+ dev = ocelot->ops->port_to_netdev(ocelot, port);
+ ethtool_aggregate_ctrl_stats(dev, ctrl_stats);
+ break;
+ }
}
EXPORT_SYMBOL_GPL(ocelot_port_get_eth_ctrl_stats);
@@ -510,11 +722,60 @@ static void ocelot_port_mac_stats_cb(struct ocelot *ocelot, int port, void *priv
mac_stats->AlignmentErrors = s[OCELOT_STAT_RX_CRC_ALIGN_ERRS];
}
+static void ocelot_port_pmac_mac_stats_cb(struct ocelot *ocelot, int port,
+ void *priv)
+{
+ u64 *s = &ocelot->stats[port * OCELOT_NUM_STATS];
+ struct ethtool_eth_mac_stats *mac_stats = priv;
+
+ mac_stats->OctetsTransmittedOK = s[OCELOT_STAT_TX_PMAC_OCTETS];
+ mac_stats->FramesTransmittedOK = s[OCELOT_STAT_TX_PMAC_64] +
+ s[OCELOT_STAT_TX_PMAC_65_127] +
+ s[OCELOT_STAT_TX_PMAC_128_255] +
+ s[OCELOT_STAT_TX_PMAC_256_511] +
+ s[OCELOT_STAT_TX_PMAC_512_1023] +
+ s[OCELOT_STAT_TX_PMAC_1024_1526] +
+ s[OCELOT_STAT_TX_PMAC_1527_MAX];
+ mac_stats->OctetsReceivedOK = s[OCELOT_STAT_RX_PMAC_OCTETS];
+ mac_stats->FramesReceivedOK = s[OCELOT_STAT_RX_PMAC_64] +
+ s[OCELOT_STAT_RX_PMAC_65_127] +
+ s[OCELOT_STAT_RX_PMAC_128_255] +
+ s[OCELOT_STAT_RX_PMAC_256_511] +
+ s[OCELOT_STAT_RX_PMAC_512_1023] +
+ s[OCELOT_STAT_RX_PMAC_1024_1526] +
+ s[OCELOT_STAT_RX_PMAC_1527_MAX];
+ mac_stats->MulticastFramesXmittedOK = s[OCELOT_STAT_TX_PMAC_MULTICAST];
+ mac_stats->BroadcastFramesXmittedOK = s[OCELOT_STAT_TX_PMAC_BROADCAST];
+ mac_stats->MulticastFramesReceivedOK = s[OCELOT_STAT_RX_PMAC_MULTICAST];
+ mac_stats->BroadcastFramesReceivedOK = s[OCELOT_STAT_RX_PMAC_BROADCAST];
+ mac_stats->FrameTooLongErrors = s[OCELOT_STAT_RX_PMAC_LONGS];
+ /* Sadly, C_RX_CRC is the sum of FCS and alignment errors; they are not
+ * counted individually.
+ */
+ mac_stats->FrameCheckSequenceErrors = s[OCELOT_STAT_RX_PMAC_CRC_ALIGN_ERRS];
+ mac_stats->AlignmentErrors = s[OCELOT_STAT_RX_PMAC_CRC_ALIGN_ERRS];
+}
+
void ocelot_port_get_eth_mac_stats(struct ocelot *ocelot, int port,
struct ethtool_eth_mac_stats *mac_stats)
{
- ocelot_port_stats_run(ocelot, port, mac_stats,
- ocelot_port_mac_stats_cb);
+ struct net_device *dev;
+
+ switch (mac_stats->src) {
+ case ETHTOOL_MAC_STATS_SRC_EMAC:
+ ocelot_port_stats_run(ocelot, port, mac_stats,
+ ocelot_port_mac_stats_cb);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_PMAC:
+ if (ocelot->mm_supported)
+ ocelot_port_stats_run(ocelot, port, mac_stats,
+ ocelot_port_pmac_mac_stats_cb);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_AGGREGATE:
+ dev = ocelot->ops->port_to_netdev(ocelot, port);
+ ethtool_aggregate_mac_stats(dev, mac_stats);
+ break;
+ }
}
EXPORT_SYMBOL_GPL(ocelot_port_get_eth_mac_stats);
@@ -526,11 +787,35 @@ static void ocelot_port_phy_stats_cb(struct ocelot *ocelot, int port, void *priv
phy_stats->SymbolErrorDuringCarrier = s[OCELOT_STAT_RX_SYM_ERRS];
}
+static void ocelot_port_pmac_phy_stats_cb(struct ocelot *ocelot, int port,
+ void *priv)
+{
+ u64 *s = &ocelot->stats[port * OCELOT_NUM_STATS];
+ struct ethtool_eth_phy_stats *phy_stats = priv;
+
+ phy_stats->SymbolErrorDuringCarrier = s[OCELOT_STAT_RX_PMAC_SYM_ERRS];
+}
+
void ocelot_port_get_eth_phy_stats(struct ocelot *ocelot, int port,
struct ethtool_eth_phy_stats *phy_stats)
{
- ocelot_port_stats_run(ocelot, port, phy_stats,
- ocelot_port_phy_stats_cb);
+ struct net_device *dev;
+
+ switch (phy_stats->src) {
+ case ETHTOOL_MAC_STATS_SRC_EMAC:
+ ocelot_port_stats_run(ocelot, port, phy_stats,
+ ocelot_port_phy_stats_cb);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_PMAC:
+ if (ocelot->mm_supported)
+ ocelot_port_stats_run(ocelot, port, phy_stats,
+ ocelot_port_pmac_phy_stats_cb);
+ break;
+ case ETHTOOL_MAC_STATS_SRC_AGGREGATE:
+ dev = ocelot->ops->port_to_netdev(ocelot, port);
+ ethtool_aggregate_phy_stats(dev, phy_stats);
+ break;
+ }
}
EXPORT_SYMBOL_GPL(ocelot_port_get_eth_phy_stats);
@@ -602,16 +887,19 @@ EXPORT_SYMBOL(ocelot_port_get_stats64);
static int ocelot_prepare_stats_regions(struct ocelot *ocelot)
{
struct ocelot_stats_region *region = NULL;
+ const struct ocelot_stat_layout *layout;
unsigned int last = 0;
int i;
INIT_LIST_HEAD(&ocelot->stats_regions);
+ layout = ocelot_get_stats_layout(ocelot);
+
for (i = 0; i < OCELOT_NUM_STATS; i++) {
- if (!ocelot_stats_layout[i].reg)
+ if (!layout[i].reg)
continue;
- if (region && ocelot_stats_layout[i].reg == last + 4) {
+ if (region && layout[i].reg == last + 4) {
region->count++;
} else {
region = devm_kzalloc(ocelot->dev, sizeof(*region),
@@ -620,17 +908,17 @@ static int ocelot_prepare_stats_regions(struct ocelot *ocelot)
return -ENOMEM;
/* enum ocelot_stat must be kept sorted in the same
- * order as ocelot_stats_layout[i].reg in order to have
- * efficient bulking
+ * order as layout[i].reg in order to have efficient
+ * bulking
*/
- WARN_ON(last >= ocelot_stats_layout[i].reg);
+ WARN_ON(last >= layout[i].reg);
- region->base = ocelot_stats_layout[i].reg;
+ region->base = layout[i].reg;
region->count = 1;
list_add_tail(&region->node, &ocelot->stats_regions);
}
- last = ocelot_stats_layout[i].reg;
+ last = layout[i].reg;
}
list_for_each_entry(region, &ocelot->stats_regions, node) {
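Editor's note: the coalescing in ocelot_prepare_stats_regions() is why the WARN_ON insists on sorted layout[i].reg values: registers spaced exactly 4 bytes apart collapse into one region that can be read with a single bulk access. A standalone sketch of the grouping:

#include <stdint.h>
#include <stdio.h>

struct region { uint32_t base; unsigned int count; };

int main(void)
{
        /* Sorted register offsets; 0x00..0x08 are contiguous, 0x40 is not */
        uint32_t regs[] = { 0x00, 0x04, 0x08, 0x40, 0x44 };
        struct region out[8];
        int n = 0;
        size_t i;

        for (i = 0; i < sizeof(regs) / sizeof(regs[0]); i++) {
                if (n && regs[i] == out[n - 1].base + 4 * out[n - 1].count)
                        out[n - 1].count++;     /* extend the current region */
                else
                        out[n++] = (struct region){ .base = regs[i], .count = 1 };
        }
        for (int j = 0; j < n; j++)     /* two regions: (0x00, 3) and (0x40, 2) */
                printf("region base 0x%02x count %u\n",
                       (unsigned int)out[j].base, out[j].count);
        return 0;
}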
diff --git a/drivers/net/ethernet/mscc/ocelot_vsc7514.c b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
index b097fd4a4061..7388c3b0535c 100644
--- a/drivers/net/ethernet/mscc/ocelot_vsc7514.c
+++ b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
@@ -6,7 +6,6 @@
*/
#include <linux/dsa/ocelot.h>
#include <linux/interrupt.h>
-#include <linux/iopoll.h>
#include <linux/module.h>
#include <linux/of_net.h>
#include <linux/netdevice.h>
@@ -17,6 +16,7 @@
#include <linux/skbuff.h>
#include <net/switchdev.h>
+#include <soc/mscc/ocelot.h>
#include <soc/mscc/ocelot_vcap.h>
#include <soc/mscc/ocelot_hsio.h>
#include <soc/mscc/vsc7514_regs.h>
@@ -26,80 +26,6 @@
#define VSC7514_VCAP_POLICER_BASE 128
#define VSC7514_VCAP_POLICER_MAX 191
-#define MEM_INIT_SLEEP_US 1000
-#define MEM_INIT_TIMEOUT_US 100000
-
-static const u32 *ocelot_regmap[TARGET_MAX] = {
- [ANA] = vsc7514_ana_regmap,
- [QS] = vsc7514_qs_regmap,
- [QSYS] = vsc7514_qsys_regmap,
- [REW] = vsc7514_rew_regmap,
- [SYS] = vsc7514_sys_regmap,
- [S0] = vsc7514_vcap_regmap,
- [S1] = vsc7514_vcap_regmap,
- [S2] = vsc7514_vcap_regmap,
- [PTP] = vsc7514_ptp_regmap,
- [DEV_GMII] = vsc7514_dev_gmii_regmap,
-};
-
-static const struct reg_field ocelot_regfields[REGFIELD_MAX] = {
- [ANA_ADVLEARN_VLAN_CHK] = REG_FIELD(ANA_ADVLEARN, 11, 11),
- [ANA_ADVLEARN_LEARN_MIRROR] = REG_FIELD(ANA_ADVLEARN, 0, 10),
- [ANA_ANEVENTS_MSTI_DROP] = REG_FIELD(ANA_ANEVENTS, 27, 27),
- [ANA_ANEVENTS_ACLKILL] = REG_FIELD(ANA_ANEVENTS, 26, 26),
- [ANA_ANEVENTS_ACLUSED] = REG_FIELD(ANA_ANEVENTS, 25, 25),
- [ANA_ANEVENTS_AUTOAGE] = REG_FIELD(ANA_ANEVENTS, 24, 24),
- [ANA_ANEVENTS_VS2TTL1] = REG_FIELD(ANA_ANEVENTS, 23, 23),
- [ANA_ANEVENTS_STORM_DROP] = REG_FIELD(ANA_ANEVENTS, 22, 22),
- [ANA_ANEVENTS_LEARN_DROP] = REG_FIELD(ANA_ANEVENTS, 21, 21),
- [ANA_ANEVENTS_AGED_ENTRY] = REG_FIELD(ANA_ANEVENTS, 20, 20),
- [ANA_ANEVENTS_CPU_LEARN_FAILED] = REG_FIELD(ANA_ANEVENTS, 19, 19),
- [ANA_ANEVENTS_AUTO_LEARN_FAILED] = REG_FIELD(ANA_ANEVENTS, 18, 18),
- [ANA_ANEVENTS_LEARN_REMOVE] = REG_FIELD(ANA_ANEVENTS, 17, 17),
- [ANA_ANEVENTS_AUTO_LEARNED] = REG_FIELD(ANA_ANEVENTS, 16, 16),
- [ANA_ANEVENTS_AUTO_MOVED] = REG_FIELD(ANA_ANEVENTS, 15, 15),
- [ANA_ANEVENTS_DROPPED] = REG_FIELD(ANA_ANEVENTS, 14, 14),
- [ANA_ANEVENTS_CLASSIFIED_DROP] = REG_FIELD(ANA_ANEVENTS, 13, 13),
- [ANA_ANEVENTS_CLASSIFIED_COPY] = REG_FIELD(ANA_ANEVENTS, 12, 12),
- [ANA_ANEVENTS_VLAN_DISCARD] = REG_FIELD(ANA_ANEVENTS, 11, 11),
- [ANA_ANEVENTS_FWD_DISCARD] = REG_FIELD(ANA_ANEVENTS, 10, 10),
- [ANA_ANEVENTS_MULTICAST_FLOOD] = REG_FIELD(ANA_ANEVENTS, 9, 9),
- [ANA_ANEVENTS_UNICAST_FLOOD] = REG_FIELD(ANA_ANEVENTS, 8, 8),
- [ANA_ANEVENTS_DEST_KNOWN] = REG_FIELD(ANA_ANEVENTS, 7, 7),
- [ANA_ANEVENTS_BUCKET3_MATCH] = REG_FIELD(ANA_ANEVENTS, 6, 6),
- [ANA_ANEVENTS_BUCKET2_MATCH] = REG_FIELD(ANA_ANEVENTS, 5, 5),
- [ANA_ANEVENTS_BUCKET1_MATCH] = REG_FIELD(ANA_ANEVENTS, 4, 4),
- [ANA_ANEVENTS_BUCKET0_MATCH] = REG_FIELD(ANA_ANEVENTS, 3, 3),
- [ANA_ANEVENTS_CPU_OPERATION] = REG_FIELD(ANA_ANEVENTS, 2, 2),
- [ANA_ANEVENTS_DMAC_LOOKUP] = REG_FIELD(ANA_ANEVENTS, 1, 1),
- [ANA_ANEVENTS_SMAC_LOOKUP] = REG_FIELD(ANA_ANEVENTS, 0, 0),
- [ANA_TABLES_MACACCESS_B_DOM] = REG_FIELD(ANA_TABLES_MACACCESS, 18, 18),
- [ANA_TABLES_MACTINDX_BUCKET] = REG_FIELD(ANA_TABLES_MACTINDX, 10, 11),
- [ANA_TABLES_MACTINDX_M_INDEX] = REG_FIELD(ANA_TABLES_MACTINDX, 0, 9),
- [QSYS_TIMED_FRAME_ENTRY_TFRM_VLD] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 20, 20),
- [QSYS_TIMED_FRAME_ENTRY_TFRM_FP] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 8, 19),
- [QSYS_TIMED_FRAME_ENTRY_TFRM_PORTNO] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 4, 7),
- [QSYS_TIMED_FRAME_ENTRY_TFRM_TM_SEL] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 1, 3),
- [QSYS_TIMED_FRAME_ENTRY_TFRM_TM_T] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 0, 0),
- [SYS_RESET_CFG_CORE_ENA] = REG_FIELD(SYS_RESET_CFG, 2, 2),
- [SYS_RESET_CFG_MEM_ENA] = REG_FIELD(SYS_RESET_CFG, 1, 1),
- [SYS_RESET_CFG_MEM_INIT] = REG_FIELD(SYS_RESET_CFG, 0, 0),
- /* Replicated per number of ports (12), register size 4 per port */
- [QSYS_SWITCH_PORT_MODE_PORT_ENA] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 14, 14, 12, 4),
- [QSYS_SWITCH_PORT_MODE_SCH_NEXT_CFG] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 11, 13, 12, 4),
- [QSYS_SWITCH_PORT_MODE_YEL_RSRVD] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 10, 10, 12, 4),
- [QSYS_SWITCH_PORT_MODE_INGRESS_DROP_MODE] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 9, 9, 12, 4),
- [QSYS_SWITCH_PORT_MODE_TX_PFC_ENA] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 1, 8, 12, 4),
- [QSYS_SWITCH_PORT_MODE_TX_PFC_MODE] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 0, 0, 12, 4),
- [SYS_PORT_MODE_DATA_WO_TS] = REG_FIELD_ID(SYS_PORT_MODE, 5, 6, 12, 4),
- [SYS_PORT_MODE_INCL_INJ_HDR] = REG_FIELD_ID(SYS_PORT_MODE, 3, 4, 12, 4),
- [SYS_PORT_MODE_INCL_XTR_HDR] = REG_FIELD_ID(SYS_PORT_MODE, 1, 2, 12, 4),
- [SYS_PORT_MODE_INCL_HDR_ERR] = REG_FIELD_ID(SYS_PORT_MODE, 0, 0, 12, 4),
- [SYS_PAUSE_CFG_PAUSE_START] = REG_FIELD_ID(SYS_PAUSE_CFG, 10, 18, 12, 4),
- [SYS_PAUSE_CFG_PAUSE_STOP] = REG_FIELD_ID(SYS_PAUSE_CFG, 1, 9, 12, 4),
- [SYS_PAUSE_CFG_PAUSE_ENA] = REG_FIELD_ID(SYS_PAUSE_CFG, 0, 1, 12, 4),
-};
-
static void ocelot_pll5_init(struct ocelot *ocelot)
{
/* Configure PLL5. This will need a proper CCF driver
@@ -133,11 +59,11 @@ static int ocelot_chip_init(struct ocelot *ocelot, const struct ocelot_ops *ops)
{
int ret;
- ocelot->map = ocelot_regmap;
+ ocelot->map = vsc7514_regmap;
ocelot->num_mact_rows = 1024;
ocelot->ops = ops;
- ret = ocelot_regfields_init(ocelot, ocelot_regfields);
+ ret = ocelot_regfields_init(ocelot, vsc7514_regfields);
if (ret)
return ret;
@@ -190,73 +116,6 @@ static const struct of_device_id mscc_ocelot_match[] = {
};
MODULE_DEVICE_TABLE(of, mscc_ocelot_match);
-static int ocelot_mem_init_status(struct ocelot *ocelot)
-{
- unsigned int val;
- int err;
-
- err = regmap_field_read(ocelot->regfields[SYS_RESET_CFG_MEM_INIT],
- &val);
-
- return err ?: val;
-}
-
-static int ocelot_reset(struct ocelot *ocelot)
-{
- int err;
- u32 val;
-
- err = regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_INIT], 1);
- if (err)
- return err;
-
- err = regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_ENA], 1);
- if (err)
- return err;
-
- /* MEM_INIT is a self-clearing bit. Wait for it to be cleared (should be
- * 100us) before enabling the switch core.
- */
- err = readx_poll_timeout(ocelot_mem_init_status, ocelot, val, !val,
- MEM_INIT_SLEEP_US, MEM_INIT_TIMEOUT_US);
- if (err)
- return err;
-
- err = regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_ENA], 1);
- if (err)
- return err;
-
- return regmap_field_write(ocelot->regfields[SYS_RESET_CFG_CORE_ENA], 1);
-}
-
-/* Watermark encode
- * Bit 8: Unit; 0:1, 1:16
- * Bit 7-0: Value to be multiplied with unit
- */
-static u16 ocelot_wm_enc(u16 value)
-{
- WARN_ON(value >= 16 * BIT(8));
-
- if (value >= BIT(8))
- return BIT(8) | (value / 16);
-
- return value;
-}
-
-static u16 ocelot_wm_dec(u16 wm)
-{
- if (wm & BIT(8))
- return (wm & GENMASK(7, 0)) * 16;
-
- return wm;
-}
-
-static void ocelot_wm_stat(u32 val, u32 *inuse, u32 *maxuse)
-{
- *inuse = (val & GENMASK(23, 12)) >> 12;
- *maxuse = val & GENMASK(11, 0);
-}
-
static const struct ocelot_ops ocelot_ops = {
.reset = ocelot_reset,
.wm_enc = ocelot_wm_enc,
@@ -266,49 +125,6 @@ static const struct ocelot_ops ocelot_ops = {
.netdev_to_port = ocelot_netdev_to_port,
};
-static struct vcap_props vsc7514_vcap_props[] = {
- [VCAP_ES0] = {
- .action_type_width = 0,
- .action_table = {
- [ES0_ACTION_TYPE_NORMAL] = {
- .width = 73, /* HIT_STICKY not included */
- .count = 1,
- },
- },
- .target = S0,
- .keys = vsc7514_vcap_es0_keys,
- .actions = vsc7514_vcap_es0_actions,
- },
- [VCAP_IS1] = {
- .action_type_width = 0,
- .action_table = {
- [IS1_ACTION_TYPE_NORMAL] = {
- .width = 78, /* HIT_STICKY not included */
- .count = 4,
- },
- },
- .target = S1,
- .keys = vsc7514_vcap_is1_keys,
- .actions = vsc7514_vcap_is1_actions,
- },
- [VCAP_IS2] = {
- .action_type_width = 1,
- .action_table = {
- [IS2_ACTION_TYPE_NORMAL] = {
- .width = 49,
- .count = 2
- },
- [IS2_ACTION_TYPE_SMAC_SIP] = {
- .width = 6,
- .count = 4
- },
- },
- .target = S2,
- .keys = vsc7514_vcap_is2_keys,
- .actions = vsc7514_vcap_is2_actions,
- },
-};
-
static struct ptp_clock_info ocelot_ptp_clock_info = {
.owner = THIS_MODULE,
.name = "ocelot ptp",
diff --git a/drivers/net/ethernet/mscc/vsc7514_regs.c b/drivers/net/ethernet/mscc/vsc7514_regs.c
index 9d2d3e13cacf..ef6fd3f6be30 100644
--- a/drivers/net/ethernet/mscc/vsc7514_regs.c
+++ b/drivers/net/ethernet/mscc/vsc7514_regs.c
@@ -9,7 +9,66 @@
#include <soc/mscc/vsc7514_regs.h>
#include "ocelot.h"
-const u32 vsc7514_ana_regmap[] = {
+const struct reg_field vsc7514_regfields[REGFIELD_MAX] = {
+ [ANA_ADVLEARN_VLAN_CHK] = REG_FIELD(ANA_ADVLEARN, 11, 11),
+ [ANA_ADVLEARN_LEARN_MIRROR] = REG_FIELD(ANA_ADVLEARN, 0, 10),
+ [ANA_ANEVENTS_MSTI_DROP] = REG_FIELD(ANA_ANEVENTS, 27, 27),
+ [ANA_ANEVENTS_ACLKILL] = REG_FIELD(ANA_ANEVENTS, 26, 26),
+ [ANA_ANEVENTS_ACLUSED] = REG_FIELD(ANA_ANEVENTS, 25, 25),
+ [ANA_ANEVENTS_AUTOAGE] = REG_FIELD(ANA_ANEVENTS, 24, 24),
+ [ANA_ANEVENTS_VS2TTL1] = REG_FIELD(ANA_ANEVENTS, 23, 23),
+ [ANA_ANEVENTS_STORM_DROP] = REG_FIELD(ANA_ANEVENTS, 22, 22),
+ [ANA_ANEVENTS_LEARN_DROP] = REG_FIELD(ANA_ANEVENTS, 21, 21),
+ [ANA_ANEVENTS_AGED_ENTRY] = REG_FIELD(ANA_ANEVENTS, 20, 20),
+ [ANA_ANEVENTS_CPU_LEARN_FAILED] = REG_FIELD(ANA_ANEVENTS, 19, 19),
+ [ANA_ANEVENTS_AUTO_LEARN_FAILED] = REG_FIELD(ANA_ANEVENTS, 18, 18),
+ [ANA_ANEVENTS_LEARN_REMOVE] = REG_FIELD(ANA_ANEVENTS, 17, 17),
+ [ANA_ANEVENTS_AUTO_LEARNED] = REG_FIELD(ANA_ANEVENTS, 16, 16),
+ [ANA_ANEVENTS_AUTO_MOVED] = REG_FIELD(ANA_ANEVENTS, 15, 15),
+ [ANA_ANEVENTS_DROPPED] = REG_FIELD(ANA_ANEVENTS, 14, 14),
+ [ANA_ANEVENTS_CLASSIFIED_DROP] = REG_FIELD(ANA_ANEVENTS, 13, 13),
+ [ANA_ANEVENTS_CLASSIFIED_COPY] = REG_FIELD(ANA_ANEVENTS, 12, 12),
+ [ANA_ANEVENTS_VLAN_DISCARD] = REG_FIELD(ANA_ANEVENTS, 11, 11),
+ [ANA_ANEVENTS_FWD_DISCARD] = REG_FIELD(ANA_ANEVENTS, 10, 10),
+ [ANA_ANEVENTS_MULTICAST_FLOOD] = REG_FIELD(ANA_ANEVENTS, 9, 9),
+ [ANA_ANEVENTS_UNICAST_FLOOD] = REG_FIELD(ANA_ANEVENTS, 8, 8),
+ [ANA_ANEVENTS_DEST_KNOWN] = REG_FIELD(ANA_ANEVENTS, 7, 7),
+ [ANA_ANEVENTS_BUCKET3_MATCH] = REG_FIELD(ANA_ANEVENTS, 6, 6),
+ [ANA_ANEVENTS_BUCKET2_MATCH] = REG_FIELD(ANA_ANEVENTS, 5, 5),
+ [ANA_ANEVENTS_BUCKET1_MATCH] = REG_FIELD(ANA_ANEVENTS, 4, 4),
+ [ANA_ANEVENTS_BUCKET0_MATCH] = REG_FIELD(ANA_ANEVENTS, 3, 3),
+ [ANA_ANEVENTS_CPU_OPERATION] = REG_FIELD(ANA_ANEVENTS, 2, 2),
+ [ANA_ANEVENTS_DMAC_LOOKUP] = REG_FIELD(ANA_ANEVENTS, 1, 1),
+ [ANA_ANEVENTS_SMAC_LOOKUP] = REG_FIELD(ANA_ANEVENTS, 0, 0),
+ [ANA_TABLES_MACACCESS_B_DOM] = REG_FIELD(ANA_TABLES_MACACCESS, 18, 18),
+ [ANA_TABLES_MACTINDX_BUCKET] = REG_FIELD(ANA_TABLES_MACTINDX, 10, 11),
+ [ANA_TABLES_MACTINDX_M_INDEX] = REG_FIELD(ANA_TABLES_MACTINDX, 0, 9),
+ [QSYS_TIMED_FRAME_ENTRY_TFRM_VLD] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 20, 20),
+ [QSYS_TIMED_FRAME_ENTRY_TFRM_FP] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 8, 19),
+ [QSYS_TIMED_FRAME_ENTRY_TFRM_PORTNO] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 4, 7),
+ [QSYS_TIMED_FRAME_ENTRY_TFRM_TM_SEL] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 1, 3),
+ [QSYS_TIMED_FRAME_ENTRY_TFRM_TM_T] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 0, 0),
+ [SYS_RESET_CFG_CORE_ENA] = REG_FIELD(SYS_RESET_CFG, 2, 2),
+ [SYS_RESET_CFG_MEM_ENA] = REG_FIELD(SYS_RESET_CFG, 1, 1),
+ [SYS_RESET_CFG_MEM_INIT] = REG_FIELD(SYS_RESET_CFG, 0, 0),
+ /* Replicated per number of ports (12), register size 4 per port */
+ [QSYS_SWITCH_PORT_MODE_PORT_ENA] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 14, 14, 12, 4),
+ [QSYS_SWITCH_PORT_MODE_SCH_NEXT_CFG] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 11, 13, 12, 4),
+ [QSYS_SWITCH_PORT_MODE_YEL_RSRVD] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 10, 10, 12, 4),
+ [QSYS_SWITCH_PORT_MODE_INGRESS_DROP_MODE] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 9, 9, 12, 4),
+ [QSYS_SWITCH_PORT_MODE_TX_PFC_ENA] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 1, 8, 12, 4),
+ [QSYS_SWITCH_PORT_MODE_TX_PFC_MODE] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 0, 0, 12, 4),
+ [SYS_PORT_MODE_DATA_WO_TS] = REG_FIELD_ID(SYS_PORT_MODE, 5, 6, 12, 4),
+ [SYS_PORT_MODE_INCL_INJ_HDR] = REG_FIELD_ID(SYS_PORT_MODE, 3, 4, 12, 4),
+ [SYS_PORT_MODE_INCL_XTR_HDR] = REG_FIELD_ID(SYS_PORT_MODE, 1, 2, 12, 4),
+ [SYS_PORT_MODE_INCL_HDR_ERR] = REG_FIELD_ID(SYS_PORT_MODE, 0, 0, 12, 4),
+ [SYS_PAUSE_CFG_PAUSE_START] = REG_FIELD_ID(SYS_PAUSE_CFG, 10, 18, 12, 4),
+ [SYS_PAUSE_CFG_PAUSE_STOP] = REG_FIELD_ID(SYS_PAUSE_CFG, 1, 9, 12, 4),
+ [SYS_PAUSE_CFG_PAUSE_ENA] = REG_FIELD_ID(SYS_PAUSE_CFG, 0, 1, 12, 4),
+};
+EXPORT_SYMBOL(vsc7514_regfields);
+
+static const u32 vsc7514_ana_regmap[] = {
REG(ANA_ADVLEARN, 0x009000),
REG(ANA_VLANMASK, 0x009004),
REG(ANA_PORT_B_DOMAIN, 0x009008),
@@ -89,9 +148,8 @@ const u32 vsc7514_ana_regmap[] = {
REG(ANA_POL_HYST, 0x008bec),
REG(ANA_POL_MISC_CFG, 0x008bf0),
};
-EXPORT_SYMBOL(vsc7514_ana_regmap);
-const u32 vsc7514_qs_regmap[] = {
+static const u32 vsc7514_qs_regmap[] = {
REG(QS_XTR_GRP_CFG, 0x000000),
REG(QS_XTR_RD, 0x000008),
REG(QS_XTR_FRM_PRUNING, 0x000010),
@@ -105,9 +163,8 @@ const u32 vsc7514_qs_regmap[] = {
REG(QS_INJ_ERR, 0x000040),
REG(QS_INH_DBG, 0x000048),
};
-EXPORT_SYMBOL(vsc7514_qs_regmap);
-const u32 vsc7514_qsys_regmap[] = {
+static const u32 vsc7514_qsys_regmap[] = {
REG(QSYS_PORT_MODE, 0x011200),
REG(QSYS_SWITCH_PORT_MODE, 0x011234),
REG(QSYS_STAT_CNT_CFG, 0x011264),
@@ -150,9 +207,8 @@ const u32 vsc7514_qsys_regmap[] = {
REG(QSYS_SE_STATE, 0x00004c),
REG(QSYS_HSCH_MISC_CFG, 0x011388),
};
-EXPORT_SYMBOL(vsc7514_qsys_regmap);
-const u32 vsc7514_rew_regmap[] = {
+static const u32 vsc7514_rew_regmap[] = {
REG(REW_PORT_VLAN_CFG, 0x000000),
REG(REW_TAG_CFG, 0x000004),
REG(REW_PORT_CFG, 0x000008),
@@ -165,9 +221,8 @@ const u32 vsc7514_rew_regmap[] = {
REG(REW_STAT_CFG, 0x000890),
REG(REW_PPT, 0x000680),
};
-EXPORT_SYMBOL(vsc7514_rew_regmap);
-const u32 vsc7514_sys_regmap[] = {
+static const u32 vsc7514_sys_regmap[] = {
REG(SYS_COUNT_RX_OCTETS, 0x000000),
REG(SYS_COUNT_RX_UNICAST, 0x000004),
REG(SYS_COUNT_RX_MULTICAST, 0x000008),
@@ -288,9 +343,8 @@ const u32 vsc7514_sys_regmap[] = {
REG(SYS_PTP_NXT, 0x0006c0),
REG(SYS_PTP_CFG, 0x0006c4),
};
-EXPORT_SYMBOL(vsc7514_sys_regmap);
-const u32 vsc7514_vcap_regmap[] = {
+static const u32 vsc7514_vcap_regmap[] = {
/* VCAP_CORE_CFG */
REG(VCAP_CORE_UPDATE_CTRL, 0x000000),
REG(VCAP_CORE_MV_CFG, 0x000004),
@@ -312,9 +366,8 @@ const u32 vsc7514_vcap_regmap[] = {
REG(VCAP_CONST_CORE_CNT, 0x0003b8),
REG(VCAP_CONST_IF_CNT, 0x0003bc),
};
-EXPORT_SYMBOL(vsc7514_vcap_regmap);
-const u32 vsc7514_ptp_regmap[] = {
+static const u32 vsc7514_ptp_regmap[] = {
REG(PTP_PIN_CFG, 0x000000),
REG(PTP_PIN_TOD_SEC_MSB, 0x000004),
REG(PTP_PIN_TOD_SEC_LSB, 0x000008),
@@ -325,9 +378,8 @@ const u32 vsc7514_ptp_regmap[] = {
REG(PTP_CLK_CFG_ADJ_CFG, 0x0000a4),
REG(PTP_CLK_CFG_ADJ_FREQ, 0x0000a8),
};
-EXPORT_SYMBOL(vsc7514_ptp_regmap);
-const u32 vsc7514_dev_gmii_regmap[] = {
+static const u32 vsc7514_dev_gmii_regmap[] = {
REG(DEV_CLOCK_CFG, 0x0),
REG(DEV_PORT_MISC, 0x4),
REG(DEV_EVENTS, 0x8),
@@ -368,9 +420,22 @@ const u32 vsc7514_dev_gmii_regmap[] = {
REG(DEV_PCS_FX100_CFG, 0x94),
REG(DEV_PCS_FX100_STATUS, 0x98),
};
-EXPORT_SYMBOL(vsc7514_dev_gmii_regmap);
-const struct vcap_field vsc7514_vcap_es0_keys[] = {
+const u32 *vsc7514_regmap[TARGET_MAX] = {
+ [ANA] = vsc7514_ana_regmap,
+ [QS] = vsc7514_qs_regmap,
+ [QSYS] = vsc7514_qsys_regmap,
+ [REW] = vsc7514_rew_regmap,
+ [SYS] = vsc7514_sys_regmap,
+ [S0] = vsc7514_vcap_regmap,
+ [S1] = vsc7514_vcap_regmap,
+ [S2] = vsc7514_vcap_regmap,
+ [PTP] = vsc7514_ptp_regmap,
+ [DEV_GMII] = vsc7514_dev_gmii_regmap,
+};
+EXPORT_SYMBOL(vsc7514_regmap);
+
+static const struct vcap_field vsc7514_vcap_es0_keys[] = {
[VCAP_ES0_EGR_PORT] = { 0, 4 },
[VCAP_ES0_IGR_PORT] = { 4, 4 },
[VCAP_ES0_RSV] = { 8, 2 },
@@ -380,9 +445,8 @@ const struct vcap_field vsc7514_vcap_es0_keys[] = {
[VCAP_ES0_DP] = { 24, 1 },
[VCAP_ES0_PCP] = { 25, 3 },
};
-EXPORT_SYMBOL(vsc7514_vcap_es0_keys);
-const struct vcap_field vsc7514_vcap_es0_actions[] = {
+static const struct vcap_field vsc7514_vcap_es0_actions[] = {
[VCAP_ES0_ACT_PUSH_OUTER_TAG] = { 0, 2 },
[VCAP_ES0_ACT_PUSH_INNER_TAG] = { 2, 1 },
[VCAP_ES0_ACT_TAG_A_TPID_SEL] = { 3, 2 },
@@ -402,9 +466,8 @@ const struct vcap_field vsc7514_vcap_es0_actions[] = {
[VCAP_ES0_ACT_RSV] = { 49, 24 },
[VCAP_ES0_ACT_HIT_STICKY] = { 73, 1 },
};
-EXPORT_SYMBOL(vsc7514_vcap_es0_actions);
-const struct vcap_field vsc7514_vcap_is1_keys[] = {
+static const struct vcap_field vsc7514_vcap_is1_keys[] = {
[VCAP_IS1_HK_TYPE] = { 0, 1 },
[VCAP_IS1_HK_LOOKUP] = { 1, 2 },
[VCAP_IS1_HK_IGR_PORT_MASK] = { 3, 12 },
@@ -454,9 +517,8 @@ const struct vcap_field vsc7514_vcap_is1_keys[] = {
[VCAP_IS1_HK_IP4_L4_RNG] = { 148, 8 },
[VCAP_IS1_HK_IP4_IP_PAYLOAD_S1_5TUPLE] = { 156, 32 },
};
-EXPORT_SYMBOL(vsc7514_vcap_is1_keys);
-const struct vcap_field vsc7514_vcap_is1_actions[] = {
+static const struct vcap_field vsc7514_vcap_is1_actions[] = {
[VCAP_IS1_ACT_DSCP_ENA] = { 0, 1 },
[VCAP_IS1_ACT_DSCP_VAL] = { 1, 6 },
[VCAP_IS1_ACT_QOS_ENA] = { 7, 1 },
@@ -479,9 +541,8 @@ const struct vcap_field vsc7514_vcap_is1_actions[] = {
[VCAP_IS1_ACT_CUSTOM_ACE_TYPE_ENA] = { 74, 4 },
[VCAP_IS1_ACT_HIT_STICKY] = { 78, 1 },
};
-EXPORT_SYMBOL(vsc7514_vcap_is1_actions);
-const struct vcap_field vsc7514_vcap_is2_keys[] = {
+static const struct vcap_field vsc7514_vcap_is2_keys[] = {
/* Common: 46 bits */
[VCAP_IS2_TYPE] = { 0, 4 },
[VCAP_IS2_HK_FIRST] = { 4, 1 },
@@ -560,9 +621,8 @@ const struct vcap_field vsc7514_vcap_is2_keys[] = {
[VCAP_IS2_HK_OAM_CCM_CNTS_EQ0] = { 186, 1 },
[VCAP_IS2_HK_OAM_IS_Y1731] = { 187, 1 },
};
-EXPORT_SYMBOL(vsc7514_vcap_is2_keys);
-const struct vcap_field vsc7514_vcap_is2_actions[] = {
+static const struct vcap_field vsc7514_vcap_is2_actions[] = {
[VCAP_IS2_ACT_HIT_ME_ONCE] = { 0, 1 },
[VCAP_IS2_ACT_CPU_COPY_ENA] = { 1, 1 },
[VCAP_IS2_ACT_CPU_QU_NUM] = { 2, 3 },
@@ -579,4 +639,47 @@ const struct vcap_field vsc7514_vcap_is2_actions[] = {
[VCAP_IS2_ACT_ACL_ID] = { 43, 6 },
[VCAP_IS2_ACT_HIT_CNT] = { 49, 32 },
};
-EXPORT_SYMBOL(vsc7514_vcap_is2_actions);
+
+struct vcap_props vsc7514_vcap_props[] = {
+ [VCAP_ES0] = {
+ .action_type_width = 0,
+ .action_table = {
+ [ES0_ACTION_TYPE_NORMAL] = {
+ .width = 73, /* HIT_STICKY not included */
+ .count = 1,
+ },
+ },
+ .target = S0,
+ .keys = vsc7514_vcap_es0_keys,
+ .actions = vsc7514_vcap_es0_actions,
+ },
+ [VCAP_IS1] = {
+ .action_type_width = 0,
+ .action_table = {
+ [IS1_ACTION_TYPE_NORMAL] = {
+ .width = 78, /* HIT_STICKY not included */
+ .count = 4,
+ },
+ },
+ .target = S1,
+ .keys = vsc7514_vcap_is1_keys,
+ .actions = vsc7514_vcap_is1_actions,
+ },
+ [VCAP_IS2] = {
+ .action_type_width = 1,
+ .action_table = {
+ [IS2_ACTION_TYPE_NORMAL] = {
+ .width = 49,
+ .count = 2
+ },
+ [IS2_ACTION_TYPE_SMAC_SIP] = {
+ .width = 6,
+ .count = 4
+ },
+ },
+ .target = S2,
+ .keys = vsc7514_vcap_is2_keys,
+ .actions = vsc7514_vcap_is2_actions,
+ },
+};
+EXPORT_SYMBOL(vsc7514_vcap_props);
diff --git a/drivers/net/ethernet/netronome/Kconfig b/drivers/net/ethernet/netronome/Kconfig
index e785c00b5845..d03d6e96f730 100644
--- a/drivers/net/ethernet/netronome/Kconfig
+++ b/drivers/net/ethernet/netronome/Kconfig
@@ -18,7 +18,7 @@ if NET_VENDOR_NETRONOME
config NFP
tristate "Netronome(R) NFP4000/NFP6000 NIC driver"
- depends on PCI && PCI_MSI
+ depends on PCI_MSI
depends on VXLAN || VXLAN=n
depends on TLS && TLS_DEVICE || TLS_DEVICE=n
select NET_DEVLINK
diff --git a/drivers/net/ethernet/netronome/nfp/Makefile b/drivers/net/ethernet/netronome/nfp/Makefile
index 8a250214e289..808599b8066e 100644
--- a/drivers/net/ethernet/netronome/nfp/Makefile
+++ b/drivers/net/ethernet/netronome/nfp/Makefile
@@ -80,6 +80,8 @@ nfp-objs += \
abm/main.o
endif
-nfp-$(CONFIG_NFP_NET_IPSEC) += crypto/ipsec.o nfd3/ipsec.o
+nfp-$(CONFIG_NFP_NET_IPSEC) += crypto/ipsec.o nfd3/ipsec.o nfdk/ipsec.o
nfp-$(CONFIG_NFP_DEBUG) += nfp_net_debugfs.o
+
+nfp-$(CONFIG_DCB) += nic/dcb.o
diff --git a/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
index 063cd371033a..c0dcce8ae437 100644
--- a/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
+++ b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
@@ -10,6 +10,7 @@
#include <linux/ktime.h>
#include <net/xfrm.h>
+#include "../nfpcore/nfp_dev.h"
#include "../nfp_net_ctrl.h"
#include "../nfp_net.h"
#include "crypto.h"
@@ -265,7 +266,8 @@ static void set_sha2_512hmac(struct nfp_ipsec_cfg_add_sa *cfg, int *trunc_len)
}
}
-static int nfp_net_xfrm_add_state(struct xfrm_state *x)
+static int nfp_net_xfrm_add_state(struct xfrm_state *x,
+ struct netlink_ext_ack *extack)
{
struct net_device *netdev = x->xso.dev;
struct nfp_ipsec_cfg_mssg msg = {};
@@ -286,7 +288,7 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x)
cfg->ctrl_word.mode = NFP_IPSEC_PROTMODE_TRANSPORT;
break;
default:
- nn_err(nn, "Unsupported mode for xfrm offload\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported mode for xfrm offload");
return -EINVAL;
}
@@ -298,17 +300,17 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x)
cfg->ctrl_word.proto = NFP_IPSEC_PROTOCOL_AH;
break;
default:
- nn_err(nn, "Unsupported protocol for xfrm offload\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported protocol for xfrm offload");
return -EINVAL;
}
if (x->props.flags & XFRM_STATE_ESN) {
- nn_err(nn, "Unsupported XFRM_REPLAY_MODE_ESN for xfrm offload\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported XFRM_REPLAY_MODE_ESN for xfrm offload");
return -EINVAL;
}
if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
- nn_err(nn, "Unsupported xfrm offload tyoe\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported xfrm offload type");
return -EINVAL;
}
@@ -325,7 +327,7 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x)
if (x->aead) {
trunc_len = -1;
} else {
- nn_err(nn, "Unsupported authentication algorithm\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported authentication algorithm");
return -EINVAL;
}
break;
@@ -334,6 +336,10 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x)
trunc_len = -1;
break;
case SADB_AALG_MD5HMAC:
+ if (nn->pdev->device == PCI_DEVICE_ID_NFP3800) {
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported authentication algorithm");
+ return -EINVAL;
+ }
set_md5hmac(cfg, &trunc_len);
break;
case SADB_AALG_SHA1HMAC:
@@ -349,19 +355,19 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x)
set_sha2_512hmac(cfg, &trunc_len);
break;
default:
- nn_err(nn, "Unsupported authentication algorithm\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported authentication algorithm");
return -EINVAL;
}
if (!trunc_len) {
- nn_err(nn, "Unsupported authentication algorithm trunc length\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported authentication algorithm trunc length");
return -EINVAL;
}
if (x->aalg) {
key_len = DIV_ROUND_UP(x->aalg->alg_key_len, BITS_PER_BYTE);
if (key_len > sizeof(cfg->auth_key)) {
- nn_err(nn, "Insufficient space for offloaded auth key\n");
+ NL_SET_ERR_MSG_MOD(extack, "Insufficient space for offloaded auth key");
return -EINVAL;
}
for (i = 0; i < key_len / sizeof(cfg->auth_key[0]) ; i++)
@@ -377,18 +383,22 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x)
cfg->ctrl_word.cipher = NFP_IPSEC_CIPHER_NULL;
break;
case SADB_EALG_3DESCBC:
+ if (nn->pdev->device == PCI_DEVICE_ID_NFP3800) {
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported encryption algorithm for offload");
+ return -EINVAL;
+ }
cfg->ctrl_word.cimode = NFP_IPSEC_CIMODE_CBC;
cfg->ctrl_word.cipher = NFP_IPSEC_CIPHER_3DES;
break;
case SADB_X_EALG_AES_GCM_ICV16:
case SADB_X_EALG_NULL_AES_GMAC:
if (!x->aead) {
- nn_err(nn, "Invalid AES key data\n");
+ NL_SET_ERR_MSG_MOD(extack, "Invalid AES key data");
return -EINVAL;
}
if (x->aead->alg_icv_len != 128) {
- nn_err(nn, "ICV must be 128bit with SADB_X_EALG_AES_GCM_ICV16\n");
+ NL_SET_ERR_MSG_MOD(extack, "ICV must be 128bit with SADB_X_EALG_AES_GCM_ICV16");
return -EINVAL;
}
cfg->ctrl_word.cimode = NFP_IPSEC_CIMODE_CTR;
@@ -396,23 +406,23 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x)
/* Aead->alg_key_len includes 32-bit salt */
if (set_aes_keylen(cfg, x->props.ealgo, x->aead->alg_key_len - 32)) {
- nn_err(nn, "Unsupported AES key length %d\n", x->aead->alg_key_len);
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported AES key length");
return -EINVAL;
}
break;
case SADB_X_EALG_AESCBC:
cfg->ctrl_word.cimode = NFP_IPSEC_CIMODE_CBC;
if (!x->ealg) {
- nn_err(nn, "Invalid AES key data\n");
+ NL_SET_ERR_MSG_MOD(extack, "Invalid AES key data");
return -EINVAL;
}
if (set_aes_keylen(cfg, x->props.ealgo, x->ealg->alg_key_len) < 0) {
- nn_err(nn, "Unsupported AES key length %d\n", x->ealg->alg_key_len);
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported AES key length");
return -EINVAL;
}
break;
default:
- nn_err(nn, "Unsupported encryption algorithm for offload\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported encryption algorithm for offload");
return -EINVAL;
}
@@ -423,7 +433,7 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x)
key_len -= salt_len;
if (key_len > sizeof(cfg->ciph_key)) {
- nn_err(nn, "aead: Insufficient space for offloaded key\n");
+ NL_SET_ERR_MSG_MOD(extack, "aead: Insufficient space for offloaded key");
return -EINVAL;
}
@@ -439,7 +449,7 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x)
key_len = DIV_ROUND_UP(x->ealg->alg_key_len, BITS_PER_BYTE);
if (key_len > sizeof(cfg->ciph_key)) {
- nn_err(nn, "ealg: Insufficient space for offloaded key\n");
+ NL_SET_ERR_MSG_MOD(extack, "ealg: Insufficient space for offloaded key");
return -EINVAL;
}
for (i = 0; i < key_len / sizeof(cfg->ciph_key[0]) ; i++)
@@ -462,7 +472,7 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x)
}
break;
default:
- nn_err(nn, "Unsupported address family\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported address family");
return -EINVAL;
}
@@ -477,7 +487,7 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x)
err = xa_alloc(&nn->xa_ipsec, &saidx, x,
XA_LIMIT(0, NFP_NET_IPSEC_MAX_SA_CNT - 1), GFP_KERNEL);
if (err < 0) {
- nn_err(nn, "Unable to get sa_data number for IPsec\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unable to get sa_data number for IPsec");
return err;
}
@@ -488,7 +498,7 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x)
sizeof(msg), nfp_net_ipsec_cfg);
if (err) {
xa_erase(&nn->xa_ipsec, saidx);
- nn_err(nn, "Failed to issue IPsec command err ret=%d\n", err);
+ NL_SET_ERR_MSG_MOD(extack, "Failed to issue IPsec command");
return err;
}
diff --git a/drivers/net/ethernet/netronome/nfp/devlink_param.c b/drivers/net/ethernet/netronome/nfp/devlink_param.c
index db297ee4d7ad..a655f9e69a7b 100644
--- a/drivers/net/ethernet/netronome/nfp/devlink_param.c
+++ b/drivers/net/ethernet/netronome/nfp/devlink_param.c
@@ -233,8 +233,8 @@ int nfp_devlink_params_register(struct nfp_pf *pf)
if (err <= 0)
return err;
- return devlink_params_register(devlink, nfp_devlink_params,
- ARRAY_SIZE(nfp_devlink_params));
+ return devl_params_register(devlink, nfp_devlink_params,
+ ARRAY_SIZE(nfp_devlink_params));
}
void nfp_devlink_params_unregister(struct nfp_pf *pf)
@@ -245,6 +245,6 @@ void nfp_devlink_params_unregister(struct nfp_pf *pf)
if (err <= 0)
return;
- devlink_params_unregister(priv_to_devlink(pf), nfp_devlink_params,
- ARRAY_SIZE(nfp_devlink_params));
+ devl_params_unregister(priv_to_devlink(pf), nfp_devlink_params,
+ ARRAY_SIZE(nfp_devlink_params));
}
diff --git a/drivers/net/ethernet/netronome/nfp/flower/conntrack.c b/drivers/net/ethernet/netronome/nfp/flower/conntrack.c
index f693119541d5..d23830b5bcb8 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/conntrack.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/conntrack.c
@@ -1964,6 +1964,27 @@ int nfp_fl_ct_stats(struct flow_cls_offload *flow,
return 0;
}
+static bool
+nfp_fl_ct_offload_nft_supported(struct flow_cls_offload *flow)
+{
+ struct flow_rule *flow_rule = flow->rule;
+ struct flow_action *flow_action = &flow_rule->action;
+ struct flow_action_entry *act;
+ int i;
+
+ flow_action_for_each(i, act, flow_action) {
+ if (act->id == FLOW_ACTION_CT_METADATA) {
+ enum ip_conntrack_info ctinfo =
+ act->ct_metadata.cookie & NFCT_INFOMASK;
+
+ return ctinfo != IP_CT_NEW;
+ }
+ }
+
+ return false;
+}
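Editor's note: the new check peeks at the conntrack state stashed in the low bits of the CT metadata cookie and refuses offload while the flow is still NEW. A standalone sketch, assuming the kernel's NFCT_INFOMASK of 7UL and IP_CT_NEW == 2:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NFCT_INFOMASK   7UL     /* low bits of the ct cookie (assumed kernel value) */
#define IP_CT_NEW       2       /* enum ip_conntrack_info; 0 is IP_CT_ESTABLISHED */

static bool nft_offload_supported(uint64_t ct_metadata_cookie)
{
        unsigned long ctinfo = ct_metadata_cookie & NFCT_INFOMASK;

        return ctinfo != IP_CT_NEW;     /* NEW connections are not offloadable */
}

int main(void)
{
        printf("%d\n", nft_offload_supported(0xffffffffffff0000ULL | IP_CT_NEW)); /* 0 */
        printf("%d\n", nft_offload_supported(0xffffffffffff0000ULL | 0));         /* 1 */
        return 0;
}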
+
static int
nfp_fl_ct_offload_nft_flow(struct nfp_fl_ct_zone_entry *zt, struct flow_cls_offload *flow)
{
@@ -1976,6 +1997,9 @@ nfp_fl_ct_offload_nft_flow(struct nfp_fl_ct_zone_entry *zt, struct flow_cls_offl
extack = flow->common.extack;
switch (flow->command) {
case FLOW_CLS_REPLACE:
+ if (!nfp_fl_ct_offload_nft_supported(flow))
+ return -EOPNOTSUPP;
+
/* Netfilter can request offload multiple times for the same
* flow - protect against adding duplicates.
*/
diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/dp.c b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c
index 861082c5dbff..59fb0583cc08 100644
--- a/drivers/net/ethernet/netronome/nfp/nfd3/dp.c
+++ b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c
@@ -192,10 +192,10 @@ static int nfp_nfd3_prep_tx_meta(struct nfp_net_dp *dp, struct sk_buff *skb,
return 0;
md_bytes = sizeof(meta_id) +
- !!md_dst * NFP_NET_META_PORTID_SIZE +
- !!tls_handle * NFP_NET_META_CONN_HANDLE_SIZE +
- vlan_insert * NFP_NET_META_VLAN_SIZE +
- *ipsec * NFP_NET_META_IPSEC_FIELD_SIZE; /* IPsec has 12 bytes of metadata */
+ (!!md_dst ? NFP_NET_META_PORTID_SIZE : 0) +
+ (!!tls_handle ? NFP_NET_META_CONN_HANDLE_SIZE : 0) +
+ (vlan_insert ? NFP_NET_META_VLAN_SIZE : 0) +
+ (*ipsec ? NFP_NET_META_IPSEC_FIELD_SIZE : 0);
if (unlikely(skb_cow_head(skb, md_bytes)))
return -ENOMEM;
@@ -226,9 +226,6 @@ static int nfp_nfd3_prep_tx_meta(struct nfp_net_dp *dp, struct sk_buff *skb,
meta_id |= NFP_NET_META_VLAN;
}
if (*ipsec) {
- /* IPsec has three consecutive 4-bit IPsec metadata types,
- * so in total IPsec has three 4 bytes of metadata.
- */
data -= NFP_NET_META_IPSEC_SIZE;
put_unaligned_be32(offload_info.seq_hi, data);
data -= NFP_NET_META_IPSEC_SIZE;
diff --git a/drivers/net/ethernet/netronome/nfp/nfdk/dp.c b/drivers/net/ethernet/netronome/nfp/nfdk/dp.c
index ccacb6ab6c39..d60c0e991a91 100644
--- a/drivers/net/ethernet/netronome/nfp/nfdk/dp.c
+++ b/drivers/net/ethernet/netronome/nfp/nfdk/dp.c
@@ -6,6 +6,7 @@
#include <linux/overflow.h>
#include <linux/sizes.h>
#include <linux/bitfield.h>
+#include <net/xfrm.h>
#include "../nfp_app.h"
#include "../nfp_net.h"
@@ -172,25 +173,32 @@ close_block:
static int
nfp_nfdk_prep_tx_meta(struct nfp_net_dp *dp, struct nfp_app *app,
- struct sk_buff *skb)
+ struct sk_buff *skb, bool *ipsec)
{
struct metadata_dst *md_dst = skb_metadata_dst(skb);
+ struct nfp_ipsec_offload offload_info;
unsigned char *data;
bool vlan_insert;
u32 meta_id = 0;
int md_bytes;
+#ifdef CONFIG_NFP_NET_IPSEC
+ if (xfrm_offload(skb))
+ *ipsec = nfp_net_ipsec_tx_prep(dp, skb, &offload_info);
+#endif
+
if (unlikely(md_dst && md_dst->type != METADATA_HW_PORT_MUX))
md_dst = NULL;
vlan_insert = skb_vlan_tag_present(skb) && (dp->ctrl & NFP_NET_CFG_CTRL_TXVLAN_V2);
- if (!(md_dst || vlan_insert))
+ if (!(md_dst || vlan_insert || *ipsec))
return 0;
md_bytes = sizeof(meta_id) +
- !!md_dst * NFP_NET_META_PORTID_SIZE +
- vlan_insert * NFP_NET_META_VLAN_SIZE;
+ (!!md_dst ? NFP_NET_META_PORTID_SIZE : 0) +
+ (vlan_insert ? NFP_NET_META_VLAN_SIZE : 0) +
+ (*ipsec ? NFP_NET_META_IPSEC_FIELD_SIZE : 0);
if (unlikely(skb_cow_head(skb, md_bytes)))
return -ENOMEM;
@@ -212,6 +220,17 @@ nfp_nfdk_prep_tx_meta(struct nfp_net_dp *dp, struct nfp_app *app,
meta_id |= NFP_NET_META_VLAN;
}
+ if (*ipsec) {
+ data -= NFP_NET_META_IPSEC_SIZE;
+ put_unaligned_be32(offload_info.seq_hi, data);
+ data -= NFP_NET_META_IPSEC_SIZE;
+ put_unaligned_be32(offload_info.seq_low, data);
+ data -= NFP_NET_META_IPSEC_SIZE;
+ put_unaligned_be32(offload_info.handle - 1, data);
+ meta_id <<= NFP_NET_META_IPSEC_FIELD_SIZE;
+ meta_id |= NFP_NET_META_IPSEC << 8 | NFP_NET_META_IPSEC << 4 | NFP_NET_META_IPSEC;
+ }
+
meta_id = FIELD_PREP(NFDK_META_LEN, md_bytes) |
FIELD_PREP(NFDK_META_FIELDS, meta_id);
@@ -243,6 +262,7 @@ netdev_tx_t nfp_nfdk_tx(struct sk_buff *skb, struct net_device *netdev)
struct nfp_net_dp *dp;
int nr_frags, wr_idx;
dma_addr_t dma_addr;
+ bool ipsec = false;
u64 metadata;
dp = &nn->dp;
@@ -263,7 +283,7 @@ netdev_tx_t nfp_nfdk_tx(struct sk_buff *skb, struct net_device *netdev)
return NETDEV_TX_BUSY;
}
- metadata = nfp_nfdk_prep_tx_meta(dp, nn->app, skb);
+ metadata = nfp_nfdk_prep_tx_meta(dp, nn->app, skb, &ipsec);
if (unlikely((int)metadata < 0))
goto err_flush;
@@ -361,6 +381,9 @@ netdev_tx_t nfp_nfdk_tx(struct sk_buff *skb, struct net_device *netdev)
(txd - 1)->dma_len_type = cpu_to_le16(dlen_type | NFDK_DESC_TX_EOP);
+ if (ipsec)
+ metadata = nfp_nfdk_ipsec_tx(metadata, skb);
+
if (!skb_is_gso(skb)) {
real_len = skb->len;
/* Metadata desc */
@@ -760,6 +783,15 @@ nfp_nfdk_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta,
return false;
data += sizeof(struct nfp_net_tls_resync_req);
break;
+#ifdef CONFIG_NFP_NET_IPSEC
+ case NFP_NET_META_IPSEC:
+ /* Note: an IPsec packet can have a zero saidx, so add 1
+ * to mark the packet as an IPsec packet within the driver.
+ */
+ meta->ipsec_saidx = get_unaligned_be32(data) + 1;
+ data += 4;
+ break;
+#endif
default:
return true;
}
@@ -1186,6 +1218,13 @@ static int nfp_nfdk_rx(struct nfp_net_rx_ring *rx_ring, int budget)
continue;
}
+#ifdef CONFIG_NFP_NET_IPSEC
+ if (meta.ipsec_saidx != 0 && unlikely(nfp_net_ipsec_rx(&meta, skb))) {
+ nfp_nfdk_rx_drop(dp, r_vec, rx_ring, NULL, skb);
+ continue;
+ }
+#endif
+
if (meta_len_xdp)
skb_metadata_set(skb, meta_len_xdp);
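
For reference, a rough standalone sketch of the TX metadata prepend pattern the nfdk changes above follow: data words are written in front of the packet in reverse order while meta_id accumulates one 4-bit type field per word. The constants below are illustrative stand-ins, not the driver's real values:

#include <stdint.h>
#include <stdio.h>

#define META_IPSEC_TYPE       0x9	/* assumed type nibble */
#define META_IPSEC_WORD_SIZE  4		/* one 32-bit word per field */
#define META_IPSEC_FIELD_BITS 12	/* three 4-bit type fields */

static void put_be32(uint8_t *p, uint32_t v)
{
	p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v;
}

int main(void)
{
	uint8_t buf[64];
	uint8_t *data = buf + sizeof(buf);	/* start of packet payload */
	uint32_t meta_id = 0;

	/* Three IPsec words prepended high-to-low, as in the driver. */
	data -= META_IPSEC_WORD_SIZE;
	put_be32(data, 0x11111111);		/* seq_hi */
	data -= META_IPSEC_WORD_SIZE;
	put_be32(data, 0x22222222);		/* seq_low */
	data -= META_IPSEC_WORD_SIZE;
	put_be32(data, 7 - 1);			/* handle - 1, i.e. saidx */

	/* One 4-bit type field per prepended word. */
	meta_id <<= META_IPSEC_FIELD_BITS;
	meta_id |= META_IPSEC_TYPE << 8 | META_IPSEC_TYPE << 4 |
		   META_IPSEC_TYPE;

	printf("meta_id=0x%x, %zu meta bytes\n",
	       meta_id, (size_t)(buf + sizeof(buf) - data));
	return 0;
}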
diff --git a/drivers/net/ethernet/netronome/nfp/nfdk/ipsec.c b/drivers/net/ethernet/netronome/nfp/nfdk/ipsec.c
new file mode 100644
index 000000000000..58d8f59eb885
--- /dev/null
+++ b/drivers/net/ethernet/netronome/nfp/nfdk/ipsec.c
@@ -0,0 +1,17 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+/* Copyright (C) 2023 Corigine, Inc */
+
+#include <net/xfrm.h>
+
+#include "../nfp_net.h"
+#include "nfdk.h"
+
+u64 nfp_nfdk_ipsec_tx(u64 flags, struct sk_buff *skb)
+{
+ struct xfrm_state *x = xfrm_input_state(skb);
+
+ if (x->xso.dev && (x->xso.dev->features & NETIF_F_HW_ESP_TX_CSUM))
+ flags |= NFDK_DESC_TX_L3_CSUM | NFDK_DESC_TX_L4_CSUM;
+
+ return flags;
+}
diff --git a/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h b/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h
index 0ea51d9f2325..fe55980348e9 100644
--- a/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h
+++ b/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h
@@ -125,4 +125,12 @@ nfp_nfdk_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec,
void nfp_nfdk_ctrl_poll(struct tasklet_struct *t);
void nfp_nfdk_rx_ring_fill_freelist(struct nfp_net_dp *dp,
struct nfp_net_rx_ring *rx_ring);
+#ifndef CONFIG_NFP_NET_IPSEC
+static inline u64 nfp_nfdk_ipsec_tx(u64 flags, struct sk_buff *skb)
+{
+ return flags;
+}
+#else
+u64 nfp_nfdk_ipsec_tx(u64 flags, struct sk_buff *skb);
+#endif
#endif
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index 70d7484c82af..81b7ca0ad222 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -2531,10 +2531,15 @@ static void nfp_net_netdev_init(struct nfp_net *nn)
netdev->features &= ~NETIF_F_HW_VLAN_STAG_RX;
nn->dp.ctrl &= ~NFP_NET_CFG_CTRL_RXQINQ;
+ netdev->xdp_features = NETDEV_XDP_ACT_BASIC;
+ if (nn->app && nn->app->type->id == NFP_APP_BPF_NIC)
+ netdev->xdp_features |= NETDEV_XDP_ACT_HW_OFFLOAD;
+
/* Finalise the netdev setup */
switch (nn->dp.ops->version) {
case NFP_NFD_VER_NFD3:
netdev->netdev_ops = &nfp_nfd3_netdev_ops;
+ netdev->xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
break;
case NFP_NFD_VER_NFDK:
netdev->netdev_ops = &nfp_nfdk_netdev_ops;
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
index f03dcadff738..669b9dccb6a9 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
@@ -412,6 +412,7 @@
#define NFP_NET_CFG_MBOX_CMD_IPSEC 3
#define NFP_NET_CFG_MBOX_CMD_PCI_DSCP_PRIOMAP_SET 5
#define NFP_NET_CFG_MBOX_CMD_TLV_CMSG 6
+#define NFP_NET_CFG_MBOX_CMD_DCB_UPDATE 7
#define NFP_NET_CFG_MBOX_CMD_MULTICAST_ADD 8
#define NFP_NET_CFG_MBOX_CMD_MULTICAST_DEL 9
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
index cc97b3d00414..dfedb52b7e70 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
@@ -313,6 +313,10 @@ static const struct nfp_eth_media_link_mode {
.ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseKR_Full_BIT,
.speed = NFP_SPEED_10G,
},
+ [NFP_MEDIA_10GBASE_LR] = {
+ .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseLR_Full_BIT,
+ .speed = NFP_SPEED_10G,
+ },
[NFP_MEDIA_10GBASE_CX4] = {
.ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT,
.speed = NFP_SPEED_10G,
@@ -349,6 +353,14 @@ static const struct nfp_eth_media_link_mode {
.ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseSR_Full_BIT,
.speed = NFP_SPEED_25G,
},
+ [NFP_MEDIA_25GBASE_LR] = {
+ .ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseSR_Full_BIT,
+ .speed = NFP_SPEED_25G,
+ },
+ [NFP_MEDIA_25GBASE_ER] = {
+ .ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseSR_Full_BIT,
+ .speed = NFP_SPEED_25G,
+ },
[NFP_MEDIA_40GBASE_CR4] = {
.ethtool_link_mode = ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT,
.speed = NFP_SPEED_40G,
@@ -2027,16 +2039,16 @@ static int
nfp_net_get_eeprom(struct net_device *netdev,
struct ethtool_eeprom *eeprom, u8 *bytes)
{
- struct nfp_net *nn = netdev_priv(netdev);
+ struct nfp_app *app = nfp_app_from_netdev(netdev);
u8 buf[NFP_EEPROM_LEN] = {};
- if (eeprom->len == 0)
- return -EINVAL;
-
if (nfp_net_get_port_mac_by_hwinfo(netdev, buf))
return -EOPNOTSUPP;
- eeprom->magic = nn->pdev->vendor | (nn->pdev->device << 16);
+ if (eeprom->len == 0)
+ return -EINVAL;
+
+ eeprom->magic = app->pdev->vendor | (app->pdev->device << 16);
memcpy(bytes, buf + eeprom->offset, eeprom->len);
return 0;
@@ -2046,18 +2058,18 @@ static int
nfp_net_set_eeprom(struct net_device *netdev,
struct ethtool_eeprom *eeprom, u8 *bytes)
{
- struct nfp_net *nn = netdev_priv(netdev);
+ struct nfp_app *app = nfp_app_from_netdev(netdev);
u8 buf[NFP_EEPROM_LEN] = {};
+ if (nfp_net_get_port_mac_by_hwinfo(netdev, buf))
+ return -EOPNOTSUPP;
+
if (eeprom->len == 0)
return -EINVAL;
- if (eeprom->magic != (nn->pdev->vendor | nn->pdev->device << 16))
+ if (eeprom->magic != (app->pdev->vendor | app->pdev->device << 16))
return -EINVAL;
- if (nfp_net_get_port_mac_by_hwinfo(netdev, buf))
- return -EOPNOTSUPP;
-
memcpy(buf + eeprom->offset, bytes, eeprom->len);
if (nfp_net_set_port_mac_by_hwinfo(netdev, buf))
return -EOPNOTSUPP;
@@ -2117,6 +2129,9 @@ const struct ethtool_ops nfp_port_ethtool_ops = {
.set_dump = nfp_app_set_dump,
.get_dump_flag = nfp_app_get_dump_flag,
.get_dump_data = nfp_app_get_dump_data,
+ .get_eeprom_len = nfp_net_get_eeprom_len,
+ .get_eeprom = nfp_net_get_eeprom,
+ .set_eeprom = nfp_net_set_eeprom,
.get_module_info = nfp_port_get_module_info,
.get_module_eeprom = nfp_port_get_module_eeprom,
.get_link_ksettings = nfp_net_get_link_ksettings,
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_main.c b/drivers/net/ethernet/netronome/nfp/nfp_net_main.c
index abfe788d558f..cbe4972ba104 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_main.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_main.c
@@ -754,11 +754,11 @@ int nfp_net_pci_probe(struct nfp_pf *pf)
if (err)
goto err_devlink_unreg;
+ devl_lock(devlink);
err = nfp_devlink_params_register(pf);
if (err)
goto err_shared_buf_unreg;
- devl_lock(devlink);
pf->ddir = nfp_net_debugfs_device_add(pf->pdev);
/* Allocate the vnics and do basic init */
@@ -791,9 +791,9 @@ err_free_vnics:
nfp_net_pf_free_vnics(pf);
err_clean_ddir:
nfp_net_debugfs_dir_clean(&pf->ddir);
- devl_unlock(devlink);
nfp_devlink_params_unregister(pf);
err_shared_buf_unreg:
+ devl_unlock(devlink);
nfp_shared_buf_unregister(pf);
err_devlink_unreg:
cancel_work_sync(&pf->port_refresh_work);
@@ -821,9 +821,10 @@ void nfp_net_pci_remove(struct nfp_pf *pf)
/* stop app first, to avoid double free of ctrl vNIC's ddir */
nfp_net_debugfs_dir_clean(&pf->ddir);
+ nfp_devlink_params_unregister(pf);
+
devl_unlock(devlink);
- nfp_devlink_params_unregister(pf);
nfp_shared_buf_unregister(pf);
nfp_net_pf_free_irqs(pf);
diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h
index 8f5cab0032d0..781edc451bd4 100644
--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h
+++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h
@@ -140,6 +140,9 @@ enum nfp_ethtool_link_mode_list {
NFP_MEDIA_100GBASE_CR4,
NFP_MEDIA_100GBASE_KP4,
NFP_MEDIA_100GBASE_CR10,
+ NFP_MEDIA_10GBASE_LR,
+ NFP_MEDIA_25GBASE_LR,
+ NFP_MEDIA_25GBASE_ER,
NFP_MEDIA_LINK_MODES_NUMBER
};
diff --git a/drivers/net/ethernet/netronome/nfp/nic/dcb.c b/drivers/net/ethernet/netronome/nfp/nic/dcb.c
new file mode 100644
index 000000000000..bb498ac6bd7d
--- /dev/null
+++ b/drivers/net/ethernet/netronome/nfp/nic/dcb.c
@@ -0,0 +1,571 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+/* Copyright (C) 2023 Corigine, Inc. */
+
+#include <linux/device.h>
+#include <linux/netdevice.h>
+#include <net/dcbnl.h>
+
+#include "../nfp_app.h"
+#include "../nfp_net.h"
+#include "../nfp_main.h"
+#include "../nfpcore/nfp_cpp.h"
+#include "../nfpcore/nfp_nffw.h"
+#include "../nfp_net_sriov.h"
+
+#include "main.h"
+
+#define NFP_DCB_TRUST_PCP 1
+#define NFP_DCB_TRUST_DSCP 2
+#define NFP_DCB_TRUST_INVALID 0xff
+
+#define NFP_DCB_TSA_VENDOR 1
+#define NFP_DCB_TSA_STRICT 2
+#define NFP_DCB_TSA_ETS 3
+
+#define NFP_DCB_GBL_ENABLE BIT(0)
+#define NFP_DCB_QOS_ENABLE BIT(1)
+#define NFP_DCB_DISABLE 0
+#define NFP_DCB_ALL_QOS_ENABLE (NFP_DCB_GBL_ENABLE | NFP_DCB_QOS_ENABLE)
+
+#define NFP_DCB_UPDATE_MSK_SZ 4
+#define NFP_DCB_TC_RATE_MAX 0xffff
+
+#define NFP_DCB_DATA_OFF_DSCP2IDX 0
+#define NFP_DCB_DATA_OFF_PCP2IDX 64
+#define NFP_DCB_DATA_OFF_TSA 80
+#define NFP_DCB_DATA_OFF_IDX_BW_PCT 88
+#define NFP_DCB_DATA_OFF_RATE 96
+#define NFP_DCB_DATA_OFF_CAP 112
+#define NFP_DCB_DATA_OFF_ENABLE 116
+#define NFP_DCB_DATA_OFF_TRUST 120
+
+#define NFP_DCB_MSG_MSK_ENABLE BIT(31)
+#define NFP_DCB_MSG_MSK_TRUST BIT(30)
+#define NFP_DCB_MSG_MSK_TSA BIT(29)
+#define NFP_DCB_MSG_MSK_DSCP BIT(28)
+#define NFP_DCB_MSG_MSK_PCP BIT(27)
+#define NFP_DCB_MSG_MSK_RATE BIT(26)
+#define NFP_DCB_MSG_MSK_PCT BIT(25)
+
+static struct nfp_dcb *get_dcb_priv(struct nfp_net *nn)
+{
+ struct nfp_dcb *dcb = &((struct nfp_app_nic_private *)nn->app_priv)->dcb;
+
+ return dcb;
+}
+
+static u8 nfp_tsa_ieee2nfp(u8 tsa)
+{
+ switch (tsa) {
+ case IEEE_8021QAZ_TSA_STRICT:
+ return NFP_DCB_TSA_STRICT;
+ case IEEE_8021QAZ_TSA_ETS:
+ return NFP_DCB_TSA_ETS;
+ default:
+ return NFP_DCB_TSA_VENDOR;
+ }
+}
+
+static int nfp_nic_dcbnl_ieee_getets(struct net_device *dev,
+ struct ieee_ets *ets)
+{
+ struct nfp_net *nn = netdev_priv(dev);
+ struct nfp_dcb *dcb;
+
+ dcb = get_dcb_priv(nn);
+
+ for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ ets->prio_tc[i] = dcb->prio2tc[i];
+ ets->tc_tx_bw[i] = dcb->tc_tx_pct[i];
+ ets->tc_tsa[i] = dcb->tc_tsa[i];
+ }
+
+ return 0;
+}
+
+static bool nfp_refresh_tc2idx(struct nfp_net *nn)
+{
+ u8 tc2idx[IEEE_8021QAZ_MAX_TCS];
+ bool change = false;
+ struct nfp_dcb *dcb;
+ int maxstrict = 0;
+
+ dcb = get_dcb_priv(nn);
+
+ for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ tc2idx[i] = i;
+ if (dcb->tc_tsa[i] == IEEE_8021QAZ_TSA_STRICT)
+ maxstrict = i;
+ }
+
+ if (maxstrict > 0 && dcb->tc_tsa[0] != IEEE_8021QAZ_TSA_STRICT) {
+ tc2idx[0] = maxstrict;
+ tc2idx[maxstrict] = 0;
+ }
+
+ for (unsigned int j = 0; j < IEEE_8021QAZ_MAX_TCS; j++) {
+ if (dcb->tc2idx[j] != tc2idx[j]) {
+ change = true;
+ dcb->tc2idx[j] = tc2idx[j];
+ }
+ }
+
+ return change;
+}
+
+static int nfp_fill_maxrate(struct nfp_net *nn, u64 *max_rate_array)
+{
+ struct nfp_app *app = nn->app;
+ struct nfp_dcb *dcb;
+ u32 ratembps;
+
+ dcb = get_dcb_priv(nn);
+
+ for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ /* Convert bandwidth from kbps to mbps. */
+ ratembps = max_rate_array[i] / 1024;
+
+ /* Reject input values >= NFP_DCB_TC_RATE_MAX */
+ if (ratembps >= NFP_DCB_TC_RATE_MAX) {
+ nfp_warn(app->cpp, "ratembps(%d) must be less than %d.",
+ ratembps, NFP_DCB_TC_RATE_MAX);
+ return -EINVAL;
+ }
+ /* An input value of 0 is mapped to NFP_DCB_TC_RATE_MAX for the firmware. */
+ if (ratembps == 0)
+ ratembps = NFP_DCB_TC_RATE_MAX;
+
+ writew((u16)ratembps, dcb->dcbcfg_tbl +
+ dcb->cfg_offset + NFP_DCB_DATA_OFF_RATE + dcb->tc2idx[i] * 2);
+ /* Rate values coming from user space must be synced into the dcb structure. */
+ if (dcb->tc_maxrate != max_rate_array)
+ dcb->tc_maxrate[i] = max_rate_array[i];
+ }
+
+ return 0;
+}
+
+static int update_dscp_maxrate(struct net_device *dev, u32 *update)
+{
+ struct nfp_net *nn = netdev_priv(dev);
+ struct nfp_dcb *dcb;
+ int err;
+
+ dcb = get_dcb_priv(nn);
+
+ err = nfp_fill_maxrate(nn, dcb->tc_maxrate);
+ if (err)
+ return err;
+
+ *update |= NFP_DCB_MSG_MSK_RATE;
+
+ /* We only refresh dscp in dscp trust mode. */
+ if (dcb->dscp_cnt > 0) {
+ for (unsigned int i = 0; i < NFP_NET_MAX_DSCP; i++) {
+ writeb(dcb->tc2idx[dcb->prio2tc[dcb->dscp2prio[i]]],
+ dcb->dcbcfg_tbl + dcb->cfg_offset +
+ NFP_DCB_DATA_OFF_DSCP2IDX + i);
+ }
+ *update |= NFP_DCB_MSG_MSK_DSCP;
+ }
+
+ return 0;
+}
+
+static void nfp_nic_set_trust(struct nfp_net *nn, u32 *update)
+{
+ struct nfp_dcb *dcb;
+ u8 trust;
+
+ dcb = get_dcb_priv(nn);
+
+ if (dcb->trust_status != NFP_DCB_TRUST_INVALID)
+ return;
+
+ trust = dcb->dscp_cnt > 0 ? NFP_DCB_TRUST_DSCP : NFP_DCB_TRUST_PCP;
+ writeb(trust, dcb->dcbcfg_tbl + dcb->cfg_offset +
+ NFP_DCB_DATA_OFF_TRUST);
+
+ dcb->trust_status = trust;
+ *update |= NFP_DCB_MSG_MSK_TRUST;
+}
+
+static void nfp_nic_set_enable(struct nfp_net *nn, u32 enable, u32 *update)
+{
+ struct nfp_dcb *dcb;
+ u32 value = 0;
+
+ dcb = get_dcb_priv(nn);
+
+ value = readl(dcb->dcbcfg_tbl + dcb->cfg_offset +
+ NFP_DCB_DATA_OFF_ENABLE);
+ if (value != enable) {
+ writel(enable, dcb->dcbcfg_tbl + dcb->cfg_offset +
+ NFP_DCB_DATA_OFF_ENABLE);
+ *update |= NFP_DCB_MSG_MSK_ENABLE;
+ }
+}
+
+static int dcb_ets_check(struct net_device *dev, struct ieee_ets *ets)
+{
+ struct nfp_net *nn = netdev_priv(dev);
+ struct nfp_app *app = nn->app;
+ bool ets_exists = false;
+ int sum = 0;
+
+ for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ /* For ets mode, check bw percentage sum. */
+ if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS) {
+ ets_exists = true;
+ sum += ets->tc_tx_bw[i];
+ } else if (ets->tc_tx_bw[i]) {
+ nfp_warn(app->cpp, "ETS BW for strict/vendor TC must be 0.");
+ return -EINVAL;
+ }
+ }
+
+ if (ets_exists && sum != 100) {
+ nfp_warn(app->cpp, "Failed to validate ETS BW: sum must be 100.");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static void nfp_nic_fill_ets(struct nfp_net *nn)
+{
+ struct nfp_dcb *dcb;
+
+ dcb = get_dcb_priv(nn);
+
+ for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ writeb(dcb->tc2idx[dcb->prio2tc[i]],
+ dcb->dcbcfg_tbl + dcb->cfg_offset + NFP_DCB_DATA_OFF_PCP2IDX + i);
+ writeb(dcb->tc_tx_pct[i], dcb->dcbcfg_tbl +
+ dcb->cfg_offset + NFP_DCB_DATA_OFF_IDX_BW_PCT + dcb->tc2idx[i]);
+ writeb(nfp_tsa_ieee2nfp(dcb->tc_tsa[i]), dcb->dcbcfg_tbl +
+ dcb->cfg_offset + NFP_DCB_DATA_OFF_TSA + dcb->tc2idx[i]);
+ }
+}
+
+static void nfp_nic_ets_init(struct nfp_net *nn, u32 *update)
+{
+ struct nfp_dcb *dcb = get_dcb_priv(nn);
+
+ if (dcb->ets_init)
+ return;
+
+ nfp_nic_fill_ets(nn);
+ dcb->ets_init = true;
+ *update |= NFP_DCB_MSG_MSK_TSA | NFP_DCB_MSG_MSK_PCT | NFP_DCB_MSG_MSK_PCP;
+}
+
+static int nfp_nic_dcbnl_ieee_setets(struct net_device *dev,
+ struct ieee_ets *ets)
+{
+ const u32 cmd = NFP_NET_CFG_MBOX_CMD_DCB_UPDATE;
+ struct nfp_net *nn = netdev_priv(dev);
+ struct nfp_app *app = nn->app;
+ struct nfp_dcb *dcb;
+ u32 update = 0;
+ bool change;
+ int err;
+
+ err = dcb_ets_check(dev, ets);
+ if (err)
+ return err;
+
+ dcb = get_dcb_priv(nn);
+
+ for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ dcb->prio2tc[i] = ets->prio_tc[i];
+ dcb->tc_tx_pct[i] = ets->tc_tx_bw[i];
+ dcb->tc_tsa[i] = ets->tc_tsa[i];
+ }
+
+ change = nfp_refresh_tc2idx(nn);
+ nfp_nic_fill_ets(nn);
+ dcb->ets_init = true;
+ if (change || !dcb->rate_init) {
+ err = update_dscp_maxrate(dev, &update);
+ if (err) {
+ nfp_warn(app->cpp,
+ "nfp dcbnl ieee setets ERROR:%d.",
+ err);
+ return err;
+ }
+
+ dcb->rate_init = true;
+ }
+ nfp_nic_set_enable(nn, NFP_DCB_ALL_QOS_ENABLE, &update);
+ nfp_nic_set_trust(nn, &update);
+ err = nfp_net_mbox_lock(nn, NFP_DCB_UPDATE_MSK_SZ);
+ if (err)
+ return err;
+
+ nn_writel(nn, nn->tlv_caps.mbox_off + NFP_NET_CFG_MBOX_SIMPLE_VAL,
+ update | NFP_DCB_MSG_MSK_TSA | NFP_DCB_MSG_MSK_PCT |
+ NFP_DCB_MSG_MSK_PCP);
+
+ return nfp_net_mbox_reconfig_and_unlock(nn, cmd);
+}
+
+static int nfp_nic_dcbnl_ieee_getmaxrate(struct net_device *dev,
+ struct ieee_maxrate *maxrate)
+{
+ struct nfp_net *nn = netdev_priv(dev);
+ struct nfp_dcb *dcb;
+
+ dcb = get_dcb_priv(nn);
+
+ for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
+ maxrate->tc_maxrate[i] = dcb->tc_maxrate[i];
+
+ return 0;
+}
+
+static int nfp_nic_dcbnl_ieee_setmaxrate(struct net_device *dev,
+ struct ieee_maxrate *maxrate)
+{
+ const u32 cmd = NFP_NET_CFG_MBOX_CMD_DCB_UPDATE;
+ struct nfp_net *nn = netdev_priv(dev);
+ struct nfp_app *app = nn->app;
+ struct nfp_dcb *dcb;
+ u32 update = 0;
+ int err;
+
+ err = nfp_fill_maxrate(nn, maxrate->tc_maxrate);
+ if (err) {
+ nfp_warn(app->cpp,
+ "nfp dcbnl ieee setmaxrate ERROR:%d.",
+ err);
+ return err;
+ }
+
+ dcb = get_dcb_priv(nn);
+
+ dcb->rate_init = true;
+ nfp_nic_set_enable(nn, NFP_DCB_ALL_QOS_ENABLE, &update);
+ nfp_nic_set_trust(nn, &update);
+ nfp_nic_ets_init(nn, &update);
+
+ err = nfp_net_mbox_lock(nn, NFP_DCB_UPDATE_MSK_SZ);
+ if (err)
+ return err;
+
+ nn_writel(nn, nn->tlv_caps.mbox_off + NFP_NET_CFG_MBOX_SIMPLE_VAL,
+ update | NFP_DCB_MSG_MSK_RATE);
+
+ return nfp_net_mbox_reconfig_and_unlock(nn, cmd);
+}
+
+static int nfp_nic_set_trust_status(struct nfp_net *nn, u8 status)
+{
+ const u32 cmd = NFP_NET_CFG_MBOX_CMD_DCB_UPDATE;
+ struct nfp_dcb *dcb;
+ u32 update = 0;
+ int err;
+
+ dcb = get_dcb_priv(nn);
+ if (!dcb->rate_init) {
+ err = nfp_fill_maxrate(nn, dcb->tc_maxrate);
+ if (err)
+ return err;
+
+ update |= NFP_DCB_MSG_MSK_RATE;
+ dcb->rate_init = true;
+ }
+
+ err = nfp_net_mbox_lock(nn, NFP_DCB_UPDATE_MSK_SZ);
+ if (err)
+ return err;
+
+ nfp_nic_ets_init(nn, &update);
+ writeb(status, dcb->dcbcfg_tbl + dcb->cfg_offset +
+ NFP_DCB_DATA_OFF_TRUST);
+ nfp_nic_set_enable(nn, NFP_DCB_ALL_QOS_ENABLE, &update);
+ nn_writel(nn, nn->tlv_caps.mbox_off + NFP_NET_CFG_MBOX_SIMPLE_VAL,
+ update | NFP_DCB_MSG_MSK_TRUST);
+
+ err = nfp_net_mbox_reconfig_and_unlock(nn, cmd);
+ if (err)
+ return err;
+
+ dcb->trust_status = status;
+
+ return 0;
+}
+
+static int nfp_nic_set_dscp2prio(struct nfp_net *nn, u8 dscp, u8 prio)
+{
+ const u32 cmd = NFP_NET_CFG_MBOX_CMD_DCB_UPDATE;
+ struct nfp_dcb *dcb;
+ u8 idx, tc;
+ int err;
+
+ err = nfp_net_mbox_lock(nn, NFP_DCB_UPDATE_MSK_SZ);
+ if (err)
+ return err;
+
+ dcb = get_dcb_priv(nn);
+
+ tc = dcb->prio2tc[prio];
+ idx = dcb->tc2idx[tc];
+
+ writeb(idx, dcb->dcbcfg_tbl + dcb->cfg_offset +
+ NFP_DCB_DATA_OFF_DSCP2IDX + dscp);
+
+ nn_writel(nn, nn->tlv_caps.mbox_off +
+ NFP_NET_CFG_MBOX_SIMPLE_VAL, NFP_DCB_MSG_MSK_DSCP);
+
+ err = nfp_net_mbox_reconfig_and_unlock(nn, cmd);
+ if (err)
+ return err;
+
+ dcb->dscp2prio[dscp] = prio;
+
+ return 0;
+}
+
+static int nfp_nic_dcbnl_ieee_setapp(struct net_device *dev,
+ struct dcb_app *app)
+{
+ struct nfp_net *nn = netdev_priv(dev);
+ struct dcb_app old_app;
+ struct nfp_dcb *dcb;
+ bool is_new;
+ int err;
+
+ if (app->selector != IEEE_8021QAZ_APP_SEL_DSCP)
+ return -EINVAL;
+
+ dcb = get_dcb_priv(nn);
+
+ /* Save the old entry info */
+ old_app.selector = IEEE_8021QAZ_APP_SEL_DSCP;
+ old_app.protocol = app->protocol;
+ old_app.priority = dcb->dscp2prio[app->protocol];
+
+ /* Check trust status */
+ if (!dcb->dscp_cnt) {
+ err = nfp_nic_set_trust_status(nn, NFP_DCB_TRUST_DSCP);
+ if (err)
+ return err;
+ }
+
+ /* Check if the new mapping is the same as the old one, or if we are in the init stage */
+ if (app->priority != old_app.priority || app->priority == 0) {
+ err = nfp_nic_set_dscp2prio(nn, app->protocol, app->priority);
+ if (err)
+ return err;
+ }
+
+ /* Delete the old entry if it exists */
+ is_new = !!dcb_ieee_delapp(dev, &old_app);
+
+ /* Add new entry and update counter */
+ err = dcb_ieee_setapp(dev, app);
+ if (err)
+ return err;
+
+ if (is_new)
+ dcb->dscp_cnt++;
+
+ return 0;
+}
+
+static int nfp_nic_dcbnl_ieee_delapp(struct net_device *dev,
+ struct dcb_app *app)
+{
+ struct nfp_net *nn = netdev_priv(dev);
+ struct nfp_dcb *dcb;
+ int err;
+
+ if (app->selector != IEEE_8021QAZ_APP_SEL_DSCP)
+ return -EINVAL;
+
+ dcb = get_dcb_priv(nn);
+
+ /* Check that the dcb_app param matches the firmware state */
+ if (app->priority != dcb->dscp2prio[app->protocol])
+ return -ENOENT;
+
+ /* Set fw dscp mapping to 0 */
+ err = nfp_nic_set_dscp2prio(nn, app->protocol, 0);
+ if (err)
+ return err;
+
+ /* Delete app from dcb list */
+ err = dcb_ieee_delapp(dev, app);
+ if (err)
+ return err;
+
+ /* Decrease dscp counter */
+ dcb->dscp_cnt--;
+
+ /* If no dscp mapping is configured, trust pcp */
+ if (dcb->dscp_cnt == 0)
+ return nfp_nic_set_trust_status(nn, NFP_DCB_TRUST_PCP);
+
+ return 0;
+}
+
+static const struct dcbnl_rtnl_ops nfp_nic_dcbnl_ops = {
+ /* ieee 802.1Qaz std */
+ .ieee_getets = nfp_nic_dcbnl_ieee_getets,
+ .ieee_setets = nfp_nic_dcbnl_ieee_setets,
+ .ieee_getmaxrate = nfp_nic_dcbnl_ieee_getmaxrate,
+ .ieee_setmaxrate = nfp_nic_dcbnl_ieee_setmaxrate,
+ .ieee_setapp = nfp_nic_dcbnl_ieee_setapp,
+ .ieee_delapp = nfp_nic_dcbnl_ieee_delapp,
+};
+
+int nfp_nic_dcb_init(struct nfp_net *nn)
+{
+ struct nfp_app *app = nn->app;
+ struct nfp_dcb *dcb;
+ int err;
+
+ dcb = get_dcb_priv(nn);
+ dcb->cfg_offset = NFP_DCB_CFG_STRIDE * nn->id;
+ dcb->dcbcfg_tbl = nfp_pf_map_rtsym(app->pf, "net.dcbcfg_tbl",
+ "_abi_dcb_cfg",
+ dcb->cfg_offset, &dcb->dcbcfg_tbl_area);
+ if (IS_ERR(dcb->dcbcfg_tbl)) {
+ if (PTR_ERR(dcb->dcbcfg_tbl) != -ENOENT) {
+ err = PTR_ERR(dcb->dcbcfg_tbl);
+ dcb->dcbcfg_tbl = NULL;
+ nfp_err(app->cpp,
+ "Failed to map dcbcfg_tbl area, min_size %u.\n",
+ dcb->cfg_offset);
+ return err;
+ }
+ dcb->dcbcfg_tbl = NULL;
+ }
+
+ if (dcb->dcbcfg_tbl) {
+ for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ dcb->prio2tc[i] = i;
+ dcb->tc2idx[i] = i;
+ dcb->tc_tx_pct[i] = 0;
+ dcb->tc_maxrate[i] = 0;
+ dcb->tc_tsa[i] = IEEE_8021QAZ_TSA_VENDOR;
+ }
+ dcb->trust_status = NFP_DCB_TRUST_INVALID;
+ dcb->rate_init = false;
+ dcb->ets_init = false;
+
+ nn->dp.netdev->dcbnl_ops = &nfp_nic_dcbnl_ops;
+ }
+
+ return 0;
+}
+
+void nfp_nic_dcb_clean(struct nfp_net *nn)
+{
+ struct nfp_dcb *dcb;
+
+ dcb = get_dcb_priv(nn);
+ if (dcb->dcbcfg_tbl_area)
+ nfp_cpp_area_release_free(dcb->dcbcfg_tbl_area);
+}
diff --git a/drivers/net/ethernet/netronome/nfp/nic/main.c b/drivers/net/ethernet/netronome/nfp/nic/main.c
index aea8579206ee..9dd5afe37f6e 100644
--- a/drivers/net/ethernet/netronome/nfp/nic/main.c
+++ b/drivers/net/ethernet/netronome/nfp/nic/main.c
@@ -5,6 +5,8 @@
#include "../nfpcore/nfp_nsp.h"
#include "../nfp_app.h"
#include "../nfp_main.h"
+#include "../nfp_net.h"
+#include "main.h"
static int nfp_nic_init(struct nfp_app *app)
{
@@ -28,13 +30,50 @@ static void nfp_nic_sriov_disable(struct nfp_app *app)
{
}
+static int nfp_nic_vnic_init(struct nfp_app *app, struct nfp_net *nn)
+{
+ return nfp_nic_dcb_init(nn);
+}
+
+static void nfp_nic_vnic_clean(struct nfp_app *app, struct nfp_net *nn)
+{
+ nfp_nic_dcb_clean(nn);
+}
+
+static int nfp_nic_vnic_alloc(struct nfp_app *app, struct nfp_net *nn,
+ unsigned int id)
+{
+ struct nfp_app_nic_private *app_pri = nn->app_priv;
+ int err;
+
+ err = nfp_app_nic_vnic_alloc(app, nn, id);
+ if (err)
+ return err;
+
+ if (sizeof(*app_pri)) {
+ nn->app_priv = kzalloc(sizeof(*app_pri), GFP_KERNEL);
+ if (!nn->app_priv)
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void nfp_nic_vnic_free(struct nfp_app *app, struct nfp_net *nn)
+{
+ kfree(nn->app_priv);
+}
+
const struct nfp_app_type app_nic = {
.id = NFP_APP_CORE_NIC,
.name = "nic",
.init = nfp_nic_init,
- .vnic_alloc = nfp_app_nic_vnic_alloc,
-
+ .vnic_alloc = nfp_nic_vnic_alloc,
+ .vnic_free = nfp_nic_vnic_free,
.sriov_enable = nfp_nic_sriov_enable,
.sriov_disable = nfp_nic_sriov_disable,
+
+ .vnic_init = nfp_nic_vnic_init,
+ .vnic_clean = nfp_nic_vnic_clean,
};
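
nfp_nic_vnic_alloc() above guards its kzalloc() with sizeof(*app_pri), so the allocation branch folds away when struct nfp_app_nic_private is empty (CONFIG_DCB unset). A rough sketch of why that works, assuming GNU C, where an empty struct has size zero:

#include <stdio.h>
#include <stdlib.h>

struct priv_empty { };			/* GNU C extension: sizeof == 0 */
struct priv_dcb { int tc[8]; };

int main(void)
{
	printf("empty=%zu dcb=%zu\n",
	       sizeof(struct priv_empty), sizeof(struct priv_dcb));

	/* Allocate only when there is actually something to allocate. */
	void *p = sizeof(struct priv_empty) ?
		  calloc(1, sizeof(struct priv_empty)) : NULL;
	printf("allocated: %s\n", p ? "yes" : "no");
	free(p);
	return 0;
}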
diff --git a/drivers/net/ethernet/netronome/nfp/nic/main.h b/drivers/net/ethernet/netronome/nfp/nic/main.h
new file mode 100644
index 000000000000..094374df42b8
--- /dev/null
+++ b/drivers/net/ethernet/netronome/nfp/nic/main.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
+/* Copyright (C) 2023 Corigine, Inc. */
+
+#ifndef __NFP_NIC_H__
+#define __NFP_NIC_H__ 1
+
+#include <linux/netdevice.h>
+
+#ifdef CONFIG_DCB
+/* DCB feature definitions */
+#define NFP_NET_MAX_DSCP 4
+#define NFP_NET_MAX_TC IEEE_8021QAZ_MAX_TCS
+#define NFP_NET_MAX_PRIO 8
+#define NFP_DCB_CFG_STRIDE 256
+
+struct nfp_dcb {
+ u8 dscp2prio[NFP_NET_MAX_DSCP];
+ u8 prio2tc[NFP_NET_MAX_PRIO];
+ u8 tc2idx[IEEE_8021QAZ_MAX_TCS];
+ u64 tc_maxrate[IEEE_8021QAZ_MAX_TCS];
+ u8 tc_tx_pct[IEEE_8021QAZ_MAX_TCS];
+ u8 tc_tsa[IEEE_8021QAZ_MAX_TCS];
+ u8 dscp_cnt;
+ u8 trust_status;
+ bool rate_init;
+ bool ets_init;
+
+ struct nfp_cpp_area *dcbcfg_tbl_area;
+ u8 __iomem *dcbcfg_tbl;
+ u32 cfg_offset;
+};
+
+int nfp_nic_dcb_init(struct nfp_net *nn);
+void nfp_nic_dcb_clean(struct nfp_net *nn);
+#else
+static inline int nfp_nic_dcb_init(struct nfp_net *nn) { return 0; }
+static inline void nfp_nic_dcb_clean(struct nfp_net *nn) {}
+#endif
+
+struct nfp_app_nic_private {
+#ifdef CONFIG_DCB
+ struct nfp_dcb dcb;
+#endif
+};
+
+#endif
diff --git a/drivers/net/ethernet/ni/nixge.c b/drivers/net/ethernet/ni/nixge.c
index 62320be4de5a..56e02cba0b8a 100644
--- a/drivers/net/ethernet/ni/nixge.c
+++ b/drivers/net/ethernet/ni/nixge.c
@@ -1081,40 +1081,59 @@ static const struct ethtool_ops nixge_ethtool_ops = {
.get_link = ethtool_op_get_link,
};
-static int nixge_mdio_read(struct mii_bus *bus, int phy_id, int reg)
+static int nixge_mdio_read_c22(struct mii_bus *bus, int phy_id, int reg)
{
struct nixge_priv *priv = bus->priv;
u32 status, tmp;
int err;
u16 device;
- if (reg & MII_ADDR_C45) {
- device = (reg >> 16) & 0x1f;
+ device = reg & 0x1f;
- nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_ADDR, reg & 0xffff);
+ tmp = NIXGE_MDIO_CLAUSE22 | NIXGE_MDIO_OP(NIXGE_MDIO_C22_READ) |
+ NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
- tmp = NIXGE_MDIO_CLAUSE45 | NIXGE_MDIO_OP(NIXGE_MDIO_OP_ADDRESS)
- | NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
+ nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp);
+ nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1);
- nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp);
- nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1);
+ err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status,
+ !status, 10, 1000);
+ if (err) {
+ dev_err(priv->dev, "timeout setting read command");
+ return err;
+ }
- err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status,
- !status, 10, 1000);
- if (err) {
- dev_err(priv->dev, "timeout setting address");
- return err;
- }
+ status = nixge_ctrl_read_reg(priv, NIXGE_REG_MDIO_DATA);
- tmp = NIXGE_MDIO_CLAUSE45 | NIXGE_MDIO_OP(NIXGE_MDIO_C45_READ) |
- NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
- } else {
- device = reg & 0x1f;
+ return status;
+}
- tmp = NIXGE_MDIO_CLAUSE22 | NIXGE_MDIO_OP(NIXGE_MDIO_C22_READ) |
- NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
+static int nixge_mdio_read_c45(struct mii_bus *bus, int phy_id, int device,
+ int reg)
+{
+ struct nixge_priv *priv = bus->priv;
+ u32 status, tmp;
+ int err;
+
+ nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_ADDR, reg & 0xffff);
+
+ tmp = NIXGE_MDIO_CLAUSE45 |
+ NIXGE_MDIO_OP(NIXGE_MDIO_OP_ADDRESS) |
+ NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
+
+ nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp);
+ nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1);
+
+ err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status,
+ !status, 10, 1000);
+ if (err) {
+ dev_err(priv->dev, "timeout setting address");
+ return err;
}
+ tmp = NIXGE_MDIO_CLAUSE45 | NIXGE_MDIO_OP(NIXGE_MDIO_C45_READ) |
+ NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
+
nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp);
nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1);
@@ -1130,57 +1149,65 @@ static int nixge_mdio_read(struct mii_bus *bus, int phy_id, int reg)
return status;
}
-static int nixge_mdio_write(struct mii_bus *bus, int phy_id, int reg, u16 val)
+static int nixge_mdio_write_c22(struct mii_bus *bus, int phy_id, int reg,
+ u16 val)
{
struct nixge_priv *priv = bus->priv;
u32 status, tmp;
u16 device;
int err;
- if (reg & MII_ADDR_C45) {
- device = (reg >> 16) & 0x1f;
+ device = reg & 0x1f;
- nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_ADDR, reg & 0xffff);
+ tmp = NIXGE_MDIO_CLAUSE22 | NIXGE_MDIO_OP(NIXGE_MDIO_C22_WRITE) |
+ NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
- tmp = NIXGE_MDIO_CLAUSE45 | NIXGE_MDIO_OP(NIXGE_MDIO_OP_ADDRESS)
- | NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
+ nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_DATA, val);
+ nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp);
+ nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1);
- nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp);
- nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1);
+ err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status,
+ !status, 10, 1000);
+ if (err)
+ dev_err(priv->dev, "timeout setting write command");
- err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status,
- !status, 10, 1000);
- if (err) {
- dev_err(priv->dev, "timeout setting address");
- return err;
- }
+ return err;
+}
- tmp = NIXGE_MDIO_CLAUSE45 | NIXGE_MDIO_OP(NIXGE_MDIO_C45_WRITE)
- | NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
+static int nixge_mdio_write_c45(struct mii_bus *bus, int phy_id,
+ int device, int reg, u16 val)
+{
+ struct nixge_priv *priv = bus->priv;
+ u32 status, tmp;
+ int err;
- nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_DATA, val);
- nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp);
- err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status,
- !status, 10, 1000);
- if (err)
- dev_err(priv->dev, "timeout setting write command");
- } else {
- device = reg & 0x1f;
+ nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_ADDR, reg & 0xffff);
- tmp = NIXGE_MDIO_CLAUSE22 |
- NIXGE_MDIO_OP(NIXGE_MDIO_C22_WRITE) |
- NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
+ tmp = NIXGE_MDIO_CLAUSE45 |
+ NIXGE_MDIO_OP(NIXGE_MDIO_OP_ADDRESS) |
+ NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
- nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_DATA, val);
- nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp);
- nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1);
+ nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp);
+ nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1);
- err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status,
- !status, 10, 1000);
- if (err)
- dev_err(priv->dev, "timeout setting write command");
+ err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status,
+ !status, 10, 1000);
+ if (err) {
+ dev_err(priv->dev, "timeout setting address");
+ return err;
}
+ tmp = NIXGE_MDIO_CLAUSE45 | NIXGE_MDIO_OP(NIXGE_MDIO_C45_WRITE) |
+ NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
+
+ nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_DATA, val);
+ nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp);
+
+ err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status,
+ !status, 10, 1000);
+ if (err)
+ dev_err(priv->dev, "timeout setting write command");
+
return err;
}
@@ -1195,8 +1222,10 @@ static int nixge_mdio_setup(struct nixge_priv *priv, struct device_node *np)
snprintf(bus->id, MII_BUS_ID_SIZE, "%s-mii", dev_name(priv->dev));
bus->priv = priv;
bus->name = "nixge_mii_bus";
- bus->read = nixge_mdio_read;
- bus->write = nixge_mdio_write;
+ bus->read = nixge_mdio_read_c22;
+ bus->write = nixge_mdio_write_c22;
+ bus->read_c45 = nixge_mdio_read_c45;
+ bus->write_c45 = nixge_mdio_write_c45;
bus->parent = priv->dev;
priv->mii_bus = bus;
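
A rough sketch of the API shape this nixge conversion targets: the old flag-in-register encoding is gone and the core dispatches explicitly between Clause 22 and Clause 45 callbacks. Types and names below are illustrative, not the kernel's mii_bus definitions:

#include <stdio.h>

struct mdio_ops {
	int (*read_c22)(int phy, int reg);
	int (*read_c45)(int phy, int devad, int reg);
};

static int demo_read_c22(int phy, int reg)
{
	return 0x2222;				/* pretend register value */
}

static int demo_read_c45(int phy, int devad, int reg)
{
	return 0x4545;				/* pretend register value */
}

/* The core now dispatches on the access type explicitly, instead of
 * hiding the MMD behind a flag bit in the register number.
 */
static int mdio_read(const struct mdio_ops *ops, int c45,
		     int phy, int devad, int reg)
{
	return c45 ? ops->read_c45(phy, devad, reg)
		   : ops->read_c22(phy, reg);
}

int main(void)
{
	struct mdio_ops ops = { demo_read_c22, demo_read_c45 };

	printf("c22=0x%x c45=0x%x\n",
	       mdio_read(&ops, 0, 1, 0, 2),
	       mdio_read(&ops, 1, 1, 1, 0x8000));
	return 0;
}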
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
index ce436e97324a..e508f8eb43bf 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
@@ -121,7 +121,7 @@ static void ionic_vf_dealloc_locked(struct ionic *ionic)
if (v->stats_pa) {
vfc.stats_pa = 0;
- (void)ionic_set_vf_config(ionic, i, &vfc);
+ ionic_set_vf_config(ionic, i, &vfc);
dma_unmap_single(ionic->dev, v->stats_pa,
sizeof(v->stats), DMA_FROM_DEVICE);
v->stats_pa = 0;
@@ -169,7 +169,7 @@ static int ionic_vf_alloc(struct ionic *ionic, int num_vfs)
/* ignore failures from older FW, we just won't get stats */
vfc.stats_pa = cpu_to_le64(v->stats_pa);
- (void)ionic_set_vf_config(ionic, i, &vfc);
+ ionic_set_vf_config(ionic, i, &vfc);
}
out:
@@ -352,6 +352,7 @@ err_out_port_reset:
err_out_reset:
ionic_reset(ionic);
err_out_teardown:
+ ionic_dev_teardown(ionic);
pci_clear_master(pdev);
/* Don't fail the probe for these errors, keep
* the hw interface around for inspection
@@ -390,6 +391,7 @@ static void ionic_remove(struct pci_dev *pdev)
ionic_port_reset(ionic);
ionic_reset(ionic);
+ ionic_dev_teardown(ionic);
pci_clear_master(pdev);
ionic_unmap_bars(ionic);
pci_release_regions(pdev);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.c b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
index d911f4fd9af6..c06576f43916 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
@@ -92,6 +92,7 @@ int ionic_dev_setup(struct ionic *ionic)
unsigned int num_bars = ionic->num_bars;
struct ionic_dev *idev = &ionic->idev;
struct device *dev = ionic->dev;
+ int size;
u32 sig;
/* BAR0: dev_cmd and interrupts */
@@ -133,9 +134,36 @@ int ionic_dev_setup(struct ionic *ionic)
idev->db_pages = bar->vaddr;
idev->phy_db_pages = bar->bus_addr;
+ /* BAR2: optional controller memory mapping */
+ bar++;
+ mutex_init(&idev->cmb_inuse_lock);
+ if (num_bars < 3 || !ionic->bars[IONIC_PCI_BAR_CMB].len) {
+ idev->cmb_inuse = NULL;
+ return 0;
+ }
+
+ idev->phy_cmb_pages = bar->bus_addr;
+ idev->cmb_npages = bar->len / PAGE_SIZE;
+ size = BITS_TO_LONGS(idev->cmb_npages) * sizeof(long);
+ idev->cmb_inuse = kzalloc(size, GFP_KERNEL);
+ if (!idev->cmb_inuse)
+ dev_warn(dev, "No memory for CMB, disabling\n");
+
return 0;
}
+void ionic_dev_teardown(struct ionic *ionic)
+{
+ struct ionic_dev *idev = &ionic->idev;
+
+ kfree(idev->cmb_inuse);
+ idev->cmb_inuse = NULL;
+ idev->phy_cmb_pages = 0;
+ idev->cmb_npages = 0;
+
+ mutex_destroy(&idev->cmb_inuse_lock);
+}
+
/* Devcmd Interface */
bool ionic_is_fw_running(struct ionic_dev *idev)
{
@@ -571,6 +599,33 @@ int ionic_db_page_num(struct ionic_lif *lif, int pid)
return (lif->hw_index * lif->dbid_count) + pid;
}
+int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr, int order)
+{
+ struct ionic_dev *idev = &lif->ionic->idev;
+ int ret;
+
+ mutex_lock(&idev->cmb_inuse_lock);
+ ret = bitmap_find_free_region(idev->cmb_inuse, idev->cmb_npages, order);
+ mutex_unlock(&idev->cmb_inuse_lock);
+
+ if (ret < 0)
+ return ret;
+
+ *pgid = ret;
+ *pgaddr = idev->phy_cmb_pages + ret * PAGE_SIZE;
+
+ return 0;
+}
+
+void ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order)
+{
+ struct ionic_dev *idev = &lif->ionic->idev;
+
+ mutex_lock(&idev->cmb_inuse_lock);
+ bitmap_release_region(idev->cmb_inuse, pgid, order);
+ mutex_unlock(&idev->cmb_inuse_lock);
+}
+
int ionic_cq_init(struct ionic_lif *lif, struct ionic_cq *cq,
struct ionic_intr_info *intr,
unsigned int num_descs, size_t desc_size)
@@ -679,6 +734,18 @@ void ionic_q_map(struct ionic_queue *q, void *base, dma_addr_t base_pa)
cur->desc = base + (i * q->desc_size);
}
+void ionic_q_cmb_map(struct ionic_queue *q, void __iomem *base, dma_addr_t base_pa)
+{
+ struct ionic_desc_info *cur;
+ unsigned int i;
+
+ q->cmb_base = base;
+ q->cmb_base_pa = base_pa;
+
+ for (i = 0, cur = q->info; i < q->num_descs; i++, cur++)
+ cur->cmb_desc = base + (i * q->desc_size);
+}
+
void ionic_q_sg_map(struct ionic_queue *q, void *base, dma_addr_t base_pa)
{
struct ionic_desc_info *cur;
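
A userspace sketch of the CMB page allocator added above: pages are handed out in naturally aligned power-of-two "order" blocks, mirroring the bitmap_find_free_region()/bitmap_release_region() pattern; the pool size is illustrative:

#include <stdio.h>

#define NPAGES 32

static unsigned char inuse[NPAGES];	/* one byte per page for clarity */

/* Find a free, naturally aligned run of (1 << order) pages. */
static int get_region(int order)
{
	int n = 1 << order, i, j;

	for (i = 0; i + n <= NPAGES; i += n) {
		for (j = 0; j < n && !inuse[i + j]; j++)
			;
		if (j == n) {
			for (j = 0; j < n; j++)
				inuse[i + j] = 1;
			return i;	/* first page id of the region */
		}
	}
	return -1;			/* -ENOMEM in the driver */
}

static void put_region(int pgid, int order)
{
	int j;

	for (j = 0; j < (1 << order); j++)
		inuse[pgid + j] = 0;
}

int main(void)
{
	int a = get_region(2);		/* 4 pages */
	int b = get_region(1);		/* 2 pages */

	printf("a=%d b=%d\n", a, b);
	put_region(a, 2);
	printf("a'=%d\n", get_region(2));
	return 0;
}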
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
index bce3ca38669b..0bea208bfba2 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
@@ -159,6 +159,11 @@ struct ionic_dev {
struct ionic_intr __iomem *intr_ctrl;
u64 __iomem *intr_status;
+ struct mutex cmb_inuse_lock; /* for cmb_inuse */
+ unsigned long *cmb_inuse;
+ dma_addr_t phy_cmb_pages;
+ u32 cmb_npages;
+
u32 port_info_sz;
struct ionic_port_info *port_info;
dma_addr_t port_info_pa;
@@ -203,6 +208,7 @@ struct ionic_desc_info {
struct ionic_rxq_desc *rxq_desc;
struct ionic_admin_cmd *adminq_desc;
};
+ void __iomem *cmb_desc;
union {
void *sg_desc;
struct ionic_txq_sg_desc *txq_sg_desc;
@@ -241,12 +247,14 @@ struct ionic_queue {
struct ionic_rxq_desc *rxq;
struct ionic_admin_cmd *adminq;
};
+ void __iomem *cmb_base;
union {
void *sg_base;
struct ionic_txq_sg_desc *txq_sgl;
struct ionic_rxq_sg_desc *rxq_sgl;
};
dma_addr_t base_pa;
+ dma_addr_t cmb_base_pa;
dma_addr_t sg_base_pa;
unsigned int desc_size;
unsigned int sg_desc_size;
@@ -309,6 +317,7 @@ static inline bool ionic_q_has_space(struct ionic_queue *q, unsigned int want)
void ionic_init_devinfo(struct ionic *ionic);
int ionic_dev_setup(struct ionic *ionic);
+void ionic_dev_teardown(struct ionic *ionic);
void ionic_dev_cmd_go(struct ionic_dev *idev, union ionic_dev_cmd *cmd);
u8 ionic_dev_cmd_status(struct ionic_dev *idev);
@@ -344,6 +353,9 @@ void ionic_dev_cmd_adminq_init(struct ionic_dev *idev, struct ionic_qcq *qcq,
int ionic_db_page_num(struct ionic_lif *lif, int pid);
+int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr, int order);
+void ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order);
+
int ionic_cq_init(struct ionic_lif *lif, struct ionic_cq *cq,
struct ionic_intr_info *intr,
unsigned int num_descs, size_t desc_size);
@@ -360,6 +372,7 @@ int ionic_q_init(struct ionic_lif *lif, struct ionic_dev *idev,
unsigned int num_descs, size_t desc_size,
size_t sg_desc_size, unsigned int pid);
void ionic_q_map(struct ionic_queue *q, void *base, dma_addr_t base_pa);
+void ionic_q_cmb_map(struct ionic_queue *q, void __iomem *base, dma_addr_t base_pa);
void ionic_q_sg_map(struct ionic_queue *q, void *base, dma_addr_t base_pa);
void ionic_q_post(struct ionic_queue *q, bool ring_doorbell, ionic_desc_cb cb,
void *cb_arg);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
index 01c22701482d..cf33503468a3 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
@@ -511,6 +511,87 @@ static int ionic_set_coalesce(struct net_device *netdev,
return 0;
}
+static int ionic_validate_cmb_config(struct ionic_lif *lif,
+ struct ionic_queue_params *qparam)
+{
+ int pages_have, pages_required = 0;
+ unsigned long sz;
+
+ if (!lif->ionic->idev.cmb_inuse &&
+ (qparam->cmb_tx || qparam->cmb_rx)) {
+ netdev_info(lif->netdev, "CMB rings are not supported on this device\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (qparam->cmb_tx) {
+ if (!(lif->qtype_info[IONIC_QTYPE_TXQ].features & IONIC_QIDENT_F_CMB)) {
+ netdev_info(lif->netdev,
+ "CMB rings for tx-push are not supported on this device\n");
+ return -EOPNOTSUPP;
+ }
+
+ sz = sizeof(struct ionic_txq_desc) * qparam->ntxq_descs * qparam->nxqs;
+ pages_required += ALIGN(sz, PAGE_SIZE) / PAGE_SIZE;
+ }
+
+ if (qparam->cmb_rx) {
+ if (!(lif->qtype_info[IONIC_QTYPE_RXQ].features & IONIC_QIDENT_F_CMB)) {
+ netdev_info(lif->netdev,
+ "CMB rings for rx-push are not supported on this device\n");
+ return -EOPNOTSUPP;
+ }
+
+ sz = sizeof(struct ionic_rxq_desc) * qparam->nrxq_descs * qparam->nxqs;
+ pages_required += ALIGN(sz, PAGE_SIZE) / PAGE_SIZE;
+ }
+
+ pages_have = lif->ionic->bars[IONIC_PCI_BAR_CMB].len / PAGE_SIZE;
+ if (pages_required > pages_have) {
+ netdev_info(lif->netdev,
+ "Not enough CMB pages for number of queues and size of descriptor rings, need %d have %d",
+ pages_required, pages_have);
+ return -ENOMEM;
+ }
+
+ return pages_required;
+}
+
+static int ionic_cmb_rings_toggle(struct ionic_lif *lif, bool cmb_tx, bool cmb_rx)
+{
+ struct ionic_queue_params qparam;
+ int pages_used;
+
+ if (netif_running(lif->netdev)) {
+ netdev_info(lif->netdev, "Please stop device to toggle CMB for tx/rx-push\n");
+ return -EBUSY;
+ }
+
+ ionic_init_queue_params(lif, &qparam);
+ qparam.cmb_tx = cmb_tx;
+ qparam.cmb_rx = cmb_rx;
+ pages_used = ionic_validate_cmb_config(lif, &qparam);
+ if (pages_used < 0)
+ return pages_used;
+
+ if (cmb_tx)
+ set_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state);
+ else
+ clear_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state);
+
+ if (cmb_rx)
+ set_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state);
+ else
+ clear_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state);
+
+ if (cmb_tx || cmb_rx)
+ netdev_info(lif->netdev, "Enabling CMB %s %s rings - %d pages\n",
+ cmb_tx ? "TX" : "", cmb_rx ? "RX" : "", pages_used);
+ else
+ netdev_info(lif->netdev, "Disabling CMB rings\n");
+
+ return 0;
+}
+
static void ionic_get_ringparam(struct net_device *netdev,
struct ethtool_ringparam *ring,
struct kernel_ethtool_ringparam *kernel_ring,
@@ -522,6 +603,8 @@ static void ionic_get_ringparam(struct net_device *netdev,
ring->tx_pending = lif->ntxq_descs;
ring->rx_max_pending = IONIC_MAX_RX_DESC;
ring->rx_pending = lif->nrxq_descs;
+ kernel_ring->tx_push = test_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state);
+ kernel_ring->rx_push = test_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state);
}
static int ionic_set_ringparam(struct net_device *netdev,
@@ -551,9 +634,28 @@ static int ionic_set_ringparam(struct net_device *netdev,
/* if nothing to do return success */
if (ring->tx_pending == lif->ntxq_descs &&
- ring->rx_pending == lif->nrxq_descs)
+ ring->rx_pending == lif->nrxq_descs &&
+ kernel_ring->tx_push == test_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state) &&
+ kernel_ring->rx_push == test_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state))
return 0;
+ qparam.ntxq_descs = ring->tx_pending;
+ qparam.nrxq_descs = ring->rx_pending;
+ qparam.cmb_tx = kernel_ring->tx_push;
+ qparam.cmb_rx = kernel_ring->rx_push;
+
+ err = ionic_validate_cmb_config(lif, &qparam);
+ if (err < 0)
+ return err;
+
+ if (kernel_ring->tx_push != test_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state) ||
+ kernel_ring->rx_push != test_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state)) {
+ err = ionic_cmb_rings_toggle(lif, kernel_ring->tx_push,
+ kernel_ring->rx_push);
+ if (err < 0)
+ return err;
+ }
+
if (ring->tx_pending != lif->ntxq_descs)
netdev_info(netdev, "Changing Tx ring size from %d to %d\n",
lif->ntxq_descs, ring->tx_pending);
@@ -569,9 +671,6 @@ static int ionic_set_ringparam(struct net_device *netdev,
return 0;
}
- qparam.ntxq_descs = ring->tx_pending;
- qparam.nrxq_descs = ring->rx_pending;
-
mutex_lock(&lif->queue_lock);
err = ionic_reconfigure_queues(lif, &qparam);
mutex_unlock(&lif->queue_lock);
@@ -638,7 +737,7 @@ static int ionic_set_channels(struct net_device *netdev,
lif->nxqs, ch->combined_count);
qparam.nxqs = ch->combined_count;
- qparam.intr_split = 0;
+ qparam.intr_split = false;
} else {
max_cnt /= 2;
if (ch->rx_count > max_cnt)
@@ -654,9 +753,13 @@ static int ionic_set_channels(struct net_device *netdev,
lif->nxqs, ch->rx_count);
qparam.nxqs = ch->rx_count;
- qparam.intr_split = 1;
+ qparam.intr_split = true;
}
+ err = ionic_validate_cmb_config(lif, &qparam);
+ if (err < 0)
+ return err;
+
/* if we're not running, just set the values and return */
if (!netif_running(lif->netdev)) {
lif->nxqs = qparam.nxqs;
@@ -965,6 +1068,8 @@ static const struct ethtool_ops ionic_ethtool_ops = {
.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
ETHTOOL_COALESCE_USE_ADAPTIVE_RX |
ETHTOOL_COALESCE_USE_ADAPTIVE_TX,
+ .supported_ring_params = ETHTOOL_RING_USE_TX_PUSH |
+ ETHTOOL_RING_USE_RX_PUSH,
.get_drvinfo = ionic_get_drvinfo,
.get_regs_len = ionic_get_regs_len,
.get_regs = ionic_get_regs,
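
A small standalone sketch of the sizing arithmetic in ionic_validate_cmb_config() above: per-queue-set descriptor rings are rounded up to whole pages and summed against what the CMB BAR provides. PAGE_SIZE, the descriptor sizes, and the BAR size are illustrative:

#include <stdio.h>

#define PAGE_SIZE 4096UL

static unsigned long pages_for(unsigned long desc_sz,
			       unsigned long ndescs, unsigned long nqueues)
{
	unsigned long sz = desc_sz * ndescs * nqueues;

	return (sz + PAGE_SIZE - 1) / PAGE_SIZE;	/* ALIGN()/PAGE_SIZE */
}

int main(void)
{
	unsigned long have = (8UL << 20) / PAGE_SIZE;	/* 8 MB CMB BAR */
	unsigned long need = pages_for(16, 1024, 16) +	/* tx descriptors */
			     pages_for(16, 1024, 16);	/* rx descriptors */

	printf("need=%lu have=%lu ok=%d\n", need, have, need <= have);
	return 0;
}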
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_if.h b/drivers/net/ethernet/pensando/ionic/ionic_if.h
index eac09b2375b8..9a1825edf0d0 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_if.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_if.h
@@ -3073,9 +3073,10 @@ union ionic_adminq_comp {
#define IONIC_BARS_MAX 6
#define IONIC_PCI_BAR_DBELL 1
+#define IONIC_PCI_BAR_CMB 2
-/* BAR0 */
#define IONIC_BAR0_SIZE 0x8000
+#define IONIC_BAR2_SIZE 0x800000
#define IONIC_BAR0_DEV_INFO_REGS_OFFSET 0x0000
#define IONIC_BAR0_DEV_CMD_REGS_OFFSET 0x0800
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
index 63a78a9ac241..957027e546b3 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
@@ -26,9 +26,12 @@
static const u8 ionic_qtype_versions[IONIC_QTYPE_MAX] = {
[IONIC_QTYPE_ADMINQ] = 0, /* 0 = Base version with CQ support */
[IONIC_QTYPE_NOTIFYQ] = 0, /* 0 = Base version */
- [IONIC_QTYPE_RXQ] = 0, /* 0 = Base version with CQ+SG support */
- [IONIC_QTYPE_TXQ] = 1, /* 0 = Base version with CQ+SG support
- * 1 = ... with Tx SG version 1
+ [IONIC_QTYPE_RXQ] = 2, /* 0 = Base version with CQ+SG support
+ * 2 = ... with CMB rings
+ */
+ [IONIC_QTYPE_TXQ] = 3, /* 0 = Base version with CQ+SG support
+ * 1 = ... with Tx SG version 1
+ * 3 = ... with CMB rings
*/
};
@@ -149,7 +152,7 @@ static void ionic_link_status_check(struct ionic_lif *lif)
mutex_lock(&lif->queue_lock);
err = ionic_start_queues(lif);
if (err && err != -EBUSY) {
- netdev_err(lif->netdev,
+ netdev_err(netdev,
"Failed to start queues: %d\n", err);
set_bit(IONIC_LIF_F_BROKEN, lif->state);
netif_carrier_off(lif->netdev);
@@ -397,6 +400,15 @@ static void ionic_qcq_free(struct ionic_lif *lif, struct ionic_qcq *qcq)
qcq->q_base_pa = 0;
}
+ if (qcq->cmb_q_base) {
+ iounmap(qcq->cmb_q_base);
+ ionic_put_cmb(lif, qcq->cmb_pgid, qcq->cmb_order);
+ qcq->cmb_pgid = 0;
+ qcq->cmb_order = 0;
+ qcq->cmb_q_base = NULL;
+ qcq->cmb_q_base_pa = 0;
+ }
+
if (qcq->cq_base) {
dma_free_coherent(dev, qcq->cq_size, qcq->cq_base, qcq->cq_base_pa);
qcq->cq_base = NULL;
@@ -608,6 +620,7 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsigned int type,
ionic_cq_map(&new->cq, cq_base, cq_base_pa);
ionic_cq_bind(&new->cq, &new->q);
} else {
+ /* regular DMA q descriptors */
new->q_size = PAGE_SIZE + (num_descs * desc_size);
new->q_base = dma_alloc_coherent(dev, new->q_size, &new->q_base_pa,
GFP_KERNEL);
@@ -620,6 +633,33 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsigned int type,
q_base_pa = ALIGN(new->q_base_pa, PAGE_SIZE);
ionic_q_map(&new->q, q_base, q_base_pa);
+ if (flags & IONIC_QCQ_F_CMB_RINGS) {
+ /* on-chip CMB q descriptors */
+ new->cmb_q_size = num_descs * desc_size;
+ new->cmb_order = order_base_2(new->cmb_q_size / PAGE_SIZE);
+
+ err = ionic_get_cmb(lif, &new->cmb_pgid, &new->cmb_q_base_pa,
+ new->cmb_order);
+ if (err) {
+ netdev_err(lif->netdev,
+ "Cannot allocate queue order %d from cmb: err %d\n",
+ new->cmb_order, err);
+ goto err_out_free_q;
+ }
+
+ new->cmb_q_base = ioremap_wc(new->cmb_q_base_pa, new->cmb_q_size);
+ if (!new->cmb_q_base) {
+ netdev_err(lif->netdev, "Cannot map queue from cmb\n");
+ ionic_put_cmb(lif, new->cmb_pgid, new->cmb_order);
+ err = -ENOMEM;
+ goto err_out_free_q;
+ }
+
+ new->cmb_q_base_pa -= idev->phy_cmb_pages;
+ ionic_q_cmb_map(&new->q, new->cmb_q_base, new->cmb_q_base_pa);
+ }
+
+ /* cq DMA descriptors */
new->cq_size = PAGE_SIZE + (num_descs * cq_desc_size);
new->cq_base = dma_alloc_coherent(dev, new->cq_size, &new->cq_base_pa,
GFP_KERNEL);
@@ -658,6 +698,10 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsigned int type,
err_out_free_cq:
dma_free_coherent(dev, new->cq_size, new->cq_base, new->cq_base_pa);
err_out_free_q:
+ if (new->cmb_q_base) {
+ iounmap(new->cmb_q_base);
+ ionic_put_cmb(lif, new->cmb_pgid, new->cmb_order);
+ }
dma_free_coherent(dev, new->q_size, new->q_base, new->q_base_pa);
err_out_free_cq_info:
vfree(new->cq.info);
@@ -739,6 +783,8 @@ static void ionic_qcq_sanitize(struct ionic_qcq *qcq)
qcq->cq.tail_idx = 0;
qcq->cq.done_color = 1;
memset(qcq->q_base, 0, qcq->q_size);
+ if (qcq->cmb_q_base)
+ memset_io(qcq->cmb_q_base, 0, qcq->cmb_q_size);
memset(qcq->cq_base, 0, qcq->cq_size);
memset(qcq->sg_base, 0, qcq->sg_size);
}
@@ -758,6 +804,7 @@ static int ionic_lif_txq_init(struct ionic_lif *lif, struct ionic_qcq *qcq)
.index = cpu_to_le32(q->index),
.flags = cpu_to_le16(IONIC_QINIT_F_IRQ |
IONIC_QINIT_F_SG),
+ .intr_index = cpu_to_le16(qcq->intr.index),
.pid = cpu_to_le16(q->pid),
.ring_size = ilog2(q->num_descs),
.ring_base = cpu_to_le64(q->base_pa),
@@ -766,17 +813,19 @@ static int ionic_lif_txq_init(struct ionic_lif *lif, struct ionic_qcq *qcq)
.features = cpu_to_le64(q->features),
},
};
- unsigned int intr_index;
int err;
- intr_index = qcq->intr.index;
-
- ctx.cmd.q_init.intr_index = cpu_to_le16(intr_index);
+ if (qcq->flags & IONIC_QCQ_F_CMB_RINGS) {
+ ctx.cmd.q_init.flags |= cpu_to_le16(IONIC_QINIT_F_CMB);
+ ctx.cmd.q_init.ring_base = cpu_to_le64(qcq->cmb_q_base_pa);
+ }
dev_dbg(dev, "txq_init.pid %d\n", ctx.cmd.q_init.pid);
dev_dbg(dev, "txq_init.index %d\n", ctx.cmd.q_init.index);
dev_dbg(dev, "txq_init.ring_base 0x%llx\n", ctx.cmd.q_init.ring_base);
dev_dbg(dev, "txq_init.ring_size %d\n", ctx.cmd.q_init.ring_size);
+ dev_dbg(dev, "txq_init.cq_ring_base 0x%llx\n", ctx.cmd.q_init.cq_ring_base);
+ dev_dbg(dev, "txq_init.sg_ring_base 0x%llx\n", ctx.cmd.q_init.sg_ring_base);
dev_dbg(dev, "txq_init.flags 0x%x\n", ctx.cmd.q_init.flags);
dev_dbg(dev, "txq_init.ver %d\n", ctx.cmd.q_init.ver);
dev_dbg(dev, "txq_init.intr_index %d\n", ctx.cmd.q_init.intr_index);
@@ -834,6 +883,11 @@ static int ionic_lif_rxq_init(struct ionic_lif *lif, struct ionic_qcq *qcq)
};
int err;
+ if (qcq->flags & IONIC_QCQ_F_CMB_RINGS) {
+ ctx.cmd.q_init.flags |= cpu_to_le16(IONIC_QINIT_F_CMB);
+ ctx.cmd.q_init.ring_base = cpu_to_le64(qcq->cmb_q_base_pa);
+ }
+
dev_dbg(dev, "rxq_init.pid %d\n", ctx.cmd.q_init.pid);
dev_dbg(dev, "rxq_init.index %d\n", ctx.cmd.q_init.index);
dev_dbg(dev, "rxq_init.ring_base 0x%llx\n", ctx.cmd.q_init.ring_base);
@@ -2010,8 +2064,13 @@ static int ionic_txrx_alloc(struct ionic_lif *lif)
sg_desc_sz = sizeof(struct ionic_txq_sg_desc);
flags = IONIC_QCQ_F_TX_STATS | IONIC_QCQ_F_SG;
+
+ if (test_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state))
+ flags |= IONIC_QCQ_F_CMB_RINGS;
+
if (test_bit(IONIC_LIF_F_SPLIT_INTR, lif->state))
flags |= IONIC_QCQ_F_INTR;
+
for (i = 0; i < lif->nxqs; i++) {
err = ionic_qcq_alloc(lif, IONIC_QTYPE_TXQ, i, "tx", flags,
num_desc, desc_sz, comp_sz, sg_desc_sz,
@@ -2032,6 +2091,9 @@ static int ionic_txrx_alloc(struct ionic_lif *lif)
flags = IONIC_QCQ_F_RX_STATS | IONIC_QCQ_F_SG | IONIC_QCQ_F_INTR;
+ if (test_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state))
+ flags |= IONIC_QCQ_F_CMB_RINGS;
+
num_desc = lif->nrxq_descs;
desc_sz = sizeof(struct ionic_rxq_desc);
comp_sz = sizeof(struct ionic_rxq_comp);
@@ -2507,7 +2569,7 @@ static int ionic_set_vf_rate(struct net_device *netdev, int vf,
ret = ionic_set_vf_config(ionic, vf, &vfc);
if (!ret)
- lif->ionic->vfs[vf].maxrate = cpu_to_le32(tx_max);
+ ionic->vfs[vf].maxrate = cpu_to_le32(tx_max);
}
up_write(&ionic->vf_op_lock);
@@ -2707,6 +2769,55 @@ static const struct net_device_ops ionic_netdev_ops = {
.ndo_get_vf_stats = ionic_get_vf_stats,
};
+static int ionic_cmb_reconfig(struct ionic_lif *lif,
+ struct ionic_queue_params *qparam)
+{
+ struct ionic_queue_params start_qparams;
+ int err = 0;
+
+ /* When changing CMB queue parameters, we're using limited
+ * on-device memory and don't have extra memory to use for
+ * duplicate allocations, so we free it all first then
+ * re-allocate with the new parameters.
+ */
+
+ /* Checkpoint for possible unwind */
+ ionic_init_queue_params(lif, &start_qparams);
+
+ /* Stop and free the queues */
+ ionic_stop_queues_reconfig(lif);
+ ionic_txrx_free(lif);
+
+ /* Set up new qparams */
+ ionic_set_queue_params(lif, qparam);
+
+ if (netif_running(lif->netdev)) {
+ /* Alloc and start the new configuration */
+ err = ionic_txrx_alloc(lif);
+ if (err) {
+ dev_warn(lif->ionic->dev,
+ "CMB reconfig failed, restoring values: %d\n", err);
+
+ /* Back out the changes */
+ ionic_set_queue_params(lif, &start_qparams);
+ err = ionic_txrx_alloc(lif);
+ if (err) {
+ dev_err(lif->ionic->dev,
+ "CMB restore failed: %d\n", err);
+ goto errout;
+ }
+ }
+
+ ionic_start_queues_reconfig(lif);
+ } else {
+ /* This was detached in ionic_stop_queues_reconfig() */
+ netif_device_attach(lif->netdev);
+ }
+
+errout:
+ return err;
+}
+
static void ionic_swap_queues(struct ionic_qcq *a, struct ionic_qcq *b)
{
/* only swapping the queues, not the napi, flags, or other stuff */
@@ -2749,6 +2860,11 @@ int ionic_reconfigure_queues(struct ionic_lif *lif,
unsigned int flags, i;
int err = 0;
+ /* Are we changing q params while CMB is on? */
+ if ((test_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state) && qparam->cmb_tx) ||
+ (test_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state) && qparam->cmb_rx))
+ return ionic_cmb_reconfig(lif, qparam);
+
/* allocate temporary qcq arrays to hold new queue structs */
if (qparam->nxqs != lif->nxqs || qparam->ntxq_descs != lif->ntxq_descs) {
tx_qcqs = devm_kcalloc(lif->ionic->dev, lif->ionic->ntxqs_per_lif,
@@ -2785,6 +2901,16 @@ int ionic_reconfigure_queues(struct ionic_lif *lif,
sg_desc_sz = sizeof(struct ionic_txq_sg_desc);
for (i = 0; i < qparam->nxqs; i++) {
+ /* If missing, allocate a short placeholder qcq needed for the swap */
+ if (!lif->txqcqs[i]) {
+ flags = IONIC_QCQ_F_TX_STATS | IONIC_QCQ_F_SG;
+ err = ionic_qcq_alloc(lif, IONIC_QTYPE_TXQ, i, "tx", flags,
+ 4, desc_sz, comp_sz, sg_desc_sz,
+ lif->kern_pid, &lif->txqcqs[i]);
+ if (err)
+ goto err_out;
+ }
+
flags = lif->txqcqs[i]->flags & ~IONIC_QCQ_F_INTR;
err = ionic_qcq_alloc(lif, IONIC_QTYPE_TXQ, i, "tx", flags,
num_desc, desc_sz, comp_sz, sg_desc_sz,
@@ -2804,6 +2930,16 @@ int ionic_reconfigure_queues(struct ionic_lif *lif,
comp_sz *= 2;
for (i = 0; i < qparam->nxqs; i++) {
+ /* If missing, short placeholder qcq needed for swap */
+ if (!lif->rxqcqs[i]) {
+ flags = IONIC_QCQ_F_RX_STATS | IONIC_QCQ_F_SG;
+ err = ionic_qcq_alloc(lif, IONIC_QTYPE_RXQ, i, "rx", flags,
+ 4, desc_sz, comp_sz, sg_desc_sz,
+ lif->kern_pid, &lif->rxqcqs[i]);
+ if (err)
+ goto err_out;
+ }
+
flags = lif->rxqcqs[i]->flags & ~IONIC_QCQ_F_INTR;
err = ionic_qcq_alloc(lif, IONIC_QTYPE_RXQ, i, "rx", flags,
num_desc, desc_sz, comp_sz, sg_desc_sz,
@@ -2853,10 +2989,15 @@ int ionic_reconfigure_queues(struct ionic_lif *lif,
lif->tx_coalesce_hw = lif->rx_coalesce_hw;
}
- /* clear existing interrupt assignments */
+ /* Clear existing interrupt assignments. We check for NULL here
+ * because we walk the whole array looking for potential qcqs, not
+ * just those qcqs that have just been set up.
+ */
for (i = 0; i < lif->ionic->ntxqs_per_lif; i++) {
- ionic_qcq_intr_free(lif, lif->txqcqs[i]);
- ionic_qcq_intr_free(lif, lif->rxqcqs[i]);
+ if (lif->txqcqs[i])
+ ionic_qcq_intr_free(lif, lif->txqcqs[i]);
+ if (lif->rxqcqs[i])
+ ionic_qcq_intr_free(lif, lif->rxqcqs[i]);
}
/* re-assign the interrupts */
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.h b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
index 734519895614..c9c4c46d5a16 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
@@ -59,6 +59,7 @@ struct ionic_rx_stats {
#define IONIC_QCQ_F_TX_STATS BIT(3)
#define IONIC_QCQ_F_RX_STATS BIT(4)
#define IONIC_QCQ_F_NOTIFYQ BIT(5)
+#define IONIC_QCQ_F_CMB_RINGS BIT(6)
struct ionic_qcq {
void *q_base;
@@ -70,6 +71,11 @@ struct ionic_qcq {
void *sg_base;
dma_addr_t sg_base_pa;
u32 sg_size;
+ void __iomem *cmb_q_base;
+ phys_addr_t cmb_q_base_pa;
+ u32 cmb_q_size;
+ u32 cmb_pgid;
+ u32 cmb_order;
struct dim dim;
struct ionic_queue q;
struct ionic_cq cq;
@@ -142,6 +148,8 @@ enum ionic_lif_state_flags {
IONIC_LIF_F_BROKEN,
IONIC_LIF_F_TX_DIM_INTR,
IONIC_LIF_F_RX_DIM_INTR,
+ IONIC_LIF_F_CMB_TX_RINGS,
+ IONIC_LIF_F_CMB_RX_RINGS,
/* leave this as last */
IONIC_LIF_F_STATE_SIZE
@@ -245,8 +253,10 @@ struct ionic_queue_params {
unsigned int nxqs;
unsigned int ntxq_descs;
unsigned int nrxq_descs;
- unsigned int intr_split;
u64 rxq_features;
+ bool intr_split;
+ bool cmb_tx;
+ bool cmb_rx;
};
static inline void ionic_init_queue_params(struct ionic_lif *lif,
@@ -255,8 +265,34 @@ static inline void ionic_init_queue_params(struct ionic_lif *lif,
qparam->nxqs = lif->nxqs;
qparam->ntxq_descs = lif->ntxq_descs;
qparam->nrxq_descs = lif->nrxq_descs;
- qparam->intr_split = test_bit(IONIC_LIF_F_SPLIT_INTR, lif->state);
qparam->rxq_features = lif->rxq_features;
+ qparam->intr_split = test_bit(IONIC_LIF_F_SPLIT_INTR, lif->state);
+ qparam->cmb_tx = test_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state);
+ qparam->cmb_rx = test_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state);
+}
+
+static inline void ionic_set_queue_params(struct ionic_lif *lif,
+ struct ionic_queue_params *qparam)
+{
+ lif->nxqs = qparam->nxqs;
+ lif->ntxq_descs = qparam->ntxq_descs;
+ lif->nrxq_descs = qparam->nrxq_descs;
+ lif->rxq_features = qparam->rxq_features;
+
+ if (qparam->intr_split)
+ set_bit(IONIC_LIF_F_SPLIT_INTR, lif->state);
+ else
+ clear_bit(IONIC_LIF_F_SPLIT_INTR, lif->state);
+
+ if (qparam->cmb_tx)
+ set_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state);
+ else
+ clear_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state);
+
+ if (qparam->cmb_rx)
+ set_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state);
+ else
+ clear_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state);
}
static inline u32 ionic_coal_usec_to_hw(struct ionic *ionic, u32 usecs)
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c
index 08c42b039d92..1dc79cecc5cc 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_main.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c
@@ -388,7 +388,7 @@ int ionic_adminq_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx,
break;
/* force a check of FW status and break out if FW reset */
- (void)ionic_heartbeat_check(lif->ionic);
+ ionic_heartbeat_check(lif->ionic);
if ((test_bit(IONIC_LIF_F_FW_RESET, lif->state) &&
!lif->ionic->idev.fw_status_ready) ||
test_bit(IONIC_LIF_F_FW_STOPPING, lif->state)) {
@@ -676,7 +676,7 @@ int ionic_port_init(struct ionic *ionic)
err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
ionic_dev_cmd_port_state(&ionic->idev, IONIC_PORT_ADMIN_STATE_UP);
- (void)ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
+ ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
mutex_unlock(&ionic->dev_cmd_lock);
if (err) {
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_phc.c b/drivers/net/ethernet/pensando/ionic/ionic_phc.c
index 887046838b3b..eac2f0e3576e 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_phc.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_phc.c
@@ -268,7 +268,7 @@ static u64 ionic_hwstamp_read(struct ionic *ionic,
u32 tick_high_before, tick_high, tick_low;
/* read and discard low part to defeat hw staging of high part */
- (void)ioread32(&ionic->idev.hwstamp_regs->tick_low);
+ ioread32(&ionic->idev.hwstamp_regs->tick_low);
tick_high_before = ioread32(&ionic->idev.hwstamp_regs->tick_high);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c b/drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c
index b7363376dfc8..1ee2f285cb42 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c
@@ -604,14 +604,14 @@ loop_out:
* they can clear room for some new filters
*/
list_for_each_entry_safe(sync_item, spos, &sync_del_list, list) {
- (void)ionic_lif_filter_del(lif, &sync_item->f.cmd);
+ ionic_lif_filter_del(lif, &sync_item->f.cmd);
list_del(&sync_item->list);
devm_kfree(dev, sync_item);
}
list_for_each_entry_safe(sync_item, spos, &sync_add_list, list) {
- (void)ionic_lif_filter_add(lif, &sync_item->f.cmd);
+ ionic_lif_filter_add(lif, &sync_item->f.cmd);
list_del(&sync_item->list);
devm_kfree(dev, sync_item);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
index f761780f0162..26798fc635db 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
@@ -402,6 +402,14 @@ bool ionic_rx_service(struct ionic_cq *cq, struct ionic_cq_info *cq_info)
return true;
}
+static inline void ionic_write_cmb_desc(struct ionic_queue *q,
+ void __iomem *cmb_desc,
+ void *desc)
+{
+ if (q_to_qcq(q)->flags & IONIC_QCQ_F_CMB_RINGS)
+ memcpy_toio(cmb_desc, desc, q->desc_size);
+}
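
ionic_write_cmb_desc() depends on every TX/RX path building its descriptor in the host-memory ring first; only queues flagged IONIC_QCQ_F_CMB_RINGS (BIT(6) in the ionic_lif.h hunk above) get the finished descriptor copied into controller memory, and memcpy_toio() is required there because __iomem mappings must not be written with a plain memcpy(). A small userspace model of the shadow-then-copy idea, where an ordinary buffer and memcpy() stand in for the CMB and memcpy_toio():

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define F_CMB_RINGS 0x40u	/* mirrors IONIC_QCQ_F_CMB_RINGS, BIT(6) */

struct desc { uint64_t addr; uint16_t len; };

static uint8_t fake_cmb[sizeof(struct desc)];	/* stands in for the __iomem CMB */

static void write_cmb_desc(unsigned int flags, void *cmb, const struct desc *d)
{
	if (flags & F_CMB_RINGS)
		memcpy(cmb, d, sizeof(*d));	/* the kernel uses memcpy_toio() here */
}

int main(void)
{
	struct desc shadow = { .addr = 0x1000, .len = 64 };	/* built in host memory first */
	struct desc readback;

	write_cmb_desc(F_CMB_RINGS, fake_cmb, &shadow);
	memcpy(&readback, fake_cmb, sizeof(readback));
	printf("len in CMB copy: %u\n", readback.len);
	return 0;
}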
+
void ionic_rx_fill(struct ionic_queue *q)
{
struct net_device *netdev = q->lif->netdev;
@@ -480,6 +488,8 @@ void ionic_rx_fill(struct ionic_queue *q)
IONIC_RXQ_DESC_OPCODE_SIMPLE;
desc_info->nbufs = nfrags;
+ ionic_write_cmb_desc(q, desc_info->cmb_desc, desc);
+
ionic_rxq_post(q, false, ionic_rx_clean, NULL);
}
@@ -943,7 +953,8 @@ static int ionic_tx_tcp_pseudo_csum(struct sk_buff *skb)
return 0;
}
-static void ionic_tx_tso_post(struct ionic_queue *q, struct ionic_txq_desc *desc,
+static void ionic_tx_tso_post(struct ionic_queue *q,
+ struct ionic_desc_info *desc_info,
struct sk_buff *skb,
dma_addr_t addr, u8 nsge, u16 len,
unsigned int hdrlen, unsigned int mss,
@@ -951,6 +962,7 @@ static void ionic_tx_tso_post(struct ionic_queue *q, struct ionic_txq_desc *desc
u16 vlan_tci, bool has_vlan,
bool start, bool done)
{
+ struct ionic_txq_desc *desc = desc_info->desc;
u8 flags = 0;
u64 cmd;
@@ -966,6 +978,8 @@ static void ionic_tx_tso_post(struct ionic_queue *q, struct ionic_txq_desc *desc
desc->hdr_len = cpu_to_le16(hdrlen);
desc->mss = cpu_to_le16(mss);
+ ionic_write_cmb_desc(q, desc_info->cmb_desc, desc);
+
if (start) {
skb_tx_timestamp(skb);
if (!unlikely(q->features & IONIC_TXQ_F_HWSTAMP))
@@ -1084,7 +1098,7 @@ static int ionic_tx_tso(struct ionic_queue *q, struct sk_buff *skb)
seg_rem = min(tso_rem, mss);
done = (tso_rem == 0);
/* post descriptor */
- ionic_tx_tso_post(q, desc, skb,
+ ionic_tx_tso_post(q, desc_info, skb,
desc_addr, desc_nsge, desc_len,
hdrlen, mss, outer_csum, vlan_tci, has_vlan,
start, done);
@@ -1133,6 +1147,8 @@ static void ionic_tx_calc_csum(struct ionic_queue *q, struct sk_buff *skb,
desc->csum_start = cpu_to_le16(skb_checksum_start_offset(skb));
desc->csum_offset = cpu_to_le16(skb->csum_offset);
+ ionic_write_cmb_desc(q, desc_info->cmb_desc, desc);
+
if (skb_csum_is_sctp(skb))
stats->crc32_csum++;
else
@@ -1170,6 +1186,8 @@ static void ionic_tx_calc_no_csum(struct ionic_queue *q, struct sk_buff *skb,
desc->csum_start = 0;
desc->csum_offset = 0;
+ ionic_write_cmb_desc(q, desc_info->cmb_desc, desc);
+
stats->csum_none++;
}
diff --git a/drivers/net/ethernet/qlogic/qed/qed_devlink.c b/drivers/net/ethernet/qlogic/qed/qed_devlink.c
index 922c47797af6..be5cc8b79bd5 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_devlink.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_devlink.c
@@ -198,7 +198,6 @@ static const struct devlink_ops qed_dl_ops = {
struct devlink *qed_devlink_register(struct qed_dev *cdev)
{
- union devlink_param_value value;
struct qed_devlink *qdevlink;
struct devlink *dl;
int rc;
@@ -216,11 +215,6 @@ struct devlink *qed_devlink_register(struct qed_dev *cdev)
if (rc)
goto err_unregister;
- value.vbool = false;
- devlink_param_driverinit_value_set(dl,
- QED_DEVLINK_PARAM_ID_IWARP_CMT,
- value);
-
cdev->iwarp_cmt = false;
qed_fw_reporters_create(dl);
diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.c b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
index 0848b5529d48..2bf18748581d 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
@@ -831,7 +831,7 @@ static int qed_iov_enable_vf_access(struct qed_hwfn *p_hwfn,
* @p_hwfn: HW device data.
* @p_ptt: PTT window for writing the registers.
* @vf: VF info data.
- * @enable: The actual permision for this VF.
+ * @enable: The actual permission for this VF.
*
* In E4, queue zone permission table size is 320x9. There
* are 320 VF queues for a single engine device (256 for dual
diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
index 953f304b8588..4a3c3b5fb4a1 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
@@ -892,6 +892,9 @@ static void qede_init_ndev(struct qede_dev *edev)
ndev->hw_features = hw_features;
+ ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT;
+
/* MTU range: 46 - 9600 */
ndev->min_mtu = ETH_ZLEN - ETH_HLEN;
ndev->max_mtu = QEDE_MAX_JUMBO_PACKET_SIZE;
@@ -970,8 +973,15 @@ static int qede_alloc_fp_array(struct qede_dev *edev)
goto err;
}
- mem = krealloc(edev->coal_entry, QEDE_QUEUE_CNT(edev) *
- sizeof(*edev->coal_entry), GFP_KERNEL);
+ if (!edev->coal_entry) {
+ mem = kcalloc(QEDE_MAX_RSS_CNT(edev),
+ sizeof(*edev->coal_entry), GFP_KERNEL);
+ } else {
+ mem = krealloc(edev->coal_entry,
+ QEDE_QUEUE_CNT(edev) * sizeof(*edev->coal_entry),
+ GFP_KERNEL);
+ }
+
if (!mem) {
DP_ERR(edev, "coalesce entry allocation failed\n");
kfree(edev->coal_entry);
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
index 27b1663c476e..39d24e07f306 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
@@ -12,6 +12,7 @@
#include "rmnet_handlers.h"
#include "rmnet_vnd.h"
#include "rmnet_private.h"
+#include "rmnet_map.h"
/* Local Definitions and Declarations */
@@ -39,6 +40,8 @@ static int rmnet_unregister_real_device(struct net_device *real_dev)
if (port->nr_rmnet_devs)
return -EINVAL;
+ rmnet_map_tx_aggregate_exit(port);
+
netdev_rx_handler_unregister(real_dev);
kfree(port);
@@ -79,6 +82,8 @@ static int rmnet_register_real_device(struct net_device *real_dev,
for (entry = 0; entry < RMNET_MAX_LOGICAL_EP; entry++)
INIT_HLIST_HEAD(&port->muxed_ep[entry]);
+ rmnet_map_tx_aggregate_init(port);
+
netdev_dbg(real_dev, "registered with rmnet\n");
return 0;
}
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
index 3d3cba56c516..ed112d51ac5a 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
@@ -6,6 +6,7 @@
*/
#include <linux/skbuff.h>
+#include <linux/time.h>
#include <net/gro_cells.h>
#ifndef _RMNET_CONFIG_H_
@@ -19,6 +20,12 @@ struct rmnet_endpoint {
struct hlist_node hlnode;
};
+struct rmnet_egress_agg_params {
+ u32 bytes;
+ u32 count;
+ u64 time_nsec;
+};
+
/* One instance of this structure is instantiated for each real_dev associated
* with rmnet.
*/
@@ -30,6 +37,19 @@ struct rmnet_port {
struct hlist_head muxed_ep[RMNET_MAX_LOGICAL_EP];
struct net_device *bridge_ep;
struct net_device *rmnet_dev;
+
+ /* Egress aggregation information */
+ struct rmnet_egress_agg_params egress_agg_params;
+ /* Protect aggregation related elements */
+ spinlock_t agg_lock;
+ struct sk_buff *skbagg_head;
+ struct sk_buff *skbagg_tail;
+ int agg_state;
+ u8 agg_count;
+ struct timespec64 agg_time;
+ struct timespec64 agg_last;
+ struct hrtimer hrtimer;
+ struct work_struct agg_wq;
};
extern struct rtnl_link_ops rmnet_link_ops;
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
index a313242a762e..9f3479500f85 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
@@ -164,8 +164,18 @@ static int rmnet_map_egress_handler(struct sk_buff *skb,
map_header->mux_id = mux_id;
- skb->protocol = htons(ETH_P_MAP);
+ if (READ_ONCE(port->egress_agg_params.count) > 1) {
+ unsigned int len;
+
+ len = rmnet_map_tx_aggregate(skb, port, orig_dev);
+ if (likely(len)) {
+ rmnet_vnd_tx_fixup_len(len, orig_dev);
+ return -EINPROGRESS;
+ }
+ return -ENOMEM;
+ }
+ skb->protocol = htons(ETH_P_MAP);
return 0;
}
@@ -235,6 +245,7 @@ void rmnet_egress_handler(struct sk_buff *skb)
struct rmnet_port *port;
struct rmnet_priv *priv;
u8 mux_id;
+ int err;
sk_pacing_shift_update(skb->sk, 8);
@@ -247,8 +258,11 @@ void rmnet_egress_handler(struct sk_buff *skb)
if (!port)
goto drop;
- if (rmnet_map_egress_handler(skb, port, mux_id, orig_dev))
+ err = rmnet_map_egress_handler(skb, port, mux_id, orig_dev);
+ if (err == -ENOMEM)
goto drop;
+ else if (err == -EINPROGRESS)
+ return;
rmnet_vnd_tx_fixup(skb, orig_dev);
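
With aggregation in place, rmnet_map_egress_handler() gains a tri-state contract: 0 means the caller still owns the skb and transmits it now, -EINPROGRESS means aggregation consumed the skb (its byte count already credited via rmnet_vnd_tx_fixup_len(), added in the rmnet_vnd.c hunk below), and -ENOMEM means the skb must be dropped. A compact userspace sketch of that dispatch, with a hypothetical stand-in for the handler:

#include <errno.h>
#include <stdio.h>

/* Hypothetical stand-in for rmnet_map_egress_handler(): returns 0,
 * -EINPROGRESS (consumed by aggregation) or -ENOMEM (must drop). */
static int egress_handler(int outcome)
{
	return outcome;
}

static void egress(int outcome)
{
	int err = egress_handler(outcome);

	if (err == -ENOMEM) {
		puts("drop: count the error and free the skb");
		return;
	}
	if (err == -EINPROGRESS) {
		puts("consumed: aggregation owns the skb now");
		return;
	}
	puts("transmit: fix up stats and send immediately");
}

int main(void)
{
	egress(0);
	egress(-EINPROGRESS);
	egress(-ENOMEM);
	return 0;
}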
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
index 2b033060fc20..b70284095568 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
@@ -53,5 +53,11 @@ void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,
struct net_device *orig_dev,
int csum_type);
int rmnet_map_process_next_hdr_packet(struct sk_buff *skb, u16 len);
+unsigned int rmnet_map_tx_aggregate(struct sk_buff *skb, struct rmnet_port *port,
+ struct net_device *orig_dev);
+void rmnet_map_tx_aggregate_init(struct rmnet_port *port);
+void rmnet_map_tx_aggregate_exit(struct rmnet_port *port);
+void rmnet_map_update_ul_agg_config(struct rmnet_port *port, u32 size,
+ u32 count, u32 time);
#endif /* _RMNET_MAP_H_ */
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
index ba194698cc14..a5e3d1a88305 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
@@ -12,6 +12,7 @@
#include "rmnet_config.h"
#include "rmnet_map.h"
#include "rmnet_private.h"
+#include "rmnet_vnd.h"
#define RMNET_MAP_DEAGGR_SPACING 64
#define RMNET_MAP_DEAGGR_HEADROOM (RMNET_MAP_DEAGGR_SPACING / 2)
@@ -518,3 +519,193 @@ int rmnet_map_process_next_hdr_packet(struct sk_buff *skb,
return 0;
}
+
+#define RMNET_AGG_BYPASS_TIME_NSEC 10000000L
+
+static void reset_aggr_params(struct rmnet_port *port)
+{
+ port->skbagg_head = NULL;
+ port->agg_count = 0;
+ port->agg_state = 0;
+ memset(&port->agg_time, 0, sizeof(struct timespec64));
+}
+
+static void rmnet_send_skb(struct rmnet_port *port, struct sk_buff *skb)
+{
+ if (skb_needs_linearize(skb, port->dev->features)) {
+ if (unlikely(__skb_linearize(skb))) {
+ struct rmnet_priv *priv;
+
+ priv = netdev_priv(port->rmnet_dev);
+ this_cpu_inc(priv->pcpu_stats->stats.tx_drops);
+ dev_kfree_skb_any(skb);
+ return;
+ }
+ }
+
+ dev_queue_xmit(skb);
+}
+
+static void rmnet_map_flush_tx_packet_work(struct work_struct *work)
+{
+ struct sk_buff *skb = NULL;
+ struct rmnet_port *port;
+
+ port = container_of(work, struct rmnet_port, agg_wq);
+
+ spin_lock_bh(&port->agg_lock);
+ if (likely(port->agg_state == -EINPROGRESS)) {
+ /* Buffer may have already been shipped out */
+ if (likely(port->skbagg_head)) {
+ skb = port->skbagg_head;
+ reset_aggr_params(port);
+ }
+ port->agg_state = 0;
+ }
+
+ spin_unlock_bh(&port->agg_lock);
+ if (skb)
+ rmnet_send_skb(port, skb);
+}
+
+static enum hrtimer_restart rmnet_map_flush_tx_packet_queue(struct hrtimer *t)
+{
+ struct rmnet_port *port;
+
+ port = container_of(t, struct rmnet_port, hrtimer);
+
+ schedule_work(&port->agg_wq);
+
+ return HRTIMER_NORESTART;
+}
+
+unsigned int rmnet_map_tx_aggregate(struct sk_buff *skb, struct rmnet_port *port,
+ struct net_device *orig_dev)
+{
+ struct timespec64 diff, last;
+ unsigned int len = skb->len;
+ struct sk_buff *agg_skb;
+ int size;
+
+ spin_lock_bh(&port->agg_lock);
+ memcpy(&last, &port->agg_last, sizeof(struct timespec64));
+ ktime_get_real_ts64(&port->agg_last);
+
+ if (!port->skbagg_head) {
+ /* Check to see if we should agg first. If the traffic is very
+ * sparse, don't aggregate.
+ */
+new_packet:
+ diff = timespec64_sub(port->agg_last, last);
+ size = port->egress_agg_params.bytes - skb->len;
+
+ if (size < 0) {
+ /* dropped */
+ spin_unlock_bh(&port->agg_lock);
+ return 0;
+ }
+
+ if (diff.tv_sec > 0 || diff.tv_nsec > RMNET_AGG_BYPASS_TIME_NSEC ||
+ size == 0)
+ goto no_aggr;
+
+ port->skbagg_head = skb_copy_expand(skb, 0, size, GFP_ATOMIC);
+ if (!port->skbagg_head)
+ goto no_aggr;
+
+ dev_kfree_skb_any(skb);
+ port->skbagg_head->protocol = htons(ETH_P_MAP);
+ port->agg_count = 1;
+ ktime_get_real_ts64(&port->agg_time);
+ skb_frag_list_init(port->skbagg_head);
+ goto schedule;
+ }
+ diff = timespec64_sub(port->agg_last, port->agg_time);
+ size = port->egress_agg_params.bytes - port->skbagg_head->len;
+
+ if (skb->len > size) {
+ agg_skb = port->skbagg_head;
+ reset_aggr_params(port);
+ spin_unlock_bh(&port->agg_lock);
+ hrtimer_cancel(&port->hrtimer);
+ rmnet_send_skb(port, agg_skb);
+ spin_lock_bh(&port->agg_lock);
+ goto new_packet;
+ }
+
+ if (skb_has_frag_list(port->skbagg_head))
+ port->skbagg_tail->next = skb;
+ else
+ skb_shinfo(port->skbagg_head)->frag_list = skb;
+
+ port->skbagg_head->len += skb->len;
+ port->skbagg_head->data_len += skb->len;
+ port->skbagg_head->truesize += skb->truesize;
+ port->skbagg_tail = skb;
+ port->agg_count++;
+
+ if (diff.tv_sec > 0 || diff.tv_nsec > port->egress_agg_params.time_nsec ||
+ port->agg_count >= port->egress_agg_params.count ||
+ port->skbagg_head->len == port->egress_agg_params.bytes) {
+ agg_skb = port->skbagg_head;
+ reset_aggr_params(port);
+ spin_unlock_bh(&port->agg_lock);
+ hrtimer_cancel(&port->hrtimer);
+ rmnet_send_skb(port, agg_skb);
+ return len;
+ }
+
+schedule:
+ if (!hrtimer_active(&port->hrtimer) && port->agg_state != -EINPROGRESS) {
+ port->agg_state = -EINPROGRESS;
+ hrtimer_start(&port->hrtimer,
+ ns_to_ktime(port->egress_agg_params.time_nsec),
+ HRTIMER_MODE_REL);
+ }
+ spin_unlock_bh(&port->agg_lock);
+
+ return len;
+
+no_aggr:
+ spin_unlock_bh(&port->agg_lock);
+ skb->protocol = htons(ETH_P_MAP);
+ dev_queue_xmit(skb);
+
+ return len;
+}
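
rmnet_map_tx_aggregate() keys its bypass heuristic off the gap between consecutive packets: agg_last records the previous arrival, and if the new packet lands more than RMNET_AGG_BYPASS_TIME_NSEC (10 ms) later, the traffic is treated as too sparse to be worth delaying and the skb is sent straight out. A userspace model of that gap test, assuming clock_gettime() in place of ktime_get_real_ts64() and an illustrative helper name:

#include <stdio.h>
#include <time.h>

#define AGG_BYPASS_TIME_NSEC 10000000L	/* 10 ms, as in the patch */

/* Mirrors the timespec64_sub()-based sparse-traffic test above. */
static int too_sparse(const struct timespec *last, const struct timespec *now)
{
	time_t sec = now->tv_sec - last->tv_sec;
	long nsec = now->tv_nsec - last->tv_nsec;

	if (nsec < 0) {		/* normalize the borrow */
		sec--;
		nsec += 1000000000L;
	}
	return sec > 0 || nsec > AGG_BYPASS_TIME_NSEC;
}

int main(void)
{
	struct timespec last, now;

	clock_gettime(CLOCK_REALTIME, &last);
	clock_gettime(CLOCK_REALTIME, &now);	/* back to back: not sparse */
	printf("bypass aggregation: %s\n", too_sparse(&last, &now) ? "yes" : "no");
	return 0;
}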
+
+void rmnet_map_update_ul_agg_config(struct rmnet_port *port, u32 size,
+ u32 count, u32 time)
+{
+ spin_lock_bh(&port->agg_lock);
+ port->egress_agg_params.bytes = size;
+ WRITE_ONCE(port->egress_agg_params.count, count);
+ port->egress_agg_params.time_nsec = time * NSEC_PER_USEC;
+ spin_unlock_bh(&port->agg_lock);
+}
+
+void rmnet_map_tx_aggregate_init(struct rmnet_port *port)
+{
+ hrtimer_init(&port->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ port->hrtimer.function = rmnet_map_flush_tx_packet_queue;
+ spin_lock_init(&port->agg_lock);
+ rmnet_map_update_ul_agg_config(port, 4096, 1, 800);
+ INIT_WORK(&port->agg_wq, rmnet_map_flush_tx_packet_work);
+}
+
+void rmnet_map_tx_aggregate_exit(struct rmnet_port *port)
+{
+ hrtimer_cancel(&port->hrtimer);
+ cancel_work_sync(&port->agg_wq);
+
+ spin_lock_bh(&port->agg_lock);
+ if (port->agg_state == -EINPROGRESS) {
+ if (port->skbagg_head) {
+ dev_kfree_skb_any(port->skbagg_head);
+ reset_aggr_params(port);
+ }
+
+ port->agg_state = 0;
+ }
+ spin_unlock_bh(&port->agg_lock);
+}
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
index 3f5e6572d20e..046b5f7d8e7c 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
@@ -29,7 +29,7 @@ void rmnet_vnd_rx_fixup(struct sk_buff *skb, struct net_device *dev)
u64_stats_update_end(&pcpu_ptr->syncp);
}
-void rmnet_vnd_tx_fixup(struct sk_buff *skb, struct net_device *dev)
+void rmnet_vnd_tx_fixup_len(unsigned int len, struct net_device *dev)
{
struct rmnet_priv *priv = netdev_priv(dev);
struct rmnet_pcpu_stats *pcpu_ptr;
@@ -38,10 +38,15 @@ void rmnet_vnd_tx_fixup(struct sk_buff *skb, struct net_device *dev)
u64_stats_update_begin(&pcpu_ptr->syncp);
pcpu_ptr->stats.tx_pkts++;
- pcpu_ptr->stats.tx_bytes += skb->len;
+ pcpu_ptr->stats.tx_bytes += len;
u64_stats_update_end(&pcpu_ptr->syncp);
}
+void rmnet_vnd_tx_fixup(struct sk_buff *skb, struct net_device *dev)
+{
+ rmnet_vnd_tx_fixup_len(skb->len, dev);
+}
+
/* Network Device Operations */
static netdev_tx_t rmnet_vnd_start_xmit(struct sk_buff *skb,
@@ -210,7 +215,52 @@ static void rmnet_get_ethtool_stats(struct net_device *dev,
memcpy(data, st, ARRAY_SIZE(rmnet_gstrings_stats) * sizeof(u64));
}
+static int rmnet_get_coalesce(struct net_device *dev,
+ struct ethtool_coalesce *coal,
+ struct kernel_ethtool_coalesce *kernel_coal,
+ struct netlink_ext_ack *extack)
+{
+ struct rmnet_priv *priv = netdev_priv(dev);
+ struct rmnet_port *port;
+
+ port = rmnet_get_port_rtnl(priv->real_dev);
+
+ memset(kernel_coal, 0, sizeof(*kernel_coal));
+ kernel_coal->tx_aggr_max_bytes = port->egress_agg_params.bytes;
+ kernel_coal->tx_aggr_max_frames = port->egress_agg_params.count;
+ kernel_coal->tx_aggr_time_usecs = div_u64(port->egress_agg_params.time_nsec,
+ NSEC_PER_USEC);
+
+ return 0;
+}
+
+static int rmnet_set_coalesce(struct net_device *dev,
+ struct ethtool_coalesce *coal,
+ struct kernel_ethtool_coalesce *kernel_coal,
+ struct netlink_ext_ack *extack)
+{
+ struct rmnet_priv *priv = netdev_priv(dev);
+ struct rmnet_port *port;
+
+ port = rmnet_get_port_rtnl(priv->real_dev);
+
+ if (kernel_coal->tx_aggr_max_frames < 1 || kernel_coal->tx_aggr_max_frames > 64)
+ return -EINVAL;
+
+ if (kernel_coal->tx_aggr_max_bytes > 32768)
+ return -EINVAL;
+
+ rmnet_map_update_ul_agg_config(port, kernel_coal->tx_aggr_max_bytes,
+ kernel_coal->tx_aggr_max_frames,
+ kernel_coal->tx_aggr_time_usecs);
+
+ return 0;
+}
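
The ethtool hooks translate between user-visible units and the stored ones: tx_aggr_max_bytes and tx_aggr_max_frames map directly onto egress_agg_params.bytes and .count, while tx_aggr_time_usecs is scaled by NSEC_PER_USEC into .time_nsec, so the default of 800 set by rmnet_map_tx_aggregate_init() above means 800 us stored as 800,000 ns. A minimal sketch of that round trip:

#include <inttypes.h>
#include <stdio.h>

#define NSEC_PER_USEC 1000ULL

/* Store user-visible microseconds as nanoseconds, as set_coalesce does. */
static uint64_t usecs_to_stored_nsec(uint32_t usecs)
{
	return usecs * NSEC_PER_USEC;
}

/* Report stored nanoseconds back as microseconds, as get_coalesce
 * does with div_u64(). */
static uint32_t stored_nsec_to_usecs(uint64_t nsec)
{
	return (uint32_t)(nsec / NSEC_PER_USEC);
}

int main(void)
{
	uint64_t stored = usecs_to_stored_nsec(800);	/* the driver default */

	printf("stored=%" PRIu64 " ns, reported=%u us\n",
	       stored, stored_nsec_to_usecs(stored));
	return 0;
}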
+
static const struct ethtool_ops rmnet_ethtool_ops = {
+ .supported_coalesce_params = ETHTOOL_COALESCE_TX_AGGR,
+ .get_coalesce = rmnet_get_coalesce,
+ .set_coalesce = rmnet_set_coalesce,
.get_ethtool_stats = rmnet_get_ethtool_stats,
.get_strings = rmnet_get_strings,
.get_sset_count = rmnet_get_sset_count,
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.h
index dc3a4443ef0a..c2b2baf86894 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.h
@@ -16,6 +16,7 @@ int rmnet_vnd_newlink(u8 id, struct net_device *rmnet_dev,
int rmnet_vnd_dellink(u8 id, struct rmnet_port *port,
struct rmnet_endpoint *ep);
void rmnet_vnd_rx_fixup(struct sk_buff *skb, struct net_device *dev);
+void rmnet_vnd_tx_fixup_len(unsigned int len, struct net_device *dev);
void rmnet_vnd_tx_fixup(struct sk_buff *skb, struct net_device *dev);
void rmnet_vnd_setup(struct net_device *dev);
int rmnet_vnd_validate_real_dev_mtu(struct net_device *real_dev);
diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
index dadd61bccfe7..45147a1016be 100644
--- a/drivers/net/ethernet/realtek/r8169_main.c
+++ b/drivers/net/ethernet/realtek/r8169_main.c
@@ -576,6 +576,7 @@ struct rtl8169_tc_offsets {
enum rtl_flag {
RTL_FLAG_TASK_ENABLED = 0,
RTL_FLAG_TASK_RESET_PENDING,
+ RTL_FLAG_TASK_TX_TIMEOUT,
RTL_FLAG_MAX
};
@@ -3928,7 +3929,7 @@ static void rtl8169_tx_timeout(struct net_device *dev, unsigned int txqueue)
{
struct rtl8169_private *tp = netdev_priv(dev);
- rtl_schedule_task(tp, RTL_FLAG_TASK_RESET_PENDING);
+ rtl_schedule_task(tp, RTL_FLAG_TASK_TX_TIMEOUT);
}
static int rtl8169_tx_map(struct rtl8169_private *tp, const u32 *opts, u32 len,
@@ -4522,6 +4523,7 @@ static void rtl_task(struct work_struct *work)
{
struct rtl8169_private *tp =
container_of(work, struct rtl8169_private, wk.work);
+ int ret;
rtnl_lock();
@@ -4529,7 +4531,27 @@ static void rtl_task(struct work_struct *work)
!test_bit(RTL_FLAG_TASK_ENABLED, tp->wk.flags))
goto out_unlock;
+ if (test_and_clear_bit(RTL_FLAG_TASK_TX_TIMEOUT, tp->wk.flags)) {
+ /* if chip isn't accessible, reset bus to revive it */
+ if (RTL_R32(tp, TxConfig) == ~0) {
+ ret = pci_reset_bus(tp->pci_dev);
+ if (ret < 0) {
+ netdev_err(tp->dev, "Can't reset secondary PCI bus, detach NIC\n");
+ netif_device_detach(tp->dev);
+ goto out_unlock;
+ }
+ }
+
+ /* ASPM compatibility issues are a typical reason for tx timeouts */
+ ret = pci_disable_link_state(tp->pci_dev, PCIE_LINK_STATE_L1 |
+ PCIE_LINK_STATE_L0S);
+ if (!ret)
+ netdev_warn_once(tp->dev, "ASPM disabled on Tx timeout\n");
+ goto reset;
+ }
+
if (test_and_clear_bit(RTL_FLAG_TASK_RESET_PENDING, tp->wk.flags)) {
+reset:
rtl_reset_work(tp);
netif_wake_queue(tp->dev);
}
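
The timeout path encodes two recovery steps: an MMIO read of TxConfig returning all ones is the conventional sign that a PCI device has fallen off the bus, in which case only pci_reset_bus() can revive it, and ASPM L0s/L1 transitions (a known trigger for r8169 TX stalls, per the comment) are disabled before the ordinary chip reset is retried. A tiny sketch of the all-ones liveness probe, with an illustrative healthy register value:

#include <stdint.h>
#include <stdio.h>

/* An MMIO read returning ~0 typically means the device is gone. */
static int chip_accessible(uint32_t txconfig)
{
	return txconfig != 0xffffffffu;
}

int main(void)
{
	printf("%d\n", chip_accessible(0xffffffffu));	/* 0: reset the secondary bus */
	printf("%d\n", chip_accessible(0x47000000u));	/* 1: a chip reset is enough */
	return 0;
}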
diff --git a/drivers/net/ethernet/renesas/rswitch.c b/drivers/net/ethernet/renesas/rswitch.c
index 2370c7797a0a..853394e5bb8b 100644
--- a/drivers/net/ethernet/renesas/rswitch.c
+++ b/drivers/net/ethernet/renesas/rswitch.c
@@ -16,7 +16,6 @@
#include <linux/of_irq.h>
#include <linux/of_mdio.h>
#include <linux/of_net.h>
-#include <linux/phylink.h>
#include <linux/phy/phy.h>
#include <linux/pm_runtime.h>
#include <linux/rtnetlink.h>
@@ -124,13 +123,6 @@ static void rswitch_fwd_init(struct rswitch_private *priv)
iowrite32(GENMASK(RSWITCH_NUM_PORTS - 1, 0), priv->addr + FWPBFC(priv->gwca.index));
}
-/* gPTP timer (gPTP) */
-static void rswitch_get_timestamp(struct rswitch_private *priv,
- struct timespec64 *ts)
-{
- priv->ptp_priv->info.gettime64(&priv->ptp_priv->info, ts);
-}
-
/* Gateway CPU agent block (GWCA) */
static int rswitch_gwca_change_mode(struct rswitch_private *priv,
enum rswitch_gwca_mode mode)
@@ -241,7 +233,7 @@ static int rswitch_get_num_cur_queues(struct rswitch_gwca_queue *gq)
static bool rswitch_is_queue_rxed(struct rswitch_gwca_queue *gq)
{
- struct rswitch_ext_ts_desc *desc = &gq->ts_ring[gq->dirty];
+ struct rswitch_ext_ts_desc *desc = &gq->rx_ring[gq->dirty];
if ((desc->desc.die_dt & DT_MASK) != DT_FEMPTY)
return true;
@@ -281,36 +273,43 @@ static void rswitch_gwca_queue_free(struct net_device *ndev,
{
int i;
- if (gq->gptp) {
+ if (!gq->dir_tx) {
dma_free_coherent(ndev->dev.parent,
sizeof(struct rswitch_ext_ts_desc) *
- (gq->ring_size + 1), gq->ts_ring, gq->ring_dma);
- gq->ts_ring = NULL;
- } else {
- dma_free_coherent(ndev->dev.parent,
- sizeof(struct rswitch_ext_desc) *
- (gq->ring_size + 1), gq->ring, gq->ring_dma);
- gq->ring = NULL;
- }
+ (gq->ring_size + 1), gq->rx_ring, gq->ring_dma);
+ gq->rx_ring = NULL;
- if (!gq->dir_tx) {
for (i = 0; i < gq->ring_size; i++)
dev_kfree_skb(gq->skbs[i]);
+ } else {
+ dma_free_coherent(ndev->dev.parent,
+ sizeof(struct rswitch_ext_desc) *
+ (gq->ring_size + 1), gq->tx_ring, gq->ring_dma);
+ gq->tx_ring = NULL;
}
kfree(gq->skbs);
gq->skbs = NULL;
}
+static void rswitch_gwca_ts_queue_free(struct rswitch_private *priv)
+{
+ struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue;
+
+ dma_free_coherent(&priv->pdev->dev,
+ sizeof(struct rswitch_ts_desc) * (gq->ring_size + 1),
+ gq->ts_ring, gq->ring_dma);
+ gq->ts_ring = NULL;
+}
+
static int rswitch_gwca_queue_alloc(struct net_device *ndev,
struct rswitch_private *priv,
struct rswitch_gwca_queue *gq,
- bool dir_tx, bool gptp, int ring_size)
+ bool dir_tx, int ring_size)
{
int i, bit;
gq->dir_tx = dir_tx;
- gq->gptp = gptp;
gq->ring_size = ring_size;
gq->ndev = ndev;
@@ -318,18 +317,19 @@ static int rswitch_gwca_queue_alloc(struct net_device *ndev,
if (!gq->skbs)
return -ENOMEM;
- if (!dir_tx)
+ if (!dir_tx) {
rswitch_gwca_queue_alloc_skb(gq, 0, gq->ring_size);
- if (gptp)
- gq->ts_ring = dma_alloc_coherent(ndev->dev.parent,
+ gq->rx_ring = dma_alloc_coherent(ndev->dev.parent,
sizeof(struct rswitch_ext_ts_desc) *
(gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL);
- else
- gq->ring = dma_alloc_coherent(ndev->dev.parent,
- sizeof(struct rswitch_ext_desc) *
- (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL);
- if (!gq->ts_ring && !gq->ring)
+ } else {
+ gq->tx_ring = dma_alloc_coherent(ndev->dev.parent,
+ sizeof(struct rswitch_ext_desc) *
+ (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL);
+ }
+
+ if (!gq->rx_ring && !gq->tx_ring)
goto out;
i = gq->index / 32;
@@ -347,6 +347,17 @@ out:
return -ENOMEM;
}
+static int rswitch_gwca_ts_queue_alloc(struct rswitch_private *priv)
+{
+ struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue;
+
+ gq->ring_size = TS_RING_SIZE;
+ gq->ts_ring = dma_alloc_coherent(&priv->pdev->dev,
+ sizeof(struct rswitch_ts_desc) *
+ (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL);
+ return !gq->ts_ring ? -ENOMEM : 0;
+}
+
static void rswitch_desc_set_dptr(struct rswitch_desc *desc, dma_addr_t addr)
{
desc->dptrl = cpu_to_le32(lower_32_bits(addr));
@@ -362,14 +373,14 @@ static int rswitch_gwca_queue_format(struct net_device *ndev,
struct rswitch_private *priv,
struct rswitch_gwca_queue *gq)
{
- int tx_ring_size = sizeof(struct rswitch_ext_desc) * gq->ring_size;
+ int ring_size = sizeof(struct rswitch_ext_desc) * gq->ring_size;
struct rswitch_ext_desc *desc;
struct rswitch_desc *linkfix;
dma_addr_t dma_addr;
int i;
- memset(gq->ring, 0, tx_ring_size);
- for (i = 0, desc = gq->ring; i < gq->ring_size; i++, desc++) {
+ memset(gq->tx_ring, 0, ring_size);
+ for (i = 0, desc = gq->tx_ring; i < gq->ring_size; i++, desc++) {
if (!gq->dir_tx) {
dma_addr = dma_map_single(ndev->dev.parent,
gq->skbs[i]->data, PKT_BUF_SZ,
@@ -387,7 +398,7 @@ static int rswitch_gwca_queue_format(struct net_device *ndev,
rswitch_desc_set_dptr(&desc->desc, gq->ring_dma);
desc->desc.die_dt = DT_LINKFIX;
- linkfix = &priv->linkfix_table[gq->index];
+ linkfix = &priv->gwca.linkfix_table[gq->index];
linkfix->die_dt = DT_LINKFIX;
rswitch_desc_set_dptr(linkfix, gq->ring_dma);
@@ -398,7 +409,7 @@ static int rswitch_gwca_queue_format(struct net_device *ndev,
err:
if (!gq->dir_tx) {
- for (i--, desc = gq->ring; i >= 0; i--, desc++) {
+ for (i--, desc = gq->tx_ring; i >= 0; i--, desc++) {
dma_addr = rswitch_desc_get_dptr(&desc->desc);
dma_unmap_single(ndev->dev.parent, dma_addr, PKT_BUF_SZ,
DMA_FROM_DEVICE);
@@ -408,9 +419,23 @@ err:
return -ENOMEM;
}
-static int rswitch_gwca_queue_ts_fill(struct net_device *ndev,
- struct rswitch_gwca_queue *gq,
- int start_index, int num)
+static void rswitch_gwca_ts_queue_fill(struct rswitch_private *priv,
+ int start_index, int num)
+{
+ struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue;
+ struct rswitch_ts_desc *desc;
+ int i, index;
+
+ for (i = 0; i < num; i++) {
+ index = (i + start_index) % gq->ring_size;
+ desc = &gq->ts_ring[index];
+ desc->desc.die_dt = DT_FEMPTY_ND | DIE;
+ }
+}
+
+static int rswitch_gwca_queue_ext_ts_fill(struct net_device *ndev,
+ struct rswitch_gwca_queue *gq,
+ int start_index, int num)
{
struct rswitch_device *rdev = netdev_priv(ndev);
struct rswitch_ext_ts_desc *desc;
@@ -419,7 +444,7 @@ static int rswitch_gwca_queue_ts_fill(struct net_device *ndev,
for (i = 0; i < num; i++) {
index = (i + start_index) % gq->ring_size;
- desc = &gq->ts_ring[index];
+ desc = &gq->rx_ring[index];
if (!gq->dir_tx) {
dma_addr = dma_map_single(ndev->dev.parent,
gq->skbs[index]->data, PKT_BUF_SZ,
@@ -443,7 +468,7 @@ err:
if (!gq->dir_tx) {
for (i--; i >= 0; i--) {
index = (i + start_index) % gq->ring_size;
- desc = &gq->ts_ring[index];
+ desc = &gq->rx_ring[index];
dma_addr = rswitch_desc_get_dptr(&desc->desc);
dma_unmap_single(ndev->dev.parent, dma_addr, PKT_BUF_SZ,
DMA_FROM_DEVICE);
@@ -453,25 +478,25 @@ err:
return -ENOMEM;
}
-static int rswitch_gwca_queue_ts_format(struct net_device *ndev,
- struct rswitch_private *priv,
- struct rswitch_gwca_queue *gq)
+static int rswitch_gwca_queue_ext_ts_format(struct net_device *ndev,
+ struct rswitch_private *priv,
+ struct rswitch_gwca_queue *gq)
{
- int tx_ts_ring_size = sizeof(struct rswitch_ext_ts_desc) * gq->ring_size;
+ int ring_size = sizeof(struct rswitch_ext_ts_desc) * gq->ring_size;
struct rswitch_ext_ts_desc *desc;
struct rswitch_desc *linkfix;
int err;
- memset(gq->ts_ring, 0, tx_ts_ring_size);
- err = rswitch_gwca_queue_ts_fill(ndev, gq, 0, gq->ring_size);
+ memset(gq->rx_ring, 0, ring_size);
+ err = rswitch_gwca_queue_ext_ts_fill(ndev, gq, 0, gq->ring_size);
if (err < 0)
return err;
- desc = &gq->ts_ring[gq->ring_size]; /* Last */
+ desc = &gq->rx_ring[gq->ring_size]; /* Last */
rswitch_desc_set_dptr(&desc->desc, gq->ring_dma);
desc->desc.die_dt = DT_LINKFIX;
- linkfix = &priv->linkfix_table[gq->index];
+ linkfix = &priv->gwca.linkfix_table[gq->index];
linkfix->die_dt = DT_LINKFIX;
rswitch_desc_set_dptr(linkfix, gq->ring_dma);
@@ -481,28 +506,31 @@ static int rswitch_gwca_queue_ts_format(struct net_device *ndev,
return 0;
}
-static int rswitch_gwca_desc_alloc(struct rswitch_private *priv)
+static int rswitch_gwca_linkfix_alloc(struct rswitch_private *priv)
{
int i, num_queues = priv->gwca.num_queues;
+ struct rswitch_gwca *gwca = &priv->gwca;
struct device *dev = &priv->pdev->dev;
- priv->linkfix_table_size = sizeof(struct rswitch_desc) * num_queues;
- priv->linkfix_table = dma_alloc_coherent(dev, priv->linkfix_table_size,
- &priv->linkfix_table_dma, GFP_KERNEL);
- if (!priv->linkfix_table)
+ gwca->linkfix_table_size = sizeof(struct rswitch_desc) * num_queues;
+ gwca->linkfix_table = dma_alloc_coherent(dev, gwca->linkfix_table_size,
+ &gwca->linkfix_table_dma, GFP_KERNEL);
+ if (!gwca->linkfix_table)
return -ENOMEM;
for (i = 0; i < num_queues; i++)
- priv->linkfix_table[i].die_dt = DT_EOS;
+ gwca->linkfix_table[i].die_dt = DT_EOS;
return 0;
}
-static void rswitch_gwca_desc_free(struct rswitch_private *priv)
+static void rswitch_gwca_linkfix_free(struct rswitch_private *priv)
{
- if (priv->linkfix_table)
- dma_free_coherent(&priv->pdev->dev, priv->linkfix_table_size,
- priv->linkfix_table, priv->linkfix_table_dma);
- priv->linkfix_table = NULL;
+ struct rswitch_gwca *gwca = &priv->gwca;
+
+ if (gwca->linkfix_table)
+ dma_free_coherent(&priv->pdev->dev, gwca->linkfix_table_size,
+ gwca->linkfix_table, gwca->linkfix_table_dma);
+ gwca->linkfix_table = NULL;
}
static struct rswitch_gwca_queue *rswitch_gwca_get(struct rswitch_private *priv)
@@ -537,8 +565,7 @@ static int rswitch_txdmac_alloc(struct net_device *ndev)
if (!rdev->tx_queue)
return -EBUSY;
- err = rswitch_gwca_queue_alloc(ndev, priv, rdev->tx_queue, true, false,
- TX_RING_SIZE);
+ err = rswitch_gwca_queue_alloc(ndev, priv, rdev->tx_queue, true, TX_RING_SIZE);
if (err < 0) {
rswitch_gwca_put(priv, rdev->tx_queue);
return err;
@@ -572,8 +599,7 @@ static int rswitch_rxdmac_alloc(struct net_device *ndev)
if (!rdev->rx_queue)
return -EBUSY;
- err = rswitch_gwca_queue_alloc(ndev, priv, rdev->rx_queue, false, true,
- RX_RING_SIZE);
+ err = rswitch_gwca_queue_alloc(ndev, priv, rdev->rx_queue, false, RX_RING_SIZE);
if (err < 0) {
rswitch_gwca_put(priv, rdev->rx_queue);
return err;
@@ -595,7 +621,7 @@ static int rswitch_rxdmac_init(struct rswitch_private *priv, int index)
struct rswitch_device *rdev = priv->rdev[index];
struct net_device *ndev = rdev->ndev;
- return rswitch_gwca_queue_ts_format(ndev, priv, rdev->rx_queue);
+ return rswitch_gwca_queue_ext_ts_format(ndev, priv, rdev->rx_queue);
}
static int rswitch_gwca_hw_init(struct rswitch_private *priv)
@@ -618,8 +644,11 @@ static int rswitch_gwca_hw_init(struct rswitch_private *priv)
iowrite32(GWVCC_VEM_SC_TAG, priv->addr + GWVCC);
iowrite32(0, priv->addr + GWTTFC);
- iowrite32(lower_32_bits(priv->linkfix_table_dma), priv->addr + GWDCBAC1);
- iowrite32(upper_32_bits(priv->linkfix_table_dma), priv->addr + GWDCBAC0);
+ iowrite32(lower_32_bits(priv->gwca.linkfix_table_dma), priv->addr + GWDCBAC1);
+ iowrite32(upper_32_bits(priv->gwca.linkfix_table_dma), priv->addr + GWDCBAC0);
+ iowrite32(lower_32_bits(priv->gwca.ts_queue.ring_dma), priv->addr + GWTDCAC10);
+ iowrite32(upper_32_bits(priv->gwca.ts_queue.ring_dma), priv->addr + GWTDCAC00);
+ iowrite32(GWCA_TS_IRQ_BIT, priv->addr + GWTSDCC0);
rswitch_gwca_set_rate_limit(priv, priv->gwca.speed);
for (i = 0; i < RSWITCH_NUM_PORTS; i++) {
@@ -676,7 +705,7 @@ static bool rswitch_rx(struct net_device *ndev, int *quota)
boguscnt = min_t(int, gq->ring_size, *quota);
limit = boguscnt;
- desc = &gq->ts_ring[gq->cur];
+ desc = &gq->rx_ring[gq->cur];
while ((desc->desc.die_dt & DT_MASK) != DT_FEMPTY) {
if (--boguscnt < 0)
break;
@@ -704,14 +733,14 @@ static bool rswitch_rx(struct net_device *ndev, int *quota)
rdev->ndev->stats.rx_bytes += pkt_len;
gq->cur = rswitch_next_queue_index(gq, true, 1);
- desc = &gq->ts_ring[gq->cur];
+ desc = &gq->rx_ring[gq->cur];
}
num = rswitch_get_num_cur_queues(gq);
ret = rswitch_gwca_queue_alloc_skb(gq, gq->dirty, num);
if (ret < 0)
goto err;
- ret = rswitch_gwca_queue_ts_fill(ndev, gq, gq->dirty, num);
+ ret = rswitch_gwca_queue_ext_ts_fill(ndev, gq, gq->dirty, num);
if (ret < 0)
goto err;
gq->dirty = rswitch_next_queue_index(gq, false, num);
@@ -738,7 +767,7 @@ static int rswitch_tx_free(struct net_device *ndev, bool free_txed_only)
for (; rswitch_get_num_cur_queues(gq) > 0;
gq->dirty = rswitch_next_queue_index(gq, false, 1)) {
- desc = &gq->ring[gq->dirty];
+ desc = &gq->tx_ring[gq->dirty];
if (free_txed_only && (desc->desc.die_dt & DT_MASK) != DT_FEMPTY)
break;
@@ -746,15 +775,6 @@ static int rswitch_tx_free(struct net_device *ndev, bool free_txed_only)
size = le16_to_cpu(desc->desc.info_ds) & TX_DS;
skb = gq->skbs[gq->dirty];
if (skb) {
- if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
- struct skb_shared_hwtstamps shhwtstamps;
- struct timespec64 ts;
-
- rswitch_get_timestamp(rdev->priv, &ts);
- memset(&shhwtstamps, 0, sizeof(shhwtstamps));
- shhwtstamps.hwtstamp = timespec64_to_ktime(ts);
- skb_tstamp_tx(skb, &shhwtstamps);
- }
dma_addr = rswitch_desc_get_dptr(&desc->desc);
dma_unmap_single(ndev->dev.parent, dma_addr,
size, DMA_TO_DEVICE);
@@ -880,6 +900,73 @@ static int rswitch_gwca_request_irqs(struct rswitch_private *priv)
return 0;
}
+static void rswitch_ts(struct rswitch_private *priv)
+{
+ struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue;
+ struct rswitch_gwca_ts_info *ts_info, *ts_info2;
+ struct skb_shared_hwtstamps shhwtstamps;
+ struct rswitch_ts_desc *desc;
+ struct timespec64 ts;
+ u32 tag, port;
+ int num;
+
+ desc = &gq->ts_ring[gq->cur];
+ while ((desc->desc.die_dt & DT_MASK) != DT_FEMPTY_ND) {
+ dma_rmb();
+
+ port = TS_DESC_DPN(__le32_to_cpu(desc->desc.dptrl));
+ tag = TS_DESC_TSUN(__le32_to_cpu(desc->desc.dptrl));
+
+ list_for_each_entry_safe(ts_info, ts_info2, &priv->gwca.ts_info_list, list) {
+ if (!(ts_info->port == port && ts_info->tag == tag))
+ continue;
+
+ memset(&shhwtstamps, 0, sizeof(shhwtstamps));
+ ts.tv_sec = __le32_to_cpu(desc->ts_sec);
+ ts.tv_nsec = __le32_to_cpu(desc->ts_nsec & cpu_to_le32(0x3fffffff));
+ shhwtstamps.hwtstamp = timespec64_to_ktime(ts);
+ skb_tstamp_tx(ts_info->skb, &shhwtstamps);
+ dev_consume_skb_irq(ts_info->skb);
+ list_del(&ts_info->list);
+ kfree(ts_info);
+ break;
+ }
+
+ gq->cur = rswitch_next_queue_index(gq, true, 1);
+ desc = &gq->ts_ring[gq->cur];
+ }
+
+ num = rswitch_get_num_cur_queues(gq);
+ rswitch_gwca_ts_queue_fill(priv, gq->dirty, num);
+ gq->dirty = rswitch_next_queue_index(gq, false, num);
+}
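
rswitch_ts() pairs each completed timestamp with its pending skb by the (port, tag) key packed into the descriptor's dptrl word; the TS_DESC_* macros added to rswitch.h further down extract the tag from bits 7:0 and the destination port from bits 17:16. A userspace model of that decode, re-deriving GENMASK() locally for a 32-bit word:

#include <stdint.h>
#include <stdio.h>

#define GENMASK(h, l)	(((~0u) << (l)) & (~0u >> (31 - (h))))

#define TS_DESC_TSUN(dptrl)	((dptrl) & GENMASK(7, 0))
#define TS_DESC_DPN(dptrl)	(((dptrl) & GENMASK(17, 16)) >> 16)

int main(void)
{
	uint32_t dptrl = (1u << 16) | 0x2a;	/* port 1, tag 42 */

	printf("port=%u tag=%u\n", TS_DESC_DPN(dptrl), TS_DESC_TSUN(dptrl));
	return 0;
}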
+
+static irqreturn_t rswitch_gwca_ts_irq(int irq, void *dev_id)
+{
+ struct rswitch_private *priv = dev_id;
+
+ if (ioread32(priv->addr + GWTSDIS) & GWCA_TS_IRQ_BIT) {
+ iowrite32(GWCA_TS_IRQ_BIT, priv->addr + GWTSDIS);
+ rswitch_ts(priv);
+
+ return IRQ_HANDLED;
+ }
+
+ return IRQ_NONE;
+}
+
+static int rswitch_gwca_ts_request_irqs(struct rswitch_private *priv)
+{
+ int irq;
+
+ irq = platform_get_irq_byname(priv->pdev, GWCA_TS_IRQ_RESOURCE_NAME);
+ if (irq < 0)
+ return irq;
+
+ return devm_request_irq(&priv->pdev->dev, irq, rswitch_gwca_ts_irq,
+ 0, GWCA_TS_IRQ_NAME, priv);
+}
+
/* Ethernet TSN Agent block (ETHA) and Ethernet MAC IP block (RMAC) */
static int rswitch_etha_change_mode(struct rswitch_etha *etha,
enum rswitch_etha_mode mode)
@@ -1024,34 +1111,18 @@ static int rswitch_etha_set_access(struct rswitch_etha *etha, bool read,
return ret;
}
-static int rswitch_etha_mii_read(struct mii_bus *bus, int addr, int regnum)
+static int rswitch_etha_mii_read_c45(struct mii_bus *bus, int addr, int devad,
+ int regad)
{
struct rswitch_etha *etha = bus->priv;
- int mode, devad, regad;
-
- mode = regnum & MII_ADDR_C45;
- devad = (regnum >> MII_DEVADDR_C45_SHIFT) & 0x1f;
- regad = regnum & MII_REGADDR_C45_MASK;
-
- /* Not support Clause 22 access method */
- if (!mode)
- return -EOPNOTSUPP;
return rswitch_etha_set_access(etha, true, addr, devad, regad, 0);
}
-static int rswitch_etha_mii_write(struct mii_bus *bus, int addr, int regnum, u16 val)
+static int rswitch_etha_mii_write_c45(struct mii_bus *bus, int addr, int devad,
+ int regad, u16 val)
{
struct rswitch_etha *etha = bus->priv;
- int mode, devad, regad;
-
- mode = regnum & MII_ADDR_C45;
- devad = (regnum >> MII_DEVADDR_C45_SHIFT) & 0x1f;
- regad = regnum & MII_REGADDR_C45_MASK;
-
- /* Not support Clause 22 access method */
- if (!mode)
- return -EOPNOTSUPP;
return rswitch_etha_set_access(etha, false, addr, devad, regad, val);
}
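
The retired hooks had to recover a Clause-45 access from the overloaded regnum encoding: MII_ADDR_C45 in bit 30 marked the access as C45, bits 20:16 held the device address, bits 15:0 the register, and plain Clause-22 requests were rejected with -EOPNOTSUPP. The dedicated read_c45/write_c45 callbacks receive devad and regad directly, so all of that unpacking disappears. A userspace model of the legacy packing, using the constants as they existed in pre-6.3 include/linux/mdio.h:

#include <stdint.h>
#include <stdio.h>

#define MII_ADDR_C45		(1u << 30)
#define MII_DEVADDR_C45_SHIFT	16
#define MII_REGADDR_C45_MASK	0xffffu

int main(void)
{
	/* devad 7 (clause 45 MMD), register 0x20 */
	uint32_t regnum = MII_ADDR_C45 | (7u << MII_DEVADDR_C45_SHIFT) | 0x20;

	printf("c45=%d devad=%u regad=%u\n",
	       !!(regnum & MII_ADDR_C45),
	       (regnum >> MII_DEVADDR_C45_SHIFT) & 0x1f,
	       regnum & MII_REGADDR_C45_MASK);
	return 0;
}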
@@ -1087,33 +1158,25 @@ out:
return port;
}
-/* Call of_node_put(mdio) after done */
-static struct device_node *rswitch_get_mdio_node(struct rswitch_device *rdev)
-{
- struct device_node *port, *mdio;
-
- port = rswitch_get_port_node(rdev);
- if (!port)
- return NULL;
-
- mdio = of_get_child_by_name(port, "mdio");
- of_node_put(port);
-
- return mdio;
-}
-
static int rswitch_etha_get_params(struct rswitch_device *rdev)
{
- struct device_node *port;
+ u32 max_speed;
int err;
- port = rswitch_get_port_node(rdev);
- if (!port)
+ if (!rdev->np_port)
return 0; /* ignored */
- err = of_get_phy_mode(port, &rdev->etha->phy_interface);
- of_node_put(port);
+ err = of_get_phy_mode(rdev->np_port, &rdev->etha->phy_interface);
+ if (err)
+ return err;
+ err = of_property_read_u32(rdev->np_port, "max-speed", &max_speed);
+ if (!err) {
+ rdev->etha->speed = max_speed;
+ return 0;
+ }
+
+ /* If there is no "max-speed" property, use the default speed */
switch (rdev->etha->phy_interface) {
case PHY_INTERFACE_MODE_MII:
rdev->etha->speed = SPEED_100;
@@ -1125,11 +1188,10 @@ static int rswitch_etha_get_params(struct rswitch_device *rdev)
rdev->etha->speed = SPEED_2500;
break;
default:
- err = -EINVAL;
- break;
+ return -EINVAL;
}
- return err;
+ return 0;
}
static int rswitch_mii_register(struct rswitch_device *rdev)
@@ -1145,11 +1207,11 @@ static int rswitch_mii_register(struct rswitch_device *rdev)
mii_bus->name = "rswitch_mii";
sprintf(mii_bus->id, "etha%d", rdev->etha->index);
mii_bus->priv = rdev->etha;
- mii_bus->read = rswitch_etha_mii_read;
- mii_bus->write = rswitch_etha_mii_write;
+ mii_bus->read_c45 = rswitch_etha_mii_read_c45;
+ mii_bus->write_c45 = rswitch_etha_mii_write_c45;
mii_bus->parent = &rdev->priv->pdev->dev;
- mdio_np = rswitch_get_mdio_node(rdev);
+ mdio_np = of_get_child_by_name(rdev->np_port, "mdio");
err = of_mdiobus_register(mii_bus, mdio_np);
if (err < 0) {
mdiobus_free(mii_bus);
@@ -1173,114 +1235,107 @@ static void rswitch_mii_unregister(struct rswitch_device *rdev)
}
}
-static void rswitch_mac_config(struct phylink_config *config,
- unsigned int mode,
- const struct phylink_link_state *state)
+static void rswitch_adjust_link(struct net_device *ndev)
{
-}
+ struct rswitch_device *rdev = netdev_priv(ndev);
+ struct phy_device *phydev = ndev->phydev;
-static void rswitch_mac_link_down(struct phylink_config *config,
- unsigned int mode,
- phy_interface_t interface)
-{
+ /* Current hardware does not support changing the speed at runtime */
+ if (phydev->link != rdev->etha->link) {
+ phy_print_status(phydev);
+ if (phydev->link)
+ phy_power_on(rdev->serdes);
+ else
+ phy_power_off(rdev->serdes);
+
+ rdev->etha->link = phydev->link;
+ }
}
-static void rswitch_mac_link_up(struct phylink_config *config,
- struct phy_device *phydev, unsigned int mode,
- phy_interface_t interface, int speed,
- int duplex, bool tx_pause, bool rx_pause)
+static void rswitch_phy_remove_link_mode(struct rswitch_device *rdev,
+ struct phy_device *phydev)
{
- /* Current hardware cannot change speed at runtime */
-}
+ /* Current hardware does not support changing the speed at runtime */
+ switch (rdev->etha->speed) {
+ case SPEED_2500:
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Full_BIT);
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Full_BIT);
+ break;
+ case SPEED_1000:
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_2500baseX_Full_BIT);
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Full_BIT);
+ break;
+ case SPEED_100:
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_2500baseX_Full_BIT);
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Full_BIT);
+ break;
+ default:
+ break;
+ }
-static const struct phylink_mac_ops rswitch_phylink_ops = {
- .mac_config = rswitch_mac_config,
- .mac_link_down = rswitch_mac_link_down,
- .mac_link_up = rswitch_mac_link_up,
-};
+ phy_set_max_speed(phydev, rdev->etha->speed);
+}
-static int rswitch_phylink_init(struct rswitch_device *rdev)
+static int rswitch_phy_device_init(struct rswitch_device *rdev)
{
- struct device_node *port;
- struct phylink *phylink;
- int err;
+ struct phy_device *phydev;
+ struct device_node *phy;
+ int err = -ENOENT;
- port = rswitch_get_port_node(rdev);
- if (!port)
+ if (!rdev->np_port)
return -ENODEV;
- rdev->phylink_config.dev = &rdev->ndev->dev;
- rdev->phylink_config.type = PHYLINK_NETDEV;
- __set_bit(PHY_INTERFACE_MODE_SGMII, rdev->phylink_config.supported_interfaces);
- __set_bit(PHY_INTERFACE_MODE_USXGMII, rdev->phylink_config.supported_interfaces);
- rdev->phylink_config.mac_capabilities = MAC_100FD | MAC_1000FD | MAC_2500FD;
+ phy = of_parse_phandle(rdev->np_port, "phy-handle", 0);
+ if (!phy)
+ return -ENODEV;
- phylink = phylink_create(&rdev->phylink_config, &port->fwnode,
- rdev->etha->phy_interface, &rswitch_phylink_ops);
- if (IS_ERR(phylink)) {
- err = PTR_ERR(phylink);
+ /* Set phydev->host_interfaces before calling of_phy_connect() to
+ * configure the PHY with the host_interfaces information.
+ */
+ phydev = of_phy_find_device(phy);
+ if (!phydev)
goto out;
- }
+ __set_bit(rdev->etha->phy_interface, phydev->host_interfaces);
- rdev->phylink = phylink;
- err = phylink_of_phy_connect(rdev->phylink, port, rdev->etha->phy_interface);
+ phydev = of_phy_connect(rdev->ndev, phy, rswitch_adjust_link, 0,
+ rdev->etha->phy_interface);
+ if (!phydev)
+ goto out;
+
+ phy_set_max_speed(phydev, SPEED_2500);
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Half_BIT);
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Full_BIT);
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT);
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Half_BIT);
+ rswitch_phy_remove_link_mode(rdev, phydev);
+
+ phy_attached_info(phydev);
+
+ err = 0;
out:
- of_node_put(port);
+ of_node_put(phy);
return err;
}
-static void rswitch_phylink_deinit(struct rswitch_device *rdev)
+static void rswitch_phy_device_deinit(struct rswitch_device *rdev)
{
- rtnl_lock();
- phylink_disconnect_phy(rdev->phylink);
- rtnl_unlock();
- phylink_destroy(rdev->phylink);
+ if (rdev->ndev->phydev) {
+ phy_disconnect(rdev->ndev->phydev);
+ rdev->ndev->phydev = NULL;
+ }
}
static int rswitch_serdes_set_params(struct rswitch_device *rdev)
{
- struct device_node *port = rswitch_get_port_node(rdev);
- struct phy *serdes;
int err;
- serdes = devm_of_phy_get(&rdev->priv->pdev->dev, port, NULL);
- of_node_put(port);
- if (IS_ERR(serdes))
- return PTR_ERR(serdes);
-
- err = phy_set_mode_ext(serdes, PHY_MODE_ETHERNET,
+ err = phy_set_mode_ext(rdev->serdes, PHY_MODE_ETHERNET,
rdev->etha->phy_interface);
if (err < 0)
return err;
- return phy_set_speed(serdes, rdev->etha->speed);
-}
-
-static int rswitch_serdes_init(struct rswitch_device *rdev)
-{
- struct device_node *port = rswitch_get_port_node(rdev);
- struct phy *serdes;
-
- serdes = devm_of_phy_get(&rdev->priv->pdev->dev, port, NULL);
- of_node_put(port);
- if (IS_ERR(serdes))
- return PTR_ERR(serdes);
-
- return phy_init(serdes);
-}
-
-static int rswitch_serdes_deinit(struct rswitch_device *rdev)
-{
- struct device_node *port = rswitch_get_port_node(rdev);
- struct phy *serdes;
-
- serdes = devm_of_phy_get(&rdev->priv->pdev->dev, port, NULL);
- of_node_put(port);
- if (IS_ERR(serdes))
- return PTR_ERR(serdes);
-
- return phy_exit(serdes);
+ return phy_set_speed(rdev->serdes, rdev->etha->speed);
}
static int rswitch_ether_port_init_one(struct rswitch_device *rdev)
@@ -1298,9 +1353,15 @@ static int rswitch_ether_port_init_one(struct rswitch_device *rdev)
if (err < 0)
return err;
- err = rswitch_phylink_init(rdev);
+ err = rswitch_phy_device_init(rdev);
if (err < 0)
- goto err_phylink_init;
+ goto err_phy_device_init;
+
+ rdev->serdes = devm_of_phy_get(&rdev->priv->pdev->dev, rdev->np_port, NULL);
+ if (IS_ERR(rdev->serdes)) {
+ err = PTR_ERR(rdev->serdes);
+ goto err_serdes_phy_get;
+ }
err = rswitch_serdes_set_params(rdev);
if (err < 0)
@@ -1309,9 +1370,10 @@ static int rswitch_ether_port_init_one(struct rswitch_device *rdev)
return 0;
err_serdes_set_params:
- rswitch_phylink_deinit(rdev);
+err_serdes_phy_get:
+ rswitch_phy_device_deinit(rdev);
-err_phylink_init:
+err_phy_device_init:
rswitch_mii_unregister(rdev);
return err;
@@ -1319,7 +1381,7 @@ err_phylink_init:
static void rswitch_ether_port_deinit_one(struct rswitch_device *rdev)
{
- rswitch_phylink_deinit(rdev);
+ rswitch_phy_device_deinit(rdev);
rswitch_mii_unregister(rdev);
}
@@ -1334,7 +1396,7 @@ static int rswitch_ether_port_init_all(struct rswitch_private *priv)
}
rswitch_for_each_enabled_port(priv, i) {
- err = rswitch_serdes_init(priv->rdev[i]);
+ err = phy_init(priv->rdev[i]->serdes);
if (err)
goto err_serdes;
}
@@ -1343,7 +1405,7 @@ static int rswitch_ether_port_init_all(struct rswitch_private *priv)
err_serdes:
rswitch_for_each_enabled_port_continue_reverse(priv, i)
- rswitch_serdes_deinit(priv->rdev[i]);
+ phy_exit(priv->rdev[i]->serdes);
i = RSWITCH_NUM_PORTS;
err_init_one:
@@ -1358,7 +1420,7 @@ static void rswitch_ether_port_deinit_all(struct rswitch_private *priv)
int i;
for (i = 0; i < RSWITCH_NUM_PORTS; i++) {
- rswitch_serdes_deinit(priv->rdev[i]);
+ phy_exit(priv->rdev[i]->serdes);
rswitch_ether_port_deinit_one(priv->rdev[i]);
}
}
@@ -1367,7 +1429,7 @@ static int rswitch_open(struct net_device *ndev)
{
struct rswitch_device *rdev = netdev_priv(ndev);
- phylink_start(rdev->phylink);
+ phy_start(ndev->phydev);
napi_enable(&rdev->napi);
netif_start_queue(ndev);
@@ -1375,19 +1437,32 @@ static int rswitch_open(struct net_device *ndev)
rswitch_enadis_data_irq(rdev->priv, rdev->tx_queue->index, true);
rswitch_enadis_data_irq(rdev->priv, rdev->rx_queue->index, true);
+ iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDIE);
+
return 0;
};
static int rswitch_stop(struct net_device *ndev)
{
struct rswitch_device *rdev = netdev_priv(ndev);
+ struct rswitch_gwca_ts_info *ts_info, *ts_info2;
netif_tx_stop_all_queues(ndev);
+ iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDID);
+
+ list_for_each_entry_safe(ts_info, ts_info2, &rdev->priv->gwca.ts_info_list, list) {
+ if (ts_info->port != rdev->port)
+ continue;
+ dev_kfree_skb_irq(ts_info->skb);
+ list_del(&ts_info->list);
+ kfree(ts_info);
+ }
+
rswitch_enadis_data_irq(rdev->priv, rdev->tx_queue->index, false);
rswitch_enadis_data_irq(rdev->priv, rdev->rx_queue->index, false);
- phylink_stop(rdev->phylink);
+ phy_stop(ndev->phydev);
napi_disable(&rdev->napi);
return 0;
@@ -1416,17 +1491,31 @@ static netdev_tx_t rswitch_start_xmit(struct sk_buff *skb, struct net_device *nd
}
gq->skbs[gq->cur] = skb;
- desc = &gq->ring[gq->cur];
+ desc = &gq->tx_ring[gq->cur];
rswitch_desc_set_dptr(&desc->desc, dma_addr);
desc->desc.info_ds = cpu_to_le16(skb->len);
desc->info1 = cpu_to_le64(INFO1_DV(BIT(rdev->etha->index)) | INFO1_FMT);
if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
+ struct rswitch_gwca_ts_info *ts_info;
+
+ ts_info = kzalloc(sizeof(*ts_info), GFP_ATOMIC);
+ if (!ts_info) {
+ dma_unmap_single(ndev->dev.parent, dma_addr, skb->len, DMA_TO_DEVICE);
+ return -ENOMEM;
+ }
+
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
rdev->ts_tag++;
desc->info1 |= cpu_to_le64(INFO1_TSUN(rdev->ts_tag) | INFO1_TXC);
+
+ ts_info->skb = skb_get(skb);
+ ts_info->port = rdev->port;
+ ts_info->tag = rdev->ts_tag;
+ list_add_tail(&ts_info->list, &rdev->priv->gwca.ts_info_list);
+
+ skb_tx_timestamp(skb);
}
- skb_tx_timestamp(skb);
dma_wmb();
@@ -1515,8 +1604,6 @@ static int rswitch_hwstamp_set(struct net_device *ndev, struct ifreq *req)
static int rswitch_eth_ioctl(struct net_device *ndev, struct ifreq *req, int cmd)
{
- struct rswitch_device *rdev = netdev_priv(ndev);
-
if (!netif_running(ndev))
return -EINVAL;
@@ -1526,7 +1613,7 @@ static int rswitch_eth_ioctl(struct net_device *ndev, struct ifreq *req, int cmd
case SIOCSHWTSTAMP:
return rswitch_hwstamp_set(ndev, req);
default:
- return phylink_mii_ioctl(rdev->phylink, req, cmd);
+ return phy_mii_ioctl(ndev->phydev, req, cmd);
}
}
@@ -1581,7 +1668,6 @@ static int rswitch_device_alloc(struct rswitch_private *priv, int index)
{
struct platform_device *pdev = priv->pdev;
struct rswitch_device *rdev;
- struct device_node *port;
struct net_device *ndev;
int err;
@@ -1610,10 +1696,10 @@ static int rswitch_device_alloc(struct rswitch_private *priv, int index)
netif_napi_add(ndev, &rdev->napi, rswitch_poll);
- port = rswitch_get_port_node(rdev);
- rdev->disabled = !port;
- err = of_get_ethdev_address(port, ndev);
- of_node_put(port);
+ rdev->np_port = rswitch_get_port_node(rdev);
+ rdev->disabled = !rdev->np_port;
+ err = of_get_ethdev_address(rdev->np_port, ndev);
+ of_node_put(rdev->np_port);
if (err) {
if (is_valid_ether_addr(rdev->etha->mac_addr))
eth_hw_addr_set(ndev, rdev->etha->mac_addr);
@@ -1679,10 +1765,17 @@ static int rswitch_init(struct rswitch_private *priv)
if (err < 0)
return err;
- err = rswitch_gwca_desc_alloc(priv);
+ err = rswitch_gwca_linkfix_alloc(priv);
if (err < 0)
return -ENOMEM;
+ err = rswitch_gwca_ts_queue_alloc(priv);
+ if (err < 0)
+ goto err_ts_queue_alloc;
+
+ rswitch_gwca_ts_queue_fill(priv, 0, TS_RING_SIZE);
+ INIT_LIST_HEAD(&priv->gwca.ts_info_list);
+
for (i = 0; i < RSWITCH_NUM_PORTS; i++) {
err = rswitch_device_alloc(priv, i);
if (err < 0) {
@@ -1703,6 +1796,10 @@ static int rswitch_init(struct rswitch_private *priv)
if (err < 0)
goto err_gwca_request_irq;
+ err = rswitch_gwca_ts_request_irqs(priv);
+ if (err < 0)
+ goto err_gwca_ts_request_irq;
+
err = rswitch_gwca_hw_init(priv);
if (err < 0)
goto err_gwca_hw_init;
@@ -1733,6 +1830,7 @@ err_ether_port_init_all:
rswitch_gwca_hw_deinit(priv);
err_gwca_hw_init:
+err_gwca_ts_request_irq:
err_gwca_request_irq:
rcar_gen4_ptp_unregister(priv->ptp_priv);
@@ -1741,7 +1839,10 @@ err_ptp_register:
rswitch_device_free(priv, i);
err_device_alloc:
- rswitch_gwca_desc_free(priv);
+ rswitch_gwca_ts_queue_free(priv);
+
+err_ts_queue_alloc:
+ rswitch_gwca_linkfix_free(priv);
return err;
}
@@ -1814,13 +1915,14 @@ static void rswitch_deinit(struct rswitch_private *priv)
for (i = 0; i < RSWITCH_NUM_PORTS; i++) {
struct rswitch_device *rdev = priv->rdev[i];
- rswitch_serdes_deinit(rdev);
+ phy_exit(priv->rdev[i]->serdes);
rswitch_ether_port_deinit_one(rdev);
unregister_netdev(rdev->ndev);
rswitch_device_free(priv, i);
}
- rswitch_gwca_desc_free(priv);
+ rswitch_gwca_ts_queue_free(priv);
+ rswitch_gwca_linkfix_free(priv);
rswitch_clock_disable(priv);
}
diff --git a/drivers/net/ethernet/renesas/rswitch.h b/drivers/net/ethernet/renesas/rswitch.h
index 49efb0f31c77..27d3d38c055f 100644
--- a/drivers/net/ethernet/renesas/rswitch.h
+++ b/drivers/net/ethernet/renesas/rswitch.h
@@ -27,6 +27,7 @@
#define TX_RING_SIZE 1024
#define RX_RING_SIZE 1024
+#define TS_RING_SIZE (TX_RING_SIZE * RSWITCH_NUM_PORTS)
#define PKT_BUF_SZ 1584
#define RSWITCH_ALIGN 128
@@ -49,6 +50,10 @@
#define AGENT_INDEX_GWCA 3
#define GWRO RSWITCH_GWCA0_OFFSET
+#define GWCA_TS_IRQ_RESOURCE_NAME "gwca0_rxts0"
+#define GWCA_TS_IRQ_NAME "rswitch: gwca0_rxts0"
+#define GWCA_TS_IRQ_BIT BIT(0)
+
#define FWRO 0
#define TPRO RSWITCH_TOP_OFFSET
#define CARO RSWITCH_COMA_OFFSET
@@ -831,7 +836,7 @@ enum DIE_DT {
DT_FSINGLE = 0x80,
DT_FSTART = 0x90,
DT_FMID = 0xa0,
- DT_FEND = 0xb8,
+ DT_FEND = 0xb0,
/* Chain control */
DT_LEMPTY = 0xc0,
@@ -843,7 +848,7 @@ enum DIE_DT {
DT_FEMPTY = 0x40,
DT_FEMPTY_IS = 0x10,
DT_FEMPTY_IC = 0x20,
- DT_FEMPTY_ND = 0x38,
+ DT_FEMPTY_ND = 0x30,
DT_FEMPTY_START = 0x50,
DT_FEMPTY_MID = 0x60,
DT_FEMPTY_END = 0x70,
@@ -865,6 +870,12 @@ enum DIE_DT {
/* For reception */
#define INFO1_SPN(port) ((u64)(port) << 36ULL)
+/* For timestamp descriptor in dptrl (Byte 4 to 7) */
+#define TS_DESC_TSUN(dptrl) ((dptrl) & GENMASK(7, 0))
+#define TS_DESC_SPN(dptrl) (((dptrl) & GENMASK(10, 8)) >> 8)
+#define TS_DESC_DPN(dptrl) (((dptrl) & GENMASK(17, 16)) >> 16)
+#define TS_DESC_TN(dptrl) ((dptrl) & BIT(24))
+
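The TS_DESC_* masks above unpack the low dptr word (bytes 4 to 7) of a timestamp descriptor. A short decoding sketch, assuming the descriptor exposes that word as a __le32 dptrl member as elsewhere in this driver (the field-name expansions in the comments are inferred from the macro names):

	u32 dptrl = le32_to_cpu(desc->dptrl);
	u8 tsun = TS_DESC_TSUN(dptrl);	/* timestamp unique number, bits 7:0 */
	u8 spn = TS_DESC_SPN(dptrl);	/* source port number, bits 10:8 */
	u8 dpn = TS_DESC_DPN(dptrl);	/* destination port number, bits 17:16 */
	bool tn = !!TS_DESC_TN(dptrl);	/* TN flag, bit 24 */

The TSUN value is what pairs a hardware timestamp with the tag recorded in rswitch_gwca_ts_info at transmit time.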
struct rswitch_desc {
__le16 info_ds; /* Descriptor size */
u8 die_dt; /* Descriptor interrupt enable and type */
@@ -911,27 +922,43 @@ struct rswitch_etha {
* name, this driver calls "queue".
*/
struct rswitch_gwca_queue {
- int index;
- bool dir_tx;
- bool gptp;
union {
- struct rswitch_ext_desc *ring;
- struct rswitch_ext_ts_desc *ts_ring;
+ struct rswitch_ext_desc *tx_ring;
+ struct rswitch_ext_ts_desc *rx_ring;
+ struct rswitch_ts_desc *ts_ring;
};
+
+ /* Common */
dma_addr_t ring_dma;
int ring_size;
int cur;
int dirty;
- struct sk_buff **skbs;
+ /* For [rt]x_ring */
+ int index;
+ bool dir_tx;
+ struct sk_buff **skbs;
struct net_device *ndev; /* queue to ndev for irq */
};
+struct rswitch_gwca_ts_info {
+ struct sk_buff *skb;
+ struct list_head list;
+
+ int port;
+ u8 tag;
+};
+
#define RSWITCH_NUM_IRQ_REGS (RSWITCH_MAX_NUM_QUEUES / BITS_PER_TYPE(u32))
struct rswitch_gwca {
int index;
+ struct rswitch_desc *linkfix_table;
+ dma_addr_t linkfix_table_dma;
+ u32 linkfix_table_size;
struct rswitch_gwca_queue *queues;
int num_queues;
+ struct rswitch_gwca_queue ts_queue;
+ struct list_head ts_info_list;
DECLARE_BITMAP(used, RSWITCH_MAX_NUM_QUEUES);
u32 tx_irq_bits[RSWITCH_NUM_IRQ_REGS];
u32 rx_irq_bits[RSWITCH_NUM_IRQ_REGS];
@@ -943,8 +970,6 @@ struct rswitch_device {
struct rswitch_private *priv;
struct net_device *ndev;
struct napi_struct napi;
- struct phylink *phylink;
- struct phylink_config phylink_config;
void __iomem *addr;
struct rswitch_gwca_queue *tx_queue;
struct rswitch_gwca_queue *rx_queue;
@@ -953,6 +978,8 @@ struct rswitch_device {
int port;
struct rswitch_etha *etha;
+ struct device_node *np_port;
+ struct phy *serdes;
};
struct rswitch_mfwd_mac_table_entry {
@@ -969,9 +996,6 @@ struct rswitch_private {
struct platform_device *pdev;
void __iomem *addr;
struct rcar_gen4_ptp_private *ptp_priv;
- struct rswitch_desc *linkfix_table;
- dma_addr_t linkfix_table_dma;
- u32 linkfix_table_size;
struct rswitch_device *rdev[RSWITCH_NUM_PORTS];
diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
index 71a499113308..ed17163d7811 100644
--- a/drivers/net/ethernet/renesas/sh_eth.c
+++ b/drivers/net/ethernet/renesas/sh_eth.c
@@ -3044,23 +3044,46 @@ static int sh_mdio_release(struct sh_eth_private *mdp)
return 0;
}
-static int sh_mdiobb_read(struct mii_bus *bus, int phy, int reg)
+static int sh_mdiobb_read_c22(struct mii_bus *bus, int phy, int reg)
{
int res;
pm_runtime_get_sync(bus->parent);
- res = mdiobb_read(bus, phy, reg);
+ res = mdiobb_read_c22(bus, phy, reg);
pm_runtime_put(bus->parent);
return res;
}
-static int sh_mdiobb_write(struct mii_bus *bus, int phy, int reg, u16 val)
+static int sh_mdiobb_write_c22(struct mii_bus *bus, int phy, int reg, u16 val)
{
int res;
pm_runtime_get_sync(bus->parent);
- res = mdiobb_write(bus, phy, reg, val);
+ res = mdiobb_write_c22(bus, phy, reg, val);
+ pm_runtime_put(bus->parent);
+
+ return res;
+}
+
+static int sh_mdiobb_read_c45(struct mii_bus *bus, int phy, int devad, int reg)
+{
+ int res;
+
+ pm_runtime_get_sync(bus->parent);
+ res = mdiobb_read_c45(bus, phy, devad, reg);
+ pm_runtime_put(bus->parent);
+
+ return res;
+}
+
+static int sh_mdiobb_write_c45(struct mii_bus *bus, int phy, int devad,
+ int reg, u16 val)
+{
+ int res;
+
+ pm_runtime_get_sync(bus->parent);
+ res = mdiobb_write_c45(bus, phy, devad, reg, val);
pm_runtime_put(bus->parent);
return res;
@@ -3091,8 +3114,10 @@ static int sh_mdio_init(struct sh_eth_private *mdp,
return -ENOMEM;
/* Wrap accessors with Runtime PM-aware ops */
- mdp->mii_bus->read = sh_mdiobb_read;
- mdp->mii_bus->write = sh_mdiobb_write;
+ mdp->mii_bus->read = sh_mdiobb_read_c22;
+ mdp->mii_bus->write = sh_mdiobb_write_c22;
+ mdp->mii_bus->read_c45 = sh_mdiobb_read_c45;
+ mdp->mii_bus->write_c45 = sh_mdiobb_write_c45;
/* Hook up MII support for ethtool */
mdp->mii_bus->name = "sh_mii";
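
With the bitbang accessors split, the bus now advertises Clause 45 support through dedicated read_c45/write_c45 ops instead of the old MII_ADDR_C45 flag encoded into the register number. A consumer-side sketch using the generic mdiobus helpers (reset_pma() and the register choice are illustrative):

#include <linux/mdio.h>

/* Sketch: read a C45 status register, then reset the PMA/PMD MMD. */
static int reset_pma(struct mii_bus *bus, int phy_addr)
{
	int val = mdiobus_c45_read(bus, phy_addr, MDIO_MMD_PMAPMD, MDIO_STAT1);

	if (val < 0)
		return val;
	return mdiobus_c45_write(bus, phy_addr, MDIO_MMD_PMAPMD, MDIO_CTRL1,
				 MDIO_CTRL1_RESET);
}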
diff --git a/drivers/net/ethernet/samsung/sxgbe/sxgbe_mdio.c b/drivers/net/ethernet/samsung/sxgbe/sxgbe_mdio.c
index fceb6d637235..0227223c06fa 100644
--- a/drivers/net/ethernet/samsung/sxgbe/sxgbe_mdio.c
+++ b/drivers/net/ethernet/samsung/sxgbe/sxgbe_mdio.c
@@ -50,12 +50,12 @@ static void sxgbe_mdio_ctrl_data(struct sxgbe_priv_data *sp, u32 cmd,
}
static void sxgbe_mdio_c45(struct sxgbe_priv_data *sp, u32 cmd, int phyaddr,
- int phyreg, u16 phydata)
+ int devad, int phyreg, u16 phydata)
{
u32 reg;
/* set mdio address register */
- reg = ((phyreg >> 16) & 0x1f) << 21;
+ reg = (devad & 0x1f) << 21;
reg |= (phyaddr << 16) | (phyreg & 0xffff);
writel(reg, sp->ioaddr + sp->hw->mii.addr);
@@ -76,8 +76,8 @@ static void sxgbe_mdio_c22(struct sxgbe_priv_data *sp, u32 cmd, int phyaddr,
sxgbe_mdio_ctrl_data(sp, cmd, phydata);
}
-static int sxgbe_mdio_access(struct sxgbe_priv_data *sp, u32 cmd, int phyaddr,
- int phyreg, u16 phydata)
+static int sxgbe_mdio_access_c22(struct sxgbe_priv_data *sp, u32 cmd,
+ int phyaddr, int phyreg, u16 phydata)
{
const struct mii_regs *mii = &sp->hw->mii;
int rc;
@@ -86,33 +86,46 @@ static int sxgbe_mdio_access(struct sxgbe_priv_data *sp, u32 cmd, int phyaddr,
if (rc < 0)
return rc;
- if (phyreg & MII_ADDR_C45) {
- sxgbe_mdio_c45(sp, cmd, phyaddr, phyreg, phydata);
- } else {
- /* Ports 0-3 only support C22. */
- if (phyaddr >= 4)
- return -ENODEV;
+ /* Ports 0-3 only support C22. */
+ if (phyaddr >= 4)
+ return -ENODEV;
- sxgbe_mdio_c22(sp, cmd, phyaddr, phyreg, phydata);
- }
+ sxgbe_mdio_c22(sp, cmd, phyaddr, phyreg, phydata);
+
+ return sxgbe_mdio_busy_wait(sp->ioaddr, mii->data);
+}
+
+static int sxgbe_mdio_access_c45(struct sxgbe_priv_data *sp, u32 cmd,
+ int phyaddr, int devad, int phyreg,
+ u16 phydata)
+{
+ const struct mii_regs *mii = &sp->hw->mii;
+ int rc;
+
+ rc = sxgbe_mdio_busy_wait(sp->ioaddr, mii->data);
+ if (rc < 0)
+ return rc;
+
+ sxgbe_mdio_c45(sp, cmd, phyaddr, devad, phyreg, phydata);
return sxgbe_mdio_busy_wait(sp->ioaddr, mii->data);
}
/**
- * sxgbe_mdio_read
+ * sxgbe_mdio_read_c22
* @bus: points to the mii_bus structure
* @phyaddr: address of phy port
* @phyreg: address of the register within the phy
- * Description: this function used for C45 and C22 MDIO Read
+ * Description: this function is used for C22 MDIO read
*/
-static int sxgbe_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
+static int sxgbe_mdio_read_c22(struct mii_bus *bus, int phyaddr, int phyreg)
{
struct net_device *ndev = bus->priv;
struct sxgbe_priv_data *priv = netdev_priv(ndev);
int rc;
- rc = sxgbe_mdio_access(priv, SXGBE_SMA_READ_CMD, phyaddr, phyreg, 0);
+ rc = sxgbe_mdio_access_c22(priv, SXGBE_SMA_READ_CMD, phyaddr,
+ phyreg, 0);
if (rc < 0)
return rc;
@@ -120,21 +133,63 @@ static int sxgbe_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
}
/**
- * sxgbe_mdio_write
+ * sxgbe_mdio_read_c45
+ * @bus: points to the mii_bus structure
+ * @phyaddr: address of phy port
+ * @devad: device (MMD) address
+ * @phyreg: address of the register within the phy
+ * Description: this function is used for C45 MDIO read
+ */
+static int sxgbe_mdio_read_c45(struct mii_bus *bus, int phyaddr, int devad,
+ int phyreg)
+{
+ struct net_device *ndev = bus->priv;
+ struct sxgbe_priv_data *priv = netdev_priv(ndev);
+ int rc;
+
+ rc = sxgbe_mdio_access_c45(priv, SXGBE_SMA_READ_CMD, phyaddr,
+ devad, phyreg, 0);
+ if (rc < 0)
+ return rc;
+
+ return readl(priv->ioaddr + priv->hw->mii.data) & 0xffff;
+}
+
+/**
+ * sxgbe_mdio_write_c22
+ * @bus: points to the mii_bus structure
+ * @phyaddr: address of phy port
+ * @phyreg: address of phy registers
+ * @phydata: data to be written into phy register
+ * Description: this function is used for C22 MDIO write
+ */
+static int sxgbe_mdio_write_c22(struct mii_bus *bus, int phyaddr, int phyreg,
+ u16 phydata)
+{
+ struct net_device *ndev = bus->priv;
+ struct sxgbe_priv_data *priv = netdev_priv(ndev);
+
+ return sxgbe_mdio_access_c22(priv, SXGBE_SMA_WRITE_CMD, phyaddr, phyreg,
+ phydata);
+}
+
+/**
+ * sxgbe_mdio_write_c45
* @bus: points to the mii_bus structure
* @phyaddr: address of phy port
* @phyreg: address of phy registers
+ * @devad: device (MMD) address
* @phydata: data to be written into phy register
- * Description: this function is used for C45 and C22 MDIO write
+ * Description: this function is used for C45 MDIO write
*/
-static int sxgbe_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,
- u16 phydata)
+static int sxgbe_mdio_write_c45(struct mii_bus *bus, int phyaddr, int devad,
+ int phyreg, u16 phydata)
{
struct net_device *ndev = bus->priv;
struct sxgbe_priv_data *priv = netdev_priv(ndev);
- return sxgbe_mdio_access(priv, SXGBE_SMA_WRITE_CMD, phyaddr, phyreg,
- phydata);
+ return sxgbe_mdio_access_c45(priv, SXGBE_SMA_WRITE_CMD, phyaddr,
+ devad, phyreg, phydata);
}
int sxgbe_mdio_register(struct net_device *ndev)
@@ -161,8 +216,10 @@ int sxgbe_mdio_register(struct net_device *ndev)
/* assign mii bus fields */
mdio_bus->name = "sxgbe";
- mdio_bus->read = &sxgbe_mdio_read;
- mdio_bus->write = &sxgbe_mdio_write;
+ mdio_bus->read = sxgbe_mdio_read_c22;
+ mdio_bus->write = sxgbe_mdio_write_c22;
+ mdio_bus->read_c45 = sxgbe_mdio_read_c45;
+ mdio_bus->write_c45 = sxgbe_mdio_write_c45;
snprintf(mdio_bus->id, MII_BUS_ID_SIZE, "%s-%x",
mdio_bus->name, priv->plat->bus_id);
mdio_bus->priv = ndev;
diff --git a/drivers/net/ethernet/sfc/Kconfig b/drivers/net/ethernet/sfc/Kconfig
index 0950e6b0508f..4af36ba8906b 100644
--- a/drivers/net/ethernet/sfc/Kconfig
+++ b/drivers/net/ethernet/sfc/Kconfig
@@ -22,6 +22,7 @@ config SFC
depends on PTP_1588_CLOCK_OPTIONAL
select MDIO
select CRC32
+ select NET_DEVLINK
help
This driver supports 10/40-gigabit Ethernet cards based on
the Solarflare SFC9100-family controllers.
diff --git a/drivers/net/ethernet/sfc/Makefile b/drivers/net/ethernet/sfc/Makefile
index 712a48d00069..55b9c73cd8ef 100644
--- a/drivers/net/ethernet/sfc/Makefile
+++ b/drivers/net/ethernet/sfc/Makefile
@@ -6,7 +6,8 @@ sfc-y += efx.o efx_common.o efx_channels.o nic.o \
mcdi.o mcdi_port.o mcdi_port_common.o \
mcdi_functions.o mcdi_filters.o mcdi_mon.o \
ef100.o ef100_nic.o ef100_netdev.o \
- ef100_ethtool.o ef100_rx.o ef100_tx.o
+ ef100_ethtool.o ef100_rx.o ef100_tx.o \
+ efx_devlink.o
sfc-$(CONFIG_SFC_MTD) += mtd.o
sfc-$(CONFIG_SFC_SRIOV) += sriov.o ef10_sriov.o ef100_sriov.o ef100_rep.o \
mae.o tc.o tc_bindings.o tc_counters.o
diff --git a/drivers/net/ethernet/sfc/ef100_netdev.c b/drivers/net/ethernet/sfc/ef100_netdev.c
index ddcc325ed570..d916877b5a9a 100644
--- a/drivers/net/ethernet/sfc/ef100_netdev.c
+++ b/drivers/net/ethernet/sfc/ef100_netdev.c
@@ -24,6 +24,7 @@
#include "rx_common.h"
#include "ef100_sriov.h"
#include "tc_bindings.h"
+#include "efx_devlink.h"
static void ef100_update_name(struct efx_nic *efx)
{
@@ -332,9 +333,11 @@ void ef100_remove_netdev(struct efx_probe_data *probe_data)
efx_ef100_pci_sriov_disable(efx, true);
#endif
+ efx_fini_devlink_lock(efx);
ef100_unregister_netdev(efx);
#ifdef CONFIG_SFC_SRIOV
+ ef100_pf_unset_devlink_port(efx);
efx_fini_tc(efx);
#endif
@@ -345,6 +348,8 @@ void ef100_remove_netdev(struct efx_probe_data *probe_data)
kfree(efx->phy_data);
efx->phy_data = NULL;
+ efx_fini_devlink_and_unlock(efx);
+
free_netdev(efx->net_dev);
efx->net_dev = NULL;
efx->state = STATE_PROBED;
@@ -354,6 +359,7 @@ int ef100_probe_netdev(struct efx_probe_data *probe_data)
{
struct efx_nic *efx = &probe_data->efx;
struct efx_probe_data **probe_ptr;
+ struct ef100_nic_data *nic_data;
struct net_device *net_dev;
int rc;
@@ -405,6 +411,20 @@ int ef100_probe_netdev(struct efx_probe_data *probe_data)
/* Don't fail init if RSS setup doesn't work. */
efx_mcdi_push_default_indir_table(efx, efx->n_rx_channels);
+ nic_data = efx->nic_data;
+ rc = ef100_get_mac_address(efx, net_dev->perm_addr, CLIENT_HANDLE_SELF,
+ efx->type->is_vf);
+ if (rc)
+ return rc;
+ /* Assign MAC address */
+ eth_hw_addr_set(net_dev, net_dev->perm_addr);
+ ether_addr_copy(nic_data->port_id, net_dev->perm_addr);
+
+ /* devlink creation, registration and lock */
+ rc = efx_probe_devlink_and_lock(efx);
+ if (rc)
+ pci_info(efx->pci_dev, "devlink registration failed");
+
rc = ef100_register_netdev(efx);
if (rc)
goto fail;
@@ -413,6 +433,9 @@ int ef100_probe_netdev(struct efx_probe_data *probe_data)
rc = ef100_probe_netdev_pf(efx);
if (rc)
goto fail;
+#ifdef CONFIG_SFC_SRIOV
+ ef100_pf_set_devlink_port(efx);
+#endif
}
efx->netdev_notifier.notifier_call = ef100_netdev_event;
@@ -423,6 +446,13 @@ int ef100_probe_netdev(struct efx_probe_data *probe_data)
goto fail;
}
+ efx_probe_devlink_unlock(efx);
+ return rc;
fail:
+#ifdef CONFIG_SFC_SRIOV
+ /* remove devlink port if it exists */
+ ef100_pf_unset_devlink_port(efx);
+#endif
+ efx_probe_devlink_unlock(efx);
return rc;
}
diff --git a/drivers/net/ethernet/sfc/ef100_nic.c b/drivers/net/ethernet/sfc/ef100_nic.c
index ad686c671ab8..4dc643b0d2db 100644
--- a/drivers/net/ethernet/sfc/ef100_nic.c
+++ b/drivers/net/ethernet/sfc/ef100_nic.c
@@ -130,23 +130,34 @@ static void ef100_mcdi_reboot_detected(struct efx_nic *efx)
/* MCDI calls
*/
-static int ef100_get_mac_address(struct efx_nic *efx, u8 *mac_address)
+int ef100_get_mac_address(struct efx_nic *efx, u8 *mac_address,
+ int client_handle, bool empty_ok)
{
- MCDI_DECLARE_BUF(outbuf, MC_CMD_GET_MAC_ADDRESSES_OUT_LEN);
+ MCDI_DECLARE_BUF(outbuf, MC_CMD_GET_CLIENT_MAC_ADDRESSES_OUT_LEN(1));
+ MCDI_DECLARE_BUF(inbuf, MC_CMD_GET_CLIENT_MAC_ADDRESSES_IN_LEN);
size_t outlen;
int rc;
BUILD_BUG_ON(MC_CMD_GET_MAC_ADDRESSES_IN_LEN != 0);
+ MCDI_SET_DWORD(inbuf, GET_CLIENT_MAC_ADDRESSES_IN_CLIENT_HANDLE,
+ client_handle);
- rc = efx_mcdi_rpc(efx, MC_CMD_GET_MAC_ADDRESSES, NULL, 0,
- outbuf, sizeof(outbuf), &outlen);
+ rc = efx_mcdi_rpc(efx, MC_CMD_GET_CLIENT_MAC_ADDRESSES, inbuf,
+ sizeof(inbuf), outbuf, sizeof(outbuf), &outlen);
if (rc)
return rc;
- if (outlen < MC_CMD_GET_MAC_ADDRESSES_OUT_LEN)
- return -EIO;
- ether_addr_copy(mac_address,
- MCDI_PTR(outbuf, GET_MAC_ADDRESSES_OUT_MAC_ADDR_BASE));
+ if (outlen >= MC_CMD_GET_CLIENT_MAC_ADDRESSES_OUT_LEN(1)) {
+ ether_addr_copy(mac_address,
+ MCDI_PTR(outbuf, GET_CLIENT_MAC_ADDRESSES_OUT_MAC_ADDRS));
+ } else if (empty_ok) {
+ pci_warn(efx->pci_dev,
+ "No MAC address provisioned for client ID %#x.\n",
+ client_handle);
+ eth_zero_addr(mac_address);
+ } else {
+ return -ENOENT;
+ }
return 0;
}
@@ -388,14 +399,14 @@ static int ef100_filter_table_up(struct efx_nic *efx)
* filter insertion will need to take the lock for read.
*/
up_write(&efx->filter_sem);
-#ifdef CONFIG_SFC_SRIOV
- rc = efx_tc_insert_rep_filters(efx);
+ if (IS_ENABLED(CONFIG_SFC_SRIOV))
+ rc = efx_tc_insert_rep_filters(efx);
+
/* Rep filter failure is nonfatal */
if (rc)
netif_warn(efx, drv, efx->net_dev,
"Failed to insert representor filters, rc %d\n",
rc);
-#endif
return 0;
fail_vlan0:
@@ -408,9 +419,8 @@ fail_unspec:
static void ef100_filter_table_down(struct efx_nic *efx)
{
-#ifdef CONFIG_SFC_SRIOV
- efx_tc_remove_rep_filters(efx);
-#endif
+ if (IS_ENABLED(CONFIG_SFC_SRIOV))
+ efx_tc_remove_rep_filters(efx);
down_write(&efx->filter_sem);
efx_mcdi_filter_del_vlan(efx, 0);
efx_mcdi_filter_del_vlan(efx, EFX_FILTER_VID_UNSPEC);
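
Converting the #ifdef blocks above to IS_ENABLED() means the SRIOV paths are parsed and type-checked in every build while still being constant-folded away when CONFIG_SFC_SRIOV is off. The general idiom (CONFIG_FOO and do_foo() are placeholders):

#ifdef CONFIG_FOO
	do_foo();		/* not even parsed when CONFIG_FOO is off */
#endif

	if (IS_ENABLED(CONFIG_FOO))	/* expands to a constant 0 or 1 */
		do_foo();		/* dead-code-eliminated, but always compiled */

The same reasoning applies to the .sriov_configure conversion later in this patch.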
@@ -726,7 +736,6 @@ static unsigned int efx_ef100_recycle_ring_size(const struct efx_nic *efx)
return 10 * EFX_RECYCLE_RING_SIZE_10G;
}
-#ifdef CONFIG_SFC_SRIOV
static int efx_ef100_get_base_mport(struct efx_nic *efx)
{
struct ef100_nic_data *nic_data = efx->nic_data;
@@ -736,7 +745,7 @@ static int efx_ef100_get_base_mport(struct efx_nic *efx)
/* Construct mport selector for "physical network port" */
efx_mae_mport_wire(efx, &selector);
/* Look up actual mport ID */
- rc = efx_mae_lookup_mport(efx, selector, &id);
+ rc = efx_mae_fw_lookup_mport(efx, selector, &id);
if (rc)
return rc;
/* The ID should always fit in 16 bits, because that's how wide the
@@ -747,9 +756,21 @@ static int efx_ef100_get_base_mport(struct efx_nic *efx)
id);
nic_data->base_mport = id;
nic_data->have_mport = true;
+
+ /* Construct mport selector for "calling PF" */
+ efx_mae_mport_uplink(efx, &selector);
+ /* Look up actual mport ID */
+ rc = efx_mae_fw_lookup_mport(efx, selector, &id);
+ if (rc)
+ return rc;
+ if (id >> 16)
+ netif_warn(efx, probe, efx->net_dev, "Bad own m-port id %#x\n",
+ id);
+ nic_data->own_mport = id;
+ nic_data->have_own_mport = true;
+
return 0;
}
-#endif
static int compare_versions(const char *a, const char *b)
{
@@ -1098,23 +1119,42 @@ fail:
return rc;
}
+/* MCDI commands normally relate to the device issuing them. This function
+ * looks up the client handle needed to issue an MCDI command on behalf of
+ * another device, mainly PFs setting things for VFs.
+ */
+int efx_ef100_lookup_client_id(struct efx_nic *efx, efx_qword_t pciefn, u32 *id)
+{
+ MCDI_DECLARE_BUF(outbuf, MC_CMD_GET_CLIENT_HANDLE_OUT_LEN);
+ MCDI_DECLARE_BUF(inbuf, MC_CMD_GET_CLIENT_HANDLE_IN_LEN);
+ u64 pciefn_flat = le64_to_cpu(pciefn.u64[0]);
+ size_t outlen;
+ int rc;
+
+ MCDI_SET_DWORD(inbuf, GET_CLIENT_HANDLE_IN_TYPE,
+ MC_CMD_GET_CLIENT_HANDLE_IN_TYPE_FUNC);
+ MCDI_SET_QWORD(inbuf, GET_CLIENT_HANDLE_IN_FUNC,
+ pciefn_flat);
+
+ rc = efx_mcdi_rpc(efx, MC_CMD_GET_CLIENT_HANDLE, inbuf, sizeof(inbuf),
+ outbuf, sizeof(outbuf), &outlen);
+ if (rc)
+ return rc;
+ if (outlen < sizeof(outbuf))
+ return -EIO;
+ *id = MCDI_DWORD(outbuf, GET_CLIENT_HANDLE_OUT_HANDLE);
+ return 0;
+}
+
int ef100_probe_netdev_pf(struct efx_nic *efx)
{
struct ef100_nic_data *nic_data = efx->nic_data;
struct net_device *net_dev = efx->net_dev;
int rc;
- rc = ef100_get_mac_address(efx, net_dev->perm_addr);
- if (rc)
- goto fail;
- /* Assign MAC address */
- eth_hw_addr_set(net_dev, net_dev->perm_addr);
- memcpy(nic_data->port_id, net_dev->perm_addr, ETH_ALEN);
-
- if (!nic_data->grp_mae)
+ if (!IS_ENABLED(CONFIG_SFC_SRIOV) || !nic_data->grp_mae)
return 0;
-#ifdef CONFIG_SFC_SRIOV
rc = efx_init_struct_tc(efx);
if (rc)
return rc;
@@ -1126,6 +1166,14 @@ int ef100_probe_netdev_pf(struct efx_nic *efx)
rc);
}
+ rc = efx_init_mae(efx);
+ if (rc)
+ netif_warn(efx, probe, net_dev,
+ "Failed to init MAE rc %d; representors will not function\n",
+ rc);
+ else
+ efx_ef100_init_reps(efx);
+
rc = efx_init_tc(efx);
if (rc) {
/* Either we don't have an MAE at all (i.e. legacy v-switching),
@@ -1141,10 +1189,6 @@ int ef100_probe_netdev_pf(struct efx_nic *efx)
net_dev->features |= NETIF_F_HW_TC;
efx->fixed_features |= NETIF_F_HW_TC;
}
-#endif
- return 0;
-
-fail:
return rc;
}
@@ -1157,6 +1201,11 @@ void ef100_remove(struct efx_nic *efx)
{
struct ef100_nic_data *nic_data = efx->nic_data;
+ if (IS_ENABLED(CONFIG_SFC_SRIOV) && efx->mae) {
+ efx_ef100_fini_reps(efx);
+ efx_fini_mae(efx);
+ }
+
efx_mcdi_detach(efx);
efx_mcdi_fini(efx);
if (nic_data)
@@ -1249,9 +1298,8 @@ const struct efx_nic_type ef100_pf_nic_type = {
.update_stats = ef100_update_stats,
.pull_stats = efx_mcdi_mac_pull_stats,
.stop_stats = efx_mcdi_mac_stop_stats,
-#ifdef CONFIG_SFC_SRIOV
- .sriov_configure = efx_ef100_sriov_configure,
-#endif
+ .sriov_configure = IS_ENABLED(CONFIG_SFC_SRIOV) ?
+ efx_ef100_sriov_configure : NULL,
/* Per-type bar/size configuration not used on ef100. Location of
* registers is defined by extended capabilities.
diff --git a/drivers/net/ethernet/sfc/ef100_nic.h b/drivers/net/ethernet/sfc/ef100_nic.h
index 0295933145fa..f1ed481c1260 100644
--- a/drivers/net/ethernet/sfc/ef100_nic.h
+++ b/drivers/net/ethernet/sfc/ef100_nic.h
@@ -74,6 +74,10 @@ struct ef100_nic_data {
u64 stats[EF100_STAT_COUNT];
u32 base_mport;
bool have_mport; /* base_mport was populated successfully */
+ u32 own_mport;
+ u32 local_mae_intf; /* interface_idx that corresponds to us, in mport enumerate */
+ bool have_own_mport; /* own_mport was populated successfully */
+ bool have_local_intf; /* local_mae_intf was populated successfully */
bool grp_mae; /* MAE Privilege */
u16 tso_max_hdr_len;
u16 tso_max_payload_num_segs;
@@ -88,4 +92,7 @@ int efx_ef100_init_datapath_caps(struct efx_nic *efx);
int ef100_phy_probe(struct efx_nic *efx);
int ef100_filter_table_probe(struct efx_nic *efx);
+int ef100_get_mac_address(struct efx_nic *efx, u8 *mac_address,
+ int client_handle, bool empty_ok);
+int efx_ef100_lookup_client_id(struct efx_nic *efx, efx_qword_t pciefn, u32 *id);
#endif /* EFX_EF100_NIC_H */
diff --git a/drivers/net/ethernet/sfc/ef100_rep.c b/drivers/net/ethernet/sfc/ef100_rep.c
index 81ab22c74635..0b3083ef0ead 100644
--- a/drivers/net/ethernet/sfc/ef100_rep.c
+++ b/drivers/net/ethernet/sfc/ef100_rep.c
@@ -9,12 +9,14 @@
* by the Free Software Foundation, incorporated herein by reference.
*/
+#include <linux/rhashtable.h>
#include "ef100_rep.h"
#include "ef100_netdev.h"
#include "ef100_nic.h"
#include "mae.h"
#include "rx_common.h"
#include "tc_bindings.h"
+#include "efx_devlink.h"
#define EFX_EF100_REP_DRIVER "efx_ef100_rep"
@@ -242,14 +244,11 @@ fail1:
static int efx_ef100_configure_rep(struct efx_rep *efv)
{
struct efx_nic *efx = efv->parent;
- u32 selector;
int rc;
efv->rx_pring_size = EFX_REP_DEFAULT_PSEUDO_RING_SIZE;
- /* Construct mport selector for corresponding VF */
- efx_mae_mport_vf(efx, efv->idx, &selector);
/* Look up actual mport ID */
- rc = efx_mae_lookup_mport(efx, selector, &efv->mport);
+ rc = efx_mae_lookup_mport(efx, efv->idx, &efv->mport);
if (rc)
return rc;
pci_dbg(efx->pci_dev, "VF %u has mport ID %#x\n", efv->idx, efv->mport);
@@ -299,6 +298,7 @@ int efx_ef100_vfrep_create(struct efx_nic *efx, unsigned int i)
i, rc);
goto fail1;
}
+ ef100_rep_set_devlink_port(efv);
rc = register_netdev(efv->net_dev);
if (rc) {
pci_err(efx->pci_dev,
@@ -310,6 +310,7 @@ int efx_ef100_vfrep_create(struct efx_nic *efx, unsigned int i)
efv->net_dev->name);
return 0;
fail2:
+ ef100_rep_unset_devlink_port(efv);
efx_ef100_deconfigure_rep(efv);
fail1:
efx_ef100_rep_destroy_netdev(efv);
@@ -325,6 +326,7 @@ void efx_ef100_vfrep_destroy(struct efx_nic *efx, struct efx_rep *efv)
return;
netif_dbg(efx, drv, rep_dev, "Removing VF representor\n");
unregister_netdev(rep_dev);
+ ef100_rep_unset_devlink_port(efv);
efx_ef100_deconfigure_rep(efv);
efx_ef100_rep_destroy_netdev(efv);
}
@@ -341,6 +343,53 @@ void efx_ef100_fini_vfreps(struct efx_nic *efx)
efx_ef100_vfrep_destroy(efx, efv);
}
+static bool ef100_mport_is_pcie_vnic(struct mae_mport_desc *mport_desc)
+{
+ return mport_desc->mport_type == MAE_MPORT_DESC_MPORT_TYPE_VNIC &&
+ mport_desc->vnic_client_type == MAE_MPORT_DESC_VNIC_CLIENT_TYPE_FUNCTION;
+}
+
+bool ef100_mport_on_local_intf(struct efx_nic *efx,
+ struct mae_mport_desc *mport_desc)
+{
+ struct ef100_nic_data *nic_data = efx->nic_data;
+ bool pcie_func;
+
+ pcie_func = ef100_mport_is_pcie_vnic(mport_desc);
+
+ return nic_data->have_local_intf && pcie_func &&
+ mport_desc->interface_idx == nic_data->local_mae_intf;
+}
+
+bool ef100_mport_is_vf(struct mae_mport_desc *mport_desc)
+{
+ bool pcie_func;
+
+ pcie_func = ef100_mport_is_pcie_vnic(mport_desc);
+ return pcie_func && (mport_desc->vf_idx != MAE_MPORT_DESC_VF_IDX_NULL);
+}
+
+void efx_ef100_init_reps(struct efx_nic *efx)
+{
+ struct ef100_nic_data *nic_data = efx->nic_data;
+ int rc;
+
+ nic_data->have_local_intf = false;
+ rc = efx_mae_enumerate_mports(efx);
+ if (rc)
+ pci_warn(efx->pci_dev,
+ "Could not enumerate mports (rc=%d), are we admin?",
+ rc);
+}
+
+void efx_ef100_fini_reps(struct efx_nic *efx)
+{
+ struct efx_mae *mae = efx->mae;
+
+ rhashtable_free_and_destroy(&mae->mports_ht, efx_mae_remove_mport,
+ NULL);
+}
+
static int efx_ef100_rep_poll(struct napi_struct *napi, int weight)
{
struct efx_rep *efv = container_of(napi, struct efx_rep, napi);
diff --git a/drivers/net/ethernet/sfc/ef100_rep.h b/drivers/net/ethernet/sfc/ef100_rep.h
index c21bc716f847..a042525a2240 100644
--- a/drivers/net/ethernet/sfc/ef100_rep.h
+++ b/drivers/net/ethernet/sfc/ef100_rep.h
@@ -22,6 +22,8 @@ struct efx_rep_sw_stats {
atomic64_t rx_dropped, tx_errors;
};
+struct devlink_port;
+
/**
* struct efx_rep - Private data for an Efx representor
*
@@ -39,6 +41,7 @@ struct efx_rep_sw_stats {
* @rx_lock: protects @rx_list
* @napi: NAPI control structure
* @stats: software traffic counters for netdev stats
+ * @dl_port: devlink port associated to this netdev representor
*/
struct efx_rep {
struct efx_nic *parent;
@@ -54,6 +57,7 @@ struct efx_rep {
spinlock_t rx_lock;
struct napi_struct napi;
struct efx_rep_sw_stats stats;
+ struct devlink_port *dl_port;
};
int efx_ef100_vfrep_create(struct efx_nic *efx, unsigned int i);
@@ -67,4 +71,10 @@ void efx_ef100_rep_rx_packet(struct efx_rep *efv, struct efx_rx_buffer *rx_buf);
*/
struct efx_rep *efx_ef100_find_rep_by_mport(struct efx_nic *efx, u16 mport);
extern const struct net_device_ops efx_ef100_rep_netdev_ops;
+void efx_ef100_init_reps(struct efx_nic *efx);
+void efx_ef100_fini_reps(struct efx_nic *efx);
+struct mae_mport_desc;
+bool ef100_mport_on_local_intf(struct efx_nic *efx,
+ struct mae_mport_desc *mport_desc);
+bool ef100_mport_is_vf(struct mae_mport_desc *mport_desc);
#endif /* EF100_REP_H */
diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c
index 3a86f1213a05..02c2adeb0a12 100644
--- a/drivers/net/ethernet/sfc/efx.c
+++ b/drivers/net/ethernet/sfc/efx.c
@@ -1028,6 +1028,10 @@ static int efx_pci_probe_post_io(struct efx_nic *efx)
net_dev->features &= ~NETIF_F_HW_VLAN_CTAG_FILTER;
net_dev->features |= efx->fixed_features;
+ net_dev->xdp_features = NETDEV_XDP_ACT_BASIC |
+ NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT;
+
rc = efx_register_netdev(efx);
if (!rc)
return 0;
diff --git a/drivers/net/ethernet/sfc/efx_devlink.c b/drivers/net/ethernet/sfc/efx_devlink.c
new file mode 100644
index 000000000000..381b805659d3
--- /dev/null
+++ b/drivers/net/ethernet/sfc/efx_devlink.c
@@ -0,0 +1,731 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/****************************************************************************
+ * Driver for AMD network controllers and boards
+ * Copyright (C) 2023, Advanced Micro Devices, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation, incorporated herein by reference.
+ */
+
+#include "net_driver.h"
+#include "ef100_nic.h"
+#include "efx_devlink.h"
+#include <linux/rtc.h>
+#include "mcdi.h"
+#include "mcdi_functions.h"
+#include "mcdi_pcol.h"
+#ifdef CONFIG_SFC_SRIOV
+#include "mae.h"
+#include "ef100_rep.h"
+#endif
+
+struct efx_devlink {
+ struct efx_nic *efx;
+};
+
+#ifdef CONFIG_SFC_SRIOV
+static void efx_devlink_del_port(struct devlink_port *dl_port)
+{
+ if (!dl_port)
+ return;
+ devl_port_unregister(dl_port);
+}
+
+static int efx_devlink_add_port(struct efx_nic *efx,
+ struct mae_mport_desc *mport)
+{
+ bool external = false;
+
+ if (!ef100_mport_on_local_intf(efx, mport))
+ external = true;
+
+ switch (mport->mport_type) {
+ case MAE_MPORT_DESC_MPORT_TYPE_VNIC:
+ if (mport->vf_idx != MAE_MPORT_DESC_VF_IDX_NULL)
+ devlink_port_attrs_pci_vf_set(&mport->dl_port, 0, mport->pf_idx,
+ mport->vf_idx,
+ external);
+ else
+ devlink_port_attrs_pci_pf_set(&mport->dl_port, 0, mport->pf_idx,
+ external);
+ break;
+ default:
+ /* MAE_MPORT_DESC_MPORT_ALIAS and UNDEFINED */
+ return 0;
+ }
+
+ mport->dl_port.index = mport->mport_id;
+
+ return devl_port_register(efx->devlink, &mport->dl_port, mport->mport_id);
+}
+
+static int efx_devlink_port_addr_get(struct devlink_port *port, u8 *hw_addr,
+ int *hw_addr_len,
+ struct netlink_ext_ack *extack)
+{
+ struct efx_devlink *devlink = devlink_priv(port->devlink);
+ struct mae_mport_desc *mport_desc;
+ efx_qword_t pciefn;
+ u32 client_id;
+ int rc = 0;
+
+ mport_desc = container_of(port, struct mae_mport_desc, dl_port);
+
+ if (!ef100_mport_on_local_intf(devlink->efx, mport_desc)) {
+ rc = -EINVAL;
+ NL_SET_ERR_MSG_FMT(extack,
+ "Port not on local interface (mport: %u)",
+ mport_desc->mport_id);
+ goto out;
+ }
+
+ if (ef100_mport_is_vf(mport_desc))
+ EFX_POPULATE_QWORD_3(pciefn,
+ PCIE_FUNCTION_PF, PCIE_FUNCTION_PF_NULL,
+ PCIE_FUNCTION_VF, mport_desc->vf_idx,
+ PCIE_FUNCTION_INTF, PCIE_INTERFACE_CALLER);
+ else
+ EFX_POPULATE_QWORD_3(pciefn,
+ PCIE_FUNCTION_PF, mport_desc->pf_idx,
+ PCIE_FUNCTION_VF, PCIE_FUNCTION_VF_NULL,
+ PCIE_FUNCTION_INTF, PCIE_INTERFACE_CALLER);
+
+ rc = efx_ef100_lookup_client_id(devlink->efx, pciefn, &client_id);
+ if (rc) {
+ NL_SET_ERR_MSG_FMT(extack,
+ "No internal client_ID for port (mport: %u)",
+ mport_desc->mport_id);
+ goto out;
+ }
+
+ rc = ef100_get_mac_address(devlink->efx, hw_addr, client_id, true);
+ if (rc != 0)
+ NL_SET_ERR_MSG_FMT(extack,
+ "No available MAC for port (mport: %u)",
+ mport_desc->mport_id);
+out:
+ *hw_addr_len = ETH_ALEN;
+ return rc;
+}
+
+static int efx_devlink_port_addr_set(struct devlink_port *port,
+ const u8 *hw_addr, int hw_addr_len,
+ struct netlink_ext_ack *extack)
+{
+ MCDI_DECLARE_BUF(inbuf, MC_CMD_SET_CLIENT_MAC_ADDRESSES_IN_LEN(1));
+ struct efx_devlink *devlink = devlink_priv(port->devlink);
+ struct mae_mport_desc *mport_desc;
+ efx_qword_t pciefn;
+ u32 client_id;
+ int rc;
+
+ mport_desc = container_of(port, struct mae_mport_desc, dl_port);
+
+ if (!ef100_mport_is_vf(mport_desc)) {
+ NL_SET_ERR_MSG_FMT(extack,
+ "port mac change not allowed (mport: %u)",
+ mport_desc->mport_id);
+ return -EPERM;
+ }
+
+ EFX_POPULATE_QWORD_3(pciefn,
+ PCIE_FUNCTION_PF, PCIE_FUNCTION_PF_NULL,
+ PCIE_FUNCTION_VF, mport_desc->vf_idx,
+ PCIE_FUNCTION_INTF, PCIE_INTERFACE_CALLER);
+
+ rc = efx_ef100_lookup_client_id(devlink->efx, pciefn, &client_id);
+ if (rc) {
+ NL_SET_ERR_MSG_FMT(extack,
+ "No internal client_ID for port (mport: %u)",
+ mport_desc->mport_id);
+ return rc;
+ }
+
+ MCDI_SET_DWORD(inbuf, SET_CLIENT_MAC_ADDRESSES_IN_CLIENT_HANDLE,
+ client_id);
+
+ ether_addr_copy(MCDI_PTR(inbuf, SET_CLIENT_MAC_ADDRESSES_IN_MAC_ADDRS),
+ hw_addr);
+
+ rc = efx_mcdi_rpc(devlink->efx, MC_CMD_SET_CLIENT_MAC_ADDRESSES, inbuf,
+ sizeof(inbuf), NULL, 0, NULL);
+ if (rc)
+ NL_SET_ERR_MSG_FMT(extack,
+ "sfc MC_CMD_SET_CLIENT_MAC_ADDRESSES mcdi error (mport: %u)",
+ mport_desc->mport_id);
+
+ return rc;
+}
+
+#endif
+
+static int efx_devlink_info_nvram_partition(struct efx_nic *efx,
+ struct devlink_info_req *req,
+ unsigned int partition_type,
+ const char *version_name)
+{
+ char buf[EFX_MAX_VERSION_INFO_LEN];
+ u16 version[4];
+ int rc;
+
+ rc = efx_mcdi_nvram_metadata(efx, partition_type, NULL, version, NULL,
+ 0);
+ if (rc) {
+ netif_err(efx, drv, efx->net_dev, "mcdi nvram %s: failed\n",
+ version_name);
+ return rc;
+ }
+
+ snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u", version[0],
+ version[1], version[2], version[3]);
+ devlink_info_version_stored_put(req, version_name, buf);
+
+ return 0;
+}
+
+static int efx_devlink_info_stored_versions(struct efx_nic *efx,
+ struct devlink_info_req *req)
+{
+ int rc;
+
+ rc = efx_devlink_info_nvram_partition(efx, req,
+ NVRAM_PARTITION_TYPE_BUNDLE,
+ DEVLINK_INFO_VERSION_GENERIC_FW_BUNDLE_ID);
+ if (rc)
+ return rc;
+
+ rc = efx_devlink_info_nvram_partition(efx, req,
+ NVRAM_PARTITION_TYPE_MC_FIRMWARE,
+ DEVLINK_INFO_VERSION_GENERIC_FW_MGMT);
+ if (rc)
+ return rc;
+
+ rc = efx_devlink_info_nvram_partition(efx, req,
+ NVRAM_PARTITION_TYPE_SUC_FIRMWARE,
+ EFX_DEVLINK_INFO_VERSION_FW_MGMT_SUC);
+ if (rc)
+ return rc;
+
+ rc = efx_devlink_info_nvram_partition(efx, req,
+ NVRAM_PARTITION_TYPE_EXPANSION_ROM,
+ EFX_DEVLINK_INFO_VERSION_FW_EXPROM);
+ if (rc)
+ return rc;
+
+ rc = efx_devlink_info_nvram_partition(efx, req,
+ NVRAM_PARTITION_TYPE_EXPANSION_UEFI,
+ EFX_DEVLINK_INFO_VERSION_FW_UEFI);
+ return rc;
+}
+
+#define EFX_VER_FLAG(_f) \
+ (MC_CMD_GET_VERSION_V5_OUT_ ## _f ## _PRESENT_LBN)
+
+static void efx_devlink_info_running_v2(struct efx_nic *efx,
+ struct devlink_info_req *req,
+ unsigned int flags, efx_dword_t *outbuf)
+{
+ char buf[EFX_MAX_VERSION_INFO_LEN];
+ union {
+ const __le32 *dwords;
+ const __le16 *words;
+ const char *str;
+ } ver;
+ struct rtc_time build_date;
+ unsigned int build_id;
+ size_t offset;
+ __maybe_unused u64 tstamp;
+
+ if (flags & BIT(EFX_VER_FLAG(BOARD_EXT_INFO))) {
+ snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%s",
+ MCDI_PTR(outbuf, GET_VERSION_V2_OUT_BOARD_NAME));
+ devlink_info_version_fixed_put(req,
+ DEVLINK_INFO_VERSION_GENERIC_BOARD_ID,
+ buf);
+
+ /* Favour full board version if present (in V5 or later) */
+ if (~flags & BIT(EFX_VER_FLAG(BOARD_VERSION))) {
+ snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u",
+ MCDI_DWORD(outbuf,
+ GET_VERSION_V2_OUT_BOARD_REVISION));
+ devlink_info_version_fixed_put(req,
+ DEVLINK_INFO_VERSION_GENERIC_BOARD_REV,
+ buf);
+ }
+
+ ver.str = MCDI_PTR(outbuf, GET_VERSION_V2_OUT_BOARD_SERIAL);
+ if (ver.str[0])
+ devlink_info_board_serial_number_put(req, ver.str);
+ }
+
+ if (flags & BIT(EFX_VER_FLAG(FPGA_EXT_INFO))) {
+ ver.dwords = (__le32 *)MCDI_PTR(outbuf,
+ GET_VERSION_V2_OUT_FPGA_VERSION);
+ offset = snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u_%c%u",
+ le32_to_cpu(ver.dwords[0]),
+ 'A' + le32_to_cpu(ver.dwords[1]),
+ le32_to_cpu(ver.dwords[2]));
+
+ ver.str = MCDI_PTR(outbuf, GET_VERSION_V2_OUT_FPGA_EXTRA);
+ if (ver.str[0])
+ snprintf(&buf[offset], EFX_MAX_VERSION_INFO_LEN - offset,
+ " (%s)", ver.str);
+
+ devlink_info_version_running_put(req,
+ EFX_DEVLINK_INFO_VERSION_FPGA_REV,
+ buf);
+ }
+
+ if (flags & BIT(EFX_VER_FLAG(CMC_EXT_INFO))) {
+ ver.dwords = (__le32 *)MCDI_PTR(outbuf,
+ GET_VERSION_V2_OUT_CMCFW_VERSION);
+ offset = snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u",
+ le32_to_cpu(ver.dwords[0]),
+ le32_to_cpu(ver.dwords[1]),
+ le32_to_cpu(ver.dwords[2]),
+ le32_to_cpu(ver.dwords[3]));
+
+#ifdef CONFIG_RTC_LIB
+ tstamp = MCDI_QWORD(outbuf,
+ GET_VERSION_V2_OUT_CMCFW_BUILD_DATE);
+ if (tstamp) {
+ rtc_time64_to_tm(tstamp, &build_date);
+ snprintf(&buf[offset], EFX_MAX_VERSION_INFO_LEN - offset,
+ " (%ptRd)", &build_date);
+ }
+#endif
+
+ devlink_info_version_running_put(req,
+ EFX_DEVLINK_INFO_VERSION_FW_MGMT_CMC,
+ buf);
+ }
+
+ ver.words = (__le16 *)MCDI_PTR(outbuf, GET_VERSION_V2_OUT_VERSION);
+ offset = snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u",
+ le16_to_cpu(ver.words[0]), le16_to_cpu(ver.words[1]),
+ le16_to_cpu(ver.words[2]), le16_to_cpu(ver.words[3]));
+ if (flags & BIT(EFX_VER_FLAG(MCFW_EXT_INFO))) {
+ build_id = MCDI_DWORD(outbuf, GET_VERSION_V2_OUT_MCFW_BUILD_ID);
+ snprintf(&buf[offset], EFX_MAX_VERSION_INFO_LEN - offset,
+ " (%x) %s", build_id,
+ MCDI_PTR(outbuf, GET_VERSION_V2_OUT_MCFW_BUILD_NAME));
+ }
+ devlink_info_version_running_put(req,
+ DEVLINK_INFO_VERSION_GENERIC_FW_MGMT,
+ buf);
+
+ if (flags & BIT(EFX_VER_FLAG(SUCFW_EXT_INFO))) {
+ ver.dwords = (__le32 *)MCDI_PTR(outbuf,
+ GET_VERSION_V2_OUT_SUCFW_VERSION);
+#ifdef CONFIG_RTC_LIB
+ tstamp = MCDI_QWORD(outbuf,
+ GET_VERSION_V2_OUT_SUCFW_BUILD_DATE);
+ rtc_time64_to_tm(tstamp, &build_date);
+#else
+ memset(&build_date, 0, sizeof(build_date));
+#endif
+ build_id = MCDI_DWORD(outbuf, GET_VERSION_V2_OUT_SUCFW_CHIP_ID);
+
+ snprintf(buf, EFX_MAX_VERSION_INFO_LEN,
+ "%u.%u.%u.%u type %x (%ptRd)",
+ le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]),
+ le32_to_cpu(ver.dwords[2]), le32_to_cpu(ver.dwords[3]),
+ build_id, &build_date);
+
+ devlink_info_version_running_put(req,
+ EFX_DEVLINK_INFO_VERSION_FW_MGMT_SUC,
+ buf);
+ }
+}
+
+static void efx_devlink_info_running_v3(struct efx_nic *efx,
+ struct devlink_info_req *req,
+ unsigned int flags, efx_dword_t *outbuf)
+{
+ char buf[EFX_MAX_VERSION_INFO_LEN];
+ union {
+ const __le32 *dwords;
+ const __le16 *words;
+ const char *str;
+ } ver;
+
+ if (flags & BIT(EFX_VER_FLAG(DATAPATH_HW_VERSION))) {
+ ver.dwords = (__le32 *)MCDI_PTR(outbuf,
+ GET_VERSION_V3_OUT_DATAPATH_HW_VERSION);
+
+ snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u",
+ le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]),
+ le32_to_cpu(ver.dwords[2]));
+
+ devlink_info_version_running_put(req,
+ EFX_DEVLINK_INFO_VERSION_DATAPATH_HW,
+ buf);
+ }
+
+ if (flags & BIT(EFX_VER_FLAG(DATAPATH_FW_VERSION))) {
+ ver.dwords = (__le32 *)MCDI_PTR(outbuf,
+ GET_VERSION_V3_OUT_DATAPATH_FW_VERSION);
+
+ snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u",
+ le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]),
+ le32_to_cpu(ver.dwords[2]));
+
+ devlink_info_version_running_put(req,
+ EFX_DEVLINK_INFO_VERSION_DATAPATH_FW,
+ buf);
+ }
+}
+
+static void efx_devlink_info_running_v4(struct efx_nic *efx,
+ struct devlink_info_req *req,
+ unsigned int flags, efx_dword_t *outbuf)
+{
+ char buf[EFX_MAX_VERSION_INFO_LEN];
+ union {
+ const __le32 *dwords;
+ const __le16 *words;
+ const char *str;
+ } ver;
+
+ if (flags & BIT(EFX_VER_FLAG(SOC_BOOT_VERSION))) {
+ ver.dwords = (__le32 *)MCDI_PTR(outbuf,
+ GET_VERSION_V4_OUT_SOC_BOOT_VERSION);
+
+ snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u",
+ le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]),
+ le32_to_cpu(ver.dwords[2]),
+ le32_to_cpu(ver.dwords[3]));
+
+ devlink_info_version_running_put(req,
+ EFX_DEVLINK_INFO_VERSION_SOC_BOOT,
+ buf);
+ }
+
+ if (flags & BIT(EFX_VER_FLAG(SOC_UBOOT_VERSION))) {
+ ver.dwords = (__le32 *)MCDI_PTR(outbuf,
+ GET_VERSION_V4_OUT_SOC_UBOOT_VERSION);
+
+ snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u",
+ le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]),
+ le32_to_cpu(ver.dwords[2]),
+ le32_to_cpu(ver.dwords[3]));
+
+ devlink_info_version_running_put(req,
+ EFX_DEVLINK_INFO_VERSION_SOC_UBOOT,
+ buf);
+ }
+
+ if (flags & BIT(EFX_VER_FLAG(SOC_MAIN_ROOTFS_VERSION))) {
+ ver.dwords = (__le32 *)MCDI_PTR(outbuf,
+ GET_VERSION_V4_OUT_SOC_MAIN_ROOTFS_VERSION);
+
+ snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u",
+ le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]),
+ le32_to_cpu(ver.dwords[2]),
+ le32_to_cpu(ver.dwords[3]));
+
+ devlink_info_version_running_put(req,
+ EFX_DEVLINK_INFO_VERSION_SOC_MAIN,
+ buf);
+ }
+
+ if (flags & BIT(EFX_VER_FLAG(SOC_RECOVERY_BUILDROOT_VERSION))) {
+ ver.dwords = (__le32 *)MCDI_PTR(outbuf,
+ GET_VERSION_V4_OUT_SOC_RECOVERY_BUILDROOT_VERSION);
+
+ snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u",
+ le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]),
+ le32_to_cpu(ver.dwords[2]),
+ le32_to_cpu(ver.dwords[3]));
+
+ devlink_info_version_running_put(req,
+ EFX_DEVLINK_INFO_VERSION_SOC_RECOVERY,
+ buf);
+ }
+
+ if (flags & BIT(EFX_VER_FLAG(SUCFW_VERSION)) &&
+ ~flags & BIT(EFX_VER_FLAG(SUCFW_EXT_INFO))) {
+ ver.dwords = (__le32 *)MCDI_PTR(outbuf,
+ GET_VERSION_V4_OUT_SUCFW_VERSION);
+
+ snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u",
+ le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]),
+ le32_to_cpu(ver.dwords[2]),
+ le32_to_cpu(ver.dwords[3]));
+
+ devlink_info_version_running_put(req,
+ EFX_DEVLINK_INFO_VERSION_FW_MGMT_SUC,
+ buf);
+ }
+}
+
+static void efx_devlink_info_running_v5(struct efx_nic *efx,
+ struct devlink_info_req *req,
+ unsigned int flags, efx_dword_t *outbuf)
+{
+ char buf[EFX_MAX_VERSION_INFO_LEN];
+ union {
+ const __le32 *dwords;
+ const __le16 *words;
+ const char *str;
+ } ver;
+
+ if (flags & BIT(EFX_VER_FLAG(BOARD_VERSION))) {
+ ver.dwords = (__le32 *)MCDI_PTR(outbuf,
+ GET_VERSION_V5_OUT_BOARD_VERSION);
+
+ snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u",
+ le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]),
+ le32_to_cpu(ver.dwords[2]),
+ le32_to_cpu(ver.dwords[3]));
+
+ devlink_info_version_running_put(req,
+ DEVLINK_INFO_VERSION_GENERIC_BOARD_REV,
+ buf);
+ }
+
+ if (flags & BIT(EFX_VER_FLAG(BUNDLE_VERSION))) {
+ ver.dwords = (__le32 *)MCDI_PTR(outbuf,
+ GET_VERSION_V5_OUT_BUNDLE_VERSION);
+
+ snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u",
+ le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]),
+ le32_to_cpu(ver.dwords[2]),
+ le32_to_cpu(ver.dwords[3]));
+
+ devlink_info_version_running_put(req,
+ DEVLINK_INFO_VERSION_GENERIC_FW_BUNDLE_ID,
+ buf);
+ }
+}
+
+static int efx_devlink_info_running_versions(struct efx_nic *efx,
+ struct devlink_info_req *req)
+{
+ MCDI_DECLARE_BUF(outbuf, MC_CMD_GET_VERSION_V5_OUT_LEN);
+ MCDI_DECLARE_BUF(inbuf, MC_CMD_GET_VERSION_EXT_IN_LEN);
+ char buf[EFX_MAX_VERSION_INFO_LEN];
+ union {
+ const __le32 *dwords;
+ const __le16 *words;
+ const char *str;
+ } ver;
+ size_t outlength;
+ unsigned int flags;
+ int rc;
+
+ rc = efx_mcdi_rpc(efx, MC_CMD_GET_VERSION, inbuf, sizeof(inbuf),
+ outbuf, sizeof(outbuf), &outlength);
+ if (rc || outlength < MC_CMD_GET_VERSION_OUT_LEN) {
+ netif_err(efx, drv, efx->net_dev,
+ "mcdi MC_CMD_GET_VERSION failed\n");
+ return rc;
+ }
+
+ /* Handle the original (pre-V2) output format */
+ if (outlength < MC_CMD_GET_VERSION_V2_OUT_LEN) {
+ ver.words = (__le16 *)MCDI_PTR(outbuf,
+ GET_VERSION_EXT_OUT_VERSION);
+ snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u",
+ le16_to_cpu(ver.words[0]),
+ le16_to_cpu(ver.words[1]),
+ le16_to_cpu(ver.words[2]),
+ le16_to_cpu(ver.words[3]));
+
+ devlink_info_version_running_put(req,
+ DEVLINK_INFO_VERSION_GENERIC_FW_MGMT,
+ buf);
+ return 0;
+ }
+
+ /* Handle V2 additions */
+ flags = MCDI_DWORD(outbuf, GET_VERSION_V2_OUT_FLAGS);
+ efx_devlink_info_running_v2(efx, req, flags, outbuf);
+
+ if (outlength < MC_CMD_GET_VERSION_V3_OUT_LEN)
+ return 0;
+
+ /* Handle V3 additions */
+ efx_devlink_info_running_v3(efx, req, flags, outbuf);
+
+ if (outlength < MC_CMD_GET_VERSION_V4_OUT_LEN)
+ return 0;
+
+ /* Handle V4 additions */
+ efx_devlink_info_running_v4(efx, req, flags, outbuf);
+
+ if (outlength < MC_CMD_GET_VERSION_V5_OUT_LEN)
+ return 0;
+
+ /* Handle V5 additions */
+ efx_devlink_info_running_v5(efx, req, flags, outbuf);
+
+ return 0;
+}
+
+#define EFX_MAX_SERIALNUM_LEN (ETH_ALEN * 2 + 1)
+
+static int efx_devlink_info_board_cfg(struct efx_nic *efx,
+ struct devlink_info_req *req)
+{
+ char sn[EFX_MAX_SERIALNUM_LEN];
+ u8 mac_address[ETH_ALEN];
+ int rc;
+
+ rc = efx_mcdi_get_board_cfg(efx, (u8 *)mac_address, NULL, NULL);
+ if (!rc) {
+ snprintf(sn, EFX_MAX_SERIALNUM_LEN, "%pm", mac_address);
+ devlink_info_serial_number_put(req, sn);
+ }
+ return rc;
+}
+
+static int efx_devlink_info_get(struct devlink *devlink,
+ struct devlink_info_req *req,
+ struct netlink_ext_ack *extack)
+{
+ struct efx_devlink *devlink_private = devlink_priv(devlink);
+ struct efx_nic *efx = devlink_private->efx;
+ int rc;
+
+ /* Several different MCDI commands are used. We report the first error
+ * through extack and return at that point. Specific error information
+ * is reported via system messages.
+ */
+ rc = efx_devlink_info_board_cfg(efx, req);
+ if (rc) {
+ NL_SET_ERR_MSG_MOD(extack, "Getting board info failed");
+ return rc;
+ }
+ rc = efx_devlink_info_stored_versions(efx, req);
+ if (rc) {
+ NL_SET_ERR_MSG_MOD(extack, "Getting stored versions failed");
+ return rc;
+ }
+ rc = efx_devlink_info_running_versions(efx, req);
+ if (rc) {
+ NL_SET_ERR_MSG_MOD(extack, "Getting running versions failed");
+ return rc;
+ }
+
+ return 0;
+}
+
+static const struct devlink_ops sfc_devlink_ops = {
+ .info_get = efx_devlink_info_get,
+#ifdef CONFIG_SFC_SRIOV
+ .port_function_hw_addr_get = efx_devlink_port_addr_get,
+ .port_function_hw_addr_set = efx_devlink_port_addr_set,
+#endif
+};
+
+#ifdef CONFIG_SFC_SRIOV
+static struct devlink_port *ef100_set_devlink_port(struct efx_nic *efx, u32 idx)
+{
+ struct mae_mport_desc *mport;
+ u32 id;
+ int rc;
+
+ if (efx_mae_lookup_mport(efx, idx, &id)) {
+ /* This should not happen. */
+ if (idx == MAE_MPORT_DESC_VF_IDX_NULL)
+ pci_warn_once(efx->pci_dev, "No mport ID found for PF.\n");
+ else
+ pci_warn_once(efx->pci_dev, "No mport ID found for VF %u.\n",
+ idx);
+ return NULL;
+ }
+
+ mport = efx_mae_get_mport(efx, id);
+ if (!mport) {
+ /* This should not happen. */
+ if (idx == MAE_MPORT_DESC_VF_IDX_NULL)
+ pci_warn_once(efx->pci_dev, "No mport found for PF.\n");
+ else
+ pci_warn_once(efx->pci_dev, "No mport found for VF %u.\n",
+ idx);
+ return NULL;
+ }
+
+ rc = efx_devlink_add_port(efx, mport);
+ if (rc) {
+ if (idx == MAE_MPORT_DESC_VF_IDX_NULL)
+ pci_warn(efx->pci_dev,
+ "devlink port creation for PF failed.\n");
+ else
+ pci_warn(efx->pci_dev,
+ "devlink_port creation for VF %u failed.\n",
+ idx);
+ return NULL;
+ }
+
+ return &mport->dl_port;
+}
+
+void ef100_rep_set_devlink_port(struct efx_rep *efv)
+{
+ efv->dl_port = ef100_set_devlink_port(efv->parent, efv->idx);
+}
+
+void ef100_pf_set_devlink_port(struct efx_nic *efx)
+{
+ efx->dl_port = ef100_set_devlink_port(efx, MAE_MPORT_DESC_VF_IDX_NULL);
+}
+
+void ef100_rep_unset_devlink_port(struct efx_rep *efv)
+{
+ efx_devlink_del_port(efv->dl_port);
+}
+
+void ef100_pf_unset_devlink_port(struct efx_nic *efx)
+{
+ efx_devlink_del_port(efx->dl_port);
+}
+#endif
+
+void efx_fini_devlink_lock(struct efx_nic *efx)
+{
+ if (efx->devlink)
+ devl_lock(efx->devlink);
+}
+
+void efx_fini_devlink_and_unlock(struct efx_nic *efx)
+{
+ if (efx->devlink) {
+ devl_unregister(efx->devlink);
+ devl_unlock(efx->devlink);
+ devlink_free(efx->devlink);
+ efx->devlink = NULL;
+ }
+}
+
+int efx_probe_devlink_and_lock(struct efx_nic *efx)
+{
+ struct efx_devlink *devlink_private;
+
+ if (efx->type->is_vf)
+ return 0;
+
+ efx->devlink = devlink_alloc(&sfc_devlink_ops,
+ sizeof(struct efx_devlink),
+ &efx->pci_dev->dev);
+ if (!efx->devlink)
+ return -ENOMEM;
+
+ devl_lock(efx->devlink);
+ devlink_private = devlink_priv(efx->devlink);
+ devlink_private->efx = efx;
+
+ devl_register(efx->devlink);
+
+ return 0;
+}
+
+void efx_probe_devlink_unlock(struct efx_nic *efx)
+{
+ if (!efx->devlink)
+ return;
+
+ devl_unlock(efx->devlink);
+}
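
In outline, the lifecycle this file implements pairs with the ef100_netdev.c changes earlier in the patch: allocate, lock and register the devlink instance before netdev registration, then take the lock again around teardown so ports are removed under the instance lock. A sketch with hypothetical callers:

/* Sketch of the intended call order; my_probe()/my_remove() are hypothetical. */
static int my_probe(struct efx_nic *efx)
{
	int rc = efx_probe_devlink_and_lock(efx); /* alloc + devl_lock() + devl_register() */

	if (rc)
		return rc;
	/* ... register netdev, add ports with devl_port_register() ... */
	efx_probe_devlink_unlock(efx);
	return 0;
}

static void my_remove(struct efx_nic *efx)
{
	efx_fini_devlink_lock(efx);		/* devl_lock() */
	/* ... unregister netdev, remove ports with devl_port_unregister() ... */
	efx_fini_devlink_and_unlock(efx);	/* devl_unregister() + devl_unlock() + free */
}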
diff --git a/drivers/net/ethernet/sfc/efx_devlink.h b/drivers/net/ethernet/sfc/efx_devlink.h
new file mode 100644
index 000000000000..e5fd5e1dcc27
--- /dev/null
+++ b/drivers/net/ethernet/sfc/efx_devlink.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/****************************************************************************
+ * Driver for AMD network controllers and boards
+ * Copyright (C) 2023, Advanced Micro Devices, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation, incorporated herein by reference.
+ */
+
+#ifndef _EFX_DEVLINK_H
+#define _EFX_DEVLINK_H
+
+#include "net_driver.h"
+#include <net/devlink.h>
+
+/* Custom devlink-info version object names for details that do not map to the
+ * generic standardized names.
+ */
+#define EFX_DEVLINK_INFO_VERSION_FW_MGMT_SUC "fw.mgmt.suc"
+#define EFX_DEVLINK_INFO_VERSION_FW_MGMT_CMC "fw.mgmt.cmc"
+#define EFX_DEVLINK_INFO_VERSION_FPGA_REV "fpga.rev"
+#define EFX_DEVLINK_INFO_VERSION_DATAPATH_HW "fpga.app"
+#define EFX_DEVLINK_INFO_VERSION_DATAPATH_FW DEVLINK_INFO_VERSION_GENERIC_FW_APP
+#define EFX_DEVLINK_INFO_VERSION_SOC_BOOT "coproc.boot"
+#define EFX_DEVLINK_INFO_VERSION_SOC_UBOOT "coproc.uboot"
+#define EFX_DEVLINK_INFO_VERSION_SOC_MAIN "coproc.main"
+#define EFX_DEVLINK_INFO_VERSION_SOC_RECOVERY "coproc.recovery"
+#define EFX_DEVLINK_INFO_VERSION_FW_EXPROM "fw.exprom"
+#define EFX_DEVLINK_INFO_VERSION_FW_UEFI "fw.uefi"
+
+#define EFX_MAX_VERSION_INFO_LEN 64
+
+int efx_probe_devlink_and_lock(struct efx_nic *efx);
+void efx_probe_devlink_unlock(struct efx_nic *efx);
+void efx_fini_devlink_lock(struct efx_nic *efx);
+void efx_fini_devlink_and_unlock(struct efx_nic *efx);
+
+#ifdef CONFIG_SFC_SRIOV
+struct efx_rep;
+
+void ef100_pf_set_devlink_port(struct efx_nic *efx);
+void ef100_rep_set_devlink_port(struct efx_rep *efv);
+void ef100_pf_unset_devlink_port(struct efx_nic *efx);
+void ef100_rep_unset_devlink_port(struct efx_rep *efv);
+#endif
+#endif /* _EFX_DEVLINK_H */
diff --git a/drivers/net/ethernet/sfc/mae.c b/drivers/net/ethernet/sfc/mae.c
index 583baf69981c..2d32abe5f478 100644
--- a/drivers/net/ethernet/sfc/mae.c
+++ b/drivers/net/ethernet/sfc/mae.c
@@ -9,8 +9,11 @@
* by the Free Software Foundation, incorporated herein by reference.
*/
+#include <linux/rhashtable.h>
+#include "ef100_nic.h"
#include "mae.h"
#include "mcdi.h"
+#include "mcdi_pcol.h"
#include "mcdi_pcol_mae.h"
int efx_mae_allocate_mport(struct efx_nic *efx, u32 *id, u32 *label)
@@ -94,7 +97,7 @@ void efx_mae_mport_mport(struct efx_nic *efx __always_unused, u32 mport_id, u32
}
/* id is really only 24 bits wide */
-int efx_mae_lookup_mport(struct efx_nic *efx, u32 selector, u32 *id)
+int efx_mae_fw_lookup_mport(struct efx_nic *efx, u32 selector, u32 *id)
{
MCDI_DECLARE_BUF(outbuf, MC_CMD_MAE_MPORT_LOOKUP_OUT_LEN);
MCDI_DECLARE_BUF(inbuf, MC_CMD_MAE_MPORT_LOOKUP_IN_LEN);
@@ -485,11 +488,193 @@ int efx_mae_free_counter(struct efx_nic *efx, struct efx_tc_counter *cnt)
return 0;
}
+int efx_mae_lookup_mport(struct efx_nic *efx, u32 vf_idx, u32 *id)
+{
+ struct ef100_nic_data *nic_data = efx->nic_data;
+ struct efx_mae *mae = efx->mae;
+ struct rhashtable_iter walk;
+ struct mae_mport_desc *m;
+ int rc = -ENOENT;
+
+ rhashtable_walk_enter(&mae->mports_ht, &walk);
+ rhashtable_walk_start(&walk);
+ while ((m = rhashtable_walk_next(&walk)) != NULL) {
+ if (m->mport_type == MAE_MPORT_DESC_MPORT_TYPE_VNIC &&
+ m->interface_idx == nic_data->local_mae_intf &&
+ m->pf_idx == 0 &&
+ m->vf_idx == vf_idx) {
+ *id = m->mport_id;
+ rc = 0;
+ break;
+ }
+ }
+ rhashtable_walk_stop(&walk);
+ rhashtable_walk_exit(&walk);
+ return rc;
+}
+
static bool efx_mae_asl_id(u32 id)
{
return !!(id & BIT(31));
}
+/* mport handling */
+static const struct rhashtable_params efx_mae_mports_ht_params = {
+ .key_len = sizeof(u32),
+ .key_offset = offsetof(struct mae_mport_desc, mport_id),
+ .head_offset = offsetof(struct mae_mport_desc, linkage),
+};
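
Given those parameters, mports_ht is keyed directly on the 32-bit mport_id, with the rhash_head embedded in each descriptor. A minimal sketch of the resulting insert/lookup calls (example_add_mport() is a hypothetical wrapper):

/* Sketch: insert a descriptor, then look it up by mport_id. */
static int example_add_mport(struct efx_mae *mae, u32 mport_id)
{
	struct mae_mport_desc *d, *found;
	int rc;

	d = kzalloc(sizeof(*d), GFP_KERNEL);
	if (!d)
		return -ENOMEM;
	d->mport_id = mport_id;
	rc = rhashtable_insert_fast(&mae->mports_ht, &d->linkage,
				    efx_mae_mports_ht_params);
	if (rc) {
		kfree(d);
		return rc;
	}
	/* rhashtable_lookup_fast() takes the RCU read lock itself */
	found = rhashtable_lookup_fast(&mae->mports_ht, &mport_id,
				       efx_mae_mports_ht_params);
	return found ? 0 : -ENOENT;
}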
+
+struct mae_mport_desc *efx_mae_get_mport(struct efx_nic *efx, u32 mport_id)
+{
+ return rhashtable_lookup_fast(&efx->mae->mports_ht, &mport_id,
+ efx_mae_mports_ht_params);
+}
+
+static int efx_mae_add_mport(struct efx_nic *efx, struct mae_mport_desc *desc)
+{
+ struct efx_mae *mae = efx->mae;
+ int rc;
+
+ rc = rhashtable_insert_fast(&mae->mports_ht, &desc->linkage,
+ efx_mae_mports_ht_params);
+
+ if (rc) {
+ pci_err(efx->pci_dev, "Failed to insert MPORT %08x, rc %d\n",
+ desc->mport_id, rc);
+ kfree(desc);
+ return rc;
+ }
+
+ return rc;
+}
+
+void efx_mae_remove_mport(void *desc, void *arg)
+{
+ struct mae_mport_desc *mport = desc;
+
+ synchronize_rcu();
+ kfree(mport);
+}
+
+static int efx_mae_process_mport(struct efx_nic *efx,
+ struct mae_mport_desc *desc)
+{
+ struct ef100_nic_data *nic_data = efx->nic_data;
+ struct mae_mport_desc *mport;
+
+ mport = efx_mae_get_mport(efx, desc->mport_id);
+ if (!IS_ERR_OR_NULL(mport)) {
+ netif_err(efx, drv, efx->net_dev,
+ "mport with id %u does exist!!!\n", desc->mport_id);
+ return -EEXIST;
+ }
+
+ if (nic_data->have_own_mport &&
+ desc->mport_id == nic_data->own_mport) {
+ WARN_ON(desc->mport_type != MAE_MPORT_DESC_MPORT_TYPE_VNIC);
+ WARN_ON(desc->vnic_client_type !=
+ MAE_MPORT_DESC_VNIC_CLIENT_TYPE_FUNCTION);
+ nic_data->local_mae_intf = desc->interface_idx;
+ nic_data->have_local_intf = true;
+ pci_dbg(efx->pci_dev, "MAE interface_idx is %u\n",
+ nic_data->local_mae_intf);
+ }
+
+ return efx_mae_add_mport(efx, desc);
+}
+
+#define MCDI_MPORT_JOURNAL_LEN \
+ ALIGN(MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_LENMAX_MCDI2, 4)
+
+int efx_mae_enumerate_mports(struct efx_nic *efx)
+{
+ efx_dword_t *outbuf = kzalloc(MCDI_MPORT_JOURNAL_LEN, GFP_KERNEL);
+ MCDI_DECLARE_BUF(inbuf, MC_CMD_MAE_MPORT_READ_JOURNAL_IN_LEN);
+ MCDI_DECLARE_STRUCT_PTR(desc);
+ size_t outlen, stride, count;
+ int rc = 0, i;
+
+ if (!outbuf)
+ return -ENOMEM;
+ do {
+ rc = efx_mcdi_rpc(efx, MC_CMD_MAE_MPORT_READ_JOURNAL, inbuf,
+ sizeof(inbuf), outbuf,
+ MCDI_MPORT_JOURNAL_LEN, &outlen);
+ if (rc)
+ goto fail;
+ if (outlen < MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_MPORT_DESC_DATA_OFST) {
+ rc = -EIO;
+ goto fail;
+ }
+ count = MCDI_DWORD(outbuf, MAE_MPORT_READ_JOURNAL_OUT_MPORT_DESC_COUNT);
+ if (!count)
+ continue; /* not break; we want to look at MORE flag */
+ stride = MCDI_DWORD(outbuf, MAE_MPORT_READ_JOURNAL_OUT_SIZEOF_MPORT_DESC);
+ if (stride < MAE_MPORT_DESC_LEN) {
+ rc = -EIO;
+ goto fail;
+ }
+ if (outlen < MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_LEN(count * stride)) {
+ rc = -EIO;
+ goto fail;
+ }
+
+ for (i = 0; i < count; i++) {
+ struct mae_mport_desc *d;
+
+ d = kzalloc(sizeof(*d), GFP_KERNEL);
+ if (!d) {
+ rc = -ENOMEM;
+ goto fail;
+ }
+
+ desc = (efx_dword_t *)
+ _MCDI_PTR(outbuf, MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_MPORT_DESC_DATA_OFST +
+ i * stride);
+ d->mport_id = MCDI_STRUCT_DWORD(desc, MAE_MPORT_DESC_MPORT_ID);
+ d->flags = MCDI_STRUCT_DWORD(desc, MAE_MPORT_DESC_FLAGS);
+ d->caller_flags = MCDI_STRUCT_DWORD(desc,
+ MAE_MPORT_DESC_CALLER_FLAGS);
+ d->mport_type = MCDI_STRUCT_DWORD(desc,
+ MAE_MPORT_DESC_MPORT_TYPE);
+ switch (d->mport_type) {
+ case MAE_MPORT_DESC_MPORT_TYPE_NET_PORT:
+ d->port_idx = MCDI_STRUCT_DWORD(desc,
+ MAE_MPORT_DESC_NET_PORT_IDX);
+ break;
+ case MAE_MPORT_DESC_MPORT_TYPE_ALIAS:
+ d->alias_mport_id = MCDI_STRUCT_DWORD(desc,
+ MAE_MPORT_DESC_ALIAS_DELIVER_MPORT_ID);
+ break;
+ case MAE_MPORT_DESC_MPORT_TYPE_VNIC:
+ d->vnic_client_type = MCDI_STRUCT_DWORD(desc,
+ MAE_MPORT_DESC_VNIC_CLIENT_TYPE);
+ d->interface_idx = MCDI_STRUCT_DWORD(desc,
+ MAE_MPORT_DESC_VNIC_FUNCTION_INTERFACE);
+ d->pf_idx = MCDI_STRUCT_WORD(desc,
+ MAE_MPORT_DESC_VNIC_FUNCTION_PF_IDX);
+ d->vf_idx = MCDI_STRUCT_WORD(desc,
+ MAE_MPORT_DESC_VNIC_FUNCTION_VF_IDX);
+ break;
+ default:
+ /* Unknown mport_type, just accept it */
+ break;
+ }
+ rc = efx_mae_process_mport(efx, d);
+ /* Any failure will be due to memory allocation failure,
+ * so there is no point in trying subsequent entries.
+ */
+ if (rc)
+ goto fail;
+ }
+ } while (MCDI_FIELD(outbuf, MAE_MPORT_READ_JOURNAL_OUT, MORE) &&
+ !WARN_ON(!count));
+fail:
+ kfree(outbuf);
+ return rc;
+}
+
int efx_mae_alloc_action_set(struct efx_nic *efx, struct efx_tc_action_set *act)
{
MCDI_DECLARE_BUF(outbuf, MC_CMD_MAE_ACTION_SET_ALLOC_OUT_LEN);
@@ -805,3 +990,34 @@ int efx_mae_delete_rule(struct efx_nic *efx, u32 id)
return -EIO;
return 0;
}
+
+int efx_init_mae(struct efx_nic *efx)
+{
+ struct ef100_nic_data *nic_data = efx->nic_data;
+ struct efx_mae *mae;
+ int rc;
+
+ if (!nic_data->have_mport)
+ return -EINVAL;
+
+ mae = kmalloc(sizeof(*mae), GFP_KERNEL);
+ if (!mae)
+ return -ENOMEM;
+
+ rc = rhashtable_init(&mae->mports_ht, &efx_mae_mports_ht_params);
+ if (rc < 0) {
+ kfree(mae);
+ return rc;
+ }
+ efx->mae = mae;
+ mae->efx = efx;
+ return 0;
+}
+
+void efx_fini_mae(struct efx_nic *efx)
+{
+ struct efx_mae *mae = efx->mae;
+
+ kfree(mae);
+ efx->mae = NULL;
+}
diff --git a/drivers/net/ethernet/sfc/mae.h b/drivers/net/ethernet/sfc/mae.h
index 72343e90e222..bec293a06733 100644
--- a/drivers/net/ethernet/sfc/mae.h
+++ b/drivers/net/ethernet/sfc/mae.h
@@ -13,6 +13,7 @@
#define EF100_MAE_H
/* MCDI interface for the ef100 Match-Action Engine */
+#include <net/devlink.h>
#include "net_driver.h"
#include "tc.h"
#include "mcdi_pcol.h" /* needed for various MC_CMD_MAE_*_NULL defines */
@@ -27,6 +28,40 @@ void efx_mae_mport_mport(struct efx_nic *efx, u32 mport_id, u32 *out);
int efx_mae_lookup_mport(struct efx_nic *efx, u32 selector, u32 *id);
+struct mae_mport_desc {
+ u32 mport_id;
+ u32 flags;
+ u32 caller_flags; /* enum mae_mport_desc_caller_flags */
+ u32 mport_type; /* MAE_MPORT_DESC_MPORT_TYPE_* */
+ union {
+ u32 port_idx; /* for mport_type == NET_PORT */
+ u32 alias_mport_id; /* for mport_type == ALIAS */
+ struct { /* for mport_type == VNIC */
+ u32 vnic_client_type; /* MAE_MPORT_DESC_VNIC_CLIENT_TYPE_* */
+ u32 interface_idx;
+ u16 pf_idx;
+ u16 vf_idx;
+ };
+ };
+ struct rhash_head linkage;
+ struct devlink_port dl_port;
+};
+
+int efx_mae_enumerate_mports(struct efx_nic *efx);
+struct mae_mport_desc *efx_mae_get_mport(struct efx_nic *efx, u32 mport_id);
+void efx_mae_put_mport(struct efx_nic *efx, struct mae_mport_desc *desc);
+
+/**
+ * struct efx_mae - MAE information
+ *
+ * @efx: The associated NIC
+ * @mports_ht: m-port descriptions from MC_CMD_MAE_MPORT_READ_JOURNAL
+ */
+struct efx_mae {
+ struct efx_nic *efx;
+ struct rhashtable mports_ht;
+};
+
int efx_mae_start_counters(struct efx_nic *efx, struct efx_rx_queue *rx_queue);
int efx_mae_stop_counters(struct efx_nic *efx, struct efx_rx_queue *rx_queue);
void efx_mae_counters_grant_credits(struct work_struct *work);
@@ -60,4 +95,9 @@ int efx_mae_insert_rule(struct efx_nic *efx, const struct efx_tc_match *match,
u32 prio, u32 acts_id, u32 *id);
int efx_mae_delete_rule(struct efx_nic *efx, u32 id);
+int efx_init_mae(struct efx_nic *efx);
+void efx_fini_mae(struct efx_nic *efx);
+void efx_mae_remove_mport(void *desc, void *arg);
+int efx_mae_fw_lookup_mport(struct efx_nic *efx, u32 selector, u32 *id);
+int efx_mae_lookup_mport(struct efx_nic *efx, u32 vf, u32 *id);
#endif /* EF100_MAE_H */
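The hash-table parameters referenced by efx_init_mae() live in mae.c; a minimal
sketch of what they plausibly look like, assuming the table is keyed on the u32
mport_id (illustrative, not necessarily the driver's exact definition):

	#include <linux/rhashtable.h>

	static const struct rhashtable_params efx_mae_mports_ht_params = {
		.key_len	= sizeof(u32),
		.key_offset	= offsetof(struct mae_mport_desc, mport_id),
		.head_offset	= offsetof(struct mae_mport_desc, linkage),
	};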
diff --git a/drivers/net/ethernet/sfc/mcdi.c b/drivers/net/ethernet/sfc/mcdi.c
index af338208eae9..a7f2c31071e8 100644
--- a/drivers/net/ethernet/sfc/mcdi.c
+++ b/drivers/net/ethernet/sfc/mcdi.c
@@ -2175,6 +2175,78 @@ int efx_mcdi_get_privilege_mask(struct efx_nic *efx, u32 *mask)
return 0;
}
+int efx_mcdi_nvram_metadata(struct efx_nic *efx, unsigned int type,
+ u32 *subtype, u16 version[4], char *desc,
+ size_t descsize)
+{
+ MCDI_DECLARE_BUF(inbuf, MC_CMD_NVRAM_METADATA_IN_LEN);
+ efx_dword_t *outbuf;
+ size_t outlen;
+ u32 flags;
+ int rc;
+
+ outbuf = kzalloc(MC_CMD_NVRAM_METADATA_OUT_LENMAX_MCDI2, GFP_KERNEL);
+ if (!outbuf)
+ return -ENOMEM;
+
+ MCDI_SET_DWORD(inbuf, NVRAM_METADATA_IN_TYPE, type);
+
+ rc = efx_mcdi_rpc_quiet(efx, MC_CMD_NVRAM_METADATA, inbuf,
+ sizeof(inbuf), outbuf,
+ MC_CMD_NVRAM_METADATA_OUT_LENMAX_MCDI2,
+ &outlen);
+ if (rc)
+ goto out_free;
+ if (outlen < MC_CMD_NVRAM_METADATA_OUT_LENMIN) {
+ rc = -EIO;
+ goto out_free;
+ }
+
+ flags = MCDI_DWORD(outbuf, NVRAM_METADATA_OUT_FLAGS);
+
+ if (desc && descsize > 0) {
+ if (flags & BIT(MC_CMD_NVRAM_METADATA_OUT_DESCRIPTION_VALID_LBN)) {
+ if (descsize <=
+ MC_CMD_NVRAM_METADATA_OUT_DESCRIPTION_NUM(outlen)) {
+ rc = -E2BIG;
+ goto out_free;
+ }
+
+ strncpy(desc,
+ MCDI_PTR(outbuf, NVRAM_METADATA_OUT_DESCRIPTION),
+ MC_CMD_NVRAM_METADATA_OUT_DESCRIPTION_NUM(outlen));
+ desc[MC_CMD_NVRAM_METADATA_OUT_DESCRIPTION_NUM(outlen)] = '\0';
+ } else {
+ desc[0] = '\0';
+ }
+ }
+
+ if (subtype) {
+ if (flags & BIT(MC_CMD_NVRAM_METADATA_OUT_SUBTYPE_VALID_LBN))
+ *subtype = MCDI_DWORD(outbuf, NVRAM_METADATA_OUT_SUBTYPE);
+ else
+ *subtype = 0;
+ }
+
+ if (version) {
+ if (flags & BIT(MC_CMD_NVRAM_METADATA_OUT_VERSION_VALID_LBN)) {
+ version[0] = MCDI_WORD(outbuf, NVRAM_METADATA_OUT_VERSION_W);
+ version[1] = MCDI_WORD(outbuf, NVRAM_METADATA_OUT_VERSION_X);
+ version[2] = MCDI_WORD(outbuf, NVRAM_METADATA_OUT_VERSION_Y);
+ version[3] = MCDI_WORD(outbuf, NVRAM_METADATA_OUT_VERSION_Z);
+ } else {
+ version[0] = 0;
+ version[1] = 0;
+ version[2] = 0;
+ version[3] = 0;
+ }
+ }
+
+out_free:
+ kfree(outbuf);
+ return rc;
+}
+
#ifdef CONFIG_SFC_MTD
#define EFX_MCDI_NVRAM_LEN_MAX 128
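A hedged usage sketch for the new metadata helper; the partition type, buffer
size and log call are placeholders, not code from this series:

	u16 version[4];
	char desc[64];	/* illustrative size */
	u32 subtype;
	int rc;

	rc = efx_mcdi_nvram_metadata(efx, type, &subtype, version,
				     desc, sizeof(desc));
	if (!rc)
		pci_dbg(efx->pci_dev, "NVRAM %u: v%u.%u.%u.%u (%s)\n",
			type, version[0], version[1], version[2],
			version[3], desc);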
diff --git a/drivers/net/ethernet/sfc/mcdi.h b/drivers/net/ethernet/sfc/mcdi.h
index 7e35fec9da35..b139b76febff 100644
--- a/drivers/net/ethernet/sfc/mcdi.h
+++ b/drivers/net/ethernet/sfc/mcdi.h
@@ -229,6 +229,9 @@ void efx_mcdi_sensor_event(struct efx_nic *efx, efx_qword_t *ev);
#define MCDI_WORD(_buf, _field) \
((u16)BUILD_BUG_ON_ZERO(MC_CMD_ ## _field ## _LEN != 2) + \
le16_to_cpu(*(__force const __le16 *)MCDI_PTR(_buf, _field)))
+#define MCDI_STRUCT_WORD(_buf, _field) \
+ ((void)BUILD_BUG_ON_ZERO(_field ## _LEN != 2), \
+ le16_to_cpu(*(__force const __le16 *)MCDI_STRUCT_PTR(_buf, _field)))
/* Write a 16-bit field defined in the protocol as being big-endian. */
#define MCDI_STRUCT_SET_WORD_BE(_buf, _field, _value) do { \
BUILD_BUG_ON(_field ## _LEN != 2); \
@@ -241,6 +244,8 @@ void efx_mcdi_sensor_event(struct efx_nic *efx, efx_qword_t *ev);
EFX_POPULATE_DWORD_1(*_MCDI_STRUCT_DWORD(_buf, _field), EFX_DWORD_0, _value)
#define MCDI_DWORD(_buf, _field) \
EFX_DWORD_FIELD(*_MCDI_DWORD(_buf, _field), EFX_DWORD_0)
+#define MCDI_STRUCT_DWORD(_buf, _field) \
+ EFX_DWORD_FIELD(*_MCDI_STRUCT_DWORD(_buf, _field), EFX_DWORD_0)
/* Write a 32-bit field defined in the protocol as being big-endian. */
#define MCDI_STRUCT_SET_DWORD_BE(_buf, _field, _value) do { \
BUILD_BUG_ON(_field ## _LEN != 4); \
@@ -378,6 +383,9 @@ int efx_mcdi_nvram_info(struct efx_nic *efx, unsigned int type,
size_t *size_out, size_t *erase_size_out,
bool *protected_out);
int efx_new_mcdi_nvram_test_all(struct efx_nic *efx);
+int efx_mcdi_nvram_metadata(struct efx_nic *efx, unsigned int type,
+ u32 *subtype, u16 version[4], char *desc,
+ size_t descsize);
int efx_mcdi_nvram_test_all(struct efx_nic *efx);
int efx_mcdi_handle_assertion(struct efx_nic *efx);
int efx_mcdi_set_id_led(struct efx_nic *efx, enum efx_led_mode mode);
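Once the compile-time length check folds away, MCDI_STRUCT_WORD amounts to a
little-endian 16-bit load at the field's offset; a standalone sketch of the
equivalent operation:

	static inline u16 mcdi_struct_word_equiv(const void *buf, size_t ofst)
	{
		/* what MCDI_STRUCT_PTR() + le16_to_cpu() boil down to */
		return le16_to_cpu(*(const __le16 *)((const u8 *)buf + ofst));
	}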
diff --git a/drivers/net/ethernet/sfc/net_driver.h b/drivers/net/ethernet/sfc/net_driver.h
index 3b49e216768b..fcd51d3992fa 100644
--- a/drivers/net/ethernet/sfc/net_driver.h
+++ b/drivers/net/ethernet/sfc/net_driver.h
@@ -845,6 +845,8 @@ enum efx_xdp_tx_queues_mode {
EFX_XDP_TX_QUEUES_BORROWED /* queues borrowed from net stack */
};
+struct efx_mae;
+
/**
* struct efx_nic - an Efx NIC
* @name: Device name (net device name or bus id before net device registered)
@@ -881,6 +883,7 @@ enum efx_xdp_tx_queues_mode {
* @msi_context: Context for each MSI
* @extra_channel_types: Types of extra (non-traffic) channels that
* should be allocated for this NIC
+ * @mae: Details of the Match Action Engine
* @xdp_tx_queue_count: Number of entries in %xdp_tx_queues.
* @xdp_tx_queues: Array of pointers to tx queues used for XDP transmit.
* @xdp_txq_queues_mode: XDP TX queues sharing strategy.
@@ -994,6 +997,8 @@ enum efx_xdp_tx_queues_mode {
* xdp_rxq_info structures?
* @netdev_notifier: Netdevice notifier.
* @tc: state for TC offload (EF100).
+ * @devlink: reference to devlink structure owned by this device
+ * @dl_port: devlink port associated with the PF
* @mem_bar: The BAR that is mapped into membase.
* @reg_base: Offset from the start of the bar to the function control window.
* @monitor_work: Hardware monitor workitem
@@ -1043,6 +1048,7 @@ struct efx_nic {
struct efx_msi_context msi_context[EFX_MAX_CHANNELS];
const struct efx_channel_type *
extra_channel_type[EFX_MAX_EXTRA_CHANNELS];
+ struct efx_mae *mae;
unsigned int xdp_tx_queue_count;
struct efx_tx_queue **xdp_tx_queues;
@@ -1179,6 +1185,8 @@ struct efx_nic {
struct notifier_block netdev_notifier;
struct efx_tc_state *tc;
+ struct devlink *devlink;
+ struct devlink_port *dl_port;
unsigned int mem_bar;
u32 reg_base;
diff --git a/drivers/net/ethernet/sfc/siena/efx.c b/drivers/net/ethernet/sfc/siena/efx.c
index 60e5b7c8ccf9..ef52ec71d197 100644
--- a/drivers/net/ethernet/sfc/siena/efx.c
+++ b/drivers/net/ethernet/sfc/siena/efx.c
@@ -1007,6 +1007,10 @@ static int efx_pci_probe_post_io(struct efx_nic *efx)
net_dev->features &= ~NETIF_F_HW_VLAN_CTAG_FILTER;
net_dev->features |= efx->fixed_features;
+ net_dev->xdp_features = NETDEV_XDP_ACT_BASIC |
+ NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT;
+
rc = efx_register_netdev(efx);
if (!rc)
return 0;
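With drivers now advertising xdp_features, userspace can read the mask back via
libbpf; a minimal sketch, assuming a libbpf recent enough to expose
feature_flags in struct bpf_xdp_query_opts and a placeholder interface name:

	#include <bpf/libbpf.h>
	#include <net/if.h>
	#include <stdio.h>

	int main(void)
	{
		LIBBPF_OPTS(bpf_xdp_query_opts, opts);
		int ifindex = if_nametoindex("eth0");	/* placeholder */

		if (!ifindex || bpf_xdp_query(ifindex, 0, &opts))
			return 1;
		printf("xdp_features: 0x%llx\n",
		       (unsigned long long)opts.feature_flags);
		return 0;
	}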
diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
index 9b46579b5a10..2d7347b71c41 100644
--- a/drivers/net/ethernet/socionext/netsec.c
+++ b/drivers/net/ethernet/socionext/netsec.c
@@ -2104,6 +2104,9 @@ static int netsec_probe(struct platform_device *pdev)
NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
ndev->hw_features = ndev->features;
+ ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT;
+
priv->rx_cksum_offload_flag = true;
ret = netsec_register_mdio(priv, phy_addr);
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c
index 80efdeeb0b59..18acf7dd74e5 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c
@@ -159,15 +159,13 @@ disable:
return err;
}
-static int dwc_qos_remove(struct platform_device *pdev)
+static void dwc_qos_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct stmmac_priv *priv = netdev_priv(ndev);
clk_disable_unprepare(priv->plat->pclk);
clk_disable_unprepare(priv->plat->stmmac_clk);
-
- return 0;
}
#define SDMEMCOMPPADCTRL 0x8800
@@ -384,7 +382,7 @@ error:
return err;
}
-static int tegra_eqos_remove(struct platform_device *pdev)
+static void tegra_eqos_remove(struct platform_device *pdev)
{
struct tegra_eqos *eqos = get_stmmac_bsp_priv(&pdev->dev);
@@ -394,15 +392,13 @@ static int tegra_eqos_remove(struct platform_device *pdev)
clk_disable_unprepare(eqos->clk_rx);
clk_disable_unprepare(eqos->clk_slave);
clk_disable_unprepare(eqos->clk_master);
-
- return 0;
}
struct dwc_eth_dwmac_data {
int (*probe)(struct platform_device *pdev,
struct plat_stmmacenet_data *data,
struct stmmac_resources *res);
- int (*remove)(struct platform_device *pdev);
+ void (*remove)(struct platform_device *pdev);
};
static const struct dwc_eth_dwmac_data dwc_qos_data = {
@@ -473,21 +469,16 @@ static int dwc_eth_dwmac_remove(struct platform_device *pdev)
struct net_device *ndev = platform_get_drvdata(pdev);
struct stmmac_priv *priv = netdev_priv(ndev);
const struct dwc_eth_dwmac_data *data;
- int err;
data = device_get_match_data(&pdev->dev);
- err = stmmac_dvr_remove(&pdev->dev);
- if (err < 0)
- dev_err(&pdev->dev, "failed to remove platform: %d\n", err);
+ stmmac_dvr_remove(&pdev->dev);
- err = data->remove(pdev);
- if (err < 0)
- dev_err(&pdev->dev, "failed to remove subdriver: %d\n", err);
+ data->remove(pdev);
stmmac_remove_config_dt(pdev, priv->plat);
- return err;
+ return 0;
}
static const struct of_device_id dwc_eth_dwmac_match[] = {
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c
index bd52fb7cf486..ac8580f501e2 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c
@@ -31,6 +31,12 @@
#define GPR_ENET_QOS_CLK_TX_CLK_SEL (0x1 << 20)
#define GPR_ENET_QOS_RGMII_EN (0x1 << 21)
+#define MX93_GPR_ENET_QOS_INTF_MODE_MASK GENMASK(3, 0)
+#define MX93_GPR_ENET_QOS_INTF_SEL_MII (0x0 << 1)
+#define MX93_GPR_ENET_QOS_INTF_SEL_RMII (0x4 << 1)
+#define MX93_GPR_ENET_QOS_INTF_SEL_RGMII (0x1 << 1)
+#define MX93_GPR_ENET_QOS_CLK_GEN_EN (0x1 << 0)
+
struct imx_dwmac_ops {
u32 addr_width;
bool mac_rgmii_txclk_auto_adj;
@@ -90,6 +96,35 @@ imx8dxl_set_intf_mode(struct plat_stmmacenet_data *plat_dat)
return ret;
}
+static int imx93_set_intf_mode(struct plat_stmmacenet_data *plat_dat)
+{
+ struct imx_priv_data *dwmac = plat_dat->bsp_priv;
+ int val;
+
+ switch (plat_dat->interface) {
+ case PHY_INTERFACE_MODE_MII:
+ val = MX93_GPR_ENET_QOS_INTF_SEL_MII;
+ break;
+ case PHY_INTERFACE_MODE_RMII:
+ val = MX93_GPR_ENET_QOS_INTF_SEL_RMII;
+ break;
+ case PHY_INTERFACE_MODE_RGMII:
+ case PHY_INTERFACE_MODE_RGMII_ID:
+ case PHY_INTERFACE_MODE_RGMII_RXID:
+ case PHY_INTERFACE_MODE_RGMII_TXID:
+ val = MX93_GPR_ENET_QOS_INTF_SEL_RGMII;
+ break;
+ default:
+ dev_dbg(dwmac->dev, "imx dwmac doesn't support %d interface\n",
+ plat_dat->interface);
+ return -EINVAL;
+ }
+
+ val |= MX93_GPR_ENET_QOS_CLK_GEN_EN;
+ return regmap_update_bits(dwmac->intf_regmap, dwmac->intf_reg_off,
+ MX93_GPR_ENET_QOS_INTF_MODE_MASK, val);
+}
+
static int imx_dwmac_clks_config(void *priv, bool enabled)
{
struct imx_priv_data *dwmac = priv;
@@ -188,7 +223,9 @@ imx_dwmac_parse_dt(struct imx_priv_data *dwmac, struct device *dev)
}
dwmac->clk_mem = NULL;
- if (of_machine_is_compatible("fsl,imx8dxl")) {
+
+ if (of_machine_is_compatible("fsl,imx8dxl") ||
+ of_machine_is_compatible("fsl,imx93")) {
dwmac->clk_mem = devm_clk_get(dev, "mem");
if (IS_ERR(dwmac->clk_mem)) {
dev_err(dev, "failed to get mem clock\n");
@@ -196,10 +233,11 @@ imx_dwmac_parse_dt(struct imx_priv_data *dwmac, struct device *dev)
}
}
- if (of_machine_is_compatible("fsl,imx8mp")) {
- /* Binding doc describes the property:
- is required by i.MX8MP.
- is optional for i.MX8DXL.
+ if (of_machine_is_compatible("fsl,imx8mp") ||
+ of_machine_is_compatible("fsl,imx93")) {
+ /* Binding doc describes the property:
+ * is required by i.MX8MP, i.MX93.
+ * is optional for i.MX8DXL.
*/
dwmac->intf_regmap = syscon_regmap_lookup_by_phandle(np, "intf_mode");
if (IS_ERR(dwmac->intf_regmap))
@@ -296,9 +334,16 @@ static struct imx_dwmac_ops imx8dxl_dwmac_data = {
.set_intf_mode = imx8dxl_set_intf_mode,
};
+static struct imx_dwmac_ops imx93_dwmac_data = {
+ .addr_width = 32,
+ .mac_rgmii_txclk_auto_adj = true,
+ .set_intf_mode = imx93_set_intf_mode,
+};
+
static const struct of_device_id imx_dwmac_match[] = {
{ .compatible = "nxp,imx8mp-dwmac-eqos", .data = &imx8mp_dwmac_data },
{ .compatible = "nxp,imx8dxl-dwmac-eqos", .data = &imx8dxl_dwmac_data },
+ { .compatible = "nxp,imx93-dwmac-eqos", .data = &imx93_dwmac_data },
{ }
};
MODULE_DEVICE_TABLE(of, imx_dwmac_match);
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
index 6656d76b6766..4b8fd11563e4 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
@@ -1915,11 +1915,12 @@ err_remove_config_dt:
static int rk_gmac_remove(struct platform_device *pdev)
{
struct rk_priv_data *bsp_priv = get_stmmac_bsp_priv(&pdev->dev);
- int ret = stmmac_dvr_remove(&pdev->dev);
+
+ stmmac_dvr_remove(&pdev->dev);
rk_gmac_powerdown(bsp_priv);
- return ret;
+ return 0;
}
#ifdef CONFIG_PM_SLEEP
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c
index 710d7435733e..be3b1ebc06ab 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c
@@ -371,11 +371,12 @@ err_remove_config_dt:
static int sti_dwmac_remove(struct platform_device *pdev)
{
struct sti_dwmac *dwmac = get_stmmac_bsp_priv(&pdev->dev);
- int ret = stmmac_dvr_remove(&pdev->dev);
+
+ stmmac_dvr_remove(&pdev->dev);
clk_disable_unprepare(dwmac->clk);
- return ret;
+ return 0;
}
#ifdef CONFIG_PM_SLEEP
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c
index 2b38a499a404..0616b3a04ff3 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c
@@ -421,9 +421,10 @@ static int stm32_dwmac_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct stmmac_priv *priv = netdev_priv(ndev);
- int ret = stmmac_dvr_remove(&pdev->dev);
struct stm32_dwmac *dwmac = priv->plat->bsp_priv;
+ stmmac_dvr_remove(&pdev->dev);
+
stm32_dwmac_clk_disable(priv->plat->bsp_priv);
if (dwmac->irq_pwr_wakeup >= 0) {
@@ -431,7 +432,7 @@ static int stm32_dwmac_remove(struct platform_device *pdev)
device_init_wakeup(&pdev->dev, false);
}
- return ret;
+ return 0;
}
static int stm32mp1_suspend(struct stm32_dwmac *dwmac)
diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h
index 592b4067f9b8..16a7421715cb 100644
--- a/drivers/net/ethernet/stmicro/stmmac/hwif.h
+++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h
@@ -567,6 +567,7 @@ struct tc_cbs_qopt_offload;
struct flow_cls_offload;
struct tc_taprio_qopt_offload;
struct tc_etf_qopt_offload;
+struct tc_query_caps_base;
struct stmmac_tc_ops {
int (*init)(struct stmmac_priv *priv);
@@ -580,6 +581,8 @@ struct stmmac_tc_ops {
struct tc_taprio_qopt_offload *qopt);
int (*setup_etf)(struct stmmac_priv *priv,
struct tc_etf_qopt_offload *qopt);
+ int (*query_caps)(struct stmmac_priv *priv,
+ struct tc_query_caps_base *base);
};
#define stmmac_tc_init(__priv, __args...) \
@@ -594,6 +597,8 @@ struct stmmac_tc_ops {
stmmac_do_callback(__priv, tc, setup_taprio, __args)
#define stmmac_tc_setup_etf(__priv, __args...) \
stmmac_do_callback(__priv, tc, setup_etf, __args)
+#define stmmac_tc_query_caps(__priv, __args...) \
+ stmmac_do_callback(__priv, tc, query_caps, __args)
struct stmmac_counters;
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
index bdbf86cb102a..3d15e1e92e18 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
@@ -345,7 +345,7 @@ int stmmac_xdp_open(struct net_device *dev);
void stmmac_xdp_release(struct net_device *dev);
int stmmac_resume(struct device *dev);
int stmmac_suspend(struct device *dev);
-int stmmac_dvr_remove(struct device *dev);
+void stmmac_dvr_remove(struct device *dev);
int stmmac_dvr_probe(struct device *device,
struct plat_stmmacenet_data *plat_dat,
struct stmmac_resources *res);
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 1a5b8dab5e9b..e4902a7bb61e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -5992,6 +5992,8 @@ static int stmmac_setup_tc(struct net_device *ndev, enum tc_setup_type type,
struct stmmac_priv *priv = netdev_priv(ndev);
switch (type) {
+ case TC_QUERY_CAPS:
+ return stmmac_tc_query_caps(priv, priv, type_data);
case TC_SETUP_BLOCK:
return flow_block_cb_setup_simple(type_data,
&stmmac_block_cb_list,
@@ -7151,6 +7153,9 @@ int stmmac_dvr_probe(struct device *device,
ndev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
NETIF_F_RXCSUM;
+ ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_XSK_ZEROCOPY |
+ NETDEV_XDP_ACT_NDO_XMIT;
ret = stmmac_tc_init(priv, priv);
if (!ret) {
@@ -7347,7 +7352,7 @@ EXPORT_SYMBOL_GPL(stmmac_dvr_probe);
* Description: this function resets the TX/RX processes, disables the MAC RX/TX
* changes the link status, releases the DMA descriptor rings.
*/
-int stmmac_dvr_remove(struct device *dev)
+void stmmac_dvr_remove(struct device *dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
struct stmmac_priv *priv = netdev_priv(ndev);
@@ -7383,8 +7388,6 @@ int stmmac_dvr_remove(struct device *dev)
pm_runtime_disable(dev);
pm_runtime_put_noidle(dev);
-
- return 0;
}
EXPORT_SYMBOL_GPL(stmmac_dvr_remove);
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
index 5f177ea80725..21aaa2730ac8 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
@@ -45,8 +45,8 @@
#define MII_XGMAC_PA_SHIFT 16
#define MII_XGMAC_DA_SHIFT 21
-static int stmmac_xgmac2_c45_format(struct stmmac_priv *priv, int phyaddr,
- int phyreg, u32 *hw_addr)
+static void stmmac_xgmac2_c45_format(struct stmmac_priv *priv, int phyaddr,
+ int devad, int phyreg, u32 *hw_addr)
{
u32 tmp;
@@ -56,19 +56,14 @@ static int stmmac_xgmac2_c45_format(struct stmmac_priv *priv, int phyaddr,
writel(tmp, priv->ioaddr + XGMAC_MDIO_C22P);
*hw_addr = (phyaddr << MII_XGMAC_PA_SHIFT) | (phyreg & 0xffff);
- *hw_addr |= (phyreg >> MII_DEVADDR_C45_SHIFT) << MII_XGMAC_DA_SHIFT;
- return 0;
+ *hw_addr |= devad << MII_XGMAC_DA_SHIFT;
}
-static int stmmac_xgmac2_c22_format(struct stmmac_priv *priv, int phyaddr,
- int phyreg, u32 *hw_addr)
+static void stmmac_xgmac2_c22_format(struct stmmac_priv *priv, int phyaddr,
+ int phyreg, u32 *hw_addr)
{
u32 tmp;
- /* HW does not support C22 addr >= 4 */
- if (phyaddr > MII_XGMAC_MAX_C22ADDR)
- return -ENODEV;
-
/* Set port as Clause 22 */
tmp = readl(priv->ioaddr + XGMAC_MDIO_C22P);
tmp &= ~MII_XGMAC_C22P_MASK;
@@ -76,16 +71,14 @@ static int stmmac_xgmac2_c22_format(struct stmmac_priv *priv, int phyaddr,
writel(tmp, priv->ioaddr + XGMAC_MDIO_C22P);
*hw_addr = (phyaddr << MII_XGMAC_PA_SHIFT) | (phyreg & 0x1f);
- return 0;
}
-static int stmmac_xgmac2_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
+static int stmmac_xgmac2_mdio_read(struct stmmac_priv *priv, u32 addr,
+ u32 value)
{
- struct net_device *ndev = bus->priv;
- struct stmmac_priv *priv = netdev_priv(ndev);
unsigned int mii_address = priv->hw->mii.addr;
unsigned int mii_data = priv->hw->mii.data;
- u32 tmp, addr, value = MII_XGMAC_BUSY;
+ u32 tmp;
int ret;
ret = pm_runtime_resume_and_get(priv->device);
@@ -99,20 +92,6 @@ static int stmmac_xgmac2_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
goto err_disable_clks;
}
- if (phyreg & MII_ADDR_C45) {
- phyreg &= ~MII_ADDR_C45;
-
- ret = stmmac_xgmac2_c45_format(priv, phyaddr, phyreg, &addr);
- if (ret)
- goto err_disable_clks;
- } else {
- ret = stmmac_xgmac2_c22_format(priv, phyaddr, phyreg, &addr);
- if (ret)
- goto err_disable_clks;
-
- value |= MII_XGMAC_SADDR;
- }
-
value |= (priv->clk_csr << priv->hw->mii.clk_csr_shift)
& priv->hw->mii.clk_csr_mask;
value |= MII_XGMAC_READ;
@@ -144,14 +123,44 @@ err_disable_clks:
return ret;
}
-static int stmmac_xgmac2_mdio_write(struct mii_bus *bus, int phyaddr,
- int phyreg, u16 phydata)
+static int stmmac_xgmac2_mdio_read_c22(struct mii_bus *bus, int phyaddr,
+ int phyreg)
{
struct net_device *ndev = bus->priv;
- struct stmmac_priv *priv = netdev_priv(ndev);
+ struct stmmac_priv *priv;
+ u32 addr;
+
+ priv = netdev_priv(ndev);
+
+ /* HW does not support C22 addr >= 4 */
+ if (phyaddr > MII_XGMAC_MAX_C22ADDR)
+ return -ENODEV;
+
+ stmmac_xgmac2_c22_format(priv, phyaddr, phyreg, &addr);
+
+ return stmmac_xgmac2_mdio_read(priv, addr, MII_XGMAC_BUSY);
+}
+
+static int stmmac_xgmac2_mdio_read_c45(struct mii_bus *bus, int phyaddr,
+ int devad, int phyreg)
+{
+ struct net_device *ndev = bus->priv;
+ struct stmmac_priv *priv;
+ u32 addr;
+
+ priv = netdev_priv(ndev);
+
+ stmmac_xgmac2_c45_format(priv, phyaddr, devad, phyreg, &addr);
+
+ return stmmac_xgmac2_mdio_read(priv, addr, MII_XGMAC_BUSY);
+}
+
+static int stmmac_xgmac2_mdio_write(struct stmmac_priv *priv, u32 addr,
+ u32 value, u16 phydata)
+{
unsigned int mii_address = priv->hw->mii.addr;
unsigned int mii_data = priv->hw->mii.data;
- u32 addr, tmp, value = MII_XGMAC_BUSY;
+ u32 tmp;
int ret;
ret = pm_runtime_resume_and_get(priv->device);
@@ -165,20 +174,6 @@ static int stmmac_xgmac2_mdio_write(struct mii_bus *bus, int phyaddr,
goto err_disable_clks;
}
- if (phyreg & MII_ADDR_C45) {
- phyreg &= ~MII_ADDR_C45;
-
- ret = stmmac_xgmac2_c45_format(priv, phyaddr, phyreg, &addr);
- if (ret)
- goto err_disable_clks;
- } else {
- ret = stmmac_xgmac2_c22_format(priv, phyaddr, phyreg, &addr);
- if (ret)
- goto err_disable_clks;
-
- value |= MII_XGMAC_SADDR;
- }
-
value |= (priv->clk_csr << priv->hw->mii.clk_csr_shift)
& priv->hw->mii.clk_csr_mask;
value |= phydata;
@@ -205,8 +200,63 @@ err_disable_clks:
return ret;
}
+static int stmmac_xgmac2_mdio_write_c22(struct mii_bus *bus, int phyaddr,
+ int phyreg, u16 phydata)
+{
+ struct net_device *ndev = bus->priv;
+ struct stmmac_priv *priv;
+ u32 addr;
+
+ priv = netdev_priv(ndev);
+
+ /* HW does not support C22 addr >= 4 */
+ if (phyaddr > MII_XGMAC_MAX_C22ADDR)
+ return -ENODEV;
+
+ stmmac_xgmac2_c22_format(priv, phyaddr, phyreg, &addr);
+
+ return stmmac_xgmac2_mdio_write(priv, addr,
+ MII_XGMAC_BUSY | MII_XGMAC_SADDR, phydata);
+}
+
+static int stmmac_xgmac2_mdio_write_c45(struct mii_bus *bus, int phyaddr,
+ int devad, int phyreg, u16 phydata)
+{
+ struct net_device *ndev = bus->priv;
+ struct stmmac_priv *priv;
+ u32 addr;
+
+ priv = netdev_priv(ndev);
+
+ stmmac_xgmac2_c45_format(priv, phyaddr, devad, phyreg, &addr);
+
+ return stmmac_xgmac2_mdio_write(priv, addr, MII_XGMAC_BUSY,
+ phydata);
+}
+
+static int stmmac_mdio_read(struct stmmac_priv *priv, int data, u32 value)
+{
+ unsigned int mii_address = priv->hw->mii.addr;
+ unsigned int mii_data = priv->hw->mii.data;
+ u32 v;
+
+ if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY),
+ 100, 10000))
+ return -EBUSY;
+
+ writel(data, priv->ioaddr + mii_data);
+ writel(value, priv->ioaddr + mii_address);
+
+ if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY),
+ 100, 10000))
+ return -EBUSY;
+
+ /* Read the data from the MII data register */
+ return readl(priv->ioaddr + mii_data) & MII_DATA_MASK;
+}
+
/**
- * stmmac_mdio_read
+ * stmmac_mdio_read_c22
* @bus: points to the mii_bus structure
* @phyaddr: MII addr
* @phyreg: MII reg
@@ -215,15 +265,12 @@ err_disable_clks:
* accessing the PHY registers.
* Fortunately, it seems this has no drawback for the 7109 MAC.
*/
-static int stmmac_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
+static int stmmac_mdio_read_c22(struct mii_bus *bus, int phyaddr, int phyreg)
{
struct net_device *ndev = bus->priv;
struct stmmac_priv *priv = netdev_priv(ndev);
- unsigned int mii_address = priv->hw->mii.addr;
- unsigned int mii_data = priv->hw->mii.data;
u32 value = MII_BUSY;
int data = 0;
- u32 v;
data = pm_runtime_resume_and_get(priv->device);
if (data < 0)
@@ -236,60 +283,94 @@ static int stmmac_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
& priv->hw->mii.clk_csr_mask;
if (priv->plat->has_gmac4) {
value |= MII_GMAC4_READ;
- if (phyreg & MII_ADDR_C45) {
- value |= MII_GMAC4_C45E;
- value &= ~priv->hw->mii.reg_mask;
- value |= ((phyreg >> MII_DEVADDR_C45_SHIFT) <<
- priv->hw->mii.reg_shift) &
- priv->hw->mii.reg_mask;
-
- data |= (phyreg & MII_REGADDR_C45_MASK) <<
- MII_GMAC4_REG_ADDR_SHIFT;
- }
}
- if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY),
- 100, 10000)) {
- data = -EBUSY;
- goto err_disable_clks;
- }
+ data = stmmac_mdio_read(priv, data, value);
- writel(data, priv->ioaddr + mii_data);
- writel(value, priv->ioaddr + mii_address);
+ pm_runtime_put(priv->device);
- if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY),
- 100, 10000)) {
- data = -EBUSY;
- goto err_disable_clks;
+ return data;
+}
+
+/**
+ * stmmac_mdio_read_c45
+ * @bus: points to the mii_bus structure
+ * @phyaddr: MII addr
+ * @devad: device address to read
+ * @phyreg: MII reg
+ * Description: it reads data from the MII register from within the phy device.
+ * For the 7111 GMAC, we must set the bit 0 in the MII address register while
+ * accessing the PHY registers.
+ * Fortunately, it seems this has no drawback for the 7109 MAC.
+ */
+static int stmmac_mdio_read_c45(struct mii_bus *bus, int phyaddr, int devad,
+ int phyreg)
+{
+ struct net_device *ndev = bus->priv;
+ struct stmmac_priv *priv = netdev_priv(ndev);
+ u32 value = MII_BUSY;
+ int data = 0;
+
+ data = pm_runtime_get_sync(priv->device);
+ if (data < 0) {
+ pm_runtime_put_noidle(priv->device);
+ return data;
}
- /* Read the data from the MII data register */
- data = (int)readl(priv->ioaddr + mii_data) & MII_DATA_MASK;
+ value |= (phyaddr << priv->hw->mii.addr_shift)
+ & priv->hw->mii.addr_mask;
+ value |= (phyreg << priv->hw->mii.reg_shift) & priv->hw->mii.reg_mask;
+ value |= (priv->clk_csr << priv->hw->mii.clk_csr_shift)
+ & priv->hw->mii.clk_csr_mask;
+ value |= MII_GMAC4_READ;
+ value |= MII_GMAC4_C45E;
+ value &= ~priv->hw->mii.reg_mask;
+ value |= (devad << priv->hw->mii.reg_shift) & priv->hw->mii.reg_mask;
+
+ data |= phyreg << MII_GMAC4_REG_ADDR_SHIFT;
+
+ data = stmmac_mdio_read(priv, data, value);
-err_disable_clks:
pm_runtime_put(priv->device);
return data;
}
+static int stmmac_mdio_write(struct stmmac_priv *priv, int data, u32 value)
+{
+ unsigned int mii_address = priv->hw->mii.addr;
+ unsigned int mii_data = priv->hw->mii.data;
+ u32 v;
+
+ /* Wait until any existing MII operation is complete */
+ if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY),
+ 100, 10000))
+ return -EBUSY;
+
+ /* Set the MII address register to write */
+ writel(data, priv->ioaddr + mii_data);
+ writel(value, priv->ioaddr + mii_address);
+
+ /* Wait until any existing MII operation is complete */
+ return readl_poll_timeout(priv->ioaddr + mii_address, v,
+ !(v & MII_BUSY), 100, 10000);
+}
+
/**
- * stmmac_mdio_write
+ * stmmac_mdio_write_c22
* @bus: points to the mii_bus structure
* @phyaddr: MII addr
* @phyreg: MII reg
* @phydata: phy data
* Description: it writes the data into the MII register from within the device.
*/
-static int stmmac_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,
- u16 phydata)
+static int stmmac_mdio_write_c22(struct mii_bus *bus, int phyaddr, int phyreg,
+ u16 phydata)
{
struct net_device *ndev = bus->priv;
struct stmmac_priv *priv = netdev_priv(ndev);
- unsigned int mii_address = priv->hw->mii.addr;
- unsigned int mii_data = priv->hw->mii.data;
int ret, data = phydata;
u32 value = MII_BUSY;
- u32 v;
ret = pm_runtime_resume_and_get(priv->device);
if (ret < 0)
@@ -301,38 +382,57 @@ static int stmmac_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,
value |= (priv->clk_csr << priv->hw->mii.clk_csr_shift)
& priv->hw->mii.clk_csr_mask;
- if (priv->plat->has_gmac4) {
+ if (priv->plat->has_gmac4)
value |= MII_GMAC4_WRITE;
- if (phyreg & MII_ADDR_C45) {
- value |= MII_GMAC4_C45E;
- value &= ~priv->hw->mii.reg_mask;
- value |= ((phyreg >> MII_DEVADDR_C45_SHIFT) <<
- priv->hw->mii.reg_shift) &
- priv->hw->mii.reg_mask;
-
- data |= (phyreg & MII_REGADDR_C45_MASK) <<
- MII_GMAC4_REG_ADDR_SHIFT;
- }
- } else {
+ else
value |= MII_WRITE;
- }
- /* Wait until any existing MII operation is complete */
- if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY),
- 100, 10000)) {
- ret = -EBUSY;
- goto err_disable_clks;
+ ret = stmmac_mdio_write(priv, data, value);
+
+ pm_runtime_put(priv->device);
+
+ return ret;
+}
+
+/**
+ * stmmac_mdio_write_c45
+ * @bus: points to the mii_bus structure
+ * @phyaddr: MII addr
+ * @devad: device address to write
+ * @phyreg: MII reg
+ * @phydata: phy data
+ * Description: it writes the data into the MII register from within the device.
+ */
+static int stmmac_mdio_write_c45(struct mii_bus *bus, int phyaddr,
+ int devad, int phyreg, u16 phydata)
+{
+ struct net_device *ndev = bus->priv;
+ struct stmmac_priv *priv = netdev_priv(ndev);
+ int ret, data = phydata;
+ u32 value = MII_BUSY;
+
+ ret = pm_runtime_get_sync(priv->device);
+ if (ret < 0) {
+ pm_runtime_put_noidle(priv->device);
+ return ret;
}
- /* Set the MII address register to write */
- writel(data, priv->ioaddr + mii_data);
- writel(value, priv->ioaddr + mii_address);
+ value |= (phyaddr << priv->hw->mii.addr_shift)
+ & priv->hw->mii.addr_mask;
+ value |= (phyreg << priv->hw->mii.reg_shift) & priv->hw->mii.reg_mask;
- /* Wait until any existing MII operation is complete */
- ret = readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY),
- 100, 10000);
+ value |= (priv->clk_csr << priv->hw->mii.clk_csr_shift)
+ & priv->hw->mii.clk_csr_mask;
+
+ value |= MII_GMAC4_WRITE;
+ value |= MII_GMAC4_C45E;
+ value &= ~priv->hw->mii.reg_mask;
+ value |= (devad << priv->hw->mii.reg_shift) & priv->hw->mii.reg_mask;
+
+ data |= phyreg << MII_GMAC4_REG_ADDR_SHIFT;
+
+ ret = stmmac_mdio_write(priv, data, value);
-err_disable_clks:
pm_runtime_put(priv->device);
return ret;
@@ -453,12 +553,11 @@ int stmmac_mdio_register(struct net_device *ndev)
new_bus->name = "stmmac";
- if (priv->plat->has_gmac4)
- new_bus->probe_capabilities = MDIOBUS_C22_C45;
-
if (priv->plat->has_xgmac) {
- new_bus->read = &stmmac_xgmac2_mdio_read;
- new_bus->write = &stmmac_xgmac2_mdio_write;
+ new_bus->read = &stmmac_xgmac2_mdio_read_c22;
+ new_bus->write = &stmmac_xgmac2_mdio_write_c22;
+ new_bus->read_c45 = &stmmac_xgmac2_mdio_read_c45;
+ new_bus->write_c45 = &stmmac_xgmac2_mdio_write_c45;
/* Right now only C22 phys are supported */
max_addr = MII_XGMAC_MAX_C22ADDR + 1;
@@ -468,8 +567,13 @@ int stmmac_mdio_register(struct net_device *ndev)
dev_err(dev, "Unsupported phy_addr (max=%d)\n",
MII_XGMAC_MAX_C22ADDR);
} else {
- new_bus->read = &stmmac_mdio_read;
- new_bus->write = &stmmac_mdio_write;
+ new_bus->read = &stmmac_mdio_read_c22;
+ new_bus->write = &stmmac_mdio_write_c22;
+ if (priv->plat->has_gmac4) {
+ new_bus->read_c45 = &stmmac_mdio_read_c45;
+ new_bus->write_c45 = &stmmac_mdio_write_c45;
+ }
+
max_addr = PHY_MAX_ADDR;
}
@@ -490,7 +594,7 @@ int stmmac_mdio_register(struct net_device *ndev)
/* Looks like we need a dummy read for XGMAC only and C45 PHYs */
if (priv->plat->has_xgmac)
- stmmac_xgmac2_mdio_read(new_bus, 0, MII_ADDR_C45);
+ stmmac_xgmac2_mdio_read_c45(new_bus, 0, 0, 0);
/* If fixed-link is set, skip PHY scanning */
if (!fwnode)
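With the accessors split, phylib dispatches Clause 45 transactions to
read_c45/write_c45 directly rather than encoding MII_ADDR_C45 into the register
number; a short consumer-side sketch (constants from linux/mdio.h, bus and
address are placeholders):

	#include <linux/mdio.h>
	#include <linux/phy.h>

	static int example_read_pma_status(struct mii_bus *bus, int addr)
	{
		/* Clause 45 read of MMD 1 (PMA/PMD), register 1 (status) */
		return mdiobus_c45_read(bus, addr, MDIO_MMD_PMAPMD, MDIO_STAT1);
	}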
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
index 0046a4ee6e64..067a40fe0a23 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
@@ -711,14 +711,15 @@ int stmmac_pltfr_remove(struct platform_device *pdev)
struct net_device *ndev = platform_get_drvdata(pdev);
struct stmmac_priv *priv = netdev_priv(ndev);
struct plat_stmmacenet_data *plat = priv->plat;
- int ret = stmmac_dvr_remove(&pdev->dev);
+
+ stmmac_dvr_remove(&pdev->dev);
if (plat->exit)
plat->exit(pdev, plat->bsp_priv);
stmmac_remove_config_dt(pdev, plat);
- return ret;
+ return 0;
}
EXPORT_SYMBOL_GPL(stmmac_pltfr_remove);
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
index 2cfb18cef1d4..9d55226479b4 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
@@ -1107,6 +1107,25 @@ static int tc_setup_etf(struct stmmac_priv *priv,
return 0;
}
+static int tc_query_caps(struct stmmac_priv *priv,
+ struct tc_query_caps_base *base)
+{
+ switch (base->type) {
+ case TC_SETUP_QDISC_TAPRIO: {
+ struct tc_taprio_caps *caps = base->caps;
+
+ if (!priv->dma_cap.estsel)
+ return -EOPNOTSUPP;
+
+ caps->gate_mask_per_txq = true;
+
+ return 0;
+ }
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
const struct stmmac_tc_ops dwmac510_tc_ops = {
.init = tc_init,
.setup_cls_u32 = tc_setup_cls_u32,
@@ -1114,4 +1133,5 @@ const struct stmmac_tc_ops dwmac510_tc_ops = {
.setup_cls = tc_setup_cls,
.setup_taprio = tc_setup_taprio,
.setup_etf = tc_setup_etf,
+ .query_caps = tc_query_caps,
};
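For context, the taprio qdisc probes these capabilities through the core helper
before attempting an offload; a hedged sketch of the caller side (names follow
net/sched and are illustrative):

	struct tc_taprio_caps caps = {};
	bool per_txq;

	qdisc_offload_query_caps(dev, TC_SETUP_QDISC_TAPRIO,
				 &caps, sizeof(caps));
	/* true means gate masks apply per TX queue, not per traffic class */
	per_txq = caps.gate_mask_per_txq;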
diff --git a/drivers/net/ethernet/sunplus/spl2sw_mdio.c b/drivers/net/ethernet/sunplus/spl2sw_mdio.c
index 733ae1704269..c8ef17e34f3c 100644
--- a/drivers/net/ethernet/sunplus/spl2sw_mdio.c
+++ b/drivers/net/ethernet/sunplus/spl2sw_mdio.c
@@ -61,9 +61,6 @@ static int spl2sw_mii_read(struct mii_bus *bus, int addr, int regnum)
{
struct spl2sw_common *comm = bus->priv;
- if (regnum & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
return spl2sw_mdio_access(comm, SPL2SW_MDIO_READ_CMD, addr, regnum, 0);
}
@@ -72,9 +69,6 @@ static int spl2sw_mii_write(struct mii_bus *bus, int addr, int regnum, u16 val)
struct spl2sw_common *comm = bus->priv;
int ret;
- if (regnum & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
ret = spl2sw_mdio_access(comm, SPL2SW_MDIO_WRITE_CMD, addr, regnum, val);
if (ret < 0)
return ret;
diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 6cda4b7c10cb..4e3861c47708 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -1426,6 +1426,70 @@ static const struct net_device_ops am65_cpsw_nuss_netdev_ops = {
.ndo_setup_tc = am65_cpsw_qos_ndo_setup_tc,
};
+static void am65_cpsw_disable_phy(struct phy *phy)
+{
+ phy_power_off(phy);
+ phy_exit(phy);
+}
+
+static int am65_cpsw_enable_phy(struct phy *phy)
+{
+ int ret;
+
+ ret = phy_init(phy);
+ if (ret < 0)
+ return ret;
+
+ ret = phy_power_on(phy);
+ if (ret < 0) {
+ phy_exit(phy);
+ return ret;
+ }
+
+ return 0;
+}
+
+static void am65_cpsw_disable_serdes_phy(struct am65_cpsw_common *common)
+{
+ struct am65_cpsw_port *port;
+ struct phy *phy;
+ int i;
+
+ for (i = 0; i < common->port_num; i++) {
+ port = &common->ports[i];
+ phy = port->slave.serdes_phy;
+ if (phy)
+ am65_cpsw_disable_phy(phy);
+ }
+}
+
+static int am65_cpsw_init_serdes_phy(struct device *dev, struct device_node *port_np,
+ struct am65_cpsw_port *port)
+{
+ const char *name = "serdes-phy";
+ struct phy *phy;
+ int ret;
+
+ phy = devm_of_phy_get(dev, port_np, name);
+ if (PTR_ERR(phy) == -ENODEV)
+ return 0;
+ if (IS_ERR(phy))
+ return PTR_ERR(phy);
+
+ /* Serdes PHY exists. Store it. */
+ port->slave.serdes_phy = phy;
+
+ ret = am65_cpsw_enable_phy(phy);
+ if (ret < 0)
+ goto err_phy;
+
+ return 0;
+
+err_phy:
+ devm_phy_put(dev, phy);
+ return ret;
+}
+
static void am65_cpsw_nuss_mac_config(struct phylink_config *config, unsigned int mode,
const struct phylink_link_state *state)
{
@@ -1883,11 +1947,6 @@ static int am65_cpsw_init_cpts(struct am65_cpsw_common *common)
int ret = PTR_ERR(cpts);
of_node_put(node);
- if (ret == -EOPNOTSUPP) {
- dev_info(dev, "cpts disabled\n");
- return 0;
- }
-
dev_err(dev, "cpts create err %d\n", ret);
return ret;
}
@@ -1969,6 +2028,11 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
goto of_node_put;
}
+ /* Initialize the Serdes PHY for the port */
+ ret = am65_cpsw_init_serdes_phy(dev, port_np, port);
+ if (ret)
+ return ret;
+
port->slave.mac_only =
of_property_read_bool(port_np, "ti,mac-only");
@@ -2694,11 +2758,19 @@ static const struct am65_cpsw_pdata j7200_cpswxg_pdata = {
.extra_modes = BIT(PHY_INTERFACE_MODE_QSGMII),
};
+static const struct am65_cpsw_pdata j721e_cpswxg_pdata = {
+ .quirks = 0,
+ .ale_dev_id = "am64-cpswxg",
+ .fdqring_mode = K3_RINGACC_RING_MODE_MESSAGE,
+ .extra_modes = BIT(PHY_INTERFACE_MODE_QSGMII),
+};
+
static const struct of_device_id am65_cpsw_nuss_of_mtable[] = {
{ .compatible = "ti,am654-cpsw-nuss", .data = &am65x_sr1_0},
{ .compatible = "ti,j721e-cpsw-nuss", .data = &j721e_pdata},
{ .compatible = "ti,am642-cpsw-nuss", .data = &am64x_cpswxg_pdata},
{ .compatible = "ti,j7200-cpswxg-nuss", .data = &j7200_cpswxg_pdata},
+ { .compatible = "ti,j721e-cpswxg-nuss", .data = &j721e_cpswxg_pdata},
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, am65_cpsw_nuss_of_mtable);
@@ -2852,6 +2924,7 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
err_free_phylink:
am65_cpsw_nuss_phylink_cleanup(common);
+ am65_cpts_release(common->cpts);
err_of_clear:
of_platform_device_destroy(common->mdio_dev, NULL);
err_pm_clear:
@@ -2880,6 +2953,8 @@ static int am65_cpsw_nuss_remove(struct platform_device *pdev)
*/
am65_cpsw_nuss_cleanup_ndev(common);
am65_cpsw_nuss_phylink_cleanup(common);
+ am65_cpts_release(common->cpts);
+ am65_cpsw_disable_serdes_phy(common);
of_platform_device_destroy(common->mdio_dev, NULL);
diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.h b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
index e5f1c44788c1..cad04662739c 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.h
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
@@ -32,6 +32,7 @@ struct am65_cpsw_slave_data {
struct device_node *phy_node;
phy_interface_t phy_if;
struct phy *ifphy;
+ struct phy *serdes_phy;
bool rx_pause;
bool tx_pause;
u8 mac_addr[ETH_ALEN];
diff --git a/drivers/net/ethernet/ti/am65-cpsw-qos.c b/drivers/net/ethernet/ti/am65-cpsw-qos.c
index e162771893af..8dc2c3085dcf 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-qos.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-qos.c
@@ -585,6 +585,26 @@ static int am65_cpsw_setup_taprio(struct net_device *ndev, void *type_data)
return am65_cpsw_set_taprio(ndev, type_data);
}
+static int am65_cpsw_tc_query_caps(struct net_device *ndev, void *type_data)
+{
+ struct tc_query_caps_base *base = type_data;
+
+ switch (base->type) {
+ case TC_SETUP_QDISC_TAPRIO: {
+ struct tc_taprio_caps *caps = base->caps;
+
+ if (!IS_ENABLED(CONFIG_TI_AM65_CPSW_TAS))
+ return -EOPNOTSUPP;
+
+ caps->gate_mask_per_txq = true;
+
+ return 0;
+ }
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
static int am65_cpsw_qos_clsflower_add_policer(struct am65_cpsw_port *port,
struct netlink_ext_ack *extack,
struct flow_cls_offload *cls,
@@ -765,6 +785,8 @@ int am65_cpsw_qos_ndo_setup_tc(struct net_device *ndev, enum tc_setup_type type,
void *type_data)
{
switch (type) {
+ case TC_QUERY_CAPS:
+ return am65_cpsw_tc_query_caps(ndev, type_data);
case TC_SETUP_QDISC_TAPRIO:
return am65_cpsw_setup_taprio(ndev, type_data);
case TC_SETUP_BLOCK:
diff --git a/drivers/net/ethernet/ti/am65-cpts.c b/drivers/net/ethernet/ti/am65-cpts.c
index 9535396b28cd..16ee9c29cb35 100644
--- a/drivers/net/ethernet/ti/am65-cpts.c
+++ b/drivers/net/ethernet/ti/am65-cpts.c
@@ -176,6 +176,10 @@ struct am65_cpts {
u32 genf_enable;
u32 hw_ts_enable;
struct sk_buff_head txq;
+ bool pps_enabled;
+ bool pps_present;
+ u32 pps_hw_ts_idx;
+ u32 pps_genf_idx;
/* context save/restore */
u64 sr_cpts_ns;
u64 sr_ktime_ns;
@@ -319,8 +323,15 @@ static int am65_cpts_fifo_read(struct am65_cpts *cpts)
case AM65_CPTS_EV_HW:
pevent.index = am65_cpts_event_get_port(event) - 1;
pevent.timestamp = event->timestamp;
- pevent.type = PTP_CLOCK_EXTTS;
- dev_dbg(cpts->dev, "AM65_CPTS_EV_HW p:%d t:%llu\n",
+ if (cpts->pps_enabled && pevent.index == cpts->pps_hw_ts_idx) {
+ pevent.type = PTP_CLOCK_PPSUSR;
+ pevent.pps_times.ts_real = ns_to_timespec64(pevent.timestamp);
+ } else {
+ pevent.type = PTP_CLOCK_EXTTS;
+ }
+ dev_dbg(cpts->dev, "AM65_CPTS_EV_HW:%s p:%d t:%llu\n",
+ pevent.type == PTP_CLOCK_EXTTS ?
+ "extts" : "pps",
pevent.index, event->timestamp);
ptp_clock_event(cpts->ptp_clock, &pevent);
@@ -394,10 +405,13 @@ static irqreturn_t am65_cpts_interrupt(int irq, void *dev_id)
static int am65_cpts_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
{
struct am65_cpts *cpts = container_of(ptp, struct am65_cpts, ptp_info);
+ u32 pps_ctrl_val = 0, pps_ppm_hi = 0, pps_ppm_low = 0;
s32 ppb = scaled_ppm_to_ppb(scaled_ppm);
+ int pps_index = cpts->pps_genf_idx;
+ u64 adj_period, pps_adj_period;
+ u32 ctrl_val, ppm_hi, ppm_low;
+ unsigned long flags;
int neg_adj = 0;
- u64 adj_period;
- u32 val;
if (ppb < 0) {
neg_adj = 1;
@@ -417,17 +431,53 @@ static int am65_cpts_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
mutex_lock(&cpts->ptp_clk_lock);
- val = am65_cpts_read32(cpts, control);
+ ctrl_val = am65_cpts_read32(cpts, control);
if (neg_adj)
- val |= AM65_CPTS_CONTROL_TS_PPM_DIR;
+ ctrl_val |= AM65_CPTS_CONTROL_TS_PPM_DIR;
else
- val &= ~AM65_CPTS_CONTROL_TS_PPM_DIR;
- am65_cpts_write32(cpts, val, control);
+ ctrl_val &= ~AM65_CPTS_CONTROL_TS_PPM_DIR;
+
+ ppm_hi = upper_32_bits(adj_period) & 0x3FF;
+ ppm_low = lower_32_bits(adj_period);
+
+ if (cpts->pps_enabled) {
+ pps_ctrl_val = am65_cpts_read32(cpts, genf[pps_index].control);
+ if (neg_adj)
+ pps_ctrl_val &= ~BIT(1);
+ else
+ pps_ctrl_val |= BIT(1);
+
+ /* GenF PPM does its correction in units of the cpts refclk tick,
+ * which is (cpts->ts_add_val + 1) ns, so the GenF PPM adjustment
+ * period needs to be scaled accordingly.
+ */
+ pps_adj_period = adj_period * (cpts->ts_add_val + 1);
+ pps_ppm_hi = upper_32_bits(pps_adj_period) & 0x3FF;
+ pps_ppm_low = lower_32_bits(pps_adj_period);
+ }
+
+ spin_lock_irqsave(&cpts->lock, flags);
+
+ /* All writes below must be done extremely fast:
+ * - a delay between the PPM direction and PPM value changes can
+ * cause errors, as the old PPM correction is applied in the
+ * wrong direction
+ * - a delay between the CPTS-clock PPM config and the GenF PPM
+ * config can cause errors, as the CPTS-clock PPM runs with the
+ * new config while the GenF PPM still uses the old one for a
+ * short period of time
+ */
+
+ am65_cpts_write32(cpts, ctrl_val, control);
+ am65_cpts_write32(cpts, ppm_hi, ts_ppm_hi);
+ am65_cpts_write32(cpts, ppm_low, ts_ppm_low);
+
+ if (cpts->pps_enabled) {
+ am65_cpts_write32(cpts, pps_ctrl_val, genf[pps_index].control);
+ am65_cpts_write32(cpts, pps_ppm_hi, genf[pps_index].ppm_hi);
+ am65_cpts_write32(cpts, pps_ppm_low, genf[pps_index].ppm_low);
+ }
- val = upper_32_bits(adj_period) & 0x3FF;
- am65_cpts_write32(cpts, val, ts_ppm_hi);
- val = lower_32_bits(adj_period);
- am65_cpts_write32(cpts, val, ts_ppm_low);
+ /* All GenF/EstF can be updated here the same way */
+ spin_unlock_irqrestore(&cpts->lock, flags);
mutex_unlock(&cpts->ptp_clk_lock);
@@ -507,7 +557,13 @@ static void am65_cpts_extts_enable_hw(struct am65_cpts *cpts, u32 index, int on)
static int am65_cpts_extts_enable(struct am65_cpts *cpts, u32 index, int on)
{
- if (!!(cpts->hw_ts_enable & BIT(index)) == !!on)
+ if (index >= cpts->ptp_info.n_ext_ts)
+ return -ENXIO;
+
+ if (cpts->pps_present && index == cpts->pps_hw_ts_idx)
+ return -EINVAL;
+
+ if (((cpts->hw_ts_enable & BIT(index)) >> index) == on)
return 0;
mutex_lock(&cpts->ptp_clk_lock);
@@ -591,6 +647,12 @@ static void am65_cpts_perout_enable_hw(struct am65_cpts *cpts,
static int am65_cpts_perout_enable(struct am65_cpts *cpts,
struct ptp_perout_request *req, int on)
{
+ if (req->index >= cpts->ptp_info.n_per_out)
+ return -ENXIO;
+
+ if (cpts->pps_present && req->index == cpts->pps_genf_idx)
+ return -EINVAL;
+
if (!!(cpts->genf_enable & BIT(req->index)) == !!on)
return 0;
@@ -604,6 +666,48 @@ static int am65_cpts_perout_enable(struct am65_cpts *cpts,
return 0;
}
+static int am65_cpts_pps_enable(struct am65_cpts *cpts, int on)
+{
+ int ret = 0;
+ struct timespec64 ts;
+ struct ptp_clock_request rq;
+ u64 ns;
+
+ if (!cpts->pps_present)
+ return -EINVAL;
+
+ if (cpts->pps_enabled == !!on)
+ return 0;
+
+ mutex_lock(&cpts->ptp_clk_lock);
+
+ if (on) {
+ am65_cpts_extts_enable_hw(cpts, cpts->pps_hw_ts_idx, on);
+
+ ns = am65_cpts_gettime(cpts, NULL);
+ ts = ns_to_timespec64(ns);
+ rq.perout.period.sec = 1;
+ rq.perout.period.nsec = 0;
+ rq.perout.start.sec = ts.tv_sec + 2;
+ rq.perout.start.nsec = 0;
+ rq.perout.index = cpts->pps_genf_idx;
+
+ am65_cpts_perout_enable_hw(cpts, &rq.perout, on);
+ cpts->pps_enabled = true;
+ } else {
+ rq.perout.index = cpts->pps_genf_idx;
+ am65_cpts_perout_enable_hw(cpts, &rq.perout, on);
+ am65_cpts_extts_enable_hw(cpts, cpts->pps_hw_ts_idx, on);
+ cpts->pps_enabled = false;
+ }
+
+ mutex_unlock(&cpts->ptp_clk_lock);
+
+ dev_dbg(cpts->dev, "%s: pps: %s\n",
+ __func__, on ? "enabled" : "disabled");
+ return ret;
+}
+
static int am65_cpts_ptp_enable(struct ptp_clock_info *ptp,
struct ptp_clock_request *rq, int on)
{
@@ -614,6 +718,8 @@ static int am65_cpts_ptp_enable(struct ptp_clock_info *ptp,
return am65_cpts_extts_enable(cpts, rq->extts.index, on);
case PTP_CLK_REQ_PEROUT:
return am65_cpts_perout_enable(cpts, &rq->perout, on);
+ case PTP_CLK_REQ_PPS:
+ return am65_cpts_pps_enable(cpts, on);
default:
break;
}
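Userspace can exercise the new PTP_CLK_REQ_PPS path through the PHC character
device; a minimal sketch (the device node is a placeholder, and the kernel also
requires CAP_SYS_TIME for this ioctl):

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <linux/ptp_clock.h>

	int main(void)
	{
		int fd = open("/dev/ptp0", O_RDWR);	/* placeholder PHC */

		if (fd < 0 || ioctl(fd, PTP_ENABLE_PPS, 1))
			return 1;
		return 0;
	}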
@@ -926,17 +1032,33 @@ static int am65_cpts_of_parse(struct am65_cpts *cpts, struct device_node *node)
if (!of_property_read_u32(node, "ti,cpts-periodic-outputs", &prop[0]))
cpts->genf_num = prop[0];
+ if (!of_property_read_u32_array(node, "ti,pps", prop, 2)) {
+ cpts->pps_present = true;
+
+ if (prop[0] > 7) {
+ dev_err(cpts->dev, "invalid HWx_TS_PUSH index: %u provided\n", prop[0]);
+ cpts->pps_present = false;
+ }
+ if (prop[1] > 1) {
+ dev_err(cpts->dev, "invalid GENFy index: %u provided\n", prop[1]);
+ cpts->pps_present = false;
+ }
+ if (cpts->pps_present) {
+ cpts->pps_hw_ts_idx = prop[0];
+ cpts->pps_genf_idx = prop[1];
+ }
+ }
+
return cpts_of_mux_clk_setup(cpts, node);
}
-static void am65_cpts_release(void *data)
+void am65_cpts_release(struct am65_cpts *cpts)
{
- struct am65_cpts *cpts = data;
-
ptp_clock_unregister(cpts->ptp_clock);
am65_cpts_disable(cpts);
clk_disable_unprepare(cpts->refclk);
}
+EXPORT_SYMBOL_GPL(am65_cpts_release);
struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs,
struct device_node *node)
@@ -993,6 +1115,8 @@ struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs,
cpts->ptp_info.n_ext_ts = cpts->ext_ts_inputs;
if (cpts->genf_num)
cpts->ptp_info.n_per_out = cpts->genf_num;
+ if (cpts->pps_present)
+ cpts->ptp_info.pps = 1;
am65_cpts_set_add_val(cpts);
@@ -1014,26 +1138,22 @@ struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs,
}
cpts->phc_index = ptp_clock_index(cpts->ptp_clock);
- ret = devm_add_action_or_reset(dev, am65_cpts_release, cpts);
- if (ret) {
- dev_err(dev, "failed to add ptpclk reset action %d", ret);
- return ERR_PTR(ret);
- }
-
ret = devm_request_threaded_irq(dev, cpts->irq, NULL,
am65_cpts_interrupt,
IRQF_ONESHOT, dev_name(dev), cpts);
if (ret < 0) {
dev_err(cpts->dev, "error attaching irq %d\n", ret);
- return ERR_PTR(ret);
+ goto reset_ptpclk;
}
- dev_info(dev, "CPTS ver 0x%08x, freq:%u, add_val:%u\n",
+ dev_info(dev, "CPTS ver 0x%08x, freq:%u, add_val:%u pps:%d\n",
am65_cpts_read32(cpts, idver),
- cpts->refclk_freq, cpts->ts_add_val);
+ cpts->refclk_freq, cpts->ts_add_val, cpts->pps_present);
return cpts;
+reset_ptpclk:
+ am65_cpts_release(cpts);
refclk_disable:
clk_disable_unprepare(cpts->refclk);
return ERR_PTR(ret);
diff --git a/drivers/net/ethernet/ti/am65-cpts.h b/drivers/net/ethernet/ti/am65-cpts.h
index bd08f4b2edd2..6e14df0be113 100644
--- a/drivers/net/ethernet/ti/am65-cpts.h
+++ b/drivers/net/ethernet/ti/am65-cpts.h
@@ -18,6 +18,7 @@ struct am65_cpts_estf_cfg {
};
#if IS_ENABLED(CONFIG_TI_K3_AM65_CPTS)
+void am65_cpts_release(struct am65_cpts *cpts);
struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs,
struct device_node *node);
int am65_cpts_phc_index(struct am65_cpts *cpts);
@@ -31,6 +32,10 @@ void am65_cpts_estf_disable(struct am65_cpts *cpts, int idx);
void am65_cpts_suspend(struct am65_cpts *cpts);
void am65_cpts_resume(struct am65_cpts *cpts);
#else
+static inline void am65_cpts_release(struct am65_cpts *cpts)
+{
+}
+
static inline struct am65_cpts *am65_cpts_create(struct device *dev,
void __iomem *regs,
struct device_node *node)
diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 13c9c2d6b79b..37f0b62ec5d6 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1458,6 +1458,8 @@ static int cpsw_probe_dual_emac(struct cpsw_priv *priv)
priv_sl2->emac_port = 1;
cpsw->slaves[1].ndev = ndev;
ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_CTAG_RX;
+ ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT;
ndev->netdev_ops = &cpsw_netdev_ops;
ndev->ethtool_ops = &cpsw_ethtool_ops;
@@ -1635,6 +1637,8 @@ static int cpsw_probe(struct platform_device *pdev)
cpsw->slaves[0].ndev = ndev;
ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_CTAG_RX;
+ ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT;
ndev->netdev_ops = &cpsw_netdev_ops;
ndev->ethtool_ops = &cpsw_ethtool_ops;
diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
index 83596ec0c7cb..35128dd45ffc 100644
--- a/drivers/net/ethernet/ti/cpsw_new.c
+++ b/drivers/net/ethernet/ti/cpsw_new.c
@@ -1405,6 +1405,10 @@ static int cpsw_create_ports(struct cpsw_common *cpsw)
ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER |
NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_NETNS_LOCAL | NETIF_F_HW_TC;
+ ndev->xdp_features = NETDEV_XDP_ACT_BASIC |
+ NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT;
+
ndev->netdev_ops = &cpsw_netdev_ops;
ndev->ethtool_ops = &cpsw_ethtool_ops;
SET_NETDEV_DEV(ndev, dev);
diff --git a/drivers/net/ethernet/ti/cpsw_priv.c b/drivers/net/ethernet/ti/cpsw_priv.c
index 758295c898ac..e966dd47e2db 100644
--- a/drivers/net/ethernet/ti/cpsw_priv.c
+++ b/drivers/net/ethernet/ti/cpsw_priv.c
@@ -20,6 +20,7 @@
#include <linux/skbuff.h>
#include <net/page_pool.h>
#include <net/pkt_cls.h>
+#include <net/pkt_sched.h>
#include "cpsw.h"
#include "cpts.h"
diff --git a/drivers/net/ethernet/ti/davinci_mdio.c b/drivers/net/ethernet/ti/davinci_mdio.c
index 946b9753ccfb..23169e36a3d4 100644
--- a/drivers/net/ethernet/ti/davinci_mdio.c
+++ b/drivers/net/ethernet/ti/davinci_mdio.c
@@ -225,7 +225,7 @@ static int davinci_get_mdio_data(struct mdiobb_ctrl *ctrl)
return test_bit(MDIO_PIN, &reg);
}
-static int davinci_mdiobb_read(struct mii_bus *bus, int phy, int reg)
+static int davinci_mdiobb_read_c22(struct mii_bus *bus, int phy, int reg)
{
int ret;
@@ -233,7 +233,7 @@ static int davinci_mdiobb_read(struct mii_bus *bus, int phy, int reg)
if (ret < 0)
return ret;
- ret = mdiobb_read(bus, phy, reg);
+ ret = mdiobb_read_c22(bus, phy, reg);
pm_runtime_mark_last_busy(bus->parent);
pm_runtime_put_autosuspend(bus->parent);
@@ -241,8 +241,8 @@ static int davinci_mdiobb_read(struct mii_bus *bus, int phy, int reg)
return ret;
}
-static int davinci_mdiobb_write(struct mii_bus *bus, int phy, int reg,
- u16 val)
+static int davinci_mdiobb_write_c22(struct mii_bus *bus, int phy, int reg,
+ u16 val)
{
int ret;
@@ -250,7 +250,41 @@ static int davinci_mdiobb_write(struct mii_bus *bus, int phy, int reg,
if (ret < 0)
return ret;
- ret = mdiobb_write(bus, phy, reg, val);
+ ret = mdiobb_write_c22(bus, phy, reg, val);
+
+ pm_runtime_mark_last_busy(bus->parent);
+ pm_runtime_put_autosuspend(bus->parent);
+
+ return ret;
+}
+
+static int davinci_mdiobb_read_c45(struct mii_bus *bus, int phy, int devad,
+ int reg)
+{
+ int ret;
+
+ ret = pm_runtime_resume_and_get(bus->parent);
+ if (ret < 0)
+ return ret;
+
+ ret = mdiobb_read_c45(bus, phy, devad, reg);
+
+ pm_runtime_mark_last_busy(bus->parent);
+ pm_runtime_put_autosuspend(bus->parent);
+
+ return ret;
+}
+
+static int davinci_mdiobb_write_c45(struct mii_bus *bus, int phy, int devad,
+ int reg, u16 val)
+{
+ int ret;
+
+ ret = pm_runtime_resume_and_get(bus->parent);
+ if (ret < 0)
+ return ret;
+
+ ret = mdiobb_write_c45(bus, phy, devad, reg, val);
pm_runtime_mark_last_busy(bus->parent);
pm_runtime_put_autosuspend(bus->parent);
@@ -573,8 +607,10 @@ static int davinci_mdio_probe(struct platform_device *pdev)
data->bus->name = dev_name(dev);
if (data->manual_mode) {
- data->bus->read = davinci_mdiobb_read;
- data->bus->write = davinci_mdiobb_write;
+ data->bus->read = davinci_mdiobb_read_c22;
+ data->bus->write = davinci_mdiobb_write_c22;
+ data->bus->read_c45 = davinci_mdiobb_read_c45;
+ data->bus->write_c45 = davinci_mdiobb_write_c45;
data->bus->reset = davinci_mdiobb_reset;
dev_info(dev, "Configuring MDIO in manual mode\n");
diff --git a/drivers/net/ethernet/wangxun/Kconfig b/drivers/net/ethernet/wangxun/Kconfig
index 86310588c6c1..c9d88673d306 100644
--- a/drivers/net/ethernet/wangxun/Kconfig
+++ b/drivers/net/ethernet/wangxun/Kconfig
@@ -18,6 +18,7 @@ if NET_VENDOR_WANGXUN
config LIBWX
tristate
+ select PAGE_POOL
help
Common library for Wangxun(R) Ethernet drivers.
@@ -25,6 +26,7 @@ config NGBE
tristate "Wangxun(R) GbE PCI Express adapters support"
depends on PCI
select LIBWX
+ select PHYLIB
help
This driver supports Wangxun(R) GbE PCI Express family of
adapters.
diff --git a/drivers/net/ethernet/wangxun/libwx/Makefile b/drivers/net/ethernet/wangxun/libwx/Makefile
index 1ed5e23af944..42ccd6e4052e 100644
--- a/drivers/net/ethernet/wangxun/libwx/Makefile
+++ b/drivers/net/ethernet/wangxun/libwx/Makefile
@@ -4,4 +4,4 @@
obj-$(CONFIG_LIBWX) += libwx.o
-libwx-objs := wx_hw.o
+libwx-objs := wx_hw.o wx_lib.o wx_ethtool.o
diff --git a/drivers/net/ethernet/wangxun/libwx/wx_ethtool.c b/drivers/net/ethernet/wangxun/libwx/wx_ethtool.c
new file mode 100644
index 000000000000..93cb6f2294e7
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/libwx/wx_ethtool.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2015 - 2023 Beijing WangXun Technology Co., Ltd. */
+
+#include <linux/pci.h>
+#include <linux/phy.h>
+
+#include "wx_type.h"
+#include "wx_ethtool.h"
+
+void wx_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *info)
+{
+ struct wx *wx = netdev_priv(netdev);
+
+ strscpy(info->driver, wx->driver_name, sizeof(info->driver));
+ strscpy(info->fw_version, wx->eeprom_id, sizeof(info->fw_version));
+ strscpy(info->bus_info, pci_name(wx->pdev), sizeof(info->bus_info));
+}
+EXPORT_SYMBOL(wx_get_drvinfo);
diff --git a/drivers/net/ethernet/wangxun/libwx/wx_ethtool.h b/drivers/net/ethernet/wangxun/libwx/wx_ethtool.h
new file mode 100644
index 000000000000..e85538c69454
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/libwx/wx_ethtool.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2015 - 2023 Beijing WangXun Technology Co., Ltd. */
+
+#ifndef _WX_ETHTOOL_H_
+#define _WX_ETHTOOL_H_
+
+void wx_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *info);
+#endif /* _WX_ETHTOOL_H_ */
diff --git a/drivers/net/ethernet/wangxun/libwx/wx_hw.c b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
index c57dc3238b3f..7db57f934a91 100644
--- a/drivers/net/ethernet/wangxun/libwx/wx_hw.c
+++ b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
@@ -2,59 +2,100 @@
/* Copyright (c) 2015 - 2022 Beijing WangXun Technology Co., Ltd. */
#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
#include <linux/if_ether.h>
#include <linux/iopoll.h>
#include <linux/pci.h>
#include "wx_type.h"
+#include "wx_lib.h"
#include "wx_hw.h"
-static void wx_intr_disable(struct wx_hw *wxhw, u64 qmask)
+static void wx_intr_disable(struct wx *wx, u64 qmask)
{
u32 mask;
- mask = (qmask & 0xFFFFFFFF);
+ mask = (qmask & U32_MAX);
if (mask)
- wr32(wxhw, WX_PX_IMS(0), mask);
+ wr32(wx, WX_PX_IMS(0), mask);
- if (wxhw->mac.type == wx_mac_sp) {
+ if (wx->mac.type == wx_mac_sp) {
mask = (qmask >> 32);
if (mask)
- wr32(wxhw, WX_PX_IMS(1), mask);
+ wr32(wx, WX_PX_IMS(1), mask);
}
}
+void wx_intr_enable(struct wx *wx, u64 qmask)
+{
+ u32 mask;
+
+ mask = (qmask & U32_MAX);
+ if (mask)
+ wr32(wx, WX_PX_IMC(0), mask);
+ if (wx->mac.type == wx_mac_sp) {
+ mask = (qmask >> 32);
+ if (mask)
+ wr32(wx, WX_PX_IMC(1), mask);
+ }
+}
+EXPORT_SYMBOL(wx_intr_enable);
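Both mask helpers split the 64-bit queue bitmap across two 32-bit registers: bits 0-31 always go to register 0, while bits 32-63 exist only on sp-type MACs. An illustrative call, assuming an sp MAC:

/* Illustrative: unmask queue vectors 0 and 33 in one call; BIT_ULL(0)
 * lands in WX_PX_IMC(0) and BIT_ULL(33) becomes bit 1 of WX_PX_IMC(1).
 */
wx_intr_enable(wx, BIT_ULL(0) | BIT_ULL(33));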
+
+/**
+ * wx_irq_disable - Mask off interrupt generation on the NIC
+ * @wx: board private structure
+ **/
+void wx_irq_disable(struct wx *wx)
+{
+ struct pci_dev *pdev = wx->pdev;
+
+ wr32(wx, WX_PX_MISC_IEN, 0);
+ wx_intr_disable(wx, WX_INTR_ALL);
+
+ if (pdev->msix_enabled) {
+ int vector;
+
+ for (vector = 0; vector < wx->num_q_vectors; vector++)
+ synchronize_irq(wx->msix_entries[vector].vector);
+
+ synchronize_irq(wx->msix_entries[vector].vector);
+ } else {
+ synchronize_irq(pdev->irq);
+ }
+}
+EXPORT_SYMBOL(wx_irq_disable);
+
/* cmd_addr is used for some special commands:
 * 1. as the sector address, when implementing the erase sector command
 * 2. as the flash address, when implementing the read/write flash commands
*/
-static int wx_fmgr_cmd_op(struct wx_hw *wxhw, u32 cmd, u32 cmd_addr)
+static int wx_fmgr_cmd_op(struct wx *wx, u32 cmd, u32 cmd_addr)
{
u32 cmd_val = 0, val = 0;
cmd_val = WX_SPI_CMD_CMD(cmd) |
WX_SPI_CMD_CLK(WX_SPI_CLK_DIV) |
cmd_addr;
- wr32(wxhw, WX_SPI_CMD, cmd_val);
+ wr32(wx, WX_SPI_CMD, cmd_val);
return read_poll_timeout(rd32, val, (val & 0x1), 10, 100000,
- false, wxhw, WX_SPI_STATUS);
+ false, wx, WX_SPI_STATUS);
}
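wx_fmgr_cmd_op, like most of the polling in this file, leans on read_poll_timeout() from <linux/iopoll.h>: it repeatedly evaluates the accessor into val, sleeping between reads, until the condition holds or the timeout expires, returning 0 or -ETIMEDOUT. Roughly what the call above does, as a sketch rather than the macro's exact expansion:

/* Approximate open-coded form of read_poll_timeout(rd32, val, (val & 0x1),
 * 10, 100000, false, wx, WX_SPI_STATUS): poll the SPI status register
 * every ~10us for up to 100ms, waiting for the command-done bit.
 */
static int wx_poll_spi_done_sketch(struct wx *wx)
{
	u64 deadline = ktime_get_ns() + 100000 * NSEC_PER_USEC;
	u32 val;

	for (;;) {
		val = rd32(wx, WX_SPI_STATUS);
		if (val & 0x1)
			return 0;		/* command-done bit set */
		if (ktime_get_ns() > deadline)
			return -ETIMEDOUT;	/* flash never answered */
		usleep_range(10, 20);		/* sleep_us between reads */
	}
}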
-static int wx_flash_read_dword(struct wx_hw *wxhw, u32 addr, u32 *data)
+static int wx_flash_read_dword(struct wx *wx, u32 addr, u32 *data)
{
int ret = 0;
- ret = wx_fmgr_cmd_op(wxhw, WX_SPI_CMD_READ_DWORD, addr);
+ ret = wx_fmgr_cmd_op(wx, WX_SPI_CMD_READ_DWORD, addr);
if (ret < 0)
return ret;
- *data = rd32(wxhw, WX_SPI_DATA);
+ *data = rd32(wx, WX_SPI_DATA);
return ret;
}
-int wx_check_flash_load(struct wx_hw *hw, u32 check_bit)
+int wx_check_flash_load(struct wx *hw, u32 check_bit)
{
u32 reg = 0;
int err = 0;
@@ -73,29 +114,25 @@ int wx_check_flash_load(struct wx_hw *hw, u32 check_bit)
}
EXPORT_SYMBOL(wx_check_flash_load);
-void wx_control_hw(struct wx_hw *wxhw, bool drv)
+void wx_control_hw(struct wx *wx, bool drv)
{
- if (drv) {
- /* Let firmware know the driver has taken over */
- wr32m(wxhw, WX_CFG_PORT_CTL,
- WX_CFG_PORT_CTL_DRV_LOAD, WX_CFG_PORT_CTL_DRV_LOAD);
- } else {
- /* Let firmware take over control of hw */
- wr32m(wxhw, WX_CFG_PORT_CTL,
- WX_CFG_PORT_CTL_DRV_LOAD, 0);
- }
+ /* True: Let firmware know the driver has taken over
+ * False: Let firmware take over control of hw
+ */
+ wr32m(wx, WX_CFG_PORT_CTL, WX_CFG_PORT_CTL_DRV_LOAD,
+ drv ? WX_CFG_PORT_CTL_DRV_LOAD : 0);
}
EXPORT_SYMBOL(wx_control_hw);
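The refactor above folds the old if/else into a single masked write. wr32m() is the driver's read-modify-write helper: it reads the register, clears the bits in mask, and ORs in field. Its behavior is along these lines (a sketch mirroring the inline helper in wx_type.h):

/* What wr32m(wx, reg, mask, field) does, roughly: */
static inline void wr32m_sketch(struct wx *wx, u32 reg, u32 mask, u32 field)
{
	u32 val = rd32(wx, reg);

	val = (val & ~mask) | (field & mask);
	wr32(wx, reg, val);
}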
/**
* wx_mng_present - returns 0 when management capability is present
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
*/
-int wx_mng_present(struct wx_hw *wxhw)
+int wx_mng_present(struct wx *wx)
{
u32 fwsm;
- fwsm = rd32(wxhw, WX_MIS_ST);
+ fwsm = rd32(wx, WX_MIS_ST);
if (fwsm & WX_MIS_ST_MNG_INIT_DN)
return 0;
else
@@ -108,40 +145,40 @@ static DEFINE_MUTEX(wx_sw_sync_lock);
/**
* wx_release_sw_sync - Release SW semaphore
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
* @mask: Mask to specify which semaphore to release
*
* Releases the SW semaphore for the specified
* function (CSR, PHY0, PHY1, EEPROM, Flash)
**/
-static void wx_release_sw_sync(struct wx_hw *wxhw, u32 mask)
+static void wx_release_sw_sync(struct wx *wx, u32 mask)
{
mutex_lock(&wx_sw_sync_lock);
- wr32m(wxhw, WX_MNG_SWFW_SYNC, mask, 0);
+ wr32m(wx, WX_MNG_SWFW_SYNC, mask, 0);
mutex_unlock(&wx_sw_sync_lock);
}
/**
* wx_acquire_sw_sync - Acquire SW semaphore
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
* @mask: Mask to specify which semaphore to acquire
*
* Acquires the SW semaphore for the specified
* function (CSR, PHY0, PHY1, EEPROM, Flash)
**/
-static int wx_acquire_sw_sync(struct wx_hw *wxhw, u32 mask)
+static int wx_acquire_sw_sync(struct wx *wx, u32 mask)
{
u32 sem = 0;
int ret = 0;
mutex_lock(&wx_sw_sync_lock);
ret = read_poll_timeout(rd32, sem, !(sem & mask),
- 5000, 2000000, false, wxhw, WX_MNG_SWFW_SYNC);
+ 5000, 2000000, false, wx, WX_MNG_SWFW_SYNC);
if (!ret) {
sem |= mask;
- wr32(wxhw, WX_MNG_SWFW_SYNC, sem);
+ wr32(wx, WX_MNG_SWFW_SYNC, sem);
} else {
- wx_err(wxhw, "SW Semaphore not granted: 0x%x.\n", sem);
+ wx_err(wx, "SW Semaphore not granted: 0x%x.\n", sem);
}
mutex_unlock(&wx_sw_sync_lock);
@@ -150,7 +187,7 @@ static int wx_acquire_sw_sync(struct wx_hw *wxhw, u32 mask)
/**
* wx_host_interface_command - Issue command to manageability block
- * @wxhw: pointer to the HW structure
+ * @wx: pointer to the HW structure
* @buffer: contains the command to write and where the return status will
* be placed
* @length: length of buffer, must be multiple of 4 bytes
@@ -162,7 +199,7 @@ static int wx_acquire_sw_sync(struct wx_hw *wxhw, u32 mask)
* So we will leave this up to the caller to read back the data
* in these cases.
**/
-int wx_host_interface_command(struct wx_hw *wxhw, u32 *buffer,
+int wx_host_interface_command(struct wx *wx, u32 *buffer,
u32 length, u32 timeout, bool return_data)
{
u32 hdr_size = sizeof(struct wx_hic_hdr);
@@ -172,17 +209,17 @@ int wx_host_interface_command(struct wx_hw *wxhw, u32 *buffer,
u16 buf_len;
if (length == 0 || length > WX_HI_MAX_BLOCK_BYTE_LENGTH) {
- wx_err(wxhw, "Buffer length failure buffersize=%d.\n", length);
+ wx_err(wx, "Buffer length failure buffersize=%d.\n", length);
return -EINVAL;
}
- status = wx_acquire_sw_sync(wxhw, WX_MNG_SWFW_SYNC_SW_MB);
+ status = wx_acquire_sw_sync(wx, WX_MNG_SWFW_SYNC_SW_MB);
if (status != 0)
return status;
/* Calculate length in DWORDs. We must be DWORD aligned */
if ((length % (sizeof(u32))) != 0) {
- wx_err(wxhw, "Buffer length failure, not aligned to dword");
+ wx_err(wx, "Buffer length failure, not aligned to dword");
status = -EINVAL;
goto rel_out;
}
@@ -193,38 +230,38 @@ int wx_host_interface_command(struct wx_hw *wxhw, u32 *buffer,
* into the ram area.
*/
for (i = 0; i < dword_len; i++) {
- wr32a(wxhw, WX_MNG_MBOX, i, (__force u32)cpu_to_le32(buffer[i]));
+ wr32a(wx, WX_MNG_MBOX, i, (__force u32)cpu_to_le32(buffer[i]));
/* write flush */
- buf[i] = rd32a(wxhw, WX_MNG_MBOX, i);
+ buf[i] = rd32a(wx, WX_MNG_MBOX, i);
}
/* Setting this bit tells the ARC that a new command is pending. */
- wr32m(wxhw, WX_MNG_MBOX_CTL,
+ wr32m(wx, WX_MNG_MBOX_CTL,
WX_MNG_MBOX_CTL_SWRDY, WX_MNG_MBOX_CTL_SWRDY);
status = read_poll_timeout(rd32, hicr, hicr & WX_MNG_MBOX_CTL_FWRDY, 1000,
- timeout * 1000, false, wxhw, WX_MNG_MBOX_CTL);
+ timeout * 1000, false, wx, WX_MNG_MBOX_CTL);
/* Check command completion */
if (status) {
- wx_dbg(wxhw, "Command has failed with no status valid.\n");
+ wx_dbg(wx, "Command has failed with no status valid.\n");
- buf[0] = rd32(wxhw, WX_MNG_MBOX);
+ buf[0] = rd32(wx, WX_MNG_MBOX);
if ((buffer[0] & 0xff) != (~buf[0] >> 24)) {
status = -EINVAL;
goto rel_out;
}
if ((buf[0] & 0xff0000) >> 16 == 0x80) {
- wx_dbg(wxhw, "It's unknown cmd.\n");
+ wx_dbg(wx, "It's unknown cmd.\n");
status = -EINVAL;
goto rel_out;
}
- wx_dbg(wxhw, "write value:\n");
+ wx_dbg(wx, "write value:\n");
for (i = 0; i < dword_len; i++)
- wx_dbg(wxhw, "%x ", buffer[i]);
- wx_dbg(wxhw, "read value:\n");
+ wx_dbg(wx, "%x ", buffer[i]);
+ wx_dbg(wx, "read value:\n");
for (i = 0; i < dword_len; i++)
- wx_dbg(wxhw, "%x ", buf[i]);
+ wx_dbg(wx, "%x ", buf[i]);
}
if (!return_data)
@@ -235,7 +272,7 @@ int wx_host_interface_command(struct wx_hw *wxhw, u32 *buffer,
/* first pull in the header so we know the buffer length */
for (bi = 0; bi < dword_len; bi++) {
- buffer[bi] = rd32a(wxhw, WX_MNG_MBOX, bi);
+ buffer[bi] = rd32a(wx, WX_MNG_MBOX, bi);
le32_to_cpus(&buffer[bi]);
}
@@ -245,7 +282,7 @@ int wx_host_interface_command(struct wx_hw *wxhw, u32 *buffer,
goto rel_out;
if (length < buf_len + hdr_size) {
- wx_err(wxhw, "Buffer not large enough for reply message.\n");
+ wx_err(wx, "Buffer not large enough for reply message.\n");
status = -EFAULT;
goto rel_out;
}
@@ -255,12 +292,12 @@ int wx_host_interface_command(struct wx_hw *wxhw, u32 *buffer,
/* Pull in the rest of the buffer (bi is where we left off) */
for (; bi <= dword_len; bi++) {
- buffer[bi] = rd32a(wxhw, WX_MNG_MBOX, bi);
+ buffer[bi] = rd32a(wx, WX_MNG_MBOX, bi);
le32_to_cpus(&buffer[bi]);
}
rel_out:
- wx_release_sw_sync(wxhw, WX_MNG_SWFW_SYNC_SW_MB);
+ wx_release_sw_sync(wx, WX_MNG_SWFW_SYNC_SW_MB);
return status;
}
EXPORT_SYMBOL(wx_host_interface_command);
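The mailbox exchange above follows a fixed handshake: take the SW/FW semaphore, copy the command into the WX_MNG_MBOX window as little-endian dwords (reading each slot back as a write flush), set SWRDY to signal the management core, poll for FWRDY, then optionally pull the reply header to learn how many payload dwords to read back. Condensed to its skeleton (a sketch of the flow, error handling omitted):

static void wx_mbox_handshake_sketch(struct wx *wx, u32 *buffer, u32 dword_len)
{
	u32 i;

	wx_acquire_sw_sync(wx, WX_MNG_SWFW_SYNC_SW_MB);		/* 1. lock      */
	for (i = 0; i < dword_len; i++)				/* 2. write cmd */
		wr32a(wx, WX_MNG_MBOX, i, (__force u32)cpu_to_le32(buffer[i]));
	wr32m(wx, WX_MNG_MBOX_CTL,				/* 3. kick fw   */
	      WX_MNG_MBOX_CTL_SWRDY, WX_MNG_MBOX_CTL_SWRDY);
	/* 4. poll WX_MNG_MBOX_CTL until WX_MNG_MBOX_CTL_FWRDY appears */
	/* 5. read reply dwords back from WX_MNG_MBOX, le32_to_cpus()  */
	wx_release_sw_sync(wx, WX_MNG_SWFW_SYNC_SW_MB);		/* 6. unlock    */
}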
@@ -268,13 +305,13 @@ EXPORT_SYMBOL(wx_host_interface_command);
/**
* wx_read_ee_hostif_data - Read EEPROM word using a host interface cmd
* assuming that the semaphore is already obtained.
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
* @offset: offset of word in the EEPROM to read
* @data: word read from the EEPROM
*
* Reads a 16 bit word from the EEPROM using the hostif.
**/
-static int wx_read_ee_hostif_data(struct wx_hw *wxhw, u16 offset, u16 *data)
+static int wx_read_ee_hostif_data(struct wx *wx, u16 offset, u16 *data)
{
struct wx_hic_read_shadow_ram buffer;
int status;
@@ -289,33 +326,33 @@ static int wx_read_ee_hostif_data(struct wx_hw *wxhw, u16 offset, u16 *data)
/* one word */
buffer.length = (__force u16)cpu_to_be16(sizeof(u16));
- status = wx_host_interface_command(wxhw, (u32 *)&buffer, sizeof(buffer),
+ status = wx_host_interface_command(wx, (u32 *)&buffer, sizeof(buffer),
WX_HI_COMMAND_TIMEOUT, false);
if (status != 0)
return status;
- *data = (u16)rd32a(wxhw, WX_MNG_MBOX, FW_NVM_DATA_OFFSET);
+ *data = (u16)rd32a(wx, WX_MNG_MBOX, FW_NVM_DATA_OFFSET);
return status;
}
/**
* wx_read_ee_hostif - Read EEPROM word using a host interface cmd
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
* @offset: offset of word in the EEPROM to read
* @data: word read from the EEPROM
*
* Reads a 16 bit word from the EEPROM using the hostif.
**/
-int wx_read_ee_hostif(struct wx_hw *wxhw, u16 offset, u16 *data)
+int wx_read_ee_hostif(struct wx *wx, u16 offset, u16 *data)
{
int status = 0;
- status = wx_acquire_sw_sync(wxhw, WX_MNG_SWFW_SYNC_SW_FLASH);
+ status = wx_acquire_sw_sync(wx, WX_MNG_SWFW_SYNC_SW_FLASH);
if (status == 0) {
- status = wx_read_ee_hostif_data(wxhw, offset, data);
- wx_release_sw_sync(wxhw, WX_MNG_SWFW_SYNC_SW_FLASH);
+ status = wx_read_ee_hostif_data(wx, offset, data);
+ wx_release_sw_sync(wx, WX_MNG_SWFW_SYNC_SW_FLASH);
}
return status;
@@ -324,14 +361,14 @@ EXPORT_SYMBOL(wx_read_ee_hostif);
/**
* wx_read_ee_hostif_buffer- Read EEPROM word(s) using hostif
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
* @offset: offset of word in the EEPROM to read
* @words: number of words
* @data: word(s) read from the EEPROM
*
* Reads a 16 bit word(s) from the EEPROM using the hostif.
**/
-int wx_read_ee_hostif_buffer(struct wx_hw *wxhw,
+int wx_read_ee_hostif_buffer(struct wx *wx,
u16 offset, u16 words, u16 *data)
{
struct wx_hic_read_shadow_ram buffer;
@@ -342,7 +379,7 @@ int wx_read_ee_hostif_buffer(struct wx_hw *wxhw,
u32 i;
/* Take semaphore for the entire operation. */
- status = wx_acquire_sw_sync(wxhw, WX_MNG_SWFW_SYNC_SW_FLASH);
+ status = wx_acquire_sw_sync(wx, WX_MNG_SWFW_SYNC_SW_FLASH);
if (status != 0)
return status;
@@ -361,20 +398,20 @@ int wx_read_ee_hostif_buffer(struct wx_hw *wxhw,
buffer.address = (__force u32)cpu_to_be32((offset + current_word) * 2);
buffer.length = (__force u16)cpu_to_be16(words_to_read * 2);
- status = wx_host_interface_command(wxhw, (u32 *)&buffer,
+ status = wx_host_interface_command(wx, (u32 *)&buffer,
sizeof(buffer),
WX_HI_COMMAND_TIMEOUT,
false);
if (status != 0) {
- wx_err(wxhw, "Host interface command failed\n");
+ wx_err(wx, "Host interface command failed\n");
goto out;
}
for (i = 0; i < words_to_read; i++) {
u32 reg = WX_MNG_MBOX + (FW_NVM_DATA_OFFSET << 2) + 2 * i;
- value = rd32(wxhw, reg);
+ value = rd32(wx, reg);
data[current_word] = (u16)(value & 0xffff);
current_word++;
i++;
@@ -388,7 +425,7 @@ int wx_read_ee_hostif_buffer(struct wx_hw *wxhw,
}
out:
- wx_release_sw_sync(wxhw, WX_MNG_SWFW_SYNC_SW_FLASH);
+ wx_release_sw_sync(wx, WX_MNG_SWFW_SYNC_SW_FLASH);
return status;
}
EXPORT_SYMBOL(wx_read_ee_hostif_buffer);
@@ -416,12 +453,12 @@ static u8 wx_calculate_checksum(u8 *buffer, u32 length)
/**
* wx_reset_hostif - send reset cmd to fw
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
*
* Sends reset cmd to firmware through the manageability
* block.
**/
-int wx_reset_hostif(struct wx_hw *wxhw)
+int wx_reset_hostif(struct wx *wx)
{
struct wx_hic_reset reset_cmd;
int ret_val = 0;
@@ -430,15 +467,15 @@ int wx_reset_hostif(struct wx_hw *wxhw)
reset_cmd.hdr.cmd = FW_RESET_CMD;
reset_cmd.hdr.buf_len = FW_RESET_LEN;
reset_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
- reset_cmd.lan_id = wxhw->bus.func;
- reset_cmd.reset_type = (u16)wxhw->reset_type;
+ reset_cmd.lan_id = wx->bus.func;
+ reset_cmd.reset_type = (u16)wx->reset_type;
reset_cmd.hdr.checksum = 0;
reset_cmd.hdr.checksum = wx_calculate_checksum((u8 *)&reset_cmd,
(FW_CEM_HDR_LEN +
reset_cmd.hdr.buf_len));
for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) {
- ret_val = wx_host_interface_command(wxhw, (u32 *)&reset_cmd,
+ ret_val = wx_host_interface_command(wx, (u32 *)&reset_cmd,
sizeof(reset_cmd),
WX_HI_COMMAND_TIMEOUT,
true);
@@ -460,14 +497,14 @@ EXPORT_SYMBOL(wx_reset_hostif);
/**
* wx_init_eeprom_params - Initialize EEPROM params
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
*
* Initializes the EEPROM parameters wx_eeprom_info within the
* wx_hw struct in order to set up EEPROM access.
**/
-void wx_init_eeprom_params(struct wx_hw *wxhw)
+void wx_init_eeprom_params(struct wx *wx)
{
- struct wx_eeprom_info *eeprom = &wxhw->eeprom;
+ struct wx_eeprom_info *eeprom = &wx->eeprom;
u16 eeprom_size;
u16 data = 0x80;
@@ -475,21 +512,21 @@ void wx_init_eeprom_params(struct wx_hw *wxhw)
eeprom->semaphore_delay = 10;
eeprom->type = wx_eeprom_none;
- if (!(rd32(wxhw, WX_SPI_STATUS) &
+ if (!(rd32(wx, WX_SPI_STATUS) &
WX_SPI_STATUS_FLASH_BYPASS)) {
eeprom->type = wx_flash;
eeprom_size = 4096;
eeprom->word_size = eeprom_size >> 1;
- wx_dbg(wxhw, "Eeprom params: type = %d, size = %d\n",
+ wx_dbg(wx, "Eeprom params: type = %d, size = %d\n",
eeprom->type, eeprom->word_size);
}
}
- if (wxhw->mac.type == wx_mac_sp) {
- if (wx_read_ee_hostif(wxhw, WX_SW_REGION_PTR, &data)) {
- wx_err(wxhw, "NVM Read Error\n");
+ if (wx->mac.type == wx_mac_sp) {
+ if (wx_read_ee_hostif(wx, WX_SW_REGION_PTR, &data)) {
+ wx_err(wx, "NVM Read Error\n");
return;
}
data = data >> 1;
@@ -501,22 +538,22 @@ EXPORT_SYMBOL(wx_init_eeprom_params);
/**
* wx_get_mac_addr - Generic get MAC address
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
* @mac_addr: Adapter MAC address
*
* Reads the adapter's MAC address from first Receive Address Register (RAR0)
* A reset of the adapter must be performed prior to calling this function
* in order for the MAC address to have been loaded from the EEPROM into RAR0
**/
-void wx_get_mac_addr(struct wx_hw *wxhw, u8 *mac_addr)
+void wx_get_mac_addr(struct wx *wx, u8 *mac_addr)
{
u32 rar_high;
u32 rar_low;
u16 i;
- wr32(wxhw, WX_PSR_MAC_SWC_IDX, 0);
- rar_high = rd32(wxhw, WX_PSR_MAC_SWC_AD_H);
- rar_low = rd32(wxhw, WX_PSR_MAC_SWC_AD_L);
+ wr32(wx, WX_PSR_MAC_SWC_IDX, 0);
+ rar_high = rd32(wx, WX_PSR_MAC_SWC_AD_H);
+ rar_low = rd32(wx, WX_PSR_MAC_SWC_AD_L);
for (i = 0; i < 2; i++)
mac_addr[i] = (u8)(rar_high >> (1 - i) * 8);
@@ -528,7 +565,7 @@ EXPORT_SYMBOL(wx_get_mac_addr);
/**
* wx_set_rar - Set Rx address register
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
* @index: Receive address register to write
* @addr: Address to put into receive address register
* @pools: VMDq "set" or "pool" index
@@ -536,25 +573,25 @@ EXPORT_SYMBOL(wx_get_mac_addr);
*
* Puts an ethernet address into a receive address register.
**/
-int wx_set_rar(struct wx_hw *wxhw, u32 index, u8 *addr, u64 pools,
- u32 enable_addr)
+static int wx_set_rar(struct wx *wx, u32 index, u8 *addr, u64 pools,
+ u32 enable_addr)
{
- u32 rar_entries = wxhw->mac.num_rar_entries;
+ u32 rar_entries = wx->mac.num_rar_entries;
u32 rar_low, rar_high;
/* Make sure we are using a valid rar index range */
if (index >= rar_entries) {
- wx_err(wxhw, "RAR index %d is out of range.\n", index);
+ wx_err(wx, "RAR index %d is out of range.\n", index);
return -EINVAL;
}
/* select the MAC address */
- wr32(wxhw, WX_PSR_MAC_SWC_IDX, index);
+ wr32(wx, WX_PSR_MAC_SWC_IDX, index);
/* setup VMDq pool mapping */
- wr32(wxhw, WX_PSR_MAC_SWC_VM_L, pools & 0xFFFFFFFF);
- if (wxhw->mac.type == wx_mac_sp)
- wr32(wxhw, WX_PSR_MAC_SWC_VM_H, pools >> 32);
+ wr32(wx, WX_PSR_MAC_SWC_VM_L, pools & 0xFFFFFFFF);
+ if (wx->mac.type == wx_mac_sp)
+ wr32(wx, WX_PSR_MAC_SWC_VM_H, pools >> 32);
/* HW expects these in little endian so we reverse the byte
* order from network order (big endian) to little endian
@@ -572,31 +609,30 @@ int wx_set_rar(struct wx_hw *wxhw, u32 index, u8 *addr, u64 pools,
if (enable_addr != 0)
rar_high |= WX_PSR_MAC_SWC_AD_H_AV;
- wr32(wxhw, WX_PSR_MAC_SWC_AD_L, rar_low);
- wr32m(wxhw, WX_PSR_MAC_SWC_AD_H,
- (WX_PSR_MAC_SWC_AD_H_AD(~0) |
- WX_PSR_MAC_SWC_AD_H_ADTYPE(~0) |
+ wr32(wx, WX_PSR_MAC_SWC_AD_L, rar_low);
+ wr32m(wx, WX_PSR_MAC_SWC_AD_H,
+ (WX_PSR_MAC_SWC_AD_H_AD(U16_MAX) |
+ WX_PSR_MAC_SWC_AD_H_ADTYPE(1) |
WX_PSR_MAC_SWC_AD_H_AV),
rar_high);
return 0;
}
-EXPORT_SYMBOL(wx_set_rar);
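To make the byte-order comment concrete: the six address bytes end up packed little-endian, four in the low register and two in the low half of the high register, mirroring the unpack loops in wx_get_mac_addr() above (the packing itself sits in context lines elided from this hunk). Assuming the illustrative address 00:11:22:33:44:55:

/* Sketch: where each byte of 00:11:22:33:44:55 lands in the RAR pair. */
u8 addr[ETH_ALEN] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
u32 rar_low, rar_high;

rar_low  = ((u32)addr[5]) | ((u32)addr[4] << 8) |
	   ((u32)addr[3] << 16) | ((u32)addr[2] << 24);	/* 0x22334455 */
rar_high = ((u32)addr[1]) | ((u32)addr[0] << 8);	/* 0x0011     */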
/**
* wx_clear_rar - Remove Rx address register
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
* @index: Receive address register to write
*
* Clears an ethernet address from a receive address register.
**/
-int wx_clear_rar(struct wx_hw *wxhw, u32 index)
+static int wx_clear_rar(struct wx *wx, u32 index)
{
- u32 rar_entries = wxhw->mac.num_rar_entries;
+ u32 rar_entries = wx->mac.num_rar_entries;
/* Make sure we are using a valid rar index range */
if (index >= rar_entries) {
- wx_err(wxhw, "RAR index %d is out of range.\n", index);
+ wx_err(wx, "RAR index %d is out of range.\n", index);
return -EINVAL;
}
@@ -604,78 +640,77 @@ int wx_clear_rar(struct wx_hw *wxhw, u32 index)
* so save everything except the lower 16 bits that hold part
* of the address and the address valid bit.
*/
- wr32(wxhw, WX_PSR_MAC_SWC_IDX, index);
+ wr32(wx, WX_PSR_MAC_SWC_IDX, index);
- wr32(wxhw, WX_PSR_MAC_SWC_VM_L, 0);
- wr32(wxhw, WX_PSR_MAC_SWC_VM_H, 0);
+ wr32(wx, WX_PSR_MAC_SWC_VM_L, 0);
+ wr32(wx, WX_PSR_MAC_SWC_VM_H, 0);
- wr32(wxhw, WX_PSR_MAC_SWC_AD_L, 0);
- wr32m(wxhw, WX_PSR_MAC_SWC_AD_H,
- (WX_PSR_MAC_SWC_AD_H_AD(~0) |
- WX_PSR_MAC_SWC_AD_H_ADTYPE(~0) |
+ wr32(wx, WX_PSR_MAC_SWC_AD_L, 0);
+ wr32m(wx, WX_PSR_MAC_SWC_AD_H,
+ (WX_PSR_MAC_SWC_AD_H_AD(U16_MAX) |
+ WX_PSR_MAC_SWC_AD_H_ADTYPE(1) |
WX_PSR_MAC_SWC_AD_H_AV),
0);
return 0;
}
-EXPORT_SYMBOL(wx_clear_rar);
/**
* wx_clear_vmdq - Disassociate a VMDq pool index from a rx address
- * @wxhw: pointer to hardware struct
+ * @wx: pointer to hardware struct
* @rar: receive address register index to disassociate
* @vmdq: VMDq pool index to remove from the rar
**/
-static int wx_clear_vmdq(struct wx_hw *wxhw, u32 rar, u32 __maybe_unused vmdq)
+static int wx_clear_vmdq(struct wx *wx, u32 rar, u32 __maybe_unused vmdq)
{
- u32 rar_entries = wxhw->mac.num_rar_entries;
+ u32 rar_entries = wx->mac.num_rar_entries;
u32 mpsar_lo, mpsar_hi;
/* Make sure we are using a valid rar index range */
if (rar >= rar_entries) {
- wx_err(wxhw, "RAR index %d is out of range.\n", rar);
+ wx_err(wx, "RAR index %d is out of range.\n", rar);
return -EINVAL;
}
- wr32(wxhw, WX_PSR_MAC_SWC_IDX, rar);
- mpsar_lo = rd32(wxhw, WX_PSR_MAC_SWC_VM_L);
- mpsar_hi = rd32(wxhw, WX_PSR_MAC_SWC_VM_H);
+ wr32(wx, WX_PSR_MAC_SWC_IDX, rar);
+ mpsar_lo = rd32(wx, WX_PSR_MAC_SWC_VM_L);
+ mpsar_hi = rd32(wx, WX_PSR_MAC_SWC_VM_H);
if (!mpsar_lo && !mpsar_hi)
return 0;
/* was that the last pool using this rar? */
if (mpsar_lo == 0 && mpsar_hi == 0 && rar != 0)
- wx_clear_rar(wxhw, rar);
+ wx_clear_rar(wx, rar);
return 0;
}
/**
* wx_init_uta_tables - Initialize the Unicast Table Array
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
**/
-static void wx_init_uta_tables(struct wx_hw *wxhw)
+static void wx_init_uta_tables(struct wx *wx)
{
int i;
- wx_dbg(wxhw, " Clearing UTA\n");
+ wx_dbg(wx, " Clearing UTA\n");
for (i = 0; i < 128; i++)
- wr32(wxhw, WX_PSR_UC_TBL(i), 0);
+ wr32(wx, WX_PSR_UC_TBL(i), 0);
}
/**
* wx_init_rx_addrs - Initializes receive address filters.
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
*
* Places the MAC address in receive address register 0 and clears the rest
* of the receive address registers. Clears the multicast table. Assumes
* the receiver is in reset when the routine is called.
**/
-void wx_init_rx_addrs(struct wx_hw *wxhw)
+void wx_init_rx_addrs(struct wx *wx)
{
- u32 rar_entries = wxhw->mac.num_rar_entries;
+ u32 rar_entries = wx->mac.num_rar_entries;
u32 psrctl;
int i;
@@ -683,97 +718,829 @@ void wx_init_rx_addrs(struct wx_hw *wxhw)
* to the permanent address.
* Otherwise, use the permanent address from the eeprom.
*/
- if (!is_valid_ether_addr(wxhw->mac.addr)) {
+ if (!is_valid_ether_addr(wx->mac.addr)) {
/* Get the MAC address from the RAR0 for later reference */
- wx_get_mac_addr(wxhw, wxhw->mac.addr);
- wx_dbg(wxhw, "Keeping Current RAR0 Addr = %pM\n", wxhw->mac.addr);
+ wx_get_mac_addr(wx, wx->mac.addr);
+ wx_dbg(wx, "Keeping Current RAR0 Addr = %pM\n", wx->mac.addr);
} else {
/* Setup the receive address. */
- wx_dbg(wxhw, "Overriding MAC Address in RAR[0]\n");
- wx_dbg(wxhw, "New MAC Addr = %pM\n", wxhw->mac.addr);
+ wx_dbg(wx, "Overriding MAC Address in RAR[0]\n");
+ wx_dbg(wx, "New MAC Addr = %pM\n", wx->mac.addr);
- wx_set_rar(wxhw, 0, wxhw->mac.addr, 0, WX_PSR_MAC_SWC_AD_H_AV);
+ wx_set_rar(wx, 0, wx->mac.addr, 0, WX_PSR_MAC_SWC_AD_H_AV);
- if (wxhw->mac.type == wx_mac_sp) {
+ if (wx->mac.type == wx_mac_sp) {
/* clear VMDq pool/queue selection for RAR 0 */
- wx_clear_vmdq(wxhw, 0, WX_CLEAR_VMDQ_ALL);
+ wx_clear_vmdq(wx, 0, WX_CLEAR_VMDQ_ALL);
}
}
/* Zero out the other receive addresses. */
- wx_dbg(wxhw, "Clearing RAR[1-%d]\n", rar_entries - 1);
+ wx_dbg(wx, "Clearing RAR[1-%d]\n", rar_entries - 1);
for (i = 1; i < rar_entries; i++) {
- wr32(wxhw, WX_PSR_MAC_SWC_IDX, i);
- wr32(wxhw, WX_PSR_MAC_SWC_AD_L, 0);
- wr32(wxhw, WX_PSR_MAC_SWC_AD_H, 0);
+ wr32(wx, WX_PSR_MAC_SWC_IDX, i);
+ wr32(wx, WX_PSR_MAC_SWC_AD_L, 0);
+ wr32(wx, WX_PSR_MAC_SWC_AD_H, 0);
}
/* Clear the MTA */
- wxhw->addr_ctrl.mta_in_use = 0;
- psrctl = rd32(wxhw, WX_PSR_CTL);
+ wx->addr_ctrl.mta_in_use = 0;
+ psrctl = rd32(wx, WX_PSR_CTL);
psrctl &= ~(WX_PSR_CTL_MO | WX_PSR_CTL_MFE);
- psrctl |= wxhw->mac.mc_filter_type << WX_PSR_CTL_MO_SHIFT;
- wr32(wxhw, WX_PSR_CTL, psrctl);
- wx_dbg(wxhw, " Clearing MTA\n");
- for (i = 0; i < wxhw->mac.mcft_size; i++)
- wr32(wxhw, WX_PSR_MC_TBL(i), 0);
+ psrctl |= wx->mac.mc_filter_type << WX_PSR_CTL_MO_SHIFT;
+ wr32(wx, WX_PSR_CTL, psrctl);
+ wx_dbg(wx, " Clearing MTA\n");
+ for (i = 0; i < wx->mac.mcft_size; i++)
+ wr32(wx, WX_PSR_MC_TBL(i), 0);
- wx_init_uta_tables(wxhw);
+ wx_init_uta_tables(wx);
}
EXPORT_SYMBOL(wx_init_rx_addrs);
-void wx_disable_rx(struct wx_hw *wxhw)
+static void wx_sync_mac_table(struct wx *wx)
+{
+ int i;
+
+ for (i = 0; i < wx->mac.num_rar_entries; i++) {
+ if (wx->mac_table[i].state & WX_MAC_STATE_MODIFIED) {
+ if (wx->mac_table[i].state & WX_MAC_STATE_IN_USE) {
+ wx_set_rar(wx, i,
+ wx->mac_table[i].addr,
+ wx->mac_table[i].pools,
+ WX_PSR_MAC_SWC_AD_H_AV);
+ } else {
+ wx_clear_rar(wx, i);
+ }
+ wx->mac_table[i].state &= ~(WX_MAC_STATE_MODIFIED);
+ }
+ }
+}
+
+/* this function overwrites the first RAR entry */
+void wx_mac_set_default_filter(struct wx *wx, u8 *addr)
+{
+ memcpy(&wx->mac_table[0].addr, addr, ETH_ALEN);
+ wx->mac_table[0].pools = 1ULL;
+ wx->mac_table[0].state = (WX_MAC_STATE_DEFAULT | WX_MAC_STATE_IN_USE);
+ wx_set_rar(wx, 0, wx->mac_table[0].addr,
+ wx->mac_table[0].pools,
+ WX_PSR_MAC_SWC_AD_H_AV);
+}
+EXPORT_SYMBOL(wx_mac_set_default_filter);
+
+void wx_flush_sw_mac_table(struct wx *wx)
+{
+ u32 i;
+
+ for (i = 0; i < wx->mac.num_rar_entries; i++) {
+ if (!(wx->mac_table[i].state & WX_MAC_STATE_IN_USE))
+ continue;
+
+ wx->mac_table[i].state |= WX_MAC_STATE_MODIFIED;
+ wx->mac_table[i].state &= ~WX_MAC_STATE_IN_USE;
+ memset(wx->mac_table[i].addr, 0, ETH_ALEN);
+ wx->mac_table[i].pools = 0;
+ }
+ wx_sync_mac_table(wx);
+}
+EXPORT_SYMBOL(wx_flush_sw_mac_table);
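The software MAC table introduced here drives the hardware RARs through two state bits: WX_MAC_STATE_IN_USE marks a live filter, and WX_MAC_STATE_MODIFIED marks entries whose RAR is stale. wx_sync_mac_table() pushes only MODIFIED entries — programming or clearing the RAR depending on IN_USE — and then drops the MODIFIED flag. One slot's lifecycle, sketched:

/* Illustrative: mark slot i as a fresh, stale filter and push it.
 * A later flush drops IN_USE, sets MODIFIED again, and the next sync
 * clears RAR i instead.
 */
static void wx_mark_and_sync_sketch(struct wx *wx, u32 i)
{
	wx->mac_table[i].state |= WX_MAC_STATE_MODIFIED | WX_MAC_STATE_IN_USE;
	wx_sync_mac_table(wx);	/* RAR i programmed, MODIFIED cleared */
}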
+
+static int wx_add_mac_filter(struct wx *wx, u8 *addr, u16 pool)
+{
+ u32 i;
+
+ if (is_zero_ether_addr(addr))
+ return -EINVAL;
+
+ for (i = 0; i < wx->mac.num_rar_entries; i++) {
+ if (wx->mac_table[i].state & WX_MAC_STATE_IN_USE) {
+ if (ether_addr_equal(addr, wx->mac_table[i].addr)) {
+ if (wx->mac_table[i].pools != (1ULL << pool)) {
+ memcpy(wx->mac_table[i].addr, addr, ETH_ALEN);
+ wx->mac_table[i].pools |= (1ULL << pool);
+ wx_sync_mac_table(wx);
+ return i;
+ }
+ }
+ }
+
+ if (wx->mac_table[i].state & WX_MAC_STATE_IN_USE)
+ continue;
+ wx->mac_table[i].state |= (WX_MAC_STATE_MODIFIED |
+ WX_MAC_STATE_IN_USE);
+ memcpy(wx->mac_table[i].addr, addr, ETH_ALEN);
+ wx->mac_table[i].pools |= (1ULL << pool);
+ wx_sync_mac_table(wx);
+ return i;
+ }
+ return -ENOMEM;
+}
+
+static int wx_del_mac_filter(struct wx *wx, u8 *addr, u16 pool)
+{
+ u32 i;
+
+ if (is_zero_ether_addr(addr))
+ return -EINVAL;
+
+ /* search table for addr, if found, set to 0 and sync */
+ for (i = 0; i < wx->mac.num_rar_entries; i++) {
+ if (!ether_addr_equal(addr, wx->mac_table[i].addr))
+ continue;
+
+ wx->mac_table[i].state |= WX_MAC_STATE_MODIFIED;
+ wx->mac_table[i].pools &= ~(1ULL << pool);
+ if (!wx->mac_table[i].pools) {
+ wx->mac_table[i].state &= ~WX_MAC_STATE_IN_USE;
+ memset(wx->mac_table[i].addr, 0, ETH_ALEN);
+ }
+ wx_sync_mac_table(wx);
+ return 0;
+ }
+ return -ENOMEM;
+}
+
+static int wx_available_rars(struct wx *wx)
+{
+ u32 i, count = 0;
+
+ for (i = 0; i < wx->mac.num_rar_entries; i++) {
+ if (wx->mac_table[i].state == 0)
+ count++;
+ }
+
+ return count;
+}
+
+/**
+ * wx_write_uc_addr_list - write unicast addresses to RAR table
+ * @netdev: network interface device structure
+ * @pool: index for mac table
+ *
+ * Writes unicast address list to the RAR table.
+ * Returns: -ENOMEM on failure/insufficient address space
+ * 0 on no addresses written
+ * X on writing X addresses to the RAR table
+ **/
+static int wx_write_uc_addr_list(struct net_device *netdev, int pool)
+{
+ struct wx *wx = netdev_priv(netdev);
+ int count = 0;
+
+ /* return ENOMEM indicating insufficient memory for addresses */
+ if (netdev_uc_count(netdev) > wx_available_rars(wx))
+ return -ENOMEM;
+
+ if (!netdev_uc_empty(netdev)) {
+ struct netdev_hw_addr *ha;
+
+ netdev_for_each_uc_addr(ha, netdev) {
+ wx_del_mac_filter(wx, ha->addr, pool);
+ wx_add_mac_filter(wx, ha->addr, pool);
+ count++;
+ }
+ }
+ return count;
+}
+
+/**
+ * wx_mta_vector - Determines bit-vector in multicast table to set
+ * @wx: pointer to private structure
+ * @mc_addr: the multicast address
+ *
+ * Extracts 12 bits from a multicast address to determine which
+ * bit-vector to set in the multicast table. The hardware uses 12 bits of
+ * incoming rx multicast addresses to determine the bit-vector to check in
+ * the MTA. Which of the 4 combinations of 12 bits the hardware uses is set
+ * by the MO field of the MCSTCTRL. The MO field is set during initialization
+ * to mc_filter_type.
+ **/
+static u32 wx_mta_vector(struct wx *wx, u8 *mc_addr)
+{
+ u32 vector = 0;
+
+ switch (wx->mac.mc_filter_type) {
+ case 0: /* use bits [47:36] of the address */
+ vector = ((mc_addr[4] >> 4) | (((u16)mc_addr[5]) << 4));
+ break;
+ case 1: /* use bits [46:35] of the address */
+ vector = ((mc_addr[4] >> 3) | (((u16)mc_addr[5]) << 5));
+ break;
+ case 2: /* use bits [45:34] of the address */
+ vector = ((mc_addr[4] >> 2) | (((u16)mc_addr[5]) << 6));
+ break;
+ case 3: /* use bits [43:32] of the address */
+ vector = ((mc_addr[4]) | (((u16)mc_addr[5]) << 8));
+ break;
+ default: /* Invalid mc_filter_type */
+ wx_err(wx, "MC filter type param set incorrectly\n");
+ break;
+ }
+
+ /* the vector can only be 12 bits wide or the boundary will be exceeded */
+ vector &= 0xFFF;
+ return vector;
+}
+
+/**
+ * wx_set_mta - Set bit-vector in multicast table
+ * @wx: pointer to private structure
+ * @mc_addr: Multicast address
+ *
+ * Sets the bit-vector in the multicast table.
+ **/
+static void wx_set_mta(struct wx *wx, u8 *mc_addr)
+{
+ u32 vector, vector_bit, vector_reg;
+
+ wx->addr_ctrl.mta_in_use++;
+
+ vector = wx_mta_vector(wx, mc_addr);
+ wx_dbg(wx, " bit-vector = 0x%03X\n", vector);
+
+ /* The MTA is a register array of 128 32-bit registers. It is treated
+ * like an array of 4096 bits. We want to set bit
+ * BitArray[vector_value]. So we figure out what register the bit is
+ * in, read it, OR in the new bit, then write back the new value. The
+ * register is determined by the upper 7 bits of the vector value and
+ * the bit within that register is determined by the lower 5 bits of
+ * the value.
+ */
+ vector_reg = (vector >> 5) & 0x7F;
+ vector_bit = vector & 0x1F;
+ wx->mac.mta_shadow[vector_reg] |= (1 << vector_bit);
+}
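Worked through for filter type 0 (bits [47:36] of the address), with illustrative trailing address bytes 0xab and 0xcd: the 12-bit vector is the top nibble of byte 4 glued to all of byte 5, and the register/bit split then follows the upper-7/lower-5 rule described above.

/* Worked example (illustrative bytes): mc_addr[4] = 0xab, mc_addr[5] = 0xcd */
u32 vector = ((0xab >> 4) | ((u16)0xcd << 4)) & 0xFFF;	/* = 0xcda         */
u32 reg    = (vector >> 5) & 0x7F;	/* = 102: index into WX_PSR_MC_TBL */
u32 bit    = vector & 0x1F;		/* = 26: bit within that register  */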
+
+/**
+ * wx_update_mc_addr_list - Updates MAC list of multicast addresses
+ * @wx: pointer to private structure
+ * @netdev: pointer to net device structure
+ *
+ * The given list replaces any existing list. Clears the MC addrs from receive
+ * address registers and the multicast table. Uses unused receive address
+ * registers for the first multicast addresses, and hashes the rest into the
+ * multicast table.
+ **/
+static void wx_update_mc_addr_list(struct wx *wx, struct net_device *netdev)
+{
+ struct netdev_hw_addr *ha;
+ u32 i, psrctl;
+
+ /* Set the new number of MC addresses that we are being requested to
+ * use.
+ */
+ wx->addr_ctrl.num_mc_addrs = netdev_mc_count(netdev);
+ wx->addr_ctrl.mta_in_use = 0;
+
+ /* Clear mta_shadow */
+ wx_dbg(wx, " Clearing MTA\n");
+ memset(&wx->mac.mta_shadow, 0, sizeof(wx->mac.mta_shadow));
+
+ /* Update mta_shadow */
+ netdev_for_each_mc_addr(ha, netdev) {
+ wx_dbg(wx, " Adding the multicast addresses:\n");
+ wx_set_mta(wx, ha->addr);
+ }
+
+ /* Enable mta */
+ for (i = 0; i < wx->mac.mcft_size; i++)
+ wr32a(wx, WX_PSR_MC_TBL(0), i,
+ wx->mac.mta_shadow[i]);
+
+ if (wx->addr_ctrl.mta_in_use > 0) {
+ psrctl = rd32(wx, WX_PSR_CTL);
+ psrctl &= ~(WX_PSR_CTL_MO | WX_PSR_CTL_MFE);
+ psrctl |= WX_PSR_CTL_MFE |
+ (wx->mac.mc_filter_type << WX_PSR_CTL_MO_SHIFT);
+ wr32(wx, WX_PSR_CTL, psrctl);
+ }
+
+ wx_dbg(wx, "Update mc addr list Complete\n");
+}
+
+/**
+ * wx_write_mc_addr_list - write multicast addresses to MTA
+ * @netdev: network interface device structure
+ *
+ * Writes multicast address list to the MTA hash table.
+ * Returns: 0 on no addresses written
+ * X on writing X addresses to MTA
+ **/
+static int wx_write_mc_addr_list(struct net_device *netdev)
+{
+ struct wx *wx = netdev_priv(netdev);
+
+ if (!netif_running(netdev))
+ return 0;
+
+ wx_update_mc_addr_list(wx, netdev);
+
+ return netdev_mc_count(netdev);
+}
+
+/**
+ * wx_set_mac - Change the Ethernet Address of the NIC
+ * @netdev: network interface device structure
+ * @p: pointer to an address structure
+ *
+ * Returns 0 on success, negative on failure
+ **/
+int wx_set_mac(struct net_device *netdev, void *p)
+{
+ struct wx *wx = netdev_priv(netdev);
+ struct sockaddr *addr = p;
+ int retval;
+
+ retval = eth_prepare_mac_addr_change(netdev, addr);
+ if (retval)
+ return retval;
+
+ wx_del_mac_filter(wx, wx->mac.addr, 0);
+ eth_hw_addr_set(netdev, addr->sa_data);
+ memcpy(wx->mac.addr, addr->sa_data, netdev->addr_len);
+
+ wx_mac_set_default_filter(wx, wx->mac.addr);
+
+ return 0;
+}
+EXPORT_SYMBOL(wx_set_mac);
+
+void wx_disable_rx(struct wx *wx)
{
u32 pfdtxgswc;
u32 rxctrl;
- rxctrl = rd32(wxhw, WX_RDB_PB_CTL);
+ rxctrl = rd32(wx, WX_RDB_PB_CTL);
if (rxctrl & WX_RDB_PB_CTL_RXEN) {
- pfdtxgswc = rd32(wxhw, WX_PSR_CTL);
+ pfdtxgswc = rd32(wx, WX_PSR_CTL);
if (pfdtxgswc & WX_PSR_CTL_SW_EN) {
pfdtxgswc &= ~WX_PSR_CTL_SW_EN;
- wr32(wxhw, WX_PSR_CTL, pfdtxgswc);
- wxhw->mac.set_lben = true;
+ wr32(wx, WX_PSR_CTL, pfdtxgswc);
+ wx->mac.set_lben = true;
} else {
- wxhw->mac.set_lben = false;
+ wx->mac.set_lben = false;
}
rxctrl &= ~WX_RDB_PB_CTL_RXEN;
- wr32(wxhw, WX_RDB_PB_CTL, rxctrl);
+ wr32(wx, WX_RDB_PB_CTL, rxctrl);
- if (!(((wxhw->subsystem_device_id & WX_NCSI_MASK) == WX_NCSI_SUP) ||
- ((wxhw->subsystem_device_id & WX_WOL_MASK) == WX_WOL_SUP))) {
+ if (!(((wx->subsystem_device_id & WX_NCSI_MASK) == WX_NCSI_SUP) ||
+ ((wx->subsystem_device_id & WX_WOL_MASK) == WX_WOL_SUP))) {
/* disable mac receiver */
- wr32m(wxhw, WX_MAC_RX_CFG,
+ wr32m(wx, WX_MAC_RX_CFG,
WX_MAC_RX_CFG_RE, 0);
}
}
}
EXPORT_SYMBOL(wx_disable_rx);
+static void wx_enable_rx(struct wx *wx)
+{
+ u32 psrctl;
+
+ /* enable mac receiver */
+ wr32m(wx, WX_MAC_RX_CFG,
+ WX_MAC_RX_CFG_RE, WX_MAC_RX_CFG_RE);
+
+ wr32m(wx, WX_RDB_PB_CTL,
+ WX_RDB_PB_CTL_RXEN, WX_RDB_PB_CTL_RXEN);
+
+ if (wx->mac.set_lben) {
+ psrctl = rd32(wx, WX_PSR_CTL);
+ psrctl |= WX_PSR_CTL_SW_EN;
+ wr32(wx, WX_PSR_CTL, psrctl);
+ wx->mac.set_lben = false;
+ }
+}
+
+/**
+ * wx_set_rxpba - Initialize Rx packet buffer
+ * @wx: pointer to private structure
+ **/
+static void wx_set_rxpba(struct wx *wx)
+{
+ u32 rxpktsize, txpktsize, txpbthresh;
+
+ rxpktsize = wx->mac.rx_pb_size << WX_RDB_PB_SZ_SHIFT;
+ wr32(wx, WX_RDB_PB_SZ(0), rxpktsize);
+
+ /* Only support an equally distributed Tx packet buffer strategy. */
+ txpktsize = wx->mac.tx_pb_size;
+ txpbthresh = (txpktsize / 1024) - WX_TXPKT_SIZE_MAX;
+ wr32(wx, WX_TDB_PB_SZ(0), txpktsize);
+ wr32(wx, WX_TDM_PB_THRE(0), txpbthresh);
+}
+
+static void wx_configure_port(struct wx *wx)
+{
+ u32 value, i;
+
+ value = WX_CFG_PORT_CTL_D_VLAN | WX_CFG_PORT_CTL_QINQ;
+ wr32m(wx, WX_CFG_PORT_CTL,
+ WX_CFG_PORT_CTL_D_VLAN |
+ WX_CFG_PORT_CTL_QINQ,
+ value);
+
+ wr32(wx, WX_CFG_TAG_TPID(0),
+ ETH_P_8021Q | ETH_P_8021AD << 16);
+ wx->tpid[0] = ETH_P_8021Q;
+ wx->tpid[1] = ETH_P_8021AD;
+ for (i = 1; i < 4; i++)
+ wr32(wx, WX_CFG_TAG_TPID(i),
+ ETH_P_8021Q | ETH_P_8021Q << 16);
+ for (i = 2; i < 8; i++)
+ wx->tpid[i] = ETH_P_8021Q;
+}
+
+/**
+ * wx_disable_sec_rx_path - Stops the receive data path
+ * @wx: pointer to private structure
+ *
+ * Stops the receive data path and waits for the HW to internally empty
+ * the Rx security block
+ **/
+static int wx_disable_sec_rx_path(struct wx *wx)
+{
+ u32 secrx;
+
+ wr32m(wx, WX_RSC_CTL,
+ WX_RSC_CTL_RX_DIS, WX_RSC_CTL_RX_DIS);
+
+ return read_poll_timeout(rd32, secrx, secrx & WX_RSC_ST_RSEC_RDY,
+ 1000, 40000, false, wx, WX_RSC_ST);
+}
+
+/**
+ * wx_enable_sec_rx_path - Enables the receive data path
+ * @wx: pointer to private structure
+ *
+ * Enables the receive data path.
+ **/
+static void wx_enable_sec_rx_path(struct wx *wx)
+{
+ wr32m(wx, WX_RSC_CTL, WX_RSC_CTL_RX_DIS, 0);
+ WX_WRITE_FLUSH(wx);
+}
+
+void wx_set_rx_mode(struct net_device *netdev)
+{
+ struct wx *wx = netdev_priv(netdev);
+ u32 fctrl, vmolr, vlnctrl;
+ int count;
+
+ /* Check for Promiscuous and All Multicast modes */
+ fctrl = rd32(wx, WX_PSR_CTL);
+ fctrl &= ~(WX_PSR_CTL_UPE | WX_PSR_CTL_MPE);
+ vmolr = rd32(wx, WX_PSR_VM_L2CTL(0));
+ vmolr &= ~(WX_PSR_VM_L2CTL_UPE |
+ WX_PSR_VM_L2CTL_MPE |
+ WX_PSR_VM_L2CTL_ROPE |
+ WX_PSR_VM_L2CTL_ROMPE);
+ vlnctrl = rd32(wx, WX_PSR_VLAN_CTL);
+ vlnctrl &= ~(WX_PSR_VLAN_CTL_VFE | WX_PSR_VLAN_CTL_CFIEN);
+
+ /* set all bits that we expect to always be set */
+ fctrl |= WX_PSR_CTL_BAM | WX_PSR_CTL_MFE;
+ vmolr |= WX_PSR_VM_L2CTL_BAM |
+ WX_PSR_VM_L2CTL_AUPE |
+ WX_PSR_VM_L2CTL_VACC;
+ vlnctrl |= WX_PSR_VLAN_CTL_VFE;
+
+ wx->addr_ctrl.user_set_promisc = false;
+ if (netdev->flags & IFF_PROMISC) {
+ wx->addr_ctrl.user_set_promisc = true;
+ fctrl |= WX_PSR_CTL_UPE | WX_PSR_CTL_MPE;
+ /* the PF doesn't want packets routed to the VF, so clear UPE */
+ vmolr |= WX_PSR_VM_L2CTL_MPE;
+ vlnctrl &= ~WX_PSR_VLAN_CTL_VFE;
+ }
+
+ if (netdev->flags & IFF_ALLMULTI) {
+ fctrl |= WX_PSR_CTL_MPE;
+ vmolr |= WX_PSR_VM_L2CTL_MPE;
+ }
+
+ if (netdev->features & NETIF_F_RXALL) {
+ vmolr |= (WX_PSR_VM_L2CTL_UPE | WX_PSR_VM_L2CTL_MPE);
+ vlnctrl &= ~WX_PSR_VLAN_CTL_VFE;
+ /* receive bad packets */
+ wr32m(wx, WX_RSC_CTL,
+ WX_RSC_CTL_SAVE_MAC_ERR,
+ WX_RSC_CTL_SAVE_MAC_ERR);
+ } else {
+ vmolr |= WX_PSR_VM_L2CTL_ROPE | WX_PSR_VM_L2CTL_ROMPE;
+ }
+
+ /* Write addresses to available RAR registers; if there is not
+ * sufficient space to store all the addresses, then enable
+ * unicast promiscuous mode
+ */
+ count = wx_write_uc_addr_list(netdev, 0);
+ if (count < 0) {
+ vmolr &= ~WX_PSR_VM_L2CTL_ROPE;
+ vmolr |= WX_PSR_VM_L2CTL_UPE;
+ }
+
+ /* Write addresses to the MTA; if the attempt fails,
+ * then we should just turn on promiscuous mode so
+ * that we can at least receive multicast traffic
+ */
+ count = wx_write_mc_addr_list(netdev);
+ if (count < 0) {
+ vmolr &= ~WX_PSR_VM_L2CTL_ROMPE;
+ vmolr |= WX_PSR_VM_L2CTL_MPE;
+ }
+
+ wr32(wx, WX_PSR_VLAN_CTL, vlnctrl);
+ wr32(wx, WX_PSR_CTL, fctrl);
+ wr32(wx, WX_PSR_VM_L2CTL(0), vmolr);
+}
+EXPORT_SYMBOL(wx_set_rx_mode);
+
+static void wx_set_rx_buffer_len(struct wx *wx)
+{
+ struct net_device *netdev = wx->netdev;
+ u32 mhadd, max_frame;
+
+ max_frame = netdev->mtu + ETH_HLEN + ETH_FCS_LEN;
+ /* adjust max frame to be at least the size of a standard frame */
+ if (max_frame < (ETH_FRAME_LEN + ETH_FCS_LEN))
+ max_frame = (ETH_FRAME_LEN + ETH_FCS_LEN);
+
+ mhadd = rd32(wx, WX_PSR_MAX_SZ);
+ if (max_frame != mhadd)
+ wr32(wx, WX_PSR_MAX_SZ, max_frame);
+}
+
+/* Disable the specified rx queue */
+void wx_disable_rx_queue(struct wx *wx, struct wx_ring *ring)
+{
+ u8 reg_idx = ring->reg_idx;
+ u32 rxdctl;
+ int ret;
+
+ /* write value back with RRCFG.EN bit cleared */
+ wr32m(wx, WX_PX_RR_CFG(reg_idx),
+ WX_PX_RR_CFG_RR_EN, 0);
+
+ /* the hardware may take up to 100us to really disable the rx queue */
+ ret = read_poll_timeout(rd32, rxdctl, !(rxdctl & WX_PX_RR_CFG_RR_EN),
+ 10, 100, true, wx, WX_PX_RR_CFG(reg_idx));
+
+ if (ret == -ETIMEDOUT) {
+ /* Just for information */
+ wx_err(wx,
+ "RRCFG.EN on Rx queue %d not cleared within the polling period\n",
+ reg_idx);
+ }
+}
+EXPORT_SYMBOL(wx_disable_rx_queue);
+
+static void wx_enable_rx_queue(struct wx *wx, struct wx_ring *ring)
+{
+ u8 reg_idx = ring->reg_idx;
+ u32 rxdctl;
+ int ret;
+
+ ret = read_poll_timeout(rd32, rxdctl, rxdctl & WX_PX_RR_CFG_RR_EN,
+ 1000, 10000, true, wx, WX_PX_RR_CFG(reg_idx));
+
+ if (ret == -ETIMEDOUT) {
+ /* Just for information */
+ wx_err(wx,
+ "RRCFG.EN on Rx queue %d not set within the polling period\n",
+ reg_idx);
+ }
+}
+
+static void wx_configure_srrctl(struct wx *wx,
+ struct wx_ring *rx_ring)
+{
+ u16 reg_idx = rx_ring->reg_idx;
+ u32 srrctl;
+
+ srrctl = rd32(wx, WX_PX_RR_CFG(reg_idx));
+ srrctl &= ~(WX_PX_RR_CFG_RR_HDR_SZ |
+ WX_PX_RR_CFG_RR_BUF_SZ |
+ WX_PX_RR_CFG_SPLIT_MODE);
+ /* configure header buffer length, needed for RSC */
+ srrctl |= WX_RXBUFFER_256 << WX_PX_RR_CFG_BHDRSIZE_SHIFT;
+
+ /* configure the packet buffer length */
+ srrctl |= WX_RX_BUFSZ >> WX_PX_RR_CFG_BSIZEPKT_SHIFT;
+
+ wr32(wx, WX_PX_RR_CFG(reg_idx), srrctl);
+}
+
+static void wx_configure_tx_ring(struct wx *wx,
+ struct wx_ring *ring)
+{
+ u32 txdctl = WX_PX_TR_CFG_ENABLE;
+ u8 reg_idx = ring->reg_idx;
+ u64 tdba = ring->dma;
+ int ret;
+
+ /* disable queue to avoid issues while updating state */
+ wr32(wx, WX_PX_TR_CFG(reg_idx), WX_PX_TR_CFG_SWFLSH);
+ WX_WRITE_FLUSH(wx);
+
+ wr32(wx, WX_PX_TR_BAL(reg_idx), tdba & DMA_BIT_MASK(32));
+ wr32(wx, WX_PX_TR_BAH(reg_idx), upper_32_bits(tdba));
+
+ /* reset head and tail pointers */
+ wr32(wx, WX_PX_TR_RP(reg_idx), 0);
+ wr32(wx, WX_PX_TR_WP(reg_idx), 0);
+ ring->tail = wx->hw_addr + WX_PX_TR_WP(reg_idx);
+
+ if (ring->count < WX_MAX_TXD)
+ txdctl |= ring->count / 128 << WX_PX_TR_CFG_TR_SIZE_SHIFT;
+ txdctl |= 0x20 << WX_PX_TR_CFG_WTHRESH_SHIFT;
+
+ /* reinitialize tx_buffer_info */
+ memset(ring->tx_buffer_info, 0,
+ sizeof(struct wx_tx_buffer) * ring->count);
+
+ /* enable queue */
+ wr32(wx, WX_PX_TR_CFG(reg_idx), txdctl);
+
+ /* poll to verify queue is enabled */
+ ret = read_poll_timeout(rd32, txdctl, txdctl & WX_PX_TR_CFG_ENABLE,
+ 1000, 10000, true, wx, WX_PX_TR_CFG(reg_idx));
+ if (ret == -ETIMEDOUT)
+ wx_err(wx, "Could not enable Tx Queue %d\n", reg_idx);
+}
+
+static void wx_configure_rx_ring(struct wx *wx,
+ struct wx_ring *ring)
+{
+ u16 reg_idx = ring->reg_idx;
+ union wx_rx_desc *rx_desc;
+ u64 rdba = ring->dma;
+ u32 rxdctl;
+
+ /* disable queue to avoid issues while updating state */
+ rxdctl = rd32(wx, WX_PX_RR_CFG(reg_idx));
+ wx_disable_rx_queue(wx, ring);
+
+ wr32(wx, WX_PX_RR_BAL(reg_idx), rdba & DMA_BIT_MASK(32));
+ wr32(wx, WX_PX_RR_BAH(reg_idx), upper_32_bits(rdba));
+
+ if (ring->count == WX_MAX_RXD)
+ rxdctl |= 0 << WX_PX_RR_CFG_RR_SIZE_SHIFT;
+ else
+ rxdctl |= (ring->count / 128) << WX_PX_RR_CFG_RR_SIZE_SHIFT;
+
+ rxdctl |= 0x1 << WX_PX_RR_CFG_RR_THER_SHIFT;
+ wr32(wx, WX_PX_RR_CFG(reg_idx), rxdctl);
+
+ /* reset head and tail pointers */
+ wr32(wx, WX_PX_RR_RP(reg_idx), 0);
+ wr32(wx, WX_PX_RR_WP(reg_idx), 0);
+ ring->tail = wx->hw_addr + WX_PX_RR_WP(reg_idx);
+
+ wx_configure_srrctl(wx, ring);
+
+ /* initialize rx_buffer_info */
+ memset(ring->rx_buffer_info, 0,
+ sizeof(struct wx_rx_buffer) * ring->count);
+
+ /* initialize Rx descriptor 0 */
+ rx_desc = WX_RX_DESC(ring, 0);
+ rx_desc->wb.upper.length = 0;
+
+ /* enable receive descriptor ring */
+ wr32m(wx, WX_PX_RR_CFG(reg_idx),
+ WX_PX_RR_CFG_RR_EN, WX_PX_RR_CFG_RR_EN);
+
+ wx_enable_rx_queue(wx, ring);
+ wx_alloc_rx_buffers(ring, wx_desc_unused(ring));
+}
+
+/**
+ * wx_configure_tx - Configure Transmit Unit after Reset
+ * @wx: pointer to private structure
+ *
+ * Configure the Tx unit of the MAC after a reset.
+ **/
+static void wx_configure_tx(struct wx *wx)
+{
+ u32 i;
+
+ /* TDM_CTL.TE must be set before Tx queues are enabled */
+ wr32m(wx, WX_TDM_CTL,
+ WX_TDM_CTL_TE, WX_TDM_CTL_TE);
+
+ /* Setup the HW Tx Head and Tail descriptor pointers */
+ for (i = 0; i < wx->num_tx_queues; i++)
+ wx_configure_tx_ring(wx, wx->tx_ring[i]);
+
+ wr32m(wx, WX_TSC_BUF_AE, WX_TSC_BUF_AE_THR, 0x10);
+
+ if (wx->mac.type == wx_mac_em)
+ wr32m(wx, WX_TSC_CTL, WX_TSC_CTL_TX_DIS | WX_TSC_CTL_TSEC_DIS, 0x1);
+
+ /* enable mac transmitter */
+ wr32m(wx, WX_MAC_TX_CFG,
+ WX_MAC_TX_CFG_TE, WX_MAC_TX_CFG_TE);
+}
+
+/**
+ * wx_configure_rx - Configure Receive Unit after Reset
+ * @wx: pointer to private structure
+ *
+ * Configure the Rx unit of the MAC after a reset.
+ **/
+static void wx_configure_rx(struct wx *wx)
+{
+ u32 psrtype, i;
+ int ret;
+
+ wx_disable_rx(wx);
+
+ psrtype = WX_RDB_PL_CFG_L4HDR |
+ WX_RDB_PL_CFG_L3HDR |
+ WX_RDB_PL_CFG_L2HDR |
+ WX_RDB_PL_CFG_TUN_TUNHDR |
+ WX_RDB_PL_CFG_TUN_TUNHDR;
+ wr32(wx, WX_RDB_PL_CFG(0), psrtype);
+
+ /* enable hw crc stripping */
+ wr32m(wx, WX_RSC_CTL, WX_RSC_CTL_CRC_STRIP, WX_RSC_CTL_CRC_STRIP);
+
+ if (wx->mac.type == wx_mac_sp) {
+ u32 psrctl;
+
+ /* RSC Setup */
+ psrctl = rd32(wx, WX_PSR_CTL);
+ psrctl |= WX_PSR_CTL_RSC_ACK; /* Disable RSC for ACK packets */
+ psrctl |= WX_PSR_CTL_RSC_DIS;
+ wr32(wx, WX_PSR_CTL, psrctl);
+ }
+
+ /* set_rx_buffer_len must be called before ring initialization */
+ wx_set_rx_buffer_len(wx);
+
+ /* Setup the HW Rx Head and Tail Descriptor Pointers and
+ * the Base and Length of the Rx Descriptor Ring
+ */
+ for (i = 0; i < wx->num_rx_queues; i++)
+ wx_configure_rx_ring(wx, wx->rx_ring[i]);
+
+ /* Enable all receives; first disable the security engine to block traffic */
+ ret = wx_disable_sec_rx_path(wx);
+ if (ret < 0)
+ wx_err(wx, "The register status is abnormal, please check device.");
+
+ wx_enable_rx(wx);
+ wx_enable_sec_rx_path(wx);
+}
+
+static void wx_configure_isb(struct wx *wx)
+{
+ /* set ISB Address */
+ wr32(wx, WX_PX_ISB_ADDR_L, wx->isb_dma & DMA_BIT_MASK(32));
+ if (IS_ENABLED(CONFIG_ARCH_DMA_ADDR_T_64BIT))
+ wr32(wx, WX_PX_ISB_ADDR_H, upper_32_bits(wx->isb_dma));
+}
+
+void wx_configure(struct wx *wx)
+{
+ wx_set_rxpba(wx);
+ wx_configure_port(wx);
+
+ wx_set_rx_mode(wx->netdev);
+
+ wx_enable_sec_rx_path(wx);
+
+ wx_configure_tx(wx);
+ wx_configure_rx(wx);
+ wx_configure_isb(wx);
+}
+EXPORT_SYMBOL(wx_configure);
+
/**
* wx_disable_pcie_master - Disable PCI-express master access
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
*
* Disables PCI-Express master access and verifies there are no pending
* requests.
**/
-int wx_disable_pcie_master(struct wx_hw *wxhw)
+int wx_disable_pcie_master(struct wx *wx)
{
int status = 0;
u32 val;
/* Always set this bit to ensure any future transactions are blocked */
- pci_clear_master(wxhw->pdev);
+ pci_clear_master(wx->pdev);
/* Exit if master requests are blocked */
- if (!(rd32(wxhw, WX_PX_TRANSACTION_PENDING)))
+ if (!(rd32(wx, WX_PX_TRANSACTION_PENDING)))
return 0;
/* Poll for master request bit to clear */
status = read_poll_timeout(rd32, val, !val, 100, WX_PCI_MASTER_DISABLE_TIMEOUT,
- false, wxhw, WX_PX_TRANSACTION_PENDING);
+ false, wx, WX_PX_TRANSACTION_PENDING);
if (status < 0)
- wx_err(wxhw, "PCIe transaction pending bit did not clear.\n");
+ wx_err(wx, "PCIe transaction pending bit did not clear.\n");
return status;
}
@@ -781,106 +1548,106 @@ EXPORT_SYMBOL(wx_disable_pcie_master);
/**
* wx_stop_adapter - Generic stop Tx/Rx units
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
*
* Sets the adapter_stopped flag within wx_hw struct. Clears interrupts,
* disables transmit and receive units. The adapter_stopped flag is used by
* the shared code and drivers to determine if the adapter is in a stopped
* state and should not touch the hardware.
**/
-int wx_stop_adapter(struct wx_hw *wxhw)
+int wx_stop_adapter(struct wx *wx)
{
u16 i;
/* Set the adapter_stopped flag so other driver functions stop touching
* the hardware
*/
- wxhw->adapter_stopped = true;
+ wx->adapter_stopped = true;
/* Disable the receive unit */
- wx_disable_rx(wxhw);
+ wx_disable_rx(wx);
/* Set interrupt mask to stop interrupts from being generated */
- wx_intr_disable(wxhw, WX_INTR_ALL);
+ wx_intr_disable(wx, WX_INTR_ALL);
/* Clear any pending interrupts, flush previous writes */
- wr32(wxhw, WX_PX_MISC_IC, 0xffffffff);
- wr32(wxhw, WX_BME_CTL, 0x3);
+ wr32(wx, WX_PX_MISC_IC, 0xffffffff);
+ wr32(wx, WX_BME_CTL, 0x3);
/* Disable the transmit unit. Each queue must be disabled. */
- for (i = 0; i < wxhw->mac.max_tx_queues; i++) {
- wr32m(wxhw, WX_PX_TR_CFG(i),
+ for (i = 0; i < wx->mac.max_tx_queues; i++) {
+ wr32m(wx, WX_PX_TR_CFG(i),
WX_PX_TR_CFG_SWFLSH | WX_PX_TR_CFG_ENABLE,
WX_PX_TR_CFG_SWFLSH);
}
/* Disable the receive unit by stopping each queue */
- for (i = 0; i < wxhw->mac.max_rx_queues; i++) {
- wr32m(wxhw, WX_PX_RR_CFG(i),
+ for (i = 0; i < wx->mac.max_rx_queues; i++) {
+ wr32m(wx, WX_PX_RR_CFG(i),
WX_PX_RR_CFG_RR_EN, 0);
}
/* flush all queues disables */
- WX_WRITE_FLUSH(wxhw);
+ WX_WRITE_FLUSH(wx);
/* Prevent the PCI-E bus from hanging by disabling PCI-E master
* access and verify no pending requests
*/
- return wx_disable_pcie_master(wxhw);
+ return wx_disable_pcie_master(wx);
}
EXPORT_SYMBOL(wx_stop_adapter);
-void wx_reset_misc(struct wx_hw *wxhw)
+void wx_reset_misc(struct wx *wx)
{
int i;
/* receive packets that size > 2048 */
- wr32m(wxhw, WX_MAC_RX_CFG, WX_MAC_RX_CFG_JE, WX_MAC_RX_CFG_JE);
+ wr32m(wx, WX_MAC_RX_CFG, WX_MAC_RX_CFG_JE, WX_MAC_RX_CFG_JE);
/* clear counters on read */
- wr32m(wxhw, WX_MMC_CONTROL,
+ wr32m(wx, WX_MMC_CONTROL,
WX_MMC_CONTROL_RSTONRD, WX_MMC_CONTROL_RSTONRD);
- wr32m(wxhw, WX_MAC_RX_FLOW_CTRL,
+ wr32m(wx, WX_MAC_RX_FLOW_CTRL,
WX_MAC_RX_FLOW_CTRL_RFE, WX_MAC_RX_FLOW_CTRL_RFE);
- wr32(wxhw, WX_MAC_PKT_FLT, WX_MAC_PKT_FLT_PR);
+ wr32(wx, WX_MAC_PKT_FLT, WX_MAC_PKT_FLT_PR);
- wr32m(wxhw, WX_MIS_RST_ST,
+ wr32m(wx, WX_MIS_RST_ST,
WX_MIS_RST_ST_RST_INIT, 0x1E00);
/* errata 4: initialize mng flex tbl and wakeup flex tbl*/
- wr32(wxhw, WX_PSR_MNG_FLEX_SEL, 0);
+ wr32(wx, WX_PSR_MNG_FLEX_SEL, 0);
for (i = 0; i < 16; i++) {
- wr32(wxhw, WX_PSR_MNG_FLEX_DW_L(i), 0);
- wr32(wxhw, WX_PSR_MNG_FLEX_DW_H(i), 0);
- wr32(wxhw, WX_PSR_MNG_FLEX_MSK(i), 0);
+ wr32(wx, WX_PSR_MNG_FLEX_DW_L(i), 0);
+ wr32(wx, WX_PSR_MNG_FLEX_DW_H(i), 0);
+ wr32(wx, WX_PSR_MNG_FLEX_MSK(i), 0);
}
- wr32(wxhw, WX_PSR_LAN_FLEX_SEL, 0);
+ wr32(wx, WX_PSR_LAN_FLEX_SEL, 0);
for (i = 0; i < 16; i++) {
- wr32(wxhw, WX_PSR_LAN_FLEX_DW_L(i), 0);
- wr32(wxhw, WX_PSR_LAN_FLEX_DW_H(i), 0);
- wr32(wxhw, WX_PSR_LAN_FLEX_MSK(i), 0);
+ wr32(wx, WX_PSR_LAN_FLEX_DW_L(i), 0);
+ wr32(wx, WX_PSR_LAN_FLEX_DW_H(i), 0);
+ wr32(wx, WX_PSR_LAN_FLEX_MSK(i), 0);
}
/* set pause frame dst mac addr */
- wr32(wxhw, WX_RDB_PFCMACDAL, 0xC2000001);
- wr32(wxhw, WX_RDB_PFCMACDAH, 0x0180);
+ wr32(wx, WX_RDB_PFCMACDAL, 0xC2000001);
+ wr32(wx, WX_RDB_PFCMACDAH, 0x0180);
}
EXPORT_SYMBOL(wx_reset_misc);
/**
* wx_get_pcie_msix_counts - Gets MSI-X vector count
- * @wxhw: pointer to hardware structure
+ * @wx: pointer to hardware structure
* @msix_count: number of MSI interrupts that can be obtained
* @max_msix_count: number of MSI interrupts that mac need
*
* Read PCIe configuration space, and get the MSI-X vector count from
* the capabilities table.
**/
-int wx_get_pcie_msix_counts(struct wx_hw *wxhw, u16 *msix_count, u16 max_msix_count)
+int wx_get_pcie_msix_counts(struct wx *wx, u16 *msix_count, u16 max_msix_count)
{
- struct pci_dev *pdev = wxhw->pdev;
+ struct pci_dev *pdev = wx->pdev;
struct device *dev = &pdev->dev;
int pos;
@@ -904,31 +1671,39 @@ int wx_get_pcie_msix_counts(struct wx_hw *wxhw, u16 *msix_count, u16 max_msix_co
}
EXPORT_SYMBOL(wx_get_pcie_msix_counts);
-int wx_sw_init(struct wx_hw *wxhw)
+int wx_sw_init(struct wx *wx)
{
- struct pci_dev *pdev = wxhw->pdev;
+ struct pci_dev *pdev = wx->pdev;
u32 ssid = 0;
int err = 0;
- wxhw->vendor_id = pdev->vendor;
- wxhw->device_id = pdev->device;
- wxhw->revision_id = pdev->revision;
- wxhw->oem_svid = pdev->subsystem_vendor;
- wxhw->oem_ssid = pdev->subsystem_device;
- wxhw->bus.device = PCI_SLOT(pdev->devfn);
- wxhw->bus.func = PCI_FUNC(pdev->devfn);
-
- if (wxhw->oem_svid == PCI_VENDOR_ID_WANGXUN) {
- wxhw->subsystem_vendor_id = pdev->subsystem_vendor;
- wxhw->subsystem_device_id = pdev->subsystem_device;
+ wx->vendor_id = pdev->vendor;
+ wx->device_id = pdev->device;
+ wx->revision_id = pdev->revision;
+ wx->oem_svid = pdev->subsystem_vendor;
+ wx->oem_ssid = pdev->subsystem_device;
+ wx->bus.device = PCI_SLOT(pdev->devfn);
+ wx->bus.func = PCI_FUNC(pdev->devfn);
+
+ if (wx->oem_svid == PCI_VENDOR_ID_WANGXUN) {
+ wx->subsystem_vendor_id = pdev->subsystem_vendor;
+ wx->subsystem_device_id = pdev->subsystem_device;
} else {
- err = wx_flash_read_dword(wxhw, 0xfffdc, &ssid);
+ err = wx_flash_read_dword(wx, 0xfffdc, &ssid);
if (!err)
- wxhw->subsystem_device_id = swab16((u16)ssid);
+ wx->subsystem_device_id = swab16((u16)ssid);
return err;
}
+ wx->mac_table = kcalloc(wx->mac.num_rar_entries,
+ sizeof(struct wx_mac_addr),
+ GFP_KERNEL);
+ if (!wx->mac_table) {
+ wx_err(wx, "mac_table allocation failed\n");
+ return -ENOMEM;
+ }
+
return 0;
}
EXPORT_SYMBOL(wx_sw_init);
diff --git a/drivers/net/ethernet/wangxun/libwx/wx_hw.h b/drivers/net/ethernet/wangxun/libwx/wx_hw.h
index a0652f5e9939..44dfd6ea442a 100644
--- a/drivers/net/ethernet/wangxun/libwx/wx_hw.h
+++ b/drivers/net/ethernet/wangxun/libwx/wx_hw.h
@@ -4,25 +4,31 @@
#ifndef _WX_HW_H_
#define _WX_HW_H_
-int wx_check_flash_load(struct wx_hw *hw, u32 check_bit);
-void wx_control_hw(struct wx_hw *wxhw, bool drv);
-int wx_mng_present(struct wx_hw *wxhw);
-int wx_host_interface_command(struct wx_hw *wxhw, u32 *buffer,
+void wx_intr_enable(struct wx *wx, u64 qmask);
+void wx_irq_disable(struct wx *wx);
+int wx_check_flash_load(struct wx *wx, u32 check_bit);
+void wx_control_hw(struct wx *wx, bool drv);
+int wx_mng_present(struct wx *wx);
+int wx_host_interface_command(struct wx *wx, u32 *buffer,
u32 length, u32 timeout, bool return_data);
-int wx_read_ee_hostif(struct wx_hw *wxhw, u16 offset, u16 *data);
-int wx_read_ee_hostif_buffer(struct wx_hw *wxhw,
+int wx_read_ee_hostif(struct wx *wx, u16 offset, u16 *data);
+int wx_read_ee_hostif_buffer(struct wx *wx,
u16 offset, u16 words, u16 *data);
-int wx_reset_hostif(struct wx_hw *wxhw);
-void wx_init_eeprom_params(struct wx_hw *wxhw);
-void wx_get_mac_addr(struct wx_hw *wxhw, u8 *mac_addr);
-int wx_set_rar(struct wx_hw *wxhw, u32 index, u8 *addr, u64 pools, u32 enable_addr);
-int wx_clear_rar(struct wx_hw *wxhw, u32 index);
-void wx_init_rx_addrs(struct wx_hw *wxhw);
-void wx_disable_rx(struct wx_hw *wxhw);
-int wx_disable_pcie_master(struct wx_hw *wxhw);
-int wx_stop_adapter(struct wx_hw *wxhw);
-void wx_reset_misc(struct wx_hw *wxhw);
-int wx_get_pcie_msix_counts(struct wx_hw *wxhw, u16 *msix_count, u16 max_msix_count);
-int wx_sw_init(struct wx_hw *wxhw);
+int wx_reset_hostif(struct wx *wx);
+void wx_init_eeprom_params(struct wx *wx);
+void wx_get_mac_addr(struct wx *wx, u8 *mac_addr);
+void wx_init_rx_addrs(struct wx *wx);
+void wx_mac_set_default_filter(struct wx *wx, u8 *addr);
+void wx_flush_sw_mac_table(struct wx *wx);
+int wx_set_mac(struct net_device *netdev, void *p);
+void wx_disable_rx(struct wx *wx);
+void wx_set_rx_mode(struct net_device *netdev);
+void wx_disable_rx_queue(struct wx *wx, struct wx_ring *ring);
+void wx_configure(struct wx *wx);
+int wx_disable_pcie_master(struct wx *wx);
+int wx_stop_adapter(struct wx *wx);
+void wx_reset_misc(struct wx *wx);
+int wx_get_pcie_msix_counts(struct wx *wx, u16 *msix_count, u16 max_msix_count);
+int wx_sw_init(struct wx *wx);
#endif /* _WX_HW_H_ */
diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.c b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
new file mode 100644
index 000000000000..eb89a274083e
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
@@ -0,0 +1,2004 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 - 2022 Beijing WangXun Technology Co., Ltd. */
+
+#include <linux/etherdevice.h>
+#include <net/page_pool.h>
+#include <linux/iopoll.h>
+#include <linux/pci.h>
+
+#include "wx_type.h"
+#include "wx_lib.h"
+#include "wx_hw.h"
+
+/* wx_test_staterr - tests bits in Rx descriptor status and error fields */
+static __le32 wx_test_staterr(union wx_rx_desc *rx_desc,
+ const u32 stat_err_bits)
+{
+ return rx_desc->wb.upper.status_error & cpu_to_le32(stat_err_bits);
+}
+
+static bool wx_can_reuse_rx_page(struct wx_rx_buffer *rx_buffer,
+ int rx_buffer_pgcnt)
+{
+ unsigned int pagecnt_bias = rx_buffer->pagecnt_bias;
+ struct page *page = rx_buffer->page;
+
+ /* avoid re-using remote and pfmemalloc pages */
+ if (!dev_page_is_reusable(page))
+ return false;
+
+#if (PAGE_SIZE < 8192)
+ /* if we are only owner of page we can reuse it */
+ if (unlikely((rx_buffer_pgcnt - pagecnt_bias) > 1))
+ return false;
+#endif
+
+ /* If we have drained the page fragment pool we need to update
+ * the pagecnt_bias and page count so that we fully restock the
+ * number of references the driver holds.
+ */
+ if (unlikely(pagecnt_bias == 1)) {
+ page_ref_add(page, USHRT_MAX - 1);
+ rx_buffer->pagecnt_bias = USHRT_MAX;
+ }
+
+ return true;
+}
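pagecnt_bias is the usual refcount-amortization trick: at allocation the driver takes USHRT_MAX page references in one go, then pays them back by decrementing a local bias each time a half-page is handed to the stack, avoiding an atomic page operation per packet. The reuse test compares the real count against the bias (sketch of the check above):

/* Sole-owner test: if the difference exceeds 1, the stack still holds a
 * reference to the other half of the page, so it can't be flipped/reused.
 */
static bool wx_sole_owner_sketch(struct page *page, u16 pagecnt_bias)
{
	return (page_count(page) - pagecnt_bias) <= 1;
}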
+
+/**
+ * wx_reuse_rx_page - page flip buffer and store it back on the ring
+ * @rx_ring: rx descriptor ring to store buffers on
+ * @old_buff: donor buffer to have page reused
+ *
+ * Synchronizes page for reuse by the adapter
+ **/
+static void wx_reuse_rx_page(struct wx_ring *rx_ring,
+ struct wx_rx_buffer *old_buff)
+{
+ u16 nta = rx_ring->next_to_alloc;
+ struct wx_rx_buffer *new_buff;
+
+ new_buff = &rx_ring->rx_buffer_info[nta];
+
+ /* update, and store next to alloc */
+ nta++;
+ rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
+
+ /* transfer page from old buffer to new buffer */
+ new_buff->page = old_buff->page;
+ new_buff->page_dma = old_buff->page_dma;
+ new_buff->page_offset = old_buff->page_offset;
+ new_buff->pagecnt_bias = old_buff->pagecnt_bias;
+}
+
+static void wx_dma_sync_frag(struct wx_ring *rx_ring,
+ struct wx_rx_buffer *rx_buffer)
+{
+ struct sk_buff *skb = rx_buffer->skb;
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
+
+ dma_sync_single_range_for_cpu(rx_ring->dev,
+ WX_CB(skb)->dma,
+ skb_frag_off(frag),
+ skb_frag_size(frag),
+ DMA_FROM_DEVICE);
+
+ /* If the page was released, just unmap it. */
+ if (unlikely(WX_CB(skb)->page_released))
+ page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false);
+}
+
+static struct wx_rx_buffer *wx_get_rx_buffer(struct wx_ring *rx_ring,
+ union wx_rx_desc *rx_desc,
+ struct sk_buff **skb,
+ int *rx_buffer_pgcnt)
+{
+ struct wx_rx_buffer *rx_buffer;
+ unsigned int size;
+
+ rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];
+ size = le16_to_cpu(rx_desc->wb.upper.length);
+
+#if (PAGE_SIZE < 8192)
+ *rx_buffer_pgcnt = page_count(rx_buffer->page);
+#else
+ *rx_buffer_pgcnt = 0;
+#endif
+
+ prefetchw(rx_buffer->page);
+ *skb = rx_buffer->skb;
+
+ /* Delay unmapping of the first packet. It carries the header
+ * information, HW may still access the header after the writeback.
+ * Only unmap it when EOP is reached
+ */
+ if (!wx_test_staterr(rx_desc, WX_RXD_STAT_EOP)) {
+ if (!*skb)
+ goto skip_sync;
+ } else {
+ if (*skb)
+ wx_dma_sync_frag(rx_ring, rx_buffer);
+ }
+
+ /* we are reusing so sync this buffer for CPU use */
+ dma_sync_single_range_for_cpu(rx_ring->dev,
+ rx_buffer->dma,
+ rx_buffer->page_offset,
+ size,
+ DMA_FROM_DEVICE);
+skip_sync:
+ rx_buffer->pagecnt_bias--;
+
+ return rx_buffer;
+}
+
+static void wx_put_rx_buffer(struct wx_ring *rx_ring,
+ struct wx_rx_buffer *rx_buffer,
+ struct sk_buff *skb,
+ int rx_buffer_pgcnt)
+{
+ if (wx_can_reuse_rx_page(rx_buffer, rx_buffer_pgcnt)) {
+ /* hand second half of page back to the ring */
+ wx_reuse_rx_page(rx_ring, rx_buffer);
+ } else {
+ if (!IS_ERR(skb) && WX_CB(skb)->dma == rx_buffer->dma)
+ /* the page has been released from the ring */
+ WX_CB(skb)->page_released = true;
+ else
+ page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false);
+
+ __page_frag_cache_drain(rx_buffer->page,
+ rx_buffer->pagecnt_bias);
+ }
+
+ /* clear contents of rx_buffer */
+ rx_buffer->page = NULL;
+ rx_buffer->skb = NULL;
+}
+
+static struct sk_buff *wx_build_skb(struct wx_ring *rx_ring,
+ struct wx_rx_buffer *rx_buffer,
+ union wx_rx_desc *rx_desc)
+{
+ unsigned int size = le16_to_cpu(rx_desc->wb.upper.length);
+#if (PAGE_SIZE < 8192)
+ unsigned int truesize = WX_RX_BUFSZ;
+#else
+ unsigned int truesize = ALIGN(size, L1_CACHE_BYTES);
+#endif
+ struct sk_buff *skb = rx_buffer->skb;
+
+ if (!skb) {
+ void *page_addr = page_address(rx_buffer->page) +
+ rx_buffer->page_offset;
+
+ /* prefetch first cache line of first page */
+ prefetch(page_addr);
+#if L1_CACHE_BYTES < 128
+ prefetch(page_addr + L1_CACHE_BYTES);
+#endif
+
+ /* allocate a skb to store the frags */
+ skb = napi_alloc_skb(&rx_ring->q_vector->napi, WX_RXBUFFER_256);
+ if (unlikely(!skb))
+ return NULL;
+
+ /* we will be copying header into skb->data in
+ * pskb_may_pull so it is in our interest to prefetch
+ * it now to avoid a possible cache miss
+ */
+ prefetchw(skb->data);
+
+ if (size <= WX_RXBUFFER_256) {
+ memcpy(__skb_put(skb, size), page_addr,
+ ALIGN(size, sizeof(long)));
+ rx_buffer->pagecnt_bias++;
+
+ return skb;
+ }
+
+ if (!wx_test_staterr(rx_desc, WX_RXD_STAT_EOP))
+ WX_CB(skb)->dma = rx_buffer->dma;
+
+ skb_add_rx_frag(skb, 0, rx_buffer->page,
+ rx_buffer->page_offset,
+ size, truesize);
+ goto out;
+
+ } else {
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
+ rx_buffer->page_offset, size, truesize);
+ }
+
+out:
+#if (PAGE_SIZE < 8192)
+ /* flip page offset to other buffer */
+ rx_buffer->page_offset ^= truesize;
+#else
+ /* move offset up to the next cache line */
+ rx_buffer->page_offset += truesize;
+#endif
+
+ return skb;
+}
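+
+/* On PAGE_SIZE < 8192 systems the page is split into two WX_RX_BUFSZ
+ * halves and "page_offset ^= truesize" ping-pongs between them, e.g.
+ * with 2K buffers 0 ^ 2048 == 2048 and 2048 ^ 2048 == 0. On larger
+ * pages the offset just advances by the cache-line-aligned frame size.
+ */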
+
+static bool wx_alloc_mapped_page(struct wx_ring *rx_ring,
+ struct wx_rx_buffer *bi)
+{
+ struct page *page = bi->page;
+ dma_addr_t dma;
+
+ /* since we are recycling buffers we should seldom need to alloc */
+ if (likely(page))
+ return true;
+
+ page = page_pool_dev_alloc_pages(rx_ring->page_pool);
+ if (unlikely(!page))
+ return false;
+ dma = page_pool_get_dma_addr(page);
+
+ bi->page_dma = dma;
+ bi->page = page;
+ bi->page_offset = 0;
+ page_ref_add(page, USHRT_MAX - 1);
+ bi->pagecnt_bias = USHRT_MAX;
+
+ return true;
+}
+
+/**
+ * wx_alloc_rx_buffers - Replace used receive buffers
+ * @rx_ring: ring to place buffers on
+ * @cleaned_count: number of buffers to replace
+ **/
+void wx_alloc_rx_buffers(struct wx_ring *rx_ring, u16 cleaned_count)
+{
+ u16 i = rx_ring->next_to_use;
+ union wx_rx_desc *rx_desc;
+ struct wx_rx_buffer *bi;
+
+ /* nothing to do */
+ if (!cleaned_count)
+ return;
+
+ rx_desc = WX_RX_DESC(rx_ring, i);
+ bi = &rx_ring->rx_buffer_info[i];
+ i -= rx_ring->count;
+
+ do {
+ if (!wx_alloc_mapped_page(rx_ring, bi))
+ break;
+
+ /* sync the buffer for use by the device */
+ dma_sync_single_range_for_device(rx_ring->dev, bi->dma,
+ bi->page_offset,
+ WX_RX_BUFSZ,
+ DMA_FROM_DEVICE);
+
+ rx_desc->read.pkt_addr =
+ cpu_to_le64(bi->page_dma + bi->page_offset);
+
+ rx_desc++;
+ bi++;
+ i++;
+ if (unlikely(!i)) {
+ rx_desc = WX_RX_DESC(rx_ring, 0);
+ bi = rx_ring->rx_buffer_info;
+ i -= rx_ring->count;
+ }
+
+ /* clear the status bits for the next_to_use descriptor */
+ rx_desc->wb.upper.status_error = 0;
+
+ cleaned_count--;
+ } while (cleaned_count);
+
+ i += rx_ring->count;
+
+ if (rx_ring->next_to_use != i) {
+ rx_ring->next_to_use = i;
+ /* update next to alloc since we have filled the ring */
+ rx_ring->next_to_alloc = i;
+
+ /* Force memory writes to complete before letting h/w
+ * know there are new descriptors to fetch. (Only
+ * applicable for weak-ordered memory model archs,
+ * such as IA-64).
+ */
+ wmb();
+ writel(i, rx_ring->tail);
+ }
+}
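+
+/* The "i -= rx_ring->count" above biases the (u16, so modular) index so
+ * the wrap test is a cheap "!i": e.g. with count == 512 and next_to_use
+ * == 510, two fills bring i to 0, the descriptor/buffer pointers are
+ * rewound to the ring base and i is re-biased; adding count back at the
+ * end recovers the real ring position.
+ */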
+
+u16 wx_desc_unused(struct wx_ring *ring)
+{
+ u16 ntc = ring->next_to_clean;
+ u16 ntu = ring->next_to_use;
+
+ return ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 1;
+}
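+
+/* Worked example for wx_desc_unused(): with count == 512, ntc == 20 and
+ * ntu == 10, unused = 0 + 20 - 10 - 1 = 9; with ntc == 10 and ntu == 20,
+ * unused = 512 + 10 - 20 - 1 = 501. The "- 1" keeps one slot free so a
+ * full ring is never mistaken for an empty one.
+ */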
+
+/**
+ * wx_is_non_eop - process handling of non-EOP buffers
+ * @rx_ring: Rx ring being processed
+ * @rx_desc: Rx descriptor for current buffer
+ * @skb: Current socket buffer containing buffer in progress
+ *
+ * This function updates next to clean. If the buffer is an EOP buffer
+ * this function exits returning false, otherwise it will place the
+ * sk_buff in the next buffer to be chained and return true indicating
+ * that this is in fact a non-EOP buffer.
+ **/
+static bool wx_is_non_eop(struct wx_ring *rx_ring,
+ union wx_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ u32 ntc = rx_ring->next_to_clean + 1;
+
+ /* fetch, update, and store next to clean */
+ ntc = (ntc < rx_ring->count) ? ntc : 0;
+ rx_ring->next_to_clean = ntc;
+
+ prefetch(WX_RX_DESC(rx_ring, ntc));
+
+ /* if we are the last buffer then there is nothing else to do */
+ if (likely(wx_test_staterr(rx_desc, WX_RXD_STAT_EOP)))
+ return false;
+
+ rx_ring->rx_buffer_info[ntc].skb = skb;
+
+ return true;
+}
+
+static void wx_pull_tail(struct sk_buff *skb)
+{
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
+ unsigned int pull_len;
+ unsigned char *va;
+
+ /* it is valid to use page_address instead of kmap since we are
+ * working with pages allocated out of the lowmem pool per
+ * alloc_page(GFP_ATOMIC)
+ */
+ va = skb_frag_address(frag);
+
+ /* we need the header to contain the greater of either ETH_HLEN or
+ * 60 bytes if the skb->len is less than 60 for skb_pad.
+ */
+ pull_len = eth_get_headlen(skb->dev, va, WX_RXBUFFER_256);
+
+ /* align pull length to size of long to optimize memcpy performance */
+ skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
+
+ /* update all of the pointers */
+ skb_frag_size_sub(frag, pull_len);
+ skb_frag_off_add(frag, pull_len);
+ skb->data_len -= pull_len;
+ skb->tail += pull_len;
+}
+
+/**
+ * wx_cleanup_headers - Correct corrupted or empty headers
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @rx_desc: pointer to the EOP Rx descriptor
+ * @skb: pointer to current skb being fixed
+ *
+ * Check for corrupted packet headers caused by senders on the local L2
+ * embedded NIC switch not setting up their Tx Descriptors right. These
+ * should be very rare.
+ *
+ * Also address the case where we are pulling data in on pages only
+ * and as such no data is present in the skb header.
+ *
+ * In addition if skb is not at least 60 bytes we need to pad it so that
+ * it is large enough to qualify as a valid Ethernet frame.
+ *
+ * Returns true if an error was encountered and skb was freed.
+ **/
+static bool wx_cleanup_headers(struct wx_ring *rx_ring,
+ union wx_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct net_device *netdev = rx_ring->netdev;
+
+ /* verify that the packet does not have any known errors */
+ if (!netdev ||
+ unlikely(wx_test_staterr(rx_desc, WX_RXD_ERR_RXE) &&
+ !(netdev->features & NETIF_F_RXALL))) {
+ dev_kfree_skb_any(skb);
+ return true;
+ }
+
+ /* place header in linear portion of buffer */
+ if (!skb_headlen(skb))
+ wx_pull_tail(skb);
+
+ /* if eth_skb_pad returns an error the skb was freed */
+ if (eth_skb_pad(skb))
+ return true;
+
+ return false;
+}
+
+/**
+ * wx_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
+ * @q_vector: structure containing interrupt and ring information
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @budget: Total limit on number of packets to process
+ *
+ * This function provides a "bounce buffer" approach to Rx interrupt
+ * processing. The advantage to this is that on systems that have
+ * expensive overhead for IOMMU access this provides a means of avoiding
+ * it by maintaining the mapping of the page to the system.
+ *
+ * Returns amount of work completed.
+ **/
+static int wx_clean_rx_irq(struct wx_q_vector *q_vector,
+ struct wx_ring *rx_ring,
+ int budget)
+{
+ unsigned int total_rx_bytes = 0, total_rx_packets = 0;
+ u16 cleaned_count = wx_desc_unused(rx_ring);
+
+ do {
+ struct wx_rx_buffer *rx_buffer;
+ union wx_rx_desc *rx_desc;
+ struct sk_buff *skb;
+ int rx_buffer_pgcnt;
+
+ /* return some buffers to hardware, one at a time is too slow */
+ if (cleaned_count >= WX_RX_BUFFER_WRITE) {
+ wx_alloc_rx_buffers(rx_ring, cleaned_count);
+ cleaned_count = 0;
+ }
+
+ rx_desc = WX_RX_DESC(rx_ring, rx_ring->next_to_clean);
+ if (!wx_test_staterr(rx_desc, WX_RXD_STAT_DD))
+ break;
+
+ /* This memory barrier is needed to keep us from reading
+ * any other fields out of the rx_desc until we know the
+ * descriptor has been written back
+ */
+ dma_rmb();
+
+ rx_buffer = wx_get_rx_buffer(rx_ring, rx_desc, &skb, &rx_buffer_pgcnt);
+
+ /* retrieve a buffer from the ring */
+ skb = wx_build_skb(rx_ring, rx_buffer, rx_desc);
+
+ /* exit if we failed to retrieve a buffer */
+ if (!skb) {
+ rx_buffer->pagecnt_bias++;
+ break;
+ }
+
+ wx_put_rx_buffer(rx_ring, rx_buffer, skb, rx_buffer_pgcnt);
+ cleaned_count++;
+
+ /* place incomplete frames back on ring for completion */
+ if (wx_is_non_eop(rx_ring, rx_desc, skb))
+ continue;
+
+ /* verify the packet layout is correct */
+ if (wx_cleanup_headers(rx_ring, rx_desc, skb))
+ continue;
+
+ /* probably a little skewed due to removing CRC */
+ total_rx_bytes += skb->len;
+
+ skb_record_rx_queue(skb, rx_ring->queue_index);
+ skb->protocol = eth_type_trans(skb, rx_ring->netdev);
+ napi_gro_receive(&q_vector->napi, skb);
+
+ /* update budget accounting */
+ total_rx_packets++;
+ } while (likely(total_rx_packets < budget));
+
+ u64_stats_update_begin(&rx_ring->syncp);
+ rx_ring->stats.packets += total_rx_packets;
+ rx_ring->stats.bytes += total_rx_bytes;
+ u64_stats_update_end(&rx_ring->syncp);
+ q_vector->rx.total_packets += total_rx_packets;
+ q_vector->rx.total_bytes += total_rx_bytes;
+
+ return total_rx_packets;
+}
+
+static struct netdev_queue *wx_txring_txq(const struct wx_ring *ring)
+{
+ return netdev_get_tx_queue(ring->netdev, ring->queue_index);
+}
+
+/**
+ * wx_clean_tx_irq - Reclaim resources after transmit completes
+ * @q_vector: structure containing interrupt and ring information
+ * @tx_ring: tx ring to clean
+ * @napi_budget: Used to determine if we are in netpoll
+ **/
+static bool wx_clean_tx_irq(struct wx_q_vector *q_vector,
+ struct wx_ring *tx_ring, int napi_budget)
+{
+ unsigned int budget = q_vector->wx->tx_work_limit;
+ unsigned int total_bytes = 0, total_packets = 0;
+ unsigned int i = tx_ring->next_to_clean;
+ struct wx_tx_buffer *tx_buffer;
+ union wx_tx_desc *tx_desc;
+
+ if (!netif_carrier_ok(tx_ring->netdev))
+ return true;
+
+ tx_buffer = &tx_ring->tx_buffer_info[i];
+ tx_desc = WX_TX_DESC(tx_ring, i);
+ i -= tx_ring->count;
+
+ do {
+ union wx_tx_desc *eop_desc = tx_buffer->next_to_watch;
+
+ /* if next_to_watch is not set then there is no work pending */
+ if (!eop_desc)
+ break;
+
+ /* prevent any other reads prior to eop_desc */
+ smp_rmb();
+
+ /* if DD is not set pending work has not been completed */
+ if (!(eop_desc->wb.status & cpu_to_le32(WX_TXD_STAT_DD)))
+ break;
+
+ /* clear next_to_watch to prevent false hangs */
+ tx_buffer->next_to_watch = NULL;
+
+ /* update the statistics for this packet */
+ total_bytes += tx_buffer->bytecount;
+ total_packets += tx_buffer->gso_segs;
+
+ /* free the skb */
+ napi_consume_skb(tx_buffer->skb, napi_budget);
+
+ /* unmap skb header data */
+ dma_unmap_single(tx_ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+
+ /* clear tx_buffer data */
+ dma_unmap_len_set(tx_buffer, len, 0);
+
+ /* unmap remaining buffers */
+ while (tx_desc != eop_desc) {
+ tx_buffer++;
+ tx_desc++;
+ i++;
+ if (unlikely(!i)) {
+ i -= tx_ring->count;
+ tx_buffer = tx_ring->tx_buffer_info;
+ tx_desc = WX_TX_DESC(tx_ring, 0);
+ }
+
+ /* unmap any remaining paged data */
+ if (dma_unmap_len(tx_buffer, len)) {
+ dma_unmap_page(tx_ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+ dma_unmap_len_set(tx_buffer, len, 0);
+ }
+ }
+
+ /* move us one more past the eop_desc for start of next pkt */
+ tx_buffer++;
+ tx_desc++;
+ i++;
+ if (unlikely(!i)) {
+ i -= tx_ring->count;
+ tx_buffer = tx_ring->tx_buffer_info;
+ tx_desc = WX_TX_DESC(tx_ring, 0);
+ }
+
+ /* issue prefetch for next Tx descriptor */
+ prefetch(tx_desc);
+
+ /* update budget accounting */
+ budget--;
+ } while (likely(budget));
+
+ i += tx_ring->count;
+ tx_ring->next_to_clean = i;
+ u64_stats_update_begin(&tx_ring->syncp);
+ tx_ring->stats.bytes += total_bytes;
+ tx_ring->stats.packets += total_packets;
+ u64_stats_update_end(&tx_ring->syncp);
+ q_vector->tx.total_bytes += total_bytes;
+ q_vector->tx.total_packets += total_packets;
+
+ netdev_tx_completed_queue(wx_txring_txq(tx_ring),
+ total_packets, total_bytes);
+
+#define TX_WAKE_THRESHOLD (DESC_NEEDED * 2)
+ if (unlikely(total_packets && netif_carrier_ok(tx_ring->netdev) &&
+ (wx_desc_unused(tx_ring) >= TX_WAKE_THRESHOLD))) {
+ /* Make sure that anybody stopping the queue after this
+ * sees the new next_to_clean.
+ */
+ smp_mb();
+
+ if (__netif_subqueue_stopped(tx_ring->netdev,
+ tx_ring->queue_index) &&
+ netif_running(tx_ring->netdev))
+ netif_wake_subqueue(tx_ring->netdev,
+ tx_ring->queue_index);
+ }
+
+ return !!budget;
+}
+
+/**
+ * wx_poll - NAPI polling RX/TX cleanup routine
+ * @napi: napi struct with our devices info in it
+ * @budget: amount of work driver is allowed to do this pass, in packets
+ *
+ * This function will clean all queues associated with a q_vector.
+ **/
+static int wx_poll(struct napi_struct *napi, int budget)
+{
+ struct wx_q_vector *q_vector = container_of(napi, struct wx_q_vector, napi);
+ int per_ring_budget, work_done = 0;
+ struct wx *wx = q_vector->wx;
+ bool clean_complete = true;
+ struct wx_ring *ring;
+
+ wx_for_each_ring(ring, q_vector->tx) {
+ if (!wx_clean_tx_irq(q_vector, ring, budget))
+ clean_complete = false;
+ }
+
+ /* Exit if we are called by netpoll */
+ if (budget <= 0)
+ return budget;
+
+ /* attempt to distribute budget to each queue fairly, but don't allow
+ * the budget to go below 1 because we'll exit polling
+ */
+ if (q_vector->rx.count > 1)
+ per_ring_budget = max(budget / q_vector->rx.count, 1);
+ else
+ per_ring_budget = budget;
+
+ wx_for_each_ring(ring, q_vector->rx) {
+ int cleaned = wx_clean_rx_irq(q_vector, ring, per_ring_budget);
+
+ work_done += cleaned;
+ if (cleaned >= per_ring_budget)
+ clean_complete = false;
+ }
+
+ /* If all work not completed, return budget and keep polling */
+ if (!clean_complete)
+ return budget;
+
+ /* all work done, exit the polling mode */
+ if (likely(napi_complete_done(napi, work_done))) {
+ if (netif_running(wx->netdev))
+ wx_intr_enable(wx, WX_INTR_Q(q_vector->v_idx));
+ }
+
+ return min(work_done, budget - 1);
+}
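+
+/* e.g. a poll budget of 64 across 3 Rx rings gives per_ring_budget ==
+ * 21; any ring that consumes its full share clears clean_complete so
+ * NAPI keeps this vector scheduled instead of re-enabling interrupts.
+ */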
+
+static int wx_maybe_stop_tx(struct wx_ring *tx_ring, u16 size)
+{
+ if (likely(wx_desc_unused(tx_ring) >= size))
+ return 0;
+
+ netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+
+ /* For the next check */
+ smp_mb();
+
+ /* We need to check again in a case another CPU has just
+ * made room available.
+ */
+ if (likely(wx_desc_unused(tx_ring) < size))
+ return -EBUSY;
+
+ /* A reprieve! - use start_queue because it doesn't call schedule */
+ netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
+
+ return 0;
+}
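+
+/* The smp_mb() above pairs with the smp_mb() in wx_clean_tx_irq(): the
+ * stopped state must be visible before free descriptors are re-counted,
+ * otherwise a racing completion could see a full ring, skip the wake,
+ * and leave the queue stopped for good.
+ */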
+
+static void wx_tx_map(struct wx_ring *tx_ring,
+ struct wx_tx_buffer *first)
+{
+ struct sk_buff *skb = first->skb;
+ struct wx_tx_buffer *tx_buffer;
+ u16 i = tx_ring->next_to_use;
+ unsigned int data_len, size;
+ union wx_tx_desc *tx_desc;
+ skb_frag_t *frag;
+ dma_addr_t dma;
+ u32 cmd_type;
+
+ cmd_type = WX_TXD_DTYP_DATA | WX_TXD_IFCS;
+ tx_desc = WX_TX_DESC(tx_ring, i);
+
+ tx_desc->read.olinfo_status = cpu_to_le32(skb->len << WX_TXD_PAYLEN_SHIFT);
+
+ size = skb_headlen(skb);
+ data_len = skb->data_len;
+ dma = dma_map_single(tx_ring->dev, skb->data, size, DMA_TO_DEVICE);
+
+ tx_buffer = first;
+
+ for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+ if (dma_mapping_error(tx_ring->dev, dma))
+ goto dma_error;
+
+ /* record length, and DMA address */
+ dma_unmap_len_set(tx_buffer, len, size);
+ dma_unmap_addr_set(tx_buffer, dma, dma);
+
+ tx_desc->read.buffer_addr = cpu_to_le64(dma);
+
+ while (unlikely(size > WX_MAX_DATA_PER_TXD)) {
+ tx_desc->read.cmd_type_len =
+ cpu_to_le32(cmd_type ^ WX_MAX_DATA_PER_TXD);
+
+ i++;
+ tx_desc++;
+ if (i == tx_ring->count) {
+ tx_desc = WX_TX_DESC(tx_ring, 0);
+ i = 0;
+ }
+ tx_desc->read.olinfo_status = 0;
+
+ dma += WX_MAX_DATA_PER_TXD;
+ size -= WX_MAX_DATA_PER_TXD;
+
+ tx_desc->read.buffer_addr = cpu_to_le64(dma);
+ }
+
+ if (likely(!data_len))
+ break;
+
+ tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type ^ size);
+
+ i++;
+ tx_desc++;
+ if (i == tx_ring->count) {
+ tx_desc = WX_TX_DESC(tx_ring, 0);
+ i = 0;
+ }
+ tx_desc->read.olinfo_status = 0;
+
+ size = skb_frag_size(frag);
+
+ data_len -= size;
+
+ dma = skb_frag_dma_map(tx_ring->dev, frag, 0, size,
+ DMA_TO_DEVICE);
+
+ tx_buffer = &tx_ring->tx_buffer_info[i];
+ }
+
+ /* write last descriptor with RS and EOP bits */
+ cmd_type |= size | WX_TXD_EOP | WX_TXD_RS;
+ tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
+
+ netdev_tx_sent_queue(wx_txring_txq(tx_ring), first->bytecount);
+
+ skb_tx_timestamp(skb);
+
+ /* Force memory writes to complete before letting h/w know there
+ * are new descriptors to fetch. (Only applicable for weak-ordered
+ * memory model archs, such as IA-64).
+ *
+ * We also need this memory barrier to make certain all of the
+ * status bits have been updated before next_to_watch is written.
+ */
+ wmb();
+
+ /* set next_to_watch value indicating a packet is present */
+ first->next_to_watch = tx_desc;
+
+ i++;
+ if (i == tx_ring->count)
+ i = 0;
+
+ tx_ring->next_to_use = i;
+
+ wx_maybe_stop_tx(tx_ring, DESC_NEEDED);
+
+ if (netif_xmit_stopped(wx_txring_txq(tx_ring)) || !netdev_xmit_more())
+ writel(i, tx_ring->tail);
+
+ return;
+dma_error:
+ dev_err(tx_ring->dev, "TX DMA map failed\n");
+
+ /* clear dma mappings for failed tx_buffer_info map */
+ for (;;) {
+ tx_buffer = &tx_ring->tx_buffer_info[i];
+ if (dma_unmap_len(tx_buffer, len))
+ dma_unmap_page(tx_ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+ dma_unmap_len_set(tx_buffer, len, 0);
+ if (tx_buffer == first)
+ break;
+ if (i == 0)
+ i += tx_ring->count;
+ i--;
+ }
+
+ dev_kfree_skb_any(first->skb);
+ first->skb = NULL;
+
+ tx_ring->next_to_use = i;
+}
+
+static netdev_tx_t wx_xmit_frame_ring(struct sk_buff *skb,
+ struct wx_ring *tx_ring)
+{
+ u16 count = TXD_USE_COUNT(skb_headlen(skb));
+ struct wx_tx_buffer *first;
+ unsigned short f;
+
+ /* need: 1 descriptor per page * PAGE_SIZE/WX_MAX_DATA_PER_TXD,
+ * + 1 desc for skb_headlen/WX_MAX_DATA_PER_TXD,
+ * + 2 desc gap to keep tail from touching head,
+ * + 1 desc for context descriptor,
+ * otherwise try next time
+ */
+ for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)
+ count += TXD_USE_COUNT(skb_frag_size(&skb_shinfo(skb)->frags[f]));
+
+ if (wx_maybe_stop_tx(tx_ring, count + 3))
+ return NETDEV_TX_BUSY;
+
+ /* record the location of the first descriptor for this packet */
+ first = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
+ first->skb = skb;
+ first->bytecount = skb->len;
+ first->gso_segs = 1;
+
+ wx_tx_map(tx_ring, first);
+
+ return NETDEV_TX_OK;
+}
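+
+/* Worked example for the "count + 3" reservation above: a 1500 byte
+ * linear skb with no frags needs TXD_USE_COUNT(1500) == 1 data
+ * descriptor, so transmit proceeds only if 4 descriptors are free
+ * (1 data + 2 gap + 1 context).
+ */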
+
+netdev_tx_t wx_xmit_frame(struct sk_buff *skb,
+ struct net_device *netdev)
+{
+ unsigned int r_idx = skb->queue_mapping;
+ struct wx *wx = netdev_priv(netdev);
+ struct wx_ring *tx_ring;
+
+ if (!netif_carrier_ok(netdev)) {
+ dev_kfree_skb_any(skb);
+ return NETDEV_TX_OK;
+ }
+
+ /* The minimum packet size for olinfo paylen is 17 so pad the skb
+ * in order to meet this minimum size requirement.
+ */
+ if (skb_put_padto(skb, 17))
+ return NETDEV_TX_OK;
+
+ if (r_idx >= wx->num_tx_queues)
+ r_idx = r_idx % wx->num_tx_queues;
+ tx_ring = wx->tx_ring[r_idx];
+
+ return wx_xmit_frame_ring(skb, tx_ring);
+}
+EXPORT_SYMBOL(wx_xmit_frame);
+
+void wx_napi_enable_all(struct wx *wx)
+{
+ struct wx_q_vector *q_vector;
+ int q_idx;
+
+ for (q_idx = 0; q_idx < wx->num_q_vectors; q_idx++) {
+ q_vector = wx->q_vector[q_idx];
+ napi_enable(&q_vector->napi);
+ }
+}
+EXPORT_SYMBOL(wx_napi_enable_all);
+
+void wx_napi_disable_all(struct wx *wx)
+{
+ struct wx_q_vector *q_vector;
+ int q_idx;
+
+ for (q_idx = 0; q_idx < wx->num_q_vectors; q_idx++) {
+ q_vector = wx->q_vector[q_idx];
+ napi_disable(&q_vector->napi);
+ }
+}
+EXPORT_SYMBOL(wx_napi_disable_all);
+
+/**
+ * wx_set_rss_queues: Allocate queues for RSS
+ * @wx: board private structure to initialize
+ *
+ * This is our "base" multiqueue mode. RSS (Receive Side Scaling) requests
+ * the maximum Rx/Tx queue counts the MAC supports; the final counts are
+ * trimmed to the number of interrupt vectors actually acquired.
+ *
+ **/
+static void wx_set_rss_queues(struct wx *wx)
+{
+ wx->num_rx_queues = wx->mac.max_rx_queues;
+ wx->num_tx_queues = wx->mac.max_tx_queues;
+}
+
+static void wx_set_num_queues(struct wx *wx)
+{
+ /* Start with base case */
+ wx->num_rx_queues = 1;
+ wx->num_tx_queues = 1;
+ wx->queues_per_pool = 1;
+
+ wx_set_rss_queues(wx);
+}
+
+/**
+ * wx_acquire_msix_vectors - acquire MSI-X vectors
+ * @wx: board private structure
+ *
+ * Attempts to acquire a suitable range of MSI-X vector interrupts. Will
+ * return a negative error code if unable to acquire MSI-X vectors for any
+ * reason.
+ */
+static int wx_acquire_msix_vectors(struct wx *wx)
+{
+ struct irq_affinity affd = {0, };
+ int nvecs, i;
+
+ nvecs = min_t(int, num_online_cpus(), wx->mac.max_msix_vectors);
+
+ wx->msix_entries = kcalloc(nvecs,
+ sizeof(struct msix_entry),
+ GFP_KERNEL);
+ if (!wx->msix_entries)
+ return -ENOMEM;
+
+ nvecs = pci_alloc_irq_vectors_affinity(wx->pdev, nvecs,
+ nvecs,
+ PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
+ &affd);
+ if (nvecs < 0) {
+ wx_err(wx, "Failed to allocate MSI-X interrupts. Err: %d\n", nvecs);
+ kfree(wx->msix_entries);
+ wx->msix_entries = NULL;
+ return nvecs;
+ }
+
+ for (i = 0; i < nvecs; i++) {
+ wx->msix_entries[i].entry = i;
+ wx->msix_entries[i].vector = pci_irq_vector(wx->pdev, i);
+ }
+
+ /* one for msix_other */
+ nvecs -= 1;
+ wx->num_q_vectors = nvecs;
+ wx->num_rx_queues = nvecs;
+ wx->num_tx_queues = nvecs;
+
+ return 0;
+}
+
+/**
+ * wx_set_interrupt_capability - set MSI-X or MSI if supported
+ * @wx: board private structure to initialize
+ *
+ * Attempt to configure the interrupts using the best available
+ * capabilities of the hardware and the kernel.
+ **/
+static int wx_set_interrupt_capability(struct wx *wx)
+{
+ struct pci_dev *pdev = wx->pdev;
+ int nvecs, ret;
+
+ /* We will try to get MSI-X interrupts first */
+ ret = wx_acquire_msix_vectors(wx);
+ if (ret == 0 || ret == -ENOMEM)
+ return ret;
+
+ wx->num_rx_queues = 1;
+ wx->num_tx_queues = 1;
+ wx->num_q_vectors = 1;
+
+ /* minimum one for queue, one for misc */
+ nvecs = 1;
+ nvecs = pci_alloc_irq_vectors(pdev, nvecs,
+ nvecs, PCI_IRQ_MSI | PCI_IRQ_LEGACY);
+ if (nvecs == 1) {
+ if (pdev->msi_enabled)
+ wx_err(wx, "Fallback to MSI.\n");
+ else
+ wx_err(wx, "Fallback to LEGACY.\n");
+ } else {
+ wx_err(wx, "Failed to allocate MSI/LEGACY interrupts. Error: %d\n", nvecs);
+ return nvecs;
+ }
+
+ pdev->irq = pci_irq_vector(pdev, 0);
+
+ return 0;
+}
+
+/**
+ * wx_cache_ring_rss - Descriptor ring to register mapping for RSS
+ * @wx: board private structure to initialize
+ *
+ * Cache the descriptor ring offsets for RSS, ATR, FCoE, and SR-IOV.
+ *
+ **/
+static void wx_cache_ring_rss(struct wx *wx)
+{
+ u16 i;
+
+ for (i = 0; i < wx->num_rx_queues; i++)
+ wx->rx_ring[i]->reg_idx = i;
+
+ for (i = 0; i < wx->num_tx_queues; i++)
+ wx->tx_ring[i]->reg_idx = i;
+}
+
+static void wx_add_ring(struct wx_ring *ring, struct wx_ring_container *head)
+{
+ ring->next = head->ring;
+ head->ring = ring;
+ head->count++;
+}
+
+/**
+ * wx_alloc_q_vector - Allocate memory for a single interrupt vector
+ * @wx: board private structure to initialize
+ * @v_count: q_vectors allocated on wx, used for ring interleaving
+ * @v_idx: index of vector in wx struct
+ * @txr_count: total number of Tx rings to allocate
+ * @txr_idx: index of first Tx ring to allocate
+ * @rxr_count: total number of Rx rings to allocate
+ * @rxr_idx: index of first Rx ring to allocate
+ *
+ * We allocate one q_vector. If allocation fails we return -ENOMEM.
+ **/
+static int wx_alloc_q_vector(struct wx *wx,
+ unsigned int v_count, unsigned int v_idx,
+ unsigned int txr_count, unsigned int txr_idx,
+ unsigned int rxr_count, unsigned int rxr_idx)
+{
+ struct wx_q_vector *q_vector;
+ int ring_count, default_itr;
+ struct wx_ring *ring;
+
+ /* note this will allocate space for the ring structure as well! */
+ ring_count = txr_count + rxr_count;
+
+ q_vector = kzalloc(struct_size(q_vector, ring, ring_count),
+ GFP_KERNEL);
+ if (!q_vector)
+ return -ENOMEM;
+
+ /* initialize NAPI */
+ netif_napi_add(wx->netdev, &q_vector->napi,
+ wx_poll);
+
+ /* tie q_vector and wx together */
+ wx->q_vector[v_idx] = q_vector;
+ q_vector->wx = wx;
+ q_vector->v_idx = v_idx;
+ if (cpu_online(v_idx))
+ q_vector->numa_node = cpu_to_node(v_idx);
+
+ /* initialize pointer to rings */
+ ring = q_vector->ring;
+
+ if (wx->mac.type == wx_mac_sp)
+ default_itr = WX_12K_ITR;
+ else
+ default_itr = WX_7K_ITR;
+ /* initialize ITR */
+ if (txr_count && !rxr_count)
+ /* tx only vector */
+ q_vector->itr = wx->tx_itr_setting ?
+ default_itr : wx->tx_itr_setting;
+ else
+ /* rx or rx/tx vector */
+ q_vector->itr = wx->rx_itr_setting ?
+ default_itr : wx->rx_itr_setting;
+
+ while (txr_count) {
+ /* assign generic ring traits */
+ ring->dev = &wx->pdev->dev;
+ ring->netdev = wx->netdev;
+
+ /* configure backlink on ring */
+ ring->q_vector = q_vector;
+
+ /* update q_vector Tx values */
+ wx_add_ring(ring, &q_vector->tx);
+
+ /* apply Tx specific ring traits */
+ ring->count = wx->tx_ring_count;
+
+ ring->queue_index = txr_idx;
+
+ /* assign ring to wx */
+ wx->tx_ring[txr_idx] = ring;
+
+ /* update count and index */
+ txr_count--;
+ txr_idx += v_count;
+
+ /* push pointer to next ring */
+ ring++;
+ }
+
+ while (rxr_count) {
+ /* assign generic ring traits */
+ ring->dev = &wx->pdev->dev;
+ ring->netdev = wx->netdev;
+
+ /* configure backlink on ring */
+ ring->q_vector = q_vector;
+
+ /* update q_vector Rx values */
+ wx_add_ring(ring, &q_vector->rx);
+
+ /* apply Rx specific ring traits */
+ ring->count = wx->rx_ring_count;
+ ring->queue_index = rxr_idx;
+
+ /* assign ring to wx */
+ wx->rx_ring[rxr_idx] = ring;
+
+ /* update count and index */
+ rxr_count--;
+ rxr_idx += v_count;
+
+ /* push pointer to next ring */
+ ring++;
+ }
+
+ return 0;
+}
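+
+/* Because txr_idx/rxr_idx advance by v_count, rings end up interleaved
+ * across vectors rather than handed out in contiguous chunks: with 2
+ * vectors and 4 Tx rings, vector 0 takes rings 0 and 2 while vector 1
+ * takes rings 1 and 3.
+ */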
+
+/**
+ * wx_free_q_vector - Free memory allocated for specific interrupt vector
+ * @wx: board private structure
+ * @v_idx: Index of vector to be freed
+ *
+ * This function frees the memory allocated to the q_vector. In addition if
+ * NAPI is enabled it will delete any references to the NAPI struct prior
+ * to freeing the q_vector.
+ **/
+static void wx_free_q_vector(struct wx *wx, int v_idx)
+{
+ struct wx_q_vector *q_vector = wx->q_vector[v_idx];
+ struct wx_ring *ring;
+
+ wx_for_each_ring(ring, q_vector->tx)
+ wx->tx_ring[ring->queue_index] = NULL;
+
+ wx_for_each_ring(ring, q_vector->rx)
+ wx->rx_ring[ring->queue_index] = NULL;
+
+ wx->q_vector[v_idx] = NULL;
+ netif_napi_del(&q_vector->napi);
+ kfree_rcu(q_vector, rcu);
+}
+
+/**
+ * wx_alloc_q_vectors - Allocate memory for interrupt vectors
+ * @wx: board private structure to initialize
+ *
+ * We allocate one q_vector per queue interrupt. If allocation fails we
+ * return -ENOMEM.
+ **/
+static int wx_alloc_q_vectors(struct wx *wx)
+{
+ unsigned int rxr_idx = 0, txr_idx = 0, v_idx = 0;
+ unsigned int rxr_remaining = wx->num_rx_queues;
+ unsigned int txr_remaining = wx->num_tx_queues;
+ unsigned int q_vectors = wx->num_q_vectors;
+ int rqpv, tqpv;
+ int err;
+
+ for (; v_idx < q_vectors; v_idx++) {
+ rqpv = DIV_ROUND_UP(rxr_remaining, q_vectors - v_idx);
+ tqpv = DIV_ROUND_UP(txr_remaining, q_vectors - v_idx);
+ err = wx_alloc_q_vector(wx, q_vectors, v_idx,
+ tqpv, txr_idx,
+ rqpv, rxr_idx);
+
+ if (err)
+ goto err_out;
+
+ /* update counts and index */
+ rxr_remaining -= rqpv;
+ txr_remaining -= tqpv;
+ rxr_idx++;
+ txr_idx++;
+ }
+
+ return 0;
+
+err_out:
+ wx->num_tx_queues = 0;
+ wx->num_rx_queues = 0;
+ wx->num_q_vectors = 0;
+
+ while (v_idx--)
+ wx_free_q_vector(wx, v_idx);
+
+ return -ENOMEM;
+}
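+
+/* The DIV_ROUND_UP() split above spreads rings as evenly as possible,
+ * front-loading any remainder: e.g. 7 Rx rings over 3 vectors gives
+ * rqpv = 3, 2, 2 for v_idx = 0, 1, 2.
+ */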
+
+/**
+ * wx_free_q_vectors - Free memory allocated for interrupt vectors
+ * @wx: board private structure
+ *
+ * This function frees the memory allocated to the q_vectors. In addition if
+ * NAPI is enabled it will delete any references to the NAPI struct prior
+ * to freeing the q_vector.
+ **/
+static void wx_free_q_vectors(struct wx *wx)
+{
+ int v_idx = wx->num_q_vectors;
+
+ wx->num_tx_queues = 0;
+ wx->num_rx_queues = 0;
+ wx->num_q_vectors = 0;
+
+ while (v_idx--)
+ wx_free_q_vector(wx, v_idx);
+}
+
+void wx_reset_interrupt_capability(struct wx *wx)
+{
+ struct pci_dev *pdev = wx->pdev;
+
+ if (!pdev->msi_enabled && !pdev->msix_enabled)
+ return;
+
+ pci_free_irq_vectors(wx->pdev);
+ if (pdev->msix_enabled) {
+ kfree(wx->msix_entries);
+ wx->msix_entries = NULL;
+ }
+}
+EXPORT_SYMBOL(wx_reset_interrupt_capability);
+
+/**
+ * wx_clear_interrupt_scheme - Clear the current interrupt scheme settings
+ * @wx: board private structure to clear interrupt scheme on
+ *
+ * We go through and clear interrupt specific resources and reset the structure
+ * to pre-load conditions
+ **/
+void wx_clear_interrupt_scheme(struct wx *wx)
+{
+ wx_free_q_vectors(wx);
+ wx_reset_interrupt_capability(wx);
+}
+EXPORT_SYMBOL(wx_clear_interrupt_scheme);
+
+int wx_init_interrupt_scheme(struct wx *wx)
+{
+ int ret;
+
+ /* Number of supported queues */
+ wx_set_num_queues(wx);
+
+ /* Set interrupt mode */
+ ret = wx_set_interrupt_capability(wx);
+ if (ret) {
+ wx_err(wx, "Allocate irq vectors for failed.\n");
+ return ret;
+ }
+
+ /* Allocate memory for queues */
+ ret = wx_alloc_q_vectors(wx);
+ if (ret) {
+ wx_err(wx, "Unable to allocate memory for queue vectors.\n");
+ wx_reset_interrupt_capability(wx);
+ return ret;
+ }
+
+ wx_cache_ring_rss(wx);
+
+ return 0;
+}
+EXPORT_SYMBOL(wx_init_interrupt_scheme);
+
+irqreturn_t wx_msix_clean_rings(int __always_unused irq, void *data)
+{
+ struct wx_q_vector *q_vector = data;
+
+ /* EIAM disabled interrupts (on this vector) for us */
+ if (q_vector->rx.ring || q_vector->tx.ring)
+ napi_schedule_irqoff(&q_vector->napi);
+
+ return IRQ_HANDLED;
+}
+EXPORT_SYMBOL(wx_msix_clean_rings);
+
+void wx_free_irq(struct wx *wx)
+{
+ struct pci_dev *pdev = wx->pdev;
+ int vector;
+
+ if (!(pdev->msix_enabled)) {
+ free_irq(pdev->irq, wx);
+ return;
+ }
+
+ for (vector = 0; vector < wx->num_q_vectors; vector++) {
+ struct wx_q_vector *q_vector = wx->q_vector[vector];
+ struct msix_entry *entry = &wx->msix_entries[vector];
+
+ /* free only the irqs that were actually requested */
+ if (!q_vector->rx.ring && !q_vector->tx.ring)
+ continue;
+
+ free_irq(entry->vector, q_vector);
+ }
+
+ free_irq(wx->msix_entries[vector].vector, wx);
+}
+EXPORT_SYMBOL(wx_free_irq);
+
+/**
+ * wx_setup_isb_resources - allocate interrupt status resources
+ * @wx: board private structure
+ *
+ * Return 0 on success, negative on failure
+ **/
+int wx_setup_isb_resources(struct wx *wx)
+{
+ struct pci_dev *pdev = wx->pdev;
+
+ wx->isb_mem = dma_alloc_coherent(&pdev->dev,
+ sizeof(u32) * 4,
+ &wx->isb_dma,
+ GFP_KERNEL);
+ if (!wx->isb_mem) {
+ wx_err(wx, "Alloc isb_mem failed\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(wx_setup_isb_resources);
+
+/**
+ * wx_free_isb_resources - free interrupt status resources
+ * @wx: board private structure
+ **/
+void wx_free_isb_resources(struct wx *wx)
+{
+ struct pci_dev *pdev = wx->pdev;
+
+ dma_free_coherent(&pdev->dev, sizeof(u32) * 4,
+ wx->isb_mem, wx->isb_dma);
+ wx->isb_mem = NULL;
+}
+EXPORT_SYMBOL(wx_free_isb_resources);
+
+u32 wx_misc_isb(struct wx *wx, enum wx_isb_idx idx)
+{
+ u32 cur_tag = 0;
+
+ cur_tag = wx->isb_mem[WX_ISB_HEADER];
+ wx->isb_tag[idx] = cur_tag;
+
+ return (__force u32)cpu_to_le32(wx->isb_mem[idx]);
+}
+EXPORT_SYMBOL(wx_misc_isb);
+
+/**
+ * wx_set_ivar - set the IVAR registers, mapping interrupt causes to vectors
+ * @wx: pointer to wx struct
+ * @direction: 0 for Rx, 1 for Tx, -1 for other causes
+ * @queue: queue to map the corresponding interrupt to
+ * @msix_vector: the vector to map to the corresponding queue
+ *
+ **/
+static void wx_set_ivar(struct wx *wx, s8 direction,
+ u16 queue, u16 msix_vector)
+{
+ u32 ivar, index;
+
+ if (direction == -1) {
+ /* other causes */
+ msix_vector |= WX_PX_IVAR_ALLOC_VAL;
+ index = 0;
+ ivar = rd32(wx, WX_PX_MISC_IVAR);
+ ivar &= ~(0xFF << index);
+ ivar |= (msix_vector << index);
+ wr32(wx, WX_PX_MISC_IVAR, ivar);
+ } else {
+ /* tx or rx causes */
+ msix_vector |= WX_PX_IVAR_ALLOC_VAL;
+ index = ((16 * (queue & 1)) + (8 * direction));
+ ivar = rd32(wx, WX_PX_IVAR(queue >> 1));
+ ivar &= ~(0xFF << index);
+ ivar |= (msix_vector << index);
+ wr32(wx, WX_PX_IVAR(queue >> 1), ivar);
+ }
+}
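+
+/* IVAR layout implied above: each 32-bit WX_PX_IVAR register carries
+ * four 8-bit entries covering two queues (one Rx and one Tx each).
+ * E.g. mapping Tx queue 3 to vector 2: index = 16 * (3 & 1) + 8 * 1 =
+ * 24, so bits 31:24 of WX_PX_IVAR(1) are written with 0x82
+ * (vector 2 | WX_PX_IVAR_ALLOC_VAL).
+ */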
+
+/**
+ * wx_write_eitr - write EITR register in hardware specific way
+ * @q_vector: structure containing interrupt and ring information
+ *
+ * This function is made to be called by ethtool and by the driver
+ * when it needs to update EITR registers at runtime. Hardware
+ * specific quirks/differences are taken care of here.
+ */
+static void wx_write_eitr(struct wx_q_vector *q_vector)
+{
+ struct wx *wx = q_vector->wx;
+ int v_idx = q_vector->v_idx;
+ u32 itr_reg;
+
+ if (wx->mac.type == wx_mac_sp)
+ itr_reg = q_vector->itr & WX_SP_MAX_EITR;
+ else
+ itr_reg = q_vector->itr & WX_EM_MAX_EITR;
+
+ itr_reg |= WX_PX_ITR_CNT_WDIS;
+
+ wr32(wx, WX_PX_ITR(v_idx), itr_reg);
+}
+
+/**
+ * wx_configure_vectors - Configure vectors for hardware
+ * @wx: board private structure
+ *
+ * wx_configure_vectors sets up the hardware to properly generate MSI-X/MSI/LEGACY
+ * interrupts.
+ **/
+void wx_configure_vectors(struct wx *wx)
+{
+ struct pci_dev *pdev = wx->pdev;
+ u32 eitrsel = 0;
+ u16 v_idx;
+
+ if (pdev->msix_enabled) {
+ /* Populate MSIX to EITR Select */
+ wr32(wx, WX_PX_ITRSEL, eitrsel);
+ /* use EIAM to auto-mask when MSI-X interrupt is asserted
+ * this saves a register write for every interrupt
+ */
+ wr32(wx, WX_PX_GPIE, WX_PX_GPIE_MODEL);
+ } else {
+ /* legacy interrupts, use EIAM to auto-mask when reading EICR,
+ * specifically only auto mask tx and rx interrupts.
+ */
+ wr32(wx, WX_PX_GPIE, 0);
+ }
+
+ /* Populate the IVAR table and set the ITR values to the
+ * corresponding register.
+ */
+ for (v_idx = 0; v_idx < wx->num_q_vectors; v_idx++) {
+ struct wx_q_vector *q_vector = wx->q_vector[v_idx];
+ struct wx_ring *ring;
+
+ wx_for_each_ring(ring, q_vector->rx)
+ wx_set_ivar(wx, 0, ring->reg_idx, v_idx);
+
+ wx_for_each_ring(ring, q_vector->tx)
+ wx_set_ivar(wx, 1, ring->reg_idx, v_idx);
+
+ wx_write_eitr(q_vector);
+ }
+
+ wx_set_ivar(wx, -1, 0, v_idx);
+ if (pdev->msix_enabled)
+ wr32(wx, WX_PX_ITR(v_idx), 1950);
+}
+EXPORT_SYMBOL(wx_configure_vectors);
+
+/**
+ * wx_clean_rx_ring - Free Rx Buffers per Queue
+ * @rx_ring: ring to free buffers from
+ **/
+static void wx_clean_rx_ring(struct wx_ring *rx_ring)
+{
+ struct wx_rx_buffer *rx_buffer;
+ u16 i = rx_ring->next_to_clean;
+
+ rx_buffer = &rx_ring->rx_buffer_info[i];
+
+ /* Free all the Rx ring sk_buffs */
+ while (i != rx_ring->next_to_alloc) {
+ if (rx_buffer->skb) {
+ struct sk_buff *skb = rx_buffer->skb;
+
+ if (WX_CB(skb)->page_released)
+ page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false);
+
+ dev_kfree_skb(skb);
+ }
+
+ /* Invalidate cache lines that may have been written to by
+ * device so that we avoid corrupting memory.
+ */
+ dma_sync_single_range_for_cpu(rx_ring->dev,
+ rx_buffer->dma,
+ rx_buffer->page_offset,
+ WX_RX_BUFSZ,
+ DMA_FROM_DEVICE);
+
+ /* free resources associated with mapping */
+ page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false);
+ __page_frag_cache_drain(rx_buffer->page,
+ rx_buffer->pagecnt_bias);
+
+ i++;
+ rx_buffer++;
+ if (i == rx_ring->count) {
+ i = 0;
+ rx_buffer = rx_ring->rx_buffer_info;
+ }
+ }
+
+ rx_ring->next_to_alloc = 0;
+ rx_ring->next_to_clean = 0;
+ rx_ring->next_to_use = 0;
+}
+
+/**
+ * wx_clean_all_rx_rings - Free Rx Buffers for all queues
+ * @wx: board private structure
+ **/
+void wx_clean_all_rx_rings(struct wx *wx)
+{
+ int i;
+
+ for (i = 0; i < wx->num_rx_queues; i++)
+ wx_clean_rx_ring(wx->rx_ring[i]);
+}
+EXPORT_SYMBOL(wx_clean_all_rx_rings);
+
+/**
+ * wx_free_rx_resources - Free Rx Resources
+ * @rx_ring: ring to clean the resources from
+ *
+ * Free all receive software resources
+ **/
+static void wx_free_rx_resources(struct wx_ring *rx_ring)
+{
+ wx_clean_rx_ring(rx_ring);
+ kvfree(rx_ring->rx_buffer_info);
+ rx_ring->rx_buffer_info = NULL;
+
+ /* if not set, then don't free */
+ if (!rx_ring->desc)
+ return;
+
+ dma_free_coherent(rx_ring->dev, rx_ring->size,
+ rx_ring->desc, rx_ring->dma);
+
+ rx_ring->desc = NULL;
+
+ if (rx_ring->page_pool) {
+ page_pool_destroy(rx_ring->page_pool);
+ rx_ring->page_pool = NULL;
+ }
+}
+
+/**
+ * wx_free_all_rx_resources - Free Rx Resources for All Queues
+ * @wx: pointer to hardware structure
+ *
+ * Free all receive software resources
+ **/
+static void wx_free_all_rx_resources(struct wx *wx)
+{
+ int i;
+
+ for (i = 0; i < wx->num_rx_queues; i++)
+ wx_free_rx_resources(wx->rx_ring[i]);
+}
+
+/**
+ * wx_clean_tx_ring - Free Tx Buffers
+ * @tx_ring: ring to be cleaned
+ **/
+static void wx_clean_tx_ring(struct wx_ring *tx_ring)
+{
+ struct wx_tx_buffer *tx_buffer;
+ u16 i = tx_ring->next_to_clean;
+
+ tx_buffer = &tx_ring->tx_buffer_info[i];
+
+ while (i != tx_ring->next_to_use) {
+ union wx_tx_desc *eop_desc, *tx_desc;
+
+ /* Free all the Tx ring sk_buffs */
+ dev_kfree_skb_any(tx_buffer->skb);
+
+ /* unmap skb header data */
+ dma_unmap_single(tx_ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+
+ /* check for eop_desc to determine the end of the packet */
+ eop_desc = tx_buffer->next_to_watch;
+ tx_desc = WX_TX_DESC(tx_ring, i);
+
+ /* unmap remaining buffers */
+ while (tx_desc != eop_desc) {
+ tx_buffer++;
+ tx_desc++;
+ i++;
+ if (unlikely(i == tx_ring->count)) {
+ i = 0;
+ tx_buffer = tx_ring->tx_buffer_info;
+ tx_desc = WX_TX_DESC(tx_ring, 0);
+ }
+
+ /* unmap any remaining paged data */
+ if (dma_unmap_len(tx_buffer, len))
+ dma_unmap_page(tx_ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+ }
+
+ /* move us one more past the eop_desc for start of next pkt */
+ tx_buffer++;
+ i++;
+ if (unlikely(i == tx_ring->count)) {
+ i = 0;
+ tx_buffer = tx_ring->tx_buffer_info;
+ }
+ }
+
+ netdev_tx_reset_queue(wx_txring_txq(tx_ring));
+
+ /* reset next_to_use and next_to_clean */
+ tx_ring->next_to_use = 0;
+ tx_ring->next_to_clean = 0;
+}
+
+/**
+ * wx_clean_all_tx_rings - Free Tx Buffers for all queues
+ * @wx: board private structure
+ **/
+void wx_clean_all_tx_rings(struct wx *wx)
+{
+ int i;
+
+ for (i = 0; i < wx->num_tx_queues; i++)
+ wx_clean_tx_ring(wx->tx_ring[i]);
+}
+EXPORT_SYMBOL(wx_clean_all_tx_rings);
+
+/**
+ * wx_free_tx_resources - Free Tx Resources per Queue
+ * @tx_ring: Tx descriptor ring for a specific queue
+ *
+ * Free all transmit software resources
+ **/
+static void wx_free_tx_resources(struct wx_ring *tx_ring)
+{
+ wx_clean_tx_ring(tx_ring);
+ kvfree(tx_ring->tx_buffer_info);
+ tx_ring->tx_buffer_info = NULL;
+
+ /* if not set, then don't free */
+ if (!tx_ring->desc)
+ return;
+
+ dma_free_coherent(tx_ring->dev, tx_ring->size,
+ tx_ring->desc, tx_ring->dma);
+ tx_ring->desc = NULL;
+}
+
+/**
+ * wx_free_all_tx_resources - Free Tx Resources for All Queues
+ * @wx: pointer to hardware structure
+ *
+ * Free all transmit software resources
+ **/
+static void wx_free_all_tx_resources(struct wx *wx)
+{
+ int i;
+
+ for (i = 0; i < wx->num_tx_queues; i++)
+ wx_free_tx_resources(wx->tx_ring[i]);
+}
+
+void wx_free_resources(struct wx *wx)
+{
+ wx_free_isb_resources(wx);
+ wx_free_all_rx_resources(wx);
+ wx_free_all_tx_resources(wx);
+}
+EXPORT_SYMBOL(wx_free_resources);
+
+static int wx_alloc_page_pool(struct wx_ring *rx_ring)
+{
+ int ret = 0;
+
+ struct page_pool_params pp_params = {
+ .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
+ .order = 0,
+ .pool_size = rx_ring->size,
+ .nid = dev_to_node(rx_ring->dev),
+ .dev = rx_ring->dev,
+ .dma_dir = DMA_FROM_DEVICE,
+ .offset = 0,
+ .max_len = PAGE_SIZE,
+ };
+
+ rx_ring->page_pool = page_pool_create(&pp_params);
+ if (IS_ERR(rx_ring->page_pool)) {
+ ret = PTR_ERR(rx_ring->page_pool);
+ rx_ring->page_pool = NULL;
+ }
+
+ return ret;
+}
+
+/**
+ * wx_setup_rx_resources - allocate Rx resources (Descriptors)
+ * @rx_ring: rx descriptor ring (for a specific queue) to setup
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int wx_setup_rx_resources(struct wx_ring *rx_ring)
+{
+ struct device *dev = rx_ring->dev;
+ int orig_node = dev_to_node(dev);
+ int numa_node = NUMA_NO_NODE;
+ int size, ret;
+
+ size = sizeof(struct wx_rx_buffer) * rx_ring->count;
+
+ if (rx_ring->q_vector)
+ numa_node = rx_ring->q_vector->numa_node;
+
+ rx_ring->rx_buffer_info = kvmalloc_node(size, GFP_KERNEL, numa_node);
+ if (!rx_ring->rx_buffer_info)
+ rx_ring->rx_buffer_info = kvmalloc(size, GFP_KERNEL);
+ if (!rx_ring->rx_buffer_info)
+ goto err;
+
+ /* Round up to nearest 4K */
+ rx_ring->size = rx_ring->count * sizeof(union wx_rx_desc);
+ rx_ring->size = ALIGN(rx_ring->size, 4096);
+
+ set_dev_node(dev, numa_node);
+ rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size,
+ &rx_ring->dma, GFP_KERNEL);
+ if (!rx_ring->desc) {
+ set_dev_node(dev, orig_node);
+ rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size,
+ &rx_ring->dma, GFP_KERNEL);
+ }
+
+ if (!rx_ring->desc)
+ goto err;
+
+ rx_ring->next_to_clean = 0;
+ rx_ring->next_to_use = 0;
+
+ ret = wx_alloc_page_pool(rx_ring);
+ if (ret < 0) {
+ dev_err(rx_ring->dev, "Page pool creation failed: %d\n", ret);
+ goto err;
+ }
+
+ return 0;
+err:
+ kvfree(rx_ring->rx_buffer_info);
+ rx_ring->rx_buffer_info = NULL;
+ dev_err(dev, "Unable to allocate memory for the Rx descriptor ring\n");
+ return -ENOMEM;
+}
+
+/**
+ * wx_setup_all_rx_resources - allocate all queues Rx resources
+ * @wx: pointer to hardware structure
+ *
+ * If this function returns with an error, then it's possible one or
+ * more of the rings is populated (while the rest are not). It is the
+ * caller's duty to clean those orphaned rings.
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int wx_setup_all_rx_resources(struct wx *wx)
+{
+ int i, err = 0;
+
+ for (i = 0; i < wx->num_rx_queues; i++) {
+ err = wx_setup_rx_resources(wx->rx_ring[i]);
+ if (!err)
+ continue;
+
+ wx_err(wx, "Allocation for Rx Queue %u failed\n", i);
+ goto err_setup_rx;
+ }
+
+ return 0;
+err_setup_rx:
+ /* rewind the index freeing the rings as we go */
+ while (i--)
+ wx_free_rx_resources(wx->rx_ring[i]);
+ return err;
+}
+
+/**
+ * wx_setup_tx_resources - allocate Tx resources (Descriptors)
+ * @tx_ring: tx descriptor ring (for a specific queue) to setup
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int wx_setup_tx_resources(struct wx_ring *tx_ring)
+{
+ struct device *dev = tx_ring->dev;
+ int orig_node = dev_to_node(dev);
+ int numa_node = NUMA_NO_NODE;
+ int size;
+
+ size = sizeof(struct wx_tx_buffer) * tx_ring->count;
+
+ if (tx_ring->q_vector)
+ numa_node = tx_ring->q_vector->numa_node;
+
+ tx_ring->tx_buffer_info = kvmalloc_node(size, GFP_KERNEL, numa_node);
+ if (!tx_ring->tx_buffer_info)
+ tx_ring->tx_buffer_info = kvmalloc(size, GFP_KERNEL);
+ if (!tx_ring->tx_buffer_info)
+ goto err;
+
+ /* round up to nearest 4K */
+ tx_ring->size = tx_ring->count * sizeof(union wx_tx_desc);
+ tx_ring->size = ALIGN(tx_ring->size, 4096);
+
+ set_dev_node(dev, numa_node);
+ tx_ring->desc = dma_alloc_coherent(dev, tx_ring->size,
+ &tx_ring->dma, GFP_KERNEL);
+ if (!tx_ring->desc) {
+ set_dev_node(dev, orig_node);
+ tx_ring->desc = dma_alloc_coherent(dev, tx_ring->size,
+ &tx_ring->dma, GFP_KERNEL);
+ }
+
+ if (!tx_ring->desc)
+ goto err;
+
+ tx_ring->next_to_use = 0;
+ tx_ring->next_to_clean = 0;
+
+ return 0;
+
+err:
+ kvfree(tx_ring->tx_buffer_info);
+ tx_ring->tx_buffer_info = NULL;
+ dev_err(dev, "Unable to allocate memory for the Tx descriptor ring\n");
+ return -ENOMEM;
+}
+
+/**
+ * wx_setup_all_tx_resources - allocate all queues Tx resources
+ * @wx: pointer to private structure
+ *
+ * If this function returns with an error, then it's possible one or
+ * more of the rings is populated (while the rest are not). It is the
+ * caller's duty to clean those orphaned rings.
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int wx_setup_all_tx_resources(struct wx *wx)
+{
+ int i, err = 0;
+
+ for (i = 0; i < wx->num_tx_queues; i++) {
+ err = wx_setup_tx_resources(wx->tx_ring[i]);
+ if (!err)
+ continue;
+
+ wx_err(wx, "Allocation for Tx Queue %u failed\n", i);
+ goto err_setup_tx;
+ }
+
+ return 0;
+err_setup_tx:
+ /* rewind the index freeing the rings as we go */
+ while (i--)
+ wx_free_tx_resources(wx->tx_ring[i]);
+ return err;
+}
+
+int wx_setup_resources(struct wx *wx)
+{
+ int err;
+
+ /* allocate transmit descriptors */
+ err = wx_setup_all_tx_resources(wx);
+ if (err)
+ return err;
+
+ /* allocate receive descriptors */
+ err = wx_setup_all_rx_resources(wx);
+ if (err)
+ goto err_free_tx;
+
+ err = wx_setup_isb_resources(wx);
+ if (err)
+ goto err_free_rx;
+
+ return 0;
+
+err_free_rx:
+ wx_free_all_rx_resources(wx);
+err_free_tx:
+ wx_free_all_tx_resources(wx);
+
+ return err;
+}
+EXPORT_SYMBOL(wx_setup_resources);
+
+/**
+ * wx_get_stats64 - Get System Network Statistics
+ * @netdev: network interface device structure
+ * @stats: storage space for 64bit statistics
+ */
+void wx_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+{
+ struct wx *wx = netdev_priv(netdev);
+ int i;
+
+ rcu_read_lock();
+ for (i = 0; i < wx->num_rx_queues; i++) {
+ struct wx_ring *ring = READ_ONCE(wx->rx_ring[i]);
+ u64 bytes, packets;
+ unsigned int start;
+
+ if (ring) {
+ do {
+ start = u64_stats_fetch_begin(&ring->syncp);
+ packets = ring->stats.packets;
+ bytes = ring->stats.bytes;
+ } while (u64_stats_fetch_retry(&ring->syncp, start));
+ stats->rx_packets += packets;
+ stats->rx_bytes += bytes;
+ }
+ }
+
+ for (i = 0; i < wx->num_tx_queues; i++) {
+ struct wx_ring *ring = READ_ONCE(wx->tx_ring[i]);
+ u64 bytes, packets;
+ unsigned int start;
+
+ if (ring) {
+ do {
+ start = u64_stats_fetch_begin(&ring->syncp);
+ packets = ring->stats.packets;
+ bytes = ring->stats.bytes;
+ } while (u64_stats_fetch_retry(&ring->syncp, start));
+ stats->tx_packets += packets;
+ stats->tx_bytes += bytes;
+ }
+ }
+
+ rcu_read_unlock();
+}
+EXPORT_SYMBOL(wx_get_stats64);
+
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.h b/drivers/net/ethernet/wangxun/libwx/wx_lib.h
new file mode 100644
index 000000000000..50ee41f1fa10
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * WangXun Gigabit PCI Express Linux driver
+ * Copyright (c) 2019 - 2022 Beijing WangXun Technology Co., Ltd.
+ */
+
+#ifndef _WX_LIB_H_
+#define _WX_LIB_H_
+
+void wx_alloc_rx_buffers(struct wx_ring *rx_ring, u16 cleaned_count);
+u16 wx_desc_unused(struct wx_ring *ring);
+netdev_tx_t wx_xmit_frame(struct sk_buff *skb,
+ struct net_device *netdev);
+void wx_napi_enable_all(struct wx *wx);
+void wx_napi_disable_all(struct wx *wx);
+void wx_reset_interrupt_capability(struct wx *wx);
+void wx_clear_interrupt_scheme(struct wx *wx);
+int wx_init_interrupt_scheme(struct wx *wx);
+irqreturn_t wx_msix_clean_rings(int __always_unused irq, void *data);
+void wx_free_irq(struct wx *wx);
+int wx_setup_isb_resources(struct wx *wx);
+void wx_free_isb_resources(struct wx *wx);
+u32 wx_misc_isb(struct wx *wx, enum wx_isb_idx idx);
+void wx_configure_vectors(struct wx *wx);
+void wx_clean_all_rx_rings(struct wx *wx);
+void wx_clean_all_tx_rings(struct wx *wx);
+void wx_free_resources(struct wx *wx);
+int wx_setup_resources(struct wx *wx);
+void wx_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats);
+
+#endif /* _WX_LIB_H_ */
diff --git a/drivers/net/ethernet/wangxun/libwx/wx_type.h b/drivers/net/ethernet/wangxun/libwx/wx_type.h
index 1cbeef8230bf..77d8d7f1707e 100644
--- a/drivers/net/ethernet/wangxun/libwx/wx_type.h
+++ b/drivers/net/ethernet/wangxun/libwx/wx_type.h
@@ -4,6 +4,9 @@
#ifndef _WX_TYPE_H_
#define _WX_TYPE_H_
+#include <linux/bitfield.h>
+#include <linux/netdevice.h>
+
/* Vendor ID */
#ifndef PCI_VENDOR_ID_WANGXUN
#define PCI_VENDOR_ID_WANGXUN 0x8088
@@ -36,12 +39,11 @@
#define WX_SPI_CMD 0x10104
#define WX_SPI_CMD_READ_DWORD 0x1
#define WX_SPI_CLK_DIV 0x3
-#define WX_SPI_CMD_CMD(_v) (((_v) & 0x7) << 28)
-#define WX_SPI_CMD_CLK(_v) (((_v) & 0x7) << 25)
-#define WX_SPI_CMD_ADDR(_v) (((_v) & 0xFFFFFF))
+#define WX_SPI_CMD_CMD(_v) FIELD_PREP(GENMASK(30, 28), _v)
+#define WX_SPI_CMD_CLK(_v) FIELD_PREP(GENMASK(27, 25), _v)
+#define WX_SPI_CMD_ADDR(_v) FIELD_PREP(GENMASK(23, 0), _v)
#define WX_SPI_DATA 0x10108
#define WX_SPI_DATA_BYPASS BIT(31)
-#define WX_SPI_DATA_STATUS(_v) (((_v) & 0xFF) << 16)
#define WX_SPI_DATA_OP_DONE BIT(0)
#define WX_SPI_STATUS 0x1010C
#define WX_SPI_STATUS_OPDONE BIT(0)
@@ -64,21 +66,50 @@
/* port cfg Registers */
#define WX_CFG_PORT_CTL 0x14400
#define WX_CFG_PORT_CTL_DRV_LOAD BIT(3)
+#define WX_CFG_PORT_CTL_QINQ BIT(2)
+#define WX_CFG_PORT_CTL_D_VLAN BIT(0) /* double vlan */
+#define WX_CFG_TAG_TPID(_i) (0x14430 + ((_i) * 4))
+
+/* GPIO Registers */
+#define WX_GPIO_DR 0x14800
+#define WX_GPIO_DR_0 BIT(0) /* SDP0 Data Value */
+#define WX_GPIO_DR_1 BIT(1) /* SDP1 Data Value */
+#define WX_GPIO_DDR 0x14804
+#define WX_GPIO_DDR_0 BIT(0) /* SDP0 IO direction */
+#define WX_GPIO_DDR_1 BIT(1) /* SDP1 IO direction */
+#define WX_GPIO_CTL 0x14808
+#define WX_GPIO_INTEN 0x14830
+#define WX_GPIO_INTEN_0 BIT(0)
+#define WX_GPIO_INTEN_1 BIT(1)
+#define WX_GPIO_INTMASK 0x14834
+#define WX_GPIO_INTTYPE_LEVEL 0x14838
+#define WX_GPIO_POLARITY 0x1483C
+#define WX_GPIO_EOI 0x1484C
/*********************** Transmit DMA registers **************************/
/* transmit global control */
#define WX_TDM_CTL 0x18000
/* TDM CTL BIT */
#define WX_TDM_CTL_TE BIT(0) /* Transmit Enable */
+#define WX_TDM_PB_THRE(_i) (0x18020 + ((_i) * 4))
/***************************** RDB registers *********************************/
/* receive packet buffer */
#define WX_RDB_PB_CTL 0x19000
#define WX_RDB_PB_CTL_RXEN BIT(31) /* Enable Receiver */
#define WX_RDB_PB_CTL_DISABLED BIT(0)
+#define WX_RDB_PB_SZ(_i) (0x19020 + ((_i) * 4))
+#define WX_RDB_PB_SZ_SHIFT 10
/* statistic */
#define WX_RDB_PFCMACDAL 0x19210
#define WX_RDB_PFCMACDAH 0x19214
+/* ring assignment */
+#define WX_RDB_PL_CFG(_i) (0x19300 + ((_i) * 4))
+#define WX_RDB_PL_CFG_L4HDR BIT(1)
+#define WX_RDB_PL_CFG_L3HDR BIT(2)
+#define WX_RDB_PL_CFG_L2HDR BIT(3)
+#define WX_RDB_PL_CFG_TUN_TUNHDR BIT(4)
+#define WX_RDB_PL_CFG_TUN_OUTL2HDR BIT(5)
/******************************* PSR Registers *******************************/
/* psr control */
@@ -96,10 +127,24 @@
#define WX_PSR_CTL_MO_SHIFT 5
#define WX_PSR_CTL_MO (0x3 << WX_PSR_CTL_MO_SHIFT)
#define WX_PSR_CTL_TPE BIT(4)
+#define WX_PSR_MAX_SZ 0x15020
+#define WX_PSR_VLAN_CTL 0x15088
+#define WX_PSR_VLAN_CTL_CFIEN BIT(29) /* bit 29 */
+#define WX_PSR_VLAN_CTL_VFE BIT(30) /* bit 30 */
/* mcast/ucast overflow tbl */
#define WX_PSR_MC_TBL(_i) (0x15200 + ((_i) * 4))
#define WX_PSR_UC_TBL(_i) (0x15400 + ((_i) * 4))
+/* VM L2 control */
+#define WX_PSR_VM_L2CTL(_i) (0x15600 + ((_i) * 4))
+#define WX_PSR_VM_L2CTL_UPE BIT(4) /* unicast promiscuous */
+#define WX_PSR_VM_L2CTL_VACC BIT(6) /* accept nomatched vlan */
+#define WX_PSR_VM_L2CTL_AUPE BIT(8) /* accept untagged packets */
+#define WX_PSR_VM_L2CTL_ROMPE BIT(9) /* accept packets in MTA tbl */
+#define WX_PSR_VM_L2CTL_ROPE BIT(10) /* accept packets in UC tbl */
+#define WX_PSR_VM_L2CTL_BAM BIT(11) /* accept broadcast packets */
+#define WX_PSR_VM_L2CTL_MPE BIT(12) /* multicast promiscuous */
+
/* Management */
#define WX_PSR_MNG_FLEX_SEL 0x1582C
#define WX_PSR_MNG_FLEX_DW_L(_i) (0x15A00 + ((_i) * 16))
@@ -113,14 +158,35 @@
/* mac switcher */
#define WX_PSR_MAC_SWC_AD_L 0x16200
#define WX_PSR_MAC_SWC_AD_H 0x16204
-#define WX_PSR_MAC_SWC_AD_H_AD(v) (((v) & 0xFFFF))
-#define WX_PSR_MAC_SWC_AD_H_ADTYPE(v) (((v) & 0x1) << 30)
+#define WX_PSR_MAC_SWC_AD_H_AD(v) FIELD_PREP(U16_MAX, v)
+#define WX_PSR_MAC_SWC_AD_H_ADTYPE(v) FIELD_PREP(BIT(30), v)
#define WX_PSR_MAC_SWC_AD_H_AV BIT(31)
#define WX_PSR_MAC_SWC_VM_L 0x16208
#define WX_PSR_MAC_SWC_VM_H 0x1620C
#define WX_PSR_MAC_SWC_IDX 0x16210
#define WX_CLEAR_VMDQ_ALL 0xFFFFFFFFU
+/********************************* RSEC **************************************/
+/* general rsec */
+#define WX_RSC_CTL 0x17000
+#define WX_RSC_CTL_SAVE_MAC_ERR BIT(6)
+#define WX_RSC_CTL_CRC_STRIP BIT(2)
+#define WX_RSC_CTL_RX_DIS BIT(1)
+#define WX_RSC_ST 0x17004
+#define WX_RSC_ST_RSEC_RDY BIT(0)
+
+/****************************** TDB ******************************************/
+#define WX_TDB_PB_SZ(_i) (0x1CC00 + ((_i) * 4))
+#define WX_TXPKT_SIZE_MAX 0xA /* Max Tx Packet size */
+
+/****************************** TSEC *****************************************/
+/* Security Control Registers */
+#define WX_TSC_CTL 0x1D000
+#define WX_TSC_CTL_TX_DIS BIT(1)
+#define WX_TSC_CTL_TSEC_DIS BIT(0)
+#define WX_TSC_BUF_AE 0x1D00C
+#define WX_TSC_BUF_AE_THR GENMASK(9, 0)
+
/************************************** MNG ********************************/
#define WX_MNG_SWFW_SYNC 0x1E008
#define WX_MNG_SWFW_SYNC_SW_MB BIT(2)
@@ -133,11 +199,15 @@
/************************************* ETH MAC *****************************/
#define WX_MAC_TX_CFG 0x11000
#define WX_MAC_TX_CFG_TE BIT(0)
+#define WX_MAC_TX_CFG_SPEED_MASK GENMASK(30, 29)
+#define WX_MAC_TX_CFG_SPEED_10G FIELD_PREP(WX_MAC_TX_CFG_SPEED_MASK, 0)
+#define WX_MAC_TX_CFG_SPEED_1G FIELD_PREP(WX_MAC_TX_CFG_SPEED_MASK, 3)
#define WX_MAC_RX_CFG 0x11004
#define WX_MAC_RX_CFG_RE BIT(0)
#define WX_MAC_RX_CFG_JE BIT(8)
#define WX_MAC_PKT_FLT 0x11008
#define WX_MAC_PKT_FLT_PR BIT(0) /* promiscuous mode */
+#define WX_MAC_WDG_TIMEOUT 0x1100C
#define WX_MAC_RX_FLOW_CTRL 0x11090
#define WX_MAC_RX_FLOW_CTRL_RFE BIT(0) /* receive fc enable */
#define WX_MMC_CONTROL 0x11800
@@ -147,10 +217,34 @@
/* Interrupt Registers */
#define WX_BME_CTL 0x12020
#define WX_PX_MISC_IC 0x100
+#define WX_PX_MISC_ICS 0x104
+#define WX_PX_MISC_IEN 0x108
+#define WX_PX_INTA 0x110
+#define WX_PX_GPIE 0x118
+#define WX_PX_GPIE_MODEL BIT(0)
+#define WX_PX_IC 0x120
#define WX_PX_IMS(_i) (0x140 + (_i) * 4)
+#define WX_PX_IMC(_i) (0x150 + (_i) * 4)
+#define WX_PX_ISB_ADDR_L 0x160
+#define WX_PX_ISB_ADDR_H 0x164
#define WX_PX_TRANSACTION_PENDING 0x168
+#define WX_PX_ITRSEL 0x180
+#define WX_PX_ITR(_i) (0x200 + (_i) * 4)
+#define WX_PX_ITR_CNT_WDIS BIT(31)
+#define WX_PX_MISC_IVAR 0x4FC
+#define WX_PX_IVAR(_i) (0x500 + (_i) * 4)
+
+#define WX_PX_IVAR_ALLOC_VAL 0x80 /* Interrupt Allocation valid */
+#define WX_7K_ITR 595
+#define WX_12K_ITR 336
+#define WX_SP_MAX_EITR 0x00000FF8U
+#define WX_EM_MAX_EITR 0x00007FFCU
/* transmit DMA Registers */
+#define WX_PX_TR_BAL(_i) (0x03000 + ((_i) * 0x40))
+#define WX_PX_TR_BAH(_i) (0x03004 + ((_i) * 0x40))
+#define WX_PX_TR_WP(_i) (0x03008 + ((_i) * 0x40))
+#define WX_PX_TR_RP(_i) (0x0300C + ((_i) * 0x40))
#define WX_PX_TR_CFG(_i) (0x03010 + ((_i) * 0x40))
/* Transmit Config masks */
#define WX_PX_TR_CFG_ENABLE BIT(0) /* Ena specific Tx Queue */
@@ -160,8 +254,22 @@
#define WX_PX_TR_CFG_THRE_SHIFT 8
/* Receive DMA Registers */
+#define WX_PX_RR_BAL(_i) (0x01000 + ((_i) * 0x40))
+#define WX_PX_RR_BAH(_i) (0x01004 + ((_i) * 0x40))
+#define WX_PX_RR_WP(_i) (0x01008 + ((_i) * 0x40))
+#define WX_PX_RR_RP(_i) (0x0100C + ((_i) * 0x40))
#define WX_PX_RR_CFG(_i) (0x01010 + ((_i) * 0x40))
/* PX_RR_CFG bit definitions */
+#define WX_PX_RR_CFG_SPLIT_MODE BIT(26)
+#define WX_PX_RR_CFG_RR_THER_SHIFT 16
+#define WX_PX_RR_CFG_RR_HDR_SZ GENMASK(15, 12)
+#define WX_PX_RR_CFG_RR_BUF_SZ GENMASK(11, 8)
+#define WX_PX_RR_CFG_BHDRSIZE_SHIFT 6 /* 64-byte resolution (>> 6)
+ * + at bit 8 offset (<< 12)
+ * = (<< 6)
+ */
+#define WX_PX_RR_CFG_BSIZEPKT_SHIFT 2 /* so many KBs */
+#define WX_PX_RR_CFG_RR_SIZE_SHIFT 1
#define WX_PX_RR_CFG_RR_EN BIT(0)
/* Number of 80 microseconds we wait for PCI Express master disable */
@@ -185,6 +293,50 @@
#define WX_SW_REGION_PTR 0x1C
+#define WX_MAC_STATE_DEFAULT 0x1
+#define WX_MAC_STATE_MODIFIED 0x2
+#define WX_MAC_STATE_IN_USE 0x4
+
+#define WX_MAX_RXD 8192
+#define WX_MAX_TXD 8192
+
+/* Supported Rx Buffer Sizes */
+#define WX_RXBUFFER_256 256 /* Used for skb receive header */
+#define WX_RXBUFFER_2K 2048
+#define WX_MAX_RXBUFFER 16384 /* largest size for single descriptor */
+
+#if MAX_SKB_FRAGS < 8
+#define WX_RX_BUFSZ ALIGN(WX_MAX_RXBUFFER / MAX_SKB_FRAGS, 1024)
+#else
+#define WX_RX_BUFSZ WX_RXBUFFER_2K
+#endif
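The conditional above keeps a single Rx descriptor able to cover WX_MAX_RXBUFFER even when few page fragments are available; a worked instance (illustrative arithmetic only, not driver code):

/* MAX_SKB_FRAGS = 17 (typical)     -> WX_RX_BUFSZ = WX_RXBUFFER_2K = 2048
 * MAX_SKB_FRAGS = 4 (hypothetical) -> ALIGN(16384 / 4, 1024) = 4096
 */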
+
+#define WX_RX_BUFFER_WRITE 16 /* Must be power of 2 */
+
+#define WX_MAX_DATA_PER_TXD BIT(14)
+/* Tx Descriptors needed, worst case */
+#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), WX_MAX_DATA_PER_TXD)
+#define DESC_NEEDED (MAX_SKB_FRAGS + 4)
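A worked case of the descriptor budget (numbers only, not driver code), given WX_MAX_DATA_PER_TXD = BIT(14) = 16384:

/* TXD_USE_COUNT(60000) = DIV_ROUND_UP(60000, 16384) = 4 descriptors for
 * a 60000-byte area, so DESC_NEEDED = MAX_SKB_FRAGS + 4 reserves room
 * for a maximally fragmented skb plus linear-data and context overhead
 * before the ring is stopped.
 */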
+
+/* Ether Types */
+#define WX_ETH_P_CNM 0x22E7
+
+#define WX_CFG_PORT_ST 0x14404
+
+/******************* Receive Descriptor bit definitions **********************/
+#define WX_RXD_STAT_DD BIT(0) /* Done */
+#define WX_RXD_STAT_EOP BIT(1) /* End of Packet */
+
+#define WX_RXD_ERR_RXE BIT(29) /* Any MAC Error */
+
+/*********************** Transmit Descriptor Config Masks ****************/
+#define WX_TXD_STAT_DD BIT(0) /* Descriptor Done */
+#define WX_TXD_DTYP_DATA 0 /* Adv Data Descriptor */
+#define WX_TXD_PAYLEN_SHIFT 13 /* Desc PAYLEN shift */
+#define WX_TXD_EOP BIT(24) /* End of Packet */
+#define WX_TXD_IFCS BIT(25) /* Insert FCS */
+#define WX_TXD_RS BIT(27) /* Report Status */
+
/* Host Interface Command Structures */
struct wx_hic_hdr {
u8 cmd;
@@ -249,14 +401,23 @@ enum wx_mac_type {
wx_mac_em
};
+enum em_mac_type {
+ em_mac_type_unknown = 0,
+ em_mac_type_mdi,
+ em_mac_type_rgmii
+};
+
struct wx_mac_info {
enum wx_mac_type type;
bool set_lben;
u8 addr[ETH_ALEN];
u8 perm_addr[ETH_ALEN];
+ u32 mta_shadow[128];
s32 mc_filter_type;
u32 mcft_size;
u32 num_rar_entries;
+ u32 rx_pb_size;
+ u32 tx_pb_size;
u32 max_tx_queues;
u32 max_rx_queues;
@@ -284,19 +445,183 @@ struct wx_addr_filter_info {
bool user_set_promisc;
};
+struct wx_mac_addr {
+ u8 addr[ETH_ALEN];
+ u16 state; /* bitmask */
+ u64 pools;
+};
+
enum wx_reset_type {
WX_LAN_RESET = 0,
WX_SW_RESET,
WX_GLOBAL_RESET
};
-struct wx_hw {
+struct wx_cb {
+ dma_addr_t dma;
+ u16 append_cnt; /* number of skb's appended */
+ bool page_released;
+ bool dma_released;
+};
+
+#define WX_CB(skb) ((struct wx_cb *)(skb)->cb)
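WX_CB() overlays struct wx_cb on the skb->cb scratch area. A hedged usage sketch (the skb and dma locals are assumed):

/* stash the mapping so the completion path can unmap the buffer later */
WX_CB(skb)->dma = dma;
WX_CB(skb)->page_released = false;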
+
+/* Transmit Descriptor */
+union wx_tx_desc {
+ struct {
+ __le64 buffer_addr; /* Address of descriptor's data buf */
+ __le32 cmd_type_len;
+ __le32 olinfo_status;
+ } read;
+ struct {
+ __le64 rsvd; /* Reserved */
+ __le32 nxtseq_seed;
+ __le32 status;
+ } wb;
+};
+
+/* Receive Descriptor */
+union wx_rx_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ } read;
+ struct {
+ struct {
+ union {
+ __le32 data;
+ struct {
+ __le16 pkt_info; /* RSS, Pkt type */
+ __le16 hdr_info; /* Splithdr, hdrlen */
+ } hs_rss;
+ } lo_dword;
+ union {
+ __le32 rss; /* RSS Hash */
+ struct {
+ __le16 ip_id; /* IP id */
+ __le16 csum; /* Packet Checksum */
+ } csum_ip;
+ } hi_dword;
+ } lower;
+ struct {
+ __le32 status_error; /* ext status/error */
+ __le16 length; /* Packet length */
+ __le16 vlan; /* VLAN tag */
+ } upper;
+ } wb; /* writeback */
+};
+
+#define WX_RX_DESC(R, i) \
+ (&(((union wx_rx_desc *)((R)->desc))[i]))
+#define WX_TX_DESC(R, i) \
+ (&(((union wx_tx_desc *)((R)->desc))[i]))
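These accessors index the raw descriptor memory behind a ring. A short sketch of the Rx clean path built on them (ring is an assumed local; the DD check mirrors the status bits defined earlier):

union wx_rx_desc *rx_desc = WX_RX_DESC(ring, ring->next_to_clean);

if (rx_desc->wb.upper.status_error & cpu_to_le32(WX_RXD_STAT_DD)) {
	/* hardware has written this descriptor back; safe to consume */
}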
+
+/* wrapper around a pointer to a socket buffer,
+ * so a DMA handle can be stored along with the buffer
+ */
+struct wx_tx_buffer {
+ union wx_tx_desc *next_to_watch;
+ struct sk_buff *skb;
+ unsigned int bytecount;
+ unsigned short gso_segs;
+ DEFINE_DMA_UNMAP_ADDR(dma);
+ DEFINE_DMA_UNMAP_LEN(len);
+};
+
+struct wx_rx_buffer {
+ struct sk_buff *skb;
+ dma_addr_t dma;
+ dma_addr_t page_dma;
+ struct page *page;
+ unsigned int page_offset;
+ u16 pagecnt_bias;
+};
+
+struct wx_queue_stats {
+ u64 packets;
+ u64 bytes;
+};
+
+/* iterator for handling rings in ring container */
+#define wx_for_each_ring(posm, headm) \
+ for (posm = (headm).ring; posm; posm = posm->next)
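A hedged usage sketch of the iterator (the helper name wx_count_tx_rings is hypothetical):

static unsigned int wx_count_tx_rings(struct wx_q_vector *q_vector)
{
	struct wx_ring *ring;
	unsigned int n = 0;

	wx_for_each_ring(ring, q_vector->tx)
		n++;

	return n;	/* equals q_vector->tx.count */
}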
+
+struct wx_ring_container {
+ struct wx_ring *ring; /* pointer to linked list of rings */
+ unsigned int total_bytes; /* total bytes processed this int */
+ unsigned int total_packets; /* total packets processed this int */
+ u8 count; /* total number of rings in vector */
+ u8 itr; /* current ITR setting for ring */
+};
+
+struct wx_ring {
+ struct wx_ring *next; /* pointer to next ring in q_vector */
+ struct wx_q_vector *q_vector; /* backpointer to host q_vector */
+ struct net_device *netdev; /* netdev ring belongs to */
+ struct device *dev; /* device for DMA mapping */
+ struct page_pool *page_pool;
+ void *desc; /* descriptor ring memory */
+ union {
+ struct wx_tx_buffer *tx_buffer_info;
+ struct wx_rx_buffer *rx_buffer_info;
+ };
+ u8 __iomem *tail;
+ dma_addr_t dma; /* phys. address of descriptor ring */
+ unsigned int size; /* length in bytes */
+
+ u16 count; /* amount of descriptors */
+
+ u8 queue_index; /* needed for multiqueue queue management */
+ u8 reg_idx; /* holds the special value that gets
+ * the hardware register offset
+ * associated with this ring, which is
+ * different for DCB and RSS modes
+ */
+ u16 next_to_use;
+ u16 next_to_clean;
+ u16 next_to_alloc;
+
+ struct wx_queue_stats stats;
+ struct u64_stats_sync syncp;
+} ____cacheline_internodealigned_in_smp;
+
+struct wx_q_vector {
+ struct wx *wx;
+ int cpu; /* CPU for DCA */
+ int numa_node;
+ u16 v_idx; /* index of q_vector within array, also used for
+ * finding the bit in EICR and friends that
+ * represents the vector for this ring
+ */
+ u16 itr; /* Interrupt throttle rate written to EITR */
+ struct wx_ring_container rx, tx;
+ struct napi_struct napi;
+ struct rcu_head rcu; /* to avoid race with update stats on free */
+
+ char name[IFNAMSIZ + 17];
+
+ /* for dynamic allocation of rings associated with this q_vector */
+ struct wx_ring ring[0] ____cacheline_internodealigned_in_smp;
+};
+
+enum wx_isb_idx {
+ WX_ISB_HEADER,
+ WX_ISB_MISC,
+ WX_ISB_VEC0,
+ WX_ISB_VEC1,
+ WX_ISB_MAX
+};
+
+struct wx {
u8 __iomem *hw_addr;
struct pci_dev *pdev;
+ struct net_device *netdev;
struct wx_bus_info bus;
struct wx_mac_info mac;
+ enum em_mac_type mac_type;
struct wx_eeprom_info eeprom;
struct wx_addr_filter_info addr_ctrl;
+ struct wx_mac_addr *mac_table;
u16 device_id;
u16 vendor_id;
u16 subsystem_device_id;
@@ -304,11 +629,63 @@ struct wx_hw {
u8 revision_id;
u16 oem_ssid;
u16 oem_svid;
+ u16 msg_enable;
bool adapter_stopped;
+ u16 tpid[8];
+ char eeprom_id[32];
+ char *driver_name;
enum wx_reset_type reset_type;
+
+ /* PHY stuff */
+ unsigned int link;
+ int speed;
+ int duplex;
+ struct phy_device *phydev;
+
+ bool wol_enabled;
+ bool ncsi_enabled;
+ bool gpio_ctrl;
+
+ /* Tx fast path data */
+ int num_tx_queues;
+ u16 tx_itr_setting;
+ u16 tx_work_limit;
+
+ /* Rx fast path data */
+ int num_rx_queues;
+ u16 rx_itr_setting;
+ u16 rx_work_limit;
+
+ int num_q_vectors; /* current number of q_vectors for device */
+ int max_q_vectors; /* upper limit of q_vectors for device */
+
+ u32 tx_ring_count;
+ u32 rx_ring_count;
+
+ struct wx_ring *tx_ring[64] ____cacheline_aligned_in_smp;
+ struct wx_ring *rx_ring[64];
+ struct wx_q_vector *q_vector[64];
+
+ unsigned int queues_per_pool;
+ struct msix_entry *msix_entries;
+
+ /* misc interrupt status block */
+ dma_addr_t isb_dma;
+ u32 *isb_mem;
+ u32 isb_tag[WX_ISB_MAX];
+
+#define WX_MAX_RETA_ENTRIES 128
+ u8 rss_indir_tbl[WX_MAX_RETA_ENTRIES];
+
+#define WX_RSS_KEY_SIZE 40 /* size of RSS Hash Key in bytes */
+ u32 *rss_key;
+ u32 wol;
+
+ u16 bd_number;
};
#define WX_INTR_ALL (~0ULL)
+#define WX_INTR_Q(i) BIT(i)
/* register operations */
#define wr32(a, reg, value) writel((value), ((a)->hw_addr + (reg)))
@@ -319,23 +696,23 @@ struct wx_hw {
wr32((a), (reg) + ((off) << 2), (val))
static inline u32
-rd32m(struct wx_hw *wxhw, u32 reg, u32 mask)
+rd32m(struct wx *wx, u32 reg, u32 mask)
{
u32 val;
- val = rd32(wxhw, reg);
+ val = rd32(wx, reg);
return val & mask;
}
static inline void
-wr32m(struct wx_hw *wxhw, u32 reg, u32 mask, u32 field)
+wr32m(struct wx *wx, u32 reg, u32 mask, u32 field)
{
u32 val;
- val = rd32(wxhw, reg);
+ val = rd32(wx, reg);
val = ((val & ~mask) | (field & mask));
- wr32(wxhw, reg, val);
+ wr32(wx, reg, val);
}
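rd32m()/wr32m() give masked register access as a read-modify-write, so only the bits selected by the mask change. A small usage sketch using the MAC config fields defined earlier in this header:

/* switch the MAC Tx path to 1G without disturbing the other config bits */
wr32m(wx, WX_MAC_TX_CFG, WX_MAC_TX_CFG_SPEED_MASK, WX_MAC_TX_CFG_SPEED_1G);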
/* On some domestic CPU platforms, sometimes IO is not synchronized with
@@ -343,10 +720,10 @@ wr32m(struct wx_hw *wxhw, u32 reg, u32 mask, u32 field)
*/
#define WX_WRITE_FLUSH(H) rd32(H, WX_MIS_PWR)
-#define wx_err(wxhw, fmt, arg...) \
- dev_err(&(wxhw)->pdev->dev, fmt, ##arg)
+#define wx_err(wx, fmt, arg...) \
+ dev_err(&(wx)->pdev->dev, fmt, ##arg)
-#define wx_dbg(wxhw, fmt, arg...) \
- dev_dbg(&(wxhw)->pdev->dev, fmt, ##arg)
+#define wx_dbg(wx, fmt, arg...) \
+ dev_dbg(&(wx)->pdev->dev, fmt, ##arg)
#endif /* _WX_TYPE_H_ */
diff --git a/drivers/net/ethernet/wangxun/ngbe/Makefile b/drivers/net/ethernet/wangxun/ngbe/Makefile
index 391c2cbc1bb4..61a13d98abe7 100644
--- a/drivers/net/ethernet/wangxun/ngbe/Makefile
+++ b/drivers/net/ethernet/wangxun/ngbe/Makefile
@@ -6,4 +6,4 @@
obj-$(CONFIG_NGBE) += ngbe.o
-ngbe-objs := ngbe_main.o ngbe_hw.o
+ngbe-objs := ngbe_main.o ngbe_hw.o ngbe_mdio.o ngbe_ethtool.o
diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe.h b/drivers/net/ethernet/wangxun/ngbe/ngbe.h
deleted file mode 100644
index af147ca8605c..000000000000
--- a/drivers/net/ethernet/wangxun/ngbe/ngbe.h
+++ /dev/null
@@ -1,79 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright (c) 2019 - 2022 Beijing WangXun Technology Co., Ltd. */
-
-#ifndef _NGBE_H_
-#define _NGBE_H_
-
-#include "ngbe_type.h"
-
-#define NGBE_MAX_FDIR_INDICES 7
-
-#define NGBE_MAX_RX_QUEUES (NGBE_MAX_FDIR_INDICES + 1)
-#define NGBE_MAX_TX_QUEUES (NGBE_MAX_FDIR_INDICES + 1)
-
-#define NGBE_ETH_LENGTH_OF_ADDRESS 6
-#define NGBE_MAX_MSIX_VECTORS 0x09
-#define NGBE_RAR_ENTRIES 32
-
-/* TX/RX descriptor defines */
-#define NGBE_DEFAULT_TXD 512 /* default ring size */
-#define NGBE_DEFAULT_TX_WORK 256
-#define NGBE_MAX_TXD 8192
-#define NGBE_MIN_TXD 128
-
-#define NGBE_DEFAULT_RXD 512 /* default ring size */
-#define NGBE_DEFAULT_RX_WORK 256
-#define NGBE_MAX_RXD 8192
-#define NGBE_MIN_RXD 128
-
-#define NGBE_MAC_STATE_DEFAULT 0x1
-#define NGBE_MAC_STATE_MODIFIED 0x2
-#define NGBE_MAC_STATE_IN_USE 0x4
-
-struct ngbe_mac_addr {
- u8 addr[ETH_ALEN];
- u16 state; /* bitmask */
- u64 pools;
-};
-
-/* board specific private data structure */
-struct ngbe_adapter {
- u8 __iomem *io_addr; /* Mainly for iounmap use */
- /* OS defined structs */
- struct net_device *netdev;
- struct pci_dev *pdev;
-
- /* structs defined in ngbe_hw.h */
- struct ngbe_hw hw;
- struct ngbe_mac_addr *mac_table;
- u16 msg_enable;
-
- /* Tx fast path data */
- int num_tx_queues;
- u16 tx_itr_setting;
- u16 tx_work_limit;
-
- /* Rx fast path data */
- int num_rx_queues;
- u16 rx_itr_setting;
- u16 rx_work_limit;
-
- int num_q_vectors; /* current number of q_vectors for device */
- int max_q_vectors; /* upper limit of q_vectors for device */
-
- u32 tx_ring_count;
- u32 rx_ring_count;
-
-#define NGBE_MAX_RETA_ENTRIES 128
- u8 rss_indir_tbl[NGBE_MAX_RETA_ENTRIES];
-
-#define NGBE_RSS_KEY_SIZE 40 /* size of RSS Hash Key in bytes */
- u32 *rss_key;
- u32 wol;
-
- u16 bd_number;
-};
-
-extern char ngbe_driver_name[];
-
-#endif /* _NGBE_H_ */
diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.c
new file mode 100644
index 000000000000..5b25834baf38
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.c
@@ -0,0 +1,22 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2015 - 2023 Beijing WangXun Technology Co., Ltd. */
+
+#include <linux/pci.h>
+#include <linux/phy.h>
+#include <linux/netdevice.h>
+
+#include "../libwx/wx_ethtool.h"
+#include "ngbe_ethtool.h"
+
+static const struct ethtool_ops ngbe_ethtool_ops = {
+ .get_drvinfo = wx_get_drvinfo,
+ .get_link = ethtool_op_get_link,
+ .get_link_ksettings = phy_ethtool_get_link_ksettings,
+ .set_link_ksettings = phy_ethtool_set_link_ksettings,
+ .nway_reset = phy_ethtool_nway_reset,
+};
+
+void ngbe_set_ethtool_ops(struct net_device *netdev)
+{
+ netdev->ethtool_ops = &ngbe_ethtool_ops;
+}
diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.h b/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.h
new file mode 100644
index 000000000000..487074e0eeec
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2015 - 2023 Beijing WangXun Technology Co., Ltd. */
+
+#ifndef _NGBE_ETHTOOL_H_
+#define _NGBE_ETHTOOL_H_
+
+void ngbe_set_ethtool_ops(struct net_device *netdev);
+
+#endif /* _NGBE_ETHTOOL_H_ */
diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.c
index 0e3923b3737e..6562a2de9527 100644
--- a/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.c
+++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.c
@@ -9,12 +9,10 @@
#include "../libwx/wx_hw.h"
#include "ngbe_type.h"
#include "ngbe_hw.h"
-#include "ngbe.h"
-int ngbe_eeprom_chksum_hostif(struct ngbe_hw *hw)
+int ngbe_eeprom_chksum_hostif(struct wx *wx)
{
struct wx_hic_read_shadow_ram buffer;
- struct wx_hw *wxhw = &hw->wxhw;
int status;
int tmp;
@@ -27,61 +25,73 @@ int ngbe_eeprom_chksum_hostif(struct ngbe_hw *hw)
/* one word */
buffer.length = 0;
- status = wx_host_interface_command(wxhw, (u32 *)&buffer, sizeof(buffer),
+ status = wx_host_interface_command(wx, (u32 *)&buffer, sizeof(buffer),
WX_HI_COMMAND_TIMEOUT, false);
if (status < 0)
return status;
- tmp = rd32a(wxhw, WX_MNG_MBOX, 1);
+ tmp = rd32a(wx, WX_MNG_MBOX, 1);
if (tmp == NGBE_FW_CMD_ST_PASS)
return 0;
return -EIO;
}
-static int ngbe_reset_misc(struct ngbe_hw *hw)
+static int ngbe_reset_misc(struct wx *wx)
{
- struct wx_hw *wxhw = &hw->wxhw;
-
- wx_reset_misc(wxhw);
- if (hw->mac_type == ngbe_mac_type_rgmii)
- wr32(wxhw, NGBE_MDIO_CLAUSE_SELECT, 0xF);
- if (hw->gpio_ctrl) {
+ wx_reset_misc(wx);
+ if (wx->gpio_ctrl) {
/* gpio0 is used for power on/off control */
- wr32(wxhw, NGBE_GPIO_DDR, 0x1);
- wr32(wxhw, NGBE_GPIO_DR, NGBE_GPIO_DR_0);
+ wr32(wx, NGBE_GPIO_DDR, 0x1);
+ ngbe_sfp_modules_txrx_powerctl(wx, false);
}
return 0;
}
+void ngbe_sfp_modules_txrx_powerctl(struct wx *wx, bool swi)
+{
+ /* gpio0 is used for power on/off control; writing 0 powers the module on */
+ wr32(wx, NGBE_GPIO_DR, swi ? 0 : NGBE_GPIO_DR_0);
+}
+
/**
* ngbe_reset_hw - Perform hardware reset
- * @hw: pointer to hardware structure
+ * @wx: pointer to hardware structure
*
* Resets the hardware by resetting the transmit and receive units, masks
* and clears all interrupts, performs a PHY reset, and performs a link (MAC)
* reset.
**/
-int ngbe_reset_hw(struct ngbe_hw *hw)
+int ngbe_reset_hw(struct wx *wx)
{
- struct wx_hw *wxhw = &hw->wxhw;
- int status = 0;
- u32 reset = 0;
+ u32 val = 0;
+ int ret = 0;
- /* Call adapter stop to disable tx/rx and clear interrupts */
- status = wx_stop_adapter(wxhw);
- if (status != 0)
- return status;
- reset = WX_MIS_RST_LAN_RST(wxhw->bus.func);
- wr32(wxhw, WX_MIS_RST, reset | rd32(wxhw, WX_MIS_RST));
- ngbe_reset_misc(hw);
+ /* Call wx stop to disable tx/rx and clear interrupts */
+ ret = wx_stop_adapter(wx);
+ if (ret != 0)
+ return ret;
+
+ if (wx->mac_type != em_mac_type_mdi) {
+ val = WX_MIS_RST_LAN_RST(wx->bus.func);
+ wr32(wx, WX_MIS_RST, val | rd32(wx, WX_MIS_RST));
+
+ ret = read_poll_timeout(rd32, val,
+ !(val & (BIT(9) << wx->bus.func)), 1000,
+ 100000, false, wx, 0x10028);
+ if (ret) {
+ wx_err(wx, "Lan reset exceed s maximum times.\n");
+ return ret;
+ }
+ }
+ ngbe_reset_misc(wx);
/* Store the permanent mac address */
- wx_get_mac_addr(wxhw, wxhw->mac.perm_addr);
+ wx_get_mac_addr(wx, wx->mac.perm_addr);
/* reset num_rar_entries to 128 */
- wxhw->mac.num_rar_entries = NGBE_RAR_ENTRIES;
- wx_init_rx_addrs(wxhw);
- pci_set_master(wxhw->pdev);
+ wx->mac.num_rar_entries = NGBE_RAR_ENTRIES;
+ wx_init_rx_addrs(wx);
+ pci_set_master(wx->pdev);
return 0;
}
diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.h b/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.h
index 42476a3fe57c..a4693e006816 100644
--- a/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.h
+++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.h
@@ -7,6 +7,7 @@
#ifndef _NGBE_HW_H_
#define _NGBE_HW_H_
-int ngbe_eeprom_chksum_hostif(struct ngbe_hw *hw);
-int ngbe_reset_hw(struct ngbe_hw *hw);
+int ngbe_eeprom_chksum_hostif(struct wx *wx);
+void ngbe_sfp_modules_txrx_powerctl(struct wx *wx, bool swi);
+int ngbe_reset_hw(struct wx *wx);
#endif /* _NGBE_HW_H_ */
diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
index f0b24366da18..5b564d348c09 100644
--- a/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
+++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
@@ -9,12 +9,16 @@
#include <linux/aer.h>
#include <linux/etherdevice.h>
#include <net/ip.h>
+#include <linux/phy.h>
#include "../libwx/wx_type.h"
#include "../libwx/wx_hw.h"
+#include "../libwx/wx_lib.h"
#include "ngbe_type.h"
+#include "ngbe_mdio.h"
#include "ngbe_hw.h"
-#include "ngbe.h"
+#include "ngbe_ethtool.h"
+
char ngbe_driver_name[] = "ngbe";
/* ngbe_pci_tbl - PCI Device ID Table
@@ -39,70 +43,27 @@ static const struct pci_device_id ngbe_pci_tbl[] = {
{ .device = 0 }
};
-static void ngbe_mac_set_default_filter(struct ngbe_adapter *adapter, u8 *addr)
-{
- struct ngbe_hw *hw = &adapter->hw;
-
- memcpy(&adapter->mac_table[0].addr, addr, ETH_ALEN);
- adapter->mac_table[0].pools = 1ULL;
- adapter->mac_table[0].state = (NGBE_MAC_STATE_DEFAULT |
- NGBE_MAC_STATE_IN_USE);
- wx_set_rar(&hw->wxhw, 0, adapter->mac_table[0].addr,
- adapter->mac_table[0].pools,
- WX_PSR_MAC_SWC_AD_H_AV);
-}
-
/**
* ngbe_init_type_code - Initialize the shared code
- * @hw: pointer to hardware structure
+ * @wx: pointer to hardware structure
**/
-static void ngbe_init_type_code(struct ngbe_hw *hw)
+static void ngbe_init_type_code(struct wx *wx)
{
int wol_mask = 0, ncsi_mask = 0;
- struct wx_hw *wxhw = &hw->wxhw;
- u16 type_mask = 0;
+ u16 type_mask = 0, val;
- wxhw->mac.type = wx_mac_em;
- type_mask = (u16)(wxhw->subsystem_device_id & NGBE_OEM_MASK);
- ncsi_mask = wxhw->subsystem_device_id & NGBE_NCSI_MASK;
- wol_mask = wxhw->subsystem_device_id & NGBE_WOL_MASK;
-
- switch (type_mask) {
- case NGBE_SUBID_M88E1512_SFP:
- case NGBE_SUBID_LY_M88E1512_SFP:
- hw->phy.type = ngbe_phy_m88e1512_sfi;
- break;
- case NGBE_SUBID_M88E1512_RJ45:
- hw->phy.type = ngbe_phy_m88e1512;
- break;
- case NGBE_SUBID_M88E1512_MIX:
- hw->phy.type = ngbe_phy_m88e1512_unknown;
- break;
- case NGBE_SUBID_YT8521S_SFP:
- case NGBE_SUBID_YT8521S_SFP_GPIO:
- case NGBE_SUBID_LY_YT8521S_SFP:
- hw->phy.type = ngbe_phy_yt8521s_sfi;
- break;
- case NGBE_SUBID_INTERNAL_YT8521S_SFP:
- case NGBE_SUBID_INTERNAL_YT8521S_SFP_GPIO:
- hw->phy.type = ngbe_phy_internal_yt8521s_sfi;
- break;
- case NGBE_SUBID_RGMII_FPGA:
- case NGBE_SUBID_OCP_CARD:
- fallthrough;
- default:
- hw->phy.type = ngbe_phy_internal;
- break;
- }
+ wx->mac.type = wx_mac_em;
+ type_mask = (u16)(wx->subsystem_device_id & NGBE_OEM_MASK);
+ ncsi_mask = wx->subsystem_device_id & NGBE_NCSI_MASK;
+ wol_mask = wx->subsystem_device_id & NGBE_WOL_MASK;
- if (hw->phy.type == ngbe_phy_internal ||
- hw->phy.type == ngbe_phy_internal_yt8521s_sfi)
- hw->mac_type = ngbe_mac_type_mdi;
- else
- hw->mac_type = ngbe_mac_type_rgmii;
+ val = rd32(wx, WX_CFG_PORT_ST);
+ wx->mac_type = (val & BIT(7)) >> 7 ?
+ em_mac_type_rgmii :
+ em_mac_type_mdi;
- hw->wol_enabled = (wol_mask == NGBE_WOL_SUP) ? 1 : 0;
- hw->ncsi_enabled = (ncsi_mask == NGBE_NCSI_MASK ||
+ wx->wol_enabled = (wol_mask == NGBE_WOL_SUP) ? 1 : 0;
+ wx->ncsi_enabled = (ncsi_mask == NGBE_NCSI_MASK ||
type_mask == NGBE_SUBID_OCP_CARD) ? 1 : 0;
switch (type_mask) {
@@ -110,31 +71,31 @@ static void ngbe_init_type_code(struct ngbe_hw *hw)
case NGBE_SUBID_LY_M88E1512_SFP:
case NGBE_SUBID_YT8521S_SFP_GPIO:
case NGBE_SUBID_INTERNAL_YT8521S_SFP_GPIO:
- hw->gpio_ctrl = 1;
+ wx->gpio_ctrl = 1;
break;
default:
- hw->gpio_ctrl = 0;
+ wx->gpio_ctrl = 0;
break;
}
}
/**
- * ngbe_init_rss_key - Initialize adapter RSS key
- * @adapter: device handle
+ * ngbe_init_rss_key - Initialize wx RSS key
+ * @wx: device handle
*
* Allocates and initializes the RSS key if it is not allocated.
**/
-static inline int ngbe_init_rss_key(struct ngbe_adapter *adapter)
+static inline int ngbe_init_rss_key(struct wx *wx)
{
u32 *rss_key;
- if (!adapter->rss_key) {
- rss_key = kzalloc(NGBE_RSS_KEY_SIZE, GFP_KERNEL);
+ if (!wx->rss_key) {
+ rss_key = kzalloc(WX_RSS_KEY_SIZE, GFP_KERNEL);
if (unlikely(!rss_key))
return -ENOMEM;
- netdev_rss_key_fill(rss_key, NGBE_RSS_KEY_SIZE);
- adapter->rss_key = rss_key;
+ netdev_rss_key_fill(rss_key, WX_RSS_KEY_SIZE);
+ wx->rss_key = rss_key;
}
return 0;
@@ -142,72 +103,263 @@ static inline int ngbe_init_rss_key(struct ngbe_adapter *adapter)
/**
* ngbe_sw_init - Initialize general software structures
- * @adapter: board private structure to initialize
+ * @wx: board private structure to initialize
**/
-static int ngbe_sw_init(struct ngbe_adapter *adapter)
+static int ngbe_sw_init(struct wx *wx)
{
- struct pci_dev *pdev = adapter->pdev;
- struct ngbe_hw *hw = &adapter->hw;
- struct wx_hw *wxhw = &hw->wxhw;
+ struct pci_dev *pdev = wx->pdev;
u16 msix_count = 0;
int err = 0;
- wxhw->hw_addr = adapter->io_addr;
- wxhw->pdev = pdev;
+ wx->mac.num_rar_entries = NGBE_RAR_ENTRIES;
+ wx->mac.max_rx_queues = NGBE_MAX_RX_QUEUES;
+ wx->mac.max_tx_queues = NGBE_MAX_TX_QUEUES;
+ wx->mac.mcft_size = NGBE_MC_TBL_SIZE;
+ wx->mac.rx_pb_size = NGBE_RX_PB_SIZE;
+ wx->mac.tx_pb_size = NGBE_TDB_PB_SZ;
/* PCI config space info */
- err = wx_sw_init(wxhw);
+ err = wx_sw_init(wx);
if (err < 0) {
- netif_err(adapter, probe, adapter->netdev,
- "Read of internal subsystem device id failed\n");
+ wx_err(wx, "read of internal subsystem device id failed\n");
return err;
}
/* mac type, phy type , oem type */
- ngbe_init_type_code(hw);
+ ngbe_init_type_code(wx);
- wxhw->mac.max_rx_queues = NGBE_MAX_RX_QUEUES;
- wxhw->mac.max_tx_queues = NGBE_MAX_TX_QUEUES;
- wxhw->mac.num_rar_entries = NGBE_RAR_ENTRIES;
/* Set common capability flags and settings */
- adapter->max_q_vectors = NGBE_MAX_MSIX_VECTORS;
-
- err = wx_get_pcie_msix_counts(wxhw, &msix_count, NGBE_MAX_MSIX_VECTORS);
+ wx->max_q_vectors = NGBE_MAX_MSIX_VECTORS;
+ err = wx_get_pcie_msix_counts(wx, &msix_count, NGBE_MAX_MSIX_VECTORS);
if (err)
dev_err(&pdev->dev, "Do not support MSI-X\n");
- wxhw->mac.max_msix_vectors = msix_count;
+ wx->mac.max_msix_vectors = msix_count;
- adapter->mac_table = kcalloc(wxhw->mac.num_rar_entries,
- sizeof(struct ngbe_mac_addr),
- GFP_KERNEL);
- if (!adapter->mac_table) {
- dev_err(&pdev->dev, "mac_table allocation failed: %d\n", err);
- return -ENOMEM;
- }
-
- if (ngbe_init_rss_key(adapter))
+ if (ngbe_init_rss_key(wx))
return -ENOMEM;
/* enable itr by default in dynamic mode */
- adapter->rx_itr_setting = 1;
- adapter->tx_itr_setting = 1;
+ wx->rx_itr_setting = 1;
+ wx->tx_itr_setting = 1;
/* set default ring sizes */
- adapter->tx_ring_count = NGBE_DEFAULT_TXD;
- adapter->rx_ring_count = NGBE_DEFAULT_RXD;
+ wx->tx_ring_count = NGBE_DEFAULT_TXD;
+ wx->rx_ring_count = NGBE_DEFAULT_RXD;
/* set default work limits */
- adapter->tx_work_limit = NGBE_DEFAULT_TX_WORK;
- adapter->rx_work_limit = NGBE_DEFAULT_RX_WORK;
+ wx->tx_work_limit = NGBE_DEFAULT_TX_WORK;
+ wx->rx_work_limit = NGBE_DEFAULT_RX_WORK;
return 0;
}
-static void ngbe_down(struct ngbe_adapter *adapter)
+/**
+ * ngbe_irq_enable - Enable default interrupt generation settings
+ * @wx: board private structure
+ * @queues: enable all queues interrupts
+ **/
+static void ngbe_irq_enable(struct wx *wx, bool queues)
{
- netif_carrier_off(adapter->netdev);
- netif_tx_disable(adapter->netdev);
-};
+ u32 mask;
+
+ /* enable misc interrupt */
+ mask = NGBE_PX_MISC_IEN_MASK;
+
+ wr32(wx, WX_GPIO_DDR, WX_GPIO_DDR_0);
+ wr32(wx, WX_GPIO_INTEN, WX_GPIO_INTEN_0 | WX_GPIO_INTEN_1);
+ wr32(wx, WX_GPIO_INTTYPE_LEVEL, 0x0);
+ wr32(wx, WX_GPIO_POLARITY, wx->gpio_ctrl ? 0 : 0x3);
+
+ wr32(wx, WX_PX_MISC_IEN, mask);
+
+ /* mask interrupt */
+ if (queues)
+ wx_intr_enable(wx, NGBE_INTR_ALL);
+ else
+ wx_intr_enable(wx, NGBE_INTR_MISC(wx));
+}
+
+/**
+ * ngbe_intr - msi/legacy mode Interrupt Handler
+ * @irq: interrupt number
+ * @data: pointer to a network interface device structure
+ **/
+static irqreturn_t ngbe_intr(int __always_unused irq, void *data)
+{
+ struct wx_q_vector *q_vector;
+ struct wx *wx = data;
+ struct pci_dev *pdev;
+ u32 eicr;
+
+ q_vector = wx->q_vector[0];
+ pdev = wx->pdev;
+
+ eicr = wx_misc_isb(wx, WX_ISB_VEC0);
+ if (!eicr) {
+ /* shared interrupt alert!
+ * re-enable the interrupts we masked before the ISB read;
+ * this interrupt came from another device on the shared line.
+ */
+ if (netif_running(wx->netdev))
+ ngbe_irq_enable(wx, true);
+ return IRQ_NONE; /* Not our interrupt */
+ }
+ wx->isb_mem[WX_ISB_VEC0] = 0;
+ if (!(pdev->msi_enabled))
+ wr32(wx, WX_PX_INTA, 1);
+
+ wx->isb_mem[WX_ISB_MISC] = 0;
+ /* interrupts would be disabled here, but the hardware auto-masks them */
+ napi_schedule_irqoff(&q_vector->napi);
+
+ if (netif_running(wx->netdev))
+ ngbe_irq_enable(wx, false);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t ngbe_msix_other(int __always_unused irq, void *data)
+{
+ struct wx *wx = data;
+
+ /* re-enable the original interrupt state, no lsc, no queues */
+ if (netif_running(wx->netdev))
+ ngbe_irq_enable(wx, false);
+
+ return IRQ_HANDLED;
+}
+
+/**
+ * ngbe_request_msix_irqs - Initialize MSI-X interrupts
+ * @wx: board private structure
+ *
+ * ngbe_request_msix_irqs requests an interrupt from the kernel for each
+ * previously allocated MSI-X vector.
+ **/
+static int ngbe_request_msix_irqs(struct wx *wx)
+{
+ struct net_device *netdev = wx->netdev;
+ int vector, err;
+
+ for (vector = 0; vector < wx->num_q_vectors; vector++) {
+ struct wx_q_vector *q_vector = wx->q_vector[vector];
+ struct msix_entry *entry = &wx->msix_entries[vector];
+
+ if (q_vector->tx.ring && q_vector->rx.ring)
+ snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+ "%s-TxRx-%d", netdev->name, entry->entry);
+ else
+ /* skip this unused q_vector */
+ continue;
+
+ err = request_irq(entry->vector, wx_msix_clean_rings, 0,
+ q_vector->name, q_vector);
+ if (err) {
+ wx_err(wx, "request_irq failed for MSIX interrupt %s Error: %d\n",
+ q_vector->name, err);
+ goto free_queue_irqs;
+ }
+ }
+
+ err = request_irq(wx->msix_entries[vector].vector,
+ ngbe_msix_other, 0, netdev->name, wx);
+
+ if (err) {
+ wx_err(wx, "request_irq for msix_other failed: %d\n", err);
+ goto free_queue_irqs;
+ }
+
+ return 0;
+
+free_queue_irqs:
+ while (vector) {
+ vector--;
+ free_irq(wx->msix_entries[vector].vector,
+ wx->q_vector[vector]);
+ }
+ wx_reset_interrupt_capability(wx);
+ return err;
+}
+
+/**
+ * ngbe_request_irq - initialize interrupts
+ * @wx: board private structure
+ *
+ * Attempts to configure interrupts using the best available
+ * capabilities of the hardware and kernel.
+ **/
+static int ngbe_request_irq(struct wx *wx)
+{
+ struct net_device *netdev = wx->netdev;
+ struct pci_dev *pdev = wx->pdev;
+ int err;
+
+ if (pdev->msix_enabled)
+ err = ngbe_request_msix_irqs(wx);
+ else if (pdev->msi_enabled)
+ err = request_irq(pdev->irq, ngbe_intr, 0,
+ netdev->name, wx);
+ else
+ err = request_irq(pdev->irq, ngbe_intr, IRQF_SHARED,
+ netdev->name, wx);
+
+ if (err)
+ wx_err(wx, "request_irq failed, Error %d\n", err);
+
+ return err;
+}
+
+static void ngbe_disable_device(struct wx *wx)
+{
+ struct net_device *netdev = wx->netdev;
+ u32 i;
+
+ /* disable all enabled rx queues */
+ for (i = 0; i < wx->num_rx_queues; i++)
+ /* this call also flushes the previous write */
+ wx_disable_rx_queue(wx, wx->rx_ring[i]);
+ /* disable receives */
+ wx_disable_rx(wx);
+ wx_napi_disable_all(wx);
+ netif_tx_stop_all_queues(netdev);
+ netif_tx_disable(netdev);
+ if (wx->gpio_ctrl)
+ ngbe_sfp_modules_txrx_powerctl(wx, false);
+ wx_irq_disable(wx);
+ /* disable transmits in the hardware now that interrupts are off */
+ for (i = 0; i < wx->num_tx_queues; i++) {
+ u8 reg_idx = wx->tx_ring[i]->reg_idx;
+
+ wr32(wx, WX_PX_TR_CFG(reg_idx), WX_PX_TR_CFG_SWFLSH);
+ }
+}
+
+static void ngbe_down(struct wx *wx)
+{
+ phy_stop(wx->phydev);
+ ngbe_disable_device(wx);
+ wx_clean_all_tx_rings(wx);
+ wx_clean_all_rx_rings(wx);
+}
+
+static void ngbe_up(struct wx *wx)
+{
+ wx_configure_vectors(wx);
+
+ /* make sure to complete pre-operations */
+ smp_mb__before_atomic();
+ wx_napi_enable_all(wx);
+ /* enable transmits */
+ netif_tx_start_all_queues(wx->netdev);
+
+ /* clear any pending interrupts, may auto mask */
+ rd32(wx, WX_PX_IC);
+ rd32(wx, WX_PX_MISC_IC);
+ ngbe_irq_enable(wx, true);
+ if (wx->gpio_ctrl)
+ ngbe_sfp_modules_txrx_powerctl(wx, true);
+
+ phy_start(wx->phydev);
+}
/**
* ngbe_open - Called when a network interface is made active
@@ -220,13 +372,43 @@ static void ngbe_down(struct ngbe_adapter *adapter)
**/
static int ngbe_open(struct net_device *netdev)
{
- struct ngbe_adapter *adapter = netdev_priv(netdev);
- struct ngbe_hw *hw = &adapter->hw;
- struct wx_hw *wxhw = &hw->wxhw;
+ struct wx *wx = netdev_priv(netdev);
+ int err;
+
+ wx_control_hw(wx, true);
+
+ err = wx_setup_resources(wx);
+ if (err)
+ return err;
+
+ wx_configure(wx);
+
+ err = ngbe_request_irq(wx);
+ if (err)
+ goto err_free_resources;
+
+ err = ngbe_phy_connect(wx);
+ if (err)
+ goto err_free_irq;
+
+ err = netif_set_real_num_tx_queues(netdev, wx->num_tx_queues);
+ if (err)
+ goto err_dis_phy;
+
+ err = netif_set_real_num_rx_queues(netdev, wx->num_rx_queues);
+ if (err)
+ goto err_dis_phy;
- wx_control_hw(wxhw, true);
+ ngbe_up(wx);
return 0;
+err_dis_phy:
+ phy_disconnect(wx->phydev);
+err_free_irq:
+ wx_free_irq(wx);
+err_free_resources:
+ wx_free_resources(wx);
+ return err;
}
/**
@@ -242,66 +424,40 @@ static int ngbe_open(struct net_device *netdev)
**/
static int ngbe_close(struct net_device *netdev)
{
- struct ngbe_adapter *adapter = netdev_priv(netdev);
-
- ngbe_down(adapter);
- wx_control_hw(&adapter->hw.wxhw, false);
-
- return 0;
-}
-
-static netdev_tx_t ngbe_xmit_frame(struct sk_buff *skb,
- struct net_device *netdev)
-{
- return NETDEV_TX_OK;
-}
-
-/**
- * ngbe_set_mac - Change the Ethernet Address of the NIC
- * @netdev: network interface device structure
- * @p: pointer to an address structure
- *
- * Returns 0 on success, negative on failure
- **/
-static int ngbe_set_mac(struct net_device *netdev, void *p)
-{
- struct ngbe_adapter *adapter = netdev_priv(netdev);
- struct wx_hw *wxhw = &adapter->hw.wxhw;
- struct sockaddr *addr = p;
+ struct wx *wx = netdev_priv(netdev);
- if (!is_valid_ether_addr(addr->sa_data))
- return -EADDRNOTAVAIL;
-
- eth_hw_addr_set(netdev, addr->sa_data);
- memcpy(wxhw->mac.addr, addr->sa_data, netdev->addr_len);
-
- ngbe_mac_set_default_filter(adapter, wxhw->mac.addr);
+ ngbe_down(wx);
+ wx_free_irq(wx);
+ wx_free_resources(wx);
+ phy_disconnect(wx->phydev);
+ wx_control_hw(wx, false);
return 0;
}
static void ngbe_dev_shutdown(struct pci_dev *pdev, bool *enable_wake)
{
- struct ngbe_adapter *adapter = pci_get_drvdata(pdev);
- struct net_device *netdev = adapter->netdev;
+ struct wx *wx = pci_get_drvdata(pdev);
+ struct net_device *netdev;
+ netdev = wx->netdev;
netif_device_detach(netdev);
rtnl_lock();
if (netif_running(netdev))
- ngbe_down(adapter);
+ ngbe_down(wx);
rtnl_unlock();
- wx_control_hw(&adapter->hw.wxhw, false);
+ wx_control_hw(wx, false);
pci_disable_device(pdev);
}
static void ngbe_shutdown(struct pci_dev *pdev)
{
- struct ngbe_adapter *adapter = pci_get_drvdata(pdev);
+ struct wx *wx = pci_get_drvdata(pdev);
bool wake;
- wake = !!adapter->wol;
+ wake = !!wx->wol;
ngbe_dev_shutdown(pdev, &wake);
@@ -314,9 +470,11 @@ static void ngbe_shutdown(struct pci_dev *pdev)
static const struct net_device_ops ngbe_netdev_ops = {
.ndo_open = ngbe_open,
.ndo_stop = ngbe_close,
- .ndo_start_xmit = ngbe_xmit_frame,
+ .ndo_start_xmit = wx_xmit_frame,
+ .ndo_set_rx_mode = wx_set_rx_mode,
.ndo_validate_addr = eth_validate_addr,
- .ndo_set_mac_address = ngbe_set_mac,
+ .ndo_set_mac_address = wx_set_mac,
+ .ndo_get_stats64 = wx_get_stats64,
};
/**
@@ -326,18 +484,16 @@ static const struct net_device_ops ngbe_netdev_ops = {
*
* Returns 0 on success, negative on failure
*
- * ngbe_probe initializes an adapter identified by a pci_dev structure.
- * The OS initialization, configuring of the adapter private structure,
+ * ngbe_probe initializes a wx device identified by a pci_dev structure.
+ * The OS initialization, configuring of the wx private structure,
* and a hardware reset occur.
**/
static int ngbe_probe(struct pci_dev *pdev,
const struct pci_device_id __always_unused *ent)
{
- struct ngbe_adapter *adapter = NULL;
- struct ngbe_hw *hw = NULL;
- struct wx_hw *wxhw = NULL;
struct net_device *netdev;
u32 e2rom_cksum_cap = 0;
+ struct wx *wx = NULL;
static int func_nums;
u16 e2rom_ver = 0;
u32 etrack_id = 0;
@@ -368,7 +524,7 @@ static int ngbe_probe(struct pci_dev *pdev,
pci_set_master(pdev);
netdev = devm_alloc_etherdev_mqs(&pdev->dev,
- sizeof(struct ngbe_adapter),
+ sizeof(struct wx),
NGBE_MAX_TX_QUEUES,
NGBE_MAX_RX_QUEUES);
if (!netdev) {
@@ -378,63 +534,74 @@ static int ngbe_probe(struct pci_dev *pdev,
SET_NETDEV_DEV(netdev, &pdev->dev);
- adapter = netdev_priv(netdev);
- adapter->netdev = netdev;
- adapter->pdev = pdev;
- hw = &adapter->hw;
- wxhw = &hw->wxhw;
- adapter->msg_enable = BIT(3) - 1;
-
- adapter->io_addr = devm_ioremap(&pdev->dev,
- pci_resource_start(pdev, 0),
- pci_resource_len(pdev, 0));
- if (!adapter->io_addr) {
+ wx = netdev_priv(netdev);
+ wx->netdev = netdev;
+ wx->pdev = pdev;
+ wx->msg_enable = BIT(3) - 1;
+
+ wx->hw_addr = devm_ioremap(&pdev->dev,
+ pci_resource_start(pdev, 0),
+ pci_resource_len(pdev, 0));
+ if (!wx->hw_addr) {
err = -EIO;
goto err_pci_release_regions;
}
+ wx->driver_name = ngbe_driver_name;
+ ngbe_set_ethtool_ops(netdev);
netdev->netdev_ops = &ngbe_netdev_ops;
netdev->features |= NETIF_F_HIGHDMA;
+ netdev->features |= NETIF_F_SG;
- adapter->bd_number = func_nums;
+ /* copy netdev features into list of user selectable features */
+ netdev->hw_features |= netdev->features |
+ NETIF_F_RXALL;
+
+ netdev->priv_flags |= IFF_UNICAST_FLT;
+ netdev->priv_flags |= IFF_SUPP_NOFCS;
+
+ netdev->min_mtu = ETH_MIN_MTU;
+ netdev->max_mtu = NGBE_MAX_JUMBO_FRAME_SIZE - (ETH_HLEN + ETH_FCS_LEN);
+
+ wx->bd_number = func_nums;
/* setup the private structure */
- err = ngbe_sw_init(adapter);
+ err = ngbe_sw_init(wx);
if (err)
goto err_free_mac_table;
/* check if flash load is done after hw power up */
- err = wx_check_flash_load(wxhw, NGBE_SPI_ILDR_STATUS_PERST);
+ err = wx_check_flash_load(wx, NGBE_SPI_ILDR_STATUS_PERST);
if (err)
goto err_free_mac_table;
- err = wx_check_flash_load(wxhw, NGBE_SPI_ILDR_STATUS_PWRRST);
+ err = wx_check_flash_load(wx, NGBE_SPI_ILDR_STATUS_PWRRST);
if (err)
goto err_free_mac_table;
- err = wx_mng_present(wxhw);
+ err = wx_mng_present(wx);
if (err) {
dev_err(&pdev->dev, "Management capability is not present\n");
goto err_free_mac_table;
}
- err = ngbe_reset_hw(hw);
+ err = ngbe_reset_hw(wx);
if (err) {
dev_err(&pdev->dev, "HW Init failed: %d\n", err);
goto err_free_mac_table;
}
- if (wxhw->bus.func == 0) {
- wr32(wxhw, NGBE_CALSUM_CAP_STATUS, 0x0);
- wr32(wxhw, NGBE_EEPROM_VERSION_STORE_REG, 0x0);
+ if (wx->bus.func == 0) {
+ wr32(wx, NGBE_CALSUM_CAP_STATUS, 0x0);
+ wr32(wx, NGBE_EEPROM_VERSION_STORE_REG, 0x0);
} else {
- e2rom_cksum_cap = rd32(wxhw, NGBE_CALSUM_CAP_STATUS);
- saved_ver = rd32(wxhw, NGBE_EEPROM_VERSION_STORE_REG);
+ e2rom_cksum_cap = rd32(wx, NGBE_CALSUM_CAP_STATUS);
+ saved_ver = rd32(wx, NGBE_EEPROM_VERSION_STORE_REG);
}
- wx_init_eeprom_params(wxhw);
- if (wxhw->bus.func == 0 || e2rom_cksum_cap == 0) {
+ wx_init_eeprom_params(wx);
+ if (wx->bus.func == 0 || e2rom_cksum_cap == 0) {
/* make sure the EEPROM is ready */
- err = ngbe_eeprom_chksum_hostif(hw);
+ err = ngbe_eeprom_chksum_hostif(wx);
if (err) {
dev_err(&pdev->dev, "The EEPROM Checksum Is Not Valid\n");
err = -EIO;
@@ -442,14 +609,14 @@ static int ngbe_probe(struct pci_dev *pdev,
}
}
- adapter->wol = 0;
- if (hw->wol_enabled)
- adapter->wol = NGBE_PSR_WKUP_CTL_MAG;
+ wx->wol = 0;
+ if (wx->wol_enabled)
+ wx->wol = NGBE_PSR_WKUP_CTL_MAG;
- hw->wol_enabled = !!(adapter->wol);
- wr32(wxhw, NGBE_PSR_WKUP_CTL, adapter->wol);
+ wx->wol_enabled = !!(wx->wol);
+ wr32(wx, NGBE_PSR_WKUP_CTL, wx->wol);
- device_set_wakeup_enable(&pdev->dev, adapter->wol);
+ device_set_wakeup_enable(&pdev->dev, wx->wol);
/* Save off EEPROM version number and Option Rom version which
* together make a unique identify for the eeprom
@@ -457,37 +624,50 @@ static int ngbe_probe(struct pci_dev *pdev,
if (saved_ver) {
etrack_id = saved_ver;
} else {
- wx_read_ee_hostif(wxhw,
- wxhw->eeprom.sw_region_offset + NGBE_EEPROM_VERSION_H,
+ wx_read_ee_hostif(wx,
+ wx->eeprom.sw_region_offset + NGBE_EEPROM_VERSION_H,
&e2rom_ver);
etrack_id = e2rom_ver << 16;
- wx_read_ee_hostif(wxhw,
- wxhw->eeprom.sw_region_offset + NGBE_EEPROM_VERSION_L,
+ wx_read_ee_hostif(wx,
+ wx->eeprom.sw_region_offset + NGBE_EEPROM_VERSION_L,
&e2rom_ver);
etrack_id |= e2rom_ver;
- wr32(wxhw, NGBE_EEPROM_VERSION_STORE_REG, etrack_id);
+ wr32(wx, NGBE_EEPROM_VERSION_STORE_REG, etrack_id);
}
+ snprintf(wx->eeprom_id, sizeof(wx->eeprom_id),
+ "0x%08x", etrack_id);
+
+ eth_hw_addr_set(netdev, wx->mac.perm_addr);
+ wx_mac_set_default_filter(wx, wx->mac.perm_addr);
- eth_hw_addr_set(netdev, wxhw->mac.perm_addr);
- ngbe_mac_set_default_filter(adapter, wxhw->mac.perm_addr);
+ err = wx_init_interrupt_scheme(wx);
+ if (err)
+ goto err_free_mac_table;
+
+ /* phy Interface Configuration */
+ err = ngbe_mdio_init(wx);
+ if (err)
+ goto err_clear_interrupt_scheme;
err = register_netdev(netdev);
if (err)
goto err_register;
- pci_set_drvdata(pdev, adapter);
+ pci_set_drvdata(pdev, wx);
- netif_info(adapter, probe, netdev,
+ netif_info(wx, probe, netdev,
"PHY: %s, PBA No: Wang Xun GbE Family Controller\n",
- hw->phy.type == ngbe_phy_internal ? "Internal" : "External");
- netif_info(adapter, probe, netdev, "%pM\n", netdev->dev_addr);
+ wx->mac_type == em_mac_type_mdi ? "Internal" : "External");
+ netif_info(wx, probe, netdev, "%pM\n", netdev->dev_addr);
return 0;
err_register:
- wx_control_hw(wxhw, false);
+ wx_control_hw(wx, false);
+err_clear_interrupt_scheme:
+ wx_clear_interrupt_scheme(wx);
err_free_mac_table:
- kfree(adapter->mac_table);
+ kfree(wx->mac_table);
err_pci_release_regions:
pci_disable_pcie_error_reporting(pdev);
pci_release_selected_regions(pdev,
@@ -508,15 +688,16 @@ err_pci_disable_dev:
**/
static void ngbe_remove(struct pci_dev *pdev)
{
- struct ngbe_adapter *adapter = pci_get_drvdata(pdev);
+ struct wx *wx = pci_get_drvdata(pdev);
struct net_device *netdev;
- netdev = adapter->netdev;
+ netdev = wx->netdev;
unregister_netdev(netdev);
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
- kfree(adapter->mac_table);
+ kfree(wx->mac_table);
+ wx_clear_interrupt_scheme(wx);
pci_disable_pcie_error_reporting(pdev);
pci_disable_device(pdev);
diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c
new file mode 100644
index 000000000000..c9ddbbc3fa4f
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c
@@ -0,0 +1,286 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 - 2022 Beijing WangXun Technology Co., Ltd. */
+
+#include <linux/ethtool.h>
+#include <linux/iopoll.h>
+#include <linux/pci.h>
+#include <linux/phy.h>
+
+#include "../libwx/wx_type.h"
+#include "../libwx/wx_hw.h"
+#include "ngbe_type.h"
+#include "ngbe_mdio.h"
+
+static int ngbe_phy_read_reg_internal(struct mii_bus *bus, int phy_addr, int regnum)
+{
+ struct wx *wx = bus->priv;
+
+ if (phy_addr != 0)
+ return 0xffff;
+ return (u16)rd32(wx, NGBE_PHY_CONFIG(regnum));
+}
+
+static int ngbe_phy_write_reg_internal(struct mii_bus *bus, int phy_addr, int regnum, u16 value)
+{
+ struct wx *wx = bus->priv;
+
+ if (phy_addr == 0)
+ wr32(wx, NGBE_PHY_CONFIG(regnum), value);
+ return 0;
+}
+
+static int ngbe_phy_read_reg_mdi_c22(struct mii_bus *bus, int phy_addr, int regnum)
+{
+ u32 command, val, device_type = 0;
+ struct wx *wx = bus->priv;
+ int ret;
+
+ wr32(wx, NGBE_MDIO_CLAUSE_SELECT, 0xF);
+ /* setup and write the address cycle command */
+ command = NGBE_MSCA_RA(regnum) |
+ NGBE_MSCA_PA(phy_addr) |
+ NGBE_MSCA_DA(device_type);
+ wr32(wx, NGBE_MSCA, command);
+ command = NGBE_MSCC_CMD(NGBE_MSCA_CMD_READ) |
+ NGBE_MSCC_BUSY |
+ NGBE_MDIO_CLK(6);
+ wr32(wx, NGBE_MSCC, command);
+
+ /* wait to complete */
+ ret = read_poll_timeout(rd32, val, !(val & NGBE_MSCC_BUSY), 1000,
+ 100000, false, wx, NGBE_MSCC);
+ if (ret) {
+ wx_err(wx, "Mdio read c22 command did not complete.\n");
+ return ret;
+ }
+
+ return (u16)rd32(wx, NGBE_MSCC);
+}
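The mdi helpers here all drive the same two-register handshake; summarized as a sketch (register names from ngbe_type.h, timing taken from the poll calls above):

/* MSCA/MSCC MDIO handshake, shared by the c22 and c45 paths:
 *   1. pick the frame format via NGBE_MDIO_CLAUSE_SELECT (0xF = c22)
 *   2. wr32(NGBE_MSCA, RA(reg) | PA(phy) | DA(dev))    -- address phase
 *   3. wr32(NGBE_MSCC, CMD(op) | NGBE_MSCC_BUSY | NGBE_MDIO_CLK(6))
 *   4. poll NGBE_MSCC until BUSY clears (1 ms step, 100 ms timeout)
 *   5. reads then return the low 16 bits of NGBE_MSCC as the PHY data
 */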
+
+static int ngbe_phy_write_reg_mdi_c22(struct mii_bus *bus, int phy_addr, int regnum, u16 value)
+{
+ u32 command, val, device_type = 0;
+ struct wx *wx = bus->priv;
+ int ret;
+
+ wr32(wx, NGBE_MDIO_CLAUSE_SELECT, 0xF);
+ /* setup and write the address cycle command */
+ command = NGBE_MSCA_RA(regnum) |
+ NGBE_MSCA_PA(phy_addr) |
+ NGBE_MSCA_DA(device_type);
+ wr32(wx, NGBE_MSCA, command);
+ command = value |
+ NGBE_MSCC_CMD(NGBE_MSCA_CMD_WRITE) |
+ NGBE_MSCC_BUSY |
+ NGBE_MDIO_CLK(6);
+ wr32(wx, NGBE_MSCC, command);
+
+ /* wait to complete */
+ ret = read_poll_timeout(rd32, val, !(val & NGBE_MSCC_BUSY), 1000,
+ 100000, false, wx, NGBE_MSCC);
+ if (ret)
+ wx_err(wx, "Mdio write c22 command did not complete.\n");
+
+ return ret;
+}
+
+static int ngbe_phy_read_reg_mdi_c45(struct mii_bus *bus, int phy_addr, int devnum, int regnum)
+{
+ struct wx *wx = bus->priv;
+ u32 val, command;
+ int ret;
+
+ wr32(wx, NGBE_MDIO_CLAUSE_SELECT, 0x0);
+ /* setup and write the address cycle command */
+ command = NGBE_MSCA_RA(regnum) |
+ NGBE_MSCA_PA(phy_addr) |
+ NGBE_MSCA_DA(devnum);
+ wr32(wx, NGBE_MSCA, command);
+ command = NGBE_MSCC_CMD(NGBE_MSCA_CMD_READ) |
+ NGBE_MSCC_BUSY |
+ NGBE_MDIO_CLK(6);
+ wr32(wx, NGBE_MSCC, command);
+
+ /* wait to complete */
+ ret = read_poll_timeout(rd32, val, !(val & NGBE_MSCC_BUSY), 1000,
+ 100000, false, wx, NGBE_MSCC);
+ if (ret) {
+ wx_err(wx, "Mdio read c45 command did not complete.\n");
+ return ret;
+ }
+
+ return (u16)rd32(wx, NGBE_MSCC);
+}
+
+static int ngbe_phy_write_reg_mdi_c45(struct mii_bus *bus, int phy_addr,
+ int devnum, int regnum, u16 value)
+{
+ struct wx *wx = bus->priv;
+ u32 val, command;
+ int ret;
+
+ wr32(wx, NGBE_MDIO_CLAUSE_SELECT, 0x0);
+ /* setup and write the address cycle command */
+ command = NGBE_MSCA_RA(regnum) |
+ NGBE_MSCA_PA(phy_addr) |
+ NGBE_MSCA_DA(devnum);
+ wr32(wx, NGBE_MSCA, command);
+ command = value |
+ NGBE_MSCC_CMD(NGBE_MSCA_CMD_WRITE) |
+ NGBE_MSCC_BUSY |
+ NGBE_MDIO_CLK(6);
+ wr32(wx, NGBE_MSCC, command);
+
+ /* wait to complete */
+ ret = read_poll_timeout(rd32, val, !(val & NGBE_MSCC_BUSY), 1000,
+ 100000, false, wx, NGBE_MSCC);
+ if (ret)
+ wx_err(wx, "Mdio write c45 command did not complete.\n");
+
+ return ret;
+}
+
+static int ngbe_phy_read_reg_c22(struct mii_bus *bus, int phy_addr, int regnum)
+{
+ struct wx *wx = bus->priv;
+ u16 phy_data;
+
+ if (wx->mac_type == em_mac_type_mdi)
+ phy_data = ngbe_phy_read_reg_internal(bus, phy_addr, regnum);
+ else
+ phy_data = ngbe_phy_read_reg_mdi_c22(bus, phy_addr, regnum);
+
+ return phy_data;
+}
+
+static int ngbe_phy_write_reg_c22(struct mii_bus *bus, int phy_addr,
+ int regnum, u16 value)
+{
+ struct wx *wx = bus->priv;
+ int ret;
+
+ if (wx->mac_type == em_mac_type_mdi)
+ ret = ngbe_phy_write_reg_internal(bus, phy_addr, regnum, value);
+ else
+ ret = ngbe_phy_write_reg_mdi_c22(bus, phy_addr, regnum, value);
+
+ return ret;
+}
+
+static void ngbe_handle_link_change(struct net_device *dev)
+{
+ struct wx *wx = netdev_priv(dev);
+ struct phy_device *phydev;
+ u32 lan_speed, reg;
+
+ phydev = wx->phydev;
+ if (!(wx->link != phydev->link ||
+ wx->speed != phydev->speed ||
+ wx->duplex != phydev->duplex))
+ return;
+
+ wx->link = phydev->link;
+ wx->speed = phydev->speed;
+ wx->duplex = phydev->duplex;
+ switch (phydev->speed) {
+ case SPEED_10:
+ lan_speed = 0;
+ break;
+ case SPEED_100:
+ lan_speed = 1;
+ break;
+ case SPEED_1000:
+ default:
+ lan_speed = 2;
+ break;
+ }
+ wr32m(wx, NGBE_CFG_LAN_SPEED, 0x3, lan_speed);
+
+ if (phydev->link) {
+ reg = rd32(wx, WX_MAC_TX_CFG);
+ reg &= ~WX_MAC_TX_CFG_SPEED_MASK;
+ reg |= WX_MAC_TX_CFG_SPEED_1G | WX_MAC_TX_CFG_TE;
+ wr32(wx, WX_MAC_TX_CFG, reg);
+ /* Reconfigure MAC Rx */
+ reg = rd32(wx, WX_MAC_RX_CFG);
+ wr32(wx, WX_MAC_RX_CFG, reg);
+ wr32(wx, WX_MAC_PKT_FLT, WX_MAC_PKT_FLT_PR);
+ reg = rd32(wx, WX_MAC_WDG_TIMEOUT);
+ wr32(wx, WX_MAC_WDG_TIMEOUT, reg);
+ }
+ phy_print_status(phydev);
+}
+
+int ngbe_phy_connect(struct wx *wx)
+{
+ int ret;
+
+ ret = phy_connect_direct(wx->netdev,
+ wx->phydev,
+ ngbe_handle_link_change,
+ PHY_INTERFACE_MODE_RGMII_ID);
+ if (ret) {
+ wx_err(wx, "PHY connect failed.\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static void ngbe_phy_fixup(struct wx *wx)
+{
+ struct phy_device *phydev = wx->phydev;
+ struct ethtool_eee eee;
+
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Half_BIT);
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT);
+ phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Half_BIT);
+
+ if (wx->mac_type != em_mac_type_mdi)
+ return;
+ /* disable EEE, internal phy does not support eee */
+ memset(&eee, 0, sizeof(eee));
+ phy_ethtool_set_eee(phydev, &eee);
+}
+
+int ngbe_mdio_init(struct wx *wx)
+{
+ struct pci_dev *pdev = wx->pdev;
+ struct mii_bus *mii_bus;
+ int ret;
+
+ mii_bus = devm_mdiobus_alloc(&pdev->dev);
+ if (!mii_bus)
+ return -ENOMEM;
+
+ mii_bus->name = "ngbe_mii_bus";
+ mii_bus->read = ngbe_phy_read_reg_c22;
+ mii_bus->write = ngbe_phy_write_reg_c22;
+ mii_bus->phy_mask = GENMASK(31, 4);
+ mii_bus->parent = &pdev->dev;
+ mii_bus->priv = wx;
+
+ if (wx->mac_type == em_mac_type_rgmii) {
+ mii_bus->read_c45 = ngbe_phy_read_reg_mdi_c45;
+ mii_bus->write_c45 = ngbe_phy_write_reg_mdi_c45;
+ }
+
+ snprintf(mii_bus->id, MII_BUS_ID_SIZE, "ngbe-%x",
+ (pdev->bus->number << 8) | pdev->devfn);
+ ret = devm_mdiobus_register(&pdev->dev, mii_bus);
+ if (ret)
+ return ret;
+
+ wx->phydev = phy_find_first(mii_bus);
+ if (!wx->phydev)
+ return -ENODEV;
+
+ phy_attached_info(wx->phydev);
+ ngbe_phy_fixup(wx);
+
+ wx->link = 0;
+ wx->speed = 0;
+ wx->duplex = 0;
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.h b/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.h
new file mode 100644
index 000000000000..0a6400dd89c4
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * WangXun Gigabit PCI Express Linux driver
+ * Copyright (c) 2019 - 2022 Beijing WangXun Technology Co., Ltd.
+ */
+
+#ifndef _NGBE_MDIO_H_
+#define _NGBE_MDIO_H_
+
+int ngbe_phy_connect(struct wx *wx);
+int ngbe_mdio_init(struct wx *wx);
+#endif /* _NGBE_MDIO_H_ */
diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h b/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h
index 39f6c03f1a54..a2351349785e 100644
--- a/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h
+++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h
@@ -49,7 +49,6 @@
#define NGBE_SPI_ILDR_STATUS 0x10120
#define NGBE_SPI_ILDR_STATUS_PERST BIT(0) /* PCIE_PERST is done */
#define NGBE_SPI_ILDR_STATUS_PWRRST BIT(1) /* Power on reset is done */
-#define NGBE_SPI_ILDR_STATUS_LAN_SW_RST(_i) BIT((_i) + 9) /* lan soft reset done */
/* Checksum and EEPROM pointers */
#define NGBE_CALSUM_COMMAND 0xE9
@@ -60,6 +59,25 @@
#define NGBE_EEPROM_VERSION_L 0x1D
#define NGBE_EEPROM_VERSION_H 0x1E
+/* mdio access */
+#define NGBE_MSCA 0x11200
+#define NGBE_MSCA_RA(v) FIELD_PREP(U16_MAX, v)
+#define NGBE_MSCA_PA(v) FIELD_PREP(GENMASK(20, 16), v)
+#define NGBE_MSCA_DA(v) FIELD_PREP(GENMASK(25, 21), v)
+#define NGBE_MSCC 0x11204
+#define NGBE_MSCC_CMD(v) FIELD_PREP(GENMASK(17, 16), v)
+
+enum NGBE_MSCA_CMD_value {
+ NGBE_MSCA_CMD_RSV = 0,
+ NGBE_MSCA_CMD_WRITE,
+ NGBE_MSCA_CMD_POST_READ,
+ NGBE_MSCA_CMD_READ,
+};
+
+#define NGBE_MSCC_SADDR BIT(18)
+#define NGBE_MSCC_BUSY BIT(22)
+#define NGBE_MDIO_CLK(v) FIELD_PREP(GENMASK(21, 19), v)
+
/* Media-dependent registers. */
#define NGBE_MDIO_CLAUSE_SELECT 0x11220
@@ -72,6 +90,24 @@
#define NGBE_GPIO_DDR_0 BIT(0) /* SDP0 IO direction */
#define NGBE_GPIO_DDR_1 BIT(1) /* SDP1 IO direction */
+/* Extended Interrupt Enable Set */
+#define NGBE_PX_MISC_IEN_DEV_RST BIT(10)
+#define NGBE_PX_MISC_IEN_ETH_LK BIT(18)
+#define NGBE_PX_MISC_IEN_INT_ERR BIT(20)
+#define NGBE_PX_MISC_IEN_GPIO BIT(26)
+#define NGBE_PX_MISC_IEN_MASK ( \
+ NGBE_PX_MISC_IEN_DEV_RST | \
+ NGBE_PX_MISC_IEN_ETH_LK | \
+ NGBE_PX_MISC_IEN_INT_ERR | \
+ NGBE_PX_MISC_IEN_GPIO)
+
+#define NGBE_INTR_ALL 0x1FF
+#define NGBE_INTR_MISC(A) BIT((A)->num_q_vectors)
+
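For orientation (illustrative, not driver code): queue cause bits occupy ISB bits 0..num_q_vectors-1 and the misc cause sits directly above them, which is what these two masks encode:

/* NGBE_MAX_MSIX_VECTORS = 9 -> up to 8 queue vectors + 1 misc vector:
 *   NGBE_INTR_MISC(wx) = BIT(wx->num_q_vectors), e.g. BIT(8)
 *   NGBE_INTR_ALL      = 0x1FF = queue bits 0..7 plus the misc bit
 */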
+#define NGBE_PHY_CONFIG(reg_offset) (0x14000 + ((reg_offset) * 4))
+#define NGBE_CFG_LAN_SPEED 0x14440
+#define NGBE_CFG_PORT_ST 0x14404
+
/* Wake up registers */
#define NGBE_PSR_WKUP_CTL 0x15B80
/* Wake Up Filter Control Bit */
@@ -90,50 +126,30 @@
#define NGBE_FW_CMD_ST_PASS 0x80658383
#define NGBE_FW_CMD_ST_FAIL 0x70657376
-enum ngbe_phy_type {
- ngbe_phy_unknown = 0,
- ngbe_phy_none,
- ngbe_phy_internal,
- ngbe_phy_m88e1512,
- ngbe_phy_m88e1512_sfi,
- ngbe_phy_m88e1512_unknown,
- ngbe_phy_yt8521s,
- ngbe_phy_yt8521s_sfi,
- ngbe_phy_internal_yt8521s_sfi,
- ngbe_phy_generic
-};
+#define NGBE_MAX_FDIR_INDICES 7
-enum ngbe_media_type {
- ngbe_media_type_unknown = 0,
- ngbe_media_type_fiber,
- ngbe_media_type_copper,
- ngbe_media_type_backplane,
-};
-
-enum ngbe_mac_type {
- ngbe_mac_type_unknown = 0,
- ngbe_mac_type_mdi,
- ngbe_mac_type_rgmii
-};
+#define NGBE_MAX_RX_QUEUES (NGBE_MAX_FDIR_INDICES + 1)
+#define NGBE_MAX_TX_QUEUES (NGBE_MAX_FDIR_INDICES + 1)
-struct ngbe_phy_info {
- enum ngbe_phy_type type;
- enum ngbe_media_type media_type;
+#define NGBE_ETH_LENGTH_OF_ADDRESS 6
+#define NGBE_MAX_MSIX_VECTORS 0x09
+#define NGBE_RAR_ENTRIES 32
+#define NGBE_RX_PB_SIZE 42
+#define NGBE_MC_TBL_SIZE 128
+#define NGBE_TDB_PB_SZ (20 * 1024) /* 20KB Packet Buffer */
+#define NGBE_MAX_JUMBO_FRAME_SIZE 9432 /* max payload 9414 */
- u32 addr;
- u32 id;
+/* TX/RX descriptor defines */
+#define NGBE_DEFAULT_TXD 512 /* default ring size */
+#define NGBE_DEFAULT_TX_WORK 256
+#define NGBE_MAX_TXD 8192
+#define NGBE_MIN_TXD 128
- bool reset_if_overtemp;
+#define NGBE_DEFAULT_RXD 512 /* default ring size */
+#define NGBE_DEFAULT_RX_WORK 256
+#define NGBE_MAX_RXD 8192
+#define NGBE_MIN_RXD 128
-};
-
-struct ngbe_hw {
- struct wx_hw wxhw;
- struct ngbe_phy_info phy;
- enum ngbe_mac_type mac_type;
+extern char ngbe_driver_name[];
- bool wol_enabled;
- bool ncsi_enabled;
- bool gpio_ctrl;
-};
#endif /* _NGBE_TYPE_H_ */
diff --git a/drivers/net/ethernet/wangxun/txgbe/Makefile b/drivers/net/ethernet/wangxun/txgbe/Makefile
index 78484c58b78b..6db14a2cb2d0 100644
--- a/drivers/net/ethernet/wangxun/txgbe/Makefile
+++ b/drivers/net/ethernet/wangxun/txgbe/Makefile
@@ -7,4 +7,5 @@
obj-$(CONFIG_TXGBE) += txgbe.o
txgbe-objs := txgbe_main.o \
- txgbe_hw.o
+ txgbe_hw.o \
+ txgbe_ethtool.o
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
deleted file mode 100644
index 19e61377bd00..000000000000
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe.h
+++ /dev/null
@@ -1,43 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright (c) 2015 - 2022 Beijing WangXun Technology Co., Ltd. */
-
-#ifndef _TXGBE_H_
-#define _TXGBE_H_
-
-#define TXGBE_MAX_FDIR_INDICES 63
-
-#define TXGBE_MAX_RX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1)
-#define TXGBE_MAX_TX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1)
-
-#define TXGBE_SP_MAX_TX_QUEUES 128
-#define TXGBE_SP_MAX_RX_QUEUES 128
-#define TXGBE_SP_RAR_ENTRIES 128
-#define TXGBE_SP_MC_TBL_SIZE 128
-
-struct txgbe_mac_addr {
- u8 addr[ETH_ALEN];
- u16 state; /* bitmask */
- u64 pools;
-};
-
-#define TXGBE_MAC_STATE_DEFAULT 0x1
-#define TXGBE_MAC_STATE_MODIFIED 0x2
-#define TXGBE_MAC_STATE_IN_USE 0x4
-
-/* board specific private data structure */
-struct txgbe_adapter {
- u8 __iomem *io_addr; /* Mainly for iounmap use */
- /* OS defined structs */
- struct net_device *netdev;
- struct pci_dev *pdev;
-
- /* structs defined in txgbe_type.h */
- struct txgbe_hw hw;
- u16 msg_enable;
- struct txgbe_mac_addr *mac_table;
- char eeprom_id[32];
-};
-
-extern char txgbe_driver_name[];
-
-#endif /* _TXGBE_H_ */
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c
new file mode 100644
index 000000000000..d914e9a05404
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2015 - 2023 Beijing WangXun Technology Co., Ltd. */
+
+#include <linux/pci.h>
+#include <linux/phylink.h>
+#include <linux/netdevice.h>
+
+#include "../libwx/wx_ethtool.h"
+#include "txgbe_ethtool.h"
+
+static const struct ethtool_ops txgbe_ethtool_ops = {
+ .get_drvinfo = wx_get_drvinfo,
+ .get_link = ethtool_op_get_link,
+};
+
+void txgbe_set_ethtool_ops(struct net_device *netdev)
+{
+ netdev->ethtool_ops = &txgbe_ethtool_ops;
+}
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.h
new file mode 100644
index 000000000000..ace1b3571012
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2015 - 2023 Beijing WangXun Technology Co., Ltd. */
+
+#ifndef _TXGBE_ETHTOOL_H_
+#define _TXGBE_ETHTOOL_H_
+
+void txgbe_set_ethtool_ops(struct net_device *netdev);
+
+#endif /* _TXGBE_ETHTOOL_H_ */
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
index 167f7ff73192..ebc46f3be056 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
@@ -12,70 +12,67 @@
#include "../libwx/wx_hw.h"
#include "txgbe_type.h"
#include "txgbe_hw.h"
-#include "txgbe.h"
/**
* txgbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds
- * @hw: pointer to hardware structure
+ * @wx: pointer to hardware structure
*
* Inits the thermal sensor thresholds according to the NVM map
* and saves off the threshold and location values into mac.thermal_sensor_data
**/
-static void txgbe_init_thermal_sensor_thresh(struct txgbe_hw *hw)
+static void txgbe_init_thermal_sensor_thresh(struct wx *wx)
{
- struct wx_hw *wxhw = &hw->wxhw;
- struct wx_thermal_sensor_data *data = &wxhw->mac.sensor;
+ struct wx_thermal_sensor_data *data = &wx->mac.sensor;
memset(data, 0, sizeof(struct wx_thermal_sensor_data));
/* Only support thermal sensors attached to SP physical port 0 */
- if (wxhw->bus.func)
+ if (wx->bus.func)
return;
- wr32(wxhw, TXGBE_TS_CTL, TXGBE_TS_CTL_EVAL_MD);
+ wr32(wx, TXGBE_TS_CTL, TXGBE_TS_CTL_EVAL_MD);
- wr32(wxhw, WX_TS_INT_EN,
+ wr32(wx, WX_TS_INT_EN,
WX_TS_INT_EN_ALARM_INT_EN | WX_TS_INT_EN_DALARM_INT_EN);
- wr32(wxhw, WX_TS_EN, WX_TS_EN_ENA);
+ wr32(wx, WX_TS_EN, WX_TS_EN_ENA);
data->alarm_thresh = 100;
- wr32(wxhw, WX_TS_ALARM_THRE, 677);
+ wr32(wx, WX_TS_ALARM_THRE, 677);
data->dalarm_thresh = 90;
- wr32(wxhw, WX_TS_DALARM_THRE, 614);
+ wr32(wx, WX_TS_DALARM_THRE, 614);
}
/**
* txgbe_read_pba_string - Reads part number string from EEPROM
- * @hw: pointer to hardware structure
+ * @wx: pointer to hardware structure
* @pba_num: stores the part number string from the EEPROM
* @pba_num_size: part number string buffer length
*
* Reads the part number string from the EEPROM.
**/
-int txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num, u32 pba_num_size)
+int txgbe_read_pba_string(struct wx *wx, u8 *pba_num, u32 pba_num_size)
{
u16 pba_ptr, offset, length, data;
- struct wx_hw *wxhw = &hw->wxhw;
int ret_val;
if (!pba_num) {
- wx_err(wxhw, "PBA string buffer was null\n");
+ wx_err(wx, "PBA string buffer was null\n");
return -EINVAL;
}
- ret_val = wx_read_ee_hostif(wxhw,
- wxhw->eeprom.sw_region_offset + TXGBE_PBANUM0_PTR,
+ ret_val = wx_read_ee_hostif(wx,
+ wx->eeprom.sw_region_offset + TXGBE_PBANUM0_PTR,
&data);
if (ret_val != 0) {
- wx_err(wxhw, "NVM Read Error\n");
+ wx_err(wx, "NVM Read Error\n");
return ret_val;
}
- ret_val = wx_read_ee_hostif(wxhw,
- wxhw->eeprom.sw_region_offset + TXGBE_PBANUM1_PTR,
+ ret_val = wx_read_ee_hostif(wx,
+ wx->eeprom.sw_region_offset + TXGBE_PBANUM1_PTR,
&pba_ptr);
if (ret_val != 0) {
- wx_err(wxhw, "NVM Read Error\n");
+ wx_err(wx, "NVM Read Error\n");
return ret_val;
}
@@ -84,11 +81,11 @@ int txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num, u32 pba_num_size)
* and we can decode it into an ascii string
*/
if (data != TXGBE_PBANUM_PTR_GUARD) {
- wx_err(wxhw, "NVM PBA number is not stored as string\n");
+ wx_err(wx, "NVM PBA number is not stored as string\n");
/* we will need 11 characters to store the PBA */
if (pba_num_size < 11) {
- wx_err(wxhw, "PBA string buffer too small\n");
+ wx_err(wx, "PBA string buffer too small\n");
return -ENOMEM;
}
@@ -118,20 +115,20 @@ int txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num, u32 pba_num_size)
return 0;
}
- ret_val = wx_read_ee_hostif(wxhw, pba_ptr, &length);
+ ret_val = wx_read_ee_hostif(wx, pba_ptr, &length);
if (ret_val != 0) {
- wx_err(wxhw, "NVM Read Error\n");
+ wx_err(wx, "NVM Read Error\n");
return ret_val;
}
if (length == 0xFFFF || length == 0) {
- wx_err(wxhw, "NVM PBA number section invalid length\n");
+ wx_err(wx, "NVM PBA number section invalid length\n");
return -EINVAL;
}
/* check if pba_num buffer is big enough */
if (pba_num_size < (((u32)length * 2) - 1)) {
- wx_err(wxhw, "PBA string buffer too small\n");
+ wx_err(wx, "PBA string buffer too small\n");
return -ENOMEM;
}
@@ -140,9 +137,9 @@ int txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num, u32 pba_num_size)
length--;
for (offset = 0; offset < length; offset++) {
- ret_val = wx_read_ee_hostif(wxhw, pba_ptr + offset, &data);
+ ret_val = wx_read_ee_hostif(wx, pba_ptr + offset, &data);
if (ret_val != 0) {
- wx_err(wxhw, "NVM Read Error\n");
+ wx_err(wx, "NVM Read Error\n");
return ret_val;
}
pba_num[offset * 2] = (u8)(data >> 8);
@@ -155,14 +152,13 @@ int txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num, u32 pba_num_size)
/**
* txgbe_calc_eeprom_checksum - Calculates and returns the checksum
- * @hw: pointer to hardware structure
+ * @wx: pointer to hardware structure
 * @checksum: pointer to checksum
*
* Returns a negative error code on error
**/
-static int txgbe_calc_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum)
+static int txgbe_calc_eeprom_checksum(struct wx *wx, u16 *checksum)
{
- struct wx_hw *wxhw = &hw->wxhw;
u16 *eeprom_ptrs = NULL;
u32 buffer_size = 0;
u16 *buffer = NULL;
@@ -170,7 +166,7 @@ static int txgbe_calc_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum)
int status;
u16 i;
- wx_init_eeprom_params(wxhw);
+ wx_init_eeprom_params(wx);
if (!buffer) {
eeprom_ptrs = kvmalloc_array(TXGBE_EEPROM_LAST_WORD, sizeof(u16),
@@ -178,11 +174,11 @@ static int txgbe_calc_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum)
if (!eeprom_ptrs)
return -ENOMEM;
/* Read pointer area */
- status = wx_read_ee_hostif_buffer(wxhw, 0,
+ status = wx_read_ee_hostif_buffer(wx, 0,
TXGBE_EEPROM_LAST_WORD,
eeprom_ptrs);
if (status != 0) {
- wx_err(wxhw, "Failed to read EEPROM image\n");
+ wx_err(wx, "Failed to read EEPROM image\n");
kvfree(eeprom_ptrs);
return status;
}
@@ -194,7 +190,7 @@ static int txgbe_calc_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum)
}
for (i = 0; i < TXGBE_EEPROM_LAST_WORD; i++)
- if (i != wxhw->eeprom.sw_region_offset + TXGBE_EEPROM_CHECKSUM)
+ if (i != wx->eeprom.sw_region_offset + TXGBE_EEPROM_CHECKSUM)
*checksum += local_buffer[i];
if (eeprom_ptrs)
@@ -210,15 +206,14 @@ static int txgbe_calc_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum)
/**
* txgbe_validate_eeprom_checksum - Validate EEPROM checksum
- * @hw: pointer to hardware structure
+ * @wx: pointer to hardware structure
* @checksum_val: calculated checksum
*
* Performs checksum calculation and validates the EEPROM checksum. If the
* caller does not need checksum_val, the value can be NULL.
**/
-int txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum_val)
+int txgbe_validate_eeprom_checksum(struct wx *wx, u16 *checksum_val)
{
- struct wx_hw *wxhw = &hw->wxhw;
u16 read_checksum = 0;
u16 checksum;
int status;
@@ -227,18 +222,18 @@ int txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum_val)
* not continue or we could be in for a very long wait while every
* EEPROM read fails
*/
- status = wx_read_ee_hostif(wxhw, 0, &checksum);
+ status = wx_read_ee_hostif(wx, 0, &checksum);
if (status) {
- wx_err(wxhw, "EEPROM read failed\n");
+ wx_err(wx, "EEPROM read failed\n");
return status;
}
checksum = 0;
- status = txgbe_calc_eeprom_checksum(hw, &checksum);
+ status = txgbe_calc_eeprom_checksum(wx, &checksum);
if (status != 0)
return status;
- status = wx_read_ee_hostif(wxhw, wxhw->eeprom.sw_region_offset +
+ status = wx_read_ee_hostif(wx, wx->eeprom.sw_region_offset +
TXGBE_EEPROM_CHECKSUM, &read_checksum);
if (status != 0)
return status;
@@ -248,7 +243,7 @@ int txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum_val)
*/
if (read_checksum != checksum) {
status = -EIO;
- wx_err(wxhw, "Invalid EEPROM checksum\n");
+ wx_err(wx, "Invalid EEPROM checksum\n");
}
/* If the user cares, return the calculated checksum */
@@ -258,55 +253,52 @@ int txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum_val)
return status;
}
-static void txgbe_reset_misc(struct txgbe_hw *hw)
+static void txgbe_reset_misc(struct wx *wx)
{
- struct wx_hw *wxhw = &hw->wxhw;
-
- wx_reset_misc(wxhw);
- txgbe_init_thermal_sensor_thresh(hw);
+ wx_reset_misc(wx);
+ txgbe_init_thermal_sensor_thresh(wx);
}
/**
* txgbe_reset_hw - Perform hardware reset
- * @hw: pointer to hardware structure
+ * @wx: pointer to wx structure
*
* Resets the hardware by resetting the transmit and receive units, masks
* and clears all interrupts, perform a PHY reset, and perform a link (MAC)
* reset.
**/
-int txgbe_reset_hw(struct txgbe_hw *hw)
+int txgbe_reset_hw(struct wx *wx)
{
- struct wx_hw *wxhw = &hw->wxhw;
int status;
/* Call adapter stop to disable tx/rx and clear interrupts */
- status = wx_stop_adapter(wxhw);
+ status = wx_stop_adapter(wx);
if (status != 0)
return status;
- if (!(((wxhw->subsystem_device_id & WX_NCSI_MASK) == WX_NCSI_SUP) ||
- ((wxhw->subsystem_device_id & WX_WOL_MASK) == WX_WOL_SUP)))
- wx_reset_hostif(wxhw);
+ if (!(((wx->subsystem_device_id & WX_NCSI_MASK) == WX_NCSI_SUP) ||
+ ((wx->subsystem_device_id & WX_WOL_MASK) == WX_WOL_SUP)))
+ wx_reset_hostif(wx);
usleep_range(10, 100);
- status = wx_check_flash_load(wxhw, TXGBE_SPI_ILDR_STATUS_LAN_SW_RST(wxhw->bus.func));
+ status = wx_check_flash_load(wx, TXGBE_SPI_ILDR_STATUS_LAN_SW_RST(wx->bus.func));
if (status != 0)
return status;
- txgbe_reset_misc(hw);
+ txgbe_reset_misc(wx);
/* Store the permanent mac address */
- wx_get_mac_addr(wxhw, wxhw->mac.perm_addr);
+ wx_get_mac_addr(wx, wx->mac.perm_addr);
/* Store MAC address from RAR0, clear receive address registers, and
* clear the multicast table. Also reset num_rar_entries to 128,
* since we modify this value when programming the SAN MAC address.
*/
- wxhw->mac.num_rar_entries = TXGBE_SP_RAR_ENTRIES;
- wx_init_rx_addrs(wxhw);
+ wx->mac.num_rar_entries = TXGBE_SP_RAR_ENTRIES;
+ wx_init_rx_addrs(wx);
- pci_set_master(wxhw->pdev);
+ pci_set_master(wx->pdev);
return 0;
}
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
index 6a751a69177b..e82f65dff8a6 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
@@ -4,8 +4,8 @@
#ifndef _TXGBE_HW_H_
#define _TXGBE_HW_H_
-int txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num, u32 pba_num_size);
-int txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum_val);
-int txgbe_reset_hw(struct txgbe_hw *hw);
+int txgbe_read_pba_string(struct wx *wx, u8 *pba_num, u32 pba_num_size);
+int txgbe_validate_eeprom_checksum(struct wx *wx, u16 *checksum_val);
+int txgbe_reset_hw(struct wx *wx);
#endif /* _TXGBE_HW_H_ */
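The header change captures the theme of this whole series: struct txgbe_hw was a one-member wrapper around struct wx_hw, so every function paid for an extra dereference (struct wx_hw *wxhw = &hw->wxhw;). Flattening everything into a single struct wx removes that hop. A self-contained toy version of the before/after shape, with invented _demo names:

#include <stdio.h>

/* Before: a device-specific wrapper around the shared core structure. */
struct wx_hw_demo { int bus_func; };
struct txgbe_hw_demo { struct wx_hw_demo wxhw; };

static int old_bus_func(struct txgbe_hw_demo *hw)
{
	struct wx_hw_demo *wxhw = &hw->wxhw;	/* boilerplate per function */

	return wxhw->bus_func;
}

/* After: one unified private structure, passed around directly. */
struct wx_demo { int bus_func; };

static int new_bus_func(struct wx_demo *wx)
{
	return wx->bus_func;
}

int main(void)
{
	struct txgbe_hw_demo hw = { .wxhw = { .bus_func = 1 } };
	struct wx_demo wx = { .bus_func = 1 };

	printf("%d %d\n", old_bus_func(&hw), new_bus_func(&wx));
	return 0;
}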
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
index 36780e7f05b7..6c0a98230557 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -11,10 +11,11 @@
#include <net/ip.h>
#include "../libwx/wx_type.h"
+#include "../libwx/wx_lib.h"
#include "../libwx/wx_hw.h"
#include "txgbe_type.h"
#include "txgbe_hw.h"
-#include "txgbe.h"
+#include "txgbe_ethtool.h"
char txgbe_driver_name[] = "txgbe";
@@ -35,26 +36,26 @@ static const struct pci_device_id txgbe_pci_tbl[] = {
#define DEFAULT_DEBUG_LEVEL_SHIFT 3
-static void txgbe_check_minimum_link(struct txgbe_adapter *adapter)
+static void txgbe_check_minimum_link(struct wx *wx)
{
struct pci_dev *pdev;
- pdev = adapter->pdev;
+ pdev = wx->pdev;
pcie_print_link_status(pdev);
}
/**
* txgbe_enumerate_functions - Get the number of ports this device has
- * @adapter: adapter structure
+ * @wx: wx structure
*
* This function enumerates the physical functions co-located on a single slot,
* in order to determine how many ports a device has. This is most useful in
* determining the required GT/s of PCIe bandwidth necessary for optimal
* performance.
**/
-static int txgbe_enumerate_functions(struct txgbe_adapter *adapter)
+static int txgbe_enumerate_functions(struct wx *wx)
{
- struct pci_dev *entry, *pdev = adapter->pdev;
+ struct pci_dev *entry, *pdev = wx->pdev;
int physfns = 0;
list_for_each_entry(entry, &pdev->bus->devices, bus_list) {
@@ -73,196 +74,299 @@ static int txgbe_enumerate_functions(struct txgbe_adapter *adapter)
return physfns;
}
-static void txgbe_sync_mac_table(struct txgbe_adapter *adapter)
+/**
+ * txgbe_irq_enable - Enable default interrupt generation settings
+ * @wx: pointer to private structure
+ * @queues: enable irqs for queues
+ **/
+static void txgbe_irq_enable(struct wx *wx, bool queues)
{
- struct txgbe_hw *hw = &adapter->hw;
- struct wx_hw *wxhw = &hw->wxhw;
- int i;
-
- for (i = 0; i < wxhw->mac.num_rar_entries; i++) {
- if (adapter->mac_table[i].state & TXGBE_MAC_STATE_MODIFIED) {
- if (adapter->mac_table[i].state & TXGBE_MAC_STATE_IN_USE) {
- wx_set_rar(wxhw, i,
- adapter->mac_table[i].addr,
- adapter->mac_table[i].pools,
- WX_PSR_MAC_SWC_AD_H_AV);
- } else {
- wx_clear_rar(wxhw, i);
- }
- adapter->mac_table[i].state &= ~(TXGBE_MAC_STATE_MODIFIED);
- }
- }
+ /* unmask interrupt */
+ wx_intr_enable(wx, TXGBE_INTR_MISC(wx));
+ if (queues)
+ wx_intr_enable(wx, TXGBE_INTR_QALL(wx));
}
-/* this function destroys the first RAR entry */
-static void txgbe_mac_set_default_filter(struct txgbe_adapter *adapter,
- u8 *addr)
+/**
+ * txgbe_intr - msi/legacy mode Interrupt Handler
+ * @irq: interrupt number
+ * @data: pointer to a network interface device structure
+ **/
+static irqreturn_t txgbe_intr(int __always_unused irq, void *data)
{
- struct wx_hw *wxhw = &adapter->hw.wxhw;
-
- memcpy(&adapter->mac_table[0].addr, addr, ETH_ALEN);
- adapter->mac_table[0].pools = 1ULL;
- adapter->mac_table[0].state = (TXGBE_MAC_STATE_DEFAULT |
- TXGBE_MAC_STATE_IN_USE);
- wx_set_rar(wxhw, 0, adapter->mac_table[0].addr,
- adapter->mac_table[0].pools,
- WX_PSR_MAC_SWC_AD_H_AV);
+ struct wx_q_vector *q_vector;
+ struct wx *wx = data;
+ struct pci_dev *pdev;
+ u32 eicr;
+
+ q_vector = wx->q_vector[0];
+ pdev = wx->pdev;
+
+ eicr = wx_misc_isb(wx, WX_ISB_VEC0);
+ if (!eicr) {
+ /* shared interrupt alert!
+ * re-enable the interrupt that was masked before the ICR read.
+ */
+ if (netif_running(wx->netdev))
+ txgbe_irq_enable(wx, true);
+ return IRQ_NONE; /* Not our interrupt */
+ }
+ wx->isb_mem[WX_ISB_VEC0] = 0;
+ if (!(pdev->msi_enabled))
+ wr32(wx, WX_PX_INTA, 1);
+
+ wx->isb_mem[WX_ISB_MISC] = 0;
+ /* interrupts are auto-disabled here, so no explicit disable is needed */
+ napi_schedule_irqoff(&q_vector->napi);
+
+ /* re-enable link (maybe) and non-queue interrupts, no flush.
+ * txgbe_poll will re-enable the queue interrupts
+ */
+ if (netif_running(wx->netdev))
+ txgbe_irq_enable(wx, false);
+
+ return IRQ_HANDLED;
}
-static void txgbe_flush_sw_mac_table(struct txgbe_adapter *adapter)
+static irqreturn_t txgbe_msix_other(int __always_unused irq, void *data)
{
- struct wx_hw *wxhw = &adapter->hw.wxhw;
- u32 i;
+ struct wx *wx = data;
- for (i = 0; i < wxhw->mac.num_rar_entries; i++) {
- adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED;
- adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE;
- memset(adapter->mac_table[i].addr, 0, ETH_ALEN);
- adapter->mac_table[i].pools = 0;
- }
- txgbe_sync_mac_table(adapter);
+ /* re-enable the original interrupt state */
+ if (netif_running(wx->netdev))
+ txgbe_irq_enable(wx, false);
+
+ return IRQ_HANDLED;
}
-static int txgbe_del_mac_filter(struct txgbe_adapter *adapter, u8 *addr, u16 pool)
+/**
+ * txgbe_request_msix_irqs - Initialize MSI-X interrupts
+ * @wx: board private structure
+ *
+ * Allocate MSI-X vectors and request interrupts from the kernel.
+ **/
+static int txgbe_request_msix_irqs(struct wx *wx)
{
- struct wx_hw *wxhw = &adapter->hw.wxhw;
- u32 i;
+ struct net_device *netdev = wx->netdev;
+ int vector, err;
+
+ for (vector = 0; vector < wx->num_q_vectors; vector++) {
+ struct wx_q_vector *q_vector = wx->q_vector[vector];
+ struct msix_entry *entry = &wx->msix_entries[vector];
+
+ if (q_vector->tx.ring && q_vector->rx.ring)
+ snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+ "%s-TxRx-%d", netdev->name, entry->entry);
+ else
+ /* skip this unused q_vector */
+ continue;
- if (is_zero_ether_addr(addr))
- return -EINVAL;
-
- /* search table for addr, if found, set to 0 and sync */
- for (i = 0; i < wxhw->mac.num_rar_entries; i++) {
- if (ether_addr_equal(addr, adapter->mac_table[i].addr)) {
- if (adapter->mac_table[i].pools & (1ULL << pool)) {
- adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED;
- adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE;
- adapter->mac_table[i].pools &= ~(1ULL << pool);
- txgbe_sync_mac_table(adapter);
- }
- return 0;
+ err = request_irq(entry->vector, wx_msix_clean_rings, 0,
+ q_vector->name, q_vector);
+ if (err) {
+ wx_err(wx, "request_irq failed for MSIX interrupt %s Error: %d\n",
+ q_vector->name, err);
+ goto free_queue_irqs;
}
+ }
- if (adapter->mac_table[i].pools != (1 << pool))
- continue;
- if (!ether_addr_equal(addr, adapter->mac_table[i].addr))
- continue;
+ err = request_irq(wx->msix_entries[vector].vector,
+ txgbe_msix_other, 0, netdev->name, wx);
+ if (err) {
+ wx_err(wx, "request_irq for msix_other failed: %d\n", err);
+ goto free_queue_irqs;
+ }
+
+ return 0;
- adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED;
- adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE;
- memset(adapter->mac_table[i].addr, 0, ETH_ALEN);
- adapter->mac_table[i].pools = 0;
- txgbe_sync_mac_table(adapter);
- return 0;
+free_queue_irqs:
+ while (vector) {
+ vector--;
+ free_irq(wx->msix_entries[vector].vector,
+ wx->q_vector[vector]);
}
- return -ENOMEM;
+ wx_reset_interrupt_capability(wx);
+ return err;
}
-static void txgbe_up_complete(struct txgbe_adapter *adapter)
+/**
+ * txgbe_request_irq - initialize interrupts
+ * @wx: board private structure
+ *
+ * Attempt to configure interrupts using the best available
+ * capabilities of the hardware and kernel.
+ **/
+static int txgbe_request_irq(struct wx *wx)
{
- struct txgbe_hw *hw = &adapter->hw;
- struct wx_hw *wxhw = &hw->wxhw;
+ struct net_device *netdev = wx->netdev;
+ struct pci_dev *pdev = wx->pdev;
+ int err;
- wx_control_hw(wxhw, true);
+ if (pdev->msix_enabled)
+ err = txgbe_request_msix_irqs(wx);
+ else if (pdev->msi_enabled)
+ err = request_irq(wx->pdev->irq, &txgbe_intr, 0,
+ netdev->name, wx);
+ else
+ err = request_irq(wx->pdev->irq, &txgbe_intr, IRQF_SHARED,
+ netdev->name, wx);
+
+ if (err)
+ wx_err(wx, "request_irq failed, Error %d\n", err);
+
+ return err;
}
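txgbe_request_irq() encodes a common PCI pattern: prefer MSI-X (one handler per queue vector plus a separate misc vector), fall back to exclusive MSI, and only in the legacy INTx case pass IRQF_SHARED, since a level-triggered line can be shared with other devices — which is also why txgbe_intr() must return IRQ_NONE when the ISB shows no cause. A condensed sketch of the ladder, assuming hypothetical demo_priv/demo_intr names, not the driver code itself:

static int demo_request_irq(struct demo_priv *p)
{
	if (p->pdev->msix_enabled)		/* best: per-vector IRQs */
		return demo_request_msix_irqs(p);

	if (p->pdev->msi_enabled)		/* exclusive message IRQ */
		return request_irq(p->pdev->irq, demo_intr, 0,
				   p->netdev->name, p);

	/* legacy INTx may be shared, so the handler must tolerate
	 * interrupts that are not ours and return IRQ_NONE for them
	 */
	return request_irq(p->pdev->irq, demo_intr, IRQF_SHARED,
			   p->netdev->name, p);
}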
-static void txgbe_reset(struct txgbe_adapter *adapter)
+static void txgbe_up_complete(struct wx *wx)
{
- struct net_device *netdev = adapter->netdev;
- struct txgbe_hw *hw = &adapter->hw;
+ u32 reg;
+
+ wx_control_hw(wx, true);
+ wx_configure_vectors(wx);
+
+ /* make sure to complete pre-operations */
+ smp_mb__before_atomic();
+ wx_napi_enable_all(wx);
+
+ /* clear any pending interrupts, may auto mask */
+ rd32(wx, WX_PX_IC);
+ rd32(wx, WX_PX_MISC_IC);
+ txgbe_irq_enable(wx, true);
+
+ /* Configure MAC Rx and Tx when link is up */
+ reg = rd32(wx, WX_MAC_RX_CFG);
+ wr32(wx, WX_MAC_RX_CFG, reg);
+ wr32(wx, WX_MAC_PKT_FLT, WX_MAC_PKT_FLT_PR);
+ reg = rd32(wx, WX_MAC_WDG_TIMEOUT);
+ wr32(wx, WX_MAC_WDG_TIMEOUT, reg);
+ reg = rd32(wx, WX_MAC_TX_CFG);
+ wr32(wx, WX_MAC_TX_CFG, (reg & ~WX_MAC_TX_CFG_SPEED_MASK) | WX_MAC_TX_CFG_SPEED_10G);
+
+ /* enable transmits */
+ netif_tx_start_all_queues(wx->netdev);
+ netif_carrier_on(wx->netdev);
+}
+
+static void txgbe_reset(struct wx *wx)
+{
+ struct net_device *netdev = wx->netdev;
u8 old_addr[ETH_ALEN];
int err;
- err = txgbe_reset_hw(hw);
+ err = txgbe_reset_hw(wx);
if (err != 0)
- dev_err(&adapter->pdev->dev, "Hardware Error: %d\n", err);
+ wx_err(wx, "Hardware Error: %d\n", err);
/* do not flush user set addresses */
- memcpy(old_addr, &adapter->mac_table[0].addr, netdev->addr_len);
- txgbe_flush_sw_mac_table(adapter);
- txgbe_mac_set_default_filter(adapter, old_addr);
+ memcpy(old_addr, &wx->mac_table[0].addr, netdev->addr_len);
+ wx_flush_sw_mac_table(wx);
+ wx_mac_set_default_filter(wx, old_addr);
}
-static void txgbe_disable_device(struct txgbe_adapter *adapter)
+static void txgbe_disable_device(struct wx *wx)
{
- struct net_device *netdev = adapter->netdev;
- struct wx_hw *wxhw = &adapter->hw.wxhw;
+ struct net_device *netdev = wx->netdev;
+ u32 i;
- wx_disable_pcie_master(wxhw);
+ wx_disable_pcie_master(wx);
/* disable receives */
- wx_disable_rx(wxhw);
+ wx_disable_rx(wx);
+
+ /* disable all enabled rx queues */
+ for (i = 0; i < wx->num_rx_queues; i++)
+ /* this call also flushes the previous write */
+ wx_disable_rx_queue(wx, wx->rx_ring[i]);
+ netif_tx_stop_all_queues(netdev);
netif_carrier_off(netdev);
netif_tx_disable(netdev);
- if (wxhw->bus.func < 2)
- wr32m(wxhw, TXGBE_MIS_PRB_CTL, TXGBE_MIS_PRB_CTL_LAN_UP(wxhw->bus.func), 0);
+ wx_irq_disable(wx);
+ wx_napi_disable_all(wx);
+
+ if (wx->bus.func < 2)
+ wr32m(wx, TXGBE_MIS_PRB_CTL, TXGBE_MIS_PRB_CTL_LAN_UP(wx->bus.func), 0);
else
- dev_err(&adapter->pdev->dev,
- "%s: invalid bus lan id %d\n",
- __func__, wxhw->bus.func);
+ wx_err(wx, "%s: invalid bus lan id %d\n",
+ __func__, wx->bus.func);
- if (!(((wxhw->subsystem_device_id & WX_NCSI_MASK) == WX_NCSI_SUP) ||
- ((wxhw->subsystem_device_id & WX_WOL_MASK) == WX_WOL_SUP))) {
+ if (!(((wx->subsystem_device_id & WX_NCSI_MASK) == WX_NCSI_SUP) ||
+ ((wx->subsystem_device_id & WX_WOL_MASK) == WX_WOL_SUP))) {
/* disable mac transmitter */
- wr32m(wxhw, WX_MAC_TX_CFG, WX_MAC_TX_CFG_TE, 0);
+ wr32m(wx, WX_MAC_TX_CFG, WX_MAC_TX_CFG_TE, 0);
+ }
+
+ /* disable transmits in the hardware now that interrupts are off */
+ for (i = 0; i < wx->num_tx_queues; i++) {
+ u8 reg_idx = wx->tx_ring[i]->reg_idx;
+
+ wr32(wx, WX_PX_TR_CFG(reg_idx), WX_PX_TR_CFG_SWFLSH);
}
/* Disable the Tx DMA engine */
- wr32m(wxhw, WX_TDM_CTL, WX_TDM_CTL_TE, 0);
+ wr32m(wx, WX_TDM_CTL, WX_TDM_CTL_TE, 0);
}
-static void txgbe_down(struct txgbe_adapter *adapter)
+static void txgbe_down(struct wx *wx)
{
- txgbe_disable_device(adapter);
- txgbe_reset(adapter);
+ txgbe_disable_device(wx);
+ txgbe_reset(wx);
+
+ wx_clean_all_tx_rings(wx);
+ wx_clean_all_rx_rings(wx);
}
/**
- * txgbe_sw_init - Initialize general software structures (struct txgbe_adapter)
- * @adapter: board private structure to initialize
+ * txgbe_sw_init - Initialize general software structures (struct wx)
+ * @wx: board private structure to initialize
**/
-static int txgbe_sw_init(struct txgbe_adapter *adapter)
+static int txgbe_sw_init(struct wx *wx)
{
- struct pci_dev *pdev = adapter->pdev;
- struct txgbe_hw *hw = &adapter->hw;
- struct wx_hw *wxhw = &hw->wxhw;
+ u16 msix_count = 0;
int err;
- wxhw->hw_addr = adapter->io_addr;
- wxhw->pdev = pdev;
+ wx->mac.num_rar_entries = TXGBE_SP_RAR_ENTRIES;
+ wx->mac.max_tx_queues = TXGBE_SP_MAX_TX_QUEUES;
+ wx->mac.max_rx_queues = TXGBE_SP_MAX_RX_QUEUES;
+ wx->mac.mcft_size = TXGBE_SP_MC_TBL_SIZE;
+ wx->mac.rx_pb_size = TXGBE_SP_RX_PB_SIZE;
+ wx->mac.tx_pb_size = TXGBE_SP_TDB_PB_SZ;
/* PCI config space info */
- err = wx_sw_init(wxhw);
+ err = wx_sw_init(wx);
if (err < 0) {
- netif_err(adapter, probe, adapter->netdev,
- "read of internal subsystem device id failed\n");
+ wx_err(wx, "read of internal subsystem device id failed\n");
return err;
}
- switch (wxhw->device_id) {
+ switch (wx->device_id) {
case TXGBE_DEV_ID_SP1000:
case TXGBE_DEV_ID_WX1820:
- wxhw->mac.type = wx_mac_sp;
+ wx->mac.type = wx_mac_sp;
break;
default:
- wxhw->mac.type = wx_mac_unknown;
+ wx->mac.type = wx_mac_unknown;
break;
}
- wxhw->mac.num_rar_entries = TXGBE_SP_RAR_ENTRIES;
- wxhw->mac.max_tx_queues = TXGBE_SP_MAX_TX_QUEUES;
- wxhw->mac.max_rx_queues = TXGBE_SP_MAX_RX_QUEUES;
- wxhw->mac.mcft_size = TXGBE_SP_MC_TBL_SIZE;
-
- adapter->mac_table = kcalloc(wxhw->mac.num_rar_entries,
- sizeof(struct txgbe_mac_addr),
- GFP_KERNEL);
- if (!adapter->mac_table) {
- netif_err(adapter, probe, adapter->netdev,
- "mac_table allocation failed\n");
- return -ENOMEM;
- }
+ /* Set common capability flags and settings */
+ wx->max_q_vectors = TXGBE_MAX_MSIX_VECTORS;
+ err = wx_get_pcie_msix_counts(wx, &msix_count, TXGBE_MAX_MSIX_VECTORS);
+ if (err)
+ wx_err(wx, "Do not support MSI-X\n");
+ wx->mac.max_msix_vectors = msix_count;
+
+ /* enable itr by default in dynamic mode */
+ wx->rx_itr_setting = 1;
+ wx->tx_itr_setting = 1;
+
+ /* set default ring sizes */
+ wx->tx_ring_count = TXGBE_DEFAULT_TXD;
+ wx->rx_ring_count = TXGBE_DEFAULT_RXD;
+
+ /* set default work limits */
+ wx->tx_work_limit = TXGBE_DEFAULT_TX_WORK;
+ wx->rx_work_limit = TXGBE_DEFAULT_RX_WORK;
return 0;
}
@@ -278,23 +382,53 @@ static int txgbe_sw_init(struct txgbe_adapter *adapter)
**/
static int txgbe_open(struct net_device *netdev)
{
- struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct wx *wx = netdev_priv(netdev);
+ int err;
+
+ err = wx_setup_resources(wx);
+ if (err)
+ goto err_reset;
+
+ wx_configure(wx);
- txgbe_up_complete(adapter);
+ err = txgbe_request_irq(wx);
+ if (err)
+ goto err_free_isb;
+
+ /* Notify the stack of the actual queue counts. */
+ err = netif_set_real_num_tx_queues(netdev, wx->num_tx_queues);
+ if (err)
+ goto err_free_irq;
+
+ err = netif_set_real_num_rx_queues(netdev, wx->num_rx_queues);
+ if (err)
+ goto err_free_irq;
+
+ txgbe_up_complete(wx);
return 0;
+
+err_free_irq:
+ wx_free_irq(wx);
+err_free_isb:
+ wx_free_isb_resources(wx);
+err_reset:
+ txgbe_reset(wx);
+
+ return err;
}
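The rewritten open path uses the kernel's cascading-label unwind: each acquisition that can fail jumps to the label that releases everything obtained so far, in reverse order, so there is exactly one teardown sequence to keep correct. The generic shape, with hypothetical acquire_*/release_* helpers standing in for wx_setup_resources()/txgbe_request_irq()/etc.:

static int demo_open(void)
{
	int err;

	err = acquire_a();		/* e.g. ring/ISB resources */
	if (err)
		goto err_none;
	err = acquire_b();		/* e.g. IRQs */
	if (err)
		goto err_free_a;
	err = acquire_c();		/* e.g. real queue counts */
	if (err)
		goto err_free_b;
	return 0;

err_free_b:				/* release in reverse order */
	release_b();
err_free_a:
	release_a();
err_none:
	return err;
}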
/**
* txgbe_close_suspend - actions necessary to both suspend and close flows
- * @adapter: the private adapter struct
+ * @wx: the private wx struct
*
* This function should contain the necessary work common to both suspending
* and closing of the device.
*/
-static void txgbe_close_suspend(struct txgbe_adapter *adapter)
+static void txgbe_close_suspend(struct wx *wx)
{
- txgbe_disable_device(adapter);
+ txgbe_disable_device(wx);
+ wx_free_resources(wx);
}
/**
@@ -310,29 +444,30 @@ static void txgbe_close_suspend(struct txgbe_adapter *adapter)
**/
static int txgbe_close(struct net_device *netdev)
{
- struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct wx *wx = netdev_priv(netdev);
- txgbe_down(adapter);
- wx_control_hw(&adapter->hw.wxhw, false);
+ txgbe_down(wx);
+ wx_free_irq(wx);
+ wx_free_resources(wx);
+ wx_control_hw(wx, false);
return 0;
}
static void txgbe_dev_shutdown(struct pci_dev *pdev, bool *enable_wake)
{
- struct txgbe_adapter *adapter = pci_get_drvdata(pdev);
- struct net_device *netdev = adapter->netdev;
- struct txgbe_hw *hw = &adapter->hw;
- struct wx_hw *wxhw = &hw->wxhw;
+ struct wx *wx = pci_get_drvdata(pdev);
+ struct net_device *netdev;
+ netdev = wx->netdev;
netif_device_detach(netdev);
rtnl_lock();
if (netif_running(netdev))
- txgbe_close_suspend(adapter);
+ txgbe_close_suspend(wx);
rtnl_unlock();
- wx_control_hw(wxhw, false);
+ wx_control_hw(wx, false);
pci_disable_device(pdev);
}
@@ -349,45 +484,14 @@ static void txgbe_shutdown(struct pci_dev *pdev)
}
}
-static netdev_tx_t txgbe_xmit_frame(struct sk_buff *skb,
- struct net_device *netdev)
-{
- return NETDEV_TX_OK;
-}
-
-/**
- * txgbe_set_mac - Change the Ethernet Address of the NIC
- * @netdev: network interface device structure
- * @p: pointer to an address structure
- *
- * Returns 0 on success, negative on failure
- **/
-static int txgbe_set_mac(struct net_device *netdev, void *p)
-{
- struct txgbe_adapter *adapter = netdev_priv(netdev);
- struct wx_hw *wxhw = &adapter->hw.wxhw;
- struct sockaddr *addr = p;
- int retval;
-
- retval = eth_prepare_mac_addr_change(netdev, addr);
- if (retval)
- return retval;
-
- txgbe_del_mac_filter(adapter, wxhw->mac.addr, 0);
- eth_hw_addr_set(netdev, addr->sa_data);
- memcpy(wxhw->mac.addr, addr->sa_data, netdev->addr_len);
-
- txgbe_mac_set_default_filter(adapter, wxhw->mac.addr);
-
- return 0;
-}
-
static const struct net_device_ops txgbe_netdev_ops = {
.ndo_open = txgbe_open,
.ndo_stop = txgbe_close,
- .ndo_start_xmit = txgbe_xmit_frame,
+ .ndo_start_xmit = wx_xmit_frame,
+ .ndo_set_rx_mode = wx_set_rx_mode,
.ndo_validate_addr = eth_validate_addr,
- .ndo_set_mac_address = txgbe_set_mac,
+ .ndo_set_mac_address = wx_set_mac,
+ .ndo_get_stats64 = wx_get_stats64,
};
/**
@@ -398,17 +502,15 @@ static const struct net_device_ops txgbe_netdev_ops = {
* Returns 0 on success, negative on failure
*
* txgbe_probe initializes an adapter identified by a pci_dev structure.
- * The OS initialization, configuring of the adapter private structure,
+ * The OS initialization, configuring of the wx private structure,
* and a hardware reset occur.
**/
static int txgbe_probe(struct pci_dev *pdev,
const struct pci_device_id __always_unused *ent)
{
- struct txgbe_adapter *adapter = NULL;
- struct txgbe_hw *hw = NULL;
- struct wx_hw *wxhw = NULL;
struct net_device *netdev;
int err, expected_gts;
+ struct wx *wx = NULL;
u16 eeprom_verh = 0, eeprom_verl = 0, offset = 0;
u16 eeprom_cfg_blkh = 0, eeprom_cfg_blkl = 0;
@@ -440,7 +542,7 @@ static int txgbe_probe(struct pci_dev *pdev,
pci_set_master(pdev);
netdev = devm_alloc_etherdev_mqs(&pdev->dev,
- sizeof(struct txgbe_adapter),
+ sizeof(struct wx),
TXGBE_MAX_TX_QUEUES,
TXGBE_MAX_RX_QUEUES);
if (!netdev) {
@@ -450,81 +552,96 @@ static int txgbe_probe(struct pci_dev *pdev,
SET_NETDEV_DEV(netdev, &pdev->dev);
- adapter = netdev_priv(netdev);
- adapter->netdev = netdev;
- adapter->pdev = pdev;
- hw = &adapter->hw;
- wxhw = &hw->wxhw;
- adapter->msg_enable = (1 << DEFAULT_DEBUG_LEVEL_SHIFT) - 1;
-
- adapter->io_addr = devm_ioremap(&pdev->dev,
- pci_resource_start(pdev, 0),
- pci_resource_len(pdev, 0));
- if (!adapter->io_addr) {
+ wx = netdev_priv(netdev);
+ wx->netdev = netdev;
+ wx->pdev = pdev;
+
+ wx->msg_enable = (1 << DEFAULT_DEBUG_LEVEL_SHIFT) - 1;
+
+ wx->hw_addr = devm_ioremap(&pdev->dev,
+ pci_resource_start(pdev, 0),
+ pci_resource_len(pdev, 0));
+ if (!wx->hw_addr) {
err = -EIO;
goto err_pci_release_regions;
}
+ wx->driver_name = txgbe_driver_name;
+ txgbe_set_ethtool_ops(netdev);
netdev->netdev_ops = &txgbe_netdev_ops;
/* setup the private structure */
- err = txgbe_sw_init(adapter);
+ err = txgbe_sw_init(wx);
if (err)
goto err_free_mac_table;
/* check if flash load is done after hw power up */
- err = wx_check_flash_load(wxhw, TXGBE_SPI_ILDR_STATUS_PERST);
+ err = wx_check_flash_load(wx, TXGBE_SPI_ILDR_STATUS_PERST);
if (err)
goto err_free_mac_table;
- err = wx_check_flash_load(wxhw, TXGBE_SPI_ILDR_STATUS_PWRRST);
+ err = wx_check_flash_load(wx, TXGBE_SPI_ILDR_STATUS_PWRRST);
if (err)
goto err_free_mac_table;
- err = wx_mng_present(wxhw);
+ err = wx_mng_present(wx);
if (err) {
dev_err(&pdev->dev, "Management capability is not present\n");
goto err_free_mac_table;
}
- err = txgbe_reset_hw(hw);
+ err = txgbe_reset_hw(wx);
if (err) {
dev_err(&pdev->dev, "HW Init failed: %d\n", err);
goto err_free_mac_table;
}
netdev->features |= NETIF_F_HIGHDMA;
+ netdev->features |= NETIF_F_SG;
+
+ /* copy netdev features into list of user selectable features */
+ netdev->hw_features |= netdev->features | NETIF_F_RXALL;
+
+ netdev->priv_flags |= IFF_UNICAST_FLT;
+ netdev->priv_flags |= IFF_SUPP_NOFCS;
+
+ netdev->min_mtu = ETH_MIN_MTU;
+ netdev->max_mtu = TXGBE_MAX_JUMBO_FRAME_SIZE - (ETH_HLEN + ETH_FCS_LEN);
/* make sure the EEPROM is good */
- err = txgbe_validate_eeprom_checksum(hw, NULL);
+ err = txgbe_validate_eeprom_checksum(wx, NULL);
if (err != 0) {
dev_err(&pdev->dev, "The EEPROM Checksum Is Not Valid\n");
- wr32(wxhw, WX_MIS_RST, WX_MIS_RST_SW_RST);
+ wr32(wx, WX_MIS_RST, WX_MIS_RST_SW_RST);
err = -EIO;
goto err_free_mac_table;
}
- eth_hw_addr_set(netdev, wxhw->mac.perm_addr);
- txgbe_mac_set_default_filter(adapter, wxhw->mac.perm_addr);
+ eth_hw_addr_set(netdev, wx->mac.perm_addr);
+ wx_mac_set_default_filter(wx, wx->mac.perm_addr);
+
+ err = wx_init_interrupt_scheme(wx);
+ if (err)
+ goto err_free_mac_table;
/* Save off EEPROM version number and Option Rom version which
 * together make a unique identifier for the eeprom
*/
- wx_read_ee_hostif(wxhw,
- wxhw->eeprom.sw_region_offset + TXGBE_EEPROM_VERSION_H,
+ wx_read_ee_hostif(wx,
+ wx->eeprom.sw_region_offset + TXGBE_EEPROM_VERSION_H,
&eeprom_verh);
- wx_read_ee_hostif(wxhw,
- wxhw->eeprom.sw_region_offset + TXGBE_EEPROM_VERSION_L,
+ wx_read_ee_hostif(wx,
+ wx->eeprom.sw_region_offset + TXGBE_EEPROM_VERSION_L,
&eeprom_verl);
etrack_id = (eeprom_verh << 16) | eeprom_verl;
- wx_read_ee_hostif(wxhw,
- wxhw->eeprom.sw_region_offset + TXGBE_ISCSI_BOOT_CONFIG,
+ wx_read_ee_hostif(wx,
+ wx->eeprom.sw_region_offset + TXGBE_ISCSI_BOOT_CONFIG,
&offset);
/* Make sure offset to iSCSI block is valid */
if (!(offset == 0x0) && !(offset == 0xffff)) {
- wx_read_ee_hostif(wxhw, offset + 0x84, &eeprom_cfg_blkh);
- wx_read_ee_hostif(wxhw, offset + 0x83, &eeprom_cfg_blkl);
+ wx_read_ee_hostif(wx, offset + 0x84, &eeprom_cfg_blkh);
+ wx_read_ee_hostif(wx, offset + 0x83, &eeprom_cfg_blkl);
/* Only display Option Rom if it exists */
if (eeprom_cfg_blkl && eeprom_cfg_blkh) {
@@ -532,15 +649,15 @@ static int txgbe_probe(struct pci_dev *pdev,
build = (eeprom_cfg_blkl << 8) | (eeprom_cfg_blkh >> 8);
patch = eeprom_cfg_blkh & 0x00ff;
- snprintf(adapter->eeprom_id, sizeof(adapter->eeprom_id),
+ snprintf(wx->eeprom_id, sizeof(wx->eeprom_id),
"0x%08x, %d.%d.%d", etrack_id, major, build,
patch);
} else {
- snprintf(adapter->eeprom_id, sizeof(adapter->eeprom_id),
+ snprintf(wx->eeprom_id, sizeof(wx->eeprom_id),
"0x%08x", etrack_id);
}
} else {
- snprintf(adapter->eeprom_id, sizeof(adapter->eeprom_id),
+ snprintf(wx->eeprom_id, sizeof(wx->eeprom_id),
"0x%08x", etrack_id);
}
@@ -548,7 +665,9 @@ static int txgbe_probe(struct pci_dev *pdev,
if (err)
goto err_release_hw;
- pci_set_drvdata(pdev, adapter);
+ pci_set_drvdata(pdev, wx);
+
+ netif_tx_stop_all_queues(netdev);
/* calculate the expected PCIe bandwidth required for optimal
* performance. Note that some older parts will never have enough
@@ -556,27 +675,28 @@ static int txgbe_probe(struct pci_dev *pdev,
* parts to ensure that no warning is displayed, as this could confuse
* users otherwise.
*/
- expected_gts = txgbe_enumerate_functions(adapter) * 10;
+ expected_gts = txgbe_enumerate_functions(wx) * 10;
/* don't check link if we failed to enumerate functions */
if (expected_gts > 0)
- txgbe_check_minimum_link(adapter);
+ txgbe_check_minimum_link(wx);
else
dev_warn(&pdev->dev, "Failed to enumerate PF devices.\n");
/* First try to read PBA as a string */
- err = txgbe_read_pba_string(hw, part_str, TXGBE_PBANUM_LENGTH);
+ err = txgbe_read_pba_string(wx, part_str, TXGBE_PBANUM_LENGTH);
if (err)
strncpy(part_str, "Unknown", TXGBE_PBANUM_LENGTH);
- netif_info(adapter, probe, netdev, "%pM\n", netdev->dev_addr);
+ netif_info(wx, probe, netdev, "%pM\n", netdev->dev_addr);
return 0;
err_release_hw:
- wx_control_hw(wxhw, false);
+ wx_clear_interrupt_scheme(wx);
+ wx_control_hw(wx, false);
err_free_mac_table:
- kfree(adapter->mac_table);
+ kfree(wx->mac_table);
err_pci_release_regions:
pci_disable_pcie_error_reporting(pdev);
pci_release_selected_regions(pdev,
@@ -597,16 +717,17 @@ err_pci_disable_dev:
**/
static void txgbe_remove(struct pci_dev *pdev)
{
- struct txgbe_adapter *adapter = pci_get_drvdata(pdev);
+ struct wx *wx = pci_get_drvdata(pdev);
struct net_device *netdev;
- netdev = adapter->netdev;
+ netdev = wx->netdev;
unregister_netdev(netdev);
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
- kfree(adapter->mac_table);
+ kfree(wx->mac_table);
+ wx_clear_interrupt_scheme(wx);
pci_disable_pcie_error_reporting(pdev);
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
index 740a1c447e20..563ea51deca6 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
@@ -67,8 +67,37 @@
#define TXGBE_PBANUM1_PTR 0x06
#define TXGBE_PBANUM_PTR_GUARD 0xFAFA
-struct txgbe_hw {
- struct wx_hw wxhw;
-};
+#define TXGBE_MAX_MSIX_VECTORS 64
+#define TXGBE_MAX_FDIR_INDICES 63
+
+#define TXGBE_MAX_RX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1)
+#define TXGBE_MAX_TX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1)
+
+#define TXGBE_SP_MAX_TX_QUEUES 128
+#define TXGBE_SP_MAX_RX_QUEUES 128
+#define TXGBE_SP_RAR_ENTRIES 128
+#define TXGBE_SP_MC_TBL_SIZE 128
+#define TXGBE_SP_RX_PB_SIZE 512
+#define TXGBE_SP_TDB_PB_SZ (160 * 1024) /* 160KB Packet Buffer */
+#define TXGBE_MAX_JUMBO_FRAME_SIZE 9432 /* max payload 9414 */
+
+/* TX/RX descriptor defines */
+#define TXGBE_DEFAULT_TXD 512
+#define TXGBE_DEFAULT_TX_WORK 256
+
+#if (PAGE_SIZE < 8192)
+#define TXGBE_DEFAULT_RXD 512
+#define TXGBE_DEFAULT_RX_WORK 256
+#else
+#define TXGBE_DEFAULT_RXD 256
+#define TXGBE_DEFAULT_RX_WORK 128
+#endif
+
+#define TXGBE_INTR_MISC(A) BIT((A)->num_q_vectors)
+#define TXGBE_INTR_QALL(A) (TXGBE_INTR_MISC(A) - 1)
+
+#define TXGBE_MAX_EITR GENMASK(11, 3)
+
+extern char txgbe_driver_name[];
#endif /* _TXGBE_TYPE_H_ */
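The TXGBE_INTR_* macros rely on queue vectors occupying the low bits of the interrupt mask, with the misc vector sitting just above them: MISC is BIT(num_q_vectors) and QALL is MISC - 1, i.e. every bit below it. A tiny stand-alone check of the arithmetic (the DEMO_* names are just for illustration):

#include <stdio.h>

#define DEMO_BIT(n)	(1u << (n))
#define DEMO_MISC(nq)	DEMO_BIT(nq)		/* bit above all queue bits */
#define DEMO_QALL(nq)	(DEMO_MISC(nq) - 1)	/* all queue bits at once */

int main(void)
{
	/* with 8 queue vectors: misc = 0x100, qall = 0x0ff (bits 0-7) */
	printf("misc=0x%x qall=0x%x\n", DEMO_MISC(8), DEMO_QALL(8));
	return 0;
}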
diff --git a/drivers/net/hamradio/baycom_epp.c b/drivers/net/hamradio/baycom_epp.c
index bd3b0c2655a2..83ff882f5d97 100644
--- a/drivers/net/hamradio/baycom_epp.c
+++ b/drivers/net/hamradio/baycom_epp.c
@@ -623,16 +623,10 @@ static int receive(struct net_device *dev, int cnt)
/* --------------------------------------------------------------------- */
-#if defined(__i386__) && !defined(CONFIG_UML)
-#include <asm/msr.h>
#define GETTICK(x) \
({ \
- if (boot_cpu_has(X86_FEATURE_TSC)) \
- x = (unsigned int)rdtsc(); \
+ x = (unsigned int)get_cycles(); \
})
-#else /* __i386__ && !CONFIG_UML */
-#define GETTICK(x)
-#endif /* __i386__ && !CONFIG_UML */
static void epp_bh(struct work_struct *work)
{
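The baycom change swaps an x86-only rdtsc (guarded by a three-line ifdef) for get_cycles(), which every architecture defines — returning its fast cycle counter where one exists and 0 otherwise — so the tick macro works, or degrades gracefully, everywhere. The resulting pattern, roughly:

#include <linux/timex.h>	/* get_cycles() */

/* arch-neutral "cheap timestamp": no CPU-feature checks, no #ifdefs */
static inline unsigned int demo_gettick(void)
{
	return (unsigned int)get_cycles();
}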
diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index 79f4e13620a4..da737d959e81 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -851,6 +851,7 @@ static void netvsc_send_completion(struct net_device *ndev,
u32 msglen = hv_pkt_datalen(desc);
struct nvsp_message *pkt_rqst;
u64 cmd_rqst;
+ u32 status;
/* First check if this is a VMBUS completion without data payload */
if (!msglen) {
@@ -922,6 +923,23 @@ static void netvsc_send_completion(struct net_device *ndev,
break;
case NVSP_MSG1_TYPE_SEND_RNDIS_PKT_COMPLETE:
+ if (msglen < sizeof(struct nvsp_message_header) +
+ sizeof(struct nvsp_1_message_send_rndis_packet_complete)) {
+ if (net_ratelimit())
+ netdev_err(ndev, "nvsp_rndis_pkt_complete length too small: %u\n",
+ msglen);
+ return;
+ }
+
+ /* If status indicates an error, output a message so we know
+ * there's a problem. But process the completion anyway so the
+ * resources are released.
+ */
+ status = nvsp_packet->msg.v1_msg.send_rndis_pkt_complete.status;
+ if (status != NVSP_STAT_SUCCESS && net_ratelimit())
+ netdev_err(ndev, "nvsp_rndis_pkt_complete error status: %x\n",
+ status);
+
netvsc_send_tx_complete(ndev, net_device, incoming_channel,
desc, budget);
break;
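The netvsc hunk adds the standard two defenses for host-supplied messages: bound-check the payload length before dereferencing the completion body, and wrap the error print in net_ratelimit() so a stream of malformed completions cannot flood the log. A generic sketch of that shape (demo names, not the netvsc code):

static bool demo_msg_len_ok(struct net_device *ndev, u32 msglen, size_t need)
{
	if (msglen >= need)
		return true;		/* safe to touch the payload */

	if (net_ratelimit())		/* throttle a possible message flood */
		netdev_err(ndev, "message too short: %u < %zu\n",
			   msglen, need);
	return false;
}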
diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
index 42162167b5b4..0103ff914024 100644
--- a/drivers/net/hyperv/netvsc_drv.c
+++ b/drivers/net/hyperv/netvsc_drv.c
@@ -2559,6 +2559,9 @@ static int netvsc_probe(struct hv_device *dev,
netdev_lockdep_set_classes(net);
+ net->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT;
+
/* MTU range: 68 - 1500 or 65521 */
net->min_mtu = NETVSC_MTU_MIN;
if (nvdev->nvsp_version >= NVSP_PROTOCOL_VERSION_2)
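Setting net->xdp_features is how a driver now advertises which XDP operations it implements; netvsc claims basic program execution, redirect, and acting as an ndo_xdp_xmit target. As an illustration only, a driver with a more limited datapath would advertise less:

	/* RX-path XDP only: no redirect, not a remote-TX target */
	net->xdp_features = NETDEV_XDP_ACT_BASIC;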
diff --git a/drivers/net/ieee802154/at86rf230.c b/drivers/net/ieee802154/at86rf230.c
index 15f283b26721..62b984f84d9f 100644
--- a/drivers/net/ieee802154/at86rf230.c
+++ b/drivers/net/ieee802154/at86rf230.c
@@ -17,8 +17,8 @@
#include <linux/irq.h>
#include <linux/gpio.h>
#include <linux/delay.h>
+#include <linux/property.h>
#include <linux/spi/spi.h>
-#include <linux/spi/at86rf230.h>
#include <linux/regmap.h>
#include <linux/skbuff.h>
#include <linux/of_gpio.h>
@@ -82,7 +82,7 @@ struct at86rf230_local {
struct ieee802154_hw *hw;
struct at86rf2xx_chip_data *data;
struct regmap *regmap;
- int slp_tr;
+ struct gpio_desc *slp_tr;
bool sleep;
struct completion state_complete;
@@ -107,8 +107,8 @@ at86rf230_async_state_change(struct at86rf230_local *lp,
static inline void
at86rf230_sleep(struct at86rf230_local *lp)
{
- if (gpio_is_valid(lp->slp_tr)) {
- gpio_set_value(lp->slp_tr, 1);
+ if (lp->slp_tr) {
+ gpiod_set_value(lp->slp_tr, 1);
usleep_range(lp->data->t_off_to_sleep,
lp->data->t_off_to_sleep + 10);
lp->sleep = true;
@@ -118,8 +118,8 @@ at86rf230_sleep(struct at86rf230_local *lp)
static inline void
at86rf230_awake(struct at86rf230_local *lp)
{
- if (gpio_is_valid(lp->slp_tr)) {
- gpio_set_value(lp->slp_tr, 0);
+ if (lp->slp_tr) {
+ gpiod_set_value(lp->slp_tr, 0);
usleep_range(lp->data->t_sleep_to_off,
lp->data->t_sleep_to_off + 100);
lp->sleep = false;
@@ -204,9 +204,9 @@ at86rf230_write_subreg(struct at86rf230_local *lp,
static inline void
at86rf230_slp_tr_rising_edge(struct at86rf230_local *lp)
{
- gpio_set_value(lp->slp_tr, 1);
+ gpiod_set_value(lp->slp_tr, 1);
udelay(1);
- gpio_set_value(lp->slp_tr, 0);
+ gpiod_set_value(lp->slp_tr, 0);
}
static bool
@@ -819,7 +819,7 @@ at86rf230_write_frame_complete(void *context)
ctx->trx.len = 2;
- if (gpio_is_valid(lp->slp_tr))
+ if (lp->slp_tr)
at86rf230_slp_tr_rising_edge(lp);
else
at86rf230_async_write_reg(lp, RG_TRX_STATE, STATE_BUSY_TX, ctx,
@@ -1416,32 +1416,6 @@ static int at86rf230_hw_init(struct at86rf230_local *lp, u8 xtal_trim)
}
static int
-at86rf230_get_pdata(struct spi_device *spi, int *rstn, int *slp_tr,
- u8 *xtal_trim)
-{
- struct at86rf230_platform_data *pdata = spi->dev.platform_data;
- int ret;
-
- if (!IS_ENABLED(CONFIG_OF) || !spi->dev.of_node) {
- if (!pdata)
- return -ENOENT;
-
- *rstn = pdata->rstn;
- *slp_tr = pdata->slp_tr;
- *xtal_trim = pdata->xtal_trim;
- return 0;
- }
-
- *rstn = of_get_named_gpio(spi->dev.of_node, "reset-gpio", 0);
- *slp_tr = of_get_named_gpio(spi->dev.of_node, "sleep-gpio", 0);
- ret = of_property_read_u8(spi->dev.of_node, "xtal-trim", xtal_trim);
- if (ret < 0 && ret != -EINVAL)
- return ret;
-
- return 0;
-}
-
-static int
at86rf230_detect_device(struct at86rf230_local *lp)
{
unsigned int part, version, val;
@@ -1546,41 +1520,47 @@ static int at86rf230_probe(struct spi_device *spi)
{
struct ieee802154_hw *hw;
struct at86rf230_local *lp;
+ struct gpio_desc *slp_tr;
+ struct gpio_desc *rstn;
unsigned int status;
- int rc, irq_type, rstn, slp_tr;
- u8 xtal_trim = 0;
+ int rc, irq_type;
+ u8 xtal_trim;
if (!spi->irq) {
dev_err(&spi->dev, "no IRQ specified\n");
return -EINVAL;
}
- rc = at86rf230_get_pdata(spi, &rstn, &slp_tr, &xtal_trim);
+ rc = device_property_read_u8(&spi->dev, "xtal-trim", &xtal_trim);
if (rc < 0) {
- dev_err(&spi->dev, "failed to parse platform_data: %d\n", rc);
- return rc;
- }
-
- if (gpio_is_valid(rstn)) {
- rc = devm_gpio_request_one(&spi->dev, rstn,
- GPIOF_OUT_INIT_HIGH, "rstn");
- if (rc)
+ if (rc != -EINVAL) {
+ dev_err(&spi->dev,
+ "failed to parse xtal-trim: %d\n", rc);
return rc;
+ }
+ xtal_trim = 0;
}
- if (gpio_is_valid(slp_tr)) {
- rc = devm_gpio_request_one(&spi->dev, slp_tr,
- GPIOF_OUT_INIT_LOW, "slp_tr");
- if (rc)
- return rc;
- }
+ rstn = devm_gpiod_get_optional(&spi->dev, "reset", GPIOD_OUT_LOW);
+ rc = PTR_ERR_OR_ZERO(rstn);
+ if (rc)
+ return rc;
+
+ gpiod_set_consumer_name(rstn, "rstn");
+
+ slp_tr = devm_gpiod_get_optional(&spi->dev, "sleep", GPIOD_OUT_LOW);
+ rc = PTR_ERR_OR_ZERO(slp_tr);
+ if (rc)
+ return rc;
+
+ gpiod_set_consumer_name(slp_tr, "slp_tr");
/* Reset */
- if (gpio_is_valid(rstn)) {
+ if (rstn) {
udelay(1);
- gpio_set_value_cansleep(rstn, 0);
+ gpiod_set_value_cansleep(rstn, 1);
udelay(1);
- gpio_set_value_cansleep(rstn, 1);
+ gpiod_set_value_cansleep(rstn, 0);
usleep_range(120, 240);
}
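Two details of the gpiod conversion are easy to miss. First, devm_gpiod_get_optional() returns NULL (not an error) when the line is absent, which is why plain if (lp->slp_tr) replaces gpio_is_valid(). Second, descriptor values are logical — 1 means "asserted", with any active-low polarity applied by gpiolib from the firmware description — which is why the reset pulse's literal 0/1 values flip in this patch. A sketch of the combined pattern, under those assumptions:

#include <linux/delay.h>
#include <linux/err.h>
#include <linux/gpio/consumer.h>

static int demo_pulse_reset(struct device *dev)
{
	struct gpio_desc *rstn;

	/* NULL if the firmware describes no reset line; an error only
	 * on a real failure (e.g. the line is already claimed)
	 */
	rstn = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
	if (IS_ERR(rstn))
		return PTR_ERR(rstn);

	if (rstn) {
		gpiod_set_value_cansleep(rstn, 1);	/* assert reset */
		udelay(1);
		gpiod_set_value_cansleep(rstn, 0);	/* release it */
	}
	return 0;
}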
diff --git a/drivers/net/ieee802154/cc2520.c b/drivers/net/ieee802154/cc2520.c
index edc769daad07..a94d8dd71aad 100644
--- a/drivers/net/ieee802154/cc2520.c
+++ b/drivers/net/ieee802154/cc2520.c
@@ -7,14 +7,13 @@
*/
#include <linux/kernel.h>
#include <linux/module.h>
-#include <linux/gpio.h>
+#include <linux/gpio/consumer.h>
#include <linux/delay.h>
#include <linux/spi/spi.h>
-#include <linux/spi/cc2520.h>
+#include <linux/property.h>
#include <linux/workqueue.h>
#include <linux/interrupt.h>
#include <linux/skbuff.h>
-#include <linux/of_gpio.h>
#include <linux/ieee802154.h>
#include <linux/crc-ccitt.h>
#include <asm/unaligned.h>
@@ -206,7 +205,7 @@ struct cc2520_private {
struct mutex buffer_mutex; /* SPI buffer mutex */
bool is_tx; /* Flag for sync b/w Tx and Rx */
bool amplified; /* Flag for CC2591 */
- int fifo_pin; /* FIFO GPIO pin number */
+ struct gpio_desc *fifo_pin; /* FIFO GPIO pin number */
struct work_struct fifop_irqwork;/* Workqueue for FIFOP */
spinlock_t lock; /* Lock for is_tx*/
struct completion tx_complete; /* Work completion for Tx */
@@ -875,7 +874,7 @@ static void cc2520_fifop_irqwork(struct work_struct *work)
dev_dbg(&priv->spi->dev, "fifop interrupt received\n");
- if (gpio_get_value(priv->fifo_pin))
+ if (gpiod_get_value(priv->fifo_pin))
cc2520_rx(priv);
else
dev_dbg(&priv->spi->dev, "rxfifo overflow\n");
@@ -912,49 +911,11 @@ static irqreturn_t cc2520_sfd_isr(int irq, void *data)
return IRQ_HANDLED;
}
-static int cc2520_get_platform_data(struct spi_device *spi,
- struct cc2520_platform_data *pdata)
-{
- struct device_node *np = spi->dev.of_node;
- struct cc2520_private *priv = spi_get_drvdata(spi);
-
- if (!np) {
- struct cc2520_platform_data *spi_pdata = spi->dev.platform_data;
-
- if (!spi_pdata)
- return -ENOENT;
- *pdata = *spi_pdata;
- priv->fifo_pin = pdata->fifo;
- return 0;
- }
-
- pdata->fifo = of_get_named_gpio(np, "fifo-gpio", 0);
- priv->fifo_pin = pdata->fifo;
-
- pdata->fifop = of_get_named_gpio(np, "fifop-gpio", 0);
-
- pdata->sfd = of_get_named_gpio(np, "sfd-gpio", 0);
- pdata->cca = of_get_named_gpio(np, "cca-gpio", 0);
- pdata->vreg = of_get_named_gpio(np, "vreg-gpio", 0);
- pdata->reset = of_get_named_gpio(np, "reset-gpio", 0);
-
- /* CC2591 front end for CC2520 */
- if (of_property_read_bool(np, "amplified"))
- priv->amplified = true;
-
- return 0;
-}
-
static int cc2520_hw_init(struct cc2520_private *priv)
{
u8 status = 0, state = 0xff;
int ret;
int timeout = 100;
- struct cc2520_platform_data pdata;
-
- ret = cc2520_get_platform_data(priv->spi, &pdata);
- if (ret)
- goto err_ret;
ret = cc2520_read_register(priv, CC2520_FSMSTAT1, &state);
if (ret)
@@ -1071,7 +1032,11 @@ err_ret:
static int cc2520_probe(struct spi_device *spi)
{
struct cc2520_private *priv;
- struct cc2520_platform_data pdata;
+ struct gpio_desc *fifop;
+ struct gpio_desc *cca;
+ struct gpio_desc *sfd;
+ struct gpio_desc *reset;
+ struct gpio_desc *vreg;
int ret;
priv = devm_kzalloc(&spi->dev, sizeof(*priv), GFP_KERNEL);
@@ -1080,11 +1045,11 @@ static int cc2520_probe(struct spi_device *spi)
spi_set_drvdata(spi, priv);
- ret = cc2520_get_platform_data(spi, &pdata);
- if (ret < 0) {
- dev_err(&spi->dev, "no platform data\n");
- return -EINVAL;
- }
+ /* CC2591 front end for CC2520 */
+ /* Assumption that CC2591 is not connected */
+ priv->amplified = false;
+ if (device_property_read_bool(&spi->dev, "amplified"))
+ priv->amplified = true;
priv->spi = spi;
@@ -1098,80 +1063,53 @@ static int cc2520_probe(struct spi_device *spi)
spin_lock_init(&priv->lock);
init_completion(&priv->tx_complete);
- /* Assumption that CC2591 is not connected */
- priv->amplified = false;
-
/* Request all the gpio's */
- if (!gpio_is_valid(pdata.fifo)) {
+ priv->fifo_pin = devm_gpiod_get(&spi->dev, "fifo", GPIOD_IN);
+ if (IS_ERR(priv->fifo_pin)) {
dev_err(&spi->dev, "fifo gpio is not valid\n");
- ret = -EINVAL;
+ ret = PTR_ERR(priv->fifo_pin);
goto err_hw_init;
}
- ret = devm_gpio_request_one(&spi->dev, pdata.fifo,
- GPIOF_IN, "fifo");
- if (ret)
- goto err_hw_init;
-
- if (!gpio_is_valid(pdata.cca)) {
+ cca = devm_gpiod_get(&spi->dev, "cca", GPIOD_IN);
+ if (IS_ERR(cca)) {
dev_err(&spi->dev, "cca gpio is not valid\n");
- ret = -EINVAL;
+ ret = PTR_ERR(cca);
goto err_hw_init;
}
- ret = devm_gpio_request_one(&spi->dev, pdata.cca,
- GPIOF_IN, "cca");
- if (ret)
- goto err_hw_init;
-
- if (!gpio_is_valid(pdata.fifop)) {
+ fifop = devm_gpiod_get(&spi->dev, "fifop", GPIOD_IN);
+ if (IS_ERR(fifop)) {
dev_err(&spi->dev, "fifop gpio is not valid\n");
- ret = -EINVAL;
+ ret = PTR_ERR(fifop);
goto err_hw_init;
}
- ret = devm_gpio_request_one(&spi->dev, pdata.fifop,
- GPIOF_IN, "fifop");
- if (ret)
- goto err_hw_init;
-
- if (!gpio_is_valid(pdata.sfd)) {
+ sfd = devm_gpiod_get(&spi->dev, "sfd", GPIOD_IN);
+ if (IS_ERR(sfd)) {
dev_err(&spi->dev, "sfd gpio is not valid\n");
- ret = -EINVAL;
+ ret = PTR_ERR(sfd);
goto err_hw_init;
}
- ret = devm_gpio_request_one(&spi->dev, pdata.sfd,
- GPIOF_IN, "sfd");
- if (ret)
- goto err_hw_init;
-
- if (!gpio_is_valid(pdata.reset)) {
+ reset = devm_gpiod_get(&spi->dev, "reset", GPIOD_OUT_LOW);
+ if (IS_ERR(reset)) {
dev_err(&spi->dev, "reset gpio is not valid\n");
- ret = -EINVAL;
+ ret = PTR_ERR(reset);
goto err_hw_init;
}
- ret = devm_gpio_request_one(&spi->dev, pdata.reset,
- GPIOF_OUT_INIT_LOW, "reset");
- if (ret)
- goto err_hw_init;
-
- if (!gpio_is_valid(pdata.vreg)) {
+ vreg = devm_gpiod_get(&spi->dev, "vreg", GPIOD_OUT_LOW);
+ if (IS_ERR(vreg)) {
dev_err(&spi->dev, "vreg gpio is not valid\n");
- ret = -EINVAL;
+ ret = PTR_ERR(vreg);
goto err_hw_init;
}
- ret = devm_gpio_request_one(&spi->dev, pdata.vreg,
- GPIOF_OUT_INIT_LOW, "vreg");
- if (ret)
- goto err_hw_init;
-
- gpio_set_value(pdata.vreg, HIGH);
+ gpiod_set_value(vreg, HIGH);
usleep_range(100, 150);
- gpio_set_value(pdata.reset, HIGH);
+ gpiod_set_value(reset, HIGH);
usleep_range(200, 250);
ret = cc2520_hw_init(priv);
@@ -1180,7 +1118,7 @@ static int cc2520_probe(struct spi_device *spi)
/* Set up fifop interrupt */
ret = devm_request_irq(&spi->dev,
- gpio_to_irq(pdata.fifop),
+ gpiod_to_irq(fifop),
cc2520_fifop_isr,
IRQF_TRIGGER_RISING,
dev_name(&spi->dev),
@@ -1192,7 +1130,7 @@ static int cc2520_probe(struct spi_device *spi)
/* Set up sfd interrupt */
ret = devm_request_irq(&spi->dev,
- gpio_to_irq(pdata.sfd),
+ gpiod_to_irq(sfd),
cc2520_sfd_isr,
IRQF_TRIGGER_FALLING,
dev_name(&spi->dev),
@@ -1241,7 +1179,7 @@ MODULE_DEVICE_TABLE(of, cc2520_of_ids);
static struct spi_driver cc2520_driver = {
.driver = {
.name = "cc2520",
- .of_match_table = of_match_ptr(cc2520_of_ids),
+ .of_match_table = cc2520_of_ids,
},
.id_table = cc2520_ids,
.probe = cc2520_probe,
diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
index 8cdcaaf58ae3..cba199422f47 100644
--- a/drivers/net/ipa/Makefile
+++ b/drivers/net/ipa/Makefile
@@ -4,15 +4,20 @@
IPA_VERSIONS := 3.1 3.5.1 4.2 4.5 4.7 4.9 4.11
+# Some IPA versions can reuse another set of GSI register definitions.
+GSI_IPA_VERSIONS := 3.1 3.5.1 4.0 4.5 4.9 4.11
+
obj-$(CONFIG_QCOM_IPA) += ipa.o
ipa-y := ipa_main.o ipa_power.o ipa_reg.o ipa_mem.o \
- ipa_table.o ipa_interrupt.o gsi.o gsi_trans.o \
- ipa_gsi.o ipa_smp2p.o ipa_uc.o \
+ ipa_table.o ipa_interrupt.o gsi.o gsi_reg.o \
+ gsi_trans.o ipa_gsi.o ipa_smp2p.o ipa_uc.o \
ipa_endpoint.o ipa_cmd.o ipa_modem.o \
ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
ipa_sysfs.o
+ipa-y += $(GSI_IPA_VERSIONS:%=reg/gsi_reg-v%.o)
+
ipa-y += $(IPA_VERSIONS:%=reg/ipa_reg-v%.o)
ipa-y += $(IPA_VERSIONS:%=data/ipa_data-v%.o)
diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
index bea2da1c4c51..9a0b1fe4a93a 100644
--- a/drivers/net/ipa/gsi.c
+++ b/drivers/net/ipa/gsi.c
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
- * Copyright (C) 2018-2022 Linaro Ltd.
+ * Copyright (C) 2018-2023 Linaro Ltd.
*/
#include <linux/types.h>
@@ -16,6 +16,7 @@
#include <linux/netdevice.h>
#include "gsi.h"
+#include "reg.h"
#include "gsi_reg.h"
#include "gsi_private.h"
#include "gsi_trans.h"
@@ -162,12 +163,6 @@ static void gsi_validate_build(void)
* ensure the elements themselves meet the requirement.
*/
BUILD_BUG_ON(!is_power_of_2(GSI_RING_ELEMENT_SIZE));
-
- /* The channel element size must fit in this field */
- BUILD_BUG_ON(GSI_RING_ELEMENT_SIZE > field_max(ELEMENT_SIZE_FMASK));
-
- /* The event ring element size must fit in this field */
- BUILD_BUG_ON(GSI_RING_ELEMENT_SIZE > field_max(EV_ELEMENT_SIZE_FMASK));
}
/* Return the channel id associated with a given channel */
@@ -182,21 +177,39 @@ static bool gsi_channel_initialized(struct gsi_channel *channel)
return !!channel->gsi;
}
+/* Encode the channel protocol for the CH_C_CNTXT_0 register */
+static u32 ch_c_cntxt_0_type_encode(enum ipa_version version,
+ const struct reg *reg,
+ enum gsi_channel_type type)
+{
+ u32 val;
+
+ val = reg_encode(reg, CHTYPE_PROTOCOL, type);
+ if (version < IPA_VERSION_4_5 || version >= IPA_VERSION_5_0)
+ return val;
+
+ type >>= hweight32(reg_fmask(reg, CHTYPE_PROTOCOL));
+
+ return val | reg_encode(reg, CHTYPE_PROTOCOL_MSB, type);
+}
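On IPA v4.5 and later (below v5.0) the channel-protocol value grew wider than the CHTYPE_PROTOCOL field, so the helper stores the low bits there and shifts the value right by the field's width — computed as hweight32() of its mask — before encoding the remainder into CHTYPE_PROTOCOL_MSB. A stand-alone walk-through of that split, with an illustrative 4-bit field (the real masks live in the reg tables):

#include <stdio.h>

int main(void)
{
	unsigned int fmask = 0xf;	/* pretend PROTOCOL field: bits 0-3 */
	unsigned int type = 0x13;	/* a 5-bit protocol value */

	unsigned int low = type & fmask;			/* 0x3 */
	unsigned int msb = type >> __builtin_popcount(fmask);	/* 0x1 */

	/* low goes into CHTYPE_PROTOCOL, msb into CHTYPE_PROTOCOL_MSB */
	printf("low=0x%x msb=0x%x\n", low, msb);
	return 0;
}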
+
/* Update the GSI IRQ type register with the cached value */
static void gsi_irq_type_update(struct gsi *gsi, u32 val)
{
+ const struct reg *reg = gsi_reg(gsi, CNTXT_TYPE_IRQ_MSK);
+
gsi->type_enabled_bitmap = val;
- iowrite32(val, gsi->virt + GSI_CNTXT_TYPE_IRQ_MSK_OFFSET);
+ iowrite32(val, gsi->virt + reg_offset(reg));
}
static void gsi_irq_type_enable(struct gsi *gsi, enum gsi_irq_type_id type_id)
{
- gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | BIT(type_id));
+ gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | type_id);
}
static void gsi_irq_type_disable(struct gsi *gsi, enum gsi_irq_type_id type_id)
{
- gsi_irq_type_update(gsi, gsi->type_enabled_bitmap & ~BIT(type_id));
+ gsi_irq_type_update(gsi, gsi->type_enabled_bitmap & ~type_id);
}
/* Event ring commands are performed one at a time. Their completion
@@ -207,22 +220,29 @@ static void gsi_irq_type_disable(struct gsi *gsi, enum gsi_irq_type_id type_id)
static void gsi_irq_ev_ctrl_enable(struct gsi *gsi, u32 evt_ring_id)
{
u32 val = BIT(evt_ring_id);
+ const struct reg *reg;
/* There's a small chance that a previous command completed
* after the interrupt was disabled, so make sure we have no
* pending interrupts before we enable them.
*/
- iowrite32(~0, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_SRC_EV_CH_IRQ_CLR);
+ iowrite32(~0, gsi->virt + reg_offset(reg));
- iowrite32(val, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_SRC_EV_CH_IRQ_MSK);
+ iowrite32(val, gsi->virt + reg_offset(reg));
gsi_irq_type_enable(gsi, GSI_EV_CTRL);
}
/* Disable event ring control interrupts */
static void gsi_irq_ev_ctrl_disable(struct gsi *gsi)
{
+ const struct reg *reg;
+
gsi_irq_type_disable(gsi, GSI_EV_CTRL);
- iowrite32(0, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET);
+
+ reg = gsi_reg(gsi, CNTXT_SRC_EV_CH_IRQ_MSK);
+ iowrite32(0, gsi->virt + reg_offset(reg));
}
/* Channel commands are performed one at a time. Their completion is
@@ -233,32 +253,43 @@ static void gsi_irq_ev_ctrl_disable(struct gsi *gsi)
static void gsi_irq_ch_ctrl_enable(struct gsi *gsi, u32 channel_id)
{
u32 val = BIT(channel_id);
+ const struct reg *reg;
/* There's a small chance that a previous command completed
* after the interrupt was disabled, so make sure we have no
* pending interrupts before we enable them.
*/
- iowrite32(~0, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_CLR_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_SRC_CH_IRQ_CLR);
+ iowrite32(~0, gsi->virt + reg_offset(reg));
+
+ reg = gsi_reg(gsi, CNTXT_SRC_CH_IRQ_MSK);
+ iowrite32(val, gsi->virt + reg_offset(reg));
- iowrite32(val, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET);
gsi_irq_type_enable(gsi, GSI_CH_CTRL);
}
/* Disable channel control interrupts */
static void gsi_irq_ch_ctrl_disable(struct gsi *gsi)
{
+ const struct reg *reg;
+
gsi_irq_type_disable(gsi, GSI_CH_CTRL);
- iowrite32(0, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET);
+
+ reg = gsi_reg(gsi, CNTXT_SRC_CH_IRQ_MSK);
+ iowrite32(0, gsi->virt + reg_offset(reg));
}
static void gsi_irq_ieob_enable_one(struct gsi *gsi, u32 evt_ring_id)
{
bool enable_ieob = !gsi->ieob_enabled_bitmap;
+ const struct reg *reg;
u32 val;
gsi->ieob_enabled_bitmap |= BIT(evt_ring_id);
+
+ reg = gsi_reg(gsi, CNTXT_SRC_IEOB_IRQ_MSK);
val = gsi->ieob_enabled_bitmap;
- iowrite32(val, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET);
+ iowrite32(val, gsi->virt + reg_offset(reg));
/* Enable the interrupt type if this is the first channel enabled */
if (enable_ieob)
@@ -267,6 +298,7 @@ static void gsi_irq_ieob_enable_one(struct gsi *gsi, u32 evt_ring_id)
static void gsi_irq_ieob_disable(struct gsi *gsi, u32 event_mask)
{
+ const struct reg *reg;
u32 val;
gsi->ieob_enabled_bitmap &= ~event_mask;
@@ -275,8 +307,9 @@ static void gsi_irq_ieob_disable(struct gsi *gsi, u32 event_mask)
if (!gsi->ieob_enabled_bitmap)
gsi_irq_type_disable(gsi, GSI_IEOB);
+ reg = gsi_reg(gsi, CNTXT_SRC_IEOB_IRQ_MSK);
val = gsi->ieob_enabled_bitmap;
- iowrite32(val, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET);
+ iowrite32(val, gsi->virt + reg_offset(reg));
}
static void gsi_irq_ieob_disable_one(struct gsi *gsi, u32 evt_ring_id)
@@ -287,34 +320,44 @@ static void gsi_irq_ieob_disable_one(struct gsi *gsi, u32 evt_ring_id)
/* Enable all GSI interrupt types */
static void gsi_irq_enable(struct gsi *gsi)
{
+ const struct reg *reg;
u32 val;
/* Global interrupts include hardware error reports. Enable
* that so we can at least report the error should it occur.
*/
- iowrite32(BIT(ERROR_INT), gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET);
- gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | BIT(GSI_GLOB_EE));
+ reg = gsi_reg(gsi, CNTXT_GLOB_IRQ_EN);
+ iowrite32(ERROR_INT, gsi->virt + reg_offset(reg));
+
+ gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | GSI_GLOB_EE);
/* General GSI interrupts are reported to all EEs; if they occur
* they are unrecoverable (without reset). A breakpoint interrupt
* also exists, but we don't support that. We want to be notified
* of errors so we can report them, even if they can't be handled.
*/
- val = BIT(BUS_ERROR);
- val |= BIT(CMD_FIFO_OVRFLOW);
- val |= BIT(MCS_STACK_OVRFLOW);
- iowrite32(val, gsi->virt + GSI_CNTXT_GSI_IRQ_EN_OFFSET);
- gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | BIT(GSI_GENERAL));
+ reg = gsi_reg(gsi, CNTXT_GSI_IRQ_EN);
+ val = BUS_ERROR;
+ val |= CMD_FIFO_OVRFLOW;
+ val |= MCS_STACK_OVRFLOW;
+ iowrite32(val, gsi->virt + reg_offset(reg));
+
+ gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | GSI_GENERAL);
}
/* Disable all GSI interrupt types */
static void gsi_irq_disable(struct gsi *gsi)
{
+ const struct reg *reg;
+
gsi_irq_type_update(gsi, 0);
/* Clear the type-specific interrupt masks set by gsi_irq_enable() */
- iowrite32(0, gsi->virt + GSI_CNTXT_GSI_IRQ_EN_OFFSET);
- iowrite32(0, gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_GSI_IRQ_EN);
+ iowrite32(0, gsi->virt + reg_offset(reg));
+
+ reg = gsi_reg(gsi, CNTXT_GLOB_IRQ_EN);
+ iowrite32(0, gsi->virt + reg_offset(reg));
}
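All of these hunks follow one mechanical recipe: the compile-time GSI_*_OFFSET constants become a runtime lookup — gsi_reg() returns a descriptor, and reg_offset()/reg_n_offset()/reg_encode()/reg_decode() derive offsets and field values from it — so a single gsi.c can drive several IPA register layouts chosen at probe time. A minimal toy of that table-driven idea (names invented, not the real struct reg):

struct demo_reg {
	unsigned int offset;		/* register offset in this layout */
	unsigned int fmask;		/* mask of one named field */
};

/* one table per hardware generation, selected at probe time */
static const struct demo_reg demo_layout_v1 = {
	.offset = 0x1000,
	.fmask = 0xff,
};

static unsigned int demo_encode(const struct demo_reg *reg, unsigned int val)
{
	/* place val at the field's least-significant bit, clamp to mask */
	return (val << __builtin_ctz(reg->fmask)) & reg->fmask;
}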
/* Return the virtual address associated with a ring index */
@@ -356,11 +399,12 @@ static bool gsi_command(struct gsi *gsi, u32 reg, u32 val)
static enum gsi_evt_ring_state
gsi_evt_ring_state(struct gsi *gsi, u32 evt_ring_id)
{
+ const struct reg *reg = gsi_reg(gsi, EV_CH_E_CNTXT_0);
u32 val;
- val = ioread32(gsi->virt + GSI_EV_CH_E_CNTXT_0_OFFSET(evt_ring_id));
+ val = ioread32(gsi->virt + reg_n_offset(reg, evt_ring_id));
- return u32_get_bits(val, EV_CHSTATE_FMASK);
+ return reg_decode(reg, EV_CHSTATE, val);
}
/* Issue an event ring command and wait for it to complete */
@@ -368,16 +412,18 @@ static void gsi_evt_ring_command(struct gsi *gsi, u32 evt_ring_id,
enum gsi_evt_cmd_opcode opcode)
{
struct device *dev = gsi->dev;
+ const struct reg *reg;
bool timeout;
u32 val;
/* Enable the completion interrupt for the command */
gsi_irq_ev_ctrl_enable(gsi, evt_ring_id);
- val = u32_encode_bits(evt_ring_id, EV_CHID_FMASK);
- val |= u32_encode_bits(opcode, EV_OPCODE_FMASK);
+ reg = gsi_reg(gsi, EV_CH_CMD);
+ val = reg_encode(reg, EV_CHID, evt_ring_id);
+ val |= reg_encode(reg, EV_OPCODE, opcode);
- timeout = !gsi_command(gsi, GSI_EV_CH_CMD_OFFSET, val);
+ timeout = !gsi_command(gsi, reg_offset(reg), val);
gsi_irq_ev_ctrl_disable(gsi);
@@ -464,13 +510,16 @@ static void gsi_evt_ring_de_alloc_command(struct gsi *gsi, u32 evt_ring_id)
/* Fetch the current state of a channel from hardware */
static enum gsi_channel_state gsi_channel_state(struct gsi_channel *channel)
{
+ const struct reg *reg;
u32 channel_id = gsi_channel_id(channel);
- void __iomem *virt = channel->gsi->virt;
+ struct gsi *gsi = channel->gsi;
+ void __iomem *virt = gsi->virt;
u32 val;
- val = ioread32(virt + GSI_CH_C_CNTXT_0_OFFSET(channel_id));
+ reg = gsi_reg(gsi, CH_C_CNTXT_0);
+ val = ioread32(virt + reg_n_offset(reg, channel_id));
- return u32_get_bits(val, CHSTATE_FMASK);
+ return reg_decode(reg, CHSTATE, val);
}
/* Issue a channel command and wait for it to complete */
@@ -480,15 +529,18 @@ gsi_channel_command(struct gsi_channel *channel, enum gsi_ch_cmd_opcode opcode)
u32 channel_id = gsi_channel_id(channel);
struct gsi *gsi = channel->gsi;
struct device *dev = gsi->dev;
+ const struct reg *reg;
bool timeout;
u32 val;
/* Enable the completion interrupt for the command */
gsi_irq_ch_ctrl_enable(gsi, channel_id);
- val = u32_encode_bits(channel_id, CH_CHID_FMASK);
- val |= u32_encode_bits(opcode, CH_OPCODE_FMASK);
- timeout = !gsi_command(gsi, GSI_CH_CMD_OFFSET, val);
+ reg = gsi_reg(gsi, CH_CMD);
+ val = reg_encode(reg, CH_CHID, channel_id);
+ val |= reg_encode(reg, CH_OPCODE, opcode);
+
+ timeout = !gsi_command(gsi, reg_offset(reg), val);
gsi_irq_ch_ctrl_disable(gsi);
@@ -651,6 +703,7 @@ static void gsi_channel_de_alloc_command(struct gsi *gsi, u32 channel_id)
*/
static void gsi_evt_ring_doorbell(struct gsi *gsi, u32 evt_ring_id, u32 index)
{
+ const struct reg *reg = gsi_reg(gsi, EV_CH_E_DOORBELL_0);
struct gsi_ring *ring = &gsi->evt_ring[evt_ring_id].ring;
u32 val;
@@ -658,7 +711,7 @@ static void gsi_evt_ring_doorbell(struct gsi *gsi, u32 evt_ring_id, u32 index)
/* Note: index *must* be used modulo the ring count here */
val = gsi_ring_addr(ring, (index - 1) % ring->count);
- iowrite32(val, gsi->virt + GSI_EV_CH_E_DOORBELL_0_OFFSET(evt_ring_id));
+ iowrite32(val, gsi->virt + reg_n_offset(reg, evt_ring_id));
}
/* Program an event ring for use */
@@ -666,41 +719,56 @@ static void gsi_evt_ring_program(struct gsi *gsi, u32 evt_ring_id)
{
struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
struct gsi_ring *ring = &evt_ring->ring;
- size_t size;
+ const struct reg *reg;
u32 val;
+ reg = gsi_reg(gsi, EV_CH_E_CNTXT_0);
/* We program all event rings as GPI type/protocol */
- val = u32_encode_bits(GSI_CHANNEL_TYPE_GPI, EV_CHTYPE_FMASK);
- val |= EV_INTYPE_FMASK;
- val |= u32_encode_bits(GSI_RING_ELEMENT_SIZE, EV_ELEMENT_SIZE_FMASK);
- iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_0_OFFSET(evt_ring_id));
+ val = reg_encode(reg, EV_CHTYPE, GSI_CHANNEL_TYPE_GPI);
+ /* EV_EE field is 0 (GSI_EE_AP) */
+ val |= reg_bit(reg, EV_INTYPE);
+ val |= reg_encode(reg, EV_ELEMENT_SIZE, GSI_RING_ELEMENT_SIZE);
+ iowrite32(val, gsi->virt + reg_n_offset(reg, evt_ring_id));
- size = ring->count * GSI_RING_ELEMENT_SIZE;
- val = ev_r_length_encoded(gsi->version, size);
- iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_1_OFFSET(evt_ring_id));
+ reg = gsi_reg(gsi, EV_CH_E_CNTXT_1);
+ val = reg_encode(reg, R_LENGTH, ring->count * GSI_RING_ELEMENT_SIZE);
+ iowrite32(val, gsi->virt + reg_n_offset(reg, evt_ring_id));
/* The context 2 and 3 registers store the low-order and
* high-order 32 bits of the address of the event ring,
* respectively.
*/
+ reg = gsi_reg(gsi, EV_CH_E_CNTXT_2);
val = lower_32_bits(ring->addr);
- iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_2_OFFSET(evt_ring_id));
+ iowrite32(val, gsi->virt + reg_n_offset(reg, evt_ring_id));
+
+ reg = gsi_reg(gsi, EV_CH_E_CNTXT_3);
val = upper_32_bits(ring->addr);
- iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_3_OFFSET(evt_ring_id));
+ iowrite32(val, gsi->virt + reg_n_offset(reg, evt_ring_id));
/* Enable interrupt moderation by setting the moderation delay */
- val = u32_encode_bits(GSI_EVT_RING_INT_MODT, MODT_FMASK);
- val |= u32_encode_bits(1, MODC_FMASK); /* comes from channel */
- iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_8_OFFSET(evt_ring_id));
+ reg = gsi_reg(gsi, EV_CH_E_CNTXT_8);
+ val = reg_encode(reg, EV_MODT, GSI_EVT_RING_INT_MODT);
+ val |= reg_encode(reg, EV_MODC, 1); /* comes from channel */
+ /* EV_MOD_CNT is 0 (no counter-based interrupt coalescing) */
+ iowrite32(val, gsi->virt + reg_n_offset(reg, evt_ring_id));
+
+ /* No MSI write data, and the MSI high and low addresses are 0 */
+ reg = gsi_reg(gsi, EV_CH_E_CNTXT_9);
+ iowrite32(0, gsi->virt + reg_n_offset(reg, evt_ring_id));
- /* No MSI write data, and MSI address high and low address is 0 */
- iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_9_OFFSET(evt_ring_id));
- iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_10_OFFSET(evt_ring_id));
- iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_11_OFFSET(evt_ring_id));
+ reg = gsi_reg(gsi, EV_CH_E_CNTXT_10);
+ iowrite32(0, gsi->virt + reg_n_offset(reg, evt_ring_id));
+
+ reg = gsi_reg(gsi, EV_CH_E_CNTXT_11);
+ iowrite32(0, gsi->virt + reg_n_offset(reg, evt_ring_id));
/* We don't need to get event read pointer updates */
- iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_12_OFFSET(evt_ring_id));
- iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_13_OFFSET(evt_ring_id));
+ reg = gsi_reg(gsi, EV_CH_E_CNTXT_12);
+ iowrite32(0, gsi->virt + reg_n_offset(reg, evt_ring_id));
+
+ reg = gsi_reg(gsi, EV_CH_E_CNTXT_13);
+ iowrite32(0, gsi->virt + reg_n_offset(reg, evt_ring_id));
/* Finally, tell the hardware our "last processed" event (arbitrary) */
gsi_evt_ring_doorbell(gsi, evt_ring_id, ring->index);
@@ -761,39 +829,52 @@ static void gsi_channel_program(struct gsi_channel *channel, bool doorbell)
union gsi_channel_scratch scr = { };
struct gsi_channel_scratch_gpi *gpi;
struct gsi *gsi = channel->gsi;
+ const struct reg *reg;
u32 wrr_weight = 0;
+ u32 offset;
u32 val;
+ reg = gsi_reg(gsi, CH_C_CNTXT_0);
+
/* We program all channels as GPI type/protocol */
- val = chtype_protocol_encoded(gsi->version, GSI_CHANNEL_TYPE_GPI);
+ val = ch_c_cntxt_0_type_encode(gsi->version, reg, GSI_CHANNEL_TYPE_GPI);
if (channel->toward_ipa)
- val |= CHTYPE_DIR_FMASK;
- val |= u32_encode_bits(channel->evt_ring_id, ERINDEX_FMASK);
- val |= u32_encode_bits(GSI_RING_ELEMENT_SIZE, ELEMENT_SIZE_FMASK);
- iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_0_OFFSET(channel_id));
-
- val = r_length_encoded(gsi->version, size);
- iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_1_OFFSET(channel_id));
+ val |= reg_bit(reg, CHTYPE_DIR);
+ if (gsi->version < IPA_VERSION_5_0)
+ val |= reg_encode(reg, ERINDEX, channel->evt_ring_id);
+ val |= reg_encode(reg, ELEMENT_SIZE, GSI_RING_ELEMENT_SIZE);
+ iowrite32(val, gsi->virt + reg_n_offset(reg, channel_id));
+
+ reg = gsi_reg(gsi, CH_C_CNTXT_1);
+ val = reg_encode(reg, CH_R_LENGTH, size);
+ if (gsi->version >= IPA_VERSION_5_0)
+ val |= reg_encode(reg, CH_ERINDEX, channel->evt_ring_id);
+ iowrite32(val, gsi->virt + reg_n_offset(reg, channel_id));
/* The context 2 and 3 registers store the low-order and
* high-order 32 bits of the address of the channel ring,
* respectively.
*/
+ reg = gsi_reg(gsi, CH_C_CNTXT_2);
val = lower_32_bits(channel->tre_ring.addr);
- iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_2_OFFSET(channel_id));
+ iowrite32(val, gsi->virt + reg_n_offset(reg, channel_id));
+
+ reg = gsi_reg(gsi, CH_C_CNTXT_3);
val = upper_32_bits(channel->tre_ring.addr);
- iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_3_OFFSET(channel_id));
+ iowrite32(val, gsi->virt + reg_n_offset(reg, channel_id));
+
+ reg = gsi_reg(gsi, CH_C_QOS);
/* Command channel gets low weighted round-robin priority */
if (channel->command)
- wrr_weight = field_max(WRR_WEIGHT_FMASK);
- val = u32_encode_bits(wrr_weight, WRR_WEIGHT_FMASK);
+ wrr_weight = reg_field_max(reg, WRR_WEIGHT);
+ val = reg_encode(reg, WRR_WEIGHT, wrr_weight);
/* Max prefetch is 1 segment (do not set MAX_PREFETCH_FMASK) */
/* No need to use the doorbell engine starting at IPA v4.0 */
if (gsi->version < IPA_VERSION_4_0 && doorbell)
- val |= USE_DB_ENG_FMASK;
+ val |= reg_bit(reg, USE_DB_ENG);
/* v4.0 introduces an escape buffer for prefetch. We use it
* on all but the AP command channel.
@@ -801,16 +882,15 @@ static void gsi_channel_program(struct gsi_channel *channel, bool doorbell)
if (gsi->version >= IPA_VERSION_4_0 && !channel->command) {
/* If not otherwise set, prefetch buffers are used */
if (gsi->version < IPA_VERSION_4_5)
- val |= USE_ESCAPE_BUF_ONLY_FMASK;
+ val |= reg_bit(reg, USE_ESCAPE_BUF_ONLY);
else
- val |= u32_encode_bits(GSI_ESCAPE_BUF_ONLY,
- PREFETCH_MODE_FMASK);
+ val |= reg_encode(reg, PREFETCH_MODE, ESCAPE_BUF_ONLY);
}
/* All channels set DB_IN_BYTES */
if (gsi->version >= IPA_VERSION_4_9)
- val |= DB_IN_BYTES;
+ val |= reg_bit(reg, DB_IN_BYTES);
- iowrite32(val, gsi->virt + GSI_CH_C_QOS_OFFSET(channel_id));
+ iowrite32(val, gsi->virt + reg_n_offset(reg, channel_id));
/* Now update the scratch registers for GPI protocol */
gpi = &scr.gpi;
@@ -818,22 +898,27 @@ static void gsi_channel_program(struct gsi_channel *channel, bool doorbell)
GSI_RING_ELEMENT_SIZE;
gpi->outstanding_threshold = 2 * GSI_RING_ELEMENT_SIZE;
+ reg = gsi_reg(gsi, CH_C_SCRATCH_0);
val = scr.data.word1;
- iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_0_OFFSET(channel_id));
+ iowrite32(val, gsi->virt + reg_n_offset(reg, channel_id));
+ reg = gsi_reg(gsi, CH_C_SCRATCH_1);
val = scr.data.word2;
- iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_1_OFFSET(channel_id));
+ iowrite32(val, gsi->virt + reg_n_offset(reg, channel_id));
+ reg = gsi_reg(gsi, CH_C_SCRATCH_2);
val = scr.data.word3;
- iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_2_OFFSET(channel_id));
+ iowrite32(val, gsi->virt + reg_n_offset(reg, channel_id));
/* We must preserve the upper 16 bits of the last scratch register.
* The next sequence assumes those bits remain unchanged between the
* read and the write.
*/
- val = ioread32(gsi->virt + GSI_CH_C_SCRATCH_3_OFFSET(channel_id));
+ reg = gsi_reg(gsi, CH_C_SCRATCH_3);
+ offset = reg_n_offset(reg, channel_id);
+ val = ioread32(gsi->virt + offset);
val = (scr.data.word4 & GENMASK(31, 16)) | (val & GENMASK(15, 0));
- iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_3_OFFSET(channel_id));
+ iowrite32(val, gsi->virt + offset);
/* All done! */
}
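To make the scratch 3 merge concrete, here is a worked example with illustrative values only: the upper half of the written word comes from scr.data.word4, and the lower half read back from hardware is preserved:

	/* scr.data.word4 == 0xabcd0000, hardware reads back 0x12345678 */
	val = (0xabcd0000 & GENMASK(31, 16))	/* 0xabcd0000 */
	    | (0x12345678 & GENMASK(15, 0));	/* 0x00005678 */
	/* val == 0xabcd5678 is written back */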
@@ -1049,10 +1134,14 @@ static void gsi_trans_tx_completed(struct gsi_trans *trans)
/* Channel control interrupt handler */
static void gsi_isr_chan_ctrl(struct gsi *gsi)
{
+ const struct reg *reg;
u32 channel_mask;
- channel_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_CH_IRQ_OFFSET);
- iowrite32(channel_mask, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_CLR_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_SRC_CH_IRQ);
+ channel_mask = ioread32(gsi->virt + reg_offset(reg));
+
+ reg = gsi_reg(gsi, CNTXT_SRC_CH_IRQ_CLR);
+ iowrite32(channel_mask, gsi->virt + reg_offset(reg));
while (channel_mask) {
u32 channel_id = __ffs(channel_mask);
@@ -1066,10 +1155,14 @@ static void gsi_isr_chan_ctrl(struct gsi *gsi)
/* Event ring control interrupt handler */
static void gsi_isr_evt_ctrl(struct gsi *gsi)
{
+ const struct reg *reg;
u32 event_mask;
- event_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_OFFSET);
- iowrite32(event_mask, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_SRC_EV_CH_IRQ);
+ event_mask = ioread32(gsi->virt + reg_offset(reg));
+
+ reg = gsi_reg(gsi, CNTXT_SRC_EV_CH_IRQ_CLR);
+ iowrite32(event_mask, gsi->virt + reg_offset(reg));
while (event_mask) {
u32 evt_ring_id = __ffs(event_mask);
@@ -1117,21 +1210,29 @@ gsi_isr_glob_evt_err(struct gsi *gsi, u32 err_ee, u32 evt_ring_id, u32 code)
/* Global error interrupt handler */
static void gsi_isr_glob_err(struct gsi *gsi)
{
+ const struct reg *log_reg;
+ const struct reg *clr_reg;
enum gsi_err_type type;
enum gsi_err_code code;
+ u32 offset;
u32 which;
u32 val;
u32 ee;
/* Get the logged error, then reinitialize the log */
- val = ioread32(gsi->virt + GSI_ERROR_LOG_OFFSET);
- iowrite32(0, gsi->virt + GSI_ERROR_LOG_OFFSET);
- iowrite32(~0, gsi->virt + GSI_ERROR_LOG_CLR_OFFSET);
+ log_reg = gsi_reg(gsi, ERROR_LOG);
+ offset = reg_offset(log_reg);
+ val = ioread32(gsi->virt + offset);
+ iowrite32(0, gsi->virt + offset);
+
+ clr_reg = gsi_reg(gsi, ERROR_LOG_CLR);
+ iowrite32(~0, gsi->virt + reg_offset(clr_reg));
- ee = u32_get_bits(val, ERR_EE_FMASK);
- type = u32_get_bits(val, ERR_TYPE_FMASK);
- which = u32_get_bits(val, ERR_VIRT_IDX_FMASK);
- code = u32_get_bits(val, ERR_CODE_FMASK);
+ /* Parse the error value */
+ ee = reg_decode(log_reg, ERR_EE, val);
+ type = reg_decode(log_reg, ERR_TYPE, val);
+ which = reg_decode(log_reg, ERR_VIRT_IDX, val);
+ code = reg_decode(log_reg, ERR_CODE, val);
if (type == GSI_ERR_TYPE_CHAN)
gsi_isr_glob_chan_err(gsi, ee, which, code);
@@ -1144,6 +1245,7 @@ static void gsi_isr_glob_err(struct gsi *gsi)
/* Generic EE interrupt handler */
static void gsi_isr_gp_int1(struct gsi *gsi)
{
+ const struct reg *reg;
u32 result;
u32 val;
@@ -1166,8 +1268,9 @@ static void gsi_isr_gp_int1(struct gsi *gsi)
* In either case, we silently ignore a INCORRECT_CHANNEL_STATE
* error if we receive it.
*/
- val = ioread32(gsi->virt + GSI_CNTXT_SCRATCH_0_OFFSET);
- result = u32_get_bits(val, GENERIC_EE_RESULT_FMASK);
+ reg = gsi_reg(gsi, CNTXT_SCRATCH_0);
+ val = ioread32(gsi->virt + reg_offset(reg));
+ result = reg_decode(reg, GENERIC_EE_RESULT, val);
switch (result) {
case GENERIC_EE_SUCCESS:
@@ -1191,19 +1294,22 @@ static void gsi_isr_gp_int1(struct gsi *gsi)
/* Inter-EE interrupt handler */
static void gsi_isr_glob_ee(struct gsi *gsi)
{
+ const struct reg *reg;
u32 val;
- val = ioread32(gsi->virt + GSI_CNTXT_GLOB_IRQ_STTS_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_GLOB_IRQ_STTS);
+ val = ioread32(gsi->virt + reg_offset(reg));
- if (val & BIT(ERROR_INT))
+ if (val & ERROR_INT)
gsi_isr_glob_err(gsi);
- iowrite32(val, gsi->virt + GSI_CNTXT_GLOB_IRQ_CLR_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_GLOB_IRQ_CLR);
+ iowrite32(val, gsi->virt + reg_offset(reg));
- val &= ~BIT(ERROR_INT);
+ val &= ~ERROR_INT;
- if (val & BIT(GP_INT1)) {
- val ^= BIT(GP_INT1);
+ if (val & GP_INT1) {
+ val ^= GP_INT1;
gsi_isr_gp_int1(gsi);
}
@@ -1214,11 +1320,16 @@ static void gsi_isr_glob_ee(struct gsi *gsi)
/* I/O completion interrupt event */
static void gsi_isr_ieob(struct gsi *gsi)
{
+ const struct reg *reg;
u32 event_mask;
- event_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_SRC_IEOB_IRQ);
+ event_mask = ioread32(gsi->virt + reg_offset(reg));
+
gsi_irq_ieob_disable(gsi, event_mask);
- iowrite32(event_mask, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET);
+
+ reg = gsi_reg(gsi, CNTXT_SRC_IEOB_IRQ_CLR);
+ iowrite32(event_mask, gsi->virt + reg_offset(reg));
while (event_mask) {
u32 evt_ring_id = __ffs(event_mask);
@@ -1233,10 +1344,14 @@ static void gsi_isr_ieob(struct gsi *gsi)
static void gsi_isr_general(struct gsi *gsi)
{
struct device *dev = gsi->dev;
+ const struct reg *reg;
u32 val;
- val = ioread32(gsi->virt + GSI_CNTXT_GSI_IRQ_STTS_OFFSET);
- iowrite32(val, gsi->virt + GSI_CNTXT_GSI_IRQ_CLR_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_GSI_IRQ_STTS);
+ val = ioread32(gsi->virt + reg_offset(reg));
+
+ reg = gsi_reg(gsi, CNTXT_GSI_IRQ_CLR);
+ iowrite32(val, gsi->virt + reg_offset(reg));
dev_err(dev, "unexpected general interrupt 0x%08x\n", val);
}
@@ -1252,31 +1367,39 @@ static void gsi_isr_general(struct gsi *gsi)
static irqreturn_t gsi_isr(int irq, void *dev_id)
{
struct gsi *gsi = dev_id;
+ const struct reg *reg;
u32 intr_mask;
u32 cnt = 0;
+ u32 offset;
+
+ reg = gsi_reg(gsi, CNTXT_TYPE_IRQ);
+ offset = reg_offset(reg);
/* enum gsi_irq_type_id defines GSI interrupt types */
- while ((intr_mask = ioread32(gsi->virt + GSI_CNTXT_TYPE_IRQ_OFFSET))) {
+ while ((intr_mask = ioread32(gsi->virt + offset))) {
/* intr_mask contains bitmask of pending GSI interrupts */
do {
u32 gsi_intr = BIT(__ffs(intr_mask));
intr_mask ^= gsi_intr;
+ /* Note: the IRQ condition for each type is cleared
+ * when the type-specific register is updated.
+ */
switch (gsi_intr) {
- case BIT(GSI_CH_CTRL):
+ case GSI_CH_CTRL:
gsi_isr_chan_ctrl(gsi);
break;
- case BIT(GSI_EV_CTRL):
+ case GSI_EV_CTRL:
gsi_isr_evt_ctrl(gsi);
break;
- case BIT(GSI_GLOB_EE):
+ case GSI_GLOB_EE:
gsi_isr_glob_ee(gsi);
break;
- case BIT(GSI_IEOB):
+ case GSI_IEOB:
gsi_isr_ieob(gsi);
break;
- case BIT(GSI_GENERAL):
+ case GSI_GENERAL:
gsi_isr_general(gsi);
break;
default:
@@ -1467,11 +1590,13 @@ void gsi_channel_doorbell(struct gsi_channel *channel)
struct gsi_ring *tre_ring = &channel->tre_ring;
u32 channel_id = gsi_channel_id(channel);
struct gsi *gsi = channel->gsi;
+ const struct reg *reg;
u32 val;
+ reg = gsi_reg(gsi, CH_C_DOORBELL_0);
/* Note: index *must* be used modulo the ring count here */
val = gsi_ring_addr(tre_ring, tre_ring->index % tre_ring->count);
- iowrite32(val, gsi->virt + GSI_CH_C_DOORBELL_0_OFFSET(channel_id));
+ iowrite32(val, gsi->virt + reg_n_offset(reg, channel_id));
}
/* Consult hardware, move newly completed transactions to completed state */
@@ -1482,6 +1607,7 @@ void gsi_channel_update(struct gsi_channel *channel)
struct gsi_evt_ring *evt_ring;
struct gsi_trans *trans;
struct gsi_ring *ring;
+ const struct reg *reg;
u32 offset;
u32 index;
@@ -1491,7 +1617,8 @@ void gsi_channel_update(struct gsi_channel *channel)
/* See if there's anything new to process; if not, we're done. Note
* that index always refers to an entry *within* the event ring.
*/
- offset = GSI_EV_CH_E_CNTXT_4_OFFSET(evt_ring_id);
+ reg = gsi_reg(gsi, EV_CH_E_CNTXT_4);
+ offset = reg_n_offset(reg, evt_ring_id);
index = gsi_ring_index(ring, ioread32(gsi->virt + offset));
if (index == ring->index % ring->count)
return;
@@ -1642,7 +1769,9 @@ static int gsi_generic_command(struct gsi *gsi, u32 channel_id,
enum gsi_generic_cmd_opcode opcode,
u8 params)
{
+ const struct reg *reg;
bool timeout;
+ u32 offset;
u32 val;
/* The error global interrupt type is always enabled (until we tear
@@ -1654,24 +1783,31 @@ static int gsi_generic_command(struct gsi *gsi, u32 channel_id,
* channel), and only from this function. So we enable the GP_INT1
* IRQ type here, and disable it again after the command completes.
*/
- val = BIT(ERROR_INT) | BIT(GP_INT1);
- iowrite32(val, gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_GLOB_IRQ_EN);
+ val = ERROR_INT | GP_INT1;
+ iowrite32(val, gsi->virt + reg_offset(reg));
/* First zero the result code field */
- val = ioread32(gsi->virt + GSI_CNTXT_SCRATCH_0_OFFSET);
- val &= ~GENERIC_EE_RESULT_FMASK;
- iowrite32(val, gsi->virt + GSI_CNTXT_SCRATCH_0_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_SCRATCH_0);
+ offset = reg_offset(reg);
+ val = ioread32(gsi->virt + offset);
+
+ val &= ~reg_fmask(reg, GENERIC_EE_RESULT);
+ iowrite32(val, gsi->virt + offset);
/* Now issue the command */
- val = u32_encode_bits(opcode, GENERIC_OPCODE_FMASK);
- val |= u32_encode_bits(channel_id, GENERIC_CHID_FMASK);
- val |= u32_encode_bits(GSI_EE_MODEM, GENERIC_EE_FMASK);
- val |= u32_encode_bits(params, GENERIC_PARAMS_FMASK);
+ reg = gsi_reg(gsi, GENERIC_CMD);
+ val = reg_encode(reg, GENERIC_OPCODE, opcode);
+ val |= reg_encode(reg, GENERIC_CHID, channel_id);
+ val |= reg_encode(reg, GENERIC_EE, GSI_EE_MODEM);
+ if (gsi->version >= IPA_VERSION_4_11)
+ val |= reg_encode(reg, GENERIC_PARAMS, params);
- timeout = !gsi_command(gsi, GSI_GENERIC_CMD_OFFSET, val);
+ timeout = !gsi_command(gsi, reg_offset(reg), val);
/* Disable the GP_INT1 IRQ type again */
- iowrite32(BIT(ERROR_INT), gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_GLOB_IRQ_EN);
+ iowrite32(ERROR_INT, gsi->virt + reg_offset(reg));
if (!timeout)
return gsi->result;
@@ -1828,32 +1964,40 @@ static void gsi_channel_teardown(struct gsi *gsi)
/* Turn off all GSI interrupts initially */
static int gsi_irq_setup(struct gsi *gsi)
{
+ const struct reg *reg;
int ret;
/* Writing 1 indicates IRQ interrupts; 0 would be MSI */
- iowrite32(1, gsi->virt + GSI_CNTXT_INTSET_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_INTSET);
+ iowrite32(reg_bit(reg, INTYPE), gsi->virt + reg_offset(reg));
/* Disable all interrupt types */
gsi_irq_type_update(gsi, 0);
/* Clear all type-specific interrupt masks */
- iowrite32(0, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET);
- iowrite32(0, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET);
- iowrite32(0, gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET);
- iowrite32(0, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_SRC_CH_IRQ_MSK);
+ iowrite32(0, gsi->virt + reg_offset(reg));
+
+ reg = gsi_reg(gsi, CNTXT_SRC_EV_CH_IRQ_MSK);
+ iowrite32(0, gsi->virt + reg_offset(reg));
+
+ reg = gsi_reg(gsi, CNTXT_GLOB_IRQ_EN);
+ iowrite32(0, gsi->virt + reg_offset(reg));
+
+ reg = gsi_reg(gsi, CNTXT_SRC_IEOB_IRQ_MSK);
+ iowrite32(0, gsi->virt + reg_offset(reg));
/* The inter-EE interrupts are not supported for IPA v3.0-v3.1 */
if (gsi->version > IPA_VERSION_3_1) {
- u32 offset;
+ reg = gsi_reg(gsi, INTER_EE_SRC_CH_IRQ_MSK);
+ iowrite32(0, gsi->virt + reg_offset(reg));
- /* These registers are in the non-adjusted address range */
- offset = GSI_INTER_EE_SRC_CH_IRQ_MSK_OFFSET;
- iowrite32(0, gsi->virt_raw + offset);
- offset = GSI_INTER_EE_SRC_EV_CH_IRQ_MSK_OFFSET;
- iowrite32(0, gsi->virt_raw + offset);
+ reg = gsi_reg(gsi, INTER_EE_SRC_EV_CH_IRQ_MSK);
+ iowrite32(0, gsi->virt + reg_offset(reg));
}
- iowrite32(0, gsi->virt + GSI_CNTXT_GSI_IRQ_EN_OFFSET);
+ reg = gsi_reg(gsi, CNTXT_GSI_IRQ_EN);
+ iowrite32(0, gsi->virt + reg_offset(reg));
ret = request_irq(gsi->irq, gsi_isr, 0, "gsi", gsi);
if (ret)
@@ -1871,6 +2015,7 @@ static void gsi_irq_teardown(struct gsi *gsi)
static int gsi_ring_setup(struct gsi *gsi)
{
struct device *dev = gsi->dev;
+ const struct reg *reg;
u32 count;
u32 val;
@@ -1882,9 +2027,10 @@ static int gsi_ring_setup(struct gsi *gsi)
return 0;
}
- val = ioread32(gsi->virt + GSI_GSI_HW_PARAM_2_OFFSET);
+ reg = gsi_reg(gsi, HW_PARAM_2);
+ val = ioread32(gsi->virt + reg_offset(reg));
- count = u32_get_bits(val, NUM_CH_PER_EE_FMASK);
+ count = reg_decode(reg, NUM_CH_PER_EE, val);
if (!count) {
dev_err(dev, "GSI reports zero channels supported\n");
return -EINVAL;
@@ -1896,7 +2042,12 @@ static int gsi_ring_setup(struct gsi *gsi)
}
gsi->channel_count = count;
- count = u32_get_bits(val, NUM_EV_PER_EE_FMASK);
+ if (gsi->version < IPA_VERSION_5_0) {
+ count = reg_decode(reg, NUM_EV_PER_EE, val);
+ } else {
+ reg = gsi_reg(gsi, HW_PARAM_4);
+ val = ioread32(gsi->virt + reg_offset(reg));
+ count = reg_decode(reg, EV_PER_EE, val);
+ }
if (!count) {
dev_err(dev, "GSI reports zero event rings supported\n");
return -EINVAL;
@@ -1915,12 +2066,14 @@ static int gsi_ring_setup(struct gsi *gsi)
/* Setup function for GSI. GSI firmware must be loaded and initialized */
int gsi_setup(struct gsi *gsi)
{
+ const struct reg *reg;
u32 val;
int ret;
/* Here is where we first touch the GSI hardware */
- val = ioread32(gsi->virt + GSI_GSI_STATUS_OFFSET);
- if (!(val & ENABLED_FMASK)) {
+ reg = gsi_reg(gsi, GSI_STATUS);
+ val = ioread32(gsi->virt + reg_offset(reg));
+ if (!(val & reg_bit(reg, ENABLED))) {
dev_err(gsi->dev, "GSI has not been enabled\n");
return -EIO;
}
@@ -1934,7 +2087,8 @@ int gsi_setup(struct gsi *gsi)
goto err_irq_teardown;
/* Initialize the error log */
- iowrite32(0, gsi->virt + GSI_ERROR_LOG_OFFSET);
+ reg = gsi_reg(gsi, ERROR_LOG);
+ iowrite32(0, gsi->virt + reg_offset(reg));
ret = gsi_channel_setup(gsi);
if (ret)
@@ -2205,67 +2359,37 @@ int gsi_init(struct gsi *gsi, struct platform_device *pdev,
enum ipa_version version, u32 count,
const struct ipa_gsi_endpoint_data *data)
{
- struct device *dev = &pdev->dev;
- struct resource *res;
- resource_size_t size;
- u32 adjust;
int ret;
gsi_validate_build();
- gsi->dev = dev;
+ gsi->dev = &pdev->dev;
gsi->version = version;
/* GSI uses NAPI on all channels. Create a dummy network device
* for the channel NAPI contexts to be associated with.
*/
init_dummy_netdev(&gsi->dummy_dev);
-
- /* Get GSI memory range and map it */
- res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "gsi");
- if (!res) {
- dev_err(dev, "DT error getting \"gsi\" memory property\n");
- return -ENODEV;
- }
-
- size = resource_size(res);
- if (res->start > U32_MAX || size > U32_MAX - res->start) {
- dev_err(dev, "DT memory resource \"gsi\" out of range\n");
- return -EINVAL;
- }
-
- /* Make sure we can make our pointer adjustment if necessary */
- adjust = gsi->version < IPA_VERSION_4_5 ? 0 : GSI_EE_REG_ADJUST;
- if (res->start < adjust) {
- dev_err(dev, "DT memory resource \"gsi\" too low (< %u)\n",
- adjust);
- return -EINVAL;
- }
-
- gsi->virt_raw = ioremap(res->start, size);
- if (!gsi->virt_raw) {
- dev_err(dev, "unable to remap \"gsi\" memory\n");
- return -ENOMEM;
- }
- /* Most registers are accessed using an adjusted register range */
- gsi->virt = gsi->virt_raw - adjust;
-
init_completion(&gsi->completion);
+ ret = gsi_reg_init(gsi, pdev);
+ if (ret)
+ return ret;
+
ret = gsi_irq_init(gsi, pdev); /* No matching exit required */
if (ret)
- goto err_iounmap;
+ goto err_reg_exit;
ret = gsi_channel_init(gsi, count, data);
if (ret)
- goto err_iounmap;
+ goto err_reg_exit;
mutex_init(&gsi->mutex);
return 0;
-err_iounmap:
- iounmap(gsi->virt_raw);
+err_reg_exit:
+ gsi_reg_exit(gsi);
return ret;
}
@@ -2275,7 +2399,7 @@ void gsi_exit(struct gsi *gsi)
{
mutex_destroy(&gsi->mutex);
gsi_channel_exit(gsi);
- iounmap(gsi->virt_raw);
+ gsi_reg_exit(gsi);
}
/* The maximum number of outstanding TREs on a channel. This limits
diff --git a/drivers/net/ipa/gsi.h b/drivers/net/ipa/gsi.h
index 49dcadba4e0b..50bc80cb167c 100644
--- a/drivers/net/ipa/gsi.h
+++ b/drivers/net/ipa/gsi.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
- * Copyright (C) 2018-2022 Linaro Ltd.
+ * Copyright (C) 2018-2023 Linaro Ltd.
*/
#ifndef _GSI_H_
#define _GSI_H_
@@ -140,8 +140,9 @@ struct gsi_evt_ring {
struct gsi {
struct device *dev; /* Same as IPA device */
enum ipa_version version;
- void __iomem *virt_raw; /* I/O mapped address range */
- void __iomem *virt; /* Adjusted for most registers */
+ void __iomem *virt; /* I/O mapped registers */
+ const struct regs *regs;
+
u32 irq;
u32 channel_count;
u32 evt_ring_count;
diff --git a/drivers/net/ipa/gsi_reg.c b/drivers/net/ipa/gsi_reg.c
new file mode 100644
index 000000000000..1412b67304c8
--- /dev/null
+++ b/drivers/net/ipa/gsi_reg.c
@@ -0,0 +1,151 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (C) 2023 Linaro Ltd. */
+
+#include <linux/platform_device.h>
+#include <linux/io.h>
+
+#include "gsi.h"
+#include "reg.h"
+#include "gsi_reg.h"
+
+/* Is this register ID valid for the current GSI version? */
+static bool gsi_reg_id_valid(struct gsi *gsi, enum gsi_reg_id reg_id)
+{
+ switch (reg_id) {
+ case INTER_EE_SRC_CH_IRQ_MSK:
+ case INTER_EE_SRC_EV_CH_IRQ_MSK:
+ case CH_C_CNTXT_0:
+ case CH_C_CNTXT_1:
+ case CH_C_CNTXT_2:
+ case CH_C_CNTXT_3:
+ case CH_C_QOS:
+ case CH_C_SCRATCH_0:
+ case CH_C_SCRATCH_1:
+ case CH_C_SCRATCH_2:
+ case CH_C_SCRATCH_3:
+ case EV_CH_E_CNTXT_0:
+ case EV_CH_E_CNTXT_1:
+ case EV_CH_E_CNTXT_2:
+ case EV_CH_E_CNTXT_3:
+ case EV_CH_E_CNTXT_4:
+ case EV_CH_E_CNTXT_8:
+ case EV_CH_E_CNTXT_9:
+ case EV_CH_E_CNTXT_10:
+ case EV_CH_E_CNTXT_11:
+ case EV_CH_E_CNTXT_12:
+ case EV_CH_E_CNTXT_13:
+ case EV_CH_E_SCRATCH_0:
+ case EV_CH_E_SCRATCH_1:
+ case CH_C_DOORBELL_0:
+ case EV_CH_E_DOORBELL_0:
+ case GSI_STATUS:
+ case CH_CMD:
+ case EV_CH_CMD:
+ case GENERIC_CMD:
+ case HW_PARAM_2:
+ case HW_PARAM_4:
+ case CNTXT_TYPE_IRQ:
+ case CNTXT_TYPE_IRQ_MSK:
+ case CNTXT_SRC_CH_IRQ:
+ case CNTXT_SRC_CH_IRQ_MSK:
+ case CNTXT_SRC_CH_IRQ_CLR:
+ case CNTXT_SRC_EV_CH_IRQ:
+ case CNTXT_SRC_EV_CH_IRQ_MSK:
+ case CNTXT_SRC_EV_CH_IRQ_CLR:
+ case CNTXT_SRC_IEOB_IRQ:
+ case CNTXT_SRC_IEOB_IRQ_MSK:
+ case CNTXT_SRC_IEOB_IRQ_CLR:
+ case CNTXT_GLOB_IRQ_STTS:
+ case CNTXT_GLOB_IRQ_EN:
+ case CNTXT_GLOB_IRQ_CLR:
+ case CNTXT_GSI_IRQ_STTS:
+ case CNTXT_GSI_IRQ_EN:
+ case CNTXT_GSI_IRQ_CLR:
+ case CNTXT_INTSET:
+ case ERROR_LOG:
+ case ERROR_LOG_CLR:
+ case CNTXT_SCRATCH_0:
+ return true;
+
+ default:
+ return false;
+ }
+}
+
+const struct reg *gsi_reg(struct gsi *gsi, enum gsi_reg_id reg_id)
+{
+ if (WARN(!gsi_reg_id_valid(gsi, reg_id), "invalid reg %u\n", reg_id))
+ return NULL;
+
+ return reg(gsi->regs, reg_id);
+}
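A hedged usage sketch: gsi_reg() WARNs and returns NULL for a register ID missing from the list above, so a defensive caller could check the result before using it (callers in this patch rely on always passing valid IDs):

	const struct reg *reg = gsi_reg(gsi, CH_CMD);

	if (reg)
		iowrite32(val, gsi->virt + reg_offset(reg));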
+
+static const struct regs *gsi_regs(struct gsi *gsi)
+{
+ switch (gsi->version) {
+ case IPA_VERSION_3_1:
+ return &gsi_regs_v3_1;
+
+ case IPA_VERSION_3_5_1:
+ return &gsi_regs_v3_5_1;
+
+ case IPA_VERSION_4_2:
+ return &gsi_regs_v4_0;
+
+ case IPA_VERSION_4_5:
+ case IPA_VERSION_4_7:
+ return &gsi_regs_v4_5;
+
+ case IPA_VERSION_4_9:
+ return &gsi_regs_v4_9;
+
+ case IPA_VERSION_4_11:
+ return &gsi_regs_v4_11;
+
+ default:
+ return NULL;
+ }
+}
+
+/* Sets gsi->virt and I/O maps the "gsi" memory range for registers */
+int gsi_reg_init(struct gsi *gsi, struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct resource *res;
+ resource_size_t size;
+
+ /* Get GSI memory range and map it */
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "gsi");
+ if (!res) {
+ dev_err(dev, "DT error getting \"gsi\" memory property\n");
+ return -ENODEV;
+ }
+
+ size = resource_size(res);
+ if (res->start > U32_MAX || size > U32_MAX - res->start) {
+ dev_err(dev, "DT memory resource \"gsi\" out of range\n");
+ return -EINVAL;
+ }
+
+ gsi->regs = gsi_regs(gsi);
+ if (!gsi->regs) {
+ dev_err(dev, "unsupported IPA version %u (?)\n", gsi->version);
+ return -EINVAL;
+ }
+
+ gsi->virt = ioremap(res->start, size);
+ if (!gsi->virt) {
+ dev_err(dev, "unable to remap \"gsi\" memory\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+/* Inverse of gsi_reg_init() */
+void gsi_reg_exit(struct gsi *gsi)
+{
+ iounmap(gsi->virt);
+ gsi->virt = NULL;
+ gsi->regs = NULL;
+}
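These two functions pair up in the probe path exactly as the gsi_init() hunk earlier in this patch shows; condensed for reference:

	ret = gsi_reg_init(gsi, pdev);	/* pick regs table, ioremap "gsi" */
	if (ret)
		return ret;

	ret = gsi_channel_init(gsi, count, data);
	if (ret)
		goto err_reg_exit;

	return 0;

err_reg_exit:
	gsi_reg_exit(gsi);		/* iounmap and clear the pointers */
	return ret;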
diff --git a/drivers/net/ipa/gsi_reg.h b/drivers/net/ipa/gsi_reg.h
index 3763359f208f..f62f0a5c653d 100644
--- a/drivers/net/ipa/gsi_reg.h
+++ b/drivers/net/ipa/gsi_reg.h
@@ -1,12 +1,12 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
- * Copyright (C) 2018-2022 Linaro Ltd.
+ * Copyright (C) 2018-2023 Linaro Ltd.
*/
#ifndef _GSI_REG_H_
#define _GSI_REG_H_
-/* === Only "gsi.c" should include this file === */
+/* === Only "gsi.c" and "gsi_reg.c" should include this file === */
#include <linux/bits.h>
@@ -38,29 +38,75 @@
* (though the actual limit is hardware-dependent).
*/
-/* GSI EE registers as a group are shifted downward by a fixed constant amount
- * for IPA versions 4.5 and beyond. This applies to all GSI registers we use
- * *except* the ones that disable inter-EE interrupts for channels and event
- * channels.
- *
- * The "raw" (not adjusted) GSI register range is mapped, and a pointer to
- * the mapped range is held in gsi->virt_raw. The inter-EE interrupt
- * registers are accessed using that pointer.
- *
- * Most registers are accessed using gsi->virt, which is a copy of the "raw"
- * pointer, adjusted downward by the fixed amount.
- */
-#define GSI_EE_REG_ADJUST 0x0000d000 /* IPA v4.5+ */
-
-/* The inter-EE IRQ registers are relative to gsi->virt_raw (IPA v3.5+) */
-
-#define GSI_INTER_EE_SRC_CH_IRQ_MSK_OFFSET \
- (0x0000c020 + 0x1000 * GSI_EE_AP)
-
-#define GSI_INTER_EE_SRC_EV_CH_IRQ_MSK_OFFSET \
- (0x0000c024 + 0x1000 * GSI_EE_AP)
+/* enum gsi_reg_id - GSI register IDs */
+enum gsi_reg_id {
+ INTER_EE_SRC_CH_IRQ_MSK, /* IPA v3.5+ */
+ INTER_EE_SRC_EV_CH_IRQ_MSK, /* IPA v3.5+ */
+ CH_C_CNTXT_0,
+ CH_C_CNTXT_1,
+ CH_C_CNTXT_2,
+ CH_C_CNTXT_3,
+ CH_C_QOS,
+ CH_C_SCRATCH_0,
+ CH_C_SCRATCH_1,
+ CH_C_SCRATCH_2,
+ CH_C_SCRATCH_3,
+ EV_CH_E_CNTXT_0,
+ EV_CH_E_CNTXT_1,
+ EV_CH_E_CNTXT_2,
+ EV_CH_E_CNTXT_3,
+ EV_CH_E_CNTXT_4,
+ EV_CH_E_CNTXT_8,
+ EV_CH_E_CNTXT_9,
+ EV_CH_E_CNTXT_10,
+ EV_CH_E_CNTXT_11,
+ EV_CH_E_CNTXT_12,
+ EV_CH_E_CNTXT_13,
+ EV_CH_E_SCRATCH_0,
+ EV_CH_E_SCRATCH_1,
+ CH_C_DOORBELL_0,
+ EV_CH_E_DOORBELL_0,
+ GSI_STATUS,
+ CH_CMD,
+ EV_CH_CMD,
+ GENERIC_CMD,
+ HW_PARAM_2, /* IPA v3.5.1+ */
+ HW_PARAM_4, /* IPA v5.0+ */
+ CNTXT_TYPE_IRQ,
+ CNTXT_TYPE_IRQ_MSK,
+ CNTXT_SRC_CH_IRQ,
+ CNTXT_SRC_CH_IRQ_MSK,
+ CNTXT_SRC_CH_IRQ_CLR,
+ CNTXT_SRC_EV_CH_IRQ,
+ CNTXT_SRC_EV_CH_IRQ_MSK,
+ CNTXT_SRC_EV_CH_IRQ_CLR,
+ CNTXT_SRC_IEOB_IRQ,
+ CNTXT_SRC_IEOB_IRQ_MSK,
+ CNTXT_SRC_IEOB_IRQ_CLR,
+ CNTXT_GLOB_IRQ_STTS,
+ CNTXT_GLOB_IRQ_EN,
+ CNTXT_GLOB_IRQ_CLR,
+ CNTXT_GSI_IRQ_STTS,
+ CNTXT_GSI_IRQ_EN,
+ CNTXT_GSI_IRQ_CLR,
+ CNTXT_INTSET,
+ ERROR_LOG,
+ ERROR_LOG_CLR,
+ CNTXT_SCRATCH_0,
+ GSI_REG_ID_COUNT, /* Last; not an ID */
+};
-/* All other register offsets are relative to gsi->virt */
+/* CH_C_CNTXT_0 register */
+enum gsi_reg_ch_c_cntxt_0_field_id {
+ CHTYPE_PROTOCOL,
+ CHTYPE_DIR,
+ CH_EE,
+ CHID,
+ CHTYPE_PROTOCOL_MSB, /* IPA v4.5-4.11 */
+ ERINDEX, /* Not IPA v5.0+ */
+ CHSTATE,
+ ELEMENT_SIZE,
+};
/** enum gsi_channel_type - CHTYPE_PROTOCOL field values in CH_C_CNTXT_0 */
enum gsi_channel_type {
@@ -76,155 +122,64 @@ enum gsi_channel_type {
GSI_CHANNEL_TYPE_11AD = 0x9,
};
-#define GSI_CH_C_CNTXT_0_OFFSET(ch) \
- (0x0001c000 + 0x4000 * GSI_EE_AP + 0x80 * (ch))
-#define CHTYPE_PROTOCOL_FMASK GENMASK(2, 0)
-#define CHTYPE_DIR_FMASK GENMASK(3, 3)
-#define EE_FMASK GENMASK(7, 4)
-#define CHID_FMASK GENMASK(12, 8)
-/* The next field is present for IPA v4.5 and above */
-#define CHTYPE_PROTOCOL_MSB_FMASK GENMASK(13, 13)
-#define ERINDEX_FMASK GENMASK(18, 14)
-#define CHSTATE_FMASK GENMASK(23, 20)
-#define ELEMENT_SIZE_FMASK GENMASK(31, 24)
-
-/* Encoded value for CH_C_CNTXT_0 register channel protocol fields */
-static inline u32
-chtype_protocol_encoded(enum ipa_version version, enum gsi_channel_type type)
-{
- u32 val;
-
- val = u32_encode_bits(type, CHTYPE_PROTOCOL_FMASK);
- if (version < IPA_VERSION_4_5)
- return val;
-
- /* Encode upper bit(s) as well */
- type >>= hweight32(CHTYPE_PROTOCOL_FMASK);
- val |= u32_encode_bits(type, CHTYPE_PROTOCOL_MSB_FMASK);
-
- return val;
-}
-
-#define GSI_CH_C_CNTXT_1_OFFSET(ch) \
- (0x0001c004 + 0x4000 * GSI_EE_AP + 0x80 * (ch))
-
-/* Encoded value for CH_C_CNTXT_1 register R_LENGTH field */
-static inline u32 r_length_encoded(enum ipa_version version, u32 length)
-{
- if (version < IPA_VERSION_4_9)
- return u32_encode_bits(length, GENMASK(15, 0));
- return u32_encode_bits(length, GENMASK(19, 0));
-}
-
-#define GSI_CH_C_CNTXT_2_OFFSET(ch) \
- (0x0001c008 + 0x4000 * GSI_EE_AP + 0x80 * (ch))
-
-#define GSI_CH_C_CNTXT_3_OFFSET(ch) \
- (0x0001c00c + 0x4000 * GSI_EE_AP + 0x80 * (ch))
-
-#define GSI_CH_C_QOS_OFFSET(ch) \
- (0x0001c05c + 0x4000 * GSI_EE_AP + 0x80 * (ch))
-#define WRR_WEIGHT_FMASK GENMASK(3, 0)
-#define MAX_PREFETCH_FMASK GENMASK(8, 8)
-#define USE_DB_ENG_FMASK GENMASK(9, 9)
-/* The next field is only present for IPA v4.0, v4.1, and v4.2 */
-#define USE_ESCAPE_BUF_ONLY_FMASK GENMASK(10, 10)
-/* The next two fields are present for IPA v4.5 and above */
-#define PREFETCH_MODE_FMASK GENMASK(13, 10)
-#define EMPTY_LVL_THRSHOLD_FMASK GENMASK(23, 16)
-/* The next field is present for IPA v4.9 and above */
-#define DB_IN_BYTES GENMASK(24, 24)
+/* CH_C_CNTXT_1 register */
+enum gsi_reg_ch_c_cntxt_1_field_id {
+ CH_R_LENGTH,
+ CH_ERINDEX, /* IPA v5.0+ */
+};
+
+/* CH_C_QOS register */
+enum gsi_reg_ch_c_qos_field_id {
+ WRR_WEIGHT,
+ MAX_PREFETCH,
+ USE_DB_ENG,
+ USE_ESCAPE_BUF_ONLY, /* IPA v4.0-4.2 */
+ PREFETCH_MODE, /* IPA v4.5+ */
+ EMPTY_LVL_THRSHOLD, /* IPA v4.5+ */
+ DB_IN_BYTES, /* IPA v4.9+ */
+ LOW_LATENCY_EN, /* IPA v5.0+ */
+};
/** enum gsi_prefetch_mode - PREFETCH_MODE field in CH_C_QOS */
enum gsi_prefetch_mode {
- GSI_USE_PREFETCH_BUFS = 0x0,
- GSI_ESCAPE_BUF_ONLY = 0x1,
- GSI_SMART_PREFETCH = 0x2,
- GSI_FREE_PREFETCH = 0x3,
+ USE_PREFETCH_BUFS = 0,
+ ESCAPE_BUF_ONLY = 1,
+ SMART_PREFETCH = 2,
+ FREE_PREFETCH = 3,
};
-#define GSI_CH_C_SCRATCH_0_OFFSET(ch) \
- (0x0001c060 + 0x4000 * GSI_EE_AP + 0x80 * (ch))
-
-#define GSI_CH_C_SCRATCH_1_OFFSET(ch) \
- (0x0001c064 + 0x4000 * GSI_EE_AP + 0x80 * (ch))
-
-#define GSI_CH_C_SCRATCH_2_OFFSET(ch) \
- (0x0001c068 + 0x4000 * GSI_EE_AP + 0x80 * (ch))
-
-#define GSI_CH_C_SCRATCH_3_OFFSET(ch) \
- (0x0001c06c + 0x4000 * GSI_EE_AP + 0x80 * (ch))
-
-#define GSI_EV_CH_E_CNTXT_0_OFFSET(ev) \
- (0x0001d000 + 0x4000 * GSI_EE_AP + 0x80 * (ev))
-/* enum gsi_channel_type defines EV_CHTYPE field values in EV_CH_E_CNTXT_0 */
-#define EV_CHTYPE_FMASK GENMASK(3, 0)
-#define EV_EE_FMASK GENMASK(7, 4)
-#define EV_EVCHID_FMASK GENMASK(15, 8)
-#define EV_INTYPE_FMASK GENMASK(16, 16)
-#define EV_CHSTATE_FMASK GENMASK(23, 20)
-#define EV_ELEMENT_SIZE_FMASK GENMASK(31, 24)
-
-#define GSI_EV_CH_E_CNTXT_1_OFFSET(ev) \
- (0x0001d004 + 0x4000 * GSI_EE_AP + 0x80 * (ev))
-/* Encoded value for EV_CH_C_CNTXT_1 register EV_R_LENGTH field */
-static inline u32 ev_r_length_encoded(enum ipa_version version, u32 length)
-{
- if (version < IPA_VERSION_4_9)
- return u32_encode_bits(length, GENMASK(15, 0));
- return u32_encode_bits(length, GENMASK(19, 0));
-}
-
-#define GSI_EV_CH_E_CNTXT_2_OFFSET(ev) \
- (0x0001d008 + 0x4000 * GSI_EE_AP + 0x80 * (ev))
-
-#define GSI_EV_CH_E_CNTXT_3_OFFSET(ev) \
- (0x0001d00c + 0x4000 * GSI_EE_AP + 0x80 * (ev))
-
-#define GSI_EV_CH_E_CNTXT_4_OFFSET(ev) \
- (0x0001d010 + 0x4000 * GSI_EE_AP + 0x80 * (ev))
-
-#define GSI_EV_CH_E_CNTXT_8_OFFSET(ev) \
- (0x0001d020 + 0x4000 * GSI_EE_AP + 0x80 * (ev))
-#define MODT_FMASK GENMASK(15, 0)
-#define MODC_FMASK GENMASK(23, 16)
-#define MOD_CNT_FMASK GENMASK(31, 24)
-
-#define GSI_EV_CH_E_CNTXT_9_OFFSET(ev) \
- (0x0001d024 + 0x4000 * GSI_EE_AP + 0x80 * (ev))
-
-#define GSI_EV_CH_E_CNTXT_10_OFFSET(ev) \
- (0x0001d028 + 0x4000 * GSI_EE_AP + 0x80 * (ev))
-
-#define GSI_EV_CH_E_CNTXT_11_OFFSET(ev) \
- (0x0001d02c + 0x4000 * GSI_EE_AP + 0x80 * (ev))
-
-#define GSI_EV_CH_E_CNTXT_12_OFFSET(ev) \
- (0x0001d030 + 0x4000 * GSI_EE_AP + 0x80 * (ev))
-
-#define GSI_EV_CH_E_CNTXT_13_OFFSET(ev) \
- (0x0001d034 + 0x4000 * GSI_EE_AP + 0x80 * (ev))
-
-#define GSI_EV_CH_E_SCRATCH_0_OFFSET(ev) \
- (0x0001d048 + 0x4000 * GSI_EE_AP + 0x80 * (ev))
-
-#define GSI_EV_CH_E_SCRATCH_1_OFFSET(ev) \
- (0x0001d04c + 0x4000 * GSI_EE_AP + 0x80 * (ev))
+/* EV_CH_E_CNTXT_0 register */
+enum gsi_reg_ch_c_ev_ch_e_cntxt_0_field_id {
+ EV_CHTYPE, /* enum gsi_channel_type */
+ EV_EE, /* enum gsi_ee_id; always GSI_EE_AP for us */
+ EV_EVCHID,
+ EV_INTYPE,
+ EV_CHSTATE,
+ EV_ELEMENT_SIZE,
+};
-#define GSI_CH_C_DOORBELL_0_OFFSET(ch) \
- (0x0001e000 + 0x4000 * GSI_EE_AP + 0x08 * (ch))
+/* EV_CH_E_CNTXT_1 register */
+enum gsi_reg_ev_ch_c_cntxt_1_field_id {
+ R_LENGTH,
+};
-#define GSI_EV_CH_E_DOORBELL_0_OFFSET(ev) \
- (0x0001e100 + 0x4000 * GSI_EE_AP + 0x08 * (ev))
+/* EV_CH_E_CNTXT_8 register */
+enum gsi_reg_ch_c_ev_ch_e_cntxt_8_field_id {
+ EV_MODT,
+ EV_MODC,
+ EV_MOD_CNT,
+};
-#define GSI_GSI_STATUS_OFFSET \
- (0x0001f000 + 0x4000 * GSI_EE_AP)
-#define ENABLED_FMASK GENMASK(0, 0)
+/* GSI_STATUS register */
+enum gsi_reg_gsi_status_field_id {
+ ENABLED,
+};
-#define GSI_CH_CMD_OFFSET \
- (0x0001f008 + 0x4000 * GSI_EE_AP)
-#define CH_CHID_FMASK GENMASK(7, 0)
-#define CH_OPCODE_FMASK GENMASK(31, 24)
+/* CH_CMD register */
+enum gsi_reg_gsi_ch_cmd_field_id {
+ CH_CHID,
+ CH_OPCODE,
+};
/** enum gsi_ch_cmd_opcode - CH_OPCODE field values in CH_CMD */
enum gsi_ch_cmd_opcode {
@@ -236,10 +191,11 @@ enum gsi_ch_cmd_opcode {
GSI_CH_DB_STOP = 0xb,
};
-#define GSI_EV_CH_CMD_OFFSET \
- (0x0001f010 + 0x4000 * GSI_EE_AP)
-#define EV_CHID_FMASK GENMASK(7, 0)
-#define EV_OPCODE_FMASK GENMASK(31, 24)
+/* EV_CH_CMD register */
+enum gsi_ev_ch_cmd_field_id {
+ EV_CHID,
+ EV_OPCODE,
+};
/** enum gsi_evt_cmd_opcode - EV_OPCODE field values in EV_CH_CMD */
enum gsi_evt_cmd_opcode {
@@ -248,12 +204,13 @@ enum gsi_evt_cmd_opcode {
GSI_EVT_DE_ALLOC = 0xa,
};
-#define GSI_GENERIC_CMD_OFFSET \
- (0x0001f018 + 0x4000 * GSI_EE_AP)
-#define GENERIC_OPCODE_FMASK GENMASK(4, 0)
-#define GENERIC_CHID_FMASK GENMASK(9, 5)
-#define GENERIC_EE_FMASK GENMASK(13, 10)
-#define GENERIC_PARAMS_FMASK GENMASK(31, 24) /* IPA v4.11+ */
+/* GENERIC_CMD register */
+enum gsi_generic_cmd_field_id {
+ GENERIC_OPCODE,
+ GENERIC_CHID,
+ GENERIC_EE,
+ GENERIC_PARAMS, /* IPA v4.11+ */
+};
/** enum gsi_generic_cmd_opcode - GENERIC_OPCODE field values in GENERIC_CMD */
enum gsi_generic_cmd_opcode {
@@ -264,22 +221,20 @@ enum gsi_generic_cmd_opcode {
GSI_GENERIC_QUERY_FLOW_CONTROL = 0x5, /* IPA v4.11+ */
};
-/* The next register is present for IPA v3.5.1 and above */
-#define GSI_GSI_HW_PARAM_2_OFFSET \
- (0x0001f040 + 0x4000 * GSI_EE_AP)
-#define IRAM_SIZE_FMASK GENMASK(2, 0)
-#define NUM_CH_PER_EE_FMASK GENMASK(7, 3)
-#define NUM_EV_PER_EE_FMASK GENMASK(12, 8)
-#define GSI_CH_PEND_TRANSLATE_FMASK GENMASK(13, 13)
-#define GSI_CH_FULL_LOGIC_FMASK GENMASK(14, 14)
-/* Fields below are present for IPA v4.0 and above */
-#define GSI_USE_SDMA_FMASK GENMASK(15, 15)
-#define GSI_SDMA_N_INT_FMASK GENMASK(18, 16)
-#define GSI_SDMA_MAX_BURST_FMASK GENMASK(26, 19)
-#define GSI_SDMA_N_IOVEC_FMASK GENMASK(29, 27)
-/* Fields below are present for IPA v4.2 and above */
-#define GSI_USE_RD_WR_ENG_FMASK GENMASK(30, 30)
-#define GSI_USE_INTER_EE_FMASK GENMASK(31, 31)
+/* HW_PARAM_2 register */ /* IPA v3.5.1+ */
+enum gsi_hw_param_2_field_id {
+ IRAM_SIZE,
+ NUM_CH_PER_EE,
+ NUM_EV_PER_EE, /* Not IPA v5.0+ */
+ GSI_CH_PEND_TRANSLATE,
+ GSI_CH_FULL_LOGIC,
+ GSI_USE_SDMA, /* IPA v4.0+ */
+ GSI_SDMA_N_INT, /* IPA v4.0+ */
+ GSI_SDMA_MAX_BURST, /* IPA v4.0+ */
+ GSI_SDMA_N_IOVEC, /* IPA v4.0+ */
+ GSI_USE_RD_WR_ENG, /* IPA v4.2+ */
+ GSI_USE_INTER_EE, /* IPA v4.2+ */
+};
/** enum gsi_iram_size - IRAM_SIZE field values in HW_PARAM_2 */
enum gsi_iram_size {
@@ -293,93 +248,66 @@ enum gsi_iram_size {
IRAM_SIZE_FOUR_KB = 0x5,
};
-/* IRQ condition for each type is cleared by writing type-specific register */
-#define GSI_CNTXT_TYPE_IRQ_OFFSET \
- (0x0001f080 + 0x4000 * GSI_EE_AP)
-#define GSI_CNTXT_TYPE_IRQ_MSK_OFFSET \
- (0x0001f088 + 0x4000 * GSI_EE_AP)
+/* HW_PARAM_4 register */ /* IPA v5.0+ */
+enum gsi_hw_param_4_field_id {
+ EV_PER_EE,
+ IRAM_PROTOCOL_COUNT,
+};
-/* Values here are bit positions in the TYPE_IRQ and TYPE_IRQ_MSK registers */
+/**
+ * enum gsi_irq_type_id: GSI IRQ types
+ * @GSI_CH_CTRL: Channel allocation, deallocation, etc.
+ * @GSI_EV_CTRL: Event ring allocation, deallocation, etc.
+ * @GSI_GLOB_EE: Global/general event
+ * @GSI_IEOB: Transfer (TRE) completion
+ * @GSI_INTER_EE_CH_CTRL: Remote-issued stop/reset (unused)
+ * @GSI_INTER_EE_EV_CTRL: Remote-issued event reset (unused)
+ * @GSI_GENERAL: General hardware event (bus error, etc.)
+ */
enum gsi_irq_type_id {
- GSI_CH_CTRL = 0x0, /* channel allocation, etc. */
- GSI_EV_CTRL = 0x1, /* event ring allocation, etc. */
- GSI_GLOB_EE = 0x2, /* global/general event */
- GSI_IEOB = 0x3, /* TRE completion */
- GSI_INTER_EE_CH_CTRL = 0x4, /* remote-issued stop/reset (unused) */
- GSI_INTER_EE_EV_CTRL = 0x5, /* remote-issued event reset (unused) */
- GSI_GENERAL = 0x6, /* general-purpose event */
+ GSI_CH_CTRL = BIT(0),
+ GSI_EV_CTRL = BIT(1),
+ GSI_GLOB_EE = BIT(2),
+ GSI_IEOB = BIT(3),
+ GSI_INTER_EE_CH_CTRL = BIT(4),
+ GSI_INTER_EE_EV_CTRL = BIT(5),
+ GSI_GENERAL = BIT(6),
+ /* IRQ types 7-31 (and their bit values) are reserved */
};
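Because the enumerators are now the single-bit masks themselves, callers drop the BIT() wrapper, as the gsi_isr() hunk above shows:

	/* Before: enumerator values were bit positions */
	if (intr_mask & BIT(GSI_CH_CTRL))
		gsi_isr_chan_ctrl(gsi);

	/* After: enumerator values are the masks themselves */
	if (intr_mask & GSI_CH_CTRL)
		gsi_isr_chan_ctrl(gsi);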
-#define GSI_CNTXT_SRC_CH_IRQ_OFFSET \
- (0x0001f090 + 0x4000 * GSI_EE_AP)
-
-#define GSI_CNTXT_SRC_EV_CH_IRQ_OFFSET \
- (0x0001f094 + 0x4000 * GSI_EE_AP)
-
-#define GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET \
- (0x0001f098 + 0x4000 * GSI_EE_AP)
-
-#define GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET \
- (0x0001f09c + 0x4000 * GSI_EE_AP)
-
-#define GSI_CNTXT_SRC_CH_IRQ_CLR_OFFSET \
- (0x0001f0a0 + 0x4000 * GSI_EE_AP)
-
-#define GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET \
- (0x0001f0a4 + 0x4000 * GSI_EE_AP)
-
-#define GSI_CNTXT_SRC_IEOB_IRQ_OFFSET \
- (0x0001f0b0 + 0x4000 * GSI_EE_AP)
-
-#define GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET \
- (0x0001f0b8 + 0x4000 * GSI_EE_AP)
-
-#define GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET \
- (0x0001f0c0 + 0x4000 * GSI_EE_AP)
-
-#define GSI_CNTXT_GLOB_IRQ_STTS_OFFSET \
- (0x0001f100 + 0x4000 * GSI_EE_AP)
-#define GSI_CNTXT_GLOB_IRQ_EN_OFFSET \
- (0x0001f108 + 0x4000 * GSI_EE_AP)
-#define GSI_CNTXT_GLOB_IRQ_CLR_OFFSET \
- (0x0001f110 + 0x4000 * GSI_EE_AP)
-/* Values here are bit positions in the GLOB_IRQ_* registers */
+/** enum gsi_global_irq_id: Global GSI interrupt events */
enum gsi_global_irq_id {
- ERROR_INT = 0x0,
- GP_INT1 = 0x1,
- GP_INT2 = 0x2,
- GP_INT3 = 0x3,
+ ERROR_INT = BIT(0),
+ GP_INT1 = BIT(1),
+ GP_INT2 = BIT(2),
+ GP_INT3 = BIT(3),
+ /* Global IRQ types 4-31 (and their bit values) are reserved */
};
-#define GSI_CNTXT_GSI_IRQ_STTS_OFFSET \
- (0x0001f118 + 0x4000 * GSI_EE_AP)
-#define GSI_CNTXT_GSI_IRQ_EN_OFFSET \
- (0x0001f120 + 0x4000 * GSI_EE_AP)
-#define GSI_CNTXT_GSI_IRQ_CLR_OFFSET \
- (0x0001f128 + 0x4000 * GSI_EE_AP)
-/* Values here are bit positions in the (general) GSI_IRQ_* registers */
-enum gsi_general_id {
- BREAK_POINT = 0x0,
- BUS_ERROR = 0x1,
- CMD_FIFO_OVRFLOW = 0x2,
- MCS_STACK_OVRFLOW = 0x3,
+/** enum gsi_general_irq_id: GSI general IRQ conditions */
+enum gsi_general_irq_id {
+ BREAK_POINT = BIT(0),
+ BUS_ERROR = BIT(1),
+ CMD_FIFO_OVRFLOW = BIT(2),
+ MCS_STACK_OVRFLOW = BIT(3),
+ /* General IRQ types 4-31 (and their bit values) are reserved */
};
-#define GSI_CNTXT_INTSET_OFFSET \
- (0x0001f180 + 0x4000 * GSI_EE_AP)
-#define INTYPE_FMASK GENMASK(0, 0)
-
-#define GSI_ERROR_LOG_OFFSET \
- (0x0001f200 + 0x4000 * GSI_EE_AP)
+/* CNTXT_INTSET register */
+enum gsi_cntxt_intset_field_id {
+ INTYPE,
+};
-/* Fields below are present for IPA v3.5.1 and above */
-#define ERR_ARG3_FMASK GENMASK(3, 0)
-#define ERR_ARG2_FMASK GENMASK(7, 4)
-#define ERR_ARG1_FMASK GENMASK(11, 8)
-#define ERR_CODE_FMASK GENMASK(15, 12)
-#define ERR_VIRT_IDX_FMASK GENMASK(23, 19)
-#define ERR_TYPE_FMASK GENMASK(27, 24)
-#define ERR_EE_FMASK GENMASK(31, 28)
+/* ERROR_LOG register */
+enum gsi_error_log_field_id {
+ ERR_ARG3,
+ ERR_ARG2,
+ ERR_ARG1,
+ ERR_CODE,
+ ERR_VIRT_IDX,
+ ERR_TYPE,
+ ERR_EE,
+};
/** enum gsi_err_code - ERR_CODE field values in EE_ERR_LOG */
enum gsi_err_code {
@@ -400,13 +328,11 @@ enum gsi_err_type {
GSI_ERR_TYPE_EVT = 0x3,
};
-#define GSI_ERROR_LOG_CLR_OFFSET \
- (0x0001f210 + 0x4000 * GSI_EE_AP)
-
-#define GSI_CNTXT_SCRATCH_0_OFFSET \
- (0x0001f400 + 0x4000 * GSI_EE_AP)
-#define INTER_EE_RESULT_FMASK GENMASK(2, 0)
-#define GENERIC_EE_RESULT_FMASK GENMASK(7, 5)
+/* CNTXT_SCRATCH_0 register */
+enum gsi_cntxt_scratch_0_field_id {
+ INTER_EE_RESULT,
+ GENERIC_EE_RESULT,
+};
/** enum gsi_generic_ee_result - GENERIC_EE_RESULT field values in SCRATCH_0 */
enum gsi_generic_ee_result {
@@ -419,4 +345,34 @@ enum gsi_generic_ee_result {
GENERIC_EE_NO_RESOURCES = 0x7,
};
+extern const struct regs gsi_regs_v3_1;
+extern const struct regs gsi_regs_v3_5_1;
+extern const struct regs gsi_regs_v4_0;
+extern const struct regs gsi_regs_v4_5;
+extern const struct regs gsi_regs_v4_9;
+extern const struct regs gsi_regs_v4_11;
+
+/**
+ * gsi_reg() - Return the structure describing a GSI register
+ * @gsi: GSI pointer
+ * @reg_id: GSI register ID
+ */
+const struct reg *gsi_reg(struct gsi *gsi, enum gsi_reg_id reg_id);
+
+/**
+ * gsi_reg_init() - Perform GSI register initialization
+ * @gsi: GSI pointer
+ * @pdev: GSI (IPA) platform device
+ *
+ * Initialize GSI registers, including looking up and I/O mapping
+ * the "gsi" memory space.
+ */
+int gsi_reg_init(struct gsi *gsi, struct platform_device *pdev);
+
+/**
+ * gsi_reg_exit() - Inverse of gsi_reg_init()
+ * @gsi: GSI pointer
+ */
+void gsi_reg_exit(struct gsi *gsi);
+
#endif /* _GSI_REG_H_ */
diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
index 5372db58b5bd..f3355e040a9e 100644
--- a/drivers/net/ipa/ipa.h
+++ b/drivers/net/ipa/ipa.h
@@ -45,7 +45,6 @@ struct ipa_interrupt;
* @interrupt: IPA Interrupt information
* @uc_powered: true if power is active by proxy for microcontroller
* @uc_loaded: true after microcontroller has reported it's ready
- * @reg_addr: DMA address used for IPA register access
* @reg_virt: Virtual address used for IPA register access
* @regs: IPA register definitions
* @mem_addr: DMA address of IPA-local memory space
@@ -97,9 +96,8 @@ struct ipa {
bool uc_powered;
bool uc_loaded;
- dma_addr_t reg_addr;
void __iomem *reg_virt;
- const struct ipa_regs *regs;
+ const struct regs *regs;
dma_addr_t mem_addr;
void *mem_virt;
diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
index bb3dfa9a2bc8..f1419fbd776c 100644
--- a/drivers/net/ipa/ipa_cmd.c
+++ b/drivers/net/ipa/ipa_cmd.c
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
- * Copyright (C) 2019-2022 Linaro Ltd.
+ * Copyright (C) 2019-2023 Linaro Ltd.
*/
#include <linux/types.h>
@@ -94,11 +94,11 @@ struct ipa_cmd_register_write {
/* IPA_CMD_IP_PACKET_INIT */
struct ipa_cmd_ip_packet_init {
- u8 dest_endpoint;
+ u8 dest_endpoint; /* Full 8 bits used for IPA v5.0+ */
u8 reserved[7];
};
-/* Field masks for ipa_cmd_ip_packet_init dest_endpoint field */
+/* Field mask for ipa_cmd_ip_packet_init dest_endpoint field (unused v5.0+) */
#define IPA_PACKET_INIT_DEST_ENDPOINT_FMASK GENMASK(4, 0)
/* IPA_CMD_DMA_SHARED_MEM */
@@ -157,9 +157,14 @@ static void ipa_cmd_validate_build(void)
BUILD_BUG_ON(field_max(IP_FLTRT_FLAGS_HASH_ADDR_FMASK) !=
field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK));
- /* Valid endpoint numbers must fit in the IP packet init command */
- BUILD_BUG_ON(field_max(IPA_PACKET_INIT_DEST_ENDPOINT_FMASK) <
- IPA_ENDPOINT_MAX - 1);
+ /* Prior to IPA v5.0, we supported no more than 32 endpoints,
+ * and this was reflected in some 5-bit fields that held
+ * endpoint numbers. Starting with IPA v5.0, the widths of
+ * these fields were extended to 8 bits, meaning up to 256
+ * endpoints. If the driver claims to support more than
+ * that it's an error.
+ */
+ BUILD_BUG_ON(IPA_ENDPOINT_MAX - 1 > U8_MAX);
}
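The arithmetic behind the revised check, for reference:

	/* Old 5-bit field: field_max(GENMASK(4, 0)) == 31, so at most
	 * 32 endpoints (numbered 0-31) could be named in the command.
	 * IPA v5.0+ 8-bit field: U8_MAX == 255, so up to 256 endpoints.
	 * IPA_ENDPOINT_MAX - 1 must fit in whichever field is in use.
	 */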
/* Validate a memory region holding a table */
@@ -282,7 +287,7 @@ static bool ipa_cmd_register_write_offset_valid(struct ipa *ipa,
/* Check whether offsets passed to register_write are valid */
static bool ipa_cmd_register_write_valid(struct ipa *ipa)
{
- const struct ipa_reg *reg;
+ const struct reg *reg;
const char *name;
u32 offset;
@@ -290,8 +295,12 @@ static bool ipa_cmd_register_write_valid(struct ipa *ipa)
* offset will fit in a register write IPA immediate command.
*/
if (ipa_table_hash_support(ipa)) {
- reg = ipa_reg(ipa, FILT_ROUT_HASH_FLUSH);
- offset = ipa_reg_offset(reg);
+ if (ipa->version < IPA_VERSION_5_0)
+ reg = ipa_reg(ipa, FILT_ROUT_HASH_FLUSH);
+ else
+ reg = ipa_reg(ipa, FILT_ROUT_CACHE_FLUSH);
+
+ offset = reg_offset(reg);
name = "filter/route hash flush";
if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
return false;
@@ -305,7 +314,7 @@ static bool ipa_cmd_register_write_valid(struct ipa *ipa)
* fits in the register write command field(s) that must hold it.
*/
reg = ipa_reg(ipa, ENDP_STATUS);
- offset = ipa_reg_n_offset(reg, IPA_ENDPOINT_COUNT - 1);
+ offset = reg_n_offset(reg, IPA_ENDPOINT_COUNT - 1);
name = "maximal endpoint status";
if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
return false;
@@ -486,8 +495,13 @@ static void ipa_cmd_ip_packet_init_add(struct gsi_trans *trans, u8 endpoint_id)
cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
payload = &cmd_payload->ip_packet_init;
- payload->dest_endpoint = u8_encode_bits(endpoint_id,
- IPA_PACKET_INIT_DEST_ENDPOINT_FMASK);
+ if (ipa->version < IPA_VERSION_5_0) {
+ payload->dest_endpoint =
+ u8_encode_bits(endpoint_id,
+ IPA_PACKET_INIT_DEST_ENDPOINT_FMASK);
+ } else {
+ payload->dest_endpoint = endpoint_id;
+ }
gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
opcode);
diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index 136932464261..2ee80ed140b7 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
- * Copyright (C) 2019-2022 Linaro Ltd.
+ * Copyright (C) 2019-2023 Linaro Ltd.
*/
#include <linux/types.h>
@@ -34,41 +34,181 @@
#define IPA_ENDPOINT_RESET_AGGR_RETRY_MAX 3
-/** enum ipa_status_opcode - status element opcode hardware values */
-enum ipa_status_opcode {
- IPA_STATUS_OPCODE_PACKET = 0x01,
- IPA_STATUS_OPCODE_DROPPED_PACKET = 0x04,
- IPA_STATUS_OPCODE_SUSPENDED_PACKET = 0x08,
- IPA_STATUS_OPCODE_PACKET_2ND_PASS = 0x40,
+/** enum ipa_status_opcode - IPA status opcode field hardware values */
+enum ipa_status_opcode { /* *Not* a bitmask */
+ IPA_STATUS_OPCODE_PACKET = 1,
+ IPA_STATUS_OPCODE_NEW_RULE_PACKET = 2,
+ IPA_STATUS_OPCODE_DROPPED_PACKET = 4,
+ IPA_STATUS_OPCODE_SUSPENDED_PACKET = 8,
+ IPA_STATUS_OPCODE_LOG = 16,
+ IPA_STATUS_OPCODE_DCMP = 32,
+ IPA_STATUS_OPCODE_PACKET_2ND_PASS = 64,
};
-/** enum ipa_status_exception - status element exception type */
-enum ipa_status_exception {
+/** enum ipa_status_exception - IPA status exception field hardware values */
+enum ipa_status_exception { /* *Not* a bitmask */
/* 0 means no exception */
- IPA_STATUS_EXCEPTION_DEAGGR = 0x01,
+ IPA_STATUS_EXCEPTION_DEAGGR = 1,
+ IPA_STATUS_EXCEPTION_IPTYPE = 4,
+ IPA_STATUS_EXCEPTION_PACKET_LENGTH = 8,
+ IPA_STATUS_EXCEPTION_FRAG_RULE_MISS = 16,
+ IPA_STATUS_EXCEPTION_SW_FILTER = 32,
+ IPA_STATUS_EXCEPTION_NAT = 64, /* IPv4 */
+ IPA_STATUS_EXCEPTION_IPV6_CONN_TRACK = 64, /* IPv6 */
+ IPA_STATUS_EXCEPTION_UC = 128,
+ IPA_STATUS_EXCEPTION_INVALID_ENDPOINT = 129,
+ IPA_STATUS_EXCEPTION_HEADER_INSERT = 136,
+ IPA_STATUS_EXCEPTION_CHEKCSUM = 229,
};
-/* Status element provided by hardware */
-struct ipa_status {
- u8 opcode; /* enum ipa_status_opcode */
- u8 exception; /* enum ipa_status_exception */
- __le16 mask;
- __le16 pkt_len;
- u8 endp_src_idx;
- u8 endp_dst_idx;
- __le32 metadata;
- __le32 flags1;
- __le64 flags2;
- __le32 flags3;
- __le32 flags4;
+/** enum ipa_status_mask - IPA status mask field bitmask hardware values */
+enum ipa_status_mask {
+ IPA_STATUS_MASK_FRAG_PROCESS = BIT(0),
+ IPA_STATUS_MASK_FILT_PROCESS = BIT(1),
+ IPA_STATUS_MASK_NAT_PROCESS = BIT(2),
+ IPA_STATUS_MASK_ROUTE_PROCESS = BIT(3),
+ IPA_STATUS_MASK_TAG_VALID = BIT(4),
+ IPA_STATUS_MASK_FRAGMENT = BIT(5),
+ IPA_STATUS_MASK_FIRST_FRAGMENT = BIT(6),
+ IPA_STATUS_MASK_V4 = BIT(7),
+ IPA_STATUS_MASK_CKSUM_PROCESS = BIT(8),
+ IPA_STATUS_MASK_AGGR_PROCESS = BIT(9),
+ IPA_STATUS_MASK_DEST_EOT = BIT(10),
+ IPA_STATUS_MASK_DEAGGR_PROCESS = BIT(11),
+ IPA_STATUS_MASK_DEAGG_FIRST = BIT(12),
+ IPA_STATUS_MASK_SRC_EOT = BIT(13),
+ IPA_STATUS_MASK_PREV_EOT = BIT(14),
+ IPA_STATUS_MASK_BYTE_LIMIT = BIT(15),
};
-/* Field masks for struct ipa_status structure fields */
-#define IPA_STATUS_MASK_TAG_VALID_FMASK GENMASK(4, 4)
-#define IPA_STATUS_SRC_IDX_FMASK GENMASK(4, 0)
-#define IPA_STATUS_DST_IDX_FMASK GENMASK(4, 0)
-#define IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK GENMASK(31, 22)
-#define IPA_STATUS_FLAGS2_TAG_FMASK GENMASK_ULL(63, 16)
+/* Special IPA filter/router rule field value indicating "rule miss" */
+#define IPA_STATUS_RULE_MISS 0x3ff /* 10-bit filter/router rule fields */
+
+/** The IPA status nat_type field uses enum ipa_nat_type hardware values */
+
+/* enum ipa_status_field_id - IPA packet status structure field identifiers */
+enum ipa_status_field_id {
+ STATUS_OPCODE, /* enum ipa_status_opcode */
+ STATUS_EXCEPTION, /* enum ipa_status_exception */
+ STATUS_MASK, /* enum ipa_status_mask (bitmask) */
+ STATUS_LENGTH,
+ STATUS_SRC_ENDPOINT,
+ STATUS_DST_ENDPOINT,
+ STATUS_METADATA,
+ STATUS_FILTER_LOCAL, /* Boolean */
+ STATUS_FILTER_HASH, /* Boolean */
+ STATUS_FILTER_GLOBAL, /* Boolean */
+ STATUS_FILTER_RETAIN, /* Boolean */
+ STATUS_FILTER_RULE_INDEX,
+ STATUS_ROUTER_LOCAL, /* Boolean */
+ STATUS_ROUTER_HASH, /* Boolean */
+ STATUS_UCP, /* Boolean */
+ STATUS_ROUTER_TABLE,
+ STATUS_ROUTER_RULE_INDEX,
+ STATUS_NAT_HIT, /* Boolean */
+ STATUS_NAT_INDEX,
+ STATUS_NAT_TYPE, /* enum ipa_nat_type */
+ STATUS_TAG_LOW32, /* Low-order 32 bits of 48-bit tag */
+ STATUS_TAG_HIGH16, /* High-order 16 bits of 48-bit tag */
+ STATUS_SEQUENCE,
+ STATUS_TIME_OF_DAY,
+ STATUS_HEADER_LOCAL, /* Boolean */
+ STATUS_HEADER_OFFSET,
+ STATUS_FRAG_HIT, /* Boolean */
+ STATUS_FRAG_RULE_INDEX,
+};
+
+/* Size in bytes of an IPA packet status structure */
+#define IPA_STATUS_SIZE sizeof(__le32[8])
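The size follows from the decoder below: ipa_status_extract() indexes the status as word[0] through word[7], eight little-endian 32-bit words, 32 bytes in all.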
+
+/* IPA status structure decoder; looks up field values for a structure */
+static u32 ipa_status_extract(struct ipa *ipa, const void *data,
+ enum ipa_status_field_id field)
+{
+ enum ipa_version version = ipa->version;
+ const __le32 *word = data;
+
+ switch (field) {
+ case STATUS_OPCODE:
+ return le32_get_bits(word[0], GENMASK(7, 0));
+ case STATUS_EXCEPTION:
+ return le32_get_bits(word[0], GENMASK(15, 8));
+ case STATUS_MASK:
+ return le32_get_bits(word[0], GENMASK(31, 16));
+ case STATUS_LENGTH:
+ return le32_get_bits(word[1], GENMASK(15, 0));
+ case STATUS_SRC_ENDPOINT:
+ if (version < IPA_VERSION_5_0)
+ return le32_get_bits(word[1], GENMASK(20, 16));
+ return le32_get_bits(word[1], GENMASK(23, 16));
+ /* Status word 1, bits 21-23 are reserved (not IPA v5.0+) */
+ /* Status word 1, bits 24-26 are reserved (IPA v5.0+) */
+ case STATUS_DST_ENDPOINT:
+ if (version < IPA_VERSION_5_0)
+ return le32_get_bits(word[1], GENMASK(28, 24));
+ return le32_get_bits(word[7], GENMASK(23, 16));
+ /* Status word 1, bits 29-31 are reserved */
+ case STATUS_METADATA:
+ return le32_to_cpu(word[2]);
+ case STATUS_FILTER_LOCAL:
+ return le32_get_bits(word[3], GENMASK(0, 0));
+ case STATUS_FILTER_HASH:
+ return le32_get_bits(word[3], GENMASK(1, 1));
+ case STATUS_FILTER_GLOBAL:
+ return le32_get_bits(word[3], GENMASK(2, 2));
+ case STATUS_FILTER_RETAIN:
+ return le32_get_bits(word[3], GENMASK(3, 3));
+ case STATUS_FILTER_RULE_INDEX:
+ return le32_get_bits(word[3], GENMASK(13, 4));
+ /* ROUTER_TABLE is in word 3, bits 14-21 (IPA v5.0+) */
+ case STATUS_ROUTER_LOCAL:
+ if (version < IPA_VERSION_5_0)
+ return le32_get_bits(word[3], GENMASK(14, 14));
+ return le32_get_bits(word[1], GENMASK(27, 27));
+ case STATUS_ROUTER_HASH:
+ if (version < IPA_VERSION_5_0)
+ return le32_get_bits(word[3], GENMASK(15, 15));
+ return le32_get_bits(word[1], GENMASK(28, 28));
+ case STATUS_UCP:
+ if (version < IPA_VERSION_5_0)
+ return le32_get_bits(word[3], GENMASK(16, 16));
+ return le32_get_bits(word[7], GENMASK(31, 31));
+ case STATUS_ROUTER_TABLE:
+ if (version < IPA_VERSION_5_0)
+ return le32_get_bits(word[3], GENMASK(21, 17));
+ return le32_get_bits(word[3], GENMASK(21, 14));
+ case STATUS_ROUTER_RULE_INDEX:
+ return le32_get_bits(word[3], GENMASK(31, 22));
+ case STATUS_NAT_HIT:
+ return le32_get_bits(word[4], GENMASK(0, 0));
+ case STATUS_NAT_INDEX:
+ return le32_get_bits(word[4], GENMASK(13, 1));
+ case STATUS_NAT_TYPE:
+ return le32_get_bits(word[4], GENMASK(15, 14));
+ case STATUS_TAG_LOW32:
+ return le32_get_bits(word[4], GENMASK(31, 16)) |
+ (le32_get_bits(word[5], GENMASK(15, 0)) << 16);
+ case STATUS_TAG_HIGH16:
+ return le32_get_bits(word[5], GENMASK(31, 16));
+ case STATUS_SEQUENCE:
+ return le32_get_bits(word[6], GENMASK(7, 0));
+ case STATUS_TIME_OF_DAY:
+ return le32_get_bits(word[6], GENMASK(31, 8));
+ case STATUS_HEADER_LOCAL:
+ return le32_get_bits(word[7], GENMASK(0, 0));
+ case STATUS_HEADER_OFFSET:
+ return le32_get_bits(word[7], GENMASK(10, 1));
+ case STATUS_FRAG_HIT:
+ return le32_get_bits(word[7], GENMASK(11, 11));
+ case STATUS_FRAG_RULE_INDEX:
+ return le32_get_bits(word[7], GENMASK(15, 12));
+ /* Status word 7, bits 16-30 are reserved */
+ /* Status word 7, bit 31 is reserved (not IPA v5.0+) */
+ default:
+ WARN(true, "%s: bad field_id %u\n", __func__, field);
+ return 0;
+ }
+}
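
For illustration (not part of this patch), a caller decodes a raw status buffer by asking for only the fields it needs; ipa_status_dump() below is a hypothetical debugging helper built solely on ipa_status_extract() as defined above:

/* Hypothetical debugging helper -- not part of this patch */
static void ipa_status_dump(struct ipa *ipa, const void *data)
{
        u32 opcode = ipa_status_extract(ipa, data, STATUS_OPCODE);
        u32 length = ipa_status_extract(ipa, data, STATUS_LENGTH);
        u32 endpoint_id = ipa_status_extract(ipa, data, STATUS_DST_ENDPOINT);

        dev_dbg(&ipa->pdev->dev, "status: opcode %u, length %u, dst %u\n",
                opcode, length, endpoint_id);
}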
/* Compute the aggregation size value to use for a given buffer size */
static u32 ipa_aggr_size_kb(u32 rx_buffer_size, bool aggr_hard_limit)
@@ -101,7 +241,7 @@ static bool ipa_endpoint_data_valid_one(struct ipa *ipa, u32 count,
if (!data->toward_ipa) {
const struct ipa_endpoint_rx *rx_config;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 buffer_size;
u32 aggr_size;
u32 limit;
@@ -164,7 +304,7 @@ static bool ipa_endpoint_data_valid_one(struct ipa *ipa, u32 count,
rx_config->aggr_hard_limit);
reg = ipa_reg(ipa, ENDP_INIT_AGGR);
- limit = ipa_reg_field_max(reg, BYTE_LIMIT);
+ limit = reg_field_max(reg, BYTE_LIMIT);
if (aggr_size > limit) {
dev_err(dev, "aggregated size too large for RX endpoint %u (%u KB > %u KB)\n",
data->endpoint_id, aggr_size, limit);
@@ -307,7 +447,7 @@ static bool
ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay)
{
struct ipa *ipa = endpoint->ipa;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 field_id;
u32 offset;
bool state;
@@ -320,11 +460,11 @@ ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay)
WARN_ON(ipa->version >= IPA_VERSION_4_0);
reg = ipa_reg(ipa, ENDP_INIT_CTRL);
- offset = ipa_reg_n_offset(reg, endpoint->endpoint_id);
+ offset = reg_n_offset(reg, endpoint->endpoint_id);
val = ioread32(ipa->reg_virt + offset);
field_id = endpoint->toward_ipa ? ENDP_DELAY : ENDP_SUSPEND;
- mask = ipa_reg_bit(reg, field_id);
+ mask = reg_bit(reg, field_id);
state = !!(val & mask);
@@ -353,13 +493,13 @@ static bool ipa_endpoint_aggr_active(struct ipa_endpoint *endpoint)
u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
u32 unit = endpoint_id / 32;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val;
WARN_ON(!test_bit(endpoint_id, ipa->available));
reg = ipa_reg(ipa, STATE_AGGR_ACTIVE);
- val = ioread32(ipa->reg_virt + ipa_reg_n_offset(reg, unit));
+ val = ioread32(ipa->reg_virt + reg_n_offset(reg, unit));
return !!(val & BIT(endpoint_id % 32));
}
@@ -370,12 +510,12 @@ static void ipa_endpoint_force_close(struct ipa_endpoint *endpoint)
u32 mask = BIT(endpoint_id % 32);
struct ipa *ipa = endpoint->ipa;
u32 unit = endpoint_id / 32;
- const struct ipa_reg *reg;
+ const struct reg *reg;
WARN_ON(!test_bit(endpoint_id, ipa->available));
reg = ipa_reg(ipa, AGGR_FORCE_CLOSE);
- iowrite32(mask, ipa->reg_virt + ipa_reg_n_offset(reg, unit));
+ iowrite32(mask, ipa->reg_virt + reg_n_offset(reg, unit));
}
/**
@@ -473,7 +613,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
for_each_set_bit(endpoint_id, ipa->defined, ipa->endpoint_count) {
struct ipa_endpoint *endpoint;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 offset;
/* We only reset modem TX endpoints */
@@ -482,7 +622,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
continue;
reg = ipa_reg(ipa, ENDP_STATUS);
- offset = ipa_reg_n_offset(reg, endpoint_id);
+ offset = reg_n_offset(reg, endpoint_id);
/* Value written is 0, and all bits are updated. That
* means status is disabled on the endpoint, and as a
@@ -505,7 +645,7 @@ static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint)
u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
enum ipa_cs_offload_en enabled;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val = 0;
reg = ipa_reg(ipa, ENDP_INIT_CFG);
@@ -518,7 +658,7 @@ static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint)
/* Checksum header offset is in 4-byte units */
off = sizeof(struct rmnet_map_header) / sizeof(u32);
- val |= ipa_reg_encode(reg, CS_METADATA_HDR_OFFSET, off);
+ val |= reg_encode(reg, CS_METADATA_HDR_OFFSET, off);
enabled = version < IPA_VERSION_4_5
? IPA_CS_OFFLOAD_UL
@@ -531,26 +671,26 @@ static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint)
} else {
enabled = IPA_CS_OFFLOAD_NONE;
}
- val |= ipa_reg_encode(reg, CS_OFFLOAD_EN, enabled);
+ val |= reg_encode(reg, CS_OFFLOAD_EN, enabled);
/* CS_GEN_QMB_MASTER_SEL is 0 */
- iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id));
+ iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id));
}
static void ipa_endpoint_init_nat(struct ipa_endpoint *endpoint)
{
u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val;
if (!endpoint->toward_ipa)
return;
reg = ipa_reg(ipa, ENDP_INIT_NAT);
- val = ipa_reg_encode(reg, NAT_EN, IPA_NAT_BYPASS);
+ val = reg_encode(reg, NAT_EN, IPA_NAT_TYPE_BYPASS);
- iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id));
+ iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id));
}
static u32
@@ -576,13 +716,13 @@ ipa_qmap_header_size(enum ipa_version version, struct ipa_endpoint *endpoint)
/* Encoded value for ENDP_INIT_HDR register HDR_LEN* field(s) */
static u32 ipa_header_size_encode(enum ipa_version version,
- const struct ipa_reg *reg, u32 header_size)
+ const struct reg *reg, u32 header_size)
{
- u32 field_max = ipa_reg_field_max(reg, HDR_LEN);
+ u32 field_max = reg_field_max(reg, HDR_LEN);
u32 val;
/* We know field_max can be used as a mask (2^n - 1) */
- val = ipa_reg_encode(reg, HDR_LEN, header_size & field_max);
+ val = reg_encode(reg, HDR_LEN, header_size & field_max);
if (version < IPA_VERSION_4_5) {
WARN_ON(header_size > field_max);
return val;
@@ -590,21 +730,21 @@ static u32 ipa_header_size_encode(enum ipa_version version,
/* IPA v4.5 adds a few more most-significant bits */
header_size >>= hweight32(field_max);
- WARN_ON(header_size > ipa_reg_field_max(reg, HDR_LEN_MSB));
- val |= ipa_reg_encode(reg, HDR_LEN_MSB, header_size);
+ WARN_ON(header_size > reg_field_max(reg, HDR_LEN_MSB));
+ val |= reg_encode(reg, HDR_LEN_MSB, header_size);
return val;
}
/* Encoded value for ENDP_INIT_HDR register OFST_METADATA* field(s) */
static u32 ipa_metadata_offset_encode(enum ipa_version version,
- const struct ipa_reg *reg, u32 offset)
+ const struct reg *reg, u32 offset)
{
- u32 field_max = ipa_reg_field_max(reg, HDR_OFST_METADATA);
+ u32 field_max = reg_field_max(reg, HDR_OFST_METADATA);
u32 val;
/* We know field_max can be used as a mask (2^n - 1) */
- val = ipa_reg_encode(reg, HDR_OFST_METADATA, offset);
+ val = reg_encode(reg, HDR_OFST_METADATA, offset);
if (version < IPA_VERSION_4_5) {
WARN_ON(offset > field_max);
return val;
@@ -612,8 +752,8 @@ static u32 ipa_metadata_offset_encode(enum ipa_version version,
/* IPA v4.5 adds a few more most-significant bits */
offset >>= hweight32(field_max);
- WARN_ON(offset > ipa_reg_field_max(reg, HDR_OFST_METADATA_MSB));
- val |= ipa_reg_encode(reg, HDR_OFST_METADATA_MSB, offset);
+ WARN_ON(offset > reg_field_max(reg, HDR_OFST_METADATA_MSB));
+ val |= reg_encode(reg, HDR_OFST_METADATA_MSB, offset);
return val;
}
@@ -643,7 +783,7 @@ static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
{
u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val = 0;
reg = ipa_reg(ipa, ENDP_INIT_HDR);
@@ -666,13 +806,13 @@ static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
off = offsetof(struct rmnet_map_header, pkt_len);
/* Upper bits are stored in HDR_EXT with IPA v4.5 */
if (version >= IPA_VERSION_4_5)
- off &= ipa_reg_field_max(reg, HDR_OFST_PKT_SIZE);
+ off &= reg_field_max(reg, HDR_OFST_PKT_SIZE);
- val |= ipa_reg_bit(reg, HDR_OFST_PKT_SIZE_VALID);
- val |= ipa_reg_encode(reg, HDR_OFST_PKT_SIZE, off);
+ val |= reg_bit(reg, HDR_OFST_PKT_SIZE_VALID);
+ val |= reg_encode(reg, HDR_OFST_PKT_SIZE, off);
}
/* For QMAP TX, metadata offset is 0 (modem assumes this) */
- val |= ipa_reg_bit(reg, HDR_OFST_METADATA_VALID);
+ val |= reg_bit(reg, HDR_OFST_METADATA_VALID);
/* HDR_ADDITIONAL_CONST_LEN is 0; (RX only) */
/* HDR_A5_MUX is 0 */
@@ -680,7 +820,7 @@ static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
/* HDR_METADATA_REG_VALID is 0 (TX only, version < v4.5) */
}
- iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id));
+ iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id));
}
static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
@@ -688,13 +828,13 @@ static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
u32 pad_align = endpoint->config.rx.pad_align;
u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val = 0;
reg = ipa_reg(ipa, ENDP_INIT_HDR_EXT);
if (endpoint->config.qmap) {
/* We have a header, so we must specify its endianness */
- val |= ipa_reg_bit(reg, HDR_ENDIANNESS); /* big endian */
+ val |= reg_bit(reg, HDR_ENDIANNESS); /* big endian */
/* A QMAP header contains a 6 bit pad field at offset 0.
* The RMNet driver assumes this field is meaningful in
@@ -704,16 +844,16 @@ static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
* (although 0) should be ignored.
*/
if (!endpoint->toward_ipa) {
- val |= ipa_reg_bit(reg, HDR_TOTAL_LEN_OR_PAD_VALID);
+ val |= reg_bit(reg, HDR_TOTAL_LEN_OR_PAD_VALID);
/* HDR_TOTAL_LEN_OR_PAD is 0 (pad, not total_len) */
- val |= ipa_reg_bit(reg, HDR_PAYLOAD_LEN_INC_PADDING);
+ val |= reg_bit(reg, HDR_PAYLOAD_LEN_INC_PADDING);
/* HDR_TOTAL_LEN_OR_PAD_OFFSET is 0 */
}
}
/* HDR_PAYLOAD_LEN_INC_PADDING is 0 */
if (!endpoint->toward_ipa)
- val |= ipa_reg_encode(reg, HDR_PAD_TO_ALIGNMENT, pad_align);
+ val |= reg_encode(reg, HDR_PAD_TO_ALIGNMENT, pad_align);
/* IPA v4.5 adds some most-significant bits to a few fields,
* two of which are defined in the HDR (not HDR_EXT) register.
@@ -721,25 +861,25 @@ static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
if (ipa->version >= IPA_VERSION_4_5) {
/* HDR_TOTAL_LEN_OR_PAD_OFFSET is 0, so MSB is 0 */
if (endpoint->config.qmap && !endpoint->toward_ipa) {
- u32 mask = ipa_reg_field_max(reg, HDR_OFST_PKT_SIZE);
+ u32 mask = reg_field_max(reg, HDR_OFST_PKT_SIZE);
u32 off; /* Field offset within header */
off = offsetof(struct rmnet_map_header, pkt_len);
/* Low bits are in the ENDP_INIT_HDR register */
off >>= hweight32(mask);
- val |= ipa_reg_encode(reg, HDR_OFST_PKT_SIZE_MSB, off);
+ val |= reg_encode(reg, HDR_OFST_PKT_SIZE_MSB, off);
/* HDR_ADDITIONAL_CONST_LEN is 0 so MSB is 0 */
}
}
- iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id));
+ iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id));
}
static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
{
u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val = 0;
u32 offset;
@@ -747,7 +887,7 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
return; /* Register not valid for TX endpoints */
reg = ipa_reg(ipa, ENDP_INIT_HDR_METADATA_MASK);
- offset = ipa_reg_n_offset(reg, endpoint_id);
+ offset = reg_n_offset(reg, endpoint_id);
/* Note that HDR_ENDIANNESS indicates big endian header fields */
if (endpoint->config.qmap)
@@ -759,7 +899,7 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint)
{
struct ipa *ipa = endpoint->ipa;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 offset;
u32 val;
@@ -771,82 +911,90 @@ static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint)
enum ipa_endpoint_name name = endpoint->config.dma_endpoint;
u32 dma_endpoint_id = ipa->name_map[name]->endpoint_id;
- val = ipa_reg_encode(reg, ENDP_MODE, IPA_DMA);
- val |= ipa_reg_encode(reg, DEST_PIPE_INDEX, dma_endpoint_id);
+ val = reg_encode(reg, ENDP_MODE, IPA_DMA);
+ val |= reg_encode(reg, DEST_PIPE_INDEX, dma_endpoint_id);
} else {
- val = ipa_reg_encode(reg, ENDP_MODE, IPA_BASIC);
+ val = reg_encode(reg, ENDP_MODE, IPA_BASIC);
}
/* All other bits unspecified (and 0) */
- offset = ipa_reg_n_offset(reg, endpoint->endpoint_id);
+ offset = reg_n_offset(reg, endpoint->endpoint_id);
iowrite32(val, ipa->reg_virt + offset);
}
-/* For IPA v4.5+, times are expressed using Qtime. The AP uses one of two
- * pulse generators (0 and 1) to measure elapsed time. In ipa_qtime_config()
- * they're configured to have granularity 100 usec and 1 msec, respectively.
- *
- * The return value is the positive or negative Qtime value to use to
- * express the (microsecond) time provided. A positive return value
- * means pulse generator 0 can be used; otherwise use pulse generator 1.
+/* For IPA v4.5+, times are expressed using Qtime. A time is represented
+ * at one of several available granularities, which are configured in
+ * ipa_qtime_config(). Three (or, starting with IPA v5.0, four) pulse
+ * generators are set up with different "tick" periods. A Qtime value
+ * encodes a tick count along with an indication of a pulse generator
+ * (which has a fixed tick period). Two pulse generators are always
+ * available to the AP; a third is available starting with IPA v5.0.
+ * This function determines which pulse generator most accurately
+ * represents the time period provided, and returns the tick count to
+ * use to represent that time.
*/
-static int ipa_qtime_val(u32 microseconds, u32 max)
-{
- u32 val;
-
- /* Use 100 microsecond granularity if possible */
- val = DIV_ROUND_CLOSEST(microseconds, 100);
- if (val <= max)
- return (int)val;
-
- /* Have to use pulse generator 1 (millisecond granularity) */
- val = DIV_ROUND_CLOSEST(microseconds, 1000);
- WARN_ON(val > max);
+static u32
+ipa_qtime_val(struct ipa *ipa, u32 microseconds, u32 max, u32 *select)
+{
+ u32 which = 0;
+ u32 ticks;
+
+ /* Pulse generator 0 has 100 microsecond granularity */
+ ticks = DIV_ROUND_CLOSEST(microseconds, 100);
+ if (ticks <= max)
+ goto out;
+
+ /* Pulse generator 1 has millisecond granularity */
+ which = 1;
+ ticks = DIV_ROUND_CLOSEST(microseconds, 1000);
+ if (ticks <= max)
+ goto out;
+
+ if (ipa->version >= IPA_VERSION_5_0) {
+ /* Pulse generator 2 has 10 millisecond granularity */
+ which = 2;
+ ticks = DIV_ROUND_CLOSEST(microseconds, 10000);
+ }
+ WARN_ON(ticks > max);
+out:
+ *select = which;
- return (int)-val;
+ return ticks;
}
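
A minimal userspace sketch of the selection math above, for a pre-v5.0 IPA with two pulse generators; it mirrors rather than reuses the kernel helper, and the 5-bit limit field is an assumption for illustration:

#include <assert.h>

static unsigned int qtime_val(unsigned int usec, unsigned int max,
                              unsigned int *select)
{
        unsigned int ticks = (usec + 50) / 100;   /* 100 us granularity */

        *select = 0;
        if (ticks <= max)
                return ticks;

        *select = 1;                              /* 1 ms granularity */
        return (usec + 500) / 1000;
}

int main(void)
{
        unsigned int select;

        /* 20 ms with a 5-bit limit field (max 31): generator 0 would
         * need 200 ticks, so generator 1 (20 ticks) is chosen.
         */
        assert(qtime_val(20000, 31, &select) == 20);
        assert(select == 1);
        return 0;
}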
/* Encode the aggregation timer limit (microseconds) based on IPA version */
-static u32 aggr_time_limit_encode(struct ipa *ipa, const struct ipa_reg *reg,
+static u32 aggr_time_limit_encode(struct ipa *ipa, const struct reg *reg,
u32 microseconds)
{
+ u32 ticks;
u32 max;
- u32 val;
if (!microseconds)
return 0; /* Nothing to compute if time limit is 0 */
- max = ipa_reg_field_max(reg, TIME_LIMIT);
+ max = reg_field_max(reg, TIME_LIMIT);
if (ipa->version >= IPA_VERSION_4_5) {
- u32 gran_sel;
- int ret;
-
- /* Compute the Qtime limit value to use */
- ret = ipa_qtime_val(microseconds, max);
- if (ret < 0) {
- val = -ret;
- gran_sel = ipa_reg_bit(reg, AGGR_GRAN_SEL);
- } else {
- val = ret;
- gran_sel = 0;
- }
+ u32 select;
- return gran_sel | ipa_reg_encode(reg, TIME_LIMIT, val);
+ ticks = ipa_qtime_val(ipa, microseconds, max, &select);
+
+ return reg_encode(reg, AGGR_GRAN_SEL, select) |
+ reg_encode(reg, TIME_LIMIT, ticks);
}
/* We program aggregation granularity in ipa_hardware_config() */
- val = DIV_ROUND_CLOSEST(microseconds, IPA_AGGR_GRANULARITY);
- WARN(val > max, "aggr_time_limit too large (%u > %u usec)\n",
+ ticks = DIV_ROUND_CLOSEST(microseconds, IPA_AGGR_GRANULARITY);
+ WARN(ticks > max, "aggr_time_limit too large (%u > %u usec)\n",
microseconds, max * IPA_AGGR_GRANULARITY);
- return ipa_reg_encode(reg, TIME_LIMIT, val);
+ return reg_encode(reg, TIME_LIMIT, ticks);
}
static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
{
u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val = 0;
reg = ipa_reg(ipa, ENDP_INIT_AGGR);
@@ -857,13 +1005,13 @@ static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
u32 limit;
rx_config = &endpoint->config.rx;
- val |= ipa_reg_encode(reg, AGGR_EN, IPA_ENABLE_AGGR);
- val |= ipa_reg_encode(reg, AGGR_TYPE, IPA_GENERIC);
+ val |= reg_encode(reg, AGGR_EN, IPA_ENABLE_AGGR);
+ val |= reg_encode(reg, AGGR_TYPE, IPA_GENERIC);
buffer_size = rx_config->buffer_size;
limit = ipa_aggr_size_kb(buffer_size - NET_SKB_PAD,
rx_config->aggr_hard_limit);
- val |= ipa_reg_encode(reg, BYTE_LIMIT, limit);
+ val |= reg_encode(reg, BYTE_LIMIT, limit);
limit = rx_config->aggr_time_limit;
val |= aggr_time_limit_encode(ipa, reg, limit);
@@ -871,20 +1019,20 @@ static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
/* AGGR_PKT_LIMIT is 0 (unlimited) */
if (rx_config->aggr_close_eof)
- val |= ipa_reg_bit(reg, SW_EOF_ACTIVE);
+ val |= reg_bit(reg, SW_EOF_ACTIVE);
} else {
- val |= ipa_reg_encode(reg, AGGR_EN, IPA_ENABLE_DEAGGR);
- val |= ipa_reg_encode(reg, AGGR_TYPE, IPA_QCMAP);
+ val |= reg_encode(reg, AGGR_EN, IPA_ENABLE_DEAGGR);
+ val |= reg_encode(reg, AGGR_TYPE, IPA_QCMAP);
/* other fields ignored */
}
/* AGGR_FORCE_CLOSE is 0 */
/* AGGR_GRAN_SEL is 0 for IPA v4.5 */
} else {
- val |= ipa_reg_encode(reg, AGGR_EN, IPA_BYPASS_AGGR);
+ val |= reg_encode(reg, AGGR_EN, IPA_BYPASS_AGGR);
/* other fields ignored */
}
- iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id));
+ iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id));
}
/* The head-of-line blocking timer is defined as a tick count. For
@@ -895,7 +1043,7 @@ static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
* Return the encoded value representing the timeout period provided
* that should be written to the ENDP_INIT_HOL_BLOCK_TIMER register.
*/
-static u32 hol_block_timer_encode(struct ipa *ipa, const struct ipa_reg *reg,
+static u32 hol_block_timer_encode(struct ipa *ipa, const struct reg *reg,
u32 microseconds)
{
u32 width;
@@ -909,21 +1057,14 @@ static u32 hol_block_timer_encode(struct ipa *ipa, const struct ipa_reg *reg,
return 0; /* Nothing to compute if timer period is 0 */
if (ipa->version >= IPA_VERSION_4_5) {
- u32 max = ipa_reg_field_max(reg, TIMER_LIMIT);
- u32 gran_sel;
- int ret;
-
- /* Compute the Qtime limit value to use */
- ret = ipa_qtime_val(microseconds, max);
- if (ret < 0) {
- val = -ret;
- gran_sel = ipa_reg_bit(reg, TIMER_GRAN_SEL);
- } else {
- val = ret;
- gran_sel = 0;
- }
+ u32 max = reg_field_max(reg, TIMER_LIMIT);
+ u32 select;
+ u32 ticks;
+
+ ticks = ipa_qtime_val(ipa, microseconds, max, &select);
- return gran_sel | ipa_reg_encode(reg, TIMER_LIMIT, val);
+ return reg_encode(reg, TIMER_GRAN_SEL, select) |
+ reg_encode(reg, TIMER_LIMIT, ticks);
}
/* Use 64 bit arithmetic to avoid overflow */
@@ -931,11 +1072,11 @@ static u32 hol_block_timer_encode(struct ipa *ipa, const struct ipa_reg *reg,
ticks = DIV_ROUND_CLOSEST(microseconds * rate, 128 * USEC_PER_SEC);
/* We still need the result to fit into the field */
- WARN_ON(ticks > ipa_reg_field_max(reg, TIMER_BASE_VALUE));
+ WARN_ON(ticks > reg_field_max(reg, TIMER_BASE_VALUE));
/* IPA v3.5.1 through v4.1 just record the tick count */
if (ipa->version < IPA_VERSION_4_2)
- return ipa_reg_encode(reg, TIMER_BASE_VALUE, (u32)ticks);
+ return reg_encode(reg, TIMER_BASE_VALUE, (u32)ticks);
/* For IPA v4.2, the tick count is represented by base and
* scale fields within the 32-bit timer register, where:
@@ -946,7 +1087,7 @@ static u32 hol_block_timer_encode(struct ipa *ipa, const struct ipa_reg *reg,
* such that high bit is included.
*/
high = fls(ticks); /* 1..32 (or warning above) */
- width = hweight32(ipa_reg_fmask(reg, TIMER_BASE_VALUE));
+ width = hweight32(reg_fmask(reg, TIMER_BASE_VALUE));
scale = high > width ? high - width : 0;
if (scale) {
/* If we're scaling, round up to get a closer result */
@@ -956,8 +1097,8 @@ static u32 hol_block_timer_encode(struct ipa *ipa, const struct ipa_reg *reg,
scale++;
}
- val = ipa_reg_encode(reg, TIMER_SCALE, scale);
- val |= ipa_reg_encode(reg, TIMER_BASE_VALUE, (u32)ticks >> scale);
+ val = reg_encode(reg, TIMER_SCALE, scale);
+ val |= reg_encode(reg, TIMER_BASE_VALUE, (u32)ticks >> scale);
return val;
}
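
A standalone sketch of the IPA v4.2 base+scale round trip described above: the tick count is stored as (base << scale), losing low-order bits when it does not fit the base field. The 5-bit field width is assumed for illustration:

#include <assert.h>

static unsigned int fls32(unsigned int v)  /* 1-based highest set bit */
{
        unsigned int r = 0;

        while (v) {
                v >>= 1;
                r++;
        }
        return r;
}

/* Return the tick count the hardware would actually use */
static unsigned int base_scale_round_trip(unsigned int ticks,
                                          unsigned int width)
{
        unsigned int high = fls32(ticks);
        unsigned int scale = high > width ? high - width : 0;

        if (scale) {
                ticks += 1u << (scale - 1);  /* round to nearest */
                if (fls32(ticks) != high)    /* rounding carried out */
                        scale++;
        }
        return (ticks >> scale) << scale;
}

int main(void)
{
        /* 300 ticks, 5-bit base field: scale 4, base 19, so the
         * timer counts 304 (19 << 4) rather than exactly 300.
         */
        assert(base_scale_round_trip(300, 5) == 304);
        return 0;
}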
@@ -968,14 +1109,14 @@ static void ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint,
{
u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val;
/* This should only be changed when HOL_BLOCK_EN is disabled */
reg = ipa_reg(ipa, ENDP_INIT_HOL_BLOCK_TIMER);
val = hol_block_timer_encode(ipa, reg, microseconds);
- iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id));
+ iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id));
}
static void
@@ -983,13 +1124,13 @@ ipa_endpoint_init_hol_block_en(struct ipa_endpoint *endpoint, bool enable)
{
u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 offset;
u32 val;
reg = ipa_reg(ipa, ENDP_INIT_HOL_BLOCK_EN);
- offset = ipa_reg_n_offset(reg, endpoint_id);
- val = enable ? ipa_reg_bit(reg, HOL_BLOCK_EN) : 0;
+ offset = reg_n_offset(reg, endpoint_id);
+ val = enable ? reg_bit(reg, HOL_BLOCK_EN) : 0;
iowrite32(val, ipa->reg_virt + offset);
@@ -1030,7 +1171,7 @@ static void ipa_endpoint_init_deaggr(struct ipa_endpoint *endpoint)
{
u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val = 0;
if (!endpoint->toward_ipa)
@@ -1042,7 +1183,7 @@ static void ipa_endpoint_init_deaggr(struct ipa_endpoint *endpoint)
/* PACKET_OFFSET_LOCATION is ignored (not valid) */
/* MAX_PACKET_LEN is 0 (not enforced) */
- iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id));
+ iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id));
}
static void ipa_endpoint_init_rsrc_grp(struct ipa_endpoint *endpoint)
@@ -1050,20 +1191,20 @@ static void ipa_endpoint_init_rsrc_grp(struct ipa_endpoint *endpoint)
u32 resource_group = endpoint->config.resource_group;
u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val;
reg = ipa_reg(ipa, ENDP_INIT_RSRC_GRP);
- val = ipa_reg_encode(reg, ENDP_RSRC_GRP, resource_group);
+ val = reg_encode(reg, ENDP_RSRC_GRP, resource_group);
- iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id));
+ iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id));
}
static void ipa_endpoint_init_seq(struct ipa_endpoint *endpoint)
{
u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val;
if (!endpoint->toward_ipa)
@@ -1072,14 +1213,14 @@ static void ipa_endpoint_init_seq(struct ipa_endpoint *endpoint)
reg = ipa_reg(ipa, ENDP_INIT_SEQ);
/* Low-order byte configures primary packet processing */
- val = ipa_reg_encode(reg, SEQ_TYPE, endpoint->config.tx.seq_type);
+ val = reg_encode(reg, SEQ_TYPE, endpoint->config.tx.seq_type);
/* Second byte (if supported) configures replicated packet processing */
if (ipa->version < IPA_VERSION_4_5)
- val |= ipa_reg_encode(reg, SEQ_REP_TYPE,
- endpoint->config.tx.seq_rep_type);
+ val |= reg_encode(reg, SEQ_REP_TYPE,
+ endpoint->config.tx.seq_rep_type);
- iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id));
+ iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id));
}
/**
@@ -1129,12 +1270,12 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint)
{
u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val = 0;
reg = ipa_reg(ipa, ENDP_STATUS);
if (endpoint->config.status_enable) {
- val |= ipa_reg_bit(reg, STATUS_EN);
+ val |= reg_bit(reg, STATUS_EN);
if (endpoint->toward_ipa) {
enum ipa_endpoint_name name;
u32 status_endpoint_id;
@@ -1142,16 +1283,15 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint)
name = endpoint->config.tx.status_endpoint;
status_endpoint_id = ipa->name_map[name]->endpoint_id;
- val |= ipa_reg_encode(reg, STATUS_ENDP,
- status_endpoint_id);
+ val |= reg_encode(reg, STATUS_ENDP, status_endpoint_id);
}
- /* STATUS_LOCATION is 0, meaning status element precedes
- * packet (not present for IPA v4.5+)
+ /* STATUS_LOCATION is 0, meaning IPA packet status
+ * precedes the packet (not present for IPA v4.5+)
*/
/* STATUS_PKT_SUPPRESS_FMASK is 0 (not present for v4.0+) */
}
- iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id));
+ iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id));
}
static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint,
@@ -1302,8 +1442,8 @@ static bool ipa_endpoint_skb_build(struct ipa_endpoint *endpoint,
return skb != NULL;
}
-/* The format of a packet status element is the same for several status
- * types (opcodes). Other types aren't currently supported.
+/* The format of an IPA packet status structure is the same for several
+ * status types (opcodes). Other types aren't currently supported.
*/
static bool ipa_status_format_packet(enum ipa_status_opcode opcode)
{
@@ -1318,31 +1458,34 @@ static bool ipa_status_format_packet(enum ipa_status_opcode opcode)
}
}
-static bool ipa_endpoint_status_skip(struct ipa_endpoint *endpoint,
- const struct ipa_status *status)
+static bool
+ipa_endpoint_status_skip(struct ipa_endpoint *endpoint, const void *data)
{
+ struct ipa *ipa = endpoint->ipa;
+ enum ipa_status_opcode opcode;
u32 endpoint_id;
- if (!ipa_status_format_packet(status->opcode))
+ opcode = ipa_status_extract(ipa, data, STATUS_OPCODE);
+ if (!ipa_status_format_packet(opcode))
return true;
- if (!status->pkt_len)
- return true;
- endpoint_id = u8_get_bits(status->endp_dst_idx,
- IPA_STATUS_DST_IDX_FMASK);
+
+ endpoint_id = ipa_status_extract(ipa, data, STATUS_DST_ENDPOINT);
if (endpoint_id != endpoint->endpoint_id)
return true;
return false; /* Don't skip this packet, process it */
}
-static bool ipa_endpoint_status_tag(struct ipa_endpoint *endpoint,
- const struct ipa_status *status)
+static bool
+ipa_endpoint_status_tag_valid(struct ipa_endpoint *endpoint, const void *data)
{
struct ipa_endpoint *command_endpoint;
+ enum ipa_status_mask status_mask;
struct ipa *ipa = endpoint->ipa;
u32 endpoint_id;
- if (!le16_get_bits(status->mask, IPA_STATUS_MASK_TAG_VALID_FMASK))
+ status_mask = ipa_status_extract(ipa, data, STATUS_MASK);
+ if (!(status_mask & IPA_STATUS_MASK_TAG_VALID))
return false; /* No valid tag */
/* The status contains a valid tag. We know the packet was sent to
@@ -1350,8 +1493,7 @@ static bool ipa_endpoint_status_tag(struct ipa_endpoint *endpoint,
* If the packet came from the AP->command TX endpoint we know
* this packet was sent as part of the pipeline clear process.
*/
- endpoint_id = u8_get_bits(status->endp_src_idx,
- IPA_STATUS_SRC_IDX_FMASK);
+ endpoint_id = ipa_status_extract(ipa, data, STATUS_SRC_ENDPOINT);
command_endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX];
if (endpoint_id == command_endpoint->endpoint_id) {
complete(&ipa->completion);
@@ -1365,23 +1507,26 @@ static bool ipa_endpoint_status_tag(struct ipa_endpoint *endpoint,
}
/* Return whether the status indicates the packet should be dropped */
-static bool ipa_endpoint_status_drop(struct ipa_endpoint *endpoint,
- const struct ipa_status *status)
+static bool
+ipa_endpoint_status_drop(struct ipa_endpoint *endpoint, const void *data)
{
- u32 val;
+ enum ipa_status_exception exception;
+ struct ipa *ipa = endpoint->ipa;
+ u32 rule;
/* If the status indicates a tagged transfer, we'll drop the packet */
- if (ipa_endpoint_status_tag(endpoint, status))
+ if (ipa_endpoint_status_tag_valid(endpoint, data))
return true;
/* Deaggregation exceptions we drop; all other types we consume */
- if (status->exception)
- return status->exception == IPA_STATUS_EXCEPTION_DEAGGR;
+ exception = ipa_status_extract(ipa, data, STATUS_EXCEPTION);
+ if (exception)
+ return exception == IPA_STATUS_EXCEPTION_DEAGGR;
/* Drop the packet if it fails to match a routing rule; otherwise no */
- val = le32_get_bits(status->flags1, IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK);
+ rule = ipa_status_extract(ipa, data, STATUS_ROUTER_RULE_INDEX);
- return val == field_max(IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK);
+ return rule == IPA_STATUS_RULE_MISS;
}
static void ipa_endpoint_status_parse(struct ipa_endpoint *endpoint,
@@ -1390,47 +1535,46 @@ static void ipa_endpoint_status_parse(struct ipa_endpoint *endpoint,
u32 buffer_size = endpoint->config.rx.buffer_size;
void *data = page_address(page) + NET_SKB_PAD;
u32 unused = buffer_size - total_len;
+ struct ipa *ipa = endpoint->ipa;
u32 resid = total_len;
while (resid) {
- const struct ipa_status *status = data;
+ u32 length;
u32 align;
u32 len;
- if (resid < sizeof(*status)) {
+ if (resid < IPA_STATUS_SIZE) {
dev_err(&endpoint->ipa->pdev->dev,
"short message (%u bytes < %zu byte status)\n",
- resid, sizeof(*status));
+ resid, IPA_STATUS_SIZE);
break;
}
/* Skip over status packets that lack packet data */
- if (ipa_endpoint_status_skip(endpoint, status)) {
- data += sizeof(*status);
- resid -= sizeof(*status);
+ length = ipa_status_extract(ipa, data, STATUS_LENGTH);
+ if (!length || ipa_endpoint_status_skip(endpoint, data)) {
+ data += IPA_STATUS_SIZE;
+ resid -= IPA_STATUS_SIZE;
continue;
}
/* Compute the amount of buffer space consumed by the packet,
- * including the status element. If the hardware is configured
- * to pad packet data to an aligned boundary, account for that.
+ * including the status. If the hardware is configured to
+ * pad packet data to an aligned boundary, account for that.
* And if checksum offload is enabled a trailer containing
* computed checksum information will be appended.
*/
align = endpoint->config.rx.pad_align ? : 1;
- len = le16_to_cpu(status->pkt_len);
- len = sizeof(*status) + ALIGN(len, align);
+ len = IPA_STATUS_SIZE + ALIGN(length, align);
if (endpoint->config.checksum)
len += sizeof(struct rmnet_map_dl_csum_trailer);
- if (!ipa_endpoint_status_drop(endpoint, status)) {
+ if (!ipa_endpoint_status_drop(endpoint, data)) {
void *data2;
u32 extra;
- u32 len2;
/* Client receives only packet data (no status) */
- data2 = data + sizeof(*status);
- len2 = le16_to_cpu(status->pkt_len);
+ data2 = data + IPA_STATUS_SIZE;
/* Have the true size reflect the extra unused space in
* the original receive buffer. Distribute the "cost"
@@ -1438,7 +1582,7 @@ static void ipa_endpoint_status_parse(struct ipa_endpoint *endpoint,
* buffer.
*/
extra = DIV_ROUND_CLOSEST(unused * len, total_len);
- ipa_endpoint_skb_copy(endpoint, data2, len2, extra);
+ ipa_endpoint_skb_copy(endpoint, data2, length, extra);
}
/* Consume status and the full packet it describes */
@@ -1491,18 +1635,18 @@ void ipa_endpoint_trans_release(struct ipa_endpoint *endpoint,
void ipa_endpoint_default_route_set(struct ipa *ipa, u32 endpoint_id)
{
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val;
reg = ipa_reg(ipa, ROUTE);
/* ROUTE_DIS is 0 */
- val = ipa_reg_encode(reg, ROUTE_DEF_PIPE, endpoint_id);
- val |= ipa_reg_bit(reg, ROUTE_DEF_HDR_TABLE);
+ val = reg_encode(reg, ROUTE_DEF_PIPE, endpoint_id);
+ val |= reg_bit(reg, ROUTE_DEF_HDR_TABLE);
/* ROUTE_DEF_HDR_OFST is 0 */
- val |= ipa_reg_encode(reg, ROUTE_FRAG_DEF_PIPE, endpoint_id);
- val |= ipa_reg_bit(reg, ROUTE_DEF_RETAIN_HDR);
+ val |= reg_encode(reg, ROUTE_FRAG_DEF_PIPE, endpoint_id);
+ val |= reg_bit(reg, ROUTE_DEF_RETAIN_HDR);
- iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg));
+ iowrite32(val, ipa->reg_virt + reg_offset(reg));
}
void ipa_endpoint_default_route_clear(struct ipa *ipa)
@@ -1840,8 +1984,9 @@ void ipa_endpoint_deconfig(struct ipa *ipa)
int ipa_endpoint_config(struct ipa *ipa)
{
struct device *dev = &ipa->pdev->dev;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 endpoint_id;
+ u32 hw_limit;
u32 tx_count;
u32 rx_count;
u32 rx_base;
@@ -1873,12 +2018,12 @@ int ipa_endpoint_config(struct ipa *ipa)
* the highest one doesn't exceed the number supported by software.
*/
reg = ipa_reg(ipa, FLAVOR_0);
- val = ioread32(ipa->reg_virt + ipa_reg_offset(reg));
+ val = ioread32(ipa->reg_virt + reg_offset(reg));
/* Our RX is an IPA producer; our TX is an IPA consumer. */
- tx_count = ipa_reg_decode(reg, MAX_CONS_PIPES, val);
- rx_count = ipa_reg_decode(reg, MAX_PROD_PIPES, val);
- rx_base = ipa_reg_decode(reg, PROD_LOWEST, val);
+ tx_count = reg_decode(reg, MAX_CONS_PIPES, val);
+ rx_count = reg_decode(reg, MAX_PROD_PIPES, val);
+ rx_base = reg_decode(reg, PROD_LOWEST, val);
limit = rx_base + rx_count;
if (limit > IPA_ENDPOINT_MAX) {
@@ -1887,6 +2032,14 @@ int ipa_endpoint_config(struct ipa *ipa)
return -EINVAL;
}
+ /* Until IPA v5.0, the max endpoint ID was 32 */
+ hw_limit = ipa->version < IPA_VERSION_5_0 ? 32 : U8_MAX + 1;
+ if (limit > hw_limit) {
+ dev_err(dev, "unexpected endpoint count, %u > %u\n",
+ limit, hw_limit);
+ return -EINVAL;
+ }
+
/* Allocate and initialize the available endpoint bitmap */
ipa->available = bitmap_zalloc(limit, GFP_KERNEL);
if (!ipa->available)
diff --git a/drivers/net/ipa/ipa_endpoint.h b/drivers/net/ipa/ipa_endpoint.h
index 4a5c3bc549df..3ad2e802040a 100644
--- a/drivers/net/ipa/ipa_endpoint.h
+++ b/drivers/net/ipa/ipa_endpoint.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
- * Copyright (C) 2019-2022 Linaro Ltd.
+ * Copyright (C) 2019-2023 Linaro Ltd.
*/
#ifndef _IPA_ENDPOINT_H_
#define _IPA_ENDPOINT_H_
@@ -38,7 +38,7 @@ enum ipa_endpoint_name {
IPA_ENDPOINT_COUNT, /* Number of names (not an index) */
};
-#define IPA_ENDPOINT_MAX 32 /* Max supported by driver */
+#define IPA_ENDPOINT_MAX 36 /* Max supported by driver */
/**
* struct ipa_endpoint_tx - Endpoint configuration for TX endpoints
diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
index c1b3977e1ae4..4bc05948f772 100644
--- a/drivers/net/ipa/ipa_interrupt.c
+++ b/drivers/net/ipa/ipa_interrupt.c
@@ -22,10 +22,13 @@
#include <linux/types.h>
#include <linux/interrupt.h>
#include <linux/pm_runtime.h>
+#include <linux/pm_wakeirq.h>
#include "ipa.h"
#include "ipa_reg.h"
#include "ipa_endpoint.h"
+#include "ipa_power.h"
+#include "ipa_uc.h"
#include "ipa_interrupt.h"
/**
@@ -33,47 +36,47 @@
* @ipa: IPA pointer
* @irq: Linux IRQ number used for IPA interrupts
* @enabled: Mask indicating which interrupts are enabled
- * @handler: Array of handlers indexed by IPA interrupt ID
*/
struct ipa_interrupt {
struct ipa *ipa;
u32 irq;
u32 enabled;
- ipa_irq_handler_t handler[IPA_IRQ_COUNT];
};
-/* Returns true if the interrupt type is associated with the microcontroller */
-static bool ipa_interrupt_uc(struct ipa_interrupt *interrupt, u32 irq_id)
-{
- return irq_id == IPA_IRQ_UC_0 || irq_id == IPA_IRQ_UC_1;
-}
-
/* Process a particular interrupt type that has been received */
static void ipa_interrupt_process(struct ipa_interrupt *interrupt, u32 irq_id)
{
- bool uc_irq = ipa_interrupt_uc(interrupt, irq_id);
struct ipa *ipa = interrupt->ipa;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 mask = BIT(irq_id);
u32 offset;
- /* For microcontroller interrupts, clear the interrupt right away,
- * "to avoid clearing unhandled interrupts."
- */
reg = ipa_reg(ipa, IPA_IRQ_CLR);
- offset = ipa_reg_offset(reg);
- if (uc_irq)
+ offset = reg_offset(reg);
+
+ switch (irq_id) {
+ case IPA_IRQ_UC_0:
+ case IPA_IRQ_UC_1:
+ /* For microcontroller interrupts, clear the interrupt right
+ * away, "to avoid clearing unhandled interrupts."
+ */
iowrite32(mask, ipa->reg_virt + offset);
-
- if (irq_id < IPA_IRQ_COUNT && interrupt->handler[irq_id])
- interrupt->handler[irq_id](interrupt->ipa, irq_id);
-
- /* Clearing the SUSPEND_TX interrupt also clears the register
- * that tells us which suspended endpoint(s) caused the interrupt,
- * so defer clearing until after the handler has been called.
- */
- if (!uc_irq)
+ ipa_uc_interrupt_handler(ipa, irq_id);
+ break;
+
+ case IPA_IRQ_TX_SUSPEND:
+ /* Clearing the SUSPEND_TX interrupt also clears the
+ * register that tells us which suspended endpoint(s)
+ * caused the interrupt, so defer clearing until after
+ * the handler has been called.
+ */
+ ipa_power_suspend_handler(ipa, irq_id);
+ fallthrough;
+
+ default: /* Silently ignore (and clear) any other condition */
iowrite32(mask, ipa->reg_virt + offset);
+ break;
+ }
}
/* IPA IRQ handler is threaded */
@@ -82,7 +85,7 @@ static irqreturn_t ipa_isr_thread(int irq, void *dev_id)
struct ipa_interrupt *interrupt = dev_id;
struct ipa *ipa = interrupt->ipa;
u32 enabled = interrupt->enabled;
- const struct ipa_reg *reg;
+ const struct reg *reg;
struct device *dev;
u32 pending;
u32 offset;
@@ -99,7 +102,7 @@ static irqreturn_t ipa_isr_thread(int irq, void *dev_id)
* only the enabled ones.
*/
reg = ipa_reg(ipa, IPA_IRQ_STTS);
- offset = ipa_reg_offset(reg);
+ offset = reg_offset(reg);
pending = ioread32(ipa->reg_virt + offset);
while ((mask = pending & enabled)) {
do {
@@ -117,8 +120,7 @@ static irqreturn_t ipa_isr_thread(int irq, void *dev_id)
dev_dbg(dev, "clearing disabled IPA interrupts 0x%08x\n",
pending);
reg = ipa_reg(ipa, IPA_IRQ_CLR);
- offset = ipa_reg_offset(reg);
- iowrite32(pending, ipa->reg_virt + offset);
+ iowrite32(pending, ipa->reg_virt + reg_offset(reg));
}
out_power_put:
pm_runtime_mark_last_busy(dev);
@@ -127,6 +129,29 @@ out_power_put:
return IRQ_HANDLED;
}
+static void ipa_interrupt_enabled_update(struct ipa *ipa)
+{
+ const struct reg *reg = ipa_reg(ipa, IPA_IRQ_EN);
+
+ iowrite32(ipa->interrupt->enabled, ipa->reg_virt + reg_offset(reg));
+}
+
+/* Enable an IPA interrupt type */
+void ipa_interrupt_enable(struct ipa *ipa, enum ipa_irq_id ipa_irq)
+{
+ /* Update the IPA interrupt mask to enable it */
+ ipa->interrupt->enabled |= BIT(ipa_irq);
+ ipa_interrupt_enabled_update(ipa);
+}
+
+/* Disable an IPA interrupt type */
+void ipa_interrupt_disable(struct ipa *ipa, enum ipa_irq_id ipa_irq)
+{
+ /* Update the IPA interrupt mask to disable it */
+ ipa->interrupt->enabled &= ~BIT(ipa_irq);
+ ipa_interrupt_enabled_update(ipa);
+}
+
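For illustration (not part of this patch): with handler dispatch now hard-wired in ipa_interrupt_process(), a client such as the microcontroller code only toggles its interrupt types, roughly as follows:

/* Hypothetical caller -- the actual conversion of clients like
 * ipa_uc.c happens elsewhere in this series.
 */
static void ipa_uc_irq_setup(struct ipa *ipa)
{
        ipa_interrupt_enable(ipa, IPA_IRQ_UC_0);
        ipa_interrupt_enable(ipa, IPA_IRQ_UC_1);
}

static void ipa_uc_irq_teardown(struct ipa *ipa)
{
        ipa_interrupt_disable(ipa, IPA_IRQ_UC_1);
        ipa_interrupt_disable(ipa, IPA_IRQ_UC_0);
}
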
void ipa_interrupt_irq_disable(struct ipa *ipa)
{
disable_irq(ipa->interrupt->irq);
@@ -144,7 +169,7 @@ static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
struct ipa *ipa = interrupt->ipa;
u32 mask = BIT(endpoint_id % 32);
u32 unit = endpoint_id / 32;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 offset;
u32 val;
@@ -155,7 +180,7 @@ static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
return;
reg = ipa_reg(ipa, IRQ_SUSPEND_EN);
- offset = ipa_reg_n_offset(reg, unit);
+ offset = reg_n_offset(reg, unit);
val = ioread32(ipa->reg_virt + offset);
if (enable)
@@ -189,18 +214,18 @@ void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt)
unit_count = roundup(ipa->endpoint_count, 32);
for (unit = 0; unit < unit_count; unit++) {
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val;
reg = ipa_reg(ipa, IRQ_SUSPEND_INFO);
- val = ioread32(ipa->reg_virt + ipa_reg_n_offset(reg, unit));
+ val = ioread32(ipa->reg_virt + reg_n_offset(reg, unit));
/* SUSPEND interrupt status isn't cleared on IPA version 3.0 */
if (ipa->version == IPA_VERSION_3_0)
continue;
reg = ipa_reg(ipa, IRQ_SUSPEND_CLR);
- iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, unit));
+ iowrite32(val, ipa->reg_virt + reg_n_offset(reg, unit));
}
}
@@ -210,50 +235,12 @@ void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt)
ipa_interrupt_process(interrupt, IPA_IRQ_TX_SUSPEND);
}
-/* Add a handler for an IPA interrupt */
-void ipa_interrupt_add(struct ipa_interrupt *interrupt,
- enum ipa_irq_id ipa_irq, ipa_irq_handler_t handler)
-{
- struct ipa *ipa = interrupt->ipa;
- const struct ipa_reg *reg;
-
- if (WARN_ON(ipa_irq >= IPA_IRQ_COUNT))
- return;
-
- interrupt->handler[ipa_irq] = handler;
-
- /* Update the IPA interrupt mask to enable it */
- interrupt->enabled |= BIT(ipa_irq);
-
- reg = ipa_reg(ipa, IPA_IRQ_EN);
- iowrite32(interrupt->enabled, ipa->reg_virt + ipa_reg_offset(reg));
-}
-
-/* Remove the handler for an IPA interrupt type */
-void
-ipa_interrupt_remove(struct ipa_interrupt *interrupt, enum ipa_irq_id ipa_irq)
-{
- struct ipa *ipa = interrupt->ipa;
- const struct ipa_reg *reg;
-
- if (WARN_ON(ipa_irq >= IPA_IRQ_COUNT))
- return;
-
- /* Update the IPA interrupt mask to disable it */
- interrupt->enabled &= ~BIT(ipa_irq);
-
- reg = ipa_reg(ipa, IPA_IRQ_EN);
- iowrite32(interrupt->enabled, ipa->reg_virt + ipa_reg_offset(reg));
-
- interrupt->handler[ipa_irq] = NULL;
-}
-
/* Configure the IPA interrupt framework */
struct ipa_interrupt *ipa_interrupt_config(struct ipa *ipa)
{
struct device *dev = &ipa->pdev->dev;
struct ipa_interrupt *interrupt;
- const struct ipa_reg *reg;
+ const struct reg *reg;
unsigned int irq;
int ret;
@@ -273,7 +260,7 @@ struct ipa_interrupt *ipa_interrupt_config(struct ipa *ipa)
/* Start with all IPA interrupts disabled */
reg = ipa_reg(ipa, IPA_IRQ_EN);
- iowrite32(0, ipa->reg_virt + ipa_reg_offset(reg));
+ iowrite32(0, ipa->reg_virt + reg_offset(reg));
ret = request_threaded_irq(irq, NULL, ipa_isr_thread, IRQF_ONESHOT,
"ipa", interrupt);
@@ -282,9 +269,9 @@ struct ipa_interrupt *ipa_interrupt_config(struct ipa *ipa)
goto err_kfree;
}
- ret = enable_irq_wake(irq);
+ ret = dev_pm_set_wake_irq(dev, irq);
if (ret) {
- dev_err(dev, "error %d enabling wakeup for \"ipa\" IRQ\n", ret);
+ dev_err(dev, "error %d registering \"ipa\" IRQ as wakeirq\n", ret);
goto err_free_irq;
}
@@ -302,11 +289,8 @@ err_kfree:
void ipa_interrupt_deconfig(struct ipa_interrupt *interrupt)
{
struct device *dev = &interrupt->ipa->pdev->dev;
- int ret;
- ret = disable_irq_wake(interrupt->irq);
- if (ret)
- dev_err(dev, "error %d disabling \"ipa\" IRQ wakeup\n", ret);
+ dev_pm_clear_wake_irq(dev);
free_irq(interrupt->irq, interrupt);
kfree(interrupt);
}
diff --git a/drivers/net/ipa/ipa_interrupt.h b/drivers/net/ipa/ipa_interrupt.h
index 8a1bd5b89393..12e3e798ccb3 100644
--- a/drivers/net/ipa/ipa_interrupt.h
+++ b/drivers/net/ipa/ipa_interrupt.h
@@ -11,39 +11,7 @@
struct ipa;
struct ipa_interrupt;
-
-/**
- * typedef ipa_irq_handler_t - IPA interrupt handler function type
- * @ipa: IPA pointer
- * @irq_id: interrupt type
- *
- * Callback function registered by ipa_interrupt_add() to handle a specific
- * IPA interrupt type
- */
-typedef void (*ipa_irq_handler_t)(struct ipa *ipa, enum ipa_irq_id irq_id);
-
-/**
- * ipa_interrupt_add() - Register a handler for an IPA interrupt type
- * @interrupt: IPA interrupt structure
- * @irq_id: IPA interrupt type
- * @handler: Handler function for the interrupt
- *
- * Add a handler for an IPA interrupt and enable it. IPA interrupt
- * handlers are run in threaded interrupt context, so are allowed to
- * block.
- */
-void ipa_interrupt_add(struct ipa_interrupt *interrupt, enum ipa_irq_id irq_id,
- ipa_irq_handler_t handler);
-
-/**
- * ipa_interrupt_remove() - Remove the handler for an IPA interrupt type
- * @interrupt: IPA interrupt structure
- * @irq_id: IPA interrupt type
- *
- * Remove an IPA interrupt handler and disable it.
- */
-void ipa_interrupt_remove(struct ipa_interrupt *interrupt,
- enum ipa_irq_id irq_id);
+enum ipa_irq_id;
/**
* ipa_interrupt_suspend_enable - Enable TX_SUSPEND for an endpoint
@@ -86,6 +54,20 @@ void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt);
void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt);
/**
+ * ipa_interrupt_enable() - Enable an IPA interrupt type
+ * @ipa: IPA pointer
+ * @ipa_irq: IPA interrupt ID
+ */
+void ipa_interrupt_enable(struct ipa *ipa, enum ipa_irq_id ipa_irq);
+
+/**
+ * ipa_interrupt_disable() - Disable an IPA interrupt type
+ * @ipa: IPA pointer
+ * @ipa_irq: IPA interrupt ID
+ */
+void ipa_interrupt_disable(struct ipa *ipa, enum ipa_irq_id ipa_irq);
+
+/**
* ipa_interrupt_irq_enable() - Enable IPA interrupts
* @ipa: IPA pointer
*
diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
index 4fb92f771974..6cb7bf96a626 100644
--- a/drivers/net/ipa/ipa_main.c
+++ b/drivers/net/ipa/ipa_main.c
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
- * Copyright (C) 2018-2022 Linaro Ltd.
+ * Copyright (C) 2018-2023 Linaro Ltd.
*/
#include <linux/types.h>
@@ -203,7 +203,7 @@ static void ipa_teardown(struct ipa *ipa)
static void
ipa_hardware_config_bcr(struct ipa *ipa, const struct ipa_data *data)
{
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val;
/* IPA v4.5+ has no backward compatibility register */
@@ -212,13 +212,13 @@ ipa_hardware_config_bcr(struct ipa *ipa, const struct ipa_data *data)
reg = ipa_reg(ipa, IPA_BCR);
val = data->backward_compat;
- iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg));
+ iowrite32(val, ipa->reg_virt + reg_offset(reg));
}
static void ipa_hardware_config_tx(struct ipa *ipa)
{
enum ipa_version version = ipa->version;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 offset;
u32 val;
@@ -227,11 +227,11 @@ static void ipa_hardware_config_tx(struct ipa *ipa)
/* Disable PA mask to allow HOLB drop */
reg = ipa_reg(ipa, IPA_TX_CFG);
- offset = ipa_reg_offset(reg);
+ offset = reg_offset(reg);
val = ioread32(ipa->reg_virt + offset);
- val &= ~ipa_reg_bit(reg, PA_MASK_EN);
+ val &= ~reg_bit(reg, PA_MASK_EN);
iowrite32(val, ipa->reg_virt + offset);
}
@@ -239,7 +239,7 @@ static void ipa_hardware_config_tx(struct ipa *ipa)
static void ipa_hardware_config_clkon(struct ipa *ipa)
{
enum ipa_version version = ipa->version;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val;
if (version >= IPA_VERSION_4_5)
@@ -252,20 +252,20 @@ static void ipa_hardware_config_clkon(struct ipa *ipa)
reg = ipa_reg(ipa, CLKON_CFG);
if (version == IPA_VERSION_3_1) {
/* Disable MISC clock gating */
- val = ipa_reg_bit(reg, CLKON_MISC);
+ val = reg_bit(reg, CLKON_MISC);
} else { /* IPA v4.0+ */
/* Enable open global clocks in the CLKON configuration */
- val = ipa_reg_bit(reg, CLKON_GLOBAL);
- val |= ipa_reg_bit(reg, GLOBAL_2X_CLK);
+ val = reg_bit(reg, CLKON_GLOBAL);
+ val |= reg_bit(reg, GLOBAL_2X_CLK);
}
- iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg));
+ iowrite32(val, ipa->reg_virt + reg_offset(reg));
}
/* Configure bus access behavior for IPA components */
static void ipa_hardware_config_comp(struct ipa *ipa)
{
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 offset;
u32 val;
@@ -274,21 +274,22 @@ static void ipa_hardware_config_comp(struct ipa *ipa)
return;
reg = ipa_reg(ipa, COMP_CFG);
- offset = ipa_reg_offset(reg);
+ offset = reg_offset(reg);
+
val = ioread32(ipa->reg_virt + offset);
if (ipa->version == IPA_VERSION_4_0) {
- val &= ~ipa_reg_bit(reg, IPA_QMB_SELECT_CONS_EN);
- val &= ~ipa_reg_bit(reg, IPA_QMB_SELECT_PROD_EN);
- val &= ~ipa_reg_bit(reg, IPA_QMB_SELECT_GLOBAL_EN);
+ val &= ~reg_bit(reg, IPA_QMB_SELECT_CONS_EN);
+ val &= ~reg_bit(reg, IPA_QMB_SELECT_PROD_EN);
+ val &= ~reg_bit(reg, IPA_QMB_SELECT_GLOBAL_EN);
} else if (ipa->version < IPA_VERSION_4_5) {
- val |= ipa_reg_bit(reg, GSI_MULTI_AXI_MASTERS_DIS);
+ val |= reg_bit(reg, GSI_MULTI_AXI_MASTERS_DIS);
} else {
/* For IPA v4.5 FULL_FLUSH_WAIT_RS_CLOSURE_EN is 0 */
}
- val |= ipa_reg_bit(reg, GSI_MULTI_INORDER_RD_DIS);
- val |= ipa_reg_bit(reg, GSI_MULTI_INORDER_WR_DIS);
+ val |= reg_bit(reg, GSI_MULTI_INORDER_RD_DIS);
+ val |= reg_bit(reg, GSI_MULTI_INORDER_WR_DIS);
iowrite32(val, ipa->reg_virt + offset);
}
@@ -299,7 +300,7 @@ ipa_hardware_config_qsb(struct ipa *ipa, const struct ipa_data *data)
{
const struct ipa_qsb_data *data0;
const struct ipa_qsb_data *data1;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val;
/* QMB 0 represents DDR; QMB 1 (if present) represents PCIe */
@@ -310,29 +311,27 @@ ipa_hardware_config_qsb(struct ipa *ipa, const struct ipa_data *data)
/* Max outstanding write accesses for QSB masters */
reg = ipa_reg(ipa, QSB_MAX_WRITES);
- val = ipa_reg_encode(reg, GEN_QMB_0_MAX_WRITES, data0->max_writes);
+ val = reg_encode(reg, GEN_QMB_0_MAX_WRITES, data0->max_writes);
if (data->qsb_count > 1)
- val |= ipa_reg_encode(reg, GEN_QMB_1_MAX_WRITES,
- data1->max_writes);
+ val |= reg_encode(reg, GEN_QMB_1_MAX_WRITES, data1->max_writes);
- iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg));
+ iowrite32(val, ipa->reg_virt + reg_offset(reg));
/* Max outstanding read accesses for QSB masters */
reg = ipa_reg(ipa, QSB_MAX_READS);
- val = ipa_reg_encode(reg, GEN_QMB_0_MAX_READS, data0->max_reads);
+ val = reg_encode(reg, GEN_QMB_0_MAX_READS, data0->max_reads);
if (ipa->version >= IPA_VERSION_4_0)
- val |= ipa_reg_encode(reg, GEN_QMB_0_MAX_READS_BEATS,
- data0->max_reads_beats);
+ val |= reg_encode(reg, GEN_QMB_0_MAX_READS_BEATS,
+ data0->max_reads_beats);
if (data->qsb_count > 1) {
- val = ipa_reg_encode(reg, GEN_QMB_1_MAX_READS,
- data1->max_reads);
+ val = reg_encode(reg, GEN_QMB_1_MAX_READS, data1->max_reads);
if (ipa->version >= IPA_VERSION_4_0)
- val |= ipa_reg_encode(reg, GEN_QMB_1_MAX_READS_BEATS,
- data1->max_reads_beats);
+ val |= reg_encode(reg, GEN_QMB_1_MAX_READS_BEATS,
+ data1->max_reads_beats);
}
- iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg));
+ iowrite32(val, ipa->reg_virt + reg_offset(reg));
}
/* The internal inactivity timer clock is used for the aggregation timer */
@@ -368,41 +367,47 @@ static __always_inline u32 ipa_aggr_granularity_val(u32 usec)
*/
static void ipa_qtime_config(struct ipa *ipa)
{
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 offset;
u32 val;
/* Timer clock divider must be disabled when we change the rate */
reg = ipa_reg(ipa, TIMERS_XO_CLK_DIV_CFG);
- iowrite32(0, ipa->reg_virt + ipa_reg_offset(reg));
+ iowrite32(0, ipa->reg_virt + reg_offset(reg));
reg = ipa_reg(ipa, QTIME_TIMESTAMP_CFG);
/* Set DPL time stamp resolution to use Qtime (instead of 1 msec) */
- val = ipa_reg_encode(reg, DPL_TIMESTAMP_LSB, DPL_TIMESTAMP_SHIFT);
- val |= ipa_reg_bit(reg, DPL_TIMESTAMP_SEL);
+ val = reg_encode(reg, DPL_TIMESTAMP_LSB, DPL_TIMESTAMP_SHIFT);
+ val |= reg_bit(reg, DPL_TIMESTAMP_SEL);
/* Configure tag and NAT Qtime timestamp resolution as well */
- val = ipa_reg_encode(reg, TAG_TIMESTAMP_LSB, TAG_TIMESTAMP_SHIFT);
- val = ipa_reg_encode(reg, NAT_TIMESTAMP_LSB, NAT_TIMESTAMP_SHIFT);
+ val = reg_encode(reg, TAG_TIMESTAMP_LSB, TAG_TIMESTAMP_SHIFT);
+ val = reg_encode(reg, NAT_TIMESTAMP_LSB, NAT_TIMESTAMP_SHIFT);
- iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg));
+ iowrite32(val, ipa->reg_virt + reg_offset(reg));
/* Set granularity of pulse generators used for other timers */
reg = ipa_reg(ipa, TIMERS_PULSE_GRAN_CFG);
- val = ipa_reg_encode(reg, PULSE_GRAN_0, IPA_GRAN_100_US);
- val |= ipa_reg_encode(reg, PULSE_GRAN_1, IPA_GRAN_1_MS);
- val |= ipa_reg_encode(reg, PULSE_GRAN_2, IPA_GRAN_1_MS);
+ val = reg_encode(reg, PULSE_GRAN_0, IPA_GRAN_100_US);
+ val |= reg_encode(reg, PULSE_GRAN_1, IPA_GRAN_1_MS);
+ if (ipa->version >= IPA_VERSION_5_0) {
+ val |= reg_encode(reg, PULSE_GRAN_2, IPA_GRAN_10_MS);
+ val |= reg_encode(reg, PULSE_GRAN_3, IPA_GRAN_10_MS);
+ } else {
+ val |= reg_encode(reg, PULSE_GRAN_2, IPA_GRAN_1_MS);
+ }
- iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg));
+ iowrite32(val, ipa->reg_virt + reg_offset(reg));
/* Actual divider is 1 more than value supplied here */
reg = ipa_reg(ipa, TIMERS_XO_CLK_DIV_CFG);
- offset = ipa_reg_offset(reg);
- val = ipa_reg_encode(reg, DIV_VALUE, IPA_XO_CLOCK_DIVIDER - 1);
+ offset = reg_offset(reg);
+
+ val = reg_encode(reg, DIV_VALUE, IPA_XO_CLOCK_DIVIDER - 1);
iowrite32(val, ipa->reg_virt + offset);
/* Divider value is set; re-enable the common timer clock divider */
- val |= ipa_reg_bit(reg, DIV_ENABLE);
+ val |= reg_bit(reg, DIV_ENABLE);
iowrite32(val, ipa->reg_virt + offset);
}
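
Summarizing the tick periods programmed above (illustrative table only, not part of the patch; the array name and microsecond values are assumptions derived from the IPA_GRAN_* encodings used):

/* Illustrative only: pulse generator tick periods after this config */
static const unsigned int ipa_pulse_gran_usec[] = {
        [0] = 100,      /* PULSE_GRAN_0: IPA_GRAN_100_US */
        [1] = 1000,     /* PULSE_GRAN_1: IPA_GRAN_1_MS */
        [2] = 10000,    /* PULSE_GRAN_2: IPA_GRAN_10_MS (1 ms before v5.0) */
        [3] = 10000,    /* PULSE_GRAN_3: IPA_GRAN_10_MS (IPA v5.0+ only) */
};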
@@ -411,13 +416,13 @@ static void ipa_qtime_config(struct ipa *ipa)
static void ipa_hardware_config_counter(struct ipa *ipa)
{
u32 granularity = ipa_aggr_granularity_val(IPA_AGGR_GRANULARITY);
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val;
reg = ipa_reg(ipa, COUNTER_CFG);
/* If defined, EOT_COAL_GRANULARITY is 0 */
- val = ipa_reg_encode(reg, AGGR_GRANULARITY, granularity);
- iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg));
+ val = reg_encode(reg, AGGR_GRANULARITY, granularity);
+ iowrite32(val, ipa->reg_virt + reg_offset(reg));
}
static void ipa_hardware_config_timing(struct ipa *ipa)
@@ -430,8 +435,13 @@ static void ipa_hardware_config_timing(struct ipa *ipa)
static void ipa_hardware_config_hashing(struct ipa *ipa)
{
- const struct ipa_reg *reg;
+ const struct reg *reg;
+ /* Other than IPA v4.2, all versions enable "hashing". Starting
+ * with IPA v5.0, the filter and router tables are implemented
+ * differently, but the default configuration enables this feature
+ * (now referred to as "cacheing"), so there's nothing to do here.
+ */
if (ipa->version != IPA_VERSION_4_2)
return;
@@ -441,26 +451,26 @@ static void ipa_hardware_config_hashing(struct ipa *ipa)
/* IPV6_ROUTER_HASH, IPV6_FILTER_HASH, IPV4_ROUTER_HASH,
* IPV4_FILTER_HASH are all zero.
*/
- iowrite32(0, ipa->reg_virt + ipa_reg_offset(reg));
+ iowrite32(0, ipa->reg_virt + reg_offset(reg));
}
static void ipa_idle_indication_cfg(struct ipa *ipa,
u32 enter_idle_debounce_thresh,
bool const_non_idle_enable)
{
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val;
if (ipa->version < IPA_VERSION_3_5_1)
return;
reg = ipa_reg(ipa, IDLE_INDICATION_CFG);
- val = ipa_reg_encode(reg, ENTER_IDLE_DEBOUNCE_THRESH,
- enter_idle_debounce_thresh);
+ val = reg_encode(reg, ENTER_IDLE_DEBOUNCE_THRESH,
+ enter_idle_debounce_thresh);
if (const_non_idle_enable)
- val |= ipa_reg_bit(reg, CONST_NON_IDLE_ENABLE);
+ val |= reg_bit(reg, CONST_NON_IDLE_ENABLE);
- iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg));
+ iowrite32(val, ipa->reg_virt + reg_offset(reg));
}
/**
diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
index 9ec5af323f73..85096d1efe5b 100644
--- a/drivers/net/ipa/ipa_mem.c
+++ b/drivers/net/ipa/ipa_mem.c
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
- * Copyright (C) 2019-2022 Linaro Ltd.
+ * Copyright (C) 2019-2023 Linaro Ltd.
*/
#include <linux/types.h>
@@ -75,7 +75,7 @@ ipa_mem_zero_region_add(struct gsi_trans *trans, enum ipa_mem_id mem_id)
int ipa_mem_setup(struct ipa *ipa)
{
dma_addr_t addr = ipa->zero_addr;
- const struct ipa_reg *reg;
+ const struct reg *reg;
const struct ipa_mem *mem;
struct gsi_trans *trans;
u32 offset;
@@ -115,8 +115,8 @@ int ipa_mem_setup(struct ipa *ipa)
offset = ipa->mem_offset + mem->offset;
reg = ipa_reg(ipa, LOCAL_PKT_PROC_CNTXT);
- val = ipa_reg_encode(reg, IPA_BASE_ADDR, offset);
- iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg));
+ val = reg_encode(reg, IPA_BASE_ADDR, offset);
+ iowrite32(val, ipa->reg_virt + reg_offset(reg));
return 0;
}
@@ -163,6 +163,12 @@ static bool ipa_mem_id_valid(struct ipa *ipa, enum ipa_mem_id mem_id)
return false;
break;
+ case IPA_MEM_AP_V4_FILTER:
+ case IPA_MEM_AP_V6_FILTER:
+ if (version != IPA_VERSION_5_0)
+ return false;
+ break;
+
case IPA_MEM_NAT_TABLE:
case IPA_MEM_STATS_FILTER_ROUTE:
if (version < IPA_VERSION_4_5)
@@ -312,8 +318,8 @@ static bool ipa_mem_size_valid(struct ipa *ipa)
int ipa_mem_config(struct ipa *ipa)
{
struct device *dev = &ipa->pdev->dev;
- const struct ipa_reg *reg;
const struct ipa_mem *mem;
+ const struct reg *reg;
dma_addr_t addr;
u32 mem_size;
void *virt;
@@ -322,13 +328,13 @@ int ipa_mem_config(struct ipa *ipa)
/* Check the advertised location and size of the shared memory area */
reg = ipa_reg(ipa, SHARED_MEM_SIZE);
- val = ioread32(ipa->reg_virt + ipa_reg_offset(reg));
+ val = ioread32(ipa->reg_virt + reg_offset(reg));
/* The fields in the register are in 8 byte units */
- ipa->mem_offset = 8 * ipa_reg_decode(reg, MEM_BADDR, val);
+ ipa->mem_offset = 8 * reg_decode(reg, MEM_BADDR, val);
/* Make sure the end is within the region's mapped space */
- mem_size = 8 * ipa_reg_decode(reg, MEM_SIZE, val);
+ mem_size = 8 * reg_decode(reg, MEM_SIZE, val);
/* If the sizes don't match, issue a warning */
if (ipa->mem_offset + mem_size < ipa->mem_size) {
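Since MEM_BADDR and MEM_SIZE are expressed in 8-byte units, both decoded values must be scaled before use. A minimal sketch of that decode step, assuming an illustrative field layout (MEM_BADDR in bits 15:0, MEM_SIZE in bits 31:16, and a hypothetical register offset; the real masks and offsets are defined per IPA version):

    /* Illustrative only: real SHARED_MEM_SIZE layouts are per-version */
    static const u32 reg_shared_mem_size_fmask[] = {
            [MEM_BADDR]     = GENMASK(15, 0),
            [MEM_SIZE]      = GENMASK(31, 16),
    };

    REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);

    /* Fields count 8-byte units, so scale the decoded value to bytes */
    static u32 shared_mem_bytes(const struct reg *reg, u32 val)
    {
            return 8 * reg_decode(reg, MEM_SIZE, val);
    }

With this layout, a value of 0x01000200 decodes to a 0x200 * 8 = 4 KiB base offset and a 0x100 * 8 = 2 KiB size.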
diff --git a/drivers/net/ipa/ipa_mem.h b/drivers/net/ipa/ipa_mem.h
index 570bfdd99bff..868e9c20e8c4 100644
--- a/drivers/net/ipa/ipa_mem.h
+++ b/drivers/net/ipa/ipa_mem.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
- * Copyright (C) 2019-2021 Linaro Ltd.
+ * Copyright (C) 2019-2023 Linaro Ltd.
*/
#ifndef _IPA_MEM_H_
#define _IPA_MEM_H_
@@ -62,13 +62,15 @@ enum ipa_mem_id {
IPA_MEM_PDN_CONFIG, /* 0/2 canaries (IPA v4.0+) */
IPA_MEM_STATS_QUOTA_MODEM, /* 2/4 canaries (IPA v4.0+) */
IPA_MEM_STATS_QUOTA_AP, /* 0 canaries, optional (IPA v4.0+) */
- IPA_MEM_STATS_TETHERING, /* 0 canaries (IPA v4.0+) */
+ IPA_MEM_STATS_TETHERING, /* 0 canaries, optional (IPA v4.0+) */
IPA_MEM_STATS_DROP, /* 0 canaries, optional (IPA v4.0+) */
- /* The next 5 filter and route statistics regions are optional */
+ /* The next 7 filter and route statistics regions are optional */
IPA_MEM_STATS_V4_FILTER, /* 0 canaries (IPA v4.0-v4.2) */
IPA_MEM_STATS_V6_FILTER, /* 0 canaries (IPA v4.0-v4.2) */
IPA_MEM_STATS_V4_ROUTE, /* 0 canaries (IPA v4.0-v4.2) */
IPA_MEM_STATS_V6_ROUTE, /* 0 canaries (IPA v4.0-v4.2) */
+ IPA_MEM_AP_V4_FILTER, /* 2 canaries (IPA v5.0) */
+ IPA_MEM_AP_V6_FILTER, /* 0 canaries (IPA v5.0) */
IPA_MEM_STATS_FILTER_ROUTE, /* 0 canaries (IPA v4.5+) */
IPA_MEM_NAT_TABLE, /* 4 canaries, optional (IPA v4.5+) */
IPA_MEM_END_MARKER, /* 1 canary (not a real region) */
diff --git a/drivers/net/ipa/ipa_power.c b/drivers/net/ipa/ipa_power.c
index 8057be8cda80..921eecf3eff6 100644
--- a/drivers/net/ipa/ipa_power.c
+++ b/drivers/net/ipa/ipa_power.c
@@ -219,17 +219,7 @@ u32 ipa_core_clock_rate(struct ipa *ipa)
return ipa->power ? (u32)clk_get_rate(ipa->power->core) : 0;
}
-/**
- * ipa_suspend_handler() - Handle the suspend IPA interrupt
- * @ipa: IPA pointer
- * @irq_id: IPA interrupt type (unused)
- *
- * If an RX endpoint is suspended, and the IPA has a packet destined for
- * that endpoint, the IPA generates a SUSPEND interrupt to inform the AP
- * that it should resume the endpoint. If we get one of these interrupts
- * we just wake up the system.
- */
-static void ipa_suspend_handler(struct ipa *ipa, enum ipa_irq_id irq_id)
+void ipa_power_suspend_handler(struct ipa *ipa, enum ipa_irq_id irq_id)
{
/* To handle an IPA interrupt we will have resumed the hardware
* just to handle the interrupt, so we're done. If we are in a
@@ -352,12 +342,11 @@ int ipa_power_setup(struct ipa *ipa)
{
int ret;
- ipa_interrupt_add(ipa->interrupt, IPA_IRQ_TX_SUSPEND,
- ipa_suspend_handler);
+ ipa_interrupt_enable(ipa, IPA_IRQ_TX_SUSPEND);
ret = device_init_wakeup(&ipa->pdev->dev, true);
if (ret)
- ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_TX_SUSPEND);
+ ipa_interrupt_disable(ipa, IPA_IRQ_TX_SUSPEND);
return ret;
}
@@ -365,7 +354,7 @@ int ipa_power_setup(struct ipa *ipa)
void ipa_power_teardown(struct ipa *ipa)
{
(void)device_init_wakeup(&ipa->pdev->dev, false);
- ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_TX_SUSPEND);
+ ipa_interrupt_disable(ipa, IPA_IRQ_TX_SUSPEND);
}
/* Initialize IPA power management */
diff --git a/drivers/net/ipa/ipa_power.h b/drivers/net/ipa/ipa_power.h
index 896f052e51a1..3a4c59ea1222 100644
--- a/drivers/net/ipa/ipa_power.h
+++ b/drivers/net/ipa/ipa_power.h
@@ -10,6 +10,7 @@ struct device;
struct ipa;
struct ipa_power_data;
+enum ipa_irq_id;
/* IPA device power management function block */
extern const struct dev_pm_ops ipa_pm_ops;
@@ -48,6 +49,17 @@ void ipa_power_modem_queue_active(struct ipa *ipa);
void ipa_power_retention(struct ipa *ipa, bool enable);
/**
+ * ipa_power_suspend_handler() - Handler for SUSPEND IPA interrupts
+ * @ipa: IPA pointer
+ * @irq_id: IPA interrupt ID (unused)
+ *
+ * If an RX endpoint is suspended, and the IPA has a packet destined for
+ * that endpoint, the IPA generates a SUSPEND interrupt to inform the AP
+ * that it should resume the endpoint.
+ */
+void ipa_power_suspend_handler(struct ipa *ipa, enum ipa_irq_id irq_id);
+
+/**
* ipa_power_setup() - Set up IPA power management
* @ipa: IPA pointer
*
diff --git a/drivers/net/ipa/ipa_reg.c b/drivers/net/ipa/ipa_reg.c
index ddd529153e15..735fa6591609 100644
--- a/drivers/net/ipa/ipa_reg.c
+++ b/drivers/net/ipa/ipa_reg.c
@@ -9,73 +9,96 @@
#include "ipa.h"
#include "ipa_reg.h"
-/* Is this register valid and defined for the current IPA version? */
-static bool ipa_reg_valid(struct ipa *ipa, enum ipa_reg_id reg_id)
+/* Is this register ID valid for the current IPA version? */
+static bool ipa_reg_id_valid(struct ipa *ipa, enum ipa_reg_id reg_id)
{
enum ipa_version version = ipa->version;
- bool valid;
-
- /* Check for bogus (out of range) register IDs */
- if ((u32)reg_id >= ipa->regs->reg_count)
- return false;
switch (reg_id) {
case IPA_BCR:
case COUNTER_CFG:
- valid = version < IPA_VERSION_4_5;
- break;
+ return version < IPA_VERSION_4_5;
case IPA_TX_CFG:
case FLAVOR_0:
case IDLE_INDICATION_CFG:
- valid = version >= IPA_VERSION_3_5;
- break;
+ return version >= IPA_VERSION_3_5;
case QTIME_TIMESTAMP_CFG:
case TIMERS_XO_CLK_DIV_CFG:
case TIMERS_PULSE_GRAN_CFG:
- valid = version >= IPA_VERSION_4_5;
- break;
+ return version >= IPA_VERSION_4_5;
case SRC_RSRC_GRP_45_RSRC_TYPE:
case DST_RSRC_GRP_45_RSRC_TYPE:
- valid = version <= IPA_VERSION_3_1 ||
- version == IPA_VERSION_4_5;
- break;
+ return version <= IPA_VERSION_3_1 ||
+ version == IPA_VERSION_4_5;
case SRC_RSRC_GRP_67_RSRC_TYPE:
case DST_RSRC_GRP_67_RSRC_TYPE:
- valid = version <= IPA_VERSION_3_1;
- break;
+ return version <= IPA_VERSION_3_1;
case ENDP_FILTER_ROUTER_HSH_CFG:
- valid = version != IPA_VERSION_4_2;
- break;
+ return version != IPA_VERSION_4_2;
case IRQ_SUSPEND_EN:
case IRQ_SUSPEND_CLR:
- valid = version >= IPA_VERSION_3_1;
- break;
+ return version >= IPA_VERSION_3_1;
+
+ case COMP_CFG:
+ case CLKON_CFG:
+ case ROUTE:
+ case SHARED_MEM_SIZE:
+ case QSB_MAX_WRITES:
+ case QSB_MAX_READS:
+ case FILT_ROUT_HASH_EN:
+ case FILT_ROUT_CACHE_CFG:
+ case FILT_ROUT_HASH_FLUSH:
+ case FILT_ROUT_CACHE_FLUSH:
+ case STATE_AGGR_ACTIVE:
+ case LOCAL_PKT_PROC_CNTXT:
+ case AGGR_FORCE_CLOSE:
+ case SRC_RSRC_GRP_01_RSRC_TYPE:
+ case SRC_RSRC_GRP_23_RSRC_TYPE:
+ case DST_RSRC_GRP_01_RSRC_TYPE:
+ case DST_RSRC_GRP_23_RSRC_TYPE:
+ case ENDP_INIT_CTRL:
+ case ENDP_INIT_CFG:
+ case ENDP_INIT_NAT:
+ case ENDP_INIT_HDR:
+ case ENDP_INIT_HDR_EXT:
+ case ENDP_INIT_HDR_METADATA_MASK:
+ case ENDP_INIT_MODE:
+ case ENDP_INIT_AGGR:
+ case ENDP_INIT_HOL_BLOCK_EN:
+ case ENDP_INIT_HOL_BLOCK_TIMER:
+ case ENDP_INIT_DEAGGR:
+ case ENDP_INIT_RSRC_GRP:
+ case ENDP_INIT_SEQ:
+ case ENDP_STATUS:
+ case ENDP_FILTER_CACHE_CFG:
+ case ENDP_ROUTER_CACHE_CFG:
+ case IPA_IRQ_STTS:
+ case IPA_IRQ_EN:
+ case IPA_IRQ_CLR:
+ case IPA_IRQ_UC:
+ case IRQ_SUSPEND_INFO:
+ return true; /* These should be defined for all versions */
default:
- valid = true; /* Others should be defined for all versions */
- break;
+ return false;
}
-
- /* To be valid, it must be defined */
-
- return valid && ipa->regs->reg[reg_id];
}
-const struct ipa_reg *ipa_reg(struct ipa *ipa, enum ipa_reg_id reg_id)
+const struct reg *ipa_reg(struct ipa *ipa, enum ipa_reg_id reg_id)
{
- if (WARN_ON(!ipa_reg_valid(ipa, reg_id)))
+ if (WARN(!ipa_reg_id_valid(ipa, reg_id), "invalid reg %u\n", reg_id))
return NULL;
- return ipa->regs->reg[reg_id];
+ return reg(ipa->regs, reg_id);
}
-static const struct ipa_regs *ipa_regs(enum ipa_version version)
+static const struct regs *ipa_regs(enum ipa_version version)
{
switch (version) {
case IPA_VERSION_3_1:
@@ -100,7 +123,7 @@ static const struct ipa_regs *ipa_regs(enum ipa_version version)
int ipa_reg_init(struct ipa *ipa)
{
struct device *dev = &ipa->pdev->dev;
- const struct ipa_regs *regs;
+ const struct regs *regs;
struct resource *res;
regs = ipa_regs(ipa->version);
@@ -123,7 +146,6 @@ int ipa_reg_init(struct ipa *ipa)
dev_err(dev, "unable to remap \"ipa-reg\" memory\n");
return -ENOMEM;
}
- ipa->reg_addr = res->start;
ipa->regs = regs;
return 0;
diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h
index ff64b19a4df8..28aa1351dd48 100644
--- a/drivers/net/ipa/ipa_reg.h
+++ b/drivers/net/ipa/ipa_reg.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
- * Copyright (C) 2018-2022 Linaro Ltd.
+ * Copyright (C) 2018-2023 Linaro Ltd.
*/
#ifndef _IPA_REG_H_
#define _IPA_REG_H_
@@ -10,6 +10,7 @@
#include <linux/bug.h>
#include "ipa_version.h"
+#include "reg.h"
struct ipa;
@@ -35,7 +36,7 @@ struct ipa;
* by register ID. Each entry in the array specifies the base offset and
* (for parameterized registers) a non-zero stride value. Not all versions
* of IPA define all registers. The offset for a register is returned by
- * ipa_reg_offset() when the register's ipa_reg structure is supplied;
+ * reg_offset() when the register's reg structure is supplied;
* zero is returned for an undefined register (this should never happen).
*
* Some registers encode multiple fields within them. Each field in
@@ -44,9 +45,9 @@ struct ipa;
* an array of field masks, indexed by field ID. Two functions are
* used to access register fields; both take an ipa_reg structure as
* argument. To encode a value to be represented in a register field,
- * the value and field ID are passed to ipa_reg_encode(). To extract
+ * the value and field ID are passed to reg_encode(). To extract
* a value encoded in a register field, the field ID is passed to
- * ipa_reg_decode(). In addition, for single-bit fields, ipa_reg_bit()
+ * reg_decode(). In addition, for single-bit fields, reg_bit()
* can be used to either encode the bit value, or to generate a mask
* used to extract the bit value.
*/
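As a concrete instance of the pattern this comment describes, the idle-indication configuration earlier in this patch combines all three helpers; a condensed sketch mirroring ipa_idle_indication_cfg():

    const struct reg *reg;
    u32 val;

    reg = ipa_reg(ipa, IDLE_INDICATION_CFG);    /* NULL if invalid/undefined */

    val = reg_encode(reg, ENTER_IDLE_DEBOUNCE_THRESH, thresh);  /* multi-bit */
    val |= reg_bit(reg, CONST_NON_IDLE_ENABLE);                 /* single bit */
    iowrite32(val, ipa->reg_virt + reg_offset(reg));

    /* Extracting a field from a value read back uses reg_decode() */
    val = ioread32(ipa->reg_virt + reg_offset(reg));
    thresh = reg_decode(reg, ENTER_IDLE_DEBOUNCE_THRESH, val);

The helpers return 0 for a NULL reg, so a failed lookup degrades to warnings rather than a crash.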
@@ -59,8 +60,10 @@ enum ipa_reg_id {
SHARED_MEM_SIZE,
QSB_MAX_WRITES,
QSB_MAX_READS,
- FILT_ROUT_HASH_EN,
- FILT_ROUT_HASH_FLUSH,
+ FILT_ROUT_HASH_EN, /* Not IPA v5.0+ */
+ FILT_ROUT_CACHE_CFG, /* IPA v5.0+ */
+ FILT_ROUT_HASH_FLUSH, /* Not IPA v5.0+ */
+ FILT_ROUT_CACHE_FLUSH, /* IPA v5.0+ */
STATE_AGGR_ACTIVE,
IPA_BCR, /* Not IPA v4.5+ */
LOCAL_PKT_PROC_CNTXT,
@@ -95,7 +98,9 @@ enum ipa_reg_id {
ENDP_INIT_SEQ, /* TX only */
ENDP_STATUS,
ENDP_FILTER_ROUTER_HSH_CFG, /* Not IPA v4.2 */
- /* The IRQ registers are only used for GSI_EE_AP */
+ ENDP_FILTER_CACHE_CFG, /* IPA v5.0+ */
+ ENDP_ROUTER_CACHE_CFG, /* IPA v5.0+ */
+ /* The IRQ registers that follow are only used for GSI_EE_AP */
IPA_IRQ_STTS,
IPA_IRQ_EN,
IPA_IRQ_CLR,
@@ -106,56 +111,6 @@ enum ipa_reg_id {
IPA_REG_ID_COUNT, /* Last; not an ID */
};
-/**
- * struct ipa_reg - An IPA register descriptor
- * @offset: Register offset relative to base of the "ipa-reg" memory
- * @stride: Distance between two instances, if parameterized
- * @fcount: Number of entries in the @fmask array
- * @fmask: Array of mask values defining position and width of fields
- * @name: Upper-case name of the IPA register
- */
-struct ipa_reg {
- u32 offset;
- u32 stride;
- u32 fcount;
- const u32 *fmask; /* BIT(nr) or GENMASK(h, l) */
- const char *name;
-};
-
-/* Helper macro for defining "simple" (non-parameterized) registers */
-#define IPA_REG(__NAME, __reg_id, __offset) \
- IPA_REG_STRIDE(__NAME, __reg_id, __offset, 0)
-
-/* Helper macro for defining parameterized registers, specifying stride */
-#define IPA_REG_STRIDE(__NAME, __reg_id, __offset, __stride) \
- static const struct ipa_reg ipa_reg_ ## __reg_id = { \
- .name = #__NAME, \
- .offset = __offset, \
- .stride = __stride, \
- }
-
-#define IPA_REG_FIELDS(__NAME, __name, __offset) \
- IPA_REG_STRIDE_FIELDS(__NAME, __name, __offset, 0)
-
-#define IPA_REG_STRIDE_FIELDS(__NAME, __name, __offset, __stride) \
- static const struct ipa_reg ipa_reg_ ## __name = { \
- .name = #__NAME, \
- .offset = __offset, \
- .stride = __stride, \
- .fcount = ARRAY_SIZE(ipa_reg_ ## __name ## _fmask), \
- .fmask = ipa_reg_ ## __name ## _fmask, \
- }
-
-/**
- * struct ipa_regs - Description of registers supported by hardware
- * @reg_count: Number of registers in the @reg[] array
- * @reg: Array of register descriptors
- */
-struct ipa_regs {
- u32 reg_count;
- const struct ipa_reg **reg;
-};
-
/* COMP_CFG register */
enum ipa_reg_comp_cfg_field_id {
COMP_CFG_ENABLE, /* Not IPA v4.0+ */
@@ -251,14 +206,28 @@ enum ipa_reg_qsb_max_reads_field_id {
GEN_QMB_1_MAX_READS_BEATS, /* IPA v4.0+ */
};
+/* FILT_ROUT_CACHE_CFG register */
+enum ipa_reg_filt_rout_cache_cfg_field_id {
+ ROUTER_CACHE_EN,
+ FILTER_CACHE_EN,
+ LOW_PRI_HASH_HIT_DISABLE,
+ LRU_EVICTION_THRESHOLD,
+};
+
/* FILT_ROUT_HASH_EN and FILT_ROUT_HASH_FLUSH registers */
-enum ipa_reg_rout_hash_field_id {
+enum ipa_reg_filt_rout_hash_field_id {
IPV6_ROUTER_HASH,
IPV6_FILTER_HASH,
IPV4_ROUTER_HASH,
IPV4_FILTER_HASH,
};
+/* FILT_ROUT_CACHE_FLUSH register */
+enum ipa_reg_filt_rout_cache_field_id {
+ ROUTER_CACHE,
+ FILTER_CACHE,
+};
+
/* BCR register */
enum ipa_bcr_compat {
BCR_CMDQ_L_LACK_ONE_ENTRY = 0x0, /* Not IPA v4.2+ */
@@ -298,6 +267,7 @@ enum ipa_reg_ipa_tx_cfg_field_id {
DUAL_TX_ENABLE, /* v4.5+ */
SSPND_PA_NO_START_STATE, /* v4.2+, not v4.5 */
SSPND_PA_NO_BQ_STATE, /* v4.2 only */
+ HOLB_STICKY_DROP_EN, /* v5.0+ */
};
/* FLAVOR_0 register */
@@ -333,6 +303,7 @@ enum ipa_reg_timers_pulse_gran_cfg_field_id {
PULSE_GRAN_0,
PULSE_GRAN_1,
PULSE_GRAN_2,
+ PULSE_GRAN_3,
};
/* Values for IPA_GRAN_x fields of TIMERS_PULSE_GRAN_CFG */
@@ -382,11 +353,11 @@ enum ipa_reg_endp_init_nat_field_id {
NAT_EN,
};
-/** enum ipa_nat_en - ENDP_INIT_NAT register NAT_EN field value */
-enum ipa_nat_en {
- IPA_NAT_BYPASS = 0x0,
- IPA_NAT_SRC = 0x1,
- IPA_NAT_DST = 0x2,
+/** enum ipa_nat_type - ENDP_INIT_NAT register NAT_EN field value */
+enum ipa_nat_type {
+ IPA_NAT_TYPE_BYPASS = 0,
+ IPA_NAT_TYPE_SRC = 1,
+ IPA_NAT_TYPE_DST = 2,
};
/* ENDP_INIT_HDR register */
@@ -415,6 +386,8 @@ enum ipa_reg_endp_init_hdr_ext_field_id {
HDR_TOTAL_LEN_OR_PAD_OFFSET_MSB, /* v4.5+ */
HDR_OFST_PKT_SIZE_MSB, /* v4.5+ */
HDR_ADDITIONAL_CONST_LEN_MSB, /* v4.5+ */
+ HDR_BYTES_TO_REMOVE_VALID, /* v5.0+ */
+ HDR_BYTES_TO_REMOVE, /* v5.0+ */
};
/* ENDP_INIT_MODE register */
@@ -573,6 +546,17 @@ enum ipa_reg_endp_filter_router_hsh_cfg_field_id {
ROUTER_HASH_MSK_ALL, /* Bitwise OR of the above 6 fields */
};
+/* ENDP_FILTER_CACHE_CFG and ENDP_ROUTER_CACHE_CFG registers */
+enum ipa_reg_endp_cache_cfg_field_id {
+ CACHE_MSK_SRC_ID,
+ CACHE_MSK_SRC_IP,
+ CACHE_MSK_DST_IP,
+ CACHE_MSK_SRC_PORT,
+ CACHE_MSK_DST_PORT,
+ CACHE_MSK_PROTOCOL,
+ CACHE_MSK_METADATA,
+};
+
/* IPA_IRQ_STTS, IPA_IRQ_EN, and IPA_IRQ_CLR registers */
/**
* enum ipa_irq_id - Bit positions representing type of IPA IRQ
@@ -654,79 +638,15 @@ enum ipa_reg_ipa_irq_uc_field_id {
UC_INTR,
};
-extern const struct ipa_regs ipa_regs_v3_1;
-extern const struct ipa_regs ipa_regs_v3_5_1;
-extern const struct ipa_regs ipa_regs_v4_2;
-extern const struct ipa_regs ipa_regs_v4_5;
-extern const struct ipa_regs ipa_regs_v4_7;
-extern const struct ipa_regs ipa_regs_v4_9;
-extern const struct ipa_regs ipa_regs_v4_11;
-
-/* Return the field mask for a field in a register */
-static inline u32 ipa_reg_fmask(const struct ipa_reg *reg, u32 field_id)
-{
- if (!reg || WARN_ON(field_id >= reg->fcount))
- return 0;
-
- return reg->fmask[field_id];
-}
-
-/* Return the mask for a single-bit field in a register */
-static inline u32 ipa_reg_bit(const struct ipa_reg *reg, u32 field_id)
-{
- u32 fmask = ipa_reg_fmask(reg, field_id);
-
- WARN_ON(!is_power_of_2(fmask));
-
- return fmask;
-}
-
-/* Encode a value into the given field of a register */
-static inline u32
-ipa_reg_encode(const struct ipa_reg *reg, u32 field_id, u32 val)
-{
- u32 fmask = ipa_reg_fmask(reg, field_id);
-
- if (!fmask)
- return 0;
-
- val <<= __ffs(fmask);
- if (WARN_ON(val & ~fmask))
- return 0;
-
- return val;
-}
-
-/* Given a register value, decode (extract) the value in the given field */
-static inline u32
-ipa_reg_decode(const struct ipa_reg *reg, u32 field_id, u32 val)
-{
- u32 fmask = ipa_reg_fmask(reg, field_id);
-
- return fmask ? (val & fmask) >> __ffs(fmask) : 0;
-}
-
-/* Return the maximum value representable by the given field; always 2^n - 1 */
-static inline u32 ipa_reg_field_max(const struct ipa_reg *reg, u32 field_id)
-{
- u32 fmask = ipa_reg_fmask(reg, field_id);
-
- return fmask ? fmask >> __ffs(fmask) : 0;
-}
-
-const struct ipa_reg *ipa_reg(struct ipa *ipa, enum ipa_reg_id reg_id);
-
-/* Returns 0 for NULL reg; warning will have already been issued */
-static inline u32 ipa_reg_offset(const struct ipa_reg *reg)
-{
- return reg ? reg->offset : 0;
-}
+extern const struct regs ipa_regs_v3_1;
+extern const struct regs ipa_regs_v3_5_1;
+extern const struct regs ipa_regs_v4_2;
+extern const struct regs ipa_regs_v4_5;
+extern const struct regs ipa_regs_v4_7;
+extern const struct regs ipa_regs_v4_9;
+extern const struct regs ipa_regs_v4_11;
-/* Returns 0 for NULL reg; warning will have already been issued */
-static inline u32 ipa_reg_n_offset(const struct ipa_reg *reg, u32 n)
-{
- return reg ? reg->offset + n * reg->stride : 0;
-}
+const struct reg *ipa_reg(struct ipa *ipa, enum ipa_reg_id reg_id);
int ipa_reg_init(struct ipa *ipa);
void ipa_reg_exit(struct ipa *ipa);
diff --git a/drivers/net/ipa/ipa_resource.c b/drivers/net/ipa/ipa_resource.c
index a257f0e5e361..82c88a744d10 100644
--- a/drivers/net/ipa/ipa_resource.c
+++ b/drivers/net/ipa/ipa_resource.c
@@ -70,20 +70,20 @@ static bool ipa_resource_limits_valid(struct ipa *ipa,
static void
ipa_resource_config_common(struct ipa *ipa, u32 resource_type,
- const struct ipa_reg *reg,
+ const struct reg *reg,
const struct ipa_resource_limits *xlimits,
const struct ipa_resource_limits *ylimits)
{
u32 val;
- val = ipa_reg_encode(reg, X_MIN_LIM, xlimits->min);
- val |= ipa_reg_encode(reg, X_MAX_LIM, xlimits->max);
+ val = reg_encode(reg, X_MIN_LIM, xlimits->min);
+ val |= reg_encode(reg, X_MAX_LIM, xlimits->max);
if (ylimits) {
- val |= ipa_reg_encode(reg, Y_MIN_LIM, ylimits->min);
- val |= ipa_reg_encode(reg, Y_MAX_LIM, ylimits->max);
+ val |= reg_encode(reg, Y_MIN_LIM, ylimits->min);
+ val |= reg_encode(reg, Y_MAX_LIM, ylimits->max);
}
- iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, resource_type));
+ iowrite32(val, ipa->reg_virt + reg_n_offset(reg, resource_type));
}
static void ipa_resource_config_src(struct ipa *ipa, u32 resource_type,
@@ -92,7 +92,7 @@ static void ipa_resource_config_src(struct ipa *ipa, u32 resource_type,
u32 group_count = data->rsrc_group_src_count;
const struct ipa_resource_limits *ylimits;
const struct ipa_resource *resource;
- const struct ipa_reg *reg;
+ const struct reg *reg;
resource = &data->resource_src[resource_type];
@@ -129,7 +129,7 @@ static void ipa_resource_config_dst(struct ipa *ipa, u32 resource_type,
u32 group_count = data->rsrc_group_dst_count;
const struct ipa_resource_limits *ylimits;
const struct ipa_resource *resource;
- const struct ipa_reg *reg;
+ const struct reg *reg;
resource = &data->resource_dst[resource_type];
diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
index b81e27b61354..f0529c31d0b6 100644
--- a/drivers/net/ipa/ipa_table.c
+++ b/drivers/net/ipa/ipa_table.c
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
- * Copyright (C) 2018-2022 Linaro Ltd.
+ * Copyright (C) 2018-2023 Linaro Ltd.
*/
#include <linux/types.h>
@@ -345,9 +345,8 @@ void ipa_table_reset(struct ipa *ipa, bool modem)
int ipa_table_hash_flush(struct ipa *ipa)
{
- const struct ipa_reg *reg;
struct gsi_trans *trans;
- u32 offset;
+ const struct reg *reg;
u32 val;
if (!ipa_table_hash_support(ipa))
@@ -359,15 +358,22 @@ int ipa_table_hash_flush(struct ipa *ipa)
return -EBUSY;
}
- reg = ipa_reg(ipa, FILT_ROUT_HASH_FLUSH);
- offset = ipa_reg_offset(reg);
+ if (ipa->version < IPA_VERSION_5_0) {
+ reg = ipa_reg(ipa, FILT_ROUT_HASH_FLUSH);
- val = ipa_reg_bit(reg, IPV6_ROUTER_HASH);
- val |= ipa_reg_bit(reg, IPV6_FILTER_HASH);
- val |= ipa_reg_bit(reg, IPV4_ROUTER_HASH);
- val |= ipa_reg_bit(reg, IPV4_FILTER_HASH);
+ val = reg_bit(reg, IPV6_ROUTER_HASH);
+ val |= reg_bit(reg, IPV6_FILTER_HASH);
+ val |= reg_bit(reg, IPV4_ROUTER_HASH);
+ val |= reg_bit(reg, IPV4_FILTER_HASH);
+ } else {
+ reg = ipa_reg(ipa, FILT_ROUT_CACHE_FLUSH);
- ipa_cmd_register_write_add(trans, offset, val, val, false);
+ /* IPA v5.0+ uses a unified cache (both IPv4 and IPv6) */
+ val = reg_bit(reg, ROUTER_CACHE);
+ val |= reg_bit(reg, FILTER_CACHE);
+ }
+
+ ipa_cmd_register_write_add(trans, reg_offset(reg), val, val, false);
gsi_trans_commit_wait(trans);
@@ -486,17 +492,26 @@ static void ipa_filter_tuple_zero(struct ipa_endpoint *endpoint)
{
u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 offset;
u32 val;
- reg = ipa_reg(ipa, ENDP_FILTER_ROUTER_HSH_CFG);
+ if (ipa->version < IPA_VERSION_5_0) {
+ reg = ipa_reg(ipa, ENDP_FILTER_ROUTER_HSH_CFG);
+
+ offset = reg_n_offset(reg, endpoint_id);
+ val = ioread32(endpoint->ipa->reg_virt + offset);
- offset = ipa_reg_n_offset(reg, endpoint_id);
- val = ioread32(endpoint->ipa->reg_virt + offset);
+ /* Zero all filter-related fields, preserving the rest */
+ val &= ~reg_fmask(reg, FILTER_HASH_MSK_ALL);
+ } else {
+ /* IPA v5.0 separates filter and router cache configuration */
+ reg = ipa_reg(ipa, ENDP_FILTER_CACHE_CFG);
+ offset = reg_n_offset(reg, endpoint_id);
- /* Zero all filter-related fields, preserving the rest */
- val &= ~ipa_reg_fmask(reg, FILTER_HASH_MSK_ALL);
+ /* Zero all filter-related fields */
+ val = 0;
+ }
iowrite32(val, endpoint->ipa->reg_virt + offset);
}
@@ -536,17 +551,26 @@ static bool ipa_route_id_modem(struct ipa *ipa, u32 route_id)
*/
static void ipa_route_tuple_zero(struct ipa *ipa, u32 route_id)
{
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 offset;
u32 val;
- reg = ipa_reg(ipa, ENDP_FILTER_ROUTER_HSH_CFG);
- offset = ipa_reg_n_offset(reg, route_id);
+ if (ipa->version < IPA_VERSION_5_0) {
+ reg = ipa_reg(ipa, ENDP_FILTER_ROUTER_HSH_CFG);
+ offset = reg_n_offset(reg, route_id);
- val = ioread32(ipa->reg_virt + offset);
+ val = ioread32(ipa->reg_virt + offset);
- /* Zero all route-related fields, preserving the rest */
- val &= ~ipa_reg_fmask(reg, ROUTER_HASH_MSK_ALL);
+ /* Zero all route-related fields, preserving the rest */
+ val &= ~reg_fmask(reg, ROUTER_HASH_MSK_ALL);
+ } else {
+ /* IPA v5.0 separates filter and router cache configuration */
+ reg = ipa_reg(ipa, ENDP_ROUTER_CACHE_CFG);
+ offset = reg_n_offset(reg, route_id);
+
+ /* Zero all route-related fields */
+ val = 0;
+ }
iowrite32(val, ipa->reg_virt + offset);
}
diff --git a/drivers/net/ipa/ipa_uc.c b/drivers/net/ipa/ipa_uc.c
index f0ee47281015..7eaa0b4ebed9 100644
--- a/drivers/net/ipa/ipa_uc.c
+++ b/drivers/net/ipa/ipa_uc.c
@@ -124,7 +124,7 @@ static struct ipa_uc_mem_area *ipa_uc_shared(struct ipa *ipa)
}
/* Microcontroller event IPA interrupt handler */
-static void ipa_uc_event_handler(struct ipa *ipa, enum ipa_irq_id irq_id)
+static void ipa_uc_event_handler(struct ipa *ipa)
{
struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
struct device *dev = &ipa->pdev->dev;
@@ -138,7 +138,7 @@ static void ipa_uc_event_handler(struct ipa *ipa, enum ipa_irq_id irq_id)
}
/* Microcontroller response IPA interrupt handler */
-static void ipa_uc_response_hdlr(struct ipa *ipa, enum ipa_irq_id irq_id)
+static void ipa_uc_response_hdlr(struct ipa *ipa)
{
struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
struct device *dev = &ipa->pdev->dev;
@@ -170,13 +170,22 @@ static void ipa_uc_response_hdlr(struct ipa *ipa, enum ipa_irq_id irq_id)
}
}
+void ipa_uc_interrupt_handler(struct ipa *ipa, enum ipa_irq_id irq_id)
+{
+ /* Silently ignore anything unrecognized */
+ if (irq_id == IPA_IRQ_UC_0)
+ ipa_uc_event_handler(ipa);
+ else if (irq_id == IPA_IRQ_UC_1)
+ ipa_uc_response_hdlr(ipa);
+}
+
/* Configure the IPA microcontroller subsystem */
void ipa_uc_config(struct ipa *ipa)
{
ipa->uc_powered = false;
ipa->uc_loaded = false;
- ipa_interrupt_add(ipa->interrupt, IPA_IRQ_UC_0, ipa_uc_event_handler);
- ipa_interrupt_add(ipa->interrupt, IPA_IRQ_UC_1, ipa_uc_response_hdlr);
+ ipa_interrupt_enable(ipa, IPA_IRQ_UC_0);
+ ipa_interrupt_enable(ipa, IPA_IRQ_UC_1);
}
/* Inverse of ipa_uc_config() */
@@ -184,8 +193,8 @@ void ipa_uc_deconfig(struct ipa *ipa)
{
struct device *dev = &ipa->pdev->dev;
- ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_UC_1);
- ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_UC_0);
+ ipa_interrupt_disable(ipa, IPA_IRQ_UC_1);
+ ipa_interrupt_disable(ipa, IPA_IRQ_UC_0);
if (ipa->uc_loaded)
ipa_power_retention(ipa, false);
@@ -222,7 +231,7 @@ void ipa_uc_power(struct ipa *ipa)
static void send_uc_command(struct ipa *ipa, u32 command, u32 command_param)
{
struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
- const struct ipa_reg *reg;
+ const struct reg *reg;
u32 val;
/* Fill in the command data */
@@ -234,9 +243,9 @@ static void send_uc_command(struct ipa *ipa, u32 command, u32 command_param)
/* Use an interrupt to tell the microcontroller the command is ready */
reg = ipa_reg(ipa, IPA_IRQ_UC);
- val = ipa_reg_bit(reg, UC_INTR);
+ val = reg_bit(reg, UC_INTR);
- iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg));
+ iowrite32(val, ipa->reg_virt + reg_offset(reg));
}
/* Tell the microcontroller the AP is shutting down */
diff --git a/drivers/net/ipa/ipa_uc.h b/drivers/net/ipa/ipa_uc.h
index 8514096e6f36..85aa0df818c2 100644
--- a/drivers/net/ipa/ipa_uc.h
+++ b/drivers/net/ipa/ipa_uc.h
@@ -7,6 +7,14 @@
#define _IPA_UC_H_
struct ipa;
+enum ipa_irq_id;
+
+/**
+ * ipa_uc_interrupt_handler() - Handler for microcontroller IPA interrupts
+ * @ipa: IPA pointer
+ * @irq_id: IPA interrupt ID
+ */
+void ipa_uc_interrupt_handler(struct ipa *ipa, enum ipa_irq_id irq_id);
/**
* ipa_uc_config() - Configure the IPA microcontroller subsystem
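With per-IRQ handler registration replaced by plain enable/disable calls, routing interrupt IDs to these handlers is now centralized in the interrupt core. A hedged sketch of that dispatch under this interface (the real routing lives in ipa_interrupt.c, which is not part of this diff):

    /* Sketch only: illustrative central dispatch, not the actual code */
    static void ipa_irq_dispatch(struct ipa *ipa, enum ipa_irq_id irq_id)
    {
            if (irq_id == IPA_IRQ_TX_SUSPEND)
                    ipa_power_suspend_handler(ipa, irq_id);
            else if (irq_id == IPA_IRQ_UC_0 || irq_id == IPA_IRQ_UC_1)
                    ipa_uc_interrupt_handler(ipa, irq_id);
            /* Remaining IDs are handled or ignored by the interrupt core */
    }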
diff --git a/drivers/net/ipa/ipa_version.h b/drivers/net/ipa/ipa_version.h
index d15821467743..06e75b8ece7e 100644
--- a/drivers/net/ipa/ipa_version.h
+++ b/drivers/net/ipa/ipa_version.h
@@ -9,7 +9,7 @@
/**
* enum ipa_version
* @IPA_VERSION_3_0: IPA version 3.0/GSI version 1.0
- * @IPA_VERSION_3_1: IPA version 3.1/GSI version 1.1
+ * @IPA_VERSION_3_1: IPA version 3.1/GSI version 1.0
* @IPA_VERSION_3_5: IPA version 3.5/GSI version 1.2
* @IPA_VERSION_3_5_1: IPA version 3.5.1/GSI version 1.3
* @IPA_VERSION_4_0: IPA version 4.0/GSI version 2.0
@@ -20,6 +20,8 @@
* @IPA_VERSION_4_9: IPA version 4.9/GSI version 2.9
* @IPA_VERSION_4_11: IPA version 4.11/GSI version 2.11 (2.1.1)
* @IPA_VERSION_5_0: IPA version 5.0/GSI version 3.0
+ * @IPA_VERSION_5_1: IPA version 5.1/GSI version 3.0
+ * @IPA_VERSION_5_5: IPA version 5.5/GSI version 5.5
* @IPA_VERSION_COUNT: Number of defined IPA versions
*
* Defines the version of IPA (and GSI) hardware present on the platform.
@@ -38,6 +40,8 @@ enum ipa_version {
IPA_VERSION_4_9,
IPA_VERSION_4_11,
IPA_VERSION_5_0,
+ IPA_VERSION_5_1,
+ IPA_VERSION_5_5,
IPA_VERSION_COUNT, /* Last; not a version */
};
diff --git a/drivers/net/ipa/reg.h b/drivers/net/ipa/reg.h
new file mode 100644
index 000000000000..57b457f39b6e
--- /dev/null
+++ b/drivers/net/ipa/reg.h
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (C) 2022-2023 Linaro Ltd. */
+
+#ifndef _REG_H_
+#define _REG_H_
+
+#include <linux/types.h>
+#include <linux/bits.h>
+
+/**
+ * struct reg - A register descriptor
+ * @offset: Register offset relative to base of register memory
+ * @stride: Distance between two instances, if parameterized
+ * @fcount: Number of entries in the @fmask array
+ * @fmask: Array of mask values defining position and width of fields
+ * @name: Upper-case name of the register
+ */
+struct reg {
+ u32 offset;
+ u32 stride;
+ u32 fcount;
+ const u32 *fmask; /* BIT(nr) or GENMASK(h, l) */
+ const char *name;
+};
+
+/* Helper macro for defining "simple" (non-parameterized) registers */
+#define REG(__NAME, __reg_id, __offset) \
+ REG_STRIDE(__NAME, __reg_id, __offset, 0)
+
+/* Helper macro for defining parameterized registers, specifying stride */
+#define REG_STRIDE(__NAME, __reg_id, __offset, __stride) \
+ static const struct reg reg_ ## __reg_id = { \
+ .name = #__NAME, \
+ .offset = __offset, \
+ .stride = __stride, \
+ }
+
+#define REG_FIELDS(__NAME, __name, __offset) \
+ REG_STRIDE_FIELDS(__NAME, __name, __offset, 0)
+
+#define REG_STRIDE_FIELDS(__NAME, __name, __offset, __stride) \
+ static const struct reg reg_ ## __name = { \
+ .name = #__NAME, \
+ .offset = __offset, \
+ .stride = __stride, \
+ .fcount = ARRAY_SIZE(reg_ ## __name ## _fmask), \
+ .fmask = reg_ ## __name ## _fmask, \
+ }
+
+/**
+ * struct regs - Description of registers supported by hardware
+ * @reg_count: Number of registers in the @reg[] array
+ * @reg: Array of register descriptors
+ */
+struct regs {
+ u32 reg_count;
+ const struct reg **reg;
+};
+
+static inline const struct reg *reg(const struct regs *regs, u32 reg_id)
+{
+ if (WARN(reg_id >= regs->reg_count,
+ "reg out of range (%u > %u)\n", reg_id, regs->reg_count - 1))
+ return NULL;
+
+ return regs->reg[reg_id];
+}
+
+/* Return the field mask for a field in a register, or 0 on error */
+static inline u32 reg_fmask(const struct reg *reg, u32 field_id)
+{
+ if (!reg || WARN_ON(field_id >= reg->fcount))
+ return 0;
+
+ return reg->fmask[field_id];
+}
+
+/* Return the mask for a single-bit field in a register, or 0 on error */
+static inline u32 reg_bit(const struct reg *reg, u32 field_id)
+{
+ u32 fmask = reg_fmask(reg, field_id);
+
+ if (WARN_ON(!is_power_of_2(fmask)))
+ return 0;
+
+ return fmask;
+}
+
+/* Return the maximum value representable by the given field; always 2^n - 1 */
+static inline u32 reg_field_max(const struct reg *reg, u32 field_id)
+{
+ u32 fmask = reg_fmask(reg, field_id);
+
+ return fmask ? fmask >> __ffs(fmask) : 0;
+}
+
+/* Encode a value into the given field of a register */
+static inline u32 reg_encode(const struct reg *reg, u32 field_id, u32 val)
+{
+ u32 fmask = reg_fmask(reg, field_id);
+
+ if (!fmask)
+ return 0;
+
+ val <<= __ffs(fmask);
+ if (WARN_ON(val & ~fmask))
+ return 0;
+
+ return val;
+}
+
+/* Given a register value, decode (extract) the value in the given field */
+static inline u32 reg_decode(const struct reg *reg, u32 field_id, u32 val)
+{
+ u32 fmask = reg_fmask(reg, field_id);
+
+ return fmask ? (val & fmask) >> __ffs(fmask) : 0;
+}
+
+/* Returns 0 for NULL reg; warning should have already been issued */
+static inline u32 reg_offset(const struct reg *reg)
+{
+ return reg ? reg->offset : 0;
+}
+
+/* Returns 0 for NULL reg; warning should have already been issued */
+static inline u32 reg_n_offset(const struct reg *reg, u32 n)
+{
+ return reg ? reg->offset + n * reg->stride : 0;
+}
+
+#endif /* _REG_H_ */
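Putting reg.h together: a per-version source file defines each register's field masks, instantiates the descriptor with REG_FIELDS() or REG_STRIDE(), and collects the descriptors in an ID-indexed array, exactly as the GSI register files that follow do. A condensed sketch with hypothetical names and offset:

    static const u32 reg_example_fmask[] = {
            [EXAMPLE_FIELD] = GENMASK(7, 0),    /* hypothetical field IDs */
            [EXAMPLE_FLAG]  = BIT(8),
            /* Bits 9-31 reserved */
    };

    REG_FIELDS(EXAMPLE, example, 0x00000100);   /* hypothetical offset */

    static const struct reg *reg_array[] = {
            [EXAMPLE]       = &reg_example,
    };

    const struct regs example_regs = {
            .reg_count      = ARRAY_SIZE(reg_array),
            .reg            = reg_array,
    };

Lookups then go through reg() for a bounds-checked descriptor; reg_field_max() on EXAMPLE_FIELD would yield 0xff here, the largest value reg_encode() accepts for that field.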
diff --git a/drivers/net/ipa/reg/gsi_reg-v3.1.c b/drivers/net/ipa/reg/gsi_reg-v3.1.c
new file mode 100644
index 000000000000..e036805a7882
--- /dev/null
+++ b/drivers/net/ipa/reg/gsi_reg-v3.1.c
@@ -0,0 +1,291 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (C) 2023 Linaro Ltd. */
+
+#include <linux/types.h>
+
+#include "../gsi.h"
+#include "../reg.h"
+#include "../gsi_reg.h"
+
+REG(INTER_EE_SRC_CH_IRQ_MSK, inter_ee_src_ch_irq_msk,
+ 0x0000c020 + 0x1000 * GSI_EE_AP);
+
+REG(INTER_EE_SRC_EV_CH_IRQ_MSK, inter_ee_src_ev_ch_irq_msk,
+ 0x0000c024 + 0x1000 * GSI_EE_AP);
+
+static const u32 reg_ch_c_cntxt_0_fmask[] = {
+ [CHTYPE_PROTOCOL] = GENMASK(2, 0),
+ [CHTYPE_DIR] = BIT(3),
+ [CH_EE] = GENMASK(7, 4),
+ [CHID] = GENMASK(12, 8),
+ /* Bit 13 reserved */
+ [ERINDEX] = GENMASK(18, 14),
+ /* Bit 19 reserved */
+ [CHSTATE] = GENMASK(23, 20),
+ [ELEMENT_SIZE] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(CH_C_CNTXT_0, ch_c_cntxt_0,
+ 0x0001c000 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ch_c_cntxt_1_fmask[] = {
+ [CH_R_LENGTH] = GENMASK(15, 0),
+ /* Bits 16-31 reserved */
+};
+
+REG_STRIDE_FIELDS(CH_C_CNTXT_1, ch_c_cntxt_1,
+ 0x0001c004 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_CNTXT_2, ch_c_cntxt_2, 0x0001c008 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_CNTXT_3, ch_c_cntxt_3, 0x0001c00c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ch_c_qos_fmask[] = {
+ [WRR_WEIGHT] = GENMASK(3, 0),
+ /* Bits 4-7 reserved */
+ [MAX_PREFETCH] = BIT(8),
+ [USE_DB_ENG] = BIT(9),
+ /* Bits 10-31 reserved */
+};
+
+REG_STRIDE_FIELDS(CH_C_QOS, ch_c_qos, 0x0001c05c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_error_log_fmask[] = {
+ [ERR_ARG3] = GENMASK(3, 0),
+ [ERR_ARG2] = GENMASK(7, 4),
+ [ERR_ARG1] = GENMASK(11, 8),
+ [ERR_CODE] = GENMASK(15, 12),
+ /* Bits 16-18 reserved */
+ [ERR_VIRT_IDX] = GENMASK(23, 19),
+ [ERR_TYPE] = GENMASK(27, 24),
+ [ERR_EE] = GENMASK(31, 28),
+};
+
+REG_STRIDE(CH_C_SCRATCH_0, ch_c_scratch_0,
+ 0x0001c060 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_1, ch_c_scratch_1,
+ 0x0001c064 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_2, ch_c_scratch_2,
+ 0x0001c068 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_3, ch_c_scratch_3,
+ 0x0001c06c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_0_fmask[] = {
+ [EV_CHTYPE] = GENMASK(3, 0),
+ [EV_EE] = GENMASK(7, 4),
+ [EV_EVCHID] = GENMASK(15, 8),
+ [EV_INTYPE] = BIT(16),
+ /* Bits 17-19 reserved */
+ [EV_CHSTATE] = GENMASK(23, 20),
+ [EV_ELEMENT_SIZE] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_0, ev_ch_e_cntxt_0,
+ 0x0001d000 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_1_fmask[] = {
+ [R_LENGTH] = GENMASK(15, 0),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_1, ev_ch_e_cntxt_1,
+ 0x0001d004 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_2, ev_ch_e_cntxt_2,
+ 0x0001d008 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_3, ev_ch_e_cntxt_3,
+ 0x0001d00c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_4, ev_ch_e_cntxt_4,
+ 0x0001d010 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_8_fmask[] = {
+ [EV_MODT] = GENMASK(15, 0),
+ [EV_MODC] = GENMASK(23, 16),
+ [EV_MOD_CNT] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_8, ev_ch_e_cntxt_8,
+ 0x0001d020 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_9, ev_ch_e_cntxt_9,
+ 0x0001d024 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_10, ev_ch_e_cntxt_10,
+ 0x0001d028 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_11, ev_ch_e_cntxt_11,
+ 0x0001d02c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_12, ev_ch_e_cntxt_12,
+ 0x0001d030 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_13, ev_ch_e_cntxt_13,
+ 0x0001d034 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_SCRATCH_0, ev_ch_e_scratch_0,
+ 0x0001d048 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_SCRATCH_1, ev_ch_e_scratch_1,
+ 0x0001d04c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_DOORBELL_0, ch_c_doorbell_0,
+ 0x0001e000 + 0x4000 * GSI_EE_AP, 0x08);
+
+REG_STRIDE(EV_CH_E_DOORBELL_0, ev_ch_e_doorbell_0,
+ 0x0001e100 + 0x4000 * GSI_EE_AP, 0x08);
+
+static const u32 reg_gsi_status_fmask[] = {
+ [ENABLED] = BIT(0),
+ /* Bits 1-31 reserved */
+};
+
+REG_FIELDS(GSI_STATUS, gsi_status, 0x0001f000 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_ch_cmd_fmask[] = {
+ [CH_CHID] = GENMASK(7, 0),
+ /* Bits 8-23 reserved */
+ [CH_OPCODE] = GENMASK(31, 24),
+};
+
+REG_FIELDS(CH_CMD, ch_cmd, 0x0001f008 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_ev_ch_cmd_fmask[] = {
+ [EV_CHID] = GENMASK(7, 0),
+ /* Bits 8-23 reserved */
+ [EV_OPCODE] = GENMASK(31, 24),
+};
+
+REG_FIELDS(EV_CH_CMD, ev_ch_cmd, 0x0001f010 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_generic_cmd_fmask[] = {
+ [GENERIC_OPCODE] = GENMASK(4, 0),
+ [GENERIC_CHID] = GENMASK(9, 5),
+ [GENERIC_EE] = GENMASK(13, 10),
+ /* Bits 14-31 reserved */
+};
+
+REG_FIELDS(GENERIC_CMD, generic_cmd, 0x0001f018 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_TYPE_IRQ, cntxt_type_irq, 0x0001f080 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_TYPE_IRQ_MSK, cntxt_type_irq_msk, 0x0001f088 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ, cntxt_src_ch_irq, 0x0001f090 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ, cntxt_src_ev_ch_irq, 0x0001f094 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ_MSK, cntxt_src_ch_irq_msk,
+ 0x0001f098 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ_MSK, cntxt_src_ev_ch_irq_msk,
+ 0x0001f09c + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ_CLR, cntxt_src_ch_irq_clr,
+ 0x0001f0a0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ_CLR, cntxt_src_ev_ch_irq_clr,
+ 0x0001f0a4 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ, cntxt_src_ieob_irq, 0x0001f0b0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ_MSK, cntxt_src_ieob_irq_msk,
+ 0x0001f0b8 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ_CLR, cntxt_src_ieob_irq_clr,
+ 0x0001f0c0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_STTS, cntxt_glob_irq_stts, 0x0001f100 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_EN, cntxt_glob_irq_en, 0x0001f108 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_CLR, cntxt_glob_irq_clr, 0x0001f110 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_STTS, cntxt_gsi_irq_stts, 0x0001f118 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_EN, cntxt_gsi_irq_en, 0x0001f120 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_CLR, cntxt_gsi_irq_clr, 0x0001f128 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_cntxt_intset_fmask[] = {
+ [INTYPE] = BIT(0)
+ /* Bits 1-31 reserved */
+};
+
+REG_FIELDS(CNTXT_INTSET, cntxt_intset, 0x0001f180 + 0x4000 * GSI_EE_AP);
+
+REG_FIELDS(ERROR_LOG, error_log, 0x0001f200 + 0x4000 * GSI_EE_AP);
+
+REG(ERROR_LOG_CLR, error_log_clr, 0x0001f210 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_cntxt_scratch_0_fmask[] = {
+ [INTER_EE_RESULT] = GENMASK(2, 0),
+ /* Bits 3-4 reserved */
+ [GENERIC_EE_RESULT] = GENMASK(7, 5),
+ /* Bits 8-31 reserved */
+};
+
+REG_FIELDS(CNTXT_SCRATCH_0, cntxt_scratch_0, 0x0001f400 + 0x4000 * GSI_EE_AP);
+
+static const struct reg *reg_array[] = {
+ [INTER_EE_SRC_CH_IRQ_MSK] = &reg_inter_ee_src_ch_irq_msk,
+ [INTER_EE_SRC_EV_CH_IRQ_MSK] = &reg_inter_ee_src_ev_ch_irq_msk,
+ [CH_C_CNTXT_0] = &reg_ch_c_cntxt_0,
+ [CH_C_CNTXT_1] = &reg_ch_c_cntxt_1,
+ [CH_C_CNTXT_2] = &reg_ch_c_cntxt_2,
+ [CH_C_CNTXT_3] = &reg_ch_c_cntxt_3,
+ [CH_C_QOS] = &reg_ch_c_qos,
+ [CH_C_SCRATCH_0] = &reg_ch_c_scratch_0,
+ [CH_C_SCRATCH_1] = &reg_ch_c_scratch_1,
+ [CH_C_SCRATCH_2] = &reg_ch_c_scratch_2,
+ [CH_C_SCRATCH_3] = &reg_ch_c_scratch_3,
+ [EV_CH_E_CNTXT_0] = &reg_ev_ch_e_cntxt_0,
+ [EV_CH_E_CNTXT_1] = &reg_ev_ch_e_cntxt_1,
+ [EV_CH_E_CNTXT_2] = &reg_ev_ch_e_cntxt_2,
+ [EV_CH_E_CNTXT_3] = &reg_ev_ch_e_cntxt_3,
+ [EV_CH_E_CNTXT_4] = &reg_ev_ch_e_cntxt_4,
+ [EV_CH_E_CNTXT_8] = &reg_ev_ch_e_cntxt_8,
+ [EV_CH_E_CNTXT_9] = &reg_ev_ch_e_cntxt_9,
+ [EV_CH_E_CNTXT_10] = &reg_ev_ch_e_cntxt_10,
+ [EV_CH_E_CNTXT_11] = &reg_ev_ch_e_cntxt_11,
+ [EV_CH_E_CNTXT_12] = &reg_ev_ch_e_cntxt_12,
+ [EV_CH_E_CNTXT_13] = &reg_ev_ch_e_cntxt_13,
+ [EV_CH_E_SCRATCH_0] = &reg_ev_ch_e_scratch_0,
+ [EV_CH_E_SCRATCH_1] = &reg_ev_ch_e_scratch_1,
+ [CH_C_DOORBELL_0] = &reg_ch_c_doorbell_0,
+ [EV_CH_E_DOORBELL_0] = &reg_ev_ch_e_doorbell_0,
+ [GSI_STATUS] = &reg_gsi_status,
+ [CH_CMD] = &reg_ch_cmd,
+ [EV_CH_CMD] = &reg_ev_ch_cmd,
+ [GENERIC_CMD] = &reg_generic_cmd,
+ [CNTXT_TYPE_IRQ] = &reg_cntxt_type_irq,
+ [CNTXT_TYPE_IRQ_MSK] = &reg_cntxt_type_irq_msk,
+ [CNTXT_SRC_CH_IRQ] = &reg_cntxt_src_ch_irq,
+ [CNTXT_SRC_EV_CH_IRQ] = &reg_cntxt_src_ev_ch_irq,
+ [CNTXT_SRC_CH_IRQ_MSK] = &reg_cntxt_src_ch_irq_msk,
+ [CNTXT_SRC_EV_CH_IRQ_MSK] = &reg_cntxt_src_ev_ch_irq_msk,
+ [CNTXT_SRC_CH_IRQ_CLR] = &reg_cntxt_src_ch_irq_clr,
+ [CNTXT_SRC_EV_CH_IRQ_CLR] = &reg_cntxt_src_ev_ch_irq_clr,
+ [CNTXT_SRC_IEOB_IRQ] = &reg_cntxt_src_ieob_irq,
+ [CNTXT_SRC_IEOB_IRQ_MSK] = &reg_cntxt_src_ieob_irq_msk,
+ [CNTXT_SRC_IEOB_IRQ_CLR] = &reg_cntxt_src_ieob_irq_clr,
+ [CNTXT_GLOB_IRQ_STTS] = &reg_cntxt_glob_irq_stts,
+ [CNTXT_GLOB_IRQ_EN] = &reg_cntxt_glob_irq_en,
+ [CNTXT_GLOB_IRQ_CLR] = &reg_cntxt_glob_irq_clr,
+ [CNTXT_GSI_IRQ_STTS] = &reg_cntxt_gsi_irq_stts,
+ [CNTXT_GSI_IRQ_EN] = &reg_cntxt_gsi_irq_en,
+ [CNTXT_GSI_IRQ_CLR] = &reg_cntxt_gsi_irq_clr,
+ [CNTXT_INTSET] = &reg_cntxt_intset,
+ [ERROR_LOG] = &reg_error_log,
+ [ERROR_LOG_CLR] = &reg_error_log_clr,
+ [CNTXT_SCRATCH_0] = &reg_cntxt_scratch_0,
+};
+
+const struct regs gsi_regs_v3_1 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
+};
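The 0x80 stride on the CH_C_* and EV_CH_E_* entries above is what makes them parameterized: reg_n_offset() adds n strides to the base offset. For channel 2 of CH_C_CNTXT_0 as defined in this file:

    /* Per-channel offset is base + n * stride */
    u32 offset = reg_n_offset(&reg_ch_c_cntxt_0, 2);
    /* = (0x0001c000 + 0x4000 * GSI_EE_AP) + 2 * 0x80 */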
diff --git a/drivers/net/ipa/reg/gsi_reg-v3.5.1.c b/drivers/net/ipa/reg/gsi_reg-v3.5.1.c
new file mode 100644
index 000000000000..8c3ab3a5288e
--- /dev/null
+++ b/drivers/net/ipa/reg/gsi_reg-v3.5.1.c
@@ -0,0 +1,303 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (C) 2023 Linaro Ltd. */
+
+#include <linux/types.h>
+
+#include "../gsi.h"
+#include "../reg.h"
+#include "../gsi_reg.h"
+
+REG(INTER_EE_SRC_CH_IRQ_MSK, inter_ee_src_ch_irq_msk,
+ 0x0000c020 + 0x1000 * GSI_EE_AP);
+
+REG(INTER_EE_SRC_EV_CH_IRQ_MSK, inter_ee_src_ev_ch_irq_msk,
+ 0x0000c024 + 0x1000 * GSI_EE_AP);
+
+static const u32 reg_ch_c_cntxt_0_fmask[] = {
+ [CHTYPE_PROTOCOL] = GENMASK(2, 0),
+ [CHTYPE_DIR] = BIT(3),
+ [CH_EE] = GENMASK(7, 4),
+ [CHID] = GENMASK(12, 8),
+ /* Bit 13 reserved */
+ [ERINDEX] = GENMASK(18, 14),
+ /* Bit 19 reserved */
+ [CHSTATE] = GENMASK(23, 20),
+ [ELEMENT_SIZE] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(CH_C_CNTXT_0, ch_c_cntxt_0,
+ 0x0001c000 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ch_c_cntxt_1_fmask[] = {
+ [CH_R_LENGTH] = GENMASK(15, 0),
+ /* Bits 16-31 reserved */
+};
+
+REG_STRIDE_FIELDS(CH_C_CNTXT_1, ch_c_cntxt_1,
+ 0x0001c004 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_CNTXT_2, ch_c_cntxt_2, 0x0001c008 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_CNTXT_3, ch_c_cntxt_3, 0x0001c00c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ch_c_qos_fmask[] = {
+ [WRR_WEIGHT] = GENMASK(3, 0),
+ /* Bits 4-7 reserved */
+ [MAX_PREFETCH] = BIT(8),
+ [USE_DB_ENG] = BIT(9),
+ /* Bits 10-31 reserved */
+};
+
+REG_STRIDE_FIELDS(CH_C_QOS, ch_c_qos, 0x0001c05c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_error_log_fmask[] = {
+ [ERR_ARG3] = GENMASK(3, 0),
+ [ERR_ARG2] = GENMASK(7, 4),
+ [ERR_ARG1] = GENMASK(11, 8),
+ [ERR_CODE] = GENMASK(15, 12),
+ /* Bits 16-18 reserved */
+ [ERR_VIRT_IDX] = GENMASK(23, 19),
+ [ERR_TYPE] = GENMASK(27, 24),
+ [ERR_EE] = GENMASK(31, 28),
+};
+
+REG_STRIDE(CH_C_SCRATCH_0, ch_c_scratch_0,
+ 0x0001c060 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_1, ch_c_scratch_1,
+ 0x0001c064 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_2, ch_c_scratch_2,
+ 0x0001c068 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_3, ch_c_scratch_3,
+ 0x0001c06c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_0_fmask[] = {
+ [EV_CHTYPE] = GENMASK(3, 0),
+ [EV_EE] = GENMASK(7, 4),
+ [EV_EVCHID] = GENMASK(15, 8),
+ [EV_INTYPE] = BIT(16),
+ /* Bits 17-19 reserved */
+ [EV_CHSTATE] = GENMASK(23, 20),
+ [EV_ELEMENT_SIZE] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_0, ev_ch_e_cntxt_0,
+ 0x0001d000 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_1_fmask[] = {
+ [R_LENGTH] = GENMASK(15, 0),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_1, ev_ch_e_cntxt_1,
+ 0x0001d004 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_2, ev_ch_e_cntxt_2,
+ 0x0001d008 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_3, ev_ch_e_cntxt_3,
+ 0x0001d00c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_4, ev_ch_e_cntxt_4,
+ 0x0001d010 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_8_fmask[] = {
+ [EV_MODT] = GENMASK(15, 0),
+ [EV_MODC] = GENMASK(23, 16),
+ [EV_MOD_CNT] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_8, ev_ch_e_cntxt_8,
+ 0x0001d020 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_9, ev_ch_e_cntxt_9,
+ 0x0001d024 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_10, ev_ch_e_cntxt_10,
+ 0x0001d028 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_11, ev_ch_e_cntxt_11,
+ 0x0001d02c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_12, ev_ch_e_cntxt_12,
+ 0x0001d030 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_13, ev_ch_e_cntxt_13,
+ 0x0001d034 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_SCRATCH_0, ev_ch_e_scratch_0,
+ 0x0001d048 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_SCRATCH_1, ev_ch_e_scratch_1,
+ 0x0001d04c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_DOORBELL_0, ch_c_doorbell_0,
+ 0x0001e000 + 0x4000 * GSI_EE_AP, 0x08);
+
+REG_STRIDE(EV_CH_E_DOORBELL_0, ev_ch_e_doorbell_0,
+ 0x0001e100 + 0x4000 * GSI_EE_AP, 0x08);
+
+static const u32 reg_gsi_status_fmask[] = {
+ [ENABLED] = BIT(0),
+ /* Bits 1-31 reserved */
+};
+
+REG_FIELDS(GSI_STATUS, gsi_status, 0x0001f000 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_ch_cmd_fmask[] = {
+ [CH_CHID] = GENMASK(7, 0),
+ /* Bits 8-23 reserved */
+ [CH_OPCODE] = GENMASK(31, 24),
+};
+
+REG_FIELDS(CH_CMD, ch_cmd, 0x0001f008 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_ev_ch_cmd_fmask[] = {
+ [EV_CHID] = GENMASK(7, 0),
+ /* Bits 8-23 reserved */
+ [EV_OPCODE] = GENMASK(31, 24),
+};
+
+REG_FIELDS(EV_CH_CMD, ev_ch_cmd, 0x0001f010 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_generic_cmd_fmask[] = {
+ [GENERIC_OPCODE] = GENMASK(4, 0),
+ [GENERIC_CHID] = GENMASK(9, 5),
+ [GENERIC_EE] = GENMASK(13, 10),
+ /* Bits 14-31 reserved */
+};
+
+REG_FIELDS(GENERIC_CMD, generic_cmd, 0x0001f018 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_hw_param_2_fmask[] = {
+ [IRAM_SIZE] = GENMASK(2, 0),
+ [NUM_CH_PER_EE] = GENMASK(7, 3),
+ [NUM_EV_PER_EE] = GENMASK(12, 8),
+ [GSI_CH_PEND_TRANSLATE] = BIT(13),
+ [GSI_CH_FULL_LOGIC] = BIT(14),
+ /* Bits 15-31 reserved */
+};
+
+REG_FIELDS(HW_PARAM_2, hw_param_2, 0x0001f040 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_TYPE_IRQ, cntxt_type_irq, 0x0001f080 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_TYPE_IRQ_MSK, cntxt_type_irq_msk, 0x0001f088 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ, cntxt_src_ch_irq, 0x0001f090 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ, cntxt_src_ev_ch_irq, 0x0001f094 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ_MSK, cntxt_src_ch_irq_msk,
+ 0x0001f098 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ_MSK, cntxt_src_ev_ch_irq_msk,
+ 0x0001f09c + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ_CLR, cntxt_src_ch_irq_clr,
+ 0x0001f0a0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ_CLR, cntxt_src_ev_ch_irq_clr,
+ 0x0001f0a4 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ, cntxt_src_ieob_irq, 0x0001f0b0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ_MSK, cntxt_src_ieob_irq_msk,
+ 0x0001f0b8 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ_CLR, cntxt_src_ieob_irq_clr,
+ 0x0001f0c0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_STTS, cntxt_glob_irq_stts, 0x0001f100 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_EN, cntxt_glob_irq_en, 0x0001f108 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_CLR, cntxt_glob_irq_clr, 0x0001f110 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_STTS, cntxt_gsi_irq_stts, 0x0001f118 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_EN, cntxt_gsi_irq_en, 0x0001f120 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_CLR, cntxt_gsi_irq_clr, 0x0001f128 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_cntxt_intset_fmask[] = {
+ [INTYPE] = BIT(0)
+ /* Bits 1-31 reserved */
+};
+
+REG_FIELDS(CNTXT_INTSET, cntxt_intset, 0x0001f180 + 0x4000 * GSI_EE_AP);
+
+REG_FIELDS(ERROR_LOG, error_log, 0x0001f200 + 0x4000 * GSI_EE_AP);
+
+REG(ERROR_LOG_CLR, error_log_clr, 0x0001f210 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_cntxt_scratch_0_fmask[] = {
+ [INTER_EE_RESULT] = GENMASK(2, 0),
+ /* Bits 3-4 reserved */
+ [GENERIC_EE_RESULT] = GENMASK(7, 5),
+ /* Bits 8-31 reserved */
+};
+
+REG_FIELDS(CNTXT_SCRATCH_0, cntxt_scratch_0, 0x0001f400 + 0x4000 * GSI_EE_AP);
+
+static const struct reg *reg_array[] = {
+ [INTER_EE_SRC_CH_IRQ_MSK] = &reg_inter_ee_src_ch_irq_msk,
+ [INTER_EE_SRC_EV_CH_IRQ_MSK] = &reg_inter_ee_src_ev_ch_irq_msk,
+ [CH_C_CNTXT_0] = &reg_ch_c_cntxt_0,
+ [CH_C_CNTXT_1] = &reg_ch_c_cntxt_1,
+ [CH_C_CNTXT_2] = &reg_ch_c_cntxt_2,
+ [CH_C_CNTXT_3] = &reg_ch_c_cntxt_3,
+ [CH_C_QOS] = &reg_ch_c_qos,
+ [CH_C_SCRATCH_0] = &reg_ch_c_scratch_0,
+ [CH_C_SCRATCH_1] = &reg_ch_c_scratch_1,
+ [CH_C_SCRATCH_2] = &reg_ch_c_scratch_2,
+ [CH_C_SCRATCH_3] = &reg_ch_c_scratch_3,
+ [EV_CH_E_CNTXT_0] = &reg_ev_ch_e_cntxt_0,
+ [EV_CH_E_CNTXT_1] = &reg_ev_ch_e_cntxt_1,
+ [EV_CH_E_CNTXT_2] = &reg_ev_ch_e_cntxt_2,
+ [EV_CH_E_CNTXT_3] = &reg_ev_ch_e_cntxt_3,
+ [EV_CH_E_CNTXT_4] = &reg_ev_ch_e_cntxt_4,
+ [EV_CH_E_CNTXT_8] = &reg_ev_ch_e_cntxt_8,
+ [EV_CH_E_CNTXT_9] = &reg_ev_ch_e_cntxt_9,
+ [EV_CH_E_CNTXT_10] = &reg_ev_ch_e_cntxt_10,
+ [EV_CH_E_CNTXT_11] = &reg_ev_ch_e_cntxt_11,
+ [EV_CH_E_CNTXT_12] = &reg_ev_ch_e_cntxt_12,
+ [EV_CH_E_CNTXT_13] = &reg_ev_ch_e_cntxt_13,
+ [EV_CH_E_SCRATCH_0] = &reg_ev_ch_e_scratch_0,
+ [EV_CH_E_SCRATCH_1] = &reg_ev_ch_e_scratch_1,
+ [CH_C_DOORBELL_0] = &reg_ch_c_doorbell_0,
+ [EV_CH_E_DOORBELL_0] = &reg_ev_ch_e_doorbell_0,
+ [GSI_STATUS] = &reg_gsi_status,
+ [CH_CMD] = &reg_ch_cmd,
+ [EV_CH_CMD] = &reg_ev_ch_cmd,
+ [GENERIC_CMD] = &reg_generic_cmd,
+ [HW_PARAM_2] = &reg_hw_param_2,
+ [CNTXT_TYPE_IRQ] = &reg_cntxt_type_irq,
+ [CNTXT_TYPE_IRQ_MSK] = &reg_cntxt_type_irq_msk,
+ [CNTXT_SRC_CH_IRQ] = &reg_cntxt_src_ch_irq,
+ [CNTXT_SRC_EV_CH_IRQ] = &reg_cntxt_src_ev_ch_irq,
+ [CNTXT_SRC_CH_IRQ_MSK] = &reg_cntxt_src_ch_irq_msk,
+ [CNTXT_SRC_EV_CH_IRQ_MSK] = &reg_cntxt_src_ev_ch_irq_msk,
+ [CNTXT_SRC_CH_IRQ_CLR] = &reg_cntxt_src_ch_irq_clr,
+ [CNTXT_SRC_EV_CH_IRQ_CLR] = &reg_cntxt_src_ev_ch_irq_clr,
+ [CNTXT_SRC_IEOB_IRQ] = &reg_cntxt_src_ieob_irq,
+ [CNTXT_SRC_IEOB_IRQ_MSK] = &reg_cntxt_src_ieob_irq_msk,
+ [CNTXT_SRC_IEOB_IRQ_CLR] = &reg_cntxt_src_ieob_irq_clr,
+ [CNTXT_GLOB_IRQ_STTS] = &reg_cntxt_glob_irq_stts,
+ [CNTXT_GLOB_IRQ_EN] = &reg_cntxt_glob_irq_en,
+ [CNTXT_GLOB_IRQ_CLR] = &reg_cntxt_glob_irq_clr,
+ [CNTXT_GSI_IRQ_STTS] = &reg_cntxt_gsi_irq_stts,
+ [CNTXT_GSI_IRQ_EN] = &reg_cntxt_gsi_irq_en,
+ [CNTXT_GSI_IRQ_CLR] = &reg_cntxt_gsi_irq_clr,
+ [CNTXT_INTSET] = &reg_cntxt_intset,
+ [ERROR_LOG] = &reg_error_log,
+ [ERROR_LOG_CLR] = &reg_error_log_clr,
+ [CNTXT_SCRATCH_0] = &reg_cntxt_scratch_0,
+};
+
+const struct regs gsi_regs_v3_5_1 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
+};
diff --git a/drivers/net/ipa/reg/gsi_reg-v4.0.c b/drivers/net/ipa/reg/gsi_reg-v4.0.c
new file mode 100644
index 000000000000..7cc7a21d07f9
--- /dev/null
+++ b/drivers/net/ipa/reg/gsi_reg-v4.0.c
@@ -0,0 +1,308 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (C) 2023 Linaro Ltd. */
+
+#include <linux/types.h>
+
+#include "../gsi.h"
+#include "../reg.h"
+#include "../gsi_reg.h"
+
+REG(INTER_EE_SRC_CH_IRQ_MSK, inter_ee_src_ch_irq_msk,
+ 0x0000c020 + 0x1000 * GSI_EE_AP);
+
+REG(INTER_EE_SRC_EV_CH_IRQ_MSK, inter_ee_src_ev_ch_irq_msk,
+ 0x0000c024 + 0x1000 * GSI_EE_AP);
+
+static const u32 reg_ch_c_cntxt_0_fmask[] = {
+ [CHTYPE_PROTOCOL] = GENMASK(2, 0),
+ [CHTYPE_DIR] = BIT(3),
+ [CH_EE] = GENMASK(7, 4),
+ [CHID] = GENMASK(12, 8),
+ /* Bit 13 reserved */
+ [ERINDEX] = GENMASK(18, 14),
+ /* Bit 19 reserved */
+ [CHSTATE] = GENMASK(23, 20),
+ [ELEMENT_SIZE] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(CH_C_CNTXT_0, ch_c_cntxt_0,
+ 0x0001c000 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ch_c_cntxt_1_fmask[] = {
+ [CH_R_LENGTH] = GENMASK(15, 0),
+ /* Bits 16-31 reserved */
+};
+
+REG_STRIDE_FIELDS(CH_C_CNTXT_1, ch_c_cntxt_1,
+ 0x0001c004 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_CNTXT_2, ch_c_cntxt_2, 0x0001c008 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_CNTXT_3, ch_c_cntxt_3, 0x0001c00c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ch_c_qos_fmask[] = {
+ [WRR_WEIGHT] = GENMASK(3, 0),
+ /* Bits 4-7 reserved */
+ [MAX_PREFETCH] = BIT(8),
+ [USE_DB_ENG] = BIT(9),
+ [USE_ESCAPE_BUF_ONLY] = BIT(10),
+ /* Bits 11-31 reserved */
+};
+
+REG_STRIDE_FIELDS(CH_C_QOS, ch_c_qos, 0x0001c05c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_error_log_fmask[] = {
+ [ERR_ARG3] = GENMASK(3, 0),
+ [ERR_ARG2] = GENMASK(7, 4),
+ [ERR_ARG1] = GENMASK(11, 8),
+ [ERR_CODE] = GENMASK(15, 12),
+ /* Bits 16-18 reserved */
+ [ERR_VIRT_IDX] = GENMASK(23, 19),
+ [ERR_TYPE] = GENMASK(27, 24),
+ [ERR_EE] = GENMASK(31, 28),
+};
+
+REG_STRIDE(CH_C_SCRATCH_0, ch_c_scratch_0,
+ 0x0001c060 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_1, ch_c_scratch_1,
+ 0x0001c064 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_2, ch_c_scratch_2,
+ 0x0001c068 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_3, ch_c_scratch_3,
+ 0x0001c06c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_0_fmask[] = {
+ [EV_CHTYPE] = GENMASK(3, 0),
+ [EV_EE] = GENMASK(7, 4),
+ [EV_EVCHID] = GENMASK(15, 8),
+ [EV_INTYPE] = BIT(16),
+ /* Bits 17-19 reserved */
+ [EV_CHSTATE] = GENMASK(23, 20),
+ [EV_ELEMENT_SIZE] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_0, ev_ch_e_cntxt_0,
+ 0x0001d000 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_1_fmask[] = {
+ [R_LENGTH] = GENMASK(15, 0),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_1, ev_ch_e_cntxt_1,
+ 0x0001d004 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_2, ev_ch_e_cntxt_2,
+ 0x0001d008 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_3, ev_ch_e_cntxt_3,
+ 0x0001d00c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_4, ev_ch_e_cntxt_4,
+ 0x0001d010 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_8_fmask[] = {
+ [EV_MODT] = GENMASK(15, 0),
+ [EV_MODC] = GENMASK(23, 16),
+ [EV_MOD_CNT] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_8, ev_ch_e_cntxt_8,
+ 0x0001d020 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_9, ev_ch_e_cntxt_9,
+ 0x0001d024 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_10, ev_ch_e_cntxt_10,
+ 0x0001d028 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_11, ev_ch_e_cntxt_11,
+ 0x0001d02c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_12, ev_ch_e_cntxt_12,
+ 0x0001d030 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_13, ev_ch_e_cntxt_13,
+ 0x0001d034 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_SCRATCH_0, ev_ch_e_scratch_0,
+ 0x0001d048 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_SCRATCH_1, ev_ch_e_scratch_1,
+ 0x0001d04c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_DOORBELL_0, ch_c_doorbell_0,
+ 0x0001e000 + 0x4000 * GSI_EE_AP, 0x08);
+
+REG_STRIDE(EV_CH_E_DOORBELL_0, ev_ch_e_doorbell_0,
+ 0x0001e100 + 0x4000 * GSI_EE_AP, 0x08);
+
+static const u32 reg_gsi_status_fmask[] = {
+ [ENABLED] = BIT(0),
+ /* Bits 1-31 reserved */
+};
+
+REG_FIELDS(GSI_STATUS, gsi_status, 0x0001f000 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_ch_cmd_fmask[] = {
+ [CH_CHID] = GENMASK(7, 0),
+ /* Bits 8-23 reserved */
+ [CH_OPCODE] = GENMASK(31, 24),
+};
+
+REG_FIELDS(CH_CMD, ch_cmd, 0x0001f008 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_ev_ch_cmd_fmask[] = {
+ [EV_CHID] = GENMASK(7, 0),
+ /* Bits 8-23 reserved */
+ [EV_OPCODE] = GENMASK(31, 24),
+};
+
+REG_FIELDS(EV_CH_CMD, ev_ch_cmd, 0x0001f010 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_generic_cmd_fmask[] = {
+ [GENERIC_OPCODE] = GENMASK(4, 0),
+ [GENERIC_CHID] = GENMASK(9, 5),
+ [GENERIC_EE] = GENMASK(13, 10),
+ /* Bits 14-31 reserved */
+};
+
+REG_FIELDS(GENERIC_CMD, generic_cmd, 0x0001f018 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_hw_param_2_fmask[] = {
+ [IRAM_SIZE] = GENMASK(2, 0),
+ [NUM_CH_PER_EE] = GENMASK(7, 3),
+ [NUM_EV_PER_EE] = GENMASK(12, 8),
+ [GSI_CH_PEND_TRANSLATE] = BIT(13),
+ [GSI_CH_FULL_LOGIC] = BIT(14),
+ [GSI_USE_SDMA] = BIT(15),
+ [GSI_SDMA_N_INT] = GENMASK(18, 16),
+ [GSI_SDMA_MAX_BURST] = GENMASK(26, 19),
+ [GSI_SDMA_N_IOVEC] = GENMASK(29, 27),
+ /* Bits 30-31 reserved */
+};
+
+REG_FIELDS(HW_PARAM_2, hw_param_2, 0x0001f040 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_TYPE_IRQ, cntxt_type_irq, 0x0001f080 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_TYPE_IRQ_MSK, cntxt_type_irq_msk, 0x0001f088 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ, cntxt_src_ch_irq, 0x0001f090 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ, cntxt_src_ev_ch_irq, 0x0001f094 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ_MSK, cntxt_src_ch_irq_msk,
+ 0x0001f098 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ_MSK, cntxt_src_ev_ch_irq_msk,
+ 0x0001f09c + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ_CLR, cntxt_src_ch_irq_clr,
+ 0x0001f0a0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ_CLR, cntxt_src_ev_ch_irq_clr,
+ 0x0001f0a4 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ, cntxt_src_ieob_irq, 0x0001f0b0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ_MSK, cntxt_src_ieob_irq_msk,
+ 0x0001f0b8 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ_CLR, cntxt_src_ieob_irq_clr,
+ 0x0001f0c0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_STTS, cntxt_glob_irq_stts, 0x0001f100 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_EN, cntxt_glob_irq_en, 0x0001f108 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_CLR, cntxt_glob_irq_clr, 0x0001f110 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_STTS, cntxt_gsi_irq_stts, 0x0001f118 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_EN, cntxt_gsi_irq_en, 0x0001f120 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_CLR, cntxt_gsi_irq_clr, 0x0001f128 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_cntxt_intset_fmask[] = {
+ [INTYPE] = BIT(0),
+ /* Bits 1-31 reserved */
+};
+
+REG_FIELDS(CNTXT_INTSET, cntxt_intset, 0x0001f180 + 0x4000 * GSI_EE_AP);
+
+REG_FIELDS(ERROR_LOG, error_log, 0x0001f200 + 0x4000 * GSI_EE_AP);
+
+REG(ERROR_LOG_CLR, error_log_clr, 0x0001f210 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_cntxt_scratch_0_fmask[] = {
+ [INTER_EE_RESULT] = GENMASK(2, 0),
+ /* Bits 3-4 reserved */
+ [GENERIC_EE_RESULT] = GENMASK(7, 5),
+ /* Bits 8-31 reserved */
+};
+
+REG_FIELDS(CNTXT_SCRATCH_0, cntxt_scratch_0, 0x0001f400 + 0x4000 * GSI_EE_AP);
+
+static const struct reg *reg_array[] = {
+ [INTER_EE_SRC_CH_IRQ_MSK] = &reg_inter_ee_src_ch_irq_msk,
+ [INTER_EE_SRC_EV_CH_IRQ_MSK] = &reg_inter_ee_src_ev_ch_irq_msk,
+ [CH_C_CNTXT_0] = &reg_ch_c_cntxt_0,
+ [CH_C_CNTXT_1] = &reg_ch_c_cntxt_1,
+ [CH_C_CNTXT_2] = &reg_ch_c_cntxt_2,
+ [CH_C_CNTXT_3] = &reg_ch_c_cntxt_3,
+ [CH_C_QOS] = &reg_ch_c_qos,
+ [CH_C_SCRATCH_0] = &reg_ch_c_scratch_0,
+ [CH_C_SCRATCH_1] = &reg_ch_c_scratch_1,
+ [CH_C_SCRATCH_2] = &reg_ch_c_scratch_2,
+ [CH_C_SCRATCH_3] = &reg_ch_c_scratch_3,
+ [EV_CH_E_CNTXT_0] = &reg_ev_ch_e_cntxt_0,
+ [EV_CH_E_CNTXT_1] = &reg_ev_ch_e_cntxt_1,
+ [EV_CH_E_CNTXT_2] = &reg_ev_ch_e_cntxt_2,
+ [EV_CH_E_CNTXT_3] = &reg_ev_ch_e_cntxt_3,
+ [EV_CH_E_CNTXT_4] = &reg_ev_ch_e_cntxt_4,
+ [EV_CH_E_CNTXT_8] = &reg_ev_ch_e_cntxt_8,
+ [EV_CH_E_CNTXT_9] = &reg_ev_ch_e_cntxt_9,
+ [EV_CH_E_CNTXT_10] = &reg_ev_ch_e_cntxt_10,
+ [EV_CH_E_CNTXT_11] = &reg_ev_ch_e_cntxt_11,
+ [EV_CH_E_CNTXT_12] = &reg_ev_ch_e_cntxt_12,
+ [EV_CH_E_CNTXT_13] = &reg_ev_ch_e_cntxt_13,
+ [EV_CH_E_SCRATCH_0] = &reg_ev_ch_e_scratch_0,
+ [EV_CH_E_SCRATCH_1] = &reg_ev_ch_e_scratch_1,
+ [CH_C_DOORBELL_0] = &reg_ch_c_doorbell_0,
+ [EV_CH_E_DOORBELL_0] = &reg_ev_ch_e_doorbell_0,
+ [GSI_STATUS] = &reg_gsi_status,
+ [CH_CMD] = &reg_ch_cmd,
+ [EV_CH_CMD] = &reg_ev_ch_cmd,
+ [GENERIC_CMD] = &reg_generic_cmd,
+ [HW_PARAM_2] = &reg_hw_param_2,
+ [CNTXT_TYPE_IRQ] = &reg_cntxt_type_irq,
+ [CNTXT_TYPE_IRQ_MSK] = &reg_cntxt_type_irq_msk,
+ [CNTXT_SRC_CH_IRQ] = &reg_cntxt_src_ch_irq,
+ [CNTXT_SRC_EV_CH_IRQ] = &reg_cntxt_src_ev_ch_irq,
+ [CNTXT_SRC_CH_IRQ_MSK] = &reg_cntxt_src_ch_irq_msk,
+ [CNTXT_SRC_EV_CH_IRQ_MSK] = &reg_cntxt_src_ev_ch_irq_msk,
+ [CNTXT_SRC_CH_IRQ_CLR] = &reg_cntxt_src_ch_irq_clr,
+ [CNTXT_SRC_EV_CH_IRQ_CLR] = &reg_cntxt_src_ev_ch_irq_clr,
+ [CNTXT_SRC_IEOB_IRQ] = &reg_cntxt_src_ieob_irq,
+ [CNTXT_SRC_IEOB_IRQ_MSK] = &reg_cntxt_src_ieob_irq_msk,
+ [CNTXT_SRC_IEOB_IRQ_CLR] = &reg_cntxt_src_ieob_irq_clr,
+ [CNTXT_GLOB_IRQ_STTS] = &reg_cntxt_glob_irq_stts,
+ [CNTXT_GLOB_IRQ_EN] = &reg_cntxt_glob_irq_en,
+ [CNTXT_GLOB_IRQ_CLR] = &reg_cntxt_glob_irq_clr,
+ [CNTXT_GSI_IRQ_STTS] = &reg_cntxt_gsi_irq_stts,
+ [CNTXT_GSI_IRQ_EN] = &reg_cntxt_gsi_irq_en,
+ [CNTXT_GSI_IRQ_CLR] = &reg_cntxt_gsi_irq_clr,
+ [CNTXT_INTSET] = &reg_cntxt_intset,
+ [ERROR_LOG] = &reg_error_log,
+ [ERROR_LOG_CLR] = &reg_error_log_clr,
+ [CNTXT_SCRATCH_0] = &reg_cntxt_scratch_0,
+};
+
+const struct regs gsi_regs_v4_0 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
+};
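
[Editor's note: these per-version tables are consumed indirectly: code looks up a struct reg by its enum ID and uses generic helpers to compute offsets and encode fields, so no version-specific mask or address leaks into common code. Below is a minimal sketch of issuing a channel command this way; it assumes the gsi_reg() lookup and the reg_encode()/reg_offset() helpers from the driver's reg.h, and is illustrative rather than the driver's exact function.]

    static void gsi_ch_cmd_sketch(struct gsi *gsi, u32 channel_id, u32 opcode)
    {
    	/* Fetch the version-specific CH_CMD entry from the regs table */
    	const struct reg *reg = gsi_reg(gsi, CH_CMD);
    	u32 val;

    	/* Masks and shifts come from the fmask[] array, not from constants */
    	val = reg_encode(reg, CH_CHID, channel_id);
    	val |= reg_encode(reg, CH_OPCODE, opcode);

    	/* Offset comes from the REG_FIELDS() definition for this version */
    	iowrite32(val, gsi->virt + reg_offset(reg));
    }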
diff --git a/drivers/net/ipa/reg/gsi_reg-v4.11.c b/drivers/net/ipa/reg/gsi_reg-v4.11.c
new file mode 100644
index 000000000000..01696519032f
--- /dev/null
+++ b/drivers/net/ipa/reg/gsi_reg-v4.11.c
@@ -0,0 +1,313 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (C) 2023 Linaro Ltd. */
+
+#include <linux/types.h>
+
+#include "../gsi.h"
+#include "../reg.h"
+#include "../gsi_reg.h"
+
+REG(INTER_EE_SRC_CH_IRQ_MSK, inter_ee_src_ch_irq_msk,
+ 0x0000c020 + 0x1000 * GSI_EE_AP);
+
+REG(INTER_EE_SRC_EV_CH_IRQ_MSK, inter_ee_src_ev_ch_irq_msk,
+ 0x0000c024 + 0x1000 * GSI_EE_AP);
+
+static const u32 reg_ch_c_cntxt_0_fmask[] = {
+ [CHTYPE_PROTOCOL] = GENMASK(2, 0),
+ [CHTYPE_DIR] = BIT(3),
+ [CH_EE] = GENMASK(7, 4),
+ [CHID] = GENMASK(12, 8),
+ [CHTYPE_PROTOCOL_MSB] = BIT(13),
+ [ERINDEX] = GENMASK(18, 14),
+ /* Bit 19 reserved */
+ [CHSTATE] = GENMASK(23, 20),
+ [ELEMENT_SIZE] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(CH_C_CNTXT_0, ch_c_cntxt_0,
+ 0x0000f000 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ch_c_cntxt_1_fmask[] = {
+ [CH_R_LENGTH] = GENMASK(19, 0),
+ /* Bits 20-31 reserved */
+};
+
+REG_STRIDE_FIELDS(CH_C_CNTXT_1, ch_c_cntxt_1,
+ 0x0000f004 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_CNTXT_2, ch_c_cntxt_2, 0x0000f008 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_CNTXT_3, ch_c_cntxt_3, 0x0000f00c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ch_c_qos_fmask[] = {
+ [WRR_WEIGHT] = GENMASK(3, 0),
+ /* Bits 4-7 reserved */
+ [MAX_PREFETCH] = BIT(8),
+ [USE_DB_ENG] = BIT(9),
+ [PREFETCH_MODE] = GENMASK(13, 10),
+ /* Bits 14-15 reserved */
+ [EMPTY_LVL_THRSHOLD] = GENMASK(23, 16),
+ [DB_IN_BYTES] = BIT(24),
+ /* Bits 25-31 reserved */
+};
+
+REG_STRIDE_FIELDS(CH_C_QOS, ch_c_qos, 0x0000f05c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_error_log_fmask[] = {
+ [ERR_ARG3] = GENMASK(3, 0),
+ [ERR_ARG2] = GENMASK(7, 4),
+ [ERR_ARG1] = GENMASK(11, 8),
+ [ERR_CODE] = GENMASK(15, 12),
+ /* Bits 16-18 reserved */
+ [ERR_VIRT_IDX] = GENMASK(23, 19),
+ [ERR_TYPE] = GENMASK(27, 24),
+ [ERR_EE] = GENMASK(31, 28),
+};
+
+REG_STRIDE(CH_C_SCRATCH_0, ch_c_scratch_0,
+ 0x0000f060 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_1, ch_c_scratch_1,
+ 0x0000f064 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_2, ch_c_scratch_2,
+ 0x0000f068 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_3, ch_c_scratch_3,
+ 0x0000f06c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_0_fmask[] = {
+ [EV_CHTYPE] = GENMASK(3, 0),
+ [EV_EE] = GENMASK(7, 4),
+ [EV_EVCHID] = GENMASK(15, 8),
+ [EV_INTYPE] = BIT(16),
+ /* Bits 17-19 reserved */
+ [EV_CHSTATE] = GENMASK(23, 20),
+ [EV_ELEMENT_SIZE] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_0, ev_ch_e_cntxt_0,
+ 0x00010000 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_1_fmask[] = {
+ [R_LENGTH] = GENMASK(19, 0),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_1, ev_ch_e_cntxt_1,
+ 0x00010004 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_2, ev_ch_e_cntxt_2,
+ 0x00010008 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_3, ev_ch_e_cntxt_3,
+ 0x0001000c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_4, ev_ch_e_cntxt_4,
+ 0x00010010 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_8_fmask[] = {
+ [EV_MODT] = GENMASK(15, 0),
+ [EV_MODC] = GENMASK(23, 16),
+ [EV_MOD_CNT] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_8, ev_ch_e_cntxt_8,
+ 0x00010020 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_9, ev_ch_e_cntxt_9,
+ 0x00010024 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_10, ev_ch_e_cntxt_10,
+ 0x00010028 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_11, ev_ch_e_cntxt_11,
+ 0x0001002c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_12, ev_ch_e_cntxt_12,
+ 0x00010030 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_13, ev_ch_e_cntxt_13,
+ 0x00010034 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_SCRATCH_0, ev_ch_e_scratch_0,
+ 0x00010048 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_SCRATCH_1, ev_ch_e_scratch_1,
+ 0x0001004c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_DOORBELL_0, ch_c_doorbell_0,
+ 0x00011000 + 0x4000 * GSI_EE_AP, 0x08);
+
+REG_STRIDE(EV_CH_E_DOORBELL_0, ev_ch_e_doorbell_0,
+ 0x00011100 + 0x4000 * GSI_EE_AP, 0x08);
+
+static const u32 reg_gsi_status_fmask[] = {
+ [ENABLED] = BIT(0),
+ /* Bits 1-31 reserved */
+};
+
+REG_FIELDS(GSI_STATUS, gsi_status, 0x00012000 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_ch_cmd_fmask[] = {
+ [CH_CHID] = GENMASK(7, 0),
+ /* Bits 8-23 reserved */
+ [CH_OPCODE] = GENMASK(31, 24),
+};
+
+REG_FIELDS(CH_CMD, ch_cmd, 0x00012008 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_ev_ch_cmd_fmask[] = {
+ [EV_CHID] = GENMASK(7, 0),
+ /* Bits 8-23 reserved */
+ [EV_OPCODE] = GENMASK(31, 24),
+};
+
+REG_FIELDS(EV_CH_CMD, ev_ch_cmd, 0x00012010 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_generic_cmd_fmask[] = {
+ [GENERIC_OPCODE] = GENMASK(4, 0),
+ [GENERIC_CHID] = GENMASK(9, 5),
+ [GENERIC_EE] = GENMASK(13, 10),
+ /* Bits 14-23 reserved */
+ [GENERIC_PARAMS] = GENMASK(31, 24),
+};
+
+REG_FIELDS(GENERIC_CMD, generic_cmd, 0x00012018 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_hw_param_2_fmask[] = {
+ [IRAM_SIZE] = GENMASK(2, 0),
+ [NUM_CH_PER_EE] = GENMASK(7, 3),
+ [NUM_EV_PER_EE] = GENMASK(12, 8),
+ [GSI_CH_PEND_TRANSLATE] = BIT(13),
+ [GSI_CH_FULL_LOGIC] = BIT(14),
+ [GSI_USE_SDMA] = BIT(15),
+ [GSI_SDMA_N_INT] = GENMASK(18, 16),
+ [GSI_SDMA_MAX_BURST] = GENMASK(26, 19),
+ [GSI_SDMA_N_IOVEC] = GENMASK(29, 27),
+ [GSI_USE_RD_WR_ENG] = BIT(30),
+ [GSI_USE_INTER_EE] = BIT(31),
+};
+
+REG_FIELDS(HW_PARAM_2, hw_param_2, 0x00012040 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_TYPE_IRQ, cntxt_type_irq, 0x00012080 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_TYPE_IRQ_MSK, cntxt_type_irq_msk, 0x00012088 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ, cntxt_src_ch_irq, 0x00012090 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ, cntxt_src_ev_ch_irq, 0x00012094 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ_MSK, cntxt_src_ch_irq_msk,
+ 0x00012098 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ_MSK, cntxt_src_ev_ch_irq_msk,
+ 0x0001209c + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ_CLR, cntxt_src_ch_irq_clr,
+ 0x000120a0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ_CLR, cntxt_src_ev_ch_irq_clr,
+ 0x000120a4 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ, cntxt_src_ieob_irq, 0x000120b0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ_MSK, cntxt_src_ieob_irq_msk,
+ 0x000120b8 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ_CLR, cntxt_src_ieob_irq_clr,
+ 0x000120c0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_STTS, cntxt_glob_irq_stts, 0x00012100 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_EN, cntxt_glob_irq_en, 0x00012108 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_CLR, cntxt_glob_irq_clr, 0x00012110 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_STTS, cntxt_gsi_irq_stts, 0x00012118 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_EN, cntxt_gsi_irq_en, 0x00012120 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_CLR, cntxt_gsi_irq_clr, 0x00012128 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_cntxt_intset_fmask[] = {
+ [INTYPE] = BIT(0),
+ /* Bits 1-31 reserved */
+};
+
+REG_FIELDS(CNTXT_INTSET, cntxt_intset, 0x00012180 + 0x4000 * GSI_EE_AP);
+
+REG_FIELDS(ERROR_LOG, error_log, 0x00012200 + 0x4000 * GSI_EE_AP);
+
+REG(ERROR_LOG_CLR, error_log_clr, 0x00012210 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_cntxt_scratch_0_fmask[] = {
+ [INTER_EE_RESULT] = GENMASK(2, 0),
+ /* Bits 3-4 reserved */
+ [GENERIC_EE_RESULT] = GENMASK(7, 5),
+ /* Bits 8-31 reserved */
+};
+
+REG_FIELDS(CNTXT_SCRATCH_0, cntxt_scratch_0, 0x00012400 + 0x4000 * GSI_EE_AP);
+
+static const struct reg *reg_array[] = {
+ [INTER_EE_SRC_CH_IRQ_MSK] = &reg_inter_ee_src_ch_irq_msk,
+ [INTER_EE_SRC_EV_CH_IRQ_MSK] = &reg_inter_ee_src_ev_ch_irq_msk,
+ [CH_C_CNTXT_0] = &reg_ch_c_cntxt_0,
+ [CH_C_CNTXT_1] = &reg_ch_c_cntxt_1,
+ [CH_C_CNTXT_2] = &reg_ch_c_cntxt_2,
+ [CH_C_CNTXT_3] = &reg_ch_c_cntxt_3,
+ [CH_C_QOS] = &reg_ch_c_qos,
+ [CH_C_SCRATCH_0] = &reg_ch_c_scratch_0,
+ [CH_C_SCRATCH_1] = &reg_ch_c_scratch_1,
+ [CH_C_SCRATCH_2] = &reg_ch_c_scratch_2,
+ [CH_C_SCRATCH_3] = &reg_ch_c_scratch_3,
+ [EV_CH_E_CNTXT_0] = &reg_ev_ch_e_cntxt_0,
+ [EV_CH_E_CNTXT_1] = &reg_ev_ch_e_cntxt_1,
+ [EV_CH_E_CNTXT_2] = &reg_ev_ch_e_cntxt_2,
+ [EV_CH_E_CNTXT_3] = &reg_ev_ch_e_cntxt_3,
+ [EV_CH_E_CNTXT_4] = &reg_ev_ch_e_cntxt_4,
+ [EV_CH_E_CNTXT_8] = &reg_ev_ch_e_cntxt_8,
+ [EV_CH_E_CNTXT_9] = &reg_ev_ch_e_cntxt_9,
+ [EV_CH_E_CNTXT_10] = &reg_ev_ch_e_cntxt_10,
+ [EV_CH_E_CNTXT_11] = &reg_ev_ch_e_cntxt_11,
+ [EV_CH_E_CNTXT_12] = &reg_ev_ch_e_cntxt_12,
+ [EV_CH_E_CNTXT_13] = &reg_ev_ch_e_cntxt_13,
+ [EV_CH_E_SCRATCH_0] = &reg_ev_ch_e_scratch_0,
+ [EV_CH_E_SCRATCH_1] = &reg_ev_ch_e_scratch_1,
+ [CH_C_DOORBELL_0] = &reg_ch_c_doorbell_0,
+ [EV_CH_E_DOORBELL_0] = &reg_ev_ch_e_doorbell_0,
+ [GSI_STATUS] = &reg_gsi_status,
+ [CH_CMD] = &reg_ch_cmd,
+ [EV_CH_CMD] = &reg_ev_ch_cmd,
+ [GENERIC_CMD] = &reg_generic_cmd,
+ [HW_PARAM_2] = &reg_hw_param_2,
+ [CNTXT_TYPE_IRQ] = &reg_cntxt_type_irq,
+ [CNTXT_TYPE_IRQ_MSK] = &reg_cntxt_type_irq_msk,
+ [CNTXT_SRC_CH_IRQ] = &reg_cntxt_src_ch_irq,
+ [CNTXT_SRC_EV_CH_IRQ] = &reg_cntxt_src_ev_ch_irq,
+ [CNTXT_SRC_CH_IRQ_MSK] = &reg_cntxt_src_ch_irq_msk,
+ [CNTXT_SRC_EV_CH_IRQ_MSK] = &reg_cntxt_src_ev_ch_irq_msk,
+ [CNTXT_SRC_CH_IRQ_CLR] = &reg_cntxt_src_ch_irq_clr,
+ [CNTXT_SRC_EV_CH_IRQ_CLR] = &reg_cntxt_src_ev_ch_irq_clr,
+ [CNTXT_SRC_IEOB_IRQ] = &reg_cntxt_src_ieob_irq,
+ [CNTXT_SRC_IEOB_IRQ_MSK] = &reg_cntxt_src_ieob_irq_msk,
+ [CNTXT_SRC_IEOB_IRQ_CLR] = &reg_cntxt_src_ieob_irq_clr,
+ [CNTXT_GLOB_IRQ_STTS] = &reg_cntxt_glob_irq_stts,
+ [CNTXT_GLOB_IRQ_EN] = &reg_cntxt_glob_irq_en,
+ [CNTXT_GLOB_IRQ_CLR] = &reg_cntxt_glob_irq_clr,
+ [CNTXT_GSI_IRQ_STTS] = &reg_cntxt_gsi_irq_stts,
+ [CNTXT_GSI_IRQ_EN] = &reg_cntxt_gsi_irq_en,
+ [CNTXT_GSI_IRQ_CLR] = &reg_cntxt_gsi_irq_clr,
+ [CNTXT_INTSET] = &reg_cntxt_intset,
+ [ERROR_LOG] = &reg_error_log,
+ [ERROR_LOG_CLR] = &reg_error_log_clr,
+ [CNTXT_SCRATCH_0] = &reg_cntxt_scratch_0,
+};
+
+const struct regs gsi_regs_v4_11 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
+};
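
[Editor's note: read-only capability registers such as HW_PARAM_2 are decoded the same way, field by field. A short sketch of probing the channel and event-ring counts, assuming the reg_decode() helper from the driver's reg.h; variable names here are illustrative.]

    /* Sketch: read hardware capabilities out of HW_PARAM_2 */
    const struct reg *reg = gsi_reg(gsi, HW_PARAM_2);
    u32 val = ioread32(gsi->virt + reg_offset(reg));

    u32 channel_count = reg_decode(reg, NUM_CH_PER_EE, val);	/* GENMASK(7, 3) */
    u32 evt_ring_count = reg_decode(reg, NUM_EV_PER_EE, val);	/* GENMASK(12, 8) */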
diff --git a/drivers/net/ipa/reg/gsi_reg-v4.5.c b/drivers/net/ipa/reg/gsi_reg-v4.5.c
new file mode 100644
index 000000000000..648b51b88d4e
--- /dev/null
+++ b/drivers/net/ipa/reg/gsi_reg-v4.5.c
@@ -0,0 +1,311 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (C) 2023 Linaro Ltd. */
+
+#include <linux/types.h>
+
+#include "../gsi.h"
+#include "../reg.h"
+#include "../gsi_reg.h"
+
+REG(INTER_EE_SRC_CH_IRQ_MSK, inter_ee_src_ch_irq_msk,
+ 0x0000c020 + 0x1000 * GSI_EE_AP);
+
+REG(INTER_EE_SRC_EV_CH_IRQ_MSK, inter_ee_src_ev_ch_irq_msk,
+ 0x0000c024 + 0x1000 * GSI_EE_AP);
+
+static const u32 reg_ch_c_cntxt_0_fmask[] = {
+ [CHTYPE_PROTOCOL] = GENMASK(2, 0),
+ [CHTYPE_DIR] = BIT(3),
+ [CH_EE] = GENMASK(7, 4),
+ [CHID] = GENMASK(12, 8),
+ [CHTYPE_PROTOCOL_MSB] = BIT(13),
+ [ERINDEX] = GENMASK(18, 14),
+ /* Bit 19 reserved */
+ [CHSTATE] = GENMASK(23, 20),
+ [ELEMENT_SIZE] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(CH_C_CNTXT_0, ch_c_cntxt_0,
+ 0x0000f000 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ch_c_cntxt_1_fmask[] = {
+ [CH_R_LENGTH] = GENMASK(15, 0),
+ /* Bits 16-31 reserved */
+};
+
+REG_STRIDE_FIELDS(CH_C_CNTXT_1, ch_c_cntxt_1,
+ 0x0000f004 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_CNTXT_2, ch_c_cntxt_2, 0x0000f008 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_CNTXT_3, ch_c_cntxt_3, 0x0000f00c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ch_c_qos_fmask[] = {
+ [WRR_WEIGHT] = GENMASK(3, 0),
+ /* Bits 4-7 reserved */
+ [MAX_PREFETCH] = BIT(8),
+ [USE_DB_ENG] = BIT(9),
+ [PREFETCH_MODE] = GENMASK(13, 10),
+ /* Bits 14-15 reserved */
+ [EMPTY_LVL_THRSHOLD] = GENMASK(23, 16),
+ /* Bits 24-31 reserved */
+};
+
+REG_STRIDE_FIELDS(CH_C_QOS, ch_c_qos, 0x0000f05c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_error_log_fmask[] = {
+ [ERR_ARG3] = GENMASK(3, 0),
+ [ERR_ARG2] = GENMASK(7, 4),
+ [ERR_ARG1] = GENMASK(11, 8),
+ [ERR_CODE] = GENMASK(15, 12),
+ /* Bits 16-18 reserved */
+ [ERR_VIRT_IDX] = GENMASK(23, 19),
+ [ERR_TYPE] = GENMASK(27, 24),
+ [ERR_EE] = GENMASK(31, 28),
+};
+
+REG_STRIDE(CH_C_SCRATCH_0, ch_c_scratch_0,
+ 0x0000f060 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_1, ch_c_scratch_1,
+ 0x0000f064 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_2, ch_c_scratch_2,
+ 0x0000f068 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_3, ch_c_scratch_3,
+ 0x0000f06c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_0_fmask[] = {
+ [EV_CHTYPE] = GENMASK(3, 0),
+ [EV_EE] = GENMASK(7, 4),
+ [EV_EVCHID] = GENMASK(15, 8),
+ [EV_INTYPE] = BIT(16),
+ /* Bits 17-19 reserved */
+ [EV_CHSTATE] = GENMASK(23, 20),
+ [EV_ELEMENT_SIZE] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_0, ev_ch_e_cntxt_0,
+ 0x00010000 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_1_fmask[] = {
+ [R_LENGTH] = GENMASK(15, 0),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_1, ev_ch_e_cntxt_1,
+ 0x00010004 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_2, ev_ch_e_cntxt_2,
+ 0x00010008 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_3, ev_ch_e_cntxt_3,
+ 0x0001000c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_4, ev_ch_e_cntxt_4,
+ 0x00010010 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_8_fmask[] = {
+ [EV_MODT] = GENMASK(15, 0),
+ [EV_MODC] = GENMASK(23, 16),
+ [EV_MOD_CNT] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_8, ev_ch_e_cntxt_8,
+ 0x00010020 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_9, ev_ch_e_cntxt_9,
+ 0x00010024 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_10, ev_ch_e_cntxt_10,
+ 0x00010028 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_11, ev_ch_e_cntxt_11,
+ 0x0001002c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_12, ev_ch_e_cntxt_12,
+ 0x00010030 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_13, ev_ch_e_cntxt_13,
+ 0x00010034 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_SCRATCH_0, ev_ch_e_scratch_0,
+ 0x00010048 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_SCRATCH_1, ev_ch_e_scratch_1,
+ 0x0001004c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_DOORBELL_0, ch_c_doorbell_0,
+ 0x0001e000 + 0x4000 * GSI_EE_AP, 0x08);
+
+REG_STRIDE(EV_CH_E_DOORBELL_0, ev_ch_e_doorbell_0,
+ 0x0001e100 + 0x4000 * GSI_EE_AP, 0x08);
+
+static const u32 reg_gsi_status_fmask[] = {
+ [ENABLED] = BIT(0),
+ /* Bits 1-31 reserved */
+};
+
+REG_FIELDS(GSI_STATUS, gsi_status, 0x0001f000 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_ch_cmd_fmask[] = {
+ [CH_CHID] = GENMASK(7, 0),
+ /* Bits 8-23 reserved */
+ [CH_OPCODE] = GENMASK(31, 24),
+};
+
+REG_FIELDS(CH_CMD, ch_cmd, 0x0001f008 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_ev_ch_cmd_fmask[] = {
+ [EV_CHID] = GENMASK(7, 0),
+ /* Bits 8-23 reserved */
+ [EV_OPCODE] = GENMASK(31, 24),
+};
+
+REG_FIELDS(EV_CH_CMD, ev_ch_cmd, 0x0001f010 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_generic_cmd_fmask[] = {
+ [GENERIC_OPCODE] = GENMASK(4, 0),
+ [GENERIC_CHID] = GENMASK(9, 5),
+ [GENERIC_EE] = GENMASK(13, 10),
+ /* Bits 14-31 reserved */
+};
+
+REG_FIELDS(GENERIC_CMD, generic_cmd, 0x0001f018 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_hw_param_2_fmask[] = {
+ [IRAM_SIZE] = GENMASK(2, 0),
+ [NUM_CH_PER_EE] = GENMASK(7, 3),
+ [NUM_EV_PER_EE] = GENMASK(12, 8),
+ [GSI_CH_PEND_TRANSLATE] = BIT(13),
+ [GSI_CH_FULL_LOGIC] = BIT(14),
+ [GSI_USE_SDMA] = BIT(15),
+ [GSI_SDMA_N_INT] = GENMASK(18, 16),
+ [GSI_SDMA_MAX_BURST] = GENMASK(26, 19),
+ [GSI_SDMA_N_IOVEC] = GENMASK(29, 27),
+ [GSI_USE_RD_WR_ENG] = BIT(30),
+ [GSI_USE_INTER_EE] = BIT(31),
+};
+
+REG_FIELDS(HW_PARAM_2, hw_param_2, 0x0001f040 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_TYPE_IRQ, cntxt_type_irq, 0x0001f080 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_TYPE_IRQ_MSK, cntxt_type_irq_msk, 0x0001f088 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ, cntxt_src_ch_irq, 0x0001f090 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ, cntxt_src_ev_ch_irq, 0x0001f094 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ_MSK, cntxt_src_ch_irq_msk,
+ 0x0001f098 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ_MSK, cntxt_src_ev_ch_irq_msk,
+ 0x0001f09c + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ_CLR, cntxt_src_ch_irq_clr,
+ 0x0001f0a0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ_CLR, cntxt_src_ev_ch_irq_clr,
+ 0x0001f0a4 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ, cntxt_src_ieob_irq, 0x0001f0b0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ_MSK, cntxt_src_ieob_irq_msk,
+ 0x0001f0b8 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ_CLR, cntxt_src_ieob_irq_clr,
+ 0x0001f0c0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_STTS, cntxt_glob_irq_stts, 0x0001f100 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_EN, cntxt_glob_irq_en, 0x0001f108 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_CLR, cntxt_glob_irq_clr, 0x0001f110 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_STTS, cntxt_gsi_irq_stts, 0x0001f118 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_EN, cntxt_gsi_irq_en, 0x0001f120 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_CLR, cntxt_gsi_irq_clr, 0x0001f128 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_cntxt_intset_fmask[] = {
+ [INTYPE] = BIT(0),
+ /* Bits 1-31 reserved */
+};
+
+REG_FIELDS(CNTXT_INTSET, cntxt_intset, 0x0001f180 + 0x4000 * GSI_EE_AP);
+
+REG_FIELDS(ERROR_LOG, error_log, 0x0001f200 + 0x4000 * GSI_EE_AP);
+
+REG(ERROR_LOG_CLR, error_log_clr, 0x0001f210 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_cntxt_scratch_0_fmask[] = {
+ [INTER_EE_RESULT] = GENMASK(2, 0),
+ /* Bits 3-4 reserved */
+ [GENERIC_EE_RESULT] = GENMASK(7, 5),
+ /* Bits 8-31 reserved */
+};
+
+REG_FIELDS(CNTXT_SCRATCH_0, cntxt_scratch_0, 0x0001f400 + 0x4000 * GSI_EE_AP);
+
+static const struct reg *reg_array[] = {
+ [INTER_EE_SRC_CH_IRQ_MSK] = &reg_inter_ee_src_ch_irq_msk,
+ [INTER_EE_SRC_EV_CH_IRQ_MSK] = &reg_inter_ee_src_ev_ch_irq_msk,
+ [CH_C_CNTXT_0] = &reg_ch_c_cntxt_0,
+ [CH_C_CNTXT_1] = &reg_ch_c_cntxt_1,
+ [CH_C_CNTXT_2] = &reg_ch_c_cntxt_2,
+ [CH_C_CNTXT_3] = &reg_ch_c_cntxt_3,
+ [CH_C_QOS] = &reg_ch_c_qos,
+ [CH_C_SCRATCH_0] = &reg_ch_c_scratch_0,
+ [CH_C_SCRATCH_1] = &reg_ch_c_scratch_1,
+ [CH_C_SCRATCH_2] = &reg_ch_c_scratch_2,
+ [CH_C_SCRATCH_3] = &reg_ch_c_scratch_3,
+ [EV_CH_E_CNTXT_0] = &reg_ev_ch_e_cntxt_0,
+ [EV_CH_E_CNTXT_1] = &reg_ev_ch_e_cntxt_1,
+ [EV_CH_E_CNTXT_2] = &reg_ev_ch_e_cntxt_2,
+ [EV_CH_E_CNTXT_3] = &reg_ev_ch_e_cntxt_3,
+ [EV_CH_E_CNTXT_4] = &reg_ev_ch_e_cntxt_4,
+ [EV_CH_E_CNTXT_8] = &reg_ev_ch_e_cntxt_8,
+ [EV_CH_E_CNTXT_9] = &reg_ev_ch_e_cntxt_9,
+ [EV_CH_E_CNTXT_10] = &reg_ev_ch_e_cntxt_10,
+ [EV_CH_E_CNTXT_11] = &reg_ev_ch_e_cntxt_11,
+ [EV_CH_E_CNTXT_12] = &reg_ev_ch_e_cntxt_12,
+ [EV_CH_E_CNTXT_13] = &reg_ev_ch_e_cntxt_13,
+ [EV_CH_E_SCRATCH_0] = &reg_ev_ch_e_scratch_0,
+ [EV_CH_E_SCRATCH_1] = &reg_ev_ch_e_scratch_1,
+ [CH_C_DOORBELL_0] = &reg_ch_c_doorbell_0,
+ [EV_CH_E_DOORBELL_0] = &reg_ev_ch_e_doorbell_0,
+ [GSI_STATUS] = &reg_gsi_status,
+ [CH_CMD] = &reg_ch_cmd,
+ [EV_CH_CMD] = &reg_ev_ch_cmd,
+ [GENERIC_CMD] = &reg_generic_cmd,
+ [HW_PARAM_2] = &reg_hw_param_2,
+ [CNTXT_TYPE_IRQ] = &reg_cntxt_type_irq,
+ [CNTXT_TYPE_IRQ_MSK] = &reg_cntxt_type_irq_msk,
+ [CNTXT_SRC_CH_IRQ] = &reg_cntxt_src_ch_irq,
+ [CNTXT_SRC_EV_CH_IRQ] = &reg_cntxt_src_ev_ch_irq,
+ [CNTXT_SRC_CH_IRQ_MSK] = &reg_cntxt_src_ch_irq_msk,
+ [CNTXT_SRC_EV_CH_IRQ_MSK] = &reg_cntxt_src_ev_ch_irq_msk,
+ [CNTXT_SRC_CH_IRQ_CLR] = &reg_cntxt_src_ch_irq_clr,
+ [CNTXT_SRC_EV_CH_IRQ_CLR] = &reg_cntxt_src_ev_ch_irq_clr,
+ [CNTXT_SRC_IEOB_IRQ] = &reg_cntxt_src_ieob_irq,
+ [CNTXT_SRC_IEOB_IRQ_MSK] = &reg_cntxt_src_ieob_irq_msk,
+ [CNTXT_SRC_IEOB_IRQ_CLR] = &reg_cntxt_src_ieob_irq_clr,
+ [CNTXT_GLOB_IRQ_STTS] = &reg_cntxt_glob_irq_stts,
+ [CNTXT_GLOB_IRQ_EN] = &reg_cntxt_glob_irq_en,
+ [CNTXT_GLOB_IRQ_CLR] = &reg_cntxt_glob_irq_clr,
+ [CNTXT_GSI_IRQ_STTS] = &reg_cntxt_gsi_irq_stts,
+ [CNTXT_GSI_IRQ_EN] = &reg_cntxt_gsi_irq_en,
+ [CNTXT_GSI_IRQ_CLR] = &reg_cntxt_gsi_irq_clr,
+ [CNTXT_INTSET] = &reg_cntxt_intset,
+ [ERROR_LOG] = &reg_error_log,
+ [ERROR_LOG_CLR] = &reg_error_log_clr,
+ [CNTXT_SCRATCH_0] = &reg_cntxt_scratch_0,
+};
+
+const struct regs gsi_regs_v4_5 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
+};
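
[Editor's note: the same field ID can have a different width on different versions; CH_R_LENGTH is GENMASK(15, 0) in this v4.5 file but GENMASK(19, 0) in the v4.11 file above. That is why limits are derived from the table rather than hard-coded. A one-line sketch, assuming a reg_field_max() helper exists in the driver's reg.h:]

    /* Sketch: derive the version-dependent ring-size limit from the fmask */
    const struct reg *reg = gsi_reg(gsi, CH_C_CNTXT_1);
    u32 max_len = reg_field_max(reg, CH_R_LENGTH);	/* 0xffff on v4.5, 0xfffff on v4.11 */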
diff --git a/drivers/net/ipa/reg/gsi_reg-v4.9.c b/drivers/net/ipa/reg/gsi_reg-v4.9.c
new file mode 100644
index 000000000000..4bf45d264d6b
--- /dev/null
+++ b/drivers/net/ipa/reg/gsi_reg-v4.9.c
@@ -0,0 +1,312 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (C) 2023 Linaro Ltd. */
+
+#include <linux/types.h>
+
+#include "../gsi.h"
+#include "../reg.h"
+#include "../gsi_reg.h"
+
+REG(INTER_EE_SRC_CH_IRQ_MSK, inter_ee_src_ch_irq_msk,
+ 0x0000c020 + 0x1000 * GSI_EE_AP);
+
+REG(INTER_EE_SRC_EV_CH_IRQ_MSK, inter_ee_src_ev_ch_irq_msk,
+ 0x0000c024 + 0x1000 * GSI_EE_AP);
+
+static const u32 reg_ch_c_cntxt_0_fmask[] = {
+ [CHTYPE_PROTOCOL] = GENMASK(2, 0),
+ [CHTYPE_DIR] = BIT(3),
+ [CH_EE] = GENMASK(7, 4),
+ [CHID] = GENMASK(12, 8),
+ [CHTYPE_PROTOCOL_MSB] = BIT(13),
+ [ERINDEX] = GENMASK(18, 14),
+ /* Bit 19 reserved */
+ [CHSTATE] = GENMASK(23, 20),
+ [ELEMENT_SIZE] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(CH_C_CNTXT_0, ch_c_cntxt_0,
+ 0x0001c000 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ch_c_cntxt_1_fmask[] = {
+ [CH_R_LENGTH] = GENMASK(19, 0),
+ /* Bits 20-31 reserved */
+};
+
+REG_STRIDE_FIELDS(CH_C_CNTXT_1, ch_c_cntxt_1,
+ 0x0001c004 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_CNTXT_2, ch_c_cntxt_2, 0x0001c008 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_CNTXT_3, ch_c_cntxt_3, 0x0001c00c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ch_c_qos_fmask[] = {
+ [WRR_WEIGHT] = GENMASK(3, 0),
+ /* Bits 4-7 reserved */
+ [MAX_PREFETCH] = BIT(8),
+ [USE_DB_ENG] = BIT(9),
+ [PREFETCH_MODE] = GENMASK(13, 10),
+ /* Bits 14-15 reserved */
+ [EMPTY_LVL_THRSHOLD] = GENMASK(23, 16),
+ [DB_IN_BYTES] = BIT(24),
+ /* Bits 25-31 reserved */
+};
+
+REG_STRIDE_FIELDS(CH_C_QOS, ch_c_qos, 0x0001c05c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_error_log_fmask[] = {
+ [ERR_ARG3] = GENMASK(3, 0),
+ [ERR_ARG2] = GENMASK(7, 4),
+ [ERR_ARG1] = GENMASK(11, 8),
+ [ERR_CODE] = GENMASK(15, 12),
+ /* Bits 16-18 reserved */
+ [ERR_VIRT_IDX] = GENMASK(23, 19),
+ [ERR_TYPE] = GENMASK(27, 24),
+ [ERR_EE] = GENMASK(31, 28),
+};
+
+REG_STRIDE(CH_C_SCRATCH_0, ch_c_scratch_0,
+ 0x0001c060 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_1, ch_c_scratch_1,
+ 0x0001c064 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_2, ch_c_scratch_2,
+ 0x0001c068 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_3, ch_c_scratch_3,
+ 0x0001c06c + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_0_fmask[] = {
+ [EV_CHTYPE] = GENMASK(3, 0),
+ [EV_EE] = GENMASK(7, 4),
+ [EV_EVCHID] = GENMASK(15, 8),
+ [EV_INTYPE] = BIT(16),
+ /* Bits 17-19 reserved */
+ [EV_CHSTATE] = GENMASK(23, 20),
+ [EV_ELEMENT_SIZE] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_0, ev_ch_e_cntxt_0,
+ 0x0001d000 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_1_fmask[] = {
+ [R_LENGTH] = GENMASK(15, 0),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_1, ev_ch_e_cntxt_1,
+ 0x0001d004 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_2, ev_ch_e_cntxt_2,
+ 0x0001d008 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_3, ev_ch_e_cntxt_3,
+ 0x0001d00c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_4, ev_ch_e_cntxt_4,
+ 0x0001d010 + 0x4000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_8_fmask[] = {
+ [EV_MODT] = GENMASK(15, 0),
+ [EV_MODC] = GENMASK(23, 16),
+ [EV_MOD_CNT] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_8, ev_ch_e_cntxt_8,
+ 0x0001d020 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_9, ev_ch_e_cntxt_9,
+ 0x0001d024 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_10, ev_ch_e_cntxt_10,
+ 0x0001d028 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_11, ev_ch_e_cntxt_11,
+ 0x0001d02c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_12, ev_ch_e_cntxt_12,
+ 0x0001d030 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_13, ev_ch_e_cntxt_13,
+ 0x0001d034 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_SCRATCH_0, ev_ch_e_scratch_0,
+ 0x0001d048 + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_SCRATCH_1, ev_ch_e_scratch_1,
+ 0x0001d04c + 0x4000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_DOORBELL_0, ch_c_doorbell_0,
+ 0x00011000 + 0x4000 * GSI_EE_AP, 0x08);
+
+REG_STRIDE(EV_CH_E_DOORBELL_0, ev_ch_e_doorbell_0,
+ 0x00011100 + 0x4000 * GSI_EE_AP, 0x08);
+
+static const u32 reg_gsi_status_fmask[] = {
+ [ENABLED] = BIT(0),
+ /* Bits 1-31 reserved */
+};
+
+REG_FIELDS(GSI_STATUS, gsi_status, 0x00012000 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_ch_cmd_fmask[] = {
+ [CH_CHID] = GENMASK(7, 0),
+ /* Bits 8-23 reserved */
+ [CH_OPCODE] = GENMASK(31, 24),
+};
+
+REG_FIELDS(CH_CMD, ch_cmd, 0x00012008 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_ev_ch_cmd_fmask[] = {
+ [EV_CHID] = GENMASK(7, 0),
+ /* Bits 8-23 reserved */
+ [EV_OPCODE] = GENMASK(31, 24),
+};
+
+REG_FIELDS(EV_CH_CMD, ev_ch_cmd, 0x00012010 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_generic_cmd_fmask[] = {
+ [GENERIC_OPCODE] = GENMASK(4, 0),
+ [GENERIC_CHID] = GENMASK(9, 5),
+ [GENERIC_EE] = GENMASK(13, 10),
+ /* Bits 14-31 reserved */
+};
+
+REG_FIELDS(GENERIC_CMD, generic_cmd, 0x00012018 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_hw_param_2_fmask[] = {
+ [IRAM_SIZE] = GENMASK(2, 0),
+ [NUM_CH_PER_EE] = GENMASK(7, 3),
+ [NUM_EV_PER_EE] = GENMASK(12, 8),
+ [GSI_CH_PEND_TRANSLATE] = BIT(13),
+ [GSI_CH_FULL_LOGIC] = BIT(14),
+ [GSI_USE_SDMA] = BIT(15),
+ [GSI_SDMA_N_INT] = GENMASK(18, 16),
+ [GSI_SDMA_MAX_BURST] = GENMASK(26, 19),
+ [GSI_SDMA_N_IOVEC] = GENMASK(29, 27),
+ [GSI_USE_RD_WR_ENG] = BIT(30),
+ [GSI_USE_INTER_EE] = BIT(31),
+};
+
+REG_FIELDS(HW_PARAM_2, hw_param_2, 0x00012040 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_TYPE_IRQ, cntxt_type_irq, 0x00012080 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_TYPE_IRQ_MSK, cntxt_type_irq_msk, 0x00012088 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ, cntxt_src_ch_irq, 0x00012090 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ, cntxt_src_ev_ch_irq, 0x00012094 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ_MSK, cntxt_src_ch_irq_msk,
+ 0x00012098 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ_MSK, cntxt_src_ev_ch_irq_msk,
+ 0x0001209c + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ_CLR, cntxt_src_ch_irq_clr,
+ 0x000120a0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ_CLR, cntxt_src_ev_ch_irq_clr,
+ 0x000120a4 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ, cntxt_src_ieob_irq, 0x000120b0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ_MSK, cntxt_src_ieob_irq_msk,
+ 0x000120b8 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ_CLR, cntxt_src_ieob_irq_clr,
+ 0x000120c0 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_STTS, cntxt_glob_irq_stts, 0x00012100 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_EN, cntxt_glob_irq_en, 0x00012108 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_CLR, cntxt_glob_irq_clr, 0x00012110 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_STTS, cntxt_gsi_irq_stts, 0x00012118 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_EN, cntxt_gsi_irq_en, 0x00012120 + 0x4000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_CLR, cntxt_gsi_irq_clr, 0x00012128 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_cntxt_intset_fmask[] = {
+ [INTYPE] = BIT(0),
+ /* Bits 1-31 reserved */
+};
+
+REG_FIELDS(CNTXT_INTSET, cntxt_intset, 0x00012180 + 0x4000 * GSI_EE_AP);
+
+REG_FIELDS(ERROR_LOG, error_log, 0x00012200 + 0x4000 * GSI_EE_AP);
+
+REG(ERROR_LOG_CLR, error_log_clr, 0x00012210 + 0x4000 * GSI_EE_AP);
+
+static const u32 reg_cntxt_scratch_0_fmask[] = {
+ [INTER_EE_RESULT] = GENMASK(2, 0),
+ /* Bits 3-4 reserved */
+ [GENERIC_EE_RESULT] = GENMASK(7, 5),
+ /* Bits 8-31 reserved */
+};
+
+REG_FIELDS(CNTXT_SCRATCH_0, cntxt_scratch_0, 0x00012400 + 0x4000 * GSI_EE_AP);
+
+static const struct reg *reg_array[] = {
+ [INTER_EE_SRC_CH_IRQ_MSK] = &reg_inter_ee_src_ch_irq_msk,
+ [INTER_EE_SRC_EV_CH_IRQ_MSK] = &reg_inter_ee_src_ev_ch_irq_msk,
+ [CH_C_CNTXT_0] = &reg_ch_c_cntxt_0,
+ [CH_C_CNTXT_1] = &reg_ch_c_cntxt_1,
+ [CH_C_CNTXT_2] = &reg_ch_c_cntxt_2,
+ [CH_C_CNTXT_3] = &reg_ch_c_cntxt_3,
+ [CH_C_QOS] = &reg_ch_c_qos,
+ [CH_C_SCRATCH_0] = &reg_ch_c_scratch_0,
+ [CH_C_SCRATCH_1] = &reg_ch_c_scratch_1,
+ [CH_C_SCRATCH_2] = &reg_ch_c_scratch_2,
+ [CH_C_SCRATCH_3] = &reg_ch_c_scratch_3,
+ [EV_CH_E_CNTXT_0] = &reg_ev_ch_e_cntxt_0,
+ [EV_CH_E_CNTXT_1] = &reg_ev_ch_e_cntxt_1,
+ [EV_CH_E_CNTXT_2] = &reg_ev_ch_e_cntxt_2,
+ [EV_CH_E_CNTXT_3] = &reg_ev_ch_e_cntxt_3,
+ [EV_CH_E_CNTXT_4] = &reg_ev_ch_e_cntxt_4,
+ [EV_CH_E_CNTXT_8] = &reg_ev_ch_e_cntxt_8,
+ [EV_CH_E_CNTXT_9] = &reg_ev_ch_e_cntxt_9,
+ [EV_CH_E_CNTXT_10] = &reg_ev_ch_e_cntxt_10,
+ [EV_CH_E_CNTXT_11] = &reg_ev_ch_e_cntxt_11,
+ [EV_CH_E_CNTXT_12] = &reg_ev_ch_e_cntxt_12,
+ [EV_CH_E_CNTXT_13] = &reg_ev_ch_e_cntxt_13,
+ [EV_CH_E_SCRATCH_0] = &reg_ev_ch_e_scratch_0,
+ [EV_CH_E_SCRATCH_1] = &reg_ev_ch_e_scratch_1,
+ [CH_C_DOORBELL_0] = &reg_ch_c_doorbell_0,
+ [EV_CH_E_DOORBELL_0] = &reg_ev_ch_e_doorbell_0,
+ [GSI_STATUS] = &reg_gsi_status,
+ [CH_CMD] = &reg_ch_cmd,
+ [EV_CH_CMD] = &reg_ev_ch_cmd,
+ [GENERIC_CMD] = &reg_generic_cmd,
+ [HW_PARAM_2] = &reg_hw_param_2,
+ [CNTXT_TYPE_IRQ] = &reg_cntxt_type_irq,
+ [CNTXT_TYPE_IRQ_MSK] = &reg_cntxt_type_irq_msk,
+ [CNTXT_SRC_CH_IRQ] = &reg_cntxt_src_ch_irq,
+ [CNTXT_SRC_EV_CH_IRQ] = &reg_cntxt_src_ev_ch_irq,
+ [CNTXT_SRC_CH_IRQ_MSK] = &reg_cntxt_src_ch_irq_msk,
+ [CNTXT_SRC_EV_CH_IRQ_MSK] = &reg_cntxt_src_ev_ch_irq_msk,
+ [CNTXT_SRC_CH_IRQ_CLR] = &reg_cntxt_src_ch_irq_clr,
+ [CNTXT_SRC_EV_CH_IRQ_CLR] = &reg_cntxt_src_ev_ch_irq_clr,
+ [CNTXT_SRC_IEOB_IRQ] = &reg_cntxt_src_ieob_irq,
+ [CNTXT_SRC_IEOB_IRQ_MSK] = &reg_cntxt_src_ieob_irq_msk,
+ [CNTXT_SRC_IEOB_IRQ_CLR] = &reg_cntxt_src_ieob_irq_clr,
+ [CNTXT_GLOB_IRQ_STTS] = &reg_cntxt_glob_irq_stts,
+ [CNTXT_GLOB_IRQ_EN] = &reg_cntxt_glob_irq_en,
+ [CNTXT_GLOB_IRQ_CLR] = &reg_cntxt_glob_irq_clr,
+ [CNTXT_GSI_IRQ_STTS] = &reg_cntxt_gsi_irq_stts,
+ [CNTXT_GSI_IRQ_EN] = &reg_cntxt_gsi_irq_en,
+ [CNTXT_GSI_IRQ_CLR] = &reg_cntxt_gsi_irq_clr,
+ [CNTXT_INTSET] = &reg_cntxt_intset,
+ [ERROR_LOG] = &reg_error_log,
+ [ERROR_LOG_CLR] = &reg_error_log_clr,
+ [CNTXT_SCRATCH_0] = &reg_cntxt_scratch_0,
+};
+
+const struct regs gsi_regs_v4_9 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
+};
diff --git a/drivers/net/ipa/reg/ipa_reg-v3.1.c b/drivers/net/ipa/reg/ipa_reg-v3.1.c
index 677ece3bce9e..648dbfe1fce3 100644
--- a/drivers/net/ipa/reg/ipa_reg-v3.1.c
+++ b/drivers/net/ipa/reg/ipa_reg-v3.1.c
@@ -7,7 +7,7 @@
#include "../ipa.h"
#include "../ipa_reg.h"
-static const u32 ipa_reg_comp_cfg_fmask[] = {
+static const u32 reg_comp_cfg_fmask[] = {
[COMP_CFG_ENABLE] = BIT(0),
[GSI_SNOC_BYPASS_DIS] = BIT(1),
[GEN_QMB_0_SNOC_BYPASS_DIS] = BIT(2),
@@ -16,9 +16,9 @@ static const u32 ipa_reg_comp_cfg_fmask[] = {
/* Bits 5-31 reserved */
};
-IPA_REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
+REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
-static const u32 ipa_reg_clkon_cfg_fmask[] = {
+static const u32 reg_clkon_cfg_fmask[] = {
[CLKON_RX] = BIT(0),
[CLKON_PROC] = BIT(1),
[TX_WRAPPER] = BIT(2),
@@ -39,9 +39,9 @@ static const u32 ipa_reg_clkon_cfg_fmask[] = {
/* Bits 17-31 reserved */
};
-IPA_REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
+REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
-static const u32 ipa_reg_route_fmask[] = {
+static const u32 reg_route_fmask[] = {
[ROUTE_DIS] = BIT(0),
[ROUTE_DEF_PIPE] = GENMASK(5, 1),
[ROUTE_DEF_HDR_TABLE] = BIT(6),
@@ -52,31 +52,31 @@ static const u32 ipa_reg_route_fmask[] = {
/* Bits 25-31 reserved */
};
-IPA_REG_FIELDS(ROUTE, route, 0x00000048);
+REG_FIELDS(ROUTE, route, 0x00000048);
-static const u32 ipa_reg_shared_mem_size_fmask[] = {
+static const u32 reg_shared_mem_size_fmask[] = {
[MEM_SIZE] = GENMASK(15, 0),
[MEM_BADDR] = GENMASK(31, 16),
};
-IPA_REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
+REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
-static const u32 ipa_reg_qsb_max_writes_fmask[] = {
+static const u32 reg_qsb_max_writes_fmask[] = {
[GEN_QMB_0_MAX_WRITES] = GENMASK(3, 0),
[GEN_QMB_1_MAX_WRITES] = GENMASK(7, 4),
/* Bits 8-31 reserved */
};
-IPA_REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
+REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
-static const u32 ipa_reg_qsb_max_reads_fmask[] = {
+static const u32 reg_qsb_max_reads_fmask[] = {
[GEN_QMB_0_MAX_READS] = GENMASK(3, 0),
[GEN_QMB_1_MAX_READS] = GENMASK(7, 4),
};
-IPA_REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
+REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
-static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
+static const u32 reg_filt_rout_hash_en_fmask[] = {
[IPV6_ROUTER_HASH] = BIT(0),
/* Bits 1-3 reserved */
[IPV6_FILTER_HASH] = BIT(4),
@@ -87,9 +87,9 @@ static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
/* Bits 13-31 reserved */
};
-IPA_REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x000008c);
+REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x000008c);
-static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
+static const u32 reg_filt_rout_hash_flush_fmask[] = {
[IPV6_ROUTER_HASH] = BIT(0),
/* Bits 1-3 reserved */
[IPV6_FILTER_HASH] = BIT(4),
@@ -100,121 +100,121 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
/* Bits 13-31 reserved */
};
-IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x0000090);
+REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x0000090);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x0000010c, 0x0004);
+REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x0000010c, 0x0004);
-IPA_REG(IPA_BCR, ipa_bcr, 0x000001d0);
+REG(IPA_BCR, ipa_bcr, 0x000001d0);
-static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
+static const u32 reg_local_pkt_proc_cntxt_fmask[] = {
[IPA_BASE_ADDR] = GENMASK(16, 0),
/* Bits 17-31 reserved */
};
/* Offset must be a multiple of 8 */
-IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
+REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
+REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
-static const u32 ipa_reg_counter_cfg_fmask[] = {
+static const u32 reg_counter_cfg_fmask[] = {
[EOT_COAL_GRANULARITY] = GENMASK(3, 0),
[AGGR_GRANULARITY] = GENMASK(8, 4),
 /* Bits 9-31 reserved */
};
-IPA_REG_FIELDS(COUNTER_CFG, counter_cfg, 0x000001f0);
+REG_FIELDS(COUNTER_CFG, counter_cfg, 0x000001f0);
-static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(7, 0),
[X_MAX_LIM] = GENMASK(15, 8),
[Y_MIN_LIM] = GENMASK(23, 16),
[Y_MAX_LIM] = GENMASK(31, 24),
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
- 0x00000400, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
+ 0x00000400, 0x0020);
-static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(7, 0),
[X_MAX_LIM] = GENMASK(15, 8),
[Y_MIN_LIM] = GENMASK(23, 16),
[Y_MAX_LIM] = GENMASK(31, 24),
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
- 0x00000404, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
+ 0x00000404, 0x0020);
-static const u32 ipa_reg_src_rsrc_grp_45_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_45_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(7, 0),
[X_MAX_LIM] = GENMASK(15, 8),
[Y_MIN_LIM] = GENMASK(23, 16),
[Y_MAX_LIM] = GENMASK(31, 24),
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_45_RSRC_TYPE, src_rsrc_grp_45_rsrc_type,
- 0x00000408, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_45_RSRC_TYPE, src_rsrc_grp_45_rsrc_type,
+ 0x00000408, 0x0020);
-static const u32 ipa_reg_src_rsrc_grp_67_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_67_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(7, 0),
[X_MAX_LIM] = GENMASK(15, 8),
[Y_MIN_LIM] = GENMASK(23, 16),
[Y_MAX_LIM] = GENMASK(31, 24),
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_67_RSRC_TYPE, src_rsrc_grp_67_rsrc_type,
- 0x0000040c, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_67_RSRC_TYPE, src_rsrc_grp_67_rsrc_type,
+ 0x0000040c, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(7, 0),
[X_MAX_LIM] = GENMASK(15, 8),
[Y_MIN_LIM] = GENMASK(23, 16),
[Y_MAX_LIM] = GENMASK(31, 24),
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
- 0x00000500, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
+ 0x00000500, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(7, 0),
[X_MAX_LIM] = GENMASK(15, 8),
[Y_MIN_LIM] = GENMASK(23, 16),
[Y_MAX_LIM] = GENMASK(31, 24),
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
- 0x00000504, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
+ 0x00000504, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_45_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_45_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(7, 0),
[X_MAX_LIM] = GENMASK(15, 8),
[Y_MIN_LIM] = GENMASK(23, 16),
[Y_MAX_LIM] = GENMASK(31, 24),
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_45_RSRC_TYPE, dst_rsrc_grp_45_rsrc_type,
- 0x00000508, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_45_RSRC_TYPE, dst_rsrc_grp_45_rsrc_type,
+ 0x00000508, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_67_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_67_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(7, 0),
[X_MAX_LIM] = GENMASK(15, 8),
[Y_MIN_LIM] = GENMASK(23, 16),
[Y_MAX_LIM] = GENMASK(31, 24),
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_67_RSRC_TYPE, dst_rsrc_grp_67_rsrc_type,
- 0x0000050c, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_67_RSRC_TYPE, dst_rsrc_grp_67_rsrc_type,
+ 0x0000050c, 0x0020);
-static const u32 ipa_reg_endp_init_ctrl_fmask[] = {
+static const u32 reg_endp_init_ctrl_fmask[] = {
[ENDP_SUSPEND] = BIT(0),
[ENDP_DELAY] = BIT(1),
/* Bits 2-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_CTRL, endp_init_ctrl, 0x00000800, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_CTRL, endp_init_ctrl, 0x00000800, 0x0070);
-static const u32 ipa_reg_endp_init_cfg_fmask[] = {
+static const u32 reg_endp_init_cfg_fmask[] = {
[FRAG_OFFLOAD_EN] = BIT(0),
[CS_OFFLOAD_EN] = GENMASK(2, 1),
[CS_METADATA_HDR_OFFSET] = GENMASK(6, 3),
@@ -223,16 +223,16 @@ static const u32 ipa_reg_endp_init_cfg_fmask[] = {
/* Bits 9-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
-static const u32 ipa_reg_endp_init_nat_fmask[] = {
+static const u32 reg_endp_init_nat_fmask[] = {
[NAT_EN] = GENMASK(1, 0),
/* Bits 2-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_fmask[] = {
+static const u32 reg_endp_init_hdr_fmask[] = {
[HDR_LEN] = GENMASK(5, 0),
[HDR_OFST_METADATA_VALID] = BIT(6),
[HDR_OFST_METADATA] = GENMASK(12, 7),
@@ -245,9 +245,9 @@ static const u32 ipa_reg_endp_init_hdr_fmask[] = {
/* Bits 29-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
+static const u32 reg_endp_init_hdr_ext_fmask[] = {
[HDR_ENDIANNESS] = BIT(0),
[HDR_TOTAL_LEN_OR_PAD_VALID] = BIT(1),
[HDR_TOTAL_LEN_OR_PAD] = BIT(2),
@@ -257,12 +257,12 @@ static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
/* Bits 14-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
-IPA_REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
- 0x00000818, 0x0070);
+REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
+ 0x00000818, 0x0070);
-static const u32 ipa_reg_endp_init_mode_fmask[] = {
+static const u32 reg_endp_init_mode_fmask[] = {
[ENDP_MODE] = GENMASK(2, 0),
/* Bit 3 reserved */
[DEST_PIPE_INDEX] = GENMASK(8, 4),
@@ -274,9 +274,9 @@ static const u32 ipa_reg_endp_init_mode_fmask[] = {
/* Bit 31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
-static const u32 ipa_reg_endp_init_aggr_fmask[] = {
+static const u32 reg_endp_init_aggr_fmask[] = {
[AGGR_EN] = GENMASK(1, 0),
[AGGR_TYPE] = GENMASK(4, 2),
[BYTE_LIMIT] = GENMASK(9, 5),
@@ -289,25 +289,25 @@ static const u32 ipa_reg_endp_init_aggr_fmask[] = {
/* Bits 25-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_en_fmask[] = {
+static const u32 reg_endp_init_hol_block_en_fmask[] = {
[HOL_BLOCK_EN] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
- 0x0000082c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
+ 0x0000082c, 0x0070);
/* Entire register is a tick count */
-static const u32 ipa_reg_endp_init_hol_block_timer_fmask[] = {
+static const u32 reg_endp_init_hol_block_timer_fmask[] = {
[TIMER_BASE_VALUE] = GENMASK(31, 0),
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
- 0x00000830, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
+ 0x00000830, 0x0070);
-static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
+static const u32 reg_endp_init_deaggr_fmask[] = {
[DEAGGR_HDR_LEN] = GENMASK(5, 0),
[SYSPIPE_ERR_DETECTION] = BIT(6),
[PACKET_OFFSET_VALID] = BIT(7),
@@ -317,25 +317,24 @@ static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
[MAX_PACKET_LEN] = GENMASK(31, 16),
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
-static const u32 ipa_reg_endp_init_rsrc_grp_fmask[] = {
+static const u32 reg_endp_init_rsrc_grp_fmask[] = {
[ENDP_RSRC_GRP] = GENMASK(2, 0),
/* Bits 3-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp,
- 0x00000838, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, 0x00000838, 0x0070);
-static const u32 ipa_reg_endp_init_seq_fmask[] = {
+static const u32 reg_endp_init_seq_fmask[] = {
[SEQ_TYPE] = GENMASK(7, 0),
[SEQ_REP_TYPE] = GENMASK(15, 8),
/* Bits 16-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
-static const u32 ipa_reg_endp_status_fmask[] = {
+static const u32 reg_endp_status_fmask[] = {
[STATUS_EN] = BIT(0),
[STATUS_ENDP] = GENMASK(5, 1),
/* Bits 6-7 reserved */
@@ -343,9 +342,9 @@ static const u32 ipa_reg_endp_status_fmask[] = {
/* Bits 9-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
+REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
-static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
+static const u32 reg_endp_filter_router_hsh_cfg_fmask[] = {
[FILTER_HASH_MSK_SRC_ID] = BIT(0),
[FILTER_HASH_MSK_SRC_IP] = BIT(1),
[FILTER_HASH_MSK_DST_IP] = BIT(2),
@@ -366,84 +365,84 @@ static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
/* Bits 23-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
- 0x0000085c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
+ 0x0000085c, 0x0070);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP);
-static const u32 ipa_reg_ipa_irq_uc_fmask[] = {
+static const u32 reg_ipa_irq_uc_fmask[] = {
[UC_INTR] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP);
+REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
- 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
+ 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
- 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
+ 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
- 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004);
-
-static const struct ipa_reg *ipa_reg_array[] = {
- [COMP_CFG] = &ipa_reg_comp_cfg,
- [CLKON_CFG] = &ipa_reg_clkon_cfg,
- [ROUTE] = &ipa_reg_route,
- [SHARED_MEM_SIZE] = &ipa_reg_shared_mem_size,
- [QSB_MAX_WRITES] = &ipa_reg_qsb_max_writes,
- [QSB_MAX_READS] = &ipa_reg_qsb_max_reads,
- [FILT_ROUT_HASH_EN] = &ipa_reg_filt_rout_hash_en,
- [FILT_ROUT_HASH_FLUSH] = &ipa_reg_filt_rout_hash_flush,
- [STATE_AGGR_ACTIVE] = &ipa_reg_state_aggr_active,
- [IPA_BCR] = &ipa_reg_ipa_bcr,
- [LOCAL_PKT_PROC_CNTXT] = &ipa_reg_local_pkt_proc_cntxt,
- [AGGR_FORCE_CLOSE] = &ipa_reg_aggr_force_close,
- [COUNTER_CFG] = &ipa_reg_counter_cfg,
- [SRC_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_01_rsrc_type,
- [SRC_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_23_rsrc_type,
- [SRC_RSRC_GRP_45_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_45_rsrc_type,
- [SRC_RSRC_GRP_67_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_67_rsrc_type,
- [DST_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_01_rsrc_type,
- [DST_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_23_rsrc_type,
- [DST_RSRC_GRP_45_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_45_rsrc_type,
- [DST_RSRC_GRP_67_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_67_rsrc_type,
- [ENDP_INIT_CTRL] = &ipa_reg_endp_init_ctrl,
- [ENDP_INIT_CFG] = &ipa_reg_endp_init_cfg,
- [ENDP_INIT_NAT] = &ipa_reg_endp_init_nat,
- [ENDP_INIT_HDR] = &ipa_reg_endp_init_hdr,
- [ENDP_INIT_HDR_EXT] = &ipa_reg_endp_init_hdr_ext,
- [ENDP_INIT_HDR_METADATA_MASK] = &ipa_reg_endp_init_hdr_metadata_mask,
- [ENDP_INIT_MODE] = &ipa_reg_endp_init_mode,
- [ENDP_INIT_AGGR] = &ipa_reg_endp_init_aggr,
- [ENDP_INIT_HOL_BLOCK_EN] = &ipa_reg_endp_init_hol_block_en,
- [ENDP_INIT_HOL_BLOCK_TIMER] = &ipa_reg_endp_init_hol_block_timer,
- [ENDP_INIT_DEAGGR] = &ipa_reg_endp_init_deaggr,
- [ENDP_INIT_RSRC_GRP] = &ipa_reg_endp_init_rsrc_grp,
- [ENDP_INIT_SEQ] = &ipa_reg_endp_init_seq,
- [ENDP_STATUS] = &ipa_reg_endp_status,
- [ENDP_FILTER_ROUTER_HSH_CFG] = &ipa_reg_endp_filter_router_hsh_cfg,
- [IPA_IRQ_STTS] = &ipa_reg_ipa_irq_stts,
- [IPA_IRQ_EN] = &ipa_reg_ipa_irq_en,
- [IPA_IRQ_CLR] = &ipa_reg_ipa_irq_clr,
- [IPA_IRQ_UC] = &ipa_reg_ipa_irq_uc,
- [IRQ_SUSPEND_INFO] = &ipa_reg_irq_suspend_info,
- [IRQ_SUSPEND_EN] = &ipa_reg_irq_suspend_en,
- [IRQ_SUSPEND_CLR] = &ipa_reg_irq_suspend_clr,
-};
-
-const struct ipa_regs ipa_regs_v3_1 = {
- .reg_count = ARRAY_SIZE(ipa_reg_array),
- .reg = ipa_reg_array,
+REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
+ 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004);
+
+static const struct reg *reg_array[] = {
+ [COMP_CFG] = &reg_comp_cfg,
+ [CLKON_CFG] = &reg_clkon_cfg,
+ [ROUTE] = &reg_route,
+ [SHARED_MEM_SIZE] = &reg_shared_mem_size,
+ [QSB_MAX_WRITES] = &reg_qsb_max_writes,
+ [QSB_MAX_READS] = &reg_qsb_max_reads,
+ [FILT_ROUT_HASH_EN] = &reg_filt_rout_hash_en,
+ [FILT_ROUT_HASH_FLUSH] = &reg_filt_rout_hash_flush,
+ [STATE_AGGR_ACTIVE] = &reg_state_aggr_active,
+ [IPA_BCR] = &reg_ipa_bcr,
+ [LOCAL_PKT_PROC_CNTXT] = &reg_local_pkt_proc_cntxt,
+ [AGGR_FORCE_CLOSE] = &reg_aggr_force_close,
+ [COUNTER_CFG] = &reg_counter_cfg,
+ [SRC_RSRC_GRP_01_RSRC_TYPE] = &reg_src_rsrc_grp_01_rsrc_type,
+ [SRC_RSRC_GRP_23_RSRC_TYPE] = &reg_src_rsrc_grp_23_rsrc_type,
+ [SRC_RSRC_GRP_45_RSRC_TYPE] = &reg_src_rsrc_grp_45_rsrc_type,
+ [SRC_RSRC_GRP_67_RSRC_TYPE] = &reg_src_rsrc_grp_67_rsrc_type,
+ [DST_RSRC_GRP_01_RSRC_TYPE] = &reg_dst_rsrc_grp_01_rsrc_type,
+ [DST_RSRC_GRP_23_RSRC_TYPE] = &reg_dst_rsrc_grp_23_rsrc_type,
+ [DST_RSRC_GRP_45_RSRC_TYPE] = &reg_dst_rsrc_grp_45_rsrc_type,
+ [DST_RSRC_GRP_67_RSRC_TYPE] = &reg_dst_rsrc_grp_67_rsrc_type,
+ [ENDP_INIT_CTRL] = &reg_endp_init_ctrl,
+ [ENDP_INIT_CFG] = &reg_endp_init_cfg,
+ [ENDP_INIT_NAT] = &reg_endp_init_nat,
+ [ENDP_INIT_HDR] = &reg_endp_init_hdr,
+ [ENDP_INIT_HDR_EXT] = &reg_endp_init_hdr_ext,
+ [ENDP_INIT_HDR_METADATA_MASK] = &reg_endp_init_hdr_metadata_mask,
+ [ENDP_INIT_MODE] = &reg_endp_init_mode,
+ [ENDP_INIT_AGGR] = &reg_endp_init_aggr,
+ [ENDP_INIT_HOL_BLOCK_EN] = &reg_endp_init_hol_block_en,
+ [ENDP_INIT_HOL_BLOCK_TIMER] = &reg_endp_init_hol_block_timer,
+ [ENDP_INIT_DEAGGR] = &reg_endp_init_deaggr,
+ [ENDP_INIT_RSRC_GRP] = &reg_endp_init_rsrc_grp,
+ [ENDP_INIT_SEQ] = &reg_endp_init_seq,
+ [ENDP_STATUS] = &reg_endp_status,
+ [ENDP_FILTER_ROUTER_HSH_CFG] = &reg_endp_filter_router_hsh_cfg,
+ [IPA_IRQ_STTS] = &reg_ipa_irq_stts,
+ [IPA_IRQ_EN] = &reg_ipa_irq_en,
+ [IPA_IRQ_CLR] = &reg_ipa_irq_clr,
+ [IPA_IRQ_UC] = &reg_ipa_irq_uc,
+ [IRQ_SUSPEND_INFO] = &reg_irq_suspend_info,
+ [IRQ_SUSPEND_EN] = &reg_irq_suspend_en,
+ [IRQ_SUSPEND_CLR] = &reg_irq_suspend_clr,
+};
+
+const struct regs ipa_regs_v3_1 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
};
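
[Editor's note: registers defined with REG_STRIDE()/REG_STRIDE_FIELDS() are per-instance arrays; the last argument is the stride, so instance n lives at offset + n * stride. A sketch of addressing one endpoint's instance, assuming the ipa_reg() lookup and reg_n_offset() helper from the driver's reg.h; val and endpoint_id are placeholders.]

    /* Sketch: address one endpoint's copy of a strided register */
    const struct reg *reg = ipa_reg(ipa, ENDP_STATUS);
    u32 offset = reg_n_offset(reg, endpoint_id);	/* 0x00000840 + 0x0070 * endpoint_id */

    iowrite32(val, ipa->reg_virt + offset);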
diff --git a/drivers/net/ipa/reg/ipa_reg-v3.5.1.c b/drivers/net/ipa/reg/ipa_reg-v3.5.1.c
index b9c6a50de243..78b1bf60cd02 100644
--- a/drivers/net/ipa/reg/ipa_reg-v3.5.1.c
+++ b/drivers/net/ipa/reg/ipa_reg-v3.5.1.c
@@ -7,7 +7,7 @@
#include "../ipa.h"
#include "../ipa_reg.h"
-static const u32 ipa_reg_comp_cfg_fmask[] = {
+static const u32 reg_comp_cfg_fmask[] = {
[COMP_CFG_ENABLE] = BIT(0),
[GSI_SNOC_BYPASS_DIS] = BIT(1),
[GEN_QMB_0_SNOC_BYPASS_DIS] = BIT(2),
@@ -16,9 +16,9 @@ static const u32 ipa_reg_comp_cfg_fmask[] = {
/* Bits 5-31 reserved */
};
-IPA_REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
+REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
-static const u32 ipa_reg_clkon_cfg_fmask[] = {
+static const u32 reg_clkon_cfg_fmask[] = {
[CLKON_RX] = BIT(0),
[CLKON_PROC] = BIT(1),
[TX_WRAPPER] = BIT(2),
@@ -44,9 +44,9 @@ static const u32 ipa_reg_clkon_cfg_fmask[] = {
/* Bits 22-31 reserved */
};
-IPA_REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
+REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
-static const u32 ipa_reg_route_fmask[] = {
+static const u32 reg_route_fmask[] = {
[ROUTE_DIS] = BIT(0),
[ROUTE_DEF_PIPE] = GENMASK(5, 1),
[ROUTE_DEF_HDR_TABLE] = BIT(6),
@@ -57,31 +57,31 @@ static const u32 ipa_reg_route_fmask[] = {
/* Bits 25-31 reserved */
};
-IPA_REG_FIELDS(ROUTE, route, 0x00000048);
+REG_FIELDS(ROUTE, route, 0x00000048);
-static const u32 ipa_reg_shared_mem_size_fmask[] = {
+static const u32 reg_shared_mem_size_fmask[] = {
[MEM_SIZE] = GENMASK(15, 0),
[MEM_BADDR] = GENMASK(31, 16),
};
-IPA_REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
+REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
-static const u32 ipa_reg_qsb_max_writes_fmask[] = {
+static const u32 reg_qsb_max_writes_fmask[] = {
[GEN_QMB_0_MAX_WRITES] = GENMASK(3, 0),
[GEN_QMB_1_MAX_WRITES] = GENMASK(7, 4),
/* Bits 8-31 reserved */
};
-IPA_REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
+REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
-static const u32 ipa_reg_qsb_max_reads_fmask[] = {
+static const u32 reg_qsb_max_reads_fmask[] = {
[GEN_QMB_0_MAX_READS] = GENMASK(3, 0),
[GEN_QMB_1_MAX_READS] = GENMASK(7, 4),
};
-IPA_REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
+REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
-static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
+static const u32 reg_filt_rout_hash_en_fmask[] = {
[IPV6_ROUTER_HASH] = BIT(0),
/* Bits 1-3 reserved */
[IPV6_FILTER_HASH] = BIT(4),
@@ -92,9 +92,9 @@ static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
/* Bits 13-31 reserved */
};
-IPA_REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x000008c);
+REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x000008c);
-static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
+static const u32 reg_filt_rout_hash_flush_fmask[] = {
[IPV6_ROUTER_HASH] = BIT(0),
/* Bits 1-3 reserved */
[IPV6_FILTER_HASH] = BIT(4),
@@ -105,42 +105,42 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
/* Bits 13-31 reserved */
};
-IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x0000090);
+REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x0000090);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x0000010c, 0x0004);
+REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x0000010c, 0x0004);
-IPA_REG(IPA_BCR, ipa_bcr, 0x000001d0);
+REG(IPA_BCR, ipa_bcr, 0x000001d0);
-static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
+static const u32 reg_local_pkt_proc_cntxt_fmask[] = {
[IPA_BASE_ADDR] = GENMASK(16, 0),
/* Bits 17-31 reserved */
};
/* Offset must be a multiple of 8 */
-IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
+REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
+REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
-static const u32 ipa_reg_counter_cfg_fmask[] = {
+static const u32 reg_counter_cfg_fmask[] = {
/* Bits 0-3 reserved */
[AGGR_GRANULARITY] = GENMASK(8, 4),
 /* Bits 9-31 reserved */
};
-IPA_REG_FIELDS(COUNTER_CFG, counter_cfg, 0x000001f0);
+REG_FIELDS(COUNTER_CFG, counter_cfg, 0x000001f0);
-static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
+static const u32 reg_ipa_tx_cfg_fmask[] = {
[TX0_PREFETCH_DISABLE] = BIT(0),
[TX1_PREFETCH_DISABLE] = BIT(1),
[PREFETCH_ALMOST_EMPTY_SIZE] = GENMASK(4, 2),
/* Bits 5-31 reserved */
};
-IPA_REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
+REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
-static const u32 ipa_reg_flavor_0_fmask[] = {
+static const u32 reg_flavor_0_fmask[] = {
[MAX_PIPES] = GENMASK(3, 0),
/* Bits 4-7 reserved */
[MAX_CONS_PIPES] = GENMASK(12, 8),
@@ -151,17 +151,17 @@ static const u32 ipa_reg_flavor_0_fmask[] = {
/* Bits 28-31 reserved */
};
-IPA_REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
+REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
-static const u32 ipa_reg_idle_indication_cfg_fmask[] = {
+static const u32 reg_idle_indication_cfg_fmask[] = {
[ENTER_IDLE_DEBOUNCE_THRESH] = GENMASK(15, 0),
[CONST_NON_IDLE_ENABLE] = BIT(16),
/* Bits 17-31 reserved */
};
-IPA_REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000220);
+REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000220);
-static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -172,10 +172,10 @@ static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
- 0x00000400, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
+ 0x00000400, 0x0020);
-static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -186,10 +186,10 @@ static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
- 0x00000404, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
+ 0x00000404, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -200,10 +200,10 @@ static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
- 0x00000500, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
+ 0x00000500, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -214,18 +214,18 @@ static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
- 0x00000504, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
+ 0x00000504, 0x0020);
-static const u32 ipa_reg_endp_init_ctrl_fmask[] = {
+static const u32 reg_endp_init_ctrl_fmask[] = {
[ENDP_SUSPEND] = BIT(0),
[ENDP_DELAY] = BIT(1),
/* Bits 2-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_CTRL, endp_init_ctrl, 0x00000800, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_CTRL, endp_init_ctrl, 0x00000800, 0x0070);
-static const u32 ipa_reg_endp_init_cfg_fmask[] = {
+static const u32 reg_endp_init_cfg_fmask[] = {
[FRAG_OFFLOAD_EN] = BIT(0),
[CS_OFFLOAD_EN] = GENMASK(2, 1),
[CS_METADATA_HDR_OFFSET] = GENMASK(6, 3),
@@ -234,16 +234,16 @@ static const u32 ipa_reg_endp_init_cfg_fmask[] = {
/* Bits 9-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
-static const u32 ipa_reg_endp_init_nat_fmask[] = {
+static const u32 reg_endp_init_nat_fmask[] = {
[NAT_EN] = GENMASK(1, 0),
/* Bits 2-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_fmask[] = {
+static const u32 reg_endp_init_hdr_fmask[] = {
[HDR_LEN] = GENMASK(5, 0),
[HDR_OFST_METADATA_VALID] = BIT(6),
[HDR_OFST_METADATA] = GENMASK(12, 7),
@@ -256,9 +256,9 @@ static const u32 ipa_reg_endp_init_hdr_fmask[] = {
/* Bits 29-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
+static const u32 reg_endp_init_hdr_ext_fmask[] = {
[HDR_ENDIANNESS] = BIT(0),
[HDR_TOTAL_LEN_OR_PAD_VALID] = BIT(1),
[HDR_TOTAL_LEN_OR_PAD] = BIT(2),
@@ -268,12 +268,12 @@ static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
/* Bits 14-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
-IPA_REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
- 0x00000818, 0x0070);
+REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
+ 0x00000818, 0x0070);
-static const u32 ipa_reg_endp_init_mode_fmask[] = {
+static const u32 reg_endp_init_mode_fmask[] = {
[ENDP_MODE] = GENMASK(2, 0),
/* Bit 3 reserved */
[DEST_PIPE_INDEX] = GENMASK(8, 4),
@@ -285,9 +285,9 @@ static const u32 ipa_reg_endp_init_mode_fmask[] = {
/* Bit 31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
-static const u32 ipa_reg_endp_init_aggr_fmask[] = {
+static const u32 reg_endp_init_aggr_fmask[] = {
[AGGR_EN] = GENMASK(1, 0),
[AGGR_TYPE] = GENMASK(4, 2),
[BYTE_LIMIT] = GENMASK(9, 5),
@@ -300,25 +300,25 @@ static const u32 ipa_reg_endp_init_aggr_fmask[] = {
/* Bits 25-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_en_fmask[] = {
+static const u32 reg_endp_init_hol_block_en_fmask[] = {
[HOL_BLOCK_EN] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
- 0x0000082c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
+ 0x0000082c, 0x0070);
/* Entire register is a tick count */
-static const u32 ipa_reg_endp_init_hol_block_timer_fmask[] = {
+static const u32 reg_endp_init_hol_block_timer_fmask[] = {
[TIMER_BASE_VALUE] = GENMASK(31, 0),
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
- 0x00000830, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
+ 0x00000830, 0x0070);
-static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
+static const u32 reg_endp_init_deaggr_fmask[] = {
[DEAGGR_HDR_LEN] = GENMASK(5, 0),
[SYSPIPE_ERR_DETECTION] = BIT(6),
[PACKET_OFFSET_VALID] = BIT(7),
@@ -328,25 +328,24 @@ static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
[MAX_PACKET_LEN] = GENMASK(31, 16),
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
-static const u32 ipa_reg_endp_init_rsrc_grp_fmask[] = {
+static const u32 reg_endp_init_rsrc_grp_fmask[] = {
[ENDP_RSRC_GRP] = GENMASK(1, 0),
/* Bits 2-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp,
- 0x00000838, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, 0x00000838, 0x0070);
-static const u32 ipa_reg_endp_init_seq_fmask[] = {
+static const u32 reg_endp_init_seq_fmask[] = {
[SEQ_TYPE] = GENMASK(7, 0),
[SEQ_REP_TYPE] = GENMASK(15, 8),
/* Bits 16-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
-static const u32 ipa_reg_endp_status_fmask[] = {
+static const u32 reg_endp_status_fmask[] = {
[STATUS_EN] = BIT(0),
[STATUS_ENDP] = GENMASK(5, 1),
/* Bits 6-7 reserved */
@@ -354,9 +353,9 @@ static const u32 ipa_reg_endp_status_fmask[] = {
/* Bits 9-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
+REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
-static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
+static const u32 reg_endp_filter_router_hsh_cfg_fmask[] = {
[FILTER_HASH_MSK_SRC_ID] = BIT(0),
[FILTER_HASH_MSK_SRC_IP] = BIT(1),
[FILTER_HASH_MSK_DST_IP] = BIT(2),
@@ -377,83 +376,83 @@ static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
/* Bits 23-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
- 0x0000085c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
+ 0x0000085c, 0x0070);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP);
-static const u32 ipa_reg_ipa_irq_uc_fmask[] = {
+static const u32 reg_ipa_irq_uc_fmask[] = {
[UC_INTR] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP);
+REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
- 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
+ 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
- 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
+ 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
- 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004);
-
-static const struct ipa_reg *ipa_reg_array[] = {
- [COMP_CFG] = &ipa_reg_comp_cfg,
- [CLKON_CFG] = &ipa_reg_clkon_cfg,
- [ROUTE] = &ipa_reg_route,
- [SHARED_MEM_SIZE] = &ipa_reg_shared_mem_size,
- [QSB_MAX_WRITES] = &ipa_reg_qsb_max_writes,
- [QSB_MAX_READS] = &ipa_reg_qsb_max_reads,
- [FILT_ROUT_HASH_EN] = &ipa_reg_filt_rout_hash_en,
- [FILT_ROUT_HASH_FLUSH] = &ipa_reg_filt_rout_hash_flush,
- [STATE_AGGR_ACTIVE] = &ipa_reg_state_aggr_active,
- [IPA_BCR] = &ipa_reg_ipa_bcr,
- [LOCAL_PKT_PROC_CNTXT] = &ipa_reg_local_pkt_proc_cntxt,
- [AGGR_FORCE_CLOSE] = &ipa_reg_aggr_force_close,
- [COUNTER_CFG] = &ipa_reg_counter_cfg,
- [IPA_TX_CFG] = &ipa_reg_ipa_tx_cfg,
- [FLAVOR_0] = &ipa_reg_flavor_0,
- [IDLE_INDICATION_CFG] = &ipa_reg_idle_indication_cfg,
- [SRC_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_01_rsrc_type,
- [SRC_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_23_rsrc_type,
- [DST_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_01_rsrc_type,
- [DST_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_23_rsrc_type,
- [ENDP_INIT_CTRL] = &ipa_reg_endp_init_ctrl,
- [ENDP_INIT_CFG] = &ipa_reg_endp_init_cfg,
- [ENDP_INIT_NAT] = &ipa_reg_endp_init_nat,
- [ENDP_INIT_HDR] = &ipa_reg_endp_init_hdr,
- [ENDP_INIT_HDR_EXT] = &ipa_reg_endp_init_hdr_ext,
- [ENDP_INIT_HDR_METADATA_MASK] = &ipa_reg_endp_init_hdr_metadata_mask,
- [ENDP_INIT_MODE] = &ipa_reg_endp_init_mode,
- [ENDP_INIT_AGGR] = &ipa_reg_endp_init_aggr,
- [ENDP_INIT_HOL_BLOCK_EN] = &ipa_reg_endp_init_hol_block_en,
- [ENDP_INIT_HOL_BLOCK_TIMER] = &ipa_reg_endp_init_hol_block_timer,
- [ENDP_INIT_DEAGGR] = &ipa_reg_endp_init_deaggr,
- [ENDP_INIT_RSRC_GRP] = &ipa_reg_endp_init_rsrc_grp,
- [ENDP_INIT_SEQ] = &ipa_reg_endp_init_seq,
- [ENDP_STATUS] = &ipa_reg_endp_status,
- [ENDP_FILTER_ROUTER_HSH_CFG] = &ipa_reg_endp_filter_router_hsh_cfg,
- [IPA_IRQ_STTS] = &ipa_reg_ipa_irq_stts,
- [IPA_IRQ_EN] = &ipa_reg_ipa_irq_en,
- [IPA_IRQ_CLR] = &ipa_reg_ipa_irq_clr,
- [IPA_IRQ_UC] = &ipa_reg_ipa_irq_uc,
- [IRQ_SUSPEND_INFO] = &ipa_reg_irq_suspend_info,
- [IRQ_SUSPEND_EN] = &ipa_reg_irq_suspend_en,
- [IRQ_SUSPEND_CLR] = &ipa_reg_irq_suspend_clr,
-};
-
-const struct ipa_regs ipa_regs_v3_5_1 = {
- .reg_count = ARRAY_SIZE(ipa_reg_array),
- .reg = ipa_reg_array,
+REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
+ 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004);
+
+static const struct reg *reg_array[] = {
+ [COMP_CFG] = &reg_comp_cfg,
+ [CLKON_CFG] = &reg_clkon_cfg,
+ [ROUTE] = &reg_route,
+ [SHARED_MEM_SIZE] = &reg_shared_mem_size,
+ [QSB_MAX_WRITES] = &reg_qsb_max_writes,
+ [QSB_MAX_READS] = &reg_qsb_max_reads,
+ [FILT_ROUT_HASH_EN] = &reg_filt_rout_hash_en,
+ [FILT_ROUT_HASH_FLUSH] = &reg_filt_rout_hash_flush,
+ [STATE_AGGR_ACTIVE] = &reg_state_aggr_active,
+ [IPA_BCR] = &reg_ipa_bcr,
+ [LOCAL_PKT_PROC_CNTXT] = &reg_local_pkt_proc_cntxt,
+ [AGGR_FORCE_CLOSE] = &reg_aggr_force_close,
+ [COUNTER_CFG] = &reg_counter_cfg,
+ [IPA_TX_CFG] = &reg_ipa_tx_cfg,
+ [FLAVOR_0] = &reg_flavor_0,
+ [IDLE_INDICATION_CFG] = &reg_idle_indication_cfg,
+ [SRC_RSRC_GRP_01_RSRC_TYPE] = &reg_src_rsrc_grp_01_rsrc_type,
+ [SRC_RSRC_GRP_23_RSRC_TYPE] = &reg_src_rsrc_grp_23_rsrc_type,
+ [DST_RSRC_GRP_01_RSRC_TYPE] = &reg_dst_rsrc_grp_01_rsrc_type,
+ [DST_RSRC_GRP_23_RSRC_TYPE] = &reg_dst_rsrc_grp_23_rsrc_type,
+ [ENDP_INIT_CTRL] = &reg_endp_init_ctrl,
+ [ENDP_INIT_CFG] = &reg_endp_init_cfg,
+ [ENDP_INIT_NAT] = &reg_endp_init_nat,
+ [ENDP_INIT_HDR] = &reg_endp_init_hdr,
+ [ENDP_INIT_HDR_EXT] = &reg_endp_init_hdr_ext,
+ [ENDP_INIT_HDR_METADATA_MASK] = &reg_endp_init_hdr_metadata_mask,
+ [ENDP_INIT_MODE] = &reg_endp_init_mode,
+ [ENDP_INIT_AGGR] = &reg_endp_init_aggr,
+ [ENDP_INIT_HOL_BLOCK_EN] = &reg_endp_init_hol_block_en,
+ [ENDP_INIT_HOL_BLOCK_TIMER] = &reg_endp_init_hol_block_timer,
+ [ENDP_INIT_DEAGGR] = &reg_endp_init_deaggr,
+ [ENDP_INIT_RSRC_GRP] = &reg_endp_init_rsrc_grp,
+ [ENDP_INIT_SEQ] = &reg_endp_init_seq,
+ [ENDP_STATUS] = &reg_endp_status,
+ [ENDP_FILTER_ROUTER_HSH_CFG] = &reg_endp_filter_router_hsh_cfg,
+ [IPA_IRQ_STTS] = &reg_ipa_irq_stts,
+ [IPA_IRQ_EN] = &reg_ipa_irq_en,
+ [IPA_IRQ_CLR] = &reg_ipa_irq_clr,
+ [IPA_IRQ_UC] = &reg_ipa_irq_uc,
+ [IRQ_SUSPEND_INFO] = &reg_irq_suspend_info,
+ [IRQ_SUSPEND_EN] = &reg_irq_suspend_en,
+ [IRQ_SUSPEND_CLR] = &reg_irq_suspend_clr,
+};
+
+const struct regs ipa_regs_v3_5_1 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
};
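The *_fmask[] arrays pair each field ID with a BIT()/GENMASK() mask giving that field's position within its 32-bit register; REG_FIELDS()/REG_STRIDE_FIELDS() then tie the mask array to the register's offset (and stride). A hedged illustration of packing and unpacking field values with the kernel's u32_encode_bits()/u32_get_bits() helpers from <linux/bitfield.h>; the two wrapper functions are assumptions for illustration only:

#include <linux/bitfield.h>

/* Pack a field value into a register word using its mask from the table */
static u32 fmask_encode(const u32 *fmask, int field_id, u32 val)
{
	return u32_encode_bits(val, fmask[field_id]);
}

/* Pull a field value back out of a register word */
static u32 fmask_decode(const u32 *fmask, int field_id, u32 word)
{
	return u32_get_bits(word, fmask[field_id]);
}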
diff --git a/drivers/net/ipa/reg/ipa_reg-v4.11.c b/drivers/net/ipa/reg/ipa_reg-v4.11.c
index 9a315130530d..29e71cce4a84 100644
--- a/drivers/net/ipa/reg/ipa_reg-v4.11.c
+++ b/drivers/net/ipa/reg/ipa_reg-v4.11.c
@@ -7,7 +7,7 @@
#include "../ipa.h"
#include "../ipa_reg.h"
-static const u32 ipa_reg_comp_cfg_fmask[] = {
+static const u32 reg_comp_cfg_fmask[] = {
[RAM_ARB_PRI_CLIENT_SAMP_FIX_DIS] = BIT(0),
[GSI_SNOC_BYPASS_DIS] = BIT(1),
[GEN_QMB_0_SNOC_BYPASS_DIS] = BIT(2),
@@ -36,9 +36,9 @@ static const u32 ipa_reg_comp_cfg_fmask[] = {
[GEN_QMB_0_DYNAMIC_ASIZE] = BIT(31),
};
-IPA_REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
+REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
-static const u32 ipa_reg_clkon_cfg_fmask[] = {
+static const u32 reg_clkon_cfg_fmask[] = {
[CLKON_RX] = BIT(0),
[CLKON_PROC] = BIT(1),
[TX_WRAPPER] = BIT(2),
@@ -73,9 +73,9 @@ static const u32 ipa_reg_clkon_cfg_fmask[] = {
[DRBIP] = BIT(31),
};
-IPA_REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
+REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
-static const u32 ipa_reg_route_fmask[] = {
+static const u32 reg_route_fmask[] = {
[ROUTE_DIS] = BIT(0),
[ROUTE_DEF_PIPE] = GENMASK(5, 1),
[ROUTE_DEF_HDR_TABLE] = BIT(6),
@@ -86,24 +86,24 @@ static const u32 ipa_reg_route_fmask[] = {
/* Bits 25-31 reserved */
};
-IPA_REG_FIELDS(ROUTE, route, 0x00000048);
+REG_FIELDS(ROUTE, route, 0x00000048);
-static const u32 ipa_reg_shared_mem_size_fmask[] = {
+static const u32 reg_shared_mem_size_fmask[] = {
[MEM_SIZE] = GENMASK(15, 0),
[MEM_BADDR] = GENMASK(31, 16),
};
-IPA_REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
+REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
-static const u32 ipa_reg_qsb_max_writes_fmask[] = {
+static const u32 reg_qsb_max_writes_fmask[] = {
[GEN_QMB_0_MAX_WRITES] = GENMASK(3, 0),
[GEN_QMB_1_MAX_WRITES] = GENMASK(7, 4),
/* Bits 8-31 reserved */
};
-IPA_REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
+REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
-static const u32 ipa_reg_qsb_max_reads_fmask[] = {
+static const u32 reg_qsb_max_reads_fmask[] = {
[GEN_QMB_0_MAX_READS] = GENMASK(3, 0),
[GEN_QMB_1_MAX_READS] = GENMASK(7, 4),
/* Bits 8-15 reserved */
@@ -111,9 +111,9 @@ static const u32 ipa_reg_qsb_max_reads_fmask[] = {
[GEN_QMB_1_MAX_READS_BEATS] = GENMASK(31, 24),
};
-IPA_REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
+REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
-static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
+static const u32 reg_filt_rout_hash_en_fmask[] = {
[IPV6_ROUTER_HASH] = BIT(0),
/* Bits 1-3 reserved */
[IPV6_FILTER_HASH] = BIT(4),
@@ -124,9 +124,9 @@ static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
/* Bits 13-31 reserved */
};
-IPA_REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
+REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
-static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
+static const u32 reg_filt_rout_hash_flush_fmask[] = {
[IPV6_ROUTER_HASH] = BIT(0),
/* Bits 1-3 reserved */
[IPV6_FILTER_HASH] = BIT(4),
@@ -137,23 +137,23 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
/* Bits 13-31 reserved */
};
-IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
+REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
+REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
-static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
+static const u32 reg_local_pkt_proc_cntxt_fmask[] = {
[IPA_BASE_ADDR] = GENMASK(17, 0),
/* Bits 18-31 reserved */
};
/* Offset must be a multiple of 8 */
-IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
+REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
+REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
-static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
+static const u32 reg_ipa_tx_cfg_fmask[] = {
/* Bits 0-1 reserved */
[PREFETCH_ALMOST_EMPTY_SIZE_TX0] = GENMASK(5, 2),
[DMAW_SCND_OUTSD_PRED_THRESHOLD] = GENMASK(9, 6),
@@ -166,9 +166,9 @@ static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
/* Bits 19-31 reserved */
};
-IPA_REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
+REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
-static const u32 ipa_reg_flavor_0_fmask[] = {
+static const u32 reg_flavor_0_fmask[] = {
[MAX_PIPES] = GENMASK(4, 0),
/* Bits 5-7 reserved */
[MAX_CONS_PIPES] = GENMASK(12, 8),
@@ -179,17 +179,17 @@ static const u32 ipa_reg_flavor_0_fmask[] = {
/* Bits 28-31 reserved */
};
-IPA_REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
+REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
-static const u32 ipa_reg_idle_indication_cfg_fmask[] = {
+static const u32 reg_idle_indication_cfg_fmask[] = {
[ENTER_IDLE_DEBOUNCE_THRESH] = GENMASK(15, 0),
[CONST_NON_IDLE_ENABLE] = BIT(16),
/* Bits 17-31 reserved */
};
-IPA_REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
+REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
-static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = {
+static const u32 reg_qtime_timestamp_cfg_fmask[] = {
[DPL_TIMESTAMP_LSB] = GENMASK(4, 0),
/* Bits 5-6 reserved */
[DPL_TIMESTAMP_SEL] = BIT(7),
@@ -199,26 +199,26 @@ static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = {
/* Bits 21-31 reserved */
};
-IPA_REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c);
+REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c);
-static const u32 ipa_reg_timers_xo_clk_div_cfg_fmask[] = {
+static const u32 reg_timers_xo_clk_div_cfg_fmask[] = {
[DIV_VALUE] = GENMASK(8, 0),
/* Bits 9-30 reserved */
[DIV_ENABLE] = BIT(31),
};
-IPA_REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250);
+REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250);
-static const u32 ipa_reg_timers_pulse_gran_cfg_fmask[] = {
+static const u32 reg_timers_pulse_gran_cfg_fmask[] = {
[PULSE_GRAN_0] = GENMASK(2, 0),
[PULSE_GRAN_1] = GENMASK(5, 3),
[PULSE_GRAN_2] = GENMASK(8, 6),
/* Bits 9-31 reserved */
};
-IPA_REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254);
+REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254);
-static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -229,10 +229,10 @@ static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
- 0x00000400, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
+ 0x00000400, 0x0020);
-static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -243,10 +243,10 @@ static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
- 0x00000404, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
+ 0x00000404, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -257,10 +257,10 @@ static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
- 0x00000500, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
+ 0x00000500, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -271,10 +271,10 @@ static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
- 0x00000504, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
+ 0x00000504, 0x0020);
-static const u32 ipa_reg_endp_init_cfg_fmask[] = {
+static const u32 reg_endp_init_cfg_fmask[] = {
[FRAG_OFFLOAD_EN] = BIT(0),
[CS_OFFLOAD_EN] = GENMASK(2, 1),
[CS_METADATA_HDR_OFFSET] = GENMASK(6, 3),
@@ -283,16 +283,16 @@ static const u32 ipa_reg_endp_init_cfg_fmask[] = {
/* Bits 9-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
-static const u32 ipa_reg_endp_init_nat_fmask[] = {
+static const u32 reg_endp_init_nat_fmask[] = {
[NAT_EN] = GENMASK(1, 0),
/* Bits 2-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_fmask[] = {
+static const u32 reg_endp_init_hdr_fmask[] = {
[HDR_LEN] = GENMASK(5, 0),
[HDR_OFST_METADATA_VALID] = BIT(6),
[HDR_OFST_METADATA] = GENMASK(12, 7),
@@ -305,9 +305,9 @@ static const u32 ipa_reg_endp_init_hdr_fmask[] = {
[HDR_OFST_METADATA_MSB] = GENMASK(31, 30),
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
+static const u32 reg_endp_init_hdr_ext_fmask[] = {
[HDR_ENDIANNESS] = BIT(0),
[HDR_TOTAL_LEN_OR_PAD_VALID] = BIT(1),
[HDR_TOTAL_LEN_OR_PAD] = BIT(2),
@@ -321,12 +321,12 @@ static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
/* Bits 22-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
-IPA_REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
- 0x00000818, 0x0070);
+REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
+ 0x00000818, 0x0070);
-static const u32 ipa_reg_endp_init_mode_fmask[] = {
+static const u32 reg_endp_init_mode_fmask[] = {
[ENDP_MODE] = GENMASK(2, 0),
[DCPH_ENABLE] = BIT(3),
[DEST_PIPE_INDEX] = GENMASK(8, 4),
@@ -338,9 +338,9 @@ static const u32 ipa_reg_endp_init_mode_fmask[] = {
/* Bit 31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
-static const u32 ipa_reg_endp_init_aggr_fmask[] = {
+static const u32 reg_endp_init_aggr_fmask[] = {
[AGGR_EN] = GENMASK(1, 0),
[AGGR_TYPE] = GENMASK(4, 2),
[BYTE_LIMIT] = GENMASK(10, 5),
@@ -355,27 +355,27 @@ static const u32 ipa_reg_endp_init_aggr_fmask[] = {
/* Bits 28-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_en_fmask[] = {
+static const u32 reg_endp_init_hol_block_en_fmask[] = {
[HOL_BLOCK_EN] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
- 0x0000082c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
+ 0x0000082c, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_timer_fmask[] = {
+static const u32 reg_endp_init_hol_block_timer_fmask[] = {
[TIMER_LIMIT] = GENMASK(4, 0),
/* Bits 5-7 reserved */
[TIMER_GRAN_SEL] = BIT(8),
/* Bits 9-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
- 0x00000830, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
+ 0x00000830, 0x0070);
-static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
+static const u32 reg_endp_init_deaggr_fmask[] = {
[DEAGGR_HDR_LEN] = GENMASK(5, 0),
[SYSPIPE_ERR_DETECTION] = BIT(6),
[PACKET_OFFSET_VALID] = BIT(7),
@@ -385,24 +385,23 @@ static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
[MAX_PACKET_LEN] = GENMASK(31, 16),
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
-static const u32 ipa_reg_endp_init_rsrc_grp_fmask[] = {
+static const u32 reg_endp_init_rsrc_grp_fmask[] = {
[ENDP_RSRC_GRP] = GENMASK(1, 0),
/* Bits 2-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp,
- 0x00000838, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, 0x00000838, 0x0070);
-static const u32 ipa_reg_endp_init_seq_fmask[] = {
+static const u32 reg_endp_init_seq_fmask[] = {
[SEQ_TYPE] = GENMASK(7, 0),
/* Bits 8-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
-static const u32 ipa_reg_endp_status_fmask[] = {
+static const u32 reg_endp_status_fmask[] = {
[STATUS_EN] = BIT(0),
[STATUS_ENDP] = GENMASK(5, 1),
/* Bits 6-8 reserved */
@@ -410,9 +409,9 @@ static const u32 ipa_reg_endp_status_fmask[] = {
/* Bits 10-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
+REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
-static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
+static const u32 reg_endp_filter_router_hsh_cfg_fmask[] = {
[FILTER_HASH_MSK_SRC_ID] = BIT(0),
[FILTER_HASH_MSK_SRC_IP] = BIT(1),
[FILTER_HASH_MSK_DST_IP] = BIT(2),
@@ -433,83 +432,83 @@ static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
/* Bits 23-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
- 0x0000085c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
+ 0x0000085c, 0x0070);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00004008 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00004008 + 0x1000 * GSI_EE_AP);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_EN, ipa_irq_en, 0x0000400c + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_EN, ipa_irq_en, 0x0000400c + 0x1000 * GSI_EE_AP);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00004010 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00004010 + 0x1000 * GSI_EE_AP);
-static const u32 ipa_reg_ipa_irq_uc_fmask[] = {
+static const u32 reg_ipa_irq_uc_fmask[] = {
[UC_INTR] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000401c + 0x1000 * GSI_EE_AP);
+REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000401c + 0x1000 * GSI_EE_AP);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
- 0x00004030 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
+ 0x00004030 + 0x1000 * GSI_EE_AP, 0x0004);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
- 0x00004034 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
+ 0x00004034 + 0x1000 * GSI_EE_AP, 0x0004);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
- 0x00004038 + 0x1000 * GSI_EE_AP, 0x0004);
-
-static const struct ipa_reg *ipa_reg_array[] = {
- [COMP_CFG] = &ipa_reg_comp_cfg,
- [CLKON_CFG] = &ipa_reg_clkon_cfg,
- [ROUTE] = &ipa_reg_route,
- [SHARED_MEM_SIZE] = &ipa_reg_shared_mem_size,
- [QSB_MAX_WRITES] = &ipa_reg_qsb_max_writes,
- [QSB_MAX_READS] = &ipa_reg_qsb_max_reads,
- [FILT_ROUT_HASH_EN] = &ipa_reg_filt_rout_hash_en,
- [FILT_ROUT_HASH_FLUSH] = &ipa_reg_filt_rout_hash_flush,
- [STATE_AGGR_ACTIVE] = &ipa_reg_state_aggr_active,
- [LOCAL_PKT_PROC_CNTXT] = &ipa_reg_local_pkt_proc_cntxt,
- [AGGR_FORCE_CLOSE] = &ipa_reg_aggr_force_close,
- [IPA_TX_CFG] = &ipa_reg_ipa_tx_cfg,
- [FLAVOR_0] = &ipa_reg_flavor_0,
- [IDLE_INDICATION_CFG] = &ipa_reg_idle_indication_cfg,
- [QTIME_TIMESTAMP_CFG] = &ipa_reg_qtime_timestamp_cfg,
- [TIMERS_XO_CLK_DIV_CFG] = &ipa_reg_timers_xo_clk_div_cfg,
- [TIMERS_PULSE_GRAN_CFG] = &ipa_reg_timers_pulse_gran_cfg,
- [SRC_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_01_rsrc_type,
- [SRC_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_23_rsrc_type,
- [DST_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_01_rsrc_type,
- [DST_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_23_rsrc_type,
- [ENDP_INIT_CFG] = &ipa_reg_endp_init_cfg,
- [ENDP_INIT_NAT] = &ipa_reg_endp_init_nat,
- [ENDP_INIT_HDR] = &ipa_reg_endp_init_hdr,
- [ENDP_INIT_HDR_EXT] = &ipa_reg_endp_init_hdr_ext,
- [ENDP_INIT_HDR_METADATA_MASK] = &ipa_reg_endp_init_hdr_metadata_mask,
- [ENDP_INIT_MODE] = &ipa_reg_endp_init_mode,
- [ENDP_INIT_AGGR] = &ipa_reg_endp_init_aggr,
- [ENDP_INIT_HOL_BLOCK_EN] = &ipa_reg_endp_init_hol_block_en,
- [ENDP_INIT_HOL_BLOCK_TIMER] = &ipa_reg_endp_init_hol_block_timer,
- [ENDP_INIT_DEAGGR] = &ipa_reg_endp_init_deaggr,
- [ENDP_INIT_RSRC_GRP] = &ipa_reg_endp_init_rsrc_grp,
- [ENDP_INIT_SEQ] = &ipa_reg_endp_init_seq,
- [ENDP_STATUS] = &ipa_reg_endp_status,
- [ENDP_FILTER_ROUTER_HSH_CFG] = &ipa_reg_endp_filter_router_hsh_cfg,
- [IPA_IRQ_STTS] = &ipa_reg_ipa_irq_stts,
- [IPA_IRQ_EN] = &ipa_reg_ipa_irq_en,
- [IPA_IRQ_CLR] = &ipa_reg_ipa_irq_clr,
- [IPA_IRQ_UC] = &ipa_reg_ipa_irq_uc,
- [IRQ_SUSPEND_INFO] = &ipa_reg_irq_suspend_info,
- [IRQ_SUSPEND_EN] = &ipa_reg_irq_suspend_en,
- [IRQ_SUSPEND_CLR] = &ipa_reg_irq_suspend_clr,
-};
-
-const struct ipa_regs ipa_regs_v4_11 = {
- .reg_count = ARRAY_SIZE(ipa_reg_array),
- .reg = ipa_reg_array,
+REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
+ 0x00004038 + 0x1000 * GSI_EE_AP, 0x0004);
+
+static const struct reg *reg_array[] = {
+ [COMP_CFG] = &reg_comp_cfg,
+ [CLKON_CFG] = &reg_clkon_cfg,
+ [ROUTE] = &reg_route,
+ [SHARED_MEM_SIZE] = &reg_shared_mem_size,
+ [QSB_MAX_WRITES] = &reg_qsb_max_writes,
+ [QSB_MAX_READS] = &reg_qsb_max_reads,
+ [FILT_ROUT_HASH_EN] = &reg_filt_rout_hash_en,
+ [FILT_ROUT_HASH_FLUSH] = &reg_filt_rout_hash_flush,
+ [STATE_AGGR_ACTIVE] = &reg_state_aggr_active,
+ [LOCAL_PKT_PROC_CNTXT] = &reg_local_pkt_proc_cntxt,
+ [AGGR_FORCE_CLOSE] = &reg_aggr_force_close,
+ [IPA_TX_CFG] = &reg_ipa_tx_cfg,
+ [FLAVOR_0] = &reg_flavor_0,
+ [IDLE_INDICATION_CFG] = &reg_idle_indication_cfg,
+ [QTIME_TIMESTAMP_CFG] = &reg_qtime_timestamp_cfg,
+ [TIMERS_XO_CLK_DIV_CFG] = &reg_timers_xo_clk_div_cfg,
+ [TIMERS_PULSE_GRAN_CFG] = &reg_timers_pulse_gran_cfg,
+ [SRC_RSRC_GRP_01_RSRC_TYPE] = &reg_src_rsrc_grp_01_rsrc_type,
+ [SRC_RSRC_GRP_23_RSRC_TYPE] = &reg_src_rsrc_grp_23_rsrc_type,
+ [DST_RSRC_GRP_01_RSRC_TYPE] = &reg_dst_rsrc_grp_01_rsrc_type,
+ [DST_RSRC_GRP_23_RSRC_TYPE] = &reg_dst_rsrc_grp_23_rsrc_type,
+ [ENDP_INIT_CFG] = &reg_endp_init_cfg,
+ [ENDP_INIT_NAT] = &reg_endp_init_nat,
+ [ENDP_INIT_HDR] = &reg_endp_init_hdr,
+ [ENDP_INIT_HDR_EXT] = &reg_endp_init_hdr_ext,
+ [ENDP_INIT_HDR_METADATA_MASK] = &reg_endp_init_hdr_metadata_mask,
+ [ENDP_INIT_MODE] = &reg_endp_init_mode,
+ [ENDP_INIT_AGGR] = &reg_endp_init_aggr,
+ [ENDP_INIT_HOL_BLOCK_EN] = &reg_endp_init_hol_block_en,
+ [ENDP_INIT_HOL_BLOCK_TIMER] = &reg_endp_init_hol_block_timer,
+ [ENDP_INIT_DEAGGR] = &reg_endp_init_deaggr,
+ [ENDP_INIT_RSRC_GRP] = &reg_endp_init_rsrc_grp,
+ [ENDP_INIT_SEQ] = &reg_endp_init_seq,
+ [ENDP_STATUS] = &reg_endp_status,
+ [ENDP_FILTER_ROUTER_HSH_CFG] = &reg_endp_filter_router_hsh_cfg,
+ [IPA_IRQ_STTS] = &reg_ipa_irq_stts,
+ [IPA_IRQ_EN] = &reg_ipa_irq_en,
+ [IPA_IRQ_CLR] = &reg_ipa_irq_clr,
+ [IPA_IRQ_UC] = &reg_ipa_irq_uc,
+ [IRQ_SUSPEND_INFO] = &reg_irq_suspend_info,
+ [IRQ_SUSPEND_EN] = &reg_irq_suspend_en,
+ [IRQ_SUSPEND_CLR] = &reg_irq_suspend_clr,
+};
+
+const struct regs ipa_regs_v4_11 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
};
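Registers declared with REG_STRIDE()/REG_STRIDE_FIELDS() are replicated per endpoint (or per IRQ bitmap word) at a fixed byte stride, so instance n lives at base + n * stride. A minimal sketch, assuming the reg descriptor exposes .offset and .stride members taken from the macro arguments:

/* Offset of the nth instance of a strided register */
static u32 reg_n_offset(const struct reg *reg, u32 n)
{
	return reg->offset + n * reg->stride;
}

For example, ENDP_INIT_CFG for endpoint 3 would resolve to 0x00000808 + 3 * 0x0070 = 0x00000958.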
diff --git a/drivers/net/ipa/reg/ipa_reg-v4.2.c b/drivers/net/ipa/reg/ipa_reg-v4.2.c
index 7a95149f8ec7..bb7cf488144d 100644
--- a/drivers/net/ipa/reg/ipa_reg-v4.2.c
+++ b/drivers/net/ipa/reg/ipa_reg-v4.2.c
@@ -7,7 +7,7 @@
#include "../ipa.h"
#include "../ipa_reg.h"
-static const u32 ipa_reg_comp_cfg_fmask[] = {
+static const u32 reg_comp_cfg_fmask[] = {
/* Bit 0 reserved */
[GSI_SNOC_BYPASS_DIS] = BIT(1),
[GEN_QMB_0_SNOC_BYPASS_DIS] = BIT(2),
@@ -29,9 +29,9 @@ static const u32 ipa_reg_comp_cfg_fmask[] = {
/* Bits 21-31 reserved */
};
-IPA_REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
+REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
-static const u32 ipa_reg_clkon_cfg_fmask[] = {
+static const u32 reg_clkon_cfg_fmask[] = {
[CLKON_RX] = BIT(0),
[CLKON_PROC] = BIT(1),
[TX_WRAPPER] = BIT(2),
@@ -65,9 +65,9 @@ static const u32 ipa_reg_clkon_cfg_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
+REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
-static const u32 ipa_reg_route_fmask[] = {
+static const u32 reg_route_fmask[] = {
[ROUTE_DIS] = BIT(0),
[ROUTE_DEF_PIPE] = GENMASK(5, 1),
[ROUTE_DEF_HDR_TABLE] = BIT(6),
@@ -78,24 +78,24 @@ static const u32 ipa_reg_route_fmask[] = {
/* Bits 25-31 reserved */
};
-IPA_REG_FIELDS(ROUTE, route, 0x00000048);
+REG_FIELDS(ROUTE, route, 0x00000048);
-static const u32 ipa_reg_shared_mem_size_fmask[] = {
+static const u32 reg_shared_mem_size_fmask[] = {
[MEM_SIZE] = GENMASK(15, 0),
[MEM_BADDR] = GENMASK(31, 16),
};
-IPA_REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
+REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
-static const u32 ipa_reg_qsb_max_writes_fmask[] = {
+static const u32 reg_qsb_max_writes_fmask[] = {
[GEN_QMB_0_MAX_WRITES] = GENMASK(3, 0),
[GEN_QMB_1_MAX_WRITES] = GENMASK(7, 4),
/* Bits 8-31 reserved */
};
-IPA_REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
+REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
-static const u32 ipa_reg_qsb_max_reads_fmask[] = {
+static const u32 reg_qsb_max_reads_fmask[] = {
[GEN_QMB_0_MAX_READS] = GENMASK(3, 0),
[GEN_QMB_1_MAX_READS] = GENMASK(7, 4),
/* Bits 8-15 reserved */
@@ -103,9 +103,9 @@ static const u32 ipa_reg_qsb_max_reads_fmask[] = {
[GEN_QMB_1_MAX_READS_BEATS] = GENMASK(31, 24),
};
-IPA_REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
+REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
-static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
+static const u32 reg_filt_rout_hash_en_fmask[] = {
[IPV6_ROUTER_HASH] = BIT(0),
/* Bits 1-3 reserved */
[IPV6_FILTER_HASH] = BIT(4),
@@ -116,9 +116,9 @@ static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
/* Bits 13-31 reserved */
};
-IPA_REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
+REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
-static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
+static const u32 reg_filt_rout_hash_flush_fmask[] = {
[IPV6_ROUTER_HASH] = BIT(0),
/* Bits 1-3 reserved */
[IPV6_FILTER_HASH] = BIT(4),
@@ -129,33 +129,33 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
/* Bits 13-31 reserved */
};
-IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
+REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
+REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
-IPA_REG(IPA_BCR, ipa_bcr, 0x000001d0);
+REG(IPA_BCR, ipa_bcr, 0x000001d0);
-static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
+static const u32 reg_local_pkt_proc_cntxt_fmask[] = {
[IPA_BASE_ADDR] = GENMASK(16, 0),
/* Bits 17-31 reserved */
};
/* Offset must be a multiple of 8 */
-IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
+REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
+REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
-static const u32 ipa_reg_counter_cfg_fmask[] = {
+static const u32 reg_counter_cfg_fmask[] = {
/* Bits 0-3 reserved */
[AGGR_GRANULARITY] = GENMASK(8, 4),
/* Bits 9-31 reserved */
};
-IPA_REG_FIELDS(COUNTER_CFG, counter_cfg, 0x000001f0);
+REG_FIELDS(COUNTER_CFG, counter_cfg, 0x000001f0);
-static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
+static const u32 reg_ipa_tx_cfg_fmask[] = {
/* Bits 0-1 reserved */
[PREFETCH_ALMOST_EMPTY_SIZE_TX0] = GENMASK(5, 2),
[DMAW_SCND_OUTSD_PRED_THRESHOLD] = GENMASK(9, 6),
@@ -169,9 +169,9 @@ static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
/* Bits 20-31 reserved */
};
-IPA_REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
+REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
-static const u32 ipa_reg_flavor_0_fmask[] = {
+static const u32 reg_flavor_0_fmask[] = {
[MAX_PIPES] = GENMASK(3, 0),
/* Bits 4-7 reserved */
[MAX_CONS_PIPES] = GENMASK(12, 8),
@@ -182,17 +182,17 @@ static const u32 ipa_reg_flavor_0_fmask[] = {
/* Bits 28-31 reserved */
};
-IPA_REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
+REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
-static const u32 ipa_reg_idle_indication_cfg_fmask[] = {
+static const u32 reg_idle_indication_cfg_fmask[] = {
[ENTER_IDLE_DEBOUNCE_THRESH] = GENMASK(15, 0),
[CONST_NON_IDLE_ENABLE] = BIT(16),
/* Bits 17-31 reserved */
};
-IPA_REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
+REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
-static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -203,10 +203,10 @@ static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
- 0x00000400, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
+ 0x00000400, 0x0020);
-static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -217,10 +217,10 @@ static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
- 0x00000404, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
+ 0x00000404, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -231,10 +231,10 @@ static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
- 0x00000500, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
+ 0x00000500, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -245,10 +245,10 @@ static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
- 0x00000504, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
+ 0x00000504, 0x0020);
-static const u32 ipa_reg_endp_init_cfg_fmask[] = {
+static const u32 reg_endp_init_cfg_fmask[] = {
[FRAG_OFFLOAD_EN] = BIT(0),
[CS_OFFLOAD_EN] = GENMASK(2, 1),
[CS_METADATA_HDR_OFFSET] = GENMASK(6, 3),
@@ -257,16 +257,16 @@ static const u32 ipa_reg_endp_init_cfg_fmask[] = {
/* Bits 9-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
-static const u32 ipa_reg_endp_init_nat_fmask[] = {
+static const u32 reg_endp_init_nat_fmask[] = {
[NAT_EN] = GENMASK(1, 0),
/* Bits 2-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_fmask[] = {
+static const u32 reg_endp_init_hdr_fmask[] = {
[HDR_LEN] = GENMASK(5, 0),
[HDR_OFST_METADATA_VALID] = BIT(6),
[HDR_OFST_METADATA] = GENMASK(12, 7),
@@ -279,9 +279,9 @@ static const u32 ipa_reg_endp_init_hdr_fmask[] = {
/* Bits 29-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
+static const u32 reg_endp_init_hdr_ext_fmask[] = {
[HDR_ENDIANNESS] = BIT(0),
[HDR_TOTAL_LEN_OR_PAD_VALID] = BIT(1),
[HDR_TOTAL_LEN_OR_PAD] = BIT(2),
@@ -291,12 +291,12 @@ static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
/* Bits 14-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
-IPA_REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
- 0x00000818, 0x0070);
+REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
+ 0x00000818, 0x0070);
-static const u32 ipa_reg_endp_init_mode_fmask[] = {
+static const u32 reg_endp_init_mode_fmask[] = {
[ENDP_MODE] = GENMASK(2, 0),
/* Bit 3 reserved */
[DEST_PIPE_INDEX] = GENMASK(8, 4),
@@ -308,9 +308,9 @@ static const u32 ipa_reg_endp_init_mode_fmask[] = {
/* Bit 31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
-static const u32 ipa_reg_endp_init_aggr_fmask[] = {
+static const u32 reg_endp_init_aggr_fmask[] = {
[AGGR_EN] = GENMASK(1, 0),
[AGGR_TYPE] = GENMASK(4, 2),
[BYTE_LIMIT] = GENMASK(9, 5),
@@ -323,27 +323,27 @@ static const u32 ipa_reg_endp_init_aggr_fmask[] = {
/* Bits 25-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_en_fmask[] = {
+static const u32 reg_endp_init_hol_block_en_fmask[] = {
[HOL_BLOCK_EN] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
- 0x0000082c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
+ 0x0000082c, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_timer_fmask[] = {
+static const u32 reg_endp_init_hol_block_timer_fmask[] = {
[TIMER_BASE_VALUE] = GENMASK(4, 0),
/* Bits 5-7 reserved */
[TIMER_SCALE] = GENMASK(12, 8),
 /* Bits 13-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
- 0x00000830, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
+ 0x00000830, 0x0070);
-static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
+static const u32 reg_endp_init_deaggr_fmask[] = {
[DEAGGR_HDR_LEN] = GENMASK(5, 0),
[SYSPIPE_ERR_DETECTION] = BIT(6),
[PACKET_OFFSET_VALID] = BIT(7),
@@ -353,25 +353,24 @@ static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
[MAX_PACKET_LEN] = GENMASK(31, 16),
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
-static const u32 ipa_reg_endp_init_rsrc_grp_fmask[] = {
+static const u32 reg_endp_init_rsrc_grp_fmask[] = {
[ENDP_RSRC_GRP] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp,
- 0x00000838, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, 0x00000838, 0x0070);
-static const u32 ipa_reg_endp_init_seq_fmask[] = {
+static const u32 reg_endp_init_seq_fmask[] = {
[SEQ_TYPE] = GENMASK(7, 0),
[SEQ_REP_TYPE] = GENMASK(15, 8),
/* Bits 16-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
-static const u32 ipa_reg_endp_status_fmask[] = {
+static const u32 reg_endp_status_fmask[] = {
[STATUS_EN] = BIT(0),
[STATUS_ENDP] = GENMASK(5, 1),
/* Bits 6-7 reserved */
@@ -380,80 +379,80 @@ static const u32 ipa_reg_endp_status_fmask[] = {
/* Bits 10-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
+REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP);
-static const u32 ipa_reg_ipa_irq_uc_fmask[] = {
+static const u32 reg_ipa_irq_uc_fmask[] = {
[UC_INTR] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP);
+REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
- 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
+ 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
- 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
+ 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
- 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004);
-
-static const struct ipa_reg *ipa_reg_array[] = {
- [COMP_CFG] = &ipa_reg_comp_cfg,
- [CLKON_CFG] = &ipa_reg_clkon_cfg,
- [ROUTE] = &ipa_reg_route,
- [SHARED_MEM_SIZE] = &ipa_reg_shared_mem_size,
- [QSB_MAX_WRITES] = &ipa_reg_qsb_max_writes,
- [QSB_MAX_READS] = &ipa_reg_qsb_max_reads,
- [FILT_ROUT_HASH_EN] = &ipa_reg_filt_rout_hash_en,
- [FILT_ROUT_HASH_FLUSH] = &ipa_reg_filt_rout_hash_flush,
- [STATE_AGGR_ACTIVE] = &ipa_reg_state_aggr_active,
- [IPA_BCR] = &ipa_reg_ipa_bcr,
- [LOCAL_PKT_PROC_CNTXT] = &ipa_reg_local_pkt_proc_cntxt,
- [AGGR_FORCE_CLOSE] = &ipa_reg_aggr_force_close,
- [COUNTER_CFG] = &ipa_reg_counter_cfg,
- [IPA_TX_CFG] = &ipa_reg_ipa_tx_cfg,
- [FLAVOR_0] = &ipa_reg_flavor_0,
- [IDLE_INDICATION_CFG] = &ipa_reg_idle_indication_cfg,
- [SRC_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_01_rsrc_type,
- [SRC_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_23_rsrc_type,
- [DST_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_01_rsrc_type,
- [DST_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_23_rsrc_type,
- [ENDP_INIT_CFG] = &ipa_reg_endp_init_cfg,
- [ENDP_INIT_NAT] = &ipa_reg_endp_init_nat,
- [ENDP_INIT_HDR] = &ipa_reg_endp_init_hdr,
- [ENDP_INIT_HDR_EXT] = &ipa_reg_endp_init_hdr_ext,
- [ENDP_INIT_HDR_METADATA_MASK] = &ipa_reg_endp_init_hdr_metadata_mask,
- [ENDP_INIT_MODE] = &ipa_reg_endp_init_mode,
- [ENDP_INIT_AGGR] = &ipa_reg_endp_init_aggr,
- [ENDP_INIT_HOL_BLOCK_EN] = &ipa_reg_endp_init_hol_block_en,
- [ENDP_INIT_HOL_BLOCK_TIMER] = &ipa_reg_endp_init_hol_block_timer,
- [ENDP_INIT_DEAGGR] = &ipa_reg_endp_init_deaggr,
- [ENDP_INIT_RSRC_GRP] = &ipa_reg_endp_init_rsrc_grp,
- [ENDP_INIT_SEQ] = &ipa_reg_endp_init_seq,
- [ENDP_STATUS] = &ipa_reg_endp_status,
- [IPA_IRQ_STTS] = &ipa_reg_ipa_irq_stts,
- [IPA_IRQ_EN] = &ipa_reg_ipa_irq_en,
- [IPA_IRQ_CLR] = &ipa_reg_ipa_irq_clr,
- [IPA_IRQ_UC] = &ipa_reg_ipa_irq_uc,
- [IRQ_SUSPEND_INFO] = &ipa_reg_irq_suspend_info,
- [IRQ_SUSPEND_EN] = &ipa_reg_irq_suspend_en,
- [IRQ_SUSPEND_CLR] = &ipa_reg_irq_suspend_clr,
-};
-
-const struct ipa_regs ipa_regs_v4_2 = {
- .reg_count = ARRAY_SIZE(ipa_reg_array),
- .reg = ipa_reg_array,
+REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
+ 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004);
+
+static const struct reg *reg_array[] = {
+ [COMP_CFG] = &reg_comp_cfg,
+ [CLKON_CFG] = &reg_clkon_cfg,
+ [ROUTE] = &reg_route,
+ [SHARED_MEM_SIZE] = &reg_shared_mem_size,
+ [QSB_MAX_WRITES] = &reg_qsb_max_writes,
+ [QSB_MAX_READS] = &reg_qsb_max_reads,
+ [FILT_ROUT_HASH_EN] = &reg_filt_rout_hash_en,
+ [FILT_ROUT_HASH_FLUSH] = &reg_filt_rout_hash_flush,
+ [STATE_AGGR_ACTIVE] = &reg_state_aggr_active,
+ [IPA_BCR] = &reg_ipa_bcr,
+ [LOCAL_PKT_PROC_CNTXT] = &reg_local_pkt_proc_cntxt,
+ [AGGR_FORCE_CLOSE] = &reg_aggr_force_close,
+ [COUNTER_CFG] = &reg_counter_cfg,
+ [IPA_TX_CFG] = &reg_ipa_tx_cfg,
+ [FLAVOR_0] = &reg_flavor_0,
+ [IDLE_INDICATION_CFG] = &reg_idle_indication_cfg,
+ [SRC_RSRC_GRP_01_RSRC_TYPE] = &reg_src_rsrc_grp_01_rsrc_type,
+ [SRC_RSRC_GRP_23_RSRC_TYPE] = &reg_src_rsrc_grp_23_rsrc_type,
+ [DST_RSRC_GRP_01_RSRC_TYPE] = &reg_dst_rsrc_grp_01_rsrc_type,
+ [DST_RSRC_GRP_23_RSRC_TYPE] = &reg_dst_rsrc_grp_23_rsrc_type,
+ [ENDP_INIT_CFG] = &reg_endp_init_cfg,
+ [ENDP_INIT_NAT] = &reg_endp_init_nat,
+ [ENDP_INIT_HDR] = &reg_endp_init_hdr,
+ [ENDP_INIT_HDR_EXT] = &reg_endp_init_hdr_ext,
+ [ENDP_INIT_HDR_METADATA_MASK] = &reg_endp_init_hdr_metadata_mask,
+ [ENDP_INIT_MODE] = &reg_endp_init_mode,
+ [ENDP_INIT_AGGR] = &reg_endp_init_aggr,
+ [ENDP_INIT_HOL_BLOCK_EN] = &reg_endp_init_hol_block_en,
+ [ENDP_INIT_HOL_BLOCK_TIMER] = &reg_endp_init_hol_block_timer,
+ [ENDP_INIT_DEAGGR] = &reg_endp_init_deaggr,
+ [ENDP_INIT_RSRC_GRP] = &reg_endp_init_rsrc_grp,
+ [ENDP_INIT_SEQ] = &reg_endp_init_seq,
+ [ENDP_STATUS] = &reg_endp_status,
+ [IPA_IRQ_STTS] = &reg_ipa_irq_stts,
+ [IPA_IRQ_EN] = &reg_ipa_irq_en,
+ [IPA_IRQ_CLR] = &reg_ipa_irq_clr,
+ [IPA_IRQ_UC] = &reg_ipa_irq_uc,
+ [IRQ_SUSPEND_INFO] = &reg_irq_suspend_info,
+ [IRQ_SUSPEND_EN] = &reg_irq_suspend_en,
+ [IRQ_SUSPEND_CLR] = &reg_irq_suspend_clr,
+};
+
+const struct regs ipa_regs_v4_2 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
};
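The interrupt registers sit in per-execution-environment banks spaced 0x1000 bytes apart, which is why their offsets are written as a base plus 0x1000 * GSI_EE_AP; only the application processor's bank is used here. A minimal sketch of that addressing, with the macro and helper names assumed for illustration:

#define IPA_IRQ_BANK_STRIDE	0x1000	/* one register bank per execution environment */

/* Hypothetical: offset of the IRQ status register for a given EE (v4.2 base) */
static u32 ipa_irq_stts_offset(u32 ee_id)
{
	return 0x00003008 + IPA_IRQ_BANK_STRIDE * ee_id;
}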
diff --git a/drivers/net/ipa/reg/ipa_reg-v4.5.c b/drivers/net/ipa/reg/ipa_reg-v4.5.c
index 587eb8d4e00f..1c58f78851c2 100644
--- a/drivers/net/ipa/reg/ipa_reg-v4.5.c
+++ b/drivers/net/ipa/reg/ipa_reg-v4.5.c
@@ -7,7 +7,7 @@
#include "../ipa.h"
#include "../ipa_reg.h"
-static const u32 ipa_reg_comp_cfg_fmask[] = {
+static const u32 reg_comp_cfg_fmask[] = {
/* Bit 0 reserved */
[GSI_SNOC_BYPASS_DIS] = BIT(1),
[GEN_QMB_0_SNOC_BYPASS_DIS] = BIT(2),
@@ -30,9 +30,9 @@ static const u32 ipa_reg_comp_cfg_fmask[] = {
/* Bits 22-31 reserved */
};
-IPA_REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
+REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
-static const u32 ipa_reg_clkon_cfg_fmask[] = {
+static const u32 reg_clkon_cfg_fmask[] = {
[CLKON_RX] = BIT(0),
[CLKON_PROC] = BIT(1),
[TX_WRAPPER] = BIT(2),
@@ -67,9 +67,9 @@ static const u32 ipa_reg_clkon_cfg_fmask[] = {
/* Bit 31 reserved */
};
-IPA_REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
+REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
-static const u32 ipa_reg_route_fmask[] = {
+static const u32 reg_route_fmask[] = {
[ROUTE_DIS] = BIT(0),
[ROUTE_DEF_PIPE] = GENMASK(5, 1),
[ROUTE_DEF_HDR_TABLE] = BIT(6),
@@ -80,24 +80,24 @@ static const u32 ipa_reg_route_fmask[] = {
/* Bits 25-31 reserved */
};
-IPA_REG_FIELDS(ROUTE, route, 0x00000048);
+REG_FIELDS(ROUTE, route, 0x00000048);
-static const u32 ipa_reg_shared_mem_size_fmask[] = {
+static const u32 reg_shared_mem_size_fmask[] = {
[MEM_SIZE] = GENMASK(15, 0),
[MEM_BADDR] = GENMASK(31, 16),
};
-IPA_REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
+REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
-static const u32 ipa_reg_qsb_max_writes_fmask[] = {
+static const u32 reg_qsb_max_writes_fmask[] = {
[GEN_QMB_0_MAX_WRITES] = GENMASK(3, 0),
[GEN_QMB_1_MAX_WRITES] = GENMASK(7, 4),
/* Bits 8-31 reserved */
};
-IPA_REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
+REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
-static const u32 ipa_reg_qsb_max_reads_fmask[] = {
+static const u32 reg_qsb_max_reads_fmask[] = {
[GEN_QMB_0_MAX_READS] = GENMASK(3, 0),
[GEN_QMB_1_MAX_READS] = GENMASK(7, 4),
/* Bits 8-15 reserved */
@@ -105,9 +105,9 @@ static const u32 ipa_reg_qsb_max_reads_fmask[] = {
[GEN_QMB_1_MAX_READS_BEATS] = GENMASK(31, 24),
};
-IPA_REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
+REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
-static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
+static const u32 reg_filt_rout_hash_en_fmask[] = {
[IPV6_ROUTER_HASH] = BIT(0),
/* Bits 1-3 reserved */
[IPV6_FILTER_HASH] = BIT(4),
@@ -118,9 +118,9 @@ static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
/* Bits 13-31 reserved */
};
-IPA_REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
+REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
-static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
+static const u32 reg_filt_rout_hash_flush_fmask[] = {
[IPV6_ROUTER_HASH] = BIT(0),
/* Bits 1-3 reserved */
[IPV6_FILTER_HASH] = BIT(4),
@@ -131,23 +131,23 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
/* Bits 13-31 reserved */
};
-IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
+REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
+REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
-static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
+static const u32 reg_local_pkt_proc_cntxt_fmask[] = {
[IPA_BASE_ADDR] = GENMASK(17, 0),
/* Bits 18-31 reserved */
};
/* Offset must be a multiple of 8 */
-IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
+REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
+REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
-static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
+static const u32 reg_ipa_tx_cfg_fmask[] = {
/* Bits 0-1 reserved */
[PREFETCH_ALMOST_EMPTY_SIZE_TX0] = GENMASK(5, 2),
[DMAW_SCND_OUTSD_PRED_THRESHOLD] = GENMASK(9, 6),
@@ -159,9 +159,9 @@ static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
/* Bits 18-31 reserved */
};
-IPA_REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
+REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
-static const u32 ipa_reg_flavor_0_fmask[] = {
+static const u32 reg_flavor_0_fmask[] = {
[MAX_PIPES] = GENMASK(3, 0),
/* Bits 4-7 reserved */
[MAX_CONS_PIPES] = GENMASK(12, 8),
@@ -172,17 +172,17 @@ static const u32 ipa_reg_flavor_0_fmask[] = {
/* Bits 28-31 reserved */
};
-IPA_REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
+REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
-static const u32 ipa_reg_idle_indication_cfg_fmask[] = {
+static const u32 reg_idle_indication_cfg_fmask[] = {
[ENTER_IDLE_DEBOUNCE_THRESH] = GENMASK(15, 0),
[CONST_NON_IDLE_ENABLE] = BIT(16),
/* Bits 17-31 reserved */
};
-IPA_REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
+REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
-static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = {
+static const u32 reg_qtime_timestamp_cfg_fmask[] = {
[DPL_TIMESTAMP_LSB] = GENMASK(4, 0),
/* Bits 5-6 reserved */
[DPL_TIMESTAMP_SEL] = BIT(7),
@@ -192,25 +192,25 @@ static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = {
/* Bits 21-31 reserved */
};
-IPA_REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c);
+REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c);
-static const u32 ipa_reg_timers_xo_clk_div_cfg_fmask[] = {
+static const u32 reg_timers_xo_clk_div_cfg_fmask[] = {
[DIV_VALUE] = GENMASK(8, 0),
/* Bits 9-30 reserved */
[DIV_ENABLE] = BIT(31),
};
-IPA_REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250);
+REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250);
-static const u32 ipa_reg_timers_pulse_gran_cfg_fmask[] = {
+static const u32 reg_timers_pulse_gran_cfg_fmask[] = {
[PULSE_GRAN_0] = GENMASK(2, 0),
[PULSE_GRAN_1] = GENMASK(5, 3),
[PULSE_GRAN_2] = GENMASK(8, 6),
};
-IPA_REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254);
+REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254);
-static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -221,10 +221,10 @@ static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
- 0x00000400, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
+ 0x00000400, 0x0020);
-static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -235,10 +235,10 @@ static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
- 0x00000404, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
+ 0x00000404, 0x0020);
-static const u32 ipa_reg_src_rsrc_grp_45_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_45_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -249,10 +249,10 @@ static const u32 ipa_reg_src_rsrc_grp_45_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_45_RSRC_TYPE, src_rsrc_grp_45_rsrc_type,
- 0x00000408, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_45_RSRC_TYPE, src_rsrc_grp_45_rsrc_type,
+ 0x00000408, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -263,10 +263,10 @@ static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
- 0x00000500, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
+ 0x00000500, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -277,10 +277,10 @@ static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
- 0x00000504, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
+ 0x00000504, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_45_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_45_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -291,10 +291,10 @@ static const u32 ipa_reg_dst_rsrc_grp_45_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_45_RSRC_TYPE, dst_rsrc_grp_45_rsrc_type,
- 0x00000508, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_45_RSRC_TYPE, dst_rsrc_grp_45_rsrc_type,
+ 0x00000508, 0x0020);
-static const u32 ipa_reg_endp_init_cfg_fmask[] = {
+static const u32 reg_endp_init_cfg_fmask[] = {
[FRAG_OFFLOAD_EN] = BIT(0),
[CS_OFFLOAD_EN] = GENMASK(2, 1),
[CS_METADATA_HDR_OFFSET] = GENMASK(6, 3),
@@ -303,16 +303,16 @@ static const u32 ipa_reg_endp_init_cfg_fmask[] = {
/* Bits 9-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
-static const u32 ipa_reg_endp_init_nat_fmask[] = {
+static const u32 reg_endp_init_nat_fmask[] = {
[NAT_EN] = GENMASK(1, 0),
/* Bits 2-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_fmask[] = {
+static const u32 reg_endp_init_hdr_fmask[] = {
[HDR_LEN] = GENMASK(5, 0),
[HDR_OFST_METADATA_VALID] = BIT(6),
[HDR_OFST_METADATA] = GENMASK(12, 7),
@@ -325,9 +325,9 @@ static const u32 ipa_reg_endp_init_hdr_fmask[] = {
[HDR_OFST_METADATA_MSB] = GENMASK(31, 30),
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
+static const u32 reg_endp_init_hdr_ext_fmask[] = {
[HDR_ENDIANNESS] = BIT(0),
[HDR_TOTAL_LEN_OR_PAD_VALID] = BIT(1),
[HDR_TOTAL_LEN_OR_PAD] = BIT(2),
@@ -341,12 +341,12 @@ static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
/* Bits 22-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
-IPA_REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
- 0x00000818, 0x0070);
+REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
+ 0x00000818, 0x0070);
-static const u32 ipa_reg_endp_init_mode_fmask[] = {
+static const u32 reg_endp_init_mode_fmask[] = {
[ENDP_MODE] = GENMASK(2, 0),
[DCPH_ENABLE] = BIT(3),
[DEST_PIPE_INDEX] = GENMASK(8, 4),
@@ -357,9 +357,9 @@ static const u32 ipa_reg_endp_init_mode_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
-static const u32 ipa_reg_endp_init_aggr_fmask[] = {
+static const u32 reg_endp_init_aggr_fmask[] = {
[AGGR_EN] = GENMASK(1, 0),
[AGGR_TYPE] = GENMASK(4, 2),
[BYTE_LIMIT] = GENMASK(10, 5),
@@ -374,27 +374,27 @@ static const u32 ipa_reg_endp_init_aggr_fmask[] = {
/* Bits 28-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_en_fmask[] = {
+static const u32 reg_endp_init_hol_block_en_fmask[] = {
[HOL_BLOCK_EN] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
- 0x0000082c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
+ 0x0000082c, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_timer_fmask[] = {
+static const u32 reg_endp_init_hol_block_timer_fmask[] = {
[TIMER_LIMIT] = GENMASK(4, 0),
/* Bits 5-7 reserved */
[TIMER_GRAN_SEL] = BIT(8),
/* Bits 9-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
- 0x00000830, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
+ 0x00000830, 0x0070);
-static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
+static const u32 reg_endp_init_deaggr_fmask[] = {
[DEAGGR_HDR_LEN] = GENMASK(5, 0),
[SYSPIPE_ERR_DETECTION] = BIT(6),
[PACKET_OFFSET_VALID] = BIT(7),
@@ -404,24 +404,23 @@ static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
[MAX_PACKET_LEN] = GENMASK(31, 16),
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
-static const u32 ipa_reg_endp_init_rsrc_grp_fmask[] = {
+static const u32 reg_endp_init_rsrc_grp_fmask[] = {
[ENDP_RSRC_GRP] = GENMASK(2, 0),
/* Bits 3-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp,
- 0x00000838, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, 0x00000838, 0x0070);
-static const u32 ipa_reg_endp_init_seq_fmask[] = {
+static const u32 reg_endp_init_seq_fmask[] = {
[SEQ_TYPE] = GENMASK(7, 0),
/* Bits 8-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
-static const u32 ipa_reg_endp_status_fmask[] = {
+static const u32 reg_endp_status_fmask[] = {
[STATUS_EN] = BIT(0),
[STATUS_ENDP] = GENMASK(5, 1),
/* Bits 6-8 reserved */
@@ -429,9 +428,9 @@ static const u32 ipa_reg_endp_status_fmask[] = {
/* Bits 10-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
+REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
-static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
+static const u32 reg_endp_filter_router_hsh_cfg_fmask[] = {
[FILTER_HASH_MSK_SRC_ID] = BIT(0),
[FILTER_HASH_MSK_SRC_IP] = BIT(1),
[FILTER_HASH_MSK_DST_IP] = BIT(2),
@@ -452,85 +451,85 @@ static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
/* Bits 23-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
- 0x0000085c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
+ 0x0000085c, 0x0070);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP);
-static const u32 ipa_reg_ipa_irq_uc_fmask[] = {
+static const u32 reg_ipa_irq_uc_fmask[] = {
[UC_INTR] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP);
+REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
- 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
+ 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
- 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
+ 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
- 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004);
-
-static const struct ipa_reg *ipa_reg_array[] = {
- [COMP_CFG] = &ipa_reg_comp_cfg,
- [CLKON_CFG] = &ipa_reg_clkon_cfg,
- [ROUTE] = &ipa_reg_route,
- [SHARED_MEM_SIZE] = &ipa_reg_shared_mem_size,
- [QSB_MAX_WRITES] = &ipa_reg_qsb_max_writes,
- [QSB_MAX_READS] = &ipa_reg_qsb_max_reads,
- [FILT_ROUT_HASH_EN] = &ipa_reg_filt_rout_hash_en,
- [FILT_ROUT_HASH_FLUSH] = &ipa_reg_filt_rout_hash_flush,
- [STATE_AGGR_ACTIVE] = &ipa_reg_state_aggr_active,
- [LOCAL_PKT_PROC_CNTXT] = &ipa_reg_local_pkt_proc_cntxt,
- [AGGR_FORCE_CLOSE] = &ipa_reg_aggr_force_close,
- [IPA_TX_CFG] = &ipa_reg_ipa_tx_cfg,
- [FLAVOR_0] = &ipa_reg_flavor_0,
- [IDLE_INDICATION_CFG] = &ipa_reg_idle_indication_cfg,
- [QTIME_TIMESTAMP_CFG] = &ipa_reg_qtime_timestamp_cfg,
- [TIMERS_XO_CLK_DIV_CFG] = &ipa_reg_timers_xo_clk_div_cfg,
- [TIMERS_PULSE_GRAN_CFG] = &ipa_reg_timers_pulse_gran_cfg,
- [SRC_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_01_rsrc_type,
- [SRC_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_23_rsrc_type,
- [SRC_RSRC_GRP_45_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_45_rsrc_type,
- [DST_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_01_rsrc_type,
- [DST_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_23_rsrc_type,
- [DST_RSRC_GRP_45_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_45_rsrc_type,
- [ENDP_INIT_CFG] = &ipa_reg_endp_init_cfg,
- [ENDP_INIT_NAT] = &ipa_reg_endp_init_nat,
- [ENDP_INIT_HDR] = &ipa_reg_endp_init_hdr,
- [ENDP_INIT_HDR_EXT] = &ipa_reg_endp_init_hdr_ext,
- [ENDP_INIT_HDR_METADATA_MASK] = &ipa_reg_endp_init_hdr_metadata_mask,
- [ENDP_INIT_MODE] = &ipa_reg_endp_init_mode,
- [ENDP_INIT_AGGR] = &ipa_reg_endp_init_aggr,
- [ENDP_INIT_HOL_BLOCK_EN] = &ipa_reg_endp_init_hol_block_en,
- [ENDP_INIT_HOL_BLOCK_TIMER] = &ipa_reg_endp_init_hol_block_timer,
- [ENDP_INIT_DEAGGR] = &ipa_reg_endp_init_deaggr,
- [ENDP_INIT_RSRC_GRP] = &ipa_reg_endp_init_rsrc_grp,
- [ENDP_INIT_SEQ] = &ipa_reg_endp_init_seq,
- [ENDP_STATUS] = &ipa_reg_endp_status,
- [ENDP_FILTER_ROUTER_HSH_CFG] = &ipa_reg_endp_filter_router_hsh_cfg,
- [IPA_IRQ_STTS] = &ipa_reg_ipa_irq_stts,
- [IPA_IRQ_EN] = &ipa_reg_ipa_irq_en,
- [IPA_IRQ_CLR] = &ipa_reg_ipa_irq_clr,
- [IPA_IRQ_UC] = &ipa_reg_ipa_irq_uc,
- [IRQ_SUSPEND_INFO] = &ipa_reg_irq_suspend_info,
- [IRQ_SUSPEND_EN] = &ipa_reg_irq_suspend_en,
- [IRQ_SUSPEND_CLR] = &ipa_reg_irq_suspend_clr,
-};
-
-const struct ipa_regs ipa_regs_v4_5 = {
- .reg_count = ARRAY_SIZE(ipa_reg_array),
- .reg = ipa_reg_array,
+REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
+ 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004);
+
+static const struct reg *reg_array[] = {
+ [COMP_CFG] = &reg_comp_cfg,
+ [CLKON_CFG] = &reg_clkon_cfg,
+ [ROUTE] = &reg_route,
+ [SHARED_MEM_SIZE] = &reg_shared_mem_size,
+ [QSB_MAX_WRITES] = &reg_qsb_max_writes,
+ [QSB_MAX_READS] = &reg_qsb_max_reads,
+ [FILT_ROUT_HASH_EN] = &reg_filt_rout_hash_en,
+ [FILT_ROUT_HASH_FLUSH] = &reg_filt_rout_hash_flush,
+ [STATE_AGGR_ACTIVE] = &reg_state_aggr_active,
+ [LOCAL_PKT_PROC_CNTXT] = &reg_local_pkt_proc_cntxt,
+ [AGGR_FORCE_CLOSE] = &reg_aggr_force_close,
+ [IPA_TX_CFG] = &reg_ipa_tx_cfg,
+ [FLAVOR_0] = &reg_flavor_0,
+ [IDLE_INDICATION_CFG] = &reg_idle_indication_cfg,
+ [QTIME_TIMESTAMP_CFG] = &reg_qtime_timestamp_cfg,
+ [TIMERS_XO_CLK_DIV_CFG] = &reg_timers_xo_clk_div_cfg,
+ [TIMERS_PULSE_GRAN_CFG] = &reg_timers_pulse_gran_cfg,
+ [SRC_RSRC_GRP_01_RSRC_TYPE] = &reg_src_rsrc_grp_01_rsrc_type,
+ [SRC_RSRC_GRP_23_RSRC_TYPE] = &reg_src_rsrc_grp_23_rsrc_type,
+ [SRC_RSRC_GRP_45_RSRC_TYPE] = &reg_src_rsrc_grp_45_rsrc_type,
+ [DST_RSRC_GRP_01_RSRC_TYPE] = &reg_dst_rsrc_grp_01_rsrc_type,
+ [DST_RSRC_GRP_23_RSRC_TYPE] = &reg_dst_rsrc_grp_23_rsrc_type,
+ [DST_RSRC_GRP_45_RSRC_TYPE] = &reg_dst_rsrc_grp_45_rsrc_type,
+ [ENDP_INIT_CFG] = &reg_endp_init_cfg,
+ [ENDP_INIT_NAT] = &reg_endp_init_nat,
+ [ENDP_INIT_HDR] = &reg_endp_init_hdr,
+ [ENDP_INIT_HDR_EXT] = &reg_endp_init_hdr_ext,
+ [ENDP_INIT_HDR_METADATA_MASK] = &reg_endp_init_hdr_metadata_mask,
+ [ENDP_INIT_MODE] = &reg_endp_init_mode,
+ [ENDP_INIT_AGGR] = &reg_endp_init_aggr,
+ [ENDP_INIT_HOL_BLOCK_EN] = &reg_endp_init_hol_block_en,
+ [ENDP_INIT_HOL_BLOCK_TIMER] = &reg_endp_init_hol_block_timer,
+ [ENDP_INIT_DEAGGR] = &reg_endp_init_deaggr,
+ [ENDP_INIT_RSRC_GRP] = &reg_endp_init_rsrc_grp,
+ [ENDP_INIT_SEQ] = &reg_endp_init_seq,
+ [ENDP_STATUS] = &reg_endp_status,
+ [ENDP_FILTER_ROUTER_HSH_CFG] = &reg_endp_filter_router_hsh_cfg,
+ [IPA_IRQ_STTS] = &reg_ipa_irq_stts,
+ [IPA_IRQ_EN] = &reg_ipa_irq_en,
+ [IPA_IRQ_CLR] = &reg_ipa_irq_clr,
+ [IPA_IRQ_UC] = &reg_ipa_irq_uc,
+ [IRQ_SUSPEND_INFO] = &reg_irq_suspend_info,
+ [IRQ_SUSPEND_EN] = &reg_irq_suspend_en,
+ [IRQ_SUSPEND_CLR] = &reg_irq_suspend_clr,
+};
+
+const struct regs ipa_regs_v4_5 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
};
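Each version file ends the same way: a sparse reg_array[] indexed by register ID, wrapped in a struct regs recording the array and its size. With that layout, looking a register up and addressing the n-th instance of a strided one (the ENDP_* registers repeat every 0x0070 bytes per endpoint) reduces to offset + n * stride. The helpers below are illustrative names, assuming the struct sketch given earlier; the in-tree equivalents may be spelled differently.

/* Illustrative only: find a register by ID, then compute the byte
 * offset of its n-th instance from the base offset and stride.
 */
static inline const struct reg *reg_lookup(const struct regs *regs,
					   unsigned int reg_id)
{
	return reg_id < regs->reg_count ? regs->reg[reg_id] : NULL;
}

static inline u32 reg_n_offset(const struct reg *reg, u32 n)
{
	return reg->offset + n * reg->stride;
}

/* Example: ENDP_INIT_CFG for endpoint 3 on IPA v4.5 lands at
 *   0x00000808 + 3 * 0x0070 = 0x00000958
 */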
diff --git a/drivers/net/ipa/reg/ipa_reg-v4.7.c b/drivers/net/ipa/reg/ipa_reg-v4.7.c
index 21f8a58e59a0..731824fce1d4 100644
--- a/drivers/net/ipa/reg/ipa_reg-v4.7.c
+++ b/drivers/net/ipa/reg/ipa_reg-v4.7.c
@@ -7,7 +7,7 @@
#include "../ipa.h"
#include "../ipa_reg.h"
-static const u32 ipa_reg_comp_cfg_fmask[] = {
+static const u32 reg_comp_cfg_fmask[] = {
[RAM_ARB_PRI_CLIENT_SAMP_FIX_DIS] = BIT(0),
[GSI_SNOC_BYPASS_DIS] = BIT(1),
[GEN_QMB_0_SNOC_BYPASS_DIS] = BIT(2),
@@ -30,9 +30,9 @@ static const u32 ipa_reg_comp_cfg_fmask[] = {
/* Bits 22-31 reserved */
};
-IPA_REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
+REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
-static const u32 ipa_reg_clkon_cfg_fmask[] = {
+static const u32 reg_clkon_cfg_fmask[] = {
[CLKON_RX] = BIT(0),
[CLKON_PROC] = BIT(1),
[TX_WRAPPER] = BIT(2),
@@ -67,9 +67,9 @@ static const u32 ipa_reg_clkon_cfg_fmask[] = {
[DRBIP] = BIT(31),
};
-IPA_REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
+REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
-static const u32 ipa_reg_route_fmask[] = {
+static const u32 reg_route_fmask[] = {
[ROUTE_DIS] = BIT(0),
[ROUTE_DEF_PIPE] = GENMASK(5, 1),
[ROUTE_DEF_HDR_TABLE] = BIT(6),
@@ -80,24 +80,24 @@ static const u32 ipa_reg_route_fmask[] = {
/* Bits 25-31 reserved */
};
-IPA_REG_FIELDS(ROUTE, route, 0x00000048);
+REG_FIELDS(ROUTE, route, 0x00000048);
-static const u32 ipa_reg_shared_mem_size_fmask[] = {
+static const u32 reg_shared_mem_size_fmask[] = {
[MEM_SIZE] = GENMASK(15, 0),
[MEM_BADDR] = GENMASK(31, 16),
};
-IPA_REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
+REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
-static const u32 ipa_reg_qsb_max_writes_fmask[] = {
+static const u32 reg_qsb_max_writes_fmask[] = {
[GEN_QMB_0_MAX_WRITES] = GENMASK(3, 0),
[GEN_QMB_1_MAX_WRITES] = GENMASK(7, 4),
/* Bits 8-31 reserved */
};
-IPA_REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
+REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
-static const u32 ipa_reg_qsb_max_reads_fmask[] = {
+static const u32 reg_qsb_max_reads_fmask[] = {
[GEN_QMB_0_MAX_READS] = GENMASK(3, 0),
[GEN_QMB_1_MAX_READS] = GENMASK(7, 4),
/* Bits 8-15 reserved */
@@ -105,9 +105,9 @@ static const u32 ipa_reg_qsb_max_reads_fmask[] = {
[GEN_QMB_1_MAX_READS_BEATS] = GENMASK(31, 24),
};
-IPA_REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
+REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
-static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
+static const u32 reg_filt_rout_hash_en_fmask[] = {
[IPV6_ROUTER_HASH] = BIT(0),
/* Bits 1-3 reserved */
[IPV6_FILTER_HASH] = BIT(4),
@@ -118,9 +118,9 @@ static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
/* Bits 13-31 reserved */
};
-IPA_REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
+REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
-static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
+static const u32 reg_filt_rout_hash_flush_fmask[] = {
[IPV6_ROUTER_HASH] = BIT(0),
/* Bits 1-3 reserved */
[IPV6_FILTER_HASH] = BIT(4),
@@ -131,23 +131,23 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
/* Bits 13-31 reserved */
};
-IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
+REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
+REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
-static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
+static const u32 reg_local_pkt_proc_cntxt_fmask[] = {
[IPA_BASE_ADDR] = GENMASK(17, 0),
/* Bits 18-31 reserved */
};
/* Offset must be a multiple of 8 */
-IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
+REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
+REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
-static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
+static const u32 reg_ipa_tx_cfg_fmask[] = {
/* Bits 0-1 reserved */
[PREFETCH_ALMOST_EMPTY_SIZE_TX0] = GENMASK(5, 2),
[DMAW_SCND_OUTSD_PRED_THRESHOLD] = GENMASK(9, 6),
@@ -160,9 +160,9 @@ static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
/* Bits 19-31 reserved */
};
-IPA_REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
+REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
-static const u32 ipa_reg_flavor_0_fmask[] = {
+static const u32 reg_flavor_0_fmask[] = {
[MAX_PIPES] = GENMASK(3, 0),
/* Bits 4-7 reserved */
[MAX_CONS_PIPES] = GENMASK(12, 8),
@@ -173,17 +173,17 @@ static const u32 ipa_reg_flavor_0_fmask[] = {
/* Bits 28-31 reserved */
};
-IPA_REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
+REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
-static const u32 ipa_reg_idle_indication_cfg_fmask[] = {
+static const u32 reg_idle_indication_cfg_fmask[] = {
[ENTER_IDLE_DEBOUNCE_THRESH] = GENMASK(15, 0),
[CONST_NON_IDLE_ENABLE] = BIT(16),
/* Bits 17-31 reserved */
};
-IPA_REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
+REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
-static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = {
+static const u32 reg_qtime_timestamp_cfg_fmask[] = {
[DPL_TIMESTAMP_LSB] = GENMASK(4, 0),
/* Bits 5-6 reserved */
[DPL_TIMESTAMP_SEL] = BIT(7),
@@ -193,25 +193,25 @@ static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = {
/* Bits 21-31 reserved */
};
-IPA_REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c);
+REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c);
-static const u32 ipa_reg_timers_xo_clk_div_cfg_fmask[] = {
+static const u32 reg_timers_xo_clk_div_cfg_fmask[] = {
[DIV_VALUE] = GENMASK(8, 0),
/* Bits 9-30 reserved */
[DIV_ENABLE] = BIT(31),
};
-IPA_REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250);
+REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250);
-static const u32 ipa_reg_timers_pulse_gran_cfg_fmask[] = {
+static const u32 reg_timers_pulse_gran_cfg_fmask[] = {
[PULSE_GRAN_0] = GENMASK(2, 0),
[PULSE_GRAN_1] = GENMASK(5, 3),
[PULSE_GRAN_2] = GENMASK(8, 6),
};
-IPA_REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254);
+REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254);
-static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -222,10 +222,10 @@ static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
- 0x00000400, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
+ 0x00000400, 0x0020);
-static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -236,10 +236,10 @@ static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
- 0x00000404, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
+ 0x00000404, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -250,10 +250,10 @@ static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
- 0x00000500, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
+ 0x00000500, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -264,10 +264,10 @@ static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
- 0x00000504, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
+ 0x00000504, 0x0020);
-static const u32 ipa_reg_endp_init_cfg_fmask[] = {
+static const u32 reg_endp_init_cfg_fmask[] = {
[FRAG_OFFLOAD_EN] = BIT(0),
[CS_OFFLOAD_EN] = GENMASK(2, 1),
[CS_METADATA_HDR_OFFSET] = GENMASK(6, 3),
@@ -276,16 +276,16 @@ static const u32 ipa_reg_endp_init_cfg_fmask[] = {
/* Bits 9-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
-static const u32 ipa_reg_endp_init_nat_fmask[] = {
+static const u32 reg_endp_init_nat_fmask[] = {
[NAT_EN] = GENMASK(1, 0),
/* Bits 2-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_fmask[] = {
+static const u32 reg_endp_init_hdr_fmask[] = {
[HDR_LEN] = GENMASK(5, 0),
[HDR_OFST_METADATA_VALID] = BIT(6),
[HDR_OFST_METADATA] = GENMASK(12, 7),
@@ -298,9 +298,9 @@ static const u32 ipa_reg_endp_init_hdr_fmask[] = {
[HDR_OFST_METADATA_MSB] = GENMASK(31, 30),
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
+static const u32 reg_endp_init_hdr_ext_fmask[] = {
[HDR_ENDIANNESS] = BIT(0),
[HDR_TOTAL_LEN_OR_PAD_VALID] = BIT(1),
[HDR_TOTAL_LEN_OR_PAD] = BIT(2),
@@ -314,12 +314,12 @@ static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
/* Bits 22-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
-IPA_REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
- 0x00000818, 0x0070);
+REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
+ 0x00000818, 0x0070);
-static const u32 ipa_reg_endp_init_mode_fmask[] = {
+static const u32 reg_endp_init_mode_fmask[] = {
[ENDP_MODE] = GENMASK(2, 0),
[DCPH_ENABLE] = BIT(3),
[DEST_PIPE_INDEX] = GENMASK(8, 4),
@@ -330,9 +330,9 @@ static const u32 ipa_reg_endp_init_mode_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
-static const u32 ipa_reg_endp_init_aggr_fmask[] = {
+static const u32 reg_endp_init_aggr_fmask[] = {
[AGGR_EN] = GENMASK(1, 0),
[AGGR_TYPE] = GENMASK(4, 2),
[BYTE_LIMIT] = GENMASK(10, 5),
@@ -347,27 +347,27 @@ static const u32 ipa_reg_endp_init_aggr_fmask[] = {
/* Bits 28-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_en_fmask[] = {
+static const u32 reg_endp_init_hol_block_en_fmask[] = {
[HOL_BLOCK_EN] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
- 0x0000082c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
+ 0x0000082c, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_timer_fmask[] = {
+static const u32 reg_endp_init_hol_block_timer_fmask[] = {
[TIMER_LIMIT] = GENMASK(4, 0),
/* Bits 5-7 reserved */
[TIMER_GRAN_SEL] = BIT(8),
/* Bits 9-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
- 0x00000830, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
+ 0x00000830, 0x0070);
-static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
+static const u32 reg_endp_init_deaggr_fmask[] = {
[DEAGGR_HDR_LEN] = GENMASK(5, 0),
[SYSPIPE_ERR_DETECTION] = BIT(6),
[PACKET_OFFSET_VALID] = BIT(7),
@@ -377,24 +377,23 @@ static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
[MAX_PACKET_LEN] = GENMASK(31, 16),
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
-static const u32 ipa_reg_endp_init_rsrc_grp_fmask[] = {
+static const u32 reg_endp_init_rsrc_grp_fmask[] = {
[ENDP_RSRC_GRP] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp,
- 0x00000838, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, 0x00000838, 0x0070);
-static const u32 ipa_reg_endp_init_seq_fmask[] = {
+static const u32 reg_endp_init_seq_fmask[] = {
[SEQ_TYPE] = GENMASK(7, 0),
/* Bits 8-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
-static const u32 ipa_reg_endp_status_fmask[] = {
+static const u32 reg_endp_status_fmask[] = {
[STATUS_EN] = BIT(0),
[STATUS_ENDP] = GENMASK(5, 1),
/* Bits 6-8 reserved */
@@ -402,9 +401,9 @@ static const u32 ipa_reg_endp_status_fmask[] = {
/* Bits 10-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
+REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
-static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
+static const u32 reg_endp_filter_router_hsh_cfg_fmask[] = {
[FILTER_HASH_MSK_SRC_ID] = BIT(0),
[FILTER_HASH_MSK_SRC_IP] = BIT(1),
[FILTER_HASH_MSK_DST_IP] = BIT(2),
@@ -425,83 +424,83 @@ static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
/* Bits 23-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
- 0x0000085c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
+ 0x0000085c, 0x0070);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP);
-static const u32 ipa_reg_ipa_irq_uc_fmask[] = {
+static const u32 reg_ipa_irq_uc_fmask[] = {
[UC_INTR] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP);
+REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
- 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
+ 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
- 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
+ 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
- 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004);
-
-static const struct ipa_reg *ipa_reg_array[] = {
- [COMP_CFG] = &ipa_reg_comp_cfg,
- [CLKON_CFG] = &ipa_reg_clkon_cfg,
- [ROUTE] = &ipa_reg_route,
- [SHARED_MEM_SIZE] = &ipa_reg_shared_mem_size,
- [QSB_MAX_WRITES] = &ipa_reg_qsb_max_writes,
- [QSB_MAX_READS] = &ipa_reg_qsb_max_reads,
- [FILT_ROUT_HASH_EN] = &ipa_reg_filt_rout_hash_en,
- [FILT_ROUT_HASH_FLUSH] = &ipa_reg_filt_rout_hash_flush,
- [STATE_AGGR_ACTIVE] = &ipa_reg_state_aggr_active,
- [LOCAL_PKT_PROC_CNTXT] = &ipa_reg_local_pkt_proc_cntxt,
- [AGGR_FORCE_CLOSE] = &ipa_reg_aggr_force_close,
- [IPA_TX_CFG] = &ipa_reg_ipa_tx_cfg,
- [FLAVOR_0] = &ipa_reg_flavor_0,
- [IDLE_INDICATION_CFG] = &ipa_reg_idle_indication_cfg,
- [QTIME_TIMESTAMP_CFG] = &ipa_reg_qtime_timestamp_cfg,
- [TIMERS_XO_CLK_DIV_CFG] = &ipa_reg_timers_xo_clk_div_cfg,
- [TIMERS_PULSE_GRAN_CFG] = &ipa_reg_timers_pulse_gran_cfg,
- [SRC_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_01_rsrc_type,
- [SRC_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_23_rsrc_type,
- [DST_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_01_rsrc_type,
- [DST_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_23_rsrc_type,
- [ENDP_INIT_CFG] = &ipa_reg_endp_init_cfg,
- [ENDP_INIT_NAT] = &ipa_reg_endp_init_nat,
- [ENDP_INIT_HDR] = &ipa_reg_endp_init_hdr,
- [ENDP_INIT_HDR_EXT] = &ipa_reg_endp_init_hdr_ext,
- [ENDP_INIT_HDR_METADATA_MASK] = &ipa_reg_endp_init_hdr_metadata_mask,
- [ENDP_INIT_MODE] = &ipa_reg_endp_init_mode,
- [ENDP_INIT_AGGR] = &ipa_reg_endp_init_aggr,
- [ENDP_INIT_HOL_BLOCK_EN] = &ipa_reg_endp_init_hol_block_en,
- [ENDP_INIT_HOL_BLOCK_TIMER] = &ipa_reg_endp_init_hol_block_timer,
- [ENDP_INIT_DEAGGR] = &ipa_reg_endp_init_deaggr,
- [ENDP_INIT_RSRC_GRP] = &ipa_reg_endp_init_rsrc_grp,
- [ENDP_INIT_SEQ] = &ipa_reg_endp_init_seq,
- [ENDP_STATUS] = &ipa_reg_endp_status,
- [ENDP_FILTER_ROUTER_HSH_CFG] = &ipa_reg_endp_filter_router_hsh_cfg,
- [IPA_IRQ_STTS] = &ipa_reg_ipa_irq_stts,
- [IPA_IRQ_EN] = &ipa_reg_ipa_irq_en,
- [IPA_IRQ_CLR] = &ipa_reg_ipa_irq_clr,
- [IPA_IRQ_UC] = &ipa_reg_ipa_irq_uc,
- [IRQ_SUSPEND_INFO] = &ipa_reg_irq_suspend_info,
- [IRQ_SUSPEND_EN] = &ipa_reg_irq_suspend_en,
- [IRQ_SUSPEND_CLR] = &ipa_reg_irq_suspend_clr,
-};
-
-const struct ipa_regs ipa_regs_v4_7 = {
- .reg_count = ARRAY_SIZE(ipa_reg_array),
- .reg = ipa_reg_array,
+REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
+ 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004);
+
+static const struct reg *reg_array[] = {
+ [COMP_CFG] = &reg_comp_cfg,
+ [CLKON_CFG] = &reg_clkon_cfg,
+ [ROUTE] = &reg_route,
+ [SHARED_MEM_SIZE] = &reg_shared_mem_size,
+ [QSB_MAX_WRITES] = &reg_qsb_max_writes,
+ [QSB_MAX_READS] = &reg_qsb_max_reads,
+ [FILT_ROUT_HASH_EN] = &reg_filt_rout_hash_en,
+ [FILT_ROUT_HASH_FLUSH] = &reg_filt_rout_hash_flush,
+ [STATE_AGGR_ACTIVE] = &reg_state_aggr_active,
+ [LOCAL_PKT_PROC_CNTXT] = &reg_local_pkt_proc_cntxt,
+ [AGGR_FORCE_CLOSE] = &reg_aggr_force_close,
+ [IPA_TX_CFG] = &reg_ipa_tx_cfg,
+ [FLAVOR_0] = &reg_flavor_0,
+ [IDLE_INDICATION_CFG] = &reg_idle_indication_cfg,
+ [QTIME_TIMESTAMP_CFG] = &reg_qtime_timestamp_cfg,
+ [TIMERS_XO_CLK_DIV_CFG] = &reg_timers_xo_clk_div_cfg,
+ [TIMERS_PULSE_GRAN_CFG] = &reg_timers_pulse_gran_cfg,
+ [SRC_RSRC_GRP_01_RSRC_TYPE] = &reg_src_rsrc_grp_01_rsrc_type,
+ [SRC_RSRC_GRP_23_RSRC_TYPE] = &reg_src_rsrc_grp_23_rsrc_type,
+ [DST_RSRC_GRP_01_RSRC_TYPE] = &reg_dst_rsrc_grp_01_rsrc_type,
+ [DST_RSRC_GRP_23_RSRC_TYPE] = &reg_dst_rsrc_grp_23_rsrc_type,
+ [ENDP_INIT_CFG] = &reg_endp_init_cfg,
+ [ENDP_INIT_NAT] = &reg_endp_init_nat,
+ [ENDP_INIT_HDR] = &reg_endp_init_hdr,
+ [ENDP_INIT_HDR_EXT] = &reg_endp_init_hdr_ext,
+ [ENDP_INIT_HDR_METADATA_MASK] = &reg_endp_init_hdr_metadata_mask,
+ [ENDP_INIT_MODE] = &reg_endp_init_mode,
+ [ENDP_INIT_AGGR] = &reg_endp_init_aggr,
+ [ENDP_INIT_HOL_BLOCK_EN] = &reg_endp_init_hol_block_en,
+ [ENDP_INIT_HOL_BLOCK_TIMER] = &reg_endp_init_hol_block_timer,
+ [ENDP_INIT_DEAGGR] = &reg_endp_init_deaggr,
+ [ENDP_INIT_RSRC_GRP] = &reg_endp_init_rsrc_grp,
+ [ENDP_INIT_SEQ] = &reg_endp_init_seq,
+ [ENDP_STATUS] = &reg_endp_status,
+ [ENDP_FILTER_ROUTER_HSH_CFG] = &reg_endp_filter_router_hsh_cfg,
+ [IPA_IRQ_STTS] = &reg_ipa_irq_stts,
+ [IPA_IRQ_EN] = &reg_ipa_irq_en,
+ [IPA_IRQ_CLR] = &reg_ipa_irq_clr,
+ [IPA_IRQ_UC] = &reg_ipa_irq_uc,
+ [IRQ_SUSPEND_INFO] = &reg_irq_suspend_info,
+ [IRQ_SUSPEND_EN] = &reg_irq_suspend_en,
+ [IRQ_SUSPEND_CLR] = &reg_irq_suspend_clr,
+};
+
+const struct regs ipa_regs_v4_7 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
};
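The *_fmask[] tables in these files map each field ID to its BIT()/GENMASK() mask, so writing a field value is a mask-and-shift against fmask[field_id]. u32_encode_bits() from <linux/bitfield.h> performs exactly that shift at runtime; the reg_encode() wrapper below is an illustrative name layered on the struct reg sketch above, not necessarily the driver's exact helper.

#include <linux/bitfield.h>

/* Illustrative wrapper: place @val into the field identified by
 * @field_id, using the register's fmask[] table to find the mask.
 */
static inline u32 reg_encode(const struct reg *reg, u32 field_id, u32 val)
{
	u32 fmask = reg->fmask[field_id];	/* e.g. GENMASK(5, 1) */

	return u32_encode_bits(val, fmask);
}

/* Example: selecting default pipe 2 in the ROUTE register, where
 * ROUTE_DEF_PIPE is GENMASK(5, 1), yields 2 << 1 == 0x4.
 */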
diff --git a/drivers/net/ipa/reg/ipa_reg-v4.9.c b/drivers/net/ipa/reg/ipa_reg-v4.9.c
index 1f67a03fe599..01f87b5290e0 100644
--- a/drivers/net/ipa/reg/ipa_reg-v4.9.c
+++ b/drivers/net/ipa/reg/ipa_reg-v4.9.c
@@ -7,7 +7,7 @@
#include "../ipa.h"
#include "../ipa_reg.h"
-static const u32 ipa_reg_comp_cfg_fmask[] = {
+static const u32 reg_comp_cfg_fmask[] = {
[RAM_ARB_PRI_CLIENT_SAMP_FIX_DIS] = BIT(0),
[GSI_SNOC_BYPASS_DIS] = BIT(1),
[GEN_QMB_0_SNOC_BYPASS_DIS] = BIT(2),
@@ -35,9 +35,9 @@ static const u32 ipa_reg_comp_cfg_fmask[] = {
[GEN_QMB_0_DYNAMIC_ASIZE] = BIT(31),
};
-IPA_REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
+REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
-static const u32 ipa_reg_clkon_cfg_fmask[] = {
+static const u32 reg_clkon_cfg_fmask[] = {
[CLKON_RX] = BIT(0),
[CLKON_PROC] = BIT(1),
[TX_WRAPPER] = BIT(2),
@@ -72,9 +72,9 @@ static const u32 ipa_reg_clkon_cfg_fmask[] = {
[DRBIP] = BIT(31),
};
-IPA_REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
+REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
-static const u32 ipa_reg_route_fmask[] = {
+static const u32 reg_route_fmask[] = {
[ROUTE_DIS] = BIT(0),
[ROUTE_DEF_PIPE] = GENMASK(5, 1),
[ROUTE_DEF_HDR_TABLE] = BIT(6),
@@ -85,24 +85,24 @@ static const u32 ipa_reg_route_fmask[] = {
/* Bits 25-31 reserved */
};
-IPA_REG_FIELDS(ROUTE, route, 0x00000048);
+REG_FIELDS(ROUTE, route, 0x00000048);
-static const u32 ipa_reg_shared_mem_size_fmask[] = {
+static const u32 reg_shared_mem_size_fmask[] = {
[MEM_SIZE] = GENMASK(15, 0),
[MEM_BADDR] = GENMASK(31, 16),
};
-IPA_REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
+REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
-static const u32 ipa_reg_qsb_max_writes_fmask[] = {
+static const u32 reg_qsb_max_writes_fmask[] = {
[GEN_QMB_0_MAX_WRITES] = GENMASK(3, 0),
[GEN_QMB_1_MAX_WRITES] = GENMASK(7, 4),
/* Bits 8-31 reserved */
};
-IPA_REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
+REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
-static const u32 ipa_reg_qsb_max_reads_fmask[] = {
+static const u32 reg_qsb_max_reads_fmask[] = {
[GEN_QMB_0_MAX_READS] = GENMASK(3, 0),
[GEN_QMB_1_MAX_READS] = GENMASK(7, 4),
/* Bits 8-15 reserved */
@@ -110,9 +110,9 @@ static const u32 ipa_reg_qsb_max_reads_fmask[] = {
[GEN_QMB_1_MAX_READS_BEATS] = GENMASK(31, 24),
};
-IPA_REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
+REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
-static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
+static const u32 reg_filt_rout_hash_en_fmask[] = {
[IPV6_ROUTER_HASH] = BIT(0),
/* Bits 1-3 reserved */
[IPV6_FILTER_HASH] = BIT(4),
@@ -123,9 +123,9 @@ static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
/* Bits 13-31 reserved */
};
-IPA_REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
+REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
-static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
+static const u32 reg_filt_rout_hash_flush_fmask[] = {
[IPV6_ROUTER_HASH] = BIT(0),
/* Bits 1-3 reserved */
[IPV6_FILTER_HASH] = BIT(4),
@@ -136,23 +136,23 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
/* Bits 13-31 reserved */
};
-IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
+REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
+REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
-static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
+static const u32 reg_local_pkt_proc_cntxt_fmask[] = {
[IPA_BASE_ADDR] = GENMASK(17, 0),
/* Bits 18-31 reserved */
};
/* Offset must be a multiple of 8 */
-IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
+REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
+REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
-static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
+static const u32 reg_ipa_tx_cfg_fmask[] = {
/* Bits 0-1 reserved */
[PREFETCH_ALMOST_EMPTY_SIZE_TX0] = GENMASK(5, 2),
[DMAW_SCND_OUTSD_PRED_THRESHOLD] = GENMASK(9, 6),
@@ -165,9 +165,9 @@ static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
/* Bits 19-31 reserved */
};
-IPA_REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
+REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
-static const u32 ipa_reg_flavor_0_fmask[] = {
+static const u32 reg_flavor_0_fmask[] = {
[MAX_PIPES] = GENMASK(3, 0),
/* Bits 4-7 reserved */
[MAX_CONS_PIPES] = GENMASK(12, 8),
@@ -178,17 +178,17 @@ static const u32 ipa_reg_flavor_0_fmask[] = {
/* Bits 28-31 reserved */
};
-IPA_REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
+REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
-static const u32 ipa_reg_idle_indication_cfg_fmask[] = {
+static const u32 reg_idle_indication_cfg_fmask[] = {
[ENTER_IDLE_DEBOUNCE_THRESH] = GENMASK(15, 0),
[CONST_NON_IDLE_ENABLE] = BIT(16),
/* Bits 17-31 reserved */
};
-IPA_REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
+REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
-static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = {
+static const u32 reg_qtime_timestamp_cfg_fmask[] = {
[DPL_TIMESTAMP_LSB] = GENMASK(4, 0),
/* Bits 5-6 reserved */
[DPL_TIMESTAMP_SEL] = BIT(7),
@@ -198,25 +198,25 @@ static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = {
/* Bits 21-31 reserved */
};
-IPA_REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c);
+REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c);
-static const u32 ipa_reg_timers_xo_clk_div_cfg_fmask[] = {
+static const u32 reg_timers_xo_clk_div_cfg_fmask[] = {
[DIV_VALUE] = GENMASK(8, 0),
/* Bits 9-30 reserved */
[DIV_ENABLE] = BIT(31),
};
-IPA_REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250);
+REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250);
-static const u32 ipa_reg_timers_pulse_gran_cfg_fmask[] = {
+static const u32 reg_timers_pulse_gran_cfg_fmask[] = {
[PULSE_GRAN_0] = GENMASK(2, 0),
[PULSE_GRAN_1] = GENMASK(5, 3),
[PULSE_GRAN_2] = GENMASK(8, 6),
};
-IPA_REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254);
+REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254);
-static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -227,10 +227,10 @@ static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
- 0x00000400, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
+ 0x00000400, 0x0020);
-static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -241,10 +241,10 @@ static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
- 0x00000404, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
+ 0x00000404, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -255,10 +255,10 @@ static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
- 0x00000500, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
+ 0x00000500, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
[X_MIN_LIM] = GENMASK(5, 0),
/* Bits 6-7 reserved */
[X_MAX_LIM] = GENMASK(13, 8),
@@ -269,10 +269,10 @@ static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
/* Bits 30-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
- 0x00000504, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
+ 0x00000504, 0x0020);
-static const u32 ipa_reg_endp_init_cfg_fmask[] = {
+static const u32 reg_endp_init_cfg_fmask[] = {
[FRAG_OFFLOAD_EN] = BIT(0),
[CS_OFFLOAD_EN] = GENMASK(2, 1),
[CS_METADATA_HDR_OFFSET] = GENMASK(6, 3),
@@ -281,16 +281,16 @@ static const u32 ipa_reg_endp_init_cfg_fmask[] = {
/* Bits 9-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
-static const u32 ipa_reg_endp_init_nat_fmask[] = {
+static const u32 reg_endp_init_nat_fmask[] = {
[NAT_EN] = GENMASK(1, 0),
/* Bits 2-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_fmask[] = {
+static const u32 reg_endp_init_hdr_fmask[] = {
[HDR_LEN] = GENMASK(5, 0),
[HDR_OFST_METADATA_VALID] = BIT(6),
[HDR_OFST_METADATA] = GENMASK(12, 7),
@@ -302,9 +302,9 @@ static const u32 ipa_reg_endp_init_hdr_fmask[] = {
[HDR_OFST_METADATA_MSB] = GENMASK(31, 30),
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
+static const u32 reg_endp_init_hdr_ext_fmask[] = {
[HDR_ENDIANNESS] = BIT(0),
[HDR_TOTAL_LEN_OR_PAD_VALID] = BIT(1),
[HDR_TOTAL_LEN_OR_PAD] = BIT(2),
@@ -318,12 +318,12 @@ static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
/* Bits 22-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
-IPA_REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
- 0x00000818, 0x0070);
+REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
+ 0x00000818, 0x0070);
-static const u32 ipa_reg_endp_init_mode_fmask[] = {
+static const u32 reg_endp_init_mode_fmask[] = {
[ENDP_MODE] = GENMASK(2, 0),
[DCPH_ENABLE] = BIT(3),
[DEST_PIPE_INDEX] = GENMASK(8, 4),
@@ -335,9 +335,9 @@ static const u32 ipa_reg_endp_init_mode_fmask[] = {
/* Bit 31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
-static const u32 ipa_reg_endp_init_aggr_fmask[] = {
+static const u32 reg_endp_init_aggr_fmask[] = {
[AGGR_EN] = GENMASK(1, 0),
[AGGR_TYPE] = GENMASK(4, 2),
[BYTE_LIMIT] = GENMASK(10, 5),
@@ -352,27 +352,27 @@ static const u32 ipa_reg_endp_init_aggr_fmask[] = {
/* Bits 28-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_en_fmask[] = {
+static const u32 reg_endp_init_hol_block_en_fmask[] = {
[HOL_BLOCK_EN] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
- 0x0000082c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
+ 0x0000082c, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_timer_fmask[] = {
+static const u32 reg_endp_init_hol_block_timer_fmask[] = {
[TIMER_LIMIT] = GENMASK(4, 0),
/* Bits 5-7 reserved */
[TIMER_GRAN_SEL] = BIT(8),
/* Bits 9-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
- 0x00000830, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
+ 0x00000830, 0x0070);
-static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
+static const u32 reg_endp_init_deaggr_fmask[] = {
[DEAGGR_HDR_LEN] = GENMASK(5, 0),
[SYSPIPE_ERR_DETECTION] = BIT(6),
[PACKET_OFFSET_VALID] = BIT(7),
@@ -382,24 +382,23 @@ static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
[MAX_PACKET_LEN] = GENMASK(31, 16),
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
-static const u32 ipa_reg_endp_init_rsrc_grp_fmask[] = {
+static const u32 reg_endp_init_rsrc_grp_fmask[] = {
[ENDP_RSRC_GRP] = GENMASK(1, 0),
/* Bits 2-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp,
- 0x00000838, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, 0x00000838, 0x0070);
-static const u32 ipa_reg_endp_init_seq_fmask[] = {
+static const u32 reg_endp_init_seq_fmask[] = {
[SEQ_TYPE] = GENMASK(7, 0),
/* Bits 8-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
-static const u32 ipa_reg_endp_status_fmask[] = {
+static const u32 reg_endp_status_fmask[] = {
[STATUS_EN] = BIT(0),
[STATUS_ENDP] = GENMASK(5, 1),
/* Bits 6-8 reserved */
@@ -407,9 +406,9 @@ static const u32 ipa_reg_endp_status_fmask[] = {
/* Bits 10-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
+REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
-static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
+static const u32 reg_endp_filter_router_hsh_cfg_fmask[] = {
[FILTER_HASH_MSK_SRC_ID] = BIT(0),
[FILTER_HASH_MSK_SRC_IP] = BIT(1),
[FILTER_HASH_MSK_DST_IP] = BIT(2),
@@ -430,83 +429,83 @@ static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
/* Bits 23-31 reserved */
};
-IPA_REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
- 0x0000085c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
+ 0x0000085c, 0x0070);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00004008 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00004008 + 0x1000 * GSI_EE_AP);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_EN, ipa_irq_en, 0x0000400c + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_EN, ipa_irq_en, 0x0000400c + 0x1000 * GSI_EE_AP);
/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00004010 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00004010 + 0x1000 * GSI_EE_AP);
-static const u32 ipa_reg_ipa_irq_uc_fmask[] = {
+static const u32 reg_ipa_irq_uc_fmask[] = {
[UC_INTR] = BIT(0),
/* Bits 1-31 reserved */
};
-IPA_REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000401c + 0x1000 * GSI_EE_AP);
+REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000401c + 0x1000 * GSI_EE_AP);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
- 0x00004030 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
+ 0x00004030 + 0x1000 * GSI_EE_AP, 0x0004);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
- 0x00004034 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
+ 0x00004034 + 0x1000 * GSI_EE_AP, 0x0004);
/* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
- 0x00004038 + 0x1000 * GSI_EE_AP, 0x0004);
-
-static const struct ipa_reg *ipa_reg_array[] = {
- [COMP_CFG] = &ipa_reg_comp_cfg,
- [CLKON_CFG] = &ipa_reg_clkon_cfg,
- [ROUTE] = &ipa_reg_route,
- [SHARED_MEM_SIZE] = &ipa_reg_shared_mem_size,
- [QSB_MAX_WRITES] = &ipa_reg_qsb_max_writes,
- [QSB_MAX_READS] = &ipa_reg_qsb_max_reads,
- [FILT_ROUT_HASH_EN] = &ipa_reg_filt_rout_hash_en,
- [FILT_ROUT_HASH_FLUSH] = &ipa_reg_filt_rout_hash_flush,
- [STATE_AGGR_ACTIVE] = &ipa_reg_state_aggr_active,
- [LOCAL_PKT_PROC_CNTXT] = &ipa_reg_local_pkt_proc_cntxt,
- [AGGR_FORCE_CLOSE] = &ipa_reg_aggr_force_close,
- [IPA_TX_CFG] = &ipa_reg_ipa_tx_cfg,
- [FLAVOR_0] = &ipa_reg_flavor_0,
- [IDLE_INDICATION_CFG] = &ipa_reg_idle_indication_cfg,
- [QTIME_TIMESTAMP_CFG] = &ipa_reg_qtime_timestamp_cfg,
- [TIMERS_XO_CLK_DIV_CFG] = &ipa_reg_timers_xo_clk_div_cfg,
- [TIMERS_PULSE_GRAN_CFG] = &ipa_reg_timers_pulse_gran_cfg,
- [SRC_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_01_rsrc_type,
- [SRC_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_23_rsrc_type,
- [DST_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_01_rsrc_type,
- [DST_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_23_rsrc_type,
- [ENDP_INIT_CFG] = &ipa_reg_endp_init_cfg,
- [ENDP_INIT_NAT] = &ipa_reg_endp_init_nat,
- [ENDP_INIT_HDR] = &ipa_reg_endp_init_hdr,
- [ENDP_INIT_HDR_EXT] = &ipa_reg_endp_init_hdr_ext,
- [ENDP_INIT_HDR_METADATA_MASK] = &ipa_reg_endp_init_hdr_metadata_mask,
- [ENDP_INIT_MODE] = &ipa_reg_endp_init_mode,
- [ENDP_INIT_AGGR] = &ipa_reg_endp_init_aggr,
- [ENDP_INIT_HOL_BLOCK_EN] = &ipa_reg_endp_init_hol_block_en,
- [ENDP_INIT_HOL_BLOCK_TIMER] = &ipa_reg_endp_init_hol_block_timer,
- [ENDP_INIT_DEAGGR] = &ipa_reg_endp_init_deaggr,
- [ENDP_INIT_RSRC_GRP] = &ipa_reg_endp_init_rsrc_grp,
- [ENDP_INIT_SEQ] = &ipa_reg_endp_init_seq,
- [ENDP_STATUS] = &ipa_reg_endp_status,
- [ENDP_FILTER_ROUTER_HSH_CFG] = &ipa_reg_endp_filter_router_hsh_cfg,
- [IPA_IRQ_STTS] = &ipa_reg_ipa_irq_stts,
- [IPA_IRQ_EN] = &ipa_reg_ipa_irq_en,
- [IPA_IRQ_CLR] = &ipa_reg_ipa_irq_clr,
- [IPA_IRQ_UC] = &ipa_reg_ipa_irq_uc,
- [IRQ_SUSPEND_INFO] = &ipa_reg_irq_suspend_info,
- [IRQ_SUSPEND_EN] = &ipa_reg_irq_suspend_en,
- [IRQ_SUSPEND_CLR] = &ipa_reg_irq_suspend_clr,
-};
-
-const struct ipa_regs ipa_regs_v4_9 = {
- .reg_count = ARRAY_SIZE(ipa_reg_array),
- .reg = ipa_reg_array,
+REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
+ 0x00004038 + 0x1000 * GSI_EE_AP, 0x0004);
+
+static const struct reg *reg_array[] = {
+ [COMP_CFG] = &reg_comp_cfg,
+ [CLKON_CFG] = &reg_clkon_cfg,
+ [ROUTE] = &reg_route,
+ [SHARED_MEM_SIZE] = &reg_shared_mem_size,
+ [QSB_MAX_WRITES] = &reg_qsb_max_writes,
+ [QSB_MAX_READS] = &reg_qsb_max_reads,
+ [FILT_ROUT_HASH_EN] = &reg_filt_rout_hash_en,
+ [FILT_ROUT_HASH_FLUSH] = &reg_filt_rout_hash_flush,
+ [STATE_AGGR_ACTIVE] = &reg_state_aggr_active,
+ [LOCAL_PKT_PROC_CNTXT] = &reg_local_pkt_proc_cntxt,
+ [AGGR_FORCE_CLOSE] = &reg_aggr_force_close,
+ [IPA_TX_CFG] = &reg_ipa_tx_cfg,
+ [FLAVOR_0] = &reg_flavor_0,
+ [IDLE_INDICATION_CFG] = &reg_idle_indication_cfg,
+ [QTIME_TIMESTAMP_CFG] = &reg_qtime_timestamp_cfg,
+ [TIMERS_XO_CLK_DIV_CFG] = &reg_timers_xo_clk_div_cfg,
+ [TIMERS_PULSE_GRAN_CFG] = &reg_timers_pulse_gran_cfg,
+ [SRC_RSRC_GRP_01_RSRC_TYPE] = &reg_src_rsrc_grp_01_rsrc_type,
+ [SRC_RSRC_GRP_23_RSRC_TYPE] = &reg_src_rsrc_grp_23_rsrc_type,
+ [DST_RSRC_GRP_01_RSRC_TYPE] = &reg_dst_rsrc_grp_01_rsrc_type,
+ [DST_RSRC_GRP_23_RSRC_TYPE] = &reg_dst_rsrc_grp_23_rsrc_type,
+ [ENDP_INIT_CFG] = &reg_endp_init_cfg,
+ [ENDP_INIT_NAT] = &reg_endp_init_nat,
+ [ENDP_INIT_HDR] = &reg_endp_init_hdr,
+ [ENDP_INIT_HDR_EXT] = &reg_endp_init_hdr_ext,
+ [ENDP_INIT_HDR_METADATA_MASK] = &reg_endp_init_hdr_metadata_mask,
+ [ENDP_INIT_MODE] = &reg_endp_init_mode,
+ [ENDP_INIT_AGGR] = &reg_endp_init_aggr,
+ [ENDP_INIT_HOL_BLOCK_EN] = &reg_endp_init_hol_block_en,
+ [ENDP_INIT_HOL_BLOCK_TIMER] = &reg_endp_init_hol_block_timer,
+ [ENDP_INIT_DEAGGR] = &reg_endp_init_deaggr,
+ [ENDP_INIT_RSRC_GRP] = &reg_endp_init_rsrc_grp,
+ [ENDP_INIT_SEQ] = &reg_endp_init_seq,
+ [ENDP_STATUS] = &reg_endp_status,
+ [ENDP_FILTER_ROUTER_HSH_CFG] = &reg_endp_filter_router_hsh_cfg,
+ [IPA_IRQ_STTS] = &reg_ipa_irq_stts,
+ [IPA_IRQ_EN] = &reg_ipa_irq_en,
+ [IPA_IRQ_CLR] = &reg_ipa_irq_clr,
+ [IPA_IRQ_UC] = &reg_ipa_irq_uc,
+ [IRQ_SUSPEND_INFO] = &reg_irq_suspend_info,
+ [IRQ_SUSPEND_EN] = &reg_irq_suspend_en,
+ [IRQ_SUSPEND_CLR] = &reg_irq_suspend_clr,
+};
+
+const struct regs ipa_regs_v4_9 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
};
diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
index bb1c298c1e78..460b3d4f2245 100644
--- a/drivers/net/ipvlan/ipvlan_core.c
+++ b/drivers/net/ipvlan/ipvlan_core.c
@@ -157,7 +157,7 @@ void *ipvlan_get_L3_hdr(struct ipvl_port *port, struct sk_buff *skb, int *type)
return NULL;
ip4h = ip_hdr(skb);
- pktlen = ntohs(ip4h->tot_len);
+ pktlen = skb_ip_totlen(skb);
if (ip4h->ihl < 5 || ip4h->version != 4)
return NULL;
if (skb->len < pktlen || pktlen < (ip4h->ihl * 4))
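For context on the ipvlan change just above: with IPv4 BIG TCP, a GSO packet larger than 64KiB carries tot_len == 0, so the header field can no longer be trusted as the length. A sketch approximating the skb_ip_totlen() helper from include/linux/ip.h (shown here for reference, not part of this patch):

        static inline unsigned int skb_ip_totlen_sketch(struct sk_buff *skb)
        {
                /* tot_len == 0 marks an IPv4 BIG TCP packet; fall back to
                 * the real length carried by the skb in that case.
                 */
                if (ip_hdr(skb)->tot_len)
                        return ntohs(ip_hdr(skb)->tot_len);

                return skb->len - skb_network_offset(skb);
        }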
diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index becb04123d3e..25616247d7a5 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -2583,16 +2583,56 @@ static bool macsec_is_configured(struct macsec_dev *macsec)
return false;
}
+static int macsec_update_offload(struct net_device *dev, enum macsec_offload offload)
+{
+ enum macsec_offload prev_offload;
+ const struct macsec_ops *ops;
+ struct macsec_context ctx;
+ struct macsec_dev *macsec;
+ int ret = 0;
+
+ macsec = macsec_priv(dev);
+
+ /* Check if the offloading mode is supported by the underlying layers */
+ if (offload != MACSEC_OFFLOAD_OFF &&
+ !macsec_check_offload(offload, macsec))
+ return -EOPNOTSUPP;
+
+ /* Check if the net device is busy. */
+ if (netif_running(dev))
+ return -EBUSY;
+
+ /* Check if the device already has rules configured: we do not support
+ * rules migration.
+ */
+ if (macsec_is_configured(macsec))
+ return -EBUSY;
+
+ prev_offload = macsec->offload;
+
+ ops = __macsec_get_ops(offload == MACSEC_OFFLOAD_OFF ? prev_offload : offload,
+ macsec, &ctx);
+ if (!ops)
+ return -EOPNOTSUPP;
+
+ macsec->offload = offload;
+
+ ctx.secy = &macsec->secy;
+ ret = offload == MACSEC_OFFLOAD_OFF ? macsec_offload(ops->mdo_del_secy, &ctx)
+ : macsec_offload(ops->mdo_add_secy, &ctx);
+ if (ret)
+ macsec->offload = prev_offload;
+
+ return ret;
+}
+
static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
{
struct nlattr *tb_offload[MACSEC_OFFLOAD_ATTR_MAX + 1];
- enum macsec_offload offload, prev_offload;
- int (*func)(struct macsec_context *ctx);
struct nlattr **attrs = info->attrs;
- struct net_device *dev;
- const struct macsec_ops *ops;
- struct macsec_context ctx;
+ enum macsec_offload offload;
struct macsec_dev *macsec;
+ struct net_device *dev;
int ret = 0;
if (!attrs[MACSEC_ATTR_IFINDEX])
@@ -2621,55 +2661,9 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
}
offload = nla_get_u8(tb_offload[MACSEC_OFFLOAD_ATTR_TYPE]);
- if (macsec->offload == offload)
- goto out;
-
- /* Check if the offloading mode is supported by the underlying layers */
- if (offload != MACSEC_OFFLOAD_OFF &&
- !macsec_check_offload(offload, macsec)) {
- ret = -EOPNOTSUPP;
- goto out;
- }
-
- /* Check if the net device is busy. */
- if (netif_running(dev)) {
- ret = -EBUSY;
- goto out;
- }
-
- prev_offload = macsec->offload;
- macsec->offload = offload;
- /* Check if the device already has rules configured: we do not support
- * rules migration.
- */
- if (macsec_is_configured(macsec)) {
- ret = -EBUSY;
- goto rollback;
- }
-
- ops = __macsec_get_ops(offload == MACSEC_OFFLOAD_OFF ? prev_offload : offload,
- macsec, &ctx);
- if (!ops) {
- ret = -EOPNOTSUPP;
- goto rollback;
- }
-
- if (prev_offload == MACSEC_OFFLOAD_OFF)
- func = ops->mdo_add_secy;
- else
- func = ops->mdo_del_secy;
-
- ctx.secy = &macsec->secy;
- ret = macsec_offload(func, &ctx);
- if (ret)
- goto rollback;
-
- rtnl_unlock();
- return 0;
-
-rollback:
- macsec->offload = prev_offload;
+ if (macsec->offload != offload)
+ ret = macsec_update_offload(dev, offload);
out:
rtnl_unlock();
return ret;
@@ -3817,6 +3811,8 @@ static int macsec_changelink(struct net_device *dev, struct nlattr *tb[],
struct netlink_ext_ack *extack)
{
struct macsec_dev *macsec = macsec_priv(dev);
+ bool macsec_offload_state_change = false;
+ enum macsec_offload offload;
struct macsec_tx_sc tx_sc;
struct macsec_secy secy;
int ret;
@@ -3840,8 +3836,18 @@ static int macsec_changelink(struct net_device *dev, struct nlattr *tb[],
if (ret)
goto cleanup;
+ if (data[IFLA_MACSEC_OFFLOAD]) {
+ offload = nla_get_u8(data[IFLA_MACSEC_OFFLOAD]);
+ if (macsec->offload != offload) {
+ macsec_offload_state_change = true;
+ ret = macsec_update_offload(dev, offload);
+ if (ret)
+ goto cleanup;
+ }
+ }
+
/* If h/w offloading is available, propagate to the device */
- if (macsec_is_offloaded(macsec)) {
+ if (!macsec_offload_state_change && macsec_is_offloaded(macsec)) {
const struct macsec_ops *ops;
struct macsec_context ctx;
@@ -4240,16 +4246,22 @@ static size_t macsec_get_size(const struct net_device *dev)
nla_total_size(1) + /* IFLA_MACSEC_SCB */
nla_total_size(1) + /* IFLA_MACSEC_REPLAY_PROTECT */
nla_total_size(1) + /* IFLA_MACSEC_VALIDATION */
+ nla_total_size(1) + /* IFLA_MACSEC_OFFLOAD */
0;
}
static int macsec_fill_info(struct sk_buff *skb,
const struct net_device *dev)
{
- struct macsec_secy *secy = &macsec_priv(dev)->secy;
- struct macsec_tx_sc *tx_sc = &secy->tx_sc;
+ struct macsec_tx_sc *tx_sc;
+ struct macsec_dev *macsec;
+ struct macsec_secy *secy;
u64 csid;
+ macsec = macsec_priv(dev);
+ secy = &macsec->secy;
+ tx_sc = &secy->tx_sc;
+
switch (secy->key_len) {
case MACSEC_GCM_AES_128_SAK_LEN:
csid = secy->xpn ? MACSEC_CIPHER_ID_GCM_AES_XPN_128 : MACSEC_DEFAULT_CIPHER_ID;
@@ -4274,6 +4286,7 @@ static int macsec_fill_info(struct sk_buff *skb,
nla_put_u8(skb, IFLA_MACSEC_SCB, tx_sc->scb) ||
nla_put_u8(skb, IFLA_MACSEC_REPLAY_PROTECT, secy->replay_protect) ||
nla_put_u8(skb, IFLA_MACSEC_VALIDATION, secy->validate_frames) ||
+ nla_put_u8(skb, IFLA_MACSEC_OFFLOAD, macsec->offload) ||
0)
goto nla_put_failure;
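The macsec refactor above gives the offload toggle two netlink entry points: the dedicated MACSEC_CMD_UPD_OFFLOAD command and now changelink via IFLA_MACSEC_OFFLOAD, with macsec_fill_info() reporting the current state back. With a new enough iproute2 this should make something like "ip link set macsec0 type macsec offload mac" work after link creation (command shown for illustration). A minimal sketch of the commit-then-rollback pattern macsec_update_offload() follows, with hypothetical names:

        static int apply_offload_with_rollback(struct macsec_dev *macsec,
                                               enum macsec_offload offload,
                                               int (*hw_apply)(struct macsec_dev *))
        {
                enum macsec_offload prev = macsec->offload;
                int err;

                /* Commit the new state first so the mdo_* callback sees it,
                 * then restore the old state if the hardware call fails.
                 */
                macsec->offload = offload;
                err = hw_apply(macsec);
                if (err)
                        macsec->offload = prev;

                return err;
        }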
diff --git a/drivers/net/mdio/Kconfig b/drivers/net/mdio/Kconfig
index bfa16826a6e1..90309980686e 100644
--- a/drivers/net/mdio/Kconfig
+++ b/drivers/net/mdio/Kconfig
@@ -215,6 +215,17 @@ config MDIO_BUS_MUX_MESON_G12A
	  the amlogic g12a SoC. The multiplexer connects either the external
or the internal MDIO bus to the parent bus.
+config MDIO_BUS_MUX_MESON_GXL
+ tristate "Amlogic GXL based MDIO bus multiplexer"
+ depends on ARCH_MESON || COMPILE_TEST
+ depends on OF_MDIO && HAS_IOMEM && COMMON_CLK
+ select MDIO_BUS_MUX
+ default m if ARCH_MESON
+ help
+ This module provides a driver for the MDIO multiplexer/glue of
+ the amlogic GXL SoC. The multiplexer connects either the external
+ or the internal MDIO bus to the parent bus.
+
config MDIO_BUS_MUX_BCM6368
tristate "Broadcom BCM6368 MDIO bus multiplexers"
depends on OF && OF_MDIO && (BMIPS_GENERIC || COMPILE_TEST)
diff --git a/drivers/net/mdio/Makefile b/drivers/net/mdio/Makefile
index 15f8dc4042ce..7d4cb4c11e4e 100644
--- a/drivers/net/mdio/Makefile
+++ b/drivers/net/mdio/Makefile
@@ -28,5 +28,6 @@ obj-$(CONFIG_MDIO_BUS_MUX_BCM6368) += mdio-mux-bcm6368.o
obj-$(CONFIG_MDIO_BUS_MUX_BCM_IPROC) += mdio-mux-bcm-iproc.o
obj-$(CONFIG_MDIO_BUS_MUX_GPIO) += mdio-mux-gpio.o
obj-$(CONFIG_MDIO_BUS_MUX_MESON_G12A) += mdio-mux-meson-g12a.o
+obj-$(CONFIG_MDIO_BUS_MUX_MESON_GXL) += mdio-mux-meson-gxl.o
obj-$(CONFIG_MDIO_BUS_MUX_MMIOREG) += mdio-mux-mmioreg.o
obj-$(CONFIG_MDIO_BUS_MUX_MULTIPLEXER) += mdio-mux-multiplexer.o
diff --git a/drivers/net/mdio/fwnode_mdio.c b/drivers/net/mdio/fwnode_mdio.c
index b782c35c4ac1..1183ef5e203e 100644
--- a/drivers/net/mdio/fwnode_mdio.c
+++ b/drivers/net/mdio/fwnode_mdio.c
@@ -115,7 +115,7 @@ int fwnode_mdiobus_register_phy(struct mii_bus *bus,
struct mii_timestamper *mii_ts = NULL;
struct pse_control *psec = NULL;
struct phy_device *phy;
- bool is_c45 = false;
+ bool is_c45;
u32 phy_id;
int rc;
@@ -129,11 +129,7 @@ int fwnode_mdiobus_register_phy(struct mii_bus *bus,
goto clean_pse;
}
- rc = fwnode_property_match_string(child, "compatible",
- "ethernet-phy-ieee802.3-c45");
- if (rc >= 0)
- is_c45 = true;
-
+ is_c45 = fwnode_device_is_compatible(child, "ethernet-phy-ieee802.3-c45");
if (is_c45 || fwnode_get_phy_id(child, &phy_id))
phy = get_phy_device(bus, addr, is_c45);
else
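fwnode_device_is_compatible() is a small convenience wrapper; a sketch equivalent to the open-coded match it replaces (per include/linux/property.h):

        static inline bool fwnode_is_compatible_sketch(const struct fwnode_handle *fwnode,
                                                       const char *compat)
        {
                return fwnode_property_match_string(fwnode, "compatible",
                                                    compat) >= 0;
        }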
diff --git a/drivers/net/mdio/mdio-aspeed.c b/drivers/net/mdio/mdio-aspeed.c
index 944d005d2bd1..c727103c8b05 100644
--- a/drivers/net/mdio/mdio-aspeed.c
+++ b/drivers/net/mdio/mdio-aspeed.c
@@ -104,61 +104,36 @@ static int aspeed_mdio_write_c22(struct mii_bus *bus, int addr, int regnum,
addr, regnum, val);
}
-static int aspeed_mdio_read_c45(struct mii_bus *bus, int addr, int regnum)
+static int aspeed_mdio_read_c45(struct mii_bus *bus, int addr, int devad,
+ int regnum)
{
- u8 c45_dev = (regnum >> 16) & 0x1F;
- u16 c45_addr = regnum & 0xFFFF;
int rc;
rc = aspeed_mdio_op(bus, ASPEED_MDIO_CTRL_ST_C45, MDIO_C45_OP_ADDR,
- addr, c45_dev, c45_addr);
+ addr, devad, regnum);
if (rc < 0)
return rc;
rc = aspeed_mdio_op(bus, ASPEED_MDIO_CTRL_ST_C45, MDIO_C45_OP_READ,
- addr, c45_dev, 0);
+ addr, devad, 0);
if (rc < 0)
return rc;
return aspeed_mdio_get_data(bus);
}
-static int aspeed_mdio_write_c45(struct mii_bus *bus, int addr, int regnum,
- u16 val)
+static int aspeed_mdio_write_c45(struct mii_bus *bus, int addr, int devad,
+ int regnum, u16 val)
{
- u8 c45_dev = (regnum >> 16) & 0x1F;
- u16 c45_addr = regnum & 0xFFFF;
int rc;
rc = aspeed_mdio_op(bus, ASPEED_MDIO_CTRL_ST_C45, MDIO_C45_OP_ADDR,
- addr, c45_dev, c45_addr);
+ addr, devad, regnum);
if (rc < 0)
return rc;
return aspeed_mdio_op(bus, ASPEED_MDIO_CTRL_ST_C45, MDIO_C45_OP_WRITE,
- addr, c45_dev, val);
-}
-
-static int aspeed_mdio_read(struct mii_bus *bus, int addr, int regnum)
-{
- dev_dbg(&bus->dev, "%s: addr: %d, regnum: %d\n", __func__, addr,
- regnum);
-
- if (regnum & MII_ADDR_C45)
- return aspeed_mdio_read_c45(bus, addr, regnum);
-
- return aspeed_mdio_read_c22(bus, addr, regnum);
-}
-
-static int aspeed_mdio_write(struct mii_bus *bus, int addr, int regnum, u16 val)
-{
- dev_dbg(&bus->dev, "%s: addr: %d, regnum: %d, val: 0x%x\n",
- __func__, addr, regnum, val);
-
- if (regnum & MII_ADDR_C45)
- return aspeed_mdio_write_c45(bus, addr, regnum, val);
-
- return aspeed_mdio_write_c22(bus, addr, regnum, val);
+ addr, devad, val);
}
static int aspeed_mdio_probe(struct platform_device *pdev)
@@ -185,9 +160,10 @@ static int aspeed_mdio_probe(struct platform_device *pdev)
bus->name = DRV_NAME;
snprintf(bus->id, MII_BUS_ID_SIZE, "%s%d", pdev->name, pdev->id);
bus->parent = &pdev->dev;
- bus->read = aspeed_mdio_read;
- bus->write = aspeed_mdio_write;
- bus->probe_capabilities = MDIOBUS_C22_C45;
+ bus->read = aspeed_mdio_read_c22;
+ bus->write = aspeed_mdio_write_c22;
+ bus->read_c45 = aspeed_mdio_read_c45;
+ bus->write_c45 = aspeed_mdio_write_c45;
rc = of_mdiobus_register(bus, pdev->dev.of_node);
if (rc) {
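The aspeed conversion is representative of the whole series: clause-45 accesses used to arrive packed into a single regnum as MII_ADDR_C45 | (devad << 16) | reg, and every driver unpacked it by hand, exactly as the deleted aspeed_mdio_read()/write() wrappers did. A self-contained userspace sketch of that retired encoding (MII_ADDR_C45 mirrors the old kernel value; the register chosen is the OATC14 PLCA control register that appears near the end of this series):

        #include <stdint.h>
        #include <stdio.h>

        #define MII_ADDR_C45    (1U << 30)      /* old packed-regnum marker */

        int main(void)
        {
                /* MMD 31 (vendor 2), register 0xca01, as one packed word */
                uint32_t packed = MII_ADDR_C45 | (31U << 16) | 0xca01;
                uint8_t devad = (packed >> 16) & 0x1f;
                uint16_t reg = packed & 0xffff;

                /* The new read_c45/write_c45 ops take these separately. */
                printf("devad=%u reg=0x%04x\n", devad, reg);
                return 0;
        }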
diff --git a/drivers/net/mdio/mdio-bitbang.c b/drivers/net/mdio/mdio-bitbang.c
index 07609114a26b..b83932562be2 100644
--- a/drivers/net/mdio/mdio-bitbang.c
+++ b/drivers/net/mdio/mdio-bitbang.c
@@ -127,14 +127,12 @@ static void mdiobb_cmd(struct mdiobb_ctrl *ctrl, int op, u8 phy, u8 reg)
/* In clause 45 mode all commands are prefixed by MDIO_ADDR to specify the
lower 16 bits of the 21 bit address. This transfer is done identically to a
- MDIO_WRITE except for a different code. To enable clause 45 mode or
- MII_ADDR_C45 into the address. Theoretically clause 45 and normal devices
- can exist on the same bus. Normal devices should ignore the MDIO_ADDR
+ MDIO_WRITE except for a different code. Theoretically clause 45 and normal
+ devices can exist on the same bus. Normal devices should ignore the MDIO_ADDR
phase. */
-static int mdiobb_cmd_addr(struct mdiobb_ctrl *ctrl, int phy, u32 addr)
+static void mdiobb_cmd_addr(struct mdiobb_ctrl *ctrl, int phy, int dev_addr,
+ int reg)
{
- unsigned int dev_addr = (addr >> 16) & 0x1F;
- unsigned int reg = addr & 0xFFFF;
mdiobb_cmd(ctrl, MDIO_C45_ADDR, phy, dev_addr);
/* send the turnaround (10) */
@@ -145,21 +143,13 @@ static int mdiobb_cmd_addr(struct mdiobb_ctrl *ctrl, int phy, u32 addr)
ctrl->ops->set_mdio_dir(ctrl, 0);
mdiobb_get_bit(ctrl);
-
- return dev_addr;
}
-int mdiobb_read(struct mii_bus *bus, int phy, int reg)
+static int mdiobb_read_common(struct mii_bus *bus, int phy)
{
struct mdiobb_ctrl *ctrl = bus->priv;
int ret, i;
- if (reg & MII_ADDR_C45) {
- reg = mdiobb_cmd_addr(ctrl, phy, reg);
- mdiobb_cmd(ctrl, MDIO_C45_READ, phy, reg);
- } else
- mdiobb_cmd(ctrl, ctrl->op_c22_read, phy, reg);
-
ctrl->ops->set_mdio_dir(ctrl, 0);
/* check the turnaround bit: the PHY should be driving it to zero, if this
@@ -180,17 +170,31 @@ int mdiobb_read(struct mii_bus *bus, int phy, int reg)
mdiobb_get_bit(ctrl);
return ret;
}
-EXPORT_SYMBOL(mdiobb_read);
-int mdiobb_write(struct mii_bus *bus, int phy, int reg, u16 val)
+int mdiobb_read_c22(struct mii_bus *bus, int phy, int reg)
{
struct mdiobb_ctrl *ctrl = bus->priv;
- if (reg & MII_ADDR_C45) {
- reg = mdiobb_cmd_addr(ctrl, phy, reg);
- mdiobb_cmd(ctrl, MDIO_C45_WRITE, phy, reg);
- } else
- mdiobb_cmd(ctrl, ctrl->op_c22_write, phy, reg);
+ mdiobb_cmd(ctrl, ctrl->op_c22_read, phy, reg);
+
+ return mdiobb_read_common(bus, phy);
+}
+EXPORT_SYMBOL(mdiobb_read_c22);
+
+int mdiobb_read_c45(struct mii_bus *bus, int phy, int devad, int reg)
+{
+ struct mdiobb_ctrl *ctrl = bus->priv;
+
+ mdiobb_cmd_addr(ctrl, phy, devad, reg);
+ mdiobb_cmd(ctrl, MDIO_C45_READ, phy, reg);
+
+ return mdiobb_read_common(bus, phy);
+}
+EXPORT_SYMBOL(mdiobb_read_c45);
+
+static int mdiobb_write_common(struct mii_bus *bus, u16 val)
+{
+ struct mdiobb_ctrl *ctrl = bus->priv;
/* send the turnaround (10) */
mdiobb_send_bit(ctrl, 1);
@@ -202,7 +206,27 @@ int mdiobb_write(struct mii_bus *bus, int phy, int reg, u16 val)
mdiobb_get_bit(ctrl);
return 0;
}
-EXPORT_SYMBOL(mdiobb_write);
+
+int mdiobb_write_c22(struct mii_bus *bus, int phy, int reg, u16 val)
+{
+ struct mdiobb_ctrl *ctrl = bus->priv;
+
+ mdiobb_cmd(ctrl, ctrl->op_c22_write, phy, reg);
+
+ return mdiobb_write_common(bus, val);
+}
+EXPORT_SYMBOL(mdiobb_write_c22);
+
+int mdiobb_write_c45(struct mii_bus *bus, int phy, int devad, int reg, u16 val)
+{
+ struct mdiobb_ctrl *ctrl = bus->priv;
+
+ mdiobb_cmd_addr(ctrl, phy, devad, reg);
+ mdiobb_cmd(ctrl, MDIO_C45_WRITE, phy, reg);
+
+ return mdiobb_write_common(bus, val);
+}
+EXPORT_SYMBOL(mdiobb_write_c45);
struct mii_bus *alloc_mdio_bitbang(struct mdiobb_ctrl *ctrl)
{
@@ -214,8 +238,11 @@ struct mii_bus *alloc_mdio_bitbang(struct mdiobb_ctrl *ctrl)
__module_get(ctrl->ops->owner);
- bus->read = mdiobb_read;
- bus->write = mdiobb_write;
+ bus->read = mdiobb_read_c22;
+ bus->write = mdiobb_write_c22;
+ bus->read_c45 = mdiobb_read_c45;
+ bus->write_c45 = mdiobb_write_c45;
+
bus->priv = ctrl;
if (!ctrl->override_op_c22) {
ctrl->op_c22_read = MDIO_READ;
diff --git a/drivers/net/mdio/mdio-cavium.c b/drivers/net/mdio/mdio-cavium.c
index 95ce274c1be1..100e46a702ee 100644
--- a/drivers/net/mdio/mdio-cavium.c
+++ b/drivers/net/mdio/mdio-cavium.c
@@ -26,7 +26,7 @@ static void cavium_mdiobus_set_mode(struct cavium_mdiobus *p,
}
static int cavium_mdiobus_c45_addr(struct cavium_mdiobus *p,
- int phy_id, int regnum)
+ int phy_id, int devad, int regnum)
{
union cvmx_smix_cmd smi_cmd;
union cvmx_smix_wr_dat smi_wr;
@@ -38,12 +38,10 @@ static int cavium_mdiobus_c45_addr(struct cavium_mdiobus *p,
smi_wr.s.dat = regnum & 0xffff;
oct_mdio_writeq(smi_wr.u64, p->register_base + SMI_WR_DAT);
- regnum = (regnum >> 16) & 0x1f;
-
smi_cmd.u64 = 0;
smi_cmd.s.phy_op = 0; /* MDIO_CLAUSE_45_ADDRESS */
smi_cmd.s.phy_adr = phy_id;
- smi_cmd.s.reg_adr = regnum;
+ smi_cmd.s.reg_adr = devad;
oct_mdio_writeq(smi_cmd.u64, p->register_base + SMI_CMD);
do {
@@ -59,28 +57,51 @@ static int cavium_mdiobus_c45_addr(struct cavium_mdiobus *p,
return 0;
}
-int cavium_mdiobus_read(struct mii_bus *bus, int phy_id, int regnum)
+int cavium_mdiobus_read_c22(struct mii_bus *bus, int phy_id, int regnum)
{
struct cavium_mdiobus *p = bus->priv;
union cvmx_smix_cmd smi_cmd;
union cvmx_smix_rd_dat smi_rd;
- unsigned int op = 1; /* MDIO_CLAUSE_22_READ */
int timeout = 1000;
- if (regnum & MII_ADDR_C45) {
- int r = cavium_mdiobus_c45_addr(p, phy_id, regnum);
+ cavium_mdiobus_set_mode(p, C22);
+
+ smi_cmd.u64 = 0;
+ smi_cmd.s.phy_op = 1; /* MDIO_CLAUSE_22_READ */
+ smi_cmd.s.phy_adr = phy_id;
+ smi_cmd.s.reg_adr = regnum;
+ oct_mdio_writeq(smi_cmd.u64, p->register_base + SMI_CMD);
+
+ do {
+ /* Wait 1000 clocks so we don't saturate the RSL bus
+ * doing reads.
+ */
+ __delay(1000);
+ smi_rd.u64 = oct_mdio_readq(p->register_base + SMI_RD_DAT);
+ } while (smi_rd.s.pending && --timeout);
+
+ if (smi_rd.s.val)
+ return smi_rd.s.dat;
+ else
+ return -EIO;
+}
+EXPORT_SYMBOL(cavium_mdiobus_read_c22);
- if (r < 0)
- return r;
+int cavium_mdiobus_read_c45(struct mii_bus *bus, int phy_id, int devad,
+ int regnum)
+{
+ struct cavium_mdiobus *p = bus->priv;
+ union cvmx_smix_cmd smi_cmd;
+ union cvmx_smix_rd_dat smi_rd;
+ int timeout = 1000;
+ int r;
- regnum = (regnum >> 16) & 0x1f;
- op = 3; /* MDIO_CLAUSE_45_READ */
- } else {
- cavium_mdiobus_set_mode(p, C22);
- }
+ r = cavium_mdiobus_c45_addr(p, phy_id, devad, regnum);
+ if (r < 0)
+ return r;
smi_cmd.u64 = 0;
- smi_cmd.s.phy_op = op;
+ smi_cmd.s.phy_op = 3; /* MDIO_CLAUSE_45_READ */
smi_cmd.s.phy_adr = phy_id;
smi_cmd.s.reg_adr = regnum;
oct_mdio_writeq(smi_cmd.u64, p->register_base + SMI_CMD);
@@ -98,36 +119,64 @@ int cavium_mdiobus_read(struct mii_bus *bus, int phy_id, int regnum)
else
return -EIO;
}
-EXPORT_SYMBOL(cavium_mdiobus_read);
+EXPORT_SYMBOL(cavium_mdiobus_read_c45);
-int cavium_mdiobus_write(struct mii_bus *bus, int phy_id, int regnum, u16 val)
+int cavium_mdiobus_write_c22(struct mii_bus *bus, int phy_id, int regnum,
+ u16 val)
{
struct cavium_mdiobus *p = bus->priv;
union cvmx_smix_cmd smi_cmd;
union cvmx_smix_wr_dat smi_wr;
- unsigned int op = 0; /* MDIO_CLAUSE_22_WRITE */
int timeout = 1000;
- if (regnum & MII_ADDR_C45) {
- int r = cavium_mdiobus_c45_addr(p, phy_id, regnum);
+ cavium_mdiobus_set_mode(p, C22);
- if (r < 0)
- return r;
+ smi_wr.u64 = 0;
+ smi_wr.s.dat = val;
+ oct_mdio_writeq(smi_wr.u64, p->register_base + SMI_WR_DAT);
- regnum = (regnum >> 16) & 0x1f;
- op = 1; /* MDIO_CLAUSE_45_WRITE */
- } else {
- cavium_mdiobus_set_mode(p, C22);
- }
+ smi_cmd.u64 = 0;
+ smi_cmd.s.phy_op = 0; /* MDIO_CLAUSE_22_WRITE */
+ smi_cmd.s.phy_adr = phy_id;
+ smi_cmd.s.reg_adr = regnum;
+ oct_mdio_writeq(smi_cmd.u64, p->register_base + SMI_CMD);
+
+ do {
+ /* Wait 1000 clocks so we don't saturate the RSL bus
+ * doing reads.
+ */
+ __delay(1000);
+ smi_wr.u64 = oct_mdio_readq(p->register_base + SMI_WR_DAT);
+ } while (smi_wr.s.pending && --timeout);
+
+ if (timeout <= 0)
+ return -EIO;
+
+ return 0;
+}
+EXPORT_SYMBOL(cavium_mdiobus_write_c22);
+
+int cavium_mdiobus_write_c45(struct mii_bus *bus, int phy_id, int devad,
+ int regnum, u16 val)
+{
+ struct cavium_mdiobus *p = bus->priv;
+ union cvmx_smix_cmd smi_cmd;
+ union cvmx_smix_wr_dat smi_wr;
+ int timeout = 1000;
+ int r;
+
+ r = cavium_mdiobus_c45_addr(p, phy_id, devad, regnum);
+ if (r < 0)
+ return r;
smi_wr.u64 = 0;
smi_wr.s.dat = val;
oct_mdio_writeq(smi_wr.u64, p->register_base + SMI_WR_DAT);
smi_cmd.u64 = 0;
- smi_cmd.s.phy_op = op;
+ smi_cmd.s.phy_op = 1; /* MDIO_CLAUSE_45_WRITE */
smi_cmd.s.phy_adr = phy_id;
- smi_cmd.s.reg_adr = regnum;
+ smi_cmd.s.reg_adr = devad;
oct_mdio_writeq(smi_cmd.u64, p->register_base + SMI_CMD);
do {
@@ -143,7 +192,7 @@ int cavium_mdiobus_write(struct mii_bus *bus, int phy_id, int regnum, u16 val)
return 0;
}
-EXPORT_SYMBOL(cavium_mdiobus_write);
+EXPORT_SYMBOL(cavium_mdiobus_write_c45);
MODULE_DESCRIPTION("Common code for OCTEON and Thunder MDIO bus drivers");
MODULE_AUTHOR("David Daney");
diff --git a/drivers/net/mdio/mdio-cavium.h b/drivers/net/mdio/mdio-cavium.h
index a2245d436f5d..71b8e20cd664 100644
--- a/drivers/net/mdio/mdio-cavium.h
+++ b/drivers/net/mdio/mdio-cavium.h
@@ -114,5 +114,10 @@ static inline u64 oct_mdio_readq(void __iomem *addr)
#define oct_mdio_readq(addr) readq(addr)
#endif
-int cavium_mdiobus_read(struct mii_bus *bus, int phy_id, int regnum);
-int cavium_mdiobus_write(struct mii_bus *bus, int phy_id, int regnum, u16 val);
+int cavium_mdiobus_read_c22(struct mii_bus *bus, int phy_id, int regnum);
+int cavium_mdiobus_write_c22(struct mii_bus *bus, int phy_id, int regnum,
+ u16 val);
+int cavium_mdiobus_read_c45(struct mii_bus *bus, int phy_id, int devad,
+ int regnum);
+int cavium_mdiobus_write_c45(struct mii_bus *bus, int phy_id, int devad,
+ int regnum, u16 val);
diff --git a/drivers/net/mdio/mdio-i2c.c b/drivers/net/mdio/mdio-i2c.c
index bf8bf5e20faf..1e0c206d0f2e 100644
--- a/drivers/net/mdio/mdio-i2c.c
+++ b/drivers/net/mdio/mdio-i2c.c
@@ -30,7 +30,8 @@ static unsigned int i2c_mii_phy_addr(int phy_id)
return phy_id + 0x40;
}
-static int i2c_mii_read_default(struct mii_bus *bus, int phy_id, int reg)
+static int i2c_mii_read_default_c45(struct mii_bus *bus, int phy_id, int devad,
+ int reg)
{
struct i2c_adapter *i2c = bus->priv;
struct i2c_msg msgs[2];
@@ -41,8 +42,8 @@ static int i2c_mii_read_default(struct mii_bus *bus, int phy_id, int reg)
return 0xffff;
p = addr;
- if (reg & MII_ADDR_C45) {
- *p++ = 0x20 | ((reg >> 16) & 31);
+ if (devad >= 0) {
+ *p++ = 0x20 | devad;
*p++ = reg >> 8;
}
*p++ = reg;
@@ -64,8 +65,8 @@ static int i2c_mii_read_default(struct mii_bus *bus, int phy_id, int reg)
return data[0] << 8 | data[1];
}
-static int i2c_mii_write_default(struct mii_bus *bus, int phy_id, int reg,
- u16 val)
+static int i2c_mii_write_default_c45(struct mii_bus *bus, int phy_id,
+ int devad, int reg, u16 val)
{
struct i2c_adapter *i2c = bus->priv;
struct i2c_msg msg;
@@ -76,8 +77,8 @@ static int i2c_mii_write_default(struct mii_bus *bus, int phy_id, int reg,
return 0;
p = data;
- if (reg & MII_ADDR_C45) {
- *p++ = (reg >> 16) & 31;
+ if (devad >= 0) {
+ *p++ = devad;
*p++ = reg >> 8;
}
*p++ = reg;
@@ -94,6 +95,17 @@ static int i2c_mii_write_default(struct mii_bus *bus, int phy_id, int reg,
return ret < 0 ? ret : 0;
}
+static int i2c_mii_read_default_c22(struct mii_bus *bus, int phy_id, int reg)
+{
+ return i2c_mii_read_default_c45(bus, phy_id, -1, reg);
+}
+
+static int i2c_mii_write_default_c22(struct mii_bus *bus, int phy_id, int reg,
+ u16 val)
+{
+ return i2c_mii_write_default_c45(bus, phy_id, -1, reg, val);
+}
+
/* RollBall SFPs do not access internal PHY via I2C address 0x56, but
* instead via address 0x51, when SFP page is set to 0x03 and password to
* 0xffffffff.
@@ -285,9 +297,6 @@ static int i2c_mii_read_rollball(struct mii_bus *bus, int phy_id, int reg)
int bus_addr, ret;
u16 val;
- if (!(reg & MII_ADDR_C45))
- return -EOPNOTSUPP;
-
bus_addr = i2c_mii_phy_addr(phy_id);
if (bus_addr != ROLLBALL_PHY_I2C_ADDR)
return 0xffff;
@@ -319,9 +328,6 @@ static int i2c_mii_write_rollball(struct mii_bus *bus, int phy_id, int reg,
int bus_addr, ret;
u8 buf[6];
- if (!(reg & MII_ADDR_C45))
- return -EOPNOTSUPP;
-
bus_addr = i2c_mii_phy_addr(phy_id);
if (bus_addr != ROLLBALL_PHY_I2C_ADDR)
return 0;
@@ -403,8 +409,10 @@ struct mii_bus *mdio_i2c_alloc(struct device *parent, struct i2c_adapter *i2c,
mii->write = i2c_mii_write_rollball;
break;
default:
- mii->read = i2c_mii_read_default;
- mii->write = i2c_mii_write_default;
+ mii->read = i2c_mii_read_default_c22;
+ mii->write = i2c_mii_write_default_c22;
+ mii->read_c45 = i2c_mii_read_default_c45;
+ mii->write_c45 = i2c_mii_write_default_c45;
break;
}
diff --git a/drivers/net/mdio/mdio-ipq4019.c b/drivers/net/mdio/mdio-ipq4019.c
index 4eba5a91075c..78b93de636f5 100644
--- a/drivers/net/mdio/mdio-ipq4019.c
+++ b/drivers/net/mdio/mdio-ipq4019.c
@@ -53,7 +53,8 @@ static int ipq4019_mdio_wait_busy(struct mii_bus *bus)
IPQ4019_MDIO_SLEEP, IPQ4019_MDIO_TIMEOUT);
}
-static int ipq4019_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
+static int ipq4019_mdio_read_c45(struct mii_bus *bus, int mii_id, int mmd,
+ int reg)
{
struct ipq4019_mdio_data *priv = bus->priv;
unsigned int data;
@@ -62,61 +63,71 @@ static int ipq4019_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
if (ipq4019_mdio_wait_busy(bus))
return -ETIMEDOUT;
- /* Clause 45 support */
- if (regnum & MII_ADDR_C45) {
- unsigned int mmd = (regnum >> 16) & 0x1F;
- unsigned int reg = regnum & 0xFFFF;
+ data = readl(priv->membase + MDIO_MODE_REG);
- /* Enter Clause 45 mode */
- data = readl(priv->membase + MDIO_MODE_REG);
+ data |= MDIO_MODE_C45;
- data |= MDIO_MODE_C45;
+ writel(data, priv->membase + MDIO_MODE_REG);
- writel(data, priv->membase + MDIO_MODE_REG);
+ /* issue the phy address and mmd */
+ writel((mii_id << 8) | mmd, priv->membase + MDIO_ADDR_REG);
- /* issue the phy address and mmd */
- writel((mii_id << 8) | mmd, priv->membase + MDIO_ADDR_REG);
+ /* issue reg */
+ writel(reg, priv->membase + MDIO_DATA_WRITE_REG);
- /* issue reg */
- writel(reg, priv->membase + MDIO_DATA_WRITE_REG);
+ cmd = MDIO_CMD_ACCESS_START | MDIO_CMD_ACCESS_CODE_C45_ADDR;
- cmd = MDIO_CMD_ACCESS_START | MDIO_CMD_ACCESS_CODE_C45_ADDR;
- } else {
- /* Enter Clause 22 mode */
- data = readl(priv->membase + MDIO_MODE_REG);
+ /* issue read command */
+ writel(cmd, priv->membase + MDIO_CMD_REG);
- data &= ~MDIO_MODE_C45;
+ /* Wait read complete */
+ if (ipq4019_mdio_wait_busy(bus))
+ return -ETIMEDOUT;
- writel(data, priv->membase + MDIO_MODE_REG);
+ cmd = MDIO_CMD_ACCESS_START | MDIO_CMD_ACCESS_CODE_C45_READ;
- /* issue the phy address and reg */
- writel((mii_id << 8) | regnum, priv->membase + MDIO_ADDR_REG);
+ writel(cmd, priv->membase + MDIO_CMD_REG);
- cmd = MDIO_CMD_ACCESS_START | MDIO_CMD_ACCESS_CODE_READ;
- }
+ if (ipq4019_mdio_wait_busy(bus))
+ return -ETIMEDOUT;
- /* issue read command */
- writel(cmd, priv->membase + MDIO_CMD_REG);
+ /* Read and return data */
+ return readl(priv->membase + MDIO_DATA_READ_REG);
+}
+
+static int ipq4019_mdio_read_c22(struct mii_bus *bus, int mii_id, int regnum)
+{
+ struct ipq4019_mdio_data *priv = bus->priv;
+ unsigned int data;
+ unsigned int cmd;
- /* Wait read complete */
if (ipq4019_mdio_wait_busy(bus))
return -ETIMEDOUT;
- if (regnum & MII_ADDR_C45) {
- cmd = MDIO_CMD_ACCESS_START | MDIO_CMD_ACCESS_CODE_C45_READ;
+ data = readl(priv->membase + MDIO_MODE_REG);
- writel(cmd, priv->membase + MDIO_CMD_REG);
+ data &= ~MDIO_MODE_C45;
- if (ipq4019_mdio_wait_busy(bus))
- return -ETIMEDOUT;
- }
+ writel(data, priv->membase + MDIO_MODE_REG);
+
+ /* issue the phy address and reg */
+ writel((mii_id << 8) | regnum, priv->membase + MDIO_ADDR_REG);
+
+ cmd = MDIO_CMD_ACCESS_START | MDIO_CMD_ACCESS_CODE_READ;
+
+ /* issue read command */
+ writel(cmd, priv->membase + MDIO_CMD_REG);
+
+ /* Wait read complete */
+ if (ipq4019_mdio_wait_busy(bus))
+ return -ETIMEDOUT;
/* Read and return data */
return readl(priv->membase + MDIO_DATA_READ_REG);
}
-static int ipq4019_mdio_write(struct mii_bus *bus, int mii_id, int regnum,
- u16 value)
+static int ipq4019_mdio_write_c45(struct mii_bus *bus, int mii_id, int mmd,
+ int reg, u16 value)
{
struct ipq4019_mdio_data *priv = bus->priv;
unsigned int data;
@@ -125,50 +136,63 @@ static int ipq4019_mdio_write(struct mii_bus *bus, int mii_id, int regnum,
if (ipq4019_mdio_wait_busy(bus))
return -ETIMEDOUT;
- /* Clause 45 support */
- if (regnum & MII_ADDR_C45) {
- unsigned int mmd = (regnum >> 16) & 0x1F;
- unsigned int reg = regnum & 0xFFFF;
+ data = readl(priv->membase + MDIO_MODE_REG);
- /* Enter Clause 45 mode */
- data = readl(priv->membase + MDIO_MODE_REG);
+ data |= MDIO_MODE_C45;
- data |= MDIO_MODE_C45;
+ writel(data, priv->membase + MDIO_MODE_REG);
- writel(data, priv->membase + MDIO_MODE_REG);
+ /* issue the phy address and mmd */
+ writel((mii_id << 8) | mmd, priv->membase + MDIO_ADDR_REG);
- /* issue the phy address and mmd */
- writel((mii_id << 8) | mmd, priv->membase + MDIO_ADDR_REG);
+ /* issue reg */
+ writel(reg, priv->membase + MDIO_DATA_WRITE_REG);
- /* issue reg */
- writel(reg, priv->membase + MDIO_DATA_WRITE_REG);
+ cmd = MDIO_CMD_ACCESS_START | MDIO_CMD_ACCESS_CODE_C45_ADDR;
- cmd = MDIO_CMD_ACCESS_START | MDIO_CMD_ACCESS_CODE_C45_ADDR;
+ writel(cmd, priv->membase + MDIO_CMD_REG);
- writel(cmd, priv->membase + MDIO_CMD_REG);
+ if (ipq4019_mdio_wait_busy(bus))
+ return -ETIMEDOUT;
- if (ipq4019_mdio_wait_busy(bus))
- return -ETIMEDOUT;
- } else {
- /* Enter Clause 22 mode */
- data = readl(priv->membase + MDIO_MODE_REG);
+ /* issue write data */
+ writel(value, priv->membase + MDIO_DATA_WRITE_REG);
- data &= ~MDIO_MODE_C45;
+ cmd = MDIO_CMD_ACCESS_START | MDIO_CMD_ACCESS_CODE_C45_WRITE;
+ writel(cmd, priv->membase + MDIO_CMD_REG);
- writel(data, priv->membase + MDIO_MODE_REG);
+ /* Wait write complete */
+ if (ipq4019_mdio_wait_busy(bus))
+ return -ETIMEDOUT;
- /* issue the phy address and reg */
- writel((mii_id << 8) | regnum, priv->membase + MDIO_ADDR_REG);
- }
+ return 0;
+}
+
+static int ipq4019_mdio_write_c22(struct mii_bus *bus, int mii_id, int regnum,
+ u16 value)
+{
+ struct ipq4019_mdio_data *priv = bus->priv;
+ unsigned int data;
+ unsigned int cmd;
+
+ if (ipq4019_mdio_wait_busy(bus))
+ return -ETIMEDOUT;
+
+ /* Enter Clause 22 mode */
+ data = readl(priv->membase + MDIO_MODE_REG);
+
+ data &= ~MDIO_MODE_C45;
+
+ writel(data, priv->membase + MDIO_MODE_REG);
+
+ /* issue the phy address and reg */
+ writel((mii_id << 8) | regnum, priv->membase + MDIO_ADDR_REG);
/* issue write data */
writel(value, priv->membase + MDIO_DATA_WRITE_REG);
/* issue write command */
- if (regnum & MII_ADDR_C45)
- cmd = MDIO_CMD_ACCESS_START | MDIO_CMD_ACCESS_CODE_C45_WRITE;
- else
- cmd = MDIO_CMD_ACCESS_START | MDIO_CMD_ACCESS_CODE_WRITE;
+ cmd = MDIO_CMD_ACCESS_START | MDIO_CMD_ACCESS_CODE_WRITE;
writel(cmd, priv->membase + MDIO_CMD_REG);
@@ -235,8 +259,10 @@ static int ipq4019_mdio_probe(struct platform_device *pdev)
priv->eth_ldo_rdy = devm_ioremap_resource(&pdev->dev, res);
bus->name = "ipq4019_mdio";
- bus->read = ipq4019_mdio_read;
- bus->write = ipq4019_mdio_write;
+ bus->read = ipq4019_mdio_read_c22;
+ bus->write = ipq4019_mdio_write_c22;
+ bus->read_c45 = ipq4019_mdio_read_c45;
+ bus->write_c45 = ipq4019_mdio_write_c45;
bus->reset = ipq_mdio_reset;
bus->parent = &pdev->dev;
snprintf(bus->id, MII_BUS_ID_SIZE, "%s%d", pdev->name, pdev->id);
diff --git a/drivers/net/mdio/mdio-ipq8064.c b/drivers/net/mdio/mdio-ipq8064.c
index 37e0d8b6da07..fd9716960106 100644
--- a/drivers/net/mdio/mdio-ipq8064.c
+++ b/drivers/net/mdio/mdio-ipq8064.c
@@ -57,10 +57,6 @@ ipq8064_mdio_read(struct mii_bus *bus, int phy_addr, int reg_offset)
u32 ret_val;
int err;
- /* Reject clause 45 */
- if (reg_offset & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
miiaddr |= ((phy_addr << MII_ADDR_SHIFT) & MII_ADDR_MASK) |
((reg_offset << MII_REG_SHIFT) & MII_REG_MASK);
@@ -81,10 +77,6 @@ ipq8064_mdio_write(struct mii_bus *bus, int phy_addr, int reg_offset, u16 data)
u32 miiaddr = MII_WRITE | MII_BUSY | MII_CLKRANGE_250_300M;
struct ipq8064_mdio *priv = bus->priv;
- /* Reject clause 45 */
- if (reg_offset & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
regmap_write(priv->base, MII_DATA_REG_ADDR, data);
miiaddr |= ((phy_addr << MII_ADDR_SHIFT) & MII_ADDR_MASK) |
diff --git a/drivers/net/mdio/mdio-mscc-miim.c b/drivers/net/mdio/mdio-mscc-miim.c
index 51f68daac152..c87e991d1a17 100644
--- a/drivers/net/mdio/mdio-mscc-miim.c
+++ b/drivers/net/mdio/mdio-mscc-miim.c
@@ -108,9 +108,6 @@ static int mscc_miim_read(struct mii_bus *bus, int mii_id, int regnum)
u32 val;
int ret;
- if (regnum & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
ret = mscc_miim_wait_pending(bus);
if (ret)
goto out;
@@ -154,9 +151,6 @@ static int mscc_miim_write(struct mii_bus *bus, int mii_id,
struct mscc_miim_dev *miim = bus->priv;
int ret;
- if (regnum & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
ret = mscc_miim_wait_pending(bus);
if (ret < 0)
goto out;
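The ipq8064 and mscc-miim hunks above can drop their defensive MII_ADDR_C45 checks because those are now dead code: the core dispatches clause-45 accesses through the dedicated ops and fails them itself when a bus provides none. Roughly (a sketch of the core behavior, not the exact mdio_bus.c source):

        static int mdiobus_c45_read_sketch(struct mii_bus *bus, int addr,
                                           int devad, u32 regnum)
        {
                if (!bus->read_c45)
                        return -EOPNOTSUPP;

                return bus->read_c45(bus, addr, devad, regnum);
        }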
diff --git a/drivers/net/mdio/mdio-mux-bcm-iproc.c b/drivers/net/mdio/mdio-mux-bcm-iproc.c
index 014c0baedbd2..956d54846b62 100644
--- a/drivers/net/mdio/mdio-mux-bcm-iproc.c
+++ b/drivers/net/mdio/mdio-mux-bcm-iproc.c
@@ -98,7 +98,7 @@ static int iproc_mdio_wait_for_idle(void __iomem *base, bool result)
 * Return value: a successful read returns the register value and a successful
 * write returns 0; a failed operation returns a negative error code.
*/
-static int start_miim_ops(void __iomem *base,
+static int start_miim_ops(void __iomem *base, bool c45,
u16 phyid, u32 reg, u16 val, u32 op)
{
u32 param;
@@ -112,7 +112,7 @@ static int start_miim_ops(void __iomem *base,
param = readl(base + MDIO_PARAM_OFFSET);
param |= phyid << MDIO_PARAM_PHY_ID;
param |= val << MDIO_PARAM_PHY_DATA;
- if (reg & MII_ADDR_C45)
+ if (c45)
param |= BIT(MDIO_PARAM_C45_SEL);
writel(param, base + MDIO_PARAM_OFFSET);
@@ -131,28 +131,58 @@ err:
return ret;
}
-static int iproc_mdiomux_read(struct mii_bus *bus, int phyid, int reg)
+static int iproc_mdiomux_read_c22(struct mii_bus *bus, int phyid, int reg)
{
struct iproc_mdiomux_desc *md = bus->priv;
int ret;
- ret = start_miim_ops(md->base, phyid, reg, 0, MDIO_CTRL_READ_OP);
+ ret = start_miim_ops(md->base, false, phyid, reg, 0, MDIO_CTRL_READ_OP);
if (ret < 0)
- dev_err(&bus->dev, "mdiomux read operation failed!!!");
+ dev_err(&bus->dev, "mdiomux c22 read operation failed!!!");
return ret;
}
-static int iproc_mdiomux_write(struct mii_bus *bus,
- int phyid, int reg, u16 val)
+static int iproc_mdiomux_read_c45(struct mii_bus *bus, int phyid, int devad,
+ int reg)
+{
+ struct iproc_mdiomux_desc *md = bus->priv;
+ int ret;
+
+ ret = start_miim_ops(md->base, true, phyid, reg | devad << 16, 0,
+ MDIO_CTRL_READ_OP);
+ if (ret < 0)
+ dev_err(&bus->dev, "mdiomux read c45 operation failed!!!");
+
+ return ret;
+}
+
+static int iproc_mdiomux_write_c22(struct mii_bus *bus,
+ int phyid, int reg, u16 val)
+{
+ struct iproc_mdiomux_desc *md = bus->priv;
+ int ret;
+
+ /* Write val at reg offset */
+ ret = start_miim_ops(md->base, false, phyid, reg, val,
+ MDIO_CTRL_WRITE_OP);
+ if (ret < 0)
+ dev_err(&bus->dev, "mdiomux write c22 operation failed!!!");
+
+ return ret;
+}
+
+static int iproc_mdiomux_write_c45(struct mii_bus *bus,
+ int phyid, int devad, int reg, u16 val)
{
struct iproc_mdiomux_desc *md = bus->priv;
int ret;
/* Write val at reg offset */
- ret = start_miim_ops(md->base, phyid, reg, val, MDIO_CTRL_WRITE_OP);
+ ret = start_miim_ops(md->base, true, phyid, reg | devad << 16, val,
+ MDIO_CTRL_WRITE_OP);
if (ret < 0)
- dev_err(&bus->dev, "mdiomux write operation failed!!!");
+ dev_err(&bus->dev, "mdiomux write c45 operation failed!!!");
return ret;
}
@@ -223,8 +253,10 @@ static int mdio_mux_iproc_probe(struct platform_device *pdev)
bus->name = "iProc MDIO mux bus";
snprintf(bus->id, MII_BUS_ID_SIZE, "%s-%d", pdev->name, pdev->id);
bus->parent = &pdev->dev;
- bus->read = iproc_mdiomux_read;
- bus->write = iproc_mdiomux_write;
+ bus->read = iproc_mdiomux_read_c22;
+ bus->write = iproc_mdiomux_write_c22;
+ bus->read_c45 = iproc_mdiomux_read_c45;
+ bus->write_c45 = iproc_mdiomux_write_c45;
bus->phy_mask = ~0;
bus->dev.of_node = pdev->dev.of_node;
diff --git a/drivers/net/mdio/mdio-mux-meson-g12a.c b/drivers/net/mdio/mdio-mux-meson-g12a.c
index c4542ecf5623..910e5cf74e89 100644
--- a/drivers/net/mdio/mdio-mux-meson-g12a.c
+++ b/drivers/net/mdio/mdio-mux-meson-g12a.c
@@ -53,10 +53,8 @@
#define MESON_G12A_MDIO_INTERNAL_ID 1
struct g12a_mdio_mux {
- bool pll_is_enabled;
void __iomem *regs;
void *mux_handle;
- struct clk *pclk;
struct clk *pll;
};
@@ -155,14 +153,12 @@ static int g12a_enable_internal_mdio(struct g12a_mdio_mux *priv)
int ret;
/* Enable the phy clock */
- if (!priv->pll_is_enabled) {
+ if (!__clk_is_enabled(priv->pll)) {
ret = clk_prepare_enable(priv->pll);
if (ret)
return ret;
}
- priv->pll_is_enabled = true;
-
/* Initialize ephy control */
writel(EPHY_G12A_ID, priv->regs + ETH_PHY_CNTL0);
@@ -193,10 +189,8 @@ static int g12a_enable_external_mdio(struct g12a_mdio_mux *priv)
writel_relaxed(0x0, priv->regs + ETH_PHY_CNTL2);
/* Disable the phy clock if enabled */
- if (priv->pll_is_enabled) {
+ if (__clk_is_enabled(priv->pll))
clk_disable_unprepare(priv->pll);
- priv->pll_is_enabled = false;
- }
return 0;
}
@@ -311,6 +305,7 @@ static int g12a_mdio_mux_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct g12a_mdio_mux *priv;
+ struct clk *pclk;
int ret;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
@@ -323,34 +318,21 @@ static int g12a_mdio_mux_probe(struct platform_device *pdev)
if (IS_ERR(priv->regs))
return PTR_ERR(priv->regs);
- priv->pclk = devm_clk_get(dev, "pclk");
- if (IS_ERR(priv->pclk))
- return dev_err_probe(dev, PTR_ERR(priv->pclk),
+ pclk = devm_clk_get_enabled(dev, "pclk");
+ if (IS_ERR(pclk))
+ return dev_err_probe(dev, PTR_ERR(pclk),
"failed to get peripheral clock\n");
- /* Make sure the device registers are clocked */
- ret = clk_prepare_enable(priv->pclk);
- if (ret) {
- dev_err(dev, "failed to enable peripheral clock");
- return ret;
- }
-
/* Register PLL in CCF */
ret = g12a_ephy_glue_clk_register(dev);
if (ret)
- goto err;
+ return ret;
ret = mdio_mux_init(dev, dev->of_node, g12a_mdio_switch_fn,
&priv->mux_handle, dev, NULL);
- if (ret) {
+ if (ret)
dev_err_probe(dev, ret, "mdio multiplexer init failed\n");
- goto err;
- }
- return 0;
-
-err:
- clk_disable_unprepare(priv->pclk);
return ret;
}
@@ -360,11 +342,9 @@ static int g12a_mdio_mux_remove(struct platform_device *pdev)
mdio_mux_uninit(priv->mux_handle);
- if (priv->pll_is_enabled)
+ if (__clk_is_enabled(priv->pll))
clk_disable_unprepare(priv->pll);
- clk_disable_unprepare(priv->pclk);
-
return 0;
}
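devm_clk_get_enabled() folds clock lookup, prepare_enable and an automatic disable on driver detach into one call, which is what lets the g12a probe drop its error unwinding and the remove path drop the pclk disable. A simplified before/after sketch:

        /* before: manual enable plus a matching clk_disable_unprepare()
         * in every error path and in remove
         */
        clk = devm_clk_get(dev, "pclk");
        if (IS_ERR(clk))
                return PTR_ERR(clk);
        ret = clk_prepare_enable(clk);
        if (ret)
                return ret;

        /* after: enabled for the device's lifetime, disabled automatically */
        clk = devm_clk_get_enabled(dev, "pclk");
        if (IS_ERR(clk))
                return PTR_ERR(clk);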
diff --git a/drivers/net/mdio/mdio-mux-meson-gxl.c b/drivers/net/mdio/mdio-mux-meson-gxl.c
new file mode 100644
index 000000000000..76188575ca1f
--- /dev/null
+++ b/drivers/net/mdio/mdio-mux-meson-gxl.c
@@ -0,0 +1,164 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Baylibre, SAS.
+ * Author: Jerome Brunet <jbrunet@baylibre.com>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/delay.h>
+#include <linux/clk.h>
+#include <linux/io.h>
+#include <linux/mdio-mux.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+
+#define ETH_REG2 0x0
+#define REG2_PHYID GENMASK(21, 0)
+#define EPHY_GXL_ID 0x110181
+#define REG2_LEDACT GENMASK(23, 22)
+#define REG2_LEDLINK GENMASK(25, 24)
+#define REG2_DIV4SEL BIT(27)
+#define REG2_ADCBYPASS BIT(30)
+#define REG2_CLKINSEL BIT(31)
+#define ETH_REG3 0x4
+#define REG3_ENH BIT(3)
+#define REG3_CFGMODE GENMASK(6, 4)
+#define REG3_AUTOMDIX BIT(7)
+#define REG3_PHYADDR GENMASK(12, 8)
+#define REG3_PWRUPRST BIT(21)
+#define REG3_PWRDOWN BIT(22)
+#define REG3_LEDPOL BIT(23)
+#define REG3_PHYMDI BIT(26)
+#define REG3_CLKINEN BIT(29)
+#define REG3_PHYIP BIT(30)
+#define REG3_PHYEN BIT(31)
+#define ETH_REG4 0x8
+#define REG4_PWRUPRSTSIG BIT(0)
+
+#define MESON_GXL_MDIO_EXTERNAL_ID 0
+#define MESON_GXL_MDIO_INTERNAL_ID 1
+
+struct gxl_mdio_mux {
+ void __iomem *regs;
+ void *mux_handle;
+};
+
+static void gxl_enable_internal_mdio(struct gxl_mdio_mux *priv)
+{
+ u32 val;
+
+ /* Setup the internal phy */
+ val = (REG3_ENH |
+ FIELD_PREP(REG3_CFGMODE, 0x7) |
+ REG3_AUTOMDIX |
+ FIELD_PREP(REG3_PHYADDR, 8) |
+ REG3_LEDPOL |
+ REG3_PHYMDI |
+ REG3_CLKINEN |
+ REG3_PHYIP);
+
+ writel(REG4_PWRUPRSTSIG, priv->regs + ETH_REG4);
+ writel(val, priv->regs + ETH_REG3);
+ mdelay(10);
+
+ /* NOTE: The HW kept the phy id configurable at runtime.
+ * The id below is arbitrary. It is the one used in the vendor code.
+ * The only constraint is that it must match the one in
+ * drivers/net/phy/meson-gxl.c to properly match the PHY.
+ */
+ writel(FIELD_PREP(REG2_PHYID, EPHY_GXL_ID),
+ priv->regs + ETH_REG2);
+
+ /* Enable the internal phy */
+ val |= REG3_PHYEN;
+ writel(val, priv->regs + ETH_REG3);
+ writel(0, priv->regs + ETH_REG4);
+
+ /* The phy needs a bit of time to power up */
+ mdelay(10);
+}
+
+static void gxl_enable_external_mdio(struct gxl_mdio_mux *priv)
+{
+ /* Reset the mdio bus mux to the external phy */
+ writel(0, priv->regs + ETH_REG3);
+}
+
+static int gxl_mdio_switch_fn(int current_child, int desired_child,
+ void *data)
+{
+ struct gxl_mdio_mux *priv = dev_get_drvdata(data);
+
+ if (current_child == desired_child)
+ return 0;
+
+ switch (desired_child) {
+ case MESON_GXL_MDIO_EXTERNAL_ID:
+ gxl_enable_external_mdio(priv);
+ break;
+ case MESON_GXL_MDIO_INTERNAL_ID:
+ gxl_enable_internal_mdio(priv);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static const struct of_device_id gxl_mdio_mux_match[] = {
+ { .compatible = "amlogic,gxl-mdio-mux", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, gxl_mdio_mux_match);
+
+static int gxl_mdio_mux_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct gxl_mdio_mux *priv;
+ struct clk *rclk;
+ int ret;
+
+ priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+ platform_set_drvdata(pdev, priv);
+
+ priv->regs = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(priv->regs))
+ return PTR_ERR(priv->regs);
+
+ rclk = devm_clk_get_enabled(dev, "ref");
+ if (IS_ERR(rclk))
+ return dev_err_probe(dev, PTR_ERR(rclk),
+ "failed to get reference clock\n");
+
+ ret = mdio_mux_init(dev, dev->of_node, gxl_mdio_switch_fn,
+ &priv->mux_handle, dev, NULL);
+ if (ret)
+ dev_err_probe(dev, ret, "mdio multiplexer init failed\n");
+
+ return ret;
+}
+
+static int gxl_mdio_mux_remove(struct platform_device *pdev)
+{
+ struct gxl_mdio_mux *priv = platform_get_drvdata(pdev);
+
+ mdio_mux_uninit(priv->mux_handle);
+
+ return 0;
+}
+
+static struct platform_driver gxl_mdio_mux_driver = {
+ .probe = gxl_mdio_mux_probe,
+ .remove = gxl_mdio_mux_remove,
+ .driver = {
+ .name = "gxl-mdio-mux",
+ .of_match_table = gxl_mdio_mux_match,
+ },
+};
+module_platform_driver(gxl_mdio_mux_driver);
+
+MODULE_DESCRIPTION("Amlogic GXL MDIO multiplexer driver");
+MODULE_AUTHOR("Jerome Brunet <jbrunet@baylibre.com>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/mdio/mdio-mvusb.c b/drivers/net/mdio/mdio-mvusb.c
index d5eabddfdf51..68fc55906e78 100644
--- a/drivers/net/mdio/mdio-mvusb.c
+++ b/drivers/net/mdio/mdio-mvusb.c
@@ -34,9 +34,6 @@ static int mvusb_mdio_read(struct mii_bus *mdio, int dev, int reg)
struct mvusb_mdio *mvusb = mdio->priv;
int err, alen;
- if (dev & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
mvusb->buf[MVUSB_CMD_ADDR] = cpu_to_le16(0xa400 | (dev << 5) | reg);
err = usb_bulk_msg(mvusb->udev, usb_sndbulkpipe(mvusb->udev, 2),
@@ -57,9 +54,6 @@ static int mvusb_mdio_write(struct mii_bus *mdio, int dev, int reg, u16 val)
struct mvusb_mdio *mvusb = mdio->priv;
int alen;
- if (dev & MII_ADDR_C45)
- return -EOPNOTSUPP;
-
mvusb->buf[MVUSB_CMD_ADDR] = cpu_to_le16(0x8000 | (dev << 5) | reg);
mvusb->buf[MVUSB_CMD_VAL] = cpu_to_le16(val);
diff --git a/drivers/net/mdio/mdio-octeon.c b/drivers/net/mdio/mdio-octeon.c
index e096e68ac667..7c65c547d377 100644
--- a/drivers/net/mdio/mdio-octeon.c
+++ b/drivers/net/mdio/mdio-octeon.c
@@ -58,8 +58,10 @@ static int octeon_mdiobus_probe(struct platform_device *pdev)
snprintf(bus->mii_bus->id, MII_BUS_ID_SIZE, "%px", bus->register_base);
bus->mii_bus->parent = &pdev->dev;
- bus->mii_bus->read = cavium_mdiobus_read;
- bus->mii_bus->write = cavium_mdiobus_write;
+ bus->mii_bus->read = cavium_mdiobus_read_c22;
+ bus->mii_bus->write = cavium_mdiobus_write_c22;
+ bus->mii_bus->read_c45 = cavium_mdiobus_read_c45;
+ bus->mii_bus->write_c45 = cavium_mdiobus_write_c45;
platform_set_drvdata(pdev, bus);
diff --git a/drivers/net/mdio/mdio-thunder.c b/drivers/net/mdio/mdio-thunder.c
index 822d2cdd2f35..3847ee92c109 100644
--- a/drivers/net/mdio/mdio-thunder.c
+++ b/drivers/net/mdio/mdio-thunder.c
@@ -93,8 +93,10 @@ static int thunder_mdiobus_pci_probe(struct pci_dev *pdev,
bus->mii_bus->name = KBUILD_MODNAME;
snprintf(bus->mii_bus->id, MII_BUS_ID_SIZE, "%llx", r.start);
bus->mii_bus->parent = &pdev->dev;
- bus->mii_bus->read = cavium_mdiobus_read;
- bus->mii_bus->write = cavium_mdiobus_write;
+ bus->mii_bus->read = cavium_mdiobus_read_c22;
+ bus->mii_bus->write = cavium_mdiobus_write_c22;
+ bus->mii_bus->read_c45 = cavium_mdiobus_read_c45;
+ bus->mii_bus->write_c45 = cavium_mdiobus_write_c45;
err = of_mdiobus_register(bus->mii_bus, node);
if (err)
diff --git a/drivers/net/netdevsim/bpf.c b/drivers/net/netdevsim/bpf.c
index 50854265864d..f60eb97e3a62 100644
--- a/drivers/net/netdevsim/bpf.c
+++ b/drivers/net/netdevsim/bpf.c
@@ -315,10 +315,6 @@ nsim_setup_prog_hw_checks(struct netdevsim *ns, struct netdev_bpf *bpf)
NSIM_EA(bpf->extack, "xdpoffload of non-bound program");
return -EINVAL;
}
- if (!bpf_offload_dev_match(bpf->prog, ns->netdev)) {
- NSIM_EA(bpf->extack, "program bound to different dev");
- return -EINVAL;
- }
state = bpf->prog->aux->offload->dev_priv;
if (WARN_ON(strcmp(state->state, "xlated"))) {
diff --git a/drivers/net/netdevsim/dev.c b/drivers/net/netdevsim/dev.c
index b962fc8e1397..6045bece2654 100644
--- a/drivers/net/netdevsim/dev.c
+++ b/drivers/net/netdevsim/dev.c
@@ -527,13 +527,13 @@ static void nsim_devlink_set_params_init_values(struct nsim_dev *nsim_dev,
union devlink_param_value value;
value.vu32 = nsim_dev->max_macs;
- devlink_param_driverinit_value_set(devlink,
- DEVLINK_PARAM_GENERIC_ID_MAX_MACS,
- value);
+ devl_param_driverinit_value_set(devlink,
+ DEVLINK_PARAM_GENERIC_ID_MAX_MACS,
+ value);
value.vbool = nsim_dev->test1;
- devlink_param_driverinit_value_set(devlink,
- NSIM_DEVLINK_PARAM_ID_TEST1,
- value);
+ devl_param_driverinit_value_set(devlink,
+ NSIM_DEVLINK_PARAM_ID_TEST1,
+ value);
}
static void nsim_devlink_param_load_driverinit_values(struct devlink *devlink)
@@ -542,14 +542,14 @@ static void nsim_devlink_param_load_driverinit_values(struct devlink *devlink)
union devlink_param_value saved_value;
int err;
- err = devlink_param_driverinit_value_get(devlink,
- DEVLINK_PARAM_GENERIC_ID_MAX_MACS,
- &saved_value);
+ err = devl_param_driverinit_value_get(devlink,
+ DEVLINK_PARAM_GENERIC_ID_MAX_MACS,
+ &saved_value);
if (!err)
nsim_dev->max_macs = saved_value.vu32;
- err = devlink_param_driverinit_value_get(devlink,
- NSIM_DEVLINK_PARAM_ID_TEST1,
- &saved_value);
+ err = devl_param_driverinit_value_get(devlink,
+ NSIM_DEVLINK_PARAM_ID_TEST1,
+ &saved_value);
if (!err)
nsim_dev->test1 = saved_value.vbool;
}
@@ -1556,14 +1556,18 @@ int nsim_drv_probe(struct nsim_bus_dev *nsim_bus_dev)
goto err_devlink_unlock;
}
- err = nsim_dev_resources_register(devlink);
+ err = devl_register(devlink);
if (err)
goto err_vfc_free;
- err = devlink_params_register(devlink, nsim_devlink_params,
- ARRAY_SIZE(nsim_devlink_params));
+ err = nsim_dev_resources_register(devlink);
if (err)
goto err_dl_unregister;
+
+ err = devl_params_register(devlink, nsim_devlink_params,
+ ARRAY_SIZE(nsim_devlink_params));
+ if (err)
+ goto err_resource_unregister;
nsim_devlink_set_params_init_values(nsim_dev, devlink);
err = nsim_dev_dummy_region_init(nsim_dev, devlink);
@@ -1605,9 +1609,7 @@ int nsim_drv_probe(struct nsim_bus_dev *nsim_bus_dev)
goto err_hwstats_exit;
nsim_dev->esw_mode = DEVLINK_ESWITCH_MODE_LEGACY;
- devlink_set_features(devlink, DEVLINK_F_RELOAD);
devl_unlock(devlink);
- devlink_register(devlink);
return 0;
err_hwstats_exit:
@@ -1627,10 +1629,12 @@ err_traps_exit:
err_dummy_region_exit:
nsim_dev_dummy_region_exit(nsim_dev);
err_params_unregister:
- devlink_params_unregister(devlink, nsim_devlink_params,
- ARRAY_SIZE(nsim_devlink_params));
-err_dl_unregister:
+ devl_params_unregister(devlink, nsim_devlink_params,
+ ARRAY_SIZE(nsim_devlink_params));
+err_resource_unregister:
devl_resources_unregister(devlink);
+err_dl_unregister:
+ devl_unregister(devlink);
err_vfc_free:
kfree(nsim_dev->vfconfigs);
err_devlink_unlock:
@@ -1668,15 +1672,15 @@ void nsim_drv_remove(struct nsim_bus_dev *nsim_bus_dev)
struct nsim_dev *nsim_dev = dev_get_drvdata(&nsim_bus_dev->dev);
struct devlink *devlink = priv_to_devlink(nsim_dev);
- devlink_unregister(devlink);
devl_lock(devlink);
nsim_dev_reload_destroy(nsim_dev);
nsim_bpf_dev_exit(nsim_dev);
nsim_dev_debugfs_exit(nsim_dev);
- devlink_params_unregister(devlink, nsim_devlink_params,
- ARRAY_SIZE(nsim_devlink_params));
+ devl_params_unregister(devlink, nsim_devlink_params,
+ ARRAY_SIZE(nsim_devlink_params));
devl_resources_unregister(devlink);
+ devl_unregister(devlink);
kfree(nsim_dev->vfconfigs);
kfree(nsim_dev->fa_cookie);
devl_unlock(devlink);
diff --git a/drivers/net/netdevsim/health.c b/drivers/net/netdevsim/health.c
index aa77af4a68df..eb04ed715d2d 100644
--- a/drivers/net/netdevsim/health.c
+++ b/drivers/net/netdevsim/health.c
@@ -233,16 +233,16 @@ int nsim_dev_health_init(struct nsim_dev *nsim_dev, struct devlink *devlink)
int err;
health->empty_reporter =
- devlink_health_reporter_create(devlink,
- &nsim_dev_empty_reporter_ops,
- 0, health);
+ devl_health_reporter_create(devlink,
+ &nsim_dev_empty_reporter_ops,
+ 0, health);
if (IS_ERR(health->empty_reporter))
return PTR_ERR(health->empty_reporter);
health->dummy_reporter =
- devlink_health_reporter_create(devlink,
- &nsim_dev_dummy_reporter_ops,
- 0, health);
+ devl_health_reporter_create(devlink,
+ &nsim_dev_dummy_reporter_ops,
+ 0, health);
if (IS_ERR(health->dummy_reporter)) {
err = PTR_ERR(health->dummy_reporter);
goto err_empty_reporter_destroy;
@@ -266,9 +266,9 @@ int nsim_dev_health_init(struct nsim_dev *nsim_dev, struct devlink *devlink)
return 0;
err_dummy_reporter_destroy:
- devlink_health_reporter_destroy(health->dummy_reporter);
+ devl_health_reporter_destroy(health->dummy_reporter);
err_empty_reporter_destroy:
- devlink_health_reporter_destroy(health->empty_reporter);
+ devl_health_reporter_destroy(health->empty_reporter);
return err;
}
@@ -278,6 +278,6 @@ void nsim_dev_health_exit(struct nsim_dev *nsim_dev)
debugfs_remove_recursive(health->ddir);
kfree(health->recovered_break_msg);
- devlink_health_reporter_destroy(health->dummy_reporter);
- devlink_health_reporter_destroy(health->empty_reporter);
+ devl_health_reporter_destroy(health->dummy_reporter);
+ devl_health_reporter_destroy(health->empty_reporter);
}
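The devlink_* to devl_* renames in netdevsim follow the core locking convention: devl_*() variants expect the caller to already hold the devlink instance lock, while the devlink_*() ones take it internally. A hedged sketch of the resulting probe-side shape (example_params is illustrative, not from the patch):

        static int example_devlink_setup(struct devlink *devlink,
                                         const struct devlink_param *example_params,
                                         size_t n)
        {
                int err;

                devl_lock(devlink);
                err = devl_register(devlink);
                if (err)
                        goto out;

                err = devl_params_register(devlink, example_params, n);
                if (err)
                        devl_unregister(devlink);
        out:
                devl_unlock(devlink);
                return err;
        }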
diff --git a/drivers/net/netdevsim/ipsec.c b/drivers/net/netdevsim/ipsec.c
index b93baf5c8bee..f0d58092e7e9 100644
--- a/drivers/net/netdevsim/ipsec.c
+++ b/drivers/net/netdevsim/ipsec.c
@@ -125,7 +125,8 @@ static int nsim_ipsec_parse_proto_keys(struct xfrm_state *xs,
return 0;
}
-static int nsim_ipsec_add_sa(struct xfrm_state *xs)
+static int nsim_ipsec_add_sa(struct xfrm_state *xs,
+ struct netlink_ext_ack *extack)
{
struct nsim_ipsec *ipsec;
struct net_device *dev;
@@ -139,25 +140,24 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs)
ipsec = &ns->ipsec;
if (xs->id.proto != IPPROTO_ESP && xs->id.proto != IPPROTO_AH) {
- netdev_err(dev, "Unsupported protocol 0x%04x for ipsec offload\n",
- xs->id.proto);
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported protocol for ipsec offload");
return -EINVAL;
}
if (xs->calg) {
- netdev_err(dev, "Compression offload not supported\n");
+ NL_SET_ERR_MSG_MOD(extack, "Compression offload not supported");
return -EINVAL;
}
if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
- netdev_err(dev, "Unsupported ipsec offload type\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported ipsec offload type");
return -EINVAL;
}
/* find the first unused index */
ret = nsim_ipsec_find_empty_idx(ipsec);
if (ret < 0) {
- netdev_err(dev, "No space for SA in Rx table!\n");
+ NL_SET_ERR_MSG_MOD(extack, "No space for SA in Rx table!");
return ret;
}
sa_idx = (u16)ret;
@@ -172,7 +172,7 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs)
/* get the key and salt */
ret = nsim_ipsec_parse_proto_keys(xs, sa.key, &sa.salt);
if (ret) {
- netdev_err(dev, "Failed to get key data for SA table\n");
+ NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for SA table");
return ret;
}
diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
index 6db6a75ff9b9..35fa1ca98671 100644
--- a/drivers/net/netdevsim/netdev.c
+++ b/drivers/net/netdevsim/netdev.c
@@ -286,6 +286,7 @@ static void nsim_setup(struct net_device *dev)
NETIF_F_TSO;
dev->hw_features |= NETIF_F_HW_TC;
dev->max_mtu = ETH_MAX_MTU;
+ dev->xdp_features = NETDEV_XDP_ACT_HW_OFFLOAD;
}
static int nsim_init_netdevsim(struct netdevsim *ns)
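dev->xdp_features is the new per-netdev capability mask from the XDP-features work; the values are NETDEV_XDP_ACT_* flags from the uapi <linux/netdev.h>. netdevsim only implements HW offload, hence the single flag; a driver with a native datapath might advertise something like this (illustrative combination, not from the patch):

        static void example_set_xdp_features(struct net_device *dev)
        {
                dev->xdp_features = NETDEV_XDP_ACT_BASIC |
                                    NETDEV_XDP_ACT_REDIRECT |
                                    NETDEV_XDP_ACT_NDO_XMIT;
        }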
diff --git a/drivers/net/pcs/pcs-lynx.c b/drivers/net/pcs/pcs-lynx.c
index 7d5fc7f54b2f..3903f3baba2b 100644
--- a/drivers/net/pcs/pcs-lynx.c
+++ b/drivers/net/pcs/pcs-lynx.c
@@ -10,9 +10,6 @@
#define SGMII_CLOCK_PERIOD_NS 8 /* PCS is clocked at 125 MHz */
#define LINK_TIMER_VAL(ns) ((u32)((ns) / SGMII_CLOCK_PERIOD_NS))
-#define SGMII_AN_LINK_TIMER_NS 1600000 /* defined by SGMII spec */
-#define IEEE8023_LINK_TIMER_NS 10000000
-
#define LINK_TIMER_LO 0x12
#define LINK_TIMER_HI 0x13
#define IF_MODE 0x14
@@ -126,26 +123,25 @@ static int lynx_pcs_config_giga(struct mdio_device *pcs, unsigned int mode,
phy_interface_t interface,
const unsigned long *advertising)
{
+ int link_timer_ns;
u32 link_timer;
u16 if_mode;
int err;
- if (interface == PHY_INTERFACE_MODE_1000BASEX) {
- link_timer = LINK_TIMER_VAL(IEEE8023_LINK_TIMER_NS);
+ link_timer_ns = phylink_get_link_timer_ns(interface);
+ if (link_timer_ns > 0) {
+ link_timer = LINK_TIMER_VAL(link_timer_ns);
+
mdiodev_write(pcs, LINK_TIMER_LO, link_timer & 0xffff);
mdiodev_write(pcs, LINK_TIMER_HI, link_timer >> 16);
+ }
+ if (interface == PHY_INTERFACE_MODE_1000BASEX) {
if_mode = 0;
} else {
if_mode = IF_MODE_SGMII_EN;
- if (mode == MLO_AN_INBAND) {
+ if (mode == MLO_AN_INBAND)
if_mode |= IF_MODE_USE_SGMII_AN;
-
- /* Adjust link timer for SGMII */
- link_timer = LINK_TIMER_VAL(SGMII_AN_LINK_TIMER_NS);
- mdiodev_write(pcs, LINK_TIMER_LO, link_timer & 0xffff);
- mdiodev_write(pcs, LINK_TIMER_HI, link_timer >> 16);
- }
}
err = mdiodev_modify(pcs, IF_MODE,
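The lynx PCS can drop its private SGMII and 802.3 timer constants because phylink now centralizes them. A sketch of the helper as understood from include/linux/phylink.h (1.6 ms for the SGMII family, 10 ms for 802.3 inband AN; the exact set of covered interfaces may differ):

        static inline int phylink_get_link_timer_ns(phy_interface_t interface)
        {
                switch (interface) {
                case PHY_INTERFACE_MODE_SGMII:
                case PHY_INTERFACE_MODE_QSGMII:
                case PHY_INTERFACE_MODE_USXGMII:
                        return 1600000;         /* 1.6 ms */
                case PHY_INTERFACE_MODE_1000BASEX:
                case PHY_INTERFACE_MODE_2500BASEX:
                        return 10000000;        /* 10 ms */
                default:
                        return -EINVAL;
                }
        }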
diff --git a/drivers/net/pcs/pcs-rzn1-miic.c b/drivers/net/pcs/pcs-rzn1-miic.c
index c1424119e821..323bec5e57f8 100644
--- a/drivers/net/pcs/pcs-rzn1-miic.c
+++ b/drivers/net/pcs/pcs-rzn1-miic.c
@@ -121,15 +121,11 @@ static const char *index_to_string[MIIC_MODCTRL_CONF_CONV_NUM] = {
* struct miic - MII converter structure
* @base: base address of the MII converter
* @dev: Device associated to the MII converter
- * @clks: Clocks used for this device
- * @nclk: Number of clocks
* @lock: Lock used for read-modify-write access
*/
struct miic {
void __iomem *base;
struct device *dev;
- struct clk_bulk_data *clks;
- int nclk;
spinlock_t lock;
};
@@ -232,7 +228,7 @@ static int miic_config(struct phylink_pcs *pcs, unsigned int mode,
}
miic_reg_rmw(miic, MIIC_CONVCTRL(port), mask, val);
- miic_converter_enable(miic_port->miic, miic_port->port, 1);
+ miic_converter_enable(miic, miic_port->port, 1);
return 0;
}
diff --git a/drivers/net/pcs/pcs-xpcs.c b/drivers/net/pcs/pcs-xpcs.c
index f6a038a1d51e..bc428a816719 100644
--- a/drivers/net/pcs/pcs-xpcs.c
+++ b/drivers/net/pcs/pcs-xpcs.c
@@ -199,9 +199,7 @@ int xpcs_write(struct dw_xpcs *xpcs, int dev, u32 reg, u16 val)
static int xpcs_modify_changed(struct dw_xpcs *xpcs, int dev, u32 reg,
u16 mask, u16 set)
{
- u32 reg_addr = mdiobus_c45_addr(dev, reg);
-
- return mdiodev_modify_changed(xpcs->mdiodev, reg_addr, mask, set);
+ return mdiodev_c45_modify_changed(xpcs->mdiodev, dev, reg, mask, set);
}
static int xpcs_read_vendor(struct dw_xpcs *xpcs, int dev, u32 reg)
diff --git a/drivers/net/phy/Kconfig b/drivers/net/phy/Kconfig
index 1327290decab..54874555c921 100644
--- a/drivers/net/phy/Kconfig
+++ b/drivers/net/phy/Kconfig
@@ -257,7 +257,7 @@ config MOTORCOMM_PHY
tristate "Motorcomm PHYs"
help
Enables support for Motorcomm network PHYs.
- Currently supports the YT8511, YT8521, YT8531S Gigabit Ethernet PHYs.
+ Currently supports YT85xx Gigabit Ethernet PHYs.
config NATIONAL_PHY
tristate "National Semiconductor PHYs"
@@ -277,6 +277,13 @@ config NXP_TJA11XX_PHY
help
Currently supports the NXP TJA1100 and TJA1101 PHY.
+config NCN26000_PHY
+ tristate "Onsemi 10BASE-T1S Ethernet PHY"
+ help
+ Adds support for the onsemi 10BASE-T1S Ethernet PHY.
+ Currently supports the NCN26000 10BASE-T1S Industrial PHY
+ with MII interface.
+
config AT803X_PHY
tristate "Qualcomm Atheros AR803X PHYs and QCA833x PHYs"
depends on REGULATOR
diff --git a/drivers/net/phy/Makefile b/drivers/net/phy/Makefile
index f7138d3c896b..b5138066ba04 100644
--- a/drivers/net/phy/Makefile
+++ b/drivers/net/phy/Makefile
@@ -77,6 +77,7 @@ obj-$(CONFIG_MICROCHIP_T1_PHY) += microchip_t1.o
obj-$(CONFIG_MICROSEMI_PHY) += mscc/
obj-$(CONFIG_MOTORCOMM_PHY) += motorcomm.o
obj-$(CONFIG_NATIONAL_PHY) += national.o
+obj-$(CONFIG_NCN26000_PHY) += ncn26000.o
obj-$(CONFIG_NXP_C45_TJA11XX_PHY) += nxp-c45-tja11xx.o
obj-$(CONFIG_NXP_TJA11XX_PHY) += nxp-tja11xx.o
obj-$(CONFIG_QSEMI_PHY) += qsemi.o
diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
index 0d706ee266af..63a3644d86c9 100644
--- a/drivers/net/phy/marvell.c
+++ b/drivers/net/phy/marvell.c
@@ -1467,7 +1467,7 @@ static int m88e1540_set_fld(struct phy_device *phydev, const u8 *msecs)
/* According to the Marvell data sheet EEE must be disabled for
* Fast Link Down detection to work properly
*/
- ret = phy_ethtool_get_eee(phydev, &eee);
+ ret = genphy_c45_ethtool_get_eee(phydev, &eee);
if (!ret && eee.eee_enabled) {
phydev_warn(phydev, "Fast Link Down detection requires EEE to be disabled!\n");
return -EBUSY;
diff --git a/drivers/net/phy/mdio-open-alliance.h b/drivers/net/phy/mdio-open-alliance.h
new file mode 100644
index 000000000000..931e14660d75
--- /dev/null
+++ b/drivers/net/phy/mdio-open-alliance.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * mdio-open-alliance.h - definition of OPEN Alliance SIG standard registers
+ */
+
+#ifndef __MDIO_OPEN_ALLIANCE__
+#define __MDIO_OPEN_ALLIANCE__
+
+#include <linux/mdio.h>
+
+/* NOTE: all OATC14 registers are located in MDIO_MMD_VEND2 */
+
+/* Open Alliance TC14 (10BASE-T1S) registers */
+#define MDIO_OATC14_PLCA_IDVER 0xca00 /* PLCA ID and version */
+#define MDIO_OATC14_PLCA_CTRL0 0xca01 /* PLCA Control register 0 */
+#define MDIO_OATC14_PLCA_CTRL1 0xca02 /* PLCA Control register 1 */
+#define MDIO_OATC14_PLCA_STATUS 0xca03 /* PLCA Status register */
+#define MDIO_OATC14_PLCA_TOTMR 0xca04 /* PLCA TO Timer register */
+#define MDIO_OATC14_PLCA_BURST 0xca05 /* PLCA BURST mode register */
+
+/* Open Alliance TC14 PLCA IDVER register */
+#define MDIO_OATC14_PLCA_IDM 0xff00 /* PLCA MAP ID */
+#define MDIO_OATC14_PLCA_VER 0x00ff /* PLCA MAP version */
+
+/* Open Alliance TC14 PLCA CTRL0 register */
+#define MDIO_OATC14_PLCA_EN BIT(15) /* PLCA enable */
+#define MDIO_OATC14_PLCA_RST BIT(14) /* PLCA reset */
+
+/* Open Alliance TC14 PLCA CTRL1 register */
+#define MDIO_OATC14_PLCA_NCNT 0xff00 /* PLCA node count */
+#define MDIO_OATC14_PLCA_ID 0x00ff /* PLCA local node ID */
+
+/* Open Alliance TC14 PLCA STATUS register */
+#define MDIO_OATC14_PLCA_PST BIT(15) /* PLCA status indication */
+
+/* Open Alliance TC14 PLCA TOTMR register */
+#define MDIO_OATC14_PLCA_TOT 0x00ff
+
+/* Open Alliance TC14 PLCA BURST register */
+#define MDIO_OATC14_PLCA_MAXBC 0xff00
+#define MDIO_OATC14_PLCA_BTMR 0x00ff
+
+/* Version Identifiers */
+#define OATC14_IDM 0x0a00
+
+#endif /* __MDIO_OPEN_ALLIANCE__ */
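/* A hedged consumption sketch for the map above: callers would
 * typically verify the ID field of PLCA_IDVER against OATC14_IDM
 * before trusting the rest of the map, using the C45 accessors this
 * series introduces:
 *
 *	ret = mdiobus_c45_read(bus, addr, MDIO_MMD_VEND2,
 *			       MDIO_OATC14_PLCA_IDVER);
 *	if (ret < 0)
 *		return ret;
 *	if ((ret & MDIO_OATC14_PLCA_IDM) != OATC14_IDM)
 *		return -ENODEV;	// not a TC14 PLCA register map
 *	ver = ret & MDIO_OATC14_PLCA_VER;
 */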
diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
index 16e021b477f0..00d5bcdf0e6f 100644
--- a/drivers/net/phy/mdio_bus.c
+++ b/drivers/net/phy/mdio_bus.c
@@ -19,6 +19,7 @@
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/kernel.h>
+#include <linux/micrel_phy.h>
#include <linux/mii.h>
#include <linux/mm.h>
#include <linux/module.h>
@@ -108,9 +109,10 @@ EXPORT_SYMBOL(mdiobus_unregister_device);
struct phy_device *mdiobus_get_phy(struct mii_bus *bus, int addr)
{
+ bool addr_valid = addr >= 0 && addr < ARRAY_SIZE(bus->mdio_map);
struct mdio_device *mdiodev;
- if (addr < 0 || addr >= ARRAY_SIZE(bus->mdio_map))
+ if (WARN_ONCE(!addr_valid, "addr %d out of range\n", addr))
return NULL;
mdiodev = bus->mdio_map[addr];
@@ -511,6 +513,126 @@ static int mdiobus_create_device(struct mii_bus *bus,
return ret;
}
+static struct phy_device *mdiobus_scan(struct mii_bus *bus, int addr, bool c45)
+{
+ struct phy_device *phydev = ERR_PTR(-ENODEV);
+ int err;
+
+ phydev = get_phy_device(bus, addr, c45);
+ if (IS_ERR(phydev))
+ return phydev;
+
+ /* For DT, see if the auto-probed phy has a corresponding child
+ * in the bus node, and set the of_node pointer in this case.
+ */
+ of_mdiobus_link_mdiodev(bus, &phydev->mdio);
+
+ err = phy_device_register(phydev);
+ if (err) {
+ phy_device_free(phydev);
+ return ERR_PTR(-ENODEV);
+ }
+
+ return phydev;
+}
+
+/**
+ * mdiobus_scan_c22 - scan one address on a bus for C22 MDIO devices.
+ * @bus: mii_bus to scan
+ * @addr: address on bus to scan
+ *
+ * This function scans one address on the MDIO bus, looking for
+ * devices which can be identified using a vendor/product ID in
+ * registers 2 and 3. Not all MDIO devices have such registers, but
+ * PHY devices typically do. Hence this function assumes anything
+ * found is a PHY, or can be treated as a PHY. Other MDIO devices,
+ * such as switches, will probably not be found during the scan.
+ */
+struct phy_device *mdiobus_scan_c22(struct mii_bus *bus, int addr)
+{
+ return mdiobus_scan(bus, addr, false);
+}
+EXPORT_SYMBOL(mdiobus_scan_c22);
+
+/**
+ * mdiobus_scan_c45 - scan one address on a bus for C45 MDIO devices.
+ * @bus: mii_bus to scan
+ * @addr: address on bus to scan
+ *
+ * This function scans one address on the MDIO bus, looking for
+ * devices which can be identified using a vendor/product ID in
+ * registers 2 and 3. Not all MDIO devices have such registers, but
+ * PHY devices typically do. Hence this function assumes anything
+ * found is a PHY, or can be treated as a PHY. Other MDIO devices,
+ * such as switches, will probably not be found during the scan.
+ */
+static struct phy_device *mdiobus_scan_c45(struct mii_bus *bus, int addr)
+{
+ return mdiobus_scan(bus, addr, true);
+}
+
+static int mdiobus_scan_bus_c22(struct mii_bus *bus)
+{
+ int i;
+
+ for (i = 0; i < PHY_MAX_ADDR; i++) {
+ if ((bus->phy_mask & BIT(i)) == 0) {
+ struct phy_device *phydev;
+
+ phydev = mdiobus_scan_c22(bus, i);
+ if (IS_ERR(phydev) && (PTR_ERR(phydev) != -ENODEV))
+ return PTR_ERR(phydev);
+ }
+ }
+ return 0;
+}
+
+static int mdiobus_scan_bus_c45(struct mii_bus *bus)
+{
+ int i;
+
+ for (i = 0; i < PHY_MAX_ADDR; i++) {
+ if ((bus->phy_mask & BIT(i)) == 0) {
+ struct phy_device *phydev;
+
+ /* Don't scan C45 if we already have a C22 device */
+ if (bus->mdio_map[i])
+ continue;
+
+ phydev = mdiobus_scan_c45(bus, i);
+ if (IS_ERR(phydev) && (PTR_ERR(phydev) != -ENODEV))
+ return PTR_ERR(phydev);
+ }
+ }
+ return 0;
+}
+
+/* There are some C22 PHYs which do bad things when there is a C45
+ * transaction on the bus, like accepting a read themselves, and
+ * stomping over the true device's reply, or performing a write to
+ * themselves which was intended for another device. Now that C22
+ * devices have been found, see if any of them are bad for C45, and if we
+ * should skip the C45 scan.
+ */
+static bool mdiobus_prevent_c45_scan(struct mii_bus *bus)
+{
+ int i;
+
+ for (i = 0; i < PHY_MAX_ADDR; i++) {
+ struct phy_device *phydev;
+ u32 oui;
+
+ phydev = mdiobus_get_phy(bus, i);
+ if (!phydev)
+ continue;
+ oui = phydev->phy_id >> 10;
+
+ if (oui == MICREL_OUI)
+ return true;
+ }
+ return false;
+}
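/* OUI extraction sketch for the check above: the 32-bit phy_id packs
 * the OUI above the 6-bit model and 4-bit revision fields, so
 * "phy_id >> 10" recovers it. A hypothetical table-driven variant
 * (only MICREL_OUI is known bad here; the helper name is illustrative):
 *
 *	static const u32 bad_c45_ouis[] = { MICREL_OUI };
 *
 *	static bool oui_prevents_c45(u32 oui)
 *	{
 *		for (int i = 0; i < ARRAY_SIZE(bad_c45_ouis); i++)
 *			if (oui == bad_c45_ouis[i])
 *				return true;
 *		return false;
 *	}
 */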
+
/**
* __mdiobus_register - bring up all the PHYs on a given bus and attach them to bus
* @bus: target mii_bus
@@ -528,11 +650,19 @@ static int mdiobus_create_device(struct mii_bus *bus,
int __mdiobus_register(struct mii_bus *bus, struct module *owner)
{
struct mdio_device *mdiodev;
- int i, err;
struct gpio_desc *gpiod;
+ bool prevent_c45_scan;
+ int i, err;
+
+ if (!bus || !bus->name)
+ return -EINVAL;
- if (NULL == bus || NULL == bus->name ||
- NULL == bus->read || NULL == bus->write)
+ /* An access method always needs both read and write operations */
+ if (!!bus->read != !!bus->write || !!bus->read_c45 != !!bus->write_c45)
+ return -EINVAL;
+
+ /* At least one method is mandatory */
+ if (!bus->read && !bus->read_c45)
return -EINVAL;
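/* The "!!" normalisation above makes the pointer checks behave as XOR:
 * each access method must come as a complete read/write pair, and at
 * least one pair must exist. Worked cases (hedged illustration):
 *
 *	read  write  read_c45  write_c45  ->  result
 *	NULL  NULL   set       set            ok, C45-only bus
 *	set   set    NULL      NULL           ok, C22-only bus
 *	set   NULL   set       set            -EINVAL, half a C22 pair
 *	NULL  NULL   NULL      NULL           -EINVAL, no method at all
 */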
if (bus->parent && bus->parent->of_node)
@@ -587,16 +717,18 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
goto error_reset_gpiod;
}
- for (i = 0; i < PHY_MAX_ADDR; i++) {
- if ((bus->phy_mask & BIT(i)) == 0) {
- struct phy_device *phydev;
+ if (bus->read) {
+ err = mdiobus_scan_bus_c22(bus);
+ if (err)
+ goto error;
+ }
- phydev = mdiobus_scan(bus, i);
- if (IS_ERR(phydev) && (PTR_ERR(phydev) != -ENODEV)) {
- err = PTR_ERR(phydev);
- goto error;
- }
- }
+ prevent_c45_scan = mdiobus_prevent_c45_scan(bus);
+
+ if (!prevent_c45_scan && bus->read_c45) {
+ err = mdiobus_scan_bus_c45(bus);
+ if (err)
+ goto error;
}
mdiobus_setup_mdiodev_from_board_info(bus, mdiobus_create_device);
@@ -606,7 +738,7 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
return 0;
error:
- while (--i >= 0) {
+ for (i = 0; i < PHY_MAX_ADDR; i++) {
mdiodev = bus->mdio_map[i];
if (!mdiodev)
continue;
@@ -677,57 +809,6 @@ void mdiobus_free(struct mii_bus *bus)
}
EXPORT_SYMBOL(mdiobus_free);
-/**
- * mdiobus_scan - scan a bus for MDIO devices.
- * @bus: mii_bus to scan
- * @addr: address on bus to scan
- *
- * This function scans the MDIO bus, looking for devices which can be
- * identified using a vendor/product ID in registers 2 and 3. Not all
- * MDIO devices have such registers, but PHY devices typically
- * do. Hence this function assumes anything found is a PHY, or can be
- * treated as a PHY. Other MDIO devices, such as switches, will
- * probably not be found during the scan.
- */
-struct phy_device *mdiobus_scan(struct mii_bus *bus, int addr)
-{
- struct phy_device *phydev = ERR_PTR(-ENODEV);
- int err;
-
- switch (bus->probe_capabilities) {
- case MDIOBUS_NO_CAP:
- case MDIOBUS_C22:
- phydev = get_phy_device(bus, addr, false);
- break;
- case MDIOBUS_C45:
- phydev = get_phy_device(bus, addr, true);
- break;
- case MDIOBUS_C22_C45:
- phydev = get_phy_device(bus, addr, false);
- if (IS_ERR(phydev))
- phydev = get_phy_device(bus, addr, true);
- break;
- }
-
- if (IS_ERR(phydev))
- return phydev;
-
- /*
- * For DT, see if the auto-probed phy has a correspoding child
- * in the bus node, and set the of_node pointer in this case.
- */
- of_mdiobus_link_mdiodev(bus, &phydev->mdio);
-
- err = phy_device_register(phydev);
- if (err) {
- phy_device_free(phydev);
- return ERR_PTR(-ENODEV);
- }
-
- return phydev;
-}
-EXPORT_SYMBOL(mdiobus_scan);
-
static void mdiobus_stats_acct(struct mdio_bus_stats *stats, bool op, int ret)
{
preempt_disable();
@@ -764,7 +845,10 @@ int __mdiobus_read(struct mii_bus *bus, int addr, u32 regnum)
lockdep_assert_held_once(&bus->mdio_lock);
- retval = bus->read(bus, addr, regnum);
+ if (bus->read)
+ retval = bus->read(bus, addr, regnum);
+ else
+ retval = -EOPNOTSUPP;
trace_mdio_access(bus, 1, addr, regnum, retval, retval);
mdiobus_stats_acct(&bus->stats[addr], true, retval);
@@ -790,7 +874,10 @@ int __mdiobus_write(struct mii_bus *bus, int addr, u32 regnum, u16 val)
lockdep_assert_held_once(&bus->mdio_lock);
- err = bus->write(bus, addr, regnum, val);
+ if (bus->write)
+ err = bus->write(bus, addr, regnum, val);
+ else
+ err = -EOPNOTSUPP;
trace_mdio_access(bus, 0, addr, regnum, val, err);
mdiobus_stats_acct(&bus->stats[addr], false, err);
@@ -832,6 +919,99 @@ int __mdiobus_modify_changed(struct mii_bus *bus, int addr, u32 regnum,
EXPORT_SYMBOL_GPL(__mdiobus_modify_changed);
/**
+ * __mdiobus_c45_read - Unlocked version of the mdiobus_c45_read function
+ * @bus: the mii_bus struct
+ * @addr: the phy address
+ * @devad: device address to read
+ * @regnum: register number to read
+ *
+ * Read an MDIO bus register. Caller must hold the mdio bus lock.
+ *
+ * NOTE: MUST NOT be called from interrupt context.
+ */
+int __mdiobus_c45_read(struct mii_bus *bus, int addr, int devad, u32 regnum)
+{
+ int retval;
+
+ lockdep_assert_held_once(&bus->mdio_lock);
+
+ if (bus->read_c45)
+ retval = bus->read_c45(bus, addr, devad, regnum);
+ else
+ retval = -EOPNOTSUPP;
+
+ trace_mdio_access(bus, 1, addr, regnum, retval, retval);
+ mdiobus_stats_acct(&bus->stats[addr], true, retval);
+
+ return retval;
+}
+EXPORT_SYMBOL(__mdiobus_c45_read);
+
+/**
+ * __mdiobus_c45_write - Unlocked version of the mdiobus_c45_write function
+ * @bus: the mii_bus struct
+ * @addr: the phy address
+ * @devad: device address to write
+ * @regnum: register number to write
+ * @val: value to write to @regnum
+ *
+ * Write an MDIO bus register. Caller must hold the mdio bus lock.
+ *
+ * NOTE: MUST NOT be called from interrupt context.
+ */
+int __mdiobus_c45_write(struct mii_bus *bus, int addr, int devad, u32 regnum,
+ u16 val)
+{
+ int err;
+
+ lockdep_assert_held_once(&bus->mdio_lock);
+
+ if (bus->write_c45)
+ err = bus->write_c45(bus, addr, devad, regnum, val);
+ else
+ err = -EOPNOTSUPP;
+
+ trace_mdio_access(bus, 0, addr, regnum, val, err);
+ mdiobus_stats_acct(&bus->stats[addr], false, err);
+
+ return err;
+}
+EXPORT_SYMBOL(__mdiobus_c45_write);
+
+/**
+ * __mdiobus_c45_modify_changed - Unlocked version of the mdiobus_c45_modify_changed function
+ * @bus: the mii_bus struct
+ * @addr: the phy address
+ * @devad: device address to modify
+ * @regnum: register number to modify
+ * @mask: bit mask of bits to clear
+ * @set: bit mask of bits to set
+ *
+ * Read, modify, and if any change, write the register value back to the
+ * device. Any error returns a negative number.
+ *
+ * NOTE: MUST NOT be called from interrupt context.
+ */
+static int __mdiobus_c45_modify_changed(struct mii_bus *bus, int addr,
+ int devad, u32 regnum, u16 mask,
+ u16 set)
+{
+ int new, ret;
+
+ ret = __mdiobus_c45_read(bus, addr, devad, regnum);
+ if (ret < 0)
+ return ret;
+
+ new = (ret & ~mask) | set;
+ if (new == ret)
+ return 0;
+
+ ret = __mdiobus_c45_write(bus, addr, devad, regnum, new);
+
+ return ret < 0 ? ret : 1;
+}
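/* Return convention sketch for the helper above: < 0 on bus error, 0
 * when the register already held the requested value, 1 when a write
 * was actually issued. A hedged caller pattern:
 *
 *	ret = __mdiobus_c45_modify_changed(bus, addr, devad, reg, mask, set);
 *	if (ret < 0)
 *		return ret;		// bus error
 *	if (ret == 1)
 *		reconfigure();		// illustrative "value changed" hook
 */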
+
+/**
* mdiobus_read_nested - Nested version of the mdiobus_read function
* @bus: the mii_bus struct
* @addr: the phy address
@@ -879,6 +1059,56 @@ int mdiobus_read(struct mii_bus *bus, int addr, u32 regnum)
EXPORT_SYMBOL(mdiobus_read);
/**
+ * mdiobus_c45_read - Convenience function for reading a given MII mgmt register
+ * @bus: the mii_bus struct
+ * @addr: the phy address
+ * @devad: device address to read
+ * @regnum: register number to read
+ *
+ * NOTE: MUST NOT be called from interrupt context,
+ * because the bus read/write functions may wait for an interrupt
+ * to conclude the operation.
+ */
+int mdiobus_c45_read(struct mii_bus *bus, int addr, int devad, u32 regnum)
+{
+ int retval;
+
+ mutex_lock(&bus->mdio_lock);
+ retval = __mdiobus_c45_read(bus, addr, devad, regnum);
+ mutex_unlock(&bus->mdio_lock);
+
+ return retval;
+}
+EXPORT_SYMBOL(mdiobus_c45_read);
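/* A minimal usage sketch, assuming a registered bus with a C45 PHY at
 * addr: read the PMA/PMD device identifier (MMD 1, registers 2 and 3)
 * through the new convenience helper:
 *
 *	hi = mdiobus_c45_read(bus, addr, MDIO_MMD_PMAPMD, MII_PHYSID1);
 *	lo = mdiobus_c45_read(bus, addr, MDIO_MMD_PMAPMD, MII_PHYSID2);
 *	if (hi < 0 || lo < 0)
 *		return hi < 0 ? hi : lo;
 *	id = ((u32)hi << 16) | lo;
 */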
+
+/**
+ * mdiobus_c45_read_nested - Nested version of the mdiobus_c45_read function
+ * @bus: the mii_bus struct
+ * @addr: the phy address
+ * @devad: device address to read
+ * @regnum: register number to read
+ *
+ * In case of nested MDIO bus access avoid lockdep false positives by
+ * using mutex_lock_nested().
+ *
+ * NOTE: MUST NOT be called from interrupt context,
+ * because the bus read/write functions may wait for an interrupt
+ * to conclude the operation.
+ */
+int mdiobus_c45_read_nested(struct mii_bus *bus, int addr, int devad,
+ u32 regnum)
+{
+ int retval;
+
+ mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
+ retval = __mdiobus_c45_read(bus, addr, devad, regnum);
+ mutex_unlock(&bus->mdio_lock);
+
+ return retval;
+}
+EXPORT_SYMBOL(mdiobus_c45_read_nested);
+
+/**
* mdiobus_write_nested - Nested version of the mdiobus_write function
* @bus: the mii_bus struct
* @addr: the phy address
@@ -928,6 +1158,59 @@ int mdiobus_write(struct mii_bus *bus, int addr, u32 regnum, u16 val)
EXPORT_SYMBOL(mdiobus_write);
/**
+ * mdiobus_c45_write - Convenience function for writing a given MII mgmt register
+ * @bus: the mii_bus struct
+ * @addr: the phy address
+ * @devad: device address to write
+ * @regnum: register number to write
+ * @val: value to write to @regnum
+ *
+ * NOTE: MUST NOT be called from interrupt context,
+ * because the bus read/write functions may wait for an interrupt
+ * to conclude the operation.
+ */
+int mdiobus_c45_write(struct mii_bus *bus, int addr, int devad, u32 regnum,
+ u16 val)
+{
+ int err;
+
+ mutex_lock(&bus->mdio_lock);
+ err = __mdiobus_c45_write(bus, addr, devad, regnum, val);
+ mutex_unlock(&bus->mdio_lock);
+
+ return err;
+}
+EXPORT_SYMBOL(mdiobus_c45_write);
+
+/**
+ * mdiobus_c45_write_nested - Nested version of the mdiobus_c45_write function
+ * @bus: the mii_bus struct
+ * @addr: the phy address
+ * @devad: device address to read
+ * @regnum: register number to write
+ * @val: value to write to @regnum
+ *
+ * In case of nested MDIO bus access avoid lockdep false positives by
+ * using mutex_lock_nested().
+ *
+ * NOTE: MUST NOT be called from interrupt context,
+ * because the bus read/write functions may wait for an interrupt
+ * to conclude the operation.
+ */
+int mdiobus_c45_write_nested(struct mii_bus *bus, int addr, int devad,
+ u32 regnum, u16 val)
+{
+ int err;
+
+ mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
+ err = __mdiobus_c45_write(bus, addr, devad, regnum, val);
+ mutex_unlock(&bus->mdio_lock);
+
+ return err;
+}
+EXPORT_SYMBOL(mdiobus_c45_write_nested);
+
+/**
* mdiobus_modify - Convenience function for modifying a given mdio device
* register
* @bus: the mii_bus struct
@@ -949,6 +1232,30 @@ int mdiobus_modify(struct mii_bus *bus, int addr, u32 regnum, u16 mask, u16 set)
EXPORT_SYMBOL_GPL(mdiobus_modify);
/**
+ * mdiobus_c45_modify - Convenience function for modifying a given mdio device
+ * register
+ * @bus: the mii_bus struct
+ * @addr: the phy address
+ * @devad: device address to modify
+ * @regnum: register number to write
+ * @mask: bit mask of bits to clear
+ * @set: bit mask of bits to set
+ */
+int mdiobus_c45_modify(struct mii_bus *bus, int addr, int devad, u32 regnum,
+ u16 mask, u16 set)
+{
+ int err;
+
+ mutex_lock(&bus->mdio_lock);
+ err = __mdiobus_c45_modify_changed(bus, addr, devad, regnum,
+ mask, set);
+ mutex_unlock(&bus->mdio_lock);
+
+ return err < 0 ? err : 0;
+}
+EXPORT_SYMBOL_GPL(mdiobus_c45_modify);
+
+/**
* mdiobus_modify_changed - Convenience function for modifying a given mdio
* device register and returning if it changed
* @bus: the mii_bus struct
@@ -971,6 +1278,29 @@ int mdiobus_modify_changed(struct mii_bus *bus, int addr, u32 regnum,
EXPORT_SYMBOL_GPL(mdiobus_modify_changed);
/**
+ * mdiobus_c45_modify_changed - Convenience function for modifying a given mdio
+ * device register and returning if it changed
+ * @bus: the mii_bus struct
+ * @addr: the phy address
+ * @devad: device address to modify
+ * @regnum: register number to write
+ * @mask: bit mask of bits to clear
+ * @set: bit mask of bits to set
+ */
+int mdiobus_c45_modify_changed(struct mii_bus *bus, int devad, int addr,
+ u32 regnum, u16 mask, u16 set)
+{
+ int err;
+
+ mutex_lock(&bus->mdio_lock);
+ err = __mdiobus_c45_modify_changed(bus, addr, devad, regnum, mask, set);
+ mutex_unlock(&bus->mdio_lock);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(mdiobus_c45_modify_changed);
+
+/**
* mdio_bus_match - determine if given MDIO driver supports the given
* MDIO device
* @dev: target MDIO device
diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
index 26ce0c5defcd..2c84fccef4f6 100644
--- a/drivers/net/phy/micrel.c
+++ b/drivers/net/phy/micrel.c
@@ -268,6 +268,9 @@ struct kszphy_type {
u16 interrupt_level_mask;
u16 cable_diag_reg;
unsigned long pair_mask;
+ u16 disable_dll_tx_bit;
+ u16 disable_dll_rx_bit;
+ u16 disable_dll_mask;
bool has_broadcast_disable;
bool has_nand_tree_disable;
bool has_rmii_ref_clk_sel;
@@ -310,6 +313,11 @@ struct kszphy_ptp_priv {
enum hwtstamp_rx_filters rx_filter;
int layer;
int version;
+
+ struct ptp_clock *ptp_clock;
+ struct ptp_clock_info ptp_clock_info;
+ /* Lock for ptp_clock */
+ struct mutex ptp_lock;
};
struct kszphy_priv {
@@ -364,6 +372,21 @@ static const struct kszphy_type ksz9021_type = {
.interrupt_level_mask = BIT(14),
};
+static const struct kszphy_type ksz9131_type = {
+ .interrupt_level_mask = BIT(14),
+ .disable_dll_tx_bit = BIT(12),
+ .disable_dll_rx_bit = BIT(12),
+ .disable_dll_mask = BIT_MASK(12),
+};
+
+static const struct kszphy_type lan8841_type = {
+ .disable_dll_tx_bit = BIT(14),
+ .disable_dll_rx_bit = BIT(14),
+ .disable_dll_mask = BIT_MASK(14),
+ .cable_diag_reg = LAN8814_CABLE_DIAG,
+ .pair_mask = LAN8814_WIRE_PAIR_MASK,
+};
+
static int kszphy_extended_write(struct phy_device *phydev,
u32 regnum, u16 val)
{
@@ -1172,19 +1195,18 @@ static int ksz9131_of_load_skew_values(struct phy_device *phydev,
#define KSZ9131RN_MMD_COMMON_CTRL_REG 2
#define KSZ9131RN_RXC_DLL_CTRL 76
#define KSZ9131RN_TXC_DLL_CTRL 77
-#define KSZ9131RN_DLL_CTRL_BYPASS BIT_MASK(12)
#define KSZ9131RN_DLL_ENABLE_DELAY 0
-#define KSZ9131RN_DLL_DISABLE_DELAY BIT(12)
static int ksz9131_config_rgmii_delay(struct phy_device *phydev)
{
+ const struct kszphy_type *type = phydev->drv->driver_data;
u16 rxcdll_val, txcdll_val;
int ret;
switch (phydev->interface) {
case PHY_INTERFACE_MODE_RGMII:
- rxcdll_val = KSZ9131RN_DLL_DISABLE_DELAY;
- txcdll_val = KSZ9131RN_DLL_DISABLE_DELAY;
+ rxcdll_val = type->disable_dll_rx_bit;
+ txcdll_val = type->disable_dll_tx_bit;
break;
case PHY_INTERFACE_MODE_RGMII_ID:
rxcdll_val = KSZ9131RN_DLL_ENABLE_DELAY;
@@ -1192,10 +1214,10 @@ static int ksz9131_config_rgmii_delay(struct phy_device *phydev)
break;
case PHY_INTERFACE_MODE_RGMII_RXID:
rxcdll_val = KSZ9131RN_DLL_ENABLE_DELAY;
- txcdll_val = KSZ9131RN_DLL_DISABLE_DELAY;
+ txcdll_val = type->disable_dll_tx_bit;
break;
case PHY_INTERFACE_MODE_RGMII_TXID:
- rxcdll_val = KSZ9131RN_DLL_DISABLE_DELAY;
+ rxcdll_val = type->disable_dll_rx_bit;
txcdll_val = KSZ9131RN_DLL_ENABLE_DELAY;
break;
default:
@@ -1203,13 +1225,13 @@ static int ksz9131_config_rgmii_delay(struct phy_device *phydev)
}
ret = phy_modify_mmd(phydev, KSZ9131RN_MMD_COMMON_CTRL_REG,
- KSZ9131RN_RXC_DLL_CTRL, KSZ9131RN_DLL_CTRL_BYPASS,
+ KSZ9131RN_RXC_DLL_CTRL, type->disable_dll_mask,
rxcdll_val);
if (ret < 0)
return ret;
return phy_modify_mmd(phydev, KSZ9131RN_MMD_COMMON_CTRL_REG,
- KSZ9131RN_TXC_DLL_CTRL, KSZ9131RN_DLL_CTRL_BYPASS,
+ KSZ9131RN_TXC_DLL_CTRL, type->disable_dll_mask,
txcdll_val);
}
@@ -1370,6 +1392,26 @@ static int ksz9131_config_aneg(struct phy_device *phydev)
return genphy_config_aneg(phydev);
}
+static int ksz9477_get_features(struct phy_device *phydev)
+{
+ int ret;
+
+ ret = genphy_read_abilities(phydev);
+ if (ret)
+ return ret;
+
+ /* The "EEE control and capability 1" (Register 3.20) seems to be
+ * influenced by the "EEE advertisement 1" (Register 7.60). Changes
+ * to register 7.60 will affect register 3.20. So, we need to
+ * construct our own list of caps.
+ * KSZ8563R should have 100BaseTX/Full only.
+ */
+ linkmode_and(phydev->supported_eee, phydev->supported,
+ PHY_EEE_CAP1_FEATURES);
+
+ return 0;
+}
+
#define KSZ8873MLL_GLOBAL_CONTROL_4 0x06
#define KSZ8873MLL_GLOBAL_CONTROL_4_DUPLEX BIT(6)
#define KSZ8873MLL_GLOBAL_CONTROL_4_SPEED BIT(4)
@@ -2088,7 +2130,8 @@ static int ksz886x_cable_test_get_status(struct phy_device *phydev,
const struct kszphy_type *type = phydev->drv->driver_data;
unsigned long pair_mask = type->pair_mask;
int retries = 20;
- int pair, ret;
+ int ret = 0;
+ int pair;
*finished = false;
@@ -2380,8 +2423,8 @@ static void lan8814_get_sig_rx(struct sk_buff *skb, u16 *sig)
*sig = (__force u16)(ntohs(ptp_header->sequence_id));
}
-static bool lan8814_match_rx_ts(struct kszphy_ptp_priv *ptp_priv,
- struct sk_buff *skb)
+static bool lan8814_match_rx_skb(struct kszphy_ptp_priv *ptp_priv,
+ struct sk_buff *skb)
{
struct skb_shared_hwtstamps *shhwtstamps;
struct lan8814_ptp_rx_ts *rx_ts, *tmp;
@@ -2430,7 +2473,7 @@ static bool lan8814_rxtstamp(struct mii_timestamper *mii_ts, struct sk_buff *skb
/* If we failed to match then add it to the queue for when the timestamp
* will come
*/
- if (!lan8814_match_rx_ts(ptp_priv, skb))
+ if (!lan8814_match_rx_skb(ptp_priv, skb))
skb_queue_tail(&ptp_priv->rx_queue, skb);
return true;
@@ -2680,18 +2723,14 @@ static void lan8814_get_sig_tx(struct sk_buff *skb, u16 *sig)
*sig = (__force u16)(ntohs(ptp_header->sequence_id));
}
-static void lan8814_dequeue_tx_skb(struct kszphy_ptp_priv *ptp_priv)
+static void lan8814_match_tx_skb(struct kszphy_ptp_priv *ptp_priv,
+ u32 seconds, u32 nsec, u16 seq_id)
{
- struct phy_device *phydev = ptp_priv->phydev;
struct skb_shared_hwtstamps shhwtstamps;
struct sk_buff *skb, *skb_tmp;
unsigned long flags;
- u32 seconds, nsec;
bool ret = false;
u16 skb_sig;
- u16 seq_id;
-
- lan8814_ptp_tx_ts_get(phydev, &seconds, &nsec, &seq_id);
spin_lock_irqsave(&ptp_priv->tx_queue.lock, flags);
skb_queue_walk_safe(&ptp_priv->tx_queue, skb, skb_tmp) {
@@ -2713,6 +2752,16 @@ static void lan8814_dequeue_tx_skb(struct kszphy_ptp_priv *ptp_priv)
}
}
+static void lan8814_dequeue_tx_skb(struct kszphy_ptp_priv *ptp_priv)
+{
+ struct phy_device *phydev = ptp_priv->phydev;
+ u32 seconds, nsec;
+ u16 seq_id;
+
+ lan8814_ptp_tx_ts_get(phydev, &seconds, &nsec, &seq_id);
+ lan8814_match_tx_skb(ptp_priv, seconds, nsec, seq_id);
+}
+
static void lan8814_get_tx_ts(struct kszphy_ptp_priv *ptp_priv)
{
struct phy_device *phydev = ptp_priv->phydev;
@@ -2761,11 +2810,27 @@ static bool lan8814_match_skb(struct kszphy_ptp_priv *ptp_priv,
return ret;
}
+static void lan8814_match_rx_ts(struct kszphy_ptp_priv *ptp_priv,
+ struct lan8814_ptp_rx_ts *rx_ts)
+{
+ unsigned long flags;
+
+	/* If we failed to match the skb, add it to the list for when
+	 * the frame arrives
+ */
+ if (!lan8814_match_skb(ptp_priv, rx_ts)) {
+ spin_lock_irqsave(&ptp_priv->rx_ts_lock, flags);
+ list_add(&rx_ts->list, &ptp_priv->rx_ts_list);
+ spin_unlock_irqrestore(&ptp_priv->rx_ts_lock, flags);
+ } else {
+ kfree(rx_ts);
+ }
+}
+
static void lan8814_get_rx_ts(struct kszphy_ptp_priv *ptp_priv)
{
struct phy_device *phydev = ptp_priv->phydev;
struct lan8814_ptp_rx_ts *rx_ts;
- unsigned long flags;
u32 reg;
do {
@@ -2775,17 +2840,7 @@ static void lan8814_get_rx_ts(struct kszphy_ptp_priv *ptp_priv)
lan8814_ptp_rx_ts_get(phydev, &rx_ts->seconds, &rx_ts->nsec,
&rx_ts->seq_id);
-
- /* If we failed to match the skb add it to the queue for when
- * the frame will come
- */
- if (!lan8814_match_skb(ptp_priv, rx_ts)) {
- spin_lock_irqsave(&ptp_priv->rx_ts_lock, flags);
- list_add(&rx_ts->list, &ptp_priv->rx_ts_list);
- spin_unlock_irqrestore(&ptp_priv->rx_ts_lock, flags);
- } else {
- kfree(rx_ts);
- }
+ lan8814_match_rx_ts(ptp_priv, rx_ts);
/* If other timestamps are available in the FIFO,
* process them.
@@ -2794,13 +2849,11 @@ static void lan8814_get_rx_ts(struct kszphy_ptp_priv *ptp_priv)
} while (PTP_CAP_INFO_RX_TS_CNT_GET_(reg) > 0);
}
-static void lan8814_handle_ptp_interrupt(struct phy_device *phydev)
+static void lan8814_handle_ptp_interrupt(struct phy_device *phydev, u16 status)
{
struct kszphy_priv *priv = phydev->priv;
struct kszphy_ptp_priv *ptp_priv = &priv->ptp_priv;
- u16 status;
- status = lanphy_read_page_reg(phydev, 5, PTP_TSU_INT_STS);
if (status & PTP_TSU_INT_STS_PTP_TX_TS_EN_)
lan8814_get_tx_ts(ptp_priv);
@@ -2899,8 +2952,8 @@ static int lan8804_config_intr(struct phy_device *phydev)
static irqreturn_t lan8814_handle_interrupt(struct phy_device *phydev)
{
- int irq_status, tsu_irq_status;
int ret = IRQ_NONE;
+ int irq_status;
irq_status = phy_read(phydev, LAN8814_INTS);
if (irq_status < 0) {
@@ -2913,20 +2966,13 @@ static irqreturn_t lan8814_handle_interrupt(struct phy_device *phydev)
ret = IRQ_HANDLED;
}
- while (1) {
- tsu_irq_status = lanphy_read_page_reg(phydev, 4,
- LAN8814_INTR_STS_REG);
-
- if (tsu_irq_status > 0 &&
- (tsu_irq_status & (LAN8814_INTR_STS_REG_1588_TSU0_ |
- LAN8814_INTR_STS_REG_1588_TSU1_ |
- LAN8814_INTR_STS_REG_1588_TSU2_ |
- LAN8814_INTR_STS_REG_1588_TSU3_))) {
- lan8814_handle_ptp_interrupt(phydev);
- ret = IRQ_HANDLED;
- } else {
+ while (true) {
+ irq_status = lanphy_read_page_reg(phydev, 5, PTP_TSU_INT_STS);
+ if (!irq_status)
break;
- }
+
+ lan8814_handle_ptp_interrupt(phydev, irq_status);
+ ret = IRQ_HANDLED;
}
return ret;
@@ -3016,10 +3062,6 @@ static int lan8814_ptp_probe_once(struct phy_device *phydev)
{
struct lan8814_shared_priv *shared = phydev->shared->priv;
- if (!IS_ENABLED(CONFIG_PTP_1588_CLOCK) ||
- !IS_ENABLED(CONFIG_NETWORK_PHY_TIMESTAMPING))
- return 0;
-
/* Initialise shared lock for clock*/
mutex_init(&shared->shared_lock);
@@ -3039,12 +3081,16 @@ static int lan8814_ptp_probe_once(struct phy_device *phydev)
shared->ptp_clock = ptp_clock_register(&shared->ptp_clock_info,
&phydev->mdio.dev);
- if (IS_ERR_OR_NULL(shared->ptp_clock)) {
+ if (IS_ERR(shared->ptp_clock)) {
phydev_err(phydev, "ptp_clock_register failed %lu\n",
PTR_ERR(shared->ptp_clock));
return -EINVAL;
}
+ /* Check if PHC support is missing at the configuration level */
+ if (!shared->ptp_clock)
+ return 0;
+
phydev_dbg(phydev, "successfully registered ptp clock\n");
shared->phydev = phydev;
@@ -3160,6 +3206,704 @@ static int lan8814_probe(struct phy_device *phydev)
return 0;
}
+#define LAN8841_MMD_TIMER_REG 0
+#define LAN8841_MMD0_REGISTER_17 17
+#define LAN8841_MMD0_REGISTER_17_DROP_OPT(x) ((x) & 0x3)
+#define LAN8841_MMD0_REGISTER_17_XMIT_TOG_TX_DIS BIT(3)
+#define LAN8841_OPERATION_MODE_STRAP_OVERRIDE_LOW_REG 2
+#define LAN8841_OPERATION_MODE_STRAP_OVERRIDE_LOW_REG_MAGJACK BIT(14)
+#define LAN8841_MMD_ANALOG_REG 28
+#define LAN8841_ANALOG_CONTROL_1 1
+#define LAN8841_ANALOG_CONTROL_1_PLL_TRIM(x) (((x) & 0x3) << 5)
+#define LAN8841_ANALOG_CONTROL_10 13
+#define LAN8841_ANALOG_CONTROL_10_PLL_DIV(x) ((x) & 0x3)
+#define LAN8841_ANALOG_CONTROL_11 14
+#define LAN8841_ANALOG_CONTROL_11_LDO_REF(x) (((x) & 0x7) << 12)
+#define LAN8841_TX_LOW_I_CH_C_D_POWER_MANAGMENT 69
+#define LAN8841_TX_LOW_I_CH_C_D_POWER_MANAGMENT_VAL 0xbffc
+#define LAN8841_BTRX_POWER_DOWN 70
+#define LAN8841_BTRX_POWER_DOWN_QBIAS_CH_A BIT(0)
+#define LAN8841_BTRX_POWER_DOWN_BTRX_CH_A BIT(1)
+#define LAN8841_BTRX_POWER_DOWN_QBIAS_CH_B BIT(2)
+#define LAN8841_BTRX_POWER_DOWN_BTRX_CH_B BIT(3)
+#define LAN8841_BTRX_POWER_DOWN_BTRX_CH_C BIT(5)
+#define LAN8841_BTRX_POWER_DOWN_BTRX_CH_D BIT(7)
+#define LAN8841_ADC_CHANNEL_MASK 198
+#define LAN8841_PTP_RX_PARSE_L2_ADDR_EN 370
+#define LAN8841_PTP_RX_PARSE_IP_ADDR_EN 371
+#define LAN8841_PTP_TX_PARSE_L2_ADDR_EN 434
+#define LAN8841_PTP_TX_PARSE_IP_ADDR_EN 435
+#define LAN8841_PTP_CMD_CTL 256
+#define LAN8841_PTP_CMD_CTL_PTP_ENABLE BIT(2)
+#define LAN8841_PTP_CMD_CTL_PTP_DISABLE BIT(1)
+#define LAN8841_PTP_CMD_CTL_PTP_RESET BIT(0)
+#define LAN8841_PTP_RX_PARSE_CONFIG 368
+#define LAN8841_PTP_TX_PARSE_CONFIG 432
+
+static int lan8841_config_init(struct phy_device *phydev)
+{
+ int ret;
+
+ ret = ksz9131_config_init(phydev);
+ if (ret)
+ return ret;
+
+ /* Initialize the HW by resetting everything */
+ phy_modify_mmd(phydev, KSZ9131RN_MMD_COMMON_CTRL_REG,
+ LAN8841_PTP_CMD_CTL,
+ LAN8841_PTP_CMD_CTL_PTP_RESET,
+ LAN8841_PTP_CMD_CTL_PTP_RESET);
+
+ phy_modify_mmd(phydev, KSZ9131RN_MMD_COMMON_CTRL_REG,
+ LAN8841_PTP_CMD_CTL,
+ LAN8841_PTP_CMD_CTL_PTP_ENABLE,
+ LAN8841_PTP_CMD_CTL_PTP_ENABLE);
+
+ /* Don't process any frames */
+ phy_write_mmd(phydev, KSZ9131RN_MMD_COMMON_CTRL_REG,
+ LAN8841_PTP_RX_PARSE_CONFIG, 0);
+ phy_write_mmd(phydev, KSZ9131RN_MMD_COMMON_CTRL_REG,
+ LAN8841_PTP_TX_PARSE_CONFIG, 0);
+ phy_write_mmd(phydev, KSZ9131RN_MMD_COMMON_CTRL_REG,
+ LAN8841_PTP_TX_PARSE_L2_ADDR_EN, 0);
+ phy_write_mmd(phydev, KSZ9131RN_MMD_COMMON_CTRL_REG,
+ LAN8841_PTP_RX_PARSE_L2_ADDR_EN, 0);
+ phy_write_mmd(phydev, KSZ9131RN_MMD_COMMON_CTRL_REG,
+ LAN8841_PTP_TX_PARSE_IP_ADDR_EN, 0);
+ phy_write_mmd(phydev, KSZ9131RN_MMD_COMMON_CTRL_REG,
+ LAN8841_PTP_RX_PARSE_IP_ADDR_EN, 0);
+
+	/* 100BT Clause 40 improvement errata */
+ phy_write_mmd(phydev, LAN8841_MMD_ANALOG_REG,
+ LAN8841_ANALOG_CONTROL_1,
+ LAN8841_ANALOG_CONTROL_1_PLL_TRIM(0x2));
+ phy_write_mmd(phydev, LAN8841_MMD_ANALOG_REG,
+ LAN8841_ANALOG_CONTROL_10,
+ LAN8841_ANALOG_CONTROL_10_PLL_DIV(0x1));
+
+ /* 10M/100M Ethernet Signal Tuning Errata for Shorted-Center Tap
+ * Magnetics
+ */
+ ret = phy_read_mmd(phydev, KSZ9131RN_MMD_COMMON_CTRL_REG,
+ LAN8841_OPERATION_MODE_STRAP_OVERRIDE_LOW_REG);
+ if (ret & LAN8841_OPERATION_MODE_STRAP_OVERRIDE_LOW_REG_MAGJACK) {
+ phy_write_mmd(phydev, LAN8841_MMD_ANALOG_REG,
+ LAN8841_TX_LOW_I_CH_C_D_POWER_MANAGMENT,
+ LAN8841_TX_LOW_I_CH_C_D_POWER_MANAGMENT_VAL);
+ phy_write_mmd(phydev, LAN8841_MMD_ANALOG_REG,
+ LAN8841_BTRX_POWER_DOWN,
+ LAN8841_BTRX_POWER_DOWN_QBIAS_CH_A |
+ LAN8841_BTRX_POWER_DOWN_BTRX_CH_A |
+ LAN8841_BTRX_POWER_DOWN_QBIAS_CH_B |
+ LAN8841_BTRX_POWER_DOWN_BTRX_CH_B |
+ LAN8841_BTRX_POWER_DOWN_BTRX_CH_C |
+ LAN8841_BTRX_POWER_DOWN_BTRX_CH_D);
+ }
+
+ /* LDO Adjustment errata */
+ phy_write_mmd(phydev, LAN8841_MMD_ANALOG_REG,
+ LAN8841_ANALOG_CONTROL_11,
+ LAN8841_ANALOG_CONTROL_11_LDO_REF(1));
+
+ /* 100BT RGMII latency tuning errata */
+ phy_write_mmd(phydev, MDIO_MMD_PMAPMD,
+ LAN8841_ADC_CHANNEL_MASK, 0x0);
+ phy_write_mmd(phydev, LAN8841_MMD_TIMER_REG,
+ LAN8841_MMD0_REGISTER_17,
+ LAN8841_MMD0_REGISTER_17_DROP_OPT(2) |
+ LAN8841_MMD0_REGISTER_17_XMIT_TOG_TX_DIS);
+
+ return 0;
+}
+
+#define LAN8841_OUTPUT_CTRL 25
+#define LAN8841_OUTPUT_CTRL_INT_BUFFER BIT(14)
+#define LAN8841_INT_PTP BIT(9)
+
+static int lan8841_config_intr(struct phy_device *phydev)
+{
+ int err;
+
+ phy_modify(phydev, LAN8841_OUTPUT_CTRL,
+ LAN8841_OUTPUT_CTRL_INT_BUFFER, 0);
+
+ if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
+ err = phy_read(phydev, LAN8814_INTS);
+ if (err)
+ return err;
+
+		/* Enable / disable interrupts. It is OK to enable the PTP
+		 * interrupt even if PTP is not enabled, because the
+		 * underlying blocks will not enable PTP, so we will never
+		 * get the PTP interrupt.
+ */
+ err = phy_write(phydev, LAN8814_INTC,
+ LAN8814_INT_LINK | LAN8841_INT_PTP);
+ } else {
+ err = phy_write(phydev, LAN8814_INTC, 0);
+ if (err)
+ return err;
+
+ err = phy_read(phydev, LAN8814_INTS);
+ }
+
+ return err;
+}
+
+#define LAN8841_PTP_TX_EGRESS_SEC_LO 453
+#define LAN8841_PTP_TX_EGRESS_SEC_HI 452
+#define LAN8841_PTP_TX_EGRESS_NS_LO 451
+#define LAN8841_PTP_TX_EGRESS_NS_HI 450
+#define LAN8841_PTP_TX_EGRESS_NSEC_HI_VALID BIT(15)
+#define LAN8841_PTP_TX_MSG_HEADER2 455
+
+static bool lan8841_ptp_get_tx_ts(struct kszphy_ptp_priv *ptp_priv,
+ u32 *sec, u32 *nsec, u16 *seq)
+{
+ struct phy_device *phydev = ptp_priv->phydev;
+
+ *nsec = phy_read_mmd(phydev, 2, LAN8841_PTP_TX_EGRESS_NS_HI);
+ if (!(*nsec & LAN8841_PTP_TX_EGRESS_NSEC_HI_VALID))
+ return false;
+
+ *nsec = ((*nsec & 0x3fff) << 16);
+ *nsec = *nsec | phy_read_mmd(phydev, 2, LAN8841_PTP_TX_EGRESS_NS_LO);
+
+ *sec = phy_read_mmd(phydev, 2, LAN8841_PTP_TX_EGRESS_SEC_HI);
+ *sec = *sec << 16;
+ *sec = *sec | phy_read_mmd(phydev, 2, LAN8841_PTP_TX_EGRESS_SEC_LO);
+
+ *seq = phy_read_mmd(phydev, 2, LAN8841_PTP_TX_MSG_HEADER2);
+
+ return true;
+}
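/* Timestamp reassembly sketch for the reads above: NS_HI carries a
 * valid flag in bit 15 and the top 14 bits of a 30-bit nanosecond
 * value (2^30 > 10^9, so 30 bits suffice). Worked example:
 *
 *	NS_HI = 0x8bc3  ->  valid bit set, high part 0x0bc3
 *	NS_LO = 0x1a2b
 *	nsec  = (0x0bc3 << 16) | 0x1a2b = 197335595 ns
 */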
+
+static void lan8841_ptp_process_tx_ts(struct kszphy_ptp_priv *ptp_priv)
+{
+ u32 sec, nsec;
+ u16 seq;
+
+ while (lan8841_ptp_get_tx_ts(ptp_priv, &sec, &nsec, &seq))
+ lan8814_match_tx_skb(ptp_priv, sec, nsec, seq);
+}
+
+#define LAN8841_PTP_RX_INGRESS_SEC_LO 389
+#define LAN8841_PTP_RX_INGRESS_SEC_HI 388
+#define LAN8841_PTP_RX_INGRESS_NS_LO 387
+#define LAN8841_PTP_RX_INGRESS_NS_HI 386
+#define LAN8841_PTP_RX_INGRESS_NSEC_HI_VALID BIT(15)
+#define LAN8841_PTP_RX_MSG_HEADER2 391
+
+static struct lan8814_ptp_rx_ts *lan8841_ptp_get_rx_ts(struct kszphy_ptp_priv *ptp_priv)
+{
+ struct phy_device *phydev = ptp_priv->phydev;
+ struct lan8814_ptp_rx_ts *rx_ts;
+ u32 sec, nsec;
+ u16 seq;
+
+ nsec = phy_read_mmd(phydev, 2, LAN8841_PTP_RX_INGRESS_NS_HI);
+ if (!(nsec & LAN8841_PTP_RX_INGRESS_NSEC_HI_VALID))
+ return NULL;
+
+ nsec = ((nsec & 0x3fff) << 16);
+ nsec = nsec | phy_read_mmd(phydev, 2, LAN8841_PTP_RX_INGRESS_NS_LO);
+
+ sec = phy_read_mmd(phydev, 2, LAN8841_PTP_RX_INGRESS_SEC_HI);
+ sec = sec << 16;
+ sec = sec | phy_read_mmd(phydev, 2, LAN8841_PTP_RX_INGRESS_SEC_LO);
+
+ seq = phy_read_mmd(phydev, 2, LAN8841_PTP_RX_MSG_HEADER2);
+
+ rx_ts = kzalloc(sizeof(*rx_ts), GFP_KERNEL);
+ if (!rx_ts)
+ return NULL;
+
+ rx_ts->seconds = sec;
+ rx_ts->nsec = nsec;
+ rx_ts->seq_id = seq;
+
+ return rx_ts;
+}
+
+static void lan8841_ptp_process_rx_ts(struct kszphy_ptp_priv *ptp_priv)
+{
+ struct lan8814_ptp_rx_ts *rx_ts;
+
+ while ((rx_ts = lan8841_ptp_get_rx_ts(ptp_priv)) != NULL)
+ lan8814_match_rx_ts(ptp_priv, rx_ts);
+}
+
+#define LAN8841_PTP_INT_STS 259
+#define LAN8841_PTP_INT_STS_PTP_TX_TS_OVRFL_INT BIT(13)
+#define LAN8841_PTP_INT_STS_PTP_TX_TS_INT BIT(12)
+#define LAN8841_PTP_INT_STS_PTP_RX_TS_OVRFL_INT BIT(9)
+#define LAN8841_PTP_INT_STS_PTP_RX_TS_INT BIT(8)
+
+static void lan8841_ptp_flush_fifo(struct kszphy_ptp_priv *ptp_priv, bool egress)
+{
+ struct phy_device *phydev = ptp_priv->phydev;
+ int i;
+
+ for (i = 0; i < FIFO_SIZE; ++i)
+ phy_read_mmd(phydev, 2,
+ egress ? LAN8841_PTP_TX_MSG_HEADER2 :
+ LAN8841_PTP_RX_MSG_HEADER2);
+
+ phy_read_mmd(phydev, 2, LAN8841_PTP_INT_STS);
+}
+
+static void lan8841_handle_ptp_interrupt(struct phy_device *phydev)
+{
+ struct kszphy_priv *priv = phydev->priv;
+ struct kszphy_ptp_priv *ptp_priv = &priv->ptp_priv;
+ u16 status;
+
+ do {
+ status = phy_read_mmd(phydev, 2, LAN8841_PTP_INT_STS);
+ if (status & LAN8841_PTP_INT_STS_PTP_TX_TS_INT)
+ lan8841_ptp_process_tx_ts(ptp_priv);
+
+ if (status & LAN8841_PTP_INT_STS_PTP_RX_TS_INT)
+ lan8841_ptp_process_rx_ts(ptp_priv);
+
+ if (status & LAN8841_PTP_INT_STS_PTP_TX_TS_OVRFL_INT) {
+ lan8841_ptp_flush_fifo(ptp_priv, true);
+ skb_queue_purge(&ptp_priv->tx_queue);
+ }
+
+ if (status & LAN8841_PTP_INT_STS_PTP_RX_TS_OVRFL_INT) {
+ lan8841_ptp_flush_fifo(ptp_priv, false);
+ skb_queue_purge(&ptp_priv->rx_queue);
+ }
+
+ } while (status);
+}
+
+#define LAN8841_INTS_PTP BIT(9)
+
+static irqreturn_t lan8841_handle_interrupt(struct phy_device *phydev)
+{
+ irqreturn_t ret = IRQ_NONE;
+ int irq_status;
+
+ irq_status = phy_read(phydev, LAN8814_INTS);
+ if (irq_status < 0) {
+ phy_error(phydev);
+ return IRQ_NONE;
+ }
+
+ if (irq_status & LAN8814_INT_LINK) {
+ phy_trigger_machine(phydev);
+ ret = IRQ_HANDLED;
+ }
+
+ if (irq_status & LAN8841_INTS_PTP) {
+ lan8841_handle_ptp_interrupt(phydev);
+ ret = IRQ_HANDLED;
+ }
+
+ return ret;
+}
+
+static int lan8841_ts_info(struct mii_timestamper *mii_ts,
+ struct ethtool_ts_info *info)
+{
+ struct kszphy_ptp_priv *ptp_priv;
+
+ ptp_priv = container_of(mii_ts, struct kszphy_ptp_priv, mii_ts);
+
+ info->phc_index = ptp_priv->ptp_clock ?
+ ptp_clock_index(ptp_priv->ptp_clock) : -1;
+ if (info->phc_index == -1) {
+ info->so_timestamping |= SOF_TIMESTAMPING_TX_SOFTWARE |
+ SOF_TIMESTAMPING_RX_SOFTWARE |
+ SOF_TIMESTAMPING_SOFTWARE;
+ return 0;
+ }
+
+ info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
+ SOF_TIMESTAMPING_RX_HARDWARE |
+ SOF_TIMESTAMPING_RAW_HARDWARE;
+
+ info->tx_types = (1 << HWTSTAMP_TX_OFF) |
+ (1 << HWTSTAMP_TX_ON) |
+ (1 << HWTSTAMP_TX_ONESTEP_SYNC);
+
+ info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L4_EVENT) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L2_EVENT) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_EVENT);
+
+ return 0;
+}
+
+#define LAN8841_PTP_INT_EN 260
+#define LAN8841_PTP_INT_EN_PTP_TX_TS_OVRFL_EN BIT(13)
+#define LAN8841_PTP_INT_EN_PTP_TX_TS_EN BIT(12)
+#define LAN8841_PTP_INT_EN_PTP_RX_TS_OVRFL_EN BIT(9)
+#define LAN8841_PTP_INT_EN_PTP_RX_TS_EN BIT(8)
+
+static void lan8841_ptp_enable_int(struct kszphy_ptp_priv *ptp_priv,
+ bool enable)
+{
+ struct phy_device *phydev = ptp_priv->phydev;
+
+ if (enable)
+ /* Enable interrupts */
+ phy_modify_mmd(phydev, 2, LAN8841_PTP_INT_EN,
+ LAN8841_PTP_INT_EN_PTP_TX_TS_OVRFL_EN |
+ LAN8841_PTP_INT_EN_PTP_RX_TS_OVRFL_EN |
+ LAN8841_PTP_INT_EN_PTP_TX_TS_EN |
+ LAN8841_PTP_INT_EN_PTP_RX_TS_EN,
+ LAN8841_PTP_INT_EN_PTP_TX_TS_OVRFL_EN |
+ LAN8841_PTP_INT_EN_PTP_RX_TS_OVRFL_EN |
+ LAN8841_PTP_INT_EN_PTP_TX_TS_EN |
+ LAN8841_PTP_INT_EN_PTP_RX_TS_EN);
+ else
+ /* Disable interrupts */
+ phy_modify_mmd(phydev, 2, LAN8841_PTP_INT_EN,
+ LAN8841_PTP_INT_EN_PTP_TX_TS_OVRFL_EN |
+ LAN8841_PTP_INT_EN_PTP_RX_TS_OVRFL_EN |
+ LAN8841_PTP_INT_EN_PTP_TX_TS_EN |
+ LAN8841_PTP_INT_EN_PTP_RX_TS_EN, 0);
+}
+
+#define LAN8841_PTP_RX_TIMESTAMP_EN 379
+#define LAN8841_PTP_TX_TIMESTAMP_EN 443
+#define LAN8841_PTP_TX_MOD 445
+
+static int lan8841_hwtstamp(struct mii_timestamper *mii_ts, struct ifreq *ifr)
+{
+ struct kszphy_ptp_priv *ptp_priv = container_of(mii_ts, struct kszphy_ptp_priv, mii_ts);
+ struct phy_device *phydev = ptp_priv->phydev;
+ struct lan8814_ptp_rx_ts *rx_ts, *tmp;
+ struct hwtstamp_config config;
+ int txcfg = 0, rxcfg = 0;
+ int pkt_ts_enable;
+
+ if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
+ return -EFAULT;
+
+ ptp_priv->hwts_tx_type = config.tx_type;
+ ptp_priv->rx_filter = config.rx_filter;
+
+ switch (config.rx_filter) {
+ case HWTSTAMP_FILTER_NONE:
+ ptp_priv->layer = 0;
+ ptp_priv->version = 0;
+ break;
+ case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
+ ptp_priv->layer = PTP_CLASS_L4;
+ ptp_priv->version = PTP_CLASS_V2;
+ break;
+ case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
+ ptp_priv->layer = PTP_CLASS_L2;
+ ptp_priv->version = PTP_CLASS_V2;
+ break;
+ case HWTSTAMP_FILTER_PTP_V2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
+ ptp_priv->layer = PTP_CLASS_L4 | PTP_CLASS_L2;
+ ptp_priv->version = PTP_CLASS_V2;
+ break;
+ default:
+ return -ERANGE;
+ }
+
+	/* Set up frame parsing and enable timestamping for PTP
+	 * frames
+ */
+ if (ptp_priv->layer & PTP_CLASS_L2) {
+ rxcfg |= PTP_RX_PARSE_CONFIG_LAYER2_EN_;
+ txcfg |= PTP_TX_PARSE_CONFIG_LAYER2_EN_;
+ } else if (ptp_priv->layer & PTP_CLASS_L4) {
+ rxcfg |= PTP_RX_PARSE_CONFIG_IPV4_EN_ | PTP_RX_PARSE_CONFIG_IPV6_EN_;
+ txcfg |= PTP_TX_PARSE_CONFIG_IPV4_EN_ | PTP_TX_PARSE_CONFIG_IPV6_EN_;
+ }
+
+ phy_write_mmd(phydev, 2, LAN8841_PTP_RX_PARSE_CONFIG, rxcfg);
+ phy_write_mmd(phydev, 2, LAN8841_PTP_TX_PARSE_CONFIG, txcfg);
+
+ pkt_ts_enable = PTP_TIMESTAMP_EN_SYNC_ | PTP_TIMESTAMP_EN_DREQ_ |
+ PTP_TIMESTAMP_EN_PDREQ_ | PTP_TIMESTAMP_EN_PDRES_;
+ phy_write_mmd(phydev, 2, LAN8841_PTP_RX_TIMESTAMP_EN, pkt_ts_enable);
+ phy_write_mmd(phydev, 2, LAN8841_PTP_TX_TIMESTAMP_EN, pkt_ts_enable);
+
+ /* Enable / disable of the TX timestamp in the SYNC frames */
+ phy_modify_mmd(phydev, 2, LAN8841_PTP_TX_MOD,
+ PTP_TX_MOD_TX_PTP_SYNC_TS_INSERT_,
+ ptp_priv->hwts_tx_type == HWTSTAMP_TX_ONESTEP_SYNC ?
+ PTP_TX_MOD_TX_PTP_SYNC_TS_INSERT_ : 0);
+
+ /* Now enable/disable the timestamping */
+ lan8841_ptp_enable_int(ptp_priv,
+ config.rx_filter != HWTSTAMP_FILTER_NONE);
+
+	/* In case of multiple starts and stops, these need to be cleared */
+ list_for_each_entry_safe(rx_ts, tmp, &ptp_priv->rx_ts_list, list) {
+ list_del(&rx_ts->list);
+ kfree(rx_ts);
+ }
+
+ skb_queue_purge(&ptp_priv->rx_queue);
+ skb_queue_purge(&ptp_priv->tx_queue);
+
+ lan8841_ptp_flush_fifo(ptp_priv, false);
+ lan8841_ptp_flush_fifo(ptp_priv, true);
+
+ return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ? -EFAULT : 0;
+}
+
+#define LAN8841_PTP_LTC_SET_SEC_HI 262
+#define LAN8841_PTP_LTC_SET_SEC_MID 263
+#define LAN8841_PTP_LTC_SET_SEC_LO 264
+#define LAN8841_PTP_LTC_SET_NS_HI 265
+#define LAN8841_PTP_LTC_SET_NS_LO 266
+#define LAN8841_PTP_CMD_CTL_PTP_LTC_LOAD BIT(4)
+
+static int lan8841_ptp_settime64(struct ptp_clock_info *ptp,
+ const struct timespec64 *ts)
+{
+ struct kszphy_ptp_priv *ptp_priv = container_of(ptp, struct kszphy_ptp_priv,
+ ptp_clock_info);
+ struct phy_device *phydev = ptp_priv->phydev;
+
+ /* Set the value to be stored */
+ mutex_lock(&ptp_priv->ptp_lock);
+ phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_SET_SEC_LO, lower_16_bits(ts->tv_sec));
+ phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_SET_SEC_MID, upper_16_bits(ts->tv_sec));
+ phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_SET_SEC_HI, upper_32_bits(ts->tv_sec) & 0xffff);
+ phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_SET_NS_LO, lower_16_bits(ts->tv_nsec));
+ phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_SET_NS_HI, upper_16_bits(ts->tv_nsec) & 0x3fff);
+
+ /* Set the command to load the LTC */
+ phy_write_mmd(phydev, 2, LAN8841_PTP_CMD_CTL,
+ LAN8841_PTP_CMD_CTL_PTP_LTC_LOAD);
+ mutex_unlock(&ptp_priv->ptp_lock);
+
+ return 0;
+}
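/* Register-split sketch for the stores above, for a hypothetical
 * 48-bit-capable seconds value ts->tv_sec = 0x123456789a:
 *
 *	SEC_LO  = lower_16_bits(tv_sec)           = 0x789a
 *	SEC_MID = upper_16_bits(tv_sec)           = 0x3456
 *	SEC_HI  = upper_32_bits(tv_sec) & 0xffff  = 0x0012
 *
 * while tv_nsec spans NS_LO plus the low 14 bits of NS_HI.
 */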
+
+#define LAN8841_PTP_LTC_RD_SEC_HI 358
+#define LAN8841_PTP_LTC_RD_SEC_MID 359
+#define LAN8841_PTP_LTC_RD_SEC_LO 360
+#define LAN8841_PTP_LTC_RD_NS_HI 361
+#define LAN8841_PTP_LTC_RD_NS_LO 362
+#define LAN8841_PTP_CMD_CTL_PTP_LTC_READ BIT(3)
+
+static int lan8841_ptp_gettime64(struct ptp_clock_info *ptp,
+ struct timespec64 *ts)
+{
+ struct kszphy_ptp_priv *ptp_priv = container_of(ptp, struct kszphy_ptp_priv,
+ ptp_clock_info);
+ struct phy_device *phydev = ptp_priv->phydev;
+ time64_t s;
+ s64 ns;
+
+ mutex_lock(&ptp_priv->ptp_lock);
+ /* Issue the command to read the LTC */
+ phy_write_mmd(phydev, 2, LAN8841_PTP_CMD_CTL,
+ LAN8841_PTP_CMD_CTL_PTP_LTC_READ);
+
+ /* Read the LTC */
+ s = phy_read_mmd(phydev, 2, LAN8841_PTP_LTC_RD_SEC_HI);
+ s <<= 16;
+ s |= phy_read_mmd(phydev, 2, LAN8841_PTP_LTC_RD_SEC_MID);
+ s <<= 16;
+ s |= phy_read_mmd(phydev, 2, LAN8841_PTP_LTC_RD_SEC_LO);
+
+ ns = phy_read_mmd(phydev, 2, LAN8841_PTP_LTC_RD_NS_HI) & 0x3fff;
+ ns <<= 16;
+ ns |= phy_read_mmd(phydev, 2, LAN8841_PTP_LTC_RD_NS_LO);
+ mutex_unlock(&ptp_priv->ptp_lock);
+
+ set_normalized_timespec64(ts, s, ns);
+ return 0;
+}
+
+#define LAN8841_PTP_LTC_STEP_ADJ_LO 276
+#define LAN8841_PTP_LTC_STEP_ADJ_HI 275
+#define LAN8841_PTP_LTC_STEP_ADJ_DIR BIT(15)
+#define LAN8841_PTP_CMD_CTL_PTP_LTC_STEP_SECONDS BIT(5)
+#define LAN8841_PTP_CMD_CTL_PTP_LTC_STEP_NANOSECONDS BIT(6)
+
+static int lan8841_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+{
+ struct kszphy_ptp_priv *ptp_priv = container_of(ptp, struct kszphy_ptp_priv,
+ ptp_clock_info);
+ struct phy_device *phydev = ptp_priv->phydev;
+ struct timespec64 ts;
+ bool add = true;
+ u32 nsec;
+ s32 sec;
+
+	/* The HW allows steps of up to 15 sec, but here we limit the
+	 * adjustment to 10 sec. The reason is that an adjustment of 14 sec
+	 * and 999999999 nsec would grow past 15 sec once we add the 8 ns
+	 * compensation for the normal increment. Limiting the range avoids
+	 * these corner cases.
+ */
+ if (delta > 10000000000LL || delta < -10000000000LL) {
+		/* The time adjustment is too big, so fall back to setting the time */
+ u64 now;
+
+ ptp->gettime64(ptp, &ts);
+
+ now = ktime_to_ns(timespec64_to_ktime(ts));
+ ts = ns_to_timespec64(now + delta);
+
+ ptp->settime64(ptp, &ts);
+ return 0;
+ }
+
+ sec = div_u64_rem(delta < 0 ? -delta : delta, NSEC_PER_SEC, &nsec);
+ if (delta < 0 && nsec != 0) {
+		/* The nsec part is not allowed to be adjusted downwards, so
+		 * take more from the seconds part and add the complement to
+		 * the nanoseconds so that it rolls over and the seconds part
+		 * increases
+ */
+ sec--;
+ nsec = NSEC_PER_SEC - nsec;
+ }
+
+ /* Calculate the adjustments and the direction */
+ if (delta < 0)
+ add = false;
+
+ if (nsec > 0)
+ /* add 8 ns to cover the likely normal increment */
+ nsec += 8;
+
+ if (nsec >= NSEC_PER_SEC) {
+ /* carry into seconds */
+ sec++;
+ nsec -= NSEC_PER_SEC;
+ }
+
+ mutex_lock(&ptp_priv->ptp_lock);
+ if (sec) {
+ phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_STEP_ADJ_LO, sec);
+ phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_STEP_ADJ_HI,
+ add ? LAN8841_PTP_LTC_STEP_ADJ_DIR : 0);
+ phy_write_mmd(phydev, 2, LAN8841_PTP_CMD_CTL,
+ LAN8841_PTP_CMD_CTL_PTP_LTC_STEP_SECONDS);
+ }
+
+ if (nsec) {
+ phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_STEP_ADJ_LO,
+ nsec & 0xffff);
+ phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_STEP_ADJ_HI,
+ (nsec >> 16) & 0x3fff);
+ phy_write_mmd(phydev, 2, LAN8841_PTP_CMD_CTL,
+ LAN8841_PTP_CMD_CTL_PTP_LTC_STEP_NANOSECONDS);
+ }
+ mutex_unlock(&ptp_priv->ptp_lock);
+
+ return 0;
+}
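/* Worked example of the split above, delta = +1999999999 ns:
 *
 *	div_u64_rem() ->  sec = 1, nsec = 999999999
 *	nsec += 8     ->  1000000007 >= NSEC_PER_SEC
 *	carry         ->  sec = 2, nsec = 7
 *
 * so the LTC is stepped by two whole seconds plus 7 ns, with the
 * seconds step carrying the direction bit.
 */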
+
+#define LAN8841_PTP_LTC_RATE_ADJ_HI 269
+#define LAN8841_PTP_LTC_RATE_ADJ_HI_DIR BIT(15)
+#define LAN8841_PTP_LTC_RATE_ADJ_LO 270
+
+static int lan8841_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
+{
+ struct kszphy_ptp_priv *ptp_priv = container_of(ptp, struct kszphy_ptp_priv,
+ ptp_clock_info);
+ struct phy_device *phydev = ptp_priv->phydev;
+ bool faster = true;
+ u32 rate;
+
+ if (!scaled_ppm)
+ return 0;
+
+ if (scaled_ppm < 0) {
+ scaled_ppm = -scaled_ppm;
+ faster = false;
+ }
+
+ rate = LAN8814_1PPM_FORMAT * (upper_16_bits(scaled_ppm));
+ rate += (LAN8814_1PPM_FORMAT * (lower_16_bits(scaled_ppm))) >> 16;
+
+ mutex_lock(&ptp_priv->ptp_lock);
+ phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_RATE_ADJ_HI,
+ faster ? LAN8841_PTP_LTC_RATE_ADJ_HI_DIR | (upper_16_bits(rate) & 0x3fff)
+ : upper_16_bits(rate) & 0x3fff);
+ phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_RATE_ADJ_LO, lower_16_bits(rate));
+ mutex_unlock(&ptp_priv->ptp_lock);
+
+ return 0;
+}
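/* scaled_ppm bookkeeping sketch for the conversion above: scaled_ppm
 * is ppm * 2^16, so for exactly +1 ppm (scaled_ppm = 65536):
 *
 *	upper_16_bits(65536) = 1, lower_16_bits(65536) = 0
 *	rate = LAN8814_1PPM_FORMAT * 1 + 0
 *
 * i.e. the two-term sum computes FORMAT * scaled_ppm / 2^16 without a
 * 64-bit intermediate, and the direction bit in RATE_ADJ_HI selects
 * faster or slower.
 */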
+
+static struct ptp_clock_info lan8841_ptp_clock_info = {
+ .owner = THIS_MODULE,
+ .name = "lan8841 ptp",
+ .max_adj = 31249999,
+ .gettime64 = lan8841_ptp_gettime64,
+ .settime64 = lan8841_ptp_settime64,
+ .adjtime = lan8841_ptp_adjtime,
+ .adjfine = lan8841_ptp_adjfine,
+};
+
+#define LAN8841_OPERATION_MODE_STRAP_LOW_REGISTER 3
+#define LAN8841_OPERATION_MODE_STRAP_LOW_REGISTER_STRAP_RGMII_EN BIT(0)
+
+static int lan8841_probe(struct phy_device *phydev)
+{
+ struct kszphy_ptp_priv *ptp_priv;
+ struct kszphy_priv *priv;
+ int err;
+
+ err = kszphy_probe(phydev);
+ if (err)
+ return err;
+
+ if (phy_read_mmd(phydev, KSZ9131RN_MMD_COMMON_CTRL_REG,
+ LAN8841_OPERATION_MODE_STRAP_LOW_REGISTER) &
+ LAN8841_OPERATION_MODE_STRAP_LOW_REGISTER_STRAP_RGMII_EN)
+ phydev->interface = PHY_INTERFACE_MODE_RGMII_RXID;
+
+ /* Register the clock */
+ if (!IS_ENABLED(CONFIG_NETWORK_PHY_TIMESTAMPING))
+ return 0;
+
+ priv = phydev->priv;
+ ptp_priv = &priv->ptp_priv;
+
+ ptp_priv->ptp_clock_info = lan8841_ptp_clock_info;
+ ptp_priv->ptp_clock = ptp_clock_register(&ptp_priv->ptp_clock_info,
+ &phydev->mdio.dev);
+ if (IS_ERR(ptp_priv->ptp_clock)) {
+ phydev_err(phydev, "ptp_clock_register failed: %lu\n",
+ PTR_ERR(ptp_priv->ptp_clock));
+ return -EINVAL;
+ }
+
+ if (!ptp_priv->ptp_clock)
+ return 0;
+
+ /* Initialize the SW */
+ skb_queue_head_init(&ptp_priv->tx_queue);
+ skb_queue_head_init(&ptp_priv->rx_queue);
+ INIT_LIST_HEAD(&ptp_priv->rx_ts_list);
+ spin_lock_init(&ptp_priv->rx_ts_lock);
+ ptp_priv->phydev = phydev;
+ mutex_init(&ptp_priv->ptp_lock);
+
+ ptp_priv->mii_ts.rxtstamp = lan8814_rxtstamp;
+ ptp_priv->mii_ts.txtstamp = lan8814_txtstamp;
+ ptp_priv->mii_ts.hwtstamp = lan8841_hwtstamp;
+ ptp_priv->mii_ts.ts_info = lan8841_ts_info;
+
+ phydev->mii_ts = &ptp_priv->mii_ts;
+
+ return 0;
+}
+
static struct phy_driver ksphy_driver[] = {
{
.phy_id = PHY_ID_KS8737,
@@ -3370,12 +4114,30 @@ static struct phy_driver ksphy_driver[] = {
.config_intr = lan8804_config_intr,
.handle_interrupt = lan8804_handle_interrupt,
}, {
+ .phy_id = PHY_ID_LAN8841,
+ .phy_id_mask = MICREL_PHY_ID_MASK,
+ .name = "Microchip LAN8841 Gigabit PHY",
+ .flags = PHY_POLL_CABLE_TEST,
+ .driver_data = &lan8841_type,
+ .config_init = lan8841_config_init,
+ .probe = lan8841_probe,
+ .soft_reset = genphy_soft_reset,
+ .config_intr = lan8841_config_intr,
+ .handle_interrupt = lan8841_handle_interrupt,
+ .get_sset_count = kszphy_get_sset_count,
+ .get_strings = kszphy_get_strings,
+ .get_stats = kszphy_get_stats,
+ .suspend = genphy_suspend,
+ .resume = genphy_resume,
+ .cable_test_start = lan8814_cable_test_start,
+ .cable_test_get_status = ksz886x_cable_test_get_status,
+}, {
.phy_id = PHY_ID_KSZ9131,
.phy_id_mask = MICREL_PHY_ID_MASK,
.name = "Microchip KSZ9131 Gigabit PHY",
/* PHY_GBIT_FEATURES */
.flags = PHY_POLL_CABLE_TEST,
- .driver_data = &ksz9021_type,
+ .driver_data = &ksz9131_type,
.probe = kszphy_probe,
.config_init = ksz9131_config_init,
.config_intr = kszphy_config_intr,
@@ -3430,6 +4192,7 @@ static struct phy_driver ksphy_driver[] = {
.handle_interrupt = kszphy_handle_interrupt,
.suspend = genphy_suspend,
.resume = genphy_resume,
+ .get_features = ksz9477_get_features,
} };
module_phy_driver(ksphy_driver);
@@ -3454,6 +4217,7 @@ static struct mdio_device_id __maybe_unused micrel_tbl[] = {
{ PHY_ID_KSZ886X, MICREL_PHY_ID_MASK },
{ PHY_ID_LAN8814, MICREL_PHY_ID_MASK },
{ PHY_ID_LAN8804, MICREL_PHY_ID_MASK },
+ { PHY_ID_LAN8841, MICREL_PHY_ID_MASK },
{ }
};
diff --git a/drivers/net/phy/microchip_t1.c b/drivers/net/phy/microchip_t1.c
index 8569a545e0a3..a838b61cd844 100644
--- a/drivers/net/phy/microchip_t1.c
+++ b/drivers/net/phy/microchip_t1.c
@@ -245,15 +245,42 @@ static int lan87xx_config_rgmii_delay(struct phy_device *phydev)
PHYACC_ATTR_BANK_MISC, LAN87XX_CTRL_1, rc);
}
+static int lan87xx_phy_init_cmd(struct phy_device *phydev,
+ const struct access_ereg_val *cmd_seq, int cnt)
+{
+ int ret, i;
+
+ for (i = 0; i < cnt; i++) {
+ if (cmd_seq[i].mode == PHYACC_ATTR_MODE_POLL &&
+ cmd_seq[i].bank == PHYACC_ATTR_BANK_SMI) {
+ ret = access_smi_poll_timeout(phydev,
+ cmd_seq[i].offset,
+ cmd_seq[i].val,
+ cmd_seq[i].mask);
+ } else {
+ ret = access_ereg(phydev, cmd_seq[i].mode,
+ cmd_seq[i].bank, cmd_seq[i].offset,
+ cmd_seq[i].val);
+ }
+ if (ret < 0)
+ return ret;
+ }
+
+ return ret;
+}
+
static int lan87xx_phy_init(struct phy_device *phydev)
{
- static const struct access_ereg_val init[] = {
+ static const struct access_ereg_val hw_init[] = {
/* TXPD/TXAMP6 Configs */
{ PHYACC_ATTR_MODE_WRITE, PHYACC_ATTR_BANK_AFE,
T1_AFE_PORT_CFG1_REG, 0x002D, 0 },
/* HW_Init Hi and Force_ED */
{ PHYACC_ATTR_MODE_WRITE, PHYACC_ATTR_BANK_SMI,
T1_POWER_DOWN_CONTROL_REG, 0x0308, 0 },
+ };
+
+ static const struct access_ereg_val slave_init[] = {
/* Equalizer Full Duplex Freeze - T1 Slave */
{ PHYACC_ATTR_MODE_WRITE, PHYACC_ATTR_BANK_DSP,
T1_EQ_FD_STG1_FRZ_CFG, 0x0002, 0 },
@@ -267,6 +294,9 @@ static int lan87xx_phy_init(struct phy_device *phydev)
T1_EQ_WT_FD_LCK_FRZ_CFG, 0x0002, 0 },
{ PHYACC_ATTR_MODE_WRITE, PHYACC_ATTR_BANK_DSP,
T1_PST_EQ_LCK_STG1_FRZ_CFG, 0x0002, 0 },
+ };
+
+ static const struct access_ereg_val phy_init[] = {
/* Slave Full Duplex Multi Configs */
{ PHYACC_ATTR_MODE_WRITE, PHYACC_ATTR_BANK_DSP,
T1_SLV_FD_MULT_CFG_REG, 0x0D53, 0 },
@@ -397,7 +427,7 @@ static int lan87xx_phy_init(struct phy_device *phydev)
{ PHYACC_ATTR_MODE_WRITE, PHYACC_ATTR_BANK_SMI,
T1_POWER_DOWN_CONTROL_REG, 0x0300, 0 },
};
- int rc, i;
+ int rc;
/* phy Soft reset */
rc = genphy_soft_reset(phydev);
@@ -405,21 +435,28 @@ static int lan87xx_phy_init(struct phy_device *phydev)
return rc;
/* PHY Initialization */
- for (i = 0; i < ARRAY_SIZE(init); i++) {
- if (init[i].mode == PHYACC_ATTR_MODE_POLL &&
- init[i].bank == PHYACC_ATTR_BANK_SMI) {
- rc = access_smi_poll_timeout(phydev,
- init[i].offset,
- init[i].val,
- init[i].mask);
- } else {
- rc = access_ereg(phydev, init[i].mode, init[i].bank,
- init[i].offset, init[i].val);
- }
+ rc = lan87xx_phy_init_cmd(phydev, hw_init, ARRAY_SIZE(hw_init));
+ if (rc < 0)
+ return rc;
+
+ rc = genphy_read_master_slave(phydev);
+ if (rc)
+ return rc;
+
+	/* The following sequence needs to run only if phydev is in
+ * slave mode.
+ */
+ if (phydev->master_slave_state == MASTER_SLAVE_STATE_SLAVE) {
+ rc = lan87xx_phy_init_cmd(phydev, slave_init,
+ ARRAY_SIZE(slave_init));
if (rc < 0)
return rc;
}
+ rc = lan87xx_phy_init_cmd(phydev, phy_init, ARRAY_SIZE(phy_init));
+ if (rc < 0)
+ return rc;
+
return lan87xx_config_rgmii_delay(phydev);
}
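
A minimal user-space sketch of the table-driven sequence-runner pattern the new lan87xx_phy_init_cmd() helper introduces; reg_write() and reg_poll() are hypothetical stand-ins for access_ereg() and access_smi_poll_timeout():

#include <stdio.h>

enum seq_mode { SEQ_WRITE, SEQ_POLL };

struct seq_cmd {
	enum seq_mode mode;
	unsigned int reg;
	unsigned int val;
	unsigned int mask;
};

static int reg_write(unsigned int reg, unsigned int val)
{
	printf("write reg 0x%04x = 0x%04x\n", reg, val);
	return 0;
}

static int reg_poll(unsigned int reg, unsigned int val, unsigned int mask)
{
	printf("poll reg 0x%04x until (reg & 0x%04x) == 0x%04x\n",
	       reg, mask, val);
	return 0;
}

static int run_seq(const struct seq_cmd *seq, int cnt)
{
	int ret = 0, i;

	for (i = 0; i < cnt; i++) {
		if (seq[i].mode == SEQ_POLL)
			ret = reg_poll(seq[i].reg, seq[i].val, seq[i].mask);
		else
			ret = reg_write(seq[i].reg, seq[i].val);
		if (ret < 0)
			return ret;	/* abort on the first failure */
	}

	return ret;
}

int main(void)
{
	static const struct seq_cmd init[] = {
		{ SEQ_WRITE, 0x10, 0x002d, 0 },
		{ SEQ_POLL,  0x18, 0x0300, 0x0300 },
	};

	return run_seq(init, 2) < 0;
}

Splitting the original monolithic table into hw_init/slave_init/phy_init keeps this runner unchanged while letting the caller decide which sequences apply.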
@@ -775,6 +812,7 @@ static int lan87xx_read_status(struct phy_device *phydev)
static int lan87xx_config_aneg(struct phy_device *phydev)
{
u16 ctl = 0;
+ int ret;
switch (phydev->master_slave_set) {
case MASTER_SLAVE_CFG_MASTER_FORCE:
@@ -790,7 +828,11 @@ static int lan87xx_config_aneg(struct phy_device *phydev)
return -EOPNOTSUPP;
}
- return phy_modify_changed(phydev, MII_CTRL1000, CTL1000_AS_MASTER, ctl);
+ ret = phy_modify_changed(phydev, MII_CTRL1000, CTL1000_AS_MASTER, ctl);
+ if (ret == 1)
+ return phy_init_hw(phydev);
+
+ return ret;
}
static int lan87xx_get_sqi(struct phy_device *phydev)
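
The change above leans on phy_modify_changed() returning a tristate: negative errno on failure, 0 when the register already held the requested value, and 1 when it was actually modified, so the expensive phy_init_hw() re-initialization runs only on a real master/slave change. A self-contained sketch of the idiom:

#include <stdio.h>

static unsigned int fake_reg;

/* returns <0 on error, 0 if unchanged, 1 if the value was written */
static int modify_changed(unsigned int mask, unsigned int set)
{
	unsigned int new = (fake_reg & ~mask) | set;

	if (new == fake_reg)
		return 0;
	fake_reg = new;
	return 1;
}

static int reinit_hw(void)
{
	printf("re-running the init sequence\n");
	return 0;
}

int main(void)
{
	int ret = modify_changed(0x0c00, 0x0800);

	if (ret == 1)
		return reinit_hw();	/* only on an actual change */

	return ret < 0 ? 1 : 0;
}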
diff --git a/drivers/net/phy/motorcomm.c b/drivers/net/phy/motorcomm.c
index 685190db72de..2fa5a90e073b 100644
--- a/drivers/net/phy/motorcomm.c
+++ b/drivers/net/phy/motorcomm.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0+
/*
- * Motorcomm 8511/8521/8531S PHY driver.
+ * Motorcomm 8511/8521/8531/8531S PHY driver.
*
* Author: Peter Geis <pgwipeout@gmail.com>
* Author: Frank <Frank.Sae@motor-comm.com>
@@ -10,10 +10,12 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/phy.h>
+#include <linux/of.h>
#define PHY_ID_YT8511 0x0000010a
-#define PHY_ID_YT8521 0x0000011A
-#define PHY_ID_YT8531S 0x4F51E91A
+#define PHY_ID_YT8521 0x0000011a
+#define PHY_ID_YT8531 0x4f51e91b
+#define PHY_ID_YT8531S 0x4f51e91a
/* YT8521/YT8531S Register Overview
* UTP Register space | FIBER Register space
@@ -161,6 +163,11 @@
#define YT8521_CHIP_CONFIG_REG 0xA001
#define YT8521_CCR_SW_RST BIT(15)
+/* 1b0 disable 1.9ns rxc clock delay *default*
+ * 1b1 enable 1.9ns rxc clock delay
+ */
+#define YT8521_CCR_RXC_DLY_EN BIT(8)
+#define YT8521_CCR_RXC_DLY_1_900_NS 1900
#define YT8521_CCR_MODE_SEL_MASK (BIT(2) | BIT(1) | BIT(0))
#define YT8521_CCR_MODE_UTP_TO_RGMII 0
@@ -178,22 +185,29 @@
#define YT8521_MODE_POLL 0x3
#define YT8521_RGMII_CONFIG1_REG 0xA003
-
-/* TX Gig-E Delay is bits 3:0, default 0x1
- * TX Fast-E Delay is bits 7:4, default 0xf
- * RX Delay is bits 13:10, default 0x0
- * Delay = 150ps * N
- * On = 2250ps, off = 0ps
+/* 1b0 use original tx_clk_rgmii *default*
+ * 1b1 use inverted tx_clk_rgmii.
*/
-#define YT8521_RC1R_RX_DELAY_MASK (0xF << 10)
-#define YT8521_RC1R_RX_DELAY_EN (0xF << 10)
-#define YT8521_RC1R_RX_DELAY_DIS (0x0 << 10)
-#define YT8521_RC1R_FE_TX_DELAY_MASK (0xF << 4)
-#define YT8521_RC1R_FE_TX_DELAY_EN (0xF << 4)
-#define YT8521_RC1R_FE_TX_DELAY_DIS (0x0 << 4)
-#define YT8521_RC1R_GE_TX_DELAY_MASK (0xF << 0)
-#define YT8521_RC1R_GE_TX_DELAY_EN (0xF << 0)
-#define YT8521_RC1R_GE_TX_DELAY_DIS (0x0 << 0)
+#define YT8521_RC1R_TX_CLK_SEL_INVERTED BIT(14)
+#define YT8521_RC1R_RX_DELAY_MASK GENMASK(13, 10)
+#define YT8521_RC1R_FE_TX_DELAY_MASK GENMASK(7, 4)
+#define YT8521_RC1R_GE_TX_DELAY_MASK GENMASK(3, 0)
+#define YT8521_RC1R_RGMII_0_000_NS 0
+#define YT8521_RC1R_RGMII_0_150_NS 1
+#define YT8521_RC1R_RGMII_0_300_NS 2
+#define YT8521_RC1R_RGMII_0_450_NS 3
+#define YT8521_RC1R_RGMII_0_600_NS 4
+#define YT8521_RC1R_RGMII_0_750_NS 5
+#define YT8521_RC1R_RGMII_0_900_NS 6
+#define YT8521_RC1R_RGMII_1_050_NS 7
+#define YT8521_RC1R_RGMII_1_200_NS 8
+#define YT8521_RC1R_RGMII_1_350_NS 9
+#define YT8521_RC1R_RGMII_1_500_NS 10
+#define YT8521_RC1R_RGMII_1_650_NS 11
+#define YT8521_RC1R_RGMII_1_800_NS 12
+#define YT8521_RC1R_RGMII_1_950_NS 13
+#define YT8521_RC1R_RGMII_2_100_NS 14
+#define YT8521_RC1R_RGMII_2_250_NS 15
#define YTPHY_MISC_CONFIG_REG 0xA006
#define YTPHY_MCR_FIBER_SPEED_MASK BIT(0)
@@ -222,11 +236,36 @@
*/
#define YTPHY_WCR_TYPE_PULSE BIT(0)
-#define YT8531S_SYNCE_CFG_REG 0xA012
-#define YT8531S_SCR_SYNCE_ENABLE BIT(6)
+#define YTPHY_SYNCE_CFG_REG 0xA012
+#define YT8521_SCR_SYNCE_ENABLE BIT(5)
+/* 1b0 output 25m clock
+ * 1b1 output 125m clock *default*
+ */
+#define YT8521_SCR_CLK_FRE_SEL_125M BIT(3)
+#define YT8521_SCR_CLK_SRC_MASK GENMASK(2, 1)
+#define YT8521_SCR_CLK_SRC_PLL_125M 0
+#define YT8521_SCR_CLK_SRC_UTP_RX 1
+#define YT8521_SCR_CLK_SRC_SDS_RX 2
+#define YT8521_SCR_CLK_SRC_REF_25M 3
+#define YT8531_SCR_SYNCE_ENABLE BIT(6)
+/* 1b0 output 25m clock *default*
+ * 1b1 output 125m clock
+ */
+#define YT8531_SCR_CLK_FRE_SEL_125M BIT(4)
+#define YT8531_SCR_CLK_SRC_MASK GENMASK(3, 1)
+#define YT8531_SCR_CLK_SRC_PLL_125M 0
+#define YT8531_SCR_CLK_SRC_UTP_RX 1
+#define YT8531_SCR_CLK_SRC_SDS_RX 2
+#define YT8531_SCR_CLK_SRC_CLOCK_FROM_DIGITAL 3
+#define YT8531_SCR_CLK_SRC_REF_25M 4
+#define YT8531_SCR_CLK_SRC_SSC_25M 5
/* Extended Register end */
+#define YTPHY_DTS_OUTPUT_CLK_DIS 0
+#define YTPHY_DTS_OUTPUT_CLK_25M 25000000
+#define YTPHY_DTS_OUTPUT_CLK_125M 125000000
+
struct yt8521_priv {
/* combo_advertising is used for case of YT8521 in combo mode,
* this means that yt8521 may work in utp or fiber mode which depends
@@ -479,6 +518,61 @@ err_restore_page:
return phy_restore_page(phydev, old_page, ret);
}
+static int yt8531_set_wol(struct phy_device *phydev,
+ struct ethtool_wolinfo *wol)
+{
+ const u16 mac_addr_reg[] = {
+ YTPHY_WOL_MACADDR2_REG,
+ YTPHY_WOL_MACADDR1_REG,
+ YTPHY_WOL_MACADDR0_REG,
+ };
+ const u8 *mac_addr;
+ u16 mask, val;
+ int ret;
+ u8 i;
+
+ if (wol->wolopts & WAKE_MAGIC) {
+ mac_addr = phydev->attached_dev->dev_addr;
+
+ /* Store the device address for the magic packet */
+ for (i = 0; i < 3; i++) {
+ ret = ytphy_write_ext_with_lock(phydev, mac_addr_reg[i],
+ ((mac_addr[i * 2] << 8)) |
+ (mac_addr[i * 2 + 1]));
+ if (ret < 0)
+ return ret;
+ }
+
+ /* Enable WOL feature */
+ mask = YTPHY_WCR_PULSE_WIDTH_MASK | YTPHY_WCR_INTR_SEL;
+ val = YTPHY_WCR_ENABLE | YTPHY_WCR_INTR_SEL;
+ val |= YTPHY_WCR_TYPE_PULSE | YTPHY_WCR_PULSE_WIDTH_672MS;
+ ret = ytphy_modify_ext_with_lock(phydev, YTPHY_WOL_CONFIG_REG,
+ mask, val);
+ if (ret < 0)
+ return ret;
+
+ /* Enable WOL interrupt */
+ ret = phy_modify(phydev, YTPHY_INTERRUPT_ENABLE_REG, 0,
+ YTPHY_IER_WOL);
+ if (ret < 0)
+ return ret;
+ } else {
+ /* Disable WOL feature */
+ mask = YTPHY_WCR_ENABLE | YTPHY_WCR_INTR_SEL;
+ ret = ytphy_modify_ext_with_lock(phydev, YTPHY_WOL_CONFIG_REG,
+ mask, 0);
+ if (ret < 0)
+ return ret;
+
+ /* Disable WOL interrupt */
+ ret = phy_modify(phydev, YTPHY_INTERRUPT_ENABLE_REG,
+ YTPHY_IER_WOL, 0);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
static int yt8511_read_page(struct phy_device *phydev)
{
return __phy_read(phydev, YT8511_PAGE_SELECT);
@@ -594,6 +688,153 @@ static int yt8521_write_page(struct phy_device *phydev, int page)
};
/**
+ * struct ytphy_cfg_reg_map - map a config value to a register value
+ * @cfg: value in device configuration
+ * @reg: value in the register
+ */
+struct ytphy_cfg_reg_map {
+ u32 cfg;
+ u32 reg;
+};
+
+static const struct ytphy_cfg_reg_map ytphy_rgmii_delays[] = {
+ /* for tx delay / rx delay with YT8521_CCR_RXC_DLY_EN is not set. */
+ { 0, YT8521_RC1R_RGMII_0_000_NS },
+ { 150, YT8521_RC1R_RGMII_0_150_NS },
+ { 300, YT8521_RC1R_RGMII_0_300_NS },
+ { 450, YT8521_RC1R_RGMII_0_450_NS },
+ { 600, YT8521_RC1R_RGMII_0_600_NS },
+ { 750, YT8521_RC1R_RGMII_0_750_NS },
+ { 900, YT8521_RC1R_RGMII_0_900_NS },
+ { 1050, YT8521_RC1R_RGMII_1_050_NS },
+ { 1200, YT8521_RC1R_RGMII_1_200_NS },
+ { 1350, YT8521_RC1R_RGMII_1_350_NS },
+ { 1500, YT8521_RC1R_RGMII_1_500_NS },
+ { 1650, YT8521_RC1R_RGMII_1_650_NS },
+ { 1800, YT8521_RC1R_RGMII_1_800_NS },
+ { 1950, YT8521_RC1R_RGMII_1_950_NS }, /* default tx/rx delay */
+ { 2100, YT8521_RC1R_RGMII_2_100_NS },
+ { 2250, YT8521_RC1R_RGMII_2_250_NS },
+
+ /* only for rx delay with YT8521_CCR_RXC_DLY_EN is set. */
+ { 0 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_0_000_NS },
+ { 150 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_0_150_NS },
+ { 300 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_0_300_NS },
+ { 450 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_0_450_NS },
+ { 600 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_0_600_NS },
+ { 750 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_0_750_NS },
+ { 900 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_0_900_NS },
+ { 1050 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_1_050_NS },
+ { 1200 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_1_200_NS },
+ { 1350 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_1_350_NS },
+ { 1500 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_1_500_NS },
+ { 1650 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_1_650_NS },
+ { 1800 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_1_800_NS },
+ { 1950 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_1_950_NS },
+ { 2100 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_2_100_NS },
+ { 2250 + YT8521_CCR_RXC_DLY_1_900_NS, YT8521_RC1R_RGMII_2_250_NS }
+};
+
+static u32 ytphy_get_delay_reg_value(struct phy_device *phydev,
+ const char *prop_name,
+ const struct ytphy_cfg_reg_map *tbl,
+ int tb_size,
+ u16 *rxc_dly_en,
+ u32 dflt)
+{
+ struct device_node *node = phydev->mdio.dev.of_node;
+ int tb_size_half = tb_size / 2;
+ u32 val;
+ int i;
+
+ if (of_property_read_u32(node, prop_name, &val))
+ goto err_dts_val;
+
+ /* when rxc_dly_en is NULL, we are getting the delay for tx; only half
+ * of tb_size is valid.
+ */
+ if (!rxc_dly_en)
+ tb_size = tb_size_half;
+
+ for (i = 0; i < tb_size; i++) {
+ if (tbl[i].cfg == val) {
+ if (rxc_dly_en && i < tb_size_half)
+ *rxc_dly_en = 0;
+ return tbl[i].reg;
+ }
+ }
+
+ phydev_warn(phydev, "Unsupported value %d for %s using default (%u)\n",
+ val, prop_name, dflt);
+
+err_dts_val:
+ /* when rxc_dly_en is not NULL, we are getting the delay for rx.
+ * The rx default in dts and ytphy_rgmii_clk_delay_config is 1950 ps,
+ * so YT8521_CCR_RXC_DLY_EN should not be set.
+ */
+ if (rxc_dly_en)
+ *rxc_dly_en = 0;
+
+ return dflt;
+}
+
+static int ytphy_rgmii_clk_delay_config(struct phy_device *phydev)
+{
+ int tb_size = ARRAY_SIZE(ytphy_rgmii_delays);
+ u16 rxc_dly_en = YT8521_CCR_RXC_DLY_EN;
+ u32 rx_reg, tx_reg;
+ u16 mask, val = 0;
+ int ret;
+
+ rx_reg = ytphy_get_delay_reg_value(phydev, "rx-internal-delay-ps",
+ ytphy_rgmii_delays, tb_size,
+ &rxc_dly_en,
+ YT8521_RC1R_RGMII_1_950_NS);
+ tx_reg = ytphy_get_delay_reg_value(phydev, "tx-internal-delay-ps",
+ ytphy_rgmii_delays, tb_size, NULL,
+ YT8521_RC1R_RGMII_1_950_NS);
+
+ switch (phydev->interface) {
+ case PHY_INTERFACE_MODE_RGMII:
+ rxc_dly_en = 0;
+ break;
+ case PHY_INTERFACE_MODE_RGMII_RXID:
+ val |= FIELD_PREP(YT8521_RC1R_RX_DELAY_MASK, rx_reg);
+ break;
+ case PHY_INTERFACE_MODE_RGMII_TXID:
+ rxc_dly_en = 0;
+ val |= FIELD_PREP(YT8521_RC1R_GE_TX_DELAY_MASK, tx_reg);
+ break;
+ case PHY_INTERFACE_MODE_RGMII_ID:
+ val |= FIELD_PREP(YT8521_RC1R_RX_DELAY_MASK, rx_reg) |
+ FIELD_PREP(YT8521_RC1R_GE_TX_DELAY_MASK, tx_reg);
+ break;
+ default: /* do not support other modes */
+ return -EOPNOTSUPP;
+ }
+
+ ret = ytphy_modify_ext(phydev, YT8521_CHIP_CONFIG_REG,
+ YT8521_CCR_RXC_DLY_EN, rxc_dly_en);
+ if (ret < 0)
+ return ret;
+
+ /* Generally, it is not necessary to adjust YT8521_RC1R_FE_TX_DELAY */
+ mask = YT8521_RC1R_RX_DELAY_MASK | YT8521_RC1R_GE_TX_DELAY_MASK;
+ return ytphy_modify_ext(phydev, YT8521_RGMII_CONFIG1_REG, mask, val);
+}
+
+static int ytphy_rgmii_clk_delay_config_with_lock(struct phy_device *phydev)
+{
+ int ret;
+
+ phy_lock_mdio_bus(phydev);
+ ret = ytphy_rgmii_clk_delay_config(phydev);
+ phy_unlock_mdio_bus(phydev);
+
+ return ret;
+}
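
The lookup above maps a delay in picoseconds from the dts to a 4-bit register code in 150 ps steps; rx values past the fine-delay range are expressed as the fixed 1.9 ns RXC delay plus a fine step, with YT8521_CCR_RXC_DLY_EN kept set. A user-space sketch of the same mapping in closed form (a sketch of the table's arithmetic, not the driver code):

#include <stdbool.h>
#include <stdio.h>

#define RXC_EXTRA_PS	1900	/* fixed 1.9 ns RXC delay */
#define STEP_PS		150
#define MAX_CODE	15

static int delay_ps_to_code(unsigned int ps, bool *rxc_dly_en)
{
	if (ps <= MAX_CODE * STEP_PS && ps % STEP_PS == 0) {
		if (rxc_dly_en)
			*rxc_dly_en = false;	/* fine delay alone */
		return ps / STEP_PS;
	}

	/* rx only: 1.9 ns coarse delay plus a fine step */
	if (rxc_dly_en && ps >= RXC_EXTRA_PS &&
	    (ps - RXC_EXTRA_PS) % STEP_PS == 0 &&
	    ps - RXC_EXTRA_PS <= MAX_CODE * STEP_PS) {
		*rxc_dly_en = true;
		return (ps - RXC_EXTRA_PS) / STEP_PS;
	}

	return -1;			/* unsupported value */
}

int main(void)
{
	bool en;
	int code;

	code = delay_ps_to_code(1950, &en);	/* the default */
	printf("1950 ps -> code %d, rxc_dly_en=%d\n", code, en);

	code = delay_ps_to_code(3550, &en);	/* 1900 + 1650 */
	printf("3550 ps -> code %d, rxc_dly_en=%d\n", code, en);

	return 0;
}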
+
+/**
* yt8521_probe() - read chip config then set suitable polling_mode
* @phydev: a pointer to a &struct phy_device
*
@@ -601,9 +842,12 @@ static int yt8521_write_page(struct phy_device *phydev, int page)
*/
static int yt8521_probe(struct phy_device *phydev)
{
+ struct device_node *node = phydev->mdio.dev.of_node;
struct device *dev = &phydev->mdio.dev;
struct yt8521_priv *priv;
int chip_config;
+ u16 mask, val;
+ u32 freq;
int ret;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
@@ -648,27 +892,107 @@ static int yt8521_probe(struct phy_device *phydev)
return ret;
}
- return 0;
+ if (of_property_read_u32(node, "motorcomm,clk-out-frequency-hz", &freq))
+ freq = YTPHY_DTS_OUTPUT_CLK_DIS;
+
+ if (phydev->drv->phy_id == PHY_ID_YT8521) {
+ switch (freq) {
+ case YTPHY_DTS_OUTPUT_CLK_DIS:
+ mask = YT8521_SCR_SYNCE_ENABLE;
+ val = 0;
+ break;
+ case YTPHY_DTS_OUTPUT_CLK_25M:
+ mask = YT8521_SCR_SYNCE_ENABLE |
+ YT8521_SCR_CLK_SRC_MASK |
+ YT8521_SCR_CLK_FRE_SEL_125M;
+ val = YT8521_SCR_SYNCE_ENABLE |
+ FIELD_PREP(YT8521_SCR_CLK_SRC_MASK,
+ YT8521_SCR_CLK_SRC_REF_25M);
+ break;
+ case YTPHY_DTS_OUTPUT_CLK_125M:
+ mask = YT8521_SCR_SYNCE_ENABLE |
+ YT8521_SCR_CLK_SRC_MASK |
+ YT8521_SCR_CLK_FRE_SEL_125M;
+ val = YT8521_SCR_SYNCE_ENABLE |
+ YT8521_SCR_CLK_FRE_SEL_125M |
+ FIELD_PREP(YT8521_SCR_CLK_SRC_MASK,
+ YT8521_SCR_CLK_SRC_PLL_125M);
+ break;
+ default:
+ phydev_warn(phydev, "Freq err:%u\n", freq);
+ return -EINVAL;
+ }
+ } else if (phydev->drv->phy_id == PHY_ID_YT8531S) {
+ switch (freq) {
+ case YTPHY_DTS_OUTPUT_CLK_DIS:
+ mask = YT8531_SCR_SYNCE_ENABLE;
+ val = 0;
+ break;
+ case YTPHY_DTS_OUTPUT_CLK_25M:
+ mask = YT8531_SCR_SYNCE_ENABLE |
+ YT8531_SCR_CLK_SRC_MASK |
+ YT8531_SCR_CLK_FRE_SEL_125M;
+ val = YT8531_SCR_SYNCE_ENABLE |
+ FIELD_PREP(YT8531_SCR_CLK_SRC_MASK,
+ YT8531_SCR_CLK_SRC_REF_25M);
+ break;
+ case YTPHY_DTS_OUTPUT_CLK_125M:
+ mask = YT8531_SCR_SYNCE_ENABLE |
+ YT8531_SCR_CLK_SRC_MASK |
+ YT8531_SCR_CLK_FRE_SEL_125M;
+ val = YT8531_SCR_SYNCE_ENABLE |
+ YT8531_SCR_CLK_FRE_SEL_125M |
+ FIELD_PREP(YT8531_SCR_CLK_SRC_MASK,
+ YT8531_SCR_CLK_SRC_PLL_125M);
+ break;
+ default:
+ phydev_warn(phydev, "Freq err:%u\n", freq);
+ return -EINVAL;
+ }
+ } else {
+ phydev_warn(phydev, "PHY id err\n");
+ return -EINVAL;
+ }
+
+ return ytphy_modify_ext_with_lock(phydev, YTPHY_SYNCE_CFG_REG, mask,
+ val);
}
-/**
- * yt8531s_probe() - read chip config then set suitable polling_mode
- * @phydev: a pointer to a &struct phy_device
- *
- * returns 0 or negative errno code
- */
-static int yt8531s_probe(struct phy_device *phydev)
+static int yt8531_probe(struct phy_device *phydev)
{
- int ret;
+ struct device_node *node = phydev->mdio.dev.of_node;
+ u16 mask, val;
+ u32 freq;
- /* Disable SyncE clock output by default */
- ret = ytphy_modify_ext_with_lock(phydev, YT8531S_SYNCE_CFG_REG,
- YT8531S_SCR_SYNCE_ENABLE, 0);
- if (ret < 0)
- return ret;
+ if (of_property_read_u32(node, "motorcomm,clk-out-frequency-hz", &freq))
+ freq = YTPHY_DTS_OUTPUT_CLK_DIS;
+
+ switch (freq) {
+ case YTPHY_DTS_OUTPUT_CLK_DIS:
+ mask = YT8531_SCR_SYNCE_ENABLE;
+ val = 0;
+ break;
+ case YTPHY_DTS_OUTPUT_CLK_25M:
+ mask = YT8531_SCR_SYNCE_ENABLE | YT8531_SCR_CLK_SRC_MASK |
+ YT8531_SCR_CLK_FRE_SEL_125M;
+ val = YT8531_SCR_SYNCE_ENABLE |
+ FIELD_PREP(YT8531_SCR_CLK_SRC_MASK,
+ YT8531_SCR_CLK_SRC_REF_25M);
+ break;
+ case YTPHY_DTS_OUTPUT_CLK_125M:
+ mask = YT8531_SCR_SYNCE_ENABLE | YT8531_SCR_CLK_SRC_MASK |
+ YT8531_SCR_CLK_FRE_SEL_125M;
+ val = YT8531_SCR_SYNCE_ENABLE | YT8531_SCR_CLK_FRE_SEL_125M |
+ FIELD_PREP(YT8531_SCR_CLK_SRC_MASK,
+ YT8531_SCR_CLK_SRC_PLL_125M);
+ break;
+ default:
+ phydev_warn(phydev, "Freq err:%u\n", freq);
+ return -EINVAL;
+ }
- /* same as yt8521_probe */
- return yt8521_probe(phydev);
+ return ytphy_modify_ext_with_lock(phydev, YTPHY_SYNCE_CFG_REG, mask,
+ val);
}
/**
@@ -1133,65 +1457,128 @@ static int yt8521_resume(struct phy_device *phydev)
*/
static int yt8521_config_init(struct phy_device *phydev)
{
+ struct device_node *node = phydev->mdio.dev.of_node;
int old_page;
int ret = 0;
- u16 val;
old_page = phy_select_page(phydev, YT8521_RSSR_UTP_SPACE);
if (old_page < 0)
goto err_restore_page;
- switch (phydev->interface) {
- case PHY_INTERFACE_MODE_RGMII:
- val = YT8521_RC1R_GE_TX_DELAY_DIS | YT8521_RC1R_FE_TX_DELAY_DIS;
- val |= YT8521_RC1R_RX_DELAY_DIS;
- break;
- case PHY_INTERFACE_MODE_RGMII_RXID:
- val = YT8521_RC1R_GE_TX_DELAY_DIS | YT8521_RC1R_FE_TX_DELAY_DIS;
- val |= YT8521_RC1R_RX_DELAY_EN;
- break;
- case PHY_INTERFACE_MODE_RGMII_TXID:
- val = YT8521_RC1R_GE_TX_DELAY_EN | YT8521_RC1R_FE_TX_DELAY_EN;
- val |= YT8521_RC1R_RX_DELAY_DIS;
- break;
- case PHY_INTERFACE_MODE_RGMII_ID:
- val = YT8521_RC1R_GE_TX_DELAY_EN | YT8521_RC1R_FE_TX_DELAY_EN;
- val |= YT8521_RC1R_RX_DELAY_EN;
- break;
- case PHY_INTERFACE_MODE_SGMII:
- break;
- default: /* do not support other modes */
- ret = -EOPNOTSUPP;
- goto err_restore_page;
- }
-
/* set rgmii delay mode */
if (phydev->interface != PHY_INTERFACE_MODE_SGMII) {
- ret = ytphy_modify_ext(phydev, YT8521_RGMII_CONFIG1_REG,
- (YT8521_RC1R_RX_DELAY_MASK |
- YT8521_RC1R_FE_TX_DELAY_MASK |
- YT8521_RC1R_GE_TX_DELAY_MASK),
- val);
+ ret = ytphy_rgmii_clk_delay_config(phydev);
if (ret < 0)
goto err_restore_page;
}
- /* disable auto sleep */
- ret = ytphy_modify_ext(phydev, YT8521_EXTREG_SLEEP_CONTROL1_REG,
- YT8521_ESC1R_SLEEP_SW, 0);
- if (ret < 0)
- goto err_restore_page;
-
- /* enable RXC clock when no wire plug */
- ret = ytphy_modify_ext(phydev, YT8521_CLOCK_GATING_REG,
- YT8521_CGR_RX_CLK_EN, 0);
- if (ret < 0)
- goto err_restore_page;
+ if (of_property_read_bool(node, "motorcomm,auto-sleep-disabled")) {
+ /* disable auto sleep */
+ ret = ytphy_modify_ext(phydev, YT8521_EXTREG_SLEEP_CONTROL1_REG,
+ YT8521_ESC1R_SLEEP_SW, 0);
+ if (ret < 0)
+ goto err_restore_page;
+ }
+ if (of_property_read_bool(node, "motorcomm,keep-pll-enabled")) {
+ /* enable RXC clock when no wire plug */
+ ret = ytphy_modify_ext(phydev, YT8521_CLOCK_GATING_REG,
+ YT8521_CGR_RX_CLK_EN, 0);
+ if (ret < 0)
+ goto err_restore_page;
+ }
err_restore_page:
return phy_restore_page(phydev, old_page, ret);
}
+static int yt8531_config_init(struct phy_device *phydev)
+{
+ struct device_node *node = phydev->mdio.dev.of_node;
+ int ret;
+
+ ret = ytphy_rgmii_clk_delay_config_with_lock(phydev);
+ if (ret < 0)
+ return ret;
+
+ if (of_property_read_bool(node, "motorcomm,auto-sleep-disabled")) {
+ /* disable auto sleep */
+ ret = ytphy_modify_ext_with_lock(phydev,
+ YT8521_EXTREG_SLEEP_CONTROL1_REG,
+ YT8521_ESC1R_SLEEP_SW, 0);
+ if (ret < 0)
+ return ret;
+ }
+
+ if (of_property_read_bool(node, "motorcomm,keep-pll-enabled")) {
+ /* enable RXC clock when no wire plug */
+ ret = ytphy_modify_ext_with_lock(phydev,
+ YT8521_CLOCK_GATING_REG,
+ YT8521_CGR_RX_CLK_EN, 0);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+/**
+ * yt8531_link_change_notify() - Adjust the tx clock polarity according to
+ * the current speed and dts config.
+ * @phydev: a pointer to a &struct phy_device
+ *
+ * NOTE: This function is only used to adapt to VF2 with the JH7110 SoC. Do
+ * not add "motorcomm,tx-clk-adj-enabled" to the dts when the SoC is not
+ * the JH7110.
+ */
+static void yt8531_link_change_notify(struct phy_device *phydev)
+{
+ struct device_node *node = phydev->mdio.dev.of_node;
+ bool tx_clk_1000_inverted = false;
+ bool tx_clk_100_inverted = false;
+ bool tx_clk_10_inverted = false;
+ bool tx_clk_adj_enabled = false;
+ u16 val = 0;
+ int ret;
+
+ if (of_property_read_bool(node, "motorcomm,tx-clk-adj-enabled"))
+ tx_clk_adj_enabled = true;
+
+ if (!tx_clk_adj_enabled)
+ return;
+
+ if (of_property_read_bool(node, "motorcomm,tx-clk-10-inverted"))
+ tx_clk_10_inverted = true;
+ if (of_property_read_bool(node, "motorcomm,tx-clk-100-inverted"))
+ tx_clk_100_inverted = true;
+ if (of_property_read_bool(node, "motorcomm,tx-clk-1000-inverted"))
+ tx_clk_1000_inverted = true;
+
+ if (phydev->speed < 0)
+ return;
+
+ switch (phydev->speed) {
+ case SPEED_1000:
+ if (tx_clk_1000_inverted)
+ val = YT8521_RC1R_TX_CLK_SEL_INVERTED;
+ break;
+ case SPEED_100:
+ if (tx_clk_100_inverted)
+ val = YT8521_RC1R_TX_CLK_SEL_INVERTED;
+ break;
+ case SPEED_10:
+ if (tx_clk_10_inverted)
+ val = YT8521_RC1R_TX_CLK_SEL_INVERTED;
+ break;
+ default:
+ return;
+ }
+
+ ret = ytphy_modify_ext_with_lock(phydev, YT8521_RGMII_CONFIG1_REG,
+ YT8521_RC1R_TX_CLK_SEL_INVERTED, val);
+ if (ret < 0)
+ phydev_warn(phydev, "Modify TX_CLK_SEL err:%d\n", ret);
+}
+
/**
* yt8521_prepare_fiber_features() - A small helper function that sets up
* fiber's features.
@@ -1775,10 +2162,21 @@ static struct phy_driver motorcomm_phy_drvs[] = {
.resume = yt8521_resume,
},
{
+ PHY_ID_MATCH_EXACT(PHY_ID_YT8531),
+ .name = "YT8531 Gigabit Ethernet",
+ .probe = yt8531_probe,
+ .config_init = yt8531_config_init,
+ .suspend = genphy_suspend,
+ .resume = genphy_resume,
+ .get_wol = ytphy_get_wol,
+ .set_wol = yt8531_set_wol,
+ .link_change_notify = yt8531_link_change_notify,
+ },
+ {
PHY_ID_MATCH_EXACT(PHY_ID_YT8531S),
.name = "YT8531S Gigabit Ethernet",
.get_features = yt8521_get_features,
- .probe = yt8531s_probe,
+ .probe = yt8521_probe,
.read_page = yt8521_read_page,
.write_page = yt8521_write_page,
.get_wol = ytphy_get_wol,
@@ -1795,7 +2193,7 @@ static struct phy_driver motorcomm_phy_drvs[] = {
module_phy_driver(motorcomm_phy_drvs);
-MODULE_DESCRIPTION("Motorcomm 8511/8521/8531S PHY driver");
+MODULE_DESCRIPTION("Motorcomm 8511/8521/8531/8531S PHY driver");
MODULE_AUTHOR("Peter Geis");
MODULE_AUTHOR("Frank");
MODULE_LICENSE("GPL");
@@ -1803,8 +2201,9 @@ MODULE_LICENSE("GPL");
static const struct mdio_device_id __maybe_unused motorcomm_tbl[] = {
{ PHY_ID_MATCH_EXACT(PHY_ID_YT8511) },
{ PHY_ID_MATCH_EXACT(PHY_ID_YT8521) },
+ { PHY_ID_MATCH_EXACT(PHY_ID_YT8531) },
{ PHY_ID_MATCH_EXACT(PHY_ID_YT8531S) },
- { /* sentinal */ }
+ { /* sentinel */ }
};
MODULE_DEVICE_TABLE(mdio, motorcomm_tbl);
diff --git a/drivers/net/phy/mxl-gpy.c b/drivers/net/phy/mxl-gpy.c
index 147d7a5a9b35..e5972b4ef6e8 100644
--- a/drivers/net/phy/mxl-gpy.c
+++ b/drivers/net/phy/mxl-gpy.c
@@ -12,6 +12,7 @@
#include <linux/mutex.h>
#include <linux/phy.h>
#include <linux/polynomial.h>
+#include <linux/property.h>
#include <linux/netdevice.h>
/* PHY ID */
@@ -292,6 +293,10 @@ static int gpy_probe(struct phy_device *phydev)
phydev->priv = priv;
mutex_init(&priv->mbox_lock);
+ if (gpy_has_broken_mdint(phydev) &&
+ !device_property_present(dev, "maxlinear,use-broken-interrupts"))
+ phydev->dev_flags |= PHY_F_NO_IRQ;
+
fw_version = phy_read(phydev, PHY_FWV);
if (fw_version < 0)
return fw_version;
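
The gpy_probe() hunk above makes broken interrupt lines opt-in: unless the firmware node explicitly sets "maxlinear,use-broken-interrupts", the device falls back to polling. A hedged sketch of the pattern (all names hypothetical):

#include <stdbool.h>
#include <stdio.h>

#define IRQ_POLL (-1)

struct dev {
	int irq;
	bool no_irq;
};

static bool has_broken_irq(void) { return true; }	/* model-specific quirk */
static bool fw_opts_in(void) { return false; }		/* firmware property */

static void probe(struct dev *d)
{
	if (has_broken_irq() && !fw_opts_in())
		d->no_irq = true;	/* request polling mode */
}

static void attach(struct dev *d)
{
	if (d->no_irq)
		d->irq = IRQ_POLL;	/* ignore the wired interrupt line */
}

int main(void)
{
	struct dev d = { .irq = 42 };

	probe(&d);
	attach(&d);
	printf("irq=%d\n", d.irq);
	return 0;
}

The actual flag handoff happens in phy_attach_direct(), shown further down in this series, which turns PHY_F_NO_IRQ into phydev->irq = PHY_POLL.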
diff --git a/drivers/net/phy/ncn26000.c b/drivers/net/phy/ncn26000.c
new file mode 100644
index 000000000000..5680584f659e
--- /dev/null
+++ b/drivers/net/phy/ncn26000.c
@@ -0,0 +1,171 @@
+// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
+/*
+ * Driver for the onsemi 10BASE-T1S NCN26000 PHYs family.
+ *
+ * Copyright 2022 onsemi
+ */
+#include <linux/kernel.h>
+#include <linux/bitfield.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mii.h>
+#include <linux/phy.h>
+
+#include "mdio-open-alliance.h"
+
+#define PHY_ID_NCN26000 0x180FF5A1
+
+#define NCN26000_REG_IRQ_CTL 16
+#define NCN26000_REG_IRQ_STATUS 17
+
+// the NCN26000 maps link_ctrl to BMCR_ANENABLE
+#define NCN26000_BCMR_LINK_CTRL_BIT BMCR_ANENABLE
+
+// the NCN26000 maps link_status to BMSR_ANEGCOMPLETE
+#define NCN26000_BMSR_LINK_STATUS_BIT BMSR_ANEGCOMPLETE
+
+#define NCN26000_IRQ_LINKST_BIT BIT(0)
+#define NCN26000_IRQ_PLCAST_BIT BIT(1)
+#define NCN26000_IRQ_LJABBER_BIT BIT(2)
+#define NCN26000_IRQ_RJABBER_BIT BIT(3)
+#define NCN26000_IRQ_PLCAREC_BIT BIT(4)
+#define NCN26000_IRQ_PHYSCOL_BIT BIT(5)
+#define NCN26000_IRQ_RESET_BIT BIT(15)
+
+#define TO_TMR_DEFAULT 32
+
+static int ncn26000_config_init(struct phy_device *phydev)
+{
+ /* HW bug workaround: the default value of the PLCA TO_TIMER should be
+ * 32, whereas the current version of the NCN26000 reports 24. This will be
+ * fixed in future PHY versions. For the time being, we force the
+ * correct default here.
+ */
+ return phy_write_mmd(phydev, MDIO_MMD_VEND2, MDIO_OATC14_PLCA_TOTMR,
+ TO_TMR_DEFAULT);
+}
+
+static int ncn26000_config_aneg(struct phy_device *phydev)
+{
+ /* Note: the NCN26000 supports only P2MP link mode. Therefore, AN is not
+ * supported. However, this function is invoked by phylib to enable the
+ * PHY, regardless of the AN support.
+ */
+ phydev->mdix_ctrl = ETH_TP_MDI_AUTO;
+ phydev->mdix = ETH_TP_MDI;
+
+ // bring up the link
+ return phy_write(phydev, MII_BMCR, NCN26000_BCMR_LINK_CTRL_BIT);
+}
+
+static int ncn26000_read_status(struct phy_device *phydev)
+{
+ /* The NCN26000 reports NCN26000_LINK_STATUS_BIT if the link status of
+ * the PHY is up. It further reports the logical AND of the link status
+ * and the PLCA status in the BMSR_LSTATUS bit.
+ */
+ int ret;
+
+ /* The link state is latched low so that momentary link
+ * drops can be detected. Do not double-read the status
+ * in polling mode to detect such short link drops except
+ * the link was already down.
+ */
+ if (!phy_polling_mode(phydev) || !phydev->link) {
+ ret = phy_read(phydev, MII_BMSR);
+ if (ret < 0)
+ return ret;
+ else if (ret & NCN26000_BMSR_LINK_STATUS_BIT)
+ goto upd_link;
+ }
+
+ ret = phy_read(phydev, MII_BMSR);
+ if (ret < 0)
+ return ret;
+
+upd_link:
+ // update link status
+ if (ret & NCN26000_BMSR_LINK_STATUS_BIT) {
+ phydev->link = 1;
+ phydev->pause = 0;
+ phydev->duplex = DUPLEX_HALF;
+ phydev->speed = SPEED_10;
+ } else {
+ phydev->link = 0;
+ phydev->duplex = DUPLEX_UNKNOWN;
+ phydev->speed = SPEED_UNKNOWN;
+ }
+
+ return 0;
+}
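
ncn26000_read_status() has to cope with a latched-low link bit: when polling with the link already up, a single read is used so a momentary drop is not masked; otherwise the first (possibly stale) read is retried. A compilable sketch of that decision flow, with the register mocked:

#include <stdbool.h>
#include <stdio.h>

#define LINK_UP 0x0020	/* stand-in for the BMSR link-status bit */

static int reads;

/* the first read returns the latched (stale) state, the second the live one */
static int read_bmsr(void)
{
	return reads++ ? LINK_UP : 0x0000;
}

static void read_status(bool polling, bool *link)
{
	int val;

	if (!polling || !*link) {
		val = read_bmsr();
		if (val & LINK_UP)
			goto update;	/* trust a positive first read */
	}
	val = read_bmsr();
update:
	*link = !!(val & LINK_UP);
}

int main(void)
{
	bool link = false;

	read_status(true, &link);
	printf("link=%d after %d read(s)\n", link, reads);
	return 0;
}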
+
+static irqreturn_t ncn26000_handle_interrupt(struct phy_device *phydev)
+{
+ int ret;
+
+ // read and acknowledge the IRQ status register
+ ret = phy_read(phydev, NCN26000_REG_IRQ_STATUS);
+
+ // check only link status changes
+ if (ret < 0 || (ret & NCN26000_IRQ_LINKST_BIT) == 0)
+ return IRQ_NONE;
+
+ phy_trigger_machine(phydev);
+ return IRQ_HANDLED;
+}
+
+static int ncn26000_config_intr(struct phy_device *phydev)
+{
+ int ret;
+ u16 irqe;
+
+ if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
+ // acknowledge IRQs
+ ret = phy_read(phydev, NCN26000_REG_IRQ_STATUS);
+ if (ret < 0)
+ return ret;
+
+ // get link status notifications
+ irqe = NCN26000_IRQ_LINKST_BIT;
+ } else {
+ // disable all IRQs
+ irqe = 0;
+ }
+
+ return phy_write(phydev, NCN26000_REG_IRQ_CTL, irqe);
+}
+
+static struct phy_driver ncn26000_driver[] = {
+ {
+ PHY_ID_MATCH_MODEL(PHY_ID_NCN26000),
+ .name = "NCN26000",
+ .features = PHY_BASIC_T1S_P2MP_FEATURES,
+ .config_init = ncn26000_config_init,
+ .config_intr = ncn26000_config_intr,
+ .config_aneg = ncn26000_config_aneg,
+ .read_status = ncn26000_read_status,
+ .handle_interrupt = ncn26000_handle_interrupt,
+ .get_plca_cfg = genphy_c45_plca_get_cfg,
+ .set_plca_cfg = genphy_c45_plca_set_cfg,
+ .get_plca_status = genphy_c45_plca_get_status,
+ .soft_reset = genphy_soft_reset,
+ },
+};
+
+module_phy_driver(ncn26000_driver);
+
+static struct mdio_device_id __maybe_unused ncn26000_tbl[] = {
+ { PHY_ID_MATCH_MODEL(PHY_ID_NCN26000) },
+ { }
+};
+
+MODULE_DEVICE_TABLE(mdio, ncn26000_tbl);
+
+MODULE_AUTHOR("Piergiorgio Beruto");
+MODULE_DESCRIPTION("onsemi 10BASE-T1S PHY driver");
+MODULE_LICENSE("Dual BSD/GPL");
diff --git a/drivers/net/phy/phy-c45.c b/drivers/net/phy/phy-c45.c
index a87a4b3ffce4..f9b128cecc3f 100644
--- a/drivers/net/phy/phy-c45.c
+++ b/drivers/net/phy/phy-c45.c
@@ -8,6 +8,8 @@
#include <linux/mii.h>
#include <linux/phy.h>
+#include "mdio-open-alliance.h"
+
/**
* genphy_c45_baset1_able - checks if the PMA has BASE-T1 extended abilities
* @phydev: target phy_device struct
@@ -254,13 +256,17 @@ static int genphy_c45_baset1_an_config_aneg(struct phy_device *phydev)
*/
int genphy_c45_an_config_aneg(struct phy_device *phydev)
{
- int changed, ret;
+ int changed = 0, ret;
u32 adv;
linkmode_and(phydev->advertising, phydev->advertising,
phydev->supported);
- changed = genphy_config_eee_advert(phydev);
+ ret = genphy_c45_write_eee_adv(phydev, phydev->supported_eee);
+ if (ret < 0)
+ return ret;
+ else if (ret)
+ changed = true;
if (genphy_c45_baset1_able(phydev))
return genphy_c45_baset1_an_config_aneg(phydev);
@@ -660,6 +666,199 @@ int genphy_c45_read_mdix(struct phy_device *phydev)
EXPORT_SYMBOL_GPL(genphy_c45_read_mdix);
/**
+ * genphy_c45_write_eee_adv - write advertised EEE link modes
+ * @phydev: target phy_device struct
+ * @adv: the linkmode advertisement settings
+ */
+int genphy_c45_write_eee_adv(struct phy_device *phydev, unsigned long *adv)
+{
+ int val, changed = 0;
+
+ if (linkmode_intersects(phydev->supported, PHY_EEE_CAP1_FEATURES)) {
+ val = linkmode_to_mii_eee_cap1_t(adv);
+
+ /* eee_broken_modes stores raw register values in the
+ * MDIO_AN_EEE_ADV layout.
+ */
+ val &= ~phydev->eee_broken_modes;
+
+ /* IEEE 802.3-2018 45.2.7.13 EEE advertisement 1
+ * (Register 7.60)
+ */
+ val = phy_modify_mmd_changed(phydev, MDIO_MMD_AN,
+ MDIO_AN_EEE_ADV,
+ MDIO_EEE_100TX | MDIO_EEE_1000T |
+ MDIO_EEE_10GT | MDIO_EEE_1000KX |
+ MDIO_EEE_10GKX4 | MDIO_EEE_10GKR,
+ val);
+ if (val < 0)
+ return val;
+ if (val > 0)
+ changed = 1;
+ }
+
+ if (linkmode_test_bit(ETHTOOL_LINK_MODE_10baseT1L_Full_BIT,
+ phydev->supported_eee)) {
+ val = linkmode_adv_to_mii_10base_t1_t(adv);
+ /* IEEE 802.3cg-2019 45.2.7.25 10BASE-T1 AN control register
+ * (Register 7.526)
+ */
+ val = phy_modify_mmd_changed(phydev, MDIO_MMD_AN,
+ MDIO_AN_10BT1_AN_CTRL,
+ MDIO_AN_10BT1_AN_CTRL_ADV_EEE_T1L,
+ val);
+ if (val < 0)
+ return val;
+ if (val > 0)
+ changed = 1;
+ }
+
+ return changed;
+}
+
+/**
+ * genphy_c45_read_eee_adv - read advertised EEE link modes
+ * @phydev: target phy_device struct
+ * @adv: the linkmode advertisement status
+ */
+static int genphy_c45_read_eee_adv(struct phy_device *phydev,
+ unsigned long *adv)
+{
+ int val;
+
+ if (linkmode_intersects(phydev->supported, PHY_EEE_CAP1_FEATURES)) {
+ /* IEEE 802.3-2018 45.2.7.13 EEE advertisement 1
+ * (Register 7.60)
+ */
+ val = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_EEE_ADV);
+ if (val < 0)
+ return val;
+
+ mii_eee_cap1_mod_linkmode_t(adv, val);
+ }
+
+ if (linkmode_test_bit(ETHTOOL_LINK_MODE_10baseT1L_Full_BIT,
+ phydev->supported_eee)) {
+ /* IEEE 802.3cg-2019 45.2.7.25 10BASE-T1 AN control register
+ * (Register 7.526)
+ */
+ val = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_10BT1_AN_CTRL);
+ if (val < 0)
+ return val;
+
+ mii_10base_t1_adv_mod_linkmode_t(adv, val);
+ }
+
+ return 0;
+}
+
+/**
+ * genphy_c45_read_eee_lpa - read advertised LP EEE link modes
+ * @phydev: target phy_device struct
+ * @lpa: the linkmode LP advertisement status
+ */
+static int genphy_c45_read_eee_lpa(struct phy_device *phydev,
+ unsigned long *lpa)
+{
+ int val;
+
+ if (linkmode_intersects(phydev->supported, PHY_EEE_CAP1_FEATURES)) {
+ /* IEEE 802.3-2018 45.2.7.14 EEE link partner ability 1
+ * (Register 7.61)
+ */
+ val = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_EEE_LPABLE);
+ if (val < 0)
+ return val;
+
+ mii_eee_cap1_mod_linkmode_t(lpa, val);
+ }
+
+ if (linkmode_test_bit(ETHTOOL_LINK_MODE_10baseT1L_Full_BIT,
+ phydev->supported_eee)) {
+ /* IEEE 802.3cg-2019 45.2.7.26 10BASE-T1 AN status register
+ * (Register 7.527)
+ */
+ val = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_10BT1_AN_STAT);
+ if (val < 0)
+ return val;
+
+ mii_10base_t1_adv_mod_linkmode_t(lpa, val);
+ }
+
+ return 0;
+}
+
+/**
+ * genphy_c45_read_eee_cap1 - read supported EEE link modes from register 3.20
+ * @phydev: target phy_device struct
+ */
+static int genphy_c45_read_eee_cap1(struct phy_device *phydev)
+{
+ int val;
+
+ /* IEEE 802.3-2018 45.2.3.10 EEE control and capability 1
+ * (Register 3.20)
+ */
+ val = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_PCS_EEE_ABLE);
+ if (val < 0)
+ return val;
+
+ /* The 802.3 2018 standard says the top 2 bits are reserved and should
+ * read as 0. Also, it seems unlikely anybody will build a PHY which
+ * supports 100GBASE-R deep sleep all the way down to 100BASE-TX EEE.
+ * If MDIO_PCS_EEE_ABLE is 0xffff assume EEE is not supported.
+ */
+ if (val == 0xffff)
+ return 0;
+
+ mii_eee_cap1_mod_linkmode_t(phydev->supported_eee, val);
+
+ /* Some buggy devices indicate EEE link modes in MDIO_PCS_EEE_ABLE
+ * which they don't support as indicated by BMSR, ESTATUS etc.
+ */
+ linkmode_and(phydev->supported_eee, phydev->supported_eee,
+ phydev->supported);
+
+ return 0;
+}
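
The 0xffff plausibility check above guards against an MDIO read of a non-implemented register floating to all-ones, which would otherwise claim every EEE ability at once. A tiny demo of the same check (the 0x3f ability mask is illustrative; the driver uses mii_eee_cap1_mod_linkmode_t()):

#include <stdio.h>

static unsigned int parse_eee_able(unsigned int val)
{
	if (val == 0xffff)
		return 0;	/* treat as "EEE not supported" */

	return val & 0x3f;	/* keep only the defined ability bits */
}

int main(void)
{
	printf("0xffff -> 0x%02x\n", parse_eee_able(0xffff));
	printf("0x0006 -> 0x%02x\n", parse_eee_able(0x0006));
	return 0;
}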
+
+/**
+ * genphy_c45_read_eee_abilities - read supported EEE link modes
+ * @phydev: target phy_device struct
+ */
+int genphy_c45_read_eee_abilities(struct phy_device *phydev)
+{
+ int val;
+
+ /* There is no indicator whether the optional register
+ * "EEE control and capability 1" (3.20) is supported. Read it only
+ * on devices with appropriate linkmodes.
+ */
+ if (linkmode_intersects(phydev->supported, PHY_EEE_CAP1_FEATURES)) {
+ val = genphy_c45_read_eee_cap1(phydev);
+ if (val)
+ return val;
+ }
+
+ if (linkmode_test_bit(ETHTOOL_LINK_MODE_10baseT1L_Full_BIT,
+ phydev->supported)) {
+ /* IEEE 802.3cg-2019 45.2.1.186b 10BASE-T1L PMA status register
+ * (Register 1.2295)
+ */
+ val = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10T1L_STAT);
+ if (val < 0)
+ return val;
+
+ linkmode_mod_bit(ETHTOOL_LINK_MODE_10baseT1L_Full_BIT,
+ phydev->supported_eee,
+ val & MDIO_PMA_10T1L_STAT_EEE);
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(genphy_c45_read_eee_abilities);
+
+/**
* genphy_c45_pma_read_abilities - read supported link modes from PMA
* @phydev: target phy_device struct
*
@@ -773,6 +972,11 @@ int genphy_c45_pma_read_abilities(struct phy_device *phydev)
}
}
+ /* This is optional functionality. If not supported, we may get an error
+ * which should be ignored.
+ */
+ genphy_c45_read_eee_abilities(phydev);
+
return 0;
}
EXPORT_SYMBOL_GPL(genphy_c45_pma_read_abilities);
@@ -931,6 +1135,312 @@ int genphy_c45_fast_retrain(struct phy_device *phydev, bool enable)
}
EXPORT_SYMBOL_GPL(genphy_c45_fast_retrain);
+/**
+ * genphy_c45_plca_get_cfg - get PLCA configuration from standard registers
+ * @phydev: target phy_device struct
+ * @plca_cfg: output structure to store the PLCA configuration
+ *
+ * Description: if the PHY complies with the Open Alliance TC14 10BASE-T1S PLCA
+ * Management Registers specifications, this function can be used to retrieve
+ * the current PLCA configuration from the standard registers in MMD 31.
+ */
+int genphy_c45_plca_get_cfg(struct phy_device *phydev,
+ struct phy_plca_cfg *plca_cfg)
+{
+ int ret;
+
+ ret = phy_read_mmd(phydev, MDIO_MMD_VEND2, MDIO_OATC14_PLCA_IDVER);
+ if (ret < 0)
+ return ret;
+
+ if ((ret & MDIO_OATC14_PLCA_IDM) != OATC14_IDM)
+ return -ENODEV;
+
+ plca_cfg->version = ret & ~MDIO_OATC14_PLCA_IDM;
+
+ ret = phy_read_mmd(phydev, MDIO_MMD_VEND2, MDIO_OATC14_PLCA_CTRL0);
+ if (ret < 0)
+ return ret;
+
+ plca_cfg->enabled = !!(ret & MDIO_OATC14_PLCA_EN);
+
+ ret = phy_read_mmd(phydev, MDIO_MMD_VEND2, MDIO_OATC14_PLCA_CTRL1);
+ if (ret < 0)
+ return ret;
+
+ plca_cfg->node_cnt = (ret & MDIO_OATC14_PLCA_NCNT) >> 8;
+ plca_cfg->node_id = (ret & MDIO_OATC14_PLCA_ID);
+
+ ret = phy_read_mmd(phydev, MDIO_MMD_VEND2, MDIO_OATC14_PLCA_TOTMR);
+ if (ret < 0)
+ return ret;
+
+ plca_cfg->to_tmr = ret & MDIO_OATC14_PLCA_TOT;
+
+ ret = phy_read_mmd(phydev, MDIO_MMD_VEND2, MDIO_OATC14_PLCA_BURST);
+ if (ret < 0)
+ return ret;
+
+ plca_cfg->burst_cnt = (ret & MDIO_OATC14_PLCA_MAXBC) >> 8;
+ plca_cfg->burst_tmr = (ret & MDIO_OATC14_PLCA_BTMR);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(genphy_c45_plca_get_cfg);
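
The shifts above imply that PLCA CTRL1 packs the transmit-opportunity count into bits 15:8 and the local node ID into bits 7:0. A small demo of the extraction (mask values assumed from those shifts):

#include <stdio.h>

#define PLCA_NCNT 0xff00u	/* node count, bits 15:8 */
#define PLCA_ID   0x00ffu	/* local node ID, bits 7:0 */

int main(void)
{
	unsigned int ctrl1 = 0x0803;	/* 8 nodes, this node is ID 3 */

	printf("node_cnt=%u node_id=%u\n",
	       (ctrl1 & PLCA_NCNT) >> 8, ctrl1 & PLCA_ID);
	return 0;
}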
+
+/**
+ * genphy_c45_plca_set_cfg - set PLCA configuration using standard registers
+ * @phydev: target phy_device struct
+ * @plca_cfg: structure containing the PLCA configuration. Fields set to -1 are
+ * not to be changed.
+ *
+ * Description: if the PHY complies with the Open Alliance TC14 10BASE-T1S PLCA
+ * Management Registers specifications, this function can be used to modify
+ * the PLCA configuration using the standard registers in MMD 31.
+ */
+int genphy_c45_plca_set_cfg(struct phy_device *phydev,
+ const struct phy_plca_cfg *plca_cfg)
+{
+ u16 val = 0;
+ int ret;
+
+ // PLCA IDVER is read-only
+ if (plca_cfg->version >= 0)
+ return -EINVAL;
+
+ // first of all, disable PLCA if required
+ if (plca_cfg->enabled == 0) {
+ ret = phy_clear_bits_mmd(phydev, MDIO_MMD_VEND2,
+ MDIO_OATC14_PLCA_CTRL0,
+ MDIO_OATC14_PLCA_EN);
+
+ if (ret < 0)
+ return ret;
+ }
+
+ // check if we need to set the PLCA node count, node ID, or both
+ if (plca_cfg->node_cnt >= 0 || plca_cfg->node_id >= 0) {
+ /* if either the node count or the node ID is not to be
+ * changed, read the register first so the untouched field can
+ * be merged back into the new value
+ */
+ if (plca_cfg->node_cnt < 0 || plca_cfg->node_id < 0) {
+ ret = phy_read_mmd(phydev, MDIO_MMD_VEND2,
+ MDIO_OATC14_PLCA_CTRL1);
+
+ if (ret < 0)
+ return ret;
+
+ val = ret;
+ }
+
+ if (plca_cfg->node_cnt >= 0)
+ val = (val & ~MDIO_OATC14_PLCA_NCNT) |
+ (plca_cfg->node_cnt << 8);
+
+ if (plca_cfg->node_id >= 0)
+ val = (val & ~MDIO_OATC14_PLCA_ID) |
+ (plca_cfg->node_id);
+
+ ret = phy_write_mmd(phydev, MDIO_MMD_VEND2,
+ MDIO_OATC14_PLCA_CTRL1, val);
+
+ if (ret < 0)
+ return ret;
+ }
+
+ if (plca_cfg->to_tmr >= 0) {
+ ret = phy_write_mmd(phydev, MDIO_MMD_VEND2,
+ MDIO_OATC14_PLCA_TOTMR,
+ plca_cfg->to_tmr);
+
+ if (ret < 0)
+ return ret;
+ }
+
+ // check if we need to set the PLCA burst count, burst timer, or both
+ if (plca_cfg->burst_cnt >= 0 || plca_cfg->burst_tmr >= 0) {
+ /* if either the burst count or the burst timer is not to be
+ * changed, read the register first so the untouched field can
+ * be merged back into the new value
+ */
+ if (plca_cfg->burst_cnt < 0 || plca_cfg->burst_tmr < 0) {
+ ret = phy_read_mmd(phydev, MDIO_MMD_VEND2,
+ MDIO_OATC14_PLCA_BURST);
+
+ if (ret < 0)
+ return ret;
+
+ val = ret;
+ }
+
+ if (plca_cfg->burst_cnt >= 0)
+ val = (val & ~MDIO_OATC14_PLCA_MAXBC) |
+ (plca_cfg->burst_cnt << 8);
+
+ if (plca_cfg->burst_tmr >= 0)
+ val = (val & ~MDIO_OATC14_PLCA_BTMR) |
+ (plca_cfg->burst_tmr);
+
+ ret = phy_write_mmd(phydev, MDIO_MMD_VEND2,
+ MDIO_OATC14_PLCA_BURST, val);
+
+ if (ret < 0)
+ return ret;
+ }
+
+ // if we need to enable PLCA, do it at the end
+ if (plca_cfg->enabled > 0) {
+ ret = phy_set_bits_mmd(phydev, MDIO_MMD_VEND2,
+ MDIO_OATC14_PLCA_CTRL0,
+ MDIO_OATC14_PLCA_EN);
+
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(genphy_c45_plca_set_cfg);
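
genphy_c45_plca_set_cfg() treats fields set to -1 as "leave unchanged" and reads a packed register back only when one of its two fields must be preserved. A user-space sketch of that merge/purge step:

#include <stdio.h>

static unsigned int ctrl1 = 0x0803;	/* current: 8 nodes, ID 3 */

static void set_ctrl1(int node_cnt, int node_id)
{
	unsigned int val = 0;

	if (node_cnt < 0 || node_id < 0)
		val = ctrl1;		/* preserve the untouched field */

	if (node_cnt >= 0)
		val = (val & ~0xff00u) | ((unsigned int)node_cnt << 8);
	if (node_id >= 0)
		val = (val & ~0x00ffu) | (unsigned int)node_id;

	ctrl1 = val;
}

int main(void)
{
	set_ctrl1(-1, 5);		/* change only the node ID */
	printf("ctrl1=0x%04x\n", ctrl1);	/* prints 0x0805 */
	return 0;
}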
+
+/**
+ * genphy_c45_plca_get_status - get PLCA status from standard registers
+ * @phydev: target phy_device struct
+ * @plca_st: output structure to store the PLCA status
+ *
+ * Description: if the PHY complies with the Open Alliance TC14 10BASE-T1S PLCA
+ * Management Registers specifications, this function can be used to retrieve
+ * the current PLCA status information from the standard registers in MMD 31.
+ */
+int genphy_c45_plca_get_status(struct phy_device *phydev,
+ struct phy_plca_status *plca_st)
+{
+ int ret;
+
+ ret = phy_read_mmd(phydev, MDIO_MMD_VEND2, MDIO_OATC14_PLCA_STATUS);
+ if (ret < 0)
+ return ret;
+
+ plca_st->pst = !!(ret & MDIO_OATC14_PLCA_PST);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(genphy_c45_plca_get_status);
+
+/**
+ * genphy_c45_eee_is_active - get EEE status
+ * @phydev: target phy_device struct
+ * @adv: variable to store advertised linkmodes
+ * @lp: variable to store LP advertised linkmodes
+ * @is_enabled: variable to store EEE enabled/disabled configuration value
+ *
+ * Description: this function will read local and link partner PHY
+ * advertisements and compare them to determine the current EEE state.
+ */
+int genphy_c45_eee_is_active(struct phy_device *phydev, unsigned long *adv,
+ unsigned long *lp, bool *is_enabled)
+{
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(tmp_adv) = {};
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(tmp_lp) = {};
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(common);
+ bool eee_enabled, eee_active;
+ int ret;
+
+ ret = genphy_c45_read_eee_adv(phydev, tmp_adv);
+ if (ret)
+ return ret;
+
+ ret = genphy_c45_read_eee_lpa(phydev, tmp_lp);
+ if (ret)
+ return ret;
+
+ eee_enabled = !linkmode_empty(tmp_adv);
+ linkmode_and(common, tmp_adv, tmp_lp);
+ if (eee_enabled && !linkmode_empty(common))
+ eee_active = phy_check_valid(phydev->speed, phydev->duplex,
+ common);
+ else
+ eee_active = false;
+
+ if (adv)
+ linkmode_copy(adv, tmp_adv);
+ if (lp)
+ linkmode_copy(lp, tmp_lp);
+ if (is_enabled)
+ *is_enabled = eee_enabled;
+
+ return eee_active;
+}
+EXPORT_SYMBOL(genphy_c45_eee_is_active);
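
The helper above reports EEE as enabled when anything is advertised, and as active only when the local and link-partner advertisements intersect and that intersection covers the resolved link mode. The same logic as a plain bitmask demo:

#include <stdbool.h>
#include <stdio.h>

#define MODE_100TX_FULL 0x1u
#define MODE_1000T_FULL 0x2u

static bool eee_is_active(unsigned int adv, unsigned int lpa,
			  unsigned int resolved)
{
	unsigned int common = adv & lpa;

	if (!adv || !common)
		return false;

	return common & resolved;	/* valid for the current speed/duplex */
}

int main(void)
{
	bool active = eee_is_active(MODE_100TX_FULL | MODE_1000T_FULL,
				    MODE_1000T_FULL, MODE_1000T_FULL);

	printf("eee_active=%d\n", active);
	return 0;
}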
+
+/**
+ * genphy_c45_ethtool_get_eee - get EEE supported and status
+ * @phydev: target phy_device struct
+ * @data: ethtool_eee data
+ *
+ * Description: it reports the Supported/Advertisement/LP Advertisement
+ * capabilities.
+ */
+int genphy_c45_ethtool_get_eee(struct phy_device *phydev,
+ struct ethtool_eee *data)
+{
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(adv) = {};
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(lp) = {};
+ bool overflow = false, is_enabled;
+ int ret;
+
+ ret = genphy_c45_eee_is_active(phydev, adv, lp, &is_enabled);
+ if (ret < 0)
+ return ret;
+
+ data->eee_enabled = is_enabled;
+ data->eee_active = ret;
+
+ if (!ethtool_convert_link_mode_to_legacy_u32(&data->supported,
+ phydev->supported_eee))
+ overflow = true;
+ if (!ethtool_convert_link_mode_to_legacy_u32(&data->advertised, adv))
+ overflow = true;
+ if (!ethtool_convert_link_mode_to_legacy_u32(&data->lp_advertised, lp))
+ overflow = true;
+
+ if (overflow)
+ phydev_warn(phydev, "Not all supported or advertised EEE link modes were passed to the user space\n");
+
+ return 0;
+}
+EXPORT_SYMBOL(genphy_c45_ethtool_get_eee);
+
+/**
+ * genphy_c45_ethtool_set_eee - set EEE advertised link modes
+ * @phydev: target phy_device struct
+ * @data: ethtool_eee data
+ *
+ * Description: sets the advertised EEE link modes and restarts
+ * auto-negotiation if the advertisement has changed.
+ */
+int genphy_c45_ethtool_set_eee(struct phy_device *phydev,
+ struct ethtool_eee *data)
+{
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(adv) = {};
+ int ret;
+
+ if (data->eee_enabled) {
+ if (data->advertised)
+ adv[0] = data->advertised;
+ else
+ linkmode_copy(adv, phydev->supported_eee);
+ }
+
+ ret = genphy_c45_write_eee_adv(phydev, adv);
+ if (ret < 0)
+ return ret;
+ if (ret > 0)
+ return phy_restart_aneg(phydev);
+
+ return 0;
+}
+EXPORT_SYMBOL(genphy_c45_ethtool_set_eee);
+
struct phy_driver genphy_c45_driver = {
.phy_id = 0xffffffff,
.phy_id_mask = 0xffffffff,
diff --git a/drivers/net/phy/phy-core.c b/drivers/net/phy/phy-core.c
index 5d08c627a516..a64186dc53f8 100644
--- a/drivers/net/phy/phy-core.c
+++ b/drivers/net/phy/phy-core.c
@@ -13,7 +13,7 @@
*/
const char *phy_speed_to_str(int speed)
{
- BUILD_BUG_ON_MSG(__ETHTOOL_LINK_MODE_MASK_NBITS != 99,
+ BUILD_BUG_ON_MSG(__ETHTOOL_LINK_MODE_MASK_NBITS != 102,
"Enum ethtool_link_mode_bit_indices and phylib are out of sync. "
"If a speed or mode has been added please update phy_speed_to_str "
"and the PHY settings array.\n");
@@ -260,6 +260,9 @@ static const struct phy_setting settings[] = {
PHY_SETTING( 10, FULL, 10baseT_Full ),
PHY_SETTING( 10, HALF, 10baseT_Half ),
PHY_SETTING( 10, FULL, 10baseT1L_Full ),
+ PHY_SETTING( 10, FULL, 10baseT1S_Full ),
+ PHY_SETTING( 10, HALF, 10baseT1S_Half ),
+ PHY_SETTING( 10, HALF, 10baseT1S_P2MP_Half ),
};
#undef PHY_SETTING
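
The bumped BUILD_BUG_ON_MSG keeps the settings[] table in sync with the ethtool link-mode list: anyone adding a mode must touch this file too. The same compile-time guard can be expressed in plain C11 with _Static_assert:

#include <stdio.h>

#define NUM_LINK_MODES 102	/* must track enum ethtool_link_mode_bit_indices */

_Static_assert(NUM_LINK_MODES == 102,
	       "link-mode list and settings table are out of sync");

int main(void)
{
	printf("%d link modes\n", NUM_LINK_MODES);
	return 0;
}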
diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index e5b6cb1a77f9..b33e55a7364e 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -242,11 +242,11 @@ unsigned int phy_supported_speeds(struct phy_device *phy,
*
* Description: Returns true if there is a valid setting, false otherwise.
*/
-static inline bool phy_check_valid(int speed, int duplex,
- unsigned long *features)
+bool phy_check_valid(int speed, int duplex, unsigned long *features)
{
return !!phy_lookup_setting(speed, duplex, features, true);
}
+EXPORT_SYMBOL(phy_check_valid);
/**
* phy_sanitize_settings - make sure the PHY is set to supported speed and duplex
@@ -544,6 +544,198 @@ int phy_ethtool_get_stats(struct phy_device *phydev,
EXPORT_SYMBOL(phy_ethtool_get_stats);
/**
+ * phy_ethtool_get_plca_cfg - Get PLCA RS configuration
+ * @phydev: the phy_device struct
+ * @plca_cfg: where to store the retrieved configuration
+ *
+ * Retrieve the PLCA configuration from the PHY. Return 0 on success or a
+ * negative value if an error occurred.
+ */
+int phy_ethtool_get_plca_cfg(struct phy_device *phydev,
+ struct phy_plca_cfg *plca_cfg)
+{
+ int ret;
+
+ if (!phydev->drv) {
+ ret = -EIO;
+ goto out;
+ }
+
+ if (!phydev->drv->get_plca_cfg) {
+ ret = -EOPNOTSUPP;
+ goto out;
+ }
+
+ mutex_lock(&phydev->lock);
+ ret = phydev->drv->get_plca_cfg(phydev, plca_cfg);
+
+ mutex_unlock(&phydev->lock);
+out:
+ return ret;
+}
+
+/**
+ * plca_check_valid - Check PLCA configuration before enabling
+ * @phydev: the phy_device struct
+ * @plca_cfg: current PLCA configuration
+ * @extack: extack for reporting useful error messages
+ *
+ * Checks whether the PLCA and PHY configuration are consistent and it is safe
+ * to enable PLCA. Returns 0 on success or a negative value if the PLCA or PHY
+ * configuration is not consistent.
+ */
+static int plca_check_valid(struct phy_device *phydev,
+ const struct phy_plca_cfg *plca_cfg,
+ struct netlink_ext_ack *extack)
+{
+ int ret = 0;
+
+ if (!linkmode_test_bit(ETHTOOL_LINK_MODE_10baseT1S_P2MP_Half_BIT,
+ phydev->advertising)) {
+ ret = -EOPNOTSUPP;
+ NL_SET_ERR_MSG(extack,
+ "Point to Multi-Point mode is not enabled");
+ } else if (plca_cfg->node_id >= 255) {
+ NL_SET_ERR_MSG(extack, "PLCA node ID is not set");
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+/**
+ * phy_ethtool_set_plca_cfg - Set PLCA RS configuration
+ * @phydev: the phy_device struct
+ * @plca_cfg: new PLCA configuration to apply
+ * @extack: extack for reporting useful error messages
+ *
+ * Sets the PLCA configuration in the PHY. Return 0 on success or a
+ * negative value if an error occurred.
+ */
+int phy_ethtool_set_plca_cfg(struct phy_device *phydev,
+ const struct phy_plca_cfg *plca_cfg,
+ struct netlink_ext_ack *extack)
+{
+ struct phy_plca_cfg *curr_plca_cfg;
+ int ret;
+
+ if (!phydev->drv) {
+ ret = -EIO;
+ goto out;
+ }
+
+ if (!phydev->drv->set_plca_cfg ||
+ !phydev->drv->get_plca_cfg) {
+ ret = -EOPNOTSUPP;
+ goto out;
+ }
+
+ curr_plca_cfg = kmalloc(sizeof(*curr_plca_cfg), GFP_KERNEL);
+ if (!curr_plca_cfg) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ mutex_lock(&phydev->lock);
+
+ ret = phydev->drv->get_plca_cfg(phydev, curr_plca_cfg);
+ if (ret)
+ goto out_drv;
+
+ if (curr_plca_cfg->enabled < 0 && plca_cfg->enabled >= 0) {
+ NL_SET_ERR_MSG(extack,
+ "PHY does not support changing the PLCA 'enable' attribute");
+ ret = -EINVAL;
+ goto out_drv;
+ }
+
+ if (curr_plca_cfg->node_id < 0 && plca_cfg->node_id >= 0) {
+ NL_SET_ERR_MSG(extack,
+ "PHY does not support changing the PLCA 'local node ID' attribute");
+ ret = -EINVAL;
+ goto out_drv;
+ }
+
+ if (curr_plca_cfg->node_cnt < 0 && plca_cfg->node_cnt >= 0) {
+ NL_SET_ERR_MSG(extack,
+ "PHY does not support changing the PLCA 'node count' attribute");
+ ret = -EINVAL;
+ goto out_drv;
+ }
+
+ if (curr_plca_cfg->to_tmr < 0 && plca_cfg->to_tmr >= 0) {
+ NL_SET_ERR_MSG(extack,
+ "PHY does not support changing the PLCA 'TO timer' attribute");
+ ret = -EINVAL;
+ goto out_drv;
+ }
+
+ if (curr_plca_cfg->burst_cnt < 0 && plca_cfg->burst_cnt >= 0) {
+ NL_SET_ERR_MSG(extack,
+ "PHY does not support changing the PLCA 'burst count' attribute");
+ ret = -EINVAL;
+ goto out_drv;
+ }
+
+ if (curr_plca_cfg->burst_tmr < 0 && plca_cfg->burst_tmr >= 0) {
+ NL_SET_ERR_MSG(extack,
+ "PHY does not support changing the PLCA 'burst timer' attribute");
+ ret = -EINVAL;
+ goto out_drv;
+ }
+
+ // if enabling PLCA, perform a few sanity checks
+ if (plca_cfg->enabled > 0) {
+ // allow setting node_id concurrently with enabled
+ if (plca_cfg->node_id >= 0)
+ curr_plca_cfg->node_id = plca_cfg->node_id;
+
+ ret = plca_check_valid(phydev, curr_plca_cfg, extack);
+ if (ret)
+ goto out_drv;
+ }
+
+ ret = phydev->drv->set_plca_cfg(phydev, plca_cfg);
+
+out_drv:
+ kfree(curr_plca_cfg);
+ mutex_unlock(&phydev->lock);
+out:
+ return ret;
+}
+
+/**
+ * phy_ethtool_get_plca_status - Get PLCA RS status information
+ * @phydev: the phy_device struct
+ * @plca_st: where to store the retrieved status information
+ *
+ * Retrieve the PLCA status information from the PHY. Return 0 on success or a
+ * negative value if an error occurred.
+ */
+int phy_ethtool_get_plca_status(struct phy_device *phydev,
+ struct phy_plca_status *plca_st)
+{
+ int ret;
+
+ if (!phydev->drv) {
+ ret = -EIO;
+ goto out;
+ }
+
+ if (!phydev->drv->get_plca_status) {
+ ret = -EOPNOTSUPP;
+ goto out;
+ }
+
+ mutex_lock(&phydev->lock);
+ ret = phydev->drv->get_plca_status(phydev, plca_st);
+
+ mutex_unlock(&phydev->lock);
+out:
+ return ret;
+}
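
All three ethtool PLCA entry points share one shape: bail out if no driver or no callback is bound, then serialize the driver call under phydev->lock. The guard-then-lock wrapper as a compact sketch (names hypothetical):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

struct drv {
	int (*get_cfg)(int *out);
};

struct dev {
	struct drv *drv;
	pthread_mutex_t lock;
};

static int wrapped_get_cfg(struct dev *d, int *out)
{
	int ret;

	if (!d->drv)
		return -EIO;		/* no driver bound */
	if (!d->drv->get_cfg)
		return -EOPNOTSUPP;	/* operation not implemented */

	pthread_mutex_lock(&d->lock);	/* serialize driver access */
	ret = d->drv->get_cfg(out);
	pthread_mutex_unlock(&d->lock);

	return ret;
}

static int fake_get_cfg(int *out)
{
	*out = 42;
	return 0;
}

int main(void)
{
	struct dev d = { .lock = PTHREAD_MUTEX_INITIALIZER };
	struct drv drv = { .get_cfg = fake_get_cfg };
	int cfg = 0;

	d.drv = &drv;
	printf("ret=%d cfg=%d\n", wrapped_get_cfg(&d, &cfg), cfg);
	return 0;
}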
+
+/**
* phy_start_cable_test - Start a cable test
*
* @phydev: the phy_device struct
@@ -877,27 +1069,35 @@ EXPORT_SYMBOL(phy_ethtool_ksettings_set);
int phy_speed_down(struct phy_device *phydev, bool sync)
{
__ETHTOOL_DECLARE_LINK_MODE_MASK(adv_tmp);
- int ret;
+ int ret = 0;
+
+ mutex_lock(&phydev->lock);
if (phydev->autoneg != AUTONEG_ENABLE)
- return 0;
+ goto out;
linkmode_copy(adv_tmp, phydev->advertising);
ret = phy_speed_down_core(phydev);
if (ret)
- return ret;
+ goto out;
linkmode_copy(phydev->adv_old, adv_tmp);
- if (linkmode_equal(phydev->advertising, adv_tmp))
- return 0;
+ if (linkmode_equal(phydev->advertising, adv_tmp)) {
+ ret = 0;
+ goto out;
+ }
ret = phy_config_aneg(phydev);
if (ret)
- return ret;
+ goto out;
+
+ ret = sync ? phy_poll_aneg_done(phydev) : 0;
+out:
+ mutex_unlock(&phydev->lock);
- return sync ? phy_poll_aneg_done(phydev) : 0;
+ return ret;
}
EXPORT_SYMBOL_GPL(phy_speed_down);
@@ -910,21 +1110,28 @@ EXPORT_SYMBOL_GPL(phy_speed_down);
int phy_speed_up(struct phy_device *phydev)
{
__ETHTOOL_DECLARE_LINK_MODE_MASK(adv_tmp);
+ int ret = 0;
+
+ mutex_lock(&phydev->lock);
if (phydev->autoneg != AUTONEG_ENABLE)
- return 0;
+ goto out;
if (linkmode_empty(phydev->adv_old))
- return 0;
+ goto out;
linkmode_copy(adv_tmp, phydev->advertising);
linkmode_copy(phydev->advertising, phydev->adv_old);
linkmode_zero(phydev->adv_old);
if (linkmode_equal(phydev->advertising, adv_tmp))
- return 0;
+ goto out;
+
+ ret = phy_config_aneg(phydev);
+out:
+ mutex_unlock(&phydev->lock);
- return phy_config_aneg(phydev);
+ return ret;
}
EXPORT_SYMBOL_GPL(phy_speed_up);
@@ -1265,30 +1472,6 @@ void phy_mac_interrupt(struct phy_device *phydev)
}
EXPORT_SYMBOL(phy_mac_interrupt);
-static void mmd_eee_adv_to_linkmode(unsigned long *advertising, u16 eee_adv)
-{
- linkmode_zero(advertising);
-
- if (eee_adv & MDIO_EEE_100TX)
- linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT,
- advertising);
- if (eee_adv & MDIO_EEE_1000T)
- linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
- advertising);
- if (eee_adv & MDIO_EEE_10GT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseT_Full_BIT,
- advertising);
- if (eee_adv & MDIO_EEE_1000KX)
- linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseKX_Full_BIT,
- advertising);
- if (eee_adv & MDIO_EEE_10GKX4)
- linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT,
- advertising);
- if (eee_adv & MDIO_EEE_10GKR)
- linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseKR_Full_BIT,
- advertising);
-}
-
/**
* phy_init_eee - init and check the EEE feature
* @phydev: target phy_device struct
@@ -1301,62 +1484,25 @@ static void mmd_eee_adv_to_linkmode(unsigned long *advertising, u16 eee_adv)
*/
int phy_init_eee(struct phy_device *phydev, bool clk_stop_enable)
{
+ int ret;
+
if (!phydev->drv)
return -EIO;
- /* According to 802.3az,the EEE is supported only in full duplex-mode.
- */
- if (phydev->duplex == DUPLEX_FULL) {
- __ETHTOOL_DECLARE_LINK_MODE_MASK(common);
- __ETHTOOL_DECLARE_LINK_MODE_MASK(lp);
- __ETHTOOL_DECLARE_LINK_MODE_MASK(adv);
- int eee_lp, eee_cap, eee_adv;
- int status;
- u32 cap;
-
- /* Read phy status to properly get the right settings */
- status = phy_read_status(phydev);
- if (status)
- return status;
-
- /* First check if the EEE ability is supported */
- eee_cap = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_PCS_EEE_ABLE);
- if (eee_cap <= 0)
- goto eee_exit_err;
-
- cap = mmd_eee_cap_to_ethtool_sup_t(eee_cap);
- if (!cap)
- goto eee_exit_err;
-
- /* Check which link settings negotiated and verify it in
- * the EEE advertising registers.
- */
- eee_lp = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_EEE_LPABLE);
- if (eee_lp <= 0)
- goto eee_exit_err;
-
- eee_adv = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_EEE_ADV);
- if (eee_adv <= 0)
- goto eee_exit_err;
-
- mmd_eee_adv_to_linkmode(adv, eee_adv);
- mmd_eee_adv_to_linkmode(lp, eee_lp);
- linkmode_and(common, adv, lp);
-
- if (!phy_check_valid(phydev->speed, phydev->duplex, common))
- goto eee_exit_err;
+ ret = genphy_c45_eee_is_active(phydev, NULL, NULL, NULL);
+ if (ret < 0)
+ return ret;
+ if (!ret)
+ return -EPROTONOSUPPORT;
- if (clk_stop_enable)
- /* Configure the PHY to stop receiving xMII
- * clock while it is signaling LPI.
- */
- phy_set_bits_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1,
- MDIO_PCS_CTRL1_CLKSTOP_EN);
+ if (clk_stop_enable)
+ /* Configure the PHY to stop receiving xMII
+ * clock while it is signaling LPI.
+ */
+ ret = phy_set_bits_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1,
+ MDIO_PCS_CTRL1_CLKSTOP_EN);
- return 0; /* EEE supported */
- }
-eee_exit_err:
- return -EPROTONOSUPPORT;
+ return ret < 0 ? ret : 0;
}
EXPORT_SYMBOL(phy_init_eee);
@@ -1369,10 +1515,16 @@ EXPORT_SYMBOL(phy_init_eee);
*/
int phy_get_eee_err(struct phy_device *phydev)
{
+ int ret;
+
if (!phydev->drv)
return -EIO;
- return phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_PCS_EEE_WK_ERR);
+ mutex_lock(&phydev->lock);
+ ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_PCS_EEE_WK_ERR);
+ mutex_unlock(&phydev->lock);
+
+ return ret;
}
EXPORT_SYMBOL(phy_get_eee_err);
@@ -1386,33 +1538,16 @@ EXPORT_SYMBOL(phy_get_eee_err);
*/
int phy_ethtool_get_eee(struct phy_device *phydev, struct ethtool_eee *data)
{
- int val;
+ int ret;
if (!phydev->drv)
return -EIO;
- /* Get Supported EEE */
- val = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_PCS_EEE_ABLE);
- if (val < 0)
- return val;
- data->supported = mmd_eee_cap_to_ethtool_sup_t(val);
-
- /* Get advertisement EEE */
- val = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_EEE_ADV);
- if (val < 0)
- return val;
- data->advertised = mmd_eee_adv_to_ethtool_adv_t(val);
- data->eee_enabled = !!data->advertised;
-
- /* Get LP advertisement EEE */
- val = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_EEE_LPABLE);
- if (val < 0)
- return val;
- data->lp_advertised = mmd_eee_adv_to_ethtool_adv_t(val);
-
- data->eee_active = !!(data->advertised & data->lp_advertised);
+ mutex_lock(&phydev->lock);
+ ret = genphy_c45_ethtool_get_eee(phydev, data);
+ mutex_unlock(&phydev->lock);
- return 0;
+ return ret;
}
EXPORT_SYMBOL(phy_ethtool_get_eee);
@@ -1425,43 +1560,16 @@ EXPORT_SYMBOL(phy_ethtool_get_eee);
*/
int phy_ethtool_set_eee(struct phy_device *phydev, struct ethtool_eee *data)
{
- int cap, old_adv, adv = 0, ret;
+ int ret;
if (!phydev->drv)
return -EIO;
- /* Get Supported EEE */
- cap = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_PCS_EEE_ABLE);
- if (cap < 0)
- return cap;
-
- old_adv = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_EEE_ADV);
- if (old_adv < 0)
- return old_adv;
-
- if (data->eee_enabled) {
- adv = !data->advertised ? cap :
- ethtool_adv_to_mmd_eee_adv_t(data->advertised) & cap;
- /* Mask prohibited EEE modes */
- adv &= ~phydev->eee_broken_modes;
- }
-
- if (old_adv != adv) {
- ret = phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_EEE_ADV, adv);
- if (ret < 0)
- return ret;
-
- /* Restart autonegotiation so the new modes get sent to the
- * link partner.
- */
- if (phydev->autoneg == AUTONEG_ENABLE) {
- ret = phy_restart_aneg(phydev);
- if (ret < 0)
- return ret;
- }
- }
+ mutex_lock(&phydev->lock);
+ ret = genphy_c45_ethtool_set_eee(phydev, data);
+ mutex_unlock(&phydev->lock);
- return 0;
+ return ret;
}
EXPORT_SYMBOL(phy_ethtool_set_eee);
@@ -1473,8 +1581,15 @@ EXPORT_SYMBOL(phy_ethtool_set_eee);
*/
int phy_ethtool_set_wol(struct phy_device *phydev, struct ethtool_wolinfo *wol)
{
- if (phydev->drv && phydev->drv->set_wol)
- return phydev->drv->set_wol(phydev, wol);
+ int ret;
+
+ if (phydev->drv && phydev->drv->set_wol) {
+ mutex_lock(&phydev->lock);
+ ret = phydev->drv->set_wol(phydev, wol);
+ mutex_unlock(&phydev->lock);
+
+ return ret;
+ }
return -EOPNOTSUPP;
}
@@ -1488,8 +1603,11 @@ EXPORT_SYMBOL(phy_ethtool_set_wol);
*/
void phy_ethtool_get_wol(struct phy_device *phydev, struct ethtool_wolinfo *wol)
{
- if (phydev->drv && phydev->drv->get_wol)
+ if (phydev->drv && phydev->drv->get_wol) {
+ mutex_lock(&phydev->lock);
phydev->drv->get_wol(phydev, wol);
+ mutex_unlock(&phydev->lock);
+ }
}
EXPORT_SYMBOL(phy_ethtool_get_wol);
@@ -1526,6 +1644,7 @@ EXPORT_SYMBOL(phy_ethtool_set_link_ksettings);
int phy_ethtool_nway_reset(struct net_device *ndev)
{
struct phy_device *phydev = ndev->phydev;
+ int ret;
if (!phydev)
return -ENODEV;
@@ -1533,6 +1652,10 @@ int phy_ethtool_nway_reset(struct net_device *ndev)
if (!phydev->drv)
return -EIO;
- return phy_restart_aneg(phydev);
+ mutex_lock(&phydev->lock);
+ ret = phy_restart_aneg(phydev);
+ mutex_unlock(&phydev->lock);
+
+ return ret;
}
EXPORT_SYMBOL(phy_ethtool_nway_reset);
diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
index 607aa786c8cb..71becceb8764 100644
--- a/drivers/net/phy/phy_device.c
+++ b/drivers/net/phy/phy_device.c
@@ -45,6 +45,9 @@ EXPORT_SYMBOL_GPL(phy_basic_features);
__ETHTOOL_DECLARE_LINK_MODE_MASK(phy_basic_t1_features) __ro_after_init;
EXPORT_SYMBOL_GPL(phy_basic_t1_features);
+__ETHTOOL_DECLARE_LINK_MODE_MASK(phy_basic_t1s_p2mp_features) __ro_after_init;
+EXPORT_SYMBOL_GPL(phy_basic_t1s_p2mp_features);
+
__ETHTOOL_DECLARE_LINK_MODE_MASK(phy_gbit_features) __ro_after_init;
EXPORT_SYMBOL_GPL(phy_gbit_features);
@@ -98,6 +101,12 @@ const int phy_basic_t1_features_array[3] = {
};
EXPORT_SYMBOL_GPL(phy_basic_t1_features_array);
+const int phy_basic_t1s_p2mp_features_array[2] = {
+ ETHTOOL_LINK_MODE_TP_BIT,
+ ETHTOOL_LINK_MODE_10baseT1S_P2MP_Half_BIT,
+};
+EXPORT_SYMBOL_GPL(phy_basic_t1s_p2mp_features_array);
+
const int phy_gbit_features_array[2] = {
ETHTOOL_LINK_MODE_1000baseT_Half_BIT,
ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
@@ -123,6 +132,18 @@ static const int phy_10gbit_full_features_array[] = {
ETHTOOL_LINK_MODE_10000baseT_Full_BIT,
};
+static const int phy_eee_cap1_features_array[] = {
+ ETHTOOL_LINK_MODE_100baseT_Full_BIT,
+ ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseT_Full_BIT,
+ ETHTOOL_LINK_MODE_1000baseKX_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseKR_Full_BIT,
+};
+
+__ETHTOOL_DECLARE_LINK_MODE_MASK(phy_eee_cap1_features) __ro_after_init;
+EXPORT_SYMBOL_GPL(phy_eee_cap1_features);
+
static void features_init(void)
{
/* 10/100 half/full*/
@@ -138,6 +159,11 @@ static void features_init(void)
ARRAY_SIZE(phy_basic_t1_features_array),
phy_basic_t1_features);
+ /* 10 half, P2MP, TP */
+ linkmode_set_bit_array(phy_basic_t1s_p2mp_features_array,
+ ARRAY_SIZE(phy_basic_t1s_p2mp_features_array),
+ phy_basic_t1s_p2mp_features);
+
/* 10/100 half/full + 1000 half/full */
linkmode_set_bit_array(phy_basic_ports_array,
ARRAY_SIZE(phy_basic_ports_array),
@@ -199,6 +225,10 @@ static void features_init(void)
linkmode_set_bit_array(phy_10gbit_fec_features_array,
ARRAY_SIZE(phy_10gbit_fec_features_array),
phy_10gbit_fec_features);
+ linkmode_set_bit_array(phy_eee_cap1_features_array,
+ ARRAY_SIZE(phy_eee_cap1_features_array),
+ phy_eee_cap1_features);
+
}
void phy_device_free(struct phy_device *phydev)
@@ -932,7 +962,7 @@ struct phy_device *get_phy_device(struct mii_bus *bus, int addr, bool is_c45)
* probe with C45 to see if we're able to get a valid PHY ID in the C45
* space; if successful, create the C45 PHY device.
*/
- if (!is_c45 && phy_id == 0 && bus->probe_capabilities >= MDIOBUS_C45) {
+ if (!is_c45 && phy_id == 0 && bus->read_c45) {
r = get_phy_c45_ids(bus, addr, &c45_ids);
if (!r)
return phy_device_create(bus, addr, phy_id,
@@ -1487,6 +1517,13 @@ int phy_attach_direct(struct net_device *dev, struct phy_device *phydev,
phydev->interrupts = PHY_INTERRUPT_DISABLED;
+ /* PHYs can request to use poll mode even though they have an
+ * associated interrupt line. This could be the case if they
+ * detect broken interrupt handling.
+ */
+ if (phydev->dev_flags & PHY_F_NO_IRQ)
+ phydev->irq = PHY_POLL;
+
/* Port is set to PORT_TP by default and the actual PHY driver will set
* it to different value depending on the PHY configuration. If we have
* the generic PHY driver we can't figure it out, thus set the old
@@ -2194,7 +2231,10 @@ int __genphy_config_aneg(struct phy_device *phydev, bool changed)
{
int err;
- if (genphy_config_eee_advert(phydev))
+ err = genphy_c45_write_eee_adv(phydev, phydev->supported_eee);
+ if (err < 0)
+ return err;
+ else if (err)
changed = true;
err = genphy_setup_master_slave(phydev);
@@ -2616,6 +2656,11 @@ int genphy_read_abilities(struct phy_device *phydev)
phydev->supported, val & ESTATUS_1000_XFULL);
}
+ /* This is optional functionality. If it is not supported, we may get
+ * an error, which should be ignored.
+ */
+ genphy_c45_read_eee_abilities(phydev);
+
return 0;
}
EXPORT_SYMBOL(genphy_read_abilities);
@@ -3068,8 +3113,10 @@ static int phy_probe(struct device *dev)
* a controller will attach, and may modify one
* or both of these values
*/
- if (phydrv->features)
+ if (phydrv->features) {
linkmode_copy(phydev->supported, phydrv->features);
+ genphy_c45_read_eee_abilities(phydev);
+ }
else if (phydrv->get_features)
err = phydrv->get_features(phydev);
else if (phydev->is_c45)
@@ -3262,6 +3309,9 @@ static const struct ethtool_phy_ops phy_ethtool_phy_ops = {
.get_sset_count = phy_ethtool_get_sset_count,
.get_strings = phy_ethtool_get_strings,
.get_stats = phy_ethtool_get_stats,
+ .get_plca_cfg = phy_ethtool_get_plca_cfg,
+ .set_plca_cfg = phy_ethtool_set_plca_cfg,
+ .get_plca_status = phy_ethtool_get_plca_status,
.start_cable_test = phy_start_cable_test,
.start_cable_test_tdr = phy_start_cable_test_tdr,
};
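The new phy_basic_t1s_p2mp_features and phy_eee_cap1_features masks follow the file's usual two-step pattern: declare a zeroed link-mode bitmap, then set the listed ETHTOOL_LINK_MODE_* bits once at init time. A condensed sketch with hypothetical example_* names:

#include <linux/linkmode.h>

static const int example_features_array[] = {
	ETHTOOL_LINK_MODE_100baseT_Full_BIT,
	ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
};

__ETHTOOL_DECLARE_LINK_MODE_MASK(example_features) __ro_after_init;

static void example_features_init(void)
{
	linkmode_set_bit_array(example_features_array,
			       ARRAY_SIZE(example_features_array),
			       example_features);
}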
diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
index 4d2519cdb801..1a2f074685fa 100644
--- a/drivers/net/phy/phylink.c
+++ b/drivers/net/phy/phylink.c
@@ -241,12 +241,16 @@ void phylink_caps_to_linkmodes(unsigned long *linkmodes, unsigned long caps)
if (caps & MAC_ASYM_PAUSE)
__set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, linkmodes);
- if (caps & MAC_10HD)
+ if (caps & MAC_10HD) {
__set_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, linkmodes);
+ __set_bit(ETHTOOL_LINK_MODE_10baseT1S_Half_BIT, linkmodes);
+ __set_bit(ETHTOOL_LINK_MODE_10baseT1S_P2MP_Half_BIT, linkmodes);
+ }
if (caps & MAC_10FD) {
__set_bit(ETHTOOL_LINK_MODE_10baseT_Full_BIT, linkmodes);
__set_bit(ETHTOOL_LINK_MODE_10baseT1L_Full_BIT, linkmodes);
+ __set_bit(ETHTOOL_LINK_MODE_10baseT1S_Full_BIT, linkmodes);
}
if (caps & MAC_100HD) {
@@ -707,6 +711,7 @@ static int phylink_parse_fixedlink(struct phylink *pl,
struct fwnode_handle *fwnode)
{
struct fwnode_handle *fixed_node;
+ bool pause, asym_pause, autoneg;
const struct phy_setting *s;
struct gpio_desc *desc;
u32 speed;
@@ -779,13 +784,23 @@ static int phylink_parse_fixedlink(struct phylink *pl,
linkmode_copy(pl->link_config.advertising, pl->supported);
phylink_validate(pl, pl->supported, &pl->link_config);
+ pause = phylink_test(pl->supported, Pause);
+ asym_pause = phylink_test(pl->supported, Asym_Pause);
+ autoneg = phylink_test(pl->supported, Autoneg);
s = phy_lookup_setting(pl->link_config.speed, pl->link_config.duplex,
pl->supported, true);
linkmode_zero(pl->supported);
phylink_set(pl->supported, MII);
- phylink_set(pl->supported, Pause);
- phylink_set(pl->supported, Asym_Pause);
- phylink_set(pl->supported, Autoneg);
+
+ if (pause)
+ phylink_set(pl->supported, Pause);
+
+ if (asym_pause)
+ phylink_set(pl->supported, Asym_Pause);
+
+ if (autoneg)
+ phylink_set(pl->supported, Autoneg);
+
if (s) {
__set_bit(s->bit, pl->supported);
__set_bit(s->bit, pl->link_config.lp_advertising);
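The fixed-link change above stops unconditionally advertising Pause, Asym_Pause and Autoneg: the bits are sampled after phylink_validate() has run and only restored if validation left them set. The pattern, reduced to a single bit (illustrative sketch, not the full function):

/* Sketch: sample a validated bit, zero the mask, restore conditionally. */
bool pause = phylink_test(pl->supported, Pause);

linkmode_zero(pl->supported);
phylink_set(pl->supported, MII);
if (pause)
	phylink_set(pl->supported, Pause);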
diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
index 83b99d95b278..c02cad6478a8 100644
--- a/drivers/net/phy/sfp.c
+++ b/drivers/net/phy/sfp.c
@@ -1,6 +1,4 @@
// SPDX-License-Identifier: GPL-2.0
-#include <linux/acpi.h>
-#include <linux/ctype.h>
#include <linux/debugfs.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
@@ -144,7 +142,7 @@ static const char *sm_state_to_str(unsigned short sm_state)
return sm_state_strings[sm_state];
}
-static const char *gpio_of_names[] = {
+static const char *gpio_names[] = {
"mod-def0",
"los",
"tx-fault",
@@ -2563,7 +2561,7 @@ static void sfp_check_state(struct sfp *sfp)
for (i = 0; i < GPIO_MAX; i++)
if (changed & BIT(i))
- dev_dbg(sfp->dev, "%s %u -> %u\n", gpio_of_names[i],
+ dev_dbg(sfp->dev, "%s %u -> %u\n", gpio_names[i],
!!(sfp->state & BIT(i)), !!(state & BIT(i)));
state |= sfp->state & (SFP_F_TX_DISABLE | SFP_F_RATE_SELECT);
@@ -2644,10 +2642,8 @@ static void sfp_cleanup(void *data)
static int sfp_i2c_get(struct sfp *sfp)
{
- struct acpi_handle *acpi_handle;
struct fwnode_handle *h;
struct i2c_adapter *i2c;
- struct device_node *np;
int err;
h = fwnode_find_reference(dev_fwnode(sfp->dev), "i2c-bus", 0);
@@ -2656,16 +2652,7 @@ static int sfp_i2c_get(struct sfp *sfp)
return -ENODEV;
}
- if (is_acpi_device_node(h)) {
- acpi_handle = ACPI_HANDLE_FWNODE(h);
- i2c = i2c_acpi_find_adapter_by_handle(acpi_handle);
- } else if ((np = to_of_node(h)) != NULL) {
- i2c = of_find_i2c_adapter_by_node(np);
- } else {
- err = -EINVAL;
- goto put;
- }
-
+ i2c = i2c_get_adapter_by_fwnode(h);
if (!i2c) {
err = -EPROBE_DEFER;
goto put;
@@ -2696,19 +2683,11 @@ static int sfp_probe(struct platform_device *pdev)
if (err < 0)
return err;
- sff = sfp->type = &sfp_data;
-
- if (pdev->dev.of_node) {
- const struct of_device_id *id;
+ sff = device_get_match_data(sfp->dev);
+ if (!sff)
+ sff = &sfp_data;
- id = of_match_node(sfp_of_match, pdev->dev.of_node);
- if (WARN_ON(!id))
- return -EINVAL;
-
- sff = sfp->type = id->data;
- } else if (!has_acpi_companion(&pdev->dev)) {
- return -EINVAL;
- }
+ sfp->type = sff;
err = sfp_i2c_get(sfp);
if (err)
@@ -2717,7 +2696,7 @@ static int sfp_probe(struct platform_device *pdev)
for (i = 0; i < GPIO_MAX; i++)
if (sff->gpios & BIT(i)) {
sfp->gpio[i] = devm_gpiod_get_optional(sfp->dev,
- gpio_of_names[i], gpio_flags[i]);
+ gpio_names[i], gpio_flags[i]);
if (IS_ERR(sfp->gpio[i]))
return PTR_ERR(sfp->gpio[i]);
}
@@ -2772,7 +2751,7 @@ static int sfp_probe(struct platform_device *pdev)
sfp_irq_name = devm_kasprintf(sfp->dev, GFP_KERNEL,
"%s-%s", dev_name(sfp->dev),
- gpio_of_names[i]);
+ gpio_names[i]);
if (!sfp_irq_name)
return -ENOMEM;
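Both sfp.c cleanups above replace explicit ACPI-versus-OF branching with fwnode-generic helpers: i2c_get_adapter_by_fwnode() resolves the i2c-bus reference for either firmware type, and device_get_match_data() replaces the of_match_node()/ACPI special-casing. A self-contained sketch of the lookup shape (example_get_i2c is a hypothetical wrapper):

#include <linux/err.h>
#include <linux/i2c.h>
#include <linux/property.h>

static int example_get_i2c(struct device *dev, struct i2c_adapter **i2c)
{
	struct fwnode_handle *h;

	h = fwnode_find_reference(dev_fwnode(dev), "i2c-bus", 0);
	if (IS_ERR(h))
		return -ENODEV;

	*i2c = i2c_get_adapter_by_fwnode(h);	/* DT and ACPI alike */
	fwnode_handle_put(h);

	return *i2c ? 0 : -EPROBE_DEFER;
}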
diff --git a/drivers/net/tap.c b/drivers/net/tap.c
index a2be1994b389..8941aa199ea3 100644
--- a/drivers/net/tap.c
+++ b/drivers/net/tap.c
@@ -533,7 +533,7 @@ static int tap_open(struct inode *inode, struct file *file)
q->sock.state = SS_CONNECTED;
q->sock.file = file;
q->sock.ops = &tap_socket_ops;
- sock_init_data(&q->sock, &q->sk);
+ sock_init_data_uid(&q->sock, &q->sk, inode->i_uid);
q->sk.sk_write_space = tap_sock_write_space;
q->sk.sk_destruct = tap_sock_destruct;
q->flags = IFF_VNET_HDR | IFF_NO_PI | IFF_TAP;
diff --git a/drivers/net/thunderbolt/Kconfig b/drivers/net/thunderbolt/Kconfig
new file mode 100644
index 000000000000..e127848c8cbd
--- /dev/null
+++ b/drivers/net/thunderbolt/Kconfig
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: GPL-2.0-only
+config USB4_NET
+ tristate "Networking over USB4 and Thunderbolt cables"
+ depends on USB4 && INET
+ help
+ Select this if you want to create a network between two computers
+ over USB4 and Thunderbolt cables. The driver supports the Apple
+ ThunderboltIP protocol and allows communication with any host
+ supporting the same protocol, including Windows and macOS.
+
+ To compile this driver as a module, choose M here. The module will
+ be called thunderbolt_net.
diff --git a/drivers/net/thunderbolt/Makefile b/drivers/net/thunderbolt/Makefile
new file mode 100644
index 000000000000..e81c2a4849f0
--- /dev/null
+++ b/drivers/net/thunderbolt/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_USB4_NET) := thunderbolt_net.o
+thunderbolt_net-objs := main.o trace.o
+
+# Tracepoints need to know where to find trace.h
+CFLAGS_trace.o := -I$(src)
diff --git a/drivers/net/thunderbolt.c b/drivers/net/thunderbolt/main.c
index 990484776f2d..26ef3706445e 100644
--- a/drivers/net/thunderbolt.c
+++ b/drivers/net/thunderbolt/main.c
@@ -23,6 +23,8 @@
#include <net/ip6_checksum.h>
+#include "trace.h"
+
/* Protocol timeouts in ms */
#define TBNET_LOGIN_DELAY 4500
#define TBNET_LOGIN_TIMEOUT 500
@@ -305,6 +307,8 @@ static int tbnet_logout_request(struct tbnet *net)
static void start_login(struct tbnet *net)
{
+ netdev_dbg(net->dev, "login started\n");
+
mutex_lock(&net->connection_lock);
net->login_sent = false;
net->login_received = false;
@@ -318,6 +322,8 @@ static void stop_login(struct tbnet *net)
{
cancel_delayed_work_sync(&net->login_work);
cancel_work_sync(&net->connected_work);
+
+ netdev_dbg(net->dev, "login stopped\n");
}
static inline unsigned int tbnet_frame_size(const struct tbnet_frame *tf)
@@ -349,6 +355,8 @@ static void tbnet_free_buffers(struct tbnet_ring *ring)
size = TBNET_RX_PAGE_SIZE;
}
+ trace_tbnet_free_frame(i, tf->page, tf->frame.buffer_phy, dir);
+
if (tf->frame.buffer_phy)
dma_unmap_page(dma_dev, tf->frame.buffer_phy, size,
dir);
@@ -374,6 +382,8 @@ static void tbnet_tear_down(struct tbnet *net, bool send_logout)
int ret, retries = TBNET_LOGOUT_RETRIES;
while (send_logout && retries-- > 0) {
+ netdev_dbg(net->dev, "sending logout request %u\n",
+ retries);
ret = tbnet_logout_request(net);
if (ret != -ETIMEDOUT)
break;
@@ -400,6 +410,8 @@ static void tbnet_tear_down(struct tbnet *net, bool send_logout)
net->login_sent = false;
net->login_received = false;
+ netdev_dbg(net->dev, "network traffic stopped\n");
+
mutex_unlock(&net->connection_lock);
}
@@ -431,12 +443,15 @@ static int tbnet_handle_packet(const void *buf, size_t size, void *data)
switch (pkg->hdr.type) {
case TBIP_LOGIN:
+ netdev_dbg(net->dev, "remote login request received\n");
if (!netif_running(net->dev))
break;
ret = tbnet_login_response(net, route, sequence,
pkg->hdr.command_id);
if (!ret) {
+ netdev_dbg(net->dev, "remote login response sent\n");
+
mutex_lock(&net->connection_lock);
net->login_received = true;
net->remote_transmit_path = pkg->transmit_path;
@@ -458,9 +473,12 @@ static int tbnet_handle_packet(const void *buf, size_t size, void *data)
break;
case TBIP_LOGOUT:
+ netdev_dbg(net->dev, "remote logout request received\n");
ret = tbnet_logout_response(net, route, sequence, command_id);
- if (!ret)
+ if (!ret) {
+ netdev_dbg(net->dev, "remote logout response sent\n");
queue_work(system_long_wq, &net->disconnect_work);
+ }
break;
default:
@@ -512,6 +530,9 @@ static int tbnet_alloc_rx_buffers(struct tbnet *net, unsigned int nbuffers)
tf->frame.buffer_phy = dma_addr;
tf->dev = net->dev;
+ trace_tbnet_alloc_rx_frame(index, tf->page, dma_addr,
+ DMA_FROM_DEVICE);
+
tb_ring_rx(ring->ring, &tf->frame);
ring->prod++;
@@ -588,6 +609,8 @@ static int tbnet_alloc_tx_buffers(struct tbnet *net)
tf->frame.callback = tbnet_tx_callback;
tf->frame.sof = TBIP_PDF_FRAME_START;
tf->frame.eof = TBIP_PDF_FRAME_END;
+
+ trace_tbnet_alloc_tx_frame(i, tf->page, dma_addr, DMA_TO_DEVICE);
}
ring->cons = 0;
@@ -612,6 +635,8 @@ static void tbnet_connected_work(struct work_struct *work)
if (!connected)
return;
+ netdev_dbg(net->dev, "login successful, enabling paths\n");
+
ret = tb_xdomain_alloc_in_hopid(net->xd, net->remote_transmit_path);
if (ret != net->remote_transmit_path) {
netdev_err(net->dev, "failed to allocate Rx HopID\n");
@@ -647,6 +672,8 @@ static void tbnet_connected_work(struct work_struct *work)
netif_carrier_on(net->dev);
netif_start_queue(net->dev);
+
+ netdev_dbg(net->dev, "network traffic started\n");
return;
err_free_tx_buffers:
@@ -668,8 +695,13 @@ static void tbnet_login_work(struct work_struct *work)
if (netif_carrier_ok(net->dev))
return;
+ netdev_dbg(net->dev, "sending login request, retries=%u\n",
+ net->login_retries);
+
ret = tbnet_login_request(net, net->login_retries % 4);
if (ret) {
+ netdev_dbg(net->dev, "sending login request failed, ret=%d\n",
+ ret);
if (net->login_retries++ < TBNET_LOGIN_RETRIES) {
queue_delayed_work(system_long_wq, &net->login_work,
delay);
@@ -677,6 +709,8 @@ static void tbnet_login_work(struct work_struct *work)
netdev_info(net->dev, "ThunderboltIP login timed out\n");
}
} else {
+ netdev_dbg(net->dev, "received login reply\n");
+
net->login_retries = 0;
mutex_lock(&net->connection_lock);
@@ -807,12 +841,16 @@ static int tbnet_poll(struct napi_struct *napi, int budget)
hdr = page_address(page);
if (!tbnet_check_frame(net, tf, hdr)) {
+ trace_tbnet_invalid_rx_ip_frame(hdr->frame_size,
+ hdr->frame_id, hdr->frame_index, hdr->frame_count);
__free_pages(page, TBNET_RX_PAGE_ORDER);
dev_kfree_skb_any(net->skb);
net->skb = NULL;
continue;
}
+ trace_tbnet_rx_ip_frame(hdr->frame_size, hdr->frame_id,
+ hdr->frame_index, hdr->frame_count);
frame_size = le32_to_cpu(hdr->frame_size);
skb = net->skb;
@@ -846,6 +884,7 @@ static int tbnet_poll(struct napi_struct *napi, int budget)
if (last) {
skb->protocol = eth_type_trans(skb, net->dev);
+ trace_tbnet_rx_skb(skb);
napi_gro_receive(&net->napi, skb);
net->skb = NULL;
}
@@ -965,6 +1004,8 @@ static bool tbnet_xmit_csum_and_map(struct tbnet *net, struct sk_buff *skb,
for (i = 0; i < frame_count; i++) {
hdr = page_address(frames[i]->page);
hdr->frame_count = cpu_to_le32(frame_count);
+ trace_tbnet_tx_ip_frame(hdr->frame_size, hdr->frame_id,
+ hdr->frame_index, hdr->frame_count);
dma_sync_single_for_device(dma_dev,
frames[i]->frame.buffer_phy,
tbnet_frame_size(frames[i]), DMA_TO_DEVICE);
@@ -1029,6 +1070,8 @@ static bool tbnet_xmit_csum_and_map(struct tbnet *net, struct sk_buff *skb,
len = le32_to_cpu(hdr->frame_size) - offset;
wsum = csum_partial(dest, len, wsum);
hdr->frame_count = cpu_to_le32(frame_count);
+ trace_tbnet_tx_ip_frame(hdr->frame_size, hdr->frame_id,
+ hdr->frame_index, hdr->frame_count);
offset = 0;
}
@@ -1071,6 +1114,8 @@ static netdev_tx_t tbnet_start_xmit(struct sk_buff *skb,
bool unmap = false;
void *dest;
+ trace_tbnet_tx_skb(skb);
+
nframes = DIV_ROUND_UP(data_len, TBNET_MAX_PAYLOAD_SIZE);
if (tbnet_available_buffers(&net->tx_ring) < nframes) {
netif_stop_queue(net->dev);
@@ -1177,6 +1222,7 @@ static netdev_tx_t tbnet_start_xmit(struct sk_buff *skb,
net->stats.tx_packets++;
net->stats.tx_bytes += skb->len;
+ trace_tbnet_consume_skb(skb);
dev_consume_skb_any(skb);
return NETDEV_TX_OK;
diff --git a/drivers/net/thunderbolt/trace.c b/drivers/net/thunderbolt/trace.c
new file mode 100644
index 000000000000..1b1499520a44
--- /dev/null
+++ b/drivers/net/thunderbolt/trace.c
@@ -0,0 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Tracepoints for Thunderbolt/USB4 networking driver
+ *
+ * Copyright (C) 2023, Intel Corporation
+ * Author: Mika Westerberg <mika.westerberg@linux.intel.com>
+ */
+
+#define CREATE_TRACE_POINTS
+#include "trace.h"
diff --git a/drivers/net/thunderbolt/trace.h b/drivers/net/thunderbolt/trace.h
new file mode 100644
index 000000000000..9626eadaebb9
--- /dev/null
+++ b/drivers/net/thunderbolt/trace.h
@@ -0,0 +1,141 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Tracepoints for Thunderbolt/USB4 networking driver
+ *
+ * Copyright (C) 2023, Intel Corporation
+ * Author: Mika Westerberg <mika.westerberg@linux.intel.com>
+ */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM thunderbolt_net
+
+#if !defined(__TRACE_THUNDERBOLT_NET_H) || defined(TRACE_HEADER_MULTI_READ)
+#define __TRACE_THUNDERBOLT_NET_H
+
+#include <linux/dma-direction.h>
+#include <linux/skbuff.h>
+#include <linux/tracepoint.h>
+
+#define DMA_DATA_DIRECTION_NAMES \
+ { DMA_BIDIRECTIONAL, "DMA_BIDIRECTIONAL" }, \
+ { DMA_TO_DEVICE, "DMA_TO_DEVICE" }, \
+ { DMA_FROM_DEVICE, "DMA_FROM_DEVICE" }, \
+ { DMA_NONE, "DMA_NONE" }
+
+DECLARE_EVENT_CLASS(tbnet_frame,
+ TP_PROTO(unsigned int index, const void *page, dma_addr_t phys,
+ enum dma_data_direction dir),
+ TP_ARGS(index, page, phys, dir),
+ TP_STRUCT__entry(
+ __field(unsigned int, index)
+ __field(const void *, page)
+ __field(dma_addr_t, phys)
+ __field(enum dma_data_direction, dir)
+ ),
+ TP_fast_assign(
+ __entry->index = index;
+ __entry->page = page;
+ __entry->phys = phys;
+ __entry->dir = dir;
+ ),
+ TP_printk("index=%u page=%p phys=%pad dir=%s",
+ __entry->index, __entry->page, &__entry->phys,
+ __print_symbolic(__entry->dir, DMA_DATA_DIRECTION_NAMES))
+);
+
+DEFINE_EVENT(tbnet_frame, tbnet_alloc_rx_frame,
+ TP_PROTO(unsigned int index, const void *page, dma_addr_t phys,
+ enum dma_data_direction dir),
+ TP_ARGS(index, page, phys, dir)
+);
+
+DEFINE_EVENT(tbnet_frame, tbnet_alloc_tx_frame,
+ TP_PROTO(unsigned int index, const void *page, dma_addr_t phys,
+ enum dma_data_direction dir),
+ TP_ARGS(index, page, phys, dir)
+);
+
+DEFINE_EVENT(tbnet_frame, tbnet_free_frame,
+ TP_PROTO(unsigned int index, const void *page, dma_addr_t phys,
+ enum dma_data_direction dir),
+ TP_ARGS(index, page, phys, dir)
+);
+
+DECLARE_EVENT_CLASS(tbnet_ip_frame,
+ TP_PROTO(__le32 size, __le16 id, __le16 index, __le32 count),
+ TP_ARGS(size, id, index, count),
+ TP_STRUCT__entry(
+ __field(u32, size)
+ __field(u16, id)
+ __field(u16, index)
+ __field(u32, count)
+ ),
+ TP_fast_assign(
+ __entry->size = le32_to_cpu(size);
+ __entry->id = le16_to_cpu(id);
+ __entry->index = le16_to_cpu(index);
+ __entry->count = le32_to_cpu(count);
+ ),
+ TP_printk("id=%u size=%u index=%u count=%u",
+ __entry->id, __entry->size, __entry->index, __entry->count)
+);
+
+DEFINE_EVENT(tbnet_ip_frame, tbnet_rx_ip_frame,
+ TP_PROTO(__le32 size, __le16 id, __le16 index, __le32 count),
+ TP_ARGS(size, id, index, count)
+);
+
+DEFINE_EVENT(tbnet_ip_frame, tbnet_invalid_rx_ip_frame,
+ TP_PROTO(__le32 size, __le16 id, __le16 index, __le32 count),
+ TP_ARGS(size, id, index, count)
+);
+
+DEFINE_EVENT(tbnet_ip_frame, tbnet_tx_ip_frame,
+ TP_PROTO(__le32 size, __le16 id, __le16 index, __le32 count),
+ TP_ARGS(size, id, index, count)
+);
+
+DECLARE_EVENT_CLASS(tbnet_skb,
+ TP_PROTO(const struct sk_buff *skb),
+ TP_ARGS(skb),
+ TP_STRUCT__entry(
+ __field(const void *, addr)
+ __field(unsigned int, len)
+ __field(unsigned int, data_len)
+ __field(unsigned int, nr_frags)
+ ),
+ TP_fast_assign(
+ __entry->addr = skb;
+ __entry->len = skb->len;
+ __entry->data_len = skb->data_len;
+ __entry->nr_frags = skb_shinfo(skb)->nr_frags;
+ ),
+ TP_printk("skb=%p len=%u data_len=%u nr_frags=%u",
+ __entry->addr, __entry->len, __entry->data_len,
+ __entry->nr_frags)
+);
+
+DEFINE_EVENT(tbnet_skb, tbnet_rx_skb,
+ TP_PROTO(const struct sk_buff *skb),
+ TP_ARGS(skb)
+);
+
+DEFINE_EVENT(tbnet_skb, tbnet_tx_skb,
+ TP_PROTO(const struct sk_buff *skb),
+ TP_ARGS(skb)
+);
+
+DEFINE_EVENT(tbnet_skb, tbnet_consume_skb,
+ TP_PROTO(const struct sk_buff *skb),
+ TP_ARGS(skb)
+);
+
+#endif /* _TRACE_THUNDERBOLT_NET_H */
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE trace
+
+#include <trace/define_trace.h>
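Each DEFINE_EVENT above generates a trace_<name>() call taking the TP_PROTO arguments, which compiles down to a static-branch no-op while the event is disabled. Exactly one translation unit (trace.c above) defines CREATE_TRACE_POINTS before including this header; every other user, such as main.c, includes it plainly. A usage sketch (example_rx_complete is hypothetical):

#include "trace.h"

static void example_rx_complete(struct sk_buff *skb)
{
	/* No-op unless thunderbolt_net/tbnet_rx_skb is enabled. */
	trace_tbnet_rx_skb(skb);
}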
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index a7d17c680f4a..ad653b32b2f0 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1401,6 +1401,11 @@ static void tun_net_initialize(struct net_device *dev)
eth_hw_addr_random(dev);
+ /* Currently tun does not support XDP, only tap does. */
+ dev->xdp_features = NETDEV_XDP_ACT_BASIC |
+ NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT;
+
break;
}
@@ -3448,7 +3453,7 @@ static int tun_chr_open(struct inode *inode, struct file * file)
tfile->socket.file = file;
tfile->socket.ops = &tun_socket_ops;
- sock_init_data(&tfile->socket, &tfile->sk);
+ sock_init_data_uid(&tfile->socket, &tfile->sk, inode->i_uid);
tfile->sk.sk_write_space = tun_sock_write_space;
tfile->sk.sk_sndbuf = INT_MAX;
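Both tap_open() above and tun_chr_open() here switch from sock_init_data() to sock_init_data_uid(), so the pseudo-socket backing the character device is attributed to the opening user rather than left unowned; later per-UID accounting and permission checks then see the real opener. The call shape, reduced to a sketch with a hypothetical helper:

#include <linux/fs.h>
#include <net/sock.h>

static void example_sock_setup(struct socket *sock, struct sock *sk,
			       const struct inode *inode)
{
	/* Same as sock_init_data(), plus an explicit owning kuid_t. */
	sock_init_data_uid(sock, sk, inode->i_uid);
}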
diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
index c140edb4b648..80849d115e5d 100644
--- a/drivers/net/usb/cdc_ether.c
+++ b/drivers/net/usb/cdc_ether.c
@@ -747,13 +747,6 @@ static const struct usb_device_id products[] = {
.driver_info = 0,
},
-/* Realtek RTL8152 Based USB 2.0 Ethernet Adapters */
-{
- USB_DEVICE_AND_INTERFACE_INFO(REALTEK_VENDOR_ID, 0x8152, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-
/* Realtek RTL8153 Based USB 3.0 Ethernet Adapters */
{
USB_DEVICE_AND_INTERFACE_INFO(REALTEK_VENDOR_ID, 0x8153, USB_CLASS_COMM,
@@ -761,71 +754,6 @@ static const struct usb_device_id products[] = {
.driver_info = 0,
},
-/* Samsung USB Ethernet Adapters */
-{
- USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, 0xa101, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-
-#if IS_ENABLED(CONFIG_USB_RTL8152)
-/* Linksys USB3GIGV1 Ethernet Adapter */
-{
- USB_DEVICE_AND_INTERFACE_INFO(LINKSYS_VENDOR_ID, 0x0041, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-#endif
-
-/* Lenovo ThinkPad OneLink+ Dock (based on Realtek RTL8153) */
-{
- USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x3054, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-
-/* ThinkPad USB-C Dock (based on Realtek RTL8153) */
-{
- USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x3062, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-
-/* ThinkPad Thunderbolt 3 Dock (based on Realtek RTL8153) */
-{
- USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x3069, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-
-/* ThinkPad Thunderbolt 3 Dock Gen 2 (based on Realtek RTL8153) */
-{
- USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x3082, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-
-/* Lenovo Thinkpad USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */
-{
- USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x7205, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-
-/* Lenovo USB C to Ethernet Adapter (based on Realtek RTL8153) */
-{
- USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x720c, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-
-/* Lenovo USB-C Travel Hub (based on Realtek RTL8153) */
-{
- USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x7214, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-
/* Lenovo Powered USB-C Travel Hub (4X90S92381, based on Realtek RTL8153) */
{
USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x721e, USB_CLASS_COMM,
@@ -833,48 +761,6 @@ static const struct usb_device_id products[] = {
.driver_info = 0,
},
-/* ThinkPad USB-C Dock Gen 2 (based on Realtek RTL8153) */
-{
- USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0xa387, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-
-/* NVIDIA Tegra USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */
-{
- USB_DEVICE_AND_INTERFACE_INFO(NVIDIA_VENDOR_ID, 0x09ff, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-
-/* Microsoft Surface 2 dock (based on Realtek RTL8152) */
-{
- USB_DEVICE_AND_INTERFACE_INFO(MICROSOFT_VENDOR_ID, 0x07ab, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-
-/* Microsoft Surface Ethernet Adapter (based on Realtek RTL8153) */
-{
- USB_DEVICE_AND_INTERFACE_INFO(MICROSOFT_VENDOR_ID, 0x07c6, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-
-/* Microsoft Surface Ethernet Adapter (based on Realtek RTL8153B) */
-{
- USB_DEVICE_AND_INTERFACE_INFO(MICROSOFT_VENDOR_ID, 0x0927, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-
-/* TP-LINK UE300 USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */
-{
- USB_DEVICE_AND_INTERFACE_INFO(TPLINK_VENDOR_ID, 0x0601, USB_CLASS_COMM,
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
- .driver_info = 0,
-},
-
/* Aquantia AQtion USB to 5GbE Controller (based on AQC111U) */
{
USB_DEVICE_AND_INTERFACE_INFO(AQUANTIA_VENDOR_ID, 0xc101,
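The entries removed above were blacklist entries: each matches a Realtek-based device's CDC Ethernet interface but carries .driver_info = 0, and usbnet_probe() declines any match without driver data, leaving the device free for the vendor r8152 driver. With r8152 now steering these devices into vendor mode itself (see the r8152 hunks below), the per-device duplicates in cdc_ether become unnecessary. The shape of one such entry (illustrative IDs):

/* Matches the CDC interface but binds nothing, so cdc_ether backs off. */
{
	USB_DEVICE_AND_INTERFACE_INFO(REALTEK_VENDOR_ID, 0x8153,
				      USB_CLASS_COMM,
				      USB_CDC_SUBCLASS_ETHERNET,
				      USB_CDC_PROTO_NONE),
	.driver_info = 0,
},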
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
index 23da1d9dafd1..decb5ba56a25 100644
--- a/drivers/net/usb/r8152.c
+++ b/drivers/net/usb/r8152.c
@@ -8232,43 +8232,6 @@ static bool rtl_check_vendor_ok(struct usb_interface *intf)
return true;
}
-static bool rtl_vendor_mode(struct usb_interface *intf)
-{
- struct usb_host_interface *alt = intf->cur_altsetting;
- struct usb_device *udev;
- struct usb_host_config *c;
- int i, num_configs;
-
- if (alt->desc.bInterfaceClass == USB_CLASS_VENDOR_SPEC)
- return rtl_check_vendor_ok(intf);
-
- /* The vendor mode is not always config #1, so to find it out. */
- udev = interface_to_usbdev(intf);
- c = udev->config;
- num_configs = udev->descriptor.bNumConfigurations;
- if (num_configs < 2)
- return false;
-
- for (i = 0; i < num_configs; (i++, c++)) {
- struct usb_interface_descriptor *desc = NULL;
-
- if (c->desc.bNumInterfaces > 0)
- desc = &c->intf_cache[0]->altsetting->desc;
- else
- continue;
-
- if (desc->bInterfaceClass == USB_CLASS_VENDOR_SPEC) {
- usb_driver_set_configuration(udev, c->desc.bConfigurationValue);
- break;
- }
- }
-
- if (i == num_configs)
- dev_err(&intf->dev, "Unexpected Device\n");
-
- return false;
-}
-
static int rtl8152_pre_reset(struct usb_interface *intf)
{
struct r8152 *tp = usb_get_intfdata(intf);
@@ -9500,9 +9463,8 @@ static int rtl_fw_init(struct r8152 *tp)
return 0;
}
-u8 rtl8152_get_version(struct usb_interface *intf)
+static u8 __rtl_get_hw_ver(struct usb_device *udev)
{
- struct usb_device *udev = interface_to_usbdev(intf);
u32 ocp_data = 0;
__le32 *tmp;
u8 version;
@@ -9571,10 +9533,19 @@ u8 rtl8152_get_version(struct usb_interface *intf)
break;
default:
version = RTL_VER_UNKNOWN;
- dev_info(&intf->dev, "Unknown version 0x%04x\n", ocp_data);
+ dev_info(&udev->dev, "Unknown version 0x%04x\n", ocp_data);
break;
}
+ return version;
+}
+
+u8 rtl8152_get_version(struct usb_interface *intf)
+{
+ u8 version;
+
+ version = __rtl_get_hw_ver(interface_to_usbdev(intf));
+
dev_dbg(&intf->dev, "Detected version 0x%04x\n", version);
return version;
@@ -9610,15 +9581,19 @@ static int rtl8152_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
struct usb_device *udev = interface_to_usbdev(intf);
- u8 version = rtl8152_get_version(intf);
struct r8152 *tp;
struct net_device *netdev;
+ u8 version;
int ret;
- if (version == RTL_VER_UNKNOWN)
+ if (intf->cur_altsetting->desc.bInterfaceClass != USB_CLASS_VENDOR_SPEC)
return -ENODEV;
- if (!rtl_vendor_mode(intf))
+ if (!rtl_check_vendor_ok(intf))
+ return -ENODEV;
+
+ version = rtl8152_get_version(intf);
+ if (version == RTL_VER_UNKNOWN)
return -ENODEV;
usb_reset_device(udev);
@@ -9814,43 +9789,35 @@ static void rtl8152_disconnect(struct usb_interface *intf)
}
}
-#define REALTEK_USB_DEVICE(vend, prod) { \
- USB_DEVICE_INTERFACE_CLASS(vend, prod, USB_CLASS_VENDOR_SPEC), \
-}, \
-{ \
- USB_DEVICE_AND_INTERFACE_INFO(vend, prod, USB_CLASS_COMM, \
- USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), \
-}
-
/* table of devices that work with this driver */
static const struct usb_device_id rtl8152_table[] = {
/* Realtek */
- REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8050),
- REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8053),
- REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8152),
- REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8153),
- REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8155),
- REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8156),
+ { USB_DEVICE(VENDOR_ID_REALTEK, 0x8050) },
+ { USB_DEVICE(VENDOR_ID_REALTEK, 0x8053) },
+ { USB_DEVICE(VENDOR_ID_REALTEK, 0x8152) },
+ { USB_DEVICE(VENDOR_ID_REALTEK, 0x8153) },
+ { USB_DEVICE(VENDOR_ID_REALTEK, 0x8155) },
+ { USB_DEVICE(VENDOR_ID_REALTEK, 0x8156) },
/* Microsoft */
- REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x07ab),
- REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x07c6),
- REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0927),
- REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0c5e),
- REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101),
- REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x304f),
- REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x3054),
- REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x3062),
- REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x3069),
- REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x3082),
- REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205),
- REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x720c),
- REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7214),
- REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x721e),
- REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0xa387),
- REALTEK_USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041),
- REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff),
- REALTEK_USB_DEVICE(VENDOR_ID_TPLINK, 0x0601),
+ { USB_DEVICE(VENDOR_ID_MICROSOFT, 0x07ab) },
+ { USB_DEVICE(VENDOR_ID_MICROSOFT, 0x07c6) },
+ { USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0927) },
+ { USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0c5e) },
+ { USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x304f) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x3054) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x3062) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x3069) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x3082) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x7205) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x720c) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x7214) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x721e) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0xa387) },
+ { USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041) },
+ { USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff) },
+ { USB_DEVICE(VENDOR_ID_TPLINK, 0x0601) },
{}
};
@@ -9870,7 +9837,67 @@ static struct usb_driver rtl8152_driver = {
.disable_hub_initiated_lpm = 1,
};
-module_usb_driver(rtl8152_driver);
+static int rtl8152_cfgselector_probe(struct usb_device *udev)
+{
+ struct usb_host_config *c;
+ int i, num_configs;
+
+ /* Switch the device to vendor mode, if and only if the vendor mode
+ * driver supports it.
+ */
+ if (__rtl_get_hw_ver(udev) == RTL_VER_UNKNOWN)
+ return 0;
+
+ /* The vendor mode is not always config #1, so we need to find it. */
+ c = udev->config;
+ num_configs = udev->descriptor.bNumConfigurations;
+ for (i = 0; i < num_configs; (i++, c++)) {
+ struct usb_interface_descriptor *desc = NULL;
+
+ if (!c->desc.bNumInterfaces)
+ continue;
+ desc = &c->intf_cache[0]->altsetting->desc;
+ if (desc->bInterfaceClass == USB_CLASS_VENDOR_SPEC)
+ break;
+ }
+
+ if (i == num_configs)
+ return -ENODEV;
+
+ if (usb_set_configuration(udev, c->desc.bConfigurationValue)) {
+ dev_err(&udev->dev, "Failed to set configuration %d\n",
+ c->desc.bConfigurationValue);
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static struct usb_device_driver rtl8152_cfgselector_driver = {
+ .name = MODULENAME "-cfgselector",
+ .probe = rtl8152_cfgselector_probe,
+ .id_table = rtl8152_table,
+ .generic_subclass = 1,
+};
+
+static int __init rtl8152_driver_init(void)
+{
+ int ret;
+
+ ret = usb_register_device_driver(&rtl8152_cfgselector_driver, THIS_MODULE);
+ if (ret)
+ return ret;
+
+ ret = usb_register(&rtl8152_driver);
+ if (ret)
+ usb_deregister_device_driver(&rtl8152_cfgselector_driver);
+
+ return ret;
+}
+
+static void __exit rtl8152_driver_exit(void)
+{
+ usb_deregister(&rtl8152_driver);
+ usb_deregister_device_driver(&rtl8152_cfgselector_driver);
+}
+
+module_init(rtl8152_driver_init);
+module_exit(rtl8152_driver_exit);
MODULE_AUTHOR(DRIVER_AUTHOR);
MODULE_DESCRIPTION(DRIVER_DESC);
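The config selector registered above is a USB *device* driver, not an interface driver: it is probed against the whole device before any interface driver can bind, which is what lets it switch the device into the vendor configuration first. generic_subclass = 1 means it subclasses the generic USB device driver, inheriting the default behaviour for everything it does not override. A stripped-down sketch with hypothetical names:

#include <linux/usb.h>

static int example_cfg_probe(struct usb_device *udev)
{
	/* Inspect udev->config[] and call usb_set_configuration()... */
	return 0;
}

static struct usb_device_driver example_cfgselector_driver = {
	.name		  = "example-cfgselector",
	.probe		  = example_cfg_probe,
	.generic_subclass = 1,
};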
diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
index 64a9a80b2309..283ffddda821 100644
--- a/drivers/net/usb/usbnet.c
+++ b/drivers/net/usb/usbnet.c
@@ -555,32 +555,30 @@ static int rx_submit (struct usbnet *dev, struct urb *urb, gfp_t flags)
/*-------------------------------------------------------------------------*/
-static inline void rx_process (struct usbnet *dev, struct sk_buff *skb)
+static inline int rx_process(struct usbnet *dev, struct sk_buff *skb)
{
if (dev->driver_info->rx_fixup &&
!dev->driver_info->rx_fixup (dev, skb)) {
/* With RX_ASSEMBLE, rx_fixup() must update counters */
if (!(dev->driver_info->flags & FLAG_RX_ASSEMBLE))
dev->net->stats.rx_errors++;
- goto done;
+ return -EPROTO;
}
// else network stack removes extra byte if we forced a short packet
/* all data was already cloned from skb inside the driver */
if (dev->driver_info->flags & FLAG_MULTI_PACKET)
- goto done;
+ return -EALREADY;
if (skb->len < ETH_HLEN) {
dev->net->stats.rx_errors++;
dev->net->stats.rx_length_errors++;
netif_dbg(dev, rx_err, dev->net, "rx length %d\n", skb->len);
- } else {
- usbnet_skb_return(dev, skb);
- return;
+ return -EPROTO;
}
-done:
- skb_queue_tail(&dev->done, skb);
+ usbnet_skb_return(dev, skb);
+ return 0;
}
/*-------------------------------------------------------------------------*/
@@ -1514,6 +1512,14 @@ err:
return ret;
}
+static inline void usb_free_skb(struct sk_buff *skb)
+{
+ struct skb_data *entry = (struct skb_data *)skb->cb;
+
+ usb_free_urb(entry->urb);
+ dev_kfree_skb(skb);
+}
+
/*-------------------------------------------------------------------------*/
// tasklet (work deferred from completions, in_irq) or timer
@@ -1528,15 +1534,14 @@ static void usbnet_bh (struct timer_list *t)
entry = (struct skb_data *) skb->cb;
switch (entry->state) {
case rx_done:
- entry->state = rx_cleanup;
- rx_process (dev, skb);
+ if (rx_process(dev, skb))
+ usb_free_skb(skb);
continue;
case tx_done:
kfree(entry->urb->sg);
fallthrough;
case rx_cleanup:
- usb_free_urb (entry->urb);
- dev_kfree_skb (skb);
+ usb_free_skb(skb);
continue;
default:
netdev_dbg(dev->net, "bogus skb state %d\n", entry->state);
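rx_process() above now reports its outcome instead of requeueing failed skbs to dev->done in rx_cleanup state for a second pass: 0 means the skb was handed to the stack via usbnet_skb_return(), -EPROTO flags a bad or runt frame (counters already updated), and -EALREADY covers FLAG_MULTI_PACKET drivers that cloned the data themselves. Any non-zero return makes usbnet_bh() free the skb and its URB immediately through the new usb_free_skb() helper. A sketch of the dispatch:

switch (rx_process(dev, skb)) {
case 0:			/* delivered to the network stack */
	break;
default:		/* -EPROTO, -EALREADY: nothing left to deliver */
	usb_free_skb(skb);	/* frees both the URB and the skb */
	break;
}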
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index dfc7d87fad59..1bb54de7124d 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -116,6 +116,11 @@ static struct {
{ "peer_ifindex" },
};
+struct veth_xdp_buff {
+ struct xdp_buff xdp;
+ struct sk_buff *skb;
+};
+
static int veth_get_link_ksettings(struct net_device *dev,
struct ethtool_link_ksettings *cmd)
{
@@ -592,23 +597,25 @@ static struct xdp_frame *veth_xdp_rcv_one(struct veth_rq *rq,
rcu_read_lock();
xdp_prog = rcu_dereference(rq->xdp_prog);
if (likely(xdp_prog)) {
- struct xdp_buff xdp;
+ struct veth_xdp_buff vxbuf;
+ struct xdp_buff *xdp = &vxbuf.xdp;
u32 act;
- xdp_convert_frame_to_buff(frame, &xdp);
- xdp.rxq = &rq->xdp_rxq;
+ xdp_convert_frame_to_buff(frame, xdp);
+ xdp->rxq = &rq->xdp_rxq;
+ vxbuf.skb = NULL;
- act = bpf_prog_run_xdp(xdp_prog, &xdp);
+ act = bpf_prog_run_xdp(xdp_prog, xdp);
switch (act) {
case XDP_PASS:
- if (xdp_update_frame_from_buff(&xdp, frame))
+ if (xdp_update_frame_from_buff(xdp, frame))
goto err_xdp;
break;
case XDP_TX:
orig_frame = *frame;
- xdp.rxq->mem = frame->mem;
- if (unlikely(veth_xdp_tx(rq, &xdp, bq) < 0)) {
+ xdp->rxq->mem = frame->mem;
+ if (unlikely(veth_xdp_tx(rq, xdp, bq) < 0)) {
trace_xdp_exception(rq->dev, xdp_prog, act);
frame = &orig_frame;
stats->rx_drops++;
@@ -619,8 +626,8 @@ static struct xdp_frame *veth_xdp_rcv_one(struct veth_rq *rq,
goto xdp_xmit;
case XDP_REDIRECT:
orig_frame = *frame;
- xdp.rxq->mem = frame->mem;
- if (xdp_do_redirect(rq->dev, &xdp, xdp_prog)) {
+ xdp->rxq->mem = frame->mem;
+ if (xdp_do_redirect(rq->dev, xdp, xdp_prog)) {
frame = &orig_frame;
stats->rx_drops++;
goto err_xdp;
@@ -801,7 +808,8 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
{
void *orig_data, *orig_data_end;
struct bpf_prog *xdp_prog;
- struct xdp_buff xdp;
+ struct veth_xdp_buff vxbuf;
+ struct xdp_buff *xdp = &vxbuf.xdp;
u32 act, metalen;
int off;
@@ -815,22 +823,23 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
}
__skb_push(skb, skb->data - skb_mac_header(skb));
- if (veth_convert_skb_to_xdp_buff(rq, &xdp, &skb))
+ if (veth_convert_skb_to_xdp_buff(rq, xdp, &skb))
goto drop;
+ vxbuf.skb = skb;
- orig_data = xdp.data;
- orig_data_end = xdp.data_end;
+ orig_data = xdp->data;
+ orig_data_end = xdp->data_end;
- act = bpf_prog_run_xdp(xdp_prog, &xdp);
+ act = bpf_prog_run_xdp(xdp_prog, xdp);
switch (act) {
case XDP_PASS:
break;
case XDP_TX:
- veth_xdp_get(&xdp);
+ veth_xdp_get(xdp);
consume_skb(skb);
- xdp.rxq->mem = rq->xdp_mem;
- if (unlikely(veth_xdp_tx(rq, &xdp, bq) < 0)) {
+ xdp->rxq->mem = rq->xdp_mem;
+ if (unlikely(veth_xdp_tx(rq, xdp, bq) < 0)) {
trace_xdp_exception(rq->dev, xdp_prog, act);
stats->rx_drops++;
goto err_xdp;
@@ -839,10 +848,10 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
rcu_read_unlock();
goto xdp_xmit;
case XDP_REDIRECT:
- veth_xdp_get(&xdp);
+ veth_xdp_get(xdp);
consume_skb(skb);
- xdp.rxq->mem = rq->xdp_mem;
- if (xdp_do_redirect(rq->dev, &xdp, xdp_prog)) {
+ xdp->rxq->mem = rq->xdp_mem;
+ if (xdp_do_redirect(rq->dev, xdp, xdp_prog)) {
stats->rx_drops++;
goto err_xdp;
}
@@ -862,7 +871,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
rcu_read_unlock();
/* check if bpf_xdp_adjust_head was used */
- off = orig_data - xdp.data;
+ off = orig_data - xdp->data;
if (off > 0)
__skb_push(skb, off);
else if (off < 0)
@@ -871,21 +880,21 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
skb_reset_mac_header(skb);
/* check if bpf_xdp_adjust_tail was used */
- off = xdp.data_end - orig_data_end;
+ off = xdp->data_end - orig_data_end;
if (off != 0)
__skb_put(skb, off); /* positive on grow, negative on shrink */
/* XDP frag metadata (e.g. nr_frags) are updated in eBPF helpers
* (e.g. bpf_xdp_adjust_tail), we need to update data_len here.
*/
- if (xdp_buff_has_frags(&xdp))
+ if (xdp_buff_has_frags(xdp))
skb->data_len = skb_shinfo(skb)->xdp_frags_size;
else
skb->data_len = 0;
skb->protocol = eth_type_trans(skb, rq->dev);
- metalen = xdp.data - xdp.data_meta;
+ metalen = xdp->data - xdp->data_meta;
if (metalen)
skb_metadata_set(skb, metalen);
out:
@@ -898,7 +907,7 @@ xdp_drop:
return NULL;
err_xdp:
rcu_read_unlock();
- xdp_return_buff(&xdp);
+ xdp_return_buff(xdp);
xdp_xmit:
return NULL;
}
@@ -1596,6 +1605,28 @@ static int veth_xdp(struct net_device *dev, struct netdev_bpf *xdp)
}
}
+static int veth_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp)
+{
+ struct veth_xdp_buff *_ctx = (void *)ctx;
+
+ if (!_ctx->skb)
+ return -EOPNOTSUPP;
+
+ *timestamp = skb_hwtstamps(_ctx->skb)->hwtstamp;
+ return 0;
+}
+
+static int veth_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash)
+{
+ struct veth_xdp_buff *_ctx = (void *)ctx;
+
+ if (!_ctx->skb)
+ return -EOPNOTSUPP;
+
+ *hash = skb_get_hash(_ctx->skb);
+ return 0;
+}
+
static const struct net_device_ops veth_netdev_ops = {
.ndo_init = veth_dev_init,
.ndo_open = veth_open,
@@ -1617,6 +1648,11 @@ static const struct net_device_ops veth_netdev_ops = {
.ndo_get_peer_dev = veth_peer_dev,
};
+static const struct xdp_metadata_ops veth_xdp_metadata_ops = {
+ .xmo_rx_timestamp = veth_xdp_rx_timestamp,
+ .xmo_rx_hash = veth_xdp_rx_hash,
+};
+
#define VETH_FEATURES (NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_HW_CSUM | \
NETIF_F_RXCSUM | NETIF_F_SCTP_CRC | NETIF_F_HIGHDMA | \
NETIF_F_GSO_SOFTWARE | NETIF_F_GSO_ENCAP_ALL | \
@@ -1633,6 +1669,7 @@ static void veth_setup(struct net_device *dev)
dev->priv_flags |= IFF_PHONY_HEADROOM;
dev->netdev_ops = &veth_netdev_ops;
+ dev->xdp_metadata_ops = &veth_xdp_metadata_ops;
dev->ethtool_ops = &veth_ethtool_ops;
dev->features |= NETIF_F_LLTX;
dev->features |= VETH_FEATURES;
@@ -1649,6 +1686,10 @@ static void veth_setup(struct net_device *dev)
dev->hw_enc_features = VETH_FEATURES;
dev->mpls_features = NETIF_F_HW_CSUM | NETIF_F_GSO_SOFTWARE;
netif_set_tso_max_size(dev, GSO_MAX_SIZE);
+
+ dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG |
+ NETDEV_XDP_ACT_NDO_XMIT_SG;
}
/*
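The veth_xdp_buff container above is the key to the new XDP-hints kfuncs: the xdp_md context a metadata op receives is really the embedded xdp_buff, so the driver casts back to the container to reach per-packet state (here the originating skb, which is NULL on the xdp_frame path, hence the -EOPNOTSUPP). The pattern in isolation, with hypothetical names:

#include <linux/skbuff.h>
#include <net/xdp.h>

struct example_xdp_buff {
	struct xdp_buff xdp;	/* must be the first member */
	struct sk_buff *skb;	/* driver-private per-packet context */
};

static int example_rx_hash(const struct xdp_md *ctx, u32 *hash)
{
	struct example_xdp_buff *b = (void *)ctx;

	if (!b->skb)		/* no skb on the xdp_frame path */
		return -EOPNOTSUPP;

	*hash = skb_get_hash(b->skb);
	return 0;
}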
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 61e33e4dd0cd..fb5e68ed3ec2 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -134,7 +134,7 @@ struct send_queue {
struct scatterlist sg[MAX_SKB_FRAGS + 2];
/* Name of the send queue: output.$index */
- char name[40];
+ char name[16];
struct virtnet_sq_stats stats;
@@ -171,7 +171,7 @@ struct receive_queue {
unsigned int min_buf_len;
/* Name of this receive queue: input.$index */
- char name[40];
+ char name[16];
struct xdp_rxq_info xdp_rxq;
};
@@ -446,9 +446,7 @@ static unsigned int mergeable_ctx_to_truesize(void *mrg_ctx)
static struct sk_buff *page_to_skb(struct virtnet_info *vi,
struct receive_queue *rq,
struct page *page, unsigned int offset,
- unsigned int len, unsigned int truesize,
- bool hdr_valid, unsigned int metasize,
- unsigned int headroom)
+ unsigned int len, unsigned int truesize)
{
struct sk_buff *skb;
struct virtio_net_hdr_mrg_rxbuf *hdr;
@@ -466,21 +464,11 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
else
hdr_padded_len = sizeof(struct padded_vnet_hdr);
- /* If headroom is not 0, there is an offset between the beginning of the
- * data and the allocated space, otherwise the data and the allocated
- * space are aligned.
- *
- * Buffers with headroom use PAGE_SIZE as alloc size, see
- * add_recvbuf_mergeable() + get_mergeable_buf_len()
- */
- truesize = headroom ? PAGE_SIZE : truesize;
- tailroom = truesize - headroom;
- buf = p - headroom;
-
+ buf = p;
len -= hdr_len;
offset += hdr_padded_len;
p += hdr_padded_len;
- tailroom -= hdr_padded_len + len;
+ tailroom = truesize - hdr_padded_len - len;
shinfo_size = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
@@ -510,7 +498,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
if (len <= skb_tailroom(skb))
copy = len;
else
- copy = ETH_HLEN + metasize;
+ copy = ETH_HLEN;
skb_put_data(skb, p, copy);
len -= copy;
@@ -549,19 +537,11 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
give_pages(rq, page);
ok:
- /* hdr_valid means no XDP, so we can copy the vnet header */
- if (hdr_valid) {
- hdr = skb_vnet_hdr(skb);
- memcpy(hdr, hdr_p, hdr_len);
- }
+ hdr = skb_vnet_hdr(skb);
+ memcpy(hdr, hdr_p, hdr_len);
if (page_to_free)
put_page(page_to_free);
- if (metasize) {
- __skb_pull(skb, metasize);
- skb_metadata_set(skb, metasize);
- }
-
return skb;
}
@@ -570,22 +550,43 @@ static int __virtnet_xdp_xmit_one(struct virtnet_info *vi,
struct xdp_frame *xdpf)
{
struct virtio_net_hdr_mrg_rxbuf *hdr;
- int err;
+ struct skb_shared_info *shinfo;
+ u8 nr_frags = 0;
+ int err, i;
if (unlikely(xdpf->headroom < vi->hdr_len))
return -EOVERFLOW;
- /* Make room for virtqueue hdr (also change xdpf->headroom?) */
+ if (unlikely(xdp_frame_has_frags(xdpf))) {
+ shinfo = xdp_get_shared_info_from_frame(xdpf);
+ nr_frags = shinfo->nr_frags;
+ }
+
+ /* In the wrapping function virtnet_xdp_xmit(), we need to free
+ * up the pending old buffers. Computing the position of
+ * skb_shared_info in xdp_get_frame_len() and xdp_return_frame()
+ * involves both xdpf->data and xdpf->headroom, so the headroom
+ * must be updated here, in step with the data pointer below.
+ */
+ xdpf->headroom -= vi->hdr_len;
xdpf->data -= vi->hdr_len;
/* Zero header and leave csum up to XDP layers */
hdr = xdpf->data;
memset(hdr, 0, vi->hdr_len);
xdpf->len += vi->hdr_len;
- sg_init_one(sq->sg, xdpf->data, xdpf->len);
+ sg_init_table(sq->sg, nr_frags + 1);
+ sg_set_buf(sq->sg, xdpf->data, xdpf->len);
+ for (i = 0; i < nr_frags; i++) {
+ skb_frag_t *frag = &shinfo->frags[i];
- err = virtqueue_add_outbuf(sq->vq, sq->sg, 1, xdp_to_ptr(xdpf),
- GFP_ATOMIC);
+ sg_set_page(&sq->sg[i + 1], skb_frag_page(frag),
+ skb_frag_size(frag), skb_frag_off(frag));
+ }
+
+ err = virtqueue_add_outbuf(sq->vq, sq->sg, nr_frags + 1,
+ xdp_to_ptr(xdpf), GFP_ATOMIC);
if (unlikely(err))
return -ENOSPC; /* Caller handle free/refcnt */
@@ -665,7 +666,7 @@ static int virtnet_xdp_xmit(struct net_device *dev,
if (likely(is_xdp_frame(ptr))) {
struct xdp_frame *frame = ptr_to_xdp(ptr);
- bytes += frame->len;
+ bytes += xdp_get_frame_len(frame);
xdp_return_frame(frame);
} else {
struct sk_buff *skb = ptr;
@@ -722,7 +723,7 @@ static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
* have enough headroom.
*/
static struct page *xdp_linearize_page(struct receive_queue *rq,
- u16 *num_buf,
+ int *num_buf,
struct page *p,
int offset,
int page_off,
@@ -822,7 +823,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
if (unlikely(xdp_headroom < virtnet_get_headroom(vi))) {
int offset = buf - page_address(page) + header_offset;
unsigned int tlen = len + vi->hdr_len;
- u16 num_buf = 1;
+ int num_buf = 1;
xdp_headroom = virtnet_get_headroom(vi);
header_offset = VIRTNET_RX_PAD + xdp_headroom;
@@ -924,7 +925,7 @@ static struct sk_buff *receive_big(struct net_device *dev,
{
struct page *page = buf;
struct sk_buff *skb =
- page_to_skb(vi, rq, page, 0, len, PAGE_SIZE, true, 0, 0);
+ page_to_skb(vi, rq, page, 0, len, PAGE_SIZE);
stats->bytes += len - vi->hdr_len;
if (unlikely(!skb))
@@ -938,6 +939,143 @@ err:
return NULL;
}
+/* Why not use xdp_build_skb_from_frame()?
+ * The XDP core assumes that xdp frags are PAGE_SIZE in length, while in
+ * virtio-net there are two points that do not match its requirements:
+ * 1. The size of the prefilled buffer is not fixed before xdp is set.
+ * 2. xdp_build_skb_from_frame() does more checks than we need,
+ * like eth_type_trans() (which virtio-net does in receive_buf()).
+ */
+static struct sk_buff *build_skb_from_xdp_buff(struct net_device *dev,
+ struct virtnet_info *vi,
+ struct xdp_buff *xdp,
+ unsigned int xdp_frags_truesz)
+{
+ struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+ unsigned int headroom, data_len;
+ struct sk_buff *skb;
+ int metasize;
+ u8 nr_frags;
+
+ if (unlikely(xdp->data_end > xdp_data_hard_end(xdp))) {
+ pr_debug("Error building skb as missing reserved tailroom for xdp");
+ return NULL;
+ }
+
+ if (unlikely(xdp_buff_has_frags(xdp)))
+ nr_frags = sinfo->nr_frags;
+
+ skb = build_skb(xdp->data_hard_start, xdp->frame_sz);
+ if (unlikely(!skb))
+ return NULL;
+
+ headroom = xdp->data - xdp->data_hard_start;
+ data_len = xdp->data_end - xdp->data;
+ skb_reserve(skb, headroom);
+ __skb_put(skb, data_len);
+
+ metasize = xdp->data - xdp->data_meta;
+ metasize = metasize > 0 ? metasize : 0;
+ if (metasize)
+ skb_metadata_set(skb, metasize);
+
+ if (unlikely(xdp_buff_has_frags(xdp)))
+ xdp_update_skb_shared_info(skb, nr_frags,
+ sinfo->xdp_frags_size,
+ xdp_frags_truesz,
+ xdp_buff_is_frag_pfmemalloc(xdp));
+
+ return skb;
+}
+
+/* TODO: build xdp in big mode */
+static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
+ struct virtnet_info *vi,
+ struct receive_queue *rq,
+ struct xdp_buff *xdp,
+ void *buf,
+ unsigned int len,
+ unsigned int frame_sz,
+ int *num_buf,
+ unsigned int *xdp_frags_truesize,
+ struct virtnet_rq_stats *stats)
+{
+ struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
+ unsigned int headroom, tailroom, room;
+ unsigned int truesize, cur_frag_size;
+ struct skb_shared_info *shinfo;
+ unsigned int xdp_frags_truesz = 0;
+ struct page *page;
+ skb_frag_t *frag;
+ int offset;
+ void *ctx;
+
+ xdp_init_buff(xdp, frame_sz, &rq->xdp_rxq);
+ xdp_prepare_buff(xdp, buf - VIRTIO_XDP_HEADROOM,
+ VIRTIO_XDP_HEADROOM + vi->hdr_len, len - vi->hdr_len, true);
+
+ if (!*num_buf)
+ return 0;
+
+ if (*num_buf > 1) {
+ /* To build a multi-buffer xdp_buff, its flags must
+ * carry the XDP_FLAGS_HAS_FRAG bit, set via
+ * xdp_buff_set_frags_flag().
+ */
+ if (!xdp_buff_has_frags(xdp))
+ xdp_buff_set_frags_flag(xdp);
+
+ shinfo = xdp_get_shared_info_from_buff(xdp);
+ shinfo->nr_frags = 0;
+ shinfo->xdp_frags_size = 0;
+ }
+
+ if (*num_buf > MAX_SKB_FRAGS + 1)
+ return -EINVAL;
+
+ while (--*num_buf > 0) {
+ buf = virtqueue_get_buf_ctx(rq->vq, &len, &ctx);
+ if (unlikely(!buf)) {
+ pr_debug("%s: rx error: %d buffers out of %d missing\n",
+ dev->name, *num_buf,
+ virtio16_to_cpu(vi->vdev, hdr->num_buffers));
+ dev->stats.rx_length_errors++;
+ return -EINVAL;
+ }
+
+ stats->bytes += len;
+ page = virt_to_head_page(buf);
+ offset = buf - page_address(page);
+
+ truesize = mergeable_ctx_to_truesize(ctx);
+ headroom = mergeable_ctx_to_headroom(ctx);
+ tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
+ room = SKB_DATA_ALIGN(headroom + tailroom);
+
+ cur_frag_size = truesize;
+ xdp_frags_truesz += cur_frag_size;
+ if (unlikely(len > truesize - room || cur_frag_size > PAGE_SIZE)) {
+ put_page(page);
+ pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
+ dev->name, len, (unsigned long)(truesize - room));
+ dev->stats.rx_length_errors++;
+ return -EINVAL;
+ }
+
+ frag = &shinfo->frags[shinfo->nr_frags++];
+ __skb_frag_set_page(frag, page);
+ skb_frag_off_set(frag, offset);
+ skb_frag_size_set(frag, len);
+ if (page_is_pfmemalloc(page))
+ xdp_buff_set_frag_pfmemalloc(xdp);
+
+ shinfo->xdp_frags_size += len;
+ }
+
+ *xdp_frags_truesize = xdp_frags_truesz;
+ return 0;
+}
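virtnet_build_xdp_buff_mrg() above and receive_mergeable() below both subtract a "room" term before comparing a received length against truesize: the mergeable context now stores the full allocation size, so the XDP headroom and the skb_shared_info tailroom (present only when headroom is reserved) must be carved back out. A sketch of that accounting, factored into a hypothetical helper built on the file's mergeable_ctx_to_headroom():

static unsigned int example_mergeable_room(void *ctx)
{
	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
	unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0;

	/* Usable payload is truesize minus this, rounded as one block. */
	return SKB_DATA_ALIGN(headroom + tailroom);
}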
+
static struct sk_buff *receive_mergeable(struct net_device *dev,
struct virtnet_info *vi,
struct receive_queue *rq,
@@ -948,23 +1086,24 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
struct virtnet_rq_stats *stats)
{
struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
- u16 num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
+ int num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
struct page *page = virt_to_head_page(buf);
int offset = buf - page_address(page);
struct sk_buff *head_skb, *curr_skb;
struct bpf_prog *xdp_prog;
unsigned int truesize = mergeable_ctx_to_truesize(ctx);
unsigned int headroom = mergeable_ctx_to_headroom(ctx);
- unsigned int metasize = 0;
- unsigned int frame_sz;
+ unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
+ unsigned int room = SKB_DATA_ALIGN(headroom + tailroom);
+ unsigned int frame_sz, xdp_room;
int err;
head_skb = NULL;
stats->bytes += len - vi->hdr_len;
- if (unlikely(len > truesize)) {
+ if (unlikely(len > truesize - room)) {
pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
- dev->name, len, (unsigned long)ctx);
+ dev->name, len, (unsigned long)(truesize - room));
dev->stats.rx_length_errors++;
goto err_skb;
}
@@ -977,11 +1116,14 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
rcu_read_lock();
xdp_prog = rcu_dereference(rq->xdp_prog);
if (xdp_prog) {
+ unsigned int xdp_frags_truesz = 0;
+ struct skb_shared_info *shinfo;
struct xdp_frame *xdpf;
struct page *xdp_page;
struct xdp_buff xdp;
void *data;
u32 act;
+ int i;
/* Transient failure which in theory could occur if
* in-flight packets from before XDP was enabled reach
@@ -990,19 +1132,23 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
if (unlikely(hdr->hdr.gso_type))
goto err_xdp;
- /* Buffers with headroom use PAGE_SIZE as alloc size,
- * see add_recvbuf_mergeable() + get_mergeable_buf_len()
+ /* The XDP core assumes the frag size is PAGE_SIZE, but buffers
+ * with headroom may add a hole to the truesize, which
+ * makes their length exceed PAGE_SIZE. So we disable the
+ * hole mechanism for XDP. See add_recvbuf_mergeable().
*/
- frame_sz = headroom ? PAGE_SIZE : truesize;
-
- /* This happens when rx buffer size is underestimated
- * or headroom is not enough because of the buffer
- * was refilled before XDP is set. This should only
- * happen for the first several packets, so we don't
- * care much about its performance.
+ frame_sz = truesize;
+
+ /* This happens when headroom is not enough because
+ * of the buffer was prefilled before XDP is set.
+ * This should only happen for the first several packets.
+ * In fact, vq reset can be used here to help us clean up
+ * the prefilled buffers, but many existing devices do not
+ * support it, and we don't want to bother users who are
+ * using xdp normally.
*/
- if (unlikely(num_buf > 1 ||
- headroom < virtnet_get_headroom(vi))) {
+ if (!xdp_prog->aux->xdp_has_frags &&
+ (num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
/* linearize data for XDP */
xdp_page = xdp_linearize_page(rq, &num_buf,
page, offset,
@@ -1013,82 +1159,53 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
if (!xdp_page)
goto err_xdp;
offset = VIRTIO_XDP_HEADROOM;
+ } else if (unlikely(headroom < virtnet_get_headroom(vi))) {
+ xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
+ sizeof(struct skb_shared_info));
+ if (len + xdp_room > PAGE_SIZE)
+ goto err_xdp;
+
+ xdp_page = alloc_page(GFP_ATOMIC);
+ if (!xdp_page)
+ goto err_xdp;
+
+ memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
+ page_address(page) + offset, len);
+ frame_sz = PAGE_SIZE;
+ offset = VIRTIO_XDP_HEADROOM;
} else {
xdp_page = page;
}
- /* Allow consuming headroom but reserve enough space to push
- * the descriptor on if we get an XDP_TX return code.
- */
data = page_address(xdp_page) + offset;
- xdp_init_buff(&xdp, frame_sz - vi->hdr_len, &rq->xdp_rxq);
- xdp_prepare_buff(&xdp, data - VIRTIO_XDP_HEADROOM + vi->hdr_len,
- VIRTIO_XDP_HEADROOM, len - vi->hdr_len, true);
+ err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
+ &num_buf, &xdp_frags_truesz, stats);
+ if (unlikely(err))
+ goto err_xdp_frags;
act = bpf_prog_run_xdp(xdp_prog, &xdp);
stats->xdp_packets++;
switch (act) {
case XDP_PASS:
- metasize = xdp.data - xdp.data_meta;
-
- /* recalculate offset to account for any header
- * adjustments and minus the metasize to copy the
- * metadata in page_to_skb(). Note other cases do not
- * build an skb and avoid using offset
- */
- offset = xdp.data - page_address(xdp_page) -
- vi->hdr_len - metasize;
-
- /* recalculate len if xdp.data, xdp.data_end or
- * xdp.data_meta were adjusted
- */
- len = xdp.data_end - xdp.data + vi->hdr_len + metasize;
-
- /* recalculate headroom if xdp.data or xdp_data_meta
- * were adjusted, note that offset should always point
- * to the start of the reserved bytes for virtio_net
- * header which are followed by xdp.data, that means
- * that offset is equal to the headroom (when buf is
- * starting at the beginning of the page, otherwise
- * there is a base offset inside the page) but it's used
- * with a different starting point (buf start) than
- * xdp.data (buf start + vnet hdr size). If xdp.data or
- * data_meta were adjusted by the xdp prog then the
- * headroom size has changed and so has the offset, we
- * can use data_hard_start, which points at buf start +
- * vnet hdr size, to calculate the new headroom and use
- * it later to compute buf start in page_to_skb()
- */
- headroom = xdp.data - xdp.data_hard_start - metasize;
-
- /* We can only create skb based on xdp_page. */
- if (unlikely(xdp_page != page)) {
- rcu_read_unlock();
+ if (unlikely(xdp_page != page))
put_page(page);
- head_skb = page_to_skb(vi, rq, xdp_page, offset,
- len, PAGE_SIZE, false,
- metasize,
- headroom);
- return head_skb;
- }
- break;
+ head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
+ rcu_read_unlock();
+ return head_skb;
case XDP_TX:
stats->xdp_tx++;
xdpf = xdp_convert_buff_to_frame(&xdp);
if (unlikely(!xdpf)) {
- if (unlikely(xdp_page != page))
- put_page(xdp_page);
- goto err_xdp;
+ netdev_dbg(dev, "convert buff to frame failed for xdp\n");
+ goto err_xdp_frags;
}
err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
if (unlikely(!err)) {
xdp_return_frame_rx_napi(xdpf);
} else if (unlikely(err < 0)) {
trace_xdp_exception(vi->dev, xdp_prog, act);
- if (unlikely(xdp_page != page))
- put_page(xdp_page);
- goto err_xdp;
+ goto err_xdp_frags;
}
*xdp_xmit |= VIRTIO_XDP_TX;
if (unlikely(xdp_page != page))
@@ -1098,11 +1215,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
case XDP_REDIRECT:
stats->xdp_redirects++;
err = xdp_do_redirect(dev, &xdp, xdp_prog);
- if (err) {
- if (unlikely(xdp_page != page))
- put_page(xdp_page);
- goto err_xdp;
- }
+ if (err)
+ goto err_xdp_frags;
*xdp_xmit |= VIRTIO_XDP_REDIR;
if (unlikely(xdp_page != page))
put_page(page);
@@ -1115,16 +1229,26 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
trace_xdp_exception(vi->dev, xdp_prog, act);
fallthrough;
case XDP_DROP:
- if (unlikely(xdp_page != page))
- __free_pages(xdp_page, 0);
- goto err_xdp;
+ goto err_xdp_frags;
}
+err_xdp_frags:
+ if (unlikely(xdp_page != page))
+ __free_pages(xdp_page, 0);
+
+ if (xdp_buff_has_frags(&xdp)) {
+ shinfo = xdp_get_shared_info_from_buff(&xdp);
+ for (i = 0; i < shinfo->nr_frags; i++) {
+ xdp_page = skb_frag_page(&shinfo->frags[i]);
+ put_page(xdp_page);
+ }
+ }
+
+ goto err_xdp;
}
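The frag cleanup above depends on where a multi-buffer xdp_buff keeps its fragments: in a skb_shared_info at the tail of the head frame. A sketch of the accessor, assuming the usual xdp_buff layout:

	static inline struct skb_shared_info *
	xdp_get_shared_info_from_buff(struct xdp_buff *xdp)
	{
		/* The shared info sits at data_hard_start + frame_sz -
		 * sizeof(struct skb_shared_info), i.e. at the frame tail,
		 * which is why each frag page must be released one by one
		 * on the error path above.
		 */
		return (struct skb_shared_info *)xdp_data_hard_end(xdp);
	}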
rcu_read_unlock();
skip_xdp:
- head_skb = page_to_skb(vi, rq, page, offset, len, truesize, !xdp_prog,
- metasize, headroom);
+ head_skb = page_to_skb(vi, rq, page, offset, len, truesize);
curr_skb = head_skb;
if (unlikely(!curr_skb))
@@ -1146,9 +1270,12 @@ skip_xdp:
page = virt_to_head_page(buf);
truesize = mergeable_ctx_to_truesize(ctx);
- if (unlikely(len > truesize)) {
+ headroom = mergeable_ctx_to_headroom(ctx);
+ tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
+ room = SKB_DATA_ALIGN(headroom + tailroom);
+ if (unlikely(len > truesize - room)) {
pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
- dev->name, len, (unsigned long)ctx);
+ dev->name, len, (unsigned long)(truesize - room));
dev->stats.rx_length_errors++;
goto err_skb;
}
@@ -1251,13 +1378,7 @@ static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
if (unlikely(len < vi->hdr_len + ETH_HLEN)) {
pr_debug("%s: short packet %i\n", dev->name, len);
dev->stats.rx_length_errors++;
- if (vi->mergeable_rx_bufs) {
- put_page(virt_to_head_page(buf));
- } else if (vi->big_packets) {
- give_pages(rq, buf);
- } else {
- put_page(virt_to_head_page(buf));
- }
+ virtnet_rq_free_unused_buf(rq->vq, buf);
return;
}
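virtnet_rq_free_unused_buf() folds the three freeing cases that were open-coded here into one helper; roughly like the sketch below (a sketch of the helper this patch switches to, details may differ):

	static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
	{
		struct virtnet_info *vi = vq->vdev->priv;
		int i = vq2rxq(vq);

		if (vi->mergeable_rx_bufs)
			put_page(virt_to_head_page(buf));	/* page-fragment buffer */
		else if (vi->big_packets)
			give_pages(&vi->rq[i], buf);		/* page chain back to pool */
		else
			put_page(virt_to_head_page(buf));	/* small buffer */
	}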
@@ -1426,13 +1547,16 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
/* To avoid internal fragmentation, if it is very likely that there
* is not enough space for another buffer, add the remaining space
* to the current buffer.
+ * XDP core assumes that frame_size of xdp_buff and the length
+ * of the frag are PAGE_SIZE, so we disable the hole mechanism.
*/
- len += hole;
+ if (!headroom)
+ len += hole;
alloc_frag->offset += hole;
}
sg_init_one(rq->sg, buf, len);
- ctx = mergeable_len_to_ctx(len, headroom);
+ ctx = mergeable_len_to_ctx(len + room, headroom);
err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp);
if (err < 0)
put_page(virt_to_head_page(buf));
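mergeable_len_to_ctx() packs truesize and headroom into one pointer-sized word, which the receive path earlier in this patch unpacks via mergeable_ctx_to_truesize()/mergeable_ctx_to_headroom(). A sketch of the packing helpers, assuming the existing MRG_CTX_HEADER_SHIFT split:

	/* ctx layout: | headroom (upper bits) | truesize (low 22 bits) | */
	static void *mergeable_len_to_ctx(unsigned int truesize,
					  unsigned int headroom)
	{
		return (void *)(unsigned long)((headroom << MRG_CTX_HEADER_SHIFT) |
					       truesize);
	}

	static unsigned int mergeable_ctx_to_truesize(void *mrg_ctx)
	{
		return (unsigned long)mrg_ctx & ((1 << MRG_CTX_HEADER_SHIFT) - 1);
	}

	static unsigned int mergeable_ctx_to_headroom(void *mrg_ctx)
	{
		return (unsigned long)mrg_ctx >> MRG_CTX_HEADER_SHIFT;
	}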
@@ -1608,7 +1732,7 @@ static void free_old_xmit_skbs(struct send_queue *sq, bool in_napi)
} else {
struct xdp_frame *frame = ptr_to_xdp(ptr);
- bytes += frame->len;
+ bytes += xdp_get_frame_len(frame);
xdp_return_frame(frame);
}
packets++;
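Unlike frame->len, xdp_get_frame_len() also counts the bytes held in fragments of a multi-buffer frame, so TX byte accounting stays correct once multi-buffer XDP is in use. A sketch of the helper, assuming the standard xdp_frame layout:

	static __always_inline unsigned int xdp_get_frame_len(struct xdp_frame *xdpf)
	{
		struct skb_shared_info *sinfo;
		unsigned int len = xdpf->len;

		if (likely(!xdp_frame_has_frags(xdpf)))
			goto out;

		sinfo = xdp_get_shared_info_from_frame(xdpf);
		len += sinfo->xdp_frags_size;	/* bytes in the frag pages */
	out:
		return len;
	}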
@@ -2158,9 +2282,9 @@ static int virtnet_close(struct net_device *dev)
cancel_delayed_work_sync(&vi->refill);
for (i = 0; i < vi->max_queue_pairs; i++) {
+ virtnet_napi_tx_disable(&vi->sq[i].napi);
napi_disable(&vi->rq[i].napi);
xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);
- virtnet_napi_tx_disable(&vi->sq[i].napi);
}
return 0;
@@ -3080,7 +3204,9 @@ static int virtnet_restore_guest_offloads(struct virtnet_info *vi)
static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
struct netlink_ext_ack *extack)
{
- unsigned long int max_sz = PAGE_SIZE - sizeof(struct padded_vnet_hdr);
+ unsigned int room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
+ sizeof(struct skb_shared_info));
+ unsigned int max_sz = PAGE_SIZE - room - ETH_HLEN;
struct virtnet_info *vi = netdev_priv(dev);
struct bpf_prog *old_prog;
u16 xdp_qp = 0, curr_qp;
@@ -3103,9 +3229,9 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
return -EINVAL;
}
- if (dev->mtu > max_sz) {
- NL_SET_ERR_MSG_MOD(extack, "MTU too large to enable XDP");
- netdev_warn(dev, "XDP requires MTU less than %lu\n", max_sz);
+ if (prog && !prog->aux->xdp_has_frags && dev->mtu > max_sz) {
+ NL_SET_ERR_MSG_MOD(extack, "MTU too large to enable XDP without frags");
+ netdev_warn(dev, "single-buffer XDP requires MTU less than %u\n", max_sz);
return -EINVAL;
}
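As a worked example of the new limit (illustrative figures for a 4K-page x86_64 build; the exact values depend on architecture and config):

	/* VIRTIO_XDP_HEADROOM (256) + sizeof(struct skb_shared_info)
	 * (about 320 on x86_64) stays 576 after SKB_DATA_ALIGN() with
	 * 64-byte cachelines, so single-buffer XDP programs require an
	 * MTU of at most roughly 4096 - 576 - 14 (ETH_HLEN) = 3506.
	 */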
@@ -3157,7 +3283,10 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
if (i == 0 && !old_prog)
virtnet_clear_guest_offloads(vi);
}
+ if (!old_prog)
+ xdp_features_set_redirect_target(dev, true);
} else {
+ xdp_features_clear_redirect_target(dev);
vi->xdp_enabled = false;
}
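xdp_features_set_redirect_target() and its clear counterpart toggle the redirect-target capability the kernel advertises to user space; a minimal usage sketch, assuming the xdp_features API introduced in this cycle:

	/* Advertise that this netdev can be the target of XDP_REDIRECT;
	 * the second argument states whether the ndo_xdp_xmit path
	 * accepts multi-buffer (S/G) frames.
	 */
	xdp_features_set_redirect_target(dev, true);

	/* Withdraw the advertisement when the last program is removed. */
	xdp_features_clear_redirect_target(dev);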
@@ -3690,6 +3819,12 @@ static int virtnet_validate(struct virtio_device *vdev)
__virtio_clear_bit(vdev, VIRTIO_NET_F_MTU);
}
+ if (virtio_has_feature(vdev, VIRTIO_NET_F_STANDBY) &&
+ !virtio_has_feature(vdev, VIRTIO_NET_F_MAC)) {
+ dev_warn(&vdev->dev, "device advertises feature VIRTIO_NET_F_STANDBY but not VIRTIO_NET_F_MAC, disabling standby");
+ __virtio_clear_bit(vdev, VIRTIO_NET_F_STANDBY);
+ }
+
return 0;
}
@@ -3787,6 +3922,7 @@ static int virtnet_probe(struct virtio_device *vdev)
dev->hw_features |= NETIF_F_GRO_HW;
dev->vlan_features = dev->features;
+ dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;
/* MTU range: 68 - 65535 */
dev->min_mtu = MIN_MTU;
@@ -3802,6 +3938,8 @@ static int virtnet_probe(struct virtio_device *vdev)
eth_hw_addr_set(dev, addr);
} else {
eth_hw_addr_random(dev);
+ dev_info(&vdev->dev, "Assigned random MAC address %pM\n",
+ dev->dev_addr);
}
/* Set up our device-specific information */
@@ -3813,8 +3951,10 @@ static int virtnet_probe(struct virtio_device *vdev)
INIT_WORK(&vi->config_work, virtnet_config_changed_work);
spin_lock_init(&vi->refill_lock);
- if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
+ if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) {
vi->mergeable_rx_bufs = true;
+ dev->xdp_features |= NETDEV_XDP_ACT_RX_SG;
+ }
if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_NOTF_COAL)) {
vi->rx_usecs = 0;
@@ -3929,6 +4069,24 @@ static int virtnet_probe(struct virtio_device *vdev)
virtio_device_ready(vdev);
+ /* A random MAC address has been assigned; notify the device.
+ * We don't fail probe if VIRTIO_NET_F_CTRL_MAC_ADDR is not there
+ * because many devices work fine without getting the MAC explicitly.
+ */
+ if (!virtio_has_feature(vdev, VIRTIO_NET_F_MAC) &&
+ virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_MAC_ADDR)) {
+ struct scatterlist sg;
+
+ sg_init_one(&sg, dev->dev_addr, dev->addr_len);
+ if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MAC,
+ VIRTIO_NET_CTRL_MAC_ADDR_SET, &sg)) {
+ pr_debug("virtio_net: setting MAC address failed\n");
+ rtnl_unlock();
+ err = -EINVAL;
+ goto free_unregister_netdev;
+ }
+ }
+
rtnl_unlock();
err = virtnet_cpu_notif_add(vi);
diff --git a/drivers/net/wireless/ath/Kconfig b/drivers/net/wireless/ath/Kconfig
index d88edbf1bea3..910c10028b14 100644
--- a/drivers/net/wireless/ath/Kconfig
+++ b/drivers/net/wireless/ath/Kconfig
@@ -63,5 +63,6 @@ source "drivers/net/wireless/ath/wil6210/Kconfig"
source "drivers/net/wireless/ath/ath10k/Kconfig"
source "drivers/net/wireless/ath/wcn36xx/Kconfig"
source "drivers/net/wireless/ath/ath11k/Kconfig"
+source "drivers/net/wireless/ath/ath12k/Kconfig"
endif
diff --git a/drivers/net/wireless/ath/Makefile b/drivers/net/wireless/ath/Makefile
index 8e4ae9de5ae4..8d6e6e218d24 100644
--- a/drivers/net/wireless/ath/Makefile
+++ b/drivers/net/wireless/ath/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_WIL6210) += wil6210/
obj-$(CONFIG_ATH10K) += ath10k/
obj-$(CONFIG_WCN36XX) += wcn36xx/
obj-$(CONFIG_ATH11K) += ath11k/
+obj-$(CONFIG_ATH12K) += ath12k/
obj-$(CONFIG_ATH_COMMON) += ath.o
diff --git a/drivers/net/wireless/ath/ath10k/ce.c b/drivers/net/wireless/ath/ath10k/ce.c
index 59926227bd49..c2f3bd35c392 100644
--- a/drivers/net/wireless/ath/ath10k/ce.c
+++ b/drivers/net/wireless/ath/ath10k/ce.c
@@ -208,14 +208,6 @@ ath10k_ce_shadow_src_ring_write_index_set(struct ath10k *ar,
ath10k_ce_write32(ar, shadow_sr_wr_ind_addr(ar, ce_state), value);
}
-static inline void
-ath10k_ce_shadow_dest_ring_write_index_set(struct ath10k *ar,
- struct ath10k_ce_pipe *ce_state,
- unsigned int value)
-{
- ath10k_ce_write32(ar, shadow_dst_wr_ind_addr(ar, ce_state), value);
-}
-
static inline void ath10k_ce_src_ring_base_addr_set(struct ath10k *ar,
u32 ce_id,
u64 addr)
diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
index d34a4d6325b2..cd48eca494ed 100644
--- a/drivers/net/wireless/ath/ath11k/ahb.c
+++ b/drivers/net/wireless/ath/ath11k/ahb.c
@@ -32,6 +32,9 @@ static const struct of_device_id ath11k_ahb_of_match[] = {
{ .compatible = "qcom,wcn6750-wifi",
.data = (void *)ATH11K_HW_WCN6750_HW10,
},
+ { .compatible = "qcom,ipq5018-wifi",
+ .data = (void *)ATH11K_HW_IPQ5018_HW10,
+ },
{ }
};
@@ -267,30 +270,42 @@ static void ath11k_ahb_clearbit32(struct ath11k_base *ab, u8 bit, u32 offset)
static void ath11k_ahb_ce_irq_enable(struct ath11k_base *ab, u16 ce_id)
{
const struct ce_attr *ce_attr;
+ const struct ce_ie_addr *ce_ie_addr = ab->hw_params.ce_ie_addr;
+ u32 ie1_reg_addr, ie2_reg_addr, ie3_reg_addr;
+
+ ie1_reg_addr = ce_ie_addr->ie1_reg_addr + ATH11K_CE_OFFSET(ab);
+ ie2_reg_addr = ce_ie_addr->ie2_reg_addr + ATH11K_CE_OFFSET(ab);
+ ie3_reg_addr = ce_ie_addr->ie3_reg_addr + ATH11K_CE_OFFSET(ab);
ce_attr = &ab->hw_params.host_ce_config[ce_id];
if (ce_attr->src_nentries)
- ath11k_ahb_setbit32(ab, ce_id, CE_HOST_IE_ADDRESS);
+ ath11k_ahb_setbit32(ab, ce_id, ie1_reg_addr);
if (ce_attr->dest_nentries) {
- ath11k_ahb_setbit32(ab, ce_id, CE_HOST_IE_2_ADDRESS);
+ ath11k_ahb_setbit32(ab, ce_id, ie2_reg_addr);
ath11k_ahb_setbit32(ab, ce_id + CE_HOST_IE_3_SHIFT,
- CE_HOST_IE_3_ADDRESS);
+ ie3_reg_addr);
}
}
static void ath11k_ahb_ce_irq_disable(struct ath11k_base *ab, u16 ce_id)
{
const struct ce_attr *ce_attr;
+ const struct ce_ie_addr *ce_ie_addr = ab->hw_params.ce_ie_addr;
+ u32 ie1_reg_addr, ie2_reg_addr, ie3_reg_addr;
+
+ ie1_reg_addr = ce_ie_addr->ie1_reg_addr + ATH11K_CE_OFFSET(ab);
+ ie2_reg_addr = ce_ie_addr->ie2_reg_addr + ATH11K_CE_OFFSET(ab);
+ ie3_reg_addr = ce_ie_addr->ie3_reg_addr + ATH11K_CE_OFFSET(ab);
ce_attr = &ab->hw_params.host_ce_config[ce_id];
if (ce_attr->src_nentries)
- ath11k_ahb_clearbit32(ab, ce_id, CE_HOST_IE_ADDRESS);
+ ath11k_ahb_clearbit32(ab, ce_id, ie1_reg_addr);
if (ce_attr->dest_nentries) {
- ath11k_ahb_clearbit32(ab, ce_id, CE_HOST_IE_2_ADDRESS);
+ ath11k_ahb_clearbit32(ab, ce_id, ie2_reg_addr);
ath11k_ahb_clearbit32(ab, ce_id + CE_HOST_IE_3_SHIFT,
- CE_HOST_IE_3_ADDRESS);
+ ie3_reg_addr);
}
}
@@ -1150,6 +1165,22 @@ static int ath11k_ahb_probe(struct platform_device *pdev)
if (ret)
goto err_core_free;
+ ab->mem_ce = ab->mem;
+
+ if (ab->hw_params.ce_remap) {
+ const struct ce_remap *ce_remap = ab->hw_params.ce_remap;
+ /* Unlike ipq8074 or ipq6018, the CE register space is moved
+ * out of WCSS and is not contiguous, hence the CE registers are
+ * remapped to a new space for accessing them.
+ */
+ ab->mem_ce = ioremap(ce_remap->base, ce_remap->size);
+ if (IS_ERR(ab->mem_ce)) {
+ dev_err(&pdev->dev, "ce ioremap error\n");
+ ret = -ENOMEM;
+ goto err_core_free;
+ }
+ }
+
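After the remap, CE register accesses go through ab->mem_ce instead of ab->mem; generic code adjusts register addresses by the difference between the two mappings via the ATH11K_CE_OFFSET() macro added by this series:

	/* For hardware without a remap, mem_ce == mem and the offset is 0;
	 * on IPQ5018, mem_ce points at the separately mapped CE window.
	 */
	#define ATH11K_CE_OFFSET(ab)	(ab->mem_ce - ab->mem)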
ret = ath11k_ahb_fw_resources_init(ab);
if (ret)
goto err_core_free;
@@ -1236,6 +1267,10 @@ static void ath11k_ahb_free_resources(struct ath11k_base *ab)
ath11k_ahb_release_smp2p_handle(ab);
ath11k_ahb_fw_resource_deinit(ab);
ath11k_ce_free_pipes(ab);
+
+ if (ab->hw_params.ce_remap)
+ iounmap(ab->mem_ce);
+
ath11k_core_free(ab);
platform_set_drvdata(pdev, NULL);
}
diff --git a/drivers/net/wireless/ath/ath11k/ce.h b/drivers/net/wireless/ath/ath11k/ce.h
index 9644ff909502..1fc6360e7f01 100644
--- a/drivers/net/wireless/ath/ath11k/ce.h
+++ b/drivers/net/wireless/ath/ath11k/ce.h
@@ -49,6 +49,11 @@ void ath11k_ce_byte_swap(void *mem, u32 len);
#define CE_HOST_IE_2_ADDRESS 0x00A18040
#define CE_HOST_IE_3_ADDRESS CE_HOST_IE_ADDRESS
+/* CE IE registers are different for IPQ5018 */
+#define CE_HOST_IPQ5018_IE_ADDRESS 0x0841804C
+#define CE_HOST_IPQ5018_IE_2_ADDRESS 0x08418050
+#define CE_HOST_IPQ5018_IE_3_ADDRESS CE_HOST_IPQ5018_IE_ADDRESS
+
#define CE_HOST_IE_3_SHIFT 0xC
#define CE_RING_IDX_INCR(nentries_mask, idx) (((idx) + 1) & (nentries_mask))
@@ -84,6 +89,17 @@ struct ce_pipe_config {
__le32 reserved;
};
+struct ce_ie_addr {
+ u32 ie1_reg_addr;
+ u32 ie2_reg_addr;
+ u32 ie3_reg_addr;
+};
+
+struct ce_remap {
+ u32 base;
+ u32 size;
+};
+
struct ce_attr {
/* CE_ATTR_* values */
unsigned int flags;
diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
index edf78df9b12f..75fdbe4ef83a 100644
--- a/drivers/net/wireless/ath/ath11k/core.c
+++ b/drivers/net/wireless/ath/ath11k/core.c
@@ -54,6 +54,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.target_ce_count = 11,
.svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_ipq8074,
.svc_to_ce_map_len = 21,
+ .ce_ie_addr = &ath11k_ce_ie_addr_ipq8074,
.single_pdev_only = false,
.rxdma1_enable = true,
.num_rxmda_per_pdev = 1,
@@ -115,6 +116,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.tcl_ring_retry = true,
.tx_ring_size = DP_TCL_DATA_RING_SIZE,
.smp2p_wow_exit = false,
+ .ftm_responder = true,
},
{
.hw_rev = ATH11K_HW_IPQ6018_HW10,
@@ -137,6 +139,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.target_ce_count = 11,
.svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_ipq6018,
.svc_to_ce_map_len = 19,
+ .ce_ie_addr = &ath11k_ce_ie_addr_ipq8074,
.single_pdev_only = false,
.rxdma1_enable = true,
.num_rxmda_per_pdev = 1,
@@ -196,6 +199,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.tx_ring_size = DP_TCL_DATA_RING_SIZE,
.smp2p_wow_exit = false,
.support_fw_mac_sequence = false,
+ .ftm_responder = true,
},
{
.name = "qca6390 hw2.0",
@@ -218,6 +222,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.target_ce_count = 9,
.svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_qca6390,
.svc_to_ce_map_len = 14,
+ .ce_ie_addr = &ath11k_ce_ie_addr_ipq8074,
.single_pdev_only = true,
.rxdma1_enable = false,
.num_rxmda_per_pdev = 2,
@@ -279,6 +284,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.tx_ring_size = DP_TCL_DATA_RING_SIZE,
.smp2p_wow_exit = false,
.support_fw_mac_sequence = true,
+ .ftm_responder = false,
},
{
.name = "qcn9074 hw1.0",
@@ -301,6 +307,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.target_ce_count = 9,
.svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_qcn9074,
.svc_to_ce_map_len = 18,
+ .ce_ie_addr = &ath11k_ce_ie_addr_ipq8074,
.rxdma1_enable = true,
.num_rxmda_per_pdev = 1,
.rx_mac_buf_ring = false,
@@ -359,6 +366,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.tx_ring_size = DP_TCL_DATA_RING_SIZE,
.smp2p_wow_exit = false,
.support_fw_mac_sequence = false,
+ .ftm_responder = true,
},
{
.name = "wcn6855 hw2.0",
@@ -381,6 +389,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.target_ce_count = 9,
.svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_qca6390,
.svc_to_ce_map_len = 14,
+ .ce_ie_addr = &ath11k_ce_ie_addr_ipq8074,
.single_pdev_only = true,
.rxdma1_enable = false,
.num_rxmda_per_pdev = 2,
@@ -442,6 +451,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.tx_ring_size = DP_TCL_DATA_RING_SIZE,
.smp2p_wow_exit = false,
.support_fw_mac_sequence = true,
+ .ftm_responder = false,
},
{
.name = "wcn6855 hw2.1",
@@ -524,6 +534,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.tx_ring_size = DP_TCL_DATA_RING_SIZE,
.smp2p_wow_exit = false,
.support_fw_mac_sequence = true,
+ .ftm_responder = false,
},
{
.name = "wcn6750 hw1.0",
@@ -546,6 +557,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.target_ce_count = 9,
.svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_qca6390,
.svc_to_ce_map_len = 14,
+ .ce_ie_addr = &ath11k_ce_ie_addr_ipq8074,
.single_pdev_only = true,
.rxdma1_enable = false,
.num_rxmda_per_pdev = 1,
@@ -603,6 +615,87 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.tx_ring_size = DP_TCL_DATA_RING_SIZE_WCN6750,
.smp2p_wow_exit = true,
.support_fw_mac_sequence = true,
+ .ftm_responder = false,
+ },
+ {
+ .hw_rev = ATH11K_HW_IPQ5018_HW10,
+ .name = "ipq5018 hw1.0",
+ .fw = {
+ .dir = "IPQ5018/hw1.0",
+ .board_size = 256 * 1024,
+ .cal_offset = 128 * 1024,
+ },
+ .max_radios = MAX_RADIOS_5018,
+ .bdf_addr = 0x4BA00000,
+ /* hal_desc_sz and hw ops are similar to qcn9074 */
+ .hal_desc_sz = sizeof(struct hal_rx_desc_qcn9074),
+ .qmi_service_ins_id = ATH11K_QMI_WLFW_SERVICE_INS_ID_V01_IPQ8074,
+ .ring_mask = &ath11k_hw_ring_mask_ipq8074,
+ .credit_flow = false,
+ .max_tx_ring = 1,
+ .spectral = {
+ .fft_sz = 2,
+ .fft_pad_sz = 0,
+ .summary_pad_sz = 16,
+ .fft_hdr_len = 24,
+ .max_fft_bins = 1024,
+ },
+ .internal_sleep_clock = false,
+ .regs = &ipq5018_regs,
+ .hw_ops = &ipq5018_ops,
+ .host_ce_config = ath11k_host_ce_config_qcn9074,
+ .ce_count = CE_CNT_5018,
+ .target_ce_config = ath11k_target_ce_config_wlan_ipq5018,
+ .target_ce_count = TARGET_CE_CNT_5018,
+ .svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_ipq5018,
+ .svc_to_ce_map_len = SVC_CE_MAP_LEN_5018,
+ .ce_ie_addr = &ath11k_ce_ie_addr_ipq5018,
+ .ce_remap = &ath11k_ce_remap_ipq5018,
+ .rxdma1_enable = true,
+ .num_rxmda_per_pdev = RXDMA_PER_PDEV_5018,
+ .rx_mac_buf_ring = false,
+ .vdev_start_delay = false,
+ .htt_peer_map_v2 = true,
+ .interface_modes = BIT(NL80211_IFTYPE_STATION) |
+ BIT(NL80211_IFTYPE_AP) |
+ BIT(NL80211_IFTYPE_MESH_POINT),
+ .supports_monitor = false,
+ .supports_sta_ps = false,
+ .supports_shadow_regs = false,
+ .fw_mem_mode = 0,
+ .num_vdevs = 16 + 1,
+ .num_peers = 512,
+ .supports_regdb = false,
+ .idle_ps = false,
+ .supports_suspend = false,
+ .hal_params = &ath11k_hw_hal_params_ipq8074,
+ .single_pdev_only = false,
+ .cold_boot_calib = true,
+ .fix_l1ss = true,
+ .supports_dynamic_smps_6ghz = false,
+ .alloc_cacheable_memory = true,
+ .supports_rssi_stats = false,
+ .fw_wmi_diag_event = false,
+ .current_cc_support = false,
+ .dbr_debug_support = true,
+ .global_reset = false,
+ .bios_sar_capa = NULL,
+ .m3_fw_support = false,
+ .fixed_bdf_addr = true,
+ .fixed_mem_region = true,
+ .static_window_map = false,
+ .hybrid_bus_type = false,
+ .fixed_fw_mem = false,
+ .support_off_channel_tx = false,
+ .supports_multi_bssid = false,
+
+ .sram_dump = {},
+
+ .tcl_ring_retry = true,
+ .tx_ring_size = DP_TCL_DATA_RING_SIZE,
+ .smp2p_wow_exit = false,
+ .support_fw_mac_sequence = false,
+ .ftm_responder = true,
},
};
diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
index 22460b0abf03..0830276e5028 100644
--- a/drivers/net/wireless/ath/ath11k/core.h
+++ b/drivers/net/wireless/ath/ath11k/core.h
@@ -142,6 +142,7 @@ enum ath11k_hw_rev {
ATH11K_HW_WCN6855_HW20,
ATH11K_HW_WCN6855_HW21,
ATH11K_HW_WCN6750_HW10,
+ ATH11K_HW_IPQ5018_HW10,
};
enum ath11k_firmware_mode {
@@ -230,6 +231,13 @@ struct ath11k_he {
#define MAX_RADIOS 3
+/* ipq5018 hw param macros */
+#define MAX_RADIOS_5018 1
+#define CE_CNT_5018 6
+#define TARGET_CE_CNT_5018 9
+#define SVC_CE_MAP_LEN_5018 17
+#define RXDMA_PER_PDEV_5018 1
+
enum {
WMI_HOST_TP_SCALE_MAX = 0,
WMI_HOST_TP_SCALE_50 = 1,
@@ -338,6 +346,7 @@ struct ath11k_vif {
bool is_started;
bool is_up;
+ bool ftm_responder;
bool spectral_enabled;
bool ps;
u32 aid;
@@ -512,8 +521,8 @@ struct ath11k_sta {
#define ATH11K_MIN_5G_FREQ 4150
#define ATH11K_MIN_6G_FREQ 5925
#define ATH11K_MAX_6G_FREQ 7115
-#define ATH11K_NUM_CHANS 101
-#define ATH11K_MAX_5G_CHAN 173
+#define ATH11K_NUM_CHANS 102
+#define ATH11K_MAX_5G_CHAN 177
enum ath11k_state {
ATH11K_STATE_OFF,
@@ -843,6 +852,7 @@ struct ath11k_base {
struct ath11k_dp dp;
void __iomem *mem;
+ void __iomem *mem_ce;
unsigned long mem_len;
struct {
@@ -912,7 +922,6 @@ struct ath11k_base {
enum ath11k_dfs_region dfs_region;
#ifdef CONFIG_ATH11K_DEBUGFS
struct dentry *debugfs_soc;
- struct dentry *debugfs_ath11k;
#endif
struct ath11k_soc_dp_stats soc_stats;
@@ -1138,6 +1147,9 @@ extern const struct service_to_pipe ath11k_target_service_to_ce_map_wlan_ipq6018
extern const struct ce_pipe_config ath11k_target_ce_config_wlan_qca6390[];
extern const struct service_to_pipe ath11k_target_service_to_ce_map_wlan_qca6390[];
+extern const struct ce_pipe_config ath11k_target_ce_config_wlan_ipq5018[];
+extern const struct service_to_pipe ath11k_target_service_to_ce_map_wlan_ipq5018[];
+
extern const struct ce_pipe_config ath11k_target_ce_config_wlan_qcn9074[];
extern const struct service_to_pipe ath11k_target_service_to_ce_map_wlan_qcn9074[];
int ath11k_core_qmi_firmware_ready(struct ath11k_base *ab);
diff --git a/drivers/net/wireless/ath/ath11k/debugfs.c b/drivers/net/wireless/ath/ath11k/debugfs.c
index ccdf3d5ba1ab..5bb6fd17fdf6 100644
--- a/drivers/net/wireless/ath/ath11k/debugfs.c
+++ b/drivers/net/wireless/ath/ath11k/debugfs.c
@@ -976,10 +976,6 @@ int ath11k_debugfs_pdev_create(struct ath11k_base *ab)
if (test_bit(ATH11K_FLAG_REGISTERED, &ab->dev_flags))
return 0;
- ab->debugfs_soc = debugfs_create_dir(ab->hw_params.name, ab->debugfs_ath11k);
- if (IS_ERR(ab->debugfs_soc))
- return PTR_ERR(ab->debugfs_soc);
-
debugfs_create_file("simulate_fw_crash", 0600, ab->debugfs_soc, ab,
&fops_simulate_fw_crash);
@@ -1001,15 +997,51 @@ void ath11k_debugfs_pdev_destroy(struct ath11k_base *ab)
int ath11k_debugfs_soc_create(struct ath11k_base *ab)
{
- ab->debugfs_ath11k = debugfs_create_dir("ath11k", NULL);
+ struct dentry *root;
+ bool dput_needed;
+ char name[64];
+ int ret;
+
+ root = debugfs_lookup("ath11k", NULL);
+ if (!root) {
+ root = debugfs_create_dir("ath11k", NULL);
+ if (IS_ERR_OR_NULL(root))
+ return PTR_ERR(root);
+
+ dput_needed = false;
+ } else {
+ /* a dentry from debugfs_lookup() needs a dput() once we are done with it */
+ dput_needed = true;
+ }
+
+ scnprintf(name, sizeof(name), "%s-%s", ath11k_bus_str(ab->hif.bus),
+ dev_name(ab->dev));
+
+ ab->debugfs_soc = debugfs_create_dir(name, root);
+ if (IS_ERR_OR_NULL(ab->debugfs_soc)) {
+ ret = PTR_ERR(ab->debugfs_soc);
+ goto out;
+ }
+
+ ret = 0;
- return PTR_ERR_OR_ZERO(ab->debugfs_ath11k);
+out:
+ if (dput_needed)
+ dput(root);
+
+ return ret;
}
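With this change every device gets its own subdirectory, named "<bus>-<dev_name>", under one shared top-level ath11k directory; for example (hypothetical paths, actual names depend on the bus and device):

	/sys/kernel/debug/ath11k/ahb-c000000.wifi/
	/sys/kernel/debug/ath11k/pci-0000:06:00.0/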
void ath11k_debugfs_soc_destroy(struct ath11k_base *ab)
{
- debugfs_remove_recursive(ab->debugfs_ath11k);
- ab->debugfs_ath11k = NULL;
+ debugfs_remove_recursive(ab->debugfs_soc);
+ ab->debugfs_soc = NULL;
+
+ /* The ath11k directory is deliberately not removed, even if it
+ * is empty. This simplifies the directory handling, and leaving
+ * an empty ath11k directory in debugfs is only a minor cosmetic
+ * issue.
+ */
}
EXPORT_SYMBOL(ath11k_debugfs_soc_destroy);
diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
index c5a4c34d7749..b65a84a88264 100644
--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
+++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
@@ -1535,13 +1535,12 @@ struct htt_ppdu_stats_info *ath11k_dp_htt_get_ppdu_desc(struct ath11k *ar,
{
struct htt_ppdu_stats_info *ppdu_info;
- spin_lock_bh(&ar->data_lock);
+ lockdep_assert_held(&ar->data_lock);
+
if (!list_empty(&ar->ppdu_stats_info)) {
list_for_each_entry(ppdu_info, &ar->ppdu_stats_info, list) {
- if (ppdu_info->ppdu_id == ppdu_id) {
- spin_unlock_bh(&ar->data_lock);
+ if (ppdu_info->ppdu_id == ppdu_id)
return ppdu_info;
- }
}
if (ar->ppdu_stat_list_depth > HTT_PPDU_DESC_MAX_DEPTH) {
@@ -1553,16 +1552,13 @@ struct htt_ppdu_stats_info *ath11k_dp_htt_get_ppdu_desc(struct ath11k *ar,
kfree(ppdu_info);
}
}
- spin_unlock_bh(&ar->data_lock);
ppdu_info = kzalloc(sizeof(*ppdu_info), GFP_ATOMIC);
if (!ppdu_info)
return NULL;
- spin_lock_bh(&ar->data_lock);
list_add_tail(&ppdu_info->list, &ar->ppdu_stats_info);
ar->ppdu_stat_list_depth++;
- spin_unlock_bh(&ar->data_lock);
return ppdu_info;
}
@@ -1586,16 +1582,17 @@ static int ath11k_htt_pull_ppdu_stats(struct ath11k_base *ab,
ar = ath11k_mac_get_ar_by_pdev_id(ab, pdev_id);
if (!ar) {
ret = -EINVAL;
- goto exit;
+ goto out;
}
if (ath11k_debugfs_is_pktlog_lite_mode_enabled(ar))
trace_ath11k_htt_ppdu_stats(ar, skb->data, len);
+ spin_lock_bh(&ar->data_lock);
ppdu_info = ath11k_dp_htt_get_ppdu_desc(ar, ppdu_id);
if (!ppdu_info) {
ret = -EINVAL;
- goto exit;
+ goto out_unlock_data;
}
ppdu_info->ppdu_id = ppdu_id;
@@ -1604,10 +1601,13 @@ static int ath11k_htt_pull_ppdu_stats(struct ath11k_base *ab,
(void *)ppdu_info);
if (ret) {
ath11k_warn(ab, "Failed to parse tlv %d\n", ret);
- goto exit;
+ goto out_unlock_data;
}
-exit:
+out_unlock_data:
+ spin_unlock_bh(&ar->data_lock);
+
+out:
rcu_read_unlock();
return ret;
@@ -3126,6 +3126,7 @@ int ath11k_peer_rx_frag_setup(struct ath11k *ar, const u8 *peer_mac, int vdev_id
if (!peer) {
ath11k_warn(ab, "failed to find the peer to set up fragment info\n");
spin_unlock_bh(&ab->base_lock);
+ crypto_free_shash(tfm);
return -ENOENT;
}
@@ -5022,6 +5023,7 @@ static int ath11k_dp_rx_mon_deliver(struct ath11k *ar, u32 mac_id,
} else {
rxs->flag |= RX_FLAG_ALLOW_SAME_PN;
}
+ rxs->flag |= RX_FLAG_ONLY_MONITOR;
ath11k_update_radiotap(ar, ppduinfo, mon_skb, rxs);
ath11k_dp_rx_deliver_msdu(ar, napi, mon_skb, rxs);
diff --git a/drivers/net/wireless/ath/ath11k/hal.c b/drivers/net/wireless/ath/ath11k/hal.c
index 2fd224480d45..22422237500c 100644
--- a/drivers/net/wireless/ath/ath11k/hal.c
+++ b/drivers/net/wireless/ath/ath11k/hal.c
@@ -1220,16 +1220,20 @@ static int ath11k_hal_srng_create_config(struct ath11k_base *ab)
s->reg_start[1] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL_STATUS_RING_HP;
s = &hal->srng_config[HAL_CE_SRC];
- s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_BASE_LSB;
- s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_HP;
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_BASE_LSB +
+ ATH11K_CE_OFFSET(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_HP +
+ ATH11K_CE_OFFSET(ab);
s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG(ab) -
HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab);
s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG(ab) -
HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab);
s = &hal->srng_config[HAL_CE_DST];
- s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_BASE_LSB;
- s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_HP;
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_BASE_LSB +
+ ATH11K_CE_OFFSET(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_HP +
+ ATH11K_CE_OFFSET(ab);
s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
@@ -1237,8 +1241,9 @@ static int ath11k_hal_srng_create_config(struct ath11k_base *ab)
s = &hal->srng_config[HAL_CE_DST_STATUS];
s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) +
- HAL_CE_DST_STATUS_RING_BASE_LSB;
- s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_STATUS_RING_HP;
+ HAL_CE_DST_STATUS_RING_BASE_LSB + ATH11K_CE_OFFSET(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_STATUS_RING_HP +
+ ATH11K_CE_OFFSET(ab);
s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
diff --git a/drivers/net/wireless/ath/ath11k/hal.h b/drivers/net/wireless/ath/ath11k/hal.h
index 6a1f78ee6eb6..1942d41d6de5 100644
--- a/drivers/net/wireless/ath/ath11k/hal.h
+++ b/drivers/net/wireless/ath/ath11k/hal.h
@@ -321,6 +321,10 @@ struct ath11k_base;
#define HAL_WBM2SW_RELEASE_RING_BASE_MSB_RING_SIZE 0x000fffff
#define HAL_RXDMA_RING_MAX_SIZE 0x0000ffff
+/* IPQ5018 ce registers */
+#define HAL_IPQ5018_CE_WFSS_REG_BASE 0x08400000
+#define HAL_IPQ5018_CE_SIZE 0x200000
+
/* Add any other errors here and return them in
* ath11k_hal_rx_desc_get_err().
*/
@@ -519,6 +523,7 @@ enum hal_srng_dir {
#define HAL_SRNG_FLAGS_MSI_INTR 0x00020000
#define HAL_SRNG_FLAGS_CACHED 0x20000000
#define HAL_SRNG_FLAGS_LMAC_RING 0x80000000
+#define HAL_SRNG_FLAGS_REMAP_CE_RING 0x10000000
#define HAL_SRNG_TLV_HDR_TAG GENMASK(9, 1)
#define HAL_SRNG_TLV_HDR_LEN GENMASK(25, 10)
diff --git a/drivers/net/wireless/ath/ath11k/hw.c b/drivers/net/wireless/ath/ath11k/hw.c
index dbcc0c4035b6..ab8f0ccacc6b 100644
--- a/drivers/net/wireless/ath/ath11k/hw.c
+++ b/drivers/net/wireless/ath/ath11k/hw.c
@@ -791,6 +791,49 @@ static void ath11k_hw_wcn6855_reo_setup(struct ath11k_base *ab)
ring_hash_map);
}
+static void ath11k_hw_ipq5018_reo_setup(struct ath11k_base *ab)
+{
+ u32 reo_base = HAL_SEQ_WCSS_UMAC_REO_REG;
+ u32 val;
+
+ /* Each hash entry uses three bits to map to a particular ring. */
+ u32 ring_hash_map = HAL_HASH_ROUTING_RING_SW1 << 0 |
+ HAL_HASH_ROUTING_RING_SW2 << 4 |
+ HAL_HASH_ROUTING_RING_SW3 << 8 |
+ HAL_HASH_ROUTING_RING_SW4 << 12 |
+ HAL_HASH_ROUTING_RING_SW1 << 16 |
+ HAL_HASH_ROUTING_RING_SW2 << 20 |
+ HAL_HASH_ROUTING_RING_SW3 << 24 |
+ HAL_HASH_ROUTING_RING_SW4 << 28;
+
+ val = ath11k_hif_read32(ab, reo_base + HAL_REO1_GEN_ENABLE);
+
+ val &= ~HAL_REO1_GEN_ENABLE_FRAG_DST_RING;
+ val |= FIELD_PREP(HAL_REO1_GEN_ENABLE_FRAG_DST_RING,
+ HAL_SRNG_RING_ID_REO2SW1) |
+ FIELD_PREP(HAL_REO1_GEN_ENABLE_AGING_LIST_ENABLE, 1) |
+ FIELD_PREP(HAL_REO1_GEN_ENABLE_AGING_FLUSH_ENABLE, 1);
+ ath11k_hif_write32(ab, reo_base + HAL_REO1_GEN_ENABLE, val);
+
+ ath11k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_0(ab),
+ HAL_DEFAULT_REO_TIMEOUT_USEC);
+ ath11k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_1(ab),
+ HAL_DEFAULT_REO_TIMEOUT_USEC);
+ ath11k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_2(ab),
+ HAL_DEFAULT_REO_TIMEOUT_USEC);
+ ath11k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_3(ab),
+ HAL_DEFAULT_REO_TIMEOUT_USEC);
+
+ ath11k_hif_write32(ab, reo_base + HAL_REO1_DEST_RING_CTRL_IX_0,
+ ring_hash_map);
+ ath11k_hif_write32(ab, reo_base + HAL_REO1_DEST_RING_CTRL_IX_1,
+ ring_hash_map);
+ ath11k_hif_write32(ab, reo_base + HAL_REO1_DEST_RING_CTRL_IX_2,
+ ring_hash_map);
+ ath11k_hif_write32(ab, reo_base + HAL_REO1_DEST_RING_CTRL_IX_3,
+ ring_hash_map);
+}
+
static u16 ath11k_hw_ipq8074_mpdu_info_get_peerid(u8 *tlv_data)
{
u16 peer_id = 0;
@@ -1084,6 +1127,47 @@ const struct ath11k_hw_ops wcn6750_ops = {
.get_ring_selector = ath11k_hw_wcn6750_get_tcl_ring_selector,
};
+/* IPQ5018 hw ops are similar to QCN9074 except for the dest ring remap */
+const struct ath11k_hw_ops ipq5018_ops = {
+ .get_hw_mac_from_pdev_id = ath11k_hw_ipq6018_mac_from_pdev_id,
+ .wmi_init_config = ath11k_init_wmi_config_ipq8074,
+ .mac_id_to_pdev_id = ath11k_hw_mac_id_to_pdev_id_ipq8074,
+ .mac_id_to_srng_id = ath11k_hw_mac_id_to_srng_id_ipq8074,
+ .tx_mesh_enable = ath11k_hw_qcn9074_tx_mesh_enable,
+ .rx_desc_get_first_msdu = ath11k_hw_qcn9074_rx_desc_get_first_msdu,
+ .rx_desc_get_last_msdu = ath11k_hw_qcn9074_rx_desc_get_last_msdu,
+ .rx_desc_get_l3_pad_bytes = ath11k_hw_qcn9074_rx_desc_get_l3_pad_bytes,
+ .rx_desc_get_hdr_status = ath11k_hw_qcn9074_rx_desc_get_hdr_status,
+ .rx_desc_encrypt_valid = ath11k_hw_qcn9074_rx_desc_encrypt_valid,
+ .rx_desc_get_encrypt_type = ath11k_hw_qcn9074_rx_desc_get_encrypt_type,
+ .rx_desc_get_decap_type = ath11k_hw_qcn9074_rx_desc_get_decap_type,
+ .rx_desc_get_mesh_ctl = ath11k_hw_qcn9074_rx_desc_get_mesh_ctl,
+ .rx_desc_get_ldpc_support = ath11k_hw_qcn9074_rx_desc_get_ldpc_support,
+ .rx_desc_get_mpdu_seq_ctl_vld = ath11k_hw_qcn9074_rx_desc_get_mpdu_seq_ctl_vld,
+ .rx_desc_get_mpdu_fc_valid = ath11k_hw_qcn9074_rx_desc_get_mpdu_fc_valid,
+ .rx_desc_get_mpdu_start_seq_no = ath11k_hw_qcn9074_rx_desc_get_mpdu_start_seq_no,
+ .rx_desc_get_msdu_len = ath11k_hw_qcn9074_rx_desc_get_msdu_len,
+ .rx_desc_get_msdu_sgi = ath11k_hw_qcn9074_rx_desc_get_msdu_sgi,
+ .rx_desc_get_msdu_rate_mcs = ath11k_hw_qcn9074_rx_desc_get_msdu_rate_mcs,
+ .rx_desc_get_msdu_rx_bw = ath11k_hw_qcn9074_rx_desc_get_msdu_rx_bw,
+ .rx_desc_get_msdu_freq = ath11k_hw_qcn9074_rx_desc_get_msdu_freq,
+ .rx_desc_get_msdu_pkt_type = ath11k_hw_qcn9074_rx_desc_get_msdu_pkt_type,
+ .rx_desc_get_msdu_nss = ath11k_hw_qcn9074_rx_desc_get_msdu_nss,
+ .rx_desc_get_mpdu_tid = ath11k_hw_qcn9074_rx_desc_get_mpdu_tid,
+ .rx_desc_get_mpdu_peer_id = ath11k_hw_qcn9074_rx_desc_get_mpdu_peer_id,
+ .rx_desc_copy_attn_end_tlv = ath11k_hw_qcn9074_rx_desc_copy_attn_end,
+ .rx_desc_get_mpdu_start_tag = ath11k_hw_qcn9074_rx_desc_get_mpdu_start_tag,
+ .rx_desc_get_mpdu_ppdu_id = ath11k_hw_qcn9074_rx_desc_get_mpdu_ppdu_id,
+ .rx_desc_set_msdu_len = ath11k_hw_qcn9074_rx_desc_set_msdu_len,
+ .rx_desc_get_attention = ath11k_hw_qcn9074_rx_desc_get_attention,
+ .reo_setup = ath11k_hw_ipq5018_reo_setup,
+ .rx_desc_get_msdu_payload = ath11k_hw_qcn9074_rx_desc_get_msdu_payload,
+ .mpdu_info_get_peerid = ath11k_hw_ipq8074_mpdu_info_get_peerid,
+ .rx_desc_mac_addr2_valid = ath11k_hw_ipq9074_rx_desc_mac_addr2_valid,
+ .rx_desc_mpdu_start_addr2 = ath11k_hw_ipq9074_rx_desc_mpdu_start_addr2,
+};
+
#define ATH11K_TX_RING_MASK_0 BIT(0)
#define ATH11K_TX_RING_MASK_1 BIT(1)
#define ATH11K_TX_RING_MASK_2 BIT(2)
@@ -1972,6 +2056,214 @@ const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_wcn6750 = {
},
};
+/* Target firmware's Copy Engine configuration for IPQ5018 */
+const struct ce_pipe_config ath11k_target_ce_config_wlan_ipq5018[] = {
+ /* CE0: host->target HTC control and raw streams */
+ {
+ .pipenum = __cpu_to_le32(0),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE1: target->host HTT + HTC control */
+ {
+ .pipenum = __cpu_to_le32(1),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE2: target->host WMI */
+ {
+ .pipenum = __cpu_to_le32(2),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE3: host->target WMI */
+ {
+ .pipenum = __cpu_to_le32(3),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE4: host->target HTT */
+ {
+ .pipenum = __cpu_to_le32(4),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT),
+ .nentries = __cpu_to_le32(256),
+ .nbytes_max = __cpu_to_le32(256),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS | CE_ATTR_DIS_INTR),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE5: target->host Pktlog */
+ {
+ .pipenum = __cpu_to_le32(5),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE6: Reserved for target autonomous hif_memcpy */
+ {
+ .pipenum = __cpu_to_le32(6),
+ .pipedir = __cpu_to_le32(PIPEDIR_INOUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(16384),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE7 used only by Host */
+ {
+ .pipenum = __cpu_to_le32(7),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(0x2000),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE8 target->host used only by IPA */
+ {
+ .pipenum = __cpu_to_le32(8),
+ .pipedir = __cpu_to_le32(PIPEDIR_INOUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(16384),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+};
+
+/* Map from service/endpoint to Copy Engine for IPQ5018.
+ * This table is derived from the CE TABLE, above.
+ * It is passed to the Target at startup for use by firmware.
+ */
+const struct service_to_pipe ath11k_target_service_to_ce_map_wlan_ipq5018[] = {
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_VO),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ .pipenum = __cpu_to_le32(3),
+ },
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_VO),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ .pipenum = __cpu_to_le32(2),
+ },
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_BK),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ .pipenum = __cpu_to_le32(3),
+ },
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_BK),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ .pipenum = __cpu_to_le32(2),
+ },
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_BE),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ .pipenum = __cpu_to_le32(3),
+ },
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_BE),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ .pipenum = __cpu_to_le32(2),
+ },
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_VI),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ .pipenum = __cpu_to_le32(3),
+ },
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_VI),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ .pipenum = __cpu_to_le32(2),
+ },
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_CONTROL),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ .pipenum = __cpu_to_le32(3),
+ },
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_CONTROL),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ .pipenum = __cpu_to_le32(2),
+ },
+
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_RSVD_CTRL),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ .pipenum = __cpu_to_le32(0),
+ },
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_RSVD_CTRL),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ .pipenum = __cpu_to_le32(1),
+ },
+
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_TEST_RAW_STREAMS),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ .pipenum = __cpu_to_le32(0),
+ },
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_TEST_RAW_STREAMS),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ .pipenum = __cpu_to_le32(1),
+ },
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_HTT_DATA_MSG),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ .pipenum = __cpu_to_le32(4),
+ },
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_HTT_DATA_MSG),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ .pipenum = __cpu_to_le32(1),
+ },
+ {
+ .service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_PKT_LOG),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ .pipenum = __cpu_to_le32(5),
+ },
+
+ /* (Additions here) */
+
+ { /* terminator entry */ }
+};
+
+const struct ce_ie_addr ath11k_ce_ie_addr_ipq8074 = {
+ .ie1_reg_addr = CE_HOST_IE_ADDRESS,
+ .ie2_reg_addr = CE_HOST_IE_2_ADDRESS,
+ .ie3_reg_addr = CE_HOST_IE_3_ADDRESS,
+};
+
+const struct ce_ie_addr ath11k_ce_ie_addr_ipq5018 = {
+ .ie1_reg_addr = CE_HOST_IPQ5018_IE_ADDRESS - HAL_IPQ5018_CE_WFSS_REG_BASE,
+ .ie2_reg_addr = CE_HOST_IPQ5018_IE_2_ADDRESS - HAL_IPQ5018_CE_WFSS_REG_BASE,
+ .ie3_reg_addr = CE_HOST_IPQ5018_IE_3_ADDRESS - HAL_IPQ5018_CE_WFSS_REG_BASE,
+};
+
+const struct ce_remap ath11k_ce_remap_ipq5018 = {
+ .base = HAL_IPQ5018_CE_WFSS_REG_BASE,
+ .size = HAL_IPQ5018_CE_SIZE,
+};
+
const struct ath11k_hw_regs ipq8074_regs = {
/* SW2TCL(x) R0 ring configuration address */
.hal_tcl1_ring_base_lsb = 0x00000510,
@@ -2437,6 +2729,85 @@ static const struct ath11k_hw_tcl2wbm_rbm_map ath11k_hw_tcl2wbm_rbm_map_wcn6750[
},
};
+const struct ath11k_hw_regs ipq5018_regs = {
+ /* SW2TCL(x) R0 ring configuration address */
+ .hal_tcl1_ring_base_lsb = 0x00000694,
+ .hal_tcl1_ring_base_msb = 0x00000698,
+ .hal_tcl1_ring_id = 0x0000069c,
+ .hal_tcl1_ring_misc = 0x000006a4,
+ .hal_tcl1_ring_tp_addr_lsb = 0x000006b0,
+ .hal_tcl1_ring_tp_addr_msb = 0x000006b4,
+ .hal_tcl1_ring_consumer_int_setup_ix0 = 0x000006c4,
+ .hal_tcl1_ring_consumer_int_setup_ix1 = 0x000006c8,
+ .hal_tcl1_ring_msi1_base_lsb = 0x000006dc,
+ .hal_tcl1_ring_msi1_base_msb = 0x000006e0,
+ .hal_tcl1_ring_msi1_data = 0x000006e4,
+ .hal_tcl2_ring_base_lsb = 0x000006ec,
+ .hal_tcl_ring_base_lsb = 0x0000079c,
+
+ /* TCL STATUS ring address */
+ .hal_tcl_status_ring_base_lsb = 0x000008a4,
+
+ /* REO2SW(x) R0 ring configuration address */
+ .hal_reo1_ring_base_lsb = 0x000001ec,
+ .hal_reo1_ring_base_msb = 0x000001f0,
+ .hal_reo1_ring_id = 0x000001f4,
+ .hal_reo1_ring_misc = 0x000001fc,
+ .hal_reo1_ring_hp_addr_lsb = 0x00000200,
+ .hal_reo1_ring_hp_addr_msb = 0x00000204,
+ .hal_reo1_ring_producer_int_setup = 0x00000210,
+ .hal_reo1_ring_msi1_base_lsb = 0x00000234,
+ .hal_reo1_ring_msi1_base_msb = 0x00000238,
+ .hal_reo1_ring_msi1_data = 0x0000023c,
+ .hal_reo2_ring_base_lsb = 0x00000244,
+ .hal_reo1_aging_thresh_ix_0 = 0x00000564,
+ .hal_reo1_aging_thresh_ix_1 = 0x00000568,
+ .hal_reo1_aging_thresh_ix_2 = 0x0000056c,
+ .hal_reo1_aging_thresh_ix_3 = 0x00000570,
+
+ /* REO2SW(x) R2 ring pointers (head/tail) address */
+ .hal_reo1_ring_hp = 0x00003028,
+ .hal_reo1_ring_tp = 0x0000302c,
+ .hal_reo2_ring_hp = 0x00003030,
+
+ /* REO2TCL R0 ring configuration address */
+ .hal_reo_tcl_ring_base_lsb = 0x000003fc,
+ .hal_reo_tcl_ring_hp = 0x00003058,
+
+ /* SW2REO ring address */
+ .hal_sw2reo_ring_base_lsb = 0x0000013c,
+ .hal_sw2reo_ring_hp = 0x00003018,
+
+ /* REO CMD ring address */
+ .hal_reo_cmd_ring_base_lsb = 0x000000e4,
+ .hal_reo_cmd_ring_hp = 0x00003010,
+
+ /* REO status address */
+ .hal_reo_status_ring_base_lsb = 0x00000504,
+ .hal_reo_status_hp = 0x00003070,
+
+ /* WCSS relative address */
+ .hal_seq_wcss_umac_ce0_src_reg = 0x08400000
+ - HAL_IPQ5018_CE_WFSS_REG_BASE,
+ .hal_seq_wcss_umac_ce0_dst_reg = 0x08401000
+ - HAL_IPQ5018_CE_WFSS_REG_BASE,
+ .hal_seq_wcss_umac_ce1_src_reg = 0x08402000
+ - HAL_IPQ5018_CE_WFSS_REG_BASE,
+ .hal_seq_wcss_umac_ce1_dst_reg = 0x08403000
+ - HAL_IPQ5018_CE_WFSS_REG_BASE,
+
+ /* WBM Idle address */
+ .hal_wbm_idle_link_ring_base_lsb = 0x00000874,
+ .hal_wbm_idle_link_ring_misc = 0x00000884,
+
+ /* SW2WBM release address */
+ .hal_wbm_release_ring_base_lsb = 0x000001ec,
+
+ /* WBM2SW release address */
+ .hal_wbm0_release_ring_base_lsb = 0x00000924,
+ .hal_wbm1_release_ring_base_lsb = 0x0000097c,
+};
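The CE entries above are stored relative to the remapped window: ab->mem_ce is an ioremap() of HAL_IPQ5018_CE_WFSS_REG_BASE, so subtracting the base turns absolute bus addresses into offsets from mem_ce:

	/* CE0 src at 0x08400000 minus the 0x08400000 window base gives
	 * offset 0x0 from ab->mem_ce; CE0 dst at 0x08401000 becomes
	 * offset 0x1000, and so on for CE1.
	 */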
+
const struct ath11k_hw_hal_params ath11k_hw_hal_params_ipq8074 = {
.rx_buf_rbm = HAL_RX_BUF_RBM_SW3_BM,
.tcl2wbm_rbm_map = ath11k_hw_tcl2wbm_rbm_map_ipq8074,
diff --git a/drivers/net/wireless/ath/ath11k/hw.h b/drivers/net/wireless/ath/ath11k/hw.h
index 0c5ef8a526d8..0be4e1232384 100644
--- a/drivers/net/wireless/ath/ath11k/hw.h
+++ b/drivers/net/wireless/ath/ath11k/hw.h
@@ -80,6 +80,8 @@
#define ATH11K_M3_FILE "m3.bin"
#define ATH11K_REGDB_FILE_NAME "regdb.bin"
+#define ATH11K_CE_OFFSET(ab) (ab->mem_ce - ab->mem)
+
enum ath11k_hw_rate_cck {
ATH11K_HW_RATE_CCK_LP_11M = 0,
ATH11K_HW_RATE_CCK_LP_5_5M,
@@ -158,6 +160,8 @@ struct ath11k_hw_params {
u32 target_ce_count;
const struct service_to_pipe *svc_to_ce_map;
u32 svc_to_ce_map_len;
+ const struct ce_ie_addr *ce_ie_addr;
+ const struct ce_remap *ce_remap;
bool single_pdev_only;
@@ -220,6 +224,7 @@ struct ath11k_hw_params {
u32 tx_ring_size;
bool smp2p_wow_exit;
bool support_fw_mac_sequence;
+ bool ftm_responder;
};
struct ath11k_hw_ops {
@@ -271,12 +276,18 @@ extern const struct ath11k_hw_ops qca6390_ops;
extern const struct ath11k_hw_ops qcn9074_ops;
extern const struct ath11k_hw_ops wcn6855_ops;
extern const struct ath11k_hw_ops wcn6750_ops;
+extern const struct ath11k_hw_ops ipq5018_ops;
extern const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_ipq8074;
extern const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_qca6390;
extern const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_qcn9074;
extern const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_wcn6750;
+extern const struct ce_ie_addr ath11k_ce_ie_addr_ipq8074;
+extern const struct ce_ie_addr ath11k_ce_ie_addr_ipq5018;
+
+extern const struct ce_remap ath11k_ce_remap_ipq5018;
+
extern const struct ath11k_hw_hal_params ath11k_hw_hal_params_ipq8074;
extern const struct ath11k_hw_hal_params ath11k_hw_hal_params_qca6390;
extern const struct ath11k_hw_hal_params ath11k_hw_hal_params_wcn6750;
@@ -406,6 +417,7 @@ extern const struct ath11k_hw_regs qca6390_regs;
extern const struct ath11k_hw_regs qcn9074_regs;
extern const struct ath11k_hw_regs wcn6855_regs;
extern const struct ath11k_hw_regs wcn6750_regs;
+extern const struct ath11k_hw_regs ipq5018_regs;
static inline const char *ath11k_bd_ie_type_str(enum ath11k_bd_ie_type type)
{
diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
index 9e923ecb0891..110a38cce0a7 100644
--- a/drivers/net/wireless/ath/ath11k/mac.c
+++ b/drivers/net/wireless/ath/ath11k/mac.c
@@ -96,6 +96,7 @@ static const struct ieee80211_channel ath11k_5ghz_channels[] = {
CHAN5G(165, 5825, 0),
CHAN5G(169, 5845, 0),
CHAN5G(173, 5865, 0),
+ CHAN5G(177, 5885, 0),
};
static const struct ieee80211_channel ath11k_6ghz_channels[] = {
@@ -3110,7 +3111,7 @@ static void ath11k_mac_op_bss_info_changed(struct ieee80211_hw *hw,
u16 bitrate;
int ret = 0;
u8 rateidx;
- u32 rate;
+ u32 rate, param;
u32 ipv4_cnt;
mutex_lock(&ar->conf_mutex);
@@ -3412,6 +3413,20 @@ static void ath11k_mac_op_bss_info_changed(struct ieee80211_hw *hw,
}
}
+ if (changed & BSS_CHANGED_FTM_RESPONDER &&
+ arvif->ftm_responder != info->ftm_responder &&
+ ar->ab->hw_params.ftm_responder &&
+ (vif->type == NL80211_IFTYPE_AP ||
+ vif->type == NL80211_IFTYPE_MESH_POINT)) {
+ arvif->ftm_responder = info->ftm_responder;
+ param = WMI_VDEV_PARAM_ENABLE_DISABLE_RTT_RESPONDER_ROLE;
+ ret = ath11k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id, param,
+ arvif->ftm_responder);
+ if (ret)
+ ath11k_warn(ar->ab, "Failed to set ftm responder %i: %d\n",
+ arvif->vdev_id, ret);
+ }
+
if (changed & BSS_CHANGED_FILS_DISCOVERY ||
changed & BSS_CHANGED_UNSOL_BCAST_PROBE_RESP)
ath11k_mac_fils_discovery(arvif, info);
@@ -3612,7 +3627,7 @@ static int ath11k_mac_op_hw_scan(struct ieee80211_hw *hw,
struct ath11k *ar = hw->priv;
struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif);
struct cfg80211_scan_request *req = &hw_req->req;
- struct scan_req_params arg;
+ struct scan_req_params *arg = NULL;
int ret = 0;
int i;
u32 scan_timeout;
@@ -3640,72 +3655,78 @@ static int ath11k_mac_op_hw_scan(struct ieee80211_hw *hw,
if (ret)
goto exit;
- memset(&arg, 0, sizeof(arg));
- ath11k_wmi_start_scan_init(ar, &arg);
- arg.vdev_id = arvif->vdev_id;
- arg.scan_id = ATH11K_SCAN_ID;
+ arg = kzalloc(sizeof(*arg), GFP_KERNEL);
+
+ if (!arg) {
+ ret = -ENOMEM;
+ goto exit;
+ }
+
+ ath11k_wmi_start_scan_init(ar, arg);
+ arg->vdev_id = arvif->vdev_id;
+ arg->scan_id = ATH11K_SCAN_ID;
if (req->ie_len) {
- arg.extraie.ptr = kmemdup(req->ie, req->ie_len, GFP_KERNEL);
- if (!arg.extraie.ptr) {
+ arg->extraie.ptr = kmemdup(req->ie, req->ie_len, GFP_KERNEL);
+ if (!arg->extraie.ptr) {
ret = -ENOMEM;
goto exit;
}
- arg.extraie.len = req->ie_len;
+ arg->extraie.len = req->ie_len;
}
if (req->n_ssids) {
- arg.num_ssids = req->n_ssids;
- for (i = 0; i < arg.num_ssids; i++) {
- arg.ssid[i].length = req->ssids[i].ssid_len;
- memcpy(&arg.ssid[i].ssid, req->ssids[i].ssid,
+ arg->num_ssids = req->n_ssids;
+ for (i = 0; i < arg->num_ssids; i++) {
+ arg->ssid[i].length = req->ssids[i].ssid_len;
+ memcpy(&arg->ssid[i].ssid, req->ssids[i].ssid,
req->ssids[i].ssid_len);
}
} else {
- arg.scan_flags |= WMI_SCAN_FLAG_PASSIVE;
+ arg->scan_flags |= WMI_SCAN_FLAG_PASSIVE;
}
if (req->n_channels) {
- arg.num_chan = req->n_channels;
- arg.chan_list = kcalloc(arg.num_chan, sizeof(*arg.chan_list),
- GFP_KERNEL);
+ arg->num_chan = req->n_channels;
+ arg->chan_list = kcalloc(arg->num_chan, sizeof(*arg->chan_list),
+ GFP_KERNEL);
- if (!arg.chan_list) {
+ if (!arg->chan_list) {
ret = -ENOMEM;
goto exit;
}
- for (i = 0; i < arg.num_chan; i++)
- arg.chan_list[i] = req->channels[i]->center_freq;
+ for (i = 0; i < arg->num_chan; i++)
+ arg->chan_list[i] = req->channels[i]->center_freq;
}
if (req->flags & NL80211_SCAN_FLAG_RANDOM_ADDR) {
- arg.scan_f_add_spoofed_mac_in_probe = 1;
- ether_addr_copy(arg.mac_addr.addr, req->mac_addr);
- ether_addr_copy(arg.mac_mask.addr, req->mac_addr_mask);
+ arg->scan_f_add_spoofed_mac_in_probe = 1;
+ ether_addr_copy(arg->mac_addr.addr, req->mac_addr);
+ ether_addr_copy(arg->mac_mask.addr, req->mac_addr_mask);
}
/* if duration is set, default dwell times will be overwritten */
if (req->duration) {
- arg.dwell_time_active = req->duration;
- arg.dwell_time_active_2g = req->duration;
- arg.dwell_time_active_6g = req->duration;
- arg.dwell_time_passive = req->duration;
- arg.dwell_time_passive_6g = req->duration;
- arg.burst_duration = req->duration;
-
- scan_timeout = min_t(u32, arg.max_rest_time *
- (arg.num_chan - 1) + (req->duration +
+ arg->dwell_time_active = req->duration;
+ arg->dwell_time_active_2g = req->duration;
+ arg->dwell_time_active_6g = req->duration;
+ arg->dwell_time_passive = req->duration;
+ arg->dwell_time_passive_6g = req->duration;
+ arg->burst_duration = req->duration;
+
+ scan_timeout = min_t(u32, arg->max_rest_time *
+ (arg->num_chan - 1) + (req->duration +
ATH11K_SCAN_CHANNEL_SWITCH_WMI_EVT_OVERHEAD) *
- arg.num_chan, arg.max_scan_time);
+ arg->num_chan, arg->max_scan_time);
} else {
- scan_timeout = arg.max_scan_time;
+ scan_timeout = arg->max_scan_time;
}
/* Add a margin to account for event/command processing */
scan_timeout += ATH11K_MAC_SCAN_CMD_EVT_OVERHEAD;
- ret = ath11k_start_scan(ar, &arg);
+ ret = ath11k_start_scan(ar, arg);
if (ret) {
ath11k_warn(ar->ab, "failed to start hw scan: %d\n", ret);
spin_lock_bh(&ar->data_lock);
@@ -3717,10 +3738,11 @@ static int ath11k_mac_op_hw_scan(struct ieee80211_hw *hw,
msecs_to_jiffies(scan_timeout));
exit:
- kfree(arg.chan_list);
-
- if (req->ie_len)
- kfree(arg.extraie.ptr);
+ if (arg) {
+ kfree(arg->chan_list);
+ kfree(arg->extraie.ptr);
+ kfree(arg);
+ }
mutex_unlock(&ar->conf_mutex);
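The unified exit path above works because kfree() treats a NULL pointer as a no-op, so both members can be freed unconditionally whenever arg itself was allocated:

	/* kfree(NULL) is a documented no-op, so neither chan_list nor
	 * extraie.ptr needs its own allocation-success bookkeeping.
	 */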
@@ -9106,6 +9128,10 @@ static int __ath11k_mac_register(struct ath11k *ar)
wiphy_ext_feature_set(ar->hw->wiphy,
NL80211_EXT_FEATURE_SET_SCAN_DWELL);
+ if (ab->hw_params.ftm_responder)
+ wiphy_ext_feature_set(ar->hw->wiphy,
+ NL80211_EXT_FEATURE_ENABLE_FTM_RESPONDER);
+
ath11k_reg_init(ar);
if (!test_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags)) {
diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
index 99cf3357c66e..776362d151cb 100644
--- a/drivers/net/wireless/ath/ath11k/pci.c
+++ b/drivers/net/wireless/ath/ath11k/pci.c
@@ -543,6 +543,8 @@ static int ath11k_pci_claim(struct ath11k_pci *ab_pci, struct pci_dev *pdev)
goto clear_master;
}
+ ab->mem_ce = ab->mem;
+
ath11k_dbg(ab, ATH11K_DBG_BOOT, "boot pci_mem 0x%pK\n", ab->mem);
return 0;
diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
index 2a8a3e3dcff6..b3a7d7bfe17c 100644
--- a/drivers/net/wireless/ath/ath11k/wmi.c
+++ b/drivers/net/wireless/ath/ath11k/wmi.c
@@ -3853,8 +3853,8 @@ ath11k_wmi_obss_color_collision_event(struct ath11k_base *ab, struct sk_buff *sk
switch (ev->evt_type) {
case WMI_BSS_COLOR_COLLISION_DETECTION:
- ieeee80211_obss_color_collision_notify(arvif->vif, ev->obss_color_bitmap,
- GFP_KERNEL);
+ ieee80211_obss_color_collision_notify(arvif->vif, ev->obss_color_bitmap,
+ GFP_KERNEL);
ath11k_dbg(ab, ATH11K_DBG_WMI,
"OBSS color collision detected vdev:%d, event:%d, bitmap:%08llx\n",
ev->vdev_id, ev->evt_type, ev->obss_color_bitmap);
diff --git a/drivers/net/wireless/ath/ath11k/wmi.h b/drivers/net/wireless/ath/ath11k/wmi.h
index 8f2c07d70a4a..0a045af5419b 100644
--- a/drivers/net/wireless/ath/ath11k/wmi.h
+++ b/drivers/net/wireless/ath/ath11k/wmi.h
@@ -1073,6 +1073,7 @@ enum wmi_tlv_vdev_param {
WMI_VDEV_PARAM_ENABLE_BCAST_PROBE_RESPONSE,
WMI_VDEV_PARAM_FILS_MAX_CHANNEL_GUARD_TIME,
WMI_VDEV_PARAM_HE_LTF = 0x74,
+ WMI_VDEV_PARAM_ENABLE_DISABLE_RTT_RESPONDER_ROLE = 0x7d,
WMI_VDEV_PARAM_BA_MODE = 0x7e,
WMI_VDEV_PARAM_AUTORATE_MISC_CFG = 0x80,
WMI_VDEV_PARAM_SET_HE_SOUNDING_MODE = 0x87,
diff --git a/drivers/net/wireless/ath/ath12k/Kconfig b/drivers/net/wireless/ath/ath12k/Kconfig
new file mode 100644
index 000000000000..4f9c514c13e7
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/Kconfig
@@ -0,0 +1,34 @@
+# SPDX-License-Identifier: BSD-3-Clause-Clear
+config ATH12K
+ tristate "Qualcomm Technologies Wi-Fi 7 support (ath12k)"
+ depends on MAC80211 && HAS_DMA && PCI
+ depends on CRYPTO_MICHAEL_MIC
+ select QCOM_QMI_HELPERS
+ select MHI_BUS
+ select QRTR
+ select QRTR_MHI
+ help
+ Enable support for Qualcomm Technologies Wi-Fi 7 (IEEE
+ 802.11be) family of chipsets, for example WCN7850 and
+ QCN9274.
+
+ If you choose to build a module, it'll be called ath12k.
+
+config ATH12K_DEBUG
+ bool "ath12k debugging"
+ depends on ATH12K
+ help
+ Enable debug support, for example debug messages which must
+ be enabled separately using the debug_mask module parameter.
+
+ If unsure, say Y to make it easier to debug problems. But if
+ you want optimal performance, choose N.
+
+config ATH12K_TRACING
+ bool "ath12k tracing support"
+ depends on ATH12K && EVENT_TRACING
+ help
+ Enable ath12k tracing infrastructure.
+
+ If unsure, say Y to make it easier to debug problems. But if
+ you want optimal performance, choose N.
diff --git a/drivers/net/wireless/ath/ath12k/Makefile b/drivers/net/wireless/ath/ath12k/Makefile
new file mode 100644
index 000000000000..62c52e733b5e
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/Makefile
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause-Clear
+obj-$(CONFIG_ATH12K) += ath12k.o
+ath12k-y += core.o \
+ hal.o \
+ hal_tx.o \
+ hal_rx.o \
+ wmi.o \
+ mac.o \
+ reg.o \
+ htc.o \
+ qmi.o \
+ dp.o \
+ dp_tx.o \
+ dp_rx.o \
+ debug.o \
+ ce.o \
+ peer.o \
+ dbring.o \
+ hw.o \
+ mhi.o \
+ pci.o \
+ dp_mon.o
+
+ath12k-$(CONFIG_ATH12K_TRACING) += trace.o
+
+# for tracing framework to find trace.h
+CFLAGS_trace.o := -I$(src)
diff --git a/drivers/net/wireless/ath/ath12k/ce.c b/drivers/net/wireless/ath/ath12k/ce.c
new file mode 100644
index 000000000000..aed6987804bf
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/ce.c
@@ -0,0 +1,964 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include "dp_rx.h"
+#include "debug.h"
+#include "hif.h"
+
+const struct ce_attr ath12k_host_ce_config_qcn9274[] = {
+ /* CE0: host->target HTC control and raw streams */
+ {
+ .flags = CE_ATTR_FLAGS,
+ .src_nentries = 16,
+ .src_sz_max = 2048,
+ .dest_nentries = 0,
+ },
+
+ /* CE1: target->host HTT + HTC control */
+ {
+ .flags = CE_ATTR_FLAGS,
+ .src_nentries = 0,
+ .src_sz_max = 2048,
+ .dest_nentries = 512,
+ .recv_cb = ath12k_htc_rx_completion_handler,
+ },
+
+ /* CE2: target->host WMI */
+ {
+ .flags = CE_ATTR_FLAGS,
+ .src_nentries = 0,
+ .src_sz_max = 2048,
+ .dest_nentries = 128,
+ .recv_cb = ath12k_htc_rx_completion_handler,
+ },
+
+ /* CE3: host->target WMI (mac0) */
+ {
+ .flags = CE_ATTR_FLAGS,
+ .src_nentries = 32,
+ .src_sz_max = 2048,
+ .dest_nentries = 0,
+ },
+
+ /* CE4: host->target HTT */
+ {
+ .flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
+ .src_nentries = 2048,
+ .src_sz_max = 256,
+ .dest_nentries = 0,
+ },
+
+ /* CE5: target->host pktlog */
+ {
+ .flags = CE_ATTR_FLAGS,
+ .src_nentries = 0,
+ .src_sz_max = 2048,
+ .dest_nentries = 512,
+ .recv_cb = ath12k_dp_htt_htc_t2h_msg_handler,
+ },
+
+ /* CE6: target autonomous hif_memcpy */
+ {
+ .flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
+ .src_nentries = 0,
+ .src_sz_max = 0,
+ .dest_nentries = 0,
+ },
+
+ /* CE7: host->target WMI (mac1) */
+ {
+ .flags = CE_ATTR_FLAGS,
+ .src_nentries = 32,
+ .src_sz_max = 2048,
+ .dest_nentries = 0,
+ },
+
+ /* CE8: target autonomous hif_memcpy */
+ {
+ .flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
+ .src_nentries = 0,
+ .src_sz_max = 0,
+ .dest_nentries = 0,
+ },
+
+ /* CE9: MHI */
+ {
+ .flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
+ .src_nentries = 0,
+ .src_sz_max = 0,
+ .dest_nentries = 0,
+ },
+
+ /* CE10: MHI */
+ {
+ .flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
+ .src_nentries = 0,
+ .src_sz_max = 0,
+ .dest_nentries = 0,
+ },
+
+ /* CE11: MHI */
+ {
+ .flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
+ .src_nentries = 0,
+ .src_sz_max = 0,
+ .dest_nentries = 0,
+ },
+
+ /* CE12: CV Prefetch */
+ {
+ .flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
+ .src_nentries = 0,
+ .src_sz_max = 0,
+ .dest_nentries = 0,
+ },
+
+ /* CE13: CV Prefetch */
+ {
+ .flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
+ .src_nentries = 0,
+ .src_sz_max = 0,
+ .dest_nentries = 0,
+ },
+
+ /* CE14: target->host dbg log */
+ {
+ .flags = CE_ATTR_FLAGS,
+ .src_nentries = 0,
+ .src_sz_max = 2048,
+ .dest_nentries = 512,
+ .recv_cb = ath12k_htc_rx_completion_handler,
+ },
+
+ /* CE15: reserved for future use */
+ {
+ .flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
+ .src_nentries = 0,
+ .src_sz_max = 0,
+ .dest_nentries = 0,
+ },
+};
+
+const struct ce_attr ath12k_host_ce_config_wcn7850[] = {
+ /* CE0: host->target HTC control and raw streams */
+ {
+ .flags = CE_ATTR_FLAGS,
+ .src_nentries = 16,
+ .src_sz_max = 2048,
+ .dest_nentries = 0,
+ },
+
+ /* CE1: target->host HTT + HTC control */
+ {
+ .flags = CE_ATTR_FLAGS,
+ .src_nentries = 0,
+ .src_sz_max = 2048,
+ .dest_nentries = 512,
+ .recv_cb = ath12k_htc_rx_completion_handler,
+ },
+
+ /* CE2: target->host WMI */
+ {
+ .flags = CE_ATTR_FLAGS,
+ .src_nentries = 0,
+ .src_sz_max = 2048,
+ .dest_nentries = 64,
+ .recv_cb = ath12k_htc_rx_completion_handler,
+ },
+
+ /* CE3: host->target WMI (mac0) */
+ {
+ .flags = CE_ATTR_FLAGS,
+ .src_nentries = 32,
+ .src_sz_max = 2048,
+ .dest_nentries = 0,
+ },
+
+ /* CE4: host->target HTT */
+ {
+ .flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
+ .src_nentries = 2048,
+ .src_sz_max = 256,
+ .dest_nentries = 0,
+ },
+
+ /* CE5: target->host pktlog */
+ {
+ .flags = CE_ATTR_FLAGS,
+ .src_nentries = 0,
+ .src_sz_max = 0,
+ .dest_nentries = 0,
+ },
+
+ /* CE6: target autonomous hif_memcpy */
+ {
+ .flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
+ .src_nentries = 0,
+ .src_sz_max = 0,
+ .dest_nentries = 0,
+ },
+
+ /* CE7: host->target WMI (mac1) */
+ {
+ .flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
+ .src_nentries = 0,
+ .src_sz_max = 2048,
+ .dest_nentries = 0,
+ },
+
+ /* CE8: target autonomous hif_memcpy */
+ {
+ .flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
+ .src_nentries = 0,
+ .src_sz_max = 0,
+ .dest_nentries = 0,
+ },
+
+};
+
+static int ath12k_ce_rx_buf_enqueue_pipe(struct ath12k_ce_pipe *pipe,
+ struct sk_buff *skb, dma_addr_t paddr)
+{
+ struct ath12k_base *ab = pipe->ab;
+ struct ath12k_ce_ring *ring = pipe->dest_ring;
+ struct hal_srng *srng;
+ unsigned int write_index;
+ unsigned int nentries_mask = ring->nentries_mask;
+ struct hal_ce_srng_dest_desc *desc;
+ int ret;
+
+ lockdep_assert_held(&ab->ce.ce_lock);
+
+ write_index = ring->write_index;
+
+ srng = &ab->hal.srng_list[ring->hal_ring_id];
+
+ spin_lock_bh(&srng->lock);
+
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ if (unlikely(ath12k_hal_srng_src_num_free(ab, srng, false) < 1)) {
+ ret = -ENOSPC;
+ goto exit;
+ }
+
+ desc = ath12k_hal_srng_src_get_next_entry(ab, srng);
+ if (!desc) {
+ ret = -ENOSPC;
+ goto exit;
+ }
+
+ ath12k_hal_ce_dst_set_desc(desc, paddr);
+
+ ring->skb[write_index] = skb;
+ write_index = CE_RING_IDX_INCR(nentries_mask, write_index);
+ ring->write_index = write_index;
+
+ pipe->rx_buf_needed--;
+
+ ret = 0;
+exit:
+ ath12k_hal_srng_access_end(ab, srng);
+
+ spin_unlock_bh(&srng->lock);
+
+ return ret;
+}
+
+static int ath12k_ce_rx_post_pipe(struct ath12k_ce_pipe *pipe)
+{
+ struct ath12k_base *ab = pipe->ab;
+ struct sk_buff *skb;
+ dma_addr_t paddr;
+ int ret = 0;
+
+ if (!(pipe->dest_ring || pipe->status_ring))
+ return 0;
+
+ spin_lock_bh(&ab->ce.ce_lock);
+ while (pipe->rx_buf_needed) {
+ skb = dev_alloc_skb(pipe->buf_sz);
+ if (!skb) {
+ ret = -ENOMEM;
+ goto exit;
+ }
+
+ WARN_ON_ONCE(!IS_ALIGNED((unsigned long)skb->data, 4));
+
+ paddr = dma_map_single(ab->dev, skb->data,
+ skb->len + skb_tailroom(skb),
+ DMA_FROM_DEVICE);
+ if (unlikely(dma_mapping_error(ab->dev, paddr))) {
+ ath12k_warn(ab, "failed to dma map ce rx buf\n");
+ dev_kfree_skb_any(skb);
+ ret = -EIO;
+ goto exit;
+ }
+
+ ATH12K_SKB_RXCB(skb)->paddr = paddr;
+
+ ret = ath12k_ce_rx_buf_enqueue_pipe(pipe, skb, paddr);
+ if (ret) {
+ ath12k_warn(ab, "failed to enqueue rx buf: %d\n", ret);
+ dma_unmap_single(ab->dev, paddr,
+ skb->len + skb_tailroom(skb),
+ DMA_FROM_DEVICE);
+ dev_kfree_skb_any(skb);
+ goto exit;
+ }
+ }
+
+exit:
+ spin_unlock_bh(&ab->ce.ce_lock);
+ return ret;
+}
+
+static int ath12k_ce_completed_recv_next(struct ath12k_ce_pipe *pipe,
+ struct sk_buff **skb, int *nbytes)
+{
+ struct ath12k_base *ab = pipe->ab;
+ struct hal_ce_srng_dst_status_desc *desc;
+ struct hal_srng *srng;
+ unsigned int sw_index;
+ unsigned int nentries_mask;
+ int ret = 0;
+
+ spin_lock_bh(&ab->ce.ce_lock);
+
+ sw_index = pipe->dest_ring->sw_index;
+ nentries_mask = pipe->dest_ring->nentries_mask;
+
+ srng = &ab->hal.srng_list[pipe->status_ring->hal_ring_id];
+
+ spin_lock_bh(&srng->lock);
+
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ desc = ath12k_hal_srng_dst_get_next_entry(ab, srng);
+ if (!desc) {
+ ret = -EIO;
+ goto err;
+ }
+
+ *nbytes = ath12k_hal_ce_dst_status_get_length(desc);
+ if (*nbytes == 0) {
+ ret = -EIO;
+ goto err;
+ }
+
+ *skb = pipe->dest_ring->skb[sw_index];
+ pipe->dest_ring->skb[sw_index] = NULL;
+
+ sw_index = CE_RING_IDX_INCR(nentries_mask, sw_index);
+ pipe->dest_ring->sw_index = sw_index;
+
+ pipe->rx_buf_needed++;
+err:
+ ath12k_hal_srng_access_end(ab, srng);
+
+ spin_unlock_bh(&srng->lock);
+
+ spin_unlock_bh(&ab->ce.ce_lock);
+
+ return ret;
+}
+
+static void ath12k_ce_recv_process_cb(struct ath12k_ce_pipe *pipe)
+{
+ struct ath12k_base *ab = pipe->ab;
+ struct sk_buff *skb;
+ struct sk_buff_head list;
+ unsigned int nbytes, max_nbytes;
+ int ret;
+
+ __skb_queue_head_init(&list);
+ while (ath12k_ce_completed_recv_next(pipe, &skb, &nbytes) == 0) {
+ max_nbytes = skb->len + skb_tailroom(skb);
+ dma_unmap_single(ab->dev, ATH12K_SKB_RXCB(skb)->paddr,
+ max_nbytes, DMA_FROM_DEVICE);
+
+ if (unlikely(max_nbytes < nbytes)) {
+ ath12k_warn(ab, "rxed more than expected (nbytes %d, max %d)",
+ nbytes, max_nbytes);
+ dev_kfree_skb_any(skb);
+ continue;
+ }
+
+ skb_put(skb, nbytes);
+ __skb_queue_tail(&list, skb);
+ }
+
+ while ((skb = __skb_dequeue(&list))) {
+ ath12k_dbg(ab, ATH12K_DBG_AHB, "rx ce pipe %d len %d\n",
+ pipe->pipe_num, skb->len);
+ pipe->recv_cb(ab, skb);
+ }
+
+ ret = ath12k_ce_rx_post_pipe(pipe);
+ if (ret && ret != -ENOSPC) {
+ ath12k_warn(ab, "failed to post rx buf to pipe: %d err: %d\n",
+ pipe->pipe_num, ret);
+ mod_timer(&ab->rx_replenish_retry,
+ jiffies + ATH12K_CE_RX_POST_RETRY_JIFFIES);
+ }
+}
+
+static struct sk_buff *ath12k_ce_completed_send_next(struct ath12k_ce_pipe *pipe)
+{
+ struct ath12k_base *ab = pipe->ab;
+ struct hal_ce_srng_src_desc *desc;
+ struct hal_srng *srng;
+ unsigned int sw_index;
+ unsigned int nentries_mask;
+ struct sk_buff *skb;
+
+ spin_lock_bh(&ab->ce.ce_lock);
+
+ sw_index = pipe->src_ring->sw_index;
+ nentries_mask = pipe->src_ring->nentries_mask;
+
+ srng = &ab->hal.srng_list[pipe->src_ring->hal_ring_id];
+
+ spin_lock_bh(&srng->lock);
+
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ desc = ath12k_hal_srng_src_reap_next(ab, srng);
+ if (!desc) {
+ skb = ERR_PTR(-EIO);
+ goto err_unlock;
+ }
+
+ skb = pipe->src_ring->skb[sw_index];
+
+ pipe->src_ring->skb[sw_index] = NULL;
+
+ sw_index = CE_RING_IDX_INCR(nentries_mask, sw_index);
+ pipe->src_ring->sw_index = sw_index;
+
+err_unlock:
+ spin_unlock_bh(&srng->lock);
+
+ spin_unlock_bh(&ab->ce.ce_lock);
+
+ return skb;
+}
+
+static void ath12k_ce_send_done_cb(struct ath12k_ce_pipe *pipe)
+{
+ struct ath12k_base *ab = pipe->ab;
+ struct sk_buff *skb;
+
+ while (!IS_ERR(skb = ath12k_ce_completed_send_next(pipe))) {
+ if (!skb)
+ continue;
+
+ dma_unmap_single(ab->dev, ATH12K_SKB_CB(skb)->paddr, skb->len,
+ DMA_TO_DEVICE);
+ dev_kfree_skb_any(skb);
+ }
+}
+
+static void ath12k_ce_srng_msi_ring_params_setup(struct ath12k_base *ab, u32 ce_id,
+ struct hal_srng_params *ring_params)
+{
+ u32 msi_data_start;
+ u32 msi_data_count, msi_data_idx;
+ u32 msi_irq_start;
+ u32 addr_lo;
+ u32 addr_hi;
+ int ret;
+
+ ret = ath12k_hif_get_user_msi_vector(ab, "CE",
+ &msi_data_count, &msi_data_start,
+ &msi_irq_start);
+
+ if (ret)
+ return;
+
+ ath12k_hif_get_msi_address(ab, &addr_lo, &addr_hi);
+ ath12k_hif_get_ce_msi_idx(ab, ce_id, &msi_data_idx);
+
+ ring_params->msi_addr = addr_lo;
+ ring_params->msi_addr |= (dma_addr_t)(((uint64_t)addr_hi) << 32);
+ ring_params->msi_data = (msi_data_idx % msi_data_count) + msi_data_start;
+ ring_params->flags |= HAL_SRNG_FLAGS_MSI_INTR;
+}
+
+static int ath12k_ce_init_ring(struct ath12k_base *ab,
+ struct ath12k_ce_ring *ce_ring,
+ int ce_id, enum hal_ring_type type)
+{
+ struct hal_srng_params params = { 0 };
+ int ret;
+
+ params.ring_base_paddr = ce_ring->base_addr_ce_space;
+ params.ring_base_vaddr = ce_ring->base_addr_owner_space;
+ params.num_entries = ce_ring->nentries;
+
+ if (!(CE_ATTR_DIS_INTR & ab->hw_params->host_ce_config[ce_id].flags))
+ ath12k_ce_srng_msi_ring_params_setup(ab, ce_id, &params);
+
+ switch (type) {
+ case HAL_CE_SRC:
+ if (!(CE_ATTR_DIS_INTR & ab->hw_params->host_ce_config[ce_id].flags))
+ params.intr_batch_cntr_thres_entries = 1;
+ break;
+ case HAL_CE_DST:
+ params.max_buffer_len = ab->hw_params->host_ce_config[ce_id].src_sz_max;
+ if (!(ab->hw_params->host_ce_config[ce_id].flags & CE_ATTR_DIS_INTR)) {
+ params.intr_timer_thres_us = 1024;
+ params.flags |= HAL_SRNG_FLAGS_LOW_THRESH_INTR_EN;
+ params.low_threshold = ce_ring->nentries - 3;
+ }
+ break;
+ case HAL_CE_DST_STATUS:
+ if (!(ab->hw_params->host_ce_config[ce_id].flags & CE_ATTR_DIS_INTR)) {
+ params.intr_batch_cntr_thres_entries = 1;
+ params.intr_timer_thres_us = 0x1000;
+ }
+ break;
+ default:
+ ath12k_warn(ab, "Invalid CE ring type %d\n", type);
+ return -EINVAL;
+ }
+
+ /* TODO: Init other params needed by HAL to init the ring */
+
+ ret = ath12k_hal_srng_setup(ab, type, ce_id, 0, &params);
+ if (ret < 0) {
+ ath12k_warn(ab, "failed to setup srng: %d ring_id %d\n",
+ ret, ce_id);
+ return ret;
+ }
+
+ ce_ring->hal_ring_id = ret;
+
+ return 0;
+}
+
+static struct ath12k_ce_ring *
+ath12k_ce_alloc_ring(struct ath12k_base *ab, int nentries, int desc_sz)
+{
+ struct ath12k_ce_ring *ce_ring;
+ dma_addr_t base_addr;
+
+ ce_ring = kzalloc(struct_size(ce_ring, skb, nentries), GFP_KERNEL);
+ if (!ce_ring)
+ return ERR_PTR(-ENOMEM);
+
+ ce_ring->nentries = nentries;
+ ce_ring->nentries_mask = nentries - 1;
+
+ /* Legacy platforms without cache-coherent DMA
+ * are not supported.
+ */
+ ce_ring->base_addr_owner_space_unaligned =
+ dma_alloc_coherent(ab->dev,
+ nentries * desc_sz + CE_DESC_RING_ALIGN,
+ &base_addr, GFP_KERNEL);
+ if (!ce_ring->base_addr_owner_space_unaligned) {
+ kfree(ce_ring);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ ce_ring->base_addr_ce_space_unaligned = base_addr;
+
+ ce_ring->base_addr_owner_space =
+ PTR_ALIGN(ce_ring->base_addr_owner_space_unaligned,
+ CE_DESC_RING_ALIGN);
+
+ ce_ring->base_addr_ce_space = ALIGN(ce_ring->base_addr_ce_space_unaligned,
+ CE_DESC_RING_ALIGN);
+
+ return ce_ring;
+}
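
ath12k_ce_alloc_ring() over-allocates the descriptor area by CE_DESC_RING_ALIGN and then rounds both the host and CE addresses up to that boundary with PTR_ALIGN()/ALIGN(). A minimal userspace sketch of that round-up, assuming the usual power-of-two alignment trick (MY_ALIGN is a stand-in, not a kernel symbol):

    #include <assert.h>
    #include <stdint.h>

    /* round x up to the next multiple of a; a must be a power of two */
    #define MY_ALIGN(x, a) (((x) + (a) - 1) & ~((uintptr_t)(a) - 1))

    int main(void)
    {
        uintptr_t unaligned = 0x1003;               /* as returned by the allocator */
        uintptr_t aligned = MY_ALIGN(unaligned, 8); /* CE_DESC_RING_ALIGN is 8 */

        assert(aligned == 0x1008);
        assert((aligned & 7) == 0);                 /* now boundary aligned */
        return 0;
    }
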
+
+static int ath12k_ce_alloc_pipe(struct ath12k_base *ab, int ce_id)
+{
+ struct ath12k_ce_pipe *pipe = &ab->ce.ce_pipe[ce_id];
+ const struct ce_attr *attr = &ab->hw_params->host_ce_config[ce_id];
+ struct ath12k_ce_ring *ring;
+ int nentries;
+ int desc_sz;
+
+ pipe->attr_flags = attr->flags;
+
+ if (attr->src_nentries) {
+ pipe->send_cb = ath12k_ce_send_done_cb;
+ nentries = roundup_pow_of_two(attr->src_nentries);
+ desc_sz = ath12k_hal_ce_get_desc_size(HAL_CE_DESC_SRC);
+ ring = ath12k_ce_alloc_ring(ab, nentries, desc_sz);
+ if (IS_ERR(ring))
+ return PTR_ERR(ring);
+ pipe->src_ring = ring;
+ }
+
+ if (attr->dest_nentries) {
+ pipe->recv_cb = attr->recv_cb;
+ nentries = roundup_pow_of_two(attr->dest_nentries);
+ desc_sz = ath12k_hal_ce_get_desc_size(HAL_CE_DESC_DST);
+ ring = ath12k_ce_alloc_ring(ab, nentries, desc_sz);
+ if (IS_ERR(ring))
+ return PTR_ERR(ring);
+ pipe->dest_ring = ring;
+
+ desc_sz = ath12k_hal_ce_get_desc_size(HAL_CE_DESC_DST_STATUS);
+ ring = ath12k_ce_alloc_ring(ab, nentries, desc_sz);
+ if (IS_ERR(ring))
+ return PTR_ERR(ring);
+ pipe->status_ring = ring;
+ }
+
+ return 0;
+}
+
+void ath12k_ce_per_engine_service(struct ath12k_base *ab, u16 ce_id)
+{
+ struct ath12k_ce_pipe *pipe = &ab->ce.ce_pipe[ce_id];
+
+ if (pipe->send_cb)
+ pipe->send_cb(pipe);
+
+ if (pipe->recv_cb)
+ ath12k_ce_recv_process_cb(pipe);
+}
+
+void ath12k_ce_poll_send_completed(struct ath12k_base *ab, u8 pipe_id)
+{
+ struct ath12k_ce_pipe *pipe = &ab->ce.ce_pipe[pipe_id];
+
+ if ((pipe->attr_flags & CE_ATTR_DIS_INTR) && pipe->send_cb)
+ pipe->send_cb(pipe);
+}
+
+int ath12k_ce_send(struct ath12k_base *ab, struct sk_buff *skb, u8 pipe_id,
+ u16 transfer_id)
+{
+ struct ath12k_ce_pipe *pipe = &ab->ce.ce_pipe[pipe_id];
+ struct hal_ce_srng_src_desc *desc;
+ struct hal_srng *srng;
+ unsigned int write_index, sw_index;
+ unsigned int nentries_mask;
+ int ret = 0;
+ u8 byte_swap_data = 0;
+ int num_used;
+
+ /* Check if some entries can be reclaimed by handling tx completions
+ * when the CE has interrupts disabled and the number of used entries
+ * exceeds the defined usage threshold.
+ */
+ if (pipe->attr_flags & CE_ATTR_DIS_INTR) {
+ spin_lock_bh(&ab->ce.ce_lock);
+ write_index = pipe->src_ring->write_index;
+
+ sw_index = pipe->src_ring->sw_index;
+
+ if (write_index >= sw_index)
+ num_used = write_index - sw_index;
+ else
+ num_used = pipe->src_ring->nentries - sw_index +
+ write_index;
+
+ spin_unlock_bh(&ab->ce.ce_lock);
+
+ if (num_used > ATH12K_CE_USAGE_THRESHOLD)
+ ath12k_ce_poll_send_completed(ab, pipe->pipe_num);
+ }
+
+ if (test_bit(ATH12K_FLAG_CRASH_FLUSH, &ab->dev_flags))
+ return -ESHUTDOWN;
+
+ spin_lock_bh(&ab->ce.ce_lock);
+
+ write_index = pipe->src_ring->write_index;
+ nentries_mask = pipe->src_ring->nentries_mask;
+
+ srng = &ab->hal.srng_list[pipe->src_ring->hal_ring_id];
+
+ spin_lock_bh(&srng->lock);
+
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ if (unlikely(ath12k_hal_srng_src_num_free(ab, srng, false) < 1)) {
+ ath12k_hal_srng_access_end(ab, srng);
+ ret = -ENOBUFS;
+ goto unlock;
+ }
+
+ desc = ath12k_hal_srng_src_get_next_reaped(ab, srng);
+ if (!desc) {
+ ath12k_hal_srng_access_end(ab, srng);
+ ret = -ENOBUFS;
+ goto unlock;
+ }
+
+ if (pipe->attr_flags & CE_ATTR_BYTE_SWAP_DATA)
+ byte_swap_data = 1;
+
+ ath12k_hal_ce_src_set_desc(desc, ATH12K_SKB_CB(skb)->paddr,
+ skb->len, transfer_id, byte_swap_data);
+
+ pipe->src_ring->skb[write_index] = skb;
+ pipe->src_ring->write_index = CE_RING_IDX_INCR(nentries_mask,
+ write_index);
+
+ ath12k_hal_srng_access_end(ab, srng);
+
+unlock:
+ spin_unlock_bh(&srng->lock);
+
+ spin_unlock_bh(&ab->ce.ce_lock);
+
+ return ret;
+}
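
The num_used computation at the top of ath12k_ce_send() is the standard occupancy formula for a circular ring: when the write index has wrapped behind the software index, the in-flight count is the tail of the ring plus the wrapped part. A self-contained sketch of both branches (the values are illustrative only):

    #include <assert.h>

    static unsigned int ring_used(unsigned int write_index,
                                  unsigned int sw_index,
                                  unsigned int nentries)
    {
        if (write_index >= sw_index)
            return write_index - sw_index;

        /* write index wrapped past the end of the ring */
        return nentries - sw_index + write_index;
    }

    int main(void)
    {
        assert(ring_used(5, 2, 8) == 3); /* no wrap: 5 - 2 */
        assert(ring_used(1, 6, 8) == 3); /* wrapped: (8 - 6) + 1 */
        return 0;
    }
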
+
+static void ath12k_ce_rx_pipe_cleanup(struct ath12k_ce_pipe *pipe)
+{
+ struct ath12k_base *ab = pipe->ab;
+ struct ath12k_ce_ring *ring = pipe->dest_ring;
+ struct sk_buff *skb;
+ int i;
+
+ if (!(ring && pipe->buf_sz))
+ return;
+
+ for (i = 0; i < ring->nentries; i++) {
+ skb = ring->skb[i];
+ if (!skb)
+ continue;
+
+ ring->skb[i] = NULL;
+ dma_unmap_single(ab->dev, ATH12K_SKB_RXCB(skb)->paddr,
+ skb->len + skb_tailroom(skb), DMA_FROM_DEVICE);
+ dev_kfree_skb_any(skb);
+ }
+}
+
+void ath12k_ce_cleanup_pipes(struct ath12k_base *ab)
+{
+ struct ath12k_ce_pipe *pipe;
+ int pipe_num;
+
+ for (pipe_num = 0; pipe_num < ab->hw_params->ce_count; pipe_num++) {
+ pipe = &ab->ce.ce_pipe[pipe_num];
+ ath12k_ce_rx_pipe_cleanup(pipe);
+
+ /* Clean up any src CEs which have interrupts disabled */
+ ath12k_ce_poll_send_completed(ab, pipe_num);
+
+ /* NOTE: Should we also clean up tx buffers in all pipes? */
+ }
+}
+
+void ath12k_ce_rx_post_buf(struct ath12k_base *ab)
+{
+ struct ath12k_ce_pipe *pipe;
+ int i;
+ int ret;
+
+ for (i = 0; i < ab->hw_params->ce_count; i++) {
+ pipe = &ab->ce.ce_pipe[i];
+ ret = ath12k_ce_rx_post_pipe(pipe);
+ if (ret) {
+ if (ret == -ENOSPC)
+ continue;
+
+ ath12k_warn(ab, "failed to post rx buf to pipe: %d err: %d\n",
+ i, ret);
+ mod_timer(&ab->rx_replenish_retry,
+ jiffies + ATH12K_CE_RX_POST_RETRY_JIFFIES);
+
+ return;
+ }
+ }
+}
+
+void ath12k_ce_rx_replenish_retry(struct timer_list *t)
+{
+ struct ath12k_base *ab = from_timer(ab, t, rx_replenish_retry);
+
+ ath12k_ce_rx_post_buf(ab);
+}
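
from_timer() used above is a container_of() wrapper: the callback only receives a pointer to the embedded timer, and recovers the enclosing ath12k_base from it. A hedged userspace sketch of the same pattern with made-up types:

    #include <assert.h>
    #include <stddef.h>

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct my_timer { int dummy; };

    struct my_base {
        int id;
        struct my_timer rx_replenish_retry;
    };

    static void timer_cb(struct my_timer *t)
    {
        /* recover the enclosing object from the embedded member */
        struct my_base *ab = container_of(t, struct my_base, rx_replenish_retry);

        assert(ab->id == 42);
    }

    int main(void)
    {
        struct my_base ab = { .id = 42 };

        timer_cb(&ab.rx_replenish_retry);
        return 0;
    }
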
+
+static void ath12k_ce_shadow_config(struct ath12k_base *ab)
+{
+ int i;
+
+ for (i = 0; i < ab->hw_params->ce_count; i++) {
+ if (ab->hw_params->host_ce_config[i].src_nentries)
+ ath12k_hal_srng_update_shadow_config(ab, HAL_CE_SRC, i);
+
+ if (ab->hw_params->host_ce_config[i].dest_nentries) {
+ ath12k_hal_srng_update_shadow_config(ab, HAL_CE_DST, i);
+ ath12k_hal_srng_update_shadow_config(ab, HAL_CE_DST_STATUS, i);
+ }
+ }
+}
+
+void ath12k_ce_get_shadow_config(struct ath12k_base *ab,
+ u32 **shadow_cfg, u32 *shadow_cfg_len)
+{
+ if (!ab->hw_params->supports_shadow_regs)
+ return;
+
+ ath12k_hal_srng_get_shadow_config(ab, shadow_cfg, shadow_cfg_len);
+
+ /* shadow is already configured */
+ if (*shadow_cfg_len)
+ return;
+
+ /* shadow isn't configured yet, so configure it now:
+ * non-CE srngs are configured first, then
+ * all CE srngs.
+ */
+ ath12k_hal_srng_shadow_config(ab);
+ ath12k_ce_shadow_config(ab);
+
+ /* get the shadow configuration */
+ ath12k_hal_srng_get_shadow_config(ab, shadow_cfg, shadow_cfg_len);
+}
+
+int ath12k_ce_init_pipes(struct ath12k_base *ab)
+{
+ struct ath12k_ce_pipe *pipe;
+ int i;
+ int ret;
+
+ ath12k_ce_get_shadow_config(ab, &ab->qmi.ce_cfg.shadow_reg_v3,
+ &ab->qmi.ce_cfg.shadow_reg_v3_len);
+
+ for (i = 0; i < ab->hw_params->ce_count; i++) {
+ pipe = &ab->ce.ce_pipe[i];
+
+ if (pipe->src_ring) {
+ ret = ath12k_ce_init_ring(ab, pipe->src_ring, i,
+ HAL_CE_SRC);
+ if (ret) {
+ ath12k_warn(ab, "failed to init src ring: %d\n",
+ ret);
+ /* Should we clear any partial init? */
+ return ret;
+ }
+
+ pipe->src_ring->write_index = 0;
+ pipe->src_ring->sw_index = 0;
+ }
+
+ if (pipe->dest_ring) {
+ ret = ath12k_ce_init_ring(ab, pipe->dest_ring, i,
+ HAL_CE_DST);
+ if (ret) {
+ ath12k_warn(ab, "failed to init dest ring: %d\n",
+ ret);
+ /* Should we clear any partial init? */
+ return ret;
+ }
+
+ pipe->rx_buf_needed = pipe->dest_ring->nentries ?
+ pipe->dest_ring->nentries - 2 : 0;
+
+ pipe->dest_ring->write_index = 0;
+ pipe->dest_ring->sw_index = 0;
+ }
+
+ if (pipe->status_ring) {
+ ret = ath12k_ce_init_ring(ab, pipe->status_ring, i,
+ HAL_CE_DST_STATUS);
+ if (ret) {
+ ath12k_warn(ab, "failed to init dest status ing: %d\n",
+ ret);
+ /* Should we clear any partial init? */
+ return ret;
+ }
+
+ pipe->status_ring->write_index = 0;
+ pipe->status_ring->sw_index = 0;
+ }
+ }
+
+ return 0;
+}
+
+void ath12k_ce_free_pipes(struct ath12k_base *ab)
+{
+ struct ath12k_ce_pipe *pipe;
+ int desc_sz;
+ int i;
+
+ for (i = 0; i < ab->hw_params->ce_count; i++) {
+ pipe = &ab->ce.ce_pipe[i];
+
+ if (pipe->src_ring) {
+ desc_sz = ath12k_hal_ce_get_desc_size(HAL_CE_DESC_SRC);
+ dma_free_coherent(ab->dev,
+ pipe->src_ring->nentries * desc_sz +
+ CE_DESC_RING_ALIGN,
+ pipe->src_ring->base_addr_owner_space,
+ pipe->src_ring->base_addr_ce_space);
+ kfree(pipe->src_ring);
+ pipe->src_ring = NULL;
+ }
+
+ if (pipe->dest_ring) {
+ desc_sz = ath12k_hal_ce_get_desc_size(HAL_CE_DESC_DST);
+ dma_free_coherent(ab->dev,
+ pipe->dest_ring->nentries * desc_sz +
+ CE_DESC_RING_ALIGN,
+ pipe->dest_ring->base_addr_owner_space,
+ pipe->dest_ring->base_addr_ce_space);
+ kfree(pipe->dest_ring);
+ pipe->dest_ring = NULL;
+ }
+
+ if (pipe->status_ring) {
+ desc_sz =
+ ath12k_hal_ce_get_desc_size(HAL_CE_DESC_DST_STATUS);
+ dma_free_coherent(ab->dev,
+ pipe->status_ring->nentries * desc_sz +
+ CE_DESC_RING_ALIGN,
+ pipe->status_ring->base_addr_owner_space,
+ pipe->status_ring->base_addr_ce_space);
+ kfree(pipe->status_ring);
+ pipe->status_ring = NULL;
+ }
+ }
+}
+
+int ath12k_ce_alloc_pipes(struct ath12k_base *ab)
+{
+ struct ath12k_ce_pipe *pipe;
+ int i;
+ int ret;
+ const struct ce_attr *attr;
+
+ spin_lock_init(&ab->ce.ce_lock);
+
+ for (i = 0; i < ab->hw_params->ce_count; i++) {
+ attr = &ab->hw_params->host_ce_config[i];
+ pipe = &ab->ce.ce_pipe[i];
+ pipe->pipe_num = i;
+ pipe->ab = ab;
+ pipe->buf_sz = attr->src_sz_max;
+
+ ret = ath12k_ce_alloc_pipe(ab, i);
+ if (ret) {
+ /* Free any partially successful allocations */
+ ath12k_ce_free_pipes(ab);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+int ath12k_ce_get_attr_flags(struct ath12k_base *ab, int ce_id)
+{
+ if (ce_id >= ab->hw_params->ce_count)
+ return -EINVAL;
+
+ return ab->hw_params->host_ce_config[ce_id].flags;
+}
diff --git a/drivers/net/wireless/ath/ath12k/ce.h b/drivers/net/wireless/ath/ath12k/ce.h
new file mode 100644
index 000000000000..17cf16235e0b
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/ce.h
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_CE_H
+#define ATH12K_CE_H
+
+#define CE_COUNT_MAX 16
+
+/* Byte swap data words */
+#define CE_ATTR_BYTE_SWAP_DATA 2
+
+/* no interrupt on copy completion */
+#define CE_ATTR_DIS_INTR 8
+
+/* Host software's Copy Engine configuration. */
+#define CE_ATTR_FLAGS 0
+
+/* Threshold to poll for tx completion for CEs with interrupts disabled */
+#define ATH12K_CE_USAGE_THRESHOLD 32
+
+/* Directions for interconnect pipe configuration.
+ * These definitions may be used during configuration and are shared
+ * between Host and Target.
+ *
+ * Pipe Directions are relative to the Host, so PIPEDIR_IN means
+ * "coming IN over air through Target to Host" as with a WiFi Rx operation.
+ * Conversely, PIPEDIR_OUT means "going OUT from Host through Target over air"
+ * as with a WiFi Tx operation. This is somewhat awkward for the "middle-man"
+ * Target since things that are "PIPEDIR_OUT" are coming IN to the Target
+ * over the interconnect.
+ */
+#define PIPEDIR_NONE 0
+#define PIPEDIR_IN 1 /* Target-->Host, WiFi Rx direction */
+#define PIPEDIR_OUT 2 /* Host->Target, WiFi Tx direction */
+#define PIPEDIR_INOUT 3 /* bidirectional */
+#define PIPEDIR_INOUT_H2H 4 /* bidirectional, host to host */
+
+/* CE address/mask */
+#define CE_HOST_IE_ADDRESS 0x00A1803C
+#define CE_HOST_IE_2_ADDRESS 0x00A18040
+#define CE_HOST_IE_3_ADDRESS CE_HOST_IE_ADDRESS
+
+#define CE_HOST_IE_3_SHIFT 0xC
+
+#define CE_RING_IDX_INCR(nentries_mask, idx) (((idx) + 1) & (nentries_mask))
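
CE_RING_IDX_INCR relies on every ring size being a power of two, so masking with nentries - 1 makes the increment wrap to zero without a branch. A quick userspace check of that arithmetic (the macro is copied, the rest is illustrative):

    #include <assert.h>

    #define CE_RING_IDX_INCR(nentries_mask, idx) (((idx) + 1) & (nentries_mask))

    int main(void)
    {
        unsigned int nentries = 8;        /* must be a power of two */
        unsigned int mask = nentries - 1;

        assert(CE_RING_IDX_INCR(mask, 3) == 4); /* normal increment */
        assert(CE_RING_IDX_INCR(mask, 7) == 0); /* wraps at the end */
        return 0;
    }
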
+
+#define ATH12K_CE_RX_POST_RETRY_JIFFIES 50
+
+struct ath12k_base;
+
+/* Establish a mapping between a service/direction and a pipe.
+ * Configuration information for a Copy Engine pipe and services.
+ * Passed from Host to Target through QMI message and must be in
+ * little endian format.
+ */
+struct service_to_pipe {
+ __le32 service_id;
+ __le32 pipedir;
+ __le32 pipenum;
+};
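
Because service_to_pipe crosses the host/firmware boundary, each field is stored little endian regardless of host byte order. A sketch of filling one entry in userspace (htole32 stands in for the kernel's cpu_to_le32; the service id is a made-up value):

    #include <endian.h>
    #include <stdint.h>

    struct service_to_pipe {
        uint32_t service_id;   /* __le32 in the driver */
        uint32_t pipedir;
        uint32_t pipenum;
    };

    int main(void)
    {
        struct service_to_pipe map = {
            .service_id = htole32(3),   /* hypothetical service id */
            .pipedir    = htole32(2),   /* PIPEDIR_OUT: host->target */
            .pipenum    = htole32(3),   /* CE3: host->target WMI */
        };

        (void)map;
        return 0;
    }
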
+
+/* Configuration information for a Copy Engine pipe.
+ * Passed from Host to Target through QMI message during startup (one per CE).
+ *
+ * NOTE: Structure is shared between Host software and Target firmware!
+ */
+struct ce_pipe_config {
+ __le32 pipenum;
+ __le32 pipedir;
+ __le32 nentries;
+ __le32 nbytes_max;
+ __le32 flags;
+ __le32 reserved;
+};
+
+struct ce_attr {
+ /* CE_ATTR_* values */
+ unsigned int flags;
+
+ /* #entries in source ring - Must be a power of 2 */
+ unsigned int src_nentries;
+
+ /* Max source send size for this CE.
+ * This is also the minimum size of a destination buffer.
+ */
+ unsigned int src_sz_max;
+
+ /* #entries in destination ring - Must be a power of 2 */
+ unsigned int dest_nentries;
+
+ void (*recv_cb)(struct ath12k_base *ab, struct sk_buff *skb);
+};
+
+#define CE_DESC_RING_ALIGN 8
+
+struct ath12k_ce_ring {
+ /* Number of entries in this ring; must be power of 2 */
+ unsigned int nentries;
+ unsigned int nentries_mask;
+
+ /* For dest ring, this is the next index to be processed
+ * by software after it was/is received into.
+ *
+ * For src ring, this is the last descriptor that was sent
+ * and completion processed by software.
+ *
+ * Regardless of src or dest ring, this is an invariant
+ * (modulo ring size):
+ * write index >= read index >= sw_index
+ */
+ unsigned int sw_index;
+ /* cached copy */
+ unsigned int write_index;
+
+ /* Start of DMA-coherent area reserved for descriptors */
+ /* Host address space */
+ void *base_addr_owner_space_unaligned;
+ /* CE address space */
+ u32 base_addr_ce_space_unaligned;
+
+ /* Actual start of descriptors.
+ * Aligned to descriptor-size boundary.
+ * Points into reserved DMA-coherent area, above.
+ */
+ /* Host address space */
+ void *base_addr_owner_space;
+
+ /* CE address space */
+ u32 base_addr_ce_space;
+
+ /* HAL ring id */
+ u32 hal_ring_id;
+
+ /* keep last */
+ struct sk_buff *skb[];
+};
+
+struct ath12k_ce_pipe {
+ struct ath12k_base *ab;
+ u16 pipe_num;
+ unsigned int attr_flags;
+ unsigned int buf_sz;
+ unsigned int rx_buf_needed;
+
+ void (*send_cb)(struct ath12k_ce_pipe *pipe);
+ void (*recv_cb)(struct ath12k_base *ab, struct sk_buff *skb);
+
+ struct tasklet_struct intr_tq;
+ struct ath12k_ce_ring *src_ring;
+ struct ath12k_ce_ring *dest_ring;
+ struct ath12k_ce_ring *status_ring;
+ u64 timestamp;
+};
+
+struct ath12k_ce {
+ struct ath12k_ce_pipe ce_pipe[CE_COUNT_MAX];
+ /* Protects rings of all ce pipes */
+ spinlock_t ce_lock;
+ struct ath12k_hp_update_timer hp_timer[CE_COUNT_MAX];
+};
+
+extern const struct ce_attr ath12k_host_ce_config_qcn9274[];
+extern const struct ce_attr ath12k_host_ce_config_wcn7850[];
+
+void ath12k_ce_cleanup_pipes(struct ath12k_base *ab);
+void ath12k_ce_rx_replenish_retry(struct timer_list *t);
+void ath12k_ce_per_engine_service(struct ath12k_base *ab, u16 ce_id);
+int ath12k_ce_send(struct ath12k_base *ab, struct sk_buff *skb, u8 pipe_id,
+ u16 transfer_id);
+void ath12k_ce_rx_post_buf(struct ath12k_base *ab);
+int ath12k_ce_init_pipes(struct ath12k_base *ab);
+int ath12k_ce_alloc_pipes(struct ath12k_base *ab);
+void ath12k_ce_free_pipes(struct ath12k_base *ab);
+int ath12k_ce_get_attr_flags(struct ath12k_base *ab, int ce_id);
+void ath12k_ce_poll_send_completed(struct ath12k_base *ab, u8 pipe_id);
+int ath12k_ce_map_service_to_pipe(struct ath12k_base *ab, u16 service_id,
+ u8 *ul_pipe, u8 *dl_pipe);
+int ath12k_ce_attr_attach(struct ath12k_base *ab);
+void ath12k_ce_get_shadow_config(struct ath12k_base *ab,
+ u32 **shadow_cfg, u32 *shadow_cfg_len);
+#endif
diff --git a/drivers/net/wireless/ath/ath12k/core.c b/drivers/net/wireless/ath/ath12k/core.c
new file mode 100644
index 000000000000..a89e66653f04
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/core.c
@@ -0,0 +1,939 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/remoteproc.h>
+#include <linux/firmware.h>
+#include <linux/of.h>
+#include "core.h"
+#include "dp_tx.h"
+#include "dp_rx.h"
+#include "debug.h"
+#include "hif.h"
+
+unsigned int ath12k_debug_mask;
+module_param_named(debug_mask, ath12k_debug_mask, uint, 0644);
+MODULE_PARM_DESC(debug_mask, "Debugging mask");
+
+int ath12k_core_suspend(struct ath12k_base *ab)
+{
+ int ret;
+
+ if (!ab->hw_params->supports_suspend)
+ return -EOPNOTSUPP;
+
+ /* TODO: there can be frames in the queues, so for now add a delay
+ * as a hack. Proper handling needs to be implemented so that this
+ * delay can be removed.
+ */
+ msleep(500);
+
+ ret = ath12k_dp_rx_pktlog_stop(ab, true);
+ if (ret) {
+ ath12k_warn(ab, "failed to stop dp rx (and timer) pktlog during suspend: %d\n",
+ ret);
+ return ret;
+ }
+
+ ret = ath12k_dp_rx_pktlog_stop(ab, false);
+ if (ret) {
+ ath12k_warn(ab, "failed to stop dp rx pktlog during suspend: %d\n",
+ ret);
+ return ret;
+ }
+
+ ath12k_hif_irq_disable(ab);
+ ath12k_hif_ce_irq_disable(ab);
+
+ ret = ath12k_hif_suspend(ab);
+ if (ret) {
+ ath12k_warn(ab, "failed to suspend hif: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+int ath12k_core_resume(struct ath12k_base *ab)
+{
+ int ret;
+
+ if (!ab->hw_params->supports_suspend)
+ return -EOPNOTSUPP;
+
+ ret = ath12k_hif_resume(ab);
+ if (ret) {
+ ath12k_warn(ab, "failed to resume hif during resume: %d\n", ret);
+ return ret;
+ }
+
+ ath12k_hif_ce_irq_enable(ab);
+ ath12k_hif_irq_enable(ab);
+
+ ret = ath12k_dp_rx_pktlog_start(ab);
+ if (ret) {
+ ath12k_warn(ab, "failed to start rx pktlog during resume: %d\n",
+ ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int ath12k_core_create_board_name(struct ath12k_base *ab, char *name,
+ size_t name_len)
+{
+ /* strlen(',variant=') + strlen(ab->qmi.target.bdf_ext) */
+ char variant[9 + ATH12K_QMI_BDF_EXT_STR_LENGTH] = { 0 };
+
+ if (ab->qmi.target.bdf_ext[0] != '\0')
+ scnprintf(variant, sizeof(variant), ",variant=%s",
+ ab->qmi.target.bdf_ext);
+
+ scnprintf(name, name_len,
+ "bus=%s,qmi-chip-id=%d,qmi-board-id=%d%s",
+ ath12k_bus_str(ab->hif.bus),
+ ab->qmi.target.chip_id,
+ ab->qmi.target.board_id, variant);
+
+ ath12k_dbg(ab, ATH12K_DBG_BOOT, "boot using board name '%s'\n", name);
+
+ return 0;
+}
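
With hypothetical QMI values the scnprintf() above produces a name such as bus=pci,qmi-chip-id=2,qmi-board-id=255,variant=foo (the variant suffix only appears when bdf_ext is set). A userspace sketch of the same formatting, with every value made up:

    #include <stdio.h>

    int main(void)
    {
        char name[100];

        /* all values are illustrative; the driver reads them via QMI */
        snprintf(name, sizeof(name),
                 "bus=%s,qmi-chip-id=%d,qmi-board-id=%d%s",
                 "pci", 2, 255, ",variant=foo");
        puts(name); /* bus=pci,qmi-chip-id=2,qmi-board-id=255,variant=foo */
        return 0;
    }
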
+
+const struct firmware *ath12k_core_firmware_request(struct ath12k_base *ab,
+ const char *file)
+{
+ const struct firmware *fw;
+ char path[100];
+ int ret;
+
+ if (!file)
+ return ERR_PTR(-ENOENT);
+
+ ath12k_core_create_firmware_path(ab, file, path, sizeof(path));
+
+ ret = firmware_request_nowarn(&fw, path, ab->dev);
+ if (ret)
+ return ERR_PTR(ret);
+
+ ath12k_dbg(ab, ATH12K_DBG_BOOT, "boot firmware request %s size %zu\n",
+ path, fw->size);
+
+ return fw;
+}
+
+void ath12k_core_free_bdf(struct ath12k_base *ab, struct ath12k_board_data *bd)
+{
+ if (!IS_ERR(bd->fw))
+ release_firmware(bd->fw);
+
+ memset(bd, 0, sizeof(*bd));
+}
+
+static int ath12k_core_parse_bd_ie_board(struct ath12k_base *ab,
+ struct ath12k_board_data *bd,
+ const void *buf, size_t buf_len,
+ const char *boardname,
+ int bd_ie_type)
+{
+ const struct ath12k_fw_ie *hdr;
+ bool name_match_found;
+ int ret, board_ie_id;
+ size_t board_ie_len;
+ const void *board_ie_data;
+
+ name_match_found = false;
+
+ /* go through ATH12K_BD_IE_BOARD_ elements */
+ while (buf_len > sizeof(struct ath12k_fw_ie)) {
+ hdr = buf;
+ board_ie_id = le32_to_cpu(hdr->id);
+ board_ie_len = le32_to_cpu(hdr->len);
+ board_ie_data = hdr->data;
+
+ buf_len -= sizeof(*hdr);
+ buf += sizeof(*hdr);
+
+ if (buf_len < ALIGN(board_ie_len, 4)) {
+ ath12k_err(ab, "invalid ATH12K_BD_IE_BOARD length: %zu < %zu\n",
+ buf_len, ALIGN(board_ie_len, 4));
+ ret = -EINVAL;
+ goto out;
+ }
+
+ switch (board_ie_id) {
+ case ATH12K_BD_IE_BOARD_NAME:
+ ath12k_dbg_dump(ab, ATH12K_DBG_BOOT, "board name", "",
+ board_ie_data, board_ie_len);
+
+ if (board_ie_len != strlen(boardname))
+ break;
+
+ ret = memcmp(board_ie_data, boardname, strlen(boardname));
+ if (ret)
+ break;
+
+ name_match_found = true;
+ ath12k_dbg(ab, ATH12K_DBG_BOOT,
+ "boot found match for name '%s'",
+ boardname);
+ break;
+ case ATH12K_BD_IE_BOARD_DATA:
+ if (!name_match_found)
+ /* no match found */
+ break;
+
+ ath12k_dbg(ab, ATH12K_DBG_BOOT,
+ "boot found board data for '%s'", boardname);
+
+ bd->data = board_ie_data;
+ bd->len = board_ie_len;
+
+ ret = 0;
+ goto out;
+ default:
+ ath12k_warn(ab, "unknown ATH12K_BD_IE_BOARD found: %d\n",
+ board_ie_id);
+ break;
+ }
+
+ /* jump over the padding */
+ board_ie_len = ALIGN(board_ie_len, 4);
+
+ buf_len -= board_ie_len;
+ buf += board_ie_len;
+ }
+
+ /* no match found */
+ ret = -ENOENT;
+
+out:
+ return ret;
+}
+
+static int ath12k_core_fetch_board_data_api_n(struct ath12k_base *ab,
+ struct ath12k_board_data *bd,
+ const char *boardname)
+{
+ size_t len, magic_len;
+ const u8 *data;
+ char *filename, filepath[100];
+ size_t ie_len;
+ struct ath12k_fw_ie *hdr;
+ int ret, ie_id;
+
+ filename = ATH12K_BOARD_API2_FILE;
+
+ if (!bd->fw)
+ bd->fw = ath12k_core_firmware_request(ab, filename);
+
+ if (IS_ERR(bd->fw))
+ return PTR_ERR(bd->fw);
+
+ data = bd->fw->data;
+ len = bd->fw->size;
+
+ ath12k_core_create_firmware_path(ab, filename,
+ filepath, sizeof(filepath));
+
+ /* the magic string is padded with an extra null byte */
+ magic_len = strlen(ATH12K_BOARD_MAGIC) + 1;
+ if (len < magic_len) {
+ ath12k_err(ab, "failed to find magic value in %s, file too short: %zu\n",
+ filepath, len);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ if (memcmp(data, ATH12K_BOARD_MAGIC, magic_len)) {
+ ath12k_err(ab, "found invalid board magic\n");
+ ret = -EINVAL;
+ goto err;
+ }
+
+ /* magic is padded to 4 bytes */
+ magic_len = ALIGN(magic_len, 4);
+ if (len < magic_len) {
+ ath12k_err(ab, "failed: %s too small to contain board data, len: %zu\n",
+ filepath, len);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ data += magic_len;
+ len -= magic_len;
+
+ while (len > sizeof(struct ath12k_fw_ie)) {
+ hdr = (struct ath12k_fw_ie *)data;
+ ie_id = le32_to_cpu(hdr->id);
+ ie_len = le32_to_cpu(hdr->len);
+
+ len -= sizeof(*hdr);
+ data = hdr->data;
+
+ if (len < ALIGN(ie_len, 4)) {
+ ath12k_err(ab, "invalid length for board ie_id %d ie_len %zu len %zu\n",
+ ie_id, ie_len, len);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ switch (ie_id) {
+ case ATH12K_BD_IE_BOARD:
+ ret = ath12k_core_parse_bd_ie_board(ab, bd, data,
+ ie_len,
+ boardname,
+ ATH12K_BD_IE_BOARD);
+ if (ret == -ENOENT)
+ /* no match found, continue */
+ break;
+ else if (ret)
+ /* there was an error, bail out */
+ goto err;
+ /* either found or error, so stop searching */
+ goto out;
+ }
+
+ /* jump over the padding */
+ ie_len = ALIGN(ie_len, 4);
+
+ len -= ie_len;
+ data += ie_len;
+ }
+
+out:
+ if (!bd->data || !bd->len) {
+ ath12k_err(ab,
+ "failed to fetch board data for %s from %s\n",
+ boardname, filepath);
+ ret = -ENODATA;
+ goto err;
+ }
+
+ return 0;
+
+err:
+ ath12k_core_free_bdf(ab, bd);
+ return ret;
+}
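
board-2.bin is the magic string followed by 4-byte-aligned {id, len, data} IEs, which the loop above walks. A self-contained sketch of the same walk over an in-memory buffer (the types and buffer contents are illustrative, not the real file format constants):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* mirrors struct ath12k_fw_ie: a 4-byte-aligned {id, len, data[]} TLV */
    static void walk(const uint8_t *buf, size_t len)
    {
        while (len > 8) {
            uint32_t id, ie_len;

            memcpy(&id, buf, 4);      /* le32_to_cpu() on big-endian hosts */
            memcpy(&ie_len, buf + 4, 4);
            buf += 8;
            len -= 8;

            if (len < ((ie_len + 3) & ~3u))
                return;               /* truncated IE */

            printf("ie id %u len %u\n", id, ie_len);

            ie_len = (ie_len + 3) & ~3u; /* jump over the padding */
            buf += ie_len;
            len -= ie_len;
        }
    }

    int main(void)
    {
        /* one IE: id=1, len=2, payload "ab" plus two pad bytes */
        const uint8_t buf[12] = { 1, 0, 0, 0, 2, 0, 0, 0, 'a', 'b', 0, 0 };

        walk(buf, sizeof(buf));
        return 0;
    }
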
+
+int ath12k_core_fetch_board_data_api_1(struct ath12k_base *ab,
+ struct ath12k_board_data *bd,
+ char *filename)
+{
+ bd->fw = ath12k_core_firmware_request(ab, filename);
+ if (IS_ERR(bd->fw))
+ return PTR_ERR(bd->fw);
+
+ bd->data = bd->fw->data;
+ bd->len = bd->fw->size;
+
+ return 0;
+}
+
+#define BOARD_NAME_SIZE 100
+int ath12k_core_fetch_bdf(struct ath12k_base *ab, struct ath12k_board_data *bd)
+{
+ char boardname[BOARD_NAME_SIZE];
+ int ret;
+
+ ret = ath12k_core_create_board_name(ab, boardname, BOARD_NAME_SIZE);
+ if (ret) {
+ ath12k_err(ab, "failed to create board name: %d", ret);
+ return ret;
+ }
+
+ ab->bd_api = 2;
+ ret = ath12k_core_fetch_board_data_api_n(ab, bd, boardname);
+ if (!ret)
+ goto success;
+
+ ab->bd_api = 1;
+ ret = ath12k_core_fetch_board_data_api_1(ab, bd, ATH12K_DEFAULT_BOARD_FILE);
+ if (ret) {
+ ath12k_err(ab, "failed to fetch board-2.bin or board.bin from %s\n",
+ ab->hw_params->fw.dir);
+ return ret;
+ }
+
+success:
+ ath12k_dbg(ab, ATH12K_DBG_BOOT, "using board api %d\n", ab->bd_api);
+ return 0;
+}
+
+static void ath12k_core_stop(struct ath12k_base *ab)
+{
+ if (!test_bit(ATH12K_FLAG_CRASH_FLUSH, &ab->dev_flags))
+ ath12k_qmi_firmware_stop(ab);
+
+ ath12k_hif_stop(ab);
+ ath12k_wmi_detach(ab);
+ ath12k_dp_rx_pdev_reo_cleanup(ab);
+
+ /* De-Init of components as needed */
+}
+
+static int ath12k_core_soc_create(struct ath12k_base *ab)
+{
+ int ret;
+
+ ret = ath12k_qmi_init_service(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to initialize qmi :%d\n", ret);
+ return ret;
+ }
+
+ ret = ath12k_hif_power_up(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to power up :%d\n", ret);
+ goto err_qmi_deinit;
+ }
+
+ return 0;
+
+err_qmi_deinit:
+ ath12k_qmi_deinit_service(ab);
+ return ret;
+}
+
+static void ath12k_core_soc_destroy(struct ath12k_base *ab)
+{
+ ath12k_dp_free(ab);
+ ath12k_reg_free(ab);
+ ath12k_qmi_deinit_service(ab);
+}
+
+static int ath12k_core_pdev_create(struct ath12k_base *ab)
+{
+ int ret;
+
+ ret = ath12k_mac_register(ab);
+ if (ret) {
+ ath12k_err(ab, "failed register the radio with mac80211: %d\n", ret);
+ return ret;
+ }
+
+ ret = ath12k_dp_pdev_alloc(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to attach DP pdev: %d\n", ret);
+ goto err_mac_unregister;
+ }
+
+ return 0;
+
+err_mac_unregister:
+ ath12k_mac_unregister(ab);
+
+ return ret;
+}
+
+static void ath12k_core_pdev_destroy(struct ath12k_base *ab)
+{
+ ath12k_mac_unregister(ab);
+ ath12k_hif_irq_disable(ab);
+ ath12k_dp_pdev_free(ab);
+}
+
+static int ath12k_core_start(struct ath12k_base *ab,
+ enum ath12k_firmware_mode mode)
+{
+ int ret;
+
+ ret = ath12k_wmi_attach(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to attach wmi: %d\n", ret);
+ return ret;
+ }
+
+ ret = ath12k_htc_init(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to init htc: %d\n", ret);
+ goto err_wmi_detach;
+ }
+
+ ret = ath12k_hif_start(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to start HIF: %d\n", ret);
+ goto err_wmi_detach;
+ }
+
+ ret = ath12k_htc_wait_target(&ab->htc);
+ if (ret) {
+ ath12k_err(ab, "failed to connect to HTC: %d\n", ret);
+ goto err_hif_stop;
+ }
+
+ ret = ath12k_dp_htt_connect(&ab->dp);
+ if (ret) {
+ ath12k_err(ab, "failed to connect to HTT: %d\n", ret);
+ goto err_hif_stop;
+ }
+
+ ret = ath12k_wmi_connect(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to connect wmi: %d\n", ret);
+ goto err_hif_stop;
+ }
+
+ ret = ath12k_htc_start(&ab->htc);
+ if (ret) {
+ ath12k_err(ab, "failed to start HTC: %d\n", ret);
+ goto err_hif_stop;
+ }
+
+ ret = ath12k_wmi_wait_for_service_ready(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to receive wmi service ready event: %d\n",
+ ret);
+ goto err_hif_stop;
+ }
+
+ ret = ath12k_mac_allocate(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to create new hw device with mac80211 :%d\n",
+ ret);
+ goto err_hif_stop;
+ }
+
+ ath12k_dp_cc_config(ab);
+
+ ath12k_dp_pdev_pre_alloc(ab);
+
+ ret = ath12k_dp_rx_pdev_reo_setup(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to initialize reo destination rings: %d\n", ret);
+ goto err_mac_destroy;
+ }
+
+ ret = ath12k_wmi_cmd_init(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to send wmi init cmd: %d\n", ret);
+ goto err_reo_cleanup;
+ }
+
+ ret = ath12k_wmi_wait_for_unified_ready(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to receive wmi unified ready event: %d\n",
+ ret);
+ goto err_reo_cleanup;
+ }
+
+ /* put hardware to DBS mode */
+ if (ab->hw_params->single_pdev_only) {
+ ret = ath12k_wmi_set_hw_mode(ab, WMI_HOST_HW_MODE_DBS);
+ if (ret) {
+ ath12k_err(ab, "failed to send dbs mode: %d\n", ret);
+ goto err_reo_cleanup;
+ }
+ }
+
+ ret = ath12k_dp_tx_htt_h2t_ver_req_msg(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to send htt version request message: %d\n",
+ ret);
+ goto err_reo_cleanup;
+ }
+
+ return 0;
+
+err_reo_cleanup:
+ ath12k_dp_rx_pdev_reo_cleanup(ab);
+err_mac_destroy:
+ ath12k_mac_destroy(ab);
+err_hif_stop:
+ ath12k_hif_stop(ab);
+err_wmi_detach:
+ ath12k_wmi_detach(ab);
+ return ret;
+}
+
+static int ath12k_core_start_firmware(struct ath12k_base *ab,
+ enum ath12k_firmware_mode mode)
+{
+ int ret;
+
+ ath12k_ce_get_shadow_config(ab, &ab->qmi.ce_cfg.shadow_reg_v3,
+ &ab->qmi.ce_cfg.shadow_reg_v3_len);
+
+ ret = ath12k_qmi_firmware_start(ab, mode);
+ if (ret) {
+ ath12k_err(ab, "failed to send firmware start: %d\n", ret);
+ return ret;
+ }
+
+ return ret;
+}
+
+int ath12k_core_qmi_firmware_ready(struct ath12k_base *ab)
+{
+ int ret;
+
+ ret = ath12k_core_start_firmware(ab, ATH12K_FIRMWARE_MODE_NORMAL);
+ if (ret) {
+ ath12k_err(ab, "failed to start firmware: %d\n", ret);
+ return ret;
+ }
+
+ ret = ath12k_ce_init_pipes(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to initialize CE: %d\n", ret);
+ goto err_firmware_stop;
+ }
+
+ ret = ath12k_dp_alloc(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to init DP: %d\n", ret);
+ goto err_firmware_stop;
+ }
+
+ mutex_lock(&ab->core_lock);
+ ret = ath12k_core_start(ab, ATH12K_FIRMWARE_MODE_NORMAL);
+ if (ret) {
+ ath12k_err(ab, "failed to start core: %d\n", ret);
+ goto err_dp_free;
+ }
+
+ ret = ath12k_core_pdev_create(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to create pdev core: %d\n", ret);
+ goto err_core_stop;
+ }
+ ath12k_hif_irq_enable(ab);
+ mutex_unlock(&ab->core_lock);
+
+ return 0;
+
+err_core_stop:
+ ath12k_core_stop(ab);
+ ath12k_mac_destroy(ab);
+err_dp_free:
+ ath12k_dp_free(ab);
+ mutex_unlock(&ab->core_lock);
+err_firmware_stop:
+ ath12k_qmi_firmware_stop(ab);
+
+ return ret;
+}
+
+static int ath12k_core_reconfigure_on_crash(struct ath12k_base *ab)
+{
+ int ret;
+
+ mutex_lock(&ab->core_lock);
+ ath12k_hif_irq_disable(ab);
+ ath12k_dp_pdev_free(ab);
+ ath12k_hif_stop(ab);
+ ath12k_wmi_detach(ab);
+ ath12k_dp_rx_pdev_reo_cleanup(ab);
+ mutex_unlock(&ab->core_lock);
+
+ ath12k_dp_free(ab);
+ ath12k_hal_srng_deinit(ab);
+
+ ab->free_vdev_map = (1LL << (ab->num_radios * TARGET_NUM_VDEVS)) - 1;
+
+ ret = ath12k_hal_srng_init(ab);
+ if (ret)
+ return ret;
+
+ clear_bit(ATH12K_FLAG_CRASH_FLUSH, &ab->dev_flags);
+
+ ret = ath12k_core_qmi_firmware_ready(ab);
+ if (ret)
+ goto err_hal_srng_deinit;
+
+ clear_bit(ATH12K_FLAG_RECOVERY, &ab->dev_flags);
+
+ return 0;
+
+err_hal_srng_deinit:
+ ath12k_hal_srng_deinit(ab);
+ return ret;
+}
+
+void ath12k_core_halt(struct ath12k *ar)
+{
+ struct ath12k_base *ab = ar->ab;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ ar->num_created_vdevs = 0;
+ ar->allocated_vdev_map = 0;
+
+ ath12k_mac_scan_finish(ar);
+ ath12k_mac_peer_cleanup_all(ar);
+ cancel_delayed_work_sync(&ar->scan.timeout);
+ cancel_work_sync(&ar->regd_update_work);
+
+ rcu_assign_pointer(ab->pdevs_active[ar->pdev_idx], NULL);
+ synchronize_rcu();
+ INIT_LIST_HEAD(&ar->arvifs);
+ idr_init(&ar->txmgmt_idr);
+}
+
+static void ath12k_core_pre_reconfigure_recovery(struct ath12k_base *ab)
+{
+ struct ath12k *ar;
+ struct ath12k_pdev *pdev;
+ int i;
+
+ spin_lock_bh(&ab->base_lock);
+ ab->stats.fw_crash_counter++;
+ spin_unlock_bh(&ab->base_lock);
+
+ for (i = 0; i < ab->num_radios; i++) {
+ pdev = &ab->pdevs[i];
+ ar = pdev->ar;
+ if (!ar || ar->state == ATH12K_STATE_OFF)
+ continue;
+
+ ieee80211_stop_queues(ar->hw);
+ ath12k_mac_drain_tx(ar);
+ complete(&ar->scan.started);
+ complete(&ar->scan.completed);
+ complete(&ar->peer_assoc_done);
+ complete(&ar->peer_delete_done);
+ complete(&ar->install_key_done);
+ complete(&ar->vdev_setup_done);
+ complete(&ar->vdev_delete_done);
+ complete(&ar->bss_survey_done);
+
+ wake_up(&ar->dp.tx_empty_waitq);
+ idr_for_each(&ar->txmgmt_idr,
+ ath12k_mac_tx_mgmt_pending_free, ar);
+ idr_destroy(&ar->txmgmt_idr);
+ }
+
+ wake_up(&ab->wmi_ab.tx_credits_wq);
+ wake_up(&ab->peer_mapping_wq);
+}
+
+static void ath12k_core_post_reconfigure_recovery(struct ath12k_base *ab)
+{
+ struct ath12k *ar;
+ struct ath12k_pdev *pdev;
+ int i;
+
+ for (i = 0; i < ab->num_radios; i++) {
+ pdev = &ab->pdevs[i];
+ ar = pdev->ar;
+ if (!ar || ar->state == ATH12K_STATE_OFF)
+ continue;
+
+ mutex_lock(&ar->conf_mutex);
+
+ switch (ar->state) {
+ case ATH12K_STATE_ON:
+ ar->state = ATH12K_STATE_RESTARTING;
+ ath12k_core_halt(ar);
+ ieee80211_restart_hw(ar->hw);
+ break;
+ case ATH12K_STATE_OFF:
+ ath12k_warn(ab,
+ "cannot restart radio %d that hasn't been started\n",
+ i);
+ break;
+ case ATH12K_STATE_RESTARTING:
+ break;
+ case ATH12K_STATE_RESTARTED:
+ ar->state = ATH12K_STATE_WEDGED;
+ fallthrough;
+ case ATH12K_STATE_WEDGED:
+ ath12k_warn(ab,
+ "device is wedged, will not restart radio %d\n", i);
+ break;
+ }
+ mutex_unlock(&ar->conf_mutex);
+ }
+ complete(&ab->driver_recovery);
+}
+
+static void ath12k_core_restart(struct work_struct *work)
+{
+ struct ath12k_base *ab = container_of(work, struct ath12k_base, restart_work);
+ int ret;
+
+ if (!ab->is_reset)
+ ath12k_core_pre_reconfigure_recovery(ab);
+
+ ret = ath12k_core_reconfigure_on_crash(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to reconfigure driver on crash recovery\n");
+ return;
+ }
+
+ if (ab->is_reset)
+ complete_all(&ab->reconfigure_complete);
+
+ if (!ab->is_reset)
+ ath12k_core_post_reconfigure_recovery(ab);
+}
+
+static void ath12k_core_reset(struct work_struct *work)
+{
+ struct ath12k_base *ab = container_of(work, struct ath12k_base, reset_work);
+ int reset_count, fail_cont_count;
+ long time_left;
+
+ if (!(test_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags))) {
+ ath12k_warn(ab, "ignore reset dev flags 0x%lx\n", ab->dev_flags);
+ return;
+ }
+
+ /* Sometimes the recovery fails and every subsequent recovery
+ * attempt fails as well; bail out here to avoid looping forever
+ * when recovery cannot succeed.
+ */
+ fail_cont_count = atomic_read(&ab->fail_cont_count);
+
+ if (fail_cont_count >= ATH12K_RESET_MAX_FAIL_COUNT_FINAL)
+ return;
+
+ if (fail_cont_count >= ATH12K_RESET_MAX_FAIL_COUNT_FIRST &&
+ time_before(jiffies, ab->reset_fail_timeout))
+ return;
+
+ reset_count = atomic_inc_return(&ab->reset_count);
+
+ if (reset_count > 1) {
+ /* Sometimes another reset worker is scheduled before the previous
+ * one has completed; the second worker would then destroy the state
+ * of the first, so wait here to avoid that.
+ */
+ ath12k_warn(ab, "already resetting count %d\n", reset_count);
+
+ reinit_completion(&ab->reset_complete);
+ time_left = wait_for_completion_timeout(&ab->reset_complete,
+ ATH12K_RESET_TIMEOUT_HZ);
+ if (time_left) {
+ ath12k_dbg(ab, ATH12K_DBG_BOOT, "to skip reset\n");
+ atomic_dec(&ab->reset_count);
+ return;
+ }
+
+ ab->reset_fail_timeout = jiffies + ATH12K_RESET_FAIL_TIMEOUT_HZ;
+ /* Record the continuous recovery fail count when recovery fails */
+ fail_cont_count = atomic_inc_return(&ab->fail_cont_count);
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_BOOT, "reset starting\n");
+
+ ab->is_reset = true;
+ atomic_set(&ab->recovery_count, 0);
+
+ ath12k_core_pre_reconfigure_recovery(ab);
+
+ reinit_completion(&ab->reconfigure_complete);
+ ath12k_core_post_reconfigure_recovery(ab);
+
+ reinit_completion(&ab->recovery_start);
+ atomic_set(&ab->recovery_start_count, 0);
+
+ ath12k_dbg(ab, ATH12K_DBG_BOOT, "waiting recovery start...\n");
+
+ time_left = wait_for_completion_timeout(&ab->recovery_start,
+ ATH12K_RECOVER_START_TIMEOUT_HZ);
+
+ ath12k_hif_power_down(ab);
+ ath12k_hif_power_up(ab);
+
+ ath12k_dbg(ab, ATH12K_DBG_BOOT, "reset started\n");
+}
+
+int ath12k_core_pre_init(struct ath12k_base *ab)
+{
+ int ret;
+
+ ret = ath12k_hw_init(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to init hw params: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+int ath12k_core_init(struct ath12k_base *ab)
+{
+ int ret;
+
+ ret = ath12k_core_soc_create(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to create soc core: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+void ath12k_core_deinit(struct ath12k_base *ab)
+{
+ mutex_lock(&ab->core_lock);
+
+ ath12k_core_pdev_destroy(ab);
+ ath12k_core_stop(ab);
+
+ mutex_unlock(&ab->core_lock);
+
+ ath12k_hif_power_down(ab);
+ ath12k_mac_destroy(ab);
+ ath12k_core_soc_destroy(ab);
+}
+
+void ath12k_core_free(struct ath12k_base *ab)
+{
+ destroy_workqueue(ab->workqueue_aux);
+ destroy_workqueue(ab->workqueue);
+ kfree(ab);
+}
+
+struct ath12k_base *ath12k_core_alloc(struct device *dev, size_t priv_size,
+ enum ath12k_bus bus)
+{
+ struct ath12k_base *ab;
+
+ ab = kzalloc(sizeof(*ab) + priv_size, GFP_KERNEL);
+ if (!ab)
+ return NULL;
+
+ init_completion(&ab->driver_recovery);
+
+ ab->workqueue = create_singlethread_workqueue("ath12k_wq");
+ if (!ab->workqueue)
+ goto err_sc_free;
+
+ ab->workqueue_aux = create_singlethread_workqueue("ath12k_aux_wq");
+ if (!ab->workqueue_aux)
+ goto err_free_wq;
+
+ mutex_init(&ab->core_lock);
+ spin_lock_init(&ab->base_lock);
+ init_completion(&ab->reset_complete);
+ init_completion(&ab->reconfigure_complete);
+ init_completion(&ab->recovery_start);
+
+ INIT_LIST_HEAD(&ab->peers);
+ init_waitqueue_head(&ab->peer_mapping_wq);
+ init_waitqueue_head(&ab->wmi_ab.tx_credits_wq);
+ INIT_WORK(&ab->restart_work, ath12k_core_restart);
+ INIT_WORK(&ab->reset_work, ath12k_core_reset);
+ timer_setup(&ab->rx_replenish_retry, ath12k_ce_rx_replenish_retry, 0);
+ init_completion(&ab->htc_suspend);
+
+ ab->dev = dev;
+ ab->hif.bus = bus;
+
+ return ab;
+
+err_free_wq:
+ destroy_workqueue(ab->workqueue);
+err_sc_free:
+ kfree(ab);
+ return NULL;
+}
+
+MODULE_DESCRIPTION("Core module for Qualcomm Atheros 802.11be wireless LAN cards.");
+MODULE_LICENSE("Dual BSD/GPL");
diff --git a/drivers/net/wireless/ath/ath12k/core.h b/drivers/net/wireless/ath/ath12k/core.h
new file mode 100644
index 000000000000..a54ae74543c1
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/core.h
@@ -0,0 +1,822 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_CORE_H
+#define ATH12K_CORE_H
+
+#include <linux/types.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/bitfield.h>
+#include "qmi.h"
+#include "htc.h"
+#include "wmi.h"
+#include "hal.h"
+#include "dp.h"
+#include "ce.h"
+#include "mac.h"
+#include "hw.h"
+#include "hal_rx.h"
+#include "reg.h"
+#include "dbring.h"
+
+#define SM(_v, _f) (((_v) << _f##_LSB) & _f##_MASK)
+
+#define ATH12K_TX_MGMT_NUM_PENDING_MAX 512
+
+#define ATH12K_TX_MGMT_TARGET_MAX_SUPPORT_WMI 64
+
+/* Pending management packets threshold for dropping probe responses */
+#define ATH12K_PRB_RSP_DROP_THRESHOLD ((ATH12K_TX_MGMT_TARGET_MAX_SUPPORT_WMI * 3) / 4)
+
+#define ATH12K_INVALID_HW_MAC_ID 0xFF
+#define ATH12K_RX_RATE_TABLE_NUM 320
+#define ATH12K_RX_RATE_TABLE_11AX_NUM 576
+
+#define ATH12K_MON_TIMER_INTERVAL 10
+#define ATH12K_RESET_TIMEOUT_HZ (20 * HZ)
+#define ATH12K_RESET_MAX_FAIL_COUNT_FIRST 3
+#define ATH12K_RESET_MAX_FAIL_COUNT_FINAL 5
+#define ATH12K_RESET_FAIL_TIMEOUT_HZ (20 * HZ)
+#define ATH12K_RECONFIGURE_TIMEOUT_HZ (10 * HZ)
+#define ATH12K_RECOVER_START_TIMEOUT_HZ (20 * HZ)
+
+enum wme_ac {
+ WME_AC_BE,
+ WME_AC_BK,
+ WME_AC_VI,
+ WME_AC_VO,
+ WME_NUM_AC
+};
+
+#define ATH12K_HT_MCS_MAX 7
+#define ATH12K_VHT_MCS_MAX 9
+#define ATH12K_HE_MCS_MAX 11
+
+enum ath12k_crypt_mode {
+ /* Only use hardware crypto engine */
+ ATH12K_CRYPT_MODE_HW,
+ /* Only use software crypto */
+ ATH12K_CRYPT_MODE_SW,
+};
+
+static inline enum wme_ac ath12k_tid_to_ac(u32 tid)
+{
+ return (((tid == 0) || (tid == 3)) ? WME_AC_BE :
+ ((tid == 1) || (tid == 2)) ? WME_AC_BK :
+ ((tid == 4) || (tid == 5)) ? WME_AC_VI :
+ WME_AC_VO);
+}
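
The nested conditional maps 802.11 TIDs onto WMM access categories: TIDs 0/3 are best effort, 1/2 background, 4/5 video and the rest voice. A quick check of that mapping, copying the same expression into a standalone userspace program:

    #include <assert.h>

    enum wme_ac { WME_AC_BE, WME_AC_BK, WME_AC_VI, WME_AC_VO };

    static enum wme_ac tid_to_ac(unsigned int tid)
    {
        return (((tid == 0) || (tid == 3)) ? WME_AC_BE :
                ((tid == 1) || (tid == 2)) ? WME_AC_BK :
                ((tid == 4) || (tid == 5)) ? WME_AC_VI :
                WME_AC_VO);
    }

    int main(void)
    {
        assert(tid_to_ac(0) == WME_AC_BE);
        assert(tid_to_ac(2) == WME_AC_BK);
        assert(tid_to_ac(5) == WME_AC_VI);
        assert(tid_to_ac(7) == WME_AC_VO);
        return 0;
    }
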
+
+enum ath12k_skb_flags {
+ ATH12K_SKB_HW_80211_ENCAP = BIT(0),
+ ATH12K_SKB_CIPHER_SET = BIT(1),
+};
+
+struct ath12k_skb_cb {
+ dma_addr_t paddr;
+ struct ath12k *ar;
+ struct ieee80211_vif *vif;
+ dma_addr_t paddr_ext_desc;
+ u32 cipher;
+ u8 flags;
+};
+
+struct ath12k_skb_rxcb {
+ dma_addr_t paddr;
+ bool is_first_msdu;
+ bool is_last_msdu;
+ bool is_continuation;
+ bool is_mcbc;
+ bool is_eapol;
+ struct hal_rx_desc *rx_desc;
+ u8 err_rel_src;
+ u8 err_code;
+ u8 mac_id;
+ u8 unmapped;
+ u8 is_frag;
+ u8 tid;
+ u16 peer_id;
+};
+
+enum ath12k_hw_rev {
+ ATH12K_HW_QCN9274_HW10,
+ ATH12K_HW_QCN9274_HW20,
+ ATH12K_HW_WCN7850_HW20
+};
+
+enum ath12k_firmware_mode {
+ /* the default mode, standard 802.11 functionality */
+ ATH12K_FIRMWARE_MODE_NORMAL,
+
+ /* factory tests etc */
+ ATH12K_FIRMWARE_MODE_FTM,
+};
+
+#define ATH12K_IRQ_NUM_MAX 57
+#define ATH12K_EXT_IRQ_NUM_MAX 16
+
+struct ath12k_ext_irq_grp {
+ struct ath12k_base *ab;
+ u32 irqs[ATH12K_EXT_IRQ_NUM_MAX];
+ u32 num_irq;
+ u32 grp_id;
+ u64 timestamp;
+ struct napi_struct napi;
+ struct net_device napi_ndev;
+};
+
+#define HEHANDLE_CAP_PHYINFO_SIZE 3
+#define HECAP_PHYINFO_SIZE 9
+#define HECAP_MACINFO_SIZE 5
+#define HECAP_TXRX_MCS_NSS_SIZE 2
+#define HECAP_PPET16_PPET8_MAX_SIZE 25
+
+#define HE_PPET16_PPET8_SIZE 8
+
+/* 802.11ax PPE (PPDU Packet Extension) threshold */
+struct he_ppe_threshold {
+ u32 numss_m1;
+ u32 ru_mask;
+ u32 ppet16_ppet8_ru3_ru0[HE_PPET16_PPET8_SIZE];
+};
+
+struct ath12k_he {
+ u8 hecap_macinfo[HECAP_MACINFO_SIZE];
+ u32 hecap_rxmcsnssmap;
+ u32 hecap_txmcsnssmap;
+ u32 hecap_phyinfo[HEHANDLE_CAP_PHYINFO_SIZE];
+ struct he_ppe_threshold hecap_ppet;
+ u32 heop_param;
+};
+
+#define MAX_RADIOS 3
+
+enum {
+ WMI_HOST_TP_SCALE_MAX = 0,
+ WMI_HOST_TP_SCALE_50 = 1,
+ WMI_HOST_TP_SCALE_25 = 2,
+ WMI_HOST_TP_SCALE_12 = 3,
+ WMI_HOST_TP_SCALE_MIN = 4,
+ WMI_HOST_TP_SCALE_SIZE = 5,
+};
+
+enum ath12k_scan_state {
+ ATH12K_SCAN_IDLE,
+ ATH12K_SCAN_STARTING,
+ ATH12K_SCAN_RUNNING,
+ ATH12K_SCAN_ABORTING,
+};
+
+enum ath12k_dev_flags {
+ ATH12K_CAC_RUNNING,
+ ATH12K_FLAG_CRASH_FLUSH,
+ ATH12K_FLAG_RAW_MODE,
+ ATH12K_FLAG_HW_CRYPTO_DISABLED,
+ ATH12K_FLAG_RECOVERY,
+ ATH12K_FLAG_UNREGISTERING,
+ ATH12K_FLAG_REGISTERED,
+ ATH12K_FLAG_QMI_FAIL,
+ ATH12K_FLAG_HTC_SUSPEND_COMPLETE,
+};
+
+enum ath12k_monitor_flags {
+ ATH12K_FLAG_MONITOR_ENABLED,
+};
+
+struct ath12k_vif {
+ u32 vdev_id;
+ enum wmi_vdev_type vdev_type;
+ enum wmi_vdev_subtype vdev_subtype;
+ u32 beacon_interval;
+ u32 dtim_period;
+ u16 ast_hash;
+ u16 ast_idx;
+ u16 tcl_metadata;
+ u8 hal_addr_search_flags;
+ u8 search_type;
+
+ struct ath12k *ar;
+ struct ieee80211_vif *vif;
+
+ int bank_id;
+ u8 vdev_id_check_en;
+
+ struct wmi_wmm_params_all_arg wmm_params;
+ struct list_head list;
+ union {
+ struct {
+ u32 uapsd;
+ } sta;
+ struct {
+ /* 127 stations; wmi limit */
+ u8 tim_bitmap[16];
+ u8 tim_len;
+ u32 ssid_len;
+ u8 ssid[IEEE80211_MAX_SSID_LEN];
+ bool hidden_ssid;
+ /* P2P_IE with NoA attribute for P2P_GO case */
+ u32 noa_len;
+ u8 *noa_data;
+ } ap;
+ } u;
+
+ bool is_started;
+ bool is_up;
+ u32 aid;
+ u8 bssid[ETH_ALEN];
+ struct cfg80211_bitrate_mask bitrate_mask;
+ int num_legacy_stations;
+ int rtscts_prot_mode;
+ int txpower;
+ bool rsnie_present;
+ bool wpaie_present;
+ struct ieee80211_chanctx_conf chanctx;
+ u32 key_cipher;
+ u8 tx_encap_type;
+ u8 vdev_stats_id;
+};
+
+struct ath12k_vif_iter {
+ u32 vdev_id;
+ struct ath12k_vif *arvif;
+};
+
+#define HAL_AST_IDX_INVALID 0xFFFF
+#define HAL_RX_MAX_MCS 12
+#define HAL_RX_MAX_MCS_HT 31
+#define HAL_RX_MAX_MCS_VHT 9
+#define HAL_RX_MAX_MCS_HE 11
+#define HAL_RX_MAX_NSS 8
+#define HAL_RX_MAX_NUM_LEGACY_RATES 12
+
+struct ath12k_rx_peer_rate_stats {
+ u64 ht_mcs_count[HAL_RX_MAX_MCS_HT + 1];
+ u64 vht_mcs_count[HAL_RX_MAX_MCS_VHT + 1];
+ u64 he_mcs_count[HAL_RX_MAX_MCS_HE + 1];
+ u64 nss_count[HAL_RX_MAX_NSS];
+ u64 bw_count[HAL_RX_BW_MAX];
+ u64 gi_count[HAL_RX_GI_MAX];
+ u64 legacy_count[HAL_RX_MAX_NUM_LEGACY_RATES];
+ u64 rx_rate[ATH12K_RX_RATE_TABLE_11AX_NUM];
+};
+
+struct ath12k_rx_peer_stats {
+ u64 num_msdu;
+ u64 num_mpdu_fcs_ok;
+ u64 num_mpdu_fcs_err;
+ u64 tcp_msdu_count;
+ u64 udp_msdu_count;
+ u64 other_msdu_count;
+ u64 ampdu_msdu_count;
+ u64 non_ampdu_msdu_count;
+ u64 stbc_count;
+ u64 beamformed_count;
+ u64 mcs_count[HAL_RX_MAX_MCS + 1];
+ u64 nss_count[HAL_RX_MAX_NSS];
+ u64 bw_count[HAL_RX_BW_MAX];
+ u64 gi_count[HAL_RX_GI_MAX];
+ u64 coding_count[HAL_RX_SU_MU_CODING_MAX];
+ u64 tid_count[IEEE80211_NUM_TIDS + 1];
+ u64 pream_cnt[HAL_RX_PREAMBLE_MAX];
+ u64 reception_type[HAL_RX_RECEPTION_TYPE_MAX];
+ u64 rx_duration;
+ u64 dcm_count;
+ u64 ru_alloc_cnt[HAL_RX_RU_ALLOC_TYPE_MAX];
+ struct ath12k_rx_peer_rate_stats pkt_stats;
+ struct ath12k_rx_peer_rate_stats byte_stats;
+};
+
+#define ATH12K_HE_MCS_NUM 12
+#define ATH12K_VHT_MCS_NUM 10
+#define ATH12K_BW_NUM 5
+#define ATH12K_NSS_NUM 4
+#define ATH12K_LEGACY_NUM 12
+#define ATH12K_GI_NUM 4
+#define ATH12K_HT_MCS_NUM 32
+
+enum ath12k_pkt_rx_err {
+ ATH12K_PKT_RX_ERR_FCS,
+ ATH12K_PKT_RX_ERR_TKIP,
+ ATH12K_PKT_RX_ERR_CRYPT,
+ ATH12K_PKT_RX_ERR_PEER_IDX_INVAL,
+ ATH12K_PKT_RX_ERR_MAX,
+};
+
+enum ath12k_ampdu_subfrm_num {
+ ATH12K_AMPDU_SUBFRM_NUM_10,
+ ATH12K_AMPDU_SUBFRM_NUM_20,
+ ATH12K_AMPDU_SUBFRM_NUM_30,
+ ATH12K_AMPDU_SUBFRM_NUM_40,
+ ATH12K_AMPDU_SUBFRM_NUM_50,
+ ATH12K_AMPDU_SUBFRM_NUM_60,
+ ATH12K_AMPDU_SUBFRM_NUM_MORE,
+ ATH12K_AMPDU_SUBFRM_NUM_MAX,
+};
+
+enum ath12k_amsdu_subfrm_num {
+ ATH12K_AMSDU_SUBFRM_NUM_1,
+ ATH12K_AMSDU_SUBFRM_NUM_2,
+ ATH12K_AMSDU_SUBFRM_NUM_3,
+ ATH12K_AMSDU_SUBFRM_NUM_4,
+ ATH12K_AMSDU_SUBFRM_NUM_MORE,
+ ATH12K_AMSDU_SUBFRM_NUM_MAX,
+};
+
+enum ath12k_counter_type {
+ ATH12K_COUNTER_TYPE_BYTES,
+ ATH12K_COUNTER_TYPE_PKTS,
+ ATH12K_COUNTER_TYPE_MAX,
+};
+
+enum ath12k_stats_type {
+ ATH12K_STATS_TYPE_SUCC,
+ ATH12K_STATS_TYPE_FAIL,
+ ATH12K_STATS_TYPE_RETRY,
+ ATH12K_STATS_TYPE_AMPDU,
+ ATH12K_STATS_TYPE_MAX,
+};
+
+struct ath12k_htt_data_stats {
+ u64 legacy[ATH12K_COUNTER_TYPE_MAX][ATH12K_LEGACY_NUM];
+ u64 ht[ATH12K_COUNTER_TYPE_MAX][ATH12K_HT_MCS_NUM];
+ u64 vht[ATH12K_COUNTER_TYPE_MAX][ATH12K_VHT_MCS_NUM];
+ u64 he[ATH12K_COUNTER_TYPE_MAX][ATH12K_HE_MCS_NUM];
+ u64 bw[ATH12K_COUNTER_TYPE_MAX][ATH12K_BW_NUM];
+ u64 nss[ATH12K_COUNTER_TYPE_MAX][ATH12K_NSS_NUM];
+ u64 gi[ATH12K_COUNTER_TYPE_MAX][ATH12K_GI_NUM];
+ u64 transmit_type[ATH12K_COUNTER_TYPE_MAX][HAL_RX_RECEPTION_TYPE_MAX];
+ u64 ru_loc[ATH12K_COUNTER_TYPE_MAX][HAL_RX_RU_ALLOC_TYPE_MAX];
+};
+
+struct ath12k_htt_tx_stats {
+ struct ath12k_htt_data_stats stats[ATH12K_STATS_TYPE_MAX];
+ u64 tx_duration;
+ u64 ba_fails;
+ u64 ack_fails;
+ u16 ru_start;
+ u16 ru_tones;
+ u32 mu_group[MAX_MU_GROUP_ID];
+};
+
+struct ath12k_per_ppdu_tx_stats {
+ u16 succ_pkts;
+ u16 failed_pkts;
+ u16 retry_pkts;
+ u32 succ_bytes;
+ u32 failed_bytes;
+ u32 retry_bytes;
+};
+
+struct ath12k_wbm_tx_stats {
+ u64 wbm_tx_comp_stats[HAL_WBM_REL_HTT_TX_COMP_STATUS_MAX];
+};
+
+struct ath12k_sta {
+ struct ath12k_vif *arvif;
+
+ /* the following are protected by ar->data_lock */
+ u32 changed; /* IEEE80211_RC_* */
+ u32 bw;
+ u32 nss;
+ u32 smps;
+ enum hal_pn_type pn_type;
+
+ struct work_struct update_wk;
+ struct rate_info txrate;
+ struct rate_info last_txrate;
+ u64 rx_duration;
+ u64 tx_duration;
+ u8 rssi_comb;
+ struct ath12k_rx_peer_stats *rx_stats;
+ struct ath12k_wbm_tx_stats *wbm_tx_stats;
+};
+
+#define ATH12K_MIN_5G_FREQ 4150
+#define ATH12K_MIN_6G_FREQ 5945
+#define ATH12K_MAX_6G_FREQ 7115
+#define ATH12K_NUM_CHANS 100
+#define ATH12K_MAX_5G_CHAN 173
+
+enum ath12k_state {
+ ATH12K_STATE_OFF,
+ ATH12K_STATE_ON,
+ ATH12K_STATE_RESTARTING,
+ ATH12K_STATE_RESTARTED,
+ ATH12K_STATE_WEDGED,
+ /* Add other states as required */
+};
+
+/* Antenna noise floor */
+#define ATH12K_DEFAULT_NOISE_FLOOR (-95)
+
+struct ath12k_fw_stats {
+ u32 pdev_id;
+ u32 stats_id;
+ struct list_head pdevs;
+ struct list_head vdevs;
+ struct list_head bcn;
+};
+
+struct ath12k_per_peer_tx_stats {
+ u32 succ_bytes;
+ u32 retry_bytes;
+ u32 failed_bytes;
+ u32 duration;
+ u16 succ_pkts;
+ u16 retry_pkts;
+ u16 failed_pkts;
+ u16 ru_start;
+ u16 ru_tones;
+ u8 ba_fails;
+ u8 ppdu_type;
+ u32 mu_grpid;
+ u32 mu_pos;
+ bool is_ampdu;
+};
+
+#define ATH12K_FLUSH_TIMEOUT (5 * HZ)
+#define ATH12K_VDEV_DELETE_TIMEOUT_HZ (5 * HZ)
+
+struct ath12k {
+ struct ath12k_base *ab;
+ struct ath12k_pdev *pdev;
+ struct ieee80211_hw *hw;
+ struct ieee80211_ops *ops;
+ struct ath12k_wmi_pdev *wmi;
+ struct ath12k_pdev_dp dp;
+ u8 mac_addr[ETH_ALEN];
+ u32 ht_cap_info;
+ u32 vht_cap_info;
+ struct ath12k_he ar_he;
+ enum ath12k_state state;
+ bool supports_6ghz;
+ struct {
+ struct completion started;
+ struct completion completed;
+ struct completion on_channel;
+ struct delayed_work timeout;
+ enum ath12k_scan_state state;
+ bool is_roc;
+ int vdev_id;
+ int roc_freq;
+ bool roc_notify;
+ } scan;
+
+ struct {
+ struct ieee80211_supported_band sbands[NUM_NL80211_BANDS];
+ struct ieee80211_sband_iftype_data
+ iftype[NUM_NL80211_BANDS][NUM_NL80211_IFTYPES];
+ } mac;
+
+ unsigned long dev_flags;
+ unsigned int filter_flags;
+ unsigned long monitor_flags;
+ u32 min_tx_power;
+ u32 max_tx_power;
+ u32 txpower_limit_2g;
+ u32 txpower_limit_5g;
+ u32 txpower_scale;
+ u32 power_scale;
+ u32 chan_tx_pwr;
+ u32 num_stations;
+ u32 max_num_stations;
+ bool monitor_present;
+ /* To synchronize concurrent synchronous mac80211 callback operations,
+ * concurrent debugfs configuration and concurrent FW statistics events.
+ */
+ struct mutex conf_mutex;
+ /* protects the radio specific data like debug stats, ppdu_stats_info stats,
+ * vdev_stop_status info, scan data, ath12k_sta info, ath12k_vif info,
+ * channel context data, survey info, test mode data.
+ */
+ spinlock_t data_lock;
+
+ struct list_head arvifs;
+ /* should never be NULL; needed for regular htt rx */
+ struct ieee80211_channel *rx_channel;
+
+ /* valid during scan; needed for mgmt rx during scan */
+ struct ieee80211_channel *scan_channel;
+
+ u8 cfg_tx_chainmask;
+ u8 cfg_rx_chainmask;
+ u8 num_rx_chains;
+ u8 num_tx_chains;
+ /* pdev_idx starts from 0 whereas pdev->pdev_id starts with 1 */
+ u8 pdev_idx;
+ u8 lmac_id;
+
+ struct completion peer_assoc_done;
+ struct completion peer_delete_done;
+
+ int install_key_status;
+ struct completion install_key_done;
+
+ int last_wmi_vdev_start_status;
+ struct completion vdev_setup_done;
+ struct completion vdev_delete_done;
+
+ int num_peers;
+ int max_num_peers;
+ u32 num_started_vdevs;
+ u32 num_created_vdevs;
+ unsigned long long allocated_vdev_map;
+
+ struct idr txmgmt_idr;
+ /* protects txmgmt_idr data */
+ spinlock_t txmgmt_idr_lock;
+ atomic_t num_pending_mgmt_tx;
+
+ /* cycle count is reported twice for each visited channel during scan.
+ * access protected by data_lock
+ */
+ u32 survey_last_rx_clear_count;
+ u32 survey_last_cycle_count;
+
+ /* Channel info events are expected to come in pairs without and with
+ * COMPLETE flag set respectively for each channel visit during scan.
+ *
+ * However, there are deviations from this rule. This flag is used to
+ * avoid reporting garbage data.
+ */
+ bool ch_info_can_report_survey;
+ struct survey_info survey[ATH12K_NUM_CHANS];
+ struct completion bss_survey_done;
+
+ struct work_struct regd_update_work;
+
+ struct work_struct wmi_mgmt_tx_work;
+ struct sk_buff_head wmi_mgmt_tx_queue;
+
+ struct ath12k_per_peer_tx_stats peer_tx_stats;
+ struct list_head ppdu_stats_info;
+ u32 ppdu_stat_list_depth;
+
+ struct ath12k_per_peer_tx_stats cached_stats;
+ u32 last_ppdu_id;
+ u32 cached_ppdu_id;
+
+ bool dfs_block_radar_events;
+ bool monitor_conf_enabled;
+ bool monitor_vdev_created;
+ bool monitor_started;
+ int monitor_vdev_id;
+};
+
+struct ath12k_band_cap {
+ u32 phy_id;
+ u32 max_bw_supported;
+ u32 ht_cap_info;
+ u32 he_cap_info[2];
+ u32 he_mcs;
+ u32 he_cap_phy_info[PSOC_HOST_MAX_PHY_SIZE];
+ struct ath12k_wmi_ppe_threshold_arg he_ppet;
+ u16 he_6ghz_capa;
+};
+
+struct ath12k_pdev_cap {
+ u32 supported_bands;
+ u32 ampdu_density;
+ u32 vht_cap;
+ u32 vht_mcs;
+ u32 he_mcs;
+ u32 tx_chain_mask;
+ u32 rx_chain_mask;
+ u32 tx_chain_mask_shift;
+ u32 rx_chain_mask_shift;
+ struct ath12k_band_cap band[NUM_NL80211_BANDS];
+};
+
+struct mlo_timestamp {
+ u32 info;
+ u32 sync_timestamp_lo_us;
+ u32 sync_timestamp_hi_us;
+ u32 mlo_offset_lo;
+ u32 mlo_offset_hi;
+ u32 mlo_offset_clks;
+ u32 mlo_comp_clks;
+ u32 mlo_comp_timer;
+};
+
+struct ath12k_pdev {
+ struct ath12k *ar;
+ u32 pdev_id;
+ struct ath12k_pdev_cap cap;
+ u8 mac_addr[ETH_ALEN];
+ struct mlo_timestamp timestamp;
+};
+
+struct ath12k_board_data {
+ const struct firmware *fw;
+ const void *data;
+ size_t len;
+};
+
+struct ath12k_soc_dp_tx_err_stats {
+ /* TCL Ring Descriptor unavailable */
+ u32 desc_na[DP_TCL_NUM_RING_MAX];
+ /* Other failures during dp_tx due to memory allocation failure,
+ * idr unavailable, etc.
+ */
+ atomic_t misc_fail;
+};
+
+struct ath12k_soc_dp_stats {
+ u32 err_ring_pkts;
+ u32 invalid_rbm;
+ u32 rxdma_error[HAL_REO_ENTR_RING_RXDMA_ECODE_MAX];
+ u32 reo_error[HAL_REO_DEST_RING_ERROR_CODE_MAX];
+ u32 hal_reo_error[DP_REO_DST_RING_MAX];
+ struct ath12k_soc_dp_tx_err_stats tx_err;
+};
+
+/* Master structure to hold the hw data which may be used by the core module */
+struct ath12k_base {
+ enum ath12k_hw_rev hw_rev;
+ struct platform_device *pdev;
+ struct device *dev;
+ struct ath12k_qmi qmi;
+ struct ath12k_wmi_base wmi_ab;
+ struct completion fw_ready;
+ int num_radios;
+ /* HW channel counters frequency value in hertz common to all MACs */
+ u32 cc_freq_hz;
+
+ struct ath12k_htc htc;
+
+ struct ath12k_dp dp;
+
+ void __iomem *mem;
+ unsigned long mem_len;
+
+ struct {
+ enum ath12k_bus bus;
+ const struct ath12k_hif_ops *ops;
+ } hif;
+
+ struct ath12k_ce ce;
+ struct timer_list rx_replenish_retry;
+ struct ath12k_hal hal;
+ /* To synchronize core_start/core_stop */
+ struct mutex core_lock;
+ /* Protects data like peers */
+ spinlock_t base_lock;
+ struct ath12k_pdev pdevs[MAX_RADIOS];
+ struct ath12k_pdev __rcu *pdevs_active[MAX_RADIOS];
+ struct ath12k_wmi_hal_reg_capabilities_ext_arg hal_reg_cap[MAX_RADIOS];
+ unsigned long long free_vdev_map;
+ unsigned long long free_vdev_stats_id_map;
+ struct list_head peers;
+ wait_queue_head_t peer_mapping_wq;
+ u8 mac_addr[ETH_ALEN];
+ bool wmi_ready;
+ u32 wlan_init_status;
+ int irq_num[ATH12K_IRQ_NUM_MAX];
+ struct ath12k_ext_irq_grp ext_irq_grp[ATH12K_EXT_IRQ_GRP_NUM_MAX];
+ struct napi_struct *napi;
+ struct ath12k_wmi_target_cap_arg target_caps;
+ u32 ext_service_bitmap[WMI_SERVICE_EXT_BM_SIZE];
+ bool pdevs_macaddr_valid;
+ int bd_api;
+
+ const struct ath12k_hw_params *hw_params;
+
+ const struct firmware *cal_file;
+
+ /* Below regd's are protected by ab->data_lock */
+ /* This is the regd set for every radio
+ * by the firmware during initialization
+ */
+ struct ieee80211_regdomain *default_regd[MAX_RADIOS];
+ /* This regd is set during dynamic country setting
+ * This may or may not be used during the runtime
+ */
+ struct ieee80211_regdomain *new_regd[MAX_RADIOS];
+
+ /* Current DFS Regulatory */
+ enum ath12k_dfs_region dfs_region;
+ struct ath12k_soc_dp_stats soc_stats;
+
+ unsigned long dev_flags;
+ struct completion driver_recovery;
+ struct workqueue_struct *workqueue;
+ struct work_struct restart_work;
+ struct workqueue_struct *workqueue_aux;
+ struct work_struct reset_work;
+ atomic_t reset_count;
+ atomic_t recovery_count;
+ atomic_t recovery_start_count;
+ bool is_reset;
+ struct completion reset_complete;
+ struct completion reconfigure_complete;
+ struct completion recovery_start;
+ /* continuous recovery fail count */
+ atomic_t fail_cont_count;
+ unsigned long reset_fail_timeout;
+ struct {
+ /* protected by data_lock */
+ u32 fw_crash_counter;
+ } stats;
+ u32 pktlog_defs_checksum;
+
+ struct ath12k_dbring_cap *db_caps;
+ u32 num_db_cap;
+
+ struct timer_list mon_reap_timer;
+
+ struct completion htc_suspend;
+
+ u64 fw_soc_drop_count;
+ bool static_window_map;
+
+ /* must be last */
+ u8 drv_priv[] __aligned(sizeof(void *));
+};
+
+int ath12k_core_qmi_firmware_ready(struct ath12k_base *ab);
+int ath12k_core_pre_init(struct ath12k_base *ab);
+int ath12k_core_init(struct ath12k_base *ath12k);
+void ath12k_core_deinit(struct ath12k_base *ath12k);
+struct ath12k_base *ath12k_core_alloc(struct device *dev, size_t priv_size,
+ enum ath12k_bus bus);
+void ath12k_core_free(struct ath12k_base *ath12k);
+int ath12k_core_fetch_board_data_api_1(struct ath12k_base *ab,
+ struct ath12k_board_data *bd,
+ char *filename);
+int ath12k_core_fetch_bdf(struct ath12k_base *ath12k,
+ struct ath12k_board_data *bd);
+void ath12k_core_free_bdf(struct ath12k_base *ab, struct ath12k_board_data *bd);
+int ath12k_core_check_dt(struct ath12k_base *ath12k);
+
+void ath12k_core_halt(struct ath12k *ar);
+int ath12k_core_resume(struct ath12k_base *ab);
+int ath12k_core_suspend(struct ath12k_base *ab);
+
+const struct firmware *ath12k_core_firmware_request(struct ath12k_base *ab,
+ const char *filename);
+
+static inline const char *ath12k_scan_state_str(enum ath12k_scan_state state)
+{
+ switch (state) {
+ case ATH12K_SCAN_IDLE:
+ return "idle";
+ case ATH12K_SCAN_STARTING:
+ return "starting";
+ case ATH12K_SCAN_RUNNING:
+ return "running";
+ case ATH12K_SCAN_ABORTING:
+ return "aborting";
+ }
+
+ return "unknown";
+}
+
+static inline struct ath12k_skb_cb *ATH12K_SKB_CB(struct sk_buff *skb)
+{
+ BUILD_BUG_ON(sizeof(struct ath12k_skb_cb) >
+ IEEE80211_TX_INFO_DRIVER_DATA_SIZE);
+ return (struct ath12k_skb_cb *)&IEEE80211_SKB_CB(skb)->driver_data;
+}
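+
+/* Illustrative tx-path usage (a sketch, not part of the original flow):
+ *
+ *   struct ath12k_skb_cb *skb_cb = ATH12K_SKB_CB(skb);
+ *
+ *   skb_cb->paddr = paddr;
+ *
+ * Per-frame driver state lives in mac80211's driver_data scratch space;
+ * the BUILD_BUG_ON() above guarantees at compile time that
+ * struct ath12k_skb_cb fits within IEEE80211_TX_INFO_DRIVER_DATA_SIZE.
+ */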
+
+static inline struct ath12k_skb_rxcb *ATH12K_SKB_RXCB(struct sk_buff *skb)
+{
+ BUILD_BUG_ON(sizeof(struct ath12k_skb_rxcb) > sizeof(skb->cb));
+ return (struct ath12k_skb_rxcb *)skb->cb;
+}
+
+static inline struct ath12k_vif *ath12k_vif_to_arvif(struct ieee80211_vif *vif)
+{
+ return (struct ath12k_vif *)vif->drv_priv;
+}
+
+static inline struct ath12k *ath12k_ab_to_ar(struct ath12k_base *ab,
+ int mac_id)
+{
+ return ab->pdevs[ath12k_hw_mac_id_to_pdev_id(ab->hw_params, mac_id)].ar;
+}
+
+static inline void ath12k_core_create_firmware_path(struct ath12k_base *ab,
+ const char *filename,
+ void *buf, size_t buf_len)
+{
+ snprintf(buf, buf_len, "%s/%s/%s", ATH12K_FW_DIR,
+ ab->hw_params->fw.dir, filename);
+}
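+
+/* For example, if ATH12K_FW_DIR were "ath12k" and hw_params->fw.dir were
+ * "QCN9274/hw2.0" (values illustrative), requesting "amss.bin" would yield
+ * the path "ath12k/QCN9274/hw2.0/amss.bin".
+ */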
+
+static inline const char *ath12k_bus_str(enum ath12k_bus bus)
+{
+ switch (bus) {
+ case ATH12K_BUS_PCI:
+ return "pci";
+ }
+
+ return "unknown";
+}
+
+#endif /* _CORE_H_ */
diff --git a/drivers/net/wireless/ath/ath12k/dbring.c b/drivers/net/wireless/ath/ath12k/dbring.c
new file mode 100644
index 000000000000..8fbf868e6f7e
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/dbring.c
@@ -0,0 +1,357 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include "core.h"
+#include "debug.h"
+
+static int ath12k_dbring_bufs_replenish(struct ath12k *ar,
+ struct ath12k_dbring *ring,
+ struct ath12k_dbring_element *buff,
+ gfp_t gfp)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct hal_srng *srng;
+ dma_addr_t paddr;
+ void *ptr_aligned, *ptr_unaligned, *desc;
+ int ret;
+ int buf_id;
+ u32 cookie;
+
+ srng = &ab->hal.srng_list[ring->refill_srng.ring_id];
+
+ lockdep_assert_held(&srng->lock);
+
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ ptr_unaligned = buff->payload;
+ ptr_aligned = PTR_ALIGN(ptr_unaligned, ring->buf_align);
+ paddr = dma_map_single(ab->dev, ptr_aligned, ring->buf_sz,
+ DMA_FROM_DEVICE);
+
+ ret = dma_mapping_error(ab->dev, paddr);
+ if (ret)
+ goto err;
+
+ spin_lock_bh(&ring->idr_lock);
+ buf_id = idr_alloc(&ring->bufs_idr, buff, 0, ring->bufs_max, gfp);
+ spin_unlock_bh(&ring->idr_lock);
+ if (buf_id < 0) {
+ ret = -ENOBUFS;
+ goto err_dma_unmap;
+ }
+
+ desc = ath12k_hal_srng_src_get_next_entry(ab, srng);
+ if (!desc) {
+ ret = -ENOENT;
+ goto err_idr_remove;
+ }
+
+ buff->paddr = paddr;
+
+ cookie = u32_encode_bits(ar->pdev_idx, DP_RXDMA_BUF_COOKIE_PDEV_ID) |
+ u32_encode_bits(buf_id, DP_RXDMA_BUF_COOKIE_BUF_ID);
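+ /* The cookie travels with the DMA buffer through the hardware; the
+ * buffer-release event handler decodes the buffer id from it to look
+ * the element back up in the IDR.
+ */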
+
+ ath12k_hal_rx_buf_addr_info_set(desc, paddr, cookie, 0);
+
+ ath12k_hal_srng_access_end(ab, srng);
+
+ return 0;
+
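+/* Error unwind below runs in reverse order of setup: release the IDR slot
+ * first, then the DMA mapping; the caller frees the buffer element itself.
+ */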
+err_idr_remove:
+ spin_lock_bh(&ring->idr_lock);
+ idr_remove(&ring->bufs_idr, buf_id);
+ spin_unlock_bh(&ring->idr_lock);
+err_dma_unmap:
+ dma_unmap_single(ab->dev, paddr, ring->buf_sz,
+ DMA_FROM_DEVICE);
+err:
+ ath12k_hal_srng_access_end(ab, srng);
+ return ret;
+}
+
+static int ath12k_dbring_fill_bufs(struct ath12k *ar,
+ struct ath12k_dbring *ring,
+ gfp_t gfp)
+{
+ struct ath12k_dbring_element *buff;
+ struct hal_srng *srng;
+ struct ath12k_base *ab = ar->ab;
+ int num_remain, req_entries, num_free;
+ u32 align;
+ int size, ret;
+
+ srng = &ab->hal.srng_list[ring->refill_srng.ring_id];
+
+ spin_lock_bh(&srng->lock);
+
+ num_free = ath12k_hal_srng_src_num_free(ab, srng, true);
+ req_entries = min(num_free, ring->bufs_max);
+ num_remain = req_entries;
+ align = ring->buf_align;
+ size = sizeof(*buff) + ring->buf_sz + align - 1;
+
+ while (num_remain > 0) {
+ buff = kzalloc(size, gfp);
+ if (!buff)
+ break;
+
+ ret = ath12k_dbring_bufs_replenish(ar, ring, buff, gfp);
+ if (ret) {
+ ath12k_warn(ab, "failed to replenish db ring num_remain %d req_ent %d\n",
+ num_remain, req_entries);
+ kfree(buff);
+ break;
+ }
+ num_remain--;
+ }
+
+ spin_unlock_bh(&srng->lock);
+
+ return num_remain;
+}
+
+int ath12k_dbring_wmi_cfg_setup(struct ath12k *ar,
+ struct ath12k_dbring *ring,
+ enum wmi_direct_buffer_module id)
+{
+ struct ath12k_wmi_pdev_dma_ring_cfg_arg arg = {0};
+ int ret;
+
+ if (id >= WMI_DIRECT_BUF_MAX)
+ return -EINVAL;
+
+ arg.pdev_id = DP_SW2HW_MACID(ring->pdev_id);
+ arg.module_id = id;
+ arg.base_paddr_lo = lower_32_bits(ring->refill_srng.paddr);
+ arg.base_paddr_hi = upper_32_bits(ring->refill_srng.paddr);
+ arg.head_idx_paddr_lo = lower_32_bits(ring->hp_addr);
+ arg.head_idx_paddr_hi = upper_32_bits(ring->hp_addr);
+ arg.tail_idx_paddr_lo = lower_32_bits(ring->tp_addr);
+ arg.tail_idx_paddr_hi = upper_32_bits(ring->tp_addr);
+ arg.num_elems = ring->bufs_max;
+ arg.buf_size = ring->buf_sz;
+ arg.num_resp_per_event = ring->num_resp_per_event;
+ arg.event_timeout_ms = ring->event_timeout_ms;
+
+ ret = ath12k_wmi_pdev_dma_ring_cfg(ar, &arg);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to setup db ring cfg\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+int ath12k_dbring_set_cfg(struct ath12k *ar, struct ath12k_dbring *ring,
+ u32 num_resp_per_event, u32 event_timeout_ms,
+ int (*handler)(struct ath12k *,
+ struct ath12k_dbring_data *))
+{
+ if (WARN_ON(!ring))
+ return -EINVAL;
+
+ ring->num_resp_per_event = num_resp_per_event;
+ ring->event_timeout_ms = event_timeout_ms;
+ ring->handler = handler;
+
+ return 0;
+}
+
+int ath12k_dbring_buf_setup(struct ath12k *ar,
+ struct ath12k_dbring *ring,
+ struct ath12k_dbring_cap *db_cap)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct hal_srng *srng;
+ int ret;
+
+ srng = &ab->hal.srng_list[ring->refill_srng.ring_id];
+ ring->bufs_max = ring->refill_srng.size /
+ ath12k_hal_srng_get_entrysize(ab, HAL_RXDMA_DIR_BUF);
+
+ ring->buf_sz = db_cap->min_buf_sz;
+ ring->buf_align = db_cap->min_buf_align;
+ ring->pdev_id = db_cap->pdev_id;
+ ring->hp_addr = ath12k_hal_srng_get_hp_addr(ab, srng);
+ ring->tp_addr = ath12k_hal_srng_get_tp_addr(ab, srng);
+
+ ret = ath12k_dbring_fill_bufs(ar, ring, GFP_KERNEL);
+
+ return ret;
+}
+
+int ath12k_dbring_srng_setup(struct ath12k *ar, struct ath12k_dbring *ring,
+ int ring_num, int num_entries)
+{
+ int ret;
+
+ ret = ath12k_dp_srng_setup(ar->ab, &ring->refill_srng, HAL_RXDMA_DIR_BUF,
+ ring_num, ar->pdev_idx, num_entries);
+ if (ret < 0) {
+ ath12k_warn(ar->ab, "failed to setup srng: %d ring_id %d\n",
+ ret, ring_num);
+ goto err;
+ }
+
+ return 0;
+err:
+ ath12k_dp_srng_cleanup(ar->ab, &ring->refill_srng);
+ return ret;
+}
+
+int ath12k_dbring_get_cap(struct ath12k_base *ab,
+ u8 pdev_idx,
+ enum wmi_direct_buffer_module id,
+ struct ath12k_dbring_cap *db_cap)
+{
+ int i;
+
+ if (!ab->num_db_cap || !ab->db_caps)
+ return -ENOENT;
+
+ if (id >= WMI_DIRECT_BUF_MAX)
+ return -EINVAL;
+
+ for (i = 0; i < ab->num_db_cap; i++) {
+ if (pdev_idx == ab->db_caps[i].pdev_id &&
+ id == ab->db_caps[i].id) {
+ *db_cap = ab->db_caps[i];
+
+ return 0;
+ }
+ }
+
+ return -ENOENT;
+}
+
+int ath12k_dbring_buffer_release_event(struct ath12k_base *ab,
+ struct ath12k_dbring_buf_release_event *ev)
+{
+ struct ath12k_dbring *ring = NULL;
+ struct hal_srng *srng;
+ struct ath12k *ar;
+ struct ath12k_dbring_element *buff;
+ struct ath12k_dbring_data handler_data;
+ struct ath12k_buffer_addr desc;
+ u8 *vaddr_unalign;
+ u32 num_entry, num_buff_reaped;
+ u8 pdev_idx, rbm;
+ u32 cookie;
+ int buf_id;
+ int size;
+ dma_addr_t paddr;
+ int ret = 0;
+
+ pdev_idx = le32_to_cpu(ev->fixed.pdev_id);
+
+ if (pdev_idx >= ab->num_radios) {
+ ath12k_warn(ab, "Invalid pdev id %d\n", pdev_idx);
+ return -EINVAL;
+ }
+
+ if (ev->fixed.num_buf_release_entry !=
+ ev->fixed.num_meta_data_entry) {
+ ath12k_warn(ab, "buffer entry count %d does not match meta-data entry count %d\n",
+ le32_to_cpu(ev->fixed.num_buf_release_entry),
+ le32_to_cpu(ev->fixed.num_meta_data_entry));
+ return -EINVAL;
+ }
+
+ ar = ab->pdevs[pdev_idx].ar;
+
+ rcu_read_lock();
+ if (!rcu_dereference(ab->pdevs_active[pdev_idx])) {
+ ret = -EINVAL;
+ goto rcu_unlock;
+ }
+
+ switch (ev->fixed.module_id) {
+ case WMI_DIRECT_BUF_SPECTRAL:
+ break;
+ default:
+ ring = NULL;
+ ath12k_warn(ab, "Recv dma buffer release ev on unsupp module %d\n",
+ ev->fixed.module_id);
+ break;
+ }
+
+ if (!ring) {
+ ret = -EINVAL;
+ goto rcu_unlock;
+ }
+
+ srng = &ab->hal.srng_list[ring->refill_srng.ring_id];
+ num_entry = le32_to_cpu(ev->fixed.num_buf_release_entry);
+ size = sizeof(*buff) + ring->buf_sz + ring->buf_align - 1;
+ num_buff_reaped = 0;
+
+ spin_lock_bh(&srng->lock);
+
+ while (num_buff_reaped < num_entry) {
+ desc.info0 = ev->buf_entry[num_buff_reaped].paddr_lo;
+ desc.info1 = ev->buf_entry[num_buff_reaped].paddr_hi;
+ handler_data.meta = ev->meta_data[num_buff_reaped];
+
+ num_buff_reaped++;
+
+ ath12k_hal_rx_buf_addr_info_get(&desc, &paddr, &cookie, &rbm);
+
+ buf_id = u32_get_bits(cookie, DP_RXDMA_BUF_COOKIE_BUF_ID);
+
+ spin_lock_bh(&ring->idr_lock);
+ buff = idr_find(&ring->bufs_idr, buf_id);
+ if (!buff) {
+ spin_unlock_bh(&ring->idr_lock);
+ continue;
+ }
+ idr_remove(&ring->bufs_idr, buf_id);
+ spin_unlock_bh(&ring->idr_lock);
+
+ dma_unmap_single(ab->dev, buff->paddr, ring->buf_sz,
+ DMA_FROM_DEVICE);
+
+ if (ring->handler) {
+ vaddr_unalign = buff->payload;
+ handler_data.data = PTR_ALIGN(vaddr_unalign,
+ ring->buf_align);
+ handler_data.data_sz = ring->buf_sz;
+
+ ring->handler(ar, &handler_data);
+ }
+
+ memset(buff, 0, size);
+ ath12k_dbring_bufs_replenish(ar, ring, buff, GFP_ATOMIC);
+ }
+
+ spin_unlock_bh(&srng->lock);
+
+rcu_unlock:
+ rcu_read_unlock();
+
+ return ret;
+}
+
+void ath12k_dbring_srng_cleanup(struct ath12k *ar, struct ath12k_dbring *ring)
+{
+ ath12k_dp_srng_cleanup(ar->ab, &ring->refill_srng);
+}
+
+void ath12k_dbring_buf_cleanup(struct ath12k *ar, struct ath12k_dbring *ring)
+{
+ struct ath12k_dbring_element *buff;
+ int buf_id;
+
+ spin_lock_bh(&ring->idr_lock);
+ idr_for_each_entry(&ring->bufs_idr, buff, buf_id) {
+ idr_remove(&ring->bufs_idr, buf_id);
+ dma_unmap_single(ar->ab->dev, buff->paddr,
+ ring->buf_sz, DMA_FROM_DEVICE);
+ kfree(buff);
+ }
+
+ idr_destroy(&ring->bufs_idr);
+ spin_unlock_bh(&ring->idr_lock);
+}
diff --git a/drivers/net/wireless/ath/ath12k/dbring.h b/drivers/net/wireless/ath/ath12k/dbring.h
new file mode 100644
index 000000000000..e1c0eba774ec
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/dbring.h
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_DBRING_H
+#define ATH12K_DBRING_H
+
+#include <linux/types.h>
+#include <linux/idr.h>
+#include <linux/spinlock.h>
+#include "dp.h"
+
+struct ath12k_dbring_element {
+ dma_addr_t paddr;
+ u8 payload[];
+};
+
+struct ath12k_dbring_data {
+ void *data;
+ u32 data_sz;
+ struct ath12k_wmi_dma_buf_release_meta_data_params meta;
+};
+
+struct ath12k_dbring_buf_release_event {
+ struct ath12k_wmi_dma_buf_release_fixed_params fixed;
+ const struct ath12k_wmi_dma_buf_release_entry_params *buf_entry;
+ const struct ath12k_wmi_dma_buf_release_meta_data_params *meta_data;
+ u32 num_buf_entry;
+ u32 num_meta;
+};
+
+struct ath12k_dbring_cap {
+ u32 pdev_id;
+ enum wmi_direct_buffer_module id;
+ u32 min_elem;
+ u32 min_buf_sz;
+ u32 min_buf_align;
+};
+
+struct ath12k_dbring {
+ struct dp_srng refill_srng;
+ struct idr bufs_idr;
+ /* Protects bufs_idr */
+ spinlock_t idr_lock;
+ dma_addr_t tp_addr;
+ dma_addr_t hp_addr;
+ int bufs_max;
+ u32 pdev_id;
+ u32 buf_sz;
+ u32 buf_align;
+ u32 num_resp_per_event;
+ u32 event_timeout_ms;
+ int (*handler)(struct ath12k *ar, struct ath12k_dbring_data *data);
+};
+
+int ath12k_dbring_set_cfg(struct ath12k *ar,
+ struct ath12k_dbring *ring,
+ u32 num_resp_per_event,
+ u32 event_timeout_ms,
+ int (*handler)(struct ath12k *,
+ struct ath12k_dbring_data *));
+int ath12k_dbring_wmi_cfg_setup(struct ath12k *ar,
+ struct ath12k_dbring *ring,
+ enum wmi_direct_buffer_module id);
+int ath12k_dbring_buf_setup(struct ath12k *ar,
+ struct ath12k_dbring *ring,
+ struct ath12k_dbring_cap *db_cap);
+int ath12k_dbring_srng_setup(struct ath12k *ar, struct ath12k_dbring *ring,
+ int ring_num, int num_entries);
+int ath12k_dbring_buffer_release_event(struct ath12k_base *ab,
+ struct ath12k_dbring_buf_release_event *ev);
+int ath12k_dbring_get_cap(struct ath12k_base *ab,
+ u8 pdev_idx,
+ enum wmi_direct_buffer_module id,
+ struct ath12k_dbring_cap *db_cap);
+void ath12k_dbring_srng_cleanup(struct ath12k *ar, struct ath12k_dbring *ring);
+void ath12k_dbring_buf_cleanup(struct ath12k *ar, struct ath12k_dbring *ring);
+#endif /* ATH12K_DBRING_H */
diff --git a/drivers/net/wireless/ath/ath12k/debug.c b/drivers/net/wireless/ath/ath12k/debug.c
new file mode 100644
index 000000000000..67893923e010
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/debug.c
@@ -0,0 +1,102 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include <linux/vmalloc.h>
+#include "core.h"
+#include "debug.h"
+
+void ath12k_info(struct ath12k_base *ab, const char *fmt, ...)
+{
+ struct va_format vaf = {
+ .fmt = fmt,
+ };
+ va_list args;
+
+ va_start(args, fmt);
+ vaf.va = &args;
+ dev_info(ab->dev, "%pV", &vaf);
+ /* TODO: Trace the log */
+ va_end(args);
+}
+
+void ath12k_err(struct ath12k_base *ab, const char *fmt, ...)
+{
+ struct va_format vaf = {
+ .fmt = fmt,
+ };
+ va_list args;
+
+ va_start(args, fmt);
+ vaf.va = &args;
+ dev_err(ab->dev, "%pV", &vaf);
+ /* TODO: Trace the log */
+ va_end(args);
+}
+
+void ath12k_warn(struct ath12k_base *ab, const char *fmt, ...)
+{
+ struct va_format vaf = {
+ .fmt = fmt,
+ };
+ va_list args;
+
+ va_start(args, fmt);
+ vaf.va = &args;
+ dev_warn_ratelimited(ab->dev, "%pV", &vaf);
+ /* TODO: Trace the log */
+ va_end(args);
+}
+
+#ifdef CONFIG_ATH12K_DEBUG
+
+void __ath12k_dbg(struct ath12k_base *ab, enum ath12k_debug_mask mask,
+ const char *fmt, ...)
+{
+ struct va_format vaf;
+ va_list args;
+
+ va_start(args, fmt);
+
+ vaf.fmt = fmt;
+ vaf.va = &args;
+
+ if (ath12k_debug_mask & mask)
+ dev_dbg(ab->dev, "%pV", &vaf);
+
+ /* TODO: trace log */
+
+ va_end(args);
+}
+
+void ath12k_dbg_dump(struct ath12k_base *ab,
+ enum ath12k_debug_mask mask,
+ const char *msg, const char *prefix,
+ const void *buf, size_t len)
+{
+ char linebuf[256];
+ size_t linebuflen;
+ const void *ptr;
+
+ if (ath12k_debug_mask & mask) {
+ if (msg)
+ __ath12k_dbg(ab, mask, "%s\n", msg);
+
+ for (ptr = buf; (ptr - buf) < len; ptr += 16) {
+ linebuflen = 0;
+ linebuflen += scnprintf(linebuf + linebuflen,
+ sizeof(linebuf) - linebuflen,
+ "%s%08x: ",
+ (prefix ? prefix : ""),
+ (unsigned int)(ptr - buf));
+ hex_dump_to_buffer(ptr, len - (ptr - buf), 16, 1,
+ linebuf + linebuflen,
+ sizeof(linebuf) - linebuflen, true);
+ dev_dbg(ab->dev, "%s\n", linebuf);
+ }
+ }
+}
+
+#endif /* CONFIG_ATH12K_DEBUG */
diff --git a/drivers/net/wireless/ath/ath12k/debug.h b/drivers/net/wireless/ath/ath12k/debug.h
new file mode 100644
index 000000000000..aa685295f8a4
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/debug.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef _ATH12K_DEBUG_H_
+#define _ATH12K_DEBUG_H_
+
+#include "trace.h"
+
+enum ath12k_debug_mask {
+ ATH12K_DBG_AHB = 0x00000001,
+ ATH12K_DBG_WMI = 0x00000002,
+ ATH12K_DBG_HTC = 0x00000004,
+ ATH12K_DBG_DP_HTT = 0x00000008,
+ ATH12K_DBG_MAC = 0x00000010,
+ ATH12K_DBG_BOOT = 0x00000020,
+ ATH12K_DBG_QMI = 0x00000040,
+ ATH12K_DBG_DATA = 0x00000080,
+ ATH12K_DBG_MGMT = 0x00000100,
+ ATH12K_DBG_REG = 0x00000200,
+ ATH12K_DBG_TESTMODE = 0x00000400,
+ ATH12K_DBG_HAL = 0x00000800,
+ ATH12K_DBG_PCI = 0x00001000,
+ ATH12K_DBG_DP_TX = 0x00002000,
+ ATH12K_DBG_DP_RX = 0x00004000,
+ ATH12K_DBG_ANY = 0xffffffff,
+};
+
+__printf(2, 3) void ath12k_info(struct ath12k_base *ab, const char *fmt, ...);
+__printf(2, 3) void ath12k_err(struct ath12k_base *ab, const char *fmt, ...);
+__printf(2, 3) void ath12k_warn(struct ath12k_base *ab, const char *fmt, ...);
+
+extern unsigned int ath12k_debug_mask;
+
+#ifdef CONFIG_ATH12K_DEBUG
+__printf(3, 4) void __ath12k_dbg(struct ath12k_base *ab,
+ enum ath12k_debug_mask mask,
+ const char *fmt, ...);
+void ath12k_dbg_dump(struct ath12k_base *ab,
+ enum ath12k_debug_mask mask,
+ const char *msg, const char *prefix,
+ const void *buf, size_t len);
+#else /* CONFIG_ATH12K_DEBUG */
+static inline void __ath12k_dbg(struct ath12k_base *ab,
+ enum ath12k_debug_mask dbg_mask,
+ const char *fmt, ...)
+{
+}
+
+static inline void ath12k_dbg_dump(struct ath12k_base *ab,
+ enum ath12k_debug_mask mask,
+ const char *msg, const char *prefix,
+ const void *buf, size_t len)
+{
+}
+#endif /* CONFIG_ATH12K_DEBUG */
+
+#define ath12k_dbg(ar, dbg_mask, fmt, ...) \
+do { \
+ typeof(dbg_mask) mask = (dbg_mask); \
+ if (ath12k_debug_mask & mask) \
+ __ath12k_dbg(ar, mask, fmt, ##__VA_ARGS__); \
+} while (0)
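+
+/* Example invocation (illustrative; any ath12k_base pointer and mask bit
+ * works):
+ *
+ *   ath12k_dbg(ab, ATH12K_DBG_WMI, "wmi cmd %d len %d\n", cmd_id, skb->len);
+ *
+ * The message is emitted only when the corresponding bit is set in the
+ * module-wide ath12k_debug_mask.
+ */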
+
+#endif /* _ATH12K_DEBUG_H_ */
diff --git a/drivers/net/wireless/ath/ath12k/dp.c b/drivers/net/wireless/ath/ath12k/dp.c
new file mode 100644
index 000000000000..eb0261500ab0
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/dp.c
@@ -0,0 +1,1580 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include <crypto/hash.h>
+#include "core.h"
+#include "dp_tx.h"
+#include "hal_tx.h"
+#include "hif.h"
+#include "debug.h"
+#include "dp_rx.h"
+#include "peer.h"
+#include "dp_mon.h"
+
+static void ath12k_dp_htt_htc_tx_complete(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ dev_kfree_skb_any(skb);
+}
+
+void ath12k_dp_peer_cleanup(struct ath12k *ar, int vdev_id, const u8 *addr)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_peer *peer;
+
+ /* TODO: Any other peer specific DP cleanup */
+
+ spin_lock_bh(&ab->base_lock);
+ peer = ath12k_peer_find(ab, vdev_id, addr);
+ if (!peer) {
+ ath12k_warn(ab, "failed to lookup peer %pM on vdev %d\n",
+ addr, vdev_id);
+ spin_unlock_bh(&ab->base_lock);
+ return;
+ }
+
+ ath12k_dp_rx_peer_tid_cleanup(ar, peer);
+ crypto_free_shash(peer->tfm_mmic);
+ spin_unlock_bh(&ab->base_lock);
+}
+
+int ath12k_dp_peer_setup(struct ath12k *ar, int vdev_id, const u8 *addr)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_peer *peer;
+ u32 reo_dest;
+ int ret = 0, tid;
+
+ /* NOTE: reo_dest ring id starts from 1 unlike mac_id which starts from 0 */
+ reo_dest = ar->dp.mac_id + 1;
+ ret = ath12k_wmi_set_peer_param(ar, addr, vdev_id,
+ WMI_PEER_SET_DEFAULT_ROUTING,
+ DP_RX_HASH_ENABLE | (reo_dest << 1));
+
+ if (ret) {
+ ath12k_warn(ab, "failed to set default routing %d peer :%pM vdev_id :%d\n",
+ ret, addr, vdev_id);
+ return ret;
+ }
+
+ for (tid = 0; tid <= IEEE80211_NUM_TIDS; tid++) {
+ ret = ath12k_dp_rx_peer_tid_setup(ar, addr, vdev_id, tid, 1, 0,
+ HAL_PN_TYPE_NONE);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup rxd tid queue for tid %d: %d\n",
+ tid, ret);
+ goto peer_clean;
+ }
+ }
+
+ ret = ath12k_dp_rx_peer_frag_setup(ar, addr, vdev_id);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup rx defrag context\n");
+ goto peer_clean;
+ }
+
+ /* TODO: Setup other peer specific resource used in data path */
+
+ return 0;
+
+peer_clean:
+ spin_lock_bh(&ab->base_lock);
+
+ peer = ath12k_peer_find(ab, vdev_id, addr);
+ if (!peer) {
+ ath12k_warn(ab, "failed to find the peer to del rx tid\n");
+ spin_unlock_bh(&ab->base_lock);
+ return -ENOENT;
+ }
+
+ for (; tid >= 0; tid--)
+ ath12k_dp_rx_peer_tid_delete(ar, peer, tid);
+
+ spin_unlock_bh(&ab->base_lock);
+
+ return ret;
+}
+
+void ath12k_dp_srng_cleanup(struct ath12k_base *ab, struct dp_srng *ring)
+{
+ if (!ring->vaddr_unaligned)
+ return;
+
+ dma_free_coherent(ab->dev, ring->size, ring->vaddr_unaligned,
+ ring->paddr_unaligned);
+
+ ring->vaddr_unaligned = NULL;
+}
+
+static int ath12k_dp_srng_find_ring_in_mask(int ring_num, const u8 *grp_mask)
+{
+ int ext_group_num;
+ u8 mask = 1 << ring_num;
+
+ for (ext_group_num = 0; ext_group_num < ATH12K_EXT_IRQ_GRP_NUM_MAX;
+ ext_group_num++) {
+ if (mask & grp_mask[ext_group_num])
+ return ext_group_num;
+ }
+
+ return -ENOENT;
+}
+
+static int ath12k_dp_srng_calculate_msi_group(struct ath12k_base *ab,
+ enum hal_ring_type type, int ring_num)
+{
+ const u8 *grp_mask;
+
+ switch (type) {
+ case HAL_WBM2SW_RELEASE:
+ if (ring_num == HAL_WBM2SW_REL_ERR_RING_NUM) {
+ grp_mask = &ab->hw_params->ring_mask->rx_wbm_rel[0];
+ ring_num = 0;
+ } else {
+ grp_mask = &ab->hw_params->ring_mask->tx[0];
+ }
+ break;
+ case HAL_REO_EXCEPTION:
+ grp_mask = &ab->hw_params->ring_mask->rx_err[0];
+ break;
+ case HAL_REO_DST:
+ grp_mask = &ab->hw_params->ring_mask->rx[0];
+ break;
+ case HAL_REO_STATUS:
+ grp_mask = &ab->hw_params->ring_mask->reo_status[0];
+ break;
+ case HAL_RXDMA_MONITOR_STATUS:
+ case HAL_RXDMA_MONITOR_DST:
+ grp_mask = &ab->hw_params->ring_mask->rx_mon_dest[0];
+ break;
+ case HAL_TX_MONITOR_DST:
+ grp_mask = &ab->hw_params->ring_mask->tx_mon_dest[0];
+ break;
+ case HAL_RXDMA_BUF:
+ grp_mask = &ab->hw_params->ring_mask->host2rxdma[0];
+ break;
+ case HAL_RXDMA_MONITOR_BUF:
+ case HAL_TCL_DATA:
+ case HAL_TCL_CMD:
+ case HAL_REO_CMD:
+ case HAL_SW2WBM_RELEASE:
+ case HAL_WBM_IDLE_LINK:
+ case HAL_TCL_STATUS:
+ case HAL_REO_REINJECT:
+ case HAL_CE_SRC:
+ case HAL_CE_DST:
+ case HAL_CE_DST_STATUS:
+ default:
+ return -ENOENT;
+ }
+
+ return ath12k_dp_srng_find_ring_in_mask(ring_num, grp_mask);
+}
+
+static void ath12k_dp_srng_msi_setup(struct ath12k_base *ab,
+ struct hal_srng_params *ring_params,
+ enum hal_ring_type type, int ring_num)
+{
+ int msi_group_number, msi_data_count;
+ u32 msi_data_start, msi_irq_start, addr_lo, addr_hi;
+ int ret;
+
+ ret = ath12k_hif_get_user_msi_vector(ab, "DP",
+ &msi_data_count, &msi_data_start,
+ &msi_irq_start);
+ if (ret)
+ return;
+
+ msi_group_number = ath12k_dp_srng_calculate_msi_group(ab, type,
+ ring_num);
+ if (msi_group_number < 0) {
+ ath12k_dbg(ab, ATH12K_DBG_PCI,
+ "ring not part of an ext_group; ring_type: %d,ring_num %d",
+ type, ring_num);
+ ring_params->msi_addr = 0;
+ ring_params->msi_data = 0;
+ return;
+ }
+
+ if (msi_group_number > msi_data_count) {
+ ath12k_dbg(ab, ATH12K_DBG_PCI,
+ "multiple msi_groups share one msi, msi_group_num %d",
+ msi_group_number);
+ }
+
+ ath12k_hif_get_msi_address(ab, &addr_lo, &addr_hi);
+
+ ring_params->msi_addr = addr_lo;
+ ring_params->msi_addr |= (dma_addr_t)(((u64)addr_hi) << 32);
+ ring_params->msi_data = (msi_group_number % msi_data_count)
+ + msi_data_start;
+ ring_params->flags |= HAL_SRNG_FLAGS_MSI_INTR;
+}
+
+int ath12k_dp_srng_setup(struct ath12k_base *ab, struct dp_srng *ring,
+ enum hal_ring_type type, int ring_num,
+ int mac_id, int num_entries)
+{
+ struct hal_srng_params params = { 0 };
+ int entry_sz = ath12k_hal_srng_get_entrysize(ab, type);
+ int max_entries = ath12k_hal_srng_get_max_entries(ab, type);
+ int ret;
+
+ if (max_entries < 0 || entry_sz < 0)
+ return -EINVAL;
+
+ if (num_entries > max_entries)
+ num_entries = max_entries;
+
+ ring->size = (num_entries * entry_sz) + HAL_RING_BASE_ALIGN - 1;
+ ring->vaddr_unaligned = dma_alloc_coherent(ab->dev, ring->size,
+ &ring->paddr_unaligned,
+ GFP_KERNEL);
+ if (!ring->vaddr_unaligned)
+ return -ENOMEM;
+
+ ring->vaddr = PTR_ALIGN(ring->vaddr_unaligned, HAL_RING_BASE_ALIGN);
+ ring->paddr = ring->paddr_unaligned + ((unsigned long)ring->vaddr -
+ (unsigned long)ring->vaddr_unaligned);
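+
+ /* A worked example of the alignment math above, assuming
+ * HAL_RING_BASE_ALIGN is 8: a request for 100 entry-bytes allocates
+ * 107 bytes; if vaddr_unaligned lands at an address ending in 0x4,
+ * vaddr is rounded up by 4 bytes and paddr is advanced by the same
+ * delta, keeping the virtual and DMA ring base addresses in sync.
+ */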
+
+ params.ring_base_vaddr = ring->vaddr;
+ params.ring_base_paddr = ring->paddr;
+ params.num_entries = num_entries;
+ ath12k_dp_srng_msi_setup(ab, &params, type, ring_num + mac_id);
+
+ switch (type) {
+ case HAL_REO_DST:
+ params.intr_batch_cntr_thres_entries =
+ HAL_SRNG_INT_BATCH_THRESHOLD_RX;
+ params.intr_timer_thres_us = HAL_SRNG_INT_TIMER_THRESHOLD_RX;
+ break;
+ case HAL_RXDMA_BUF:
+ case HAL_RXDMA_MONITOR_BUF:
+ case HAL_RXDMA_MONITOR_STATUS:
+ params.low_threshold = num_entries >> 3;
+ params.flags |= HAL_SRNG_FLAGS_LOW_THRESH_INTR_EN;
+ params.intr_batch_cntr_thres_entries = 0;
+ params.intr_timer_thres_us = HAL_SRNG_INT_TIMER_THRESHOLD_RX;
+ break;
+ case HAL_TX_MONITOR_DST:
+ params.low_threshold = DP_TX_MONITOR_BUF_SIZE_MAX >> 3;
+ params.flags |= HAL_SRNG_FLAGS_LOW_THRESH_INTR_EN;
+ params.intr_batch_cntr_thres_entries = 0;
+ params.intr_timer_thres_us = HAL_SRNG_INT_TIMER_THRESHOLD_RX;
+ break;
+ case HAL_WBM2SW_RELEASE:
+ if (ab->hw_params->hw_ops->dp_srng_is_tx_comp_ring(ring_num)) {
+ params.intr_batch_cntr_thres_entries =
+ HAL_SRNG_INT_BATCH_THRESHOLD_TX;
+ params.intr_timer_thres_us =
+ HAL_SRNG_INT_TIMER_THRESHOLD_TX;
+ break;
+ }
+ /* fall through for WBM2SW rings that are not tx completion rings */
+ fallthrough;
+ case HAL_REO_EXCEPTION:
+ case HAL_REO_REINJECT:
+ case HAL_REO_CMD:
+ case HAL_REO_STATUS:
+ case HAL_TCL_DATA:
+ case HAL_TCL_CMD:
+ case HAL_TCL_STATUS:
+ case HAL_WBM_IDLE_LINK:
+ case HAL_SW2WBM_RELEASE:
+ case HAL_RXDMA_DST:
+ case HAL_RXDMA_MONITOR_DST:
+ case HAL_RXDMA_MONITOR_DESC:
+ params.intr_batch_cntr_thres_entries =
+ HAL_SRNG_INT_BATCH_THRESHOLD_OTHER;
+ params.intr_timer_thres_us = HAL_SRNG_INT_TIMER_THRESHOLD_OTHER;
+ break;
+ case HAL_RXDMA_DIR_BUF:
+ break;
+ default:
+ ath12k_warn(ab, "Not a valid ring type in dp :%d\n", type);
+ return -EINVAL;
+ }
+
+ ret = ath12k_hal_srng_setup(ab, type, ring_num, mac_id, &params);
+ if (ret < 0) {
+ ath12k_warn(ab, "failed to setup srng: %d ring_id %d\n",
+ ret, ring_num);
+ return ret;
+ }
+
+ ring->ring_id = ret;
+
+ return 0;
+}
+
+static
+u32 ath12k_dp_tx_get_vdev_bank_config(struct ath12k_base *ab, struct ath12k_vif *arvif)
+{
+ u32 bank_config = 0;
+
+ /* Only valid for raw frames with HW crypto enabled.
+ * With SW crypto, mac80211 sets key per packet
+ */
+ if (arvif->tx_encap_type == HAL_TCL_ENCAP_TYPE_RAW &&
+ test_bit(ATH12K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags))
+ bank_config |=
+ u32_encode_bits(ath12k_dp_tx_get_encrypt_type(arvif->key_cipher),
+ HAL_TX_BANK_CONFIG_ENCRYPT_TYPE);
+
+ bank_config |= u32_encode_bits(arvif->tx_encap_type,
+ HAL_TX_BANK_CONFIG_ENCAP_TYPE);
+ bank_config |= u32_encode_bits(0, HAL_TX_BANK_CONFIG_SRC_BUFFER_SWAP) |
+ u32_encode_bits(0, HAL_TX_BANK_CONFIG_LINK_META_SWAP) |
+ u32_encode_bits(0, HAL_TX_BANK_CONFIG_EPD);
+
+ /* only valid if idx_lookup_override is not set in tcl_data_cmd */
+ bank_config |= u32_encode_bits(0, HAL_TX_BANK_CONFIG_INDEX_LOOKUP_EN);
+
+ bank_config |= u32_encode_bits(arvif->hal_addr_search_flags & HAL_TX_ADDRX_EN,
+ HAL_TX_BANK_CONFIG_ADDRX_EN) |
+ u32_encode_bits(!!(arvif->hal_addr_search_flags &
+ HAL_TX_ADDRY_EN),
+ HAL_TX_BANK_CONFIG_ADDRY_EN);
+
+ bank_config |= u32_encode_bits(ieee80211_vif_is_mesh(arvif->vif) ? 3 : 0,
+ HAL_TX_BANK_CONFIG_MESH_EN) |
+ u32_encode_bits(arvif->vdev_id_check_en,
+ HAL_TX_BANK_CONFIG_VDEV_ID_CHECK_EN);
+
+ bank_config |= u32_encode_bits(0, HAL_TX_BANK_CONFIG_DSCP_TIP_MAP_ID);
+
+ return bank_config;
+}
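+
+/* The word composed above is programmed into a hardware TCL bank register;
+ * ath12k_dp_tx_get_bank_profile() below lets vdevs with identical
+ * configurations share one bank and only writes the register for a newly
+ * configured profile.
+ */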
+
+static int ath12k_dp_tx_get_bank_profile(struct ath12k_base *ab, struct ath12k_vif *arvif,
+ struct ath12k_dp *dp)
+{
+ int bank_id = DP_INVALID_BANK_ID;
+ int i;
+ u32 bank_config;
+ bool configure_register = false;
+
+ /* convert vdev params into hal_tx_bank_config */
+ bank_config = ath12k_dp_tx_get_vdev_bank_config(ab, arvif);
+
+ spin_lock_bh(&dp->tx_bank_lock);
+ /* TODO: implement using idr kernel framework */
+ for (i = 0; i < dp->num_bank_profiles; i++) {
+ if (dp->bank_profiles[i].is_configured &&
+ (dp->bank_profiles[i].bank_config ^ bank_config) == 0) {
+ bank_id = i;
+ goto inc_ref_and_return;
+ }
+ if (!dp->bank_profiles[i].is_configured ||
+ !dp->bank_profiles[i].num_users) {
+ bank_id = i;
+ goto configure_and_return;
+ }
+ }
+
+ if (bank_id == DP_INVALID_BANK_ID) {
+ spin_unlock_bh(&dp->tx_bank_lock);
+ ath12k_err(ab, "unable to find TX bank!");
+ return bank_id;
+ }
+
+configure_and_return:
+ dp->bank_profiles[bank_id].is_configured = true;
+ dp->bank_profiles[bank_id].bank_config = bank_config;
+ configure_register = true;
+inc_ref_and_return:
+ dp->bank_profiles[bank_id].num_users++;
+ spin_unlock_bh(&dp->tx_bank_lock);
+
+ if (configure_register)
+ ath12k_hal_tx_configure_bank_register(ab, bank_config, bank_id);
+
+ ath12k_dbg(ab, ATH12K_DBG_DP_HTT, "dp_htt tcl bank_id %d input 0x%x match 0x%x num_users %u",
+ bank_id, bank_config, dp->bank_profiles[bank_id].bank_config,
+ dp->bank_profiles[bank_id].num_users);
+
+ return bank_id;
+}
+
+void ath12k_dp_tx_put_bank_profile(struct ath12k_dp *dp, u8 bank_id)
+{
+ spin_lock_bh(&dp->tx_bank_lock);
+ dp->bank_profiles[bank_id].num_users--;
+ spin_unlock_bh(&dp->tx_bank_lock);
+}
+
+static void ath12k_dp_deinit_bank_profiles(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+
+ kfree(dp->bank_profiles);
+ dp->bank_profiles = NULL;
+}
+
+static int ath12k_dp_init_bank_profiles(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ u32 num_tcl_banks = ab->hw_params->num_tcl_banks;
+ int i;
+
+ dp->num_bank_profiles = num_tcl_banks;
+ dp->bank_profiles = kmalloc_array(num_tcl_banks,
+ sizeof(struct ath12k_dp_tx_bank_profile),
+ GFP_KERNEL);
+ if (!dp->bank_profiles)
+ return -ENOMEM;
+
+ spin_lock_init(&dp->tx_bank_lock);
+
+ for (i = 0; i < num_tcl_banks; i++) {
+ dp->bank_profiles[i].is_configured = false;
+ dp->bank_profiles[i].num_users = 0;
+ }
+
+ return 0;
+}
+
+static void ath12k_dp_srng_common_cleanup(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ int i;
+
+ ath12k_dp_srng_cleanup(ab, &dp->reo_status_ring);
+ ath12k_dp_srng_cleanup(ab, &dp->reo_cmd_ring);
+ ath12k_dp_srng_cleanup(ab, &dp->reo_except_ring);
+ ath12k_dp_srng_cleanup(ab, &dp->rx_rel_ring);
+ ath12k_dp_srng_cleanup(ab, &dp->reo_reinject_ring);
+ for (i = 0; i < ab->hw_params->max_tx_ring; i++) {
+ ath12k_dp_srng_cleanup(ab, &dp->tx_ring[i].tcl_comp_ring);
+ ath12k_dp_srng_cleanup(ab, &dp->tx_ring[i].tcl_data_ring);
+ }
+ ath12k_dp_srng_cleanup(ab, &dp->tcl_status_ring);
+ ath12k_dp_srng_cleanup(ab, &dp->tcl_cmd_ring);
+ ath12k_dp_srng_cleanup(ab, &dp->wbm_desc_rel_ring);
+}
+
+static int ath12k_dp_srng_common_setup(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ const struct ath12k_hal_tcl_to_wbm_rbm_map *map;
+ struct hal_srng *srng;
+ int i, ret, tx_comp_ring_num;
+ u32 ring_hash_map;
+
+ ret = ath12k_dp_srng_setup(ab, &dp->wbm_desc_rel_ring,
+ HAL_SW2WBM_RELEASE, 0, 0,
+ DP_WBM_RELEASE_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ab, "failed to set up wbm2sw_release ring :%d\n",
+ ret);
+ goto err;
+ }
+
+ ret = ath12k_dp_srng_setup(ab, &dp->tcl_cmd_ring, HAL_TCL_CMD, 0, 0,
+ DP_TCL_CMD_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ab, "failed to set up tcl_cmd ring :%d\n", ret);
+ goto err;
+ }
+
+ ret = ath12k_dp_srng_setup(ab, &dp->tcl_status_ring, HAL_TCL_STATUS,
+ 0, 0, DP_TCL_STATUS_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ab, "failed to set up tcl_status ring :%d\n", ret);
+ goto err;
+ }
+
+ for (i = 0; i < ab->hw_params->max_tx_ring; i++) {
+ map = ab->hw_params->hal_ops->tcl_to_wbm_rbm_map;
+ tx_comp_ring_num = map[i].wbm_ring_num;
+
+ ret = ath12k_dp_srng_setup(ab, &dp->tx_ring[i].tcl_data_ring,
+ HAL_TCL_DATA, i, 0,
+ DP_TCL_DATA_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ab, "failed to set up tcl_data ring (%d) :%d\n",
+ i, ret);
+ goto err;
+ }
+
+ ret = ath12k_dp_srng_setup(ab, &dp->tx_ring[i].tcl_comp_ring,
+ HAL_WBM2SW_RELEASE, tx_comp_ring_num, 0,
+ DP_TX_COMP_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ab, "failed to set up tcl_comp ring (%d) :%d\n",
+ tx_comp_ring_num, ret);
+ goto err;
+ }
+ }
+
+ ret = ath12k_dp_srng_setup(ab, &dp->reo_reinject_ring, HAL_REO_REINJECT,
+ 0, 0, DP_REO_REINJECT_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ab, "failed to set up reo_reinject ring :%d\n",
+ ret);
+ goto err;
+ }
+
+ ret = ath12k_dp_srng_setup(ab, &dp->rx_rel_ring, HAL_WBM2SW_RELEASE,
+ HAL_WBM2SW_REL_ERR_RING_NUM, 0,
+ DP_RX_RELEASE_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ab, "failed to set up rx_rel ring :%d\n", ret);
+ goto err;
+ }
+
+ ret = ath12k_dp_srng_setup(ab, &dp->reo_except_ring, HAL_REO_EXCEPTION,
+ 0, 0, DP_REO_EXCEPTION_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ab, "failed to set up reo_exception ring :%d\n",
+ ret);
+ goto err;
+ }
+
+ ret = ath12k_dp_srng_setup(ab, &dp->reo_cmd_ring, HAL_REO_CMD,
+ 0, 0, DP_REO_CMD_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ab, "failed to set up reo_cmd ring :%d\n", ret);
+ goto err;
+ }
+
+ srng = &ab->hal.srng_list[dp->reo_cmd_ring.ring_id];
+ ath12k_hal_reo_init_cmd_ring(ab, srng);
+
+ ret = ath12k_dp_srng_setup(ab, &dp->reo_status_ring, HAL_REO_STATUS,
+ 0, 0, DP_REO_STATUS_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ab, "failed to set up reo_status ring :%d\n", ret);
+ goto err;
+ }
+
+ /* When hash based routing of rx packet is enabled, 32 entries to map
+ * the hash values to the ring will be configured. Each hash entry uses
+ * four bits to map to a particular ring. The ring mapping will be
+ * 0:TCL, 1:SW1, 2:SW2, 3:SW3, 4:SW4, 5:Release, 6:FW and 7:SW5
+ * 8:SW6, 9:SW7, 10:SW8, 11:Not used.
+ */
+ ring_hash_map = HAL_HASH_ROUTING_RING_SW1 |
+ HAL_HASH_ROUTING_RING_SW2 << 4 |
+ HAL_HASH_ROUTING_RING_SW3 << 8 |
+ HAL_HASH_ROUTING_RING_SW4 << 12 |
+ HAL_HASH_ROUTING_RING_SW1 << 16 |
+ HAL_HASH_ROUTING_RING_SW2 << 20 |
+ HAL_HASH_ROUTING_RING_SW3 << 24 |
+ HAL_HASH_ROUTING_RING_SW4 << 28;
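+
+ /* Each 4-bit nibble above selects the destination ring for one hash
+ * value: hash results 0-3 map to SW1-SW4 and 4-7 wrap around to
+ * SW1-SW4 again, spreading flows evenly across the four REO
+ * destination rings.
+ */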
+
+ ath12k_hal_reo_hw_setup(ab, ring_hash_map);
+
+ return 0;
+
+err:
+ ath12k_dp_srng_common_cleanup(ab);
+
+ return ret;
+}
+
+static void ath12k_dp_scatter_idle_link_desc_cleanup(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ struct hal_wbm_idle_scatter_list *slist = dp->scatter_list;
+ int i;
+
+ for (i = 0; i < DP_IDLE_SCATTER_BUFS_MAX; i++) {
+ if (!slist[i].vaddr)
+ continue;
+
+ dma_free_coherent(ab->dev, HAL_WBM_IDLE_SCATTER_BUF_SIZE_MAX,
+ slist[i].vaddr, slist[i].paddr);
+ slist[i].vaddr = NULL;
+ }
+}
+
+static int ath12k_dp_scatter_idle_link_desc_setup(struct ath12k_base *ab,
+ int size,
+ u32 n_link_desc_bank,
+ u32 n_link_desc,
+ u32 last_bank_sz)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ struct dp_link_desc_bank *link_desc_banks = dp->link_desc_banks;
+ struct hal_wbm_idle_scatter_list *slist = dp->scatter_list;
+ u32 n_entries_per_buf;
+ int num_scatter_buf, scatter_idx;
+ struct hal_wbm_link_desc *scatter_buf;
+ int align_bytes, n_entries;
+ dma_addr_t paddr;
+ int rem_entries;
+ int i;
+ int ret = 0;
+ u32 end_offset, cookie;
+
+ n_entries_per_buf = HAL_WBM_IDLE_SCATTER_BUF_SIZE /
+ ath12k_hal_srng_get_entrysize(ab, HAL_WBM_IDLE_LINK);
+ num_scatter_buf = DIV_ROUND_UP(size, HAL_WBM_IDLE_SCATTER_BUF_SIZE);
+
+ if (num_scatter_buf > DP_IDLE_SCATTER_BUFS_MAX)
+ return -EINVAL;
+
+ for (i = 0; i < num_scatter_buf; i++) {
+ slist[i].vaddr = dma_alloc_coherent(ab->dev,
+ HAL_WBM_IDLE_SCATTER_BUF_SIZE_MAX,
+ &slist[i].paddr, GFP_KERNEL);
+ if (!slist[i].vaddr) {
+ ret = -ENOMEM;
+ goto err;
+ }
+ }
+
+ scatter_idx = 0;
+ scatter_buf = slist[scatter_idx].vaddr;
+ rem_entries = n_entries_per_buf;
+
+ for (i = 0; i < n_link_desc_bank; i++) {
+ align_bytes = link_desc_banks[i].vaddr -
+ link_desc_banks[i].vaddr_unaligned;
+ n_entries = (DP_LINK_DESC_ALLOC_SIZE_THRESH - align_bytes) /
+ HAL_LINK_DESC_SIZE;
+ paddr = link_desc_banks[i].paddr;
+ while (n_entries) {
+ cookie = DP_LINK_DESC_COOKIE_SET(n_entries, i);
+ ath12k_hal_set_link_desc_addr(scatter_buf, cookie, paddr);
+ n_entries--;
+ paddr += HAL_LINK_DESC_SIZE;
+ if (rem_entries) {
+ rem_entries--;
+ scatter_buf++;
+ continue;
+ }
+
+ rem_entries = n_entries_per_buf;
+ scatter_idx++;
+ scatter_buf = slist[scatter_idx].vaddr;
+ }
+ }
+
+ end_offset = (scatter_buf - slist[scatter_idx].vaddr) *
+ sizeof(struct hal_wbm_link_desc);
+ ath12k_hal_setup_link_idle_list(ab, slist, num_scatter_buf,
+ n_link_desc, end_offset);
+
+ return 0;
+
+err:
+ ath12k_dp_scatter_idle_link_desc_cleanup(ab);
+
+ return ret;
+}
+
+static void
+ath12k_dp_link_desc_bank_free(struct ath12k_base *ab,
+ struct dp_link_desc_bank *link_desc_banks)
+{
+ int i;
+
+ for (i = 0; i < DP_LINK_DESC_BANKS_MAX; i++) {
+ if (link_desc_banks[i].vaddr_unaligned) {
+ dma_free_coherent(ab->dev,
+ link_desc_banks[i].size,
+ link_desc_banks[i].vaddr_unaligned,
+ link_desc_banks[i].paddr_unaligned);
+ link_desc_banks[i].vaddr_unaligned = NULL;
+ }
+ }
+}
+
+static int ath12k_dp_link_desc_bank_alloc(struct ath12k_base *ab,
+ struct dp_link_desc_bank *desc_bank,
+ int n_link_desc_bank,
+ int last_bank_sz)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ int i;
+ int ret = 0;
+ int desc_sz = DP_LINK_DESC_ALLOC_SIZE_THRESH;
+
+ for (i = 0; i < n_link_desc_bank; i++) {
+ if (i == (n_link_desc_bank - 1) && last_bank_sz)
+ desc_sz = last_bank_sz;
+
+ desc_bank[i].vaddr_unaligned =
+ dma_alloc_coherent(ab->dev, desc_sz,
+ &desc_bank[i].paddr_unaligned,
+ GFP_KERNEL);
+ if (!desc_bank[i].vaddr_unaligned) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ desc_bank[i].vaddr = PTR_ALIGN(desc_bank[i].vaddr_unaligned,
+ HAL_LINK_DESC_ALIGN);
+ desc_bank[i].paddr = desc_bank[i].paddr_unaligned +
+ ((unsigned long)desc_bank[i].vaddr -
+ (unsigned long)desc_bank[i].vaddr_unaligned);
+ desc_bank[i].size = desc_sz;
+ }
+
+ return 0;
+
+err:
+ ath12k_dp_link_desc_bank_free(ab, dp->link_desc_banks);
+
+ return ret;
+}
+
+void ath12k_dp_link_desc_cleanup(struct ath12k_base *ab,
+ struct dp_link_desc_bank *desc_bank,
+ u32 ring_type, struct dp_srng *ring)
+{
+ ath12k_dp_link_desc_bank_free(ab, desc_bank);
+
+ if (ring_type != HAL_RXDMA_MONITOR_DESC) {
+ ath12k_dp_srng_cleanup(ab, ring);
+ ath12k_dp_scatter_idle_link_desc_cleanup(ab);
+ }
+}
+
+static int ath12k_wbm_idle_ring_setup(struct ath12k_base *ab, u32 *n_link_desc)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ u32 n_mpdu_link_desc, n_mpdu_queue_desc;
+ u32 n_tx_msdu_link_desc, n_rx_msdu_link_desc;
+ int ret = 0;
+
+ n_mpdu_link_desc = (DP_NUM_TIDS_MAX * DP_AVG_MPDUS_PER_TID_MAX) /
+ HAL_NUM_MPDUS_PER_LINK_DESC;
+
+ n_mpdu_queue_desc = n_mpdu_link_desc /
+ HAL_NUM_MPDU_LINKS_PER_QUEUE_DESC;
+
+ n_tx_msdu_link_desc = (DP_NUM_TIDS_MAX * DP_AVG_FLOWS_PER_TID *
+ DP_AVG_MSDUS_PER_FLOW) /
+ HAL_NUM_TX_MSDUS_PER_LINK_DESC;
+
+ n_rx_msdu_link_desc = (DP_NUM_TIDS_MAX * DP_AVG_MPDUS_PER_TID_MAX *
+ DP_AVG_MSDUS_PER_MPDU) /
+ HAL_NUM_RX_MSDUS_PER_LINK_DESC;
+
+ *n_link_desc = n_mpdu_link_desc + n_mpdu_queue_desc +
+ n_tx_msdu_link_desc + n_rx_msdu_link_desc;
+
+ if (*n_link_desc & (*n_link_desc - 1))
+ *n_link_desc = 1 << fls(*n_link_desc);
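+
+ /* e.g. a computed total of 0x2a00 is not a power of two
+ * (0x2a00 & 0x29ff != 0), so it is rounded up to
+ * 1 << fls(0x2a00) = 0x4000; an exact power of two is kept as-is
+ * because x & (x - 1) == 0 for such values.
+ */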
+
+ ret = ath12k_dp_srng_setup(ab, &dp->wbm_idle_ring,
+ HAL_WBM_IDLE_LINK, 0, 0, *n_link_desc);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup wbm_idle_ring: %d\n", ret);
+ return ret;
+ }
+ return ret;
+}
+
+int ath12k_dp_link_desc_setup(struct ath12k_base *ab,
+ struct dp_link_desc_bank *link_desc_banks,
+ u32 ring_type, struct hal_srng *srng,
+ u32 n_link_desc)
+{
+ u32 tot_mem_sz;
+ u32 n_link_desc_bank, last_bank_sz;
+ u32 entry_sz, align_bytes, n_entries;
+ struct hal_wbm_link_desc *desc;
+ u32 paddr;
+ int i, ret;
+ u32 cookie;
+
+ tot_mem_sz = n_link_desc * HAL_LINK_DESC_SIZE;
+ tot_mem_sz += HAL_LINK_DESC_ALIGN;
+
+ if (tot_mem_sz <= DP_LINK_DESC_ALLOC_SIZE_THRESH) {
+ n_link_desc_bank = 1;
+ last_bank_sz = tot_mem_sz;
+ } else {
+ n_link_desc_bank = tot_mem_sz /
+ (DP_LINK_DESC_ALLOC_SIZE_THRESH -
+ HAL_LINK_DESC_ALIGN);
+ last_bank_sz = tot_mem_sz %
+ (DP_LINK_DESC_ALLOC_SIZE_THRESH -
+ HAL_LINK_DESC_ALIGN);
+
+ if (last_bank_sz)
+ n_link_desc_bank += 1;
+ }
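+
+ /* Sizing sketch (threshold and alignment values illustrative; the
+ * real constants come from dp.h and hal.h): with a 2 MiB allocation
+ * threshold and 128-byte alignment, a 5 MiB requirement yields two
+ * full banks of (2 MiB - 128) bytes plus a smaller third bank for
+ * the remainder.
+ */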
+
+ if (n_link_desc_bank > DP_LINK_DESC_BANKS_MAX)
+ return -EINVAL;
+
+ ret = ath12k_dp_link_desc_bank_alloc(ab, link_desc_banks,
+ n_link_desc_bank, last_bank_sz);
+ if (ret)
+ return ret;
+
+ /* Setup link desc idle list for HW internal usage */
+ entry_sz = ath12k_hal_srng_get_entrysize(ab, ring_type);
+ tot_mem_sz = entry_sz * n_link_desc;
+
+ /* Setup scatter desc list when the total memory requirement exceeds
+ * the per-bank allocation threshold
+ */
+ if (tot_mem_sz > DP_LINK_DESC_ALLOC_SIZE_THRESH &&
+ ring_type != HAL_RXDMA_MONITOR_DESC) {
+ ret = ath12k_dp_scatter_idle_link_desc_setup(ab, tot_mem_sz,
+ n_link_desc_bank,
+ n_link_desc,
+ last_bank_sz);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup scatting idle list descriptor :%d\n",
+ ret);
+ goto fail_desc_bank_free;
+ }
+
+ return 0;
+ }
+
+ spin_lock_bh(&srng->lock);
+
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ for (i = 0; i < n_link_desc_bank; i++) {
+ align_bytes = link_desc_banks[i].vaddr -
+ link_desc_banks[i].vaddr_unaligned;
+ n_entries = (link_desc_banks[i].size - align_bytes) /
+ HAL_LINK_DESC_SIZE;
+ paddr = link_desc_banks[i].paddr;
+ while (n_entries &&
+ (desc = ath12k_hal_srng_src_get_next_entry(ab, srng))) {
+ cookie = DP_LINK_DESC_COOKIE_SET(n_entries, i);
+ ath12k_hal_set_link_desc_addr(desc,
+ cookie, paddr);
+ n_entries--;
+ paddr += HAL_LINK_DESC_SIZE;
+ }
+ }
+
+ ath12k_hal_srng_access_end(ab, srng);
+
+ spin_unlock_bh(&srng->lock);
+
+ return 0;
+
+fail_desc_bank_free:
+ ath12k_dp_link_desc_bank_free(ab, link_desc_banks);
+
+ return ret;
+}
+
+int ath12k_dp_service_srng(struct ath12k_base *ab,
+ struct ath12k_ext_irq_grp *irq_grp,
+ int budget)
+{
+ struct napi_struct *napi = &irq_grp->napi;
+ int grp_id = irq_grp->grp_id;
+ int work_done = 0;
+ int i = 0, j;
+ int tot_work_done = 0;
+ enum dp_monitor_mode monitor_mode;
+ u8 ring_mask;
+
+ while (i < ab->hw_params->max_tx_ring) {
+ if (ab->hw_params->ring_mask->tx[grp_id] &
+ BIT(ab->hw_params->hal_ops->tcl_to_wbm_rbm_map[i].wbm_ring_num))
+ ath12k_dp_tx_completion_handler(ab, i);
+ i++;
+ }
+
+ if (ab->hw_params->ring_mask->rx_err[grp_id]) {
+ work_done = ath12k_dp_rx_process_err(ab, napi, budget);
+ budget -= work_done;
+ tot_work_done += work_done;
+ if (budget <= 0)
+ goto done;
+ }
+
+ if (ab->hw_params->ring_mask->rx_wbm_rel[grp_id]) {
+ work_done = ath12k_dp_rx_process_wbm_err(ab,
+ napi,
+ budget);
+ budget -= work_done;
+ tot_work_done += work_done;
+
+ if (budget <= 0)
+ goto done;
+ }
+
+ if (ab->hw_params->ring_mask->rx[grp_id]) {
+ i = fls(ab->hw_params->ring_mask->rx[grp_id]) - 1;
+ work_done = ath12k_dp_rx_process(ab, i, napi,
+ budget);
+ budget -= work_done;
+ tot_work_done += work_done;
+ if (budget <= 0)
+ goto done;
+ }
+
+ if (ab->hw_params->ring_mask->rx_mon_dest[grp_id]) {
+ monitor_mode = ATH12K_DP_RX_MONITOR_MODE;
+ ring_mask = ab->hw_params->ring_mask->rx_mon_dest[grp_id];
+ for (i = 0; i < ab->num_radios; i++) {
+ for (j = 0; j < ab->hw_params->num_rxmda_per_pdev; j++) {
+ int id = i * ab->hw_params->num_rxmda_per_pdev + j;
+
+ if (ring_mask & BIT(id)) {
+ work_done =
+ ath12k_dp_mon_process_ring(ab, id, napi, budget,
+ monitor_mode);
+ budget -= work_done;
+ tot_work_done += work_done;
+
+ if (budget <= 0)
+ goto done;
+ }
+ }
+ }
+ }
+
+ if (ab->hw_params->ring_mask->tx_mon_dest[grp_id]) {
+ monitor_mode = ATH12K_DP_TX_MONITOR_MODE;
+ ring_mask = ab->hw_params->ring_mask->tx_mon_dest[grp_id];
+ for (i = 0; i < ab->num_radios; i++) {
+ for (j = 0; j < ab->hw_params->num_rxmda_per_pdev; j++) {
+ int id = i * ab->hw_params->num_rxmda_per_pdev + j;
+
+ if (ring_mask & BIT(id)) {
+ work_done =
+ ath12k_dp_mon_process_ring(ab, id, napi, budget,
+ monitor_mode);
+ budget -= work_done;
+ tot_work_done += work_done;
+
+ if (budget <= 0)
+ goto done;
+ }
+ }
+ }
+ }
+
+ if (ab->hw_params->ring_mask->reo_status[grp_id])
+ ath12k_dp_rx_process_reo_status(ab);
+
+ if (ab->hw_params->ring_mask->host2rxdma[grp_id]) {
+ struct ath12k_dp *dp = &ab->dp;
+ struct dp_rxdma_ring *rx_ring = &dp->rx_refill_buf_ring;
+
+ ath12k_dp_rx_bufs_replenish(ab, 0, rx_ring, 0,
+ ab->hw_params->hal_params->rx_buf_rbm,
+ true);
+ }
+
+ /* TODO: Implement handler for other interrupts */
+
+done:
+ return tot_work_done;
+}
+
+void ath12k_dp_pdev_free(struct ath12k_base *ab)
+{
+ int i;
+
+ del_timer_sync(&ab->mon_reap_timer);
+
+ for (i = 0; i < ab->num_radios; i++)
+ ath12k_dp_rx_pdev_free(ab, i);
+}
+
+void ath12k_dp_pdev_pre_alloc(struct ath12k_base *ab)
+{
+ struct ath12k *ar;
+ struct ath12k_pdev_dp *dp;
+ int i;
+
+ for (i = 0; i < ab->num_radios; i++) {
+ ar = ab->pdevs[i].ar;
+ dp = &ar->dp;
+ dp->mac_id = i;
+ atomic_set(&dp->num_tx_pending, 0);
+ init_waitqueue_head(&dp->tx_empty_waitq);
+
+ /* TODO: Add any RXDMA setup required per pdev */
+ }
+}
+
+static void ath12k_dp_service_mon_ring(struct timer_list *t)
+{
+ struct ath12k_base *ab = from_timer(ab, t, mon_reap_timer);
+ int i;
+
+ for (i = 0; i < ab->hw_params->num_rxmda_per_pdev; i++)
+ ath12k_dp_mon_process_ring(ab, i, NULL, DP_MON_SERVICE_BUDGET,
+ ATH12K_DP_RX_MONITOR_MODE);
+
+ mod_timer(&ab->mon_reap_timer, jiffies +
+ msecs_to_jiffies(ATH12K_MON_TIMER_INTERVAL));
+}
+
+static void ath12k_dp_mon_reap_timer_init(struct ath12k_base *ab)
+{
+ if (ab->hw_params->rxdma1_enable)
+ return;
+
+ timer_setup(&ab->mon_reap_timer, ath12k_dp_service_mon_ring, 0);
+}
+
+int ath12k_dp_pdev_alloc(struct ath12k_base *ab)
+{
+ struct ath12k *ar;
+ int ret;
+ int i;
+
+ ret = ath12k_dp_rx_htt_setup(ab);
+ if (ret)
+ goto out;
+
+ ath12k_dp_mon_reap_timer_init(ab);
+
+ /* TODO: Per-pdev rx ring, unlike the tx rings which are mapped to different ACs */
+ for (i = 0; i < ab->num_radios; i++) {
+ ar = ab->pdevs[i].ar;
+ ret = ath12k_dp_rx_pdev_alloc(ab, i);
+ if (ret) {
+ ath12k_warn(ab, "failed to allocate pdev rx for pdev_id :%d\n",
+ i);
+ goto err;
+ }
+ ret = ath12k_dp_rx_pdev_mon_attach(ar);
+ if (ret) {
+ ath12k_warn(ab, "failed to initialize mon pdev %d\n", i);
+ goto err;
+ }
+ }
+
+ return 0;
+err:
+ ath12k_dp_pdev_free(ab);
+out:
+ return ret;
+}
+
+int ath12k_dp_htt_connect(struct ath12k_dp *dp)
+{
+ struct ath12k_htc_svc_conn_req conn_req = {0};
+ struct ath12k_htc_svc_conn_resp conn_resp = {0};
+ int status;
+
+ conn_req.ep_ops.ep_tx_complete = ath12k_dp_htt_htc_tx_complete;
+ conn_req.ep_ops.ep_rx_complete = ath12k_dp_htt_htc_t2h_msg_handler;
+
+ /* connect to control service */
+ conn_req.service_id = ATH12K_HTC_SVC_ID_HTT_DATA_MSG;
+
+ status = ath12k_htc_connect_service(&dp->ab->htc, &conn_req,
+ &conn_resp);
+
+ if (status)
+ return status;
+
+ dp->eid = conn_resp.eid;
+
+ return 0;
+}
+
+static void ath12k_dp_update_vdev_search(struct ath12k_vif *arvif)
+{
+ switch (arvif->vdev_type) {
+ case WMI_VDEV_TYPE_STA:
+ /* TODO: Verify the search type and flags since ast hash
+ * is not part of peer mapv3
+ */
+ arvif->hal_addr_search_flags = HAL_TX_ADDRY_EN;
+ arvif->search_type = HAL_TX_ADDR_SEARCH_DEFAULT;
+ break;
+ case WMI_VDEV_TYPE_AP:
+ case WMI_VDEV_TYPE_IBSS:
+ arvif->hal_addr_search_flags = HAL_TX_ADDRX_EN;
+ arvif->search_type = HAL_TX_ADDR_SEARCH_DEFAULT;
+ break;
+ case WMI_VDEV_TYPE_MONITOR:
+ default:
+ return;
+ }
+}
+
+void ath12k_dp_vdev_tx_attach(struct ath12k *ar, struct ath12k_vif *arvif)
+{
+ struct ath12k_base *ab = ar->ab;
+
+ arvif->tcl_metadata |= u32_encode_bits(1, HTT_TCL_META_DATA_TYPE) |
+ u32_encode_bits(arvif->vdev_id,
+ HTT_TCL_META_DATA_VDEV_ID) |
+ u32_encode_bits(ar->pdev->pdev_id,
+ HTT_TCL_META_DATA_PDEV_ID);
+
+ /* set HTT extension valid bit to 0 by default */
+ arvif->tcl_metadata &= ~HTT_TCL_META_DATA_VALID_HTT;
+
+ ath12k_dp_update_vdev_search(arvif);
+ arvif->vdev_id_check_en = true;
+ arvif->bank_id = ath12k_dp_tx_get_bank_profile(ab, arvif, &ab->dp);
+
+ /* TODO: error path for bank id failure */
+ if (arvif->bank_id == DP_INVALID_BANK_ID) {
+ ath12k_err(ar->ab, "Failed to initialize DP TX Banks");
+ return;
+ }
+}
+
+static void ath12k_dp_cc_cleanup(struct ath12k_base *ab)
+{
+ struct ath12k_rx_desc_info *desc_info, *tmp;
+ struct ath12k_tx_desc_info *tx_desc_info, *tmp1;
+ struct ath12k_dp *dp = &ab->dp;
+ struct sk_buff *skb;
+ int i;
+
+ if (!dp->spt_info)
+ return;
+
+ /* RX Descriptor cleanup */
+ spin_lock_bh(&dp->rx_desc_lock);
+
+ list_for_each_entry_safe(desc_info, tmp, &dp->rx_desc_used_list, list) {
+ list_del(&desc_info->list);
+ skb = desc_info->skb;
+
+ if (!skb)
+ continue;
+
+ dma_unmap_single(ab->dev, ATH12K_SKB_RXCB(skb)->paddr,
+ skb->len + skb_tailroom(skb), DMA_FROM_DEVICE);
+ dev_kfree_skb_any(skb);
+ }
+
+ spin_unlock_bh(&dp->rx_desc_lock);
+
+ /* TX Descriptor cleanup */
+ for (i = 0; i < ATH12K_HW_MAX_QUEUES; i++) {
+ spin_lock_bh(&dp->tx_desc_lock[i]);
+
+ list_for_each_entry_safe(tx_desc_info, tmp1, &dp->tx_desc_used_list[i],
+ list) {
+ list_del(&tx_desc_info->list);
+ skb = tx_desc_info->skb;
+
+ if (!skb)
+ continue;
+
+ dma_unmap_single(ab->dev, ATH12K_SKB_CB(skb)->paddr,
+ skb->len, DMA_TO_DEVICE);
+ dev_kfree_skb_any(skb);
+ }
+
+ spin_unlock_bh(&dp->tx_desc_lock[i]);
+ }
+
+ /* unmap SPT pages */
+ for (i = 0; i < dp->num_spt_pages; i++) {
+ if (!dp->spt_info[i].vaddr)
+ continue;
+
+ dma_free_coherent(ab->dev, ATH12K_PAGE_SIZE,
+ dp->spt_info[i].vaddr, dp->spt_info[i].paddr);
+ dp->spt_info[i].vaddr = NULL;
+ }
+
+ kfree(dp->spt_info);
+}
+
+static void ath12k_dp_reoq_lut_cleanup(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+
+ if (!ab->hw_params->reoq_lut_support)
+ return;
+
+ if (!dp->reoq_lut.vaddr)
+ return;
+
+ dma_free_coherent(ab->dev, DP_REOQ_LUT_SIZE,
+ dp->reoq_lut.vaddr, dp->reoq_lut.paddr);
+ dp->reoq_lut.vaddr = NULL;
+
+ ath12k_hif_write32(ab,
+ HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO1_QDESC_LUT_BASE0(ab), 0);
+}
+
+void ath12k_dp_free(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ int i;
+
+ ath12k_dp_link_desc_cleanup(ab, dp->link_desc_banks,
+ HAL_WBM_IDLE_LINK, &dp->wbm_idle_ring);
+
+ ath12k_dp_cc_cleanup(ab);
+ ath12k_dp_reoq_lut_cleanup(ab);
+ ath12k_dp_deinit_bank_profiles(ab);
+ ath12k_dp_srng_common_cleanup(ab);
+
+ ath12k_dp_rx_reo_cmd_list_cleanup(ab);
+
+ for (i = 0; i < ab->hw_params->max_tx_ring; i++)
+ kfree(dp->tx_ring[i].tx_status);
+
+ ath12k_dp_rx_free(ab);
+ /* Deinit any SOC level resource */
+}
+
+void ath12k_dp_cc_config(struct ath12k_base *ab)
+{
+ u32 cmem_base = ab->qmi.dev_mem[ATH12K_QMI_DEVMEM_CMEM_INDEX].start;
+ u32 reo_base = HAL_SEQ_WCSS_UMAC_REO_REG;
+ u32 wbm_base = HAL_SEQ_WCSS_UMAC_WBM_REG;
+ u32 val = 0;
+
+ ath12k_hif_write32(ab, reo_base + HAL_REO1_SW_COOKIE_CFG0(ab), cmem_base);
+
+ val |= u32_encode_bits(ATH12K_CMEM_ADDR_MSB,
+ HAL_REO1_SW_COOKIE_CFG_CMEM_BASE_ADDR_MSB) |
+ u32_encode_bits(ATH12K_CC_PPT_MSB,
+ HAL_REO1_SW_COOKIE_CFG_COOKIE_PPT_MSB) |
+ u32_encode_bits(ATH12K_CC_SPT_MSB,
+ HAL_REO1_SW_COOKIE_CFG_COOKIE_SPT_MSB) |
+ u32_encode_bits(1, HAL_REO1_SW_COOKIE_CFG_ALIGN) |
+ u32_encode_bits(1, HAL_REO1_SW_COOKIE_CFG_ENABLE) |
+ u32_encode_bits(1, HAL_REO1_SW_COOKIE_CFG_GLOBAL_ENABLE);
+
+ ath12k_hif_write32(ab, reo_base + HAL_REO1_SW_COOKIE_CFG1(ab), val);
+
+ /* Enable HW CC for WBM */
+ ath12k_hif_write32(ab, wbm_base + HAL_WBM_SW_COOKIE_CFG0, cmem_base);
+
+ val = u32_encode_bits(ATH12K_CMEM_ADDR_MSB,
+ HAL_WBM_SW_COOKIE_CFG_CMEM_BASE_ADDR_MSB) |
+ u32_encode_bits(ATH12K_CC_PPT_MSB,
+ HAL_WBM_SW_COOKIE_CFG_COOKIE_PPT_MSB) |
+ u32_encode_bits(ATH12K_CC_SPT_MSB,
+ HAL_WBM_SW_COOKIE_CFG_COOKIE_SPT_MSB) |
+ u32_encode_bits(1, HAL_WBM_SW_COOKIE_CFG_ALIGN);
+
+ ath12k_hif_write32(ab, wbm_base + HAL_WBM_SW_COOKIE_CFG1, val);
+
+ /* Enable conversion complete indication */
+ val = ath12k_hif_read32(ab, wbm_base + HAL_WBM_SW_COOKIE_CFG2);
+ val |= u32_encode_bits(1, HAL_WBM_SW_COOKIE_CFG_RELEASE_PATH_EN) |
+ u32_encode_bits(1, HAL_WBM_SW_COOKIE_CFG_ERR_PATH_EN) |
+ u32_encode_bits(1, HAL_WBM_SW_COOKIE_CFG_CONV_IND_EN);
+
+ ath12k_hif_write32(ab, wbm_base + HAL_WBM_SW_COOKIE_CFG2, val);
+
+ /* Enable Cookie conversion for WBM2SW Rings */
+ val = ath12k_hif_read32(ab, wbm_base + HAL_WBM_SW_COOKIE_CONVERT_CFG);
+ val |= u32_encode_bits(1, HAL_WBM_SW_COOKIE_CONV_CFG_GLOBAL_EN) |
+ ab->hw_params->hal_params->wbm2sw_cc_enable;
+
+ ath12k_hif_write32(ab, wbm_base + HAL_WBM_SW_COOKIE_CONVERT_CFG, val);
+}
+
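+/* Worked example (for illustration, using the cookie layout from dp.h
+ * where ATH12K_CC_PPT_SHIFT is 9): ppt_idx 3 and spt_idx 42 yield the
+ * cookie (3 << 9) | 42 = 0x62a.
+ */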
+static u32 ath12k_dp_cc_cookie_gen(u16 ppt_idx, u16 spt_idx)
+{
+ return (u32)ppt_idx << ATH12K_CC_PPT_SHIFT | spt_idx;
+}
+
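+/* Note: spt_info[].vaddr is a u64 pointer (one 8-byte slot per SPT entry),
+ * so the "+ spt_idx" below advances in 8-byte steps, i.e. to slot spt_idx
+ * of SPT page ppt_idx.
+ */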
+static inline void *ath12k_dp_cc_get_desc_addr_ptr(struct ath12k_base *ab,
+ u16 ppt_idx, u16 spt_idx)
+{
+ struct ath12k_dp *dp = &ab->dp;
+
+ return dp->spt_info[ppt_idx].vaddr + spt_idx;
+}
+
+struct ath12k_rx_desc_info *ath12k_dp_get_rx_desc(struct ath12k_base *ab,
+ u32 cookie)
+{
+ struct ath12k_rx_desc_info **desc_addr_ptr;
+ u16 ppt_idx, spt_idx;
+
+ ppt_idx = u32_get_bits(cookie, ATH12K_DP_CC_COOKIE_PPT);
+ spt_idx = u32_get_bits(cookie, ATH12k_DP_CC_COOKIE_SPT);
+
+ if (ppt_idx >= ATH12K_NUM_RX_SPT_PAGES ||
+ spt_idx >= ATH12K_MAX_SPT_ENTRIES)
+ return NULL;
+
+ desc_addr_ptr = ath12k_dp_cc_get_desc_addr_ptr(ab, ppt_idx, spt_idx);
+
+ return *desc_addr_ptr;
+}
+
+struct ath12k_tx_desc_info *ath12k_dp_get_tx_desc(struct ath12k_base *ab,
+ u32 cookie)
+{
+ struct ath12k_tx_desc_info **desc_addr_ptr;
+ u16 ppt_idx, spt_idx;
+
+ ppt_idx = u32_get_bits(cookie, ATH12K_DP_CC_COOKIE_PPT);
+ spt_idx = u32_get_bits(cookie, ATH12k_DP_CC_COOKIE_SPT);
+
+ if (ppt_idx < ATH12K_NUM_RX_SPT_PAGES ||
+ ppt_idx >= ab->dp.num_spt_pages ||
+ spt_idx >= ATH12K_MAX_SPT_ENTRIES)
+ return NULL;
+
+ desc_addr_ptr = ath12k_dp_cc_get_desc_addr_ptr(ab, ppt_idx, spt_idx);
+
+ return *desc_addr_ptr;
+}
+
+static int ath12k_dp_cc_desc_init(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ struct ath12k_rx_desc_info *rx_descs, **rx_desc_addr;
+ struct ath12k_tx_desc_info *tx_descs, **tx_desc_addr;
+ u32 i, j, pool_id, tx_spt_page;
+ u32 ppt_idx;
+
+ spin_lock_bh(&dp->rx_desc_lock);
+
+ /* First ATH12K_NUM_RX_SPT_PAGES of allocated SPT pages are used for RX */
+ for (i = 0; i < ATH12K_NUM_RX_SPT_PAGES; i++) {
+ rx_descs = kcalloc(ATH12K_MAX_SPT_ENTRIES, sizeof(*rx_descs),
+ GFP_ATOMIC);
+
+ if (!rx_descs) {
+ spin_unlock_bh(&dp->rx_desc_lock);
+ return -ENOMEM;
+ }
+
+ for (j = 0; j < ATH12K_MAX_SPT_ENTRIES; j++) {
+ rx_descs[j].cookie = ath12k_dp_cc_cookie_gen(i, j);
+ rx_descs[j].magic = ATH12K_DP_RX_DESC_MAGIC;
+ list_add_tail(&rx_descs[j].list, &dp->rx_desc_free_list);
+
+ /* Update descriptor VA in SPT */
+ rx_desc_addr = ath12k_dp_cc_get_desc_addr_ptr(ab, i, j);
+ *rx_desc_addr = &rx_descs[j];
+ }
+ }
+
+ spin_unlock_bh(&dp->rx_desc_lock);
+
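+ /* TX descriptor pools are laid out after the RX pages: pool p, page i
+ * lands at PPT index ATH12K_NUM_RX_SPT_PAGES +
+ * p * ATH12K_TX_SPT_PAGES_PER_POOL + i. Illustrative example with the
+ * defaults from dp.h (24 RX pages, 64 TX pages per pool): pool 1,
+ * page 0 maps to PPT index 24 + 64 = 88.
+ */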
+ for (pool_id = 0; pool_id < ATH12K_HW_MAX_QUEUES; pool_id++) {
+ spin_lock_bh(&dp->tx_desc_lock[pool_id]);
+ for (i = 0; i < ATH12K_TX_SPT_PAGES_PER_POOL; i++) {
+ tx_descs = kcalloc(ATH12K_MAX_SPT_ENTRIES, sizeof(*tx_descs),
+ GFP_ATOMIC);
+
+ if (!tx_descs) {
+ spin_unlock_bh(&dp->tx_desc_lock[pool_id]);
+ /* Caller takes care of TX pending and RX desc cleanup */
+ return -ENOMEM;
+ }
+
+ for (j = 0; j < ATH12K_MAX_SPT_ENTRIES; j++) {
+ tx_spt_page = i + pool_id * ATH12K_TX_SPT_PAGES_PER_POOL;
+ ppt_idx = ATH12K_NUM_RX_SPT_PAGES + tx_spt_page;
+ tx_descs[j].desc_id = ath12k_dp_cc_cookie_gen(ppt_idx, j);
+ tx_descs[j].pool_id = pool_id;
+ list_add_tail(&tx_descs[j].list,
+ &dp->tx_desc_free_list[pool_id]);
+
+ /* Update descriptor VA in SPT */
+ tx_desc_addr =
+ ath12k_dp_cc_get_desc_addr_ptr(ab, ppt_idx, j);
+ *tx_desc_addr = &tx_descs[j];
+ }
+ }
+ spin_unlock_bh(&dp->tx_desc_lock[pool_id]);
+ }
+ return 0;
+}
+
+static int ath12k_dp_cc_init(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ int i, ret = 0;
+ u32 cmem_base;
+
+ INIT_LIST_HEAD(&dp->rx_desc_free_list);
+ INIT_LIST_HEAD(&dp->rx_desc_used_list);
+ spin_lock_init(&dp->rx_desc_lock);
+
+ for (i = 0; i < ATH12K_HW_MAX_QUEUES; i++) {
+ INIT_LIST_HEAD(&dp->tx_desc_free_list[i]);
+ INIT_LIST_HEAD(&dp->tx_desc_used_list[i]);
+ spin_lock_init(&dp->tx_desc_lock[i]);
+ }
+
+ dp->num_spt_pages = ATH12K_NUM_SPT_PAGES;
+ if (dp->num_spt_pages > ATH12K_MAX_PPT_ENTRIES)
+ dp->num_spt_pages = ATH12K_MAX_PPT_ENTRIES;
+
+ dp->spt_info = kcalloc(dp->num_spt_pages, sizeof(struct ath12k_spt_info),
+ GFP_KERNEL);
+
+ if (!dp->spt_info) {
+ ath12k_warn(ab, "SPT page allocation failure");
+ return -ENOMEM;
+ }
+
+ cmem_base = ab->qmi.dev_mem[ATH12K_QMI_DEVMEM_CMEM_INDEX].start;
+
+ for (i = 0; i < dp->num_spt_pages; i++) {
+ dp->spt_info[i].vaddr = dma_alloc_coherent(ab->dev,
+ ATH12K_PAGE_SIZE,
+ &dp->spt_info[i].paddr,
+ GFP_KERNEL);
+
+ if (!dp->spt_info[i].vaddr) {
+ ret = -ENOMEM;
+ goto free;
+ }
+
+ if (dp->spt_info[i].paddr & ATH12K_SPT_4K_ALIGN_CHECK) {
+ ath12k_warn(ab, "SPT allocated memoty is not 4K aligned");
+ ret = -EINVAL;
+ goto free;
+ }
+
+ /* Write to PPT in CMEM */
+ ath12k_hif_write32(ab, cmem_base + ATH12K_PPT_ADDR_OFFSET(i),
+ dp->spt_info[i].paddr >> ATH12K_SPT_4K_ALIGN_OFFSET);
+ }
+
+ ret = ath12k_dp_cc_desc_init(ab);
+ if (ret) {
+ ath12k_warn(ab, "HW CC desc init failed %d", ret);
+ goto free;
+ }
+
+ return 0;
+free:
+ ath12k_dp_cc_cleanup(ab);
+ return ret;
+}
+
+static int ath12k_dp_reoq_lut_setup(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+
+ if (!ab->hw_params->reoq_lut_support)
+ return 0;
+
+ dp->reoq_lut.vaddr = dma_alloc_coherent(ab->dev,
+ DP_REOQ_LUT_SIZE,
+ &dp->reoq_lut.paddr,
+ GFP_KERNEL);
+
+ if (!dp->reoq_lut.vaddr) {
+ ath12k_warn(ab, "failed to allocate memory for reoq table");
+ return -ENOMEM;
+ }
+
+ memset(dp->reoq_lut.vaddr, 0, DP_REOQ_LUT_SIZE);
+
+ ath12k_hif_write32(ab, HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO1_QDESC_LUT_BASE0(ab),
+ dp->reoq_lut.paddr);
+ return 0;
+}
+
+int ath12k_dp_alloc(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ struct hal_srng *srng = NULL;
+ size_t size = 0;
+ u32 n_link_desc = 0;
+ int ret;
+ int i;
+
+ dp->ab = ab;
+
+ INIT_LIST_HEAD(&dp->reo_cmd_list);
+ INIT_LIST_HEAD(&dp->reo_cmd_cache_flush_list);
+ spin_lock_init(&dp->reo_cmd_lock);
+
+ dp->reo_cmd_cache_flush_count = 0;
+
+ ret = ath12k_wbm_idle_ring_setup(ab, &n_link_desc);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup wbm_idle_ring: %d\n", ret);
+ return ret;
+ }
+
+ srng = &ab->hal.srng_list[dp->wbm_idle_ring.ring_id];
+
+ ret = ath12k_dp_link_desc_setup(ab, dp->link_desc_banks,
+ HAL_WBM_IDLE_LINK, srng, n_link_desc);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup link desc: %d\n", ret);
+ return ret;
+ }
+
+ ret = ath12k_dp_cc_init(ab);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup cookie converter %d\n", ret);
+ goto fail_link_desc_cleanup;
+ }
+
+ ret = ath12k_dp_init_bank_profiles(ab);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup bank profiles %d\n", ret);
+ goto fail_hw_cc_cleanup;
+ }
+
+ ret = ath12k_dp_srng_common_setup(ab);
+ if (ret)
+ goto fail_dp_bank_profiles_cleanup;
+
+ size = sizeof(struct hal_wbm_release_ring_tx) * DP_TX_COMP_RING_SIZE;
+
+ ret = ath12k_dp_reoq_lut_setup(ab);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup reoq table %d\n", ret);
+ goto fail_cmn_srng_cleanup;
+ }
+
+ for (i = 0; i < ab->hw_params->max_tx_ring; i++) {
+ dp->tx_ring[i].tcl_data_ring_id = i;
+
+ dp->tx_ring[i].tx_status_head = 0;
+ dp->tx_ring[i].tx_status_tail = DP_TX_COMP_RING_SIZE - 1;
+ dp->tx_ring[i].tx_status = kmalloc(size, GFP_KERNEL);
+ if (!dp->tx_ring[i].tx_status) {
+ ret = -ENOMEM;
+ /* FIXME: The allocated tx status is not freed
+ * properly here
+ */
+ goto fail_cmn_reoq_cleanup;
+ }
+ }
+
+ for (i = 0; i < HAL_DSCP_TID_MAP_TBL_NUM_ENTRIES_MAX; i++)
+ ath12k_hal_tx_set_dscp_tid_map(ab, i);
+
+ ret = ath12k_dp_rx_alloc(ab);
+ if (ret)
+ goto fail_dp_rx_free;
+
+ /* Init any SOC level resource for DP */
+
+ return 0;
+
+fail_dp_rx_free:
+ ath12k_dp_rx_free(ab);
+
+fail_cmn_reoq_cleanup:
+ ath12k_dp_reoq_lut_cleanup(ab);
+
+fail_cmn_srng_cleanup:
+ ath12k_dp_srng_common_cleanup(ab);
+
+fail_dp_bank_profiles_cleanup:
+ ath12k_dp_deinit_bank_profiles(ab);
+
+fail_hw_cc_cleanup:
+ ath12k_dp_cc_cleanup(ab);
+
+fail_link_desc_cleanup:
+ ath12k_dp_link_desc_cleanup(ab, dp->link_desc_banks,
+ HAL_WBM_IDLE_LINK, &dp->wbm_idle_ring);
+
+ return ret;
+}
diff --git a/drivers/net/wireless/ath/ath12k/dp.h b/drivers/net/wireless/ath/ath12k/dp.h
new file mode 100644
index 000000000000..36a876d7f61d
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/dp.h
@@ -0,0 +1,1816 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_DP_H
+#define ATH12K_DP_H
+
+#include "hal_rx.h"
+#include "hw.h"
+
+#define MAX_RXDMA_PER_PDEV 2
+
+struct ath12k_base;
+struct ath12k_peer;
+struct ath12k_dp;
+struct ath12k_vif;
+struct hal_tcl_status_ring;
+struct ath12k_ext_irq_grp;
+
+#define DP_MON_PURGE_TIMEOUT_MS 100
+#define DP_MON_SERVICE_BUDGET 128
+
+struct dp_srng {
+ u32 *vaddr_unaligned;
+ u32 *vaddr;
+ dma_addr_t paddr_unaligned;
+ dma_addr_t paddr;
+ int size;
+ u32 ring_id;
+};
+
+struct dp_rxdma_ring {
+ struct dp_srng refill_buf_ring;
+ struct idr bufs_idr;
+ /* Protects bufs_idr */
+ spinlock_t idr_lock;
+ int bufs_max;
+};
+
+#define ATH12K_TX_COMPL_NEXT(x) (((x) + 1) % DP_TX_COMP_RING_SIZE)
+
+struct dp_tx_ring {
+ u8 tcl_data_ring_id;
+ struct dp_srng tcl_data_ring;
+ struct dp_srng tcl_comp_ring;
+ struct hal_wbm_completion_ring_tx *tx_status;
+ int tx_status_head;
+ int tx_status_tail;
+};
+
+struct ath12k_pdev_mon_stats {
+ u32 status_ppdu_state;
+ u32 status_ppdu_start;
+ u32 status_ppdu_end;
+ u32 status_ppdu_compl;
+ u32 status_ppdu_start_mis;
+ u32 status_ppdu_end_mis;
+ u32 status_ppdu_done;
+ u32 dest_ppdu_done;
+ u32 dest_mpdu_done;
+ u32 dest_mpdu_drop;
+ u32 dup_mon_linkdesc_cnt;
+ u32 dup_mon_buf_cnt;
+};
+
+struct dp_link_desc_bank {
+ void *vaddr_unaligned;
+ void *vaddr;
+ dma_addr_t paddr_unaligned;
+ dma_addr_t paddr;
+ u32 size;
+};
+
+/* Size to enforce scatter idle list mode */
+#define DP_LINK_DESC_ALLOC_SIZE_THRESH 0x200000
+#define DP_LINK_DESC_BANKS_MAX 8
+
+#define DP_LINK_DESC_START 0x4000
+#define DP_LINK_DESC_SHIFT 3
+
+#define DP_LINK_DESC_COOKIE_SET(id, page) \
+ ((((id) + DP_LINK_DESC_START) << DP_LINK_DESC_SHIFT) | (page))
+
+#define DP_LINK_DESC_BANK_MASK GENMASK(2, 0)
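+
+/* Worked example (illustrative): id 5, page 2 encodes as
+ * ((5 + 0x4000) << 3) | 2 = 0x2002a; the bank/page is recovered by
+ * masking with DP_LINK_DESC_BANK_MASK.
+ */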
+
+#define DP_RX_DESC_COOKIE_INDEX_MAX 0x3ffff
+#define DP_RX_DESC_COOKIE_POOL_ID_MAX 0x1c0000
+#define DP_RX_DESC_COOKIE_MAX \
+ (DP_RX_DESC_COOKIE_INDEX_MAX | DP_RX_DESC_COOKIE_POOL_ID_MAX)
+#define DP_NOT_PPDU_ID_WRAP_AROUND 20000
+
+enum ath12k_dp_ppdu_state {
+ DP_PPDU_STATUS_START,
+ DP_PPDU_STATUS_DONE,
+};
+
+struct dp_mon_mpdu {
+ struct list_head list;
+ struct sk_buff *head;
+ struct sk_buff *tail;
+};
+
+#define DP_MON_MAX_STATUS_BUF 32
+
+struct ath12k_mon_data {
+ struct dp_link_desc_bank link_desc_banks[DP_LINK_DESC_BANKS_MAX];
+ struct hal_rx_mon_ppdu_info mon_ppdu_info;
+
+ u32 mon_ppdu_status;
+ u32 mon_last_buf_cookie;
+ u64 mon_last_linkdesc_paddr;
+ u16 chan_noise_floor;
+
+ struct ath12k_pdev_mon_stats rx_mon_stats;
+ /* lock for monitor data */
+ spinlock_t mon_lock;
+ struct sk_buff_head rx_status_q;
+ struct dp_mon_mpdu *mon_mpdu;
+ struct list_head dp_rx_mon_mpdu_list;
+ struct sk_buff *dest_skb_q[DP_MON_MAX_STATUS_BUF];
+ struct dp_mon_tx_ppdu_info *tx_prot_ppdu_info;
+ struct dp_mon_tx_ppdu_info *tx_data_ppdu_info;
+};
+
+struct ath12k_pdev_dp {
+ u32 mac_id;
+ atomic_t num_tx_pending;
+ wait_queue_head_t tx_empty_waitq;
+ struct dp_srng rxdma_mon_dst_ring[MAX_RXDMA_PER_PDEV];
+ struct dp_srng tx_mon_dst_ring[MAX_RXDMA_PER_PDEV];
+
+ struct ieee80211_rx_status rx_status;
+ struct ath12k_mon_data mon_data;
+};
+
+#define DP_NUM_CLIENTS_MAX 64
+#define DP_AVG_TIDS_PER_CLIENT 2
+#define DP_NUM_TIDS_MAX (DP_NUM_CLIENTS_MAX * DP_AVG_TIDS_PER_CLIENT)
+#define DP_AVG_MSDUS_PER_FLOW 128
+#define DP_AVG_FLOWS_PER_TID 2
+#define DP_AVG_MPDUS_PER_TID_MAX 128
+#define DP_AVG_MSDUS_PER_MPDU 4
+
+#define DP_RX_HASH_ENABLE 1 /* Enable hash based Rx steering */
+
+#define DP_BA_WIN_SZ_MAX 256
+
+#define DP_TCL_NUM_RING_MAX 4
+
+#define DP_IDLE_SCATTER_BUFS_MAX 16
+
+#define DP_WBM_RELEASE_RING_SIZE 64
+#define DP_TCL_DATA_RING_SIZE 512
+#define DP_TX_COMP_RING_SIZE 32768
+#define DP_TX_IDR_SIZE DP_TX_COMP_RING_SIZE
+#define DP_TCL_CMD_RING_SIZE 32
+#define DP_TCL_STATUS_RING_SIZE 32
+#define DP_REO_DST_RING_MAX 8
+#define DP_REO_DST_RING_SIZE 2048
+#define DP_REO_REINJECT_RING_SIZE 32
+#define DP_RX_RELEASE_RING_SIZE 1024
+#define DP_REO_EXCEPTION_RING_SIZE 128
+#define DP_REO_CMD_RING_SIZE 128
+#define DP_REO_STATUS_RING_SIZE 2048
+#define DP_RXDMA_BUF_RING_SIZE 4096
+#define DP_RXDMA_REFILL_RING_SIZE 2048
+#define DP_RXDMA_ERR_DST_RING_SIZE 1024
+#define DP_RXDMA_MON_STATUS_RING_SIZE 1024
+#define DP_RXDMA_MONITOR_BUF_RING_SIZE 4096
+#define DP_RXDMA_MONITOR_DST_RING_SIZE 2048
+#define DP_RXDMA_MONITOR_DESC_RING_SIZE 4096
+#define DP_TX_MONITOR_BUF_RING_SIZE 4096
+#define DP_TX_MONITOR_DEST_RING_SIZE 2048
+
+#define DP_TX_MONITOR_BUF_SIZE 2048
+#define DP_TX_MONITOR_BUF_SIZE_MIN 48
+#define DP_TX_MONITOR_BUF_SIZE_MAX 8192
+
+#define DP_RX_BUFFER_SIZE 2048
+#define DP_RX_BUFFER_SIZE_LITE 1024
+#define DP_RX_BUFFER_ALIGN_SIZE 128
+
+#define DP_RXDMA_BUF_COOKIE_BUF_ID GENMASK(17, 0)
+#define DP_RXDMA_BUF_COOKIE_PDEV_ID GENMASK(19, 18)
+
+#define DP_HW2SW_MACID(mac_id) ({ typeof(mac_id) x = (mac_id); x ? x - 1 : 0; })
+#define DP_SW2HW_MACID(mac_id) ((mac_id) + 1)
+
+#define DP_TX_DESC_ID_MAC_ID GENMASK(1, 0)
+#define DP_TX_DESC_ID_MSDU_ID GENMASK(18, 2)
+#define DP_TX_DESC_ID_POOL_ID GENMASK(20, 19)
+
+#define ATH12K_SHADOW_DP_TIMER_INTERVAL 20
+#define ATH12K_SHADOW_CTRL_TIMER_INTERVAL 10
+
+#define ATH12K_NUM_POOL_TX_DESC 32768
+
+/* TODO: revisit this count during testing */
+#define ATH12K_RX_DESC_COUNT (12288)
+
+#define ATH12K_PAGE_SIZE PAGE_SIZE
+
+/* Total 1024 entries in the PPT, i.e. 4K/4, since SPT pages are 4K
+ * aligned and therefore have their lower 12 bits zero
+ */
+#define ATH12K_MAX_PPT_ENTRIES 1024
+
+/* Total 512 entries in an SPT, i.e. 4K page / 8 */
+#define ATH12K_MAX_SPT_ENTRIES 512
+
+#define ATH12K_NUM_RX_SPT_PAGES ((ATH12K_RX_DESC_COUNT) / ATH12K_MAX_SPT_ENTRIES)
+
+#define ATH12K_TX_SPT_PAGES_PER_POOL (ATH12K_NUM_POOL_TX_DESC / \
+ ATH12K_MAX_SPT_ENTRIES)
+#define ATH12K_NUM_TX_SPT_PAGES (ATH12K_TX_SPT_PAGES_PER_POOL * ATH12K_HW_MAX_QUEUES)
+#define ATH12K_NUM_SPT_PAGES (ATH12K_NUM_RX_SPT_PAGES + ATH12K_NUM_TX_SPT_PAGES)
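+
+/* With the counts above: 12288 / 512 = 24 RX pages and 32768 / 512 = 64 TX
+ * pages per pool, so the total is 24 + 64 * ATH12K_HW_MAX_QUEUES pages.
+ */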
+
+/* The SPT pages are divided between RX and TX: the first block is for RX
+ * and the remainder for TX
+ */
+#define ATH12K_NUM_TX_SPT_PAGE_START ATH12K_NUM_RX_SPT_PAGES
+
+#define ATH12K_DP_RX_DESC_MAGIC 0xBABABABA
+
+/* 4K-aligned addresses have their last 12 bits set to 0; this check is done
+ * so that two SPT page addresses can be stored per 8 bytes
+ * of CMEM (PPT)
+ */
+#define ATH12K_SPT_4K_ALIGN_CHECK 0xFFF
+#define ATH12K_SPT_4K_ALIGN_OFFSET 12
+#define ATH12K_PPT_ADDR_OFFSET(ppt_index) (4 * (ppt_index))
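+
+/* Illustrative example: a 4K-aligned SPT page at physical address 0x5a3000
+ * passes the alignment check (0x5a3000 & 0xFFF == 0) and is written to
+ * CMEM as 0x5a3000 >> 12 = 0x5a3 at byte offset ATH12K_PPT_ADDR_OFFSET(i).
+ */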
+
+/* To indicate the CMEM address to HW, b0-31 are the cmem base received via QMI */
+#define ATH12K_CMEM_ADDR_MSB 0x10
+
+/* Of the 20-bit cookie, b0-b8 indicate the SPT offset and b9-b19 the PPT index */
+#define ATH12K_CC_SPT_MSB 8
+#define ATH12K_CC_PPT_MSB 19
+#define ATH12K_CC_PPT_SHIFT 9
+#define ATH12k_DP_CC_COOKIE_SPT GENMASK(8, 0)
+#define ATH12K_DP_CC_COOKIE_PPT GENMASK(19, 9)
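+
+/* Example decode (illustrative): cookie 0x62a gives SPT offset
+ * u32_get_bits(0x62a, ATH12k_DP_CC_COOKIE_SPT) = 42 and PPT index
+ * u32_get_bits(0x62a, ATH12K_DP_CC_COOKIE_PPT) = 3.
+ */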
+
+#define DP_REO_QREF_NUM GENMASK(31, 16)
+#define DP_MAX_PEER_ID 2047
+
+/* Total size of the LUT is based on 2K peers, each having a reference
+ * for 17 TIDs; note each entry is of type ath12k_reo_queue_ref,
+ * hence the total size is 2048 * 17 * 8 = 278528 bytes
+ */
+#define DP_REOQ_LUT_SIZE 278528
+
+/* Invalid TX Bank ID value */
+#define DP_INVALID_BANK_ID -1
+
+struct ath12k_dp_tx_bank_profile {
+ u8 is_configured;
+ u32 num_users;
+ u32 bank_config;
+};
+
+struct ath12k_hp_update_timer {
+ struct timer_list timer;
+ bool started;
+ bool init;
+ u32 tx_num;
+ u32 timer_tx_num;
+ u32 ring_id;
+ u32 interval;
+ struct ath12k_base *ab;
+};
+
+struct ath12k_rx_desc_info {
+ struct list_head list;
+ struct sk_buff *skb;
+ u32 cookie;
+ u32 magic;
+};
+
+struct ath12k_tx_desc_info {
+ struct list_head list;
+ struct sk_buff *skb;
+ u32 desc_id; /* Cookie */
+ u8 mac_id;
+ u8 pool_id;
+};
+
+struct ath12k_spt_info {
+ dma_addr_t paddr;
+ u64 *vaddr;
+};
+
+struct ath12k_reo_queue_ref {
+ u32 info0;
+ u32 info1;
+} __packed;
+
+struct ath12k_reo_q_addr_lut {
+ dma_addr_t paddr;
+ u32 *vaddr;
+};
+
+struct ath12k_dp {
+ struct ath12k_base *ab;
+ u8 num_bank_profiles;
+ /* protects the access and update of bank_profiles */
+ spinlock_t tx_bank_lock;
+ struct ath12k_dp_tx_bank_profile *bank_profiles;
+ enum ath12k_htc_ep_id eid;
+ struct completion htt_tgt_version_received;
+ u8 htt_tgt_ver_major;
+ u8 htt_tgt_ver_minor;
+ struct dp_link_desc_bank link_desc_banks[DP_LINK_DESC_BANKS_MAX];
+ struct dp_srng wbm_idle_ring;
+ struct dp_srng wbm_desc_rel_ring;
+ struct dp_srng tcl_cmd_ring;
+ struct dp_srng tcl_status_ring;
+ struct dp_srng reo_reinject_ring;
+ struct dp_srng rx_rel_ring;
+ struct dp_srng reo_except_ring;
+ struct dp_srng reo_cmd_ring;
+ struct dp_srng reo_status_ring;
+ struct dp_srng reo_dst_ring[DP_REO_DST_RING_MAX];
+ struct dp_tx_ring tx_ring[DP_TCL_NUM_RING_MAX];
+ struct hal_wbm_idle_scatter_list scatter_list[DP_IDLE_SCATTER_BUFS_MAX];
+ struct list_head reo_cmd_list;
+ struct list_head reo_cmd_cache_flush_list;
+ u32 reo_cmd_cache_flush_count;
+
+ /* protects access to below fields,
+ * - reo_cmd_list
+ * - reo_cmd_cache_flush_list
+ * - reo_cmd_cache_flush_count
+ */
+ spinlock_t reo_cmd_lock;
+ struct ath12k_hp_update_timer reo_cmd_timer;
+ struct ath12k_hp_update_timer tx_ring_timer[DP_TCL_NUM_RING_MAX];
+ struct ath12k_spt_info *spt_info;
+ u32 num_spt_pages;
+ struct list_head rx_desc_free_list;
+ struct list_head rx_desc_used_list;
+ /* protects the free and used desc list */
+ spinlock_t rx_desc_lock;
+
+ struct list_head tx_desc_free_list[ATH12K_HW_MAX_QUEUES];
+ struct list_head tx_desc_used_list[ATH12K_HW_MAX_QUEUES];
+ /* protects the free and used desc lists */
+ spinlock_t tx_desc_lock[ATH12K_HW_MAX_QUEUES];
+
+ struct dp_rxdma_ring rx_refill_buf_ring;
+ struct dp_srng rx_mac_buf_ring[MAX_RXDMA_PER_PDEV];
+ struct dp_srng rxdma_err_dst_ring[MAX_RXDMA_PER_PDEV];
+ struct dp_rxdma_ring rxdma_mon_buf_ring;
+ struct dp_rxdma_ring tx_mon_buf_ring;
+ struct ath12k_reo_q_addr_lut reoq_lut;
+};
+
+/* HTT definitions */
+
+#define HTT_TCL_META_DATA_TYPE BIT(0)
+#define HTT_TCL_META_DATA_VALID_HTT BIT(1)
+
+/* vdev meta data */
+#define HTT_TCL_META_DATA_VDEV_ID GENMASK(9, 2)
+#define HTT_TCL_META_DATA_PDEV_ID GENMASK(11, 10)
+#define HTT_TCL_META_DATA_HOST_INSPECTED BIT(12)
+
+/* peer meta data */
+#define HTT_TCL_META_DATA_PEER_ID GENMASK(15, 2)
+
+#define HTT_TX_WBM_COMP_STATUS_OFFSET 8
+
+/* HTT tx completion is overlaid in wbm_release_ring */
+#define HTT_TX_WBM_COMP_INFO0_STATUS GENMASK(16, 13)
+#define HTT_TX_WBM_COMP_INFO1_REINJECT_REASON GENMASK(3, 0)
+#define HTT_TX_WBM_COMP_INFO1_EXCEPTION_FRAME BIT(4)
+
+#define HTT_TX_WBM_COMP_INFO2_ACK_RSSI GENMASK(31, 24)
+
+struct htt_tx_wbm_completion {
+ __le32 rsvd0[2];
+ __le32 info0;
+ __le32 info1;
+ __le32 info2;
+ __le32 info3;
+ __le32 info4;
+ __le32 rsvd1;
+} __packed;
+
+enum htt_h2t_msg_type {
+ HTT_H2T_MSG_TYPE_VERSION_REQ = 0,
+ HTT_H2T_MSG_TYPE_SRING_SETUP = 0xb,
+ HTT_H2T_MSG_TYPE_RX_RING_SELECTION_CFG = 0xc,
+ HTT_H2T_MSG_TYPE_EXT_STATS_CFG = 0x10,
+ HTT_H2T_MSG_TYPE_PPDU_STATS_CFG = 0x11,
+ HTT_H2T_MSG_TYPE_VDEV_TXRX_STATS_CFG = 0x1a,
+ HTT_H2T_MSG_TYPE_TX_MONITOR_CFG = 0x1b,
+};
+
+#define HTT_VER_REQ_INFO_MSG_ID GENMASK(7, 0)
+
+struct htt_ver_req_cmd {
+ __le32 ver_reg_info;
+} __packed;
+
+enum htt_srng_ring_type {
+ HTT_HW_TO_SW_RING,
+ HTT_SW_TO_HW_RING,
+ HTT_SW_TO_SW_RING,
+};
+
+enum htt_srng_ring_id {
+ HTT_RXDMA_HOST_BUF_RING,
+ HTT_RXDMA_MONITOR_STATUS_RING,
+ HTT_RXDMA_MONITOR_BUF_RING,
+ HTT_RXDMA_MONITOR_DESC_RING,
+ HTT_RXDMA_MONITOR_DEST_RING,
+ HTT_HOST1_TO_FW_RXBUF_RING,
+ HTT_HOST2_TO_FW_RXBUF_RING,
+ HTT_RXDMA_NON_MONITOR_DEST_RING,
+ HTT_TX_MON_HOST2MON_BUF_RING,
+ HTT_TX_MON_MON2HOST_DEST_RING,
+};
+
+/* host -> target HTT_SRING_SETUP message
+ *
+ * After the target boots up, the Host can send a SRING setup message for
+ * each host facing LMAC SRING. The Target sets up HW registers based
+ * on the setup message and confirms back to the Host if response_required
+ * is set. The Host should wait for the confirmation message before sending
+ * a new SRING setup message.
+ *
+ * The message would appear as follows:
+ *
+ * |31 24|23 20|19|18 16|15|14 8|7 0|
+ * |----------------+-----------------+----------------+------------------|
+ * | ring_type | ring_id | pdev_id | msg_type |
+ * |----------------------------------------------------------------------|
+ * | ring_base_addr_lo |
+ * |----------------------------------------------------------------------|
+ * | ring_base_addr_hi |
+ * |----------------------------------------------------------------------|
+ * |ring_misc_cfg_flag|ring_entry_size| ring_size |
+ * |----------------------------------------------------------------------|
+ * | ring_head_offset32_remote_addr_lo |
+ * |----------------------------------------------------------------------|
+ * | ring_head_offset32_remote_addr_hi |
+ * |----------------------------------------------------------------------|
+ * | ring_tail_offset32_remote_addr_lo |
+ * |----------------------------------------------------------------------|
+ * | ring_tail_offset32_remote_addr_hi |
+ * |----------------------------------------------------------------------|
+ * | ring_msi_addr_lo |
+ * |----------------------------------------------------------------------|
+ * | ring_msi_addr_hi |
+ * |----------------------------------------------------------------------|
+ * | ring_msi_data |
+ * |----------------------------------------------------------------------|
+ * | intr_timer_th |IM| intr_batch_counter_th |
+ * |----------------------------------------------------------------------|
+ * | reserved |RR|PTCF| intr_low_threshold |
+ * |----------------------------------------------------------------------|
+ * Where
+ * IM = sw_intr_mode
+ * RR = response_required
+ * PTCF = prefetch_timer_cfg
+ *
+ * The message is interpreted as follows:
+ * dword0 - b'0:7 - msg_type: This will be set to
+ * HTT_H2T_MSG_TYPE_SRING_SETUP
+ * b'8:15 - pdev_id:
+ * 0 (for rings at SOC/UMAC level),
+ * 1/2/3 mac id (for rings at LMAC level)
+ * b'16:23 - ring_id: identify which ring is to setup,
+ * more details can be got from enum htt_srng_ring_id
+ * b'24:31 - ring_type: identify type of host rings,
+ * more details can be got from enum htt_srng_ring_type
+ * dword1 - b'0:31 - ring_base_addr_lo: Lower 32bits of ring base address
+ * dword2 - b'0:31 - ring_base_addr_hi: Upper 32bits of ring base address
+ * dword3 - b'0:15 - ring_size: size of the ring in unit of 4-bytes words
+ * b'16:23 - ring_entry_size: Size of each entry in 4-byte word units
+ * b'24:31 - ring_misc_cfg_flag: Valid only for HW_TO_SW_RING and
+ * SW_TO_HW_RING.
+ * Refer to HTT_SRING_SETUP_RING_MISC_CFG_RING defs.
+ * dword4 - b'0:31 - ring_head_off32_remote_addr_lo:
+ * Lower 32 bits of memory address of the remote variable
+ * storing the 4-byte word offset that identifies the head
+ * element within the ring.
+ * (The head offset variable has type u32.)
+ * Valid for HW_TO_SW and SW_TO_SW rings.
+ * dword5 - b'0:31 - ring_head_off32_remote_addr_hi:
+ * Upper 32 bits of memory address of the remote variable
+ * storing the 4-byte word offset that identifies the head
+ * element within the ring.
+ * (The head offset variable has type u32.)
+ * Valid for HW_TO_SW and SW_TO_SW rings.
+ * dword6 - b'0:31 - ring_tail_off32_remote_addr_lo:
+ * Lower 32 bits of memory address of the remote variable
+ * storing the 4-byte word offset that identifies the tail
+ * element within the ring.
+ * (The tail offset variable has type u32.)
+ * Valid for HW_TO_SW and SW_TO_SW rings.
+ * dword7 - b'0:31 - ring_tail_off32_remote_addr_hi:
+ * Upper 32 bits of memory address of the remote variable
+ * storing the 4-byte word offset that identifies the tail
+ * element within the ring.
+ * (The tail offset variable has type u32.)
+ * Valid for HW_TO_SW and SW_TO_SW rings.
+ * dword8 - b'0:31 - ring_msi_addr_lo: Lower 32bits of MSI cfg address
+ * valid only for HW_TO_SW_RING and SW_TO_HW_RING
+ * dword9 - b'0:31 - ring_msi_addr_hi: Upper 32bits of MSI cfg address
+ * valid only for HW_TO_SW_RING and SW_TO_HW_RING
+ * dword10 - b'0:31 - ring_msi_data: MSI data
+ * Refer to HTT_SRING_SETUP_RING_MSC_CFG_xxx defs
+ * valid only for HW_TO_SW_RING and SW_TO_HW_RING
+ * dword11 - b'0:14 - intr_batch_counter_th:
+ * batch counter threshold is in units of 4-byte words.
+ * HW internally maintains and increments batch count.
+ * (see SRING spec for detail description).
+ * When batch count reaches threshold value, an interrupt
+ * is generated by HW.
+ * b'15 - sw_intr_mode:
+ * This configuration shall be static.
+ * Only programmed at power up.
+ * 0: generate pulse style sw interrupts
+ * 1: generate level style sw interrupts
+ * b'16:31 - intr_timer_th:
+ * The timer init value when timer is idle or is
+ * initialized to start downcounting.
+ * In 8us units (to cover a range of 0 to 524 ms)
+ * dword12 - b'0:15 - intr_low_threshold:
+ * Used only by Consumer ring to generate ring_sw_int_p.
+ * Ring entries low threshold water mark, that is used
+ * in combination with the interrupt timer as well as
+ * the clearing of the level interrupt.
+ * b'16:18 - prefetch_timer_cfg:
+ * Used only by Consumer ring to set timer mode to
+ * support Application prefetch handling.
+ * The external tail offset/pointer will be updated
+ * at the following intervals:
+ * 3'b000: (Prefetch feature disabled; used only for debug)
+ * 3'b001: 1 usec
+ * 3'b010: 4 usec
+ * 3'b011: 8 usec (default)
+ * 3'b100: 16 usec
+ * Others: Reserved
+ * b'19 - response_required:
+ * Host needs HTT_T2H_MSG_TYPE_SRING_SETUP_DONE as response
+ * b'20:31 - reserved: reserved for future use
+ */
+
+#define HTT_SRNG_SETUP_CMD_INFO0_MSG_TYPE GENMASK(7, 0)
+#define HTT_SRNG_SETUP_CMD_INFO0_PDEV_ID GENMASK(15, 8)
+#define HTT_SRNG_SETUP_CMD_INFO0_RING_ID GENMASK(23, 16)
+#define HTT_SRNG_SETUP_CMD_INFO0_RING_TYPE GENMASK(31, 24)
+
+#define HTT_SRNG_SETUP_CMD_INFO1_RING_SIZE GENMASK(15, 0)
+#define HTT_SRNG_SETUP_CMD_INFO1_RING_ENTRY_SIZE GENMASK(23, 16)
+#define HTT_SRNG_SETUP_CMD_INFO1_RING_LOOP_CNT_DIS BIT(25)
+#define HTT_SRNG_SETUP_CMD_INFO1_RING_FLAGS_MSI_SWAP BIT(27)
+#define HTT_SRNG_SETUP_CMD_INFO1_RING_FLAGS_HOST_FW_SWAP BIT(28)
+#define HTT_SRNG_SETUP_CMD_INFO1_RING_FLAGS_TLV_SWAP BIT(29)
+
+#define HTT_SRNG_SETUP_CMD_INTR_INFO_BATCH_COUNTER_THRESH GENMASK(14, 0)
+#define HTT_SRNG_SETUP_CMD_INTR_INFO_SW_INTR_MODE BIT(15)
+#define HTT_SRNG_SETUP_CMD_INTR_INFO_INTR_TIMER_THRESH GENMASK(31, 16)
+
+#define HTT_SRNG_SETUP_CMD_INFO2_INTR_LOW_THRESH GENMASK(15, 0)
+#define HTT_SRNG_SETUP_CMD_INFO2_PRE_FETCH_TIMER_CFG GENMASK(18, 16)
+#define HTT_SRNG_SETUP_CMD_INFO2_RESPONSE_REQUIRED BIT(19)
+
+struct htt_srng_setup_cmd {
+ __le32 info0;
+ __le32 ring_base_addr_lo;
+ __le32 ring_base_addr_hi;
+ __le32 info1;
+ __le32 ring_head_off32_remote_addr_lo;
+ __le32 ring_head_off32_remote_addr_hi;
+ __le32 ring_tail_off32_remote_addr_lo;
+ __le32 ring_tail_off32_remote_addr_hi;
+ __le32 ring_msi_addr_lo;
+ __le32 ring_msi_addr_hi;
+ __le32 msi_data;
+ __le32 intr_info;
+ __le32 info2;
+} __packed;
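+
+/* A minimal sketch (illustrative only; mac_id, htt_ring_id and
+ * htt_ring_type are placeholder variables) of composing dword0 from the
+ * fields described above:
+ *
+ * cmd->info0 = le32_encode_bits(HTT_H2T_MSG_TYPE_SRING_SETUP,
+ * HTT_SRNG_SETUP_CMD_INFO0_MSG_TYPE) |
+ * le32_encode_bits(DP_SW2HW_MACID(mac_id),
+ * HTT_SRNG_SETUP_CMD_INFO0_PDEV_ID) |
+ * le32_encode_bits(htt_ring_id,
+ * HTT_SRNG_SETUP_CMD_INFO0_RING_ID) |
+ * le32_encode_bits(htt_ring_type,
+ * HTT_SRNG_SETUP_CMD_INFO0_RING_TYPE);
+ */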
+
+/* host -> target FW PPDU_STATS config message
+ *
+ * @details
+ * The following field definitions describe the format of the HTT host
+ * to target FW for PPDU_STATS_CFG msg.
+ * The message allows the host to configure the PPDU_STATS_IND messages
+ * produced by the target.
+ *
+ * |31 24|23 16|15 8|7 0|
+ * |-----------------------------------------------------------|
+ * | REQ bit mask | pdev_mask | msg type |
+ * |-----------------------------------------------------------|
+ * Header fields:
+ * - MSG_TYPE
+ * Bits 7:0
+ * Purpose: identifies this is a req to configure ppdu_stats_ind from target
+ * Value: 0x11
+ * - PDEV_MASK
+ * Bits 8:15
+ * Purpose: identifies which pdevs this PPDU stats configuration applies to
+ * Value: This is an overloaded field, refer to usage and interpretation of
+ * PDEV in interface document.
+ * Bit 8 : Reserved for SOC stats
+ * Bit 9 - 15 : Indicates PDEV_MASK in DBDC
+ * Indicates MACID_MASK in DBS
+ * - REQ_TLV_BIT_MASK
+ * Bits 16:31
+ * Purpose: each set bit indicates the corresponding PPDU stats TLV type
+ * needs to be included in the target's PPDU_STATS_IND messages.
+ * Value: refer to htt_ppdu_stats_tlv_tag_t
+ *
+ */
+
+struct htt_ppdu_stats_cfg_cmd {
+ __le32 msg;
+} __packed;
+
+#define HTT_PPDU_STATS_CFG_MSG_TYPE GENMASK(7, 0)
+#define HTT_PPDU_STATS_CFG_PDEV_ID GENMASK(15, 8)
+#define HTT_PPDU_STATS_CFG_TLV_TYPE_BITMASK GENMASK(31, 16)
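+
+/* Sketch of composing the config word (illustrative; pdev_mask and
+ * tlv_bitmask are placeholders):
+ *
+ * cmd->msg = le32_encode_bits(HTT_H2T_MSG_TYPE_PPDU_STATS_CFG,
+ * HTT_PPDU_STATS_CFG_MSG_TYPE) |
+ * le32_encode_bits(pdev_mask, HTT_PPDU_STATS_CFG_PDEV_ID) |
+ * le32_encode_bits(tlv_bitmask,
+ * HTT_PPDU_STATS_CFG_TLV_TYPE_BITMASK);
+ */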
+
+enum htt_ppdu_stats_tag_type {
+ HTT_PPDU_STATS_TAG_COMMON,
+ HTT_PPDU_STATS_TAG_USR_COMMON,
+ HTT_PPDU_STATS_TAG_USR_RATE,
+ HTT_PPDU_STATS_TAG_USR_MPDU_ENQ_BITMAP_64,
+ HTT_PPDU_STATS_TAG_USR_MPDU_ENQ_BITMAP_256,
+ HTT_PPDU_STATS_TAG_SCH_CMD_STATUS,
+ HTT_PPDU_STATS_TAG_USR_COMPLTN_COMMON,
+ HTT_PPDU_STATS_TAG_USR_COMPLTN_BA_BITMAP_64,
+ HTT_PPDU_STATS_TAG_USR_COMPLTN_BA_BITMAP_256,
+ HTT_PPDU_STATS_TAG_USR_COMPLTN_ACK_BA_STATUS,
+ HTT_PPDU_STATS_TAG_USR_COMPLTN_FLUSH,
+ HTT_PPDU_STATS_TAG_USR_COMMON_ARRAY,
+ HTT_PPDU_STATS_TAG_INFO,
+ HTT_PPDU_STATS_TAG_TX_MGMTCTRL_PAYLOAD,
+
+ /* New TLV's are added above to this line */
+ HTT_PPDU_STATS_TAG_MAX,
+};
+
+#define HTT_PPDU_STATS_TAG_DEFAULT (BIT(HTT_PPDU_STATS_TAG_COMMON) \
+ | BIT(HTT_PPDU_STATS_TAG_USR_COMMON) \
+ | BIT(HTT_PPDU_STATS_TAG_USR_RATE) \
+ | BIT(HTT_PPDU_STATS_TAG_SCH_CMD_STATUS) \
+ | BIT(HTT_PPDU_STATS_TAG_USR_COMPLTN_COMMON) \
+ | BIT(HTT_PPDU_STATS_TAG_USR_COMPLTN_ACK_BA_STATUS) \
+ | BIT(HTT_PPDU_STATS_TAG_USR_COMPLTN_FLUSH) \
+ | BIT(HTT_PPDU_STATS_TAG_USR_COMMON_ARRAY))
+
+#define HTT_PPDU_STATS_TAG_PKTLOG (BIT(HTT_PPDU_STATS_TAG_USR_MPDU_ENQ_BITMAP_64) | \
+ BIT(HTT_PPDU_STATS_TAG_USR_MPDU_ENQ_BITMAP_256) | \
+ BIT(HTT_PPDU_STATS_TAG_USR_COMPLTN_BA_BITMAP_64) | \
+ BIT(HTT_PPDU_STATS_TAG_USR_COMPLTN_BA_BITMAP_256) | \
+ BIT(HTT_PPDU_STATS_TAG_INFO) | \
+ BIT(HTT_PPDU_STATS_TAG_TX_MGMTCTRL_PAYLOAD) | \
+ HTT_PPDU_STATS_TAG_DEFAULT)
+
+enum htt_stats_internal_ppdu_frametype {
+ HTT_STATS_PPDU_FTYPE_CTRL,
+ HTT_STATS_PPDU_FTYPE_DATA,
+ HTT_STATS_PPDU_FTYPE_BAR,
+ HTT_STATS_PPDU_FTYPE_MAX
+};
+
+/* HTT_H2T_MSG_TYPE_RX_RING_SELECTION_CFG Message
+ *
+ * details:
+ * HTT_H2T_MSG_TYPE_RX_RING_SELECTION_CFG message is sent by host to
+ * configure RXDMA rings.
+ * The configuration is per-ring and includes both packet subtypes
+ * and PPDU/MPDU TLVs.
+ *
+ * The message would appear as follows:
+ *
+ * |31 26|25|24|23 16|15 8|7 0|
+ * |-----------------+----------------+----------------+---------------|
+ * | rsvd1 |PS|SS| ring_id | pdev_id | msg_type |
+ * |-------------------------------------------------------------------|
+ * | rsvd2 | ring_buffer_size |
+ * |-------------------------------------------------------------------|
+ * | packet_type_enable_flags_0 |
+ * |-------------------------------------------------------------------|
+ * | packet_type_enable_flags_1 |
+ * |-------------------------------------------------------------------|
+ * | packet_type_enable_flags_2 |
+ * |-------------------------------------------------------------------|
+ * | packet_type_enable_flags_3 |
+ * |-------------------------------------------------------------------|
+ * | tlv_filter_in_flags |
+ * |-------------------------------------------------------------------|
+ * Where:
+ * PS = pkt_swap
+ * SS = status_swap
+ * The message is interpreted as follows:
+ * dword0 - b'0:7 - msg_type: This will be set to
+ * HTT_H2T_MSG_TYPE_RX_RING_SELECTION_CFG
+ * b'8:15 - pdev_id:
+ * 0 (for rings at SOC/UMAC level),
+ * 1/2/3 mac id (for rings at LMAC level)
+ * b'16:23 - ring_id : Identify the ring to configure.
+ * More details can be got from enum htt_srng_ring_id
+ * b'24 - status_swap: 1 is to swap status TLV
+ * b'25 - pkt_swap: 1 is to swap packet TLV
+ * b'26:31 - rsvd1: reserved for future use
+ * dword1 - b'0:15 - ring_buffer_size: size of buffers referenced by rx ring,
+ * in byte units.
+ * Valid only for HW_TO_SW_RING and SW_TO_HW_RING
+ * - b'16:31 - rsvd2: Reserved for future use
+ * dword2 - b'0:31 - packet_type_enable_flags_0:
+ * Enable MGMT packet from 0b0000 to 0b1001
+ * bits from low to high: FP, MD, MO - 3 bits
+ * FP: Filter_Pass
+ * MD: Monitor_Direct
+ * MO: Monitor_Other
+ * 10 mgmt subtypes * 3 bits -> 30 bits
+ * Refer to PKT_TYPE_ENABLE_FLAG0_xxx_MGMT_xxx defs
+ * dword3 - b'0:31 - packet_type_enable_flags_1:
+ * Enable MGMT packet from 0b1010 to 0b1111
+ * bits from low to high: FP, MD, MO - 3 bits
+ * Refer to PKT_TYPE_ENABLE_FLAG1_xxx_MGMT_xxx defs
+ * dword4 - b'0:31 - packet_type_enable_flags_2:
+ * Enable CTRL packet from 0b0000 to 0b1001
+ * bits from low to high: FP, MD, MO - 3 bits
+ * Refer to PKT_TYPE_ENABLE_FLAG2_xxx_CTRL_xxx defs
+ * dword5 - b'0:31 - packet_type_enable_flags_3:
+ * Enable CTRL packet from 0b1010 to 0b1111,
+ * MCAST_DATA, UCAST_DATA, NULL_DATA
+ * bits from low to high: FP, MD, MO - 3 bits
+ * Refer to PKT_TYPE_ENABLE_FLAG3_xxx_CTRL_xxx defs
+ * dword6 - b'0:31 - tlv_filter_in_flags:
+ * Filter in Attention/MPDU/PPDU/Header/User tlvs
+ * Refer to CFG_TLV_FILTER_IN_FLAG defs
+ */
+
+#define HTT_RX_RING_SELECTION_CFG_CMD_INFO0_MSG_TYPE GENMASK(7, 0)
+#define HTT_RX_RING_SELECTION_CFG_CMD_INFO0_PDEV_ID GENMASK(15, 8)
+#define HTT_RX_RING_SELECTION_CFG_CMD_INFO0_RING_ID GENMASK(23, 16)
+#define HTT_RX_RING_SELECTION_CFG_CMD_INFO0_SS BIT(24)
+#define HTT_RX_RING_SELECTION_CFG_CMD_INFO0_PS BIT(25)
+#define HTT_RX_RING_SELECTION_CFG_CMD_INFO1_BUF_SIZE GENMASK(15, 0)
+#define HTT_RX_RING_SELECTION_CFG_CMD_OFFSET_VALID BIT(26)
+
+#define HTT_RX_RING_SELECTION_CFG_RX_PACKET_OFFSET GENMASK(15, 0)
+#define HTT_RX_RING_SELECTION_CFG_RX_HEADER_OFFSET GENMASK(31, 16)
+#define HTT_RX_RING_SELECTION_CFG_RX_MPDU_END_OFFSET GENMASK(15, 0)
+#define HTT_RX_RING_SELECTION_CFG_RX_MPDU_START_OFFSET GENMASK(31, 16)
+#define HTT_RX_RING_SELECTION_CFG_RX_MSDU_END_OFFSET GENMASK(15, 0)
+#define HTT_RX_RING_SELECTION_CFG_RX_MSDU_START_OFFSET GENMASK(31, 16)
+#define HTT_RX_RING_SELECTION_CFG_RX_ATTENTION_OFFSET GENMASK(15, 0)
+
+enum htt_rx_filter_tlv_flags {
+ HTT_RX_FILTER_TLV_FLAGS_MPDU_START = BIT(0),
+ HTT_RX_FILTER_TLV_FLAGS_MSDU_START = BIT(1),
+ HTT_RX_FILTER_TLV_FLAGS_RX_PACKET = BIT(2),
+ HTT_RX_FILTER_TLV_FLAGS_MSDU_END = BIT(3),
+ HTT_RX_FILTER_TLV_FLAGS_MPDU_END = BIT(4),
+ HTT_RX_FILTER_TLV_FLAGS_PACKET_HEADER = BIT(5),
+ HTT_RX_FILTER_TLV_FLAGS_PER_MSDU_HEADER = BIT(6),
+ HTT_RX_FILTER_TLV_FLAGS_ATTENTION = BIT(7),
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_START = BIT(8),
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_END = BIT(9),
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_END_USER_STATS = BIT(10),
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_END_USER_STATS_EXT = BIT(11),
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_END_STATUS_DONE = BIT(12),
+};
+
+enum htt_rx_mgmt_pkt_filter_tlv_flags0 {
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_ASSOC_REQ = BIT(0),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_ASSOC_REQ = BIT(1),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_ASSOC_REQ = BIT(2),
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_ASSOC_RESP = BIT(3),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_ASSOC_RESP = BIT(4),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_ASSOC_RESP = BIT(5),
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_REASSOC_REQ = BIT(6),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_REASSOC_REQ = BIT(7),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_REASSOC_REQ = BIT(8),
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_REASSOC_RESP = BIT(9),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_REASSOC_RESP = BIT(10),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_REASSOC_RESP = BIT(11),
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_REQ = BIT(12),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_REQ = BIT(13),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_REQ = BIT(14),
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_RESP = BIT(15),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_RESP = BIT(16),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_RESP = BIT(17),
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_TIMING_ADV = BIT(18),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_TIMING_ADV = BIT(19),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_TIMING_ADV = BIT(20),
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_RESERVED_7 = BIT(21),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_RESERVED_7 = BIT(22),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_RESERVED_7 = BIT(23),
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_BEACON = BIT(24),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_BEACON = BIT(25),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_BEACON = BIT(26),
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_ATIM = BIT(27),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_ATIM = BIT(28),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_ATIM = BIT(29),
+};
+
+enum htt_rx_mgmt_pkt_filter_tlv_flags1 {
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS1_DISASSOC = BIT(0),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS1_DISASSOC = BIT(1),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS1_DISASSOC = BIT(2),
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS1_AUTH = BIT(3),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS1_AUTH = BIT(4),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS1_AUTH = BIT(5),
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS1_DEAUTH = BIT(6),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS1_DEAUTH = BIT(7),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS1_DEAUTH = BIT(8),
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS1_ACTION = BIT(9),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS1_ACTION = BIT(10),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS1_ACTION = BIT(11),
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS1_ACTION_NOACK = BIT(12),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS1_ACTION_NOACK = BIT(13),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS1_ACTION_NOACK = BIT(14),
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS1_RESERVED_15 = BIT(15),
+ HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS1_RESERVED_15 = BIT(16),
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS1_RESERVED_15 = BIT(17),
+};
+
+enum htt_rx_ctrl_pkt_filter_tlv_flags2 {
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_RESERVED_1 = BIT(0),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_RESERVED_1 = BIT(1),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_RESERVED_1 = BIT(2),
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_RESERVED_2 = BIT(3),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_RESERVED_2 = BIT(4),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_RESERVED_2 = BIT(5),
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_TRIGGER = BIT(6),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_TRIGGER = BIT(7),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_TRIGGER = BIT(8),
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_RESERVED_4 = BIT(9),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_RESERVED_4 = BIT(10),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_RESERVED_4 = BIT(11),
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_BF_REP_POLL = BIT(12),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_BF_REP_POLL = BIT(13),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_BF_REP_POLL = BIT(14),
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_VHT_NDP = BIT(15),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_VHT_NDP = BIT(16),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_VHT_NDP = BIT(17),
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_FRAME_EXT = BIT(18),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_FRAME_EXT = BIT(19),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_FRAME_EXT = BIT(20),
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_WRAPPER = BIT(21),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_WRAPPER = BIT(22),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_WRAPPER = BIT(23),
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_BAR = BIT(24),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS2_BAR = BIT(25),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_BAR = BIT(26),
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_BA = BIT(27),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS2_BA = BIT(28),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_BA = BIT(29),
+};
+
+enum htt_rx_ctrl_pkt_filter_tlv_flags3 {
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS3_PSPOLL = BIT(0),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS3_PSPOLL = BIT(1),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS3_PSPOLL = BIT(2),
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS3_RTS = BIT(3),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS3_RTS = BIT(4),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS3_RTS = BIT(5),
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS3_CTS = BIT(6),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS3_CTS = BIT(7),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS3_CTS = BIT(8),
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS3_ACK = BIT(9),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS3_ACK = BIT(10),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS3_ACK = BIT(11),
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS3_CFEND = BIT(12),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS3_CFEND = BIT(13),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS3_CFEND = BIT(14),
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS3_CFEND_ACK = BIT(15),
+ HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS3_CFEND_ACK = BIT(16),
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS3_CFEND_ACK = BIT(17),
+};
+
+enum htt_rx_data_pkt_filter_tlv_flasg3 {
+ HTT_RX_FP_DATA_PKT_FILTER_TLV_FLASG3_MCAST = BIT(18),
+ HTT_RX_MD_DATA_PKT_FILTER_TLV_FLASG3_MCAST = BIT(19),
+ HTT_RX_MO_DATA_PKT_FILTER_TLV_FLASG3_MCAST = BIT(20),
+ HTT_RX_FP_DATA_PKT_FILTER_TLV_FLASG3_UCAST = BIT(21),
+ HTT_RX_MD_DATA_PKT_FILTER_TLV_FLASG3_UCAST = BIT(22),
+ HTT_RX_MO_DATA_PKT_FILTER_TLV_FLASG3_UCAST = BIT(23),
+ HTT_RX_FP_DATA_PKT_FILTER_TLV_FLASG3_NULL_DATA = BIT(24),
+ HTT_RX_MD_DATA_PKT_FILTER_TLV_FLASG3_NULL_DATA = BIT(25),
+ HTT_RX_MO_DATA_PKT_FILTER_TLV_FLASG3_NULL_DATA = BIT(26),
+};
+
+#define HTT_RX_FP_MGMT_FILTER_FLAGS0 \
+ (HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_ASSOC_REQ \
+ | HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_ASSOC_RESP \
+ | HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_REASSOC_REQ \
+ | HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_REASSOC_RESP \
+ | HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_REQ \
+ | HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_RESP \
+ | HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_TIMING_ADV \
+ | HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_BEACON \
+ | HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_ATIM)
+
+#define HTT_RX_MD_MGMT_FILTER_FLAGS0 \
+ (HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_ASSOC_REQ \
+ | HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_ASSOC_RESP \
+ | HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_REASSOC_REQ \
+ | HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_REASSOC_RESP \
+ | HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_REQ \
+ | HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_RESP \
+ | HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_TIMING_ADV \
+ | HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_BEACON \
+ | HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS0_ATIM)
+
+#define HTT_RX_MO_MGMT_FILTER_FLAGS0 \
+ (HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_ASSOC_REQ \
+ | HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_ASSOC_RESP \
+ | HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_REASSOC_REQ \
+ | HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_REASSOC_RESP \
+ | HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_REQ \
+ | HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_RESP \
+ | HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_PROBE_TIMING_ADV \
+ | HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_BEACON \
+ | HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_ATIM)
+
+#define HTT_RX_FP_MGMT_FILTER_FLAGS1 (HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS1_DISASSOC \
+ | HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS1_AUTH \
+ | HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS1_DEAUTH \
+ | HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS1_ACTION \
+ | HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS1_ACTION_NOACK)
+
+#define HTT_RX_MD_MGMT_FILTER_FLAGS1 (HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS1_DISASSOC \
+ | HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS1_AUTH \
+ | HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS1_DEAUTH \
+ | HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS1_ACTION \
+ | HTT_RX_MD_MGMT_PKT_FILTER_TLV_FLAGS1_ACTION_NOACK)
+
+#define HTT_RX_MO_MGMT_FILTER_FLAGS1 (HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS1_DISASSOC \
+ | HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS1_AUTH \
+ | HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS1_DEAUTH \
+ | HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS1_ACTION \
+ | HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS1_ACTION_NOACK)
+
+#define HTT_RX_FP_CTRL_FILTER_FLASG2 (HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_WRAPPER \
+ | HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_BAR \
+ | HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_BA)
+
+#define HTT_RX_MD_CTRL_FILTER_FLASG2 (HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_WRAPPER \
+ | HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS2_BAR \
+ | HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS2_BA)
+
+#define HTT_RX_MO_CTRL_FILTER_FLASG2 (HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_WRAPPER \
+ | HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_BAR \
+ | HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_BA)
+
+#define HTT_RX_FP_CTRL_FILTER_FLASG3 (HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS3_PSPOLL \
+ | HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS3_RTS \
+ | HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS3_CTS \
+ | HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS3_ACK \
+ | HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS3_CFEND \
+ | HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS3_CFEND_ACK)
+
+#define HTT_RX_MD_CTRL_FILTER_FLASG3 (HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS3_PSPOLL \
+ | HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS3_RTS \
+ | HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS3_CTS \
+ | HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS3_ACK \
+ | HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS3_CFEND \
+ | HTT_RX_MD_CTRL_PKT_FILTER_TLV_FLAGS3_CFEND_ACK)
+
+#define HTT_RX_MO_CTRL_FILTER_FLASG3 (HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS3_PSPOLL \
+ | HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS3_RTS \
+ | HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS3_CTS \
+ | HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS3_ACK \
+ | HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS3_CFEND \
+ | HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS3_CFEND_ACK)
+
+#define HTT_RX_FP_DATA_FILTER_FLASG3 (HTT_RX_FP_DATA_PKT_FILTER_TLV_FLASG3_MCAST \
+ | HTT_RX_FP_DATA_PKT_FILTER_TLV_FLASG3_UCAST \
+ | HTT_RX_FP_DATA_PKT_FILTER_TLV_FLASG3_NULL_DATA)
+
+#define HTT_RX_MD_DATA_FILTER_FLASG3 (HTT_RX_MD_DATA_PKT_FILTER_TLV_FLASG3_MCAST \
+ | HTT_RX_MD_DATA_PKT_FILTER_TLV_FLASG3_UCAST \
+ | HTT_RX_MD_DATA_PKT_FILTER_TLV_FLASG3_NULL_DATA)
+
+#define HTT_RX_MO_DATA_FILTER_FLASG3 (HTT_RX_MO_DATA_PKT_FILTER_TLV_FLASG3_MCAST \
+ | HTT_RX_MO_DATA_PKT_FILTER_TLV_FLASG3_UCAST \
+ | HTT_RX_MO_DATA_PKT_FILTER_TLV_FLASG3_NULL_DATA)
+
+#define HTT_RX_MON_FP_MGMT_FILTER_FLAGS0 \
+ (HTT_RX_FP_MGMT_FILTER_FLAGS0 | \
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS0_RESERVED_7)
+
+#define HTT_RX_MON_MO_MGMT_FILTER_FLAGS0 \
+ (HTT_RX_MO_MGMT_FILTER_FLAGS0 | \
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS0_RESERVED_7)
+
+#define HTT_RX_MON_FP_MGMT_FILTER_FLAGS1 \
+ (HTT_RX_FP_MGMT_FILTER_FLAGS1 | \
+ HTT_RX_FP_MGMT_PKT_FILTER_TLV_FLAGS1_RESERVED_15)
+
+#define HTT_RX_MON_MO_MGMT_FILTER_FLAGS1 \
+ (HTT_RX_MO_MGMT_FILTER_FLAGS1 | \
+ HTT_RX_MO_MGMT_PKT_FILTER_TLV_FLAGS1_RESERVED_15)
+
+#define HTT_RX_MON_FP_CTRL_FILTER_FLASG2 \
+ (HTT_RX_FP_CTRL_FILTER_FLASG2 | \
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_RESERVED_1 | \
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_RESERVED_2 | \
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_TRIGGER | \
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_RESERVED_4 | \
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_BF_REP_POLL | \
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_VHT_NDP | \
+ HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_FRAME_EXT)
+
+#define HTT_RX_MON_MO_CTRL_FILTER_FLASG2 \
+ (HTT_RX_MO_CTRL_FILTER_FLASG2 | \
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_RESERVED_1 | \
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_RESERVED_2 | \
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_TRIGGER | \
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_RESERVED_4 | \
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_BF_REP_POLL | \
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_VHT_NDP | \
+ HTT_RX_MO_CTRL_PKT_FILTER_TLV_FLAGS2_CTRL_FRAME_EXT)
+
+#define HTT_RX_MON_FP_CTRL_FILTER_FLASG3 HTT_RX_FP_CTRL_FILTER_FLASG3
+
+#define HTT_RX_MON_MO_CTRL_FILTER_FLASG3 HTT_RX_MO_CTRL_FILTER_FLASG3
+
+#define HTT_RX_MON_FP_DATA_FILTER_FLASG3 HTT_RX_FP_DATA_FILTER_FLASG3
+
+#define HTT_RX_MON_MO_DATA_FILTER_FLASG3 HTT_RX_MO_DATA_FILTER_FLASG3
+
+#define HTT_RX_MON_FILTER_TLV_FLAGS \
+ (HTT_RX_FILTER_TLV_FLAGS_MPDU_START | \
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_START | \
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_END | \
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_END_USER_STATS | \
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_END_USER_STATS_EXT | \
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_END_STATUS_DONE)
+
+#define HTT_RX_MON_FILTER_TLV_FLAGS_MON_STATUS_RING \
+ (HTT_RX_FILTER_TLV_FLAGS_MPDU_START | \
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_START | \
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_END | \
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_END_USER_STATS | \
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_END_USER_STATS_EXT | \
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_END_STATUS_DONE)
+
+#define HTT_RX_MON_FILTER_TLV_FLAGS_MON_BUF_RING \
+ (HTT_RX_FILTER_TLV_FLAGS_MPDU_START | \
+ HTT_RX_FILTER_TLV_FLAGS_MSDU_START | \
+ HTT_RX_FILTER_TLV_FLAGS_RX_PACKET | \
+ HTT_RX_FILTER_TLV_FLAGS_MSDU_END | \
+ HTT_RX_FILTER_TLV_FLAGS_MPDU_END | \
+ HTT_RX_FILTER_TLV_FLAGS_PACKET_HEADER | \
+ HTT_RX_FILTER_TLV_FLAGS_PER_MSDU_HEADER | \
+ HTT_RX_FILTER_TLV_FLAGS_ATTENTION)
+
+/* msdu start, mpdu end, attention and rx hdr TLVs are not subscribed */
+#define HTT_RX_TLV_FLAGS_RXDMA_RING \
+ (HTT_RX_FILTER_TLV_FLAGS_MPDU_START | \
+ HTT_RX_FILTER_TLV_FLAGS_RX_PACKET | \
+ HTT_RX_FILTER_TLV_FLAGS_MSDU_END)
+
+#define HTT_TX_RING_SELECTION_CFG_CMD_INFO0_MSG_TYPE GENMASK(7, 0)
+#define HTT_TX_RING_SELECTION_CFG_CMD_INFO0_PDEV_ID GENMASK(15, 8)
+
+struct htt_rx_ring_selection_cfg_cmd {
+ __le32 info0;
+ __le32 info1;
+ __le32 pkt_type_en_flags0;
+ __le32 pkt_type_en_flags1;
+ __le32 pkt_type_en_flags2;
+ __le32 pkt_type_en_flags3;
+ __le32 rx_filter_tlv;
+ __le32 rx_packet_offset;
+ __le32 rx_mpdu_offset;
+ __le32 rx_msdu_offset;
+ __le32 rx_attn_offset;
+} __packed;
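+
+/* Each offset word packs two 16-bit offsets (illustrative sketch with a
+ * hypothetical tlv_filter as input):
+ *
+ * cmd->rx_packet_offset =
+ * le32_encode_bits(tlv_filter->rx_packet_offset,
+ * HTT_RX_RING_SELECTION_CFG_RX_PACKET_OFFSET) |
+ * le32_encode_bits(tlv_filter->rx_header_offset,
+ * HTT_RX_RING_SELECTION_CFG_RX_HEADER_OFFSET);
+ */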
+
+struct htt_rx_ring_tlv_filter {
+ u32 rx_filter; /* see htt_rx_filter_tlv_flags */
+ u32 pkt_filter_flags0; /* MGMT */
+ u32 pkt_filter_flags1; /* MGMT */
+ u32 pkt_filter_flags2; /* CTRL */
+ u32 pkt_filter_flags3; /* DATA */
+ bool offset_valid;
+ u16 rx_packet_offset;
+ u16 rx_header_offset;
+ u16 rx_mpdu_end_offset;
+ u16 rx_mpdu_start_offset;
+ u16 rx_msdu_end_offset;
+ u16 rx_msdu_start_offset;
+ u16 rx_attn_offset;
+};
+
+#define HTT_STATS_FRAME_CTRL_TYPE_MGMT 0x0
+#define HTT_STATS_FRAME_CTRL_TYPE_CTRL 0x1
+#define HTT_STATS_FRAME_CTRL_TYPE_DATA 0x2
+#define HTT_STATS_FRAME_CTRL_TYPE_RESV 0x3
+
+#define HTT_TX_RING_SELECTION_CFG_CMD_INFO0_MSG_TYPE GENMASK(7, 0)
+#define HTT_TX_RING_SELECTION_CFG_CMD_INFO0_PDEV_ID GENMASK(15, 8)
+#define HTT_TX_RING_SELECTION_CFG_CMD_INFO0_RING_ID GENMASK(23, 16)
+#define HTT_TX_RING_SELECTION_CFG_CMD_INFO0_SS BIT(24)
+#define HTT_TX_RING_SELECTION_CFG_CMD_INFO0_PS BIT(25)
+
+#define HTT_TX_RING_SELECTION_CFG_CMD_INFO1_RING_BUFF_SIZE GENMASK(15, 0)
+#define HTT_TX_RING_SELECTION_CFG_CMD_INFO1_PKT_TYPE GENMASK(18, 16)
+#define HTT_TX_RING_SELECTION_CFG_CMD_INFO1_CONF_LEN_MGMT GENMASK(21, 19)
+#define HTT_TX_RING_SELECTION_CFG_CMD_INFO1_CONF_LEN_CTRL GENMASK(24, 22)
+#define HTT_TX_RING_SELECTION_CFG_CMD_INFO1_CONF_LEN_DATA GENMASK(27, 25)
+
+#define HTT_TX_RING_SELECTION_CFG_CMD_INFO2_PKT_TYPE_EN_FLAG GENMASK(2, 0)
+
+struct htt_tx_ring_selection_cfg_cmd {
+ __le32 info0;
+ __le32 info1;
+ __le32 info2;
+ __le32 tlv_filter_mask_in0;
+ __le32 tlv_filter_mask_in1;
+ __le32 tlv_filter_mask_in2;
+ __le32 tlv_filter_mask_in3;
+ __le32 reserverd[3];
+} __packed;
+
+#define HTT_TX_RING_TLV_FILTER_MGMT_DMA_LEN GENMASK(3, 0)
+#define HTT_TX_RING_TLV_FILTER_CTRL_DMA_LEN GENMASK(7, 4)
+#define HTT_TX_RING_TLV_FILTER_DATA_DMA_LEN GENMASK(11, 8)
+
+#define HTT_TX_MON_FILTER_HYBRID_MODE \
+ (HTT_TX_FILTER_TLV_FLAGS0_RESPONSE_START_STATUS | \
+ HTT_TX_FILTER_TLV_FLAGS0_RESPONSE_END_STATUS | \
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_START | \
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_END | \
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_START_PPDU | \
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_USER_PPDU | \
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_ACK_OR_BA | \
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_1K_BA | \
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_START_PROT | \
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_PROT | \
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_USER_RESPONSE | \
+ HTT_TX_FILTER_TLV_FLAGS0_RECEIVED_RESPONSE_INFO | \
+ HTT_TX_FILTER_TLV_FLAGS0_RECEIVED_RESPONSE_INFO_PART2)
+
+struct htt_tx_ring_tlv_filter {
+ u32 tx_mon_downstream_tlv_flags;
+ u32 tx_mon_upstream_tlv_flags0;
+ u32 tx_mon_upstream_tlv_flags1;
+ u32 tx_mon_upstream_tlv_flags2;
+ bool tx_mon_mgmt_filter;
+ bool tx_mon_data_filter;
+ bool tx_mon_ctrl_filter;
+ u16 tx_mon_pkt_dma_len;
+} __packed;
+
+enum htt_tx_mon_upstream_tlv_flags0 {
+ HTT_TX_FILTER_TLV_FLAGS0_RESPONSE_START_STATUS = BIT(1),
+ HTT_TX_FILTER_TLV_FLAGS0_RESPONSE_END_STATUS = BIT(2),
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_START = BIT(3),
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_END = BIT(4),
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_START_PPDU = BIT(5),
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_USER_PPDU = BIT(6),
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_ACK_OR_BA = BIT(7),
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_1K_BA = BIT(8),
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_START_PROT = BIT(9),
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_PROT = BIT(10),
+ HTT_TX_FILTER_TLV_FLAGS0_TX_FES_STATUS_USER_RESPONSE = BIT(11),
+ HTT_TX_FILTER_TLV_FLAGS0_RX_FRAME_BITMAP_ACK = BIT(12),
+ HTT_TX_FILTER_TLV_FLAGS0_RX_FRAME_1K_BITMAP_ACK = BIT(13),
+ HTT_TX_FILTER_TLV_FLAGS0_COEX_TX_STATUS = BIT(14),
+ HTT_TX_FILTER_TLV_FLAGS0_RECEIVED_RESPONSE_INFO = BIT(15),
+ HTT_TX_FILTER_TLV_FLAGS0_RECEIVED_RESPONSE_INFO_PART2 = BIT(16),
+};
+
+#define HTT_TX_FILTER_TLV_FLAGS2_TXPCU_PHYTX_OTHER_TRANSMIT_INFO32 BIT(11)
+
+/* HTT message target->host */
+
+enum htt_t2h_msg_type {
+ HTT_T2H_MSG_TYPE_VERSION_CONF,
+ HTT_T2H_MSG_TYPE_PEER_MAP = 0x3,
+ HTT_T2H_MSG_TYPE_PEER_UNMAP = 0x4,
+ HTT_T2H_MSG_TYPE_RX_ADDBA = 0x5,
+ HTT_T2H_MSG_TYPE_PKTLOG = 0x8,
+ HTT_T2H_MSG_TYPE_SEC_IND = 0xb,
+ HTT_T2H_MSG_TYPE_PEER_MAP2 = 0x1e,
+ HTT_T2H_MSG_TYPE_PEER_UNMAP2 = 0x1f,
+ HTT_T2H_MSG_TYPE_PPDU_STATS_IND = 0x1d,
+ HTT_T2H_MSG_TYPE_EXT_STATS_CONF = 0x1c,
+ HTT_T2H_MSG_TYPE_BKPRESSURE_EVENT_IND = 0x24,
+ HTT_T2H_MSG_TYPE_MLO_TIMESTAMP_OFFSET_IND = 0x28,
+ HTT_T2H_MSG_TYPE_PEER_MAP3 = 0x2b,
+ HTT_T2H_MSG_TYPE_VDEV_TXRX_STATS_PERIODIC_IND = 0x2c,
+};
+
+#define HTT_TARGET_VERSION_MAJOR 3
+
+#define HTT_T2H_MSG_TYPE GENMASK(7, 0)
+#define HTT_T2H_VERSION_CONF_MINOR GENMASK(15, 8)
+#define HTT_T2H_VERSION_CONF_MAJOR GENMASK(23, 16)
+
+struct htt_t2h_version_conf_msg {
+ __le32 version;
+} __packed;
+
+#define HTT_T2H_PEER_MAP_INFO_VDEV_ID GENMASK(15, 8)
+#define HTT_T2H_PEER_MAP_INFO_PEER_ID GENMASK(31, 16)
+#define HTT_T2H_PEER_MAP_INFO1_MAC_ADDR_H16 GENMASK(15, 0)
+#define HTT_T2H_PEER_MAP_INFO1_HW_PEER_ID GENMASK(31, 16)
+#define HTT_T2H_PEER_MAP_INFO2_AST_HASH_VAL GENMASK(15, 0)
+#define HTT_T2H_PEER_MAP_INFO2_NEXT_HOP_M BIT(16)
+#define HTT_T2H_PEER_MAP_INFO2_NEXT_HOP_S 16
+
+struct htt_t2h_peer_map_event {
+ __le32 info;
+ __le32 mac_addr_l32;
+ __le32 info1;
+ __le32 info2;
+} __packed;
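+
+/* Illustrative sketch only: decoding a peer map event with the GENMASK
+ * fields above; le32_get_bits() is from <linux/bitfield.h>. The full
+ * 6-byte MAC is split across mac_addr_l32 and info1[15:0] and can be
+ * rebuilt with ath12k_dp_get_mac_addr() defined later in this header.
+ */
+#if 0
+static void example_parse_peer_map(const struct htt_t2h_peer_map_event *ev)
+{
+ u8 vdev_id = le32_get_bits(ev->info, HTT_T2H_PEER_MAP_INFO_VDEV_ID);
+ u16 peer_id = le32_get_bits(ev->info, HTT_T2H_PEER_MAP_INFO_PEER_ID);
+ u16 hw_peer_id = le32_get_bits(ev->info1,
+ HTT_T2H_PEER_MAP_INFO1_HW_PEER_ID);
+}
+#endif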
+
+#define HTT_T2H_PEER_UNMAP_INFO_VDEV_ID HTT_T2H_PEER_MAP_INFO_VDEV_ID
+#define HTT_T2H_PEER_UNMAP_INFO_PEER_ID HTT_T2H_PEER_MAP_INFO_PEER_ID
+#define HTT_T2H_PEER_UNMAP_INFO1_MAC_ADDR_H16 \
+ HTT_T2H_PEER_MAP_INFO1_MAC_ADDR_H16
+#define HTT_T2H_PEER_MAP_INFO1_NEXT_HOP_M HTT_T2H_PEER_MAP_INFO2_NEXT_HOP_M
+#define HTT_T2H_PEER_MAP_INFO1_NEXT_HOP_S HTT_T2H_PEER_MAP_INFO2_NEXT_HOP_S
+
+struct htt_t2h_peer_unmap_event {
+ __le32 info;
+ __le32 mac_addr_l32;
+ __le32 info1;
+} __packed;
+
+struct htt_resp_msg {
+ union {
+ struct htt_t2h_version_conf_msg version_msg;
+ struct htt_t2h_peer_map_event peer_map_ev;
+ struct htt_t2h_peer_unmap_event peer_unmap_ev;
+ };
+} __packed;
+
+#define HTT_VDEV_GET_STATS_U64(msg_l32, msg_u32) \
+ (((u64)__le32_to_cpu(msg_u32) << 32) | (__le32_to_cpu(msg_l32)))
+#define HTT_T2H_VDEV_STATS_PERIODIC_MSG_TYPE GENMASK(7, 0)
+#define HTT_T2H_VDEV_STATS_PERIODIC_PDEV_ID GENMASK(15, 8)
+#define HTT_T2H_VDEV_STATS_PERIODIC_NUM_VDEV GENMASK(23, 16)
+#define HTT_T2H_VDEV_STATS_PERIODIC_PAYLOAD_BYTES GENMASK(15, 0)
+#define HTT_VDEV_TXRX_STATS_COMMON_TLV 0
+#define HTT_VDEV_TXRX_STATS_HW_STATS_TLV 1
+
+struct htt_t2h_vdev_txrx_stats_ind {
+ __le32 vdev_id;
+ __le32 rx_msdu_byte_cnt_lo;
+ __le32 rx_msdu_byte_cnt_hi;
+ __le32 rx_msdu_cnt_lo;
+ __le32 rx_msdu_cnt_hi;
+ __le32 tx_msdu_byte_cnt_lo;
+ __le32 tx_msdu_byte_cnt_hi;
+ __le32 tx_msdu_cnt_lo;
+ __le32 tx_msdu_cnt_hi;
+ __le32 tx_retry_cnt_lo;
+ __le32 tx_retry_cnt_hi;
+ __le32 tx_retry_byte_cnt_lo;
+ __le32 tx_retry_byte_cnt_hi;
+ __le32 tx_drop_cnt_lo;
+ __le32 tx_drop_cnt_hi;
+ __le32 tx_drop_byte_cnt_lo;
+ __le32 tx_drop_byte_cnt_hi;
+ __le32 msdu_ttl_cnt_lo;
+ __le32 msdu_ttl_cnt_hi;
+ __le32 msdu_ttl_byte_cnt_lo;
+ __le32 msdu_ttl_byte_cnt_hi;
+} __packed;
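+
+/* Illustrative sketch only: each 64-bit counter in the periodic vdev stats
+ * indication arrives as a lo/hi pair of __le32 words and can be recombined
+ * with the HTT_VDEV_GET_STATS_U64() macro defined above.
+ */
+#if 0
+static u64 example_vdev_rx_bytes(const struct htt_t2h_vdev_txrx_stats_ind *ind)
+{
+ return HTT_VDEV_GET_STATS_U64(ind->rx_msdu_byte_cnt_lo,
+ ind->rx_msdu_byte_cnt_hi);
+}
+#endif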
+
+struct htt_t2h_vdev_common_stats_tlv {
+ __le32 soc_drop_count_lo;
+ __le32 soc_drop_count_hi;
+} __packed;
+
+/* ppdu stats
+ *
+ * @details
+ * The following field definitions describe the format of the HTT target
+ * to host ppdu stats indication message.
+ *
+ *
+ * |31 16|15 12|11 10|9 8|7 0 |
+ * |----------------------------------------------------------------------|
+ * | payload_size | rsvd |pdev_id|mac_id | msg type |
+ * |----------------------------------------------------------------------|
+ * | ppdu_id |
+ * |----------------------------------------------------------------------|
+ * | Timestamp in us |
+ * |----------------------------------------------------------------------|
+ * | reserved |
+ * |----------------------------------------------------------------------|
+ * | type-specific stats info |
+ * | (see htt_ppdu_stats.h) |
+ * |----------------------------------------------------------------------|
+ * Header fields:
+ * - MSG_TYPE
+ * Bits 7:0
+ * Purpose: Identifies this is a PPDU STATS indication
+ * message.
+ * Value: 0x1d
+ * - mac_id
+ * Bits 9:8
+ * Purpose: mac_id of this ppdu_id
+ * Value: 0-3
+ * - pdev_id
+ * Bits 11:10
+ * Purpose: pdev_id of this ppdu_id
+ * Value: 0-3
+ * 0 (for rings at SOC level),
+ * 1/2/3 PDEV -> 0/1/2
+ * - payload_size
+ * Bits 31:16
+ * Purpose: total tlv size
+ * Value: payload_size in bytes
+ */
+
+#define HTT_T2H_PPDU_STATS_INFO_PDEV_ID GENMASK(11, 10)
+#define HTT_T2H_PPDU_STATS_INFO_PAYLOAD_SIZE GENMASK(31, 16)
+
+struct ath12k_htt_ppdu_stats_msg {
+ __le32 info;
+ __le32 ppdu_id;
+ __le32 timestamp;
+ __le32 rsvd;
+ u8 data[];
+} __packed;
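+
+/* Illustrative sketch only: extracting the header fields of the PPDU stats
+ * indication exactly as laid out in the diagram above; msg->data then
+ * carries payload_size bytes of TLV-encoded stats.
+ */
+#if 0
+static void example_parse_ppdu_stats_hdr(const struct ath12k_htt_ppdu_stats_msg *msg)
+{
+ u8 pdev_id = le32_get_bits(msg->info, HTT_T2H_PPDU_STATS_INFO_PDEV_ID);
+ u16 payload_size = le32_get_bits(msg->info,
+ HTT_T2H_PPDU_STATS_INFO_PAYLOAD_SIZE);
+ u32 ppdu_id = le32_to_cpu(msg->ppdu_id);
+}
+#endif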
+
+struct htt_tlv {
+ __le32 header;
+ u8 value[];
+} __packed;
+
+#define HTT_TLV_TAG GENMASK(11, 0)
+#define HTT_TLV_LEN GENMASK(23, 12)
+
+enum HTT_PPDU_STATS_BW {
+ HTT_PPDU_STATS_BANDWIDTH_5MHZ = 0,
+ HTT_PPDU_STATS_BANDWIDTH_10MHZ = 1,
+ HTT_PPDU_STATS_BANDWIDTH_20MHZ = 2,
+ HTT_PPDU_STATS_BANDWIDTH_40MHZ = 3,
+ HTT_PPDU_STATS_BANDWIDTH_80MHZ = 4,
+ HTT_PPDU_STATS_BANDWIDTH_160MHZ = 5, /* includes 80+80 */
+ HTT_PPDU_STATS_BANDWIDTH_DYN = 6,
+};
+
+#define HTT_PPDU_STATS_CMN_FLAGS_FRAME_TYPE_M GENMASK(7, 0)
+#define HTT_PPDU_STATS_CMN_FLAGS_QUEUE_TYPE_M GENMASK(15, 8)
+/* bw - HTT_PPDU_STATS_BW */
+#define HTT_PPDU_STATS_CMN_FLAGS_BW_M GENMASK(19, 16)
+
+struct htt_ppdu_stats_common {
+ __le32 ppdu_id;
+ __le16 sched_cmdid;
+ u8 ring_id;
+ u8 num_users;
+ __le32 flags; /* %HTT_PPDU_STATS_CMN_FLAGS_ */
+ __le32 chain_mask;
+ __le32 fes_duration_us; /* frame exchange sequence */
+ __le32 ppdu_sch_eval_start_tstmp_us;
+ __le32 ppdu_sch_end_tstmp_us;
+ __le32 ppdu_start_tstmp_us;
+ /* BIT [15 : 0] - phy mode (WLAN_PHY_MODE) with which ppdu was transmitted
+ * BIT [31 : 16] - bandwidth (in MHz) with which ppdu was transmitted
+ */
+ __le16 phy_mode;
+ __le16 bw_mhz;
+} __packed;
+
+enum htt_ppdu_stats_gi {
+ HTT_PPDU_STATS_SGI_0_8_US,
+ HTT_PPDU_STATS_SGI_0_4_US,
+ HTT_PPDU_STATS_SGI_1_6_US,
+ HTT_PPDU_STATS_SGI_3_2_US,
+};
+
+#define HTT_PPDU_STATS_USER_RATE_INFO0_USER_POS_M GENMASK(3, 0)
+#define HTT_PPDU_STATS_USER_RATE_INFO0_MU_GROUP_ID_M GENMASK(11, 4)
+
+enum HTT_PPDU_STATS_PPDU_TYPE {
+ HTT_PPDU_STATS_PPDU_TYPE_SU,
+ HTT_PPDU_STATS_PPDU_TYPE_MU_MIMO,
+ HTT_PPDU_STATS_PPDU_TYPE_MU_OFDMA,
+ HTT_PPDU_STATS_PPDU_TYPE_MU_MIMO_OFDMA,
+ HTT_PPDU_STATS_PPDU_TYPE_UL_TRIG,
+ HTT_PPDU_STATS_PPDU_TYPE_BURST_BCN,
+ HTT_PPDU_STATS_PPDU_TYPE_UL_BSR_RESP,
+ HTT_PPDU_STATS_PPDU_TYPE_UL_BSR_TRIG,
+ HTT_PPDU_STATS_PPDU_TYPE_UL_RESP,
+ HTT_PPDU_STATS_PPDU_TYPE_MAX
+};
+
+#define HTT_PPDU_STATS_USER_RATE_INFO1_RESP_TYPE_VALD_M BIT(0)
+#define HTT_PPDU_STATS_USER_RATE_INFO1_PPDU_TYPE_M GENMASK(5, 1)
+
+#define HTT_PPDU_STATS_USER_RATE_FLAGS_LTF_SIZE_M GENMASK(1, 0)
+#define HTT_PPDU_STATS_USER_RATE_FLAGS_STBC_M BIT(2)
+#define HTT_PPDU_STATS_USER_RATE_FLAGS_HE_RE_M BIT(3)
+#define HTT_PPDU_STATS_USER_RATE_FLAGS_TXBF_M GENMASK(7, 4)
+#define HTT_PPDU_STATS_USER_RATE_FLAGS_BW_M GENMASK(11, 8)
+#define HTT_PPDU_STATS_USER_RATE_FLAGS_NSS_M GENMASK(15, 12)
+#define HTT_PPDU_STATS_USER_RATE_FLAGS_MCS_M GENMASK(19, 16)
+#define HTT_PPDU_STATS_USER_RATE_FLAGS_PREAMBLE_M GENMASK(23, 20)
+#define HTT_PPDU_STATS_USER_RATE_FLAGS_GI_M GENMASK(27, 24)
+#define HTT_PPDU_STATS_USER_RATE_FLAGS_DCM_M BIT(28)
+#define HTT_PPDU_STATS_USER_RATE_FLAGS_LDPC_M BIT(29)
+
+#define HTT_USR_RATE_PREAMBLE(_val) \
+ le32_get_bits(_val, HTT_PPDU_STATS_USER_RATE_FLAGS_PREAMBLE_M)
+#define HTT_USR_RATE_BW(_val) \
+ le32_get_bits(_val, HTT_PPDU_STATS_USER_RATE_FLAGS_BW_M)
+#define HTT_USR_RATE_NSS(_val) \
+ le32_get_bits(_val, HTT_PPDU_STATS_USER_RATE_FLAGS_NSS_M)
+#define HTT_USR_RATE_MCS(_val) \
+ le32_get_bits(_val, HTT_PPDU_STATS_USER_RATE_FLAGS_MCS_M)
+#define HTT_USR_RATE_GI(_val) \
+ le32_get_bits(_val, HTT_PPDU_STATS_USER_RATE_FLAGS_GI_M)
+#define HTT_USR_RATE_DCM(_val) \
+ le32_get_bits(_val, HTT_PPDU_STATS_USER_RATE_FLAGS_DCM_M)
+
+#define HTT_PPDU_STATS_USER_RATE_RESP_FLAGS_LTF_SIZE_M GENMASK(1, 0)
+#define HTT_PPDU_STATS_USER_RATE_RESP_FLAGS_STBC_M BIT(2)
+#define HTT_PPDU_STATS_USER_RATE_RESP_FLAGS_HE_RE_M BIT(3)
+#define HTT_PPDU_STATS_USER_RATE_RESP_FLAGS_TXBF_M GENMASK(7, 4)
+#define HTT_PPDU_STATS_USER_RATE_RESP_FLAGS_BW_M GENMASK(11, 8)
+#define HTT_PPDU_STATS_USER_RATE_RESP_FLAGS_NSS_M GENMASK(15, 12)
+#define HTT_PPDU_STATS_USER_RATE_RESP_FLAGS_MCS_M GENMASK(19, 16)
+#define HTT_PPDU_STATS_USER_RATE_RESP_FLAGS_PREAMBLE_M GENMASK(23, 20)
+#define HTT_PPDU_STATS_USER_RATE_RESP_FLAGS_GI_M GENMASK(27, 24)
+#define HTT_PPDU_STATS_USER_RATE_RESP_FLAGS_DCM_M BIT(28)
+#define HTT_PPDU_STATS_USER_RATE_RESP_FLAGS_LDPC_M BIT(29)
+
+struct htt_ppdu_stats_user_rate {
+ u8 tid_num;
+ u8 reserved0;
+ __le16 sw_peer_id;
+ __le32 info0; /* %HTT_PPDU_STATS_USER_RATE_INFO0_*/
+ __le16 ru_end;
+ __le16 ru_start;
+ __le16 resp_ru_end;
+ __le16 resp_ru_start;
+ __le32 info1; /* %HTT_PPDU_STATS_USER_RATE_INFO1_ */
+ __le32 rate_flags; /* %HTT_PPDU_STATS_USER_RATE_FLAGS_ */
+ /* Note: resp_rate_flags is only valid if resp_type is UL */
+ __le32 resp_rate_flags; /* %HTT_PPDU_STATS_USER_RATE_RESP_FLAGS_ */
+} __packed;
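+
+/* Illustrative sketch only: the HTT_USR_RATE_* accessors above unpack the
+ * per-user rate_flags word, e.g. MCS, NSS, bandwidth and guard interval.
+ */
+#if 0
+static void example_user_rate(const struct htt_ppdu_stats_user_rate *rate)
+{
+ u8 mcs = HTT_USR_RATE_MCS(rate->rate_flags);
+ u8 nss = HTT_USR_RATE_NSS(rate->rate_flags);
+ u8 bw = HTT_USR_RATE_BW(rate->rate_flags);
+ u8 gi = HTT_USR_RATE_GI(rate->rate_flags);
+}
+#endif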
+
+#define HTT_PPDU_STATS_TX_INFO_FLAGS_RATECODE_M GENMASK(7, 0)
+#define HTT_PPDU_STATS_TX_INFO_FLAGS_IS_AMPDU_M BIT(8)
+#define HTT_PPDU_STATS_TX_INFO_FLAGS_BA_ACK_FAILED_M GENMASK(10, 9)
+#define HTT_PPDU_STATS_TX_INFO_FLAGS_BW_M GENMASK(13, 11)
+#define HTT_PPDU_STATS_TX_INFO_FLAGS_SGI_M BIT(14)
+#define HTT_PPDU_STATS_TX_INFO_FLAGS_PEERID_M GENMASK(31, 16)
+
+#define HTT_TX_INFO_IS_AMSDU(_flags) \
+ u32_get_bits(_flags, HTT_PPDU_STATS_TX_INFO_FLAGS_IS_AMPDU_M)
+#define HTT_TX_INFO_BA_ACK_FAILED(_flags) \
+ u32_get_bits(_flags, HTT_PPDU_STATS_TX_INFO_FLAGS_BA_ACK_FAILED_M)
+#define HTT_TX_INFO_RATECODE(_flags) \
+ u32_get_bits(_flags, HTT_PPDU_STATS_TX_INFO_FLAGS_RATECODE_M)
+#define HTT_TX_INFO_PEERID(_flags) \
+ u32_get_bits(_flags, HTT_PPDU_STATS_TX_INFO_FLAGS_PEERID_M)
+
+struct htt_tx_ppdu_stats_info {
+ struct htt_tlv tlv_hdr;
+ __le32 tx_success_bytes;
+ __le32 tx_retry_bytes;
+ __le32 tx_failed_bytes;
+ __le32 flags; /* %HTT_PPDU_STATS_TX_INFO_FLAGS_ */
+ __le16 tx_success_msdus;
+ __le16 tx_retry_msdus;
+ __le16 tx_failed_msdus;
+ __le16 tx_duration; /* unit in us */
+} __packed;
+
+enum htt_ppdu_stats_usr_compln_status {
+ HTT_PPDU_STATS_USER_STATUS_OK,
+ HTT_PPDU_STATS_USER_STATUS_FILTERED,
+ HTT_PPDU_STATS_USER_STATUS_RESP_TIMEOUT,
+ HTT_PPDU_STATS_USER_STATUS_RESP_MISMATCH,
+ HTT_PPDU_STATS_USER_STATUS_ABORT,
+};
+
+#define HTT_PPDU_STATS_USR_CMPLTN_CMN_FLAGS_LONG_RETRY_M GENMASK(3, 0)
+#define HTT_PPDU_STATS_USR_CMPLTN_CMN_FLAGS_SHORT_RETRY_M GENMASK(7, 4)
+#define HTT_PPDU_STATS_USR_CMPLTN_CMN_FLAGS_IS_AMPDU_M BIT(8)
+#define HTT_PPDU_STATS_USR_CMPLTN_CMN_FLAGS_RESP_TYPE_M GENMASK(12, 9)
+
+#define HTT_USR_CMPLTN_IS_AMPDU(_val) \
+ le32_get_bits(_val, HTT_PPDU_STATS_USR_CMPLTN_CMN_FLAGS_IS_AMPDU_M)
+#define HTT_USR_CMPLTN_LONG_RETRY(_val) \
+ le32_get_bits(_val, HTT_PPDU_STATS_USR_CMPLTN_CMN_FLAGS_LONG_RETRY_M)
+#define HTT_USR_CMPLTN_SHORT_RETRY(_val) \
+ le32_get_bits(_val, HTT_PPDU_STATS_USR_CMPLTN_CMN_FLAGS_SHORT_RETRY_M)
+
+struct htt_ppdu_stats_usr_cmpltn_cmn {
+ u8 status;
+ u8 tid_num;
+ __le16 sw_peer_id;
+ /* RSSI value of last ack packet (units = dB above noise floor) */
+ __le32 ack_rssi;
+ __le16 mpdu_tried;
+ __le16 mpdu_success;
+ __le32 flags; /* %HTT_PPDU_STATS_USR_CMPLTN_CMN_FLAGS_ */
+} __packed;
+
+#define HTT_PPDU_STATS_ACK_BA_INFO_NUM_MPDU_M GENMASK(8, 0)
+#define HTT_PPDU_STATS_ACK_BA_INFO_NUM_MSDU_M GENMASK(24, 9)
+#define HTT_PPDU_STATS_ACK_BA_INFO_TID_NUM GENMASK(31, 25)
+
+#define HTT_PPDU_STATS_NON_QOS_TID 16
+
+struct htt_ppdu_stats_usr_cmpltn_ack_ba_status {
+ __le32 ppdu_id;
+ __le16 sw_peer_id;
+ __le16 reserved0;
+ __le32 info; /* %HTT_PPDU_STATS_ACK_BA_INFO_ */
+ __le16 current_seq;
+ __le16 start_seq;
+ __le32 success_bytes;
+} __packed;
+
+struct htt_ppdu_user_stats {
+ u16 peer_id;
+ u16 delay_ba;
+ u32 tlv_flags;
+ bool is_valid_peer_id;
+ struct htt_ppdu_stats_user_rate rate;
+ struct htt_ppdu_stats_usr_cmpltn_cmn cmpltn_cmn;
+ struct htt_ppdu_stats_usr_cmpltn_ack_ba_status ack_ba;
+};
+
+#define HTT_PPDU_STATS_MAX_USERS 8
+#define HTT_PPDU_DESC_MAX_DEPTH 16
+
+struct htt_ppdu_stats {
+ struct htt_ppdu_stats_common common;
+ struct htt_ppdu_user_stats user_stats[HTT_PPDU_STATS_MAX_USERS];
+};
+
+struct htt_ppdu_stats_info {
+ u32 tlv_bitmap;
+ u32 ppdu_id;
+ u32 frame_type;
+ u32 frame_ctrl;
+ u32 delay_ba;
+ u32 bar_num_users;
+ struct htt_ppdu_stats ppdu_stats;
+ struct list_head list;
+};
+
+/* @brief target -> host MLO offset indication message
+ *
+ * @details
+ * The following field definitions describe the format of the HTT target
+ * to host mlo offset indication message.
+ *
+ *
+ * |31 29|28 |26|25 22|21 16|15 13|12 10 |9 8|7 0|
+ * |---------------------------------------------------------------------|
+ * | rsvd1 | mac_freq |chip_id |pdev_id|msgtype|
+ * |---------------------------------------------------------------------|
+ * | sync_timestamp_lo_us |
+ * |---------------------------------------------------------------------|
+ * | sync_timestamp_hi_us |
+ * |---------------------------------------------------------------------|
+ * | mlo_offset_lo |
+ * |---------------------------------------------------------------------|
+ * | mlo_offset_hi |
+ * |---------------------------------------------------------------------|
+ * | mlo_offset_clcks |
+ * |---------------------------------------------------------------------|
+ * | rsvd2 | mlo_comp_clks |mlo_comp_us |
+ * |---------------------------------------------------------------------|
+ * | rsvd3 |mlo_comp_timer |
+ * |---------------------------------------------------------------------|
+ * Header fields
+ * - MSG_TYPE
+ * Bits 7:0
+ * Purpose: Identifies this is a MLO offset indication msg
+ * - PDEV_ID
+ * Bits 9:8
+ * Purpose: Pdev of this MLO offset
+ * - CHIP_ID
+ * Bits 12:10
+ * Purpose: chip_id of this MLO offset
+ * - MAC_FREQ
+ * Bits 28:13
+ * Purpose: clock frequency of the mac HW block in MHz
+ * - SYNC_TIMESTAMP_LO_US
+ * Bits: 31:0
+ * Purpose: lower 32 bits of the WLAN global time stamp at which
+ * last sync interrupt was received
+ * - SYNC_TIMESTAMP_HI_US
+ * Bits: 31:0
+ * Purpose: upper 32 bits of WLAN global time stamp at which
+ * last sync interrupt was received
+ * - MLO_OFFSET_LO
+ * Bits: 31:0
+ * Purpose: lower 32 bits of the MLO offset in us
+ * - MLO_OFFSET_HI
+ * Bits: 31:0
+ * Purpose: upper 32 bits of the MLO offset in us
+ * - MLO_COMP_US
+ * Bits: 15:0
+ * Purpose: MLO time stamp compensation applied in us
+ * - MLO_COMP_CLCKS
+ * Bits: 25:16
+ * Purpose: MLO time stamp compensation applied in clock ticks
+ * - MLO_COMP_TIMER
+ * Bits: 21:0
+ * Purpose: Periodic timer at which compensation is applied
+ */
+
+#define HTT_T2H_MLO_OFFSET_INFO_MSG_TYPE GENMASK(7, 0)
+#define HTT_T2H_MLO_OFFSET_INFO_PDEV_ID GENMASK(9, 8)
+
+struct ath12k_htt_mlo_offset_msg {
+ __le32 info;
+ __le32 sync_timestamp_lo_us;
+ __le32 sync_timestamp_hi_us;
+ __le32 mlo_offset_hi;
+ __le32 mlo_offset_lo;
+ __le32 mlo_offset_clks;
+ __le32 mlo_comp_clks;
+ __le32 mlo_comp_timer;
+} __packed;
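+
+/* Illustrative sketch only: decoding the MLO offset message header and
+ * recombining the 64-bit sync timestamp described in the layout above.
+ */
+#if 0
+static void example_parse_mlo_offset(const struct ath12k_htt_mlo_offset_msg *msg)
+{
+ u8 pdev_id = le32_get_bits(msg->info, HTT_T2H_MLO_OFFSET_INFO_PDEV_ID);
+ u64 sync_ts_us = ((u64)le32_to_cpu(msg->sync_timestamp_hi_us) << 32) |
+ le32_to_cpu(msg->sync_timestamp_lo_us);
+}
+#endif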
+
+/* @brief host -> target FW extended statistics retrieve
+ *
+ * @details
+ * The following field definitions describe the format of the HTT host
+ * to target FW extended stats retrieve message.
+ * The message specifies the type of stats the host wants to retrieve.
+ *
+ * |31 24|23 16|15 8|7 0|
+ * |-----------------------------------------------------------|
+ * | reserved | stats type | pdev_mask | msg type |
+ * |-----------------------------------------------------------|
+ * | config param [0] |
+ * |-----------------------------------------------------------|
+ * | config param [1] |
+ * |-----------------------------------------------------------|
+ * | config param [2] |
+ * |-----------------------------------------------------------|
+ * | config param [3] |
+ * |-----------------------------------------------------------|
+ * | reserved |
+ * |-----------------------------------------------------------|
+ * | cookie LSBs |
+ * |-----------------------------------------------------------|
+ * | cookie MSBs |
+ * |-----------------------------------------------------------|
+ * Header fields:
+ * - MSG_TYPE
+ * Bits 7:0
+ * Purpose: identifies this as an extended stats upload request message
+ * Value: 0x10
+ * - PDEV_MASK
+ * Bits 15:8
+ * Purpose: identifies the mask of PDEVs to retrieve stats from
+ * Value: This is an overloaded field; refer to the usage and
+ * interpretation of PDEV in the interface document.
+ * Bit 8 : Reserved for SOC stats
+ * Bit 9 - 15 : Indicates PDEV_MASK in DBDC
+ * Indicates MACID_MASK in DBS
+ * - STATS_TYPE
+ * Bits 23:16
+ * Purpose: identifies which FW statistics to upload
+ * Value: Defined by htt_dbg_ext_stats_type (see htt_stats.h)
+ * - Reserved
+ * Bits 31:24
+ * - CONFIG_PARAM [0]
+ * Bits 31:0
+ * Purpose: give an opaque configuration value to the specified stats type
+ * Value: stats-type specific configuration value
+ * Refer to htt_stats.h for interpretation for each stats sub_type
+ * - CONFIG_PARAM [1]
+ * Bits 31:0
+ * Purpose: give an opaque configuration value to the specified stats type
+ * Value: stats-type specific configuration value
+ * Refer to htt_stats.h for interpretation for each stats sub_type
+ * - CONFIG_PARAM [2]
+ * Bits 31:0
+ * Purpose: give an opaque configuration value to the specified stats type
+ * Value: stats-type specific configuration value
+ * Refer to htt_stats.h for interpretation for each stats sub_type
+ * - CONFIG_PARAM [3]
+ * Bits 31:0
+ * Purpose: give an opaque configuration value to the specified stats type
+ * Value: stats-type specific configuration value
+ * Refer to htt_stats.h for interpretation for each stats sub_type
+ * - Reserved [31:0] for future use.
+ * - COOKIE_LSBS
+ * Bits 31:0
+ * Purpose: Provide a mechanism to match a target->host stats confirmation
+ * message with its preceding host->target stats request message.
+ * Value: LSBs of the opaque cookie specified by the host-side requestor
+ * - COOKIE_MSBS
+ * Bits 31:0
+ * Purpose: Provide a mechanism to match a target->host stats confirmation
+ * message with its preceding host->target stats request message.
+ * Value: MSBs of the opaque cookie specified by the host-side requestor
+ */
+
+struct htt_ext_stats_cfg_hdr {
+ u8 msg_type;
+ u8 pdev_mask;
+ u8 stats_type;
+ u8 reserved;
+} __packed;
+
+struct htt_ext_stats_cfg_cmd {
+ struct htt_ext_stats_cfg_hdr hdr;
+ __le32 cfg_param0;
+ __le32 cfg_param1;
+ __le32 cfg_param2;
+ __le32 cfg_param3;
+ __le32 reserved;
+ __le32 cookie_lsb;
+ __le32 cookie_msb;
+} __packed;
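+
+/* Illustrative sketch only: filling an extended stats request per the
+ * message layout documented above. The msg_type value (0x10) and the
+ * pdev_mask bit layout come from that documentation; the function name,
+ * parameters and mask arithmetic are assumptions.
+ */
+#if 0
+static void example_fill_ext_stats_req(struct htt_ext_stats_cfg_cmd *cmd,
+ u8 pdev_idx, u8 stats_type, u64 cookie)
+{
+ cmd->hdr.msg_type = 0x10; /* extended stats upload request */
+ cmd->hdr.pdev_mask = 1 << (pdev_idx + 1); /* bit 0 is reserved for SOC */
+ cmd->hdr.stats_type = stats_type;
+ cmd->cookie_lsb = cpu_to_le32(lower_32_bits(cookie));
+ cmd->cookie_msb = cpu_to_le32(upper_32_bits(cookie));
+}
+#endif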
+
+/* htt stats config default params */
+#define HTT_STAT_DEFAULT_RESET_START_OFFSET 0
+#define HTT_STAT_DEFAULT_CFG0_ALL_HWQS 0xffffffff
+#define HTT_STAT_DEFAULT_CFG0_ALL_TXQS 0xffffffff
+#define HTT_STAT_DEFAULT_CFG0_ALL_CMDQS 0xffff
+#define HTT_STAT_DEFAULT_CFG0_ALL_RINGS 0xffff
+#define HTT_STAT_DEFAULT_CFG0_ACTIVE_PEERS 0xff
+#define HTT_STAT_DEFAULT_CFG0_CCA_CUMULATIVE 0x00
+#define HTT_STAT_DEFAULT_CFG0_ACTIVE_VDEVS 0x00
+
+/* HTT_DBG_EXT_STATS_PEER_INFO
+ * PARAMS:
+ * @config_param0:
+ * [Bit0] - [0] for sw_peer_id, [1] for mac_addr based request
+ * [Bit15 : Bit 1] htt_peer_stats_req_mode_t
+ * [Bit31 : Bit16] sw_peer_id
+ * @config_param1:
+ * peer_stats_req_type_mask:32 (enum htt_peer_stats_tlv_enum)
+ * 0 bit htt_peer_stats_cmn_tlv
+ * 1 bit htt_peer_details_tlv
+ * 2 bit htt_tx_peer_rate_stats_tlv
+ * 3 bit htt_rx_peer_rate_stats_tlv
+ * 4 bit htt_tx_tid_stats_tlv/htt_tx_tid_stats_v1_tlv
+ * 5 bit htt_rx_tid_stats_tlv
+ * 6 bit htt_msdu_flow_stats_tlv
+ * @config_param2: [Bit31 : Bit0] mac_addr31to0
+ * @config_param3: [Bit15 : Bit0] mac_addr47to32
+ * [Bit31 : Bit16] reserved
+ */
+#define HTT_STAT_PEER_INFO_MAC_ADDR BIT(0)
+#define HTT_STAT_DEFAULT_PEER_REQ_TYPE 0x7f
+
+/* Used to set different configs to the specified stats type. */
+struct htt_ext_stats_cfg_params {
+ u32 cfg0;
+ u32 cfg1;
+ u32 cfg2;
+ u32 cfg3;
+};
+
+enum vdev_stats_offload_timer_duration {
+ ATH12K_STATS_TIMER_DUR_500MS = 1,
+ ATH12K_STATS_TIMER_DUR_1SEC = 2,
+ ATH12K_STATS_TIMER_DUR_2SEC = 3,
+};
+
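+/* Rebuild a 6-byte MAC address from the low 32 bits and high 16 bits in
+ * which HTT peer map/unmap events deliver it (values in CPU byte order).
+ */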
+static inline void ath12k_dp_get_mac_addr(u32 addr_l32, u16 addr_h16, u8 *addr)
+{
+ memcpy(addr, &addr_l32, 4);
+ memcpy(addr + 4, &addr_h16, ETH_ALEN - 4);
+}
+
+int ath12k_dp_service_srng(struct ath12k_base *ab,
+ struct ath12k_ext_irq_grp *irq_grp,
+ int budget);
+int ath12k_dp_htt_connect(struct ath12k_dp *dp);
+void ath12k_dp_vdev_tx_attach(struct ath12k *ar, struct ath12k_vif *arvif);
+void ath12k_dp_free(struct ath12k_base *ab);
+int ath12k_dp_alloc(struct ath12k_base *ab);
+void ath12k_dp_cc_config(struct ath12k_base *ab);
+int ath12k_dp_pdev_alloc(struct ath12k_base *ab);
+void ath12k_dp_pdev_pre_alloc(struct ath12k_base *ab);
+void ath12k_dp_pdev_free(struct ath12k_base *ab);
+int ath12k_dp_tx_htt_srng_setup(struct ath12k_base *ab, u32 ring_id,
+ int mac_id, enum hal_ring_type ring_type);
+int ath12k_dp_peer_setup(struct ath12k *ar, int vdev_id, const u8 *addr);
+void ath12k_dp_peer_cleanup(struct ath12k *ar, int vdev_id, const u8 *addr);
+void ath12k_dp_srng_cleanup(struct ath12k_base *ab, struct dp_srng *ring);
+int ath12k_dp_srng_setup(struct ath12k_base *ab, struct dp_srng *ring,
+ enum hal_ring_type type, int ring_num,
+ int mac_id, int num_entries);
+void ath12k_dp_link_desc_cleanup(struct ath12k_base *ab,
+ struct dp_link_desc_bank *desc_bank,
+ u32 ring_type, struct dp_srng *ring);
+int ath12k_dp_link_desc_setup(struct ath12k_base *ab,
+ struct dp_link_desc_bank *link_desc_banks,
+ u32 ring_type, struct hal_srng *srng,
+ u32 n_link_desc);
+struct ath12k_rx_desc_info *ath12k_dp_get_rx_desc(struct ath12k_base *ab,
+ u32 cookie);
+struct ath12k_tx_desc_info *ath12k_dp_get_tx_desc(struct ath12k_base *ab,
+ u32 desc_id);
+#endif
diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
new file mode 100644
index 000000000000..a214797c96a2
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
@@ -0,0 +1,2596 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include "dp_mon.h"
+#include "debug.h"
+#include "dp_rx.h"
+#include "dp_tx.h"
+#include "peer.h"
+
+static void ath12k_dp_mon_rx_handle_ofdma_info(void *rx_tlv,
+ struct hal_rx_user_status *rx_user_status)
+{
+ struct hal_rx_ppdu_end_user_stats *ppdu_end_user =
+ (struct hal_rx_ppdu_end_user_stats *)rx_tlv;
+
+ rx_user_status->ul_ofdma_user_v0_word0 =
+ __le32_to_cpu(ppdu_end_user->usr_resp_ref);
+ rx_user_status->ul_ofdma_user_v0_word1 =
+ __le32_to_cpu(ppdu_end_user->usr_resp_ref_ext);
+}
+
+static void
+ath12k_dp_mon_rx_populate_byte_count(void *rx_tlv, void *ppduinfo,
+ struct hal_rx_user_status *rx_user_status)
+{
+ struct hal_rx_ppdu_end_user_stats *ppdu_end_user =
+ (struct hal_rx_ppdu_end_user_stats *)rx_tlv;
+ u32 mpdu_ok_byte_count = __le32_to_cpu(ppdu_end_user->mpdu_ok_cnt);
+ u32 mpdu_err_byte_count = __le32_to_cpu(ppdu_end_user->mpdu_err_cnt);
+
+ rx_user_status->mpdu_ok_byte_count =
+ u32_get_bits(mpdu_ok_byte_count,
+ HAL_RX_PPDU_END_USER_STATS_MPDU_DELIM_OK_BYTE_COUNT);
+ rx_user_status->mpdu_err_byte_count =
+ u32_get_bits(mpdu_err_byte_count,
+ HAL_RX_PPDU_END_USER_STATS_MPDU_DELIM_ERR_BYTE_COUNT);
+}
+
+static void
+ath12k_dp_mon_rx_populate_mu_user_info(void *rx_tlv,
+ struct hal_rx_mon_ppdu_info *ppdu_info,
+ struct hal_rx_user_status *rx_user_status)
+{
+ rx_user_status->ast_index = ppdu_info->ast_index;
+ rx_user_status->tid = ppdu_info->tid;
+ rx_user_status->tcp_ack_msdu_count =
+ ppdu_info->tcp_ack_msdu_count;
+ rx_user_status->tcp_msdu_count =
+ ppdu_info->tcp_msdu_count;
+ rx_user_status->udp_msdu_count =
+ ppdu_info->udp_msdu_count;
+ rx_user_status->other_msdu_count =
+ ppdu_info->other_msdu_count;
+ rx_user_status->frame_control = ppdu_info->frame_control;
+ rx_user_status->frame_control_info_valid =
+ ppdu_info->frame_control_info_valid;
+ rx_user_status->data_sequence_control_info_valid =
+ ppdu_info->data_sequence_control_info_valid;
+ rx_user_status->first_data_seq_ctrl =
+ ppdu_info->first_data_seq_ctrl;
+ rx_user_status->preamble_type = ppdu_info->preamble_type;
+ rx_user_status->ht_flags = ppdu_info->ht_flags;
+ rx_user_status->vht_flags = ppdu_info->vht_flags;
+ rx_user_status->he_flags = ppdu_info->he_flags;
+ rx_user_status->rs_flags = ppdu_info->rs_flags;
+
+ rx_user_status->mpdu_cnt_fcs_ok =
+ ppdu_info->num_mpdu_fcs_ok;
+ rx_user_status->mpdu_cnt_fcs_err =
+ ppdu_info->num_mpdu_fcs_err;
+ memcpy(&rx_user_status->mpdu_fcs_ok_bitmap[0], &ppdu_info->mpdu_fcs_ok_bitmap[0],
+ HAL_RX_NUM_WORDS_PER_PPDU_BITMAP *
+ sizeof(ppdu_info->mpdu_fcs_ok_bitmap[0]));
+
+ ath12k_dp_mon_rx_populate_byte_count(rx_tlv, ppdu_info, rx_user_status);
+}
+
+static void ath12k_dp_mon_parse_vht_sig_a(u8 *tlv_data,
+ struct hal_rx_mon_ppdu_info *ppdu_info)
+{
+ struct hal_rx_vht_sig_a_info *vht_sig =
+ (struct hal_rx_vht_sig_a_info *)tlv_data;
+ u32 nsts, group_id, info0, info1;
+ u8 gi_setting;
+
+ info0 = __le32_to_cpu(vht_sig->info0);
+ info1 = __le32_to_cpu(vht_sig->info1);
+
+ ppdu_info->ldpc = u32_get_bits(info1, HAL_RX_VHT_SIG_A_INFO_INFO1_SU_MU_CODING);
+ ppdu_info->mcs = u32_get_bits(info1, HAL_RX_VHT_SIG_A_INFO_INFO1_MCS);
+ gi_setting = u32_get_bits(info1, HAL_RX_VHT_SIG_A_INFO_INFO1_GI_SETTING);
+ switch (gi_setting) {
+ case HAL_RX_VHT_SIG_A_NORMAL_GI:
+ ppdu_info->gi = HAL_RX_GI_0_8_US;
+ break;
+ case HAL_RX_VHT_SIG_A_SHORT_GI:
+ case HAL_RX_VHT_SIG_A_SHORT_GI_AMBIGUITY:
+ ppdu_info->gi = HAL_RX_GI_0_4_US;
+ break;
+ }
+
+ ppdu_info->is_stbc = u32_get_bits(info0, HAL_RX_VHT_SIG_A_INFO_INFO0_STBC);
+ nsts = u32_get_bits(info0, HAL_RX_VHT_SIG_A_INFO_INFO0_NSTS);
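+ /* When STBC is in use the NSTS field counts space-time streams, i.e.
+ * twice the spatial streams, so halve it before deriving NSS below.
+ */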
+ if (ppdu_info->is_stbc && nsts > 0)
+ nsts = ((nsts + 1) >> 1) - 1;
+
+ ppdu_info->nss = u32_get_bits(nsts, VHT_SIG_SU_NSS_MASK);
+ ppdu_info->bw = u32_get_bits(info0, HAL_RX_VHT_SIG_A_INFO_INFO0_BW);
+ ppdu_info->beamformed = u32_get_bits(info1,
+ HAL_RX_VHT_SIG_A_INFO_INFO1_BEAMFORMED);
+ group_id = u32_get_bits(info0, HAL_RX_VHT_SIG_A_INFO_INFO0_GROUP_ID);
+ if (group_id == 0 || group_id == 63)
+ ppdu_info->reception_type = HAL_RX_RECEPTION_TYPE_SU;
+ else
+ ppdu_info->reception_type = HAL_RX_RECEPTION_TYPE_MU_MIMO;
+ ppdu_info->vht_flag_values5 = group_id;
+ ppdu_info->vht_flag_values3[0] = (((ppdu_info->mcs) << 4) |
+ ppdu_info->nss);
+ ppdu_info->vht_flag_values2 = ppdu_info->bw;
+ ppdu_info->vht_flag_values4 =
+ u32_get_bits(info1, HAL_RX_VHT_SIG_A_INFO_INFO1_SU_MU_CODING);
+}
+
+static void ath12k_dp_mon_parse_ht_sig(u8 *tlv_data,
+ struct hal_rx_mon_ppdu_info *ppdu_info)
+{
+ struct hal_rx_ht_sig_info *ht_sig =
+ (struct hal_rx_ht_sig_info *)tlv_data;
+ u32 info0 = __le32_to_cpu(ht_sig->info0);
+ u32 info1 = __le32_to_cpu(ht_sig->info1);
+
+ ppdu_info->mcs = u32_get_bits(info0, HAL_RX_HT_SIG_INFO_INFO0_MCS);
+ ppdu_info->bw = u32_get_bits(info0, HAL_RX_HT_SIG_INFO_INFO0_BW);
+ ppdu_info->is_stbc = u32_get_bits(info1, HAL_RX_HT_SIG_INFO_INFO1_STBC);
+ ppdu_info->ldpc = u32_get_bits(info1, HAL_RX_HT_SIG_INFO_INFO1_FEC_CODING);
+ ppdu_info->gi = u32_get_bits(info1, HAL_RX_HT_SIG_INFO_INFO1_GI);
+ ppdu_info->nss = (ppdu_info->mcs >> 3);
+ ppdu_info->reception_type = HAL_RX_RECEPTION_TYPE_SU;
+}
+
+static void ath12k_dp_mon_parse_l_sig_b(u8 *tlv_data,
+ struct hal_rx_mon_ppdu_info *ppdu_info)
+{
+ struct hal_rx_lsig_b_info *lsigb =
+ (struct hal_rx_lsig_b_info *)tlv_data;
+ u32 info0 = __le32_to_cpu(lsigb->info0);
+ u8 rate;
+
+ rate = u32_get_bits(info0, HAL_RX_LSIG_B_INFO_INFO0_RATE);
+ switch (rate) {
+ case 1:
+ rate = HAL_RX_LEGACY_RATE_1_MBPS;
+ break;
+ case 2:
+ case 5:
+ rate = HAL_RX_LEGACY_RATE_2_MBPS;
+ break;
+ case 3:
+ case 6:
+ rate = HAL_RX_LEGACY_RATE_5_5_MBPS;
+ break;
+ case 4:
+ case 7:
+ rate = HAL_RX_LEGACY_RATE_11_MBPS;
+ break;
+ default:
+ rate = HAL_RX_LEGACY_RATE_INVALID;
+ }
+
+ ppdu_info->rate = rate;
+ ppdu_info->cck_flag = 1;
+ ppdu_info->reception_type = HAL_RX_RECEPTION_TYPE_SU;
+}
+
+static void ath12k_dp_mon_parse_l_sig_a(u8 *tlv_data,
+ struct hal_rx_mon_ppdu_info *ppdu_info)
+{
+ struct hal_rx_lsig_a_info *lsiga =
+ (struct hal_rx_lsig_a_info *)tlv_data;
+ u32 info0 = __le32_to_cpu(lsiga->info0);
+ u8 rate;
+
+ rate = u32_get_bits(info0, HAL_RX_LSIG_A_INFO_INFO0_RATE);
+ switch (rate) {
+ case 8:
+ rate = HAL_RX_LEGACY_RATE_48_MBPS;
+ break;
+ case 9:
+ rate = HAL_RX_LEGACY_RATE_24_MBPS;
+ break;
+ case 10:
+ rate = HAL_RX_LEGACY_RATE_12_MBPS;
+ break;
+ case 11:
+ rate = HAL_RX_LEGACY_RATE_6_MBPS;
+ break;
+ case 12:
+ rate = HAL_RX_LEGACY_RATE_54_MBPS;
+ break;
+ case 13:
+ rate = HAL_RX_LEGACY_RATE_36_MBPS;
+ break;
+ case 14:
+ rate = HAL_RX_LEGACY_RATE_18_MBPS;
+ break;
+ case 15:
+ rate = HAL_RX_LEGACY_RATE_9_MBPS;
+ break;
+ default:
+ rate = HAL_RX_LEGACY_RATE_INVALID;
+ }
+
+ ppdu_info->rate = rate;
+ ppdu_info->reception_type = HAL_RX_RECEPTION_TYPE_SU;
+}
+
+static void ath12k_dp_mon_parse_he_sig_b2_ofdma(u8 *tlv_data,
+ struct hal_rx_mon_ppdu_info *ppdu_info)
+{
+ struct hal_rx_he_sig_b2_ofdma_info *he_sig_b2_ofdma =
+ (struct hal_rx_he_sig_b2_ofdma_info *)tlv_data;
+ u32 info0, value;
+
+ info0 = __le32_to_cpu(he_sig_b2_ofdma->info0);
+
+ ppdu_info->he_data1 |= HE_MCS_KNOWN | HE_DCM_KNOWN | HE_CODING_KNOWN;
+
+ /* HE-data2 */
+ ppdu_info->he_data2 |= HE_TXBF_KNOWN;
+
+ ppdu_info->mcs = u32_get_bits(info0, HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_MCS);
+ value = ppdu_info->mcs << HE_TRANSMIT_MCS_SHIFT;
+ ppdu_info->he_data3 |= value;
+
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_DCM);
+ value = value << HE_DCM_SHIFT;
+ ppdu_info->he_data3 |= value;
+
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_CODING);
+ ppdu_info->ldpc = value;
+ value = value << HE_CODING_SHIFT;
+ ppdu_info->he_data3 |= value;
+
+ /* HE-data4 */
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_ID);
+ value = value << HE_STA_ID_SHIFT;
+ ppdu_info->he_data4 |= value;
+
+ ppdu_info->nss = u32_get_bits(info0, HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_NSTS);
+ ppdu_info->beamformed = u32_get_bits(info0,
+ HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_TXBF);
+ ppdu_info->reception_type = HAL_RX_RECEPTION_TYPE_MU_OFDMA;
+}
+
+static void ath12k_dp_mon_parse_he_sig_b2_mu(u8 *tlv_data,
+ struct hal_rx_mon_ppdu_info *ppdu_info)
+{
+ struct hal_rx_he_sig_b2_mu_info *he_sig_b2_mu =
+ (struct hal_rx_he_sig_b2_mu_info *)tlv_data;
+ u32 info0, value;
+
+ info0 = __le32_to_cpu(he_sig_b2_mu->info0);
+
+ ppdu_info->he_data1 |= HE_MCS_KNOWN | HE_CODING_KNOWN;
+
+ ppdu_info->mcs = u32_get_bits(info0, HAL_RX_HE_SIG_B2_MU_INFO_INFO0_STA_MCS);
+ value = ppdu_info->mcs << HE_TRANSMIT_MCS_SHIFT;
+ ppdu_info->he_data3 |= value;
+
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_B2_MU_INFO_INFO0_STA_CODING);
+ ppdu_info->ldpc = value;
+ value = value << HE_CODING_SHIFT;
+ ppdu_info->he_data3 |= value;
+
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_B2_MU_INFO_INFO0_STA_ID);
+ value = value << HE_STA_ID_SHIFT;
+ ppdu_info->he_data4 |= value;
+
+ ppdu_info->nss = u32_get_bits(info0, HAL_RX_HE_SIG_B2_MU_INFO_INFO0_STA_NSTS);
+}
+
+static void ath12k_dp_mon_parse_he_sig_b1_mu(u8 *tlv_data,
+ struct hal_rx_mon_ppdu_info *ppdu_info)
+{
+ struct hal_rx_he_sig_b1_mu_info *he_sig_b1_mu =
+ (struct hal_rx_he_sig_b1_mu_info *)tlv_data;
+ u32 info0 = __le32_to_cpu(he_sig_b1_mu->info0);
+ u16 ru_tones;
+
+ ru_tones = u32_get_bits(info0,
+ HAL_RX_HE_SIG_B1_MU_INFO_INFO0_RU_ALLOCATION);
+ ppdu_info->ru_alloc = ath12k_he_ru_tones_to_nl80211_he_ru_alloc(ru_tones);
+ ppdu_info->he_RU[0] = ru_tones;
+ ppdu_info->reception_type = HAL_RX_RECEPTION_TYPE_MU_MIMO;
+}
+
+static void ath12k_dp_mon_parse_he_sig_mu(u8 *tlv_data,
+ struct hal_rx_mon_ppdu_info *ppdu_info)
+{
+ struct hal_rx_he_sig_a_mu_dl_info *he_sig_a_mu_dl =
+ (struct hal_rx_he_sig_a_mu_dl_info *)tlv_data;
+ u32 info0, info1, value;
+ u16 he_gi = 0, he_ltf = 0;
+
+ info0 = __le32_to_cpu(he_sig_a_mu_dl->info0);
+ info1 = __le32_to_cpu(he_sig_a_mu_dl->info1);
+
+ ppdu_info->he_mu_flags = 1;
+
+ ppdu_info->he_data1 = HE_MU_FORMAT_TYPE;
+ ppdu_info->he_data1 |=
+ HE_BSS_COLOR_KNOWN |
+ HE_DL_UL_KNOWN |
+ HE_LDPC_EXTRA_SYMBOL_KNOWN |
+ HE_STBC_KNOWN |
+ HE_DATA_BW_RU_KNOWN |
+ HE_DOPPLER_KNOWN;
+
+ ppdu_info->he_data2 =
+ HE_GI_KNOWN |
+ HE_LTF_SYMBOLS_KNOWN |
+ HE_PRE_FEC_PADDING_KNOWN |
+ HE_PE_DISAMBIGUITY_KNOWN |
+ HE_TXOP_KNOWN |
+ HE_MIDABLE_PERIODICITY_KNOWN;
+
+ /* data3 */
+ ppdu_info->he_data3 = u32_get_bits(info0, HAL_RX_HE_SIG_A_MU_DL_INFO0_BSS_COLOR);
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_MU_DL_INFO0_UL_FLAG);
+ value = value << HE_DL_UL_SHIFT;
+ ppdu_info->he_data3 |= value;
+
+ value = u32_get_bits(info1, HAL_RX_HE_SIG_A_MU_DL_INFO1_LDPC_EXTRA);
+ value = value << HE_LDPC_EXTRA_SYMBOL_SHIFT;
+ ppdu_info->he_data3 |= value;
+
+ value = u32_get_bits(info1, HAL_RX_HE_SIG_A_MU_DL_INFO1_STBC);
+ value = value << HE_STBC_SHIFT;
+ ppdu_info->he_data3 |= value;
+
+ /* data4 */
+ ppdu_info->he_data4 = u32_get_bits(info0,
+ HAL_RX_HE_SIG_A_MU_DL_INFO0_SPATIAL_REUSE);
+
+ /* data5 */
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_MU_DL_INFO0_TRANSMIT_BW);
+ ppdu_info->he_data5 = value;
+ ppdu_info->bw = value;
+
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_MU_DL_INFO0_CP_LTF_SIZE);
+ switch (value) {
+ case 0:
+ he_gi = HE_GI_0_8;
+ he_ltf = HE_LTF_4_X;
+ break;
+ case 1:
+ he_gi = HE_GI_0_8;
+ he_ltf = HE_LTF_2_X;
+ break;
+ case 2:
+ he_gi = HE_GI_1_6;
+ he_ltf = HE_LTF_2_X;
+ break;
+ case 3:
+ he_gi = HE_GI_3_2;
+ he_ltf = HE_LTF_4_X;
+ break;
+ }
+
+ ppdu_info->gi = he_gi;
+ value = he_gi << HE_GI_SHIFT;
+ ppdu_info->he_data5 |= value;
+
+ value = he_ltf << HE_LTF_SIZE_SHIFT;
+ ppdu_info->he_data5 |= value;
+
+ value = u32_get_bits(info1, HAL_RX_HE_SIG_A_MU_DL_INFO1_NUM_LTF_SYMB);
+ value = (value << HE_LTF_SYM_SHIFT);
+ ppdu_info->he_data5 |= value;
+
+ value = u32_get_bits(info1, HAL_RX_HE_SIG_A_MU_DL_INFO1_PKT_EXT_FACTOR);
+ value = value << HE_PRE_FEC_PAD_SHIFT;
+ ppdu_info->he_data5 |= value;
+
+ value = u32_get_bits(info1, HAL_RX_HE_SIG_A_MU_DL_INFO1_PKT_EXT_PE_DISAM);
+ value = value << HE_PE_DISAMBIGUITY_SHIFT;
+ ppdu_info->he_data5 |= value;
+
+ /*data6*/
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_MU_DL_INFO0_DOPPLER_INDICATION);
+ value = value << HE_DOPPLER_SHIFT;
+ ppdu_info->he_data6 |= value;
+
+ value = u32_get_bits(info1, HAL_RX_HE_SIG_A_MU_DL_INFO1_TXOP_DURATION);
+ value = value << HE_TXOP_SHIFT;
+ ppdu_info->he_data6 |= value;
+
+ /* HE-MU Flags */
+ /* HE-MU-flags1 */
+ ppdu_info->he_flags1 =
+ HE_SIG_B_MCS_KNOWN |
+ HE_SIG_B_DCM_KNOWN |
+ HE_SIG_B_COMPRESSION_FLAG_1_KNOWN |
+ HE_SIG_B_SYM_NUM_KNOWN |
+ HE_RU_0_KNOWN;
+
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_MU_DL_INFO0_MCS_OF_SIGB);
+ ppdu_info->he_flags1 |= value;
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_MU_DL_INFO0_DCM_OF_SIGB);
+ value = value << HE_DCM_FLAG_1_SHIFT;
+ ppdu_info->he_flags1 |= value;
+
+ /* HE-MU-flags2 */
+ ppdu_info->he_flags2 = HE_BW_KNOWN;
+
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_MU_DL_INFO0_TRANSMIT_BW);
+ ppdu_info->he_flags2 |= value;
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_MU_DL_INFO0_COMP_MODE_SIGB);
+ value = value << HE_SIG_B_COMPRESSION_FLAG_2_SHIFT;
+ ppdu_info->he_flags2 |= value;
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_MU_DL_INFO0_NUM_SIGB_SYMB);
+ value = value - 1;
+ value = value << HE_NUM_SIG_B_SYMBOLS_SHIFT;
+ ppdu_info->he_flags2 |= value;
+
+ ppdu_info->is_stbc = info1 &
+ HAL_RX_HE_SIG_A_MU_DL_INFO1_STBC;
+ ppdu_info->reception_type = HAL_RX_RECEPTION_TYPE_MU_MIMO;
+}
+
+static void ath12k_dp_mon_parse_he_sig_su(u8 *tlv_data,
+ struct hal_rx_mon_ppdu_info *ppdu_info)
+{
+ struct hal_rx_he_sig_a_su_info *he_sig_a =
+ (struct hal_rx_he_sig_a_su_info *)tlv_data;
+ u32 info0, info1, value;
+ u32 dcm;
+ u8 he_dcm = 0, he_stbc = 0;
+ u16 he_gi = 0, he_ltf = 0;
+
+ ppdu_info->he_flags = 1;
+
+ info0 = __le32_to_cpu(he_sig_a->info0);
+ info1 = __le32_to_cpu(he_sig_a->info1);
+
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_SU_INFO_INFO0_FORMAT_IND);
+ if (value == 0)
+ ppdu_info->he_data1 = HE_TRIG_FORMAT_TYPE;
+ else
+ ppdu_info->he_data1 = HE_SU_FORMAT_TYPE;
+
+ ppdu_info->he_data1 |=
+ HE_BSS_COLOR_KNOWN |
+ HE_BEAM_CHANGE_KNOWN |
+ HE_DL_UL_KNOWN |
+ HE_MCS_KNOWN |
+ HE_DCM_KNOWN |
+ HE_CODING_KNOWN |
+ HE_LDPC_EXTRA_SYMBOL_KNOWN |
+ HE_STBC_KNOWN |
+ HE_DATA_BW_RU_KNOWN |
+ HE_DOPPLER_KNOWN;
+
+ ppdu_info->he_data2 |=
+ HE_GI_KNOWN |
+ HE_TXBF_KNOWN |
+ HE_PE_DISAMBIGUITY_KNOWN |
+ HE_TXOP_KNOWN |
+ HE_LTF_SYMBOLS_KNOWN |
+ HE_PRE_FEC_PADDING_KNOWN |
+ HE_MIDABLE_PERIODICITY_KNOWN;
+
+ ppdu_info->he_data3 = u32_get_bits(info0,
+ HAL_RX_HE_SIG_A_SU_INFO_INFO0_BSS_COLOR);
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_SU_INFO_INFO0_BEAM_CHANGE);
+ value = value << HE_BEAM_CHANGE_SHIFT;
+ ppdu_info->he_data3 |= value;
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_SU_INFO_INFO0_DL_UL_FLAG);
+ value = value << HE_DL_UL_SHIFT;
+ ppdu_info->he_data3 |= value;
+
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_SU_INFO_INFO0_TRANSMIT_MCS);
+ ppdu_info->mcs = value;
+ value = value << HE_TRANSMIT_MCS_SHIFT;
+ ppdu_info->he_data3 |= value;
+
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_SU_INFO_INFO0_DCM);
+ he_dcm = value;
+ value = value << HE_DCM_SHIFT;
+ ppdu_info->he_data3 |= value;
+ value = u32_get_bits(info1, HAL_RX_HE_SIG_A_SU_INFO_INFO1_CODING);
+ value = value << HE_CODING_SHIFT;
+ ppdu_info->he_data3 |= value;
+ value = u32_get_bits(info1, HAL_RX_HE_SIG_A_SU_INFO_INFO1_LDPC_EXTRA);
+ value = value << HE_LDPC_EXTRA_SYMBOL_SHIFT;
+ ppdu_info->he_data3 |= value;
+ value = u32_get_bits(info1, HAL_RX_HE_SIG_A_SU_INFO_INFO1_STBC);
+ he_stbc = value;
+ value = value << HE_STBC_SHIFT;
+ ppdu_info->he_data3 |= value;
+
+ /* data4 */
+ ppdu_info->he_data4 = u32_get_bits(info0,
+ HAL_RX_HE_SIG_A_SU_INFO_INFO0_SPATIAL_REUSE);
+
+ /* data5 */
+ value = u32_get_bits(info0,
+ HAL_RX_HE_SIG_A_SU_INFO_INFO0_TRANSMIT_BW);
+ ppdu_info->he_data5 = value;
+ ppdu_info->bw = value;
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_SU_INFO_INFO0_CP_LTF_SIZE);
+ switch (value) {
+ case 0:
+ he_gi = HE_GI_0_8;
+ he_ltf = HE_LTF_1_X;
+ break;
+ case 1:
+ he_gi = HE_GI_0_8;
+ he_ltf = HE_LTF_2_X;
+ break;
+ case 2:
+ he_gi = HE_GI_1_6;
+ he_ltf = HE_LTF_2_X;
+ break;
+ case 3:
+ if (he_dcm && he_stbc) {
+ he_gi = HE_GI_0_8;
+ he_ltf = HE_LTF_4_X;
+ } else {
+ he_gi = HE_GI_3_2;
+ he_ltf = HE_LTF_4_X;
+ }
+ break;
+ }
+ ppdu_info->gi = he_gi;
+ value = he_gi << HE_GI_SHIFT;
+ ppdu_info->he_data5 |= value;
+ value = he_ltf << HE_LTF_SIZE_SHIFT;
+ ppdu_info->ltf_size = he_ltf;
+ ppdu_info->he_data5 |= value;
+
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_SU_INFO_INFO0_NSTS);
+ value = (value << HE_LTF_SYM_SHIFT);
+ ppdu_info->he_data5 |= value;
+
+ value = u32_get_bits(info1, HAL_RX_HE_SIG_A_SU_INFO_INFO1_PKT_EXT_FACTOR);
+ value = value << HE_PRE_FEC_PAD_SHIFT;
+ ppdu_info->he_data5 |= value;
+
+ value = u32_get_bits(info1, HAL_RX_HE_SIG_A_SU_INFO_INFO1_TXBF);
+ value = value << HE_TXBF_SHIFT;
+ ppdu_info->he_data5 |= value;
+ value = u32_get_bits(info1, HAL_RX_HE_SIG_A_SU_INFO_INFO1_PKT_EXT_PE_DISAM);
+ value = value << HE_PE_DISAMBIGUITY_SHIFT;
+ ppdu_info->he_data5 |= value;
+
+ /* data6 */
+ value = u32_get_bits(info0, HAL_RX_HE_SIG_A_SU_INFO_INFO0_NSTS);
+ value++;
+ ppdu_info->he_data6 = value;
+ value = u32_get_bits(info1, HAL_RX_HE_SIG_A_SU_INFO_INFO1_DOPPLER_IND);
+ value = value << HE_DOPPLER_SHIFT;
+ ppdu_info->he_data6 |= value;
+ value = u32_get_bits(info1, HAL_RX_HE_SIG_A_SU_INFO_INFO1_TXOP_DURATION);
+ value = value << HE_TXOP_SHIFT;
+ ppdu_info->he_data6 |= value;
+
+ ppdu_info->mcs =
+ u32_get_bits(info0, HAL_RX_HE_SIG_A_SU_INFO_INFO0_TRANSMIT_MCS);
+ ppdu_info->bw =
+ u32_get_bits(info0, HAL_RX_HE_SIG_A_SU_INFO_INFO0_TRANSMIT_BW);
+ ppdu_info->ldpc = u32_get_bits(info1, HAL_RX_HE_SIG_A_SU_INFO_INFO1_CODING);
+ ppdu_info->is_stbc = u32_get_bits(info1, HAL_RX_HE_SIG_A_SU_INFO_INFO1_STBC);
+ ppdu_info->beamformed = u32_get_bits(info1, HAL_RX_HE_SIG_A_SU_INFO_INFO1_TXBF);
+ dcm = u32_get_bits(info0, HAL_RX_HE_SIG_A_SU_INFO_INFO0_DCM);
+ ppdu_info->nss = u32_get_bits(info0, HAL_RX_HE_SIG_A_SU_INFO_INFO0_NSTS);
+ ppdu_info->dcm = dcm;
+ ppdu_info->reception_type = HAL_RX_RECEPTION_TYPE_SU;
+}
+
+static enum hal_rx_mon_status
+ath12k_dp_mon_rx_parse_status_tlv(struct ath12k_base *ab,
+ struct ath12k_mon_data *pmon,
+ u32 tlv_tag, u8 *tlv_data, u32 userid)
+{
+ struct hal_rx_mon_ppdu_info *ppdu_info = &pmon->mon_ppdu_info;
+ u32 info[7];
+
+ switch (tlv_tag) {
+ case HAL_RX_PPDU_START: {
+ struct hal_rx_ppdu_start *ppdu_start =
+ (struct hal_rx_ppdu_start *)tlv_data;
+
+ info[0] = __le32_to_cpu(ppdu_start->info0);
+
+ ppdu_info->ppdu_id =
+ u32_get_bits(info[0], HAL_RX_PPDU_START_INFO0_PPDU_ID);
+ ppdu_info->chan_num = __le32_to_cpu(ppdu_start->chan_num);
+ ppdu_info->ppdu_ts = __le32_to_cpu(ppdu_start->ppdu_start_ts);
+
+ if (ppdu_info->ppdu_id != ppdu_info->last_ppdu_id) {
+ ppdu_info->last_ppdu_id = ppdu_info->ppdu_id;
+ ppdu_info->num_users = 0;
+ memset(&ppdu_info->mpdu_fcs_ok_bitmap, 0,
+ HAL_RX_NUM_WORDS_PER_PPDU_BITMAP *
+ sizeof(ppdu_info->mpdu_fcs_ok_bitmap[0]));
+ }
+ break;
+ }
+ case HAL_RX_PPDU_END_USER_STATS: {
+ struct hal_rx_ppdu_end_user_stats *eu_stats =
+ (struct hal_rx_ppdu_end_user_stats *)tlv_data;
+
+ info[0] = __le32_to_cpu(eu_stats->info0);
+ info[1] = __le32_to_cpu(eu_stats->info1);
+ info[2] = __le32_to_cpu(eu_stats->info2);
+ info[4] = __le32_to_cpu(eu_stats->info4);
+ info[5] = __le32_to_cpu(eu_stats->info5);
+ info[6] = __le32_to_cpu(eu_stats->info6);
+
+ ppdu_info->ast_index =
+ u32_get_bits(info[2], HAL_RX_PPDU_END_USER_STATS_INFO2_AST_INDEX);
+ ppdu_info->fc_valid =
+ u32_get_bits(info[1], HAL_RX_PPDU_END_USER_STATS_INFO1_FC_VALID);
+ ppdu_info->tid =
+ ffs(u32_get_bits(info[6],
+ HAL_RX_PPDU_END_USER_STATS_INFO6_TID_BITMAP)) - 1;
+ ppdu_info->tcp_msdu_count =
+ u32_get_bits(info[4],
+ HAL_RX_PPDU_END_USER_STATS_INFO4_TCP_MSDU_CNT);
+ ppdu_info->udp_msdu_count =
+ u32_get_bits(info[4],
+ HAL_RX_PPDU_END_USER_STATS_INFO4_UDP_MSDU_CNT);
+ ppdu_info->other_msdu_count =
+ u32_get_bits(info[5],
+ HAL_RX_PPDU_END_USER_STATS_INFO5_OTHER_MSDU_CNT);
+ ppdu_info->tcp_ack_msdu_count =
+ u32_get_bits(info[5],
+ HAL_RX_PPDU_END_USER_STATS_INFO5_TCP_ACK_MSDU_CNT);
+ ppdu_info->preamble_type =
+ u32_get_bits(info[1],
+ HAL_RX_PPDU_END_USER_STATS_INFO1_PKT_TYPE);
+ ppdu_info->num_mpdu_fcs_ok =
+ u32_get_bits(info[1],
+ HAL_RX_PPDU_END_USER_STATS_INFO1_MPDU_CNT_FCS_OK);
+ ppdu_info->num_mpdu_fcs_err =
+ u32_get_bits(info[0],
+ HAL_RX_PPDU_END_USER_STATS_INFO0_MPDU_CNT_FCS_ERR);
+ switch (ppdu_info->preamble_type) {
+ case HAL_RX_PREAMBLE_11N:
+ ppdu_info->ht_flags = 1;
+ break;
+ case HAL_RX_PREAMBLE_11AC:
+ ppdu_info->vht_flags = 1;
+ break;
+ case HAL_RX_PREAMBLE_11AX:
+ ppdu_info->he_flags = 1;
+ break;
+ default:
+ break;
+ }
+
+ if (userid < HAL_MAX_UL_MU_USERS) {
+ struct hal_rx_user_status *rxuser_stats =
+ &ppdu_info->userstats[userid];
+ ppdu_info->num_users += 1;
+
+ ath12k_dp_mon_rx_handle_ofdma_info(tlv_data, rxuser_stats);
+ ath12k_dp_mon_rx_populate_mu_user_info(tlv_data, ppdu_info,
+ rxuser_stats);
+ }
+ ppdu_info->mpdu_fcs_ok_bitmap[0] = __le32_to_cpu(eu_stats->rsvd1[0]);
+ ppdu_info->mpdu_fcs_ok_bitmap[1] = __le32_to_cpu(eu_stats->rsvd1[1]);
+ break;
+ }
+ case HAL_RX_PPDU_END_USER_STATS_EXT: {
+ struct hal_rx_ppdu_end_user_stats_ext *eu_stats =
+ (struct hal_rx_ppdu_end_user_stats_ext *)tlv_data;
+ ppdu_info->mpdu_fcs_ok_bitmap[2] = __le32_to_cpu(eu_stats->info1);
+ ppdu_info->mpdu_fcs_ok_bitmap[3] = __le32_to_cpu(eu_stats->info2);
+ ppdu_info->mpdu_fcs_ok_bitmap[4] = __le32_to_cpu(eu_stats->info3);
+ ppdu_info->mpdu_fcs_ok_bitmap[5] = __le32_to_cpu(eu_stats->info4);
+ ppdu_info->mpdu_fcs_ok_bitmap[6] = __le32_to_cpu(eu_stats->info5);
+ ppdu_info->mpdu_fcs_ok_bitmap[7] = __le32_to_cpu(eu_stats->info6);
+ break;
+ }
+ case HAL_PHYRX_HT_SIG:
+ ath12k_dp_mon_parse_ht_sig(tlv_data, ppdu_info);
+ break;
+
+ case HAL_PHYRX_L_SIG_B:
+ ath12k_dp_mon_parse_l_sig_b(tlv_data, ppdu_info);
+ break;
+
+ case HAL_PHYRX_L_SIG_A:
+ ath12k_dp_mon_parse_l_sig_a(tlv_data, ppdu_info);
+ break;
+
+ case HAL_PHYRX_VHT_SIG_A:
+ ath12k_dp_mon_parse_vht_sig_a(tlv_data, ppdu_info);
+ break;
+
+ case HAL_PHYRX_HE_SIG_A_SU:
+ ath12k_dp_mon_parse_he_sig_su(tlv_data, ppdu_info);
+ break;
+
+ case HAL_PHYRX_HE_SIG_A_MU_DL:
+ ath12k_dp_mon_parse_he_sig_mu(tlv_data, ppdu_info);
+ break;
+
+ case HAL_PHYRX_HE_SIG_B1_MU:
+ ath12k_dp_mon_parse_he_sig_b1_mu(tlv_data, ppdu_info);
+ break;
+
+ case HAL_PHYRX_HE_SIG_B2_MU:
+ ath12k_dp_mon_parse_he_sig_b2_mu(tlv_data, ppdu_info);
+ break;
+
+ case HAL_PHYRX_HE_SIG_B2_OFDMA:
+ ath12k_dp_mon_parse_he_sig_b2_ofdma(tlv_data, ppdu_info);
+ break;
+
+ case HAL_PHYRX_RSSI_LEGACY: {
+ struct hal_rx_phyrx_rssi_legacy_info *rssi =
+ (struct hal_rx_phyrx_rssi_legacy_info *)tlv_data;
+ u32 reception_type = 0;
+ u32 rssi_legacy_info = __le32_to_cpu(rssi->rsvd[0]);
+
+ info[0] = __le32_to_cpu(rssi->info0);
+
+ /* TODO: Please note that the combined rssi will not be accurate
+ * in the MU case. RSSI in MU needs to be retrieved from
+ * PHYRX_OTHER_RECEIVE_INFO TLV.
+ */
+ ppdu_info->rssi_comb =
+ u32_get_bits(info[0],
+ HAL_RX_PHYRX_RSSI_LEGACY_INFO_INFO0_RSSI_COMB);
+ reception_type =
+ u32_get_bits(rssi_legacy_info,
+ HAL_RX_PHYRX_RSSI_LEGACY_INFO_RSVD1_RECEPTION);
+
+ switch (reception_type) {
+ case HAL_RECEPTION_TYPE_ULOFMDA:
+ ppdu_info->reception_type = HAL_RX_RECEPTION_TYPE_MU_OFDMA;
+ break;
+ case HAL_RECEPTION_TYPE_ULMIMO:
+ ppdu_info->reception_type = HAL_RX_RECEPTION_TYPE_MU_MIMO;
+ break;
+ default:
+ ppdu_info->reception_type = HAL_RX_RECEPTION_TYPE_SU;
+ break;
+ }
+ break;
+ }
+ case HAL_RXPCU_PPDU_END_INFO: {
+ struct hal_rx_ppdu_end_duration *ppdu_rx_duration =
+ (struct hal_rx_ppdu_end_duration *)tlv_data;
+
+ info[0] = __le32_to_cpu(ppdu_rx_duration->info0);
+ ppdu_info->rx_duration =
+ u32_get_bits(info[0], HAL_RX_PPDU_END_DURATION);
+ ppdu_info->tsft = __le32_to_cpu(ppdu_rx_duration->rsvd0[1]);
+ ppdu_info->tsft = (ppdu_info->tsft << 32) |
+ __le32_to_cpu(ppdu_rx_duration->rsvd0[0]);
+ break;
+ }
+ case HAL_RX_MPDU_START: {
+ struct hal_rx_mpdu_start *mpdu_start =
+ (struct hal_rx_mpdu_start *)tlv_data;
+ struct dp_mon_mpdu *mon_mpdu = pmon->mon_mpdu;
+ u16 peer_id;
+
+ info[1] = __le32_to_cpu(mpdu_start->info1);
+ peer_id = u32_get_bits(info[1], HAL_RX_MPDU_START_INFO1_PEERID);
+ if (peer_id)
+ ppdu_info->peer_id = peer_id;
+
+ ppdu_info->mpdu_len += u32_get_bits(info[1],
+ HAL_RX_MPDU_START_INFO2_MPDU_LEN);
+ if (userid < HAL_MAX_UL_MU_USERS) {
+ info[0] = __le32_to_cpu(mpdu_start->info0);
+ ppdu_info->userid = userid;
+ ppdu_info->ampdu_id[userid] =
+ u32_get_bits(info[0], HAL_RX_MPDU_START_INFO1_PEERID);
+ }
+
+ mon_mpdu = kzalloc(sizeof(*mon_mpdu), GFP_ATOMIC);
+ if (!mon_mpdu)
+ return HAL_RX_MON_STATUS_PPDU_NOT_DONE;
+
+ /* Track the newly started MPDU so the BUF_ADDR/MSDU_END/MPDU_END
+ * TLVs below chain MSDUs onto it rather than onto a stale pointer.
+ */
+ pmon->mon_mpdu = mon_mpdu;
+
+ break;
+ }
+ case HAL_RX_MSDU_START:
+ /* TODO: add msdu start parsing logic */
+ break;
+ case HAL_MON_BUF_ADDR: {
+ struct dp_rxdma_ring *buf_ring = &ab->dp.rxdma_mon_buf_ring;
+ struct dp_mon_packet_info *packet_info =
+ (struct dp_mon_packet_info *)tlv_data;
+ int buf_id = u32_get_bits(packet_info->cookie,
+ DP_RXDMA_BUF_COOKIE_BUF_ID);
+ struct sk_buff *msdu;
+ struct dp_mon_mpdu *mon_mpdu = pmon->mon_mpdu;
+ struct ath12k_skb_rxcb *rxcb;
+
+ spin_lock_bh(&buf_ring->idr_lock);
+ msdu = idr_remove(&buf_ring->bufs_idr, buf_id);
+ spin_unlock_bh(&buf_ring->idr_lock);
+
+ if (unlikely(!msdu)) {
+ ath12k_warn(ab, "monitor destination with invalid buf_id %d\n",
+ buf_id);
+ return HAL_RX_MON_STATUS_PPDU_NOT_DONE;
+ }
+
+ rxcb = ATH12K_SKB_RXCB(msdu);
+ dma_unmap_single(ab->dev, rxcb->paddr,
+ msdu->len + skb_tailroom(msdu),
+ DMA_FROM_DEVICE);
+
+ if (mon_mpdu->tail)
+ mon_mpdu->tail->next = msdu;
+ else
+ mon_mpdu->tail = msdu;
+
+ ath12k_dp_mon_buf_replenish(ab, buf_ring, 1);
+
+ break;
+ }
+ case HAL_RX_MSDU_END: {
+ struct rx_msdu_end_qcn9274 *msdu_end =
+ (struct rx_msdu_end_qcn9274 *)tlv_data;
+ bool is_first_msdu_in_mpdu;
+ u16 msdu_end_info;
+
+ msdu_end_info = __le16_to_cpu(msdu_end->info5);
+ is_first_msdu_in_mpdu = u32_get_bits(msdu_end_info,
+ RX_MSDU_END_INFO5_FIRST_MSDU);
+ if (is_first_msdu_in_mpdu) {
+ pmon->mon_mpdu->head = pmon->mon_mpdu->tail;
+ pmon->mon_mpdu->tail = NULL;
+ }
+ break;
+ }
+ case HAL_RX_MPDU_END:
+ list_add_tail(&pmon->mon_mpdu->list, &pmon->dp_rx_mon_mpdu_list);
+ break;
+ case HAL_DUMMY:
+ return HAL_RX_MON_STATUS_BUF_DONE;
+ case HAL_RX_PPDU_END_STATUS_DONE:
+ case 0:
+ return HAL_RX_MON_STATUS_PPDU_DONE;
+ default:
+ break;
+ }
+
+ return HAL_RX_MON_STATUS_PPDU_NOT_DONE;
+}
+
+static void ath12k_dp_mon_rx_msdus_set_payload(struct ath12k *ar, struct sk_buff *msdu)
+{
+ u32 rx_pkt_offset, l2_hdr_offset;
+
+ rx_pkt_offset = ar->ab->hw_params->hal_desc_sz;
+ l2_hdr_offset = ath12k_dp_rx_h_l3pad(ar->ab,
+ (struct hal_rx_desc *)msdu->data);
+ skb_pull(msdu, rx_pkt_offset + l2_hdr_offset);
+}
+
+static struct sk_buff *
+ath12k_dp_mon_rx_merg_msdus(struct ath12k *ar,
+ u32 mac_id, struct sk_buff *head_msdu,
+ struct ieee80211_rx_status *rxs, bool *fcs_err)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct sk_buff *msdu, *mpdu_buf, *prev_buf;
+ struct hal_rx_desc *rx_desc;
+ u8 *hdr_desc, *dest, decap_format;
+ struct ieee80211_hdr_3addr *wh;
+ u32 err_bitmap;
+
+ mpdu_buf = NULL;
+
+ if (!head_msdu)
+ goto err_merge_fail;
+
+ rx_desc = (struct hal_rx_desc *)head_msdu->data;
+ err_bitmap = ath12k_dp_rx_h_mpdu_err(ab, rx_desc);
+
+ if (err_bitmap & HAL_RX_MPDU_ERR_FCS)
+ *fcs_err = true;
+
+ decap_format = ath12k_dp_rx_h_decap_type(ab, rx_desc);
+
+ ath12k_dp_rx_h_ppdu(ar, rx_desc, rxs);
+
+ if (decap_format == DP_RX_DECAP_TYPE_RAW) {
+ ath12k_dp_mon_rx_msdus_set_payload(ar, head_msdu);
+
+ prev_buf = head_msdu;
+ msdu = head_msdu->next;
+
+ while (msdu) {
+ ath12k_dp_mon_rx_msdus_set_payload(ar, msdu);
+
+ prev_buf = msdu;
+ msdu = msdu->next;
+ }
+
+ prev_buf->next = NULL;
+
+ skb_trim(prev_buf, prev_buf->len - HAL_RX_FCS_LEN);
+ } else if (decap_format == DP_RX_DECAP_TYPE_NATIVE_WIFI) {
+ u8 qos_pkt = 0;
+
+ rx_desc = (struct hal_rx_desc *)head_msdu->data;
+ hdr_desc = ab->hw_params->hal_ops->rx_desc_get_msdu_payload(rx_desc);
+
+ /* Base size */
+ wh = (struct ieee80211_hdr_3addr *)hdr_desc;
+
+ if (ieee80211_is_data_qos(wh->frame_control))
+ qos_pkt = 1;
+
+ msdu = head_msdu;
+
+ while (msdu) {
+ ath12k_dp_mon_rx_msdus_set_payload(ar, msdu);
+ if (qos_pkt) {
+ dest = skb_push(msdu, sizeof(__le16));
+ if (!dest)
+ goto err_merge_fail;
+ memcpy(dest, hdr_desc, sizeof(struct ieee80211_qos_hdr));
+ }
+ prev_buf = msdu;
+ msdu = msdu->next;
+ }
+ dest = skb_put(prev_buf, HAL_RX_FCS_LEN);
+ if (!dest)
+ goto err_merge_fail;
+
+ ath12k_dbg(ab, ATH12K_DBG_DATA,
+ "mpdu_buf %pK mpdu_buf->len %u",
+ prev_buf, prev_buf->len);
+ } else {
+ ath12k_dbg(ab, ATH12K_DBG_DATA,
+ "decap format %d is not supported!\n",
+ decap_format);
+ goto err_merge_fail;
+ }
+
+ return head_msdu;
+
+err_merge_fail:
+ if (mpdu_buf && decap_format != DP_RX_DECAP_TYPE_RAW) {
+ ath12k_dbg(ab, ATH12K_DBG_DATA,
+ "err_merge_fail mpdu_buf %pK", mpdu_buf);
+ /* Free the head buffer */
+ dev_kfree_skb_any(mpdu_buf);
+ }
+ return NULL;
+}
+
+static void
+ath12k_dp_mon_rx_update_radiotap_he(struct hal_rx_mon_ppdu_info *rx_status,
+ u8 *rtap_buf)
+{
+ u32 rtap_len = 0;
+
+ put_unaligned_le16(rx_status->he_data1, &rtap_buf[rtap_len]);
+ rtap_len += 2;
+
+ put_unaligned_le16(rx_status->he_data2, &rtap_buf[rtap_len]);
+ rtap_len += 2;
+
+ put_unaligned_le16(rx_status->he_data3, &rtap_buf[rtap_len]);
+ rtap_len += 2;
+
+ put_unaligned_le16(rx_status->he_data4, &rtap_buf[rtap_len]);
+ rtap_len += 2;
+
+ put_unaligned_le16(rx_status->he_data5, &rtap_buf[rtap_len]);
+ rtap_len += 2;
+
+ put_unaligned_le16(rx_status->he_data6, &rtap_buf[rtap_len]);
+}
+
+static void
+ath12k_dp_mon_rx_update_radiotap_he_mu(struct hal_rx_mon_ppdu_info *rx_status,
+ u8 *rtap_buf)
+{
+ u32 rtap_len = 0;
+
+ put_unaligned_le16(rx_status->he_flags1, &rtap_buf[rtap_len]);
+ rtap_len += 2;
+
+ put_unaligned_le16(rx_status->he_flags2, &rtap_buf[rtap_len]);
+ rtap_len += 2;
+
+ rtap_buf[rtap_len] = rx_status->he_RU[0];
+ rtap_len += 1;
+
+ rtap_buf[rtap_len] = rx_status->he_RU[1];
+ rtap_len += 1;
+
+ rtap_buf[rtap_len] = rx_status->he_RU[2];
+ rtap_len += 1;
+
+ rtap_buf[rtap_len] = rx_status->he_RU[3];
+}
+
+static void ath12k_dp_mon_update_radiotap(struct ath12k *ar,
+ struct hal_rx_mon_ppdu_info *ppduinfo,
+ struct sk_buff *mon_skb,
+ struct ieee80211_rx_status *rxs)
+{
+ struct ieee80211_supported_band *sband;
+ u8 *ptr = NULL;
+ u16 ampdu_id = ppduinfo->ampdu_id[ppduinfo->userid];
+
+ rxs->flag |= RX_FLAG_MACTIME_START;
+ rxs->signal = ppduinfo->rssi_comb + ATH12K_DEFAULT_NOISE_FLOOR;
+ rxs->nss = ppduinfo->nss + 1;
+
+ if (ampdu_id) {
+ rxs->flag |= RX_FLAG_AMPDU_DETAILS;
+ rxs->ampdu_reference = ampdu_id;
+ }
+
+ if (ppduinfo->he_mu_flags) {
+ rxs->flag |= RX_FLAG_RADIOTAP_HE_MU;
+ rxs->encoding = RX_ENC_HE;
+ ptr = skb_push(mon_skb, sizeof(struct ieee80211_radiotap_he_mu));
+ ath12k_dp_mon_rx_update_radiotap_he_mu(ppduinfo, ptr);
+ } else if (ppduinfo->he_flags) {
+ rxs->flag |= RX_FLAG_RADIOTAP_HE;
+ rxs->encoding = RX_ENC_HE;
+ ptr = skb_push(mon_skb, sizeof(struct ieee80211_radiotap_he));
+ ath12k_dp_mon_rx_update_radiotap_he(ppduinfo, ptr);
+ rxs->rate_idx = ppduinfo->rate;
+ } else if (ppduinfo->vht_flags) {
+ rxs->encoding = RX_ENC_VHT;
+ rxs->rate_idx = ppduinfo->rate;
+ } else if (ppduinfo->ht_flags) {
+ rxs->encoding = RX_ENC_HT;
+ rxs->rate_idx = ppduinfo->rate;
+ } else {
+ rxs->encoding = RX_ENC_LEGACY;
+ sband = &ar->mac.sbands[rxs->band];
+ rxs->rate_idx = ath12k_mac_hw_rate_to_idx(sband, ppduinfo->rate,
+ ppduinfo->cck_flag);
+ }
+
+ rxs->mactime = ppduinfo->tsft;
+}
+
+static void ath12k_dp_mon_rx_deliver_msdu(struct ath12k *ar, struct napi_struct *napi,
+ struct sk_buff *msdu,
+ struct ieee80211_rx_status *status)
+{
+ static const struct ieee80211_radiotap_he known = {
+ .data1 = cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA1_DATA_MCS_KNOWN |
+ IEEE80211_RADIOTAP_HE_DATA1_BW_RU_ALLOC_KNOWN),
+ .data2 = cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA2_GI_KNOWN),
+ };
+ struct ieee80211_rx_status *rx_status;
+ struct ieee80211_radiotap_he *he = NULL;
+ struct ieee80211_sta *pubsta = NULL;
+ struct ath12k_peer *peer;
+ struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+ u8 decap = DP_RX_DECAP_TYPE_RAW;
+ bool is_mcbc = rxcb->is_mcbc;
+ bool is_eapol_tkip = rxcb->is_eapol;
+
+ if ((status->encoding == RX_ENC_HE) && !(status->flag & RX_FLAG_RADIOTAP_HE) &&
+ !(status->flag & RX_FLAG_SKIP_MONITOR)) {
+ he = skb_push(msdu, sizeof(known));
+ memcpy(he, &known, sizeof(known));
+ status->flag |= RX_FLAG_RADIOTAP_HE;
+ }
+
+ if (!(status->flag & RX_FLAG_ONLY_MONITOR))
+ decap = ath12k_dp_rx_h_decap_type(ar->ab, rxcb->rx_desc);
+ spin_lock_bh(&ar->ab->base_lock);
+ peer = ath12k_dp_rx_h_find_peer(ar->ab, msdu);
+ if (peer && peer->sta)
+ pubsta = peer->sta;
+ spin_unlock_bh(&ar->ab->base_lock);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_DATA,
+ "rx skb %pK len %u peer %pM %u %s %s%s%s%s%s%s%s %srate_idx %u vht_nss %u freq %u band %u flag 0x%x fcs-err %i mic-err %i amsdu-more %i\n",
+ msdu,
+ msdu->len,
+ peer ? peer->addr : NULL,
+ rxcb->tid,
+ (is_mcbc) ? "mcast" : "ucast",
+ (status->encoding == RX_ENC_LEGACY) ? "legacy" : "",
+ (status->encoding == RX_ENC_HT) ? "ht" : "",
+ (status->encoding == RX_ENC_VHT) ? "vht" : "",
+ (status->encoding == RX_ENC_HE) ? "he" : "",
+ (status->bw == RATE_INFO_BW_40) ? "40" : "",
+ (status->bw == RATE_INFO_BW_80) ? "80" : "",
+ (status->bw == RATE_INFO_BW_160) ? "160" : "",
+ status->enc_flags & RX_ENC_FLAG_SHORT_GI ? "sgi " : "",
+ status->rate_idx,
+ status->nss,
+ status->freq,
+ status->band, status->flag,
+ !!(status->flag & RX_FLAG_FAILED_FCS_CRC),
+ !!(status->flag & RX_FLAG_MMIC_ERROR),
+ !!(status->flag & RX_FLAG_AMSDU_MORE));
+
+ ath12k_dbg_dump(ar->ab, ATH12K_DBG_DP_RX, NULL, "dp rx msdu: ",
+ msdu->data, msdu->len);
+ rx_status = IEEE80211_SKB_RXCB(msdu);
+ *rx_status = *status;
+
+ /* TODO: trace rx packet */
+
+ /* PN for multicast packets is not validated in HW,
+ * so skip the 802.3 rx path.
+ * Also, fast_rx expects the STA to be authorized, hence
+ * EAPOL packets are sent via the slow path.
+ */
+ if (decap == DP_RX_DECAP_TYPE_ETHERNET2_DIX && !is_eapol_tkip &&
+ !(is_mcbc && rx_status->flag & RX_FLAG_DECRYPTED))
+ rx_status->flag |= RX_FLAG_8023;
+
+ ieee80211_rx_napi(ar->hw, pubsta, msdu, napi);
+}
+
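+/* Merge the MSDU chain into monitor skbs and hand each one up,
+ * setting RX_FLAG_AMSDU_MORE on every skb except the last in the
+ * chain and tagging all of them RX_FLAG_ONLY_MONITOR.
+ */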
+static int ath12k_dp_mon_rx_deliver(struct ath12k *ar, u32 mac_id,
+ struct sk_buff *head_msdu,
+ struct hal_rx_mon_ppdu_info *ppduinfo,
+ struct napi_struct *napi)
+{
+ struct ath12k_pdev_dp *dp = &ar->dp;
+ struct sk_buff *mon_skb, *skb_next, *header;
+ struct ieee80211_rx_status *rxs = &dp->rx_status;
+ bool fcs_err = false;
+
+ mon_skb = ath12k_dp_mon_rx_merg_msdus(ar, mac_id, head_msdu,
+ rxs, &fcs_err);
+ if (!mon_skb)
+ goto mon_deliver_fail;
+
+ header = mon_skb;
+ rxs->flag = 0;
+
+ if (fcs_err)
+ rxs->flag = RX_FLAG_FAILED_FCS_CRC;
+
+ do {
+ skb_next = mon_skb->next;
+ if (!skb_next)
+ rxs->flag &= ~RX_FLAG_AMSDU_MORE;
+ else
+ rxs->flag |= RX_FLAG_AMSDU_MORE;
+
+ if (mon_skb == header) {
+ header = NULL;
+ rxs->flag &= ~RX_FLAG_ALLOW_SAME_PN;
+ } else {
+ rxs->flag |= RX_FLAG_ALLOW_SAME_PN;
+ }
+ rxs->flag |= RX_FLAG_ONLY_MONITOR;
+ ath12k_dp_mon_update_radiotap(ar, ppduinfo, mon_skb, rxs);
+ ath12k_dp_mon_rx_deliver_msdu(ar, napi, mon_skb, rxs);
+ mon_skb = skb_next;
+ } while (mon_skb);
+ rxs->flag = 0;
+
+ return 0;
+
+mon_deliver_fail:
+ mon_skb = head_msdu;
+ while (mon_skb) {
+ skb_next = mon_skb->next;
+ dev_kfree_skb_any(mon_skb);
+ mon_skb = skb_next;
+ }
+ return -EINVAL;
+}
+
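+/* Walk the TLVs in a status-ring buffer and feed each one to the TLV
+ * parser until the PPDU is complete or the buffer is exhausted.
+ */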
+static enum hal_rx_mon_status
+ath12k_dp_mon_parse_rx_dest(struct ath12k_base *ab, struct ath12k_mon_data *pmon,
+ struct sk_buff *skb)
+{
+ struct hal_rx_mon_ppdu_info *ppdu_info = &pmon->mon_ppdu_info;
+ struct hal_tlv_hdr *tlv;
+ enum hal_rx_mon_status hal_status;
+ u32 tlv_userid = 0;
+ u16 tlv_tag, tlv_len;
+ u8 *ptr = skb->data;
+
+ memset(ppdu_info, 0, sizeof(struct hal_rx_mon_ppdu_info));
+
+ do {
+ tlv = (struct hal_tlv_hdr *)ptr;
+ tlv_tag = le32_get_bits(tlv->tl, HAL_TLV_HDR_TAG);
+ tlv_len = le32_get_bits(tlv->tl, HAL_TLV_HDR_LEN);
+ tlv_userid = le32_get_bits(tlv->tl, HAL_TLV_USR_ID);
+ ptr += sizeof(*tlv);
+
+ /* The actual length of PPDU_END is the combined length of many PHY
+ * TLVs that follow. Skip the TLV header and
+ * rx_rxpcu_classification_overview that follows the header to get to
+ * the next TLV.
+ */
+
+ if (tlv_tag == HAL_RX_PPDU_END)
+ tlv_len = sizeof(struct hal_rx_rxpcu_classification_overview);
+
+ hal_status = ath12k_dp_mon_rx_parse_status_tlv(ab, pmon,
+ tlv_tag, ptr, tlv_userid);
+ ptr += tlv_len;
+ ptr = PTR_ALIGN(ptr, HAL_TLV_ALIGN);
+
+ if ((ptr - skb->data) >= DP_RX_BUFFER_SIZE)
+ break;
+
+ } while (hal_status == HAL_RX_MON_STATUS_PPDU_NOT_DONE);
+
+ return hal_status;
+}
+
+enum hal_rx_mon_status
+ath12k_dp_mon_rx_parse_mon_status(struct ath12k *ar,
+ struct ath12k_mon_data *pmon,
+ int mac_id,
+ struct sk_buff *skb,
+ struct napi_struct *napi)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct hal_rx_mon_ppdu_info *ppdu_info = &pmon->mon_ppdu_info;
+ struct dp_mon_mpdu *tmp;
+ struct dp_mon_mpdu *mon_mpdu = pmon->mon_mpdu;
+ struct sk_buff *head_msdu, *tail_msdu;
+ enum hal_rx_mon_status hal_status = HAL_RX_MON_STATUS_BUF_DONE;
+
+ hal_status = ath12k_dp_mon_parse_rx_dest(ab, pmon, skb);
+
+ list_for_each_entry_safe(mon_mpdu, tmp, &pmon->dp_rx_mon_mpdu_list, list) {
+ list_del(&mon_mpdu->list);
+ head_msdu = mon_mpdu->head;
+ tail_msdu = mon_mpdu->tail;
+
+ if (head_msdu && tail_msdu) {
+ ath12k_dp_mon_rx_deliver(ar, mac_id, head_msdu,
+ ppdu_info, napi);
+ }
+
+ kfree(mon_mpdu);
+ }
+ return hal_status;
+}
+
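+/* Replenish the monitor buffer ring: allocate an aligned skb, DMA-map
+ * it, track it in the IDR, and publish its address together with a
+ * cookie encoding the IDR buf_id to the HAL source ring.
+ */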
+int ath12k_dp_mon_buf_replenish(struct ath12k_base *ab,
+ struct dp_rxdma_ring *buf_ring,
+ int req_entries)
+{
+ struct hal_mon_buf_ring *mon_buf;
+ struct sk_buff *skb;
+ struct hal_srng *srng;
+ dma_addr_t paddr;
+ u32 cookie, buf_id;
+
+ srng = &ab->hal.srng_list[buf_ring->refill_buf_ring.ring_id];
+ spin_lock_bh(&srng->lock);
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ while (req_entries > 0) {
+ skb = dev_alloc_skb(DP_RX_BUFFER_SIZE + DP_RX_BUFFER_ALIGN_SIZE);
+ if (unlikely(!skb))
+ goto fail_alloc_skb;
+
+ if (!IS_ALIGNED((unsigned long)skb->data, DP_RX_BUFFER_ALIGN_SIZE)) {
+ skb_pull(skb,
+ PTR_ALIGN(skb->data, DP_RX_BUFFER_ALIGN_SIZE) -
+ skb->data);
+ }
+
+ paddr = dma_map_single(ab->dev, skb->data,
+ skb->len + skb_tailroom(skb),
+ DMA_FROM_DEVICE);
+
+ if (unlikely(dma_mapping_error(ab->dev, paddr)))
+ goto fail_free_skb;
+
+ spin_lock_bh(&buf_ring->idr_lock);
+ buf_id = idr_alloc(&buf_ring->bufs_idr, skb, 0,
+ buf_ring->bufs_max * 3, GFP_ATOMIC);
+ spin_unlock_bh(&buf_ring->idr_lock);
+
+ if (unlikely(buf_id < 0))
+ goto fail_dma_unmap;
+
+ mon_buf = ath12k_hal_srng_src_get_next_entry(ab, srng);
+ if (unlikely(!mon_buf))
+ goto fail_idr_remove;
+
+ ATH12K_SKB_RXCB(skb)->paddr = paddr;
+
+ cookie = u32_encode_bits(buf_id, DP_RXDMA_BUF_COOKIE_BUF_ID);
+
+ mon_buf->paddr_lo = cpu_to_le32(lower_32_bits(paddr));
+ mon_buf->paddr_hi = cpu_to_le32(upper_32_bits(paddr));
+ mon_buf->cookie = cpu_to_le64(cookie);
+
+ req_entries--;
+ }
+
+ ath12k_hal_srng_access_end(ab, srng);
+ spin_unlock_bh(&srng->lock);
+ return 0;
+
+fail_idr_remove:
+ spin_lock_bh(&buf_ring->idr_lock);
+ idr_remove(&buf_ring->bufs_idr, buf_id);
+ spin_unlock_bh(&buf_ring->idr_lock);
+fail_dma_unmap:
+ dma_unmap_single(ab->dev, paddr, skb->len + skb_tailroom(skb),
+ DMA_FROM_DEVICE);
+fail_free_skb:
+ dev_kfree_skb_any(skb);
+fail_alloc_skb:
+ ath12k_hal_srng_access_end(ab, srng);
+ spin_unlock_bh(&srng->lock);
+ return -ENOMEM;
+}
+
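+/* Return the cached protection or data tx_ppdu_info if it has not
+ * been consumed yet; otherwise free it and allocate a fresh one.
+ */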
+static struct dp_mon_tx_ppdu_info *
+ath12k_dp_mon_tx_get_ppdu_info(struct ath12k_mon_data *pmon,
+ unsigned int ppdu_id,
+ enum dp_mon_tx_ppdu_info_type type)
+{
+ struct dp_mon_tx_ppdu_info *tx_ppdu_info;
+
+ if (type == DP_MON_TX_PROT_PPDU_INFO) {
+ tx_ppdu_info = pmon->tx_prot_ppdu_info;
+
+ if (tx_ppdu_info && !tx_ppdu_info->is_used)
+ return tx_ppdu_info;
+ kfree(tx_ppdu_info);
+ } else {
+ tx_ppdu_info = pmon->tx_data_ppdu_info;
+
+ if (tx_ppdu_info && !tx_ppdu_info->is_used)
+ return tx_ppdu_info;
+ kfree(tx_ppdu_info);
+ }
+
+ /* allocate new tx_ppdu_info */
+ tx_ppdu_info = kzalloc(sizeof(*tx_ppdu_info), GFP_ATOMIC);
+ if (!tx_ppdu_info)
+ return NULL;
+
+ tx_ppdu_info->is_used = 0;
+ tx_ppdu_info->ppdu_id = ppdu_id;
+
+ if (type == DP_MON_TX_PROT_PPDU_INFO)
+ pmon->tx_prot_ppdu_info = tx_ppdu_info;
+ else
+ pmon->tx_data_ppdu_info = tx_ppdu_info;
+
+ return tx_ppdu_info;
+}
+
+static struct dp_mon_tx_ppdu_info *
+ath12k_dp_mon_hal_tx_ppdu_info(struct ath12k_mon_data *pmon,
+ u16 tlv_tag)
+{
+ switch (tlv_tag) {
+ case HAL_TX_FES_SETUP:
+ case HAL_TX_FLUSH:
+ case HAL_PCU_PPDU_SETUP_INIT:
+ case HAL_TX_PEER_ENTRY:
+ case HAL_TX_QUEUE_EXTENSION:
+ case HAL_TX_MPDU_START:
+ case HAL_TX_MSDU_START:
+ case HAL_TX_DATA:
+ case HAL_MON_BUF_ADDR:
+ case HAL_TX_MPDU_END:
+ case HAL_TX_LAST_MPDU_FETCHED:
+ case HAL_TX_LAST_MPDU_END:
+ case HAL_COEX_TX_REQ:
+ case HAL_TX_RAW_OR_NATIVE_FRAME_SETUP:
+ case HAL_SCH_CRITICAL_TLV_REFERENCE:
+ case HAL_TX_FES_SETUP_COMPLETE:
+ case HAL_TQM_MPDU_GLOBAL_START:
+ case HAL_SCHEDULER_END:
+ case HAL_TX_FES_STATUS_USER_PPDU:
+ break;
+ case HAL_TX_FES_STATUS_PROT: {
+ if (!pmon->tx_prot_ppdu_info->is_used)
+ pmon->tx_prot_ppdu_info->is_used = true;
+
+ return pmon->tx_prot_ppdu_info;
+ }
+ }
+
+ if (!pmon->tx_data_ppdu_info->is_used)
+ pmon->tx_data_ppdu_info->is_used = true;
+
+ return pmon->tx_data_ppdu_info;
+}
+
+#define MAX_MONITOR_HEADER 512
+#define MAX_DUMMY_FRM_BODY 128
+
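+/* Allocate an skb for a synthesized monitor frame, reserving headroom
+ * for the monitor header and keeping skb->data 4-byte aligned.
+ */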
+struct sk_buff *ath12k_dp_mon_tx_alloc_skb(void)
+{
+ struct sk_buff *skb;
+
+ skb = dev_alloc_skb(MAX_MONITOR_HEADER + MAX_DUMMY_FRM_BODY);
+ if (!skb)
+ return NULL;
+
+ skb_reserve(skb, MAX_MONITOR_HEADER);
+
+ if (!IS_ALIGNED((unsigned long)skb->data, 4))
+ skb_pull(skb, PTR_ALIGN(skb->data, 4) - skb->data);
+
+ return skb;
+}
+
+static int
+ath12k_dp_mon_tx_gen_cts2self_frame(struct dp_mon_tx_ppdu_info *tx_ppdu_info)
+{
+ struct sk_buff *skb;
+ struct ieee80211_cts *cts;
+
+ skb = ath12k_dp_mon_tx_alloc_skb();
+ if (!skb)
+ return -ENOMEM;
+
+ cts = (struct ieee80211_cts *)skb->data;
+ memset(cts, 0, MAX_DUMMY_FRM_BODY);
+ cts->frame_control =
+ cpu_to_le16(IEEE80211_FTYPE_CTL | IEEE80211_STYPE_CTS);
+ cts->duration = cpu_to_le16(tx_ppdu_info->rx_status.rx_duration);
+ memcpy(cts->ra, tx_ppdu_info->rx_status.addr1, sizeof(cts->ra));
+
+ skb_put(skb, sizeof(*cts));
+ tx_ppdu_info->tx_mon_mpdu->head = skb;
+ tx_ppdu_info->tx_mon_mpdu->tail = NULL;
+ list_add_tail(&tx_ppdu_info->tx_mon_mpdu->list,
+ &tx_ppdu_info->dp_tx_mon_mpdu_list);
+
+ return 0;
+}
+
+static int
+ath12k_dp_mon_tx_gen_rts_frame(struct dp_mon_tx_ppdu_info *tx_ppdu_info)
+{
+ struct sk_buff *skb;
+ struct ieee80211_rts *rts;
+
+ skb = ath12k_dp_mon_tx_alloc_skb();
+ if (!skb)
+ return -ENOMEM;
+
+ rts = (struct ieee80211_rts *)skb->data;
+ memset(rts, 0, MAX_DUMMY_FRM_BODY);
+ rts->frame_control =
+ cpu_to_le16(IEEE80211_FTYPE_CTL | IEEE80211_STYPE_RTS);
+ rts->duration = cpu_to_le16(tx_ppdu_info->rx_status.rx_duration);
+ memcpy(rts->ra, tx_ppdu_info->rx_status.addr1, sizeof(rts->ra));
+ memcpy(rts->ta, tx_ppdu_info->rx_status.addr2, sizeof(rts->ta));
+
+ skb_put(skb, sizeof(*rts));
+ tx_ppdu_info->tx_mon_mpdu->head = skb;
+ tx_ppdu_info->tx_mon_mpdu->tail = NULL;
+ list_add_tail(&tx_ppdu_info->tx_mon_mpdu->list,
+ &tx_ppdu_info->dp_tx_mon_mpdu_list);
+
+ return 0;
+}
+
+static int
+ath12k_dp_mon_tx_gen_3addr_qos_null_frame(struct dp_mon_tx_ppdu_info *tx_ppdu_info)
+{
+ struct sk_buff *skb;
+ struct ieee80211_qos_hdr *qhdr;
+
+ skb = ath12k_dp_mon_tx_alloc_skb();
+ if (!skb)
+ return -ENOMEM;
+
+ qhdr = (struct ieee80211_qos_hdr *)skb->data;
+ memset(qhdr, 0, MAX_DUMMY_FRM_BODY);
+ qhdr->frame_control =
+ cpu_to_le16(IEEE80211_FTYPE_DATA | IEEE80211_STYPE_QOS_NULLFUNC);
+ qhdr->duration_id = cpu_to_le16(tx_ppdu_info->rx_status.rx_duration);
+ memcpy(qhdr->addr1, tx_ppdu_info->rx_status.addr1, ETH_ALEN);
+ memcpy(qhdr->addr2, tx_ppdu_info->rx_status.addr2, ETH_ALEN);
+ memcpy(qhdr->addr3, tx_ppdu_info->rx_status.addr3, ETH_ALEN);
+
+ skb_put(skb, sizeof(*qhdr));
+ tx_ppdu_info->tx_mon_mpdu->head = skb;
+ tx_ppdu_info->tx_mon_mpdu->tail = NULL;
+ list_add_tail(&tx_ppdu_info->tx_mon_mpdu->list,
+ &tx_ppdu_info->dp_tx_mon_mpdu_list);
+
+ return 0;
+}
+
+static int
+ath12k_dp_mon_tx_gen_4addr_qos_null_frame(struct dp_mon_tx_ppdu_info *tx_ppdu_info)
+{
+ struct sk_buff *skb;
+ struct dp_mon_qosframe_addr4 *qhdr;
+
+ skb = ath12k_dp_mon_tx_alloc_skb();
+ if (!skb)
+ return -ENOMEM;
+
+ qhdr = (struct dp_mon_qosframe_addr4 *)skb->data;
+ memset(qhdr, 0, MAX_DUMMY_FRM_BODY);
+ qhdr->frame_control =
+ cpu_to_le16(IEEE80211_FTYPE_DATA | IEEE80211_STYPE_QOS_NULLFUNC);
+ qhdr->duration = cpu_to_le16(tx_ppdu_info->rx_status.rx_duration);
+ memcpy(qhdr->addr1, tx_ppdu_info->rx_status.addr1, ETH_ALEN);
+ memcpy(qhdr->addr2, tx_ppdu_info->rx_status.addr2, ETH_ALEN);
+ memcpy(qhdr->addr3, tx_ppdu_info->rx_status.addr3, ETH_ALEN);
+ memcpy(qhdr->addr4, tx_ppdu_info->rx_status.addr4, ETH_ALEN);
+
+ skb_put(skb, sizeof(*qhdr));
+ tx_ppdu_info->tx_mon_mpdu->head = skb;
+ tx_ppdu_info->tx_mon_mpdu->tail = NULL;
+ list_add_tail(&tx_ppdu_info->tx_mon_mpdu->list,
+ &tx_ppdu_info->dp_tx_mon_mpdu_list);
+
+ return 0;
+}
+
+static int
+ath12k_dp_mon_tx_gen_ack_frame(struct dp_mon_tx_ppdu_info *tx_ppdu_info)
+{
+ struct sk_buff *skb;
+ struct dp_mon_frame_min_one *fbmhdr;
+
+ skb = ath12k_dp_mon_tx_alloc_skb();
+ if (!skb)
+ return -ENOMEM;
+
+ fbmhdr = (struct dp_mon_frame_min_one *)skb->data;
+ memset(fbmhdr, 0, MAX_DUMMY_FRM_BODY);
+ fbmhdr->frame_control =
+ cpu_to_le16(IEEE80211_FTYPE_DATA | IEEE80211_STYPE_QOS_CFACK);
+ memcpy(fbmhdr->addr1, tx_ppdu_info->rx_status.addr1, ETH_ALEN);
+
+ /* set duration zero for ack frame */
+ fbmhdr->duration = 0;
+
+ skb_put(skb, sizeof(*fbmhdr));
+ tx_ppdu_info->tx_mon_mpdu->head = skb;
+ tx_ppdu_info->tx_mon_mpdu->tail = NULL;
+ list_add_tail(&tx_ppdu_info->tx_mon_mpdu->list,
+ &tx_ppdu_info->dp_tx_mon_mpdu_list);
+
+ return 0;
+}
+
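+/* Protection frames are not captured by the tx monitor, so build an
+ * equivalent RTS/CTS-to-self/QoS-NULL frame from the status TLVs and
+ * queue it for delivery.
+ */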
+static int
+ath12k_dp_mon_tx_gen_prot_frame(struct dp_mon_tx_ppdu_info *tx_ppdu_info)
+{
+ int ret = 0;
+
+ switch (tx_ppdu_info->rx_status.medium_prot_type) {
+ case DP_MON_TX_MEDIUM_RTS_LEGACY:
+ case DP_MON_TX_MEDIUM_RTS_11AC_STATIC_BW:
+ case DP_MON_TX_MEDIUM_RTS_11AC_DYNAMIC_BW:
+ ret = ath12k_dp_mon_tx_gen_rts_frame(tx_ppdu_info);
+ break;
+ case DP_MON_TX_MEDIUM_CTS2SELF:
+ ret = ath12k_dp_mon_tx_gen_cts2self_frame(tx_ppdu_info);
+ break;
+ case DP_MON_TX_MEDIUM_QOS_NULL_NO_ACK_3ADDR:
+ ret = ath12k_dp_mon_tx_gen_3addr_qos_null_frame(tx_ppdu_info);
+ break;
+ case DP_MON_TX_MEDIUM_QOS_NULL_NO_ACK_4ADDR:
+ ret = ath12k_dp_mon_tx_gen_4addr_qos_null_frame(tx_ppdu_info);
+ break;
+ }
+
+ return ret;
+}
+
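+/* Parse a single tx-monitor status TLV, accumulating PPDU metadata
+ * and captured MSDU buffers into the PPDU info selected by the tag.
+ */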
+static enum dp_mon_tx_tlv_status
+ath12k_dp_mon_tx_parse_status_tlv(struct ath12k_base *ab,
+ struct ath12k_mon_data *pmon,
+ u16 tlv_tag, u8 *tlv_data, u32 userid)
+{
+ struct dp_mon_tx_ppdu_info *tx_ppdu_info;
+ enum dp_mon_tx_tlv_status status = DP_MON_TX_STATUS_PPDU_NOT_DONE;
+ u32 info[7];
+
+ tx_ppdu_info = ath12k_dp_mon_hal_tx_ppdu_info(pmon, tlv_tag);
+
+ switch (tlv_tag) {
+ case HAL_TX_FES_SETUP: {
+ struct hal_tx_fes_setup *tx_fes_setup =
+ (struct hal_tx_fes_setup *)tlv_data;
+
+ info[0] = __le32_to_cpu(tx_fes_setup->info0);
+ tx_ppdu_info->ppdu_id = __le32_to_cpu(tx_fes_setup->schedule_id);
+ tx_ppdu_info->num_users =
+ u32_get_bits(info[0], HAL_TX_FES_SETUP_INFO0_NUM_OF_USERS);
+ status = DP_MON_TX_FES_SETUP;
+ break;
+ }
+
+ case HAL_TX_FES_STATUS_END: {
+ struct hal_tx_fes_status_end *tx_fes_status_end =
+ (struct hal_tx_fes_status_end *)tlv_data;
+ u32 tst_15_0, tst_31_16;
+
+ info[0] = __le32_to_cpu(tx_fes_status_end->info0);
+ tst_15_0 =
+ u32_get_bits(info[0],
+ HAL_TX_FES_STATUS_END_INFO0_START_TIMESTAMP_15_0);
+ tst_31_16 =
+ u32_get_bits(info[0],
+ HAL_TX_FES_STATUS_END_INFO0_START_TIMESTAMP_31_16);
+
+ tx_ppdu_info->rx_status.ppdu_ts = (tst_15_0 | (tst_31_16 << 16));
+ status = DP_MON_TX_FES_STATUS_END;
+ break;
+ }
+
+ case HAL_RX_RESPONSE_REQUIRED_INFO: {
+ struct hal_rx_resp_req_info *rx_resp_req_info =
+ (struct hal_rx_resp_req_info *)tlv_data;
+ u32 addr_32;
+ u16 addr_16;
+
+ info[0] = __le32_to_cpu(rx_resp_req_info->info0);
+ info[1] = __le32_to_cpu(rx_resp_req_info->info1);
+ info[2] = __le32_to_cpu(rx_resp_req_info->info2);
+ info[3] = __le32_to_cpu(rx_resp_req_info->info3);
+ info[4] = __le32_to_cpu(rx_resp_req_info->info4);
+ info[5] = __le32_to_cpu(rx_resp_req_info->info5);
+
+ tx_ppdu_info->rx_status.ppdu_id =
+ u32_get_bits(info[0], HAL_RX_RESP_REQ_INFO0_PPDU_ID);
+ tx_ppdu_info->rx_status.reception_type =
+ u32_get_bits(info[0], HAL_RX_RESP_REQ_INFO0_RECEPTION_TYPE);
+ tx_ppdu_info->rx_status.rx_duration =
+ u32_get_bits(info[1], HAL_RX_RESP_REQ_INFO1_DURATION);
+ tx_ppdu_info->rx_status.mcs =
+ u32_get_bits(info[1], HAL_RX_RESP_REQ_INFO1_RATE_MCS);
+ tx_ppdu_info->rx_status.sgi =
+ u32_get_bits(info[1], HAL_RX_RESP_REQ_INFO1_SGI);
+ tx_ppdu_info->rx_status.is_stbc =
+ u32_get_bits(info[1], HAL_RX_RESP_REQ_INFO1_STBC);
+ tx_ppdu_info->rx_status.ldpc =
+ u32_get_bits(info[1], HAL_RX_RESP_REQ_INFO1_LDPC);
+ tx_ppdu_info->rx_status.is_ampdu =
+ u32_get_bits(info[1], HAL_RX_RESP_REQ_INFO1_IS_AMPDU);
+ tx_ppdu_info->rx_status.num_users =
+ u32_get_bits(info[2], HAL_RX_RESP_REQ_INFO2_NUM_USER);
+
+ addr_32 = u32_get_bits(info[3], HAL_RX_RESP_REQ_INFO3_ADDR1_31_0);
+ addr_16 = u32_get_bits(info[3], HAL_RX_RESP_REQ_INFO4_ADDR1_47_32);
+ ath12k_dp_get_mac_addr(addr_32, addr_16, tx_ppdu_info->rx_status.addr1);
+
+ addr_16 = u32_get_bits(info[4], HAL_RX_RESP_REQ_INFO4_ADDR1_15_0);
+ addr_32 = u32_get_bits(info[5], HAL_RX_RESP_REQ_INFO5_ADDR1_47_16);
+ ath12k_dp_get_mac_addr(addr_32, addr_16, tx_ppdu_info->rx_status.addr2);
+
+ if (tx_ppdu_info->rx_status.reception_type == 0)
+ ath12k_dp_mon_tx_gen_cts2self_frame(tx_ppdu_info);
+ status = DP_MON_RX_RESPONSE_REQUIRED_INFO;
+ break;
+ }
+
+ case HAL_PCU_PPDU_SETUP_INIT: {
+ struct hal_tx_pcu_ppdu_setup_init *ppdu_setup =
+ (struct hal_tx_pcu_ppdu_setup_init *)tlv_data;
+ u32 addr_32;
+ u16 addr_16;
+
+ info[0] = __le32_to_cpu(ppdu_setup->info0);
+ info[1] = __le32_to_cpu(ppdu_setup->info1);
+ info[2] = __le32_to_cpu(ppdu_setup->info2);
+ info[3] = __le32_to_cpu(ppdu_setup->info3);
+ info[4] = __le32_to_cpu(ppdu_setup->info4);
+ info[5] = __le32_to_cpu(ppdu_setup->info5);
+ info[6] = __le32_to_cpu(ppdu_setup->info6);
+
+ /* protection frame address 1 */
+ addr_32 = u32_get_bits(info[1],
+ HAL_TX_PPDU_SETUP_INFO1_PROT_FRAME_ADDR1_31_0);
+ addr_16 = u32_get_bits(info[2],
+ HAL_TX_PPDU_SETUP_INFO2_PROT_FRAME_ADDR1_47_32);
+ ath12k_dp_get_mac_addr(addr_32, addr_16, tx_ppdu_info->rx_status.addr1);
+
+ /* protection frame address 2 */
+ addr_16 = u32_get_bits(info[2],
+ HAL_TX_PPDU_SETUP_INFO2_PROT_FRAME_ADDR2_15_0);
+ addr_32 = u32_get_bits(info[3],
+ HAL_TX_PPDU_SETUP_INFO3_PROT_FRAME_ADDR2_47_16);
+ ath12k_dp_get_mac_addr(addr_32, addr_16, tx_ppdu_info->rx_status.addr2);
+
+ /* protection frame address 3 */
+ addr_32 = u32_get_bits(info[4],
+ HAL_TX_PPDU_SETUP_INFO4_PROT_FRAME_ADDR3_31_0);
+ addr_16 = u32_get_bits(info[5],
+ HAL_TX_PPDU_SETUP_INFO5_PROT_FRAME_ADDR3_47_32);
+ ath12k_dp_get_mac_addr(addr_32, addr_16, tx_ppdu_info->rx_status.addr3);
+
+ /* protection frame address 4 */
+ addr_16 = u32_get_bits(info[5],
+ HAL_TX_PPDU_SETUP_INFO5_PROT_FRAME_ADDR4_15_0);
+ addr_32 = u32_get_bits(info[6],
+ HAL_TX_PPDU_SETUP_INFO6_PROT_FRAME_ADDR4_47_16);
+ ath12k_dp_get_mac_addr(addr_32, addr_16, tx_ppdu_info->rx_status.addr4);
+
+ status = u32_get_bits(info[0],
+ HAL_TX_PPDU_SETUP_INFO0_MEDIUM_PROT_TYPE);
+ break;
+ }
+
+ case HAL_TX_QUEUE_EXTENSION: {
+ struct hal_tx_queue_exten *tx_q_exten =
+ (struct hal_tx_queue_exten *)tlv_data;
+
+ info[0] = __le32_to_cpu(tx_q_exten->info0);
+
+ tx_ppdu_info->rx_status.frame_control =
+ u32_get_bits(info[0],
+ HAL_TX_Q_EXT_INFO0_FRAME_CTRL);
+ tx_ppdu_info->rx_status.fc_valid = true;
+ break;
+ }
+
+ case HAL_TX_FES_STATUS_START: {
+ struct hal_tx_fes_status_start *tx_fes_start =
+ (struct hal_tx_fes_status_start *)tlv_data;
+
+ info[0] = __le32_to_cpu(tx_fes_start->info0);
+
+ tx_ppdu_info->rx_status.medium_prot_type =
+ u32_get_bits(info[0],
+ HAL_TX_FES_STATUS_START_INFO0_MEDIUM_PROT_TYPE);
+ break;
+ }
+
+ case HAL_TX_FES_STATUS_PROT: {
+ struct hal_tx_fes_status_prot *tx_fes_status =
+ (struct hal_tx_fes_status_prot *)tlv_data;
+ u32 start_timestamp;
+ u32 end_timestamp;
+
+ info[0] = __le32_to_cpu(tx_fes_status->info0);
+ info[1] = __le32_to_cpu(tx_fes_status->info1);
+
+ /* The _31_16 fields carry bits 31:16 of the timestamp, so
+ * shift by 16 when reassembling the 32-bit value.
+ */
+ start_timestamp =
+ u32_get_bits(info[0],
+ HAL_TX_FES_STAT_PROT_INFO0_STRT_FRM_TS_15_0);
+ start_timestamp |=
+ u32_get_bits(info[0],
+ HAL_TX_FES_STAT_PROT_INFO0_STRT_FRM_TS_31_16) << 16;
+ end_timestamp =
+ u32_get_bits(info[1],
+ HAL_TX_FES_STAT_PROT_INFO1_END_FRM_TS_15_0);
+ end_timestamp |=
+ u32_get_bits(info[1],
+ HAL_TX_FES_STAT_PROT_INFO1_END_FRM_TS_31_16) << 16;
+ tx_ppdu_info->rx_status.rx_duration = end_timestamp - start_timestamp;
+
+ ath12k_dp_mon_tx_gen_prot_frame(tx_ppdu_info);
+ break;
+ }
+
+ case HAL_TX_FES_STATUS_START_PPDU:
+ case HAL_TX_FES_STATUS_START_PROT: {
+ struct hal_tx_fes_status_start_prot *tx_fes_stat_start =
+ (struct hal_tx_fes_status_start_prot *)tlv_data;
+ u64 ppdu_ts;
+
+ info[0] = __le32_to_cpu(tx_fes_stat_start->info0);
+ /* Load the second TLV word before extracting the upper
+ * timestamp bits; info1 is assumed to hold PROT_TS_UPPER_32,
+ * matching the INFO1 mask name.
+ */
+ info[1] = __le32_to_cpu(tx_fes_stat_start->info1);
+
+ tx_ppdu_info->rx_status.ppdu_ts =
+ u32_get_bits(info[0],
+ HAL_TX_FES_STAT_STRT_INFO0_PROT_TS_LOWER_32);
+ ppdu_ts = u32_get_bits(info[1],
+ HAL_TX_FES_STAT_STRT_INFO1_PROT_TS_UPPER_32);
+ tx_ppdu_info->rx_status.ppdu_ts |= ppdu_ts << 32;
+ break;
+ }
+
+ case HAL_TX_FES_STATUS_USER_PPDU: {
+ struct hal_tx_fes_status_user_ppdu *tx_fes_usr_ppdu =
+ (struct hal_tx_fes_status_user_ppdu *)tlv_data;
+
+ info[0] = __le32_to_cpu(tx_fes_usr_ppdu->info0);
+
+ tx_ppdu_info->rx_status.rx_duration =
+ u32_get_bits(info[0],
+ HAL_TX_FES_STAT_USR_PPDU_INFO0_DURATION);
+ break;
+ }
+
+ case HAL_MACTX_HE_SIG_A_SU:
+ ath12k_dp_mon_parse_he_sig_su(tlv_data, &tx_ppdu_info->rx_status);
+ break;
+
+ case HAL_MACTX_HE_SIG_A_MU_DL:
+ ath12k_dp_mon_parse_he_sig_mu(tlv_data, &tx_ppdu_info->rx_status);
+ break;
+
+ case HAL_MACTX_HE_SIG_B1_MU:
+ ath12k_dp_mon_parse_he_sig_b1_mu(tlv_data, &tx_ppdu_info->rx_status);
+ break;
+
+ case HAL_MACTX_HE_SIG_B2_MU:
+ ath12k_dp_mon_parse_he_sig_b2_mu(tlv_data, &tx_ppdu_info->rx_status);
+ break;
+
+ case HAL_MACTX_HE_SIG_B2_OFDMA:
+ ath12k_dp_mon_parse_he_sig_b2_ofdma(tlv_data, &tx_ppdu_info->rx_status);
+ break;
+
+ case HAL_MACTX_VHT_SIG_A:
+ ath12k_dp_mon_parse_vht_sig_a(tlv_data, &tx_ppdu_info->rx_status);
+ break;
+
+ case HAL_MACTX_L_SIG_A:
+ ath12k_dp_mon_parse_l_sig_a(tlv_data, &tx_ppdu_info->rx_status);
+ break;
+
+ case HAL_MACTX_L_SIG_B:
+ ath12k_dp_mon_parse_l_sig_b(tlv_data, &tx_ppdu_info->rx_status);
+ break;
+
+ case HAL_RX_FRAME_BITMAP_ACK: {
+ struct hal_rx_frame_bitmap_ack *fbm_ack =
+ (struct hal_rx_frame_bitmap_ack *)tlv_data;
+ u32 addr_32;
+ u16 addr_16;
+
+ info[0] = __le32_to_cpu(fbm_ack->info0);
+ info[1] = __le32_to_cpu(fbm_ack->info1);
+
+ addr_32 = u32_get_bits(info[0],
+ HAL_RX_FBM_ACK_INFO0_ADDR1_31_0);
+ addr_16 = u32_get_bits(info[1],
+ HAL_RX_FBM_ACK_INFO1_ADDR1_47_32);
+ ath12k_dp_get_mac_addr(addr_32, addr_16, tx_ppdu_info->rx_status.addr1);
+
+ ath12k_dp_mon_tx_gen_ack_frame(tx_ppdu_info);
+ break;
+ }
+
+ case HAL_MACTX_PHY_DESC: {
+ struct hal_tx_phy_desc *tx_phy_desc =
+ (struct hal_tx_phy_desc *)tlv_data;
+
+ info[0] = __le32_to_cpu(tx_phy_desc->info0);
+ info[1] = __le32_to_cpu(tx_phy_desc->info1);
+ info[2] = __le32_to_cpu(tx_phy_desc->info2);
+ info[3] = __le32_to_cpu(tx_phy_desc->info3);
+
+ tx_ppdu_info->rx_status.beamformed =
+ u32_get_bits(info[0],
+ HAL_TX_PHY_DESC_INFO0_BF_TYPE);
+ tx_ppdu_info->rx_status.preamble_type =
+ u32_get_bits(info[0],
+ HAL_TX_PHY_DESC_INFO0_PREAMBLE_11B);
+ tx_ppdu_info->rx_status.mcs =
+ u32_get_bits(info[1],
+ HAL_TX_PHY_DESC_INFO1_MCS);
+ tx_ppdu_info->rx_status.ltf_size =
+ u32_get_bits(info[3],
+ HAL_TX_PHY_DESC_INFO3_LTF_SIZE);
+ tx_ppdu_info->rx_status.nss =
+ u32_get_bits(info[2],
+ HAL_TX_PHY_DESC_INFO2_NSS);
+ tx_ppdu_info->rx_status.chan_num =
+ u32_get_bits(info[3],
+ HAL_TX_PHY_DESC_INFO3_ACTIVE_CHANNEL);
+ tx_ppdu_info->rx_status.bw =
+ u32_get_bits(info[0],
+ HAL_TX_PHY_DESC_INFO0_BANDWIDTH);
+ break;
+ }
+
+ case HAL_TX_MPDU_START: {
+ struct dp_mon_mpdu *mon_mpdu;
+
+ mon_mpdu = kzalloc(sizeof(*mon_mpdu), GFP_ATOMIC);
+ if (!mon_mpdu)
+ return DP_MON_TX_STATUS_PPDU_NOT_DONE;
+
+ /* Track the new MPDU so the following HAL_MON_BUF_ADDR and
+ * HAL_TX_MPDU_END TLVs attach their MSDUs to it.
+ */
+ tx_ppdu_info->tx_mon_mpdu = mon_mpdu;
+ status = DP_MON_TX_MPDU_START;
+ break;
+ }
+
+ case HAL_MON_BUF_ADDR: {
+ struct dp_rxdma_ring *buf_ring = &ab->dp.tx_mon_buf_ring;
+ struct dp_mon_packet_info *packet_info =
+ (struct dp_mon_packet_info *)tlv_data;
+ int buf_id = u32_get_bits(packet_info->cookie,
+ DP_RXDMA_BUF_COOKIE_BUF_ID);
+ struct sk_buff *msdu;
+ struct dp_mon_mpdu *mon_mpdu = tx_ppdu_info->tx_mon_mpdu;
+ struct ath12k_skb_rxcb *rxcb;
+
+ spin_lock_bh(&buf_ring->idr_lock);
+ msdu = idr_remove(&buf_ring->bufs_idr, buf_id);
+ spin_unlock_bh(&buf_ring->idr_lock);
+
+ if (unlikely(!msdu)) {
+ ath12k_warn(ab, "montior destination with invalid buf_id %d\n",
+ buf_id);
+ return DP_MON_TX_STATUS_PPDU_NOT_DONE;
+ }
+
+ rxcb = ATH12K_SKB_RXCB(msdu);
+ dma_unmap_single(ab->dev, rxcb->paddr,
+ msdu->len + skb_tailroom(msdu),
+ DMA_FROM_DEVICE);
+
+ if (!mon_mpdu->head)
+ mon_mpdu->head = msdu;
+ else if (mon_mpdu->tail)
+ mon_mpdu->tail->next = msdu;
+
+ mon_mpdu->tail = msdu;
+
+ ath12k_dp_mon_buf_replenish(ab, buf_ring, 1);
+ status = DP_MON_TX_BUFFER_ADDR;
+ break;
+ }
+
+ case HAL_TX_MPDU_END:
+ list_add_tail(&tx_ppdu_info->tx_mon_mpdu->list,
+ &tx_ppdu_info->dp_tx_mon_mpdu_list);
+ break;
+ }
+
+ return status;
+}
+
+enum dp_mon_tx_tlv_status
+ath12k_dp_mon_tx_status_get_num_user(u16 tlv_tag,
+ struct hal_tlv_hdr *tx_tlv,
+ u8 *num_users)
+{
+ u32 tlv_status = DP_MON_TX_STATUS_PPDU_NOT_DONE;
+ u32 info0;
+
+ switch (tlv_tag) {
+ case HAL_TX_FES_SETUP: {
+ struct hal_tx_fes_setup *tx_fes_setup =
+ (struct hal_tx_fes_setup *)tx_tlv;
+
+ info0 = __le32_to_cpu(tx_fes_setup->info0);
+
+ *num_users = u32_get_bits(info0, HAL_TX_FES_SETUP_INFO0_NUM_OF_USERS);
+ tlv_status = DP_MON_TX_FES_SETUP;
+ break;
+ }
+
+ case HAL_RX_RESPONSE_REQUIRED_INFO: {
+ /* TODO: need to update *num_users */
+ tlv_status = DP_MON_RX_RESPONSE_REQUIRED_INFO;
+ break;
+ }
+ }
+
+ return tlv_status;
+}
+
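+/* Deliver every MPDU collected for this tx PPDU through the common
+ * monitor rx delivery path.
+ */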
+static void
+ath12k_dp_mon_tx_process_ppdu_info(struct ath12k *ar, int mac_id,
+ struct napi_struct *napi,
+ struct dp_mon_tx_ppdu_info *tx_ppdu_info)
+{
+ struct dp_mon_mpdu *tmp, *mon_mpdu;
+ struct sk_buff *head_msdu;
+
+ list_for_each_entry_safe(mon_mpdu, tmp,
+ &tx_ppdu_info->dp_tx_mon_mpdu_list, list) {
+ list_del(&mon_mpdu->list);
+ head_msdu = mon_mpdu->head;
+
+ if (head_msdu)
+ ath12k_dp_mon_rx_deliver(ar, mac_id, head_msdu,
+ &tx_ppdu_info->rx_status, napi);
+
+ kfree(mon_mpdu);
+ }
+}
+
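+/* Parse one tx monitor status buffer: the first TLV supplies the
+ * number of users, then TLVs are consumed until FES status end, after
+ * which the gathered data and protection PPDUs are delivered.
+ */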
+enum hal_rx_mon_status
+ath12k_dp_mon_tx_parse_mon_status(struct ath12k *ar,
+ struct ath12k_mon_data *pmon,
+ int mac_id,
+ struct sk_buff *skb,
+ struct napi_struct *napi,
+ u32 ppdu_id)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct dp_mon_tx_ppdu_info *tx_prot_ppdu_info, *tx_data_ppdu_info;
+ struct hal_tlv_hdr *tlv;
+ u8 *ptr = skb->data;
+ u16 tlv_tag;
+ u16 tlv_len;
+ u32 tlv_userid = 0;
+ u8 num_user = 0;
+ u32 tlv_status = DP_MON_TX_STATUS_PPDU_NOT_DONE;
+
+ tx_prot_ppdu_info = ath12k_dp_mon_tx_get_ppdu_info(pmon, ppdu_id,
+ DP_MON_TX_PROT_PPDU_INFO);
+ if (!tx_prot_ppdu_info)
+ return -ENOMEM;
+
+ tlv = (struct hal_tlv_hdr *)ptr;
+ tlv_tag = le32_get_bits(tlv->tl, HAL_TLV_HDR_TAG);
+
+ tlv_status = ath12k_dp_mon_tx_status_get_num_user(tlv_tag, tlv, &num_user);
+ if (tlv_status == DP_MON_TX_STATUS_PPDU_NOT_DONE || !num_user)
+ return -EINVAL;
+
+ tx_data_ppdu_info = ath12k_dp_mon_tx_get_ppdu_info(pmon, ppdu_id,
+ DP_MON_TX_DATA_PPDU_INFO);
+ if (!tx_data_ppdu_info)
+ return -ENOMEM;
+
+ do {
+ tlv = (struct hal_tlv_hdr *)ptr;
+ tlv_tag = le32_get_bits(tlv->tl, HAL_TLV_HDR_TAG);
+ tlv_len = le32_get_bits(tlv->tl, HAL_TLV_HDR_LEN);
+ tlv_userid = le32_get_bits(tlv->tl, HAL_TLV_USR_ID);
+
+ tlv_status = ath12k_dp_mon_tx_parse_status_tlv(ab, pmon,
+ tlv_tag, ptr,
+ tlv_userid);
+ ptr += tlv_len;
+ ptr = PTR_ALIGN(ptr, HAL_TLV_ALIGN);
+ if ((ptr - skb->data) >= DP_TX_MONITOR_BUF_SIZE)
+ break;
+ } while (tlv_status != DP_MON_TX_FES_STATUS_END);
+
+ ath12k_dp_mon_tx_process_ppdu_info(ar, mac_id, napi, tx_data_ppdu_info);
+ ath12k_dp_mon_tx_process_ppdu_info(ar, mac_id, napi, tx_prot_ppdu_info);
+
+ return tlv_status;
+}
+
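+/* Service the monitor destination ring: buffers are queued per PPDU
+ * and parsed as one batch once the end-of-PPDU descriptor is seen.
+ */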
+int ath12k_dp_mon_srng_process(struct ath12k *ar, int mac_id, int *budget,
+ enum dp_monitor_mode monitor_mode,
+ struct napi_struct *napi)
+{
+ struct hal_mon_dest_desc *mon_dst_desc;
+ struct ath12k_pdev_dp *pdev_dp = &ar->dp;
+ struct ath12k_mon_data *pmon = (struct ath12k_mon_data *)&pdev_dp->mon_data;
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_dp *dp = &ab->dp;
+ struct sk_buff *skb;
+ struct ath12k_skb_rxcb *rxcb;
+ struct dp_srng *mon_dst_ring;
+ struct hal_srng *srng;
+ struct dp_rxdma_ring *buf_ring;
+ u64 cookie;
+ u32 ppdu_id;
+ int num_buffs_reaped = 0, srng_id, buf_id;
+ u8 dest_idx = 0, i;
+ bool end_of_ppdu;
+ struct hal_rx_mon_ppdu_info *ppdu_info;
+ struct ath12k_peer *peer = NULL;
+
+ ppdu_info = &pmon->mon_ppdu_info;
+ memset(ppdu_info, 0, sizeof(*ppdu_info));
+ ppdu_info->peer_id = HAL_INVALID_PEERID;
+
+ srng_id = ath12k_hw_mac_id_to_srng_id(ab->hw_params, mac_id);
+
+ if (monitor_mode == ATH12K_DP_RX_MONITOR_MODE) {
+ mon_dst_ring = &pdev_dp->rxdma_mon_dst_ring[srng_id];
+ buf_ring = &dp->rxdma_mon_buf_ring;
+ } else {
+ mon_dst_ring = &pdev_dp->tx_mon_dst_ring[srng_id];
+ buf_ring = &dp->tx_mon_buf_ring;
+ }
+
+ srng = &ab->hal.srng_list[mon_dst_ring->ring_id];
+
+ spin_lock_bh(&srng->lock);
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ while (likely(*budget)) {
+ *budget -= 1;
+ mon_dst_desc = ath12k_hal_srng_dst_peek(ab, srng);
+ if (unlikely(!mon_dst_desc))
+ break;
+
+ cookie = le32_to_cpu(mon_dst_desc->cookie);
+ buf_id = u32_get_bits(cookie, DP_RXDMA_BUF_COOKIE_BUF_ID);
+
+ spin_lock_bh(&buf_ring->idr_lock);
+ skb = idr_remove(&buf_ring->bufs_idr, buf_id);
+ spin_unlock_bh(&buf_ring->idr_lock);
+
+ if (unlikely(!skb)) {
+ ath12k_warn(ab, "montior destination with invalid buf_id %d\n",
+ buf_id);
+ goto move_next;
+ }
+
+ rxcb = ATH12K_SKB_RXCB(skb);
+ dma_unmap_single(ab->dev, rxcb->paddr,
+ skb->len + skb_tailroom(skb),
+ DMA_FROM_DEVICE);
+
+ pmon->dest_skb_q[dest_idx] = skb;
+ dest_idx++;
+ ppdu_id = le32_to_cpu(mon_dst_desc->ppdu_id);
+ end_of_ppdu = le32_get_bits(mon_dst_desc->info0,
+ HAL_MON_DEST_INFO0_END_OF_PPDU);
+ if (!end_of_ppdu)
+ continue;
+
+ for (i = 0; i < dest_idx; i++) {
+ skb = pmon->dest_skb_q[i];
+
+ if (monitor_mode == ATH12K_DP_RX_MONITOR_MODE)
+ ath12k_dp_mon_rx_parse_mon_status(ar, pmon, mac_id,
+ skb, napi);
+ else
+ ath12k_dp_mon_tx_parse_mon_status(ar, pmon, mac_id,
+ skb, napi, ppdu_id);
+
+ peer = ath12k_peer_find_by_id(ab, ppdu_info->peer_id);
+
+ if (!peer || !peer->sta) {
+ ath12k_dbg(ab, ATH12K_DBG_DATA,
+ "failed to find the peer with peer_id %d\n",
+ ppdu_info->peer_id);
+ dev_kfree_skb_any(skb);
+ continue;
+ }
+
+ dev_kfree_skb_any(skb);
+ pmon->dest_skb_q[i] = NULL;
+ }
+
+ dest_idx = 0;
+move_next:
+ ath12k_dp_mon_buf_replenish(ab, buf_ring, 1);
+ ath12k_hal_srng_src_get_next_entry(ab, srng);
+ num_buffs_reaped++;
+ }
+
+ ath12k_hal_srng_access_end(ab, srng);
+ spin_unlock_bh(&srng->lock);
+
+ return num_buffs_reaped;
+}
+
+static void
+ath12k_dp_mon_rx_update_peer_rate_table_stats(struct ath12k_rx_peer_stats *rx_stats,
+ struct hal_rx_mon_ppdu_info *ppdu_info,
+ struct hal_rx_user_status *user_stats,
+ u32 num_msdu)
+{
+ u32 rate_idx = 0;
+ u32 mcs_idx = (user_stats) ? user_stats->mcs : ppdu_info->mcs;
+ u32 nss_idx = (user_stats) ? user_stats->nss - 1 : ppdu_info->nss - 1;
+ u32 bw_idx = ppdu_info->bw;
+ u32 gi_idx = ppdu_info->gi;
+
+ if ((mcs_idx > HAL_RX_MAX_MCS_HE) || (nss_idx >= HAL_RX_MAX_NSS) ||
+ (bw_idx >= HAL_RX_BW_MAX) || (gi_idx >= HAL_RX_GI_MAX)) {
+ return;
+ }
+
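+ /* The flattened rate table is laid out as [nss][mcs][bw][gi]:
+ * HT/VHT strides are 80 per NSS, 8 per MCS and 2 per BW (two GI
+ * slots); HE strides are 144 per NSS, 12 per MCS and 3 per BW.
+ */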
+ if (ppdu_info->preamble_type == HAL_RX_PREAMBLE_11N ||
+ ppdu_info->preamble_type == HAL_RX_PREAMBLE_11AC) {
+ rate_idx = mcs_idx * 8 + 8 * 10 * nss_idx;
+ rate_idx += bw_idx * 2 + gi_idx;
+ } else if (ppdu_info->preamble_type == HAL_RX_PREAMBLE_11AX) {
+ gi_idx = ath12k_he_gi_to_nl80211_he_gi(ppdu_info->gi);
+ rate_idx = mcs_idx * 12 + 12 * 12 * nss_idx;
+ rate_idx += bw_idx * 3 + gi_idx;
+ } else {
+ return;
+ }
+
+ rx_stats->pkt_stats.rx_rate[rate_idx] += num_msdu;
+ if (user_stats)
+ rx_stats->byte_stats.rx_rate[rate_idx] += user_stats->mpdu_ok_byte_count;
+ else
+ rx_stats->byte_stats.rx_rate[rate_idx] += ppdu_info->mpdu_len;
+}
+
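+/* Accumulate a single-user PPDU's counters into the peer's rx stats,
+ * bucketed by preamble, MCS, NSS, GI and bandwidth.
+ */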
+static void ath12k_dp_mon_rx_update_peer_su_stats(struct ath12k *ar,
+ struct ath12k_sta *arsta,
+ struct hal_rx_mon_ppdu_info *ppdu_info)
+{
+ struct ath12k_rx_peer_stats *rx_stats = arsta->rx_stats;
+ u32 num_msdu;
+
+ if (!rx_stats)
+ return;
+
+ arsta->rssi_comb = ppdu_info->rssi_comb;
+
+ num_msdu = ppdu_info->tcp_msdu_count + ppdu_info->tcp_ack_msdu_count +
+ ppdu_info->udp_msdu_count + ppdu_info->other_msdu_count;
+
+ rx_stats->num_msdu += num_msdu;
+ rx_stats->tcp_msdu_count += ppdu_info->tcp_msdu_count +
+ ppdu_info->tcp_ack_msdu_count;
+ rx_stats->udp_msdu_count += ppdu_info->udp_msdu_count;
+ rx_stats->other_msdu_count += ppdu_info->other_msdu_count;
+
+ if (ppdu_info->preamble_type == HAL_RX_PREAMBLE_11A ||
+ ppdu_info->preamble_type == HAL_RX_PREAMBLE_11B) {
+ ppdu_info->nss = 1;
+ ppdu_info->mcs = HAL_RX_MAX_MCS;
+ ppdu_info->tid = IEEE80211_NUM_TIDS;
+ }
+
+ if (ppdu_info->ldpc < HAL_RX_SU_MU_CODING_MAX)
+ rx_stats->coding_count[ppdu_info->ldpc] += num_msdu;
+
+ if (ppdu_info->tid <= IEEE80211_NUM_TIDS)
+ rx_stats->tid_count[ppdu_info->tid] += num_msdu;
+
+ if (ppdu_info->preamble_type < HAL_RX_PREAMBLE_MAX)
+ rx_stats->pream_cnt[ppdu_info->preamble_type] += num_msdu;
+
+ if (ppdu_info->reception_type < HAL_RX_RECEPTION_TYPE_MAX)
+ rx_stats->reception_type[ppdu_info->reception_type] += num_msdu;
+
+ if (ppdu_info->is_stbc)
+ rx_stats->stbc_count += num_msdu;
+
+ if (ppdu_info->beamformed)
+ rx_stats->beamformed_count += num_msdu;
+
+ if (ppdu_info->num_mpdu_fcs_ok > 1)
+ rx_stats->ampdu_msdu_count += num_msdu;
+ else
+ rx_stats->non_ampdu_msdu_count += num_msdu;
+
+ rx_stats->num_mpdu_fcs_ok += ppdu_info->num_mpdu_fcs_ok;
+ rx_stats->num_mpdu_fcs_err += ppdu_info->num_mpdu_fcs_err;
+ rx_stats->dcm_count += ppdu_info->dcm;
+
+ rx_stats->rx_duration += ppdu_info->rx_duration;
+ arsta->rx_duration = rx_stats->rx_duration;
+
+ if (ppdu_info->nss > 0 && ppdu_info->nss <= HAL_RX_MAX_NSS) {
+ rx_stats->pkt_stats.nss_count[ppdu_info->nss - 1] += num_msdu;
+ rx_stats->byte_stats.nss_count[ppdu_info->nss - 1] += ppdu_info->mpdu_len;
+ }
+
+ if (ppdu_info->preamble_type == HAL_RX_PREAMBLE_11N &&
+ ppdu_info->mcs <= HAL_RX_MAX_MCS_HT) {
+ rx_stats->pkt_stats.ht_mcs_count[ppdu_info->mcs] += num_msdu;
+ rx_stats->byte_stats.ht_mcs_count[ppdu_info->mcs] += ppdu_info->mpdu_len;
+ /* To fit into rate table for HT packets */
+ ppdu_info->mcs = ppdu_info->mcs % 8;
+ }
+
+ if (ppdu_info->preamble_type == HAL_RX_PREAMBLE_11AC &&
+ ppdu_info->mcs <= HAL_RX_MAX_MCS_VHT) {
+ rx_stats->pkt_stats.vht_mcs_count[ppdu_info->mcs] += num_msdu;
+ rx_stats->byte_stats.vht_mcs_count[ppdu_info->mcs] += ppdu_info->mpdu_len;
+ }
+
+ if (ppdu_info->preamble_type == HAL_RX_PREAMBLE_11AX &&
+ ppdu_info->mcs <= HAL_RX_MAX_MCS_HE) {
+ rx_stats->pkt_stats.he_mcs_count[ppdu_info->mcs] += num_msdu;
+ rx_stats->byte_stats.he_mcs_count[ppdu_info->mcs] += ppdu_info->mpdu_len;
+ }
+
+ if ((ppdu_info->preamble_type == HAL_RX_PREAMBLE_11A ||
+ ppdu_info->preamble_type == HAL_RX_PREAMBLE_11B) &&
+ ppdu_info->rate < HAL_RX_LEGACY_RATE_INVALID) {
+ rx_stats->pkt_stats.legacy_count[ppdu_info->rate] += num_msdu;
+ rx_stats->byte_stats.legacy_count[ppdu_info->rate] += ppdu_info->mpdu_len;
+ }
+
+ if (ppdu_info->gi < HAL_RX_GI_MAX) {
+ rx_stats->pkt_stats.gi_count[ppdu_info->gi] += num_msdu;
+ rx_stats->byte_stats.gi_count[ppdu_info->gi] += ppdu_info->mpdu_len;
+ }
+
+ if (ppdu_info->bw < HAL_RX_BW_MAX) {
+ rx_stats->pkt_stats.bw_count[ppdu_info->bw] += num_msdu;
+ rx_stats->byte_stats.bw_count[ppdu_info->bw] += ppdu_info->mpdu_len;
+ }
+
+ ath12k_dp_mon_rx_update_peer_rate_table_stats(rx_stats, ppdu_info,
+ NULL, num_msdu);
+}
+
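+/* For uplink MU receptions, recover each user's MCS, NSS and RU
+ * allocation from the version-0 UL OFDMA user info words.
+ */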
+void ath12k_dp_mon_rx_process_ulofdma(struct hal_rx_mon_ppdu_info *ppdu_info)
+{
+ struct hal_rx_user_status *rx_user_status;
+ u32 num_users, i, mu_ul_user_v0_word0, mu_ul_user_v0_word1, ru_size;
+
+ if (!(ppdu_info->reception_type == HAL_RX_RECEPTION_TYPE_MU_MIMO ||
+ ppdu_info->reception_type == HAL_RX_RECEPTION_TYPE_MU_OFDMA ||
+ ppdu_info->reception_type == HAL_RX_RECEPTION_TYPE_MU_OFDMA_MIMO))
+ return;
+
+ num_users = ppdu_info->num_users;
+ if (num_users > HAL_MAX_UL_MU_USERS)
+ num_users = HAL_MAX_UL_MU_USERS;
+
+ for (i = 0; i < num_users; i++) {
+ rx_user_status = &ppdu_info->userstats[i];
+ mu_ul_user_v0_word0 =
+ rx_user_status->ul_ofdma_user_v0_word0;
+ mu_ul_user_v0_word1 =
+ rx_user_status->ul_ofdma_user_v0_word1;
+
+ if (u32_get_bits(mu_ul_user_v0_word0,
+ HAL_RX_UL_OFDMA_USER_INFO_V0_W0_VALID) &&
+ !u32_get_bits(mu_ul_user_v0_word0,
+ HAL_RX_UL_OFDMA_USER_INFO_V0_W0_VER)) {
+ rx_user_status->mcs =
+ u32_get_bits(mu_ul_user_v0_word1,
+ HAL_RX_UL_OFDMA_USER_INFO_V0_W1_MCS);
+ rx_user_status->nss =
+ u32_get_bits(mu_ul_user_v0_word1,
+ HAL_RX_UL_OFDMA_USER_INFO_V0_W1_NSS) + 1;
+
+ rx_user_status->ofdma_info_valid = 1;
+ rx_user_status->ul_ofdma_ru_start_index =
+ u32_get_bits(mu_ul_user_v0_word1,
+ HAL_RX_UL_OFDMA_USER_INFO_V0_W1_RU_START);
+
+ ru_size = u32_get_bits(mu_ul_user_v0_word1,
+ HAL_RX_UL_OFDMA_USER_INFO_V0_W1_RU_SIZE);
+ rx_user_status->ul_ofdma_ru_width = ru_size;
+ rx_user_status->ul_ofdma_ru_size = ru_size;
+ }
+ rx_user_status->ldpc = u32_get_bits(mu_ul_user_v0_word1,
+ HAL_RX_UL_OFDMA_USER_INFO_V0_W1_LDPC);
+ }
+ ppdu_info->ldpc = 1;
+}
+
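+/* Fold one user's share of an MU PPDU into that peer's rx statistics,
+ * resolving the peer via its AST index.
+ */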
+static void
+ath12k_dp_mon_rx_update_user_stats(struct ath12k *ar,
+ struct hal_rx_mon_ppdu_info *ppdu_info,
+ u32 uid)
+{
+ struct ath12k_sta *arsta = NULL;
+ struct ath12k_rx_peer_stats *rx_stats = NULL;
+ struct hal_rx_user_status *user_stats = &ppdu_info->userstats[uid];
+ struct ath12k_peer *peer;
+ u32 num_msdu;
+
+ if (user_stats->ast_index == 0 || user_stats->ast_index == 0xFFFF)
+ return;
+
+ peer = ath12k_peer_find_by_ast(ar->ab, user_stats->ast_index);
+
+ if (!peer) {
+ ath12k_warn(ar->ab, "peer ast idx %d can't be found\n",
+ user_stats->ast_index);
+ return;
+ }
+
+ arsta = (struct ath12k_sta *)peer->sta->drv_priv;
+ rx_stats = arsta->rx_stats;
+
+ if (!rx_stats)
+ return;
+
+ arsta->rssi_comb = ppdu_info->rssi_comb;
+
+ num_msdu = user_stats->tcp_msdu_count + user_stats->tcp_ack_msdu_count +
+ user_stats->udp_msdu_count + user_stats->other_msdu_count;
+
+ rx_stats->num_msdu += num_msdu;
+ rx_stats->tcp_msdu_count += user_stats->tcp_msdu_count +
+ user_stats->tcp_ack_msdu_count;
+ rx_stats->udp_msdu_count += user_stats->udp_msdu_count;
+ rx_stats->other_msdu_count += user_stats->other_msdu_count;
+
+ if (ppdu_info->ldpc < HAL_RX_SU_MU_CODING_MAX)
+ rx_stats->coding_count[ppdu_info->ldpc] += num_msdu;
+
+ if (user_stats->tid <= IEEE80211_NUM_TIDS)
+ rx_stats->tid_count[user_stats->tid] += num_msdu;
+
+ if (user_stats->preamble_type < HAL_RX_PREAMBLE_MAX)
+ rx_stats->pream_cnt[user_stats->preamble_type] += num_msdu;
+
+ if (ppdu_info->reception_type < HAL_RX_RECEPTION_TYPE_MAX)
+ rx_stats->reception_type[ppdu_info->reception_type] += num_msdu;
+
+ if (ppdu_info->is_stbc)
+ rx_stats->stbc_count += num_msdu;
+
+ if (ppdu_info->beamformed)
+ rx_stats->beamformed_count += num_msdu;
+
+ if (user_stats->mpdu_cnt_fcs_ok > 1)
+ rx_stats->ampdu_msdu_count += num_msdu;
+ else
+ rx_stats->non_ampdu_msdu_count += num_msdu;
+
+ rx_stats->num_mpdu_fcs_ok += user_stats->mpdu_cnt_fcs_ok;
+ rx_stats->num_mpdu_fcs_err += user_stats->mpdu_cnt_fcs_err;
+ rx_stats->dcm_count += ppdu_info->dcm;
+ if (ppdu_info->reception_type == HAL_RX_RECEPTION_TYPE_MU_OFDMA ||
+ ppdu_info->reception_type == HAL_RX_RECEPTION_TYPE_MU_OFDMA_MIMO)
+ rx_stats->ru_alloc_cnt[user_stats->ul_ofdma_ru_size] += num_msdu;
+
+ rx_stats->rx_duration += ppdu_info->rx_duration;
+ arsta->rx_duration = rx_stats->rx_duration;
+
+ if (user_stats->nss > 0 && user_stats->nss <= HAL_RX_MAX_NSS) {
+ rx_stats->pkt_stats.nss_count[user_stats->nss - 1] += num_msdu;
+ rx_stats->byte_stats.nss_count[user_stats->nss - 1] +=
+ user_stats->mpdu_ok_byte_count;
+ }
+
+ if (user_stats->preamble_type == HAL_RX_PREAMBLE_11AX &&
+ user_stats->mcs <= HAL_RX_MAX_MCS_HE) {
+ rx_stats->pkt_stats.he_mcs_count[user_stats->mcs] += num_msdu;
+ rx_stats->byte_stats.he_mcs_count[user_stats->mcs] +=
+ user_stats->mpdu_ok_byte_count;
+ }
+
+ if (ppdu_info->gi < HAL_RX_GI_MAX) {
+ rx_stats->pkt_stats.gi_count[ppdu_info->gi] += num_msdu;
+ rx_stats->byte_stats.gi_count[ppdu_info->gi] +=
+ user_stats->mpdu_ok_byte_count;
+ }
+
+ if (ppdu_info->bw < HAL_RX_BW_MAX) {
+ rx_stats->pkt_stats.bw_count[ppdu_info->bw] += num_msdu;
+ rx_stats->byte_stats.bw_count[ppdu_info->bw] +=
+ user_stats->mpdu_ok_byte_count;
+ }
+
+ ath12k_dp_mon_rx_update_peer_rate_table_stats(rx_stats, ppdu_info,
+ user_stats, num_msdu);
+}
+
+static void
+ath12k_dp_mon_rx_update_peer_mu_stats(struct ath12k *ar,
+ struct hal_rx_mon_ppdu_info *ppdu_info)
+{
+ u32 num_users, i;
+
+ num_users = ppdu_info->num_users;
+ if (num_users > HAL_MAX_UL_MU_USERS)
+ num_users = HAL_MAX_UL_MU_USERS;
+
+ for (i = 0; i < num_users; i++)
+ ath12k_dp_mon_rx_update_user_stats(ar, ppdu_info, i);
+}
+
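+/* Monitor statistics service loop: reap status buffers, parse each
+ * completed PPDU and update per-peer SU or MU statistics.
+ */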
+int ath12k_dp_mon_rx_process_stats(struct ath12k *ar, int mac_id,
+ struct napi_struct *napi, int *budget)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_pdev_dp *pdev_dp = &ar->dp;
+ struct ath12k_mon_data *pmon = (struct ath12k_mon_data *)&pdev_dp->mon_data;
+ struct hal_rx_mon_ppdu_info *ppdu_info = &pmon->mon_ppdu_info;
+ struct ath12k_dp *dp = &ab->dp;
+ struct hal_mon_dest_desc *mon_dst_desc;
+ struct sk_buff *skb;
+ struct ath12k_skb_rxcb *rxcb;
+ struct dp_srng *mon_dst_ring;
+ struct hal_srng *srng;
+ struct dp_rxdma_ring *buf_ring;
+ struct ath12k_sta *arsta = NULL;
+ struct ath12k_peer *peer;
+ u64 cookie;
+ int num_buffs_reaped = 0, srng_id, buf_id;
+ u8 dest_idx = 0, i;
+ bool end_of_ppdu;
+ u32 hal_status;
+
+ srng_id = ath12k_hw_mac_id_to_srng_id(ab->hw_params, mac_id);
+ mon_dst_ring = &pdev_dp->rxdma_mon_dst_ring[srng_id];
+ buf_ring = &dp->rxdma_mon_buf_ring;
+
+ srng = &ab->hal.srng_list[mon_dst_ring->ring_id];
+ spin_lock_bh(&srng->lock);
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ while (likely(*budget)) {
+ *budget -= 1;
+ mon_dst_desc = ath12k_hal_srng_dst_peek(ab, srng);
+ if (unlikely(!mon_dst_desc))
+ break;
+ cookie = le32_to_cpu(mon_dst_desc->cookie);
+ buf_id = u32_get_bits(cookie, DP_RXDMA_BUF_COOKIE_BUF_ID);
+
+ spin_lock_bh(&buf_ring->idr_lock);
+ skb = idr_remove(&buf_ring->bufs_idr, buf_id);
+ spin_unlock_bh(&buf_ring->idr_lock);
+
+ if (unlikely(!skb)) {
+ ath12k_warn(ab, "montior destination with invalid buf_id %d\n",
+ buf_id);
+ goto move_next;
+ }
+
+ rxcb = ATH12K_SKB_RXCB(skb);
+ dma_unmap_single(ab->dev, rxcb->paddr,
+ skb->len + skb_tailroom(skb),
+ DMA_FROM_DEVICE);
+ pmon->dest_skb_q[dest_idx] = skb;
+ dest_idx++;
+ end_of_ppdu = le32_get_bits(mon_dst_desc->info0,
+ HAL_MON_DEST_INFO0_END_OF_PPDU);
+ if (!end_of_ppdu)
+ continue;
+
+ for (i = 0; i < dest_idx; i++) {
+ skb = pmon->dest_skb_q[i];
+ hal_status = ath12k_dp_mon_parse_rx_dest(ab, pmon, skb);
+
+ if (ppdu_info->peer_id == HAL_INVALID_PEERID ||
+ hal_status != HAL_RX_MON_STATUS_PPDU_DONE) {
+ dev_kfree_skb_any(skb);
+ continue;
+ }
+
+ rcu_read_lock();
+ spin_lock_bh(&ab->base_lock);
+ peer = ath12k_peer_find_by_id(ab, ppdu_info->peer_id);
+ if (!peer || !peer->sta) {
+ ath12k_dbg(ab, ATH12K_DBG_DATA,
+ "failed to find the peer with peer_id %d\n",
+ ppdu_info->peer_id);
+ spin_unlock_bh(&ab->base_lock);
+ rcu_read_unlock();
+ dev_kfree_skb_any(skb);
+ continue;
+ }
+
+ if (ppdu_info->reception_type == HAL_RX_RECEPTION_TYPE_SU) {
+ arsta = (struct ath12k_sta *)peer->sta->drv_priv;
+ ath12k_dp_mon_rx_update_peer_su_stats(ar, arsta,
+ ppdu_info);
+ } else if ((ppdu_info->fc_valid) &&
+ (ppdu_info->ast_index != HAL_AST_IDX_INVALID)) {
+ ath12k_dp_mon_rx_process_ulofdma(ppdu_info);
+ ath12k_dp_mon_rx_update_peer_mu_stats(ar, ppdu_info);
+ }
+
+ spin_unlock_bh(&ab->base_lock);
+ rcu_read_unlock();
+ dev_kfree_skb_any(skb);
+ memset(ppdu_info, 0, sizeof(*ppdu_info));
+ ppdu_info->peer_id = HAL_INVALID_PEERID;
+ }
+
+ dest_idx = 0;
+move_next:
+ ath12k_dp_mon_buf_replenish(ab, buf_ring, 1);
+ ath12k_hal_srng_src_get_next_entry(ab, srng);
+ num_buffs_reaped++;
+ }
+
+ ath12k_hal_srng_access_end(ab, srng);
+ spin_unlock_bh(&srng->lock);
+ return num_buffs_reaped;
+}
+
+int ath12k_dp_mon_process_ring(struct ath12k_base *ab, int mac_id,
+ struct napi_struct *napi, int budget,
+ enum dp_monitor_mode monitor_mode)
+{
+ struct ath12k *ar = ath12k_ab_to_ar(ab, mac_id);
+ int num_buffs_reaped = 0;
+
+ if (!ar->monitor_started)
+ num_buffs_reaped = ath12k_dp_mon_rx_process_stats(ar, mac_id,
+ napi, &budget);
+ else
+ num_buffs_reaped = ath12k_dp_mon_srng_process(ar, mac_id, &budget,
+ monitor_mode, napi);
+
+ return num_buffs_reaped;
+}
diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.h b/drivers/net/wireless/ath/ath12k/dp_mon.h
new file mode 100644
index 000000000000..c18c385798a1
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/dp_mon.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_DP_MON_H
+#define ATH12K_DP_MON_H
+
+#include "core.h"
+
+enum dp_monitor_mode {
+ ATH12K_DP_TX_MONITOR_MODE,
+ ATH12K_DP_RX_MONITOR_MODE
+};
+
+enum dp_mon_tx_ppdu_info_type {
+ DP_MON_TX_PROT_PPDU_INFO,
+ DP_MON_TX_DATA_PPDU_INFO
+};
+
+enum dp_mon_tx_tlv_status {
+ DP_MON_TX_FES_SETUP,
+ DP_MON_TX_FES_STATUS_END,
+ DP_MON_RX_RESPONSE_REQUIRED_INFO,
+ DP_MON_RESPONSE_END_STATUS_INFO,
+ DP_MON_TX_MPDU_START,
+ DP_MON_TX_MSDU_START,
+ DP_MON_TX_BUFFER_ADDR,
+ DP_MON_TX_DATA,
+ DP_MON_TX_STATUS_PPDU_NOT_DONE,
+};
+
+enum dp_mon_tx_medium_protection_type {
+ DP_MON_TX_MEDIUM_NO_PROTECTION,
+ DP_MON_TX_MEDIUM_RTS_LEGACY,
+ DP_MON_TX_MEDIUM_RTS_11AC_STATIC_BW,
+ DP_MON_TX_MEDIUM_RTS_11AC_DYNAMIC_BW,
+ DP_MON_TX_MEDIUM_CTS2SELF,
+ DP_MON_TX_MEDIUM_QOS_NULL_NO_ACK_3ADDR,
+ DP_MON_TX_MEDIUM_QOS_NULL_NO_ACK_4ADDR
+};
+
+struct dp_mon_qosframe_addr4 {
+ __le16 frame_control;
+ __le16 duration;
+ u8 addr1[ETH_ALEN];
+ u8 addr2[ETH_ALEN];
+ u8 addr3[ETH_ALEN];
+ __le16 seq_ctrl;
+ u8 addr4[ETH_ALEN];
+ __le16 qos_ctrl;
+} __packed;
+
+struct dp_mon_frame_min_one {
+ __le16 frame_control;
+ __le16 duration;
+ u8 addr1[ETH_ALEN];
+} __packed;
+
+struct dp_mon_packet_info {
+ u64 cookie;
+ u16 dma_length;
+ bool msdu_continuation;
+ bool truncated;
+};
+
+struct dp_mon_tx_ppdu_info {
+ u32 ppdu_id;
+ u8 num_users;
+ bool is_used;
+ struct hal_rx_mon_ppdu_info rx_status;
+ struct list_head dp_tx_mon_mpdu_list;
+ struct dp_mon_mpdu *tx_mon_mpdu;
+};
+
+enum hal_rx_mon_status
+ath12k_dp_mon_rx_parse_mon_status(struct ath12k *ar,
+ struct ath12k_mon_data *pmon,
+ int mac_id, struct sk_buff *skb,
+ struct napi_struct *napi);
+int ath12k_dp_mon_buf_replenish(struct ath12k_base *ab,
+ struct dp_rxdma_ring *buf_ring,
+ int req_entries);
+int ath12k_dp_mon_srng_process(struct ath12k *ar, int mac_id,
+ int *budget, enum dp_monitor_mode monitor_mode,
+ struct napi_struct *napi);
+int ath12k_dp_mon_process_ring(struct ath12k_base *ab, int mac_id,
+ struct napi_struct *napi, int budget,
+ enum dp_monitor_mode monitor_mode);
+struct sk_buff *ath12k_dp_mon_tx_alloc_skb(void);
+enum dp_mon_tx_tlv_status
+ath12k_dp_mon_tx_status_get_num_user(u16 tlv_tag,
+ struct hal_tlv_hdr *tx_tlv,
+ u8 *num_users);
+enum hal_rx_mon_status
+ath12k_dp_mon_tx_parse_mon_status(struct ath12k *ar,
+ struct ath12k_mon_data *pmon,
+ int mac_id,
+ struct sk_buff *skb,
+ struct napi_struct *napi,
+ u32 ppdu_id);
+void ath12k_dp_mon_rx_process_ulofdma(struct hal_rx_mon_ppdu_info *ppdu_info);
+int ath12k_dp_mon_rx_process_stats(struct ath12k *ar, int mac_id,
+ struct napi_struct *napi, int *budget);
+#endif
diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
new file mode 100644
index 000000000000..83a43ad48c51
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
@@ -0,0 +1,4234 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include <linux/ieee80211.h>
+#include <linux/kernel.h>
+#include <linux/skbuff.h>
+#include <crypto/hash.h>
+#include "core.h"
+#include "debug.h"
+#include "hal_desc.h"
+#include "hw.h"
+#include "dp_rx.h"
+#include "hal_rx.h"
+#include "dp_tx.h"
+#include "peer.h"
+#include "dp_mon.h"
+
+#define ATH12K_DP_RX_FRAGMENT_TIMEOUT_MS (2 * HZ)
+
+static enum hal_encrypt_type ath12k_dp_rx_h_enctype(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ if (!ab->hw_params->hal_ops->rx_desc_encrypt_valid(desc))
+ return HAL_ENCRYPT_TYPE_OPEN;
+
+ return ab->hw_params->hal_ops->rx_desc_get_encrypt_type(desc);
+}
+
+u8 ath12k_dp_rx_h_decap_type(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_decap_type(desc);
+}
+
+static u8 ath12k_dp_rx_h_mesh_ctl_present(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_mesh_ctl(desc);
+}
+
+static bool ath12k_dp_rx_h_seq_ctrl_valid(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_mpdu_seq_ctl_vld(desc);
+}
+
+static bool ath12k_dp_rx_h_fc_valid(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_mpdu_fc_valid(desc);
+}
+
+static bool ath12k_dp_rx_h_more_frags(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ struct ieee80211_hdr *hdr;
+
+ hdr = (struct ieee80211_hdr *)(skb->data + ab->hw_params->hal_desc_sz);
+ return ieee80211_has_morefrags(hdr->frame_control);
+}
+
+static u16 ath12k_dp_rx_h_frag_no(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ struct ieee80211_hdr *hdr;
+
+ hdr = (struct ieee80211_hdr *)(skb->data + ab->hw_params->hal_desc_sz);
+ return le16_to_cpu(hdr->seq_ctrl) & IEEE80211_SCTL_FRAG;
+}
+
+static u16 ath12k_dp_rx_h_seq_no(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_mpdu_start_seq_no(desc);
+}
+
+static bool ath12k_dp_rx_h_msdu_done(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->dp_rx_h_msdu_done(desc);
+}
+
+static bool ath12k_dp_rx_h_l4_cksum_fail(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->dp_rx_h_l4_cksum_fail(desc);
+}
+
+static bool ath12k_dp_rx_h_ip_cksum_fail(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->dp_rx_h_ip_cksum_fail(desc);
+}
+
+static bool ath12k_dp_rx_h_is_decrypted(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->dp_rx_h_is_decrypted(desc);
+}
+
+u32 ath12k_dp_rx_h_mpdu_err(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->dp_rx_h_mpdu_err(desc);
+}
+
+static u16 ath12k_dp_rx_h_msdu_len(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_msdu_len(desc);
+}
+
+static u8 ath12k_dp_rx_h_sgi(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_msdu_sgi(desc);
+}
+
+static u8 ath12k_dp_rx_h_rate_mcs(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_msdu_rate_mcs(desc);
+}
+
+static u8 ath12k_dp_rx_h_rx_bw(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_msdu_rx_bw(desc);
+}
+
+static u32 ath12k_dp_rx_h_freq(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_msdu_freq(desc);
+}
+
+static u8 ath12k_dp_rx_h_pkt_type(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_msdu_pkt_type(desc);
+}
+
+static u8 ath12k_dp_rx_h_nss(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return hweight8(ab->hw_params->hal_ops->rx_desc_get_msdu_nss(desc));
+}
+
+static u8 ath12k_dp_rx_h_tid(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_mpdu_tid(desc);
+}
+
+static u16 ath12k_dp_rx_h_peer_id(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_mpdu_peer_id(desc);
+}
+
+u8 ath12k_dp_rx_h_l3pad(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_l3_pad_bytes(desc);
+}
+
+static bool ath12k_dp_rx_h_first_msdu(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_first_msdu(desc);
+}
+
+static bool ath12k_dp_rx_h_last_msdu(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_last_msdu(desc);
+}
+
+static void ath12k_dp_rx_desc_end_tlv_copy(struct ath12k_base *ab,
+ struct hal_rx_desc *fdesc,
+ struct hal_rx_desc *ldesc)
+{
+ ab->hw_params->hal_ops->rx_desc_copy_end_tlv(fdesc, ldesc);
+}
+
+static void ath12k_dp_rxdesc_set_msdu_len(struct ath12k_base *ab,
+ struct hal_rx_desc *desc,
+ u16 len)
+{
+ ab->hw_params->hal_ops->rx_desc_set_msdu_len(desc, len);
+}
+
+static bool ath12k_dp_rx_h_is_mcbc(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_is_mcbc(desc);
+}
+
+static bool ath12k_dp_rxdesc_mac_addr2_valid(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_mac_addr2_valid(desc);
+}
+
+static u8 *ath12k_dp_rxdesc_get_mpdu_start_addr2(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_mpdu_start_addr2(desc);
+}
+
+static void ath12k_dp_rx_desc_get_dot11_hdr(struct ath12k_base *ab,
+ struct hal_rx_desc *desc,
+ struct ieee80211_hdr *hdr)
+{
+ ab->hw_params->hal_ops->rx_desc_get_dot11_hdr(desc, hdr);
+}
+
+static void ath12k_dp_rx_desc_get_crypto_header(struct ath12k_base *ab,
+ struct hal_rx_desc *desc,
+ u8 *crypto_hdr,
+ enum hal_encrypt_type enctype)
+{
+ ab->hw_params->hal_ops->rx_desc_get_crypto_header(desc, crypto_hdr, enctype);
+}
+
+static u16 ath12k_dp_rxdesc_get_mpdu_frame_ctrl(struct ath12k_base *ab,
+ struct hal_rx_desc *desc)
+{
+ return ab->hw_params->hal_ops->rx_desc_get_mpdu_frame_ctl(desc);
+}
+
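+/* Drain stale entries from the monitor rings, giving up once
+ * DP_MON_PURGE_TIMEOUT_MS has elapsed.
+ */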
+static int ath12k_dp_purge_mon_ring(struct ath12k_base *ab)
+{
+ int i, reaped = 0;
+ unsigned long timeout = jiffies + msecs_to_jiffies(DP_MON_PURGE_TIMEOUT_MS);
+
+ do {
+ for (i = 0; i < ab->hw_params->num_rxmda_per_pdev; i++)
+ reaped += ath12k_dp_mon_process_ring(ab, i, NULL,
+ DP_MON_SERVICE_BUDGET,
+ ATH12K_DP_RX_MONITOR_MODE);
+
+ /* nothing more to reap */
+ if (reaped < DP_MON_SERVICE_BUDGET)
+ return 0;
+
+ } while (time_before(jiffies, timeout));
+
+ ath12k_warn(ab, "dp mon ring purge timeout");
+
+ return -ETIMEDOUT;
+}
+
+/* Returns number of Rx buffers replenished */
+int ath12k_dp_rx_bufs_replenish(struct ath12k_base *ab, int mac_id,
+ struct dp_rxdma_ring *rx_ring,
+ int req_entries,
+ enum hal_rx_buf_return_buf_manager mgr,
+ bool hw_cc)
+{
+ struct ath12k_buffer_addr *desc;
+ struct hal_srng *srng;
+ struct sk_buff *skb;
+ int num_free;
+ int num_remain;
+ int buf_id;
+ u32 cookie;
+ dma_addr_t paddr;
+ struct ath12k_dp *dp = &ab->dp;
+ struct ath12k_rx_desc_info *rx_desc;
+
+ req_entries = min(req_entries, rx_ring->bufs_max);
+
+ srng = &ab->hal.srng_list[rx_ring->refill_buf_ring.ring_id];
+
+ spin_lock_bh(&srng->lock);
+
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ num_free = ath12k_hal_srng_src_num_free(ab, srng, true);
+ if (!req_entries && (num_free > (rx_ring->bufs_max * 3) / 4))
+ req_entries = num_free;
+
+ req_entries = min(num_free, req_entries);
+ num_remain = req_entries;
+
+ while (num_remain > 0) {
+ skb = dev_alloc_skb(DP_RX_BUFFER_SIZE +
+ DP_RX_BUFFER_ALIGN_SIZE);
+ if (!skb)
+ break;
+
+ if (!IS_ALIGNED((unsigned long)skb->data,
+ DP_RX_BUFFER_ALIGN_SIZE)) {
+ skb_pull(skb,
+ PTR_ALIGN(skb->data, DP_RX_BUFFER_ALIGN_SIZE) -
+ skb->data);
+ }
+
+ paddr = dma_map_single(ab->dev, skb->data,
+ skb->len + skb_tailroom(skb),
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(ab->dev, paddr))
+ goto fail_free_skb;
+
+ if (hw_cc) {
+ spin_lock_bh(&dp->rx_desc_lock);
+
+ /* Take a desc from the free list and move it to the used
+ * list so it can be cleaned up later.
+ *
+ * TODO: pass the removed descs around directly rather than
+ * adding to and reading back from the list, to optimize.
+ */
+ rx_desc = list_first_entry_or_null(&dp->rx_desc_free_list,
+ struct ath12k_rx_desc_info,
+ list);
+ if (!rx_desc) {
+ spin_unlock_bh(&dp->rx_desc_lock);
+ goto fail_dma_unmap;
+ }
+
+ rx_desc->skb = skb;
+ cookie = rx_desc->cookie;
+ list_del(&rx_desc->list);
+ list_add_tail(&rx_desc->list, &dp->rx_desc_used_list);
+
+ spin_unlock_bh(&dp->rx_desc_lock);
+ } else {
+ spin_lock_bh(&rx_ring->idr_lock);
+ buf_id = idr_alloc(&rx_ring->bufs_idr, skb, 0,
+ rx_ring->bufs_max * 3, GFP_ATOMIC);
+ spin_unlock_bh(&rx_ring->idr_lock);
+ if (buf_id < 0)
+ goto fail_dma_unmap;
+ cookie = u32_encode_bits(mac_id,
+ DP_RXDMA_BUF_COOKIE_PDEV_ID) |
+ u32_encode_bits(buf_id,
+ DP_RXDMA_BUF_COOKIE_BUF_ID);
+ }
+
+ desc = ath12k_hal_srng_src_get_next_entry(ab, srng);
+ if (!desc)
+ goto fail_buf_unassign;
+
+ ATH12K_SKB_RXCB(skb)->paddr = paddr;
+
+ num_remain--;
+
+ ath12k_hal_rx_buf_addr_info_set(desc, paddr, cookie, mgr);
+ }
+
+ ath12k_hal_srng_access_end(ab, srng);
+
+ spin_unlock_bh(&srng->lock);
+
+ return req_entries - num_remain;
+
+fail_buf_unassign:
+ if (hw_cc) {
+ spin_lock_bh(&dp->rx_desc_lock);
+ list_del(&rx_desc->list);
+ list_add_tail(&rx_desc->list, &dp->rx_desc_free_list);
+ rx_desc->skb = NULL;
+ spin_unlock_bh(&dp->rx_desc_lock);
+ } else {
+ spin_lock_bh(&rx_ring->idr_lock);
+ idr_remove(&rx_ring->bufs_idr, buf_id);
+ spin_unlock_bh(&rx_ring->idr_lock);
+ }
+fail_dma_unmap:
+ dma_unmap_single(ab->dev, paddr, skb->len + skb_tailroom(skb),
+ DMA_FROM_DEVICE);
+fail_free_skb:
+ dev_kfree_skb_any(skb);
+
+ ath12k_hal_srng_access_end(ab, srng);
+
+ spin_unlock_bh(&srng->lock);
+
+ return req_entries - num_remain;
+}
+
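+/* Release every skb still held in a refill ring's buf idr, unmapping
+ * its DMA before freeing, then destroy the idr itself.
+ */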
+static int ath12k_dp_rxdma_buf_ring_free(struct ath12k_base *ab,
+ struct dp_rxdma_ring *rx_ring)
+{
+ struct sk_buff *skb;
+ int buf_id;
+
+ spin_lock_bh(&rx_ring->idr_lock);
+ idr_for_each_entry(&rx_ring->bufs_idr, skb, buf_id) {
+ idr_remove(&rx_ring->bufs_idr, buf_id);
+ /* TODO: Understand where internal driver does this dma_unmap
+ * of rxdma_buffer.
+ */
+ dma_unmap_single(ab->dev, ATH12K_SKB_RXCB(skb)->paddr,
+ skb->len + skb_tailroom(skb), DMA_FROM_DEVICE);
+ dev_kfree_skb_any(skb);
+ }
+
+ idr_destroy(&rx_ring->bufs_idr);
+ spin_unlock_bh(&rx_ring->idr_lock);
+
+ return 0;
+}
+
+static int ath12k_dp_rxdma_buf_free(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ struct dp_rxdma_ring *rx_ring = &dp->rx_refill_buf_ring;
+
+ ath12k_dp_rxdma_buf_ring_free(ab, rx_ring);
+
+ rx_ring = &dp->rxdma_mon_buf_ring;
+ ath12k_dp_rxdma_buf_ring_free(ab, rx_ring);
+
+ rx_ring = &dp->tx_mon_buf_ring;
+ ath12k_dp_rxdma_buf_ring_free(ab, rx_ring);
+
+ return 0;
+}
+
+static int ath12k_dp_rxdma_ring_buf_setup(struct ath12k_base *ab,
+ struct dp_rxdma_ring *rx_ring,
+ u32 ringtype)
+{
+ int num_entries;
+
+ num_entries = rx_ring->refill_buf_ring.size /
+ ath12k_hal_srng_get_entrysize(ab, ringtype);
+
+ rx_ring->bufs_max = num_entries;
+ if ((ringtype == HAL_RXDMA_MONITOR_BUF) || (ringtype == HAL_TX_MONITOR_BUF))
+ ath12k_dp_mon_buf_replenish(ab, rx_ring, num_entries);
+ else
+ ath12k_dp_rx_bufs_replenish(ab, 0, rx_ring, num_entries,
+ ab->hw_params->hal_params->rx_buf_rbm,
+ ringtype == HAL_RXDMA_BUF);
+ return 0;
+}
+
+static int ath12k_dp_rxdma_buf_setup(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ struct dp_rxdma_ring *rx_ring = &dp->rx_refill_buf_ring;
+ int ret;
+
+ ret = ath12k_dp_rxdma_ring_buf_setup(ab, rx_ring,
+ HAL_RXDMA_BUF);
+ if (ret) {
+ ath12k_warn(ab,
+ "failed to setup HAL_RXDMA_BUF\n");
+ return ret;
+ }
+
+ if (ab->hw_params->rxdma1_enable) {
+ rx_ring = &dp->rxdma_mon_buf_ring;
+ ret = ath12k_dp_rxdma_ring_buf_setup(ab, rx_ring,
+ HAL_RXDMA_MONITOR_BUF);
+ if (ret) {
+ ath12k_warn(ab,
+ "failed to setup HAL_RXDMA_MONITOR_BUF\n");
+ return ret;
+ }
+
+ rx_ring = &dp->tx_mon_buf_ring;
+ ret = ath12k_dp_rxdma_ring_buf_setup(ab, rx_ring,
+ HAL_TX_MONITOR_BUF);
+ if (ret) {
+ ath12k_warn(ab,
+ "failed to setup HAL_TX_MONITOR_BUF\n");
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+static void ath12k_dp_rx_pdev_srng_free(struct ath12k *ar)
+{
+ struct ath12k_pdev_dp *dp = &ar->dp;
+ struct ath12k_base *ab = ar->ab;
+ int i;
+
+ for (i = 0; i < ab->hw_params->num_rxmda_per_pdev; i++) {
+ ath12k_dp_srng_cleanup(ab, &dp->rxdma_mon_dst_ring[i]);
+ ath12k_dp_srng_cleanup(ab, &dp->tx_mon_dst_ring[i]);
+ }
+}
+
+void ath12k_dp_rx_pdev_reo_cleanup(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ int i;
+
+ for (i = 0; i < DP_REO_DST_RING_MAX; i++)
+ ath12k_dp_srng_cleanup(ab, &dp->reo_dst_ring[i]);
+}
+
+int ath12k_dp_rx_pdev_reo_setup(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ int ret;
+ int i;
+
+ for (i = 0; i < DP_REO_DST_RING_MAX; i++) {
+ ret = ath12k_dp_srng_setup(ab, &dp->reo_dst_ring[i],
+ HAL_REO_DST, i, 0,
+ DP_REO_DST_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup reo_dst_ring\n");
+ goto err_reo_cleanup;
+ }
+ }
+
+ return 0;
+
+err_reo_cleanup:
+ ath12k_dp_rx_pdev_reo_cleanup(ab);
+
+ return ret;
+}
+
+static int ath12k_dp_rx_pdev_srng_alloc(struct ath12k *ar)
+{
+ struct ath12k_pdev_dp *dp = &ar->dp;
+ struct ath12k_base *ab = ar->ab;
+ int i;
+ int ret;
+ u32 mac_id = dp->mac_id;
+
+ for (i = 0; i < ab->hw_params->num_rxmda_per_pdev; i++) {
+ ret = ath12k_dp_srng_setup(ar->ab,
+ &dp->rxdma_mon_dst_ring[i],
+ HAL_RXDMA_MONITOR_DST,
+ 0, mac_id + i,
+ DP_RXDMA_MONITOR_DST_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to setup HAL_RXDMA_MONITOR_DST\n");
+ return ret;
+ }
+
+ ret = ath12k_dp_srng_setup(ar->ab,
+ &dp->tx_mon_dst_ring[i],
+ HAL_TX_MONITOR_DST,
+ 0, mac_id + i,
+ DP_TX_MONITOR_DEST_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to setup HAL_TX_MONITOR_DST\n");
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+void ath12k_dp_rx_reo_cmd_list_cleanup(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ struct ath12k_dp_rx_reo_cmd *cmd, *tmp;
+ struct ath12k_dp_rx_reo_cache_flush_elem *cmd_cache, *tmp_cache;
+
+ spin_lock_bh(&dp->reo_cmd_lock);
+ list_for_each_entry_safe(cmd, tmp, &dp->reo_cmd_list, list) {
+ list_del(&cmd->list);
+ dma_unmap_single(ab->dev, cmd->data.paddr,
+ cmd->data.size, DMA_BIDIRECTIONAL);
+ kfree(cmd->data.vaddr);
+ kfree(cmd);
+ }
+
+ list_for_each_entry_safe(cmd_cache, tmp_cache,
+ &dp->reo_cmd_cache_flush_list, list) {
+ list_del(&cmd_cache->list);
+ dp->reo_cmd_cache_flush_count--;
+ dma_unmap_single(ab->dev, cmd_cache->data.paddr,
+ cmd_cache->data.size, DMA_BIDIRECTIONAL);
+ kfree(cmd_cache->data.vaddr);
+ kfree(cmd_cache);
+ }
+ spin_unlock_bh(&dp->reo_cmd_lock);
+}
+
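+/* Completion handler for REO flush commands: frees the host copy of
+ * the rx tid hw queue descriptor once hw has flushed it, warning if
+ * the flush did not succeed.
+ */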
+static void ath12k_dp_reo_cmd_free(struct ath12k_dp *dp, void *ctx,
+ enum hal_reo_cmd_status status)
+{
+ struct ath12k_dp_rx_tid *rx_tid = ctx;
+
+ if (status != HAL_REO_CMD_SUCCESS)
+ ath12k_warn(dp->ab, "failed to flush rx tid hw desc, tid %d status %d\n",
+ rx_tid->tid, status);
+
+ dma_unmap_single(dp->ab->dev, rx_tid->paddr, rx_tid->size,
+ DMA_BIDIRECTIONAL);
+ kfree(rx_tid->vaddr);
+ rx_tid->vaddr = NULL;
+}
+
+static int ath12k_dp_reo_cmd_send(struct ath12k_base *ab, struct ath12k_dp_rx_tid *rx_tid,
+ enum hal_reo_cmd_type type,
+ struct ath12k_hal_reo_cmd *cmd,
+ void (*cb)(struct ath12k_dp *dp, void *ctx,
+ enum hal_reo_cmd_status status))
+{
+ struct ath12k_dp *dp = &ab->dp;
+ struct ath12k_dp_rx_reo_cmd *dp_cmd;
+ struct hal_srng *cmd_ring;
+ int cmd_num;
+
+ cmd_ring = &ab->hal.srng_list[dp->reo_cmd_ring.ring_id];
+ cmd_num = ath12k_hal_reo_cmd_send(ab, cmd_ring, type, cmd);
+
+ /* cmd_num should start from 1; on failure the error code is returned */
+ if (cmd_num < 0)
+ return cmd_num;
+
+ /* reo cmd ring descriptors have cmd_num starting from 1 */
+ if (cmd_num == 0)
+ return -EINVAL;
+
+ if (!cb)
+ return 0;
+
+ /* Can this be optimized so that we keep the pending command list only
+ * for tid delete command to free up the resource on the command status
+ * indication?
+ */
+ dp_cmd = kzalloc(sizeof(*dp_cmd), GFP_ATOMIC);
+
+ if (!dp_cmd)
+ return -ENOMEM;
+
+ memcpy(&dp_cmd->data, rx_tid, sizeof(*rx_tid));
+ dp_cmd->cmd_num = cmd_num;
+ dp_cmd->handler = cb;
+
+ spin_lock_bh(&dp->reo_cmd_lock);
+ list_add_tail(&dp_cmd->list, &dp->reo_cmd_list);
+ spin_unlock_bh(&dp->reo_cmd_lock);
+
+ return 0;
+}
+
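+/* Flush the hw cached copy of a REO queue descriptor chunk by chunk;
+ * only the final command requests a status callback, so the host
+ * memory is freed exactly once.
+ */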
+static void ath12k_dp_reo_cache_flush(struct ath12k_base *ab,
+ struct ath12k_dp_rx_tid *rx_tid)
+{
+ struct ath12k_hal_reo_cmd cmd = {0};
+ unsigned long tot_desc_sz, desc_sz;
+ int ret;
+
+ tot_desc_sz = rx_tid->size;
+ desc_sz = ath12k_hal_reo_qdesc_size(0, HAL_DESC_REO_NON_QOS_TID);
+
+ while (tot_desc_sz > desc_sz) {
+ tot_desc_sz -= desc_sz;
+ cmd.addr_lo = lower_32_bits(rx_tid->paddr + tot_desc_sz);
+ cmd.addr_hi = upper_32_bits(rx_tid->paddr);
+ ret = ath12k_dp_reo_cmd_send(ab, rx_tid,
+ HAL_REO_CMD_FLUSH_CACHE, &cmd,
+ NULL);
+ if (ret)
+ ath12k_warn(ab,
+ "failed to send HAL_REO_CMD_FLUSH_CACHE, tid %d (%d)\n",
+ rx_tid->tid, ret);
+ }
+
+ memset(&cmd, 0, sizeof(cmd));
+ cmd.addr_lo = lower_32_bits(rx_tid->paddr);
+ cmd.addr_hi = upper_32_bits(rx_tid->paddr);
+ cmd.flag = HAL_REO_CMD_FLG_NEED_STATUS;
+ ret = ath12k_dp_reo_cmd_send(ab, rx_tid,
+ HAL_REO_CMD_FLUSH_CACHE,
+ &cmd, ath12k_dp_reo_cmd_free);
+ if (ret) {
+ ath12k_err(ab, "failed to send HAL_REO_CMD_FLUSH_CACHE cmd, tid %d (%d)\n",
+ rx_tid->tid, ret);
+ dma_unmap_single(ab->dev, rx_tid->paddr, rx_tid->size,
+ DMA_BIDIRECTIONAL);
+ kfree(rx_tid->vaddr);
+ rx_tid->vaddr = NULL;
+ }
+}
+
+static void ath12k_dp_rx_tid_del_func(struct ath12k_dp *dp, void *ctx,
+ enum hal_reo_cmd_status status)
+{
+ struct ath12k_base *ab = dp->ab;
+ struct ath12k_dp_rx_tid *rx_tid = ctx;
+ struct ath12k_dp_rx_reo_cache_flush_elem *elem, *tmp;
+
+ if (status == HAL_REO_CMD_DRAIN) {
+ goto free_desc;
+ } else if (status != HAL_REO_CMD_SUCCESS) {
+ /* Shouldn't happen! Cleanup in case of other failure? */
+ ath12k_warn(ab, "failed to delete rx tid %d hw descriptor %d\n",
+ rx_tid->tid, status);
+ return;
+ }
+
+ elem = kzalloc(sizeof(*elem), GFP_ATOMIC);
+ if (!elem)
+ goto free_desc;
+
+ elem->ts = jiffies;
+ memcpy(&elem->data, rx_tid, sizeof(*rx_tid));
+
+ spin_lock_bh(&dp->reo_cmd_lock);
+ list_add_tail(&elem->list, &dp->reo_cmd_cache_flush_list);
+ dp->reo_cmd_cache_flush_count++;
+
+ /* Flush and invalidate aged REO desc from HW cache */
+ list_for_each_entry_safe(elem, tmp, &dp->reo_cmd_cache_flush_list,
+ list) {
+ if (dp->reo_cmd_cache_flush_count > ATH12K_DP_RX_REO_DESC_FREE_THRES ||
+ time_after(jiffies, elem->ts +
+ msecs_to_jiffies(ATH12K_DP_RX_REO_DESC_FREE_TIMEOUT_MS))) {
+ list_del(&elem->list);
+ dp->reo_cmd_cache_flush_count--;
+
+ /* Unlock the reo_cmd_lock before calling ath12k_dp_reo_cmd_send()
+ * within ath12k_dp_reo_cache_flush(). The reo_cmd_cache_flush_list
+ * is used in only two contexts: this function, called from napi,
+ * and ath12k_dp_free() during core destroy. Before dp_free the
+ * irqs are disabled and synchronized, so nothing can race with the
+ * adds or deletes to this list. Hence the unlock-lock is safe here.
+ */
+ spin_unlock_bh(&dp->reo_cmd_lock);
+
+ ath12k_dp_reo_cache_flush(ab, &elem->data);
+ kfree(elem);
+ spin_lock_bh(&dp->reo_cmd_lock);
+ }
+ }
+ spin_unlock_bh(&dp->reo_cmd_lock);
+
+ return;
+free_desc:
+ dma_unmap_single(ab->dev, rx_tid->paddr, rx_tid->size,
+ DMA_BIDIRECTIONAL);
+ kfree(rx_tid->vaddr);
+ rx_tid->vaddr = NULL;
+}
+
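+/* Point the per-peer, per-tid entry of the REO queue LUT at the
+ * given queue descriptor address.
+ */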
+static void ath12k_peer_rx_tid_qref_setup(struct ath12k_base *ab, u16 peer_id, u16 tid,
+ dma_addr_t paddr)
+{
+ struct ath12k_reo_queue_ref *qref;
+ struct ath12k_dp *dp = &ab->dp;
+
+ if (!ab->hw_params->reoq_lut_support)
+ return;
+
+ /* TODO: based on ML peer or not, select the LUT. below assumes non
+ * ML peer
+ */
+ qref = (struct ath12k_reo_queue_ref *)dp->reoq_lut.vaddr +
+ (peer_id * (IEEE80211_NUM_TIDS + 1) + tid);
+
+ qref->info0 = u32_encode_bits(lower_32_bits(paddr),
+ BUFFER_ADDR_INFO0_ADDR);
+ qref->info1 = u32_encode_bits(upper_32_bits(paddr),
+ BUFFER_ADDR_INFO1_ADDR) |
+ u32_encode_bits(tid, DP_REO_QREF_NUM);
+}
+
+static void ath12k_peer_rx_tid_qref_reset(struct ath12k_base *ab, u16 peer_id, u16 tid)
+{
+ struct ath12k_reo_queue_ref *qref;
+ struct ath12k_dp *dp = &ab->dp;
+
+ if (!ab->hw_params->reoq_lut_support)
+ return;
+
+ /* TODO: based on ML peer or not, select the LUT. below assumes non
+ * ML peer
+ */
+ qref = (struct ath12k_reo_queue_ref *)dp->reoq_lut.vaddr +
+ (peer_id * (IEEE80211_NUM_TIDS + 1) + tid);
+
+ qref->info0 = u32_encode_bits(0, BUFFER_ADDR_INFO0_ADDR);
+ qref->info1 = u32_encode_bits(0, BUFFER_ADDR_INFO1_ADDR) |
+ u32_encode_bits(tid, DP_REO_QREF_NUM);
+}
+
+void ath12k_dp_rx_peer_tid_delete(struct ath12k *ar,
+ struct ath12k_peer *peer, u8 tid)
+{
+ struct ath12k_hal_reo_cmd cmd = {0};
+ struct ath12k_dp_rx_tid *rx_tid = &peer->rx_tid[tid];
+ int ret;
+
+ if (!rx_tid->active)
+ return;
+
+ cmd.flag = HAL_REO_CMD_FLG_NEED_STATUS;
+ cmd.addr_lo = lower_32_bits(rx_tid->paddr);
+ cmd.addr_hi = upper_32_bits(rx_tid->paddr);
+ cmd.upd0 = HAL_REO_CMD_UPD0_VLD;
+ ret = ath12k_dp_reo_cmd_send(ar->ab, rx_tid,
+ HAL_REO_CMD_UPDATE_RX_QUEUE, &cmd,
+ ath12k_dp_rx_tid_del_func);
+ if (ret) {
+ ath12k_err(ar->ab, "failed to send HAL_REO_CMD_UPDATE_RX_QUEUE cmd, tid %d (%d)\n",
+ tid, ret);
+ dma_unmap_single(ar->ab->dev, rx_tid->paddr, rx_tid->size,
+ DMA_BIDIRECTIONAL);
+ kfree(rx_tid->vaddr);
+ rx_tid->vaddr = NULL;
+ }
+
+ ath12k_peer_rx_tid_qref_reset(ar->ab, peer->peer_id, tid);
+
+ rx_tid->active = false;
+}
+
+/* TODO: it's strange (and ugly) that struct hal_reo_dest_ring is converted
+ * to struct hal_wbm_release_ring, I couldn't figure out the logic behind
+ * that.
+ */
+static int ath12k_dp_rx_link_desc_return(struct ath12k_base *ab,
+ struct hal_reo_dest_ring *ring,
+ enum hal_wbm_rel_bm_act action)
+{
+ struct hal_wbm_release_ring *link_desc = (struct hal_wbm_release_ring *)ring;
+ struct hal_wbm_release_ring *desc;
+ struct ath12k_dp *dp = &ab->dp;
+ struct hal_srng *srng;
+ int ret = 0;
+
+ srng = &ab->hal.srng_list[dp->wbm_desc_rel_ring.ring_id];
+
+ spin_lock_bh(&srng->lock);
+
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ desc = ath12k_hal_srng_src_get_next_entry(ab, srng);
+ if (!desc) {
+ ret = -ENOBUFS;
+ goto exit;
+ }
+
+ ath12k_hal_rx_msdu_link_desc_set(ab, desc, link_desc, action);
+
+exit:
+ ath12k_hal_srng_access_end(ab, srng);
+
+ spin_unlock_bh(&srng->lock);
+
+ return ret;
+}
+
+static void ath12k_dp_rx_frags_cleanup(struct ath12k_dp_rx_tid *rx_tid,
+ bool rel_link_desc)
+{
+ struct ath12k_base *ab = rx_tid->ab;
+
+ lockdep_assert_held(&ab->base_lock);
+
+ if (rx_tid->dst_ring_desc) {
+ if (rel_link_desc)
+ ath12k_dp_rx_link_desc_return(ab, rx_tid->dst_ring_desc,
+ HAL_WBM_REL_BM_ACT_PUT_IN_IDLE);
+ kfree(rx_tid->dst_ring_desc);
+ rx_tid->dst_ring_desc = NULL;
+ }
+
+ rx_tid->cur_sn = 0;
+ rx_tid->last_frag_no = 0;
+ rx_tid->rx_frag_bitmap = 0;
+ __skb_queue_purge(&rx_tid->rx_frags);
+}
+
+void ath12k_dp_rx_peer_tid_cleanup(struct ath12k *ar, struct ath12k_peer *peer)
+{
+ struct ath12k_dp_rx_tid *rx_tid;
+ int i;
+
+ lockdep_assert_held(&ar->ab->base_lock);
+
+ for (i = 0; i <= IEEE80211_NUM_TIDS; i++) {
+ rx_tid = &peer->rx_tid[i];
+
+ ath12k_dp_rx_peer_tid_delete(ar, peer, i);
+ ath12k_dp_rx_frags_cleanup(rx_tid, true);
+
+ spin_unlock_bh(&ar->ab->base_lock);
+ del_timer_sync(&rx_tid->frag_timer);
+ spin_lock_bh(&ar->ab->base_lock);
+ }
+}
+
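+/* Update the BA window size (and optionally the SSN) of an already
+ * set up rx tid queue via a REO UPDATE_RX_QUEUE command.
+ */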
+static int ath12k_peer_rx_tid_reo_update(struct ath12k *ar,
+ struct ath12k_peer *peer,
+ struct ath12k_dp_rx_tid *rx_tid,
+ u32 ba_win_sz, u16 ssn,
+ bool update_ssn)
+{
+ struct ath12k_hal_reo_cmd cmd = {0};
+ int ret;
+
+ cmd.addr_lo = lower_32_bits(rx_tid->paddr);
+ cmd.addr_hi = upper_32_bits(rx_tid->paddr);
+ cmd.flag = HAL_REO_CMD_FLG_NEED_STATUS;
+ cmd.upd0 = HAL_REO_CMD_UPD0_BA_WINDOW_SIZE;
+ cmd.ba_window_size = ba_win_sz;
+
+ if (update_ssn) {
+ cmd.upd0 |= HAL_REO_CMD_UPD0_SSN;
+ cmd.upd2 = u32_encode_bits(ssn, HAL_REO_CMD_UPD2_SSN);
+ }
+
+ ret = ath12k_dp_reo_cmd_send(ar->ab, rx_tid,
+ HAL_REO_CMD_UPDATE_RX_QUEUE, &cmd,
+ NULL);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to update rx tid queue, tid %d (%d)\n",
+ rx_tid->tid, ret);
+ return ret;
+ }
+
+ rx_tid->ba_win_sz = ba_win_sz;
+
+ return 0;
+}
+
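+/* Allocate, initialize and dma-map the hw REO queue descriptor for a
+ * peer/tid and register it either in the REO LUT or via WMI; if the
+ * tid queue is already active, only update its BA window.
+ */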
+int ath12k_dp_rx_peer_tid_setup(struct ath12k *ar, const u8 *peer_mac, int vdev_id,
+ u8 tid, u32 ba_win_sz, u16 ssn,
+ enum hal_pn_type pn_type)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_dp *dp = &ab->dp;
+ struct hal_rx_reo_queue *addr_aligned;
+ struct ath12k_peer *peer;
+ struct ath12k_dp_rx_tid *rx_tid;
+ u32 hw_desc_sz;
+ void *vaddr;
+ dma_addr_t paddr;
+ int ret;
+
+ spin_lock_bh(&ab->base_lock);
+
+ peer = ath12k_peer_find(ab, vdev_id, peer_mac);
+ if (!peer) {
+ spin_unlock_bh(&ab->base_lock);
+ ath12k_warn(ab, "failed to find the peer to set up rx tid\n");
+ return -ENOENT;
+ }
+
+ if (ab->hw_params->reoq_lut_support && !dp->reoq_lut.vaddr) {
+ spin_unlock_bh(&ab->base_lock);
+ ath12k_warn(ab, "reo qref table is not setup\n");
+ return -EINVAL;
+ }
+
+ if (peer->peer_id > DP_MAX_PEER_ID || tid > IEEE80211_NUM_TIDS) {
+ ath12k_warn(ab, "peer id %d or tid %d out of range for reoq setup\n",
+ peer->peer_id, tid);
+ spin_unlock_bh(&ab->base_lock);
+ return -EINVAL;
+ }
+
+ rx_tid = &peer->rx_tid[tid];
+ /* Update the tid queue if it is already setup */
+ if (rx_tid->active) {
+ paddr = rx_tid->paddr;
+ ret = ath12k_peer_rx_tid_reo_update(ar, peer, rx_tid,
+ ba_win_sz, ssn, true);
+ spin_unlock_bh(&ab->base_lock);
+ if (ret) {
+ ath12k_warn(ab, "failed to update reo for rx tid %d\n", tid);
+ return ret;
+ }
+
+ return ret;
+ }
+
+ rx_tid->tid = tid;
+ rx_tid->ba_win_sz = ba_win_sz;
+
+ /* TODO: Optimize the memory allocation for qos tid based on
+ * the actual BA window size in REO tid update path.
+ */
+ if (tid == HAL_DESC_REO_NON_QOS_TID)
+ hw_desc_sz = ath12k_hal_reo_qdesc_size(ba_win_sz, tid);
+ else
+ hw_desc_sz = ath12k_hal_reo_qdesc_size(DP_BA_WIN_SZ_MAX, tid);
+
+ vaddr = kzalloc(hw_desc_sz + HAL_LINK_DESC_ALIGN - 1, GFP_ATOMIC);
+ if (!vaddr) {
+ spin_unlock_bh(&ab->base_lock);
+ return -ENOMEM;
+ }
+
+ addr_aligned = PTR_ALIGN(vaddr, HAL_LINK_DESC_ALIGN);
+
+ ath12k_hal_reo_qdesc_setup(addr_aligned, tid, ba_win_sz,
+ ssn, pn_type);
+
+ paddr = dma_map_single(ab->dev, addr_aligned, hw_desc_sz,
+ DMA_BIDIRECTIONAL);
+
+ ret = dma_mapping_error(ab->dev, paddr);
+ if (ret) {
+ spin_unlock_bh(&ab->base_lock);
+ goto err_mem_free;
+ }
+
+ rx_tid->vaddr = vaddr;
+ rx_tid->paddr = paddr;
+ rx_tid->size = hw_desc_sz;
+ rx_tid->active = true;
+
+ if (ab->hw_params->reoq_lut_support) {
+ /* Update the REO queue LUT at the corresponding peer id
+ * and tid with qaddr.
+ */
+ ath12k_peer_rx_tid_qref_setup(ab, peer->peer_id, tid, paddr);
+ spin_unlock_bh(&ab->base_lock);
+ } else {
+ spin_unlock_bh(&ab->base_lock);
+ ret = ath12k_wmi_peer_rx_reorder_queue_setup(ar, vdev_id, peer_mac,
+ paddr, tid, 1, ba_win_sz);
+ }
+
+ return ret;
+
+err_mem_free:
+ kfree(vaddr);
+
+ return ret;
+}
+
+int ath12k_dp_rx_ampdu_start(struct ath12k *ar,
+ struct ieee80211_ampdu_params *params)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_sta *arsta = (void *)params->sta->drv_priv;
+ int vdev_id = arsta->arvif->vdev_id;
+ int ret;
+
+ ret = ath12k_dp_rx_peer_tid_setup(ar, params->sta->addr, vdev_id,
+ params->tid, params->buf_size,
+ params->ssn, arsta->pn_type);
+ if (ret)
+ ath12k_warn(ab, "failed to setup rx tid %d\n", ret);
+
+ return ret;
+}
+
+int ath12k_dp_rx_ampdu_stop(struct ath12k *ar,
+ struct ieee80211_ampdu_params *params)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_peer *peer;
+ struct ath12k_sta *arsta = (void *)params->sta->drv_priv;
+ int vdev_id = arsta->arvif->vdev_id;
+ bool active;
+ int ret;
+
+ spin_lock_bh(&ab->base_lock);
+
+ peer = ath12k_peer_find(ab, vdev_id, params->sta->addr);
+ if (!peer) {
+ spin_unlock_bh(&ab->base_lock);
+ ath12k_warn(ab, "failed to find the peer to stop rx aggregation\n");
+ return -ENOENT;
+ }
+
+ active = peer->rx_tid[params->tid].active;
+
+ if (!active) {
+ spin_unlock_bh(&ab->base_lock);
+ return 0;
+ }
+
+ ret = ath12k_peer_rx_tid_reo_update(ar, peer, &peer->rx_tid[params->tid],
+ 1, 0, false);
+ spin_unlock_bh(&ab->base_lock);
+ if (ret) {
+ ath12k_warn(ab, "failed to update reo for rx tid %d: %d\n",
+ params->tid, ret);
+ return ret;
+ }
+
+ return ret;
+}
+
+int ath12k_dp_rx_peer_pn_replay_config(struct ath12k_vif *arvif,
+ const u8 *peer_addr,
+ enum set_key_cmd key_cmd,
+ struct ieee80211_key_conf *key)
+{
+ struct ath12k *ar = arvif->ar;
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_hal_reo_cmd cmd = {0};
+ struct ath12k_peer *peer;
+ struct ath12k_dp_rx_tid *rx_tid;
+ u8 tid;
+ int ret = 0;
+
+ /* NOTE: Enable PN/TSC replay check offload only for unicast frames.
+ * We use mac80211 PN/TSC replay check functionality for bcast/mcast
+ * for now.
+ */
+ if (!(key->flags & IEEE80211_KEY_FLAG_PAIRWISE))
+ return 0;
+
+ cmd.flag = HAL_REO_CMD_FLG_NEED_STATUS;
+ cmd.upd0 = HAL_REO_CMD_UPD0_PN |
+ HAL_REO_CMD_UPD0_PN_SIZE |
+ HAL_REO_CMD_UPD0_PN_VALID |
+ HAL_REO_CMD_UPD0_PN_CHECK |
+ HAL_REO_CMD_UPD0_SVLD;
+
+ switch (key->cipher) {
+ case WLAN_CIPHER_SUITE_TKIP:
+ case WLAN_CIPHER_SUITE_CCMP:
+ case WLAN_CIPHER_SUITE_CCMP_256:
+ case WLAN_CIPHER_SUITE_GCMP:
+ case WLAN_CIPHER_SUITE_GCMP_256:
+ if (key_cmd == SET_KEY) {
+ cmd.upd1 |= HAL_REO_CMD_UPD1_PN_CHECK;
+ cmd.pn_size = 48;
+ }
+ break;
+ default:
+ break;
+ }
+
+ spin_lock_bh(&ab->base_lock);
+
+ peer = ath12k_peer_find(ab, arvif->vdev_id, peer_addr);
+ if (!peer) {
+ spin_unlock_bh(&ab->base_lock);
+ ath12k_warn(ab, "failed to find the peer %pM to configure pn replay detection\n",
+ peer_addr);
+ return -ENOENT;
+ }
+
+ for (tid = 0; tid <= IEEE80211_NUM_TIDS; tid++) {
+ rx_tid = &peer->rx_tid[tid];
+ if (!rx_tid->active)
+ continue;
+ cmd.addr_lo = lower_32_bits(rx_tid->paddr);
+ cmd.addr_hi = upper_32_bits(rx_tid->paddr);
+ ret = ath12k_dp_reo_cmd_send(ab, rx_tid,
+ HAL_REO_CMD_UPDATE_RX_QUEUE,
+ &cmd, NULL);
+ if (ret) {
+ ath12k_warn(ab, "failed to configure rx tid %d queue of peer %pM for pn replay detection %d\n",
+ tid, peer_addr, ret);
+ break;
+ }
+ }
+
+ spin_unlock_bh(&ab->base_lock);
+
+ return ret;
+}
+
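+/* Return the slot of the given peer in the ppdu user stats array, or
+ * the first unused slot; -EINVAL if the array is full.
+ */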
+static int ath12k_get_ppdu_user_index(struct htt_ppdu_stats *ppdu_stats,
+ u16 peer_id)
+{
+ int i;
+
+ for (i = 0; i < HTT_PPDU_STATS_MAX_USERS - 1; i++) {
+ if (ppdu_stats->user_stats[i].is_valid_peer_id) {
+ if (peer_id == ppdu_stats->user_stats[i].peer_id)
+ return i;
+ } else {
+ return i;
+ }
+ }
+
+ return -EINVAL;
+}
+
+static int ath12k_htt_tlv_ppdu_stats_parse(struct ath12k_base *ab,
+ u16 tag, u16 len, const void *ptr,
+ void *data)
+{
+ const struct htt_ppdu_stats_usr_cmpltn_ack_ba_status *ba_status;
+ const struct htt_ppdu_stats_usr_cmpltn_cmn *cmplt_cmn;
+ const struct htt_ppdu_stats_user_rate *user_rate;
+ struct htt_ppdu_stats_info *ppdu_info;
+ struct htt_ppdu_user_stats *user_stats;
+ int cur_user;
+ u16 peer_id;
+
+ ppdu_info = data;
+
+ switch (tag) {
+ case HTT_PPDU_STATS_TAG_COMMON:
+ if (len < sizeof(struct htt_ppdu_stats_common)) {
+ ath12k_warn(ab, "Invalid len %d for the tag 0x%x\n",
+ len, tag);
+ return -EINVAL;
+ }
+ memcpy(&ppdu_info->ppdu_stats.common, ptr,
+ sizeof(struct htt_ppdu_stats_common));
+ break;
+ case HTT_PPDU_STATS_TAG_USR_RATE:
+ if (len < sizeof(struct htt_ppdu_stats_user_rate)) {
+ ath12k_warn(ab, "Invalid len %d for the tag 0x%x\n",
+ len, tag);
+ return -EINVAL;
+ }
+ user_rate = ptr;
+ peer_id = le16_to_cpu(user_rate->sw_peer_id);
+ cur_user = ath12k_get_ppdu_user_index(&ppdu_info->ppdu_stats,
+ peer_id);
+ if (cur_user < 0)
+ return -EINVAL;
+ user_stats = &ppdu_info->ppdu_stats.user_stats[cur_user];
+ user_stats->peer_id = peer_id;
+ user_stats->is_valid_peer_id = true;
+ memcpy(&user_stats->rate, ptr,
+ sizeof(struct htt_ppdu_stats_user_rate));
+ user_stats->tlv_flags |= BIT(tag);
+ break;
+ case HTT_PPDU_STATS_TAG_USR_COMPLTN_COMMON:
+ if (len < sizeof(struct htt_ppdu_stats_usr_cmpltn_cmn)) {
+ ath12k_warn(ab, "Invalid len %d for the tag 0x%x\n",
+ len, tag);
+ return -EINVAL;
+ }
+
+ cmplt_cmn = ptr;
+ peer_id = le16_to_cpu(cmplt_cmn->sw_peer_id);
+ cur_user = ath12k_get_ppdu_user_index(&ppdu_info->ppdu_stats,
+ peer_id);
+ if (cur_user < 0)
+ return -EINVAL;
+ user_stats = &ppdu_info->ppdu_stats.user_stats[cur_user];
+ user_stats->peer_id = peer_id;
+ user_stats->is_valid_peer_id = true;
+ memcpy(&user_stats->cmpltn_cmn, ptr,
+ sizeof(struct htt_ppdu_stats_usr_cmpltn_cmn));
+ user_stats->tlv_flags |= BIT(tag);
+ break;
+ case HTT_PPDU_STATS_TAG_USR_COMPLTN_ACK_BA_STATUS:
+ if (len <
+ sizeof(struct htt_ppdu_stats_usr_cmpltn_ack_ba_status)) {
+ ath12k_warn(ab, "Invalid len %d for the tag 0x%x\n",
+ len, tag);
+ return -EINVAL;
+ }
+
+ ba_status = ptr;
+ peer_id = le16_to_cpu(ba_status->sw_peer_id);
+ cur_user = ath12k_get_ppdu_user_index(&ppdu_info->ppdu_stats,
+ peer_id);
+ if (cur_user < 0)
+ return -EINVAL;
+ user_stats = &ppdu_info->ppdu_stats.user_stats[cur_user];
+ user_stats->peer_id = peer_id;
+ user_stats->is_valid_peer_id = true;
+ memcpy(&user_stats->ack_ba, ptr,
+ sizeof(struct htt_ppdu_stats_usr_cmpltn_ack_ba_status));
+ user_stats->tlv_flags |= BIT(tag);
+ break;
+ }
+ return 0;
+}
+
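+/* Walk a buffer of HTT TLVs (a header encoding tag and length,
+ * followed by that many payload bytes) and hand each payload to
+ * @iter. Bails out on truncated TLVs or when the iterator runs out
+ * of memory.
+ */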
+static int ath12k_dp_htt_tlv_iter(struct ath12k_base *ab, const void *ptr, size_t len,
+ int (*iter)(struct ath12k_base *ar, u16 tag, u16 len,
+ const void *ptr, void *data),
+ void *data)
+{
+ const struct htt_tlv *tlv;
+ const void *begin = ptr;
+ u16 tlv_tag, tlv_len;
+ int ret = -EINVAL;
+
+ while (len > 0) {
+ if (len < sizeof(*tlv)) {
+ ath12k_err(ab, "htt tlv parse failure at byte %zd (%zu bytes left, %zu expected)\n",
+ ptr - begin, len, sizeof(*tlv));
+ return -EINVAL;
+ }
+ tlv = (struct htt_tlv *)ptr;
+ tlv_tag = le32_get_bits(tlv->header, HTT_TLV_TAG);
+ tlv_len = le32_get_bits(tlv->header, HTT_TLV_LEN);
+ ptr += sizeof(*tlv);
+ len -= sizeof(*tlv);
+
+ if (tlv_len > len) {
+ ath12k_err(ab, "htt tlv parse failure of tag %u at byte %zd (%zu bytes left, %u expected)\n",
+ tlv_tag, ptr - begin, len, tlv_len);
+ return -EINVAL;
+ }
+ ret = iter(ab, tlv_tag, tlv_len, ptr, data);
+ if (ret == -ENOMEM)
+ return ret;
+
+ ptr += tlv_len;
+ len -= tlv_len;
+ }
+ return 0;
+}
+
+static void
+ath12k_update_per_peer_tx_stats(struct ath12k *ar,
+ struct htt_ppdu_stats *ppdu_stats, u8 user)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_peer *peer;
+ struct ieee80211_sta *sta;
+ struct ath12k_sta *arsta;
+ struct htt_ppdu_stats_user_rate *user_rate;
+ struct ath12k_per_peer_tx_stats *peer_stats = &ar->peer_tx_stats;
+ struct htt_ppdu_user_stats *usr_stats = &ppdu_stats->user_stats[user];
+ struct htt_ppdu_stats_common *common = &ppdu_stats->common;
+ int ret;
+ u8 flags, mcs, nss, bw, sgi, dcm, rate_idx = 0;
+ u32 v, succ_bytes = 0;
+ u16 tones, rate = 0, succ_pkts = 0;
+ u32 tx_duration = 0;
+ u8 tid = HTT_PPDU_STATS_NON_QOS_TID;
+ bool is_ampdu = false;
+
+ if (!usr_stats)
+ return;
+
+ if (!(usr_stats->tlv_flags & BIT(HTT_PPDU_STATS_TAG_USR_RATE)))
+ return;
+
+ if (usr_stats->tlv_flags & BIT(HTT_PPDU_STATS_TAG_USR_COMPLTN_COMMON))
+ is_ampdu =
+ HTT_USR_CMPLTN_IS_AMPDU(usr_stats->cmpltn_cmn.flags);
+
+ if (usr_stats->tlv_flags &
+ BIT(HTT_PPDU_STATS_TAG_USR_COMPLTN_ACK_BA_STATUS)) {
+ succ_bytes = le32_to_cpu(usr_stats->ack_ba.success_bytes);
+ succ_pkts = le32_get_bits(usr_stats->ack_ba.info,
+ HTT_PPDU_STATS_ACK_BA_INFO_NUM_MSDU_M);
+ tid = le32_get_bits(usr_stats->ack_ba.info,
+ HTT_PPDU_STATS_ACK_BA_INFO_TID_NUM);
+ }
+
+ if (common->fes_duration_us)
+ tx_duration = le32_to_cpu(common->fes_duration_us);
+
+ user_rate = &usr_stats->rate;
+ flags = HTT_USR_RATE_PREAMBLE(user_rate->rate_flags);
+ bw = HTT_USR_RATE_BW(user_rate->rate_flags) - 2;
+ nss = HTT_USR_RATE_NSS(user_rate->rate_flags) + 1;
+ mcs = HTT_USR_RATE_MCS(user_rate->rate_flags);
+ sgi = HTT_USR_RATE_GI(user_rate->rate_flags);
+ dcm = HTT_USR_RATE_DCM(user_rate->rate_flags);
+
+ /* Note: If the host configured fixed rates, or in some other
+ * special cases, broadcast/management frames are sent at a
+ * different rate. Should firmware rate control be skipped for
+ * those?
+ */
+
+ if (flags == WMI_RATE_PREAMBLE_HE && mcs > ATH12K_HE_MCS_MAX) {
+ ath12k_warn(ab, "Invalid HE mcs %d peer stats\n", mcs);
+ return;
+ }
+
+ if (flags == WMI_RATE_PREAMBLE_VHT && mcs > ATH12K_VHT_MCS_MAX) {
+ ath12k_warn(ab, "Invalid VHT mcs %d peer stats\n", mcs);
+ return;
+ }
+
+ if (flags == WMI_RATE_PREAMBLE_HT && (mcs > ATH12K_HT_MCS_MAX || nss < 1)) {
+ ath12k_warn(ab, "Invalid HT mcs %d nss %d peer stats\n",
+ mcs, nss);
+ return;
+ }
+
+ if (flags == WMI_RATE_PREAMBLE_CCK || flags == WMI_RATE_PREAMBLE_OFDM) {
+ ret = ath12k_mac_hw_ratecode_to_legacy_rate(mcs,
+ flags,
+ &rate_idx,
+ &rate);
+ if (ret < 0)
+ return;
+ }
+
+ rcu_read_lock();
+ spin_lock_bh(&ab->base_lock);
+ peer = ath12k_peer_find_by_id(ab, usr_stats->peer_id);
+
+ if (!peer || !peer->sta) {
+ spin_unlock_bh(&ab->base_lock);
+ rcu_read_unlock();
+ return;
+ }
+
+ sta = peer->sta;
+ arsta = (struct ath12k_sta *)sta->drv_priv;
+
+ memset(&arsta->txrate, 0, sizeof(arsta->txrate));
+
+ switch (flags) {
+ case WMI_RATE_PREAMBLE_OFDM:
+ arsta->txrate.legacy = rate;
+ break;
+ case WMI_RATE_PREAMBLE_CCK:
+ arsta->txrate.legacy = rate;
+ break;
+ case WMI_RATE_PREAMBLE_HT:
+ arsta->txrate.mcs = mcs + 8 * (nss - 1);
+ arsta->txrate.flags = RATE_INFO_FLAGS_MCS;
+ if (sgi)
+ arsta->txrate.flags |= RATE_INFO_FLAGS_SHORT_GI;
+ break;
+ case WMI_RATE_PREAMBLE_VHT:
+ arsta->txrate.mcs = mcs;
+ arsta->txrate.flags = RATE_INFO_FLAGS_VHT_MCS;
+ if (sgi)
+ arsta->txrate.flags |= RATE_INFO_FLAGS_SHORT_GI;
+ break;
+ case WMI_RATE_PREAMBLE_HE:
+ arsta->txrate.mcs = mcs;
+ arsta->txrate.flags = RATE_INFO_FLAGS_HE_MCS;
+ arsta->txrate.he_dcm = dcm;
+ arsta->txrate.he_gi = ath12k_he_gi_to_nl80211_he_gi(sgi);
+ tones = le16_to_cpu(user_rate->ru_end) -
+ le16_to_cpu(user_rate->ru_start) + 1;
+ v = ath12k_he_ru_tones_to_nl80211_he_ru_alloc(tones);
+ arsta->txrate.he_ru_alloc = v;
+ break;
+ }
+
+ arsta->txrate.nss = nss;
+ arsta->txrate.bw = ath12k_mac_bw_to_mac80211_bw(bw);
+ arsta->tx_duration += tx_duration;
+ memcpy(&arsta->last_txrate, &arsta->txrate, sizeof(struct rate_info));
+
+ /* PPDU stats reported for a mgmt packet don't have valid tx bytes,
+ * so skip the peer stats update for mgmt packets.
+ */
+ if (tid < HTT_PPDU_STATS_NON_QOS_TID) {
+ memset(peer_stats, 0, sizeof(*peer_stats));
+ peer_stats->succ_pkts = succ_pkts;
+ peer_stats->succ_bytes = succ_bytes;
+ peer_stats->is_ampdu = is_ampdu;
+ peer_stats->duration = tx_duration;
+ peer_stats->ba_fails =
+ HTT_USR_CMPLTN_LONG_RETRY(usr_stats->cmpltn_cmn.flags) +
+ HTT_USR_CMPLTN_SHORT_RETRY(usr_stats->cmpltn_cmn.flags);
+ }
+
+ spin_unlock_bh(&ab->base_lock);
+ rcu_read_unlock();
+}
+
+static void ath12k_htt_update_ppdu_stats(struct ath12k *ar,
+ struct htt_ppdu_stats *ppdu_stats)
+{
+ u8 user;
+
+ for (user = 0; user < HTT_PPDU_STATS_MAX_USERS - 1; user++)
+ ath12k_update_per_peer_tx_stats(ar, ppdu_stats, user);
+}
+
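+/* Look up the ppdu_stats_info for @ppdu_id, allocating a fresh one
+ * if none exists. Once the list grows beyond HTT_PPDU_DESC_MAX_DEPTH
+ * the oldest entry is flushed into the per-peer tx stats and freed.
+ */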
+static
+struct htt_ppdu_stats_info *ath12k_dp_htt_get_ppdu_desc(struct ath12k *ar,
+ u32 ppdu_id)
+{
+ struct htt_ppdu_stats_info *ppdu_info;
+
+ lockdep_assert_held(&ar->data_lock);
+ if (!list_empty(&ar->ppdu_stats_info)) {
+ list_for_each_entry(ppdu_info, &ar->ppdu_stats_info, list) {
+ if (ppdu_info->ppdu_id == ppdu_id)
+ return ppdu_info;
+ }
+
+ if (ar->ppdu_stat_list_depth > HTT_PPDU_DESC_MAX_DEPTH) {
+ ppdu_info = list_first_entry(&ar->ppdu_stats_info,
+ typeof(*ppdu_info), list);
+ list_del(&ppdu_info->list);
+ ar->ppdu_stat_list_depth--;
+ ath12k_htt_update_ppdu_stats(ar, &ppdu_info->ppdu_stats);
+ kfree(ppdu_info);
+ }
+ }
+
+ ppdu_info = kzalloc(sizeof(*ppdu_info), GFP_ATOMIC);
+ if (!ppdu_info)
+ return NULL;
+
+ list_add_tail(&ppdu_info->list, &ar->ppdu_stats_info);
+ ar->ppdu_stat_list_depth++;
+
+ return ppdu_info;
+}
+
+static void ath12k_copy_to_delay_stats(struct ath12k_peer *peer,
+ struct htt_ppdu_user_stats *usr_stats)
+{
+ peer->ppdu_stats_delayba.sw_peer_id = le16_to_cpu(usr_stats->rate.sw_peer_id);
+ peer->ppdu_stats_delayba.info0 = le32_to_cpu(usr_stats->rate.info0);
+ peer->ppdu_stats_delayba.ru_end = le16_to_cpu(usr_stats->rate.ru_end);
+ peer->ppdu_stats_delayba.ru_start = le16_to_cpu(usr_stats->rate.ru_start);
+ peer->ppdu_stats_delayba.info1 = le32_to_cpu(usr_stats->rate.info1);
+ peer->ppdu_stats_delayba.rate_flags = le32_to_cpu(usr_stats->rate.rate_flags);
+ peer->ppdu_stats_delayba.resp_rate_flags =
+ le32_to_cpu(usr_stats->rate.resp_rate_flags);
+
+ peer->delayba_flag = true;
+}
+
+static void ath12k_copy_to_bar(struct ath12k_peer *peer,
+ struct htt_ppdu_user_stats *usr_stats)
+{
+ usr_stats->rate.sw_peer_id = cpu_to_le16(peer->ppdu_stats_delayba.sw_peer_id);
+ usr_stats->rate.info0 = cpu_to_le32(peer->ppdu_stats_delayba.info0);
+ usr_stats->rate.ru_end = cpu_to_le16(peer->ppdu_stats_delayba.ru_end);
+ usr_stats->rate.ru_start = cpu_to_le16(peer->ppdu_stats_delayba.ru_start);
+ usr_stats->rate.info1 = cpu_to_le32(peer->ppdu_stats_delayba.info1);
+ usr_stats->rate.rate_flags = cpu_to_le32(peer->ppdu_stats_delayba.rate_flags);
+ usr_stats->rate.resp_rate_flags =
+ cpu_to_le32(peer->ppdu_stats_delayba.resp_rate_flags);
+
+ peer->delayba_flag = false;
+}
+
+static int ath12k_htt_pull_ppdu_stats(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ struct ath12k_htt_ppdu_stats_msg *msg;
+ struct htt_ppdu_stats_info *ppdu_info;
+ struct ath12k_peer *peer = NULL;
+ struct htt_ppdu_user_stats *usr_stats = NULL;
+ u32 peer_id = 0;
+ struct ath12k *ar;
+ int ret, i;
+ u8 pdev_id;
+ u32 ppdu_id, len;
+
+ msg = (struct ath12k_htt_ppdu_stats_msg *)skb->data;
+ len = le32_get_bits(msg->info, HTT_T2H_PPDU_STATS_INFO_PAYLOAD_SIZE);
+ pdev_id = le32_get_bits(msg->info, HTT_T2H_PPDU_STATS_INFO_PDEV_ID);
+ ppdu_id = le32_to_cpu(msg->ppdu_id);
+
+ rcu_read_lock();
+ ar = ath12k_mac_get_ar_by_pdev_id(ab, pdev_id);
+ if (!ar) {
+ ret = -EINVAL;
+ goto exit;
+ }
+
+ spin_lock_bh(&ar->data_lock);
+ ppdu_info = ath12k_dp_htt_get_ppdu_desc(ar, ppdu_id);
+ if (!ppdu_info) {
+ spin_unlock_bh(&ar->data_lock);
+ ret = -EINVAL;
+ goto exit;
+ }
+
+ ppdu_info->ppdu_id = ppdu_id;
+ ret = ath12k_dp_htt_tlv_iter(ab, msg->data, len,
+ ath12k_htt_tlv_ppdu_stats_parse,
+ (void *)ppdu_info);
+ if (ret) {
+ spin_unlock_bh(&ar->data_lock);
+ ath12k_warn(ab, "Failed to parse tlv %d\n", ret);
+ goto exit;
+ }
+
+ /* back up data rate tlv for all peers */
+ if (ppdu_info->frame_type == HTT_STATS_PPDU_FTYPE_DATA &&
+ (ppdu_info->tlv_bitmap & (1 << HTT_PPDU_STATS_TAG_USR_COMMON)) &&
+ ppdu_info->delay_ba) {
+ for (i = 0; i < ppdu_info->ppdu_stats.common.num_users; i++) {
+ peer_id = ppdu_info->ppdu_stats.user_stats[i].peer_id;
+ spin_lock_bh(&ab->base_lock);
+ peer = ath12k_peer_find_by_id(ab, peer_id);
+ if (!peer) {
+ spin_unlock_bh(&ab->base_lock);
+ continue;
+ }
+
+ usr_stats = &ppdu_info->ppdu_stats.user_stats[i];
+ if (usr_stats->delay_ba)
+ ath12k_copy_to_delay_stats(peer, usr_stats);
+ spin_unlock_bh(&ab->base_lock);
+ }
+ }
+
+ /* restore all peers' data rate tlv to mu-bar tlv */
+ if (ppdu_info->frame_type == HTT_STATS_PPDU_FTYPE_BAR &&
+ (ppdu_info->tlv_bitmap & (1 << HTT_PPDU_STATS_TAG_USR_COMMON))) {
+ for (i = 0; i < ppdu_info->bar_num_users; i++) {
+ peer_id = ppdu_info->ppdu_stats.user_stats[i].peer_id;
+ spin_lock_bh(&ab->base_lock);
+ peer = ath12k_peer_find_by_id(ab, peer_id);
+ if (!peer) {
+ spin_unlock_bh(&ab->base_lock);
+ continue;
+ }
+
+ usr_stats = &ppdu_info->ppdu_stats.user_stats[i];
+ if (peer->delayba_flag)
+ ath12k_copy_to_bar(peer, usr_stats);
+ spin_unlock_bh(&ab->base_lock);
+ }
+ }
+
+ spin_unlock_bh(&ar->data_lock);
+
+exit:
+ rcu_read_unlock();
+
+ return ret;
+}
+
+static void ath12k_htt_mlo_offset_event_handler(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ struct ath12k_htt_mlo_offset_msg *msg;
+ struct ath12k_pdev *pdev;
+ struct ath12k *ar;
+ u8 pdev_id;
+
+ msg = (struct ath12k_htt_mlo_offset_msg *)skb->data;
+ pdev_id = u32_get_bits(__le32_to_cpu(msg->info),
+ HTT_T2H_MLO_OFFSET_INFO_PDEV_ID);
+ ar = ath12k_mac_get_ar_by_pdev_id(ab, pdev_id);
+
+ if (!ar) {
+ ath12k_warn(ab, "invalid pdev id %d on htt mlo offset\n", pdev_id);
+ return;
+ }
+
+ spin_lock_bh(&ar->data_lock);
+ pdev = ar->pdev;
+
+ pdev->timestamp.info = __le32_to_cpu(msg->info);
+ pdev->timestamp.sync_timestamp_lo_us = __le32_to_cpu(msg->sync_timestamp_lo_us);
+ pdev->timestamp.sync_timestamp_hi_us = __le32_to_cpu(msg->sync_timestamp_hi_us);
+ pdev->timestamp.mlo_offset_lo = __le32_to_cpu(msg->mlo_offset_lo);
+ pdev->timestamp.mlo_offset_hi = __le32_to_cpu(msg->mlo_offset_hi);
+ pdev->timestamp.mlo_offset_clks = __le32_to_cpu(msg->mlo_offset_clks);
+ pdev->timestamp.mlo_comp_clks = __le32_to_cpu(msg->mlo_comp_clks);
+ pdev->timestamp.mlo_comp_timer = __le32_to_cpu(msg->mlo_comp_timer);
+
+ spin_unlock_bh(&ar->data_lock);
+}
+
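+/* Dispatch HTT target-to-host messages: version negotiation, the
+ * various peer map/unmap flavors, ppdu stats and MLO timestamp
+ * offsets. Consumes the skb.
+ */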
+void ath12k_dp_htt_htc_t2h_msg_handler(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ struct htt_resp_msg *resp = (struct htt_resp_msg *)skb->data;
+ enum htt_t2h_msg_type type;
+ u16 peer_id;
+ u8 vdev_id;
+ u8 mac_addr[ETH_ALEN];
+ u16 peer_mac_h16;
+ u16 ast_hash = 0;
+ u16 hw_peer_id;
+
+ type = le32_get_bits(resp->version_msg.version, HTT_T2H_MSG_TYPE);
+
+ ath12k_dbg(ab, ATH12K_DBG_DP_HTT, "dp_htt rx msg type :0x%0x\n", type);
+
+ switch (type) {
+ case HTT_T2H_MSG_TYPE_VERSION_CONF:
+ dp->htt_tgt_ver_major = le32_get_bits(resp->version_msg.version,
+ HTT_T2H_VERSION_CONF_MAJOR);
+ dp->htt_tgt_ver_minor = le32_get_bits(resp->version_msg.version,
+ HTT_T2H_VERSION_CONF_MINOR);
+ complete(&dp->htt_tgt_version_received);
+ break;
+ /* TODO: remove unused peer map versions after testing */
+ case HTT_T2H_MSG_TYPE_PEER_MAP:
+ vdev_id = le32_get_bits(resp->peer_map_ev.info,
+ HTT_T2H_PEER_MAP_INFO_VDEV_ID);
+ peer_id = le32_get_bits(resp->peer_map_ev.info,
+ HTT_T2H_PEER_MAP_INFO_PEER_ID);
+ peer_mac_h16 = le32_get_bits(resp->peer_map_ev.info1,
+ HTT_T2H_PEER_MAP_INFO1_MAC_ADDR_H16);
+ ath12k_dp_get_mac_addr(le32_to_cpu(resp->peer_map_ev.mac_addr_l32),
+ peer_mac_h16, mac_addr);
+ ath12k_peer_map_event(ab, vdev_id, peer_id, mac_addr, 0, 0);
+ break;
+ case HTT_T2H_MSG_TYPE_PEER_MAP2:
+ vdev_id = le32_get_bits(resp->peer_map_ev.info,
+ HTT_T2H_PEER_MAP_INFO_VDEV_ID);
+ peer_id = le32_get_bits(resp->peer_map_ev.info,
+ HTT_T2H_PEER_MAP_INFO_PEER_ID);
+ peer_mac_h16 = le32_get_bits(resp->peer_map_ev.info1,
+ HTT_T2H_PEER_MAP_INFO1_MAC_ADDR_H16);
+ ath12k_dp_get_mac_addr(le32_to_cpu(resp->peer_map_ev.mac_addr_l32),
+ peer_mac_h16, mac_addr);
+ ast_hash = le32_get_bits(resp->peer_map_ev.info2,
+ HTT_T2H_PEER_MAP_INFO2_AST_HASH_VAL);
+ hw_peer_id = le32_get_bits(resp->peer_map_ev.info1,
+ HTT_T2H_PEER_MAP_INFO1_HW_PEER_ID);
+ ath12k_peer_map_event(ab, vdev_id, peer_id, mac_addr, ast_hash,
+ hw_peer_id);
+ break;
+ case HTT_T2H_MSG_TYPE_PEER_MAP3:
+ vdev_id = le32_get_bits(resp->peer_map_ev.info,
+ HTT_T2H_PEER_MAP_INFO_VDEV_ID);
+ peer_id = le32_get_bits(resp->peer_map_ev.info,
+ HTT_T2H_PEER_MAP_INFO_PEER_ID);
+ peer_mac_h16 = le32_get_bits(resp->peer_map_ev.info1,
+ HTT_T2H_PEER_MAP_INFO1_MAC_ADDR_H16);
+ ath12k_dp_get_mac_addr(le32_to_cpu(resp->peer_map_ev.mac_addr_l32),
+ peer_mac_h16, mac_addr);
+ ath12k_peer_map_event(ab, vdev_id, peer_id, mac_addr, ast_hash,
+ peer_id);
+ break;
+ case HTT_T2H_MSG_TYPE_PEER_UNMAP:
+ case HTT_T2H_MSG_TYPE_PEER_UNMAP2:
+ peer_id = le32_get_bits(resp->peer_unmap_ev.info,
+ HTT_T2H_PEER_UNMAP_INFO_PEER_ID);
+ ath12k_peer_unmap_event(ab, peer_id);
+ break;
+ case HTT_T2H_MSG_TYPE_PPDU_STATS_IND:
+ ath12k_htt_pull_ppdu_stats(ab, skb);
+ break;
+ case HTT_T2H_MSG_TYPE_EXT_STATS_CONF:
+ break;
+ case HTT_T2H_MSG_TYPE_MLO_TIMESTAMP_OFFSET_IND:
+ ath12k_htt_mlo_offset_event_handler(ab, skb);
+ break;
+ default:
+ ath12k_dbg(ab, ATH12K_DBG_DP_HTT, "dp_htt event %d not handled\n",
+ type);
+ break;
+ }
+
+ dev_kfree_skb_any(skb);
+}
+
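+/* Stitch an MSDU that spans multiple rx buffers back into @first:
+ * expand its tailroom if needed, then copy the payload of each
+ * continuation buffer (minus the hal rx descriptor) behind it.
+ */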
+static int ath12k_dp_rx_msdu_coalesce(struct ath12k *ar,
+ struct sk_buff_head *msdu_list,
+ struct sk_buff *first, struct sk_buff *last,
+ u8 l3pad_bytes, int msdu_len)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct sk_buff *skb;
+ struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(first);
+ int buf_first_hdr_len, buf_first_len;
+ struct hal_rx_desc *ldesc;
+ int space_extra, rem_len, buf_len;
+ u32 hal_rx_desc_sz = ar->ab->hw_params->hal_desc_sz;
+
+ /* As the msdu is spread across multiple rx buffers,
+ * find the offset to the start of msdu for computing
+ * the length of the msdu in the first buffer.
+ */
+ buf_first_hdr_len = hal_rx_desc_sz + l3pad_bytes;
+ buf_first_len = DP_RX_BUFFER_SIZE - buf_first_hdr_len;
+
+ if (WARN_ON_ONCE(msdu_len <= buf_first_len)) {
+ skb_put(first, buf_first_hdr_len + msdu_len);
+ skb_pull(first, buf_first_hdr_len);
+ return 0;
+ }
+
+ ldesc = (struct hal_rx_desc *)last->data;
+ rxcb->is_first_msdu = ath12k_dp_rx_h_first_msdu(ab, ldesc);
+ rxcb->is_last_msdu = ath12k_dp_rx_h_last_msdu(ab, ldesc);
+
+ /* MSDU spans over multiple buffers because the length of the MSDU
+ * exceeds DP_RX_BUFFER_SIZE - HAL_RX_DESC_SIZE. So assume the data
+ * in the first buf is of length DP_RX_BUFFER_SIZE - HAL_RX_DESC_SIZE.
+ */
+ skb_put(first, DP_RX_BUFFER_SIZE);
+ skb_pull(first, buf_first_hdr_len);
+
+ /* When an MSDU is spread over multiple buffers, the MSDU_END
+ * tlvs are valid only in the last buffer. Copy those tlvs.
+ */
+ ath12k_dp_rx_desc_end_tlv_copy(ab, rxcb->rx_desc, ldesc);
+
+ space_extra = msdu_len - (buf_first_len + skb_tailroom(first));
+ if (space_extra > 0 &&
+ (pskb_expand_head(first, 0, space_extra, GFP_ATOMIC) < 0)) {
+ /* Free up all buffers of the MSDU */
+ while ((skb = __skb_dequeue(msdu_list)) != NULL) {
+ rxcb = ATH12K_SKB_RXCB(skb);
+ if (!rxcb->is_continuation) {
+ dev_kfree_skb_any(skb);
+ break;
+ }
+ dev_kfree_skb_any(skb);
+ }
+ return -ENOMEM;
+ }
+
+ rem_len = msdu_len - buf_first_len;
+ while ((skb = __skb_dequeue(msdu_list)) != NULL && rem_len > 0) {
+ rxcb = ATH12K_SKB_RXCB(skb);
+ if (rxcb->is_continuation)
+ buf_len = DP_RX_BUFFER_SIZE - hal_rx_desc_sz;
+ else
+ buf_len = rem_len;
+
+ if (buf_len > (DP_RX_BUFFER_SIZE - hal_rx_desc_sz)) {
+ WARN_ON_ONCE(1);
+ dev_kfree_skb_any(skb);
+ return -EINVAL;
+ }
+
+ skb_put(skb, buf_len + hal_rx_desc_sz);
+ skb_pull(skb, hal_rx_desc_sz);
+ skb_copy_from_linear_data(skb, skb_put(first, buf_len),
+ buf_len);
+ dev_kfree_skb_any(skb);
+
+ rem_len -= buf_len;
+ if (!rxcb->is_continuation)
+ break;
+ }
+
+ return 0;
+}
+
+static struct sk_buff *ath12k_dp_rx_get_msdu_last_buf(struct sk_buff_head *msdu_list,
+ struct sk_buff *first)
+{
+ struct sk_buff *skb;
+ struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(first);
+
+ if (!rxcb->is_continuation)
+ return first;
+
+ skb_queue_walk(msdu_list, skb) {
+ rxcb = ATH12K_SKB_RXCB(skb);
+ if (!rxcb->is_continuation)
+ return skb;
+ }
+
+ return NULL;
+}
+
+static void ath12k_dp_rx_h_csum_offload(struct ath12k *ar, struct sk_buff *msdu)
+{
+ struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+ struct ath12k_base *ab = ar->ab;
+ bool ip_csum_fail, l4_csum_fail;
+
+ ip_csum_fail = ath12k_dp_rx_h_ip_cksum_fail(ab, rxcb->rx_desc);
+ l4_csum_fail = ath12k_dp_rx_h_l4_cksum_fail(ab, rxcb->rx_desc);
+
+ msdu->ip_summed = (ip_csum_fail || l4_csum_fail) ?
+ CHECKSUM_NONE : CHECKSUM_UNNECESSARY;
+}
+
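+/* Number of MIC bytes the given cipher appends to a frame. Ciphers
+ * not handled in this rx path fall through to the warning below.
+ */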
+static int ath12k_dp_rx_crypto_mic_len(struct ath12k *ar,
+ enum hal_encrypt_type enctype)
+{
+ switch (enctype) {
+ case HAL_ENCRYPT_TYPE_OPEN:
+ case HAL_ENCRYPT_TYPE_TKIP_NO_MIC:
+ case HAL_ENCRYPT_TYPE_TKIP_MIC:
+ return 0;
+ case HAL_ENCRYPT_TYPE_CCMP_128:
+ return IEEE80211_CCMP_MIC_LEN;
+ case HAL_ENCRYPT_TYPE_CCMP_256:
+ return IEEE80211_CCMP_256_MIC_LEN;
+ case HAL_ENCRYPT_TYPE_GCMP_128:
+ case HAL_ENCRYPT_TYPE_AES_GCMP_256:
+ return IEEE80211_GCMP_MIC_LEN;
+ case HAL_ENCRYPT_TYPE_WEP_40:
+ case HAL_ENCRYPT_TYPE_WEP_104:
+ case HAL_ENCRYPT_TYPE_WEP_128:
+ case HAL_ENCRYPT_TYPE_WAPI_GCM_SM4:
+ case HAL_ENCRYPT_TYPE_WAPI:
+ break;
+ }
+
+ ath12k_warn(ar->ab, "unsupported encryption type %d for mic len\n", enctype);
+ return 0;
+}
+
+static int ath12k_dp_rx_crypto_param_len(struct ath12k *ar,
+ enum hal_encrypt_type enctype)
+{
+ switch (enctype) {
+ case HAL_ENCRYPT_TYPE_OPEN:
+ return 0;
+ case HAL_ENCRYPT_TYPE_TKIP_NO_MIC:
+ case HAL_ENCRYPT_TYPE_TKIP_MIC:
+ return IEEE80211_TKIP_IV_LEN;
+ case HAL_ENCRYPT_TYPE_CCMP_128:
+ return IEEE80211_CCMP_HDR_LEN;
+ case HAL_ENCRYPT_TYPE_CCMP_256:
+ return IEEE80211_CCMP_256_HDR_LEN;
+ case HAL_ENCRYPT_TYPE_GCMP_128:
+ case HAL_ENCRYPT_TYPE_AES_GCMP_256:
+ return IEEE80211_GCMP_HDR_LEN;
+ case HAL_ENCRYPT_TYPE_WEP_40:
+ case HAL_ENCRYPT_TYPE_WEP_104:
+ case HAL_ENCRYPT_TYPE_WEP_128:
+ case HAL_ENCRYPT_TYPE_WAPI_GCM_SM4:
+ case HAL_ENCRYPT_TYPE_WAPI:
+ break;
+ }
+
+ ath12k_warn(ar->ab, "unsupported encryption type %d\n", enctype);
+ return 0;
+}
+
+static int ath12k_dp_rx_crypto_icv_len(struct ath12k *ar,
+ enum hal_encrypt_type enctype)
+{
+ switch (enctype) {
+ case HAL_ENCRYPT_TYPE_OPEN:
+ case HAL_ENCRYPT_TYPE_CCMP_128:
+ case HAL_ENCRYPT_TYPE_CCMP_256:
+ case HAL_ENCRYPT_TYPE_GCMP_128:
+ case HAL_ENCRYPT_TYPE_AES_GCMP_256:
+ return 0;
+ case HAL_ENCRYPT_TYPE_TKIP_NO_MIC:
+ case HAL_ENCRYPT_TYPE_TKIP_MIC:
+ return IEEE80211_TKIP_ICV_LEN;
+ case HAL_ENCRYPT_TYPE_WEP_40:
+ case HAL_ENCRYPT_TYPE_WEP_104:
+ case HAL_ENCRYPT_TYPE_WEP_128:
+ case HAL_ENCRYPT_TYPE_WAPI_GCM_SM4:
+ case HAL_ENCRYPT_TYPE_WAPI:
+ break;
+ }
+
+ ath12k_warn(ar->ab, "unsupported encryption type %d\n", enctype);
+ return 0;
+}
+
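+/* Convert a native wifi decapped frame back into a plain 802.11
+ * frame: rebuild the QoS control field and, unless the IV was
+ * stripped, the crypto header expected by mac80211.
+ */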
+static void ath12k_dp_rx_h_undecap_nwifi(struct ath12k *ar,
+ struct sk_buff *msdu,
+ enum hal_encrypt_type enctype,
+ struct ieee80211_rx_status *status)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+ u8 decap_hdr[DP_MAX_NWIFI_HDR_LEN];
+ struct ieee80211_hdr *hdr;
+ size_t hdr_len;
+ u8 *crypto_hdr;
+ u16 qos_ctl;
+
+ /* pull decapped header */
+ hdr = (struct ieee80211_hdr *)msdu->data;
+ hdr_len = ieee80211_hdrlen(hdr->frame_control);
+ skb_pull(msdu, hdr_len);
+
+ /* Rebuild qos header */
+ hdr->frame_control |= __cpu_to_le16(IEEE80211_STYPE_QOS_DATA);
+
+ /* Reset the order bit as the HT_Control header is stripped */
+ hdr->frame_control &= ~(__cpu_to_le16(IEEE80211_FCTL_ORDER));
+
+ qos_ctl = rxcb->tid;
+
+ if (ath12k_dp_rx_h_mesh_ctl_present(ab, rxcb->rx_desc))
+ qos_ctl |= IEEE80211_QOS_CTL_MESH_CONTROL_PRESENT;
+
+ /* TODO: Add other QoS ctl fields when required */
+
+ /* copy decap header before overwriting for reuse below */
+ memcpy(decap_hdr, hdr, hdr_len);
+
+ /* Rebuild crypto header for mac80211 use */
+ if (!(status->flag & RX_FLAG_IV_STRIPPED)) {
+ crypto_hdr = skb_push(msdu, ath12k_dp_rx_crypto_param_len(ar, enctype));
+ ath12k_dp_rx_desc_get_crypto_header(ar->ab,
+ rxcb->rx_desc, crypto_hdr,
+ enctype);
+ }
+
+ memcpy(skb_push(msdu,
+ IEEE80211_QOS_CTL_LEN), &qos_ctl,
+ IEEE80211_QOS_CTL_LEN);
+ memcpy(skb_push(msdu, hdr_len), decap_hdr, hdr_len);
+}
+
+static void ath12k_dp_rx_h_undecap_raw(struct ath12k *ar, struct sk_buff *msdu,
+ enum hal_encrypt_type enctype,
+ struct ieee80211_rx_status *status,
+ bool decrypted)
+{
+ struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+ struct ieee80211_hdr *hdr;
+ size_t hdr_len;
+ size_t crypto_len;
+
+ if (!(rxcb->is_first_msdu && rxcb->is_last_msdu)) {
+ WARN_ON_ONCE(1);
+ return;
+ }
+
+ skb_trim(msdu, msdu->len - FCS_LEN);
+
+ if (!decrypted)
+ return;
+
+ hdr = (void *)msdu->data;
+
+ /* Tail */
+ if (status->flag & RX_FLAG_IV_STRIPPED) {
+ skb_trim(msdu, msdu->len -
+ ath12k_dp_rx_crypto_mic_len(ar, enctype));
+
+ skb_trim(msdu, msdu->len -
+ ath12k_dp_rx_crypto_icv_len(ar, enctype));
+ } else {
+ /* MIC */
+ if (status->flag & RX_FLAG_MIC_STRIPPED)
+ skb_trim(msdu, msdu->len -
+ ath12k_dp_rx_crypto_mic_len(ar, enctype));
+
+ /* ICV */
+ if (status->flag & RX_FLAG_ICV_STRIPPED)
+ skb_trim(msdu, msdu->len -
+ ath12k_dp_rx_crypto_icv_len(ar, enctype));
+ }
+
+ /* MMIC: the 8-byte Michael MIC, same length as
+ * IEEE80211_CCMP_MIC_LEN
+ */
+ if ((status->flag & RX_FLAG_MMIC_STRIPPED) &&
+ !ieee80211_has_morefrags(hdr->frame_control) &&
+ enctype == HAL_ENCRYPT_TYPE_TKIP_MIC)
+ skb_trim(msdu, msdu->len - IEEE80211_CCMP_MIC_LEN);
+
+ /* Head */
+ if (status->flag & RX_FLAG_IV_STRIPPED) {
+ hdr_len = ieee80211_hdrlen(hdr->frame_control);
+ crypto_len = ath12k_dp_rx_crypto_param_len(ar, enctype);
+
+ memmove(msdu->data + crypto_len, msdu->data, hdr_len);
+ skb_pull(msdu, crypto_len);
+ }
+}
+
+static void ath12k_get_dot11_hdr_from_rx_desc(struct ath12k *ar,
+ struct sk_buff *msdu,
+ struct ath12k_skb_rxcb *rxcb,
+ struct ieee80211_rx_status *status,
+ enum hal_encrypt_type enctype)
+{
+ struct hal_rx_desc *rx_desc = rxcb->rx_desc;
+ struct ath12k_base *ab = ar->ab;
+ size_t hdr_len, crypto_len;
+ struct ieee80211_hdr *hdr;
+ u16 qos_ctl;
+ __le16 fc;
+ u8 *crypto_hdr;
+
+ if (!(status->flag & RX_FLAG_IV_STRIPPED)) {
+ crypto_len = ath12k_dp_rx_crypto_param_len(ar, enctype);
+ crypto_hdr = skb_push(msdu, crypto_len);
+ ath12k_dp_rx_desc_get_crypto_header(ab, rx_desc, crypto_hdr, enctype);
+ }
+
+ fc = cpu_to_le16(ath12k_dp_rxdesc_get_mpdu_frame_ctrl(ab, rx_desc));
+ hdr_len = ieee80211_hdrlen(fc);
+ skb_push(msdu, hdr_len);
+ hdr = (struct ieee80211_hdr *)msdu->data;
+ hdr->frame_control = fc;
+
+ /* Get wifi header from rx_desc */
+ ath12k_dp_rx_desc_get_dot11_hdr(ab, rx_desc, hdr);
+
+ if (rxcb->is_mcbc)
+ status->flag &= ~RX_FLAG_PN_VALIDATED;
+
+ /* Add QOS header */
+ if (ieee80211_is_data_qos(hdr->frame_control)) {
+ qos_ctl = rxcb->tid;
+ if (ath12k_dp_rx_h_mesh_ctl_present(ab, rx_desc))
+ qos_ctl |= IEEE80211_QOS_CTL_MESH_CONTROL_PRESENT;
+
+ /* TODO: Add other QoS ctl fields when required */
+ memcpy(msdu->data + (hdr_len - IEEE80211_QOS_CTL_LEN),
+ &qos_ctl, IEEE80211_QOS_CTL_LEN);
+ }
+}
+
+static void ath12k_dp_rx_h_undecap_eth(struct ath12k *ar,
+ struct sk_buff *msdu,
+ enum hal_encrypt_type enctype,
+ struct ieee80211_rx_status *status)
+{
+ struct ieee80211_hdr *hdr;
+ struct ethhdr *eth;
+ u8 da[ETH_ALEN];
+ u8 sa[ETH_ALEN];
+ struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+ struct ath12k_dp_rx_rfc1042_hdr rfc = {0xaa, 0xaa, 0x03, {0x00, 0x00, 0x00}};
+
+ eth = (struct ethhdr *)msdu->data;
+ ether_addr_copy(da, eth->h_dest);
+ ether_addr_copy(sa, eth->h_source);
+ rfc.snap_type = eth->h_proto;
+ skb_pull(msdu, sizeof(*eth));
+ memcpy(skb_push(msdu, sizeof(rfc)), &rfc,
+ sizeof(rfc));
+ ath12k_get_dot11_hdr_from_rx_desc(ar, msdu, rxcb, status, enctype);
+
+ /* original 802.11 header has a different DA and in
+ * case of 4addr it may also have different SA
+ */
+ hdr = (struct ieee80211_hdr *)msdu->data;
+ ether_addr_copy(ieee80211_get_DA(hdr), da);
+ ether_addr_copy(ieee80211_get_SA(hdr), sa);
+}
+
+static void ath12k_dp_rx_h_undecap(struct ath12k *ar, struct sk_buff *msdu,
+ struct hal_rx_desc *rx_desc,
+ enum hal_encrypt_type enctype,
+ struct ieee80211_rx_status *status,
+ bool decrypted)
+{
+ struct ath12k_base *ab = ar->ab;
+ u8 decap;
+ struct ethhdr *ehdr;
+
+ decap = ath12k_dp_rx_h_decap_type(ab, rx_desc);
+
+ switch (decap) {
+ case DP_RX_DECAP_TYPE_NATIVE_WIFI:
+ ath12k_dp_rx_h_undecap_nwifi(ar, msdu, enctype, status);
+ break;
+ case DP_RX_DECAP_TYPE_RAW:
+ ath12k_dp_rx_h_undecap_raw(ar, msdu, enctype, status,
+ decrypted);
+ break;
+ case DP_RX_DECAP_TYPE_ETHERNET2_DIX:
+ ehdr = (struct ethhdr *)msdu->data;
+
+ /* mac80211 allows fast path only for authorized STA */
+ if (ehdr->h_proto == cpu_to_be16(ETH_P_PAE)) {
+ ATH12K_SKB_RXCB(msdu)->is_eapol = true;
+ ath12k_dp_rx_h_undecap_eth(ar, msdu, enctype, status);
+ break;
+ }
+
+ /* PN for mcast packets will be validated in mac80211;
+ * remove eth header and add 802.11 header.
+ */
+ if (ATH12K_SKB_RXCB(msdu)->is_mcbc && decrypted)
+ ath12k_dp_rx_h_undecap_eth(ar, msdu, enctype, status);
+ break;
+ case DP_RX_DECAP_TYPE_8023:
+ /* TODO: Handle undecap for these formats */
+ break;
+ }
+}
+
+struct ath12k_peer *
+ath12k_dp_rx_h_find_peer(struct ath12k_base *ab, struct sk_buff *msdu)
+{
+ struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+ struct hal_rx_desc *rx_desc = rxcb->rx_desc;
+ struct ath12k_peer *peer = NULL;
+
+ lockdep_assert_held(&ab->base_lock);
+
+ if (rxcb->peer_id)
+ peer = ath12k_peer_find_by_id(ab, rxcb->peer_id);
+
+ if (peer)
+ return peer;
+
+ if (!rx_desc || !(ath12k_dp_rxdesc_mac_addr2_valid(ab, rx_desc)))
+ return NULL;
+
+ peer = ath12k_peer_find_by_addr(ab,
+ ath12k_dp_rxdesc_get_mpdu_start_addr2(ab,
+ rx_desc));
+ return peer;
+}
+
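+/* Per-MPDU rx processing: resolve the peer's cipher, translate hw
+ * error and decryption state into mac80211 rx flags, fix up checksum
+ * offload and undecap the frame.
+ */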
+static void ath12k_dp_rx_h_mpdu(struct ath12k *ar,
+ struct sk_buff *msdu,
+ struct hal_rx_desc *rx_desc,
+ struct ieee80211_rx_status *rx_status)
+{
+ bool fill_crypto_hdr;
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_skb_rxcb *rxcb;
+ enum hal_encrypt_type enctype;
+ bool is_decrypted = false;
+ struct ieee80211_hdr *hdr;
+ struct ath12k_peer *peer;
+ u32 err_bitmap;
+
+ /* PN for multicast packets will be checked in mac80211 */
+ rxcb = ATH12K_SKB_RXCB(msdu);
+ fill_crypto_hdr = ath12k_dp_rx_h_is_mcbc(ar->ab, rx_desc);
+ rxcb->is_mcbc = fill_crypto_hdr;
+
+ if (rxcb->is_mcbc)
+ rxcb->peer_id = ath12k_dp_rx_h_peer_id(ar->ab, rx_desc);
+
+ spin_lock_bh(&ar->ab->base_lock);
+ peer = ath12k_dp_rx_h_find_peer(ar->ab, msdu);
+ if (peer) {
+ if (rxcb->is_mcbc)
+ enctype = peer->sec_type_grp;
+ else
+ enctype = peer->sec_type;
+ } else {
+ enctype = HAL_ENCRYPT_TYPE_OPEN;
+ }
+ spin_unlock_bh(&ar->ab->base_lock);
+
+ err_bitmap = ath12k_dp_rx_h_mpdu_err(ab, rx_desc);
+ if (enctype != HAL_ENCRYPT_TYPE_OPEN && !err_bitmap)
+ is_decrypted = ath12k_dp_rx_h_is_decrypted(ab, rx_desc);
+
+ /* Clear per-MPDU flags while leaving per-PPDU flags intact */
+ rx_status->flag &= ~(RX_FLAG_FAILED_FCS_CRC |
+ RX_FLAG_MMIC_ERROR |
+ RX_FLAG_DECRYPTED |
+ RX_FLAG_IV_STRIPPED |
+ RX_FLAG_MMIC_STRIPPED);
+
+ if (err_bitmap & HAL_RX_MPDU_ERR_FCS)
+ rx_status->flag |= RX_FLAG_FAILED_FCS_CRC;
+ if (err_bitmap & HAL_RX_MPDU_ERR_TKIP_MIC)
+ rx_status->flag |= RX_FLAG_MMIC_ERROR;
+
+ if (is_decrypted) {
+ rx_status->flag |= RX_FLAG_DECRYPTED | RX_FLAG_MMIC_STRIPPED;
+
+ if (fill_crypto_hdr)
+ rx_status->flag |= RX_FLAG_MIC_STRIPPED |
+ RX_FLAG_ICV_STRIPPED;
+ else
+ rx_status->flag |= RX_FLAG_IV_STRIPPED |
+ RX_FLAG_PN_VALIDATED;
+ }
+
+ ath12k_dp_rx_h_csum_offload(ar, msdu);
+ ath12k_dp_rx_h_undecap(ar, msdu, rx_desc,
+ enctype, rx_status, is_decrypted);
+
+ if (!is_decrypted || fill_crypto_hdr)
+ return;
+
+ if (ath12k_dp_rx_h_decap_type(ar->ab, rx_desc) !=
+ DP_RX_DECAP_TYPE_ETHERNET2_DIX) {
+ hdr = (void *)msdu->data;
+ hdr->frame_control &= ~__cpu_to_le16(IEEE80211_FCTL_PROTECTED);
+ }
+}
+
+static void ath12k_dp_rx_h_rate(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+ struct ieee80211_rx_status *rx_status)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ieee80211_supported_band *sband;
+ enum rx_msdu_start_pkt_type pkt_type;
+ u8 bw;
+ u8 rate_mcs, nss;
+ u8 sgi;
+ bool is_cck;
+
+ pkt_type = ath12k_dp_rx_h_pkt_type(ab, rx_desc);
+ bw = ath12k_dp_rx_h_rx_bw(ab, rx_desc);
+ rate_mcs = ath12k_dp_rx_h_rate_mcs(ab, rx_desc);
+ nss = ath12k_dp_rx_h_nss(ab, rx_desc);
+ sgi = ath12k_dp_rx_h_sgi(ab, rx_desc);
+
+ switch (pkt_type) {
+ case RX_MSDU_START_PKT_TYPE_11A:
+ case RX_MSDU_START_PKT_TYPE_11B:
+ is_cck = (pkt_type == RX_MSDU_START_PKT_TYPE_11B);
+ sband = &ar->mac.sbands[rx_status->band];
+ rx_status->rate_idx = ath12k_mac_hw_rate_to_idx(sband, rate_mcs,
+ is_cck);
+ break;
+ case RX_MSDU_START_PKT_TYPE_11N:
+ rx_status->encoding = RX_ENC_HT;
+ if (rate_mcs > ATH12K_HT_MCS_MAX) {
+ ath12k_warn(ar->ab,
+ "Received with invalid mcs in HT mode %d\n",
+ rate_mcs);
+ break;
+ }
+ rx_status->rate_idx = rate_mcs + (8 * (nss - 1));
+ if (sgi)
+ rx_status->enc_flags |= RX_ENC_FLAG_SHORT_GI;
+ rx_status->bw = ath12k_mac_bw_to_mac80211_bw(bw);
+ break;
+ case RX_MSDU_START_PKT_TYPE_11AC:
+ rx_status->encoding = RX_ENC_VHT;
+ rx_status->rate_idx = rate_mcs;
+ if (rate_mcs > ATH12K_VHT_MCS_MAX) {
+ ath12k_warn(ar->ab,
+ "Received with invalid mcs in VHT mode %d\n",
+ rate_mcs);
+ break;
+ }
+ rx_status->nss = nss;
+ if (sgi)
+ rx_status->enc_flags |= RX_ENC_FLAG_SHORT_GI;
+ rx_status->bw = ath12k_mac_bw_to_mac80211_bw(bw);
+ break;
+ case RX_MSDU_START_PKT_TYPE_11AX:
+ rx_status->rate_idx = rate_mcs;
+ if (rate_mcs > ATH12K_HE_MCS_MAX) {
+ ath12k_warn(ar->ab,
+ "Received with invalid mcs in HE mode %d\n",
+ rate_mcs);
+ break;
+ }
+ rx_status->encoding = RX_ENC_HE;
+ rx_status->nss = nss;
+ rx_status->he_gi = ath12k_he_gi_to_nl80211_he_gi(sgi);
+ rx_status->bw = ath12k_mac_bw_to_mac80211_bw(bw);
+ break;
+ }
+}
+
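+/* Fill the per-PPDU fields of the rx status (band, frequency and
+ * rate) from the hal rx descriptor, falling back to the current rx
+ * channel when the reported channel number is out of range.
+ */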
+void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+ struct ieee80211_rx_status *rx_status)
+{
+ struct ath12k_base *ab = ar->ab;
+ u8 channel_num;
+ u32 center_freq, meta_data;
+ struct ieee80211_channel *channel;
+
+ rx_status->freq = 0;
+ rx_status->rate_idx = 0;
+ rx_status->nss = 0;
+ rx_status->encoding = RX_ENC_LEGACY;
+ rx_status->bw = RATE_INFO_BW_20;
+ rx_status->enc_flags = 0;
+
+ rx_status->flag |= RX_FLAG_NO_SIGNAL_VAL;
+
+ meta_data = ath12k_dp_rx_h_freq(ab, rx_desc);
+ channel_num = meta_data;
+ center_freq = meta_data >> 16;
+
+ if (center_freq >= 5935 && center_freq <= 7105) {
+ rx_status->band = NL80211_BAND_6GHZ;
+ } else if (channel_num >= 1 && channel_num <= 14) {
+ rx_status->band = NL80211_BAND_2GHZ;
+ } else if (channel_num >= 36 && channel_num <= 173) {
+ rx_status->band = NL80211_BAND_5GHZ;
+ } else {
+ spin_lock_bh(&ar->data_lock);
+ channel = ar->rx_channel;
+ if (channel) {
+ rx_status->band = channel->band;
+ channel_num =
+ ieee80211_frequency_to_channel(channel->center_freq);
+ }
+ spin_unlock_bh(&ar->data_lock);
+ ath12k_dbg_dump(ar->ab, ATH12K_DBG_DATA, NULL, "rx_desc: ",
+ rx_desc, sizeof(*rx_desc));
+ }
+
+ rx_status->freq = ieee80211_channel_to_frequency(channel_num,
+ rx_status->band);
+
+ ath12k_dp_rx_h_rate(ar, rx_desc, rx_status);
+}
+
+static void ath12k_dp_rx_deliver_msdu(struct ath12k *ar, struct napi_struct *napi,
+ struct sk_buff *msdu,
+ struct ieee80211_rx_status *status)
+{
+ struct ath12k_base *ab = ar->ab;
+ static const struct ieee80211_radiotap_he known = {
+ .data1 = cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA1_DATA_MCS_KNOWN |
+ IEEE80211_RADIOTAP_HE_DATA1_BW_RU_ALLOC_KNOWN),
+ .data2 = cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA2_GI_KNOWN),
+ };
+ struct ieee80211_radiotap_he *he;
+ struct ieee80211_rx_status *rx_status;
+ struct ieee80211_sta *pubsta;
+ struct ath12k_peer *peer;
+ struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+ u8 decap = DP_RX_DECAP_TYPE_RAW;
+ bool is_mcbc = rxcb->is_mcbc;
+ bool is_eapol = rxcb->is_eapol;
+
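+ /* For HE frames, push a radiotap HE header template onto the skb and
+ * set RX_FLAG_RADIOTAP_HE so that mac80211 fills in the fields marked
+ * known (MCS, BW/RU allocation, GI) from rx_status.
+ */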
+ if (status->encoding == RX_ENC_HE && !(status->flag & RX_FLAG_RADIOTAP_HE) &&
+ !(status->flag & RX_FLAG_SKIP_MONITOR)) {
+ he = skb_push(msdu, sizeof(known));
+ memcpy(he, &known, sizeof(known));
+ status->flag |= RX_FLAG_RADIOTAP_HE;
+ }
+
+ if (!(status->flag & RX_FLAG_ONLY_MONITOR))
+ decap = ath12k_dp_rx_h_decap_type(ab, rxcb->rx_desc);
+
+ spin_lock_bh(&ab->base_lock);
+ peer = ath12k_dp_rx_h_find_peer(ab, msdu);
+
+ pubsta = peer ? peer->sta : NULL;
+
+ spin_unlock_bh(&ab->base_lock);
+
+ ath12k_dbg(ab, ATH12K_DBG_DATA,
+ "rx skb %pK len %u peer %pM %d %s sn %u %s%s%s%s%s%s%s%s rate_idx %u vht_nss %u freq %u band %u flag 0x%x fcs-err %i mic-err %i amsdu-more %i\n",
+ msdu,
+ msdu->len,
+ peer ? peer->addr : NULL,
+ rxcb->tid,
+ is_mcbc ? "mcast" : "ucast",
+ ath12k_dp_rx_h_seq_no(ab, rxcb->rx_desc),
+ (status->encoding == RX_ENC_LEGACY) ? "legacy" : "",
+ (status->encoding == RX_ENC_HT) ? "ht" : "",
+ (status->encoding == RX_ENC_VHT) ? "vht" : "",
+ (status->encoding == RX_ENC_HE) ? "he" : "",
+ (status->bw == RATE_INFO_BW_40) ? "40" : "",
+ (status->bw == RATE_INFO_BW_80) ? "80" : "",
+ (status->bw == RATE_INFO_BW_160) ? "160" : "",
+ status->enc_flags & RX_ENC_FLAG_SHORT_GI ? "sgi " : "",
+ status->rate_idx,
+ status->nss,
+ status->freq,
+ status->band, status->flag,
+ !!(status->flag & RX_FLAG_FAILED_FCS_CRC),
+ !!(status->flag & RX_FLAG_MMIC_ERROR),
+ !!(status->flag & RX_FLAG_AMSDU_MORE));
+
+ ath12k_dbg_dump(ab, ATH12K_DBG_DP_RX, NULL, "dp rx msdu: ",
+ msdu->data, msdu->len);
+
+ rx_status = IEEE80211_SKB_RXCB(msdu);
+ *rx_status = *status;
+
+ /* TODO: trace rx packet */
+
+ /* PN for multicast packets is not validated in HW,
+ * so skip the 802.3 rx path.
+ * Also, fast_rx expects the STA to be authorized, hence
+ * EAPOL packets are sent via the slow path.
+ */
+ if (decap == DP_RX_DECAP_TYPE_ETHERNET2_DIX && !is_eapol &&
+ !(is_mcbc && rx_status->flag & RX_FLAG_DECRYPTED))
+ rx_status->flag |= RX_FLAG_8023;
+
+ ieee80211_rx_napi(ar->hw, pubsta, msdu, napi);
+}
+
+static int ath12k_dp_rx_process_msdu(struct ath12k *ar,
+ struct sk_buff *msdu,
+ struct sk_buff_head *msdu_list,
+ struct ieee80211_rx_status *rx_status)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct hal_rx_desc *rx_desc, *lrx_desc;
+ struct ath12k_skb_rxcb *rxcb;
+ struct sk_buff *last_buf;
+ u8 l3_pad_bytes;
+ u16 msdu_len;
+ int ret;
+ u32 hal_rx_desc_sz = ar->ab->hw_params->hal_desc_sz;
+
+ last_buf = ath12k_dp_rx_get_msdu_last_buf(msdu_list, msdu);
+ if (!last_buf) {
+ ath12k_warn(ab,
+ "No valid Rx buffer to access MSDU_END tlv\n");
+ ret = -EIO;
+ goto free_out;
+ }
+
+ rx_desc = (struct hal_rx_desc *)msdu->data;
+ lrx_desc = (struct hal_rx_desc *)last_buf->data;
+ if (!ath12k_dp_rx_h_msdu_done(ab, lrx_desc)) {
+ ath12k_warn(ab, "msdu_done bit in msdu_end is not set\n");
+ ret = -EIO;
+ goto free_out;
+ }
+
+ rxcb = ATH12K_SKB_RXCB(msdu);
+ rxcb->rx_desc = rx_desc;
+ msdu_len = ath12k_dp_rx_h_msdu_len(ab, lrx_desc);
+ l3_pad_bytes = ath12k_dp_rx_h_l3pad(ab, lrx_desc);
+
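+ /* Three buffer layouts are possible: a reinjected fragment where only
+ * the HAL descriptor needs stripping, a single-buffer msdu that is
+ * trimmed to the reported length, and a multi-buffer msdu that must
+ * be coalesced from the remaining buffers in the list.
+ */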
+ if (rxcb->is_frag) {
+ skb_pull(msdu, hal_rx_desc_sz);
+ } else if (!rxcb->is_continuation) {
+ if ((msdu_len + hal_rx_desc_sz) > DP_RX_BUFFER_SIZE) {
+ ret = -EINVAL;
+ ath12k_warn(ab, "invalid msdu len %u\n", msdu_len);
+ ath12k_dbg_dump(ab, ATH12K_DBG_DATA, NULL, "", rx_desc,
+ sizeof(*rx_desc));
+ goto free_out;
+ }
+ skb_put(msdu, hal_rx_desc_sz + l3_pad_bytes + msdu_len);
+ skb_pull(msdu, hal_rx_desc_sz + l3_pad_bytes);
+ } else {
+ ret = ath12k_dp_rx_msdu_coalesce(ar, msdu_list,
+ msdu, last_buf,
+ l3_pad_bytes, msdu_len);
+ if (ret) {
+ ath12k_warn(ab,
+ "failed to coalesce msdu rx buffer%d\n", ret);
+ goto free_out;
+ }
+ }
+
+ ath12k_dp_rx_h_ppdu(ar, rx_desc, rx_status);
+ ath12k_dp_rx_h_mpdu(ar, msdu, rx_desc, rx_status);
+
+ rx_status->flag |= RX_FLAG_SKIP_MONITOR | RX_FLAG_DUP_VALIDATED;
+
+ return 0;
+
+free_out:
+ return ret;
+}
+
+static void ath12k_dp_rx_process_received_packets(struct ath12k_base *ab,
+ struct napi_struct *napi,
+ struct sk_buff_head *msdu_list,
+ int ring_id)
+{
+ struct ieee80211_rx_status rx_status = {0};
+ struct ath12k_skb_rxcb *rxcb;
+ struct sk_buff *msdu;
+ struct ath12k *ar;
+ u8 mac_id;
+ int ret;
+
+ if (skb_queue_empty(msdu_list))
+ return;
+
+ rcu_read_lock();
+
+ while ((msdu = __skb_dequeue(msdu_list))) {
+ rxcb = ATH12K_SKB_RXCB(msdu);
+ mac_id = rxcb->mac_id;
+ ar = ab->pdevs[mac_id].ar;
+ if (!rcu_dereference(ab->pdevs_active[mac_id])) {
+ dev_kfree_skb_any(msdu);
+ continue;
+ }
+
+ if (test_bit(ATH12K_CAC_RUNNING, &ar->dev_flags)) {
+ dev_kfree_skb_any(msdu);
+ continue;
+ }
+
+ ret = ath12k_dp_rx_process_msdu(ar, msdu, msdu_list, &rx_status);
+ if (ret) {
+ ath12k_dbg(ab, ATH12K_DBG_DATA,
+ "Unable to process msdu %d", ret);
+ dev_kfree_skb_any(msdu);
+ continue;
+ }
+
+ ath12k_dp_rx_deliver_msdu(ar, napi, msdu, &rx_status);
+ }
+
+ rcu_read_unlock();
+}
+
+int ath12k_dp_rx_process(struct ath12k_base *ab, int ring_id,
+ struct napi_struct *napi, int budget)
+{
+ struct ath12k_rx_desc_info *desc_info;
+ struct ath12k_dp *dp = &ab->dp;
+ struct dp_rxdma_ring *rx_ring = &dp->rx_refill_buf_ring;
+ struct hal_reo_dest_ring *desc;
+ int num_buffs_reaped = 0;
+ struct sk_buff_head msdu_list;
+ struct ath12k_skb_rxcb *rxcb;
+ int total_msdu_reaped = 0;
+ struct hal_srng *srng;
+ struct sk_buff *msdu;
+ bool done = false;
+ int mac_id;
+ u64 desc_va;
+
+ __skb_queue_head_init(&msdu_list);
+
+ srng = &ab->hal.srng_list[dp->reo_dst_ring[ring_id].ring_id];
+
+ spin_lock_bh(&srng->lock);
+
+try_again:
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ while ((desc = ath12k_hal_srng_dst_get_next_entry(ab, srng))) {
+ enum hal_reo_dest_ring_push_reason push_reason;
+ u32 cookie;
+
+ cookie = le32_get_bits(desc->buf_addr_info.info1,
+ BUFFER_ADDR_INFO1_SW_COOKIE);
+
+ mac_id = le32_get_bits(desc->info0,
+ HAL_REO_DEST_RING_INFO0_SRC_LINK_ID);
+
+ desc_va = ((u64)le32_to_cpu(desc->buf_va_hi) << 32 |
+ le32_to_cpu(desc->buf_va_lo));
+ desc_info = (struct ath12k_rx_desc_info *)((unsigned long)desc_va);
+
+ /* retry manual desc retrieval */
+ if (!desc_info) {
+ desc_info = ath12k_dp_get_rx_desc(ab, cookie);
+ if (!desc_info) {
+ ath12k_warn(ab, "Invalid cookie in manual desc retrival");
+ continue;
+ }
+ }
+
+ if (desc_info->magic != ATH12K_DP_RX_DESC_MAGIC)
+ ath12k_warn(ab, "Check HW CC implementation");
+
+ msdu = desc_info->skb;
+ desc_info->skb = NULL;
+
+ spin_lock_bh(&dp->rx_desc_lock);
+ list_move_tail(&desc_info->list, &dp->rx_desc_free_list);
+ spin_unlock_bh(&dp->rx_desc_lock);
+
+ rxcb = ATH12K_SKB_RXCB(msdu);
+ dma_unmap_single(ab->dev, rxcb->paddr,
+ msdu->len + skb_tailroom(msdu),
+ DMA_FROM_DEVICE);
+
+ num_buffs_reaped++;
+
+ push_reason = le32_get_bits(desc->info0,
+ HAL_REO_DEST_RING_INFO0_PUSH_REASON);
+ if (push_reason !=
+ HAL_REO_DEST_RING_PUSH_REASON_ROUTING_INSTRUCTION) {
+ dev_kfree_skb_any(msdu);
+ ab->soc_stats.hal_reo_error[dp->reo_dst_ring[ring_id].ring_id]++;
+ continue;
+ }
+
+ rxcb->is_first_msdu = !!(le32_to_cpu(desc->rx_msdu_info.info0) &
+ RX_MSDU_DESC_INFO0_FIRST_MSDU_IN_MPDU);
+ rxcb->is_last_msdu = !!(le32_to_cpu(desc->rx_msdu_info.info0) &
+ RX_MSDU_DESC_INFO0_LAST_MSDU_IN_MPDU);
+ rxcb->is_continuation = !!(le32_to_cpu(desc->rx_msdu_info.info0) &
+ RX_MSDU_DESC_INFO0_MSDU_CONTINUATION);
+ rxcb->mac_id = mac_id;
+ rxcb->peer_id = le32_get_bits(desc->rx_mpdu_info.peer_meta_data,
+ RX_MPDU_DESC_META_DATA_PEER_ID);
+ rxcb->tid = le32_get_bits(desc->rx_mpdu_info.info0,
+ RX_MPDU_DESC_INFO0_TID);
+
+ __skb_queue_tail(&msdu_list, msdu);
+
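+ /* Only count completed msdus against the budget; a buffer with the
+ * continuation flag set is part of an msdu still being assembled.
+ */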
+ if (!rxcb->is_continuation) {
+ total_msdu_reaped++;
+ done = true;
+ } else {
+ done = false;
+ }
+
+ if (total_msdu_reaped >= budget)
+ break;
+ }
+
+ /* HW might have updated the head pointer after we cached it.
+ * In this case, even though there are entries in the ring, we'll
+ * get a NULL rx_desc. Give the read another try with the updated
+ * cached head pointer so that we can reap the complete MPDU in the
+ * current rx processing cycle.
+ */
+ if (!done && ath12k_hal_srng_dst_num_free(ab, srng, true)) {
+ ath12k_hal_srng_access_end(ab, srng);
+ goto try_again;
+ }
+
+ ath12k_hal_srng_access_end(ab, srng);
+
+ spin_unlock_bh(&srng->lock);
+
+ if (!total_msdu_reaped)
+ goto exit;
+
+ /* TODO: Move to implicit BM? */
+ ath12k_dp_rx_bufs_replenish(ab, 0, rx_ring, num_buffs_reaped,
+ ab->hw_params->hal_params->rx_buf_rbm, true);
+
+ ath12k_dp_rx_process_received_packets(ab, napi, &msdu_list,
+ ring_id);
+
+exit:
+ return total_msdu_reaped;
+}
+
+static void ath12k_dp_rx_frag_timer(struct timer_list *timer)
+{
+ struct ath12k_dp_rx_tid *rx_tid = from_timer(rx_tid, timer, frag_timer);
+
+ spin_lock_bh(&rx_tid->ab->base_lock);
+ if (rx_tid->last_frag_no &&
+ rx_tid->rx_frag_bitmap == GENMASK(rx_tid->last_frag_no, 0)) {
+ spin_unlock_bh(&rx_tid->ab->base_lock);
+ return;
+ }
+ ath12k_dp_rx_frags_cleanup(rx_tid, true);
+ spin_unlock_bh(&rx_tid->ab->base_lock);
+}
+
+int ath12k_dp_rx_peer_frag_setup(struct ath12k *ar, const u8 *peer_mac, int vdev_id)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct crypto_shash *tfm;
+ struct ath12k_peer *peer;
+ struct ath12k_dp_rx_tid *rx_tid;
+ int i;
+
+ tfm = crypto_alloc_shash("michael_mic", 0, 0);
+ if (IS_ERR(tfm))
+ return PTR_ERR(tfm);
+
+ spin_lock_bh(&ab->base_lock);
+
+ peer = ath12k_peer_find(ab, vdev_id, peer_mac);
+ if (!peer) {
+ spin_unlock_bh(&ab->base_lock);
+ ath12k_warn(ab, "failed to find the peer to set up fragment info\n");
+ return -ENOENT;
+ }
+
+ for (i = 0; i <= IEEE80211_NUM_TIDS; i++) {
+ rx_tid = &peer->rx_tid[i];
+ rx_tid->ab = ab;
+ timer_setup(&rx_tid->frag_timer, ath12k_dp_rx_frag_timer, 0);
+ skb_queue_head_init(&rx_tid->rx_frags);
+ }
+
+ peer->tfm_mmic = tfm;
+ spin_unlock_bh(&ab->base_lock);
+
+ return 0;
+}
+
+static int ath12k_dp_rx_h_michael_mic(struct crypto_shash *tfm, u8 *key,
+ struct ieee80211_hdr *hdr, u8 *data,
+ size_t data_len, u8 *mic)
+{
+ SHASH_DESC_ON_STACK(desc, tfm);
+ u8 mic_hdr[16] = {0};
+ u8 tid = 0;
+ int ret;
+
+ if (!tfm)
+ return -EINVAL;
+
+ desc->tfm = tfm;
+
+ ret = crypto_shash_setkey(tfm, key, 8);
+ if (ret)
+ goto out;
+
+ ret = crypto_shash_init(desc);
+ if (ret)
+ goto out;
+
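+ /* The Michael MIC is computed over a pseudo-header of DA (6 bytes),
+ * SA (6 bytes), priority/TID (1 byte) and 3 zero padding bytes,
+ * followed by the frame payload.
+ */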
+ /* TKIP MIC header */
+ memcpy(mic_hdr, ieee80211_get_DA(hdr), ETH_ALEN);
+ memcpy(mic_hdr + ETH_ALEN, ieee80211_get_SA(hdr), ETH_ALEN);
+ if (ieee80211_is_data_qos(hdr->frame_control))
+ tid = ieee80211_get_tid(hdr);
+ mic_hdr[12] = tid;
+
+ ret = crypto_shash_update(desc, mic_hdr, 16);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(desc, data, data_len);
+ if (ret)
+ goto out;
+ ret = crypto_shash_final(desc, mic);
+out:
+ shash_desc_zero(desc);
+ return ret;
+}
+
+static int ath12k_dp_rx_h_verify_tkip_mic(struct ath12k *ar, struct ath12k_peer *peer,
+ struct sk_buff *msdu)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct hal_rx_desc *rx_desc = (struct hal_rx_desc *)msdu->data;
+ struct ieee80211_rx_status *rxs = IEEE80211_SKB_RXCB(msdu);
+ struct ieee80211_key_conf *key_conf;
+ struct ieee80211_hdr *hdr;
+ u8 mic[IEEE80211_CCMP_MIC_LEN];
+ int head_len, tail_len, ret;
+ size_t data_len;
+ u32 hdr_len, hal_rx_desc_sz = ar->ab->hw_params->hal_desc_sz;
+ u8 *key, *data;
+ u8 key_idx;
+
+ if (ath12k_dp_rx_h_enctype(ab, rx_desc) != HAL_ENCRYPT_TYPE_TKIP_MIC)
+ return 0;
+
+ hdr = (struct ieee80211_hdr *)(msdu->data + hal_rx_desc_sz);
+ hdr_len = ieee80211_hdrlen(hdr->frame_control);
+ head_len = hdr_len + hal_rx_desc_sz + IEEE80211_TKIP_IV_LEN;
+ tail_len = IEEE80211_CCMP_MIC_LEN + IEEE80211_TKIP_ICV_LEN + FCS_LEN;
+
+ if (!is_multicast_ether_addr(hdr->addr1))
+ key_idx = peer->ucast_keyidx;
+ else
+ key_idx = peer->mcast_keyidx;
+
+ key_conf = peer->keys[key_idx];
+
+ data = msdu->data + head_len;
+ data_len = msdu->len - head_len - tail_len;
+ key = &key_conf->key[NL80211_TKIP_DATA_OFFSET_RX_MIC_KEY];
+
+ ret = ath12k_dp_rx_h_michael_mic(peer->tfm_mmic, key, hdr, data, data_len, mic);
+ if (ret || memcmp(mic, data + data_len, IEEE80211_CCMP_MIC_LEN))
+ goto mic_fail;
+
+ return 0;
+
+mic_fail:
+ (ATH12K_SKB_RXCB(msdu))->is_first_msdu = true;
+ (ATH12K_SKB_RXCB(msdu))->is_last_msdu = true;
+
+ rxs->flag |= RX_FLAG_MMIC_ERROR | RX_FLAG_MMIC_STRIPPED |
+ RX_FLAG_IV_STRIPPED | RX_FLAG_DECRYPTED;
+ skb_pull(msdu, hal_rx_desc_sz);
+
+ ath12k_dp_rx_h_ppdu(ar, rx_desc, rxs);
+ ath12k_dp_rx_h_undecap(ar, msdu, rx_desc,
+ HAL_ENCRYPT_TYPE_TKIP_MIC, rxs, true);
+ ieee80211_rx(ar->hw, msdu);
+ return -EINVAL;
+}
+
+static void ath12k_dp_rx_h_undecap_frag(struct ath12k *ar, struct sk_buff *msdu,
+ enum hal_encrypt_type enctype, u32 flags)
+{
+ struct ieee80211_hdr *hdr;
+ size_t hdr_len;
+ size_t crypto_len;
+ u32 hal_rx_desc_sz = ar->ab->hw_params->hal_desc_sz;
+
+ if (!flags)
+ return;
+
+ hdr = (struct ieee80211_hdr *)(msdu->data + hal_rx_desc_sz);
+
+ if (flags & RX_FLAG_MIC_STRIPPED)
+ skb_trim(msdu, msdu->len -
+ ath12k_dp_rx_crypto_mic_len(ar, enctype));
+
+ if (flags & RX_FLAG_ICV_STRIPPED)
+ skb_trim(msdu, msdu->len -
+ ath12k_dp_rx_crypto_icv_len(ar, enctype));
+
+ if (flags & RX_FLAG_IV_STRIPPED) {
+ hdr_len = ieee80211_hdrlen(hdr->frame_control);
+ crypto_len = ath12k_dp_rx_crypto_param_len(ar, enctype);
+
+ memmove(msdu->data + hal_rx_desc_sz + crypto_len,
+ msdu->data + hal_rx_desc_sz, hdr_len);
+ skb_pull(msdu, crypto_len);
+ }
+}
+
+static int ath12k_dp_rx_h_defrag(struct ath12k *ar,
+ struct ath12k_peer *peer,
+ struct ath12k_dp_rx_tid *rx_tid,
+ struct sk_buff **defrag_skb)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct hal_rx_desc *rx_desc;
+ struct sk_buff *skb, *first_frag, *last_frag;
+ struct ieee80211_hdr *hdr;
+ enum hal_encrypt_type enctype;
+ bool is_decrypted = false;
+ int msdu_len = 0;
+ int extra_space;
+ u32 flags, hal_rx_desc_sz = ar->ab->hw_params->hal_desc_sz;
+
+ first_frag = skb_peek(&rx_tid->rx_frags);
+ last_frag = skb_peek_tail(&rx_tid->rx_frags);
+
+ skb_queue_walk(&rx_tid->rx_frags, skb) {
+ flags = 0;
+ rx_desc = (struct hal_rx_desc *)skb->data;
+ hdr = (struct ieee80211_hdr *)(skb->data + hal_rx_desc_sz);
+
+ enctype = ath12k_dp_rx_h_enctype(ab, rx_desc);
+ if (enctype != HAL_ENCRYPT_TYPE_OPEN)
+ is_decrypted = ath12k_dp_rx_h_is_decrypted(ab,
+ rx_desc);
+
+ if (is_decrypted) {
+ if (skb != first_frag)
+ flags |= RX_FLAG_IV_STRIPPED;
+ if (skb != last_frag)
+ flags |= RX_FLAG_ICV_STRIPPED |
+ RX_FLAG_MIC_STRIPPED;
+ }
+
+ /* RX fragments are always raw packets */
+ if (skb != last_frag)
+ skb_trim(skb, skb->len - FCS_LEN);
+ ath12k_dp_rx_h_undecap_frag(ar, skb, enctype, flags);
+
+ if (skb != first_frag)
+ skb_pull(skb, hal_rx_desc_sz +
+ ieee80211_hdrlen(hdr->frame_control));
+ msdu_len += skb->len;
+ }
+
+ extra_space = msdu_len - (DP_RX_BUFFER_SIZE + skb_tailroom(first_frag));
+ if (extra_space > 0 &&
+ (pskb_expand_head(first_frag, 0, extra_space, GFP_ATOMIC) < 0))
+ return -ENOMEM;
+
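+ /* Linearize the MPDU: append each remaining fragment's payload to
+ * the first fragment and free the source buffers.
+ */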
+ __skb_unlink(first_frag, &rx_tid->rx_frags);
+ while ((skb = __skb_dequeue(&rx_tid->rx_frags))) {
+ skb_put_data(first_frag, skb->data, skb->len);
+ dev_kfree_skb_any(skb);
+ }
+
+ hdr = (struct ieee80211_hdr *)(first_frag->data + hal_rx_desc_sz);
+ hdr->frame_control &= ~__cpu_to_le16(IEEE80211_FCTL_MOREFRAGS);
+ ATH12K_SKB_RXCB(first_frag)->is_frag = 1;
+
+ if (ath12k_dp_rx_h_verify_tkip_mic(ar, peer, first_frag))
+ first_frag = NULL;
+
+ *defrag_skb = first_frag;
+ return 0;
+}
+
+static int ath12k_dp_rx_h_defrag_reo_reinject(struct ath12k *ar,
+ struct ath12k_dp_rx_tid *rx_tid,
+ struct sk_buff *defrag_skb)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_dp *dp = &ab->dp;
+ struct hal_rx_desc *rx_desc = (struct hal_rx_desc *)defrag_skb->data;
+ struct hal_reo_entrance_ring *reo_ent_ring;
+ struct hal_reo_dest_ring *reo_dest_ring;
+ struct dp_link_desc_bank *link_desc_banks;
+ struct hal_rx_msdu_link *msdu_link;
+ struct hal_rx_msdu_details *msdu0;
+ struct hal_srng *srng;
+ dma_addr_t link_paddr, buf_paddr;
+ u32 desc_bank, msdu_info, msdu_ext_info, mpdu_info;
+ u32 cookie, hal_rx_desc_sz, dest_ring_info0;
+ int ret;
+ struct ath12k_rx_desc_info *desc_info;
+ u8 dst_ind;
+
+ hal_rx_desc_sz = ab->hw_params->hal_desc_sz;
+ link_desc_banks = dp->link_desc_banks;
+ reo_dest_ring = rx_tid->dst_ring_desc;
+
+ ath12k_hal_rx_reo_ent_paddr_get(ab, &reo_dest_ring->buf_addr_info,
+ &link_paddr, &cookie);
+ desc_bank = u32_get_bits(cookie, DP_LINK_DESC_BANK_MASK);
+
+ msdu_link = (struct hal_rx_msdu_link *)(link_desc_banks[desc_bank].vaddr +
+ (link_paddr - link_desc_banks[desc_bank].paddr));
+ msdu0 = &msdu_link->msdu_link[0];
+ msdu_ext_info = le32_to_cpu(msdu0->rx_msdu_ext_info.info0);
+ dst_ind = u32_get_bits(msdu_ext_info, RX_MSDU_EXT_DESC_INFO0_REO_DEST_IND);
+
+ memset(msdu0, 0, sizeof(*msdu0));
+
+ msdu_info = u32_encode_bits(1, RX_MSDU_DESC_INFO0_FIRST_MSDU_IN_MPDU) |
+ u32_encode_bits(1, RX_MSDU_DESC_INFO0_LAST_MSDU_IN_MPDU) |
+ u32_encode_bits(0, RX_MSDU_DESC_INFO0_MSDU_CONTINUATION) |
+ u32_encode_bits(defrag_skb->len - hal_rx_desc_sz,
+ RX_MSDU_DESC_INFO0_MSDU_LENGTH) |
+ u32_encode_bits(1, RX_MSDU_DESC_INFO0_VALID_SA) |
+ u32_encode_bits(1, RX_MSDU_DESC_INFO0_VALID_DA);
+ msdu0->rx_msdu_info.info0 = cpu_to_le32(msdu_info);
+ msdu0->rx_msdu_ext_info.info0 = cpu_to_le32(msdu_ext_info);
+
+ /* change msdu len in hal rx desc */
+ ath12k_dp_rxdesc_set_msdu_len(ab, rx_desc, defrag_skb->len - hal_rx_desc_sz);
+
+ buf_paddr = dma_map_single(ab->dev, defrag_skb->data,
+ defrag_skb->len + skb_tailroom(defrag_skb),
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(ab->dev, buf_paddr))
+ return -ENOMEM;
+
+ spin_lock_bh(&dp->rx_desc_lock);
+ desc_info = list_first_entry_or_null(&dp->rx_desc_free_list,
+ struct ath12k_rx_desc_info,
+ list);
+ if (!desc_info) {
+ spin_unlock_bh(&dp->rx_desc_lock);
+ ath12k_warn(ab, "failed to find rx desc for reinject\n");
+ ret = -ENOMEM;
+ goto err_unmap_dma;
+ }
+
+ desc_info->skb = defrag_skb;
+
+ list_del(&desc_info->list);
+ list_add_tail(&desc_info->list, &dp->rx_desc_used_list);
+ spin_unlock_bh(&dp->rx_desc_lock);
+
+ ATH12K_SKB_RXCB(defrag_skb)->paddr = buf_paddr;
+
+ ath12k_hal_rx_buf_addr_info_set(&msdu0->buf_addr_info, buf_paddr,
+ desc_info->cookie,
+ HAL_RX_BUF_RBM_SW3_BM);
+
+ /* Fill mpdu details into reo entrance ring */
+ srng = &ab->hal.srng_list[dp->reo_reinject_ring.ring_id];
+
+ spin_lock_bh(&srng->lock);
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ reo_ent_ring = ath12k_hal_srng_src_get_next_entry(ab, srng);
+ if (!reo_ent_ring) {
+ ath12k_hal_srng_access_end(ab, srng);
+ spin_unlock_bh(&srng->lock);
+ ret = -ENOSPC;
+ goto err_free_desc;
+ }
+ memset(reo_ent_ring, 0, sizeof(*reo_ent_ring));
+
+ ath12k_hal_rx_buf_addr_info_set(&reo_ent_ring->buf_addr_info, link_paddr,
+ cookie,
+ HAL_RX_BUF_RBM_WBM_CHIP0_IDLE_DESC_LIST);
+
+ mpdu_info = u32_encode_bits(1, RX_MPDU_DESC_INFO0_MSDU_COUNT) |
+ u32_encode_bits(0, RX_MPDU_DESC_INFO0_FRAG_FLAG) |
+ u32_encode_bits(1, RX_MPDU_DESC_INFO0_RAW_MPDU) |
+ u32_encode_bits(1, RX_MPDU_DESC_INFO0_VALID_PN) |
+ u32_encode_bits(rx_tid->tid, RX_MPDU_DESC_INFO0_TID);
+
+ reo_ent_ring->rx_mpdu_info.info0 = cpu_to_le32(mpdu_info);
+ reo_ent_ring->rx_mpdu_info.peer_meta_data =
+ reo_dest_ring->rx_mpdu_info.peer_meta_data;
+
+ reo_ent_ring->queue_addr_lo = cpu_to_le32(lower_32_bits(rx_tid->paddr));
+ reo_ent_ring->info0 = le32_encode_bits(upper_32_bits(rx_tid->paddr),
+ HAL_REO_ENTR_RING_INFO0_QUEUE_ADDR_HI) |
+ le32_encode_bits(dst_ind, HAL_REO_ENTR_RING_INFO0_DEST_IND);
+
+ reo_ent_ring->info1 = le32_encode_bits(rx_tid->cur_sn,
+ HAL_REO_ENTR_RING_INFO1_MPDU_SEQ_NUM);
+ dest_ring_info0 = le32_get_bits(reo_dest_ring->info0,
+ HAL_REO_DEST_RING_INFO0_SRC_LINK_ID);
+ reo_ent_ring->info2 =
+ cpu_to_le32(u32_get_bits(dest_ring_info0,
+ HAL_REO_ENTR_RING_INFO2_SRC_LINK_ID));
+
+ ath12k_hal_srng_access_end(ab, srng);
+ spin_unlock_bh(&srng->lock);
+
+ return 0;
+
+err_free_desc:
+ spin_lock_bh(&dp->rx_desc_lock);
+ list_del(&desc_info->list);
+ list_add_tail(&desc_info->list, &dp->rx_desc_free_list);
+ desc_info->skb = NULL;
+ spin_unlock_bh(&dp->rx_desc_lock);
+err_unmap_dma:
+ dma_unmap_single(ab->dev, buf_paddr, defrag_skb->len + skb_tailroom(defrag_skb),
+ DMA_FROM_DEVICE);
+ return ret;
+}
+
+static int ath12k_dp_rx_h_cmp_frags(struct ath12k_base *ab,
+ struct sk_buff *a, struct sk_buff *b)
+{
+ int frag1, frag2;
+
+ frag1 = ath12k_dp_rx_h_frag_no(ab, a);
+ frag2 = ath12k_dp_rx_h_frag_no(ab, b);
+
+ return frag1 - frag2;
+}
+
+static void ath12k_dp_rx_h_sort_frags(struct ath12k_base *ab,
+ struct sk_buff_head *frag_list,
+ struct sk_buff *cur_frag)
+{
+ struct sk_buff *skb;
+ int cmp;
+
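+ /* Keep the list ordered by fragment number: insert cur_frag before
+ * the first entry with an equal or higher fragment number, or append
+ * it at the tail if none is found.
+ */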
+ skb_queue_walk(frag_list, skb) {
+ cmp = ath12k_dp_rx_h_cmp_frags(ab, skb, cur_frag);
+ if (cmp < 0)
+ continue;
+ __skb_queue_before(frag_list, skb, cur_frag);
+ return;
+ }
+ __skb_queue_tail(frag_list, cur_frag);
+}
+
+static u64 ath12k_dp_rx_h_get_pn(struct ath12k *ar, struct sk_buff *skb)
+{
+ struct ieee80211_hdr *hdr;
+ u64 pn = 0;
+ u8 *ehdr;
+ u32 hal_rx_desc_sz = ar->ab->hw_params->hal_desc_sz;
+
+ hdr = (struct ieee80211_hdr *)(skb->data + hal_rx_desc_sz);
+ ehdr = skb->data + hal_rx_desc_sz + ieee80211_hdrlen(hdr->frame_control);
+
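+ /* CCMP/GCMP header layout: PN0, PN1, reserved, key ID, PN2, PN3,
+ * PN4, PN5. Reassemble the 48-bit PN from those six bytes.
+ */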
+ pn = ehdr[0];
+ pn |= (u64)ehdr[1] << 8;
+ pn |= (u64)ehdr[4] << 16;
+ pn |= (u64)ehdr[5] << 24;
+ pn |= (u64)ehdr[6] << 32;
+ pn |= (u64)ehdr[7] << 40;
+
+ return pn;
+}
+
+static bool
+ath12k_dp_rx_h_defrag_validate_incr_pn(struct ath12k *ar, struct ath12k_dp_rx_tid *rx_tid)
+{
+ struct ath12k_base *ab = ar->ab;
+ enum hal_encrypt_type encrypt_type;
+ struct sk_buff *first_frag, *skb;
+ struct hal_rx_desc *desc;
+ u64 last_pn;
+ u64 cur_pn;
+
+ first_frag = skb_peek(&rx_tid->rx_frags);
+ desc = (struct hal_rx_desc *)first_frag->data;
+
+ encrypt_type = ath12k_dp_rx_h_enctype(ab, desc);
+ if (encrypt_type != HAL_ENCRYPT_TYPE_CCMP_128 &&
+ encrypt_type != HAL_ENCRYPT_TYPE_CCMP_256 &&
+ encrypt_type != HAL_ENCRYPT_TYPE_GCMP_128 &&
+ encrypt_type != HAL_ENCRYPT_TYPE_AES_GCMP_256)
+ return true;
+
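+ /* Fragments of an encrypted MPDU must carry strictly consecutive
+ * PNs; any gap indicates a replayed or injected fragment, so reject
+ * the whole sequence.
+ */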
+ last_pn = ath12k_dp_rx_h_get_pn(ar, first_frag);
+ skb_queue_walk(&rx_tid->rx_frags, skb) {
+ if (skb == first_frag)
+ continue;
+
+ cur_pn = ath12k_dp_rx_h_get_pn(ar, skb);
+ if (cur_pn != last_pn + 1)
+ return false;
+ last_pn = cur_pn;
+ }
+ return true;
+}
+
+static int ath12k_dp_rx_frag_h_mpdu(struct ath12k *ar,
+ struct sk_buff *msdu,
+ struct hal_reo_dest_ring *ring_desc)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct hal_rx_desc *rx_desc;
+ struct ath12k_peer *peer;
+ struct ath12k_dp_rx_tid *rx_tid;
+ struct sk_buff *defrag_skb = NULL;
+ u32 peer_id;
+ u16 seqno, frag_no;
+ u8 tid;
+ int ret = 0;
+ bool more_frags;
+
+ rx_desc = (struct hal_rx_desc *)msdu->data;
+ peer_id = ath12k_dp_rx_h_peer_id(ab, rx_desc);
+ tid = ath12k_dp_rx_h_tid(ab, rx_desc);
+ seqno = ath12k_dp_rx_h_seq_no(ab, rx_desc);
+ frag_no = ath12k_dp_rx_h_frag_no(ab, msdu);
+ more_frags = ath12k_dp_rx_h_more_frags(ab, msdu);
+
+ if (!ath12k_dp_rx_h_seq_ctrl_valid(ab, rx_desc) ||
+ !ath12k_dp_rx_h_fc_valid(ab, rx_desc) ||
+ tid > IEEE80211_NUM_TIDS)
+ return -EINVAL;
+
+ /* Received an unfragmented packet in the reo
+ * exception ring; this shouldn't happen, as
+ * these packets typically come from the
+ * reo2sw srngs.
+ */
+ if (WARN_ON_ONCE(!frag_no && !more_frags))
+ return -EINVAL;
+
+ spin_lock_bh(&ab->base_lock);
+ peer = ath12k_peer_find_by_id(ab, peer_id);
+ if (!peer) {
+ ath12k_warn(ab, "failed to find the peer to de-fragment received fragment peer_id %d\n",
+ peer_id);
+ ret = -ENOENT;
+ goto out_unlock;
+ }
+ rx_tid = &peer->rx_tid[tid];
+
+ if ((!skb_queue_empty(&rx_tid->rx_frags) && seqno != rx_tid->cur_sn) ||
+ skb_queue_empty(&rx_tid->rx_frags)) {
+ /* Flush stored fragments and start a new sequence */
+ ath12k_dp_rx_frags_cleanup(rx_tid, true);
+ rx_tid->cur_sn = seqno;
+ }
+
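+ /* Received fragments are tracked in a bitmap; the MPDU is complete
+ * once bits 0..last_frag_no are all set.
+ */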
+ if (rx_tid->rx_frag_bitmap & BIT(frag_no)) {
+ /* Fragment already present */
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+
+ if (frag_no > __fls(rx_tid->rx_frag_bitmap))
+ __skb_queue_tail(&rx_tid->rx_frags, msdu);
+ else
+ ath12k_dp_rx_h_sort_frags(ab, &rx_tid->rx_frags, msdu);
+
+ rx_tid->rx_frag_bitmap |= BIT(frag_no);
+ if (!more_frags)
+ rx_tid->last_frag_no = frag_no;
+
+ if (frag_no == 0) {
+ rx_tid->dst_ring_desc = kmemdup(ring_desc,
+ sizeof(*rx_tid->dst_ring_desc),
+ GFP_ATOMIC);
+ if (!rx_tid->dst_ring_desc) {
+ ret = -ENOMEM;
+ goto out_unlock;
+ }
+ } else {
+ ath12k_dp_rx_link_desc_return(ab, ring_desc,
+ HAL_WBM_REL_BM_ACT_PUT_IN_IDLE);
+ }
+
+ if (!rx_tid->last_frag_no ||
+ rx_tid->rx_frag_bitmap != GENMASK(rx_tid->last_frag_no, 0)) {
+ mod_timer(&rx_tid->frag_timer, jiffies +
+ ATH12K_DP_RX_FRAGMENT_TIMEOUT_MS);
+ goto out_unlock;
+ }
+
+ spin_unlock_bh(&ab->base_lock);
+ del_timer_sync(&rx_tid->frag_timer);
+ spin_lock_bh(&ab->base_lock);
+
+ peer = ath12k_peer_find_by_id(ab, peer_id);
+ if (!peer)
+ goto err_frags_cleanup;
+
+ if (!ath12k_dp_rx_h_defrag_validate_incr_pn(ar, rx_tid))
+ goto err_frags_cleanup;
+
+ if (ath12k_dp_rx_h_defrag(ar, peer, rx_tid, &defrag_skb))
+ goto err_frags_cleanup;
+
+ if (!defrag_skb)
+ goto err_frags_cleanup;
+
+ if (ath12k_dp_rx_h_defrag_reo_reinject(ar, rx_tid, defrag_skb))
+ goto err_frags_cleanup;
+
+ ath12k_dp_rx_frags_cleanup(rx_tid, false);
+ goto out_unlock;
+
+err_frags_cleanup:
+ dev_kfree_skb_any(defrag_skb);
+ ath12k_dp_rx_frags_cleanup(rx_tid, true);
+out_unlock:
+ spin_unlock_bh(&ab->base_lock);
+ return ret;
+}
+
+static int
+ath12k_dp_process_rx_err_buf(struct ath12k *ar, struct hal_reo_dest_ring *desc,
+ bool drop, u32 cookie)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct sk_buff *msdu;
+ struct ath12k_skb_rxcb *rxcb;
+ struct hal_rx_desc *rx_desc;
+ u16 msdu_len;
+ u32 hal_rx_desc_sz = ab->hw_params->hal_desc_sz;
+ struct ath12k_rx_desc_info *desc_info;
+ u64 desc_va;
+
+ desc_va = ((u64)le32_to_cpu(desc->buf_va_hi) << 32 |
+ le32_to_cpu(desc->buf_va_lo));
+ desc_info = (struct ath12k_rx_desc_info *)((unsigned long)desc_va);
+
+ /* retry manual desc retrieval */
+ if (!desc_info) {
+ desc_info = ath12k_dp_get_rx_desc(ab, cookie);
+ if (!desc_info) {
+ ath12k_warn(ab, "Invalid cookie in manual desc retrival");
+ return -EINVAL;
+ }
+ }
+
+ if (desc_info->magic != ATH12K_DP_RX_DESC_MAGIC)
+ ath12k_warn(ab, " RX Exception, Check HW CC implementation");
+
+ msdu = desc_info->skb;
+ desc_info->skb = NULL;
+ spin_lock_bh(&ab->dp.rx_desc_lock);
+ list_move_tail(&desc_info->list, &ab->dp.rx_desc_free_list);
+ spin_unlock_bh(&ab->dp.rx_desc_lock);
+
+ rxcb = ATH12K_SKB_RXCB(msdu);
+ dma_unmap_single(ar->ab->dev, rxcb->paddr,
+ msdu->len + skb_tailroom(msdu),
+ DMA_FROM_DEVICE);
+
+ if (drop) {
+ dev_kfree_skb_any(msdu);
+ return 0;
+ }
+
+ rcu_read_lock();
+ if (!rcu_dereference(ar->ab->pdevs_active[ar->pdev_idx])) {
+ dev_kfree_skb_any(msdu);
+ goto exit;
+ }
+
+ if (test_bit(ATH12K_CAC_RUNNING, &ar->dev_flags)) {
+ dev_kfree_skb_any(msdu);
+ goto exit;
+ }
+
+ rx_desc = (struct hal_rx_desc *)msdu->data;
+ msdu_len = ath12k_dp_rx_h_msdu_len(ar->ab, rx_desc);
+ if ((msdu_len + hal_rx_desc_sz) > DP_RX_BUFFER_SIZE) {
+ ath12k_warn(ar->ab, "invalid msdu leng %u", msdu_len);
+ ath12k_dbg_dump(ar->ab, ATH12K_DBG_DATA, NULL, "", rx_desc,
+ sizeof(*rx_desc));
+ dev_kfree_skb_any(msdu);
+ goto exit;
+ }
+
+ skb_put(msdu, hal_rx_desc_sz + msdu_len);
+
+ if (ath12k_dp_rx_frag_h_mpdu(ar, msdu, desc)) {
+ dev_kfree_skb_any(msdu);
+ ath12k_dp_rx_link_desc_return(ar->ab, desc,
+ HAL_WBM_REL_BM_ACT_PUT_IN_IDLE);
+ }
+exit:
+ rcu_read_unlock();
+ return 0;
+}
+
+int ath12k_dp_rx_process_err(struct ath12k_base *ab, struct napi_struct *napi,
+ int budget)
+{
+ u32 msdu_cookies[HAL_NUM_RX_MSDUS_PER_LINK_DESC];
+ struct dp_link_desc_bank *link_desc_banks;
+ enum hal_rx_buf_return_buf_manager rbm;
+ struct hal_rx_msdu_link *link_desc_va;
+ int tot_n_bufs_reaped, quota, ret, i;
+ struct hal_reo_dest_ring *reo_desc;
+ struct dp_rxdma_ring *rx_ring;
+ struct dp_srng *reo_except;
+ u32 desc_bank, num_msdus;
+ struct hal_srng *srng;
+ struct ath12k_dp *dp;
+ int mac_id;
+ struct ath12k *ar;
+ dma_addr_t paddr;
+ bool is_frag;
+ bool drop = false;
+
+ tot_n_bufs_reaped = 0;
+ quota = budget;
+
+ dp = &ab->dp;
+ reo_except = &dp->reo_except_ring;
+ link_desc_banks = dp->link_desc_banks;
+
+ srng = &ab->hal.srng_list[reo_except->ring_id];
+
+ spin_lock_bh(&srng->lock);
+
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ while (budget &&
+ (reo_desc = ath12k_hal_srng_dst_get_next_entry(ab, srng))) {
+ ab->soc_stats.err_ring_pkts++;
+ ret = ath12k_hal_desc_reo_parse_err(ab, reo_desc, &paddr,
+ &desc_bank);
+ if (ret) {
+ ath12k_warn(ab, "failed to parse error reo desc %d\n",
+ ret);
+ continue;
+ }
+ link_desc_va = link_desc_banks[desc_bank].vaddr +
+ (paddr - link_desc_banks[desc_bank].paddr);
+ ath12k_hal_rx_msdu_link_info_get(link_desc_va, &num_msdus, msdu_cookies,
+ &rbm);
+ if (rbm != HAL_RX_BUF_RBM_WBM_CHIP0_IDLE_DESC_LIST &&
+ rbm != HAL_RX_BUF_RBM_SW3_BM &&
+ rbm != ab->hw_params->hal_params->rx_buf_rbm) {
+ ab->soc_stats.invalid_rbm++;
+ ath12k_warn(ab, "invalid return buffer manager %d\n", rbm);
+ ath12k_dp_rx_link_desc_return(ab, reo_desc,
+ HAL_WBM_REL_BM_ACT_REL_MSDU);
+ continue;
+ }
+
+ is_frag = !!(le32_to_cpu(reo_desc->rx_mpdu_info.info0) &
+ RX_MPDU_DESC_INFO0_FRAG_FLAG);
+
+ /* Process only rx fragments with one msdu per link desc below, and
+ * drop msdus indicated due to error reasons.
+ */
+ if (!is_frag || num_msdus > 1) {
+ drop = true;
+ /* Return the link desc back to wbm idle list */
+ ath12k_dp_rx_link_desc_return(ab, reo_desc,
+ HAL_WBM_REL_BM_ACT_PUT_IN_IDLE);
+ }
+
+ for (i = 0; i < num_msdus; i++) {
+ mac_id = le32_get_bits(reo_desc->info0,
+ HAL_REO_DEST_RING_INFO0_SRC_LINK_ID);
+
+ ar = ab->pdevs[mac_id].ar;
+
+ if (!ath12k_dp_process_rx_err_buf(ar, reo_desc, drop,
+ msdu_cookies[i]))
+ tot_n_bufs_reaped++;
+ }
+
+ if (tot_n_bufs_reaped >= quota) {
+ tot_n_bufs_reaped = quota;
+ goto exit;
+ }
+
+ budget = quota - tot_n_bufs_reaped;
+ }
+
+exit:
+ ath12k_hal_srng_access_end(ab, srng);
+
+ spin_unlock_bh(&srng->lock);
+
+ rx_ring = &dp->rx_refill_buf_ring;
+
+ ath12k_dp_rx_bufs_replenish(ab, 0, rx_ring, tot_n_bufs_reaped,
+ ab->hw_params->hal_params->rx_buf_rbm, true);
+
+ return tot_n_bufs_reaped;
+}
+
+static void ath12k_dp_rx_null_q_desc_sg_drop(struct ath12k *ar,
+ int msdu_len,
+ struct sk_buff_head *msdu_list)
+{
+ struct sk_buff *skb, *tmp;
+ struct ath12k_skb_rxcb *rxcb;
+ int n_buffs;
+
+ n_buffs = DIV_ROUND_UP(msdu_len,
+ (DP_RX_BUFFER_SIZE - ar->ab->hw_params->hal_desc_sz));
+
+ skb_queue_walk_safe(msdu_list, skb, tmp) {
+ rxcb = ATH12K_SKB_RXCB(skb);
+ if (rxcb->err_rel_src == HAL_WBM_REL_SRC_MODULE_REO &&
+ rxcb->err_code == HAL_REO_DEST_RING_ERROR_CODE_DESC_ADDR_ZERO) {
+ if (!n_buffs)
+ break;
+ __skb_unlink(skb, msdu_list);
+ dev_kfree_skb_any(skb);
+ n_buffs--;
+ }
+ }
+}
+
+static int ath12k_dp_rx_h_null_q_desc(struct ath12k *ar, struct sk_buff *msdu,
+ struct ieee80211_rx_status *status,
+ struct sk_buff_head *msdu_list)
+{
+ struct ath12k_base *ab = ar->ab;
+ u16 msdu_len, peer_id;
+ struct hal_rx_desc *desc = (struct hal_rx_desc *)msdu->data;
+ u8 l3pad_bytes;
+ struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+ u32 hal_rx_desc_sz = ar->ab->hw_params->hal_desc_sz;
+
+ msdu_len = ath12k_dp_rx_h_msdu_len(ab, desc);
+ peer_id = ath12k_dp_rx_h_peer_id(ab, desc);
+
+ if (!ath12k_peer_find_by_id(ab, peer_id)) {
+ ath12k_dbg(ab, ATH12K_DBG_DATA, "invalid peer id received in wbm err pkt %d\n",
+ peer_id);
+ return -EINVAL;
+ }
+
+ if (!rxcb->is_frag && ((msdu_len + hal_rx_desc_sz) > DP_RX_BUFFER_SIZE)) {
+ /* First buffer will be freed by the caller, so deduct its length */
+ msdu_len = msdu_len - (DP_RX_BUFFER_SIZE - hal_rx_desc_sz);
+ ath12k_dp_rx_null_q_desc_sg_drop(ar, msdu_len, msdu_list);
+ return -EINVAL;
+ }
+
+ /* Even after cleaning up the sg buffers in the msdu list with the above
+ * check, any msdu received with the continuation flag needs to be
+ * dropped as invalid. This protects against a random error frame with
+ * the continuation flag set.
+ */
+ if (rxcb->is_continuation)
+ return -EINVAL;
+
+ if (!ath12k_dp_rx_h_msdu_done(ab, desc)) {
+ ath12k_warn(ar->ab,
+ "msdu_done bit not set in null_q_des processing\n");
+ __skb_queue_purge(msdu_list);
+ return -EIO;
+ }
+
+ /* Handle NULL queue descriptor violations arising out of a missing
+ * REO queue for a given peer or a given TID. This typically
+ * may happen if a packet is received on a QoS-enabled TID before the
+ * ADDBA negotiation for that TID has set up the TID queue. It
+ * may also happen for MC/BC frames if they are not routed to the
+ * non-QoS TID queue in the absence of any other default TID queue.
+ * This error can show up both in a REO destination ring and a WBM
+ * release ring.
+ */
+
+ if (rxcb->is_frag) {
+ skb_pull(msdu, hal_rx_desc_sz);
+ } else {
+ l3pad_bytes = ath12k_dp_rx_h_l3pad(ab, desc);
+
+ if ((hal_rx_desc_sz + l3pad_bytes + msdu_len) > DP_RX_BUFFER_SIZE)
+ return -EINVAL;
+
+ skb_put(msdu, hal_rx_desc_sz + l3pad_bytes + msdu_len);
+ skb_pull(msdu, hal_rx_desc_sz + l3pad_bytes);
+ }
+ ath12k_dp_rx_h_ppdu(ar, desc, status);
+
+ ath12k_dp_rx_h_mpdu(ar, msdu, desc, status);
+
+ rxcb->tid = ath12k_dp_rx_h_tid(ab, desc);
+
+ /* Note that the caller still has access to the msdu and completes
+ * rx with mac80211. There is no need to worry about cleaning up
+ * amsdu_list.
+ */
+
+ return 0;
+}
+
+static bool ath12k_dp_rx_h_reo_err(struct ath12k *ar, struct sk_buff *msdu,
+ struct ieee80211_rx_status *status,
+ struct sk_buff_head *msdu_list)
+{
+ struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+ bool drop = false;
+
+ ar->ab->soc_stats.reo_error[rxcb->err_code]++;
+
+ switch (rxcb->err_code) {
+ case HAL_REO_DEST_RING_ERROR_CODE_DESC_ADDR_ZERO:
+ if (ath12k_dp_rx_h_null_q_desc(ar, msdu, status, msdu_list))
+ drop = true;
+ break;
+ case HAL_REO_DEST_RING_ERROR_CODE_PN_CHECK_FAILED:
+ /* TODO: Do not drop PN failed packets in the driver;
+ * instead, it is good to drop such packets in mac80211
+ * after incrementing the replay counters.
+ */
+ fallthrough;
+ default:
+ /* TODO: Review other errors and process them to mac80211
+ * as appropriate.
+ */
+ drop = true;
+ break;
+ }
+
+ return drop;
+}
+
+static void ath12k_dp_rx_h_tkip_mic_err(struct ath12k *ar, struct sk_buff *msdu,
+ struct ieee80211_rx_status *status)
+{
+ struct ath12k_base *ab = ar->ab;
+ u16 msdu_len;
+ struct hal_rx_desc *desc = (struct hal_rx_desc *)msdu->data;
+ u8 l3pad_bytes;
+ struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+ u32 hal_rx_desc_sz = ar->ab->hw_params->hal_desc_sz;
+
+ rxcb->is_first_msdu = ath12k_dp_rx_h_first_msdu(ab, desc);
+ rxcb->is_last_msdu = ath12k_dp_rx_h_last_msdu(ab, desc);
+
+ l3pad_bytes = ath12k_dp_rx_h_l3pad(ab, desc);
+ msdu_len = ath12k_dp_rx_h_msdu_len(ab, desc);
+ skb_put(msdu, hal_rx_desc_sz + l3pad_bytes + msdu_len);
+ skb_pull(msdu, hal_rx_desc_sz + l3pad_bytes);
+
+ ath12k_dp_rx_h_ppdu(ar, desc, status);
+
+ status->flag |= (RX_FLAG_MMIC_STRIPPED | RX_FLAG_MMIC_ERROR |
+ RX_FLAG_DECRYPTED);
+
+ ath12k_dp_rx_h_undecap(ar, msdu, desc,
+ HAL_ENCRYPT_TYPE_TKIP_MIC, status, false);
+}
+
+static bool ath12k_dp_rx_h_rxdma_err(struct ath12k *ar, struct sk_buff *msdu,
+ struct ieee80211_rx_status *status)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+ struct hal_rx_desc *rx_desc = (struct hal_rx_desc *)msdu->data;
+ bool drop = false;
+ u32 err_bitmap;
+
+ ar->ab->soc_stats.rxdma_error[rxcb->err_code]++;
+
+ switch (rxcb->err_code) {
+ case HAL_REO_ENTR_RING_RXDMA_ECODE_DECRYPT_ERR:
+ case HAL_REO_ENTR_RING_RXDMA_ECODE_TKIP_MIC_ERR:
+ err_bitmap = ath12k_dp_rx_h_mpdu_err(ab, rx_desc);
+ if (err_bitmap & HAL_RX_MPDU_ERR_TKIP_MIC) {
+ ath12k_dp_rx_h_tkip_mic_err(ar, msdu, status);
+ break;
+ }
+ fallthrough;
+ default:
+ /* TODO: Review other rxdma error code to check if anything is
+ * worth reporting to mac80211
+ */
+ drop = true;
+ break;
+ }
+
+ return drop;
+}
+
+static void ath12k_dp_rx_wbm_err(struct ath12k *ar,
+ struct napi_struct *napi,
+ struct sk_buff *msdu,
+ struct sk_buff_head *msdu_list)
+{
+ struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+ struct ieee80211_rx_status rxs = {0};
+ bool drop = true;
+
+ switch (rxcb->err_rel_src) {
+ case HAL_WBM_REL_SRC_MODULE_REO:
+ drop = ath12k_dp_rx_h_reo_err(ar, msdu, &rxs, msdu_list);
+ break;
+ case HAL_WBM_REL_SRC_MODULE_RXDMA:
+ drop = ath12k_dp_rx_h_rxdma_err(ar, msdu, &rxs);
+ break;
+ default:
+ /* msdu will get freed */
+ break;
+ }
+
+ if (drop) {
+ dev_kfree_skb_any(msdu);
+ return;
+ }
+
+ ath12k_dp_rx_deliver_msdu(ar, napi, msdu, &rxs);
+}
+
+int ath12k_dp_rx_process_wbm_err(struct ath12k_base *ab,
+ struct napi_struct *napi, int budget)
+{
+ struct ath12k *ar;
+ struct ath12k_dp *dp = &ab->dp;
+ struct dp_rxdma_ring *rx_ring;
+ struct hal_rx_wbm_rel_info err_info;
+ struct hal_srng *srng;
+ struct sk_buff *msdu;
+ struct sk_buff_head msdu_list[MAX_RADIOS];
+ struct ath12k_skb_rxcb *rxcb;
+ void *rx_desc;
+ int mac_id;
+ int num_buffs_reaped = 0;
+ struct ath12k_rx_desc_info *desc_info;
+ int ret, i;
+
+ for (i = 0; i < ab->num_radios; i++)
+ __skb_queue_head_init(&msdu_list[i]);
+
+ srng = &ab->hal.srng_list[dp->rx_rel_ring.ring_id];
+ rx_ring = &dp->rx_refill_buf_ring;
+
+ spin_lock_bh(&srng->lock);
+
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ while (budget) {
+ rx_desc = ath12k_hal_srng_dst_get_next_entry(ab, srng);
+ if (!rx_desc)
+ break;
+
+ ret = ath12k_hal_wbm_desc_parse_err(ab, rx_desc, &err_info);
+ if (ret) {
+ ath12k_warn(ab,
+ "failed to parse rx error in wbm_rel ring desc %d\n",
+ ret);
+ continue;
+ }
+
+ desc_info = (struct ath12k_rx_desc_info *)err_info.rx_desc;
+
+ /* retry manual desc retrieval if hw cc is not done */
+ if (!desc_info) {
+ desc_info = ath12k_dp_get_rx_desc(ab, err_info.cookie);
+ if (!desc_info) {
+ ath12k_warn(ab, "Invalid cookie in manual desc retrival");
+ continue;
+ }
+ }
+
+ /* FIXME: Extract the mac id correctly. Since descs are not tied
+ * to a mac, we can extract it from the vdev id in the ring desc.
+ */
+ mac_id = 0;
+
+ if (desc_info->magic != ATH12K_DP_RX_DESC_MAGIC)
+ ath12k_warn(ab, "WBM RX err, Check HW CC implementation");
+
+ msdu = desc_info->skb;
+ desc_info->skb = NULL;
+
+ spin_lock_bh(&dp->rx_desc_lock);
+ list_move_tail(&desc_info->list, &dp->rx_desc_free_list);
+ spin_unlock_bh(&dp->rx_desc_lock);
+
+ rxcb = ATH12K_SKB_RXCB(msdu);
+ dma_unmap_single(ab->dev, rxcb->paddr,
+ msdu->len + skb_tailroom(msdu),
+ DMA_FROM_DEVICE);
+
+ num_buffs_reaped++;
+
+ if (!err_info.continuation)
+ budget--;
+
+ if (err_info.push_reason !=
+ HAL_REO_DEST_RING_PUSH_REASON_ERR_DETECTED) {
+ dev_kfree_skb_any(msdu);
+ continue;
+ }
+
+ rxcb->err_rel_src = err_info.err_rel_src;
+ rxcb->err_code = err_info.err_code;
+ rxcb->rx_desc = (struct hal_rx_desc *)msdu->data;
+ __skb_queue_tail(&msdu_list[mac_id], msdu);
+
+ rxcb->is_first_msdu = err_info.first_msdu;
+ rxcb->is_last_msdu = err_info.last_msdu;
+ rxcb->is_continuation = err_info.continuation;
+ }
+
+ ath12k_hal_srng_access_end(ab, srng);
+
+ spin_unlock_bh(&srng->lock);
+
+ if (!num_buffs_reaped)
+ goto done;
+
+ ath12k_dp_rx_bufs_replenish(ab, 0, rx_ring, num_buffs_reaped,
+ ab->hw_params->hal_params->rx_buf_rbm, true);
+
+ rcu_read_lock();
+ for (i = 0; i < ab->num_radios; i++) {
+ if (!rcu_dereference(ab->pdevs_active[i])) {
+ __skb_queue_purge(&msdu_list[i]);
+ continue;
+ }
+
+ ar = ab->pdevs[i].ar;
+
+ if (test_bit(ATH12K_CAC_RUNNING, &ar->dev_flags)) {
+ __skb_queue_purge(&msdu_list[i]);
+ continue;
+ }
+
+ while ((msdu = __skb_dequeue(&msdu_list[i])) != NULL)
+ ath12k_dp_rx_wbm_err(ar, napi, msdu, &msdu_list[i]);
+ }
+ rcu_read_unlock();
+done:
+ return num_buffs_reaped;
+}
+
+void ath12k_dp_rx_process_reo_status(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ struct hal_tlv_64_hdr *hdr;
+ struct hal_srng *srng;
+ struct ath12k_dp_rx_reo_cmd *cmd, *tmp;
+ bool found = false;
+ u16 tag;
+ struct hal_reo_status reo_status;
+
+ srng = &ab->hal.srng_list[dp->reo_status_ring.ring_id];
+
+ memset(&reo_status, 0, sizeof(reo_status));
+
+ spin_lock_bh(&srng->lock);
+
+ ath12k_hal_srng_access_begin(ab, srng);
+
+ while ((hdr = ath12k_hal_srng_dst_get_next_entry(ab, srng))) {
+ tag = u64_get_bits(hdr->tl, HAL_SRNG_TLV_HDR_TAG);
+
+ switch (tag) {
+ case HAL_REO_GET_QUEUE_STATS_STATUS:
+ ath12k_hal_reo_status_queue_stats(ab, hdr,
+ &reo_status);
+ break;
+ case HAL_REO_FLUSH_QUEUE_STATUS:
+ ath12k_hal_reo_flush_queue_status(ab, hdr,
+ &reo_status);
+ break;
+ case HAL_REO_FLUSH_CACHE_STATUS:
+ ath12k_hal_reo_flush_cache_status(ab, hdr,
+ &reo_status);
+ break;
+ case HAL_REO_UNBLOCK_CACHE_STATUS:
+ ath12k_hal_reo_unblk_cache_status(ab, hdr,
+ &reo_status);
+ break;
+ case HAL_REO_FLUSH_TIMEOUT_LIST_STATUS:
+ ath12k_hal_reo_flush_timeout_list_status(ab, hdr,
+ &reo_status);
+ break;
+ case HAL_REO_DESCRIPTOR_THRESHOLD_REACHED_STATUS:
+ ath12k_hal_reo_desc_thresh_reached_status(ab, hdr,
+ &reo_status);
+ break;
+ case HAL_REO_UPDATE_RX_REO_QUEUE_STATUS:
+ ath12k_hal_reo_update_rx_reo_queue_status(ab, hdr,
+ &reo_status);
+ break;
+ default:
+ ath12k_warn(ab, "Unknown reo status type %d\n", tag);
+ continue;
+ }
+
+ spin_lock_bh(&dp->reo_cmd_lock);
+ list_for_each_entry_safe(cmd, tmp, &dp->reo_cmd_list, list) {
+ if (reo_status.uniform_hdr.cmd_num == cmd->cmd_num) {
+ found = true;
+ list_del(&cmd->list);
+ break;
+ }
+ }
+ spin_unlock_bh(&dp->reo_cmd_lock);
+
+ if (found) {
+ cmd->handler(dp, (void *)&cmd->data,
+ reo_status.uniform_hdr.cmd_status);
+ kfree(cmd);
+ }
+
+ found = false;
+ }
+
+ ath12k_hal_srng_access_end(ab, srng);
+
+ spin_unlock_bh(&srng->lock);
+}
+
+void ath12k_dp_rx_free(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ int i;
+
+ ath12k_dp_srng_cleanup(ab, &dp->rx_refill_buf_ring.refill_buf_ring);
+
+ for (i = 0; i < ab->hw_params->num_rxmda_per_pdev; i++) {
+ if (ab->hw_params->rx_mac_buf_ring)
+ ath12k_dp_srng_cleanup(ab, &dp->rx_mac_buf_ring[i]);
+ }
+
+ for (i = 0; i < ab->hw_params->num_rxdma_dst_ring; i++)
+ ath12k_dp_srng_cleanup(ab, &dp->rxdma_err_dst_ring[i]);
+
+ ath12k_dp_srng_cleanup(ab, &dp->rxdma_mon_buf_ring.refill_buf_ring);
+ ath12k_dp_srng_cleanup(ab, &dp->tx_mon_buf_ring.refill_buf_ring);
+
+ ath12k_dp_rxdma_buf_free(ab);
+}
+
+void ath12k_dp_rx_pdev_free(struct ath12k_base *ab, int mac_id)
+{
+ struct ath12k *ar = ab->pdevs[mac_id].ar;
+
+ ath12k_dp_rx_pdev_srng_free(ar);
+}
+
+int ath12k_dp_rxdma_ring_sel_config_qcn9274(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ struct htt_rx_ring_tlv_filter tlv_filter = {0};
+ u32 ring_id;
+ int ret;
+ u32 hal_rx_desc_sz = ab->hw_params->hal_desc_sz;
+
+ ring_id = dp->rx_refill_buf_ring.refill_buf_ring.ring_id;
+
+ tlv_filter.rx_filter = HTT_RX_TLV_FLAGS_RXDMA_RING;
+ tlv_filter.pkt_filter_flags2 = HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_BAR;
+ tlv_filter.pkt_filter_flags3 = HTT_RX_FP_DATA_PKT_FILTER_TLV_FLASG3_MCAST |
+ HTT_RX_FP_DATA_PKT_FILTER_TLV_FLASG3_UCAST |
+ HTT_RX_FP_DATA_PKT_FILTER_TLV_FLASG3_NULL_DATA;
+ tlv_filter.offset_valid = true;
+ tlv_filter.rx_packet_offset = hal_rx_desc_sz;
+
+ tlv_filter.rx_mpdu_start_offset =
+ ab->hw_params->hal_ops->rx_desc_get_mpdu_start_offset();
+ tlv_filter.rx_msdu_end_offset =
+ ab->hw_params->hal_ops->rx_desc_get_msdu_end_offset();
+
+ /* TODO: Selectively subscribe to the required qwords within msdu_end
+ * and mpdu_start, set up the mask in the message below, and modify
+ * the rx_desc struct accordingly.
+ */
+ ret = ath12k_dp_tx_htt_rx_filter_setup(ab, ring_id, 0,
+ HAL_RXDMA_BUF,
+ DP_RXDMA_REFILL_RING_SIZE,
+ &tlv_filter);
+
+ return ret;
+}
+
+int ath12k_dp_rxdma_ring_sel_config_wcn7850(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ struct htt_rx_ring_tlv_filter tlv_filter = {0};
+ u32 ring_id;
+ int ret;
+ u32 hal_rx_desc_sz = ab->hw_params->hal_desc_sz;
+ int i;
+
+ ring_id = dp->rx_refill_buf_ring.refill_buf_ring.ring_id;
+
+ tlv_filter.rx_filter = HTT_RX_TLV_FLAGS_RXDMA_RING;
+ tlv_filter.pkt_filter_flags2 = HTT_RX_FP_CTRL_PKT_FILTER_TLV_FLAGS2_BAR;
+ tlv_filter.pkt_filter_flags3 = HTT_RX_FP_DATA_PKT_FILTER_TLV_FLASG3_MCAST |
+ HTT_RX_FP_DATA_PKT_FILTER_TLV_FLASG3_UCAST |
+ HTT_RX_FP_DATA_PKT_FILTER_TLV_FLASG3_NULL_DATA;
+ tlv_filter.offset_valid = true;
+ tlv_filter.rx_packet_offset = hal_rx_desc_sz;
+
+ tlv_filter.rx_header_offset = offsetof(struct hal_rx_desc_wcn7850, pkt_hdr_tlv);
+
+ tlv_filter.rx_mpdu_start_offset =
+ ab->hw_params->hal_ops->rx_desc_get_mpdu_start_offset();
+ tlv_filter.rx_msdu_end_offset =
+ ab->hw_params->hal_ops->rx_desc_get_msdu_end_offset();
+
+ /* TODO: Selectively subscribe to the required qwords within msdu_end
+ * and mpdu_start, set up the mask in the message below, and modify
+ * the rx_desc struct accordingly.
+ */
+
+ for (i = 0; i < ab->hw_params->num_rxmda_per_pdev; i++) {
+ ring_id = dp->rx_mac_buf_ring[i].ring_id;
+ ret = ath12k_dp_tx_htt_rx_filter_setup(ab, ring_id, i,
+ HAL_RXDMA_BUF,
+ DP_RXDMA_REFILL_RING_SIZE,
+ &tlv_filter);
+ }
+
+ return ret;
+}
+
+int ath12k_dp_rx_htt_setup(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ u32 ring_id;
+ int i, ret;
+
+ /* TODO: Need to verify the HTT setup for QCN9224 */
+ ring_id = dp->rx_refill_buf_ring.refill_buf_ring.ring_id;
+ ret = ath12k_dp_tx_htt_srng_setup(ab, ring_id, 0, HAL_RXDMA_BUF);
+ if (ret) {
+ ath12k_warn(ab, "failed to configure rx_refill_buf_ring %d\n",
+ ret);
+ return ret;
+ }
+
+ if (ab->hw_params->rx_mac_buf_ring) {
+ for (i = 0; i < ab->hw_params->num_rxmda_per_pdev; i++) {
+ ring_id = dp->rx_mac_buf_ring[i].ring_id;
+ ret = ath12k_dp_tx_htt_srng_setup(ab, ring_id,
+ i, HAL_RXDMA_BUF);
+ if (ret) {
+ ath12k_warn(ab, "failed to configure rx_mac_buf_ring%d %d\n",
+ i, ret);
+ return ret;
+ }
+ }
+ }
+
+ for (i = 0; i < ab->hw_params->num_rxdma_dst_ring; i++) {
+ ring_id = dp->rxdma_err_dst_ring[i].ring_id;
+ ret = ath12k_dp_tx_htt_srng_setup(ab, ring_id,
+ i, HAL_RXDMA_DST);
+ if (ret) {
+ ath12k_warn(ab, "failed to configure rxdma_err_dest_ring%d %d\n",
+ i, ret);
+ return ret;
+ }
+ }
+
+ if (ab->hw_params->rxdma1_enable) {
+ ring_id = dp->rxdma_mon_buf_ring.refill_buf_ring.ring_id;
+ ret = ath12k_dp_tx_htt_srng_setup(ab, ring_id,
+ 0, HAL_RXDMA_MONITOR_BUF);
+ if (ret) {
+ ath12k_warn(ab, "failed to configure rxdma_mon_buf_ring %d\n",
+ ret);
+ return ret;
+ }
+
+ ring_id = dp->tx_mon_buf_ring.refill_buf_ring.ring_id;
+ ret = ath12k_dp_tx_htt_srng_setup(ab, ring_id,
+ 0, HAL_TX_MONITOR_BUF);
+ if (ret) {
+ ath12k_warn(ab, "failed to configure rxdma_mon_buf_ring %d\n",
+ ret);
+ return ret;
+ }
+ }
+
+ ret = ab->hw_params->hw_ops->rxdma_ring_sel_config(ab);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup rxdma ring selection config\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+int ath12k_dp_rx_alloc(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ int i, ret;
+
+ idr_init(&dp->rx_refill_buf_ring.bufs_idr);
+ spin_lock_init(&dp->rx_refill_buf_ring.idr_lock);
+
+ idr_init(&dp->rxdma_mon_buf_ring.bufs_idr);
+ spin_lock_init(&dp->rxdma_mon_buf_ring.idr_lock);
+
+ idr_init(&dp->tx_mon_buf_ring.bufs_idr);
+ spin_lock_init(&dp->tx_mon_buf_ring.idr_lock);
+
+ ret = ath12k_dp_srng_setup(ab,
+ &dp->rx_refill_buf_ring.refill_buf_ring,
+ HAL_RXDMA_BUF, 0, 0,
+ DP_RXDMA_BUF_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup rx_refill_buf_ring\n");
+ return ret;
+ }
+
+ if (ab->hw_params->rx_mac_buf_ring) {
+ for (i = 0; i < ab->hw_params->num_rxmda_per_pdev; i++) {
+ ret = ath12k_dp_srng_setup(ab,
+ &dp->rx_mac_buf_ring[i],
+ HAL_RXDMA_BUF, 1,
+ i, 1024);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup rx_mac_buf_ring %d\n",
+ i);
+ return ret;
+ }
+ }
+ }
+
+ for (i = 0; i < ab->hw_params->num_rxdma_dst_ring; i++) {
+ ret = ath12k_dp_srng_setup(ab, &dp->rxdma_err_dst_ring[i],
+ HAL_RXDMA_DST, 0, i,
+ DP_RXDMA_ERR_DST_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup rxdma_err_dst_ring %d\n", i);
+ return ret;
+ }
+ }
+
+ if (ab->hw_params->rxdma1_enable) {
+ ret = ath12k_dp_srng_setup(ab,
+ &dp->rxdma_mon_buf_ring.refill_buf_ring,
+ HAL_RXDMA_MONITOR_BUF, 0, 0,
+ DP_RXDMA_MONITOR_BUF_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup HAL_RXDMA_MONITOR_BUF\n");
+ return ret;
+ }
+
+ ret = ath12k_dp_srng_setup(ab,
+ &dp->tx_mon_buf_ring.refill_buf_ring,
+ HAL_TX_MONITOR_BUF, 0, 0,
+ DP_TX_MONITOR_BUF_RING_SIZE);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup DP_TX_MONITOR_BUF_RING_SIZE\n");
+ return ret;
+ }
+ }
+
+ ret = ath12k_dp_rxdma_buf_setup(ab);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup rxdma ring\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+int ath12k_dp_rx_pdev_alloc(struct ath12k_base *ab, int mac_id)
+{
+ struct ath12k *ar = ab->pdevs[mac_id].ar;
+ struct ath12k_pdev_dp *dp = &ar->dp;
+ u32 ring_id;
+ int i;
+ int ret;
+
+ if (!ab->hw_params->rxdma1_enable)
+ goto out;
+
+ ret = ath12k_dp_rx_pdev_srng_alloc(ar);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup rx srngs\n");
+ return ret;
+ }
+
+ for (i = 0; i < ab->hw_params->num_rxmda_per_pdev; i++) {
+ ring_id = dp->rxdma_mon_dst_ring[i].ring_id;
+ ret = ath12k_dp_tx_htt_srng_setup(ab, ring_id,
+ mac_id + i,
+ HAL_RXDMA_MONITOR_DST);
+ if (ret) {
+ ath12k_warn(ab,
+ "failed to configure rxdma_mon_dst_ring %d %d\n",
+ i, ret);
+ return ret;
+ }
+
+ ring_id = dp->tx_mon_dst_ring[i].ring_id;
+ ret = ath12k_dp_tx_htt_srng_setup(ab, ring_id,
+ mac_id + i,
+ HAL_TX_MONITOR_DST);
+ if (ret) {
+ ath12k_warn(ab,
+ "failed to configure tx_mon_dst_ring %d %d\n",
+ i, ret);
+ return ret;
+ }
+ }
+out:
+ return 0;
+}
+
+static int ath12k_dp_rx_pdev_mon_status_attach(struct ath12k *ar)
+{
+ struct ath12k_pdev_dp *dp = &ar->dp;
+ struct ath12k_mon_data *pmon = (struct ath12k_mon_data *)&dp->mon_data;
+
+ skb_queue_head_init(&pmon->rx_status_q);
+
+ pmon->mon_ppdu_status = DP_PPDU_STATUS_START;
+
+ memset(&pmon->rx_mon_stats, 0,
+ sizeof(pmon->rx_mon_stats));
+ return 0;
+}
+
+int ath12k_dp_rx_pdev_mon_attach(struct ath12k *ar)
+{
+ struct ath12k_pdev_dp *dp = &ar->dp;
+ struct ath12k_mon_data *pmon = &dp->mon_data;
+ int ret = 0;
+
+ ret = ath12k_dp_rx_pdev_mon_status_attach(ar);
+ if (ret) {
+ ath12k_warn(ar->ab, "pdev_mon_status_attach() failed");
+ return ret;
+ }
+
+ /* if rxdma1_enable is false, no need to setup
+ * rxdma_mon_desc_ring.
+ */
+ if (!ar->ab->hw_params->rxdma1_enable)
+ return 0;
+
+ pmon->mon_last_linkdesc_paddr = 0;
+ pmon->mon_last_buf_cookie = DP_RX_DESC_COOKIE_MAX + 1;
+ spin_lock_init(&pmon->mon_lock);
+
+ return 0;
+}
+
+int ath12k_dp_rx_pktlog_start(struct ath12k_base *ab)
+{
+ /* start reap timer */
+ mod_timer(&ab->mon_reap_timer,
+ jiffies + msecs_to_jiffies(ATH12K_MON_TIMER_INTERVAL));
+
+ return 0;
+}
+
+int ath12k_dp_rx_pktlog_stop(struct ath12k_base *ab, bool stop_timer)
+{
+ int ret;
+
+ if (stop_timer)
+ del_timer_sync(&ab->mon_reap_timer);
+
+ /* reap all the monitor related rings */
+ ret = ath12k_dp_purge_mon_ring(ab);
+ if (ret) {
+ ath12k_warn(ab, "failed to purge dp mon ring: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.h b/drivers/net/wireless/ath/ath12k/dp_rx.h
new file mode 100644
index 000000000000..c955b5c859d1
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/dp_rx.h
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+#ifndef ATH12K_DP_RX_H
+#define ATH12K_DP_RX_H
+
+#include "core.h"
+#include "rx_desc.h"
+#include "debug.h"
+
+#define DP_MAX_NWIFI_HDR_LEN 30
+
+struct ath12k_dp_rx_tid {
+ u8 tid;
+ u32 *vaddr;
+ dma_addr_t paddr;
+ u32 size;
+ u32 ba_win_sz;
+ bool active;
+
+ /* Info related to rx fragments */
+ u32 cur_sn;
+ u16 last_frag_no;
+ u16 rx_frag_bitmap;
+
+ struct sk_buff_head rx_frags;
+ struct hal_reo_dest_ring *dst_ring_desc;
+
+ /* Timer info related to fragments */
+ struct timer_list frag_timer;
+ struct ath12k_base *ab;
+};
+
+struct ath12k_dp_rx_reo_cache_flush_elem {
+ struct list_head list;
+ struct ath12k_dp_rx_tid data;
+ unsigned long ts;
+};
+
+struct ath12k_dp_rx_reo_cmd {
+ struct list_head list;
+ struct ath12k_dp_rx_tid data;
+ int cmd_num;
+ void (*handler)(struct ath12k_dp *dp, void *ctx,
+ enum hal_reo_cmd_status status);
+};
+
+#define ATH12K_DP_RX_REO_DESC_FREE_THRES 64
+#define ATH12K_DP_RX_REO_DESC_FREE_TIMEOUT_MS 1000
+
+enum ath12k_dp_rx_decap_type {
+ DP_RX_DECAP_TYPE_RAW,
+ DP_RX_DECAP_TYPE_NATIVE_WIFI,
+ DP_RX_DECAP_TYPE_ETHERNET2_DIX,
+ DP_RX_DECAP_TYPE_8023,
+};
+
+struct ath12k_dp_rx_rfc1042_hdr {
+ u8 llc_dsap;
+ u8 llc_ssap;
+ u8 llc_ctrl;
+ u8 snap_oui[3];
+ __be16 snap_type;
+} __packed;
+
+static inline u32 ath12k_he_gi_to_nl80211_he_gi(u8 sgi)
+{
+ u32 ret = 0;
+
+ switch (sgi) {
+ case RX_MSDU_START_SGI_0_8_US:
+ ret = NL80211_RATE_INFO_HE_GI_0_8;
+ break;
+ case RX_MSDU_START_SGI_1_6_US:
+ ret = NL80211_RATE_INFO_HE_GI_1_6;
+ break;
+ case RX_MSDU_START_SGI_3_2_US:
+ ret = NL80211_RATE_INFO_HE_GI_3_2;
+ break;
+ }
+
+ return ret;
+}
+
+int ath12k_dp_rx_ampdu_start(struct ath12k *ar,
+ struct ieee80211_ampdu_params *params);
+int ath12k_dp_rx_ampdu_stop(struct ath12k *ar,
+ struct ieee80211_ampdu_params *params);
+int ath12k_dp_rx_peer_pn_replay_config(struct ath12k_vif *arvif,
+ const u8 *peer_addr,
+ enum set_key_cmd key_cmd,
+ struct ieee80211_key_conf *key);
+void ath12k_dp_rx_peer_tid_cleanup(struct ath12k *ar, struct ath12k_peer *peer);
+void ath12k_dp_rx_peer_tid_delete(struct ath12k *ar,
+ struct ath12k_peer *peer, u8 tid);
+int ath12k_dp_rx_peer_tid_setup(struct ath12k *ar, const u8 *peer_mac, int vdev_id,
+ u8 tid, u32 ba_win_sz, u16 ssn,
+ enum hal_pn_type pn_type);
+void ath12k_dp_htt_htc_t2h_msg_handler(struct ath12k_base *ab,
+ struct sk_buff *skb);
+int ath12k_dp_rx_pdev_reo_setup(struct ath12k_base *ab);
+void ath12k_dp_rx_pdev_reo_cleanup(struct ath12k_base *ab);
+int ath12k_dp_rx_htt_setup(struct ath12k_base *ab);
+int ath12k_dp_rx_alloc(struct ath12k_base *ab);
+void ath12k_dp_rx_free(struct ath12k_base *ab);
+int ath12k_dp_rx_pdev_alloc(struct ath12k_base *ab, int pdev_idx);
+void ath12k_dp_rx_pdev_free(struct ath12k_base *ab, int pdev_idx);
+void ath12k_dp_rx_reo_cmd_list_cleanup(struct ath12k_base *ab);
+void ath12k_dp_rx_process_reo_status(struct ath12k_base *ab);
+int ath12k_dp_rx_process_wbm_err(struct ath12k_base *ab,
+ struct napi_struct *napi, int budget);
+int ath12k_dp_rx_process_err(struct ath12k_base *ab, struct napi_struct *napi,
+ int budget);
+int ath12k_dp_rx_process(struct ath12k_base *ab, int mac_id,
+ struct napi_struct *napi,
+ int budget);
+int ath12k_dp_rx_bufs_replenish(struct ath12k_base *ab, int mac_id,
+ struct dp_rxdma_ring *rx_ring,
+ int req_entries,
+ enum hal_rx_buf_return_buf_manager mgr,
+ bool hw_cc);
+int ath12k_dp_rx_pdev_mon_attach(struct ath12k *ar);
+int ath12k_dp_rx_peer_frag_setup(struct ath12k *ar, const u8 *peer_mac, int vdev_id);
+
+int ath12k_dp_rx_pktlog_start(struct ath12k_base *ab);
+int ath12k_dp_rx_pktlog_stop(struct ath12k_base *ab, bool stop_timer);
+u8 ath12k_dp_rx_h_l3pad(struct ath12k_base *ab,
+ struct hal_rx_desc *desc);
+struct ath12k_peer *
+ath12k_dp_rx_h_find_peer(struct ath12k_base *ab, struct sk_buff *msdu);
+u8 ath12k_dp_rx_h_decap_type(struct ath12k_base *ab,
+ struct hal_rx_desc *desc);
+u32 ath12k_dp_rx_h_mpdu_err(struct ath12k_base *ab,
+ struct hal_rx_desc *desc);
+void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+ struct ieee80211_rx_status *rx_status);
+
+int ath12k_dp_rxdma_ring_sel_config_qcn9274(struct ath12k_base *ab);
+int ath12k_dp_rxdma_ring_sel_config_wcn7850(struct ath12k_base *ab);
+
+#endif /* ATH12K_DP_RX_H */
diff --git a/drivers/net/wireless/ath/ath12k/dp_tx.c b/drivers/net/wireless/ath/ath12k/dp_tx.c
new file mode 100644
index 000000000000..95294f35155c
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/dp_tx.c
@@ -0,0 +1,1211 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include "core.h"
+#include "dp_tx.h"
+#include "debug.h"
+#include "hw.h"
+
+static enum hal_tcl_encap_type
+ath12k_dp_tx_get_encap_type(struct ath12k_vif *arvif, struct sk_buff *skb)
+{
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+
+ if (tx_info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP)
+ return HAL_TCL_ENCAP_TYPE_ETHERNET;
+
+ return HAL_TCL_ENCAP_TYPE_NATIVE_WIFI;
+}
+
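+/* Convert an 802.11 QoS data frame to the native-wifi encapsulation the
+ * hardware expects: strip the QoS control field and clear the QOS_DATA
+ * subtype bit in the frame control.
+ */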
+static void ath12k_dp_tx_encap_nwifi(struct sk_buff *skb)
+{
+ struct ieee80211_hdr *hdr = (void *)skb->data;
+ u8 *qos_ctl;
+
+ if (!ieee80211_is_data_qos(hdr->frame_control))
+ return;
+
+ qos_ctl = ieee80211_get_qos_ctl(hdr);
+ memmove(skb->data + IEEE80211_QOS_CTL_LEN,
+ skb->data, (void *)qos_ctl - (void *)skb->data);
+ skb_pull(skb, IEEE80211_QOS_CTL_LEN);
+
+ hdr = (void *)skb->data;
+ hdr->frame_control &= ~__cpu_to_le16(IEEE80211_STYPE_QOS_DATA);
+}
+
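+/* Pick the TID for a frame: QoS data and HW-encapsulated frames derive it
+ * from skb->priority, everything else is queued on the non-QoS TID.
+ */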
+static u8 ath12k_dp_tx_get_tid(struct sk_buff *skb)
+{
+ struct ieee80211_hdr *hdr = (void *)skb->data;
+ struct ath12k_skb_cb *cb = ATH12K_SKB_CB(skb);
+
+ if (cb->flags & ATH12K_SKB_HW_80211_ENCAP)
+ return skb->priority & IEEE80211_QOS_CTL_TID_MASK;
+ else if (!ieee80211_is_data_qos(hdr->frame_control))
+ return HAL_DESC_REO_NON_QOS_TID;
+ else
+ return skb->priority & IEEE80211_QOS_CTL_TID_MASK;
+}
+
+enum hal_encrypt_type ath12k_dp_tx_get_encrypt_type(u32 cipher)
+{
+ switch (cipher) {
+ case WLAN_CIPHER_SUITE_WEP40:
+ return HAL_ENCRYPT_TYPE_WEP_40;
+ case WLAN_CIPHER_SUITE_WEP104:
+ return HAL_ENCRYPT_TYPE_WEP_104;
+ case WLAN_CIPHER_SUITE_TKIP:
+ return HAL_ENCRYPT_TYPE_TKIP_MIC;
+ case WLAN_CIPHER_SUITE_CCMP:
+ return HAL_ENCRYPT_TYPE_CCMP_128;
+ case WLAN_CIPHER_SUITE_CCMP_256:
+ return HAL_ENCRYPT_TYPE_CCMP_256;
+ case WLAN_CIPHER_SUITE_GCMP:
+ return HAL_ENCRYPT_TYPE_GCMP_128;
+ case WLAN_CIPHER_SUITE_GCMP_256:
+ return HAL_ENCRYPT_TYPE_AES_GCMP_256;
+ default:
+ return HAL_ENCRYPT_TYPE_OPEN;
+ }
+}
+
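+/* Tx descriptors are pre-allocated per pool; instead of per-packet
+ * allocation, a descriptor is moved between the per-pool free and used
+ * lists under tx_desc_lock.
+ */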
+static void ath12k_dp_tx_release_txbuf(struct ath12k_dp *dp,
+ struct ath12k_tx_desc_info *tx_desc,
+ u8 pool_id)
+{
+ spin_lock_bh(&dp->tx_desc_lock[pool_id]);
+ list_move_tail(&tx_desc->list, &dp->tx_desc_free_list[pool_id]);
+ spin_unlock_bh(&dp->tx_desc_lock[pool_id]);
+}
+
+static struct ath12k_tx_desc_info *ath12k_dp_tx_assign_buffer(struct ath12k_dp *dp,
+ u8 pool_id)
+{
+ struct ath12k_tx_desc_info *desc;
+
+ spin_lock_bh(&dp->tx_desc_lock[pool_id]);
+ desc = list_first_entry_or_null(&dp->tx_desc_free_list[pool_id],
+ struct ath12k_tx_desc_info,
+ list);
+ if (!desc) {
+ spin_unlock_bh(&dp->tx_desc_lock[pool_id]);
+ ath12k_warn(dp->ab, "failed to allocate data Tx buffer\n");
+ return NULL;
+ }
+
+ list_move_tail(&desc->list, &dp->tx_desc_used_list[pool_id]);
+ spin_unlock_bh(&dp->tx_desc_lock[pool_id]);
+
+ return desc;
+}
+
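+/* Populate the MSDU extension descriptor carrying the buffer address and
+ * length along with encap/encrypt type overrides; ath12k_dp_tx() uses it
+ * for raw-mode frames when HW crypto is disabled.
+ */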
+static void ath12k_hal_tx_cmd_ext_desc_setup(struct ath12k_base *ab, void *cmd,
+ struct hal_tx_info *ti)
+{
+ struct hal_tx_msdu_ext_desc *tcl_ext_cmd = (struct hal_tx_msdu_ext_desc *)cmd;
+
+ tcl_ext_cmd->info0 = le32_encode_bits(ti->paddr,
+ HAL_TX_MSDU_EXT_INFO0_BUF_PTR_LO);
+ tcl_ext_cmd->info1 = le32_encode_bits(0x0,
+ HAL_TX_MSDU_EXT_INFO1_BUF_PTR_HI) |
+ le32_encode_bits(ti->data_len,
+ HAL_TX_MSDU_EXT_INFO1_BUF_LEN);
+
+ tcl_ext_cmd->info1 |= le32_encode_bits(1, HAL_TX_MSDU_EXT_INFO1_EXTN_OVERRIDE) |
+ le32_encode_bits(ti->encap_type,
+ HAL_TX_MSDU_EXT_INFO1_ENCAP_TYPE) |
+ le32_encode_bits(ti->encrypt_type,
+ HAL_TX_MSDU_EXT_INFO1_ENCRYPT_TYPE);
+}
+
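+/* Transmit a single MSDU: pick a TCL ring, reserve a tx descriptor,
+ * DMA-map the frame (plus an MSDU extension descriptor when needed) and
+ * enqueue a TCL data command. When the selected ring is full, the
+ * remaining rings are retried via the tcl_ring_sel label.
+ */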
+int ath12k_dp_tx(struct ath12k *ar, struct ath12k_vif *arvif,
+ struct sk_buff *skb)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_dp *dp = &ab->dp;
+ struct hal_tx_info ti = {0};
+ struct ath12k_tx_desc_info *tx_desc;
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ struct ath12k_skb_cb *skb_cb = ATH12K_SKB_CB(skb);
+ struct hal_tcl_data_cmd *hal_tcl_desc;
+ struct hal_tx_msdu_ext_desc *msg;
+ struct sk_buff *skb_ext_desc;
+ struct hal_srng *tcl_ring;
+ struct ieee80211_hdr *hdr = (void *)skb->data;
+ struct dp_tx_ring *tx_ring;
+ u8 pool_id;
+ u8 hal_ring_id;
+ int ret;
+ u8 ring_selector, ring_map = 0;
+ bool tcl_ring_retry;
+ bool msdu_ext_desc = false;
+
+ if (test_bit(ATH12K_FLAG_CRASH_FLUSH, &ar->ab->dev_flags))
+ return -ESHUTDOWN;
+
+ if (!(info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP) &&
+ !ieee80211_is_data(hdr->frame_control))
+ return -ENOTSUPP;
+
+ pool_id = skb_get_queue_mapping(skb) & (ATH12K_HW_MAX_QUEUES - 1);
+
+ /* Let the default ring selection be based on the current processor
+ * number, where one of the TCL rings is selected based on
+ * smp_processor_id(). In case that ring is full/busy, we resort
+ * to the other available rings. If all rings are full, the packet
+ * is dropped.
+ * TODO: Add throttling logic when all rings are full
+ */
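+ /* For example, with max_tx_ring == 4 and an initial ring_selector
+ * of 6, the rings are tried in the order 2, 3, 0, 1 as
+ * ring_selector is incremented on each full ring.
+ */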
+ ring_selector = ab->hw_params->hw_ops->get_ring_selector(skb);
+
+tcl_ring_sel:
+ tcl_ring_retry = false;
+ ti.ring_id = ring_selector % ab->hw_params->max_tx_ring;
+
+ ring_map |= BIT(ti.ring_id);
+ ti.rbm_id = ab->hw_params->hal_ops->tcl_to_wbm_rbm_map[ti.ring_id].rbm_id;
+
+ tx_ring = &dp->tx_ring[ti.ring_id];
+
+ tx_desc = ath12k_dp_tx_assign_buffer(dp, pool_id);
+ if (!tx_desc)
+ return -ENOMEM;
+
+ ti.bank_id = arvif->bank_id;
+ ti.meta_data_flags = arvif->tcl_metadata;
+
+ if (arvif->tx_encap_type == HAL_TCL_ENCAP_TYPE_RAW &&
+ test_bit(ATH12K_FLAG_HW_CRYPTO_DISABLED, &ar->ab->dev_flags)) {
+ if (skb_cb->flags & ATH12K_SKB_CIPHER_SET) {
+ ti.encrypt_type =
+ ath12k_dp_tx_get_encrypt_type(skb_cb->cipher);
+
+ if (ieee80211_has_protected(hdr->frame_control))
+ skb_put(skb, IEEE80211_CCMP_MIC_LEN);
+ } else {
+ ti.encrypt_type = HAL_ENCRYPT_TYPE_OPEN;
+ }
+
+ msdu_ext_desc = true;
+ }
+
+ ti.encap_type = ath12k_dp_tx_get_encap_type(arvif, skb);
+ ti.addr_search_flags = arvif->hal_addr_search_flags;
+ ti.search_type = arvif->search_type;
+ ti.type = HAL_TCL_DESC_TYPE_BUFFER;
+ ti.pkt_offset = 0;
+ ti.lmac_id = ar->lmac_id;
+ ti.vdev_id = arvif->vdev_id;
+ ti.bss_ast_hash = arvif->ast_hash;
+ ti.bss_ast_idx = arvif->ast_idx;
+ ti.dscp_tid_tbl_idx = 0;
+
+ if (skb->ip_summed == CHECKSUM_PARTIAL &&
+ ti.encap_type != HAL_TCL_ENCAP_TYPE_RAW) {
+ ti.flags0 |= u32_encode_bits(1, HAL_TCL_DATA_CMD_INFO2_IP4_CKSUM_EN) |
+ u32_encode_bits(1, HAL_TCL_DATA_CMD_INFO2_UDP4_CKSUM_EN) |
+ u32_encode_bits(1, HAL_TCL_DATA_CMD_INFO2_UDP6_CKSUM_EN) |
+ u32_encode_bits(1, HAL_TCL_DATA_CMD_INFO2_TCP4_CKSUM_EN) |
+ u32_encode_bits(1, HAL_TCL_DATA_CMD_INFO2_TCP6_CKSUM_EN);
+ }
+
+ ti.flags1 |= u32_encode_bits(1, HAL_TCL_DATA_CMD_INFO3_TID_OVERWRITE);
+
+ ti.tid = ath12k_dp_tx_get_tid(skb);
+
+ switch (ti.encap_type) {
+ case HAL_TCL_ENCAP_TYPE_NATIVE_WIFI:
+ ath12k_dp_tx_encap_nwifi(skb);
+ break;
+ case HAL_TCL_ENCAP_TYPE_RAW:
+ if (!test_bit(ATH12K_FLAG_RAW_MODE, &ab->dev_flags)) {
+ ret = -EINVAL;
+ goto fail_remove_tx_buf;
+ }
+ break;
+ case HAL_TCL_ENCAP_TYPE_ETHERNET:
+ /* no need to encap */
+ break;
+ case HAL_TCL_ENCAP_TYPE_802_3:
+ default:
+ /* TODO: Take care of other encap modes as well */
+ ret = -EINVAL;
+ atomic_inc(&ab->soc_stats.tx_err.misc_fail);
+ goto fail_remove_tx_buf;
+ }
+
+ ti.paddr = dma_map_single(ab->dev, skb->data, skb->len, DMA_TO_DEVICE);
+ if (dma_mapping_error(ab->dev, ti.paddr)) {
+ atomic_inc(&ab->soc_stats.tx_err.misc_fail);
+ ath12k_warn(ab, "failed to DMA map data Tx buffer\n");
+ ret = -ENOMEM;
+ goto fail_remove_tx_buf;
+ }
+
+ tx_desc->skb = skb;
+ tx_desc->mac_id = ar->pdev_idx;
+ ti.desc_id = tx_desc->desc_id;
+ ti.data_len = skb->len;
+ skb_cb->paddr = ti.paddr;
+ skb_cb->vif = arvif->vif;
+ skb_cb->ar = ar;
+
+ if (msdu_ext_desc) {
+ skb_ext_desc = dev_alloc_skb(sizeof(struct hal_tx_msdu_ext_desc));
+ if (!skb_ext_desc) {
+ ret = -ENOMEM;
+ goto fail_unmap_dma;
+ }
+
+ skb_put(skb_ext_desc, sizeof(struct hal_tx_msdu_ext_desc));
+ memset(skb_ext_desc->data, 0, skb_ext_desc->len);
+
+ msg = (struct hal_tx_msdu_ext_desc *)skb_ext_desc->data;
+ ath12k_hal_tx_cmd_ext_desc_setup(ab, msg, &ti);
+
+ ti.paddr = dma_map_single(ab->dev, skb_ext_desc->data,
+ skb_ext_desc->len, DMA_TO_DEVICE);
+ ret = dma_mapping_error(ab->dev, ti.paddr);
+ if (ret) {
+ kfree_skb(skb_ext_desc);
+ goto fail_unmap_dma;
+ }
+
+ ti.data_len = skb_ext_desc->len;
+ ti.type = HAL_TCL_DESC_TYPE_EXT_DESC;
+
+ skb_cb->paddr_ext_desc = ti.paddr;
+ }
+
+ hal_ring_id = tx_ring->tcl_data_ring.ring_id;
+ tcl_ring = &ab->hal.srng_list[hal_ring_id];
+
+ spin_lock_bh(&tcl_ring->lock);
+
+ ath12k_hal_srng_access_begin(ab, tcl_ring);
+
+ hal_tcl_desc = ath12k_hal_srng_src_get_next_entry(ab, tcl_ring);
+ if (!hal_tcl_desc) {
+ /* NOTE: It is highly unlikely we'll be running out of tcl_ring
+ * desc because the desc is directly enqueued onto hw queue.
+ */
+ ath12k_hal_srng_access_end(ab, tcl_ring);
+ ab->soc_stats.tx_err.desc_na[ti.ring_id]++;
+ spin_unlock_bh(&tcl_ring->lock);
+ ret = -ENOMEM;
+
+ /* Checking for available tcl descriptors in another ring in
+ * case of failure due to a full tcl ring now is better than
+ * checking this ring earlier for each pkt tx.
+ * Restart ring selection if some rings are not checked yet.
+ */
+ if (ring_map != (BIT(ab->hw_params->max_tx_ring) - 1) &&
+ ab->hw_params->tcl_ring_retry) {
+ tcl_ring_retry = true;
+ ring_selector++;
+ }
+
+ goto fail_unmap_dma;
+ }
+
+ ath12k_hal_tx_cmd_desc_setup(ab, hal_tcl_desc, &ti);
+
+ ath12k_hal_srng_access_end(ab, tcl_ring);
+
+ spin_unlock_bh(&tcl_ring->lock);
+
+ ath12k_dbg_dump(ab, ATH12K_DBG_DP_TX, NULL, "dp tx msdu: ",
+ skb->data, skb->len);
+
+ atomic_inc(&ar->dp.num_tx_pending);
+
+ return 0;
+
+fail_unmap_dma:
+ dma_unmap_single(ab->dev, skb_cb->paddr, skb->len, DMA_TO_DEVICE);
+
+ /* the extension descriptor is mapped only when msdu_ext_desc is used */
+ if (skb_cb->paddr_ext_desc)
+ dma_unmap_single(ab->dev, skb_cb->paddr_ext_desc,
+ sizeof(struct hal_tx_msdu_ext_desc), DMA_TO_DEVICE);
+
+fail_remove_tx_buf:
+ ath12k_dp_tx_release_txbuf(dp, tx_desc, pool_id);
+ if (tcl_ring_retry)
+ goto tcl_ring_sel;
+
+ return ret;
+}
+
+static void ath12k_dp_tx_free_txbuf(struct ath12k_base *ab,
+ struct sk_buff *msdu, u8 mac_id,
+ struct dp_tx_ring *tx_ring)
+{
+ struct ath12k *ar;
+ struct ath12k_skb_cb *skb_cb;
+
+ skb_cb = ATH12K_SKB_CB(msdu);
+
+ dma_unmap_single(ab->dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
+ if (skb_cb->paddr_ext_desc)
+ dma_unmap_single(ab->dev, skb_cb->paddr_ext_desc,
+ sizeof(struct hal_tx_msdu_ext_desc), DMA_TO_DEVICE);
+
+ dev_kfree_skb_any(msdu);
+
+ ar = ab->pdevs[mac_id].ar;
+ if (atomic_dec_and_test(&ar->dp.num_tx_pending))
+ wake_up(&ar->dp.tx_empty_waitq);
+}
+
+static void
+ath12k_dp_tx_htt_tx_complete_buf(struct ath12k_base *ab,
+ struct sk_buff *msdu,
+ struct dp_tx_ring *tx_ring,
+ struct ath12k_dp_htt_wbm_tx_status *ts)
+{
+ struct ieee80211_tx_info *info;
+ struct ath12k_skb_cb *skb_cb;
+ struct ath12k *ar;
+
+ skb_cb = ATH12K_SKB_CB(msdu);
+ info = IEEE80211_SKB_CB(msdu);
+
+ ar = skb_cb->ar;
+
+ if (atomic_dec_and_test(&ar->dp.num_tx_pending))
+ wake_up(&ar->dp.tx_empty_waitq);
+
+ dma_unmap_single(ab->dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
+ if (skb_cb->paddr_ext_desc)
+ dma_unmap_single(ab->dev, skb_cb->paddr_ext_desc,
+ sizeof(struct hal_tx_msdu_ext_desc), DMA_TO_DEVICE);
+
+ memset(&info->status, 0, sizeof(info->status));
+
+ if (ts->acked) {
+ if (!(info->flags & IEEE80211_TX_CTL_NO_ACK)) {
+ info->flags |= IEEE80211_TX_STAT_ACK;
+ info->status.ack_signal = ATH12K_DEFAULT_NOISE_FLOOR +
+ ts->ack_rssi;
+ info->status.flags = IEEE80211_TX_STATUS_ACK_SIGNAL_VALID;
+ } else {
+ info->flags |= IEEE80211_TX_STAT_NOACK_TRANSMITTED;
+ }
+ }
+
+ ieee80211_tx_status(ar->hw, msdu);
+}
+
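+/* Handle a tx completion routed back to the host by the firmware via WBM:
+ * OK/DROP/TTL completions carry an ack status that is reported to
+ * mac80211, while reinject/inspect completions only release the buffer.
+ */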
+static void
+ath12k_dp_tx_process_htt_tx_complete(struct ath12k_base *ab,
+ void *desc, u8 mac_id,
+ struct sk_buff *msdu,
+ struct dp_tx_ring *tx_ring)
+{
+ struct htt_tx_wbm_completion *status_desc;
+ struct ath12k_dp_htt_wbm_tx_status ts = {0};
+ enum hal_wbm_htt_tx_comp_status wbm_status;
+
+ status_desc = desc + HTT_TX_WBM_COMP_STATUS_OFFSET;
+
+ wbm_status = le32_get_bits(status_desc->info0,
+ HTT_TX_WBM_COMP_INFO0_STATUS);
+
+ switch (wbm_status) {
+ case HAL_WBM_REL_HTT_TX_COMP_STATUS_OK:
+ case HAL_WBM_REL_HTT_TX_COMP_STATUS_DROP:
+ case HAL_WBM_REL_HTT_TX_COMP_STATUS_TTL:
+ ts.acked = (wbm_status == HAL_WBM_REL_HTT_TX_COMP_STATUS_OK);
+ ts.ack_rssi = le32_get_bits(status_desc->info2,
+ HTT_TX_WBM_COMP_INFO2_ACK_RSSI);
+ ath12k_dp_tx_htt_tx_complete_buf(ab, msdu, tx_ring, &ts);
+ break;
+ case HAL_WBM_REL_HTT_TX_COMP_STATUS_REINJ:
+ case HAL_WBM_REL_HTT_TX_COMP_STATUS_INSPECT:
+ ath12k_dp_tx_free_txbuf(ab, msdu, mac_id, tx_ring);
+ break;
+ case HAL_WBM_REL_HTT_TX_COMP_STATUS_MEC_NOTIFY:
+ /* This event is to be handled only when the driver decides to
+ * use WDS offload functionality.
+ */
+ break;
+ default:
+ ath12k_warn(ab, "Unknown htt tx status %d\n", wbm_status);
+ break;
+ }
+}
+
+static void ath12k_dp_tx_complete_msdu(struct ath12k *ar,
+ struct sk_buff *msdu,
+ struct hal_tx_status *ts)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ieee80211_tx_info *info;
+ struct ath12k_skb_cb *skb_cb;
+
+ if (WARN_ON_ONCE(ts->buf_rel_source != HAL_WBM_REL_SRC_MODULE_TQM)) {
+ /* Must not happen */
+ return;
+ }
+
+ skb_cb = ATH12K_SKB_CB(msdu);
+
+ dma_unmap_single(ab->dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
+ if (skb_cb->paddr_ext_desc)
+ dma_unmap_single(ab->dev, skb_cb->paddr_ext_desc,
+ sizeof(struct hal_tx_msdu_ext_desc), DMA_TO_DEVICE);
+
+ rcu_read_lock();
+
+ if (!rcu_dereference(ab->pdevs_active[ar->pdev_idx])) {
+ dev_kfree_skb_any(msdu);
+ goto exit;
+ }
+
+ if (!skb_cb->vif) {
+ dev_kfree_skb_any(msdu);
+ goto exit;
+ }
+
+ info = IEEE80211_SKB_CB(msdu);
+ memset(&info->status, 0, sizeof(info->status));
+
+ /* skip tx rate update from ieee80211_status */
+ info->status.rates[0].idx = -1;
+
+ if (ts->status == HAL_WBM_TQM_REL_REASON_FRAME_ACKED &&
+ !(info->flags & IEEE80211_TX_CTL_NO_ACK)) {
+ info->flags |= IEEE80211_TX_STAT_ACK;
+ info->status.ack_signal = ATH12K_DEFAULT_NOISE_FLOOR +
+ ts->ack_rssi;
+ info->status.flags = IEEE80211_TX_STATUS_ACK_SIGNAL_VALID;
+ }
+
+ if (ts->status == HAL_WBM_TQM_REL_REASON_CMD_REMOVE_TX &&
+ (info->flags & IEEE80211_TX_CTL_NO_ACK))
+ info->flags |= IEEE80211_TX_STAT_NOACK_TRANSMITTED;
+
+ /* NOTE: Tx rate status reporting. Tx completion status does not have
+ * necessary information (for example nss) to build the tx rate.
+ * Might end up reporting it out-of-band from HTT stats.
+ */
+
+ ieee80211_tx_status(ar->hw, msdu);
+
+exit:
+ rcu_read_unlock();
+}
+
+static void ath12k_dp_tx_status_parse(struct ath12k_base *ab,
+ struct hal_wbm_completion_ring_tx *desc,
+ struct hal_tx_status *ts)
+{
+ ts->buf_rel_source =
+ le32_get_bits(desc->info0, HAL_WBM_COMPL_TX_INFO0_REL_SRC_MODULE);
+ if (ts->buf_rel_source != HAL_WBM_REL_SRC_MODULE_FW &&
+ ts->buf_rel_source != HAL_WBM_REL_SRC_MODULE_TQM)
+ return;
+
+ if (ts->buf_rel_source == HAL_WBM_REL_SRC_MODULE_FW)
+ return;
+
+ ts->status = le32_get_bits(desc->info0,
+ HAL_WBM_COMPL_TX_INFO0_TQM_RELEASE_REASON);
+
+ ts->ppdu_id = le32_get_bits(desc->info1,
+ HAL_WBM_COMPL_TX_INFO1_TQM_STATUS_NUMBER);
+ if (le32_to_cpu(desc->rate_stats.info0) & HAL_TX_RATE_STATS_INFO0_VALID)
+ ts->rate_stats = le32_to_cpu(desc->rate_stats.info0);
+ else
+ ts->rate_stats = 0;
+}
+
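+/* Tx completions are processed in two phases: the WBM completion ring is
+ * first drained into the local tx_status array under the srng lock, and
+ * the copied descriptors are then handled with the lock released to keep
+ * the hold time short.
+ */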
+void ath12k_dp_tx_completion_handler(struct ath12k_base *ab, int ring_id)
+{
+ struct ath12k *ar;
+ struct ath12k_dp *dp = &ab->dp;
+ int hal_ring_id = dp->tx_ring[ring_id].tcl_comp_ring.ring_id;
+ struct hal_srng *status_ring = &ab->hal.srng_list[hal_ring_id];
+ struct ath12k_tx_desc_info *tx_desc = NULL;
+ struct sk_buff *msdu;
+ struct hal_tx_status ts = { 0 };
+ struct dp_tx_ring *tx_ring = &dp->tx_ring[ring_id];
+ struct hal_wbm_release_ring *desc;
+ u8 mac_id;
+ u64 desc_va;
+
+ spin_lock_bh(&status_ring->lock);
+
+ ath12k_hal_srng_access_begin(ab, status_ring);
+
+ while (ATH12K_TX_COMPL_NEXT(tx_ring->tx_status_head) != tx_ring->tx_status_tail) {
+ desc = ath12k_hal_srng_dst_get_next_entry(ab, status_ring);
+ if (!desc)
+ break;
+
+ memcpy(&tx_ring->tx_status[tx_ring->tx_status_head],
+ desc, sizeof(*desc));
+ tx_ring->tx_status_head =
+ ATH12K_TX_COMPL_NEXT(tx_ring->tx_status_head);
+ }
+
+ if (ath12k_hal_srng_dst_peek(ab, status_ring) &&
+ (ATH12K_TX_COMPL_NEXT(tx_ring->tx_status_head) == tx_ring->tx_status_tail)) {
+ /* TODO: Process pending tx_status messages when kfifo_is_full() */
+ ath12k_warn(ab, "Unable to process some of the tx_status ring desc because status_fifo is full\n");
+ }
+
+ ath12k_hal_srng_access_end(ab, status_ring);
+
+ spin_unlock_bh(&status_ring->lock);
+
+ while (ATH12K_TX_COMPL_NEXT(tx_ring->tx_status_tail) != tx_ring->tx_status_head) {
+ struct hal_wbm_completion_ring_tx *tx_status;
+ u32 desc_id;
+
+ tx_ring->tx_status_tail =
+ ATH12K_TX_COMPL_NEXT(tx_ring->tx_status_tail);
+ tx_status = &tx_ring->tx_status[tx_ring->tx_status_tail];
+ ath12k_dp_tx_status_parse(ab, tx_status, &ts);
+
+ if (le32_get_bits(tx_status->info0, HAL_WBM_COMPL_TX_INFO0_CC_DONE)) {
+ /* HW done cookie conversion */
+ desc_va = ((u64)le32_to_cpu(tx_status->buf_va_hi) << 32 |
+ le32_to_cpu(tx_status->buf_va_lo));
+ tx_desc = (struct ath12k_tx_desc_info *)((unsigned long)desc_va);
+ } else {
+ /* SW does cookie conversion to VA */
+ desc_id = le32_get_bits(tx_status->buf_va_hi,
+ BUFFER_ADDR_INFO1_SW_COOKIE);
+
+ tx_desc = ath12k_dp_get_tx_desc(ab, desc_id);
+ }
+ if (!tx_desc) {
+ ath12k_warn(ab, "unable to retrieve tx_desc!");
+ continue;
+ }
+
+ msdu = tx_desc->skb;
+ mac_id = tx_desc->mac_id;
+
+ /* Release descriptor as soon as extracting necessary info
+ * to reduce contention
+ */
+ ath12k_dp_tx_release_txbuf(dp, tx_desc, tx_desc->pool_id);
+ if (ts.buf_rel_source == HAL_WBM_REL_SRC_MODULE_FW) {
+ ath12k_dp_tx_process_htt_tx_complete(ab,
+ (void *)tx_status,
+ mac_id, msdu,
+ tx_ring);
+ continue;
+ }
+
+ ar = ab->pdevs[mac_id].ar;
+
+ if (atomic_dec_and_test(&ar->dp.num_tx_pending))
+ wake_up(&ar->dp.tx_empty_waitq);
+
+ ath12k_dp_tx_complete_msdu(ar, msdu, &ts);
+ }
+}
+
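+/* Map a HAL ring type to the HTT ring id/type pair carried in
+ * host-to-target ring setup commands.
+ */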
+static int
+ath12k_dp_tx_get_ring_id_type(struct ath12k_base *ab,
+ int mac_id, u32 ring_id,
+ enum hal_ring_type ring_type,
+ enum htt_srng_ring_type *htt_ring_type,
+ enum htt_srng_ring_id *htt_ring_id)
+{
+ int ret = 0;
+
+ switch (ring_type) {
+ case HAL_RXDMA_BUF:
+ /* for some targets, the host provides rx buffers to the fw and
+ * the fw fills the rxbuf ring for each rxdma
+ */
+ if (!ab->hw_params->rx_mac_buf_ring) {
+ if (!(ring_id == HAL_SRNG_SW2RXDMA_BUF0 ||
+ ring_id == HAL_SRNG_SW2RXDMA_BUF1)) {
+ ret = -EINVAL;
+ }
+ *htt_ring_id = HTT_RXDMA_HOST_BUF_RING;
+ *htt_ring_type = HTT_SW_TO_HW_RING;
+ } else {
+ if (ring_id == HAL_SRNG_SW2RXDMA_BUF0) {
+ *htt_ring_id = HTT_HOST1_TO_FW_RXBUF_RING;
+ *htt_ring_type = HTT_SW_TO_SW_RING;
+ } else {
+ *htt_ring_id = HTT_RXDMA_HOST_BUF_RING;
+ *htt_ring_type = HTT_SW_TO_HW_RING;
+ }
+ }
+ break;
+ case HAL_RXDMA_DST:
+ *htt_ring_id = HTT_RXDMA_NON_MONITOR_DEST_RING;
+ *htt_ring_type = HTT_HW_TO_SW_RING;
+ break;
+ case HAL_RXDMA_MONITOR_BUF:
+ *htt_ring_id = HTT_RXDMA_MONITOR_BUF_RING;
+ *htt_ring_type = HTT_SW_TO_HW_RING;
+ break;
+ case HAL_RXDMA_MONITOR_STATUS:
+ *htt_ring_id = HTT_RXDMA_MONITOR_STATUS_RING;
+ *htt_ring_type = HTT_SW_TO_HW_RING;
+ break;
+ case HAL_RXDMA_MONITOR_DST:
+ *htt_ring_id = HTT_RXDMA_MONITOR_DEST_RING;
+ *htt_ring_type = HTT_HW_TO_SW_RING;
+ break;
+ case HAL_RXDMA_MONITOR_DESC:
+ *htt_ring_id = HTT_RXDMA_MONITOR_DESC_RING;
+ *htt_ring_type = HTT_SW_TO_HW_RING;
+ break;
+ case HAL_TX_MONITOR_BUF:
+ *htt_ring_id = HTT_TX_MON_HOST2MON_BUF_RING;
+ *htt_ring_type = HTT_SW_TO_HW_RING;
+ break;
+ case HAL_TX_MONITOR_DST:
+ *htt_ring_id = HTT_TX_MON_MON2HOST_DEST_RING;
+ *htt_ring_type = HTT_HW_TO_SW_RING;
+ break;
+ default:
+ ath12k_warn(ab, "Unsupported ring type in DP :%d\n", ring_type);
+ ret = -EINVAL;
+ }
+ return ret;
+}
+
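+/* Send an HTT_H2T_MSG_TYPE_SRING_SETUP command describing a host SRNG
+ * (base address, entry size, head/tail pointer addresses, MSI and
+ * interrupt threshold setup) so the target can attach to the ring.
+ */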
+int ath12k_dp_tx_htt_srng_setup(struct ath12k_base *ab, u32 ring_id,
+ int mac_id, enum hal_ring_type ring_type)
+{
+ struct htt_srng_setup_cmd *cmd;
+ struct hal_srng *srng = &ab->hal.srng_list[ring_id];
+ struct hal_srng_params params;
+ struct sk_buff *skb;
+ u32 ring_entry_sz;
+ int len = sizeof(*cmd);
+ dma_addr_t hp_addr, tp_addr;
+ enum htt_srng_ring_type htt_ring_type;
+ enum htt_srng_ring_id htt_ring_id;
+ int ret;
+
+ skb = ath12k_htc_alloc_skb(ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ memset(&params, 0, sizeof(params));
+ ath12k_hal_srng_get_params(ab, srng, &params);
+
+ hp_addr = ath12k_hal_srng_get_hp_addr(ab, srng);
+ tp_addr = ath12k_hal_srng_get_tp_addr(ab, srng);
+
+ ret = ath12k_dp_tx_get_ring_id_type(ab, mac_id, ring_id,
+ ring_type, &htt_ring_type,
+ &htt_ring_id);
+ if (ret)
+ goto err_free;
+
+ skb_put(skb, len);
+ cmd = (struct htt_srng_setup_cmd *)skb->data;
+ cmd->info0 = le32_encode_bits(HTT_H2T_MSG_TYPE_SRING_SETUP,
+ HTT_SRNG_SETUP_CMD_INFO0_MSG_TYPE);
+ if (htt_ring_type == HTT_SW_TO_HW_RING ||
+ htt_ring_type == HTT_HW_TO_SW_RING)
+ cmd->info0 |= le32_encode_bits(DP_SW2HW_MACID(mac_id),
+ HTT_SRNG_SETUP_CMD_INFO0_PDEV_ID);
+ else
+ cmd->info0 |= le32_encode_bits(mac_id,
+ HTT_SRNG_SETUP_CMD_INFO0_PDEV_ID);
+ cmd->info0 |= le32_encode_bits(htt_ring_type,
+ HTT_SRNG_SETUP_CMD_INFO0_RING_TYPE);
+ cmd->info0 |= le32_encode_bits(htt_ring_id,
+ HTT_SRNG_SETUP_CMD_INFO0_RING_ID);
+
+ cmd->ring_base_addr_lo = cpu_to_le32(params.ring_base_paddr &
+ HAL_ADDR_LSB_REG_MASK);
+
+ cmd->ring_base_addr_hi = cpu_to_le32((u64)params.ring_base_paddr >>
+ HAL_ADDR_MSB_REG_SHIFT);
+
+ ret = ath12k_hal_srng_get_entrysize(ab, ring_type);
+ if (ret < 0)
+ goto err_free;
+
+ ring_entry_sz = ret;
+
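+ /* the HTT command encodes the entry size in 4-byte words, hence
+ * the conversion from the byte size returned above
+ */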
+ ring_entry_sz >>= 2;
+ cmd->info1 = le32_encode_bits(ring_entry_sz,
+ HTT_SRNG_SETUP_CMD_INFO1_RING_ENTRY_SIZE);
+ cmd->info1 |= le32_encode_bits(params.num_entries * ring_entry_sz,
+ HTT_SRNG_SETUP_CMD_INFO1_RING_SIZE);
+ cmd->info1 |= le32_encode_bits(!!(params.flags & HAL_SRNG_FLAGS_MSI_SWAP),
+ HTT_SRNG_SETUP_CMD_INFO1_RING_FLAGS_MSI_SWAP);
+ cmd->info1 |= le32_encode_bits(!!(params.flags & HAL_SRNG_FLAGS_DATA_TLV_SWAP),
+ HTT_SRNG_SETUP_CMD_INFO1_RING_FLAGS_TLV_SWAP);
+ cmd->info1 |= le32_encode_bits(!!(params.flags & HAL_SRNG_FLAGS_RING_PTR_SWAP),
+ HTT_SRNG_SETUP_CMD_INFO1_RING_FLAGS_HOST_FW_SWAP);
+ if (htt_ring_type == HTT_SW_TO_HW_RING)
+ cmd->info1 |= cpu_to_le32(HTT_SRNG_SETUP_CMD_INFO1_RING_LOOP_CNT_DIS);
+
+ cmd->ring_head_off32_remote_addr_lo = cpu_to_le32(lower_32_bits(hp_addr));
+ cmd->ring_head_off32_remote_addr_hi = cpu_to_le32(upper_32_bits(hp_addr));
+
+ cmd->ring_tail_off32_remote_addr_lo = cpu_to_le32(lower_32_bits(tp_addr));
+ cmd->ring_tail_off32_remote_addr_hi = cpu_to_le32(upper_32_bits(tp_addr));
+
+ cmd->ring_msi_addr_lo = cpu_to_le32(lower_32_bits(params.msi_addr));
+ cmd->ring_msi_addr_hi = cpu_to_le32(upper_32_bits(params.msi_addr));
+ cmd->msi_data = cpu_to_le32(params.msi_data);
+
+ cmd->intr_info =
+ le32_encode_bits(params.intr_batch_cntr_thres_entries * ring_entry_sz,
+ HTT_SRNG_SETUP_CMD_INTR_INFO_BATCH_COUNTER_THRESH);
+ cmd->intr_info |=
+ le32_encode_bits(params.intr_timer_thres_us >> 3,
+ HTT_SRNG_SETUP_CMD_INTR_INFO_INTR_TIMER_THRESH);
+
+ cmd->info2 = 0;
+ if (params.flags & HAL_SRNG_FLAGS_LOW_THRESH_INTR_EN) {
+ cmd->info2 = le32_encode_bits(params.low_threshold,
+ HTT_SRNG_SETUP_CMD_INFO2_INTR_LOW_THRESH);
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_HAL,
+ "%s msi_addr_lo:0x%x, msi_addr_hi:0x%x, msi_data:0x%x\n",
+ __func__, cmd->ring_msi_addr_lo, cmd->ring_msi_addr_hi,
+ cmd->msi_data);
+
+ ath12k_dbg(ab, ATH12K_DBG_HAL,
+ "ring_id:%d, ring_type:%d, intr_info:0x%x, flags:0x%x\n",
+ ring_id, ring_type, cmd->intr_info, cmd->info2);
+
+ ret = ath12k_htc_send(&ab->htc, ab->dp.eid, skb);
+ if (ret)
+ goto err_free;
+
+ return 0;
+
+err_free:
+ dev_kfree_skb_any(skb);
+
+ return ret;
+}
+
+#define HTT_TARGET_VERSION_TIMEOUT_HZ (3 * HZ)
+
+int ath12k_dp_tx_htt_h2t_ver_req_msg(struct ath12k_base *ab)
+{
+ struct ath12k_dp *dp = &ab->dp;
+ struct sk_buff *skb;
+ struct htt_ver_req_cmd *cmd;
+ int len = sizeof(*cmd);
+ int ret;
+
+ init_completion(&dp->htt_tgt_version_received);
+
+ skb = ath12k_htc_alloc_skb(ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ skb_put(skb, len);
+ cmd = (struct htt_ver_req_cmd *)skb->data;
+ cmd->ver_reg_info = le32_encode_bits(HTT_H2T_MSG_TYPE_VERSION_REQ,
+ HTT_VER_REQ_INFO_MSG_ID);
+
+ ret = ath12k_htc_send(&ab->htc, dp->eid, skb);
+ if (ret) {
+ dev_kfree_skb_any(skb);
+ return ret;
+ }
+
+ ret = wait_for_completion_timeout(&dp->htt_tgt_version_received,
+ HTT_TARGET_VERSION_TIMEOUT_HZ);
+ if (ret == 0) {
+ ath12k_warn(ab, "htt target version request timed out\n");
+ return -ETIMEDOUT;
+ }
+
+ if (dp->htt_tgt_ver_major != HTT_TARGET_VERSION_MAJOR) {
+ ath12k_err(ab, "unsupported htt major version %d supported version is %d\n",
+ dp->htt_tgt_ver_major, HTT_TARGET_VERSION_MAJOR);
+ return -ENOTSUPP;
+ }
+
+ return 0;
+}
+
+int ath12k_dp_tx_htt_h2t_ppdu_stats_req(struct ath12k *ar, u32 mask)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_dp *dp = &ab->dp;
+ struct sk_buff *skb;
+ struct htt_ppdu_stats_cfg_cmd *cmd;
+ int len = sizeof(*cmd);
+ u8 pdev_mask;
+ int ret;
+ int i;
+
+ for (i = 0; i < ab->hw_params->num_rxmda_per_pdev; i++) {
+ skb = ath12k_htc_alloc_skb(ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ skb_put(skb, len);
+ cmd = (struct htt_ppdu_stats_cfg_cmd *)skb->data;
+ cmd->msg = le32_encode_bits(HTT_H2T_MSG_TYPE_PPDU_STATS_CFG,
+ HTT_PPDU_STATS_CFG_MSG_TYPE);
+
+ pdev_mask = 1 << (i + 1);
+ cmd->msg |= le32_encode_bits(pdev_mask, HTT_PPDU_STATS_CFG_PDEV_ID);
+ cmd->msg |= le32_encode_bits(mask, HTT_PPDU_STATS_CFG_TLV_TYPE_BITMASK);
+
+ ret = ath12k_htc_send(&ab->htc, dp->eid, skb);
+ if (ret) {
+ dev_kfree_skb_any(skb);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+int ath12k_dp_tx_htt_rx_filter_setup(struct ath12k_base *ab, u32 ring_id,
+ int mac_id, enum hal_ring_type ring_type,
+ int rx_buf_size,
+ struct htt_rx_ring_tlv_filter *tlv_filter)
+{
+ struct htt_rx_ring_selection_cfg_cmd *cmd;
+ struct hal_srng *srng = &ab->hal.srng_list[ring_id];
+ struct hal_srng_params params;
+ struct sk_buff *skb;
+ int len = sizeof(*cmd);
+ enum htt_srng_ring_type htt_ring_type;
+ enum htt_srng_ring_id htt_ring_id;
+ int ret;
+
+ skb = ath12k_htc_alloc_skb(ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ memset(&params, 0, sizeof(params));
+ ath12k_hal_srng_get_params(ab, srng, &params);
+
+ ret = ath12k_dp_tx_get_ring_id_type(ab, mac_id, ring_id,
+ ring_type, &htt_ring_type,
+ &htt_ring_id);
+ if (ret)
+ goto err_free;
+
+ skb_put(skb, len);
+ cmd = (struct htt_rx_ring_selection_cfg_cmd *)skb->data;
+ cmd->info0 = le32_encode_bits(HTT_H2T_MSG_TYPE_RX_RING_SELECTION_CFG,
+ HTT_RX_RING_SELECTION_CFG_CMD_INFO0_MSG_TYPE);
+ if (htt_ring_type == HTT_SW_TO_HW_RING ||
+ htt_ring_type == HTT_HW_TO_SW_RING)
+ cmd->info0 |=
+ le32_encode_bits(DP_SW2HW_MACID(mac_id),
+ HTT_RX_RING_SELECTION_CFG_CMD_INFO0_PDEV_ID);
+ else
+ cmd->info0 |=
+ le32_encode_bits(mac_id,
+ HTT_RX_RING_SELECTION_CFG_CMD_INFO0_PDEV_ID);
+ cmd->info0 |= le32_encode_bits(htt_ring_id,
+ HTT_RX_RING_SELECTION_CFG_CMD_INFO0_RING_ID);
+ cmd->info0 |= le32_encode_bits(!!(params.flags & HAL_SRNG_FLAGS_MSI_SWAP),
+ HTT_RX_RING_SELECTION_CFG_CMD_INFO0_SS);
+ cmd->info0 |= le32_encode_bits(!!(params.flags & HAL_SRNG_FLAGS_DATA_TLV_SWAP),
+ HTT_RX_RING_SELECTION_CFG_CMD_INFO0_PS);
+ cmd->info0 |= le32_encode_bits(tlv_filter->offset_valid,
+ HTT_RX_RING_SELECTION_CFG_CMD_OFFSET_VALID);
+ cmd->info1 = le32_encode_bits(rx_buf_size,
+ HTT_RX_RING_SELECTION_CFG_CMD_INFO1_BUF_SIZE);
+ cmd->pkt_type_en_flags0 = cpu_to_le32(tlv_filter->pkt_filter_flags0);
+ cmd->pkt_type_en_flags1 = cpu_to_le32(tlv_filter->pkt_filter_flags1);
+ cmd->pkt_type_en_flags2 = cpu_to_le32(tlv_filter->pkt_filter_flags2);
+ cmd->pkt_type_en_flags3 = cpu_to_le32(tlv_filter->pkt_filter_flags3);
+ cmd->rx_filter_tlv = cpu_to_le32(tlv_filter->rx_filter);
+
+ if (tlv_filter->offset_valid) {
+ cmd->rx_packet_offset =
+ le32_encode_bits(tlv_filter->rx_packet_offset,
+ HTT_RX_RING_SELECTION_CFG_RX_PACKET_OFFSET);
+
+ cmd->rx_packet_offset |=
+ le32_encode_bits(tlv_filter->rx_header_offset,
+ HTT_RX_RING_SELECTION_CFG_RX_HEADER_OFFSET);
+
+ cmd->rx_mpdu_offset =
+ le32_encode_bits(tlv_filter->rx_mpdu_end_offset,
+ HTT_RX_RING_SELECTION_CFG_RX_MPDU_END_OFFSET);
+
+ cmd->rx_mpdu_offset |=
+ le32_encode_bits(tlv_filter->rx_mpdu_start_offset,
+ HTT_RX_RING_SELECTION_CFG_RX_MPDU_START_OFFSET);
+
+ cmd->rx_msdu_offset =
+ le32_encode_bits(tlv_filter->rx_msdu_end_offset,
+ HTT_RX_RING_SELECTION_CFG_RX_MSDU_END_OFFSET);
+
+ cmd->rx_msdu_offset |=
+ le32_encode_bits(tlv_filter->rx_msdu_start_offset,
+ HTT_RX_RING_SELECTION_CFG_RX_MSDU_START_OFFSET);
+
+ cmd->rx_attn_offset =
+ le32_encode_bits(tlv_filter->rx_attn_offset,
+ HTT_RX_RING_SELECTION_CFG_RX_ATTENTION_OFFSET);
+ }
+
+ ret = ath12k_htc_send(&ab->htc, ab->dp.eid, skb);
+ if (ret)
+ goto err_free;
+
+ return 0;
+
+err_free:
+ dev_kfree_skb_any(skb);
+
+ return ret;
+}
+
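+/* Request extended target statistics of the given type; the 64-bit cookie
+ * is echoed back by the target so the asynchronous response can be matched
+ * to this request.
+ */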
+int
+ath12k_dp_tx_htt_h2t_ext_stats_req(struct ath12k *ar, u8 type,
+ struct htt_ext_stats_cfg_params *cfg_params,
+ u64 cookie)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_dp *dp = &ab->dp;
+ struct sk_buff *skb;
+ struct htt_ext_stats_cfg_cmd *cmd;
+ int len = sizeof(*cmd);
+ int ret;
+
+ skb = ath12k_htc_alloc_skb(ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ skb_put(skb, len);
+
+ cmd = (struct htt_ext_stats_cfg_cmd *)skb->data;
+ memset(cmd, 0, sizeof(*cmd));
+ cmd->hdr.msg_type = HTT_H2T_MSG_TYPE_EXT_STATS_CFG;
+
+ cmd->hdr.pdev_mask = 1 << ar->pdev->pdev_id;
+
+ cmd->hdr.stats_type = type;
+ cmd->cfg_param0 = cpu_to_le32(cfg_params->cfg0);
+ cmd->cfg_param1 = cpu_to_le32(cfg_params->cfg1);
+ cmd->cfg_param2 = cpu_to_le32(cfg_params->cfg2);
+ cmd->cfg_param3 = cpu_to_le32(cfg_params->cfg3);
+ cmd->cookie_lsb = cpu_to_le32(lower_32_bits(cookie));
+ cmd->cookie_msb = cpu_to_le32(upper_32_bits(cookie));
+
+ ret = ath12k_htc_send(&ab->htc, dp->eid, skb);
+ if (ret) {
+ ath12k_warn(ab, "failed to send htt type stats request: %d",
+ ret);
+ dev_kfree_skb_any(skb);
+ return ret;
+ }
+
+ return 0;
+}
+
+int ath12k_dp_tx_htt_monitor_mode_ring_config(struct ath12k *ar, bool reset)
+{
+ struct ath12k_base *ab = ar->ab;
+ int ret;
+
+ ret = ath12k_dp_tx_htt_tx_monitor_mode_ring_config(ar, reset);
+ if (ret) {
+ ath12k_err(ab, "failed to setup tx monitor filter %d\n", ret);
+ return ret;
+ }
+
+ ret = ath12k_dp_tx_htt_rx_monitor_mode_ring_config(ar, reset);
+ if (ret) {
+ ath12k_err(ab, "failed to setup rx monitor filter %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+int ath12k_dp_tx_htt_rx_monitor_mode_ring_config(struct ath12k *ar, bool reset)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_dp *dp = &ab->dp;
+ struct htt_rx_ring_tlv_filter tlv_filter = {0};
+ int ret, ring_id;
+
+ ring_id = dp->rxdma_mon_buf_ring.refill_buf_ring.ring_id;
+ tlv_filter.offset_valid = false;
+
+ if (!reset) {
+ tlv_filter.rx_filter = HTT_RX_MON_FILTER_TLV_FLAGS_MON_BUF_RING;
+ tlv_filter.pkt_filter_flags0 =
+ HTT_RX_MON_FP_MGMT_FILTER_FLAGS0 |
+ HTT_RX_MON_MO_MGMT_FILTER_FLAGS0;
+ tlv_filter.pkt_filter_flags1 =
+ HTT_RX_MON_FP_MGMT_FILTER_FLAGS1 |
+ HTT_RX_MON_MO_MGMT_FILTER_FLAGS1;
+ tlv_filter.pkt_filter_flags2 =
+ HTT_RX_MON_FP_CTRL_FILTER_FLASG2 |
+ HTT_RX_MON_MO_CTRL_FILTER_FLASG2;
+ tlv_filter.pkt_filter_flags3 =
+ HTT_RX_MON_FP_CTRL_FILTER_FLASG3 |
+ HTT_RX_MON_MO_CTRL_FILTER_FLASG3 |
+ HTT_RX_MON_FP_DATA_FILTER_FLASG3 |
+ HTT_RX_MON_MO_DATA_FILTER_FLASG3;
+ }
+
+ if (ab->hw_params->rxdma1_enable) {
+ ret = ath12k_dp_tx_htt_rx_filter_setup(ar->ab, ring_id, 0,
+ HAL_RXDMA_MONITOR_BUF,
+ DP_RXDMA_REFILL_RING_SIZE,
+ &tlv_filter);
+ if (ret) {
+ ath12k_err(ab,
+ "failed to setup filter for monitor buf %d\n", ret);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+int ath12k_dp_tx_htt_tx_filter_setup(struct ath12k_base *ab, u32 ring_id,
+ int mac_id, enum hal_ring_type ring_type,
+ int tx_buf_size,
+ struct htt_tx_ring_tlv_filter *htt_tlv_filter)
+{
+ struct htt_tx_ring_selection_cfg_cmd *cmd;
+ struct hal_srng *srng = &ab->hal.srng_list[ring_id];
+ struct hal_srng_params params;
+ struct sk_buff *skb;
+ int len = sizeof(*cmd);
+ enum htt_srng_ring_type htt_ring_type;
+ enum htt_srng_ring_id htt_ring_id;
+ int ret;
+
+ skb = ath12k_htc_alloc_skb(ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ memset(&params, 0, sizeof(params));
+ ath12k_hal_srng_get_params(ab, srng, &params);
+
+ ret = ath12k_dp_tx_get_ring_id_type(ab, mac_id, ring_id,
+ ring_type, &htt_ring_type,
+ &htt_ring_id);
+
+ if (ret)
+ goto err_free;
+
+ skb_put(skb, len);
+ cmd = (struct htt_tx_ring_selection_cfg_cmd *)skb->data;
+ cmd->info0 = le32_encode_bits(HTT_H2T_MSG_TYPE_TX_MONITOR_CFG,
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO0_MSG_TYPE);
+ if (htt_ring_type == HTT_SW_TO_HW_RING ||
+ htt_ring_type == HTT_HW_TO_SW_RING)
+ cmd->info0 |=
+ le32_encode_bits(DP_SW2HW_MACID(mac_id),
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO0_PDEV_ID);
+ else
+ cmd->info0 |=
+ le32_encode_bits(mac_id,
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO0_PDEV_ID);
+ cmd->info0 |= le32_encode_bits(htt_ring_id,
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO0_RING_ID);
+ cmd->info0 |= le32_encode_bits(!!(params.flags & HAL_SRNG_FLAGS_MSI_SWAP),
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO0_SS);
+ cmd->info0 |= le32_encode_bits(!!(params.flags & HAL_SRNG_FLAGS_DATA_TLV_SWAP),
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO0_PS);
+
+ cmd->info1 |=
+ le32_encode_bits(tx_buf_size,
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO1_RING_BUFF_SIZE);
+
+ if (htt_tlv_filter->tx_mon_mgmt_filter) {
+ cmd->info1 |=
+ le32_encode_bits(HTT_STATS_FRAME_CTRL_TYPE_MGMT,
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO1_PKT_TYPE);
+ cmd->info1 |=
+ le32_encode_bits(htt_tlv_filter->tx_mon_pkt_dma_len,
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO1_CONF_LEN_MGMT);
+ cmd->info2 |=
+ le32_encode_bits(HTT_STATS_FRAME_CTRL_TYPE_MGMT,
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO2_PKT_TYPE_EN_FLAG);
+ }
+
+ if (htt_tlv_filter->tx_mon_ctrl_filter) {
+ cmd->info1 |=
+ le32_encode_bits(HTT_STATS_FRAME_CTRL_TYPE_CTRL,
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO1_PKT_TYPE);
+ cmd->info1 |=
+ le32_encode_bits(htt_tlv_filter->tx_mon_pkt_dma_len,
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO1_CONF_LEN_CTRL);
+ cmd->info2 |=
+ le32_encode_bits(HTT_STATS_FRAME_CTRL_TYPE_CTRL,
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO2_PKT_TYPE_EN_FLAG);
+ }
+
+ if (htt_tlv_filter->tx_mon_data_filter) {
+ cmd->info1 |=
+ le32_encode_bits(HTT_STATS_FRAME_CTRL_TYPE_DATA,
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO1_PKT_TYPE);
+ cmd->info1 |=
+ le32_encode_bits(htt_tlv_filter->tx_mon_pkt_dma_len,
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO1_CONF_LEN_DATA);
+ cmd->info2 |=
+ le32_encode_bits(HTT_STATS_FRAME_CTRL_TYPE_DATA,
+ HTT_TX_RING_SELECTION_CFG_CMD_INFO2_PKT_TYPE_EN_FLAG);
+ }
+
+ cmd->tlv_filter_mask_in0 =
+ cpu_to_le32(htt_tlv_filter->tx_mon_downstream_tlv_flags);
+ cmd->tlv_filter_mask_in1 =
+ cpu_to_le32(htt_tlv_filter->tx_mon_upstream_tlv_flags0);
+ cmd->tlv_filter_mask_in2 =
+ cpu_to_le32(htt_tlv_filter->tx_mon_upstream_tlv_flags1);
+ cmd->tlv_filter_mask_in3 =
+ cpu_to_le32(htt_tlv_filter->tx_mon_upstream_tlv_flags2);
+
+ ret = ath12k_htc_send(&ab->htc, ab->dp.eid, skb);
+ if (ret)
+ goto err_free;
+
+ return 0;
+
+err_free:
+ dev_kfree_skb_any(skb);
+ return ret;
+}
+
+int ath12k_dp_tx_htt_tx_monitor_mode_ring_config(struct ath12k *ar, bool reset)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_dp *dp = &ab->dp;
+ struct htt_tx_ring_tlv_filter tlv_filter = {0};
+ int ret, ring_id;
+
+ ring_id = dp->tx_mon_buf_ring.refill_buf_ring.ring_id;
+
+ /* TODO: Need to set upstream/downstream tlv filters
+ * here
+ */
+
+ if (ab->hw_params->rxdma1_enable) {
+ ret = ath12k_dp_tx_htt_tx_filter_setup(ar->ab, ring_id, 0,
+ HAL_TX_MONITOR_BUF,
+ DP_RXDMA_REFILL_RING_SIZE,
+ &tlv_filter);
+ if (ret) {
+ ath12k_err(ab,
+ "failed to setup filter for monitor buf %d\n", ret);
+ return ret;
+ }
+ }
+
+ return 0;
+}
diff --git a/drivers/net/wireless/ath/ath12k/dp_tx.h b/drivers/net/wireless/ath/ath12k/dp_tx.h
new file mode 100644
index 000000000000..436d77e5e9ee
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/dp_tx.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_DP_TX_H
+#define ATH12K_DP_TX_H
+
+#include "core.h"
+#include "hal_tx.h"
+
+struct ath12k_dp_htt_wbm_tx_status {
+ bool acked;
+ int ack_rssi;
+};
+
+int ath12k_dp_tx_htt_h2t_ver_req_msg(struct ath12k_base *ab);
+int ath12k_dp_tx(struct ath12k *ar, struct ath12k_vif *arvif,
+ struct sk_buff *skb);
+void ath12k_dp_tx_completion_handler(struct ath12k_base *ab, int ring_id);
+
+int ath12k_dp_tx_htt_h2t_ppdu_stats_req(struct ath12k *ar, u32 mask);
+int
+ath12k_dp_tx_htt_h2t_ext_stats_req(struct ath12k *ar, u8 type,
+ struct htt_ext_stats_cfg_params *cfg_params,
+ u64 cookie);
+int ath12k_dp_tx_htt_rx_monitor_mode_ring_config(struct ath12k *ar, bool reset);
+
+int ath12k_dp_tx_htt_rx_filter_setup(struct ath12k_base *ab, u32 ring_id,
+ int mac_id, enum hal_ring_type ring_type,
+ int rx_buf_size,
+ struct htt_rx_ring_tlv_filter *tlv_filter);
+void ath12k_dp_tx_put_bank_profile(struct ath12k_dp *dp, u8 bank_id);
+int ath12k_dp_tx_htt_tx_filter_setup(struct ath12k_base *ab, u32 ring_id,
+ int mac_id, enum hal_ring_type ring_type,
+ int tx_buf_size,
+ struct htt_tx_ring_tlv_filter *htt_tlv_filter);
+int ath12k_dp_tx_htt_tx_monitor_mode_ring_config(struct ath12k *ar, bool reset);
+int ath12k_dp_tx_htt_monitor_mode_ring_config(struct ath12k *ar, bool reset);
+#endif /* ATH12K_DP_TX_H */
diff --git a/drivers/net/wireless/ath/ath12k/hal.c b/drivers/net/wireless/ath/ath12k/hal.c
new file mode 100644
index 000000000000..95d04819083f
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/hal.c
@@ -0,0 +1,2222 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+#include <linux/dma-mapping.h>
+#include "hal_tx.h"
+#include "hal_rx.h"
+#include "debug.h"
+#include "hal_desc.h"
+#include "hif.h"
+
+static const struct hal_srng_config hw_srng_config_template[] = {
+ /* TODO: max_rings can be populated by querying HW capabilities */
+ [HAL_REO_DST] = {
+ .start_ring_id = HAL_SRNG_RING_ID_REO2SW1,
+ .max_rings = 8,
+ .entry_size = sizeof(struct hal_reo_dest_ring) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_UMAC,
+ .ring_dir = HAL_SRNG_DIR_DST,
+ .max_size = HAL_REO_REO2SW1_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_REO_EXCEPTION] = {
+ /* Designating REO2SW0 ring as exception ring.
+ * Any of the REO2SW rings can be used as exception ring.
+ */
+ .start_ring_id = HAL_SRNG_RING_ID_REO2SW0,
+ .max_rings = 1,
+ .entry_size = sizeof(struct hal_reo_dest_ring) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_UMAC,
+ .ring_dir = HAL_SRNG_DIR_DST,
+ .max_size = HAL_REO_REO2SW0_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_REO_REINJECT] = {
+ .start_ring_id = HAL_SRNG_RING_ID_SW2REO,
+ .max_rings = 4,
+ .entry_size = sizeof(struct hal_reo_entrance_ring) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_UMAC,
+ .ring_dir = HAL_SRNG_DIR_SRC,
+ .max_size = HAL_REO_SW2REO_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_REO_CMD] = {
+ .start_ring_id = HAL_SRNG_RING_ID_REO_CMD,
+ .max_rings = 1,
+ .entry_size = (sizeof(struct hal_tlv_64_hdr) +
+ sizeof(struct hal_reo_get_queue_stats)) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_UMAC,
+ .ring_dir = HAL_SRNG_DIR_SRC,
+ .max_size = HAL_REO_CMD_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_REO_STATUS] = {
+ .start_ring_id = HAL_SRNG_RING_ID_REO_STATUS,
+ .max_rings = 1,
+ .entry_size = (sizeof(struct hal_tlv_64_hdr) +
+ sizeof(struct hal_reo_get_queue_stats_status)) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_UMAC,
+ .ring_dir = HAL_SRNG_DIR_DST,
+ .max_size = HAL_REO_STATUS_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_TCL_DATA] = {
+ .start_ring_id = HAL_SRNG_RING_ID_SW2TCL1,
+ .max_rings = 6,
+ .entry_size = sizeof(struct hal_tcl_data_cmd) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_UMAC,
+ .ring_dir = HAL_SRNG_DIR_SRC,
+ .max_size = HAL_SW2TCL1_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_TCL_CMD] = {
+ .start_ring_id = HAL_SRNG_RING_ID_SW2TCL_CMD,
+ .max_rings = 1,
+ .entry_size = sizeof(struct hal_tcl_gse_cmd) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_UMAC,
+ .ring_dir = HAL_SRNG_DIR_SRC,
+ .max_size = HAL_SW2TCL1_CMD_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_TCL_STATUS] = {
+ .start_ring_id = HAL_SRNG_RING_ID_TCL_STATUS,
+ .max_rings = 1,
+ .entry_size = (sizeof(struct hal_tlv_hdr) +
+ sizeof(struct hal_tcl_status_ring)) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_UMAC,
+ .ring_dir = HAL_SRNG_DIR_DST,
+ .max_size = HAL_TCL_STATUS_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_CE_SRC] = {
+ .start_ring_id = HAL_SRNG_RING_ID_CE0_SRC,
+ .max_rings = 16,
+ .entry_size = sizeof(struct hal_ce_srng_src_desc) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_UMAC,
+ .ring_dir = HAL_SRNG_DIR_SRC,
+ .max_size = HAL_CE_SRC_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_CE_DST] = {
+ .start_ring_id = HAL_SRNG_RING_ID_CE0_DST,
+ .max_rings = 16,
+ .entry_size = sizeof(struct hal_ce_srng_dest_desc) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_UMAC,
+ .ring_dir = HAL_SRNG_DIR_SRC,
+ .max_size = HAL_CE_DST_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_CE_DST_STATUS] = {
+ .start_ring_id = HAL_SRNG_RING_ID_CE0_DST_STATUS,
+ .max_rings = 16,
+ .entry_size = sizeof(struct hal_ce_srng_dst_status_desc) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_UMAC,
+ .ring_dir = HAL_SRNG_DIR_DST,
+ .max_size = HAL_CE_DST_STATUS_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_WBM_IDLE_LINK] = {
+ .start_ring_id = HAL_SRNG_RING_ID_WBM_IDLE_LINK,
+ .max_rings = 1,
+ .entry_size = sizeof(struct hal_wbm_link_desc) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_UMAC,
+ .ring_dir = HAL_SRNG_DIR_SRC,
+ .max_size = HAL_WBM_IDLE_LINK_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_SW2WBM_RELEASE] = {
+ .start_ring_id = HAL_SRNG_RING_ID_WBM_SW0_RELEASE,
+ .max_rings = 2,
+ .entry_size = sizeof(struct hal_wbm_release_ring) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_UMAC,
+ .ring_dir = HAL_SRNG_DIR_SRC,
+ .max_size = HAL_SW2WBM_RELEASE_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_WBM2SW_RELEASE] = {
+ .start_ring_id = HAL_SRNG_RING_ID_WBM2SW0_RELEASE,
+ .max_rings = 8,
+ .entry_size = sizeof(struct hal_wbm_release_ring) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_UMAC,
+ .ring_dir = HAL_SRNG_DIR_DST,
+ .max_size = HAL_WBM2SW_RELEASE_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_RXDMA_BUF] = {
+ .start_ring_id = HAL_SRNG_SW2RXDMA_BUF0,
+ .max_rings = 1,
+ .entry_size = sizeof(struct hal_wbm_buffer_ring) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_DMAC,
+ .ring_dir = HAL_SRNG_DIR_SRC,
+ .max_size = HAL_RXDMA_RING_MAX_SIZE_BE,
+ },
+ [HAL_RXDMA_DST] = {
+ .start_ring_id = HAL_SRNG_RING_ID_WMAC1_RXDMA2SW0,
+ .max_rings = 0,
+ .entry_size = 0,
+ .mac_type = ATH12K_HAL_SRNG_PMAC,
+ .ring_dir = HAL_SRNG_DIR_DST,
+ .max_size = HAL_RXDMA_RING_MAX_SIZE_BE,
+ },
+ [HAL_RXDMA_MONITOR_BUF] = {
+ .start_ring_id = HAL_SRNG_SW2RXMON_BUF0,
+ .max_rings = 1,
+ .entry_size = sizeof(struct hal_mon_buf_ring) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_PMAC,
+ .ring_dir = HAL_SRNG_DIR_SRC,
+ .max_size = HAL_RXDMA_RING_MAX_SIZE_BE,
+ },
+ [HAL_RXDMA_MONITOR_STATUS] = { 0, },
+ [HAL_RXDMA_MONITOR_DESC] = { 0, },
+ [HAL_RXDMA_DIR_BUF] = {
+ .start_ring_id = HAL_SRNG_RING_ID_RXDMA_DIR_BUF,
+ .max_rings = 2,
+ .entry_size = 8 >> 2, /* TODO: Define the struct */
+ .mac_type = ATH12K_HAL_SRNG_PMAC,
+ .ring_dir = HAL_SRNG_DIR_SRC,
+ .max_size = HAL_RXDMA_RING_MAX_SIZE_BE,
+ },
+ [HAL_PPE2TCL] = {
+ .start_ring_id = HAL_SRNG_RING_ID_PPE2TCL1,
+ .max_rings = 1,
+ .entry_size = sizeof(struct hal_tcl_entrance_from_ppe_ring) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_PMAC,
+ .ring_dir = HAL_SRNG_DIR_SRC,
+ .max_size = HAL_SW2TCL1_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_PPE_RELEASE] = {
+ .start_ring_id = HAL_SRNG_RING_ID_WBM_PPE_RELEASE,
+ .max_rings = 1,
+ .entry_size = sizeof(struct hal_wbm_release_ring) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_PMAC,
+ .ring_dir = HAL_SRNG_DIR_SRC,
+ .max_size = HAL_WBM2PPE_RELEASE_RING_BASE_MSB_RING_SIZE,
+ },
+ [HAL_TX_MONITOR_BUF] = {
+ .start_ring_id = HAL_SRNG_SW2TXMON_BUF0,
+ .max_rings = 1,
+ .entry_size = sizeof(struct hal_mon_buf_ring) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_PMAC,
+ .ring_dir = HAL_SRNG_DIR_SRC,
+ .max_size = HAL_RXDMA_RING_MAX_SIZE_BE,
+ },
+ [HAL_RXDMA_MONITOR_DST] = {
+ .start_ring_id = HAL_SRNG_RING_ID_WMAC1_SW2RXMON_BUF0,
+ .max_rings = 1,
+ .entry_size = sizeof(struct hal_mon_dest_desc) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_PMAC,
+ .ring_dir = HAL_SRNG_DIR_DST,
+ .max_size = HAL_RXDMA_RING_MAX_SIZE_BE,
+ },
+ [HAL_TX_MONITOR_DST] = {
+ .start_ring_id = HAL_SRNG_RING_ID_WMAC1_TXMON2SW0_BUF0,
+ .max_rings = 1,
+ .entry_size = sizeof(struct hal_mon_dest_desc) >> 2,
+ .mac_type = ATH12K_HAL_SRNG_PMAC,
+ .ring_dir = HAL_SRNG_DIR_DST,
+ .max_size = HAL_RXDMA_RING_MAX_SIZE_BE,
+ }
+};
+
+static const struct ath12k_hal_tcl_to_wbm_rbm_map
+ath12k_hal_qcn9274_tcl_to_wbm_rbm_map[DP_TCL_NUM_RING_MAX] = {
+ {
+ .wbm_ring_num = 0,
+ .rbm_id = HAL_RX_BUF_RBM_SW0_BM,
+ },
+ {
+ .wbm_ring_num = 1,
+ .rbm_id = HAL_RX_BUF_RBM_SW1_BM,
+ },
+ {
+ .wbm_ring_num = 2,
+ .rbm_id = HAL_RX_BUF_RBM_SW2_BM,
+ },
+ {
+ .wbm_ring_num = 4,
+ .rbm_id = HAL_RX_BUF_RBM_SW4_BM,
+ }
+};
+
+static const struct ath12k_hal_tcl_to_wbm_rbm_map
+ath12k_hal_wcn7850_tcl_to_wbm_rbm_map[DP_TCL_NUM_RING_MAX] = {
+ {
+ .wbm_ring_num = 0,
+ .rbm_id = HAL_RX_BUF_RBM_SW0_BM,
+ },
+ {
+ .wbm_ring_num = 2,
+ .rbm_id = HAL_RX_BUF_RBM_SW2_BM,
+ },
+ {
+ .wbm_ring_num = 4,
+ .rbm_id = HAL_RX_BUF_RBM_SW4_BM,
+ },
+};
+
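+/* Offsets of the individual REO1 ring registers relative to the ring's
+ * BASE_LSB register.
+ */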
+static unsigned int ath12k_hal_reo1_ring_id_offset(struct ath12k_base *ab)
+{
+ return HAL_REO1_RING_ID(ab) - HAL_REO1_RING_BASE_LSB(ab);
+}
+
+static unsigned int ath12k_hal_reo1_ring_msi1_base_lsb_offset(struct ath12k_base *ab)
+{
+ return HAL_REO1_RING_MSI1_BASE_LSB(ab) - HAL_REO1_RING_BASE_LSB(ab);
+}
+
+static unsigned int ath12k_hal_reo1_ring_msi1_base_msb_offset(struct ath12k_base *ab)
+{
+ return HAL_REO1_RING_MSI1_BASE_MSB(ab) - HAL_REO1_RING_BASE_LSB(ab);
+}
+
+static unsigned int ath12k_hal_reo1_ring_msi1_data_offset(struct ath12k_base *ab)
+{
+ return HAL_REO1_RING_MSI1_DATA(ab) - HAL_REO1_RING_BASE_LSB(ab);
+}
+
+static unsigned int ath12k_hal_reo1_ring_base_msb_offset(struct ath12k_base *ab)
+{
+ return HAL_REO1_RING_BASE_MSB(ab) - HAL_REO1_RING_BASE_LSB(ab);
+}
+
+static unsigned int ath12k_hal_reo1_ring_producer_int_setup_offset(struct ath12k_base *ab)
+{
+ return HAL_REO1_RING_PRODUCER_INT_SETUP(ab) - HAL_REO1_RING_BASE_LSB(ab);
+}
+
+static unsigned int ath12k_hal_reo1_ring_hp_addr_lsb_offset(struct ath12k_base *ab)
+{
+ return HAL_REO1_RING_HP_ADDR_LSB(ab) - HAL_REO1_RING_BASE_LSB(ab);
+}
+
+static unsigned int ath12k_hal_reo1_ring_hp_addr_msb_offset(struct ath12k_base *ab)
+{
+ return HAL_REO1_RING_HP_ADDR_MSB(ab) - HAL_REO1_RING_BASE_LSB(ab);
+}
+
+static unsigned int ath12k_hal_reo1_ring_misc_offset(struct ath12k_base *ab)
+{
+ return HAL_REO1_RING_MISC(ab) - HAL_REO1_RING_BASE_LSB(ab);
+}
+
+static bool ath12k_hw_qcn9274_rx_desc_get_first_msdu(struct hal_rx_desc *desc)
+{
+ return !!le16_get_bits(desc->u.qcn9274.msdu_end.info5,
+ RX_MSDU_END_INFO5_FIRST_MSDU);
+}
+
+static bool ath12k_hw_qcn9274_rx_desc_get_last_msdu(struct hal_rx_desc *desc)
+{
+ return !!le16_get_bits(desc->u.qcn9274.msdu_end.info5,
+ RX_MSDU_END_INFO5_LAST_MSDU);
+}
+
+static u8 ath12k_hw_qcn9274_rx_desc_get_l3_pad_bytes(struct hal_rx_desc *desc)
+{
+ return le16_get_bits(desc->u.qcn9274.msdu_end.info5,
+ RX_MSDU_END_INFO5_L3_HDR_PADDING);
+}
+
+static bool ath12k_hw_qcn9274_rx_desc_encrypt_valid(struct hal_rx_desc *desc)
+{
+ return !!le32_get_bits(desc->u.qcn9274.mpdu_start.info4,
+ RX_MPDU_START_INFO4_ENCRYPT_INFO_VALID);
+}
+
+static u32 ath12k_hw_qcn9274_rx_desc_get_encrypt_type(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.qcn9274.mpdu_start.info2,
+ RX_MPDU_START_INFO2_ENC_TYPE);
+}
+
+static u8 ath12k_hw_qcn9274_rx_desc_get_decap_type(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.qcn9274.msdu_end.info11,
+ RX_MSDU_END_INFO11_DECAP_FORMAT);
+}
+
+static u8 ath12k_hw_qcn9274_rx_desc_get_mesh_ctl(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.qcn9274.msdu_end.info11,
+ RX_MSDU_END_INFO11_MESH_CTRL_PRESENT);
+}
+
+static bool ath12k_hw_qcn9274_rx_desc_get_mpdu_seq_ctl_vld(struct hal_rx_desc *desc)
+{
+ return !!le32_get_bits(desc->u.qcn9274.mpdu_start.info4,
+ RX_MPDU_START_INFO4_MPDU_SEQ_CTRL_VALID);
+}
+
+static bool ath12k_hw_qcn9274_rx_desc_get_mpdu_fc_valid(struct hal_rx_desc *desc)
+{
+ return !!le32_get_bits(desc->u.qcn9274.mpdu_start.info4,
+ RX_MPDU_START_INFO4_MPDU_FCTRL_VALID);
+}
+
+static u16 ath12k_hw_qcn9274_rx_desc_get_mpdu_start_seq_no(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.qcn9274.mpdu_start.info4,
+ RX_MPDU_START_INFO4_MPDU_SEQ_NUM);
+}
+
+static u16 ath12k_hw_qcn9274_rx_desc_get_msdu_len(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.qcn9274.msdu_end.info10,
+ RX_MSDU_END_INFO10_MSDU_LENGTH);
+}
+
+static u8 ath12k_hw_qcn9274_rx_desc_get_msdu_sgi(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.qcn9274.msdu_end.info12,
+ RX_MSDU_END_INFO12_SGI);
+}
+
+static u8 ath12k_hw_qcn9274_rx_desc_get_msdu_rate_mcs(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.qcn9274.msdu_end.info12,
+ RX_MSDU_END_INFO12_RATE_MCS);
+}
+
+static u8 ath12k_hw_qcn9274_rx_desc_get_msdu_rx_bw(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.qcn9274.msdu_end.info12,
+ RX_MSDU_END_INFO12_RECV_BW);
+}
+
+static u32 ath12k_hw_qcn9274_rx_desc_get_msdu_freq(struct hal_rx_desc *desc)
+{
+ return __le32_to_cpu(desc->u.qcn9274.msdu_end.phy_meta_data);
+}
+
+static u8 ath12k_hw_qcn9274_rx_desc_get_msdu_pkt_type(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.qcn9274.msdu_end.info12,
+ RX_MSDU_END_INFO12_PKT_TYPE);
+}
+
+static u8 ath12k_hw_qcn9274_rx_desc_get_msdu_nss(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.qcn9274.msdu_end.info12,
+ RX_MSDU_END_INFO12_MIMO_SS_BITMAP);
+}
+
+static u8 ath12k_hw_qcn9274_rx_desc_get_mpdu_tid(struct hal_rx_desc *desc)
+{
+ return le16_get_bits(desc->u.qcn9274.msdu_end.info5,
+ RX_MSDU_END_INFO5_TID);
+}
+
+static u16 ath12k_hw_qcn9274_rx_desc_get_mpdu_peer_id(struct hal_rx_desc *desc)
+{
+ return __le16_to_cpu(desc->u.qcn9274.mpdu_start.sw_peer_id);
+}
+
+static void ath12k_hw_qcn9274_rx_desc_copy_end_tlv(struct hal_rx_desc *fdesc,
+ struct hal_rx_desc *ldesc)
+{
+ memcpy(&fdesc->u.qcn9274.msdu_end, &ldesc->u.qcn9274.msdu_end,
+ sizeof(struct rx_msdu_end_qcn9274));
+}
+
+static u32 ath12k_hw_qcn9274_rx_desc_get_mpdu_ppdu_id(struct hal_rx_desc *desc)
+{
+ return __le16_to_cpu(desc->u.qcn9274.mpdu_start.phy_ppdu_id);
+}
+
+static void ath12k_hw_qcn9274_rx_desc_set_msdu_len(struct hal_rx_desc *desc, u16 len)
+{
+ u32 info = __le32_to_cpu(desc->u.qcn9274.msdu_end.info10);
+
+ info &= ~RX_MSDU_END_INFO10_MSDU_LENGTH;
+ info |= u32_encode_bits(len, RX_MSDU_END_INFO10_MSDU_LENGTH);
+
+ desc->u.qcn9274.msdu_end.info10 = __cpu_to_le32(info);
+}
+
+static u8 *ath12k_hw_qcn9274_rx_desc_get_msdu_payload(struct hal_rx_desc *desc)
+{
+ return &desc->u.qcn9274.msdu_payload[0];
+}
+
+static u32 ath12k_hw_qcn9274_rx_desc_get_mpdu_start_offset(void)
+{
+ return offsetof(struct hal_rx_desc_qcn9274, mpdu_start);
+}
+
+static u32 ath12k_hw_qcn9274_rx_desc_get_msdu_end_offset(void)
+{
+ return offsetof(struct hal_rx_desc_qcn9274, msdu_end);
+}
+
+static bool ath12k_hw_qcn9274_rx_desc_mac_addr2_valid(struct hal_rx_desc *desc)
+{
+ return __le32_to_cpu(desc->u.qcn9274.mpdu_start.info4) &
+ RX_MPDU_START_INFO4_MAC_ADDR2_VALID;
+}
+
+static u8 *ath12k_hw_qcn9274_rx_desc_mpdu_start_addr2(struct hal_rx_desc *desc)
+{
+ return desc->u.qcn9274.mpdu_start.addr2;
+}
+
+static bool ath12k_hw_qcn9274_rx_desc_is_mcbc(struct hal_rx_desc *desc)
+{
+ return __le32_to_cpu(desc->u.qcn9274.mpdu_start.info6) &
+ RX_MPDU_START_INFO6_MCAST_BCAST;
+}
+
+static void ath12k_hw_qcn9274_rx_desc_get_dot11_hdr(struct hal_rx_desc *desc,
+ struct ieee80211_hdr *hdr)
+{
+ hdr->frame_control = desc->u.qcn9274.mpdu_start.frame_ctrl;
+ hdr->duration_id = desc->u.qcn9274.mpdu_start.duration;
+ ether_addr_copy(hdr->addr1, desc->u.qcn9274.mpdu_start.addr1);
+ ether_addr_copy(hdr->addr2, desc->u.qcn9274.mpdu_start.addr2);
+ ether_addr_copy(hdr->addr3, desc->u.qcn9274.mpdu_start.addr3);
+ if (__le32_to_cpu(desc->u.qcn9274.mpdu_start.info4) &
+ RX_MPDU_START_INFO4_MAC_ADDR4_VALID) {
+ ether_addr_copy(hdr->addr4, desc->u.qcn9274.mpdu_start.addr4);
+ }
+ hdr->seq_ctrl = desc->u.qcn9274.mpdu_start.seq_ctrl;
+}
+
+static void ath12k_hw_qcn9274_rx_desc_get_crypto_hdr(struct hal_rx_desc *desc,
+ u8 *crypto_hdr,
+ enum hal_encrypt_type enctype)
+{
+ unsigned int key_id;
+
+ switch (enctype) {
+ case HAL_ENCRYPT_TYPE_OPEN:
+ return;
+ case HAL_ENCRYPT_TYPE_TKIP_NO_MIC:
+ case HAL_ENCRYPT_TYPE_TKIP_MIC:
+ crypto_hdr[0] =
+ HAL_RX_MPDU_INFO_PN_GET_BYTE2(desc->u.qcn9274.mpdu_start.pn[0]);
+ crypto_hdr[1] = 0;
+ crypto_hdr[2] =
+ HAL_RX_MPDU_INFO_PN_GET_BYTE1(desc->u.qcn9274.mpdu_start.pn[0]);
+ break;
+ case HAL_ENCRYPT_TYPE_CCMP_128:
+ case HAL_ENCRYPT_TYPE_CCMP_256:
+ case HAL_ENCRYPT_TYPE_GCMP_128:
+ case HAL_ENCRYPT_TYPE_AES_GCMP_256:
+ crypto_hdr[0] =
+ HAL_RX_MPDU_INFO_PN_GET_BYTE1(desc->u.qcn9274.mpdu_start.pn[0]);
+ crypto_hdr[1] =
+ HAL_RX_MPDU_INFO_PN_GET_BYTE2(desc->u.qcn9274.mpdu_start.pn[0]);
+ crypto_hdr[2] = 0;
+ break;
+ case HAL_ENCRYPT_TYPE_WEP_40:
+ case HAL_ENCRYPT_TYPE_WEP_104:
+ case HAL_ENCRYPT_TYPE_WEP_128:
+ case HAL_ENCRYPT_TYPE_WAPI_GCM_SM4:
+ case HAL_ENCRYPT_TYPE_WAPI:
+ return;
+ }
+ key_id = le32_get_bits(desc->u.qcn9274.mpdu_start.info5,
+ RX_MPDU_START_INFO5_KEY_ID);
+ crypto_hdr[3] = 0x20 | (key_id << 6);
+ crypto_hdr[4] = HAL_RX_MPDU_INFO_PN_GET_BYTE3(desc->u.qcn9274.mpdu_start.pn[0]);
+ crypto_hdr[5] = HAL_RX_MPDU_INFO_PN_GET_BYTE4(desc->u.qcn9274.mpdu_start.pn[0]);
+ crypto_hdr[6] = HAL_RX_MPDU_INFO_PN_GET_BYTE1(desc->u.qcn9274.mpdu_start.pn[1]);
+ crypto_hdr[7] = HAL_RX_MPDU_INFO_PN_GET_BYTE2(desc->u.qcn9274.mpdu_start.pn[1]);
+}
+
+static u16 ath12k_hw_qcn9274_rx_desc_get_mpdu_frame_ctl(struct hal_rx_desc *desc)
+{
+ return __le16_to_cpu(desc->u.qcn9274.mpdu_start.frame_ctrl);
+}
+
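+/* Copy the common SRNG configuration template and fill in the QCN9274
+ * specific register offsets for each ring type.
+ */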
+static int ath12k_hal_srng_create_config_qcn9274(struct ath12k_base *ab)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ struct hal_srng_config *s;
+
+ hal->srng_config = kmemdup(hw_srng_config_template,
+ sizeof(hw_srng_config_template),
+ GFP_KERNEL);
+ if (!hal->srng_config)
+ return -ENOMEM;
+
+ s = &hal->srng_config[HAL_REO_DST];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO1_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO1_RING_HP;
+ s->reg_size[0] = HAL_REO2_RING_BASE_LSB(ab) - HAL_REO1_RING_BASE_LSB(ab);
+ s->reg_size[1] = HAL_REO2_RING_HP - HAL_REO1_RING_HP;
+
+ s = &hal->srng_config[HAL_REO_EXCEPTION];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_SW0_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_SW0_RING_HP;
+
+ s = &hal->srng_config[HAL_REO_REINJECT];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_SW2REO_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_SW2REO_RING_HP;
+ s->reg_size[0] = HAL_SW2REO1_RING_BASE_LSB(ab) - HAL_SW2REO_RING_BASE_LSB(ab);
+ s->reg_size[1] = HAL_SW2REO1_RING_HP - HAL_SW2REO_RING_HP;
+
+ s = &hal->srng_config[HAL_REO_CMD];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_CMD_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_CMD_HP;
+
+ s = &hal->srng_config[HAL_REO_STATUS];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_STATUS_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_STATUS_HP;
+
+ s = &hal->srng_config[HAL_TCL_DATA];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_BASE_LSB;
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_HP;
+ s->reg_size[0] = HAL_TCL2_RING_BASE_LSB - HAL_TCL1_RING_BASE_LSB;
+ s->reg_size[1] = HAL_TCL2_RING_HP - HAL_TCL1_RING_HP;
+
+ s = &hal->srng_config[HAL_TCL_CMD];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL_RING_HP;
+
+ s = &hal->srng_config[HAL_TCL_STATUS];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL_STATUS_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL_STATUS_RING_HP;
+
+ s = &hal->srng_config[HAL_CE_SRC];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG + HAL_CE_DST_RING_BASE_LSB;
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG + HAL_CE_DST_RING_HP;
+ s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG -
+ HAL_SEQ_WCSS_UMAC_CE0_SRC_REG;
+ s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG -
+ HAL_SEQ_WCSS_UMAC_CE0_SRC_REG;
+
+ s = &hal->srng_config[HAL_CE_DST];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_RING_BASE_LSB;
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_RING_HP;
+ s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+ HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+ s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+ HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+
+ s = &hal->srng_config[HAL_CE_DST_STATUS];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG +
+ HAL_CE_DST_STATUS_RING_BASE_LSB;
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_STATUS_RING_HP;
+ s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+ HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+ s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+ HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+
+ s = &hal->srng_config[HAL_WBM_IDLE_LINK];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM_IDLE_LINK_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM_IDLE_LINK_RING_HP;
+
+ s = &hal->srng_config[HAL_SW2WBM_RELEASE];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_WBM_REG +
+ HAL_WBM_SW_RELEASE_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM_SW_RELEASE_RING_HP;
+ s->reg_size[0] = HAL_WBM_SW1_RELEASE_RING_BASE_LSB(ab) -
+ HAL_WBM_SW_RELEASE_RING_BASE_LSB(ab);
+ s->reg_size[1] = HAL_WBM_SW1_RELEASE_RING_HP - HAL_WBM_SW_RELEASE_RING_HP;
+
+ s = &hal->srng_config[HAL_WBM2SW_RELEASE];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM0_RELEASE_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM0_RELEASE_RING_HP;
+ s->reg_size[0] = HAL_WBM1_RELEASE_RING_BASE_LSB(ab) -
+ HAL_WBM0_RELEASE_RING_BASE_LSB(ab);
+ s->reg_size[1] = HAL_WBM1_RELEASE_RING_HP - HAL_WBM0_RELEASE_RING_HP;
+
+ /* Some LMAC rings are not accessed from the host:
+ * RXDMA_BUF, RXDMA_DST, RXDMA_MONITOR_BUF, RXDMA_MONITOR_STATUS,
+ * RXDMA_MONITOR_DST, RXDMA_MONITOR_DESC, RXDMA_DIR_BUF_SRC,
+ * RXDMA_RX_MONITOR_BUF, TX_MONITOR_BUF, TX_MONITOR_DST, SW2RXDMA
+ */
+ s = &hal->srng_config[HAL_PPE2TCL];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL_PPE2TCL1_RING_BASE_LSB;
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL_PPE2TCL1_RING_HP;
+
+ s = &hal->srng_config[HAL_PPE_RELEASE];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_WBM_REG +
+ HAL_WBM_PPE_RELEASE_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM_PPE_RELEASE_RING_HP;
+
+ return 0;
+}
+
+static bool ath12k_hw_qcn9274_dp_rx_h_msdu_done(struct hal_rx_desc *desc)
+{
+ return !!le32_get_bits(desc->u.qcn9274.msdu_end.info14,
+ RX_MSDU_END_INFO14_MSDU_DONE);
+}
+
+static bool ath12k_hw_qcn9274_dp_rx_h_l4_cksum_fail(struct hal_rx_desc *desc)
+{
+ return !!le32_get_bits(desc->u.qcn9274.msdu_end.info13,
+ RX_MSDU_END_INFO13_TCP_UDP_CKSUM_FAIL);
+}
+
+static bool ath12k_hw_qcn9274_dp_rx_h_ip_cksum_fail(struct hal_rx_desc *desc)
+{
+ return !!le32_get_bits(desc->u.qcn9274.msdu_end.info13,
+ RX_MSDU_END_INFO13_IP_CKSUM_FAIL);
+}
+
+static bool ath12k_hw_qcn9274_dp_rx_h_is_decrypted(struct hal_rx_desc *desc)
+{
+ return (le32_get_bits(desc->u.qcn9274.msdu_end.info14,
+ RX_MSDU_END_INFO14_DECRYPT_STATUS_CODE) ==
+ RX_DESC_DECRYPT_STATUS_CODE_OK);
+}
+
+static u32 ath12k_hw_qcn9274_dp_rx_h_mpdu_err(struct hal_rx_desc *desc)
+{
+ u32 info = __le32_to_cpu(desc->u.qcn9274.msdu_end.info13);
+ u32 errmap = 0;
+
+ if (info & RX_MSDU_END_INFO13_FCS_ERR)
+ errmap |= HAL_RX_MPDU_ERR_FCS;
+
+ if (info & RX_MSDU_END_INFO13_DECRYPT_ERR)
+ errmap |= HAL_RX_MPDU_ERR_DECRYPT;
+
+ if (info & RX_MSDU_END_INFO13_TKIP_MIC_ERR)
+ errmap |= HAL_RX_MPDU_ERR_TKIP_MIC;
+
+ if (info & RX_MSDU_END_INFO13_A_MSDU_ERROR)
+ errmap |= HAL_RX_MPDU_ERR_AMSDU_ERR;
+
+ if (info & RX_MSDU_END_INFO13_OVERFLOW_ERR)
+ errmap |= HAL_RX_MPDU_ERR_OVERFLOW;
+
+ if (info & RX_MSDU_END_INFO13_MSDU_LEN_ERR)
+ errmap |= HAL_RX_MPDU_ERR_MSDU_LEN;
+
+ if (info & RX_MSDU_END_INFO13_MPDU_LEN_ERR)
+ errmap |= HAL_RX_MPDU_ERR_MPDU_LEN;
+
+ return errmap;
+}
+
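+/* The RX descriptor layout and SRNG register map are chip specific, so
+ * they are reached only through a per-chip ops table; the core selects
+ * the table via ab->hw_params->hal_ops (see ath12k_hal_srng_init()
+ * below).
+ */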
+const struct hal_ops hal_qcn9274_ops = {
+ .rx_desc_get_first_msdu = ath12k_hw_qcn9274_rx_desc_get_first_msdu,
+ .rx_desc_get_last_msdu = ath12k_hw_qcn9274_rx_desc_get_last_msdu,
+ .rx_desc_get_l3_pad_bytes = ath12k_hw_qcn9274_rx_desc_get_l3_pad_bytes,
+ .rx_desc_encrypt_valid = ath12k_hw_qcn9274_rx_desc_encrypt_valid,
+ .rx_desc_get_encrypt_type = ath12k_hw_qcn9274_rx_desc_get_encrypt_type,
+ .rx_desc_get_decap_type = ath12k_hw_qcn9274_rx_desc_get_decap_type,
+ .rx_desc_get_mesh_ctl = ath12k_hw_qcn9274_rx_desc_get_mesh_ctl,
+ .rx_desc_get_mpdu_seq_ctl_vld = ath12k_hw_qcn9274_rx_desc_get_mpdu_seq_ctl_vld,
+ .rx_desc_get_mpdu_fc_valid = ath12k_hw_qcn9274_rx_desc_get_mpdu_fc_valid,
+ .rx_desc_get_mpdu_start_seq_no = ath12k_hw_qcn9274_rx_desc_get_mpdu_start_seq_no,
+ .rx_desc_get_msdu_len = ath12k_hw_qcn9274_rx_desc_get_msdu_len,
+ .rx_desc_get_msdu_sgi = ath12k_hw_qcn9274_rx_desc_get_msdu_sgi,
+ .rx_desc_get_msdu_rate_mcs = ath12k_hw_qcn9274_rx_desc_get_msdu_rate_mcs,
+ .rx_desc_get_msdu_rx_bw = ath12k_hw_qcn9274_rx_desc_get_msdu_rx_bw,
+ .rx_desc_get_msdu_freq = ath12k_hw_qcn9274_rx_desc_get_msdu_freq,
+ .rx_desc_get_msdu_pkt_type = ath12k_hw_qcn9274_rx_desc_get_msdu_pkt_type,
+ .rx_desc_get_msdu_nss = ath12k_hw_qcn9274_rx_desc_get_msdu_nss,
+ .rx_desc_get_mpdu_tid = ath12k_hw_qcn9274_rx_desc_get_mpdu_tid,
+ .rx_desc_get_mpdu_peer_id = ath12k_hw_qcn9274_rx_desc_get_mpdu_peer_id,
+ .rx_desc_copy_end_tlv = ath12k_hw_qcn9274_rx_desc_copy_end_tlv,
+ .rx_desc_get_mpdu_ppdu_id = ath12k_hw_qcn9274_rx_desc_get_mpdu_ppdu_id,
+ .rx_desc_set_msdu_len = ath12k_hw_qcn9274_rx_desc_set_msdu_len,
+ .rx_desc_get_msdu_payload = ath12k_hw_qcn9274_rx_desc_get_msdu_payload,
+ .rx_desc_get_mpdu_start_offset = ath12k_hw_qcn9274_rx_desc_get_mpdu_start_offset,
+ .rx_desc_get_msdu_end_offset = ath12k_hw_qcn9274_rx_desc_get_msdu_end_offset,
+ .rx_desc_mac_addr2_valid = ath12k_hw_qcn9274_rx_desc_mac_addr2_valid,
+ .rx_desc_mpdu_start_addr2 = ath12k_hw_qcn9274_rx_desc_mpdu_start_addr2,
+ .rx_desc_is_mcbc = ath12k_hw_qcn9274_rx_desc_is_mcbc,
+ .rx_desc_get_dot11_hdr = ath12k_hw_qcn9274_rx_desc_get_dot11_hdr,
+ .rx_desc_get_crypto_header = ath12k_hw_qcn9274_rx_desc_get_crypto_hdr,
+ .rx_desc_get_mpdu_frame_ctl = ath12k_hw_qcn9274_rx_desc_get_mpdu_frame_ctl,
+ .create_srng_config = ath12k_hal_srng_create_config_qcn9274,
+ .tcl_to_wbm_rbm_map = ath12k_hal_qcn9274_tcl_to_wbm_rbm_map,
+ .dp_rx_h_msdu_done = ath12k_hw_qcn9274_dp_rx_h_msdu_done,
+ .dp_rx_h_l4_cksum_fail = ath12k_hw_qcn9274_dp_rx_h_l4_cksum_fail,
+ .dp_rx_h_ip_cksum_fail = ath12k_hw_qcn9274_dp_rx_h_ip_cksum_fail,
+ .dp_rx_h_is_decrypted = ath12k_hw_qcn9274_dp_rx_h_is_decrypted,
+ .dp_rx_h_mpdu_err = ath12k_hw_qcn9274_dp_rx_h_mpdu_err,
+};
+
+static bool ath12k_hw_wcn7850_rx_desc_get_first_msdu(struct hal_rx_desc *desc)
+{
+ return !!le16_get_bits(desc->u.wcn7850.msdu_end.info5,
+ RX_MSDU_END_INFO5_FIRST_MSDU);
+}
+
+static bool ath12k_hw_wcn7850_rx_desc_get_last_msdu(struct hal_rx_desc *desc)
+{
+ return !!le16_get_bits(desc->u.wcn7850.msdu_end.info5,
+ RX_MSDU_END_INFO5_LAST_MSDU);
+}
+
+static u8 ath12k_hw_wcn7850_rx_desc_get_l3_pad_bytes(struct hal_rx_desc *desc)
+{
+ return le16_get_bits(desc->u.wcn7850.msdu_end.info5,
+ RX_MSDU_END_INFO5_L3_HDR_PADDING);
+}
+
+static bool ath12k_hw_wcn7850_rx_desc_encrypt_valid(struct hal_rx_desc *desc)
+{
+ return !!le32_get_bits(desc->u.wcn7850.mpdu_start.info4,
+ RX_MPDU_START_INFO4_ENCRYPT_INFO_VALID);
+}
+
+static u32 ath12k_hw_wcn7850_rx_desc_get_encrypt_type(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.wcn7850.mpdu_start.info2,
+ RX_MPDU_START_INFO2_ENC_TYPE);
+}
+
+static u8 ath12k_hw_wcn7850_rx_desc_get_decap_type(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.wcn7850.msdu_end.info11,
+ RX_MSDU_END_INFO11_DECAP_FORMAT);
+}
+
+static u8 ath12k_hw_wcn7850_rx_desc_get_mesh_ctl(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.wcn7850.msdu_end.info11,
+ RX_MSDU_END_INFO11_MESH_CTRL_PRESENT);
+}
+
+static bool ath12k_hw_wcn7850_rx_desc_get_mpdu_seq_ctl_vld(struct hal_rx_desc *desc)
+{
+ return !!le32_get_bits(desc->u.wcn7850.mpdu_start.info4,
+ RX_MPDU_START_INFO4_MPDU_SEQ_CTRL_VALID);
+}
+
+static bool ath12k_hw_wcn7850_rx_desc_get_mpdu_fc_valid(struct hal_rx_desc *desc)
+{
+ return !!le32_get_bits(desc->u.wcn7850.mpdu_start.info4,
+ RX_MPDU_START_INFO4_MPDU_FCTRL_VALID);
+}
+
+static u16 ath12k_hw_wcn7850_rx_desc_get_mpdu_start_seq_no(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.wcn7850.mpdu_start.info4,
+ RX_MPDU_START_INFO4_MPDU_SEQ_NUM);
+}
+
+static u16 ath12k_hw_wcn7850_rx_desc_get_msdu_len(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.wcn7850.msdu_end.info10,
+ RX_MSDU_END_INFO10_MSDU_LENGTH);
+}
+
+static u8 ath12k_hw_wcn7850_rx_desc_get_msdu_sgi(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.wcn7850.msdu_end.info12,
+ RX_MSDU_END_INFO12_SGI);
+}
+
+static u8 ath12k_hw_wcn7850_rx_desc_get_msdu_rate_mcs(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.wcn7850.msdu_end.info12,
+ RX_MSDU_END_INFO12_RATE_MCS);
+}
+
+static u8 ath12k_hw_wcn7850_rx_desc_get_msdu_rx_bw(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.wcn7850.msdu_end.info12,
+ RX_MSDU_END_INFO12_RECV_BW);
+}
+
+static u32 ath12k_hw_wcn7850_rx_desc_get_msdu_freq(struct hal_rx_desc *desc)
+{
+ return __le32_to_cpu(desc->u.wcn7850.msdu_end.phy_meta_data);
+}
+
+static u8 ath12k_hw_wcn7850_rx_desc_get_msdu_pkt_type(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.wcn7850.msdu_end.info12,
+ RX_MSDU_END_INFO12_PKT_TYPE);
+}
+
+static u8 ath12k_hw_wcn7850_rx_desc_get_msdu_nss(struct hal_rx_desc *desc)
+{
+ return le32_get_bits(desc->u.wcn7850.msdu_end.info12,
+ RX_MSDU_END_INFO12_MIMO_SS_BITMAP);
+}
+
+static u8 ath12k_hw_wcn7850_rx_desc_get_mpdu_tid(struct hal_rx_desc *desc)
+{
+ return le16_get_bits(desc->u.wcn7850.msdu_end.info5,
+ RX_MSDU_END_INFO5_TID);
+}
+
+static u16 ath12k_hw_wcn7850_rx_desc_get_mpdu_peer_id(struct hal_rx_desc *desc)
+{
+ return __le16_to_cpu(desc->u.wcn7850.mpdu_start.sw_peer_id);
+}
+
+static void ath12k_hw_wcn7850_rx_desc_copy_end_tlv(struct hal_rx_desc *fdesc,
+ struct hal_rx_desc *ldesc)
+{
+ memcpy(&fdesc->u.wcn7850.msdu_end, &ldesc->u.wcn7850.msdu_end,
+ sizeof(fdesc->u.wcn7850.msdu_end));
+}
+
+static u32 ath12k_hw_wcn7850_rx_desc_get_mpdu_start_tag(struct hal_rx_desc *desc)
+{
+ return le64_get_bits(desc->u.wcn7850.mpdu_start_tag,
+ HAL_TLV_HDR_TAG);
+}
+
+static u32 ath12k_hw_wcn7850_rx_desc_get_mpdu_ppdu_id(struct hal_rx_desc *desc)
+{
+ return __le16_to_cpu(desc->u.wcn7850.mpdu_start.phy_ppdu_id);
+}
+
+static void ath12k_hw_wcn7850_rx_desc_set_msdu_len(struct hal_rx_desc *desc, u16 len)
+{
+ u32 info = __le32_to_cpu(desc->u.wcn7850.msdu_end.info10);
+
+ info &= ~RX_MSDU_END_INFO10_MSDU_LENGTH;
+ info |= u32_encode_bits(len, RX_MSDU_END_INFO10_MSDU_LENGTH);
+
+ desc->u.wcn7850.msdu_end.info10 = __cpu_to_le32(info);
+}
+
+static u8 *ath12k_hw_wcn7850_rx_desc_get_msdu_payload(struct hal_rx_desc *desc)
+{
+ return &desc->u.wcn7850.msdu_payload[0];
+}
+
+static u32 ath12k_hw_wcn7850_rx_desc_get_mpdu_start_offset(void)
+{
+ return offsetof(struct hal_rx_desc_wcn7850, mpdu_start_tag);
+}
+
+static u32 ath12k_hw_wcn7850_rx_desc_get_msdu_end_offset(void)
+{
+ return offsetof(struct hal_rx_desc_wcn7850, msdu_end_tag);
+}
+
+static bool ath12k_hw_wcn7850_rx_desc_mac_addr2_valid(struct hal_rx_desc *desc)
+{
+ return __le32_to_cpu(desc->u.wcn7850.mpdu_start.info4) &
+ RX_MPDU_START_INFO4_MAC_ADDR2_VALID;
+}
+
+static u8 *ath12k_hw_wcn7850_rx_desc_mpdu_start_addr2(struct hal_rx_desc *desc)
+{
+ return desc->u.wcn7850.mpdu_start.addr2;
+}
+
+static bool ath12k_hw_wcn7850_rx_desc_is_mcbc(struct hal_rx_desc *desc)
+{
+ return __le32_to_cpu(desc->u.wcn7850.mpdu_start.info6) &
+ RX_MPDU_START_INFO6_MCAST_BCAST;
+}
+
+static void ath12k_hw_wcn7850_rx_desc_get_dot11_hdr(struct hal_rx_desc *desc,
+ struct ieee80211_hdr *hdr)
+{
+ hdr->frame_control = desc->u.wcn7850.mpdu_start.frame_ctrl;
+ hdr->duration_id = desc->u.wcn7850.mpdu_start.duration;
+ ether_addr_copy(hdr->addr1, desc->u.wcn7850.mpdu_start.addr1);
+ ether_addr_copy(hdr->addr2, desc->u.wcn7850.mpdu_start.addr2);
+ ether_addr_copy(hdr->addr3, desc->u.wcn7850.mpdu_start.addr3);
+ if (__le32_to_cpu(desc->u.wcn7850.mpdu_start.info4) &
+ RX_MPDU_START_INFO4_MAC_ADDR4_VALID) {
+ ether_addr_copy(hdr->addr4, desc->u.wcn7850.mpdu_start.addr4);
+ }
+ hdr->seq_ctrl = desc->u.wcn7850.mpdu_start.seq_ctrl;
+}
+
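+/* Rebuild the cipher IV in front of the payload from the PN held in the
+ * RX descriptor. For CCMP/GCMP, bytes 0-1 carry PN0-PN1; for TKIP,
+ * bytes 0 and 2 carry TSC1/TSC0. Byte 3 is the key ID (bits 6-7) plus
+ * the ExtIV bit (0x20), and bytes 4-7 carry PN2-PN5.
+ */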
+static void ath12k_hw_wcn7850_rx_desc_get_crypto_hdr(struct hal_rx_desc *desc,
+ u8 *crypto_hdr,
+ enum hal_encrypt_type enctype)
+{
+ unsigned int key_id;
+
+ switch (enctype) {
+ case HAL_ENCRYPT_TYPE_OPEN:
+ return;
+ case HAL_ENCRYPT_TYPE_TKIP_NO_MIC:
+ case HAL_ENCRYPT_TYPE_TKIP_MIC:
+ crypto_hdr[0] =
+ HAL_RX_MPDU_INFO_PN_GET_BYTE2(desc->u.wcn7850.mpdu_start.pn[0]);
+ crypto_hdr[1] = 0;
+ crypto_hdr[2] =
+ HAL_RX_MPDU_INFO_PN_GET_BYTE1(desc->u.wcn7850.mpdu_start.pn[0]);
+ break;
+ case HAL_ENCRYPT_TYPE_CCMP_128:
+ case HAL_ENCRYPT_TYPE_CCMP_256:
+ case HAL_ENCRYPT_TYPE_GCMP_128:
+ case HAL_ENCRYPT_TYPE_AES_GCMP_256:
+ crypto_hdr[0] =
+ HAL_RX_MPDU_INFO_PN_GET_BYTE1(desc->u.wcn7850.mpdu_start.pn[0]);
+ crypto_hdr[1] =
+ HAL_RX_MPDU_INFO_PN_GET_BYTE2(desc->u.wcn7850.mpdu_start.pn[0]);
+ crypto_hdr[2] = 0;
+ break;
+ case HAL_ENCRYPT_TYPE_WEP_40:
+ case HAL_ENCRYPT_TYPE_WEP_104:
+ case HAL_ENCRYPT_TYPE_WEP_128:
+ case HAL_ENCRYPT_TYPE_WAPI_GCM_SM4:
+ case HAL_ENCRYPT_TYPE_WAPI:
+ return;
+ }
+ key_id = u32_get_bits(__le32_to_cpu(desc->u.wcn7850.mpdu_start.info5),
+ RX_MPDU_START_INFO5_KEY_ID);
+ crypto_hdr[3] = 0x20 | (key_id << 6);
+ crypto_hdr[4] = HAL_RX_MPDU_INFO_PN_GET_BYTE3(desc->u.wcn7850.mpdu_start.pn[0]);
+ crypto_hdr[5] = HAL_RX_MPDU_INFO_PN_GET_BYTE4(desc->u.wcn7850.mpdu_start.pn[0]);
+ crypto_hdr[6] = HAL_RX_MPDU_INFO_PN_GET_BYTE1(desc->u.wcn7850.mpdu_start.pn[1]);
+ crypto_hdr[7] = HAL_RX_MPDU_INFO_PN_GET_BYTE2(desc->u.wcn7850.mpdu_start.pn[1]);
+}
+
+static u16 ath12k_hw_wcn7850_rx_desc_get_mpdu_frame_ctl(struct hal_rx_desc *desc)
+{
+ return __le16_to_cpu(desc->u.wcn7850.mpdu_start.frame_ctrl);
+}
+
+static int ath12k_hal_srng_create_config_wcn7850(struct ath12k_base *ab)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ struct hal_srng_config *s;
+
+ hal->srng_config = kmemdup(hw_srng_config_template,
+ sizeof(hw_srng_config_template),
+ GFP_KERNEL);
+ if (!hal->srng_config)
+ return -ENOMEM;
+
+ s = &hal->srng_config[HAL_REO_DST];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO1_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO1_RING_HP;
+ s->reg_size[0] = HAL_REO2_RING_BASE_LSB(ab) - HAL_REO1_RING_BASE_LSB(ab);
+ s->reg_size[1] = HAL_REO2_RING_HP - HAL_REO1_RING_HP;
+
+ s = &hal->srng_config[HAL_REO_EXCEPTION];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_SW0_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_SW0_RING_HP;
+
+ s = &hal->srng_config[HAL_REO_REINJECT];
+ s->max_rings = 1;
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_SW2REO_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_SW2REO_RING_HP;
+
+ s = &hal->srng_config[HAL_REO_CMD];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_CMD_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_CMD_HP;
+
+ s = &hal->srng_config[HAL_REO_STATUS];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_STATUS_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_STATUS_HP;
+
+ s = &hal->srng_config[HAL_TCL_DATA];
+ s->max_rings = 5;
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_BASE_LSB;
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_HP;
+ s->reg_size[0] = HAL_TCL2_RING_BASE_LSB - HAL_TCL1_RING_BASE_LSB;
+ s->reg_size[1] = HAL_TCL2_RING_HP - HAL_TCL1_RING_HP;
+
+ s = &hal->srng_config[HAL_TCL_CMD];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL_RING_HP;
+
+ s = &hal->srng_config[HAL_TCL_STATUS];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL_STATUS_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL_STATUS_RING_HP;
+
+ s = &hal->srng_config[HAL_CE_SRC];
+ s->max_rings = 12;
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG + HAL_CE_DST_RING_BASE_LSB;
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG + HAL_CE_DST_RING_HP;
+ s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG -
+ HAL_SEQ_WCSS_UMAC_CE0_SRC_REG;
+ s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG -
+ HAL_SEQ_WCSS_UMAC_CE0_SRC_REG;
+
+ s = &hal->srng_config[HAL_CE_DST];
+ s->max_rings = 12;
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_RING_BASE_LSB;
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_RING_HP;
+ s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+ HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+ s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+ HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+
+ s = &hal->srng_config[HAL_CE_DST_STATUS];
+ s->max_rings = 12;
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG +
+ HAL_CE_DST_STATUS_RING_BASE_LSB;
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_STATUS_RING_HP;
+ s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+ HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+ s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+ HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+
+ s = &hal->srng_config[HAL_WBM_IDLE_LINK];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM_IDLE_LINK_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM_IDLE_LINK_RING_HP;
+
+ s = &hal->srng_config[HAL_SW2WBM_RELEASE];
+ s->max_rings = 1;
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_WBM_REG +
+ HAL_WBM_SW_RELEASE_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM_SW_RELEASE_RING_HP;
+
+ s = &hal->srng_config[HAL_WBM2SW_RELEASE];
+ s->reg_start[0] = HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM0_RELEASE_RING_BASE_LSB(ab);
+ s->reg_start[1] = HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM0_RELEASE_RING_HP;
+ s->reg_size[0] = HAL_WBM1_RELEASE_RING_BASE_LSB(ab) -
+ HAL_WBM0_RELEASE_RING_BASE_LSB(ab);
+ s->reg_size[1] = HAL_WBM1_RELEASE_RING_HP - HAL_WBM0_RELEASE_RING_HP;
+
+ s = &hal->srng_config[HAL_RXDMA_BUF];
+ s->max_rings = 2;
+ s->mac_type = ATH12K_HAL_SRNG_PMAC;
+
+ s = &hal->srng_config[HAL_RXDMA_DST];
+ s->max_rings = 1;
+ s->entry_size = sizeof(struct hal_reo_entrance_ring) >> 2;
+
+ /* below rings are not used */
+ s = &hal->srng_config[HAL_RXDMA_DIR_BUF];
+ s->max_rings = 0;
+
+ s = &hal->srng_config[HAL_PPE2TCL];
+ s->max_rings = 0;
+
+ s = &hal->srng_config[HAL_PPE_RELEASE];
+ s->max_rings = 0;
+
+ s = &hal->srng_config[HAL_TX_MONITOR_BUF];
+ s->max_rings = 0;
+
+ s = &hal->srng_config[HAL_TX_MONITOR_DST];
+ s->max_rings = 0;
+
+ return 0;
+}
+
+static bool ath12k_hw_wcn7850_dp_rx_h_msdu_done(struct hal_rx_desc *desc)
+{
+ return !!le32_get_bits(desc->u.wcn7850.msdu_end.info14,
+ RX_MSDU_END_INFO14_MSDU_DONE);
+}
+
+static bool ath12k_hw_wcn7850_dp_rx_h_l4_cksum_fail(struct hal_rx_desc *desc)
+{
+ return !!le32_get_bits(desc->u.wcn7850.msdu_end.info13,
+ RX_MSDU_END_INFO13_TCP_UDP_CKSUM_FAIL);
+}
+
+static bool ath12k_hw_wcn7850_dp_rx_h_ip_cksum_fail(struct hal_rx_desc *desc)
+{
+ return !!le32_get_bits(desc->u.wcn7850.msdu_end.info13,
+ RX_MSDU_END_INFO13_IP_CKSUM_FAIL);
+}
+
+static bool ath12k_hw_wcn7850_dp_rx_h_is_decrypted(struct hal_rx_desc *desc)
+{
+ return (le32_get_bits(desc->u.wcn7850.msdu_end.info14,
+ RX_MSDU_END_INFO14_DECRYPT_STATUS_CODE) ==
+ RX_DESC_DECRYPT_STATUS_CODE_OK);
+}
+
+static u32 ath12k_hw_wcn7850_dp_rx_h_mpdu_err(struct hal_rx_desc *desc)
+{
+ u32 info = __le32_to_cpu(desc->u.wcn7850.msdu_end.info13);
+ u32 errmap = 0;
+
+ if (info & RX_MSDU_END_INFO13_FCS_ERR)
+ errmap |= HAL_RX_MPDU_ERR_FCS;
+
+ if (info & RX_MSDU_END_INFO13_DECRYPT_ERR)
+ errmap |= HAL_RX_MPDU_ERR_DECRYPT;
+
+ if (info & RX_MSDU_END_INFO13_TKIP_MIC_ERR)
+ errmap |= HAL_RX_MPDU_ERR_TKIP_MIC;
+
+ if (info & RX_MSDU_END_INFO13_A_MSDU_ERROR)
+ errmap |= HAL_RX_MPDU_ERR_AMSDU_ERR;
+
+ if (info & RX_MSDU_END_INFO13_OVERFLOW_ERR)
+ errmap |= HAL_RX_MPDU_ERR_OVERFLOW;
+
+ if (info & RX_MSDU_END_INFO13_MSDU_LEN_ERR)
+ errmap |= HAL_RX_MPDU_ERR_MSDU_LEN;
+
+ if (info & RX_MSDU_END_INFO13_MPDU_LEN_ERR)
+ errmap |= HAL_RX_MPDU_ERR_MPDU_LEN;
+
+ return errmap;
+}
+
+const struct hal_ops hal_wcn7850_ops = {
+ .rx_desc_get_first_msdu = ath12k_hw_wcn7850_rx_desc_get_first_msdu,
+ .rx_desc_get_last_msdu = ath12k_hw_wcn7850_rx_desc_get_last_msdu,
+ .rx_desc_get_l3_pad_bytes = ath12k_hw_wcn7850_rx_desc_get_l3_pad_bytes,
+ .rx_desc_encrypt_valid = ath12k_hw_wcn7850_rx_desc_encrypt_valid,
+ .rx_desc_get_encrypt_type = ath12k_hw_wcn7850_rx_desc_get_encrypt_type,
+ .rx_desc_get_decap_type = ath12k_hw_wcn7850_rx_desc_get_decap_type,
+ .rx_desc_get_mesh_ctl = ath12k_hw_wcn7850_rx_desc_get_mesh_ctl,
+ .rx_desc_get_mpdu_seq_ctl_vld = ath12k_hw_wcn7850_rx_desc_get_mpdu_seq_ctl_vld,
+ .rx_desc_get_mpdu_fc_valid = ath12k_hw_wcn7850_rx_desc_get_mpdu_fc_valid,
+ .rx_desc_get_mpdu_start_seq_no = ath12k_hw_wcn7850_rx_desc_get_mpdu_start_seq_no,
+ .rx_desc_get_msdu_len = ath12k_hw_wcn7850_rx_desc_get_msdu_len,
+ .rx_desc_get_msdu_sgi = ath12k_hw_wcn7850_rx_desc_get_msdu_sgi,
+ .rx_desc_get_msdu_rate_mcs = ath12k_hw_wcn7850_rx_desc_get_msdu_rate_mcs,
+ .rx_desc_get_msdu_rx_bw = ath12k_hw_wcn7850_rx_desc_get_msdu_rx_bw,
+ .rx_desc_get_msdu_freq = ath12k_hw_wcn7850_rx_desc_get_msdu_freq,
+ .rx_desc_get_msdu_pkt_type = ath12k_hw_wcn7850_rx_desc_get_msdu_pkt_type,
+ .rx_desc_get_msdu_nss = ath12k_hw_wcn7850_rx_desc_get_msdu_nss,
+ .rx_desc_get_mpdu_tid = ath12k_hw_wcn7850_rx_desc_get_mpdu_tid,
+ .rx_desc_get_mpdu_peer_id = ath12k_hw_wcn7850_rx_desc_get_mpdu_peer_id,
+ .rx_desc_copy_end_tlv = ath12k_hw_wcn7850_rx_desc_copy_end_tlv,
+ .rx_desc_get_mpdu_start_tag = ath12k_hw_wcn7850_rx_desc_get_mpdu_start_tag,
+ .rx_desc_get_mpdu_ppdu_id = ath12k_hw_wcn7850_rx_desc_get_mpdu_ppdu_id,
+ .rx_desc_set_msdu_len = ath12k_hw_wcn7850_rx_desc_set_msdu_len,
+ .rx_desc_get_msdu_payload = ath12k_hw_wcn7850_rx_desc_get_msdu_payload,
+ .rx_desc_get_mpdu_start_offset = ath12k_hw_wcn7850_rx_desc_get_mpdu_start_offset,
+ .rx_desc_get_msdu_end_offset = ath12k_hw_wcn7850_rx_desc_get_msdu_end_offset,
+ .rx_desc_mac_addr2_valid = ath12k_hw_wcn7850_rx_desc_mac_addr2_valid,
+ .rx_desc_mpdu_start_addr2 = ath12k_hw_wcn7850_rx_desc_mpdu_start_addr2,
+ .rx_desc_is_mcbc = ath12k_hw_wcn7850_rx_desc_is_mcbc,
+ .rx_desc_get_dot11_hdr = ath12k_hw_wcn7850_rx_desc_get_dot11_hdr,
+ .rx_desc_get_crypto_header = ath12k_hw_wcn7850_rx_desc_get_crypto_hdr,
+ .rx_desc_get_mpdu_frame_ctl = ath12k_hw_wcn7850_rx_desc_get_mpdu_frame_ctl,
+ .create_srng_config = ath12k_hal_srng_create_config_wcn7850,
+ .tcl_to_wbm_rbm_map = ath12k_hal_wcn7850_tcl_to_wbm_rbm_map,
+ .dp_rx_h_msdu_done = ath12k_hw_wcn7850_dp_rx_h_msdu_done,
+ .dp_rx_h_l4_cksum_fail = ath12k_hw_wcn7850_dp_rx_h_l4_cksum_fail,
+ .dp_rx_h_ip_cksum_fail = ath12k_hw_wcn7850_dp_rx_h_ip_cksum_fail,
+ .dp_rx_h_is_decrypted = ath12k_hw_wcn7850_dp_rx_h_is_decrypted,
+ .dp_rx_h_mpdu_err = ath12k_hw_wcn7850_dp_rx_h_mpdu_err,
+};
+
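+/* hal->rdp holds the ring pointers the host only reads (src-ring TPs
+ * and dst-ring HPs, updated by HW/FW), while hal->wrp holds the ones
+ * the host writes for LMAC rings (src-ring HPs and dst-ring TPs); see
+ * how hp_addr/tp_addr are assigned in ath12k_hal_srng_setup().
+ */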
+static int ath12k_hal_alloc_cont_rdp(struct ath12k_base *ab)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ size_t size;
+
+ size = sizeof(u32) * HAL_SRNG_RING_ID_MAX;
+ hal->rdp.vaddr = dma_alloc_coherent(ab->dev, size, &hal->rdp.paddr,
+ GFP_KERNEL);
+ if (!hal->rdp.vaddr)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void ath12k_hal_free_cont_rdp(struct ath12k_base *ab)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ size_t size;
+
+ if (!hal->rdp.vaddr)
+ return;
+
+ size = sizeof(u32) * HAL_SRNG_RING_ID_MAX;
+ dma_free_coherent(ab->dev, size,
+ hal->rdp.vaddr, hal->rdp.paddr);
+ hal->rdp.vaddr = NULL;
+}
+
+static int ath12k_hal_alloc_cont_wrp(struct ath12k_base *ab)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ size_t size;
+
+ size = sizeof(u32) * (HAL_SRNG_NUM_PMAC_RINGS + HAL_SRNG_NUM_DMAC_RINGS);
+ hal->wrp.vaddr = dma_alloc_coherent(ab->dev, size, &hal->wrp.paddr,
+ GFP_KERNEL);
+ if (!hal->wrp.vaddr)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void ath12k_hal_free_cont_wrp(struct ath12k_base *ab)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ size_t size;
+
+ if (!hal->wrp.vaddr)
+ return;
+
+ size = sizeof(u32) * (HAL_SRNG_NUM_PMAC_RINGS + HAL_SRNG_NUM_DMAC_RINGS);
+ dma_free_coherent(ab->dev, size,
+ hal->wrp.vaddr, hal->wrp.paddr);
+ hal->wrp.vaddr = NULL;
+}
+
+static void ath12k_hal_ce_dst_setup(struct ath12k_base *ab,
+ struct hal_srng *srng, int ring_num)
+{
+ struct hal_srng_config *srng_config = &ab->hal.srng_config[HAL_CE_DST];
+ u32 addr;
+ u32 val;
+
+ addr = HAL_CE_DST_RING_CTRL +
+ srng_config->reg_start[HAL_SRNG_REG_GRP_R0] +
+ ring_num * srng_config->reg_size[HAL_SRNG_REG_GRP_R0];
+
+ val = ath12k_hif_read32(ab, addr);
+ val &= ~HAL_CE_DST_R0_DEST_CTRL_MAX_LEN;
+ val |= u32_encode_bits(srng->u.dst_ring.max_buffer_length,
+ HAL_CE_DST_R0_DEST_CTRL_MAX_LEN);
+ ath12k_hif_write32(ab, addr, val);
+}
+
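+/* Program one destination ring into HW: MSI address/data, ring base and
+ * size, interrupt mitigation thresholds (the timer threshold is written
+ * as intr_timer_thres_us >> 3, i.e. the register apparently counts in
+ * 8 us units), the shadow HP address and the misc/enable bits. The
+ * HAL_REO1_* register layout is shared by all destination rings; the
+ * per-ring base comes from srng->hwreg_base.
+ */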
+static void ath12k_hal_srng_dst_hw_init(struct ath12k_base *ab,
+ struct hal_srng *srng)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ u32 val;
+ u64 hp_addr;
+ u32 reg_base;
+
+ reg_base = srng->hwreg_base[HAL_SRNG_REG_GRP_R0];
+
+ if (srng->flags & HAL_SRNG_FLAGS_MSI_INTR) {
+ ath12k_hif_write32(ab, reg_base +
+ ath12k_hal_reo1_ring_msi1_base_lsb_offset(ab),
+ srng->msi_addr);
+
+ val = u32_encode_bits(((u64)srng->msi_addr >> HAL_ADDR_MSB_REG_SHIFT),
+ HAL_REO1_RING_MSI1_BASE_MSB_ADDR) |
+ HAL_REO1_RING_MSI1_BASE_MSB_MSI1_ENABLE;
+ ath12k_hif_write32(ab, reg_base +
+ ath12k_hal_reo1_ring_msi1_base_msb_offset(ab), val);
+
+ ath12k_hif_write32(ab,
+ reg_base + ath12k_hal_reo1_ring_msi1_data_offset(ab),
+ srng->msi_data);
+ }
+
+ ath12k_hif_write32(ab, reg_base, srng->ring_base_paddr);
+
+ val = u32_encode_bits(((u64)srng->ring_base_paddr >> HAL_ADDR_MSB_REG_SHIFT),
+ HAL_REO1_RING_BASE_MSB_RING_BASE_ADDR_MSB) |
+ u32_encode_bits((srng->entry_size * srng->num_entries),
+ HAL_REO1_RING_BASE_MSB_RING_SIZE);
+ ath12k_hif_write32(ab, reg_base + ath12k_hal_reo1_ring_base_msb_offset(ab), val);
+
+ val = u32_encode_bits(srng->ring_id, HAL_REO1_RING_ID_RING_ID) |
+ u32_encode_bits(srng->entry_size, HAL_REO1_RING_ID_ENTRY_SIZE);
+ ath12k_hif_write32(ab, reg_base + ath12k_hal_reo1_ring_id_offset(ab), val);
+
+ /* interrupt setup */
+ val = u32_encode_bits((srng->intr_timer_thres_us >> 3),
+ HAL_REO1_RING_PRDR_INT_SETUP_INTR_TMR_THOLD);
+
+ val |= u32_encode_bits((srng->intr_batch_cntr_thres_entries * srng->entry_size),
+ HAL_REO1_RING_PRDR_INT_SETUP_BATCH_COUNTER_THOLD);
+
+ ath12k_hif_write32(ab,
+ reg_base + ath12k_hal_reo1_ring_producer_int_setup_offset(ab),
+ val);
+
+ hp_addr = hal->rdp.paddr +
+ ((unsigned long)srng->u.dst_ring.hp_addr -
+ (unsigned long)hal->rdp.vaddr);
+ ath12k_hif_write32(ab, reg_base + ath12k_hal_reo1_ring_hp_addr_lsb_offset(ab),
+ hp_addr & HAL_ADDR_LSB_REG_MASK);
+ ath12k_hif_write32(ab, reg_base + ath12k_hal_reo1_ring_hp_addr_msb_offset(ab),
+ hp_addr >> HAL_ADDR_MSB_REG_SHIFT);
+
+ /* Initialize head and tail pointers to indicate ring is empty */
+ reg_base = srng->hwreg_base[HAL_SRNG_REG_GRP_R2];
+ ath12k_hif_write32(ab, reg_base, 0);
+ ath12k_hif_write32(ab, reg_base + HAL_REO1_RING_TP_OFFSET, 0);
+ *srng->u.dst_ring.hp_addr = 0;
+
+ reg_base = srng->hwreg_base[HAL_SRNG_REG_GRP_R0];
+ val = 0;
+ if (srng->flags & HAL_SRNG_FLAGS_DATA_TLV_SWAP)
+ val |= HAL_REO1_RING_MISC_DATA_TLV_SWAP;
+ if (srng->flags & HAL_SRNG_FLAGS_RING_PTR_SWAP)
+ val |= HAL_REO1_RING_MISC_HOST_FW_SWAP;
+ if (srng->flags & HAL_SRNG_FLAGS_MSI_SWAP)
+ val |= HAL_REO1_RING_MISC_MSI_SWAP;
+ val |= HAL_REO1_RING_MISC_SRNG_ENABLE;
+
+ ath12k_hif_write32(ab, reg_base + ath12k_hal_reo1_ring_misc_offset(ab), val);
+}
+
+static void ath12k_hal_srng_src_hw_init(struct ath12k_base *ab,
+ struct hal_srng *srng)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ u32 val;
+ u64 tp_addr;
+ u32 reg_base;
+
+ reg_base = srng->hwreg_base[HAL_SRNG_REG_GRP_R0];
+
+ if (srng->flags & HAL_SRNG_FLAGS_MSI_INTR) {
+ ath12k_hif_write32(ab, reg_base +
+ HAL_TCL1_RING_MSI1_BASE_LSB_OFFSET(ab),
+ srng->msi_addr);
+
+ val = u32_encode_bits(((u64)srng->msi_addr >> HAL_ADDR_MSB_REG_SHIFT),
+ HAL_TCL1_RING_MSI1_BASE_MSB_ADDR) |
+ HAL_TCL1_RING_MSI1_BASE_MSB_MSI1_ENABLE;
+ ath12k_hif_write32(ab, reg_base +
+ HAL_TCL1_RING_MSI1_BASE_MSB_OFFSET(ab),
+ val);
+
+ ath12k_hif_write32(ab, reg_base +
+ HAL_TCL1_RING_MSI1_DATA_OFFSET(ab),
+ srng->msi_data);
+ }
+
+ ath12k_hif_write32(ab, reg_base, srng->ring_base_paddr);
+
+ val = u32_encode_bits(((u64)srng->ring_base_paddr >> HAL_ADDR_MSB_REG_SHIFT),
+ HAL_TCL1_RING_BASE_MSB_RING_BASE_ADDR_MSB) |
+ u32_encode_bits((srng->entry_size * srng->num_entries),
+ HAL_TCL1_RING_BASE_MSB_RING_SIZE);
+ ath12k_hif_write32(ab, reg_base + HAL_TCL1_RING_BASE_MSB_OFFSET, val);
+
+ val = u32_encode_bits(srng->entry_size, HAL_REO1_RING_ID_ENTRY_SIZE);
+ ath12k_hif_write32(ab, reg_base + HAL_TCL1_RING_ID_OFFSET(ab), val);
+
+ val = u32_encode_bits(srng->intr_timer_thres_us,
+ HAL_TCL1_RING_CONSR_INT_SETUP_IX0_INTR_TMR_THOLD);
+
+ val |= u32_encode_bits((srng->intr_batch_cntr_thres_entries * srng->entry_size),
+ HAL_TCL1_RING_CONSR_INT_SETUP_IX0_BATCH_COUNTER_THOLD);
+
+ ath12k_hif_write32(ab,
+ reg_base + HAL_TCL1_RING_CONSR_INT_SETUP_IX0_OFFSET(ab),
+ val);
+
+ val = 0;
+ if (srng->flags & HAL_SRNG_FLAGS_LOW_THRESH_INTR_EN) {
+ val |= u32_encode_bits(srng->u.src_ring.low_threshold,
+ HAL_TCL1_RING_CONSR_INT_SETUP_IX1_LOW_THOLD);
+ }
+ ath12k_hif_write32(ab,
+ reg_base + HAL_TCL1_RING_CONSR_INT_SETUP_IX1_OFFSET(ab),
+ val);
+
+ if (srng->ring_id != HAL_SRNG_RING_ID_WBM_IDLE_LINK) {
+ tp_addr = hal->rdp.paddr +
+ ((unsigned long)srng->u.src_ring.tp_addr -
+ (unsigned long)hal->rdp.vaddr);
+ ath12k_hif_write32(ab,
+ reg_base + HAL_TCL1_RING_TP_ADDR_LSB_OFFSET(ab),
+ tp_addr & HAL_ADDR_LSB_REG_MASK);
+ ath12k_hif_write32(ab,
+ reg_base + HAL_TCL1_RING_TP_ADDR_MSB_OFFSET(ab),
+ tp_addr >> HAL_ADDR_MSB_REG_SHIFT);
+ }
+
+ /* Initialize head and tail pointers to indicate ring is empty */
+ reg_base = srng->hwreg_base[HAL_SRNG_REG_GRP_R2];
+ ath12k_hif_write32(ab, reg_base, 0);
+ ath12k_hif_write32(ab, reg_base + HAL_TCL1_RING_TP_OFFSET, 0);
+ *srng->u.src_ring.tp_addr = 0;
+
+ reg_base = srng->hwreg_base[HAL_SRNG_REG_GRP_R0];
+ val = 0;
+ if (srng->flags & HAL_SRNG_FLAGS_DATA_TLV_SWAP)
+ val |= HAL_TCL1_RING_MISC_DATA_TLV_SWAP;
+ if (srng->flags & HAL_SRNG_FLAGS_RING_PTR_SWAP)
+ val |= HAL_TCL1_RING_MISC_HOST_FW_SWAP;
+ if (srng->flags & HAL_SRNG_FLAGS_MSI_SWAP)
+ val |= HAL_TCL1_RING_MISC_MSI_SWAP;
+
+ /* Loop count is not used for SRC rings */
+ val |= HAL_TCL1_RING_MISC_MSI_LOOPCNT_DISABLE;
+
+ val |= HAL_TCL1_RING_MISC_SRNG_ENABLE;
+
+ if (srng->ring_id == HAL_SRNG_RING_ID_WBM_IDLE_LINK)
+ val |= HAL_TCL1_RING_MISC_MSI_RING_ID_DISABLE;
+
+ ath12k_hif_write32(ab, reg_base + HAL_TCL1_RING_MISC_OFFSET(ab), val);
+}
+
+static void ath12k_hal_srng_hw_init(struct ath12k_base *ab,
+ struct hal_srng *srng)
+{
+ if (srng->ring_dir == HAL_SRNG_DIR_SRC)
+ ath12k_hal_srng_src_hw_init(ab, srng);
+ else
+ ath12k_hal_srng_dst_hw_init(ab, srng);
+}
+
+static int ath12k_hal_srng_get_ring_id(struct ath12k_base *ab,
+ enum hal_ring_type type,
+ int ring_num, int mac_id)
+{
+ struct hal_srng_config *srng_config = &ab->hal.srng_config[type];
+ int ring_id;
+
+ if (ring_num >= srng_config->max_rings) {
+ ath12k_warn(ab, "invalid ring number: %d\n", ring_num);
+ return -EINVAL;
+ }
+
+ ring_id = srng_config->start_ring_id + ring_num;
+ if (srng_config->mac_type == ATH12K_HAL_SRNG_PMAC)
+ ring_id += mac_id * HAL_SRNG_RINGS_PER_PMAC;
+
+ if (WARN_ON(ring_id >= HAL_SRNG_RING_ID_MAX))
+ return -EINVAL;
+
+ return ring_id;
+}
+
+int ath12k_hal_srng_get_entrysize(struct ath12k_base *ab, u32 ring_type)
+{
+ struct hal_srng_config *srng_config;
+
+ if (WARN_ON(ring_type >= HAL_MAX_RING_TYPES))
+ return -EINVAL;
+
+ srng_config = &ab->hal.srng_config[ring_type];
+
+ return (srng_config->entry_size << 2);
+}
+
+int ath12k_hal_srng_get_max_entries(struct ath12k_base *ab, u32 ring_type)
+{
+ struct hal_srng_config *srng_config;
+
+ if (WARN_ON(ring_type >= HAL_MAX_RING_TYPES))
+ return -EINVAL;
+
+ srng_config = &ab->hal.srng_config[ring_type];
+
+ return (srng_config->max_size / srng_config->entry_size);
+}
+
+void ath12k_hal_srng_get_params(struct ath12k_base *ab, struct hal_srng *srng,
+ struct hal_srng_params *params)
+{
+ params->ring_base_paddr = srng->ring_base_paddr;
+ params->ring_base_vaddr = srng->ring_base_vaddr;
+ params->num_entries = srng->num_entries;
+ params->intr_timer_thres_us = srng->intr_timer_thres_us;
+ params->intr_batch_cntr_thres_entries =
+ srng->intr_batch_cntr_thres_entries;
+ params->low_threshold = srng->u.src_ring.low_threshold;
+ params->msi_addr = srng->msi_addr;
+ params->msi2_addr = srng->msi2_addr;
+ params->msi_data = srng->msi_data;
+ params->msi2_data = srng->msi2_data;
+ params->flags = srng->flags;
+}
+
+dma_addr_t ath12k_hal_srng_get_hp_addr(struct ath12k_base *ab,
+ struct hal_srng *srng)
+{
+ if (!(srng->flags & HAL_SRNG_FLAGS_LMAC_RING))
+ return 0;
+
+ if (srng->ring_dir == HAL_SRNG_DIR_SRC)
+ return ab->hal.wrp.paddr +
+ ((unsigned long)srng->u.src_ring.hp_addr -
+ (unsigned long)ab->hal.wrp.vaddr);
+ else
+ return ab->hal.rdp.paddr +
+ ((unsigned long)srng->u.dst_ring.hp_addr -
+ (unsigned long)ab->hal.rdp.vaddr);
+}
+
+dma_addr_t ath12k_hal_srng_get_tp_addr(struct ath12k_base *ab,
+ struct hal_srng *srng)
+{
+ if (!(srng->flags & HAL_SRNG_FLAGS_LMAC_RING))
+ return 0;
+
+ if (srng->ring_dir == HAL_SRNG_DIR_SRC)
+ return ab->hal.rdp.paddr +
+ ((unsigned long)srng->u.src_ring.tp_addr -
+ (unsigned long)ab->hal.rdp.vaddr);
+ else
+ return ab->hal.wrp.paddr +
+ ((unsigned long)srng->u.dst_ring.tp_addr -
+ (unsigned long)ab->hal.wrp.vaddr);
+}
+
+u32 ath12k_hal_ce_get_desc_size(enum hal_ce_desc type)
+{
+ switch (type) {
+ case HAL_CE_DESC_SRC:
+ return sizeof(struct hal_ce_srng_src_desc);
+ case HAL_CE_DESC_DST:
+ return sizeof(struct hal_ce_srng_dest_desc);
+ case HAL_CE_DESC_DST_STATUS:
+ return sizeof(struct hal_ce_srng_dst_status_desc);
+ }
+
+ return 0;
+}
+
+void ath12k_hal_ce_src_set_desc(struct hal_ce_srng_src_desc *desc, dma_addr_t paddr,
+ u32 len, u32 id, u8 byte_swap_data)
+{
+ desc->buffer_addr_low = cpu_to_le32(paddr & HAL_ADDR_LSB_REG_MASK);
+ desc->buffer_addr_info =
+ le32_encode_bits(((u64)paddr >> HAL_ADDR_MSB_REG_SHIFT),
+ HAL_CE_SRC_DESC_ADDR_INFO_ADDR_HI) |
+ le32_encode_bits(byte_swap_data,
+ HAL_CE_SRC_DESC_ADDR_INFO_BYTE_SWAP) |
+ le32_encode_bits(0, HAL_CE_SRC_DESC_ADDR_INFO_GATHER) |
+ le32_encode_bits(len, HAL_CE_SRC_DESC_ADDR_INFO_LEN);
+ desc->meta_info = le32_encode_bits(id, HAL_CE_SRC_DESC_META_INFO_DATA);
+}
+
+void ath12k_hal_ce_dst_set_desc(struct hal_ce_srng_dest_desc *desc, dma_addr_t paddr)
+{
+ desc->buffer_addr_low = cpu_to_le32(paddr & HAL_ADDR_LSB_REG_MASK);
+ desc->buffer_addr_info =
+ le32_encode_bits(((u64)paddr >> HAL_ADDR_MSB_REG_SHIFT),
+ HAL_CE_DEST_DESC_ADDR_INFO_ADDR_HI);
+}
+
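+/* Read the completed-transfer length from a CE status descriptor and
+ * clear it in place, presumably so that a stale length is not re-read
+ * as a new completion when this descriptor slot is reused.
+ */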
+u32 ath12k_hal_ce_dst_status_get_length(struct hal_ce_srng_dst_status_desc *desc)
+{
+ u32 len;
+
+ len = le32_get_bits(desc->flags, HAL_CE_DST_STATUS_DESC_FLAGS_LEN);
+ desc->flags &= ~cpu_to_le32(HAL_CE_DST_STATUS_DESC_FLAGS_LEN);
+
+ return len;
+}
+
+void ath12k_hal_set_link_desc_addr(struct hal_wbm_link_desc *desc, u32 cookie,
+ dma_addr_t paddr)
+{
+ desc->buf_addr_info.info0 = le32_encode_bits((paddr & HAL_ADDR_LSB_REG_MASK),
+ BUFFER_ADDR_INFO0_ADDR);
+ desc->buf_addr_info.info1 =
+ le32_encode_bits(((u64)paddr >> HAL_ADDR_MSB_REG_SHIFT),
+ BUFFER_ADDR_INFO1_ADDR) |
+ le32_encode_bits(1, BUFFER_ADDR_INFO1_RET_BUF_MGR) |
+ le32_encode_bits(cookie, BUFFER_ADDR_INFO1_SW_COOKIE);
+}
+
+void *ath12k_hal_srng_dst_peek(struct ath12k_base *ab, struct hal_srng *srng)
+{
+ lockdep_assert_held(&srng->lock);
+
+ if (srng->u.dst_ring.tp != srng->u.dst_ring.cached_hp)
+ return (srng->ring_base_vaddr + srng->u.dst_ring.tp);
+
+ return NULL;
+}
+
+void *ath12k_hal_srng_dst_get_next_entry(struct ath12k_base *ab,
+ struct hal_srng *srng)
+{
+ void *desc;
+
+ lockdep_assert_held(&srng->lock);
+
+ if (srng->u.dst_ring.tp == srng->u.dst_ring.cached_hp)
+ return NULL;
+
+ desc = srng->ring_base_vaddr + srng->u.dst_ring.tp;
+
+ srng->u.dst_ring.tp = (srng->u.dst_ring.tp + srng->entry_size) %
+ srng->ring_size;
+
+ return desc;
+}
+
+int ath12k_hal_srng_dst_num_free(struct ath12k_base *ab, struct hal_srng *srng,
+ bool sync_hw_ptr)
+{
+ u32 tp, hp;
+
+ lockdep_assert_held(&srng->lock);
+
+ tp = srng->u.dst_ring.tp;
+
+ if (sync_hw_ptr) {
+ hp = *srng->u.dst_ring.hp_addr;
+ srng->u.dst_ring.cached_hp = hp;
+ } else {
+ hp = srng->u.dst_ring.cached_hp;
+ }
+
+ if (hp >= tp)
+ return (hp - tp) / srng->entry_size;
+ else
+ return (srng->ring_size - tp + hp) / srng->entry_size;
+}
+
+/* Returns number of available entries in src ring */
+int ath12k_hal_srng_src_num_free(struct ath12k_base *ab, struct hal_srng *srng,
+ bool sync_hw_ptr)
+{
+ u32 tp, hp;
+
+ lockdep_assert_held(&srng->lock);
+
+ hp = srng->u.src_ring.hp;
+
+ if (sync_hw_ptr) {
+ tp = *srng->u.src_ring.tp_addr;
+ srng->u.src_ring.cached_tp = tp;
+ } else {
+ tp = srng->u.src_ring.cached_tp;
+ }
+
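+ /* One entry is always left unused so that a full ring (hp one entry
+ * behind tp) can be told apart from an empty one (hp == tp); hence
+ * the "- 1" below.
+ */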
+ if (tp > hp)
+ return ((tp - hp) / srng->entry_size) - 1;
+ else
+ return ((srng->ring_size - hp + tp) / srng->entry_size) - 1;
+}
+
+void *ath12k_hal_srng_src_get_next_entry(struct ath12k_base *ab,
+ struct hal_srng *srng)
+{
+ void *desc;
+ u32 next_hp;
+
+ lockdep_assert_held(&srng->lock);
+
+ /* TODO: Using % is expensive, but we have to do this since the size
+ * of some SRNG rings is not a power of 2 (due to descriptor sizes).
+ * Need to see if a separate function can be defined for rings with
+ * power-of-2 sizes (TCL2SW, REO2SW, SW2RXDMA and CE rings) so that
+ * the overhead of % can be avoided by masking (with &).
+ */
+ next_hp = (srng->u.src_ring.hp + srng->entry_size) % srng->ring_size;
+
+ if (next_hp == srng->u.src_ring.cached_tp)
+ return NULL;
+
+ desc = srng->ring_base_vaddr + srng->u.src_ring.hp;
+ srng->u.src_ring.hp = next_hp;
+
+ /* TODO: Reap functionality is not used by all rings. If a particular
+ * ring does not use the reap functionality, reap_hp need not be
+ * updated with the next_hp pointer. Make sure a separate function is
+ * used for such rings before optimizing away the reap_hp update
+ * below.
+ */
+ srng->u.src_ring.reap_hp = next_hp;
+
+ return desc;
+}
+
+void *ath12k_hal_srng_src_reap_next(struct ath12k_base *ab,
+ struct hal_srng *srng)
+{
+ void *desc;
+ u32 next_reap_hp;
+
+ lockdep_assert_held(&srng->lock);
+
+ next_reap_hp = (srng->u.src_ring.reap_hp + srng->entry_size) %
+ srng->ring_size;
+
+ if (next_reap_hp == srng->u.src_ring.cached_tp)
+ return NULL;
+
+ desc = srng->ring_base_vaddr + next_reap_hp;
+ srng->u.src_ring.reap_hp = next_reap_hp;
+
+ return desc;
+}
+
+void *ath12k_hal_srng_src_get_next_reaped(struct ath12k_base *ab,
+ struct hal_srng *srng)
+{
+ void *desc;
+
+ lockdep_assert_held(&srng->lock);
+
+ if (srng->u.src_ring.hp == srng->u.src_ring.reap_hp)
+ return NULL;
+
+ desc = srng->ring_base_vaddr + srng->u.src_ring.hp;
+ srng->u.src_ring.hp = (srng->u.src_ring.hp + srng->entry_size) %
+ srng->ring_size;
+
+ return desc;
+}
+
+void ath12k_hal_srng_access_begin(struct ath12k_base *ab, struct hal_srng *srng)
+{
+ lockdep_assert_held(&srng->lock);
+
+ if (srng->ring_dir == HAL_SRNG_DIR_SRC)
+ srng->u.src_ring.cached_tp =
+ *(volatile u32 *)srng->u.src_ring.tp_addr;
+ else
+ srng->u.dst_ring.cached_hp = *srng->u.dst_ring.hp_addr;
+}
+
+/* Update cached ring head/tail pointers to HW. ath12k_hal_srng_access_begin()
+ * should have been called before this.
+ */
+void ath12k_hal_srng_access_end(struct ath12k_base *ab, struct hal_srng *srng)
+{
+ lockdep_assert_held(&srng->lock);
+
+ /* TODO: See if we need a write memory barrier here */
+ if (srng->flags & HAL_SRNG_FLAGS_LMAC_RING) {
+ /* For LMAC rings, ring pointer updates are done through FW and
+ * hence written to a shared memory location that is read by FW
+ */
+ if (srng->ring_dir == HAL_SRNG_DIR_SRC) {
+ srng->u.src_ring.last_tp =
+ *(volatile u32 *)srng->u.src_ring.tp_addr;
+ *srng->u.src_ring.hp_addr = srng->u.src_ring.hp;
+ } else {
+ srng->u.dst_ring.last_hp = *srng->u.dst_ring.hp_addr;
+ *srng->u.dst_ring.tp_addr = srng->u.dst_ring.tp;
+ }
+ } else {
+ if (srng->ring_dir == HAL_SRNG_DIR_SRC) {
+ srng->u.src_ring.last_tp =
+ *(volatile u32 *)srng->u.src_ring.tp_addr;
+ ath12k_hif_write32(ab,
+ (unsigned long)srng->u.src_ring.hp_addr -
+ (unsigned long)ab->mem,
+ srng->u.src_ring.hp);
+ } else {
+ srng->u.dst_ring.last_hp = *srng->u.dst_ring.hp_addr;
+ ath12k_hif_write32(ab,
+ (unsigned long)srng->u.dst_ring.tp_addr -
+ (unsigned long)ab->mem,
+ srng->u.dst_ring.tp);
+ }
+ }
+
+ srng->timestamp = jiffies;
+}
+
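+/* A sketch of the expected call pattern around the accessors above;
+ * callers provide the locking (the lockdep assertions require it) and
+ * process() stands in for a caller-defined consumer:
+ *
+ * spin_lock_bh(&srng->lock);
+ * ath12k_hal_srng_access_begin(ab, srng);
+ * while ((desc = ath12k_hal_srng_dst_get_next_entry(ab, srng)))
+ * process(desc);
+ * ath12k_hal_srng_access_end(ab, srng);
+ * spin_unlock_bh(&srng->lock);
+ */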
+void ath12k_hal_setup_link_idle_list(struct ath12k_base *ab,
+ struct hal_wbm_idle_scatter_list *sbuf,
+ u32 nsbufs, u32 tot_link_desc,
+ u32 end_offset)
+{
+ struct ath12k_buffer_addr *link_addr;
+ int i;
+ u32 reg_scatter_buf_sz = HAL_WBM_IDLE_SCATTER_BUF_SIZE / 64;
+ u32 val;
+
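+ /* Each scatter buffer ends in a link descriptor pointing at the
+ * next buffer, chaining the buffers so HW can walk the whole idle
+ * link descriptor list.
+ */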
+ link_addr = (void *)sbuf[0].vaddr + HAL_WBM_IDLE_SCATTER_BUF_SIZE;
+
+ for (i = 1; i < nsbufs; i++) {
+ link_addr->info0 = cpu_to_le32(sbuf[i].paddr & HAL_ADDR_LSB_REG_MASK);
+
+ link_addr->info1 =
+ le32_encode_bits((u64)sbuf[i].paddr >> HAL_ADDR_MSB_REG_SHIFT,
+ HAL_WBM_SCATTERED_DESC_MSB_BASE_ADDR_39_32) |
+ le32_encode_bits(BASE_ADDR_MATCH_TAG_VAL,
+ HAL_WBM_SCATTERED_DESC_MSB_BASE_ADDR_MATCH_TAG);
+
+ link_addr = (void *)sbuf[i].vaddr +
+ HAL_WBM_IDLE_SCATTER_BUF_SIZE;
+ }
+
+ val = u32_encode_bits(reg_scatter_buf_sz, HAL_WBM_SCATTER_BUFFER_SIZE) |
+ u32_encode_bits(0x1, HAL_WBM_LINK_DESC_IDLE_LIST_MODE);
+
+ ath12k_hif_write32(ab,
+ HAL_SEQ_WCSS_UMAC_WBM_REG +
+ HAL_WBM_R0_IDLE_LIST_CONTROL_ADDR(ab),
+ val);
+
+ val = u32_encode_bits(reg_scatter_buf_sz * nsbufs,
+ HAL_WBM_SCATTER_RING_SIZE_OF_IDLE_LINK_DESC_LIST);
+ ath12k_hif_write32(ab,
+ HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM_R0_IDLE_LIST_SIZE_ADDR(ab),
+ val);
+
+ val = u32_encode_bits(sbuf[0].paddr & HAL_ADDR_LSB_REG_MASK,
+ BUFFER_ADDR_INFO0_ADDR);
+ ath12k_hif_write32(ab,
+ HAL_SEQ_WCSS_UMAC_WBM_REG +
+ HAL_WBM_SCATTERED_RING_BASE_LSB(ab),
+ val);
+
+ val = u32_encode_bits(BASE_ADDR_MATCH_TAG_VAL,
+ HAL_WBM_SCATTERED_DESC_MSB_BASE_ADDR_MATCH_TAG) |
+ u32_encode_bits((u64)sbuf[0].paddr >> HAL_ADDR_MSB_REG_SHIFT,
+ HAL_WBM_SCATTERED_DESC_MSB_BASE_ADDR_39_32);
+ ath12k_hif_write32(ab,
+ HAL_SEQ_WCSS_UMAC_WBM_REG +
+ HAL_WBM_SCATTERED_RING_BASE_MSB(ab),
+ val);
+
+ /* Setup head and tail pointers for the idle list */
+ val = u32_encode_bits(sbuf[nsbufs - 1].paddr, BUFFER_ADDR_INFO0_ADDR);
+ ath12k_hif_write32(ab,
+ HAL_SEQ_WCSS_UMAC_WBM_REG +
+ HAL_WBM_SCATTERED_DESC_PTR_HEAD_INFO_IX0(ab),
+ val);
+
+ val = u32_encode_bits(((u64)sbuf[nsbufs - 1].paddr >> HAL_ADDR_MSB_REG_SHIFT),
+ HAL_WBM_SCATTERED_DESC_MSB_BASE_ADDR_39_32) |
+ u32_encode_bits((end_offset >> 2),
+ HAL_WBM_SCATTERED_DESC_HEAD_P_OFFSET_IX1);
+ ath12k_hif_write32(ab,
+ HAL_SEQ_WCSS_UMAC_WBM_REG +
+ HAL_WBM_SCATTERED_DESC_PTR_HEAD_INFO_IX1(ab),
+ val);
+
+ val = u32_encode_bits(sbuf[0].paddr, BUFFER_ADDR_INFO0_ADDR);
+ ath12k_hif_write32(ab,
+ HAL_SEQ_WCSS_UMAC_WBM_REG +
+ HAL_WBM_SCATTERED_DESC_PTR_HEAD_INFO_IX0(ab),
+ val);
+
+ val = u32_encode_bits(sbuf[0].paddr, BUFFER_ADDR_INFO0_ADDR);
+ ath12k_hif_write32(ab,
+ HAL_SEQ_WCSS_UMAC_WBM_REG +
+ HAL_WBM_SCATTERED_DESC_PTR_TAIL_INFO_IX0(ab),
+ val);
+
+ val = u32_encode_bits(((u64)sbuf[0].paddr >> HAL_ADDR_MSB_REG_SHIFT),
+ HAL_WBM_SCATTERED_DESC_MSB_BASE_ADDR_39_32) |
+ u32_encode_bits(0, HAL_WBM_SCATTERED_DESC_TAIL_P_OFFSET_IX1);
+ ath12k_hif_write32(ab,
+ HAL_SEQ_WCSS_UMAC_WBM_REG +
+ HAL_WBM_SCATTERED_DESC_PTR_TAIL_INFO_IX1(ab),
+ val);
+
+ val = 2 * tot_link_desc;
+ ath12k_hif_write32(ab,
+ HAL_SEQ_WCSS_UMAC_WBM_REG +
+ HAL_WBM_SCATTERED_DESC_PTR_HP_ADDR(ab),
+ val);
+
+ /* Enable the SRNG */
+ val = u32_encode_bits(1, HAL_WBM_IDLE_LINK_RING_MISC_SRNG_ENABLE) |
+ u32_encode_bits(1, HAL_WBM_IDLE_LINK_RING_MISC_RIND_ID_DISABLE);
+ ath12k_hif_write32(ab,
+ HAL_SEQ_WCSS_UMAC_WBM_REG +
+ HAL_WBM_IDLE_LINK_RING_MISC_ADDR(ab),
+ val);
+}
+
+int ath12k_hal_srng_setup(struct ath12k_base *ab, enum hal_ring_type type,
+ int ring_num, int mac_id,
+ struct hal_srng_params *params)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ struct hal_srng_config *srng_config = &ab->hal.srng_config[type];
+ struct hal_srng *srng;
+ int ring_id;
+ u32 idx;
+ int i;
+ u32 reg_base;
+
+ ring_id = ath12k_hal_srng_get_ring_id(ab, type, ring_num, mac_id);
+ if (ring_id < 0)
+ return ring_id;
+
+ srng = &hal->srng_list[ring_id];
+
+ srng->ring_id = ring_id;
+ srng->ring_dir = srng_config->ring_dir;
+ srng->ring_base_paddr = params->ring_base_paddr;
+ srng->ring_base_vaddr = params->ring_base_vaddr;
+ srng->entry_size = srng_config->entry_size;
+ srng->num_entries = params->num_entries;
+ srng->ring_size = srng->entry_size * srng->num_entries;
+ srng->intr_batch_cntr_thres_entries =
+ params->intr_batch_cntr_thres_entries;
+ srng->intr_timer_thres_us = params->intr_timer_thres_us;
+ srng->flags = params->flags;
+ srng->msi_addr = params->msi_addr;
+ srng->msi2_addr = params->msi2_addr;
+ srng->msi_data = params->msi_data;
+ srng->msi2_data = params->msi2_data;
+ srng->initialized = 1;
+ spin_lock_init(&srng->lock);
+ lockdep_set_class(&srng->lock, &srng->lock_key);
+
+ for (i = 0; i < HAL_SRNG_NUM_REG_GRP; i++) {
+ srng->hwreg_base[i] = srng_config->reg_start[i] +
+ (ring_num * srng_config->reg_size[i]);
+ }
+
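+ /* entry_size and ring_size are in 32-bit words (see the << 2 here
+ * and in ath12k_hal_srng_get_entrysize()), not bytes.
+ */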
+ memset(srng->ring_base_vaddr, 0,
+ (srng->entry_size * srng->num_entries) << 2);
+
+ reg_base = srng->hwreg_base[HAL_SRNG_REG_GRP_R2];
+
+ if (srng->ring_dir == HAL_SRNG_DIR_SRC) {
+ srng->u.src_ring.hp = 0;
+ srng->u.src_ring.cached_tp = 0;
+ srng->u.src_ring.reap_hp = srng->ring_size - srng->entry_size;
+ srng->u.src_ring.tp_addr = (void *)(hal->rdp.vaddr + ring_id);
+ srng->u.src_ring.low_threshold = params->low_threshold *
+ srng->entry_size;
+ if (srng_config->mac_type == ATH12K_HAL_SRNG_UMAC) {
+ if (!ab->hw_params->supports_shadow_regs)
+ srng->u.src_ring.hp_addr =
+ (u32 *)((unsigned long)ab->mem + reg_base);
+ else
+ ath12k_dbg(ab, ATH12K_DBG_HAL,
+ "hal type %d ring_num %d reg_base 0x%x shadow 0x%lx\n",
+ type, ring_num,
+ reg_base,
+ (unsigned long)srng->u.src_ring.hp_addr -
+ (unsigned long)ab->mem);
+ } else {
+ idx = ring_id - HAL_SRNG_RING_ID_DMAC_CMN_ID_START;
+ srng->u.src_ring.hp_addr = (void *)(hal->wrp.vaddr +
+ idx);
+ srng->flags |= HAL_SRNG_FLAGS_LMAC_RING;
+ }
+ } else {
+ /* During initialization the loop count in all descriptors is set
+ * to zero; HW sets it to 1 when it completes the descriptor update
+ * in the first loop, and increments it by 1 on each subsequent
+ * loop (the loop count wraps around after reaching 0xffff). The
+ * 'loop_cnt' in the SW ring state is the loop count expected in
+ * descriptors updated by HW (to be processed by SW).
+ */
+ srng->u.dst_ring.loop_cnt = 1;
+ srng->u.dst_ring.tp = 0;
+ srng->u.dst_ring.cached_hp = 0;
+ srng->u.dst_ring.hp_addr = (void *)(hal->rdp.vaddr + ring_id);
+ if (srng_config->mac_type == ATH12K_HAL_SRNG_UMAC) {
+ if (!ab->hw_params->supports_shadow_regs)
+ srng->u.dst_ring.tp_addr =
+ (u32 *)((unsigned long)ab->mem + reg_base +
+ (HAL_REO1_RING_TP - HAL_REO1_RING_HP));
+ else
+ ath12k_dbg(ab, ATH12K_DBG_HAL,
+ "type %d ring_num %d target_reg 0x%x shadow 0x%lx\n",
+ type, ring_num,
+ reg_base + HAL_REO1_RING_TP - HAL_REO1_RING_HP,
+ (unsigned long)srng->u.dst_ring.tp_addr -
+ (unsigned long)ab->mem);
+ } else {
+ /* For PMAC & DMAC rings, tail pointer updates will be done
+ * through FW by writing to a shared memory location
+ */
+ idx = ring_id - HAL_SRNG_RING_ID_DMAC_CMN_ID_START;
+ srng->u.dst_ring.tp_addr = (void *)(hal->wrp.vaddr +
+ idx);
+ srng->flags |= HAL_SRNG_FLAGS_LMAC_RING;
+ }
+ }
+
+ if (srng_config->mac_type != ATH12K_HAL_SRNG_UMAC)
+ return ring_id;
+
+ ath12k_hal_srng_hw_init(ab, srng);
+
+ if (type == HAL_CE_DST) {
+ srng->u.dst_ring.max_buffer_length = params->max_buffer_len;
+ ath12k_hal_ce_dst_setup(ab, srng, ring_num);
+ }
+
+ return ring_id;
+}
+
+static void ath12k_hal_srng_update_hp_tp_addr(struct ath12k_base *ab,
+ int shadow_cfg_idx,
+ enum hal_ring_type ring_type,
+ int ring_num)
+{
+ struct hal_srng *srng;
+ struct ath12k_hal *hal = &ab->hal;
+ int ring_id;
+ struct hal_srng_config *srng_config = &hal->srng_config[ring_type];
+
+ ring_id = ath12k_hal_srng_get_ring_id(ab, ring_type, ring_num, 0);
+ if (ring_id < 0)
+ return;
+
+ srng = &hal->srng_list[ring_id];
+
+ if (srng_config->ring_dir == HAL_SRNG_DIR_DST)
+ srng->u.dst_ring.tp_addr = (u32 *)(HAL_SHADOW_REG(shadow_cfg_idx) +
+ (unsigned long)ab->mem);
+ else
+ srng->u.src_ring.hp_addr = (u32 *)(HAL_SHADOW_REG(shadow_cfg_idx) +
+ (unsigned long)ab->mem);
+}
+
+int ath12k_hal_srng_update_shadow_config(struct ath12k_base *ab,
+ enum hal_ring_type ring_type,
+ int ring_num)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ struct hal_srng_config *srng_config = &hal->srng_config[ring_type];
+ int shadow_cfg_idx = hal->num_shadow_reg_configured;
+ u32 target_reg;
+
+ if (shadow_cfg_idx >= HAL_SHADOW_NUM_REGS)
+ return -EINVAL;
+
+ hal->num_shadow_reg_configured++;
+
+ target_reg = srng_config->reg_start[HAL_HP_OFFSET_IN_REG_START];
+ target_reg += srng_config->reg_size[HAL_HP_OFFSET_IN_REG_START] *
+ ring_num;
+
+ /* For destination ring, shadow the TP */
+ if (srng_config->ring_dir == HAL_SRNG_DIR_DST)
+ target_reg += HAL_OFFSET_FROM_HP_TO_TP;
+
+ hal->shadow_reg_addr[shadow_cfg_idx] = target_reg;
+
+ /* update hp/tp addr in the hal structure */
+ ath12k_hal_srng_update_hp_tp_addr(ab, shadow_cfg_idx, ring_type,
+ ring_num);
+
+ ath12k_dbg(ab, ATH12K_DBG_HAL,
+ "target_reg 0x%x, shadow reg 0x%x shadow_idx 0x%x, ring_type %d, ring num %d",
+ target_reg,
+ HAL_SHADOW_REG(shadow_cfg_idx),
+ shadow_cfg_idx,
+ ring_type, ring_num);
+
+ return 0;
+}
+
+void ath12k_hal_srng_shadow_config(struct ath12k_base *ab)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ int ring_type, ring_num;
+
+ /* update all the non-CE srngs. */
+ for (ring_type = 0; ring_type < HAL_MAX_RING_TYPES; ring_type++) {
+ struct hal_srng_config *srng_config = &hal->srng_config[ring_type];
+
+ if (ring_type == HAL_CE_SRC ||
+ ring_type == HAL_CE_DST ||
+ ring_type == HAL_CE_DST_STATUS)
+ continue;
+
+ if (srng_config->mac_type == ATH12K_HAL_SRNG_DMAC ||
+ srng_config->mac_type == ATH12K_HAL_SRNG_PMAC)
+ continue;
+
+ for (ring_num = 0; ring_num < srng_config->max_rings; ring_num++)
+ ath12k_hal_srng_update_shadow_config(ab, ring_type, ring_num);
+ }
+}
+
+void ath12k_hal_srng_get_shadow_config(struct ath12k_base *ab,
+ u32 **cfg, u32 *len)
+{
+ struct ath12k_hal *hal = &ab->hal;
+
+ *len = hal->num_shadow_reg_configured;
+ *cfg = hal->shadow_reg_addr;
+}
+
+void ath12k_hal_srng_shadow_update_hp_tp(struct ath12k_base *ab,
+ struct hal_srng *srng)
+{
+ lockdep_assert_held(&srng->lock);
+
+ /* Check whether the ring is empty; update the shadow
+ * HP only when the ring isn't empty.
+ */
+ if (srng->ring_dir == HAL_SRNG_DIR_SRC &&
+ *srng->u.src_ring.tp_addr != srng->u.src_ring.hp)
+ ath12k_hal_srng_access_end(ab, srng);
+}
+
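+/* Give every SRNG lock its own lockdep class so that holding two
+ * different rings' locks at once is not flagged as recursive locking.
+ */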
+static void ath12k_hal_register_srng_lock_keys(struct ath12k_base *ab)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ u32 ring_id;
+
+ for (ring_id = 0; ring_id < HAL_SRNG_RING_ID_MAX; ring_id++)
+ lockdep_register_key(&hal->srng_list[ring_id].lock_key);
+}
+
+static void ath12k_hal_unregister_srng_lock_keys(struct ath12k_base *ab)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ u32 ring_id;
+
+ for (ring_id = 0; ring_id < HAL_SRNG_RING_ID_MAX; ring_id++)
+ lockdep_unregister_key(&hal->srng_list[ring_id].lock_key);
+}
+
+int ath12k_hal_srng_init(struct ath12k_base *ab)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ int ret;
+
+ memset(hal, 0, sizeof(*hal));
+
+ ret = ab->hw_params->hal_ops->create_srng_config(ab);
+ if (ret)
+ goto err_hal;
+
+ ret = ath12k_hal_alloc_cont_rdp(ab);
+ if (ret)
+ goto err_hal;
+
+ ret = ath12k_hal_alloc_cont_wrp(ab);
+ if (ret)
+ goto err_free_cont_rdp;
+
+ ath12k_hal_register_srng_lock_keys(ab);
+
+ return 0;
+
+err_free_cont_rdp:
+ ath12k_hal_free_cont_rdp(ab);
+
+err_hal:
+ return ret;
+}
+
+void ath12k_hal_srng_deinit(struct ath12k_base *ab)
+{
+ struct ath12k_hal *hal = &ab->hal;
+
+ ath12k_hal_unregister_srng_lock_keys(ab);
+ ath12k_hal_free_cont_rdp(ab);
+ ath12k_hal_free_cont_wrp(ab);
+ kfree(hal->srng_config);
+ hal->srng_config = NULL;
+}
+
+void ath12k_hal_dump_srng_stats(struct ath12k_base *ab)
+{
+ struct hal_srng *srng;
+ struct ath12k_ext_irq_grp *irq_grp;
+ struct ath12k_ce_pipe *ce_pipe;
+ int i;
+
+ ath12k_err(ab, "Last interrupt received for each CE:\n");
+ for (i = 0; i < ab->hw_params->ce_count; i++) {
+ ce_pipe = &ab->ce.ce_pipe[i];
+
+ if (ath12k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
+ continue;
+
+ ath12k_err(ab, "CE_id %d pipe_num %d %ums before\n",
+ i, ce_pipe->pipe_num,
+ jiffies_to_msecs(jiffies - ce_pipe->timestamp));
+ }
+
+ ath12k_err(ab, "\nLast interrupt received for each group:\n");
+ for (i = 0; i < ATH12K_EXT_IRQ_GRP_NUM_MAX; i++) {
+ irq_grp = &ab->ext_irq_grp[i];
+ ath12k_err(ab, "group_id %d %ums before\n",
+ irq_grp->grp_id,
+ jiffies_to_msecs(jiffies - irq_grp->timestamp));
+ }
+
+ for (i = 0; i < HAL_SRNG_RING_ID_MAX; i++) {
+ srng = &ab->hal.srng_list[i];
+
+ if (!srng->initialized)
+ continue;
+
+ if (srng->ring_dir == HAL_SRNG_DIR_SRC)
+ ath12k_err(ab,
+ "src srng id %u hp %u, reap_hp %u, cur tp %u, cached tp %u last tp %u napi processed before %ums\n",
+ srng->ring_id, srng->u.src_ring.hp,
+ srng->u.src_ring.reap_hp,
+ *srng->u.src_ring.tp_addr, srng->u.src_ring.cached_tp,
+ srng->u.src_ring.last_tp,
+ jiffies_to_msecs(jiffies - srng->timestamp));
+ else if (srng->ring_dir == HAL_SRNG_DIR_DST)
+ ath12k_err(ab,
+ "dst srng id %u tp %u, cur hp %u, cached hp %u last hp %u napi processed before %ums\n",
+ srng->ring_id, srng->u.dst_ring.tp,
+ *srng->u.dst_ring.hp_addr,
+ srng->u.dst_ring.cached_hp,
+ srng->u.dst_ring.last_hp,
+ jiffies_to_msecs(jiffies - srng->timestamp));
+ }
+}
diff --git a/drivers/net/wireless/ath/ath12k/hal.h b/drivers/net/wireless/ath/ath12k/hal.h
new file mode 100644
index 000000000000..dfbd8bce70e5
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/hal.h
@@ -0,0 +1,1142 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_HAL_H
+#define ATH12K_HAL_H
+
+#include "hal_desc.h"
+#include "rx_desc.h"
+
+struct ath12k_base;
+
+#define HAL_LINK_DESC_SIZE (32 << 2)
+#define HAL_LINK_DESC_ALIGN 128
+#define HAL_NUM_MPDUS_PER_LINK_DESC 6
+#define HAL_NUM_TX_MSDUS_PER_LINK_DESC 7
+#define HAL_NUM_RX_MSDUS_PER_LINK_DESC 6
+#define HAL_NUM_MPDU_LINKS_PER_QUEUE_DESC 12
+#define HAL_MAX_AVAIL_BLK_RES 3
+
+#define HAL_RING_BASE_ALIGN 8
+
+#define HAL_WBM_IDLE_SCATTER_BUF_SIZE_MAX 32704
+/* TODO: Check with hw team on the supported scatter buf size */
+#define HAL_WBM_IDLE_SCATTER_NEXT_PTR_SIZE 8
+#define HAL_WBM_IDLE_SCATTER_BUF_SIZE (HAL_WBM_IDLE_SCATTER_BUF_SIZE_MAX - \
+ HAL_WBM_IDLE_SCATTER_NEXT_PTR_SIZE)
+
+/* TODO: 16 entries per radio times MAX_VAPS_SUPPORTED */
+#define HAL_DSCP_TID_MAP_TBL_NUM_ENTRIES_MAX 32
+#define HAL_DSCP_TID_TBL_SIZE 24
+
+/* calculate the address of shadow register x relative to BAR0 */
+#define HAL_SHADOW_BASE_ADDR 0x000008fc
+#define HAL_SHADOW_NUM_REGS 40
+#define HAL_HP_OFFSET_IN_REG_START 1
+#define HAL_OFFSET_FROM_HP_TO_TP 4
+
+#define HAL_SHADOW_REG(x) (HAL_SHADOW_BASE_ADDR + (4 * (x)))
+
+/* WCSS Relative address */
+#define HAL_SEQ_WCSS_UMAC_OFFSET 0x00a00000
+#define HAL_SEQ_WCSS_UMAC_REO_REG 0x00a38000
+#define HAL_SEQ_WCSS_UMAC_TCL_REG 0x00a44000
+#define HAL_SEQ_WCSS_UMAC_CE0_SRC_REG 0x01b80000
+#define HAL_SEQ_WCSS_UMAC_CE0_DST_REG 0x01b81000
+#define HAL_SEQ_WCSS_UMAC_CE1_SRC_REG 0x01b82000
+#define HAL_SEQ_WCSS_UMAC_CE1_DST_REG 0x01b83000
+#define HAL_SEQ_WCSS_UMAC_WBM_REG 0x00a34000
+
+#define HAL_CE_WFSS_CE_REG_BASE 0x01b80000
+
+#define HAL_TCL_SW_CONFIG_BANK_ADDR 0x00a4408c
+
+/* SW2TCL(x) R0 ring configuration address */
+#define HAL_TCL1_RING_CMN_CTRL_REG 0x00000020
+#define HAL_TCL1_RING_DSCP_TID_MAP 0x00000240
+#define HAL_TCL1_RING_BASE_LSB 0x00000900
+#define HAL_TCL1_RING_BASE_MSB 0x00000904
+#define HAL_TCL1_RING_ID(ab) ((ab)->hw_params->regs->hal_tcl1_ring_id)
+#define HAL_TCL1_RING_MISC(ab) \
+ ((ab)->hw_params->regs->hal_tcl1_ring_misc)
+#define HAL_TCL1_RING_TP_ADDR_LSB(ab) \
+ ((ab)->hw_params->regs->hal_tcl1_ring_tp_addr_lsb)
+#define HAL_TCL1_RING_TP_ADDR_MSB(ab) \
+ ((ab)->hw_params->regs->hal_tcl1_ring_tp_addr_msb)
+#define HAL_TCL1_RING_CONSUMER_INT_SETUP_IX0(ab) \
+ ((ab)->hw_params->regs->hal_tcl1_ring_consumer_int_setup_ix0)
+#define HAL_TCL1_RING_CONSUMER_INT_SETUP_IX1(ab) \
+ ((ab)->hw_params->regs->hal_tcl1_ring_consumer_int_setup_ix1)
+#define HAL_TCL1_RING_MSI1_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_tcl1_ring_msi1_base_lsb)
+#define HAL_TCL1_RING_MSI1_BASE_MSB(ab) \
+ ((ab)->hw_params->regs->hal_tcl1_ring_msi1_base_msb)
+#define HAL_TCL1_RING_MSI1_DATA(ab) \
+ ((ab)->hw_params->regs->hal_tcl1_ring_msi1_data)
+#define HAL_TCL2_RING_BASE_LSB 0x00000978
+#define HAL_TCL_RING_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_tcl_ring_base_lsb)
+
+#define HAL_TCL1_RING_MSI1_BASE_LSB_OFFSET(ab) \
+ (HAL_TCL1_RING_MSI1_BASE_LSB(ab) - HAL_TCL1_RING_BASE_LSB)
+#define HAL_TCL1_RING_MSI1_BASE_MSB_OFFSET(ab) \
+ (HAL_TCL1_RING_MSI1_BASE_MSB(ab) - HAL_TCL1_RING_BASE_LSB)
+#define HAL_TCL1_RING_MSI1_DATA_OFFSET(ab) \
+ (HAL_TCL1_RING_MSI1_DATA(ab) - HAL_TCL1_RING_BASE_LSB)
+#define HAL_TCL1_RING_BASE_MSB_OFFSET \
+ (HAL_TCL1_RING_BASE_MSB - HAL_TCL1_RING_BASE_LSB)
+#define HAL_TCL1_RING_ID_OFFSET(ab) \
+ (HAL_TCL1_RING_ID(ab) - HAL_TCL1_RING_BASE_LSB)
+#define HAL_TCL1_RING_CONSR_INT_SETUP_IX0_OFFSET(ab) \
+ (HAL_TCL1_RING_CONSUMER_INT_SETUP_IX0(ab) - HAL_TCL1_RING_BASE_LSB)
+#define HAL_TCL1_RING_CONSR_INT_SETUP_IX1_OFFSET(ab) \
+ (HAL_TCL1_RING_CONSUMER_INT_SETUP_IX1(ab) - HAL_TCL1_RING_BASE_LSB)
+#define HAL_TCL1_RING_TP_ADDR_LSB_OFFSET(ab) \
+ (HAL_TCL1_RING_TP_ADDR_LSB(ab) - HAL_TCL1_RING_BASE_LSB)
+#define HAL_TCL1_RING_TP_ADDR_MSB_OFFSET(ab) \
+ (HAL_TCL1_RING_TP_ADDR_MSB(ab) - HAL_TCL1_RING_BASE_LSB)
+#define HAL_TCL1_RING_MISC_OFFSET(ab) \
+ (HAL_TCL1_RING_MISC(ab) - HAL_TCL1_RING_BASE_LSB)
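+
+/* The *_OFFSET macros above are relative to the TCL1 ring base. A sketch
+ * (not part of this patch) of how the same field of another SW2TCL ring
+ * would be addressed:
+ *
+ *   addr = HAL_TCL_RING_BASE_LSB(ab) +
+ *          HAL_TCL1_RING_MSI1_BASE_LSB_OFFSET(ab);
+ */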
+
+/* SW2TCL(x) R2 ring pointers (head/tail) address */
+#define HAL_TCL1_RING_HP 0x00002000
+#define HAL_TCL1_RING_TP 0x00002004
+#define HAL_TCL2_RING_HP 0x00002008
+#define HAL_TCL_RING_HP 0x00002028
+
+#define HAL_TCL1_RING_TP_OFFSET \
+ (HAL_TCL1_RING_TP - HAL_TCL1_RING_HP)
+
+/* TCL STATUS ring address */
+#define HAL_TCL_STATUS_RING_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_tcl_status_ring_base_lsb)
+#define HAL_TCL_STATUS_RING_HP 0x00002048
+
+/* PPE2TCL1 Ring address */
+#define HAL_TCL_PPE2TCL1_RING_BASE_LSB 0x00000c48
+#define HAL_TCL_PPE2TCL1_RING_HP 0x00002038
+
+/* WBM PPE Release Ring address */
+#define HAL_WBM_PPE_RELEASE_RING_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_ppe_rel_ring_base)
+#define HAL_WBM_PPE_RELEASE_RING_HP 0x00003020
+
+/* REO2SW(x) R0 ring configuration address */
+#define HAL_REO1_GEN_ENABLE 0x00000000
+#define HAL_REO1_MISC_CTRL_ADDR(ab) \
+ ((ab)->hw_params->regs->hal_reo1_misc_ctrl_addr)
+#define HAL_REO1_DEST_RING_CTRL_IX_0 0x00000004
+#define HAL_REO1_DEST_RING_CTRL_IX_1 0x00000008
+#define HAL_REO1_DEST_RING_CTRL_IX_2 0x0000000c
+#define HAL_REO1_DEST_RING_CTRL_IX_3 0x00000010
+#define HAL_REO1_SW_COOKIE_CFG0(ab) ((ab)->hw_params->regs->hal_reo1_sw_cookie_cfg0)
+#define HAL_REO1_SW_COOKIE_CFG1(ab) ((ab)->hw_params->regs->hal_reo1_sw_cookie_cfg1)
+#define HAL_REO1_QDESC_LUT_BASE0(ab) ((ab)->hw_params->regs->hal_reo1_qdesc_lut_base0)
+#define HAL_REO1_QDESC_LUT_BASE1(ab) ((ab)->hw_params->regs->hal_reo1_qdesc_lut_base1)
+#define HAL_REO1_RING_BASE_LSB(ab) ((ab)->hw_params->regs->hal_reo1_ring_base_lsb)
+#define HAL_REO1_RING_BASE_MSB(ab) ((ab)->hw_params->regs->hal_reo1_ring_base_msb)
+#define HAL_REO1_RING_ID(ab) ((ab)->hw_params->regs->hal_reo1_ring_id)
+#define HAL_REO1_RING_MISC(ab) ((ab)->hw_params->regs->hal_reo1_ring_misc)
+#define HAL_REO1_RING_HP_ADDR_LSB(ab) ((ab)->hw_params->regs->hal_reo1_ring_hp_addr_lsb)
+#define HAL_REO1_RING_HP_ADDR_MSB(ab) ((ab)->hw_params->regs->hal_reo1_ring_hp_addr_msb)
+#define HAL_REO1_RING_PRODUCER_INT_SETUP(ab) \
+ ((ab)->hw_params->regs->hal_reo1_ring_producer_int_setup)
+#define HAL_REO1_RING_MSI1_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_reo1_ring_msi1_base_lsb)
+#define HAL_REO1_RING_MSI1_BASE_MSB(ab) \
+ ((ab)->hw_params->regs->hal_reo1_ring_msi1_base_msb)
+#define HAL_REO1_RING_MSI1_DATA(ab) ((ab)->hw_params->regs->hal_reo1_ring_msi1_data)
+#define HAL_REO2_RING_BASE_LSB(ab) ((ab)->hw_params->regs->hal_reo2_ring_base)
+#define HAL_REO1_AGING_THRESH_IX_0(ab) ((ab)->hw_params->regs->hal_reo1_aging_thres_ix0)
+#define HAL_REO1_AGING_THRESH_IX_1(ab) ((ab)->hw_params->regs->hal_reo1_aging_thres_ix1)
+#define HAL_REO1_AGING_THRESH_IX_2(ab) ((ab)->hw_params->regs->hal_reo1_aging_thres_ix2)
+#define HAL_REO1_AGING_THRESH_IX_3(ab) ((ab)->hw_params->regs->hal_reo1_aging_thres_ix3)
+
+/* REO2SW(x) R2 ring pointers (head/tail) address */
+#define HAL_REO1_RING_HP 0x00003048
+#define HAL_REO1_RING_TP 0x0000304c
+#define HAL_REO2_RING_HP 0x00003050
+
+#define HAL_REO1_RING_TP_OFFSET (HAL_REO1_RING_TP - HAL_REO1_RING_HP)
+
+/* REO2SW0 ring configuration address */
+#define HAL_REO_SW0_RING_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_reo2_sw0_ring_base)
+
+/* REO2SW0 R2 ring pointer (head/tail) address */
+#define HAL_REO_SW0_RING_HP 0x00003088
+
+/* REO CMD R0 address */
+#define HAL_REO_CMD_RING_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_reo_cmd_ring_base)
+
+/* REO CMD R2 address */
+#define HAL_REO_CMD_HP 0x00003020
+
+/* SW2REO R0 address */
+#define HAL_SW2REO_RING_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_sw2reo_ring_base)
+#define HAL_SW2REO1_RING_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_sw2reo1_ring_base)
+
+/* SW2REO R2 address */
+#define HAL_SW2REO_RING_HP 0x00003028
+#define HAL_SW2REO1_RING_HP 0x00003030
+
+/* CE ring R0 address */
+#define HAL_CE_SRC_RING_BASE_LSB 0x00000000
+#define HAL_CE_DST_RING_BASE_LSB 0x00000000
+#define HAL_CE_DST_STATUS_RING_BASE_LSB 0x00000058
+#define HAL_CE_DST_RING_CTRL 0x000000b0
+
+/* CE ring R2 address */
+#define HAL_CE_DST_RING_HP 0x00000400
+#define HAL_CE_DST_STATUS_RING_HP 0x00000408
+
+/* REO status address */
+#define HAL_REO_STATUS_RING_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_reo_status_ring_base)
+#define HAL_REO_STATUS_HP 0x000030a8
+
+/* WBM Idle R0 address */
+#define HAL_WBM_IDLE_LINK_RING_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_wbm_idle_ring_base_lsb)
+#define HAL_WBM_IDLE_LINK_RING_MISC_ADDR(ab) \
+ ((ab)->hw_params->regs->hal_wbm_idle_ring_misc_addr)
+#define HAL_WBM_R0_IDLE_LIST_CONTROL_ADDR(ab) \
+ ((ab)->hw_params->regs->hal_wbm_r0_idle_list_cntl_addr)
+#define HAL_WBM_R0_IDLE_LIST_SIZE_ADDR(ab) \
+ ((ab)->hw_params->regs->hal_wbm_r0_idle_list_size_addr)
+#define HAL_WBM_SCATTERED_RING_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_wbm_scattered_ring_base_lsb)
+#define HAL_WBM_SCATTERED_RING_BASE_MSB(ab) \
+ ((ab)->hw_params->regs->hal_wbm_scattered_ring_base_msb)
+#define HAL_WBM_SCATTERED_DESC_PTR_HEAD_INFO_IX0(ab) \
+ ((ab)->hw_params->regs->hal_wbm_scattered_desc_head_info_ix0)
+#define HAL_WBM_SCATTERED_DESC_PTR_HEAD_INFO_IX1(ab) \
+ ((ab)->hw_params->regs->hal_wbm_scattered_desc_head_info_ix1)
+#define HAL_WBM_SCATTERED_DESC_PTR_TAIL_INFO_IX0(ab) \
+ ((ab)->hw_params->regs->hal_wbm_scattered_desc_tail_info_ix0)
+#define HAL_WBM_SCATTERED_DESC_PTR_TAIL_INFO_IX1(ab) \
+ ((ab)->hw_params->regs->hal_wbm_scattered_desc_tail_info_ix1)
+#define HAL_WBM_SCATTERED_DESC_PTR_HP_ADDR(ab) \
+ ((ab)->hw_params->regs->hal_wbm_scattered_desc_ptr_hp_addr)
+
+/* WBM Idle R2 address */
+#define HAL_WBM_IDLE_LINK_RING_HP 0x000030b8
+
+/* SW2WBM R0 release address */
+#define HAL_WBM_SW_RELEASE_RING_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_wbm_sw_release_ring_base_lsb)
+#define HAL_WBM_SW1_RELEASE_RING_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_wbm_sw1_release_ring_base_lsb)
+
+/* SW2WBM R2 release address */
+#define HAL_WBM_SW_RELEASE_RING_HP 0x00003010
+#define HAL_WBM_SW1_RELEASE_RING_HP 0x00003018
+
+/* WBM2SW R0 release address */
+#define HAL_WBM0_RELEASE_RING_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_wbm0_release_ring_base_lsb)
+
+#define HAL_WBM1_RELEASE_RING_BASE_LSB(ab) \
+ ((ab)->hw_params->regs->hal_wbm1_release_ring_base_lsb)
+
+/* WBM2SW R2 release address */
+#define HAL_WBM0_RELEASE_RING_HP 0x000030c8
+#define HAL_WBM1_RELEASE_RING_HP 0x000030d0
+
+/* WBM cookie config address and mask */
+#define HAL_WBM_SW_COOKIE_CFG0 0x00000040
+#define HAL_WBM_SW_COOKIE_CFG1 0x00000044
+#define HAL_WBM_SW_COOKIE_CFG2 0x00000090
+#define HAL_WBM_SW_COOKIE_CONVERT_CFG 0x00000094
+
+#define HAL_WBM_SW_COOKIE_CFG_CMEM_BASE_ADDR_MSB GENMASK(7, 0)
+#define HAL_WBM_SW_COOKIE_CFG_COOKIE_PPT_MSB GENMASK(12, 8)
+#define HAL_WBM_SW_COOKIE_CFG_COOKIE_SPT_MSB GENMASK(17, 13)
+#define HAL_WBM_SW_COOKIE_CFG_ALIGN BIT(18)
+#define HAL_WBM_SW_COOKIE_CFG_RELEASE_PATH_EN BIT(0)
+#define HAL_WBM_SW_COOKIE_CFG_ERR_PATH_EN BIT(1)
+#define HAL_WBM_SW_COOKIE_CFG_CONV_IND_EN BIT(3)
+
+#define HAL_WBM_SW_COOKIE_CONV_CFG_WBM2SW0_EN BIT(1)
+#define HAL_WBM_SW_COOKIE_CONV_CFG_WBM2SW1_EN BIT(2)
+#define HAL_WBM_SW_COOKIE_CONV_CFG_WBM2SW2_EN BIT(3)
+#define HAL_WBM_SW_COOKIE_CONV_CFG_WBM2SW3_EN BIT(4)
+#define HAL_WBM_SW_COOKIE_CONV_CFG_WBM2SW4_EN BIT(5)
+#define HAL_WBM_SW_COOKIE_CONV_CFG_GLOBAL_EN BIT(8)
+
+/* TCL ring field mask and offset */
+#define HAL_TCL1_RING_BASE_MSB_RING_SIZE GENMASK(27, 8)
+#define HAL_TCL1_RING_BASE_MSB_RING_BASE_ADDR_MSB GENMASK(7, 0)
+#define HAL_TCL1_RING_ID_ENTRY_SIZE GENMASK(7, 0)
+#define HAL_TCL1_RING_MISC_MSI_RING_ID_DISABLE BIT(0)
+#define HAL_TCL1_RING_MISC_MSI_LOOPCNT_DISABLE BIT(1)
+#define HAL_TCL1_RING_MISC_MSI_SWAP BIT(3)
+#define HAL_TCL1_RING_MISC_HOST_FW_SWAP BIT(4)
+#define HAL_TCL1_RING_MISC_DATA_TLV_SWAP BIT(5)
+#define HAL_TCL1_RING_MISC_SRNG_ENABLE BIT(6)
+#define HAL_TCL1_RING_CONSR_INT_SETUP_IX0_INTR_TMR_THOLD GENMASK(31, 16)
+#define HAL_TCL1_RING_CONSR_INT_SETUP_IX0_BATCH_COUNTER_THOLD GENMASK(14, 0)
+#define HAL_TCL1_RING_CONSR_INT_SETUP_IX1_LOW_THOLD GENMASK(15, 0)
+#define HAL_TCL1_RING_MSI1_BASE_MSB_MSI1_ENABLE BIT(8)
+#define HAL_TCL1_RING_MSI1_BASE_MSB_ADDR GENMASK(7, 0)
+#define HAL_TCL1_RING_CMN_CTRL_DSCP_TID_MAP_PROG_EN BIT(23)
+#define HAL_TCL1_RING_FIELD_DSCP_TID_MAP GENMASK(31, 0)
+#define HAL_TCL1_RING_FIELD_DSCP_TID_MAP0 GENMASK(2, 0)
+#define HAL_TCL1_RING_FIELD_DSCP_TID_MAP1 GENMASK(5, 3)
+#define HAL_TCL1_RING_FIELD_DSCP_TID_MAP2 GENMASK(8, 6)
+#define HAL_TCL1_RING_FIELD_DSCP_TID_MAP3 GENMASK(11, 9)
+#define HAL_TCL1_RING_FIELD_DSCP_TID_MAP4 GENMASK(14, 12)
+#define HAL_TCL1_RING_FIELD_DSCP_TID_MAP5 GENMASK(17, 15)
+#define HAL_TCL1_RING_FIELD_DSCP_TID_MAP6 GENMASK(20, 18)
+#define HAL_TCL1_RING_FIELD_DSCP_TID_MAP7 GENMASK(23, 21)
+
+/* REO ring field mask and offset */
+#define HAL_REO1_RING_BASE_MSB_RING_SIZE GENMASK(27, 8)
+#define HAL_REO1_RING_BASE_MSB_RING_BASE_ADDR_MSB GENMASK(7, 0)
+#define HAL_REO1_RING_ID_RING_ID GENMASK(15, 8)
+#define HAL_REO1_RING_ID_ENTRY_SIZE GENMASK(7, 0)
+#define HAL_REO1_RING_MISC_MSI_SWAP BIT(3)
+#define HAL_REO1_RING_MISC_HOST_FW_SWAP BIT(4)
+#define HAL_REO1_RING_MISC_DATA_TLV_SWAP BIT(5)
+#define HAL_REO1_RING_MISC_SRNG_ENABLE BIT(6)
+#define HAL_REO1_RING_PRDR_INT_SETUP_INTR_TMR_THOLD GENMASK(31, 16)
+#define HAL_REO1_RING_PRDR_INT_SETUP_BATCH_COUNTER_THOLD GENMASK(14, 0)
+#define HAL_REO1_RING_MSI1_BASE_MSB_MSI1_ENABLE BIT(8)
+#define HAL_REO1_RING_MSI1_BASE_MSB_ADDR GENMASK(7, 0)
+#define HAL_REO1_MISC_CTL_FRAG_DST_RING GENMASK(20, 17)
+#define HAL_REO1_MISC_CTL_BAR_DST_RING GENMASK(24, 21)
+#define HAL_REO1_GEN_ENABLE_AGING_LIST_ENABLE BIT(2)
+#define HAL_REO1_GEN_ENABLE_AGING_FLUSH_ENABLE BIT(3)
+#define HAL_REO1_SW_COOKIE_CFG_CMEM_BASE_ADDR_MSB GENMASK(7, 0)
+#define HAL_REO1_SW_COOKIE_CFG_COOKIE_PPT_MSB GENMASK(12, 8)
+#define HAL_REO1_SW_COOKIE_CFG_COOKIE_SPT_MSB GENMASK(17, 13)
+#define HAL_REO1_SW_COOKIE_CFG_ALIGN BIT(18)
+#define HAL_REO1_SW_COOKIE_CFG_ENABLE BIT(19)
+#define HAL_REO1_SW_COOKIE_CFG_GLOBAL_ENABLE BIT(20)
+
+/* CE ring bit field mask and shift */
+#define HAL_CE_DST_R0_DEST_CTRL_MAX_LEN GENMASK(15, 0)
+
+#define HAL_ADDR_LSB_REG_MASK 0xffffffff
+
+#define HAL_ADDR_MSB_REG_SHIFT 32
+
+/* WBM ring bit field mask and shift */
+#define HAL_WBM_LINK_DESC_IDLE_LIST_MODE BIT(1)
+#define HAL_WBM_SCATTER_BUFFER_SIZE GENMASK(10, 2)
+#define HAL_WBM_SCATTER_RING_SIZE_OF_IDLE_LINK_DESC_LIST GENMASK(31, 16)
+#define HAL_WBM_SCATTERED_DESC_MSB_BASE_ADDR_39_32 GENMASK(7, 0)
+#define HAL_WBM_SCATTERED_DESC_MSB_BASE_ADDR_MATCH_TAG GENMASK(31, 8)
+
+#define HAL_WBM_SCATTERED_DESC_HEAD_P_OFFSET_IX1 GENMASK(20, 8)
+#define HAL_WBM_SCATTERED_DESC_TAIL_P_OFFSET_IX1 GENMASK(20, 8)
+
+#define HAL_WBM_IDLE_LINK_RING_MISC_SRNG_ENABLE BIT(6)
+#define HAL_WBM_IDLE_LINK_RING_MISC_RIND_ID_DISABLE BIT(0)
+
+#define BASE_ADDR_MATCH_TAG_VAL 0x5
+
+#define HAL_REO_REO2SW1_RING_BASE_MSB_RING_SIZE 0x000fffff
+#define HAL_REO_REO2SW0_RING_BASE_MSB_RING_SIZE 0x000fffff
+#define HAL_REO_SW2REO_RING_BASE_MSB_RING_SIZE 0x0000ffff
+#define HAL_REO_CMD_RING_BASE_MSB_RING_SIZE 0x0000ffff
+#define HAL_REO_STATUS_RING_BASE_MSB_RING_SIZE 0x0000ffff
+#define HAL_SW2TCL1_RING_BASE_MSB_RING_SIZE 0x000fffff
+#define HAL_SW2TCL1_CMD_RING_BASE_MSB_RING_SIZE 0x000fffff
+#define HAL_TCL_STATUS_RING_BASE_MSB_RING_SIZE 0x0000ffff
+#define HAL_CE_SRC_RING_BASE_MSB_RING_SIZE 0x0000ffff
+#define HAL_CE_DST_RING_BASE_MSB_RING_SIZE 0x0000ffff
+#define HAL_CE_DST_STATUS_RING_BASE_MSB_RING_SIZE 0x0000ffff
+#define HAL_WBM_IDLE_LINK_RING_BASE_MSB_RING_SIZE 0x000fffff
+#define HAL_SW2WBM_RELEASE_RING_BASE_MSB_RING_SIZE 0x0000ffff
+#define HAL_WBM2SW_RELEASE_RING_BASE_MSB_RING_SIZE 0x000fffff
+#define HAL_RXDMA_RING_MAX_SIZE 0x0000ffff
+#define HAL_RXDMA_RING_MAX_SIZE_BE 0x000fffff
+#define HAL_WBM2PPE_RELEASE_RING_BASE_MSB_RING_SIZE 0x000fffff
+
+#define HAL_WBM2SW_REL_ERR_RING_NUM 3
+/* Add any other errors here and return them in
+ * ath12k_hal_rx_desc_get_err().
+ */
+
+enum hal_srng_ring_id {
+ HAL_SRNG_RING_ID_REO2SW0 = 0,
+ HAL_SRNG_RING_ID_REO2SW1,
+ HAL_SRNG_RING_ID_REO2SW2,
+ HAL_SRNG_RING_ID_REO2SW3,
+ HAL_SRNG_RING_ID_REO2SW4,
+ HAL_SRNG_RING_ID_REO2SW5,
+ HAL_SRNG_RING_ID_REO2SW6,
+ HAL_SRNG_RING_ID_REO2SW7,
+ HAL_SRNG_RING_ID_REO2SW8,
+ HAL_SRNG_RING_ID_REO2TCL,
+ HAL_SRNG_RING_ID_REO2PPE,
+
+ HAL_SRNG_RING_ID_SW2REO = 16,
+ HAL_SRNG_RING_ID_SW2REO1,
+ HAL_SRNG_RING_ID_SW2REO2,
+ HAL_SRNG_RING_ID_SW2REO3,
+
+ HAL_SRNG_RING_ID_REO_CMD,
+ HAL_SRNG_RING_ID_REO_STATUS,
+
+ HAL_SRNG_RING_ID_SW2TCL1 = 24,
+ HAL_SRNG_RING_ID_SW2TCL2,
+ HAL_SRNG_RING_ID_SW2TCL3,
+ HAL_SRNG_RING_ID_SW2TCL4,
+ HAL_SRNG_RING_ID_SW2TCL5,
+ HAL_SRNG_RING_ID_SW2TCL6,
+ HAL_SRNG_RING_ID_PPE2TCL1 = 30,
+
+ HAL_SRNG_RING_ID_SW2TCL_CMD = 40,
+ HAL_SRNG_RING_ID_SW2TCL1_CMD,
+ HAL_SRNG_RING_ID_TCL_STATUS,
+
+ HAL_SRNG_RING_ID_CE0_SRC = 64,
+ HAL_SRNG_RING_ID_CE1_SRC,
+ HAL_SRNG_RING_ID_CE2_SRC,
+ HAL_SRNG_RING_ID_CE3_SRC,
+ HAL_SRNG_RING_ID_CE4_SRC,
+ HAL_SRNG_RING_ID_CE5_SRC,
+ HAL_SRNG_RING_ID_CE6_SRC,
+ HAL_SRNG_RING_ID_CE7_SRC,
+ HAL_SRNG_RING_ID_CE8_SRC,
+ HAL_SRNG_RING_ID_CE9_SRC,
+ HAL_SRNG_RING_ID_CE10_SRC,
+ HAL_SRNG_RING_ID_CE11_SRC,
+ HAL_SRNG_RING_ID_CE12_SRC,
+ HAL_SRNG_RING_ID_CE13_SRC,
+ HAL_SRNG_RING_ID_CE14_SRC,
+ HAL_SRNG_RING_ID_CE15_SRC,
+
+ HAL_SRNG_RING_ID_CE0_DST = 81,
+ HAL_SRNG_RING_ID_CE1_DST,
+ HAL_SRNG_RING_ID_CE2_DST,
+ HAL_SRNG_RING_ID_CE3_DST,
+ HAL_SRNG_RING_ID_CE4_DST,
+ HAL_SRNG_RING_ID_CE5_DST,
+ HAL_SRNG_RING_ID_CE6_DST,
+ HAL_SRNG_RING_ID_CE7_DST,
+ HAL_SRNG_RING_ID_CE8_DST,
+ HAL_SRNG_RING_ID_CE9_DST,
+ HAL_SRNG_RING_ID_CE10_DST,
+ HAL_SRNG_RING_ID_CE11_DST,
+ HAL_SRNG_RING_ID_CE12_DST,
+ HAL_SRNG_RING_ID_CE13_DST,
+ HAL_SRNG_RING_ID_CE14_DST,
+ HAL_SRNG_RING_ID_CE15_DST,
+
+ HAL_SRNG_RING_ID_CE0_DST_STATUS = 100,
+ HAL_SRNG_RING_ID_CE1_DST_STATUS,
+ HAL_SRNG_RING_ID_CE2_DST_STATUS,
+ HAL_SRNG_RING_ID_CE3_DST_STATUS,
+ HAL_SRNG_RING_ID_CE4_DST_STATUS,
+ HAL_SRNG_RING_ID_CE5_DST_STATUS,
+ HAL_SRNG_RING_ID_CE6_DST_STATUS,
+ HAL_SRNG_RING_ID_CE7_DST_STATUS,
+ HAL_SRNG_RING_ID_CE8_DST_STATUS,
+ HAL_SRNG_RING_ID_CE9_DST_STATUS,
+ HAL_SRNG_RING_ID_CE10_DST_STATUS,
+ HAL_SRNG_RING_ID_CE11_DST_STATUS,
+ HAL_SRNG_RING_ID_CE12_DST_STATUS,
+ HAL_SRNG_RING_ID_CE13_DST_STATUS,
+ HAL_SRNG_RING_ID_CE14_DST_STATUS,
+ HAL_SRNG_RING_ID_CE15_DST_STATUS,
+
+ HAL_SRNG_RING_ID_WBM_IDLE_LINK = 120,
+ HAL_SRNG_RING_ID_WBM_SW0_RELEASE,
+ HAL_SRNG_RING_ID_WBM_SW1_RELEASE,
+ HAL_SRNG_RING_ID_WBM_PPE_RELEASE = 123,
+
+ HAL_SRNG_RING_ID_WBM2SW0_RELEASE = 128,
+ HAL_SRNG_RING_ID_WBM2SW1_RELEASE,
+ HAL_SRNG_RING_ID_WBM2SW2_RELEASE,
+ HAL_SRNG_RING_ID_WBM2SW3_RELEASE, /* RX ERROR RING */
+ HAL_SRNG_RING_ID_WBM2SW4_RELEASE,
+ HAL_SRNG_RING_ID_WBM2SW5_RELEASE,
+ HAL_SRNG_RING_ID_WBM2SW6_RELEASE,
+ HAL_SRNG_RING_ID_WBM2SW7_RELEASE,
+
+ HAL_SRNG_RING_ID_UMAC_ID_END = 159,
+
+ /* Common DMAC rings shared by all LMACs */
+ HAL_SRNG_RING_ID_DMAC_CMN_ID_START = 160,
+ HAL_SRNG_SW2RXDMA_BUF0 = HAL_SRNG_RING_ID_DMAC_CMN_ID_START,
+ HAL_SRNG_SW2RXDMA_BUF1 = 161,
+ HAL_SRNG_SW2RXDMA_BUF2 = 162,
+
+ HAL_SRNG_SW2RXMON_BUF0 = 168,
+
+ HAL_SRNG_SW2TXMON_BUF0 = 176,
+
+ HAL_SRNG_RING_ID_DMAC_CMN_ID_END = 183,
+ HAL_SRNG_RING_ID_PMAC1_ID_START = 184,
+
+ HAL_SRNG_RING_ID_WMAC1_SW2RXMON_BUF0 = HAL_SRNG_RING_ID_PMAC1_ID_START,
+
+ HAL_SRNG_RING_ID_WMAC1_RXDMA2SW0,
+ HAL_SRNG_RING_ID_WMAC1_RXDMA2SW1,
+ HAL_SRNG_RING_ID_WMAC1_RXMON2SW0 = HAL_SRNG_RING_ID_WMAC1_RXDMA2SW1,
+ HAL_SRNG_RING_ID_WMAC1_SW2RXDMA1_DESC,
+ HAL_SRNG_RING_ID_RXDMA_DIR_BUF,
+ HAL_SRNG_RING_ID_WMAC1_SW2TXMON_BUF0,
+ HAL_SRNG_RING_ID_WMAC1_TXMON2SW0_BUF0,
+
+ HAL_SRNG_RING_ID_PMAC1_ID_END,
+};
+
+/* SRNG registers are split into two groups R0 and R2 */
+#define HAL_SRNG_REG_GRP_R0 0
+#define HAL_SRNG_REG_GRP_R2 1
+#define HAL_SRNG_NUM_REG_GRP 2
+
+/* TODO: number of PMACs */
+#define HAL_SRNG_NUM_PMACS 3
+#define HAL_SRNG_NUM_DMAC_RINGS (HAL_SRNG_RING_ID_DMAC_CMN_ID_END - \
+ HAL_SRNG_RING_ID_DMAC_CMN_ID_START)
+#define HAL_SRNG_RINGS_PER_PMAC (HAL_SRNG_RING_ID_PMAC1_ID_END - \
+ HAL_SRNG_RING_ID_PMAC1_ID_START)
+#define HAL_SRNG_NUM_PMAC_RINGS (HAL_SRNG_NUM_PMACS * HAL_SRNG_RINGS_PER_PMAC)
+#define HAL_SRNG_RING_ID_MAX (HAL_SRNG_RING_ID_DMAC_CMN_ID_END + \
+ HAL_SRNG_NUM_PMAC_RINGS)
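+
+/* Sketch (for illustration only): the ring IDs of PMAC n (0-based) could
+ * be derived as
+ *
+ *   ring_id = HAL_SRNG_RING_ID_PMAC1_ID_START +
+ *             n * HAL_SRNG_RINGS_PER_PMAC + lmac_ring_offset;
+ *
+ * which keeps HAL_SRNG_RING_ID_MAX as the total size of the ID space.
+ */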
+
+enum hal_ring_type {
+ HAL_REO_DST,
+ HAL_REO_EXCEPTION,
+ HAL_REO_REINJECT,
+ HAL_REO_CMD,
+ HAL_REO_STATUS,
+ HAL_TCL_DATA,
+ HAL_TCL_CMD,
+ HAL_TCL_STATUS,
+ HAL_CE_SRC,
+ HAL_CE_DST,
+ HAL_CE_DST_STATUS,
+ HAL_WBM_IDLE_LINK,
+ HAL_SW2WBM_RELEASE,
+ HAL_WBM2SW_RELEASE,
+ HAL_RXDMA_BUF,
+ HAL_RXDMA_DST,
+ HAL_RXDMA_MONITOR_BUF,
+ HAL_RXDMA_MONITOR_STATUS,
+ HAL_RXDMA_MONITOR_DST,
+ HAL_RXDMA_MONITOR_DESC,
+ HAL_RXDMA_DIR_BUF,
+ HAL_PPE2TCL,
+ HAL_PPE_RELEASE,
+ HAL_TX_MONITOR_BUF,
+ HAL_TX_MONITOR_DST,
+ HAL_MAX_RING_TYPES,
+};
+
+#define HAL_RX_MAX_BA_WINDOW 256
+
+#define HAL_DEFAULT_BE_BK_VI_REO_TIMEOUT_USEC (100 * 1000)
+#define HAL_DEFAULT_VO_REO_TIMEOUT_USEC (40 * 1000)
+
+/**
+ * enum hal_reo_cmd_type - Enum for REO command type
+ * @HAL_REO_CMD_GET_QUEUE_STATS: Get REO queue status/stats
+ * @HAL_REO_CMD_FLUSH_QUEUE: Flush all frames in REO queue
+ * @HAL_REO_CMD_FLUSH_CACHE: Flush descriptor entries in the cache
+ * @HAL_REO_CMD_UNBLOCK_CACHE: Unblock a descriptor's address that was blocked
+ * earlier with a 'REO_FLUSH_CACHE' command
+ * @HAL_REO_CMD_FLUSH_TIMEOUT_LIST: Flush buffers/descriptors from timeout list
+ * @HAL_REO_CMD_UPDATE_RX_QUEUE: Update REO queue settings
+ */
+enum hal_reo_cmd_type {
+ HAL_REO_CMD_GET_QUEUE_STATS = 0,
+ HAL_REO_CMD_FLUSH_QUEUE = 1,
+ HAL_REO_CMD_FLUSH_CACHE = 2,
+ HAL_REO_CMD_UNBLOCK_CACHE = 3,
+ HAL_REO_CMD_FLUSH_TIMEOUT_LIST = 4,
+ HAL_REO_CMD_UPDATE_RX_QUEUE = 5,
+};
+
+/**
+ * enum hal_reo_cmd_status - Enum for execution status of REO command
+ * @HAL_REO_CMD_SUCCESS: Command has successfully executed
+ * @HAL_REO_CMD_BLOCKED: Command could not be executed as the queue
+ * or cache was blocked
+ * @HAL_REO_CMD_FAILED: Command execution failed, could be due to
+ * invalid queue desc
+ * @HAL_REO_CMD_RESOURCE_BLOCKED: Command could not be executed because
+ * a blocking resource was in use
+ * @HAL_REO_CMD_DRAIN: Command was drained without being executed
+ */
+enum hal_reo_cmd_status {
+ HAL_REO_CMD_SUCCESS = 0,
+ HAL_REO_CMD_BLOCKED = 1,
+ HAL_REO_CMD_FAILED = 2,
+ HAL_REO_CMD_RESOURCE_BLOCKED = 3,
+ HAL_REO_CMD_DRAIN = 0xff,
+};
+
+struct hal_wbm_idle_scatter_list {
+ dma_addr_t paddr;
+ struct hal_wbm_link_desc *vaddr;
+};
+
+struct hal_srng_params {
+ dma_addr_t ring_base_paddr;
+ u32 *ring_base_vaddr;
+ int num_entries;
+ u32 intr_batch_cntr_thres_entries;
+ u32 intr_timer_thres_us;
+ u32 flags;
+ u32 max_buffer_len;
+ u32 low_threshold;
+ u32 high_threshold;
+ dma_addr_t msi_addr;
+ dma_addr_t msi2_addr;
+ u32 msi_data;
+ u32 msi2_data;
+
+ /* Add more params as needed */
+};
+
+enum hal_srng_dir {
+ HAL_SRNG_DIR_SRC,
+ HAL_SRNG_DIR_DST
+};
+
+/* srng flags */
+#define HAL_SRNG_FLAGS_MSI_SWAP 0x00000008
+#define HAL_SRNG_FLAGS_RING_PTR_SWAP 0x00000010
+#define HAL_SRNG_FLAGS_DATA_TLV_SWAP 0x00000020
+#define HAL_SRNG_FLAGS_LOW_THRESH_INTR_EN 0x00010000
+#define HAL_SRNG_FLAGS_MSI_INTR 0x00020000
+#define HAL_SRNG_FLAGS_HIGH_THRESH_INTR_EN 0x00080000
+#define HAL_SRNG_FLAGS_LMAC_RING 0x80000000
+
+#define HAL_SRNG_TLV_HDR_TAG GENMASK(9, 1)
+#define HAL_SRNG_TLV_HDR_LEN GENMASK(25, 10)
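+
+/* A consumer would typically decode a TLV header word as, e.g. (sketch):
+ *
+ *   tag = u32_get_bits(tlv_hdr, HAL_SRNG_TLV_HDR_TAG);
+ *   len = u32_get_bits(tlv_hdr, HAL_SRNG_TLV_HDR_LEN);
+ */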
+
+/* Common SRNG ring structure for source and destination rings */
+struct hal_srng {
+ /* Unique SRNG ring ID */
+ u8 ring_id;
+
+ /* Ring initialization done */
+ u8 initialized;
+
+ /* Interrupt/MSI value assigned to this ring */
+ int irq;
+
+ /* Physical base address of the ring */
+ dma_addr_t ring_base_paddr;
+
+ /* Virtual base address of the ring */
+ u32 *ring_base_vaddr;
+
+ /* Number of entries in ring */
+ u32 num_entries;
+
+ /* Ring size */
+ u32 ring_size;
+
+ /* Ring size mask */
+ u32 ring_size_mask;
+
+ /* Size of ring entry */
+ u32 entry_size;
+
+ /* Interrupt timer threshold - in micro seconds */
+ u32 intr_timer_thres_us;
+
+ /* Interrupt batch counter threshold - in number of ring entries */
+ u32 intr_batch_cntr_thres_entries;
+
+ /* MSI Address */
+ dma_addr_t msi_addr;
+
+ /* MSI data */
+ u32 msi_data;
+
+ /* MSI2 Address */
+ dma_addr_t msi2_addr;
+
+ /* MSI2 data */
+ u32 msi2_data;
+
+ /* Misc flags */
+ u32 flags;
+
+ /* Lock for serializing ring index updates */
+ spinlock_t lock;
+
+ struct lock_class_key lock_key;
+
+ /* Start offset of SRNG register groups for this ring
+ * TBD: See if this is required - register address can be derived
+ * from ring ID
+ */
+ u32 hwreg_base[HAL_SRNG_NUM_REG_GRP];
+
+ u64 timestamp;
+
+ /* Source or Destination ring */
+ enum hal_srng_dir ring_dir;
+
+ union {
+ struct {
+ /* SW tail pointer */
+ u32 tp;
+
+ /* Shadow head pointer location to be updated by HW */
+ volatile u32 *hp_addr;
+
+ /* Cached head pointer */
+ u32 cached_hp;
+
+ /* Tail pointer location to be updated by SW - This
+ * will be a register address and need not be
+ * accessed through SW structure
+ */
+ u32 *tp_addr;
+
+ /* Current SW loop cnt */
+ u32 loop_cnt;
+
+ /* max transfer size */
+ u16 max_buffer_length;
+
+ /* head pointer at access end */
+ u32 last_hp;
+ } dst_ring;
+
+ struct {
+ /* SW head pointer */
+ u32 hp;
+
+ /* SW reap head pointer */
+ u32 reap_hp;
+
+ /* Shadow tail pointer location to be updated by HW */
+ u32 *tp_addr;
+
+ /* Cached tail pointer */
+ u32 cached_tp;
+
+ /* Head pointer location to be updated by SW - This
+ * will be a register address and need not be accessed
+ * through SW structure
+ */
+ u32 *hp_addr;
+
+ /* Low threshold - in number of ring entries */
+ u32 low_threshold;
+
+ /* tail pointer at access end */
+ u32 last_tp;
+ } src_ring;
+ } u;
+};
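+
+/* Illustrative only (the real accessors are declared further below): the
+ * fill level of a destination ring follows from the cached pointers, e.g.
+ *
+ *   if (cached_hp >= tp)
+ *           entries = (cached_hp - tp) / entry_size;
+ *   else
+ *           entries = (ring_size - tp + cached_hp) / entry_size;
+ */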
+
+/* Interrupt mitigation - Batch threshold in terms of number of frames */
+#define HAL_SRNG_INT_BATCH_THRESHOLD_TX 256
+#define HAL_SRNG_INT_BATCH_THRESHOLD_RX 128
+#define HAL_SRNG_INT_BATCH_THRESHOLD_OTHER 1
+
+/* Interrupt mitigation - timer threshold in us */
+#define HAL_SRNG_INT_TIMER_THRESHOLD_TX 1000
+#define HAL_SRNG_INT_TIMER_THRESHOLD_RX 500
+#define HAL_SRNG_INT_TIMER_THRESHOLD_OTHER 256
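+
+/* e.g. (sketch): a TX completion ring would be configured with
+ *
+ *   params.intr_batch_cntr_thres_entries = HAL_SRNG_INT_BATCH_THRESHOLD_TX;
+ *   params.intr_timer_thres_us = HAL_SRNG_INT_TIMER_THRESHOLD_TX;
+ *
+ * so the interrupt fires after 256 completions or 1000 us, whichever
+ * comes first.
+ */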
+
+enum hal_srng_mac_type {
+ ATH12K_HAL_SRNG_UMAC,
+ ATH12K_HAL_SRNG_DMAC,
+ ATH12K_HAL_SRNG_PMAC
+};
+
+/* HW SRNG configuration table */
+struct hal_srng_config {
+ int start_ring_id;
+ u16 max_rings;
+ u16 entry_size;
+ u32 reg_start[HAL_SRNG_NUM_REG_GRP];
+ u16 reg_size[HAL_SRNG_NUM_REG_GRP];
+ enum hal_srng_mac_type mac_type;
+ enum hal_srng_dir ring_dir;
+ u32 max_size;
+};
+
+/**
+ * enum hal_rx_buf_return_buf_manager - Return buffer manager (RBM) values
+ *
+ * @HAL_RX_BUF_RBM_WBM_IDLE_BUF_LIST: Buffer returned to WBM idle buffer list
+ * @HAL_RX_BUF_RBM_WBM_CHIP0_IDLE_DESC_LIST: Descriptor returned to WBM idle
+ * descriptor list, where the chip 0 WBM is chosen in case of a multi-chip config
+ * @HAL_RX_BUF_RBM_WBM_CHIP1_IDLE_DESC_LIST: Descriptor returned to WBM idle
+ * descriptor list, where the chip 1 WBM is chosen in case of a multi-chip config
+ * @HAL_RX_BUF_RBM_WBM_CHIP2_IDLE_DESC_LIST: Descriptor returned to WBM idle
+ * descriptor list, where the chip 2 WBM is chosen in case of a multi-chip config
+ * @HAL_RX_BUF_RBM_FW_BM: Buffer returned to FW
+ * @HAL_RX_BUF_RBM_SW0_BM: For ring 0 -- returned to host
+ * @HAL_RX_BUF_RBM_SW1_BM: For ring 1 -- returned to host
+ * @HAL_RX_BUF_RBM_SW2_BM: For ring 2 -- returned to host
+ * @HAL_RX_BUF_RBM_SW3_BM: For ring 3 -- returned to host
+ * @HAL_RX_BUF_RBM_SW4_BM: For ring 4 -- returned to host
+ * @HAL_RX_BUF_RBM_SW5_BM: For ring 5 -- returned to host
+ * @HAL_RX_BUF_RBM_SW6_BM: For ring 6 -- returned to host
+ */
+enum hal_rx_buf_return_buf_manager {
+ HAL_RX_BUF_RBM_WBM_IDLE_BUF_LIST,
+ HAL_RX_BUF_RBM_WBM_CHIP0_IDLE_DESC_LIST,
+ HAL_RX_BUF_RBM_WBM_CHIP1_IDLE_DESC_LIST,
+ HAL_RX_BUF_RBM_WBM_CHIP2_IDLE_DESC_LIST,
+ HAL_RX_BUF_RBM_FW_BM,
+ HAL_RX_BUF_RBM_SW0_BM,
+ HAL_RX_BUF_RBM_SW1_BM,
+ HAL_RX_BUF_RBM_SW2_BM,
+ HAL_RX_BUF_RBM_SW3_BM,
+ HAL_RX_BUF_RBM_SW4_BM,
+ HAL_RX_BUF_RBM_SW5_BM,
+ HAL_RX_BUF_RBM_SW6_BM,
+};
+
+#define HAL_SRNG_DESC_LOOP_CNT 0xf0000000
+
+#define HAL_REO_CMD_FLG_NEED_STATUS BIT(0)
+#define HAL_REO_CMD_FLG_STATS_CLEAR BIT(1)
+#define HAL_REO_CMD_FLG_FLUSH_BLOCK_LATER BIT(2)
+#define HAL_REO_CMD_FLG_FLUSH_RELEASE_BLOCKING BIT(3)
+#define HAL_REO_CMD_FLG_FLUSH_NO_INVAL BIT(4)
+#define HAL_REO_CMD_FLG_FLUSH_FWD_ALL_MPDUS BIT(5)
+#define HAL_REO_CMD_FLG_FLUSH_ALL BIT(6)
+#define HAL_REO_CMD_FLG_UNBLK_RESOURCE BIT(7)
+#define HAL_REO_CMD_FLG_UNBLK_CACHE BIT(8)
+
+/* Should match the HAL_REO_UPD_RX_QUEUE_INFO0_UPD_* fields */
+#define HAL_REO_CMD_UPD0_RX_QUEUE_NUM BIT(8)
+#define HAL_REO_CMD_UPD0_VLD BIT(9)
+#define HAL_REO_CMD_UPD0_ALDC BIT(10)
+#define HAL_REO_CMD_UPD0_DIS_DUP_DETECTION BIT(11)
+#define HAL_REO_CMD_UPD0_SOFT_REORDER_EN BIT(12)
+#define HAL_REO_CMD_UPD0_AC BIT(13)
+#define HAL_REO_CMD_UPD0_BAR BIT(14)
+#define HAL_REO_CMD_UPD0_RETRY BIT(15)
+#define HAL_REO_CMD_UPD0_CHECK_2K_MODE BIT(16)
+#define HAL_REO_CMD_UPD0_OOR_MODE BIT(17)
+#define HAL_REO_CMD_UPD0_BA_WINDOW_SIZE BIT(18)
+#define HAL_REO_CMD_UPD0_PN_CHECK BIT(19)
+#define HAL_REO_CMD_UPD0_EVEN_PN BIT(20)
+#define HAL_REO_CMD_UPD0_UNEVEN_PN BIT(21)
+#define HAL_REO_CMD_UPD0_PN_HANDLE_ENABLE BIT(22)
+#define HAL_REO_CMD_UPD0_PN_SIZE BIT(23)
+#define HAL_REO_CMD_UPD0_IGNORE_AMPDU_FLG BIT(24)
+#define HAL_REO_CMD_UPD0_SVLD BIT(25)
+#define HAL_REO_CMD_UPD0_SSN BIT(26)
+#define HAL_REO_CMD_UPD0_SEQ_2K_ERR BIT(27)
+#define HAL_REO_CMD_UPD0_PN_ERR BIT(28)
+#define HAL_REO_CMD_UPD0_PN_VALID BIT(29)
+#define HAL_REO_CMD_UPD0_PN BIT(30)
+
+/* Should match the HAL_REO_UPD_RX_QUEUE_INFO1_* fields */
+#define HAL_REO_CMD_UPD1_VLD BIT(16)
+#define HAL_REO_CMD_UPD1_ALDC GENMASK(18, 17)
+#define HAL_REO_CMD_UPD1_DIS_DUP_DETECTION BIT(19)
+#define HAL_REO_CMD_UPD1_SOFT_REORDER_EN BIT(20)
+#define HAL_REO_CMD_UPD1_AC GENMASK(22, 21)
+#define HAL_REO_CMD_UPD1_BAR BIT(23)
+#define HAL_REO_CMD_UPD1_RETRY BIT(24)
+#define HAL_REO_CMD_UPD1_CHECK_2K_MODE BIT(25)
+#define HAL_REO_CMD_UPD1_OOR_MODE BIT(26)
+#define HAL_REO_CMD_UPD1_PN_CHECK BIT(27)
+#define HAL_REO_CMD_UPD1_EVEN_PN BIT(28)
+#define HAL_REO_CMD_UPD1_UNEVEN_PN BIT(29)
+#define HAL_REO_CMD_UPD1_PN_HANDLE_ENABLE BIT(30)
+#define HAL_REO_CMD_UPD1_IGNORE_AMPDU_FLG BIT(31)
+
+/* Should match the HAL_REO_UPD_RX_QUEUE_INFO2_* fields */
+#define HAL_REO_CMD_UPD2_SVLD BIT(10)
+#define HAL_REO_CMD_UPD2_SSN GENMASK(22, 11)
+#define HAL_REO_CMD_UPD2_SEQ_2K_ERR BIT(23)
+#define HAL_REO_CMD_UPD2_PN_ERR BIT(24)
+
+struct ath12k_hal_reo_cmd {
+ u32 addr_lo;
+ u32 flag;
+ u32 upd0;
+ u32 upd1;
+ u32 upd2;
+ u32 pn[4];
+ u16 rx_queue_num;
+ u16 min_rel;
+ u16 min_fwd;
+ u8 addr_hi;
+ u8 ac_list;
+ u8 blocking_idx;
+ u16 ba_window_size;
+ u8 pn_size;
+};
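+
+/* Illustrative use (sketch, not part of this header): to update only the
+ * BA window size of an RX queue, a caller would fill
+ *
+ *   struct ath12k_hal_reo_cmd cmd = {
+ *           .flag = HAL_REO_CMD_FLG_NEED_STATUS,
+ *           .upd0 = HAL_REO_CMD_UPD0_BA_WINDOW_SIZE,
+ *           .ba_window_size = HAL_RX_MAX_BA_WINDOW,
+ *   };
+ *
+ * and issue it as a HAL_REO_CMD_UPDATE_RX_QUEUE command.
+ */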
+
+enum hal_pn_type {
+ HAL_PN_TYPE_NONE,
+ HAL_PN_TYPE_WPA,
+ HAL_PN_TYPE_WAPI_EVEN,
+ HAL_PN_TYPE_WAPI_UNEVEN,
+};
+
+enum hal_ce_desc {
+ HAL_CE_DESC_SRC,
+ HAL_CE_DESC_DST,
+ HAL_CE_DESC_DST_STATUS,
+};
+
+#define HAL_HASH_ROUTING_RING_TCL 0
+#define HAL_HASH_ROUTING_RING_SW1 1
+#define HAL_HASH_ROUTING_RING_SW2 2
+#define HAL_HASH_ROUTING_RING_SW3 3
+#define HAL_HASH_ROUTING_RING_SW4 4
+#define HAL_HASH_ROUTING_RING_REL 5
+#define HAL_HASH_ROUTING_RING_FW 6
+
+struct hal_reo_status_header {
+ u16 cmd_num;
+ enum hal_reo_cmd_status cmd_status;
+ u16 cmd_exe_time;
+ u32 timestamp;
+};
+
+struct hal_reo_status_queue_stats {
+ u16 ssn;
+ u16 curr_idx;
+ u32 pn[4];
+ u32 last_rx_queue_ts;
+ u32 last_rx_dequeue_ts;
+ u32 rx_bitmap[8]; /* Bitmap from 0-255 */
+ u32 curr_mpdu_cnt;
+ u32 curr_msdu_cnt;
+ u16 fwd_due_to_bar_cnt;
+ u16 dup_cnt;
+ u32 frames_in_order_cnt;
+ u32 num_mpdu_processed_cnt;
+ u32 num_msdu_processed_cnt;
+ u32 total_num_processed_byte_cnt;
+ u32 late_rx_mpdu_cnt;
+ u32 reorder_hole_cnt;
+ u8 timeout_cnt;
+ u8 bar_rx_cnt;
+ u8 num_window_2k_jump_cnt;
+};
+
+struct hal_reo_status_flush_queue {
+ bool err_detected;
+};
+
+enum hal_reo_status_flush_cache_err_code {
+ HAL_REO_STATUS_FLUSH_CACHE_ERR_CODE_SUCCESS,
+ HAL_REO_STATUS_FLUSH_CACHE_ERR_CODE_IN_USE,
+ HAL_REO_STATUS_FLUSH_CACHE_ERR_CODE_NOT_FOUND,
+};
+
+struct hal_reo_status_flush_cache {
+ bool err_detected;
+ enum hal_reo_status_flush_cache_err_code err_code;
+ bool cache_controller_flush_status_hit;
+ u8 cache_controller_flush_status_desc_type;
+ u8 cache_controller_flush_status_client_id;
+ u8 cache_controller_flush_status_err;
+ u8 cache_controller_flush_status_cnt;
+};
+
+enum hal_reo_status_unblock_cache_type {
+ HAL_REO_STATUS_UNBLOCK_BLOCKING_RESOURCE,
+ HAL_REO_STATUS_UNBLOCK_ENTIRE_CACHE_USAGE,
+};
+
+struct hal_reo_status_unblock_cache {
+ bool err_detected;
+ enum hal_reo_status_unblock_cache_type unblock_type;
+};
+
+struct hal_reo_status_flush_timeout_list {
+ bool err_detected;
+ bool list_empty;
+ u16 release_desc_cnt;
+ u16 fwd_buf_cnt;
+};
+
+enum hal_reo_threshold_idx {
+ HAL_REO_THRESHOLD_IDX_DESC_COUNTER0,
+ HAL_REO_THRESHOLD_IDX_DESC_COUNTER1,
+ HAL_REO_THRESHOLD_IDX_DESC_COUNTER2,
+ HAL_REO_THRESHOLD_IDX_DESC_COUNTER_SUM,
+};
+
+struct hal_reo_status_desc_thresh_reached {
+ enum hal_reo_threshold_idx threshold_idx;
+ u32 link_desc_counter0;
+ u32 link_desc_counter1;
+ u32 link_desc_counter2;
+ u32 link_desc_counter_sum;
+};
+
+struct hal_reo_status {
+ struct hal_reo_status_header uniform_hdr;
+ u8 loop_cnt;
+ union {
+ struct hal_reo_status_queue_stats queue_stats;
+ struct hal_reo_status_flush_queue flush_queue;
+ struct hal_reo_status_flush_cache flush_cache;
+ struct hal_reo_status_unblock_cache unblock_cache;
+ struct hal_reo_status_flush_timeout_list timeout_list;
+ struct hal_reo_status_desc_thresh_reached desc_thresh_reached;
+ } u;
+};
+
+/* HAL context to be used to access SRNG APIs (currently used by data path
+ * and transport (CE) modules)
+ */
+struct ath12k_hal {
+ /* HAL internal state for all SRNG rings.
+ */
+ struct hal_srng srng_list[HAL_SRNG_RING_ID_MAX];
+
+ /* SRNG configuration table */
+ struct hal_srng_config *srng_config;
+
+ /* Remote pointer memory for HW/FW updates */
+ struct {
+ u32 *vaddr;
+ dma_addr_t paddr;
+ } rdp;
+
+ /* Shared memory for ring pointer updates from host to FW */
+ struct {
+ u32 *vaddr;
+ dma_addr_t paddr;
+ } wrp;
+
+ /* Available REO blocking resources bitmap */
+ u8 avail_blk_resource;
+
+ u8 current_blk_index;
+
+ /* shadow register configuration */
+ u32 shadow_reg_addr[HAL_SHADOW_NUM_REGS];
+ int num_shadow_reg_configured;
+};
+
+/* Maps WBM ring number and Return Buffer Manager Id per TCL ring */
+struct ath12k_hal_tcl_to_wbm_rbm_map {
+ u8 wbm_ring_num;
+ u8 rbm_id;
+};
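+
+/* e.g. (sketch): an entry of { .wbm_ring_num = 0,
+ * .rbm_id = HAL_RX_BUF_RBM_SW0_BM } routes completions of the first TCL
+ * data ring to the WBM2SW0 release ring.
+ */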
+
+struct hal_ops {
+ bool (*rx_desc_get_first_msdu)(struct hal_rx_desc *desc);
+ bool (*rx_desc_get_last_msdu)(struct hal_rx_desc *desc);
+ u8 (*rx_desc_get_l3_pad_bytes)(struct hal_rx_desc *desc);
+ u8 *(*rx_desc_get_hdr_status)(struct hal_rx_desc *desc);
+ bool (*rx_desc_encrypt_valid)(struct hal_rx_desc *desc);
+ u32 (*rx_desc_get_encrypt_type)(struct hal_rx_desc *desc);
+ u8 (*rx_desc_get_decap_type)(struct hal_rx_desc *desc);
+ u8 (*rx_desc_get_mesh_ctl)(struct hal_rx_desc *desc);
+ bool (*rx_desc_get_mpdu_seq_ctl_vld)(struct hal_rx_desc *desc);
+ bool (*rx_desc_get_mpdu_fc_valid)(struct hal_rx_desc *desc);
+ u16 (*rx_desc_get_mpdu_start_seq_no)(struct hal_rx_desc *desc);
+ u16 (*rx_desc_get_msdu_len)(struct hal_rx_desc *desc);
+ u8 (*rx_desc_get_msdu_sgi)(struct hal_rx_desc *desc);
+ u8 (*rx_desc_get_msdu_rate_mcs)(struct hal_rx_desc *desc);
+ u8 (*rx_desc_get_msdu_rx_bw)(struct hal_rx_desc *desc);
+ u32 (*rx_desc_get_msdu_freq)(struct hal_rx_desc *desc);
+ u8 (*rx_desc_get_msdu_pkt_type)(struct hal_rx_desc *desc);
+ u8 (*rx_desc_get_msdu_nss)(struct hal_rx_desc *desc);
+ u8 (*rx_desc_get_mpdu_tid)(struct hal_rx_desc *desc);
+ u16 (*rx_desc_get_mpdu_peer_id)(struct hal_rx_desc *desc);
+ void (*rx_desc_copy_end_tlv)(struct hal_rx_desc *fdesc,
+ struct hal_rx_desc *ldesc);
+ u32 (*rx_desc_get_mpdu_start_tag)(struct hal_rx_desc *desc);
+ u32 (*rx_desc_get_mpdu_ppdu_id)(struct hal_rx_desc *desc);
+ void (*rx_desc_set_msdu_len)(struct hal_rx_desc *desc, u16 len);
+ struct rx_attention *(*rx_desc_get_attention)(struct hal_rx_desc *desc);
+ u8 *(*rx_desc_get_msdu_payload)(struct hal_rx_desc *desc);
+ u32 (*rx_desc_get_mpdu_start_offset)(void);
+ u32 (*rx_desc_get_msdu_end_offset)(void);
+ bool (*rx_desc_mac_addr2_valid)(struct hal_rx_desc *desc);
+	u8 *(*rx_desc_mpdu_start_addr2)(struct hal_rx_desc *desc);
+ bool (*rx_desc_is_mcbc)(struct hal_rx_desc *desc);
+ void (*rx_desc_get_dot11_hdr)(struct hal_rx_desc *desc,
+ struct ieee80211_hdr *hdr);
+ u16 (*rx_desc_get_mpdu_frame_ctl)(struct hal_rx_desc *desc);
+ void (*rx_desc_get_crypto_header)(struct hal_rx_desc *desc,
+ u8 *crypto_hdr,
+ enum hal_encrypt_type enctype);
+ int (*create_srng_config)(struct ath12k_base *ab);
+ bool (*dp_rx_h_msdu_done)(struct hal_rx_desc *desc);
+ bool (*dp_rx_h_l4_cksum_fail)(struct hal_rx_desc *desc);
+ bool (*dp_rx_h_ip_cksum_fail)(struct hal_rx_desc *desc);
+ bool (*dp_rx_h_is_decrypted)(struct hal_rx_desc *desc);
+ u32 (*dp_rx_h_mpdu_err)(struct hal_rx_desc *desc);
+ const struct ath12k_hal_tcl_to_wbm_rbm_map *tcl_to_wbm_rbm_map;
+};
+
+extern const struct hal_ops hal_qcn9274_ops;
+extern const struct hal_ops hal_wcn7850_ops;
+
+u32 ath12k_hal_reo_qdesc_size(u32 ba_window_size, u8 tid);
+void ath12k_hal_reo_qdesc_setup(struct hal_rx_reo_queue *qdesc,
+ int tid, u32 ba_window_size,
+ u32 start_seq, enum hal_pn_type type);
+void ath12k_hal_reo_init_cmd_ring(struct ath12k_base *ab,
+ struct hal_srng *srng);
+void ath12k_hal_reo_hw_setup(struct ath12k_base *ab, u32 ring_hash_map);
+void ath12k_hal_setup_link_idle_list(struct ath12k_base *ab,
+ struct hal_wbm_idle_scatter_list *sbuf,
+ u32 nsbufs, u32 tot_link_desc,
+ u32 end_offset);
+
+dma_addr_t ath12k_hal_srng_get_tp_addr(struct ath12k_base *ab,
+ struct hal_srng *srng);
+dma_addr_t ath12k_hal_srng_get_hp_addr(struct ath12k_base *ab,
+ struct hal_srng *srng);
+void ath12k_hal_set_link_desc_addr(struct hal_wbm_link_desc *desc, u32 cookie,
+ dma_addr_t paddr);
+u32 ath12k_hal_ce_get_desc_size(enum hal_ce_desc type);
+void ath12k_hal_ce_src_set_desc(struct hal_ce_srng_src_desc *desc, dma_addr_t paddr,
+ u32 len, u32 id, u8 byte_swap_data);
+void ath12k_hal_ce_dst_set_desc(struct hal_ce_srng_dest_desc *desc, dma_addr_t paddr);
+u32 ath12k_hal_ce_dst_status_get_length(struct hal_ce_srng_dst_status_desc *desc);
+int ath12k_hal_srng_get_entrysize(struct ath12k_base *ab, u32 ring_type);
+int ath12k_hal_srng_get_max_entries(struct ath12k_base *ab, u32 ring_type);
+void ath12k_hal_srng_get_params(struct ath12k_base *ab, struct hal_srng *srng,
+ struct hal_srng_params *params);
+void *ath12k_hal_srng_dst_get_next_entry(struct ath12k_base *ab,
+ struct hal_srng *srng);
+void *ath12k_hal_srng_dst_peek(struct ath12k_base *ab, struct hal_srng *srng);
+int ath12k_hal_srng_dst_num_free(struct ath12k_base *ab, struct hal_srng *srng,
+ bool sync_hw_ptr);
+void *ath12k_hal_srng_src_get_next_reaped(struct ath12k_base *ab,
+ struct hal_srng *srng);
+void *ath12k_hal_srng_src_reap_next(struct ath12k_base *ab,
+ struct hal_srng *srng);
+void *ath12k_hal_srng_src_get_next_entry(struct ath12k_base *ab,
+ struct hal_srng *srng);
+int ath12k_hal_srng_src_num_free(struct ath12k_base *ab, struct hal_srng *srng,
+ bool sync_hw_ptr);
+void ath12k_hal_srng_access_begin(struct ath12k_base *ab,
+ struct hal_srng *srng);
+void ath12k_hal_srng_access_end(struct ath12k_base *ab, struct hal_srng *srng);
+int ath12k_hal_srng_setup(struct ath12k_base *ab, enum hal_ring_type type,
+ int ring_num, int mac_id,
+ struct hal_srng_params *params);
+int ath12k_hal_srng_init(struct ath12k_base *ath12k);
+void ath12k_hal_srng_deinit(struct ath12k_base *ath12k);
+void ath12k_hal_dump_srng_stats(struct ath12k_base *ab);
+void ath12k_hal_srng_get_shadow_config(struct ath12k_base *ab,
+ u32 **cfg, u32 *len);
+int ath12k_hal_srng_update_shadow_config(struct ath12k_base *ab,
+ enum hal_ring_type ring_type,
+ int ring_num);
+void ath12k_hal_srng_shadow_config(struct ath12k_base *ab);
+void ath12k_hal_srng_shadow_update_hp_tp(struct ath12k_base *ab,
+ struct hal_srng *srng);
+#endif
diff --git a/drivers/net/wireless/ath/ath12k/hal_desc.h b/drivers/net/wireless/ath/ath12k/hal_desc.h
new file mode 100644
index 000000000000..2250ca2d19a3
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/hal_desc.h
@@ -0,0 +1,2961 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+#include "core.h"
+
+#ifndef ATH12K_HAL_DESC_H
+#define ATH12K_HAL_DESC_H
+
+#define BUFFER_ADDR_INFO0_ADDR GENMASK(31, 0)
+
+#define BUFFER_ADDR_INFO1_ADDR GENMASK(7, 0)
+#define BUFFER_ADDR_INFO1_RET_BUF_MGR GENMASK(11, 8)
+#define BUFFER_ADDR_INFO1_SW_COOKIE GENMASK(31, 12)
+
+struct ath12k_buffer_addr {
+ __le32 info0;
+ __le32 info1;
+} __packed;
+
+/* ath12k_buffer_addr
+ *
+ * buffer_addr_31_0
+ * Address (lower 32 bits) of the MSDU buffer or MSDU_EXTENSION
+ * descriptor or Link descriptor
+ *
+ * buffer_addr_39_32
+ * Address (upper 8 bits) of the MSDU buffer or MSDU_EXTENSION
+ * descriptor or Link descriptor
+ *
+ * return_buffer_manager (RBM)
+ * Consumer: WBM
+ * Producer: SW/FW
+ * Indicates to which buffer manager the buffer, MSDU_EXTENSION
+ * descriptor or link descriptor being pointed to shall be returned
+ * after the frame has been processed. It is used by WBM
+ * for routing purposes.
+ *
+ * Values are defined in enum %HAL_RX_BUF_RBM_
+ *
+ * sw_buffer_cookie
+ * Cookie field exclusively used by SW. HW ignores the contents,
+ * except that it passes the programmed value on to other
+ * descriptors together with the physical address.
+ *
+ * The field can be used by SW, for example, to associate the buffer's
+ * physical address with its virtual address.
+ *
+ * NOTE1:
+ * The three most significant bits can have a special meaning
+ * in case this struct is embedded in a TX_MPDU_DETAILS struct,
+ * and field transmit_bw_restriction is set
+ *
+ * In case of NON punctured transmission:
+ * Sw_buffer_cookie[19:17] = 3'b000: 20 MHz TX only
+ * Sw_buffer_cookie[19:17] = 3'b001: 40 MHz TX only
+ * Sw_buffer_cookie[19:17] = 3'b010: 80 MHz TX only
+ * Sw_buffer_cookie[19:17] = 3'b011: 160 MHz TX only
+ * Sw_buffer_cookie[19:17] = 3'b101: 240 MHz TX only
+ * Sw_buffer_cookie[19:17] = 3'b100: 320 MHz TX only
+ * Sw_buffer_cookie[19:18] = 2'b11: reserved
+ *
+ * In case of punctured transmission:
+ * Sw_buffer_cookie[19:16] = 4'b0000: pattern 0 only
+ * Sw_buffer_cookie[19:16] = 4'b0001: pattern 1 only
+ * Sw_buffer_cookie[19:16] = 4'b0010: pattern 2 only
+ * Sw_buffer_cookie[19:16] = 4'b0011: pattern 3 only
+ * Sw_buffer_cookie[19:16] = 4'b0100: pattern 4 only
+ * Sw_buffer_cookie[19:16] = 4'b0101: pattern 5 only
+ * Sw_buffer_cookie[19:16] = 4'b0110: pattern 6 only
+ * Sw_buffer_cookie[19:16] = 4'b0111: pattern 7 only
+ * Sw_buffer_cookie[19:16] = 4'b1000: pattern 8 only
+ * Sw_buffer_cookie[19:16] = 4'b1001: pattern 9 only
+ * Sw_buffer_cookie[19:16] = 4'b1010: pattern 10 only
+ * Sw_buffer_cookie[19:16] = 4'b1011: pattern 11 only
+ * Sw_buffer_cookie[19:18] = 2'b11: reserved
+ *
+ * Note: a punctured transmission is indicated by the presence
+ * of TLV TX_PUNCTURE_SETUP embedded in the scheduler TLV
+ *
+ * Sw_buffer_cookie[20:17]: Tid: The TID field in the QoS control
+ * field
+ *
+ * Sw_buffer_cookie[16]: Mpdu_qos_control_valid: This field
+ * indicates MPDUs with a QoS control field.
+ *
+ */
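+
+/* Encoding sketch (illustrative; names other than the masks above are
+ * assumptions):
+ *
+ *   binfo->info0 = cpu_to_le32(lower_32_bits(paddr));
+ *   binfo->info1 =
+ *           le32_encode_bits(upper_32_bits(paddr),
+ *                            BUFFER_ADDR_INFO1_ADDR) |
+ *           le32_encode_bits(rbm, BUFFER_ADDR_INFO1_RET_BUF_MGR) |
+ *           le32_encode_bits(cookie, BUFFER_ADDR_INFO1_SW_COOKIE);
+ */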
+
+enum hal_tlv_tag {
+ HAL_MACTX_CBF_START = 0 /* 0x0 */,
+ HAL_PHYRX_DATA = 1 /* 0x1 */,
+ HAL_PHYRX_CBF_DATA_RESP = 2 /* 0x2 */,
+ HAL_PHYRX_ABORT_REQUEST = 3 /* 0x3 */,
+ HAL_PHYRX_USER_ABORT_NOTIFICATION = 4 /* 0x4 */,
+ HAL_MACTX_DATA_RESP = 5 /* 0x5 */,
+ HAL_MACTX_CBF_DATA = 6 /* 0x6 */,
+ HAL_MACTX_CBF_DONE = 7 /* 0x7 */,
+ HAL_PHYRX_LMR_DATA_RESP = 8 /* 0x8 */,
+ HAL_RXPCU_TO_UCODE_START = 9 /* 0x9 */,
+ HAL_RXPCU_TO_UCODE_DELIMITER_FOR_FULL_MPDU = 10 /* 0xa */,
+ HAL_RXPCU_TO_UCODE_FULL_MPDU_DATA = 11 /* 0xb */,
+ HAL_RXPCU_TO_UCODE_FCS_STATUS = 12 /* 0xc */,
+ HAL_RXPCU_TO_UCODE_MPDU_DELIMITER = 13 /* 0xd */,
+ HAL_RXPCU_TO_UCODE_DELIMITER_FOR_MPDU_HEADER = 14 /* 0xe */,
+ HAL_RXPCU_TO_UCODE_MPDU_HEADER_DATA = 15 /* 0xf */,
+ HAL_RXPCU_TO_UCODE_END = 16 /* 0x10 */,
+ HAL_MACRX_CBF_READ_REQUEST = 32 /* 0x20 */,
+ HAL_MACRX_CBF_DATA_REQUEST = 33 /* 0x21 */,
+ HAL_MACRXXPECT_NDP_RECEPTION = 34 /* 0x22 */,
+ HAL_MACRX_FREEZE_CAPTURE_CHANNEL = 35 /* 0x23 */,
+ HAL_MACRX_NDP_TIMEOUT = 36 /* 0x24 */,
+ HAL_MACRX_ABORT_ACK = 37 /* 0x25 */,
+ HAL_MACRX_REQ_IMPLICIT_FB = 38 /* 0x26 */,
+ HAL_MACRX_CHAIN_MASK = 39 /* 0x27 */,
+ HAL_MACRX_NAP_USER = 40 /* 0x28 */,
+ HAL_MACRX_ABORT_REQUEST = 41 /* 0x29 */,
+ HAL_PHYTX_OTHER_TRANSMIT_INFO16 = 42 /* 0x2a */,
+ HAL_PHYTX_ABORT_ACK = 43 /* 0x2b */,
+ HAL_PHYTX_ABORT_REQUEST = 44 /* 0x2c */,
+ HAL_PHYTX_PKT_END = 45 /* 0x2d */,
+ HAL_PHYTX_PPDU_HEADER_INFO_REQUEST = 46 /* 0x2e */,
+ HAL_PHYTX_REQUEST_CTRL_INFO = 47 /* 0x2f */,
+ HAL_PHYTX_DATA_REQUEST = 48 /* 0x30 */,
+ HAL_PHYTX_BF_CV_LOADING_DONE = 49 /* 0x31 */,
+ HAL_PHYTX_NAP_ACK = 50 /* 0x32 */,
+ HAL_PHYTX_NAP_DONE = 51 /* 0x33 */,
+ HAL_PHYTX_OFF_ACK = 52 /* 0x34 */,
+ HAL_PHYTX_ON_ACK = 53 /* 0x35 */,
+ HAL_PHYTX_SYNTH_OFF_ACK = 54 /* 0x36 */,
+ HAL_PHYTX_DEBUG16 = 55 /* 0x37 */,
+ HAL_MACTX_ABORT_REQUEST = 56 /* 0x38 */,
+ HAL_MACTX_ABORT_ACK = 57 /* 0x39 */,
+ HAL_MACTX_PKT_END = 58 /* 0x3a */,
+ HAL_MACTX_PRE_PHY_DESC = 59 /* 0x3b */,
+ HAL_MACTX_BF_PARAMS_COMMON = 60 /* 0x3c */,
+ HAL_MACTX_BF_PARAMS_PER_USER = 61 /* 0x3d */,
+ HAL_MACTX_PREFETCH_CV = 62 /* 0x3e */,
+ HAL_MACTX_USER_DESC_COMMON = 63 /* 0x3f */,
+ HAL_MACTX_USER_DESC_PER_USER = 64 /* 0x40 */,
+ HAL_XAMPLE_USER_TLV_16 = 65 /* 0x41 */,
+ HAL_XAMPLE_TLV_16 = 66 /* 0x42 */,
+ HAL_MACTX_PHY_OFF = 67 /* 0x43 */,
+ HAL_MACTX_PHY_ON = 68 /* 0x44 */,
+ HAL_MACTX_SYNTH_OFF = 69 /* 0x45 */,
+ HAL_MACTXXPECT_CBF_COMMON = 70 /* 0x46 */,
+ HAL_MACTXXPECT_CBF_PER_USER = 71 /* 0x47 */,
+ HAL_MACTX_PHY_DESC = 72 /* 0x48 */,
+ HAL_MACTX_L_SIG_A = 73 /* 0x49 */,
+ HAL_MACTX_L_SIG_B = 74 /* 0x4a */,
+ HAL_MACTX_HT_SIG = 75 /* 0x4b */,
+ HAL_MACTX_VHT_SIG_A = 76 /* 0x4c */,
+ HAL_MACTX_VHT_SIG_B_SU20 = 77 /* 0x4d */,
+ HAL_MACTX_VHT_SIG_B_SU40 = 78 /* 0x4e */,
+ HAL_MACTX_VHT_SIG_B_SU80 = 79 /* 0x4f */,
+ HAL_MACTX_VHT_SIG_B_SU160 = 80 /* 0x50 */,
+ HAL_MACTX_VHT_SIG_B_MU20 = 81 /* 0x51 */,
+ HAL_MACTX_VHT_SIG_B_MU40 = 82 /* 0x52 */,
+ HAL_MACTX_VHT_SIG_B_MU80 = 83 /* 0x53 */,
+ HAL_MACTX_VHT_SIG_B_MU160 = 84 /* 0x54 */,
+ HAL_MACTX_SERVICE = 85 /* 0x55 */,
+ HAL_MACTX_HE_SIG_A_SU = 86 /* 0x56 */,
+ HAL_MACTX_HE_SIG_A_MU_DL = 87 /* 0x57 */,
+ HAL_MACTX_HE_SIG_A_MU_UL = 88 /* 0x58 */,
+ HAL_MACTX_HE_SIG_B1_MU = 89 /* 0x59 */,
+ HAL_MACTX_HE_SIG_B2_MU = 90 /* 0x5a */,
+ HAL_MACTX_HE_SIG_B2_OFDMA = 91 /* 0x5b */,
+ HAL_MACTX_DELETE_CV = 92 /* 0x5c */,
+ HAL_MACTX_MU_UPLINK_COMMON = 93 /* 0x5d */,
+ HAL_MACTX_MU_UPLINK_USER_SETUP = 94 /* 0x5e */,
+ HAL_MACTX_OTHER_TRANSMIT_INFO = 95 /* 0x5f */,
+ HAL_MACTX_PHY_NAP = 96 /* 0x60 */,
+ HAL_MACTX_DEBUG = 97 /* 0x61 */,
+ HAL_PHYRX_ABORT_ACK = 98 /* 0x62 */,
+ HAL_PHYRX_GENERATED_CBF_DETAILS = 99 /* 0x63 */,
+ HAL_PHYRX_RSSI_LEGACY = 100 /* 0x64 */,
+ HAL_PHYRX_RSSI_HT = 101 /* 0x65 */,
+ HAL_PHYRX_USER_INFO = 102 /* 0x66 */,
+ HAL_PHYRX_PKT_END = 103 /* 0x67 */,
+ HAL_PHYRX_DEBUG = 104 /* 0x68 */,
+ HAL_PHYRX_CBF_TRANSFER_DONE = 105 /* 0x69 */,
+ HAL_PHYRX_CBF_TRANSFER_ABORT = 106 /* 0x6a */,
+ HAL_PHYRX_L_SIG_A = 107 /* 0x6b */,
+ HAL_PHYRX_L_SIG_B = 108 /* 0x6c */,
+ HAL_PHYRX_HT_SIG = 109 /* 0x6d */,
+ HAL_PHYRX_VHT_SIG_A = 110 /* 0x6e */,
+ HAL_PHYRX_VHT_SIG_B_SU20 = 111 /* 0x6f */,
+ HAL_PHYRX_VHT_SIG_B_SU40 = 112 /* 0x70 */,
+ HAL_PHYRX_VHT_SIG_B_SU80 = 113 /* 0x71 */,
+ HAL_PHYRX_VHT_SIG_B_SU160 = 114 /* 0x72 */,
+ HAL_PHYRX_VHT_SIG_B_MU20 = 115 /* 0x73 */,
+ HAL_PHYRX_VHT_SIG_B_MU40 = 116 /* 0x74 */,
+ HAL_PHYRX_VHT_SIG_B_MU80 = 117 /* 0x75 */,
+ HAL_PHYRX_VHT_SIG_B_MU160 = 118 /* 0x76 */,
+ HAL_PHYRX_HE_SIG_A_SU = 119 /* 0x77 */,
+ HAL_PHYRX_HE_SIG_A_MU_DL = 120 /* 0x78 */,
+ HAL_PHYRX_HE_SIG_A_MU_UL = 121 /* 0x79 */,
+ HAL_PHYRX_HE_SIG_B1_MU = 122 /* 0x7a */,
+ HAL_PHYRX_HE_SIG_B2_MU = 123 /* 0x7b */,
+ HAL_PHYRX_HE_SIG_B2_OFDMA = 124 /* 0x7c */,
+ HAL_PHYRX_OTHER_RECEIVE_INFO = 125 /* 0x7d */,
+ HAL_PHYRX_COMMON_USER_INFO = 126 /* 0x7e */,
+ HAL_PHYRX_DATA_DONE = 127 /* 0x7f */,
+ HAL_COEX_TX_REQ = 128 /* 0x80 */,
+ HAL_DUMMY = 129 /* 0x81 */,
+ HALXAMPLE_TLV_32_NAME = 130 /* 0x82 */,
+ HAL_MPDU_LIMIT = 131 /* 0x83 */,
+ HAL_NA_LENGTH_END = 132 /* 0x84 */,
+ HAL_OLE_BUF_STATUS = 133 /* 0x85 */,
+ HAL_PCU_PPDU_SETUP_DONE = 134 /* 0x86 */,
+ HAL_PCU_PPDU_SETUP_END = 135 /* 0x87 */,
+ HAL_PCU_PPDU_SETUP_INIT = 136 /* 0x88 */,
+ HAL_PCU_PPDU_SETUP_START = 137 /* 0x89 */,
+ HAL_PDG_FES_SETUP = 138 /* 0x8a */,
+ HAL_PDG_RESPONSE = 139 /* 0x8b */,
+ HAL_PDG_TX_REQ = 140 /* 0x8c */,
+ HAL_SCH_WAIT_INSTR = 141 /* 0x8d */,
+ HAL_TQM_FLOWMPTY_STATUS = 143 /* 0x8f */,
+ HAL_TQM_FLOW_NOTMPTY_STATUS = 144 /* 0x90 */,
+ HAL_TQM_GEN_MPDU_LENGTH_LIST = 145 /* 0x91 */,
+ HAL_TQM_GEN_MPDU_LENGTH_LIST_STATUS = 146 /* 0x92 */,
+ HAL_TQM_GEN_MPDUS = 147 /* 0x93 */,
+ HAL_TQM_GEN_MPDUS_STATUS = 148 /* 0x94 */,
+ HAL_TQM_REMOVE_MPDU = 149 /* 0x95 */,
+ HAL_TQM_REMOVE_MPDU_STATUS = 150 /* 0x96 */,
+ HAL_TQM_REMOVE_MSDU = 151 /* 0x97 */,
+ HAL_TQM_REMOVE_MSDU_STATUS = 152 /* 0x98 */,
+ HAL_TQM_UPDATE_TX_MPDU_COUNT = 153 /* 0x99 */,
+ HAL_TQM_WRITE_CMD = 154 /* 0x9a */,
+ HAL_OFDMA_TRIGGER_DETAILS = 155 /* 0x9b */,
+ HAL_TX_DATA = 156 /* 0x9c */,
+ HAL_TX_FES_SETUP = 157 /* 0x9d */,
+ HAL_RX_PACKET = 158 /* 0x9e */,
+ HALXPECTED_RESPONSE = 159 /* 0x9f */,
+ HAL_TX_MPDU_END = 160 /* 0xa0 */,
+ HAL_TX_MPDU_START = 161 /* 0xa1 */,
+ HAL_TX_MSDU_END = 162 /* 0xa2 */,
+ HAL_TX_MSDU_START = 163 /* 0xa3 */,
+ HAL_TX_SW_MODE_SETUP = 164 /* 0xa4 */,
+ HAL_TXPCU_BUFFER_STATUS = 165 /* 0xa5 */,
+ HAL_TXPCU_USER_BUFFER_STATUS = 166 /* 0xa6 */,
+ HAL_DATA_TO_TIME_CONFIG = 167 /* 0xa7 */,
+ HALXAMPLE_USER_TLV_32 = 168 /* 0xa8 */,
+ HAL_MPDU_INFO = 169 /* 0xa9 */,
+ HAL_PDG_USER_SETUP = 170 /* 0xaa */,
+ HAL_TX_11AH_SETUP = 171 /* 0xab */,
+ HAL_REO_UPDATE_RX_REO_QUEUE_STATUS = 172 /* 0xac */,
+ HAL_TX_PEER_ENTRY = 173 /* 0xad */,
+ HAL_TX_RAW_OR_NATIVE_FRAME_SETUP = 174 /* 0xae */,
+ HALXAMPLE_USER_TLV_44 = 175 /* 0xaf */,
+ HAL_TX_FLUSH = 176 /* 0xb0 */,
+ HAL_TX_FLUSH_REQ = 177 /* 0xb1 */,
+ HAL_TQM_WRITE_CMD_STATUS = 178 /* 0xb2 */,
+ HAL_TQM_GET_MPDU_QUEUE_STATS = 179 /* 0xb3 */,
+ HAL_TQM_GET_MSDU_FLOW_STATS = 180 /* 0xb4 */,
+ HALXAMPLE_USER_CTLV_44 = 181 /* 0xb5 */,
+ HAL_TX_FES_STATUS_START = 182 /* 0xb6 */,
+ HAL_TX_FES_STATUS_USER_PPDU = 183 /* 0xb7 */,
+ HAL_TX_FES_STATUS_USER_RESPONSE = 184 /* 0xb8 */,
+ HAL_TX_FES_STATUS_END = 185 /* 0xb9 */,
+ HAL_RX_TRIG_INFO = 186 /* 0xba */,
+ HAL_RXPCU_TX_SETUP_CLEAR = 187 /* 0xbb */,
+ HAL_RX_FRAME_BITMAP_REQ = 188 /* 0xbc */,
+ HAL_RX_FRAME_BITMAP_ACK = 189 /* 0xbd */,
+ HAL_COEX_RX_STATUS = 190 /* 0xbe */,
+ HAL_RX_START_PARAM = 191 /* 0xbf */,
+ HAL_RX_PPDU_START = 192 /* 0xc0 */,
+ HAL_RX_PPDU_END = 193 /* 0xc1 */,
+ HAL_RX_MPDU_START = 194 /* 0xc2 */,
+ HAL_RX_MPDU_END = 195 /* 0xc3 */,
+ HAL_RX_MSDU_START = 196 /* 0xc4 */,
+ HAL_RX_MSDU_END = 197 /* 0xc5 */,
+ HAL_RX_ATTENTION = 198 /* 0xc6 */,
+ HAL_RECEIVED_RESPONSE_INFO = 199 /* 0xc7 */,
+ HAL_RX_PHY_SLEEP = 200 /* 0xc8 */,
+ HAL_RX_HEADER = 201 /* 0xc9 */,
+ HAL_RX_PEER_ENTRY = 202 /* 0xca */,
+ HAL_RX_FLUSH = 203 /* 0xcb */,
+ HAL_RX_RESPONSE_REQUIRED_INFO = 204 /* 0xcc */,
+ HAL_RX_FRAMELESS_BAR_DETAILS = 205 /* 0xcd */,
+ HAL_TQM_GET_MPDU_QUEUE_STATS_STATUS = 206 /* 0xce */,
+ HAL_TQM_GET_MSDU_FLOW_STATS_STATUS = 207 /* 0xcf */,
+ HAL_TX_CBF_INFO = 208 /* 0xd0 */,
+ HAL_PCU_PPDU_SETUP_USER = 209 /* 0xd1 */,
+ HAL_RX_MPDU_PCU_START = 210 /* 0xd2 */,
+ HAL_RX_PM_INFO = 211 /* 0xd3 */,
+ HAL_RX_USER_PPDU_END = 212 /* 0xd4 */,
+ HAL_RX_PRE_PPDU_START = 213 /* 0xd5 */,
+ HAL_RX_PREAMBLE = 214 /* 0xd6 */,
+ HAL_TX_FES_SETUP_COMPLETE = 215 /* 0xd7 */,
+ HAL_TX_LAST_MPDU_FETCHED = 216 /* 0xd8 */,
+ HAL_TXDMA_STOP_REQUEST = 217 /* 0xd9 */,
+ HAL_RXPCU_SETUP = 218 /* 0xda */,
+ HAL_RXPCU_USER_SETUP = 219 /* 0xdb */,
+ HAL_TX_FES_STATUS_ACK_OR_BA = 220 /* 0xdc */,
+ HAL_TQM_ACKED_MPDU = 221 /* 0xdd */,
+ HAL_COEX_TX_RESP = 222 /* 0xde */,
+ HAL_COEX_TX_STATUS = 223 /* 0xdf */,
+ HAL_MACTX_COEX_PHY_CTRL = 224 /* 0xe0 */,
+ HAL_COEX_STATUS_BROADCAST = 225 /* 0xe1 */,
+ HAL_RESPONSE_START_STATUS = 226 /* 0xe2 */,
+ HAL_RESPONSEND_STATUS = 227 /* 0xe3 */,
+ HAL_CRYPTO_STATUS = 228 /* 0xe4 */,
+ HAL_RECEIVED_TRIGGER_INFO = 229 /* 0xe5 */,
+ HAL_COEX_TX_STOP_CTRL = 230 /* 0xe6 */,
+ HAL_RX_PPDU_ACK_REPORT = 231 /* 0xe7 */,
+ HAL_RX_PPDU_NO_ACK_REPORT = 232 /* 0xe8 */,
+ HAL_SCH_COEX_STATUS = 233 /* 0xe9 */,
+ HAL_SCHEDULER_COMMAND_STATUS = 234 /* 0xea */,
+ HAL_SCHEDULER_RX_PPDU_NO_RESPONSE_STATUS = 235 /* 0xeb */,
+ HAL_TX_FES_STATUS_PROT = 236 /* 0xec */,
+ HAL_TX_FES_STATUS_START_PPDU = 237 /* 0xed */,
+ HAL_TX_FES_STATUS_START_PROT = 238 /* 0xee */,
+ HAL_TXPCU_PHYTX_DEBUG32 = 239 /* 0xef */,
+ HAL_TXPCU_PHYTX_OTHER_TRANSMIT_INFO32 = 240 /* 0xf0 */,
+ HAL_TX_MPDU_COUNT_TRANSFERND = 241 /* 0xf1 */,
+ HAL_WHO_ANCHOR_OFFSET = 242 /* 0xf2 */,
+ HAL_WHO_ANCHOR_VALUE = 243 /* 0xf3 */,
+ HAL_WHO_CCE_INFO = 244 /* 0xf4 */,
+ HAL_WHO_COMMIT = 245 /* 0xf5 */,
+ HAL_WHO_COMMIT_DONE = 246 /* 0xf6 */,
+ HAL_WHO_FLUSH = 247 /* 0xf7 */,
+ HAL_WHO_L2_LLC = 248 /* 0xf8 */,
+ HAL_WHO_L2_PAYLOAD = 249 /* 0xf9 */,
+ HAL_WHO_L3_CHECKSUM = 250 /* 0xfa */,
+ HAL_WHO_L3_INFO = 251 /* 0xfb */,
+ HAL_WHO_L4_CHECKSUM = 252 /* 0xfc */,
+ HAL_WHO_L4_INFO = 253 /* 0xfd */,
+ HAL_WHO_MSDU = 254 /* 0xfe */,
+ HAL_WHO_MSDU_MISC = 255 /* 0xff */,
+ HAL_WHO_PACKET_DATA = 256 /* 0x100 */,
+ HAL_WHO_PACKET_HDR = 257 /* 0x101 */,
+ HAL_WHO_PPDU_END = 258 /* 0x102 */,
+ HAL_WHO_PPDU_START = 259 /* 0x103 */,
+ HAL_WHO_TSO = 260 /* 0x104 */,
+ HAL_WHO_WMAC_HEADER_PV0 = 261 /* 0x105 */,
+ HAL_WHO_WMAC_HEADER_PV1 = 262 /* 0x106 */,
+ HAL_WHO_WMAC_IV = 263 /* 0x107 */,
+ HAL_MPDU_INFO_END = 264 /* 0x108 */,
+ HAL_MPDU_INFO_BITMAP = 265 /* 0x109 */,
+ HAL_TX_QUEUE_EXTENSION = 266 /* 0x10a */,
+ HAL_SCHEDULER_SELFGEN_RESPONSE_STATUS = 267 /* 0x10b */,
+ HAL_TQM_UPDATE_TX_MPDU_COUNT_STATUS = 268 /* 0x10c */,
+ HAL_TQM_ACKED_MPDU_STATUS = 269 /* 0x10d */,
+ HAL_TQM_ADD_MSDU_STATUS = 270 /* 0x10e */,
+ HAL_TQM_LIST_GEN_DONE = 271 /* 0x10f */,
+ HAL_WHO_TERMINATE = 272 /* 0x110 */,
+ HAL_TX_LAST_MPDU_END = 273 /* 0x111 */,
+ HAL_TX_CV_DATA = 274 /* 0x112 */,
+ HAL_PPDU_TX_END = 275 /* 0x113 */,
+ HAL_PROT_TX_END = 276 /* 0x114 */,
+ HAL_MPDU_INFO_GLOBAL_END = 277 /* 0x115 */,
+ HAL_TQM_SCH_INSTR_GLOBAL_END = 278 /* 0x116 */,
+ HAL_RX_PPDU_END_USER_STATS = 279 /* 0x117 */,
+ HAL_RX_PPDU_END_USER_STATS_EXT = 280 /* 0x118 */,
+ HAL_REO_GET_QUEUE_STATS = 281 /* 0x119 */,
+ HAL_REO_FLUSH_QUEUE = 282 /* 0x11a */,
+ HAL_REO_FLUSH_CACHE = 283 /* 0x11b */,
+ HAL_REO_UNBLOCK_CACHE = 284 /* 0x11c */,
+ HAL_REO_GET_QUEUE_STATS_STATUS = 285 /* 0x11d */,
+ HAL_REO_FLUSH_QUEUE_STATUS = 286 /* 0x11e */,
+ HAL_REO_FLUSH_CACHE_STATUS = 287 /* 0x11f */,
+ HAL_REO_UNBLOCK_CACHE_STATUS = 288 /* 0x120 */,
+ HAL_TQM_FLUSH_CACHE = 289 /* 0x121 */,
+ HAL_TQM_UNBLOCK_CACHE = 290 /* 0x122 */,
+ HAL_TQM_FLUSH_CACHE_STATUS = 291 /* 0x123 */,
+ HAL_TQM_UNBLOCK_CACHE_STATUS = 292 /* 0x124 */,
+ HAL_RX_PPDU_END_STATUS_DONE = 293 /* 0x125 */,
+ HAL_RX_STATUS_BUFFER_DONE = 294 /* 0x126 */,
+ HAL_TX_DATA_SYNC = 297 /* 0x129 */,
+ HAL_PHYRX_CBF_READ_REQUEST_ACK = 298 /* 0x12a */,
+ HAL_TQM_GET_MPDU_HEAD_INFO = 299 /* 0x12b */,
+ HAL_TQM_SYNC_CMD = 300 /* 0x12c */,
+ HAL_TQM_GET_MPDU_HEAD_INFO_STATUS = 301 /* 0x12d */,
+ HAL_TQM_SYNC_CMD_STATUS = 302 /* 0x12e */,
+ HAL_TQM_THRESHOLD_DROP_NOTIFICATION_STATUS = 303 /* 0x12f */,
+ HAL_TQM_DESCRIPTOR_THRESHOLD_REACHED_STATUS = 304 /* 0x130 */,
+ HAL_REO_FLUSH_TIMEOUT_LIST = 305 /* 0x131 */,
+ HAL_REO_FLUSH_TIMEOUT_LIST_STATUS = 306 /* 0x132 */,
+ HAL_REO_DESCRIPTOR_THRESHOLD_REACHED_STATUS = 307 /* 0x133 */,
+ HAL_SCHEDULER_RX_SIFS_RESPONSE_TRIGGER_STATUS = 308 /* 0x134 */,
+ HALXAMPLE_USER_TLV_32_NAME = 309 /* 0x135 */,
+ HAL_RX_PPDU_START_USER_INFO = 310 /* 0x136 */,
+ HAL_RX_RING_MASK = 311 /* 0x137 */,
+ HAL_COEX_MAC_NAP = 312 /* 0x138 */,
+ HAL_RXPCU_PPDU_END_INFO = 313 /* 0x139 */,
+ HAL_WHO_MESH_CONTROL = 314 /* 0x13a */,
+ HAL_PDG_SW_MODE_BW_START = 315 /* 0x13b */,
+ HAL_PDG_SW_MODE_BW_END = 316 /* 0x13c */,
+ HAL_PDG_WAIT_FOR_MAC_REQUEST = 317 /* 0x13d */,
+ HAL_PDG_WAIT_FOR_PHY_REQUEST = 318 /* 0x13e */,
+ HAL_SCHEDULER_END = 319 /* 0x13f */,
+ HAL_RX_PPDU_START_DROPPED = 320 /* 0x140 */,
+ HAL_RX_PPDU_END_DROPPED = 321 /* 0x141 */,
+ HAL_RX_PPDU_END_STATUS_DONE_DROPPED = 322 /* 0x142 */,
+ HAL_RX_MPDU_START_DROPPED = 323 /* 0x143 */,
+ HAL_RX_MSDU_START_DROPPED = 324 /* 0x144 */,
+ HAL_RX_MSDU_END_DROPPED = 325 /* 0x145 */,
+ HAL_RX_MPDU_END_DROPPED = 326 /* 0x146 */,
+ HAL_RX_ATTENTION_DROPPED = 327 /* 0x147 */,
+ HAL_TXPCU_USER_SETUP = 328 /* 0x148 */,
+ HAL_RXPCU_USER_SETUP_EXT = 329 /* 0x149 */,
+ HAL_CMD_PART_0_END = 330 /* 0x14a */,
+ HAL_MACTX_SYNTH_ON = 331 /* 0x14b */,
+ HAL_SCH_CRITICAL_TLV_REFERENCE = 332 /* 0x14c */,
+ HAL_TQM_MPDU_GLOBAL_START = 333 /* 0x14d */,
+ HALXAMPLE_TLV_32 = 334 /* 0x14e */,
+ HAL_TQM_UPDATE_TX_MSDU_FLOW = 335 /* 0x14f */,
+ HAL_TQM_UPDATE_TX_MPDU_QUEUE_HEAD = 336 /* 0x150 */,
+ HAL_TQM_UPDATE_TX_MSDU_FLOW_STATUS = 337 /* 0x151 */,
+ HAL_TQM_UPDATE_TX_MPDU_QUEUE_HEAD_STATUS = 338 /* 0x152 */,
+ HAL_REO_UPDATE_RX_REO_QUEUE = 339 /* 0x153 */,
+ HAL_TQM_MPDU_QUEUEMPTY_STATUS = 340 /* 0x154 */,
+ HAL_TQM_2_SCH_MPDU_AVAILABLE = 341 /* 0x155 */,
+ HAL_PDG_TRIG_RESPONSE = 342 /* 0x156 */,
+ HAL_TRIGGER_RESPONSE_TX_DONE = 343 /* 0x157 */,
+ HAL_ABORT_FROM_PHYRX_DETAILS = 344 /* 0x158 */,
+ HAL_SCH_TQM_CMD_WRAPPER = 345 /* 0x159 */,
+ HAL_MPDUS_AVAILABLE = 346 /* 0x15a */,
+ HAL_RECEIVED_RESPONSE_INFO_PART2 = 347 /* 0x15b */,
+ HAL_PHYRX_TX_START_TIMING = 348 /* 0x15c */,
+ HAL_TXPCU_PREAMBLE_DONE = 349 /* 0x15d */,
+ HAL_NDP_PREAMBLE_DONE = 350 /* 0x15e */,
+ HAL_SCH_TQM_CMD_WRAPPER_RBO_DROP = 351 /* 0x15f */,
+ HAL_SCH_TQM_CMD_WRAPPER_CONT_DROP = 352 /* 0x160 */,
+ HAL_MACTX_CLEAR_PREV_TX_INFO = 353 /* 0x161 */,
+ HAL_TX_PUNCTURE_SETUP = 354 /* 0x162 */,
+ HAL_R2R_STATUS_END = 355 /* 0x163 */,
+ HAL_MACTX_PREFETCH_CV_COMMON = 356 /* 0x164 */,
+ HAL_END_OF_FLUSH_MARKER = 357 /* 0x165 */,
+ HAL_MACTX_MU_UPLINK_COMMON_PUNC = 358 /* 0x166 */,
+ HAL_MACTX_MU_UPLINK_USER_SETUP_PUNC = 359 /* 0x167 */,
+ HAL_RECEIVED_RESPONSE_USER_7_0 = 360 /* 0x168 */,
+ HAL_RECEIVED_RESPONSE_USER_15_8 = 361 /* 0x169 */,
+ HAL_RECEIVED_RESPONSE_USER_23_16 = 362 /* 0x16a */,
+ HAL_RECEIVED_RESPONSE_USER_31_24 = 363 /* 0x16b */,
+ HAL_RECEIVED_RESPONSE_USER_36_32 = 364 /* 0x16c */,
+ HAL_TX_LOOPBACK_SETUP = 365 /* 0x16d */,
+ HAL_PHYRX_OTHER_RECEIVE_INFO_RU_DETAILS = 366 /* 0x16e */,
+ HAL_SCH_WAIT_INSTR_TX_PATH = 367 /* 0x16f */,
+ HAL_MACTX_OTHER_TRANSMIT_INFO_TX2TX = 368 /* 0x170 */,
+ HAL_MACTX_OTHER_TRANSMIT_INFOMUPHY_SETUP = 369 /* 0x171 */,
+ HAL_PHYRX_OTHER_RECEIVE_INFOVM_DETAILS = 370 /* 0x172 */,
+ HAL_TX_WUR_DATA = 371 /* 0x173 */,
+ HAL_RX_PPDU_END_START = 372 /* 0x174 */,
+ HAL_RX_PPDU_END_MIDDLE = 373 /* 0x175 */,
+ HAL_RX_PPDU_END_LAST = 374 /* 0x176 */,
+ HAL_MACTX_BACKOFF_BASED_TRANSMISSION = 375 /* 0x177 */,
+ HAL_MACTX_OTHER_TRANSMIT_INFO_DL_OFDMA_TX = 376 /* 0x178 */,
+ HAL_SRP_INFO = 377 /* 0x179 */,
+ HAL_OBSS_SR_INFO = 378 /* 0x17a */,
+ HAL_SCHEDULER_SW_MSG_STATUS = 379 /* 0x17b */,
+ HAL_HWSCH_RXPCU_MAC_INFO_ANNOUNCEMENT = 380 /* 0x17c */,
+ HAL_RXPCU_SETUP_COMPLETE = 381 /* 0x17d */,
+ HAL_SNOOP_PPDU_START = 382 /* 0x17e */,
+ HAL_SNOOP_MPDU_USR_DBG_INFO = 383 /* 0x17f */,
+ HAL_SNOOP_MSDU_USR_DBG_INFO = 384 /* 0x180 */,
+ HAL_SNOOP_MSDU_USR_DATA = 385 /* 0x181 */,
+ HAL_SNOOP_MPDU_USR_STAT_INFO = 386 /* 0x182 */,
+ HAL_SNOOP_PPDU_END = 387 /* 0x183 */,
+ HAL_SNOOP_SPARE = 388 /* 0x184 */,
+ HAL_PHYRX_OTHER_RECEIVE_INFO_MU_RSSI_COMMON = 390 /* 0x186 */,
+ HAL_PHYRX_OTHER_RECEIVE_INFO_MU_RSSI_USER = 391 /* 0x187 */,
+ HAL_MACTX_OTHER_TRANSMIT_INFO_SCH_DETAILS = 392 /* 0x188 */,
+ HAL_PHYRX_OTHER_RECEIVE_INFO_108PVM_DETAILS = 393 /* 0x189 */,
+ HAL_SCH_TLV_WRAPPER = 394 /* 0x18a */,
+ HAL_SCHEDULER_STATUS_WRAPPER = 395 /* 0x18b */,
+ HAL_MPDU_INFO_6X = 396 /* 0x18c */,
+ HAL_MACTX_11AZ_USER_DESC_PER_USER = 397 /* 0x18d */,
+ HAL_MACTX_U_SIGHT_SU_MU = 398 /* 0x18e */,
+ HAL_MACTX_U_SIGHT_TB = 399 /* 0x18f */,
+ HAL_PHYRX_U_SIGHT_SU_MU = 403 /* 0x193 */,
+ HAL_PHYRX_U_SIGHT_TB = 404 /* 0x194 */,
+ HAL_MACRX_LMR_READ_REQUEST = 408 /* 0x198 */,
+ HAL_MACRX_LMR_DATA_REQUEST = 409 /* 0x199 */,
+ HAL_PHYRX_LMR_TRANSFER_DONE = 410 /* 0x19a */,
+ HAL_PHYRX_LMR_TRANSFER_ABORT = 411 /* 0x19b */,
+ HAL_PHYRX_LMR_READ_REQUEST_ACK = 412 /* 0x19c */,
+ HAL_MACRX_SECURE_LTF_SEQ_PTR = 413 /* 0x19d */,
+ HAL_PHYRX_USER_INFO_MU_UL = 414 /* 0x19e */,
+ HAL_MPDU_QUEUE_OVERVIEW = 415 /* 0x19f */,
+ HAL_SCHEDULER_NAV_INFO = 416 /* 0x1a0 */,
+ HAL_LMR_PEER_ENTRY = 418 /* 0x1a2 */,
+ HAL_LMR_MPDU_START = 419 /* 0x1a3 */,
+ HAL_LMR_DATA = 420 /* 0x1a4 */,
+ HAL_LMR_MPDU_END = 421 /* 0x1a5 */,
+ HAL_REO_GET_QUEUE_1K_STATS_STATUS = 422 /* 0x1a6 */,
+ HAL_RX_FRAME_1K_BITMAP_ACK = 423 /* 0x1a7 */,
+ HAL_TX_FES_STATUS_1K_BA = 424 /* 0x1a8 */,
+ HAL_TQM_ACKED_1K_MPDU = 425 /* 0x1a9 */,
+ HAL_MACRX_INBSS_OBSS_IND = 426 /* 0x1aa */,
+ HAL_PHYRX_LOCATION = 427 /* 0x1ab */,
+ HAL_MLO_TX_NOTIFICATION_SU = 428 /* 0x1ac */,
+ HAL_MLO_TX_NOTIFICATION_MU = 429 /* 0x1ad */,
+ HAL_MLO_TX_REQ_SU = 430 /* 0x1ae */,
+ HAL_MLO_TX_REQ_MU = 431 /* 0x1af */,
+ HAL_MLO_TX_RESP = 432 /* 0x1b0 */,
+ HAL_MLO_RX_NOTIFICATION = 433 /* 0x1b1 */,
+ HAL_MLO_BKOFF_TRUNC_REQ = 434 /* 0x1b2 */,
+ HAL_MLO_TBTT_NOTIFICATION = 435 /* 0x1b3 */,
+ HAL_MLO_MESSAGE = 436 /* 0x1b4 */,
+ HAL_MLO_TS_SYNC_MSG = 437 /* 0x1b5 */,
+ HAL_MLO_FES_SETUP = 438 /* 0x1b6 */,
+ HAL_MLO_PDG_FES_SETUP_SU = 439 /* 0x1b7 */,
+ HAL_MLO_PDG_FES_SETUP_MU = 440 /* 0x1b8 */,
+ HAL_MPDU_INFO_1K_BITMAP = 441 /* 0x1b9 */,
+ HAL_MON_BUF_ADDR = 442 /* 0x1ba */,
+ HAL_TX_FRAG_STATE = 443 /* 0x1bb */,
+ HAL_MACTX_EHT_SIG_USR_OFDMA = 446 /* 0x1be */,
+ HAL_PHYRX_EHT_SIG_CMN_PUNC = 448 /* 0x1c0 */,
+ HAL_PHYRX_EHT_SIG_CMN_OFDMA = 450 /* 0x1c2 */,
+ HAL_PHYRX_EHT_SIG_USR_OFDMA = 454 /* 0x1c6 */,
+ HAL_PHYRX_PKT_END_PART1 = 456 /* 0x1c8 */,
+ HAL_MACTX_EXPECT_NDP_RECEPTION = 457 /* 0x1c9 */,
+ HAL_MACTX_SECURE_LTF_SEQ_PTR = 458 /* 0x1ca */,
+ HAL_MLO_PDG_BKOFF_TRUNC_NOTIFY = 460 /* 0x1cc */,
+ HAL_PHYRX_11AZ_INTEGRITY_DATA = 461 /* 0x1cd */,
+ HAL_PHYTX_LOCATION = 462 /* 0x1ce */,
+ HAL_PHYTX_11AZ_INTEGRITY_DATA = 463 /* 0x1cf */,
+ HAL_MACTX_EHT_SIG_USR_SU = 466 /* 0x1d2 */,
+ HAL_MACTX_EHT_SIG_USR_MU_MIMO = 467 /* 0x1d3 */,
+ HAL_PHYRX_EHT_SIG_USR_SU = 468 /* 0x1d4 */,
+ HAL_PHYRX_EHT_SIG_USR_MU_MIMO = 469 /* 0x1d5 */,
+ HAL_PHYRX_GENERIC_U_SIG = 470 /* 0x1d6 */,
+ HAL_PHYRX_GENERIC_EHT_SIG = 471 /* 0x1d7 */,
+ HAL_OVERWRITE_RESP_START = 472 /* 0x1d8 */,
+ HAL_OVERWRITE_RESP_PREAMBLE_INFO = 473 /* 0x1d9 */,
+ HAL_OVERWRITE_RESP_FRAME_INFO = 474 /* 0x1da */,
+ HAL_OVERWRITE_RESP_END = 475 /* 0x1db */,
+ HAL_RXPCU_EARLY_RX_INDICATION = 476 /* 0x1dc */,
+ HAL_MON_DROP = 477 /* 0x1dd */,
+ HAL_MACRX_MU_UPLINK_COMMON_SNIFF = 478 /* 0x1de */,
+ HAL_MACRX_MU_UPLINK_USER_SETUP_SNIFF = 479 /* 0x1df */,
+ HAL_MACRX_MU_UPLINK_USER_SEL_SNIFF = 480 /* 0x1e0 */,
+ HAL_MACRX_MU_UPLINK_FCS_STATUS_SNIFF = 481 /* 0x1e1 */,
+ HAL_MACTX_PREFETCH_CV_DMA = 482 /* 0x1e2 */,
+ HAL_MACTX_PREFETCH_CV_PER_USER = 483 /* 0x1e3 */,
+ HAL_PHYRX_OTHER_RECEIVE_INFO_ALL_SIGB_DETAILS = 484 /* 0x1e4 */,
+ HAL_MACTX_BF_PARAMS_UPDATE_COMMON = 485 /* 0x1e5 */,
+ HAL_MACTX_BF_PARAMS_UPDATE_PER_USER = 486 /* 0x1e6 */,
+ HAL_RANGING_USER_DETAILS = 487 /* 0x1e7 */,
+ HAL_PHYTX_CV_CORR_STATUS = 488 /* 0x1e8 */,
+ HAL_PHYTX_CV_CORR_COMMON = 489 /* 0x1e9 */,
+ HAL_PHYTX_CV_CORR_USER = 490 /* 0x1ea */,
+ HAL_MACTX_CV_CORR_COMMON = 491 /* 0x1eb */,
+ HAL_MACTX_CV_CORR_MAC_INFO_GROUP = 492 /* 0x1ec */,
+ HAL_BW_PUNCTURE_EVAL_WRAPPER = 493 /* 0x1ed */,
+ HAL_MACTX_RX_NOTIFICATION_FOR_PHY = 494 /* 0x1ee */,
+ HAL_MACTX_TX_NOTIFICATION_FOR_PHY = 495 /* 0x1ef */,
+ HAL_MACTX_MU_UPLINK_COMMON_PER_BW = 496 /* 0x1f0 */,
+ HAL_MACTX_MU_UPLINK_USER_SETUP_PER_BW = 497 /* 0x1f1 */,
+ HAL_RX_PPDU_END_USER_STATS_EXT2 = 498 /* 0x1f2 */,
+ HAL_FW2SW_MON = 499 /* 0x1f3 */,
+ HAL_WSI_DIRECT_MESSAGE = 500 /* 0x1f4 */,
+ HAL_MACTX_MLSR_PRE_SWITCH = 501 /* 0x1f5 */,
+ HAL_MACTX_MLSR_SWITCH = 502 /* 0x1f6 */,
+ HAL_MACTX_MLSR_SWITCH_BACK = 503 /* 0x1f7 */,
+ HAL_PHYTX_MLSR_SWITCH_ACK = 504 /* 0x1f8 */,
+ HAL_PHYTX_MLSR_SWITCH_BACK_ACK = 505 /* 0x1f9 */,
+ HAL_SPARE_REUSE_TAG_0 = 506 /* 0x1fa */,
+ HAL_SPARE_REUSE_TAG_1 = 507 /* 0x1fb */,
+ HAL_SPARE_REUSE_TAG_2 = 508 /* 0x1fc */,
+ HAL_SPARE_REUSE_TAG_3 = 509 /* 0x1fd */,
+ /* FIXME: Assign correct value for HAL_TCL_DATA_CMD */
+ HAL_TCL_DATA_CMD = 510,
+ HAL_TLV_BASE = 511 /* 0x1ff */,
+};
+
+#define HAL_TLV_HDR_TAG GENMASK(9, 1)
+#define HAL_TLV_HDR_LEN GENMASK(25, 10)
+#define HAL_TLV_USR_ID GENMASK(31, 26)
+
+#define HAL_TLV_ALIGN 4
+
+struct hal_tlv_hdr {
+ __le32 tl;
+ u8 value[];
+} __packed;
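+
+/* Illustrative sketch, not part of the hardware definition: walking a
+ * buffer of 32-bit TLVs with the masks above. Assumes le32_get_bits()
+ * and ALIGN() from <linux/bitfield.h> and <linux/align.h>; the helper
+ * names are hypothetical.
+ */
+static inline u16 hal_tlv_tag_example(const struct hal_tlv_hdr *tlv)
+{
+	/* The tag lives in bits [9:1] of 'tl' */
+	return le32_get_bits(tlv->tl, HAL_TLV_HDR_TAG);
+}
+
+static inline const struct hal_tlv_hdr *
+hal_tlv_next_example(const struct hal_tlv_hdr *tlv)
+{
+	u32 len = le32_get_bits(tlv->tl, HAL_TLV_HDR_LEN);
+
+	/* TLV payloads are padded out to HAL_TLV_ALIGN byte boundaries */
+	return (const void *)tlv->value + ALIGN(len, HAL_TLV_ALIGN);
+}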
+
+#define HAL_TLV_64_HDR_TAG GENMASK(9, 1)
+#define HAL_TLV_64_HDR_LEN GENMASK(21, 10)
+
+struct hal_tlv_64_hdr {
+ u64 tl;
+ u8 value[];
+} __packed;
+
+#define RX_MPDU_DESC_INFO0_MSDU_COUNT GENMASK(7, 0)
+#define RX_MPDU_DESC_INFO0_FRAG_FLAG BIT(8)
+#define RX_MPDU_DESC_INFO0_MPDU_RETRY BIT(9)
+#define RX_MPDU_DESC_INFO0_AMPDU_FLAG BIT(10)
+#define RX_MPDU_DESC_INFO0_BAR_FRAME BIT(11)
+#define RX_MPDU_DESC_INFO0_VALID_PN BIT(12)
+#define RX_MPDU_DESC_INFO0_RAW_MPDU BIT(13)
+#define RX_MPDU_DESC_INFO0_MORE_FRAG_FLAG BIT(14)
+#define RX_MPDU_DESC_INFO0_SRC_INFO GENMASK(26, 15)
+#define RX_MPDU_DESC_INFO0_MPDU_QOS_CTRL_VALID BIT(27)
+#define RX_MPDU_DESC_INFO0_TID GENMASK(31, 28)
+
+/* TODO revisit after meta data is concluded */
+#define RX_MPDU_DESC_META_DATA_PEER_ID GENMASK(15, 0)
+
+struct rx_mpdu_desc {
+ __le32 info0; /* %RX_MPDU_DESC_INFO */
+ __le32 peer_meta_data;
+} __packed;
+
+/* rx_mpdu_desc
+ * Producer: RXDMA
+ * Consumer: REO/SW/FW
+ *
+ * msdu_count
+ * The number of MSDUs within the MPDU
+ *
+ * fragment_flag
+ * When set, this MPDU is a fragment and REO should forward this
+ * fragment MPDU to the REO destination ring without any reorder
+ * checks, pn checks or bitmap update. This implies that REO is
+ * forwarding the pointer to the MSDU link descriptor.
+ *
+ * mpdu_retry_bit
+ * The retry bit setting from the MPDU header of the received frame
+ *
+ * ampdu_flag
+ * Indicates the MPDU was received as part of an A-MPDU.
+ *
+ * bar_frame
+ * Indicates the received frame is a BAR frame. After processing,
+ * this frame shall be pushed to SW or deleted.
+ *
+ * valid_pn
+ * When not set, REO will not perform a PN sequence number check.
+ *
+ * raw_mpdu
+ * Field only valid when first_msdu_in_mpdu_flag is set. Indicates
+ * that the MSDU buffer contains a 'RAW' MPDU. This 'RAW' MPDU
+ * might be spread out over multiple MSDU buffers.
+ *
+ * more_fragment_flag
+ * The More Fragment bit setting from the MPDU header of the
+ * received frame
+ *
+ * src_info
+ * Source (Virtual) device/interface info associated with this peer.
+ * This field gets passed on by REO to PPE in the EDMA descriptor.
+ *
+ * mpdu_qos_control_valid
+ * When set, the MPDU has a QoS control field
+ *
+ * tid
+ * Field only valid when mpdu_qos_control_valid is set
+ */
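+
+/* Illustrative sketch: extracting the fields documented above with
+ * le32_get_bits() from <linux/bitfield.h>; the helper names are
+ * hypothetical.
+ */
+static inline u8 rx_mpdu_desc_msdu_count_example(const struct rx_mpdu_desc *desc)
+{
+	return le32_get_bits(desc->info0, RX_MPDU_DESC_INFO0_MSDU_COUNT);
+}
+
+static inline u16 rx_mpdu_desc_peer_id_example(const struct rx_mpdu_desc *desc)
+{
+	return le32_get_bits(desc->peer_meta_data,
+			     RX_MPDU_DESC_META_DATA_PEER_ID);
+}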
+
+enum hal_rx_msdu_desc_reo_dest_ind {
+ HAL_RX_MSDU_DESC_REO_DEST_IND_TCL,
+ HAL_RX_MSDU_DESC_REO_DEST_IND_SW1,
+ HAL_RX_MSDU_DESC_REO_DEST_IND_SW2,
+ HAL_RX_MSDU_DESC_REO_DEST_IND_SW3,
+ HAL_RX_MSDU_DESC_REO_DEST_IND_SW4,
+ HAL_RX_MSDU_DESC_REO_DEST_IND_RELEASE,
+ HAL_RX_MSDU_DESC_REO_DEST_IND_FW,
+ HAL_RX_MSDU_DESC_REO_DEST_IND_SW5,
+ HAL_RX_MSDU_DESC_REO_DEST_IND_SW6,
+ HAL_RX_MSDU_DESC_REO_DEST_IND_SW7,
+ HAL_RX_MSDU_DESC_REO_DEST_IND_SW8,
+};
+
+#define RX_MSDU_DESC_INFO0_FIRST_MSDU_IN_MPDU BIT(0)
+#define RX_MSDU_DESC_INFO0_LAST_MSDU_IN_MPDU BIT(1)
+#define RX_MSDU_DESC_INFO0_MSDU_CONTINUATION BIT(2)
+#define RX_MSDU_DESC_INFO0_MSDU_LENGTH GENMASK(16, 3)
+#define RX_MSDU_DESC_INFO0_MSDU_DROP BIT(17)
+#define RX_MSDU_DESC_INFO0_VALID_SA BIT(18)
+#define RX_MSDU_DESC_INFO0_VALID_DA BIT(19)
+#define RX_MSDU_DESC_INFO0_DA_MCBC BIT(20)
+#define RX_MSDU_DESC_INFO0_L3_HDR_PAD_MSB BIT(21)
+#define RX_MSDU_DESC_INFO0_TCP_UDP_CHKSUM_FAIL BIT(22)
+#define RX_MSDU_DESC_INFO0_IP_CHKSUM_FAIL BIT(23)
+#define RX_MSDU_DESC_INFO0_FROM_DS BIT(24)
+#define RX_MSDU_DESC_INFO0_TO_DS BIT(25)
+#define RX_MSDU_DESC_INFO0_INTRA_BSS BIT(26)
+#define RX_MSDU_DESC_INFO0_DST_CHIP_ID GENMASK(28, 27)
+#define RX_MSDU_DESC_INFO0_DECAP_FORMAT GENMASK(30, 29)
+
+#define HAL_RX_MSDU_PKT_LENGTH_GET(val) \
+ (u32_get_bits((val), RX_MSDU_DESC_INFO0_MSDU_LENGTH))
+
+struct rx_msdu_desc {
+ __le32 info0;
+} __packed;
+
+/* rx_msdu_desc
+ *
+ * first_msdu_in_mpdu
+ * Indicates first msdu in mpdu.
+ *
+ * last_msdu_in_mpdu
+ * Indicates the last MSDU in the MPDU. This flag can only be set
+ * when 'msdu_continuation' is 0. This implies that when an MSDU
+ * is spread out over multiple buffers (msdu_continuation set),
+ * only the very last buffer of the MSDU can have
+ * 'last_msdu_in_mpdu' set.
+ *
+ * When both first_msdu_in_mpdu and last_msdu_in_mpdu are set,
+ * the MPDU that this MSDU belongs to only contains a single MSDU.
+ *
+ * msdu_continuation
+ * When set, this MSDU buffer was not able to hold the entire MSDU.
+ * The next buffer will therefore contain additional information
+ * related to this MSDU.
+ *
+ * msdu_length
+ * Field only valid in combination with 'first_msdu_in_mpdu'
+ * being set. Full MSDU length in bytes after decapsulation. This
+ * field is still valid for MPDU frames without A-MSDU; it then
+ * still represents the MSDU length after decapsulation. In case
+ * of RAW MPDUs, it indicates the length of the entire MPDU
+ * (without the FCS field).
+ *
+ * msdu_drop
+ * Indicates that REO shall drop this MSDU and not forward it to
+ * any other ring.
+ *
+ * valid_sa
+ * Indicates OLE found a valid SA entry for this MSDU.
+ *
+ * valid_da
+ * When set, OLE found a valid DA entry for this MSDU.
+ *
+ * da_mcbc
+ * Field only valid if valid_da is set. Indicates the DA address
+ * is a multicast or broadcast address for this MSDU.
+ *
+ * l3_header_padding_msb
+ * Passed on from 'RX_MSDU_END' TLV (only the MSB is reported as
+ * the LSB is always zero). Number of bytes padded to make sure
+ * that the L3 header will always start on a dword boundary.
+ *
+ * tcp_udp_checksum_fail
+ * Passed on from 'RX_ATTENTION' TLV
+ * Indicates that the computed checksum did not match the checksum
+ * in the TCP/UDP header.
+ *
+ * ip_checksum_fail
+ * Passed on from 'RX_ATTENTION' TLV
+ * Indicates that the computed checksum did not match the checksum
+ * in the IP header.
+ *
+ * from_DS
+ * Set if the 'from DS' bit is set in the frame control.
+ *
+ * to_DS
+ * Set if the 'to DS' bit is set in the frame control.
+ *
+ * intra_bss
+ * This packet needs intra-BSS routing by SW as the 'vdev_id'
+ * for the destination is the same as the 'vdev_id' that this
+ * MSDU was received on.
+ *
+ * dest_chip_id
+ * If intra_bss is set, copied by RXOLE/RXDMA from 'ADDR_SEARCH_ENTRY'
+ * to support intra-BSS routing with multi-chip multi-link operation.
+ * This indicates into which chip's TCL the packet should be queued.
+ *
+ * decap_format
+ * Indicates the format after decapsulation:
+ */
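+
+/* Illustrative sketch: per the notes above, an MPDU holds a single MSDU
+ * exactly when both first_msdu_in_mpdu and last_msdu_in_mpdu are set.
+ * The helper name is hypothetical.
+ */
+static inline bool
+rx_msdu_desc_is_single_msdu_example(const struct rx_msdu_desc *desc)
+{
+	u32 info = le32_to_cpu(desc->info0);
+
+	return (info & RX_MSDU_DESC_INFO0_FIRST_MSDU_IN_MPDU) &&
+	       (info & RX_MSDU_DESC_INFO0_LAST_MSDU_IN_MPDU);
+}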
+
+#define RX_MSDU_EXT_DESC_INFO0_REO_DEST_IND GENMASK(4, 0)
+#define RX_MSDU_EXT_DESC_INFO0_SERVICE_CODE GENMASK(13, 5)
+#define RX_MSDU_EXT_DESC_INFO0_PRIORITY_VALID BIT(14)
+#define RX_MSDU_EXT_DESC_INFO0_DATA_OFFSET GENMASK(26, 15)
+#define RX_MSDU_EXT_DESC_INFO0_SRC_LINK_ID GENMASK(29, 27)
+
+struct rx_msdu_ext_desc {
+ __le32 info0;
+} __packed;
+
+/* rx_msdu_ext_desc
+ *
+ * reo_destination_indication
+ * The ID of the REO exit ring where the MSDU frame shall be
+ * pushed after (MPDU level) reordering has finished.
+ *
+ * service_code
+ * Opaque service code between PPE and Wi-Fi
+ *
+ * priority_valid
+ *
+ * data_offset
+ * The offset to Rx packet data within the buffer (including
+ * Rx DMA offset programming and L3 header padding inserted
+ * by Rx OLE).
+ *
+ * src_link_id
+ * Set to the link ID of the PMAC that received the frame
+ */
+
+enum hal_reo_dest_ring_buffer_type {
+ HAL_REO_DEST_RING_BUFFER_TYPE_MSDU,
+ HAL_REO_DEST_RING_BUFFER_TYPE_LINK_DESC,
+};
+
+enum hal_reo_dest_ring_push_reason {
+ HAL_REO_DEST_RING_PUSH_REASON_ERR_DETECTED,
+ HAL_REO_DEST_RING_PUSH_REASON_ROUTING_INSTRUCTION,
+};
+
+enum hal_reo_dest_ring_error_code {
+ HAL_REO_DEST_RING_ERROR_CODE_DESC_ADDR_ZERO,
+ HAL_REO_DEST_RING_ERROR_CODE_DESC_INVALID,
+ HAL_REO_DEST_RING_ERROR_CODE_AMPDU_IN_NON_BA,
+ HAL_REO_DEST_RING_ERROR_CODE_NON_BA_DUPLICATE,
+ HAL_REO_DEST_RING_ERROR_CODE_BA_DUPLICATE,
+ HAL_REO_DEST_RING_ERROR_CODE_FRAME_2K_JUMP,
+ HAL_REO_DEST_RING_ERROR_CODE_BAR_2K_JUMP,
+ HAL_REO_DEST_RING_ERROR_CODE_FRAME_OOR,
+ HAL_REO_DEST_RING_ERROR_CODE_BAR_OOR,
+ HAL_REO_DEST_RING_ERROR_CODE_NO_BA_SESSION,
+ HAL_REO_DEST_RING_ERROR_CODE_FRAME_SN_EQUALS_SSN,
+ HAL_REO_DEST_RING_ERROR_CODE_PN_CHECK_FAILED,
+ HAL_REO_DEST_RING_ERROR_CODE_2K_ERR_FLAG_SET,
+ HAL_REO_DEST_RING_ERROR_CODE_PN_ERR_FLAG_SET,
+ HAL_REO_DEST_RING_ERROR_CODE_DESC_BLOCKED,
+ HAL_REO_DEST_RING_ERROR_CODE_MAX,
+};
+
+#define HAL_REO_DEST_RING_INFO0_BUFFER_TYPE BIT(0)
+#define HAL_REO_DEST_RING_INFO0_PUSH_REASON GENMASK(2, 1)
+#define HAL_REO_DEST_RING_INFO0_ERROR_CODE GENMASK(7, 3)
+#define HAL_REO_DEST_RING_INFO0_MSDU_DATA_SIZE GENMASK(11, 8)
+#define HAL_REO_DEST_RING_INFO0_SW_EXCEPTION BIT(12)
+#define HAL_REO_DEST_RING_INFO0_SRC_LINK_ID GENMASK(15, 13)
+#define HAL_REO_DEST_RING_INFO0_SIGNATURE GENMASK(19, 16)
+#define HAL_REO_DEST_RING_INFO0_RING_ID GENMASK(27, 20)
+#define HAL_REO_DEST_RING_INFO0_LOOPING_COUNT GENMASK(31, 28)
+
+struct hal_reo_dest_ring {
+ struct ath12k_buffer_addr buf_addr_info;
+ struct rx_mpdu_desc rx_mpdu_info;
+ struct rx_msdu_desc rx_msdu_info;
+ __le32 buf_va_lo;
+ __le32 buf_va_hi;
+ __le32 info0; /* %HAL_REO_DEST_RING_INFO0_ */
+} __packed;
+
+/* hal_reo_dest_ring
+ *
+ * Producer: RXDMA
+ * Consumer: REO/SW/FW
+ *
+ * buf_addr_info
+ * Details of the physical address of a buffer or MSDU
+ * link descriptor.
+ *
+ * rx_mpdu_info
+ * General information related to the MPDU that is passed
+ * on from REO entrance ring to the REO destination ring.
+ *
+ * rx_msdu_info
+ * General information related to the MSDU that is passed
+ * on from RXDMA all the way to the REO destination ring.
+ *
+ * buf_va_lo
+ * Field only valid if Reo_dest_buffer_type is set to MSDU_buf_address
+ * Lower 32 bits of the 64-bit virtual address corresponding
+ * to Buf_or_link_desc_addr_info
+ *
+ * buf_va_hi
+ * Upper 32 bits of the 64-bit virtual address corresponding
+ * to Buf_or_link_desc_addr_info
+ *
+ * buffer_type
+ * Indicates the type of address provided in the buf_addr_info.
+ * Values are defined in enum %HAL_REO_DEST_RING_BUFFER_TYPE_.
+ *
+ * push_reason
+ * Reason for pushing this frame to this exit ring. Values are
+ * defined in enum %HAL_REO_DEST_RING_PUSH_REASON_.
+ *
+ * error_code
+ * Valid only when 'push_reason' indicates an error was detected.
+ * All error codes are defined in enum %HAL_REO_DEST_RING_ERROR_CODE_.
+ *
+ * captured_msdu_data_size
+ * The number of following REO_DESTINATION STRUCTs that have
+ * been replaced with msdu_data extracted from the msdu_buffer
+ * and copied into the ring for easy FW/SW access.
+ *
+ * sw_exception
+ * This field has the same setting as the SW_exception field
+ * in the corresponding REO_entrance_ring descriptor.
+ * When set, the REO entrance descriptor is generated by FW,
+ * and the MPDU was processed in the following way:
+ * - NO re-order function is needed.
+ * - MPDU delinking is determined by the setting of Entrance
+ * ring field: SW_exception_mpdu_delink
+ * - Destination ring selection is based on the setting of
+ * the Entrance ring field SW_exception_destination_ring_valid
+ *
+ * src_link_id
+ * Set to the link ID of the PMAC that received the frame
+ *
+ * signature
+ * Set to value 0x8 when msdu capture mode is enabled for this ring
+ *
+ * ring_id
+ * The buffer pointer ring id.
+ * 0 - Idle ring
+ * 1 - N refers to other rings.
+ *
+ * looping_count
+ * Indicates the number of times the producer of entries into
+ * this ring has looped around the ring.
+ */
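+
+/* Illustrative sketch: classifying a REO destination ring entry using
+ * the INFO0 fields documented above. The helper name is hypothetical;
+ * le32_get_bits() is from <linux/bitfield.h>.
+ */
+static inline bool
+hal_reo_dest_ring_is_error_example(const struct hal_reo_dest_ring *desc,
+				   enum hal_reo_dest_ring_error_code *code)
+{
+	u32 reason = le32_get_bits(desc->info0,
+				   HAL_REO_DEST_RING_INFO0_PUSH_REASON);
+
+	if (reason != HAL_REO_DEST_RING_PUSH_REASON_ERR_DETECTED)
+		return false;
+
+	/* error_code is only meaningful when an error was detected */
+	*code = le32_get_bits(desc->info0,
+			      HAL_REO_DEST_RING_INFO0_ERROR_CODE);
+	return true;
+}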
+
+#define HAL_REO_TO_PPE_RING_INFO0_DATA_LENGTH GENMASK(15, 0)
+#define HAL_REO_TO_PPE_RING_INFO0_DATA_OFFSET GENMASK(23, 16)
+#define HAL_REO_TO_PPE_RING_INFO0_POOL_ID GENMASK(28, 24)
+#define HAL_REO_TO_PPE_RING_INFO0_PREHEADER BIT(29)
+#define HAL_REO_TO_PPE_RING_INFO0_TSO_EN BIT(30)
+#define HAL_REO_TO_PPE_RING_INFO0_MORE BIT(31)
+
+struct hal_reo_to_ppe_ring {
+ __le32 buffer_addr;
+ __le32 info0; /* %HAL_REO_TO_PPE_RING_INFO0_ */
+} __packed;
+
+/* hal_reo_to_ppe_ring
+ *
+ * Producer: REO
+ * Consumer: PPE
+ *
+ * buf_addr_info
+ * Details of the physical address of a buffer or MSDU
+ * link descriptor.
+ *
+ * data_length
+ * Length of valid data in bytes
+ *
+ * data_offset
+ * Offset to the data from buffer pointer. Can be used to
+ * strip header in the data for tunnel termination etc.
+ *
+ * pool_id
+ * REO has global configuration register for this field.
+ * It may have several free buffer pools, each
+ * RX-Descriptor ring can fetch free buffer from specific
+ * buffer pool; pool id will indicate which pool the buffer
+ * will be released to; POOL_ID Zero returned to SW
+ *
+ * preheader
+ * Disabled: 0 (Default)
+ * Enabled: 1
+ *
+ * tso_en
+ * Disabled: 0 (Default)
+ * Enabled: 1
+ *
+ * more
+ * More Segments followed
+ */
+
+enum hal_reo_entr_rxdma_push_reason {
+ HAL_REO_ENTR_RING_RXDMA_PUSH_REASON_ERR_DETECTED,
+ HAL_REO_ENTR_RING_RXDMA_PUSH_REASON_ROUTING_INSTRUCTION,
+ HAL_REO_ENTR_RING_RXDMA_PUSH_REASON_RX_FLUSH,
+};
+
+enum hal_reo_entr_rxdma_ecode {
+ HAL_REO_ENTR_RING_RXDMA_ECODE_OVERFLOW_ERR,
+ HAL_REO_ENTR_RING_RXDMA_ECODE_MPDU_LEN_ERR,
+ HAL_REO_ENTR_RING_RXDMA_ECODE_FCS_ERR,
+ HAL_REO_ENTR_RING_RXDMA_ECODE_DECRYPT_ERR,
+ HAL_REO_ENTR_RING_RXDMA_ECODE_TKIP_MIC_ERR,
+ HAL_REO_ENTR_RING_RXDMA_ECODE_UNECRYPTED_ERR,
+ HAL_REO_ENTR_RING_RXDMA_ECODE_MSDU_LEN_ERR,
+ HAL_REO_ENTR_RING_RXDMA_ECODE_MSDU_LIMIT_ERR,
+ HAL_REO_ENTR_RING_RXDMA_ECODE_WIFI_PARSE_ERR,
+ HAL_REO_ENTR_RING_RXDMA_ECODE_AMSDU_PARSE_ERR,
+ HAL_REO_ENTR_RING_RXDMA_ECODE_SA_TIMEOUT_ERR,
+ HAL_REO_ENTR_RING_RXDMA_ECODE_DA_TIMEOUT_ERR,
+ HAL_REO_ENTR_RING_RXDMA_ECODE_FLOW_TIMEOUT_ERR,
+ HAL_REO_ENTR_RING_RXDMA_ECODE_FLUSH_REQUEST_ERR,
+ HAL_REO_ENTR_RING_RXDMA_ECODE_AMSDU_FRAG_ERR,
+ HAL_REO_ENTR_RING_RXDMA_ECODE_MAX,
+};
+
+enum hal_rx_reo_dest_ring {
+ HAL_RX_REO_DEST_RING_TCL,
+ HAL_RX_REO_DEST_RING_SW1,
+ HAL_RX_REO_DEST_RING_SW2,
+ HAL_RX_REO_DEST_RING_SW3,
+ HAL_RX_REO_DEST_RING_SW4,
+ HAL_RX_REO_DEST_RING_RELEASE,
+ HAL_RX_REO_DEST_RING_FW,
+ HAL_RX_REO_DEST_RING_SW5,
+ HAL_RX_REO_DEST_RING_SW6,
+ HAL_RX_REO_DEST_RING_SW7,
+ HAL_RX_REO_DEST_RING_SW8,
+};
+
+#define HAL_REO_ENTR_RING_INFO0_QUEUE_ADDR_HI GENMASK(7, 0)
+#define HAL_REO_ENTR_RING_INFO0_MPDU_BYTE_COUNT GENMASK(21, 8)
+#define HAL_REO_ENTR_RING_INFO0_DEST_IND GENMASK(26, 22)
+#define HAL_REO_ENTR_RING_INFO0_FRAMELESS_BAR BIT(27)
+
+#define HAL_REO_ENTR_RING_INFO1_RXDMA_PUSH_REASON GENMASK(1, 0)
+#define HAL_REO_ENTR_RING_INFO1_RXDMA_ERROR_CODE GENMASK(6, 2)
+#define HAL_REO_ENTR_RING_INFO1_MPDU_FRAG_NUM GENMASK(10, 7)
+#define HAL_REO_ENTR_RING_INFO1_SW_EXCEPTION BIT(11)
+#define HAL_REO_ENTR_RING_INFO1_SW_EXCEPT_MPDU_DELINK BIT(12)
+#define HAL_REO_ENTR_RING_INFO1_SW_EXCEPTION_RING_VLD BIT(13)
+#define HAL_REO_ENTR_RING_INFO1_SW_EXCEPTION_RING GENMASK(18, 14)
+#define HAL_REO_ENTR_RING_INFO1_MPDU_SEQ_NUM GENMASK(30, 19)
+
+#define HAL_REO_ENTR_RING_INFO2_PHY_PPDU_ID GENMASK(15, 0)
+#define HAL_REO_ENTR_RING_INFO2_SRC_LINK_ID GENMASK(18, 16)
+#define HAL_REO_ENTR_RING_INFO2_RING_ID GENMASK(27, 20)
+#define HAL_REO_ENTR_RING_INFO2_LOOPING_COUNT GENMASK(31, 28)
+
+struct hal_reo_entrance_ring {
+ struct ath12k_buffer_addr buf_addr_info;
+ struct rx_mpdu_desc rx_mpdu_info;
+ __le32 queue_addr_lo;
+ __le32 info0; /* %HAL_REO_ENTR_RING_INFO0_ */
+ __le32 info1; /* %HAL_REO_ENTR_RING_INFO1_ */
+ __le32 info2; /* %HAL_REO_ENTR_RING_INFO2_ */
+} __packed;
+
+/* hal_reo_entrance_ring
+ *
+ * Producer: RXDMA
+ * Consumer: REO
+ *
+ * buf_addr_info
+ * Details of the physical address of a buffer or MSDU
+ * link descriptor.
+ *
+ * rx_mpdu_info
+ * General information related to the MPDU that is passed
+ * on from REO entrance ring to the REO destination ring.
+ *
+ * queue_addr_lo
+ * Address (lower 32 bits) of the REO queue descriptor.
+ *
+ * queue_addr_hi
+ * Address (upper 8 bits) of the REO queue descriptor.
+ *
+ * mpdu_byte_count
+ * An approximation of the number of bytes received in this MPDU.
+ * Used to keep stats on the amount of data flowing
+ * through a queue.
+ *
+ * reo_destination_indication
+ * The ID of the REO exit ring where the MSDU frame shall be
+ * pushed after (MPDU level) reordering has finished. Values are
+ * defined in enum %HAL_RX_MSDU_DESC_REO_DEST_IND_.
+ *
+ * frameless_bar
+ * Indicates that this REO entrance ring struct contains BAR info
+ * from a multi TID BAR frame. The original multi TID BAR frame
+ * itself contained all the REO info for the first TID, but all
+ * the subsequent TID info and their linkage to the REO descriptors
+ * is passed down as 'frameless' BAR info.
+ *
+ * The only fields valid in this descriptor when this bit is set
+ * are queue_addr_lo, queue_addr_hi, mpdu_sequence_number,
+ * bar_frame and peer_meta_data.
+ *
+ * rxdma_push_reason
+ * Reason for pushing this frame to this exit ring. Values are
+ * defined in enum %HAL_REO_ENTR_RING_RXDMA_PUSH_REASON_.
+ *
+ * rxdma_error_code
+ * Valid only when 'rxdma_push_reason' indicates an error was
+ * detected. All error codes are defined in enum
+ * %HAL_REO_ENTR_RING_RXDMA_ECODE_.
+ *
+ * mpdu_fragment_number
+ * Field only valid when Reo_level_mpdu_frame_info.
+ * Rx_mpdu_desc_info_details.Fragment_flag is set.
+ *
+ * sw_exception
+ * When not set, REO is performing all its default MPDU processing
+ * operations.
+ * When set, this REO entrance descriptor is generated by FW, and
+ * should be processed as an exception. This implies:
+ * NO re-order function is needed.
+ * MPDU delinking is determined by the setting of field
+ * SW_exception_mpdu_delink
+ *
+ * sw_exception_mpdu_delink
+ * Field only valid when SW_exception is set.
+ * 1'b0: REO should NOT delink the MPDU, and thus pass this
+ * MPDU on to the destination ring as is. This implies that
+ * in the REO_DESTINATION_RING struct field
+ * Buf_or_link_desc_addr_info should point to an MSDU link
+ * descriptor
+ * 1'b1: REO should perform the normal MPDU delink into MSDU operations.
+ *
+ * sw_exception_dest_ring
+ * Field only valid when fields SW_exception and
+ * SW_exception_destination_ring_valid are set. Values are
+ * defined in %HAL_RX_REO_DEST_RING_.
+ *
+ * mpdu_seq_number
+ * The field can have two different meanings based on the setting
+ * of sub-field Reo_level_mpdu_frame_info.
+ * Rx_mpdu_desc_info_details.BAR_frame:
+ * When 'BAR_frame' is NOT set:
+ * The MPDU sequence number of the received frame.
+ * When 'BAR_frame' is set:
+ * The MPDU start sequence number from the BAR frame.
+ *
+ * phy_ppdu_id
+ * A PPDU counter value that PHY increments for every PPDU received
+ *
+ * src_link_id
+ * Set to the link ID of the PMAC that received the frame
+ *
+ * ring_id
+ * The buffer pointer ring id.
+ * 0 - Idle ring
+ * 1 - N refers to other rings.
+ *
+ * looping_count
+ * Indicates the number of times the producer of entries into
+ * this ring has looped around the ring.
+ */
+
+#define HAL_REO_CMD_HDR_INFO0_CMD_NUMBER GENMASK(15, 0)
+#define HAL_REO_CMD_HDR_INFO0_STATUS_REQUIRED BIT(16)
+
+struct hal_reo_cmd_hdr {
+ __le32 info0;
+} __packed;
+
+#define HAL_REO_GET_QUEUE_STATS_INFO0_QUEUE_ADDR_HI GENMASK(7, 0)
+#define HAL_REO_GET_QUEUE_STATS_INFO0_CLEAR_STATS BIT(8)
+
+struct hal_reo_get_queue_stats {
+ struct hal_reo_cmd_hdr cmd;
+ __le32 queue_addr_lo;
+ __le32 info0;
+ __le32 rsvd0[6];
+ __le32 tlv64_pad;
+} __packed;
+
+/* hal_reo_get_queue_stats
+ * Producer: SW
+ * Consumer: REO
+ *
+ * cmd
+ * Details for command execution tracking purposes.
+ *
+ * queue_addr_lo
+ * Address (lower 32 bits) of the REO queue descriptor.
+ *
+ * queue_addr_hi
+ * Address (upper 8 bits) of the REO queue descriptor.
+ *
+ * clear_stats
+ * Clear stats setting. When set, clear the stats after
+ * generating the status.
+ *
+ * The following stats will be cleared:
+ * Timeout_count
+ * Forward_due_to_bar_count
+ * Duplicate_count
+ * Frames_in_order_count
+ * BAR_received_count
+ * MPDU_Frames_processed_count
+ * MSDU_Frames_processed_count
+ * Total_processed_byte_count
+ * Late_receive_MPDU_count
+ * window_jump_2k
+ * Hole_count
+ */
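+
+/* Illustrative sketch: filling a GET_QUEUE_STATS command as described
+ * above, with the 40-bit queue descriptor address split into its low
+ * 32 bits and high 8 bits. The function name and parameters are
+ * hypothetical; le32_encode_bits() is from <linux/bitfield.h>.
+ */
+static inline void
+hal_reo_get_queue_stats_fill_example(struct hal_reo_get_queue_stats *cmd,
+				     dma_addr_t paddr, bool clear)
+{
+	cmd->queue_addr_lo = cpu_to_le32(lower_32_bits(paddr));
+	cmd->info0 = le32_encode_bits(upper_32_bits(paddr),
+				      HAL_REO_GET_QUEUE_STATS_INFO0_QUEUE_ADDR_HI);
+	if (clear)
+		cmd->info0 |= le32_encode_bits(1,
+				HAL_REO_GET_QUEUE_STATS_INFO0_CLEAR_STATS);
+}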
+
+#define HAL_REO_FLUSH_QUEUE_INFO0_DESC_ADDR_HI GENMASK(7, 0)
+#define HAL_REO_FLUSH_QUEUE_INFO0_BLOCK_DESC_ADDR BIT(8)
+#define HAL_REO_FLUSH_QUEUE_INFO0_BLOCK_RESRC_IDX GENMASK(10, 9)
+
+struct hal_reo_flush_queue {
+ struct hal_reo_cmd_hdr cmd;
+ __le32 desc_addr_lo;
+ __le32 info0;
+ __le32 rsvd0[6];
+} __packed;
+
+#define HAL_REO_FLUSH_CACHE_INFO0_CACHE_ADDR_HI GENMASK(7, 0)
+#define HAL_REO_FLUSH_CACHE_INFO0_FWD_ALL_MPDUS BIT(8)
+#define HAL_REO_FLUSH_CACHE_INFO0_RELEASE_BLOCK_IDX BIT(9)
+#define HAL_REO_FLUSH_CACHE_INFO0_BLOCK_RESRC_IDX GENMASK(11, 10)
+#define HAL_REO_FLUSH_CACHE_INFO0_FLUSH_WO_INVALIDATE BIT(12)
+#define HAL_REO_FLUSH_CACHE_INFO0_BLOCK_CACHE_USAGE BIT(13)
+#define HAL_REO_FLUSH_CACHE_INFO0_FLUSH_ALL BIT(14)
+
+struct hal_reo_flush_cache {
+ struct hal_reo_cmd_hdr cmd;
+ __le32 cache_addr_lo;
+ __le32 info0;
+ __le32 rsvd0[6];
+} __packed;
+
+#define HAL_TCL_DATA_CMD_INFO0_CMD_TYPE BIT(0)
+#define HAL_TCL_DATA_CMD_INFO0_DESC_TYPE BIT(1)
+#define HAL_TCL_DATA_CMD_INFO0_BANK_ID GENMASK(7, 2)
+#define HAL_TCL_DATA_CMD_INFO0_TX_NOTIFY_FRAME GENMASK(10, 8)
+#define HAL_TCL_DATA_CMD_INFO0_HDR_LEN_READ_SEL BIT(11)
+#define HAL_TCL_DATA_CMD_INFO0_BUF_TIMESTAMP GENMASK(30, 12)
+#define HAL_TCL_DATA_CMD_INFO0_BUF_TIMESTAMP_VLD BIT(31)
+
+#define HAL_TCL_DATA_CMD_INFO1_CMD_NUM GENMASK(31, 16)
+
+#define HAL_TCL_DATA_CMD_INFO2_DATA_LEN GENMASK(15, 0)
+#define HAL_TCL_DATA_CMD_INFO2_IP4_CKSUM_EN BIT(16)
+#define HAL_TCL_DATA_CMD_INFO2_UDP4_CKSUM_EN BIT(17)
+#define HAL_TCL_DATA_CMD_INFO2_UDP6_CKSUM_EN BIT(18)
+#define HAL_TCL_DATA_CMD_INFO2_TCP4_CKSUM_EN BIT(19)
+#define HAL_TCL_DATA_CMD_INFO2_TCP6_CKSUM_EN BIT(20)
+#define HAL_TCL_DATA_CMD_INFO2_TO_FW BIT(21)
+#define HAL_TCL_DATA_CMD_INFO2_PKT_OFFSET GENMASK(31, 23)
+
+#define HAL_TCL_DATA_CMD_INFO3_TID_OVERWRITE BIT(0)
+#define HAL_TCL_DATA_CMD_INFO3_FLOW_OVERRIDE_EN BIT(1)
+#define HAL_TCL_DATA_CMD_INFO3_CLASSIFY_INFO_SEL GENMASK(3, 2)
+#define HAL_TCL_DATA_CMD_INFO3_TID GENMASK(7, 4)
+#define HAL_TCL_DATA_CMD_INFO3_FLOW_OVERRIDE BIT(8)
+#define HAL_TCL_DATA_CMD_INFO3_PMAC_ID GENMASK(10, 9)
+#define HAL_TCL_DATA_CMD_INFO3_MSDU_COLOR GENMASK(12, 11)
+#define HAL_TCL_DATA_CMD_INFO3_VDEV_ID GENMASK(31, 24)
+
+#define HAL_TCL_DATA_CMD_INFO4_SEARCH_INDEX GENMASK(19, 0)
+#define HAL_TCL_DATA_CMD_INFO4_CACHE_SET_NUM GENMASK(23, 20)
+#define HAL_TCL_DATA_CMD_INFO4_IDX_LOOKUP_OVERRIDE BIT(24)
+
+#define HAL_TCL_DATA_CMD_INFO5_RING_ID GENMASK(27, 20)
+#define HAL_TCL_DATA_CMD_INFO5_LOOPING_COUNT GENMASK(31, 28)
+
+enum hal_encrypt_type {
+ HAL_ENCRYPT_TYPE_WEP_40,
+ HAL_ENCRYPT_TYPE_WEP_104,
+ HAL_ENCRYPT_TYPE_TKIP_NO_MIC,
+ HAL_ENCRYPT_TYPE_WEP_128,
+ HAL_ENCRYPT_TYPE_TKIP_MIC,
+ HAL_ENCRYPT_TYPE_WAPI,
+ HAL_ENCRYPT_TYPE_CCMP_128,
+ HAL_ENCRYPT_TYPE_OPEN,
+ HAL_ENCRYPT_TYPE_CCMP_256,
+ HAL_ENCRYPT_TYPE_GCMP_128,
+ HAL_ENCRYPT_TYPE_AES_GCMP_256,
+ HAL_ENCRYPT_TYPE_WAPI_GCM_SM4,
+};
+
+enum hal_tcl_encap_type {
+ HAL_TCL_ENCAP_TYPE_RAW,
+ HAL_TCL_ENCAP_TYPE_NATIVE_WIFI,
+ HAL_TCL_ENCAP_TYPE_ETHERNET,
+ HAL_TCL_ENCAP_TYPE_802_3 = 3,
+};
+
+enum hal_tcl_desc_type {
+ HAL_TCL_DESC_TYPE_BUFFER,
+ HAL_TCL_DESC_TYPE_EXT_DESC,
+};
+
+enum hal_wbm_htt_tx_comp_status {
+ HAL_WBM_REL_HTT_TX_COMP_STATUS_OK,
+ HAL_WBM_REL_HTT_TX_COMP_STATUS_DROP,
+ HAL_WBM_REL_HTT_TX_COMP_STATUS_TTL,
+ HAL_WBM_REL_HTT_TX_COMP_STATUS_REINJ,
+ HAL_WBM_REL_HTT_TX_COMP_STATUS_INSPECT,
+ HAL_WBM_REL_HTT_TX_COMP_STATUS_MEC_NOTIFY,
+ HAL_WBM_REL_HTT_TX_COMP_STATUS_MAX,
+};
+
+struct hal_tcl_data_cmd {
+ struct ath12k_buffer_addr buf_addr_info;
+ __le32 info0;
+ __le32 info1;
+ __le32 info2;
+ __le32 info3;
+ __le32 info4;
+ __le32 info5;
+} __packed;
+
+/* hal_tcl_data_cmd
+ *
+ * buf_addr_info
+ * Details of the physical address of a buffer or MSDU
+ * link descriptor.
+ *
+ * tcl_cmd_type
+ * Used to select the type of TCL command descriptor.
+ *
+ * desc_type
+ * Indicates the type of address provided in the buf_addr_info.
+ * Values are defined in enum %HAL_TCL_DESC_TYPE_.
+ *
+ * bank_id
+ * used to select one of the TCL register banks for fields removed
+ * from 'TCL_DATA_CMD' that do not change often within one virtual
+ * device or a set of virtual devices:
+ *
+ * tx_notify_frame
+ * TCL copies this value to 'TQM_ENTRANCE_RING' field FW_tx_notify_frame.
+ *
+ * hdr_length_read_sel
+ * used to select the per 'encap_type' register set for MSDU header
+ * read length
+ *
+ * buffer_timestamp
+ * buffer_timestamp_valid
+ * Frame system entrance timestamp. It shall be filled in by the
+ * first module (SW, TCL or TQM) that sees the frame.
+ *
+ * cmd_num
+ * This number can be used to match against status.
+ *
+ * data_length
+ * MSDU length in case of direct descriptor. Length of link
+ * extension descriptor in case of Link extension descriptor.
+ *
+ * *_checksum_en
+ * Enable checksum replacement for ipv4, udp_over_ipv4, ipv6,
+ * udp_over_ipv6, tcp_over_ipv4 and tcp_over_ipv6.
+ *
+ * to_fw
+ * Forward packet to FW along with classification result. The
+ * packet will not be forwarded to TQM when this bit is set.
+ * 1'b0: Use classification result to forward the packet.
+ * 1'b1: Override classification result & forward packet only to fw
+ *
+ * packet_offset
+ * Packet offset from Metadata in case of direct buffer descriptor.
+ *
+ * hlos_tid_overwrite
+ *
+ * When set, TCL shall ignore the IP DSCP and VLAN PCP
+ * fields and use HLOS_TID as the final TID. Otherwise TCL
+ * shall consider the DSCP and PCP fields as well as HLOS_TID
+ * and choose a final TID based on the configured priority
+ *
+ * flow_override_enable
+ * TCL uses this to select the flow pointer from the peer table,
+ * which can be overridden by SW for pre-encrypted raw WiFi packets
+ * that cannot be parsed for UDP or for other MLO
+ * 0 - FP_PARSE_IP: Use the flow-pointer based on parsing the IPv4
+ * or IPv6 header.
+ * 1 - FP_USE_OVERRIDE: Use the who_classify_info_sel and
+ * flow_override fields to select the flow-pointer
+ *
+ * who_classify_info_sel
+ * Field only valid when flow_override_enable is set to FP_USE_OVERRIDE.
+ * This field is used to select one of the 'WHO_CLASSIFY_INFO's in the
+ * peer table in case more than 2 flows are mapped to a single TID.
+ * 0: To choose Flow 0 and 1 of any TID use this value.
+ * 1: To choose Flow 2 and 3 of any TID use this value.
+ * 2: To choose Flow 4 and 5 of any TID use this value.
+ * 3: To choose Flow 6 and 7 of any TID use this value.
+ *
+ * If who_classify_info_sel is not in sync with the num_tx_classify_info
+ * field from address search, then TCL will set 'who_classify_info_sel'
+ * to 0 and use flows 0 and 1.
+ *
+ * hlos_tid
+ * HLOS MSDU priority
+ * Field is used when HLOS_TID_overwrite is set.
+ *
+ * flow_override
+ * Field only valid when flow_override_enable is set to FP_USE_OVERRIDE
+ * TCL uses this to select the flow pointer from the peer table,
+ * which can be overridden by SW for pre-encrypted raw WiFi packets
+ * that cannot be parsed for UDP or for other MLO
+ * 0 - FP_USE_NON_UDP: Use the non-UDP flow pointer (flow 0)
+ * 1 - FP_USE_UDP: Use the UDP flow pointer (flow 1)
+ *
+ * pmac_id
+ * TCL uses this PMAC_ID in address search, i.e., while
+ * finding matching entry for the packet in AST corresponding
+ * to given PMAC_ID
+ *
+ * If PMAC ID is all 1s (=> value 3), it indicates wildcard
+ * match for any PMAC
+ *
+ * vdev_id
+ * Virtual device ID to check against the address search entry to
+ * avoid security issues from transmitting packets from an incorrect
+ * virtual device
+ *
+ * search_index
+ * The index that will be used for index based address or
+ * flow search. The field is valid when 'search_type' is 1 or 2.
+ *
+ * cache_set_num
+ *
+ * Cache set number that should be used to cache the index
+ * based search results, for address and flow search. This
+ * value should be equal to LSB four bits of the hash value of
+ * match data, in case of search index points to an entry which
+ * may be used in content based search also. The value can be
+ * anything when the entry pointed by search index will not be
+ * used for content based search.
+ *
+ * index_loop_override
+ * When set, address search and packet routing is forced to use
+ * 'search_index' instead of following the register configuration
+ * selected by Bank_id.
+ *
+ * ring_id
+ * The buffer pointer ring ID.
+ * 0 refers to the IDLE ring
+ * 1 - N refers to other rings
+ *
+ * looping_count
+ *
+ * A count value that indicates the number of times the
+ * producer of entries into the Ring has looped around the
+ * ring.
+ *
+ * At initialization time, this value is set to 0. On the
+ * first loop, this value is set to 1. After the max value
+ * allowed by the number of bits for this field is reached,
+ * the count value continues with 0 again.
+ *
+ * In case SW is the consumer of the ring entries, it can
+ * use this field to figure out up to where the producer of
+ * entries has created new entries. This eliminates the need to
+ * check where the 'head pointer' of the ring is located once
+ * the SW starts processing an interrupt indicating that new
+ * entries have been put into this ring.
+ *
+ * Also note that SW, if it wants, only needs to look at the
+ * LSB of this count value.
+ */
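+
+/* Illustrative sketch: programming the data length and the checksum
+ * replacement enables of a TCL data command as documented above. The
+ * function name and parameters are hypothetical; le32_encode_bits()
+ * is from <linux/bitfield.h>.
+ */
+static inline void
+hal_tcl_data_cmd_len_example(struct hal_tcl_data_cmd *cmd, u16 data_len,
+			     bool csum_offload)
+{
+	u32 csum = 0;
+
+	if (csum_offload)
+		csum = HAL_TCL_DATA_CMD_INFO2_IP4_CKSUM_EN |
+		       HAL_TCL_DATA_CMD_INFO2_UDP4_CKSUM_EN |
+		       HAL_TCL_DATA_CMD_INFO2_UDP6_CKSUM_EN |
+		       HAL_TCL_DATA_CMD_INFO2_TCP4_CKSUM_EN |
+		       HAL_TCL_DATA_CMD_INFO2_TCP6_CKSUM_EN;
+
+	cmd->info2 = le32_encode_bits(data_len,
+				      HAL_TCL_DATA_CMD_INFO2_DATA_LEN) |
+		     cpu_to_le32(csum);
+}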
+
+#define HAL_TCL_DESC_LEN sizeof(struct hal_tcl_data_cmd)
+
+#define HAL_TX_MSDU_EXT_INFO0_BUF_PTR_LO GENMASK(31, 0)
+
+#define HAL_TX_MSDU_EXT_INFO1_BUF_PTR_HI GENMASK(7, 0)
+#define HAL_TX_MSDU_EXT_INFO1_EXTN_OVERRIDE BIT(8)
+#define HAL_TX_MSDU_EXT_INFO1_ENCAP_TYPE GENMASK(10, 9)
+#define HAL_TX_MSDU_EXT_INFO1_ENCRYPT_TYPE GENMASK(14, 11)
+#define HAL_TX_MSDU_EXT_INFO1_BUF_LEN GENMASK(31, 16)
+
+struct hal_tx_msdu_ext_desc {
+ __le32 rsvd0[6];
+ __le32 info0;
+ __le32 info1;
+ __le32 rsvd1[10];
+};
+
+struct hal_tcl_gse_cmd {
+ __le32 ctrl_buf_addr_lo;
+ __le32 info0;
+ __le32 meta_data[2];
+ __le32 rsvd0[2];
+ __le32 info1;
+} __packed;
+
+/* hal_tcl_gse_cmd
+ *
+ * ctrl_buf_addr_lo, ctrl_buf_addr_hi
+ * Address of a control buffer containing additional info needed
+ * for this command execution.
+ *
+ * meta_data
+ * Meta data to be returned in the status descriptor
+ */
+
+enum hal_tcl_cache_op_res {
+ HAL_TCL_CACHE_OP_RES_DONE,
+ HAL_TCL_CACHE_OP_RES_NOT_FOUND,
+ HAL_TCL_CACHE_OP_RES_TIMEOUT,
+};
+
+struct hal_tcl_status_ring {
+ __le32 info0;
+ __le32 msdu_byte_count;
+ __le32 msdu_timestamp;
+ __le32 meta_data[2];
+ __le32 info1;
+ __le32 rsvd0;
+ __le32 info2;
+} __packed;
+
+/* hal_tcl_status_ring
+ *
+ * msdu_cnt
+ * msdu_byte_count
+ * MSDU count of the entry and MSDU byte count for entry 1.
+ *
+ */
+
+#define HAL_CE_SRC_DESC_ADDR_INFO_ADDR_HI GENMASK(7, 0)
+#define HAL_CE_SRC_DESC_ADDR_INFO_HASH_EN BIT(8)
+#define HAL_CE_SRC_DESC_ADDR_INFO_BYTE_SWAP BIT(9)
+#define HAL_CE_SRC_DESC_ADDR_INFO_DEST_SWAP BIT(10)
+#define HAL_CE_SRC_DESC_ADDR_INFO_GATHER BIT(11)
+#define HAL_CE_SRC_DESC_ADDR_INFO_LEN GENMASK(31, 16)
+
+#define HAL_CE_SRC_DESC_META_INFO_DATA GENMASK(15, 0)
+
+#define HAL_CE_SRC_DESC_FLAGS_RING_ID GENMASK(27, 20)
+#define HAL_CE_SRC_DESC_FLAGS_LOOP_CNT HAL_SRNG_DESC_LOOP_CNT
+
+struct hal_ce_srng_src_desc {
+ __le32 buffer_addr_low;
+ __le32 buffer_addr_info; /* %HAL_CE_SRC_DESC_ADDR_INFO_ */
+ __le32 meta_info; /* %HAL_CE_SRC_DESC_META_INFO_ */
+ __le32 flags; /* %HAL_CE_SRC_DESC_FLAGS_ */
+} __packed;
+
+/* hal_ce_srng_src_desc
+ *
+ * buffer_addr_lo
+ * LSB 32 bits of the 40 Bit Pointer to the source buffer
+ *
+ * buffer_addr_hi
+ * MSB 8 bits of the 40 Bit Pointer to the source buffer
+ *
+ * toeplitz_en
+ * Enable generation of 32-bit Toeplitz-LFSR hash for
+ * data transfer. In case of gather, the fields in the first
+ * source ring entry of the gather copy cycle are taken into
+ * account.
+ *
+ * src_swap
+ * Treats source memory organization as big-endian. For
+ * each dword read (4 bytes), the byte 0 is swapped with byte 3
+ * and byte 1 is swapped with byte 2.
+ * In case of gather, the fields in the first source ring entry
+ * of the gather copy cycle are taken into account.
+ *
+ * dest_swap
+ * Treats destination memory organization as big-endian.
+ * For each dword write (4 bytes), the byte 0 is swapped with
+ * byte 3 and byte 1 is swapped with byte 2.
+ * In case of gather, the fields in the first source ring entry
+ * of the gather copy cycle are taken into account.
+ *
+ * gather
+ * Enables gather of multiple copy engine source
+ * descriptors to one destination.
+ *
+ * ce_res_0
+ * Reserved
+ *
+ *
+ * length
+ * Length of the buffer in units of octets of the current
+ * descriptor
+ *
+ * fw_metadata
+ * Meta data used by FW.
+ * In case of gather, the fields in the first source ring entry
+ * of the gather copy cycle are taken into account.
+ *
+ * ce_res_1
+ * Reserved
+ *
+ * ce_res_2
+ * Reserved
+ *
+ * ring_id
+ * The buffer pointer ring ID.
+ * 0 refers to the IDLE ring
+ * 1 - N refers to other rings
+ * Helps with debugging when dumping ring contents.
+ *
+ * looping_count
+ * A count value that indicates the number of times the
+ * producer of entries into the Ring has looped around the
+ * ring.
+ *
+ * At initialization time, this value is set to 0. On the
+ * first loop, this value is set to 1. After the max value
+ * allowed by the number of bits for this field is reached,
+ * the count value continues with 0 again.
+ *
+ * In case SW is the consumer of the ring entries, it can
+ * use this field to figure out up to where the producer of
+ * entries has created new entries. This eliminates the need to
+ * check where the 'head pointer' of the ring is located once
+ * the SW starts processing an interrupt indicating that new
+ * entries have been put into this ring.
+ *
+ * Also note that SW, if it wants, only needs to look at the
+ * LSB of this count value.
+ */
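+
+/* Illustrative sketch: filling a CE source descriptor per the field
+ * descriptions above, again splitting the 40-bit buffer address. The
+ * function name and parameters are hypothetical.
+ */
+static inline void
+hal_ce_src_desc_fill_example(struct hal_ce_srng_src_desc *desc,
+			     dma_addr_t paddr, u16 len, u16 fw_metadata)
+{
+	desc->buffer_addr_low = cpu_to_le32(lower_32_bits(paddr));
+	desc->buffer_addr_info =
+		le32_encode_bits(upper_32_bits(paddr),
+				 HAL_CE_SRC_DESC_ADDR_INFO_ADDR_HI) |
+		le32_encode_bits(len, HAL_CE_SRC_DESC_ADDR_INFO_LEN);
+	desc->meta_info = le32_encode_bits(fw_metadata,
+					   HAL_CE_SRC_DESC_META_INFO_DATA);
+}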
+
+#define HAL_CE_DEST_DESC_ADDR_INFO_ADDR_HI GENMASK(7, 0)
+#define HAL_CE_DEST_DESC_ADDR_INFO_RING_ID GENMASK(27, 20)
+#define HAL_CE_DEST_DESC_ADDR_INFO_LOOP_CNT HAL_SRNG_DESC_LOOP_CNT
+
+struct hal_ce_srng_dest_desc {
+ __le32 buffer_addr_low;
+ __le32 buffer_addr_info; /* %HAL_CE_DEST_DESC_ADDR_INFO_ */
+} __packed;
+
+/* hal_ce_srng_dest_desc
+ *
+ * dst_buffer_low
+ * LSB 32 bits of the 40 Bit Pointer to the Destination
+ * buffer
+ *
+ * dst_buffer_high
+ * MSB 8 bits of the 40 Bit Pointer to the Destination
+ * buffer
+ *
+ * ce_res_4
+ * Reserved
+ *
+ * ring_id
+ * The buffer pointer ring ID.
+ * 0 refers to the IDLE ring
+ * 1 - N refers to other rings
+ * Helps with debugging when dumping ring contents.
+ *
+ * looping_count
+ * A count value that indicates the number of times the
+ * producer of entries into the Ring has looped around the
+ * ring.
+ *
+ * At initialization time, this value is set to 0. On the
+ * first loop, this value is set to 1. After the max value
+ * allowed by the number of bits for this field is reached,
+ * the count value continues with 0 again.
+ *
+ * In case SW is the consumer of the ring entries, it can
+ * use this field to figure out up to where the producer of
+ * entries has created new entries. This eliminates the need to
+ * check where the 'head pointer' of the ring is located once
+ * the SW starts processing an interrupt indicating that new
+ * entries have been put into this ring.
+ *
+ * Also note that SW, if it wants, only needs to look at the
+ * LSB of this count value.
+ */
+
+#define HAL_CE_DST_STATUS_DESC_FLAGS_HASH_EN BIT(8)
+#define HAL_CE_DST_STATUS_DESC_FLAGS_BYTE_SWAP BIT(9)
+#define HAL_CE_DST_STATUS_DESC_FLAGS_DEST_SWAP BIT(10)
+#define HAL_CE_DST_STATUS_DESC_FLAGS_GATHER BIT(11)
+#define HAL_CE_DST_STATUS_DESC_FLAGS_LEN GENMASK(31, 16)
+
+#define HAL_CE_DST_STATUS_DESC_META_INFO_DATA GENMASK(15, 0)
+#define HAL_CE_DST_STATUS_DESC_META_INFO_RING_ID GENMASK(27, 20)
+#define HAL_CE_DST_STATUS_DESC_META_INFO_LOOP_CNT HAL_SRNG_DESC_LOOP_CNT
+
+struct hal_ce_srng_dst_status_desc {
+ __le32 flags; /* %HAL_CE_DST_STATUS_DESC_FLAGS_ */
+ __le32 toeplitz_hash0;
+ __le32 toeplitz_hash1;
+ __le32 meta_info; /* HAL_CE_DST_STATUS_DESC_META_INFO_ */
+} __packed;
+
+/* hal_ce_srng_dst_status_desc
+ *
+ * ce_res_5
+ * Reserved
+ *
+ * toeplitz_en
+ *
+ * src_swap
+ * Source memory buffer swapped
+ *
+ * dest_swap
+ * Destination memory buffer swapped
+ *
+ * gather
+ * Gather of multiple copy engine source descriptors to one
+ * destination enabled
+ *
+ * ce_res_6
+ * Reserved
+ *
+ * length
+ * Sum of all the Lengths of the source descriptor in the
+ * gather chain
+ *
+ * toeplitz_hash_0
+ * 32 LS bits of 64 bit Toeplitz LFSR hash result
+ *
+ * toeplitz_hash_1
+ * 32 MS bits of 64 bit Toeplitz LFSR hash result
+ *
+ * fw_metadata
+ * Meta data used by FW
+ * In case of gather, the fields in the first source ring entry
+ * of the gather copy cycle are taken into account.
+ *
+ * ce_res_7
+ * Reserved
+ *
+ * ring_id
+ * The buffer pointer ring ID.
+ * 0 refers to the IDLE ring
+ * 1 - N refers to other rings
+ * Helps with debugging when dumping ring contents.
+ *
+ * looping_count
+ * A count value that indicates the number of times the
+ * producer of entries into the Ring has looped around the
+ * ring.
+ *
+ * At initialization time, this value is set to 0. On the
+ * first loop, this value is set to 1. After the max value
+ * allowed by the number of bits for this field is reached,
+ * the count value continues with 0 again.
+ *
+ * In case SW is the consumer of the ring entries, it can
+ * use this field to figure out up to where the producer of
+ * entries has created new entries. This eliminates the need to
+ * check where the 'head pointer' of the ring is located once
+ * the SW starts processing an interrupt indicating that new
+ * entries have been put into this ring.
+ *
+ * Also note that SW, if it wants, only needs to look at the
+ * LSB of this count value.
+ */
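+
+/* Illustrative sketch: reading the completed transfer length back from
+ * a CE destination status descriptor. The helper name is hypothetical.
+ */
+static inline u16
+hal_ce_dst_status_len_example(const struct hal_ce_srng_dst_status_desc *desc)
+{
+	/* Sum of all source descriptor lengths in the gather chain */
+	return le32_get_bits(desc->flags, HAL_CE_DST_STATUS_DESC_FLAGS_LEN);
+}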
+
+#define HAL_TX_RATE_STATS_INFO0_VALID BIT(0)
+#define HAL_TX_RATE_STATS_INFO0_BW GENMASK(3, 1)
+#define HAL_TX_RATE_STATS_INFO0_PKT_TYPE GENMASK(7, 4)
+#define HAL_TX_RATE_STATS_INFO0_STBC BIT(8)
+#define HAL_TX_RATE_STATS_INFO0_LDPC BIT(9)
+#define HAL_TX_RATE_STATS_INFO0_SGI GENMASK(11, 10)
+#define HAL_TX_RATE_STATS_INFO0_MCS GENMASK(15, 12)
+#define HAL_TX_RATE_STATS_INFO0_OFDMA_TX BIT(16)
+#define HAL_TX_RATE_STATS_INFO0_TONES_IN_RU GENMASK(28, 17)
+
+enum hal_tx_rate_stats_bw {
+ HAL_TX_RATE_STATS_BW_20,
+ HAL_TX_RATE_STATS_BW_40,
+ HAL_TX_RATE_STATS_BW_80,
+ HAL_TX_RATE_STATS_BW_160,
+};
+
+enum hal_tx_rate_stats_pkt_type {
+ HAL_TX_RATE_STATS_PKT_TYPE_11A,
+ HAL_TX_RATE_STATS_PKT_TYPE_11B,
+ HAL_TX_RATE_STATS_PKT_TYPE_11N,
+ HAL_TX_RATE_STATS_PKT_TYPE_11AC,
+ HAL_TX_RATE_STATS_PKT_TYPE_11AX,
+ HAL_TX_RATE_STATS_PKT_TYPE_11BA,
+ HAL_TX_RATE_STATS_PKT_TYPE_11BE,
+};
+
+enum hal_tx_rate_stats_sgi {
+ HAL_TX_RATE_STATS_SGI_08US,
+ HAL_TX_RATE_STATS_SGI_04US,
+ HAL_TX_RATE_STATS_SGI_16US,
+ HAL_TX_RATE_STATS_SGI_32US,
+};
+
+struct hal_tx_rate_stats {
+ __le32 info0;
+ __le32 tsf;
+} __packed;
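+
+/* Illustrative sketch: decoding a TX rate stats word into the enums
+ * above, guarded by the valid bit. The helper name is hypothetical.
+ */
+static inline bool
+hal_tx_rate_stats_decode_example(const struct hal_tx_rate_stats *stats,
+				 enum hal_tx_rate_stats_bw *bw, u8 *mcs)
+{
+	u32 info = le32_to_cpu(stats->info0);
+
+	if (!(info & HAL_TX_RATE_STATS_INFO0_VALID))
+		return false;
+
+	*bw = u32_get_bits(info, HAL_TX_RATE_STATS_INFO0_BW);
+	*mcs = u32_get_bits(info, HAL_TX_RATE_STATS_INFO0_MCS);
+	return true;
+}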
+
+struct hal_wbm_link_desc {
+ struct ath12k_buffer_addr buf_addr_info;
+} __packed;
+
+/* hal_wbm_link_desc
+ *
+ * Producer: WBM
+ * Consumer: WBM
+ *
+ * buf_addr_info
+ * Details of the physical address of a buffer or MSDU
+ * link descriptor.
+ */
+
+enum hal_wbm_rel_src_module {
+ HAL_WBM_REL_SRC_MODULE_TQM,
+ HAL_WBM_REL_SRC_MODULE_RXDMA,
+ HAL_WBM_REL_SRC_MODULE_REO,
+ HAL_WBM_REL_SRC_MODULE_FW,
+ HAL_WBM_REL_SRC_MODULE_SW,
+};
+
+enum hal_wbm_rel_desc_type {
+ HAL_WBM_REL_DESC_TYPE_REL_MSDU,
+ HAL_WBM_REL_DESC_TYPE_MSDU_LINK,
+ HAL_WBM_REL_DESC_TYPE_MPDU_LINK,
+ HAL_WBM_REL_DESC_TYPE_MSDU_EXT,
+ HAL_WBM_REL_DESC_TYPE_QUEUE_EXT,
+};
+
+/* hal_wbm_rel_desc_type
+ *
+ * msdu_buffer
+ * The address points to an MSDU buffer
+ *
+ * msdu_link_descriptor
+ * The address points to a Tx MSDU link descriptor
+ *
+ * mpdu_link_descriptor
+ * The address points to an MPDU link descriptor
+ *
+ * msdu_ext_descriptor
+ * The address points to an MSDU extension descriptor
+ *
+ * queue_ext_descriptor
+ * The address points to a TQM queue extension descriptor. WBM should
+ * treat this in the same way as a link descriptor.
+ */
+
+enum hal_wbm_rel_bm_act {
+ HAL_WBM_REL_BM_ACT_PUT_IN_IDLE,
+ HAL_WBM_REL_BM_ACT_REL_MSDU,
+};
+
+/* hal_wbm_rel_bm_act
+ *
+ * put_in_idle_list
+ * Put the buffer or descriptor back in the idle list. In case of MSDU or
+ * MPDU link descriptor, BM does not need to check to release any
+ * individual MSDU buffers.
+ *
+ * release_msdu_list
+ * This BM action can only be used in combination with desc_type being
+ * msdu_link_descriptor. Field first_msdu_index points out which MSDU
+ * pointer in the MSDU link descriptor is the first of an MPDU that is
+ * released. BM shall release all the MSDU buffers linked to this first
+ * MSDU buffer pointer. All related MSDU buffer pointer entries shall be
+ * set to value 0, which represents the 'NULL' pointer. When all MSDU
+ * buffer pointers in the MSDU link descriptor are 'NULL', the MSDU link
+ * descriptor itself shall also be released.
+ */
+#define HAL_WBM_COMPL_RX_INFO0_REL_SRC_MODULE GENMASK(2, 0)
+#define HAL_WBM_COMPL_RX_INFO0_BM_ACTION GENMASK(5, 3)
+#define HAL_WBM_COMPL_RX_INFO0_DESC_TYPE GENMASK(8, 6)
+#define HAL_WBM_COMPL_RX_INFO0_RBM GENMASK(12, 9)
+#define HAL_WBM_COMPL_RX_INFO0_RXDMA_PUSH_REASON GENMASK(18, 17)
+#define HAL_WBM_COMPL_RX_INFO0_RXDMA_ERROR_CODE GENMASK(23, 19)
+#define HAL_WBM_COMPL_RX_INFO0_REO_PUSH_REASON GENMASK(25, 24)
+#define HAL_WBM_COMPL_RX_INFO0_REO_ERROR_CODE GENMASK(30, 26)
+#define HAL_WBM_COMPL_RX_INFO0_WBM_INTERNAL_ERROR BIT(31)
+
+#define HAL_WBM_COMPL_RX_INFO1_PHY_ADDR_HI GENMASK(7, 0)
+#define HAL_WBM_COMPL_RX_INFO1_SW_COOKIE GENMASK(27, 8)
+#define HAL_WBM_COMPL_RX_INFO1_LOOPING_COUNT GENMASK(31, 28)
+
+struct hal_wbm_completion_ring_rx {
+ __le32 addr_lo;
+ __le32 addr_hi;
+ __le32 info0;
+ struct rx_mpdu_desc rx_mpdu_info;
+ struct rx_msdu_desc rx_msdu_info;
+ __le32 phy_addr_lo;
+ __le32 info1;
+} __packed;
+
+#define HAL_WBM_COMPL_TX_INFO0_REL_SRC_MODULE GENMASK(2, 0)
+#define HAL_WBM_COMPL_TX_INFO0_DESC_TYPE GENMASK(8, 6)
+#define HAL_WBM_COMPL_TX_INFO0_RBM GENMASK(12, 9)
+#define HAL_WBM_COMPL_TX_INFO0_TQM_RELEASE_REASON GENMASK(16, 13)
+#define HAL_WBM_COMPL_TX_INFO0_RBM_OVERRIDE_VLD BIT(17)
+#define HAL_WBM_COMPL_TX_INFO0_SW_COOKIE_LO GENMASK(29, 18)
+#define HAL_WBM_COMPL_TX_INFO0_CC_DONE BIT(30)
+#define HAL_WBM_COMPL_TX_INFO0_WBM_INTERNAL_ERROR BIT(31)
+
+#define HAL_WBM_COMPL_TX_INFO1_TQM_STATUS_NUMBER GENMASK(23, 0)
+#define HAL_WBM_COMPL_TX_INFO1_TRANSMIT_COUNT GENMASK(30, 24)
+#define HAL_WBM_COMPL_TX_INFO1_SW_REL_DETAILS_VALID BIT(31)
+
+#define HAL_WBM_COMPL_TX_INFO2_ACK_FRAME_RSSI GENMASK(7, 0)
+#define HAL_WBM_COMPL_TX_INFO2_FIRST_MSDU BIT(8)
+#define HAL_WBM_COMPL_TX_INFO2_LAST_MSDU BIT(9)
+#define HAL_WBM_COMPL_TX_INFO2_FW_TX_NOTIF_FRAME GENMASK(12, 10)
+#define HAL_WBM_COMPL_TX_INFO2_BUFFER_TIMESTAMP GENMASK(31, 13)
+
+#define HAL_WBM_COMPL_TX_INFO3_PEER_ID GENMASK(15, 0)
+#define HAL_WBM_COMPL_TX_INFO3_TID GENMASK(19, 16)
+#define HAL_WBM_COMPL_TX_INFO3_SW_COOKIE_HI GENMASK(27, 20)
+#define HAL_WBM_COMPL_TX_INFO3_LOOPING_COUNT GENMASK(31, 28)
+
+struct hal_wbm_completion_ring_tx {
+ __le32 buf_va_lo;
+ __le32 buf_va_hi;
+ __le32 info0;
+ __le32 info1;
+ __le32 info2;
+ struct hal_tx_rate_stats rate_stats;
+ __le32 info3;
+} __packed;
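+
+/* Illustrative sketch: the 20-bit SW buffer cookie is split across
+ * info0 (cookie bits 11:0) and info3 (cookie bits 19:12), mirroring
+ * the SW_BUFFER_COOKIE_11_0/19_12 naming of the release ring below;
+ * a consumer reassembles it as shown. The helper name is hypothetical.
+ */
+static inline u32
+hal_wbm_compl_tx_cookie_example(const struct hal_wbm_completion_ring_tx *desc)
+{
+	u32 lo = le32_get_bits(desc->info0,
+			       HAL_WBM_COMPL_TX_INFO0_SW_COOKIE_LO);
+	u32 hi = le32_get_bits(desc->info3,
+			       HAL_WBM_COMPL_TX_INFO3_SW_COOKIE_HI);
+
+	return lo | (hi << 12);
+}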
+
+#define HAL_WBM_RELEASE_TX_INFO0_REL_SRC_MODULE GENMASK(2, 0)
+#define HAL_WBM_RELEASE_TX_INFO0_BM_ACTION GENMASK(5, 3)
+#define HAL_WBM_RELEASE_TX_INFO0_DESC_TYPE GENMASK(8, 6)
+#define HAL_WBM_RELEASE_TX_INFO0_FIRST_MSDU_IDX GENMASK(12, 9)
+#define HAL_WBM_RELEASE_TX_INFO0_TQM_RELEASE_REASON GENMASK(18, 13)
+#define HAL_WBM_RELEASE_TX_INFO0_RBM_OVERRIDE_VLD BIT(17)
+#define HAL_WBM_RELEASE_TX_INFO0_SW_BUFFER_COOKIE_11_0 GENMASK(29, 18)
+#define HAL_WBM_RELEASE_TX_INFO0_WBM_INTERNAL_ERROR BIT(31)
+
+#define HAL_WBM_RELEASE_TX_INFO1_TQM_STATUS_NUMBER GENMASK(23, 0)
+#define HAL_WBM_RELEASE_TX_INFO1_TRANSMIT_COUNT GENMASK(30, 24)
+#define HAL_WBM_RELEASE_TX_INFO1_SW_REL_DETAILS_VALID BIT(31)
+
+#define HAL_WBM_RELEASE_TX_INFO2_ACK_FRAME_RSSI GENMASK(7, 0)
+#define HAL_WBM_RELEASE_TX_INFO2_FIRST_MSDU BIT(8)
+#define HAL_WBM_RELEASE_TX_INFO2_LAST_MSDU BIT(9)
+#define HAL_WBM_RELEASE_TX_INFO2_FW_TX_NOTIF_FRAME GENMASK(12, 10)
+#define HAL_WBM_RELEASE_TX_INFO2_BUFFER_TIMESTAMP GENMASK(31, 13)
+
+#define HAL_WBM_RELEASE_TX_INFO3_PEER_ID GENMASK(15, 0)
+#define HAL_WBM_RELEASE_TX_INFO3_TID GENMASK(19, 16)
+#define HAL_WBM_RELEASE_TX_INFO3_SW_BUFFER_COOKIE_19_12 GENMASK(27, 20)
+#define HAL_WBM_RELEASE_TX_INFO3_LOOPING_COUNT GENMASK(31, 28)
+
+struct hal_wbm_release_ring_tx {
+ struct ath12k_buffer_addr buf_addr_info;
+ __le32 info0;
+ __le32 info1;
+ __le32 info2;
+ struct hal_tx_rate_stats rate_stats;
+ __le32 info3;
+} __packed;
+
+#define HAL_WBM_RELEASE_RX_INFO0_REL_SRC_MODULE GENMASK(2, 0)
+#define HAL_WBM_RELEASE_RX_INFO0_BM_ACTION GENMASK(5, 3)
+#define HAL_WBM_RELEASE_RX_INFO0_DESC_TYPE GENMASK(8, 6)
+#define HAL_WBM_RELEASE_RX_INFO0_FIRST_MSDU_IDX GENMASK(12, 9)
+#define HAL_WBM_RELEASE_RX_INFO0_CC_STATUS BIT(16)
+#define HAL_WBM_RELEASE_RX_INFO0_RXDMA_PUSH_REASON GENMASK(18, 17)
+#define HAL_WBM_RELEASE_RX_INFO0_RXDMA_ERROR_CODE GENMASK(23, 19)
+#define HAL_WBM_RELEASE_RX_INFO0_REO_PUSH_REASON GENMASK(25, 24)
+#define HAL_WBM_RELEASE_RX_INFO0_REO_ERROR_CODE GENMASK(30, 26)
+#define HAL_WBM_RELEASE_RX_INFO0_WBM_INTERNAL_ERROR BIT(31)
+
+#define HAL_WBM_RELEASE_RX_INFO2_RING_ID GENMASK(27, 20)
+#define HAL_WBM_RELEASE_RX_INFO2_LOOPING_COUNT GENMASK(31, 28)
+
+struct hal_wbm_release_ring_rx {
+ struct ath12k_buffer_addr buf_addr_info;
+ __le32 info0;
+ struct rx_mpdu_desc rx_mpdu_info;
+ struct rx_msdu_desc rx_msdu_info;
+ __le32 info1;
+ __le32 info2;
+} __packed;
+
+#define HAL_WBM_RELEASE_RX_CC_INFO0_RBM GENMASK(12, 9)
+#define HAL_WBM_RELEASE_RX_CC_INFO1_COOKIE GENMASK(27, 8)
+/* Used when HW cookie conversion (CC) is successful */
+struct hal_wbm_release_ring_cc_rx {
+ __le32 buf_va_lo;
+ __le32 buf_va_hi;
+ __le32 info0;
+ struct rx_mpdu_desc rx_mpdu_info;
+ struct rx_msdu_desc rx_msdu_info;
+ __le32 buf_pa_lo;
+ __le32 info1;
+} __packed;
+
+#define HAL_WBM_RELEASE_INFO0_REL_SRC_MODULE GENMASK(2, 0)
+#define HAL_WBM_RELEASE_INFO0_BM_ACTION GENMASK(5, 3)
+#define HAL_WBM_RELEASE_INFO0_DESC_TYPE GENMASK(8, 6)
+#define HAL_WBM_RELEASE_INFO0_RXDMA_PUSH_REASON GENMASK(18, 17)
+#define HAL_WBM_RELEASE_INFO0_RXDMA_ERROR_CODE GENMASK(23, 19)
+#define HAL_WBM_RELEASE_INFO0_REO_PUSH_REASON GENMASK(25, 24)
+#define HAL_WBM_RELEASE_INFO0_REO_ERROR_CODE GENMASK(30, 26)
+#define HAL_WBM_RELEASE_INFO0_WBM_INTERNAL_ERROR BIT(31)
+
+#define HAL_WBM_RELEASE_INFO3_FIRST_MSDU BIT(0)
+#define HAL_WBM_RELEASE_INFO3_LAST_MSDU BIT(1)
+#define HAL_WBM_RELEASE_INFO3_CONTINUATION BIT(2)
+
+#define HAL_WBM_RELEASE_INFO5_LOOPING_COUNT GENMASK(31, 28)
+
+struct hal_wbm_release_ring {
+ struct ath12k_buffer_addr buf_addr_info;
+ __le32 info0;
+ __le32 info1;
+ __le32 info2;
+ __le32 info3;
+ __le32 info4;
+ __le32 info5;
+} __packed;
+
+/* hal_wbm_release_ring
+ *
+ * Producer: SW/TQM/RXDMA/REO/SWITCH
+ * Consumer: WBM/SW/FW
+ *
+ * HTT tx status is overlaid on the wbm_release ring on 4-byte words
+ * 2, 3, 4 and 5 for software based completions.
+ *
+ * buf_addr_info
+ * Details of the physical address of the buffer or link descriptor.
+ *
+ * release_source_module
+ * Indicates which module initiated the release of this buffer/descriptor.
+ * Values are defined in enum %HAL_WBM_REL_SRC_MODULE_.
+ *
+ * buffer_or_desc_type
+ * Field only valid when WBM is marked as the return_buffer_manager in
+ * the Released_Buffer_address_info. Indicates the type of buffer or
+ * descriptor being released. Values are in enum %HAL_WBM_REL_DESC_TYPE.
+ *
+ * wbm_internal_error
+ * Is set when WBM got a buffer pointer but the action was to push it to
+ * the idle link descriptor ring or do link related activity OR
+ * Is set when WBM got a link buffer pointer but the action was to push it
+ * to the buffer descriptor ring.
+ *
+ * looping_count
+ * A count value that indicates the number of times the
+ * producer of entries into the Buffer Manager Ring has looped
+ * around the ring.
+ *
+ * At initialization time, this value is set to 0. On the
+ * first loop, this value is set to 1. After the max value
+ * allowed by the number of bits for this field is reached,
+ * the count value continues with 0 again.
+ *
+ * In case SW is the consumer of the ring entries, it can
+ * use this field to figure out up to where the producer of
+ * entries has created new entries. This eliminates the need to
+ * check where the 'head pointer' of the ring is located once
+ * the SW starts processing an interrupt indicating that new
+ * entries have been put into this ring.
+ *
+ * Also note that SW, if it wants, only needs to look at the
+ * LSB of this count value.
+ */
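+
+/* Illustrative sketch: identifying which module released a buffer from
+ * the generic WBM release descriptor documented above. The helper name
+ * is hypothetical.
+ */
+static inline enum hal_wbm_rel_src_module
+hal_wbm_release_src_example(const struct hal_wbm_release_ring *desc)
+{
+	return le32_get_bits(desc->info0,
+			     HAL_WBM_RELEASE_INFO0_REL_SRC_MODULE);
+}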
+
+/**
+ * enum hal_wbm_tqm_rel_reason - TQM release reason code
+ * @HAL_WBM_TQM_REL_REASON_FRAME_ACKED: ACK or BACK received for the frame
+ * @HAL_WBM_TQM_REL_REASON_CMD_REMOVE_MPDU: Command remove_mpdus initiated by SW
+ * @HAL_WBM_TQM_REL_REASON_CMD_REMOVE_TX: Command remove transmitted_mpdus
+ * initiated by sw.
+ * @HAL_WBM_TQM_REL_REASON_CMD_REMOVE_NOTX: Command remove untransmitted_mpdus
+ * initiated by sw.
+ * @HAL_WBM_TQM_REL_REASON_CMD_REMOVE_AGED_FRAMES: Command remove aged msdus or
+ * mpdus.
+ * @HAL_WBM_TQM_REL_REASON_CMD_REMOVE_RESEAON1: Remove command initiated by
+ * fw with fw_reason1.
+ * @HAL_WBM_TQM_REL_REASON_CMD_REMOVE_RESEAON2: Remove command initiated by
+ * fw with fw_reason2.
+ * @HAL_WBM_TQM_REL_REASON_CMD_REMOVE_RESEAON3: Remove command initiated by
+ * fw with fw_reason3.
+ */
+enum hal_wbm_tqm_rel_reason {
+ HAL_WBM_TQM_REL_REASON_FRAME_ACKED,
+ HAL_WBM_TQM_REL_REASON_CMD_REMOVE_MPDU,
+ HAL_WBM_TQM_REL_REASON_CMD_REMOVE_TX,
+ HAL_WBM_TQM_REL_REASON_CMD_REMOVE_NOTX,
+ HAL_WBM_TQM_REL_REASON_CMD_REMOVE_AGED_FRAMES,
+ HAL_WBM_TQM_REL_REASON_CMD_REMOVE_RESEAON1,
+ HAL_WBM_TQM_REL_REASON_CMD_REMOVE_RESEAON2,
+ HAL_WBM_TQM_REL_REASON_CMD_REMOVE_RESEAON3,
+};
+
+struct hal_wbm_buffer_ring {
+ struct ath12k_buffer_addr buf_addr_info;
+};
+
+enum hal_mon_end_reason {
+ HAL_MON_STATUS_BUFFER_FULL,
+ HAL_MON_FLUSH_DETECTED,
+ HAL_MON_END_OF_PPDU,
+ HAL_MON_PPDU_TRUNCATED,
+};
+
+#define HAL_SW_MONITOR_RING_INFO0_RXDMA_PUSH_REASON GENMASK(1, 0)
+#define HAL_SW_MONITOR_RING_INFO0_RXDMA_ERROR_CODE GENMASK(6, 2)
+#define HAL_SW_MONITOR_RING_INFO0_MPDU_FRAGMENT_NUMBER GENMASK(10, 7)
+#define HAL_SW_MONITOR_RING_INFO0_FRAMELESS_BAR BIT(11)
+#define HAL_SW_MONITOR_RING_INFO0_STATUS_BUF_COUNT GENMASK(15, 12)
+#define HAL_SW_MONITOR_RING_INFO0_END_OF_PPDU BIT(16)
+
+#define HAL_SW_MONITOR_RING_INFO1_PHY_PPDU_ID GENMASK(15, 0)
+#define HAL_SW_MONITOR_RING_INFO1_RING_ID GENMASK(27, 20)
+#define HAL_SW_MONITOR_RING_INFO1_LOOPING_COUNT GENMASK(31, 28)
+
+struct hal_sw_monitor_ring {
+ struct ath12k_buffer_addr buf_addr_info;
+ struct rx_mpdu_desc rx_mpdu_info;
+ struct ath12k_buffer_addr status_buff_addr_info;
+ __le32 info0; /* %HAL_SW_MONITOR_RING_INFO0 */
+ __le32 info1; /* %HAL_SW_MONITOR_RING_INFO1 */
+} __packed;
+
+/* hal_sw_monitor_ring
+ *
+ * Producer: RXDMA
+ * Consumer: REO/SW/FW
+ * buf_addr_info
+ * Details of the physical address of a buffer or MSDU
+ * link descriptor.
+ *
+ * rx_mpdu_info
+ * Details related to the MPDU being pushed to SW, valid
+ * only if end_of_ppdu is set to 0.
+ *
+ * status_buff_addr_info
+ * Details of the physical address of the first status
+ * buffer used for the PPDU (either the PPDU that included the
+ * MPDU being pushed to SW if end_of_ppdu = 0, or the PPDU
+ * whose end is indicated through end_of_ppdu = 1)
+ *
+ * rxdma_push_reason
+ * Indicates why RXDMA pushed the frame to this ring
+ *
+ * <enum 0 rxdma_error_detected> RXDMA detected an error and
+ * pushed this frame to this queue
+ *
+ * <enum 1 rxdma_routing_instruction> RXDMA pushed the
+ * frame to this queue per received routing instructions. No
+ * error within RXDMA was detected
+ *
+ * <enum 2 rxdma_rx_flush> RXDMA received an RX_FLUSH. As a
+ * result the MSDU link descriptor might not have the
+ * last_msdu_in_mpdu_flag set, but instead WBM might just see a
+ * NULL pointer in the MSDU link descriptor. This is to be
+ * considered a normal condition for this scenario.
+ *
+ * rxdma_error_code
+ * Field only valid when rxdma_push_reason is set to
+ * 'rxdma_error_detected.'
+ *
+ * <enum 0 rxdma_overflow_err>MPDU frame is not complete
+ * due to a FIFO overflow error in RXPCU.
+ *
+ * <enum 1 rxdma_mpdu_length_err>MPDU frame is not complete
+ * due to receiving incomplete MPDU from the PHY
+ *
+ * <enum 3 rxdma_decrypt_err>CRYPTO reported a decryption
+ * error or CRYPTO received an encrypted frame, but did not get
+ * a valid corresponding key id in the peer entry.
+ *
+ * <enum 4 rxdma_tkip_mic_err>CRYPTO reported a TKIP MIC
+ * error
+ *
+ * <enum 5 rxdma_unecrypted_err>CRYPTO reported an
+ * unencrypted frame error when encrypted was expected
+ *
+ * <enum 6 rxdma_msdu_len_err>RX OLE reported an MSDU
+ * length error
+ *
+ * <enum 7 rxdma_msdu_limit_err>RX OLE reported that max
+ * number of MSDUs allowed in an MPDU got exceeded
+ *
+ * <enum 8 rxdma_wifi_parse_err>RX OLE reported a parsing
+ * error
+ *
+ * <enum 9 rxdma_amsdu_parse_err>RX OLE reported an A-MSDU
+ * parsing error
+ *
+ * <enum 10 rxdma_sa_timeout_err>RX OLE reported a timeout
+ * during SA search
+ *
+ * <enum 11 rxdma_da_timeout_err>RX OLE reported a timeout
+ * during DA search
+ *
+ * <enum 12 rxdma_flow_timeout_err>RX OLE reported a
+ * timeout during flow search
+ *
+ * <enum 13 rxdma_flush_request>RXDMA received a flush
+ * request
+ *
+ * <enum 14 rxdma_amsdu_fragment_err>Rx PCU reported A-MSDU
+ * present as well as a fragmented MPDU.
+ *
+ * mpdu_fragment_number
+ * Field only valid when Reo_level_mpdu_frame_info.
+ * Rx_mpdu_desc_info_details.Fragment_flag is set and
+ * end_of_ppdu is set to 0.
+ *
+ * The fragment number from the 802.11 header.
+ *
+ * Note that the sequence number is embedded in the field:
+ * Reo_level_mpdu_frame_info. Rx_mpdu_desc_info_details.
+ * Mpdu_sequence_number
+ *
+ * frameless_bar
+ * When set, this SW monitor ring struct contains BAR info
+ * from a multi TID BAR frame. The original multi TID BAR frame
+ * itself contained all the REO info for the first TID, but all
+ * the subsequent TID info and their linkage to the REO
+ * descriptors is passed down as 'frameless' BAR info.
+ *
+ * The only fields valid in this descriptor when this bit
+ * is set are within the
+ *
+ * Reo_level_mpdu_frame_info:
+ * Within Rx_mpdu_desc_info_details:
+ * Mpdu_Sequence_number
+ * BAR_frame
+ * Peer_meta_data
+ * All other fields shall be set to 0.
+ *
+ * status_buf_count
+ * A count of status buffers used so far for the PPDU
+ * (either the PPDU that included the MPDU being pushed to SW
+ * if end_of_ppdu = 0, or the PPDU whose end is indicated
+ * through end_of_ppdu = 1)
+ *
+ * end_of_ppdu
+ * Some hw RXDMA can be configured to generate a separate
+ * 'SW_MONITOR_RING' descriptor at the end of a PPDU (either
+ * through an 'RX_PPDU_END' TLV or through an 'RX_FLUSH') to
+ * demarcate PPDUs.
+ *
+ * For such a descriptor, this bit is set to 1 and fields
+ * Reo_level_mpdu_frame_info, mpdu_fragment_number and
+ * Frameless_bar are all set to 0.
+ *
+ * Otherwise this bit is set to 0.
+ *
+ * phy_ppdu_id
+ * A PPDU counter value that PHY increments for every PPDU
+ * received
+ *
+ * The counter value wraps around. Some hw RXDMA can be
+ * configured to copy this from the RX_PPDU_START TLV for every
+ * output descriptor.
+ *
+ * ring_id
+ * For debugging.
+ * This field is filled in by the SRNG module.
+ * It helps to identify the ring that is being looked at.
+ *
+ * looping_count
+ * For debugging.
+ * This field is filled in by the SRNG module.
+ *
+ * A count value that indicates the number of times the
+ * producer of entries into this Ring has looped around the
+ * ring.
+ * At initialization time, this value is set to 0. On the
+ * first loop, this value is set to 1. After the maximum value
+ * allowed by the number of bits for this field is reached, the
+ * count value continues with 0 again.
+ *
+ * In case SW is the consumer of the ring entries, it can
+ * use this field to figure out up to where the producer of
+ * entries has created new entries. This eliminates the need to
+ * check where the 'head pointer' of the ring is located once
+ * the SW starts processing an interrupt indicating that new
+ * entries have been put into this ring.
+ */
+
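+/* A minimal decode sketch for the word fields above, using the INFO0/INFO1
+ * masks defined for this descriptor (illustrative, not driver code):
+ *
+ *    push_reason = le32_get_bits(sw_mon->info0,
+ *                                HAL_SW_MONITOR_RING_INFO0_RXDMA_PUSH_REASON);
+ *    err_code = le32_get_bits(sw_mon->info0,
+ *                             HAL_SW_MONITOR_RING_INFO0_RXDMA_ERROR_CODE);
+ *    ppdu_id = le32_get_bits(sw_mon->info1,
+ *                            HAL_SW_MONITOR_RING_INFO1_PHY_PPDU_ID);
+ */
+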
+enum hal_desc_owner {
+ HAL_DESC_OWNER_WBM,
+ HAL_DESC_OWNER_SW,
+ HAL_DESC_OWNER_TQM,
+ HAL_DESC_OWNER_RXDMA,
+ HAL_DESC_OWNER_REO,
+ HAL_DESC_OWNER_SWITCH,
+};
+
+enum hal_desc_buf_type {
+ HAL_DESC_BUF_TYPE_TX_MSDU_LINK,
+ HAL_DESC_BUF_TYPE_TX_MPDU_LINK,
+ HAL_DESC_BUF_TYPE_TX_MPDU_QUEUE_HEAD,
+ HAL_DESC_BUF_TYPE_TX_MPDU_QUEUE_EXT,
+ HAL_DESC_BUF_TYPE_TX_FLOW,
+ HAL_DESC_BUF_TYPE_TX_BUFFER,
+ HAL_DESC_BUF_TYPE_RX_MSDU_LINK,
+ HAL_DESC_BUF_TYPE_RX_MPDU_LINK,
+ HAL_DESC_BUF_TYPE_RX_REO_QUEUE,
+ HAL_DESC_BUF_TYPE_RX_REO_QUEUE_EXT,
+ HAL_DESC_BUF_TYPE_RX_BUFFER,
+ HAL_DESC_BUF_TYPE_IDLE_LINK,
+};
+
+#define HAL_DESC_REO_OWNED 4
+#define HAL_DESC_REO_QUEUE_DESC 8
+#define HAL_DESC_REO_QUEUE_EXT_DESC 9
+#define HAL_DESC_REO_NON_QOS_TID 16
+
+#define HAL_DESC_HDR_INFO0_OWNER GENMASK(3, 0)
+#define HAL_DESC_HDR_INFO0_BUF_TYPE GENMASK(7, 4)
+#define HAL_DESC_HDR_INFO0_DBG_RESERVED GENMASK(31, 8)
+
+struct hal_desc_header {
+ __le32 info0;
+} __packed;
+
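+/* The 4-bit owner and buffer type fields live in info0; a descriptor
+ * producer fills the header along these lines (see also
+ * ath12k_hal_reo_set_desc_hdr() in hal_rx.c):
+ *
+ *    hdr->info0 = le32_encode_bits(HAL_DESC_OWNER_REO,
+ *                                  HAL_DESC_HDR_INFO0_OWNER) |
+ *                 le32_encode_bits(HAL_DESC_BUF_TYPE_RX_REO_QUEUE,
+ *                                  HAL_DESC_HDR_INFO0_BUF_TYPE);
+ */
+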
+struct hal_rx_mpdu_link_ptr {
+ struct ath12k_buffer_addr addr_info;
+} __packed;
+
+struct hal_rx_msdu_details {
+ struct ath12k_buffer_addr buf_addr_info;
+ struct rx_msdu_desc rx_msdu_info;
+ struct rx_msdu_ext_desc rx_msdu_ext_info;
+} __packed;
+
+#define HAL_RX_MSDU_LNK_INFO0_RX_QUEUE_NUMBER GENMASK(15, 0)
+#define HAL_RX_MSDU_LNK_INFO0_FIRST_MSDU_LNK BIT(16)
+
+struct hal_rx_msdu_link {
+ struct hal_desc_header desc_hdr;
+ struct ath12k_buffer_addr buf_addr_info;
+ __le32 info0;
+ __le32 pn[4];
+ struct hal_rx_msdu_details msdu_link[6];
+} __packed;
+
+struct hal_rx_reo_queue_ext {
+ struct hal_desc_header desc_hdr;
+ __le32 rsvd;
+ struct hal_rx_mpdu_link_ptr mpdu_link[15];
+} __packed;
+
+/* hal_rx_reo_queue_ext
+ * Consumer: REO
+ * Producer: REO
+ *
+ * descriptor_header
+ * Details about which module owns this struct.
+ *
+ * mpdu_link
+ * Pointer to the next MPDU_link descriptor in the MPDU queue.
+ */
+
+enum hal_rx_reo_queue_pn_size {
+ HAL_RX_REO_QUEUE_PN_SIZE_24,
+ HAL_RX_REO_QUEUE_PN_SIZE_48,
+ HAL_RX_REO_QUEUE_PN_SIZE_128,
+};
+
+#define HAL_RX_REO_QUEUE_RX_QUEUE_NUMBER GENMASK(15, 0)
+
+#define HAL_RX_REO_QUEUE_INFO0_VLD BIT(0)
+#define HAL_RX_REO_QUEUE_INFO0_ASSOC_LNK_DESC_COUNTER GENMASK(2, 1)
+#define HAL_RX_REO_QUEUE_INFO0_DIS_DUP_DETECTION BIT(3)
+#define HAL_RX_REO_QUEUE_INFO0_SOFT_REORDER_EN BIT(4)
+#define HAL_RX_REO_QUEUE_INFO0_AC GENMASK(6, 5)
+#define HAL_RX_REO_QUEUE_INFO0_BAR BIT(7)
+#define HAL_RX_REO_QUEUE_INFO0_RETRY BIT(8)
+#define HAL_RX_REO_QUEUE_INFO0_CHECK_2K_MODE BIT(9)
+#define HAL_RX_REO_QUEUE_INFO0_OOR_MODE BIT(10)
+#define HAL_RX_REO_QUEUE_INFO0_BA_WINDOW_SIZE GENMASK(20, 11)
+#define HAL_RX_REO_QUEUE_INFO0_PN_CHECK BIT(21)
+#define HAL_RX_REO_QUEUE_INFO0_EVEN_PN BIT(22)
+#define HAL_RX_REO_QUEUE_INFO0_UNEVEN_PN BIT(23)
+#define HAL_RX_REO_QUEUE_INFO0_PN_HANDLE_ENABLE BIT(24)
+#define HAL_RX_REO_QUEUE_INFO0_PN_SIZE GENMASK(26, 25)
+#define HAL_RX_REO_QUEUE_INFO0_IGNORE_AMPDU_FLG BIT(27)
+
+#define HAL_RX_REO_QUEUE_INFO1_SVLD BIT(0)
+#define HAL_RX_REO_QUEUE_INFO1_SSN GENMASK(12, 1)
+#define HAL_RX_REO_QUEUE_INFO1_CURRENT_IDX GENMASK(22, 13)
+#define HAL_RX_REO_QUEUE_INFO1_SEQ_2K_ERR BIT(23)
+#define HAL_RX_REO_QUEUE_INFO1_PN_ERR BIT(24)
+#define HAL_RX_REO_QUEUE_INFO1_PN_VALID BIT(31)
+
+#define HAL_RX_REO_QUEUE_INFO2_MPDU_COUNT GENMASK(6, 0)
+#define HAL_RX_REO_QUEUE_INFO2_MSDU_COUNT GENMASK(31, 7)
+
+#define HAL_RX_REO_QUEUE_INFO3_TIMEOUT_COUNT GENMASK(9, 4)
+#define HAL_RX_REO_QUEUE_INFO3_FWD_DUE_TO_BAR_CNT GENMASK(15, 10)
+#define HAL_RX_REO_QUEUE_INFO3_DUPLICATE_COUNT GENMASK(31, 16)
+
+#define HAL_RX_REO_QUEUE_INFO4_FRAME_IN_ORD_COUNT GENMASK(23, 0)
+#define HAL_RX_REO_QUEUE_INFO4_BAR_RECVD_COUNT GENMASK(31, 24)
+
+#define HAL_RX_REO_QUEUE_INFO5_LATE_RX_MPDU_COUNT GENMASK(11, 0)
+#define HAL_RX_REO_QUEUE_INFO5_WINDOW_JUMP_2K GENMASK(15, 12)
+#define HAL_RX_REO_QUEUE_INFO5_HOLE_COUNT GENMASK(31, 16)
+
+struct hal_rx_reo_queue {
+ struct hal_desc_header desc_hdr;
+ __le32 rx_queue_num;
+ __le32 info0;
+ __le32 info1;
+ __le32 pn[4];
+ __le32 last_rx_enqueue_timestamp;
+ __le32 last_rx_dequeue_timestamp;
+ __le32 next_aging_queue[2];
+ __le32 prev_aging_queue[2];
+ __le32 rx_bitmap[9];
+ __le32 info2;
+ __le32 info3;
+ __le32 info4;
+ __le32 processed_mpdus;
+ __le32 processed_msdus;
+ __le32 processed_total_bytes;
+ __le32 info5;
+ __le32 rsvd[2];
+ struct hal_rx_reo_queue_ext ext_desc[];
+} __packed;
+
+/* hal_rx_reo_queue
+ *
+ * descriptor_header
+ * Details about which module owns this struct. Note that sub field
+ * Buffer_type shall be set to receive_reo_queue_descriptor.
+ *
+ * receive_queue_number
+ * Indicates the MPDU queue ID to which this MPDU link descriptor belongs.
+ *
+ * vld
+ * Valid bit indicating a session is established and the queue descriptor
+ * is valid.
+ * associated_link_descriptor_counter
+ * Indicates which of the 3 link descriptor counters shall be incremented
+ * or decremented when link descriptors are added or removed from this
+ * flow queue.
+ * disable_duplicate_detection
+ * When set, do not perform any duplicate detection.
+ * soft_reorder_enable
+ * When set, REO has been instructed to not perform the actual re-ordering
+ * of frames for this queue, but just to insert the reorder opcodes.
+ * ac
+ * Indicates the access category of the queue descriptor.
+ * bar
+ * Indicates if BAR has been received.
+ * retry
+ * Retry bit is checked if this bit is set.
+ * chk_2k_mode
+ * Indicates what type of operation is expected from Reo when the received
+ * frame SN falls within the 2K window.
+ * oor_mode
+ * Indicates what type of operation is expected when the received frame
+ * falls within the OOR window.
+ * ba_window_size
+ * Indicates the negotiated (window size + 1). Max of 256 bits.
+ *
+ * A value of 255 means a 256-bit bitmap, 63 means a 64-bit bitmap and 0
+ * means a non-BA session, with a window size of 0. The 3 values here are
+ * the main values validated, but other values should work as well.
+ *
+ * A BA window size of 0 (=> a one frame entry bitmap) means that there is
+ * no additional rx_reo_queue_ext descriptor following rx_reo_queue in
+ * memory.
+ * A BA window size of 1 - 105, means that there is 1 rx_reo_queue_ext.
+ * A BA window size of 106 - 210, means that there are 2 rx_reo_queue_ext.
+ * A BA window size of 211 - 256, means that there are 3 rx_reo_queue_ext.
+ * pn_check_needed, pn_shall_be_even, pn_shall_be_uneven, pn_handling_enable,
+ * pn_size
+ * REO shall perform the PN increment check, even number check, uneven
+ * number check, PN error check and size of the PN field check.
+ * ignore_ampdu_flag
+ * REO shall ignore the ampdu_flag on entrance descriptor for this queue.
+ *
+ * svld
+ * Sequence number in next field is valid one.
+ * ssn
+ * Starting Sequence number of the session.
+ * current_index
+ * Points to last forwarded packet
+ * seq_2k_error_detected_flag
+ * REO has detected a 2k error jump in the sequence number and from that
+ * moment forward, all new frames are forwarded directly to FW, without
+ * duplicate detect, reordering, etc.
+ * pn_error_detected_flag
+ * REO has detected a PN error.
+ */
+
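+/* The BA window size to extension descriptor mapping described above is
+ * what ath12k_hal_reo_qdesc_size() in hal_rx.c implements, roughly:
+ *
+ *    if (ba_window_size <= 1)
+ *            num_ext_desc = (tid != HAL_DESC_REO_NON_QOS_TID) ? 1 : 0;
+ *    else if (ba_window_size <= 105)
+ *            num_ext_desc = 1;
+ *    else if (ba_window_size <= 210)
+ *            num_ext_desc = 2;
+ *    else
+ *            num_ext_desc = 3;
+ */
+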
+#define HAL_REO_UPD_RX_QUEUE_INFO0_QUEUE_ADDR_HI GENMASK(7, 0)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_RX_QUEUE_NUM BIT(8)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_VLD BIT(9)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_ASSOC_LNK_DESC_CNT BIT(10)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_DIS_DUP_DETECTION BIT(11)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_SOFT_REORDER_EN BIT(12)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_AC BIT(13)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_BAR BIT(14)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_RETRY BIT(15)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_CHECK_2K_MODE BIT(16)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_OOR_MODE BIT(17)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_BA_WINDOW_SIZE BIT(18)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN_CHECK BIT(19)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_EVEN_PN BIT(20)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_UNEVEN_PN BIT(21)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN_HANDLE_ENABLE BIT(22)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN_SIZE BIT(23)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_IGNORE_AMPDU_FLG BIT(24)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_SVLD BIT(25)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_SSN BIT(26)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_SEQ_2K_ERR BIT(27)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN_ERR BIT(28)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN_VALID BIT(29)
+#define HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN BIT(30)
+
+#define HAL_REO_UPD_RX_QUEUE_INFO1_RX_QUEUE_NUMBER GENMASK(15, 0)
+#define HAL_REO_UPD_RX_QUEUE_INFO1_VLD BIT(16)
+#define HAL_REO_UPD_RX_QUEUE_INFO1_ASSOC_LNK_DESC_COUNTER GENMASK(18, 17)
+#define HAL_REO_UPD_RX_QUEUE_INFO1_DIS_DUP_DETECTION BIT(19)
+#define HAL_REO_UPD_RX_QUEUE_INFO1_SOFT_REORDER_EN BIT(20)
+#define HAL_REO_UPD_RX_QUEUE_INFO1_AC GENMASK(22, 21)
+#define HAL_REO_UPD_RX_QUEUE_INFO1_BAR BIT(23)
+#define HAL_REO_UPD_RX_QUEUE_INFO1_RETRY BIT(24)
+#define HAL_REO_UPD_RX_QUEUE_INFO1_CHECK_2K_MODE BIT(25)
+#define HAL_REO_UPD_RX_QUEUE_INFO1_OOR_MODE BIT(26)
+#define HAL_REO_UPD_RX_QUEUE_INFO1_PN_CHECK BIT(27)
+#define HAL_REO_UPD_RX_QUEUE_INFO1_EVEN_PN BIT(28)
+#define HAL_REO_UPD_RX_QUEUE_INFO1_UNEVEN_PN BIT(29)
+#define HAL_REO_UPD_RX_QUEUE_INFO1_PN_HANDLE_ENABLE BIT(30)
+#define HAL_REO_UPD_RX_QUEUE_INFO1_IGNORE_AMPDU_FLG BIT(31)
+
+#define HAL_REO_UPD_RX_QUEUE_INFO2_BA_WINDOW_SIZE GENMASK(7, 0)
+#define HAL_REO_UPD_RX_QUEUE_INFO2_PN_SIZE GENMASK(9, 8)
+#define HAL_REO_UPD_RX_QUEUE_INFO2_SVLD BIT(10)
+#define HAL_REO_UPD_RX_QUEUE_INFO2_SSN GENMASK(22, 11)
+#define HAL_REO_UPD_RX_QUEUE_INFO2_SEQ_2K_ERR BIT(23)
+#define HAL_REO_UPD_RX_QUEUE_INFO2_PN_ERR BIT(24)
+#define HAL_REO_UPD_RX_QUEUE_INFO2_PN_VALID BIT(25)
+
+struct hal_reo_update_rx_queue {
+ struct hal_reo_cmd_hdr cmd;
+ __le32 queue_addr_lo;
+ __le32 info0;
+ __le32 info1;
+ __le32 info2;
+ __le32 pn[4];
+} __packed;
+
+#define HAL_REO_UNBLOCK_CACHE_INFO0_UNBLK_CACHE BIT(0)
+#define HAL_REO_UNBLOCK_CACHE_INFO0_RESOURCE_IDX GENMASK(2, 1)
+
+struct hal_reo_unblock_cache {
+ struct hal_reo_cmd_hdr cmd;
+ __le32 info0;
+ __le32 rsvd[7];
+} __packed;
+
+enum hal_reo_exec_status {
+ HAL_REO_EXEC_STATUS_SUCCESS,
+ HAL_REO_EXEC_STATUS_BLOCKED,
+ HAL_REO_EXEC_STATUS_FAILED,
+ HAL_REO_EXEC_STATUS_RESOURCE_BLOCKED,
+};
+
+#define HAL_REO_STATUS_HDR_INFO0_STATUS_NUM GENMASK(15, 0)
+#define HAL_REO_STATUS_HDR_INFO0_EXEC_TIME GENMASK(25, 16)
+#define HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS GENMASK(27, 26)
+
+struct hal_reo_status_hdr {
+ __le32 info0;
+ __le32 timestamp;
+} __packed;
+
+/* hal_reo_status_hdr
+ * Producer: REO
+ * Consumer: SW
+ *
+ * status_num
+ * The value in this field is equal to value of the reo command
+ * number. This field helps to correlate the statuses with the REO
+ * commands.
+ *
+ * execution_time (in us)
+ * The amount of time REO took to execute the command. Note that
+ * this time does not include the duration of the command waiting
+ * in the command ring, before the execution started.
+ *
+ * execution_status
+ * Execution status of the command. Values are defined in
+ * enum hal_reo_exec_status.
+ */
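+
+/* Every REO status parser in hal_rx.c starts by extracting these two
+ * header fields, e.g.:
+ *
+ *    cmd_num = le32_get_bits(hdr->info0,
+ *                            HAL_REO_STATUS_HDR_INFO0_STATUS_NUM);
+ *    exec_status = le32_get_bits(hdr->info0,
+ *                                HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS);
+ */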
+#define HAL_REO_GET_QUEUE_STATS_STATUS_INFO0_SSN GENMASK(11, 0)
+#define HAL_REO_GET_QUEUE_STATS_STATUS_INFO0_CUR_IDX GENMASK(21, 12)
+
+#define HAL_REO_GET_QUEUE_STATS_STATUS_INFO1_MPDU_COUNT GENMASK(6, 0)
+#define HAL_REO_GET_QUEUE_STATS_STATUS_INFO1_MSDU_COUNT GENMASK(31, 7)
+
+#define HAL_REO_GET_QUEUE_STATS_STATUS_INFO2_WINDOW_JMP2K GENMASK(3, 0)
+#define HAL_REO_GET_QUEUE_STATS_STATUS_INFO2_TIMEOUT_COUNT GENMASK(9, 4)
+#define HAL_REO_GET_QUEUE_STATS_STATUS_INFO2_FDTB_COUNT GENMASK(15, 10)
+#define HAL_REO_GET_QUEUE_STATS_STATUS_INFO2_DUPLICATE_COUNT GENMASK(31, 16)
+
+#define HAL_REO_GET_QUEUE_STATS_STATUS_INFO3_FIO_COUNT GENMASK(23, 0)
+#define HAL_REO_GET_QUEUE_STATS_STATUS_INFO3_BAR_RCVD_CNT GENMASK(31, 24)
+
+#define HAL_REO_GET_QUEUE_STATS_STATUS_INFO4_LATE_RX_MPDU GENMASK(11, 0)
+#define HAL_REO_GET_QUEUE_STATS_STATUS_INFO4_HOLE_COUNT GENMASK(27, 12)
+
+#define HAL_REO_GET_QUEUE_STATS_STATUS_INFO5_LOOPING_CNT GENMASK(31, 28)
+
+struct hal_reo_get_queue_stats_status {
+ struct hal_reo_status_hdr hdr;
+ __le32 info0;
+ __le32 pn[4];
+ __le32 last_rx_enqueue_timestamp;
+ __le32 last_rx_dequeue_timestamp;
+ __le32 rx_bitmap[9];
+ __le32 info1;
+ __le32 info2;
+ __le32 info3;
+ __le32 num_mpdu_frames;
+ __le32 num_msdu_frames;
+ __le32 total_bytes;
+ __le32 info4;
+ __le32 info5;
+} __packed;
+
+/* hal_reo_get_queue_stats_status
+ * Producer: REO
+ * Consumer: SW
+ *
+ * status_hdr
+ * Details that can link this status with the original command. It
+ * also contains info on how long REO took to execute this command.
+ *
+ * ssn
+ * Starting Sequence number of the session, this changes whenever
+ * window moves (can be filled by SW then maintained by REO).
+ *
+ * current_index
+ * Points to last forwarded packet.
+ *
+ * pn
+ * Bits of the PN number.
+ *
+ * last_rx_enqueue_timestamp
+ * last_rx_dequeue_timestamp
+ * Timestamp of arrival of the last MPDU for this queue and
+ * Timestamp of forwarding an MPDU accordingly.
+ *
+ * rx_bitmap
+ * When a bit is set, the corresponding frame is currently held
+ * in the re-order queue. The bitmap is fully managed by HW.
+ *
+ * current_mpdu_count
+ * current_msdu_count
+ * The number of MPDUs and MSDUs in the queue.
+ *
+ * timeout_count
+ * The number of times REO started forwarding frames even though
+ * there is a hole in the bitmap. Forwarding reason is timeout.
+ *
+ * forward_due_to_bar_count
+ * The number of times REO started forwarding frames even though
+ * there is a hole in the bitmap. Fwd reason is reception of BAR.
+ *
+ * duplicate_count
+ * The number of duplicate frames that have been detected.
+ *
+ * frames_in_order_count
+ * The number of frames that have been received in order (without
+ * a hole that prevented them from being forwarded immediately).
+ *
+ * bar_received_count
+ * The number of times a BAR frame is received.
+ *
+ * mpdu_frames_processed_count
+ * msdu_frames_processed_count
+ * The total number of MPDU/MSDU frames that have been processed.
+ *
+ * total_bytes
+ * An approximation of the number of bytes received for this queue.
+ *
+ * late_receive_mpdu_count
+ * The number of MPDUs received after the window had already moved
+ * on. The 'late' sequence window is defined as
+ * (Window SSN - 256) - (Window SSN - 1).
+ *
+ * window_jump_2k
+ * The number of times the window moved more than 2K
+ *
+ * hole_count
+ * The number of times a hole was created in the receive bitmap.
+ *
+ * looping_count
+ * A count value that indicates the number of times the producer of
+ * entries into this Ring has looped around the ring.
+ */
+
+#define HAL_REO_STATUS_LOOP_CNT GENMASK(31, 28)
+
+#define HAL_REO_FLUSH_QUEUE_INFO0_ERR_DETECTED BIT(0)
+#define HAL_REO_FLUSH_QUEUE_INFO0_RSVD GENMASK(31, 1)
+#define HAL_REO_FLUSH_QUEUE_INFO1_RSVD GENMASK(27, 0)
+
+struct hal_reo_flush_queue_status {
+ struct hal_reo_status_hdr hdr;
+ __le32 info0;
+ __le32 rsvd0[21];
+ __le32 info1;
+} __packed;
+
+/* hal_reo_flush_queue_status
+ * Producer: REO
+ * Consumer: SW
+ *
+ * status_hdr
+ * Details that can link this status with the original command. It
+ * also contains info on how long REO took to execute this command.
+ *
+ * error_detected
+ * Status of blocking resource
+ *
+ * 0 - No error has been detected while executing this command
+ * 1 - Error detected. The resource to be used for blocking was
+ * already in use.
+ *
+ * looping_count
+ * A count value that indicates the number of times the producer of
+ * entries into this Ring has looped around the ring.
+ */
+
+#define HAL_REO_FLUSH_CACHE_STATUS_INFO0_IS_ERR BIT(0)
+#define HAL_REO_FLUSH_CACHE_STATUS_INFO0_BLOCK_ERR_CODE GENMASK(2, 1)
+#define HAL_REO_FLUSH_CACHE_STATUS_INFO0_FLUSH_STATUS_HIT BIT(8)
+#define HAL_REO_FLUSH_CACHE_STATUS_INFO0_FLUSH_DESC_TYPE GENMASK(11, 9)
+#define HAL_REO_FLUSH_CACHE_STATUS_INFO0_FLUSH_CLIENT_ID GENMASK(15, 12)
+#define HAL_REO_FLUSH_CACHE_STATUS_INFO0_FLUSH_ERR GENMASK(17, 16)
+#define HAL_REO_FLUSH_CACHE_STATUS_INFO0_FLUSH_COUNT GENMASK(25, 18)
+
+struct hal_reo_flush_cache_status {
+ struct hal_reo_status_hdr hdr;
+ __le32 info0;
+ __le32 rsvd0[21];
+ __le32 info1;
+} __packed;
+
+/* hal_reo_flush_cache_status
+ * Producer: REO
+ * Consumer: SW
+ *
+ * status_hdr
+ * Details that can link this status with the original command. It
+ * also contains info on how long REO took to execute this command.
+ *
+ * error_detected
+ * Status for blocking resource handling
+ *
+ * 0 - No error has been detected while executing this command
+ * 1 - An error in the blocking resource management was detected
+ *
+ * block_error_details
+ * only valid when error_detected is set
+ *
+ * 0 - No blocking related errors found
+ * 1 - Blocking resource is already in use
+ * 2 - Resource requested to be unblocked, was not blocked
+ *
+ * cache_controller_flush_status_hit
+ * The status that the cache controller returned on executing the
+ * flush command.
+ *
+ * 0 - miss; 1 - hit
+ *
+ * cache_controller_flush_status_desc_type
+ * Flush descriptor type
+ *
+ * cache_controller_flush_status_client_id
+ * Module who made the flush request
+ *
+ * In REO, this is always 0
+ *
+ * cache_controller_flush_status_error
+ * Error condition
+ *
+ * 0 - No error found
+ * 1 - HW interface is still busy
+ * 2 - Line currently locked. Used for one line flush command
+ * 3 - At least one line is still locked.
+ * Used for cache flush command.
+ *
+ * cache_controller_flush_count
+ * The number of lines that were actually flushed out
+ *
+ * looping_count
+ * A count value that indicates the number of times the producer of
+ * entries into this Ring has looped around the ring.
+ */
+
+#define HAL_REO_UNBLOCK_CACHE_STATUS_INFO0_IS_ERR BIT(0)
+#define HAL_REO_UNBLOCK_CACHE_STATUS_INFO0_TYPE BIT(1)
+
+struct hal_reo_unblock_cache_status {
+ struct hal_reo_status_hdr hdr;
+ __le32 info0;
+ __le32 rsvd0[21];
+ __le32 info1;
+} __packed;
+
+/* hal_reo_unblock_cache_status
+ * Producer: REO
+ * Consumer: SW
+ *
+ * status_hdr
+ * Details that can link this status with the original command. It
+ * also contains info on how long REO took to execute this command.
+ *
+ * error_detected
+ * 0 - No error has been detected while executing this command
+ * 1 - The blocking resource was not in use, and therefore it could
+ * not be unblocked.
+ *
+ * unblock_type
+ * Reference to the type of unblock command
+ * 0 - Unblock a blocking resource
+ * 1 - The entire cache usage is unblocked
+ *
+ * looping_count
+ * A count value that indicates the number of times the producer of
+ * entries into this Ring has looped around the ring.
+ */
+
+#define HAL_REO_FLUSH_TIMEOUT_STATUS_INFO0_IS_ERR BIT(0)
+#define HAL_REO_FLUSH_TIMEOUT_STATUS_INFO0_LIST_EMPTY BIT(1)
+
+#define HAL_REO_FLUSH_TIMEOUT_STATUS_INFO1_REL_DESC_COUNT GENMASK(15, 0)
+#define HAL_REO_FLUSH_TIMEOUT_STATUS_INFO1_FWD_BUF_COUNT GENMASK(31, 16)
+
+struct hal_reo_flush_timeout_list_status {
+ struct hal_reo_status_hdr hdr;
+ __le32 info0;
+ __le32 info1;
+ __le32 rsvd0[20];
+ __le32 info2;
+} __packed;
+
+/* hal_reo_flush_timeout_list_status
+ * Producer: REO
+ * Consumer: SW
+ *
+ * status_hdr
+ * Details that can link this status with the original command. It
+ * also contains info on how long REO took to execute this command.
+ *
+ * error_detected
+ * 0 - No error has been detected while executing this command
+ * 1 - Command not properly executed and returned with error
+ *
+ * timeout_list_empty
+ * When set, REO has depleted the timeout list and all entries are
+ * gone.
+ *
+ * release_desc_count
+ * Producer: SW; Consumer: REO
+ * The number of link descriptors released
+ *
+ * forward_buf_count
+ * Producer: SW; Consumer: REO
+ * The number of buffers forwarded to the REO destination rings
+ *
+ * looping_count
+ * A count value that indicates the number of times the producer of
+ * entries into this Ring has looped around the ring.
+ */
+
+#define HAL_REO_DESC_THRESH_STATUS_INFO0_THRESH_INDEX GENMASK(1, 0)
+#define HAL_REO_DESC_THRESH_STATUS_INFO1_LINK_DESC_COUNTER0 GENMASK(23, 0)
+#define HAL_REO_DESC_THRESH_STATUS_INFO2_LINK_DESC_COUNTER1 GENMASK(23, 0)
+#define HAL_REO_DESC_THRESH_STATUS_INFO3_LINK_DESC_COUNTER2 GENMASK(23, 0)
+#define HAL_REO_DESC_THRESH_STATUS_INFO4_LINK_DESC_COUNTER_SUM GENMASK(25, 0)
+
+struct hal_reo_desc_thresh_reached_status {
+ struct hal_reo_status_hdr hdr;
+ __le32 info0;
+ __le32 info1;
+ __le32 info2;
+ __le32 info3;
+ __le32 info4;
+ __le32 rsvd0[17];
+ __le32 info5;
+} __packed;
+
+/* hal_reo_desc_thresh_reached_status
+ * Producer: REO
+ * Consumer: SW
+ *
+ * status_hdr
+ * Details that can link this status with the original command. It
+ * also contains info on how long REO took to execute this command.
+ *
+ * threshold_index
+ * The index of the threshold register whose value got reached
+ *
+ * link_descriptor_counter0
+ * link_descriptor_counter1
+ * link_descriptor_counter2
+ * link_descriptor_counter_sum
+ * Value of the respective counters at generation of this message
+ *
+ * looping_count
+ * A count value that indicates the number of times the producer of
+ * entries into this Ring has looped around the ring.
+ */
+
+#define HAL_TCL_ENTRANCE_FROM_PPE_RING_INFO0_DATA_LENGTH GENMASK(13, 0)
+#define HAL_TCL_ENTRANCE_FROM_PPE_RING_INFO0_L4_CSUM_STATUS BIT(14)
+#define HAL_TCL_ENTRANCE_FROM_PPE_RING_INFO0_L3_CSUM_STATUS BIT(15)
+#define HAL_TCL_ENTRANCE_FROM_PPE_RING_INFO0_PID GENMASK(27, 24)
+#define HAL_TCL_ENTRANCE_FROM_PPE_RING_INFO0_QDISC BIT(28)
+#define HAL_TCL_ENTRANCE_FROM_PPE_RING_INFO0_MULTICAST BIT(29)
+#define HAL_TCL_ENTRANCE_FROM_PPE_RING_INFO0_MORE BIT(30)
+#define HAL_TCL_ENTRANCE_FROM_PPE_RING_INFO0_VALID_TOGGLE BIT(31)
+
+struct hal_tcl_entrance_from_ppe_ring {
+ __le32 buffer_addr;
+ __le32 info0;
+} __packed;
+
+struct hal_mon_buf_ring {
+ __le32 paddr_lo;
+ __le32 paddr_hi;
+ __le64 cookie;
+};
+
+/* hal_mon_buf_ring
+ * Producer : SW
+ * Consumer : Monitor
+ *
+ * paddr_lo
+ * Lower 32-bit physical address of the buffer pointer from the source ring.
+ * paddr_hi
+ * bit range 7-0 : upper 8 bits of the physical address.
+ * bit range 31-8 : reserved.
+ * cookie
+ * 64 bit cookie of the buffer, consumed by RxMon/TxMon.
+ */
+
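+/* SW fills one entry per DMA-mapped status buffer, roughly as below
+ * (a sketch; the cookie encoding is up to the caller):
+ *
+ *    entry->paddr_lo = cpu_to_le32(lower_32_bits(paddr));
+ *    entry->paddr_hi = cpu_to_le32(upper_32_bits(paddr) & 0xff);
+ *    entry->cookie = cpu_to_le64(cookie);
+ */
+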
+#define HAL_MON_DEST_COOKIE_BUF_ID GENMASK(17, 0)
+
+#define HAL_MON_DEST_INFO0_END_OFFSET GENMASK(15, 0)
+#define HAL_MON_DEST_INFO0_FLUSH_DETECTED BIT(16)
+#define HAL_MON_DEST_INFO0_END_OF_PPDU BIT(17)
+#define HAL_MON_DEST_INFO0_INITIATOR BIT(18)
+#define HAL_MON_DEST_INFO0_EMPTY_DESC BIT(19)
+#define HAL_MON_DEST_INFO0_RING_ID GENMASK(27, 20)
+#define HAL_MON_DEST_INFO0_LOOPING_COUNT GENMASK(31, 28)
+
+struct hal_mon_dest_desc {
+ __le32 cookie;
+ __le32 reserved;
+ __le32 ppdu_id;
+ __le32 info0;
+};
+
+/* hal_mon_dest_desc
+ * Producer : TxMon/RxMon
+ * Consumer : SW
+ * cookie
+ * bits 0-17: buf_id to track the skb's vaddr.
+ * ppdu_id
+ * Phy ppdu_id
+ * end_offset
+ * The offset into the status buffer where DMA ended, i.e., offset to the
+ * TLV + last TLV size.
+ * flush_detected
+ * Indicates whether 'tx_flush' or 'rx_flush' occurred.
+ * end_of_ppdu
+ * Indicates end of ppdu.
+ * pmac_id
+ * Indicates the PMAC that received the frame.
+ * empty_descriptor
+ * This descriptor is written on flush or end of ppdu or end of status
+ * buffer.
+ * ring_id
+ * updated by SRNG.
+ * looping_count
+ * updated by SRNG.
+ */
+
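+/* A minimal consumer-side decode of this descriptor (sketch):
+ *
+ *    buf_id = le32_get_bits(desc->cookie, HAL_MON_DEST_COOKIE_BUF_ID);
+ *    end_offset = le32_get_bits(desc->info0,
+ *                               HAL_MON_DEST_INFO0_END_OFFSET);
+ *    end_of_ppdu = le32_get_bits(desc->info0,
+ *                                HAL_MON_DEST_INFO0_END_OF_PPDU);
+ */
+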
+#endif /* ATH12K_HAL_DESC_H */
diff --git a/drivers/net/wireless/ath/ath12k/hal_rx.c b/drivers/net/wireless/ath/ath12k/hal_rx.c
new file mode 100644
index 000000000000..ee61a6462fdc
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/hal_rx.c
@@ -0,0 +1,850 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include "debug.h"
+#include "hal.h"
+#include "hal_tx.h"
+#include "hal_rx.h"
+#include "hal_desc.h"
+#include "hif.h"
+
+static void ath12k_hal_reo_set_desc_hdr(struct hal_desc_header *hdr,
+ u8 owner, u8 buffer_type, u32 magic)
+{
+ hdr->info0 = le32_encode_bits(owner, HAL_DESC_HDR_INFO0_OWNER) |
+ le32_encode_bits(buffer_type, HAL_DESC_HDR_INFO0_BUF_TYPE);
+
+ /* Magic pattern in reserved bits for debugging */
+ hdr->info0 |= le32_encode_bits(magic, HAL_DESC_HDR_INFO0_DBG_RESERVED);
+}
+
+static int ath12k_hal_reo_cmd_queue_stats(struct hal_tlv_64_hdr *tlv,
+ struct ath12k_hal_reo_cmd *cmd)
+{
+ struct hal_reo_get_queue_stats *desc;
+
+ tlv->tl = u32_encode_bits(HAL_REO_GET_QUEUE_STATS, HAL_TLV_HDR_TAG) |
+ u32_encode_bits(sizeof(*desc), HAL_TLV_HDR_LEN);
+
+ desc = (struct hal_reo_get_queue_stats *)tlv->value;
+ memset_startat(desc, 0, queue_addr_lo);
+
+ desc->cmd.info0 &= ~cpu_to_le32(HAL_REO_CMD_HDR_INFO0_STATUS_REQUIRED);
+ if (cmd->flag & HAL_REO_CMD_FLG_NEED_STATUS)
+ desc->cmd.info0 |= cpu_to_le32(HAL_REO_CMD_HDR_INFO0_STATUS_REQUIRED);
+
+ desc->queue_addr_lo = cpu_to_le32(cmd->addr_lo);
+ desc->info0 = le32_encode_bits(cmd->addr_hi,
+ HAL_REO_GET_QUEUE_STATS_INFO0_QUEUE_ADDR_HI);
+ if (cmd->flag & HAL_REO_CMD_FLG_STATS_CLEAR)
+ desc->info0 |= cpu_to_le32(HAL_REO_GET_QUEUE_STATS_INFO0_CLEAR_STATS);
+
+ return le32_get_bits(desc->cmd.info0, HAL_REO_CMD_HDR_INFO0_CMD_NUMBER);
+}
+
+static int ath12k_hal_reo_cmd_flush_cache(struct ath12k_hal *hal,
+ struct hal_tlv_64_hdr *tlv,
+ struct ath12k_hal_reo_cmd *cmd)
+{
+ struct hal_reo_flush_cache *desc;
+ u8 avail_slot = ffz(hal->avail_blk_resource);
+
+ if (cmd->flag & HAL_REO_CMD_FLG_FLUSH_BLOCK_LATER) {
+ if (avail_slot >= HAL_MAX_AVAIL_BLK_RES)
+ return -ENOSPC;
+
+ hal->current_blk_index = avail_slot;
+ }
+
+ tlv->tl = u32_encode_bits(HAL_REO_FLUSH_CACHE, HAL_TLV_HDR_TAG) |
+ u32_encode_bits(sizeof(*desc), HAL_TLV_HDR_LEN);
+
+ desc = (struct hal_reo_flush_cache *)tlv->value;
+ memset_startat(desc, 0, cache_addr_lo);
+
+ desc->cmd.info0 &= ~cpu_to_le32(HAL_REO_CMD_HDR_INFO0_STATUS_REQUIRED);
+ if (cmd->flag & HAL_REO_CMD_FLG_NEED_STATUS)
+ desc->cmd.info0 |= cpu_to_le32(HAL_REO_CMD_HDR_INFO0_STATUS_REQUIRED);
+
+ desc->cache_addr_lo = cpu_to_le32(cmd->addr_lo);
+ desc->info0 = le32_encode_bits(cmd->addr_hi,
+ HAL_REO_FLUSH_CACHE_INFO0_CACHE_ADDR_HI);
+
+ if (cmd->flag & HAL_REO_CMD_FLG_FLUSH_FWD_ALL_MPDUS)
+ desc->info0 |= cpu_to_le32(HAL_REO_FLUSH_CACHE_INFO0_FWD_ALL_MPDUS);
+
+ if (cmd->flag & HAL_REO_CMD_FLG_FLUSH_BLOCK_LATER) {
+ desc->info0 |= cpu_to_le32(HAL_REO_FLUSH_CACHE_INFO0_BLOCK_CACHE_USAGE);
+ desc->info0 |=
+ le32_encode_bits(avail_slot,
+ HAL_REO_FLUSH_CACHE_INFO0_BLOCK_RESRC_IDX);
+ }
+
+ if (cmd->flag & HAL_REO_CMD_FLG_FLUSH_NO_INVAL)
+ desc->info0 |= cpu_to_le32(HAL_REO_FLUSH_CACHE_INFO0_FLUSH_WO_INVALIDATE);
+
+ if (cmd->flag & HAL_REO_CMD_FLG_FLUSH_ALL)
+ desc->info0 |= cpu_to_le32(HAL_REO_FLUSH_CACHE_INFO0_FLUSH_ALL);
+
+ return le32_get_bits(desc->cmd.info0, HAL_REO_CMD_HDR_INFO0_CMD_NUMBER);
+}
+
+static int ath12k_hal_reo_cmd_update_rx_queue(struct hal_tlv_64_hdr *tlv,
+ struct ath12k_hal_reo_cmd *cmd)
+{
+ struct hal_reo_update_rx_queue *desc;
+
+ tlv->tl = u32_encode_bits(HAL_REO_UPDATE_RX_REO_QUEUE, HAL_TLV_HDR_TAG) |
+ u32_encode_bits(sizeof(*desc), HAL_TLV_HDR_LEN);
+
+ desc = (struct hal_reo_update_rx_queue *)tlv->value;
+ memset_startat(desc, 0, queue_addr_lo);
+
+ desc->cmd.info0 &= ~cpu_to_le32(HAL_REO_CMD_HDR_INFO0_STATUS_REQUIRED);
+ if (cmd->flag & HAL_REO_CMD_FLG_NEED_STATUS)
+ desc->cmd.info0 |= cpu_to_le32(HAL_REO_CMD_HDR_INFO0_STATUS_REQUIRED);
+
+ desc->queue_addr_lo = cpu_to_le32(cmd->addr_lo);
+ desc->info0 =
+ le32_encode_bits(cmd->addr_hi,
+ HAL_REO_UPD_RX_QUEUE_INFO0_QUEUE_ADDR_HI) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_RX_QUEUE_NUM),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_RX_QUEUE_NUM) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_VLD),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_VLD) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_ALDC),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_ASSOC_LNK_DESC_CNT) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_DIS_DUP_DETECTION),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_DIS_DUP_DETECTION) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_SOFT_REORDER_EN),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_SOFT_REORDER_EN) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_AC),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_AC) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_BAR),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_BAR) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_RETRY),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_RETRY) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_CHECK_2K_MODE),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_CHECK_2K_MODE) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_OOR_MODE),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_OOR_MODE) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_BA_WINDOW_SIZE),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_BA_WINDOW_SIZE) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_PN_CHECK),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN_CHECK) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_EVEN_PN),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_EVEN_PN) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_UNEVEN_PN),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_UNEVEN_PN) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_PN_HANDLE_ENABLE),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN_HANDLE_ENABLE) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_PN_SIZE),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN_SIZE) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_IGNORE_AMPDU_FLG),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_IGNORE_AMPDU_FLG) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_SVLD),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_SVLD) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_SSN),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_SSN) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_SEQ_2K_ERR),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_SEQ_2K_ERR) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_PN_VALID),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN_VALID) |
+ le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_PN),
+ HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN);
+
+ desc->info1 =
+ le32_encode_bits(cmd->rx_queue_num,
+ HAL_REO_UPD_RX_QUEUE_INFO1_RX_QUEUE_NUMBER) |
+ le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_VLD),
+ HAL_REO_UPD_RX_QUEUE_INFO1_VLD) |
+ le32_encode_bits(u32_get_bits(cmd->upd1, HAL_REO_CMD_UPD1_ALDC),
+ HAL_REO_UPD_RX_QUEUE_INFO1_ASSOC_LNK_DESC_COUNTER) |
+ le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_DIS_DUP_DETECTION),
+ HAL_REO_UPD_RX_QUEUE_INFO1_DIS_DUP_DETECTION) |
+ le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_SOFT_REORDER_EN),
+ HAL_REO_UPD_RX_QUEUE_INFO1_SOFT_REORDER_EN) |
+ le32_encode_bits(u32_get_bits(cmd->upd1, HAL_REO_CMD_UPD1_AC),
+ HAL_REO_UPD_RX_QUEUE_INFO1_AC) |
+ le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_BAR),
+ HAL_REO_UPD_RX_QUEUE_INFO1_BAR) |
+ le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_CHECK_2K_MODE),
+ HAL_REO_UPD_RX_QUEUE_INFO1_CHECK_2K_MODE) |
+ le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_RETRY),
+ HAL_REO_UPD_RX_QUEUE_INFO1_RETRY) |
+ le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_OOR_MODE),
+ HAL_REO_UPD_RX_QUEUE_INFO1_OOR_MODE) |
+ le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_PN_CHECK),
+ HAL_REO_UPD_RX_QUEUE_INFO1_PN_CHECK) |
+ le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_EVEN_PN),
+ HAL_REO_UPD_RX_QUEUE_INFO1_EVEN_PN) |
+ le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_UNEVEN_PN),
+ HAL_REO_UPD_RX_QUEUE_INFO1_UNEVEN_PN) |
+ le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_PN_HANDLE_ENABLE),
+ HAL_REO_UPD_RX_QUEUE_INFO1_PN_HANDLE_ENABLE) |
+ le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_IGNORE_AMPDU_FLG),
+ HAL_REO_UPD_RX_QUEUE_INFO1_IGNORE_AMPDU_FLG);
+
+ if (cmd->pn_size == 24)
+ cmd->pn_size = HAL_RX_REO_QUEUE_PN_SIZE_24;
+ else if (cmd->pn_size == 48)
+ cmd->pn_size = HAL_RX_REO_QUEUE_PN_SIZE_48;
+ else if (cmd->pn_size == 128)
+ cmd->pn_size = HAL_RX_REO_QUEUE_PN_SIZE_128;
+
+ if (cmd->ba_window_size < 1)
+ cmd->ba_window_size = 1;
+
+ if (cmd->ba_window_size == 1)
+ cmd->ba_window_size++;
+
+ desc->info2 =
+ le32_encode_bits(cmd->ba_window_size - 1,
+ HAL_REO_UPD_RX_QUEUE_INFO2_BA_WINDOW_SIZE) |
+ le32_encode_bits(cmd->pn_size, HAL_REO_UPD_RX_QUEUE_INFO2_PN_SIZE) |
+ le32_encode_bits(!!(cmd->upd2 & HAL_REO_CMD_UPD2_SVLD),
+ HAL_REO_UPD_RX_QUEUE_INFO2_SVLD) |
+ le32_encode_bits(u32_get_bits(cmd->upd2, HAL_REO_CMD_UPD2_SSN),
+ HAL_REO_UPD_RX_QUEUE_INFO2_SSN) |
+ le32_encode_bits(!!(cmd->upd2 & HAL_REO_CMD_UPD2_SEQ_2K_ERR),
+ HAL_REO_UPD_RX_QUEUE_INFO2_SEQ_2K_ERR) |
+ le32_encode_bits(!!(cmd->upd2 & HAL_REO_CMD_UPD2_PN_ERR),
+ HAL_REO_UPD_RX_QUEUE_INFO2_PN_ERR);
+
+ return le32_get_bits(desc->cmd.info0, HAL_REO_CMD_HDR_INFO0_CMD_NUMBER);
+}
+
+int ath12k_hal_reo_cmd_send(struct ath12k_base *ab, struct hal_srng *srng,
+ enum hal_reo_cmd_type type,
+ struct ath12k_hal_reo_cmd *cmd)
+{
+ struct hal_tlv_64_hdr *reo_desc;
+ int ret;
+
+ spin_lock_bh(&srng->lock);
+
+ ath12k_hal_srng_access_begin(ab, srng);
+ reo_desc = ath12k_hal_srng_src_get_next_entry(ab, srng);
+ if (!reo_desc) {
+ ret = -ENOBUFS;
+ goto out;
+ }
+
+ switch (type) {
+ case HAL_REO_CMD_GET_QUEUE_STATS:
+ ret = ath12k_hal_reo_cmd_queue_stats(reo_desc, cmd);
+ break;
+ case HAL_REO_CMD_FLUSH_CACHE:
+ ret = ath12k_hal_reo_cmd_flush_cache(&ab->hal, reo_desc, cmd);
+ break;
+ case HAL_REO_CMD_UPDATE_RX_QUEUE:
+ ret = ath12k_hal_reo_cmd_update_rx_queue(reo_desc, cmd);
+ break;
+ case HAL_REO_CMD_FLUSH_QUEUE:
+ case HAL_REO_CMD_UNBLOCK_CACHE:
+ case HAL_REO_CMD_FLUSH_TIMEOUT_LIST:
+ ath12k_warn(ab, "Unsupported reo command %d\n", type);
+ ret = -ENOTSUPP;
+ break;
+ default:
+ ath12k_warn(ab, "Unknown reo command %d\n", type);
+ ret = -EINVAL;
+ break;
+ }
+
+out:
+ ath12k_hal_srng_access_end(ab, srng);
+ spin_unlock_bh(&srng->lock);
+
+ return ret;
+}
+
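+/* Example caller flow (a sketch; error handling and the caller's own
+ * state management omitted): flush a REO queue descriptor out of the
+ * cache and request a status update on the REO status ring:
+ *
+ *    struct ath12k_hal_reo_cmd cmd = {};
+ *
+ *    cmd.addr_lo = lower_32_bits(paddr);
+ *    cmd.addr_hi = upper_32_bits(paddr);
+ *    cmd.flag = HAL_REO_CMD_FLG_NEED_STATUS;
+ *    ret = ath12k_hal_reo_cmd_send(ab, srng, HAL_REO_CMD_FLUSH_CACHE,
+ *                                  &cmd);
+ */
+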
+void ath12k_hal_rx_buf_addr_info_set(struct ath12k_buffer_addr *binfo,
+ dma_addr_t paddr, u32 cookie, u8 manager)
+{
+ u32 paddr_lo, paddr_hi;
+
+ paddr_lo = lower_32_bits(paddr);
+ paddr_hi = upper_32_bits(paddr);
+ binfo->info0 = le32_encode_bits(paddr_lo, BUFFER_ADDR_INFO0_ADDR);
+ binfo->info1 = le32_encode_bits(paddr_hi, BUFFER_ADDR_INFO1_ADDR) |
+ le32_encode_bits(cookie, BUFFER_ADDR_INFO1_SW_COOKIE) |
+ le32_encode_bits(manager, BUFFER_ADDR_INFO1_RET_BUF_MGR);
+}
+
+void ath12k_hal_rx_buf_addr_info_get(struct ath12k_buffer_addr *binfo,
+ dma_addr_t *paddr,
+ u32 *cookie, u8 *rbm)
+{
+ *paddr = (((u64)le32_get_bits(binfo->info1, BUFFER_ADDR_INFO1_ADDR)) << 32) |
+ le32_get_bits(binfo->info0, BUFFER_ADDR_INFO0_ADDR);
+ *cookie = le32_get_bits(binfo->info1, BUFFER_ADDR_INFO1_SW_COOKIE);
+ *rbm = le32_get_bits(binfo->info1, BUFFER_ADDR_INFO1_RET_BUF_MGR);
+}
+
+void ath12k_hal_rx_msdu_link_info_get(struct hal_rx_msdu_link *link, u32 *num_msdus,
+ u32 *msdu_cookies,
+ enum hal_rx_buf_return_buf_manager *rbm)
+{
+ struct hal_rx_msdu_details *msdu;
+ u32 val;
+ int i;
+
+ *num_msdus = HAL_NUM_RX_MSDUS_PER_LINK_DESC;
+
+ msdu = &link->msdu_link[0];
+ *rbm = le32_get_bits(msdu->buf_addr_info.info1,
+ BUFFER_ADDR_INFO1_RET_BUF_MGR);
+
+ for (i = 0; i < *num_msdus; i++) {
+ msdu = &link->msdu_link[i];
+
+ val = le32_get_bits(msdu->buf_addr_info.info0,
+ BUFFER_ADDR_INFO0_ADDR);
+ if (val == 0) {
+ *num_msdus = i;
+ break;
+ }
+ *msdu_cookies = le32_get_bits(msdu->buf_addr_info.info1,
+ BUFFER_ADDR_INFO1_SW_COOKIE);
+ msdu_cookies++;
+ }
+}
+
+int ath12k_hal_desc_reo_parse_err(struct ath12k_base *ab,
+ struct hal_reo_dest_ring *desc,
+ dma_addr_t *paddr, u32 *desc_bank)
+{
+ enum hal_reo_dest_ring_push_reason push_reason;
+ enum hal_reo_dest_ring_error_code err_code;
+ u32 cookie, val;
+
+ push_reason = le32_get_bits(desc->info0,
+ HAL_REO_DEST_RING_INFO0_PUSH_REASON);
+ err_code = le32_get_bits(desc->info0,
+ HAL_REO_DEST_RING_INFO0_ERROR_CODE);
+ ab->soc_stats.reo_error[err_code]++;
+
+ if (push_reason != HAL_REO_DEST_RING_PUSH_REASON_ERR_DETECTED &&
+ push_reason != HAL_REO_DEST_RING_PUSH_REASON_ROUTING_INSTRUCTION) {
+ ath12k_warn(ab, "expected error push reason code, received %d\n",
+ push_reason);
+ return -EINVAL;
+ }
+
+ val = le32_get_bits(desc->info0, HAL_REO_DEST_RING_INFO0_BUFFER_TYPE);
+ if (val != HAL_REO_DEST_RING_BUFFER_TYPE_LINK_DESC) {
+ ath12k_warn(ab, "expected buffer type link_desc");
+ return -EINVAL;
+ }
+
+ ath12k_hal_rx_reo_ent_paddr_get(ab, &desc->buf_addr_info, paddr, &cookie);
+ *desc_bank = u32_get_bits(cookie, DP_LINK_DESC_BANK_MASK);
+
+ return 0;
+}
+
+int ath12k_hal_wbm_desc_parse_err(struct ath12k_base *ab, void *desc,
+ struct hal_rx_wbm_rel_info *rel_info)
+{
+ struct hal_wbm_release_ring *wbm_desc = desc;
+ struct hal_wbm_release_ring_cc_rx *wbm_cc_desc = desc;
+ enum hal_wbm_rel_desc_type type;
+ enum hal_wbm_rel_src_module rel_src;
+ bool hw_cc_done;
+ u64 desc_va;
+ u32 val;
+
+ type = le32_get_bits(wbm_desc->info0, HAL_WBM_RELEASE_INFO0_DESC_TYPE);
+ /* We expect only WBM_REL buffer type */
+ if (type != HAL_WBM_REL_DESC_TYPE_REL_MSDU) {
+ WARN_ON(1);
+ return -EINVAL;
+ }
+
+ rel_src = le32_get_bits(wbm_desc->info0,
+ HAL_WBM_RELEASE_INFO0_REL_SRC_MODULE);
+ if (rel_src != HAL_WBM_REL_SRC_MODULE_RXDMA &&
+ rel_src != HAL_WBM_REL_SRC_MODULE_REO)
+ return -EINVAL;
+
+ /* The format of wbm rel ring desc changes based on the
+ * hw cookie conversion status
+ */
+ hw_cc_done = le32_get_bits(wbm_desc->info0,
+ HAL_WBM_RELEASE_RX_INFO0_CC_STATUS);
+
+ if (!hw_cc_done) {
+ val = le32_get_bits(wbm_desc->buf_addr_info.info1,
+ BUFFER_ADDR_INFO1_RET_BUF_MGR);
+ if (val != HAL_RX_BUF_RBM_SW3_BM) {
+ ab->soc_stats.invalid_rbm++;
+ return -EINVAL;
+ }
+
+ rel_info->cookie = le32_get_bits(wbm_desc->buf_addr_info.info1,
+ BUFFER_ADDR_INFO1_SW_COOKIE);
+
+ rel_info->rx_desc = NULL;
+ } else {
+ val = le32_get_bits(wbm_cc_desc->info0,
+ HAL_WBM_RELEASE_RX_CC_INFO0_RBM);
+ if (val != HAL_RX_BUF_RBM_SW3_BM) {
+ ab->soc_stats.invalid_rbm++;
+ return -EINVAL;
+ }
+
+ rel_info->cookie = le32_get_bits(wbm_cc_desc->info1,
+ HAL_WBM_RELEASE_RX_CC_INFO1_COOKIE);
+
+ desc_va = ((u64)le32_to_cpu(wbm_cc_desc->buf_va_hi) << 32 |
+ le32_to_cpu(wbm_cc_desc->buf_va_lo));
+ rel_info->rx_desc =
+ (struct ath12k_rx_desc_info *)((unsigned long)desc_va);
+ }
+
+ rel_info->err_rel_src = rel_src;
+ rel_info->hw_cc_done = hw_cc_done;
+
+ rel_info->first_msdu = le32_get_bits(wbm_desc->info3,
+ HAL_WBM_RELEASE_INFO3_FIRST_MSDU);
+ rel_info->last_msdu = le32_get_bits(wbm_desc->info3,
+ HAL_WBM_RELEASE_INFO3_LAST_MSDU);
+ rel_info->continuation = le32_get_bits(wbm_desc->info3,
+ HAL_WBM_RELEASE_INFO3_CONTINUATION);
+
+ if (rel_info->err_rel_src == HAL_WBM_REL_SRC_MODULE_REO) {
+ rel_info->push_reason =
+ le32_get_bits(wbm_desc->info0,
+ HAL_WBM_RELEASE_INFO0_REO_PUSH_REASON);
+ rel_info->err_code =
+ le32_get_bits(wbm_desc->info0,
+ HAL_WBM_RELEASE_INFO0_REO_ERROR_CODE);
+ } else {
+ rel_info->push_reason =
+ le32_get_bits(wbm_desc->info0,
+ HAL_WBM_RELEASE_INFO0_RXDMA_PUSH_REASON);
+ rel_info->err_code =
+ le32_get_bits(wbm_desc->info0,
+ HAL_WBM_RELEASE_INFO0_RXDMA_ERROR_CODE);
+ }
+
+ return 0;
+}
+
+void ath12k_hal_rx_reo_ent_paddr_get(struct ath12k_base *ab,
+ struct ath12k_buffer_addr *buff_addr,
+ dma_addr_t *paddr, u32 *cookie)
+{
+ *paddr = ((u64)(le32_get_bits(buff_addr->info1,
+ BUFFER_ADDR_INFO1_ADDR)) << 32) |
+ le32_get_bits(buff_addr->info0, BUFFER_ADDR_INFO0_ADDR);
+
+ *cookie = le32_get_bits(buff_addr->info1, BUFFER_ADDR_INFO1_SW_COOKIE);
+}
+
+void ath12k_hal_rx_msdu_link_desc_set(struct ath12k_base *ab,
+ struct hal_wbm_release_ring *dst_desc,
+ struct hal_wbm_release_ring *src_desc,
+ enum hal_wbm_rel_bm_act action)
+{
+ dst_desc->buf_addr_info = src_desc->buf_addr_info;
+ dst_desc->info0 |= le32_encode_bits(HAL_WBM_REL_SRC_MODULE_SW,
+ HAL_WBM_RELEASE_INFO0_REL_SRC_MODULE) |
+ le32_encode_bits(action, HAL_WBM_RELEASE_INFO0_BM_ACTION) |
+ le32_encode_bits(HAL_WBM_REL_DESC_TYPE_MSDU_LINK,
+ HAL_WBM_RELEASE_INFO0_DESC_TYPE);
+}
+
+void ath12k_hal_reo_status_queue_stats(struct ath12k_base *ab, struct hal_tlv_64_hdr *tlv,
+ struct hal_reo_status *status)
+{
+ struct hal_reo_get_queue_stats_status *desc =
+ (struct hal_reo_get_queue_stats_status *)tlv->value;
+
+ status->uniform_hdr.cmd_num =
+ le32_get_bits(desc->hdr.info0,
+ HAL_REO_STATUS_HDR_INFO0_STATUS_NUM);
+ status->uniform_hdr.cmd_status =
+ le32_get_bits(desc->hdr.info0,
+ HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS);
+
+ ath12k_dbg(ab, ATH12K_DBG_HAL, "Queue stats status:\n");
+ ath12k_dbg(ab, ATH12K_DBG_HAL, "header: cmd_num %d status %d\n",
+ status->uniform_hdr.cmd_num,
+ status->uniform_hdr.cmd_status);
+ ath12k_dbg(ab, ATH12K_DBG_HAL, "ssn %u cur_idx %u\n",
+ le32_get_bits(desc->info0,
+ HAL_REO_GET_QUEUE_STATS_STATUS_INFO0_SSN),
+ le32_get_bits(desc->info0,
+ HAL_REO_GET_QUEUE_STATS_STATUS_INFO0_CUR_IDX));
+ ath12k_dbg(ab, ATH12K_DBG_HAL, "pn = [%08x, %08x, %08x, %08x]\n",
+ desc->pn[0], desc->pn[1], desc->pn[2], desc->pn[3]);
+ ath12k_dbg(ab, ATH12K_DBG_HAL, "last_rx: enqueue_tstamp %08x dequeue_tstamp %08x\n",
+ desc->last_rx_enqueue_timestamp,
+ desc->last_rx_dequeue_timestamp);
+ ath12k_dbg(ab, ATH12K_DBG_HAL, "rx_bitmap [%08x %08x %08x %08x %08x %08x %08x %08x]\n",
+ desc->rx_bitmap[0], desc->rx_bitmap[1], desc->rx_bitmap[2],
+ desc->rx_bitmap[3], desc->rx_bitmap[4], desc->rx_bitmap[5],
+ desc->rx_bitmap[6], desc->rx_bitmap[7]);
+ ath12k_dbg(ab, ATH12K_DBG_HAL, "count: cur_mpdu %u cur_msdu %u\n",
+ le32_get_bits(desc->info1,
+ HAL_REO_GET_QUEUE_STATS_STATUS_INFO1_MPDU_COUNT),
+ le32_get_bits(desc->info1,
+ HAL_REO_GET_QUEUE_STATS_STATUS_INFO1_MSDU_COUNT));
+ ath12k_dbg(ab, ATH12K_DBG_HAL, "fwd_timeout %u fwd_bar %u dup_count %u\n",
+ le32_get_bits(desc->info2,
+ HAL_REO_GET_QUEUE_STATS_STATUS_INFO2_TIMEOUT_COUNT),
+ le32_get_bits(desc->info2,
+ HAL_REO_GET_QUEUE_STATS_STATUS_INFO2_FDTB_COUNT),
+ le32_get_bits(desc->info2,
+ HAL_REO_GET_QUEUE_STATS_STATUS_INFO2_DUPLICATE_COUNT));
+ ath12k_dbg(ab, ATH12K_DBG_HAL, "frames_in_order %u bar_rcvd %u\n",
+ le32_get_bits(desc->info3,
+ HAL_REO_GET_QUEUE_STATS_STATUS_INFO3_FIO_COUNT),
+ le32_get_bits(desc->info3,
+ HAL_REO_GET_QUEUE_STATS_STATUS_INFO3_BAR_RCVD_CNT));
+ ath12k_dbg(ab, ATH12K_DBG_HAL, "num_mpdus %d num_msdus %d total_bytes %d\n",
+ desc->num_mpdu_frames, desc->num_msdu_frames,
+ desc->total_bytes);
+ ath12k_dbg(ab, ATH12K_DBG_HAL, "late_rcvd %u win_jump_2k %u hole_cnt %u\n",
+ le32_get_bits(desc->info4,
+ HAL_REO_GET_QUEUE_STATS_STATUS_INFO4_LATE_RX_MPDU),
+ le32_get_bits(desc->info2,
+ HAL_REO_GET_QUEUE_STATS_STATUS_INFO2_WINDOW_JMP2K),
+ le32_get_bits(desc->info4,
+ HAL_REO_GET_QUEUE_STATS_STATUS_INFO4_HOLE_COUNT));
+ ath12k_dbg(ab, ATH12K_DBG_HAL, "looping count %u\n",
+ le32_get_bits(desc->info5,
+ HAL_REO_GET_QUEUE_STATS_STATUS_INFO5_LOOPING_CNT));
+}
+
+void ath12k_hal_reo_flush_queue_status(struct ath12k_base *ab, struct hal_tlv_64_hdr *tlv,
+ struct hal_reo_status *status)
+{
+ struct hal_reo_flush_queue_status *desc =
+ (struct hal_reo_flush_queue_status *)tlv->value;
+
+ status->uniform_hdr.cmd_num =
+ le32_get_bits(desc->hdr.info0,
+ HAL_REO_STATUS_HDR_INFO0_STATUS_NUM);
+ status->uniform_hdr.cmd_status =
+ le32_get_bits(desc->hdr.info0,
+ HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS);
+ status->u.flush_queue.err_detected =
+ le32_get_bits(desc->info0,
+ HAL_REO_FLUSH_QUEUE_INFO0_ERR_DETECTED);
+}
+
+void ath12k_hal_reo_flush_cache_status(struct ath12k_base *ab, struct hal_tlv_64_hdr *tlv,
+ struct hal_reo_status *status)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ struct hal_reo_flush_cache_status *desc =
+ (struct hal_reo_flush_cache_status *)tlv->value;
+
+ status->uniform_hdr.cmd_num =
+ le32_get_bits(desc->hdr.info0,
+ HAL_REO_STATUS_HDR_INFO0_STATUS_NUM);
+ status->uniform_hdr.cmd_status =
+ le32_get_bits(desc->hdr.info0,
+ HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS);
+
+ status->u.flush_cache.err_detected =
+ le32_get_bits(desc->info0,
+ HAL_REO_FLUSH_CACHE_STATUS_INFO0_IS_ERR);
+ status->u.flush_cache.err_code =
+ le32_get_bits(desc->info0,
+ HAL_REO_FLUSH_CACHE_STATUS_INFO0_BLOCK_ERR_CODE);
+ if (!status->u.flush_cache.err_code)
+ hal->avail_blk_resource |= BIT(hal->current_blk_index);
+
+ status->u.flush_cache.cache_controller_flush_status_hit =
+ le32_get_bits(desc->info0,
+ HAL_REO_FLUSH_CACHE_STATUS_INFO0_FLUSH_STATUS_HIT);
+
+ status->u.flush_cache.cache_controller_flush_status_desc_type =
+ le32_get_bits(desc->info0,
+ HAL_REO_FLUSH_CACHE_STATUS_INFO0_FLUSH_DESC_TYPE);
+ status->u.flush_cache.cache_controller_flush_status_client_id =
+ le32_get_bits(desc->info0,
+ HAL_REO_FLUSH_CACHE_STATUS_INFO0_FLUSH_CLIENT_ID);
+ status->u.flush_cache.cache_controller_flush_status_err =
+ le32_get_bits(desc->info0,
+ HAL_REO_FLUSH_CACHE_STATUS_INFO0_FLUSH_ERR);
+ status->u.flush_cache.cache_controller_flush_status_cnt =
+ le32_get_bits(desc->info0,
+ HAL_REO_FLUSH_CACHE_STATUS_INFO0_FLUSH_COUNT);
+}
+
+void ath12k_hal_reo_unblk_cache_status(struct ath12k_base *ab, struct hal_tlv_64_hdr *tlv,
+ struct hal_reo_status *status)
+{
+ struct ath12k_hal *hal = &ab->hal;
+ struct hal_reo_unblock_cache_status *desc =
+ (struct hal_reo_unblock_cache_status *)tlv->value;
+
+ status->uniform_hdr.cmd_num =
+ le32_get_bits(desc->hdr.info0,
+ HAL_REO_STATUS_HDR_INFO0_STATUS_NUM);
+ status->uniform_hdr.cmd_status =
+ le32_get_bits(desc->hdr.info0,
+ HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS);
+
+ status->u.unblock_cache.err_detected =
+ le32_get_bits(desc->info0,
+ HAL_REO_UNBLOCK_CACHE_STATUS_INFO0_IS_ERR);
+ status->u.unblock_cache.unblock_type =
+ le32_get_bits(desc->info0,
+ HAL_REO_UNBLOCK_CACHE_STATUS_INFO0_TYPE);
+
+ if (!status->u.unblock_cache.err_detected &&
+ status->u.unblock_cache.unblock_type ==
+ HAL_REO_STATUS_UNBLOCK_BLOCKING_RESOURCE)
+ hal->avail_blk_resource &= ~BIT(hal->current_blk_index);
+}
+
+void ath12k_hal_reo_flush_timeout_list_status(struct ath12k_base *ab,
+ struct hal_tlv_64_hdr *tlv,
+ struct hal_reo_status *status)
+{
+ struct hal_reo_flush_timeout_list_status *desc =
+ (struct hal_reo_flush_timeout_list_status *)tlv->value;
+
+ status->uniform_hdr.cmd_num =
+ le32_get_bits(desc->hdr.info0,
+ HAL_REO_STATUS_HDR_INFO0_STATUS_NUM);
+ status->uniform_hdr.cmd_status =
+ le32_get_bits(desc->hdr.info0,
+ HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS);
+
+ status->u.timeout_list.err_detected =
+ le32_get_bits(desc->info0,
+ HAL_REO_FLUSH_TIMEOUT_STATUS_INFO0_IS_ERR);
+ status->u.timeout_list.list_empty =
+ le32_get_bits(desc->info0,
+ HAL_REO_FLUSH_TIMEOUT_STATUS_INFO0_LIST_EMPTY);
+
+ status->u.timeout_list.release_desc_cnt =
+ le32_get_bits(desc->info1,
+ HAL_REO_FLUSH_TIMEOUT_STATUS_INFO1_REL_DESC_COUNT);
+ status->u.timeout_list.fwd_buf_cnt =
+ le32_get_bits(desc->info1,
+ HAL_REO_FLUSH_TIMEOUT_STATUS_INFO1_FWD_BUF_COUNT);
+}
+
+void ath12k_hal_reo_desc_thresh_reached_status(struct ath12k_base *ab,
+ struct hal_tlv_64_hdr *tlv,
+ struct hal_reo_status *status)
+{
+ struct hal_reo_desc_thresh_reached_status *desc =
+ (struct hal_reo_desc_thresh_reached_status *)tlv->value;
+
+ status->uniform_hdr.cmd_num =
+ le32_get_bits(desc->hdr.info0,
+ HAL_REO_STATUS_HDR_INFO0_STATUS_NUM);
+ status->uniform_hdr.cmd_status =
+ le32_get_bits(desc->hdr.info0,
+ HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS);
+
+ status->u.desc_thresh_reached.threshold_idx =
+ le32_get_bits(desc->info0,
+ HAL_REO_DESC_THRESH_STATUS_INFO0_THRESH_INDEX);
+
+ status->u.desc_thresh_reached.link_desc_counter0 =
+ le32_get_bits(desc->info1,
+ HAL_REO_DESC_THRESH_STATUS_INFO1_LINK_DESC_COUNTER0);
+
+ status->u.desc_thresh_reached.link_desc_counter1 =
+ le32_get_bits(desc->info2,
+ HAL_REO_DESC_THRESH_STATUS_INFO2_LINK_DESC_COUNTER1);
+
+ status->u.desc_thresh_reached.link_desc_counter2 =
+ le32_get_bits(desc->info3,
+ HAL_REO_DESC_THRESH_STATUS_INFO3_LINK_DESC_COUNTER2);
+
+ status->u.desc_thresh_reached.link_desc_counter_sum =
+ le32_get_bits(desc->info4,
+ HAL_REO_DESC_THRESH_STATUS_INFO4_LINK_DESC_COUNTER_SUM);
+}
+
+void ath12k_hal_reo_update_rx_reo_queue_status(struct ath12k_base *ab,
+ struct hal_tlv_64_hdr *tlv,
+ struct hal_reo_status *status)
+{
+ struct hal_reo_status_hdr *desc =
+ (struct hal_reo_status_hdr *)tlv->value;
+
+ status->uniform_hdr.cmd_num =
+ le32_get_bits(desc->info0,
+ HAL_REO_STATUS_HDR_INFO0_STATUS_NUM);
+ status->uniform_hdr.cmd_status =
+ le32_get_bits(desc->info0,
+ HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS);
+}
+
+u32 ath12k_hal_reo_qdesc_size(u32 ba_window_size, u8 tid)
+{
+ u32 num_ext_desc;
+
+ if (ba_window_size <= 1) {
+ if (tid != HAL_DESC_REO_NON_QOS_TID)
+ num_ext_desc = 1;
+ else
+ num_ext_desc = 0;
+ } else if (ba_window_size <= 105) {
+ num_ext_desc = 1;
+ } else if (ba_window_size <= 210) {
+ num_ext_desc = 2;
+ } else {
+ num_ext_desc = 3;
+ }
+
+ return sizeof(struct hal_rx_reo_queue) +
+ (num_ext_desc * sizeof(struct hal_rx_reo_queue_ext));
+}
+
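+/* Typical usage when setting up a peer's REO queue (sketch; allocation,
+ * DMA mapping and error handling omitted):
+ *
+ *    u32 qsize = ath12k_hal_reo_qdesc_size(ba_win_sz, tid);
+ *    struct hal_rx_reo_queue *qdesc = kzalloc(qsize, GFP_KERNEL);
+ *
+ *    ath12k_hal_reo_qdesc_setup(qdesc, tid, ba_win_sz, start_seq,
+ *                               HAL_PN_TYPE_NONE);
+ */
+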
+void ath12k_hal_reo_qdesc_setup(struct hal_rx_reo_queue *qdesc,
+ int tid, u32 ba_window_size,
+ u32 start_seq, enum hal_pn_type type)
+{
+ struct hal_rx_reo_queue_ext *ext_desc;
+
+ memset(qdesc, 0, sizeof(*qdesc));
+
+ ath12k_hal_reo_set_desc_hdr(&qdesc->desc_hdr, HAL_DESC_REO_OWNED,
+ HAL_DESC_REO_QUEUE_DESC,
+ REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_0);
+
+ qdesc->rx_queue_num = le32_encode_bits(tid, HAL_RX_REO_QUEUE_RX_QUEUE_NUMBER);
+
+ qdesc->info0 =
+ le32_encode_bits(1, HAL_RX_REO_QUEUE_INFO0_VLD) |
+ le32_encode_bits(1, HAL_RX_REO_QUEUE_INFO0_ASSOC_LNK_DESC_COUNTER) |
+ le32_encode_bits(ath12k_tid_to_ac(tid), HAL_RX_REO_QUEUE_INFO0_AC);
+
+ if (ba_window_size < 1)
+ ba_window_size = 1;
+
+ if (ba_window_size == 1 && tid != HAL_DESC_REO_NON_QOS_TID)
+ ba_window_size++;
+
+ if (ba_window_size == 1)
+ qdesc->info0 |= le32_encode_bits(1, HAL_RX_REO_QUEUE_INFO0_RETRY);
+
+ qdesc->info0 |= le32_encode_bits(ba_window_size - 1,
+ HAL_RX_REO_QUEUE_INFO0_BA_WINDOW_SIZE);
+ switch (type) {
+ case HAL_PN_TYPE_NONE:
+ case HAL_PN_TYPE_WAPI_EVEN:
+ case HAL_PN_TYPE_WAPI_UNEVEN:
+ break;
+ case HAL_PN_TYPE_WPA:
+ qdesc->info0 |=
+ le32_encode_bits(1, HAL_RX_REO_QUEUE_INFO0_PN_CHECK) |
+ le32_encode_bits(HAL_RX_REO_QUEUE_PN_SIZE_48,
+ HAL_RX_REO_QUEUE_INFO0_PN_SIZE);
+ break;
+ }
+
+ /* TODO: Set Ignore ampdu flags based on BA window size and/or
+ * AMPDU capabilities
+ */
+ qdesc->info0 |= le32_encode_bits(1, HAL_RX_REO_QUEUE_INFO0_IGNORE_AMPDU_FLG);
+
+ qdesc->info1 |= le32_encode_bits(0, HAL_RX_REO_QUEUE_INFO1_SVLD);
+
+ if (start_seq <= 0xfff)
+ qdesc->info1 |= le32_encode_bits(start_seq,
+ HAL_RX_REO_QUEUE_INFO1_SSN);
+
+ if (tid == HAL_DESC_REO_NON_QOS_TID)
+ return;
+
+ ext_desc = qdesc->ext_desc;
+
+ /* TODO: HW queue descriptors are currently allocated for max BA
+ * window size for all QOS TIDs so that same descriptor can be used
+ * later when ADDBA request is received. This should be changed to
+ * allocate HW queue descriptors based on BA window size being
+ * negotiated (0 for non BA cases), and reallocate when BA window
+ * size changes and also send WMI message to FW to change the REO
+ * queue descriptor in Rx peer entry as part of dp_rx_tid_update.
+ */
+ memset(ext_desc, 0, 3 * sizeof(*ext_desc));
+ ath12k_hal_reo_set_desc_hdr(&ext_desc->desc_hdr, HAL_DESC_REO_OWNED,
+ HAL_DESC_REO_QUEUE_EXT_DESC,
+ REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_1);
+ ext_desc++;
+ ath12k_hal_reo_set_desc_hdr(&ext_desc->desc_hdr, HAL_DESC_REO_OWNED,
+ HAL_DESC_REO_QUEUE_EXT_DESC,
+ REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_2);
+ ext_desc++;
+ ath12k_hal_reo_set_desc_hdr(&ext_desc->desc_hdr, HAL_DESC_REO_OWNED,
+ HAL_DESC_REO_QUEUE_EXT_DESC,
+ REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_3);
+}
+
+void ath12k_hal_reo_init_cmd_ring(struct ath12k_base *ab,
+ struct hal_srng *srng)
+{
+ struct hal_srng_params params;
+ struct hal_tlv_64_hdr *tlv;
+ struct hal_reo_get_queue_stats *desc;
+ int i, cmd_num = 1;
+ int entry_size;
+ u8 *entry;
+
+ memset(&params, 0, sizeof(params));
+
+ entry_size = ath12k_hal_srng_get_entrysize(ab, HAL_REO_CMD);
+ ath12k_hal_srng_get_params(ab, srng, &params);
+ entry = (u8 *)params.ring_base_vaddr;
+
+ for (i = 0; i < params.num_entries; i++) {
+ tlv = (struct hal_tlv_64_hdr *)entry;
+ desc = (struct hal_reo_get_queue_stats *)tlv->value;
+ desc->cmd.info0 = le32_encode_bits(cmd_num++,
+ HAL_REO_CMD_HDR_INFO0_CMD_NUMBER);
+ entry += entry_size;
+ }
+}
+
+void ath12k_hal_reo_hw_setup(struct ath12k_base *ab, u32 ring_hash_map)
+{
+ u32 reo_base = HAL_SEQ_WCSS_UMAC_REO_REG;
+ u32 val;
+
+ val = ath12k_hif_read32(ab, reo_base + HAL_REO1_GEN_ENABLE);
+
+ val |= u32_encode_bits(1, HAL_REO1_GEN_ENABLE_AGING_LIST_ENABLE) |
+ u32_encode_bits(1, HAL_REO1_GEN_ENABLE_AGING_FLUSH_ENABLE);
+ ath12k_hif_write32(ab, reo_base + HAL_REO1_GEN_ENABLE, val);
+
+ val = ath12k_hif_read32(ab, reo_base + HAL_REO1_MISC_CTRL_ADDR(ab));
+
+ val &= ~(HAL_REO1_MISC_CTL_FRAG_DST_RING |
+ HAL_REO1_MISC_CTL_BAR_DST_RING);
+ val |= u32_encode_bits(HAL_SRNG_RING_ID_REO2SW0,
+ HAL_REO1_MISC_CTL_FRAG_DST_RING);
+ val |= u32_encode_bits(HAL_SRNG_RING_ID_REO2SW0,
+ HAL_REO1_MISC_CTL_BAR_DST_RING);
+ ath12k_hif_write32(ab, reo_base + HAL_REO1_MISC_CTRL_ADDR(ab), val);
+
+ ath12k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_0(ab),
+ HAL_DEFAULT_BE_BK_VI_REO_TIMEOUT_USEC);
+ ath12k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_1(ab),
+ HAL_DEFAULT_BE_BK_VI_REO_TIMEOUT_USEC);
+ ath12k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_2(ab),
+ HAL_DEFAULT_BE_BK_VI_REO_TIMEOUT_USEC);
+ ath12k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_3(ab),
+ HAL_DEFAULT_VO_REO_TIMEOUT_USEC);
+
+ ath12k_hif_write32(ab, reo_base + HAL_REO1_DEST_RING_CTRL_IX_2,
+ ring_hash_map);
+ ath12k_hif_write32(ab, reo_base + HAL_REO1_DEST_RING_CTRL_IX_3,
+ ring_hash_map);
+}
diff --git a/drivers/net/wireless/ath/ath12k/hal_rx.h b/drivers/net/wireless/ath/ath12k/hal_rx.h
new file mode 100644
index 000000000000..fcfb6c819047
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/hal_rx.h
@@ -0,0 +1,704 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_HAL_RX_H
+#define ATH12K_HAL_RX_H
+
+struct hal_rx_wbm_rel_info {
+ u32 cookie;
+ enum hal_wbm_rel_src_module err_rel_src;
+ enum hal_reo_dest_ring_push_reason push_reason;
+ u32 err_code;
+ bool first_msdu;
+ bool last_msdu;
+ bool continuation;
+ void *rx_desc;
+ bool hw_cc_done;
+};
+
+#define HAL_INVALID_PEERID 0xffff
+#define VHT_SIG_SU_NSS_MASK 0x7
+
+#define HAL_RX_MAX_MCS 12
+#define HAL_RX_MAX_NSS 8
+
+#define HAL_RX_MPDU_INFO_PN_GET_BYTE1(__val) \
+ le32_get_bits((__val), GENMASK(7, 0))
+
+#define HAL_RX_MPDU_INFO_PN_GET_BYTE2(__val) \
+ le32_get_bits((__val), GENMASK(15, 8))
+
+#define HAL_RX_MPDU_INFO_PN_GET_BYTE3(__val) \
+ le32_get_bits((__val), GENMASK(23, 16))
+
+#define HAL_RX_MPDU_INFO_PN_GET_BYTE4(__val) \
+ le32_get_bits((__val), GENMASK(31, 24))
+
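These accessors peel the packet number out of the MPDU info words one byte at a time. As a rough standalone sketch (plain C, not driver code), a 48-bit PN split across two little-endian words could be reassembled as follows, assuming the first word carries PN bytes 0-3 and the second word carries bytes 4-5 in its low half:

#include <stdint.h>

/* Illustrative only: combines the byte lanes that the
 * HAL_RX_MPDU_INFO_PN_GET_BYTEn masks above extract individually.
 */
static uint64_t pn48_from_words(uint32_t lo, uint32_t hi)
{
	return (uint64_t)lo | ((uint64_t)(hi & 0xffff) << 32);
}
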
+struct hal_rx_mon_status_tlv_hdr {
+ u32 hdr;
+ u8 value[];
+};
+
+enum hal_rx_su_mu_coding {
+ HAL_RX_SU_MU_CODING_BCC,
+ HAL_RX_SU_MU_CODING_LDPC,
+ HAL_RX_SU_MU_CODING_MAX,
+};
+
+enum hal_rx_gi {
+ HAL_RX_GI_0_8_US,
+ HAL_RX_GI_0_4_US,
+ HAL_RX_GI_1_6_US,
+ HAL_RX_GI_3_2_US,
+ HAL_RX_GI_MAX,
+};
+
+enum hal_rx_bw {
+ HAL_RX_BW_20MHZ,
+ HAL_RX_BW_40MHZ,
+ HAL_RX_BW_80MHZ,
+ HAL_RX_BW_160MHZ,
+ HAL_RX_BW_MAX,
+};
+
+enum hal_rx_preamble {
+ HAL_RX_PREAMBLE_11A,
+ HAL_RX_PREAMBLE_11B,
+ HAL_RX_PREAMBLE_11N,
+ HAL_RX_PREAMBLE_11AC,
+ HAL_RX_PREAMBLE_11AX,
+ HAL_RX_PREAMBLE_MAX,
+};
+
+enum hal_rx_reception_type {
+ HAL_RX_RECEPTION_TYPE_SU,
+ HAL_RX_RECEPTION_TYPE_MU_MIMO,
+ HAL_RX_RECEPTION_TYPE_MU_OFDMA,
+ HAL_RX_RECEPTION_TYPE_MU_OFDMA_MIMO,
+ HAL_RX_RECEPTION_TYPE_MAX,
+};
+
+enum hal_rx_legacy_rate {
+ HAL_RX_LEGACY_RATE_1_MBPS,
+ HAL_RX_LEGACY_RATE_2_MBPS,
+ HAL_RX_LEGACY_RATE_5_5_MBPS,
+ HAL_RX_LEGACY_RATE_6_MBPS,
+ HAL_RX_LEGACY_RATE_9_MBPS,
+ HAL_RX_LEGACY_RATE_11_MBPS,
+ HAL_RX_LEGACY_RATE_12_MBPS,
+ HAL_RX_LEGACY_RATE_18_MBPS,
+ HAL_RX_LEGACY_RATE_24_MBPS,
+ HAL_RX_LEGACY_RATE_36_MBPS,
+ HAL_RX_LEGACY_RATE_48_MBPS,
+ HAL_RX_LEGACY_RATE_54_MBPS,
+ HAL_RX_LEGACY_RATE_INVALID,
+};
+
+#define HAL_TLV_STATUS_PPDU_NOT_DONE 0
+#define HAL_TLV_STATUS_PPDU_DONE 1
+#define HAL_TLV_STATUS_BUF_DONE 2
+#define HAL_TLV_STATUS_PPDU_NON_STD_DONE 3
+#define HAL_RX_FCS_LEN 4
+
+enum hal_rx_mon_status {
+ HAL_RX_MON_STATUS_PPDU_NOT_DONE,
+ HAL_RX_MON_STATUS_PPDU_DONE,
+ HAL_RX_MON_STATUS_BUF_DONE,
+};
+
+#define HAL_RX_MAX_MPDU 256
+#define HAL_RX_NUM_WORDS_PER_PPDU_BITMAP (HAL_RX_MAX_MPDU >> 5)
+
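The per-PPDU FCS bitmap stores one bit per MPDU, so HAL_RX_MAX_MPDU (256) entries fit in 256 >> 5 == 8 u32 words. A minimal generic sketch of indexing such a bitmap (not taken from the patch):

static void ppdu_bitmap_set(uint32_t *bitmap, unsigned int mpdu)
{
	bitmap[mpdu >> 5] |= 1U << (mpdu & 31);	/* word = mpdu / 32 */
}

static int ppdu_bitmap_test(const uint32_t *bitmap, unsigned int mpdu)
{
	return !!(bitmap[mpdu >> 5] & (1U << (mpdu & 31)));
}
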
+struct hal_rx_user_status {
+ u32 mcs:4,
+ nss:3,
+ ofdma_info_valid:1,
+ ul_ofdma_ru_start_index:7,
+ ul_ofdma_ru_width:7,
+ ul_ofdma_ru_size:8;
+ u32 ul_ofdma_user_v0_word0;
+ u32 ul_ofdma_user_v0_word1;
+ u32 ast_index;
+ u32 tid;
+ u16 tcp_msdu_count;
+ u16 tcp_ack_msdu_count;
+ u16 udp_msdu_count;
+ u16 other_msdu_count;
+ u16 frame_control;
+ u8 frame_control_info_valid;
+ u8 data_sequence_control_info_valid;
+ u16 first_data_seq_ctrl;
+ u32 preamble_type;
+ u16 ht_flags;
+ u16 vht_flags;
+ u16 he_flags;
+ u8 rs_flags;
+ u8 ldpc;
+ u32 mpdu_cnt_fcs_ok;
+ u32 mpdu_cnt_fcs_err;
+ u32 mpdu_fcs_ok_bitmap[HAL_RX_NUM_WORDS_PER_PPDU_BITMAP];
+ u32 mpdu_ok_byte_count;
+ u32 mpdu_err_byte_count;
+};
+
+#define HAL_MAX_UL_MU_USERS 37
+
+struct hal_rx_mon_ppdu_info {
+ u32 ppdu_id;
+ u32 last_ppdu_id;
+ u64 ppdu_ts;
+ u32 num_mpdu_fcs_ok;
+ u32 num_mpdu_fcs_err;
+ u32 preamble_type;
+ u32 mpdu_len;
+ u16 chan_num;
+ u16 tcp_msdu_count;
+ u16 tcp_ack_msdu_count;
+ u16 udp_msdu_count;
+ u16 other_msdu_count;
+ u16 peer_id;
+ u8 rate;
+ u8 mcs;
+ u8 nss;
+ u8 bw;
+ u8 vht_flag_values1;
+ u8 vht_flag_values2;
+ u8 vht_flag_values3[4];
+ u8 vht_flag_values4;
+ u8 vht_flag_values5;
+ u16 vht_flag_values6;
+ u8 is_stbc;
+ u8 gi;
+ u8 sgi;
+ u8 ldpc;
+ u8 beamformed;
+ u8 rssi_comb;
+ u16 tid;
+ u8 fc_valid;
+ u16 ht_flags;
+ u16 vht_flags;
+ u16 he_flags;
+ u16 he_mu_flags;
+ u8 dcm;
+ u8 ru_alloc;
+ u8 reception_type;
+ u64 tsft;
+ u64 rx_duration;
+ u16 frame_control;
+ u32 ast_index;
+ u8 rs_fcs_err;
+ u8 rs_flags;
+ u8 cck_flag;
+ u8 ofdm_flag;
+ u8 ulofdma_flag;
+ u8 frame_control_info_valid;
+ u16 he_per_user_1;
+ u16 he_per_user_2;
+ u8 he_per_user_position;
+ u8 he_per_user_known;
+ u16 he_flags1;
+ u16 he_flags2;
+ u8 he_RU[4];
+ u16 he_data1;
+ u16 he_data2;
+ u16 he_data3;
+ u16 he_data4;
+ u16 he_data5;
+ u16 he_data6;
+ u32 ppdu_len;
+ u32 prev_ppdu_id;
+ u32 device_id;
+ u16 first_data_seq_ctrl;
+ u8 monitor_direct_used;
+ u8 data_sequence_control_info_valid;
+ u8 ltf_size;
+ u8 rxpcu_filter_pass;
+ s8 rssi_chain[8][8];
+ u32 num_users;
+ u32 mpdu_fcs_ok_bitmap[HAL_RX_NUM_WORDS_PER_PPDU_BITMAP];
+ u8 addr1[ETH_ALEN];
+ u8 addr2[ETH_ALEN];
+ u8 addr3[ETH_ALEN];
+ u8 addr4[ETH_ALEN];
+ struct hal_rx_user_status userstats[HAL_MAX_UL_MU_USERS];
+ u8 userid;
+ u16 ampdu_id[HAL_MAX_UL_MU_USERS];
+ bool first_msdu_in_mpdu;
+ bool is_ampdu;
+ u8 medium_prot_type;
+};
+
+#define HAL_RX_PPDU_START_INFO0_PPDU_ID GENMASK(15, 0)
+
+struct hal_rx_ppdu_start {
+ __le32 info0;
+ __le32 chan_num;
+ __le32 ppdu_start_ts;
+} __packed;
+
+#define HAL_RX_PPDU_END_USER_STATS_INFO0_MPDU_CNT_FCS_ERR GENMASK(25, 16)
+
+#define HAL_RX_PPDU_END_USER_STATS_INFO1_MPDU_CNT_FCS_OK GENMASK(8, 0)
+#define HAL_RX_PPDU_END_USER_STATS_INFO1_FC_VALID BIT(9)
+#define HAL_RX_PPDU_END_USER_STATS_INFO1_QOS_CTRL_VALID BIT(10)
+#define HAL_RX_PPDU_END_USER_STATS_INFO1_HT_CTRL_VALID BIT(11)
+#define HAL_RX_PPDU_END_USER_STATS_INFO1_PKT_TYPE GENMASK(23, 20)
+
+#define HAL_RX_PPDU_END_USER_STATS_INFO2_AST_INDEX GENMASK(15, 0)
+#define HAL_RX_PPDU_END_USER_STATS_INFO2_FRAME_CTRL GENMASK(31, 16)
+
+#define HAL_RX_PPDU_END_USER_STATS_INFO3_QOS_CTRL GENMASK(31, 16)
+
+#define HAL_RX_PPDU_END_USER_STATS_INFO4_UDP_MSDU_CNT GENMASK(15, 0)
+#define HAL_RX_PPDU_END_USER_STATS_INFO4_TCP_MSDU_CNT GENMASK(31, 16)
+
+#define HAL_RX_PPDU_END_USER_STATS_INFO5_OTHER_MSDU_CNT GENMASK(15, 0)
+#define HAL_RX_PPDU_END_USER_STATS_INFO5_TCP_ACK_MSDU_CNT GENMASK(31, 16)
+
+#define HAL_RX_PPDU_END_USER_STATS_INFO6_TID_BITMAP GENMASK(15, 0)
+#define HAL_RX_PPDU_END_USER_STATS_INFO6_TID_EOSP_BITMAP GENMASK(31, 16)
+
+#define HAL_RX_PPDU_END_USER_STATS_MPDU_DELIM_OK_BYTE_COUNT GENMASK(24, 0)
+#define HAL_RX_PPDU_END_USER_STATS_MPDU_DELIM_ERR_BYTE_COUNT GENMASK(24, 0)
+
+struct hal_rx_ppdu_end_user_stats {
+ __le32 rsvd0[2];
+ __le32 info0;
+ __le32 info1;
+ __le32 info2;
+ __le32 info3;
+ __le32 ht_ctrl;
+ __le32 rsvd1[2];
+ __le32 info4;
+ __le32 info5;
+ __le32 usr_resp_ref;
+ __le32 info6;
+ __le32 rsvd3[4];
+ __le32 mpdu_ok_cnt;
+ __le32 rsvd4;
+ __le32 mpdu_err_cnt;
+ __le32 rsvd5[2];
+ __le32 usr_resp_ref_ext;
+ __le32 rsvd6;
+} __packed;
+
+struct hal_rx_ppdu_end_user_stats_ext {
+ __le32 info0;
+ __le32 info1;
+ __le32 info2;
+ __le32 info3;
+ __le32 info4;
+ __le32 info5;
+ __le32 info6;
+} __packed;
+
+#define HAL_RX_HT_SIG_INFO_INFO0_MCS GENMASK(6, 0)
+#define HAL_RX_HT_SIG_INFO_INFO0_BW BIT(7)
+
+#define HAL_RX_HT_SIG_INFO_INFO1_STBC GENMASK(5, 4)
+#define HAL_RX_HT_SIG_INFO_INFO1_FEC_CODING BIT(6)
+#define HAL_RX_HT_SIG_INFO_INFO1_GI BIT(7)
+
+struct hal_rx_ht_sig_info {
+ __le32 info0;
+ __le32 info1;
+} __packed;
+
+#define HAL_RX_LSIG_B_INFO_INFO0_RATE GENMASK(3, 0)
+#define HAL_RX_LSIG_B_INFO_INFO0_LEN GENMASK(15, 4)
+
+struct hal_rx_lsig_b_info {
+ __le32 info0;
+} __packed;
+
+#define HAL_RX_LSIG_A_INFO_INFO0_RATE GENMASK(3, 0)
+#define HAL_RX_LSIG_A_INFO_INFO0_LEN GENMASK(16, 5)
+#define HAL_RX_LSIG_A_INFO_INFO0_PKT_TYPE GENMASK(27, 24)
+
+struct hal_rx_lsig_a_info {
+ __le32 info0;
+} __packed;
+
+#define HAL_RX_VHT_SIG_A_INFO_INFO0_BW GENMASK(1, 0)
+#define HAL_RX_VHT_SIG_A_INFO_INFO0_STBC BIT(3)
+#define HAL_RX_VHT_SIG_A_INFO_INFO0_GROUP_ID GENMASK(9, 4)
+#define HAL_RX_VHT_SIG_A_INFO_INFO0_NSTS GENMASK(21, 10)
+
+#define HAL_RX_VHT_SIG_A_INFO_INFO1_GI_SETTING GENMASK(1, 0)
+#define HAL_RX_VHT_SIG_A_INFO_INFO1_SU_MU_CODING BIT(2)
+#define HAL_RX_VHT_SIG_A_INFO_INFO1_MCS GENMASK(7, 4)
+#define HAL_RX_VHT_SIG_A_INFO_INFO1_BEAMFORMED BIT(8)
+
+struct hal_rx_vht_sig_a_info {
+ __le32 info0;
+ __le32 info1;
+} __packed;
+
+enum hal_rx_vht_sig_a_gi_setting {
+ HAL_RX_VHT_SIG_A_NORMAL_GI = 0,
+ HAL_RX_VHT_SIG_A_SHORT_GI = 1,
+ HAL_RX_VHT_SIG_A_SHORT_GI_AMBIGUITY = 3,
+};
+
+#define HE_GI_0_8 0
+#define HE_GI_0_4 1
+#define HE_GI_1_6 2
+#define HE_GI_3_2 3
+
+#define HE_LTF_1_X 0
+#define HE_LTF_2_X 1
+#define HE_LTF_4_X 2
+
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_TRANSMIT_MCS GENMASK(6, 3)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_DCM BIT(7)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_TRANSMIT_BW GENMASK(20, 19)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_CP_LTF_SIZE GENMASK(22, 21)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_NSTS GENMASK(25, 23)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_BSS_COLOR GENMASK(13, 8)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_SPATIAL_REUSE GENMASK(18, 15)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_FORMAT_IND BIT(0)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_BEAM_CHANGE BIT(1)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_DL_UL_FLAG BIT(2)
+
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_TXOP_DURATION GENMASK(6, 0)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_CODING BIT(7)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_LDPC_EXTRA BIT(8)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_STBC BIT(9)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_TXBF BIT(10)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_PKT_EXT_FACTOR GENMASK(12, 11)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_PKT_EXT_PE_DISAM BIT(13)
+#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_DOPPLER_IND BIT(15)
+
+struct hal_rx_he_sig_a_su_info {
+ __le32 info0;
+ __le32 info1;
+} __packed;
+
+#define HAL_RX_HE_SIG_A_MU_DL_INFO0_UL_FLAG BIT(1)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO0_MCS_OF_SIGB GENMASK(3, 1)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO0_DCM_OF_SIGB BIT(4)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO0_BSS_COLOR GENMASK(10, 5)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO0_SPATIAL_REUSE GENMASK(14, 11)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO0_TRANSMIT_BW GENMASK(17, 15)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO0_NUM_SIGB_SYMB GENMASK(21, 18)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO0_COMP_MODE_SIGB BIT(22)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO0_CP_LTF_SIZE GENMASK(24, 23)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO0_DOPPLER_INDICATION BIT(25)
+
+#define HAL_RX_HE_SIG_A_MU_DL_INFO1_TXOP_DURATION GENMASK(6, 0)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO1_CODING BIT(7)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO1_NUM_LTF_SYMB GENMASK(10, 8)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO1_LDPC_EXTRA BIT(11)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO1_STBC BIT(12)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO1_TXBF BIT(10)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO1_PKT_EXT_FACTOR GENMASK(14, 13)
+#define HAL_RX_HE_SIG_A_MU_DL_INFO1_PKT_EXT_PE_DISAM BIT(15)
+
+struct hal_rx_he_sig_a_mu_dl_info {
+ __le32 info0;
+ __le32 info1;
+} __packed;
+
+#define HAL_RX_HE_SIG_B1_MU_INFO_INFO0_RU_ALLOCATION GENMASK(7, 0)
+
+struct hal_rx_he_sig_b1_mu_info {
+ __le32 info0;
+} __packed;
+
+#define HAL_RX_HE_SIG_B2_MU_INFO_INFO0_STA_ID GENMASK(10, 0)
+#define HAL_RX_HE_SIG_B2_MU_INFO_INFO0_STA_MCS GENMASK(18, 15)
+#define HAL_RX_HE_SIG_B2_MU_INFO_INFO0_STA_CODING BIT(20)
+#define HAL_RX_HE_SIG_B2_MU_INFO_INFO0_STA_NSTS GENMASK(31, 29)
+
+struct hal_rx_he_sig_b2_mu_info {
+ __le32 info0;
+} __packed;
+
+#define HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_ID GENMASK(10, 0)
+#define HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_NSTS GENMASK(13, 11)
+#define HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_TXBF BIT(19)
+#define HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_MCS GENMASK(18, 15)
+#define HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_DCM BIT(19)
+#define HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_CODING BIT(20)
+
+struct hal_rx_he_sig_b2_ofdma_info {
+ __le32 info0;
+} __packed;
+
+enum hal_rx_ul_reception_type {
+ HAL_RECEPTION_TYPE_ULOFMDA,
+ HAL_RECEPTION_TYPE_ULMIMO,
+ HAL_RECEPTION_TYPE_OTHER,
+ HAL_RECEPTION_TYPE_FRAMELESS
+};
+
+#define HAL_RX_PHYRX_RSSI_LEGACY_INFO_INFO0_RSSI_COMB GENMASK(15, 8)
+#define HAL_RX_PHYRX_RSSI_LEGACY_INFO_RSVD1_RECEPTION GENMASK(3, 0)
+
+struct hal_rx_phyrx_rssi_legacy_info {
+ __le32 rsvd[35];
+ __le32 info0;
+} __packed;
+
+#define HAL_RX_MPDU_START_INFO0_PPDU_ID GENMASK(31, 16)
+#define HAL_RX_MPDU_START_INFO1_PEERID GENMASK(31, 16)
+#define HAL_RX_MPDU_START_INFO2_MPDU_LEN GENMASK(13, 0)
+struct hal_rx_mpdu_start {
+ __le32 info0;
+ __le32 info1;
+ __le32 rsvd1[11];
+ __le32 info2;
+ __le32 rsvd2[9];
+} __packed;
+
+#define HAL_RX_PPDU_END_DURATION GENMASK(23, 0)
+struct hal_rx_ppdu_end_duration {
+ __le32 rsvd0[9];
+ __le32 info0;
+ __le32 rsvd1[4];
+} __packed;
+
+struct hal_rx_rxpcu_classification_overview {
+ u32 rsvd0;
+} __packed;
+
+struct hal_rx_msdu_desc_info {
+ u32 msdu_flags;
+ u16 msdu_len; /* 14 bits for length */
+};
+
+#define HAL_RX_NUM_MSDU_DESC 6
+struct hal_rx_msdu_list {
+ struct hal_rx_msdu_desc_info msdu_info[HAL_RX_NUM_MSDU_DESC];
+ u32 sw_cookie[HAL_RX_NUM_MSDU_DESC];
+ u8 rbm[HAL_RX_NUM_MSDU_DESC];
+};
+
+#define HAL_RX_FBM_ACK_INFO0_ADDR1_31_0 GENMASK(31, 0)
+#define HAL_RX_FBM_ACK_INFO1_ADDR1_47_32 GENMASK(15, 0)
+#define HAL_RX_FBM_ACK_INFO1_ADDR2_15_0 GENMASK(31, 16)
+#define HAL_RX_FBM_ACK_INFO2_ADDR2_47_16 GENMASK(31, 0)
+
+struct hal_rx_frame_bitmap_ack {
+ __le32 reserved;
+ __le32 info0;
+ __le32 info1;
+ __le32 info2;
+ __le32 reserved1[10];
+} __packed;
+
+#define HAL_RX_RESP_REQ_INFO0_PPDU_ID GENMASK(15, 0)
+#define HAL_RX_RESP_REQ_INFO0_RECEPTION_TYPE BIT(16)
+#define HAL_RX_RESP_REQ_INFO1_DURATION GENMASK(15, 0)
+#define HAL_RX_RESP_REQ_INFO1_RATE_MCS GENMASK(24, 21)
+#define HAL_RX_RESP_REQ_INFO1_SGI GENMASK(26, 25)
+#define HAL_RX_RESP_REQ_INFO1_STBC BIT(27)
+#define HAL_RX_RESP_REQ_INFO1_LDPC BIT(28)
+#define HAL_RX_RESP_REQ_INFO1_IS_AMPDU BIT(29)
+#define HAL_RX_RESP_REQ_INFO2_NUM_USER GENMASK(6, 0)
+#define HAL_RX_RESP_REQ_INFO3_ADDR1_31_0 GENMASK(31, 0)
+#define HAL_RX_RESP_REQ_INFO4_ADDR1_47_32 GENMASK(15, 0)
+#define HAL_RX_RESP_REQ_INFO4_ADDR1_15_0 GENMASK(31, 16)
+#define HAL_RX_RESP_REQ_INFO5_ADDR1_47_16 GENMASK(31, 0)
+
+struct hal_rx_resp_req_info {
+ __le32 info0;
+ __le32 reserved[1];
+ __le32 info1;
+ __le32 info2;
+ __le32 reserved1[2];
+ __le32 info3;
+ __le32 info4;
+ __le32 info5;
+ __le32 reserved2[5];
+} __packed;
+
+#define REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_0 0xDDBEEF
+#define REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_1 0xADBEEF
+#define REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_2 0xBDBEEF
+#define REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_3 0xCDBEEF
+
+#define HAL_RX_UL_OFDMA_USER_INFO_V0_W0_VALID BIT(30)
+#define HAL_RX_UL_OFDMA_USER_INFO_V0_W0_VER BIT(31)
+#define HAL_RX_UL_OFDMA_USER_INFO_V0_W1_NSS GENMASK(2, 0)
+#define HAL_RX_UL_OFDMA_USER_INFO_V0_W1_MCS GENMASK(6, 3)
+#define HAL_RX_UL_OFDMA_USER_INFO_V0_W1_LDPC BIT(7)
+#define HAL_RX_UL_OFDMA_USER_INFO_V0_W1_DCM BIT(8)
+#define HAL_RX_UL_OFDMA_USER_INFO_V0_W1_RU_START GENMASK(15, 9)
+#define HAL_RX_UL_OFDMA_USER_INFO_V0_W1_RU_SIZE GENMASK(18, 16)
+
+/* HE Radiotap data1 Mask */
+#define HE_SU_FORMAT_TYPE 0x0000
+#define HE_EXT_SU_FORMAT_TYPE 0x0001
+#define HE_MU_FORMAT_TYPE 0x0002
+#define HE_TRIG_FORMAT_TYPE 0x0003
+#define HE_BEAM_CHANGE_KNOWN 0x0008
+#define HE_DL_UL_KNOWN 0x0010
+#define HE_MCS_KNOWN 0x0020
+#define HE_DCM_KNOWN 0x0040
+#define HE_CODING_KNOWN 0x0080
+#define HE_LDPC_EXTRA_SYMBOL_KNOWN 0x0100
+#define HE_STBC_KNOWN 0x0200
+#define HE_DATA_BW_RU_KNOWN 0x4000
+#define HE_DOPPLER_KNOWN 0x8000
+#define HE_BSS_COLOR_KNOWN 0x0004
+
+/* HE Radiotap data2 Mask */
+#define HE_GI_KNOWN 0x0002
+#define HE_TXBF_KNOWN 0x0010
+#define HE_PE_DISAMBIGUITY_KNOWN 0x0020
+#define HE_TXOP_KNOWN 0x0040
+#define HE_LTF_SYMBOLS_KNOWN 0x0004
+#define HE_PRE_FEC_PADDING_KNOWN 0x0008
+#define HE_MIDABLE_PERIODICITY_KNOWN 0x0080
+
+/* HE radiotap data3 shift values */
+#define HE_BEAM_CHANGE_SHIFT 6
+#define HE_DL_UL_SHIFT 7
+#define HE_TRANSMIT_MCS_SHIFT 8
+#define HE_DCM_SHIFT 12
+#define HE_CODING_SHIFT 13
+#define HE_LDPC_EXTRA_SYMBOL_SHIFT 14
+#define HE_STBC_SHIFT 15
+
+/* HE radiotap data4 shift values */
+#define HE_STA_ID_SHIFT 4
+
+/* HE radiotap data5 */
+#define HE_GI_SHIFT 4
+#define HE_LTF_SIZE_SHIFT 6
+#define HE_LTF_SYM_SHIFT 8
+#define HE_TXBF_SHIFT 14
+#define HE_PE_DISAMBIGUITY_SHIFT 15
+#define HE_PRE_FEC_PAD_SHIFT 12
+
+/* HE radiotap data6 */
+#define HE_DOPPLER_SHIFT 4
+#define HE_TXOP_SHIFT 8
+
+/* HE radiotap HE-MU flags1 */
+#define HE_SIG_B_MCS_KNOWN 0x0010
+#define HE_SIG_B_DCM_KNOWN 0x0040
+#define HE_SIG_B_SYM_NUM_KNOWN 0x8000
+#define HE_RU_0_KNOWN 0x0100
+#define HE_RU_1_KNOWN 0x0200
+#define HE_RU_2_KNOWN 0x0400
+#define HE_RU_3_KNOWN 0x0800
+#define HE_DCM_FLAG_1_SHIFT 5
+#define HE_SPATIAL_REUSE_MU_KNOWN 0x0100
+#define HE_SIG_B_COMPRESSION_FLAG_1_KNOWN 0x4000
+
+/* HE radiotap HE-MU flags2 */
+#define HE_SIG_B_COMPRESSION_FLAG_2_SHIFT 3
+#define HE_BW_KNOWN 0x0004
+#define HE_NUM_SIG_B_SYMBOLS_SHIFT 4
+#define HE_SIG_B_COMPRESSION_FLAG_2_KNOWN 0x0100
+#define HE_NUM_SIG_B_FLAG_2_SHIFT 9
+#define HE_LTF_FLAG_2_SYMBOLS_SHIFT 12
+#define HE_LTF_KNOWN 0x8000
+
+/* HE radiotap per_user_1 */
+#define HE_STA_SPATIAL_SHIFT 11
+#define HE_TXBF_SHIFT 14
+#define HE_RESERVED_SET_TO_1_SHIFT 19
+#define HE_STA_CODING_SHIFT 20
+
+/* HE radiotap per_user_2 */
+#define HE_STA_MCS_SHIFT 4
+#define HE_STA_DCM_SHIFT 5
+
+/* HE radiotap per user known */
+#define HE_USER_FIELD_POSITION_KNOWN 0x01
+#define HE_STA_ID_PER_USER_KNOWN 0x02
+#define HE_STA_NSTS_KNOWN 0x04
+#define HE_STA_TX_BF_KNOWN 0x08
+#define HE_STA_SPATIAL_CONFIG_KNOWN 0x10
+#define HE_STA_MCS_KNOWN 0x20
+#define HE_STA_DCM_KNOWN 0x40
+#define HE_STA_CODING_KNOWN 0x80
+
+#define HAL_RX_MPDU_ERR_FCS BIT(0)
+#define HAL_RX_MPDU_ERR_DECRYPT BIT(1)
+#define HAL_RX_MPDU_ERR_TKIP_MIC BIT(2)
+#define HAL_RX_MPDU_ERR_AMSDU_ERR BIT(3)
+#define HAL_RX_MPDU_ERR_OVERFLOW BIT(4)
+#define HAL_RX_MPDU_ERR_MSDU_LEN BIT(5)
+#define HAL_RX_MPDU_ERR_MPDU_LEN BIT(6)
+#define HAL_RX_MPDU_ERR_UNENCRYPTED_FRAME BIT(7)
+
+static inline
+enum nl80211_he_ru_alloc ath12k_he_ru_tones_to_nl80211_he_ru_alloc(u16 ru_tones)
+{
+ enum nl80211_he_ru_alloc ret;
+
+ switch (ru_tones) {
+ case RU_52:
+ ret = NL80211_RATE_INFO_HE_RU_ALLOC_52;
+ break;
+ case RU_106:
+ ret = NL80211_RATE_INFO_HE_RU_ALLOC_106;
+ break;
+ case RU_242:
+ ret = NL80211_RATE_INFO_HE_RU_ALLOC_242;
+ break;
+ case RU_484:
+ ret = NL80211_RATE_INFO_HE_RU_ALLOC_484;
+ break;
+ case RU_996:
+ ret = NL80211_RATE_INFO_HE_RU_ALLOC_996;
+ break;
+ case RU_26:
+ fallthrough;
+ default:
+ ret = NL80211_RATE_INFO_HE_RU_ALLOC_26;
+ break;
+ }
+ return ret;
+}
+
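A hypothetical caller sketch: any tone count outside the listed cases falls back to the 26-tone allocation, so the helper can safely be fed unvalidated input. The local variable below is illustrative:

enum nl80211_he_ru_alloc ru;

ru = ath12k_he_ru_tones_to_nl80211_he_ru_alloc(RU_106);
/* ru == NL80211_RATE_INFO_HE_RU_ALLOC_106; an unrecognized count
 * such as 13 would instead yield NL80211_RATE_INFO_HE_RU_ALLOC_26.
 */
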
+void ath12k_hal_reo_status_queue_stats(struct ath12k_base *ab,
+ struct hal_tlv_64_hdr *tlv,
+ struct hal_reo_status *status);
+void ath12k_hal_reo_flush_queue_status(struct ath12k_base *ab,
+ struct hal_tlv_64_hdr *tlv,
+ struct hal_reo_status *status);
+void ath12k_hal_reo_flush_cache_status(struct ath12k_base *ab,
+ struct hal_tlv_64_hdr *tlv,
+ struct hal_reo_status *status);
+void ath12k_hal_reo_unblk_cache_status(struct ath12k_base *ab,
+ struct hal_tlv_64_hdr *tlv,
+ struct hal_reo_status *status);
+void ath12k_hal_reo_flush_timeout_list_status(struct ath12k_base *ab,
+ struct hal_tlv_64_hdr *tlv,
+ struct hal_reo_status *status);
+void ath12k_hal_reo_desc_thresh_reached_status(struct ath12k_base *ab,
+ struct hal_tlv_64_hdr *tlv,
+ struct hal_reo_status *status);
+void ath12k_hal_reo_update_rx_reo_queue_status(struct ath12k_base *ab,
+ struct hal_tlv_64_hdr *tlv,
+ struct hal_reo_status *status);
+void ath12k_hal_rx_msdu_link_info_get(struct hal_rx_msdu_link *link, u32 *num_msdus,
+ u32 *msdu_cookies,
+ enum hal_rx_buf_return_buf_manager *rbm);
+void ath12k_hal_rx_msdu_link_desc_set(struct ath12k_base *ab,
+ struct hal_wbm_release_ring *dst_desc,
+ struct hal_wbm_release_ring *src_desc,
+ enum hal_wbm_rel_bm_act action);
+void ath12k_hal_rx_buf_addr_info_set(struct ath12k_buffer_addr *binfo,
+ dma_addr_t paddr, u32 cookie, u8 manager);
+void ath12k_hal_rx_buf_addr_info_get(struct ath12k_buffer_addr *binfo,
+ dma_addr_t *paddr,
+ u32 *cookie, u8 *rbm);
+int ath12k_hal_desc_reo_parse_err(struct ath12k_base *ab,
+ struct hal_reo_dest_ring *desc,
+ dma_addr_t *paddr, u32 *desc_bank);
+int ath12k_hal_wbm_desc_parse_err(struct ath12k_base *ab, void *desc,
+ struct hal_rx_wbm_rel_info *rel_info);
+void ath12k_hal_rx_reo_ent_paddr_get(struct ath12k_base *ab,
+ struct ath12k_buffer_addr *buff_addr,
+ dma_addr_t *paddr, u32 *cookie);
+
+#endif
diff --git a/drivers/net/wireless/ath/ath12k/hal_tx.c b/drivers/net/wireless/ath/ath12k/hal_tx.c
new file mode 100644
index 000000000000..869e07e406fe
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/hal_tx.c
@@ -0,0 +1,145 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include "hal_desc.h"
+#include "hal.h"
+#include "hal_tx.h"
+#include "hif.h"
+
+#define DSCP_TID_MAP_TBL_ENTRY_SIZE 64
+
+/* dscp_tid_map - Default DSCP-TID mapping
+ *=================
+ * DSCP TID
+ *=================
+ * 000xxx 0
+ * 001xxx 1
+ * 010xxx 2
+ * 011xxx 3
+ * 100xxx 4
+ * 101xxx 5
+ * 110xxx 6
+ * 111xxx 7
+ */
+static inline u8 dscp2tid(u8 dscp)
+{
+ return dscp >> 3;
+}
+
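Worked example of the default mapping above, using well-known DSCP code points (only the top three DSCP bits matter):

/* dscp2tid(0)  == 0	CS0 / best effort (000000b) -> TID 0
 * dscp2tid(46) == 5	EF                (101110b) -> TID 5
 * dscp2tid(63) == 7	highest DSCP      (111111b) -> TID 7
 */
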
+void ath12k_hal_tx_cmd_desc_setup(struct ath12k_base *ab,
+ struct hal_tcl_data_cmd *tcl_cmd,
+ struct hal_tx_info *ti)
+{
+ tcl_cmd->buf_addr_info.info0 =
+ le32_encode_bits(ti->paddr, BUFFER_ADDR_INFO0_ADDR);
+ tcl_cmd->buf_addr_info.info1 =
+		le32_encode_bits(((u64)ti->paddr >> HAL_ADDR_MSB_REG_SHIFT),
+ BUFFER_ADDR_INFO1_ADDR);
+ tcl_cmd->buf_addr_info.info1 |=
+ le32_encode_bits((ti->rbm_id), BUFFER_ADDR_INFO1_RET_BUF_MGR) |
+ le32_encode_bits(ti->desc_id, BUFFER_ADDR_INFO1_SW_COOKIE);
+
+ tcl_cmd->info0 =
+ le32_encode_bits(ti->type, HAL_TCL_DATA_CMD_INFO0_DESC_TYPE) |
+ le32_encode_bits(ti->bank_id, HAL_TCL_DATA_CMD_INFO0_BANK_ID);
+
+ tcl_cmd->info1 =
+ le32_encode_bits(ti->meta_data_flags,
+ HAL_TCL_DATA_CMD_INFO1_CMD_NUM);
+
+ tcl_cmd->info2 = cpu_to_le32(ti->flags0) |
+ le32_encode_bits(ti->data_len, HAL_TCL_DATA_CMD_INFO2_DATA_LEN) |
+ le32_encode_bits(ti->pkt_offset, HAL_TCL_DATA_CMD_INFO2_PKT_OFFSET);
+
+ tcl_cmd->info3 = cpu_to_le32(ti->flags1) |
+ le32_encode_bits(ti->tid, HAL_TCL_DATA_CMD_INFO3_TID) |
+ le32_encode_bits(ti->lmac_id, HAL_TCL_DATA_CMD_INFO3_PMAC_ID) |
+ le32_encode_bits(ti->vdev_id, HAL_TCL_DATA_CMD_INFO3_VDEV_ID);
+
+ tcl_cmd->info4 = le32_encode_bits(ti->bss_ast_idx,
+ HAL_TCL_DATA_CMD_INFO4_SEARCH_INDEX) |
+ le32_encode_bits(ti->bss_ast_hash,
+ HAL_TCL_DATA_CMD_INFO4_CACHE_SET_NUM);
+ tcl_cmd->info5 = 0;
+}
+
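A hedged caller sketch for the helper above; the paddr/cookie/tcl_desc names are illustrative, and HAL_TCL_DESC_TYPE_BUFFER is assumed to be the buffer descriptor type defined elsewhere in the patch:

struct hal_tx_info ti = { };

ti.paddr = paddr;			/* DMA address of the mapped frame */
ti.data_len = skb->len;			/* feeds HAL_TCL_DATA_CMD_INFO2_DATA_LEN */
ti.desc_id = cookie;			/* SW cookie recovered on tx completion */
ti.type = HAL_TCL_DESC_TYPE_BUFFER;	/* assumed enum value */
ath12k_hal_tx_cmd_desc_setup(ab, tcl_desc, &ti);
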
+void ath12k_hal_tx_set_dscp_tid_map(struct ath12k_base *ab, int id)
+{
+ u32 ctrl_reg_val;
+ u32 addr;
+ u8 hw_map_val[HAL_DSCP_TID_TBL_SIZE], dscp, tid;
+ int i;
+ u32 value;
+
+ ctrl_reg_val = ath12k_hif_read32(ab, HAL_SEQ_WCSS_UMAC_TCL_REG +
+ HAL_TCL1_RING_CMN_CTRL_REG);
+ /* Enable read/write access */
+ ctrl_reg_val |= HAL_TCL1_RING_CMN_CTRL_DSCP_TID_MAP_PROG_EN;
+ ath12k_hif_write32(ab, HAL_SEQ_WCSS_UMAC_TCL_REG +
+ HAL_TCL1_RING_CMN_CTRL_REG, ctrl_reg_val);
+
+ addr = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_DSCP_TID_MAP +
+ (4 * id * (HAL_DSCP_TID_TBL_SIZE / 4));
+
+	/* Each DSCP-TID mapping takes three bits, so every iteration below
+	 * configures eight mappings, i.e. three bytes.
+	 */
+ for (i = 0, dscp = 0; i < HAL_DSCP_TID_TBL_SIZE; i += 3) {
+ tid = dscp2tid(dscp);
+ value = u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP0);
+ dscp++;
+
+ tid = dscp2tid(dscp);
+ value |= u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP1);
+ dscp++;
+
+ tid = dscp2tid(dscp);
+ value |= u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP2);
+ dscp++;
+
+ tid = dscp2tid(dscp);
+ value |= u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP3);
+ dscp++;
+
+ tid = dscp2tid(dscp);
+ value |= u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP4);
+ dscp++;
+
+ tid = dscp2tid(dscp);
+ value |= u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP5);
+ dscp++;
+
+ tid = dscp2tid(dscp);
+ value |= u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP6);
+ dscp++;
+
+ tid = dscp2tid(dscp);
+ value |= u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP7);
+ dscp++;
+
+ memcpy(&hw_map_val[i], &value, 3);
+ }
+
+ for (i = 0; i < HAL_DSCP_TID_TBL_SIZE; i += 4) {
+ ath12k_hif_write32(ab, addr, *(u32 *)&hw_map_val[i]);
+ addr += 4;
+ }
+
+ /* Disable read/write access */
+ ctrl_reg_val = ath12k_hif_read32(ab, HAL_SEQ_WCSS_UMAC_TCL_REG +
+ HAL_TCL1_RING_CMN_CTRL_REG);
+ ctrl_reg_val &= ~HAL_TCL1_RING_CMN_CTRL_DSCP_TID_MAP_PROG_EN;
+ ath12k_hif_write32(ab, HAL_SEQ_WCSS_UMAC_TCL_REG +
+ HAL_TCL1_RING_CMN_CTRL_REG,
+ ctrl_reg_val);
+}
+
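The eight 3-bit fields written per iteration above occupy bits 0..23 of `value`, which is why exactly three bytes are copied out each pass. A standalone sketch of the same packing, assuming the MAP0..MAP7 fields sit contiguously from bit 0 (their masks are defined elsewhere in the patch):

static void pack_eight_tids(const uint8_t tid[8], uint8_t out[3])
{
	uint32_t v = 0;
	int i;

	for (i = 0; i < 8; i++)
		v |= (uint32_t)(tid[i] & 0x7) << (3 * i);

	out[0] = v & 0xff;		/* bits  0..7  */
	out[1] = (v >> 8) & 0xff;	/* bits  8..15 */
	out[2] = (v >> 16) & 0xff;	/* bits 16..23 */
}
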
+void ath12k_hal_tx_configure_bank_register(struct ath12k_base *ab, u32 bank_config,
+ u8 bank_id)
+{
+ ath12k_hif_write32(ab, HAL_TCL_SW_CONFIG_BANK_ADDR + 4 * bank_id,
+ bank_config);
+}
diff --git a/drivers/net/wireless/ath/ath12k/hal_tx.h b/drivers/net/wireless/ath/ath12k/hal_tx.h
new file mode 100644
index 000000000000..7c837094a6f7
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/hal_tx.h
@@ -0,0 +1,194 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_HAL_TX_H
+#define ATH12K_HAL_TX_H
+
+#include "hal_desc.h"
+#include "core.h"
+
+#define HAL_TX_ADDRX_EN 1
+#define HAL_TX_ADDRY_EN 2
+
+#define HAL_TX_ADDR_SEARCH_DEFAULT 0
+#define HAL_TX_ADDR_SEARCH_INDEX 1
+
+/* TODO: check whether all of this data can be managed within struct ath12k_tx_desc_info for performance */
+struct hal_tx_info {
+ u16 meta_data_flags; /* %HAL_TCL_DATA_CMD_INFO0_META_ */
+ u8 ring_id;
+ u8 rbm_id;
+ u32 desc_id;
+ enum hal_tcl_desc_type type;
+ enum hal_tcl_encap_type encap_type;
+ dma_addr_t paddr;
+ u32 data_len;
+ u32 pkt_offset;
+ enum hal_encrypt_type encrypt_type;
+ u32 flags0; /* %HAL_TCL_DATA_CMD_INFO1_ */
+ u32 flags1; /* %HAL_TCL_DATA_CMD_INFO2_ */
+ u16 addr_search_flags; /* %HAL_TCL_DATA_CMD_INFO0_ADDR(X/Y)_ */
+ u16 bss_ast_hash;
+ u16 bss_ast_idx;
+ u8 tid;
+ u8 search_type; /* %HAL_TX_ADDR_SEARCH_ */
+ u8 lmac_id;
+ u8 vdev_id;
+ u8 dscp_tid_tbl_idx;
+ bool enable_mesh;
+ int bank_id;
+};
+
+/* TODO: Check if the actual desc macros can be used instead */
+#define HAL_TX_STATUS_FLAGS_FIRST_MSDU BIT(0)
+#define HAL_TX_STATUS_FLAGS_LAST_MSDU BIT(1)
+#define HAL_TX_STATUS_FLAGS_MSDU_IN_AMSDU BIT(2)
+#define HAL_TX_STATUS_FLAGS_RATE_STATS_VALID BIT(3)
+#define HAL_TX_STATUS_FLAGS_RATE_LDPC BIT(4)
+#define HAL_TX_STATUS_FLAGS_RATE_STBC BIT(5)
+#define HAL_TX_STATUS_FLAGS_OFDMA BIT(6)
+
+#define HAL_TX_STATUS_DESC_LEN sizeof(struct hal_wbm_release_ring)
+
+/* Tx status parsed from srng desc */
+struct hal_tx_status {
+ enum hal_wbm_rel_src_module buf_rel_source;
+ enum hal_wbm_tqm_rel_reason status;
+ u8 ack_rssi;
+ u32 flags; /* %HAL_TX_STATUS_FLAGS_ */
+ u32 ppdu_id;
+ u8 try_cnt;
+ u8 tid;
+ u16 peer_id;
+ u32 rate_stats;
+};
+
+#define HAL_TX_PHY_DESC_INFO0_BF_TYPE GENMASK(17, 16)
+#define HAL_TX_PHY_DESC_INFO0_PREAMBLE_11B BIT(20)
+#define HAL_TX_PHY_DESC_INFO0_PKT_TYPE GENMASK(24, 21)
+#define HAL_TX_PHY_DESC_INFO0_BANDWIDTH GENMASK(30, 28)
+#define HAL_TX_PHY_DESC_INFO1_MCS GENMASK(3, 0)
+#define HAL_TX_PHY_DESC_INFO1_STBC BIT(6)
+#define HAL_TX_PHY_DESC_INFO2_NSS GENMASK(23, 21)
+#define HAL_TX_PHY_DESC_INFO3_AP_PKT_BW GENMASK(6, 4)
+#define HAL_TX_PHY_DESC_INFO3_LTF_SIZE GENMASK(20, 19)
+#define HAL_TX_PHY_DESC_INFO3_ACTIVE_CHANNEL GENMASK(17, 15)
+
+struct hal_tx_phy_desc {
+ __le32 info0;
+ __le32 info1;
+ __le32 info2;
+ __le32 info3;
+} __packed;
+
+#define HAL_TX_FES_STAT_PROT_INFO0_STRT_FRM_TS_15_0 GENMASK(15, 0)
+#define HAL_TX_FES_STAT_PROT_INFO0_STRT_FRM_TS_31_16 GENMASK(31, 16)
+#define HAL_TX_FES_STAT_PROT_INFO1_END_FRM_TS_15_0 GENMASK(15, 0)
+#define HAL_TX_FES_STAT_PROT_INFO1_END_FRM_TS_31_16 GENMASK(31, 16)
+
+struct hal_tx_fes_status_prot {
+ __le64 reserved;
+ __le32 info0;
+ __le32 info1;
+ __le32 reserved1[11];
+} __packed;
+
+#define HAL_TX_FES_STAT_USR_PPDU_INFO0_DURATION GENMASK(15, 0)
+
+struct hal_tx_fes_status_user_ppdu {
+ __le64 reserved;
+ __le32 info0;
+ __le32 reserved1[3];
+} __packed;
+
+#define HAL_TX_FES_STAT_STRT_INFO0_PROT_TS_LOWER_32 GENMASK(31, 0)
+#define HAL_TX_FES_STAT_STRT_INFO1_PROT_TS_UPPER_32 GENMASK(31, 0)
+
+struct hal_tx_fes_status_start_prot {
+ __le32 info0;
+ __le32 info1;
+ __le64 reserved;
+} __packed;
+
+#define HAL_TX_FES_STATUS_START_INFO0_MEDIUM_PROT_TYPE GENMASK(29, 27)
+
+struct hal_tx_fes_status_start {
+ __le32 reserved;
+ __le32 info0;
+ __le64 reserved1;
+} __packed;
+
+#define HAL_TX_Q_EXT_INFO0_FRAME_CTRL GENMASK(15, 0)
+#define HAL_TX_Q_EXT_INFO0_QOS_CTRL GENMASK(31, 16)
+#define HAL_TX_Q_EXT_INFO1_AMPDU_FLAG BIT(0)
+
+struct hal_tx_queue_exten {
+ __le32 info0;
+ __le32 info1;
+} __packed;
+
+#define HAL_TX_FES_SETUP_INFO0_NUM_OF_USERS GENMASK(28, 23)
+
+struct hal_tx_fes_setup {
+ __le32 schedule_id;
+ __le32 info0;
+ __le64 reserved;
+} __packed;
+
+#define HAL_TX_PPDU_SETUP_INFO0_MEDIUM_PROT_TYPE GENMASK(2, 0)
+#define HAL_TX_PPDU_SETUP_INFO1_PROT_FRAME_ADDR1_31_0 GENMASK(31, 0)
+#define HAL_TX_PPDU_SETUP_INFO2_PROT_FRAME_ADDR1_47_32 GENMASK(15, 0)
+#define HAL_TX_PPDU_SETUP_INFO2_PROT_FRAME_ADDR2_15_0 GENMASK(31, 16)
+#define HAL_TX_PPDU_SETUP_INFO3_PROT_FRAME_ADDR2_47_16 GENMASK(31, 0)
+#define HAL_TX_PPDU_SETUP_INFO4_PROT_FRAME_ADDR3_31_0 GENMASK(31, 0)
+#define HAL_TX_PPDU_SETUP_INFO5_PROT_FRAME_ADDR3_47_32 GENMASK(15, 0)
+#define HAL_TX_PPDU_SETUP_INFO5_PROT_FRAME_ADDR4_15_0 GENMASK(31, 16)
+#define HAL_TX_PPDU_SETUP_INFO6_PROT_FRAME_ADDR4_47_16 GENMASK(31, 0)
+
+struct hal_tx_pcu_ppdu_setup_init {
+ __le32 info0;
+ __le32 info1;
+ __le32 info2;
+ __le32 info3;
+ __le32 reserved;
+ __le32 info4;
+ __le32 info5;
+ __le32 info6;
+} __packed;
+
+#define HAL_TX_FES_STATUS_END_INFO0_START_TIMESTAMP_15_0 GENMASK(15, 0)
+#define HAL_TX_FES_STATUS_END_INFO0_START_TIMESTAMP_31_16 GENMASK(31, 16)
+
+struct hal_tx_fes_status_end {
+ __le32 reserved[2];
+ __le32 info0;
+ __le32 reserved1[19];
+} __packed;
+
+#define HAL_TX_BANK_CONFIG_EPD BIT(0)
+#define HAL_TX_BANK_CONFIG_ENCAP_TYPE GENMASK(2, 1)
+#define HAL_TX_BANK_CONFIG_ENCRYPT_TYPE GENMASK(6, 3)
+#define HAL_TX_BANK_CONFIG_SRC_BUFFER_SWAP BIT(7)
+#define HAL_TX_BANK_CONFIG_LINK_META_SWAP BIT(8)
+#define HAL_TX_BANK_CONFIG_INDEX_LOOKUP_EN BIT(9)
+#define HAL_TX_BANK_CONFIG_ADDRX_EN BIT(10)
+#define HAL_TX_BANK_CONFIG_ADDRY_EN BIT(11)
+#define HAL_TX_BANK_CONFIG_MESH_EN GENMASK(13, 12)
+#define HAL_TX_BANK_CONFIG_VDEV_ID_CHECK_EN BIT(14)
+#define HAL_TX_BANK_CONFIG_PMAC_ID GENMASK(16, 15)
+/* STA mode will have MCAST_PKT_CTRL instead of DSCP_TID_MAP bitfield */
+#define HAL_TX_BANK_CONFIG_DSCP_TIP_MAP_ID GENMASK(22, 17)
+
+void ath12k_hal_tx_cmd_desc_setup(struct ath12k_base *ab,
+ struct hal_tcl_data_cmd *tcl_cmd,
+ struct hal_tx_info *ti);
+void ath12k_hal_tx_set_dscp_tid_map(struct ath12k_base *ab, int id);
+int ath12k_hal_reo_cmd_send(struct ath12k_base *ab, struct hal_srng *srng,
+ enum hal_reo_cmd_type type,
+ struct ath12k_hal_reo_cmd *cmd);
+void ath12k_hal_tx_configure_bank_register(struct ath12k_base *ab, u32 bank_config,
+ u8 bank_id);
+#endif
diff --git a/drivers/net/wireless/ath/ath12k/hif.h b/drivers/net/wireless/ath/ath12k/hif.h
new file mode 100644
index 000000000000..54490cdb63a1
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/hif.h
@@ -0,0 +1,144 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_HIF_H
+#define ATH12K_HIF_H
+
+#include "core.h"
+
+struct ath12k_hif_ops {
+ u32 (*read32)(struct ath12k_base *sc, u32 address);
+ void (*write32)(struct ath12k_base *sc, u32 address, u32 data);
+ void (*irq_enable)(struct ath12k_base *sc);
+ void (*irq_disable)(struct ath12k_base *sc);
+ int (*start)(struct ath12k_base *sc);
+ void (*stop)(struct ath12k_base *sc);
+ int (*power_up)(struct ath12k_base *sc);
+ void (*power_down)(struct ath12k_base *sc);
+ int (*suspend)(struct ath12k_base *ab);
+ int (*resume)(struct ath12k_base *ab);
+ int (*map_service_to_pipe)(struct ath12k_base *sc, u16 service_id,
+ u8 *ul_pipe, u8 *dl_pipe);
+ int (*get_user_msi_vector)(struct ath12k_base *ab, char *user_name,
+ int *num_vectors, u32 *user_base_data,
+ u32 *base_vector);
+ void (*get_msi_address)(struct ath12k_base *ab, u32 *msi_addr_lo,
+ u32 *msi_addr_hi);
+ void (*ce_irq_enable)(struct ath12k_base *ab);
+ void (*ce_irq_disable)(struct ath12k_base *ab);
+ void (*get_ce_msi_idx)(struct ath12k_base *ab, u32 ce_id, u32 *msi_idx);
+};
+
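A hypothetical example of how a bus glue layer might populate this table; the accessor bodies and the `mem` member are illustrative (the real implementations live in the PCI glue elsewhere in the patch). Optional hooks can stay NULL, which the inline wrappers below check for:

static u32 ath12k_demo_read32(struct ath12k_base *ab, u32 address)
{
	return ioread32(ab->mem + address);	/* 'mem' assumed MMIO base */
}

static void ath12k_demo_write32(struct ath12k_base *ab, u32 address, u32 data)
{
	iowrite32(data, ab->mem + address);
}

static const struct ath12k_hif_ops ath12k_demo_hif_ops = {
	.read32 = ath12k_demo_read32,
	.write32 = ath12k_demo_write32,
	/* .suspend, .get_msi_address, ... intentionally left NULL */
};
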
+static inline int ath12k_hif_map_service_to_pipe(struct ath12k_base *ab, u16 service_id,
+ u8 *ul_pipe, u8 *dl_pipe)
+{
+ return ab->hif.ops->map_service_to_pipe(ab, service_id,
+ ul_pipe, dl_pipe);
+}
+
+static inline int ath12k_hif_get_user_msi_vector(struct ath12k_base *ab,
+ char *user_name,
+ int *num_vectors,
+ u32 *user_base_data,
+ u32 *base_vector)
+{
+ if (!ab->hif.ops->get_user_msi_vector)
+ return -EOPNOTSUPP;
+
+ return ab->hif.ops->get_user_msi_vector(ab, user_name, num_vectors,
+ user_base_data,
+ base_vector);
+}
+
+static inline void ath12k_hif_get_msi_address(struct ath12k_base *ab,
+ u32 *msi_addr_lo,
+ u32 *msi_addr_hi)
+{
+ if (!ab->hif.ops->get_msi_address)
+ return;
+
+ ab->hif.ops->get_msi_address(ab, msi_addr_lo, msi_addr_hi);
+}
+
+static inline void ath12k_hif_get_ce_msi_idx(struct ath12k_base *ab, u32 ce_id,
+ u32 *msi_data_idx)
+{
+ if (ab->hif.ops->get_ce_msi_idx)
+ ab->hif.ops->get_ce_msi_idx(ab, ce_id, msi_data_idx);
+ else
+ *msi_data_idx = ce_id;
+}
+
+static inline void ath12k_hif_ce_irq_enable(struct ath12k_base *ab)
+{
+ if (ab->hif.ops->ce_irq_enable)
+ ab->hif.ops->ce_irq_enable(ab);
+}
+
+static inline void ath12k_hif_ce_irq_disable(struct ath12k_base *ab)
+{
+ if (ab->hif.ops->ce_irq_disable)
+ ab->hif.ops->ce_irq_disable(ab);
+}
+
+static inline void ath12k_hif_irq_enable(struct ath12k_base *ab)
+{
+ ab->hif.ops->irq_enable(ab);
+}
+
+static inline void ath12k_hif_irq_disable(struct ath12k_base *ab)
+{
+ ab->hif.ops->irq_disable(ab);
+}
+
+static inline int ath12k_hif_suspend(struct ath12k_base *ab)
+{
+ if (ab->hif.ops->suspend)
+ return ab->hif.ops->suspend(ab);
+
+ return 0;
+}
+
+static inline int ath12k_hif_resume(struct ath12k_base *ab)
+{
+ if (ab->hif.ops->resume)
+ return ab->hif.ops->resume(ab);
+
+ return 0;
+}
+
+static inline int ath12k_hif_start(struct ath12k_base *ab)
+{
+ return ab->hif.ops->start(ab);
+}
+
+static inline void ath12k_hif_stop(struct ath12k_base *ab)
+{
+ ab->hif.ops->stop(ab);
+}
+
+static inline u32 ath12k_hif_read32(struct ath12k_base *ab, u32 address)
+{
+ return ab->hif.ops->read32(ab, address);
+}
+
+static inline void ath12k_hif_write32(struct ath12k_base *ab, u32 address,
+ u32 data)
+{
+ ab->hif.ops->write32(ab, address, data);
+}
+
+static inline int ath12k_hif_power_up(struct ath12k_base *ab)
+{
+ return ab->hif.ops->power_up(ab);
+}
+
+static inline void ath12k_hif_power_down(struct ath12k_base *ab)
+{
+ ab->hif.ops->power_down(ab);
+}
+
+#endif /* ATH12K_HIF_H */
diff --git a/drivers/net/wireless/ath/ath12k/htc.c b/drivers/net/wireless/ath/ath12k/htc.c
new file mode 100644
index 000000000000..23f7428abd95
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/htc.c
@@ -0,0 +1,789 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+#include <linux/skbuff.h>
+#include <linux/ctype.h>
+
+#include "debug.h"
+#include "hif.h"
+
+struct sk_buff *ath12k_htc_alloc_skb(struct ath12k_base *ab, int size)
+{
+ struct sk_buff *skb;
+
+ skb = dev_alloc_skb(size + sizeof(struct ath12k_htc_hdr));
+ if (!skb)
+ return NULL;
+
+ skb_reserve(skb, sizeof(struct ath12k_htc_hdr));
+
+ /* FW/HTC requires 4-byte aligned streams */
+ if (!IS_ALIGNED((unsigned long)skb->data, 4))
+ ath12k_warn(ab, "Unaligned HTC tx skb\n");
+
+ return skb;
+}
+
+static void ath12k_htc_control_tx_complete(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ kfree_skb(skb);
+}
+
+static struct sk_buff *ath12k_htc_build_tx_ctrl_skb(void)
+{
+ struct sk_buff *skb;
+ struct ath12k_skb_cb *skb_cb;
+
+ skb = dev_alloc_skb(ATH12K_HTC_CONTROL_BUFFER_SIZE);
+ if (!skb)
+ return NULL;
+
+ skb_reserve(skb, sizeof(struct ath12k_htc_hdr));
+ WARN_ON_ONCE(!IS_ALIGNED((unsigned long)skb->data, 4));
+
+ skb_cb = ATH12K_SKB_CB(skb);
+ memset(skb_cb, 0, sizeof(*skb_cb));
+
+ return skb;
+}
+
+static void ath12k_htc_prepare_tx_skb(struct ath12k_htc_ep *ep,
+ struct sk_buff *skb)
+{
+ struct ath12k_htc_hdr *hdr;
+
+ hdr = (struct ath12k_htc_hdr *)skb->data;
+
+ memset(hdr, 0, sizeof(*hdr));
+ hdr->htc_info = le32_encode_bits(ep->eid, HTC_HDR_ENDPOINTID) |
+ le32_encode_bits((skb->len - sizeof(*hdr)),
+ HTC_HDR_PAYLOADLEN);
+
+ if (ep->tx_credit_flow_enabled)
+ hdr->htc_info |= le32_encode_bits(ATH12K_HTC_FLAG_NEED_CREDIT_UPDATE,
+ HTC_HDR_FLAGS);
+
+ spin_lock_bh(&ep->htc->tx_lock);
+ hdr->ctrl_info = le32_encode_bits(ep->seq_no++, HTC_HDR_CONTROLBYTES1);
+ spin_unlock_bh(&ep->htc->tx_lock);
+}
+
+int ath12k_htc_send(struct ath12k_htc *htc,
+ enum ath12k_htc_ep_id eid,
+ struct sk_buff *skb)
+{
+ struct ath12k_htc_ep *ep = &htc->endpoint[eid];
+ struct ath12k_skb_cb *skb_cb = ATH12K_SKB_CB(skb);
+ struct device *dev = htc->ab->dev;
+ struct ath12k_base *ab = htc->ab;
+ int credits = 0;
+ int ret;
+
+ if (eid >= ATH12K_HTC_EP_COUNT) {
+ ath12k_warn(ab, "Invalid endpoint id: %d\n", eid);
+ return -ENOENT;
+ }
+
+ skb_push(skb, sizeof(struct ath12k_htc_hdr));
+
+ if (ep->tx_credit_flow_enabled) {
+ credits = DIV_ROUND_UP(skb->len, htc->target_credit_size);
+ spin_lock_bh(&htc->tx_lock);
+ if (ep->tx_credits < credits) {
+ ath12k_dbg(ab, ATH12K_DBG_HTC,
+ "htc insufficient credits ep %d required %d available %d\n",
+ eid, credits, ep->tx_credits);
+ spin_unlock_bh(&htc->tx_lock);
+ ret = -EAGAIN;
+ goto err_pull;
+ }
+ ep->tx_credits -= credits;
+ ath12k_dbg(ab, ATH12K_DBG_HTC,
+ "htc ep %d consumed %d credits (total %d)\n",
+ eid, credits, ep->tx_credits);
+ spin_unlock_bh(&htc->tx_lock);
+ }
+
+ ath12k_htc_prepare_tx_skb(ep, skb);
+
+ skb_cb->paddr = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
+ ret = dma_mapping_error(dev, skb_cb->paddr);
+ if (ret) {
+ ret = -EIO;
+ goto err_credits;
+ }
+
+ ret = ath12k_ce_send(htc->ab, skb, ep->ul_pipe_id, ep->eid);
+ if (ret)
+ goto err_unmap;
+
+ return 0;
+
+err_unmap:
+ dma_unmap_single(dev, skb_cb->paddr, skb->len, DMA_TO_DEVICE);
+err_credits:
+ if (ep->tx_credit_flow_enabled) {
+ spin_lock_bh(&htc->tx_lock);
+ ep->tx_credits += credits;
+ ath12k_dbg(ab, ATH12K_DBG_HTC,
+ "htc ep %d reverted %d credits back (total %d)\n",
+ eid, credits, ep->tx_credits);
+ spin_unlock_bh(&htc->tx_lock);
+
+ if (ep->ep_ops.ep_tx_credits)
+ ep->ep_ops.ep_tx_credits(htc->ab);
+ }
+err_pull:
+ skb_pull(skb, sizeof(struct ath12k_htc_hdr));
+ return ret;
+}
+
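Worked example of the credit accounting above, with an illustrative target credit size of 1792 bytes (the real value comes from the target's HTC READY message):

/* credits = DIV_ROUND_UP(skb->len, htc->target_credit_size)
 *   len = 1792 -> 1 credit
 *   len = 2000 -> 2 credits
 *   len =  100 -> 1 credit (partial credits always round up)
 */
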
+static void
+ath12k_htc_process_credit_report(struct ath12k_htc *htc,
+ const struct ath12k_htc_credit_report *report,
+ int len,
+ enum ath12k_htc_ep_id eid)
+{
+ struct ath12k_base *ab = htc->ab;
+ struct ath12k_htc_ep *ep;
+ int i, n_reports;
+
+ if (len % sizeof(*report))
+		ath12k_warn(ab, "Uneven credit report len %d\n", len);
+
+ n_reports = len / sizeof(*report);
+
+ spin_lock_bh(&htc->tx_lock);
+ for (i = 0; i < n_reports; i++, report++) {
+ if (report->eid >= ATH12K_HTC_EP_COUNT)
+ break;
+
+ ep = &htc->endpoint[report->eid];
+ ep->tx_credits += report->credits;
+
+ ath12k_dbg(ab, ATH12K_DBG_HTC, "htc ep %d got %d credits (total %d)\n",
+ report->eid, report->credits, ep->tx_credits);
+
+ if (ep->ep_ops.ep_tx_credits) {
+ spin_unlock_bh(&htc->tx_lock);
+ ep->ep_ops.ep_tx_credits(htc->ab);
+ spin_lock_bh(&htc->tx_lock);
+ }
+ }
+ spin_unlock_bh(&htc->tx_lock);
+}
+
+static int ath12k_htc_process_trailer(struct ath12k_htc *htc,
+ u8 *buffer,
+ int length,
+ enum ath12k_htc_ep_id src_eid)
+{
+ struct ath12k_base *ab = htc->ab;
+ int status = 0;
+ struct ath12k_htc_record *record;
+ size_t len;
+
+ while (length > 0) {
+ record = (struct ath12k_htc_record *)buffer;
+
+ if (length < sizeof(record->hdr)) {
+ status = -EINVAL;
+ break;
+ }
+
+ if (record->hdr.len > length) {
+ /* no room left in buffer for record */
+ ath12k_warn(ab, "Invalid record length: %d\n",
+ record->hdr.len);
+ status = -EINVAL;
+ break;
+ }
+
+ switch (record->hdr.id) {
+ case ATH12K_HTC_RECORD_CREDITS:
+ len = sizeof(struct ath12k_htc_credit_report);
+ if (record->hdr.len < len) {
+				ath12k_warn(ab, "Credit report too short\n");
+ status = -EINVAL;
+ break;
+ }
+ ath12k_htc_process_credit_report(htc,
+ record->credit_report,
+ record->hdr.len,
+ src_eid);
+ break;
+ default:
+ ath12k_warn(ab, "Unhandled record: id:%d length:%d\n",
+ record->hdr.id, record->hdr.len);
+ break;
+ }
+
+ if (status)
+ break;
+
+ /* multiple records may be present in a trailer */
+ buffer += sizeof(record->hdr) + record->hdr.len;
+ length -= sizeof(record->hdr) + record->hdr.len;
+ }
+
+ return status;
+}
+
+static void ath12k_htc_suspend_complete(struct ath12k_base *ab, bool ack)
+{
+ ath12k_dbg(ab, ATH12K_DBG_BOOT, "boot suspend complete %d\n", ack);
+
+ if (ack)
+ set_bit(ATH12K_FLAG_HTC_SUSPEND_COMPLETE, &ab->dev_flags);
+ else
+ clear_bit(ATH12K_FLAG_HTC_SUSPEND_COMPLETE, &ab->dev_flags);
+
+ complete(&ab->htc_suspend);
+}
+
+void ath12k_htc_rx_completion_handler(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ int status = 0;
+ struct ath12k_htc *htc = &ab->htc;
+ struct ath12k_htc_hdr *hdr;
+ struct ath12k_htc_ep *ep;
+ u16 payload_len;
+ u32 trailer_len = 0;
+ size_t min_len;
+ u8 eid;
+ bool trailer_present;
+
+ hdr = (struct ath12k_htc_hdr *)skb->data;
+ skb_pull(skb, sizeof(*hdr));
+
+ eid = le32_get_bits(hdr->htc_info, HTC_HDR_ENDPOINTID);
+
+ if (eid >= ATH12K_HTC_EP_COUNT) {
+ ath12k_warn(ab, "HTC Rx: invalid eid %d\n", eid);
+ goto out;
+ }
+
+ ep = &htc->endpoint[eid];
+
+ payload_len = le32_get_bits(hdr->htc_info, HTC_HDR_PAYLOADLEN);
+
+ if (payload_len + sizeof(*hdr) > ATH12K_HTC_MAX_LEN) {
+ ath12k_warn(ab, "HTC rx frame too long, len: %zu\n",
+ payload_len + sizeof(*hdr));
+ goto out;
+ }
+
+ if (skb->len < payload_len) {
+ ath12k_warn(ab, "HTC Rx: insufficient length, got %d, expected %d\n",
+ skb->len, payload_len);
+ goto out;
+ }
+
+ /* get flags to check for trailer */
+ trailer_present = le32_get_bits(hdr->htc_info, HTC_HDR_FLAGS) &
+ ATH12K_HTC_FLAG_TRAILER_PRESENT;
+
+ if (trailer_present) {
+ u8 *trailer;
+
+ trailer_len = le32_get_bits(hdr->ctrl_info,
+ HTC_HDR_CONTROLBYTES0);
+ min_len = sizeof(struct ath12k_htc_record_hdr);
+
+ if ((trailer_len < min_len) ||
+ (trailer_len > payload_len)) {
+ ath12k_warn(ab, "Invalid trailer length: %d\n",
+ trailer_len);
+ goto out;
+ }
+
+ trailer = (u8 *)hdr;
+ trailer += sizeof(*hdr);
+ trailer += payload_len;
+ trailer -= trailer_len;
+ status = ath12k_htc_process_trailer(htc, trailer,
+ trailer_len, eid);
+ if (status)
+ goto out;
+
+ skb_trim(skb, skb->len - trailer_len);
+ }
+
+ if (trailer_len >= payload_len)
+		/* zero-length packet with trailer data, just drop it */
+ goto out;
+
+ if (eid == ATH12K_HTC_EP_0) {
+ struct ath12k_htc_msg *msg = (struct ath12k_htc_msg *)skb->data;
+
+ switch (le32_get_bits(msg->msg_svc_id, HTC_MSG_MESSAGEID)) {
+ case ATH12K_HTC_MSG_READY_ID:
+ case ATH12K_HTC_MSG_CONNECT_SERVICE_RESP_ID:
+ /* handle HTC control message */
+ if (completion_done(&htc->ctl_resp)) {
+			/* this is a fatal error, the target should not be
+			 * sending unsolicited messages on ep 0
+			 */
+ ath12k_warn(ab, "HTC rx ctrl still processing\n");
+ complete(&htc->ctl_resp);
+ goto out;
+ }
+
+ htc->control_resp_len =
+ min_t(int, skb->len,
+ ATH12K_HTC_MAX_CTRL_MSG_LEN);
+
+ memcpy(htc->control_resp_buffer, skb->data,
+ htc->control_resp_len);
+
+ complete(&htc->ctl_resp);
+ break;
+ case ATH12K_HTC_MSG_SEND_SUSPEND_COMPLETE:
+ ath12k_htc_suspend_complete(ab, true);
+ break;
+ case ATH12K_HTC_MSG_NACK_SUSPEND:
+ ath12k_htc_suspend_complete(ab, false);
+ break;
+ case ATH12K_HTC_MSG_WAKEUP_FROM_SUSPEND_ID:
+ break;
+ default:
+ ath12k_warn(ab, "ignoring unsolicited htc ep0 event %u\n",
+ le32_get_bits(msg->msg_svc_id, HTC_MSG_MESSAGEID));
+ break;
+ }
+ goto out;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_HTC, "htc rx completion ep %d skb %pK\n",
+ eid, skb);
+ ep->ep_ops.ep_rx_complete(ab, skb);
+
+ /* poll tx completion for interrupt disabled CE's */
+ ath12k_ce_poll_send_completed(ab, ep->ul_pipe_id);
+
+ /* skb is now owned by the rx completion handler */
+ skb = NULL;
+out:
+ kfree_skb(skb);
+}
+
+static void ath12k_htc_control_rx_complete(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ /* This is unexpected. FW is not supposed to send regular rx on this
+ * endpoint.
+ */
+ ath12k_warn(ab, "unexpected htc rx\n");
+ kfree_skb(skb);
+}
+
+static const char *htc_service_name(enum ath12k_htc_svc_id id)
+{
+ switch (id) {
+ case ATH12K_HTC_SVC_ID_RESERVED:
+ return "Reserved";
+ case ATH12K_HTC_SVC_ID_RSVD_CTRL:
+ return "Control";
+ case ATH12K_HTC_SVC_ID_WMI_CONTROL:
+ return "WMI";
+ case ATH12K_HTC_SVC_ID_WMI_DATA_BE:
+ return "DATA BE";
+ case ATH12K_HTC_SVC_ID_WMI_DATA_BK:
+ return "DATA BK";
+ case ATH12K_HTC_SVC_ID_WMI_DATA_VI:
+ return "DATA VI";
+ case ATH12K_HTC_SVC_ID_WMI_DATA_VO:
+ return "DATA VO";
+ case ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC1:
+ return "WMI MAC1";
+ case ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC2:
+ return "WMI MAC2";
+ case ATH12K_HTC_SVC_ID_NMI_CONTROL:
+ return "NMI Control";
+ case ATH12K_HTC_SVC_ID_NMI_DATA:
+ return "NMI Data";
+ case ATH12K_HTC_SVC_ID_HTT_DATA_MSG:
+ return "HTT Data";
+ case ATH12K_HTC_SVC_ID_TEST_RAW_STREAMS:
+ return "RAW";
+ case ATH12K_HTC_SVC_ID_IPA_TX:
+ return "IPA TX";
+ case ATH12K_HTC_SVC_ID_PKT_LOG:
+ return "PKT LOG";
+ case ATH12K_HTC_SVC_ID_WMI_CONTROL_DIAG:
+ return "WMI DIAG";
+ }
+
+ return "Unknown";
+}
+
+static void ath12k_htc_reset_endpoint_states(struct ath12k_htc *htc)
+{
+ struct ath12k_htc_ep *ep;
+ int i;
+
+ for (i = ATH12K_HTC_EP_0; i < ATH12K_HTC_EP_COUNT; i++) {
+ ep = &htc->endpoint[i];
+ ep->service_id = ATH12K_HTC_SVC_ID_UNUSED;
+ ep->max_ep_message_len = 0;
+ ep->max_tx_queue_depth = 0;
+ ep->eid = i;
+ ep->htc = htc;
+ ep->tx_credit_flow_enabled = true;
+ }
+}
+
+static u8 ath12k_htc_get_credit_allocation(struct ath12k_htc *htc,
+ u16 service_id)
+{
+ struct ath12k_htc_svc_tx_credits *serv_entry;
+ u8 i, allocation = 0;
+
+ serv_entry = htc->service_alloc_table;
+
+ for (i = 0; i < ATH12K_HTC_MAX_SERVICE_ALLOC_ENTRIES; i++) {
+ if (serv_entry[i].service_id == service_id) {
+ allocation = serv_entry[i].credit_allocation;
+ break;
+ }
+ }
+
+ return allocation;
+}
+
+static int ath12k_htc_setup_target_buffer_assignments(struct ath12k_htc *htc)
+{
+ struct ath12k_htc_svc_tx_credits *serv_entry;
+ static const u32 svc_id[] = {
+ ATH12K_HTC_SVC_ID_WMI_CONTROL,
+ ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC1,
+ ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC2,
+ };
+ int i, credits;
+
+ credits = htc->total_transmit_credits;
+ serv_entry = htc->service_alloc_table;
+
+ if ((htc->wmi_ep_count == 0) ||
+ (htc->wmi_ep_count > ARRAY_SIZE(svc_id)))
+ return -EINVAL;
+
+	/* Divide the credits evenly among the WMI endpoints */
+ credits = credits / htc->wmi_ep_count;
+ for (i = 0; i < htc->wmi_ep_count; i++) {
+ serv_entry[i].service_id = svc_id[i];
+ serv_entry[i].credit_allocation = credits;
+ }
+
+ return 0;
+}
+
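For example (numbers illustrative): with 90 total transmit credits and wmi_ep_count == 3, each WMI service entry receives 90 / 3 == 30 credits; any remainder of the integer division is simply left unallocated.
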
+int ath12k_htc_wait_target(struct ath12k_htc *htc)
+{
+ int i, status = 0;
+ struct ath12k_base *ab = htc->ab;
+ unsigned long time_left;
+ struct ath12k_htc_ready *ready;
+ u16 message_id;
+ u16 credit_count;
+ u16 credit_size;
+
+ time_left = wait_for_completion_timeout(&htc->ctl_resp,
+ ATH12K_HTC_WAIT_TIMEOUT_HZ);
+ if (!time_left) {
+		ath12k_warn(ab, "failed to receive control response completion, polling...\n");
+
+ for (i = 0; i < ab->hw_params->ce_count; i++)
+ ath12k_ce_per_engine_service(htc->ab, i);
+
+ time_left =
+ wait_for_completion_timeout(&htc->ctl_resp,
+ ATH12K_HTC_WAIT_TIMEOUT_HZ);
+
+ if (!time_left)
+ status = -ETIMEDOUT;
+ }
+
+ if (status < 0) {
+ ath12k_warn(ab, "ctl_resp never came in (%d)\n", status);
+ return status;
+ }
+
+ if (htc->control_resp_len < sizeof(*ready)) {
+ ath12k_warn(ab, "Invalid HTC ready msg len:%d\n",
+ htc->control_resp_len);
+ return -ECOMM;
+ }
+
+ ready = (struct ath12k_htc_ready *)htc->control_resp_buffer;
+ message_id = le32_get_bits(ready->id_credit_count, HTC_MSG_MESSAGEID);
+ credit_count = le32_get_bits(ready->id_credit_count,
+ HTC_READY_MSG_CREDITCOUNT);
+ credit_size = le32_get_bits(ready->size_ep, HTC_READY_MSG_CREDITSIZE);
+
+ if (message_id != ATH12K_HTC_MSG_READY_ID) {
+ ath12k_warn(ab, "Invalid HTC ready msg: 0x%x\n", message_id);
+ return -ECOMM;
+ }
+
+ htc->total_transmit_credits = credit_count;
+ htc->target_credit_size = credit_size;
+
+ ath12k_dbg(ab, ATH12K_DBG_HTC,
+ "Target ready! transmit resources: %d size:%d\n",
+ htc->total_transmit_credits, htc->target_credit_size);
+
+ if ((htc->total_transmit_credits == 0) ||
+ (htc->target_credit_size == 0)) {
+ ath12k_warn(ab, "Invalid credit size received\n");
+ return -ECOMM;
+ }
+
+ ath12k_htc_setup_target_buffer_assignments(htc);
+
+ return 0;
+}
+
+int ath12k_htc_connect_service(struct ath12k_htc *htc,
+ struct ath12k_htc_svc_conn_req *conn_req,
+ struct ath12k_htc_svc_conn_resp *conn_resp)
+{
+ struct ath12k_base *ab = htc->ab;
+ struct ath12k_htc_conn_svc *req_msg;
+ struct ath12k_htc_conn_svc_resp resp_msg_dummy;
+ struct ath12k_htc_conn_svc_resp *resp_msg = &resp_msg_dummy;
+ enum ath12k_htc_ep_id assigned_eid = ATH12K_HTC_EP_COUNT;
+ struct ath12k_htc_ep *ep;
+ struct sk_buff *skb;
+ unsigned int max_msg_size = 0;
+ int length, status;
+ unsigned long time_left;
+ bool disable_credit_flow_ctrl = false;
+ u16 message_id, service_id, flags = 0;
+ u8 tx_alloc = 0;
+
+ /* special case for HTC pseudo control service */
+ if (conn_req->service_id == ATH12K_HTC_SVC_ID_RSVD_CTRL) {
+ disable_credit_flow_ctrl = true;
+ assigned_eid = ATH12K_HTC_EP_0;
+ max_msg_size = ATH12K_HTC_MAX_CTRL_MSG_LEN;
+ memset(&resp_msg_dummy, 0, sizeof(resp_msg_dummy));
+ goto setup;
+ }
+
+ tx_alloc = ath12k_htc_get_credit_allocation(htc,
+ conn_req->service_id);
+ if (!tx_alloc)
+ ath12k_dbg(ab, ATH12K_DBG_BOOT,
+ "boot htc service %s does not allocate target credits\n",
+ htc_service_name(conn_req->service_id));
+
+ skb = ath12k_htc_build_tx_ctrl_skb();
+ if (!skb) {
+ ath12k_warn(ab, "Failed to allocate HTC packet\n");
+ return -ENOMEM;
+ }
+
+ length = sizeof(*req_msg);
+ skb_put(skb, length);
+ memset(skb->data, 0, length);
+
+ req_msg = (struct ath12k_htc_conn_svc *)skb->data;
+ req_msg->msg_svc_id = le32_encode_bits(ATH12K_HTC_MSG_CONNECT_SERVICE_ID,
+ HTC_MSG_MESSAGEID);
+
+ flags |= u32_encode_bits(tx_alloc, ATH12K_HTC_CONN_FLAGS_RECV_ALLOC);
+
+ /* Only enable credit flow control for WMI ctrl service */
+ if (!(conn_req->service_id == ATH12K_HTC_SVC_ID_WMI_CONTROL ||
+ conn_req->service_id == ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC1 ||
+ conn_req->service_id == ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC2)) {
+ flags |= ATH12K_HTC_CONN_FLAGS_DISABLE_CREDIT_FLOW_CTRL;
+ disable_credit_flow_ctrl = true;
+ }
+
+ req_msg->flags_len = le32_encode_bits(flags, HTC_SVC_MSG_CONNECTIONFLAGS);
+ req_msg->msg_svc_id |= le32_encode_bits(conn_req->service_id,
+ HTC_SVC_MSG_SERVICE_ID);
+
+ reinit_completion(&htc->ctl_resp);
+
+ status = ath12k_htc_send(htc, ATH12K_HTC_EP_0, skb);
+ if (status) {
+ kfree_skb(skb);
+ return status;
+ }
+
+ /* wait for response */
+ time_left = wait_for_completion_timeout(&htc->ctl_resp,
+ ATH12K_HTC_CONN_SVC_TIMEOUT_HZ);
+ if (!time_left) {
+ ath12k_err(ab, "Service connect timeout\n");
+ return -ETIMEDOUT;
+ }
+
+	/* we controlled the buffer creation, so it is aligned */
+ resp_msg = (struct ath12k_htc_conn_svc_resp *)htc->control_resp_buffer;
+ message_id = le32_get_bits(resp_msg->msg_svc_id, HTC_MSG_MESSAGEID);
+ service_id = le32_get_bits(resp_msg->msg_svc_id,
+ HTC_SVC_RESP_MSG_SERVICEID);
+
+ if ((message_id != ATH12K_HTC_MSG_CONNECT_SERVICE_RESP_ID) ||
+ (htc->control_resp_len < sizeof(*resp_msg))) {
+ ath12k_err(ab, "Invalid resp message ID 0x%x", message_id);
+ return -EPROTO;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_HTC,
+ "HTC Service %s connect response: status: %u, assigned ep: %u\n",
+ htc_service_name(service_id),
+ le32_get_bits(resp_msg->flags_len, HTC_SVC_RESP_MSG_STATUS),
+ le32_get_bits(resp_msg->flags_len, HTC_SVC_RESP_MSG_ENDPOINTID));
+
+ conn_resp->connect_resp_code = le32_get_bits(resp_msg->flags_len,
+ HTC_SVC_RESP_MSG_STATUS);
+
+ /* check response status */
+ if (conn_resp->connect_resp_code != ATH12K_HTC_CONN_SVC_STATUS_SUCCESS) {
+		ath12k_err(ab, "HTC Service %s connect request failed: 0x%x\n",
+ htc_service_name(service_id),
+ conn_resp->connect_resp_code);
+ return -EPROTO;
+ }
+
+ assigned_eid = le32_get_bits(resp_msg->flags_len,
+ HTC_SVC_RESP_MSG_ENDPOINTID);
+
+ max_msg_size = le32_get_bits(resp_msg->flags_len,
+ HTC_SVC_RESP_MSG_MAXMSGSIZE);
+
+setup:
+
+ if (assigned_eid >= ATH12K_HTC_EP_COUNT)
+ return -EPROTO;
+
+ if (max_msg_size == 0)
+ return -EPROTO;
+
+ ep = &htc->endpoint[assigned_eid];
+ ep->eid = assigned_eid;
+
+ if (ep->service_id != ATH12K_HTC_SVC_ID_UNUSED)
+ return -EPROTO;
+
+ /* return assigned endpoint to caller */
+ conn_resp->eid = assigned_eid;
+ conn_resp->max_msg_len = le32_get_bits(resp_msg->flags_len,
+ HTC_SVC_RESP_MSG_MAXMSGSIZE);
+
+ /* setup the endpoint */
+ ep->service_id = conn_req->service_id;
+ ep->max_tx_queue_depth = conn_req->max_send_queue_depth;
+ ep->max_ep_message_len = le32_get_bits(resp_msg->flags_len,
+ HTC_SVC_RESP_MSG_MAXMSGSIZE);
+ ep->tx_credits = tx_alloc;
+
+ /* copy all the callbacks */
+ ep->ep_ops = conn_req->ep_ops;
+
+ status = ath12k_hif_map_service_to_pipe(htc->ab,
+ ep->service_id,
+ &ep->ul_pipe_id,
+ &ep->dl_pipe_id);
+ if (status)
+ return status;
+
+ ath12k_dbg(ab, ATH12K_DBG_BOOT,
+ "boot htc service '%s' ul pipe %d dl pipe %d eid %d ready\n",
+ htc_service_name(ep->service_id), ep->ul_pipe_id,
+ ep->dl_pipe_id, ep->eid);
+
+ if (disable_credit_flow_ctrl && ep->tx_credit_flow_enabled) {
+ ep->tx_credit_flow_enabled = false;
+ ath12k_dbg(ab, ATH12K_DBG_BOOT,
+ "boot htc service '%s' eid %d TX flow control disabled\n",
+ htc_service_name(ep->service_id), assigned_eid);
+ }
+
+ return status;
+}
+
+int ath12k_htc_start(struct ath12k_htc *htc)
+{
+ struct sk_buff *skb;
+ int status;
+ struct ath12k_base *ab = htc->ab;
+ struct ath12k_htc_setup_complete_extended *msg;
+
+ skb = ath12k_htc_build_tx_ctrl_skb();
+ if (!skb)
+ return -ENOMEM;
+
+ skb_put(skb, sizeof(*msg));
+ memset(skb->data, 0, skb->len);
+
+ msg = (struct ath12k_htc_setup_complete_extended *)skb->data;
+ msg->msg_id = le32_encode_bits(ATH12K_HTC_MSG_SETUP_COMPLETE_EX_ID,
+ HTC_MSG_MESSAGEID);
+
+ ath12k_dbg(ab, ATH12K_DBG_HTC, "HTC is using TX credit flow control\n");
+
+ status = ath12k_htc_send(htc, ATH12K_HTC_EP_0, skb);
+ if (status) {
+ kfree_skb(skb);
+ return status;
+ }
+
+ return 0;
+}
+
+int ath12k_htc_init(struct ath12k_base *ab)
+{
+ struct ath12k_htc *htc = &ab->htc;
+ struct ath12k_htc_svc_conn_req conn_req = { };
+ struct ath12k_htc_svc_conn_resp conn_resp = { };
+ int ret;
+
+ spin_lock_init(&htc->tx_lock);
+
+ ath12k_htc_reset_endpoint_states(htc);
+
+ htc->ab = ab;
+
+ switch (ab->wmi_ab.preferred_hw_mode) {
+ case WMI_HOST_HW_MODE_SINGLE:
+ htc->wmi_ep_count = 1;
+ break;
+ case WMI_HOST_HW_MODE_DBS:
+ case WMI_HOST_HW_MODE_DBS_OR_SBS:
+ htc->wmi_ep_count = 2;
+ break;
+ case WMI_HOST_HW_MODE_DBS_SBS:
+ htc->wmi_ep_count = 3;
+ break;
+ default:
+ htc->wmi_ep_count = ab->hw_params->max_radios;
+ break;
+ }
+
+ /* setup our pseudo HTC control endpoint connection */
+ conn_req.ep_ops.ep_tx_complete = ath12k_htc_control_tx_complete;
+ conn_req.ep_ops.ep_rx_complete = ath12k_htc_control_rx_complete;
+ conn_req.max_send_queue_depth = ATH12K_NUM_CONTROL_TX_BUFFERS;
+ conn_req.service_id = ATH12K_HTC_SVC_ID_RSVD_CTRL;
+
+ /* connect fake service */
+ ret = ath12k_htc_connect_service(htc, &conn_req, &conn_resp);
+ if (ret) {
+ ath12k_err(ab, "could not connect to htc service (%d)\n", ret);
+ return ret;
+ }
+
+ init_completion(&htc->ctl_resp);
+
+ return 0;
+}
diff --git a/drivers/net/wireless/ath/ath12k/htc.h b/drivers/net/wireless/ath/ath12k/htc.h
new file mode 100644
index 000000000000..7e3dccc7cc14
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/htc.h
@@ -0,0 +1,316 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_HTC_H
+#define ATH12K_HTC_H
+
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/bug.h>
+#include <linux/skbuff.h>
+#include <linux/timer.h>
+
+struct ath12k_base;
+
+#define HTC_HDR_ENDPOINTID GENMASK(7, 0)
+#define HTC_HDR_FLAGS GENMASK(15, 8)
+#define HTC_HDR_PAYLOADLEN GENMASK(31, 16)
+#define HTC_HDR_CONTROLBYTES0 GENMASK(7, 0)
+#define HTC_HDR_CONTROLBYTES1 GENMASK(15, 8)
+#define HTC_HDR_RESERVED GENMASK(31, 16)
+
+#define HTC_SVC_MSG_SERVICE_ID GENMASK(31, 16)
+#define HTC_SVC_MSG_CONNECTIONFLAGS GENMASK(15, 0)
+#define HTC_SVC_MSG_SERVICEMETALENGTH GENMASK(23, 16)
+#define HTC_READY_MSG_CREDITCOUNT GENMASK(31, 16)
+#define HTC_READY_MSG_CREDITSIZE GENMASK(15, 0)
+#define HTC_READY_MSG_MAXENDPOINTS GENMASK(23, 16)
+
+#define HTC_READY_EX_MSG_HTCVERSION GENMASK(7, 0)
+#define HTC_READY_EX_MSG_MAXMSGSPERHTCBUNDLE GENMASK(15, 8)
+
+#define HTC_SVC_RESP_MSG_SERVICEID GENMASK(31, 16)
+#define HTC_SVC_RESP_MSG_STATUS GENMASK(7, 0)
+#define HTC_SVC_RESP_MSG_ENDPOINTID GENMASK(15, 8)
+#define HTC_SVC_RESP_MSG_MAXMSGSIZE GENMASK(31, 16)
+#define HTC_SVC_RESP_MSG_SERVICEMETALENGTH GENMASK(7, 0)
+
+#define HTC_MSG_MESSAGEID GENMASK(15, 0)
+#define HTC_SETUP_COMPLETE_EX_MSG_SETUPFLAGS GENMASK(31, 0)
+#define HTC_SETUP_COMPLETE_EX_MSG_MAXMSGSPERBUNDLEDRECV GENMASK(7, 0)
+#define HTC_SETUP_COMPLETE_EX_MSG_RSVD0 GENMASK(15, 8)
+#define HTC_SETUP_COMPLETE_EX_MSG_RSVD1 GENMASK(23, 16)
+#define HTC_SETUP_COMPLETE_EX_MSG_RSVD2 GENMASK(31, 24)
+
+enum ath12k_htc_tx_flags {
+ ATH12K_HTC_FLAG_NEED_CREDIT_UPDATE = 0x01,
+ ATH12K_HTC_FLAG_SEND_BUNDLE = 0x02
+};
+
+enum ath12k_htc_rx_flags {
+ ATH12K_HTC_FLAG_TRAILER_PRESENT = 0x02,
+ ATH12K_HTC_FLAG_BUNDLE_MASK = 0xF0
+};
+
+struct ath12k_htc_hdr {
+ __le32 htc_info;
+ __le32 ctrl_info;
+} __packed __aligned(4);
+
+enum ath12k_htc_msg_id {
+ ATH12K_HTC_MSG_READY_ID = 1,
+ ATH12K_HTC_MSG_CONNECT_SERVICE_ID = 2,
+ ATH12K_HTC_MSG_CONNECT_SERVICE_RESP_ID = 3,
+ ATH12K_HTC_MSG_SETUP_COMPLETE_ID = 4,
+ ATH12K_HTC_MSG_SETUP_COMPLETE_EX_ID = 5,
+ ATH12K_HTC_MSG_SEND_SUSPEND_COMPLETE = 6,
+ ATH12K_HTC_MSG_NACK_SUSPEND = 7,
+ ATH12K_HTC_MSG_WAKEUP_FROM_SUSPEND_ID = 8,
+};
+
+enum ath12k_htc_version {
+ ATH12K_HTC_VERSION_2P0 = 0x00, /* 2.0 */
+ ATH12K_HTC_VERSION_2P1 = 0x01, /* 2.1 */
+};
+
+enum ath12k_htc_conn_flag_threshold_level {
+ ATH12K_HTC_CONN_FLAGS_THRESHOLD_LEVEL_ONE_FOURTH,
+ ATH12K_HTC_CONN_FLAGS_THRESHOLD_LEVEL_ONE_HALF,
+ ATH12K_HTC_CONN_FLAGS_THRESHOLD_LEVEL_THREE_FOURTHS,
+ ATH12K_HTC_CONN_FLAGS_THRESHOLD_LEVEL_UNITY,
+};
+
+#define ATH12K_HTC_CONN_FLAGS_THRESHOLD_LEVEL_MASK GENMASK(1, 0)
+#define ATH12K_HTC_CONN_FLAGS_REDUCE_CREDIT_DRIBBLE BIT(2)
+#define ATH12K_HTC_CONN_FLAGS_DISABLE_CREDIT_FLOW_CTRL BIT(3)
+#define ATH12K_HTC_CONN_FLAGS_RECV_ALLOC GENMASK(15, 8)
+
+enum ath12k_htc_conn_svc_status {
+ ATH12K_HTC_CONN_SVC_STATUS_SUCCESS = 0,
+ ATH12K_HTC_CONN_SVC_STATUS_NOT_FOUND = 1,
+ ATH12K_HTC_CONN_SVC_STATUS_FAILED = 2,
+ ATH12K_HTC_CONN_SVC_STATUS_NO_RESOURCES = 3,
+ ATH12K_HTC_CONN_SVC_STATUS_NO_MORE_EP = 4
+};
+
+struct ath12k_htc_ready {
+ __le32 id_credit_count;
+ __le32 size_ep;
+} __packed;
+
+struct ath12k_htc_ready_extended {
+ struct ath12k_htc_ready base;
+ __le32 ver_bundle;
+} __packed;
+
+struct ath12k_htc_conn_svc {
+ __le32 msg_svc_id;
+ __le32 flags_len;
+} __packed;
+
+struct ath12k_htc_conn_svc_resp {
+ __le32 msg_svc_id;
+ __le32 flags_len;
+ __le32 svc_meta_pad;
+} __packed;
+
+struct ath12k_htc_setup_complete_extended {
+ __le32 msg_id;
+ __le32 flags;
+ __le32 max_msgs_per_bundled_recv;
+} __packed;
+
+struct ath12k_htc_msg {
+ __le32 msg_svc_id;
+ __le32 flags_len;
+} __packed __aligned(4);
+
+enum ath12k_htc_record_id {
+ ATH12K_HTC_RECORD_NULL = 0,
+ ATH12K_HTC_RECORD_CREDITS = 1
+};
+
+struct ath12k_htc_record_hdr {
+ u8 id; /* @enum ath12k_htc_record_id */
+ u8 len;
+ u8 pad0;
+ u8 pad1;
+} __packed;
+
+struct ath12k_htc_credit_report {
+ u8 eid; /* @enum ath12k_htc_ep_id */
+ u8 credits;
+ u8 pad0;
+ u8 pad1;
+} __packed;
+
+struct ath12k_htc_record {
+ struct ath12k_htc_record_hdr hdr;
+ struct ath12k_htc_credit_report credit_report[];
+} __packed __aligned(4);
+
+/* HTC frame structure layout
+ *
+ * Note: the trailer offset is dynamic, depending on the payload
+ * length; this is only a sketch of the layout.
+ *
+ *=======================================================
+ *
+ * HTC HEADER
+ *
+ *=======================================================
+ * |
+ * HTC message | payload
+ * (variable length) | (variable length)
+ *=======================================================
+ *
+ * HTC Record
+ *
+ *=======================================================
+ */
+
+enum ath12k_htc_svc_gid {
+ ATH12K_HTC_SVC_GRP_RSVD = 0,
+ ATH12K_HTC_SVC_GRP_WMI = 1,
+ ATH12K_HTC_SVC_GRP_NMI = 2,
+ ATH12K_HTC_SVC_GRP_HTT = 3,
+ ATH12K_HTC_SVC_GRP_CFG = 4,
+ ATH12K_HTC_SVC_GRP_IPA = 5,
+ ATH12K_HTC_SVC_GRP_PKTLOG = 6,
+
+ ATH12K_HTC_SVC_GRP_TEST = 254,
+ ATH12K_HTC_SVC_GRP_LAST = 255,
+};
+
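+/* An HTC service id encodes the service group in bits 15:8 and the
+ * index within that group in bits 7:0.
+ */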
+#define SVC(group, idx) \
+ (int)(((int)(group) << 8) | (int)(idx))
+
+enum ath12k_htc_svc_id {
+ /* NOTE: service ID of 0x0000 is reserved and should never be used */
+ ATH12K_HTC_SVC_ID_RESERVED = 0x0000,
+ ATH12K_HTC_SVC_ID_UNUSED = ATH12K_HTC_SVC_ID_RESERVED,
+
+ ATH12K_HTC_SVC_ID_RSVD_CTRL = SVC(ATH12K_HTC_SVC_GRP_RSVD, 1),
+ ATH12K_HTC_SVC_ID_WMI_CONTROL = SVC(ATH12K_HTC_SVC_GRP_WMI, 0),
+ ATH12K_HTC_SVC_ID_WMI_DATA_BE = SVC(ATH12K_HTC_SVC_GRP_WMI, 1),
+ ATH12K_HTC_SVC_ID_WMI_DATA_BK = SVC(ATH12K_HTC_SVC_GRP_WMI, 2),
+ ATH12K_HTC_SVC_ID_WMI_DATA_VI = SVC(ATH12K_HTC_SVC_GRP_WMI, 3),
+ ATH12K_HTC_SVC_ID_WMI_DATA_VO = SVC(ATH12K_HTC_SVC_GRP_WMI, 4),
+ ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC1 = SVC(ATH12K_HTC_SVC_GRP_WMI, 5),
+ ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC2 = SVC(ATH12K_HTC_SVC_GRP_WMI, 6),
+ ATH12K_HTC_SVC_ID_WMI_CONTROL_DIAG = SVC(ATH12K_HTC_SVC_GRP_WMI, 7),
+
+ ATH12K_HTC_SVC_ID_NMI_CONTROL = SVC(ATH12K_HTC_SVC_GRP_NMI, 0),
+ ATH12K_HTC_SVC_ID_NMI_DATA = SVC(ATH12K_HTC_SVC_GRP_NMI, 1),
+
+ ATH12K_HTC_SVC_ID_HTT_DATA_MSG = SVC(ATH12K_HTC_SVC_GRP_HTT, 0),
+
+ /* raw stream service (e.g. flash, tcmd, calibration apps) */
+ ATH12K_HTC_SVC_ID_TEST_RAW_STREAMS = SVC(ATH12K_HTC_SVC_GRP_TEST, 0),
+ ATH12K_HTC_SVC_ID_IPA_TX = SVC(ATH12K_HTC_SVC_GRP_IPA, 0),
+ ATH12K_HTC_SVC_ID_PKT_LOG = SVC(ATH12K_HTC_SVC_GRP_PKTLOG, 0),
+};
+
+#undef SVC
+
+enum ath12k_htc_ep_id {
+ ATH12K_HTC_EP_UNUSED = -1,
+ ATH12K_HTC_EP_0 = 0,
+ ATH12K_HTC_EP_1 = 1,
+ ATH12K_HTC_EP_2,
+ ATH12K_HTC_EP_3,
+ ATH12K_HTC_EP_4,
+ ATH12K_HTC_EP_5,
+ ATH12K_HTC_EP_6,
+ ATH12K_HTC_EP_7,
+ ATH12K_HTC_EP_8,
+ ATH12K_HTC_EP_COUNT,
+};
+
+struct ath12k_htc_ep_ops {
+ void (*ep_tx_complete)(struct ath12k_base *ab, struct sk_buff *skb);
+ void (*ep_rx_complete)(struct ath12k_base *ab, struct sk_buff *skb);
+ void (*ep_tx_credits)(struct ath12k_base *ab);
+};
+
+/* service connection information */
+struct ath12k_htc_svc_conn_req {
+ u16 service_id;
+ struct ath12k_htc_ep_ops ep_ops;
+ int max_send_queue_depth;
+};
+
+/* service connection response information */
+struct ath12k_htc_svc_conn_resp {
+ u8 buffer_len;
+ u8 actual_len;
+ enum ath12k_htc_ep_id eid;
+ unsigned int max_msg_len;
+ u8 connect_resp_code;
+};
+
+#define ATH12K_NUM_CONTROL_TX_BUFFERS 2
+#define ATH12K_HTC_MAX_LEN 4096
+#define ATH12K_HTC_MAX_CTRL_MSG_LEN 256
+#define ATH12K_HTC_WAIT_TIMEOUT_HZ (1 * HZ)
+#define ATH12K_HTC_CONTROL_BUFFER_SIZE (ATH12K_HTC_MAX_CTRL_MSG_LEN + \
+ sizeof(struct ath12k_htc_hdr))
+#define ATH12K_HTC_CONN_SVC_TIMEOUT_HZ (1 * HZ)
+#define ATH12K_HTC_MAX_SERVICE_ALLOC_ENTRIES 8
+
+struct ath12k_htc_ep {
+ struct ath12k_htc *htc;
+ enum ath12k_htc_ep_id eid;
+ enum ath12k_htc_svc_id service_id;
+ struct ath12k_htc_ep_ops ep_ops;
+
+ int max_tx_queue_depth;
+ int max_ep_message_len;
+ u8 ul_pipe_id;
+ u8 dl_pipe_id;
+
+ u8 seq_no; /* for debugging */
+ int tx_credits;
+ bool tx_credit_flow_enabled;
+};
+
+struct ath12k_htc_svc_tx_credits {
+ u16 service_id;
+ u8 credit_allocation;
+};
+
+struct ath12k_htc {
+ struct ath12k_base *ab;
+ struct ath12k_htc_ep endpoint[ATH12K_HTC_EP_COUNT];
+
+ /* protects endpoints */
+ spinlock_t tx_lock;
+
+ u8 control_resp_buffer[ATH12K_HTC_MAX_CTRL_MSG_LEN];
+ int control_resp_len;
+
+ struct completion ctl_resp;
+
+ int total_transmit_credits;
+ struct ath12k_htc_svc_tx_credits
+ service_alloc_table[ATH12K_HTC_MAX_SERVICE_ALLOC_ENTRIES];
+ int target_credit_size;
+ u8 wmi_ep_count;
+};
+
+int ath12k_htc_init(struct ath12k_base *ab);
+int ath12k_htc_wait_target(struct ath12k_htc *htc);
+int ath12k_htc_start(struct ath12k_htc *htc);
+int ath12k_htc_connect_service(struct ath12k_htc *htc,
+ struct ath12k_htc_svc_conn_req *conn_req,
+ struct ath12k_htc_svc_conn_resp *conn_resp);
+int ath12k_htc_send(struct ath12k_htc *htc, enum ath12k_htc_ep_id eid,
+ struct sk_buff *packet);
+struct sk_buff *ath12k_htc_alloc_skb(struct ath12k_base *ab, int size);
+void ath12k_htc_rx_completion_handler(struct ath12k_base *ab,
+ struct sk_buff *skb);
+
+#endif
diff --git a/drivers/net/wireless/ath/ath12k/hw.c b/drivers/net/wireless/ath/ath12k/hw.c
new file mode 100644
index 000000000000..91d576fd4b0f
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/hw.c
@@ -0,0 +1,1041 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/bitops.h>
+#include <linux/bitfield.h>
+
+#include "debug.h"
+#include "core.h"
+#include "ce.h"
+#include "hw.h"
+#include "mhi.h"
+#include "dp_rx.h"
+
+static u8 ath12k_hw_qcn9274_mac_from_pdev_id(int pdev_idx)
+{
+ return pdev_idx;
+}
+
+static int ath12k_hw_mac_id_to_pdev_id_qcn9274(const struct ath12k_hw_params *hw,
+ int mac_id)
+{
+ return mac_id;
+}
+
+static int ath12k_hw_mac_id_to_srng_id_qcn9274(const struct ath12k_hw_params *hw,
+ int mac_id)
+{
+ return 0;
+}
+
+static u8 ath12k_hw_get_ring_selector_qcn9274(struct sk_buff *skb)
+{
+ return smp_processor_id();
+}
+
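+/* On QCN9274, WBM2SW rings 0, 1, 2 and 4 carry TX completions. */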
+static bool ath12k_dp_srng_is_comp_ring_qcn9274(int ring_num)
+{
+ if (ring_num < 3 || ring_num == 4)
+ return true;
+
+ return false;
+}
+
+static int ath12k_hw_mac_id_to_pdev_id_wcn7850(const struct ath12k_hw_params *hw,
+ int mac_id)
+{
+ return 0;
+}
+
+static int ath12k_hw_mac_id_to_srng_id_wcn7850(const struct ath12k_hw_params *hw,
+ int mac_id)
+{
+ return mac_id;
+}
+
+static u8 ath12k_hw_get_ring_selector_wcn7850(struct sk_buff *skb)
+{
+ return skb_get_queue_mapping(skb);
+}
+
+static bool ath12k_dp_srng_is_comp_ring_wcn7850(int ring_num)
+{
+ if (ring_num == 0 || ring_num == 2 || ring_num == 4)
+ return true;
+
+ return false;
+}
+
+static const struct ath12k_hw_ops qcn9274_ops = {
+ .get_hw_mac_from_pdev_id = ath12k_hw_qcn9274_mac_from_pdev_id,
+ .mac_id_to_pdev_id = ath12k_hw_mac_id_to_pdev_id_qcn9274,
+ .mac_id_to_srng_id = ath12k_hw_mac_id_to_srng_id_qcn9274,
+ .rxdma_ring_sel_config = ath12k_dp_rxdma_ring_sel_config_qcn9274,
+ .get_ring_selector = ath12k_hw_get_ring_selector_qcn9274,
+ .dp_srng_is_tx_comp_ring = ath12k_dp_srng_is_comp_ring_qcn9274,
+};
+
+static const struct ath12k_hw_ops wcn7850_ops = {
+ .get_hw_mac_from_pdev_id = ath12k_hw_qcn9274_mac_from_pdev_id,
+ .mac_id_to_pdev_id = ath12k_hw_mac_id_to_pdev_id_wcn7850,
+ .mac_id_to_srng_id = ath12k_hw_mac_id_to_srng_id_wcn7850,
+ .rxdma_ring_sel_config = ath12k_dp_rxdma_ring_sel_config_wcn7850,
+ .get_ring_selector = ath12k_hw_get_ring_selector_wcn7850,
+ .dp_srng_is_tx_comp_ring = ath12k_dp_srng_is_comp_ring_wcn7850,
+};
+
+#define ATH12K_TX_RING_MASK_0 0x1
+#define ATH12K_TX_RING_MASK_1 0x2
+#define ATH12K_TX_RING_MASK_2 0x4
+#define ATH12K_TX_RING_MASK_3 0x8
+#define ATH12K_TX_RING_MASK_4 0x10
+
+#define ATH12K_RX_RING_MASK_0 0x1
+#define ATH12K_RX_RING_MASK_1 0x2
+#define ATH12K_RX_RING_MASK_2 0x4
+#define ATH12K_RX_RING_MASK_3 0x8
+
+#define ATH12K_RX_ERR_RING_MASK_0 0x1
+
+#define ATH12K_RX_WBM_REL_RING_MASK_0 0x1
+
+#define ATH12K_REO_STATUS_RING_MASK_0 0x1
+
+#define ATH12K_HOST2RXDMA_RING_MASK_0 0x1
+
+#define ATH12K_RX_MON_RING_MASK_0 0x1
+#define ATH12K_RX_MON_RING_MASK_1 0x2
+#define ATH12K_RX_MON_RING_MASK_2 0x4
+
+#define ATH12K_TX_MON_RING_MASK_0 0x1
+#define ATH12K_TX_MON_RING_MASK_1 0x2
+
+/* Target firmware's Copy Engine configuration. */
+static const struct ce_pipe_config ath12k_target_ce_config_wlan_qcn9274[] = {
+ /* CE0: host->target HTC control and raw streams */
+ {
+ .pipenum = __cpu_to_le32(0),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE1: target->host HTT + HTC control */
+ {
+ .pipenum = __cpu_to_le32(1),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE2: target->host WMI */
+ {
+ .pipenum = __cpu_to_le32(2),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE3: host->target WMI (mac0) */
+ {
+ .pipenum = __cpu_to_le32(3),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE4: host->target HTT */
+ {
+ .pipenum = __cpu_to_le32(4),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT),
+ .nentries = __cpu_to_le32(256),
+ .nbytes_max = __cpu_to_le32(256),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS | CE_ATTR_DIS_INTR),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE5: target->host Pktlog */
+ {
+ .pipenum = __cpu_to_le32(5),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE6: Reserved for target autonomous hif_memcpy */
+ {
+ .pipenum = __cpu_to_le32(6),
+ .pipedir = __cpu_to_le32(PIPEDIR_INOUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(16384),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE7: host->target WMI (mac1) */
+ {
+ .pipenum = __cpu_to_le32(7),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE8: Reserved for target autonomous hif_memcpy */
+ {
+ .pipenum = __cpu_to_le32(8),
+ .pipedir = __cpu_to_le32(PIPEDIR_INOUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(16384),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE9, 10 and 11: Reserved for MHI */
+
+ /* CE12: Target CV prefetch */
+ {
+ .pipenum = __cpu_to_le32(12),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE13: Target CV prefetch */
+ {
+ .pipenum = __cpu_to_le32(13),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE14: WMI logging/CFR/Spectral/Radar */
+ {
+ .pipenum = __cpu_to_le32(14),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE15: Reserved */
+};
+
+/* Target firmware's Copy Engine configuration. */
+static const struct ce_pipe_config ath12k_target_ce_config_wlan_wcn7850[] = {
+ /* CE0: host->target HTC control and raw streams */
+ {
+ .pipenum = __cpu_to_le32(0),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE1: target->host HTT + HTC control */
+ {
+ .pipenum = __cpu_to_le32(1),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE2: target->host WMI */
+ {
+ .pipenum = __cpu_to_le32(2),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE3: host->target WMI */
+ {
+ .pipenum = __cpu_to_le32(3),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE4: host->target HTT */
+ {
+ .pipenum = __cpu_to_le32(4),
+ .pipedir = __cpu_to_le32(PIPEDIR_OUT),
+ .nentries = __cpu_to_le32(256),
+ .nbytes_max = __cpu_to_le32(256),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS | CE_ATTR_DIS_INTR),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE5: target->host Pktlog */
+ {
+ .pipenum = __cpu_to_le32(5),
+ .pipedir = __cpu_to_le32(PIPEDIR_IN),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(2048),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE6: Reserved for target autonomous hif_memcpy */
+ {
+ .pipenum = __cpu_to_le32(6),
+ .pipedir = __cpu_to_le32(PIPEDIR_INOUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(16384),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE7 used only by Host */
+ {
+ .pipenum = __cpu_to_le32(7),
+ .pipedir = __cpu_to_le32(PIPEDIR_INOUT_H2H),
+ .nentries = __cpu_to_le32(0),
+ .nbytes_max = __cpu_to_le32(0),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS | CE_ATTR_DIS_INTR),
+ .reserved = __cpu_to_le32(0),
+ },
+
+ /* CE8 target->host used only by IPA */
+ {
+ .pipenum = __cpu_to_le32(8),
+ .pipedir = __cpu_to_le32(PIPEDIR_INOUT),
+ .nentries = __cpu_to_le32(32),
+ .nbytes_max = __cpu_to_le32(16384),
+ .flags = __cpu_to_le32(CE_ATTR_FLAGS),
+ .reserved = __cpu_to_le32(0),
+ },
+ /* CE 9, 10, 11 are used by MHI driver */
+};
+
+/* Map from service/endpoint to Copy Engine.
+ * This table is derived from the CE_PCI TABLE, above.
+ * It is passed to the Target at startup for use by firmware.
+ */
+static const struct service_to_pipe ath12k_target_service_to_ce_map_wlan_qcn9274[] = {
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_VO),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(3),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_VO),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(2),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_BK),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(3),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_BK),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(2),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_BE),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(3),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_BE),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(2),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_VI),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(3),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_VI),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(2),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_CONTROL),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(3),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_CONTROL),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(2),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_RSVD_CTRL),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(0),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_RSVD_CTRL),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(1),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_TEST_RAW_STREAMS),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(0),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_TEST_RAW_STREAMS),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(1),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_HTT_DATA_MSG),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(4),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_HTT_DATA_MSG),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(1),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC1),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(7),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC1),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(2),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_PKT_LOG),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(5),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_CONTROL_DIAG),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(14),
+ },
+
+ /* (Additions here) */
+
+ { /* must be last */
+ __cpu_to_le32(0),
+ __cpu_to_le32(0),
+ __cpu_to_le32(0),
+ },
+};
+
+static const struct service_to_pipe ath12k_target_service_to_ce_map_wlan_wcn7850[] = {
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_VO),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(3),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_VO),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(2),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_BK),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(3),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_BK),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(2),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_BE),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(3),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_BE),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(2),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_VI),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(3),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_DATA_VI),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(2),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_CONTROL),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(3),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_WMI_CONTROL),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(2),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_RSVD_CTRL),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(0),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_RSVD_CTRL),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(2),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_HTT_DATA_MSG),
+ __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
+ __cpu_to_le32(4),
+ },
+ {
+ __cpu_to_le32(ATH12K_HTC_SVC_ID_HTT_DATA_MSG),
+ __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
+ __cpu_to_le32(1),
+ },
+
+ /* (Additions here) */
+
+ { /* must be last */
+ __cpu_to_le32(0),
+ __cpu_to_le32(0),
+ __cpu_to_le32(0),
+ },
+};
+
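+/* Ring masks are indexed by external interrupt group: a non-zero entry
+ * steers that ring's interrupts to the corresponding group.
+ */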
+static const struct ath12k_hw_ring_mask ath12k_hw_ring_mask_qcn9274 = {
+ .tx = {
+ ATH12K_TX_RING_MASK_0,
+ ATH12K_TX_RING_MASK_1,
+ ATH12K_TX_RING_MASK_2,
+ ATH12K_TX_RING_MASK_3,
+ },
+ .rx_mon_dest = {
+ 0, 0, 0,
+ ATH12K_RX_MON_RING_MASK_0,
+ ATH12K_RX_MON_RING_MASK_1,
+ ATH12K_RX_MON_RING_MASK_2,
+ },
+ .rx = {
+ 0, 0, 0, 0,
+ ATH12K_RX_RING_MASK_0,
+ ATH12K_RX_RING_MASK_1,
+ ATH12K_RX_RING_MASK_2,
+ ATH12K_RX_RING_MASK_3,
+ },
+ .rx_err = {
+ 0, 0, 0,
+ ATH12K_RX_ERR_RING_MASK_0,
+ },
+ .rx_wbm_rel = {
+ 0, 0, 0,
+ ATH12K_RX_WBM_REL_RING_MASK_0,
+ },
+ .reo_status = {
+ 0, 0, 0,
+ ATH12K_REO_STATUS_RING_MASK_0,
+ },
+ .host2rxdma = {
+ 0, 0, 0,
+ ATH12K_HOST2RXDMA_RING_MASK_0,
+ },
+ .tx_mon_dest = {
+ ATH12K_TX_MON_RING_MASK_0,
+ ATH12K_TX_MON_RING_MASK_1,
+ },
+};
+
+static const struct ath12k_hw_ring_mask ath12k_hw_ring_mask_wcn7850 = {
+ .tx = {
+ ATH12K_TX_RING_MASK_0,
+ ATH12K_TX_RING_MASK_2,
+ ATH12K_TX_RING_MASK_4,
+ },
+ .rx_mon_dest = {
+ },
+ .rx = {
+ 0, 0, 0,
+ ATH12K_RX_RING_MASK_0,
+ ATH12K_RX_RING_MASK_1,
+ ATH12K_RX_RING_MASK_2,
+ ATH12K_RX_RING_MASK_3,
+ },
+ .rx_err = {
+ ATH12K_RX_ERR_RING_MASK_0,
+ },
+ .rx_wbm_rel = {
+ ATH12K_RX_WBM_REL_RING_MASK_0,
+ },
+ .reo_status = {
+ ATH12K_REO_STATUS_RING_MASK_0,
+ },
+ .host2rxdma = {
+ },
+ .tx_mon_dest = {
+ },
+};
+
+static const struct ath12k_hw_regs qcn9274_v1_regs = {
+ /* SW2TCL(x) R0 ring configuration address */
+ .hal_tcl1_ring_id = 0x00000908,
+ .hal_tcl1_ring_misc = 0x00000910,
+ .hal_tcl1_ring_tp_addr_lsb = 0x0000091c,
+ .hal_tcl1_ring_tp_addr_msb = 0x00000920,
+ .hal_tcl1_ring_consumer_int_setup_ix0 = 0x00000930,
+ .hal_tcl1_ring_consumer_int_setup_ix1 = 0x00000934,
+ .hal_tcl1_ring_msi1_base_lsb = 0x00000948,
+ .hal_tcl1_ring_msi1_base_msb = 0x0000094c,
+ .hal_tcl1_ring_msi1_data = 0x00000950,
+ .hal_tcl_ring_base_lsb = 0x00000b58,
+
+ /* TCL STATUS ring address */
+ .hal_tcl_status_ring_base_lsb = 0x00000d38,
+
+ .hal_wbm_idle_ring_base_lsb = 0x00000d0c,
+ .hal_wbm_idle_ring_misc_addr = 0x00000d1c,
+ .hal_wbm_r0_idle_list_cntl_addr = 0x00000210,
+ .hal_wbm_r0_idle_list_size_addr = 0x00000214,
+ .hal_wbm_scattered_ring_base_lsb = 0x00000220,
+ .hal_wbm_scattered_ring_base_msb = 0x00000224,
+ .hal_wbm_scattered_desc_head_info_ix0 = 0x00000230,
+ .hal_wbm_scattered_desc_head_info_ix1 = 0x00000234,
+ .hal_wbm_scattered_desc_tail_info_ix0 = 0x00000240,
+ .hal_wbm_scattered_desc_tail_info_ix1 = 0x00000244,
+ .hal_wbm_scattered_desc_ptr_hp_addr = 0x0000024c,
+
+ .hal_wbm_sw_release_ring_base_lsb = 0x0000034c,
+ .hal_wbm_sw1_release_ring_base_lsb = 0x000003c4,
+ .hal_wbm0_release_ring_base_lsb = 0x00000dd8,
+ .hal_wbm1_release_ring_base_lsb = 0x00000e50,
+
+ /* PCIe base address */
+ .pcie_qserdes_sysclk_en_sel = 0x01e0c0a8,
+ .pcie_pcs_osc_dtct_config_base = 0x01e0d45c,
+
+ /* PPE release ring address */
+ .hal_ppe_rel_ring_base = 0x0000043c,
+
+ /* REO DEST ring address */
+ .hal_reo2_ring_base = 0x0000055c,
+ .hal_reo1_misc_ctrl_addr = 0x00000b7c,
+ .hal_reo1_sw_cookie_cfg0 = 0x00000050,
+ .hal_reo1_sw_cookie_cfg1 = 0x00000054,
+ .hal_reo1_qdesc_lut_base0 = 0x00000058,
+ .hal_reo1_qdesc_lut_base1 = 0x0000005c,
+ .hal_reo1_ring_base_lsb = 0x000004e4,
+ .hal_reo1_ring_base_msb = 0x000004e8,
+ .hal_reo1_ring_id = 0x000004ec,
+ .hal_reo1_ring_misc = 0x000004f4,
+ .hal_reo1_ring_hp_addr_lsb = 0x000004f8,
+ .hal_reo1_ring_hp_addr_msb = 0x000004fc,
+ .hal_reo1_ring_producer_int_setup = 0x00000508,
+ .hal_reo1_ring_msi1_base_lsb = 0x0000052c,
+ .hal_reo1_ring_msi1_base_msb = 0x00000530,
+ .hal_reo1_ring_msi1_data = 0x00000534,
+ .hal_reo1_aging_thres_ix0 = 0x00000b08,
+ .hal_reo1_aging_thres_ix1 = 0x00000b0c,
+ .hal_reo1_aging_thres_ix2 = 0x00000b10,
+ .hal_reo1_aging_thres_ix3 = 0x00000b14,
+
+ /* REO Exception ring address */
+ .hal_reo2_sw0_ring_base = 0x000008a4,
+
+ /* REO Reinject ring address */
+ .hal_sw2reo_ring_base = 0x00000304,
+ .hal_sw2reo1_ring_base = 0x0000037c,
+
+ /* REO cmd ring address */
+ .hal_reo_cmd_ring_base = 0x0000028c,
+
+ /* REO status ring address */
+ .hal_reo_status_ring_base = 0x00000a84,
+};
+
+static const struct ath12k_hw_regs qcn9274_v2_regs = {
+ /* SW2TCL(x) R0 ring configuration address */
+ .hal_tcl1_ring_id = 0x00000908,
+ .hal_tcl1_ring_misc = 0x00000910,
+ .hal_tcl1_ring_tp_addr_lsb = 0x0000091c,
+ .hal_tcl1_ring_tp_addr_msb = 0x00000920,
+ .hal_tcl1_ring_consumer_int_setup_ix0 = 0x00000930,
+ .hal_tcl1_ring_consumer_int_setup_ix1 = 0x00000934,
+ .hal_tcl1_ring_msi1_base_lsb = 0x00000948,
+ .hal_tcl1_ring_msi1_base_msb = 0x0000094c,
+ .hal_tcl1_ring_msi1_data = 0x00000950,
+ .hal_tcl_ring_base_lsb = 0x00000b58,
+
+ /* TCL STATUS ring address */
+ .hal_tcl_status_ring_base_lsb = 0x00000d38,
+
+ /* WBM idle link ring address */
+ .hal_wbm_idle_ring_base_lsb = 0x00000d3c,
+ .hal_wbm_idle_ring_misc_addr = 0x00000d4c,
+ .hal_wbm_r0_idle_list_cntl_addr = 0x00000240,
+ .hal_wbm_r0_idle_list_size_addr = 0x00000244,
+ .hal_wbm_scattered_ring_base_lsb = 0x00000250,
+ .hal_wbm_scattered_ring_base_msb = 0x00000254,
+ .hal_wbm_scattered_desc_head_info_ix0 = 0x00000260,
+ .hal_wbm_scattered_desc_head_info_ix1 = 0x00000264,
+ .hal_wbm_scattered_desc_tail_info_ix0 = 0x00000270,
+ .hal_wbm_scattered_desc_tail_info_ix1 = 0x00000274,
+ .hal_wbm_scattered_desc_ptr_hp_addr = 0x0000027c,
+
+ /* SW2WBM release ring address */
+ .hal_wbm_sw_release_ring_base_lsb = 0x0000037c,
+ .hal_wbm_sw1_release_ring_base_lsb = 0x000003f4,
+
+ /* WBM2SW release ring address */
+ .hal_wbm0_release_ring_base_lsb = 0x00000e08,
+ .hal_wbm1_release_ring_base_lsb = 0x00000e80,
+
+ /* PCIe base address */
+ .pcie_qserdes_sysclk_en_sel = 0x01e0c0a8,
+ .pcie_pcs_osc_dtct_config_base = 0x01e0d45c,
+
+ /* PPE release ring address */
+ .hal_ppe_rel_ring_base = 0x0000046c,
+
+ /* REO DEST ring address */
+ .hal_reo2_ring_base = 0x00000578,
+ .hal_reo1_misc_ctrl_addr = 0x00000b9c,
+ .hal_reo1_sw_cookie_cfg0 = 0x0000006c,
+ .hal_reo1_sw_cookie_cfg1 = 0x00000070,
+ .hal_reo1_qdesc_lut_base0 = 0x00000074,
+ .hal_reo1_qdesc_lut_base1 = 0x00000078,
+ .hal_reo1_ring_base_lsb = 0x00000500,
+ .hal_reo1_ring_base_msb = 0x00000504,
+ .hal_reo1_ring_id = 0x00000508,
+ .hal_reo1_ring_misc = 0x00000510,
+ .hal_reo1_ring_hp_addr_lsb = 0x00000514,
+ .hal_reo1_ring_hp_addr_msb = 0x00000518,
+ .hal_reo1_ring_producer_int_setup = 0x00000524,
+ .hal_reo1_ring_msi1_base_lsb = 0x00000548,
+ .hal_reo1_ring_msi1_base_msb = 0x0000054c,
+ .hal_reo1_ring_msi1_data = 0x00000550,
+ .hal_reo1_aging_thres_ix0 = 0x00000b28,
+ .hal_reo1_aging_thres_ix1 = 0x00000b2c,
+ .hal_reo1_aging_thres_ix2 = 0x00000b30,
+ .hal_reo1_aging_thres_ix3 = 0x00000b34,
+
+ /* REO Exception ring address */
+ .hal_reo2_sw0_ring_base = 0x000008c0,
+
+ /* REO Reinject ring address */
+ .hal_sw2reo_ring_base = 0x00000320,
+ .hal_sw2reo1_ring_base = 0x00000398,
+
+ /* REO cmd ring address */
+ .hal_reo_cmd_ring_base = 0x000002a8,
+
+ /* REO status ring address */
+ .hal_reo_status_ring_base = 0x00000aa0,
+};
+
+static const struct ath12k_hw_regs wcn7850_regs = {
+ /* SW2TCL(x) R0 ring configuration address */
+ .hal_tcl1_ring_id = 0x00000908,
+ .hal_tcl1_ring_misc = 0x00000910,
+ .hal_tcl1_ring_tp_addr_lsb = 0x0000091c,
+ .hal_tcl1_ring_tp_addr_msb = 0x00000920,
+ .hal_tcl1_ring_consumer_int_setup_ix0 = 0x00000930,
+ .hal_tcl1_ring_consumer_int_setup_ix1 = 0x00000934,
+ .hal_tcl1_ring_msi1_base_lsb = 0x00000948,
+ .hal_tcl1_ring_msi1_base_msb = 0x0000094c,
+ .hal_tcl1_ring_msi1_data = 0x00000950,
+ .hal_tcl_ring_base_lsb = 0x00000b58,
+
+ /* TCL STATUS ring address */
+ .hal_tcl_status_ring_base_lsb = 0x00000d38,
+
+ .hal_wbm_idle_ring_base_lsb = 0x00000d3c,
+ .hal_wbm_idle_ring_misc_addr = 0x00000d4c,
+ .hal_wbm_r0_idle_list_cntl_addr = 0x00000240,
+ .hal_wbm_r0_idle_list_size_addr = 0x00000244,
+ .hal_wbm_scattered_ring_base_lsb = 0x00000250,
+ .hal_wbm_scattered_ring_base_msb = 0x00000254,
+ .hal_wbm_scattered_desc_head_info_ix0 = 0x00000260,
+ .hal_wbm_scattered_desc_head_info_ix1 = 0x00000264,
+ .hal_wbm_scattered_desc_tail_info_ix0 = 0x00000270,
+ .hal_wbm_scattered_desc_tail_info_ix1 = 0x00000274,
+ .hal_wbm_scattered_desc_ptr_hp_addr = 0x0000027c,
+
+ .hal_wbm_sw_release_ring_base_lsb = 0x0000037c,
+ .hal_wbm_sw1_release_ring_base_lsb = 0x00000284,
+ .hal_wbm0_release_ring_base_lsb = 0x00000e08,
+ .hal_wbm1_release_ring_base_lsb = 0x00000e80,
+
+ /* PCIe base address */
+ .pcie_qserdes_sysclk_en_sel = 0x01e0e0a8,
+ .pcie_pcs_osc_dtct_config_base = 0x01e0f45c,
+
+ /* PPE release ring address */
+ .hal_ppe_rel_ring_base = 0x0000043c,
+
+ /* REO DEST ring address */
+ .hal_reo2_ring_base = 0x0000055c,
+ .hal_reo1_misc_ctrl_addr = 0x00000b7c,
+ .hal_reo1_sw_cookie_cfg0 = 0x00000050,
+ .hal_reo1_sw_cookie_cfg1 = 0x00000054,
+ .hal_reo1_qdesc_lut_base0 = 0x00000058,
+ .hal_reo1_qdesc_lut_base1 = 0x0000005c,
+ .hal_reo1_ring_base_lsb = 0x000004e4,
+ .hal_reo1_ring_base_msb = 0x000004e8,
+ .hal_reo1_ring_id = 0x000004ec,
+ .hal_reo1_ring_misc = 0x000004f4,
+ .hal_reo1_ring_hp_addr_lsb = 0x000004f8,
+ .hal_reo1_ring_hp_addr_msb = 0x000004fc,
+ .hal_reo1_ring_producer_int_setup = 0x00000508,
+ .hal_reo1_ring_msi1_base_lsb = 0x0000052c,
+ .hal_reo1_ring_msi1_base_msb = 0x00000530,
+ .hal_reo1_ring_msi1_data = 0x00000534,
+ .hal_reo1_aging_thres_ix0 = 0x00000b08,
+ .hal_reo1_aging_thres_ix1 = 0x00000b0c,
+ .hal_reo1_aging_thres_ix2 = 0x00000b10,
+ .hal_reo1_aging_thres_ix3 = 0x00000b14,
+
+ /* REO Exception ring address */
+ .hal_reo2_sw0_ring_base = 0x000008a4,
+
+ /* REO Reinject ring address */
+ .hal_sw2reo_ring_base = 0x00000304,
+ .hal_sw2reo1_ring_base = 0x0000037c,
+
+ /* REO cmd ring address */
+ .hal_reo_cmd_ring_base = 0x0000028c,
+
+ /* REO status ring address */
+ .hal_reo_status_ring_base = 0x00000a84,
+};
+
+static const struct ath12k_hw_hal_params ath12k_hw_hal_params_qcn9274 = {
+ .rx_buf_rbm = HAL_RX_BUF_RBM_SW3_BM,
+ .wbm2sw_cc_enable = HAL_WBM_SW_COOKIE_CONV_CFG_WBM2SW0_EN |
+ HAL_WBM_SW_COOKIE_CONV_CFG_WBM2SW1_EN |
+ HAL_WBM_SW_COOKIE_CONV_CFG_WBM2SW2_EN |
+ HAL_WBM_SW_COOKIE_CONV_CFG_WBM2SW3_EN |
+ HAL_WBM_SW_COOKIE_CONV_CFG_WBM2SW4_EN,
+};
+
+static const struct ath12k_hw_hal_params ath12k_hw_hal_params_wcn7850 = {
+ .rx_buf_rbm = HAL_RX_BUF_RBM_SW1_BM,
+ .wbm2sw_cc_enable = HAL_WBM_SW_COOKIE_CONV_CFG_WBM2SW0_EN |
+ HAL_WBM_SW_COOKIE_CONV_CFG_WBM2SW2_EN |
+ HAL_WBM_SW_COOKIE_CONV_CFG_WBM2SW3_EN |
+ HAL_WBM_SW_COOKIE_CONV_CFG_WBM2SW4_EN,
+};
+
+static const struct ath12k_hw_params ath12k_hw_params[] = {
+ {
+ .name = "qcn9274 hw1.0",
+ .hw_rev = ATH12K_HW_QCN9274_HW10,
+ .fw = {
+ .dir = "QCN9274/hw1.0",
+ .board_size = 256 * 1024,
+ .cal_offset = 128 * 1024,
+ },
+ .max_radios = 1,
+ .single_pdev_only = false,
+ .qmi_service_ins_id = ATH12K_QMI_WLFW_SERVICE_INS_ID_V01_QCN9274,
+ .internal_sleep_clock = false,
+
+ .hw_ops = &qcn9274_ops,
+ .ring_mask = &ath12k_hw_ring_mask_qcn9274,
+ .regs = &qcn9274_v1_regs,
+
+ .host_ce_config = ath12k_host_ce_config_qcn9274,
+ .ce_count = 16,
+ .target_ce_config = ath12k_target_ce_config_wlan_qcn9274,
+ .target_ce_count = 12,
+ .svc_to_ce_map = ath12k_target_service_to_ce_map_wlan_qcn9274,
+ .svc_to_ce_map_len = 18,
+
+ .hal_params = &ath12k_hw_hal_params_qcn9274,
+
+ .rxdma1_enable = false,
+ .num_rxmda_per_pdev = 1,
+ .num_rxdma_dst_ring = 0,
+ .rx_mac_buf_ring = false,
+ .vdev_start_delay = false,
+
+ .interface_modes = BIT(NL80211_IFTYPE_STATION) |
+ BIT(NL80211_IFTYPE_AP),
+ .supports_monitor = false,
+
+ .idle_ps = false,
+ .download_calib = true,
+ .supports_suspend = false,
+ .tcl_ring_retry = true,
+ .reoq_lut_support = false,
+ .supports_shadow_regs = false,
+
+ .hal_desc_sz = sizeof(struct hal_rx_desc_qcn9274),
+ .num_tcl_banks = 48,
+ .max_tx_ring = 4,
+
+ .mhi_config = &ath12k_mhi_config_qcn9274,
+
+ .wmi_init = ath12k_wmi_init_qcn9274,
+
+ .hal_ops = &hal_qcn9274_ops,
+
+ },
+ {
+ .name = "wcn7850 hw2.0",
+ .hw_rev = ATH12K_HW_WCN7850_HW20,
+
+ .fw = {
+ .dir = "WCN7850/hw2.0",
+ .board_size = 256 * 1024,
+ .cal_offset = 256 * 1024,
+ },
+
+ .max_radios = 1,
+ .single_pdev_only = true,
+ .qmi_service_ins_id = ATH12K_QMI_WLFW_SERVICE_INS_ID_V01_WCN7850,
+ .internal_sleep_clock = true,
+
+ .hw_ops = &wcn7850_ops,
+ .ring_mask = &ath12k_hw_ring_mask_wcn7850,
+ .regs = &wcn7850_regs,
+
+ .host_ce_config = ath12k_host_ce_config_wcn7850,
+ .ce_count = 9,
+ .target_ce_config = ath12k_target_ce_config_wlan_wcn7850,
+ .target_ce_count = 9,
+ .svc_to_ce_map = ath12k_target_service_to_ce_map_wlan_wcn7850,
+ .svc_to_ce_map_len = 14,
+
+ .hal_params = &ath12k_hw_hal_params_wcn7850,
+
+ .rxdma1_enable = false,
+ .num_rxmda_per_pdev = 2,
+ .num_rxdma_dst_ring = 1,
+ .rx_mac_buf_ring = true,
+ .vdev_start_delay = true,
+
+ .interface_modes = BIT(NL80211_IFTYPE_STATION),
+ .supports_monitor = false,
+
+ .idle_ps = false,
+ .download_calib = false,
+ .supports_suspend = false,
+ .tcl_ring_retry = false,
+ .reoq_lut_support = false,
+ .supports_shadow_regs = true,
+
+ .hal_desc_sz = sizeof(struct hal_rx_desc_wcn7850),
+ .num_tcl_banks = 7,
+ .max_tx_ring = 3,
+
+ .mhi_config = &ath12k_mhi_config_wcn7850,
+
+ .wmi_init = ath12k_wmi_init_wcn7850,
+
+ .hal_ops = &hal_wcn7850_ops,
+ },
+ {
+ .name = "qcn9274 hw2.0",
+ .hw_rev = ATH12K_HW_QCN9274_HW20,
+ .fw = {
+ .dir = "QCN9274/hw2.0",
+ .board_size = 256 * 1024,
+ .cal_offset = 128 * 1024,
+ },
+ .max_radios = 1,
+ .single_pdev_only = false,
+ .qmi_service_ins_id = ATH12K_QMI_WLFW_SERVICE_INS_ID_V01_QCN9274,
+ .internal_sleep_clock = false,
+
+ .hw_ops = &qcn9274_ops,
+ .ring_mask = &ath12k_hw_ring_mask_qcn9274,
+ .regs = &qcn9274_v2_regs,
+
+ .host_ce_config = ath12k_host_ce_config_qcn9274,
+ .ce_count = 16,
+ .target_ce_config = ath12k_target_ce_config_wlan_qcn9274,
+ .target_ce_count = 12,
+ .svc_to_ce_map = ath12k_target_service_to_ce_map_wlan_qcn9274,
+ .svc_to_ce_map_len = 18,
+
+ .hal_params = &ath12k_hw_hal_params_qcn9274,
+
+ .rxdma1_enable = false,
+ .num_rxmda_per_pdev = 1,
+ .num_rxdma_dst_ring = 0,
+ .rx_mac_buf_ring = false,
+ .vdev_start_delay = false,
+
+ .interface_modes = BIT(NL80211_IFTYPE_STATION) |
+ BIT(NL80211_IFTYPE_AP),
+ .supports_monitor = false,
+
+ .idle_ps = false,
+ .download_calib = true,
+ .supports_suspend = false,
+ .tcl_ring_retry = true,
+ .reoq_lut_support = false,
+ .supports_shadow_regs = false,
+
+ .hal_desc_sz = sizeof(struct hal_rx_desc_qcn9274),
+ .num_tcl_banks = 48,
+ .max_tx_ring = 4,
+
+ .mhi_config = &ath12k_mhi_config_qcn9274,
+
+ .wmi_init = ath12k_wmi_init_qcn9274,
+
+ .hal_ops = &hal_qcn9274_ops,
+ },
+};
+
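+/* Select the ath12k_hw_params entry matching the detected hardware
+ * revision and attach it to the device.
+ */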
+int ath12k_hw_init(struct ath12k_base *ab)
+{
+ const struct ath12k_hw_params *hw_params = NULL;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(ath12k_hw_params); i++) {
+ hw_params = &ath12k_hw_params[i];
+
+ if (hw_params->hw_rev == ab->hw_rev)
+ break;
+ }
+
+ if (i == ARRAY_SIZE(ath12k_hw_params)) {
+ ath12k_err(ab, "Unsupported hardware version: 0x%x\n", ab->hw_rev);
+ return -EINVAL;
+ }
+
+ ab->hw_params = hw_params;
+
+ ath12k_info(ab, "Hardware name: %s\n", ab->hw_params->name);
+
+ return 0;
+}
diff --git a/drivers/net/wireless/ath/ath12k/hw.h b/drivers/net/wireless/ath/ath12k/hw.h
new file mode 100644
index 000000000000..e3461004188b
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/hw.h
@@ -0,0 +1,312 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_HW_H
+#define ATH12K_HW_H
+
+#include <linux/mhi.h>
+
+#include "wmi.h"
+#include "hal.h"
+
+/* Target configuration defines */
+
+/* Num VDEVS per radio */
+#define TARGET_NUM_VDEVS (16 + 1)
+
+#define TARGET_NUM_PEERS_PDEV (512 + TARGET_NUM_VDEVS)
+
+/* Num of peers for Single Radio mode */
+#define TARGET_NUM_PEERS_SINGLE (TARGET_NUM_PEERS_PDEV)
+
+/* Num of peers for DBS */
+#define TARGET_NUM_PEERS_DBS (2 * TARGET_NUM_PEERS_PDEV)
+
+/* Num of peers for DBS_SBS */
+#define TARGET_NUM_PEERS_DBS_SBS (3 * TARGET_NUM_PEERS_PDEV)
+
+/* Max num of stations (per radio) */
+#define TARGET_NUM_STATIONS 512
+
+#define TARGET_NUM_PEERS(x) TARGET_NUM_PEERS_##x
+#define TARGET_NUM_PEER_KEYS 2
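+/* Two TIDs per peer plus four per vdev and a small reserve */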
+#define TARGET_NUM_TIDS(x) (2 * TARGET_NUM_PEERS(x) + \
+ 4 * TARGET_NUM_VDEVS + 8)
+
+#define TARGET_AST_SKID_LIMIT 16
+#define TARGET_NUM_OFFLD_PEERS 4
+#define TARGET_NUM_OFFLD_REORDER_BUFFS 4
+
+#define TARGET_TX_CHAIN_MASK (BIT(0) | BIT(1) | BIT(2) | BIT(4))
+#define TARGET_RX_CHAIN_MASK (BIT(0) | BIT(1) | BIT(2) | BIT(4))
+#define TARGET_RX_TIMEOUT_LO_PRI 100
+#define TARGET_RX_TIMEOUT_HI_PRI 40
+
+#define TARGET_DECAP_MODE_RAW 0
+#define TARGET_DECAP_MODE_NATIVE_WIFI 1
+#define TARGET_DECAP_MODE_ETH 2
+
+#define TARGET_SCAN_MAX_PENDING_REQS 4
+#define TARGET_BMISS_OFFLOAD_MAX_VDEV 3
+#define TARGET_ROAM_OFFLOAD_MAX_VDEV 3
+#define TARGET_ROAM_OFFLOAD_MAX_AP_PROFILES 8
+#define TARGET_GTK_OFFLOAD_MAX_VDEV 3
+#define TARGET_NUM_MCAST_GROUPS 12
+#define TARGET_NUM_MCAST_TABLE_ELEMS 64
+#define TARGET_MCAST2UCAST_MODE 2
+#define TARGET_TX_DBG_LOG_SIZE 1024
+#define TARGET_RX_SKIP_DEFRAG_TIMEOUT_DUP_DETECTION_CHECK 1
+#define TARGET_VOW_CONFIG 0
+#define TARGET_NUM_MSDU_DESC (2500)
+#define TARGET_MAX_FRAG_ENTRIES 6
+#define TARGET_MAX_BCN_OFFLD 16
+#define TARGET_NUM_WDS_ENTRIES 32
+#define TARGET_DMA_BURST_SIZE 1
+#define TARGET_RX_BATCHMODE 1
+
+#define ATH12K_HW_MAX_QUEUES 4
+#define ATH12K_QUEUE_LEN 4096
+
+#define ATH12K_HW_RATECODE_CCK_SHORT_PREAM_MASK 0x4
+
+#define ATH12K_FW_DIR "ath12k"
+
+#define ATH12K_BOARD_MAGIC "QCA-ATH12K-BOARD"
+#define ATH12K_BOARD_API2_FILE "board-2.bin"
+#define ATH12K_DEFAULT_BOARD_FILE "board.bin"
+#define ATH12K_DEFAULT_CAL_FILE "caldata.bin"
+#define ATH12K_AMSS_FILE "amss.bin"
+#define ATH12K_M3_FILE "m3.bin"
+#define ATH12K_REGDB_FILE_NAME "regdb.bin"
+
+enum ath12k_hw_rate_cck {
+ ATH12K_HW_RATE_CCK_LP_11M = 0,
+ ATH12K_HW_RATE_CCK_LP_5_5M,
+ ATH12K_HW_RATE_CCK_LP_2M,
+ ATH12K_HW_RATE_CCK_LP_1M,
+ ATH12K_HW_RATE_CCK_SP_11M,
+ ATH12K_HW_RATE_CCK_SP_5_5M,
+ ATH12K_HW_RATE_CCK_SP_2M,
+};
+
+enum ath12k_hw_rate_ofdm {
+ ATH12K_HW_RATE_OFDM_48M = 0,
+ ATH12K_HW_RATE_OFDM_24M,
+ ATH12K_HW_RATE_OFDM_12M,
+ ATH12K_HW_RATE_OFDM_6M,
+ ATH12K_HW_RATE_OFDM_54M,
+ ATH12K_HW_RATE_OFDM_36M,
+ ATH12K_HW_RATE_OFDM_18M,
+ ATH12K_HW_RATE_OFDM_9M,
+};
+
+enum ath12k_bus {
+ ATH12K_BUS_PCI,
+};
+
+#define ATH12K_EXT_IRQ_GRP_NUM_MAX 11
+
+struct hal_rx_desc;
+struct hal_tcl_data_cmd;
+struct htt_rx_ring_tlv_filter;
+enum hal_encrypt_type;
+
+struct ath12k_hw_ring_mask {
+ u8 tx[ATH12K_EXT_IRQ_GRP_NUM_MAX];
+ u8 rx_mon_dest[ATH12K_EXT_IRQ_GRP_NUM_MAX];
+ u8 rx[ATH12K_EXT_IRQ_GRP_NUM_MAX];
+ u8 rx_err[ATH12K_EXT_IRQ_GRP_NUM_MAX];
+ u8 rx_wbm_rel[ATH12K_EXT_IRQ_GRP_NUM_MAX];
+ u8 reo_status[ATH12K_EXT_IRQ_GRP_NUM_MAX];
+ u8 host2rxdma[ATH12K_EXT_IRQ_GRP_NUM_MAX];
+ u8 tx_mon_dest[ATH12K_EXT_IRQ_GRP_NUM_MAX];
+};
+
+struct ath12k_hw_hal_params {
+ enum hal_rx_buf_return_buf_manager rx_buf_rbm;
+ u32 wbm2sw_cc_enable;
+};
+
+struct ath12k_hw_params {
+ const char *name;
+ u16 hw_rev;
+
+ struct {
+ const char *dir;
+ size_t board_size;
+ size_t cal_offset;
+ } fw;
+
+ u8 max_radios;
+ bool single_pdev_only:1;
+ u32 qmi_service_ins_id;
+ bool internal_sleep_clock:1;
+
+ const struct ath12k_hw_ops *hw_ops;
+ const struct ath12k_hw_ring_mask *ring_mask;
+ const struct ath12k_hw_regs *regs;
+
+ const struct ce_attr *host_ce_config;
+ u32 ce_count;
+ const struct ce_pipe_config *target_ce_config;
+ u32 target_ce_count;
+ const struct service_to_pipe *svc_to_ce_map;
+ u32 svc_to_ce_map_len;
+
+ const struct ath12k_hw_hal_params *hal_params;
+
+ bool rxdma1_enable:1;
+ int num_rxmda_per_pdev;
+ int num_rxdma_dst_ring;
+ bool rx_mac_buf_ring:1;
+ bool vdev_start_delay:1;
+
+ u16 interface_modes;
+ bool supports_monitor:1;
+
+ bool idle_ps:1;
+ bool download_calib:1;
+ bool supports_suspend:1;
+ bool tcl_ring_retry:1;
+ bool reoq_lut_support:1;
+ bool supports_shadow_regs:1;
+
+ u32 hal_desc_sz;
+ u32 num_tcl_banks;
+ u32 max_tx_ring;
+
+ const struct mhi_controller_config *mhi_config;
+
+ void (*wmi_init)(struct ath12k_base *ab,
+ struct ath12k_wmi_resource_config_arg *config);
+
+ const struct hal_ops *hal_ops;
+};
+
+struct ath12k_hw_ops {
+ u8 (*get_hw_mac_from_pdev_id)(int pdev_id);
+ int (*mac_id_to_pdev_id)(const struct ath12k_hw_params *hw, int mac_id);
+ int (*mac_id_to_srng_id)(const struct ath12k_hw_params *hw, int mac_id);
+ int (*rxdma_ring_sel_config)(struct ath12k_base *ab);
+ u8 (*get_ring_selector)(struct sk_buff *skb);
+ bool (*dp_srng_is_tx_comp_ring)(int ring_num);
+};
+
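+/* These wrappers allow a hw generation to leave an op unimplemented
+ * and fall back to a default of zero.
+ */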
+static inline
+int ath12k_hw_get_mac_from_pdev_id(const struct ath12k_hw_params *hw,
+ int pdev_idx)
+{
+ if (hw->hw_ops->get_hw_mac_from_pdev_id)
+ return hw->hw_ops->get_hw_mac_from_pdev_id(pdev_idx);
+
+ return 0;
+}
+
+static inline int ath12k_hw_mac_id_to_pdev_id(const struct ath12k_hw_params *hw,
+ int mac_id)
+{
+ if (hw->hw_ops->mac_id_to_pdev_id)
+ return hw->hw_ops->mac_id_to_pdev_id(hw, mac_id);
+
+ return 0;
+}
+
+static inline int ath12k_hw_mac_id_to_srng_id(const struct ath12k_hw_params *hw,
+ int mac_id)
+{
+ if (hw->hw_ops->mac_id_to_srng_id)
+ return hw->hw_ops->mac_id_to_srng_id(hw, mac_id);
+
+ return 0;
+}
+
+struct ath12k_fw_ie {
+ __le32 id;
+ __le32 len;
+ u8 data[];
+};
+
+enum ath12k_bd_ie_board_type {
+ ATH12K_BD_IE_BOARD_NAME = 0,
+ ATH12K_BD_IE_BOARD_DATA = 1,
+};
+
+enum ath12k_bd_ie_type {
+ /* contains sub IEs of enum ath12k_bd_ie_board_type */
+ ATH12K_BD_IE_BOARD = 0,
+ ATH12K_BD_IE_BOARD_EXT = 1,
+};
+
+struct ath12k_hw_regs {
+ u32 hal_tcl1_ring_id;
+ u32 hal_tcl1_ring_misc;
+ u32 hal_tcl1_ring_tp_addr_lsb;
+ u32 hal_tcl1_ring_tp_addr_msb;
+ u32 hal_tcl1_ring_consumer_int_setup_ix0;
+ u32 hal_tcl1_ring_consumer_int_setup_ix1;
+ u32 hal_tcl1_ring_msi1_base_lsb;
+ u32 hal_tcl1_ring_msi1_base_msb;
+ u32 hal_tcl1_ring_msi1_data;
+ u32 hal_tcl_ring_base_lsb;
+
+ u32 hal_tcl_status_ring_base_lsb;
+
+ u32 hal_wbm_idle_ring_base_lsb;
+ u32 hal_wbm_idle_ring_misc_addr;
+ u32 hal_wbm_r0_idle_list_cntl_addr;
+ u32 hal_wbm_r0_idle_list_size_addr;
+ u32 hal_wbm_scattered_ring_base_lsb;
+ u32 hal_wbm_scattered_ring_base_msb;
+ u32 hal_wbm_scattered_desc_head_info_ix0;
+ u32 hal_wbm_scattered_desc_head_info_ix1;
+ u32 hal_wbm_scattered_desc_tail_info_ix0;
+ u32 hal_wbm_scattered_desc_tail_info_ix1;
+ u32 hal_wbm_scattered_desc_ptr_hp_addr;
+
+ u32 hal_wbm_sw_release_ring_base_lsb;
+ u32 hal_wbm_sw1_release_ring_base_lsb;
+ u32 hal_wbm0_release_ring_base_lsb;
+ u32 hal_wbm1_release_ring_base_lsb;
+
+ u32 pcie_qserdes_sysclk_en_sel;
+ u32 pcie_pcs_osc_dtct_config_base;
+
+ u32 hal_ppe_rel_ring_base;
+
+ u32 hal_reo2_ring_base;
+ u32 hal_reo1_misc_ctrl_addr;
+ u32 hal_reo1_sw_cookie_cfg0;
+ u32 hal_reo1_sw_cookie_cfg1;
+ u32 hal_reo1_qdesc_lut_base0;
+ u32 hal_reo1_qdesc_lut_base1;
+ u32 hal_reo1_ring_base_lsb;
+ u32 hal_reo1_ring_base_msb;
+ u32 hal_reo1_ring_id;
+ u32 hal_reo1_ring_misc;
+ u32 hal_reo1_ring_hp_addr_lsb;
+ u32 hal_reo1_ring_hp_addr_msb;
+ u32 hal_reo1_ring_producer_int_setup;
+ u32 hal_reo1_ring_msi1_base_lsb;
+ u32 hal_reo1_ring_msi1_base_msb;
+ u32 hal_reo1_ring_msi1_data;
+ u32 hal_reo1_aging_thres_ix0;
+ u32 hal_reo1_aging_thres_ix1;
+ u32 hal_reo1_aging_thres_ix2;
+ u32 hal_reo1_aging_thres_ix3;
+
+ u32 hal_reo2_sw0_ring_base;
+
+ u32 hal_sw2reo_ring_base;
+ u32 hal_sw2reo1_ring_base;
+
+ u32 hal_reo_cmd_ring_base;
+
+ u32 hal_reo_status_ring_base;
+};
+
+int ath12k_hw_init(struct ath12k_base *ab);
+
+#endif
diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
new file mode 100644
index 000000000000..bf7e5b6977b2
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/mac.c
@@ -0,0 +1,7038 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include <net/mac80211.h>
+#include <linux/etherdevice.h>
+#include "mac.h"
+#include "core.h"
+#include "debug.h"
+#include "wmi.h"
+#include "hw.h"
+#include "dp_tx.h"
+#include "dp_rx.h"
+#include "peer.h"
+
+#define CHAN2G(_channel, _freq, _flags) { \
+ .band = NL80211_BAND_2GHZ, \
+ .hw_value = (_channel), \
+ .center_freq = (_freq), \
+ .flags = (_flags), \
+ .max_antenna_gain = 0, \
+ .max_power = 30, \
+}
+
+#define CHAN5G(_channel, _freq, _flags) { \
+ .band = NL80211_BAND_5GHZ, \
+ .hw_value = (_channel), \
+ .center_freq = (_freq), \
+ .flags = (_flags), \
+ .max_antenna_gain = 0, \
+ .max_power = 30, \
+}
+
+#define CHAN6G(_channel, _freq, _flags) { \
+ .band = NL80211_BAND_6GHZ, \
+ .hw_value = (_channel), \
+ .center_freq = (_freq), \
+ .flags = (_flags), \
+ .max_antenna_gain = 0, \
+ .max_power = 30, \
+}
+
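+/* Channel tables advertised to mac80211; regulatory rules flag or
+ * disable entries at runtime.
+ */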
+static const struct ieee80211_channel ath12k_2ghz_channels[] = {
+ CHAN2G(1, 2412, 0),
+ CHAN2G(2, 2417, 0),
+ CHAN2G(3, 2422, 0),
+ CHAN2G(4, 2427, 0),
+ CHAN2G(5, 2432, 0),
+ CHAN2G(6, 2437, 0),
+ CHAN2G(7, 2442, 0),
+ CHAN2G(8, 2447, 0),
+ CHAN2G(9, 2452, 0),
+ CHAN2G(10, 2457, 0),
+ CHAN2G(11, 2462, 0),
+ CHAN2G(12, 2467, 0),
+ CHAN2G(13, 2472, 0),
+ CHAN2G(14, 2484, 0),
+};
+
+static const struct ieee80211_channel ath12k_5ghz_channels[] = {
+ CHAN5G(36, 5180, 0),
+ CHAN5G(40, 5200, 0),
+ CHAN5G(44, 5220, 0),
+ CHAN5G(48, 5240, 0),
+ CHAN5G(52, 5260, 0),
+ CHAN5G(56, 5280, 0),
+ CHAN5G(60, 5300, 0),
+ CHAN5G(64, 5320, 0),
+ CHAN5G(100, 5500, 0),
+ CHAN5G(104, 5520, 0),
+ CHAN5G(108, 5540, 0),
+ CHAN5G(112, 5560, 0),
+ CHAN5G(116, 5580, 0),
+ CHAN5G(120, 5600, 0),
+ CHAN5G(124, 5620, 0),
+ CHAN5G(128, 5640, 0),
+ CHAN5G(132, 5660, 0),
+ CHAN5G(136, 5680, 0),
+ CHAN5G(140, 5700, 0),
+ CHAN5G(144, 5720, 0),
+ CHAN5G(149, 5745, 0),
+ CHAN5G(153, 5765, 0),
+ CHAN5G(157, 5785, 0),
+ CHAN5G(161, 5805, 0),
+ CHAN5G(165, 5825, 0),
+ CHAN5G(169, 5845, 0),
+ CHAN5G(173, 5865, 0),
+};
+
+static const struct ieee80211_channel ath12k_6ghz_channels[] = {
+ CHAN6G(1, 5955, 0),
+ CHAN6G(5, 5975, 0),
+ CHAN6G(9, 5995, 0),
+ CHAN6G(13, 6015, 0),
+ CHAN6G(17, 6035, 0),
+ CHAN6G(21, 6055, 0),
+ CHAN6G(25, 6075, 0),
+ CHAN6G(29, 6095, 0),
+ CHAN6G(33, 6115, 0),
+ CHAN6G(37, 6135, 0),
+ CHAN6G(41, 6155, 0),
+ CHAN6G(45, 6175, 0),
+ CHAN6G(49, 6195, 0),
+ CHAN6G(53, 6215, 0),
+ CHAN6G(57, 6235, 0),
+ CHAN6G(61, 6255, 0),
+ CHAN6G(65, 6275, 0),
+ CHAN6G(69, 6295, 0),
+ CHAN6G(73, 6315, 0),
+ CHAN6G(77, 6335, 0),
+ CHAN6G(81, 6355, 0),
+ CHAN6G(85, 6375, 0),
+ CHAN6G(89, 6395, 0),
+ CHAN6G(93, 6415, 0),
+ CHAN6G(97, 6435, 0),
+ CHAN6G(101, 6455, 0),
+ CHAN6G(105, 6475, 0),
+ CHAN6G(109, 6495, 0),
+ CHAN6G(113, 6515, 0),
+ CHAN6G(117, 6535, 0),
+ CHAN6G(121, 6555, 0),
+ CHAN6G(125, 6575, 0),
+ CHAN6G(129, 6595, 0),
+ CHAN6G(133, 6615, 0),
+ CHAN6G(137, 6635, 0),
+ CHAN6G(141, 6655, 0),
+ CHAN6G(145, 6675, 0),
+ CHAN6G(149, 6695, 0),
+ CHAN6G(153, 6715, 0),
+ CHAN6G(157, 6735, 0),
+ CHAN6G(161, 6755, 0),
+ CHAN6G(165, 6775, 0),
+ CHAN6G(169, 6795, 0),
+ CHAN6G(173, 6815, 0),
+ CHAN6G(177, 6835, 0),
+ CHAN6G(181, 6855, 0),
+ CHAN6G(185, 6875, 0),
+ CHAN6G(189, 6895, 0),
+ CHAN6G(193, 6915, 0),
+ CHAN6G(197, 6935, 0),
+ CHAN6G(201, 6955, 0),
+ CHAN6G(205, 6975, 0),
+ CHAN6G(209, 6995, 0),
+ CHAN6G(213, 7015, 0),
+ CHAN6G(217, 7035, 0),
+ CHAN6G(221, 7055, 0),
+ CHAN6G(225, 7075, 0),
+ CHAN6G(229, 7095, 0),
+ CHAN6G(233, 7115, 0),
+};
+
+static struct ieee80211_rate ath12k_legacy_rates[] = {
+ { .bitrate = 10,
+ .hw_value = ATH12K_HW_RATE_CCK_LP_1M },
+ { .bitrate = 20,
+ .hw_value = ATH12K_HW_RATE_CCK_LP_2M,
+ .hw_value_short = ATH12K_HW_RATE_CCK_SP_2M,
+ .flags = IEEE80211_RATE_SHORT_PREAMBLE },
+ { .bitrate = 55,
+ .hw_value = ATH12K_HW_RATE_CCK_LP_5_5M,
+ .hw_value_short = ATH12K_HW_RATE_CCK_SP_5_5M,
+ .flags = IEEE80211_RATE_SHORT_PREAMBLE },
+ { .bitrate = 110,
+ .hw_value = ATH12K_HW_RATE_CCK_LP_11M,
+ .hw_value_short = ATH12K_HW_RATE_CCK_SP_11M,
+ .flags = IEEE80211_RATE_SHORT_PREAMBLE },
+
+ { .bitrate = 60, .hw_value = ATH12K_HW_RATE_OFDM_6M },
+ { .bitrate = 90, .hw_value = ATH12K_HW_RATE_OFDM_9M },
+ { .bitrate = 120, .hw_value = ATH12K_HW_RATE_OFDM_12M },
+ { .bitrate = 180, .hw_value = ATH12K_HW_RATE_OFDM_18M },
+ { .bitrate = 240, .hw_value = ATH12K_HW_RATE_OFDM_24M },
+ { .bitrate = 360, .hw_value = ATH12K_HW_RATE_OFDM_36M },
+ { .bitrate = 480, .hw_value = ATH12K_HW_RATE_OFDM_48M },
+ { .bitrate = 540, .hw_value = ATH12K_HW_RATE_OFDM_54M },
+};
+
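+/* Default WMI PHY mode per band and channel width; widths a band does
+ * not support map to MODE_UNKNOWN.
+ */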
+static const int
+ath12k_phymodes[NUM_NL80211_BANDS][ATH12K_CHAN_WIDTH_NUM] = {
+ [NL80211_BAND_2GHZ] = {
+ [NL80211_CHAN_WIDTH_5] = MODE_UNKNOWN,
+ [NL80211_CHAN_WIDTH_10] = MODE_UNKNOWN,
+ [NL80211_CHAN_WIDTH_20_NOHT] = MODE_11AX_HE20_2G,
+ [NL80211_CHAN_WIDTH_20] = MODE_11AX_HE20_2G,
+ [NL80211_CHAN_WIDTH_40] = MODE_11AX_HE40_2G,
+ [NL80211_CHAN_WIDTH_80] = MODE_11AX_HE80_2G,
+ [NL80211_CHAN_WIDTH_80P80] = MODE_UNKNOWN,
+ [NL80211_CHAN_WIDTH_160] = MODE_UNKNOWN,
+ },
+ [NL80211_BAND_5GHZ] = {
+ [NL80211_CHAN_WIDTH_5] = MODE_UNKNOWN,
+ [NL80211_CHAN_WIDTH_10] = MODE_UNKNOWN,
+ [NL80211_CHAN_WIDTH_20_NOHT] = MODE_11AX_HE20,
+ [NL80211_CHAN_WIDTH_20] = MODE_11AX_HE20,
+ [NL80211_CHAN_WIDTH_40] = MODE_11AX_HE40,
+ [NL80211_CHAN_WIDTH_80] = MODE_11AX_HE80,
+ [NL80211_CHAN_WIDTH_160] = MODE_11AX_HE160,
+ [NL80211_CHAN_WIDTH_80P80] = MODE_11AX_HE80_80,
+ },
+ [NL80211_BAND_6GHZ] = {
+ [NL80211_CHAN_WIDTH_5] = MODE_UNKNOWN,
+ [NL80211_CHAN_WIDTH_10] = MODE_UNKNOWN,
+ [NL80211_CHAN_WIDTH_20_NOHT] = MODE_11AX_HE20,
+ [NL80211_CHAN_WIDTH_20] = MODE_11AX_HE20,
+ [NL80211_CHAN_WIDTH_40] = MODE_11AX_HE40,
+ [NL80211_CHAN_WIDTH_80] = MODE_11AX_HE80,
+ [NL80211_CHAN_WIDTH_160] = MODE_11AX_HE160,
+ [NL80211_CHAN_WIDTH_80P80] = MODE_11AX_HE80_80,
+ },
+
+};
+
+const struct htt_rx_ring_tlv_filter ath12k_mac_mon_status_filter_default = {
+ .rx_filter = HTT_RX_FILTER_TLV_FLAGS_MPDU_START |
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_END |
+ HTT_RX_FILTER_TLV_FLAGS_PPDU_END_STATUS_DONE,
+ .pkt_filter_flags0 = HTT_RX_FP_MGMT_FILTER_FLAGS0,
+ .pkt_filter_flags1 = HTT_RX_FP_MGMT_FILTER_FLAGS1,
+ .pkt_filter_flags2 = HTT_RX_FP_CTRL_FILTER_FLASG2,
+ .pkt_filter_flags3 = HTT_RX_FP_DATA_FILTER_FLASG3 |
+ HTT_RX_FP_CTRL_FILTER_FLASG3
+};
+
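+/* Index of the first OFDM entry in ath12k_legacy_rates: the 5/6 GHz
+ * rate sets start there, skipping the four CCK rates.
+ */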
+#define ATH12K_MAC_FIRST_OFDM_RATE_IDX 4
+#define ath12k_g_rates ath12k_legacy_rates
+#define ath12k_g_rates_size (ARRAY_SIZE(ath12k_legacy_rates))
+#define ath12k_a_rates (ath12k_legacy_rates + 4)
+#define ath12k_a_rates_size (ARRAY_SIZE(ath12k_legacy_rates) - 4)
+
+#define ATH12K_MAC_SCAN_TIMEOUT_MSECS 200 /* in msecs */
+
+static const u32 ath12k_smps_map[] = {
+ [WLAN_HT_CAP_SM_PS_STATIC] = WMI_PEER_SMPS_STATIC,
+ [WLAN_HT_CAP_SM_PS_DYNAMIC] = WMI_PEER_SMPS_DYNAMIC,
+ [WLAN_HT_CAP_SM_PS_INVALID] = WMI_PEER_SMPS_PS_NONE,
+ [WLAN_HT_CAP_SM_PS_DISABLED] = WMI_PEER_SMPS_PS_NONE,
+};
+
+static int ath12k_start_vdev_delay(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif);
+
+static const char *ath12k_mac_phymode_str(enum wmi_phy_mode mode)
+{
+ switch (mode) {
+ case MODE_11A:
+ return "11a";
+ case MODE_11G:
+ return "11g";
+ case MODE_11B:
+ return "11b";
+ case MODE_11GONLY:
+ return "11gonly";
+ case MODE_11NA_HT20:
+ return "11na-ht20";
+ case MODE_11NG_HT20:
+ return "11ng-ht20";
+ case MODE_11NA_HT40:
+ return "11na-ht40";
+ case MODE_11NG_HT40:
+ return "11ng-ht40";
+ case MODE_11AC_VHT20:
+ return "11ac-vht20";
+ case MODE_11AC_VHT40:
+ return "11ac-vht40";
+ case MODE_11AC_VHT80:
+ return "11ac-vht80";
+ case MODE_11AC_VHT160:
+ return "11ac-vht160";
+ case MODE_11AC_VHT80_80:
+ return "11ac-vht80+80";
+ case MODE_11AC_VHT20_2G:
+ return "11ac-vht20-2g";
+ case MODE_11AC_VHT40_2G:
+ return "11ac-vht40-2g";
+ case MODE_11AC_VHT80_2G:
+ return "11ac-vht80-2g";
+ case MODE_11AX_HE20:
+ return "11ax-he20";
+ case MODE_11AX_HE40:
+ return "11ax-he40";
+ case MODE_11AX_HE80:
+ return "11ax-he80";
+ case MODE_11AX_HE80_80:
+ return "11ax-he80+80";
+ case MODE_11AX_HE160:
+ return "11ax-he160";
+ case MODE_11AX_HE20_2G:
+ return "11ax-he20-2g";
+ case MODE_11AX_HE40_2G:
+ return "11ax-he40-2g";
+ case MODE_11AX_HE80_2G:
+ return "11ax-he80-2g";
+ case MODE_UNKNOWN:
+ /* skip */
+ break;
+
+ /* no default handler to allow compiler to check that the
+ * enum is fully handled
+ */
+ }
+
+ return "<unknown>";
+}
+
+enum rate_info_bw
+ath12k_mac_bw_to_mac80211_bw(enum ath12k_supported_bw bw)
+{
+ u8 ret = RATE_INFO_BW_20;
+
+ switch (bw) {
+ case ATH12K_BW_20:
+ ret = RATE_INFO_BW_20;
+ break;
+ case ATH12K_BW_40:
+ ret = RATE_INFO_BW_40;
+ break;
+ case ATH12K_BW_80:
+ ret = RATE_INFO_BW_80;
+ break;
+ case ATH12K_BW_160:
+ ret = RATE_INFO_BW_160;
+ break;
+ }
+
+ return ret;
+}
+
+enum ath12k_supported_bw ath12k_mac_mac80211_bw_to_ath12k_bw(enum rate_info_bw bw)
+{
+ switch (bw) {
+ case RATE_INFO_BW_20:
+ return ATH12K_BW_20;
+ case RATE_INFO_BW_40:
+ return ATH12K_BW_40;
+ case RATE_INFO_BW_80:
+ return ATH12K_BW_80;
+ case RATE_INFO_BW_160:
+ return ATH12K_BW_160;
+ default:
+ return ATH12K_BW_20;
+ }
+}
+
+int ath12k_mac_hw_ratecode_to_legacy_rate(u8 hw_rc, u8 preamble, u8 *rateidx,
+ u16 *rate)
+{
+ /* Default to OFDM rates */
+ int i = ATH12K_MAC_FIRST_OFDM_RATE_IDX;
+ int max_rates_idx = ath12k_g_rates_size;
+
+ if (preamble == WMI_RATE_PREAMBLE_CCK) {
+ hw_rc &= ~ATH12K_HW_RATECODE_CCK_SHORT_PREAM_MASK;
+ i = 0;
+ max_rates_idx = ATH12K_MAC_FIRST_OFDM_RATE_IDX;
+ }
+
+ while (i < max_rates_idx) {
+ if (hw_rc == ath12k_legacy_rates[i].hw_value) {
+ *rateidx = i;
+ *rate = ath12k_legacy_rates[i].bitrate;
+ return 0;
+ }
+ i++;
+ }
+
+ return -EINVAL;
+}
+
+u8 ath12k_mac_bitrate_to_idx(const struct ieee80211_supported_band *sband,
+ u32 bitrate)
+{
+ int i;
+
+ for (i = 0; i < sband->n_bitrates; i++)
+ if (sband->bitrates[i].bitrate == bitrate)
+ return i;
+
+ return 0;
+}
+
+static u32
+ath12k_mac_max_ht_nss(const u8 ht_mcs_mask[IEEE80211_HT_MCS_MASK_LEN])
+{
+ int nss;
+
+ for (nss = IEEE80211_HT_MCS_MASK_LEN - 1; nss >= 0; nss--)
+ if (ht_mcs_mask[nss])
+ return nss + 1;
+
+ return 1;
+}
+
+static u32
+ath12k_mac_max_vht_nss(const u16 vht_mcs_mask[NL80211_VHT_NSS_MAX])
+{
+ int nss;
+
+ for (nss = NL80211_VHT_NSS_MAX - 1; nss >= 0; nss--)
+ if (vht_mcs_mask[nss])
+ return nss + 1;
+
+ return 1;
+}
+
+static u8 ath12k_parse_mpdudensity(u8 mpdudensity)
+{
+/* From IEEE Std 802.11-2020 defined values for "Minimum MPDU Start Spacing":
+ * 0 for no restriction
+ * 1 for 1/4 us
+ * 2 for 1/2 us
+ * 3 for 1 us
+ * 4 for 2 us
+ * 5 for 4 us
+ * 6 for 8 us
+ * 7 for 16 us
+ */
+ switch (mpdudensity) {
+ case 0:
+ return 0;
+ case 1:
+ case 2:
+ case 3:
+ /* Our lower layer calculations limit our precision to
+ * 1 microsecond
+ */
+ return 1;
+ case 4:
+ return 2;
+ case 5:
+ return 4;
+ case 6:
+ return 8;
+ case 7:
+ return 16;
+ default:
+ return 0;
+ }
+}
+
+static int ath12k_mac_vif_chan(struct ieee80211_vif *vif,
+ struct cfg80211_chan_def *def)
+{
+ struct ieee80211_chanctx_conf *conf;
+
+ rcu_read_lock();
+ conf = rcu_dereference(vif->bss_conf.chanctx_conf);
+ if (!conf) {
+ rcu_read_unlock();
+ return -ENOENT;
+ }
+
+ *def = conf->def;
+ rcu_read_unlock();
+
+ return 0;
+}
+
+static bool ath12k_mac_bitrate_is_cck(int bitrate)
+{
+ switch (bitrate) {
+ case 10:
+ case 20:
+ case 55:
+ case 110:
+ return true;
+ }
+
+ return false;
+}
+
+u8 ath12k_mac_hw_rate_to_idx(const struct ieee80211_supported_band *sband,
+ u8 hw_rate, bool cck)
+{
+ const struct ieee80211_rate *rate;
+ int i;
+
+ for (i = 0; i < sband->n_bitrates; i++) {
+ rate = &sband->bitrates[i];
+
+ if (ath12k_mac_bitrate_is_cck(rate->bitrate) != cck)
+ continue;
+
+ if (rate->hw_value == hw_rate)
+ return i;
+ else if (rate->flags & IEEE80211_RATE_SHORT_PREAMBLE &&
+ rate->hw_value_short == hw_rate)
+ return i;
+ }
+
+ return 0;
+}
+
+static u8 ath12k_mac_bitrate_to_rate(int bitrate)
+{
+ return DIV_ROUND_UP(bitrate, 5) |
+ (ath12k_mac_bitrate_is_cck(bitrate) ? BIT(7) : 0);
+}
+
+static void ath12k_get_arvif_iter(void *data, u8 *mac,
+ struct ieee80211_vif *vif)
+{
+ struct ath12k_vif_iter *arvif_iter = data;
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+
+ if (arvif->vdev_id == arvif_iter->vdev_id)
+ arvif_iter->arvif = arvif;
+}
+
+struct ath12k_vif *ath12k_mac_get_arvif(struct ath12k *ar, u32 vdev_id)
+{
+ struct ath12k_vif_iter arvif_iter = {};
+ u32 flags;
+
+ arvif_iter.vdev_id = vdev_id;
+
+ flags = IEEE80211_IFACE_ITER_RESUME_ALL;
+ ieee80211_iterate_active_interfaces_atomic(ar->hw,
+ flags,
+ ath12k_get_arvif_iter,
+ &arvif_iter);
+ if (!arvif_iter.arvif) {
+ ath12k_warn(ar->ab, "No VIF found for vdev %d\n", vdev_id);
+ return NULL;
+ }
+
+ return arvif_iter.arvif;
+}
+
+struct ath12k_vif *ath12k_mac_get_arvif_by_vdev_id(struct ath12k_base *ab,
+ u32 vdev_id)
+{
+ int i;
+ struct ath12k_pdev *pdev;
+ struct ath12k_vif *arvif;
+
+ for (i = 0; i < ab->num_radios; i++) {
+ pdev = rcu_dereference(ab->pdevs_active[i]);
+ if (pdev && pdev->ar) {
+ arvif = ath12k_mac_get_arvif(pdev->ar, vdev_id);
+ if (arvif)
+ return arvif;
+ }
+ }
+
+ return NULL;
+}
+
+struct ath12k *ath12k_mac_get_ar_by_vdev_id(struct ath12k_base *ab, u32 vdev_id)
+{
+ int i;
+ struct ath12k_pdev *pdev;
+
+ for (i = 0; i < ab->num_radios; i++) {
+ pdev = rcu_dereference(ab->pdevs_active[i]);
+ if (pdev && pdev->ar) {
+ if (pdev->ar->allocated_vdev_map & (1LL << vdev_id))
+ return pdev->ar;
+ }
+ }
+
+ return NULL;
+}
+
+struct ath12k *ath12k_mac_get_ar_by_pdev_id(struct ath12k_base *ab, u32 pdev_id)
+{
+ int i;
+ struct ath12k_pdev *pdev;
+
+ if (ab->hw_params->single_pdev_only) {
+ pdev = rcu_dereference(ab->pdevs_active[0]);
+ return pdev ? pdev->ar : NULL;
+ }
+
+ if (WARN_ON(pdev_id > ab->num_radios))
+ return NULL;
+
+ for (i = 0; i < ab->num_radios; i++) {
+ pdev = rcu_dereference(ab->pdevs_active[i]);
+
+ if (pdev && pdev->pdev_id == pdev_id)
+ return pdev->ar;
+ }
+
+ return NULL;
+}
+
+static void ath12k_pdev_caps_update(struct ath12k *ar)
+{
+ struct ath12k_base *ab = ar->ab;
+
+ ar->max_tx_power = ab->target_caps.hw_max_tx_power;
+
+ /* FIXME: Set min_tx_power to ab->target_caps.hw_min_tx_power.
+ * But since the value received in svcrdy is the same as
+ * hw_max_tx_power, set ar->min_tx_power to 0 for now, until this is
+ * fixed in firmware.
+ */
+ ar->min_tx_power = 0;
+
+ ar->txpower_limit_2g = ar->max_tx_power;
+ ar->txpower_limit_5g = ar->max_tx_power;
+ ar->txpower_scale = WMI_HOST_TP_SCALE_MAX;
+}
+
+static int ath12k_mac_txpower_recalc(struct ath12k *ar)
+{
+ struct ath12k_pdev *pdev = ar->pdev;
+ struct ath12k_vif *arvif;
+ int ret, txpower = -1;
+ u32 param;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
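+ /* Use the lowest tx power configured across all active vifs as the
+ * pdev-wide limit so no interface exceeds its own setting.
+ */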
+ list_for_each_entry(arvif, &ar->arvifs, list) {
+ if (arvif->txpower <= 0)
+ continue;
+
+ if (txpower == -1)
+ txpower = arvif->txpower;
+ else
+ txpower = min(txpower, arvif->txpower);
+ }
+
+ if (txpower == -1)
+ return 0;
+
+ /* Firmware expects tx power in units of 0.5 dBm (2 units per dBm) */
+ txpower = min_t(u32, max_t(u32, ar->min_tx_power, txpower),
+ ar->max_tx_power) * 2;
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "txpower to set in hw %d\n",
+ txpower / 2);
+
+ if ((pdev->cap.supported_bands & WMI_HOST_WLAN_2G_CAP) &&
+ ar->txpower_limit_2g != txpower) {
+ param = WMI_PDEV_PARAM_TXPOWER_LIMIT2G;
+ ret = ath12k_wmi_pdev_set_param(ar, param,
+ txpower, ar->pdev->pdev_id);
+ if (ret)
+ goto fail;
+ ar->txpower_limit_2g = txpower;
+ }
+
+ if ((pdev->cap.supported_bands & WMI_HOST_WLAN_5G_CAP) &&
+ ar->txpower_limit_5g != txpower) {
+ param = WMI_PDEV_PARAM_TXPOWER_LIMIT5G;
+ ret = ath12k_wmi_pdev_set_param(ar, param,
+ txpower, ar->pdev->pdev_id);
+ if (ret)
+ goto fail;
+ ar->txpower_limit_5g = txpower;
+ }
+
+ return 0;
+
+fail:
+ ath12k_warn(ar->ab, "failed to recalc txpower limit %d using pdev param %d: %d\n",
+ txpower / 2, param, ret);
+ return ret;
+}
+
+static int ath12k_recalc_rtscts_prot(struct ath12k_vif *arvif)
+{
+ struct ath12k *ar = arvif->ar;
+ u32 vdev_param, rts_cts;
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ vdev_param = WMI_VDEV_PARAM_ENABLE_RTSCTS;
+
+ /* Enable RTS/CTS protection for sw retries (when legacy stations
+ * are in BSS) or by default only for second rate series.
+ * TODO: Check if we need to enable CTS 2 Self in any case
+ */
+ rts_cts = WMI_USE_RTS_CTS;
+
+ if (arvif->num_legacy_stations > 0)
+ rts_cts |= WMI_RTSCTS_ACROSS_SW_RETRIES << 4;
+ else
+ rts_cts |= WMI_RTSCTS_FOR_SECOND_RATESERIES << 4;
+
+ /* No need to send a duplicate param value to firmware */
+ if (arvif->rtscts_prot_mode == rts_cts)
+ return 0;
+
+ arvif->rtscts_prot_mode = rts_cts;
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %d recalc rts/cts prot %d\n",
+ arvif->vdev_id, rts_cts);
+
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ vdev_param, rts_cts);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to recalculate rts/cts prot for vdev %d: %d\n",
+ arvif->vdev_id, ret);
+
+ return ret;
+}
+
+static int ath12k_mac_set_kickout(struct ath12k_vif *arvif)
+{
+ struct ath12k *ar = arvif->ar;
+ u32 param;
+ int ret;
+
+ ret = ath12k_wmi_pdev_set_param(ar, WMI_PDEV_PARAM_STA_KICKOUT_TH,
+ ATH12K_KICKOUT_THRESHOLD,
+ ar->pdev->pdev_id);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set kickout threshold on vdev %i: %d\n",
+ arvif->vdev_id, ret);
+ return ret;
+ }
+
+ param = WMI_VDEV_PARAM_AP_KEEPALIVE_MIN_IDLE_INACTIVE_TIME_SECS;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id, param,
+ ATH12K_KEEPALIVE_MIN_IDLE);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set keepalive minimum idle time on vdev %i: %d\n",
+ arvif->vdev_id, ret);
+ return ret;
+ }
+
+ param = WMI_VDEV_PARAM_AP_KEEPALIVE_MAX_IDLE_INACTIVE_TIME_SECS;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id, param,
+ ATH12K_KEEPALIVE_MAX_IDLE);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set keepalive maximum idle time on vdev %i: %d\n",
+ arvif->vdev_id, ret);
+ return ret;
+ }
+
+ param = WMI_VDEV_PARAM_AP_KEEPALIVE_MAX_UNRESPONSIVE_TIME_SECS;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id, param,
+ ATH12K_KEEPALIVE_MAX_UNRESPONSIVE);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set keepalive maximum unresponsive time on vdev %i: %d\n",
+ arvif->vdev_id, ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+void ath12k_mac_peer_cleanup_all(struct ath12k *ar)
+{
+ struct ath12k_peer *peer, *tmp;
+ struct ath12k_base *ab = ar->ab;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ spin_lock_bh(&ab->base_lock);
+ list_for_each_entry_safe(peer, tmp, &ab->peers, list) {
+ ath12k_dp_rx_peer_tid_cleanup(ar, peer);
+ list_del(&peer->list);
+ kfree(peer);
+ }
+ spin_unlock_bh(&ab->base_lock);
+
+ ar->num_peers = 0;
+ ar->num_stations = 0;
+}
+
+static int ath12k_mac_vdev_setup_sync(struct ath12k *ar)
+{
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (test_bit(ATH12K_FLAG_CRASH_FLUSH, &ar->ab->dev_flags))
+ return -ESHUTDOWN;
+
+ if (!wait_for_completion_timeout(&ar->vdev_setup_done,
+ ATH12K_VDEV_SETUP_TIMEOUT_HZ))
+ return -ETIMEDOUT;
+
+ return ar->last_wmi_vdev_start_status ? -EINVAL : 0;
+}
+
+static int ath12k_monitor_vdev_up(struct ath12k *ar, int vdev_id)
+{
+ int ret;
+
+ ret = ath12k_wmi_vdev_up(ar, vdev_id, 0, ar->mac_addr);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to put up monitor vdev %i: %d\n",
+ vdev_id, ret);
+ return ret;
+ }
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac monitor vdev %i started\n",
+ vdev_id);
+ return 0;
+}
+
+static int ath12k_mac_monitor_vdev_start(struct ath12k *ar, int vdev_id,
+ struct cfg80211_chan_def *chandef)
+{
+ struct ieee80211_channel *channel;
+ struct wmi_vdev_start_req_arg arg = {};
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ channel = chandef->chan;
+ arg.vdev_id = vdev_id;
+ arg.freq = channel->center_freq;
+ arg.band_center_freq1 = chandef->center_freq1;
+ arg.band_center_freq2 = chandef->center_freq2;
+ arg.mode = ath12k_phymodes[chandef->chan->band][chandef->width];
+ arg.chan_radar = !!(channel->flags & IEEE80211_CHAN_RADAR);
+
+ arg.min_power = 0;
+ arg.max_power = channel->max_power;
+ arg.max_reg_power = channel->max_reg_power;
+ arg.max_antenna_gain = channel->max_antenna_gain;
+
+ arg.pref_tx_streams = ar->num_tx_chains;
+ arg.pref_rx_streams = ar->num_rx_chains;
+
+ arg.passive |= !!(chandef->chan->flags & IEEE80211_CHAN_NO_IR);
+
+ reinit_completion(&ar->vdev_setup_done);
+ reinit_completion(&ar->vdev_delete_done);
+
+ ret = ath12k_wmi_vdev_start(ar, &arg, false);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to request monitor vdev %i start: %d\n",
+ vdev_id, ret);
+ return ret;
+ }
+
+ ret = ath12k_mac_vdev_setup_sync(ar);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to synchronize setup for monitor vdev %i start: %d\n",
+ vdev_id, ret);
+ return ret;
+ }
+
+ ret = ath12k_wmi_vdev_up(ar, vdev_id, 0, ar->mac_addr);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to put up monitor vdev %i: %d\n",
+ vdev_id, ret);
+ goto vdev_stop;
+ }
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac monitor vdev %i started\n",
+ vdev_id);
+ return 0;
+
+vdev_stop:
+ ret = ath12k_wmi_vdev_stop(ar, vdev_id);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to stop monitor vdev %i after start failure: %d\n",
+ vdev_id, ret);
+ return ret;
+}
+
+static int ath12k_mac_monitor_vdev_stop(struct ath12k *ar)
+{
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ reinit_completion(&ar->vdev_setup_done);
+
+ ret = ath12k_wmi_vdev_stop(ar, ar->monitor_vdev_id);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to request monitor vdev %i stop: %d\n",
+ ar->monitor_vdev_id, ret);
+
+ ret = ath12k_mac_vdev_setup_sync(ar);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to synchronize monitor vdev %i stop: %d\n",
+ ar->monitor_vdev_id, ret);
+
+ ret = ath12k_wmi_vdev_down(ar, ar->monitor_vdev_id);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to put down monitor vdev %i: %d\n",
+ ar->monitor_vdev_id, ret);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac monitor vdev %i stopped\n",
+ ar->monitor_vdev_id);
+ return ret;
+}
+
+static int ath12k_mac_monitor_vdev_create(struct ath12k *ar)
+{
+ struct ath12k_pdev *pdev = ar->pdev;
+ struct ath12k_wmi_vdev_create_arg arg = {};
+ int bit, ret;
+ u8 tmp_addr[6] = {}; /* zero-initialized; avoid passing stack garbage as the MAC */
+ u16 nss;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (ar->monitor_vdev_created)
+ return 0;
+
+ if (ar->ab->free_vdev_map == 0) {
+ ath12k_warn(ar->ab, "failed to find free vdev id for monitor vdev\n");
+ return -ENOMEM;
+ }
+
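+ /* Claim the lowest free vdev id from the bitmap for the monitor vdev */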
+ bit = __ffs64(ar->ab->free_vdev_map);
+
+ ar->monitor_vdev_id = bit;
+
+ arg.if_id = ar->monitor_vdev_id;
+ arg.type = WMI_VDEV_TYPE_MONITOR;
+ arg.subtype = WMI_VDEV_SUBTYPE_NONE;
+ arg.pdev_id = pdev->pdev_id;
+ arg.if_stats_id = ATH12K_INVAL_VDEV_STATS_ID;
+
+ if (pdev->cap.supported_bands & WMI_HOST_WLAN_2G_CAP) {
+ arg.chains[NL80211_BAND_2GHZ].tx = ar->num_tx_chains;
+ arg.chains[NL80211_BAND_2GHZ].rx = ar->num_rx_chains;
+ }
+
+ if (pdev->cap.supported_bands & WMI_HOST_WLAN_5G_CAP) {
+ arg.chains[NL80211_BAND_5GHZ].tx = ar->num_tx_chains;
+ arg.chains[NL80211_BAND_5GHZ].rx = ar->num_rx_chains;
+ }
+
+ ret = ath12k_wmi_vdev_create(ar, tmp_addr, &arg);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to request monitor vdev %i creation: %d\n",
+ ar->monitor_vdev_id, ret);
+ ar->monitor_vdev_id = -1;
+ return ret;
+ }
+
+ nss = hweight32(ar->cfg_tx_chainmask) ? : 1;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, ar->monitor_vdev_id,
+ WMI_VDEV_PARAM_NSS, nss);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set vdev %d chainmask 0x%x, nss %d :%d\n",
+ ar->monitor_vdev_id, ar->cfg_tx_chainmask, nss, ret);
+ return ret;
+ }
+
+ ret = ath12k_mac_txpower_recalc(ar);
+ if (ret)
+ return ret;
+
+ ar->allocated_vdev_map |= 1LL << ar->monitor_vdev_id;
+ ar->ab->free_vdev_map &= ~(1LL << ar->monitor_vdev_id);
+ ar->num_created_vdevs++;
+ ar->monitor_vdev_created = true;
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac monitor vdev %d created\n",
+ ar->monitor_vdev_id);
+
+ return 0;
+}
+
+static int ath12k_mac_monitor_vdev_delete(struct ath12k *ar)
+{
+ int ret;
+ unsigned long time_left;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (!ar->monitor_vdev_created)
+ return 0;
+
+ reinit_completion(&ar->vdev_delete_done);
+
+ ret = ath12k_wmi_vdev_delete(ar, ar->monitor_vdev_id);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to request wmi monitor vdev %i removal: %d\n",
+ ar->monitor_vdev_id, ret);
+ return ret;
+ }
+
+ time_left = wait_for_completion_timeout(&ar->vdev_delete_done,
+ ATH12K_VDEV_DELETE_TIMEOUT_HZ);
+ if (time_left == 0) {
+ ath12k_warn(ar->ab, "Timeout in receiving vdev delete response\n");
+ } else {
+ ar->allocated_vdev_map &= ~(1LL << ar->monitor_vdev_id);
+ ar->ab->free_vdev_map |= 1LL << (ar->monitor_vdev_id);
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac monitor vdev %d deleted\n",
+ ar->monitor_vdev_id);
+ ar->num_created_vdevs--;
+ ar->monitor_vdev_id = -1;
+ ar->monitor_vdev_created = false;
+ }
+
+ return ret;
+}
+
+static void
+ath12k_mac_get_any_chandef_iter(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *conf,
+ void *data)
+{
+ struct cfg80211_chan_def **def = data;
+
+ *def = &conf->def;
+}
+
+static int ath12k_mac_monitor_start(struct ath12k *ar)
+{
+ struct cfg80211_chan_def *chandef = NULL;
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (ar->monitor_started)
+ return 0;
+
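+ /* Piggyback on any active channel context; without one there is
+ * nothing to monitor yet, so report success and do nothing.
+ */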
+ ieee80211_iter_chan_contexts_atomic(ar->hw,
+ ath12k_mac_get_any_chandef_iter,
+ &chandef);
+ if (!chandef)
+ return 0;
+
+ ret = ath12k_mac_monitor_vdev_start(ar, ar->monitor_vdev_id, chandef);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to start monitor vdev: %d\n", ret);
+ ath12k_mac_monitor_vdev_delete(ar);
+ return ret;
+ }
+
+ ar->monitor_started = true;
+ ar->num_started_vdevs++;
+ ret = ath12k_dp_tx_htt_monitor_mode_ring_config(ar, false);
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac monitor started ret %d\n", ret);
+
+ return ret;
+}
+
+static int ath12k_mac_monitor_stop(struct ath12k *ar)
+{
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (!ar->monitor_started)
+ return 0;
+
+ ret = ath12k_mac_monitor_vdev_stop(ar);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to stop monitor vdev: %d\n", ret);
+ return ret;
+ }
+
+ ar->monitor_started = false;
+ ar->num_started_vdevs--;
+ ret = ath12k_dp_tx_htt_monitor_mode_ring_config(ar, true);
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac monitor stopped ret %d\n", ret);
+ return ret;
+}
+
+static int ath12k_mac_op_config(struct ieee80211_hw *hw, u32 changed)
+{
+ struct ath12k *ar = hw->priv;
+ struct ieee80211_conf *conf = &hw->conf;
+ int ret = 0;
+
+ mutex_lock(&ar->conf_mutex);
+
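+ /* Monitor mode is driven by the MONITOR flag alone: create and start
+ * the monitor vdev when it is set, stop and delete it when cleared.
+ */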
+ if (changed & IEEE80211_CONF_CHANGE_MONITOR) {
+ ar->monitor_conf_enabled = conf->flags & IEEE80211_CONF_MONITOR;
+ if (ar->monitor_conf_enabled) {
+ if (ar->monitor_vdev_created)
+ goto exit;
+ ret = ath12k_mac_monitor_vdev_create(ar);
+ if (ret)
+ goto exit;
+ ret = ath12k_mac_monitor_start(ar);
+ if (ret)
+ goto err_mon_del;
+ } else {
+ if (!ar->monitor_vdev_created)
+ goto exit;
+ ret = ath12k_mac_monitor_stop(ar);
+ if (ret)
+ goto exit;
+ ath12k_mac_monitor_vdev_delete(ar);
+ }
+ }
+
+exit:
+ mutex_unlock(&ar->conf_mutex);
+ return ret;
+
+err_mon_del:
+ ath12k_mac_monitor_vdev_delete(ar);
+ mutex_unlock(&ar->conf_mutex);
+ return ret;
+}
+
+static int ath12k_mac_setup_bcn_tmpl(struct ath12k_vif *arvif)
+{
+ struct ath12k *ar = arvif->ar;
+ struct ath12k_base *ab = ar->ab;
+ struct ieee80211_hw *hw = ar->hw;
+ struct ieee80211_vif *vif = arvif->vif;
+ struct ieee80211_mutable_offsets offs = {};
+ struct sk_buff *bcn;
+ struct ieee80211_mgmt *mgmt;
+ u8 *ies;
+ int ret;
+
+ if (arvif->vdev_type != WMI_VDEV_TYPE_AP)
+ return 0;
+
+ bcn = ieee80211_beacon_get_template(hw, vif, &offs, 0);
+ if (!bcn) {
+ ath12k_warn(ab, "failed to get beacon template from mac80211\n");
+ return -EPERM;
+ }
+
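+ /* Record whether the beacon carries RSN/WPA IEs; peer assoc later
+ * uses this to decide if PTK/GTK handshakes are required.
+ */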
+ ies = bcn->data + ieee80211_get_hdrlen_from_skb(bcn);
+ ies += sizeof(mgmt->u.beacon);
+
+ if (cfg80211_find_ie(WLAN_EID_RSN, ies, (skb_tail_pointer(bcn) - ies)))
+ arvif->rsnie_present = true;
+
+ if (cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT,
+ WLAN_OUI_TYPE_MICROSOFT_WPA,
+ ies, (skb_tail_pointer(bcn) - ies)))
+ arvif->wpaie_present = true;
+
+ ret = ath12k_wmi_bcn_tmpl(ar, arvif->vdev_id, &offs, bcn);
+
+ kfree_skb(bcn);
+
+ if (ret)
+ ath12k_warn(ab, "failed to submit beacon template command: %d\n",
+ ret);
+
+ return ret;
+}
+
+static void ath12k_control_beaconing(struct ath12k_vif *arvif,
+ struct ieee80211_bss_conf *info)
+{
+ struct ath12k *ar = arvif->ar;
+ int ret;
+
+ lockdep_assert_held(&arvif->ar->conf_mutex);
+
+ if (!info->enable_beacon) {
+ ret = ath12k_wmi_vdev_down(ar, arvif->vdev_id);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to down vdev_id %i: %d\n",
+ arvif->vdev_id, ret);
+
+ arvif->is_up = false;
+ return;
+ }
+
+ /* Install the beacon template to the FW */
+ ret = ath12k_mac_setup_bcn_tmpl(arvif);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to update bcn tmpl during vdev up: %d\n",
+ ret);
+ return;
+ }
+
+ arvif->aid = 0;
+
+ ether_addr_copy(arvif->bssid, info->bssid);
+
+ ret = ath12k_wmi_vdev_up(arvif->ar, arvif->vdev_id, arvif->aid,
+ arvif->bssid);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to bring up vdev %d: %i\n",
+ arvif->vdev_id, ret);
+ return;
+ }
+
+ arvif->is_up = true;
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %d up\n", arvif->vdev_id);
+}
+
+static void ath12k_peer_assoc_h_basic(struct ath12k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ath12k_wmi_peer_assoc_arg *arg)
+{
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ u32 aid;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (vif->type == NL80211_IFTYPE_STATION)
+ aid = vif->cfg.aid;
+ else
+ aid = sta->aid;
+
+ ether_addr_copy(arg->peer_mac, sta->addr);
+ arg->vdev_id = arvif->vdev_id;
+ arg->peer_associd = aid;
+ arg->auth_flag = true;
+ /* TODO: STA WAR in ath10k for listen interval required? */
+ arg->peer_listen_intval = ar->hw->conf.listen_interval;
+ arg->peer_nss = 1;
+ arg->peer_caps = vif->bss_conf.assoc_capability;
+}
+
+static void ath12k_peer_assoc_h_crypto(struct ath12k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ath12k_wmi_peer_assoc_arg *arg)
+{
+ struct ieee80211_bss_conf *info = &vif->bss_conf;
+ struct cfg80211_chan_def def;
+ struct cfg80211_bss *bss;
+ struct ath12k_vif *arvif = (struct ath12k_vif *)vif->drv_priv;
+ const u8 *rsnie = NULL;
+ const u8 *wpaie = NULL;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (WARN_ON(ath12k_mac_vif_chan(vif, &def)))
+ return;
+
+ bss = cfg80211_get_bss(ar->hw->wiphy, def.chan, info->bssid, NULL, 0,
+ IEEE80211_BSS_TYPE_ANY, IEEE80211_PRIVACY_ANY);
+
+ if (arvif->rsnie_present || arvif->wpaie_present) {
+ arg->need_ptk_4_way = true;
+ if (arvif->wpaie_present)
+ arg->need_gtk_2_way = true;
+ } else if (bss) {
+ const struct cfg80211_bss_ies *ies;
+
+ rcu_read_lock();
+ rsnie = ieee80211_bss_get_ie(bss, WLAN_EID_RSN);
+
+ ies = rcu_dereference(bss->ies);
+
+ wpaie = cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT,
+ WLAN_OUI_TYPE_MICROSOFT_WPA,
+ ies->data,
+ ies->len);
+ rcu_read_unlock();
+ cfg80211_put_bss(ar->hw->wiphy, bss);
+ }
+
+ /* FIXME: is keying this off the RSN/WPA IEs the correct approach? */
+ if (rsnie || wpaie) {
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "%s: rsn ie found\n", __func__);
+ arg->need_ptk_4_way = true;
+ }
+
+ if (wpaie) {
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "%s: wpa ie found\n", __func__);
+ arg->need_gtk_2_way = true;
+ }
+
+ if (sta->mfp) {
+ /* TODO: Need to check if FW supports PMF? */
+ arg->is_pmf_enabled = true;
+ }
+
+ /* TODO: safe_mode_enabled (bypass 4-way handshake) flag req? */
+}
+
+static void ath12k_peer_assoc_h_rates(struct ath12k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ath12k_wmi_peer_assoc_arg *arg)
+{
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct wmi_rate_set_arg *rateset = &arg->peer_legacy_rates;
+ struct cfg80211_chan_def def;
+ const struct ieee80211_supported_band *sband;
+ const struct ieee80211_rate *rates;
+ enum nl80211_band band;
+ u32 ratemask;
+ u8 rate;
+ int i;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (WARN_ON(ath12k_mac_vif_chan(vif, &def)))
+ return;
+
+ band = def.chan->band;
+ sband = ar->hw->wiphy->bands[band];
+ ratemask = sta->deflink.supp_rates[band];
+ ratemask &= arvif->bitrate_mask.control[band].legacy;
+ rates = sband->bitrates;
+
+ rateset->num_rates = 0;
+
+ for (i = 0; i < 32; i++, ratemask >>= 1, rates++) {
+ if (!(ratemask & 1))
+ continue;
+
+ rate = ath12k_mac_bitrate_to_rate(rates->bitrate);
+ rateset->rates[rateset->num_rates] = rate;
+ rateset->num_rates++;
+ }
+}
+
+static bool
+ath12k_peer_assoc_h_ht_masked(const u8 ht_mcs_mask[IEEE80211_HT_MCS_MASK_LEN])
+{
+ int nss;
+
+ for (nss = 0; nss < IEEE80211_HT_MCS_MASK_LEN; nss++)
+ if (ht_mcs_mask[nss])
+ return false;
+
+ return true;
+}
+
+static bool
+ath12k_peer_assoc_h_vht_masked(const u16 vht_mcs_mask[NL80211_VHT_NSS_MAX])
+{
+ int nss;
+
+ for (nss = 0; nss < NL80211_VHT_NSS_MAX; nss++)
+ if (vht_mcs_mask[nss])
+ return false;
+
+ return true;
+}
+
+static void ath12k_peer_assoc_h_ht(struct ath12k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ath12k_wmi_peer_assoc_arg *arg)
+{
+ const struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct cfg80211_chan_def def;
+ enum nl80211_band band;
+ const u8 *ht_mcs_mask;
+ int i, n;
+ u8 max_nss;
+ u32 stbc;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (WARN_ON(ath12k_mac_vif_chan(vif, &def)))
+ return;
+
+ if (!ht_cap->ht_supported)
+ return;
+
+ band = def.chan->band;
+ ht_mcs_mask = arvif->bitrate_mask.control[band].ht_mcs;
+
+ if (ath12k_peer_assoc_h_ht_masked(ht_mcs_mask))
+ return;
+
+ arg->ht_flag = true;
+
+ arg->peer_max_mpdu = (1 << (IEEE80211_HT_MAX_AMPDU_FACTOR +
+ ht_cap->ampdu_factor)) - 1;
+
+ arg->peer_mpdu_density =
+ ath12k_parse_mpdudensity(ht_cap->ampdu_density);
+
+ arg->peer_ht_caps = ht_cap->cap;
+ arg->peer_rate_caps |= WMI_HOST_RC_HT_FLAG;
+
+ if (ht_cap->cap & IEEE80211_HT_CAP_LDPC_CODING)
+ arg->ldpc_flag = true;
+
+ if (sta->deflink.bandwidth >= IEEE80211_STA_RX_BW_40) {
+ arg->bw_40 = true;
+ arg->peer_rate_caps |= WMI_HOST_RC_CW40_FLAG;
+ }
+
+ if (arvif->bitrate_mask.control[band].gi != NL80211_TXRATE_FORCE_LGI) {
+ if (ht_cap->cap & (IEEE80211_HT_CAP_SGI_20 |
+ IEEE80211_HT_CAP_SGI_40))
+ arg->peer_rate_caps |= WMI_HOST_RC_SGI_FLAG;
+ }
+
+ if (ht_cap->cap & IEEE80211_HT_CAP_TX_STBC) {
+ arg->peer_rate_caps |= WMI_HOST_RC_TX_STBC_FLAG;
+ arg->stbc_flag = true;
+ }
+
+ if (ht_cap->cap & IEEE80211_HT_CAP_RX_STBC) {
+ stbc = ht_cap->cap & IEEE80211_HT_CAP_RX_STBC;
+ stbc = stbc >> IEEE80211_HT_CAP_RX_STBC_SHIFT;
+ stbc = stbc << WMI_HOST_RC_RX_STBC_FLAG_S;
+ arg->peer_rate_caps |= stbc;
+ arg->stbc_flag = true;
+ }
+
+ if (ht_cap->mcs.rx_mask[1] && ht_cap->mcs.rx_mask[2])
+ arg->peer_rate_caps |= WMI_HOST_RC_TS_FLAG;
+ else if (ht_cap->mcs.rx_mask[1])
+ arg->peer_rate_caps |= WMI_HOST_RC_DS_FLAG;
+
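+ /* Build the HT rate set from the intersection of the peer's RX MCS
+ * map and the user-configured mask, tracking the highest NSS seen.
+ */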
+ for (i = 0, n = 0, max_nss = 0; i < IEEE80211_HT_MCS_MASK_LEN * 8; i++)
+ if ((ht_cap->mcs.rx_mask[i / 8] & BIT(i % 8)) &&
+ (ht_mcs_mask[i / 8] & BIT(i % 8))) {
+ max_nss = (i / 8) + 1;
+ arg->peer_ht_rates.rates[n++] = i;
+ }
+
+ /* This is a workaround for HT-enabled STAs which break the spec
+ * and have no HT capabilities RX mask (no HT RX MCS map).
+ *
+ * As per spec, in section 20.3.5 Modulation and coding scheme (MCS),
+ * MCS 0 through 7 are mandatory in 20MHz with 800 ns GI at all STAs.
+ *
+ * Firmware asserts if such situation occurs.
+ */
+ if (n == 0) {
+ arg->peer_ht_rates.num_rates = 8;
+ for (i = 0; i < arg->peer_ht_rates.num_rates; i++)
+ arg->peer_ht_rates.rates[i] = i;
+ } else {
+ arg->peer_ht_rates.num_rates = n;
+ arg->peer_nss = min(sta->deflink.rx_nss, max_nss);
+ }
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac ht peer %pM mcs cnt %d nss %d\n",
+ arg->peer_mac,
+ arg->peer_ht_rates.num_rates,
+ arg->peer_nss);
+}
+
+static int ath12k_mac_get_max_vht_mcs_map(u16 mcs_map, int nss)
+{
+ switch ((mcs_map >> (2 * nss)) & 0x3) {
+ case IEEE80211_VHT_MCS_SUPPORT_0_7: return BIT(8) - 1;
+ case IEEE80211_VHT_MCS_SUPPORT_0_8: return BIT(9) - 1;
+ case IEEE80211_VHT_MCS_SUPPORT_0_9: return BIT(10) - 1;
+ }
+ return 0;
+}
+
+static u16
+ath12k_peer_assoc_h_vht_limit(u16 tx_mcs_set,
+ const u16 vht_mcs_limit[NL80211_VHT_NSS_MAX])
+{
+ int idx_limit;
+ int nss;
+ u16 mcs_map;
+ u16 mcs;
+
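+ /* Clamp each NSS's 2-bit TX MCS map field to the highest MCS index
+ * allowed by the user-configured VHT MCS mask.
+ */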
+ for (nss = 0; nss < NL80211_VHT_NSS_MAX; nss++) {
+ mcs_map = ath12k_mac_get_max_vht_mcs_map(tx_mcs_set, nss) &
+ vht_mcs_limit[nss];
+
+ if (mcs_map)
+ idx_limit = fls(mcs_map) - 1;
+ else
+ idx_limit = -1;
+
+ switch (idx_limit) {
+ case 0:
+ case 1:
+ case 2:
+ case 3:
+ case 4:
+ case 5:
+ case 6:
+ case 7:
+ mcs = IEEE80211_VHT_MCS_SUPPORT_0_7;
+ break;
+ case 8:
+ mcs = IEEE80211_VHT_MCS_SUPPORT_0_8;
+ break;
+ case 9:
+ mcs = IEEE80211_VHT_MCS_SUPPORT_0_9;
+ break;
+ default:
+ WARN_ON(1);
+ fallthrough;
+ case -1:
+ mcs = IEEE80211_VHT_MCS_NOT_SUPPORTED;
+ break;
+ }
+
+ tx_mcs_set &= ~(0x3 << (nss * 2));
+ tx_mcs_set |= mcs << (nss * 2);
+ }
+
+ return tx_mcs_set;
+}
+
+static void ath12k_peer_assoc_h_vht(struct ath12k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ath12k_wmi_peer_assoc_arg *arg)
+{
+ const struct ieee80211_sta_vht_cap *vht_cap = &sta->deflink.vht_cap;
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct cfg80211_chan_def def;
+ enum nl80211_band band;
+ const u16 *vht_mcs_mask;
+ u16 tx_mcs_map;
+ u8 ampdu_factor;
+ u8 max_nss, vht_mcs;
+ int i;
+
+ if (WARN_ON(ath12k_mac_vif_chan(vif, &def)))
+ return;
+
+ if (!vht_cap->vht_supported)
+ return;
+
+ band = def.chan->band;
+ vht_mcs_mask = arvif->bitrate_mask.control[band].vht_mcs;
+
+ if (ath12k_peer_assoc_h_vht_masked(vht_mcs_mask))
+ return;
+
+ arg->vht_flag = true;
+
+ /* TODO: similar flags required? */
+ arg->vht_capable = true;
+
+ if (def.chan->band == NL80211_BAND_2GHZ)
+ arg->vht_ng_flag = true;
+
+ arg->peer_vht_caps = vht_cap->cap;
+
+ ampdu_factor = (vht_cap->cap &
+ IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK) >>
+ IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_SHIFT;
+
+ /* Workaround: Some Netgear/Linksys 11ac APs set Rx A-MPDU factor to
+ * zero in VHT IE. Using it would result in degraded throughput.
+ * arg->peer_max_mpdu at this point contains HT max_mpdu so keep
+ * it if VHT max_mpdu is smaller.
+ */
+ arg->peer_max_mpdu = max(arg->peer_max_mpdu,
+ (1U << (IEEE80211_HT_MAX_AMPDU_FACTOR +
+ ampdu_factor)) - 1);
+
+ if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_80)
+ arg->bw_80 = true;
+
+ if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_160)
+ arg->bw_160 = true;
+
+ /* Calculate peer NSS capability from VHT capabilities if STA
+ * supports VHT.
+ */
+ for (i = 0, max_nss = 0, vht_mcs = 0; i < NL80211_VHT_NSS_MAX; i++) {
+ vht_mcs = __le16_to_cpu(vht_cap->vht_mcs.rx_mcs_map) >>
+ (2 * i) & 3;
+
+ if (vht_mcs != IEEE80211_VHT_MCS_NOT_SUPPORTED &&
+ vht_mcs_mask[i])
+ max_nss = i + 1;
+ }
+ arg->peer_nss = min(sta->deflink.rx_nss, max_nss);
+ arg->rx_max_rate = __le16_to_cpu(vht_cap->vht_mcs.rx_highest);
+ arg->rx_mcs_set = __le16_to_cpu(vht_cap->vht_mcs.rx_mcs_map);
+ arg->tx_max_rate = __le16_to_cpu(vht_cap->vht_mcs.tx_highest);
+
+ tx_mcs_map = __le16_to_cpu(vht_cap->vht_mcs.tx_mcs_map);
+ arg->tx_mcs_set = ath12k_peer_assoc_h_vht_limit(tx_mcs_map, vht_mcs_mask);
+
+ /* In QCN9274 platform, VHT MCS rate 10 and 11 is enabled by default.
+ * VHT MCS rate 10 and 11 is not supported in 11ac standard.
+ * so explicitly disable the VHT MCS rate 10 and 11 in 11ac mode.
+ */
+ arg->tx_mcs_set &= ~IEEE80211_VHT_MCS_SUPPORT_0_11_MASK;
+ arg->tx_mcs_set |= IEEE80211_DISABLE_VHT_MCS_SUPPORT_0_11;
+
+ if ((arg->tx_mcs_set & IEEE80211_VHT_MCS_NOT_SUPPORTED) ==
+ IEEE80211_VHT_MCS_NOT_SUPPORTED)
+ arg->peer_vht_caps &= ~IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE;
+
+ /* TODO: Check */
+ arg->tx_max_mcs_nss = 0xFF;
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vht peer %pM max_mpdu %d flags 0x%x\n",
+ sta->addr, arg->peer_max_mpdu, arg->peer_flags);
+
+ /* TODO: rxnss_override */
+}
+
+static void ath12k_peer_assoc_h_he(struct ath12k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ath12k_wmi_peer_assoc_arg *arg)
+{
+ const struct ieee80211_sta_he_cap *he_cap = &sta->deflink.he_cap;
+ int i;
+ u8 ampdu_factor, rx_mcs_80, rx_mcs_160, max_nss;
+ u16 mcs_160_map, mcs_80_map;
+ bool support_160;
+ u16 v;
+
+ if (!he_cap->has_he)
+ return;
+
+ arg->he_flag = true;
+
+ support_160 = !!(he_cap->he_cap_elem.phy_cap_info[0] &
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G);
+
+ /* The supported HE MCS/NSS set of the peer is the intersection of its HE caps with our own */
+ mcs_160_map = le16_to_cpu(he_cap->he_mcs_nss_supp.rx_mcs_160);
+ mcs_80_map = le16_to_cpu(he_cap->he_mcs_nss_supp.rx_mcs_80);
+
+ if (support_160) {
+ for (i = 7; i >= 0; i--) {
+ u8 mcs_160 = (mcs_160_map >> (2 * i)) & 3;
+
+ if (mcs_160 != IEEE80211_HE_MCS_NOT_SUPPORTED) {
+ rx_mcs_160 = i + 1;
+ break;
+ }
+ }
+ }
+
+ for (i = 7; i >= 0; i--) {
+ u8 mcs_80 = (mcs_80_map >> (2 * i)) & 3;
+
+ if (mcs_80 != IEEE80211_HE_MCS_NOT_SUPPORTED) {
+ rx_mcs_80 = i + 1;
+ break;
+ }
+ }
+
+ if (support_160)
+ max_nss = min(rx_mcs_80, rx_mcs_160);
+ else
+ max_nss = rx_mcs_80;
+
+ arg->peer_nss = min(sta->deflink.rx_nss, max_nss);
+
+ memcpy(&arg->peer_he_cap_macinfo, he_cap->he_cap_elem.mac_cap_info,
+ sizeof(arg->peer_he_cap_macinfo));
+ memcpy(&arg->peer_he_cap_phyinfo, he_cap->he_cap_elem.phy_cap_info,
+ sizeof(arg->peer_he_cap_phyinfo));
+ arg->peer_he_ops = vif->bss_conf.he_oper.params;
+
+ /* the top most byte is used to indicate BSS color info */
+ arg->peer_he_ops &= 0xffffff;
+
+ /* As per section 26.6.1 IEEE Std 802.11ax-2022, if the Max AMPDU
+ * Exponent Extension in HE cap is zero, use the arg->peer_max_mpdu
+ * as calculated while parsing VHT caps(if VHT caps is present)
+ * or HT caps (if VHT caps is not present).
+ *
+ * For non-zero value of Max AMPDU Exponent Extension in HE MAC caps,
+ * if a HE STA sends VHT cap and HE cap IE in assoc request then, use
+ * MAX_AMPDU_LEN_FACTOR as 20 to calculate max_ampdu length.
+ * If a HE STA that does not send VHT cap, but HE and HT cap in assoc
+ * request, then use MAX_AMPDU_LEN_FACTOR as 16 to calculate max_ampdu
+ * length.
+ */
+ ampdu_factor = u32_get_bits(he_cap->he_cap_elem.mac_cap_info[3],
+ IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_MASK);
+
+ if (ampdu_factor) {
+ if (sta->deflink.vht_cap.vht_supported)
+ arg->peer_max_mpdu = (1 << (IEEE80211_HE_VHT_MAX_AMPDU_FACTOR +
+ ampdu_factor)) - 1;
+ else if (sta->deflink.ht_cap.ht_supported)
+ arg->peer_max_mpdu = (1 << (IEEE80211_HE_HT_MAX_AMPDU_FACTOR +
+ ampdu_factor)) - 1;
+ }
+
+ if (he_cap->he_cap_elem.phy_cap_info[6] &
+ IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT) {
+ int bit = 7;
+ int nss, ru;
+
+ arg->peer_ppet.numss_m1 = he_cap->ppe_thres[0] &
+ IEEE80211_PPE_THRES_NSS_MASK;
+ arg->peer_ppet.ru_bit_mask =
+ (he_cap->ppe_thres[0] &
+ IEEE80211_PPE_THRES_RU_INDEX_BITMASK_MASK) >>
+ IEEE80211_PPE_THRES_RU_INDEX_BITMASK_POS;
+
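+ /* Collect the 6-bit PPET16/PPET8 pair for every (NSS, RU) combination
+ * flagged in the RU bitmask and pack them into the WMI PPE thresholds.
+ */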
+ for (nss = 0; nss <= arg->peer_ppet.numss_m1; nss++) {
+ for (ru = 0; ru < 4; ru++) {
+ u32 val = 0;
+ int i;
+
+ if ((arg->peer_ppet.ru_bit_mask & BIT(ru)) == 0)
+ continue;
+ for (i = 0; i < 6; i++) {
+ val >>= 1;
+ val |= ((he_cap->ppe_thres[bit / 8] >>
+ (bit % 8)) & 0x1) << 5;
+ bit++;
+ }
+ arg->peer_ppet.ppet16_ppet8_ru3_ru0[nss] |=
+ val << (ru * 6);
+ }
+ }
+ }
+
+ if (he_cap->he_cap_elem.mac_cap_info[0] & IEEE80211_HE_MAC_CAP0_TWT_RES)
+ arg->twt_responder = true;
+ if (he_cap->he_cap_elem.mac_cap_info[0] & IEEE80211_HE_MAC_CAP0_TWT_REQ)
+ arg->twt_requester = true;
+
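+ /* Populate the per-bandwidth MCS/NSS sets; the 160 MHz case falls
+ * through so such peers also report their 80 MHz set.
+ */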
+ switch (sta->deflink.bandwidth) {
+ case IEEE80211_STA_RX_BW_160:
+ if (he_cap->he_cap_elem.phy_cap_info[0] &
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G) {
+ v = le16_to_cpu(he_cap->he_mcs_nss_supp.rx_mcs_80p80);
+ arg->peer_he_rx_mcs_set[WMI_HECAP_TXRX_MCS_NSS_IDX_80_80] = v;
+
+ v = le16_to_cpu(he_cap->he_mcs_nss_supp.tx_mcs_80p80);
+ arg->peer_he_tx_mcs_set[WMI_HECAP_TXRX_MCS_NSS_IDX_80_80] = v;
+
+ arg->peer_he_mcs_count++;
+ }
+ v = le16_to_cpu(he_cap->he_mcs_nss_supp.rx_mcs_160);
+ arg->peer_he_rx_mcs_set[WMI_HECAP_TXRX_MCS_NSS_IDX_160] = v;
+
+ v = le16_to_cpu(he_cap->he_mcs_nss_supp.tx_mcs_160);
+ arg->peer_he_tx_mcs_set[WMI_HECAP_TXRX_MCS_NSS_IDX_160] = v;
+
+ arg->peer_he_mcs_count++;
+ fallthrough;
+
+ default:
+ v = le16_to_cpu(he_cap->he_mcs_nss_supp.rx_mcs_80);
+ arg->peer_he_rx_mcs_set[WMI_HECAP_TXRX_MCS_NSS_IDX_80] = v;
+
+ v = le16_to_cpu(he_cap->he_mcs_nss_supp.tx_mcs_80);
+ arg->peer_he_tx_mcs_set[WMI_HECAP_TXRX_MCS_NSS_IDX_80] = v;
+
+ arg->peer_he_mcs_count++;
+ break;
+ }
+}
+
+static void ath12k_peer_assoc_h_smps(struct ieee80211_sta *sta,
+ struct ath12k_wmi_peer_assoc_arg *arg)
+{
+ const struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
+ int smps;
+
+ if (!ht_cap->ht_supported)
+ return;
+
+ smps = ht_cap->cap & IEEE80211_HT_CAP_SM_PS;
+ smps >>= IEEE80211_HT_CAP_SM_PS_SHIFT;
+
+ switch (smps) {
+ case WLAN_HT_CAP_SM_PS_STATIC:
+ arg->static_mimops_flag = true;
+ break;
+ case WLAN_HT_CAP_SM_PS_DYNAMIC:
+ arg->dynamic_mimops_flag = true;
+ break;
+ case WLAN_HT_CAP_SM_PS_DISABLED:
+ arg->spatial_mux_flag = true;
+ break;
+ default:
+ break;
+ }
+}
+
+static void ath12k_peer_assoc_h_qos(struct ath12k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ath12k_wmi_peer_assoc_arg *arg)
+{
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+
+ switch (arvif->vdev_type) {
+ case WMI_VDEV_TYPE_AP:
+ if (sta->wme) {
+ /* TODO: Check WME vs QoS */
+ arg->is_wme_set = true;
+ arg->qos_flag = true;
+ }
+
+ if (sta->wme && sta->uapsd_queues) {
+ /* TODO: Check WME vs QoS */
+ arg->is_wme_set = true;
+ arg->apsd_flag = true;
+ arg->peer_rate_caps |= WMI_HOST_RC_UAPSD_FLAG;
+ }
+ break;
+ case WMI_VDEV_TYPE_STA:
+ if (sta->wme) {
+ arg->is_wme_set = true;
+ arg->qos_flag = true;
+ }
+ break;
+ default:
+ break;
+ }
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac peer %pM qos %d\n",
+ sta->addr, arg->qos_flag);
+}
+
+static int ath12k_peer_assoc_qos_ap(struct ath12k *ar,
+ struct ath12k_vif *arvif,
+ struct ieee80211_sta *sta)
+{
+ struct ath12k_wmi_ap_ps_arg arg;
+ u32 max_sp;
+ u32 uapsd;
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ arg.vdev_id = arvif->vdev_id;
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac uapsd_queues 0x%x max_sp %d\n",
+ sta->uapsd_queues, sta->max_sp);
+
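+ /* Translate the mac80211 per-AC U-APSD bitmap into WMI delivery and
+ * trigger enable flags (VO maps to AC3, down to BE as AC0).
+ */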
+ uapsd = 0;
+ if (sta->uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_VO)
+ uapsd |= WMI_AP_PS_UAPSD_AC3_DELIVERY_EN |
+ WMI_AP_PS_UAPSD_AC3_TRIGGER_EN;
+ if (sta->uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_VI)
+ uapsd |= WMI_AP_PS_UAPSD_AC2_DELIVERY_EN |
+ WMI_AP_PS_UAPSD_AC2_TRIGGER_EN;
+ if (sta->uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_BK)
+ uapsd |= WMI_AP_PS_UAPSD_AC1_DELIVERY_EN |
+ WMI_AP_PS_UAPSD_AC1_TRIGGER_EN;
+ if (sta->uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_BE)
+ uapsd |= WMI_AP_PS_UAPSD_AC0_DELIVERY_EN |
+ WMI_AP_PS_UAPSD_AC0_TRIGGER_EN;
+
+ max_sp = 0;
+ if (sta->max_sp < MAX_WMI_AP_PS_PEER_PARAM_MAX_SP)
+ max_sp = sta->max_sp;
+
+ arg.param = WMI_AP_PS_PEER_PARAM_UAPSD;
+ arg.value = uapsd;
+ ret = ath12k_wmi_send_set_ap_ps_param_cmd(ar, sta->addr, &arg);
+ if (ret)
+ goto err;
+
+ arg.param = WMI_AP_PS_PEER_PARAM_MAX_SP;
+ arg.value = max_sp;
+ ret = ath12k_wmi_send_set_ap_ps_param_cmd(ar, sta->addr, &arg);
+ if (ret)
+ goto err;
+
+ /* TODO: revisit during testing */
+ arg.param = WMI_AP_PS_PEER_PARAM_SIFS_RESP_FRMTYPE;
+ arg.value = DISABLE_SIFS_RESPONSE_TRIGGER;
+ ret = ath12k_wmi_send_set_ap_ps_param_cmd(ar, sta->addr, &arg);
+ if (ret)
+ goto err;
+
+ arg.param = WMI_AP_PS_PEER_PARAM_SIFS_RESP_UAPSD;
+ arg.value = DISABLE_SIFS_RESPONSE_TRIGGER;
+ ret = ath12k_wmi_send_set_ap_ps_param_cmd(ar, sta->addr, &arg);
+ if (ret)
+ goto err;
+
+ return 0;
+
+err:
+ ath12k_warn(ar->ab, "failed to set ap ps peer param %d for vdev %i: %d\n",
+ arg.param, arvif->vdev_id, ret);
+ return ret;
+}
+
+static bool ath12k_mac_sta_has_ofdm_only(struct ieee80211_sta *sta)
+{
+ return sta->deflink.supp_rates[NL80211_BAND_2GHZ] >>
+ ATH12K_MAC_FIRST_OFDM_RATE_IDX;
+}
+
+static enum wmi_phy_mode ath12k_mac_get_phymode_vht(struct ath12k *ar,
+ struct ieee80211_sta *sta)
+{
+ if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_160) {
+ switch (sta->deflink.vht_cap.cap &
+ IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK) {
+ case IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ:
+ return MODE_11AC_VHT160;
+ case IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ:
+ return MODE_11AC_VHT80_80;
+ default:
+ /* not sure if this is a valid case? */
+ return MODE_11AC_VHT160;
+ }
+ }
+
+ if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_80)
+ return MODE_11AC_VHT80;
+
+ if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_40)
+ return MODE_11AC_VHT40;
+
+ if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_20)
+ return MODE_11AC_VHT20;
+
+ return MODE_UNKNOWN;
+}
+
+static enum wmi_phy_mode ath12k_mac_get_phymode_he(struct ath12k *ar,
+ struct ieee80211_sta *sta)
+{
+ if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_160) {
+ if (sta->deflink.he_cap.he_cap_elem.phy_cap_info[0] &
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G)
+ return MODE_11AX_HE160;
+ else if (sta->deflink.he_cap.he_cap_elem.phy_cap_info[0] &
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G)
+ return MODE_11AX_HE80_80;
+ /* not sure if this is a valid case? */
+ return MODE_11AX_HE160;
+ }
+
+ if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_80)
+ return MODE_11AX_HE80;
+
+ if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_40)
+ return MODE_11AX_HE40;
+
+ if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_20)
+ return MODE_11AX_HE20;
+
+ return MODE_UNKNOWN;
+}
+
+static void ath12k_peer_assoc_h_phymode(struct ath12k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ath12k_wmi_peer_assoc_arg *arg)
+{
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct cfg80211_chan_def def;
+ enum nl80211_band band;
+ const u8 *ht_mcs_mask;
+ const u16 *vht_mcs_mask;
+ enum wmi_phy_mode phymode = MODE_UNKNOWN;
+
+ if (WARN_ON(ath12k_mac_vif_chan(vif, &def)))
+ return;
+
+ band = def.chan->band;
+ ht_mcs_mask = arvif->bitrate_mask.control[band].ht_mcs;
+ vht_mcs_mask = arvif->bitrate_mask.control[band].vht_mcs;
+
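+ /* Pick the most capable phymode the peer advertises: HE first, then
+ * VHT and HT (subject to the configured MCS masks), then legacy.
+ */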
+ switch (band) {
+ case NL80211_BAND_2GHZ:
+ if (sta->deflink.he_cap.has_he) {
+ if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_80)
+ phymode = MODE_11AX_HE80_2G;
+ else if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_40)
+ phymode = MODE_11AX_HE40_2G;
+ else
+ phymode = MODE_11AX_HE20_2G;
+ } else if (sta->deflink.vht_cap.vht_supported &&
+ !ath12k_peer_assoc_h_vht_masked(vht_mcs_mask)) {
+ if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_40)
+ phymode = MODE_11AC_VHT40;
+ else
+ phymode = MODE_11AC_VHT20;
+ } else if (sta->deflink.ht_cap.ht_supported &&
+ !ath12k_peer_assoc_h_ht_masked(ht_mcs_mask)) {
+ if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_40)
+ phymode = MODE_11NG_HT40;
+ else
+ phymode = MODE_11NG_HT20;
+ } else if (ath12k_mac_sta_has_ofdm_only(sta)) {
+ phymode = MODE_11G;
+ } else {
+ phymode = MODE_11B;
+ }
+ break;
+ case NL80211_BAND_5GHZ:
+ case NL80211_BAND_6GHZ:
+ /* Check HE first */
+ if (sta->deflink.he_cap.has_he) {
+ phymode = ath12k_mac_get_phymode_he(ar, sta);
+ } else if (sta->deflink.vht_cap.vht_supported &&
+ !ath12k_peer_assoc_h_vht_masked(vht_mcs_mask)) {
+ phymode = ath12k_mac_get_phymode_vht(ar, sta);
+ } else if (sta->deflink.ht_cap.ht_supported &&
+ !ath12k_peer_assoc_h_ht_masked(ht_mcs_mask)) {
+ if (sta->deflink.bandwidth >= IEEE80211_STA_RX_BW_40)
+ phymode = MODE_11NA_HT40;
+ else
+ phymode = MODE_11NA_HT20;
+ } else {
+ phymode = MODE_11A;
+ }
+ break;
+ default:
+ break;
+ }
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac peer %pM phymode %s\n",
+ sta->addr, ath12k_mac_phymode_str(phymode));
+
+ arg->peer_phymode = phymode;
+ WARN_ON(phymode == MODE_UNKNOWN);
+}
+
+static void ath12k_peer_assoc_prepare(struct ath12k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ath12k_wmi_peer_assoc_arg *arg,
+ bool reassoc)
+{
+ lockdep_assert_held(&ar->conf_mutex);
+
+ memset(arg, 0, sizeof(*arg));
+
+ reinit_completion(&ar->peer_assoc_done);
+
+ arg->peer_new_assoc = !reassoc;
+ ath12k_peer_assoc_h_basic(ar, vif, sta, arg);
+ ath12k_peer_assoc_h_crypto(ar, vif, sta, arg);
+ ath12k_peer_assoc_h_rates(ar, vif, sta, arg);
+ ath12k_peer_assoc_h_ht(ar, vif, sta, arg);
+ ath12k_peer_assoc_h_vht(ar, vif, sta, arg);
+ ath12k_peer_assoc_h_he(ar, vif, sta, arg);
+ ath12k_peer_assoc_h_qos(ar, vif, sta, arg);
+ ath12k_peer_assoc_h_phymode(ar, vif, sta, arg);
+ ath12k_peer_assoc_h_smps(sta, arg);
+
+ /* TODO: amsdu_disable req? */
+}
+
+static int ath12k_setup_peer_smps(struct ath12k *ar, struct ath12k_vif *arvif,
+ const u8 *addr,
+ const struct ieee80211_sta_ht_cap *ht_cap)
+{
+ int smps;
+
+ if (!ht_cap->ht_supported)
+ return 0;
+
+ smps = ht_cap->cap & IEEE80211_HT_CAP_SM_PS;
+ smps >>= IEEE80211_HT_CAP_SM_PS_SHIFT;
+
+ if (smps >= ARRAY_SIZE(ath12k_smps_map))
+ return -EINVAL;
+
+ return ath12k_wmi_set_peer_param(ar, addr, arvif->vdev_id,
+ WMI_PEER_MIMO_PS_STATE,
+ ath12k_smps_map[smps]);
+}
+
+static void ath12k_bss_assoc(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *bss_conf)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_wmi_peer_assoc_arg peer_arg;
+ struct ieee80211_sta *ap_sta;
+ struct ath12k_peer *peer;
+ bool is_auth = false;
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %i assoc bssid %pM aid %d\n",
+ arvif->vdev_id, arvif->bssid, arvif->aid);
+
+ rcu_read_lock();
+
+ ap_sta = ieee80211_find_sta(vif, bss_conf->bssid);
+ if (!ap_sta) {
+ ath12k_warn(ar->ab, "failed to find station entry for bss %pM vdev %i\n",
+ bss_conf->bssid, arvif->vdev_id);
+ rcu_read_unlock();
+ return;
+ }
+
+ ath12k_peer_assoc_prepare(ar, vif, ap_sta, &peer_arg, false);
+
+ rcu_read_unlock();
+
+ ret = ath12k_wmi_send_peer_assoc_cmd(ar, &peer_arg);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to run peer assoc for %pM vdev %i: %d\n",
+ bss_conf->bssid, arvif->vdev_id, ret);
+ return;
+ }
+
+ if (!wait_for_completion_timeout(&ar->peer_assoc_done, 1 * HZ)) {
+ ath12k_warn(ar->ab, "failed to get peer assoc conf event for %pM vdev %i\n",
+ bss_conf->bssid, arvif->vdev_id);
+ return;
+ }
+
+ ret = ath12k_setup_peer_smps(ar, arvif, bss_conf->bssid,
+ &ap_sta->deflink.ht_cap);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to setup peer SMPS for vdev %d: %d\n",
+ arvif->vdev_id, ret);
+ return;
+ }
+
+ WARN_ON(arvif->is_up);
+
+ arvif->aid = vif->cfg.aid;
+ ether_addr_copy(arvif->bssid, bss_conf->bssid);
+
+ ret = ath12k_wmi_vdev_up(ar, arvif->vdev_id, arvif->aid, arvif->bssid);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set vdev %d up: %d\n",
+ arvif->vdev_id, ret);
+ return;
+ }
+
+ arvif->is_up = true;
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
+ "mac vdev %d up (associated) bssid %pM aid %d\n",
+ arvif->vdev_id, bss_conf->bssid, vif->cfg.aid);
+
+ spin_lock_bh(&ar->ab->base_lock);
+
+ peer = ath12k_peer_find(ar->ab, arvif->vdev_id, arvif->bssid);
+ if (peer && peer->is_authorized)
+ is_auth = true;
+
+ spin_unlock_bh(&ar->ab->base_lock);
+
+ /* Authorize BSS Peer */
+ if (is_auth) {
+ ret = ath12k_wmi_set_peer_param(ar, arvif->bssid,
+ arvif->vdev_id,
+ WMI_PEER_AUTHORIZE,
+ 1);
+ if (ret)
+ ath12k_warn(ar->ab, "Unable to authorize BSS peer: %d\n", ret);
+ }
+
+ ret = ath12k_wmi_send_obss_spr_cmd(ar, arvif->vdev_id,
+ &bss_conf->he_obss_pd);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to set vdev %i OBSS PD parameters: %d\n",
+ arvif->vdev_id, ret);
+}
+
+static void ath12k_bss_disassoc(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %i disassoc bssid %pM\n",
+ arvif->vdev_id, arvif->bssid);
+
+ ret = ath12k_wmi_vdev_down(ar, arvif->vdev_id);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to down vdev %i: %d\n",
+ arvif->vdev_id, ret);
+
+ arvif->is_up = false;
+
+ /* TODO: cancel connection_loss_work */
+}
+
+static int ath12k_mac_get_rate_hw_value(int bitrate)
+{
+ u32 preamble;
+ u16 hw_value;
+ int rate;
+ size_t i;
+
+ if (ath12k_mac_bitrate_is_cck(bitrate))
+ preamble = WMI_RATE_PREAMBLE_CCK;
+ else
+ preamble = WMI_RATE_PREAMBLE_OFDM;
+
+ for (i = 0; i < ARRAY_SIZE(ath12k_legacy_rates); i++) {
+ if (ath12k_legacy_rates[i].bitrate != bitrate)
+ continue;
+
+ hw_value = ath12k_legacy_rates[i].hw_value;
+ rate = ATH12K_HW_RATE_CODE(hw_value, 0, preamble);
+
+ return rate;
+ }
+
+ return -EINVAL;
+}
+
+static void ath12k_recalculate_mgmt_rate(struct ath12k *ar,
+ struct ieee80211_vif *vif,
+ struct cfg80211_chan_def *def)
+{
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ const struct ieee80211_supported_band *sband;
+ u8 basic_rate_idx;
+ int hw_rate_code;
+ u32 vdev_param;
+ u16 bitrate;
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ sband = ar->hw->wiphy->bands[def->chan->band];
+ basic_rate_idx = ffs(vif->bss_conf.basic_rates) - 1;
+ bitrate = sband->bitrates[basic_rate_idx].bitrate;
+
+ hw_rate_code = ath12k_mac_get_rate_hw_value(bitrate);
+ if (hw_rate_code < 0) {
+ ath12k_warn(ar->ab, "bitrate not supported %d\n", bitrate);
+ return;
+ }
+
+ vdev_param = WMI_VDEV_PARAM_MGMT_RATE;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id, vdev_param,
+ hw_rate_code);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to set mgmt tx rate %d\n", ret);
+
+ vdev_param = WMI_VDEV_PARAM_BEACON_RATE;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id, vdev_param,
+ hw_rate_code);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to set beacon tx rate %d\n", ret);
+}
+
+static int ath12k_mac_fils_discovery(struct ath12k_vif *arvif,
+ struct ieee80211_bss_conf *info)
+{
+ struct ath12k *ar = arvif->ar;
+ struct sk_buff *tmpl;
+ int ret;
+ u32 interval;
+ bool unsol_bcast_probe_resp_enabled = false;
+
+ if (info->fils_discovery.max_interval) {
+ interval = info->fils_discovery.max_interval;
+
+ tmpl = ieee80211_get_fils_discovery_tmpl(ar->hw, arvif->vif);
+ if (tmpl)
+ ret = ath12k_wmi_fils_discovery_tmpl(ar, arvif->vdev_id,
+ tmpl);
+ } else if (info->unsol_bcast_probe_resp_interval) {
+ unsol_bcast_probe_resp_enabled = true;
+ interval = info->unsol_bcast_probe_resp_interval;
+
+ tmpl = ieee80211_get_unsol_bcast_probe_resp_tmpl(ar->hw,
+ arvif->vif);
+ if (tmpl)
+ ret = ath12k_wmi_probe_resp_tmpl(ar, arvif->vdev_id,
+ tmpl);
+ } else { /* Disable */
+ return ath12k_wmi_fils_discovery(ar, arvif->vdev_id, 0, false);
+ }
+
+ if (!tmpl) {
+ ath12k_warn(ar->ab,
+ "mac vdev %i failed to retrieve %s template\n",
+ arvif->vdev_id, (unsol_bcast_probe_resp_enabled ?
+ "unsolicited broadcast probe response" :
+ "FILS discovery"));
+ return -EPERM;
+ }
+ kfree_skb(tmpl);
+
+ if (!ret)
+ ret = ath12k_wmi_fils_discovery(ar, arvif->vdev_id, interval,
+ unsol_bcast_probe_resp_enabled);
+
+ return ret;
+}
+
+static void ath12k_mac_op_bss_info_changed(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *info,
+ u64 changed)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
+ struct cfg80211_chan_def def;
+ u32 param_id, param_value;
+ enum nl80211_band band;
+ u32 vdev_param;
+ int mcast_rate;
+ u32 preamble;
+ u16 hw_value;
+ u16 bitrate;
+ int ret;
+ u8 rateidx;
+ u32 rate;
+
+ mutex_lock(&ar->conf_mutex);
+
+ if (changed & BSS_CHANGED_BEACON_INT) {
+ arvif->beacon_interval = info->beacon_int;
+
+ param_id = WMI_VDEV_PARAM_BEACON_INTERVAL;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ param_id,
+ arvif->beacon_interval);
+ if (ret)
+ ath12k_warn(ar->ab, "Failed to set beacon interval for VDEV: %d\n",
+ arvif->vdev_id);
+ else
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
+ "Beacon interval: %d set for VDEV: %d\n",
+ arvif->beacon_interval, arvif->vdev_id);
+ }
+
+ if (changed & BSS_CHANGED_BEACON) {
+ param_id = WMI_PDEV_PARAM_BEACON_TX_MODE;
+ param_value = WMI_BEACON_STAGGERED_MODE;
+ ret = ath12k_wmi_pdev_set_param(ar, param_id,
+ param_value, ar->pdev->pdev_id);
+ if (ret)
+ ath12k_warn(ar->ab, "Failed to set beacon mode for VDEV: %d\n",
+ arvif->vdev_id);
+ else
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
+ "Set staggered beacon mode for VDEV: %d\n",
+ arvif->vdev_id);
+
+ ret = ath12k_mac_setup_bcn_tmpl(arvif);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to update bcn template: %d\n",
+ ret);
+ }
+
+ if (changed & (BSS_CHANGED_BEACON_INFO | BSS_CHANGED_BEACON)) {
+ arvif->dtim_period = info->dtim_period;
+
+ param_id = WMI_VDEV_PARAM_DTIM_PERIOD;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ param_id,
+ arvif->dtim_period);
+
+ if (ret)
+ ath12k_warn(ar->ab, "Failed to set dtim period for VDEV %d: %i\n",
+ arvif->vdev_id, ret);
+ else
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
+ "DTIM period: %d set for VDEV: %d\n",
+ arvif->dtim_period, arvif->vdev_id);
+ }
+
+ if (changed & BSS_CHANGED_SSID &&
+ vif->type == NL80211_IFTYPE_AP) {
+ arvif->u.ap.ssid_len = vif->cfg.ssid_len;
+ if (vif->cfg.ssid_len)
+ memcpy(arvif->u.ap.ssid, vif->cfg.ssid, vif->cfg.ssid_len);
+ arvif->u.ap.hidden_ssid = info->hidden_ssid;
+ }
+
+ if (changed & BSS_CHANGED_BSSID && !is_zero_ether_addr(info->bssid))
+ ether_addr_copy(arvif->bssid, info->bssid);
+
+ if (changed & BSS_CHANGED_BEACON_ENABLED) {
+ ath12k_control_beaconing(arvif, info);
+
+ if (arvif->is_up && vif->bss_conf.he_support &&
+ vif->bss_conf.he_oper.params) {
+ /* TODO: Extend to support 1024 BA Bitmap size */
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ WMI_VDEV_PARAM_BA_MODE,
+ WMI_BA_MODE_BUFFER_SIZE_256);
+ if (ret)
+ ath12k_warn(ar->ab,
+ "failed to set BA BUFFER SIZE 256 for vdev: %d\n",
+ arvif->vdev_id);
+
+ param_id = WMI_VDEV_PARAM_HEOPS_0_31;
+ param_value = vif->bss_conf.he_oper.params;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ param_id, param_value);
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
+ "he oper param: %x set for VDEV: %d\n",
+ param_value, arvif->vdev_id);
+
+ if (ret)
+ ath12k_warn(ar->ab, "Failed to set he oper params %x for VDEV %d: %i\n",
+ param_value, arvif->vdev_id, ret);
+ }
+ }
+
+ if (changed & BSS_CHANGED_ERP_CTS_PROT) {
+ u32 cts_prot;
+
+ cts_prot = !!(info->use_cts_prot);
+ param_id = WMI_VDEV_PARAM_PROTECTION_MODE;
+
+ if (arvif->is_started) {
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ param_id, cts_prot);
+ if (ret)
+ ath12k_warn(ar->ab, "Failed to set CTS prot for VDEV: %d\n",
+ arvif->vdev_id);
+ else
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "Set CTS prot: %d for VDEV: %d\n",
+ cts_prot, arvif->vdev_id);
+ } else {
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "defer protection mode setup, vdev is not ready yet\n");
+ }
+ }
+
+ if (changed & BSS_CHANGED_ERP_SLOT) {
+ u32 slottime;
+
+ if (info->use_short_slot)
+ slottime = WMI_VDEV_SLOT_TIME_SHORT; /* 9us */
+ else
+ slottime = WMI_VDEV_SLOT_TIME_LONG; /* 20us */
+
+ param_id = WMI_VDEV_PARAM_SLOT_TIME;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ param_id, slottime);
+ if (ret)
+ ath12k_warn(ar->ab, "Failed to set erp slot for VDEV: %d\n",
+ arvif->vdev_id);
+ else
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
+ "Set slottime: %d for VDEV: %d\n",
+ slottime, arvif->vdev_id);
+ }
+
+ if (changed & BSS_CHANGED_ERP_PREAMBLE) {
+ u32 preamble;
+
+ if (info->use_short_preamble)
+ preamble = WMI_VDEV_PREAMBLE_SHORT;
+ else
+ preamble = WMI_VDEV_PREAMBLE_LONG;
+
+ param_id = WMI_VDEV_PARAM_PREAMBLE;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ param_id, preamble);
+ if (ret)
+ ath12k_warn(ar->ab, "Failed to set preamble for VDEV: %d\n",
+ arvif->vdev_id);
+ else
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
+ "Set preamble: %d for VDEV: %d\n",
+ preamble, arvif->vdev_id);
+ }
+
+ if (changed & BSS_CHANGED_ASSOC) {
+ if (vif->cfg.assoc)
+ ath12k_bss_assoc(hw, vif, info);
+ else
+ ath12k_bss_disassoc(hw, vif);
+ }
+
+ if (changed & BSS_CHANGED_TXPOWER) {
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev_id %i txpower %d\n",
+ arvif->vdev_id, info->txpower);
+
+ arvif->txpower = info->txpower;
+ ath12k_mac_txpower_recalc(ar);
+ }
+
+ if (changed & BSS_CHANGED_MCAST_RATE &&
+ !ath12k_mac_vif_chan(arvif->vif, &def)) {
+ band = def.chan->band;
+ mcast_rate = vif->bss_conf.mcast_rate[band];
+
+ if (mcast_rate > 0)
+ rateidx = mcast_rate - 1;
+ else
+ rateidx = ffs(vif->bss_conf.basic_rates) - 1;
+
+ if (ar->pdev->cap.supported_bands & WMI_HOST_WLAN_5G_CAP)
+ rateidx += ATH12K_MAC_FIRST_OFDM_RATE_IDX;
+
+ bitrate = ath12k_legacy_rates[rateidx].bitrate;
+ hw_value = ath12k_legacy_rates[rateidx].hw_value;
+
+ if (ath12k_mac_bitrate_is_cck(bitrate))
+ preamble = WMI_RATE_PREAMBLE_CCK;
+ else
+ preamble = WMI_RATE_PREAMBLE_OFDM;
+
+ rate = ATH12K_HW_RATE_CODE(hw_value, 0, preamble);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
+ "mac vdev %d mcast_rate %x\n",
+ arvif->vdev_id, rate);
+
+ vdev_param = WMI_VDEV_PARAM_MCAST_DATA_RATE;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ vdev_param, rate);
+ if (ret)
+ ath12k_warn(ar->ab,
+ "failed to set mcast rate on vdev %i: %d\n",
+ arvif->vdev_id, ret);
+
+ vdev_param = WMI_VDEV_PARAM_BCAST_DATA_RATE;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ vdev_param, rate);
+ if (ret)
+ ath12k_warn(ar->ab,
+ "failed to set bcast rate on vdev %i: %d\n",
+ arvif->vdev_id, ret);
+ }
+
+ if (changed & BSS_CHANGED_BASIC_RATES &&
+ !ath12k_mac_vif_chan(arvif->vif, &def))
+ ath12k_recalculate_mgmt_rate(ar, vif, &def);
+
+ if (changed & BSS_CHANGED_TWT) {
+ if (info->twt_requester || info->twt_responder)
+ ath12k_wmi_send_twt_enable_cmd(ar, ar->pdev->pdev_id);
+ else
+ ath12k_wmi_send_twt_disable_cmd(ar, ar->pdev->pdev_id);
+ }
+
+ if (changed & BSS_CHANGED_HE_OBSS_PD)
+ ath12k_wmi_send_obss_spr_cmd(ar, arvif->vdev_id,
+ &info->he_obss_pd);
+
+ if (changed & BSS_CHANGED_HE_BSS_COLOR) {
+ if (vif->type == NL80211_IFTYPE_AP) {
+ ret = ath12k_wmi_obss_color_cfg_cmd(ar,
+ arvif->vdev_id,
+ info->he_bss_color.color,
+ ATH12K_BSS_COLOR_AP_PERIODS,
+ info->he_bss_color.enabled);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to set bss color collision on vdev %i: %d\n",
+ arvif->vdev_id, ret);
+ } else if (vif->type == NL80211_IFTYPE_STATION) {
+ ret = ath12k_wmi_send_bss_color_change_enable_cmd(ar,
+ arvif->vdev_id,
+ 1);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to enable bss color change on vdev %i: %d\n",
+ arvif->vdev_id, ret);
+ ret = ath12k_wmi_obss_color_cfg_cmd(ar,
+ arvif->vdev_id,
+ 0,
+ ATH12K_BSS_COLOR_STA_PERIODS,
+ 1);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to set bss color collision on vdev %i: %d\n",
+ arvif->vdev_id, ret);
+ }
+ }
+
+ if (changed & BSS_CHANGED_FILS_DISCOVERY ||
+ changed & BSS_CHANGED_UNSOL_BCAST_PROBE_RESP)
+ ath12k_mac_fils_discovery(arvif, info);
+
+ mutex_unlock(&ar->conf_mutex);
+}
+
+void __ath12k_mac_scan_finish(struct ath12k *ar)
+{
+ lockdep_assert_held(&ar->data_lock);
+
+ switch (ar->scan.state) {
+ case ATH12K_SCAN_IDLE:
+ break;
+ case ATH12K_SCAN_RUNNING:
+ case ATH12K_SCAN_ABORTING:
+ if (!ar->scan.is_roc) {
+ struct cfg80211_scan_info info = {
+ .aborted = (ar->scan.state ==
+ ATH12K_SCAN_ABORTING),
+ };
+
+ ieee80211_scan_completed(ar->hw, &info);
+ } else if (ar->scan.roc_notify) {
+ ieee80211_remain_on_channel_expired(ar->hw);
+ }
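+ /* Both completed and aborted scans fall through to the common
+ * cleanup that resets the state machine back to IDLE.
+ */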
+ fallthrough;
+ case ATH12K_SCAN_STARTING:
+ ar->scan.state = ATH12K_SCAN_IDLE;
+ ar->scan_channel = NULL;
+ ar->scan.roc_freq = 0;
+ cancel_delayed_work(&ar->scan.timeout);
+ complete(&ar->scan.completed);
+ break;
+ }
+}
+
+void ath12k_mac_scan_finish(struct ath12k *ar)
+{
+ spin_lock_bh(&ar->data_lock);
+ __ath12k_mac_scan_finish(ar);
+ spin_unlock_bh(&ar->data_lock);
+}
+
+static int ath12k_scan_stop(struct ath12k *ar)
+{
+ struct ath12k_wmi_scan_cancel_arg arg = {
+ .req_type = WLAN_SCAN_CANCEL_SINGLE,
+ .scan_id = ATH12K_SCAN_ID,
+ };
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ /* TODO: Fill other STOP Params */
+ arg.pdev_id = ar->pdev->pdev_id;
+
+ ret = ath12k_wmi_send_scan_stop_cmd(ar, &arg);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to stop wmi scan: %d\n", ret);
+ goto out;
+ }
+
+ ret = wait_for_completion_timeout(&ar->scan.completed, 3 * HZ);
+ if (ret == 0) {
+ ath12k_warn(ar->ab,
+ "failed to receive scan abort comple: timed out\n");
+ ret = -ETIMEDOUT;
+ } else if (ret > 0) {
+ ret = 0;
+ }
+
+out:
+ /* Scan state should be updated upon scan completion but in case
+ * firmware fails to deliver the event (for whatever reason) it is
+ * desired to clean up scan state anyway. Firmware may have simply
+ * dropped the scan completion event because the transport pipe
+ * overflowed with data, and it may recover on its own before the
+ * next scan request is submitted.
+ */
+ spin_lock_bh(&ar->data_lock);
+ if (ar->scan.state != ATH12K_SCAN_IDLE)
+ __ath12k_mac_scan_finish(ar);
+ spin_unlock_bh(&ar->data_lock);
+
+ return ret;
+}
+
+static void ath12k_scan_abort(struct ath12k *ar)
+{
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ spin_lock_bh(&ar->data_lock);
+
+ switch (ar->scan.state) {
+ case ATH12K_SCAN_IDLE:
+ /* This can happen if the timeout worker kicked in and triggered
+ * an abort while the scan completion was being processed.
+ */
+ break;
+ case ATH12K_SCAN_STARTING:
+ case ATH12K_SCAN_ABORTING:
+ ath12k_warn(ar->ab, "refusing scan abortion due to invalid scan state: %d\n",
+ ar->scan.state);
+ break;
+ case ATH12K_SCAN_RUNNING:
+ ar->scan.state = ATH12K_SCAN_ABORTING;
+ spin_unlock_bh(&ar->data_lock);
+
+ ret = ath12k_scan_stop(ar);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to abort scan: %d\n", ret);
+
+ spin_lock_bh(&ar->data_lock);
+ break;
+ }
+
+ spin_unlock_bh(&ar->data_lock);
+}
+
+static void ath12k_scan_timeout_work(struct work_struct *work)
+{
+ struct ath12k *ar = container_of(work, struct ath12k,
+ scan.timeout.work);
+
+ mutex_lock(&ar->conf_mutex);
+ ath12k_scan_abort(ar);
+ mutex_unlock(&ar->conf_mutex);
+}
+
+static int ath12k_start_scan(struct ath12k *ar,
+ struct ath12k_wmi_scan_req_arg *arg)
+{
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ ret = ath12k_wmi_send_scan_start_cmd(ar, arg);
+ if (ret)
+ return ret;
+
+ ret = wait_for_completion_timeout(&ar->scan.started, 1 * HZ);
+ if (ret == 0) {
+ ret = ath12k_scan_stop(ar);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to stop scan: %d\n", ret);
+
+ return -ETIMEDOUT;
+ }
+
+ /* The scan started, but if the state machine is already back at
+ * IDLE the firmware rejected the request. Return an error in that
+ * case; this is probably due to some issue in the firmware, but
+ * there is no need to wedge the driver because of it.
+ */
+ spin_lock_bh(&ar->data_lock);
+ if (ar->scan.state == ATH12K_SCAN_IDLE) {
+ spin_unlock_bh(&ar->data_lock);
+ return -EINVAL;
+ }
+ spin_unlock_bh(&ar->data_lock);
+
+ return 0;
+}
+
+static int ath12k_mac_op_hw_scan(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_scan_request *hw_req)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
+ struct cfg80211_scan_request *req = &hw_req->req;
+ struct ath12k_wmi_scan_req_arg arg = {};
+ int ret;
+ int i;
+
+ mutex_lock(&ar->conf_mutex);
+
+ spin_lock_bh(&ar->data_lock);
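+ /* A new scan may only be started from IDLE; any other state means
+ * a scan is already in flight and the request is rejected with
+ * -EBUSY.
+ */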
+ switch (ar->scan.state) {
+ case ATH12K_SCAN_IDLE:
+ reinit_completion(&ar->scan.started);
+ reinit_completion(&ar->scan.completed);
+ ar->scan.state = ATH12K_SCAN_STARTING;
+ ar->scan.is_roc = false;
+ ar->scan.vdev_id = arvif->vdev_id;
+ ret = 0;
+ break;
+ case ATH12K_SCAN_STARTING:
+ case ATH12K_SCAN_RUNNING:
+ case ATH12K_SCAN_ABORTING:
+ ret = -EBUSY;
+ break;
+ }
+ spin_unlock_bh(&ar->data_lock);
+
+ if (ret)
+ goto exit;
+
+ ath12k_wmi_start_scan_init(ar, &arg);
+ arg.vdev_id = arvif->vdev_id;
+ arg.scan_id = ATH12K_SCAN_ID;
+
+ if (req->ie_len) {
+ arg.extraie.ptr = kmemdup(req->ie, req->ie_len, GFP_KERNEL);
+ if (!arg.extraie.ptr) {
+ /* reset scan state so a later scan can proceed */
+ ath12k_mac_scan_finish(ar);
+ ret = -ENOMEM;
+ goto exit;
+ }
+ arg.extraie.len = req->ie_len;
+ }
+
+ if (req->n_ssids) {
+ arg.num_ssids = req->n_ssids;
+ for (i = 0; i < arg.num_ssids; i++)
+ arg.ssid[i] = req->ssids[i];
+ } else {
+ arg.scan_flags |= WMI_SCAN_FLAG_PASSIVE;
+ }
+
+ if (req->n_channels) {
+ arg.num_chan = req->n_channels;
+ for (i = 0; i < arg.num_chan; i++)
+ arg.chan_list[i] = req->channels[i]->center_freq;
+ }
+
+ ret = ath12k_start_scan(ar, &arg);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to start hw scan: %d\n", ret);
+ spin_lock_bh(&ar->data_lock);
+ ar->scan.state = ATH12K_SCAN_IDLE;
+ spin_unlock_bh(&ar->data_lock);
+ }
+
+ /* Add a margin to account for event/command processing */
+ ieee80211_queue_delayed_work(ar->hw, &ar->scan.timeout,
+ msecs_to_jiffies(arg.max_scan_time +
+ ATH12K_MAC_SCAN_TIMEOUT_MSECS));
+
+exit:
+ if (req->ie_len)
+ kfree(arg.extraie.ptr);
+
+ mutex_unlock(&ar->conf_mutex);
+ return ret;
+}
+
+static void ath12k_mac_op_cancel_hw_scan(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ struct ath12k *ar = hw->priv;
+
+ mutex_lock(&ar->conf_mutex);
+ ath12k_scan_abort(ar);
+ mutex_unlock(&ar->conf_mutex);
+
+ cancel_delayed_work_sync(&ar->scan.timeout);
+}
+
+static int ath12k_install_key(struct ath12k_vif *arvif,
+ struct ieee80211_key_conf *key,
+ enum set_key_cmd cmd,
+ const u8 *macaddr, u32 flags)
+{
+ int ret;
+ struct ath12k *ar = arvif->ar;
+ struct wmi_vdev_install_key_arg arg = {
+ .vdev_id = arvif->vdev_id,
+ .key_idx = key->keyidx,
+ .key_len = key->keylen,
+ .key_data = key->key,
+ .key_flags = flags,
+ .macaddr = macaddr,
+ };
+
+ lockdep_assert_held(&arvif->ar->conf_mutex);
+
+ reinit_completion(&ar->install_key_done);
+
+ if (test_bit(ATH12K_FLAG_HW_CRYPTO_DISABLED, &ar->ab->dev_flags))
+ return 0;
+
+ if (cmd == DISABLE_KEY) {
+ /* TODO: Check if FW expects value other than NONE for del */
+ /* arg.key_cipher = WMI_CIPHER_NONE; */
+ arg.key_len = 0;
+ arg.key_data = NULL;
+ goto install;
+ }
+
+ switch (key->cipher) {
+ case WLAN_CIPHER_SUITE_CCMP:
+ arg.key_cipher = WMI_CIPHER_AES_CCM;
+ /* TODO: Re-check if flag is valid */
+ key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV_MGMT;
+ break;
+ case WLAN_CIPHER_SUITE_TKIP:
+ arg.key_cipher = WMI_CIPHER_TKIP;
+ arg.key_txmic_len = 8;
+ arg.key_rxmic_len = 8;
+ break;
+ case WLAN_CIPHER_SUITE_CCMP_256:
+ arg.key_cipher = WMI_CIPHER_AES_CCM;
+ break;
+ case WLAN_CIPHER_SUITE_GCMP:
+ case WLAN_CIPHER_SUITE_GCMP_256:
+ arg.key_cipher = WMI_CIPHER_AES_GCM;
+ break;
+ default:
+ ath12k_warn(ar->ab, "cipher %d is not supported\n", key->cipher);
+ return -EOPNOTSUPP;
+ }
+
+ if (test_bit(ATH12K_FLAG_RAW_MODE, &ar->ab->dev_flags))
+ key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV |
+ IEEE80211_KEY_FLAG_RESERVE_TAILROOM;
+
+install:
+ ret = ath12k_wmi_vdev_install_key(arvif->ar, &arg);
+
+ if (ret)
+ return ret;
+
+ if (!wait_for_completion_timeout(&ar->install_key_done, 1 * HZ))
+ return -ETIMEDOUT;
+
+ if (ether_addr_equal(macaddr, arvif->vif->addr))
+ arvif->key_cipher = key->cipher;
+
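+ /* install_key_status is filled in by the WMI install-key completion
+ * handler before install_key_done is signalled.
+ */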
+ return ar->install_key_status ? -EINVAL : 0;
+}
+
+static int ath12k_clear_peer_keys(struct ath12k_vif *arvif,
+ const u8 *addr)
+{
+ struct ath12k *ar = arvif->ar;
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_peer *peer;
+ int first_errno = 0;
+ int ret;
+ int i;
+ u32 flags = 0;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ spin_lock_bh(&ab->base_lock);
+ peer = ath12k_peer_find(ab, arvif->vdev_id, addr);
+ spin_unlock_bh(&ab->base_lock);
+
+ if (!peer)
+ return -ENOENT;
+
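+ /* Delete every installed key but keep going on failure; the first
+ * error encountered is what gets reported to the caller.
+ */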
+ for (i = 0; i < ARRAY_SIZE(peer->keys); i++) {
+ if (!peer->keys[i])
+ continue;
+
+ /* key flags are not required to delete the key */
+ ret = ath12k_install_key(arvif, peer->keys[i],
+ DISABLE_KEY, addr, flags);
+ if (ret < 0 && first_errno == 0)
+ first_errno = ret;
+
+ if (ret < 0)
+ ath12k_warn(ab, "failed to remove peer key %d: %d\n",
+ i, ret);
+
+ spin_lock_bh(&ab->base_lock);
+ peer->keys[i] = NULL;
+ spin_unlock_bh(&ab->base_lock);
+ }
+
+ return first_errno;
+}
+
+static int ath12k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ struct ieee80211_vif *vif, struct ieee80211_sta *sta,
+ struct ieee80211_key_conf *key)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
+ struct ath12k_peer *peer;
+ struct ath12k_sta *arsta;
+ const u8 *peer_addr;
+ int ret = 0;
+ u32 flags = 0;
+
+ /* BIP needs to be done in software */
+ if (key->cipher == WLAN_CIPHER_SUITE_AES_CMAC ||
+ key->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_128 ||
+ key->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_256 ||
+ key->cipher == WLAN_CIPHER_SUITE_BIP_CMAC_256)
+ return 1;
+
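+ /* Returning 1 makes mac80211 fall back to software encryption when
+ * hardware crypto has been disabled.
+ */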
+ if (test_bit(ATH12K_FLAG_HW_CRYPTO_DISABLED, &ar->ab->dev_flags))
+ return 1;
+
+ if (key->keyidx > WMI_MAX_KEY_INDEX)
+ return -ENOSPC;
+
+ mutex_lock(&ar->conf_mutex);
+
+ if (sta)
+ peer_addr = sta->addr;
+ else if (arvif->vdev_type == WMI_VDEV_TYPE_STA)
+ peer_addr = vif->bss_conf.bssid;
+ else
+ peer_addr = vif->addr;
+
+ key->hw_key_idx = key->keyidx;
+
+ /* The peer should not disappear midway (unless FW goes awry) since
+ * we already hold conf_mutex; just make sure it's there now.
+ */
+ spin_lock_bh(&ab->base_lock);
+ peer = ath12k_peer_find(ab, arvif->vdev_id, peer_addr);
+ spin_unlock_bh(&ab->base_lock);
+
+ if (!peer) {
+ if (cmd == SET_KEY) {
+ ath12k_warn(ab, "cannot install key for non-existent peer %pM\n",
+ peer_addr);
+ ret = -EOPNOTSUPP;
+ goto exit;
+ } else {
+ /* if the peer doesn't exist there is no key to disable
+ * anymore
+ */
+ goto exit;
+ }
+ }
+
+ if (key->flags & IEEE80211_KEY_FLAG_PAIRWISE)
+ flags |= WMI_KEY_PAIRWISE;
+ else
+ flags |= WMI_KEY_GROUP;
+
+ ret = ath12k_install_key(arvif, key, cmd, peer_addr, flags);
+ if (ret) {
+ ath12k_warn(ab, "ath12k_install_key failed (%d)\n", ret);
+ goto exit;
+ }
+
+ ret = ath12k_dp_rx_peer_pn_replay_config(arvif, peer_addr, cmd, key);
+ if (ret) {
+ ath12k_warn(ab, "failed to offload PN replay detection %d\n", ret);
+ goto exit;
+ }
+
+ spin_lock_bh(&ab->base_lock);
+ peer = ath12k_peer_find(ab, arvif->vdev_id, peer_addr);
+ if (peer && cmd == SET_KEY) {
+ peer->keys[key->keyidx] = key;
+ if (key->flags & IEEE80211_KEY_FLAG_PAIRWISE) {
+ peer->ucast_keyidx = key->keyidx;
+ peer->sec_type = ath12k_dp_tx_get_encrypt_type(key->cipher);
+ } else {
+ peer->mcast_keyidx = key->keyidx;
+ peer->sec_type_grp = ath12k_dp_tx_get_encrypt_type(key->cipher);
+ }
+ } else if (peer && cmd == DISABLE_KEY) {
+ peer->keys[key->keyidx] = NULL;
+ if (key->flags & IEEE80211_KEY_FLAG_PAIRWISE)
+ peer->ucast_keyidx = 0;
+ else
+ peer->mcast_keyidx = 0;
+ } else if (!peer) {
+ /* impossible unless FW goes crazy */
+ ath12k_warn(ab, "peer %pM disappeared!\n", peer_addr);
+ }
+
+ if (sta) {
+ arsta = (struct ath12k_sta *)sta->drv_priv;
+
+ switch (key->cipher) {
+ case WLAN_CIPHER_SUITE_TKIP:
+ case WLAN_CIPHER_SUITE_CCMP:
+ case WLAN_CIPHER_SUITE_CCMP_256:
+ case WLAN_CIPHER_SUITE_GCMP:
+ case WLAN_CIPHER_SUITE_GCMP_256:
+ if (cmd == SET_KEY)
+ arsta->pn_type = HAL_PN_TYPE_WPA;
+ else
+ arsta->pn_type = HAL_PN_TYPE_NONE;
+ break;
+ default:
+ arsta->pn_type = HAL_PN_TYPE_NONE;
+ break;
+ }
+ }
+
+ spin_unlock_bh(&ab->base_lock);
+
+exit:
+ mutex_unlock(&ar->conf_mutex);
+ return ret;
+}
+
+static int
+ath12k_mac_bitrate_mask_num_vht_rates(struct ath12k *ar,
+ enum nl80211_band band,
+ const struct cfg80211_bitrate_mask *mask)
+{
+ int num_rates = 0;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(mask->control[band].vht_mcs); i++)
+ num_rates += hweight16(mask->control[band].vht_mcs[i]);
+
+ return num_rates;
+}
+
+static int
+ath12k_mac_set_peer_vht_fixed_rate(struct ath12k_vif *arvif,
+ struct ieee80211_sta *sta,
+ const struct cfg80211_bitrate_mask *mask,
+ enum nl80211_band band)
+{
+ struct ath12k *ar = arvif->ar;
+ u8 vht_rate, nss;
+ u32 rate_code;
+ int ret, i;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ nss = 0;
+
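+ /* Look for the single NSS/MCS pair left enabled in the mask:
+ * hweight16() == 1 means exactly one MCS survives for that stream
+ * count.
+ */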
+ for (i = 0; i < ARRAY_SIZE(mask->control[band].vht_mcs); i++) {
+ if (hweight16(mask->control[band].vht_mcs[i]) == 1) {
+ nss = i + 1;
+ vht_rate = ffs(mask->control[band].vht_mcs[i]) - 1;
+ }
+ }
+
+ if (!nss) {
+ ath12k_warn(ar->ab, "No single VHT Fixed rate found to set for %pM",
+ sta->addr);
+ return -EINVAL;
+ }
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
+ "Setting fixed VHT rate for peer %pM; the device will not switch to any other selected rates\n",
+ sta->addr);
+
+ rate_code = ATH12K_HW_RATE_CODE(vht_rate, nss - 1,
+ WMI_RATE_PREAMBLE_VHT);
+ ret = ath12k_wmi_set_peer_param(ar, sta->addr,
+ arvif->vdev_id,
+ WMI_PEER_PARAM_FIXED_RATE,
+ rate_code);
+ if (ret)
+ ath12k_warn(ar->ab,
+ "failed to update STA %pM Fixed Rate %d: %d\n",
+ sta->addr, rate_code, ret);
+
+ return ret;
+}
+
+static int ath12k_station_assoc(struct ath12k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ bool reassoc)
+{
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
+ struct ath12k_wmi_peer_assoc_arg peer_arg;
+ int ret;
+ struct cfg80211_chan_def def;
+ enum nl80211_band band;
+ struct cfg80211_bitrate_mask *mask;
+ u8 num_vht_rates;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (WARN_ON(ath12k_mac_vif_chan(vif, &def)))
+ return -EPERM;
+
+ band = def.chan->band;
+ mask = &arvif->bitrate_mask;
+
+ ath12k_peer_assoc_prepare(ar, vif, sta, &peer_arg, reassoc);
+
+ ret = ath12k_wmi_send_peer_assoc_cmd(ar, &peer_arg);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to run peer assoc for STA %pM vdev %i: %d\n",
+ sta->addr, arvif->vdev_id, ret);
+ return ret;
+ }
+
+ if (!wait_for_completion_timeout(&ar->peer_assoc_done, 1 * HZ)) {
+ ath12k_warn(ar->ab, "failed to get peer assoc conf event for %pM vdev %i\n",
+ sta->addr, arvif->vdev_id);
+ return -ETIMEDOUT;
+ }
+
+ num_vht_rates = ath12k_mac_bitrate_mask_num_vht_rates(ar, band, mask);
+
+ /* If a single VHT rate is configured (by set_bitrate_mask()),
+ * peer_assoc will disable VHT; it is then re-enabled here through a
+ * peer-specific fixed rate parameter.
+ * Note that all other rates and NSS will be disabled for this peer.
+ */
+ if (sta->deflink.vht_cap.vht_supported && num_vht_rates == 1) {
+ ret = ath12k_mac_set_peer_vht_fixed_rate(arvif, sta, mask,
+ band);
+ if (ret)
+ return ret;
+ }
+
+ /* Re-assoc is run only to update supported rates for given station. It
+ * doesn't make much sense to reconfigure the peer completely.
+ */
+ if (reassoc)
+ return 0;
+
+ ret = ath12k_setup_peer_smps(ar, arvif, sta->addr,
+ &sta->deflink.ht_cap);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to setup peer SMPS for vdev %d: %d\n",
+ arvif->vdev_id, ret);
+ return ret;
+ }
+
+ if (!sta->wme) {
+ arvif->num_legacy_stations++;
+ ret = ath12k_recalc_rtscts_prot(arvif);
+ if (ret)
+ return ret;
+ }
+
+ if (sta->wme && sta->uapsd_queues) {
+ ret = ath12k_peer_assoc_qos_ap(ar, arvif, sta);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set qos params for STA %pM for vdev %i: %d\n",
+ sta->addr, arvif->vdev_id, ret);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+static int ath12k_station_disassoc(struct ath12k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (!sta->wme) {
+ arvif->num_legacy_stations--;
+ ret = ath12k_recalc_rtscts_prot(arvif);
+ if (ret)
+ return ret;
+ }
+
+ ret = ath12k_clear_peer_keys(arvif, sta->addr);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to clear all peer keys for vdev %i: %d\n",
+ arvif->vdev_id, ret);
+ return ret;
+ }
+ return 0;
+}
+
+static void ath12k_sta_rc_update_wk(struct work_struct *wk)
+{
+ struct ath12k *ar;
+ struct ath12k_vif *arvif;
+ struct ath12k_sta *arsta;
+ struct ieee80211_sta *sta;
+ struct cfg80211_chan_def def;
+ enum nl80211_band band;
+ const u8 *ht_mcs_mask;
+ const u16 *vht_mcs_mask;
+ u32 changed, bw, nss, smps;
+ int err, num_vht_rates;
+ const struct cfg80211_bitrate_mask *mask;
+ struct ath12k_wmi_peer_assoc_arg peer_arg;
+
+ arsta = container_of(wk, struct ath12k_sta, update_wk);
+ sta = container_of((void *)arsta, struct ieee80211_sta, drv_priv);
+ arvif = arsta->arvif;
+ ar = arvif->ar;
+
+ if (WARN_ON(ath12k_mac_vif_chan(arvif->vif, &def)))
+ return;
+
+ band = def.chan->band;
+ ht_mcs_mask = arvif->bitrate_mask.control[band].ht_mcs;
+ vht_mcs_mask = arvif->bitrate_mask.control[band].vht_mcs;
+
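+ /* Snapshot and clear the pending updates under data_lock so changes
+ * reported concurrently by ath12k_mac_op_sta_rc_update() are not
+ * lost.
+ */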
+ spin_lock_bh(&ar->data_lock);
+
+ changed = arsta->changed;
+ arsta->changed = 0;
+
+ bw = arsta->bw;
+ nss = arsta->nss;
+ smps = arsta->smps;
+
+ spin_unlock_bh(&ar->data_lock);
+
+ mutex_lock(&ar->conf_mutex);
+
+ nss = max_t(u32, 1, nss);
+ nss = min(nss, max(ath12k_mac_max_ht_nss(ht_mcs_mask),
+ ath12k_mac_max_vht_nss(vht_mcs_mask)));
+
+ if (changed & IEEE80211_RC_BW_CHANGED) {
+ err = ath12k_wmi_set_peer_param(ar, sta->addr, arvif->vdev_id,
+ WMI_PEER_CHWIDTH, bw);
+ if (err)
+ ath12k_warn(ar->ab, "failed to update STA %pM peer bw %d: %d\n",
+ sta->addr, bw, err);
+ }
+
+ if (changed & IEEE80211_RC_NSS_CHANGED) {
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac update sta %pM nss %d\n",
+ sta->addr, nss);
+
+ err = ath12k_wmi_set_peer_param(ar, sta->addr, arvif->vdev_id,
+ WMI_PEER_NSS, nss);
+ if (err)
+ ath12k_warn(ar->ab, "failed to update STA %pM nss %d: %d\n",
+ sta->addr, nss, err);
+ }
+
+ if (changed & IEEE80211_RC_SMPS_CHANGED) {
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac update sta %pM smps %d\n",
+ sta->addr, smps);
+
+ err = ath12k_wmi_set_peer_param(ar, sta->addr, arvif->vdev_id,
+ WMI_PEER_MIMO_PS_STATE, smps);
+ if (err)
+ ath12k_warn(ar->ab, "failed to update STA %pM smps %d: %d\n",
+ sta->addr, smps, err);
+ }
+
+ if (changed & IEEE80211_RC_SUPP_RATES_CHANGED) {
+ mask = &arvif->bitrate_mask;
+ num_vht_rates = ath12k_mac_bitrate_mask_num_vht_rates(ar, band,
+ mask);
+
+ /* ath12k_peer_assoc_prepare() rejects VHT rates in the bitrate
+ * mask unless they can be expressed as a contiguous range, and
+ * marks the VHT tx rateset as unsupported, so setting multiple
+ * arbitrary VHT MCS values (e.g. MCS 4, 5, 6) per peer is not
+ * supported here. A single VHT rate in the mask can be applied as
+ * a per-peer fixed rate. Note that even if HT rates are configured
+ * in the bitrate mask, the device will not switch to them while a
+ * per-peer fixed rate is set.
+ * TODO: Check RATEMASK_CMDID to support auto rate selection across
+ * HT/VHT and for multiple VHT MCS support.
+ */
+ if (sta->deflink.vht_cap.vht_supported && num_vht_rates == 1) {
+ ath12k_mac_set_peer_vht_fixed_rate(arvif, sta, mask,
+ band);
+ } else {
+ /* If the peer is non-VHT or no fixed VHT rate
+ * is provided in the new bitrate mask we set the
+ * other rates using peer_assoc command.
+ */
+ ath12k_peer_assoc_prepare(ar, arvif->vif, sta,
+ &peer_arg, true);
+
+ err = ath12k_wmi_send_peer_assoc_cmd(ar, &peer_arg);
+ if (err)
+ ath12k_warn(ar->ab, "failed to run peer assoc for STA %pM vdev %i: %d\n",
+ sta->addr, arvif->vdev_id, err);
+
+ if (!wait_for_completion_timeout(&ar->peer_assoc_done, 1 * HZ))
+ ath12k_warn(ar->ab, "failed to get peer assoc conf event for %pM vdev %i\n",
+ sta->addr, arvif->vdev_id);
+ }
+ }
+
+ mutex_unlock(&ar->conf_mutex);
+}
+
+static int ath12k_mac_inc_num_stations(struct ath12k_vif *arvif,
+ struct ieee80211_sta *sta)
+{
+ struct ath12k *ar = arvif->ar;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (arvif->vdev_type == WMI_VDEV_TYPE_STA && !sta->tdls)
+ return 0;
+
+ if (ar->num_stations >= ar->max_num_stations)
+ return -ENOBUFS;
+
+ ar->num_stations++;
+
+ return 0;
+}
+
+static void ath12k_mac_dec_num_stations(struct ath12k_vif *arvif,
+ struct ieee80211_sta *sta)
+{
+ struct ath12k *ar = arvif->ar;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (arvif->vdev_type == WMI_VDEV_TYPE_STA && !sta->tdls)
+ return;
+
+ ar->num_stations--;
+}
+
+static int ath12k_mac_station_add(struct ath12k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
+ struct ath12k_sta *arsta = (struct ath12k_sta *)sta->drv_priv;
+ struct ath12k_wmi_peer_create_arg peer_param;
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ ret = ath12k_mac_inc_num_stations(arvif, sta);
+ if (ret) {
+ ath12k_warn(ab, "refusing to associate station: too many connected already (%d)\n",
+ ar->max_num_stations);
+ goto exit;
+ }
+
+ arsta->rx_stats = kzalloc(sizeof(*arsta->rx_stats), GFP_KERNEL);
+ if (!arsta->rx_stats) {
+ ret = -ENOMEM;
+ goto dec_num_station;
+ }
+
+ peer_param.vdev_id = arvif->vdev_id;
+ peer_param.peer_addr = sta->addr;
+ peer_param.peer_type = WMI_PEER_TYPE_DEFAULT;
+
+ ret = ath12k_peer_create(ar, arvif, sta, &peer_param);
+ if (ret) {
+ ath12k_warn(ab, "Failed to add peer: %pM for VDEV: %d\n",
+ sta->addr, arvif->vdev_id);
+ goto free_peer;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_MAC, "Added peer: %pM for VDEV: %d\n",
+ sta->addr, arvif->vdev_id);
+
+ if (ieee80211_vif_is_mesh(vif)) {
+ ret = ath12k_wmi_set_peer_param(ar, sta->addr,
+ arvif->vdev_id,
+ WMI_PEER_USE_4ADDR, 1);
+ if (ret) {
+ ath12k_warn(ab, "failed to STA %pM 4addr capability: %d\n",
+ sta->addr, ret);
+ goto free_peer;
+ }
+ }
+
+ ret = ath12k_dp_peer_setup(ar, arvif->vdev_id, sta->addr);
+ if (ret) {
+ ath12k_warn(ab, "failed to setup dp for peer %pM on vdev %i (%d)\n",
+ sta->addr, arvif->vdev_id, ret);
+ goto free_peer;
+ }
+
+ if (ab->hw_params->vdev_start_delay &&
+ !arvif->is_started &&
+ arvif->vdev_type != WMI_VDEV_TYPE_AP) {
+ ret = ath12k_start_vdev_delay(ar->hw, vif);
+ if (ret) {
+ ath12k_warn(ab, "failed to delay vdev start: %d\n", ret);
+ goto free_peer;
+ }
+ }
+
+ return 0;
+
+free_peer:
+ ath12k_peer_delete(ar, arvif->vdev_id, sta->addr);
+dec_num_station:
+ ath12k_mac_dec_num_stations(arvif, sta);
+exit:
+ return ret;
+}
+
+static int ath12k_mac_op_sta_state(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ enum ieee80211_sta_state old_state,
+ enum ieee80211_sta_state new_state)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
+ struct ath12k_sta *arsta = (struct ath12k_sta *)sta->drv_priv;
+ struct ath12k_peer *peer;
+ int ret = 0;
+
+ /* cancel must be done outside the mutex to avoid deadlock */
+ if ((old_state == IEEE80211_STA_NONE &&
+ new_state == IEEE80211_STA_NOTEXIST))
+ cancel_work_sync(&arsta->update_wk);
+
+ mutex_lock(&ar->conf_mutex);
+
+ if (old_state == IEEE80211_STA_NOTEXIST &&
+ new_state == IEEE80211_STA_NONE) {
+ memset(arsta, 0, sizeof(*arsta));
+ arsta->arvif = arvif;
+ INIT_WORK(&arsta->update_wk, ath12k_sta_rc_update_wk);
+
+ ret = ath12k_mac_station_add(ar, vif, sta);
+ if (ret)
+ ath12k_warn(ar->ab, "Failed to add station: %pM for VDEV: %d\n",
+ sta->addr, arvif->vdev_id);
+ } else if ((old_state == IEEE80211_STA_NONE &&
+ new_state == IEEE80211_STA_NOTEXIST)) {
+ ath12k_dp_peer_cleanup(ar, arvif->vdev_id, sta->addr);
+
+ ret = ath12k_peer_delete(ar, arvif->vdev_id, sta->addr);
+ if (ret)
+ ath12k_warn(ar->ab, "Failed to delete peer: %pM for VDEV: %d\n",
+ sta->addr, arvif->vdev_id);
+ else
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "Removed peer: %pM for VDEV: %d\n",
+ sta->addr, arvif->vdev_id);
+
+ ath12k_mac_dec_num_stations(arvif, sta);
+ spin_lock_bh(&ar->ab->base_lock);
+ peer = ath12k_peer_find(ar->ab, arvif->vdev_id, sta->addr);
+ if (peer && peer->sta == sta) {
+ ath12k_warn(ar->ab, "Found peer entry %pM n vdev %i after it was supposedly removed\n",
+ vif->addr, arvif->vdev_id);
+ peer->sta = NULL;
+ list_del(&peer->list);
+ kfree(peer);
+ ar->num_peers--;
+ }
+ spin_unlock_bh(&ar->ab->base_lock);
+
+ kfree(arsta->rx_stats);
+ arsta->rx_stats = NULL;
+ } else if (old_state == IEEE80211_STA_AUTH &&
+ new_state == IEEE80211_STA_ASSOC &&
+ (vif->type == NL80211_IFTYPE_AP ||
+ vif->type == NL80211_IFTYPE_MESH_POINT ||
+ vif->type == NL80211_IFTYPE_ADHOC)) {
+ ret = ath12k_station_assoc(ar, vif, sta, false);
+ if (ret)
+ ath12k_warn(ar->ab, "Failed to associate station: %pM\n",
+ sta->addr);
+ } else if (old_state == IEEE80211_STA_ASSOC &&
+ new_state == IEEE80211_STA_AUTHORIZED) {
+ spin_lock_bh(&ar->ab->base_lock);
+
+ peer = ath12k_peer_find(ar->ab, arvif->vdev_id, sta->addr);
+ if (peer)
+ peer->is_authorized = true;
+
+ spin_unlock_bh(&ar->ab->base_lock);
+
+ if (vif->type == NL80211_IFTYPE_STATION && arvif->is_up) {
+ ret = ath12k_wmi_set_peer_param(ar, sta->addr,
+ arvif->vdev_id,
+ WMI_PEER_AUTHORIZE,
+ 1);
+ if (ret)
+ ath12k_warn(ar->ab, "Unable to authorize peer %pM vdev %d: %d\n",
+ sta->addr, arvif->vdev_id, ret);
+ }
+ } else if (old_state == IEEE80211_STA_AUTHORIZED &&
+ new_state == IEEE80211_STA_ASSOC) {
+ spin_lock_bh(&ar->ab->base_lock);
+
+ peer = ath12k_peer_find(ar->ab, arvif->vdev_id, sta->addr);
+ if (peer)
+ peer->is_authorized = false;
+
+ spin_unlock_bh(&ar->ab->base_lock);
+ } else if (old_state == IEEE80211_STA_ASSOC &&
+ new_state == IEEE80211_STA_AUTH &&
+ (vif->type == NL80211_IFTYPE_AP ||
+ vif->type == NL80211_IFTYPE_MESH_POINT ||
+ vif->type == NL80211_IFTYPE_ADHOC)) {
+ ret = ath12k_station_disassoc(ar, vif, sta);
+ if (ret)
+ ath12k_warn(ar->ab, "Failed to disassociate station: %pM\n",
+ sta->addr);
+ }
+
+ mutex_unlock(&ar->conf_mutex);
+ return ret;
+}
+
+static int ath12k_mac_op_sta_set_txpwr(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ int ret;
+ s16 txpwr;
+
+ if (sta->deflink.txpwr.type == NL80211_TX_POWER_AUTOMATIC) {
+ txpwr = 0;
+ } else {
+ txpwr = sta->deflink.txpwr.power;
+ if (!txpwr)
+ return -EINVAL;
+ }
+
+ if (txpwr > ATH12K_TX_POWER_MAX_VAL || txpwr < ATH12K_TX_POWER_MIN_VAL)
+ return -EINVAL;
+
+ mutex_lock(&ar->conf_mutex);
+
+ ret = ath12k_wmi_set_peer_param(ar, sta->addr, arvif->vdev_id,
+ WMI_PEER_USE_FIXED_PWR, txpwr);
+ if (ret)
+ ath12k_warn(ar->ab, "failed to set tx power for station: %d\n",
+ ret);
+
+ mutex_unlock(&ar->conf_mutex);
+ return ret;
+}
+
+static void ath12k_mac_op_sta_rc_update(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ u32 changed)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_sta *arsta = (struct ath12k_sta *)sta->drv_priv;
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_peer *peer;
+ u32 bw, smps;
+
+ spin_lock_bh(&ar->ab->base_lock);
+
+ peer = ath12k_peer_find(ar->ab, arvif->vdev_id, sta->addr);
+ if (!peer) {
+ spin_unlock_bh(&ar->ab->base_lock);
+ ath12k_warn(ar->ab, "mac sta rc update failed to find peer %pM on vdev %i\n",
+ sta->addr, arvif->vdev_id);
+ return;
+ }
+
+ spin_unlock_bh(&ar->ab->base_lock);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
+ "mac sta rc update for %pM changed %08x bw %d nss %d smps %d\n",
+ sta->addr, changed, sta->deflink.bandwidth, sta->deflink.rx_nss,
+ sta->deflink.smps_mode);
+
+ spin_lock_bh(&ar->data_lock);
+
+ if (changed & IEEE80211_RC_BW_CHANGED) {
+ bw = WMI_PEER_CHWIDTH_20MHZ;
+
+ switch (sta->deflink.bandwidth) {
+ case IEEE80211_STA_RX_BW_20:
+ bw = WMI_PEER_CHWIDTH_20MHZ;
+ break;
+ case IEEE80211_STA_RX_BW_40:
+ bw = WMI_PEER_CHWIDTH_40MHZ;
+ break;
+ case IEEE80211_STA_RX_BW_80:
+ bw = WMI_PEER_CHWIDTH_80MHZ;
+ break;
+ case IEEE80211_STA_RX_BW_160:
+ bw = WMI_PEER_CHWIDTH_160MHZ;
+ break;
+ default:
+ ath12k_warn(ar->ab, "Invalid bandwidth %d in rc update for %pM\n",
+ sta->deflink.bandwidth, sta->addr);
+ bw = WMI_PEER_CHWIDTH_20MHZ;
+ break;
+ }
+
+ arsta->bw = bw;
+ }
+
+ if (changed & IEEE80211_RC_NSS_CHANGED)
+ arsta->nss = sta->deflink.rx_nss;
+
+ if (changed & IEEE80211_RC_SMPS_CHANGED) {
+ smps = WMI_PEER_SMPS_PS_NONE;
+
+ switch (sta->deflink.smps_mode) {
+ case IEEE80211_SMPS_AUTOMATIC:
+ case IEEE80211_SMPS_OFF:
+ smps = WMI_PEER_SMPS_PS_NONE;
+ break;
+ case IEEE80211_SMPS_STATIC:
+ smps = WMI_PEER_SMPS_STATIC;
+ break;
+ case IEEE80211_SMPS_DYNAMIC:
+ smps = WMI_PEER_SMPS_DYNAMIC;
+ break;
+ default:
+ ath12k_warn(ar->ab, "Invalid smps %d in sta rc update for %pM\n",
+ sta->deflink.smps_mode, sta->addr);
+ smps = WMI_PEER_SMPS_PS_NONE;
+ break;
+ }
+
+ arsta->smps = smps;
+ }
+
+ arsta->changed |= changed;
+
+ spin_unlock_bh(&ar->data_lock);
+
+ ieee80211_queue_work(hw, &arsta->update_wk);
+}
+
+static int ath12k_conf_tx_uapsd(struct ath12k *ar, struct ieee80211_vif *vif,
+ u16 ac, bool enable)
+{
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
+ u32 value;
+ int ret;
+
+ if (arvif->vdev_type != WMI_VDEV_TYPE_STA)
+ return 0;
+
+ switch (ac) {
+ case IEEE80211_AC_VO:
+ value = WMI_STA_PS_UAPSD_AC3_DELIVERY_EN |
+ WMI_STA_PS_UAPSD_AC3_TRIGGER_EN;
+ break;
+ case IEEE80211_AC_VI:
+ value = WMI_STA_PS_UAPSD_AC2_DELIVERY_EN |
+ WMI_STA_PS_UAPSD_AC2_TRIGGER_EN;
+ break;
+ case IEEE80211_AC_BE:
+ value = WMI_STA_PS_UAPSD_AC1_DELIVERY_EN |
+ WMI_STA_PS_UAPSD_AC1_TRIGGER_EN;
+ break;
+ case IEEE80211_AC_BK:
+ value = WMI_STA_PS_UAPSD_AC0_DELIVERY_EN |
+ WMI_STA_PS_UAPSD_AC0_TRIGGER_EN;
+ break;
+ }
+
+ if (enable)
+ arvif->u.sta.uapsd |= value;
+ else
+ arvif->u.sta.uapsd &= ~value;
+
+ ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id,
+ WMI_STA_PS_PARAM_UAPSD,
+ arvif->u.sta.uapsd);
+ if (ret) {
+ ath12k_warn(ar->ab, "could not set uapsd params %d\n", ret);
+ goto exit;
+ }
+
+ if (arvif->u.sta.uapsd)
+ value = WMI_STA_PS_RX_WAKE_POLICY_POLL_UAPSD;
+ else
+ value = WMI_STA_PS_RX_WAKE_POLICY_WAKE;
+
+ ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id,
+ WMI_STA_PS_PARAM_RX_WAKE_POLICY,
+ value);
+ if (ret)
+ ath12k_warn(ar->ab, "could not set rx wake param %d\n", ret);
+
+exit:
+ return ret;
+}
+
+static int ath12k_mac_op_conf_tx(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ unsigned int link_id, u16 ac,
+ const struct ieee80211_tx_queue_params *params)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct wmi_wmm_params_arg *p = NULL;
+ int ret;
+
+ mutex_lock(&ar->conf_mutex);
+
+ switch (ac) {
+ case IEEE80211_AC_VO:
+ p = &arvif->wmm_params.ac_vo;
+ break;
+ case IEEE80211_AC_VI:
+ p = &arvif->wmm_params.ac_vi;
+ break;
+ case IEEE80211_AC_BE:
+ p = &arvif->wmm_params.ac_be;
+ break;
+ case IEEE80211_AC_BK:
+ p = &arvif->wmm_params.ac_bk;
+ break;
+ }
+
+ if (WARN_ON(!p)) {
+ ret = -EINVAL;
+ goto exit;
+ }
+
+ p->cwmin = params->cw_min;
+ p->cwmax = params->cw_max;
+ p->aifs = params->aifs;
+ p->txop = params->txop;
+
+ ret = ath12k_wmi_send_wmm_update_cmd(ar, arvif->vdev_id,
+ &arvif->wmm_params);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set wmm params: %d\n", ret);
+ goto exit;
+ }
+
+ ret = ath12k_conf_tx_uapsd(ar, vif, ac, params->uapsd);
+
+ if (ret)
+ ath12k_warn(ar->ab, "failed to set sta uapsd: %d\n", ret);
+
+exit:
+ mutex_unlock(&ar->conf_mutex);
+ return ret;
+}
+
+static struct ieee80211_sta_ht_cap
+ath12k_create_ht_cap(struct ath12k *ar, u32 ar_ht_cap, u32 rate_cap_rx_chainmask)
+{
+ int i;
+ struct ieee80211_sta_ht_cap ht_cap = {0};
+ u32 ar_vht_cap = ar->pdev->cap.vht_cap;
+
+ if (!(ar_ht_cap & WMI_HT_CAP_ENABLED))
+ return ht_cap;
+
+ ht_cap.ht_supported = 1;
+ ht_cap.ampdu_factor = IEEE80211_HT_MAX_AMPDU_64K;
+ ht_cap.ampdu_density = IEEE80211_HT_MPDU_DENSITY_NONE;
+ ht_cap.cap |= IEEE80211_HT_CAP_SUP_WIDTH_20_40;
+ ht_cap.cap |= IEEE80211_HT_CAP_DSSSCCK40;
+ ht_cap.cap |= WLAN_HT_CAP_SM_PS_STATIC << IEEE80211_HT_CAP_SM_PS_SHIFT;
+
+ if (ar_ht_cap & WMI_HT_CAP_HT20_SGI)
+ ht_cap.cap |= IEEE80211_HT_CAP_SGI_20;
+
+ if (ar_ht_cap & WMI_HT_CAP_HT40_SGI)
+ ht_cap.cap |= IEEE80211_HT_CAP_SGI_40;
+
+ if (ar_ht_cap & WMI_HT_CAP_DYNAMIC_SMPS) {
+ u32 smps;
+
+ smps = WLAN_HT_CAP_SM_PS_DYNAMIC;
+ smps <<= IEEE80211_HT_CAP_SM_PS_SHIFT;
+
+ ht_cap.cap |= smps;
+ }
+
+ if (ar_ht_cap & WMI_HT_CAP_TX_STBC)
+ ht_cap.cap |= IEEE80211_HT_CAP_TX_STBC;
+
+ if (ar_ht_cap & WMI_HT_CAP_RX_STBC) {
+ u32 stbc;
+
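+ /* Extract the RX STBC field from the WMI HT caps and re-encode
+ * it at the IEEE HT capability bit offset.
+ */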
+ stbc = ar_ht_cap;
+ stbc &= WMI_HT_CAP_RX_STBC;
+ stbc >>= WMI_HT_CAP_RX_STBC_MASK_SHIFT;
+ stbc <<= IEEE80211_HT_CAP_RX_STBC_SHIFT;
+ stbc &= IEEE80211_HT_CAP_RX_STBC;
+
+ ht_cap.cap |= stbc;
+ }
+
+ if (ar_ht_cap & WMI_HT_CAP_RX_LDPC)
+ ht_cap.cap |= IEEE80211_HT_CAP_LDPC_CODING;
+
+ if (ar_ht_cap & WMI_HT_CAP_L_SIG_TXOP_PROT)
+ ht_cap.cap |= IEEE80211_HT_CAP_LSIG_TXOP_PROT;
+
+ if (ar_vht_cap & WMI_VHT_CAP_MAX_MPDU_LEN_MASK)
+ ht_cap.cap |= IEEE80211_HT_CAP_MAX_AMSDU;
+
+ for (i = 0; i < ar->num_rx_chains; i++) {
+ if (rate_cap_rx_chainmask & BIT(i))
+ ht_cap.mcs.rx_mask[i] = 0xFF;
+ }
+
+ ht_cap.mcs.tx_params |= IEEE80211_HT_MCS_TX_DEFINED;
+
+ return ht_cap;
+}
+
+static int ath12k_mac_set_txbf_conf(struct ath12k_vif *arvif)
+{
+ u32 value = 0;
+ struct ath12k *ar = arvif->ar;
+ int nsts;
+ int sound_dim;
+ u32 vht_cap = ar->pdev->cap.vht_cap;
+ u32 vdev_param = WMI_VDEV_PARAM_TXBF;
+
+ if (vht_cap & (IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE)) {
+ nsts = vht_cap & IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK;
+ nsts >>= IEEE80211_VHT_CAP_BEAMFORMEE_STS_SHIFT;
+ value |= SM(nsts, WMI_TXBF_STS_CAP_OFFSET);
+ }
+
+ if (vht_cap & (IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)) {
+ sound_dim = vht_cap &
+ IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_MASK;
+ sound_dim >>= IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_SHIFT;
+ if (sound_dim > (ar->num_tx_chains - 1))
+ sound_dim = ar->num_tx_chains - 1;
+ value |= SM(sound_dim, WMI_BF_SOUND_DIM_OFFSET);
+ }
+
+ if (!value)
+ return 0;
+
+ if (vht_cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE) {
+ value |= WMI_VDEV_PARAM_TXBF_SU_TX_BFER;
+
+ if ((vht_cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) &&
+ arvif->vdev_type == WMI_VDEV_TYPE_AP)
+ value |= WMI_VDEV_PARAM_TXBF_MU_TX_BFER;
+ }
+
+ if (vht_cap & IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE) {
+ value |= WMI_VDEV_PARAM_TXBF_SU_TX_BFEE;
+
+ if ((vht_cap & IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE) &&
+ arvif->vdev_type == WMI_VDEV_TYPE_STA)
+ value |= WMI_VDEV_PARAM_TXBF_MU_TX_BFEE;
+ }
+
+ return ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ vdev_param, value);
+}
+
+static void ath12k_set_vht_txbf_cap(struct ath12k *ar, u32 *vht_cap)
+{
+ bool subfer, subfee;
+ int sound_dim = 0;
+
+ subfer = !!(*vht_cap & (IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE));
+ subfee = !!(*vht_cap & (IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE));
+
+ if (ar->num_tx_chains < 2) {
+ *vht_cap &= ~(IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE);
+ subfer = false;
+ }
+
+ /* If SU Beamformer is not set, then disable MU Beamformer Capability */
+ if (!subfer)
+ *vht_cap &= ~(IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE);
+
+ /* If SU Beamformee is not set, then disable MU Beamformee Capability */
+ if (!subfee)
+ *vht_cap &= ~(IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE);
+
+ sound_dim = u32_get_bits(*vht_cap,
+ IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_MASK);
+ *vht_cap = u32_replace_bits(*vht_cap, 0,
+ IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_MASK);
+
+ /* TODO: Need to check invalid STS and Sound_dim values set by FW? */
+
+ /* Enable Sounding Dimension Field only if SU BF is enabled */
+ if (subfer) {
+ if (sound_dim > (ar->num_tx_chains - 1))
+ sound_dim = ar->num_tx_chains - 1;
+
+ *vht_cap = u32_replace_bits(*vht_cap, sound_dim,
+ IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_MASK);
+ }
+
+ /* Use the STS advertised by FW; clear it if SU beamformee is not supported */
+ if (!subfee)
+ *vht_cap &= ~(IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK);
+}
+
+static struct ieee80211_sta_vht_cap
+ath12k_create_vht_cap(struct ath12k *ar, u32 rate_cap_tx_chainmask,
+ u32 rate_cap_rx_chainmask)
+{
+ struct ieee80211_sta_vht_cap vht_cap = {0};
+ u16 txmcs_map, rxmcs_map;
+ int i;
+
+ vht_cap.vht_supported = 1;
+ vht_cap.cap = ar->pdev->cap.vht_cap;
+
+ ath12k_set_vht_txbf_cap(ar, &vht_cap.cap);
+
+ /* TODO: Enable back VHT160 mode once association issues are fixed */
+ /* Disabling VHT160 and VHT80+80 modes */
+ vht_cap.cap &= ~IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK;
+ vht_cap.cap &= ~IEEE80211_VHT_CAP_SHORT_GI_160;
+
+ rxmcs_map = 0;
+ txmcs_map = 0;
+ for (i = 0; i < 8; i++) {
+ if (i < ar->num_tx_chains && rate_cap_tx_chainmask & BIT(i))
+ txmcs_map |= IEEE80211_VHT_MCS_SUPPORT_0_9 << (i * 2);
+ else
+ txmcs_map |= IEEE80211_VHT_MCS_NOT_SUPPORTED << (i * 2);
+
+ if (i < ar->num_rx_chains && rate_cap_rx_chainmask & BIT(i))
+ rxmcs_map |= IEEE80211_VHT_MCS_SUPPORT_0_9 << (i * 2);
+ else
+ rxmcs_map |= IEEE80211_VHT_MCS_NOT_SUPPORTED << (i * 2);
+ }
+
+ if (rate_cap_tx_chainmask <= 1)
+ vht_cap.cap &= ~IEEE80211_VHT_CAP_TXSTBC;
+
+ vht_cap.vht_mcs.rx_mcs_map = cpu_to_le16(rxmcs_map);
+ vht_cap.vht_mcs.tx_mcs_map = cpu_to_le16(txmcs_map);
+
+ return vht_cap;
+}
+
+static void ath12k_mac_setup_ht_vht_cap(struct ath12k *ar,
+ struct ath12k_pdev_cap *cap,
+ u32 *ht_cap_info)
+{
+ struct ieee80211_supported_band *band;
+ u32 rate_cap_tx_chainmask;
+ u32 rate_cap_rx_chainmask;
+ u32 ht_cap;
+
+ rate_cap_tx_chainmask = ar->cfg_tx_chainmask >> cap->tx_chain_mask_shift;
+ rate_cap_rx_chainmask = ar->cfg_rx_chainmask >> cap->rx_chain_mask_shift;
+
+ if (cap->supported_bands & WMI_HOST_WLAN_2G_CAP) {
+ band = &ar->mac.sbands[NL80211_BAND_2GHZ];
+ ht_cap = cap->band[NL80211_BAND_2GHZ].ht_cap_info;
+ if (ht_cap_info)
+ *ht_cap_info = ht_cap;
+ band->ht_cap = ath12k_create_ht_cap(ar, ht_cap,
+ rate_cap_rx_chainmask);
+ }
+
+ if (cap->supported_bands & WMI_HOST_WLAN_5G_CAP &&
+ (ar->ab->hw_params->single_pdev_only ||
+ !ar->supports_6ghz)) {
+ band = &ar->mac.sbands[NL80211_BAND_5GHZ];
+ ht_cap = cap->band[NL80211_BAND_5GHZ].ht_cap_info;
+ if (ht_cap_info)
+ *ht_cap_info = ht_cap;
+ band->ht_cap = ath12k_create_ht_cap(ar, ht_cap,
+ rate_cap_rx_chainmask);
+ band->vht_cap = ath12k_create_vht_cap(ar, rate_cap_tx_chainmask,
+ rate_cap_rx_chainmask);
+ }
+}
+
+static int ath12k_check_chain_mask(struct ath12k *ar, u32 ant, bool is_tx_ant)
+{
+ /* TODO: Check the requested chainmask against the supported
+ * chainmask table which is advertised in the extended service
+ * ready event.
+ */
+
+ return 0;
+}
+
+static void ath12k_gen_ppe_thresh(struct ath12k_wmi_ppe_threshold_arg *fw_ppet,
+ u8 *he_ppet)
+{
+ int nss, ru;
+ u8 bit = 7;
+
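+ /* The first 7 bits of the HE PPE threshold field carry NSS-1 and
+ * the RU index bitmask; after that, one 6-bit PPET16/PPET8 pair
+ * follows per (NSS, RU) combination. The 3-bit halves of each pair
+ * are swapped to match the IE layout.
+ */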
+ he_ppet[0] = fw_ppet->numss_m1 & IEEE80211_PPE_THRES_NSS_MASK;
+ he_ppet[0] |= (fw_ppet->ru_bit_mask <<
+ IEEE80211_PPE_THRES_RU_INDEX_BITMASK_POS) &
+ IEEE80211_PPE_THRES_RU_INDEX_BITMASK_MASK;
+ for (nss = 0; nss <= fw_ppet->numss_m1; nss++) {
+ for (ru = 0; ru < 4; ru++) {
+ u8 val;
+ int i;
+
+ if ((fw_ppet->ru_bit_mask & BIT(ru)) == 0)
+ continue;
+ val = (fw_ppet->ppet16_ppet8_ru3_ru0[nss] >> (ru * 6)) &
+ 0x3f;
+ val = ((val >> 3) & 0x7) | ((val & 0x7) << 3);
+ for (i = 5; i >= 0; i--) {
+ he_ppet[bit / 8] |=
+ ((val >> i) & 0x1) << ((bit % 8));
+ bit++;
+ }
+ }
+ }
+}
+
+static void
+ath12k_mac_filter_he_cap_mesh(struct ieee80211_he_cap_elem *he_cap_elem)
+{
+ u8 m;
+
+ m = IEEE80211_HE_MAC_CAP0_TWT_RES |
+ IEEE80211_HE_MAC_CAP0_TWT_REQ;
+ he_cap_elem->mac_cap_info[0] &= ~m;
+
+ m = IEEE80211_HE_MAC_CAP2_TRS |
+ IEEE80211_HE_MAC_CAP2_BCAST_TWT |
+ IEEE80211_HE_MAC_CAP2_MU_CASCADING;
+ he_cap_elem->mac_cap_info[2] &= ~m;
+
+ m = IEEE80211_HE_MAC_CAP3_FLEX_TWT_SCHED |
+ IEEE80211_HE_MAC_CAP2_BCAST_TWT |
+ IEEE80211_HE_MAC_CAP2_MU_CASCADING;
+ he_cap_elem->mac_cap_info[3] &= ~m;
+
+ m = IEEE80211_HE_MAC_CAP4_BSRP_BQRP_A_MPDU_AGG |
+ IEEE80211_HE_MAC_CAP4_BQR;
+ he_cap_elem->mac_cap_info[4] &= ~m;
+
+ m = IEEE80211_HE_MAC_CAP5_SUBCHAN_SELECTIVE_TRANSMISSION |
+ IEEE80211_HE_MAC_CAP5_UL_2x996_TONE_RU |
+ IEEE80211_HE_MAC_CAP5_PUNCTURED_SOUNDING |
+ IEEE80211_HE_MAC_CAP5_HT_VHT_TRIG_FRAME_RX;
+ he_cap_elem->mac_cap_info[5] &= ~m;
+
+ m = IEEE80211_HE_PHY_CAP2_UL_MU_FULL_MU_MIMO |
+ IEEE80211_HE_PHY_CAP2_UL_MU_PARTIAL_MU_MIMO;
+ he_cap_elem->phy_cap_info[2] &= ~m;
+
+ m = IEEE80211_HE_PHY_CAP3_RX_PARTIAL_BW_SU_IN_20MHZ_MU |
+ IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_TX_MASK |
+ IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_MASK;
+ he_cap_elem->phy_cap_info[3] &= ~m;
+
+ m = IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER;
+ he_cap_elem->phy_cap_info[4] &= ~m;
+
+ m = IEEE80211_HE_PHY_CAP5_NG16_MU_FEEDBACK;
+ he_cap_elem->phy_cap_info[5] &= ~m;
+
+ m = IEEE80211_HE_PHY_CAP6_CODEBOOK_SIZE_75_MU |
+ IEEE80211_HE_PHY_CAP6_TRIG_MU_BEAMFORMING_PARTIAL_BW_FB |
+ IEEE80211_HE_PHY_CAP6_TRIG_CQI_FB |
+ IEEE80211_HE_PHY_CAP6_PARTIAL_BANDWIDTH_DL_MUMIMO;
+ he_cap_elem->phy_cap_info[6] &= ~m;
+
+ m = IEEE80211_HE_PHY_CAP7_PSR_BASED_SR |
+ IEEE80211_HE_PHY_CAP7_POWER_BOOST_FACTOR_SUPP |
+ IEEE80211_HE_PHY_CAP7_STBC_TX_ABOVE_80MHZ |
+ IEEE80211_HE_PHY_CAP7_STBC_RX_ABOVE_80MHZ;
+ he_cap_elem->phy_cap_info[7] &= ~m;
+
+ m = IEEE80211_HE_PHY_CAP8_HE_ER_SU_PPDU_4XLTF_AND_08_US_GI |
+ IEEE80211_HE_PHY_CAP8_20MHZ_IN_40MHZ_HE_PPDU_IN_2G |
+ IEEE80211_HE_PHY_CAP8_20MHZ_IN_160MHZ_HE_PPDU |
+ IEEE80211_HE_PHY_CAP8_80MHZ_IN_160MHZ_HE_PPDU;
+ he_cap_elem->phy_cap_info[8] &= ~m;
+
+ m = IEEE80211_HE_PHY_CAP9_LONGER_THAN_16_SIGB_OFDM_SYM |
+ IEEE80211_HE_PHY_CAP9_NON_TRIGGERED_CQI_FEEDBACK |
+ IEEE80211_HE_PHY_CAP9_RX_1024_QAM_LESS_THAN_242_TONE_RU |
+ IEEE80211_HE_PHY_CAP9_TX_1024_QAM_LESS_THAN_242_TONE_RU |
+ IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_COMP_SIGB |
+ IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_NON_COMP_SIGB;
+ he_cap_elem->phy_cap_info[9] &= ~m;
+}
+
+static __le16 ath12k_mac_setup_he_6ghz_cap(struct ath12k_pdev_cap *pcap,
+ struct ath12k_band_cap *bcap)
+{
+ u8 val;
+
+ bcap->he_6ghz_capa = IEEE80211_HT_MPDU_DENSITY_NONE;
+ if (bcap->ht_cap_info & WMI_HT_CAP_DYNAMIC_SMPS)
+ bcap->he_6ghz_capa |=
+ u32_encode_bits(WLAN_HT_CAP_SM_PS_DYNAMIC,
+ IEEE80211_HE_6GHZ_CAP_SM_PS);
+ else
+ bcap->he_6ghz_capa |=
+ u32_encode_bits(WLAN_HT_CAP_SM_PS_DISABLED,
+ IEEE80211_HE_6GHZ_CAP_SM_PS);
+ val = u32_get_bits(pcap->vht_cap,
+ IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK);
+ bcap->he_6ghz_capa |=
+ u32_encode_bits(val, IEEE80211_HE_6GHZ_CAP_MAX_AMPDU_LEN_EXP);
+ val = u32_get_bits(pcap->vht_cap,
+ IEEE80211_VHT_CAP_MAX_MPDU_MASK);
+ bcap->he_6ghz_capa |=
+ u32_encode_bits(val, IEEE80211_HE_6GHZ_CAP_MAX_MPDU_LEN);
+ if (pcap->vht_cap & IEEE80211_VHT_CAP_RX_ANTENNA_PATTERN)
+ bcap->he_6ghz_capa |= IEEE80211_HE_6GHZ_CAP_RX_ANTPAT_CONS;
+ if (pcap->vht_cap & IEEE80211_VHT_CAP_TX_ANTENNA_PATTERN)
+ bcap->he_6ghz_capa |= IEEE80211_HE_6GHZ_CAP_TX_ANTPAT_CONS;
+
+ return cpu_to_le16(bcap->he_6ghz_capa);
+}
+
+static int ath12k_mac_copy_he_cap(struct ath12k *ar,
+ struct ath12k_pdev_cap *cap,
+ struct ieee80211_sband_iftype_data *data,
+ int band)
+{
+ int i, idx = 0;
+
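+ /* Build one sband iftype_data entry per interface type that gets
+ * its own HE capability set (station, AP and mesh point).
+ */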
+ for (i = 0; i < NUM_NL80211_IFTYPES; i++) {
+ struct ieee80211_sta_he_cap *he_cap = &data[idx].he_cap;
+ struct ath12k_band_cap *band_cap = &cap->band[band];
+ struct ieee80211_he_cap_elem *he_cap_elem =
+ &he_cap->he_cap_elem;
+
+ switch (i) {
+ case NL80211_IFTYPE_STATION:
+ case NL80211_IFTYPE_AP:
+ case NL80211_IFTYPE_MESH_POINT:
+ break;
+
+ default:
+ continue;
+ }
+
+ data[idx].types_mask = BIT(i);
+ he_cap->has_he = true;
+ memcpy(he_cap_elem->mac_cap_info, band_cap->he_cap_info,
+ sizeof(he_cap_elem->mac_cap_info));
+ memcpy(he_cap_elem->phy_cap_info, band_cap->he_cap_phy_info,
+ sizeof(he_cap_elem->phy_cap_info));
+
+ he_cap_elem->mac_cap_info[1] &=
+ IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_MASK;
+
+ he_cap_elem->phy_cap_info[5] &=
+ ~IEEE80211_HE_PHY_CAP5_BEAMFORMEE_NUM_SND_DIM_UNDER_80MHZ_MASK;
+ he_cap_elem->phy_cap_info[5] &=
+ ~IEEE80211_HE_PHY_CAP5_BEAMFORMEE_NUM_SND_DIM_ABOVE_80MHZ_MASK;
+ he_cap_elem->phy_cap_info[5] |= ar->num_tx_chains - 1;
+
+ switch (i) {
+ case NL80211_IFTYPE_AP:
+ he_cap_elem->phy_cap_info[3] &=
+ ~IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_TX_MASK;
+ he_cap_elem->phy_cap_info[9] |=
+ IEEE80211_HE_PHY_CAP9_RX_1024_QAM_LESS_THAN_242_TONE_RU;
+ break;
+ case NL80211_IFTYPE_STATION:
+ he_cap_elem->mac_cap_info[0] &=
+ ~IEEE80211_HE_MAC_CAP0_TWT_RES;
+ he_cap_elem->mac_cap_info[0] |=
+ IEEE80211_HE_MAC_CAP0_TWT_REQ;
+ he_cap_elem->phy_cap_info[9] |=
+ IEEE80211_HE_PHY_CAP9_TX_1024_QAM_LESS_THAN_242_TONE_RU;
+ break;
+ case NL80211_IFTYPE_MESH_POINT:
+ ath12k_mac_filter_he_cap_mesh(he_cap_elem);
+ break;
+ }
+
+ he_cap->he_mcs_nss_supp.rx_mcs_80 =
+ cpu_to_le16(band_cap->he_mcs & 0xffff);
+ he_cap->he_mcs_nss_supp.tx_mcs_80 =
+ cpu_to_le16(band_cap->he_mcs & 0xffff);
+ he_cap->he_mcs_nss_supp.rx_mcs_160 =
+ cpu_to_le16((band_cap->he_mcs >> 16) & 0xffff);
+ he_cap->he_mcs_nss_supp.tx_mcs_160 =
+ cpu_to_le16((band_cap->he_mcs >> 16) & 0xffff);
+ he_cap->he_mcs_nss_supp.rx_mcs_80p80 =
+ cpu_to_le16((band_cap->he_mcs >> 16) & 0xffff);
+ he_cap->he_mcs_nss_supp.tx_mcs_80p80 =
+ cpu_to_le16((band_cap->he_mcs >> 16) & 0xffff);
+
+ memset(he_cap->ppe_thres, 0, sizeof(he_cap->ppe_thres));
+ if (he_cap_elem->phy_cap_info[6] &
+ IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT)
+ ath12k_gen_ppe_thresh(&band_cap->he_ppet,
+ he_cap->ppe_thres);
+
+ if (band == NL80211_BAND_6GHZ) {
+ data[idx].he_6ghz_capa.capa =
+ ath12k_mac_setup_he_6ghz_cap(cap, band_cap);
+ }
+ idx++;
+ }
+
+ return idx;
+}
+
+static void ath12k_mac_setup_he_cap(struct ath12k *ar,
+ struct ath12k_pdev_cap *cap)
+{
+ struct ieee80211_supported_band *band;
+ int count;
+
+ if (cap->supported_bands & WMI_HOST_WLAN_2G_CAP) {
+ count = ath12k_mac_copy_he_cap(ar, cap,
+ ar->mac.iftype[NL80211_BAND_2GHZ],
+ NL80211_BAND_2GHZ);
+ band = &ar->mac.sbands[NL80211_BAND_2GHZ];
+ band->iftype_data = ar->mac.iftype[NL80211_BAND_2GHZ];
+ band->n_iftype_data = count;
+ }
+
+ if (cap->supported_bands & WMI_HOST_WLAN_5G_CAP) {
+ count = ath12k_mac_copy_he_cap(ar, cap,
+ ar->mac.iftype[NL80211_BAND_5GHZ],
+ NL80211_BAND_5GHZ);
+ band = &ar->mac.sbands[NL80211_BAND_5GHZ];
+ band->iftype_data = ar->mac.iftype[NL80211_BAND_5GHZ];
+ band->n_iftype_data = count;
+ }
+
+ if (cap->supported_bands & WMI_HOST_WLAN_5G_CAP &&
+ ar->supports_6ghz) {
+ count = ath12k_mac_copy_he_cap(ar, cap,
+ ar->mac.iftype[NL80211_BAND_6GHZ],
+ NL80211_BAND_6GHZ);
+ band = &ar->mac.sbands[NL80211_BAND_6GHZ];
+ band->iftype_data = ar->mac.iftype[NL80211_BAND_6GHZ];
+ band->n_iftype_data = count;
+ }
+}
+
+static int __ath12k_set_antenna(struct ath12k *ar, u32 tx_ant, u32 rx_ant)
+{
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (ath12k_check_chain_mask(ar, tx_ant, true))
+ return -EINVAL;
+
+ if (ath12k_check_chain_mask(ar, rx_ant, false))
+ return -EINVAL;
+
+ ar->cfg_tx_chainmask = tx_ant;
+ ar->cfg_rx_chainmask = rx_ant;
+
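+ /* If the radio is not up yet, only cache the masks; they are pushed
+ * to firmware when ath12k_mac_op_start() calls __ath12k_set_antenna()
+ * again.
+ */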
+ if (ar->state != ATH12K_STATE_ON &&
+ ar->state != ATH12K_STATE_RESTARTED)
+ return 0;
+
+ ret = ath12k_wmi_pdev_set_param(ar, WMI_PDEV_PARAM_TX_CHAIN_MASK,
+ tx_ant, ar->pdev->pdev_id);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set tx-chainmask: %d, req 0x%x\n",
+ ret, tx_ant);
+ return ret;
+ }
+
+ ar->num_tx_chains = hweight32(tx_ant);
+
+ ret = ath12k_wmi_pdev_set_param(ar, WMI_PDEV_PARAM_RX_CHAIN_MASK,
+ rx_ant, ar->pdev->pdev_id);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set rx-chainmask: %d, req 0x%x\n",
+ ret, rx_ant);
+ return ret;
+ }
+
+ ar->num_rx_chains = hweight32(rx_ant);
+
+ /* Reload HT/VHT/HE capability */
+ ath12k_mac_setup_ht_vht_cap(ar, &ar->pdev->cap, NULL);
+ ath12k_mac_setup_he_cap(ar, &ar->pdev->cap);
+
+ return 0;
+}
+
+int ath12k_mac_tx_mgmt_pending_free(int buf_id, void *skb, void *ctx)
+{
+ struct sk_buff *msdu = skb;
+ struct ieee80211_tx_info *info;
+ struct ath12k *ar = ctx;
+ struct ath12k_base *ab = ar->ab;
+
+ spin_lock_bh(&ar->txmgmt_idr_lock);
+ idr_remove(&ar->txmgmt_idr, buf_id);
+ spin_unlock_bh(&ar->txmgmt_idr_lock);
+ dma_unmap_single(ab->dev, ATH12K_SKB_CB(msdu)->paddr, msdu->len,
+ DMA_TO_DEVICE);
+
+ info = IEEE80211_SKB_CB(msdu);
+ memset(&info->status, 0, sizeof(info->status));
+
+ ieee80211_free_txskb(ar->hw, msdu);
+
+ return 0;
+}
+
+static int ath12k_mac_vif_txmgmt_idr_remove(int buf_id, void *skb, void *ctx)
+{
+ struct ieee80211_vif *vif = ctx;
+ struct ath12k_skb_cb *skb_cb = ATH12K_SKB_CB(skb);
+ struct sk_buff *msdu = skb;
+ struct ath12k *ar = skb_cb->ar;
+ struct ath12k_base *ab = ar->ab;
+
+ if (skb_cb->vif == vif) {
+ spin_lock_bh(&ar->txmgmt_idr_lock);
+ idr_remove(&ar->txmgmt_idr, buf_id);
+ spin_unlock_bh(&ar->txmgmt_idr_lock);
+ dma_unmap_single(ab->dev, skb_cb->paddr, msdu->len,
+ DMA_TO_DEVICE);
+ }
+
+ return 0;
+}
+
+static int ath12k_mac_mgmt_tx_wmi(struct ath12k *ar, struct ath12k_vif *arvif,
+ struct sk_buff *skb)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ struct ieee80211_tx_info *info;
+ dma_addr_t paddr;
+ int buf_id;
+ int ret;
+
+ spin_lock_bh(&ar->txmgmt_idr_lock);
+ buf_id = idr_alloc(&ar->txmgmt_idr, skb, 0,
+ ATH12K_TX_MGMT_NUM_PENDING_MAX, GFP_ATOMIC);
+ spin_unlock_bh(&ar->txmgmt_idr_lock);
+ if (buf_id < 0)
+ return -ENOSPC;
+
+ info = IEEE80211_SKB_CB(skb);
+ if (!(info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP)) {
+ if ((ieee80211_is_action(hdr->frame_control) ||
+ ieee80211_is_deauth(hdr->frame_control) ||
+ ieee80211_is_disassoc(hdr->frame_control)) &&
+ ieee80211_has_protected(hdr->frame_control)) {
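+ /* Make room for the MIC appended when these robust
+ * management frames are encrypted.
+ */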
+ skb_put(skb, IEEE80211_CCMP_MIC_LEN);
+ }
+ }
+
+ paddr = dma_map_single(ab->dev, skb->data, skb->len, DMA_TO_DEVICE);
+ if (dma_mapping_error(ab->dev, paddr)) {
+ ath12k_warn(ab, "failed to DMA map mgmt Tx buffer\n");
+ ret = -EIO;
+ goto err_free_idr;
+ }
+
+ ATH12K_SKB_CB(skb)->paddr = paddr;
+
+ ret = ath12k_wmi_mgmt_send(ar, arvif->vdev_id, buf_id, skb);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send mgmt frame: %d\n", ret);
+ goto err_unmap_buf;
+ }
+
+ return 0;
+
+err_unmap_buf:
+ dma_unmap_single(ab->dev, ATH12K_SKB_CB(skb)->paddr,
+ skb->len, DMA_TO_DEVICE);
+err_free_idr:
+ spin_lock_bh(&ar->txmgmt_idr_lock);
+ idr_remove(&ar->txmgmt_idr, buf_id);
+ spin_unlock_bh(&ar->txmgmt_idr_lock);
+
+ return ret;
+}
+
+static void ath12k_mgmt_over_wmi_tx_purge(struct ath12k *ar)
+{
+ struct sk_buff *skb;
+
+ while ((skb = skb_dequeue(&ar->wmi_mgmt_tx_queue)) != NULL)
+ ieee80211_free_txskb(ar->hw, skb);
+}
+
+static void ath12k_mgmt_over_wmi_tx_work(struct work_struct *work)
+{
+ struct ath12k *ar = container_of(work, struct ath12k, wmi_mgmt_tx_work);
+ struct ath12k_skb_cb *skb_cb;
+ struct ath12k_vif *arvif;
+ struct sk_buff *skb;
+ int ret;
+
+ while ((skb = skb_dequeue(&ar->wmi_mgmt_tx_queue)) != NULL) {
+ skb_cb = ATH12K_SKB_CB(skb);
+ if (!skb_cb->vif) {
+ ath12k_warn(ar->ab, "no vif found for mgmt frame\n");
+ ieee80211_free_txskb(ar->hw, skb);
+ continue;
+ }
+
+ arvif = ath12k_vif_to_arvif(skb_cb->vif);
+ if (ar->allocated_vdev_map & (1LL << arvif->vdev_id) &&
+ arvif->is_started) {
+ ret = ath12k_mac_mgmt_tx_wmi(ar, arvif, skb);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to tx mgmt frame, vdev_id %d :%d\n",
+ arvif->vdev_id, ret);
+ ieee80211_free_txskb(ar->hw, skb);
+ } else {
+ atomic_inc(&ar->num_pending_mgmt_tx);
+ }
+ } else {
+ ath12k_warn(ar->ab,
+ "dropping mgmt frame for vdev %d, is_started %d\n",
+ arvif->vdev_id,
+ arvif->is_started);
+ ieee80211_free_txskb(ar->hw, skb);
+ }
+ }
+}
+
+static int ath12k_mac_mgmt_tx(struct ath12k *ar, struct sk_buff *skb,
+ bool is_prb_rsp)
+{
+ struct sk_buff_head *q = &ar->wmi_mgmt_tx_queue;
+
+ if (test_bit(ATH12K_FLAG_CRASH_FLUSH, &ar->ab->dev_flags))
+ return -ESHUTDOWN;
+
+ /* Drop probe response packets when the pending management tx
+ * count has reached a certain threshold, so as to prioritize
+ * other mgmt packets like auth and assoc to be sent on time
+ * for establishing successful connections.
+ */
+ if (is_prb_rsp &&
+ atomic_read(&ar->num_pending_mgmt_tx) > ATH12K_PRB_RSP_DROP_THRESHOLD) {
+ ath12k_warn(ar->ab,
+ "dropping probe response as pending queue is almost full\n");
+ return -ENOSPC;
+ }
+
+ if (skb_queue_len(q) == ATH12K_TX_MGMT_NUM_PENDING_MAX) {
+ ath12k_warn(ar->ab, "mgmt tx queue is full\n");
+ return -ENOSPC;
+ }
+
+ skb_queue_tail(q, skb);
+ ieee80211_queue_work(ar->hw, &ar->wmi_mgmt_tx_work);
+
+ return 0;
+}
+
+static void ath12k_mac_op_tx(struct ieee80211_hw *hw,
+ struct ieee80211_tx_control *control,
+ struct sk_buff *skb)
+{
+ struct ath12k_skb_cb *skb_cb = ATH12K_SKB_CB(skb);
+ struct ath12k *ar = hw->priv;
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_vif *vif = info->control.vif;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ struct ieee80211_key_conf *key = info->control.hw_key;
+ u32 info_flags = info->flags;
+ bool is_prb_rsp;
+ int ret;
+
+ memset(skb_cb, 0, sizeof(*skb_cb));
+ skb_cb->vif = vif;
+
+ if (key) {
+ skb_cb->cipher = key->cipher;
+ skb_cb->flags |= ATH12K_SKB_CIPHER_SET;
+ }
+
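+ /* With HW 802.11 encapsulation offload mac80211 hands over 802.3
+ * frames, so flag the skb and skip all 802.11 header inspection.
+ */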
+ if (info_flags & IEEE80211_TX_CTL_HW_80211_ENCAP) {
+ skb_cb->flags |= ATH12K_SKB_HW_80211_ENCAP;
+ } else if (ieee80211_is_mgmt(hdr->frame_control)) {
+ is_prb_rsp = ieee80211_is_probe_resp(hdr->frame_control);
+ ret = ath12k_mac_mgmt_tx(ar, skb, is_prb_rsp);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to queue management frame %d\n",
+ ret);
+ ieee80211_free_txskb(ar->hw, skb);
+ }
+ return;
+ }
+
+ ret = ath12k_dp_tx(ar, arvif, skb);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to transmit frame %d\n", ret);
+ ieee80211_free_txskb(ar->hw, skb);
+ }
+}
+
+void ath12k_mac_drain_tx(struct ath12k *ar)
+{
+ /* make sure rcu-protected mac80211 tx path itself is drained */
+ synchronize_net();
+
+ cancel_work_sync(&ar->wmi_mgmt_tx_work);
+ ath12k_mgmt_over_wmi_tx_purge(ar);
+}
+
+static int ath12k_mac_config_mon_status_default(struct ath12k *ar, bool enable)
+{
+ /* TODO: Need to support new monitor mode */
+ return -ENOTSUPP;
+}
+
+static void ath12k_mac_wait_reconfigure(struct ath12k_base *ab)
+{
+ int recovery_start_count;
+
+ if (!ab->is_reset)
+ return;
+
+ recovery_start_count = atomic_inc_return(&ab->recovery_start_count);
+
+ ath12k_dbg(ab, ATH12K_DBG_MAC, "recovery start count %d\n", recovery_start_count);
+
+ if (recovery_start_count == ab->num_radios) {
+ complete(&ab->recovery_start);
+ ath12k_dbg(ab, ATH12K_DBG_MAC, "recovery started success\n");
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_MAC, "waiting reconfigure...\n");
+
+ wait_for_completion_timeout(&ab->reconfigure_complete,
+ ATH12K_RECONFIGURE_TIMEOUT_HZ);
+}
+
+static int ath12k_mac_op_start(struct ieee80211_hw *hw)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_pdev *pdev = ar->pdev;
+ int ret;
+
+ ath12k_mac_drain_tx(ar);
+ mutex_lock(&ar->conf_mutex);
+
+ switch (ar->state) {
+ case ATH12K_STATE_OFF:
+ ar->state = ATH12K_STATE_ON;
+ break;
+ case ATH12K_STATE_RESTARTING:
+ ar->state = ATH12K_STATE_RESTARTED;
+ ath12k_mac_wait_reconfigure(ab);
+ break;
+ case ATH12K_STATE_RESTARTED:
+ case ATH12K_STATE_WEDGED:
+ case ATH12K_STATE_ON:
+ WARN_ON(1);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ ret = ath12k_wmi_pdev_set_param(ar, WMI_PDEV_PARAM_PMF_QOS,
+ 1, pdev->pdev_id);
+ if (ret) {
+ ath12k_err(ar->ab, "failed to enable PMF QOS: %d\n", ret);
+ goto err;
+ }
+
+ ret = ath12k_wmi_pdev_set_param(ar, WMI_PDEV_PARAM_DYNAMIC_BW, 1,
+ pdev->pdev_id);
+ if (ret) {
+ ath12k_err(ar->ab, "failed to enable dynamic bw: %d\n", ret);
+ goto err;
+ }
+
+ ret = ath12k_wmi_pdev_set_param(ar, WMI_PDEV_PARAM_ARP_AC_OVERRIDE,
+ 0, pdev->pdev_id);
+ if (ret) {
+ ath12k_err(ab, "failed to set ac override for ARP: %d\n",
+ ret);
+ goto err;
+ }
+
+ ret = ath12k_wmi_send_dfs_phyerr_offload_enable_cmd(ar, pdev->pdev_id);
+ if (ret) {
+ ath12k_err(ab, "failed to offload radar detection: %d\n",
+ ret);
+ goto err;
+ }
+
+ ret = ath12k_dp_tx_htt_h2t_ppdu_stats_req(ar,
+ HTT_PPDU_STATS_TAG_DEFAULT);
+ if (ret) {
+ ath12k_err(ab, "failed to req ppdu stats: %d\n", ret);
+ goto err;
+ }
+
+ ret = ath12k_wmi_pdev_set_param(ar, WMI_PDEV_PARAM_MESH_MCAST_ENABLE,
+ 1, pdev->pdev_id);
+ if (ret) {
+ ath12k_err(ar->ab, "failed to enable MESH MCAST ENABLE: %d\n", ret);
+ goto err;
+ }
+
+ __ath12k_set_antenna(ar, ar->cfg_tx_chainmask, ar->cfg_rx_chainmask);
+
+ /* TODO: Do we need to enable ANI? */
+
+ ath12k_reg_update_chan_list(ar);
+
+ ar->num_started_vdevs = 0;
+ ar->num_created_vdevs = 0;
+ ar->num_peers = 0;
+ ar->allocated_vdev_map = 0;
+
+ /* Configure monitor status ring with default rx_filter to get rx status
+ * such as rssi, rx_duration.
+ */
+ ret = ath12k_mac_config_mon_status_default(ar, true);
+ if (ret && (ret != -ENOTSUPP)) {
+ ath12k_err(ab, "failed to configure monitor status ring with default rx_filter: (%d)\n",
+ ret);
+ goto err;
+ }
+
+ if (ret == -ENOTSUPP)
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
+ "monitor status config is not yet supported");
+
+ /* Configure the hash seed for hash based reo dest ring selection */
+ ath12k_wmi_pdev_lro_cfg(ar, ar->pdev->pdev_id);
+
+ /* allow device to enter IMPS */
+ if (ab->hw_params->idle_ps) {
+ ret = ath12k_wmi_pdev_set_param(ar, WMI_PDEV_PARAM_IDLE_PS_CONFIG,
+ 1, pdev->pdev_id);
+ if (ret) {
+ ath12k_err(ab, "failed to enable idle ps: %d\n", ret);
+ goto err;
+ }
+ }
+
+ mutex_unlock(&ar->conf_mutex);
+
+ rcu_assign_pointer(ab->pdevs_active[ar->pdev_idx],
+ &ab->pdevs[ar->pdev_idx]);
+
+ return 0;
+
+err:
+ ar->state = ATH12K_STATE_OFF;
+ mutex_unlock(&ar->conf_mutex);
+
+ return ret;
+}
+
+static void ath12k_mac_op_stop(struct ieee80211_hw *hw)
+{
+ struct ath12k *ar = hw->priv;
+ struct htt_ppdu_stats_info *ppdu_stats, *tmp;
+ int ret;
+
+ ath12k_mac_drain_tx(ar);
+
+ mutex_lock(&ar->conf_mutex);
+ ret = ath12k_mac_config_mon_status_default(ar, false);
+ if (ret && (ret != -ENOTSUPP))
+ ath12k_err(ar->ab, "failed to clear rx_filter for monitor status ring: (%d)\n",
+ ret);
+
+ clear_bit(ATH12K_CAC_RUNNING, &ar->dev_flags);
+ ar->state = ATH12K_STATE_OFF;
+ mutex_unlock(&ar->conf_mutex);
+
+ cancel_delayed_work_sync(&ar->scan.timeout);
+ cancel_work_sync(&ar->regd_update_work);
+
+ spin_lock_bh(&ar->data_lock);
+ list_for_each_entry_safe(ppdu_stats, tmp, &ar->ppdu_stats_info, list) {
+ list_del(&ppdu_stats->list);
+ kfree(ppdu_stats);
+ }
+ spin_unlock_bh(&ar->data_lock);
+
+ rcu_assign_pointer(ar->ab->pdevs_active[ar->pdev_idx], NULL);
+
+ synchronize_rcu();
+
+ atomic_set(&ar->num_pending_mgmt_tx, 0);
+}
+
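+/* Scan free_vdev_stats_id_map from bit 0 for a free stats id and mark the
+ * first free one as used; if no usable id is found, fall back to
+ * ATH12K_INVAL_VDEV_STATS_ID so the vdev carries no valid stats id.
+ */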
+static u8
+ath12k_mac_get_vdev_stats_id(struct ath12k_vif *arvif)
+{
+ struct ath12k_base *ab = arvif->ar->ab;
+ u8 vdev_stats_id = 0;
+
+ do {
+ if (ab->free_vdev_stats_id_map & (1LL << vdev_stats_id)) {
+ vdev_stats_id++;
+ if (vdev_stats_id <= ATH12K_INVAL_VDEV_STATS_ID) {
+ vdev_stats_id = ATH12K_INVAL_VDEV_STATS_ID;
+ break;
+ }
+ } else {
+ ab->free_vdev_stats_id_map |= (1LL << vdev_stats_id);
+ break;
+ }
+ } while (vdev_stats_id);
+
+ arvif->vdev_stats_id = vdev_stats_id;
+ return vdev_stats_id;
+}
+
+static void ath12k_mac_setup_vdev_create_arg(struct ath12k_vif *arvif,
+ struct ath12k_wmi_vdev_create_arg *arg)
+{
+ struct ath12k *ar = arvif->ar;
+ struct ath12k_pdev *pdev = ar->pdev;
+
+ arg->if_id = arvif->vdev_id;
+ arg->type = arvif->vdev_type;
+ arg->subtype = arvif->vdev_subtype;
+ arg->pdev_id = pdev->pdev_id;
+
+ if (pdev->cap.supported_bands & WMI_HOST_WLAN_2G_CAP) {
+ arg->chains[NL80211_BAND_2GHZ].tx = ar->num_tx_chains;
+ arg->chains[NL80211_BAND_2GHZ].rx = ar->num_rx_chains;
+ }
+ if (pdev->cap.supported_bands & WMI_HOST_WLAN_5G_CAP) {
+ arg->chains[NL80211_BAND_5GHZ].tx = ar->num_tx_chains;
+ arg->chains[NL80211_BAND_5GHZ].rx = ar->num_rx_chains;
+ }
+ if (pdev->cap.supported_bands & WMI_HOST_WLAN_5G_CAP &&
+ ar->supports_6ghz) {
+ arg->chains[NL80211_BAND_6GHZ].tx = ar->num_tx_chains;
+ arg->chains[NL80211_BAND_6GHZ].rx = ar->num_rx_chains;
+ }
+
+ arg->if_stats_id = ath12k_mac_get_vdev_stats_id(arvif);
+}
+
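+/* Build the HEMU mode bitmap from the pdev HE PHY capabilities: SU
+ * beamformee is always enabled, SU beamformer and UL MU-MIMO follow the
+ * PHY caps, and AP vifs additionally advertise MU beamformer plus DL/UL
+ * MU-OFDMA, while other vif types enable MU beamformee instead.
+ */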
+static u32
+ath12k_mac_prepare_he_mode(struct ath12k_pdev *pdev, u32 viftype)
+{
+ struct ath12k_pdev_cap *pdev_cap = &pdev->cap;
+ struct ath12k_band_cap *cap_band = NULL;
+ u32 *hecap_phy_ptr = NULL;
+ u32 hemode;
+
+ if (pdev->cap.supported_bands & WMI_HOST_WLAN_2G_CAP)
+ cap_band = &pdev_cap->band[NL80211_BAND_2GHZ];
+ else
+ cap_band = &pdev_cap->band[NL80211_BAND_5GHZ];
+
+ hecap_phy_ptr = &cap_band->he_cap_phy_info[0];
+
+ hemode = u32_encode_bits(HE_SU_BFEE_ENABLE, HE_MODE_SU_TX_BFEE) |
+ u32_encode_bits(HECAP_PHY_SUBFMR_GET(hecap_phy_ptr),
+ HE_MODE_SU_TX_BFER) |
+ u32_encode_bits(HECAP_PHY_ULMUMIMO_GET(hecap_phy_ptr),
+ HE_MODE_UL_MUMIMO);
+
+ /* TODO: WDS and other modes */
+ if (viftype == NL80211_IFTYPE_AP) {
+ hemode |= u32_encode_bits(HECAP_PHY_MUBFMR_GET(hecap_phy_ptr),
+ HE_MODE_MU_TX_BFER) |
+ u32_encode_bits(HE_DL_MUOFDMA_ENABLE, HE_MODE_DL_OFDMA) |
+ u32_encode_bits(HE_UL_MUOFDMA_ENABLE, HE_MODE_UL_OFDMA);
+ } else {
+ hemode |= u32_encode_bits(HE_MU_BFEE_ENABLE, HE_MODE_MU_TX_BFEE);
+ }
+
+ return hemode;
+}
+
+static int ath12k_set_he_mu_sounding_mode(struct ath12k *ar,
+ struct ath12k_vif *arvif)
+{
+ u32 param_id, param_value;
+ struct ath12k_base *ab = ar->ab;
+ int ret;
+
+ param_id = WMI_VDEV_PARAM_SET_HEMU_MODE;
+ param_value = ath12k_mac_prepare_he_mode(ar->pdev, arvif->vif->type);
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ param_id, param_value);
+ if (ret) {
+ ath12k_warn(ab, "failed to set vdev %d HE MU mode: %d param_value %x\n",
+ arvif->vdev_id, ret, param_value);
+ return ret;
+ }
+ param_id = WMI_VDEV_PARAM_SET_HE_SOUNDING_MODE;
+ param_value =
+ u32_encode_bits(HE_VHT_SOUNDING_MODE_ENABLE, HE_VHT_SOUNDING_MODE) |
+ u32_encode_bits(HE_TRIG_NONTRIG_SOUNDING_MODE_ENABLE,
+ HE_TRIG_NONTRIG_SOUNDING_MODE);
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ param_id, param_value);
+ if (ret) {
+ ath12k_warn(ab, "failed to set vdev %d HE MU mode: %d\n",
+ arvif->vdev_id, ret);
+ return ret;
+ }
+ return ret;
+}
+
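+/* Select the vdev tx encap / rx decap modes: ethernet when mac80211
+ * offload is enabled, raw when the device is in raw mode, native wifi
+ * otherwise. If the WMI command fails, the corresponding offload flag is
+ * cleared so mac80211 falls back to 802.11 framing.
+ */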
+static void ath12k_mac_op_update_vif_offload(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
+ u32 param_id, param_value;
+ int ret;
+
+ param_id = WMI_VDEV_PARAM_TX_ENCAP_TYPE;
+ if (vif->type != NL80211_IFTYPE_STATION &&
+ vif->type != NL80211_IFTYPE_AP)
+ vif->offload_flags &= ~(IEEE80211_OFFLOAD_ENCAP_ENABLED |
+ IEEE80211_OFFLOAD_DECAP_ENABLED);
+
+ if (vif->offload_flags & IEEE80211_OFFLOAD_ENCAP_ENABLED)
+ arvif->tx_encap_type = ATH12K_HW_TXRX_ETHERNET;
+ else if (test_bit(ATH12K_FLAG_RAW_MODE, &ab->dev_flags))
+ arvif->tx_encap_type = ATH12K_HW_TXRX_RAW;
+ else
+ arvif->tx_encap_type = ATH12K_HW_TXRX_NATIVE_WIFI;
+
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ param_id, arvif->tx_encap_type);
+ if (ret) {
+ ath12k_warn(ab, "failed to set vdev %d tx encap mode: %d\n",
+ arvif->vdev_id, ret);
+ vif->offload_flags &= ~IEEE80211_OFFLOAD_ENCAP_ENABLED;
+ }
+
+ param_id = WMI_VDEV_PARAM_RX_DECAP_TYPE;
+ if (vif->offload_flags & IEEE80211_OFFLOAD_DECAP_ENABLED)
+ param_value = ATH12K_HW_TXRX_ETHERNET;
+ else if (test_bit(ATH12K_FLAG_RAW_MODE, &ab->dev_flags))
+ param_value = ATH12K_HW_TXRX_RAW;
+ else
+ param_value = ATH12K_HW_TXRX_NATIVE_WIFI;
+
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ param_id, param_value);
+ if (ret) {
+ ath12k_warn(ab, "failed to set vdev %d rx decap mode: %d\n",
+ arvif->vdev_id, ret);
+ vif->offload_flags &= ~IEEE80211_OFFLOAD_DECAP_ENABLED;
+ }
+}
+
+static int ath12k_mac_op_add_interface(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
+ struct ath12k_wmi_vdev_create_arg vdev_arg = {0};
+ struct ath12k_wmi_peer_create_arg peer_param;
+ u32 param_id, param_value;
+ u16 nss;
+ int i;
+ int ret;
+ int bit;
+
+ vif->driver_flags |= IEEE80211_VIF_SUPPORTS_UAPSD;
+
+ mutex_lock(&ar->conf_mutex);
+
+ if (vif->type == NL80211_IFTYPE_AP &&
+ ar->num_peers > (ar->max_num_peers - 1)) {
+ ath12k_warn(ab, "failed to create vdev due to insufficient peer entry resource in firmware\n");
+ ret = -ENOBUFS;
+ goto err;
+ }
+
+ if (ar->num_created_vdevs > (TARGET_NUM_VDEVS - 1)) {
+ ath12k_warn(ab, "failed to create vdev, reached max vdev limit %d\n",
+ TARGET_NUM_VDEVS);
+ ret = -EBUSY;
+ goto err;
+ }
+
+ memset(arvif, 0, sizeof(*arvif));
+
+ arvif->ar = ar;
+ arvif->vif = vif;
+
+ INIT_LIST_HEAD(&arvif->list);
+
+ /* Should we initialize any worker to handle connection loss indication
+ * from firmware in sta mode?
+ */
+
+ for (i = 0; i < ARRAY_SIZE(arvif->bitrate_mask.control); i++) {
+ arvif->bitrate_mask.control[i].legacy = 0xffffffff;
+ memset(arvif->bitrate_mask.control[i].ht_mcs, 0xff,
+ sizeof(arvif->bitrate_mask.control[i].ht_mcs));
+ memset(arvif->bitrate_mask.control[i].vht_mcs, 0xff,
+ sizeof(arvif->bitrate_mask.control[i].vht_mcs));
+ }
+
+ bit = __ffs64(ab->free_vdev_map);
+
+ arvif->vdev_id = bit;
+ arvif->vdev_subtype = WMI_VDEV_SUBTYPE_NONE;
+
+ switch (vif->type) {
+ case NL80211_IFTYPE_UNSPECIFIED:
+ case NL80211_IFTYPE_STATION:
+ arvif->vdev_type = WMI_VDEV_TYPE_STA;
+ break;
+ case NL80211_IFTYPE_MESH_POINT:
+ arvif->vdev_subtype = WMI_VDEV_SUBTYPE_MESH_11S;
+ fallthrough;
+ case NL80211_IFTYPE_AP:
+ arvif->vdev_type = WMI_VDEV_TYPE_AP;
+ break;
+ case NL80211_IFTYPE_MONITOR:
+ arvif->vdev_type = WMI_VDEV_TYPE_MONITOR;
+ ar->monitor_vdev_id = bit;
+ break;
+ default:
+ WARN_ON(1);
+ break;
+ }
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac add interface id %d type %d subtype %d map %llx\n",
+ arvif->vdev_id, arvif->vdev_type, arvif->vdev_subtype,
+ ab->free_vdev_map);
+
+ vif->cab_queue = arvif->vdev_id % (ATH12K_HW_MAX_QUEUES - 1);
+ for (i = 0; i < ARRAY_SIZE(vif->hw_queue); i++)
+ vif->hw_queue[i] = i % (ATH12K_HW_MAX_QUEUES - 1);
+
+ ath12k_mac_setup_vdev_create_arg(arvif, &vdev_arg);
+
+ ret = ath12k_wmi_vdev_create(ar, vif->addr, &vdev_arg);
+ if (ret) {
+ ath12k_warn(ab, "failed to create WMI vdev %d: %d\n",
+ arvif->vdev_id, ret);
+ goto err;
+ }
+
+ ar->num_created_vdevs++;
+ ath12k_dbg(ab, ATH12K_DBG_MAC, "vdev %pM created, vdev_id %d\n",
+ vif->addr, arvif->vdev_id);
+ ar->allocated_vdev_map |= 1LL << arvif->vdev_id;
+ ab->free_vdev_map &= ~(1LL << arvif->vdev_id);
+
+ spin_lock_bh(&ar->data_lock);
+ list_add(&arvif->list, &ar->arvifs);
+ spin_unlock_bh(&ar->data_lock);
+
+ ath12k_mac_op_update_vif_offload(hw, vif);
+
+ nss = hweight32(ar->cfg_tx_chainmask) ? : 1;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ WMI_VDEV_PARAM_NSS, nss);
+ if (ret) {
+ ath12k_warn(ab, "failed to set vdev %d chainmask 0x%x, nss %d :%d\n",
+ arvif->vdev_id, ar->cfg_tx_chainmask, nss, ret);
+ goto err_vdev_del;
+ }
+
+ switch (arvif->vdev_type) {
+ case WMI_VDEV_TYPE_AP:
+ peer_param.vdev_id = arvif->vdev_id;
+ peer_param.peer_addr = vif->addr;
+ peer_param.peer_type = WMI_PEER_TYPE_DEFAULT;
+ ret = ath12k_peer_create(ar, arvif, NULL, &peer_param);
+ if (ret) {
+ ath12k_warn(ab, "failed to vdev %d create peer for AP: %d\n",
+ arvif->vdev_id, ret);
+ goto err_vdev_del;
+ }
+
+ ret = ath12k_mac_set_kickout(arvif);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set vdev %i kickout parameters: %d\n",
+ arvif->vdev_id, ret);
+ goto err_peer_del;
+ }
+ break;
+ case WMI_VDEV_TYPE_STA:
+ param_id = WMI_STA_PS_PARAM_RX_WAKE_POLICY;
+ param_value = WMI_STA_PS_RX_WAKE_POLICY_WAKE;
+ ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id,
+ param_id, param_value);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set vdev %d RX wake policy: %d\n",
+ arvif->vdev_id, ret);
+ goto err_peer_del;
+ }
+
+ param_id = WMI_STA_PS_PARAM_TX_WAKE_THRESHOLD;
+ param_value = WMI_STA_PS_TX_WAKE_THRESHOLD_ALWAYS;
+ ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id,
+ param_id, param_value);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set vdev %d TX wake threshold: %d\n",
+ arvif->vdev_id, ret);
+ goto err_peer_del;
+ }
+
+ param_id = WMI_STA_PS_PARAM_PSPOLL_COUNT;
+ param_value = WMI_STA_PS_PSPOLL_COUNT_NO_MAX;
+ ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id,
+ param_id, param_value);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set vdev %d pspoll count: %d\n",
+ arvif->vdev_id, ret);
+ goto err_peer_del;
+ }
+
+ ret = ath12k_wmi_pdev_set_ps_mode(ar, arvif->vdev_id, false);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to disable vdev %d ps mode: %d\n",
+ arvif->vdev_id, ret);
+ goto err_peer_del;
+ }
+ break;
+ default:
+ break;
+ }
+
+ arvif->txpower = vif->bss_conf.txpower;
+ ret = ath12k_mac_txpower_recalc(ar);
+ if (ret)
+ goto err_peer_del;
+
+ param_id = WMI_VDEV_PARAM_RTS_THRESHOLD;
+ param_value = ar->hw->wiphy->rts_threshold;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ param_id, param_value);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set rts threshold for vdev %d: %d\n",
+ arvif->vdev_id, ret);
+ }
+
+ ath12k_dp_vdev_tx_attach(ar, arvif);
+
+ if (vif->type != NL80211_IFTYPE_MONITOR && ar->monitor_conf_enabled)
+ ath12k_mac_monitor_vdev_create(ar);
+
+ mutex_unlock(&ar->conf_mutex);
+
+ return ret;
+
+err_peer_del:
+ if (arvif->vdev_type == WMI_VDEV_TYPE_AP) {
+ reinit_completion(&ar->peer_delete_done);
+
+ ret = ath12k_wmi_send_peer_delete_cmd(ar, vif->addr,
+ arvif->vdev_id);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to delete peer vdev_id %d addr %pM\n",
+ arvif->vdev_id, vif->addr);
+ goto err;
+ }
+
+ ret = ath12k_wait_for_peer_delete_done(ar, arvif->vdev_id,
+ vif->addr);
+ if (ret)
+ goto err;
+
+ ar->num_peers--;
+ }
+
+err_vdev_del:
+ ath12k_wmi_vdev_delete(ar, arvif->vdev_id);
+ ar->num_created_vdevs--;
+ ar->allocated_vdev_map &= ~(1LL << arvif->vdev_id);
+ ab->free_vdev_map |= 1LL << arvif->vdev_id;
+ ab->free_vdev_stats_id_map &= ~(1LL << arvif->vdev_stats_id);
+ spin_lock_bh(&ar->data_lock);
+ list_del(&arvif->list);
+ spin_unlock_bh(&ar->data_lock);
+
+err:
+ mutex_unlock(&ar->conf_mutex);
+
+ return ret;
+}
+
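+/* Clear the vif pointer from tx descriptors still in flight so that tx
+ * completions arriving after the interface is removed do not dereference
+ * a stale ieee80211_vif.
+ */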
+static void ath12k_mac_vif_unref(struct ath12k_dp *dp, struct ieee80211_vif *vif)
+{
+ struct ath12k_tx_desc_info *tx_desc_info, *tmp1;
+ struct ath12k_skb_cb *skb_cb;
+ struct sk_buff *skb;
+ int i;
+
+ for (i = 0; i < ATH12K_HW_MAX_QUEUES; i++) {
+ spin_lock_bh(&dp->tx_desc_lock[i]);
+
+ list_for_each_entry_safe(tx_desc_info, tmp1, &dp->tx_desc_used_list[i],
+ list) {
+ skb = tx_desc_info->skb;
+ if (!skb)
+ continue;
+
+ skb_cb = ATH12K_SKB_CB(skb);
+ if (skb_cb->vif == vif)
+ skb_cb->vif = NULL;
+ }
+
+ spin_unlock_bh(&dp->tx_desc_lock[i]);
+ }
+}
+
+static void ath12k_mac_op_remove_interface(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
+ struct ath12k_base *ab = ar->ab;
+ unsigned long time_left;
+ int ret;
+
+ mutex_lock(&ar->conf_mutex);
+
+ ath12k_dbg(ab, ATH12K_DBG_MAC, "mac remove interface (vdev %d)\n",
+ arvif->vdev_id);
+
+ if (arvif->vdev_type == WMI_VDEV_TYPE_AP) {
+ ret = ath12k_peer_delete(ar, arvif->vdev_id, vif->addr);
+ if (ret)
+ ath12k_warn(ab, "failed to submit AP self-peer removal on vdev %d: %d\n",
+ arvif->vdev_id, ret);
+ }
+
+ reinit_completion(&ar->vdev_delete_done);
+
+ ret = ath12k_wmi_vdev_delete(ar, arvif->vdev_id);
+ if (ret) {
+ ath12k_warn(ab, "failed to delete WMI vdev %d: %d\n",
+ arvif->vdev_id, ret);
+ goto err_vdev_del;
+ }
+
+ time_left = wait_for_completion_timeout(&ar->vdev_delete_done,
+ ATH12K_VDEV_DELETE_TIMEOUT_HZ);
+ if (time_left == 0) {
+ ath12k_warn(ab, "Timeout in receiving vdev delete response\n");
+ goto err_vdev_del;
+ }
+
+ if (arvif->vdev_type == WMI_VDEV_TYPE_MONITOR) {
+ ar->monitor_vdev_id = -1;
+ ar->monitor_vdev_created = false;
+ } else if (ar->monitor_vdev_created && !ar->monitor_started) {
+ ret = ath12k_mac_monitor_vdev_delete(ar);
+ }
+
+ ab->free_vdev_map |= 1LL << (arvif->vdev_id);
+ ar->allocated_vdev_map &= ~(1LL << arvif->vdev_id);
+ ab->free_vdev_stats_id_map &= ~(1LL << arvif->vdev_stats_id);
+ ar->num_created_vdevs--;
+
+ ath12k_dbg(ab, ATH12K_DBG_MAC, "vdev %pM deleted, vdev_id %d\n",
+ vif->addr, arvif->vdev_id);
+
+err_vdev_del:
+ spin_lock_bh(&ar->data_lock);
+ list_del(&arvif->list);
+ spin_unlock_bh(&ar->data_lock);
+
+ ath12k_peer_cleanup(ar, arvif->vdev_id);
+
+ idr_for_each(&ar->txmgmt_idr,
+ ath12k_mac_vif_txmgmt_idr_remove, vif);
+
+ ath12k_mac_vif_unref(&ab->dp, vif);
+ ath12k_dp_tx_put_bank_profile(&ab->dp, arvif->bank_id);
+
+ /* Recalc txpower for remaining vdev */
+ ath12k_mac_txpower_recalc(ar);
+ clear_bit(ATH12K_FLAG_MONITOR_ENABLED, &ar->monitor_flags);
+
+ /* TODO: recalc traffic pause state based on the available vdevs */
+
+ mutex_unlock(&ar->conf_mutex);
+}
+
+/* FIXME: Has to be verified. */
+#define SUPPORTED_FILTERS \
+ (FIF_ALLMULTI | \
+ FIF_CONTROL | \
+ FIF_PSPOLL | \
+ FIF_OTHER_BSS | \
+ FIF_BCN_PRBRESP_PROMISC | \
+ FIF_PROBE_REQ | \
+ FIF_FCSFAIL)
+
+static void ath12k_mac_op_configure_filter(struct ieee80211_hw *hw,
+ unsigned int changed_flags,
+ unsigned int *total_flags,
+ u64 multicast)
+{
+ struct ath12k *ar = hw->priv;
+ bool reset_flag;
+ int ret;
+
+ mutex_lock(&ar->conf_mutex);
+
+ changed_flags &= SUPPORTED_FILTERS;
+ *total_flags &= SUPPORTED_FILTERS;
+ ar->filter_flags = *total_flags;
+
+ /* For monitor mode */
+ reset_flag = !(ar->filter_flags & FIF_BCN_PRBRESP_PROMISC);
+
+ ret = ath12k_dp_tx_htt_monitor_mode_ring_config(ar, reset_flag);
+ if (!ret) {
+ if (!reset_flag)
+ set_bit(ATH12K_FLAG_MONITOR_ENABLED, &ar->monitor_flags);
+ else
+ clear_bit(ATH12K_FLAG_MONITOR_ENABLED, &ar->monitor_flags);
+ } else {
+ ath12k_warn(ar->ab,
+ "fail to set monitor filter: %d\n", ret);
+ }
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
+ "changed_flags:0x%x, total_flags:0x%x, reset_flag:%d\n",
+ changed_flags, *total_flags, reset_flag);
+
+ mutex_unlock(&ar->conf_mutex);
+}
+
+static int ath12k_mac_op_get_antenna(struct ieee80211_hw *hw, u32 *tx_ant, u32 *rx_ant)
+{
+ struct ath12k *ar = hw->priv;
+
+ mutex_lock(&ar->conf_mutex);
+
+ *tx_ant = ar->cfg_tx_chainmask;
+ *rx_ant = ar->cfg_rx_chainmask;
+
+ mutex_unlock(&ar->conf_mutex);
+
+ return 0;
+}
+
+static int ath12k_mac_op_set_antenna(struct ieee80211_hw *hw, u32 tx_ant, u32 rx_ant)
+{
+ struct ath12k *ar = hw->priv;
+ int ret;
+
+ mutex_lock(&ar->conf_mutex);
+ ret = __ath12k_set_antenna(ar, tx_ant, rx_ant);
+ mutex_unlock(&ar->conf_mutex);
+
+ return ret;
+}
+
+static int ath12k_mac_op_ampdu_action(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params)
+{
+ struct ath12k *ar = hw->priv;
+ int ret = -EINVAL;
+
+ mutex_lock(&ar->conf_mutex);
+
+ switch (params->action) {
+ case IEEE80211_AMPDU_RX_START:
+ ret = ath12k_dp_rx_ampdu_start(ar, params);
+ break;
+ case IEEE80211_AMPDU_RX_STOP:
+ ret = ath12k_dp_rx_ampdu_stop(ar, params);
+ break;
+ case IEEE80211_AMPDU_TX_START:
+ case IEEE80211_AMPDU_TX_STOP_CONT:
+ case IEEE80211_AMPDU_TX_STOP_FLUSH:
+ case IEEE80211_AMPDU_TX_STOP_FLUSH_CONT:
+ case IEEE80211_AMPDU_TX_OPERATIONAL:
+ /* Tx A-MPDU aggregation offloaded to hw/fw so deny mac80211
+ * Tx aggregation requests.
+ */
+ ret = -EOPNOTSUPP;
+ break;
+ }
+
+ mutex_unlock(&ar->conf_mutex);
+
+ return ret;
+}
+
+static int ath12k_mac_op_add_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_base *ab = ar->ab;
+
+ ath12k_dbg(ab, ATH12K_DBG_MAC,
+ "mac chanctx add freq %u width %d ptr %pK\n",
+ ctx->def.chan->center_freq, ctx->def.width, ctx);
+
+ mutex_lock(&ar->conf_mutex);
+
+ spin_lock_bh(&ar->data_lock);
+ /* TODO: In case of multiple channel contexts, populate rx_channel from
+ * Rx PPDU desc information.
+ */
+ ar->rx_channel = ctx->def.chan;
+ spin_unlock_bh(&ar->data_lock);
+
+ mutex_unlock(&ar->conf_mutex);
+
+ return 0;
+}
+
+static void ath12k_mac_op_remove_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_base *ab = ar->ab;
+
+ ath12k_dbg(ab, ATH12K_DBG_MAC,
+ "mac chanctx remove freq %u width %d ptr %pK\n",
+ ctx->def.chan->center_freq, ctx->def.width, ctx);
+
+ mutex_lock(&ar->conf_mutex);
+
+ spin_lock_bh(&ar->data_lock);
+ /* TODO: If one more channel context is left, populate rx_channel
+ * with the channel of that remaining channel context.
+ */
+ ar->rx_channel = NULL;
+ spin_unlock_bh(&ar->data_lock);
+
+ mutex_unlock(&ar->conf_mutex);
+}
+
+static int
+ath12k_mac_vdev_start_restart(struct ath12k_vif *arvif,
+ const struct cfg80211_chan_def *chandef,
+ bool restart)
+{
+ struct ath12k *ar = arvif->ar;
+ struct ath12k_base *ab = ar->ab;
+ struct wmi_vdev_start_req_arg arg = {};
+ int he_support = arvif->vif->bss_conf.he_support;
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ reinit_completion(&ar->vdev_setup_done);
+
+ arg.vdev_id = arvif->vdev_id;
+ arg.dtim_period = arvif->dtim_period;
+ arg.bcn_intval = arvif->beacon_interval;
+
+ arg.freq = chandef->chan->center_freq;
+ arg.band_center_freq1 = chandef->center_freq1;
+ arg.band_center_freq2 = chandef->center_freq2;
+ arg.mode = ath12k_phymodes[chandef->chan->band][chandef->width];
+
+ arg.min_power = 0;
+ arg.max_power = chandef->chan->max_power * 2;
+ arg.max_reg_power = chandef->chan->max_reg_power * 2;
+ arg.max_antenna_gain = chandef->chan->max_antenna_gain * 2;
+
+ arg.pref_tx_streams = ar->num_tx_chains;
+ arg.pref_rx_streams = ar->num_rx_chains;
+
+ if (arvif->vdev_type == WMI_VDEV_TYPE_AP) {
+ arg.ssid = arvif->u.ap.ssid;
+ arg.ssid_len = arvif->u.ap.ssid_len;
+ arg.hidden_ssid = arvif->u.ap.hidden_ssid;
+
+ /* For now allow DFS for AP mode */
+ arg.chan_radar = !!(chandef->chan->flags & IEEE80211_CHAN_RADAR);
+
+ arg.passive = arg.chan_radar;
+
+ spin_lock_bh(&ab->base_lock);
+ arg.regdomain = ar->ab->dfs_region;
+ spin_unlock_bh(&ab->base_lock);
+
+ /* TODO: Notify if secondary 80 MHz also needs radar detection */
+ if (he_support) {
+ ret = ath12k_set_he_mu_sounding_mode(ar, arvif);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set he mode vdev %i\n",
+ arg.vdev_id);
+ return ret;
+ }
+ }
+ }
+
+ arg.passive |= !!(chandef->chan->flags & IEEE80211_CHAN_NO_IR);
+
+ ath12k_dbg(ab, ATH12K_DBG_MAC,
+ "mac vdev %d start center_freq %d phymode %s\n",
+ arg.vdev_id, arg.freq,
+ ath12k_mac_phymode_str(arg.mode));
+
+ ret = ath12k_wmi_vdev_start(ar, &arg, restart);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to %s WMI vdev %i\n",
+ restart ? "restart" : "start", arg.vdev_id);
+ return ret;
+ }
+
+ ret = ath12k_mac_vdev_setup_sync(ar);
+ if (ret) {
+ ath12k_warn(ab, "failed to synchronize setup for vdev %i %s: %d\n",
+ arg.vdev_id, restart ? "restart" : "start", ret);
+ return ret;
+ }
+
+ ar->num_started_vdevs++;
+ ath12k_dbg(ab, ATH12K_DBG_MAC, "vdev %pM started, vdev_id %d\n",
+ arvif->vif->addr, arvif->vdev_id);
+
+ /* Enable the CAC flag when the channel's DFS CAC time (dfs_cac_ms,
+ * valid only for radar channels) is non-zero and its DFS state is
+ * NL80211_DFS_USABLE, which indicates CAC must be completed before
+ * the channel can be used. This flag is used to drop rx packets
+ * during CAC.
+ */
+ /* TODO: Set the flag for other interface types as required */
+ if (arvif->vdev_type == WMI_VDEV_TYPE_AP &&
+ chandef->chan->dfs_cac_ms &&
+ chandef->chan->dfs_state == NL80211_DFS_USABLE) {
+ set_bit(ATH12K_CAC_RUNNING, &ar->dev_flags);
+ ath12k_dbg(ab, ATH12K_DBG_MAC,
+ "CAC Started in chan_freq %d for vdev %d\n",
+ arg.freq, arg.vdev_id);
+ }
+
+ ret = ath12k_mac_set_txbf_conf(arvif);
+ if (ret)
+ ath12k_warn(ab, "failed to set txbf conf for vdev %d: %d\n",
+ arvif->vdev_id, ret);
+
+ return 0;
+}
+
+static int ath12k_mac_vdev_stop(struct ath12k_vif *arvif)
+{
+ struct ath12k *ar = arvif->ar;
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ reinit_completion(&ar->vdev_setup_done);
+
+ ret = ath12k_wmi_vdev_stop(ar, arvif->vdev_id);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to stop WMI vdev %i: %d\n",
+ arvif->vdev_id, ret);
+ goto err;
+ }
+
+ ret = ath12k_mac_vdev_setup_sync(ar);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to synchronize setup for vdev %i: %d\n",
+ arvif->vdev_id, ret);
+ goto err;
+ }
+
+ WARN_ON(ar->num_started_vdevs == 0);
+
+ ar->num_started_vdevs--;
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "vdev %pM stopped, vdev_id %d\n",
+ arvif->vif->addr, arvif->vdev_id);
+
+ if (test_bit(ATH12K_CAC_RUNNING, &ar->dev_flags)) {
+ clear_bit(ATH12K_CAC_RUNNING, &ar->dev_flags);
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "CAC Stopped for vdev %d\n",
+ arvif->vdev_id);
+ }
+
+ return 0;
+err:
+ return ret;
+}
+
+static int ath12k_mac_vdev_start(struct ath12k_vif *arvif,
+ const struct cfg80211_chan_def *chandef)
+{
+ return ath12k_mac_vdev_start_restart(arvif, chandef, false);
+}
+
+static int ath12k_mac_vdev_restart(struct ath12k_vif *arvif,
+ const struct cfg80211_chan_def *chandef)
+{
+ return ath12k_mac_vdev_start_restart(arvif, chandef, true);
+}
+
+struct ath12k_mac_change_chanctx_arg {
+ struct ieee80211_chanctx_conf *ctx;
+ struct ieee80211_vif_chanctx_switch *vifs;
+ int n_vifs;
+ int next_vif;
+};
+
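+/* Changing a channel context takes two atomic passes over the active
+ * interfaces: the first pass only counts the vifs bound to the context
+ * (cnt_iter), the second fills the preallocated switch array with
+ * old/new context pairs (fill_iter).
+ */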
+static void
+ath12k_mac_change_chanctx_cnt_iter(void *data, u8 *mac,
+ struct ieee80211_vif *vif)
+{
+ struct ath12k_mac_change_chanctx_arg *arg = data;
+
+ if (rcu_access_pointer(vif->bss_conf.chanctx_conf) != arg->ctx)
+ return;
+
+ arg->n_vifs++;
+}
+
+static void
+ath12k_mac_change_chanctx_fill_iter(void *data, u8 *mac,
+ struct ieee80211_vif *vif)
+{
+ struct ath12k_mac_change_chanctx_arg *arg = data;
+ struct ieee80211_chanctx_conf *ctx;
+
+ ctx = rcu_access_pointer(vif->bss_conf.chanctx_conf);
+ if (ctx != arg->ctx)
+ return;
+
+ if (WARN_ON(arg->next_vif == arg->n_vifs))
+ return;
+
+ arg->vifs[arg->next_vif].vif = vif;
+ arg->vifs[arg->next_vif].old_ctx = ctx;
+ arg->vifs[arg->next_vif].new_ctx = ctx;
+ arg->next_vif++;
+}
+
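+/* Switch the given vifs to their new channel definition: bring every
+ * vdev down first so the channel resources are released, then restart
+ * each vdev on the new channel, refresh its beacon template and bring
+ * it up again.
+ */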
+static void
+ath12k_mac_update_vif_chan(struct ath12k *ar,
+ struct ieee80211_vif_chanctx_switch *vifs,
+ int n_vifs)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_vif *arvif;
+ int ret;
+ int i;
+ bool monitor_vif = false;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ for (i = 0; i < n_vifs; i++) {
+ arvif = (void *)vifs[i].vif->drv_priv;
+
+ if (vifs[i].vif->type == NL80211_IFTYPE_MONITOR)
+ monitor_vif = true;
+
+ ath12k_dbg(ab, ATH12K_DBG_MAC,
+ "mac chanctx switch vdev_id %i freq %u->%u width %d->%d\n",
+ arvif->vdev_id,
+ vifs[i].old_ctx->def.chan->center_freq,
+ vifs[i].new_ctx->def.chan->center_freq,
+ vifs[i].old_ctx->def.width,
+ vifs[i].new_ctx->def.width);
+
+ if (WARN_ON(!arvif->is_started))
+ continue;
+
+ if (WARN_ON(!arvif->is_up))
+ continue;
+
+ ret = ath12k_wmi_vdev_down(ar, arvif->vdev_id);
+ if (ret) {
+ ath12k_warn(ab, "failed to down vdev %d: %d\n",
+ arvif->vdev_id, ret);
+ continue;
+ }
+ }
+
+ /* All relevant vdevs are downed and associated channel resources
+ * should be available for the channel switch now.
+ */
+
+ /* TODO: Update ar->rx_channel */
+
+ for (i = 0; i < n_vifs; i++) {
+ arvif = (void *)vifs[i].vif->drv_priv;
+
+ if (WARN_ON(!arvif->is_started))
+ continue;
+
+ if (WARN_ON(!arvif->is_up))
+ continue;
+
+ ret = ath12k_mac_vdev_restart(arvif, &vifs[i].new_ctx->def);
+ if (ret) {
+ ath12k_warn(ab, "failed to restart vdev %d: %d\n",
+ arvif->vdev_id, ret);
+ continue;
+ }
+
+ ret = ath12k_mac_setup_bcn_tmpl(arvif);
+ if (ret)
+ ath12k_warn(ab, "failed to update bcn tmpl during csa: %d\n",
+ ret);
+
+ ret = ath12k_wmi_vdev_up(arvif->ar, arvif->vdev_id, arvif->aid,
+ arvif->bssid);
+ if (ret) {
+ ath12k_warn(ab, "failed to bring vdev up %d: %d\n",
+ arvif->vdev_id, ret);
+ continue;
+ }
+ }
+
+ /* Restart the internal monitor vdev on new channel */
+ if (!monitor_vif && ar->monitor_vdev_created) {
+ if (!ath12k_mac_monitor_stop(ar))
+ ath12k_mac_monitor_start(ar);
+ }
+}
+
+static void
+ath12k_mac_update_active_vif_chan(struct ath12k *ar,
+ struct ieee80211_chanctx_conf *ctx)
+{
+ struct ath12k_mac_change_chanctx_arg arg = { .ctx = ctx };
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ ieee80211_iterate_active_interfaces_atomic(ar->hw,
+ IEEE80211_IFACE_ITER_NORMAL,
+ ath12k_mac_change_chanctx_cnt_iter,
+ &arg);
+ if (arg.n_vifs == 0)
+ return;
+
+ arg.vifs = kcalloc(arg.n_vifs, sizeof(arg.vifs[0]), GFP_KERNEL);
+ if (!arg.vifs)
+ return;
+
+ ieee80211_iterate_active_interfaces_atomic(ar->hw,
+ IEEE80211_IFACE_ITER_NORMAL,
+ ath12k_mac_change_chanctx_fill_iter,
+ &arg);
+
+ ath12k_mac_update_vif_chan(ar, arg.vifs, arg.n_vifs);
+
+ kfree(arg.vifs);
+}
+
+static void ath12k_mac_op_change_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx,
+ u32 changed)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_base *ab = ar->ab;
+
+ mutex_lock(&ar->conf_mutex);
+
+ ath12k_dbg(ab, ATH12K_DBG_MAC,
+ "mac chanctx change freq %u width %d ptr %pK changed %x\n",
+ ctx->def.chan->center_freq, ctx->def.width, ctx, changed);
+
+ /* This shouldn't really happen because channel switching should use
+ * switch_vif_chanctx().
+ */
+ if (WARN_ON(changed & IEEE80211_CHANCTX_CHANGE_CHANNEL))
+ goto unlock;
+
+ if (changed & IEEE80211_CHANCTX_CHANGE_WIDTH)
+ ath12k_mac_update_active_vif_chan(ar, ctx);
+
+ /* TODO: Recalc radar detection */
+
+unlock:
+ mutex_unlock(&ar->conf_mutex);
+}
+
+static int ath12k_start_vdev_delay(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ int ret;
+
+ if (WARN_ON(arvif->is_started))
+ return -EBUSY;
+
+ ret = ath12k_mac_vdev_start(arvif, &arvif->chanctx.def);
+ if (ret) {
+ ath12k_warn(ab, "failed to start vdev %i addr %pM on freq %d: %d\n",
+ arvif->vdev_id, vif->addr,
+ arvif->chanctx.def.chan->center_freq, ret);
+ return ret;
+ }
+
+ if (arvif->vdev_type == WMI_VDEV_TYPE_MONITOR) {
+ ret = ath12k_monitor_vdev_up(ar, arvif->vdev_id);
+ if (ret) {
+ ath12k_warn(ab, "failed put monitor up: %d\n", ret);
+ return ret;
+ }
+ }
+
+ arvif->is_started = true;
+
+ /* TODO: Setup ps and cts/rts protection */
+ return 0;
+}
+
+static int
+ath12k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_chanctx_conf *ctx)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ int ret;
+ struct ath12k_wmi_peer_create_arg param;
+
+ mutex_lock(&ar->conf_mutex);
+
+ ath12k_dbg(ab, ATH12K_DBG_MAC,
+ "mac chanctx assign ptr %pK vdev_id %i\n",
+ ctx, arvif->vdev_id);
+
+ /* for some targets bss peer must be created before vdev_start */
+ if (ab->hw_params->vdev_start_delay &&
+ arvif->vdev_type != WMI_VDEV_TYPE_AP &&
+ arvif->vdev_type != WMI_VDEV_TYPE_MONITOR &&
+ !ath12k_peer_exist_by_vdev_id(ab, arvif->vdev_id)) {
+ memcpy(&arvif->chanctx, ctx, sizeof(*ctx));
+ ret = 0;
+ goto out;
+ }
+
+ if (WARN_ON(arvif->is_started)) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ if (ab->hw_params->vdev_start_delay &&
+ (arvif->vdev_type == WMI_VDEV_TYPE_AP ||
+ arvif->vdev_type == WMI_VDEV_TYPE_MONITOR)) {
+ param.vdev_id = arvif->vdev_id;
+ param.peer_type = WMI_PEER_TYPE_DEFAULT;
+ param.peer_addr = ar->mac_addr;
+
+ ret = ath12k_peer_create(ar, arvif, NULL, &param);
+ if (ret) {
+ ath12k_warn(ab, "failed to create peer after vdev start delay: %d",
+ ret);
+ goto out;
+ }
+ }
+
+ if (arvif->vdev_type == WMI_VDEV_TYPE_MONITOR) {
+ ret = ath12k_mac_monitor_start(ar);
+ if (ret)
+ goto out;
+ arvif->is_started = true;
+ goto out;
+ }
+
+ ret = ath12k_mac_vdev_start(arvif, &ctx->def);
+ if (ret) {
+ ath12k_warn(ab, "failed to start vdev %i addr %pM on freq %d: %d\n",
+ arvif->vdev_id, vif->addr,
+ ctx->def.chan->center_freq, ret);
+ goto out;
+ }
+
+ if (arvif->vdev_type != WMI_VDEV_TYPE_MONITOR && ar->monitor_vdev_created)
+ ath12k_mac_monitor_start(ar);
+
+ arvif->is_started = true;
+
+ /* TODO: Setup ps and cts/rts protection */
+
+out:
+ mutex_unlock(&ar->conf_mutex);
+
+ return ret;
+}
+
+static void
+ath12k_mac_op_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_chanctx_conf *ctx)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ int ret;
+
+ mutex_lock(&ar->conf_mutex);
+
+ ath12k_dbg(ab, ATH12K_DBG_MAC,
+ "mac chanctx unassign ptr %pK vdev_id %i\n",
+ ctx, arvif->vdev_id);
+
+ WARN_ON(!arvif->is_started);
+
+ if (ab->hw_params->vdev_start_delay &&
+ arvif->vdev_type == WMI_VDEV_TYPE_MONITOR &&
+ ath12k_peer_find_by_addr(ab, ar->mac_addr))
+ ath12k_peer_delete(ar, arvif->vdev_id, ar->mac_addr);
+
+ if (arvif->vdev_type == WMI_VDEV_TYPE_MONITOR) {
+ ret = ath12k_mac_monitor_stop(ar);
+ if (ret) {
+ mutex_unlock(&ar->conf_mutex);
+ return;
+ }
+
+ arvif->is_started = false;
+ mutex_unlock(&ar->conf_mutex);
+ }
+
+ ret = ath12k_mac_vdev_stop(arvif);
+ if (ret)
+ ath12k_warn(ab, "failed to stop vdev %i: %d\n",
+ arvif->vdev_id, ret);
+
+ arvif->is_started = false;
+
+ if (ab->hw_params->vdev_start_delay &&
+ arvif->vdev_type == WMI_VDEV_TYPE_MONITOR)
+ ath12k_wmi_vdev_down(ar, arvif->vdev_id);
+
+ if (arvif->vdev_type != WMI_VDEV_TYPE_MONITOR &&
+ ar->num_started_vdevs == 1 && ar->monitor_vdev_created)
+ ath12k_mac_monitor_stop(ar);
+
+ mutex_unlock(&ar->conf_mutex);
+}
+
+static int
+ath12k_mac_op_switch_vif_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_vif_chanctx_switch *vifs,
+ int n_vifs,
+ enum ieee80211_chanctx_switch_mode mode)
+{
+ struct ath12k *ar = hw->priv;
+
+ mutex_lock(&ar->conf_mutex);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
+ "mac chanctx switch n_vifs %d mode %d\n",
+ n_vifs, mode);
+ ath12k_mac_update_vif_chan(ar, vifs, n_vifs);
+
+ mutex_unlock(&ar->conf_mutex);
+
+ return 0;
+}
+
+static int
+ath12k_set_vdev_param_to_all_vifs(struct ath12k *ar, int param, u32 value)
+{
+ struct ath12k_vif *arvif;
+ int ret = 0;
+
+ mutex_lock(&ar->conf_mutex);
+ list_for_each_entry(arvif, &ar->arvifs, list) {
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "setting mac vdev %d param %d value %d\n",
+ param, arvif->vdev_id, value);
+
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ param, value);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set param %d for vdev %d: %d\n",
+ param, arvif->vdev_id, ret);
+ break;
+ }
+ }
+ mutex_unlock(&ar->conf_mutex);
+ return ret;
+}
+
+/* mac80211 stores a device-specific RTS/fragmentation threshold value,
+ * but the ath12k driver applies it per interface (vdev) in firmware.
+ */
+static int ath12k_mac_op_set_rts_threshold(struct ieee80211_hw *hw, u32 value)
+{
+ struct ath12k *ar = hw->priv;
+ int param_id = WMI_VDEV_PARAM_RTS_THRESHOLD;
+
+ return ath12k_set_vdev_param_to_all_vifs(ar, param_id, value);
+}
+
+static int ath12k_mac_op_set_frag_threshold(struct ieee80211_hw *hw, u32 value)
+{
+ /* Even though there's a WMI vdev param for the fragmentation threshold,
+ * no known firmware actually implements it. Moreover it is not possible
+ * to leave frame fragmentation to mac80211 because firmware clears the
+ * "more fragments" bit in frame control making it impossible for remote
+ * devices to reassemble frames.
+ *
+ * Hence implement a dummy callback just to say fragmentation isn't
+ * supported. This effectively prevents mac80211 from doing frame
+ * fragmentation in software.
+ */
+ return -EOPNOTSUPP;
+}
+
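+/* Tx is handled by firmware, so individual frames cannot be dropped on
+ * request; a drop request returns immediately, otherwise wait up to
+ * ATH12K_FLUSH_TIMEOUT for the pending tx counter to drain to zero.
+ */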
+static void ath12k_mac_op_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ u32 queues, bool drop)
+{
+ struct ath12k *ar = hw->priv;
+ long time_left;
+
+ if (drop)
+ return;
+
+ time_left = wait_event_timeout(ar->dp.tx_empty_waitq,
+ (atomic_read(&ar->dp.num_tx_pending) == 0),
+ ATH12K_FLUSH_TIMEOUT);
+ if (time_left == 0)
+ ath12k_warn(ar->ab, "failed to flush transmit queue %ld\n", time_left);
+}
+
+static int
+ath12k_mac_bitrate_mask_num_ht_rates(struct ath12k *ar,
+ enum nl80211_band band,
+ const struct cfg80211_bitrate_mask *mask)
+{
+ int num_rates = 0;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(mask->control[band].ht_mcs); i++)
+ num_rates += hweight16(mask->control[band].ht_mcs[i]);
+
+ return num_rates;
+}
+
+static bool
+ath12k_mac_has_single_legacy_rate(struct ath12k *ar,
+ enum nl80211_band band,
+ const struct cfg80211_bitrate_mask *mask)
+{
+ int num_rates = 0;
+
+ num_rates = hweight32(mask->control[band].legacy);
+
+ if (ath12k_mac_bitrate_mask_num_ht_rates(ar, band, mask))
+ return false;
+
+ if (ath12k_mac_bitrate_mask_num_vht_rates(ar, band, mask))
+ return false;
+
+ return num_rates == 1;
+}
+
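+/* Determine whether the bitrate mask selects exactly one NSS value: per
+ * stream, the HT and VHT MCS masks must be either empty or cover the
+ * full supported MCS range, the resulting HT and VHT stream masks must
+ * match, and the set stream bits must be contiguous from bit 0.
+ */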
+static bool
+ath12k_mac_bitrate_mask_get_single_nss(struct ath12k *ar,
+ enum nl80211_band band,
+ const struct cfg80211_bitrate_mask *mask,
+ int *nss)
+{
+ struct ieee80211_supported_band *sband = &ar->mac.sbands[band];
+ u16 vht_mcs_map = le16_to_cpu(sband->vht_cap.vht_mcs.tx_mcs_map);
+ u8 ht_nss_mask = 0;
+ u8 vht_nss_mask = 0;
+ int i;
+
+ /* No need to consider legacy here. Basic rates are always present
+ * in bitrate mask
+ */
+
+ for (i = 0; i < ARRAY_SIZE(mask->control[band].ht_mcs); i++) {
+ if (mask->control[band].ht_mcs[i] == 0)
+ continue;
+ else if (mask->control[band].ht_mcs[i] ==
+ sband->ht_cap.mcs.rx_mask[i])
+ ht_nss_mask |= BIT(i);
+ else
+ return false;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(mask->control[band].vht_mcs); i++) {
+ if (mask->control[band].vht_mcs[i] == 0)
+ continue;
+ else if (mask->control[band].vht_mcs[i] ==
+ ath12k_mac_get_max_vht_mcs_map(vht_mcs_map, i))
+ vht_nss_mask |= BIT(i);
+ else
+ return false;
+ }
+
+ if (ht_nss_mask != vht_nss_mask)
+ return false;
+
+ if (ht_nss_mask == 0)
+ return false;
+
+ if (BIT(fls(ht_nss_mask)) - 1 != ht_nss_mask)
+ return false;
+
+ *nss = fls(ht_nss_mask);
+
+ return true;
+}
+
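+/* Translate a legacy bitrate mask with exactly one bit set into a WMI
+ * fixed rate code; on 5/6 GHz bands the index is offset by
+ * ATH12K_MAC_FIRST_OFDM_RATE_IDX since those bands carry no CCK rates.
+ */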
+static int
+ath12k_mac_get_single_legacy_rate(struct ath12k *ar,
+ enum nl80211_band band,
+ const struct cfg80211_bitrate_mask *mask,
+ u32 *rate, u8 *nss)
+{
+ int rate_idx;
+ u16 bitrate;
+ u8 preamble;
+ u8 hw_rate;
+
+ if (hweight32(mask->control[band].legacy) != 1)
+ return -EINVAL;
+
+ rate_idx = ffs(mask->control[band].legacy) - 1;
+
+ if (band == NL80211_BAND_5GHZ || band == NL80211_BAND_6GHZ)
+ rate_idx += ATH12K_MAC_FIRST_OFDM_RATE_IDX;
+
+ hw_rate = ath12k_legacy_rates[rate_idx].hw_value;
+ bitrate = ath12k_legacy_rates[rate_idx].bitrate;
+
+ if (ath12k_mac_bitrate_is_cck(bitrate))
+ preamble = WMI_RATE_PREAMBLE_CCK;
+ else
+ preamble = WMI_RATE_PREAMBLE_OFDM;
+
+ *nss = 1;
+ *rate = ATH12K_HW_RATE_CODE(hw_rate, 0, preamble);
+
+ return 0;
+}
+
+static int ath12k_mac_set_fixed_rate_params(struct ath12k_vif *arvif,
+ u32 rate, u8 nss, u8 sgi, u8 ldpc)
+{
+ struct ath12k *ar = arvif->ar;
+ u32 vdev_param;
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac set fixed rate params vdev %i rate 0x%02x nss %u sgi %u\n",
+ arvif->vdev_id, rate, nss, sgi);
+
+ vdev_param = WMI_VDEV_PARAM_FIXED_RATE;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ vdev_param, rate);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set fixed rate param 0x%02x: %d\n",
+ rate, ret);
+ return ret;
+ }
+
+ vdev_param = WMI_VDEV_PARAM_NSS;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ vdev_param, nss);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set nss param %d: %d\n",
+ nss, ret);
+ return ret;
+ }
+
+ vdev_param = WMI_VDEV_PARAM_SGI;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ vdev_param, sgi);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set sgi param %d: %d\n",
+ sgi, ret);
+ return ret;
+ }
+
+ vdev_param = WMI_VDEV_PARAM_LDPC;
+ ret = ath12k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ vdev_param, ldpc);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set ldpc param %d: %d\n",
+ ldpc, ret);
+ return ret;
+ }
+
+ return 0;
+}
+
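+/* Check that each per-stream VHT MCS mask describes a range the firmware
+ * can program: 0 (stream unused), BIT(8) - 1 (MCS 0-7), BIT(9) - 1
+ * (MCS 0-8) or BIT(10) - 1 (MCS 0-9).
+ */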
+static bool
+ath12k_mac_vht_mcs_range_present(struct ath12k *ar,
+ enum nl80211_band band,
+ const struct cfg80211_bitrate_mask *mask)
+{
+ int i;
+ u16 vht_mcs;
+
+ for (i = 0; i < NL80211_VHT_NSS_MAX; i++) {
+ vht_mcs = mask->control[band].vht_mcs[i];
+
+ switch (vht_mcs) {
+ case 0:
+ case BIT(8) - 1:
+ case BIT(9) - 1:
+ case BIT(10) - 1:
+ break;
+ default:
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static void ath12k_mac_set_bitrate_mask_iter(void *data,
+ struct ieee80211_sta *sta)
+{
+ struct ath12k_vif *arvif = data;
+ struct ath12k_sta *arsta = (struct ath12k_sta *)sta->drv_priv;
+ struct ath12k *ar = arvif->ar;
+
+ spin_lock_bh(&ar->data_lock);
+ arsta->changed |= IEEE80211_RC_SUPP_RATES_CHANGED;
+ spin_unlock_bh(&ar->data_lock);
+
+ ieee80211_queue_work(ar->hw, &arsta->update_wk);
+}
+
+static void ath12k_mac_disable_peer_fixed_rate(void *data,
+ struct ieee80211_sta *sta)
+{
+ struct ath12k_vif *arvif = data;
+ struct ath12k *ar = arvif->ar;
+ int ret;
+
+ ret = ath12k_wmi_set_peer_param(ar, sta->addr,
+ arvif->vdev_id,
+ WMI_PEER_PARAM_FIXED_RATE,
+ WMI_FIXED_RATE_NONE);
+ if (ret)
+ ath12k_warn(ar->ab,
+ "failed to disable peer fixed rate for STA %pM ret %d\n",
+ sta->addr, ret);
+}
+
+static int
+ath12k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ const struct cfg80211_bitrate_mask *mask)
+{
+ struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct cfg80211_chan_def def;
+ struct ath12k *ar = arvif->ar;
+ enum nl80211_band band;
+ const u8 *ht_mcs_mask;
+ const u16 *vht_mcs_mask;
+ u32 rate;
+ u8 nss;
+ u8 sgi;
+ u8 ldpc;
+ int single_nss;
+ int ret;
+ int num_rates;
+
+ if (ath12k_mac_vif_chan(vif, &def))
+ return -EPERM;
+
+ band = def.chan->band;
+ ht_mcs_mask = mask->control[band].ht_mcs;
+ vht_mcs_mask = mask->control[band].vht_mcs;
+ ldpc = !!(ar->ht_cap_info & WMI_HT_CAP_LDPC);
+
+ sgi = mask->control[band].gi;
+ if (sgi == NL80211_TXRATE_FORCE_LGI)
+ return -EINVAL;
+
+ /* mac80211 doesn't support sending a fixed HT/VHT MCS alone; it
+ * requires passing at least one of the used basic rates along with it.
+ * Fixed rate setting across different preambles (legacy, HT, VHT) is
+ * not supported by the FW, so the FIXED_RATE vdev param is not
+ * suitable for setting single HT/VHT rates.
+ * A single basic rate passed from userspace, however, can still be
+ * set through the FIXED_RATE param.
+ */
+ if (ath12k_mac_has_single_legacy_rate(ar, band, mask)) {
+ ret = ath12k_mac_get_single_legacy_rate(ar, band, mask, &rate,
+ &nss);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to get single legacy rate for vdev %i: %d\n",
+ arvif->vdev_id, ret);
+ return ret;
+ }
+ ieee80211_iterate_stations_atomic(ar->hw,
+ ath12k_mac_disable_peer_fixed_rate,
+ arvif);
+ } else if (ath12k_mac_bitrate_mask_get_single_nss(ar, band, mask,
+ &single_nss)) {
+ rate = WMI_FIXED_RATE_NONE;
+ nss = single_nss;
+ } else {
+ rate = WMI_FIXED_RATE_NONE;
+ nss = min_t(u32, ar->num_tx_chains,
+ max(ath12k_mac_max_ht_nss(ht_mcs_mask),
+ ath12k_mac_max_vht_nss(vht_mcs_mask)));
+
+ /* If multiple rates across different preambles are given
+ * we can reconfigure this info with all peers using PEER_ASSOC
+ * command with the below exception cases.
+ * - Single VHT Rate : peer_assoc command accommodates only MCS
+ * range values i.e. 0-7, 0-8, 0-9 for VHT. Though mac80211
+ * mandates passing basic rates along with HT/VHT rates, FW
+ * doesn't allow switching from VHT to Legacy. Hence instead of
+ * setting legacy and VHT rates using RATEMASK_CMD vdev cmd,
+ * we could set this VHT rate as peer fixed rate param, which
+ * will override FIXED rate and FW rate control algorithm.
+ * If single VHT rate is passed along with HT rates, we select
+ * the VHT rate as fixed rate for vht peers.
+ * - Multiple VHT Rates : When multiple VHT rates are given, this
+ * can be set using RATEMASK CMD which uses FW rate-ctl alg.
+ * TODO: Setting multiple VHT MCS and replacing peer_assoc with
+ * RATEMASK_CMDID can cover all use cases of setting rates
+ * across multiple preambles and rates within same type.
+ * But requires more validation of the command at this point.
+ */
+
+ num_rates = ath12k_mac_bitrate_mask_num_vht_rates(ar, band,
+ mask);
+
+ if (!ath12k_mac_vht_mcs_range_present(ar, band, mask) &&
+ num_rates > 1) {
+ /* TODO: Handle multiple VHT MCS values setting using
+ * RATEMASK CMD
+ */
+ ath12k_warn(ar->ab,
+ "Setting more than one MCS Value in bitrate mask not supported\n");
+ return -EINVAL;
+ }
+
+ ieee80211_iterate_stations_atomic(ar->hw,
+ ath12k_mac_disable_peer_fixed_rate,
+ arvif);
+
+ mutex_lock(&ar->conf_mutex);
+
+ arvif->bitrate_mask = *mask;
+ ieee80211_iterate_stations_atomic(ar->hw,
+ ath12k_mac_set_bitrate_mask_iter,
+ arvif);
+
+ mutex_unlock(&ar->conf_mutex);
+ }
+
+ mutex_lock(&ar->conf_mutex);
+
+ ret = ath12k_mac_set_fixed_rate_params(arvif, rate, nss, sgi, ldpc);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set fixed rate params on vdev %i: %d\n",
+ arvif->vdev_id, ret);
+ }
+
+ mutex_unlock(&ar->conf_mutex);
+
+ return ret;
+}
+
+static void
+ath12k_mac_op_reconfig_complete(struct ieee80211_hw *hw,
+ enum ieee80211_reconfig_type reconfig_type)
+{
+ struct ath12k *ar = hw->priv;
+ struct ath12k_base *ab = ar->ab;
+ int recovery_count;
+
+ if (reconfig_type != IEEE80211_RECONFIG_TYPE_RESTART)
+ return;
+
+ mutex_lock(&ar->conf_mutex);
+
+ if (ar->state == ATH12K_STATE_RESTARTED) {
+ ath12k_warn(ar->ab, "pdev %d successfully recovered\n",
+ ar->pdev->pdev_id);
+ ar->state = ATH12K_STATE_ON;
+ ieee80211_wake_queues(ar->hw);
+
+ if (ab->is_reset) {
+ recovery_count = atomic_inc_return(&ab->recovery_count);
+ ath12k_dbg(ab, ATH12K_DBG_BOOT, "recovery count %d\n",
+ recovery_count);
+ /* When there are multiple radios in an SOC,
+ * the recovery has to be done for each radio
+ */
+ if (recovery_count == ab->num_radios) {
+ atomic_dec(&ab->reset_count);
+ complete(&ab->reset_complete);
+ ab->is_reset = false;
+ atomic_set(&ab->fail_cont_count, 0);
+ ath12k_dbg(ab, ATH12K_DBG_BOOT, "reset success\n");
+ }
+ }
+ }
+
+ mutex_unlock(&ar->conf_mutex);
+}
+
+static void
+ath12k_mac_update_bss_chan_survey(struct ath12k *ar,
+ struct ieee80211_channel *channel)
+{
+ int ret;
+ enum wmi_bss_chan_info_req_type type = WMI_BSS_SURVEY_REQ_TYPE_READ;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (!test_bit(WMI_TLV_SERVICE_BSS_CHANNEL_INFO_64, ar->ab->wmi_ab.svc_map) ||
+ ar->rx_channel != channel)
+ return;
+
+ if (ar->scan.state != ATH12K_SCAN_IDLE) {
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
+ "ignoring bss chan info req while scanning..\n");
+ return;
+ }
+
+ reinit_completion(&ar->bss_survey_done);
+
+ ret = ath12k_wmi_pdev_bss_chan_info_request(ar, type);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send pdev bss chan info request\n");
+ return;
+ }
+
+ ret = wait_for_completion_timeout(&ar->bss_survey_done, 3 * HZ);
+ if (ret == 0)
+ ath12k_warn(ar->ab, "bss channel survey timed out\n");
+}
+
+static int ath12k_mac_op_get_survey(struct ieee80211_hw *hw, int idx,
+ struct survey_info *survey)
+{
+ struct ath12k *ar = hw->priv;
+ struct ieee80211_supported_band *sband;
+ struct survey_info *ar_survey;
+ int ret = 0;
+
+ if (idx >= ATH12K_NUM_CHANS)
+ return -ENOENT;
+
+ ar_survey = &ar->survey[idx];
+
+ mutex_lock(&ar->conf_mutex);
+
+ sband = hw->wiphy->bands[NL80211_BAND_2GHZ];
+ if (sband && idx >= sband->n_channels) {
+ idx -= sband->n_channels;
+ sband = NULL;
+ }
+
+ if (!sband)
+ sband = hw->wiphy->bands[NL80211_BAND_5GHZ];
+
+ if (!sband || idx >= sband->n_channels) {
+ ret = -ENOENT;
+ goto exit;
+ }
+
+ ath12k_mac_update_bss_chan_survey(ar, &sband->channels[idx]);
+
+ spin_lock_bh(&ar->data_lock);
+ memcpy(survey, ar_survey, sizeof(*survey));
+ spin_unlock_bh(&ar->data_lock);
+
+ survey->channel = &sband->channels[idx];
+
+ if (ar->rx_channel == survey->channel)
+ survey->filled |= SURVEY_INFO_IN_USE;
+
+exit:
+ mutex_unlock(&ar->conf_mutex);
+ return ret;
+}
+
+static void ath12k_mac_op_sta_statistics(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct station_info *sinfo)
+{
+ struct ath12k_sta *arsta = (struct ath12k_sta *)sta->drv_priv;
+
+ sinfo->rx_duration = arsta->rx_duration;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_RX_DURATION);
+
+ sinfo->tx_duration = arsta->tx_duration;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_DURATION);
+
+ if (!arsta->txrate.legacy && !arsta->txrate.nss)
+ return;
+
+ if (arsta->txrate.legacy) {
+ sinfo->txrate.legacy = arsta->txrate.legacy;
+ } else {
+ sinfo->txrate.mcs = arsta->txrate.mcs;
+ sinfo->txrate.nss = arsta->txrate.nss;
+ sinfo->txrate.bw = arsta->txrate.bw;
+ sinfo->txrate.he_gi = arsta->txrate.he_gi;
+ sinfo->txrate.he_dcm = arsta->txrate.he_dcm;
+ sinfo->txrate.he_ru_alloc = arsta->txrate.he_ru_alloc;
+ }
+ sinfo->txrate.flags = arsta->txrate.flags;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE);
+
+ /* TODO: Use real NF instead of default one. */
+ sinfo->signal = arsta->rssi_comb + ATH12K_DEFAULT_NOISE_FLOOR;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL);
+}
+
+static const struct ieee80211_ops ath12k_ops = {
+ .tx = ath12k_mac_op_tx,
+ .wake_tx_queue = ieee80211_handle_wake_tx_queue,
+ .start = ath12k_mac_op_start,
+ .stop = ath12k_mac_op_stop,
+ .reconfig_complete = ath12k_mac_op_reconfig_complete,
+ .add_interface = ath12k_mac_op_add_interface,
+ .remove_interface = ath12k_mac_op_remove_interface,
+ .update_vif_offload = ath12k_mac_op_update_vif_offload,
+ .config = ath12k_mac_op_config,
+ .bss_info_changed = ath12k_mac_op_bss_info_changed,
+ .configure_filter = ath12k_mac_op_configure_filter,
+ .hw_scan = ath12k_mac_op_hw_scan,
+ .cancel_hw_scan = ath12k_mac_op_cancel_hw_scan,
+ .set_key = ath12k_mac_op_set_key,
+ .sta_state = ath12k_mac_op_sta_state,
+ .sta_set_txpwr = ath12k_mac_op_sta_set_txpwr,
+ .sta_rc_update = ath12k_mac_op_sta_rc_update,
+ .conf_tx = ath12k_mac_op_conf_tx,
+ .set_antenna = ath12k_mac_op_set_antenna,
+ .get_antenna = ath12k_mac_op_get_antenna,
+ .ampdu_action = ath12k_mac_op_ampdu_action,
+ .add_chanctx = ath12k_mac_op_add_chanctx,
+ .remove_chanctx = ath12k_mac_op_remove_chanctx,
+ .change_chanctx = ath12k_mac_op_change_chanctx,
+ .assign_vif_chanctx = ath12k_mac_op_assign_vif_chanctx,
+ .unassign_vif_chanctx = ath12k_mac_op_unassign_vif_chanctx,
+ .switch_vif_chanctx = ath12k_mac_op_switch_vif_chanctx,
+ .set_rts_threshold = ath12k_mac_op_set_rts_threshold,
+ .set_frag_threshold = ath12k_mac_op_set_frag_threshold,
+ .set_bitrate_mask = ath12k_mac_op_set_bitrate_mask,
+ .get_survey = ath12k_mac_op_get_survey,
+ .flush = ath12k_mac_op_flush,
+ .sta_statistics = ath12k_mac_op_sta_statistics,
+};
+
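+/* Disable every channel in the band whose center frequency falls outside
+ * [freq_low, freq_high] from the pdev's regulatory capabilities; a zero
+ * bound leaves the band untouched.
+ */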
+static void ath12k_mac_update_ch_list(struct ath12k *ar,
+ struct ieee80211_supported_band *band,
+ u32 freq_low, u32 freq_high)
+{
+ int i;
+
+ if (!(freq_low && freq_high))
+ return;
+
+ for (i = 0; i < band->n_channels; i++) {
+ if (band->channels[i].center_freq < freq_low ||
+ band->channels[i].center_freq > freq_high)
+ band->channels[i].flags |= IEEE80211_CHAN_DISABLED;
+ }
+}
+
+static u32 ath12k_get_phy_id(struct ath12k *ar, u32 band)
+{
+ struct ath12k_pdev *pdev = ar->pdev;
+ struct ath12k_pdev_cap *pdev_cap = &pdev->cap;
+
+ if (band == WMI_HOST_WLAN_2G_CAP)
+ return pdev_cap->band[NL80211_BAND_2GHZ].phy_id;
+
+ if (band == WMI_HOST_WLAN_5G_CAP)
+ return pdev_cap->band[NL80211_BAND_5GHZ].phy_id;
+
+ ath12k_warn(ar->ab, "unsupported phy cap:%d\n", band);
+
+ return 0;
+}
+
+static int ath12k_mac_setup_channels_rates(struct ath12k *ar,
+ u32 supported_bands)
+{
+ struct ieee80211_supported_band *band;
+ struct ath12k_wmi_hal_reg_capabilities_ext_arg *reg_cap;
+ void *channels;
+ u32 phy_id;
+
+ BUILD_BUG_ON((ARRAY_SIZE(ath12k_2ghz_channels) +
+ ARRAY_SIZE(ath12k_5ghz_channels) +
+ ARRAY_SIZE(ath12k_6ghz_channels)) !=
+ ATH12K_NUM_CHANS);
+
+ reg_cap = &ar->ab->hal_reg_cap[ar->pdev_idx];
+
+ if (supported_bands & WMI_HOST_WLAN_2G_CAP) {
+ channels = kmemdup(ath12k_2ghz_channels,
+ sizeof(ath12k_2ghz_channels),
+ GFP_KERNEL);
+ if (!channels)
+ return -ENOMEM;
+
+ band = &ar->mac.sbands[NL80211_BAND_2GHZ];
+ band->band = NL80211_BAND_2GHZ;
+ band->n_channels = ARRAY_SIZE(ath12k_2ghz_channels);
+ band->channels = channels;
+ band->n_bitrates = ath12k_g_rates_size;
+ band->bitrates = ath12k_g_rates;
+ ar->hw->wiphy->bands[NL80211_BAND_2GHZ] = band;
+
+ if (ar->ab->hw_params->single_pdev_only) {
+ phy_id = ath12k_get_phy_id(ar, WMI_HOST_WLAN_2G_CAP);
+ reg_cap = &ar->ab->hal_reg_cap[phy_id];
+ }
+ ath12k_mac_update_ch_list(ar, band,
+ reg_cap->low_2ghz_chan,
+ reg_cap->high_2ghz_chan);
+ }
+
+ if (supported_bands & WMI_HOST_WLAN_5G_CAP) {
+ if (reg_cap->high_5ghz_chan >= ATH12K_MAX_6G_FREQ) {
+ channels = kmemdup(ath12k_6ghz_channels,
+ sizeof(ath12k_6ghz_channels), GFP_KERNEL);
+ if (!channels) {
+ kfree(ar->mac.sbands[NL80211_BAND_2GHZ].channels);
+ return -ENOMEM;
+ }
+
+ ar->supports_6ghz = true;
+ band = &ar->mac.sbands[NL80211_BAND_6GHZ];
+ band->band = NL80211_BAND_6GHZ;
+ band->n_channels = ARRAY_SIZE(ath12k_6ghz_channels);
+ band->channels = channels;
+ band->n_bitrates = ath12k_a_rates_size;
+ band->bitrates = ath12k_a_rates;
+ ar->hw->wiphy->bands[NL80211_BAND_6GHZ] = band;
+ ath12k_mac_update_ch_list(ar, band,
+ reg_cap->low_5ghz_chan,
+ reg_cap->high_5ghz_chan);
+ }
+
+ if (reg_cap->low_5ghz_chan < ATH12K_MIN_6G_FREQ) {
+ channels = kmemdup(ath12k_5ghz_channels,
+ sizeof(ath12k_5ghz_channels),
+ GFP_KERNEL);
+ if (!channels) {
+ kfree(ar->mac.sbands[NL80211_BAND_2GHZ].channels);
+ kfree(ar->mac.sbands[NL80211_BAND_6GHZ].channels);
+ return -ENOMEM;
+ }
+
+ band = &ar->mac.sbands[NL80211_BAND_5GHZ];
+ band->band = NL80211_BAND_5GHZ;
+ band->n_channels = ARRAY_SIZE(ath12k_5ghz_channels);
+ band->channels = channels;
+ band->n_bitrates = ath12k_a_rates_size;
+ band->bitrates = ath12k_a_rates;
+ ar->hw->wiphy->bands[NL80211_BAND_5GHZ] = band;
+
+ if (ar->ab->hw_params->single_pdev_only) {
+ phy_id = ath12k_get_phy_id(ar, WMI_HOST_WLAN_5G_CAP);
+ reg_cap = &ar->ab->hal_reg_cap[phy_id];
+ }
+
+ ath12k_mac_update_ch_list(ar, band,
+ reg_cap->low_5ghz_chan,
+ reg_cap->high_5ghz_chan);
+ }
+ }
+
+ return 0;
+}
+
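+/* Advertise a single interface combination: one station interface plus,
+ * when the hw supports it, up to 16 AP and/or mesh interfaces sharing
+ * one channel, with radar detection for widths up to 80 MHz.
+ */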
+static int ath12k_mac_setup_iface_combinations(struct ath12k *ar)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ieee80211_iface_combination *combinations;
+ struct ieee80211_iface_limit *limits;
+ int n_limits, max_interfaces;
+ bool ap, mesh;
+
+ ap = ab->hw_params->interface_modes & BIT(NL80211_IFTYPE_AP);
+
+ mesh = IS_ENABLED(CONFIG_MAC80211_MESH) &&
+ ab->hw_params->interface_modes & BIT(NL80211_IFTYPE_MESH_POINT);
+
+ combinations = kzalloc(sizeof(*combinations), GFP_KERNEL);
+ if (!combinations)
+ return -ENOMEM;
+
+ if (ap || mesh) {
+ n_limits = 2;
+ max_interfaces = 16;
+ } else {
+ n_limits = 1;
+ max_interfaces = 1;
+ }
+
+ limits = kcalloc(n_limits, sizeof(*limits), GFP_KERNEL);
+ if (!limits) {
+ kfree(combinations);
+ return -ENOMEM;
+ }
+
+ limits[0].max = 1;
+ limits[0].types |= BIT(NL80211_IFTYPE_STATION);
+
+ if (ap) {
+ limits[1].max = max_interfaces;
+ limits[1].types |= BIT(NL80211_IFTYPE_AP);
+ }
+
+ if (mesh)
+ limits[1].types |= BIT(NL80211_IFTYPE_MESH_POINT);
+
+ combinations[0].limits = limits;
+ combinations[0].n_limits = n_limits;
+ combinations[0].max_interfaces = max_interfaces;
+ combinations[0].num_different_channels = 1;
+ combinations[0].beacon_int_infra_match = true;
+ combinations[0].beacon_int_min_gcd = 100;
+ combinations[0].radar_detect_widths = BIT(NL80211_CHAN_WIDTH_20_NOHT) |
+ BIT(NL80211_CHAN_WIDTH_20) |
+ BIT(NL80211_CHAN_WIDTH_40) |
+ BIT(NL80211_CHAN_WIDTH_80);
+
+ ar->hw->wiphy->iface_combinations = combinations;
+ ar->hw->wiphy->n_iface_combinations = 1;
+
+ return 0;
+}
+
+static const u8 ath12k_if_types_ext_capa[] = {
+ [0] = WLAN_EXT_CAPA1_EXT_CHANNEL_SWITCHING,
+ [7] = WLAN_EXT_CAPA8_OPMODE_NOTIF,
+};
+
+static const u8 ath12k_if_types_ext_capa_sta[] = {
+ [0] = WLAN_EXT_CAPA1_EXT_CHANNEL_SWITCHING,
+ [7] = WLAN_EXT_CAPA8_OPMODE_NOTIF,
+ [9] = WLAN_EXT_CAPA10_TWT_REQUESTER_SUPPORT,
+};
+
+static const u8 ath12k_if_types_ext_capa_ap[] = {
+ [0] = WLAN_EXT_CAPA1_EXT_CHANNEL_SWITCHING,
+ [7] = WLAN_EXT_CAPA8_OPMODE_NOTIF,
+ [9] = WLAN_EXT_CAPA10_TWT_RESPONDER_SUPPORT,
+};
+
+static const struct wiphy_iftype_ext_capab ath12k_iftypes_ext_capa[] = {
+ {
+ .extended_capabilities = ath12k_if_types_ext_capa,
+ .extended_capabilities_mask = ath12k_if_types_ext_capa,
+ .extended_capabilities_len = sizeof(ath12k_if_types_ext_capa),
+ }, {
+ .iftype = NL80211_IFTYPE_STATION,
+ .extended_capabilities = ath12k_if_types_ext_capa_sta,
+ .extended_capabilities_mask = ath12k_if_types_ext_capa_sta,
+ .extended_capabilities_len =
+ sizeof(ath12k_if_types_ext_capa_sta),
+ }, {
+ .iftype = NL80211_IFTYPE_AP,
+ .extended_capabilities = ath12k_if_types_ext_capa_ap,
+ .extended_capabilities_mask = ath12k_if_types_ext_capa_ap,
+ .extended_capabilities_len =
+ sizeof(ath12k_if_types_ext_capa_ap),
+ },
+};
+
+static void __ath12k_mac_unregister(struct ath12k *ar)
+{
+ cancel_work_sync(&ar->regd_update_work);
+
+ ieee80211_unregister_hw(ar->hw);
+
+ idr_for_each(&ar->txmgmt_idr, ath12k_mac_tx_mgmt_pending_free, ar);
+ idr_destroy(&ar->txmgmt_idr);
+
+ kfree(ar->mac.sbands[NL80211_BAND_2GHZ].channels);
+ kfree(ar->mac.sbands[NL80211_BAND_5GHZ].channels);
+ kfree(ar->mac.sbands[NL80211_BAND_6GHZ].channels);
+
+ kfree(ar->hw->wiphy->iface_combinations[0].limits);
+ kfree(ar->hw->wiphy->iface_combinations);
+
+ SET_IEEE80211_DEV(ar->hw, NULL);
+}
+
+void ath12k_mac_unregister(struct ath12k_base *ab)
+{
+ struct ath12k *ar;
+ struct ath12k_pdev *pdev;
+ int i;
+
+ for (i = 0; i < ab->num_radios; i++) {
+ pdev = &ab->pdevs[i];
+ ar = pdev->ar;
+ if (!ar)
+ continue;
+
+ __ath12k_mac_unregister(ar);
+ }
+}
+
+static int __ath12k_mac_register(struct ath12k *ar)
+{
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_pdev_cap *cap = &ar->pdev->cap;
+ static const u32 cipher_suites[] = {
+ WLAN_CIPHER_SUITE_TKIP,
+ WLAN_CIPHER_SUITE_CCMP,
+ WLAN_CIPHER_SUITE_AES_CMAC,
+ WLAN_CIPHER_SUITE_BIP_CMAC_256,
+ WLAN_CIPHER_SUITE_BIP_GMAC_128,
+ WLAN_CIPHER_SUITE_BIP_GMAC_256,
+ WLAN_CIPHER_SUITE_GCMP,
+ WLAN_CIPHER_SUITE_GCMP_256,
+ WLAN_CIPHER_SUITE_CCMP_256,
+ };
+ int ret;
+ u32 ht_cap = 0;
+
+ ath12k_pdev_caps_update(ar);
+
+ SET_IEEE80211_PERM_ADDR(ar->hw, ar->mac_addr);
+
+ SET_IEEE80211_DEV(ar->hw, ab->dev);
+
+ ret = ath12k_mac_setup_channels_rates(ar,
+ cap->supported_bands);
+ if (ret)
+ goto err;
+
+ ath12k_mac_setup_ht_vht_cap(ar, cap, &ht_cap);
+ ath12k_mac_setup_he_cap(ar, cap);
+
+ ret = ath12k_mac_setup_iface_combinations(ar);
+ if (ret) {
+ ath12k_err(ar->ab, "failed to setup interface combinations: %d\n", ret);
+ goto err_free_channels;
+ }
+
+ ar->hw->wiphy->available_antennas_rx = cap->rx_chain_mask;
+ ar->hw->wiphy->available_antennas_tx = cap->tx_chain_mask;
+
+ ar->hw->wiphy->interface_modes = ab->hw_params->interface_modes;
+
+ ieee80211_hw_set(ar->hw, SIGNAL_DBM);
+ ieee80211_hw_set(ar->hw, SUPPORTS_PS);
+ ieee80211_hw_set(ar->hw, SUPPORTS_DYNAMIC_PS);
+ ieee80211_hw_set(ar->hw, MFP_CAPABLE);
+ ieee80211_hw_set(ar->hw, REPORTS_TX_ACK_STATUS);
+ ieee80211_hw_set(ar->hw, HAS_RATE_CONTROL);
+ ieee80211_hw_set(ar->hw, AP_LINK_PS);
+ ieee80211_hw_set(ar->hw, SPECTRUM_MGMT);
+ ieee80211_hw_set(ar->hw, CONNECTION_MONITOR);
+ ieee80211_hw_set(ar->hw, SUPPORTS_PER_STA_GTK);
+ ieee80211_hw_set(ar->hw, CHANCTX_STA_CSA);
+ ieee80211_hw_set(ar->hw, QUEUE_CONTROL);
+ ieee80211_hw_set(ar->hw, SUPPORTS_TX_FRAG);
+ ieee80211_hw_set(ar->hw, REPORTS_LOW_ACK);
+
+ if (ht_cap & WMI_HT_CAP_ENABLED) {
+ ieee80211_hw_set(ar->hw, AMPDU_AGGREGATION);
+ ieee80211_hw_set(ar->hw, TX_AMPDU_SETUP_IN_HW);
+ ieee80211_hw_set(ar->hw, SUPPORTS_REORDERING_BUFFER);
+ ieee80211_hw_set(ar->hw, SUPPORTS_AMSDU_IN_AMPDU);
+ ieee80211_hw_set(ar->hw, USES_RSS);
+ }
+
+ ar->hw->wiphy->features |= NL80211_FEATURE_STATIC_SMPS;
+ ar->hw->wiphy->flags |= WIPHY_FLAG_IBSS_RSN;
+
+ /* TODO: Check if the HT capability advertised by firmware is
+ * different for each band of a dual band capable radio. It will be
+ * tricky to handle when the HT capability differs between bands.
+ */
+ if (ht_cap & WMI_HT_CAP_DYNAMIC_SMPS)
+ ar->hw->wiphy->features |= NL80211_FEATURE_DYNAMIC_SMPS;
+
+ ar->hw->wiphy->max_scan_ssids = WLAN_SCAN_PARAMS_MAX_SSID;
+ ar->hw->wiphy->max_scan_ie_len = WLAN_SCAN_PARAMS_MAX_IE_LEN;
+
+ ar->hw->max_listen_interval = ATH12K_MAX_HW_LISTEN_INTERVAL;
+
+ ar->hw->wiphy->flags |= WIPHY_FLAG_HAS_REMAIN_ON_CHANNEL;
+ ar->hw->wiphy->flags |= WIPHY_FLAG_HAS_CHANNEL_SWITCH;
+ ar->hw->wiphy->max_remain_on_channel_duration = 5000;
+
+ ar->hw->wiphy->flags |= WIPHY_FLAG_AP_UAPSD;
+ ar->hw->wiphy->features |= NL80211_FEATURE_AP_MODE_CHAN_WIDTH_CHANGE |
+ NL80211_FEATURE_AP_SCAN;
+
+ ar->max_num_stations = TARGET_NUM_STATIONS;
+ ar->max_num_peers = TARGET_NUM_PEERS_PDEV;
+
+ ar->hw->wiphy->max_ap_assoc_sta = ar->max_num_stations;
+
+ ar->hw->queues = ATH12K_HW_MAX_QUEUES;
+ ar->hw->wiphy->tx_queue_len = ATH12K_QUEUE_LEN;
+ ar->hw->offchannel_tx_hw_queue = ATH12K_HW_MAX_QUEUES - 1;
+ ar->hw->max_rx_aggregation_subframes = IEEE80211_MAX_AMPDU_BUF_HE;
+
+ ar->hw->vif_data_size = sizeof(struct ath12k_vif);
+ ar->hw->sta_data_size = sizeof(struct ath12k_sta);
+
+ wiphy_ext_feature_set(ar->hw->wiphy, NL80211_EXT_FEATURE_CQM_RSSI_LIST);
+ wiphy_ext_feature_set(ar->hw->wiphy, NL80211_EXT_FEATURE_STA_TX_PWR);
+
+ ar->hw->wiphy->cipher_suites = cipher_suites;
+ ar->hw->wiphy->n_cipher_suites = ARRAY_SIZE(cipher_suites);
+
+ ar->hw->wiphy->iftype_ext_capab = ath12k_iftypes_ext_capa;
+ ar->hw->wiphy->num_iftype_ext_capab =
+ ARRAY_SIZE(ath12k_iftypes_ext_capa);
+
+ if (ar->supports_6ghz) {
+ wiphy_ext_feature_set(ar->hw->wiphy,
+ NL80211_EXT_FEATURE_FILS_DISCOVERY);
+ wiphy_ext_feature_set(ar->hw->wiphy,
+ NL80211_EXT_FEATURE_UNSOL_BCAST_PROBE_RESP);
+ }
+
+ ath12k_reg_init(ar);
+
+ if (!test_bit(ATH12K_FLAG_RAW_MODE, &ab->dev_flags)) {
+ ar->hw->netdev_features = NETIF_F_HW_CSUM;
+ ieee80211_hw_set(ar->hw, SW_CRYPTO_CONTROL);
+ ieee80211_hw_set(ar->hw, SUPPORT_FAST_XMIT);
+ }
+
+ ret = ieee80211_register_hw(ar->hw);
+ if (ret) {
+ ath12k_err(ar->ab, "ieee80211 registration failed: %d\n", ret);
+ goto err_free_if_combs;
+ }
+
+ if (!ab->hw_params->supports_monitor)
+ /* There's a race between calling ieee80211_register_hw()
+ * and here where the monitor mode is enabled for a little
+ * while. But that time is so short and in practice it doesn't
+ * make a difference in real life.
+ */
+ ar->hw->wiphy->interface_modes &= ~BIT(NL80211_IFTYPE_MONITOR);
+
+ /* Apply the regd received during initialization */
+ ret = ath12k_regd_update(ar, true);
+ if (ret) {
+ ath12k_err(ar->ab, "ath12k regd update failed: %d\n", ret);
+ goto err_unregister_hw;
+ }
+
+ return 0;
+
+err_unregister_hw:
+ ieee80211_unregister_hw(ar->hw);
+
+err_free_if_combs:
+ kfree(ar->hw->wiphy->iface_combinations[0].limits);
+ kfree(ar->hw->wiphy->iface_combinations);
+
+err_free_channels:
+ kfree(ar->mac.sbands[NL80211_BAND_2GHZ].channels);
+ kfree(ar->mac.sbands[NL80211_BAND_5GHZ].channels);
+ kfree(ar->mac.sbands[NL80211_BAND_6GHZ].channels);
+
+err:
+ SET_IEEE80211_DEV(ar->hw, NULL);
+ return ret;
+}
+
+int ath12k_mac_register(struct ath12k_base *ab)
+{
+ struct ath12k *ar;
+ struct ath12k_pdev *pdev;
+ int i;
+ int ret;
+
+ if (test_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags))
+ return 0;
+
+ for (i = 0; i < ab->num_radios; i++) {
+ pdev = &ab->pdevs[i];
+ ar = pdev->ar;
+ if (ab->pdevs_macaddr_valid) {
+ ether_addr_copy(ar->mac_addr, pdev->mac_addr);
+ } else {
+ ether_addr_copy(ar->mac_addr, ab->mac_addr);
+ ar->mac_addr[4] += i;
+ }
+
+ ret = __ath12k_mac_register(ar);
+ if (ret)
+ goto err_cleanup;
+
+ idr_init(&ar->txmgmt_idr);
+ spin_lock_init(&ar->txmgmt_idr_lock);
+ }
+
+ /* Initialize channel counters frequency value in hertz */
+ ab->cc_freq_hz = 320000;
+ ab->free_vdev_map = (1LL << (ab->num_radios * TARGET_NUM_VDEVS)) - 1;
+
+ return 0;
+
+err_cleanup:
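+ /* unregister the radios that were set up before the failure */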
+ for (i = i - 1; i >= 0; i--) {
+ pdev = &ab->pdevs[i];
+ ar = pdev->ar;
+ __ath12k_mac_unregister(ar);
+ }
+
+ return ret;
+}
+
+int ath12k_mac_allocate(struct ath12k_base *ab)
+{
+ struct ieee80211_hw *hw;
+ struct ath12k *ar;
+ struct ath12k_pdev *pdev;
+ int ret;
+ int i;
+
+ if (test_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags))
+ return 0;
+
+ for (i = 0; i < ab->num_radios; i++) {
+ pdev = &ab->pdevs[i];
+ hw = ieee80211_alloc_hw(sizeof(struct ath12k), &ath12k_ops);
+ if (!hw) {
+ ath12k_warn(ab, "failed to allocate mac80211 hw device\n");
+ ret = -ENOMEM;
+ goto err_free_mac;
+ }
+
+ ar = hw->priv;
+ ar->hw = hw;
+ ar->ab = ab;
+ ar->pdev = pdev;
+ ar->pdev_idx = i;
+ ar->lmac_id = ath12k_hw_get_mac_from_pdev_id(ab->hw_params, i);
+
+ ar->wmi = &ab->wmi_ab.wmi[i];
+ /* FIXME: wmi[0] is already initialized during attach,
+ * should we do this again?
+ */
+ ath12k_wmi_pdev_attach(ab, i);
+
+ ar->cfg_tx_chainmask = pdev->cap.tx_chain_mask;
+ ar->cfg_rx_chainmask = pdev->cap.rx_chain_mask;
+ ar->num_tx_chains = hweight32(pdev->cap.tx_chain_mask);
+ ar->num_rx_chains = hweight32(pdev->cap.rx_chain_mask);
+
+ pdev->ar = ar;
+ spin_lock_init(&ar->data_lock);
+ INIT_LIST_HEAD(&ar->arvifs);
+ INIT_LIST_HEAD(&ar->ppdu_stats_info);
+ mutex_init(&ar->conf_mutex);
+ init_completion(&ar->vdev_setup_done);
+ init_completion(&ar->vdev_delete_done);
+ init_completion(&ar->peer_assoc_done);
+ init_completion(&ar->peer_delete_done);
+ init_completion(&ar->install_key_done);
+ init_completion(&ar->bss_survey_done);
+ init_completion(&ar->scan.started);
+ init_completion(&ar->scan.completed);
+
+ INIT_DELAYED_WORK(&ar->scan.timeout, ath12k_scan_timeout_work);
+ INIT_WORK(&ar->regd_update_work, ath12k_regd_update_work);
+
+ INIT_WORK(&ar->wmi_mgmt_tx_work, ath12k_mgmt_over_wmi_tx_work);
+ skb_queue_head_init(&ar->wmi_mgmt_tx_queue);
+ clear_bit(ATH12K_FLAG_MONITOR_ENABLED, &ar->monitor_flags);
+ }
+
+ return 0;
+
+err_free_mac:
+ ath12k_mac_destroy(ab);
+
+ return ret;
+}
+
+void ath12k_mac_destroy(struct ath12k_base *ab)
+{
+ struct ath12k *ar;
+ struct ath12k_pdev *pdev;
+ int i;
+
+ for (i = 0; i < ab->num_radios; i++) {
+ pdev = &ab->pdevs[i];
+ ar = pdev->ar;
+ if (!ar)
+ continue;
+
+ ieee80211_free_hw(ar->hw);
+ pdev->ar = NULL;
+ }
+}
diff --git a/drivers/net/wireless/ath/ath12k/mac.h b/drivers/net/wireless/ath/ath12k/mac.h
new file mode 100644
index 000000000000..57f4295420bb
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/mac.h
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_MAC_H
+#define ATH12K_MAC_H
+
+#include <net/mac80211.h>
+#include <net/cfg80211.h>
+
+struct ath12k;
+struct ath12k_base;
+
+struct ath12k_generic_iter {
+ struct ath12k *ar;
+ int ret;
+};
+
+/* number of failed packets (20 packets with 16 sw retries each) */
+#define ATH12K_KICKOUT_THRESHOLD (20 * 16)
+
+/* Use insanely high numbers to make sure that the firmware
+ * implementation won't start; we already have the same functionality
+ * in hostapd. Unit is seconds.
+ */
+#define ATH12K_KEEPALIVE_MIN_IDLE 3747
+#define ATH12K_KEEPALIVE_MAX_IDLE 3895
+#define ATH12K_KEEPALIVE_MAX_UNRESPONSIVE 3900
+
+/* FIXME: should these be in ieee80211.h? */
+#define IEEE80211_VHT_MCS_SUPPORT_0_11_MASK GENMASK(23, 16)
+#define IEEE80211_DISABLE_VHT_MCS_SUPPORT_0_11 BIT(24)
+
+#define ATH12K_CHAN_WIDTH_NUM 8
+
+#define ATH12K_TX_POWER_MAX_VAL 70
+#define ATH12K_TX_POWER_MIN_VAL 0
+
+enum ath12k_supported_bw {
+ ATH12K_BW_20 = 0,
+ ATH12K_BW_40 = 1,
+ ATH12K_BW_80 = 2,
+ ATH12K_BW_160 = 3,
+};
+
+extern const struct htt_rx_ring_tlv_filter ath12k_mac_mon_status_filter_default;
+
+void ath12k_mac_destroy(struct ath12k_base *ab);
+void ath12k_mac_unregister(struct ath12k_base *ab);
+int ath12k_mac_register(struct ath12k_base *ab);
+int ath12k_mac_allocate(struct ath12k_base *ab);
+int ath12k_mac_hw_ratecode_to_legacy_rate(u8 hw_rc, u8 preamble, u8 *rateidx,
+ u16 *rate);
+u8 ath12k_mac_bitrate_to_idx(const struct ieee80211_supported_band *sband,
+ u32 bitrate);
+u8 ath12k_mac_hw_rate_to_idx(const struct ieee80211_supported_band *sband,
+ u8 hw_rate, bool cck);
+
+void __ath12k_mac_scan_finish(struct ath12k *ar);
+void ath12k_mac_scan_finish(struct ath12k *ar);
+
+struct ath12k_vif *ath12k_mac_get_arvif(struct ath12k *ar, u32 vdev_id);
+struct ath12k_vif *ath12k_mac_get_arvif_by_vdev_id(struct ath12k_base *ab,
+ u32 vdev_id);
+struct ath12k *ath12k_mac_get_ar_by_vdev_id(struct ath12k_base *ab, u32 vdev_id);
+struct ath12k *ath12k_mac_get_ar_by_pdev_id(struct ath12k_base *ab, u32 pdev_id);
+
+void ath12k_mac_drain_tx(struct ath12k *ar);
+void ath12k_mac_peer_cleanup_all(struct ath12k *ar);
+int ath12k_mac_tx_mgmt_pending_free(int buf_id, void *skb, void *ctx);
+enum rate_info_bw ath12k_mac_bw_to_mac80211_bw(enum ath12k_supported_bw bw);
+enum ath12k_supported_bw ath12k_mac_mac80211_bw_to_ath12k_bw(enum rate_info_bw bw);
+enum hal_encrypt_type ath12k_dp_tx_get_encrypt_type(u32 cipher);
+#endif
diff --git a/drivers/net/wireless/ath/ath12k/mhi.c b/drivers/net/wireless/ath/ath12k/mhi.c
new file mode 100644
index 000000000000..42f1140baa4f
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/mhi.c
@@ -0,0 +1,616 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2020-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include <linux/msi.h>
+#include <linux/pci.h>
+
+#include "core.h"
+#include "debug.h"
+#include "mhi.h"
+#include "pci.h"
+
+#define MHI_TIMEOUT_DEFAULT_MS 90000
+
+static const struct mhi_channel_config ath12k_mhi_channels_qcn9274[] = {
+ {
+ .num = 0,
+ .name = "LOOPBACK",
+ .num_elements = 32,
+ .event_ring = 1,
+ .dir = DMA_TO_DEVICE,
+ .ee_mask = 0x4,
+ .pollcfg = 0,
+ .doorbell = MHI_DB_BRST_DISABLE,
+ .lpm_notify = false,
+ .offload_channel = false,
+ .doorbell_mode_switch = false,
+ .auto_queue = false,
+ },
+ {
+ .num = 1,
+ .name = "LOOPBACK",
+ .num_elements = 32,
+ .event_ring = 1,
+ .dir = DMA_FROM_DEVICE,
+ .ee_mask = 0x4,
+ .pollcfg = 0,
+ .doorbell = MHI_DB_BRST_DISABLE,
+ .lpm_notify = false,
+ .offload_channel = false,
+ .doorbell_mode_switch = false,
+ .auto_queue = false,
+ },
+ {
+ .num = 20,
+ .name = "IPCR",
+ .num_elements = 32,
+ .event_ring = 1,
+ .dir = DMA_TO_DEVICE,
+ .ee_mask = 0x4,
+ .pollcfg = 0,
+ .doorbell = MHI_DB_BRST_DISABLE,
+ .lpm_notify = false,
+ .offload_channel = false,
+ .doorbell_mode_switch = false,
+ .auto_queue = false,
+ },
+ {
+ .num = 21,
+ .name = "IPCR",
+ .num_elements = 32,
+ .event_ring = 1,
+ .dir = DMA_FROM_DEVICE,
+ .ee_mask = 0x4,
+ .pollcfg = 0,
+ .doorbell = MHI_DB_BRST_DISABLE,
+ .lpm_notify = false,
+ .offload_channel = false,
+ .doorbell_mode_switch = false,
+ .auto_queue = true,
+ },
+};
+
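+/* event ring 0 carries MHI control events; event ring 1 carries
+ * channel completions with 1 ms interrupt moderation
+ */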
+static struct mhi_event_config ath12k_mhi_events_qcn9274[] = {
+ {
+ .num_elements = 32,
+ .irq_moderation_ms = 0,
+ .irq = 1,
+ .data_type = MHI_ER_CTRL,
+ .mode = MHI_DB_BRST_DISABLE,
+ .hardware_event = false,
+ .client_managed = false,
+ .offload_channel = false,
+ },
+ {
+ .num_elements = 256,
+ .irq_moderation_ms = 1,
+ .irq = 2,
+ .mode = MHI_DB_BRST_DISABLE,
+ .priority = 1,
+ .hardware_event = false,
+ .client_managed = false,
+ .offload_channel = false,
+ },
+};
+
+const struct mhi_controller_config ath12k_mhi_config_qcn9274 = {
+ .max_channels = 30,
+ .timeout_ms = 10000,
+ .use_bounce_buf = false,
+ .buf_len = 0,
+ .num_channels = ARRAY_SIZE(ath12k_mhi_channels_qcn9274),
+ .ch_cfg = ath12k_mhi_channels_qcn9274,
+ .num_events = ARRAY_SIZE(ath12k_mhi_events_qcn9274),
+ .event_cfg = ath12k_mhi_events_qcn9274,
+};
+
+static const struct mhi_channel_config ath12k_mhi_channels_wcn7850[] = {
+ {
+ .num = 0,
+ .name = "LOOPBACK",
+ .num_elements = 32,
+ .event_ring = 0,
+ .dir = DMA_TO_DEVICE,
+ .ee_mask = 0x4,
+ .pollcfg = 0,
+ .doorbell = MHI_DB_BRST_DISABLE,
+ .lpm_notify = false,
+ .offload_channel = false,
+ .doorbell_mode_switch = false,
+ .auto_queue = false,
+ },
+ {
+ .num = 1,
+ .name = "LOOPBACK",
+ .num_elements = 32,
+ .event_ring = 0,
+ .dir = DMA_FROM_DEVICE,
+ .ee_mask = 0x4,
+ .pollcfg = 0,
+ .doorbell = MHI_DB_BRST_DISABLE,
+ .lpm_notify = false,
+ .offload_channel = false,
+ .doorbell_mode_switch = false,
+ .auto_queue = false,
+ },
+ {
+ .num = 20,
+ .name = "IPCR",
+ .num_elements = 64,
+ .event_ring = 1,
+ .dir = DMA_TO_DEVICE,
+ .ee_mask = 0x4,
+ .pollcfg = 0,
+ .doorbell = MHI_DB_BRST_DISABLE,
+ .lpm_notify = false,
+ .offload_channel = false,
+ .doorbell_mode_switch = false,
+ .auto_queue = false,
+ },
+ {
+ .num = 21,
+ .name = "IPCR",
+ .num_elements = 64,
+ .event_ring = 1,
+ .dir = DMA_FROM_DEVICE,
+ .ee_mask = 0x4,
+ .pollcfg = 0,
+ .doorbell = MHI_DB_BRST_DISABLE,
+ .lpm_notify = false,
+ .offload_channel = false,
+ .doorbell_mode_switch = false,
+ .auto_queue = true,
+ },
+};
+
+static struct mhi_event_config ath12k_mhi_events_wcn7850[] = {
+ {
+ .num_elements = 32,
+ .irq_moderation_ms = 0,
+ .irq = 1,
+ .mode = MHI_DB_BRST_DISABLE,
+ .data_type = MHI_ER_CTRL,
+ .hardware_event = false,
+ .client_managed = false,
+ .offload_channel = false,
+ },
+ {
+ .num_elements = 256,
+ .irq_moderation_ms = 1,
+ .irq = 2,
+ .mode = MHI_DB_BRST_DISABLE,
+ .priority = 1,
+ .hardware_event = false,
+ .client_managed = false,
+ .offload_channel = false,
+ },
+};
+
+const struct mhi_controller_config ath12k_mhi_config_wcn7850 = {
+ .max_channels = 128,
+ .timeout_ms = 2000,
+ .use_bounce_buf = false,
+ .buf_len = 0,
+ .num_channels = ARRAY_SIZE(ath12k_mhi_channels_wcn7850),
+ .ch_cfg = ath12k_mhi_channels_wcn7850,
+ .num_events = ARRAY_SIZE(ath12k_mhi_events_wcn7850),
+ .event_cfg = ath12k_mhi_events_wcn7850,
+};
+
+void ath12k_mhi_set_mhictrl_reset(struct ath12k_base *ab)
+{
+ u32 val;
+
+ val = ath12k_pci_read32(ab, MHISTATUS);
+
+ ath12k_dbg(ab, ATH12K_DBG_PCI, "MHISTATUS 0x%x\n", val);
+
+ /* On some targets the SYSERR bit of MHISTATUS is observed to be
+ * set after SOC_GLOBAL_RESET, so MHICTRL_RESET needs to be set to
+ * clear SYSERR.
+ */
+ ath12k_pci_write32(ab, MHICTRL, MHICTRL_RESET_MASK);
+
+ mdelay(10);
+}
+
+static void ath12k_mhi_reset_txvecdb(struct ath12k_base *ab)
+{
+ ath12k_pci_write32(ab, PCIE_TXVECDB, 0);
+}
+
+static void ath12k_mhi_reset_txvecstatus(struct ath12k_base *ab)
+{
+ ath12k_pci_write32(ab, PCIE_TXVECSTATUS, 0);
+}
+
+static void ath12k_mhi_reset_rxvecdb(struct ath12k_base *ab)
+{
+ ath12k_pci_write32(ab, PCIE_RXVECDB, 0);
+}
+
+static void ath12k_mhi_reset_rxvecstatus(struct ath12k_base *ab)
+{
+ ath12k_pci_write32(ab, PCIE_RXVECSTATUS, 0);
+}
+
+void ath12k_mhi_clear_vector(struct ath12k_base *ab)
+{
+ ath12k_mhi_reset_txvecdb(ab);
+ ath12k_mhi_reset_txvecstatus(ab);
+ ath12k_mhi_reset_rxvecdb(ab);
+ ath12k_mhi_reset_rxvecstatus(ab);
+}
+
+static int ath12k_mhi_get_msi(struct ath12k_pci *ab_pci)
+{
+ struct ath12k_base *ab = ab_pci->ab;
+ u32 user_base_data, base_vector;
+ int ret, num_vectors, i;
+ int *irq;
+
+ ret = ath12k_pci_get_user_msi_assignment(ab,
+ "MHI", &num_vectors,
+ &user_base_data, &base_vector);
+ if (ret)
+ return ret;
+
+ ath12k_dbg(ab, ATH12K_DBG_PCI, "Number of assigned MSI for MHI is %d, base vector is %d\n",
+ num_vectors, base_vector);
+
+ irq = kcalloc(num_vectors, sizeof(*irq), GFP_KERNEL);
+ if (!irq)
+ return -ENOMEM;
+
+ for (i = 0; i < num_vectors; i++)
+ irq[i] = ath12k_pci_get_msi_irq(ab->dev,
+ base_vector + i);
+
+ ab_pci->mhi_ctrl->irq = irq;
+ ab_pci->mhi_ctrl->nr_irqs = num_vectors;
+
+ return 0;
+}
+
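+/* The MHI core requires runtime PM callbacks to be set; they are
+ * intentionally no-ops here since runtime PM is not used.
+ */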
+static int ath12k_mhi_op_runtime_get(struct mhi_controller *mhi_cntrl)
+{
+ return 0;
+}
+
+static void ath12k_mhi_op_runtime_put(struct mhi_controller *mhi_cntrl)
+{
+}
+
+static char *ath12k_mhi_op_callback_to_str(enum mhi_callback reason)
+{
+ switch (reason) {
+ case MHI_CB_IDLE:
+ return "MHI_CB_IDLE";
+ case MHI_CB_PENDING_DATA:
+ return "MHI_CB_PENDING_DATA";
+ case MHI_CB_LPM_ENTER:
+ return "MHI_CB_LPM_ENTER";
+ case MHI_CB_LPM_EXIT:
+ return "MHI_CB_LPM_EXIT";
+ case MHI_CB_EE_RDDM:
+ return "MHI_CB_EE_RDDM";
+ case MHI_CB_EE_MISSION_MODE:
+ return "MHI_CB_EE_MISSION_MODE";
+ case MHI_CB_SYS_ERROR:
+ return "MHI_CB_SYS_ERROR";
+ case MHI_CB_FATAL_ERROR:
+ return "MHI_CB_FATAL_ERROR";
+ case MHI_CB_BW_REQ:
+ return "MHI_CB_BW_REQ";
+ default:
+ return "UNKNOWN";
+ }
+}
+
+static void ath12k_mhi_op_status_cb(struct mhi_controller *mhi_cntrl,
+ enum mhi_callback cb)
+{
+ struct ath12k_base *ab = dev_get_drvdata(mhi_cntrl->cntrl_dev);
+
+ ath12k_dbg(ab, ATH12K_DBG_BOOT, "mhi notify status reason %s\n",
+ ath12k_mhi_op_callback_to_str(cb));
+
+ switch (cb) {
+ case MHI_CB_SYS_ERROR:
+ ath12k_warn(ab, "firmware crashed: MHI_CB_SYS_ERROR\n");
+ break;
+ case MHI_CB_EE_RDDM:
+ if (!(test_bit(ATH12K_FLAG_UNREGISTERING, &ab->dev_flags)))
+ queue_work(ab->workqueue_aux, &ab->reset_work);
+ break;
+ default:
+ break;
+ }
+}
+
+static int ath12k_mhi_op_read_reg(struct mhi_controller *mhi_cntrl,
+ void __iomem *addr,
+ u32 *out)
+{
+ *out = readl(addr);
+
+ return 0;
+}
+
+static void ath12k_mhi_op_write_reg(struct mhi_controller *mhi_cntrl,
+ void __iomem *addr,
+ u32 val)
+{
+ writel(val, addr);
+}
+
+int ath12k_mhi_register(struct ath12k_pci *ab_pci)
+{
+ struct ath12k_base *ab = ab_pci->ab;
+ struct mhi_controller *mhi_ctrl;
+ int ret;
+
+ mhi_ctrl = mhi_alloc_controller();
+ if (!mhi_ctrl)
+ return -ENOMEM;
+
+ ath12k_core_create_firmware_path(ab, ATH12K_AMSS_FILE,
+ ab_pci->amss_path,
+ sizeof(ab_pci->amss_path));
+
+ ab_pci->mhi_ctrl = mhi_ctrl;
+ mhi_ctrl->cntrl_dev = ab->dev;
+ mhi_ctrl->fw_image = ab_pci->amss_path;
+ mhi_ctrl->regs = ab->mem;
+ mhi_ctrl->reg_len = ab->mem_len;
+
+ ret = ath12k_mhi_get_msi(ab_pci);
+ if (ret) {
+ ath12k_err(ab, "failed to get msi for mhi\n");
+ mhi_free_controller(mhi_ctrl);
+ return ret;
+ }
+
+ mhi_ctrl->iova_start = 0;
+ mhi_ctrl->iova_stop = 0xffffffff;
+ mhi_ctrl->sbl_size = SZ_512K;
+ mhi_ctrl->seg_len = SZ_512K;
+ mhi_ctrl->fbc_download = true;
+ mhi_ctrl->runtime_get = ath12k_mhi_op_runtime_get;
+ mhi_ctrl->runtime_put = ath12k_mhi_op_runtime_put;
+ mhi_ctrl->status_cb = ath12k_mhi_op_status_cb;
+ mhi_ctrl->read_reg = ath12k_mhi_op_read_reg;
+ mhi_ctrl->write_reg = ath12k_mhi_op_write_reg;
+
+ ret = mhi_register_controller(mhi_ctrl, ab->hw_params->mhi_config);
+ if (ret) {
+ ath12k_err(ab, "failed to register to mhi bus, err = %d\n", ret);
+ mhi_free_controller(mhi_ctrl);
+ return ret;
+ }
+
+ return 0;
+}
+
+void ath12k_mhi_unregister(struct ath12k_pci *ab_pci)
+{
+ struct mhi_controller *mhi_ctrl = ab_pci->mhi_ctrl;
+
+ mhi_unregister_controller(mhi_ctrl);
+ kfree(mhi_ctrl->irq);
+ mhi_free_controller(mhi_ctrl);
+ ab_pci->mhi_ctrl = NULL;
+}
+
+static char *ath12k_mhi_state_to_str(enum ath12k_mhi_state mhi_state)
+{
+ switch (mhi_state) {
+ case ATH12K_MHI_INIT:
+ return "INIT";
+ case ATH12K_MHI_DEINIT:
+ return "DEINIT";
+ case ATH12K_MHI_POWER_ON:
+ return "POWER_ON";
+ case ATH12K_MHI_POWER_OFF:
+ return "POWER_OFF";
+ case ATH12K_MHI_FORCE_POWER_OFF:
+ return "FORCE_POWER_OFF";
+ case ATH12K_MHI_SUSPEND:
+ return "SUSPEND";
+ case ATH12K_MHI_RESUME:
+ return "RESUME";
+ case ATH12K_MHI_TRIGGER_RDDM:
+ return "TRIGGER_RDDM";
+ case ATH12K_MHI_RDDM_DONE:
+ return "RDDM_DONE";
+ default:
+ return "UNKNOWN";
+ }
+}
+
+static void ath12k_mhi_set_state_bit(struct ath12k_pci *ab_pci,
+ enum ath12k_mhi_state mhi_state)
+{
+ struct ath12k_base *ab = ab_pci->ab;
+
+ switch (mhi_state) {
+ case ATH12K_MHI_INIT:
+ set_bit(ATH12K_MHI_INIT, &ab_pci->mhi_state);
+ break;
+ case ATH12K_MHI_DEINIT:
+ clear_bit(ATH12K_MHI_INIT, &ab_pci->mhi_state);
+ break;
+ case ATH12K_MHI_POWER_ON:
+ set_bit(ATH12K_MHI_POWER_ON, &ab_pci->mhi_state);
+ break;
+ case ATH12K_MHI_POWER_OFF:
+ case ATH12K_MHI_FORCE_POWER_OFF:
+ clear_bit(ATH12K_MHI_POWER_ON, &ab_pci->mhi_state);
+ clear_bit(ATH12K_MHI_TRIGGER_RDDM, &ab_pci->mhi_state);
+ clear_bit(ATH12K_MHI_RDDM_DONE, &ab_pci->mhi_state);
+ break;
+ case ATH12K_MHI_SUSPEND:
+ set_bit(ATH12K_MHI_SUSPEND, &ab_pci->mhi_state);
+ break;
+ case ATH12K_MHI_RESUME:
+ clear_bit(ATH12K_MHI_SUSPEND, &ab_pci->mhi_state);
+ break;
+ case ATH12K_MHI_TRIGGER_RDDM:
+ set_bit(ATH12K_MHI_TRIGGER_RDDM, &ab_pci->mhi_state);
+ break;
+ case ATH12K_MHI_RDDM_DONE:
+ set_bit(ATH12K_MHI_RDDM_DONE, &ab_pci->mhi_state);
+ break;
+ default:
+ ath12k_err(ab, "unhandled mhi state (%d)\n", mhi_state);
+ }
+}
+
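+/* Check whether the requested state transition is valid given the
+ * currently set state bits; returns -EINVAL for an out-of-order
+ * transition.
+ */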
+static int ath12k_mhi_check_state_bit(struct ath12k_pci *ab_pci,
+ enum ath12k_mhi_state mhi_state)
+{
+ struct ath12k_base *ab = ab_pci->ab;
+
+ switch (mhi_state) {
+ case ATH12K_MHI_INIT:
+ if (!test_bit(ATH12K_MHI_INIT, &ab_pci->mhi_state))
+ return 0;
+ break;
+ case ATH12K_MHI_DEINIT:
+ case ATH12K_MHI_POWER_ON:
+ if (test_bit(ATH12K_MHI_INIT, &ab_pci->mhi_state) &&
+ !test_bit(ATH12K_MHI_POWER_ON, &ab_pci->mhi_state))
+ return 0;
+ break;
+ case ATH12K_MHI_FORCE_POWER_OFF:
+ if (test_bit(ATH12K_MHI_POWER_ON, &ab_pci->mhi_state))
+ return 0;
+ break;
+ case ATH12K_MHI_POWER_OFF:
+ case ATH12K_MHI_SUSPEND:
+ if (test_bit(ATH12K_MHI_POWER_ON, &ab_pci->mhi_state) &&
+ !test_bit(ATH12K_MHI_SUSPEND, &ab_pci->mhi_state))
+ return 0;
+ break;
+ case ATH12K_MHI_RESUME:
+ if (test_bit(ATH12K_MHI_SUSPEND, &ab_pci->mhi_state))
+ return 0;
+ break;
+ case ATH12K_MHI_TRIGGER_RDDM:
+ if (test_bit(ATH12K_MHI_POWER_ON, &ab_pci->mhi_state) &&
+ !test_bit(ATH12K_MHI_TRIGGER_RDDM, &ab_pci->mhi_state))
+ return 0;
+ break;
+ case ATH12K_MHI_RDDM_DONE:
+ return 0;
+ default:
+ ath12k_err(ab, "unhandled mhi state: %s(%d)\n",
+ ath12k_mhi_state_to_str(mhi_state), mhi_state);
+ }
+
+ ath12k_err(ab, "failed to set mhi state %s(%d) in current mhi state (0x%lx)\n",
+ ath12k_mhi_state_to_str(mhi_state), mhi_state,
+ ab_pci->mhi_state);
+
+ return -EINVAL;
+}
+
+static int ath12k_mhi_set_state(struct ath12k_pci *ab_pci,
+ enum ath12k_mhi_state mhi_state)
+{
+ struct ath12k_base *ab = ab_pci->ab;
+ int ret;
+
+ ret = ath12k_mhi_check_state_bit(ab_pci, mhi_state);
+ if (ret)
+ goto out;
+
+ ath12k_dbg(ab, ATH12K_DBG_PCI, "setting mhi state: %s(%d)\n",
+ ath12k_mhi_state_to_str(mhi_state), mhi_state);
+
+ switch (mhi_state) {
+ case ATH12K_MHI_INIT:
+ ret = mhi_prepare_for_power_up(ab_pci->mhi_ctrl);
+ break;
+ case ATH12K_MHI_DEINIT:
+ mhi_unprepare_after_power_down(ab_pci->mhi_ctrl);
+ ret = 0;
+ break;
+ case ATH12K_MHI_POWER_ON:
+ ret = mhi_async_power_up(ab_pci->mhi_ctrl);
+ break;
+ case ATH12K_MHI_POWER_OFF:
+ mhi_power_down(ab_pci->mhi_ctrl, true);
+ ret = 0;
+ break;
+ case ATH12K_MHI_FORCE_POWER_OFF:
+ mhi_power_down(ab_pci->mhi_ctrl, false);
+ ret = 0;
+ break;
+ case ATH12K_MHI_SUSPEND:
+ ret = mhi_pm_suspend(ab_pci->mhi_ctrl);
+ break;
+ case ATH12K_MHI_RESUME:
+ ret = mhi_pm_resume(ab_pci->mhi_ctrl);
+ break;
+ case ATH12K_MHI_TRIGGER_RDDM:
+ ret = mhi_force_rddm_mode(ab_pci->mhi_ctrl);
+ break;
+ case ATH12K_MHI_RDDM_DONE:
+ break;
+ default:
+ ath12k_err(ab, "unhandled MHI state (%d)\n", mhi_state);
+ ret = -EINVAL;
+ }
+
+ if (ret)
+ goto out;
+
+ ath12k_mhi_set_state_bit(ab_pci, mhi_state);
+
+ return 0;
+
+out:
+ ath12k_err(ab, "failed to set mhi state: %s(%d)\n",
+ ath12k_mhi_state_to_str(mhi_state), mhi_state);
+ return ret;
+}
+
+int ath12k_mhi_start(struct ath12k_pci *ab_pci)
+{
+ int ret;
+
+ ab_pci->mhi_ctrl->timeout_ms = MHI_TIMEOUT_DEFAULT_MS;
+
+ ret = ath12k_mhi_set_state(ab_pci, ATH12K_MHI_INIT);
+ if (ret)
+ goto out;
+
+ ret = ath12k_mhi_set_state(ab_pci, ATH12K_MHI_POWER_ON);
+ if (ret)
+ goto out;
+
+ return 0;
+
+out:
+ return ret;
+}
+
+void ath12k_mhi_stop(struct ath12k_pci *ab_pci)
+{
+ ath12k_mhi_set_state(ab_pci, ATH12K_MHI_POWER_OFF);
+ ath12k_mhi_set_state(ab_pci, ATH12K_MHI_DEINIT);
+}
+
+void ath12k_mhi_suspend(struct ath12k_pci *ab_pci)
+{
+ ath12k_mhi_set_state(ab_pci, ATH12K_MHI_SUSPEND);
+}
+
+void ath12k_mhi_resume(struct ath12k_pci *ab_pci)
+{
+ ath12k_mhi_set_state(ab_pci, ATH12K_MHI_RESUME);
+}
diff --git a/drivers/net/wireless/ath/ath12k/mhi.h b/drivers/net/wireless/ath/ath12k/mhi.h
new file mode 100644
index 000000000000..ebc23640ce7a
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/mhi.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2020-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+#ifndef _ATH12K_MHI_H
+#define _ATH12K_MHI_H
+
+#include "pci.h"
+
+#define PCIE_TXVECDB 0x360
+#define PCIE_TXVECSTATUS 0x368
+#define PCIE_RXVECDB 0x394
+#define PCIE_RXVECSTATUS 0x39C
+
+#define MHISTATUS 0x48
+#define MHICTRL 0x38
+#define MHICTRL_RESET_MASK 0x2
+
+enum ath12k_mhi_state {
+ ATH12K_MHI_INIT,
+ ATH12K_MHI_DEINIT,
+ ATH12K_MHI_POWER_ON,
+ ATH12K_MHI_POWER_OFF,
+ ATH12K_MHI_FORCE_POWER_OFF,
+ ATH12K_MHI_SUSPEND,
+ ATH12K_MHI_RESUME,
+ ATH12K_MHI_TRIGGER_RDDM,
+ ATH12K_MHI_RDDM,
+ ATH12K_MHI_RDDM_DONE,
+};
+
+extern const struct mhi_controller_config ath12k_mhi_config_qcn9274;
+extern const struct mhi_controller_config ath12k_mhi_config_wcn7850;
+
+int ath12k_mhi_start(struct ath12k_pci *ar_pci);
+void ath12k_mhi_stop(struct ath12k_pci *ar_pci);
+int ath12k_mhi_register(struct ath12k_pci *ar_pci);
+void ath12k_mhi_unregister(struct ath12k_pci *ar_pci);
+void ath12k_mhi_set_mhictrl_reset(struct ath12k_base *ab);
+void ath12k_mhi_clear_vector(struct ath12k_base *ab);
+
+void ath12k_mhi_suspend(struct ath12k_pci *ar_pci);
+void ath12k_mhi_resume(struct ath12k_pci *ar_pci);
+
+#endif
diff --git a/drivers/net/wireless/ath/ath12k/pci.c b/drivers/net/wireless/ath/ath12k/pci.c
new file mode 100644
index 000000000000..ae7f6083c9fc
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/pci.c
@@ -0,0 +1,1374 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include <linux/module.h>
+#include <linux/msi.h>
+#include <linux/pci.h>
+
+#include "pci.h"
+#include "core.h"
+#include "hif.h"
+#include "mhi.h"
+#include "debug.h"
+
+#define ATH12K_PCI_BAR_NUM 0
+#define ATH12K_PCI_DMA_MASK 32
+
+#define ATH12K_PCI_IRQ_CE0_OFFSET 3
+
+#define WINDOW_ENABLE_BIT 0x40000000
+#define WINDOW_REG_ADDRESS 0x310c
+#define WINDOW_VALUE_MASK GENMASK(24, 19)
+#define WINDOW_START 0x80000
+#define WINDOW_RANGE_MASK GENMASK(18, 0)
+#define WINDOW_STATIC_MASK GENMASK(31, 6)
+
+#define TCSR_SOC_HW_VERSION 0x1B00000
+#define TCSR_SOC_HW_VERSION_MAJOR_MASK GENMASK(11, 8)
+#define TCSR_SOC_HW_VERSION_MINOR_MASK GENMASK(7, 4)
+
+/* BAR0 + 4K is always accessible, so there is no need to force a
+ * wakeup for it.
+ * 4K - 32 = 0xFE0
+ */
+#define ACCESS_ALWAYS_OFF 0xFE0
+
+#define QCN9274_DEVICE_ID 0x1109
+#define WCN7850_DEVICE_ID 0x1107
+
+static const struct pci_device_id ath12k_pci_id_table[] = {
+ { PCI_VDEVICE(QCOM, QCN9274_DEVICE_ID) },
+ { PCI_VDEVICE(QCOM, WCN7850_DEVICE_ID) },
+ {0}
+};
+
+MODULE_DEVICE_TABLE(pci, ath12k_pci_id_table);
+
+/* TODO: revisit IRQ mapping for new SRNGs */
+static const struct ath12k_msi_config ath12k_msi_config[] = {
+ {
+ .total_vectors = 16,
+ .total_users = 3,
+ .users = (struct ath12k_msi_user[]) {
+ { .name = "MHI", .num_vectors = 3, .base_vector = 0 },
+ { .name = "CE", .num_vectors = 5, .base_vector = 3 },
+ { .name = "DP", .num_vectors = 8, .base_vector = 8 },
+ },
+ },
+};
+
+static const char *irq_name[ATH12K_IRQ_NUM_MAX] = {
+ "bhi",
+ "mhi-er0",
+ "mhi-er1",
+ "ce0",
+ "ce1",
+ "ce2",
+ "ce3",
+ "ce4",
+ "ce5",
+ "ce6",
+ "ce7",
+ "ce8",
+ "ce9",
+ "ce10",
+ "ce11",
+ "ce12",
+ "ce13",
+ "ce14",
+ "ce15",
+ "host2wbm-desc-feed",
+ "host2reo-re-injection",
+ "host2reo-command",
+ "host2rxdma-monitor-ring3",
+ "host2rxdma-monitor-ring2",
+ "host2rxdma-monitor-ring1",
+ "reo2ost-exception",
+ "wbm2host-rx-release",
+ "reo2host-status",
+ "reo2host-destination-ring4",
+ "reo2host-destination-ring3",
+ "reo2host-destination-ring2",
+ "reo2host-destination-ring1",
+ "rxdma2host-monitor-destination-mac3",
+ "rxdma2host-monitor-destination-mac2",
+ "rxdma2host-monitor-destination-mac1",
+ "ppdu-end-interrupts-mac3",
+ "ppdu-end-interrupts-mac2",
+ "ppdu-end-interrupts-mac1",
+ "rxdma2host-monitor-status-ring-mac3",
+ "rxdma2host-monitor-status-ring-mac2",
+ "rxdma2host-monitor-status-ring-mac1",
+ "host2rxdma-host-buf-ring-mac3",
+ "host2rxdma-host-buf-ring-mac2",
+ "host2rxdma-host-buf-ring-mac1",
+ "rxdma2host-destination-ring-mac3",
+ "rxdma2host-destination-ring-mac2",
+ "rxdma2host-destination-ring-mac1",
+ "host2tcl-input-ring4",
+ "host2tcl-input-ring3",
+ "host2tcl-input-ring2",
+ "host2tcl-input-ring1",
+ "wbm2host-tx-completions-ring4",
+ "wbm2host-tx-completions-ring3",
+ "wbm2host-tx-completions-ring2",
+ "wbm2host-tx-completions-ring1",
+ "tcl2host-status-ring",
+};
+
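+/* Registers above WINDOW_START are reached through a sliding window:
+ * bits 24:19 of the offset (WINDOW_VALUE_MASK) select the window and
+ * bits 18:0 (WINDOW_RANGE_MASK) index into it. For example, offset
+ * 0x01b80000 selects window 0x37 and is then accessed at
+ * WINDOW_START + 0x0.
+ */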
+static void ath12k_pci_select_window(struct ath12k_pci *ab_pci, u32 offset)
+{
+ struct ath12k_base *ab = ab_pci->ab;
+ u32 window = u32_get_bits(offset, WINDOW_VALUE_MASK);
+ u32 static_window;
+
+ lockdep_assert_held(&ab_pci->window_lock);
+
+ /* Preserve the static window configuration and reset only dynamic window */
+ static_window = ab_pci->register_window & WINDOW_STATIC_MASK;
+ window |= static_window;
+
+ if (window != ab_pci->register_window) {
+ iowrite32(WINDOW_ENABLE_BIT | window,
+ ab->mem + WINDOW_REG_ADDRESS);
+ ioread32(ab->mem + WINDOW_REG_ADDRESS);
+ ab_pci->register_window = window;
+ }
+}
+
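+/* Program the static windows once: the UMAC window lives at bits
+ * 17:12 and the CE window at bits 11:6 of the window register, while
+ * the dynamic window in bits 5:0 keeps changing at runtime.
+ */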
+static void ath12k_pci_select_static_window(struct ath12k_pci *ab_pci)
+{
+ u32 umac_window = u32_get_bits(HAL_SEQ_WCSS_UMAC_OFFSET, WINDOW_VALUE_MASK);
+ u32 ce_window = u32_get_bits(HAL_CE_WFSS_CE_REG_BASE, WINDOW_VALUE_MASK);
+ u32 window;
+
+ window = (umac_window << 12) | (ce_window << 6);
+
+ spin_lock_bh(&ab_pci->window_lock);
+ ab_pci->register_window = window;
+ spin_unlock_bh(&ab_pci->window_lock);
+
+ iowrite32(WINDOW_ENABLE_BIT | window, ab_pci->ab->mem + WINDOW_REG_ADDRESS);
+}
+
+static u32 ath12k_pci_get_window_start(struct ath12k_base *ab,
+ u32 offset)
+{
+ u32 window_start;
+
+ /* If offset lies within DP register range, use 3rd window */
+ if ((offset ^ HAL_SEQ_WCSS_UMAC_OFFSET) < WINDOW_RANGE_MASK)
+ window_start = 3 * WINDOW_START;
+ /* If offset lies within CE register range, use 2nd window */
+ else if ((offset ^ HAL_CE_WFSS_CE_REG_BASE) < WINDOW_RANGE_MASK)
+ window_start = 2 * WINDOW_START;
+ /* If offset lies within PCI_BAR_WINDOW0_BASE and within PCI_SOC_PCI_REG_BASE
+ * use 0th window
+ */
+ else if (((offset ^ PCI_BAR_WINDOW0_BASE) < WINDOW_RANGE_MASK) &&
+ !((offset ^ PCI_SOC_PCI_REG_BASE) < PCI_SOC_RANGE_MASK))
+ window_start = 0;
+ else
+ window_start = WINDOW_START;
+
+ return window_start;
+}
+
+static void ath12k_pci_soc_global_reset(struct ath12k_base *ab)
+{
+ u32 val, delay;
+
+ val = ath12k_pci_read32(ab, PCIE_SOC_GLOBAL_RESET);
+
+ val |= PCIE_SOC_GLOBAL_RESET_V;
+
+ ath12k_pci_write32(ab, PCIE_SOC_GLOBAL_RESET, val);
+
+ /* TODO: exact time to sleep is uncertain */
+ delay = 10;
+ mdelay(delay);
+
+ /* Toggle the V bit back, otherwise the device stays stuck in reset */
+ val &= ~PCIE_SOC_GLOBAL_RESET_V;
+
+ ath12k_pci_write32(ab, PCIE_SOC_GLOBAL_RESET, val);
+
+ mdelay(delay);
+
+ val = ath12k_pci_read32(ab, PCIE_SOC_GLOBAL_RESET);
+ if (val == 0xffffffff)
+ ath12k_warn(ab, "link down error during global reset\n");
+}
+
+static void ath12k_pci_clear_dbg_registers(struct ath12k_base *ab)
+{
+ u32 val;
+
+ /* read cookie */
+ val = ath12k_pci_read32(ab, PCIE_Q6_COOKIE_ADDR);
+ ath12k_dbg(ab, ATH12K_DBG_PCI, "cookie:0x%x\n", val);
+
+ val = ath12k_pci_read32(ab, WLAON_WARM_SW_ENTRY);
+ ath12k_dbg(ab, ATH12K_DBG_PCI, "WLAON_WARM_SW_ENTRY 0x%x\n", val);
+
+ /* TODO: exact time to sleep is uncertain */
+ mdelay(10);
+
+ /* write 0 to WLAON_WARM_SW_ENTRY to prevent Q6 from
+ * continuing down the warm boot path and entering a dead loop.
+ */
+ ath12k_pci_write32(ab, WLAON_WARM_SW_ENTRY, 0);
+ mdelay(10);
+
+ val = ath12k_pci_read32(ab, WLAON_WARM_SW_ENTRY);
+ ath12k_dbg(ab, ATH12K_DBG_PCI, "WLAON_WARM_SW_ENTRY 0x%x\n", val);
+
+ /* This is a read-to-clear register; reading it prevents Q6 from
+ * entering the wrong code path.
+ */
+ val = ath12k_pci_read32(ab, WLAON_SOC_RESET_CAUSE_REG);
+ ath12k_dbg(ab, ATH12K_DBG_PCI, "soc reset cause:%d\n", val);
+}
+
+static void ath12k_pci_enable_ltssm(struct ath12k_base *ab)
+{
+ u32 val;
+ int i;
+
+ val = ath12k_pci_read32(ab, PCIE_PCIE_PARF_LTSSM);
+
+ /* PCIe link seems very unstable after the hot reset */
+ for (i = 0; val != PARM_LTSSM_VALUE && i < 5; i++) {
+ if (val == 0xffffffff)
+ mdelay(5);
+
+ ath12k_pci_write32(ab, PCIE_PCIE_PARF_LTSSM, PARM_LTSSM_VALUE);
+ val = ath12k_pci_read32(ab, PCIE_PCIE_PARF_LTSSM);
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_PCI, "pci ltssm 0x%x\n", val);
+
+ val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST);
+ val |= GCC_GCC_PCIE_HOT_RST_VAL;
+ ath12k_pci_write32(ab, GCC_GCC_PCIE_HOT_RST, val);
+ val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST);
+
+ ath12k_dbg(ab, ATH12K_DBG_PCI, "pci pcie_hot_rst 0x%x\n", val);
+
+ mdelay(5);
+}
+
+static void ath12k_pci_clear_all_intrs(struct ath12k_base *ab)
+{
+ /* This is a workaround for PCIe hot reset: the target latches an
+ * interrupt when it receives the hot reset, so when the SBL is
+ * downloaded again it enables interrupts, receives the stale one
+ * and crashes immediately.
+ */
+ ath12k_pci_write32(ab, PCIE_PCIE_INT_ALL_CLEAR, PCIE_INT_CLEAR_ALL);
+}
+
+static void ath12k_pci_set_wlaon_pwr_ctrl(struct ath12k_base *ab)
+{
+ u32 val;
+
+ val = ath12k_pci_read32(ab, WLAON_QFPROM_PWR_CTRL_REG);
+ val &= ~QFPROM_PWR_CTRL_VDD4BLOW_MASK;
+ ath12k_pci_write32(ab, WLAON_QFPROM_PWR_CTRL_REG, val);
+}
+
+static void ath12k_pci_force_wake(struct ath12k_base *ab)
+{
+ ath12k_pci_write32(ab, PCIE_SOC_WAKE_PCIE_LOCAL_REG, 1);
+ mdelay(5);
+}
+
+static void ath12k_pci_sw_reset(struct ath12k_base *ab, bool power_on)
+{
+ if (power_on) {
+ ath12k_pci_enable_ltssm(ab);
+ ath12k_pci_clear_all_intrs(ab);
+ ath12k_pci_set_wlaon_pwr_ctrl(ab);
+ }
+
+ ath12k_mhi_clear_vector(ab);
+ ath12k_pci_clear_dbg_registers(ab);
+ ath12k_pci_soc_global_reset(ab);
+ ath12k_mhi_set_mhictrl_reset(ab);
+}
+
+static void ath12k_pci_free_ext_irq(struct ath12k_base *ab)
+{
+ int i, j;
+
+ for (i = 0; i < ATH12K_EXT_IRQ_GRP_NUM_MAX; i++) {
+ struct ath12k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
+
+ for (j = 0; j < irq_grp->num_irq; j++)
+ free_irq(ab->irq_num[irq_grp->irqs[j]], irq_grp);
+
+ netif_napi_del(&irq_grp->napi);
+ }
+}
+
+static void ath12k_pci_free_irq(struct ath12k_base *ab)
+{
+ int i, irq_idx;
+
+ for (i = 0; i < ab->hw_params->ce_count; i++) {
+ if (ath12k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
+ continue;
+ irq_idx = ATH12K_PCI_IRQ_CE0_OFFSET + i;
+ free_irq(ab->irq_num[irq_idx], &ab->ce.ce_pipe[i]);
+ }
+
+ ath12k_pci_free_ext_irq(ab);
+}
+
+static void ath12k_pci_ce_irq_enable(struct ath12k_base *ab, u16 ce_id)
+{
+ u32 irq_idx;
+
+ irq_idx = ATH12K_PCI_IRQ_CE0_OFFSET + ce_id;
+ enable_irq(ab->irq_num[irq_idx]);
+}
+
+static void ath12k_pci_ce_irq_disable(struct ath12k_base *ab, u16 ce_id)
+{
+ u32 irq_idx;
+
+ irq_idx = ATH12K_PCI_IRQ_CE0_OFFSET + ce_id;
+ disable_irq_nosync(ab->irq_num[irq_idx]);
+}
+
+static void ath12k_pci_ce_irqs_disable(struct ath12k_base *ab)
+{
+ int i;
+
+ for (i = 0; i < ab->hw_params->ce_count; i++) {
+ if (ath12k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
+ continue;
+ ath12k_pci_ce_irq_disable(ab, i);
+ }
+}
+
+static void ath12k_pci_sync_ce_irqs(struct ath12k_base *ab)
+{
+ int i;
+ int irq_idx;
+
+ for (i = 0; i < ab->hw_params->ce_count; i++) {
+ if (ath12k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
+ continue;
+
+ irq_idx = ATH12K_PCI_IRQ_CE0_OFFSET + i;
+ synchronize_irq(ab->irq_num[irq_idx]);
+ }
+}
+
+static void ath12k_pci_ce_tasklet(struct tasklet_struct *t)
+{
+ struct ath12k_ce_pipe *ce_pipe = from_tasklet(ce_pipe, t, intr_tq);
+
+ ath12k_ce_per_engine_service(ce_pipe->ab, ce_pipe->pipe_num);
+
+ ath12k_pci_ce_irq_enable(ce_pipe->ab, ce_pipe->pipe_num);
+}
+
+static irqreturn_t ath12k_pci_ce_interrupt_handler(int irq, void *arg)
+{
+ struct ath12k_ce_pipe *ce_pipe = arg;
+
+ /* last interrupt received for this CE */
+ ce_pipe->timestamp = jiffies;
+
+ ath12k_pci_ce_irq_disable(ce_pipe->ab, ce_pipe->pipe_num);
+ tasklet_schedule(&ce_pipe->intr_tq);
+
+ return IRQ_HANDLED;
+}
+
+static void ath12k_pci_ext_grp_disable(struct ath12k_ext_irq_grp *irq_grp)
+{
+ int i;
+
+ for (i = 0; i < irq_grp->num_irq; i++)
+ disable_irq_nosync(irq_grp->ab->irq_num[irq_grp->irqs[i]]);
+}
+
+static void __ath12k_pci_ext_irq_disable(struct ath12k_base *sc)
+{
+ int i;
+
+ for (i = 0; i < ATH12K_EXT_IRQ_GRP_NUM_MAX; i++) {
+ struct ath12k_ext_irq_grp *irq_grp = &sc->ext_irq_grp[i];
+
+ ath12k_pci_ext_grp_disable(irq_grp);
+
+ napi_synchronize(&irq_grp->napi);
+ napi_disable(&irq_grp->napi);
+ }
+}
+
+static void ath12k_pci_ext_grp_enable(struct ath12k_ext_irq_grp *irq_grp)
+{
+ int i;
+
+ for (i = 0; i < irq_grp->num_irq; i++)
+ enable_irq(irq_grp->ab->irq_num[irq_grp->irqs[i]]);
+}
+
+static void ath12k_pci_sync_ext_irqs(struct ath12k_base *ab)
+{
+ int i, j, irq_idx;
+
+ for (i = 0; i < ATH12K_EXT_IRQ_GRP_NUM_MAX; i++) {
+ struct ath12k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
+
+ for (j = 0; j < irq_grp->num_irq; j++) {
+ irq_idx = irq_grp->irqs[j];
+ synchronize_irq(ab->irq_num[irq_idx]);
+ }
+ }
+}
+
+static int ath12k_pci_ext_grp_napi_poll(struct napi_struct *napi, int budget)
+{
+ struct ath12k_ext_irq_grp *irq_grp = container_of(napi,
+ struct ath12k_ext_irq_grp,
+ napi);
+ struct ath12k_base *ab = irq_grp->ab;
+ int work_done;
+
+ work_done = ath12k_dp_service_srng(ab, irq_grp, budget);
+ if (work_done < budget) {
+ napi_complete_done(napi, work_done);
+ ath12k_pci_ext_grp_enable(irq_grp);
+ }
+
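+ /* NAPI poll handlers must never report more completed work than
+ * the budget they were given
+ */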
+ if (work_done > budget)
+ work_done = budget;
+
+ return work_done;
+}
+
+static irqreturn_t ath12k_pci_ext_interrupt_handler(int irq, void *arg)
+{
+ struct ath12k_ext_irq_grp *irq_grp = arg;
+
+ ath12k_dbg(irq_grp->ab, ATH12K_DBG_PCI, "ext irq:%d\n", irq);
+
+ /* last interrupt received for this group */
+ irq_grp->timestamp = jiffies;
+
+ ath12k_pci_ext_grp_disable(irq_grp);
+
+ napi_schedule(&irq_grp->napi);
+
+ return IRQ_HANDLED;
+}
+
+static int ath12k_pci_ext_irq_config(struct ath12k_base *ab)
+{
+ int i, j, ret, num_vectors = 0;
+ u32 user_base_data = 0, base_vector = 0, base_idx;
+
+ base_idx = ATH12K_PCI_IRQ_CE0_OFFSET + CE_COUNT_MAX;
+ ret = ath12k_pci_get_user_msi_assignment(ab, "DP",
+ &num_vectors,
+ &user_base_data,
+ &base_vector);
+ if (ret < 0)
+ return ret;
+
+ for (i = 0; i < ATH12K_EXT_IRQ_GRP_NUM_MAX; i++) {
+ struct ath12k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
+ u32 num_irq = 0;
+
+ irq_grp->ab = ab;
+ irq_grp->grp_id = i;
+ init_dummy_netdev(&irq_grp->napi_ndev);
+ netif_napi_add(&irq_grp->napi_ndev, &irq_grp->napi,
+ ath12k_pci_ext_grp_napi_poll);
+
+ if (ab->hw_params->ring_mask->tx[i] ||
+ ab->hw_params->ring_mask->rx[i] ||
+ ab->hw_params->ring_mask->rx_err[i] ||
+ ab->hw_params->ring_mask->rx_wbm_rel[i] ||
+ ab->hw_params->ring_mask->reo_status[i] ||
+ ab->hw_params->ring_mask->host2rxdma[i] ||
+ ab->hw_params->ring_mask->rx_mon_dest[i]) {
+ num_irq = 1;
+ }
+
+ irq_grp->num_irq = num_irq;
+ irq_grp->irqs[0] = base_idx + i;
+
+ for (j = 0; j < irq_grp->num_irq; j++) {
+ int irq_idx = irq_grp->irqs[j];
+ int vector = (i % num_vectors) + base_vector;
+ int irq = ath12k_pci_get_msi_irq(ab->dev, vector);
+
+ ab->irq_num[irq_idx] = irq;
+
+ ath12k_dbg(ab, ATH12K_DBG_PCI,
+ "irq:%d group:%d\n", irq, i);
+
+ irq_set_status_flags(irq, IRQ_DISABLE_UNLAZY);
+ ret = request_irq(irq, ath12k_pci_ext_interrupt_handler,
+ IRQF_SHARED,
+ "DP_EXT_IRQ", irq_grp);
+ if (ret) {
+ ath12k_err(ab, "failed request irq %d: %d\n",
+ vector, ret);
+ return ret;
+ }
+
+ disable_irq_nosync(ab->irq_num[irq_idx]);
+ }
+ }
+
+ return 0;
+}
+
+static int ath12k_pci_config_irq(struct ath12k_base *ab)
+{
+ struct ath12k_ce_pipe *ce_pipe;
+ u32 msi_data_start;
+ u32 msi_data_count, msi_data_idx;
+ u32 msi_irq_start;
+ unsigned int msi_data;
+ int irq, i, ret, irq_idx;
+
+ ret = ath12k_pci_get_user_msi_assignment(ab,
+ "CE", &msi_data_count,
+ &msi_data_start, &msi_irq_start);
+ if (ret)
+ return ret;
+
+ /* Configure CE irqs */
+
+ for (i = 0, msi_data_idx = 0; i < ab->hw_params->ce_count; i++) {
+ if (ath12k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
+ continue;
+
+ msi_data = (msi_data_idx % msi_data_count) + msi_irq_start;
+ irq = ath12k_pci_get_msi_irq(ab->dev, msi_data);
+ ce_pipe = &ab->ce.ce_pipe[i];
+
+ irq_idx = ATH12K_PCI_IRQ_CE0_OFFSET + i;
+
+ tasklet_setup(&ce_pipe->intr_tq, ath12k_pci_ce_tasklet);
+
+ ret = request_irq(irq, ath12k_pci_ce_interrupt_handler,
+ IRQF_SHARED, irq_name[irq_idx],
+ ce_pipe);
+ if (ret) {
+ ath12k_err(ab, "failed to request irq %d: %d\n",
+ irq_idx, ret);
+ return ret;
+ }
+
+ ab->irq_num[irq_idx] = irq;
+ msi_data_idx++;
+
+ ath12k_pci_ce_irq_disable(ab, i);
+ }
+
+ ret = ath12k_pci_ext_irq_config(ab);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+static void ath12k_pci_init_qmi_ce_config(struct ath12k_base *ab)
+{
+ struct ath12k_qmi_ce_cfg *cfg = &ab->qmi.ce_cfg;
+
+ cfg->tgt_ce = ab->hw_params->target_ce_config;
+ cfg->tgt_ce_len = ab->hw_params->target_ce_count;
+
+ cfg->svc_to_ce_map = ab->hw_params->svc_to_ce_map;
+ cfg->svc_to_ce_map_len = ab->hw_params->svc_to_ce_map_len;
+ ab->qmi.service_ins_id = ab->hw_params->qmi_service_ins_id;
+}
+
+static void ath12k_pci_ce_irqs_enable(struct ath12k_base *ab)
+{
+ int i;
+
+ for (i = 0; i < ab->hw_params->ce_count; i++) {
+ if (ath12k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
+ continue;
+ ath12k_pci_ce_irq_enable(ab, i);
+ }
+}
+
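+/* Toggle the MSI enable bit directly in config space without freeing
+ * the allocated vectors.
+ */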
+static void ath12k_pci_msi_config(struct ath12k_pci *ab_pci, bool enable)
+{
+ struct pci_dev *dev = ab_pci->pdev;
+ u16 control;
+
+ pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control);
+
+ if (enable)
+ control |= PCI_MSI_FLAGS_ENABLE;
+ else
+ control &= ~PCI_MSI_FLAGS_ENABLE;
+
+ pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control);
+}
+
+static void ath12k_pci_msi_enable(struct ath12k_pci *ab_pci)
+{
+ ath12k_pci_msi_config(ab_pci, true);
+}
+
+static void ath12k_pci_msi_disable(struct ath12k_pci *ab_pci)
+{
+ ath12k_pci_msi_config(ab_pci, false);
+}
+
+static int ath12k_pci_msi_alloc(struct ath12k_pci *ab_pci)
+{
+ struct ath12k_base *ab = ab_pci->ab;
+ const struct ath12k_msi_config *msi_config = ab_pci->msi_config;
+ struct msi_desc *msi_desc;
+ int num_vectors;
+ int ret;
+
+ num_vectors = pci_alloc_irq_vectors(ab_pci->pdev,
+ msi_config->total_vectors,
+ msi_config->total_vectors,
+ PCI_IRQ_MSI);
+ if (num_vectors != msi_config->total_vectors) {
+ ath12k_err(ab, "failed to get %d MSI vectors, only %d available",
+ msi_config->total_vectors, num_vectors);
+
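+ /* a short allocation still succeeded, so return -EINVAL;
+ * otherwise propagate the error from the PCI core
+ */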
+ if (num_vectors >= 0)
+ return -EINVAL;
+ else
+ return num_vectors;
+ }
+
+ ath12k_pci_msi_disable(ab_pci);
+
+ msi_desc = irq_get_msi_desc(ab_pci->pdev->irq);
+ if (!msi_desc) {
+ ath12k_err(ab, "msi_desc is NULL!\n");
+ ret = -EINVAL;
+ goto free_msi_vector;
+ }
+
+ ab_pci->msi_ep_base_data = msi_desc->msg.data;
+ if (msi_desc->pci.msi_attrib.is_64)
+ set_bit(ATH12K_PCI_FLAG_IS_MSI_64, &ab_pci->flags);
+
+ ath12k_dbg(ab, ATH12K_DBG_PCI, "msi base data is %d\n", ab_pci->msi_ep_base_data);
+
+ return 0;
+
+free_msi_vector:
+ pci_free_irq_vectors(ab_pci->pdev);
+
+ return ret;
+}
+
+static void ath12k_pci_msi_free(struct ath12k_pci *ab_pci)
+{
+ pci_free_irq_vectors(ab_pci->pdev);
+}
+
+static int ath12k_pci_claim(struct ath12k_pci *ab_pci, struct pci_dev *pdev)
+{
+ struct ath12k_base *ab = ab_pci->ab;
+ u16 device_id;
+ int ret = 0;
+
+ pci_read_config_word(pdev, PCI_DEVICE_ID, &device_id);
+ if (device_id != ab_pci->dev_id) {
+ ath12k_err(ab, "pci device id mismatch: 0x%x 0x%x\n",
+ device_id, ab_pci->dev_id);
+ ret = -EIO;
+ goto out;
+ }
+
+ ret = pci_assign_resource(pdev, ATH12K_PCI_BAR_NUM);
+ if (ret) {
+ ath12k_err(ab, "failed to assign pci resource: %d\n", ret);
+ goto out;
+ }
+
+ ret = pci_enable_device(pdev);
+ if (ret) {
+ ath12k_err(ab, "failed to enable pci device: %d\n", ret);
+ goto out;
+ }
+
+ ret = pci_request_region(pdev, ATH12K_PCI_BAR_NUM, "ath12k_pci");
+ if (ret) {
+ ath12k_err(ab, "failed to request pci region: %d\n", ret);
+ goto disable_device;
+ }
+
+ ret = dma_set_mask_and_coherent(&pdev->dev,
+ DMA_BIT_MASK(ATH12K_PCI_DMA_MASK));
+ if (ret) {
+ ath12k_err(ab, "failed to set pci dma mask to %d: %d\n",
+ ATH12K_PCI_DMA_MASK, ret);
+ goto release_region;
+ }
+
+ pci_set_master(pdev);
+
+ ab->mem_len = pci_resource_len(pdev, ATH12K_PCI_BAR_NUM);
+ ab->mem = pci_iomap(pdev, ATH12K_PCI_BAR_NUM, 0);
+ if (!ab->mem) {
+ ath12k_err(ab, "failed to map pci bar %d\n", ATH12K_PCI_BAR_NUM);
+ ret = -EIO;
+ goto clear_master;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_BOOT, "boot pci_mem 0x%pK\n", ab->mem);
+ return 0;
+
+clear_master:
+ pci_clear_master(pdev);
+release_region:
+ pci_release_region(pdev, ATH12K_PCI_BAR_NUM);
+disable_device:
+ pci_disable_device(pdev);
+out:
+ return ret;
+}
+
+static void ath12k_pci_free_region(struct ath12k_pci *ab_pci)
+{
+ struct ath12k_base *ab = ab_pci->ab;
+ struct pci_dev *pci_dev = ab_pci->pdev;
+
+ pci_iounmap(pci_dev, ab->mem);
+ ab->mem = NULL;
+ pci_clear_master(pci_dev);
+ pci_release_region(pci_dev, ATH12K_PCI_BAR_NUM);
+ if (pci_is_enabled(pci_dev))
+ pci_disable_device(pci_dev);
+}
+
+static void ath12k_pci_aspm_disable(struct ath12k_pci *ab_pci)
+{
+ struct ath12k_base *ab = ab_pci->ab;
+
+ pcie_capability_read_word(ab_pci->pdev, PCI_EXP_LNKCTL,
+ &ab_pci->link_ctl);
+
+ ath12k_dbg(ab, ATH12K_DBG_PCI, "pci link_ctl 0x%04x L0s %d L1 %d\n",
+ ab_pci->link_ctl,
+ u16_get_bits(ab_pci->link_ctl, PCI_EXP_LNKCTL_ASPM_L0S),
+ u16_get_bits(ab_pci->link_ctl, PCI_EXP_LNKCTL_ASPM_L1));
+
+ /* disable L0s and L1 */
+ pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL,
+ ab_pci->link_ctl & ~PCI_EXP_LNKCTL_ASPMC);
+
+ set_bit(ATH12K_PCI_ASPM_RESTORE, &ab_pci->flags);
+}
+
+static void ath12k_pci_aspm_restore(struct ath12k_pci *ab_pci)
+{
+ if (test_and_clear_bit(ATH12K_PCI_ASPM_RESTORE, &ab_pci->flags))
+ pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL,
+ ab_pci->link_ctl);
+}
+
+static void ath12k_pci_kill_tasklets(struct ath12k_base *ab)
+{
+ int i;
+
+ for (i = 0; i < ab->hw_params->ce_count; i++) {
+ struct ath12k_ce_pipe *ce_pipe = &ab->ce.ce_pipe[i];
+
+ if (ath12k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
+ continue;
+
+ tasklet_kill(&ce_pipe->intr_tq);
+ }
+}
+
+static void ath12k_pci_ce_irq_disable_sync(struct ath12k_base *ab)
+{
+ ath12k_pci_ce_irqs_disable(ab);
+ ath12k_pci_sync_ce_irqs(ab);
+ ath12k_pci_kill_tasklets(ab);
+}
+
+int ath12k_pci_map_service_to_pipe(struct ath12k_base *ab, u16 service_id,
+ u8 *ul_pipe, u8 *dl_pipe)
+{
+ const struct service_to_pipe *entry;
+ bool ul_set = false, dl_set = false;
+ int i;
+
+ for (i = 0; i < ab->hw_params->svc_to_ce_map_len; i++) {
+ entry = &ab->hw_params->svc_to_ce_map[i];
+
+ if (__le32_to_cpu(entry->service_id) != service_id)
+ continue;
+
+ switch (__le32_to_cpu(entry->pipedir)) {
+ case PIPEDIR_NONE:
+ break;
+ case PIPEDIR_IN:
+ WARN_ON(dl_set);
+ *dl_pipe = __le32_to_cpu(entry->pipenum);
+ dl_set = true;
+ break;
+ case PIPEDIR_OUT:
+ WARN_ON(ul_set);
+ *ul_pipe = __le32_to_cpu(entry->pipenum);
+ ul_set = true;
+ break;
+ case PIPEDIR_INOUT:
+ WARN_ON(dl_set);
+ WARN_ON(ul_set);
+ *dl_pipe = __le32_to_cpu(entry->pipenum);
+ *ul_pipe = __le32_to_cpu(entry->pipenum);
+ dl_set = true;
+ ul_set = true;
+ break;
+ }
+ }
+
+ if (WARN_ON(!ul_set || !dl_set))
+ return -ENOENT;
+
+ return 0;
+}
+
+int ath12k_pci_get_msi_irq(struct device *dev, unsigned int vector)
+{
+ struct pci_dev *pci_dev = to_pci_dev(dev);
+
+ return pci_irq_vector(pci_dev, vector);
+}
+
+int ath12k_pci_get_user_msi_assignment(struct ath12k_base *ab, char *user_name,
+ int *num_vectors, u32 *user_base_data,
+ u32 *base_vector)
+{
+ struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
+ const struct ath12k_msi_config *msi_config = ab_pci->msi_config;
+ int idx;
+
+ for (idx = 0; idx < msi_config->total_users; idx++) {
+ if (strcmp(user_name, msi_config->users[idx].name) == 0) {
+ *num_vectors = msi_config->users[idx].num_vectors;
+ *user_base_data = msi_config->users[idx].base_vector
+ + ab_pci->msi_ep_base_data;
+ *base_vector = msi_config->users[idx].base_vector;
+
+ ath12k_dbg(ab, ATH12K_DBG_PCI, "Assign MSI to user: %s, num_vectors: %d, user_base_data: %u, base_vector: %u\n",
+ user_name, *num_vectors, *user_base_data,
+ *base_vector);
+
+ return 0;
+ }
+ }
+
+ ath12k_err(ab, "Failed to find MSI assignment for %s!\n", user_name);
+
+ return -EINVAL;
+}
+
+void ath12k_pci_get_msi_address(struct ath12k_base *ab, u32 *msi_addr_lo,
+ u32 *msi_addr_hi)
+{
+ struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
+ struct pci_dev *pci_dev = to_pci_dev(ab->dev);
+
+ pci_read_config_dword(pci_dev, pci_dev->msi_cap + PCI_MSI_ADDRESS_LO,
+ msi_addr_lo);
+
+ if (test_bit(ATH12K_PCI_FLAG_IS_MSI_64, &ab_pci->flags)) {
+ pci_read_config_dword(pci_dev, pci_dev->msi_cap + PCI_MSI_ADDRESS_HI,
+ msi_addr_hi);
+ } else {
+ *msi_addr_hi = 0;
+ }
+}
+
+void ath12k_pci_get_ce_msi_idx(struct ath12k_base *ab, u32 ce_id,
+ u32 *msi_idx)
+{
+ u32 i, msi_data_idx;
+
+ for (i = 0, msi_data_idx = 0; i < ab->hw_params->ce_count; i++) {
+ if (ath12k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
+ continue;
+
+ if (ce_id == i)
+ break;
+
+ msi_data_idx++;
+ }
+ *msi_idx = msi_data_idx;
+}
+
+void ath12k_pci_hif_ce_irq_enable(struct ath12k_base *ab)
+{
+ ath12k_pci_ce_irqs_enable(ab);
+}
+
+void ath12k_pci_hif_ce_irq_disable(struct ath12k_base *ab)
+{
+ ath12k_pci_ce_irq_disable_sync(ab);
+}
+
+void ath12k_pci_ext_irq_enable(struct ath12k_base *ab)
+{
+ int i;
+
+ for (i = 0; i < ATH12K_EXT_IRQ_GRP_NUM_MAX; i++) {
+ struct ath12k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
+
+ napi_enable(&irq_grp->napi);
+ ath12k_pci_ext_grp_enable(irq_grp);
+ }
+}
+
+void ath12k_pci_ext_irq_disable(struct ath12k_base *ab)
+{
+ __ath12k_pci_ext_irq_disable(ab);
+ ath12k_pci_sync_ext_irqs(ab);
+}
+
+int ath12k_pci_hif_suspend(struct ath12k_base *ab)
+{
+ struct ath12k_pci *ar_pci = ath12k_pci_priv(ab);
+
+ ath12k_mhi_suspend(ar_pci);
+
+ return 0;
+}
+
+int ath12k_pci_hif_resume(struct ath12k_base *ab)
+{
+ struct ath12k_pci *ar_pci = ath12k_pci_priv(ab);
+
+ ath12k_mhi_resume(ar_pci);
+
+ return 0;
+}
+
+void ath12k_pci_stop(struct ath12k_base *ab)
+{
+ ath12k_pci_ce_irq_disable_sync(ab);
+ ath12k_ce_cleanup_pipes(ab);
+}
+
+int ath12k_pci_start(struct ath12k_base *ab)
+{
+ struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
+
+ set_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags);
+
+ ath12k_pci_aspm_restore(ab_pci);
+
+ ath12k_pci_ce_irqs_enable(ab);
+ ath12k_ce_rx_post_buf(ab);
+
+ return 0;
+}
+
+u32 ath12k_pci_read32(struct ath12k_base *ab, u32 offset)
+{
+ struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
+ u32 val, window_start;
+
+ /* for offsets beyond BAR + 4K - 32, MHI may need to be woken
+ * up before the access can succeed.
+ */
+ if (test_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+ offset >= ACCESS_ALWAYS_OFF)
+ mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
+
+ if (offset < WINDOW_START) {
+ val = ioread32(ab->mem + offset);
+ } else {
+ if (ab->static_window_map)
+ window_start = ath12k_pci_get_window_start(ab, offset);
+ else
+ window_start = WINDOW_START;
+
+ if (window_start == WINDOW_START) {
+ spin_lock_bh(&ab_pci->window_lock);
+ ath12k_pci_select_window(ab_pci, offset);
+ val = ioread32(ab->mem + window_start +
+ (offset & WINDOW_RANGE_MASK));
+ spin_unlock_bh(&ab_pci->window_lock);
+ } else {
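+ /* in window 0 the MHI registers are accessed
+ * relative to PCI_MHIREGLEN_REG
+ */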
+ if ((!window_start) &&
+ (offset >= PCI_MHIREGLEN_REG &&
+ offset <= PCI_MHI_REGION_END))
+ offset = offset - PCI_MHIREGLEN_REG;
+
+ val = ioread32(ab->mem + window_start +
+ (offset & WINDOW_RANGE_MASK));
+ }
+ }
+
+ if (test_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+ offset >= ACCESS_ALWAYS_OFF)
+ mhi_device_put(ab_pci->mhi_ctrl->mhi_dev);
+
+ return val;
+}
+
+void ath12k_pci_write32(struct ath12k_base *ab, u32 offset, u32 value)
+{
+ struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
+ u32 window_start;
+
+ /* for offsets beyond BAR + 4K - 32, MHI may need to be woken
+ * up before the access can succeed.
+ */
+ if (test_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+ offset >= ACCESS_ALWAYS_OFF)
+ mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
+
+ if (offset < WINDOW_START) {
+ iowrite32(value, ab->mem + offset);
+ } else {
+ if (ab->static_window_map)
+ window_start = ath12k_pci_get_window_start(ab, offset);
+ else
+ window_start = WINDOW_START;
+
+ if (window_start == WINDOW_START) {
+ spin_lock_bh(&ab_pci->window_lock);
+ ath12k_pci_select_window(ab_pci, offset);
+ iowrite32(value, ab->mem + window_start +
+ (offset & WINDOW_RANGE_MASK));
+ spin_unlock_bh(&ab_pci->window_lock);
+ } else {
+ if ((!window_start) &&
+ (offset >= PCI_MHIREGLEN_REG &&
+ offset <= PCI_MHI_REGION_END))
+ offset = offset - PCI_MHIREGLEN_REG;
+
+ iowrite32(value, ab->mem + window_start +
+ (offset & WINDOW_RANGE_MASK));
+ }
+ }
+
+ if (test_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+ offset >= ACCESS_ALWAYS_OFF)
+ mhi_device_put(ab_pci->mhi_ctrl->mhi_dev);
+}
+
+int ath12k_pci_power_up(struct ath12k_base *ab)
+{
+ struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
+ int ret;
+
+ ab_pci->register_window = 0;
+ clear_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags);
+ ath12k_pci_sw_reset(ab_pci->ab, true);
+
+ /* Disable ASPM during firmware download due to problems switching
+ * to AMSS state.
+ */
+ ath12k_pci_aspm_disable(ab_pci);
+
+ ath12k_pci_msi_enable(ab_pci);
+
+ ret = ath12k_mhi_start(ab_pci);
+ if (ret) {
+ ath12k_err(ab, "failed to start mhi: %d\n", ret);
+ return ret;
+ }
+
+ if (ab->static_window_map)
+ ath12k_pci_select_static_window(ab_pci);
+
+ return 0;
+}
+
+void ath12k_pci_power_down(struct ath12k_base *ab)
+{
+ struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
+
+ /* restore aspm in case firmware bootup fails */
+ ath12k_pci_aspm_restore(ab_pci);
+
+ ath12k_pci_force_wake(ab_pci->ab);
+ ath12k_pci_msi_disable(ab_pci);
+ ath12k_mhi_stop(ab_pci);
+ clear_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags);
+ ath12k_pci_sw_reset(ab_pci->ab, false);
+}
+
+static const struct ath12k_hif_ops ath12k_pci_hif_ops = {
+ .start = ath12k_pci_start,
+ .stop = ath12k_pci_stop,
+ .read32 = ath12k_pci_read32,
+ .write32 = ath12k_pci_write32,
+ .power_down = ath12k_pci_power_down,
+ .power_up = ath12k_pci_power_up,
+ .suspend = ath12k_pci_hif_suspend,
+ .resume = ath12k_pci_hif_resume,
+ .irq_enable = ath12k_pci_ext_irq_enable,
+ .irq_disable = ath12k_pci_ext_irq_disable,
+ .get_msi_address = ath12k_pci_get_msi_address,
+ .get_user_msi_vector = ath12k_pci_get_user_msi_assignment,
+ .map_service_to_pipe = ath12k_pci_map_service_to_pipe,
+ .ce_irq_enable = ath12k_pci_hif_ce_irq_enable,
+ .ce_irq_disable = ath12k_pci_hif_ce_irq_disable,
+ .get_ce_msi_idx = ath12k_pci_get_ce_msi_idx,
+};
+
+static
+void ath12k_pci_read_hw_version(struct ath12k_base *ab, u32 *major, u32 *minor)
+{
+ u32 soc_hw_version;
+
+ soc_hw_version = ath12k_pci_read32(ab, TCSR_SOC_HW_VERSION);
+ *major = FIELD_GET(TCSR_SOC_HW_VERSION_MAJOR_MASK,
+ soc_hw_version);
+ *minor = FIELD_GET(TCSR_SOC_HW_VERSION_MINOR_MASK,
+ soc_hw_version);
+
+ ath12k_dbg(ab, ATH12K_DBG_PCI,
+ "pci tcsr_soc_hw_version major %d minor %d\n",
+ *major, *minor);
+}
+
+static int ath12k_pci_probe(struct pci_dev *pdev,
+ const struct pci_device_id *pci_dev)
+{
+ struct ath12k_base *ab;
+ struct ath12k_pci *ab_pci;
+ u32 soc_hw_version_major, soc_hw_version_minor;
+ int ret;
+
+ ab = ath12k_core_alloc(&pdev->dev, sizeof(*ab_pci), ATH12K_BUS_PCI);
+ if (!ab) {
+ dev_err(&pdev->dev, "failed to allocate ath12k base\n");
+ return -ENOMEM;
+ }
+
+ ab->dev = &pdev->dev;
+ pci_set_drvdata(pdev, ab);
+ ab_pci = ath12k_pci_priv(ab);
+ ab_pci->dev_id = pci_dev->device;
+ ab_pci->ab = ab;
+ ab_pci->pdev = pdev;
+ ab->hif.ops = &ath12k_pci_hif_ops;
+ spin_lock_init(&ab_pci->window_lock);
+
+ ret = ath12k_pci_claim(ab_pci, pdev);
+ if (ret) {
+ ath12k_err(ab, "failed to claim device: %d\n", ret);
+ goto err_free_core;
+ }
+
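+	/* the PCI device id determines the MSI layout, the register window
+	 * mode and how the hardware revision is derived
+	 */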
+ switch (pci_dev->device) {
+ case QCN9274_DEVICE_ID:
+ ab_pci->msi_config = &ath12k_msi_config[0];
+ ab->static_window_map = true;
+ ath12k_pci_read_hw_version(ab, &soc_hw_version_major,
+ &soc_hw_version_minor);
+ switch (soc_hw_version_major) {
+ case ATH12K_PCI_SOC_HW_VERSION_2:
+ ab->hw_rev = ATH12K_HW_QCN9274_HW20;
+ break;
+ case ATH12K_PCI_SOC_HW_VERSION_1:
+ ab->hw_rev = ATH12K_HW_QCN9274_HW10;
+ break;
+ default:
+ dev_err(&pdev->dev,
+ "Unknown hardware version found for QCN9274: 0x%x\n",
+ soc_hw_version_major);
+			ret = -EOPNOTSUPP;
+			goto err_pci_free_region;
+ }
+ break;
+ case WCN7850_DEVICE_ID:
+ ab_pci->msi_config = &ath12k_msi_config[0];
+ ab->static_window_map = false;
+ ab->hw_rev = ATH12K_HW_WCN7850_HW20;
+ break;
+ default:
+ dev_err(&pdev->dev, "Unknown PCI device found: 0x%x\n",
+ pci_dev->device);
+ ret = -EOPNOTSUPP;
+ goto err_pci_free_region;
+ }
+
+ ret = ath12k_pci_msi_alloc(ab_pci);
+ if (ret) {
+ ath12k_err(ab, "failed to alloc msi: %d\n", ret);
+ goto err_pci_free_region;
+ }
+
+ ret = ath12k_core_pre_init(ab);
+ if (ret)
+ goto err_pci_msi_free;
+
+ ret = ath12k_mhi_register(ab_pci);
+ if (ret) {
+ ath12k_err(ab, "failed to register mhi: %d\n", ret);
+ goto err_pci_msi_free;
+ }
+
+ ret = ath12k_hal_srng_init(ab);
+ if (ret)
+ goto err_mhi_unregister;
+
+ ret = ath12k_ce_alloc_pipes(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to allocate ce pipes: %d\n", ret);
+ goto err_hal_srng_deinit;
+ }
+
+ ath12k_pci_init_qmi_ce_config(ab);
+
+ ret = ath12k_pci_config_irq(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to config irq: %d\n", ret);
+ goto err_ce_free;
+ }
+
+ ret = ath12k_core_init(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to init core: %d\n", ret);
+ goto err_free_irq;
+ }
+ return 0;
+
+err_free_irq:
+ ath12k_pci_free_irq(ab);
+
+err_ce_free:
+ ath12k_ce_free_pipes(ab);
+
+err_hal_srng_deinit:
+ ath12k_hal_srng_deinit(ab);
+
+err_mhi_unregister:
+ ath12k_mhi_unregister(ab_pci);
+
+err_pci_msi_free:
+ ath12k_pci_msi_free(ab_pci);
+
+err_pci_free_region:
+ ath12k_pci_free_region(ab_pci);
+
+err_free_core:
+ ath12k_core_free(ab);
+
+ return ret;
+}
+
+static void ath12k_pci_remove(struct pci_dev *pdev)
+{
+ struct ath12k_base *ab = pci_get_drvdata(pdev);
+ struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
+
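+	/* if firmware bring-up failed at the QMI stage the core was never
+	 * fully registered, so skip core deinit and only tear down the
+	 * lower layers
+	 */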
+ if (test_bit(ATH12K_FLAG_QMI_FAIL, &ab->dev_flags)) {
+ ath12k_pci_power_down(ab);
+ ath12k_qmi_deinit_service(ab);
+ goto qmi_fail;
+ }
+
+ set_bit(ATH12K_FLAG_UNREGISTERING, &ab->dev_flags);
+
+ cancel_work_sync(&ab->reset_work);
+ ath12k_core_deinit(ab);
+
+qmi_fail:
+ ath12k_mhi_unregister(ab_pci);
+
+ ath12k_pci_free_irq(ab);
+ ath12k_pci_msi_free(ab_pci);
+ ath12k_pci_free_region(ab_pci);
+
+ ath12k_hal_srng_deinit(ab);
+ ath12k_ce_free_pipes(ab);
+ ath12k_core_free(ab);
+}
+
+static void ath12k_pci_shutdown(struct pci_dev *pdev)
+{
+ struct ath12k_base *ab = pci_get_drvdata(pdev);
+
+ ath12k_pci_power_down(ab);
+}
+
+static __maybe_unused int ath12k_pci_pm_suspend(struct device *dev)
+{
+ struct ath12k_base *ab = dev_get_drvdata(dev);
+ int ret;
+
+ ret = ath12k_core_suspend(ab);
+ if (ret)
+ ath12k_warn(ab, "failed to suspend core: %d\n", ret);
+
+ return ret;
+}
+
+static __maybe_unused int ath12k_pci_pm_resume(struct device *dev)
+{
+ struct ath12k_base *ab = dev_get_drvdata(dev);
+ int ret;
+
+ ret = ath12k_core_resume(ab);
+ if (ret)
+ ath12k_warn(ab, "failed to resume core: %d\n", ret);
+
+ return ret;
+}
+
+static SIMPLE_DEV_PM_OPS(ath12k_pci_pm_ops,
+ ath12k_pci_pm_suspend,
+ ath12k_pci_pm_resume);
+
+static struct pci_driver ath12k_pci_driver = {
+ .name = "ath12k_pci",
+ .id_table = ath12k_pci_id_table,
+ .probe = ath12k_pci_probe,
+ .remove = ath12k_pci_remove,
+ .shutdown = ath12k_pci_shutdown,
+ .driver.pm = &ath12k_pci_pm_ops,
+};
+
+static int ath12k_pci_init(void)
+{
+ int ret;
+
+ ret = pci_register_driver(&ath12k_pci_driver);
+ if (ret) {
+ pr_err("failed to register ath12k pci driver: %d\n",
+ ret);
+ return ret;
+ }
+
+ return 0;
+}
+module_init(ath12k_pci_init);
+
+static void ath12k_pci_exit(void)
+{
+ pci_unregister_driver(&ath12k_pci_driver);
+}
+
+module_exit(ath12k_pci_exit);
+
+MODULE_DESCRIPTION("Driver support for Qualcomm Technologies 802.11be WLAN PCIe devices");
+MODULE_LICENSE("Dual BSD/GPL");
diff --git a/drivers/net/wireless/ath/ath12k/pci.h b/drivers/net/wireless/ath/ath12k/pci.h
new file mode 100644
index 000000000000..0d9e40ab31f2
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/pci.h
@@ -0,0 +1,135 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+#ifndef ATH12K_PCI_H
+#define ATH12K_PCI_H
+
+#include <linux/mhi.h>
+
+#include "core.h"
+
+#define PCIE_SOC_GLOBAL_RESET 0x3008
+#define PCIE_SOC_GLOBAL_RESET_V 1
+
+#define WLAON_WARM_SW_ENTRY 0x1f80504
+#define WLAON_SOC_RESET_CAUSE_REG 0x01f8060c
+
+#define PCIE_Q6_COOKIE_ADDR 0x01f80500
+#define PCIE_Q6_COOKIE_DATA 0xc0000000
+
+/* register to wake the UMAC from power collapse */
+#define PCIE_SCRATCH_0_SOC_PCIE_REG 0x4040
+
+/* register used for handshake mechanism to validate UMAC is awake */
+#define PCIE_SOC_WAKE_PCIE_LOCAL_REG 0x3004
+
+#define PCIE_PCIE_PARF_LTSSM 0x1e081b0
+#define PARM_LTSSM_VALUE 0x111
+
+#define GCC_GCC_PCIE_HOT_RST 0x1e38338
+#define GCC_GCC_PCIE_HOT_RST_VAL 0x10
+
+#define PCIE_PCIE_INT_ALL_CLEAR 0x1e08228
+#define PCIE_SMLH_REQ_RST_LINK_DOWN 0x2
+#define PCIE_INT_CLEAR_ALL 0xffffffff
+
+#define PCIE_QSERDES_COM_SYSCLK_EN_SEL_REG(ab) \
+ ((ab)->hw_params->regs->pcie_qserdes_sysclk_en_sel)
+#define PCIE_QSERDES_COM_SYSCLK_EN_SEL_VAL 0x10
+#define PCIE_QSERDES_COM_SYSCLK_EN_SEL_MSK 0xffffffff
+#define PCIE_PCS_OSC_DTCT_CONFIG1_REG(ab) \
+ ((ab)->hw_params->regs->pcie_pcs_osc_dtct_config_base)
+#define PCIE_PCS_OSC_DTCT_CONFIG1_VAL 0x02
+#define PCIE_PCS_OSC_DTCT_CONFIG2_REG(ab) \
+ ((ab)->hw_params->regs->pcie_pcs_osc_dtct_config_base + 0x4)
+#define PCIE_PCS_OSC_DTCT_CONFIG2_VAL 0x52
+#define PCIE_PCS_OSC_DTCT_CONFIG4_REG(ab) \
+ ((ab)->hw_params->regs->pcie_pcs_osc_dtct_config_base + 0xc)
+#define PCIE_PCS_OSC_DTCT_CONFIG4_VAL 0xff
+#define PCIE_PCS_OSC_DTCT_CONFIG_MSK 0x000000ff
+
+#define WLAON_QFPROM_PWR_CTRL_REG 0x01f8031c
+#define QFPROM_PWR_CTRL_VDD4BLOW_MASK 0x4
+
+#define PCI_BAR_WINDOW0_BASE 0x1E00000
+#define PCI_BAR_WINDOW0_END 0x1E7FFFC
+#define PCI_SOC_RANGE_MASK 0x3FFF
+#define PCI_SOC_PCI_REG_BASE 0x1E04000
+#define PCI_SOC_PCI_REG_END 0x1E07FFC
+#define PCI_PARF_BASE 0x1E08000
+#define PCI_PARF_END 0x1E0BFFC
+#define PCI_MHIREGLEN_REG 0x1E0E100
+#define PCI_MHI_REGION_END 0x1E0EFFC
+#define QRTR_PCI_DOMAIN_NR_MASK GENMASK(7, 4)
+#define QRTR_PCI_BUS_NUMBER_MASK GENMASK(3, 0)
+
+#define ATH12K_PCI_SOC_HW_VERSION_1 1
+#define ATH12K_PCI_SOC_HW_VERSION_2 2
+
+struct ath12k_msi_user {
+ const char *name;
+ int num_vectors;
+ u32 base_vector;
+};
+
+struct ath12k_msi_config {
+ int total_vectors;
+ int total_users;
+ const struct ath12k_msi_user *users;
+};
+
+enum ath12k_pci_flags {
+ ATH12K_PCI_FLAG_INIT_DONE,
+ ATH12K_PCI_FLAG_IS_MSI_64,
+ ATH12K_PCI_ASPM_RESTORE,
+};
+
+struct ath12k_pci {
+ struct pci_dev *pdev;
+ struct ath12k_base *ab;
+ u16 dev_id;
+ char amss_path[100];
+ u32 msi_ep_base_data;
+ struct mhi_controller *mhi_ctrl;
+ const struct ath12k_msi_config *msi_config;
+ unsigned long mhi_state;
+ u32 register_window;
+
+ /* protects register_window above */
+ spinlock_t window_lock;
+
+ /* enum ath12k_pci_flags */
+ unsigned long flags;
+ u16 link_ctl;
+};
+
+static inline struct ath12k_pci *ath12k_pci_priv(struct ath12k_base *ab)
+{
+ return (struct ath12k_pci *)ab->drv_priv;
+}
+
+int ath12k_pci_get_user_msi_assignment(struct ath12k_base *ab, char *user_name,
+ int *num_vectors, u32 *user_base_data,
+ u32 *base_vector);
+int ath12k_pci_get_msi_irq(struct device *dev, unsigned int vector);
+void ath12k_pci_write32(struct ath12k_base *ab, u32 offset, u32 value);
+u32 ath12k_pci_read32(struct ath12k_base *ab, u32 offset);
+int ath12k_pci_map_service_to_pipe(struct ath12k_base *ab, u16 service_id,
+ u8 *ul_pipe, u8 *dl_pipe);
+void ath12k_pci_get_msi_address(struct ath12k_base *ab, u32 *msi_addr_lo,
+ u32 *msi_addr_hi);
+void ath12k_pci_get_ce_msi_idx(struct ath12k_base *ab, u32 ce_id,
+ u32 *msi_idx);
+void ath12k_pci_hif_ce_irq_enable(struct ath12k_base *ab);
+void ath12k_pci_hif_ce_irq_disable(struct ath12k_base *ab);
+void ath12k_pci_ext_irq_enable(struct ath12k_base *ab);
+void ath12k_pci_ext_irq_disable(struct ath12k_base *ab);
+int ath12k_pci_hif_suspend(struct ath12k_base *ab);
+int ath12k_pci_hif_resume(struct ath12k_base *ab);
+void ath12k_pci_stop(struct ath12k_base *ab);
+int ath12k_pci_start(struct ath12k_base *ab);
+int ath12k_pci_power_up(struct ath12k_base *ab);
+void ath12k_pci_power_down(struct ath12k_base *ab);
+#endif /* ATH12K_PCI_H */
diff --git a/drivers/net/wireless/ath/ath12k/peer.c b/drivers/net/wireless/ath/ath12k/peer.c
new file mode 100644
index 000000000000..19c0626fbff1
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/peer.c
@@ -0,0 +1,342 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include "core.h"
+#include "peer.h"
+#include "debug.h"
+
+struct ath12k_peer *ath12k_peer_find(struct ath12k_base *ab, int vdev_id,
+ const u8 *addr)
+{
+ struct ath12k_peer *peer;
+
+ lockdep_assert_held(&ab->base_lock);
+
+ list_for_each_entry(peer, &ab->peers, list) {
+ if (peer->vdev_id != vdev_id)
+ continue;
+ if (!ether_addr_equal(peer->addr, addr))
+ continue;
+
+ return peer;
+ }
+
+ return NULL;
+}
+
+static struct ath12k_peer *ath12k_peer_find_by_pdev_idx(struct ath12k_base *ab,
+ u8 pdev_idx, const u8 *addr)
+{
+ struct ath12k_peer *peer;
+
+ lockdep_assert_held(&ab->base_lock);
+
+ list_for_each_entry(peer, &ab->peers, list) {
+ if (peer->pdev_idx != pdev_idx)
+ continue;
+ if (!ether_addr_equal(peer->addr, addr))
+ continue;
+
+ return peer;
+ }
+
+ return NULL;
+}
+
+struct ath12k_peer *ath12k_peer_find_by_addr(struct ath12k_base *ab,
+ const u8 *addr)
+{
+ struct ath12k_peer *peer;
+
+ lockdep_assert_held(&ab->base_lock);
+
+ list_for_each_entry(peer, &ab->peers, list) {
+ if (!ether_addr_equal(peer->addr, addr))
+ continue;
+
+ return peer;
+ }
+
+ return NULL;
+}
+
+struct ath12k_peer *ath12k_peer_find_by_id(struct ath12k_base *ab,
+ int peer_id)
+{
+ struct ath12k_peer *peer;
+
+ lockdep_assert_held(&ab->base_lock);
+
+ list_for_each_entry(peer, &ab->peers, list)
+ if (peer_id == peer->peer_id)
+ return peer;
+
+ return NULL;
+}
+
+bool ath12k_peer_exist_by_vdev_id(struct ath12k_base *ab, int vdev_id)
+{
+ struct ath12k_peer *peer;
+
+ spin_lock_bh(&ab->base_lock);
+
+ list_for_each_entry(peer, &ab->peers, list) {
+ if (vdev_id == peer->vdev_id) {
+ spin_unlock_bh(&ab->base_lock);
+ return true;
+ }
+ }
+ spin_unlock_bh(&ab->base_lock);
+ return false;
+}
+
+struct ath12k_peer *ath12k_peer_find_by_ast(struct ath12k_base *ab,
+ int ast_hash)
+{
+ struct ath12k_peer *peer;
+
+ lockdep_assert_held(&ab->base_lock);
+
+ list_for_each_entry(peer, &ab->peers, list)
+ if (ast_hash == peer->ast_hash)
+ return peer;
+
+ return NULL;
+}
+
+void ath12k_peer_unmap_event(struct ath12k_base *ab, u16 peer_id)
+{
+ struct ath12k_peer *peer;
+
+ spin_lock_bh(&ab->base_lock);
+
+ peer = ath12k_peer_find_by_id(ab, peer_id);
+ if (!peer) {
+ ath12k_warn(ab, "peer-unmap-event: unknown peer id %d\n",
+ peer_id);
+ goto exit;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_DP_HTT, "htt peer unmap vdev %d peer %pM id %d\n",
+ peer->vdev_id, peer->addr, peer_id);
+
+ list_del(&peer->list);
+ kfree(peer);
+ wake_up(&ab->peer_mapping_wq);
+
+exit:
+ spin_unlock_bh(&ab->base_lock);
+}
+
+void ath12k_peer_map_event(struct ath12k_base *ab, u8 vdev_id, u16 peer_id,
+ u8 *mac_addr, u16 ast_hash, u16 hw_peer_id)
+{
+ struct ath12k_peer *peer;
+
+ spin_lock_bh(&ab->base_lock);
+ peer = ath12k_peer_find(ab, vdev_id, mac_addr);
+ if (!peer) {
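+		/* atomic allocation, ab->base_lock is held */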
+ peer = kzalloc(sizeof(*peer), GFP_ATOMIC);
+ if (!peer)
+ goto exit;
+
+ peer->vdev_id = vdev_id;
+ peer->peer_id = peer_id;
+ peer->ast_hash = ast_hash;
+ peer->hw_peer_id = hw_peer_id;
+ ether_addr_copy(peer->addr, mac_addr);
+ list_add(&peer->list, &ab->peers);
+ wake_up(&ab->peer_mapping_wq);
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_DP_HTT, "htt peer map vdev %d peer %pM id %d\n",
+ vdev_id, mac_addr, peer_id);
+
+exit:
+ spin_unlock_bh(&ab->base_lock);
+}
+
+static int ath12k_wait_for_peer_common(struct ath12k_base *ab, int vdev_id,
+ const u8 *addr, bool expect_mapped)
+{
+ int ret;
+
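+	/* the statement expression below is re-evaluated on every wakeup:
+	 * the wait completes once the peer's mapped state matches the
+	 * expectation or a crash flush is in progress
+	 */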
+ ret = wait_event_timeout(ab->peer_mapping_wq, ({
+ bool mapped;
+
+ spin_lock_bh(&ab->base_lock);
+ mapped = !!ath12k_peer_find(ab, vdev_id, addr);
+ spin_unlock_bh(&ab->base_lock);
+
+ (mapped == expect_mapped ||
+ test_bit(ATH12K_FLAG_CRASH_FLUSH, &ab->dev_flags));
+ }), 3 * HZ);
+
+ if (ret <= 0)
+ return -ETIMEDOUT;
+
+ return 0;
+}
+
+void ath12k_peer_cleanup(struct ath12k *ar, u32 vdev_id)
+{
+ struct ath12k_peer *peer, *tmp;
+ struct ath12k_base *ab = ar->ab;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ spin_lock_bh(&ab->base_lock);
+ list_for_each_entry_safe(peer, tmp, &ab->peers, list) {
+ if (peer->vdev_id != vdev_id)
+ continue;
+
+ ath12k_warn(ab, "removing stale peer %pM from vdev_id %d\n",
+ peer->addr, vdev_id);
+
+ list_del(&peer->list);
+ kfree(peer);
+ ar->num_peers--;
+ }
+
+ spin_unlock_bh(&ab->base_lock);
+}
+
+static int ath12k_wait_for_peer_deleted(struct ath12k *ar, int vdev_id, const u8 *addr)
+{
+ return ath12k_wait_for_peer_common(ar->ab, vdev_id, addr, false);
+}
+
+int ath12k_wait_for_peer_delete_done(struct ath12k *ar, u32 vdev_id,
+ const u8 *addr)
+{
+ int ret;
+ unsigned long time_left;
+
+ ret = ath12k_wait_for_peer_deleted(ar, vdev_id, addr);
+ if (ret) {
+		ath12k_warn(ar->ab, "failed to wait for peer deletion: %d\n", ret);
+ return ret;
+ }
+
+ time_left = wait_for_completion_timeout(&ar->peer_delete_done,
+ 3 * HZ);
+ if (time_left == 0) {
+ ath12k_warn(ar->ab, "Timeout in receiving peer delete response\n");
+ return -ETIMEDOUT;
+ }
+
+ return 0;
+}
+
+int ath12k_peer_delete(struct ath12k *ar, u32 vdev_id, u8 *addr)
+{
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ reinit_completion(&ar->peer_delete_done);
+
+ ret = ath12k_wmi_send_peer_delete_cmd(ar, addr, vdev_id);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to delete peer vdev_id %d addr %pM ret %d\n",
+ vdev_id, addr, ret);
+ return ret;
+ }
+
+ ret = ath12k_wait_for_peer_delete_done(ar, vdev_id, addr);
+ if (ret)
+ return ret;
+
+ ar->num_peers--;
+
+ return 0;
+}
+
+static int ath12k_wait_for_peer_created(struct ath12k *ar, int vdev_id, const u8 *addr)
+{
+ return ath12k_wait_for_peer_common(ar->ab, vdev_id, addr, true);
+}
+
+int ath12k_peer_create(struct ath12k *ar, struct ath12k_vif *arvif,
+ struct ieee80211_sta *sta,
+ struct ath12k_wmi_peer_create_arg *arg)
+{
+ struct ath12k_peer *peer;
+ int ret;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (ar->num_peers > (ar->max_num_peers - 1)) {
+ ath12k_warn(ar->ab,
+ "failed to create peer due to insufficient peer entry resource in firmware\n");
+ return -ENOBUFS;
+ }
+
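+	/* a peer with this address must not already exist on the same
+	 * pdev, so reject duplicate creation requests up front
+	 */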
+ spin_lock_bh(&ar->ab->base_lock);
+ peer = ath12k_peer_find_by_pdev_idx(ar->ab, ar->pdev_idx, arg->peer_addr);
+ if (peer) {
+ spin_unlock_bh(&ar->ab->base_lock);
+ return -EINVAL;
+ }
+ spin_unlock_bh(&ar->ab->base_lock);
+
+ ret = ath12k_wmi_send_peer_create_cmd(ar, arg);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send peer create vdev_id %d ret %d\n",
+ arg->vdev_id, ret);
+ return ret;
+ }
+
+ ret = ath12k_wait_for_peer_created(ar, arg->vdev_id,
+ arg->peer_addr);
+ if (ret)
+ return ret;
+
+ spin_lock_bh(&ar->ab->base_lock);
+
+ peer = ath12k_peer_find(ar->ab, arg->vdev_id, arg->peer_addr);
+ if (!peer) {
+ spin_unlock_bh(&ar->ab->base_lock);
+ ath12k_warn(ar->ab, "failed to find peer %pM on vdev %i after creation\n",
+ arg->peer_addr, arg->vdev_id);
+
+ reinit_completion(&ar->peer_delete_done);
+
+ ret = ath12k_wmi_send_peer_delete_cmd(ar, arg->peer_addr,
+ arg->vdev_id);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to delete peer vdev_id %d addr %pM\n",
+ arg->vdev_id, arg->peer_addr);
+ return ret;
+ }
+
+ ret = ath12k_wait_for_peer_delete_done(ar, arg->vdev_id,
+ arg->peer_addr);
+ if (ret)
+ return ret;
+
+ return -ENOENT;
+ }
+
+ peer->pdev_idx = ar->pdev_idx;
+ peer->sta = sta;
+
+ if (arvif->vif->type == NL80211_IFTYPE_STATION) {
+ arvif->ast_hash = peer->ast_hash;
+ arvif->ast_idx = peer->hw_peer_id;
+ }
+
+ peer->sec_type = HAL_ENCRYPT_TYPE_OPEN;
+ peer->sec_type_grp = HAL_ENCRYPT_TYPE_OPEN;
+
+ ar->num_peers++;
+
+ spin_unlock_bh(&ar->ab->base_lock);
+
+ return 0;
+}
diff --git a/drivers/net/wireless/ath/ath12k/peer.h b/drivers/net/wireless/ath/ath12k/peer.h
new file mode 100644
index 000000000000..b296dc0e2f67
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/peer.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_PEER_H
+#define ATH12K_PEER_H
+
+#include "dp_rx.h"
+
+struct ppdu_user_delayba {
+ u16 sw_peer_id;
+ u32 info0;
+ u16 ru_end;
+ u16 ru_start;
+ u32 info1;
+ u32 rate_flags;
+ u32 resp_rate_flags;
+};
+
+struct ath12k_peer {
+ struct list_head list;
+ struct ieee80211_sta *sta;
+ int vdev_id;
+ u8 addr[ETH_ALEN];
+ int peer_id;
+ u16 ast_hash;
+ u8 pdev_idx;
+ u16 hw_peer_id;
+
+ /* protected by ab->data_lock */
+ struct ieee80211_key_conf *keys[WMI_MAX_KEY_INDEX + 1];
+ struct ath12k_dp_rx_tid rx_tid[IEEE80211_NUM_TIDS + 1];
+
+ /* Info used in MMIC verification of
+ * RX fragments
+ */
+ struct crypto_shash *tfm_mmic;
+ u8 mcast_keyidx;
+ u8 ucast_keyidx;
+ u16 sec_type;
+ u16 sec_type_grp;
+ struct ppdu_user_delayba ppdu_stats_delayba;
+ bool delayba_flag;
+ bool is_authorized;
+};
+
+void ath12k_peer_unmap_event(struct ath12k_base *ab, u16 peer_id);
+void ath12k_peer_map_event(struct ath12k_base *ab, u8 vdev_id, u16 peer_id,
+ u8 *mac_addr, u16 ast_hash, u16 hw_peer_id);
+struct ath12k_peer *ath12k_peer_find(struct ath12k_base *ab, int vdev_id,
+ const u8 *addr);
+struct ath12k_peer *ath12k_peer_find_by_addr(struct ath12k_base *ab,
+ const u8 *addr);
+struct ath12k_peer *ath12k_peer_find_by_id(struct ath12k_base *ab, int peer_id);
+void ath12k_peer_cleanup(struct ath12k *ar, u32 vdev_id);
+int ath12k_peer_delete(struct ath12k *ar, u32 vdev_id, u8 *addr);
+int ath12k_peer_create(struct ath12k *ar, struct ath12k_vif *arvif,
+ struct ieee80211_sta *sta,
+ struct ath12k_wmi_peer_create_arg *arg);
+int ath12k_wait_for_peer_delete_done(struct ath12k *ar, u32 vdev_id,
+ const u8 *addr);
+bool ath12k_peer_exist_by_vdev_id(struct ath12k_base *ab, int vdev_id);
+struct ath12k_peer *ath12k_peer_find_by_ast(struct ath12k_base *ab, int ast_hash);
+
+#endif /* ATH12K_PEER_H */
diff --git a/drivers/net/wireless/ath/ath12k/qmi.c b/drivers/net/wireless/ath/ath12k/qmi.c
new file mode 100644
index 000000000000..979a63f2e2ab
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/qmi.c
@@ -0,0 +1,3087 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include <linux/elf.h>
+#include <linux/firmware.h>
+#include <linux/of.h>
+
+#include "core.h"
+#include "debug.h"
+#include "qmi.h"
+
+#define SLEEP_CLOCK_SELECT_INTERNAL_BIT 0x02
+#define HOST_CSTATE_BIT 0x04
+#define PLATFORM_CAP_PCIE_GLOBAL_RESET 0x08
+#define ATH12K_QMI_MAX_CHUNK_SIZE 2097152
+
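+/* The qmi_elem_info tables below describe the on-wire TLV layout of every
+ * QMI message used by the driver (element type, size, count and TLV id),
+ * allowing the kernel QMI core to encode and decode the corresponding C
+ * structures automatically.
+ */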
+static struct qmi_elem_info wlfw_host_mlo_chip_info_s_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct wlfw_host_mlo_chip_info_s_v01,
+ chip_id),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct wlfw_host_mlo_chip_info_s_v01,
+ num_local_links),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = QMI_WLFW_MAX_NUM_MLO_LINKS_PER_CHIP_V01,
+ .elem_size = sizeof(u8),
+ .array_type = STATIC_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct wlfw_host_mlo_chip_info_s_v01,
+ hw_link_id),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = QMI_WLFW_MAX_NUM_MLO_LINKS_PER_CHIP_V01,
+ .elem_size = sizeof(u8),
+ .array_type = STATIC_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct wlfw_host_mlo_chip_info_s_v01,
+ valid_mlo_link_id),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_host_cap_req_msg_v01_ei[] = {
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ num_clients_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ num_clients),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ wake_msi_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ wake_msi),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ gpios_valid),
+ },
+ {
+ .data_type = QMI_DATA_LEN,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ gpios_len),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = QMI_WLFW_MAX_NUM_GPIO_V01,
+ .elem_size = sizeof(u32),
+ .array_type = VAR_LEN_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ gpios),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x13,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ nm_modem_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x13,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ nm_modem),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x14,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ bdf_support_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x14,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ bdf_support),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x15,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ bdf_cache_support_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x15,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ bdf_cache_support),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x16,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ m3_support_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x16,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ m3_support),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x17,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ m3_cache_support_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x17,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ m3_cache_support),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x18,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ cal_filesys_support_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x18,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ cal_filesys_support),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x19,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ cal_cache_support_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x19,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ cal_cache_support),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1A,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ cal_done_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1A,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ cal_done),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1B,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ mem_bucket_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1B,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ mem_bucket),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1C,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ mem_cfg_mode_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1C,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ mem_cfg_mode),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1D,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ cal_duration_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_2_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u16),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1D,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+					   cal_duraiton), /* sic, matches the field name in qmi.h */
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1E,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ platform_name_valid),
+ },
+ {
+ .data_type = QMI_STRING,
+ .elem_len = QMI_WLANFW_MAX_PLATFORM_NAME_LEN_V01 + 1,
+ .elem_size = sizeof(char),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1E,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ platform_name),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1F,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ ddr_range_valid),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = QMI_WLANFW_MAX_HOST_DDR_RANGE_SIZE_V01,
+ .elem_size = sizeof(struct qmi_wlanfw_host_ddr_range),
+ .array_type = STATIC_ARRAY,
+ .tlv_type = 0x1F,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ ddr_range),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x20,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ host_build_type_valid),
+ },
+ {
+ .data_type = QMI_SIGNED_4_BYTE_ENUM,
+ .elem_len = 1,
+ .elem_size = sizeof(enum qmi_wlanfw_host_build_type),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x20,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ host_build_type),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x21,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ mlo_capable_valid),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x21,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ mlo_capable),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x22,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ mlo_chip_id_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_2_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u16),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x22,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ mlo_chip_id),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x23,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ mlo_group_id_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x23,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ mlo_group_id),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x24,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ max_mlo_peer_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_2_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u16),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x24,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ max_mlo_peer),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x25,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ mlo_num_chips_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x25,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ mlo_num_chips),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x26,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ mlo_chip_info_valid),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = QMI_WLFW_MAX_NUM_MLO_CHIPS_V01,
+ .elem_size = sizeof(struct wlfw_host_mlo_chip_info_s_v01),
+ .array_type = STATIC_ARRAY,
+ .tlv_type = 0x26,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ mlo_chip_info),
+ .ei_array = wlfw_host_mlo_chip_info_s_v01_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x27,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ feature_list_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_8_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u64),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x27,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_req_msg_v01,
+ feature_list),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_host_cap_resp_msg_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct qmi_wlanfw_host_cap_resp_msg_v01, resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_ind_register_req_msg_v01_ei[] = {
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ fw_ready_enable_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ fw_ready_enable),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ initiate_cal_download_enable_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ initiate_cal_download_enable),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ initiate_cal_update_enable_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ initiate_cal_update_enable),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x13,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ msa_ready_enable_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x13,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ msa_ready_enable),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x14,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ pin_connect_result_enable_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x14,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ pin_connect_result_enable),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x15,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ client_id_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x15,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ client_id),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x16,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ request_mem_enable_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x16,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ request_mem_enable),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x17,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ fw_mem_ready_enable_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x17,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ fw_mem_ready_enable),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x18,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ fw_init_done_enable_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x18,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ fw_init_done_enable),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x19,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ rejuvenate_enable_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x19,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ rejuvenate_enable),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1A,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ xo_cal_enable_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1A,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ xo_cal_enable),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1B,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ cal_done_enable_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1B,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_req_msg_v01,
+ cal_done_enable),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_ind_register_resp_msg_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_resp_msg_v01,
+ resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_resp_msg_v01,
+ fw_status_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_8_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u64),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct qmi_wlanfw_ind_register_resp_msg_v01,
+ fw_status),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_mem_cfg_s_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_8_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u64),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_mem_cfg_s_v01, offset),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_mem_cfg_s_v01, size),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_mem_cfg_s_v01, secure_flag),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_mem_seg_s_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_mem_seg_s_v01,
+ size),
+ },
+ {
+ .data_type = QMI_SIGNED_4_BYTE_ENUM,
+ .elem_len = 1,
+ .elem_size = sizeof(enum qmi_wlanfw_mem_type_enum_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_mem_seg_s_v01, type),
+ },
+ {
+ .data_type = QMI_DATA_LEN,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_mem_seg_s_v01, mem_cfg_len),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = QMI_WLANFW_MAX_NUM_MEM_CFG_V01,
+ .elem_size = sizeof(struct qmi_wlanfw_mem_cfg_s_v01),
+ .array_type = VAR_LEN_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_mem_seg_s_v01, mem_cfg),
+ .ei_array = qmi_wlanfw_mem_cfg_s_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_request_mem_ind_msg_v01_ei[] = {
+ {
+ .data_type = QMI_DATA_LEN,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x01,
+ .offset = offsetof(struct qmi_wlanfw_request_mem_ind_msg_v01,
+ mem_seg_len),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = ATH12K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01,
+ .elem_size = sizeof(struct qmi_wlanfw_mem_seg_s_v01),
+ .array_type = VAR_LEN_ARRAY,
+ .tlv_type = 0x01,
+ .offset = offsetof(struct qmi_wlanfw_request_mem_ind_msg_v01,
+ mem_seg),
+ .ei_array = qmi_wlanfw_mem_seg_s_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_mem_seg_resp_s_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_8_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u64),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_mem_seg_resp_s_v01, addr),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_mem_seg_resp_s_v01, size),
+ },
+ {
+ .data_type = QMI_SIGNED_4_BYTE_ENUM,
+ .elem_len = 1,
+ .elem_size = sizeof(enum qmi_wlanfw_mem_type_enum_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_mem_seg_resp_s_v01, type),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_mem_seg_resp_s_v01, restore),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_respond_mem_req_msg_v01_ei[] = {
+ {
+ .data_type = QMI_DATA_LEN,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x01,
+ .offset = offsetof(struct qmi_wlanfw_respond_mem_req_msg_v01,
+ mem_seg_len),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = ATH12K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01,
+ .elem_size = sizeof(struct qmi_wlanfw_mem_seg_resp_s_v01),
+ .array_type = VAR_LEN_ARRAY,
+ .tlv_type = 0x01,
+ .offset = offsetof(struct qmi_wlanfw_respond_mem_req_msg_v01,
+ mem_seg),
+ .ei_array = qmi_wlanfw_mem_seg_resp_s_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_respond_mem_resp_msg_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct qmi_wlanfw_respond_mem_resp_msg_v01,
+ resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_cap_req_msg_v01_ei[] = {
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_rf_chip_info_s_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_rf_chip_info_s_v01,
+ chip_id),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_rf_chip_info_s_v01,
+ chip_family),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_rf_board_info_s_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_rf_board_info_s_v01,
+ board_id),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_soc_info_s_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_soc_info_s_v01, soc_id),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_dev_mem_info_s_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_8_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u64),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_dev_mem_info_s_v01,
+ start),
+ },
+ {
+ .data_type = QMI_UNSIGNED_8_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u64),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_dev_mem_info_s_v01,
+ size),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_fw_version_info_s_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_fw_version_info_s_v01,
+ fw_version),
+ },
+ {
+ .data_type = QMI_STRING,
+ .elem_len = ATH12K_QMI_WLANFW_MAX_TIMESTAMP_LEN_V01 + 1,
+ .elem_size = sizeof(char),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_fw_version_info_s_v01,
+ fw_build_timestamp),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_cap_resp_msg_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01, resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ chip_info_valid),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_wlanfw_rf_chip_info_s_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ chip_info),
+ .ei_array = qmi_wlanfw_rf_chip_info_s_v01_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ board_info_valid),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_wlanfw_rf_board_info_s_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ board_info),
+ .ei_array = qmi_wlanfw_rf_board_info_s_v01_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ soc_info_valid),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_wlanfw_soc_info_s_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ soc_info),
+ .ei_array = qmi_wlanfw_soc_info_s_v01_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x13,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ fw_version_info_valid),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_wlanfw_fw_version_info_s_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x13,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ fw_version_info),
+ .ei_array = qmi_wlanfw_fw_version_info_s_v01_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x14,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ fw_build_id_valid),
+ },
+ {
+ .data_type = QMI_STRING,
+ .elem_len = ATH12K_QMI_WLANFW_MAX_BUILD_ID_LEN_V01 + 1,
+ .elem_size = sizeof(char),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x14,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ fw_build_id),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x15,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ num_macs_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x15,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ num_macs),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x16,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ voltage_mv_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x16,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ voltage_mv),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x17,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ time_freq_hz_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x17,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ time_freq_hz),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x18,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ otp_version_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x18,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ otp_version),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x19,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ eeprom_caldata_read_timeout_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x19,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ eeprom_caldata_read_timeout),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1A,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ fw_caps_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_8_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u64),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1A,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01, fw_caps),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1B,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ rd_card_chain_cap_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1B,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ rd_card_chain_cap),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1C,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01,
+ dev_mem_info_valid),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = ATH12K_QMI_WLFW_MAX_DEV_MEM_NUM_V01,
+ .elem_size = sizeof(struct qmi_wlanfw_dev_mem_info_s_v01),
+ .array_type = STATIC_ARRAY,
+ .tlv_type = 0x1C,
+ .offset = offsetof(struct qmi_wlanfw_cap_resp_msg_v01, dev_mem),
+ .ei_array = qmi_wlanfw_dev_mem_info_s_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_bdf_download_req_msg_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x01,
+ .offset = offsetof(struct qmi_wlanfw_bdf_download_req_msg_v01,
+ valid),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct qmi_wlanfw_bdf_download_req_msg_v01,
+ file_id_valid),
+ },
+ {
+ .data_type = QMI_SIGNED_4_BYTE_ENUM,
+ .elem_len = 1,
+ .elem_size = sizeof(enum qmi_wlanfw_cal_temp_id_enum_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct qmi_wlanfw_bdf_download_req_msg_v01,
+ file_id),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct qmi_wlanfw_bdf_download_req_msg_v01,
+ total_size_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct qmi_wlanfw_bdf_download_req_msg_v01,
+ total_size),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct qmi_wlanfw_bdf_download_req_msg_v01,
+ seg_id_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct qmi_wlanfw_bdf_download_req_msg_v01,
+ seg_id),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x13,
+ .offset = offsetof(struct qmi_wlanfw_bdf_download_req_msg_v01,
+ data_valid),
+ },
+ {
+ .data_type = QMI_DATA_LEN,
+ .elem_len = 1,
+ .elem_size = sizeof(u16),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x13,
+ .offset = offsetof(struct qmi_wlanfw_bdf_download_req_msg_v01,
+ data_len),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = QMI_WLANFW_MAX_DATA_SIZE_V01,
+ .elem_size = sizeof(u8),
+ .array_type = VAR_LEN_ARRAY,
+ .tlv_type = 0x13,
+ .offset = offsetof(struct qmi_wlanfw_bdf_download_req_msg_v01,
+ data),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x14,
+ .offset = offsetof(struct qmi_wlanfw_bdf_download_req_msg_v01,
+ end_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x14,
+ .offset = offsetof(struct qmi_wlanfw_bdf_download_req_msg_v01,
+ end),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x15,
+ .offset = offsetof(struct qmi_wlanfw_bdf_download_req_msg_v01,
+ bdf_type_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x15,
+ .offset = offsetof(struct qmi_wlanfw_bdf_download_req_msg_v01,
+ bdf_type),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_bdf_download_resp_msg_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct qmi_wlanfw_bdf_download_resp_msg_v01,
+ resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_m3_info_req_msg_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_8_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u64),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x01,
+ .offset = offsetof(struct qmi_wlanfw_m3_info_req_msg_v01, addr),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct qmi_wlanfw_m3_info_req_msg_v01, size),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_m3_info_resp_msg_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct qmi_wlanfw_m3_info_resp_msg_v01, resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_ce_tgt_pipe_cfg_s_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_ce_tgt_pipe_cfg_s_v01,
+ pipe_num),
+ },
+ {
+ .data_type = QMI_SIGNED_4_BYTE_ENUM,
+ .elem_len = 1,
+ .elem_size = sizeof(enum qmi_wlanfw_pipedir_enum_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_ce_tgt_pipe_cfg_s_v01,
+ pipe_dir),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_ce_tgt_pipe_cfg_s_v01,
+ nentries),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_ce_tgt_pipe_cfg_s_v01,
+ nbytes_max),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_ce_tgt_pipe_cfg_s_v01,
+ flags),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_ce_svc_pipe_cfg_s_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_ce_svc_pipe_cfg_s_v01,
+ service_id),
+ },
+ {
+ .data_type = QMI_SIGNED_4_BYTE_ENUM,
+ .elem_len = 1,
+ .elem_size = sizeof(enum qmi_wlanfw_pipedir_enum_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_ce_svc_pipe_cfg_s_v01,
+ pipe_dir),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_ce_svc_pipe_cfg_s_v01,
+ pipe_num),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_shadow_reg_cfg_s_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_2_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u16),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_shadow_reg_cfg_s_v01, id),
+ },
+ {
+ .data_type = QMI_UNSIGNED_2_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u16),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_shadow_reg_cfg_s_v01,
+ offset),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_shadow_reg_v3_cfg_s_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct qmi_wlanfw_shadow_reg_v3_cfg_s_v01,
+ addr),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_wlan_mode_req_msg_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x01,
+ .offset = offsetof(struct qmi_wlanfw_wlan_mode_req_msg_v01,
+ mode),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct qmi_wlanfw_wlan_mode_req_msg_v01,
+ hw_debug_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct qmi_wlanfw_wlan_mode_req_msg_v01,
+ hw_debug),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_wlan_mode_resp_msg_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct qmi_wlanfw_wlan_mode_resp_msg_v01,
+ resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_wlan_cfg_req_msg_v01_ei[] = {
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct qmi_wlanfw_wlan_cfg_req_msg_v01,
+ host_version_valid),
+ },
+ {
+ .data_type = QMI_STRING,
+ .elem_len = QMI_WLANFW_MAX_STR_LEN_V01 + 1,
+ .elem_size = sizeof(char),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct qmi_wlanfw_wlan_cfg_req_msg_v01,
+ host_version),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct qmi_wlanfw_wlan_cfg_req_msg_v01,
+ tgt_cfg_valid),
+ },
+ {
+ .data_type = QMI_DATA_LEN,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct qmi_wlanfw_wlan_cfg_req_msg_v01,
+ tgt_cfg_len),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = QMI_WLANFW_MAX_NUM_CE_V01,
+ .elem_size = sizeof(struct qmi_wlanfw_ce_tgt_pipe_cfg_s_v01),
+ .array_type = VAR_LEN_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct qmi_wlanfw_wlan_cfg_req_msg_v01,
+ tgt_cfg),
+ .ei_array = qmi_wlanfw_ce_tgt_pipe_cfg_s_v01_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct qmi_wlanfw_wlan_cfg_req_msg_v01,
+ svc_cfg_valid),
+ },
+ {
+ .data_type = QMI_DATA_LEN,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct qmi_wlanfw_wlan_cfg_req_msg_v01,
+ svc_cfg_len),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = QMI_WLANFW_MAX_NUM_SVC_V01,
+ .elem_size = sizeof(struct qmi_wlanfw_ce_svc_pipe_cfg_s_v01),
+ .array_type = VAR_LEN_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct qmi_wlanfw_wlan_cfg_req_msg_v01,
+ svc_cfg),
+ .ei_array = qmi_wlanfw_ce_svc_pipe_cfg_s_v01_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x13,
+ .offset = offsetof(struct qmi_wlanfw_wlan_cfg_req_msg_v01,
+ shadow_reg_valid),
+ },
+ {
+ .data_type = QMI_DATA_LEN,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x13,
+ .offset = offsetof(struct qmi_wlanfw_wlan_cfg_req_msg_v01,
+ shadow_reg_len),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = QMI_WLANFW_MAX_NUM_SHADOW_REG_V01,
+ .elem_size = sizeof(struct qmi_wlanfw_shadow_reg_cfg_s_v01),
+ .array_type = VAR_LEN_ARRAY,
+ .tlv_type = 0x13,
+ .offset = offsetof(struct qmi_wlanfw_wlan_cfg_req_msg_v01,
+ shadow_reg),
+ .ei_array = qmi_wlanfw_shadow_reg_cfg_s_v01_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x17,
+ .offset = offsetof(struct qmi_wlanfw_wlan_cfg_req_msg_v01,
+ shadow_reg_v3_valid),
+ },
+ {
+ .data_type = QMI_DATA_LEN,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x17,
+ .offset = offsetof(struct qmi_wlanfw_wlan_cfg_req_msg_v01,
+ shadow_reg_v3_len),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = QMI_WLANFW_MAX_NUM_SHADOW_REG_V3_V01,
+ .elem_size = sizeof(struct qmi_wlanfw_shadow_reg_v3_cfg_s_v01),
+ .array_type = VAR_LEN_ARRAY,
+ .tlv_type = 0x17,
+ .offset = offsetof(struct qmi_wlanfw_wlan_cfg_req_msg_v01,
+ shadow_reg_v3),
+ .ei_array = qmi_wlanfw_shadow_reg_v3_cfg_s_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_wlan_cfg_resp_msg_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct qmi_wlanfw_wlan_cfg_resp_msg_v01, resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_mem_ready_ind_msg_v01_ei[] = {
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ },
+};
+
+static struct qmi_elem_info qmi_wlanfw_fw_ready_ind_msg_v01_ei[] = {
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ },
+};
+
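+/* Fill the MLO (multi-link operation) part of the host capability
+ * request with single-chip defaults: one chip (id 0) in group 0
+ * exposing two local links with hardware link ids 0 and 1.
+ */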
+static void ath12k_host_cap_parse_mlo(struct qmi_wlanfw_host_cap_req_msg_v01 *req)
+{
+ req->mlo_capable_valid = 1;
+ req->mlo_capable = 1;
+ req->mlo_chip_id_valid = 1;
+ req->mlo_chip_id = 0;
+ req->mlo_group_id_valid = 1;
+ req->mlo_group_id = 0;
+ req->max_mlo_peer_valid = 1;
+ /* Max peer number generally won't change for the same device
+ * but needs to be synced with host driver.
+ */
+ req->max_mlo_peer = 32;
+ req->mlo_num_chips_valid = 1;
+ req->mlo_num_chips = 1;
+ req->mlo_chip_info_valid = 1;
+ req->mlo_chip_info[0].chip_id = 0;
+ req->mlo_chip_info[0].num_local_links = 2;
+ req->mlo_chip_info[0].hw_link_id[0] = 0;
+ req->mlo_chip_info[0].hw_link_id[1] = 1;
+ req->mlo_chip_info[0].valid_mlo_link_id[0] = 1;
+ req->mlo_chip_info[0].valid_mlo_link_id[1] = 1;
+}
+
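+/* Advertise the host's capabilities (BDF/M3 support, memory config
+ * mode, calibration state, sleep clock selection and MLO parameters)
+ * to firmware. Sent during server-arrive handling, right after the
+ * indications are registered.
+ */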
+static int ath12k_qmi_host_cap_send(struct ath12k_base *ab)
+{
+ struct qmi_wlanfw_host_cap_req_msg_v01 req;
+ struct qmi_wlanfw_host_cap_resp_msg_v01 resp;
+ struct qmi_txn txn = {};
+ int ret = 0;
+
+ memset(&req, 0, sizeof(req));
+ memset(&resp, 0, sizeof(resp));
+
+ req.num_clients_valid = 1;
+ req.num_clients = 1;
+ req.mem_cfg_mode = ab->qmi.target_mem_mode;
+ req.mem_cfg_mode_valid = 1;
+ req.bdf_support_valid = 1;
+ req.bdf_support = 1;
+
+ req.m3_support_valid = 1;
+ req.m3_support = 1;
+ req.m3_cache_support_valid = 1;
+ req.m3_cache_support = 1;
+
+ req.cal_done_valid = 1;
+ req.cal_done = ab->qmi.cal_done;
+
+ req.feature_list_valid = 1;
+ req.feature_list = BIT(CNSS_QDSS_CFG_MISS_V01);
+
+ /* BRINGUP: we piggyback a lot of things on internal_sleep_clock
+ * here; should this be split up?
+ */
+ if (ab->hw_params->internal_sleep_clock) {
+ req.nm_modem_valid = 1;
+
+ /* Notify firmware that this is a non-Qualcomm platform. */
+ req.nm_modem |= HOST_CSTATE_BIT;
+
+ /* Notify firmware about the sleep clock selection;
+ * nm_modem bit[1] is used for this purpose. The host driver on
+ * non-Qualcomm platforms should select the internal sleep
+ * clock.
+ */
+ req.nm_modem |= SLEEP_CLOCK_SELECT_INTERNAL_BIT;
+ req.nm_modem |= PLATFORM_CAP_PCIE_GLOBAL_RESET;
+
+ ath12k_host_cap_parse_mlo(&req);
+ }
+
+ ret = qmi_txn_init(&ab->qmi.handle, &txn,
+ qmi_wlanfw_host_cap_resp_msg_v01_ei, &resp);
+ if (ret < 0)
+ goto out;
+
+ ret = qmi_send_request(&ab->qmi.handle, NULL, &txn,
+ QMI_WLANFW_HOST_CAP_REQ_V01,
+ QMI_WLANFW_HOST_CAP_REQ_MSG_V01_MAX_LEN,
+ qmi_wlanfw_host_cap_req_msg_v01_ei, &req);
+ if (ret < 0) {
+ ath12k_warn(ab, "Failed to send host capability request,err = %d\n", ret);
+ goto out;
+ }
+
+ ret = qmi_txn_wait(&txn, msecs_to_jiffies(ATH12K_QMI_WLANFW_TIMEOUT_MS));
+ if (ret < 0)
+ goto out;
+
+ if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
+ ath12k_warn(ab, "Host capability request failed, result: %d, err: %d\n",
+ resp.resp.result, resp.resp.error);
+ ret = -EINVAL;
+ goto out;
+ }
+
+out:
+ return ret;
+}
+
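+/* Register for the firmware indications (fw ready, request mem,
+ * fw mem ready, cal done, fw init done) that drive the QMI event
+ * state machine. Pin-connect results are explicitly left disabled.
+ */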
+static int ath12k_qmi_fw_ind_register_send(struct ath12k_base *ab)
+{
+ struct qmi_wlanfw_ind_register_req_msg_v01 *req;
+ struct qmi_wlanfw_ind_register_resp_msg_v01 *resp;
+ struct qmi_handle *handle = &ab->qmi.handle;
+ struct qmi_txn txn;
+ int ret;
+
+ req = kzalloc(sizeof(*req), GFP_KERNEL);
+ if (!req)
+ return -ENOMEM;
+
+ resp = kzalloc(sizeof(*resp), GFP_KERNEL);
+ if (!resp) {
+ ret = -ENOMEM;
+ goto resp_out;
+ }
+
+ req->client_id_valid = 1;
+ req->client_id = QMI_WLANFW_CLIENT_ID;
+ req->fw_ready_enable_valid = 1;
+ req->fw_ready_enable = 1;
+ req->request_mem_enable_valid = 1;
+ req->request_mem_enable = 1;
+ req->fw_mem_ready_enable_valid = 1;
+ req->fw_mem_ready_enable = 1;
+ req->cal_done_enable_valid = 1;
+ req->cal_done_enable = 1;
+ req->fw_init_done_enable_valid = 1;
+ req->fw_init_done_enable = 1;
+
+ req->pin_connect_result_enable_valid = 0;
+ req->pin_connect_result_enable = 0;
+
+ ret = qmi_txn_init(handle, &txn,
+ qmi_wlanfw_ind_register_resp_msg_v01_ei, resp);
+ if (ret < 0)
+ goto out;
+
+ ret = qmi_send_request(&ab->qmi.handle, NULL, &txn,
+ QMI_WLANFW_IND_REGISTER_REQ_V01,
+ QMI_WLANFW_IND_REGISTER_REQ_MSG_V01_MAX_LEN,
+ qmi_wlanfw_ind_register_req_msg_v01_ei, req);
+ if (ret < 0) {
+ ath12k_warn(ab, "Failed to send indication register request, err = %d\n",
+ ret);
+ goto out;
+ }
+
+ ret = qmi_txn_wait(&txn, msecs_to_jiffies(ATH12K_QMI_WLANFW_TIMEOUT_MS));
+ if (ret < 0) {
+ ath12k_warn(ab, "failed to register fw indication %d\n", ret);
+ goto out;
+ }
+
+ if (resp->resp.result != QMI_RESULT_SUCCESS_V01) {
+ ath12k_warn(ab, "FW Ind register request failed, result: %d, err: %d\n",
+ resp->resp.result, resp->resp.error);
+ ret = -EINVAL;
+ goto out;
+ }
+
+out:
+ kfree(resp);
+resp_out:
+ kfree(req);
+ return ret;
+}
+
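+/* Answer the firmware's request-memory indication: either describe
+ * the chunks allocated earlier back to firmware, or send an empty
+ * response when allocation was deferred so that firmware retries
+ * with smaller segments.
+ */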
+static int ath12k_qmi_respond_fw_mem_request(struct ath12k_base *ab)
+{
+ struct qmi_wlanfw_respond_mem_req_msg_v01 *req;
+ struct qmi_wlanfw_respond_mem_resp_msg_v01 resp;
+ struct qmi_txn txn = {};
+ int ret = 0, i;
+ bool delayed;
+
+ req = kzalloc(sizeof(*req), GFP_KERNEL);
+ if (!req)
+ return -ENOMEM;
+
+ memset(&resp, 0, sizeof(resp));
+
+ /* Some targets by default request one big block of contiguous
+ * DMA memory which is hard to allocate from the kernel. In that
+ * case the host returns failure to firmware, and firmware then
+ * requests multiple smaller blocks.
+ */
+ if (ab->qmi.target_mem_delayed) {
+ delayed = true;
+ ath12k_dbg(ab, ATH12K_DBG_QMI, "qmi delaying mem request, seg count %u\n",
+ ab->qmi.mem_seg_count);
+ memset(req, 0, sizeof(*req));
+ } else {
+ delayed = false;
+ req->mem_seg_len = ab->qmi.mem_seg_count;
+ for (i = 0; i < req->mem_seg_len; i++) {
+ req->mem_seg[i].addr = ab->qmi.target_mem[i].paddr;
+ req->mem_seg[i].size = ab->qmi.target_mem[i].size;
+ req->mem_seg[i].type = ab->qmi.target_mem[i].type;
+ ath12k_dbg(ab, ATH12K_DBG_QMI,
+ "qmi req mem_seg[%d] %pad %u %u\n", i,
+ &ab->qmi.target_mem[i].paddr,
+ ab->qmi.target_mem[i].size,
+ ab->qmi.target_mem[i].type);
+ }
+ }
+
+ ret = qmi_txn_init(&ab->qmi.handle, &txn,
+ qmi_wlanfw_respond_mem_resp_msg_v01_ei, &resp);
+ if (ret < 0)
+ goto out;
+
+ ret = qmi_send_request(&ab->qmi.handle, NULL, &txn,
+ QMI_WLANFW_RESPOND_MEM_REQ_V01,
+ QMI_WLANFW_RESPOND_MEM_REQ_MSG_V01_MAX_LEN,
+ qmi_wlanfw_respond_mem_req_msg_v01_ei, req);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to respond memory request, err = %d\n",
+ ret);
+ goto out;
+ }
+
+ ret = qmi_txn_wait(&txn, msecs_to_jiffies(ATH12K_QMI_WLANFW_TIMEOUT_MS));
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed memory request, err = %d\n", ret);
+ goto out;
+ }
+
+ if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
+ /* the error response is expected when
+ * target_mem_delayed is true.
+ */
+ if (delayed && resp.resp.error == 0)
+ goto out;
+
+ ath12k_warn(ab, "Respond mem req failed, result: %d, err: %d\n",
+ resp.resp.result, resp.resp.error);
+ ret = -EINVAL;
+ goto out;
+ }
+out:
+ kfree(req);
+ return ret;
+}
+
+static void ath12k_qmi_free_target_mem_chunk(struct ath12k_base *ab)
+{
+ int i;
+
+ for (i = 0; i < ab->qmi.mem_seg_count; i++) {
+ if (!ab->qmi.target_mem[i].v.addr)
+ continue;
+ dma_free_coherent(ab->dev,
+ ab->qmi.target_mem[i].size,
+ ab->qmi.target_mem[i].v.addr,
+ ab->qmi.target_mem[i].paddr);
+ ab->qmi.target_mem[i].v.addr = NULL;
+ }
+}
+
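+/* DMA-allocate every memory chunk firmware asked for. A failure on a
+ * chunk larger than ATH12K_QMI_MAX_CHUNK_SIZE is not fatal: all
+ * chunks are freed and target_mem_delayed is set so the request can
+ * be re-negotiated with smaller segments.
+ */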
+static int ath12k_qmi_alloc_target_mem_chunk(struct ath12k_base *ab)
+{
+ int i;
+ struct target_mem_chunk *chunk;
+
+ ab->qmi.target_mem_delayed = false;
+
+ for (i = 0; i < ab->qmi.mem_seg_count; i++) {
+ chunk = &ab->qmi.target_mem[i];
+
+ /* Allocate memory for the region types the host supports. For an
+ * unsupported region type the host allocates nothing and leaves
+ * the address NULL; firmware handles this without crashing.
+ */
+ switch (chunk->type) {
+ case HOST_DDR_REGION_TYPE:
+ case M3_DUMP_REGION_TYPE:
+ case PAGEABLE_MEM_REGION_TYPE:
+ case CALDB_MEM_REGION_TYPE:
+ chunk->v.addr = dma_alloc_coherent(ab->dev,
+ chunk->size,
+ &chunk->paddr,
+ GFP_KERNEL | __GFP_NOWARN);
+ if (!chunk->v.addr) {
+ if (chunk->size > ATH12K_QMI_MAX_CHUNK_SIZE) {
+ ab->qmi.target_mem_delayed = true;
+ ath12k_warn(ab,
+ "qmi dma allocation failed (%u B type %u), will retry with smaller chunks\n",
+ chunk->size,
+ chunk->type);
+ ath12k_qmi_free_target_mem_chunk(ab);
+ return 0;
+ }
+ ath12k_warn(ab, "memory allocation failure for %u size: %d\n",
+ chunk->type, chunk->size);
+ return -ENOMEM;
+ }
+ break;
+ default:
+ ath12k_warn(ab, "memory type %u not supported\n",
+ chunk->type);
+ chunk->paddr = 0;
+ chunk->v.addr = NULL;
+ break;
+ }
+ }
+ return 0;
+}
+
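+/* Query target capabilities (chip/board/soc ids, firmware version
+ * and build id, device memory ranges) and cache them for board file
+ * selection and the later configuration steps.
+ */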
+static int ath12k_qmi_request_target_cap(struct ath12k_base *ab)
+{
+ struct qmi_wlanfw_cap_req_msg_v01 req;
+ struct qmi_wlanfw_cap_resp_msg_v01 resp;
+ struct qmi_txn txn = {};
+ unsigned int board_id = ATH12K_BOARD_ID_DEFAULT;
+ int ret = 0;
+ int i;
+
+ memset(&req, 0, sizeof(req));
+ memset(&resp, 0, sizeof(resp));
+
+ ret = qmi_txn_init(&ab->qmi.handle, &txn,
+ qmi_wlanfw_cap_resp_msg_v01_ei, &resp);
+ if (ret < 0)
+ goto out;
+
+ ret = qmi_send_request(&ab->qmi.handle, NULL, &txn,
+ QMI_WLANFW_CAP_REQ_V01,
+ QMI_WLANFW_CAP_REQ_MSG_V01_MAX_LEN,
+ qmi_wlanfw_cap_req_msg_v01_ei, &req);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to send target cap request, err = %d\n",
+ ret);
+ goto out;
+ }
+
+ ret = qmi_txn_wait(&txn, msecs_to_jiffies(ATH12K_QMI_WLANFW_TIMEOUT_MS));
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed target cap request %d\n", ret);
+ goto out;
+ }
+
+ if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
+ ath12k_warn(ab, "qmi targetcap req failed, result: %d, err: %d\n",
+ resp.resp.result, resp.resp.error);
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (resp.chip_info_valid) {
+ ab->qmi.target.chip_id = resp.chip_info.chip_id;
+ ab->qmi.target.chip_family = resp.chip_info.chip_family;
+ }
+
+ if (resp.board_info_valid)
+ ab->qmi.target.board_id = resp.board_info.board_id;
+ else
+ ab->qmi.target.board_id = board_id;
+
+ if (resp.soc_info_valid)
+ ab->qmi.target.soc_id = resp.soc_info.soc_id;
+
+ if (resp.fw_version_info_valid) {
+ ab->qmi.target.fw_version = resp.fw_version_info.fw_version;
+ strscpy(ab->qmi.target.fw_build_timestamp,
+ resp.fw_version_info.fw_build_timestamp,
+ sizeof(ab->qmi.target.fw_build_timestamp));
+ }
+
+ if (resp.fw_build_id_valid)
+ strscpy(ab->qmi.target.fw_build_id, resp.fw_build_id,
+ sizeof(ab->qmi.target.fw_build_id));
+
+ if (resp.dev_mem_info_valid) {
+ for (i = 0; i < ATH12K_QMI_WLFW_MAX_DEV_MEM_NUM_V01; i++) {
+ ab->qmi.dev_mem[i].start =
+ resp.dev_mem[i].start;
+ ab->qmi.dev_mem[i].size =
+ resp.dev_mem[i].size;
+ ath12k_dbg(ab, ATH12K_DBG_QMI,
+ "devmem [%d] start ox%llx size %llu\n", i,
+ ab->qmi.dev_mem[i].start,
+ ab->qmi.dev_mem[i].size);
+ }
+ }
+
+ if (resp.eeprom_caldata_read_timeout_valid) {
+ ab->qmi.target.eeprom_caldata = resp.eeprom_caldata_read_timeout;
+ ath12k_dbg(ab, ATH12K_DBG_QMI, "qmi cal data supported from eeprom\n");
+ }
+
+ ath12k_info(ab, "chip_id 0x%x chip_family 0x%x board_id 0x%x soc_id 0x%x\n",
+ ab->qmi.target.chip_id, ab->qmi.target.chip_family,
+ ab->qmi.target.board_id, ab->qmi.target.soc_id);
+
+ ath12k_info(ab, "fw_version 0x%x fw_build_timestamp %s fw_build_id %s",
+ ab->qmi.target.fw_version,
+ ab->qmi.target.fw_build_timestamp,
+ ab->qmi.target.fw_build_id);
+
+out:
+ return ret;
+}
+
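+/* Stream a file to target memory in segments of at most
+ * QMI_WLANFW_MAX_DATA_SIZE_V01 bytes via repeated BDF download
+ * requests, flagging the final segment with end = 1. EEPROM-backed
+ * cal data is special-cased: no payload is sent and firmware reads
+ * the data by itself.
+ */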
+static int ath12k_qmi_load_file_target_mem(struct ath12k_base *ab,
+ const u8 *data, u32 len, u8 type)
+{
+ struct qmi_wlanfw_bdf_download_req_msg_v01 *req;
+ struct qmi_wlanfw_bdf_download_resp_msg_v01 resp;
+ struct qmi_txn txn = {};
+ const u8 *temp = data;
+ int ret;
+ u32 remaining = len;
+
+ req = kzalloc(sizeof(*req), GFP_KERNEL);
+ if (!req)
+ return -ENOMEM;
+ memset(&resp, 0, sizeof(resp));
+
+ while (remaining) {
+ req->valid = 1;
+ req->file_id_valid = 1;
+ req->file_id = ab->qmi.target.board_id;
+ req->total_size_valid = 1;
+ req->total_size = remaining;
+ req->seg_id_valid = 1;
+ req->data_valid = 1;
+ req->bdf_type = type;
+ req->bdf_type_valid = 1;
+ req->end_valid = 1;
+ req->end = 0;
+
+ if (remaining > QMI_WLANFW_MAX_DATA_SIZE_V01) {
+ req->data_len = QMI_WLANFW_MAX_DATA_SIZE_V01;
+ } else {
+ req->data_len = remaining;
+ req->end = 1;
+ }
+
+ if (type == ATH12K_QMI_FILE_TYPE_EEPROM) {
+ req->data_valid = 0;
+ req->end = 1;
+ req->data_len = ATH12K_QMI_MAX_BDF_FILE_NAME_SIZE;
+ } else {
+ memcpy(req->data, temp, req->data_len);
+ }
+
+ ret = qmi_txn_init(&ab->qmi.handle, &txn,
+ qmi_wlanfw_bdf_download_resp_msg_v01_ei,
+ &resp);
+ if (ret < 0)
+ goto out;
+
+ ath12k_dbg(ab, ATH12K_DBG_QMI, "qmi bdf download req fixed addr type %d\n",
+ type);
+
+ ret = qmi_send_request(&ab->qmi.handle, NULL, &txn,
+ QMI_WLANFW_BDF_DOWNLOAD_REQ_V01,
+ QMI_WLANFW_BDF_DOWNLOAD_REQ_MSG_V01_MAX_LEN,
+ qmi_wlanfw_bdf_download_req_msg_v01_ei, req);
+ if (ret < 0) {
+ qmi_txn_cancel(&txn);
+ goto out;
+ }
+
+ ret = qmi_txn_wait(&txn, msecs_to_jiffies(ATH12K_QMI_WLANFW_TIMEOUT_MS));
+ if (ret < 0)
+ goto out;
+
+ if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
+ ath12k_warn(ab, "qmi BDF download failed, result: %d, err: %d\n",
+ resp.resp.result, resp.resp.error);
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (type == ATH12K_QMI_FILE_TYPE_EEPROM) {
+ remaining = 0;
+ } else {
+ remaining -= req->data_len;
+ temp += req->data_len;
+ req->seg_id++;
+ ath12k_dbg(ab, ATH12K_DBG_QMI,
+ "qmi bdf download request remaining %i\n",
+ remaining);
+ }
+ }
+
+out:
+ kfree(req);
+ return ret;
+}
+
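+/* Fetch the requested board, regdb or calibration file from the
+ * filesystem (or fall back to EEPROM cal data) and push it to the
+ * target via ath12k_qmi_load_file_target_mem().
+ */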
+static int ath12k_qmi_load_bdf_qmi(struct ath12k_base *ab,
+ enum ath12k_qmi_bdf_type type)
+{
+ struct device *dev = ab->dev;
+ char filename[ATH12K_QMI_MAX_BDF_FILE_NAME_SIZE];
+ const struct firmware *fw_entry;
+ struct ath12k_board_data bd;
+ u32 fw_size, file_type;
+ int ret = 0;
+ const u8 *tmp;
+
+ memset(&bd, 0, sizeof(bd));
+
+ switch (type) {
+ case ATH12K_QMI_BDF_TYPE_ELF:
+ ret = ath12k_core_fetch_bdf(ab, &bd);
+ if (ret) {
+ ath12k_warn(ab, "qmi failed to load bdf:\n");
+ goto out;
+ }
+
+ if (bd.len >= SELFMAG && memcmp(bd.data, ELFMAG, SELFMAG) == 0)
+ type = ATH12K_QMI_BDF_TYPE_ELF;
+ else
+ type = ATH12K_QMI_BDF_TYPE_BIN;
+
+ break;
+ case ATH12K_QMI_BDF_TYPE_REGDB:
+ ret = ath12k_core_fetch_board_data_api_1(ab, &bd,
+ ATH12K_REGDB_FILE_NAME);
+ if (ret) {
+ ath12k_warn(ab, "qmi failed to load regdb bin:\n");
+ goto out;
+ }
+ break;
+ case ATH12K_QMI_BDF_TYPE_CALIBRATION:
+ if (ab->qmi.target.eeprom_caldata) {
+ file_type = ATH12K_QMI_FILE_TYPE_EEPROM;
+ tmp = filename;
+ fw_size = ATH12K_QMI_MAX_BDF_FILE_NAME_SIZE;
+ } else {
+ file_type = ATH12K_QMI_FILE_TYPE_CALDATA;
+
+ /* cal-<bus>-<id>.bin */
+ snprintf(filename, sizeof(filename), "cal-%s-%s.bin",
+ ath12k_bus_str(ab->hif.bus), dev_name(dev));
+ fw_entry = ath12k_core_firmware_request(ab, filename);
+ if (!IS_ERR(fw_entry))
+ goto success;
+
+ fw_entry = ath12k_core_firmware_request(ab,
+ ATH12K_DEFAULT_CAL_FILE);
+ if (IS_ERR(fw_entry)) {
+ ret = PTR_ERR(fw_entry);
+ ath12k_warn(ab,
+ "qmi failed to load CAL data file:%s\n",
+ filename);
+ goto out;
+ }
+
+success:
+ fw_size = min_t(u32, ab->hw_params->fw.board_size,
+ fw_entry->size);
+ tmp = fw_entry->data;
+ }
+ ret = ath12k_qmi_load_file_target_mem(ab, tmp, fw_size, file_type);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to load caldata\n");
+ goto out_qmi_cal;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_QMI, "qmi caldata downloaded: type: %u\n",
+ file_type);
+
+out_qmi_cal:
+ if (!ab->qmi.target.eeprom_caldata)
+ release_firmware(fw_entry);
+ return ret;
+ default:
+ ath12k_warn(ab, "unknown file type for load %d", type);
+ goto out;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_QMI, "qmi bdf_type %d\n", type);
+
+ fw_size = min_t(u32, ab->hw_params->fw.board_size, bd.len);
+
+ ret = ath12k_qmi_load_file_target_mem(ab, bd.data, fw_size, type);
+ if (ret < 0)
+ ath12k_warn(ab, "qmi failed to load bdf file\n");
+
+out:
+ ath12k_core_free_bdf(ab, &bd);
+ ath12k_dbg(ab, ATH12K_DBG_QMI, "qmi BDF download sequence completed\n");
+
+ return ret;
+}
+
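+/* Load the M3 firmware image into a coherent DMA buffer once; the
+ * buffer is reused on subsequent calls and only released from
+ * ath12k_qmi_m3_free().
+ */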
+static int ath12k_qmi_m3_load(struct ath12k_base *ab)
+{
+ struct m3_mem_region *m3_mem = &ab->qmi.m3_mem;
+ const struct firmware *fw;
+ char path[100];
+ int ret;
+
+ if (m3_mem->vaddr || m3_mem->size)
+ return 0;
+
+ fw = ath12k_core_firmware_request(ab, ATH12K_M3_FILE);
+ if (IS_ERR(fw)) {
+ ret = PTR_ERR(fw);
+ ath12k_core_create_firmware_path(ab, ATH12K_M3_FILE,
+ path, sizeof(path));
+ ath12k_err(ab, "failed to load %s: %d\n", path, ret);
+ return ret;
+ }
+
+ m3_mem->vaddr = dma_alloc_coherent(ab->dev,
+ fw->size, &m3_mem->paddr,
+ GFP_KERNEL);
+ if (!m3_mem->vaddr) {
+ ath12k_err(ab, "failed to allocate memory for M3 with size %zu\n",
+ fw->size);
+ release_firmware(fw);
+ return -ENOMEM;
+ }
+
+ memcpy(m3_mem->vaddr, fw->data, fw->size);
+ m3_mem->size = fw->size;
+ release_firmware(fw);
+
+ return 0;
+}
+
+static void ath12k_qmi_m3_free(struct ath12k_base *ab)
+{
+ struct m3_mem_region *m3_mem = &ab->qmi.m3_mem;
+
+ if (!m3_mem->vaddr)
+ return;
+
+ dma_free_coherent(ab->dev, m3_mem->size,
+ m3_mem->vaddr, m3_mem->paddr);
+ m3_mem->vaddr = NULL;
+}
+
+static int ath12k_qmi_wlanfw_m3_info_send(struct ath12k_base *ab)
+{
+ struct m3_mem_region *m3_mem = &ab->qmi.m3_mem;
+ struct qmi_wlanfw_m3_info_req_msg_v01 req;
+ struct qmi_wlanfw_m3_info_resp_msg_v01 resp;
+ struct qmi_txn txn = {};
+ int ret = 0;
+
+ memset(&req, 0, sizeof(req));
+ memset(&resp, 0, sizeof(resp));
+
+ ret = ath12k_qmi_m3_load(ab);
+ if (ret) {
+ ath12k_err(ab, "failed to load m3 firmware: %d", ret);
+ return ret;
+ }
+
+ req.addr = m3_mem->paddr;
+ req.size = m3_mem->size;
+
+ ret = qmi_txn_init(&ab->qmi.handle, &txn,
+ qmi_wlanfw_m3_info_resp_msg_v01_ei, &resp);
+ if (ret < 0)
+ goto out;
+
+ ret = qmi_send_request(&ab->qmi.handle, NULL, &txn,
+ QMI_WLANFW_M3_INFO_REQ_V01,
+ QMI_WLANFW_M3_INFO_REQ_MSG_V01_MAX_MSG_LEN,
+ qmi_wlanfw_m3_info_req_msg_v01_ei, &req);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to send M3 information request, err = %d\n",
+ ret);
+ goto out;
+ }
+
+ ret = qmi_txn_wait(&txn, msecs_to_jiffies(ATH12K_QMI_WLANFW_TIMEOUT_MS));
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed M3 information request %d\n", ret);
+ goto out;
+ }
+
+ if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
+ ath12k_warn(ab, "qmi M3 info request failed, result: %d, err: %d\n",
+ resp.resp.result, resp.resp.error);
+ ret = -EINVAL;
+ goto out;
+ }
+out:
+ return ret;
+}
+
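+/* Request a firmware mode switch. When turning firmware off,
+ * -ENETRESET from qmi_txn_wait() means the WLFW service is already
+ * gone and is treated as success.
+ */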
+static int ath12k_qmi_wlanfw_mode_send(struct ath12k_base *ab,
+ u32 mode)
+{
+ struct qmi_wlanfw_wlan_mode_req_msg_v01 req;
+ struct qmi_wlanfw_wlan_mode_resp_msg_v01 resp;
+ struct qmi_txn txn = {};
+ int ret = 0;
+
+ memset(&req, 0, sizeof(req));
+ memset(&resp, 0, sizeof(resp));
+
+ req.mode = mode;
+ req.hw_debug_valid = 1;
+ req.hw_debug = 0;
+
+ ret = qmi_txn_init(&ab->qmi.handle, &txn,
+ qmi_wlanfw_wlan_mode_resp_msg_v01_ei, &resp);
+ if (ret < 0)
+ goto out;
+
+ ret = qmi_send_request(&ab->qmi.handle, NULL, &txn,
+ QMI_WLANFW_WLAN_MODE_REQ_V01,
+ QMI_WLANFW_WLAN_MODE_REQ_MSG_V01_MAX_LEN,
+ qmi_wlanfw_wlan_mode_req_msg_v01_ei, &req);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to send mode request, mode: %d, err = %d\n",
+ mode, ret);
+ goto out;
+ }
+
+ ret = qmi_txn_wait(&txn, msecs_to_jiffies(ATH12K_QMI_WLANFW_TIMEOUT_MS));
+ if (ret < 0) {
+ if (mode == ATH12K_FIRMWARE_MODE_OFF && ret == -ENETRESET) {
+ ath12k_warn(ab, "WLFW service is dis-connected\n");
+ return 0;
+ }
+ ath12k_warn(ab, "qmi failed set mode request, mode: %d, err = %d\n",
+ mode, ret);
+ goto out;
+ }
+
+ if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
+ ath12k_warn(ab, "Mode request failed, mode: %d, result: %d err: %d\n",
+ mode, resp.resp.result, resp.resp.error);
+ ret = -EINVAL;
+ goto out;
+ }
+
+out:
+ return ret;
+}
+
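+/* Push the host's copy-engine pipe configuration, service-to-pipe
+ * map and (when supported) shadow register v3 list to firmware
+ * before it is moved to mission mode.
+ */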
+static int ath12k_qmi_wlanfw_wlan_cfg_send(struct ath12k_base *ab)
+{
+ struct qmi_wlanfw_wlan_cfg_req_msg_v01 *req;
+ struct qmi_wlanfw_wlan_cfg_resp_msg_v01 resp;
+ struct ce_pipe_config *ce_cfg;
+ struct service_to_pipe *svc_cfg;
+ struct qmi_txn txn = {};
+ int ret = 0, pipe_num;
+
+ ce_cfg = (struct ce_pipe_config *)ab->qmi.ce_cfg.tgt_ce;
+ svc_cfg = (struct service_to_pipe *)ab->qmi.ce_cfg.svc_to_ce_map;
+
+ req = kzalloc(sizeof(*req), GFP_KERNEL);
+ if (!req)
+ return -ENOMEM;
+
+ memset(&resp, 0, sizeof(resp));
+
+ req->host_version_valid = 1;
+ strscpy(req->host_version, ATH12K_HOST_VERSION_STRING,
+ sizeof(req->host_version));
+
+ req->tgt_cfg_valid = 1;
+ /* Number of CE configurations */
+ req->tgt_cfg_len = ab->qmi.ce_cfg.tgt_ce_len;
+ for (pipe_num = 0; pipe_num < req->tgt_cfg_len; pipe_num++) {
+ req->tgt_cfg[pipe_num].pipe_num = ce_cfg[pipe_num].pipenum;
+ req->tgt_cfg[pipe_num].pipe_dir = ce_cfg[pipe_num].pipedir;
+ req->tgt_cfg[pipe_num].nentries = ce_cfg[pipe_num].nentries;
+ req->tgt_cfg[pipe_num].nbytes_max = ce_cfg[pipe_num].nbytes_max;
+ req->tgt_cfg[pipe_num].flags = ce_cfg[pipe_num].flags;
+ }
+
+ req->svc_cfg_valid = 1;
+ /* Number of service-to-CE configurations */
+ req->svc_cfg_len = ab->qmi.ce_cfg.svc_to_ce_map_len;
+ for (pipe_num = 0; pipe_num < req->svc_cfg_len; pipe_num++) {
+ req->svc_cfg[pipe_num].service_id = svc_cfg[pipe_num].service_id;
+ req->svc_cfg[pipe_num].pipe_dir = svc_cfg[pipe_num].pipedir;
+ req->svc_cfg[pipe_num].pipe_num = svc_cfg[pipe_num].pipenum;
+ }
+
+ /* set shadow v3 configuration */
+ if (ab->hw_params->supports_shadow_regs) {
+ req->shadow_reg_v3_valid = 1;
+ req->shadow_reg_v3_len = min_t(u32,
+ ab->qmi.ce_cfg.shadow_reg_v3_len,
+ QMI_WLANFW_MAX_NUM_SHADOW_REG_V3_V01);
+ memcpy(&req->shadow_reg_v3, ab->qmi.ce_cfg.shadow_reg_v3,
+ sizeof(u32) * req->shadow_reg_v3_len);
+ } else {
+ req->shadow_reg_v3_valid = 0;
+ }
+
+ ret = qmi_txn_init(&ab->qmi.handle, &txn,
+ qmi_wlanfw_wlan_cfg_resp_msg_v01_ei, &resp);
+ if (ret < 0)
+ goto out;
+
+ ret = qmi_send_request(&ab->qmi.handle, NULL, &txn,
+ QMI_WLANFW_WLAN_CFG_REQ_V01,
+ QMI_WLANFW_WLAN_CFG_REQ_MSG_V01_MAX_LEN,
+ qmi_wlanfw_wlan_cfg_req_msg_v01_ei, req);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to send wlan config request, err = %d\n",
+ ret);
+ goto out;
+ }
+
+ ret = qmi_txn_wait(&txn, msecs_to_jiffies(ATH12K_QMI_WLANFW_TIMEOUT_MS));
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed wlan config request, err = %d\n", ret);
+ goto out;
+ }
+
+ if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
+ ath12k_warn(ab, "qmi wlan config request failed, result: %d, err: %d\n",
+ resp.resp.result, resp.resp.error);
+ ret = -EINVAL;
+ goto out;
+ }
+
+out:
+ kfree(req);
+ return ret;
+}
+
+void ath12k_qmi_firmware_stop(struct ath12k_base *ab)
+{
+ int ret;
+
+ ret = ath12k_qmi_wlanfw_mode_send(ab, ATH12K_FIRMWARE_MODE_OFF);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to send wlan mode off\n");
+ return;
+ }
+}
+
+int ath12k_qmi_firmware_start(struct ath12k_base *ab,
+ u32 mode)
+{
+ int ret;
+
+ ret = ath12k_qmi_wlanfw_wlan_cfg_send(ab);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to send wlan cfg:%d\n", ret);
+ return ret;
+ }
+
+ ret = ath12k_qmi_wlanfw_mode_send(ab, mode);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to send wlan fw mode:%d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int
+ath12k_qmi_driver_event_post(struct ath12k_qmi *qmi,
+ enum ath12k_qmi_event_type type,
+ void *data)
+{
+ struct ath12k_qmi_driver_event *event;
+
+ event = kzalloc(sizeof(*event), GFP_ATOMIC);
+ if (!event)
+ return -ENOMEM;
+
+ event->type = type;
+ event->data = data;
+
+ spin_lock(&qmi->event_lock);
+ list_add_tail(&event->list, &qmi->event_list);
+ spin_unlock(&qmi->event_lock);
+
+ queue_work(qmi->event_wq, &qmi->event_work);
+
+ return 0;
+}
+
+static int ath12k_qmi_event_server_arrive(struct ath12k_qmi *qmi)
+{
+ struct ath12k_base *ab = qmi->ab;
+ int ret;
+
+ ret = ath12k_qmi_fw_ind_register_send(ab);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to send FW indication QMI:%d\n", ret);
+ return ret;
+ }
+
+ ret = ath12k_qmi_host_cap_send(ab);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to send host cap QMI:%d\n", ret);
+ return ret;
+ }
+
+ return ret;
+}
+
+static int ath12k_qmi_event_mem_request(struct ath12k_qmi *qmi)
+{
+ struct ath12k_base *ab = qmi->ab;
+ int ret;
+
+ ret = ath12k_qmi_respond_fw_mem_request(ab);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to respond fw mem req:%d\n", ret);
+ return ret;
+ }
+
+ return ret;
+}
+
+static int ath12k_qmi_event_load_bdf(struct ath12k_qmi *qmi)
+{
+ struct ath12k_base *ab = qmi->ab;
+ int ret;
+
+ ret = ath12k_qmi_request_target_cap(ab);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to req target capabilities:%d\n", ret);
+ return ret;
+ }
+
+ ret = ath12k_qmi_load_bdf_qmi(ab, ATH12K_QMI_BDF_TYPE_REGDB);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to load regdb file:%d\n", ret);
+ return ret;
+ }
+
+ ret = ath12k_qmi_load_bdf_qmi(ab, ATH12K_QMI_BDF_TYPE_ELF);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to load board data file:%d\n", ret);
+ return ret;
+ }
+
+ if (ab->hw_params->download_calib) {
+ ret = ath12k_qmi_load_bdf_qmi(ab, ATH12K_QMI_BDF_TYPE_CALIBRATION);
+ if (ret < 0)
+ ath12k_warn(ab, "qmi failed to load calibrated data :%d\n", ret);
+ }
+
+ ret = ath12k_qmi_wlanfw_m3_info_send(ab);
+ if (ret < 0) {
+ ath12k_warn(ab, "qmi failed to send m3 info req:%d\n", ret);
+ return ret;
+ }
+
+ return ret;
+}
+
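+/* Indication callbacks below run from the QMI receive path; they
+ * handle the indication and queue a driver event so the remaining
+ * work runs in the event workqueue.
+ */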
+static void ath12k_qmi_msg_mem_request_cb(struct qmi_handle *qmi_hdl,
+ struct sockaddr_qrtr *sq,
+ struct qmi_txn *txn,
+ const void *data)
+{
+ struct ath12k_qmi *qmi = container_of(qmi_hdl, struct ath12k_qmi, handle);
+ struct ath12k_base *ab = qmi->ab;
+ const struct qmi_wlanfw_request_mem_ind_msg_v01 *msg = data;
+ int i, ret;
+
+ ath12k_dbg(ab, ATH12K_DBG_QMI, "qmi firmware request memory request\n");
+
+ if (msg->mem_seg_len == 0 ||
+ msg->mem_seg_len > ATH12K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01) {
+ ath12k_warn(ab, "invalid memory segment length: %u\n",
+ msg->mem_seg_len);
+ return;
+ }
+
+ ab->qmi.mem_seg_count = msg->mem_seg_len;
+
+ for (i = 0; i < ab->qmi.mem_seg_count; i++) {
+ ab->qmi.target_mem[i].type = msg->mem_seg[i].type;
+ ab->qmi.target_mem[i].size = msg->mem_seg[i].size;
+ ath12k_dbg(ab, ATH12K_DBG_QMI, "qmi mem seg type %d size %d\n",
+ msg->mem_seg[i].type, msg->mem_seg[i].size);
+ }
+
+ ret = ath12k_qmi_alloc_target_mem_chunk(ab);
+ if (ret) {
+ ath12k_warn(ab, "qmi failed to alloc target memory: %d\n",
+ ret);
+ return;
+ }
+
+ ath12k_qmi_driver_event_post(qmi, ATH12K_QMI_EVENT_REQUEST_MEM, NULL);
+}
+
+static void ath12k_qmi_msg_mem_ready_cb(struct qmi_handle *qmi_hdl,
+ struct sockaddr_qrtr *sq,
+ struct qmi_txn *txn,
+ const void *decoded)
+{
+ struct ath12k_qmi *qmi = container_of(qmi_hdl, struct ath12k_qmi, handle);
+ struct ath12k_base *ab = qmi->ab;
+
+ ath12k_dbg(ab, ATH12K_DBG_QMI, "qmi firmware memory ready indication\n");
+ ath12k_qmi_driver_event_post(qmi, ATH12K_QMI_EVENT_FW_MEM_READY, NULL);
+}
+
+static void ath12k_qmi_msg_fw_ready_cb(struct qmi_handle *qmi_hdl,
+ struct sockaddr_qrtr *sq,
+ struct qmi_txn *txn,
+ const void *decoded)
+{
+ struct ath12k_qmi *qmi = container_of(qmi_hdl, struct ath12k_qmi, handle);
+ struct ath12k_base *ab = qmi->ab;
+
+ ath12k_dbg(ab, ATH12K_DBG_QMI, "qmi firmware ready\n");
+ ath12k_qmi_driver_event_post(qmi, ATH12K_QMI_EVENT_FW_READY, NULL);
+}
+
+static const struct qmi_msg_handler ath12k_qmi_msg_handlers[] = {
+ {
+ .type = QMI_INDICATION,
+ .msg_id = QMI_WLFW_REQUEST_MEM_IND_V01,
+ .ei = qmi_wlanfw_request_mem_ind_msg_v01_ei,
+ .decoded_size = sizeof(struct qmi_wlanfw_request_mem_ind_msg_v01),
+ .fn = ath12k_qmi_msg_mem_request_cb,
+ },
+ {
+ .type = QMI_INDICATION,
+ .msg_id = QMI_WLFW_FW_MEM_READY_IND_V01,
+ .ei = qmi_wlanfw_mem_ready_ind_msg_v01_ei,
+ .decoded_size = sizeof(struct qmi_wlanfw_fw_mem_ready_ind_msg_v01),
+ .fn = ath12k_qmi_msg_mem_ready_cb,
+ },
+ {
+ .type = QMI_INDICATION,
+ .msg_id = QMI_WLFW_FW_READY_IND_V01,
+ .ei = qmi_wlanfw_fw_ready_ind_msg_v01_ei,
+ .decoded_size = sizeof(struct qmi_wlanfw_fw_ready_ind_msg_v01),
+ .fn = ath12k_qmi_msg_fw_ready_cb,
+ },
+};
+
+static int ath12k_qmi_ops_new_server(struct qmi_handle *qmi_hdl,
+ struct qmi_service *service)
+{
+ struct ath12k_qmi *qmi = container_of(qmi_hdl, struct ath12k_qmi, handle);
+ struct ath12k_base *ab = qmi->ab;
+ struct sockaddr_qrtr *sq = &qmi->sq;
+ int ret;
+
+ sq->sq_family = AF_QIPCRTR;
+ sq->sq_node = service->node;
+ sq->sq_port = service->port;
+
+ ret = kernel_connect(qmi_hdl->sock, (struct sockaddr *)sq,
+ sizeof(*sq), 0);
+ if (ret) {
+ ath12k_warn(ab, "qmi failed to connect to remote service %d\n", ret);
+ return ret;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_QMI, "qmi wifi fw qmi service connected\n");
+ ath12k_qmi_driver_event_post(qmi, ATH12K_QMI_EVENT_SERVER_ARRIVE, NULL);
+
+ return ret;
+}
+
+static void ath12k_qmi_ops_del_server(struct qmi_handle *qmi_hdl,
+ struct qmi_service *service)
+{
+ struct ath12k_qmi *qmi = container_of(qmi_hdl, struct ath12k_qmi, handle);
+ struct ath12k_base *ab = qmi->ab;
+
+ ath12k_dbg(ab, ATH12K_DBG_QMI, "qmi wifi fw del server\n");
+ ath12k_qmi_driver_event_post(qmi, ATH12K_QMI_EVENT_SERVER_EXIT, NULL);
+}
+
+static const struct qmi_ops ath12k_qmi_ops = {
+ .new_server = ath12k_qmi_ops_new_server,
+ .del_server = ath12k_qmi_ops_del_server,
+};
+
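+/* Process queued QMI events. The event lock is dropped while an
+ * event is handled so the handlers may queue further events; a
+ * failing handler sets ATH12K_FLAG_QMI_FAIL rather than aborting
+ * the worker.
+ */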
+static void ath12k_qmi_driver_event_work(struct work_struct *work)
+{
+ struct ath12k_qmi *qmi = container_of(work, struct ath12k_qmi,
+ event_work);
+ struct ath12k_qmi_driver_event *event;
+ struct ath12k_base *ab = qmi->ab;
+ int ret;
+
+ spin_lock(&qmi->event_lock);
+ while (!list_empty(&qmi->event_list)) {
+ event = list_first_entry(&qmi->event_list,
+ struct ath12k_qmi_driver_event, list);
+ list_del(&event->list);
+ spin_unlock(&qmi->event_lock);
+
+ /* Skip further handling once the device is being
+ * unregistered, but still free the queued events.
+ */
+ if (test_bit(ATH12K_FLAG_UNREGISTERING, &ab->dev_flags))
+ goto skip;
+
+ switch (event->type) {
+ case ATH12K_QMI_EVENT_SERVER_ARRIVE:
+ ret = ath12k_qmi_event_server_arrive(qmi);
+ if (ret < 0)
+ set_bit(ATH12K_FLAG_QMI_FAIL, &ab->dev_flags);
+ break;
+ case ATH12K_QMI_EVENT_SERVER_EXIT:
+ set_bit(ATH12K_FLAG_CRASH_FLUSH, &ab->dev_flags);
+ set_bit(ATH12K_FLAG_RECOVERY, &ab->dev_flags);
+ break;
+ case ATH12K_QMI_EVENT_REQUEST_MEM:
+ ret = ath12k_qmi_event_mem_request(qmi);
+ if (ret < 0)
+ set_bit(ATH12K_FLAG_QMI_FAIL, &ab->dev_flags);
+ break;
+ case ATH12K_QMI_EVENT_FW_MEM_READY:
+ ret = ath12k_qmi_event_load_bdf(qmi);
+ if (ret < 0)
+ set_bit(ATH12K_FLAG_QMI_FAIL, &ab->dev_flags);
+ break;
+ case ATH12K_QMI_EVENT_FW_READY:
+ clear_bit(ATH12K_FLAG_QMI_FAIL, &ab->dev_flags);
+ if (test_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags)) {
+ ath12k_hal_dump_srng_stats(ab);
+ queue_work(ab->workqueue, &ab->restart_work);
+ break;
+ }
+
+ clear_bit(ATH12K_FLAG_CRASH_FLUSH,
+ &ab->dev_flags);
+ clear_bit(ATH12K_FLAG_RECOVERY, &ab->dev_flags);
+ ath12k_core_qmi_firmware_ready(ab);
+ set_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags);
+
+ break;
+ default:
+ ath12k_warn(ab, "invalid event type: %d", event->type);
+ break;
+ }
+skip:
+ kfree(event);
+ spin_lock(&qmi->event_lock);
+ }
+ spin_unlock(&qmi->event_lock);
+}
+
+int ath12k_qmi_init_service(struct ath12k_base *ab)
+{
+ int ret;
+
+ memset(&ab->qmi.target, 0, sizeof(ab->qmi.target));
+ memset(&ab->qmi.target_mem, 0, sizeof(ab->qmi.target_mem));
+ ab->qmi.ab = ab;
+
+ ab->qmi.target_mem_mode = ATH12K_QMI_TARGET_MEM_MODE_DEFAULT;
+ ret = qmi_handle_init(&ab->qmi.handle, ATH12K_QMI_RESP_LEN_MAX,
+ &ath12k_qmi_ops, ath12k_qmi_msg_handlers);
+ if (ret < 0) {
+ ath12k_warn(ab, "failed to initialize qmi handle\n");
+ return ret;
+ }
+
+ ab->qmi.event_wq = alloc_workqueue("ath12k_qmi_driver_event",
+ WQ_UNBOUND, 1);
+ if (!ab->qmi.event_wq) {
+ ath12k_err(ab, "failed to allocate workqueue\n");
+ return -ENOMEM;
+ }
+
+ INIT_LIST_HEAD(&ab->qmi.event_list);
+ spin_lock_init(&ab->qmi.event_lock);
+ INIT_WORK(&ab->qmi.event_work, ath12k_qmi_driver_event_work);
+
+ ret = qmi_add_lookup(&ab->qmi.handle, ATH12K_QMI_WLFW_SERVICE_ID_V01,
+ ATH12K_QMI_WLFW_SERVICE_VERS_V01,
+ ab->qmi.service_ins_id);
+ if (ret < 0) {
+ ath12k_warn(ab, "failed to add qmi lookup\n");
+ destroy_workqueue(ab->qmi.event_wq);
+ return ret;
+ }
+
+ return ret;
+}
+
+void ath12k_qmi_deinit_service(struct ath12k_base *ab)
+{
+ qmi_handle_release(&ab->qmi.handle);
+ cancel_work_sync(&ab->qmi.event_work);
+ destroy_workqueue(ab->qmi.event_wq);
+ ath12k_qmi_m3_free(ab);
+ ath12k_qmi_free_target_mem_chunk(ab);
+}
diff --git a/drivers/net/wireless/ath/ath12k/qmi.h b/drivers/net/wireless/ath/ath12k/qmi.h
new file mode 100644
index 000000000000..ad87f19903db
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/qmi.h
@@ -0,0 +1,569 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_QMI_H
+#define ATH12K_QMI_H
+
+#include <linux/mutex.h>
+#include <linux/soc/qcom/qmi.h>
+
+#define ATH12K_HOST_VERSION_STRING "WIN"
+#define ATH12K_QMI_WLANFW_TIMEOUT_MS 10000
+#define ATH12K_QMI_MAX_BDF_FILE_NAME_SIZE 64
+#define ATH12K_QMI_CALDB_ADDRESS 0x4BA00000
+#define ATH12K_QMI_WLANFW_MAX_BUILD_ID_LEN_V01 128
+#define ATH12K_QMI_WLFW_NODE_ID_BASE 0x07
+#define ATH12K_QMI_WLFW_SERVICE_ID_V01 0x45
+#define ATH12K_QMI_WLFW_SERVICE_VERS_V01 0x01
+#define ATH12K_QMI_WLFW_SERVICE_INS_ID_V01 0x02
+#define ATH12K_QMI_WLFW_SERVICE_INS_ID_V01_WCN7850 0x1
+
+#define ATH12K_QMI_WLFW_SERVICE_INS_ID_V01_QCN9274 0x07
+#define ATH12K_QMI_WLANFW_MAX_TIMESTAMP_LEN_V01 32
+#define ATH12K_QMI_RESP_LEN_MAX 8192
+#define ATH12K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01 52
+#define ATH12K_QMI_CALDB_SIZE 0x480000
+#define ATH12K_QMI_BDF_EXT_STR_LENGTH 0x20
+#define ATH12K_QMI_FW_MEM_REQ_SEGMENT_CNT 3
+#define ATH12K_QMI_WLFW_MAX_DEV_MEM_NUM_V01 4
+#define ATH12K_QMI_DEVMEM_CMEM_INDEX 0
+
+#define QMI_WLFW_REQUEST_MEM_IND_V01 0x0035
+#define QMI_WLFW_FW_MEM_READY_IND_V01 0x0037
+#define QMI_WLFW_FW_READY_IND_V01 0x0038
+
+#define QMI_WLANFW_MAX_DATA_SIZE_V01 6144
+#define ATH12K_FIRMWARE_MODE_OFF 4
+#define ATH12K_QMI_TARGET_MEM_MODE_DEFAULT 0
+
+#define ATH12K_BOARD_ID_DEFAULT 0xFF
+
+struct ath12k_base;
+
+enum ath12k_qmi_file_type {
+ ATH12K_QMI_FILE_TYPE_BDF_GOLDEN = 0,
+ ATH12K_QMI_FILE_TYPE_CALDATA = 2,
+ ATH12K_QMI_FILE_TYPE_EEPROM = 3,
+ ATH12K_QMI_MAX_FILE_TYPE = 4,
+};
+
+enum ath12k_qmi_bdf_type {
+ ATH12K_QMI_BDF_TYPE_BIN = 0,
+ ATH12K_QMI_BDF_TYPE_ELF = 1,
+ ATH12K_QMI_BDF_TYPE_REGDB = 4,
+ ATH12K_QMI_BDF_TYPE_CALIBRATION = 5,
+};
+
+enum ath12k_qmi_event_type {
+ ATH12K_QMI_EVENT_SERVER_ARRIVE,
+ ATH12K_QMI_EVENT_SERVER_EXIT,
+ ATH12K_QMI_EVENT_REQUEST_MEM,
+ ATH12K_QMI_EVENT_FW_MEM_READY,
+ ATH12K_QMI_EVENT_FW_READY,
+ ATH12K_QMI_EVENT_REGISTER_DRIVER,
+ ATH12K_QMI_EVENT_UNREGISTER_DRIVER,
+ ATH12K_QMI_EVENT_RECOVERY,
+ ATH12K_QMI_EVENT_FORCE_FW_ASSERT,
+ ATH12K_QMI_EVENT_POWER_UP,
+ ATH12K_QMI_EVENT_POWER_DOWN,
+ ATH12K_QMI_EVENT_MAX,
+};
+
+struct ath12k_qmi_driver_event {
+ struct list_head list;
+ enum ath12k_qmi_event_type type;
+ void *data;
+};
+
+struct ath12k_qmi_ce_cfg {
+ const struct ce_pipe_config *tgt_ce;
+ int tgt_ce_len;
+ const struct service_to_pipe *svc_to_ce_map;
+ int svc_to_ce_map_len;
+ const u8 *shadow_reg;
+ int shadow_reg_len;
+ u32 *shadow_reg_v3;
+ int shadow_reg_v3_len;
+};
+
+struct ath12k_qmi_event_msg {
+ struct list_head list;
+ enum ath12k_qmi_event_type type;
+};
+
+struct target_mem_chunk {
+ u32 size;
+ u32 type;
+ dma_addr_t paddr;
+ union {
+ void __iomem *ioaddr;
+ void *addr;
+ } v;
+};
+
+struct target_info {
+ u32 chip_id;
+ u32 chip_family;
+ u32 board_id;
+ u32 soc_id;
+ u32 fw_version;
+ u32 eeprom_caldata;
+ char fw_build_timestamp[ATH12K_QMI_WLANFW_MAX_TIMESTAMP_LEN_V01 + 1];
+ char fw_build_id[ATH12K_QMI_WLANFW_MAX_BUILD_ID_LEN_V01 + 1];
+ char bdf_ext[ATH12K_QMI_BDF_EXT_STR_LENGTH];
+};
+
+struct m3_mem_region {
+ u32 size;
+ dma_addr_t paddr;
+ void *vaddr;
+};
+
+struct dev_mem_info {
+ u64 start;
+ u64 size;
+};
+
+struct ath12k_qmi {
+ struct ath12k_base *ab;
+ struct qmi_handle handle;
+ struct sockaddr_qrtr sq;
+ struct work_struct event_work;
+ struct workqueue_struct *event_wq;
+ struct list_head event_list;
+ spinlock_t event_lock; /* spinlock for qmi event list */
+ struct ath12k_qmi_ce_cfg ce_cfg;
+ struct target_mem_chunk target_mem[ATH12K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01];
+ u32 mem_seg_count;
+ u32 target_mem_mode;
+ bool target_mem_delayed;
+ u8 cal_done;
+ struct target_info target;
+ struct m3_mem_region m3_mem;
+ unsigned int service_ins_id;
+ struct dev_mem_info dev_mem[ATH12K_QMI_WLFW_MAX_DEV_MEM_NUM_V01];
+};
+
+#define QMI_WLANFW_HOST_CAP_REQ_MSG_V01_MAX_LEN 261
+#define QMI_WLANFW_HOST_CAP_REQ_V01 0x0034
+#define QMI_WLANFW_HOST_CAP_RESP_MSG_V01_MAX_LEN 7
+#define QMI_WLFW_HOST_CAP_RESP_V01 0x0034
+#define QMI_WLFW_MAX_NUM_GPIO_V01 32
+#define QMI_WLANFW_MAX_PLATFORM_NAME_LEN_V01 64
+#define QMI_WLANFW_MAX_HOST_DDR_RANGE_SIZE_V01 3
+
+struct qmi_wlanfw_host_ddr_range {
+ u64 start;
+ u64 size;
+};
+
+enum ath12k_qmi_target_mem {
+ HOST_DDR_REGION_TYPE = 0x1,
+ BDF_MEM_REGION_TYPE = 0x2,
+ M3_DUMP_REGION_TYPE = 0x3,
+ CALDB_MEM_REGION_TYPE = 0x4,
+ PAGEABLE_MEM_REGION_TYPE = 0x9,
+};
+
+enum qmi_wlanfw_host_build_type {
+ WLANFW_HOST_BUILD_TYPE_ENUM_MIN_VAL_V01 = INT_MIN,
+ QMI_WLANFW_HOST_BUILD_TYPE_UNSPECIFIED_V01 = 0,
+ QMI_WLANFW_HOST_BUILD_TYPE_PRIMARY_V01 = 1,
+ QMI_WLANFW_HOST_BUILD_TYPE_SECONDARY_V01 = 2,
+ WLANFW_HOST_BUILD_TYPE_ENUM_MAX_VAL_V01 = INT_MAX,
+};
+
+#define QMI_WLFW_MAX_NUM_MLO_CHIPS_V01 3
+#define QMI_WLFW_MAX_NUM_MLO_LINKS_PER_CHIP_V01 2
+
+struct wlfw_host_mlo_chip_info_s_v01 {
+ u8 chip_id;
+ u8 num_local_links;
+ u8 hw_link_id[QMI_WLFW_MAX_NUM_MLO_LINKS_PER_CHIP_V01];
+ u8 valid_mlo_link_id[QMI_WLFW_MAX_NUM_MLO_LINKS_PER_CHIP_V01];
+};
+
+enum ath12k_qmi_cnss_feature {
+ CNSS_FEATURE_MIN_ENUM_VAL_V01 = INT_MIN,
+ CNSS_QDSS_CFG_MISS_V01 = 3,
+ CNSS_MAX_FEATURE_V01 = 64,
+ CNSS_FEATURE_MAX_ENUM_VAL_V01 = INT_MAX,
+};
+
+struct qmi_wlanfw_host_cap_req_msg_v01 {
+ u8 num_clients_valid;
+ u32 num_clients;
+ u8 wake_msi_valid;
+ u32 wake_msi;
+ u8 gpios_valid;
+ u32 gpios_len;
+ u32 gpios[QMI_WLFW_MAX_NUM_GPIO_V01];
+ u8 nm_modem_valid;
+ u8 nm_modem;
+ u8 bdf_support_valid;
+ u8 bdf_support;
+ u8 bdf_cache_support_valid;
+ u8 bdf_cache_support;
+ u8 m3_support_valid;
+ u8 m3_support;
+ u8 m3_cache_support_valid;
+ u8 m3_cache_support;
+ u8 cal_filesys_support_valid;
+ u8 cal_filesys_support;
+ u8 cal_cache_support_valid;
+ u8 cal_cache_support;
+ u8 cal_done_valid;
+ u8 cal_done;
+ u8 mem_bucket_valid;
+ u32 mem_bucket;
+ u8 mem_cfg_mode_valid;
+ u8 mem_cfg_mode;
+ u8 cal_duration_valid;
+ u16 cal_duration;
+ u8 platform_name_valid;
+ char platform_name[QMI_WLANFW_MAX_PLATFORM_NAME_LEN_V01 + 1];
+ u8 ddr_range_valid;
+ struct qmi_wlanfw_host_ddr_range ddr_range[QMI_WLANFW_MAX_HOST_DDR_RANGE_SIZE_V01];
+ u8 host_build_type_valid;
+ enum qmi_wlanfw_host_build_type host_build_type;
+ u8 mlo_capable_valid;
+ u8 mlo_capable;
+ u8 mlo_chip_id_valid;
+ u16 mlo_chip_id;
+ u8 mlo_group_id_valid;
+ u8 mlo_group_id;
+ u8 max_mlo_peer_valid;
+ u16 max_mlo_peer;
+ u8 mlo_num_chips_valid;
+ u8 mlo_num_chips;
+ u8 mlo_chip_info_valid;
+ struct wlfw_host_mlo_chip_info_s_v01 mlo_chip_info[QMI_WLFW_MAX_NUM_MLO_CHIPS_V01];
+ u8 feature_list_valid;
+ u64 feature_list;
+};
+
+struct qmi_wlanfw_host_cap_resp_msg_v01 {
+ struct qmi_response_type_v01 resp;
+};
+
+#define QMI_WLANFW_IND_REGISTER_REQ_MSG_V01_MAX_LEN 54
+#define QMI_WLANFW_IND_REGISTER_REQ_V01 0x0020
+#define QMI_WLANFW_IND_REGISTER_RESP_MSG_V01_MAX_LEN 18
+#define QMI_WLANFW_IND_REGISTER_RESP_V01 0x0020
+#define QMI_WLANFW_CLIENT_ID 0x4b4e454c
+
+struct qmi_wlanfw_ind_register_req_msg_v01 {
+ u8 fw_ready_enable_valid;
+ u8 fw_ready_enable;
+ u8 initiate_cal_download_enable_valid;
+ u8 initiate_cal_download_enable;
+ u8 initiate_cal_update_enable_valid;
+ u8 initiate_cal_update_enable;
+ u8 msa_ready_enable_valid;
+ u8 msa_ready_enable;
+ u8 pin_connect_result_enable_valid;
+ u8 pin_connect_result_enable;
+ u8 client_id_valid;
+ u32 client_id;
+ u8 request_mem_enable_valid;
+ u8 request_mem_enable;
+ u8 fw_mem_ready_enable_valid;
+ u8 fw_mem_ready_enable;
+ u8 fw_init_done_enable_valid;
+ u8 fw_init_done_enable;
+ u8 rejuvenate_enable_valid;
+ u32 rejuvenate_enable;
+ u8 xo_cal_enable_valid;
+ u8 xo_cal_enable;
+ u8 cal_done_enable_valid;
+ u8 cal_done_enable;
+};
+
+struct qmi_wlanfw_ind_register_resp_msg_v01 {
+ struct qmi_response_type_v01 resp;
+ u8 fw_status_valid;
+ u64 fw_status;
+};
+
+#define QMI_WLANFW_REQUEST_MEM_IND_MSG_V01_MAX_LEN 1824
+#define QMI_WLANFW_RESPOND_MEM_REQ_MSG_V01_MAX_LEN 888
+#define QMI_WLANFW_RESPOND_MEM_RESP_MSG_V01_MAX_LEN 7
+#define QMI_WLANFW_REQUEST_MEM_IND_V01 0x0035
+#define QMI_WLANFW_RESPOND_MEM_REQ_V01 0x0036
+#define QMI_WLANFW_RESPOND_MEM_RESP_V01 0x0036
+#define QMI_WLANFW_MAX_NUM_MEM_CFG_V01 2
+#define QMI_WLANFW_MAX_STR_LEN_V01 16
+
+struct qmi_wlanfw_mem_cfg_s_v01 {
+ u64 offset;
+ u32 size;
+ u8 secure_flag;
+};
+
+enum qmi_wlanfw_mem_type_enum_v01 {
+ WLANFW_MEM_TYPE_ENUM_MIN_VAL_V01 = INT_MIN,
+ QMI_WLANFW_MEM_TYPE_MSA_V01 = 0,
+ QMI_WLANFW_MEM_TYPE_DDR_V01 = 1,
+ QMI_WLANFW_MEM_BDF_V01 = 2,
+ QMI_WLANFW_MEM_M3_V01 = 3,
+ QMI_WLANFW_MEM_CAL_V01 = 4,
+ QMI_WLANFW_MEM_DPD_V01 = 5,
+ WLANFW_MEM_TYPE_ENUM_MAX_VAL_V01 = INT_MAX,
+};
+
+struct qmi_wlanfw_mem_seg_s_v01 {
+ u32 size;
+ enum qmi_wlanfw_mem_type_enum_v01 type;
+ u32 mem_cfg_len;
+ struct qmi_wlanfw_mem_cfg_s_v01 mem_cfg[QMI_WLANFW_MAX_NUM_MEM_CFG_V01];
+};
+
+struct qmi_wlanfw_request_mem_ind_msg_v01 {
+ u32 mem_seg_len;
+ struct qmi_wlanfw_mem_seg_s_v01 mem_seg[ATH12K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01];
+};
+
+struct qmi_wlanfw_mem_seg_resp_s_v01 {
+ u64 addr;
+ u32 size;
+ enum qmi_wlanfw_mem_type_enum_v01 type;
+ u8 restore;
+};
+
+struct qmi_wlanfw_respond_mem_req_msg_v01 {
+ u32 mem_seg_len;
+ struct qmi_wlanfw_mem_seg_resp_s_v01 mem_seg[ATH12K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01];
+};
+
+struct qmi_wlanfw_respond_mem_resp_msg_v01 {
+ struct qmi_response_type_v01 resp;
+};
+
+struct qmi_wlanfw_fw_mem_ready_ind_msg_v01 {
+ char placeholder;
+};
+
+struct qmi_wlanfw_fw_ready_ind_msg_v01 {
+ char placeholder;
+};
+
+#define QMI_WLANFW_CAP_REQ_MSG_V01_MAX_LEN 0
+#define QMI_WLANFW_CAP_RESP_MSG_V01_MAX_LEN 207
+#define QMI_WLANFW_CAP_REQ_V01 0x0024
+#define QMI_WLANFW_CAP_RESP_V01 0x0024
+
+enum qmi_wlanfw_pipedir_enum_v01 {
+ QMI_WLFW_PIPEDIR_NONE_V01 = 0,
+ QMI_WLFW_PIPEDIR_IN_V01 = 1,
+ QMI_WLFW_PIPEDIR_OUT_V01 = 2,
+ QMI_WLFW_PIPEDIR_INOUT_V01 = 3,
+};
+
+struct qmi_wlanfw_ce_tgt_pipe_cfg_s_v01 {
+ __le32 pipe_num;
+ __le32 pipe_dir;
+ __le32 nentries;
+ __le32 nbytes_max;
+ __le32 flags;
+};
+
+struct qmi_wlanfw_ce_svc_pipe_cfg_s_v01 {
+ __le32 service_id;
+ __le32 pipe_dir;
+ __le32 pipe_num;
+};
+
+struct qmi_wlanfw_shadow_reg_cfg_s_v01 {
+ u16 id;
+ u16 offset;
+};
+
+struct qmi_wlanfw_shadow_reg_v3_cfg_s_v01 {
+ u32 addr;
+};
+
+struct qmi_wlanfw_memory_region_info_s_v01 {
+ u64 region_addr;
+ u32 size;
+ u8 secure_flag;
+};
+
+struct qmi_wlanfw_rf_chip_info_s_v01 {
+ u32 chip_id;
+ u32 chip_family;
+};
+
+struct qmi_wlanfw_rf_board_info_s_v01 {
+ u32 board_id;
+};
+
+struct qmi_wlanfw_soc_info_s_v01 {
+ u32 soc_id;
+};
+
+struct qmi_wlanfw_fw_version_info_s_v01 {
+ u32 fw_version;
+ char fw_build_timestamp[ATH12K_QMI_WLANFW_MAX_TIMESTAMP_LEN_V01 + 1];
+};
+
+struct qmi_wlanfw_dev_mem_info_s_v01 {
+ u64 start;
+ u64 size;
+};
+
+enum qmi_wlanfw_cal_temp_id_enum_v01 {
+ QMI_WLANFW_CAL_TEMP_IDX_0_V01 = 0,
+ QMI_WLANFW_CAL_TEMP_IDX_1_V01 = 1,
+ QMI_WLANFW_CAL_TEMP_IDX_2_V01 = 2,
+ QMI_WLANFW_CAL_TEMP_IDX_3_V01 = 3,
+ QMI_WLANFW_CAL_TEMP_IDX_4_V01 = 4,
+ QMI_WLANFW_CAL_TEMP_ID_MAX_V01 = 0xFF,
+};
+
+enum qmi_wlanfw_rd_card_chain_cap_v01 {
+ WLFW_RD_CARD_CHAIN_CAP_MIN_VAL_V01 = INT_MIN,
+ WLFW_RD_CARD_CHAIN_CAP_UNSPECIFIED_V01 = 0,
+ WLFW_RD_CARD_CHAIN_CAP_1x1_V01 = 1,
+ WLFW_RD_CARD_CHAIN_CAP_2x2_V01 = 2,
+ WLFW_RD_CARD_CHAIN_CAP_MAX_VAL_V01 = INT_MAX,
+};
+
+struct qmi_wlanfw_cap_resp_msg_v01 {
+ struct qmi_response_type_v01 resp;
+ u8 chip_info_valid;
+ struct qmi_wlanfw_rf_chip_info_s_v01 chip_info;
+ u8 board_info_valid;
+ struct qmi_wlanfw_rf_board_info_s_v01 board_info;
+ u8 soc_info_valid;
+ struct qmi_wlanfw_soc_info_s_v01 soc_info;
+ u8 fw_version_info_valid;
+ struct qmi_wlanfw_fw_version_info_s_v01 fw_version_info;
+ u8 fw_build_id_valid;
+ char fw_build_id[ATH12K_QMI_WLANFW_MAX_BUILD_ID_LEN_V01 + 1];
+ u8 num_macs_valid;
+ u8 num_macs;
+ u8 voltage_mv_valid;
+ u32 voltage_mv;
+ u8 time_freq_hz_valid;
+ u32 time_freq_hz;
+ u8 otp_version_valid;
+ u32 otp_version;
+ u8 eeprom_caldata_read_timeout_valid;
+ u32 eeprom_caldata_read_timeout;
+ u8 fw_caps_valid;
+ u64 fw_caps;
+ u8 rd_card_chain_cap_valid;
+ enum qmi_wlanfw_rd_card_chain_cap_v01 rd_card_chain_cap;
+ u8 dev_mem_info_valid;
+ struct qmi_wlanfw_dev_mem_info_s_v01 dev_mem[ATH12K_QMI_WLFW_MAX_DEV_MEM_NUM_V01];
+};
+
+struct qmi_wlanfw_cap_req_msg_v01 {
+ char placeholder;
+};
+
+#define QMI_WLANFW_BDF_DOWNLOAD_REQ_MSG_V01_MAX_LEN 6182
+#define QMI_WLANFW_BDF_DOWNLOAD_RESP_MSG_V01_MAX_LEN 7
+#define QMI_WLANFW_BDF_DOWNLOAD_RESP_V01 0x0025
+#define QMI_WLANFW_BDF_DOWNLOAD_REQ_V01 0x0025
+/* TODO: Check with MCL and the FW team whether data can be a pointer
+ * and the last element in the structure.
+ */
+struct qmi_wlanfw_bdf_download_req_msg_v01 {
+ u8 valid;
+ u8 file_id_valid;
+ enum qmi_wlanfw_cal_temp_id_enum_v01 file_id;
+ u8 total_size_valid;
+ u32 total_size;
+ u8 seg_id_valid;
+ u32 seg_id;
+ u8 data_valid;
+ u32 data_len;
+ u8 data[QMI_WLANFW_MAX_DATA_SIZE_V01];
+ u8 end_valid;
+ u8 end;
+ u8 bdf_type_valid;
+ u8 bdf_type;
+};
+
+struct qmi_wlanfw_bdf_download_resp_msg_v01 {
+ struct qmi_response_type_v01 resp;
+};
+
+#define QMI_WLANFW_M3_INFO_REQ_MSG_V01_MAX_MSG_LEN 18
+#define QMI_WLANFW_M3_INFO_RESP_MSG_V01_MAX_MSG_LEN 7
+#define QMI_WLANFW_M3_INFO_RESP_V01 0x003C
+#define QMI_WLANFW_M3_INFO_REQ_V01 0x003C
+
+struct qmi_wlanfw_m3_info_req_msg_v01 {
+ u64 addr;
+ u32 size;
+};
+
+struct qmi_wlanfw_m3_info_resp_msg_v01 {
+ struct qmi_response_type_v01 resp;
+};
+
+#define QMI_WLANFW_WLAN_MODE_REQ_MSG_V01_MAX_LEN 11
+#define QMI_WLANFW_WLAN_MODE_RESP_MSG_V01_MAX_LEN 7
+#define QMI_WLANFW_WLAN_CFG_REQ_MSG_V01_MAX_LEN 803
+#define QMI_WLANFW_WLAN_CFG_RESP_MSG_V01_MAX_LEN 7
+#define QMI_WLANFW_WLAN_MODE_REQ_V01 0x0022
+#define QMI_WLANFW_WLAN_MODE_RESP_V01 0x0022
+#define QMI_WLANFW_WLAN_CFG_REQ_V01 0x0023
+#define QMI_WLANFW_WLAN_CFG_RESP_V01 0x0023
+#define QMI_WLANFW_MAX_NUM_CE_V01 12
+#define QMI_WLANFW_MAX_NUM_SVC_V01 24
+#define QMI_WLANFW_MAX_NUM_SHADOW_REG_V01 24
+#define QMI_WLANFW_MAX_NUM_SHADOW_REG_V3_V01 60
+
+struct qmi_wlanfw_wlan_mode_req_msg_v01 {
+ u32 mode;
+ u8 hw_debug_valid;
+ u8 hw_debug;
+};
+
+struct qmi_wlanfw_wlan_mode_resp_msg_v01 {
+ struct qmi_response_type_v01 resp;
+};
+
+struct qmi_wlanfw_wlan_cfg_req_msg_v01 {
+ u8 host_version_valid;
+ char host_version[QMI_WLANFW_MAX_STR_LEN_V01 + 1];
+ u8 tgt_cfg_valid;
+ u32 tgt_cfg_len;
+ struct qmi_wlanfw_ce_tgt_pipe_cfg_s_v01
+ tgt_cfg[QMI_WLANFW_MAX_NUM_CE_V01];
+ u8 svc_cfg_valid;
+ u32 svc_cfg_len;
+ struct qmi_wlanfw_ce_svc_pipe_cfg_s_v01
+ svc_cfg[QMI_WLANFW_MAX_NUM_SVC_V01];
+ u8 shadow_reg_valid;
+ u32 shadow_reg_len;
+ struct qmi_wlanfw_shadow_reg_cfg_s_v01
+ shadow_reg[QMI_WLANFW_MAX_NUM_SHADOW_REG_V01];
+ u8 shadow_reg_v3_valid;
+ u32 shadow_reg_v3_len;
+ struct qmi_wlanfw_shadow_reg_v3_cfg_s_v01
+ shadow_reg_v3[QMI_WLANFW_MAX_NUM_SHADOW_REG_V3_V01];
+};
+
+struct qmi_wlanfw_wlan_cfg_resp_msg_v01 {
+ struct qmi_response_type_v01 resp;
+};
+
+int ath12k_qmi_firmware_start(struct ath12k_base *ab,
+ u32 mode);
+void ath12k_qmi_firmware_stop(struct ath12k_base *ab);
+void ath12k_qmi_event_work(struct work_struct *work);
+void ath12k_qmi_msg_recv_work(struct work_struct *work);
+void ath12k_qmi_deinit_service(struct ath12k_base *ab);
+int ath12k_qmi_init_service(struct ath12k_base *ab);
+
+#endif
diff --git a/drivers/net/wireless/ath/ath12k/reg.c b/drivers/net/wireless/ath/ath12k/reg.c
new file mode 100644
index 000000000000..6ede91ebc8e1
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/reg.c
@@ -0,0 +1,732 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+#include <linux/rtnetlink.h>
+#include "core.h"
+#include "debug.h"
+
+/* World regdom to be used in case default regd from fw is unavailable */
+#define ATH12K_2GHZ_CH01_11 REG_RULE(2412 - 10, 2462 + 10, 40, 0, 20, 0)
+#define ATH12K_5GHZ_5150_5350 REG_RULE(5150 - 10, 5350 + 10, 80, 0, 30,\
+ NL80211_RRF_NO_IR)
+#define ATH12K_5GHZ_5725_5850 REG_RULE(5725 - 10, 5850 + 10, 80, 0, 30,\
+ NL80211_RRF_NO_IR)
+
+#define ETSI_WEATHER_RADAR_BAND_LOW 5590
+#define ETSI_WEATHER_RADAR_BAND_HIGH 5650
+#define ETSI_WEATHER_RADAR_BAND_CAC_TIMEOUT 600000
+
+static const struct ieee80211_regdomain ath12k_world_regd = {
+ .n_reg_rules = 3,
+ .alpha2 = "00",
+ .reg_rules = {
+ ATH12K_2GHZ_CH01_11,
+ ATH12K_5GHZ_5150_5350,
+ ATH12K_5GHZ_5725_5850,
+ }
+};
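+
+/* For reference, REG_RULE(start, end, bw, gain, eirp, flags) takes start
+ * and end frequencies in MHz, max bandwidth in MHz, max antenna gain in
+ * dBi and max EIRP in dBm, so ATH12K_2GHZ_CH01_11 above permits
+ * 2402-2472 MHz at up to 40 MHz bandwidth and 20 dBm.
+ */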
+
+static bool ath12k_regdom_changes(struct ath12k *ar, char *alpha2)
+{
+ const struct ieee80211_regdomain *regd;
+
+ regd = rcu_dereference_rtnl(ar->hw->wiphy->regd);
+ /* This can happen during wiphy registration where the previous
+ * user request is received before we update the regd received
+ * from firmware.
+ */
+ if (!regd)
+ return true;
+
+ return memcmp(regd->alpha2, alpha2, 2) != 0;
+}
+
+static void
+ath12k_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request)
+{
+ struct ieee80211_hw *hw = wiphy_to_ieee80211_hw(wiphy);
+ struct ath12k_wmi_init_country_arg arg;
+ struct ath12k *ar = hw->priv;
+ int ret;
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_REG,
+ "Regulatory Notification received for %s\n", wiphy_name(wiphy));
+
+ /* Currently supporting only General User Hints. Cell-based user
+ * hints are to be handled later.
+ * Hints from other sources like Core, Beacons are not expected for
+ * self-managed wiphys.
+ */
+ if (!(request->initiator == NL80211_REGDOM_SET_BY_USER &&
+ request->user_reg_hint_type == NL80211_USER_REG_HINT_USER)) {
+ ath12k_warn(ar->ab, "Unexpected Regulatory event for this wiphy\n");
+ return;
+ }
+
+ if (!IS_ENABLED(CONFIG_ATH_REG_DYNAMIC_USER_REG_HINTS)) {
+ ath12k_dbg(ar->ab, ATH12K_DBG_REG,
+ "Country Setting is not allowed\n");
+ return;
+ }
+
+ if (!ath12k_regdom_changes(ar, request->alpha2)) {
+ ath12k_dbg(ar->ab, ATH12K_DBG_REG, "Country is already set\n");
+ return;
+ }
+
+ /* Set the country code to the firmware and wait for
+ * the WMI_REG_CHAN_LIST_CC EVENT for updating the
+ * reg info
+ */
+ arg.flags = ALPHA_IS_SET;
+ memcpy(&arg.cc_info.alpha2, request->alpha2, 2);
+ arg.cc_info.alpha2[2] = 0;
+
+ ret = ath12k_wmi_send_init_country_cmd(ar, &arg);
+ if (ret)
+ ath12k_warn(ar->ab,
+ "INIT Country code set to fw failed : %d\n", ret);
+}
+
+int ath12k_reg_update_chan_list(struct ath12k *ar)
+{
+ struct ieee80211_supported_band **bands;
+ struct ath12k_wmi_scan_chan_list_arg *arg;
+ struct ieee80211_channel *channel;
+ struct ieee80211_hw *hw = ar->hw;
+ struct ath12k_wmi_channel_arg *ch;
+ enum nl80211_band band;
+ int num_channels = 0;
+ int i, ret;
+
+ bands = hw->wiphy->bands;
+ for (band = 0; band < NUM_NL80211_BANDS; band++) {
+ if (!bands[band])
+ continue;
+
+ for (i = 0; i < bands[band]->n_channels; i++) {
+ if (bands[band]->channels[i].flags &
+ IEEE80211_CHAN_DISABLED)
+ continue;
+
+ num_channels++;
+ }
+ }
+
+ if (WARN_ON(!num_channels))
+ return -EINVAL;
+
+ arg = kzalloc(struct_size(arg, channel, num_channels), GFP_KERNEL);
+
+ if (!arg)
+ return -ENOMEM;
+
+ arg->pdev_id = ar->pdev->pdev_id;
+ arg->nallchans = num_channels;
+
+ ch = arg->channel;
+
+ for (band = 0; band < NUM_NL80211_BANDS; band++) {
+ if (!bands[band])
+ continue;
+
+ for (i = 0; i < bands[band]->n_channels; i++) {
+ channel = &bands[band]->channels[i];
+
+ if (channel->flags & IEEE80211_CHAN_DISABLED)
+ continue;
+
+ /* TODO: Set to true/false based on some condition? */
+ ch->allow_ht = true;
+ ch->allow_vht = true;
+ ch->allow_he = true;
+
+ ch->dfs_set =
+ !!(channel->flags & IEEE80211_CHAN_RADAR);
+ ch->is_chan_passive = !!(channel->flags &
+ IEEE80211_CHAN_NO_IR);
+ ch->is_chan_passive |= ch->dfs_set;
+ ch->mhz = channel->center_freq;
+ ch->cfreq1 = channel->center_freq;
+ ch->minpower = 0;
+ ch->maxpower = channel->max_power * 2;
+ ch->maxregpower = channel->max_reg_power * 2;
+ ch->antennamax = channel->max_antenna_gain * 2;
+
+ /* TODO: Use appropriate phymodes */
+ if (channel->band == NL80211_BAND_2GHZ)
+ ch->phy_mode = MODE_11G;
+ else
+ ch->phy_mode = MODE_11A;
+
+ if (channel->band == NL80211_BAND_6GHZ &&
+ cfg80211_channel_is_psc(channel))
+ ch->psc_channel = true;
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "mac channel [%d/%d] freq %d maxpower %d regpower %d antenna %d mode %d\n",
+ i, arg->nallchans,
+ ch->mhz, ch->maxpower, ch->maxregpower,
+ ch->antennamax, ch->phy_mode);
+
+ ch++;
+ /* TODO: use quarter/half rate, cfreq12, dfs_cfreq2
+ * set_agile, reg_class_idx
+ */
+ }
+ }
+
+ ret = ath12k_wmi_send_scan_chan_list_cmd(ar, arg);
+ kfree(arg);
+
+ return ret;
+}
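+
+/* Note on the scaling above: mac80211 reports max_power, max_reg_power
+ * and max_antenna_gain in dBm/dBi while the WMI channel arguments use
+ * 0.5 dBm/dBi units, hence the "* 2" (an assumption following the
+ * convention of earlier ath drivers).
+ */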
+
+static void ath12k_copy_regd(struct ieee80211_regdomain *regd_orig,
+ struct ieee80211_regdomain *regd_copy)
+{
+ u8 i;
+
+ /* The caller should have checked error conditions */
+ memcpy(regd_copy, regd_orig, sizeof(*regd_orig));
+
+ for (i = 0; i < regd_orig->n_reg_rules; i++)
+ memcpy(&regd_copy->reg_rules[i], &regd_orig->reg_rules[i],
+ sizeof(struct ieee80211_reg_rule));
+}
+
+int ath12k_regd_update(struct ath12k *ar, bool init)
+{
+ struct ieee80211_regdomain *regd, *regd_copy = NULL;
+ int ret, regd_len, pdev_id;
+ struct ath12k_base *ab;
+
+ ab = ar->ab;
+ pdev_id = ar->pdev_idx;
+
+ spin_lock_bh(&ab->base_lock);
+
+ if (init) {
+ /* Apply the regd received during init through
+ * WMI_REG_CHAN_LIST_CC event. In case of failure to
+ * receive the regd, initialize with a default world
+ * regulatory.
+ */
+ if (ab->default_regd[pdev_id]) {
+ regd = ab->default_regd[pdev_id];
+ } else {
+ ath12k_warn(ab,
+ "failed to receive default regd during init\n");
+ regd = (struct ieee80211_regdomain *)&ath12k_world_regd;
+ }
+ } else {
+ regd = ab->new_regd[pdev_id];
+ }
+
+ if (!regd) {
+ ret = -EINVAL;
+ spin_unlock_bh(&ab->base_lock);
+ goto err;
+ }
+
+ regd_len = sizeof(*regd) + (regd->n_reg_rules *
+ sizeof(struct ieee80211_reg_rule));
+
+ regd_copy = kzalloc(regd_len, GFP_ATOMIC);
+ if (regd_copy)
+ ath12k_copy_regd(regd, regd_copy);
+
+ spin_unlock_bh(&ab->base_lock);
+
+ if (!regd_copy) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ rtnl_lock();
+ wiphy_lock(ar->hw->wiphy);
+ ret = regulatory_set_wiphy_regd_sync(ar->hw->wiphy, regd_copy);
+ wiphy_unlock(ar->hw->wiphy);
+ rtnl_unlock();
+
+ kfree(regd_copy);
+
+ if (ret)
+ goto err;
+
+ if (ar->state == ATH12K_STATE_ON) {
+ ret = ath12k_reg_update_chan_list(ar);
+ if (ret)
+ goto err;
+ }
+
+ return 0;
+err:
+ ath12k_warn(ab, "failed to perform regd update : %d\n", ret);
+ return ret;
+}
+
+static enum nl80211_dfs_regions
+ath12k_map_fw_dfs_region(enum ath12k_dfs_region dfs_region)
+{
+ switch (dfs_region) {
+ case ATH12K_DFS_REG_FCC:
+ case ATH12K_DFS_REG_CN:
+ return NL80211_DFS_FCC;
+ case ATH12K_DFS_REG_ETSI:
+ case ATH12K_DFS_REG_KR:
+ return NL80211_DFS_ETSI;
+ case ATH12K_DFS_REG_MKK:
+ case ATH12K_DFS_REG_MKK_N:
+ return NL80211_DFS_JP;
+ default:
+ return NL80211_DFS_UNSET;
+ }
+}
+
+static u32 ath12k_map_fw_reg_flags(u16 reg_flags)
+{
+ u32 flags = 0;
+
+ if (reg_flags & REGULATORY_CHAN_NO_IR)
+ flags = NL80211_RRF_NO_IR;
+
+ if (reg_flags & REGULATORY_CHAN_RADAR)
+ flags |= NL80211_RRF_DFS;
+
+ if (reg_flags & REGULATORY_CHAN_NO_OFDM)
+ flags |= NL80211_RRF_NO_OFDM;
+
+ if (reg_flags & REGULATORY_CHAN_INDOOR_ONLY)
+ flags |= NL80211_RRF_NO_OUTDOOR;
+
+ if (reg_flags & REGULATORY_CHAN_NO_HT40)
+ flags |= NL80211_RRF_NO_HT40;
+
+ if (reg_flags & REGULATORY_CHAN_NO_80MHZ)
+ flags |= NL80211_RRF_NO_80MHZ;
+
+ if (reg_flags & REGULATORY_CHAN_NO_160MHZ)
+ flags |= NL80211_RRF_NO_160MHZ;
+
+ return flags;
+}
+
+static bool
+ath12k_reg_can_intersect(struct ieee80211_reg_rule *rule1,
+ struct ieee80211_reg_rule *rule2)
+{
+ u32 start_freq1, end_freq1;
+ u32 start_freq2, end_freq2;
+
+ start_freq1 = rule1->freq_range.start_freq_khz;
+ start_freq2 = rule2->freq_range.start_freq_khz;
+
+ end_freq1 = rule1->freq_range.end_freq_khz;
+ end_freq2 = rule2->freq_range.end_freq_khz;
+
+ if ((start_freq1 >= start_freq2 &&
+ start_freq1 < end_freq2) ||
+ (start_freq2 > start_freq1 &&
+ start_freq2 < end_freq1))
+ return true;
+
+ /* TODO: Should we restrict intersection feasibility
+ * based on min bandwidth of the intersected region also,
+ * say the intersected rule should have a min bandwidth
+ * of 20MHz?
+ */
+
+ return false;
+}
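+
+/* Worked example for the overlap test above: rules covering
+ * 5150-5350 MHz and 5250-5470 MHz overlap on 5250-5350 MHz and may be
+ * intersected, whereas 5150-5250 MHz and 5250-5350 MHz merely touch at
+ * 5250 MHz and are not considered intersectable.
+ */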
+
+static void ath12k_reg_intersect_rules(struct ieee80211_reg_rule *rule1,
+ struct ieee80211_reg_rule *rule2,
+ struct ieee80211_reg_rule *new_rule)
+{
+ u32 start_freq1, end_freq1;
+ u32 start_freq2, end_freq2;
+ u32 freq_diff, max_bw;
+
+ start_freq1 = rule1->freq_range.start_freq_khz;
+ start_freq2 = rule2->freq_range.start_freq_khz;
+
+ end_freq1 = rule1->freq_range.end_freq_khz;
+ end_freq2 = rule2->freq_range.end_freq_khz;
+
+ new_rule->freq_range.start_freq_khz = max_t(u32, start_freq1,
+ start_freq2);
+ new_rule->freq_range.end_freq_khz = min_t(u32, end_freq1, end_freq2);
+
+ freq_diff = new_rule->freq_range.end_freq_khz -
+ new_rule->freq_range.start_freq_khz;
+ max_bw = min_t(u32, rule1->freq_range.max_bandwidth_khz,
+ rule2->freq_range.max_bandwidth_khz);
+ new_rule->freq_range.max_bandwidth_khz = min_t(u32, max_bw, freq_diff);
+
+ new_rule->power_rule.max_antenna_gain =
+ min_t(u32, rule1->power_rule.max_antenna_gain,
+ rule2->power_rule.max_antenna_gain);
+
+ new_rule->power_rule.max_eirp = min_t(u32, rule1->power_rule.max_eirp,
+ rule2->power_rule.max_eirp);
+
+ /* Use the flags of both the rules */
+ new_rule->flags = rule1->flags | rule2->flags;
+
+ /* To be safe, let's use the max CAC timeout of both rules */
+ new_rule->dfs_cac_ms = max_t(u32, rule1->dfs_cac_ms,
+ rule2->dfs_cac_ms);
+}
+
+static struct ieee80211_regdomain *
+ath12k_regd_intersect(struct ieee80211_regdomain *default_regd,
+ struct ieee80211_regdomain *curr_regd)
+{
+ u8 num_old_regd_rules, num_curr_regd_rules, num_new_regd_rules;
+ struct ieee80211_reg_rule *old_rule, *curr_rule, *new_rule;
+ struct ieee80211_regdomain *new_regd = NULL;
+ u8 i, j, k;
+
+ num_old_regd_rules = default_regd->n_reg_rules;
+ num_curr_regd_rules = curr_regd->n_reg_rules;
+ num_new_regd_rules = 0;
+
+ /* Find the number of intersecting rules to allocate new regd memory */
+ for (i = 0; i < num_old_regd_rules; i++) {
+ old_rule = default_regd->reg_rules + i;
+ for (j = 0; j < num_curr_regd_rules; j++) {
+ curr_rule = curr_regd->reg_rules + j;
+
+ if (ath12k_reg_can_intersect(old_rule, curr_rule))
+ num_new_regd_rules++;
+ }
+ }
+
+ if (!num_new_regd_rules)
+ return NULL;
+
+ new_regd = kzalloc(sizeof(*new_regd) + (num_new_regd_rules *
+ sizeof(struct ieee80211_reg_rule)),
+ GFP_ATOMIC);
+
+ if (!new_regd)
+ return NULL;
+
+ /* We set the new country and dfs region directly and only trim
+ * the freq, power, antenna gain by intersecting with the
+ * default regdomain. Also, the max of the two DFS CAC timeouts is selected.
+ */
+ new_regd->n_reg_rules = num_new_regd_rules;
+ memcpy(new_regd->alpha2, curr_regd->alpha2, sizeof(new_regd->alpha2));
+ new_regd->dfs_region = curr_regd->dfs_region;
+ new_rule = new_regd->reg_rules;
+
+ for (i = 0, k = 0; i < num_old_regd_rules; i++) {
+ old_rule = default_regd->reg_rules + i;
+ for (j = 0; j < num_curr_regd_rules; j++) {
+ curr_rule = curr_regd->reg_rules + j;
+
+ if (ath12k_reg_can_intersect(old_rule, curr_rule))
+ ath12k_reg_intersect_rules(old_rule, curr_rule,
+ (new_rule + k++));
+ }
+ }
+ return new_regd;
+}
+
+static const char *
+ath12k_reg_get_regdom_str(enum nl80211_dfs_regions dfs_region)
+{
+ switch (dfs_region) {
+ case NL80211_DFS_FCC:
+ return "FCC";
+ case NL80211_DFS_ETSI:
+ return "ETSI";
+ case NL80211_DFS_JP:
+ return "JP";
+ default:
+ return "UNSET";
+ }
+}
+
+static u16
+ath12k_reg_adjust_bw(u16 start_freq, u16 end_freq, u16 max_bw)
+{
+ u16 bw;
+
+ bw = end_freq - start_freq;
+ bw = min_t(u16, bw, max_bw);
+
+ if (bw >= 80 && bw < 160)
+ bw = 80;
+ else if (bw >= 40 && bw < 80)
+ bw = 40;
+ else if (bw < 40)
+ bw = 20;
+
+ return bw;
+}
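+
+/* Worked example (illustrative): a 5490-5590 MHz rule with max_bw 160
+ * yields bw = min(100, 160) = 100 MHz, which is then snapped down to the
+ * next valid channel width, 80 MHz; a bw of 160 or more passes through
+ * unchanged after the max_bw clamp.
+ */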
+
+static void
+ath12k_reg_update_rule(struct ieee80211_reg_rule *reg_rule, u32 start_freq,
+ u32 end_freq, u32 bw, u32 ant_gain, u32 reg_pwr,
+ u32 reg_flags)
+{
+ reg_rule->freq_range.start_freq_khz = MHZ_TO_KHZ(start_freq);
+ reg_rule->freq_range.end_freq_khz = MHZ_TO_KHZ(end_freq);
+ reg_rule->freq_range.max_bandwidth_khz = MHZ_TO_KHZ(bw);
+ reg_rule->power_rule.max_antenna_gain = DBI_TO_MBI(ant_gain);
+ reg_rule->power_rule.max_eirp = DBM_TO_MBM(reg_pwr);
+ reg_rule->flags = reg_flags;
+}
+
+static void
+ath12k_reg_update_weather_radar_band(struct ath12k_base *ab,
+ struct ieee80211_regdomain *regd,
+ struct ath12k_reg_rule *reg_rule,
+ u8 *rule_idx, u32 flags, u16 max_bw)
+{
+ u32 end_freq;
+ u16 bw;
+ u8 i;
+
+ i = *rule_idx;
+
+ bw = ath12k_reg_adjust_bw(reg_rule->start_freq,
+ ETSI_WEATHER_RADAR_BAND_LOW, max_bw);
+
+ ath12k_reg_update_rule(regd->reg_rules + i, reg_rule->start_freq,
+ ETSI_WEATHER_RADAR_BAND_LOW, bw,
+ reg_rule->ant_gain, reg_rule->reg_power,
+ flags);
+
+ ath12k_dbg(ab, ATH12K_DBG_REG,
+ "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
+ i + 1, reg_rule->start_freq, ETSI_WEATHER_RADAR_BAND_LOW,
+ bw, reg_rule->ant_gain, reg_rule->reg_power,
+ regd->reg_rules[i].dfs_cac_ms,
+ flags);
+
+ if (reg_rule->end_freq > ETSI_WEATHER_RADAR_BAND_HIGH)
+ end_freq = ETSI_WEATHER_RADAR_BAND_HIGH;
+ else
+ end_freq = reg_rule->end_freq;
+
+ bw = ath12k_reg_adjust_bw(ETSI_WEATHER_RADAR_BAND_LOW, end_freq,
+ max_bw);
+
+ i++;
+
+ ath12k_reg_update_rule(regd->reg_rules + i,
+ ETSI_WEATHER_RADAR_BAND_LOW, end_freq, bw,
+ reg_rule->ant_gain, reg_rule->reg_power,
+ flags);
+
+ regd->reg_rules[i].dfs_cac_ms = ETSI_WEATHER_RADAR_BAND_CAC_TIMEOUT;
+
+ ath12k_dbg(ab, ATH12K_DBG_REG,
+ "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
+ i + 1, ETSI_WEATHER_RADAR_BAND_LOW, end_freq,
+ bw, reg_rule->ant_gain, reg_rule->reg_power,
+ regd->reg_rules[i].dfs_cac_ms,
+ flags);
+
+ if (end_freq == reg_rule->end_freq) {
+ regd->n_reg_rules--;
+ *rule_idx = i;
+ return;
+ }
+
+ bw = ath12k_reg_adjust_bw(ETSI_WEATHER_RADAR_BAND_HIGH,
+ reg_rule->end_freq, max_bw);
+
+ i++;
+
+ ath12k_reg_update_rule(regd->reg_rules + i, ETSI_WEATHER_RADAR_BAND_HIGH,
+ reg_rule->end_freq, bw,
+ reg_rule->ant_gain, reg_rule->reg_power,
+ flags);
+
+ ath12k_dbg(ab, ATH12K_DBG_REG,
+ "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
+ i + 1, ETSI_WEATHER_RADAR_BAND_HIGH, reg_rule->end_freq,
+ bw, reg_rule->ant_gain, reg_rule->reg_power,
+ regd->reg_rules[i].dfs_cac_ms,
+ flags);
+
+ *rule_idx = i;
+}
+
+struct ieee80211_regdomain *
+ath12k_reg_build_regd(struct ath12k_base *ab,
+ struct ath12k_reg_info *reg_info, bool intersect)
+{
+ struct ieee80211_regdomain *tmp_regd, *default_regd, *new_regd = NULL;
+ struct ath12k_reg_rule *reg_rule;
+ u8 i = 0, j = 0, k = 0;
+ u8 num_rules;
+ u16 max_bw;
+ u32 flags;
+ char alpha2[3];
+
+ num_rules = reg_info->num_5g_reg_rules + reg_info->num_2g_reg_rules;
+
+ /* FIXME: Currently taking reg rules for 6G only from Indoor AP mode list.
+ * This can be updated to choose the combination dynamically based on AP
+ * type and client type, after complete 6G regulatory support is added.
+ */
+ if (reg_info->is_ext_reg_event)
+ num_rules += reg_info->num_6g_reg_rules_ap[WMI_REG_INDOOR_AP];
+
+ if (!num_rules)
+ goto ret;
+
+ /* Add max additional rules to accommodate weather radar band */
+ if (reg_info->dfs_region == ATH12K_DFS_REG_ETSI)
+ num_rules += 2;
+
+ tmp_regd = kzalloc(sizeof(*tmp_regd) +
+ (num_rules * sizeof(struct ieee80211_reg_rule)),
+ GFP_ATOMIC);
+ if (!tmp_regd)
+ goto ret;
+
+ memcpy(tmp_regd->alpha2, reg_info->alpha2, REG_ALPHA2_LEN + 1);
+ memcpy(alpha2, reg_info->alpha2, REG_ALPHA2_LEN + 1);
+ alpha2[2] = '\0';
+ tmp_regd->dfs_region = ath12k_map_fw_dfs_region(reg_info->dfs_region);
+
+ ath12k_dbg(ab, ATH12K_DBG_REG,
+ "\r\nCountry %s, CFG Regdomain %s FW Regdomain %d, num_reg_rules %d\n",
+ alpha2, ath12k_reg_get_regdom_str(tmp_regd->dfs_region),
+ reg_info->dfs_region, num_rules);
+ /* Update reg_rules[] below. Firmware is expected to
+ * send these rules in order (2G rules first and then 5G).
+ */
+ for (; i < num_rules; i++) {
+ if (reg_info->num_2g_reg_rules &&
+ (i < reg_info->num_2g_reg_rules)) {
+ reg_rule = reg_info->reg_rules_2g_ptr + i;
+ max_bw = min_t(u16, reg_rule->max_bw,
+ reg_info->max_bw_2g);
+ flags = 0;
+ } else if (reg_info->num_5g_reg_rules &&
+ (j < reg_info->num_5g_reg_rules)) {
+ reg_rule = reg_info->reg_rules_5g_ptr + j++;
+ max_bw = min_t(u16, reg_rule->max_bw,
+ reg_info->max_bw_5g);
+
+ /* FW doesn't pass NL80211_RRF_AUTO_BW flag for
+ * BW Auto correction, we can enable this by default
+ * for all 5G rules here. The regulatory core performs
+ * BW correction if required and applies flags as
+ * per other BW rule flags we pass from here
+ */
+ flags = NL80211_RRF_AUTO_BW;
+ } else if (reg_info->is_ext_reg_event &&
+ reg_info->num_6g_reg_rules_ap[WMI_REG_INDOOR_AP] &&
+ (k < reg_info->num_6g_reg_rules_ap[WMI_REG_INDOOR_AP])) {
+ reg_rule = reg_info->reg_rules_6g_ap_ptr[WMI_REG_INDOOR_AP] + k++;
+ max_bw = min_t(u16, reg_rule->max_bw,
+ reg_info->max_bw_6g_ap[WMI_REG_INDOOR_AP]);
+ flags = NL80211_RRF_AUTO_BW;
+ } else {
+ break;
+ }
+
+ flags |= ath12k_map_fw_reg_flags(reg_rule->flags);
+
+ ath12k_reg_update_rule(tmp_regd->reg_rules + i,
+ reg_rule->start_freq,
+ reg_rule->end_freq, max_bw,
+ reg_rule->ant_gain, reg_rule->reg_power,
+ flags);
+
+ /* Update dfs cac timeout if the dfs domain is ETSI and the
+ * new rule covers weather radar band.
+ * Default value of '0' corresponds to 60s timeout, so no
+ * need to update that for other rules.
+ */
+ if (flags & NL80211_RRF_DFS &&
+ reg_info->dfs_region == ATH12K_DFS_REG_ETSI &&
+ (reg_rule->end_freq > ETSI_WEATHER_RADAR_BAND_LOW &&
+ reg_rule->start_freq < ETSI_WEATHER_RADAR_BAND_HIGH)) {
+ ath12k_reg_update_weather_radar_band(ab, tmp_regd,
+ reg_rule, &i,
+ flags, max_bw);
+ continue;
+ }
+
+ if (reg_info->is_ext_reg_event) {
+ ath12k_dbg(ab, ATH12K_DBG_REG, "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d) (%d, %d)\n",
+ i + 1, reg_rule->start_freq, reg_rule->end_freq,
+ max_bw, reg_rule->ant_gain, reg_rule->reg_power,
+ tmp_regd->reg_rules[i].dfs_cac_ms,
+ flags, reg_rule->psd_flag, reg_rule->psd_eirp);
+ } else {
+ ath12k_dbg(ab, ATH12K_DBG_REG,
+ "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
+ i + 1, reg_rule->start_freq, reg_rule->end_freq,
+ max_bw, reg_rule->ant_gain, reg_rule->reg_power,
+ tmp_regd->reg_rules[i].dfs_cac_ms,
+ flags);
+ }
+ }
+
+ tmp_regd->n_reg_rules = i;
+
+ if (intersect) {
+ default_regd = ab->default_regd[reg_info->phy_id];
+
+ /* Get a new regd by intersecting the received regd with
+ * our default regd.
+ */
+ new_regd = ath12k_regd_intersect(default_regd, tmp_regd);
+ kfree(tmp_regd);
+ if (!new_regd) {
+ ath12k_warn(ab, "Unable to create intersected regdomain\n");
+ goto ret;
+ }
+ } else {
+ new_regd = tmp_regd;
+ }
+
+ret:
+ return new_regd;
+}
+
+void ath12k_regd_update_work(struct work_struct *work)
+{
+ struct ath12k *ar = container_of(work, struct ath12k,
+ regd_update_work);
+ int ret;
+
+ ret = ath12k_regd_update(ar, false);
+ if (ret) {
+ /* Firmware has already moved to the new regd. We need
+ * to maintain channel consistency across FW, host driver
+ * and userspace. Hence, as a fallback mechanism, we can set
+ * the previous or default country code in the firmware.
+ */
+ /* TODO: Implement Fallback Mechanism */
+ }
+}
+
+void ath12k_reg_init(struct ath12k *ar)
+{
+ ar->hw->wiphy->regulatory_flags = REGULATORY_WIPHY_SELF_MANAGED;
+ ar->hw->wiphy->reg_notifier = ath12k_reg_notifier;
+}
+
+void ath12k_reg_free(struct ath12k_base *ab)
+{
+ int i;
+
+ for (i = 0; i < ab->hw_params->max_radios; i++) {
+ kfree(ab->default_regd[i]);
+ kfree(ab->new_regd[i]);
+ }
+}
diff --git a/drivers/net/wireless/ath/ath12k/reg.h b/drivers/net/wireless/ath/ath12k/reg.h
new file mode 100644
index 000000000000..56d009a47234
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/reg.h
@@ -0,0 +1,95 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_REG_H
+#define ATH12K_REG_H
+
+#include <linux/kernel.h>
+#include <net/regulatory.h>
+
+struct ath12k_base;
+struct ath12k;
+
+/* DFS regdomains supported by Firmware */
+enum ath12k_dfs_region {
+ ATH12K_DFS_REG_UNSET,
+ ATH12K_DFS_REG_FCC,
+ ATH12K_DFS_REG_ETSI,
+ ATH12K_DFS_REG_MKK,
+ ATH12K_DFS_REG_CN,
+ ATH12K_DFS_REG_KR,
+ ATH12K_DFS_REG_MKK_N,
+ ATH12K_DFS_REG_UNDEF,
+};
+
+enum ath12k_reg_cc_code {
+ REG_SET_CC_STATUS_PASS = 0,
+ REG_CURRENT_ALPHA2_NOT_FOUND = 1,
+ REG_INIT_ALPHA2_NOT_FOUND = 2,
+ REG_SET_CC_CHANGE_NOT_ALLOWED = 3,
+ REG_SET_CC_STATUS_NO_MEMORY = 4,
+ REG_SET_CC_STATUS_FAIL = 5,
+};
+
+struct ath12k_reg_rule {
+ u16 start_freq;
+ u16 end_freq;
+ u16 max_bw;
+ u8 reg_power;
+ u8 ant_gain;
+ u16 flags;
+ bool psd_flag;
+ u16 psd_eirp;
+};
+
+struct ath12k_reg_info {
+ enum ath12k_reg_cc_code status_code;
+ u8 num_phy;
+ u8 phy_id;
+ u16 reg_dmn_pair;
+ u16 ctry_code;
+ u8 alpha2[REG_ALPHA2_LEN + 1];
+ u32 dfs_region;
+ u32 phybitmap;
+ bool is_ext_reg_event;
+ u32 min_bw_2g;
+ u32 max_bw_2g;
+ u32 min_bw_5g;
+ u32 max_bw_5g;
+ u32 num_2g_reg_rules;
+ u32 num_5g_reg_rules;
+ struct ath12k_reg_rule *reg_rules_2g_ptr;
+ struct ath12k_reg_rule *reg_rules_5g_ptr;
+ enum wmi_reg_6g_client_type client_type;
+ bool rnr_tpe_usable;
+ bool unspecified_ap_usable;
+ /* TODO: All 6 GHz related info can be stored only for the required
+ * combination instead of all types, to optimize memory usage.
+ */
+ u8 domain_code_6g_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
+ u8 domain_code_6g_client[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
+ u32 domain_code_6g_super_id;
+ u32 min_bw_6g_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
+ u32 max_bw_6g_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
+ u32 min_bw_6g_client[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
+ u32 max_bw_6g_client[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
+ u32 num_6g_reg_rules_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
+ u32 num_6g_reg_rules_cl[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
+ struct ath12k_reg_rule *reg_rules_6g_ap_ptr[WMI_REG_CURRENT_MAX_AP_TYPE];
+ struct ath12k_reg_rule *reg_rules_6g_client_ptr
+ [WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
+};
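+
+/* Illustrative note: the per-AP-type 6 GHz members above are indexed by
+ * the WMI AP type, e.g. reg_rules_6g_ap_ptr[WMI_REG_INDOOR_AP] and
+ * num_6g_reg_rules_ap[WMI_REG_INDOOR_AP], as consumed by
+ * ath12k_reg_build_regd().
+ */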
+
+void ath12k_reg_init(struct ath12k *ar);
+void ath12k_reg_free(struct ath12k_base *ab);
+void ath12k_regd_update_work(struct work_struct *work);
+struct ieee80211_regdomain *ath12k_reg_build_regd(struct ath12k_base *ab,
+ struct ath12k_reg_info *reg_info,
+ bool intersect);
+int ath12k_regd_update(struct ath12k *ar, bool init);
+int ath12k_reg_update_chan_list(struct ath12k *ar);
+
+#endif
diff --git a/drivers/net/wireless/ath/ath12k/rx_desc.h b/drivers/net/wireless/ath/ath12k/rx_desc.h
new file mode 100644
index 000000000000..5feaff6450ad
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/rx_desc.h
@@ -0,0 +1,1441 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+#ifndef ATH12K_RX_DESC_H
+#define ATH12K_RX_DESC_H
+
+enum rx_desc_decap_type {
+ RX_DESC_DECAP_TYPE_RAW,
+ RX_DESC_DECAP_TYPE_NATIVE_WIFI,
+ RX_DESC_DECAP_TYPE_ETHERNET2_DIX,
+ RX_DESC_DECAP_TYPE_8023,
+};
+
+enum rx_desc_decrypt_status_code {
+ RX_DESC_DECRYPT_STATUS_CODE_OK,
+ RX_DESC_DECRYPT_STATUS_CODE_UNPROTECTED_FRAME,
+ RX_DESC_DECRYPT_STATUS_CODE_DATA_ERR,
+ RX_DESC_DECRYPT_STATUS_CODE_KEY_INVALID,
+ RX_DESC_DECRYPT_STATUS_CODE_PEER_ENTRY_INVALID,
+ RX_DESC_DECRYPT_STATUS_CODE_OTHER,
+};
+
+#define RX_MPDU_START_INFO0_REO_DEST_IND GENMASK(4, 0)
+#define RX_MPDU_START_INFO0_LMAC_PEER_ID_MSB GENMASK(6, 5)
+#define RX_MPDU_START_INFO0_FLOW_ID_TOEPLITZ BIT(7)
+#define RX_MPDU_START_INFO0_PKT_SEL_FP_UCAST_DATA BIT(8)
+#define RX_MPDU_START_INFO0_PKT_SEL_FP_MCAST_DATA BIT(9)
+#define RX_MPDU_START_INFO0_PKT_SEL_FP_CTRL_BAR BIT(10)
+#define RX_MPDU_START_INFO0_RXDMA0_SRC_RING_SEL GENMASK(13, 11)
+#define RX_MPDU_START_INFO0_RXDMA0_DST_RING_SEL GENMASK(16, 14)
+#define RX_MPDU_START_INFO0_MCAST_ECHO_DROP_EN BIT(17)
+#define RX_MPDU_START_INFO0_WDS_LEARN_DETECT_EN BIT(18)
+#define RX_MPDU_START_INFO0_INTRA_BSS_CHECK_EN BIT(19)
+#define RX_MPDU_START_INFO0_USE_PPE BIT(20)
+#define RX_MPDU_START_INFO0_PPE_ROUTING_EN BIT(21)
+
+#define RX_MPDU_START_INFO1_REO_QUEUE_DESC_HI GENMASK(7, 0)
+#define RX_MPDU_START_INFO1_RECV_QUEUE_NUM GENMASK(23, 8)
+#define RX_MPDU_START_INFO1_PRE_DELIM_ERR_WARN BIT(24)
+#define RX_MPDU_START_INFO1_FIRST_DELIM_ERR BIT(25)
+
+#define RX_MPDU_START_INFO2_EPD_EN BIT(0)
+#define RX_MPDU_START_INFO2_ALL_FRAME_ENCPD BIT(1)
+#define RX_MPDU_START_INFO2_ENC_TYPE GENMASK(5, 2)
+#define RX_MPDU_START_INFO2_VAR_WEP_KEY_WIDTH GENMASK(7, 6)
+#define RX_MPDU_START_INFO2_MESH_STA GENMASK(9, 8)
+#define RX_MPDU_START_INFO2_BSSID_HIT BIT(10)
+#define RX_MPDU_START_INFO2_BSSID_NUM GENMASK(14, 11)
+#define RX_MPDU_START_INFO2_TID GENMASK(18, 15)
+
+#define RX_MPDU_START_INFO3_RXPCU_MPDU_FLTR GENMASK(1, 0)
+#define RX_MPDU_START_INFO3_SW_FRAME_GRP_ID GENMASK(8, 2)
+#define RX_MPDU_START_INFO3_NDP_FRAME BIT(9)
+#define RX_MPDU_START_INFO3_PHY_ERR BIT(10)
+#define RX_MPDU_START_INFO3_PHY_ERR_MPDU_HDR BIT(11)
+#define RX_MPDU_START_INFO3_PROTO_VER_ERR BIT(12)
+#define RX_MPDU_START_INFO3_AST_LOOKUP_VALID BIT(13)
+#define RX_MPDU_START_INFO3_RANGING BIT(14)
+
+#define RX_MPDU_START_INFO4_MPDU_FCTRL_VALID BIT(0)
+#define RX_MPDU_START_INFO4_MPDU_DUR_VALID BIT(1)
+#define RX_MPDU_START_INFO4_MAC_ADDR1_VALID BIT(2)
+#define RX_MPDU_START_INFO4_MAC_ADDR2_VALID BIT(3)
+#define RX_MPDU_START_INFO4_MAC_ADDR3_VALID BIT(4)
+#define RX_MPDU_START_INFO4_MAC_ADDR4_VALID BIT(5)
+#define RX_MPDU_START_INFO4_MPDU_SEQ_CTRL_VALID BIT(6)
+#define RX_MPDU_START_INFO4_MPDU_QOS_CTRL_VALID BIT(7)
+#define RX_MPDU_START_INFO4_MPDU_HT_CTRL_VALID BIT(8)
+#define RX_MPDU_START_INFO4_ENCRYPT_INFO_VALID BIT(9)
+#define RX_MPDU_START_INFO4_MPDU_FRAG_NUMBER GENMASK(13, 10)
+#define RX_MPDU_START_INFO4_MORE_FRAG_FLAG BIT(14)
+#define RX_MPDU_START_INFO4_FROM_DS BIT(16)
+#define RX_MPDU_START_INFO4_TO_DS BIT(17)
+#define RX_MPDU_START_INFO4_ENCRYPTED BIT(18)
+#define RX_MPDU_START_INFO4_MPDU_RETRY BIT(19)
+#define RX_MPDU_START_INFO4_MPDU_SEQ_NUM GENMASK(31, 20)
+
+#define RX_MPDU_START_INFO5_KEY_ID GENMASK(7, 0)
+#define RX_MPDU_START_INFO5_NEW_PEER_ENTRY BIT(8)
+#define RX_MPDU_START_INFO5_DECRYPT_NEEDED BIT(9)
+#define RX_MPDU_START_INFO5_DECAP_TYPE GENMASK(11, 10)
+#define RX_MPDU_START_INFO5_VLAN_TAG_C_PADDING BIT(12)
+#define RX_MPDU_START_INFO5_VLAN_TAG_S_PADDING BIT(13)
+#define RX_MPDU_START_INFO5_STRIP_VLAN_TAG_C BIT(14)
+#define RX_MPDU_START_INFO5_STRIP_VLAN_TAG_S BIT(15)
+#define RX_MPDU_START_INFO5_PRE_DELIM_COUNT GENMASK(27, 16)
+#define RX_MPDU_START_INFO5_AMPDU_FLAG BIT(28)
+#define RX_MPDU_START_INFO5_BAR_FRAME BIT(29)
+#define RX_MPDU_START_INFO5_RAW_MPDU BIT(30)
+
+#define RX_MPDU_START_INFO6_MPDU_LEN GENMASK(13, 0)
+#define RX_MPDU_START_INFO6_FIRST_MPDU BIT(14)
+#define RX_MPDU_START_INFO6_MCAST_BCAST BIT(15)
+#define RX_MPDU_START_INFO6_AST_IDX_NOT_FOUND BIT(16)
+#define RX_MPDU_START_INFO6_AST_IDX_TIMEOUT BIT(17)
+#define RX_MPDU_START_INFO6_POWER_MGMT BIT(18)
+#define RX_MPDU_START_INFO6_NON_QOS BIT(19)
+#define RX_MPDU_START_INFO6_NULL_DATA BIT(20)
+#define RX_MPDU_START_INFO6_MGMT_TYPE BIT(21)
+#define RX_MPDU_START_INFO6_CTRL_TYPE BIT(22)
+#define RX_MPDU_START_INFO6_MORE_DATA BIT(23)
+#define RX_MPDU_START_INFO6_EOSP BIT(24)
+#define RX_MPDU_START_INFO6_FRAGMENT BIT(25)
+#define RX_MPDU_START_INFO6_ORDER BIT(26)
+#define RX_MPDU_START_INFO6_UAPSD_TRIGGER BIT(27)
+#define RX_MPDU_START_INFO6_ENCRYPT_REQUIRED BIT(28)
+#define RX_MPDU_START_INFO6_DIRECTED BIT(29)
+#define RX_MPDU_START_INFO6_AMSDU_PRESENT BIT(30)
+
+#define RX_MPDU_START_INFO7_VDEV_ID GENMASK(7, 0)
+#define RX_MPDU_START_INFO7_SERVICE_CODE GENMASK(16, 8)
+#define RX_MPDU_START_INFO7_PRIORITY_VALID BIT(17)
+#define RX_MPDU_START_INFO7_SRC_INFO GENMASK(29, 18)
+
+#define RX_MPDU_START_INFO8_AUTH_TO_SEND_WDS BIT(0)
+
+struct rx_mpdu_start_qcn9274 {
+ __le32 info0;
+ __le32 reo_queue_desc_lo;
+ __le32 info1;
+ __le32 pn[4];
+ __le32 info2;
+ __le32 peer_meta_data;
+ __le16 info3;
+ __le16 phy_ppdu_id;
+ __le16 ast_index;
+ __le16 sw_peer_id;
+ __le32 info4;
+ __le32 info5;
+ __le32 info6;
+ __le16 frame_ctrl;
+ __le16 duration;
+ u8 addr1[ETH_ALEN];
+ u8 addr2[ETH_ALEN];
+ u8 addr3[ETH_ALEN];
+ __le16 seq_ctrl;
+ u8 addr4[ETH_ALEN];
+ __le16 qos_ctrl;
+ __le32 ht_ctrl;
+ __le32 info7;
+ u8 multi_link_addr1[ETH_ALEN];
+ u8 multi_link_addr2[ETH_ALEN];
+ __le32 info8;
+ __le32 res0;
+ __le32 res1;
+} __packed;
+
+/* rx_mpdu_start
+ *
+ * reo_destination_indication
+ * The id of the reo exit ring where the msdu frame shall be pushed
+ * after (MPDU level) reordering has finished. Values are defined
+ * in enum %HAL_RX_MSDU_DESC_REO_DEST_IND_.
+ *
+ * lmac_peer_id_msb
+ *
+ * If use_flow_id_toeplitz_clfy is set and lmac_peer_id_msb
+ * is 2'b00, Rx OLE uses a REO destination indication of {1'b1,
+ * hash[3:0]} using the chosen Toeplitz hash from Common Parser
+ * if flow search fails.
+ * If use_flow_id_toeplitz_clfy is set and lmac_peer_id_msb
+ * is not 2'b00, Rx OLE uses a REO destination indication of
+ * {lmac_peer_id_msb, hash[2:0]} using the chosen Toeplitz
+ * hash from Common Parser if flow search fails.
+ *
+ * use_flow_id_toeplitz_clfy
+ * Indication to Rx OLE to enable REO destination routing based
+ * on the chosen Toeplitz hash from Common Parser, in case
+ * flow search fails
+ *
+ * pkt_selection_fp_ucast_data
+ * Filter pass Unicast data frame (matching rxpcu_filter_pass
+ * and sw_frame_group_Unicast_data) routing selection
+ *
+ * pkt_selection_fp_mcast_data
+ * Filter pass Multicast data frame (matching rxpcu_filter_pass
+ * and sw_frame_group_Multicast_data) routing selection
+ *
+ * pkt_selection_fp_ctrl_bar
+ * Filter pass BAR frame (matching rxpcu_filter_pass
+ * and sw_frame_group_ctrl_1000) routing selection
+ *
+ * rxdma0_src_ring_selection
+ * Field only valid when the corresponding pkt_selection_fp_...
+ * bit is set for the received frame type
+ *
+ * rxdma0_dst_ring_selection
+ * Field only valid when the corresponding pkt_selection_fp_...
+ * bit is set for the received frame type
+ *
+ * mcast_echo_drop_enable
+ * If set, for multicast packets, multicast echo check (i.e.
+ * SA search with mcast_echo_check = 1) shall be performed
+ * by RXOLE, and any multicast echo packets should be indicated
+ * to RXDMA for release to WBM
+ *
+ * wds_learning_detect_en
+ * If set, WDS learning detection based on SA search and notification
+ * to FW (using RXDMA0 status ring) is enabled and the "timestamp"
+ * field in address search failure cache-only entry should
+ * be used to avoid multiple WDS learning notifications.
+ *
+ * intrabss_check_en
+ * If set, intra-BSS routing detection is enabled
+ *
+ * use_ppe
+ * Indicates to RXDMA to ignore the REO_destination_indication
+ * and use a programmed value corresponding to the REO2PPE
+ * ring
+ * This override to REO2PPE for packets requiring multiple
+ * buffers shall be disabled based on an RXDMA configuration,
+ * as PPE may not support such packets.
+ *
+ * Supported only in full AP chips, not in client/soft
+ * chips
+ *
+ * ppe_routing_enable
+ * Global enable/disable bit for routing to PPE, used to disable
+ * PPE routing even if RXOLE CCE or flow search indicate 'Use_PPE'
+ * This is set by SW for peers which are being handled by a
+ * host SW/accelerator subsystem that also handles packet
+ * buffer management for WiFi-to-PPE routing.
+ *
+ * This is cleared by SW for peers which are being handled
+ * by a different subsystem, completely disabling WiFi-to-PPE
+ * routing for such peers.
+ *
+ * rx_reo_queue_desc_addr_lo
+ * Address (lower 32 bits) of the REO queue descriptor.
+ *
+ * rx_reo_queue_desc_addr_hi
+ * Address (upper 8 bits) of the REO queue descriptor.
+ *
+ * receive_queue_number
+ * Indicates the MPDU queue ID to which this MPDU link
+ * descriptor belongs.
+ *
+ * pre_delim_err_warning
+ * Indicates that a delimiter FCS error was found in between the
+ * previous MPDU and this MPDU. Note that this is just a warning,
+ * and does not mean that this MPDU is corrupted in any way. If
+ * it is, there will be other errors indicated such as FCS or
+ * decrypt errors.
+ *
+ * first_delim_err
+ * Indicates that the first delimiter had a FCS failure.
+ *
+ * pn
+ * The PN number.
+ *
+ * epd_en
+ * Field only valid when AST_based_lookup_valid == 1.
+ * In case of ndp or phy_err or AST_based_lookup_valid == 0,
+ * this field will be set to 0
+ * If set to one, use EPD instead of LPD.
+ * In case of ndp or phy_err, this field will never be set.
+ *
+ * all_frames_shall_be_encrypted
+ * In case of ndp or phy_err or AST_based_lookup_valid == 0,
+ * this field will be set to 0
+ *
+ * When set, all frames (data only?) shall be encrypted. If
+ * not, RX CRYPTO shall set an error flag.
+ *
+ *
+ * encrypt_type
+ * In case of ndp or phy_err or AST_based_lookup_valid == 0,
+ * this field will be set to 0
+ *
+ * Indicates type of decrypt cipher used (as defined in the
+ * peer entry)
+ *
+ * wep_key_width_for_variable_key
+ *
+ * Field only valid when key_type is set to wep_varied_width.
+ *
+ * mesh_sta
+ *
+ * bssid_hit
+ * When set, the BSSID of the incoming frame matched one of
+ * the 8 BSSID register values
+ * bssid_number
+ * Field only valid when bssid_hit is set.
+ * This number indicates which one out of the 8 BSSID register
+ * values matched the incoming frame
+ *
+ * tid
+ * Field only valid when mpdu_qos_control_valid is set
+ * The TID field in the QoS control field
+ *
+ * peer_meta_data
+ * Meta data that SW has programmed in the Peer table entry
+ * of the transmitting STA.
+ *
+ * rxpcu_mpdu_filter_in_category
+ * Field indicates what the reason was that this mpdu frame
+ * was allowed to come into the receive path by rxpcu. Values
+ * are defined in enum %RX_DESC_RXPCU_FILTER_*.
+ *
+ * sw_frame_group_id
+ * SW processes frames based on certain classifications. Values
+ * are defined in enum %RX_DESC_SW_FRAME_GRP_ID_*.
+ *
+ * ndp_frame
+ * When set, the received frame was an NDP frame, and thus
+ * there will be no MPDU data.
+ * phy_err
+ * When set, a PHY error was received before MAC received any
+ * data, and thus there will be no MPDU data.
+ *
+ * phy_err_during_mpdu_header
+ * When set, a PHY error was received before MAC received the
+ * complete MPDU header which was needed for proper decoding
+ *
+ * protocol_version_err
+ * Set when RXPCU detected a version error in the Frame control
+ * field
+ *
+ * ast_based_lookup_valid
+ * When set, AST based lookup for this frame has found a valid
+ * result.
+ *
+ * ranging
+ * When set, a ranging NDPA or a ranging NDP was received.
+ *
+ * phy_ppdu_id
+ * A ppdu counter value that PHY increments for every PPDU
+ * received. The counter value wraps around.
+ *
+ * ast_index
+ *
+ * This field indicates the index of the AST entry corresponding
+ * to this MPDU. It is provided by the GSE module instantiated
+ * in RXPCU.
+ * A value of 0xFFFF indicates an invalid AST index, meaning
+ * that no AST entry was found or no AST search was performed.
+ *
+ * sw_peer_id
+ * In case of ndp or phy_err or AST_based_lookup_valid == 0,
+ * this field will be set to 0
+ * This field indicates a unique peer identifier. It is set
+ * equal to field 'sw_peer_id' from the AST entry
+ *
+ * frame_control_valid
+ * When set, the field Mpdu_Frame_control_field has valid information
+ *
+ * frame_duration_valid
+ * When set, the field Mpdu_duration_field has valid information
+ *
+ * mac_addr_ad1..4_valid
+ * When set, the fields mac_addr_adx_..... have valid information
+ *
+ * mpdu_seq_ctrl_valid
+ *
+ * When set, the fields mpdu_sequence_control_field and mpdu_sequence_number
+ * have valid information. For MPDUs without a sequence control
+ * field, this field will not be set.
+ *
+ * mpdu_qos_ctrl_valid, mpdu_ht_ctrl_valid
+ *
+ * When set, the fields mpdu_qos_control_field and mpdu_ht_control
+ * have valid information. For MPDUs without a QoS/HT control field,
+ * these fields will not be set.
+ *
+ * frame_encryption_info_valid
+ *
+ * When set, the encryption related info fields, like IV and
+ * PN, are valid.
+ * For MPDUs that are not encrypted, this will not be set.
+ *
+ * mpdu_fragment_number
+ *
+ * Field only valid when Mpdu_sequence_control_valid is set
+ * AND Fragment_flag is set. The fragment number from the 802.11 header
+ *
+ * more_fragment_flag
+ *
+ * The More Fragment bit setting from the MPDU header of the
+ * received frame
+ *
+ * fr_ds
+ *
+ * Field only valid when Mpdu_frame_control_valid is set
+ * Set if the from DS bit is set in the frame control.
+ *
+ * to_ds
+ *
+ * Field only valid when Mpdu_frame_control_valid is set
+ * Set if the to DS bit is set in the frame control.
+ *
+ * encrypted
+ *
+ * Field only valid when Mpdu_frame_control_valid is set.
+ * Protected bit from the frame control.
+ *
+ * mpdu_retry
+ * Field only valid when Mpdu_frame_control_valid is set.
+ * Retry bit from the frame control. Only valid when first_msdu is set
+ *
+ * mpdu_sequence_number
+ * Field only valid when Mpdu_sequence_control_valid is set.
+ *
+ * The sequence number from the 802.11 header.
+ *
+ * key_id
+ * The key ID octet from the IV.
+ * Field only valid when Frame_encryption_info_valid is set.
+ *
+ * new_peer_entry
+ * Set if new RX_PEER_ENTRY TLV follows. If clear, RX_PEER_ENTRY
+ * doesn't follow, so the RX DECRYPTION module either uses the old
+ * peer entry or does not decrypt.
+ *
+ * decrypt_needed
+ * When RXPCU sets bit 'ast_index_not_found or ast_index_timeout',
+ * RXPCU will also ensure that this bit is NOT set. CRYPTO for that
+ * reason only needs to evaluate this bit and none of the other ones.
+ *
+ * decap_type
+ * Used by the OLE during decapsulation. Values are defined in
+ * enum %MPDU_START_DECAP_TYPE_*.
+ *
+ * rx_insert_vlan_c_tag_padding
+ * rx_insert_vlan_s_tag_padding
+ * Insert 4 byte of all zeros as VLAN tag or double VLAN tag if
+ * the rx payload does not have VLAN.
+ *
+ * strip_vlan_c_tag_decap
+ * strip_vlan_s_tag_decap
+ * Strip VLAN or double VLAN during decapsulation.
+ *
+ * pre_delim_count
+ * The number of delimiters before this MPDU. Note that this
+ * number is cleared at PPDU start. If this MPDU is the first
+ * received MPDU in the PPDU and this MPDU gets filtered-in,
+ * this field will indicate the number of delimiters located
+ * after the last MPDU in the previous PPDU.
+ *
+ * If this MPDU is located after the first received MPDU in
+ * a PPDU, this field will indicate the number of delimiters
+ * located between the previous MPDU and this MPDU.
+ *
+ * ampdu_flag
+ * Received frame was part of an A-MPDU.
+ *
+ * bar_frame
+ * Received frame is a BAR frame
+ *
+ * raw_mpdu
+ * Set when no 802.11 to nwifi/ethernet hdr conversion is done
+ *
+ * mpdu_length
+ * MPDU length before decapsulation.
+ *
+ * first_mpdu
+ * Indicates the first MSDU of the PPDU. If both first_mpdu
+ * and last_mpdu are set in the MSDU then this is not an
+ * A-MPDU frame but a standalone MPDU. Interior MPDUs in an
+ * A-MPDU shall have both first_mpdu and last_mpdu bits set to
+ * 0. The PPDU start status will only be valid when this bit
+ * is set.
+ *
+ * mcast_bcast
+ * Multicast / broadcast indicator. Only set when the MAC
+ * address 1 bit 0 is set indicating mcast/bcast and the BSSID
+ * matches one of the 4 BSSID registers. Only set when
+ * first_msdu is set.
+ *
+ * ast_index_not_found
+ * Only valid when first_msdu is set. Indicates no AST matching
+ * entries within the max search count.
+ *
+ * ast_index_timeout
+ * Only valid when first_msdu is set. Indicates an unsuccessful
+ * search in the address search table due to timeout.
+ *
+ * power_mgmt
+ * Power management bit set in the 802.11 header. Only set
+ * when first_msdu is set.
+ *
+ * non_qos
+ * Set if packet is not a non-QoS data frame. Only set when
+ * first_msdu is set.
+ *
+ * null_data
+ * Set if frame type indicates either null data or QoS null
+ * data format. Only set when first_msdu is set.
+ *
+ * mgmt_type
+ * Set if packet is a management packet. Only set when
+ * first_msdu is set.
+ *
+ * ctrl_type
+ * Set if packet is a control packet. Only set when first_msdu
+ * is set.
+ *
+ * more_data
+ * Set if more bit in frame control is set. Only set when
+ * first_msdu is set.
+ *
+ * eosp
+ * Set if the EOSP (end of service period) bit in the QoS
+ * control field is set. Only set when first_msdu is set.
+ *
+ *
+ * fragment_flag
+ * Fragment indication
+ *
+ * order
+ * Set if the order bit in the frame control is set. Only
+ * set when first_msdu is set.
+ *
+ * u_apsd_trigger
+ * U-APSD trigger frame
+ *
+ * encrypt_required
+ * Indicates that this data type frame is not encrypted even if
+ * the policy for this MPDU requires encryption as indicated in
+ * the peer table key type.
+ *
+ * directed
+ * MPDU is a directed packet which means that the RA matched
+ * our STA addresses. In proxySTA it means that the TA matched
+ * an entry in our address search table with the corresponding
+ * 'no_ack' bit in the address search entry cleared.
+ * amsdu_present
+ * AMSDU present
+ *
+ * mpdu_frame_control_field
+ * Frame control field in header. Only valid when the field is marked valid.
+ *
+ * mpdu_duration_field
+ * Duration field in header. Only valid when the field is marked valid.
+ *
+ * mac_addr_adx
+ * MAC addresses in the received frame. Only valid when corresponding
+ * address valid bit is set
+ *
+ * mpdu_qos_control_field, mpdu_ht_control_field
+ * QoS/HT control fields from header. Valid only when corresponding fields
+ * are marked valid
+ *
+ * vdev_id
+ * Virtual device associated with this peer.
+ * RXOLE uses this to determine intra-BSS routing.
+ *
+ * service_code
+ * Opaque service code between PPE and Wi-Fi
+ * This field gets passed on by REO to PPE in the EDMA descriptor
+ * ('REO_TO_PPE_RING').
+ *
+ * priority_valid
+ * This field gets passed on by REO to PPE in the EDMA descriptor
+ * ('REO_TO_PPE_RING').
+ *
+ * src_info
+ * Source (virtual) device/interface info associated with
+ * this peer.
+ * This field gets passed on by REO to PPE in the EDMA descriptor
+ * ('REO_TO_PPE_RING').
+ *
+ * multi_link_addr_ad1_ad2_valid
+ * If set, Rx OLE shall convert Address1 and Address2 of received
+ * data frames to multi-link addresses during decapsulation to eth/nwifi
+ *
+ * multi_link_addr_ad1,ad2
+ * Multi-link receiver address1,2. Only valid when corresponding
+ * valid bit is set
+ *
+ * authorize_to_send_wds
+ * If not set, RXDMA shall perform error-routing for WDS packets
+ * as the sender is not authorized and might misuse WDS frame
+ * format to inject packets with arbitrary DA/SA.
+ *
+ */
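+
+/* Sketch of how the host driver would typically read the bitfields
+ * declared above (illustrative only, not part of this descriptor
+ * layout; mpdu_start names a struct rx_mpdu_start_qcn9274 instance):
+ *
+ *	u16 seq_num = le32_get_bits(mpdu_start->info4,
+ *				    RX_MPDU_START_INFO4_MPDU_SEQ_NUM);
+ *
+ * le32_get_bits() (linux/bitfield.h) masks and shifts according to the
+ * GENMASK()/BIT() definitions.
+ */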
+
+enum rx_msdu_start_pkt_type {
+ RX_MSDU_START_PKT_TYPE_11A,
+ RX_MSDU_START_PKT_TYPE_11B,
+ RX_MSDU_START_PKT_TYPE_11N,
+ RX_MSDU_START_PKT_TYPE_11AC,
+ RX_MSDU_START_PKT_TYPE_11AX,
+};
+
+enum rx_msdu_start_sgi {
+ RX_MSDU_START_SGI_0_8_US,
+ RX_MSDU_START_SGI_0_4_US,
+ RX_MSDU_START_SGI_1_6_US,
+ RX_MSDU_START_SGI_3_2_US,
+};
+
+enum rx_msdu_start_recv_bw {
+ RX_MSDU_START_RECV_BW_20MHZ,
+ RX_MSDU_START_RECV_BW_40MHZ,
+ RX_MSDU_START_RECV_BW_80MHZ,
+ RX_MSDU_START_RECV_BW_160MHZ,
+};
+
+enum rx_msdu_start_reception_type {
+ RX_MSDU_START_RECEPTION_TYPE_SU,
+ RX_MSDU_START_RECEPTION_TYPE_DL_MU_MIMO,
+ RX_MSDU_START_RECEPTION_TYPE_DL_MU_OFDMA,
+ RX_MSDU_START_RECEPTION_TYPE_DL_MU_OFDMA_MIMO,
+ RX_MSDU_START_RECEPTION_TYPE_UL_MU_MIMO,
+ RX_MSDU_START_RECEPTION_TYPE_UL_MU_OFDMA,
+ RX_MSDU_START_RECEPTION_TYPE_UL_MU_OFDMA_MIMO,
+};
+
+#define RX_MSDU_END_INFO0_RXPCU_MPDU_FITLER GENMASK(1, 0)
+#define RX_MSDU_END_INFO0_SW_FRAME_GRP_ID GENMASK(8, 2)
+
+#define RX_MSDU_END_INFO1_REPORTED_MPDU_LENGTH GENMASK(13, 0)
+
+#define RX_MSDU_END_INFO2_CCE_SUPER_RULE GENMASK(13, 8)
+#define RX_MSDU_END_INFO2_CCND_TRUNCATE BIT(14)
+#define RX_MSDU_END_INFO2_CCND_CCE_DIS BIT(15)
+
+#define RX_MSDU_END_INFO3_DA_OFFSET GENMASK(5, 0)
+#define RX_MSDU_END_INFO3_SA_OFFSET GENMASK(11, 6)
+#define RX_MSDU_END_INFO3_DA_OFFSET_VALID BIT(12)
+#define RX_MSDU_END_INFO3_SA_OFFSET_VALID BIT(13)
+
+#define RX_MSDU_END_INFO4_TCP_FLAG GENMASK(8, 0)
+#define RX_MSDU_END_INFO4_LRO_ELIGIBLE BIT(9)
+
+#define RX_MSDU_END_INFO5_SA_IDX_TIMEOUT BIT(0)
+#define RX_MSDU_END_INFO5_DA_IDX_TIMEOUT BIT(1)
+#define RX_MSDU_END_INFO5_TO_DS BIT(2)
+#define RX_MSDU_END_INFO5_TID GENMASK(6, 3)
+#define RX_MSDU_END_INFO5_SA_IS_VALID BIT(7)
+#define RX_MSDU_END_INFO5_DA_IS_VALID BIT(8)
+#define RX_MSDU_END_INFO5_DA_IS_MCBC BIT(9)
+#define RX_MSDU_END_INFO5_L3_HDR_PADDING GENMASK(11, 10)
+#define RX_MSDU_END_INFO5_FIRST_MSDU BIT(12)
+#define RX_MSDU_END_INFO5_LAST_MSDU BIT(13)
+#define RX_MSDU_END_INFO5_FROM_DS BIT(14)
+#define RX_MSDU_END_INFO5_IP_CHKSUM_FAIL_COPY BIT(15)
+
+#define RX_MSDU_END_INFO6_MSDU_DROP BIT(0)
+#define RX_MSDU_END_INFO6_REO_DEST_IND GENMASK(5, 1)
+#define RX_MSDU_END_INFO6_FLOW_IDX GENMASK(25, 6)
+#define RX_MSDU_END_INFO6_USE_PPE BIT(26)
+#define RX_MSDU_END_INFO6_MESH_STA GENMASK(28, 27)
+#define RX_MSDU_END_INFO6_VLAN_CTAG_STRIPPED BIT(29)
+#define RX_MSDU_END_INFO6_VLAN_STAG_STRIPPED BIT(30)
+#define RX_MSDU_END_INFO6_FRAGMENT_FLAG BIT(31)
+
+#define RX_MSDU_END_INFO7_AGGR_COUNT GENMASK(7, 0)
+#define RX_MSDU_END_INFO7_FLOW_AGGR_CONTN BIT(8)
+#define RX_MSDU_END_INFO7_FISA_TIMEOUT BIT(9)
+#define RX_MSDU_END_INFO7_TCPUDP_CSUM_FAIL_CPY BIT(10)
+#define RX_MSDU_END_INFO7_MSDU_LIMIT_ERROR BIT(11)
+#define RX_MSDU_END_INFO7_FLOW_IDX_TIMEOUT BIT(12)
+#define RX_MSDU_END_INFO7_FLOW_IDX_INVALID BIT(13)
+#define RX_MSDU_END_INFO7_CCE_MATCH BIT(14)
+#define RX_MSDU_END_INFO7_AMSDU_PARSER_ERR BIT(15)
+
+#define RX_MSDU_END_INFO8_KEY_ID GENMASK(7, 0)
+
+#define RX_MSDU_END_INFO9_SERVICE_CODE GENMASK(14, 6)
+#define RX_MSDU_END_INFO9_PRIORITY_VALID BIT(15)
+#define RX_MSDU_END_INFO9_INRA_BSS BIT(16)
+#define RX_MSDU_END_INFO9_DEST_CHIP_ID GENMASK(18, 17)
+#define RX_MSDU_END_INFO9_MCAST_ECHO BIT(19)
+#define RX_MSDU_END_INFO9_WDS_LEARN_EVENT BIT(20)
+#define RX_MSDU_END_INFO9_WDS_ROAM_EVENT BIT(21)
+#define RX_MSDU_END_INFO9_WDS_KEEP_ALIVE_EVENT BIT(22)
+
+#define RX_MSDU_END_INFO10_MSDU_LENGTH GENMASK(13, 0)
+#define RX_MSDU_END_INFO10_STBC BIT(14)
+#define RX_MSDU_END_INFO10_IPSEC_ESP BIT(15)
+#define RX_MSDU_END_INFO10_L3_OFFSET GENMASK(22, 16)
+#define RX_MSDU_END_INFO10_IPSEC_AH BIT(23)
+#define RX_MSDU_END_INFO10_L4_OFFSET GENMASK(31, 24)
+
+#define RX_MSDU_END_INFO11_MSDU_NUMBER GENMASK(7, 0)
+#define RX_MSDU_END_INFO11_DECAP_FORMAT GENMASK(9, 8)
+#define RX_MSDU_END_INFO11_IPV4 BIT(10)
+#define RX_MSDU_END_INFO11_IPV6 BIT(11)
+#define RX_MSDU_END_INFO11_TCP BIT(12)
+#define RX_MSDU_END_INFO11_UDP BIT(13)
+#define RX_MSDU_END_INFO11_IP_FRAG BIT(14)
+#define RX_MSDU_END_INFO11_TCP_ONLY_ACK BIT(15)
+#define RX_MSDU_END_INFO11_DA_IS_BCAST_MCAST BIT(16)
+#define RX_MSDU_END_INFO11_SEL_TOEPLITZ_HASH GENMASK(18, 17)
+#define RX_MSDU_END_INFO11_IP_FIXED_HDR_VALID BIT(19)
+#define RX_MSDU_END_INFO11_IP_EXTN_HDR_VALID BIT(20)
+#define RX_MSDU_END_INFO11_IP_TCP_UDP_HDR_VALID BIT(21)
+#define RX_MSDU_END_INFO11_MESH_CTRL_PRESENT BIT(22)
+#define RX_MSDU_END_INFO11_LDPC BIT(23)
+#define RX_MSDU_END_INFO11_IP4_IP6_NXT_HDR GENMASK(31, 24)
+
+#define RX_MSDU_END_INFO12_USER_RSSI GENMASK(7, 0)
+#define RX_MSDU_END_INFO12_PKT_TYPE GENMASK(11, 8)
+#define RX_MSDU_END_INFO12_SGI GENMASK(13, 12)
+#define RX_MSDU_END_INFO12_RATE_MCS GENMASK(17, 14)
+#define RX_MSDU_END_INFO12_RECV_BW GENMASK(20, 18)
+#define RX_MSDU_END_INFO12_RECEPTION_TYPE GENMASK(23, 21)
+#define RX_MSDU_END_INFO12_MIMO_SS_BITMAP GENMASK(30, 24)
+#define RX_MSDU_END_INFO12_MIMO_DONE_COPY BIT(31)
+
+#define RX_MSDU_END_INFO13_FIRST_MPDU BIT(0)
+#define RX_MSDU_END_INFO13_MCAST_BCAST BIT(2)
+#define RX_MSDU_END_INFO13_AST_IDX_NOT_FOUND BIT(3)
+#define RX_MSDU_END_INFO13_AST_IDX_TIMEDOUT BIT(4)
+#define RX_MSDU_END_INFO13_POWER_MGMT BIT(5)
+#define RX_MSDU_END_INFO13_NON_QOS BIT(6)
+#define RX_MSDU_END_INFO13_NULL_DATA BIT(7)
+#define RX_MSDU_END_INFO13_MGMT_TYPE BIT(8)
+#define RX_MSDU_END_INFO13_CTRL_TYPE BIT(9)
+#define RX_MSDU_END_INFO13_MORE_DATA BIT(10)
+#define RX_MSDU_END_INFO13_EOSP BIT(11)
+#define RX_MSDU_END_INFO13_A_MSDU_ERROR BIT(12)
+#define RX_MSDU_END_INFO13_ORDER BIT(14)
+#define RX_MSDU_END_INFO13_WIFI_PARSER_ERR BIT(15)
+#define RX_MSDU_END_INFO13_OVERFLOW_ERR BIT(16)
+#define RX_MSDU_END_INFO13_MSDU_LEN_ERR BIT(17)
+#define RX_MSDU_END_INFO13_TCP_UDP_CKSUM_FAIL BIT(18)
+#define RX_MSDU_END_INFO13_IP_CKSUM_FAIL BIT(19)
+#define RX_MSDU_END_INFO13_SA_IDX_INVALID BIT(20)
+#define RX_MSDU_END_INFO13_DA_IDX_INVALID BIT(21)
+#define RX_MSDU_END_INFO13_AMSDU_ADDR_MISMATCH BIT(22)
+#define RX_MSDU_END_INFO13_RX_IN_TX_DECRYPT_BYP BIT(23)
+#define RX_MSDU_END_INFO13_ENCRYPT_REQUIRED BIT(24)
+#define RX_MSDU_END_INFO13_DIRECTED BIT(25)
+#define RX_MSDU_END_INFO13_BUFFER_FRAGMENT BIT(26)
+#define RX_MSDU_END_INFO13_MPDU_LEN_ERR BIT(27)
+#define RX_MSDU_END_INFO13_TKIP_MIC_ERR BIT(28)
+#define RX_MSDU_END_INFO13_DECRYPT_ERR BIT(29)
+#define RX_MSDU_END_INFO13_UNDECRYPT_FRAME_ERR BIT(30)
+#define RX_MSDU_END_INFO13_FCS_ERR BIT(31)
+
+#define RX_MSDU_END_INFO14_DECRYPT_STATUS_CODE GENMASK(12, 10)
+#define RX_MSDU_END_INFO14_RX_BITMAP_NOT_UPDED BIT(13)
+#define RX_MSDU_END_INFO14_MSDU_DONE BIT(31)
+
+struct rx_msdu_end_qcn9274 {
+ __le16 info0;
+ __le16 phy_ppdu_id;
+ __le16 ip_hdr_cksum;
+ __le16 info1;
+ __le16 info2;
+ __le16 cumulative_l3_checksum;
+ __le32 rule_indication0;
+ __le32 ipv6_options_crc;
+ __le16 info3;
+ __le16 l3_type;
+ __le32 rule_indication1;
+ __le32 tcp_seq_num;
+ __le32 tcp_ack_num;
+ __le16 info4;
+ __le16 window_size;
+ __le16 sa_sw_peer_id;
+ __le16 info5;
+ __le16 sa_idx;
+ __le16 da_idx_or_sw_peer_id;
+ __le32 info6;
+ __le32 fse_metadata;
+ __le16 cce_metadata;
+ __le16 tcp_udp_cksum;
+ __le16 info7;
+ __le16 cumulative_ip_length;
+ __le32 info8;
+ __le32 info9;
+ __le32 info10;
+ __le32 info11;
+ __le16 vlan_ctag_ci;
+ __le16 vlan_stag_ci;
+ __le32 peer_meta_data;
+ __le32 info12;
+ __le32 flow_id_toeplitz;
+ __le32 ppdu_start_timestamp_63_32;
+ __le32 phy_meta_data;
+ __le32 ppdu_start_timestamp_31_0;
+ __le32 toeplitz_hash_2_or_4;
+ __le16 res0;
+ __le16 sa_15_0;
+ __le32 sa_47_16;
+ __le32 info13;
+ __le32 info14;
+} __packed;
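+
+/* As with rx_mpdu_start, the info words above are decoded with the
+ * bitfield helpers, e.g. le16_get_bits(msdu_end->info5,
+ * RX_MSDU_END_INFO5_FIRST_MSDU) for the first_msdu flag (illustrative
+ * only; msdu_end names a struct rx_msdu_end_qcn9274 instance).
+ */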
+
+/* rx_msdu_end
+ *
+ * rxpcu_mpdu_filter_in_category
+ * Field indicates what the reason was that this mpdu frame
+ * was allowed to come into the receive path by rxpcu. Values
+ * are defined in enum %RX_DESC_RXPCU_FILTER_*.
+ *
+ * sw_frame_group_id
+ * SW processes frames based on certain classifications. Values
+ * are defined in enum %RX_DESC_SW_FRAME_GRP_ID_*.
+ *
+ * phy_ppdu_id
+ * A ppdu counter value that PHY increments for every PPDU
+ * received. The counter value wraps around.
+ *
+ * ip_hdr_cksum
+ * This can include the IP header checksum or the pseudo
+ * header checksum used by TCP/UDP checksum.
+ *
+ * reported_mpdu_length
+ * MPDU length before decapsulation. Only valid when first_msdu is
+ * set. This field is taken directly from the length field of the
+ * A-MPDU delimiter or the preamble length field for non-A-MPDU
+ * frames.
+ *
+ * cce_super_rule
+ * Indicates the super filter rule.
+ *
+ * cce_classify_not_done_truncate
+ * Classification failed due to truncated frame.
+ *
+ * cce_classify_not_done_cce_dis
+ * Classification failed due to CCE global disable
+ *
+ * cumulative_l3_checksum
+ * FISA: IP header checksum including the total MSDU length
+ * that is part of this flow aggregated so far, reported if
+ * 'RXOLE_R0_FISA_CTRL.CHKSUM_CUM_IP_LEN_EN' is set
+ *
+ * rule_indication
+ * Bitmap indicating which of the rules have matched.
+ *
+ * ipv6_options_crc
+ * 32 bit CRC computed out of IP v6 extension headers.
+ *
+ * da_offset
+ * Offset into MSDU buffer for DA.
+ *
+ * sa_offset
+ * Offset into MSDU buffer for SA.
+ *
+ * da_offset_valid
+ * da_offset field is valid. This will be set to 0 in case
+ * of a dynamic A-MSDU when DA is compressed.
+ *
+ * sa_offset_valid
+ * sa_offset field is valid. This will be set to 0 in case
+ * of a dynamic A-MSDU when SA is compressed.
+ *
+ * l3_type
+ * The 16-bit type value indicating the type of the L3 layer
+ * extracted from LLC/SNAP, set to zero if SNAP is not
+ * available.
+ *
+ * tcp_seq_number
+ * TCP sequence number.
+ *
+ * tcp_ack_number
+ * TCP acknowledge number.
+ *
+ * tcp_flag
+ * TCP flags {NS, CWR, ECE, URG, ACK, PSH, RST, SYN, FIN}.
+ *
+ * lro_eligible
+ * Computed out of TCP and IP fields to indicate that this
+ * MSDU is eligible for LRO.
+ *
+ * window_size
+ * TCP receive window size.
+ *
+ * sa_sw_peer_id
+ * sw_peer_id from the address search entry corresponding to the
+ * source address of the MSDU.
+ *
+ * sa_idx_timeout
+ * Indicates an unsuccessful MAC source address search due to the
+ * expiring of the search timer.
+ *
+ * da_idx_timeout
+ * Indicates an unsuccessful MAC destination address search due to
+ * the expiring of the search timer.
+ *
+ * to_ds
+ * Set if the to DS bit is set in the frame control.
+ *
+ * tid
+ * TID field in the QoS control field
+ *
+ * sa_is_valid
+ * Indicates that OLE found a valid SA entry.
+ *
+ * da_is_valid
+ * Indicates that OLE found a valid DA entry.
+ *
+ * da_is_mcbc
+ * Field only valid if da_is_valid is set. Indicates the DA address
+ * was a Multicast or Broadcast address.
+ *
+ * l3_header_padding
+ * Number of bytes padded to make sure that the L3 header will
+ * always start on a Dword boundary.
+ *
+ * first_msdu
+ * Indicates the first MSDU of A-MSDU. If both first_msdu and
+ * last_msdu are set in the MSDU then this is a non-aggregated MSDU
+ * frame: normal MPDU. Interior MSDU in an A-MSDU shall have both
+ * first_msdu and last_msdu bits set to 0.
+ *
+ * last_msdu
+ * Indicates the last MSDU of the A-MSDU. MPDU end status is only
+ * valid when last_msdu is set.
+ *
+ * fr_ds
+ * Set if the from DS bit is set in the frame control.
+ *
+ * ip_chksum_fail_copy
+ * Indicates that the computed checksum did not match the
+ * checksum in the IP header.
+ *
+ * sa_idx
+ * The offset in the address table which matches the MAC source
+ * address.
+ *
+ * da_idx_or_sw_peer_id
+ * Based on a register configuration in RXOLE, this field will
+ * contain:
+ * The offset in the address table which matches the MAC destination
+ * address
+ * OR:
+ * sw_peer_id from the address search entry corresponding to
+ * the destination address of the MSDU
+ *
+ * msdu_drop
+ * REO shall drop this MSDU and not forward it to any other ring.
+ *
+ * reo_destination_indication
+ * The id of the reo exit ring where the msdu frame shall be pushed
+ * after (MPDU level) reordering has finished. Values are defined
+ * in enum %HAL_RX_MSDU_DESC_REO_DEST_IND_.
+ *
+ * flow_idx
+ * Flow table index.
+ *
+ * use_ppe
+ * Indicates to RXDMA to ignore the REO_destination_indication
+ * and use a programmed value corresponding to the REO2PPE
+ * ring
+ *
+ * mesh_sta
+ * When set, this is a Mesh (11s) STA.
+ *
+ * vlan_ctag_stripped
+ * Set by RXOLE if it stripped 4-bytes of C-VLAN Tag from the
+ * packet
+ *
+ * vlan_stag_stripped
+ * Set by RXOLE if it stripped 4-bytes of S-VLAN Tag from the
+ * packet
+ *
+ * fragment_flag
+ * Indicates that this is an 802.11 fragment frame. This is
+ * set when either the more_frag bit is set in the frame control
+ * or the fragment number is not zero. Only set when first_msdu
+ * is set.
+ *
+ * fse_metadata
+ * FSE related meta data.
+ *
+ * cce_metadata
+ * CCE related meta data.
+ *
+ * tcp_udp_chksum
+ * The value of the computed TCP/UDP checksum. A mode bit
+ * selects whether this checksum is the full checksum or the
+ * partial checksum which does not include the pseudo header.
+ *
+ * aggregation_count
+ * Number of MSDUs aggregated so far.
+ *
+ * flow_aggregation_continuation
+ * To indicate that this MSDU can be aggregated with
+ * the previous packet with the same flow id
+ *
+ * fisa_timeout
+ * To indicate that the aggregation has restarted for
+ * this flow due to timeout
+ *
+ * tcp_udp_chksum_fail
+ * Indicates that the computed checksum (tcp_udp_chksum) did
+ * not match the checksum in the TCP/UDP header.
+ *
+ * msdu_limit_error
+ * Indicates that the MSDU threshold was exceeded and thus all the
+ * rest of the MSDUs will not be scattered and will not be
+ * decapsulated but will be DMA'ed in RAW format as a single MSDU.
+ *
+ * flow_idx_timeout
+ * Indicates an unsuccessful flow search due to the expiring of
+ * the search timer.
+ *
+ * flow_idx_invalid
+ * flow id is not valid.
+ *
+ * cce_match
+ * Indicates that this status has a corresponding MSDU that
+ * requires FW processing. The OLE will have classification
+ * ring mask registers which will indicate the ring(s) for
+ * packets and descriptors which need FW attention.
+ *
+ * amsdu_parser_error
+ * A-MSDU could not be properly de-aggregated.
+ *
+ * cumulative_ip_length
+ * Total MSDU length that is part of this flow aggregated
+ * so far
+ *
+ * key_id
+ * The key ID octet from the IV. Only valid when first_msdu is set.
+ *
+ * service_code
+ * Opaque service code between PPE and Wi-Fi
+ *
+ * priority_valid
+ * This field gets passed on by REO to PPE in the EDMA descriptor
+ *
+ * intra_bss
+ * This packet needs intra-BSS routing by SW as the 'vdev_id'
+ * for the destination is the same as the 'vdev_id' (from
+ * 'RX_MPDU_PCU_START') in which this MSDU was received.
+ *
+ * dest_chip_id
+ * If intra_bss is set, copied by RXOLE from 'ADDR_SEARCH_ENTRY'
+ * to support intra-BSS routing with multi-chip multi-link
+ * operation. This indicates into which chip's TCL the packet should be
+ * queued.
+ *
+ * multicast_echo
+ * If set, this packet is a multicast echo, i.e. the DA is
+ * multicast and Rx OLE SA search with mcast_echo_check = 1
+ * passed. RXDMA should release such packets to WBM.
+ *
+ * wds_learning_event
+ * If set, this packet has an SA search failure with WDS learning
+ * enabled for the peer. RXOLE should route this TLV to the
+ * RXDMA0 status ring to notify FW.
+ *
+ * wds_roaming_event
+ * If set, this packet's SA 'Sw_peer_id' mismatches the 'Sw_peer_id'
+ * of the peer through which the packet was received, indicating
+ * the SA node has roamed. RXOLE should route this TLV to
+ * the RXDMA0 status ring to notify FW.
+ *
+ * wds_keep_alive_event
+ * If set, the AST timestamp for this packet's SA is older
+ * than the current timestamp by more than a threshold programmed
+ * in RXOLE. RXOLE should route this TLV to the RXDMA0 status
+ * ring to notify FW to keep the AST entry for the SA alive.
+ *
+ * msdu_length
+ * MSDU length in bytes after decapsulation.
+ * This field is still valid for MPDU frames without A-MSDU.
+ * It still represents MSDU length after decapsulation
+ *
+ * stbc
+ * When set, use STBC transmission rates.
+ *
+ * ipsec_esp
+ * Set if IPv4/v6 packet is using IPsec ESP.
+ *
+ * l3_offset
+ * Depending upon mode bit, this field either indicates the
+ * L3 offset in bytes from the start of the RX_HEADER or the IP
+ * offset in bytes from the start of the packet after
+ * decapsulation. The latter is only valid if ipv4_proto or
+ * ipv6_proto is set.
+ *
+ * ipsec_ah
+ * Set if IPv4/v6 packet is using IPsec AH
+ *
+ * l4_offset
+ * Depending upon mode bit, this field either indicates the
+ * L4 offset in bytes from the start of RX_HEADER (only valid
+ * if either ipv4_proto or ipv6_proto is set to 1) or indicates
+ * the offset in bytes to the start of TCP or UDP header from
+ * the start of the IP header after decapsulation (only valid if
+ * tcp_proto or udp_proto is set). The value 0 indicates that
+ * the offset is longer than 127 bytes.
+ *
+ * msdu_number
+ * Indicates the MSDU number within a MPDU. This value is
+ * reset to zero at the start of each MPDU. If the number of
+ * MSDUs exceeds 255 this number will wrap using modulo 256.
+ *
+ * decap_type
+ * Indicates the format after decapsulation. Values are defined in
+ * enum %MPDU_START_DECAP_TYPE_*.
+ *
+ * ipv4_proto
+ * Set if L2 layer indicates IPv4 protocol.
+ *
+ * ipv6_proto
+ * Set if L2 layer indicates IPv6 protocol.
+ *
+ * tcp_proto
+ * Set if the ipv4_proto or ipv6_proto are set and the IP protocol
+ * indicates TCP.
+ *
+ * udp_proto
+ * Set if the ipv4_proto or ipv6_proto are set and the IP protocol
+ * indicates UDP.
+ *
+ * ip_frag
+ * Set if either the IP 'more fragments' bit is set or the IP
+ * fragment number is non-zero, indicating that this is a
+ * fragmented IP packet.
+ *
+ * tcp_only_ack
+ * Set if only the TCP Ack bit is set in the TCP flags and if
+ * the TCP payload is 0.
+ *
+ * da_is_bcast_mcast
+ * The destination address is broadcast or multicast.
+ *
+ * toeplitz_hash
+ * Actual chosen Hash.
+ * 0 - Toeplitz hash of 2-tuple (IP source address, IP
+ * destination address)
+ * 1 - Toeplitz hash of 4-tuple (IP source address,
+ * IP destination address, L4 (TCP/UDP) source port,
+ * L4 (TCP/UDP) destination port)
+ * 2 - Toeplitz of flow_id
+ * 3 - Zero is used
+ *
+ * ip_fixed_header_valid
+ * Fixed 20-byte IPv4 header or 40-byte IPv6 header parsed
+ * fully within first 256 bytes of the packet
+ *
+ * ip_extn_header_valid
+ * IPv4/IPv6 header, including IPv4 options and
+ * recognizable extension headers parsed fully within first 256
+ * bytes of the packet
+ *
+ * tcp_udp_header_valid
+ * Fixed 20-byte TCP (excluding TCP options) or 8-byte UDP
+ * header parsed fully within first 256 bytes of the packet
+ *
+ * mesh_control_present
+ * When set, this MSDU includes the 'Mesh Control' field
+ *
+ * ldpc
+ * When set, indicates that LDPC coding was used for this packet.
+ *
+ * ip4_protocol_ip6_next_header
+ * For IPv4, this is the 8 bit protocol field. For IPv6 this
+ * is the 8 bit next_header field.
+ *
+ * vlan_ctag_ci
+ * 2 bytes of C-VLAN Tag Control Information from WHO_L2_LLC
+ *
+ * vlan_stag_ci
+ * 2 bytes of S-VLAN Tag Control Information from WHO_L2_LLC
+ * in case of double VLAN
+ *
+ * peer_meta_data
+ * Meta data that SW has programmed in the Peer table entry
+ * of the transmitting STA.
+ *
+ * user_rssi
+ * RSSI for this user
+ *
+ * pkt_type
+ * Values are defined in enum %RX_MSDU_START_PKT_TYPE_*.
+ *
+ * sgi
+ * Field only valid when pkt type is HT, VHT or HE. Values are
+ * defined in enum %RX_MSDU_START_SGI_*.
+ *
+ * rate_mcs
+ * MCS Rate used.
+ *
+ * receive_bandwidth
+ * Full receive Bandwidth. Values are defined in enum
+ * %RX_MSDU_START_RECV_*.
+ *
+ * reception_type
+ * Indicates what type of reception this is and defined in enum
+ * %RX_MSDU_START_RECEPTION_TYPE_*.
+ *
+ * mimo_ss_bitmap
+ * Field only valid when
+ * Reception_type is RX_MSDU_START_RECEPTION_TYPE_DL_MU_MIMO or
+ * RX_MSDU_START_RECEPTION_TYPE_DL_MU_OFDMA_MIMO.
+ *
+ * Bitmap, with each bit indicating if the related spatial
+ * stream is used for this STA
+ *
+ * LSB related to SS 0
+ *
+ * 0 - spatial stream not used for this reception
+ * 1 - spatial stream used for this reception
+ *
+ * msdu_done_copy
+ * If set indicates that the RX packet data, RX header data,
+ * RX PPDU start descriptor, RX MPDU start/end descriptor,
+ * RX MSDU start/end descriptors and RX Attention descriptor
+ * are all valid. This bit is in the last 64 bits of the
+ * descriptor and is expected to be subscribed in future hardware.
+ *
+ * flow_id_toeplitz
+ * Toeplitz hash of 5-tuple
+ * {IP source address, IP destination address, IP source port, IP
+ * destination port, L4 protocol} in case of non-IPSec.
+ *
+ * In case of IPSec - Toeplitz hash of 4-tuple
+ * {IP source address, IP destination address, SPI, L4 protocol}
+ *
+ * The relevant Toeplitz key registers are provided in RxOLE's
+ * instance of common parser module. These registers are separate
+ * from the Toeplitz keys used by ASE/FSE modules inside RxOLE.
+ * The actual value will be passed on from common parser module
+ * to RxOLE in one of the WHO_* TLVs.
+ *
+ * ppdu_start_timestamp
+ * Timestamp that indicates when the PPDU that contained this MPDU
+ * started on the medium.
+ *
+ * phy_meta_data
+ * SW programmed Meta data provided by the PHY. Can be used for SW
+ * to indicate the channel the device is on.
+ *
+ * toeplitz_hash_2_or_4
+ * Controlled by multiple RxOLE registers for TCP/UDP over
+ * IPv4/IPv6 - Either, Toeplitz hash computed over 2-tuple
+ * IPv4 or IPv6 src/dest addresses is reported; or, Toeplitz
+ * hash computed over 4-tuple IPv4 or IPv6 src/dest addresses
+ * and src/dest ports is reported. The Flow_id_toeplitz hash
+ * can also be reported here. Usually the hash reported here
+ * is the one used for hash-based REO routing (see use_flow_id_toeplitz_clfy
+ * in 'RXPT_CLASSIFY_INFO').
+ *
+ * sa
+ * Source MAC address
+ *
+ * first_mpdu
+ * Indicates the first MPDU of the PPDU. If both first_mpdu
+ * and last_mpdu are set in the MPDU then this is not an
+ * A-MPDU frame but a stand-alone MPDU. Interior MPDUs in an
+ * A-MPDU shall have both first_mpdu and last_mpdu bits set to
+ * 0. The PPDU start status will only be valid when this bit
+ * is set.
+ *
+ * mcast_bcast
+ * Multicast / broadcast indicator. Only set when the MAC
+ * address 1 bit 0 is set indicating mcast/bcast and the BSSID
+ * matches one of the 4 BSSID registers. Only set when
+ * first_msdu is set.
+ *
+ * ast_index_not_found
+ * Only valid when first_msdu is set. Indicates no AST matching
+ * entries within the max search count.
+ *
+ * ast_index_timeout
+ * Only valid when first_msdu is set. Indicates an unsuccessful
+ * search in the address search table due to timeout.
+ *
+ * power_mgmt
+ * Power management bit set in the 802.11 header. Only set
+ * when first_msdu is set.
+ *
+ * non_qos
+ * Set if packet is not a QoS data frame. Only set when
+ * first_msdu is set.
+ *
+ * null_data
+ * Set if frame type indicates either null data or QoS null
+ * data format. Only set when first_msdu is set.
+ *
+ * mgmt_type
+ * Set if packet is a management packet. Only set when
+ * first_msdu is set.
+ *
+ * ctrl_type
+ * Set if packet is a control packet. Only set when first_msdu
+ * is set.
+ *
+ * more_data
+ * Set if more bit in frame control is set. Only set when
+ * first_msdu is set.
+ *
+ * eosp
+ * Set if the EOSP (end of service period) bit in the QoS
+ * control field is set. Only set when first_msdu is set.
+ *
+ * a_msdu_error
+ * Set if number of MSDUs in A-MSDU is above a threshold or if the
+ * size of the MSDU is invalid. This receive buffer will contain
+ * all of the remainder of MSDUs in this MPDU w/o decapsulation.
+ *
+ * order
+ * Set if the order bit in the frame control is set. Only
+ * set when first_msdu is set.
+ *
+ * wifi_parser_error
+ * Indicates that the WiFi frame has one of the following errors
+ *
+ * overflow_err
+ * RXPCU Receive FIFO ran out of space to receive the full MPDU.
+ * Therefore this MPDU is terminated early and is thus corrupted.
+ *
+ * This MPDU will not be ACKed.
+ *
+ * RXPCU might still be able to correctly receive the following
+ * MPDUs in the PPDU if enough fifo space became available in time.
+ *
+ * mpdu_length_err
+ * Set by RXPCU if the expected MPDU length does not correspond
+ * with the actually received number of bytes in the MPDU.
+ *
+ * tcp_udp_chksum_fail
+ * Indicates that the computed checksum (tcp_udp_chksum) did
+ * not match the checksum in the TCP/UDP header.
+ *
+ * ip_chksum_fail
+ * Indicates that the computed checksum did not match the
+ * checksum in the IP header.
+ *
+ * sa_idx_invalid
+ * Indicates no matching entry was found in the address search
+ * table for the source MAC address.
+ *
+ * da_idx_invalid
+ * Indicates no matching entry was found in the address search
+ * table for the destination MAC address.
+ *
+ * amsdu_addr_mismatch
+ * Indicates that an A-MSDU with 'from DS = 0' had an SA mismatching
+ * the TA, or an A-MSDU with 'to DS = 0' had a DA mismatching the RA.
+ *
+ * rx_in_tx_decrypt_byp
+ * Indicates that RX packet is not decrypted as Crypto is busy
+ * with TX packet processing.
+ *
+ * encrypt_required
+ * Indicates that this data type frame is not encrypted even if
+ * the policy for this MPDU requires encryption as indicated in
+ * the peer table key type.
+ *
+ * directed
+ * MPDU is a directed packet which means that the RA matched
+ * our STA addresses. In proxySTA it means that the TA matched
+ * an entry in our address search table with the corresponding
+ * 'no_ack' bit in the address search entry cleared.
+ *
+ * buffer_fragment
+ * Indicates that at least one of the rx buffers has been
+ * fragmented. If set the FW should look at the rx_frag_info
+ * descriptor described below.
+ *
+ * mpdu_length_err
+ * Indicates that the MPDU was prematurely terminated
+ * resulting in a truncated MPDU. Don't trust the MPDU length
+ * field.
+ *
+ * tkip_mic_err
+ * Indicates that the MPDU Michael integrity check failed
+ *
+ * decrypt_err
+ * Indicates that the MPDU decrypt integrity check failed
+ *
+ * fcs_err
+ * Indicates that the MPDU FCS check failed
+ *
+ * flow_idx_timeout
+ * Indicates an unsuccessful flow search due to the expiring of
+ * the search timer.
+ *
+ * flow_idx_invalid
+ * flow id is not valid.
+ *
+ * decrypt_status_code
+ * Field provides insight into the decryption performed. Values
+ * are defined in enum %RX_DESC_DECRYPT_STATUS_CODE_*.
+ *
+ * rx_bitmap_not_updated
+ * Frame is received, but RXPCU could not update the receive bitmap
+ * due to (temporary) fifo constraints.
+ *
+ * msdu_done
+ * If set indicates that the RX packet data, RX header data, RX
+ * PPDU start descriptor, RX MPDU start/end descriptor, RX MSDU
+ * start/end descriptors and RX Attention descriptor are all
+ * valid. This bit must be in the last octet of the
+ * descriptor.
+ *
+ */
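+
+/* Illustrative sketch, not part of the descriptor layout: how a
+ * Toeplitz hash such as flow_id_toeplitz above can be computed in
+ * software. The key layout here is an assumption for the example;
+ * the real keys live in RxOLE's common parser registers. The key
+ * must provide at least len + 4 bytes.
+ */
+static inline u32 ath12k_toeplitz_hash_example(const u8 *key,
+ const u8 *data, size_t len)
+{
+ /* 32-bit window over the key, advanced one bit per input bit */
+ u32 window = ((u32)key[0] << 24) | (key[1] << 16) | (key[2] << 8) | key[3];
+ u32 hash = 0;
+ size_t i;
+ int bit;
+
+ for (i = 0; i < len; i++) {
+ for (bit = 7; bit >= 0; bit--) {
+ /* XOR in the current key window for every input bit set */
+ if (data[i] & BIT(bit))
+ hash ^= window;
+ /* slide the window left by one, pulling in key byte i + 4 */
+ window = (window << 1) | !!(key[i + 4] & BIT(bit));
+ }
+ }
+
+ return hash;
+}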
+
+/* TODO: Move to compact TLV approach
+ * By default these TLVs are not aligned to a 128b boundary.
+ * Need to remove unused qwords and make them compact/aligned.
+ */
+struct hal_rx_desc_qcn9274 {
+ struct rx_msdu_end_qcn9274 msdu_end;
+ struct rx_mpdu_start_qcn9274 mpdu_start;
+ u8 msdu_payload[];
+} __packed;
+
+#define RX_BE_PADDING0_BYTES 8
+#define RX_BE_PADDING1_BYTES 8
+
+#define HAL_RX_BE_PKT_HDR_TLV_LEN 112
+
+struct rx_pkt_hdr_tlv {
+ __le64 tag;
+ __le64 phy_ppdu_id;
+ u8 rx_pkt_hdr[HAL_RX_BE_PKT_HDR_TLV_LEN];
+};
+
+struct hal_rx_desc_wcn7850 {
+ __le64 msdu_end_tag;
+ struct rx_msdu_end_qcn9274 msdu_end;
+ u8 rx_padding0[RX_BE_PADDING0_BYTES];
+ __le64 mpdu_start_tag;
+ struct rx_mpdu_start_qcn9274 mpdu_start;
+ struct rx_pkt_hdr_tlv pkt_hdr_tlv;
+ u8 msdu_payload[];
+};
+
+struct hal_rx_desc {
+ union {
+ struct hal_rx_desc_qcn9274 qcn9274;
+ struct hal_rx_desc_wcn7850 wcn7850;
+ } u;
+} __packed;
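+
+/* Illustrative only: common code can use the union above to pick the
+ * right payload offset for whichever descriptor layout the hardware
+ * uses. The helper below is a sketch, not driver API.
+ */
+static inline size_t ath12k_hal_rx_desc_payload_offset(bool is_wcn7850)
+{
+ return is_wcn7850 ?
+ offsetof(struct hal_rx_desc, u.wcn7850.msdu_payload) :
+ offsetof(struct hal_rx_desc, u.qcn9274.msdu_payload);
+}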
+
+#define MAX_USER_POS 8
+#define MAX_MU_GROUP_ID 64
+#define MAX_MU_GROUP_SHOW 16
+#define MAX_MU_GROUP_LENGTH (6 * MAX_MU_GROUP_SHOW)
+
+#define HAL_RX_RU_ALLOC_TYPE_MAX 6
+#define RU_26 1
+#define RU_52 2
+#define RU_106 4
+#define RU_242 9
+#define RU_484 18
+#define RU_996 37
+
+#endif /* ATH12K_RX_DESC_H */
diff --git a/drivers/net/wireless/ath/ath12k/trace.c b/drivers/net/wireless/ath/ath12k/trace.c
new file mode 100644
index 000000000000..0d0edf4204b7
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/trace.c
@@ -0,0 +1,10 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include <linux/module.h>
+
+#define CREATE_TRACE_POINTS
+#include "trace.h"
diff --git a/drivers/net/wireless/ath/ath12k/trace.h b/drivers/net/wireless/ath/ath12k/trace.h
new file mode 100644
index 000000000000..f72096684b74
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/trace.h
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#if !defined(_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
+
+#include <linux/tracepoint.h>
+#include "core.h"
+
+#define _TRACE_H_
+
+/* create empty functions when tracing is disabled */
+#if !defined(CONFIG_ATH12K_TRACING)
+#undef TRACE_EVENT
+#define TRACE_EVENT(name, proto, ...) \
+static inline void trace_ ## name(proto) {}
+#endif /* !CONFIG_ATH12K_TRACING */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM ath12k
+
+TRACE_EVENT(ath12k_htt_pktlog,
+ TP_PROTO(struct ath12k *ar, const void *buf, u16 buf_len,
+ u32 pktlog_checksum),
+
+ TP_ARGS(ar, buf, buf_len, pktlog_checksum),
+
+ TP_STRUCT__entry(
+ __string(device, dev_name(ar->ab->dev))
+ __string(driver, dev_driver_string(ar->ab->dev))
+ __field(u16, buf_len)
+ __field(u32, pktlog_checksum)
+ __dynamic_array(u8, pktlog, buf_len)
+ ),
+
+ TP_fast_assign(
+ __assign_str(device, dev_name(ar->ab->dev));
+ __assign_str(driver, dev_driver_string(ar->ab->dev));
+ __entry->buf_len = buf_len;
+ __entry->pktlog_checksum = pktlog_checksum;
+ memcpy(__get_dynamic_array(pktlog), buf, buf_len);
+ ),
+
+ TP_printk(
+ "%s %s size %u pktlog_checksum %d",
+ __get_str(driver),
+ __get_str(device),
+ __entry->buf_len,
+ __entry->pktlog_checksum
+ )
+);
+
+TRACE_EVENT(ath12k_htt_ppdu_stats,
+ TP_PROTO(struct ath12k *ar, const void *data, size_t len),
+
+ TP_ARGS(ar, data, len),
+
+ TP_STRUCT__entry(
+ __string(device, dev_name(ar->ab->dev))
+ __string(driver, dev_driver_string(ar->ab->dev))
+ __field(u16, len)
+ __field(u32, info)
+ __field(u32, sync_tstmp_lo_us)
+ __field(u32, sync_tstmp_hi_us)
+ __field(u32, mlo_offset_lo)
+ __field(u32, mlo_offset_hi)
+ __field(u32, mlo_offset_clks)
+ __field(u32, mlo_comp_clks)
+ __field(u32, mlo_comp_timer)
+ __dynamic_array(u8, ppdu, len)
+ ),
+
+ TP_fast_assign(
+ __assign_str(device, dev_name(ar->ab->dev));
+ __assign_str(driver, dev_driver_string(ar->ab->dev));
+ __entry->len = len;
+ __entry->info = ar->pdev->timestamp.info;
+ __entry->sync_tstmp_lo_us = ar->pdev->timestamp.sync_timestamp_lo_us;
+ __entry->sync_tstmp_hi_us = ar->pdev->timestamp.sync_timestamp_hi_us;
+ __entry->mlo_offset_lo = ar->pdev->timestamp.mlo_offset_lo;
+ __entry->mlo_offset_hi = ar->pdev->timestamp.mlo_offset_hi;
+ __entry->mlo_offset_clks = ar->pdev->timestamp.mlo_offset_clks;
+ __entry->mlo_comp_clks = ar->pdev->timestamp.mlo_comp_clks;
+ __entry->mlo_comp_timer = ar->pdev->timestamp.mlo_comp_timer;
+ memcpy(__get_dynamic_array(ppdu), data, len);
+ ),
+
+ TP_printk(
+ "%s %s ppdu len %d",
+ __get_str(driver),
+ __get_str(device),
+ __entry->len
+ )
+);
+
+TRACE_EVENT(ath12k_htt_rxdesc,
+ TP_PROTO(struct ath12k *ar, const void *data, size_t type, size_t len),
+
+ TP_ARGS(ar, data, type, len),
+
+ TP_STRUCT__entry(
+ __string(device, dev_name(ar->ab->dev))
+ __string(driver, dev_driver_string(ar->ab->dev))
+ __field(u16, len)
+ __field(u16, type)
+ __field(u32, info)
+ __field(u32, sync_tstmp_lo_us)
+ __field(u32, sync_tstmp_hi_us)
+ __field(u32, mlo_offset_lo)
+ __field(u32, mlo_offset_hi)
+ __field(u32, mlo_offset_clks)
+ __field(u32, mlo_comp_clks)
+ __field(u32, mlo_comp_timer)
+ __dynamic_array(u8, rxdesc, len)
+ ),
+
+ TP_fast_assign(
+ __assign_str(device, dev_name(ar->ab->dev));
+ __assign_str(driver, dev_driver_string(ar->ab->dev));
+ __entry->len = len;
+ __entry->type = type;
+ __entry->info = ar->pdev->timestamp.info;
+ __entry->sync_tstmp_lo_us = ar->pdev->timestamp.sync_timestamp_lo_us;
+ __entry->sync_tstmp_hi_us = ar->pdev->timestamp.sync_timestamp_hi_us;
+ __entry->mlo_offset_lo = ar->pdev->timestamp.mlo_offset_lo;
+ __entry->mlo_offset_hi = ar->pdev->timestamp.mlo_offset_hi;
+ __entry->mlo_offset_clks = ar->pdev->timestamp.mlo_offset_clks;
+ __entry->mlo_comp_clks = ar->pdev->timestamp.mlo_comp_clks;
+ __entry->mlo_comp_timer = ar->pdev->timestamp.mlo_comp_timer;
+ memcpy(__get_dynamic_array(rxdesc), data, len);
+ ),
+
+ TP_printk(
+ "%s %s rxdesc len %d",
+ __get_str(driver),
+ __get_str(device),
+ __entry->len
+ )
+);
+
+#endif /* _TRACE_H_ || TRACE_HEADER_MULTI_READ */
+
+/* we don't want to use include/trace/events */
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE trace
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
new file mode 100644
index 000000000000..f6df14149531
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/wmi.c
@@ -0,0 +1,6600 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+#include <linux/skbuff.h>
+#include <linux/ctype.h>
+#include <net/mac80211.h>
+#include <net/cfg80211.h>
+#include <linux/completion.h>
+#include <linux/if_ether.h>
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/uuid.h>
+#include <linux/time.h>
+#include <linux/of.h>
+#include "core.h"
+#include "debug.h"
+#include "mac.h"
+#include "hw.h"
+#include "peer.h"
+
+struct ath12k_wmi_svc_ready_parse {
+ bool wmi_svc_bitmap_done;
+};
+
+struct ath12k_wmi_dma_ring_caps_parse {
+ struct ath12k_wmi_dma_ring_caps_params *dma_ring_caps;
+ u32 n_dma_ring_caps;
+};
+
+struct ath12k_wmi_service_ext_arg {
+ u32 default_conc_scan_config_bits;
+ u32 default_fw_config_bits;
+ struct ath12k_wmi_ppe_threshold_arg ppet;
+ u32 he_cap_info;
+ u32 mpdu_density;
+ u32 max_bssid_rx_filters;
+ u32 num_hw_modes;
+ u32 num_phy;
+};
+
+struct ath12k_wmi_svc_rdy_ext_parse {
+ struct ath12k_wmi_service_ext_arg arg;
+ const struct ath12k_wmi_soc_mac_phy_hw_mode_caps_params *hw_caps;
+ const struct ath12k_wmi_hw_mode_cap_params *hw_mode_caps;
+ u32 n_hw_mode_caps;
+ u32 tot_phy_id;
+ struct ath12k_wmi_hw_mode_cap_params pref_hw_mode_caps;
+ struct ath12k_wmi_mac_phy_caps_params *mac_phy_caps;
+ u32 n_mac_phy_caps;
+ const struct ath12k_wmi_soc_hal_reg_caps_params *soc_hal_reg_caps;
+ const struct ath12k_wmi_hal_reg_caps_ext_params *ext_hal_reg_caps;
+ u32 n_ext_hal_reg_caps;
+ struct ath12k_wmi_dma_ring_caps_parse dma_caps_parse;
+ bool hw_mode_done;
+ bool mac_phy_done;
+ bool ext_hal_reg_done;
+ bool mac_phy_chainmask_combo_done;
+ bool mac_phy_chainmask_cap_done;
+ bool oem_dma_ring_cap_done;
+ bool dma_ring_cap_done;
+};
+
+struct ath12k_wmi_svc_rdy_ext2_parse {
+ struct ath12k_wmi_dma_ring_caps_parse dma_caps_parse;
+ bool dma_ring_cap_done;
+};
+
+struct ath12k_wmi_rdy_parse {
+ u32 num_extra_mac_addr;
+};
+
+struct ath12k_wmi_dma_buf_release_arg {
+ struct ath12k_wmi_dma_buf_release_fixed_params fixed;
+ const struct ath12k_wmi_dma_buf_release_entry_params *buf_entry;
+ const struct ath12k_wmi_dma_buf_release_meta_data_params *meta_data;
+ u32 num_buf_entry;
+ u32 num_meta;
+ bool buf_entry_done;
+ bool meta_data_done;
+};
+
+struct ath12k_wmi_tlv_policy {
+ size_t min_len;
+};
+
+struct wmi_tlv_mgmt_rx_parse {
+ const struct ath12k_wmi_mgmt_rx_params *fixed;
+ const u8 *frame_buf;
+ bool frame_buf_done;
+};
+
+static const struct ath12k_wmi_tlv_policy ath12k_wmi_tlv_policies[] = {
+ [WMI_TAG_ARRAY_BYTE] = { .min_len = 0 },
+ [WMI_TAG_ARRAY_UINT32] = { .min_len = 0 },
+ [WMI_TAG_SERVICE_READY_EVENT] = {
+ .min_len = sizeof(struct wmi_service_ready_event) },
+ [WMI_TAG_SERVICE_READY_EXT_EVENT] = {
+ .min_len = sizeof(struct wmi_service_ready_ext_event) },
+ [WMI_TAG_SOC_MAC_PHY_HW_MODE_CAPS] = {
+ .min_len = sizeof(struct ath12k_wmi_soc_mac_phy_hw_mode_caps_params) },
+ [WMI_TAG_SOC_HAL_REG_CAPABILITIES] = {
+ .min_len = sizeof(struct ath12k_wmi_soc_hal_reg_caps_params) },
+ [WMI_TAG_VDEV_START_RESPONSE_EVENT] = {
+ .min_len = sizeof(struct wmi_vdev_start_resp_event) },
+ [WMI_TAG_PEER_DELETE_RESP_EVENT] = {
+ .min_len = sizeof(struct wmi_peer_delete_resp_event) },
+ [WMI_TAG_OFFLOAD_BCN_TX_STATUS_EVENT] = {
+ .min_len = sizeof(struct wmi_bcn_tx_status_event) },
+ [WMI_TAG_VDEV_STOPPED_EVENT] = {
+ .min_len = sizeof(struct wmi_vdev_stopped_event) },
+ [WMI_TAG_REG_CHAN_LIST_CC_EXT_EVENT] = {
+ .min_len = sizeof(struct wmi_reg_chan_list_cc_ext_event) },
+ [WMI_TAG_MGMT_RX_HDR] = {
+ .min_len = sizeof(struct ath12k_wmi_mgmt_rx_params) },
+ [WMI_TAG_MGMT_TX_COMPL_EVENT] = {
+ .min_len = sizeof(struct wmi_mgmt_tx_compl_event) },
+ [WMI_TAG_SCAN_EVENT] = {
+ .min_len = sizeof(struct wmi_scan_event) },
+ [WMI_TAG_PEER_STA_KICKOUT_EVENT] = {
+ .min_len = sizeof(struct wmi_peer_sta_kickout_event) },
+ [WMI_TAG_ROAM_EVENT] = {
+ .min_len = sizeof(struct wmi_roam_event) },
+ [WMI_TAG_CHAN_INFO_EVENT] = {
+ .min_len = sizeof(struct wmi_chan_info_event) },
+ [WMI_TAG_PDEV_BSS_CHAN_INFO_EVENT] = {
+ .min_len = sizeof(struct wmi_pdev_bss_chan_info_event) },
+ [WMI_TAG_VDEV_INSTALL_KEY_COMPLETE_EVENT] = {
+ .min_len = sizeof(struct wmi_vdev_install_key_compl_event) },
+ [WMI_TAG_READY_EVENT] = {
+ .min_len = sizeof(struct ath12k_wmi_ready_event_min_params) },
+ [WMI_TAG_SERVICE_AVAILABLE_EVENT] = {
+ .min_len = sizeof(struct wmi_service_available_event) },
+ [WMI_TAG_PEER_ASSOC_CONF_EVENT] = {
+ .min_len = sizeof(struct wmi_peer_assoc_conf_event) },
+ [WMI_TAG_PDEV_CTL_FAILSAFE_CHECK_EVENT] = {
+ .min_len = sizeof(struct wmi_pdev_ctl_failsafe_chk_event) },
+ [WMI_TAG_HOST_SWFDA_EVENT] = {
+ .min_len = sizeof(struct wmi_fils_discovery_event) },
+ [WMI_TAG_OFFLOAD_PRB_RSP_TX_STATUS_EVENT] = {
+ .min_len = sizeof(struct wmi_probe_resp_tx_status_event) },
+ [WMI_TAG_VDEV_DELETE_RESP_EVENT] = {
+ .min_len = sizeof(struct wmi_vdev_delete_resp_event) },
+};
+
+static __le32 ath12k_wmi_tlv_hdr(u32 cmd, u32 len)
+{
+ return le32_encode_bits(cmd, WMI_TLV_TAG) |
+ le32_encode_bits(len, WMI_TLV_LEN);
+}
+
+static __le32 ath12k_wmi_tlv_cmd_hdr(u32 cmd, u32 len)
+{
+ return ath12k_wmi_tlv_hdr(cmd, len - TLV_HDR_SIZE);
+}
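+
+/* For example, ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_CREATE_CMD,
+ * sizeof(*cmd)) encodes the tag together with the TLV body length,
+ * i.e. sizeof(*cmd) minus the TLV header itself.
+ */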
+
+void ath12k_wmi_init_qcn9274(struct ath12k_base *ab,
+ struct ath12k_wmi_resource_config_arg *config)
+{
+ config->num_vdevs = ab->num_radios * TARGET_NUM_VDEVS;
+
+ if (ab->num_radios == 2) {
+ config->num_peers = TARGET_NUM_PEERS(DBS);
+ config->num_tids = TARGET_NUM_TIDS(DBS);
+ } else if (ab->num_radios == 3) {
+ config->num_peers = TARGET_NUM_PEERS(DBS_SBS);
+ config->num_tids = TARGET_NUM_TIDS(DBS_SBS);
+ } else {
+ /* Control should not reach here */
+ config->num_peers = TARGET_NUM_PEERS(SINGLE);
+ config->num_tids = TARGET_NUM_TIDS(SINGLE);
+ }
+ config->num_offload_peers = TARGET_NUM_OFFLD_PEERS;
+ config->num_offload_reorder_buffs = TARGET_NUM_OFFLD_REORDER_BUFFS;
+ config->num_peer_keys = TARGET_NUM_PEER_KEYS;
+ config->ast_skid_limit = TARGET_AST_SKID_LIMIT;
+ config->tx_chain_mask = (1 << ab->target_caps.num_rf_chains) - 1;
+ config->rx_chain_mask = (1 << ab->target_caps.num_rf_chains) - 1;
+ config->rx_timeout_pri[0] = TARGET_RX_TIMEOUT_LO_PRI;
+ config->rx_timeout_pri[1] = TARGET_RX_TIMEOUT_LO_PRI;
+ config->rx_timeout_pri[2] = TARGET_RX_TIMEOUT_LO_PRI;
+ config->rx_timeout_pri[3] = TARGET_RX_TIMEOUT_HI_PRI;
+
+ if (test_bit(ATH12K_FLAG_RAW_MODE, &ab->dev_flags))
+ config->rx_decap_mode = TARGET_DECAP_MODE_RAW;
+ else
+ config->rx_decap_mode = TARGET_DECAP_MODE_NATIVE_WIFI;
+
+ config->scan_max_pending_req = TARGET_SCAN_MAX_PENDING_REQS;
+ config->bmiss_offload_max_vdev = TARGET_BMISS_OFFLOAD_MAX_VDEV;
+ config->roam_offload_max_vdev = TARGET_ROAM_OFFLOAD_MAX_VDEV;
+ config->roam_offload_max_ap_profiles = TARGET_ROAM_OFFLOAD_MAX_AP_PROFILES;
+ config->num_mcast_groups = TARGET_NUM_MCAST_GROUPS;
+ config->num_mcast_table_elems = TARGET_NUM_MCAST_TABLE_ELEMS;
+ config->mcast2ucast_mode = TARGET_MCAST2UCAST_MODE;
+ config->tx_dbg_log_size = TARGET_TX_DBG_LOG_SIZE;
+ config->num_wds_entries = TARGET_NUM_WDS_ENTRIES;
+ config->dma_burst_size = TARGET_DMA_BURST_SIZE;
+ config->rx_skip_defrag_timeout_dup_detection_check =
+ TARGET_RX_SKIP_DEFRAG_TIMEOUT_DUP_DETECTION_CHECK;
+ config->vow_config = TARGET_VOW_CONFIG;
+ config->gtk_offload_max_vdev = TARGET_GTK_OFFLOAD_MAX_VDEV;
+ config->num_msdu_desc = TARGET_NUM_MSDU_DESC;
+ config->beacon_tx_offload_max_vdev = ab->num_radios * TARGET_MAX_BCN_OFFLD;
+ config->rx_batchmode = TARGET_RX_BATCHMODE;
+ /* Indicates host supports peer map v3 and unmap v2 */
+ config->peer_map_unmap_version = 0x32;
+ config->twt_ap_pdev_count = ab->num_radios;
+ config->twt_ap_sta_count = 1000;
+}
+
+void ath12k_wmi_init_wcn7850(struct ath12k_base *ab,
+ struct ath12k_wmi_resource_config_arg *config)
+{
+ config->num_vdevs = 4;
+ config->num_peers = 16;
+ config->num_tids = 32;
+
+ config->num_offload_peers = 3;
+ config->num_offload_reorder_buffs = 3;
+ config->num_peer_keys = TARGET_NUM_PEER_KEYS;
+ config->ast_skid_limit = TARGET_AST_SKID_LIMIT;
+ config->tx_chain_mask = (1 << ab->target_caps.num_rf_chains) - 1;
+ config->rx_chain_mask = (1 << ab->target_caps.num_rf_chains) - 1;
+ config->rx_timeout_pri[0] = TARGET_RX_TIMEOUT_LO_PRI;
+ config->rx_timeout_pri[1] = TARGET_RX_TIMEOUT_LO_PRI;
+ config->rx_timeout_pri[2] = TARGET_RX_TIMEOUT_LO_PRI;
+ config->rx_timeout_pri[3] = TARGET_RX_TIMEOUT_HI_PRI;
+ config->rx_decap_mode = TARGET_DECAP_MODE_NATIVE_WIFI;
+ config->scan_max_pending_req = TARGET_SCAN_MAX_PENDING_REQS;
+ config->bmiss_offload_max_vdev = TARGET_BMISS_OFFLOAD_MAX_VDEV;
+ config->roam_offload_max_vdev = TARGET_ROAM_OFFLOAD_MAX_VDEV;
+ config->roam_offload_max_ap_profiles = TARGET_ROAM_OFFLOAD_MAX_AP_PROFILES;
+ config->num_mcast_groups = 0;
+ config->num_mcast_table_elems = 0;
+ config->mcast2ucast_mode = 0;
+ config->tx_dbg_log_size = TARGET_TX_DBG_LOG_SIZE;
+ config->num_wds_entries = 0;
+ config->dma_burst_size = 0;
+ config->rx_skip_defrag_timeout_dup_detection_check = 0;
+ config->vow_config = TARGET_VOW_CONFIG;
+ config->gtk_offload_max_vdev = 2;
+ config->num_msdu_desc = 0x400;
+ config->beacon_tx_offload_max_vdev = 2;
+ config->rx_batchmode = TARGET_RX_BATCHMODE;
+
+ config->peer_map_unmap_version = 0x1;
+ config->use_pdev_id = 1;
+ config->max_frag_entries = 0xa;
+ config->num_tdls_vdevs = 0x1;
+ config->num_tdls_conn_table_entries = 8;
+ config->beacon_tx_offload_max_vdev = 0x2;
+ config->num_multicast_filter_entries = 0x20;
+ config->num_wow_filters = 0x16;
+ config->num_keep_alive_pattern = 0;
+}
+
+#define PRIMAP(_hw_mode_) \
+ [_hw_mode_] = _hw_mode_##_PRI
+
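+/* e.g. PRIMAP(WMI_HOST_HW_MODE_DBS) expands to
+ * [WMI_HOST_HW_MODE_DBS] = WMI_HOST_HW_MODE_DBS_PRI
+ */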
+static const int ath12k_hw_mode_pri_map[] = {
+ PRIMAP(WMI_HOST_HW_MODE_SINGLE),
+ PRIMAP(WMI_HOST_HW_MODE_DBS),
+ PRIMAP(WMI_HOST_HW_MODE_SBS_PASSIVE),
+ PRIMAP(WMI_HOST_HW_MODE_SBS),
+ PRIMAP(WMI_HOST_HW_MODE_DBS_SBS),
+ PRIMAP(WMI_HOST_HW_MODE_DBS_OR_SBS),
+ /* keep last */
+ PRIMAP(WMI_HOST_HW_MODE_MAX),
+};
+
+static int
+ath12k_wmi_tlv_iter(struct ath12k_base *ab, const void *ptr, size_t len,
+ int (*iter)(struct ath12k_base *ab, u16 tag, u16 len,
+ const void *ptr, void *data),
+ void *data)
+{
+ const void *begin = ptr;
+ const struct wmi_tlv *tlv;
+ u16 tlv_tag, tlv_len;
+ int ret;
+
+ while (len > 0) {
+ if (len < sizeof(*tlv)) {
+ ath12k_err(ab, "wmi tlv parse failure at byte %zd (%zu bytes left, %zu expected)\n",
+ ptr - begin, len, sizeof(*tlv));
+ return -EINVAL;
+ }
+
+ tlv = ptr;
+ tlv_tag = le32_get_bits(tlv->header, WMI_TLV_TAG);
+ tlv_len = le32_get_bits(tlv->header, WMI_TLV_LEN);
+ ptr += sizeof(*tlv);
+ len -= sizeof(*tlv);
+
+ if (tlv_len > len) {
+ ath12k_err(ab, "wmi tlv parse failure of tag %u at byte %zd (%zu bytes left, %u expected)\n",
+ tlv_tag, ptr - begin, len, tlv_len);
+ return -EINVAL;
+ }
+
+ if (tlv_tag < ARRAY_SIZE(ath12k_wmi_tlv_policies) &&
+ ath12k_wmi_tlv_policies[tlv_tag].min_len &&
+ ath12k_wmi_tlv_policies[tlv_tag].min_len > tlv_len) {
+ ath12k_err(ab, "wmi tlv parse failure of tag %u at byte %zd (%u bytes is less than min length %zu)\n",
+ tlv_tag, ptr - begin, tlv_len,
+ ath12k_wmi_tlv_policies[tlv_tag].min_len);
+ return -EINVAL;
+ }
+
+ ret = iter(ab, tlv_tag, tlv_len, ptr, data);
+ if (ret)
+ return ret;
+
+ ptr += tlv_len;
+ len -= tlv_len;
+ }
+
+ return 0;
+}
+
+static int ath12k_wmi_tlv_iter_parse(struct ath12k_base *ab, u16 tag, u16 len,
+ const void *ptr, void *data)
+{
+ const void **tb = data;
+
+ if (tag < WMI_TAG_MAX)
+ tb[tag] = ptr;
+
+ return 0;
+}
+
+static int ath12k_wmi_tlv_parse(struct ath12k_base *ar, const void **tb,
+ const void *ptr, size_t len)
+{
+ return ath12k_wmi_tlv_iter(ar, ptr, len, ath12k_wmi_tlv_iter_parse,
+ (void *)tb);
+}
+
+static const void **
+ath12k_wmi_tlv_parse_alloc(struct ath12k_base *ab, const void *ptr,
+ size_t len, gfp_t gfp)
+{
+ const void **tb;
+ int ret;
+
+ tb = kcalloc(WMI_TAG_MAX, sizeof(*tb), gfp);
+ if (!tb)
+ return ERR_PTR(-ENOMEM);
+
+ ret = ath12k_wmi_tlv_parse(ab, tb, ptr, len);
+ if (ret) {
+ kfree(tb);
+ return ERR_PTR(ret);
+ }
+
+ return tb;
+}
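+
+/* Usage sketch (illustrative, not a real handler): event handlers
+ * parse the TLVs into a tag-indexed table, look up the fixed-size
+ * TLV for their tag (validated against the min_len policies above)
+ * and free the table when done.
+ */
+static inline int ath12k_wmi_pull_scan_ev_example(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ const struct wmi_scan_event *ev;
+ const void **tb;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb))
+ return PTR_ERR(tb);
+
+ ev = tb[WMI_TAG_SCAN_EVENT];
+ if (!ev) {
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ /* ... consume the event fields here ... */
+
+ kfree(tb);
+ return 0;
+}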
+
+static int ath12k_wmi_cmd_send_nowait(struct ath12k_wmi_pdev *wmi, struct sk_buff *skb,
+ u32 cmd_id)
+{
+ struct ath12k_skb_cb *skb_cb = ATH12K_SKB_CB(skb);
+ struct ath12k_base *ab = wmi->wmi_ab->ab;
+ struct wmi_cmd_hdr *cmd_hdr;
+ int ret;
+
+ if (!skb_push(skb, sizeof(struct wmi_cmd_hdr)))
+ return -ENOMEM;
+
+ cmd_hdr = (struct wmi_cmd_hdr *)skb->data;
+ cmd_hdr->cmd_id = le32_encode_bits(cmd_id, WMI_CMD_HDR_CMD_ID);
+
+ memset(skb_cb, 0, sizeof(*skb_cb));
+ ret = ath12k_htc_send(&ab->htc, wmi->eid, skb);
+
+ if (ret)
+ goto err_pull;
+
+ return 0;
+
+err_pull:
+ skb_pull(skb, sizeof(struct wmi_cmd_hdr));
+ return ret;
+}
+
+int ath12k_wmi_cmd_send(struct ath12k_wmi_pdev *wmi, struct sk_buff *skb,
+ u32 cmd_id)
+{
+ struct ath12k_wmi_base *wmi_sc = wmi->wmi_ab;
+ int ret = -EOPNOTSUPP;
+
+ might_sleep();
+
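+ /* Retry on every credit-refill wakeup until the command is
+ * accepted or WMI_SEND_TIMEOUT_HZ expires; a crash flush in
+ * the meantime turns the failure into -ESHUTDOWN.
+ */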
+ wait_event_timeout(wmi_sc->tx_credits_wq, ({
+ ret = ath12k_wmi_cmd_send_nowait(wmi, skb, cmd_id);
+
+ if (ret && test_bit(ATH12K_FLAG_CRASH_FLUSH, &wmi_sc->ab->dev_flags))
+ ret = -ESHUTDOWN;
+
+ (ret != -EAGAIN);
+ }), WMI_SEND_TIMEOUT_HZ);
+
+ if (ret == -EAGAIN)
+ ath12k_warn(wmi_sc->ab, "wmi command %d timeout\n", cmd_id);
+
+ return ret;
+}
+
+static int ath12k_pull_svc_ready_ext(struct ath12k_wmi_pdev *wmi_handle,
+ const void *ptr,
+ struct ath12k_wmi_service_ext_arg *arg)
+{
+ const struct wmi_service_ready_ext_event *ev = ptr;
+ int i;
+
+ if (!ev)
+ return -EINVAL;
+
+ /* TODO: Move this to a host-based bitmap */
+ arg->default_conc_scan_config_bits =
+ le32_to_cpu(ev->default_conc_scan_config_bits);
+ arg->default_fw_config_bits = le32_to_cpu(ev->default_fw_config_bits);
+ arg->he_cap_info = le32_to_cpu(ev->he_cap_info);
+ arg->mpdu_density = le32_to_cpu(ev->mpdu_density);
+ arg->max_bssid_rx_filters = le32_to_cpu(ev->max_bssid_rx_filters);
+ arg->ppet.numss_m1 = le32_to_cpu(ev->ppet.numss_m1);
+ arg->ppet.ru_bit_mask = le32_to_cpu(ev->ppet.ru_info);
+
+ for (i = 0; i < WMI_MAX_NUM_SS; i++)
+ arg->ppet.ppet16_ppet8_ru3_ru0[i] =
+ le32_to_cpu(ev->ppet.ppet16_ppet8_ru3_ru0[i]);
+
+ return 0;
+}
+
+static int
+ath12k_pull_mac_phy_cap_svc_ready_ext(struct ath12k_wmi_pdev *wmi_handle,
+ struct ath12k_wmi_svc_rdy_ext_parse *svc,
+ u8 hw_mode_id, u8 phy_id,
+ struct ath12k_pdev *pdev)
+{
+ const struct ath12k_wmi_mac_phy_caps_params *mac_caps;
+ const struct ath12k_wmi_soc_mac_phy_hw_mode_caps_params *hw_caps = svc->hw_caps;
+ const struct ath12k_wmi_hw_mode_cap_params *wmi_hw_mode_caps = svc->hw_mode_caps;
+ const struct ath12k_wmi_mac_phy_caps_params *wmi_mac_phy_caps = svc->mac_phy_caps;
+ struct ath12k_band_cap *cap_band;
+ struct ath12k_pdev_cap *pdev_cap = &pdev->cap;
+ u32 phy_map;
+ u32 hw_idx, phy_idx = 0;
+ int i;
+
+ if (!hw_caps || !wmi_hw_mode_caps || !svc->soc_hal_reg_caps)
+ return -EINVAL;
+
+ for (hw_idx = 0; hw_idx < le32_to_cpu(hw_caps->num_hw_modes); hw_idx++) {
+ if (hw_mode_id == le32_to_cpu(wmi_hw_mode_caps[hw_idx].hw_mode_id))
+ break;
+
+ phy_map = le32_to_cpu(wmi_hw_mode_caps[hw_idx].phy_id_map);
+ phy_idx = fls(phy_map);
+ }
+
+ if (hw_idx == le32_to_cpu(hw_caps->num_hw_modes))
+ return -EINVAL;
+
+ phy_idx += phy_id;
+ if (phy_id >= le32_to_cpu(svc->soc_hal_reg_caps->num_phy))
+ return -EINVAL;
+
+ mac_caps = wmi_mac_phy_caps + phy_idx;
+
+ pdev->pdev_id = le32_to_cpu(mac_caps->pdev_id);
+ pdev_cap->supported_bands |= le32_to_cpu(mac_caps->supported_bands);
+ pdev_cap->ampdu_density = le32_to_cpu(mac_caps->ampdu_density);
+
+ /* Take non-zero tx/rx chainmask. If tx/rx chainmask differs from
+ * band to band for a single radio, need to see how this should be
+ * handled.
+ */
+ if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_2G_CAP) {
+ pdev_cap->tx_chain_mask = le32_to_cpu(mac_caps->tx_chain_mask_2g);
+ pdev_cap->rx_chain_mask = le32_to_cpu(mac_caps->rx_chain_mask_2g);
+ } else if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_5G_CAP) {
+ pdev_cap->vht_cap = le32_to_cpu(mac_caps->vht_cap_info_5g);
+ pdev_cap->vht_mcs = le32_to_cpu(mac_caps->vht_supp_mcs_5g);
+ pdev_cap->he_mcs = le32_to_cpu(mac_caps->he_supp_mcs_5g);
+ pdev_cap->tx_chain_mask = le32_to_cpu(mac_caps->tx_chain_mask_5g);
+ pdev_cap->rx_chain_mask = le32_to_cpu(mac_caps->rx_chain_mask_5g);
+ } else {
+ return -EINVAL;
+ }
+
+ /* tx/rx chainmask reported from fw depends on the actual hw chains used,
+ * For example, for 4x4 capable macphys, first 4 chains can be used for first
+ * mac and the remaining 4 chains can be used for the second mac or vice-versa.
+ * In this case, tx/rx chainmask 0xf will be advertised for first mac and 0xf0
+ * will be advertised for second mac or vice-versa. Compute the shift value
+ * for tx/rx chainmask which will be used to advertise supported ht/vht rates to
+ * mac80211.
+ */
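+ /* e.g. a mac reporting rx_chain_mask 0xf0 gets a shift of 4 so
+ * that its chains map onto spatial streams 0-3 for mac80211.
+ */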
+ pdev_cap->tx_chain_mask_shift =
+ find_first_bit((unsigned long *)&pdev_cap->tx_chain_mask, 32);
+ pdev_cap->rx_chain_mask_shift =
+ find_first_bit((unsigned long *)&pdev_cap->rx_chain_mask, 32);
+
+ if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_2G_CAP) {
+ cap_band = &pdev_cap->band[NL80211_BAND_2GHZ];
+ cap_band->phy_id = le32_to_cpu(mac_caps->phy_id);
+ cap_band->max_bw_supported = le32_to_cpu(mac_caps->max_bw_supported_2g);
+ cap_band->ht_cap_info = le32_to_cpu(mac_caps->ht_cap_info_2g);
+ cap_band->he_cap_info[0] = le32_to_cpu(mac_caps->he_cap_info_2g);
+ cap_band->he_cap_info[1] = le32_to_cpu(mac_caps->he_cap_info_2g_ext);
+ cap_band->he_mcs = le32_to_cpu(mac_caps->he_supp_mcs_2g);
+ for (i = 0; i < WMI_MAX_HECAP_PHY_SIZE; i++)
+ cap_band->he_cap_phy_info[i] =
+ le32_to_cpu(mac_caps->he_cap_phy_info_2g[i]);
+
+ cap_band->he_ppet.numss_m1 = le32_to_cpu(mac_caps->he_ppet2g.numss_m1);
+ cap_band->he_ppet.ru_bit_mask = le32_to_cpu(mac_caps->he_ppet2g.ru_info);
+
+ for (i = 0; i < WMI_MAX_NUM_SS; i++)
+ cap_band->he_ppet.ppet16_ppet8_ru3_ru0[i] =
+ le32_to_cpu(mac_caps->he_ppet2g.ppet16_ppet8_ru3_ru0[i]);
+ }
+
+ if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_5G_CAP) {
+ cap_band = &pdev_cap->band[NL80211_BAND_5GHZ];
+ cap_band->phy_id = le32_to_cpu(mac_caps->phy_id);
+ cap_band->max_bw_supported =
+ le32_to_cpu(mac_caps->max_bw_supported_5g);
+ cap_band->ht_cap_info = le32_to_cpu(mac_caps->ht_cap_info_5g);
+ cap_band->he_cap_info[0] = le32_to_cpu(mac_caps->he_cap_info_5g);
+ cap_band->he_cap_info[1] = le32_to_cpu(mac_caps->he_cap_info_5g_ext);
+ cap_band->he_mcs = le32_to_cpu(mac_caps->he_supp_mcs_5g);
+ for (i = 0; i < WMI_MAX_HECAP_PHY_SIZE; i++)
+ cap_band->he_cap_phy_info[i] =
+ le32_to_cpu(mac_caps->he_cap_phy_info_5g[i]);
+
+ cap_band->he_ppet.numss_m1 = le32_to_cpu(mac_caps->he_ppet5g.numss_m1);
+ cap_band->he_ppet.ru_bit_mask = le32_to_cpu(mac_caps->he_ppet5g.ru_info);
+
+ for (i = 0; i < WMI_MAX_NUM_SS; i++)
+ cap_band->he_ppet.ppet16_ppet8_ru3_ru0[i] =
+ le32_to_cpu(mac_caps->he_ppet5g.ppet16_ppet8_ru3_ru0[i]);
+
+ cap_band = &pdev_cap->band[NL80211_BAND_6GHZ];
+ cap_band->max_bw_supported =
+ le32_to_cpu(mac_caps->max_bw_supported_5g);
+ cap_band->ht_cap_info = le32_to_cpu(mac_caps->ht_cap_info_5g);
+ cap_band->he_cap_info[0] = le32_to_cpu(mac_caps->he_cap_info_5g);
+ cap_band->he_cap_info[1] = le32_to_cpu(mac_caps->he_cap_info_5g_ext);
+ cap_band->he_mcs = le32_to_cpu(mac_caps->he_supp_mcs_5g);
+ for (i = 0; i < WMI_MAX_HECAP_PHY_SIZE; i++)
+ cap_band->he_cap_phy_info[i] =
+ le32_to_cpu(mac_caps->he_cap_phy_info_5g[i]);
+
+ cap_band->he_ppet.numss_m1 = le32_to_cpu(mac_caps->he_ppet5g.numss_m1);
+ cap_band->he_ppet.ru_bit_mask = le32_to_cpu(mac_caps->he_ppet5g.ru_info);
+
+ for (i = 0; i < WMI_MAX_NUM_SS; i++)
+ cap_band->he_ppet.ppet16_ppet8_ru3_ru0[i] =
+ le32_to_cpu(mac_caps->he_ppet5g.ppet16_ppet8_ru3_ru0[i]);
+ }
+
+ return 0;
+}
+
+static int
+ath12k_pull_reg_cap_svc_rdy_ext(struct ath12k_wmi_pdev *wmi_handle,
+ const struct ath12k_wmi_soc_hal_reg_caps_params *reg_caps,
+ const struct ath12k_wmi_hal_reg_caps_ext_params *ext_caps,
+ u8 phy_idx,
+ struct ath12k_wmi_hal_reg_capabilities_ext_arg *param)
+{
+ const struct ath12k_wmi_hal_reg_caps_ext_params *ext_reg_cap;
+
+ if (!reg_caps || !ext_caps)
+ return -EINVAL;
+
+ if (phy_idx >= le32_to_cpu(reg_caps->num_phy))
+ return -EINVAL;
+
+ ext_reg_cap = &ext_caps[phy_idx];
+
+ param->phy_id = le32_to_cpu(ext_reg_cap->phy_id);
+ param->eeprom_reg_domain = le32_to_cpu(ext_reg_cap->eeprom_reg_domain);
+ param->eeprom_reg_domain_ext =
+ le32_to_cpu(ext_reg_cap->eeprom_reg_domain_ext);
+ param->regcap1 = le32_to_cpu(ext_reg_cap->regcap1);
+ param->regcap2 = le32_to_cpu(ext_reg_cap->regcap2);
+ /* check if param->wireless_mode is needed */
+ param->low_2ghz_chan = le32_to_cpu(ext_reg_cap->low_2ghz_chan);
+ param->high_2ghz_chan = le32_to_cpu(ext_reg_cap->high_2ghz_chan);
+ param->low_5ghz_chan = le32_to_cpu(ext_reg_cap->low_5ghz_chan);
+ param->high_5ghz_chan = le32_to_cpu(ext_reg_cap->high_5ghz_chan);
+
+ return 0;
+}
+
+static int ath12k_pull_service_ready_tlv(struct ath12k_base *ab,
+ const void *evt_buf,
+ struct ath12k_wmi_target_cap_arg *cap)
+{
+ const struct wmi_service_ready_event *ev = evt_buf;
+
+ if (!ev) {
+ ath12k_err(ab, "%s: failed by NULL param\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ cap->phy_capability = le32_to_cpu(ev->phy_capability);
+ cap->max_frag_entry = le32_to_cpu(ev->max_frag_entry);
+ cap->num_rf_chains = le32_to_cpu(ev->num_rf_chains);
+ cap->ht_cap_info = le32_to_cpu(ev->ht_cap_info);
+ cap->vht_cap_info = le32_to_cpu(ev->vht_cap_info);
+ cap->vht_supp_mcs = le32_to_cpu(ev->vht_supp_mcs);
+ cap->hw_min_tx_power = le32_to_cpu(ev->hw_min_tx_power);
+ cap->hw_max_tx_power = le32_to_cpu(ev->hw_max_tx_power);
+ cap->sys_cap_info = le32_to_cpu(ev->sys_cap_info);
+ cap->min_pkt_size_enable = le32_to_cpu(ev->min_pkt_size_enable);
+ cap->max_bcn_ie_size = le32_to_cpu(ev->max_bcn_ie_size);
+ cap->max_num_scan_channels = le32_to_cpu(ev->max_num_scan_channels);
+ cap->max_supported_macs = le32_to_cpu(ev->max_supported_macs);
+ cap->wmi_fw_sub_feat_caps = le32_to_cpu(ev->wmi_fw_sub_feat_caps);
+ cap->txrx_chainmask = le32_to_cpu(ev->txrx_chainmask);
+ cap->default_dbs_hw_mode_index = le32_to_cpu(ev->default_dbs_hw_mode_index);
+ cap->num_msdu_desc = le32_to_cpu(ev->num_msdu_desc);
+
+ return 0;
+}
+
+/* Save the wmi_service_bitmap into a linear bitmap. The wmi_services in
+ * wmi_service ready event are advertised in b0-b3 (LSB 4-bits) of each
+ * 4-byte word.
+ */
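+/* For example, service bit 5 is advertised as b1 of the second
+ * 4-byte word and lands in bit 5 of the linear svc_map.
+ */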
+static void ath12k_wmi_service_bitmap_copy(struct ath12k_wmi_pdev *wmi,
+ const u32 *wmi_svc_bm)
+{
+ int i, j;
+
+ for (i = 0, j = 0; i < WMI_SERVICE_BM_SIZE && j < WMI_MAX_SERVICE; i++) {
+ do {
+ if (wmi_svc_bm[i] & BIT(j % WMI_SERVICE_BITS_IN_SIZE32))
+ set_bit(j, wmi->wmi_ab->svc_map);
+ } while (++j % WMI_SERVICE_BITS_IN_SIZE32);
+ }
+}
+
+static int ath12k_wmi_svc_rdy_parse(struct ath12k_base *ab, u16 tag, u16 len,
+ const void *ptr, void *data)
+{
+ struct ath12k_wmi_svc_ready_parse *svc_ready = data;
+ struct ath12k_wmi_pdev *wmi_handle = &ab->wmi_ab.wmi[0];
+ u16 expect_len;
+
+ switch (tag) {
+ case WMI_TAG_SERVICE_READY_EVENT:
+ if (ath12k_pull_service_ready_tlv(ab, ptr, &ab->target_caps))
+ return -EINVAL;
+ break;
+
+ case WMI_TAG_ARRAY_UINT32:
+ if (!svc_ready->wmi_svc_bitmap_done) {
+ expect_len = WMI_SERVICE_BM_SIZE * sizeof(u32);
+ if (len < expect_len) {
+ ath12k_warn(ab, "invalid len %d for the tag 0x%x\n",
+ len, tag);
+ return -EINVAL;
+ }
+
+ ath12k_wmi_service_bitmap_copy(wmi_handle, ptr);
+
+ svc_ready->wmi_svc_bitmap_done = true;
+ }
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int ath12k_service_ready_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ struct ath12k_wmi_svc_ready_parse svc_ready = { };
+ int ret;
+
+ ret = ath12k_wmi_tlv_iter(ab, skb->data, skb->len,
+ ath12k_wmi_svc_rdy_parse,
+ &svc_ready);
+ if (ret) {
+ ath12k_warn(ab, "failed to parse tlv %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+struct sk_buff *ath12k_wmi_alloc_skb(struct ath12k_wmi_base *wmi_sc, u32 len)
+{
+ struct sk_buff *skb;
+ struct ath12k_base *ab = wmi_sc->ab;
+ u32 round_len = roundup(len, 4);
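+ /* WMI TLV payloads are expected to be 4-byte aligned, hence the
+ * round-up here and the alignment check below.
+ */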
+
+ skb = ath12k_htc_alloc_skb(ab, WMI_SKB_HEADROOM + round_len);
+ if (!skb)
+ return NULL;
+
+ skb_reserve(skb, WMI_SKB_HEADROOM);
+ if (!IS_ALIGNED((unsigned long)skb->data, 4))
+ ath12k_warn(ab, "unaligned WMI skb data\n");
+
+ skb_put(skb, round_len);
+ memset(skb->data, 0, round_len);
+
+ return skb;
+}
+
+int ath12k_wmi_mgmt_send(struct ath12k *ar, u32 vdev_id, u32 buf_id,
+ struct sk_buff *frame)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_mgmt_send_cmd *cmd;
+ struct wmi_tlv *frame_tlv;
+ struct sk_buff *skb;
+ u32 buf_len;
+ int ret, len;
+
+ buf_len = min_t(int, frame->len, WMI_MGMT_SEND_DOWNLD_LEN);
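+ /* Only buf_len bytes are inlined in the TLV below; the frame is
+ * also DMA-mapped (paddr_lo/hi, frame_len), presumably so the
+ * target can fetch the full frame itself.
+ */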
+
+ len = sizeof(*cmd) + sizeof(*frame_tlv) + roundup(buf_len, 4);
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_mgmt_send_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_MGMT_TX_SEND_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->desc_id = cpu_to_le32(buf_id);
+ cmd->chanfreq = 0;
+ cmd->paddr_lo = cpu_to_le32(lower_32_bits(ATH12K_SKB_CB(frame)->paddr));
+ cmd->paddr_hi = cpu_to_le32(upper_32_bits(ATH12K_SKB_CB(frame)->paddr));
+ cmd->frame_len = cpu_to_le32(frame->len);
+ cmd->buf_len = cpu_to_le32(buf_len);
+ cmd->tx_params_valid = 0;
+
+ frame_tlv = (struct wmi_tlv *)(skb->data + sizeof(*cmd));
+ frame_tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, buf_len);
+
+ memcpy(frame_tlv->value, frame->data, buf_len);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_MGMT_TX_SEND_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to submit WMI_MGMT_TX_SEND_CMDID cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_vdev_create(struct ath12k *ar, u8 *macaddr,
+ struct ath12k_wmi_vdev_create_arg *args)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_vdev_create_cmd *cmd;
+ struct sk_buff *skb;
+ struct ath12k_wmi_vdev_txrx_streams_params *txrx_streams;
+ struct wmi_tlv *tlv;
+ int ret, len;
+ void *ptr;
+
+ /* It can be optimized by sending tx/rx chain configuration
+ * only for supported bands instead of always sending it for
+ * both the bands.
+ */
+ len = sizeof(*cmd) + TLV_HDR_SIZE +
+ (WMI_NUM_SUPPORTED_BAND_MAX * sizeof(*txrx_streams));
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_vdev_create_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_CREATE_CMD,
+ sizeof(*cmd));
+
+ cmd->vdev_id = cpu_to_le32(args->if_id);
+ cmd->vdev_type = cpu_to_le32(args->type);
+ cmd->vdev_subtype = cpu_to_le32(args->subtype);
+ cmd->num_cfg_txrx_streams = cpu_to_le32(WMI_NUM_SUPPORTED_BAND_MAX);
+ cmd->pdev_id = cpu_to_le32(args->pdev_id);
+ cmd->vdev_stats_id = cpu_to_le32(args->if_stats_id);
+ ether_addr_copy(cmd->vdev_macaddr.addr, macaddr);
+
+ ptr = skb->data + sizeof(*cmd);
+ len = WMI_NUM_SUPPORTED_BAND_MAX * sizeof(*txrx_streams);
+
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, len);
+
+ ptr += TLV_HDR_SIZE;
+ txrx_streams = ptr;
+ len = sizeof(*txrx_streams);
+ txrx_streams->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_TXRX_STREAMS,
+ len);
+ txrx_streams->band = WMI_TPC_CHAINMASK_CONFIG_BAND_2G;
+ txrx_streams->supported_tx_streams =
+ args->chains[NL80211_BAND_2GHZ].tx;
+ txrx_streams->supported_rx_streams =
+ args->chains[NL80211_BAND_2GHZ].rx;
+
+ txrx_streams++;
+ txrx_streams->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_TXRX_STREAMS,
+ len);
+ txrx_streams->band = WMI_TPC_CHAINMASK_CONFIG_BAND_5G;
+ txrx_streams->supported_tx_streams =
+ args->chains[NL80211_BAND_5GHZ].tx;
+ txrx_streams->supported_rx_streams =
+ args->chains[NL80211_BAND_5GHZ].rx;
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI vdev create: id %d type %d subtype %d macaddr %pM pdevid %d\n",
+ args->if_id, args->type, args->subtype,
+ macaddr, args->pdev_id);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_VDEV_CREATE_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to submit WMI_VDEV_CREATE_CMDID\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_vdev_delete(struct ath12k *ar, u8 vdev_id)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_vdev_delete_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_vdev_delete_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_DELETE_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI, "WMI vdev delete id %d\n", vdev_id);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_VDEV_DELETE_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to submit WMI_VDEV_DELETE_CMDID\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_vdev_stop(struct ath12k *ar, u8 vdev_id)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_vdev_stop_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_vdev_stop_cmd *)skb->data;
+
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_STOP_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI, "WMI vdev stop id 0x%x\n", vdev_id);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_VDEV_STOP_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to submit WMI_VDEV_STOP cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_vdev_down(struct ath12k *ar, u8 vdev_id)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_vdev_down_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_vdev_down_cmd *)skb->data;
+
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_DOWN_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI, "WMI vdev down id 0x%x\n", vdev_id);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_VDEV_DOWN_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to submit WMI_VDEV_DOWN cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+static void ath12k_wmi_put_wmi_channel(struct ath12k_wmi_channel_params *chan,
+ struct wmi_vdev_start_req_arg *arg)
+{
+ memset(chan, 0, sizeof(*chan));
+
+ chan->mhz = cpu_to_le32(arg->freq);
+ chan->band_center_freq1 = cpu_to_le32(arg->band_center_freq1);
+ if (arg->mode == MODE_11AC_VHT80_80)
+ chan->band_center_freq2 = cpu_to_le32(arg->band_center_freq2);
+ else
+ chan->band_center_freq2 = 0;
+
+ chan->info |= le32_encode_bits(arg->mode, WMI_CHAN_INFO_MODE);
+ if (arg->passive)
+ chan->info |= cpu_to_le32(WMI_CHAN_INFO_PASSIVE);
+ if (arg->allow_ibss)
+ chan->info |= cpu_to_le32(WMI_CHAN_INFO_ADHOC_ALLOWED);
+ if (arg->allow_ht)
+ chan->info |= cpu_to_le32(WMI_CHAN_INFO_ALLOW_HT);
+ if (arg->allow_vht)
+ chan->info |= cpu_to_le32(WMI_CHAN_INFO_ALLOW_VHT);
+ if (arg->allow_he)
+ chan->info |= cpu_to_le32(WMI_CHAN_INFO_ALLOW_HE);
+ if (arg->ht40plus)
+ chan->info |= cpu_to_le32(WMI_CHAN_INFO_HT40_PLUS);
+ if (arg->chan_radar)
+ chan->info |= cpu_to_le32(WMI_CHAN_INFO_DFS);
+ if (arg->freq2_radar)
+ chan->info |= cpu_to_le32(WMI_CHAN_INFO_DFS_FREQ2);
+
+ chan->reg_info_1 = le32_encode_bits(arg->max_power,
+ WMI_CHAN_REG_INFO1_MAX_PWR) |
+ le32_encode_bits(arg->max_reg_power,
+ WMI_CHAN_REG_INFO1_MAX_REG_PWR);
+
+ chan->reg_info_2 = le32_encode_bits(arg->max_antenna_gain,
+ WMI_CHAN_REG_INFO2_ANT_MAX) |
+ le32_encode_bits(arg->max_power, WMI_CHAN_REG_INFO2_MAX_TX_PWR);
+}
+
+int ath12k_wmi_vdev_start(struct ath12k *ar, struct wmi_vdev_start_req_arg *arg,
+ bool restart)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_vdev_start_request_cmd *cmd;
+ struct sk_buff *skb;
+ struct ath12k_wmi_channel_params *chan;
+ struct wmi_tlv *tlv;
+ void *ptr;
+ int ret, len;
+
+ if (WARN_ON(arg->ssid_len > sizeof(cmd->ssid.ssid)))
+ return -EINVAL;
+
+ len = sizeof(*cmd) + sizeof(*chan) + TLV_HDR_SIZE;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_vdev_start_request_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_START_REQUEST_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(arg->vdev_id);
+ cmd->beacon_interval = cpu_to_le32(arg->bcn_intval);
+ cmd->bcn_tx_rate = cpu_to_le32(arg->bcn_tx_rate);
+ cmd->dtim_period = cpu_to_le32(arg->dtim_period);
+ cmd->num_noa_descriptors = cpu_to_le32(arg->num_noa_descriptors);
+ cmd->preferred_rx_streams = cpu_to_le32(arg->pref_rx_streams);
+ cmd->preferred_tx_streams = cpu_to_le32(arg->pref_tx_streams);
+ cmd->cac_duration_ms = cpu_to_le32(arg->cac_duration_ms);
+ cmd->regdomain = cpu_to_le32(arg->regdomain);
+ cmd->he_ops = cpu_to_le32(arg->he_ops);
+
+ if (!restart) {
+ if (arg->ssid) {
+ cmd->ssid.ssid_len = cpu_to_le32(arg->ssid_len);
+ memcpy(cmd->ssid.ssid, arg->ssid, arg->ssid_len);
+ }
+ if (arg->hidden_ssid)
+ cmd->flags |= cpu_to_le32(WMI_VDEV_START_HIDDEN_SSID);
+ if (arg->pmf_enabled)
+ cmd->flags |= cpu_to_le32(WMI_VDEV_START_PMF_ENABLED);
+ }
+
+ cmd->flags |= cpu_to_le32(WMI_VDEV_START_LDPC_RX_ENABLED);
+
+ ptr = skb->data + sizeof(*cmd);
+ chan = ptr;
+
+ ath12k_wmi_put_wmi_channel(chan, arg);
+
+ chan->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_CHANNEL,
+ sizeof(*chan));
+ ptr += sizeof(*chan);
+
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, 0);
+
+ /* Note: This is a nested TLV containing:
+ * [wmi_tlv][wmi_p2p_noa_descriptor][wmi_tlv]..
+ */
+
+ ptr += sizeof(*tlv);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI, "vdev %s id 0x%x freq 0x%x mode 0x%x\n",
+ restart ? "restart" : "start", arg->vdev_id,
+ arg->freq, arg->mode);
+
+ if (restart)
+ ret = ath12k_wmi_cmd_send(wmi, skb,
+ WMI_VDEV_RESTART_REQUEST_CMDID);
+ else
+ ret = ath12k_wmi_cmd_send(wmi, skb,
+ WMI_VDEV_START_REQUEST_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to submit vdev_%s cmd\n",
+ restart ? "restart" : "start");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_vdev_up(struct ath12k *ar, u32 vdev_id, u32 aid, const u8 *bssid)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_vdev_up_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_vdev_up_cmd *)skb->data;
+
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_UP_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->vdev_assoc_id = cpu_to_le32(aid);
+
+ ether_addr_copy(cmd->vdev_bssid.addr, bssid);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI mgmt vdev up id 0x%x assoc id %d bssid %pM\n",
+ vdev_id, aid, bssid);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_VDEV_UP_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to submit WMI_VDEV_UP cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_send_peer_create_cmd(struct ath12k *ar,
+ struct ath12k_wmi_peer_create_arg *arg)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_peer_create_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_peer_create_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PEER_CREATE_CMD,
+ sizeof(*cmd));
+
+ ether_addr_copy(cmd->peer_macaddr.addr, arg->peer_addr);
+ cmd->peer_type = cpu_to_le32(arg->peer_type);
+ cmd->vdev_id = cpu_to_le32(arg->vdev_id);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI peer create vdev_id %d peer_addr %pM\n",
+ arg->vdev_id, arg->peer_addr);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_PEER_CREATE_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to submit WMI_PEER_CREATE cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_send_peer_delete_cmd(struct ath12k *ar,
+ const u8 *peer_addr, u8 vdev_id)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_peer_delete_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_peer_delete_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PEER_DELETE_CMD,
+ sizeof(*cmd));
+
+ ether_addr_copy(cmd->peer_macaddr.addr, peer_addr);
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI peer delete vdev_id %d peer_addr %pM\n",
+ vdev_id, peer_addr);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_PEER_DELETE_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send WMI_PEER_DELETE cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_send_pdev_set_regdomain(struct ath12k *ar,
+ struct ath12k_wmi_pdev_set_regdomain_arg *arg)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_pdev_set_regdomain_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_pdev_set_regdomain_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PDEV_SET_REGDOMAIN_CMD,
+ sizeof(*cmd));
+
+ cmd->reg_domain = cpu_to_le32(arg->current_rd_in_use);
+ cmd->reg_domain_2g = cpu_to_le32(arg->current_rd_2g);
+ cmd->reg_domain_5g = cpu_to_le32(arg->current_rd_5g);
+ cmd->conformance_test_limit_2g = cpu_to_le32(arg->ctl_2g);
+ cmd->conformance_test_limit_5g = cpu_to_le32(arg->ctl_5g);
+ cmd->dfs_domain = cpu_to_le32(arg->dfs_domain);
+ cmd->pdev_id = cpu_to_le32(arg->pdev_id);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI pdev regd rd %d rd2g %d rd5g %d domain %d pdev id %d\n",
+ arg->current_rd_in_use, arg->current_rd_2g,
+ arg->current_rd_5g, arg->dfs_domain, arg->pdev_id);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_PDEV_SET_REGDOMAIN_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_PDEV_SET_REGDOMAIN cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_set_peer_param(struct ath12k *ar, const u8 *peer_addr,
+ u32 vdev_id, u32 param_id, u32 param_val)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_peer_set_param_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_peer_set_param_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PEER_SET_PARAM_CMD,
+ sizeof(*cmd));
+ ether_addr_copy(cmd->peer_macaddr.addr, peer_addr);
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->param_id = cpu_to_le32(param_id);
+ cmd->param_value = cpu_to_le32(param_val);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI vdev %d peer 0x%pM set param %d value %d\n",
+ vdev_id, peer_addr, param_id, param_val);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_PEER_SET_PARAM_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send WMI_PEER_SET_PARAM cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_send_peer_flush_tids_cmd(struct ath12k *ar,
+ u8 peer_addr[ETH_ALEN],
+ u32 peer_tid_bitmap,
+ u8 vdev_id)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_peer_flush_tids_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_peer_flush_tids_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PEER_FLUSH_TIDS_CMD,
+ sizeof(*cmd));
+
+ ether_addr_copy(cmd->peer_macaddr.addr, peer_addr);
+ cmd->peer_tid_bitmap = cpu_to_le32(peer_tid_bitmap);
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI peer flush vdev_id %d peer_addr %pM tids %08x\n",
+ vdev_id, peer_addr, peer_tid_bitmap);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_PEER_FLUSH_TIDS_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_PEER_FLUSH_TIDS cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
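+/* The RX reorder queue is described to firmware by the DMA address of its
+ * queue descriptor, split into queue_ptr_lo/hi because the WMI fields are
+ * 32 bits wide, plus the TID it serves. ba_window_size is presumably
+ * honored by firmware only when ba_window_size_valid is set.
+ */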
+int ath12k_wmi_peer_rx_reorder_queue_setup(struct ath12k *ar,
+ int vdev_id, const u8 *addr,
+ dma_addr_t paddr, u8 tid,
+ u8 ba_window_size_valid,
+ u32 ba_window_size)
+{
+ struct wmi_peer_reorder_queue_setup_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_peer_reorder_queue_setup_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_REORDER_QUEUE_SETUP_CMD,
+ sizeof(*cmd));
+
+ ether_addr_copy(cmd->peer_macaddr.addr, addr);
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->tid = cpu_to_le32(tid);
+ cmd->queue_ptr_lo = cpu_to_le32(lower_32_bits(paddr));
+ cmd->queue_ptr_hi = cpu_to_le32(upper_32_bits(paddr));
+ cmd->queue_no = cpu_to_le32(tid);
+ cmd->ba_window_size_valid = cpu_to_le32(ba_window_size_valid);
+ cmd->ba_window_size = cpu_to_le32(ba_window_size);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "wmi rx reorder queue setup addr %pM vdev_id %d tid %d\n",
+ addr, vdev_id, tid);
+
+ ret = ath12k_wmi_cmd_send(ar->wmi, skb,
+ WMI_PEER_REORDER_QUEUE_SETUP_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_PEER_REORDER_QUEUE_SETUP\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int
+ath12k_wmi_rx_reord_queue_remove(struct ath12k *ar,
+ struct ath12k_wmi_rx_reorder_queue_remove_arg *arg)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_peer_reorder_queue_remove_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_peer_reorder_queue_remove_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_REORDER_QUEUE_REMOVE_CMD,
+ sizeof(*cmd));
+
+ ether_addr_copy(cmd->peer_macaddr.addr, arg->peer_macaddr);
+ cmd->vdev_id = cpu_to_le32(arg->vdev_id);
+ cmd->tid_mask = cpu_to_le32(arg->peer_tid_bitmap);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "%s: peer_macaddr %pM vdev_id %d, tid_map %d", __func__,
+ arg->peer_macaddr, arg->vdev_id, arg->peer_tid_bitmap);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb,
+ WMI_PEER_REORDER_QUEUE_REMOVE_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_PEER_REORDER_QUEUE_REMOVE_CMDID");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_pdev_set_param(struct ath12k *ar, u32 param_id,
+ u32 param_value, u8 pdev_id)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_pdev_set_param_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_pdev_set_param_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PDEV_SET_PARAM_CMD,
+ sizeof(*cmd));
+ cmd->pdev_id = cpu_to_le32(pdev_id);
+ cmd->param_id = cpu_to_le32(param_id);
+ cmd->param_value = cpu_to_le32(param_value);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI pdev set param %d pdev id %d value %d\n",
+ param_id, pdev_id, param_value);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_PDEV_SET_PARAM_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send WMI_PDEV_SET_PARAM cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_pdev_set_ps_mode(struct ath12k *ar, int vdev_id, u32 enable)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_pdev_set_ps_mode_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_pdev_set_ps_mode_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_STA_POWERSAVE_MODE_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->sta_ps_mode = cpu_to_le32(enable);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI vdev set psmode %d vdev id %d\n",
+ enable, vdev_id);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_STA_POWERSAVE_MODE_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send WMI_PDEV_SET_PARAM cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_pdev_suspend(struct ath12k *ar, u32 suspend_opt,
+ u32 pdev_id)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_pdev_suspend_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_pdev_suspend_cmd *)skb->data;
+
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PDEV_SUSPEND_CMD,
+ sizeof(*cmd));
+
+ cmd->suspend_opt = cpu_to_le32(suspend_opt);
+ cmd->pdev_id = cpu_to_le32(pdev_id);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI pdev suspend pdev_id %d\n", pdev_id);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_PDEV_SUSPEND_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send WMI_PDEV_SUSPEND cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_pdev_resume(struct ath12k *ar, u32 pdev_id)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_pdev_resume_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_pdev_resume_cmd *)skb->data;
+
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PDEV_RESUME_CMD,
+ sizeof(*cmd));
+ cmd->pdev_id = cpu_to_le32(pdev_id);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI pdev resume pdev id %d\n", pdev_id);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_PDEV_RESUME_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send WMI_PDEV_RESUME cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+/* TODO: firmware support for this command is not available yet.
+ * It can be tested once the command and the corresponding event
+ * are implemented in firmware.
+ */
+int ath12k_wmi_pdev_bss_chan_info_request(struct ath12k *ar,
+ enum wmi_bss_chan_info_req_type type)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_pdev_bss_chan_info_req_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_pdev_bss_chan_info_req_cmd *)skb->data;
+
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PDEV_BSS_CHAN_INFO_REQUEST,
+ sizeof(*cmd));
+ cmd->req_type = cpu_to_le32(type);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI bss chan info req type %d\n", type);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb,
+ WMI_PDEV_BSS_CHAN_INFO_REQUEST_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_PDEV_BSS_CHAN_INFO_REQUEST cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_send_set_ap_ps_param_cmd(struct ath12k *ar, u8 *peer_addr,
+ struct ath12k_wmi_ap_ps_arg *arg)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_ap_ps_peer_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_ap_ps_peer_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_AP_PS_PEER_CMD,
+ sizeof(*cmd));
+
+ cmd->vdev_id = cpu_to_le32(arg->vdev_id);
+ ether_addr_copy(cmd->peer_macaddr.addr, peer_addr);
+ cmd->param = cpu_to_le32(arg->param);
+ cmd->value = cpu_to_le32(arg->value);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI set ap ps vdev id %d peer %pM param %d value %d\n",
+ arg->vdev_id, peer_addr, arg->param, arg->value);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_AP_PS_PEER_PARAM_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_AP_PS_PEER_PARAM_CMDID\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_set_sta_ps_param(struct ath12k *ar, u32 vdev_id,
+ u32 param, u32 param_value)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_sta_powersave_param_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_sta_powersave_param_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_STA_POWERSAVE_PARAM_CMD,
+ sizeof(*cmd));
+
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->param = cpu_to_le32(param);
+ cmd->value = cpu_to_le32(param_value);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI set sta ps vdev_id %d param %d value %d\n",
+ vdev_id, param, param_value);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_STA_POWERSAVE_PARAM_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send WMI_STA_POWERSAVE_PARAM_CMDID");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_force_fw_hang_cmd(struct ath12k *ar, u32 type, u32 delay_time_ms)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_force_fw_hang_cmd *cmd;
+ struct sk_buff *skb;
+ int ret, len;
+
+ len = sizeof(*cmd);
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_force_fw_hang_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_FORCE_FW_HANG_CMD,
+ len);
+
+ cmd->type = cpu_to_le32(type);
+ cmd->delay_time_ms = cpu_to_le32(delay_time_ms);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_FORCE_FW_HANG_CMDID);
+
+ if (ret) {
+ ath12k_warn(ar->ab, "Failed to send WMI_FORCE_FW_HANG_CMDID");
+ dev_kfree_skb(skb);
+ }
+ return ret;
+}
+
+int ath12k_wmi_vdev_set_param_cmd(struct ath12k *ar, u32 vdev_id,
+ u32 param_id, u32 param_value)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_vdev_set_param_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_vdev_set_param_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_SET_PARAM_CMD,
+ sizeof(*cmd));
+
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->param_id = cpu_to_le32(param_id);
+ cmd->param_value = cpu_to_le32(param_value);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI vdev id 0x%x set param %d value %d\n",
+ vdev_id, param_id, param_value);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_VDEV_SET_PARAM_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_VDEV_SET_PARAM_CMDID\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_send_pdev_temperature_cmd(struct ath12k *ar)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_get_pdev_temperature_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_get_pdev_temperature_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PDEV_GET_TEMPERATURE_CMD,
+ sizeof(*cmd));
+ cmd->pdev_id = cpu_to_le32(ar->pdev->pdev_id);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI pdev get temperature for pdev_id %d\n", ar->pdev->pdev_id);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_PDEV_GET_TEMPERATURE_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send WMI_PDEV_GET_TEMPERATURE cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_send_bcn_offload_control_cmd(struct ath12k *ar,
+ u32 vdev_id, u32 bcn_ctrl_op)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_bcn_offload_ctrl_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_bcn_offload_ctrl_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_BCN_OFFLOAD_CTRL_CMD,
+ sizeof(*cmd));
+
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->bcn_ctrl_op = cpu_to_le32(bcn_ctrl_op);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI bcn ctrl offload vdev id %d ctrl_op %d\n",
+ vdev_id, bcn_ctrl_op);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_BCN_OFFLOAD_CTRL_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_BCN_OFFLOAD_CTRL_CMDID\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
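+/* The beacon template message has three parts: the fixed wmi_bcn_tmpl_cmd
+ * struct, a bcn_prb_info TLV (capabilities/ERP, left zero here) and a
+ * byte-array TLV carrying the beacon frame itself. The frame is padded to
+ * a 4-byte boundary inside the TLV while buf_len records the unpadded
+ * length.
+ */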
+int ath12k_wmi_bcn_tmpl(struct ath12k *ar, u32 vdev_id,
+ struct ieee80211_mutable_offsets *offs,
+ struct sk_buff *bcn)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_bcn_tmpl_cmd *cmd;
+ struct ath12k_wmi_bcn_prb_info_params *bcn_prb_info;
+ struct wmi_tlv *tlv;
+ struct sk_buff *skb;
+ void *ptr;
+ int ret, len;
+ size_t aligned_len = roundup(bcn->len, 4);
+
+ len = sizeof(*cmd) + sizeof(*bcn_prb_info) + TLV_HDR_SIZE + aligned_len;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_bcn_tmpl_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_BCN_TMPL_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->tim_ie_offset = cpu_to_le32(offs->tim_offset);
+ cmd->csa_switch_count_offset = cpu_to_le32(offs->cntdwn_counter_offs[0]);
+ cmd->ext_csa_switch_count_offset = cpu_to_le32(offs->cntdwn_counter_offs[1]);
+ cmd->buf_len = cpu_to_le32(bcn->len);
+
+ ptr = skb->data + sizeof(*cmd);
+
+ bcn_prb_info = ptr;
+ len = sizeof(*bcn_prb_info);
+ bcn_prb_info->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_BCN_PRB_INFO,
+ len);
+ bcn_prb_info->caps = 0;
+ bcn_prb_info->erp = 0;
+
+ ptr += sizeof(*bcn_prb_info);
+
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, aligned_len);
+ memcpy(tlv->value, bcn->data, bcn->len);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_BCN_TMPL_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send WMI_BCN_TMPL_CMDID\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_vdev_install_key(struct ath12k *ar,
+ struct wmi_vdev_install_key_arg *arg)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_vdev_install_key_cmd *cmd;
+ struct wmi_tlv *tlv;
+ struct sk_buff *skb;
+ int ret, len, key_len_aligned;
+
+ /* WMI_TAG_ARRAY_BYTE needs to be 4-byte aligned; the actual key
+ * length is specified in cmd->key_len.
+ */
+ key_len_aligned = roundup(arg->key_len, 4);
+
+ len = sizeof(*cmd) + TLV_HDR_SIZE + key_len_aligned;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_vdev_install_key_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_INSTALL_KEY_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(arg->vdev_id);
+ ether_addr_copy(cmd->peer_macaddr.addr, arg->macaddr);
+ cmd->key_idx = cpu_to_le32(arg->key_idx);
+ cmd->key_flags = cpu_to_le32(arg->key_flags);
+ cmd->key_cipher = cpu_to_le32(arg->key_cipher);
+ cmd->key_len = cpu_to_le32(arg->key_len);
+ cmd->key_txmic_len = cpu_to_le32(arg->key_txmic_len);
+ cmd->key_rxmic_len = cpu_to_le32(arg->key_rxmic_len);
+
+ if (arg->key_rsc_counter)
+ cmd->key_rsc_counter = cpu_to_le64(arg->key_rsc_counter);
+
+ tlv = (struct wmi_tlv *)(skb->data + sizeof(*cmd));
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, key_len_aligned);
+ memcpy(tlv->value, arg->key_data, arg->key_len);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI vdev install key idx %d cipher %d len %d\n",
+ arg->key_idx, arg->key_cipher, arg->key_len);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_VDEV_INSTALL_KEY_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_VDEV_INSTALL_KEY cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+static void ath12k_wmi_copy_peer_flags(struct wmi_peer_assoc_complete_cmd *cmd,
+ struct ath12k_wmi_peer_assoc_arg *arg,
+ bool hw_crypto_disabled)
+{
+ cmd->peer_flags = 0;
+
+ if (arg->is_wme_set) {
+ if (arg->qos_flag)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_QOS);
+ if (arg->apsd_flag)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_APSD);
+ if (arg->ht_flag)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_HT);
+ if (arg->bw_40)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_40MHZ);
+ if (arg->bw_80)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_80MHZ);
+ if (arg->bw_160)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_160MHZ);
+
+ /* Typically if STBC is enabled for VHT it should be enabled
+ * for HT as well
+ */
+ if (arg->stbc_flag)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_STBC);
+
+ /* Typically if LDPC is enabled for VHT it should be enabled
+ * for HT as well
+ */
+ if (arg->ldpc_flag)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_LDPC);
+
+ if (arg->static_mimops_flag)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_STATIC_MIMOPS);
+ if (arg->dynamic_mimops_flag)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_DYN_MIMOPS);
+ if (arg->spatial_mux_flag)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_SPATIAL_MUX);
+ if (arg->vht_flag)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_VHT);
+ if (arg->he_flag)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_HE);
+ if (arg->twt_requester)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_TWT_REQ);
+ if (arg->twt_responder)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_TWT_RESP);
+ }
+
+ /* Suppress authorization for all AUTH modes that need 4-way handshake
+ * (during re-association).
+ * Authorization will be done for these modes on key installation.
+ */
+ if (arg->auth_flag)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_AUTH);
+ if (arg->need_ptk_4_way) {
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_NEED_PTK_4_WAY);
+ if (!hw_crypto_disabled)
+ cmd->peer_flags &= cpu_to_le32(~WMI_PEER_AUTH);
+ }
+ if (arg->need_gtk_2_way)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_NEED_GTK_2_WAY);
+ /* Safe mode bypasses the 4-way handshake */
+ if (arg->safe_mode_enabled)
+ cmd->peer_flags &= cpu_to_le32(~(WMI_PEER_NEED_PTK_4_WAY |
+ WMI_PEER_NEED_GTK_2_WAY));
+
+ if (arg->is_pmf_enabled)
+ cmd->peer_flags |= cpu_to_le32(WMI_PEER_PMF);
+
+ /* Disable AMSDU for station transmit, if the user configures it.
+ * Disable AMSDU for AP transmit to 11n stations, if the user
+ * configures it:
+ * if (arg->amsdu_disable) -- add after FW support
+ */
+
+ /* Target asserts if the node is marked HT while all MCS rates are
+ * set to 0. Mark the node as non-HT if all the MCS rates are
+ * disabled through iwpriv.
+ */
+ if (arg->peer_ht_rates.num_rates == 0)
+ cmd->peer_flags &= cpu_to_le32(~WMI_PEER_HT);
+}
+
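+/* The peer assoc message is the fixed command struct followed by four
+ * variable parts, in order: a byte-array TLV of legacy rates, a byte-array
+ * TLV of HT MCS rates, one VHT rate-set struct and an array-struct TLV
+ * holding peer_he_mcs_count HE rate-set entries. Both rate arrays are
+ * padded to 4-byte multiples; the real counts travel in the command struct.
+ */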
+int ath12k_wmi_send_peer_assoc_cmd(struct ath12k *ar,
+ struct ath12k_wmi_peer_assoc_arg *arg)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_peer_assoc_complete_cmd *cmd;
+ struct ath12k_wmi_vht_rate_set_params *mcs;
+ struct ath12k_wmi_he_rate_set_params *he_mcs;
+ struct sk_buff *skb;
+ struct wmi_tlv *tlv;
+ void *ptr;
+ u32 peer_legacy_rates_align;
+ u32 peer_ht_rates_align;
+ int i, ret, len;
+
+ peer_legacy_rates_align = roundup(arg->peer_legacy_rates.num_rates,
+ sizeof(u32));
+ peer_ht_rates_align = roundup(arg->peer_ht_rates.num_rates,
+ sizeof(u32));
+
+ len = sizeof(*cmd) +
+ TLV_HDR_SIZE + (peer_legacy_rates_align * sizeof(u8)) +
+ TLV_HDR_SIZE + (peer_ht_rates_align * sizeof(u8)) +
+ sizeof(*mcs) + TLV_HDR_SIZE +
+ (sizeof(*he_mcs) * arg->peer_he_mcs_count);
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ ptr = skb->data;
+
+ cmd = ptr;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PEER_ASSOC_COMPLETE_CMD,
+ sizeof(*cmd));
+
+ cmd->vdev_id = cpu_to_le32(arg->vdev_id);
+
+ cmd->peer_new_assoc = cpu_to_le32(arg->peer_new_assoc);
+ cmd->peer_associd = cpu_to_le32(arg->peer_associd);
+
+ ath12k_wmi_copy_peer_flags(cmd, arg,
+ test_bit(ATH12K_FLAG_HW_CRYPTO_DISABLED,
+ &ar->ab->dev_flags));
+
+ ether_addr_copy(cmd->peer_macaddr.addr, arg->peer_mac);
+
+ cmd->peer_rate_caps = cpu_to_le32(arg->peer_rate_caps);
+ cmd->peer_caps = cpu_to_le32(arg->peer_caps);
+ cmd->peer_listen_intval = cpu_to_le32(arg->peer_listen_intval);
+ cmd->peer_ht_caps = cpu_to_le32(arg->peer_ht_caps);
+ cmd->peer_max_mpdu = cpu_to_le32(arg->peer_max_mpdu);
+ cmd->peer_mpdu_density = cpu_to_le32(arg->peer_mpdu_density);
+ cmd->peer_vht_caps = cpu_to_le32(arg->peer_vht_caps);
+ cmd->peer_phymode = cpu_to_le32(arg->peer_phymode);
+
+ /* Update 11ax capabilities */
+ cmd->peer_he_cap_info = cpu_to_le32(arg->peer_he_cap_macinfo[0]);
+ cmd->peer_he_cap_info_ext = cpu_to_le32(arg->peer_he_cap_macinfo[1]);
+ cmd->peer_he_cap_info_internal = cpu_to_le32(arg->peer_he_cap_macinfo_internal);
+ cmd->peer_he_caps_6ghz = cpu_to_le32(arg->peer_he_caps_6ghz);
+ cmd->peer_he_ops = cpu_to_le32(arg->peer_he_ops);
+ for (i = 0; i < WMI_MAX_HECAP_PHY_SIZE; i++)
+ cmd->peer_he_cap_phy[i] =
+ cpu_to_le32(arg->peer_he_cap_phyinfo[i]);
+ cmd->peer_ppet.numss_m1 = cpu_to_le32(arg->peer_ppet.numss_m1);
+ cmd->peer_ppet.ru_info = cpu_to_le32(arg->peer_ppet.ru_bit_mask);
+ for (i = 0; i < WMI_MAX_NUM_SS; i++)
+ cmd->peer_ppet.ppet16_ppet8_ru3_ru0[i] =
+ cpu_to_le32(arg->peer_ppet.ppet16_ppet8_ru3_ru0[i]);
+
+ /* Update peer legacy rate information */
+ ptr += sizeof(*cmd);
+
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, peer_legacy_rates_align);
+
+ ptr += TLV_HDR_SIZE;
+
+ cmd->num_peer_legacy_rates = cpu_to_le32(arg->peer_legacy_rates.num_rates);
+ memcpy(ptr, arg->peer_legacy_rates.rates,
+ arg->peer_legacy_rates.num_rates);
+
+ /* Update peer HT rate information */
+ ptr += peer_legacy_rates_align;
+
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, peer_ht_rates_align);
+ ptr += TLV_HDR_SIZE;
+ cmd->num_peer_ht_rates = cpu_to_le32(arg->peer_ht_rates.num_rates);
+ memcpy(ptr, arg->peer_ht_rates.rates,
+ arg->peer_ht_rates.num_rates);
+
+ /* VHT Rates */
+ ptr += peer_ht_rates_align;
+
+ mcs = ptr;
+
+ mcs->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VHT_RATE_SET,
+ sizeof(*mcs));
+
+ cmd->peer_nss = cpu_to_le32(arg->peer_nss);
+
+ /* Update bandwidth-NSS mapping */
+ cmd->peer_bw_rxnss_override = 0;
+ cmd->peer_bw_rxnss_override |= cpu_to_le32(arg->peer_bw_rxnss_override);
+
+ if (arg->vht_capable) {
+ mcs->rx_max_rate = cpu_to_le32(arg->rx_max_rate);
+ mcs->rx_mcs_set = cpu_to_le32(arg->rx_mcs_set);
+ mcs->tx_max_rate = cpu_to_le32(arg->tx_max_rate);
+ mcs->tx_mcs_set = cpu_to_le32(arg->tx_mcs_set);
+ }
+
+ /* HE Rates */
+ cmd->peer_he_mcs = cpu_to_le32(arg->peer_he_mcs_count);
+ cmd->min_data_rate = cpu_to_le32(arg->min_data_rate);
+
+ ptr += sizeof(*mcs);
+
+ len = arg->peer_he_mcs_count * sizeof(*he_mcs);
+
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, len);
+ ptr += TLV_HDR_SIZE;
+
+ /* Loop through the HE rate set */
+ for (i = 0; i < arg->peer_he_mcs_count; i++) {
+ he_mcs = ptr;
+ he_mcs->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_HE_RATE_SET,
+ sizeof(*he_mcs));
+
+ he_mcs->rx_mcs_set = cpu_to_le32(arg->peer_he_rx_mcs_set[i]);
+ he_mcs->tx_mcs_set = cpu_to_le32(arg->peer_he_tx_mcs_set[i]);
+ ptr += sizeof(*he_mcs);
+ }
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "wmi peer assoc vdev id %d assoc id %d peer mac %pM peer_flags %x rate_caps %x peer_caps %x listen_intval %d ht_caps %x max_mpdu %d nss %d phymode %d peer_mpdu_density %d vht_caps %x he cap_info %x he ops %x he cap_info_ext %x he phy %x %x %x peer_bw_rxnss_override %x\n",
+ cmd->vdev_id, cmd->peer_associd, arg->peer_mac,
+ cmd->peer_flags, cmd->peer_rate_caps, cmd->peer_caps,
+ cmd->peer_listen_intval, cmd->peer_ht_caps,
+ cmd->peer_max_mpdu, cmd->peer_nss, cmd->peer_phymode,
+ cmd->peer_mpdu_density,
+ cmd->peer_vht_caps, cmd->peer_he_cap_info,
+ cmd->peer_he_ops, cmd->peer_he_cap_info_ext,
+ cmd->peer_he_cap_phy[0], cmd->peer_he_cap_phy[1],
+ cmd->peer_he_cap_phy[2],
+ cmd->peer_bw_rxnss_override);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_PEER_ASSOC_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_PEER_ASSOC_CMDID\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
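+/* Scan requests are built in two steps: seed an ath12k_wmi_scan_req_arg
+ * with the defaults below, let the caller override what it needs (scan_id,
+ * vdev_id, channel/SSID lists and so on), then hand the result to
+ * ath12k_wmi_send_scan_start_cmd().
+ */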
+void ath12k_wmi_start_scan_init(struct ath12k *ar,
+ struct ath12k_wmi_scan_req_arg *arg)
+{
+ /* setup commonly used values */
+ arg->scan_req_id = 1;
+ arg->scan_priority = WMI_SCAN_PRIORITY_LOW;
+ arg->dwell_time_active = 50;
+ arg->dwell_time_active_2g = 0;
+ arg->dwell_time_passive = 150;
+ arg->dwell_time_active_6g = 40;
+ arg->dwell_time_passive_6g = 30;
+ arg->min_rest_time = 50;
+ arg->max_rest_time = 500;
+ arg->repeat_probe_time = 0;
+ arg->probe_spacing_time = 0;
+ arg->idle_time = 0;
+ arg->max_scan_time = 20000;
+ arg->probe_delay = 5;
+ arg->notify_scan_events = WMI_SCAN_EVENT_STARTED |
+ WMI_SCAN_EVENT_COMPLETED |
+ WMI_SCAN_EVENT_BSS_CHANNEL |
+ WMI_SCAN_EVENT_FOREIGN_CHAN |
+ WMI_SCAN_EVENT_DEQUEUED;
+ arg->scan_flags |= WMI_SCAN_CHAN_STAT_EVENT;
+ arg->num_bssid = 1;
+
+ /* Fill bssid_list[0] with 0xff, otherwise the BSSID and RA will be
+ * zeros in the probe request
+ */
+ eth_broadcast_addr(arg->bssid_list[0].addr);
+}
+
+static void ath12k_wmi_copy_scan_event_cntrl_flags(struct wmi_start_scan_cmd *cmd,
+ struct ath12k_wmi_scan_req_arg *arg)
+{
+ /* Scan events subscription */
+ if (arg->scan_ev_started)
+ cmd->notify_scan_events |= cpu_to_le32(WMI_SCAN_EVENT_STARTED);
+ if (arg->scan_ev_completed)
+ cmd->notify_scan_events |= cpu_to_le32(WMI_SCAN_EVENT_COMPLETED);
+ if (arg->scan_ev_bss_chan)
+ cmd->notify_scan_events |= cpu_to_le32(WMI_SCAN_EVENT_BSS_CHANNEL);
+ if (arg->scan_ev_foreign_chan)
+ cmd->notify_scan_events |= cpu_to_le32(WMI_SCAN_EVENT_FOREIGN_CHAN);
+ if (arg->scan_ev_dequeued)
+ cmd->notify_scan_events |= cpu_to_le32(WMI_SCAN_EVENT_DEQUEUED);
+ if (arg->scan_ev_preempted)
+ cmd->notify_scan_events |= cpu_to_le32(WMI_SCAN_EVENT_PREEMPTED);
+ if (arg->scan_ev_start_failed)
+ cmd->notify_scan_events |= cpu_to_le32(WMI_SCAN_EVENT_START_FAILED);
+ if (arg->scan_ev_restarted)
+ cmd->notify_scan_events |= cpu_to_le32(WMI_SCAN_EVENT_RESTARTED);
+ if (arg->scan_ev_foreign_chn_exit)
+ cmd->notify_scan_events |= cpu_to_le32(WMI_SCAN_EVENT_FOREIGN_CHAN_EXIT);
+ if (arg->scan_ev_suspended)
+ cmd->notify_scan_events |= cpu_to_le32(WMI_SCAN_EVENT_SUSPENDED);
+ if (arg->scan_ev_resumed)
+ cmd->notify_scan_events |= cpu_to_le32(WMI_SCAN_EVENT_RESUMED);
+
+ /* Set scan control flags */
+ cmd->scan_ctrl_flags = 0;
+ if (arg->scan_f_passive)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_FLAG_PASSIVE);
+ if (arg->scan_f_strict_passive_pch)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_FLAG_STRICT_PASSIVE_ON_PCHN);
+ if (arg->scan_f_promisc_mode)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_FILTER_PROMISCUOS);
+ if (arg->scan_f_capture_phy_err)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_CAPTURE_PHY_ERROR);
+ if (arg->scan_f_half_rate)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_FLAG_HALF_RATE_SUPPORT);
+ if (arg->scan_f_quarter_rate)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_FLAG_QUARTER_RATE_SUPPORT);
+ if (arg->scan_f_cck_rates)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_ADD_CCK_RATES);
+ if (arg->scan_f_ofdm_rates)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_ADD_OFDM_RATES);
+ if (arg->scan_f_chan_stat_evnt)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_CHAN_STAT_EVENT);
+ if (arg->scan_f_filter_prb_req)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_FILTER_PROBE_REQ);
+ if (arg->scan_f_bcast_probe)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_ADD_BCAST_PROBE_REQ);
+ if (arg->scan_f_offchan_mgmt_tx)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_OFFCHAN_MGMT_TX);
+ if (arg->scan_f_offchan_data_tx)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_OFFCHAN_DATA_TX);
+ if (arg->scan_f_force_active_dfs_chn)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_FLAG_FORCE_ACTIVE_ON_DFS);
+ if (arg->scan_f_add_tpc_ie_in_probe)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_ADD_TPC_IE_IN_PROBE_REQ);
+ if (arg->scan_f_add_ds_ie_in_probe)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_ADD_DS_IE_IN_PROBE_REQ);
+ if (arg->scan_f_add_spoofed_mac_in_probe)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_ADD_SPOOF_MAC_IN_PROBE_REQ);
+ if (arg->scan_f_add_rand_seq_in_probe)
+ cmd->scan_ctrl_flags |= cpu_to_le32(WMI_SCAN_RANDOM_SEQ_NO_IN_PROBE_REQ);
+ if (arg->scan_f_en_ie_whitelist_in_probe)
+ cmd->scan_ctrl_flags |=
+ cpu_to_le32(WMI_SCAN_ENABLE_IE_WHTELIST_IN_PROBE_REQ);
+
+ cmd->scan_ctrl_flags |= le32_encode_bits(arg->adaptive_dwell_time_mode,
+ WMI_SCAN_DWELL_MODE_MASK);
+}
+
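+/* The start scan message always emits the TLV headers for the channel,
+ * SSID, BSSID and extra-IE arrays, even when a given array is empty (the
+ * header then advertises zero length). The short-SSID and BSSID hint
+ * arrays are appended only when present.
+ */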
+int ath12k_wmi_send_scan_start_cmd(struct ath12k *ar,
+ struct ath12k_wmi_scan_req_arg *arg)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_start_scan_cmd *cmd;
+ struct ath12k_wmi_ssid_params *ssid = NULL;
+ struct ath12k_wmi_mac_addr_params *bssid;
+ struct sk_buff *skb;
+ struct wmi_tlv *tlv;
+ void *ptr;
+ int i, ret, len;
+ u32 *tmp_ptr;
+ u8 extraie_len_with_pad = 0;
+ struct ath12k_wmi_hint_short_ssid_arg *s_ssid = NULL;
+ struct ath12k_wmi_hint_bssid_arg *hint_bssid = NULL;
+
+ len = sizeof(*cmd);
+
+ len += TLV_HDR_SIZE;
+ if (arg->num_chan)
+ len += arg->num_chan * sizeof(u32);
+
+ len += TLV_HDR_SIZE;
+ if (arg->num_ssids)
+ len += arg->num_ssids * sizeof(*ssid);
+
+ len += TLV_HDR_SIZE;
+ if (arg->num_bssid)
+ len += sizeof(*bssid) * arg->num_bssid;
+
+ len += TLV_HDR_SIZE;
+ if (arg->extraie.len)
+ extraie_len_with_pad =
+ roundup(arg->extraie.len, sizeof(u32));
+ len += extraie_len_with_pad;
+
+ if (arg->num_hint_bssid)
+ len += TLV_HDR_SIZE +
+ arg->num_hint_bssid * sizeof(*hint_bssid);
+
+ if (arg->num_hint_s_ssid)
+ len += TLV_HDR_SIZE +
+ arg->num_hint_s_ssid * sizeof(*s_ssid);
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ ptr = skb->data;
+
+ cmd = ptr;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_START_SCAN_CMD,
+ sizeof(*cmd));
+
+ cmd->scan_id = cpu_to_le32(arg->scan_id);
+ cmd->scan_req_id = cpu_to_le32(arg->scan_req_id);
+ cmd->vdev_id = cpu_to_le32(arg->vdev_id);
+ cmd->scan_priority = cpu_to_le32(arg->scan_priority);
+ cmd->notify_scan_events = cpu_to_le32(arg->notify_scan_events);
+
+ ath12k_wmi_copy_scan_event_cntrl_flags(cmd, arg);
+
+ cmd->dwell_time_active = cpu_to_le32(arg->dwell_time_active);
+ cmd->dwell_time_active_2g = cpu_to_le32(arg->dwell_time_active_2g);
+ cmd->dwell_time_passive = cpu_to_le32(arg->dwell_time_passive);
+ cmd->dwell_time_active_6g = cpu_to_le32(arg->dwell_time_active_6g);
+ cmd->dwell_time_passive_6g = cpu_to_le32(arg->dwell_time_passive_6g);
+ cmd->min_rest_time = cpu_to_le32(arg->min_rest_time);
+ cmd->max_rest_time = cpu_to_le32(arg->max_rest_time);
+ cmd->repeat_probe_time = cpu_to_le32(arg->repeat_probe_time);
+ cmd->probe_spacing_time = cpu_to_le32(arg->probe_spacing_time);
+ cmd->idle_time = cpu_to_le32(arg->idle_time);
+ cmd->max_scan_time = cpu_to_le32(arg->max_scan_time);
+ cmd->probe_delay = cpu_to_le32(arg->probe_delay);
+ cmd->burst_duration = cpu_to_le32(arg->burst_duration);
+ cmd->num_chan = cpu_to_le32(arg->num_chan);
+ cmd->num_bssid = cpu_to_le32(arg->num_bssid);
+ cmd->num_ssids = cpu_to_le32(arg->num_ssids);
+ cmd->ie_len = cpu_to_le32(arg->extraie.len);
+ cmd->n_probes = cpu_to_le32(arg->n_probes);
+
+ ptr += sizeof(*cmd);
+
+ len = arg->num_chan * sizeof(u32);
+
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, len);
+ ptr += TLV_HDR_SIZE;
+ tmp_ptr = (u32 *)ptr;
+
+ memcpy(tmp_ptr, arg->chan_list, arg->num_chan * sizeof(u32));
+
+ ptr += len;
+
+ len = arg->num_ssids * sizeof(*ssid);
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_FIXED_STRUCT, len);
+
+ ptr += TLV_HDR_SIZE;
+
+ if (arg->num_ssids) {
+ ssid = ptr;
+ for (i = 0; i < arg->num_ssids; ++i) {
+ ssid->ssid_len = cpu_to_le32(arg->ssid[i].ssid_len);
+ memcpy(ssid->ssid, arg->ssid[i].ssid,
+ arg->ssid[i].ssid_len);
+ ssid++;
+ }
+ }
+
+ ptr += (arg->num_ssids * sizeof(*ssid));
+ len = arg->num_bssid * sizeof(*bssid);
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_FIXED_STRUCT, len);
+
+ ptr += TLV_HDR_SIZE;
+ bssid = ptr;
+
+ if (arg->num_bssid) {
+ for (i = 0; i < arg->num_bssid; ++i) {
+ ether_addr_copy(bssid->addr,
+ arg->bssid_list[i].addr);
+ bssid++;
+ }
+ }
+
+ ptr += arg->num_bssid * sizeof(*bssid);
+
+ len = extraie_len_with_pad;
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, len);
+ ptr += TLV_HDR_SIZE;
+
+ if (arg->extraie.len)
+ memcpy(ptr, arg->extraie.ptr,
+ arg->extraie.len);
+
+ ptr += extraie_len_with_pad;
+
+ if (arg->num_hint_s_ssid) {
+ len = arg->num_hint_s_ssid * sizeof(*s_ssid);
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_FIXED_STRUCT, len);
+ ptr += TLV_HDR_SIZE;
+ s_ssid = ptr;
+ for (i = 0; i < arg->num_hint_s_ssid; ++i) {
+ s_ssid->freq_flags = arg->hint_s_ssid[i].freq_flags;
+ s_ssid->short_ssid = arg->hint_s_ssid[i].short_ssid;
+ s_ssid++;
+ }
+ ptr += len;
+ }
+
+ if (arg->num_hint_bssid) {
+ len = arg->num_hint_bssid * sizeof(struct ath12k_wmi_hint_bssid_arg);
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_FIXED_STRUCT, len);
+ ptr += TLV_HDR_SIZE;
+ hint_bssid = ptr;
+ for (i = 0; i < arg->num_hint_bssid; ++i) {
+ hint_bssid->freq_flags =
+ arg->hint_bssid[i].freq_flags;
+ ether_addr_copy(&hint_bssid->bssid.addr[0],
+ &arg->hint_bssid[i].bssid.addr[0]);
+ hint_bssid++;
+ }
+ }
+
+ ret = ath12k_wmi_cmd_send(wmi, skb,
+ WMI_START_SCAN_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send WMI_START_SCAN_CMDID\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
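+/* Hypothetical usage sketch (for illustration only): cancelling every scan
+ * on a pdev would look roughly like
+ *
+ *	struct ath12k_wmi_scan_cancel_arg arg = {
+ *		.req_type = WLAN_SCAN_CANCEL_PDEV_ALL,
+ *		.pdev_id = ar->pdev->pdev_id,
+ *	};
+ *
+ *	ret = ath12k_wmi_send_scan_stop_cmd(ar, &arg);
+ */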
+int ath12k_wmi_send_scan_stop_cmd(struct ath12k *ar,
+ struct ath12k_wmi_scan_cancel_arg *arg)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_stop_scan_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_stop_scan_cmd *)skb->data;
+
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_STOP_SCAN_CMD,
+ sizeof(*cmd));
+
+ cmd->vdev_id = cpu_to_le32(arg->vdev_id);
+ cmd->requestor = cpu_to_le32(arg->requester);
+ cmd->scan_id = cpu_to_le32(arg->scan_id);
+ cmd->pdev_id = cpu_to_le32(arg->pdev_id);
+ /* stop the scan with the corresponding scan_id */
+ if (arg->req_type == WLAN_SCAN_CANCEL_PDEV_ALL) {
+ /* Cancelling all scans */
+ cmd->req_type = cpu_to_le32(WMI_SCAN_STOP_ALL);
+ } else if (arg->req_type == WLAN_SCAN_CANCEL_VDEV_ALL) {
+ /* Cancelling VAP scans */
+ cmd->req_type = cpu_to_le32(WMI_SCAN_STOP_VAP_ALL);
+ } else if (arg->req_type == WLAN_SCAN_CANCEL_SINGLE) {
+ /* Cancelling specific scan */
+ cmd->req_type = cpu_to_le32(WMI_SCAN_STOP_ONE);
+ } else {
+ ath12k_warn(ar->ab, "invalid scan cancel req_type %d",
+ arg->req_type);
+ dev_kfree_skb(skb);
+ return -EINVAL;
+ }
+
+ ret = ath12k_wmi_cmd_send(wmi, skb,
+ WMI_STOP_SCAN_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send WMI_STOP_SCAN_CMDID\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
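+/* The channel list can exceed the maximum WMI message size, so it is sent
+ * in chunks: each pass sizes one command for at most max_chan_limit
+ * channels (derived from max_msg_len for this pdev) and every chunk after
+ * the first sets WMI_APPEND_TO_EXISTING_CHAN_LIST_FLAG so that firmware
+ * appends to, rather than replaces, the list sent so far.
+ */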
+int ath12k_wmi_send_scan_chan_list_cmd(struct ath12k *ar,
+ struct ath12k_wmi_scan_chan_list_arg *arg)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_scan_chan_list_cmd *cmd;
+ struct sk_buff *skb;
+ struct ath12k_wmi_channel_params *chan_info;
+ struct ath12k_wmi_channel_arg *channel_arg;
+ struct wmi_tlv *tlv;
+ void *ptr;
+ int i, ret, len;
+ u16 num_send_chans, num_sends = 0, max_chan_limit = 0;
+ __le32 *reg1, *reg2;
+
+ channel_arg = &arg->channel[0];
+ while (arg->nallchans) {
+ len = sizeof(*cmd) + TLV_HDR_SIZE;
+ max_chan_limit = (wmi->wmi_ab->max_msg_len[ar->pdev_idx] - len) /
+ sizeof(*chan_info);
+
+ num_send_chans = min(arg->nallchans, max_chan_limit);
+
+ arg->nallchans -= num_send_chans;
+ len += sizeof(*chan_info) * num_send_chans;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_scan_chan_list_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_SCAN_CHAN_LIST_CMD,
+ sizeof(*cmd));
+ cmd->pdev_id = cpu_to_le32(arg->pdev_id);
+ cmd->num_scan_chans = cpu_to_le32(num_send_chans);
+ if (num_sends)
+ cmd->flags |= cpu_to_le32(WMI_APPEND_TO_EXISTING_CHAN_LIST_FLAG);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI no.of chan = %d len = %d pdev_id = %d num_sends = %d\n",
+ num_send_chans, len, arg->pdev_id, num_sends);
+
+ ptr = skb->data + sizeof(*cmd);
+
+ len = sizeof(*chan_info) * num_send_chans;
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT,
+ len);
+ ptr += TLV_HDR_SIZE;
+
+ for (i = 0; i < num_send_chans; ++i) {
+ chan_info = ptr;
+ memset(chan_info, 0, sizeof(*chan_info));
+ len = sizeof(*chan_info);
+ chan_info->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_CHANNEL,
+ len);
+
+ reg1 = &chan_info->reg_info_1;
+ reg2 = &chan_info->reg_info_2;
+ chan_info->mhz = cpu_to_le32(channel_arg->mhz);
+ chan_info->band_center_freq1 = cpu_to_le32(channel_arg->cfreq1);
+ chan_info->band_center_freq2 = cpu_to_le32(channel_arg->cfreq2);
+
+ if (channel_arg->is_chan_passive)
+ chan_info->info |= cpu_to_le32(WMI_CHAN_INFO_PASSIVE);
+ if (channel_arg->allow_he)
+ chan_info->info |= cpu_to_le32(WMI_CHAN_INFO_ALLOW_HE);
+ else if (channel_arg->allow_vht)
+ chan_info->info |= cpu_to_le32(WMI_CHAN_INFO_ALLOW_VHT);
+ else if (channel_arg->allow_ht)
+ chan_info->info |= cpu_to_le32(WMI_CHAN_INFO_ALLOW_HT);
+ if (channel_arg->half_rate)
+ chan_info->info |= cpu_to_le32(WMI_CHAN_INFO_HALF_RATE);
+ if (channel_arg->quarter_rate)
+ chan_info->info |=
+ cpu_to_le32(WMI_CHAN_INFO_QUARTER_RATE);
+
+ if (channel_arg->psc_channel)
+ chan_info->info |= cpu_to_le32(WMI_CHAN_INFO_PSC);
+
+ chan_info->info |= le32_encode_bits(channel_arg->phy_mode,
+ WMI_CHAN_INFO_MODE);
+ *reg1 |= le32_encode_bits(channel_arg->minpower,
+ WMI_CHAN_REG_INFO1_MIN_PWR);
+ *reg1 |= le32_encode_bits(channel_arg->maxpower,
+ WMI_CHAN_REG_INFO1_MAX_PWR);
+ *reg1 |= le32_encode_bits(channel_arg->maxregpower,
+ WMI_CHAN_REG_INFO1_MAX_REG_PWR);
+ *reg1 |= le32_encode_bits(channel_arg->reg_class_id,
+ WMI_CHAN_REG_INFO1_REG_CLS);
+ *reg2 |= le32_encode_bits(channel_arg->antennamax,
+ WMI_CHAN_REG_INFO2_ANT_MAX);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI chan scan list chan[%d] = %u, chan_info->info %8x\n",
+ i, chan_info->mhz, chan_info->info);
+
+ ptr += sizeof(*chan_info);
+
+ channel_arg++;
+ }
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_SCAN_CHAN_LIST_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send WMI_SCAN_CHAN_LIST cmd\n");
+ dev_kfree_skb(skb);
+ return ret;
+ }
+
+ num_sends++;
+ }
+
+ return 0;
+}
+
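+/* One command updates all four access categories: cmd->wmm_params carries
+ * a wmi_wmm_params entry per AC (BE/BK/VI/VO), each with its own TLV
+ * header plus the AIFS, CWmin/CWmax, TXOP, ACM and no-ack settings.
+ */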
+int ath12k_wmi_send_wmm_update_cmd(struct ath12k *ar, u32 vdev_id,
+ struct wmi_wmm_params_all_arg *param)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_vdev_set_wmm_params_cmd *cmd;
+ struct wmi_wmm_params *wmm_param;
+ struct wmi_wmm_params_arg *wmi_wmm_arg;
+ struct sk_buff *skb;
+ int ret, ac;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_vdev_set_wmm_params_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_SET_WMM_PARAMS_CMD,
+ sizeof(*cmd));
+
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->wmm_param_type = 0;
+
+ for (ac = 0; ac < WME_NUM_AC; ac++) {
+ switch (ac) {
+ case WME_AC_BE:
+ wmi_wmm_arg = &param->ac_be;
+ break;
+ case WME_AC_BK:
+ wmi_wmm_arg = &param->ac_bk;
+ break;
+ case WME_AC_VI:
+ wmi_wmm_arg = &param->ac_vi;
+ break;
+ case WME_AC_VO:
+ wmi_wmm_arg = &param->ac_vo;
+ break;
+ }
+
+ wmm_param = (struct wmi_wmm_params *)&cmd->wmm_params[ac];
+ wmm_param->tlv_header =
+ ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_SET_WMM_PARAMS_CMD,
+ sizeof(*wmm_param));
+
+ wmm_param->aifs = cpu_to_le32(wmi_wmm_arg->aifs);
+ wmm_param->cwmin = cpu_to_le32(wmi_wmm_arg->cwmin);
+ wmm_param->cwmax = cpu_to_le32(wmi_wmm_arg->cwmax);
+ wmm_param->txoplimit = cpu_to_le32(wmi_wmm_arg->txop);
+ wmm_param->acm = cpu_to_le32(wmi_wmm_arg->acm);
+ wmm_param->no_ack = cpu_to_le32(wmi_wmm_arg->no_ack);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "wmi wmm set ac %d aifs %d cwmin %d cwmax %d txop %d acm %d no_ack %d\n",
+ ac, wmm_param->aifs, wmm_param->cwmin,
+ wmm_param->cwmax, wmm_param->txoplimit,
+ wmm_param->acm, wmm_param->no_ack);
+ }
+ ret = ath12k_wmi_cmd_send(wmi, skb,
+ WMI_VDEV_SET_WMM_PARAMS_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_VDEV_SET_WMM_PARAMS_CMDID");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_send_dfs_phyerr_offload_enable_cmd(struct ath12k *ar,
+ u32 pdev_id)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_dfs_phyerr_offload_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_dfs_phyerr_offload_cmd *)skb->data;
+ cmd->tlv_header =
+ ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PDEV_DFS_PHYERR_OFFLOAD_ENABLE_CMD,
+ sizeof(*cmd));
+
+ cmd->pdev_id = cpu_to_le32(pdev_id);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI dfs phy err offload enable pdev id %d\n", pdev_id);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb,
+ WMI_PDEV_DFS_PHYERR_OFFLOAD_ENABLE_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_PDEV_DFS_PHYERR_OFFLOAD_ENABLE cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_delba_send(struct ath12k *ar, u32 vdev_id, const u8 *mac,
+ u32 tid, u32 initiator, u32 reason)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_delba_send_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_delba_send_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_DELBA_SEND_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ ether_addr_copy(cmd->peer_macaddr.addr, mac);
+ cmd->tid = cpu_to_le32(tid);
+ cmd->initiator = cpu_to_le32(initiator);
+ cmd->reasoncode = cpu_to_le32(reason);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "wmi delba send vdev_id 0x%X mac_addr %pM tid %u initiator %u reason %u\n",
+ vdev_id, mac, tid, initiator, reason);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_DELBA_SEND_CMDID);
+
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_DELBA_SEND_CMDID cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_addba_set_resp(struct ath12k *ar, u32 vdev_id, const u8 *mac,
+ u32 tid, u32 status)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_addba_setresponse_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_addba_setresponse_cmd *)skb->data;
+ cmd->tlv_header =
+ ath12k_wmi_tlv_cmd_hdr(WMI_TAG_ADDBA_SETRESPONSE_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ ether_addr_copy(cmd->peer_macaddr.addr, mac);
+ cmd->tid = cpu_to_le32(tid);
+ cmd->statuscode = cpu_to_le32(status);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "wmi addba set resp vdev_id 0x%X mac_addr %pM tid %u status %u\n",
+ vdev_id, mac, tid, status);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_ADDBA_SET_RESP_CMDID);
+
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_ADDBA_SET_RESP_CMDID cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_addba_send(struct ath12k *ar, u32 vdev_id, const u8 *mac,
+ u32 tid, u32 buf_size)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_addba_send_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_addba_send_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_ADDBA_SEND_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ ether_addr_copy(cmd->peer_macaddr.addr, mac);
+ cmd->tid = cpu_to_le32(tid);
+ cmd->buffersize = cpu_to_le32(buf_size);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "wmi addba send vdev_id 0x%X mac_addr %pM tid %u bufsize %u\n",
+ vdev_id, mac, tid, buf_size);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_ADDBA_SEND_CMDID);
+
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_ADDBA_SEND_CMDID cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_addba_clear_resp(struct ath12k *ar, u32 vdev_id, const u8 *mac)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_addba_clear_resp_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_addba_clear_resp_cmd *)skb->data;
+ cmd->tlv_header =
+ ath12k_wmi_tlv_cmd_hdr(WMI_TAG_ADDBA_CLEAR_RESP_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ ether_addr_copy(cmd->peer_macaddr.addr, mac);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "wmi addba clear resp vdev_id 0x%X mac_addr %pM\n",
+ vdev_id, mac);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_ADDBA_CLEAR_RESP_CMDID);
+
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_ADDBA_CLEAR_RESP_CMDID cmd\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_send_init_country_cmd(struct ath12k *ar,
+ struct ath12k_wmi_init_country_arg *arg)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_init_country_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_init_country_cmd *)skb->data;
+ cmd->tlv_header =
+ ath12k_wmi_tlv_cmd_hdr(WMI_TAG_SET_INIT_COUNTRY_CMD,
+ sizeof(*cmd));
+
+ cmd->pdev_id = cpu_to_le32(ar->pdev->pdev_id);
+
+ switch (arg->flags) {
+ case ALPHA_IS_SET:
+ cmd->init_cc_type = cpu_to_le32(WMI_COUNTRY_INFO_TYPE_ALPHA);
+ memcpy(&cmd->cc_info.alpha2, arg->cc_info.alpha2, 3);
+ break;
+ case CC_IS_SET:
+ cmd->init_cc_type = cpu_to_le32(WMI_COUNTRY_INFO_TYPE_COUNTRY_CODE);
+ cmd->cc_info.country_code =
+ cpu_to_le32(arg->cc_info.country_code);
+ break;
+ case REGDMN_IS_SET:
+ cmd->init_cc_type = cpu_to_le32(WMI_COUNTRY_INFO_TYPE_REGDOMAIN);
+ cmd->cc_info.regdom_id = cpu_to_le32(arg->cc_info.regdom_id);
+ break;
+ default:
+ ret = -EINVAL;
+ goto out;
+ }
+
+ ret = ath12k_wmi_cmd_send(wmi, skb,
+ WMI_SET_INIT_COUNTRY_CMDID);
+
+out:
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send WMI_SET_INIT_COUNTRY CMD :%d\n",
+ ret);
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
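+/* TWT is enabled with driver-default tuning: the congestion/interference
+ * thresholds and slot parameters below all come from the ATH12K_TWT_DEF_*
+ * constants rather than from the caller, which only selects the pdev.
+ */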
+int
+ath12k_wmi_send_twt_enable_cmd(struct ath12k *ar, u32 pdev_id)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct ath12k_base *ab = wmi->wmi_ab->ab;
+ struct wmi_twt_enable_params_cmd *cmd;
+ struct sk_buff *skb;
+ int ret, len;
+
+ len = sizeof(*cmd);
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_twt_enable_params_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_TWT_ENABLE_CMD,
+ len);
+ cmd->pdev_id = cpu_to_le32(pdev_id);
+ cmd->sta_cong_timer_ms = cpu_to_le32(ATH12K_TWT_DEF_STA_CONG_TIMER_MS);
+ cmd->default_slot_size = cpu_to_le32(ATH12K_TWT_DEF_DEFAULT_SLOT_SIZE);
+ cmd->congestion_thresh_setup =
+ cpu_to_le32(ATH12K_TWT_DEF_CONGESTION_THRESH_SETUP);
+ cmd->congestion_thresh_teardown =
+ cpu_to_le32(ATH12K_TWT_DEF_CONGESTION_THRESH_TEARDOWN);
+ cmd->congestion_thresh_critical =
+ cpu_to_le32(ATH12K_TWT_DEF_CONGESTION_THRESH_CRITICAL);
+ cmd->interference_thresh_teardown =
+ cpu_to_le32(ATH12K_TWT_DEF_INTERFERENCE_THRESH_TEARDOWN);
+ cmd->interference_thresh_setup =
+ cpu_to_le32(ATH12K_TWT_DEF_INTERFERENCE_THRESH_SETUP);
+ cmd->min_no_sta_setup = cpu_to_le32(ATH12K_TWT_DEF_MIN_NO_STA_SETUP);
+ cmd->min_no_sta_teardown = cpu_to_le32(ATH12K_TWT_DEF_MIN_NO_STA_TEARDOWN);
+ cmd->no_of_bcast_mcast_slots =
+ cpu_to_le32(ATH12K_TWT_DEF_NO_OF_BCAST_MCAST_SLOTS);
+ cmd->min_no_twt_slots = cpu_to_le32(ATH12K_TWT_DEF_MIN_NO_TWT_SLOTS);
+ cmd->max_no_sta_twt = cpu_to_le32(ATH12K_TWT_DEF_MAX_NO_STA_TWT);
+ cmd->mode_check_interval = cpu_to_le32(ATH12K_TWT_DEF_MODE_CHECK_INTERVAL);
+ cmd->add_sta_slot_interval = cpu_to_le32(ATH12K_TWT_DEF_ADD_STA_SLOT_INTERVAL);
+ cmd->remove_sta_slot_interval =
+ cpu_to_le32(ATH12K_TWT_DEF_REMOVE_STA_SLOT_INTERVAL);
+ /* TODO add MBSSID support */
+ cmd->mbss_support = 0;
+
+ ret = ath12k_wmi_cmd_send(wmi, skb,
+ WMI_TWT_ENABLE_CMDID);
+ if (ret) {
+ ath12k_warn(ab, "Failed to send WMI_TWT_ENABLE_CMDID");
+ dev_kfree_skb(skb);
+ }
+ return ret;
+}
+
+int
+ath12k_wmi_send_twt_disable_cmd(struct ath12k *ar, u32 pdev_id)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct ath12k_base *ab = wmi->wmi_ab->ab;
+ struct wmi_twt_disable_params_cmd *cmd;
+ struct sk_buff *skb;
+ int ret, len;
+
+ len = sizeof(*cmd);
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_twt_disable_params_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_TWT_DISABLE_CMD,
+ len);
+ cmd->pdev_id = cpu_to_le32(pdev_id);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb,
+ WMI_TWT_DISABLE_CMDID);
+ if (ret) {
+ ath12k_warn(ab, "Failed to send WMI_TWT_DISABLE_CMDID");
+ dev_kfree_skb(skb);
+ }
+ return ret;
+}
+
+int
+ath12k_wmi_send_obss_spr_cmd(struct ath12k *ar, u32 vdev_id,
+ struct ieee80211_he_obss_pd *he_obss_pd)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct ath12k_base *ab = wmi->wmi_ab->ab;
+ struct wmi_obss_spatial_reuse_params_cmd *cmd;
+ struct sk_buff *skb;
+ int ret, len;
+
+ len = sizeof(*cmd);
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_obss_spatial_reuse_params_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_OBSS_SPATIAL_REUSE_SET_CMD,
+ len);
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->enable = cpu_to_le32(he_obss_pd->enable);
+ cmd->obss_min = a_cpu_to_sle32(he_obss_pd->min_offset);
+ cmd->obss_max = a_cpu_to_sle32(he_obss_pd->max_offset);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb,
+ WMI_PDEV_OBSS_PD_SPATIAL_REUSE_CMDID);
+ if (ret) {
+ ath12k_warn(ab,
+ "failed to send WMI_PDEV_OBSS_PD_SPATIAL_REUSE_CMDID\n");
+ dev_kfree_skb(skb);
+ }
+ return ret;
+}
+
+int ath12k_wmi_obss_color_cfg_cmd(struct ath12k *ar, u32 vdev_id,
+ u8 bss_color, u32 period,
+ bool enable)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct ath12k_base *ab = wmi->wmi_ab->ab;
+ struct wmi_obss_color_collision_cfg_params_cmd *cmd;
+ struct sk_buff *skb;
+ int ret, len;
+
+ len = sizeof(*cmd);
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_obss_color_collision_cfg_params_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_OBSS_COLOR_COLLISION_DET_CONFIG,
+ len);
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->evt_type = enable ? cpu_to_le32(ATH12K_OBSS_COLOR_COLLISION_DETECTION) :
+ cpu_to_le32(ATH12K_OBSS_COLOR_COLLISION_DETECTION_DISABLE);
+ cmd->current_bss_color = cpu_to_le32(bss_color);
+ cmd->detection_period_ms = cpu_to_le32(period);
+ cmd->scan_period_ms = cpu_to_le32(ATH12K_BSS_COLOR_COLLISION_SCAN_PERIOD_MS);
+ cmd->free_slot_expiry_time_ms = 0;
+ cmd->flags = 0;
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "wmi_send_obss_color_collision_cfg id %d type %d bss_color %d detect_period %d scan_period %d\n",
+ cmd->vdev_id, cmd->evt_type, cmd->current_bss_color,
+ cmd->detection_period_ms, cmd->scan_period_ms);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb,
+ WMI_OBSS_COLOR_COLLISION_DET_CONFIG_CMDID);
+ if (ret) {
+ ath12k_warn(ab, "Failed to send WMI_OBSS_COLOR_COLLISION_DET_CONFIG_CMDID");
+ dev_kfree_skb(skb);
+ }
+ return ret;
+}
+
+int ath12k_wmi_send_bss_color_change_enable_cmd(struct ath12k *ar, u32 vdev_id,
+ bool enable)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct ath12k_base *ab = wmi->wmi_ab->ab;
+ struct wmi_bss_color_change_enable_params_cmd *cmd;
+ struct sk_buff *skb;
+ int ret, len;
+
+ len = sizeof(*cmd);
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_bss_color_change_enable_params_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_BSS_COLOR_CHANGE_ENABLE,
+ len);
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->enable = enable ? cpu_to_le32(1) : 0;
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "wmi_send_bss_color_change_enable id %d enable %d\n",
+ cmd->vdev_id, cmd->enable);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb,
+ WMI_BSS_COLOR_CHANGE_ENABLE_CMDID);
+ if (ret) {
+ ath12k_warn(ab, "Failed to send WMI_BSS_COLOR_CHANGE_ENABLE_CMDID");
+ dev_kfree_skb(skb);
+ }
+ return ret;
+}
+
+int ath12k_wmi_fils_discovery_tmpl(struct ath12k *ar, u32 vdev_id,
+ struct sk_buff *tmpl)
+{
+ struct wmi_tlv *tlv;
+ struct sk_buff *skb;
+ void *ptr;
+ int ret, len;
+ size_t aligned_len;
+ struct wmi_fils_discovery_tmpl_cmd *cmd;
+
+ aligned_len = roundup(tmpl->len, 4);
+ len = sizeof(*cmd) + TLV_HDR_SIZE + aligned_len;
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI vdev %i set FILS discovery template\n", vdev_id);
+
+ skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_fils_discovery_tmpl_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_FILS_DISCOVERY_TMPL_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->buf_len = cpu_to_le32(tmpl->len);
+ ptr = skb->data + sizeof(*cmd);
+
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, aligned_len);
+ memcpy(tlv->value, tmpl->data, tmpl->len);
+
+ ret = ath12k_wmi_cmd_send(ar->wmi, skb, WMI_FILS_DISCOVERY_TMPL_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "WMI vdev %i failed to send FILS discovery template command\n",
+ vdev_id);
+ dev_kfree_skb(skb);
+ }
+ return ret;
+}
+
+int ath12k_wmi_probe_resp_tmpl(struct ath12k *ar, u32 vdev_id,
+ struct sk_buff *tmpl)
+{
+ struct wmi_probe_tmpl_cmd *cmd;
+ struct ath12k_wmi_bcn_prb_info_params *probe_info;
+ struct wmi_tlv *tlv;
+ struct sk_buff *skb;
+ void *ptr;
+ int ret, len;
+ size_t aligned_len = roundup(tmpl->len, 4);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI vdev %i set probe response template\n", vdev_id);
+
+ len = sizeof(*cmd) + sizeof(*probe_info) + TLV_HDR_SIZE + aligned_len;
+
+ skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_probe_tmpl_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PRB_TMPL_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->buf_len = cpu_to_le32(tmpl->len);
+
+ ptr = skb->data + sizeof(*cmd);
+
+ probe_info = ptr;
+ len = sizeof(*probe_info);
+ probe_info->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_BCN_PRB_INFO,
+ len);
+ probe_info->caps = 0;
+ probe_info->erp = 0;
+
+ ptr += sizeof(*probe_info);
+
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, aligned_len);
+ memcpy(tlv->value, tmpl->data, tmpl->len);
+
+ ret = ath12k_wmi_cmd_send(ar->wmi, skb, WMI_PRB_TMPL_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "WMI vdev %i failed to send probe response template command\n",
+ vdev_id);
+ dev_kfree_skb(skb);
+ }
+ return ret;
+}
+
+int ath12k_wmi_fils_discovery(struct ath12k *ar, u32 vdev_id, u32 interval,
+ bool unsol_bcast_probe_resp_enabled)
+{
+ struct sk_buff *skb;
+ int ret, len;
+ struct wmi_fils_discovery_cmd *cmd;
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI vdev %i set %s interval to %u TU\n",
+ vdev_id, unsol_bcast_probe_resp_enabled ?
+ "unsolicited broadcast probe response" : "FILS discovery",
+ interval);
+
+ len = sizeof(*cmd);
+ skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_fils_discovery_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_ENABLE_FILS_CMD,
+ len);
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->interval = cpu_to_le32(interval);
+ cmd->config = cpu_to_le32(unsol_bcast_probe_resp_enabled);
+
+ ret = ath12k_wmi_cmd_send(ar->wmi, skb, WMI_ENABLE_FILS_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "WMI vdev %i failed to send FILS discovery enable/disable command\n",
+ vdev_id);
+ dev_kfree_skb(skb);
+ }
+ return ret;
+}
+
+static void
+ath12k_fill_band_to_mac_param(struct ath12k_base *soc,
+ struct ath12k_wmi_pdev_band_arg *arg)
+{
+ u8 i;
+ struct ath12k_wmi_hal_reg_capabilities_ext_arg *hal_reg_cap;
+ struct ath12k_pdev *pdev;
+
+ for (i = 0; i < soc->num_radios; i++) {
+ pdev = &soc->pdevs[i];
+ hal_reg_cap = &soc->hal_reg_cap[i];
+ arg[i].pdev_id = pdev->pdev_id;
+
+ switch (pdev->cap.supported_bands) {
+ case WMI_HOST_WLAN_2G_5G_CAP:
+ arg[i].start_freq = hal_reg_cap->low_2ghz_chan;
+ arg[i].end_freq = hal_reg_cap->high_5ghz_chan;
+ break;
+ case WMI_HOST_WLAN_2G_CAP:
+ arg[i].start_freq = hal_reg_cap->low_2ghz_chan;
+ arg[i].end_freq = hal_reg_cap->high_2ghz_chan;
+ break;
+ case WMI_HOST_WLAN_5G_CAP:
+ arg[i].start_freq = hal_reg_cap->low_5ghz_chan;
+ arg[i].end_freq = hal_reg_cap->high_5ghz_chan;
+ break;
+ default:
+ break;
+ }
+ }
+}
+
+static void
+ath12k_wmi_copy_resource_config(struct ath12k_wmi_resource_config_params *wmi_cfg,
+ struct ath12k_wmi_resource_config_arg *tg_cfg)
+{
+ wmi_cfg->num_vdevs = cpu_to_le32(tg_cfg->num_vdevs);
+ wmi_cfg->num_peers = cpu_to_le32(tg_cfg->num_peers);
+ wmi_cfg->num_offload_peers = cpu_to_le32(tg_cfg->num_offload_peers);
+ wmi_cfg->num_offload_reorder_buffs =
+ cpu_to_le32(tg_cfg->num_offload_reorder_buffs);
+ wmi_cfg->num_peer_keys = cpu_to_le32(tg_cfg->num_peer_keys);
+ wmi_cfg->num_tids = cpu_to_le32(tg_cfg->num_tids);
+ wmi_cfg->ast_skid_limit = cpu_to_le32(tg_cfg->ast_skid_limit);
+ wmi_cfg->tx_chain_mask = cpu_to_le32(tg_cfg->tx_chain_mask);
+ wmi_cfg->rx_chain_mask = cpu_to_le32(tg_cfg->rx_chain_mask);
+ wmi_cfg->rx_timeout_pri[0] = cpu_to_le32(tg_cfg->rx_timeout_pri[0]);
+ wmi_cfg->rx_timeout_pri[1] = cpu_to_le32(tg_cfg->rx_timeout_pri[1]);
+ wmi_cfg->rx_timeout_pri[2] = cpu_to_le32(tg_cfg->rx_timeout_pri[2]);
+ wmi_cfg->rx_timeout_pri[3] = cpu_to_le32(tg_cfg->rx_timeout_pri[3]);
+ wmi_cfg->rx_decap_mode = cpu_to_le32(tg_cfg->rx_decap_mode);
+ wmi_cfg->scan_max_pending_req = cpu_to_le32(tg_cfg->scan_max_pending_req);
+ wmi_cfg->bmiss_offload_max_vdev = cpu_to_le32(tg_cfg->bmiss_offload_max_vdev);
+ wmi_cfg->roam_offload_max_vdev = cpu_to_le32(tg_cfg->roam_offload_max_vdev);
+ wmi_cfg->roam_offload_max_ap_profiles =
+ cpu_to_le32(tg_cfg->roam_offload_max_ap_profiles);
+ wmi_cfg->num_mcast_groups = cpu_to_le32(tg_cfg->num_mcast_groups);
+ wmi_cfg->num_mcast_table_elems = cpu_to_le32(tg_cfg->num_mcast_table_elems);
+ wmi_cfg->mcast2ucast_mode = cpu_to_le32(tg_cfg->mcast2ucast_mode);
+ wmi_cfg->tx_dbg_log_size = cpu_to_le32(tg_cfg->tx_dbg_log_size);
+ wmi_cfg->num_wds_entries = cpu_to_le32(tg_cfg->num_wds_entries);
+ wmi_cfg->dma_burst_size = cpu_to_le32(tg_cfg->dma_burst_size);
+ wmi_cfg->mac_aggr_delim = cpu_to_le32(tg_cfg->mac_aggr_delim);
+ wmi_cfg->rx_skip_defrag_timeout_dup_detection_check =
+ cpu_to_le32(tg_cfg->rx_skip_defrag_timeout_dup_detection_check);
+ wmi_cfg->vow_config = cpu_to_le32(tg_cfg->vow_config);
+ wmi_cfg->gtk_offload_max_vdev = cpu_to_le32(tg_cfg->gtk_offload_max_vdev);
+ wmi_cfg->num_msdu_desc = cpu_to_le32(tg_cfg->num_msdu_desc);
+ wmi_cfg->max_frag_entries = cpu_to_le32(tg_cfg->max_frag_entries);
+ wmi_cfg->num_tdls_vdevs = cpu_to_le32(tg_cfg->num_tdls_vdevs);
+ wmi_cfg->num_tdls_conn_table_entries =
+ cpu_to_le32(tg_cfg->num_tdls_conn_table_entries);
+ wmi_cfg->beacon_tx_offload_max_vdev =
+ cpu_to_le32(tg_cfg->beacon_tx_offload_max_vdev);
+ wmi_cfg->num_multicast_filter_entries =
+ cpu_to_le32(tg_cfg->num_multicast_filter_entries);
+ wmi_cfg->num_wow_filters = cpu_to_le32(tg_cfg->num_wow_filters);
+ wmi_cfg->num_keep_alive_pattern = cpu_to_le32(tg_cfg->num_keep_alive_pattern);
+ wmi_cfg->keep_alive_pattern_size = cpu_to_le32(tg_cfg->keep_alive_pattern_size);
+ wmi_cfg->max_tdls_concurrent_sleep_sta =
+ cpu_to_le32(tg_cfg->max_tdls_concurrent_sleep_sta);
+ wmi_cfg->max_tdls_concurrent_buffer_sta =
+ cpu_to_le32(tg_cfg->max_tdls_concurrent_buffer_sta);
+ wmi_cfg->wmi_send_separate = cpu_to_le32(tg_cfg->wmi_send_separate);
+ wmi_cfg->num_ocb_vdevs = cpu_to_le32(tg_cfg->num_ocb_vdevs);
+ wmi_cfg->num_ocb_channels = cpu_to_le32(tg_cfg->num_ocb_channels);
+ wmi_cfg->num_ocb_schedules = cpu_to_le32(tg_cfg->num_ocb_schedules);
+ wmi_cfg->bpf_instruction_size = cpu_to_le32(tg_cfg->bpf_instruction_size);
+ wmi_cfg->max_bssid_rx_filters = cpu_to_le32(tg_cfg->max_bssid_rx_filters);
+ wmi_cfg->use_pdev_id = cpu_to_le32(tg_cfg->use_pdev_id);
+ wmi_cfg->flag1 = cpu_to_le32(tg_cfg->atf_config);
+ wmi_cfg->peer_map_unmap_version = cpu_to_le32(tg_cfg->peer_map_unmap_version);
+ wmi_cfg->sched_params = cpu_to_le32(tg_cfg->sched_params);
+ wmi_cfg->twt_ap_pdev_count = cpu_to_le32(tg_cfg->twt_ap_pdev_count);
+ wmi_cfg->twt_ap_sta_count = cpu_to_le32(tg_cfg->twt_ap_sta_count);
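+ /* Advertise host support for the extended regulatory country code
+ * (reg CC ext) event.
+ */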
+ wmi_cfg->host_service_flags =
+ cpu_to_le32(1 << WMI_RSRC_CFG_HOST_SVC_FLAG_REG_CC_EXT_SUPPORT_BIT);
+}
+
+static int ath12k_init_cmd_send(struct ath12k_wmi_pdev *wmi,
+ struct ath12k_wmi_init_cmd_arg *arg)
+{
+ struct ath12k_base *ab = wmi->wmi_ab->ab;
+ struct sk_buff *skb;
+ struct wmi_init_cmd *cmd;
+ struct ath12k_wmi_resource_config_params *cfg;
+ struct ath12k_wmi_pdev_set_hw_mode_cmd *hw_mode;
+ struct ath12k_wmi_pdev_band_to_mac_params *band_to_mac;
+ struct ath12k_wmi_host_mem_chunk_params *host_mem_chunks;
+ struct wmi_tlv *tlv;
+ size_t len;
+ int ret;
+ void *ptr;
+ u32 hw_mode_len = 0;
+ u16 idx;
+
+ if (arg->hw_mode_id != WMI_HOST_HW_MODE_MAX)
+ hw_mode_len = sizeof(*hw_mode) + TLV_HDR_SIZE +
+ (arg->num_band_to_mac * sizeof(*band_to_mac));
+
+ len = sizeof(*cmd) + TLV_HDR_SIZE + sizeof(*cfg) + hw_mode_len +
+ (arg->num_mem_chunks ? (sizeof(*host_mem_chunks) * WMI_MAX_MEM_REQS) : 0);
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_init_cmd *)skb->data;
+
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_INIT_CMD,
+ sizeof(*cmd));
+
+ ptr = skb->data + sizeof(*cmd);
+ cfg = ptr;
+
+ ath12k_wmi_copy_resource_config(cfg, &arg->res_cfg);
+
+ cfg->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_RESOURCE_CONFIG,
+ sizeof(*cfg));
+
+ ptr += sizeof(*cfg);
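+ /* Host memory chunks are packed as a WMI_TAG_ARRAY_STRUCT TLV; the
+ * entries are filled in first, the array header is written below.
+ */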
+ host_mem_chunks = ptr + TLV_HDR_SIZE;
+ len = sizeof(struct ath12k_wmi_host_mem_chunk_params);
+
+ for (idx = 0; idx < arg->num_mem_chunks; ++idx) {
+ host_mem_chunks[idx].tlv_header =
+ ath12k_wmi_tlv_hdr(WMI_TAG_WLAN_HOST_MEMORY_CHUNK,
+ len);
+
+ host_mem_chunks[idx].ptr = cpu_to_le32(arg->mem_chunks[idx].paddr);
+ host_mem_chunks[idx].size = cpu_to_le32(arg->mem_chunks[idx].len);
+ host_mem_chunks[idx].req_id = cpu_to_le32(arg->mem_chunks[idx].req_id);
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "WMI host mem chunk req_id %d paddr 0x%llx len %d\n",
+ arg->mem_chunks[idx].req_id,
+ (u64)arg->mem_chunks[idx].paddr,
+ arg->mem_chunks[idx].len);
+ }
+ cmd->num_host_mem_chunks = cpu_to_le32(arg->num_mem_chunks);
+ len = sizeof(struct ath12k_wmi_host_mem_chunk_params) * arg->num_mem_chunks;
+
+ /* The array TLV header is emitted even when num_mem_chunks is zero */
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, len);
+ ptr += TLV_HDR_SIZE + len;
+
+ if (arg->hw_mode_id != WMI_HOST_HW_MODE_MAX) {
+ hw_mode = (struct ath12k_wmi_pdev_set_hw_mode_cmd *)ptr;
+ hw_mode->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PDEV_SET_HW_MODE_CMD,
+ sizeof(*hw_mode));
+
+ hw_mode->hw_mode_index = cpu_to_le32(arg->hw_mode_id);
+ hw_mode->num_band_to_mac = cpu_to_le32(arg->num_band_to_mac);
+
+ ptr += sizeof(*hw_mode);
+
+ len = arg->num_band_to_mac * sizeof(*band_to_mac);
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, len);
+
+ ptr += TLV_HDR_SIZE;
+ len = sizeof(*band_to_mac);
+
+ for (idx = 0; idx < arg->num_band_to_mac; idx++) {
+ band_to_mac = (void *)ptr;
+
+ band_to_mac->tlv_header =
+ ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PDEV_BAND_TO_MAC,
+ len);
+ band_to_mac->pdev_id = cpu_to_le32(arg->band_to_mac[idx].pdev_id);
+ band_to_mac->start_freq =
+ cpu_to_le32(arg->band_to_mac[idx].start_freq);
+ band_to_mac->end_freq =
+ cpu_to_le32(arg->band_to_mac[idx].end_freq);
+ ptr += sizeof(*band_to_mac);
+ }
+ }
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_INIT_CMDID);
+ if (ret) {
+ ath12k_warn(ab, "failed to send WMI_INIT_CMDID\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_pdev_lro_cfg(struct ath12k *ar,
+ int pdev_id)
+{
+ struct ath12k_wmi_pdev_lro_config_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct ath12k_wmi_pdev_lro_config_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_LRO_INFO_CMD,
+ sizeof(*cmd));
+
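+ /* th_4/th_6 are seeded with random data; judging by the field names
+ * these are presumably the IPv4/IPv6 Toeplitz hash keys used for LRO
+ * flow hashing (an inference, not spelled out in this code).
+ */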
+ get_random_bytes(cmd->th_4, sizeof(cmd->th_4));
+ get_random_bytes(cmd->th_6, sizeof(cmd->th_6));
+
+ cmd->pdev_id = cpu_to_le32(pdev_id);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI lro cfg cmd pdev_id 0x%x\n", pdev_id);
+
+ ret = ath12k_wmi_cmd_send(ar->wmi, skb, WMI_LRO_CONFIG_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send lro cfg req wmi cmd\n");
+ goto err;
+ }
+
+ return 0;
+err:
+ dev_kfree_skb(skb);
+ return ret;
+}
+
+int ath12k_wmi_wait_for_service_ready(struct ath12k_base *ab)
+{
+ unsigned long time_left;
+
+ time_left = wait_for_completion_timeout(&ab->wmi_ab.service_ready,
+ WMI_SERVICE_READY_TIMEOUT_HZ);
+ if (!time_left)
+ return -ETIMEDOUT;
+
+ return 0;
+}
+
+int ath12k_wmi_wait_for_unified_ready(struct ath12k_base *ab)
+{
+ unsigned long time_left;
+
+ time_left = wait_for_completion_timeout(&ab->wmi_ab.unified_ready,
+ WMI_SERVICE_READY_TIMEOUT_HZ);
+ if (!time_left)
+ return -ETIMEDOUT;
+
+ return 0;
+}
+
+int ath12k_wmi_set_hw_mode(struct ath12k_base *ab,
+ enum wmi_host_hw_mode_config_type mode)
+{
+ struct ath12k_wmi_pdev_set_hw_mode_cmd *cmd;
+ struct sk_buff *skb;
+ struct ath12k_wmi_base *wmi_ab = &ab->wmi_ab;
+ int len;
+ int ret;
+
+ len = sizeof(*cmd);
+
+ skb = ath12k_wmi_alloc_skb(wmi_ab, len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct ath12k_wmi_pdev_set_hw_mode_cmd *)skb->data;
+
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PDEV_SET_HW_MODE_CMD,
+ sizeof(*cmd));
+
+ cmd->pdev_id = WMI_PDEV_ID_SOC;
+ cmd->hw_mode_index = cpu_to_le32(mode);
+
+ ret = ath12k_wmi_cmd_send(&wmi_ab->wmi[0], skb, WMI_PDEV_SET_HW_MODE_CMDID);
+ if (ret) {
+ ath12k_warn(ab, "failed to send WMI_PDEV_SET_HW_MODE_CMDID\n");
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_cmd_init(struct ath12k_base *ab)
+{
+ struct ath12k_wmi_base *wmi_sc = &ab->wmi_ab;
+ struct ath12k_wmi_init_cmd_arg arg = {};
+
+ ab->hw_params->wmi_init(ab, &arg.res_cfg);
+
+ arg.num_mem_chunks = wmi_sc->num_mem_chunks;
+ arg.hw_mode_id = wmi_sc->preferred_hw_mode;
+ arg.mem_chunks = wmi_sc->mem_chunks;
+
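+ /* For single-pdev targets, WMI_HOST_HW_MODE_MAX makes
+ * ath12k_init_cmd_send() skip the HW mode and band-to-mac TLVs.
+ */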
+ if (ab->hw_params->single_pdev_only)
+ arg.hw_mode_id = WMI_HOST_HW_MODE_MAX;
+
+ arg.num_band_to_mac = ab->num_radios;
+ ath12k_fill_band_to_mac_param(ab, arg.band_to_mac);
+
+ return ath12k_init_cmd_send(&wmi_sc->wmi[0], &arg);
+}
+
+int ath12k_wmi_vdev_spectral_conf(struct ath12k *ar,
+ struct ath12k_wmi_vdev_spectral_conf_arg *arg)
+{
+ struct ath12k_wmi_vdev_spectral_conf_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct ath12k_wmi_vdev_spectral_conf_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_SPECTRAL_CONFIGURE_CMD,
+ sizeof(*cmd));
+ cmd->vdev_id = cpu_to_le32(arg->vdev_id);
+ cmd->scan_count = cpu_to_le32(arg->scan_count);
+ cmd->scan_period = cpu_to_le32(arg->scan_period);
+ cmd->scan_priority = cpu_to_le32(arg->scan_priority);
+ cmd->scan_fft_size = cpu_to_le32(arg->scan_fft_size);
+ cmd->scan_gc_ena = cpu_to_le32(arg->scan_gc_ena);
+ cmd->scan_restart_ena = cpu_to_le32(arg->scan_restart_ena);
+ cmd->scan_noise_floor_ref = cpu_to_le32(arg->scan_noise_floor_ref);
+ cmd->scan_init_delay = cpu_to_le32(arg->scan_init_delay);
+ cmd->scan_nb_tone_thr = cpu_to_le32(arg->scan_nb_tone_thr);
+ cmd->scan_str_bin_thr = cpu_to_le32(arg->scan_str_bin_thr);
+ cmd->scan_wb_rpt_mode = cpu_to_le32(arg->scan_wb_rpt_mode);
+ cmd->scan_rssi_rpt_mode = cpu_to_le32(arg->scan_rssi_rpt_mode);
+ cmd->scan_rssi_thr = cpu_to_le32(arg->scan_rssi_thr);
+ cmd->scan_pwr_format = cpu_to_le32(arg->scan_pwr_format);
+ cmd->scan_rpt_mode = cpu_to_le32(arg->scan_rpt_mode);
+ cmd->scan_bin_scale = cpu_to_le32(arg->scan_bin_scale);
+ cmd->scan_dbm_adj = cpu_to_le32(arg->scan_dbm_adj);
+ cmd->scan_chn_mask = cpu_to_le32(arg->scan_chn_mask);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI spectral scan config cmd vdev_id 0x%x\n",
+ arg->vdev_id);
+
+ ret = ath12k_wmi_cmd_send(ar->wmi, skb,
+ WMI_VDEV_SPECTRAL_SCAN_CONFIGURE_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send spectral scan config wmi cmd\n");
+ goto err;
+ }
+
+ return 0;
+err:
+ dev_kfree_skb(skb);
+ return ret;
+}
+
+int ath12k_wmi_vdev_spectral_enable(struct ath12k *ar, u32 vdev_id,
+ u32 trigger, u32 enable)
+{
+ struct ath12k_wmi_vdev_spectral_enable_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct ath12k_wmi_vdev_spectral_enable_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_SPECTRAL_ENABLE_CMD,
+ sizeof(*cmd));
+
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->trigger_cmd = cpu_to_le32(trigger);
+ cmd->enable_cmd = cpu_to_le32(enable);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI spectral enable cmd vdev id 0x%x\n",
+ vdev_id);
+
+ ret = ath12k_wmi_cmd_send(ar->wmi, skb,
+ WMI_VDEV_SPECTRAL_SCAN_ENABLE_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send spectral enable wmi cmd\n");
+ goto err;
+ }
+
+ return 0;
+err:
+ dev_kfree_skb(skb);
+ return ret;
+}
+
+int ath12k_wmi_pdev_dma_ring_cfg(struct ath12k *ar,
+ struct ath12k_wmi_pdev_dma_ring_cfg_arg *arg)
+{
+ struct ath12k_wmi_pdev_dma_ring_cfg_req_cmd *cmd;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct ath12k_wmi_pdev_dma_ring_cfg_req_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_DMA_RING_CFG_REQ,
+ sizeof(*cmd));
+
+ cmd->pdev_id = cpu_to_le32(DP_SW2HW_MACID(arg->pdev_id));
+ cmd->module_id = cpu_to_le32(arg->module_id);
+ cmd->base_paddr_lo = cpu_to_le32(arg->base_paddr_lo);
+ cmd->base_paddr_hi = cpu_to_le32(arg->base_paddr_hi);
+ cmd->head_idx_paddr_lo = cpu_to_le32(arg->head_idx_paddr_lo);
+ cmd->head_idx_paddr_hi = cpu_to_le32(arg->head_idx_paddr_hi);
+ cmd->tail_idx_paddr_lo = cpu_to_le32(arg->tail_idx_paddr_lo);
+ cmd->tail_idx_paddr_hi = cpu_to_le32(arg->tail_idx_paddr_hi);
+ cmd->num_elems = cpu_to_le32(arg->num_elems);
+ cmd->buf_size = cpu_to_le32(arg->buf_size);
+ cmd->num_resp_per_event = cpu_to_le32(arg->num_resp_per_event);
+ cmd->event_timeout_ms = cpu_to_le32(arg->event_timeout_ms);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI DMA ring cfg req cmd pdev_id 0x%x\n",
+ arg->pdev_id);
+
+ ret = ath12k_wmi_cmd_send(ar->wmi, skb,
+ WMI_PDEV_DMA_RING_CFG_REQ_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
+ "failed to send dma ring cfg req wmi cmd\n");
+ goto err;
+ }
+
+ return 0;
+err:
+ dev_kfree_skb(skb);
+ return ret;
+}
+
+static int ath12k_wmi_dma_buf_entry_parse(struct ath12k_base *soc,
+ u16 tag, u16 len,
+ const void *ptr, void *data)
+{
+ struct ath12k_wmi_dma_buf_release_arg *arg = data;
+
+ if (tag != WMI_TAG_DMA_BUF_RELEASE_ENTRY)
+ return -EPROTO;
+
+ if (arg->num_buf_entry >= le32_to_cpu(arg->fixed.num_buf_release_entry))
+ return -ENOBUFS;
+
+ arg->num_buf_entry++;
+ return 0;
+}
+
+static int ath12k_wmi_dma_buf_meta_parse(struct ath12k_base *soc,
+ u16 tag, u16 len,
+ const void *ptr, void *data)
+{
+ struct ath12k_wmi_dma_buf_release_arg *arg = data;
+
+ if (tag != WMI_TAG_DMA_BUF_RELEASE_SPECTRAL_META_DATA)
+ return -EPROTO;
+
+ if (arg->num_meta >= le32_to_cpu(arg->fixed.num_meta_data_entry))
+ return -ENOBUFS;
+
+ arg->num_meta++;
+
+ return 0;
+}
+
+static int ath12k_wmi_dma_buf_parse(struct ath12k_base *ab,
+ u16 tag, u16 len,
+ const void *ptr, void *data)
+{
+ struct ath12k_wmi_dma_buf_release_arg *arg = data;
+ const struct ath12k_wmi_dma_buf_release_fixed_params *fixed;
+ u32 pdev_id;
+ int ret;
+
+ switch (tag) {
+ case WMI_TAG_DMA_BUF_RELEASE:
+ fixed = ptr;
+ arg->fixed = *fixed;
+ pdev_id = DP_HW2SW_MACID(le32_to_cpu(fixed->pdev_id));
+ arg->fixed.pdev_id = cpu_to_le32(pdev_id);
+ break;
+ case WMI_TAG_ARRAY_STRUCT:
+ if (!arg->buf_entry_done) {
+ arg->num_buf_entry = 0;
+ arg->buf_entry = ptr;
+
+ ret = ath12k_wmi_tlv_iter(ab, ptr, len,
+ ath12k_wmi_dma_buf_entry_parse,
+ arg);
+ if (ret) {
+ ath12k_warn(ab, "failed to parse dma buf entry tlv %d\n",
+ ret);
+ return ret;
+ }
+
+ arg->buf_entry_done = true;
+ } else if (!arg->meta_data_done) {
+ arg->num_meta = 0;
+ arg->meta_data = ptr;
+
+ ret = ath12k_wmi_tlv_iter(ab, ptr, len,
+ ath12k_wmi_dma_buf_meta_parse,
+ arg);
+ if (ret) {
+ ath12k_warn(ab, "failed to parse dma buf meta tlv %d\n",
+ ret);
+ return ret;
+ }
+
+ arg->meta_data_done = true;
+ }
+ break;
+ default:
+ break;
+ }
+ return 0;
+}
+
+static void ath12k_wmi_pdev_dma_ring_buf_release_event(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ struct ath12k_wmi_dma_buf_release_arg arg = {};
+ struct ath12k_dbring_buf_release_event param;
+ int ret;
+
+ ret = ath12k_wmi_tlv_iter(ab, skb->data, skb->len,
+ ath12k_wmi_dma_buf_parse,
+ &arg);
+ if (ret) {
+ ath12k_warn(ab, "failed to parse dma buf release tlv %d\n", ret);
+ return;
+ }
+
+ param.fixed = arg.fixed;
+ param.buf_entry = arg.buf_entry;
+ param.num_buf_entry = arg.num_buf_entry;
+ param.meta_data = arg.meta_data;
+ param.num_meta = arg.num_meta;
+
+ ret = ath12k_dbring_buffer_release_event(ab, &param);
+ if (ret) {
+ ath12k_warn(ab, "failed to handle dma buf release event %d\n", ret);
+ return;
+ }
+}
+
+static int ath12k_wmi_hw_mode_caps_parse(struct ath12k_base *soc,
+ u16 tag, u16 len,
+ const void *ptr, void *data)
+{
+ struct ath12k_wmi_svc_rdy_ext_parse *svc_rdy_ext = data;
+ struct ath12k_wmi_hw_mode_cap_params *hw_mode_cap;
+ u32 phy_map = 0;
+
+ if (tag != WMI_TAG_HW_MODE_CAPABILITIES)
+ return -EPROTO;
+
+ if (svc_rdy_ext->n_hw_mode_caps >= svc_rdy_ext->arg.num_hw_modes)
+ return -ENOBUFS;
+
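+ /* The TLV iterator points at hw_mode_id, the first field after the
+ * TLV header; recover the enclosing params struct from it.
+ */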
+ hw_mode_cap = container_of(ptr, struct ath12k_wmi_hw_mode_cap_params,
+ hw_mode_id);
+ svc_rdy_ext->n_hw_mode_caps++;
+
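+ /* phy_id_map is assumed to be a contiguous bitmask starting at
+ * bit 0, so fls() yields the number of PHYs served by this HW mode.
+ */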
+ phy_map = le32_to_cpu(hw_mode_cap->phy_id_map);
+ svc_rdy_ext->tot_phy_id += fls(phy_map);
+
+ return 0;
+}
+
+static int ath12k_wmi_hw_mode_caps(struct ath12k_base *soc,
+ u16 len, const void *ptr, void *data)
+{
+ struct ath12k_wmi_svc_rdy_ext_parse *svc_rdy_ext = data;
+ const struct ath12k_wmi_hw_mode_cap_params *hw_mode_caps;
+ enum wmi_host_hw_mode_config_type mode, pref;
+ u32 i;
+ int ret;
+
+ svc_rdy_ext->n_hw_mode_caps = 0;
+ svc_rdy_ext->hw_mode_caps = ptr;
+
+ ret = ath12k_wmi_tlv_iter(soc, ptr, len,
+ ath12k_wmi_hw_mode_caps_parse,
+ svc_rdy_ext);
+ if (ret) {
+ ath12k_warn(soc, "failed to parse tlv %d\n", ret);
+ return ret;
+ }
+
+ for (i = 0; i < svc_rdy_ext->n_hw_mode_caps; i++) {
+ hw_mode_caps = &svc_rdy_ext->hw_mode_caps[i];
+ mode = le32_to_cpu(hw_mode_caps->hw_mode_id);
+ pref = soc->wmi_ab.preferred_hw_mode;
+
+ if (ath12k_hw_mode_pri_map[mode] < ath12k_hw_mode_pri_map[pref]) {
+ svc_rdy_ext->pref_hw_mode_caps = *hw_mode_caps;
+ soc->wmi_ab.preferred_hw_mode = mode;
+ }
+ }
+
+ ath12k_dbg(soc, ATH12K_DBG_WMI, "preferred_hw_mode:%d\n",
+ soc->wmi_ab.preferred_hw_mode);
+ if (soc->wmi_ab.preferred_hw_mode == WMI_HOST_HW_MODE_MAX)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int ath12k_wmi_mac_phy_caps_parse(struct ath12k_base *soc,
+ u16 tag, u16 len,
+ const void *ptr, void *data)
+{
+ struct ath12k_wmi_svc_rdy_ext_parse *svc_rdy_ext = data;
+
+ if (tag != WMI_TAG_MAC_PHY_CAPABILITIES)
+ return -EPROTO;
+
+ if (svc_rdy_ext->n_mac_phy_caps >= svc_rdy_ext->tot_phy_id)
+ return -ENOBUFS;
+
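+ /* Clamp to the known struct size so a larger TLV from newer
+ * firmware cannot overflow the copy below.
+ */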
+ len = min_t(u16, len, sizeof(struct ath12k_wmi_mac_phy_caps_params));
+ if (!svc_rdy_ext->n_mac_phy_caps) {
+ svc_rdy_ext->mac_phy_caps = kcalloc(svc_rdy_ext->tot_phy_id, len,
+ GFP_ATOMIC);
+ if (!svc_rdy_ext->mac_phy_caps)
+ return -ENOMEM;
+ }
+
+ memcpy(svc_rdy_ext->mac_phy_caps + svc_rdy_ext->n_mac_phy_caps, ptr, len);
+ svc_rdy_ext->n_mac_phy_caps++;
+ return 0;
+}
+
+static int ath12k_wmi_ext_hal_reg_caps_parse(struct ath12k_base *soc,
+ u16 tag, u16 len,
+ const void *ptr, void *data)
+{
+ struct ath12k_wmi_svc_rdy_ext_parse *svc_rdy_ext = data;
+
+ if (tag != WMI_TAG_HAL_REG_CAPABILITIES_EXT)
+ return -EPROTO;
+
+ if (svc_rdy_ext->n_ext_hal_reg_caps >= svc_rdy_ext->arg.num_phy)
+ return -ENOBUFS;
+
+ svc_rdy_ext->n_ext_hal_reg_caps++;
+ return 0;
+}
+
+static int ath12k_wmi_ext_hal_reg_caps(struct ath12k_base *soc,
+ u16 len, const void *ptr, void *data)
+{
+ struct ath12k_wmi_pdev *wmi_handle = &soc->wmi_ab.wmi[0];
+ struct ath12k_wmi_svc_rdy_ext_parse *svc_rdy_ext = data;
+ struct ath12k_wmi_hal_reg_capabilities_ext_arg reg_cap;
+ int ret;
+ u32 i;
+
+ svc_rdy_ext->n_ext_hal_reg_caps = 0;
+ svc_rdy_ext->ext_hal_reg_caps = ptr;
+ ret = ath12k_wmi_tlv_iter(soc, ptr, len,
+ ath12k_wmi_ext_hal_reg_caps_parse,
+ svc_rdy_ext);
+ if (ret) {
+ ath12k_warn(soc, "failed to parse tlv %d\n", ret);
+ return ret;
+ }
+
+ for (i = 0; i < svc_rdy_ext->arg.num_phy; i++) {
+ ret = ath12k_pull_reg_cap_svc_rdy_ext(wmi_handle,
+ svc_rdy_ext->soc_hal_reg_caps,
+ svc_rdy_ext->ext_hal_reg_caps, i,
+ &reg_cap);
+ if (ret) {
+ ath12k_warn(soc, "failed to extract reg cap %d\n", i);
+ return ret;
+ }
+ soc->hal_reg_cap[reg_cap.phy_id] = reg_cap;
+ }
+ return 0;
+}
+
+static int ath12k_wmi_ext_soc_hal_reg_caps_parse(struct ath12k_base *soc,
+ u16 len, const void *ptr,
+ void *data)
+{
+ struct ath12k_wmi_pdev *wmi_handle = &soc->wmi_ab.wmi[0];
+ struct ath12k_wmi_svc_rdy_ext_parse *svc_rdy_ext = data;
+ u8 hw_mode_id = le32_to_cpu(svc_rdy_ext->pref_hw_mode_caps.hw_mode_id);
+ u32 phy_id_map;
+ int pdev_index = 0;
+ int ret;
+
+ svc_rdy_ext->soc_hal_reg_caps = ptr;
+ svc_rdy_ext->arg.num_phy = le32_to_cpu(svc_rdy_ext->soc_hal_reg_caps->num_phy);
+
+ soc->num_radios = 0;
+ phy_id_map = le32_to_cpu(svc_rdy_ext->pref_hw_mode_caps.phy_id_map);
+
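+ /* Each set bit in phy_id_map corresponds to one radio; pull the
+ * MAC/PHY capabilities for every PHY of the preferred HW mode.
+ */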
+ while (phy_id_map && soc->num_radios < MAX_RADIOS) {
+ ret = ath12k_pull_mac_phy_cap_svc_ready_ext(wmi_handle,
+ svc_rdy_ext,
+ hw_mode_id, soc->num_radios,
+ &soc->pdevs[pdev_index]);
+ if (ret) {
+ ath12k_warn(soc, "failed to extract mac caps, idx :%d\n",
+ soc->num_radios);
+ return ret;
+ }
+
+ soc->num_radios++;
+
+ /* For single_pdev_only targets,
+ * save mac_phy capability in the same pdev
+ */
+ if (soc->hw_params->single_pdev_only)
+ pdev_index = 0;
+ else
+ pdev_index = soc->num_radios;
+
+ /* TODO: mac_phy_cap prints */
+ phy_id_map >>= 1;
+ }
+
+ if (soc->hw_params->single_pdev_only) {
+ soc->num_radios = 1;
+ soc->pdevs[0].pdev_id = 0;
+ }
+
+ return 0;
+}
+
+static int ath12k_wmi_dma_ring_caps_parse(struct ath12k_base *soc,
+ u16 tag, u16 len,
+ const void *ptr, void *data)
+{
+ struct ath12k_wmi_dma_ring_caps_parse *parse = data;
+
+ if (tag != WMI_TAG_DMA_RING_CAPABILITIES)
+ return -EPROTO;
+
+ parse->n_dma_ring_caps++;
+ return 0;
+}
+
+static int ath12k_wmi_alloc_dbring_caps(struct ath12k_base *ab,
+ u32 num_cap)
+{
+ size_t sz;
+ void *ptr;
+
+ sz = num_cap * sizeof(struct ath12k_dbring_cap);
+ ptr = kzalloc(sz, GFP_ATOMIC);
+ if (!ptr)
+ return -ENOMEM;
+
+ ab->db_caps = ptr;
+ ab->num_db_cap = num_cap;
+
+ return 0;
+}
+
+static void ath12k_wmi_free_dbring_caps(struct ath12k_base *ab)
+{
+ kfree(ab->db_caps);
+ ab->db_caps = NULL;
+}
+
+static int ath12k_wmi_dma_ring_caps(struct ath12k_base *ab,
+ u16 len, const void *ptr, void *data)
+{
+ struct ath12k_wmi_dma_ring_caps_parse *dma_caps_parse = data;
+ struct ath12k_wmi_dma_ring_caps_params *dma_caps;
+ struct ath12k_dbring_cap *dir_buff_caps;
+ int ret;
+ u32 i;
+
+ dma_caps_parse->n_dma_ring_caps = 0;
+ dma_caps = (struct ath12k_wmi_dma_ring_caps_params *)ptr;
+ ret = ath12k_wmi_tlv_iter(ab, ptr, len,
+ ath12k_wmi_dma_ring_caps_parse,
+ dma_caps_parse);
+ if (ret) {
+ ath12k_warn(ab, "failed to parse dma ring caps tlv %d\n", ret);
+ return ret;
+ }
+
+ if (!dma_caps_parse->n_dma_ring_caps)
+ return 0;
+
+ if (ab->num_db_cap) {
+ ath12k_warn(ab, "Already processed, so ignoring dma ring caps\n");
+ return 0;
+ }
+
+ ret = ath12k_wmi_alloc_dbring_caps(ab, dma_caps_parse->n_dma_ring_caps);
+ if (ret)
+ return ret;
+
+ dir_buff_caps = ab->db_caps;
+ for (i = 0; i < dma_caps_parse->n_dma_ring_caps; i++) {
+ if (le32_to_cpu(dma_caps[i].module_id) >= WMI_DIRECT_BUF_MAX) {
+ ath12k_warn(ab, "Invalid module id %d\n",
+ le32_to_cpu(dma_caps[i].module_id));
+ ret = -EINVAL;
+ goto free_dir_buff;
+ }
+
+ dir_buff_caps[i].id = le32_to_cpu(dma_caps[i].module_id);
+ dir_buff_caps[i].pdev_id =
+ DP_HW2SW_MACID(le32_to_cpu(dma_caps[i].pdev_id));
+ dir_buff_caps[i].min_elem = le32_to_cpu(dma_caps[i].min_elem);
+ dir_buff_caps[i].min_buf_sz = le32_to_cpu(dma_caps[i].min_buf_sz);
+ dir_buff_caps[i].min_buf_align = le32_to_cpu(dma_caps[i].min_buf_align);
+ }
+
+ return 0;
+
+free_dir_buff:
+ ath12k_wmi_free_dbring_caps(ab);
+ return ret;
+}
+
+static int ath12k_wmi_svc_rdy_ext_parse(struct ath12k_base *ab,
+ u16 tag, u16 len,
+ const void *ptr, void *data)
+{
+ struct ath12k_wmi_pdev *wmi_handle = &ab->wmi_ab.wmi[0];
+ struct ath12k_wmi_svc_rdy_ext_parse *svc_rdy_ext = data;
+ int ret;
+
+ switch (tag) {
+ case WMI_TAG_SERVICE_READY_EXT_EVENT:
+ ret = ath12k_pull_svc_ready_ext(wmi_handle, ptr,
+ &svc_rdy_ext->arg);
+ if (ret) {
+ ath12k_warn(ab, "unable to extract ext params\n");
+ return ret;
+ }
+ break;
+
+ case WMI_TAG_SOC_MAC_PHY_HW_MODE_CAPS:
+ svc_rdy_ext->hw_caps = ptr;
+ svc_rdy_ext->arg.num_hw_modes =
+ le32_to_cpu(svc_rdy_ext->hw_caps->num_hw_modes);
+ break;
+
+ case WMI_TAG_SOC_HAL_REG_CAPABILITIES:
+ ret = ath12k_wmi_ext_soc_hal_reg_caps_parse(ab, len, ptr,
+ svc_rdy_ext);
+ if (ret)
+ return ret;
+ break;
+
+ case WMI_TAG_ARRAY_STRUCT:
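+ /* Several WMI_TAG_ARRAY_STRUCT TLVs arrive in a fixed order;
+ * the *_done flags track which array this one must be.
+ */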
+ if (!svc_rdy_ext->hw_mode_done) {
+ ret = ath12k_wmi_hw_mode_caps(ab, len, ptr, svc_rdy_ext);
+ if (ret)
+ return ret;
+
+ svc_rdy_ext->hw_mode_done = true;
+ } else if (!svc_rdy_ext->mac_phy_done) {
+ svc_rdy_ext->n_mac_phy_caps = 0;
+ ret = ath12k_wmi_tlv_iter(ab, ptr, len,
+ ath12k_wmi_mac_phy_caps_parse,
+ svc_rdy_ext);
+ if (ret) {
+ ath12k_warn(ab, "failed to parse tlv %d\n", ret);
+ return ret;
+ }
+
+ svc_rdy_ext->mac_phy_done = true;
+ } else if (!svc_rdy_ext->ext_hal_reg_done) {
+ ret = ath12k_wmi_ext_hal_reg_caps(ab, len, ptr, svc_rdy_ext);
+ if (ret)
+ return ret;
+
+ svc_rdy_ext->ext_hal_reg_done = true;
+ } else if (!svc_rdy_ext->mac_phy_chainmask_combo_done) {
+ svc_rdy_ext->mac_phy_chainmask_combo_done = true;
+ } else if (!svc_rdy_ext->mac_phy_chainmask_cap_done) {
+ svc_rdy_ext->mac_phy_chainmask_cap_done = true;
+ } else if (!svc_rdy_ext->oem_dma_ring_cap_done) {
+ svc_rdy_ext->oem_dma_ring_cap_done = true;
+ } else if (!svc_rdy_ext->dma_ring_cap_done) {
+ ret = ath12k_wmi_dma_ring_caps(ab, len, ptr,
+ &svc_rdy_ext->dma_caps_parse);
+ if (ret)
+ return ret;
+
+ svc_rdy_ext->dma_ring_cap_done = true;
+ }
+ break;
+
+ default:
+ break;
+ }
+ return 0;
+}
+
+static int ath12k_service_ready_ext_event(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ struct ath12k_wmi_svc_rdy_ext_parse svc_rdy_ext = { };
+ int ret;
+
+ ret = ath12k_wmi_tlv_iter(ab, skb->data, skb->len,
+ ath12k_wmi_svc_rdy_ext_parse,
+ &svc_rdy_ext);
+ if (ret) {
+ ath12k_warn(ab, "failed to parse tlv %d\n", ret);
+ goto err;
+ }
+
+ if (!test_bit(WMI_TLV_SERVICE_EXT2_MSG, ab->wmi_ab.svc_map))
+ complete(&ab->wmi_ab.service_ready);
+
+ kfree(svc_rdy_ext.mac_phy_caps);
+ return 0;
+
+err:
+ ath12k_wmi_free_dbring_caps(ab);
+ return ret;
+}
+
+static int ath12k_wmi_svc_rdy_ext2_parse(struct ath12k_base *ab,
+ u16 tag, u16 len,
+ const void *ptr, void *data)
+{
+ struct ath12k_wmi_svc_rdy_ext2_parse *parse = data;
+ int ret;
+
+ switch (tag) {
+ case WMI_TAG_ARRAY_STRUCT:
+ if (!parse->dma_ring_cap_done) {
+ ret = ath12k_wmi_dma_ring_caps(ab, len, ptr,
+ &parse->dma_caps_parse);
+ if (ret)
+ return ret;
+
+ parse->dma_ring_cap_done = true;
+ }
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int ath12k_service_ready_ext2_event(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ struct ath12k_wmi_svc_rdy_ext2_parse svc_rdy_ext2 = { };
+ int ret;
+
+ ret = ath12k_wmi_tlv_iter(ab, skb->data, skb->len,
+ ath12k_wmi_svc_rdy_ext2_parse,
+ &svc_rdy_ext2);
+ if (ret) {
+ ath12k_warn(ab, "failed to parse ext2 event tlv %d\n", ret);
+ goto err;
+ }
+
+ complete(&ab->wmi_ab.service_ready);
+
+ return 0;
+
+err:
+ ath12k_wmi_free_dbring_caps(ab);
+ return ret;
+}
+
+static int ath12k_pull_vdev_start_resp_tlv(struct ath12k_base *ab, struct sk_buff *skb,
+ struct wmi_vdev_start_resp_event *vdev_rsp)
+{
+ const void **tb;
+ const struct wmi_vdev_start_resp_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ ev = tb[WMI_TAG_VDEV_START_RESPONSE_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch vdev start resp ev");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ *vdev_rsp = *ev;
+
+ kfree(tb);
+ return 0;
+}
+
+static struct ath12k_reg_rule
+*create_ext_reg_rules_from_wmi(u32 num_reg_rules,
+ struct ath12k_wmi_reg_rule_ext_params *wmi_reg_rule)
+{
+ struct ath12k_reg_rule *reg_rule_ptr;
+ u32 count;
+
+ reg_rule_ptr = kcalloc(num_reg_rules, sizeof(*reg_rule_ptr),
+ GFP_ATOMIC);
+ if (!reg_rule_ptr)
+ return NULL;
+
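+ /* Each WMI rule packs its fields into 32-bit words; unpack the
+ * frequency range, bandwidth, power and PSD info into host format.
+ */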
+ for (count = 0; count < num_reg_rules; count++) {
+ reg_rule_ptr[count].start_freq =
+ le32_get_bits(wmi_reg_rule[count].freq_info,
+ REG_RULE_START_FREQ);
+ reg_rule_ptr[count].end_freq =
+ le32_get_bits(wmi_reg_rule[count].freq_info,
+ REG_RULE_END_FREQ);
+ reg_rule_ptr[count].max_bw =
+ le32_get_bits(wmi_reg_rule[count].bw_pwr_info,
+ REG_RULE_MAX_BW);
+ reg_rule_ptr[count].reg_power =
+ le32_get_bits(wmi_reg_rule[count].bw_pwr_info,
+ REG_RULE_REG_PWR);
+ reg_rule_ptr[count].ant_gain =
+ le32_get_bits(wmi_reg_rule[count].bw_pwr_info,
+ REG_RULE_ANT_GAIN);
+ reg_rule_ptr[count].flags =
+ le32_get_bits(wmi_reg_rule[count].flag_info,
+ REG_RULE_FLAGS);
+ reg_rule_ptr[count].psd_flag =
+ le32_get_bits(wmi_reg_rule[count].psd_power_info,
+ REG_RULE_PSD_INFO);
+ reg_rule_ptr[count].psd_eirp =
+ le32_get_bits(wmi_reg_rule[count].psd_power_info,
+ REG_RULE_PSD_EIRP);
+ }
+
+ return reg_rule_ptr;
+}
+
+static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
+ struct sk_buff *skb,
+ struct ath12k_reg_info *reg_info)
+{
+ const void **tb;
+ const struct wmi_reg_chan_list_cc_ext_event *ev;
+ struct ath12k_wmi_reg_rule_ext_params *ext_wmi_reg_rule;
+ u32 num_2g_reg_rules, num_5g_reg_rules;
+ u32 num_6g_reg_rules_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
+ u32 num_6g_reg_rules_cl[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
+ u32 total_reg_rules = 0;
+ int ret, i, j;
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI, "processing regulatory ext channel list\n");
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ ev = tb[WMI_TAG_REG_CHAN_LIST_CC_EXT_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch reg chan list ext update ev\n");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ reg_info->num_2g_reg_rules = le32_to_cpu(ev->num_2g_reg_rules);
+ reg_info->num_5g_reg_rules = le32_to_cpu(ev->num_5g_reg_rules);
+ reg_info->num_6g_reg_rules_ap[WMI_REG_INDOOR_AP] =
+ le32_to_cpu(ev->num_6g_reg_rules_ap_lpi);
+ reg_info->num_6g_reg_rules_ap[WMI_REG_STD_POWER_AP] =
+ le32_to_cpu(ev->num_6g_reg_rules_ap_sp);
+ reg_info->num_6g_reg_rules_ap[WMI_REG_VLP_AP] =
+ le32_to_cpu(ev->num_6g_reg_rules_ap_vlp);
+
+ for (i = 0; i < WMI_REG_MAX_CLIENT_TYPE; i++) {
+ reg_info->num_6g_reg_rules_cl[WMI_REG_INDOOR_AP][i] =
+ le32_to_cpu(ev->num_6g_reg_rules_cl_lpi[i]);
+ reg_info->num_6g_reg_rules_cl[WMI_REG_STD_POWER_AP][i] =
+ le32_to_cpu(ev->num_6g_reg_rules_cl_sp[i]);
+ reg_info->num_6g_reg_rules_cl[WMI_REG_VLP_AP][i] =
+ le32_to_cpu(ev->num_6g_reg_rules_cl_vlp[i]);
+ }
+
+ num_2g_reg_rules = reg_info->num_2g_reg_rules;
+ total_reg_rules += num_2g_reg_rules;
+ num_5g_reg_rules = reg_info->num_5g_reg_rules;
+ total_reg_rules += num_5g_reg_rules;
+
+ if (num_2g_reg_rules > MAX_REG_RULES || num_5g_reg_rules > MAX_REG_RULES) {
+ ath12k_warn(ab, "Num reg rules for 2G/5G exceeds max limit (num_2g_reg_rules: %d num_5g_reg_rules: %d max_rules: %d)\n",
+ num_2g_reg_rules, num_5g_reg_rules, MAX_REG_RULES);
+ kfree(tb);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < WMI_REG_CURRENT_MAX_AP_TYPE; i++) {
+ num_6g_reg_rules_ap[i] = reg_info->num_6g_reg_rules_ap[i];
+
+ if (num_6g_reg_rules_ap[i] > MAX_6G_REG_RULES) {
+ ath12k_warn(ab, "Num 6G reg rules for AP mode(%d) exceeds max limit (num_6g_reg_rules_ap: %d, max_rules: %d)\n",
+ i, num_6g_reg_rules_ap[i], MAX_6G_REG_RULES);
+ kfree(tb);
+ return -EINVAL;
+ }
+
+ total_reg_rules += num_6g_reg_rules_ap[i];
+ }
+
+ for (i = 0; i < WMI_REG_MAX_CLIENT_TYPE; i++) {
+ num_6g_reg_rules_cl[WMI_REG_INDOOR_AP][i] =
+ reg_info->num_6g_reg_rules_cl[WMI_REG_INDOOR_AP][i];
+ total_reg_rules += num_6g_reg_rules_cl[WMI_REG_INDOOR_AP][i];
+
+ num_6g_reg_rules_cl[WMI_REG_STD_POWER_AP][i] =
+ reg_info->num_6g_reg_rules_cl[WMI_REG_STD_POWER_AP][i];
+ total_reg_rules += num_6g_reg_rules_cl[WMI_REG_STD_POWER_AP][i];
+
+ num_6g_reg_rules_cl[WMI_REG_VLP_AP][i] =
+ reg_info->num_6g_reg_rules_cl[WMI_REG_VLP_AP][i];
+ total_reg_rules += num_6g_reg_rules_cl[WMI_REG_VLP_AP][i];
+
+ if (num_6g_reg_rules_cl[WMI_REG_INDOOR_AP][i] > MAX_6G_REG_RULES ||
+ num_6g_reg_rules_cl[WMI_REG_STD_POWER_AP][i] > MAX_6G_REG_RULES ||
+ num_6g_reg_rules_cl[WMI_REG_VLP_AP][i] > MAX_6G_REG_RULES) {
+ ath12k_warn(ab, "Num 6g client reg rules exceeds max limit, for client(type: %d)\n",
+ i);
+ kfree(tb);
+ return -EINVAL;
+ }
+ }
+
+ if (!total_reg_rules) {
+ ath12k_warn(ab, "No reg rules available\n");
+ kfree(tb);
+ return -EINVAL;
+ }
+
+ memcpy(reg_info->alpha2, &ev->alpha2, REG_ALPHA2_LEN);
+
+ /* FIXME: Current firmware includes the 6G reg rules in the 5G rule
+ * list as well for country US. Having the same 6G reg rule in both
+ * the 5G and 6G rule lists makes the intersect check true, so the
+ * same rules show up multiple times in the iw output. The workaround
+ * below skips parsing the 6G rules from the 5G reg rule list; it can
+ * be removed once firmware stops duplicating the 6G rules there.
+ */
+ if (memcmp(reg_info->alpha2, "US", 2) == 0) {
+ reg_info->num_5g_reg_rules = REG_US_5G_NUM_REG_RULES;
+ num_5g_reg_rules = reg_info->num_5g_reg_rules;
+ }
+
+ reg_info->dfs_region = le32_to_cpu(ev->dfs_region);
+ reg_info->phybitmap = le32_to_cpu(ev->phybitmap);
+ reg_info->num_phy = le32_to_cpu(ev->num_phy);
+ reg_info->phy_id = le32_to_cpu(ev->phy_id);
+ reg_info->ctry_code = le32_to_cpu(ev->country_id);
+ reg_info->reg_dmn_pair = le32_to_cpu(ev->domain_code);
+
+ switch (le32_to_cpu(ev->status_code)) {
+ case WMI_REG_SET_CC_STATUS_PASS:
+ reg_info->status_code = REG_SET_CC_STATUS_PASS;
+ break;
+ case WMI_REG_CURRENT_ALPHA2_NOT_FOUND:
+ reg_info->status_code = REG_CURRENT_ALPHA2_NOT_FOUND;
+ break;
+ case WMI_REG_INIT_ALPHA2_NOT_FOUND:
+ reg_info->status_code = REG_INIT_ALPHA2_NOT_FOUND;
+ break;
+ case WMI_REG_SET_CC_CHANGE_NOT_ALLOWED:
+ reg_info->status_code = REG_SET_CC_CHANGE_NOT_ALLOWED;
+ break;
+ case WMI_REG_SET_CC_STATUS_NO_MEMORY:
+ reg_info->status_code = REG_SET_CC_STATUS_NO_MEMORY;
+ break;
+ case WMI_REG_SET_CC_STATUS_FAIL:
+ reg_info->status_code = REG_SET_CC_STATUS_FAIL;
+ break;
+ }
+
+ reg_info->is_ext_reg_event = true;
+
+ reg_info->min_bw_2g = le32_to_cpu(ev->min_bw_2g);
+ reg_info->max_bw_2g = le32_to_cpu(ev->max_bw_2g);
+ reg_info->min_bw_5g = le32_to_cpu(ev->min_bw_5g);
+ reg_info->max_bw_5g = le32_to_cpu(ev->max_bw_5g);
+ reg_info->min_bw_6g_ap[WMI_REG_INDOOR_AP] = le32_to_cpu(ev->min_bw_6g_ap_lpi);
+ reg_info->max_bw_6g_ap[WMI_REG_INDOOR_AP] = le32_to_cpu(ev->max_bw_6g_ap_lpi);
+ reg_info->min_bw_6g_ap[WMI_REG_STD_POWER_AP] = le32_to_cpu(ev->min_bw_6g_ap_sp);
+ reg_info->max_bw_6g_ap[WMI_REG_STD_POWER_AP] = le32_to_cpu(ev->max_bw_6g_ap_sp);
+ reg_info->min_bw_6g_ap[WMI_REG_VLP_AP] = le32_to_cpu(ev->min_bw_6g_ap_vlp);
+ reg_info->max_bw_6g_ap[WMI_REG_VLP_AP] = le32_to_cpu(ev->max_bw_6g_ap_vlp);
+
+ for (i = 0; i < WMI_REG_MAX_CLIENT_TYPE; i++) {
+ reg_info->min_bw_6g_client[WMI_REG_INDOOR_AP][i] =
+ le32_to_cpu(ev->min_bw_6g_client_lpi[i]);
+ reg_info->max_bw_6g_client[WMI_REG_INDOOR_AP][i] =
+ le32_to_cpu(ev->max_bw_6g_client_lpi[i]);
+ reg_info->min_bw_6g_client[WMI_REG_STD_POWER_AP][i] =
+ le32_to_cpu(ev->min_bw_6g_client_sp[i]);
+ reg_info->max_bw_6g_client[WMI_REG_STD_POWER_AP][i] =
+ le32_to_cpu(ev->max_bw_6g_client_sp[i]);
+ reg_info->min_bw_6g_client[WMI_REG_VLP_AP][i] =
+ le32_to_cpu(ev->min_bw_6g_client_vlp[i]);
+ reg_info->max_bw_6g_client[WMI_REG_VLP_AP][i] =
+ le32_to_cpu(ev->max_bw_6g_client_vlp[i]);
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "%s:cc_ext %s dsf %d BW: min_2g %d max_2g %d min_5g %d max_5g %d",
+ __func__, reg_info->alpha2, reg_info->dfs_region,
+ reg_info->min_bw_2g, reg_info->max_bw_2g,
+ reg_info->min_bw_5g, reg_info->max_bw_5g);
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "num_2g_reg_rules %d num_5g_reg_rules %d",
+ num_2g_reg_rules, num_5g_reg_rules);
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "num_6g_reg_rules_ap_lpi: %d num_6g_reg_rules_ap_sp: %d num_6g_reg_rules_ap_vlp: %d",
+ num_6g_reg_rules_ap[WMI_REG_INDOOR_AP],
+ num_6g_reg_rules_ap[WMI_REG_STD_POWER_AP],
+ num_6g_reg_rules_ap[WMI_REG_VLP_AP]);
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "6g Regular client: num_6g_reg_rules_lpi: %d num_6g_reg_rules_sp: %d num_6g_reg_rules_vlp: %d",
+ num_6g_reg_rules_cl[WMI_REG_INDOOR_AP][WMI_REG_DEFAULT_CLIENT],
+ num_6g_reg_rules_cl[WMI_REG_STD_POWER_AP][WMI_REG_DEFAULT_CLIENT],
+ num_6g_reg_rules_cl[WMI_REG_VLP_AP][WMI_REG_DEFAULT_CLIENT]);
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "6g Subordinate client: num_6g_reg_rules_lpi: %d num_6g_reg_rules_sp: %d num_6g_reg_rules_vlp: %d",
+ num_6g_reg_rules_cl[WMI_REG_INDOOR_AP][WMI_REG_SUBORDINATE_CLIENT],
+ num_6g_reg_rules_cl[WMI_REG_STD_POWER_AP][WMI_REG_SUBORDINATE_CLIENT],
+ num_6g_reg_rules_cl[WMI_REG_VLP_AP][WMI_REG_SUBORDINATE_CLIENT]);
+
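+ /* The rule arrays follow the fixed event structure back-to-back in
+ * firmware order: 2 GHz, 5 GHz, then the 6 GHz AP and per-client
+ * lists; ext_wmi_reg_rule is advanced past each group as it is
+ * converted below.
+ */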
+ ext_wmi_reg_rule =
+ (struct ath12k_wmi_reg_rule_ext_params *)((u8 *)ev
+ + sizeof(*ev)
+ + sizeof(struct wmi_tlv));
+
+ if (num_2g_reg_rules) {
+ reg_info->reg_rules_2g_ptr =
+ create_ext_reg_rules_from_wmi(num_2g_reg_rules,
+ ext_wmi_reg_rule);
+
+ if (!reg_info->reg_rules_2g_ptr) {
+ kfree(tb);
+ ath12k_warn(ab, "Unable to Allocate memory for 2g rules\n");
+ return -ENOMEM;
+ }
+ }
+
+ if (num_5g_reg_rules) {
+ ext_wmi_reg_rule += num_2g_reg_rules;
+ reg_info->reg_rules_5g_ptr =
+ create_ext_reg_rules_from_wmi(num_5g_reg_rules,
+ ext_wmi_reg_rule);
+
+ if (!reg_info->reg_rules_5g_ptr) {
+ kfree(tb);
+ ath12k_warn(ab, "Unable to Allocate memory for 5g rules\n");
+ return -ENOMEM;
+ }
+ }
+
+ ext_wmi_reg_rule += num_5g_reg_rules;
+
+ for (i = 0; i < WMI_REG_CURRENT_MAX_AP_TYPE; i++) {
+ reg_info->reg_rules_6g_ap_ptr[i] =
+ create_ext_reg_rules_from_wmi(num_6g_reg_rules_ap[i],
+ ext_wmi_reg_rule);
+
+ if (!reg_info->reg_rules_6g_ap_ptr[i]) {
+ kfree(tb);
+ ath12k_warn(ab, "Unable to Allocate memory for 6g ap rules\n");
+ return -ENOMEM;
+ }
+
+ ext_wmi_reg_rule += num_6g_reg_rules_ap[i];
+ }
+
+ for (j = 0; j < WMI_REG_CURRENT_MAX_AP_TYPE; j++) {
+ for (i = 0; i < WMI_REG_MAX_CLIENT_TYPE; i++) {
+ reg_info->reg_rules_6g_client_ptr[j][i] =
+ create_ext_reg_rules_from_wmi(num_6g_reg_rules_cl[j][i],
+ ext_wmi_reg_rule);
+
+ if (!reg_info->reg_rules_6g_client_ptr[j][i]) {
+ kfree(tb);
+ ath12k_warn(ab, "Unable to Allocate memory for 6g client rules\n");
+ return -ENOMEM;
+ }
+
+ ext_wmi_reg_rule += num_6g_reg_rules_cl[j][i];
+ }
+ }
+
+ reg_info->client_type = le32_to_cpu(ev->client_type);
+ reg_info->rnr_tpe_usable = ev->rnr_tpe_usable;
+ reg_info->unspecified_ap_usable = ev->unspecified_ap_usable;
+ reg_info->domain_code_6g_ap[WMI_REG_INDOOR_AP] =
+ le32_to_cpu(ev->domain_code_6g_ap_lpi);
+ reg_info->domain_code_6g_ap[WMI_REG_STD_POWER_AP] =
+ le32_to_cpu(ev->domain_code_6g_ap_sp);
+ reg_info->domain_code_6g_ap[WMI_REG_VLP_AP] =
+ le32_to_cpu(ev->domain_code_6g_ap_vlp);
+
+ for (i = 0; i < WMI_REG_MAX_CLIENT_TYPE; i++) {
+ reg_info->domain_code_6g_client[WMI_REG_INDOOR_AP][i] =
+ le32_to_cpu(ev->domain_code_6g_client_lpi[i]);
+ reg_info->domain_code_6g_client[WMI_REG_STD_POWER_AP][i] =
+ le32_to_cpu(ev->domain_code_6g_client_sp[i]);
+ reg_info->domain_code_6g_client[WMI_REG_VLP_AP][i] =
+ le32_to_cpu(ev->domain_code_6g_client_vlp[i]);
+ }
+
+ reg_info->domain_code_6g_super_id = le32_to_cpu(ev->domain_code_6g_super_id);
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI, "6g client_type: %d domain_code_6g_super_id: %d",
+ reg_info->client_type, reg_info->domain_code_6g_super_id);
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI, "processed regulatory ext channel list\n");
+
+ kfree(tb);
+ return 0;
+}
+
+static int ath12k_pull_peer_del_resp_ev(struct ath12k_base *ab, struct sk_buff *skb,
+ struct wmi_peer_delete_resp_event *peer_del_resp)
+{
+ const void **tb;
+ const struct wmi_peer_delete_resp_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ ev = tb[WMI_TAG_PEER_DELETE_RESP_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch peer delete resp ev");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ memset(peer_del_resp, 0, sizeof(*peer_del_resp));
+
+ peer_del_resp->vdev_id = ev->vdev_id;
+ ether_addr_copy(peer_del_resp->peer_macaddr.addr,
+ ev->peer_macaddr.addr);
+
+ kfree(tb);
+ return 0;
+}
+
+static int ath12k_pull_vdev_del_resp_ev(struct ath12k_base *ab,
+ struct sk_buff *skb,
+ u32 *vdev_id)
+{
+ const void **tb;
+ const struct wmi_vdev_delete_resp_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ ev = tb[WMI_TAG_VDEV_DELETE_RESP_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch vdev delete resp ev");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ *vdev_id = le32_to_cpu(ev->vdev_id);
+
+ kfree(tb);
+ return 0;
+}
+
+static int ath12k_pull_bcn_tx_status_ev(struct ath12k_base *ab, void *evt_buf,
+ u32 len, u32 *vdev_id,
+ u32 *tx_status)
+{
+ const void **tb;
+ const struct wmi_bcn_tx_status_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, evt_buf, len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ ev = tb[WMI_TAG_OFFLOAD_BCN_TX_STATUS_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch bcn tx status ev");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ *vdev_id = le32_to_cpu(ev->vdev_id);
+ *tx_status = le32_to_cpu(ev->tx_status);
+
+ kfree(tb);
+ return 0;
+}
+
+static int ath12k_pull_vdev_stopped_param_tlv(struct ath12k_base *ab, struct sk_buff *skb,
+ u32 *vdev_id)
+{
+ const void **tb;
+ const struct wmi_vdev_stopped_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ ev = tb[WMI_TAG_VDEV_STOPPED_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch vdev stop ev");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ *vdev_id = le32_to_cpu(ev->vdev_id);
+
+ kfree(tb);
+ return 0;
+}
+
+static int ath12k_wmi_tlv_mgmt_rx_parse(struct ath12k_base *ab,
+ u16 tag, u16 len,
+ const void *ptr, void *data)
+{
+ struct wmi_tlv_mgmt_rx_parse *parse = data;
+
+ switch (tag) {
+ case WMI_TAG_MGMT_RX_HDR:
+ parse->fixed = ptr;
+ break;
+ case WMI_TAG_ARRAY_BYTE:
+ if (!parse->frame_buf_done) {
+ parse->frame_buf = ptr;
+ parse->frame_buf_done = true;
+ }
+ break;
+ }
+ return 0;
+}
+
+static int ath12k_pull_mgmt_rx_params_tlv(struct ath12k_base *ab,
+ struct sk_buff *skb,
+ struct ath12k_wmi_mgmt_rx_arg *hdr)
+{
+ struct wmi_tlv_mgmt_rx_parse parse = { };
+ const struct ath12k_wmi_mgmt_rx_params *ev;
+ const u8 *frame;
+ int i, ret;
+
+ ret = ath12k_wmi_tlv_iter(ab, skb->data, skb->len,
+ ath12k_wmi_tlv_mgmt_rx_parse,
+ &parse);
+ if (ret) {
+ ath12k_warn(ab, "failed to parse mgmt rx tlv %d\n", ret);
+ return ret;
+ }
+
+ ev = parse.fixed;
+ frame = parse.frame_buf;
+
+ if (!ev || !frame) {
+ ath12k_warn(ab, "failed to fetch mgmt rx hdr");
+ return -EPROTO;
+ }
+
+ hdr->pdev_id = le32_to_cpu(ev->pdev_id);
+ hdr->chan_freq = le32_to_cpu(ev->chan_freq);
+ hdr->channel = le32_to_cpu(ev->channel);
+ hdr->snr = le32_to_cpu(ev->snr);
+ hdr->rate = le32_to_cpu(ev->rate);
+ hdr->phy_mode = le32_to_cpu(ev->phy_mode);
+ hdr->buf_len = le32_to_cpu(ev->buf_len);
+ hdr->status = le32_to_cpu(ev->status);
+ hdr->flags = le32_to_cpu(ev->flags);
+ hdr->rssi = a_sle32_to_cpu(ev->rssi);
+ hdr->tsf_delta = le32_to_cpu(ev->tsf_delta);
+
+ for (i = 0; i < ATH_MAX_ANTENNA; i++)
+ hdr->rssi_ctl[i] = le32_to_cpu(ev->rssi_ctl[i]);
+
+ if (skb->len < (frame - skb->data) + hdr->buf_len) {
+ ath12k_warn(ab, "invalid length in mgmt rx hdr ev");
+ return -EPROTO;
+ }
+
+ /* Shift the sk_buff so skb->data points at `frame` and skb->len
+ * covers exactly the frame: reset the length, extend it up to the
+ * frame offset, pull that offset off the front, then extend again
+ * by the reported frame length.
+ */
+ skb_trim(skb, 0);
+ skb_put(skb, frame - skb->data);
+ skb_pull(skb, frame - skb->data);
+ skb_put(skb, hdr->buf_len);
+
+ return 0;
+}
+
+static int wmi_process_mgmt_tx_comp(struct ath12k *ar, u32 desc_id,
+ u32 status)
+{
+ struct sk_buff *msdu;
+ struct ieee80211_tx_info *info;
+ struct ath12k_skb_cb *skb_cb;
+
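+ /* desc_id was handed out from txmgmt_idr when the mgmt frame was
+ * queued for tx; look it up to recover the original msdu.
+ */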
+ spin_lock_bh(&ar->txmgmt_idr_lock);
+ msdu = idr_find(&ar->txmgmt_idr, desc_id);
+
+ if (!msdu) {
+ ath12k_warn(ar->ab, "received mgmt tx compl for invalid msdu_id: %d\n",
+ desc_id);
+ spin_unlock_bh(&ar->txmgmt_idr_lock);
+ return -ENOENT;
+ }
+
+ idr_remove(&ar->txmgmt_idr, desc_id);
+ spin_unlock_bh(&ar->txmgmt_idr_lock);
+
+ skb_cb = ATH12K_SKB_CB(msdu);
+ dma_unmap_single(ar->ab->dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
+
+ info = IEEE80211_SKB_CB(msdu);
+ if ((!(info->flags & IEEE80211_TX_CTL_NO_ACK)) && !status)
+ info->flags |= IEEE80211_TX_STAT_ACK;
+
+ ieee80211_tx_status_irqsafe(ar->hw, msdu);
+
+ /* WARN if this event arrives with no mgmt tx pending */
+ if (atomic_dec_if_positive(&ar->num_pending_mgmt_tx) < 0)
+ WARN_ON_ONCE(1);
+
+ return 0;
+}
+
+static int ath12k_pull_mgmt_tx_compl_param_tlv(struct ath12k_base *ab,
+ struct sk_buff *skb,
+ struct wmi_mgmt_tx_compl_event *param)
+{
+ const void **tb;
+ const struct wmi_mgmt_tx_compl_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ ev = tb[WMI_TAG_MGMT_TX_COMPL_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch mgmt tx compl ev");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ param->pdev_id = ev->pdev_id;
+ param->desc_id = ev->desc_id;
+ param->status = ev->status;
+
+ kfree(tb);
+ return 0;
+}
+
+static void ath12k_wmi_event_scan_started(struct ath12k *ar)
+{
+ lockdep_assert_held(&ar->data_lock);
+
+ switch (ar->scan.state) {
+ case ATH12K_SCAN_IDLE:
+ case ATH12K_SCAN_RUNNING:
+ case ATH12K_SCAN_ABORTING:
+ ath12k_warn(ar->ab, "received scan started event in an invalid scan state: %s (%d)\n",
+ ath12k_scan_state_str(ar->scan.state),
+ ar->scan.state);
+ break;
+ case ATH12K_SCAN_STARTING:
+ ar->scan.state = ATH12K_SCAN_RUNNING;
+ complete(&ar->scan.started);
+ break;
+ }
+}
+
+static void ath12k_wmi_event_scan_start_failed(struct ath12k *ar)
+{
+ lockdep_assert_held(&ar->data_lock);
+
+ switch (ar->scan.state) {
+ case ATH12K_SCAN_IDLE:
+ case ATH12K_SCAN_RUNNING:
+ case ATH12K_SCAN_ABORTING:
+ ath12k_warn(ar->ab, "received scan start failed event in an invalid scan state: %s (%d)\n",
+ ath12k_scan_state_str(ar->scan.state),
+ ar->scan.state);
+ break;
+ case ATH12K_SCAN_STARTING:
+ complete(&ar->scan.started);
+ __ath12k_mac_scan_finish(ar);
+ break;
+ }
+}
+
+static void ath12k_wmi_event_scan_completed(struct ath12k *ar)
+{
+ lockdep_assert_held(&ar->data_lock);
+
+ switch (ar->scan.state) {
+ case ATH12K_SCAN_IDLE:
+ case ATH12K_SCAN_STARTING:
+ /* One suspected reason scan can be completed while starting is
+ * if firmware fails to deliver all scan events to the host,
+ * e.g. when transport pipe is full. This has been observed
+ * with spectral scan phyerr events starving wmi transport
+ * pipe. In such case the "scan completed" event should be (and
+ * is) ignored by the host as it may be just firmware's scan
+ * state machine recovering.
+ */
+ ath12k_warn(ar->ab, "received scan completed event in an invalid scan state: %s (%d)\n",
+ ath12k_scan_state_str(ar->scan.state),
+ ar->scan.state);
+ break;
+ case ATH12K_SCAN_RUNNING:
+ case ATH12K_SCAN_ABORTING:
+ __ath12k_mac_scan_finish(ar);
+ break;
+ }
+}
+
+static void ath12k_wmi_event_scan_bss_chan(struct ath12k *ar)
+{
+ lockdep_assert_held(&ar->data_lock);
+
+ switch (ar->scan.state) {
+ case ATH12K_SCAN_IDLE:
+ case ATH12K_SCAN_STARTING:
+ ath12k_warn(ar->ab, "received scan bss chan event in an invalid scan state: %s (%d)\n",
+ ath12k_scan_state_str(ar->scan.state),
+ ar->scan.state);
+ break;
+ case ATH12K_SCAN_RUNNING:
+ case ATH12K_SCAN_ABORTING:
+ ar->scan_channel = NULL;
+ break;
+ }
+}
+
+static void ath12k_wmi_event_scan_foreign_chan(struct ath12k *ar, u32 freq)
+{
+ lockdep_assert_held(&ar->data_lock);
+
+ switch (ar->scan.state) {
+ case ATH12K_SCAN_IDLE:
+ case ATH12K_SCAN_STARTING:
+ ath12k_warn(ar->ab, "received scan foreign chan event in an invalid scan state: %s (%d)\n",
+ ath12k_scan_state_str(ar->scan.state),
+ ar->scan.state);
+ break;
+ case ATH12K_SCAN_RUNNING:
+ case ATH12K_SCAN_ABORTING:
+ ar->scan_channel = ieee80211_get_channel(ar->hw->wiphy, freq);
+ break;
+ }
+}
+
+static const char *
+ath12k_wmi_event_scan_type_str(enum wmi_scan_event_type type,
+ enum wmi_scan_completion_reason reason)
+{
+ switch (type) {
+ case WMI_SCAN_EVENT_STARTED:
+ return "started";
+ case WMI_SCAN_EVENT_COMPLETED:
+ switch (reason) {
+ case WMI_SCAN_REASON_COMPLETED:
+ return "completed";
+ case WMI_SCAN_REASON_CANCELLED:
+ return "completed [cancelled]";
+ case WMI_SCAN_REASON_PREEMPTED:
+ return "completed [preempted]";
+ case WMI_SCAN_REASON_TIMEDOUT:
+ return "completed [timedout]";
+ case WMI_SCAN_REASON_INTERNAL_FAILURE:
+ return "completed [internal err]";
+ case WMI_SCAN_REASON_MAX:
+ break;
+ }
+ return "completed [unknown]";
+ case WMI_SCAN_EVENT_BSS_CHANNEL:
+ return "bss channel";
+ case WMI_SCAN_EVENT_FOREIGN_CHAN:
+ return "foreign channel";
+ case WMI_SCAN_EVENT_DEQUEUED:
+ return "dequeued";
+ case WMI_SCAN_EVENT_PREEMPTED:
+ return "preempted";
+ case WMI_SCAN_EVENT_START_FAILED:
+ return "start failed";
+ case WMI_SCAN_EVENT_RESTARTED:
+ return "restarted";
+ case WMI_SCAN_EVENT_FOREIGN_CHAN_EXIT:
+ return "foreign channel exit";
+ default:
+ return "unknown";
+ }
+}
+
+static int ath12k_pull_scan_ev(struct ath12k_base *ab, struct sk_buff *skb,
+ struct wmi_scan_event *scan_evt_param)
+{
+ const void **tb;
+ const struct wmi_scan_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ ev = tb[WMI_TAG_SCAN_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch scan ev");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ scan_evt_param->event_type = ev->event_type;
+ scan_evt_param->reason = ev->reason;
+ scan_evt_param->channel_freq = ev->channel_freq;
+ scan_evt_param->scan_req_id = ev->scan_req_id;
+ scan_evt_param->scan_id = ev->scan_id;
+ scan_evt_param->vdev_id = ev->vdev_id;
+ scan_evt_param->tsf_timestamp = ev->tsf_timestamp;
+
+ kfree(tb);
+ return 0;
+}
+
+static int ath12k_pull_peer_sta_kickout_ev(struct ath12k_base *ab, struct sk_buff *skb,
+ struct wmi_peer_sta_kickout_arg *arg)
+{
+ const void **tb;
+ const struct wmi_peer_sta_kickout_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ ev = tb[WMI_TAG_PEER_STA_KICKOUT_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch peer sta kickout ev");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ arg->mac_addr = ev->peer_macaddr.addr;
+
+ kfree(tb);
+ return 0;
+}
+
+static int ath12k_pull_roam_ev(struct ath12k_base *ab, struct sk_buff *skb,
+ struct wmi_roam_event *roam_ev)
+{
+ const void **tb;
+ const struct wmi_roam_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ ev = tb[WMI_TAG_ROAM_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch roam ev");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ roam_ev->vdev_id = ev->vdev_id;
+ roam_ev->reason = ev->reason;
+ roam_ev->rssi = ev->rssi;
+
+ kfree(tb);
+ return 0;
+}
+
+static int freq_to_idx(struct ath12k *ar, int freq)
+{
+ struct ieee80211_supported_band *sband;
+ int band, ch, idx = 0;
+
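+ /* Flatten (band, channel) into a single index across all registered
+ * bands; if the frequency is not found the loop falls through and
+ * the total channel count is returned.
+ */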
+ for (band = NL80211_BAND_2GHZ; band < NUM_NL80211_BANDS; band++) {
+ sband = ar->hw->wiphy->bands[band];
+ if (!sband)
+ continue;
+
+ for (ch = 0; ch < sband->n_channels; ch++, idx++)
+ if (sband->channels[ch].center_freq == freq)
+ goto exit;
+ }
+
+exit:
+ return idx;
+}
+
+static int ath12k_pull_chan_info_ev(struct ath12k_base *ab, u8 *evt_buf,
+ u32 len, struct wmi_chan_info_event *ch_info_ev)
+{
+ const void **tb;
+ const struct wmi_chan_info_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, evt_buf, len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ ev = tb[WMI_TAG_CHAN_INFO_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch chan info ev");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ ch_info_ev->err_code = ev->err_code;
+ ch_info_ev->freq = ev->freq;
+ ch_info_ev->cmd_flags = ev->cmd_flags;
+ ch_info_ev->noise_floor = ev->noise_floor;
+ ch_info_ev->rx_clear_count = ev->rx_clear_count;
+ ch_info_ev->cycle_count = ev->cycle_count;
+ ch_info_ev->chan_tx_pwr_range = ev->chan_tx_pwr_range;
+ ch_info_ev->chan_tx_pwr_tp = ev->chan_tx_pwr_tp;
+ ch_info_ev->rx_frame_count = ev->rx_frame_count;
+ ch_info_ev->tx_frame_cnt = ev->tx_frame_cnt;
+ ch_info_ev->mac_clk_mhz = ev->mac_clk_mhz;
+ ch_info_ev->vdev_id = ev->vdev_id;
+
+ kfree(tb);
+ return 0;
+}
+
+static int
+ath12k_pull_pdev_bss_chan_info_ev(struct ath12k_base *ab, struct sk_buff *skb,
+ struct wmi_pdev_bss_chan_info_event *bss_ch_info_ev)
+{
+ const void **tb;
+ const struct wmi_pdev_bss_chan_info_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ ev = tb[WMI_TAG_PDEV_BSS_CHAN_INFO_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch pdev bss chan info ev");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ bss_ch_info_ev->pdev_id = ev->pdev_id;
+ bss_ch_info_ev->freq = ev->freq;
+ bss_ch_info_ev->noise_floor = ev->noise_floor;
+ bss_ch_info_ev->rx_clear_count_low = ev->rx_clear_count_low;
+ bss_ch_info_ev->rx_clear_count_high = ev->rx_clear_count_high;
+ bss_ch_info_ev->cycle_count_low = ev->cycle_count_low;
+ bss_ch_info_ev->cycle_count_high = ev->cycle_count_high;
+ bss_ch_info_ev->tx_cycle_count_low = ev->tx_cycle_count_low;
+ bss_ch_info_ev->tx_cycle_count_high = ev->tx_cycle_count_high;
+ bss_ch_info_ev->rx_cycle_count_low = ev->rx_cycle_count_low;
+ bss_ch_info_ev->rx_cycle_count_high = ev->rx_cycle_count_high;
+ bss_ch_info_ev->rx_bss_cycle_count_low = ev->rx_bss_cycle_count_low;
+ bss_ch_info_ev->rx_bss_cycle_count_high = ev->rx_bss_cycle_count_high;
+
+ kfree(tb);
+ return 0;
+}
+
+static int
+ath12k_pull_vdev_install_key_compl_ev(struct ath12k_base *ab, struct sk_buff *skb,
+ struct wmi_vdev_install_key_complete_arg *arg)
+{
+ const void **tb;
+ const struct wmi_vdev_install_key_compl_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ ev = tb[WMI_TAG_VDEV_INSTALL_KEY_COMPLETE_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch vdev install key compl ev");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ arg->vdev_id = le32_to_cpu(ev->vdev_id);
+ arg->macaddr = ev->peer_macaddr.addr;
+ arg->key_idx = le32_to_cpu(ev->key_idx);
+ arg->key_flags = le32_to_cpu(ev->key_flags);
+ arg->status = le32_to_cpu(ev->status);
+
+ kfree(tb);
+ return 0;
+}
+
+static int ath12k_pull_peer_assoc_conf_ev(struct ath12k_base *ab, struct sk_buff *skb,
+ struct wmi_peer_assoc_conf_arg *peer_assoc_conf)
+{
+ const void **tb;
+ const struct wmi_peer_assoc_conf_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ ev = tb[WMI_TAG_PEER_ASSOC_CONF_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch peer assoc conf ev");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ peer_assoc_conf->vdev_id = le32_to_cpu(ev->vdev_id);
+ peer_assoc_conf->macaddr = ev->peer_macaddr.addr;
+
+ kfree(tb);
+ return 0;
+}
+
+static int
+ath12k_pull_pdev_temp_ev(struct ath12k_base *ab, u8 *evt_buf,
+ u32 len, struct wmi_pdev_temperature_event *ev)
+{
+ const void **tb;
+ const struct wmi_pdev_temperature_event *fixed;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, evt_buf, len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ fixed = tb[WMI_TAG_PDEV_TEMPERATURE_EVENT];
+ if (!fixed) {
+ ath12k_warn(ab, "failed to fetch pdev temp ev");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ /* Copy the fixed parameters out to the caller; merely re-pointing a
+ * local pointer would leave the caller's zero-initialized struct
+ * untouched.
+ */
+ *ev = *fixed;
+
+ kfree(tb);
+ return 0;
+}
+
+static void ath12k_wmi_op_ep_tx_credits(struct ath12k_base *ab)
+{
+ /* try to send pending beacons first. they take priority */
+ wake_up(&ab->wmi_ab.tx_credits_wq);
+}
+
+static void ath12k_wmi_htc_tx_complete(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ dev_kfree_skb(skb);
+}
+
+static bool ath12k_reg_is_world_alpha(char *alpha)
+{
+ return alpha[0] == '0' && alpha[1] == '0';
+}
+
+static int ath12k_reg_chan_list_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ struct ath12k_reg_info *reg_info = NULL;
+ struct ieee80211_regdomain *regd = NULL;
+ bool intersect = false;
+ int ret = 0, pdev_idx, i, j;
+ struct ath12k *ar;
+
+ reg_info = kzalloc(sizeof(*reg_info), GFP_ATOMIC);
+ if (!reg_info) {
+ ret = -ENOMEM;
+ goto fallback;
+ }
+
+ ret = ath12k_pull_reg_chan_list_ext_update_ev(ab, skb, reg_info);
+
+ if (ret) {
+ ath12k_warn(ab, "failed to extract regulatory info from received event\n");
+ goto fallback;
+ }
+
+ if (reg_info->status_code != REG_SET_CC_STATUS_PASS) {
+ /* If the firmware fails to set the requested country, it
+ * retains the current regd. Log a warning and return.
+ */
+ ath12k_warn(ab, "Failed to set the requested Country regulatory setting\n");
+ goto mem_free;
+ }
+
+ pdev_idx = reg_info->phy_id;
+
+ if (pdev_idx >= ab->num_radios) {
+ /* Process the event for phy0 only if single_pdev_only
+ * is true. If pdev_idx is valid but not 0, discard the
+ * event. Otherwise, it goes to fallback.
+ */
+ if (ab->hw_params->single_pdev_only &&
+ pdev_idx < ab->hw_params->num_rxmda_per_pdev)
+ goto mem_free;
+ else
+ goto fallback;
+ }
+
+ /* Avoid multiple overwrites to the default regd during core
+ * stop-start after mac registration.
+ */
+ if (ab->default_regd[pdev_idx] && !ab->new_regd[pdev_idx] &&
+ !memcmp(ab->default_regd[pdev_idx]->alpha2,
+ reg_info->alpha2, 2))
+ goto mem_free;
+
+ /* Intersect new rules with default regd if a new country setting was
+ * requested, i.e. a default regd was already set during initialization
+ * and the regd coming from this event has a valid country info.
+ */
+ if (ab->default_regd[pdev_idx] &&
+ !ath12k_reg_is_world_alpha((char *)
+ ab->default_regd[pdev_idx]->alpha2) &&
+ !ath12k_reg_is_world_alpha((char *)reg_info->alpha2))
+ intersect = true;
+
+ regd = ath12k_reg_build_regd(ab, reg_info, intersect);
+ if (!regd) {
+ ath12k_warn(ab, "failed to build regd from reg_info\n");
+ goto fallback;
+ }
+
+ spin_lock(&ab->base_lock);
+ if (test_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags)) {
+ /* Once the mac is registered, ar is valid and, currently, all CC
+ * events from fw are considered to result from user requests.
+ * Free the previously built regd before assigning the newly
+ * generated regd to ar. A NULL pointer is handled by kfree()
+ * itself.
+ */
+ ar = ab->pdevs[pdev_idx].ar;
+ kfree(ab->new_regd[pdev_idx]);
+ ab->new_regd[pdev_idx] = regd;
+ ieee80211_queue_work(ar->hw, &ar->regd_update_work);
+ } else {
+ /* Multiple events for the same *ar are not expected. But we
+ * can still clear any previously stored default_regd if we
+ * receive this event for the same radio by mistake. A NULL
+ * pointer is handled by kfree() itself.
+ */
+ kfree(ab->default_regd[pdev_idx]);
+ /* This regd would be applied during mac registration */
+ ab->default_regd[pdev_idx] = regd;
+ }
+ ab->dfs_region = reg_info->dfs_region;
+ spin_unlock(&ab->base_lock);
+
+ goto mem_free;
+
+fallback:
+ /* Fall back to the older regd (by sending the previous country
+ * setting again) if fw has succeeded and we failed to process it
+ * here. The regdomain should be uniform across driver and fw. Since
+ * the fw has processed the command and sent a success status, we
+ * expect this function to succeed as well. If it doesn't, the country
+ * needs to be reverted at the fw and the old SCAN_CHAN_LIST cmd sent.
+ */
+ /* TODO: This is rare, but still should also be handled */
+ WARN_ON(1);
+mem_free:
+ if (reg_info) {
+ kfree(reg_info->reg_rules_2g_ptr);
+ kfree(reg_info->reg_rules_5g_ptr);
+ if (reg_info->is_ext_reg_event) {
+ for (i = 0; i < WMI_REG_CURRENT_MAX_AP_TYPE; i++)
+ kfree(reg_info->reg_rules_6g_ap_ptr[i]);
+
+ for (j = 0; j < WMI_REG_CURRENT_MAX_AP_TYPE; j++)
+ for (i = 0; i < WMI_REG_MAX_CLIENT_TYPE; i++)
+ kfree(reg_info->reg_rules_6g_client_ptr[j][i]);
+ }
+ kfree(reg_info);
+ }
+ return ret;
+}
+
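+/* TLV iterator callback for WMI_READY_EVENTID. The fixed parameters are
+ * copied with a min_t() bound so that a size mismatch between firmware
+ * and host structures cannot overflow, and any extra per-pdev MAC
+ * addresses advertised by firmware are distributed to the radios.
+ */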
+static int ath12k_wmi_rdy_parse(struct ath12k_base *ab, u16 tag, u16 len,
+ const void *ptr, void *data)
+{
+ struct ath12k_wmi_rdy_parse *rdy_parse = data;
+ struct wmi_ready_event fixed_param;
+ struct ath12k_wmi_mac_addr_params *addr_list;
+ struct ath12k_pdev *pdev;
+ u32 num_mac_addr;
+ int i;
+
+ switch (tag) {
+ case WMI_TAG_READY_EVENT:
+ memset(&fixed_param, 0, sizeof(fixed_param));
+ memcpy(&fixed_param, (struct wmi_ready_event *)ptr,
+ min_t(u16, sizeof(fixed_param), len));
+ ab->wlan_init_status = le32_to_cpu(fixed_param.ready_event_min.status);
+ rdy_parse->num_extra_mac_addr =
+ le32_to_cpu(fixed_param.ready_event_min.num_extra_mac_addr);
+
+ ether_addr_copy(ab->mac_addr,
+ fixed_param.ready_event_min.mac_addr.addr);
+ ab->pktlog_defs_checksum = le32_to_cpu(fixed_param.pktlog_defs_checksum);
+ ab->wmi_ready = true;
+ break;
+ case WMI_TAG_ARRAY_FIXED_STRUCT:
+ addr_list = (struct ath12k_wmi_mac_addr_params *)ptr;
+ num_mac_addr = rdy_parse->num_extra_mac_addr;
+
+ if (!(ab->num_radios > 1 && num_mac_addr >= ab->num_radios))
+ break;
+
+ for (i = 0; i < ab->num_radios; i++) {
+ pdev = &ab->pdevs[i];
+ ether_addr_copy(pdev->mac_addr, addr_list[i].addr);
+ }
+ ab->pdevs_macaddr_valid = true;
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int ath12k_ready_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ struct ath12k_wmi_rdy_parse rdy_parse = { };
+ int ret;
+
+ ret = ath12k_wmi_tlv_iter(ab, skb->data, skb->len,
+ ath12k_wmi_rdy_parse, &rdy_parse);
+ if (ret) {
+ ath12k_warn(ab, "failed to parse tlv %d\n", ret);
+ return ret;
+ }
+
+ complete(&ab->wmi_ab.unified_ready);
+ return 0;
+}
+
+static void ath12k_peer_delete_resp_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ struct wmi_peer_delete_resp_event peer_del_resp;
+ struct ath12k *ar;
+
+ if (ath12k_pull_peer_del_resp_ev(ab, skb, &peer_del_resp) != 0) {
+ ath12k_warn(ab, "failed to extract peer delete resp");
+ return;
+ }
+
+ rcu_read_lock();
+ ar = ath12k_mac_get_ar_by_vdev_id(ab, le32_to_cpu(peer_del_resp.vdev_id));
+ if (!ar) {
+ ath12k_warn(ab, "invalid vdev id in peer delete resp ev %d",
+ peer_del_resp.vdev_id);
+ rcu_read_unlock();
+ return;
+ }
+
+ complete(&ar->peer_delete_done);
+ rcu_read_unlock();
+ ath12k_dbg(ab, ATH12K_DBG_WMI, "peer delete resp for vdev id %d addr %pM\n",
+ peer_del_resp.vdev_id, peer_del_resp.peer_macaddr.addr);
+}
+
+static void ath12k_vdev_delete_resp_event(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ struct ath12k *ar;
+ u32 vdev_id = 0;
+
+ if (ath12k_pull_vdev_del_resp_ev(ab, skb, &vdev_id) != 0) {
+ ath12k_warn(ab, "failed to extract vdev delete resp");
+ return;
+ }
+
+ rcu_read_lock();
+ ar = ath12k_mac_get_ar_by_vdev_id(ab, vdev_id);
+ if (!ar) {
+ ath12k_warn(ab, "invalid vdev id in vdev delete resp ev %d",
+ vdev_id);
+ rcu_read_unlock();
+ return;
+ }
+
+ complete(&ar->vdev_delete_done);
+
+ rcu_read_unlock();
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI, "vdev delete resp for vdev id %d\n",
+ vdev_id);
+}
+
+static const char *ath12k_wmi_vdev_resp_print(u32 vdev_resp_status)
+{
+ switch (vdev_resp_status) {
+ case WMI_VDEV_START_RESPONSE_INVALID_VDEVID:
+ return "invalid vdev id";
+ case WMI_VDEV_START_RESPONSE_NOT_SUPPORTED:
+ return "not supported";
+ case WMI_VDEV_START_RESPONSE_DFS_VIOLATION:
+ return "dfs violation";
+ case WMI_VDEV_START_RESPONSE_INVALID_REGDOMAIN:
+ return "invalid regdomain";
+ default:
+ return "unknown";
+ }
+}
+
+static void ath12k_vdev_start_resp_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ struct wmi_vdev_start_resp_event vdev_start_resp;
+ struct ath12k *ar;
+ u32 status;
+
+ if (ath12k_pull_vdev_start_resp_tlv(ab, skb, &vdev_start_resp) != 0) {
+ ath12k_warn(ab, "failed to extract vdev start resp");
+ return;
+ }
+
+ rcu_read_lock();
+ ar = ath12k_mac_get_ar_by_vdev_id(ab, le32_to_cpu(vdev_start_resp.vdev_id));
+ if (!ar) {
+ ath12k_warn(ab, "invalid vdev id in vdev start resp ev %d",
+ vdev_start_resp.vdev_id);
+ rcu_read_unlock();
+ return;
+ }
+
+ ar->last_wmi_vdev_start_status = 0;
+
+ status = le32_to_cpu(vdev_start_resp.status);
+
+ if (WARN_ON_ONCE(status)) {
+ ath12k_warn(ab, "vdev start resp error status %d (%s)\n",
+ status, ath12k_wmi_vdev_resp_print(status));
+ ar->last_wmi_vdev_start_status = status;
+ }
+
+ complete(&ar->vdev_setup_done);
+
+ rcu_read_unlock();
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI, "vdev start resp for vdev id %d",
+ vdev_start_resp.vdev_id);
+}
+
+static void ath12k_bcn_tx_status_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ u32 vdev_id, tx_status;
+
+ if (ath12k_pull_bcn_tx_status_ev(ab, skb->data, skb->len,
+ &vdev_id, &tx_status) != 0) {
+ ath12k_warn(ab, "failed to extract bcn tx status");
+ return;
+ }
+}
+
+static void ath12k_vdev_stopped_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ struct ath12k *ar;
+ u32 vdev_id = 0;
+
+ if (ath12k_pull_vdev_stopped_param_tlv(ab, skb, &vdev_id) != 0) {
+ ath12k_warn(ab, "failed to extract vdev stopped event");
+ return;
+ }
+
+ rcu_read_lock();
+ ar = ath12k_mac_get_ar_by_vdev_id(ab, vdev_id);
+ if (!ar) {
+ ath12k_warn(ab, "invalid vdev id in vdev stopped ev %d",
+ vdev_id);
+ rcu_read_unlock();
+ return;
+ }
+
+ complete(&ar->vdev_setup_done);
+
+ rcu_read_unlock();
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI, "vdev stopped for vdev id %d", vdev_id);
+}
+
+static void ath12k_mgmt_rx_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ struct ath12k_wmi_mgmt_rx_arg rx_ev = {0};
+ struct ath12k *ar;
+ struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
+ struct ieee80211_hdr *hdr;
+ u16 fc;
+ struct ieee80211_supported_band *sband;
+
+ if (ath12k_pull_mgmt_rx_params_tlv(ab, skb, &rx_ev) != 0) {
+ ath12k_warn(ab, "failed to extract mgmt rx event");
+ dev_kfree_skb(skb);
+ return;
+ }
+
+ memset(status, 0, sizeof(*status));
+
+ ath12k_dbg(ab, ATH12K_DBG_MGMT, "mgmt rx event status %08x\n",
+ rx_ev.status);
+
+ rcu_read_lock();
+ ar = ath12k_mac_get_ar_by_pdev_id(ab, rx_ev.pdev_id);
+
+ if (!ar) {
+ ath12k_warn(ab, "invalid pdev_id %d in mgmt_rx_event\n",
+ rx_ev.pdev_id);
+ dev_kfree_skb(skb);
+ goto exit;
+ }
+
+ if ((test_bit(ATH12K_CAC_RUNNING, &ar->dev_flags)) ||
+ (rx_ev.status & (WMI_RX_STATUS_ERR_DECRYPT |
+ WMI_RX_STATUS_ERR_KEY_CACHE_MISS |
+ WMI_RX_STATUS_ERR_CRC))) {
+ dev_kfree_skb(skb);
+ goto exit;
+ }
+
+ if (rx_ev.status & WMI_RX_STATUS_ERR_MIC)
+ status->flag |= RX_FLAG_MMIC_ERROR;
+
+ if (rx_ev.chan_freq >= ATH12K_MIN_6G_FREQ) {
+ status->band = NL80211_BAND_6GHZ;
+ } else if (rx_ev.channel >= 1 && rx_ev.channel <= 14) {
+ status->band = NL80211_BAND_2GHZ;
+ } else if (rx_ev.channel >= 36 && rx_ev.channel <= ATH12K_MAX_5G_CHAN) {
+ status->band = NL80211_BAND_5GHZ;
+ } else {
+ /* Shouldn't happen unless list of advertised channels to
+ * mac80211 has been changed.
+ */
+ WARN_ON_ONCE(1);
+ dev_kfree_skb(skb);
+ goto exit;
+ }
+
+ if (rx_ev.phy_mode == MODE_11B &&
+ (status->band == NL80211_BAND_5GHZ || status->band == NL80211_BAND_6GHZ))
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "wmi mgmt rx 11b (CCK) on 5/6GHz, band = %d\n", status->band);
+
+ sband = &ar->mac.sbands[status->band];
+
+ status->freq = ieee80211_channel_to_frequency(rx_ev.channel,
+ status->band);
+ status->signal = rx_ev.snr + ATH12K_DEFAULT_NOISE_FLOOR;
+ status->rate_idx = ath12k_mac_bitrate_to_idx(sband, rx_ev.rate / 100);
+
+ hdr = (struct ieee80211_hdr *)skb->data;
+ fc = le16_to_cpu(hdr->frame_control);
+
+ /* Firmware is guaranteed to report all essential management frames via
+ * WMI, while it can deliver some extra via HTT. Since there can be
+ * duplicates, split the reporting wrt monitor/sniffing.
+ */
+ status->flag |= RX_FLAG_SKIP_MONITOR;
+
+ /* In case of PMF, FW delivers decrypted frames with Protected Bit set
+ * including group privacy action frames.
+ */
+ if (ieee80211_has_protected(hdr->frame_control)) {
+ status->flag |= RX_FLAG_DECRYPTED;
+
+ if (!ieee80211_is_robust_mgmt_frame(skb)) {
+ status->flag |= RX_FLAG_IV_STRIPPED |
+ RX_FLAG_MMIC_STRIPPED;
+ hdr->frame_control = __cpu_to_le16(fc &
+ ~IEEE80211_FCTL_PROTECTED);
+ }
+ }
+
+ /* TODO: Pending handle beacon implementation
+ *if (ieee80211_is_beacon(hdr->frame_control))
+ * ath12k_mac_handle_beacon(ar, skb);
+ */
+
+ ath12k_dbg(ab, ATH12K_DBG_MGMT,
+ "event mgmt rx skb %pK len %d ftype %02x stype %02x\n",
+ skb, skb->len,
+ fc & IEEE80211_FCTL_FTYPE, fc & IEEE80211_FCTL_STYPE);
+
+ ath12k_dbg(ab, ATH12K_DBG_MGMT,
+ "event mgmt rx freq %d band %d snr %d, rate_idx %d\n",
+ status->freq, status->band, status->signal,
+ status->rate_idx);
+
+ ieee80211_rx_ni(ar->hw, skb);
+
+exit:
+ rcu_read_unlock();
+}
+
+static void ath12k_mgmt_tx_compl_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ struct wmi_mgmt_tx_compl_event tx_compl_param = {0};
+ struct ath12k *ar;
+
+ if (ath12k_pull_mgmt_tx_compl_param_tlv(ab, skb, &tx_compl_param) != 0) {
+ ath12k_warn(ab, "failed to extract mgmt tx compl event");
+ return;
+ }
+
+ rcu_read_lock();
+ ar = ath12k_mac_get_ar_by_pdev_id(ab, le32_to_cpu(tx_compl_param.pdev_id));
+ if (!ar) {
+ ath12k_warn(ab, "invalid pdev id %d in mgmt_tx_compl_event\n",
+ tx_compl_param.pdev_id);
+ goto exit;
+ }
+
+ wmi_process_mgmt_tx_comp(ar, le32_to_cpu(tx_compl_param.desc_id),
+ le32_to_cpu(tx_compl_param.status));
+
+ ath12k_dbg(ab, ATH12K_DBG_MGMT,
+ "mgmt tx compl ev pdev_id %d, desc_id %d, status %d",
+ tx_compl_param.pdev_id, tx_compl_param.desc_id,
+ tx_compl_param.status);
+
+exit:
+ rcu_read_unlock();
+}
+
+static struct ath12k *ath12k_get_ar_on_scan_abort(struct ath12k_base *ab,
+ u32 vdev_id)
+{
+ int i;
+ struct ath12k_pdev *pdev;
+ struct ath12k *ar;
+
+ for (i = 0; i < ab->num_radios; i++) {
+ pdev = rcu_dereference(ab->pdevs_active[i]);
+ if (pdev && pdev->ar) {
+ ar = pdev->ar;
+
+ spin_lock_bh(&ar->data_lock);
+ if (ar->scan.state == ATH12K_SCAN_ABORTING &&
+ ar->scan.vdev_id == vdev_id) {
+ spin_unlock_bh(&ar->data_lock);
+ return ar;
+ }
+ spin_unlock_bh(&ar->data_lock);
+ }
+ }
+ return NULL;
+}
+
+static void ath12k_scan_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ struct ath12k *ar;
+ struct wmi_scan_event scan_ev = {0};
+
+ if (ath12k_pull_scan_ev(ab, skb, &scan_ev) != 0) {
+ ath12k_warn(ab, "failed to extract scan event");
+ return;
+ }
+
+ rcu_read_lock();
+
+ /* In case the scan was cancelled, e.g. during interface teardown,
+ * the interface will not be found among the active interfaces.
+ * In such scenarios, instead iterate over the active pdevs and
+ * look for the 'ar' whose scan is ABORTING and whose aborting
+ * scan's vdev id matches this event's info.
+ */
+ if (le32_to_cpu(scan_ev.event_type) == WMI_SCAN_EVENT_COMPLETED &&
+ le32_to_cpu(scan_ev.reason) == WMI_SCAN_REASON_CANCELLED)
+ ar = ath12k_get_ar_on_scan_abort(ab, le32_to_cpu(scan_ev.vdev_id));
+ else
+ ar = ath12k_mac_get_ar_by_vdev_id(ab, le32_to_cpu(scan_ev.vdev_id));
+
+ if (!ar) {
+ ath12k_warn(ab, "Received scan event for unknown vdev");
+ rcu_read_unlock();
+ return;
+ }
+
+ spin_lock_bh(&ar->data_lock);
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "scan event %s type %d reason %d freq %d req_id %d scan_id %d vdev_id %d state %s (%d)\n",
+ ath12k_wmi_event_scan_type_str(le32_to_cpu(scan_ev.event_type),
+ le32_to_cpu(scan_ev.reason)),
+ le32_to_cpu(scan_ev.event_type),
+ le32_to_cpu(scan_ev.reason),
+ le32_to_cpu(scan_ev.channel_freq),
+ le32_to_cpu(scan_ev.scan_req_id),
+ le32_to_cpu(scan_ev.scan_id),
+ le32_to_cpu(scan_ev.vdev_id),
+ ath12k_scan_state_str(ar->scan.state), ar->scan.state);
+
+ switch (le32_to_cpu(scan_ev.event_type)) {
+ case WMI_SCAN_EVENT_STARTED:
+ ath12k_wmi_event_scan_started(ar);
+ break;
+ case WMI_SCAN_EVENT_COMPLETED:
+ ath12k_wmi_event_scan_completed(ar);
+ break;
+ case WMI_SCAN_EVENT_BSS_CHANNEL:
+ ath12k_wmi_event_scan_bss_chan(ar);
+ break;
+ case WMI_SCAN_EVENT_FOREIGN_CHAN:
+ ath12k_wmi_event_scan_foreign_chan(ar, le32_to_cpu(scan_ev.channel_freq));
+ break;
+ case WMI_SCAN_EVENT_START_FAILED:
+ ath12k_warn(ab, "received scan start failure event\n");
+ ath12k_wmi_event_scan_start_failed(ar);
+ break;
+ case WMI_SCAN_EVENT_DEQUEUED:
+ case WMI_SCAN_EVENT_PREEMPTED:
+ case WMI_SCAN_EVENT_RESTARTED:
+ case WMI_SCAN_EVENT_FOREIGN_CHAN_EXIT:
+ default:
+ break;
+ }
+
+ spin_unlock_bh(&ar->data_lock);
+
+ rcu_read_unlock();
+}
+
+static void ath12k_peer_sta_kickout_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ struct wmi_peer_sta_kickout_arg arg = {};
+ struct ieee80211_sta *sta;
+ struct ath12k_peer *peer;
+ struct ath12k *ar;
+
+ if (ath12k_pull_peer_sta_kickout_ev(ab, skb, &arg) != 0) {
+ ath12k_warn(ab, "failed to extract peer sta kickout event");
+ return;
+ }
+
+ rcu_read_lock();
+
+ spin_lock_bh(&ab->base_lock);
+
+ peer = ath12k_peer_find_by_addr(ab, arg.mac_addr);
+
+ if (!peer) {
+ ath12k_warn(ab, "peer not found %pM\n",
+ arg.mac_addr);
+ goto exit;
+ }
+
+ ar = ath12k_mac_get_ar_by_vdev_id(ab, peer->vdev_id);
+ if (!ar) {
+ ath12k_warn(ab, "invalid vdev id in peer sta kickout ev %d",
+ peer->vdev_id);
+ goto exit;
+ }
+
+ sta = ieee80211_find_sta_by_ifaddr(ar->hw,
+ arg.mac_addr, NULL);
+ if (!sta) {
+ ath12k_warn(ab, "Spurious quick kickout for STA %pM\n",
+ arg.mac_addr);
+ goto exit;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI, "peer sta kickout event %pM",
+ arg.mac_addr);
+
+ ieee80211_report_low_ack(sta, 10);
+
+exit:
+ spin_unlock_bh(&ab->base_lock);
+ rcu_read_unlock();
+}
+
+static void ath12k_roam_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ struct wmi_roam_event roam_ev = {};
+ struct ath12k *ar;
+
+ if (ath12k_pull_roam_ev(ab, skb, &roam_ev) != 0) {
+ ath12k_warn(ab, "failed to extract roam event");
+ return;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "wmi roam event vdev %u reason 0x%08x rssi %d\n",
+ roam_ev.vdev_id, roam_ev.reason, roam_ev.rssi);
+
+ rcu_read_lock();
+ ar = ath12k_mac_get_ar_by_vdev_id(ab, le32_to_cpu(roam_ev.vdev_id));
+ if (!ar) {
+ ath12k_warn(ab, "invalid vdev id in roam ev %d",
+ roam_ev.vdev_id);
+ rcu_read_unlock();
+ return;
+ }
+
+ if (le32_to_cpu(roam_ev.reason) >= WMI_ROAM_REASON_MAX)
+ ath12k_warn(ab, "ignoring unknown roam event reason %d on vdev %i\n",
+ roam_ev.reason, roam_ev.vdev_id);
+
+ switch (le32_to_cpu(roam_ev.reason)) {
+ case WMI_ROAM_REASON_BEACON_MISS:
+ /* TODO: Pending beacon miss and connection_loss_work
+ * implementation
+ * ath12k_mac_handle_beacon_miss(ar, vdev_id);
+ */
+ break;
+ case WMI_ROAM_REASON_BETTER_AP:
+ case WMI_ROAM_REASON_LOW_RSSI:
+ case WMI_ROAM_REASON_SUITABLE_AP_FOUND:
+ case WMI_ROAM_REASON_HO_FAILED:
+ ath12k_warn(ab, "ignoring not implemented roam event reason %d on vdev %i\n",
+ roam_ev.reason, roam_ev.vdev_id);
+ break;
+ }
+
+ rcu_read_unlock();
+}
+
+static void ath12k_chan_info_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ struct wmi_chan_info_event ch_info_ev = {0};
+ struct ath12k *ar;
+ struct survey_info *survey;
+ int idx;
+ /* HW channel counters frequency value in hertz */
+ u32 cc_freq_hz = ab->cc_freq_hz;
+
+ if (ath12k_pull_chan_info_ev(ab, skb->data, skb->len, &ch_info_ev) != 0) {
+ ath12k_warn(ab, "failed to extract chan info event");
+ return;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "chan info vdev_id %d err_code %d freq %d cmd_flags %d noise_floor %d rx_clear_count %d cycle_count %d mac_clk_mhz %d\n",
+ ch_info_ev.vdev_id, ch_info_ev.err_code, ch_info_ev.freq,
+ ch_info_ev.cmd_flags, ch_info_ev.noise_floor,
+ ch_info_ev.rx_clear_count, ch_info_ev.cycle_count,
+ ch_info_ev.mac_clk_mhz);
+
+ if (le32_to_cpu(ch_info_ev.cmd_flags) == WMI_CHAN_INFO_END_RESP) {
+ ath12k_dbg(ab, ATH12K_DBG_WMI, "chan info report completed\n");
+ return;
+ }
+
+ rcu_read_lock();
+ ar = ath12k_mac_get_ar_by_vdev_id(ab, le32_to_cpu(ch_info_ev.vdev_id));
+ if (!ar) {
+ ath12k_warn(ab, "invalid vdev id in chan info ev %d",
+ ch_info_ev.vdev_id);
+ rcu_read_unlock();
+ return;
+ }
+ spin_lock_bh(&ar->data_lock);
+
+ switch (ar->scan.state) {
+ case ATH12K_SCAN_IDLE:
+ case ATH12K_SCAN_STARTING:
+ ath12k_warn(ab, "received chan info event without a scan request, ignoring\n");
+ goto exit;
+ case ATH12K_SCAN_RUNNING:
+ case ATH12K_SCAN_ABORTING:
+ break;
+ }
+
+ idx = freq_to_idx(ar, le32_to_cpu(ch_info_ev.freq));
+ if (idx >= ARRAY_SIZE(ar->survey)) {
+ ath12k_warn(ab, "chan info: invalid frequency %d (idx %d out of bounds)\n",
+ ch_info_ev.freq, idx);
+ goto exit;
+ }
+
+ /* If the FW provides the MAC clock frequency in MHz, override the
+ * initialized HW channel counters frequency value.
+ */
+ if (ch_info_ev.mac_clk_mhz)
+ cc_freq_hz = (le32_to_cpu(ch_info_ev.mac_clk_mhz) * 1000);
+
+ if (ch_info_ev.cmd_flags == WMI_CHAN_INFO_START_RESP) {
+ survey = &ar->survey[idx];
+ memset(survey, 0, sizeof(*survey));
+ survey->noise = le32_to_cpu(ch_info_ev.noise_floor);
+ survey->filled = SURVEY_INFO_NOISE_DBM | SURVEY_INFO_TIME |
+ SURVEY_INFO_TIME_BUSY;
+ survey->time = div_u64(le32_to_cpu(ch_info_ev.cycle_count), cc_freq_hz);
+ survey->time_busy = div_u64(le32_to_cpu(ch_info_ev.rx_clear_count),
+ cc_freq_hz);
+ }
+exit:
+ spin_unlock_bh(&ar->data_lock);
+ rcu_read_unlock();
+}
+
+static void
+ath12k_pdev_bss_chan_info_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ struct wmi_pdev_bss_chan_info_event bss_ch_info_ev = {};
+ struct survey_info *survey;
+ struct ath12k *ar;
+ u32 cc_freq_hz = ab->cc_freq_hz;
+ u64 busy, total, tx, rx, rx_bss;
+ int idx;
+
+ if (ath12k_pull_pdev_bss_chan_info_ev(ab, skb, &bss_ch_info_ev) != 0) {
+ ath12k_warn(ab, "failed to extract pdev bss chan info event");
+ return;
+ }
+
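+ /* Firmware reports each cycle counter split into 32-bit low/high
+ * halves; stitch them back into 64-bit values before deriving the
+ * survey times below.
+ */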
+ busy = (u64)(le32_to_cpu(bss_ch_info_ev.rx_clear_count_high)) << 32 |
+ le32_to_cpu(bss_ch_info_ev.rx_clear_count_low);
+
+ total = (u64)(le32_to_cpu(bss_ch_info_ev.cycle_count_high)) << 32 |
+ le32_to_cpu(bss_ch_info_ev.cycle_count_low);
+
+ tx = (u64)(le32_to_cpu(bss_ch_info_ev.tx_cycle_count_high)) << 32 |
+ le32_to_cpu(bss_ch_info_ev.tx_cycle_count_low);
+
+ rx = (u64)(le32_to_cpu(bss_ch_info_ev.rx_cycle_count_high)) << 32 |
+ le32_to_cpu(bss_ch_info_ev.rx_cycle_count_low);
+
+ rx_bss = (u64)(le32_to_cpu(bss_ch_info_ev.rx_bss_cycle_count_high)) << 32 |
+ le32_to_cpu(bss_ch_info_ev.rx_bss_cycle_count_low);
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "pdev bss chan info:\n pdev_id: %d freq: %d noise: %d cycle: busy %llu total %llu tx %llu rx %llu rx_bss %llu\n",
+ bss_ch_info_ev.pdev_id, bss_ch_info_ev.freq,
+ bss_ch_info_ev.noise_floor, busy, total,
+ tx, rx, rx_bss);
+
+ rcu_read_lock();
+ ar = ath12k_mac_get_ar_by_pdev_id(ab, le32_to_cpu(bss_ch_info_ev.pdev_id));
+
+ if (!ar) {
+ ath12k_warn(ab, "invalid pdev id %d in bss_chan_info event\n",
+ bss_ch_info_ev.pdev_id);
+ rcu_read_unlock();
+ return;
+ }
+
+ spin_lock_bh(&ar->data_lock);
+ idx = freq_to_idx(ar, le32_to_cpu(bss_ch_info_ev.freq));
+ if (idx >= ARRAY_SIZE(ar->survey)) {
+ ath12k_warn(ab, "bss chan info: invalid frequency %d (idx %d out of bounds)\n",
+ bss_ch_info_ev.freq, idx);
+ goto exit;
+ }
+
+ survey = &ar->survey[idx];
+
+ survey->noise = le32_to_cpu(bss_ch_info_ev.noise_floor);
+ survey->time = div_u64(total, cc_freq_hz);
+ survey->time_busy = div_u64(busy, cc_freq_hz);
+ survey->time_rx = div_u64(rx_bss, cc_freq_hz);
+ survey->time_tx = div_u64(tx, cc_freq_hz);
+ survey->filled |= (SURVEY_INFO_NOISE_DBM |
+ SURVEY_INFO_TIME |
+ SURVEY_INFO_TIME_BUSY |
+ SURVEY_INFO_TIME_RX |
+ SURVEY_INFO_TIME_TX);
+exit:
+ spin_unlock_bh(&ar->data_lock);
+ complete(&ar->bss_survey_done);
+
+ rcu_read_unlock();
+}
+
+static void ath12k_vdev_install_key_compl_event(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ struct wmi_vdev_install_key_complete_arg install_key_compl = {0};
+ struct ath12k *ar;
+
+ if (ath12k_pull_vdev_install_key_compl_ev(ab, skb, &install_key_compl) != 0) {
+ ath12k_warn(ab, "failed to extract install key compl event");
+ return;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "vdev install key ev idx %d flags %08x macaddr %pM status %d\n",
+ install_key_compl.key_idx, install_key_compl.key_flags,
+ install_key_compl.macaddr, install_key_compl.status);
+
+ rcu_read_lock();
+ ar = ath12k_mac_get_ar_by_vdev_id(ab, install_key_compl.vdev_id);
+ if (!ar) {
+ ath12k_warn(ab, "invalid vdev id in install key compl ev %d",
+ install_key_compl.vdev_id);
+ rcu_read_unlock();
+ return;
+ }
+
+ ar->install_key_status = 0;
+
+ if (install_key_compl.status != WMI_VDEV_INSTALL_KEY_COMPL_STATUS_SUCCESS) {
+ ath12k_warn(ab, "install key failed for %pM status %d\n",
+ install_key_compl.macaddr, install_key_compl.status);
+ ar->install_key_status = install_key_compl.status;
+ }
+
+ complete(&ar->install_key_done);
+ rcu_read_unlock();
+}
+
+static void ath12k_service_available_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ const void **tb;
+ const struct wmi_service_available_event *ev;
+ int ret;
+ int i, j;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return;
+ }
+
+ ev = tb[WMI_TAG_SERVICE_AVAILABLE_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch svc available ev");
+ kfree(tb);
+ return;
+ }
+
+ /* TODO: Use wmi_service_segment_offset information to get the service
+ * especially when more services are advertised in multiple service
+ * available events.
+ */
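+ /* Walk the segment bitmap: word i carries
+ * WMI_AVAIL_SERVICE_BITS_IN_SIZE32 flags, and bit (j % 32) of that
+ * word corresponds to extended service id j, counting up from
+ * WMI_MAX_SERVICE.
+ */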
+ for (i = 0, j = WMI_MAX_SERVICE;
+ i < WMI_SERVICE_SEGMENT_BM_SIZE32 && j < WMI_MAX_EXT_SERVICE;
+ i++) {
+ do {
+ if (le32_to_cpu(ev->wmi_service_segment_bitmap[i]) &
+ BIT(j % WMI_AVAIL_SERVICE_BITS_IN_SIZE32))
+ set_bit(j, ab->wmi_ab.svc_map);
+ } while (++j % WMI_AVAIL_SERVICE_BITS_IN_SIZE32);
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "wmi_ext_service_bitmap 0:0x%x, 1:0x%x, 2:0x%x, 3:0x%x",
+ ev->wmi_service_segment_bitmap[0], ev->wmi_service_segment_bitmap[1],
+ ev->wmi_service_segment_bitmap[2], ev->wmi_service_segment_bitmap[3]);
+
+ kfree(tb);
+}
+
+static void ath12k_peer_assoc_conf_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ struct wmi_peer_assoc_conf_arg peer_assoc_conf = {0};
+ struct ath12k *ar;
+
+ if (ath12k_pull_peer_assoc_conf_ev(ab, skb, &peer_assoc_conf) != 0) {
+ ath12k_warn(ab, "failed to extract peer assoc conf event");
+ return;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "peer assoc conf ev vdev id %d macaddr %pM\n",
+ peer_assoc_conf.vdev_id, peer_assoc_conf.macaddr);
+
+ rcu_read_lock();
+ ar = ath12k_mac_get_ar_by_vdev_id(ab, peer_assoc_conf.vdev_id);
+
+ if (!ar) {
+ ath12k_warn(ab, "invalid vdev id in peer assoc conf ev %d",
+ peer_assoc_conf.vdev_id);
+ rcu_read_unlock();
+ return;
+ }
+
+ complete(&ar->peer_assoc_done);
+ rcu_read_unlock();
+}
+
+static void ath12k_update_stats_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+}
+
+/* PDEV_CTL_FAILSAFE_CHECK_EVENT is received from FW when the frequency scanned
+ * is not part of the BDF CTL (Conformance Test Limits) table entries.
+ */
+static void ath12k_pdev_ctl_failsafe_check_event(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ const void **tb;
+ const struct wmi_pdev_ctl_failsafe_chk_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return;
+ }
+
+ ev = tb[WMI_TAG_PDEV_CTL_FAILSAFE_CHECK_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch pdev ctl failsafe check ev");
+ kfree(tb);
+ return;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "pdev ctl failsafe check ev status %d\n",
+ ev->ctl_failsafe_status);
+
+ /* If ctl_failsafe_status is set to 1, FW will max out the transmit
+ * power to 10 dBm; otherwise the CTL power entry in the BDF is
+ * picked up.
+ */
+ if (ev->ctl_failsafe_status != 0)
+ ath12k_warn(ab, "pdev ctl failsafe failure status %d",
+ ev->ctl_failsafe_status);
+
+ kfree(tb);
+}
+
+static void
+ath12k_wmi_process_csa_switch_count_event(struct ath12k_base *ab,
+ const struct ath12k_wmi_pdev_csa_event *ev,
+ const u32 *vdev_ids)
+{
+ int i;
+ struct ath12k_vif *arvif;
+
+ /* Finish CSA once the switch count reaches zero */
+ if (ev->current_switch_count)
+ return;
+
+ rcu_read_lock();
+ for (i = 0; i < le32_to_cpu(ev->num_vdevs); i++) {
+ arvif = ath12k_mac_get_arvif_by_vdev_id(ab, vdev_ids[i]);
+
+ if (!arvif) {
+ ath12k_warn(ab, "Recvd csa status for unknown vdev %d",
+ vdev_ids[i]);
+ continue;
+ }
+
+ if (arvif->is_up && arvif->vif->bss_conf.csa_active)
+ ieee80211_csa_finish(arvif->vif);
+ }
+ rcu_read_unlock();
+}
+
+static void
+ath12k_wmi_pdev_csa_switch_count_status_event(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ const void **tb;
+ const struct ath12k_wmi_pdev_csa_event *ev;
+ const u32 *vdev_ids;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return;
+ }
+
+ ev = tb[WMI_TAG_PDEV_CSA_SWITCH_COUNT_STATUS_EVENT];
+ vdev_ids = tb[WMI_TAG_ARRAY_UINT32];
+
+ if (!ev || !vdev_ids) {
+ ath12k_warn(ab, "failed to fetch pdev csa switch count ev");
+ kfree(tb);
+ return;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "pdev csa switch count %d for pdev %d, num_vdevs %d",
+ ev->current_switch_count, ev->pdev_id,
+ ev->num_vdevs);
+
+ ath12k_wmi_process_csa_switch_count_event(ab, ev, vdev_ids);
+
+ kfree(tb);
+}
+
+static void
+ath12k_wmi_pdev_dfs_radar_detected_event(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ const void **tb;
+ const struct ath12k_wmi_pdev_radar_event *ev;
+ struct ath12k *ar;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return;
+ }
+
+ ev = tb[WMI_TAG_PDEV_DFS_RADAR_DETECTION_EVENT];
+
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch pdev dfs radar detected ev");
+ kfree(tb);
+ return;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "pdev dfs radar detected on pdev %d, detection mode %d, chan freq %d, chan_width %d, detector id %d, seg id %d, timestamp %d, chirp %d, freq offset %d, sidx %d",
+ ev->pdev_id, ev->detection_mode, ev->chan_freq, ev->chan_width,
+ ev->detector_id, ev->segment_id, ev->timestamp, ev->is_chirp,
+ ev->freq_offset, ev->sidx);
+
+ ar = ath12k_mac_get_ar_by_pdev_id(ab, le32_to_cpu(ev->pdev_id));
+
+ if (!ar) {
+ ath12k_warn(ab, "radar detected in invalid pdev %d\n",
+ ev->pdev_id);
+ goto exit;
+ }
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_REG, "DFS Radar Detected in pdev %d\n",
+ ev->pdev_id);
+
+ if (ar->dfs_block_radar_events)
+ ath12k_info(ab, "DFS Radar detected, but ignored as requested\n");
+ else
+ ieee80211_radar_detected(ar->hw);
+
+exit:
+ kfree(tb);
+}
+
+static void
+ath12k_wmi_pdev_temperature_event(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ struct ath12k *ar;
+ struct wmi_pdev_temperature_event ev = {0};
+
+ if (ath12k_pull_pdev_temp_ev(ab, skb->data, skb->len, &ev) != 0) {
+ ath12k_warn(ab, "failed to extract pdev temperature event");
+ return;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "pdev temperature ev temp %d pdev_id %d\n", ev.temp, ev.pdev_id);
+
+ ar = ath12k_mac_get_ar_by_pdev_id(ab, le32_to_cpu(ev.pdev_id));
+ if (!ar) {
+ ath12k_warn(ab, "invalid pdev id in pdev temperature ev %d", ev.pdev_id);
+ return;
+ }
+}
+
+static void ath12k_fils_discovery_event(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ const void **tb;
+ const struct wmi_fils_discovery_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab,
+ "failed to parse FILS discovery event tlv %d\n",
+ ret);
+ return;
+ }
+
+ ev = tb[WMI_TAG_HOST_SWFDA_EVENT];
+ if (!ev) {
+ ath12k_warn(ab, "failed to fetch FILS discovery event\n");
+ kfree(tb);
+ return;
+ }
+
+ ath12k_warn(ab,
+ "FILS discovery frame expected from host for vdev_id: %u, transmission scheduled at %u, next TBTT: %u\n",
+ ev->vdev_id, ev->fils_tt, ev->tbtt);
+
+ kfree(tb);
+}
+
+static void ath12k_probe_resp_tx_status_event(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ const void **tb;
+ const struct wmi_probe_resp_tx_status_event *ev;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab,
+ "failed to parse probe response transmission status event tlv: %d\n",
+ ret);
+ return;
+ }
+
+ ev = tb[WMI_TAG_OFFLOAD_PRB_RSP_TX_STATUS_EVENT];
+ if (!ev) {
+ ath12k_warn(ab,
+ "failed to fetch probe response transmission status event");
+ kfree(tb);
+ return;
+ }
+
+ if (ev->tx_status)
+ ath12k_warn(ab,
+ "Probe response transmission failed for vdev_id %u, status %u\n",
+ ev->vdev_id, ev->tx_status);
+
+ kfree(tb);
+}
+
+static void ath12k_wmi_op_rx(struct ath12k_base *ab, struct sk_buff *skb)
+{
+ struct wmi_cmd_hdr *cmd_hdr;
+ enum wmi_tlv_event_id id;
+
+ cmd_hdr = (struct wmi_cmd_hdr *)skb->data;
+ id = le32_get_bits(cmd_hdr->cmd_id, WMI_CMD_HDR_CMD_ID);
+
+ if (!skb_pull(skb, sizeof(struct wmi_cmd_hdr)))
+ goto out;
+
+ switch (id) {
+ /* Process all the WMI events here */
+ case WMI_SERVICE_READY_EVENTID:
+ ath12k_service_ready_event(ab, skb);
+ break;
+ case WMI_SERVICE_READY_EXT_EVENTID:
+ ath12k_service_ready_ext_event(ab, skb);
+ break;
+ case WMI_SERVICE_READY_EXT2_EVENTID:
+ ath12k_service_ready_ext2_event(ab, skb);
+ break;
+ case WMI_REG_CHAN_LIST_CC_EXT_EVENTID:
+ ath12k_reg_chan_list_event(ab, skb);
+ break;
+ case WMI_READY_EVENTID:
+ ath12k_ready_event(ab, skb);
+ break;
+ case WMI_PEER_DELETE_RESP_EVENTID:
+ ath12k_peer_delete_resp_event(ab, skb);
+ break;
+ case WMI_VDEV_START_RESP_EVENTID:
+ ath12k_vdev_start_resp_event(ab, skb);
+ break;
+ case WMI_OFFLOAD_BCN_TX_STATUS_EVENTID:
+ ath12k_bcn_tx_status_event(ab, skb);
+ break;
+ case WMI_VDEV_STOPPED_EVENTID:
+ ath12k_vdev_stopped_event(ab, skb);
+ break;
+ case WMI_MGMT_RX_EVENTID:
+ ath12k_mgmt_rx_event(ab, skb);
+ /* mgmt_rx_event() owns the skb now! */
+ return;
+ case WMI_MGMT_TX_COMPLETION_EVENTID:
+ ath12k_mgmt_tx_compl_event(ab, skb);
+ break;
+ case WMI_SCAN_EVENTID:
+ ath12k_scan_event(ab, skb);
+ break;
+ case WMI_PEER_STA_KICKOUT_EVENTID:
+ ath12k_peer_sta_kickout_event(ab, skb);
+ break;
+ case WMI_ROAM_EVENTID:
+ ath12k_roam_event(ab, skb);
+ break;
+ case WMI_CHAN_INFO_EVENTID:
+ ath12k_chan_info_event(ab, skb);
+ break;
+ case WMI_PDEV_BSS_CHAN_INFO_EVENTID:
+ ath12k_pdev_bss_chan_info_event(ab, skb);
+ break;
+ case WMI_VDEV_INSTALL_KEY_COMPLETE_EVENTID:
+ ath12k_vdev_install_key_compl_event(ab, skb);
+ break;
+ case WMI_SERVICE_AVAILABLE_EVENTID:
+ ath12k_service_available_event(ab, skb);
+ break;
+ case WMI_PEER_ASSOC_CONF_EVENTID:
+ ath12k_peer_assoc_conf_event(ab, skb);
+ break;
+ case WMI_UPDATE_STATS_EVENTID:
+ ath12k_update_stats_event(ab, skb);
+ break;
+ case WMI_PDEV_CTL_FAILSAFE_CHECK_EVENTID:
+ ath12k_pdev_ctl_failsafe_check_event(ab, skb);
+ break;
+ case WMI_PDEV_CSA_SWITCH_COUNT_STATUS_EVENTID:
+ ath12k_wmi_pdev_csa_switch_count_status_event(ab, skb);
+ break;
+ case WMI_PDEV_TEMPERATURE_EVENTID:
+ ath12k_wmi_pdev_temperature_event(ab, skb);
+ break;
+ case WMI_PDEV_DMA_RING_BUF_RELEASE_EVENTID:
+ ath12k_wmi_pdev_dma_ring_buf_release_event(ab, skb);
+ break;
+ case WMI_HOST_FILS_DISCOVERY_EVENTID:
+ ath12k_fils_discovery_event(ab, skb);
+ break;
+ case WMI_OFFLOAD_PROB_RESP_TX_STATUS_EVENTID:
+ ath12k_probe_resp_tx_status_event(ab, skb);
+ break;
+ /* add Unsupported events here */
+ case WMI_TBTTOFFSET_EXT_UPDATE_EVENTID:
+ case WMI_PEER_OPER_MODE_CHANGE_EVENTID:
+ case WMI_TWT_ENABLE_EVENTID:
+ case WMI_TWT_DISABLE_EVENTID:
+ case WMI_PDEV_DMA_RING_CFG_RSP_EVENTID:
+ ath12k_dbg(ab, ATH12K_DBG_WMI,
+ "ignoring unsupported event 0x%x\n", id);
+ break;
+ case WMI_PDEV_DFS_RADAR_DETECTION_EVENTID:
+ ath12k_wmi_pdev_dfs_radar_detected_event(ab, skb);
+ break;
+ case WMI_VDEV_DELETE_RESP_EVENTID:
+ ath12k_vdev_delete_resp_event(ab, skb);
+ break;
+ /* TODO: Add remaining events */
+ default:
+ ath12k_dbg(ab, ATH12K_DBG_WMI, "Unknown eventid: 0x%x\n", id);
+ break;
+ }
+
+out:
+ dev_kfree_skb(skb);
+}
+
+static int ath12k_connect_pdev_htc_service(struct ath12k_base *ab,
+ u32 pdev_idx)
+{
+ int status;
+ u32 svc_id[] = { ATH12K_HTC_SVC_ID_WMI_CONTROL,
+ ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC1,
+ ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC2 };
+ struct ath12k_htc_svc_conn_req conn_req = {};
+ struct ath12k_htc_svc_conn_resp conn_resp = {};
+
+ /* these fields are the same for all service endpoints */
+ conn_req.ep_ops.ep_tx_complete = ath12k_wmi_htc_tx_complete;
+ conn_req.ep_ops.ep_rx_complete = ath12k_wmi_op_rx;
+ conn_req.ep_ops.ep_tx_credits = ath12k_wmi_op_ep_tx_credits;
+
+ /* connect to control service */
+ conn_req.service_id = svc_id[pdev_idx];
+
+ status = ath12k_htc_connect_service(&ab->htc, &conn_req, &conn_resp);
+ if (status) {
+ ath12k_warn(ab, "failed to connect to WMI CONTROL service status: %d\n",
+ status);
+ return status;
+ }
+
+ ab->wmi_ab.wmi_endpoint_id[pdev_idx] = conn_resp.eid;
+ ab->wmi_ab.wmi[pdev_idx].eid = conn_resp.eid;
+ ab->wmi_ab.max_msg_len[pdev_idx] = conn_resp.max_msg_len;
+
+ return 0;
+}
+
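+/* Build and send a WMI_UNIT_TEST_CMDID message: the fixed
+ * wmi_unit_test_cmd structure is followed by a WMI_TAG_ARRAY_UINT32 TLV
+ * carrying num_args test arguments.
+ */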
+static int
+ath12k_wmi_send_unit_test_cmd(struct ath12k *ar,
+ struct wmi_unit_test_cmd ut_cmd,
+ u32 *test_args)
+{
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_unit_test_cmd *cmd;
+ struct sk_buff *skb;
+ struct wmi_tlv *tlv;
+ void *ptr;
+ u32 *ut_cmd_args;
+ int buf_len, arg_len;
+ int ret;
+ int i;
+
+ arg_len = sizeof(u32) * le32_to_cpu(ut_cmd.num_args);
+ buf_len = sizeof(ut_cmd) + arg_len + TLV_HDR_SIZE;
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, buf_len);
+ if (!skb)
+ return -ENOMEM;
+
+ cmd = (struct wmi_unit_test_cmd *)skb->data;
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_UNIT_TEST_CMD,
+ sizeof(ut_cmd));
+
+ cmd->vdev_id = ut_cmd.vdev_id;
+ cmd->module_id = ut_cmd.module_id;
+ cmd->num_args = ut_cmd.num_args;
+ cmd->diag_token = ut_cmd.diag_token;
+
+ ptr = skb->data + sizeof(ut_cmd);
+
+ tlv = ptr;
+ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, arg_len);
+
+ ptr += TLV_HDR_SIZE;
+
+ ut_cmd_args = ptr;
+ for (i = 0; i < le32_to_cpu(ut_cmd.num_args); i++)
+ ut_cmd_args[i] = test_args[i];
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI unit test : module %d vdev %d n_args %d token %d\n",
+ cmd->module_id, cmd->vdev_id, cmd->num_args,
+ cmd->diag_token);
+
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_UNIT_TEST_CMDID);
+
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send WMI_UNIT_TEST CMD :%d\n",
+ ret);
+ dev_kfree_skb(skb);
+ }
+
+ return ret;
+}
+
+int ath12k_wmi_simulate_radar(struct ath12k *ar)
+{
+ struct ath12k_vif *arvif;
+ u32 dfs_args[DFS_MAX_TEST_ARGS];
+ struct wmi_unit_test_cmd wmi_ut;
+ bool arvif_found = false;
+
+ list_for_each_entry(arvif, &ar->arvifs, list) {
+ if (arvif->is_started && arvif->vdev_type == WMI_VDEV_TYPE_AP) {
+ arvif_found = true;
+ break;
+ }
+ }
+
+ if (!arvif_found)
+ return -EINVAL;
+
+ dfs_args[DFS_TEST_CMDID] = 0;
+ dfs_args[DFS_TEST_PDEV_ID] = ar->pdev->pdev_id;
+ /* Currently we could pass segment_id (b0 - b1), chirp (b2) and
+ * freq offset (b3 - b10) to the unit test. For simulation
+ * purposes this can be set to 0, which is valid.
+ */
+ dfs_args[DFS_TEST_RADAR_PARAM] = 0;
+
+ wmi_ut.vdev_id = cpu_to_le32(arvif->vdev_id);
+ wmi_ut.module_id = cpu_to_le32(DFS_UNIT_TEST_MODULE);
+ wmi_ut.num_args = cpu_to_le32(DFS_MAX_TEST_ARGS);
+ wmi_ut.diag_token = cpu_to_le32(DFS_UNIT_TEST_TOKEN);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_REG, "Triggering Radar Simulation\n");
+
+ return ath12k_wmi_send_unit_test_cmd(ar, wmi_ut, dfs_args);
+}
+
+int ath12k_wmi_connect(struct ath12k_base *ab)
+{
+ u32 i;
+ u8 wmi_ep_count;
+
+ wmi_ep_count = ab->htc.wmi_ep_count;
+ if (wmi_ep_count > ab->hw_params->max_radios)
+ return -EINVAL;
+
+ for (i = 0; i < wmi_ep_count; i++)
+ ath12k_connect_pdev_htc_service(ab, i);
+
+ return 0;
+}
+
+static void ath12k_wmi_pdev_detach(struct ath12k_base *ab, u8 pdev_id)
+{
+ if (WARN_ON(pdev_id >= MAX_RADIOS))
+ return;
+
+ /* TODO: Deinit any pdev specific wmi resource */
+}
+
+int ath12k_wmi_pdev_attach(struct ath12k_base *ab,
+ u8 pdev_id)
+{
+ struct ath12k_wmi_pdev *wmi_handle;
+
+ if (pdev_id >= ab->hw_params->max_radios)
+ return -EINVAL;
+
+ wmi_handle = &ab->wmi_ab.wmi[pdev_id];
+
+ wmi_handle->wmi_ab = &ab->wmi_ab;
+
+ ab->wmi_ab.ab = ab;
+ /* TODO: Init remaining resource specific to pdev */
+
+ return 0;
+}
+
+int ath12k_wmi_attach(struct ath12k_base *ab)
+{
+ int ret;
+
+ ret = ath12k_wmi_pdev_attach(ab, 0);
+ if (ret)
+ return ret;
+
+ ab->wmi_ab.ab = ab;
+ ab->wmi_ab.preferred_hw_mode = WMI_HOST_HW_MODE_MAX;
+
+ /* It's overwritten when service_ext_ready is handled */
+ if (ab->hw_params->single_pdev_only)
+ ab->wmi_ab.preferred_hw_mode = WMI_HOST_HW_MODE_SINGLE;
+
+ /* TODO: Init remaining wmi soc resources required */
+ init_completion(&ab->wmi_ab.service_ready);
+ init_completion(&ab->wmi_ab.unified_ready);
+
+ return 0;
+}
+
+void ath12k_wmi_detach(struct ath12k_base *ab)
+{
+ int i;
+
+ /* TODO: Deinit wmi resource specific to SOC as required */
+
+ for (i = 0; i < ab->htc.wmi_ep_count; i++)
+ ath12k_wmi_pdev_detach(ab, i);
+
+ ath12k_wmi_free_dbring_caps(ab);
+}
diff --git a/drivers/net/wireless/ath/ath12k/wmi.h b/drivers/net/wireless/ath/ath12k/wmi.h
new file mode 100644
index 000000000000..84e3fb918e43
--- /dev/null
+++ b/drivers/net/wireless/ath/ath12k/wmi.h
@@ -0,0 +1,4803 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH12K_WMI_H
+#define ATH12K_WMI_H
+
+#include <net/mac80211.h>
+#include "htc.h"
+
+/* Naming conventions for structures:
+ *
+ * _cmd means that this is a firmware command sent from host to firmware.
+ *
+ * _event means that this is a firmware event sent from firmware to host
+ *
+ * _params is a structure which is embedded either into _cmd or _event (or
+ * both), it is not sent individually.
+ *
+ * _arg is used inside the host, the firmware does not see that at all.
+ */
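+
+/* For example, for a hypothetical "foo" message (names illustrative
+ * only): struct wmi_foo_cmd would be sent to firmware,
+ * struct wmi_foo_event would be received from firmware,
+ * struct wmi_foo_params could be embedded in either of them, and
+ * struct wmi_foo_arg would never cross the host/firmware boundary.
+ */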
+
+struct ath12k_base;
+struct ath12k;
+
+/* There is no signed version of __le32, so as a temporary solution we
+ * come up with our own version. The idea is from fs/ntfs/endian.h.
+ *
+ * Use an a_ prefix so that it doesn't conflict if proper support is
+ * ever added to linux/types.h.
+ */
+typedef __s32 __bitwise a_sle32;
+
+static inline a_sle32 a_cpu_to_sle32(s32 val)
+{
+ return (__force a_sle32)cpu_to_le32(val);
+}
+
+static inline s32 a_sle32_to_cpu(a_sle32 val)
+{
+ return le32_to_cpu((__force __le32)val);
+}
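+
+/* Usage sketch (hypothetical values): a noise floor of -96 dBm would be
+ * encoded as a_cpu_to_sle32(-96) on the wire and recovered with
+ * a_sle32_to_cpu() on reception.
+ */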
+
+/* defines to set Packet extension values, which can be 0, 8 or 16 usec */
+#define MAX_HE_NSS 8
+#define MAX_HE_MODULATION 8
+#define MAX_HE_RU 4
+#define HE_MODULATION_NONE 7
+#define HE_PET_0_USEC 0
+#define HE_PET_8_USEC 1
+#define HE_PET_16_USEC 2
+
+#define WMI_MAX_CHAINS 8
+
+#define WMI_MAX_NUM_SS MAX_HE_NSS
+#define WMI_MAX_NUM_RU MAX_HE_RU
+
+#define WMI_TLV_CMD(grp_id) (((grp_id) << 12) | 0x1)
+#define WMI_TLV_EV(grp_id) (((grp_id) << 12) | 0x1)
+#define WMI_TLV_CMD_UNSUPPORTED 0
+#define WMI_TLV_PDEV_PARAM_UNSUPPORTED 0
+#define WMI_TLV_VDEV_PARAM_UNSUPPORTED 0
+
+struct wmi_cmd_hdr {
+ __le32 cmd_id;
+} __packed;
+
+struct wmi_tlv {
+ __le32 header;
+ u8 value[];
+} __packed;
+
+#define WMI_TLV_LEN GENMASK(15, 0)
+#define WMI_TLV_TAG GENMASK(31, 16)
+#define TLV_HDR_SIZE sizeof_field(struct wmi_tlv, header)
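+
+/* A TLV header packs the value length into bits 0-15 and the tag into
+ * bits 16-31, so decoding one header is a sketch along the lines of:
+ *
+ * u32 tag = le32_get_bits(tlv->header, WMI_TLV_TAG);
+ * u32 len = le32_get_bits(tlv->header, WMI_TLV_LEN);
+ */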
+
+#define WMI_CMD_HDR_CMD_ID GENMASK(23, 0)
+#define WMI_MAX_MEM_REQS 32
+#define ATH12K_MAX_HW_LISTEN_INTERVAL 5
+
+#define WMI_HOST_RC_DS_FLAG 0x01
+#define WMI_HOST_RC_CW40_FLAG 0x02
+#define WMI_HOST_RC_SGI_FLAG 0x04
+#define WMI_HOST_RC_HT_FLAG 0x08
+#define WMI_HOST_RC_RTSCTS_FLAG 0x10
+#define WMI_HOST_RC_TX_STBC_FLAG 0x20
+#define WMI_HOST_RC_RX_STBC_FLAG 0xC0
+#define WMI_HOST_RC_RX_STBC_FLAG_S 6
+#define WMI_HOST_RC_WEP_TKIP_FLAG 0x100
+#define WMI_HOST_RC_TS_FLAG 0x200
+#define WMI_HOST_RC_UAPSD_FLAG 0x400
+
+#define WMI_HT_CAP_ENABLED 0x0001
+#define WMI_HT_CAP_HT20_SGI 0x0002
+#define WMI_HT_CAP_DYNAMIC_SMPS 0x0004
+#define WMI_HT_CAP_TX_STBC 0x0008
+#define WMI_HT_CAP_TX_STBC_MASK_SHIFT 3
+#define WMI_HT_CAP_RX_STBC 0x0030
+#define WMI_HT_CAP_RX_STBC_MASK_SHIFT 4
+#define WMI_HT_CAP_LDPC 0x0040
+#define WMI_HT_CAP_L_SIG_TXOP_PROT 0x0080
+#define WMI_HT_CAP_MPDU_DENSITY 0x0700
+#define WMI_HT_CAP_MPDU_DENSITY_MASK_SHIFT 8
+#define WMI_HT_CAP_HT40_SGI 0x0800
+#define WMI_HT_CAP_RX_LDPC 0x1000
+#define WMI_HT_CAP_TX_LDPC 0x2000
+#define WMI_HT_CAP_IBF_BFER 0x4000
+
+/* These macros should be used when we wish to advertise STBC support for
+ * only 1SS or 2SS or 3SS.
+ */
+#define WMI_HT_CAP_RX_STBC_1SS 0x0010
+#define WMI_HT_CAP_RX_STBC_2SS 0x0020
+#define WMI_HT_CAP_RX_STBC_3SS 0x0030
+
+#define WMI_HT_CAP_DEFAULT_ALL (WMI_HT_CAP_ENABLED | \
+ WMI_HT_CAP_HT20_SGI | \
+ WMI_HT_CAP_HT40_SGI | \
+ WMI_HT_CAP_TX_STBC | \
+ WMI_HT_CAP_RX_STBC | \
+ WMI_HT_CAP_LDPC)
+
+#define WMI_VHT_CAP_MAX_MPDU_LEN_MASK 0x00000003
+#define WMI_VHT_CAP_RX_LDPC 0x00000010
+#define WMI_VHT_CAP_SGI_80MHZ 0x00000020
+#define WMI_VHT_CAP_SGI_160MHZ 0x00000040
+#define WMI_VHT_CAP_TX_STBC 0x00000080
+#define WMI_VHT_CAP_RX_STBC_MASK 0x00000300
+#define WMI_VHT_CAP_RX_STBC_MASK_SHIFT 8
+#define WMI_VHT_CAP_SU_BFER 0x00000800
+#define WMI_VHT_CAP_SU_BFEE 0x00001000
+#define WMI_VHT_CAP_MAX_CS_ANT_MASK 0x0000E000
+#define WMI_VHT_CAP_MAX_CS_ANT_MASK_SHIFT 13
+#define WMI_VHT_CAP_MAX_SND_DIM_MASK 0x00070000
+#define WMI_VHT_CAP_MAX_SND_DIM_MASK_SHIFT 16
+#define WMI_VHT_CAP_MU_BFER 0x00080000
+#define WMI_VHT_CAP_MU_BFEE 0x00100000
+#define WMI_VHT_CAP_MAX_AMPDU_LEN_EXP 0x03800000
+#define WMI_VHT_CAP_MAX_AMPDU_LEN_EXP_SHIT 23
+#define WMI_VHT_CAP_RX_FIXED_ANT 0x10000000
+#define WMI_VHT_CAP_TX_FIXED_ANT 0x20000000
+
+#define WMI_VHT_CAP_MAX_MPDU_LEN_11454 0x00000002
+
+/* These macros should be used when we wish to advertise STBC support for
+ * only 1SS or 2SS or 3SS.
+ */
+#define WMI_VHT_CAP_RX_STBC_1SS 0x00000100
+#define WMI_VHT_CAP_RX_STBC_2SS 0x00000200
+#define WMI_VHT_CAP_RX_STBC_3SS 0x00000300
+
+#define WMI_VHT_CAP_DEFAULT_ALL (WMI_VHT_CAP_MAX_MPDU_LEN_11454 | \
+ WMI_VHT_CAP_SGI_80MHZ | \
+ WMI_VHT_CAP_TX_STBC | \
+ WMI_VHT_CAP_RX_STBC_MASK | \
+ WMI_VHT_CAP_RX_LDPC | \
+ WMI_VHT_CAP_MAX_AMPDU_LEN_EXP | \
+ WMI_VHT_CAP_RX_FIXED_ANT | \
+ WMI_VHT_CAP_TX_FIXED_ANT)
+
+#define WLAN_SCAN_MAX_HINT_S_SSID 10
+#define WLAN_SCAN_MAX_HINT_BSSID 10
+#define MAX_RNR_BSS 5
+
+#define WLAN_SCAN_PARAMS_MAX_SSID 16
+#define WLAN_SCAN_PARAMS_MAX_BSSID 4
+#define WLAN_SCAN_PARAMS_MAX_IE_LEN 256
+
+#define WMI_APPEND_TO_EXISTING_CHAN_LIST_FLAG 1
+
+#define WMI_BA_MODE_BUFFER_SIZE_256 3
+
+/* HW mode config type replicated from FW header
+ * @WMI_HOST_HW_MODE_SINGLE: Only one PHY is active.
+ * @WMI_HOST_HW_MODE_DBS: Both PHYs are active in different bands,
+ * one in 2G and another in 5G.
+ * @WMI_HOST_HW_MODE_SBS_PASSIVE: Both PHYs are in passive mode (only rx) in
+ * same band; no tx allowed.
+ * @WMI_HOST_HW_MODE_SBS: Both PHYs are active in the same band.
+ * Support for both PHYs within one band is planned
+ * for 5G only (as indicated in WMI_MAC_PHY_CAPABILITIES),
+ * but could be extended to other bands in the future.
+ * The separation of the band between the two PHYs needs
+ * to be communicated separately.
+ * @WMI_HOST_HW_MODE_DBS_SBS: 3 PHYs, with 2 on the same band doing SBS
+ * as in WMI_HW_MODE_SBS, and the 3rd on the other band.
+ * @WMI_HOST_HW_MODE_DBS_OR_SBS: Two PHYs, with one PHY capable of both 2G and
+ * 5G. It can support SBS (5G + 5G) OR DBS (5G + 2G).
+ * @WMI_HOST_HW_MODE_MAX: Max hw_mode_id. Used to indicate invalid mode.
+ */
+enum wmi_host_hw_mode_config_type {
+ WMI_HOST_HW_MODE_SINGLE = 0,
+ WMI_HOST_HW_MODE_DBS = 1,
+ WMI_HOST_HW_MODE_SBS_PASSIVE = 2,
+ WMI_HOST_HW_MODE_SBS = 3,
+ WMI_HOST_HW_MODE_DBS_SBS = 4,
+ WMI_HOST_HW_MODE_DBS_OR_SBS = 5,
+
+ /* keep last */
+ WMI_HOST_HW_MODE_MAX
+};
+
+/* HW mode priority values used to detect the preferred HW mode
+ * on the available modes.
+ */
+enum wmi_host_hw_mode_priority {
+ WMI_HOST_HW_MODE_DBS_SBS_PRI,
+ WMI_HOST_HW_MODE_DBS_PRI,
+ WMI_HOST_HW_MODE_DBS_OR_SBS_PRI,
+ WMI_HOST_HW_MODE_SBS_PRI,
+ WMI_HOST_HW_MODE_SBS_PASSIVE_PRI,
+ WMI_HOST_HW_MODE_SINGLE_PRI,
+
+ /* keep last the lowest priority */
+ WMI_HOST_HW_MODE_MAX_PRI
+};
+
+enum WMI_HOST_WLAN_BAND {
+ WMI_HOST_WLAN_2G_CAP = 1,
+ WMI_HOST_WLAN_5G_CAP = 2,
+ WMI_HOST_WLAN_2G_5G_CAP = 3,
+};
+
+enum wmi_cmd_group {
+ /* 0 to 2 are reserved */
+ WMI_GRP_START = 0x3,
+ WMI_GRP_SCAN = WMI_GRP_START,
+ WMI_GRP_PDEV = 0x4,
+ WMI_GRP_VDEV = 0x5,
+ WMI_GRP_PEER = 0x6,
+ WMI_GRP_MGMT = 0x7,
+ WMI_GRP_BA_NEG = 0x8,
+ WMI_GRP_STA_PS = 0x9,
+ WMI_GRP_DFS = 0xa,
+ WMI_GRP_ROAM = 0xb,
+ WMI_GRP_OFL_SCAN = 0xc,
+ WMI_GRP_P2P = 0xd,
+ WMI_GRP_AP_PS = 0xe,
+ WMI_GRP_RATE_CTRL = 0xf,
+ WMI_GRP_PROFILE = 0x10,
+ WMI_GRP_SUSPEND = 0x11,
+ WMI_GRP_BCN_FILTER = 0x12,
+ WMI_GRP_WOW = 0x13,
+ WMI_GRP_RTT = 0x14,
+ WMI_GRP_SPECTRAL = 0x15,
+ WMI_GRP_STATS = 0x16,
+ WMI_GRP_ARP_NS_OFL = 0x17,
+ WMI_GRP_NLO_OFL = 0x18,
+ WMI_GRP_GTK_OFL = 0x19,
+ WMI_GRP_CSA_OFL = 0x1a,
+ WMI_GRP_CHATTER = 0x1b,
+ WMI_GRP_TID_ADDBA = 0x1c,
+ WMI_GRP_MISC = 0x1d,
+ WMI_GRP_GPIO = 0x1e,
+ WMI_GRP_FWTEST = 0x1f,
+ WMI_GRP_TDLS = 0x20,
+ WMI_GRP_RESMGR = 0x21,
+ WMI_GRP_STA_SMPS = 0x22,
+ WMI_GRP_WLAN_HB = 0x23,
+ WMI_GRP_RMC = 0x24,
+ WMI_GRP_MHF_OFL = 0x25,
+ WMI_GRP_LOCATION_SCAN = 0x26,
+ WMI_GRP_OEM = 0x27,
+ WMI_GRP_NAN = 0x28,
+ WMI_GRP_COEX = 0x29,
+ WMI_GRP_OBSS_OFL = 0x2a,
+ WMI_GRP_LPI = 0x2b,
+ WMI_GRP_EXTSCAN = 0x2c,
+ WMI_GRP_DHCP_OFL = 0x2d,
+ WMI_GRP_IPA = 0x2e,
+ WMI_GRP_MDNS_OFL = 0x2f,
+ WMI_GRP_SAP_OFL = 0x30,
+ WMI_GRP_OCB = 0x31,
+ WMI_GRP_SOC = 0x32,
+ WMI_GRP_PKT_FILTER = 0x33,
+ WMI_GRP_MAWC = 0x34,
+ WMI_GRP_PMF_OFFLOAD = 0x35,
+ WMI_GRP_BPF_OFFLOAD = 0x36,
+ WMI_GRP_NAN_DATA = 0x37,
+ WMI_GRP_PROTOTYPE = 0x38,
+ WMI_GRP_MONITOR = 0x39,
+ WMI_GRP_REGULATORY = 0x3a,
+ WMI_GRP_HW_DATA_FILTER = 0x3b,
+ WMI_GRP_WLM = 0x3c,
+ WMI_GRP_11K_OFFLOAD = 0x3d,
+ WMI_GRP_TWT = 0x3e,
+ WMI_GRP_MOTION_DET = 0x3f,
+ WMI_GRP_SPATIAL_REUSE = 0x40,
+};
+
+#define WMI_CMD_GRP(grp_id) (((grp_id) << 12) | 0x1)
+#define WMI_EVT_GRP_START_ID(grp_id) (((grp_id) << 12) | 0x1)
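+
+/* Worked example: WMI_GRP_SCAN is 0x3, so WMI_TLV_CMD(WMI_GRP_SCAN)
+ * evaluates to (0x3 << 12) | 0x1 = 0x3001. WMI_START_SCAN_CMDID below
+ * therefore starts at 0x3001 and the subsequent scan commands simply
+ * increment from there.
+ */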
+
+enum wmi_tlv_cmd_id {
+ WMI_CMD_UNSUPPORTED = 0,
+ WMI_INIT_CMDID = 0x1,
+ WMI_START_SCAN_CMDID = WMI_TLV_CMD(WMI_GRP_SCAN),
+ WMI_STOP_SCAN_CMDID,
+ WMI_SCAN_CHAN_LIST_CMDID,
+ WMI_SCAN_SCH_PRIO_TBL_CMDID,
+ WMI_SCAN_UPDATE_REQUEST_CMDID,
+ WMI_SCAN_PROB_REQ_OUI_CMDID,
+ WMI_SCAN_ADAPTIVE_DWELL_CONFIG_CMDID,
+ WMI_PDEV_SET_REGDOMAIN_CMDID = WMI_TLV_CMD(WMI_GRP_PDEV),
+ WMI_PDEV_SET_CHANNEL_CMDID,
+ WMI_PDEV_SET_PARAM_CMDID,
+ WMI_PDEV_PKTLOG_ENABLE_CMDID,
+ WMI_PDEV_PKTLOG_DISABLE_CMDID,
+ WMI_PDEV_SET_WMM_PARAMS_CMDID,
+ WMI_PDEV_SET_HT_CAP_IE_CMDID,
+ WMI_PDEV_SET_VHT_CAP_IE_CMDID,
+ WMI_PDEV_SET_DSCP_TID_MAP_CMDID,
+ WMI_PDEV_SET_QUIET_MODE_CMDID,
+ WMI_PDEV_GREEN_AP_PS_ENABLE_CMDID,
+ WMI_PDEV_GET_TPC_CONFIG_CMDID,
+ WMI_PDEV_SET_BASE_MACADDR_CMDID,
+ WMI_PDEV_DUMP_CMDID,
+ WMI_PDEV_SET_LED_CONFIG_CMDID,
+ WMI_PDEV_GET_TEMPERATURE_CMDID,
+ WMI_PDEV_SET_LED_FLASHING_CMDID,
+ WMI_PDEV_SMART_ANT_ENABLE_CMDID,
+ WMI_PDEV_SMART_ANT_SET_RX_ANTENNA_CMDID,
+ WMI_PDEV_SET_ANTENNA_SWITCH_TABLE_CMDID,
+ WMI_PDEV_SET_CTL_TABLE_CMDID,
+ WMI_PDEV_SET_MIMOGAIN_TABLE_CMDID,
+ WMI_PDEV_FIPS_CMDID,
+ WMI_PDEV_GET_ANI_CCK_CONFIG_CMDID,
+ WMI_PDEV_GET_ANI_OFDM_CONFIG_CMDID,
+ WMI_PDEV_GET_NFCAL_POWER_CMDID,
+ WMI_PDEV_GET_TPC_CMDID,
+ WMI_MIB_STATS_ENABLE_CMDID,
+ WMI_PDEV_SET_PCL_CMDID,
+ WMI_PDEV_SET_HW_MODE_CMDID,
+ WMI_PDEV_SET_MAC_CONFIG_CMDID,
+ WMI_PDEV_SET_ANTENNA_MODE_CMDID,
+ WMI_SET_PERIODIC_CHANNEL_STATS_CONFIG_CMDID,
+ WMI_PDEV_WAL_POWER_DEBUG_CMDID,
+ WMI_PDEV_SET_REORDER_TIMEOUT_VAL_CMDID,
+ WMI_PDEV_SET_WAKEUP_CONFIG_CMDID,
+ WMI_PDEV_GET_ANTDIV_STATUS_CMDID,
+ WMI_PDEV_GET_CHIP_POWER_STATS_CMDID,
+ WMI_PDEV_SET_STATS_THRESHOLD_CMDID,
+ WMI_PDEV_MULTIPLE_VDEV_RESTART_REQUEST_CMDID,
+ WMI_PDEV_UPDATE_PKT_ROUTING_CMDID,
+ WMI_PDEV_CHECK_CAL_VERSION_CMDID,
+ WMI_PDEV_SET_DIVERSITY_GAIN_CMDID,
+ WMI_PDEV_DIV_GET_RSSI_ANTID_CMDID,
+ WMI_PDEV_BSS_CHAN_INFO_REQUEST_CMDID,
+ WMI_PDEV_UPDATE_PMK_CACHE_CMDID,
+ WMI_PDEV_UPDATE_FILS_HLP_PKT_CMDID,
+ WMI_PDEV_UPDATE_CTLTABLE_REQUEST_CMDID,
+ WMI_PDEV_CONFIG_VENDOR_OUI_ACTION_CMDID,
+ WMI_PDEV_SET_AC_TX_QUEUE_OPTIMIZED_CMDID,
+ WMI_PDEV_SET_RX_FILTER_PROMISCUOUS_CMDID,
+ WMI_PDEV_DMA_RING_CFG_REQ_CMDID,
+ WMI_PDEV_HE_TB_ACTION_FRM_CMDID,
+ WMI_PDEV_PKTLOG_FILTER_CMDID,
+ WMI_VDEV_CREATE_CMDID = WMI_TLV_CMD(WMI_GRP_VDEV),
+ WMI_VDEV_DELETE_CMDID,
+ WMI_VDEV_START_REQUEST_CMDID,
+ WMI_VDEV_RESTART_REQUEST_CMDID,
+ WMI_VDEV_UP_CMDID,
+ WMI_VDEV_STOP_CMDID,
+ WMI_VDEV_DOWN_CMDID,
+ WMI_VDEV_SET_PARAM_CMDID,
+ WMI_VDEV_INSTALL_KEY_CMDID,
+ WMI_VDEV_WNM_SLEEPMODE_CMDID,
+ WMI_VDEV_WMM_ADDTS_CMDID,
+ WMI_VDEV_WMM_DELTS_CMDID,
+ WMI_VDEV_SET_WMM_PARAMS_CMDID,
+ WMI_VDEV_SET_GTX_PARAMS_CMDID,
+ WMI_VDEV_IPSEC_NATKEEPALIVE_FILTER_CMDID,
+ WMI_VDEV_PLMREQ_START_CMDID,
+ WMI_VDEV_PLMREQ_STOP_CMDID,
+ WMI_VDEV_TSF_TSTAMP_ACTION_CMDID,
+ WMI_VDEV_SET_IE_CMDID,
+ WMI_VDEV_RATEMASK_CMDID,
+ WMI_VDEV_ATF_REQUEST_CMDID,
+ WMI_VDEV_SET_DSCP_TID_MAP_CMDID,
+ WMI_VDEV_FILTER_NEIGHBOR_RX_PACKETS_CMDID,
+ WMI_VDEV_SET_QUIET_MODE_CMDID,
+ WMI_VDEV_SET_CUSTOM_AGGR_SIZE_CMDID,
+ WMI_VDEV_ENCRYPT_DECRYPT_DATA_REQ_CMDID,
+ WMI_VDEV_ADD_MAC_ADDR_TO_RX_FILTER_CMDID,
+ WMI_PEER_CREATE_CMDID = WMI_TLV_CMD(WMI_GRP_PEER),
+ WMI_PEER_DELETE_CMDID,
+ WMI_PEER_FLUSH_TIDS_CMDID,
+ WMI_PEER_SET_PARAM_CMDID,
+ WMI_PEER_ASSOC_CMDID,
+ WMI_PEER_ADD_WDS_ENTRY_CMDID,
+ WMI_PEER_REMOVE_WDS_ENTRY_CMDID,
+ WMI_PEER_MCAST_GROUP_CMDID,
+ WMI_PEER_INFO_REQ_CMDID,
+ WMI_PEER_GET_ESTIMATED_LINKSPEED_CMDID,
+ WMI_PEER_SET_RATE_REPORT_CONDITION_CMDID,
+ WMI_PEER_UPDATE_WDS_ENTRY_CMDID,
+ WMI_PEER_ADD_PROXY_STA_ENTRY_CMDID,
+ WMI_PEER_SMART_ANT_SET_TX_ANTENNA_CMDID,
+ WMI_PEER_SMART_ANT_SET_TRAIN_INFO_CMDID,
+ WMI_PEER_SMART_ANT_SET_NODE_CONFIG_OPS_CMDID,
+ WMI_PEER_ATF_REQUEST_CMDID,
+ WMI_PEER_BWF_REQUEST_CMDID,
+ WMI_PEER_REORDER_QUEUE_SETUP_CMDID,
+ WMI_PEER_REORDER_QUEUE_REMOVE_CMDID,
+ WMI_PEER_SET_RX_BLOCKSIZE_CMDID,
+ WMI_PEER_ANTDIV_INFO_REQ_CMDID,
+ WMI_BCN_TX_CMDID = WMI_TLV_CMD(WMI_GRP_MGMT),
+ WMI_PDEV_SEND_BCN_CMDID,
+ WMI_BCN_TMPL_CMDID,
+ WMI_BCN_FILTER_RX_CMDID,
+ WMI_PRB_REQ_FILTER_RX_CMDID,
+ WMI_MGMT_TX_CMDID,
+ WMI_PRB_TMPL_CMDID,
+ WMI_MGMT_TX_SEND_CMDID,
+ WMI_OFFCHAN_DATA_TX_SEND_CMDID,
+ WMI_PDEV_SEND_FD_CMDID,
+ WMI_BCN_OFFLOAD_CTRL_CMDID,
+ WMI_BSS_COLOR_CHANGE_ENABLE_CMDID,
+ WMI_VDEV_BCN_OFFLOAD_QUIET_CONFIG_CMDID,
+ WMI_FILS_DISCOVERY_TMPL_CMDID,
+ WMI_ADDBA_CLEAR_RESP_CMDID = WMI_TLV_CMD(WMI_GRP_BA_NEG),
+ WMI_ADDBA_SEND_CMDID,
+ WMI_ADDBA_STATUS_CMDID,
+ WMI_DELBA_SEND_CMDID,
+ WMI_ADDBA_SET_RESP_CMDID,
+ WMI_SEND_SINGLEAMSDU_CMDID,
+ WMI_STA_POWERSAVE_MODE_CMDID = WMI_TLV_CMD(WMI_GRP_STA_PS),
+ WMI_STA_POWERSAVE_PARAM_CMDID,
+ WMI_STA_MIMO_PS_MODE_CMDID,
+ WMI_PDEV_DFS_ENABLE_CMDID = WMI_TLV_CMD(WMI_GRP_DFS),
+ WMI_PDEV_DFS_DISABLE_CMDID,
+ WMI_DFS_PHYERR_FILTER_ENA_CMDID,
+ WMI_DFS_PHYERR_FILTER_DIS_CMDID,
+ WMI_PDEV_DFS_PHYERR_OFFLOAD_ENABLE_CMDID,
+ WMI_PDEV_DFS_PHYERR_OFFLOAD_DISABLE_CMDID,
+ WMI_VDEV_ADFS_CH_CFG_CMDID,
+ WMI_VDEV_ADFS_OCAC_ABORT_CMDID,
+ WMI_ROAM_SCAN_MODE = WMI_TLV_CMD(WMI_GRP_ROAM),
+ WMI_ROAM_SCAN_RSSI_THRESHOLD,
+ WMI_ROAM_SCAN_PERIOD,
+ WMI_ROAM_SCAN_RSSI_CHANGE_THRESHOLD,
+ WMI_ROAM_AP_PROFILE,
+ WMI_ROAM_CHAN_LIST,
+ WMI_ROAM_SCAN_CMD,
+ WMI_ROAM_SYNCH_COMPLETE,
+ WMI_ROAM_SET_RIC_REQUEST_CMDID,
+ WMI_ROAM_INVOKE_CMDID,
+ WMI_ROAM_FILTER_CMDID,
+ WMI_ROAM_SUBNET_CHANGE_CONFIG_CMDID,
+ WMI_ROAM_CONFIGURE_MAWC_CMDID,
+ WMI_ROAM_SET_MBO_PARAM_CMDID,
+ WMI_ROAM_PER_CONFIG_CMDID,
+ WMI_ROAM_BTM_CONFIG_CMDID,
+ WMI_ENABLE_FILS_CMDID,
+ WMI_OFL_SCAN_ADD_AP_PROFILE = WMI_TLV_CMD(WMI_GRP_OFL_SCAN),
+ WMI_OFL_SCAN_REMOVE_AP_PROFILE,
+ WMI_OFL_SCAN_PERIOD,
+ WMI_P2P_DEV_SET_DEVICE_INFO = WMI_TLV_CMD(WMI_GRP_P2P),
+ WMI_P2P_DEV_SET_DISCOVERABILITY,
+ WMI_P2P_GO_SET_BEACON_IE,
+ WMI_P2P_GO_SET_PROBE_RESP_IE,
+ WMI_P2P_SET_VENDOR_IE_DATA_CMDID,
+ WMI_P2P_DISC_OFFLOAD_CONFIG_CMDID,
+ WMI_P2P_DISC_OFFLOAD_APPIE_CMDID,
+ WMI_P2P_DISC_OFFLOAD_PATTERN_CMDID,
+ WMI_P2P_SET_OPPPS_PARAM_CMDID,
+ WMI_P2P_LISTEN_OFFLOAD_START_CMDID,
+ WMI_P2P_LISTEN_OFFLOAD_STOP_CMDID,
+ WMI_AP_PS_PEER_PARAM_CMDID = WMI_TLV_CMD(WMI_GRP_AP_PS),
+ WMI_AP_PS_PEER_UAPSD_COEX_CMDID,
+ WMI_AP_PS_EGAP_PARAM_CMDID,
+ WMI_PEER_RATE_RETRY_SCHED_CMDID = WMI_TLV_CMD(WMI_GRP_RATE_CTRL),
+ WMI_WLAN_PROFILE_TRIGGER_CMDID = WMI_TLV_CMD(WMI_GRP_PROFILE),
+ WMI_WLAN_PROFILE_SET_HIST_INTVL_CMDID,
+ WMI_WLAN_PROFILE_GET_PROFILE_DATA_CMDID,
+ WMI_WLAN_PROFILE_ENABLE_PROFILE_ID_CMDID,
+ WMI_WLAN_PROFILE_LIST_PROFILE_ID_CMDID,
+ WMI_PDEV_SUSPEND_CMDID = WMI_TLV_CMD(WMI_GRP_SUSPEND),
+ WMI_PDEV_RESUME_CMDID,
+ WMI_ADD_BCN_FILTER_CMDID = WMI_TLV_CMD(WMI_GRP_BCN_FILTER),
+ WMI_RMV_BCN_FILTER_CMDID,
+ WMI_WOW_ADD_WAKE_PATTERN_CMDID = WMI_TLV_CMD(WMI_GRP_WOW),
+ WMI_WOW_DEL_WAKE_PATTERN_CMDID,
+ WMI_WOW_ENABLE_DISABLE_WAKE_EVENT_CMDID,
+ WMI_WOW_ENABLE_CMDID,
+ WMI_WOW_HOSTWAKEUP_FROM_SLEEP_CMDID,
+ WMI_WOW_IOAC_ADD_KEEPALIVE_CMDID,
+ WMI_WOW_IOAC_DEL_KEEPALIVE_CMDID,
+ WMI_WOW_IOAC_ADD_WAKE_PATTERN_CMDID,
+ WMI_WOW_IOAC_DEL_WAKE_PATTERN_CMDID,
+ WMI_D0_WOW_ENABLE_DISABLE_CMDID,
+ WMI_EXTWOW_ENABLE_CMDID,
+ WMI_EXTWOW_SET_APP_TYPE1_PARAMS_CMDID,
+ WMI_EXTWOW_SET_APP_TYPE2_PARAMS_CMDID,
+ WMI_WOW_ENABLE_ICMPV6_NA_FLT_CMDID,
+ WMI_WOW_UDP_SVC_OFLD_CMDID,
+ WMI_WOW_HOSTWAKEUP_GPIO_PIN_PATTERN_CONFIG_CMDID,
+ WMI_WOW_SET_ACTION_WAKE_UP_CMDID,
+ WMI_RTT_MEASREQ_CMDID = WMI_TLV_CMD(WMI_GRP_RTT),
+ WMI_RTT_TSF_CMDID,
+ WMI_VDEV_SPECTRAL_SCAN_CONFIGURE_CMDID = WMI_TLV_CMD(WMI_GRP_SPECTRAL),
+ WMI_VDEV_SPECTRAL_SCAN_ENABLE_CMDID,
+ WMI_REQUEST_STATS_CMDID = WMI_TLV_CMD(WMI_GRP_STATS),
+ WMI_MCC_SCHED_TRAFFIC_STATS_CMDID,
+ WMI_REQUEST_STATS_EXT_CMDID,
+ WMI_REQUEST_LINK_STATS_CMDID,
+ WMI_START_LINK_STATS_CMDID,
+ WMI_CLEAR_LINK_STATS_CMDID,
+ WMI_GET_FW_MEM_DUMP_CMDID,
+ WMI_DEBUG_MESG_FLUSH_CMDID,
+ WMI_DIAG_EVENT_LOG_CONFIG_CMDID,
+ WMI_REQUEST_WLAN_STATS_CMDID,
+ WMI_REQUEST_RCPI_CMDID,
+ WMI_REQUEST_PEER_STATS_INFO_CMDID,
+ WMI_REQUEST_RADIO_CHAN_STATS_CMDID,
+ WMI_SET_ARP_NS_OFFLOAD_CMDID = WMI_TLV_CMD(WMI_GRP_ARP_NS_OFL),
+ WMI_ADD_PROACTIVE_ARP_RSP_PATTERN_CMDID,
+ WMI_DEL_PROACTIVE_ARP_RSP_PATTERN_CMDID,
+ WMI_NETWORK_LIST_OFFLOAD_CONFIG_CMDID = WMI_TLV_CMD(WMI_GRP_NLO_OFL),
+ WMI_APFIND_CMDID,
+ WMI_PASSPOINT_LIST_CONFIG_CMDID,
+ WMI_NLO_CONFIGURE_MAWC_CMDID,
+ WMI_GTK_OFFLOAD_CMDID = WMI_TLV_CMD(WMI_GRP_GTK_OFL),
+ WMI_CSA_OFFLOAD_ENABLE_CMDID = WMI_TLV_CMD(WMI_GRP_CSA_OFL),
+ WMI_CSA_OFFLOAD_CHANSWITCH_CMDID,
+ WMI_CHATTER_SET_MODE_CMDID = WMI_TLV_CMD(WMI_GRP_CHATTER),
+ WMI_CHATTER_ADD_COALESCING_FILTER_CMDID,
+ WMI_CHATTER_DELETE_COALESCING_FILTER_CMDID,
+ WMI_CHATTER_COALESCING_QUERY_CMDID,
+ WMI_PEER_TID_ADDBA_CMDID = WMI_TLV_CMD(WMI_GRP_TID_ADDBA),
+ WMI_PEER_TID_DELBA_CMDID,
+ WMI_STA_DTIM_PS_METHOD_CMDID,
+ WMI_STA_UAPSD_AUTO_TRIG_CMDID,
+ WMI_STA_KEEPALIVE_CMDID,
+ WMI_BA_REQ_SSN_CMDID,
+ WMI_ECHO_CMDID = WMI_TLV_CMD(WMI_GRP_MISC),
+ WMI_PDEV_UTF_CMDID,
+ WMI_DBGLOG_CFG_CMDID,
+ WMI_PDEV_QVIT_CMDID,
+ WMI_PDEV_FTM_INTG_CMDID,
+ WMI_VDEV_SET_KEEPALIVE_CMDID,
+ WMI_VDEV_GET_KEEPALIVE_CMDID,
+ WMI_FORCE_FW_HANG_CMDID,
+ WMI_SET_MCASTBCAST_FILTER_CMDID,
+ WMI_THERMAL_MGMT_CMDID,
+ WMI_HOST_AUTO_SHUTDOWN_CFG_CMDID,
+ WMI_TPC_CHAINMASK_CONFIG_CMDID,
+ WMI_SET_ANTENNA_DIVERSITY_CMDID,
+ WMI_OCB_SET_SCHED_CMDID,
+ WMI_RSSI_BREACH_MONITOR_CONFIG_CMDID,
+ WMI_LRO_CONFIG_CMDID,
+ WMI_TRANSFER_DATA_TO_FLASH_CMDID,
+ WMI_CONFIG_ENHANCED_MCAST_FILTER_CMDID,
+ WMI_VDEV_WISA_CMDID,
+ WMI_DBGLOG_TIME_STAMP_SYNC_CMDID,
+ WMI_SET_MULTIPLE_MCAST_FILTER_CMDID,
+ WMI_READ_DATA_FROM_FLASH_CMDID,
+ WMI_THERM_THROT_SET_CONF_CMDID,
+ WMI_RUNTIME_DPD_RECAL_CMDID,
+ WMI_GET_TPC_POWER_CMDID,
+ WMI_IDLE_TRIGGER_MONITOR_CMDID,
+ WMI_GPIO_CONFIG_CMDID = WMI_TLV_CMD(WMI_GRP_GPIO),
+ WMI_GPIO_OUTPUT_CMDID,
+ WMI_TXBF_CMDID,
+ WMI_FWTEST_VDEV_MCC_SET_TBTT_MODE_CMDID = WMI_TLV_CMD(WMI_GRP_FWTEST),
+ WMI_FWTEST_P2P_SET_NOA_PARAM_CMDID,
+ WMI_UNIT_TEST_CMDID,
+ WMI_FWTEST_CMDID,
+ WMI_QBOOST_CFG_CMDID,
+ WMI_TDLS_SET_STATE_CMDID = WMI_TLV_CMD(WMI_GRP_TDLS),
+ WMI_TDLS_PEER_UPDATE_CMDID,
+ WMI_TDLS_SET_OFFCHAN_MODE_CMDID,
+ WMI_RESMGR_ADAPTIVE_OCS_EN_DIS_CMDID = WMI_TLV_CMD(WMI_GRP_RESMGR),
+ WMI_RESMGR_SET_CHAN_TIME_QUOTA_CMDID,
+ WMI_RESMGR_SET_CHAN_LATENCY_CMDID,
+ WMI_STA_SMPS_FORCE_MODE_CMDID = WMI_TLV_CMD(WMI_GRP_STA_SMPS),
+ WMI_STA_SMPS_PARAM_CMDID,
+ WMI_HB_SET_ENABLE_CMDID = WMI_TLV_CMD(WMI_GRP_WLAN_HB),
+ WMI_HB_SET_TCP_PARAMS_CMDID,
+ WMI_HB_SET_TCP_PKT_FILTER_CMDID,
+ WMI_HB_SET_UDP_PARAMS_CMDID,
+ WMI_HB_SET_UDP_PKT_FILTER_CMDID,
+ WMI_RMC_SET_MODE_CMDID = WMI_TLV_CMD(WMI_GRP_RMC),
+ WMI_RMC_SET_ACTION_PERIOD_CMDID,
+ WMI_RMC_CONFIG_CMDID,
+ WMI_RMC_SET_MANUAL_LEADER_CMDID,
+ WMI_MHF_OFFLOAD_SET_MODE_CMDID = WMI_TLV_CMD(WMI_GRP_MHF_OFL),
+ WMI_MHF_OFFLOAD_PLUMB_ROUTING_TBL_CMDID,
+ WMI_BATCH_SCAN_ENABLE_CMDID = WMI_TLV_CMD(WMI_GRP_LOCATION_SCAN),
+ WMI_BATCH_SCAN_DISABLE_CMDID,
+ WMI_BATCH_SCAN_TRIGGER_RESULT_CMDID,
+ WMI_OEM_REQ_CMDID = WMI_TLV_CMD(WMI_GRP_OEM),
+ WMI_OEM_REQUEST_CMDID,
+ WMI_LPI_OEM_REQ_CMDID,
+ WMI_NAN_CMDID = WMI_TLV_CMD(WMI_GRP_NAN),
+ WMI_MODEM_POWER_STATE_CMDID = WMI_TLV_CMD(WMI_GRP_COEX),
+ WMI_CHAN_AVOID_UPDATE_CMDID,
+ WMI_COEX_CONFIG_CMDID,
+ WMI_CHAN_AVOID_RPT_ALLOW_CMDID,
+ WMI_COEX_GET_ANTENNA_ISOLATION_CMDID,
+ WMI_SAR_LIMITS_CMDID,
+ WMI_OBSS_SCAN_ENABLE_CMDID = WMI_TLV_CMD(WMI_GRP_OBSS_OFL),
+ WMI_OBSS_SCAN_DISABLE_CMDID,
+ WMI_OBSS_COLOR_COLLISION_DET_CONFIG_CMDID,
+ WMI_LPI_MGMT_SNOOPING_CONFIG_CMDID = WMI_TLV_CMD(WMI_GRP_LPI),
+ WMI_LPI_START_SCAN_CMDID,
+ WMI_LPI_STOP_SCAN_CMDID,
+ WMI_EXTSCAN_START_CMDID = WMI_TLV_CMD(WMI_GRP_EXTSCAN),
+ WMI_EXTSCAN_STOP_CMDID,
+ WMI_EXTSCAN_CONFIGURE_WLAN_CHANGE_MONITOR_CMDID,
+ WMI_EXTSCAN_CONFIGURE_HOTLIST_MONITOR_CMDID,
+ WMI_EXTSCAN_GET_CACHED_RESULTS_CMDID,
+ WMI_EXTSCAN_GET_WLAN_CHANGE_RESULTS_CMDID,
+ WMI_EXTSCAN_SET_CAPABILITIES_CMDID,
+ WMI_EXTSCAN_GET_CAPABILITIES_CMDID,
+ WMI_EXTSCAN_CONFIGURE_HOTLIST_SSID_MONITOR_CMDID,
+ WMI_EXTSCAN_CONFIGURE_MAWC_CMDID,
+ WMI_SET_DHCP_SERVER_OFFLOAD_CMDID = WMI_TLV_CMD(WMI_GRP_DHCP_OFL),
+ WMI_IPA_OFFLOAD_ENABLE_DISABLE_CMDID = WMI_TLV_CMD(WMI_GRP_IPA),
+ WMI_MDNS_OFFLOAD_ENABLE_CMDID = WMI_TLV_CMD(WMI_GRP_MDNS_OFL),
+ WMI_MDNS_SET_FQDN_CMDID,
+ WMI_MDNS_SET_RESPONSE_CMDID,
+ WMI_MDNS_GET_STATS_CMDID,
+ WMI_SAP_OFL_ENABLE_CMDID = WMI_TLV_CMD(WMI_GRP_SAP_OFL),
+ WMI_SAP_SET_BLACKLIST_PARAM_CMDID,
+ WMI_OCB_SET_CONFIG_CMDID = WMI_TLV_CMD(WMI_GRP_OCB),
+ WMI_OCB_SET_UTC_TIME_CMDID,
+ WMI_OCB_START_TIMING_ADVERT_CMDID,
+ WMI_OCB_STOP_TIMING_ADVERT_CMDID,
+ WMI_OCB_GET_TSF_TIMER_CMDID,
+ WMI_DCC_GET_STATS_CMDID,
+ WMI_DCC_CLEAR_STATS_CMDID,
+ WMI_DCC_UPDATE_NDL_CMDID,
+ WMI_SOC_SET_PCL_CMDID = WMI_TLV_CMD(WMI_GRP_SOC),
+ WMI_SOC_SET_HW_MODE_CMDID,
+ WMI_SOC_SET_DUAL_MAC_CONFIG_CMDID,
+ WMI_SOC_SET_ANTENNA_MODE_CMDID,
+ WMI_PACKET_FILTER_CONFIG_CMDID = WMI_TLV_CMD(WMI_GRP_PKT_FILTER),
+ WMI_PACKET_FILTER_ENABLE_CMDID,
+ WMI_MAWC_SENSOR_REPORT_IND_CMDID = WMI_TLV_CMD(WMI_GRP_MAWC),
+ WMI_PMF_OFFLOAD_SET_SA_QUERY_CMDID = WMI_TLV_CMD(WMI_GRP_PMF_OFFLOAD),
+ WMI_BPF_GET_CAPABILITY_CMDID = WMI_TLV_CMD(WMI_GRP_BPF_OFFLOAD),
+ WMI_BPF_GET_VDEV_STATS_CMDID,
+ WMI_BPF_SET_VDEV_INSTRUCTIONS_CMDID,
+ WMI_BPF_DEL_VDEV_INSTRUCTIONS_CMDID,
+ WMI_BPF_SET_VDEV_ACTIVE_MODE_CMDID,
+ WMI_MNT_FILTER_CMDID = WMI_TLV_CMD(WMI_GRP_MONITOR),
+ WMI_SET_CURRENT_COUNTRY_CMDID = WMI_TLV_CMD(WMI_GRP_REGULATORY),
+ WMI_11D_SCAN_START_CMDID,
+ WMI_11D_SCAN_STOP_CMDID,
+ WMI_SET_INIT_COUNTRY_CMDID,
+ WMI_NDI_GET_CAP_REQ_CMDID = WMI_TLV_CMD(WMI_GRP_PROTOTYPE),
+ WMI_NDP_INITIATOR_REQ_CMDID,
+ WMI_NDP_RESPONDER_REQ_CMDID,
+ WMI_NDP_END_REQ_CMDID,
+ WMI_HW_DATA_FILTER_CMDID = WMI_TLV_CMD(WMI_GRP_HW_DATA_FILTER),
+ WMI_TWT_ENABLE_CMDID = WMI_TLV_CMD(WMI_GRP_TWT),
+ WMI_TWT_DISABLE_CMDID,
+ WMI_TWT_ADD_DIALOG_CMDID,
+ WMI_TWT_DEL_DIALOG_CMDID,
+ WMI_TWT_PAUSE_DIALOG_CMDID,
+ WMI_TWT_RESUME_DIALOG_CMDID,
+ WMI_PDEV_OBSS_PD_SPATIAL_REUSE_CMDID =
+ WMI_TLV_CMD(WMI_GRP_SPATIAL_REUSE),
+ WMI_PDEV_OBSS_PD_SPATIAL_REUSE_SET_DEF_OBSS_THRESH_CMDID,
+};
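+
+/*
+ * Illustrative note (host-side, not part of the firmware interface):
+ * command IDs are allocated consecutively within each WMI_GRP_* block,
+ * starting from the value WMI_TLV_CMD() yields for that group. Assuming
+ * the macro defined earlier in this header packs the group ID into the
+ * bits above a 12-bit per-group index, an ID splits back apart as:
+ *
+ *	grp = cmd_id >> 12;	group the command belongs to
+ *	idx = cmd_id & 0xfff;	position within that group's block
+ */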
+
+enum wmi_tlv_event_id {
+ WMI_SERVICE_READY_EVENTID = 0x1,
+ WMI_READY_EVENTID,
+ WMI_SERVICE_AVAILABLE_EVENTID,
+ WMI_SCAN_EVENTID = WMI_EVT_GRP_START_ID(WMI_GRP_SCAN),
+ WMI_PDEV_TPC_CONFIG_EVENTID = WMI_TLV_CMD(WMI_GRP_PDEV),
+ WMI_CHAN_INFO_EVENTID,
+ WMI_PHYERR_EVENTID,
+ WMI_PDEV_DUMP_EVENTID,
+ WMI_TX_PAUSE_EVENTID,
+ WMI_DFS_RADAR_EVENTID,
+ WMI_PDEV_L1SS_TRACK_EVENTID,
+ WMI_PDEV_TEMPERATURE_EVENTID,
+ WMI_SERVICE_READY_EXT_EVENTID,
+ WMI_PDEV_FIPS_EVENTID,
+ WMI_PDEV_CHANNEL_HOPPING_EVENTID,
+ WMI_PDEV_ANI_CCK_LEVEL_EVENTID,
+ WMI_PDEV_ANI_OFDM_LEVEL_EVENTID,
+ WMI_PDEV_TPC_EVENTID,
+ WMI_PDEV_NFCAL_POWER_ALL_CHANNELS_EVENTID,
+ WMI_PDEV_SET_HW_MODE_RESP_EVENTID,
+ WMI_PDEV_HW_MODE_TRANSITION_EVENTID,
+ WMI_PDEV_SET_MAC_CONFIG_RESP_EVENTID,
+ WMI_PDEV_ANTDIV_STATUS_EVENTID,
+ WMI_PDEV_CHIP_POWER_STATS_EVENTID,
+ WMI_PDEV_CHIP_POWER_SAVE_FAILURE_DETECTED_EVENTID,
+ WMI_PDEV_CSA_SWITCH_COUNT_STATUS_EVENTID,
+ WMI_PDEV_CHECK_CAL_VERSION_EVENTID,
+ WMI_PDEV_DIV_RSSI_ANTID_EVENTID,
+ WMI_PDEV_BSS_CHAN_INFO_EVENTID,
+ WMI_PDEV_UPDATE_CTLTABLE_EVENTID,
+ WMI_PDEV_DMA_RING_CFG_RSP_EVENTID,
+ WMI_PDEV_DMA_RING_BUF_RELEASE_EVENTID,
+ WMI_PDEV_CTL_FAILSAFE_CHECK_EVENTID,
+ WMI_PDEV_CSC_SWITCH_COUNT_STATUS_EVENTID,
+ WMI_PDEV_COLD_BOOT_CAL_DATA_EVENTID,
+ WMI_PDEV_RAP_INFO_EVENTID,
+ WMI_CHAN_RF_CHARACTERIZATION_INFO_EVENTID,
+ WMI_SERVICE_READY_EXT2_EVENTID,
+ WMI_VDEV_START_RESP_EVENTID = WMI_TLV_CMD(WMI_GRP_VDEV),
+ WMI_VDEV_STOPPED_EVENTID,
+ WMI_VDEV_INSTALL_KEY_COMPLETE_EVENTID,
+ WMI_VDEV_MCC_BCN_INTERVAL_CHANGE_REQ_EVENTID,
+ WMI_VDEV_TSF_REPORT_EVENTID,
+ WMI_VDEV_DELETE_RESP_EVENTID,
+ WMI_VDEV_ENCRYPT_DECRYPT_DATA_RESP_EVENTID,
+ WMI_VDEV_ADD_MAC_ADDR_TO_RX_FILTER_STATUS_EVENTID,
+ WMI_PEER_STA_KICKOUT_EVENTID = WMI_TLV_CMD(WMI_GRP_PEER),
+ WMI_PEER_INFO_EVENTID,
+ WMI_PEER_TX_FAIL_CNT_THR_EVENTID,
+ WMI_PEER_ESTIMATED_LINKSPEED_EVENTID,
+ WMI_PEER_STATE_EVENTID,
+ WMI_PEER_ASSOC_CONF_EVENTID,
+ WMI_PEER_DELETE_RESP_EVENTID,
+ WMI_PEER_RATECODE_LIST_EVENTID,
+ WMI_WDS_PEER_EVENTID,
+ WMI_PEER_STA_PS_STATECHG_EVENTID,
+ WMI_PEER_ANTDIV_INFO_EVENTID,
+ WMI_PEER_RESERVED0_EVENTID,
+ WMI_PEER_RESERVED1_EVENTID,
+ WMI_PEER_RESERVED2_EVENTID,
+ WMI_PEER_RESERVED3_EVENTID,
+ WMI_PEER_RESERVED4_EVENTID,
+ WMI_PEER_RESERVED5_EVENTID,
+ WMI_PEER_RESERVED6_EVENTID,
+ WMI_PEER_RESERVED7_EVENTID,
+ WMI_PEER_RESERVED8_EVENTID,
+ WMI_PEER_RESERVED9_EVENTID,
+ WMI_PEER_RESERVED10_EVENTID,
+ WMI_PEER_OPER_MODE_CHANGE_EVENTID,
+ WMI_MGMT_RX_EVENTID = WMI_TLV_CMD(WMI_GRP_MGMT),
+ WMI_HOST_SWBA_EVENTID,
+ WMI_TBTTOFFSET_UPDATE_EVENTID,
+ WMI_OFFLOAD_BCN_TX_STATUS_EVENTID,
+ WMI_OFFLOAD_PROB_RESP_TX_STATUS_EVENTID,
+ WMI_MGMT_TX_COMPLETION_EVENTID,
+ WMI_MGMT_TX_BUNDLE_COMPLETION_EVENTID,
+ WMI_TBTTOFFSET_EXT_UPDATE_EVENTID,
+ WMI_OFFCHAN_DATA_TX_COMPLETION_EVENTID,
+ WMI_HOST_FILS_DISCOVERY_EVENTID,
+ WMI_TX_DELBA_COMPLETE_EVENTID = WMI_TLV_CMD(WMI_GRP_BA_NEG),
+ WMI_TX_ADDBA_COMPLETE_EVENTID,
+ WMI_BA_RSP_SSN_EVENTID,
+ WMI_AGGR_STATE_TRIG_EVENTID,
+ WMI_ROAM_EVENTID = WMI_TLV_CMD(WMI_GRP_ROAM),
+ WMI_PROFILE_MATCH,
+ WMI_ROAM_SYNCH_EVENTID,
+ WMI_P2P_DISC_EVENTID = WMI_TLV_CMD(WMI_GRP_P2P),
+ WMI_P2P_NOA_EVENTID,
+ WMI_P2P_LISTEN_OFFLOAD_STOPPED_EVENTID,
+ WMI_AP_PS_EGAP_INFO_EVENTID = WMI_TLV_CMD(WMI_GRP_AP_PS),
+ WMI_PDEV_RESUME_EVENTID = WMI_TLV_CMD(WMI_GRP_SUSPEND),
+ WMI_WOW_WAKEUP_HOST_EVENTID = WMI_TLV_CMD(WMI_GRP_WOW),
+ WMI_D0_WOW_DISABLE_ACK_EVENTID,
+ WMI_WOW_INITIAL_WAKEUP_EVENTID,
+ WMI_RTT_MEASUREMENT_REPORT_EVENTID = WMI_TLV_CMD(WMI_GRP_RTT),
+ WMI_TSF_MEASUREMENT_REPORT_EVENTID,
+ WMI_RTT_ERROR_REPORT_EVENTID,
+ WMI_STATS_EXT_EVENTID = WMI_TLV_CMD(WMI_GRP_STATS),
+ WMI_IFACE_LINK_STATS_EVENTID,
+ WMI_PEER_LINK_STATS_EVENTID,
+ WMI_RADIO_LINK_STATS_EVENTID,
+ WMI_UPDATE_FW_MEM_DUMP_EVENTID,
+ WMI_DIAG_EVENT_LOG_SUPPORTED_EVENTID,
+ WMI_INST_RSSI_STATS_EVENTID,
+ WMI_RADIO_TX_POWER_LEVEL_STATS_EVENTID,
+ WMI_REPORT_STATS_EVENTID,
+ WMI_UPDATE_RCPI_EVENTID,
+ WMI_PEER_STATS_INFO_EVENTID,
+ WMI_RADIO_CHAN_STATS_EVENTID,
+ WMI_NLO_MATCH_EVENTID = WMI_TLV_CMD(WMI_GRP_NLO_OFL),
+ WMI_NLO_SCAN_COMPLETE_EVENTID,
+ WMI_APFIND_EVENTID,
+ WMI_PASSPOINT_MATCH_EVENTID,
+ WMI_GTK_OFFLOAD_STATUS_EVENTID = WMI_TLV_CMD(WMI_GRP_GTK_OFL),
+ WMI_GTK_REKEY_FAIL_EVENTID,
+ WMI_CSA_HANDLING_EVENTID = WMI_TLV_CMD(WMI_GRP_CSA_OFL),
+ WMI_CHATTER_PC_QUERY_EVENTID = WMI_TLV_CMD(WMI_GRP_CHATTER),
+ WMI_PDEV_DFS_RADAR_DETECTION_EVENTID = WMI_TLV_CMD(WMI_GRP_DFS),
+ WMI_VDEV_DFS_CAC_COMPLETE_EVENTID,
+ WMI_VDEV_ADFS_OCAC_COMPLETE_EVENTID,
+ WMI_ECHO_EVENTID = WMI_TLV_CMD(WMI_GRP_MISC),
+ WMI_PDEV_UTF_EVENTID,
+ WMI_DEBUG_MESG_EVENTID,
+ WMI_UPDATE_STATS_EVENTID,
+ WMI_DEBUG_PRINT_EVENTID,
+ WMI_DCS_INTERFERENCE_EVENTID,
+ WMI_PDEV_QVIT_EVENTID,
+ WMI_WLAN_PROFILE_DATA_EVENTID,
+ WMI_PDEV_FTM_INTG_EVENTID,
+ WMI_WLAN_FREQ_AVOID_EVENTID,
+ WMI_VDEV_GET_KEEPALIVE_EVENTID,
+ WMI_THERMAL_MGMT_EVENTID,
+ WMI_DIAG_DATA_CONTAINER_EVENTID,
+ WMI_HOST_AUTO_SHUTDOWN_EVENTID,
+ WMI_UPDATE_WHAL_MIB_STATS_EVENTID,
+ WMI_UPDATE_VDEV_RATE_STATS_EVENTID,
+ WMI_DIAG_EVENTID,
+ WMI_OCB_SET_SCHED_EVENTID,
+ WMI_DEBUG_MESG_FLUSH_COMPLETE_EVENTID,
+ WMI_RSSI_BREACH_EVENTID,
+ WMI_TRANSFER_DATA_TO_FLASH_COMPLETE_EVENTID,
+ WMI_PDEV_UTF_SCPC_EVENTID,
+ WMI_READ_DATA_FROM_FLASH_EVENTID,
+ WMI_REPORT_RX_AGGR_FAILURE_EVENTID,
+ WMI_PKGID_EVENTID,
+ WMI_GPIO_INPUT_EVENTID = WMI_TLV_CMD(WMI_GRP_GPIO),
+ WMI_UPLOADH_EVENTID,
+ WMI_CAPTUREH_EVENTID,
+ WMI_RFKILL_STATE_CHANGE_EVENTID,
+ WMI_TDLS_PEER_EVENTID = WMI_TLV_CMD(WMI_GRP_TDLS),
+ WMI_STA_SMPS_FORCE_MODE_COMPL_EVENTID = WMI_TLV_CMD(WMI_GRP_STA_SMPS),
+ WMI_BATCH_SCAN_ENABLED_EVENTID = WMI_TLV_CMD(WMI_GRP_LOCATION_SCAN),
+ WMI_BATCH_SCAN_RESULT_EVENTID,
+ WMI_OEM_CAPABILITY_EVENTID = WMI_TLV_CMD(WMI_GRP_OEM),
+ WMI_OEM_MEASUREMENT_REPORT_EVENTID,
+ WMI_OEM_ERROR_REPORT_EVENTID,
+ WMI_OEM_RESPONSE_EVENTID,
+ WMI_NAN_EVENTID = WMI_TLV_CMD(WMI_GRP_NAN),
+ WMI_NAN_DISC_IFACE_CREATED_EVENTID,
+ WMI_NAN_DISC_IFACE_DELETED_EVENTID,
+ WMI_NAN_STARTED_CLUSTER_EVENTID,
+ WMI_NAN_JOINED_CLUSTER_EVENTID,
+ WMI_COEX_REPORT_ANTENNA_ISOLATION_EVENTID = WMI_TLV_CMD(WMI_GRP_COEX),
+ WMI_LPI_RESULT_EVENTID = WMI_TLV_CMD(WMI_GRP_LPI),
+ WMI_LPI_STATUS_EVENTID,
+ WMI_LPI_HANDOFF_EVENTID,
+ WMI_EXTSCAN_START_STOP_EVENTID = WMI_TLV_CMD(WMI_GRP_EXTSCAN),
+ WMI_EXTSCAN_OPERATION_EVENTID,
+ WMI_EXTSCAN_TABLE_USAGE_EVENTID,
+ WMI_EXTSCAN_CACHED_RESULTS_EVENTID,
+ WMI_EXTSCAN_WLAN_CHANGE_RESULTS_EVENTID,
+ WMI_EXTSCAN_HOTLIST_MATCH_EVENTID,
+ WMI_EXTSCAN_CAPABILITIES_EVENTID,
+ WMI_EXTSCAN_HOTLIST_SSID_MATCH_EVENTID,
+ WMI_MDNS_STATS_EVENTID = WMI_TLV_CMD(WMI_GRP_MDNS_OFL),
+ WMI_SAP_OFL_ADD_STA_EVENTID = WMI_TLV_CMD(WMI_GRP_SAP_OFL),
+ WMI_SAP_OFL_DEL_STA_EVENTID,
+ WMI_OCB_SET_CONFIG_RESP_EVENTID = WMI_TLV_CMD(WMI_GRP_OCB),
+ WMI_OCB_GET_TSF_TIMER_RESP_EVENTID,
+ WMI_DCC_GET_STATS_RESP_EVENTID,
+ WMI_DCC_UPDATE_NDL_RESP_EVENTID,
+ WMI_DCC_STATS_EVENTID,
+ WMI_SOC_SET_HW_MODE_RESP_EVENTID = WMI_TLV_CMD(WMI_GRP_SOC),
+ WMI_SOC_HW_MODE_TRANSITION_EVENTID,
+ WMI_SOC_SET_DUAL_MAC_CONFIG_RESP_EVENTID,
+ WMI_MAWC_ENABLE_SENSOR_EVENTID = WMI_TLV_CMD(WMI_GRP_MAWC),
+ WMI_BPF_CAPABILIY_INFO_EVENTID = WMI_TLV_CMD(WMI_GRP_BPF_OFFLOAD),
+ WMI_BPF_VDEV_STATS_INFO_EVENTID,
+ WMI_RMC_NEW_LEADER_EVENTID = WMI_TLV_CMD(WMI_GRP_RMC),
+ WMI_REG_CHAN_LIST_CC_EVENTID = WMI_TLV_CMD(WMI_GRP_REGULATORY),
+ WMI_11D_NEW_COUNTRY_EVENTID,
+ WMI_REG_CHAN_LIST_CC_EXT_EVENTID,
+ WMI_NDI_CAP_RSP_EVENTID = WMI_TLV_CMD(WMI_GRP_PROTOTYPE),
+ WMI_NDP_INITIATOR_RSP_EVENTID,
+ WMI_NDP_RESPONDER_RSP_EVENTID,
+ WMI_NDP_END_RSP_EVENTID,
+ WMI_NDP_INDICATION_EVENTID,
+ WMI_NDP_CONFIRM_EVENTID,
+ WMI_NDP_END_INDICATION_EVENTID,
+
+ WMI_TWT_ENABLE_EVENTID = WMI_TLV_CMD(WMI_GRP_TWT),
+ WMI_TWT_DISABLE_EVENTID,
+ WMI_TWT_ADD_DIALOG_EVENTID,
+ WMI_TWT_DEL_DIALOG_EVENTID,
+ WMI_TWT_PAUSE_DIALOG_EVENTID,
+ WMI_TWT_RESUME_DIALOG_EVENTID,
+};
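+
+/*
+ * Sketch of how a host driver might fan out on these IDs once the TLV
+ * header has been parsed (handler names are hypothetical):
+ *
+ *	switch (event_id) {
+ *	case WMI_SERVICE_READY_EVENTID:
+ *		handle_service_ready(skb);
+ *		break;
+ *	case WMI_MGMT_RX_EVENTID:
+ *		handle_mgmt_rx(skb);
+ *		break;
+ *	default:
+ *		break;
+ *	}
+ */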
+
+enum wmi_tlv_pdev_param {
+ WMI_PDEV_PARAM_TX_CHAIN_MASK = 0x1,
+ WMI_PDEV_PARAM_RX_CHAIN_MASK,
+ WMI_PDEV_PARAM_TXPOWER_LIMIT2G,
+ WMI_PDEV_PARAM_TXPOWER_LIMIT5G,
+ WMI_PDEV_PARAM_TXPOWER_SCALE,
+ WMI_PDEV_PARAM_BEACON_GEN_MODE,
+ WMI_PDEV_PARAM_BEACON_TX_MODE,
+ WMI_PDEV_PARAM_RESMGR_OFFCHAN_MODE,
+ WMI_PDEV_PARAM_PROTECTION_MODE,
+ WMI_PDEV_PARAM_DYNAMIC_BW,
+ WMI_PDEV_PARAM_NON_AGG_SW_RETRY_TH,
+ WMI_PDEV_PARAM_AGG_SW_RETRY_TH,
+ WMI_PDEV_PARAM_STA_KICKOUT_TH,
+ WMI_PDEV_PARAM_AC_AGGRSIZE_SCALING,
+ WMI_PDEV_PARAM_LTR_ENABLE,
+ WMI_PDEV_PARAM_LTR_AC_LATENCY_BE,
+ WMI_PDEV_PARAM_LTR_AC_LATENCY_BK,
+ WMI_PDEV_PARAM_LTR_AC_LATENCY_VI,
+ WMI_PDEV_PARAM_LTR_AC_LATENCY_VO,
+ WMI_PDEV_PARAM_LTR_AC_LATENCY_TIMEOUT,
+ WMI_PDEV_PARAM_LTR_SLEEP_OVERRIDE,
+ WMI_PDEV_PARAM_LTR_RX_OVERRIDE,
+ WMI_PDEV_PARAM_LTR_TX_ACTIVITY_TIMEOUT,
+ WMI_PDEV_PARAM_L1SS_ENABLE,
+ WMI_PDEV_PARAM_DSLEEP_ENABLE,
+ WMI_PDEV_PARAM_PCIELP_TXBUF_FLUSH,
+ WMI_PDEV_PARAM_PCIELP_TXBUF_WATERMARK,
+ WMI_PDEV_PARAM_PCIELP_TXBUF_TMO_EN,
+ WMI_PDEV_PARAM_PCIELP_TXBUF_TMO_VALUE,
+ WMI_PDEV_PARAM_PDEV_STATS_UPDATE_PERIOD,
+ WMI_PDEV_PARAM_VDEV_STATS_UPDATE_PERIOD,
+ WMI_PDEV_PARAM_PEER_STATS_UPDATE_PERIOD,
+ WMI_PDEV_PARAM_BCNFLT_STATS_UPDATE_PERIOD,
+ WMI_PDEV_PARAM_PMF_QOS,
+ WMI_PDEV_PARAM_ARP_AC_OVERRIDE,
+ WMI_PDEV_PARAM_DCS,
+ WMI_PDEV_PARAM_ANI_ENABLE,
+ WMI_PDEV_PARAM_ANI_POLL_PERIOD,
+ WMI_PDEV_PARAM_ANI_LISTEN_PERIOD,
+ WMI_PDEV_PARAM_ANI_OFDM_LEVEL,
+ WMI_PDEV_PARAM_ANI_CCK_LEVEL,
+ WMI_PDEV_PARAM_DYNTXCHAIN,
+ WMI_PDEV_PARAM_PROXY_STA,
+ WMI_PDEV_PARAM_IDLE_PS_CONFIG,
+ WMI_PDEV_PARAM_POWER_GATING_SLEEP,
+ WMI_PDEV_PARAM_RFKILL_ENABLE,
+ WMI_PDEV_PARAM_BURST_DUR,
+ WMI_PDEV_PARAM_BURST_ENABLE,
+ WMI_PDEV_PARAM_HW_RFKILL_CONFIG,
+ WMI_PDEV_PARAM_LOW_POWER_RF_ENABLE,
+ WMI_PDEV_PARAM_L1SS_TRACK,
+ WMI_PDEV_PARAM_HYST_EN,
+ WMI_PDEV_PARAM_POWER_COLLAPSE_ENABLE,
+ WMI_PDEV_PARAM_LED_SYS_STATE,
+ WMI_PDEV_PARAM_LED_ENABLE,
+ WMI_PDEV_PARAM_AUDIO_OVER_WLAN_LATENCY,
+ WMI_PDEV_PARAM_AUDIO_OVER_WLAN_ENABLE,
+ WMI_PDEV_PARAM_WHAL_MIB_STATS_UPDATE_ENABLE,
+ WMI_PDEV_PARAM_VDEV_RATE_STATS_UPDATE_PERIOD,
+ WMI_PDEV_PARAM_CTS_CBW,
+ WMI_PDEV_PARAM_WNTS_CONFIG,
+ WMI_PDEV_PARAM_ADAPTIVE_EARLY_RX_ENABLE,
+ WMI_PDEV_PARAM_ADAPTIVE_EARLY_RX_MIN_SLEEP_SLOP,
+ WMI_PDEV_PARAM_ADAPTIVE_EARLY_RX_INC_DEC_STEP,
+ WMI_PDEV_PARAM_EARLY_RX_FIX_SLEEP_SLOP,
+ WMI_PDEV_PARAM_BMISS_BASED_ADAPTIVE_BTO_ENABLE,
+ WMI_PDEV_PARAM_BMISS_BTO_MIN_BCN_TIMEOUT,
+ WMI_PDEV_PARAM_BMISS_BTO_INC_DEC_STEP,
+ WMI_PDEV_PARAM_BTO_FIX_BCN_TIMEOUT,
+ WMI_PDEV_PARAM_CE_BASED_ADAPTIVE_BTO_ENABLE,
+ WMI_PDEV_PARAM_CE_BTO_COMBO_CE_VALUE,
+ WMI_PDEV_PARAM_TX_CHAIN_MASK_2G,
+ WMI_PDEV_PARAM_RX_CHAIN_MASK_2G,
+ WMI_PDEV_PARAM_TX_CHAIN_MASK_5G,
+ WMI_PDEV_PARAM_RX_CHAIN_MASK_5G,
+ WMI_PDEV_PARAM_TX_CHAIN_MASK_CCK,
+ WMI_PDEV_PARAM_TX_CHAIN_MASK_1SS,
+ WMI_PDEV_PARAM_CTS2SELF_FOR_P2P_GO_CONFIG,
+ WMI_PDEV_PARAM_TXPOWER_DECR_DB,
+ WMI_PDEV_PARAM_AGGR_BURST,
+ WMI_PDEV_PARAM_RX_DECAP_MODE,
+ WMI_PDEV_PARAM_FAST_CHANNEL_RESET,
+ WMI_PDEV_PARAM_SMART_ANTENNA_DEFAULT_ANTENNA,
+ WMI_PDEV_PARAM_ANTENNA_GAIN,
+ WMI_PDEV_PARAM_RX_FILTER,
+ WMI_PDEV_SET_MCAST_TO_UCAST_TID,
+ WMI_PDEV_PARAM_PROXY_STA_MODE,
+ WMI_PDEV_PARAM_SET_MCAST2UCAST_MODE,
+ WMI_PDEV_PARAM_SET_MCAST2UCAST_BUFFER,
+ WMI_PDEV_PARAM_REMOVE_MCAST2UCAST_BUFFER,
+ WMI_PDEV_PEER_STA_PS_STATECHG_ENABLE,
+ WMI_PDEV_PARAM_IGMPMLD_AC_OVERRIDE,
+ WMI_PDEV_PARAM_BLOCK_INTERBSS,
+ WMI_PDEV_PARAM_SET_DISABLE_RESET_CMDID,
+ WMI_PDEV_PARAM_SET_MSDU_TTL_CMDID,
+ WMI_PDEV_PARAM_SET_PPDU_DURATION_CMDID,
+ WMI_PDEV_PARAM_TXBF_SOUND_PERIOD_CMDID,
+ WMI_PDEV_PARAM_SET_PROMISC_MODE_CMDID,
+ WMI_PDEV_PARAM_SET_BURST_MODE_CMDID,
+ WMI_PDEV_PARAM_EN_STATS,
+ WMI_PDEV_PARAM_MU_GROUP_POLICY,
+ WMI_PDEV_PARAM_NOISE_DETECTION,
+ WMI_PDEV_PARAM_NOISE_THRESHOLD,
+ WMI_PDEV_PARAM_DPD_ENABLE,
+ WMI_PDEV_PARAM_SET_MCAST_BCAST_ECHO,
+ WMI_PDEV_PARAM_ATF_STRICT_SCH,
+ WMI_PDEV_PARAM_ATF_SCHED_DURATION,
+ WMI_PDEV_PARAM_ANT_PLZN,
+ WMI_PDEV_PARAM_MGMT_RETRY_LIMIT,
+ WMI_PDEV_PARAM_SENSITIVITY_LEVEL,
+ WMI_PDEV_PARAM_SIGNED_TXPOWER_2G,
+ WMI_PDEV_PARAM_SIGNED_TXPOWER_5G,
+ WMI_PDEV_PARAM_ENABLE_PER_TID_AMSDU,
+ WMI_PDEV_PARAM_ENABLE_PER_TID_AMPDU,
+ WMI_PDEV_PARAM_CCA_THRESHOLD,
+ WMI_PDEV_PARAM_RTS_FIXED_RATE,
+ WMI_PDEV_PARAM_PDEV_RESET,
+ WMI_PDEV_PARAM_WAPI_MBSSID_OFFSET,
+ WMI_PDEV_PARAM_ARP_DBG_SRCADDR,
+ WMI_PDEV_PARAM_ARP_DBG_DSTADDR,
+ WMI_PDEV_PARAM_ATF_OBSS_NOISE_SCH,
+ WMI_PDEV_PARAM_ATF_OBSS_NOISE_SCALING_FACTOR,
+ WMI_PDEV_PARAM_CUST_TXPOWER_SCALE,
+ WMI_PDEV_PARAM_ATF_DYNAMIC_ENABLE,
+ WMI_PDEV_PARAM_CTRL_RETRY_LIMIT,
+ WMI_PDEV_PARAM_PROPAGATION_DELAY,
+ WMI_PDEV_PARAM_ENA_ANT_DIV,
+ WMI_PDEV_PARAM_FORCE_CHAIN_ANT,
+ WMI_PDEV_PARAM_ANT_DIV_SELFTEST,
+ WMI_PDEV_PARAM_ANT_DIV_SELFTEST_INTVL,
+ WMI_PDEV_PARAM_STATS_OBSERVATION_PERIOD,
+ WMI_PDEV_PARAM_TX_PPDU_DELAY_BIN_SIZE_MS,
+ WMI_PDEV_PARAM_TX_PPDU_DELAY_ARRAY_LEN,
+ WMI_PDEV_PARAM_TX_MPDU_AGGR_ARRAY_LEN,
+ WMI_PDEV_PARAM_RX_MPDU_AGGR_ARRAY_LEN,
+ WMI_PDEV_PARAM_TX_SCH_DELAY,
+ WMI_PDEV_PARAM_ENABLE_RTS_SIFS_BURSTING,
+ WMI_PDEV_PARAM_MAX_MPDUS_IN_AMPDU,
+ WMI_PDEV_PARAM_PEER_STATS_INFO_ENABLE,
+ WMI_PDEV_PARAM_FAST_PWR_TRANSITION,
+ WMI_PDEV_PARAM_RADIO_CHAN_STATS_ENABLE,
+ WMI_PDEV_PARAM_RADIO_DIAGNOSIS_ENABLE,
+ WMI_PDEV_PARAM_MESH_MCAST_ENABLE,
+};
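+
+/*
+ * Each ID above names one per-radio firmware tunable, programmed by
+ * sending WMI_PDEV_SET_PARAM_CMDID with the ID and a value. A usage
+ * sketch, assuming a driver helper of this shape (name illustrative):
+ *
+ *	ret = wmi_pdev_set_param(ar, WMI_PDEV_PARAM_ANI_ENABLE,
+ *				 1, pdev_id);
+ */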
+
+enum wmi_tlv_vdev_param {
+ WMI_VDEV_PARAM_RTS_THRESHOLD = 0x1,
+ WMI_VDEV_PARAM_FRAGMENTATION_THRESHOLD,
+ WMI_VDEV_PARAM_BEACON_INTERVAL,
+ WMI_VDEV_PARAM_LISTEN_INTERVAL,
+ WMI_VDEV_PARAM_MULTICAST_RATE,
+ WMI_VDEV_PARAM_MGMT_TX_RATE,
+ WMI_VDEV_PARAM_SLOT_TIME,
+ WMI_VDEV_PARAM_PREAMBLE,
+ WMI_VDEV_PARAM_SWBA_TIME,
+ WMI_VDEV_STATS_UPDATE_PERIOD,
+ WMI_VDEV_PWRSAVE_AGEOUT_TIME,
+ WMI_VDEV_HOST_SWBA_INTERVAL,
+ WMI_VDEV_PARAM_DTIM_PERIOD,
+ WMI_VDEV_OC_SCHEDULER_AIR_TIME_LIMIT,
+ WMI_VDEV_PARAM_WDS,
+ WMI_VDEV_PARAM_ATIM_WINDOW,
+ WMI_VDEV_PARAM_BMISS_COUNT_MAX,
+ WMI_VDEV_PARAM_BMISS_FIRST_BCNT,
+ WMI_VDEV_PARAM_BMISS_FINAL_BCNT,
+ WMI_VDEV_PARAM_FEATURE_WMM,
+ WMI_VDEV_PARAM_CHWIDTH,
+ WMI_VDEV_PARAM_CHEXTOFFSET,
+ WMI_VDEV_PARAM_DISABLE_HTPROTECTION,
+ WMI_VDEV_PARAM_STA_QUICKKICKOUT,
+ WMI_VDEV_PARAM_MGMT_RATE,
+ WMI_VDEV_PARAM_PROTECTION_MODE,
+ WMI_VDEV_PARAM_FIXED_RATE,
+ WMI_VDEV_PARAM_SGI,
+ WMI_VDEV_PARAM_LDPC,
+ WMI_VDEV_PARAM_TX_STBC,
+ WMI_VDEV_PARAM_RX_STBC,
+ WMI_VDEV_PARAM_INTRA_BSS_FWD,
+ WMI_VDEV_PARAM_DEF_KEYID,
+ WMI_VDEV_PARAM_NSS,
+ WMI_VDEV_PARAM_BCAST_DATA_RATE,
+ WMI_VDEV_PARAM_MCAST_DATA_RATE,
+ WMI_VDEV_PARAM_MCAST_INDICATE,
+ WMI_VDEV_PARAM_DHCP_INDICATE,
+ WMI_VDEV_PARAM_UNKNOWN_DEST_INDICATE,
+ WMI_VDEV_PARAM_AP_KEEPALIVE_MIN_IDLE_INACTIVE_TIME_SECS,
+ WMI_VDEV_PARAM_AP_KEEPALIVE_MAX_IDLE_INACTIVE_TIME_SECS,
+ WMI_VDEV_PARAM_AP_KEEPALIVE_MAX_UNRESPONSIVE_TIME_SECS,
+ WMI_VDEV_PARAM_AP_ENABLE_NAWDS,
+ WMI_VDEV_PARAM_ENABLE_RTSCTS,
+ WMI_VDEV_PARAM_TXBF,
+ WMI_VDEV_PARAM_PACKET_POWERSAVE,
+ WMI_VDEV_PARAM_DROP_UNENCRY,
+ WMI_VDEV_PARAM_TX_ENCAP_TYPE,
+ WMI_VDEV_PARAM_AP_DETECT_OUT_OF_SYNC_SLEEPING_STA_TIME_SECS,
+ WMI_VDEV_PARAM_EARLY_RX_ADJUST_ENABLE,
+ WMI_VDEV_PARAM_EARLY_RX_TGT_BMISS_NUM,
+ WMI_VDEV_PARAM_EARLY_RX_BMISS_SAMPLE_CYCLE,
+ WMI_VDEV_PARAM_EARLY_RX_SLOP_STEP,
+ WMI_VDEV_PARAM_EARLY_RX_INIT_SLOP,
+ WMI_VDEV_PARAM_EARLY_RX_ADJUST_PAUSE,
+ WMI_VDEV_PARAM_TX_PWRLIMIT,
+ WMI_VDEV_PARAM_SNR_NUM_FOR_CAL,
+ WMI_VDEV_PARAM_ROAM_FW_OFFLOAD,
+ WMI_VDEV_PARAM_ENABLE_RMC,
+ WMI_VDEV_PARAM_IBSS_MAX_BCN_LOST_MS,
+ WMI_VDEV_PARAM_MAX_RATE,
+ WMI_VDEV_PARAM_EARLY_RX_DRIFT_SAMPLE,
+ WMI_VDEV_PARAM_SET_IBSS_TX_FAIL_CNT_THR,
+ WMI_VDEV_PARAM_EBT_RESYNC_TIMEOUT,
+ WMI_VDEV_PARAM_AGGR_TRIG_EVENT_ENABLE,
+ WMI_VDEV_PARAM_IS_IBSS_POWER_SAVE_ALLOWED,
+ WMI_VDEV_PARAM_IS_POWER_COLLAPSE_ALLOWED,
+ WMI_VDEV_PARAM_IS_AWAKE_ON_TXRX_ENABLED,
+ WMI_VDEV_PARAM_INACTIVITY_CNT,
+ WMI_VDEV_PARAM_TXSP_END_INACTIVITY_TIME_MS,
+ WMI_VDEV_PARAM_DTIM_POLICY,
+ WMI_VDEV_PARAM_IBSS_PS_WARMUP_TIME_SECS,
+ WMI_VDEV_PARAM_IBSS_PS_1RX_CHAIN_IN_ATIM_WINDOW_ENABLE,
+ WMI_VDEV_PARAM_RX_LEAK_WINDOW,
+ WMI_VDEV_PARAM_STATS_AVG_FACTOR,
+ WMI_VDEV_PARAM_DISCONNECT_TH,
+ WMI_VDEV_PARAM_RTSCTS_RATE,
+ WMI_VDEV_PARAM_MCC_RTSCTS_PROTECTION_ENABLE,
+ WMI_VDEV_PARAM_MCC_BROADCAST_PROBE_ENABLE,
+ WMI_VDEV_PARAM_TXPOWER_SCALE,
+ WMI_VDEV_PARAM_TXPOWER_SCALE_DECR_DB,
+ WMI_VDEV_PARAM_MCAST2UCAST_SET,
+ WMI_VDEV_PARAM_RC_NUM_RETRIES,
+ WMI_VDEV_PARAM_CABQ_MAXDUR,
+ WMI_VDEV_PARAM_MFPTEST_SET,
+ WMI_VDEV_PARAM_RTS_FIXED_RATE,
+ WMI_VDEV_PARAM_VHT_SGIMASK,
+ WMI_VDEV_PARAM_VHT80_RATEMASK,
+ WMI_VDEV_PARAM_PROXY_STA,
+ WMI_VDEV_PARAM_VIRTUAL_CELL_MODE,
+ WMI_VDEV_PARAM_RX_DECAP_TYPE,
+ WMI_VDEV_PARAM_BW_NSS_RATEMASK,
+ WMI_VDEV_PARAM_SENSOR_AP,
+ WMI_VDEV_PARAM_BEACON_RATE,
+ WMI_VDEV_PARAM_DTIM_ENABLE_CTS,
+ WMI_VDEV_PARAM_STA_KICKOUT,
+ WMI_VDEV_PARAM_CAPABILITIES,
+ WMI_VDEV_PARAM_TSF_INCREMENT,
+ WMI_VDEV_PARAM_AMPDU_PER_AC,
+ WMI_VDEV_PARAM_RX_FILTER,
+ WMI_VDEV_PARAM_MGMT_TX_POWER,
+ WMI_VDEV_PARAM_NON_AGG_SW_RETRY_TH,
+ WMI_VDEV_PARAM_AGG_SW_RETRY_TH,
+ WMI_VDEV_PARAM_DISABLE_DYN_BW_RTS,
+ WMI_VDEV_PARAM_ATF_SSID_SCHED_POLICY,
+ WMI_VDEV_PARAM_HE_DCM,
+ WMI_VDEV_PARAM_HE_RANGE_EXT,
+ WMI_VDEV_PARAM_ENABLE_BCAST_PROBE_RESPONSE,
+ WMI_VDEV_PARAM_FILS_MAX_CHANNEL_GUARD_TIME,
+ WMI_VDEV_PARAM_BA_MODE = 0x7e,
+ WMI_VDEV_PARAM_SET_HE_SOUNDING_MODE = 0x87,
+ WMI_VDEV_PARAM_6GHZ_PARAMS = 0x99,
+ WMI_VDEV_PARAM_PROTOTYPE = 0x8000,
+ WMI_VDEV_PARAM_BSS_COLOR,
+ WMI_VDEV_PARAM_SET_HEMU_MODE,
+ WMI_VDEV_PARAM_HEOPS_0_31 = 0x8003,
+};
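+
+/*
+ * Note the sparse assignments above: the common vdev parameters run
+ * consecutively from 0x1, IDs such as WMI_VDEV_PARAM_BA_MODE (0x7e) and
+ * WMI_VDEV_PARAM_6GHZ_PARAMS (0x99) skip over values not listed here,
+ * and WMI_VDEV_PARAM_PROTOTYPE (0x8000) opens a separate range used by
+ * the HE parameters that follow it.
+ */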
+
+enum wmi_tlv_peer_flags {
+ WMI_TLV_PEER_AUTH = 0x00000001,
+ WMI_TLV_PEER_QOS = 0x00000002,
+ WMI_TLV_PEER_NEED_PTK_4_WAY = 0x00000004,
+ WMI_TLV_PEER_NEED_GTK_2_WAY = 0x00000010,
+ WMI_TLV_PEER_APSD = 0x00000800,
+ WMI_TLV_PEER_HT = 0x00001000,
+ WMI_TLV_PEER_40MHZ = 0x00002000,
+ WMI_TLV_PEER_STBC = 0x00008000,
+ WMI_TLV_PEER_LDPC = 0x00010000,
+ WMI_TLV_PEER_DYN_MIMOPS = 0x00020000,
+ WMI_TLV_PEER_STATIC_MIMOPS = 0x00040000,
+ WMI_TLV_PEER_SPATIAL_MUX = 0x00200000,
+ WMI_TLV_PEER_VHT = 0x02000000,
+ WMI_TLV_PEER_80MHZ = 0x04000000,
+ WMI_TLV_PEER_PMF = 0x08000000,
+ WMI_PEER_IS_P2P_CAPABLE = 0x20000000,
+ WMI_PEER_160MHZ = 0x40000000,
+ WMI_PEER_SAFEMODE_EN = 0x80000000,
+};
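+
+/*
+ * The peer flags are single bits intended to be OR-ed into one word,
+ * e.g. an authenticated, QoS-capable HT peer on a 40 MHz channel
+ * (illustrative combination only):
+ *
+ *	u32 flags = WMI_TLV_PEER_AUTH | WMI_TLV_PEER_QOS |
+ *		    WMI_TLV_PEER_HT | WMI_TLV_PEER_40MHZ;
+ */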
+
+/* Enum list of TLV tags for each parameter structure type. */
+enum wmi_tlv_tag {
+ WMI_TAG_LAST_RESERVED = 15,
+ WMI_TAG_FIRST_ARRAY_ENUM,
+ WMI_TAG_ARRAY_UINT32 = WMI_TAG_FIRST_ARRAY_ENUM,
+ WMI_TAG_ARRAY_BYTE,
+ WMI_TAG_ARRAY_STRUCT,
+ WMI_TAG_ARRAY_FIXED_STRUCT,
+ WMI_TAG_LAST_ARRAY_ENUM = 31,
+ WMI_TAG_SERVICE_READY_EVENT,
+ WMI_TAG_HAL_REG_CAPABILITIES,
+ WMI_TAG_WLAN_HOST_MEM_REQ,
+ WMI_TAG_READY_EVENT,
+ WMI_TAG_SCAN_EVENT,
+ WMI_TAG_PDEV_TPC_CONFIG_EVENT,
+ WMI_TAG_CHAN_INFO_EVENT,
+ WMI_TAG_COMB_PHYERR_RX_HDR,
+ WMI_TAG_VDEV_START_RESPONSE_EVENT,
+ WMI_TAG_VDEV_STOPPED_EVENT,
+ WMI_TAG_VDEV_INSTALL_KEY_COMPLETE_EVENT,
+ WMI_TAG_PEER_STA_KICKOUT_EVENT,
+ WMI_TAG_MGMT_RX_HDR,
+ WMI_TAG_TBTT_OFFSET_EVENT,
+ WMI_TAG_TX_DELBA_COMPLETE_EVENT,
+ WMI_TAG_TX_ADDBA_COMPLETE_EVENT,
+ WMI_TAG_ROAM_EVENT,
+ WMI_TAG_WOW_EVENT_INFO,
+ WMI_TAG_WOW_EVENT_INFO_SECTION_BITMAP,
+ WMI_TAG_RTT_EVENT_HEADER,
+ WMI_TAG_RTT_ERROR_REPORT_EVENT,
+ WMI_TAG_RTT_MEAS_EVENT,
+ WMI_TAG_ECHO_EVENT,
+ WMI_TAG_FTM_INTG_EVENT,
+ WMI_TAG_VDEV_GET_KEEPALIVE_EVENT,
+ WMI_TAG_GPIO_INPUT_EVENT,
+ WMI_TAG_CSA_EVENT,
+ WMI_TAG_GTK_OFFLOAD_STATUS_EVENT,
+ WMI_TAG_IGTK_INFO,
+ WMI_TAG_DCS_INTERFERENCE_EVENT,
+ WMI_TAG_ATH_DCS_CW_INT,
+ WMI_TAG_WLAN_DCS_CW_INT = /* ALIAS */
+ WMI_TAG_ATH_DCS_CW_INT,
+ WMI_TAG_ATH_DCS_WLAN_INT_STAT,
+ WMI_TAG_WLAN_DCS_IM_TGT_STATS_T = /* ALIAS */
+ WMI_TAG_ATH_DCS_WLAN_INT_STAT,
+ WMI_TAG_WLAN_PROFILE_CTX_T,
+ WMI_TAG_WLAN_PROFILE_T,
+ WMI_TAG_PDEV_QVIT_EVENT,
+ WMI_TAG_HOST_SWBA_EVENT,
+ WMI_TAG_TIM_INFO,
+ WMI_TAG_P2P_NOA_INFO,
+ WMI_TAG_STATS_EVENT,
+ WMI_TAG_AVOID_FREQ_RANGES_EVENT,
+ WMI_TAG_AVOID_FREQ_RANGE_DESC,
+ WMI_TAG_GTK_REKEY_FAIL_EVENT,
+ WMI_TAG_INIT_CMD,
+ WMI_TAG_RESOURCE_CONFIG,
+ WMI_TAG_WLAN_HOST_MEMORY_CHUNK,
+ WMI_TAG_START_SCAN_CMD,
+ WMI_TAG_STOP_SCAN_CMD,
+ WMI_TAG_SCAN_CHAN_LIST_CMD,
+ WMI_TAG_CHANNEL,
+ WMI_TAG_PDEV_SET_REGDOMAIN_CMD,
+ WMI_TAG_PDEV_SET_PARAM_CMD,
+ WMI_TAG_PDEV_SET_WMM_PARAMS_CMD,
+ WMI_TAG_WMM_PARAMS,
+ WMI_TAG_PDEV_SET_QUIET_CMD,
+ WMI_TAG_VDEV_CREATE_CMD,
+ WMI_TAG_VDEV_DELETE_CMD,
+ WMI_TAG_VDEV_START_REQUEST_CMD,
+ WMI_TAG_P2P_NOA_DESCRIPTOR,
+ WMI_TAG_P2P_GO_SET_BEACON_IE,
+ WMI_TAG_GTK_OFFLOAD_CMD,
+ WMI_TAG_VDEV_UP_CMD,
+ WMI_TAG_VDEV_STOP_CMD,
+ WMI_TAG_VDEV_DOWN_CMD,
+ WMI_TAG_VDEV_SET_PARAM_CMD,
+ WMI_TAG_VDEV_INSTALL_KEY_CMD,
+ WMI_TAG_PEER_CREATE_CMD,
+ WMI_TAG_PEER_DELETE_CMD,
+ WMI_TAG_PEER_FLUSH_TIDS_CMD,
+ WMI_TAG_PEER_SET_PARAM_CMD,
+ WMI_TAG_PEER_ASSOC_COMPLETE_CMD,
+ WMI_TAG_VHT_RATE_SET,
+ WMI_TAG_BCN_TMPL_CMD,
+ WMI_TAG_PRB_TMPL_CMD,
+ WMI_TAG_BCN_PRB_INFO,
+ WMI_TAG_PEER_TID_ADDBA_CMD,
+ WMI_TAG_PEER_TID_DELBA_CMD,
+ WMI_TAG_STA_POWERSAVE_MODE_CMD,
+ WMI_TAG_STA_POWERSAVE_PARAM_CMD,
+ WMI_TAG_STA_DTIM_PS_METHOD_CMD,
+ WMI_TAG_ROAM_SCAN_MODE,
+ WMI_TAG_ROAM_SCAN_RSSI_THRESHOLD,
+ WMI_TAG_ROAM_SCAN_PERIOD,
+ WMI_TAG_ROAM_SCAN_RSSI_CHANGE_THRESHOLD,
+ WMI_TAG_PDEV_SUSPEND_CMD,
+ WMI_TAG_PDEV_RESUME_CMD,
+ WMI_TAG_ADD_BCN_FILTER_CMD,
+ WMI_TAG_RMV_BCN_FILTER_CMD,
+ WMI_TAG_WOW_ENABLE_CMD,
+ WMI_TAG_WOW_HOSTWAKEUP_FROM_SLEEP_CMD,
+ WMI_TAG_STA_UAPSD_AUTO_TRIG_CMD,
+ WMI_TAG_STA_UAPSD_AUTO_TRIG_PARAM,
+ WMI_TAG_SET_ARP_NS_OFFLOAD_CMD,
+ WMI_TAG_ARP_OFFLOAD_TUPLE,
+ WMI_TAG_NS_OFFLOAD_TUPLE,
+ WMI_TAG_FTM_INTG_CMD,
+ WMI_TAG_STA_KEEPALIVE_CMD,
+ WMI_TAG_STA_KEEPALVE_ARP_RESPONSE,
+ WMI_TAG_P2P_SET_VENDOR_IE_DATA_CMD,
+ WMI_TAG_AP_PS_PEER_CMD,
+ WMI_TAG_PEER_RATE_RETRY_SCHED_CMD,
+ WMI_TAG_WLAN_PROFILE_TRIGGER_CMD,
+ WMI_TAG_WLAN_PROFILE_SET_HIST_INTVL_CMD,
+ WMI_TAG_WLAN_PROFILE_GET_PROF_DATA_CMD,
+ WMI_TAG_WLAN_PROFILE_ENABLE_PROFILE_ID_CMD,
+ WMI_TAG_WOW_DEL_PATTERN_CMD,
+ WMI_TAG_WOW_ADD_DEL_EVT_CMD,
+ WMI_TAG_RTT_MEASREQ_HEAD,
+ WMI_TAG_RTT_MEASREQ_BODY,
+ WMI_TAG_RTT_TSF_CMD,
+ WMI_TAG_VDEV_SPECTRAL_CONFIGURE_CMD,
+ WMI_TAG_VDEV_SPECTRAL_ENABLE_CMD,
+ WMI_TAG_REQUEST_STATS_CMD,
+ WMI_TAG_NLO_CONFIG_CMD,
+ WMI_TAG_NLO_CONFIGURED_PARAMETERS,
+ WMI_TAG_CSA_OFFLOAD_ENABLE_CMD,
+ WMI_TAG_CSA_OFFLOAD_CHANSWITCH_CMD,
+ WMI_TAG_CHATTER_SET_MODE_CMD,
+ WMI_TAG_ECHO_CMD,
+ WMI_TAG_VDEV_SET_KEEPALIVE_CMD,
+ WMI_TAG_VDEV_GET_KEEPALIVE_CMD,
+ WMI_TAG_FORCE_FW_HANG_CMD,
+ WMI_TAG_GPIO_CONFIG_CMD,
+ WMI_TAG_GPIO_OUTPUT_CMD,
+ WMI_TAG_PEER_ADD_WDS_ENTRY_CMD,
+ WMI_TAG_PEER_REMOVE_WDS_ENTRY_CMD,
+ WMI_TAG_BCN_TX_HDR,
+ WMI_TAG_BCN_SEND_FROM_HOST_CMD,
+ WMI_TAG_MGMT_TX_HDR,
+ WMI_TAG_ADDBA_CLEAR_RESP_CMD,
+ WMI_TAG_ADDBA_SEND_CMD,
+ WMI_TAG_DELBA_SEND_CMD,
+ WMI_TAG_ADDBA_SETRESPONSE_CMD,
+ WMI_TAG_SEND_SINGLEAMSDU_CMD,
+ WMI_TAG_PDEV_PKTLOG_ENABLE_CMD,
+ WMI_TAG_PDEV_PKTLOG_DISABLE_CMD,
+ WMI_TAG_PDEV_SET_HT_IE_CMD,
+ WMI_TAG_PDEV_SET_VHT_IE_CMD,
+ WMI_TAG_PDEV_SET_DSCP_TID_MAP_CMD,
+ WMI_TAG_PDEV_GREEN_AP_PS_ENABLE_CMD,
+ WMI_TAG_PDEV_GET_TPC_CONFIG_CMD,
+ WMI_TAG_PDEV_SET_BASE_MACADDR_CMD,
+ WMI_TAG_PEER_MCAST_GROUP_CMD,
+ WMI_TAG_ROAM_AP_PROFILE,
+ WMI_TAG_AP_PROFILE,
+ WMI_TAG_SCAN_SCH_PRIORITY_TABLE_CMD,
+ WMI_TAG_PDEV_DFS_ENABLE_CMD,
+ WMI_TAG_PDEV_DFS_DISABLE_CMD,
+ WMI_TAG_WOW_ADD_PATTERN_CMD,
+ WMI_TAG_WOW_BITMAP_PATTERN_T,
+ WMI_TAG_WOW_IPV4_SYNC_PATTERN_T,
+ WMI_TAG_WOW_IPV6_SYNC_PATTERN_T,
+ WMI_TAG_WOW_MAGIC_PATTERN_CMD,
+ WMI_TAG_SCAN_UPDATE_REQUEST_CMD,
+ WMI_TAG_CHATTER_PKT_COALESCING_FILTER,
+ WMI_TAG_CHATTER_COALESCING_ADD_FILTER_CMD,
+ WMI_TAG_CHATTER_COALESCING_DELETE_FILTER_CMD,
+ WMI_TAG_CHATTER_COALESCING_QUERY_CMD,
+ WMI_TAG_TXBF_CMD,
+ WMI_TAG_DEBUG_LOG_CONFIG_CMD,
+ WMI_TAG_NLO_EVENT,
+ WMI_TAG_CHATTER_QUERY_REPLY_EVENT,
+ WMI_TAG_UPLOAD_H_HDR,
+ WMI_TAG_CAPTURE_H_EVENT_HDR,
+ WMI_TAG_VDEV_WNM_SLEEPMODE_CMD,
+ WMI_TAG_VDEV_IPSEC_NATKEEPALIVE_FILTER_CMD,
+ WMI_TAG_VDEV_WMM_ADDTS_CMD,
+ WMI_TAG_VDEV_WMM_DELTS_CMD,
+ WMI_TAG_VDEV_SET_WMM_PARAMS_CMD,
+ WMI_TAG_TDLS_SET_STATE_CMD,
+ WMI_TAG_TDLS_PEER_UPDATE_CMD,
+ WMI_TAG_TDLS_PEER_EVENT,
+ WMI_TAG_TDLS_PEER_CAPABILITIES,
+ WMI_TAG_VDEV_MCC_SET_TBTT_MODE_CMD,
+ WMI_TAG_ROAM_CHAN_LIST,
+ WMI_TAG_VDEV_MCC_BCN_INTVL_CHANGE_EVENT,
+ WMI_TAG_RESMGR_ADAPTIVE_OCS_ENABLE_DISABLE_CMD,
+ WMI_TAG_RESMGR_SET_CHAN_TIME_QUOTA_CMD,
+ WMI_TAG_RESMGR_SET_CHAN_LATENCY_CMD,
+ WMI_TAG_BA_REQ_SSN_CMD,
+ WMI_TAG_BA_RSP_SSN_EVENT,
+ WMI_TAG_STA_SMPS_FORCE_MODE_CMD,
+ WMI_TAG_SET_MCASTBCAST_FILTER_CMD,
+ WMI_TAG_P2P_SET_OPPPS_CMD,
+ WMI_TAG_P2P_SET_NOA_CMD,
+ WMI_TAG_BA_REQ_SSN_CMD_SUB_STRUCT_PARAM,
+ WMI_TAG_BA_REQ_SSN_EVENT_SUB_STRUCT_PARAM,
+ WMI_TAG_STA_SMPS_PARAM_CMD,
+ WMI_TAG_VDEV_SET_GTX_PARAMS_CMD,
+ WMI_TAG_MCC_SCHED_TRAFFIC_STATS_CMD,
+ WMI_TAG_MCC_SCHED_STA_TRAFFIC_STATS,
+ WMI_TAG_OFFLOAD_BCN_TX_STATUS_EVENT,
+ WMI_TAG_P2P_NOA_EVENT,
+ WMI_TAG_HB_SET_ENABLE_CMD,
+ WMI_TAG_HB_SET_TCP_PARAMS_CMD,
+ WMI_TAG_HB_SET_TCP_PKT_FILTER_CMD,
+ WMI_TAG_HB_SET_UDP_PARAMS_CMD,
+ WMI_TAG_HB_SET_UDP_PKT_FILTER_CMD,
+ WMI_TAG_HB_IND_EVENT,
+ WMI_TAG_TX_PAUSE_EVENT,
+ WMI_TAG_RFKILL_EVENT,
+ WMI_TAG_DFS_RADAR_EVENT,
+ WMI_TAG_DFS_PHYERR_FILTER_ENA_CMD,
+ WMI_TAG_DFS_PHYERR_FILTER_DIS_CMD,
+ WMI_TAG_BATCH_SCAN_RESULT_SCAN_LIST,
+ WMI_TAG_BATCH_SCAN_RESULT_NETWORK_INFO,
+ WMI_TAG_BATCH_SCAN_ENABLE_CMD,
+ WMI_TAG_BATCH_SCAN_DISABLE_CMD,
+ WMI_TAG_BATCH_SCAN_TRIGGER_RESULT_CMD,
+ WMI_TAG_BATCH_SCAN_ENABLED_EVENT,
+ WMI_TAG_BATCH_SCAN_RESULT_EVENT,
+ WMI_TAG_VDEV_PLMREQ_START_CMD,
+ WMI_TAG_VDEV_PLMREQ_STOP_CMD,
+ WMI_TAG_THERMAL_MGMT_CMD,
+ WMI_TAG_THERMAL_MGMT_EVENT,
+ WMI_TAG_PEER_INFO_REQ_CMD,
+ WMI_TAG_PEER_INFO_EVENT,
+ WMI_TAG_PEER_INFO,
+ WMI_TAG_PEER_TX_FAIL_CNT_THR_EVENT,
+ WMI_TAG_RMC_SET_MODE_CMD,
+ WMI_TAG_RMC_SET_ACTION_PERIOD_CMD,
+ WMI_TAG_RMC_CONFIG_CMD,
+ WMI_TAG_MHF_OFFLOAD_SET_MODE_CMD,
+ WMI_TAG_MHF_OFFLOAD_PLUMB_ROUTING_TABLE_CMD,
+ WMI_TAG_ADD_PROACTIVE_ARP_RSP_PATTERN_CMD,
+ WMI_TAG_DEL_PROACTIVE_ARP_RSP_PATTERN_CMD,
+ WMI_TAG_NAN_CMD_PARAM,
+ WMI_TAG_NAN_EVENT_HDR,
+ WMI_TAG_PDEV_L1SS_TRACK_EVENT,
+ WMI_TAG_DIAG_DATA_CONTAINER_EVENT,
+ WMI_TAG_MODEM_POWER_STATE_CMD_PARAM,
+ WMI_TAG_PEER_GET_ESTIMATED_LINKSPEED_CMD,
+ WMI_TAG_PEER_ESTIMATED_LINKSPEED_EVENT,
+ WMI_TAG_AGGR_STATE_TRIG_EVENT,
+ WMI_TAG_MHF_OFFLOAD_ROUTING_TABLE_ENTRY,
+ WMI_TAG_ROAM_SCAN_CMD,
+ WMI_TAG_REQ_STATS_EXT_CMD,
+ WMI_TAG_STATS_EXT_EVENT,
+ WMI_TAG_OBSS_SCAN_ENABLE_CMD,
+ WMI_TAG_OBSS_SCAN_DISABLE_CMD,
+ WMI_TAG_OFFLOAD_PRB_RSP_TX_STATUS_EVENT,
+ WMI_TAG_PDEV_SET_LED_CONFIG_CMD,
+ WMI_TAG_HOST_AUTO_SHUTDOWN_CFG_CMD,
+ WMI_TAG_HOST_AUTO_SHUTDOWN_EVENT,
+ WMI_TAG_UPDATE_WHAL_MIB_STATS_EVENT,
+ WMI_TAG_CHAN_AVOID_UPDATE_CMD_PARAM,
+ WMI_TAG_WOW_IOAC_PKT_PATTERN_T,
+ WMI_TAG_WOW_IOAC_TMR_PATTERN_T,
+ WMI_TAG_WOW_IOAC_ADD_KEEPALIVE_CMD,
+ WMI_TAG_WOW_IOAC_DEL_KEEPALIVE_CMD,
+ WMI_TAG_WOW_IOAC_KEEPALIVE_T,
+ WMI_TAG_WOW_IOAC_ADD_PATTERN_CMD,
+ WMI_TAG_WOW_IOAC_DEL_PATTERN_CMD,
+ WMI_TAG_START_LINK_STATS_CMD,
+ WMI_TAG_CLEAR_LINK_STATS_CMD,
+ WMI_TAG_REQUEST_LINK_STATS_CMD,
+ WMI_TAG_IFACE_LINK_STATS_EVENT,
+ WMI_TAG_RADIO_LINK_STATS_EVENT,
+ WMI_TAG_PEER_STATS_EVENT,
+ WMI_TAG_CHANNEL_STATS,
+ WMI_TAG_RADIO_LINK_STATS,
+ WMI_TAG_RATE_STATS,
+ WMI_TAG_PEER_LINK_STATS,
+ WMI_TAG_WMM_AC_STATS,
+ WMI_TAG_IFACE_LINK_STATS,
+ WMI_TAG_LPI_MGMT_SNOOPING_CONFIG_CMD,
+ WMI_TAG_LPI_START_SCAN_CMD,
+ WMI_TAG_LPI_STOP_SCAN_CMD,
+ WMI_TAG_LPI_RESULT_EVENT,
+ WMI_TAG_PEER_STATE_EVENT,
+ WMI_TAG_EXTSCAN_BUCKET_CMD,
+ WMI_TAG_EXTSCAN_BUCKET_CHANNEL_EVENT,
+ WMI_TAG_EXTSCAN_START_CMD,
+ WMI_TAG_EXTSCAN_STOP_CMD,
+ WMI_TAG_EXTSCAN_CONFIGURE_WLAN_CHANGE_MONITOR_CMD,
+ WMI_TAG_EXTSCAN_WLAN_CHANGE_BSSID_PARAM_CMD,
+ WMI_TAG_EXTSCAN_CONFIGURE_HOTLIST_MONITOR_CMD,
+ WMI_TAG_EXTSCAN_GET_CACHED_RESULTS_CMD,
+ WMI_TAG_EXTSCAN_GET_WLAN_CHANGE_RESULTS_CMD,
+ WMI_TAG_EXTSCAN_SET_CAPABILITIES_CMD,
+ WMI_TAG_EXTSCAN_GET_CAPABILITIES_CMD,
+ WMI_TAG_EXTSCAN_OPERATION_EVENT,
+ WMI_TAG_EXTSCAN_START_STOP_EVENT,
+ WMI_TAG_EXTSCAN_TABLE_USAGE_EVENT,
+ WMI_TAG_EXTSCAN_WLAN_DESCRIPTOR_EVENT,
+ WMI_TAG_EXTSCAN_RSSI_INFO_EVENT,
+ WMI_TAG_EXTSCAN_CACHED_RESULTS_EVENT,
+ WMI_TAG_EXTSCAN_WLAN_CHANGE_RESULTS_EVENT,
+ WMI_TAG_EXTSCAN_WLAN_CHANGE_RESULT_BSSID_EVENT,
+ WMI_TAG_EXTSCAN_HOTLIST_MATCH_EVENT,
+ WMI_TAG_EXTSCAN_CAPABILITIES_EVENT,
+ WMI_TAG_EXTSCAN_CACHE_CAPABILITIES_EVENT,
+ WMI_TAG_EXTSCAN_WLAN_CHANGE_MONITOR_CAPABILITIES_EVENT,
+ WMI_TAG_EXTSCAN_HOTLIST_MONITOR_CAPABILITIES_EVENT,
+ WMI_TAG_D0_WOW_ENABLE_DISABLE_CMD,
+ WMI_TAG_D0_WOW_DISABLE_ACK_EVENT,
+ WMI_TAG_UNIT_TEST_CMD,
+ WMI_TAG_ROAM_OFFLOAD_TLV_PARAM,
+ WMI_TAG_ROAM_11I_OFFLOAD_TLV_PARAM,
+ WMI_TAG_ROAM_11R_OFFLOAD_TLV_PARAM,
+ WMI_TAG_ROAM_ESE_OFFLOAD_TLV_PARAM,
+ WMI_TAG_ROAM_SYNCH_EVENT,
+ WMI_TAG_ROAM_SYNCH_COMPLETE,
+ WMI_TAG_EXTWOW_ENABLE_CMD,
+ WMI_TAG_EXTWOW_SET_APP_TYPE1_PARAMS_CMD,
+ WMI_TAG_EXTWOW_SET_APP_TYPE2_PARAMS_CMD,
+ WMI_TAG_LPI_STATUS_EVENT,
+ WMI_TAG_LPI_HANDOFF_EVENT,
+ WMI_TAG_VDEV_RATE_STATS_EVENT,
+ WMI_TAG_VDEV_RATE_HT_INFO,
+ WMI_TAG_RIC_REQUEST,
+ WMI_TAG_PDEV_GET_TEMPERATURE_CMD,
+ WMI_TAG_PDEV_TEMPERATURE_EVENT,
+ WMI_TAG_SET_DHCP_SERVER_OFFLOAD_CMD,
+ WMI_TAG_TPC_CHAINMASK_CONFIG_CMD,
+ WMI_TAG_RIC_TSPEC,
+ WMI_TAG_TPC_CHAINMASK_CONFIG,
+ WMI_TAG_IPA_OFFLOAD_ENABLE_DISABLE_CMD,
+ WMI_TAG_SCAN_PROB_REQ_OUI_CMD,
+ WMI_TAG_KEY_MATERIAL,
+ WMI_TAG_TDLS_SET_OFFCHAN_MODE_CMD,
+ WMI_TAG_SET_LED_FLASHING_CMD,
+ WMI_TAG_MDNS_OFFLOAD_CMD,
+ WMI_TAG_MDNS_SET_FQDN_CMD,
+ WMI_TAG_MDNS_SET_RESP_CMD,
+ WMI_TAG_MDNS_GET_STATS_CMD,
+ WMI_TAG_MDNS_STATS_EVENT,
+ WMI_TAG_ROAM_INVOKE_CMD,
+ WMI_TAG_PDEV_RESUME_EVENT,
+ WMI_TAG_PDEV_SET_ANTENNA_DIVERSITY_CMD,
+ WMI_TAG_SAP_OFL_ENABLE_CMD,
+ WMI_TAG_SAP_OFL_ADD_STA_EVENT,
+ WMI_TAG_SAP_OFL_DEL_STA_EVENT,
+ WMI_TAG_APFIND_CMD_PARAM,
+ WMI_TAG_APFIND_EVENT_HDR,
+ WMI_TAG_OCB_SET_SCHED_CMD,
+ WMI_TAG_OCB_SET_SCHED_EVENT,
+ WMI_TAG_OCB_SET_CONFIG_CMD,
+ WMI_TAG_OCB_SET_CONFIG_RESP_EVENT,
+ WMI_TAG_OCB_SET_UTC_TIME_CMD,
+ WMI_TAG_OCB_START_TIMING_ADVERT_CMD,
+ WMI_TAG_OCB_STOP_TIMING_ADVERT_CMD,
+ WMI_TAG_OCB_GET_TSF_TIMER_CMD,
+ WMI_TAG_OCB_GET_TSF_TIMER_RESP_EVENT,
+ WMI_TAG_DCC_GET_STATS_CMD,
+ WMI_TAG_DCC_CHANNEL_STATS_REQUEST,
+ WMI_TAG_DCC_GET_STATS_RESP_EVENT,
+ WMI_TAG_DCC_CLEAR_STATS_CMD,
+ WMI_TAG_DCC_UPDATE_NDL_CMD,
+ WMI_TAG_DCC_UPDATE_NDL_RESP_EVENT,
+ WMI_TAG_DCC_STATS_EVENT,
+ WMI_TAG_OCB_CHANNEL,
+ WMI_TAG_OCB_SCHEDULE_ELEMENT,
+ WMI_TAG_DCC_NDL_STATS_PER_CHANNEL,
+ WMI_TAG_DCC_NDL_CHAN,
+ WMI_TAG_QOS_PARAMETER,
+ WMI_TAG_DCC_NDL_ACTIVE_STATE_CONFIG,
+ WMI_TAG_ROAM_SCAN_EXTENDED_THRESHOLD_PARAM,
+ WMI_TAG_ROAM_FILTER,
+ WMI_TAG_PASSPOINT_CONFIG_CMD,
+ WMI_TAG_PASSPOINT_EVENT_HDR,
+ WMI_TAG_EXTSCAN_CONFIGURE_HOTLIST_SSID_MONITOR_CMD,
+ WMI_TAG_EXTSCAN_HOTLIST_SSID_MATCH_EVENT,
+ WMI_TAG_VDEV_TSF_TSTAMP_ACTION_CMD,
+ WMI_TAG_VDEV_TSF_REPORT_EVENT,
+ WMI_TAG_GET_FW_MEM_DUMP,
+ WMI_TAG_UPDATE_FW_MEM_DUMP,
+ WMI_TAG_FW_MEM_DUMP_PARAMS,
+ WMI_TAG_DEBUG_MESG_FLUSH,
+ WMI_TAG_DEBUG_MESG_FLUSH_COMPLETE,
+ WMI_TAG_PEER_SET_RATE_REPORT_CONDITION,
+ WMI_TAG_ROAM_SUBNET_CHANGE_CONFIG,
+ WMI_TAG_VDEV_SET_IE_CMD,
+ WMI_TAG_RSSI_BREACH_MONITOR_CONFIG,
+ WMI_TAG_RSSI_BREACH_EVENT,
+ WMI_TAG_WOW_EVENT_INITIAL_WAKEUP,
+ WMI_TAG_SOC_SET_PCL_CMD,
+ WMI_TAG_SOC_SET_HW_MODE_CMD,
+ WMI_TAG_SOC_SET_HW_MODE_RESPONSE_EVENT,
+ WMI_TAG_SOC_HW_MODE_TRANSITION_EVENT,
+ WMI_TAG_VDEV_TXRX_STREAMS,
+ WMI_TAG_SOC_SET_HW_MODE_RESPONSE_VDEV_MAC_ENTRY,
+ WMI_TAG_SOC_SET_DUAL_MAC_CONFIG_CMD,
+ WMI_TAG_SOC_SET_DUAL_MAC_CONFIG_RESPONSE_EVENT,
+ WMI_TAG_WOW_IOAC_SOCK_PATTERN_T,
+ WMI_TAG_WOW_ENABLE_ICMPV6_NA_FLT_CMD,
+ WMI_TAG_DIAG_EVENT_LOG_CONFIG,
+ WMI_TAG_DIAG_EVENT_LOG_SUPPORTED_EVENT_FIXED_PARAMS,
+ WMI_TAG_PACKET_FILTER_CONFIG,
+ WMI_TAG_PACKET_FILTER_ENABLE,
+ WMI_TAG_SAP_SET_BLACKLIST_PARAM_CMD,
+ WMI_TAG_MGMT_TX_SEND_CMD,
+ WMI_TAG_MGMT_TX_COMPL_EVENT,
+ WMI_TAG_SOC_SET_ANTENNA_MODE_CMD,
+ WMI_TAG_WOW_UDP_SVC_OFLD_CMD,
+ WMI_TAG_LRO_INFO_CMD,
+ WMI_TAG_ROAM_EARLYSTOP_RSSI_THRES_PARAM,
+ WMI_TAG_SERVICE_READY_EXT_EVENT,
+ WMI_TAG_MAWC_SENSOR_REPORT_IND_CMD,
+ WMI_TAG_MAWC_ENABLE_SENSOR_EVENT,
+ WMI_TAG_ROAM_CONFIGURE_MAWC_CMD,
+ WMI_TAG_NLO_CONFIGURE_MAWC_CMD,
+ WMI_TAG_EXTSCAN_CONFIGURE_MAWC_CMD,
+ WMI_TAG_PEER_ASSOC_CONF_EVENT,
+ WMI_TAG_WOW_HOSTWAKEUP_GPIO_PIN_PATTERN_CONFIG_CMD,
+ WMI_TAG_AP_PS_EGAP_PARAM_CMD,
+ WMI_TAG_AP_PS_EGAP_INFO_EVENT,
+ WMI_TAG_PMF_OFFLOAD_SET_SA_QUERY_CMD,
+ WMI_TAG_TRANSFER_DATA_TO_FLASH_CMD,
+ WMI_TAG_TRANSFER_DATA_TO_FLASH_COMPLETE_EVENT,
+ WMI_TAG_SCPC_EVENT,
+ WMI_TAG_AP_PS_EGAP_INFO_CHAINMASK_LIST,
+ WMI_TAG_STA_SMPS_FORCE_MODE_COMPLETE_EVENT,
+ WMI_TAG_BPF_GET_CAPABILITY_CMD,
+ WMI_TAG_BPF_CAPABILITY_INFO_EVT,
+ WMI_TAG_BPF_GET_VDEV_STATS_CMD,
+ WMI_TAG_BPF_VDEV_STATS_INFO_EVT,
+ WMI_TAG_BPF_SET_VDEV_INSTRUCTIONS_CMD,
+ WMI_TAG_BPF_DEL_VDEV_INSTRUCTIONS_CMD,
+ WMI_TAG_VDEV_DELETE_RESP_EVENT,
+ WMI_TAG_PEER_DELETE_RESP_EVENT,
+ WMI_TAG_ROAM_DENSE_THRES_PARAM,
+ WMI_TAG_ENLO_CANDIDATE_SCORE_PARAM,
+ WMI_TAG_PEER_UPDATE_WDS_ENTRY_CMD,
+ WMI_TAG_VDEV_CONFIG_RATEMASK,
+ WMI_TAG_PDEV_FIPS_CMD,
+ WMI_TAG_PDEV_SMART_ANT_ENABLE_CMD,
+ WMI_TAG_PDEV_SMART_ANT_SET_RX_ANTENNA_CMD,
+ WMI_TAG_PEER_SMART_ANT_SET_TX_ANTENNA_CMD,
+ WMI_TAG_PEER_SMART_ANT_SET_TRAIN_ANTENNA_CMD,
+ WMI_TAG_PEER_SMART_ANT_SET_NODE_CONFIG_OPS_CMD,
+ WMI_TAG_PDEV_SET_ANT_SWITCH_TBL_CMD,
+ WMI_TAG_PDEV_SET_CTL_TABLE_CMD,
+ WMI_TAG_PDEV_SET_MIMOGAIN_TABLE_CMD,
+ WMI_TAG_FWTEST_SET_PARAM_CMD,
+ WMI_TAG_PEER_ATF_REQUEST,
+ WMI_TAG_VDEV_ATF_REQUEST,
+ WMI_TAG_PDEV_GET_ANI_CCK_CONFIG_CMD,
+ WMI_TAG_PDEV_GET_ANI_OFDM_CONFIG_CMD,
+ WMI_TAG_INST_RSSI_STATS_RESP,
+ WMI_TAG_MED_UTIL_REPORT_EVENT,
+ WMI_TAG_PEER_STA_PS_STATECHANGE_EVENT,
+ WMI_TAG_WDS_ADDR_EVENT,
+ WMI_TAG_PEER_RATECODE_LIST_EVENT,
+ WMI_TAG_PDEV_NFCAL_POWER_ALL_CHANNELS_EVENT,
+ WMI_TAG_PDEV_TPC_EVENT,
+ WMI_TAG_ANI_OFDM_EVENT,
+ WMI_TAG_ANI_CCK_EVENT,
+ WMI_TAG_PDEV_CHANNEL_HOPPING_EVENT,
+ WMI_TAG_PDEV_FIPS_EVENT,
+ WMI_TAG_ATF_PEER_INFO,
+ WMI_TAG_PDEV_GET_TPC_CMD,
+ WMI_TAG_VDEV_FILTER_NRP_CONFIG_CMD,
+ WMI_TAG_QBOOST_CFG_CMD,
+ WMI_TAG_PDEV_SMART_ANT_GPIO_HANDLE,
+ WMI_TAG_PEER_SMART_ANT_SET_TX_ANTENNA_SERIES,
+ WMI_TAG_PEER_SMART_ANT_SET_TRAIN_ANTENNA_PARAM,
+ WMI_TAG_PDEV_SET_ANT_CTRL_CHAIN,
+ WMI_TAG_PEER_CCK_OFDM_RATE_INFO,
+ WMI_TAG_PEER_MCS_RATE_INFO,
+ WMI_TAG_PDEV_NFCAL_POWER_ALL_CHANNELS_NFDBR,
+ WMI_TAG_PDEV_NFCAL_POWER_ALL_CHANNELS_NFDBM,
+ WMI_TAG_PDEV_NFCAL_POWER_ALL_CHANNELS_FREQNUM,
+ WMI_TAG_MU_REPORT_TOTAL_MU,
+ WMI_TAG_VDEV_SET_DSCP_TID_MAP_CMD,
+ WMI_TAG_ROAM_SET_MBO,
+ WMI_TAG_MIB_STATS_ENABLE_CMD,
+ WMI_TAG_NAN_DISC_IFACE_CREATED_EVENT,
+ WMI_TAG_NAN_DISC_IFACE_DELETED_EVENT,
+ WMI_TAG_NAN_STARTED_CLUSTER_EVENT,
+ WMI_TAG_NAN_JOINED_CLUSTER_EVENT,
+ WMI_TAG_NDI_GET_CAP_REQ,
+ WMI_TAG_NDP_INITIATOR_REQ,
+ WMI_TAG_NDP_RESPONDER_REQ,
+ WMI_TAG_NDP_END_REQ,
+ WMI_TAG_NDI_CAP_RSP_EVENT,
+ WMI_TAG_NDP_INITIATOR_RSP_EVENT,
+ WMI_TAG_NDP_RESPONDER_RSP_EVENT,
+ WMI_TAG_NDP_END_RSP_EVENT,
+ WMI_TAG_NDP_INDICATION_EVENT,
+ WMI_TAG_NDP_CONFIRM_EVENT,
+ WMI_TAG_NDP_END_INDICATION_EVENT,
+ WMI_TAG_VDEV_SET_QUIET_CMD,
+ WMI_TAG_PDEV_SET_PCL_CMD,
+ WMI_TAG_PDEV_SET_HW_MODE_CMD,
+ WMI_TAG_PDEV_SET_MAC_CONFIG_CMD,
+ WMI_TAG_PDEV_SET_ANTENNA_MODE_CMD,
+ WMI_TAG_PDEV_SET_HW_MODE_RESPONSE_EVENT,
+ WMI_TAG_PDEV_HW_MODE_TRANSITION_EVENT,
+ WMI_TAG_PDEV_SET_HW_MODE_RESPONSE_VDEV_MAC_ENTRY,
+ WMI_TAG_PDEV_SET_MAC_CONFIG_RESPONSE_EVENT,
+ WMI_TAG_COEX_CONFIG_CMD,
+ WMI_TAG_CONFIG_ENHANCED_MCAST_FILTER,
+ WMI_TAG_CHAN_AVOID_RPT_ALLOW_CMD,
+ WMI_TAG_SET_PERIODIC_CHANNEL_STATS_CONFIG,
+ WMI_TAG_VDEV_SET_CUSTOM_AGGR_SIZE_CMD,
+ WMI_TAG_PDEV_WAL_POWER_DEBUG_CMD,
+ WMI_TAG_MAC_PHY_CAPABILITIES,
+ WMI_TAG_HW_MODE_CAPABILITIES,
+ WMI_TAG_SOC_MAC_PHY_HW_MODE_CAPS,
+ WMI_TAG_HAL_REG_CAPABILITIES_EXT,
+ WMI_TAG_SOC_HAL_REG_CAPABILITIES,
+ WMI_TAG_VDEV_WISA_CMD,
+ WMI_TAG_TX_POWER_LEVEL_STATS_EVT,
+ WMI_TAG_SCAN_ADAPTIVE_DWELL_PARAMETERS_TLV,
+ WMI_TAG_SCAN_ADAPTIVE_DWELL_CONFIG,
+ WMI_TAG_WOW_SET_ACTION_WAKE_UP_CMD,
+ WMI_TAG_NDP_END_RSP_PER_NDI,
+ WMI_TAG_PEER_BWF_REQUEST,
+ WMI_TAG_BWF_PEER_INFO,
+ WMI_TAG_DBGLOG_TIME_STAMP_SYNC_CMD,
+ WMI_TAG_RMC_SET_LEADER_CMD,
+ WMI_TAG_RMC_MANUAL_LEADER_EVENT,
+ WMI_TAG_PER_CHAIN_RSSI_STATS,
+ WMI_TAG_RSSI_STATS,
+ WMI_TAG_P2P_LO_START_CMD,
+ WMI_TAG_P2P_LO_STOP_CMD,
+ WMI_TAG_P2P_LO_STOPPED_EVENT,
+ WMI_TAG_REORDER_QUEUE_SETUP_CMD,
+ WMI_TAG_REORDER_QUEUE_REMOVE_CMD,
+ WMI_TAG_SET_MULTIPLE_MCAST_FILTER_CMD,
+ WMI_TAG_MGMT_TX_COMPL_BUNDLE_EVENT,
+ WMI_TAG_READ_DATA_FROM_FLASH_CMD,
+ WMI_TAG_READ_DATA_FROM_FLASH_EVENT,
+ WMI_TAG_PDEV_SET_REORDER_TIMEOUT_VAL_CMD,
+ WMI_TAG_PEER_SET_RX_BLOCKSIZE_CMD,
+ WMI_TAG_PDEV_SET_WAKEUP_CONFIG_CMDID,
+ WMI_TAG_TLV_BUF_LEN_PARAM,
+ WMI_TAG_SERVICE_AVAILABLE_EVENT,
+ WMI_TAG_PEER_ANTDIV_INFO_REQ_CMD,
+ WMI_TAG_PEER_ANTDIV_INFO_EVENT,
+ WMI_TAG_PEER_ANTDIV_INFO,
+ WMI_TAG_PDEV_GET_ANTDIV_STATUS_CMD,
+ WMI_TAG_PDEV_ANTDIV_STATUS_EVENT,
+ WMI_TAG_MNT_FILTER_CMD,
+ WMI_TAG_GET_CHIP_POWER_STATS_CMD,
+ WMI_TAG_PDEV_CHIP_POWER_STATS_EVENT,
+ WMI_TAG_COEX_GET_ANTENNA_ISOLATION_CMD,
+ WMI_TAG_COEX_REPORT_ISOLATION_EVENT,
+ WMI_TAG_CHAN_CCA_STATS,
+ WMI_TAG_PEER_SIGNAL_STATS,
+ WMI_TAG_TX_STATS,
+ WMI_TAG_PEER_AC_TX_STATS,
+ WMI_TAG_RX_STATS,
+ WMI_TAG_PEER_AC_RX_STATS,
+ WMI_TAG_REPORT_STATS_EVENT,
+ WMI_TAG_CHAN_CCA_STATS_THRESH,
+ WMI_TAG_PEER_SIGNAL_STATS_THRESH,
+ WMI_TAG_TX_STATS_THRESH,
+ WMI_TAG_RX_STATS_THRESH,
+ WMI_TAG_PDEV_SET_STATS_THRESHOLD_CMD,
+ WMI_TAG_REQUEST_WLAN_STATS_CMD,
+ WMI_TAG_RX_AGGR_FAILURE_EVENT,
+ WMI_TAG_RX_AGGR_FAILURE_INFO,
+ WMI_TAG_VDEV_ENCRYPT_DECRYPT_DATA_REQ_CMD,
+ WMI_TAG_VDEV_ENCRYPT_DECRYPT_DATA_RESP_EVENT,
+ WMI_TAG_PDEV_BAND_TO_MAC,
+ WMI_TAG_TBTT_OFFSET_INFO,
+ WMI_TAG_TBTT_OFFSET_EXT_EVENT,
+ WMI_TAG_SAR_LIMITS_CMD,
+ WMI_TAG_SAR_LIMIT_CMD_ROW,
+ WMI_TAG_PDEV_DFS_PHYERR_OFFLOAD_ENABLE_CMD,
+ WMI_TAG_PDEV_DFS_PHYERR_OFFLOAD_DISABLE_CMD,
+ WMI_TAG_VDEV_ADFS_CH_CFG_CMD,
+ WMI_TAG_VDEV_ADFS_OCAC_ABORT_CMD,
+ WMI_TAG_PDEV_DFS_RADAR_DETECTION_EVENT,
+ WMI_TAG_VDEV_ADFS_OCAC_COMPLETE_EVENT,
+ WMI_TAG_VDEV_DFS_CAC_COMPLETE_EVENT,
+ WMI_TAG_VENDOR_OUI,
+ WMI_TAG_REQUEST_RCPI_CMD,
+ WMI_TAG_UPDATE_RCPI_EVENT,
+ WMI_TAG_REQUEST_PEER_STATS_INFO_CMD,
+ WMI_TAG_PEER_STATS_INFO,
+ WMI_TAG_PEER_STATS_INFO_EVENT,
+ WMI_TAG_PKGID_EVENT,
+ WMI_TAG_CONNECTED_NLO_RSSI_PARAMS,
+ WMI_TAG_SET_CURRENT_COUNTRY_CMD,
+ WMI_TAG_REGULATORY_RULE_STRUCT,
+ WMI_TAG_REG_CHAN_LIST_CC_EVENT,
+ WMI_TAG_11D_SCAN_START_CMD,
+ WMI_TAG_11D_SCAN_STOP_CMD,
+ WMI_TAG_11D_NEW_COUNTRY_EVENT,
+ WMI_TAG_REQUEST_RADIO_CHAN_STATS_CMD,
+ WMI_TAG_RADIO_CHAN_STATS,
+ WMI_TAG_RADIO_CHAN_STATS_EVENT,
+ WMI_TAG_ROAM_PER_CONFIG,
+ WMI_TAG_VDEV_ADD_MAC_ADDR_TO_RX_FILTER_CMD,
+ WMI_TAG_VDEV_ADD_MAC_ADDR_TO_RX_FILTER_STATUS_EVENT,
+ WMI_TAG_BPF_SET_VDEV_ACTIVE_MODE_CMD,
+ WMI_TAG_HW_DATA_FILTER_CMD,
+ WMI_TAG_CONNECTED_NLO_BSS_BAND_RSSI_PREF,
+ WMI_TAG_PEER_OPER_MODE_CHANGE_EVENT,
+ WMI_TAG_CHIP_POWER_SAVE_FAILURE_DETECTED,
+ WMI_TAG_PDEV_MULTIPLE_VDEV_RESTART_REQUEST_CMD,
+ WMI_TAG_PDEV_CSA_SWITCH_COUNT_STATUS_EVENT,
+ WMI_TAG_PDEV_UPDATE_PKT_ROUTING_CMD,
+ WMI_TAG_PDEV_CHECK_CAL_VERSION_CMD,
+ WMI_TAG_PDEV_CHECK_CAL_VERSION_EVENT,
+ WMI_TAG_PDEV_SET_DIVERSITY_GAIN_CMD,
+ WMI_TAG_MAC_PHY_CHAINMASK_COMBO,
+ WMI_TAG_MAC_PHY_CHAINMASK_CAPABILITY,
+ WMI_TAG_VDEV_SET_ARP_STATS_CMD,
+ WMI_TAG_VDEV_GET_ARP_STATS_CMD,
+ WMI_TAG_VDEV_GET_ARP_STATS_EVENT,
+ WMI_TAG_IFACE_OFFLOAD_STATS,
+ WMI_TAG_REQUEST_STATS_CMD_SUB_STRUCT_PARAM,
+ WMI_TAG_RSSI_CTL_EXT,
+ WMI_TAG_SINGLE_PHYERR_EXT_RX_HDR,
+ WMI_TAG_COEX_BT_ACTIVITY_EVENT,
+ WMI_TAG_VDEV_GET_TX_POWER_CMD,
+ WMI_TAG_VDEV_TX_POWER_EVENT,
+ WMI_TAG_OFFCHAN_DATA_TX_COMPL_EVENT,
+ WMI_TAG_OFFCHAN_DATA_TX_SEND_CMD,
+ WMI_TAG_TX_SEND_PARAMS,
+ WMI_TAG_HE_RATE_SET,
+ WMI_TAG_CONGESTION_STATS,
+ WMI_TAG_SET_INIT_COUNTRY_CMD,
+ WMI_TAG_SCAN_DBS_DUTY_CYCLE,
+ WMI_TAG_SCAN_DBS_DUTY_CYCLE_PARAM_TLV,
+ WMI_TAG_PDEV_DIV_GET_RSSI_ANTID,
+ WMI_TAG_THERM_THROT_CONFIG_REQUEST,
+ WMI_TAG_THERM_THROT_LEVEL_CONFIG_INFO,
+ WMI_TAG_THERM_THROT_STATS_EVENT,
+ WMI_TAG_THERM_THROT_LEVEL_STATS_INFO,
+ WMI_TAG_PDEV_DIV_RSSI_ANTID_EVENT,
+ WMI_TAG_OEM_DMA_RING_CAPABILITIES,
+ WMI_TAG_OEM_DMA_RING_CFG_REQ,
+ WMI_TAG_OEM_DMA_RING_CFG_RSP,
+ WMI_TAG_OEM_INDIRECT_DATA,
+ WMI_TAG_OEM_DMA_BUF_RELEASE,
+ WMI_TAG_OEM_DMA_BUF_RELEASE_ENTRY,
+ WMI_TAG_PDEV_BSS_CHAN_INFO_REQUEST,
+ WMI_TAG_PDEV_BSS_CHAN_INFO_EVENT,
+ WMI_TAG_ROAM_LCA_DISALLOW_CONFIG,
+ WMI_TAG_VDEV_LIMIT_OFFCHAN_CMD,
+ WMI_TAG_ROAM_RSSI_REJECTION_OCE_CONFIG,
+ WMI_TAG_UNIT_TEST_EVENT,
+ WMI_TAG_ROAM_FILS_OFFLOAD,
+ WMI_TAG_PDEV_UPDATE_PMK_CACHE_CMD,
+ WMI_TAG_PMK_CACHE,
+ WMI_TAG_PDEV_UPDATE_FILS_HLP_PKT_CMD,
+ WMI_TAG_ROAM_FILS_SYNCH,
+ WMI_TAG_GTK_OFFLOAD_EXTENDED,
+ WMI_TAG_ROAM_BG_SCAN_ROAMING,
+ WMI_TAG_OIC_PING_OFFLOAD_PARAMS_CMD,
+ WMI_TAG_OIC_PING_OFFLOAD_SET_ENABLE_CMD,
+ WMI_TAG_OIC_PING_HANDOFF_EVENT,
+ WMI_TAG_DHCP_LEASE_RENEW_OFFLOAD_CMD,
+ WMI_TAG_DHCP_LEASE_RENEW_EVENT,
+ WMI_TAG_BTM_CONFIG,
+ WMI_TAG_DEBUG_MESG_FW_DATA_STALL,
+ WMI_TAG_WLM_CONFIG_CMD,
+ WMI_TAG_PDEV_UPDATE_CTLTABLE_REQUEST,
+ WMI_TAG_PDEV_UPDATE_CTLTABLE_EVENT,
+ WMI_TAG_ROAM_CND_SCORING_PARAM,
+ WMI_TAG_PDEV_CONFIG_VENDOR_OUI_ACTION,
+ WMI_TAG_VENDOR_OUI_EXT,
+ WMI_TAG_ROAM_SYNCH_FRAME_EVENT,
+ WMI_TAG_FD_SEND_FROM_HOST_CMD,
+ WMI_TAG_ENABLE_FILS_CMD,
+ WMI_TAG_HOST_SWFDA_EVENT,
+ WMI_TAG_BCN_OFFLOAD_CTRL_CMD,
+ WMI_TAG_PDEV_SET_AC_TX_QUEUE_OPTIMIZED_CMD,
+ WMI_TAG_STATS_PERIOD,
+ WMI_TAG_NDL_SCHEDULE_UPDATE,
+ WMI_TAG_PEER_TID_MSDUQ_QDEPTH_THRESH_UPDATE_CMD,
+ WMI_TAG_MSDUQ_QDEPTH_THRESH_UPDATE,
+ WMI_TAG_PDEV_SET_RX_FILTER_PROMISCUOUS_CMD,
+ WMI_TAG_SAR2_RESULT_EVENT,
+ WMI_TAG_SAR_CAPABILITIES,
+ WMI_TAG_SAP_OBSS_DETECTION_CFG_CMD,
+ WMI_TAG_SAP_OBSS_DETECTION_INFO_EVT,
+ WMI_TAG_DMA_RING_CAPABILITIES,
+ WMI_TAG_DMA_RING_CFG_REQ,
+ WMI_TAG_DMA_RING_CFG_RSP,
+ WMI_TAG_DMA_BUF_RELEASE,
+ WMI_TAG_DMA_BUF_RELEASE_ENTRY,
+ WMI_TAG_SAR_GET_LIMITS_CMD,
+ WMI_TAG_SAR_GET_LIMITS_EVENT,
+ WMI_TAG_SAR_GET_LIMITS_EVENT_ROW,
+ WMI_TAG_OFFLOAD_11K_REPORT,
+ WMI_TAG_INVOKE_NEIGHBOR_REPORT,
+ WMI_TAG_NEIGHBOR_REPORT_OFFLOAD,
+ WMI_TAG_VDEV_SET_CONNECTIVITY_CHECK_STATS,
+ WMI_TAG_VDEV_GET_CONNECTIVITY_CHECK_STATS,
+ WMI_TAG_BPF_SET_VDEV_ENABLE_CMD,
+ WMI_TAG_BPF_SET_VDEV_WORK_MEMORY_CMD,
+ WMI_TAG_BPF_GET_VDEV_WORK_MEMORY_CMD,
+ WMI_TAG_BPF_GET_VDEV_WORK_MEMORY_RESP_EVT,
+ WMI_TAG_PDEV_GET_NFCAL_POWER,
+ WMI_TAG_BSS_COLOR_CHANGE_ENABLE,
+ WMI_TAG_OBSS_COLOR_COLLISION_DET_CONFIG,
+ WMI_TAG_OBSS_COLOR_COLLISION_EVT,
+ WMI_TAG_RUNTIME_DPD_RECAL_CMD,
+ WMI_TAG_TWT_ENABLE_CMD,
+ WMI_TAG_TWT_DISABLE_CMD,
+ WMI_TAG_TWT_ADD_DIALOG_CMD,
+ WMI_TAG_TWT_DEL_DIALOG_CMD,
+ WMI_TAG_TWT_PAUSE_DIALOG_CMD,
+ WMI_TAG_TWT_RESUME_DIALOG_CMD,
+ WMI_TAG_TWT_ENABLE_COMPLETE_EVENT,
+ WMI_TAG_TWT_DISABLE_COMPLETE_EVENT,
+ WMI_TAG_TWT_ADD_DIALOG_COMPLETE_EVENT,
+ WMI_TAG_TWT_DEL_DIALOG_COMPLETE_EVENT,
+ WMI_TAG_TWT_PAUSE_DIALOG_COMPLETE_EVENT,
+ WMI_TAG_TWT_RESUME_DIALOG_COMPLETE_EVENT,
+ WMI_TAG_REQUEST_ROAM_SCAN_STATS_CMD,
+ WMI_TAG_ROAM_SCAN_STATS_EVENT,
+ WMI_TAG_PEER_TID_CONFIGURATIONS_CMD,
+ WMI_TAG_VDEV_SET_CUSTOM_SW_RETRY_TH_CMD,
+ WMI_TAG_GET_TPC_POWER_CMD,
+ WMI_TAG_GET_TPC_POWER_EVENT,
+ WMI_TAG_DMA_BUF_RELEASE_SPECTRAL_META_DATA,
+ WMI_TAG_MOTION_DET_CONFIG_PARAMS_CMD,
+ WMI_TAG_MOTION_DET_BASE_LINE_CONFIG_PARAMS_CMD,
+ WMI_TAG_MOTION_DET_START_STOP_CMD,
+ WMI_TAG_MOTION_DET_BASE_LINE_START_STOP_CMD,
+ WMI_TAG_MOTION_DET_EVENT,
+ WMI_TAG_MOTION_DET_BASE_LINE_EVENT,
+ WMI_TAG_NDP_TRANSPORT_IP,
+ WMI_TAG_OBSS_SPATIAL_REUSE_SET_CMD,
+ WMI_TAG_ESP_ESTIMATE_EVENT,
+ WMI_TAG_NAN_HOST_CONFIG,
+ WMI_TAG_SPECTRAL_BIN_SCALING_PARAMS,
+ WMI_TAG_PEER_CFR_CAPTURE_CMD,
+ WMI_TAG_PEER_CHAN_WIDTH_SWITCH_CMD,
+ WMI_TAG_CHAN_WIDTH_PEER_LIST,
+ WMI_TAG_OBSS_SPATIAL_REUSE_SET_DEF_OBSS_THRESH_CMD,
+ WMI_TAG_PDEV_HE_TB_ACTION_FRM_CMD,
+ WMI_TAG_PEER_EXTD2_STATS,
+ WMI_TAG_HPCS_PULSE_START_CMD,
+ WMI_TAG_PDEV_CTL_FAILSAFE_CHECK_EVENT,
+ WMI_TAG_VDEV_CHAINMASK_CONFIG_CMD,
+ WMI_TAG_VDEV_BCN_OFFLOAD_QUIET_CONFIG_CMD,
+ WMI_TAG_NAN_EVENT_INFO,
+ WMI_TAG_NDP_CHANNEL_INFO,
+ WMI_TAG_NDP_CMD,
+ WMI_TAG_NDP_EVENT,
+ /* TODO add all the missing cmds */
+ WMI_TAG_PDEV_PEER_PKTLOG_FILTER_CMD = 0x301,
+ WMI_TAG_PDEV_PEER_PKTLOG_FILTER_INFO,
+ WMI_TAG_FILS_DISCOVERY_TMPL_CMD = 0x344,
+ WMI_TAG_MAC_PHY_CAPABILITIES_EXT = 0x36F,
+ WMI_TAG_REGULATORY_RULE_EXT_STRUCT = 0x3A9,
+ WMI_TAG_REG_CHAN_LIST_CC_EXT_EVENT,
+ WMI_TAG_MAX
+};
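+
+/*
+ * Every parameter structure inside a WMI message is preceded by a
+ * 32-bit TLV header that carries one of the tags above plus the payload
+ * length. A minimal decode sketch, assuming the tag-in-upper-16 /
+ * length-in-lower-16 packing this firmware family commonly uses:
+ *
+ *	u32 hdr = le32_to_cpu(tlv->header);
+ *	u16 tag = hdr >> 16;	one of enum wmi_tlv_tag
+ *	u16 len = hdr & 0xffff;	bytes of payload that follow
+ */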
+
+enum wmi_tlv_service {
+ WMI_TLV_SERVICE_BEACON_OFFLOAD = 0,
+ WMI_TLV_SERVICE_SCAN_OFFLOAD = 1,
+ WMI_TLV_SERVICE_ROAM_SCAN_OFFLOAD = 2,
+ WMI_TLV_SERVICE_BCN_MISS_OFFLOAD = 3,
+ WMI_TLV_SERVICE_STA_PWRSAVE = 4,
+ WMI_TLV_SERVICE_STA_ADVANCED_PWRSAVE = 5,
+ WMI_TLV_SERVICE_AP_UAPSD = 6,
+ WMI_TLV_SERVICE_AP_DFS = 7,
+ WMI_TLV_SERVICE_11AC = 8,
+ WMI_TLV_SERVICE_BLOCKACK = 9,
+ WMI_TLV_SERVICE_PHYERR = 10,
+ WMI_TLV_SERVICE_BCN_FILTER = 11,
+ WMI_TLV_SERVICE_RTT = 12,
+ WMI_TLV_SERVICE_WOW = 13,
+ WMI_TLV_SERVICE_RATECTRL_CACHE = 14,
+ WMI_TLV_SERVICE_IRAM_TIDS = 15,
+ WMI_TLV_SERVICE_ARPNS_OFFLOAD = 16,
+ WMI_TLV_SERVICE_NLO = 17,
+ WMI_TLV_SERVICE_GTK_OFFLOAD = 18,
+ WMI_TLV_SERVICE_SCAN_SCH = 19,
+ WMI_TLV_SERVICE_CSA_OFFLOAD = 20,
+ WMI_TLV_SERVICE_CHATTER = 21,
+ WMI_TLV_SERVICE_COEX_FREQAVOID = 22,
+ WMI_TLV_SERVICE_PACKET_POWER_SAVE = 23,
+ WMI_TLV_SERVICE_FORCE_FW_HANG = 24,
+ WMI_TLV_SERVICE_GPIO = 25,
+ WMI_TLV_SERVICE_STA_DTIM_PS_MODULATED_DTIM = 26,
+ WMI_STA_UAPSD_BASIC_AUTO_TRIG = 27,
+ WMI_STA_UAPSD_VAR_AUTO_TRIG = 28,
+ WMI_TLV_SERVICE_STA_KEEP_ALIVE = 29,
+ WMI_TLV_SERVICE_TX_ENCAP = 30,
+ WMI_TLV_SERVICE_AP_PS_DETECT_OUT_OF_SYNC = 31,
+ WMI_TLV_SERVICE_EARLY_RX = 32,
+ WMI_TLV_SERVICE_STA_SMPS = 33,
+ WMI_TLV_SERVICE_FWTEST = 34,
+ WMI_TLV_SERVICE_STA_WMMAC = 35,
+ WMI_TLV_SERVICE_TDLS = 36,
+ WMI_TLV_SERVICE_BURST = 37,
+ WMI_TLV_SERVICE_MCC_BCN_INTERVAL_CHANGE = 38,
+ WMI_TLV_SERVICE_ADAPTIVE_OCS = 39,
+ WMI_TLV_SERVICE_BA_SSN_SUPPORT = 40,
+ WMI_TLV_SERVICE_FILTER_IPSEC_NATKEEPALIVE = 41,
+ WMI_TLV_SERVICE_WLAN_HB = 42,
+ WMI_TLV_SERVICE_LTE_ANT_SHARE_SUPPORT = 43,
+ WMI_TLV_SERVICE_BATCH_SCAN = 44,
+ WMI_TLV_SERVICE_QPOWER = 45,
+ WMI_TLV_SERVICE_PLMREQ = 46,
+ WMI_TLV_SERVICE_THERMAL_MGMT = 47,
+ WMI_TLV_SERVICE_RMC = 48,
+ WMI_TLV_SERVICE_MHF_OFFLOAD = 49,
+ WMI_TLV_SERVICE_COEX_SAR = 50,
+ WMI_TLV_SERVICE_BCN_TXRATE_OVERRIDE = 51,
+ WMI_TLV_SERVICE_NAN = 52,
+ WMI_TLV_SERVICE_L1SS_STAT = 53,
+ WMI_TLV_SERVICE_ESTIMATE_LINKSPEED = 54,
+ WMI_TLV_SERVICE_OBSS_SCAN = 55,
+ WMI_TLV_SERVICE_TDLS_OFFCHAN = 56,
+ WMI_TLV_SERVICE_TDLS_UAPSD_BUFFER_STA = 57,
+ WMI_TLV_SERVICE_TDLS_UAPSD_SLEEP_STA = 58,
+ WMI_TLV_SERVICE_IBSS_PWRSAVE = 59,
+ WMI_TLV_SERVICE_LPASS = 60,
+ WMI_TLV_SERVICE_EXTSCAN = 61,
+ WMI_TLV_SERVICE_D0WOW = 62,
+ WMI_TLV_SERVICE_HSOFFLOAD = 63,
+ WMI_TLV_SERVICE_ROAM_HO_OFFLOAD = 64,
+ WMI_TLV_SERVICE_RX_FULL_REORDER = 65,
+ WMI_TLV_SERVICE_DHCP_OFFLOAD = 66,
+ WMI_TLV_SERVICE_STA_RX_IPA_OFFLOAD_SUPPORT = 67,
+ WMI_TLV_SERVICE_MDNS_OFFLOAD = 68,
+ WMI_TLV_SERVICE_SAP_AUTH_OFFLOAD = 69,
+ WMI_TLV_SERVICE_DUAL_BAND_SIMULTANEOUS_SUPPORT = 70,
+ WMI_TLV_SERVICE_OCB = 71,
+ WMI_TLV_SERVICE_AP_ARPNS_OFFLOAD = 72,
+ WMI_TLV_SERVICE_PER_BAND_CHAINMASK_SUPPORT = 73,
+ WMI_TLV_SERVICE_PACKET_FILTER_OFFLOAD = 74,
+ WMI_TLV_SERVICE_MGMT_TX_HTT = 75,
+ WMI_TLV_SERVICE_MGMT_TX_WMI = 76,
+ WMI_TLV_SERVICE_EXT_MSG = 77,
+ WMI_TLV_SERVICE_MAWC = 78,
+ WMI_TLV_SERVICE_PEER_ASSOC_CONF = 79,
+ WMI_TLV_SERVICE_EGAP = 80,
+ WMI_TLV_SERVICE_STA_PMF_OFFLOAD = 81,
+ WMI_TLV_SERVICE_UNIFIED_WOW_CAPABILITY = 82,
+ WMI_TLV_SERVICE_ENHANCED_PROXY_STA = 83,
+ WMI_TLV_SERVICE_ATF = 84,
+ WMI_TLV_SERVICE_COEX_GPIO = 85,
+ WMI_TLV_SERVICE_AUX_SPECTRAL_INTF = 86,
+ WMI_TLV_SERVICE_AUX_CHAN_LOAD_INTF = 87,
+ WMI_TLV_SERVICE_BSS_CHANNEL_INFO_64 = 88,
+ WMI_TLV_SERVICE_ENTERPRISE_MESH = 89,
+ WMI_TLV_SERVICE_RESTRT_CHNL_SUPPORT = 90,
+ WMI_TLV_SERVICE_BPF_OFFLOAD = 91,
+ WMI_TLV_SERVICE_SYNC_DELETE_CMDS = 92,
+ WMI_TLV_SERVICE_SMART_ANTENNA_SW_SUPPORT = 93,
+ WMI_TLV_SERVICE_SMART_ANTENNA_HW_SUPPORT = 94,
+ WMI_TLV_SERVICE_RATECTRL_LIMIT_MAX_MIN_RATES = 95,
+ WMI_TLV_SERVICE_NAN_DATA = 96,
+ WMI_TLV_SERVICE_NAN_RTT = 97,
+ WMI_TLV_SERVICE_11AX = 98,
+ WMI_TLV_SERVICE_DEPRECATED_REPLACE = 99,
+ WMI_TLV_SERVICE_TDLS_CONN_TRACKER_IN_HOST_MODE = 100,
+ WMI_TLV_SERVICE_ENHANCED_MCAST_FILTER = 101,
+ WMI_TLV_SERVICE_PERIODIC_CHAN_STAT_SUPPORT = 102,
+ WMI_TLV_SERVICE_MESH_11S = 103,
+ WMI_TLV_SERVICE_HALF_RATE_QUARTER_RATE_SUPPORT = 104,
+ WMI_TLV_SERVICE_VDEV_RX_FILTER = 105,
+ WMI_TLV_SERVICE_P2P_LISTEN_OFFLOAD_SUPPORT = 106,
+ WMI_TLV_SERVICE_MARK_FIRST_WAKEUP_PACKET = 107,
+ WMI_TLV_SERVICE_MULTIPLE_MCAST_FILTER_SET = 108,
+ WMI_TLV_SERVICE_HOST_MANAGED_RX_REORDER = 109,
+ WMI_TLV_SERVICE_FLASH_RDWR_SUPPORT = 110,
+ WMI_TLV_SERVICE_WLAN_STATS_REPORT = 111,
+ WMI_TLV_SERVICE_TX_MSDU_ID_NEW_PARTITION_SUPPORT = 112,
+ WMI_TLV_SERVICE_DFS_PHYERR_OFFLOAD = 113,
+ WMI_TLV_SERVICE_RCPI_SUPPORT = 114,
+ WMI_TLV_SERVICE_FW_MEM_DUMP_SUPPORT = 115,
+ WMI_TLV_SERVICE_PEER_STATS_INFO = 116,
+ WMI_TLV_SERVICE_REGULATORY_DB = 117,
+ WMI_TLV_SERVICE_11D_OFFLOAD = 118,
+ WMI_TLV_SERVICE_HW_DATA_FILTERING = 119,
+ WMI_TLV_SERVICE_MULTIPLE_VDEV_RESTART = 120,
+ WMI_TLV_SERVICE_PKT_ROUTING = 121,
+ WMI_TLV_SERVICE_CHECK_CAL_VERSION = 122,
+ WMI_TLV_SERVICE_OFFCHAN_TX_WMI = 123,
+ WMI_TLV_SERVICE_8SS_TX_BFEE = 124,
+ WMI_TLV_SERVICE_EXTENDED_NSS_SUPPORT = 125,
+ WMI_TLV_SERVICE_ACK_TIMEOUT = 126,
+ WMI_TLV_SERVICE_PDEV_BSS_CHANNEL_INFO_64 = 127,
+
+ WMI_MAX_SERVICE = 128,
+
+ WMI_TLV_SERVICE_CHAN_LOAD_INFO = 128,
+ WMI_TLV_SERVICE_TX_PPDU_INFO_STATS_SUPPORT = 129,
+ WMI_TLV_SERVICE_VDEV_LIMIT_OFFCHAN_SUPPORT = 130,
+ WMI_TLV_SERVICE_FILS_SUPPORT = 131,
+ WMI_TLV_SERVICE_WLAN_OIC_PING_OFFLOAD = 132,
+ WMI_TLV_SERVICE_WLAN_DHCP_RENEW = 133,
+ WMI_TLV_SERVICE_MAWC_SUPPORT = 134,
+ WMI_TLV_SERVICE_VDEV_LATENCY_CONFIG = 135,
+ WMI_TLV_SERVICE_PDEV_UPDATE_CTLTABLE_SUPPORT = 136,
+ WMI_TLV_SERVICE_PKTLOG_SUPPORT_OVER_HTT = 137,
+ WMI_TLV_SERVICE_VDEV_MULTI_GROUP_KEY_SUPPORT = 138,
+ WMI_TLV_SERVICE_SCAN_PHYMODE_SUPPORT = 139,
+ WMI_TLV_SERVICE_THERM_THROT = 140,
+ WMI_TLV_SERVICE_BCN_OFFLOAD_START_STOP_SUPPORT = 141,
+ WMI_TLV_SERVICE_WOW_WAKEUP_BY_TIMER_PATTERN = 142,
+ WMI_TLV_SERVICE_PEER_MAP_UNMAP_V2_SUPPORT = 143,
+ WMI_TLV_SERVICE_OFFCHAN_DATA_TID_SUPPORT = 144,
+ WMI_TLV_SERVICE_RX_PROMISC_ENABLE_SUPPORT = 145,
+ WMI_TLV_SERVICE_SUPPORT_DIRECT_DMA = 146,
+ WMI_TLV_SERVICE_AP_OBSS_DETECTION_OFFLOAD = 147,
+ WMI_TLV_SERVICE_11K_NEIGHBOUR_REPORT_SUPPORT = 148,
+ WMI_TLV_SERVICE_LISTEN_INTERVAL_OFFLOAD_SUPPORT = 149,
+ WMI_TLV_SERVICE_BSS_COLOR_OFFLOAD = 150,
+ WMI_TLV_SERVICE_RUNTIME_DPD_RECAL = 151,
+ WMI_TLV_SERVICE_STA_TWT = 152,
+ WMI_TLV_SERVICE_AP_TWT = 153,
+ WMI_TLV_SERVICE_GMAC_OFFLOAD_SUPPORT = 154,
+ WMI_TLV_SERVICE_SPOOF_MAC_SUPPORT = 155,
+ WMI_TLV_SERVICE_PEER_TID_CONFIGS_SUPPORT = 156,
+ WMI_TLV_SERVICE_VDEV_SWRETRY_PER_AC_CONFIG_SUPPORT = 157,
+ WMI_TLV_SERVICE_DUAL_BEACON_ON_SINGLE_MAC_SCC_SUPPORT = 158,
+ WMI_TLV_SERVICE_DUAL_BEACON_ON_SINGLE_MAC_MCC_SUPPORT = 159,
+ WMI_TLV_SERVICE_MOTION_DET = 160,
+ WMI_TLV_SERVICE_INFRA_MBSSID = 161,
+ WMI_TLV_SERVICE_OBSS_SPATIAL_REUSE = 162,
+ WMI_TLV_SERVICE_VDEV_DIFFERENT_BEACON_INTERVAL_SUPPORT = 163,
+ WMI_TLV_SERVICE_NAN_DBS_SUPPORT = 164,
+ WMI_TLV_SERVICE_NDI_DBS_SUPPORT = 165,
+ WMI_TLV_SERVICE_NAN_SAP_SUPPORT = 166,
+ WMI_TLV_SERVICE_NDI_SAP_SUPPORT = 167,
+ WMI_TLV_SERVICE_CFR_CAPTURE_SUPPORT = 168,
+ WMI_TLV_SERVICE_CFR_CAPTURE_IND_MSG_TYPE_1 = 169,
+ WMI_TLV_SERVICE_ESP_SUPPORT = 170,
+ WMI_TLV_SERVICE_PEER_CHWIDTH_CHANGE = 171,
+ WMI_TLV_SERVICE_WLAN_HPCS_PULSE = 172,
+ WMI_TLV_SERVICE_PER_VDEV_CHAINMASK_CONFIG_SUPPORT = 173,
+ WMI_TLV_SERVICE_TX_DATA_MGMT_ACK_RSSI = 174,
+ WMI_TLV_SERVICE_NAN_DISABLE_SUPPORT = 175,
+ WMI_TLV_SERVICE_HTT_H2T_NO_HTC_HDR_LEN_IN_MSG_LEN = 176,
+ WMI_TLV_SERVICE_COEX_SUPPORT_UNEQUAL_ISOLATION = 177,
+ WMI_TLV_SERVICE_HW_DB2DBM_CONVERSION_SUPPORT = 178,
+ WMI_TLV_SERVICE_SUPPORT_EXTEND_ADDRESS = 179,
+ WMI_TLV_SERVICE_BEACON_RECEPTION_STATS = 180,
+ WMI_TLV_SERVICE_FETCH_TX_PN = 181,
+ WMI_TLV_SERVICE_PEER_UNMAP_RESPONSE_SUPPORT = 182,
+ WMI_TLV_SERVICE_TX_PER_PEER_AMPDU_SIZE = 183,
+ WMI_TLV_SERVICE_BSS_COLOR_SWITCH_COUNT = 184,
+ WMI_TLV_SERVICE_HTT_PEER_STATS_SUPPORT = 185,
+ WMI_TLV_SERVICE_UL_RU26_ALLOWED = 186,
+ WMI_TLV_SERVICE_GET_MWS_COEX_STATE = 187,
+ WMI_TLV_SERVICE_GET_MWS_DPWB_STATE = 188,
+ WMI_TLV_SERVICE_GET_MWS_TDM_STATE = 189,
+ WMI_TLV_SERVICE_GET_MWS_IDRX_STATE = 190,
+ WMI_TLV_SERVICE_GET_MWS_ANTENNA_SHARING_STATE = 191,
+ WMI_TLV_SERVICE_ENHANCED_TPC_CONFIG_EVENT = 192,
+ WMI_TLV_SERVICE_WLM_STATS_REQUEST = 193,
+ WMI_TLV_SERVICE_EXT_PEER_TID_CONFIGS_SUPPORT = 194,
+ WMI_TLV_SERVICE_WPA3_FT_SAE_SUPPORT = 195,
+ WMI_TLV_SERVICE_WPA3_FT_SUITE_B_SUPPORT = 196,
+ WMI_TLV_SERVICE_VOW_ENABLE = 197,
+ WMI_TLV_SERVICE_CFR_CAPTURE_IND_EVT_TYPE_1 = 198,
+ WMI_TLV_SERVICE_BROADCAST_TWT = 199,
+ WMI_TLV_SERVICE_RAP_DETECTION_SUPPORT = 200,
+ WMI_TLV_SERVICE_PS_TDCC = 201,
+ WMI_TLV_SERVICE_THREE_WAY_COEX_CONFIG_LEGACY = 202,
+ WMI_TLV_SERVICE_THREE_WAY_COEX_CONFIG_OVERRIDE = 203,
+ WMI_TLV_SERVICE_TX_PWR_PER_PEER = 204,
+ WMI_TLV_SERVICE_STA_PLUS_STA_SUPPORT = 205,
+ WMI_TLV_SERVICE_WPA3_FT_FILS = 206,
+ WMI_TLV_SERVICE_ADAPTIVE_11R_ROAM = 207,
+ WMI_TLV_SERVICE_CHAN_RF_CHARACTERIZATION_INFO = 208,
+ WMI_TLV_SERVICE_FW_IFACE_COMBINATION_SUPPORT = 209,
+ WMI_TLV_SERVICE_TX_COMPL_TSF64 = 210,
+ WMI_TLV_SERVICE_DSM_ROAM_FILTER = 211,
+ WMI_TLV_SERVICE_PACKET_CAPTURE_SUPPORT = 212,
+ WMI_TLV_SERVICE_PER_PEER_HTT_STATS_RESET = 213,
+ WMI_TLV_SERVICE_FREQINFO_IN_METADATA = 219,
+ WMI_TLV_SERVICE_EXT2_MSG = 220,
+
+ WMI_MAX_EXT_SERVICE
+};
+
+enum {
+ WMI_SMPS_FORCED_MODE_NONE = 0,
+ WMI_SMPS_FORCED_MODE_DISABLED,
+ WMI_SMPS_FORCED_MODE_STATIC,
+ WMI_SMPS_FORCED_MODE_DYNAMIC
+};
+
+enum wmi_tpc_chainmask {
+ WMI_TPC_CHAINMASK_CONFIG_BAND_2G = 0,
+ WMI_TPC_CHAINMASK_CONFIG_BAND_5G = 1,
+ WMI_NUM_SUPPORTED_BAND_MAX = 2,
+};
+
+enum wmi_peer_param {
+ WMI_PEER_MIMO_PS_STATE = 1,
+ WMI_PEER_AMPDU = 2,
+ WMI_PEER_AUTHORIZE = 3,
+ WMI_PEER_CHWIDTH = 4,
+ WMI_PEER_NSS = 5,
+ WMI_PEER_USE_4ADDR = 6,
+ WMI_PEER_MEMBERSHIP = 7,
+ WMI_PEER_USERPOS = 8,
+ WMI_PEER_CRIT_PROTO_HINT_ENABLED = 9,
+ WMI_PEER_TX_FAIL_CNT_THR = 10,
+ WMI_PEER_SET_HW_RETRY_CTS2S = 11,
+ WMI_PEER_IBSS_ATIM_WINDOW_LENGTH = 12,
+ WMI_PEER_PHYMODE = 13,
+ WMI_PEER_USE_FIXED_PWR = 14,
+ WMI_PEER_PARAM_FIXED_RATE = 15,
+ WMI_PEER_SET_MU_WHITELIST = 16,
+ WMI_PEER_SET_MAX_TX_RATE = 17,
+ WMI_PEER_SET_MIN_TX_RATE = 18,
+ WMI_PEER_SET_DEFAULT_ROUTING = 19,
+};
+
+enum wmi_slot_time {
+ WMI_VDEV_SLOT_TIME_LONG = 1,
+ WMI_VDEV_SLOT_TIME_SHORT = 2,
+};
+
+enum wmi_preamble {
+ WMI_VDEV_PREAMBLE_LONG = 1,
+ WMI_VDEV_PREAMBLE_SHORT = 2,
+};
+
+enum wmi_peer_smps_state {
+ WMI_PEER_SMPS_PS_NONE = 0,
+ WMI_PEER_SMPS_STATIC = 1,
+ WMI_PEER_SMPS_DYNAMIC = 2
+};
+
+enum wmi_peer_chwidth {
+ WMI_PEER_CHWIDTH_20MHZ = 0,
+ WMI_PEER_CHWIDTH_40MHZ = 1,
+ WMI_PEER_CHWIDTH_80MHZ = 2,
+ WMI_PEER_CHWIDTH_160MHZ = 3,
+};
+
+enum wmi_beacon_gen_mode {
+ WMI_BEACON_STAGGERED_MODE = 0,
+ WMI_BEACON_BURST_MODE = 1
+};
+
+enum wmi_direct_buffer_module {
+ WMI_DIRECT_BUF_SPECTRAL = 0,
+ WMI_DIRECT_BUF_CFR = 1,
+
+ /* keep it last */
+ WMI_DIRECT_BUF_MAX
+};
+
+struct ath12k_wmi_pdev_band_arg {
+ u32 pdev_id;
+ u32 start_freq;
+ u32 end_freq;
+};
+
+struct ath12k_wmi_ppe_threshold_arg {
+ u32 numss_m1;
+ u32 ru_bit_mask;
+ u32 ppet16_ppet8_ru3_ru0[WMI_MAX_NUM_SS];
+};
+
+#define PSOC_HOST_MAX_PHY_SIZE (3)
+#define ATH12K_11B_SUPPORT BIT(0)
+#define ATH12K_11G_SUPPORT BIT(1)
+#define ATH12K_11A_SUPPORT BIT(2)
+#define ATH12K_11N_SUPPORT BIT(3)
+#define ATH12K_11AC_SUPPORT BIT(4)
+#define ATH12K_11AX_SUPPORT BIT(5)
+
+struct ath12k_wmi_hal_reg_capabilities_ext_arg {
+ u32 phy_id;
+ u32 eeprom_reg_domain;
+ u32 eeprom_reg_domain_ext;
+ u32 regcap1;
+ u32 regcap2;
+ u32 wireless_modes;
+ u32 low_2ghz_chan;
+ u32 high_2ghz_chan;
+ u32 low_5ghz_chan;
+ u32 high_5ghz_chan;
+};
+
+#define WMI_HOST_MAX_PDEV 3
+
+struct ath12k_wmi_host_mem_chunk_params {
+ __le32 tlv_header;
+ __le32 req_id;
+ __le32 ptr;
+ __le32 size;
+} __packed;
+
+struct ath12k_wmi_host_mem_chunk_arg {
+ void *vaddr;
+ dma_addr_t paddr;
+ u32 len;
+ u32 req_id;
+};
+
+struct ath12k_wmi_resource_config_arg {
+ u32 num_vdevs;
+ u32 num_peers;
+ u32 num_active_peers;
+ u32 num_offload_peers;
+ u32 num_offload_reorder_buffs;
+ u32 num_peer_keys;
+ u32 num_tids;
+ u32 ast_skid_limit;
+ u32 tx_chain_mask;
+ u32 rx_chain_mask;
+ u32 rx_timeout_pri[4];
+ u32 rx_decap_mode;
+ u32 scan_max_pending_req;
+ u32 bmiss_offload_max_vdev;
+ u32 roam_offload_max_vdev;
+ u32 roam_offload_max_ap_profiles;
+ u32 num_mcast_groups;
+ u32 num_mcast_table_elems;
+ u32 mcast2ucast_mode;
+ u32 tx_dbg_log_size;
+ u32 num_wds_entries;
+ u32 dma_burst_size;
+ u32 mac_aggr_delim;
+ u32 rx_skip_defrag_timeout_dup_detection_check;
+ u32 vow_config;
+ u32 gtk_offload_max_vdev;
+ u32 num_msdu_desc;
+ u32 max_frag_entries;
+ u32 max_peer_ext_stats;
+ u32 smart_ant_cap;
+ u32 bk_minfree;
+ u32 be_minfree;
+ u32 vi_minfree;
+ u32 vo_minfree;
+ u32 rx_batchmode;
+ u32 tt_support;
+ u32 atf_config;
+ u32 iphdr_pad_config;
+ u32 qwrap_config:16,
+ alloc_frag_desc_for_data_pkt:16;
+ u32 num_tdls_vdevs;
+ u32 num_tdls_conn_table_entries;
+ u32 beacon_tx_offload_max_vdev;
+ u32 num_multicast_filter_entries;
+ u32 num_wow_filters;
+ u32 num_keep_alive_pattern;
+ u32 keep_alive_pattern_size;
+ u32 max_tdls_concurrent_sleep_sta;
+ u32 max_tdls_concurrent_buffer_sta;
+ u32 wmi_send_separate;
+ u32 num_ocb_vdevs;
+ u32 num_ocb_channels;
+ u32 num_ocb_schedules;
+ u32 num_ns_ext_tuples_cfg;
+ u32 bpf_instruction_size;
+ u32 max_bssid_rx_filters;
+ u32 use_pdev_id;
+ u32 peer_map_unmap_version;
+ u32 sched_params;
+ u32 twt_ap_pdev_count;
+ u32 twt_ap_sta_count;
+};
+
+struct ath12k_wmi_init_cmd_arg {
+ struct ath12k_wmi_resource_config_arg res_cfg;
+ u8 num_mem_chunks;
+ struct ath12k_wmi_host_mem_chunk_arg *mem_chunks;
+ u32 hw_mode_id;
+ u32 num_band_to_mac;
+ struct ath12k_wmi_pdev_band_arg band_to_mac[WMI_HOST_MAX_PDEV];
+};
+
+struct ath12k_wmi_pdev_band_to_mac_params {
+ __le32 tlv_header;
+ __le32 pdev_id;
+ __le32 start_freq;
+ __le32 end_freq;
+} __packed;
+
+/* This is used both as the individual command WMI_PDEV_SET_HW_MODE_CMDID
+ * and as part of WMI_TAG_INIT_CMD.
+ */
+struct ath12k_wmi_pdev_set_hw_mode_cmd {
+ __le32 tlv_header;
+ __le32 pdev_id;
+ __le32 hw_mode_index;
+ __le32 num_band_to_mac;
+} __packed;
+
+struct ath12k_wmi_ppe_threshold_params {
+ __le32 numss_m1; /* NSS - 1 */
+ __le32 ru_info;
+ __le32 ppet16_ppet8_ru3_ru0[WMI_MAX_NUM_SS];
+} __packed;
+
+#define HW_BD_INFO_SIZE 5
+
+struct ath12k_wmi_abi_version_params {
+ __le32 abi_version_0;
+ __le32 abi_version_1;
+ __le32 abi_version_ns_0;
+ __le32 abi_version_ns_1;
+ __le32 abi_version_ns_2;
+ __le32 abi_version_ns_3;
+} __packed;
+
+struct wmi_init_cmd {
+ __le32 tlv_header;
+ struct ath12k_wmi_abi_version_params host_abi_vers;
+ __le32 num_host_mem_chunks;
+} __packed;
+
+#define WMI_RSRC_CFG_HOST_SVC_FLAG_REG_CC_EXT_SUPPORT_BIT 4
+
+struct ath12k_wmi_resource_config_params {
+ __le32 tlv_header;
+ __le32 num_vdevs;
+ __le32 num_peers;
+ __le32 num_offload_peers;
+ __le32 num_offload_reorder_buffs;
+ __le32 num_peer_keys;
+ __le32 num_tids;
+ __le32 ast_skid_limit;
+ __le32 tx_chain_mask;
+ __le32 rx_chain_mask;
+ __le32 rx_timeout_pri[4];
+ __le32 rx_decap_mode;
+ __le32 scan_max_pending_req;
+ __le32 bmiss_offload_max_vdev;
+ __le32 roam_offload_max_vdev;
+ __le32 roam_offload_max_ap_profiles;
+ __le32 num_mcast_groups;
+ __le32 num_mcast_table_elems;
+ __le32 mcast2ucast_mode;
+ __le32 tx_dbg_log_size;
+ __le32 num_wds_entries;
+ __le32 dma_burst_size;
+ __le32 mac_aggr_delim;
+ __le32 rx_skip_defrag_timeout_dup_detection_check;
+ __le32 vow_config;
+ __le32 gtk_offload_max_vdev;
+ __le32 num_msdu_desc;
+ __le32 max_frag_entries;
+ __le32 num_tdls_vdevs;
+ __le32 num_tdls_conn_table_entries;
+ __le32 beacon_tx_offload_max_vdev;
+ __le32 num_multicast_filter_entries;
+ __le32 num_wow_filters;
+ __le32 num_keep_alive_pattern;
+ __le32 keep_alive_pattern_size;
+ __le32 max_tdls_concurrent_sleep_sta;
+ __le32 max_tdls_concurrent_buffer_sta;
+ __le32 wmi_send_separate;
+ __le32 num_ocb_vdevs;
+ __le32 num_ocb_channels;
+ __le32 num_ocb_schedules;
+ __le32 flag1;
+ __le32 smart_ant_cap;
+ __le32 bk_minfree;
+ __le32 be_minfree;
+ __le32 vi_minfree;
+ __le32 vo_minfree;
+ __le32 alloc_frag_desc_for_data_pkt;
+ __le32 num_ns_ext_tuples_cfg;
+ __le32 bpf_instruction_size;
+ __le32 max_bssid_rx_filters;
+ __le32 use_pdev_id;
+ __le32 max_num_dbs_scan_duty_cycle;
+ __le32 max_num_group_keys;
+ __le32 peer_map_unmap_version;
+ __le32 sched_params;
+ __le32 twt_ap_pdev_count;
+ __le32 twt_ap_sta_count;
+ __le32 max_nlo_ssids;
+ __le32 num_pkt_filters;
+ __le32 num_max_sta_vdevs;
+ __le32 max_bssid_indicator;
+ __le32 ul_resp_config;
+ __le32 msdu_flow_override_config0;
+ __le32 msdu_flow_override_config1;
+ __le32 flags2;
+ __le32 host_service_flags;
+ __le32 max_rnr_neighbours;
+ __le32 ema_max_vap_cnt;
+ __le32 ema_max_profile_period;
+} __packed;
+
+struct wmi_service_ready_event {
+ __le32 fw_build_vers;
+ struct ath12k_wmi_abi_version_params fw_abi_vers;
+ __le32 phy_capability;
+ __le32 max_frag_entry;
+ __le32 num_rf_chains;
+ __le32 ht_cap_info;
+ __le32 vht_cap_info;
+ __le32 vht_supp_mcs;
+ __le32 hw_min_tx_power;
+ __le32 hw_max_tx_power;
+ __le32 sys_cap_info;
+ __le32 min_pkt_size_enable;
+ __le32 max_bcn_ie_size;
+ __le32 num_mem_reqs;
+ __le32 max_num_scan_channels;
+ __le32 hw_bd_id;
+ __le32 hw_bd_info[HW_BD_INFO_SIZE];
+ __le32 max_supported_macs;
+ __le32 wmi_fw_sub_feat_caps;
+ __le32 num_dbs_hw_modes;
+ /* txrx_chainmask
+ * [7:0] - 2G band tx chain mask
+ * [15:8] - 2G band rx chain mask
+ * [23:16] - 5G band tx chain mask
+ * [31:24] - 5G band rx chain mask
+ */
+ __le32 txrx_chainmask;
+ __le32 default_dbs_hw_mode_index;
+ __le32 num_msdu_desc;
+} __packed;
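+
+/* Illustrative sketch (hypothetical names, not part of this interface):
+ * decoding the txrx_chainmask layout documented above with the
+ * <linux/bitfield.h> accessors already used in this file.
+ */
+#define WMI_EX_TXRX_CHAINMASK_TX_2G GENMASK(7, 0)
+#define WMI_EX_TXRX_CHAINMASK_RX_2G GENMASK(15, 8)
+#define WMI_EX_TXRX_CHAINMASK_TX_5G GENMASK(23, 16)
+#define WMI_EX_TXRX_CHAINMASK_RX_5G GENMASK(31, 24)
+
+static inline u32
+ath12k_wmi_ex_tx_chainmask_2g(const struct wmi_service_ready_event *ev)
+{
+ /* extract the 2G band tx chain mask from bits [7:0] */
+ return le32_get_bits(ev->txrx_chainmask, WMI_EX_TXRX_CHAINMASK_TX_2G);
+}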
+
+#define WMI_SERVICE_BM_SIZE ((WMI_MAX_SERVICE + sizeof(u32) - 1) / sizeof(u32))
+
+#define WMI_SERVICE_SEGMENT_BM_SIZE32 4 /* 4x u32 = 128 bits */
+#define WMI_SERVICE_EXT_BM_SIZE (WMI_SERVICE_SEGMENT_BM_SIZE32 * sizeof(u32))
+#define WMI_AVAIL_SERVICE_BITS_IN_SIZE32 32
+#define WMI_SERVICE_BITS_IN_SIZE32 4
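+
+/* Illustrative sketch (hypothetical helper): how an extended service id
+ * maps into the 128-bit segment bitmap reported via
+ * WMI_SERVICE_AVAILABLE_EVENTID, using the sizes defined above.
+ */
+static inline bool
+ath12k_wmi_ex_ext_service_is_set(const u32 *seg_bitmap, u32 seg_offset, u32 id)
+{
+ u32 rel = id - seg_offset;
+
+ if (rel >= WMI_SERVICE_SEGMENT_BM_SIZE32 * WMI_AVAIL_SERVICE_BITS_IN_SIZE32)
+  return false;
+
+ /* word index is rel / 32, bit within the word is rel % 32 */
+ return seg_bitmap[rel / WMI_AVAIL_SERVICE_BITS_IN_SIZE32] &
+        BIT(rel % WMI_AVAIL_SERVICE_BITS_IN_SIZE32);
+}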
+
+struct wmi_service_ready_ext_event {
+ __le32 default_conc_scan_config_bits;
+ __le32 default_fw_config_bits;
+ struct ath12k_wmi_ppe_threshold_params ppet;
+ __le32 he_cap_info;
+ __le32 mpdu_density;
+ __le32 max_bssid_rx_filters;
+ __le32 fw_build_vers_ext;
+ __le32 max_nlo_ssids;
+ __le32 max_bssid_indicator;
+ __le32 he_cap_info_ext;
+} __packed;
+
+struct ath12k_wmi_soc_mac_phy_hw_mode_caps_params {
+ __le32 num_hw_modes;
+ __le32 num_chainmask_tables;
+} __packed;
+
+struct ath12k_wmi_hw_mode_cap_params {
+ __le32 tlv_header;
+ __le32 hw_mode_id;
+ __le32 phy_id_map;
+ __le32 hw_mode_config_type;
+} __packed;
+
+#define WMI_MAX_HECAP_PHY_SIZE (3)
+
+struct ath12k_wmi_mac_phy_caps_params {
+ __le32 hw_mode_id;
+ __le32 pdev_id;
+ __le32 phy_id;
+ __le32 supported_flags;
+ __le32 supported_bands;
+ __le32 ampdu_density;
+ __le32 max_bw_supported_2g;
+ __le32 ht_cap_info_2g;
+ __le32 vht_cap_info_2g;
+ __le32 vht_supp_mcs_2g;
+ __le32 he_cap_info_2g;
+ __le32 he_supp_mcs_2g;
+ __le32 tx_chain_mask_2g;
+ __le32 rx_chain_mask_2g;
+ __le32 max_bw_supported_5g;
+ __le32 ht_cap_info_5g;
+ __le32 vht_cap_info_5g;
+ __le32 vht_supp_mcs_5g;
+ __le32 he_cap_info_5g;
+ __le32 he_supp_mcs_5g;
+ __le32 tx_chain_mask_5g;
+ __le32 rx_chain_mask_5g;
+ __le32 he_cap_phy_info_2g[WMI_MAX_HECAP_PHY_SIZE];
+ __le32 he_cap_phy_info_5g[WMI_MAX_HECAP_PHY_SIZE];
+ struct ath12k_wmi_ppe_threshold_params he_ppet2g;
+ struct ath12k_wmi_ppe_threshold_params he_ppet5g;
+ __le32 chainmask_table_id;
+ __le32 lmac_id;
+ __le32 he_cap_info_2g_ext;
+ __le32 he_cap_info_5g_ext;
+ __le32 he_cap_info_internal;
+} __packed;
+
+struct ath12k_wmi_hal_reg_caps_ext_params {
+ __le32 tlv_header;
+ __le32 phy_id;
+ __le32 eeprom_reg_domain;
+ __le32 eeprom_reg_domain_ext;
+ __le32 regcap1;
+ __le32 regcap2;
+ __le32 wireless_modes;
+ __le32 low_2ghz_chan;
+ __le32 high_2ghz_chan;
+ __le32 low_5ghz_chan;
+ __le32 high_5ghz_chan;
+} __packed;
+
+struct ath12k_wmi_soc_hal_reg_caps_params {
+ __le32 num_phy;
+} __packed;
+
+/* 2 word representation of MAC addr */
+struct ath12k_wmi_mac_addr_params {
+ u8 addr[ETH_ALEN];
+ u8 padding[2];
+} __packed;
+
+struct ath12k_wmi_dma_ring_caps_params {
+ __le32 tlv_header;
+ __le32 pdev_id;
+ __le32 module_id;
+ __le32 min_elem;
+ __le32 min_buf_sz;
+ __le32 min_buf_align;
+} __packed;
+
+struct ath12k_wmi_ready_event_min_params {
+ struct ath12k_wmi_abi_version_params fw_abi_vers;
+ struct ath12k_wmi_mac_addr_params mac_addr;
+ __le32 status;
+ __le32 num_dscp_table;
+ __le32 num_extra_mac_addr;
+ __le32 num_total_peers;
+ __le32 num_extra_peers;
+} __packed;
+
+struct wmi_ready_event {
+ struct ath12k_wmi_ready_event_min_params ready_event_min;
+ __le32 max_ast_index;
+ __le32 pktlog_defs_checksum;
+} __packed;
+
+struct wmi_service_available_event {
+ __le32 wmi_service_segment_offset;
+ __le32 wmi_service_segment_bitmap[WMI_SERVICE_SEGMENT_BM_SIZE32];
+} __packed;
+
+struct ath12k_wmi_vdev_create_arg {
+ u8 if_id;
+ u32 type;
+ u32 subtype;
+ struct {
+ u8 tx;
+ u8 rx;
+ } chains[NUM_NL80211_BANDS];
+ u32 pdev_id;
+ u8 if_stats_id;
+};
+
+#define ATH12K_MAX_VDEV_STATS_ID 0x30
+#define ATH12K_INVAL_VDEV_STATS_ID 0xFF
+
+struct wmi_vdev_create_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 vdev_type;
+ __le32 vdev_subtype;
+ struct ath12k_wmi_mac_addr_params vdev_macaddr;
+ __le32 num_cfg_txrx_streams;
+ __le32 pdev_id;
+ __le32 vdev_stats_id;
+} __packed;
+
+struct ath12k_wmi_vdev_txrx_streams_params {
+ __le32 tlv_header;
+ __le32 band;
+ __le32 supported_tx_streams;
+ __le32 supported_rx_streams;
+} __packed;
+
+struct wmi_vdev_delete_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+} __packed;
+
+struct wmi_vdev_up_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 vdev_assoc_id;
+ struct ath12k_wmi_mac_addr_params vdev_bssid;
+ struct ath12k_wmi_mac_addr_params trans_bssid;
+ __le32 profile_idx;
+ __le32 profile_num;
+} __packed;
+
+struct wmi_vdev_stop_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+} __packed;
+
+struct wmi_vdev_down_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+} __packed;
+
+#define WMI_VDEV_START_HIDDEN_SSID BIT(0)
+#define WMI_VDEV_START_PMF_ENABLED BIT(1)
+#define WMI_VDEV_START_LDPC_RX_ENABLED BIT(3)
+
+#define ATH12K_WMI_SSID_LEN 32
+
+struct ath12k_wmi_ssid_params {
+ __le32 ssid_len;
+ u8 ssid[ATH12K_WMI_SSID_LEN];
+} __packed;
+
+#define ATH12K_VDEV_SETUP_TIMEOUT_HZ (1 * HZ)
+
+struct wmi_vdev_start_request_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 requestor_id;
+ __le32 beacon_interval;
+ __le32 dtim_period;
+ __le32 flags;
+ struct ath12k_wmi_ssid_params ssid;
+ __le32 bcn_tx_rate;
+ __le32 bcn_txpower;
+ __le32 num_noa_descriptors;
+ __le32 disable_hw_ack;
+ __le32 preferred_tx_streams;
+ __le32 preferred_rx_streams;
+ __le32 he_ops;
+ __le32 cac_duration_ms;
+ __le32 regdomain;
+} __packed;
+
+#define MGMT_TX_DL_FRM_LEN 64
+
+struct ath12k_wmi_channel_arg {
+ u8 chan_id;
+ u8 pwr;
+ u32 mhz;
+ u32 half_rate:1,
+ quarter_rate:1,
+ dfs_set:1,
+ dfs_set_cfreq2:1,
+ is_chan_passive:1,
+ allow_ht:1,
+ allow_vht:1,
+ allow_he:1,
+ set_agile:1,
+ psc_channel:1;
+ u32 phy_mode;
+ u32 cfreq1;
+ u32 cfreq2;
+ char maxpower;
+ char minpower;
+ char maxregpower;
+ u8 antennamax;
+ u8 reg_class_id;
+};
+
+enum wmi_phy_mode {
+ MODE_11A = 0,
+ MODE_11G = 1, /* 11b/g Mode */
+ MODE_11B = 2, /* 11b Mode */
+ MODE_11GONLY = 3, /* 11g only Mode */
+ MODE_11NA_HT20 = 4,
+ MODE_11NG_HT20 = 5,
+ MODE_11NA_HT40 = 6,
+ MODE_11NG_HT40 = 7,
+ MODE_11AC_VHT20 = 8,
+ MODE_11AC_VHT40 = 9,
+ MODE_11AC_VHT80 = 10,
+ MODE_11AC_VHT20_2G = 11,
+ MODE_11AC_VHT40_2G = 12,
+ MODE_11AC_VHT80_2G = 13,
+ MODE_11AC_VHT80_80 = 14,
+ MODE_11AC_VHT160 = 15,
+ MODE_11AX_HE20 = 16,
+ MODE_11AX_HE40 = 17,
+ MODE_11AX_HE80 = 18,
+ MODE_11AX_HE80_80 = 19,
+ MODE_11AX_HE160 = 20,
+ MODE_11AX_HE20_2G = 21,
+ MODE_11AX_HE40_2G = 22,
+ MODE_11AX_HE80_2G = 23,
+ MODE_UNKNOWN = 24,
+ MODE_MAX = 24
+};
+
+struct wmi_vdev_start_req_arg {
+ u32 vdev_id;
+ u32 freq;
+ u32 band_center_freq1;
+ u32 band_center_freq2;
+ bool passive;
+ bool allow_ibss;
+ bool allow_ht;
+ bool allow_vht;
+ bool ht40plus;
+ bool chan_radar;
+ bool freq2_radar;
+ bool allow_he;
+ u32 min_power;
+ u32 max_power;
+ u32 max_reg_power;
+ u32 max_antenna_gain;
+ enum wmi_phy_mode mode;
+ u32 bcn_intval;
+ u32 dtim_period;
+ u8 *ssid;
+ u32 ssid_len;
+ u32 bcn_tx_rate;
+ u32 bcn_tx_power;
+ bool disable_hw_ack;
+ bool hidden_ssid;
+ bool pmf_enabled;
+ u32 he_ops;
+ u32 cac_duration_ms;
+ u32 regdomain;
+ u32 pref_rx_streams;
+ u32 pref_tx_streams;
+ u32 num_noa_descriptors;
+};
+
+struct ath12k_wmi_peer_create_arg {
+ const u8 *peer_addr;
+ u32 peer_type;
+ u32 vdev_id;
+};
+
+struct ath12k_wmi_pdev_set_regdomain_arg {
+ u16 current_rd_in_use;
+ u16 current_rd_2g;
+ u16 current_rd_5g;
+ u32 ctl_2g;
+ u32 ctl_5g;
+ u8 dfs_domain;
+ u32 pdev_id;
+};
+
+struct ath12k_wmi_rx_reorder_queue_remove_arg {
+ u8 *peer_macaddr;
+ u16 vdev_id;
+ u32 peer_tid_bitmap;
+};
+
+#define WMI_HOST_PDEV_ID_SOC 0xFF
+#define WMI_HOST_PDEV_ID_0 0
+#define WMI_HOST_PDEV_ID_1 1
+#define WMI_HOST_PDEV_ID_2 2
+
+#define WMI_PDEV_ID_SOC 0
+#define WMI_PDEV_ID_1ST 1
+#define WMI_PDEV_ID_2ND 2
+#define WMI_PDEV_ID_3RD 3
+
+/* Freq units in MHz */
+#define REG_RULE_START_FREQ 0x0000ffff
+#define REG_RULE_END_FREQ 0xffff0000
+#define REG_RULE_FLAGS 0x0000ffff
+#define REG_RULE_MAX_BW 0x0000ffff
+#define REG_RULE_REG_PWR 0x00ff0000
+#define REG_RULE_ANT_GAIN 0xff000000
+#define REG_RULE_PSD_INFO BIT(2)
+#define REG_RULE_PSD_EIRP 0xffff0000
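+
+/* Illustrative sketch (hypothetical helper): unpacking the freq_info word
+ * of a regulatory rule (see struct ath12k_wmi_reg_rule_ext_params further
+ * below) with the field masks above.
+ */
+static inline void ath12k_wmi_ex_reg_rule_freq(u32 freq_info,
+            u32 *start_mhz, u32 *end_mhz)
+{
+ *start_mhz = u32_get_bits(freq_info, REG_RULE_START_FREQ);
+ *end_mhz = u32_get_bits(freq_info, REG_RULE_END_FREQ);
+}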
+
+#define WMI_VDEV_PARAM_TXBF_SU_TX_BFEE BIT(0)
+#define WMI_VDEV_PARAM_TXBF_MU_TX_BFEE BIT(1)
+#define WMI_VDEV_PARAM_TXBF_SU_TX_BFER BIT(2)
+#define WMI_VDEV_PARAM_TXBF_MU_TX_BFER BIT(3)
+
+#define HECAP_PHYDWORD_0 0
+#define HECAP_PHYDWORD_1 1
+#define HECAP_PHYDWORD_2 2
+
+#define HECAP_PHY_SU_BFER BIT(31)
+#define HECAP_PHY_SU_BFEE BIT(0)
+#define HECAP_PHY_MU_BFER BIT(1)
+#define HECAP_PHY_UL_MUMIMO BIT(22)
+#define HECAP_PHY_UL_MUOFDMA BIT(23)
+
+#define HECAP_PHY_SUBFMR_GET(hecap_phy) \
+ u32_get_bits(hecap_phy[HECAP_PHYDWORD_0], HECAP_PHY_SU_BFER)
+
+#define HECAP_PHY_SUBFME_GET(hecap_phy) \
+ u32_get_bits(hecap_phy[HECAP_PHYDWORD_1], HECAP_PHY_SU_BFEE)
+
+#define HECAP_PHY_MUBFMR_GET(hecap_phy) \
+ u32_get_bits(hecap_phy[HECAP_PHYDWORD_1], HECAP_PHY_MU_BFER)
+
+#define HECAP_PHY_ULMUMIMO_GET(hecap_phy) \
+ u32_get_bits(hecap_phy[HECAP_PHYDWORD_0], HECAP_PHY_UL_MUMIMO)
+
+#define HECAP_PHY_ULOFDMA_GET(hecap_phy) \
+ u32_get_bits(hecap_phy[HECAP_PHYDWORD_0], HECAP_PHY_UL_MUOFDMA)
+
+#define HE_MODE_SU_TX_BFEE BIT(0)
+#define HE_MODE_SU_TX_BFER BIT(1)
+#define HE_MODE_MU_TX_BFEE BIT(2)
+#define HE_MODE_MU_TX_BFER BIT(3)
+#define HE_MODE_DL_OFDMA BIT(4)
+#define HE_MODE_UL_OFDMA BIT(5)
+#define HE_MODE_UL_MUMIMO BIT(6)
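+
+/* Illustrative sketch (the composition is an assumption of this sketch,
+ * not mandated by this header): feeding the HECAP_PHY_*_GET() accessors
+ * above into the HE_MODE_* bitmap.
+ */
+static inline u32 ath12k_wmi_ex_he_mode_from_phy(const u32 *hecap_phy)
+{
+ u32 he_mode = 0;
+
+ if (HECAP_PHY_SUBFMR_GET(hecap_phy))
+  he_mode |= HE_MODE_SU_TX_BFER;
+ if (HECAP_PHY_SUBFME_GET(hecap_phy))
+  he_mode |= HE_MODE_SU_TX_BFEE;
+ if (HECAP_PHY_ULMUMIMO_GET(hecap_phy))
+  he_mode |= HE_MODE_UL_MUMIMO;
+
+ return he_mode;
+}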
+
+#define HE_DL_MUOFDMA_ENABLE 1
+#define HE_UL_MUOFDMA_ENABLE 1
+#define HE_DL_MUMIMO_ENABLE 1
+#define HE_MU_BFEE_ENABLE 1
+#define HE_SU_BFEE_ENABLE 1
+
+#define HE_VHT_SOUNDING_MODE_ENABLE 1
+#define HE_SU_MU_SOUNDING_MODE_ENABLE 1
+#define HE_TRIG_NONTRIG_SOUNDING_MODE_ENABLE 1
+
+/* HE or VHT Sounding */
+#define HE_VHT_SOUNDING_MODE BIT(0)
+/* SU or MU Sounding */
+#define HE_SU_MU_SOUNDING_MODE BIT(2)
+/* Trig or Non-Trig Sounding */
+#define HE_TRIG_NONTRIG_SOUNDING_MODE BIT(3)
+
+#define WMI_TXBF_STS_CAP_OFFSET_LSB 4
+#define WMI_TXBF_STS_CAP_OFFSET_MASK 0x70
+#define WMI_BF_SOUND_DIM_OFFSET_LSB 8
+#define WMI_BF_SOUND_DIM_OFFSET_MASK 0x700
+
+enum wmi_peer_type {
+ WMI_PEER_TYPE_DEFAULT = 0,
+ WMI_PEER_TYPE_BSS = 1,
+ WMI_PEER_TYPE_TDLS = 2,
+};
+
+struct wmi_peer_create_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+ __le32 peer_type;
+} __packed;
+
+struct wmi_peer_delete_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+} __packed;
+
+struct wmi_peer_reorder_queue_setup_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+ __le32 tid;
+ __le32 queue_ptr_lo;
+ __le32 queue_ptr_hi;
+ __le32 queue_no;
+ __le32 ba_window_size_valid;
+ __le32 ba_window_size;
+} __packed;
+
+struct wmi_peer_reorder_queue_remove_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+ __le32 tid_mask;
+} __packed;
+
+enum wmi_bss_chan_info_req_type {
+ WMI_BSS_SURVEY_REQ_TYPE_READ = 1,
+ WMI_BSS_SURVEY_REQ_TYPE_READ_CLEAR,
+};
+
+struct wmi_pdev_set_param_cmd {
+ __le32 tlv_header;
+ __le32 pdev_id;
+ __le32 param_id;
+ __le32 param_value;
+} __packed;
+
+struct wmi_pdev_set_ps_mode_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 sta_ps_mode;
+} __packed;
+
+struct wmi_pdev_suspend_cmd {
+ __le32 tlv_header;
+ __le32 pdev_id;
+ __le32 suspend_opt;
+} __packed;
+
+struct wmi_pdev_resume_cmd {
+ __le32 tlv_header;
+ __le32 pdev_id;
+} __packed;
+
+struct wmi_pdev_bss_chan_info_req_cmd {
+ __le32 tlv_header;
+ /* ref wmi_bss_chan_info_req_type */
+ __le32 req_type;
+} __packed;
+
+struct wmi_ap_ps_peer_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+ __le32 param;
+ __le32 value;
+} __packed;
+
+struct wmi_sta_powersave_param_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 param;
+ __le32 value;
+} __packed;
+
+struct wmi_pdev_set_regdomain_cmd {
+ __le32 tlv_header;
+ __le32 pdev_id;
+ __le32 reg_domain;
+ __le32 reg_domain_2g;
+ __le32 reg_domain_5g;
+ __le32 conformance_test_limit_2g;
+ __le32 conformance_test_limit_5g;
+ __le32 dfs_domain;
+} __packed;
+
+struct wmi_peer_set_param_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+ __le32 param_id;
+ __le32 param_value;
+} __packed;
+
+struct wmi_peer_flush_tids_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+ __le32 peer_tid_bitmap;
+} __packed;
+
+struct wmi_dfs_phyerr_offload_cmd {
+ __le32 tlv_header;
+ __le32 pdev_id;
+} __packed;
+
+struct wmi_bcn_offload_ctrl_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 bcn_ctrl_op;
+} __packed;
+
+enum scan_dwelltime_adaptive_mode {
+ SCAN_DWELL_MODE_DEFAULT = 0,
+ SCAN_DWELL_MODE_CONSERVATIVE = 1,
+ SCAN_DWELL_MODE_MODERATE = 2,
+ SCAN_DWELL_MODE_AGGRESSIVE = 3,
+ SCAN_DWELL_MODE_STATIC = 4
+};
+
+#define WLAN_SCAN_MAX_NUM_SSID 10
+#define WLAN_SCAN_MAX_NUM_BSSID 10
+#define WLAN_SCAN_MAX_NUM_CHANNELS 40
+
+struct ath12k_wmi_element_info_arg {
+ u32 len;
+ u8 *ptr;
+};
+
+#define WMI_IE_BITMAP_SIZE 8
+
+#define WMI_SCAN_MAX_NUM_SSID 0x0A
+/* prefix used by scan requestor ids on the host */
+#define WMI_HOST_SCAN_REQUESTOR_ID_PREFIX 0xA000
+
+/* prefix used by scan request ids generated on the host */
+/* host cycles through the lower 12 bits to generate ids */
+#define WMI_HOST_SCAN_REQ_ID_PREFIX 0xA000
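+
+/* Illustrative sketch of the id scheme described above: the 0xA000 prefix
+ * in the upper bits with a counter cycling through the lower 12 bits
+ * (hypothetical helper; the real driver keeps its counter elsewhere).
+ */
+static inline u32 ath12k_wmi_ex_next_scan_req_id(u32 *counter)
+{
+ *counter = (*counter + 1) & 0xfff;
+ return WMI_HOST_SCAN_REQ_ID_PREFIX | *counter;
+}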
+
+#define WLAN_SCAN_PARAMS_MAX_SSID 16
+#define WLAN_SCAN_PARAMS_MAX_BSSID 4
+#define WLAN_SCAN_PARAMS_MAX_IE_LEN 256
+
+/* Dwell times lower than this may be refused by some firmware revisions,
+ * which then report scan completion with a timed-out reason.
+ */
+#define WMI_SCAN_CHAN_MIN_TIME_MSEC 40
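+
+/* Illustrative sketch (hypothetical helper): clamping a requested dwell
+ * time to the minimum documented above before handing it to firmware.
+ */
+static inline u32 ath12k_wmi_ex_scan_dwell_clamp(u32 dwell_msec)
+{
+ return max_t(u32, dwell_msec, WMI_SCAN_CHAN_MIN_TIME_MSEC);
+}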
+
+/* Scan priority numbers must be sequential, starting with 0 */
+enum wmi_scan_priority {
+ WMI_SCAN_PRIORITY_VERY_LOW = 0,
+ WMI_SCAN_PRIORITY_LOW,
+ WMI_SCAN_PRIORITY_MEDIUM,
+ WMI_SCAN_PRIORITY_HIGH,
+ WMI_SCAN_PRIORITY_VERY_HIGH,
+ WMI_SCAN_PRIORITY_COUNT /* number of priorities supported */
+};
+
+enum wmi_scan_event_type {
+ WMI_SCAN_EVENT_STARTED = BIT(0),
+ WMI_SCAN_EVENT_COMPLETED = BIT(1),
+ WMI_SCAN_EVENT_BSS_CHANNEL = BIT(2),
+ WMI_SCAN_EVENT_FOREIGN_CHAN = BIT(3),
+ WMI_SCAN_EVENT_DEQUEUED = BIT(4),
+ /* possibly by high-prio scan */
+ WMI_SCAN_EVENT_PREEMPTED = BIT(5),
+ WMI_SCAN_EVENT_START_FAILED = BIT(6),
+ WMI_SCAN_EVENT_RESTARTED = BIT(7),
+ WMI_SCAN_EVENT_FOREIGN_CHAN_EXIT = BIT(8),
+ WMI_SCAN_EVENT_SUSPENDED = BIT(9),
+ WMI_SCAN_EVENT_RESUMED = BIT(10),
+ WMI_SCAN_EVENT_MAX = BIT(15),
+};
+
+enum wmi_scan_completion_reason {
+ WMI_SCAN_REASON_COMPLETED,
+ WMI_SCAN_REASON_CANCELLED,
+ WMI_SCAN_REASON_PREEMPTED,
+ WMI_SCAN_REASON_TIMEDOUT,
+ WMI_SCAN_REASON_INTERNAL_FAILURE,
+ WMI_SCAN_REASON_MAX,
+};
+
+struct wmi_start_scan_cmd {
+ __le32 tlv_header;
+ __le32 scan_id;
+ __le32 scan_req_id;
+ __le32 vdev_id;
+ __le32 scan_priority;
+ __le32 notify_scan_events;
+ __le32 dwell_time_active;
+ __le32 dwell_time_passive;
+ __le32 min_rest_time;
+ __le32 max_rest_time;
+ __le32 repeat_probe_time;
+ __le32 probe_spacing_time;
+ __le32 idle_time;
+ __le32 max_scan_time;
+ __le32 probe_delay;
+ __le32 scan_ctrl_flags;
+ __le32 burst_duration;
+ __le32 num_chan;
+ __le32 num_bssid;
+ __le32 num_ssids;
+ __le32 ie_len;
+ __le32 n_probes;
+ struct ath12k_wmi_mac_addr_params mac_addr;
+ struct ath12k_wmi_mac_addr_params mac_mask;
+ __le32 ie_bitmap[WMI_IE_BITMAP_SIZE];
+ __le32 num_vendor_oui;
+ __le32 scan_ctrl_flags_ext;
+ __le32 dwell_time_active_2g;
+ __le32 dwell_time_active_6g;
+ __le32 dwell_time_passive_6g;
+ __le32 scan_start_offset;
+} __packed;
+
+#define WMI_SCAN_FLAG_PASSIVE 0x1
+#define WMI_SCAN_ADD_BCAST_PROBE_REQ 0x2
+#define WMI_SCAN_ADD_CCK_RATES 0x4
+#define WMI_SCAN_ADD_OFDM_RATES 0x8
+#define WMI_SCAN_CHAN_STAT_EVENT 0x10
+#define WMI_SCAN_FILTER_PROBE_REQ 0x20
+#define WMI_SCAN_BYPASS_DFS_CHN 0x40
+#define WMI_SCAN_CONTINUE_ON_ERROR 0x80
+#define WMI_SCAN_FILTER_PROMISCUOS 0x100
+#define WMI_SCAN_FLAG_FORCE_ACTIVE_ON_DFS 0x200
+#define WMI_SCAN_ADD_TPC_IE_IN_PROBE_REQ 0x400
+#define WMI_SCAN_ADD_DS_IE_IN_PROBE_REQ 0x800
+#define WMI_SCAN_ADD_SPOOF_MAC_IN_PROBE_REQ 0x1000
+#define WMI_SCAN_OFFCHAN_MGMT_TX 0x2000
+#define WMI_SCAN_OFFCHAN_DATA_TX 0x4000
+#define WMI_SCAN_CAPTURE_PHY_ERROR 0x8000
+#define WMI_SCAN_FLAG_STRICT_PASSIVE_ON_PCHN 0x10000
+#define WMI_SCAN_FLAG_HALF_RATE_SUPPORT 0x20000
+#define WMI_SCAN_FLAG_QUARTER_RATE_SUPPORT 0x40000
+#define WMI_SCAN_RANDOM_SEQ_NO_IN_PROBE_REQ 0x80000
+#define WMI_SCAN_ENABLE_IE_WHTELIST_IN_PROBE_REQ 0x100000
+
+#define WMI_SCAN_DWELL_MODE_MASK GENMASK(23, 21)
+
+enum {
+ WMI_SCAN_DWELL_MODE_DEFAULT = 0,
+ WMI_SCAN_DWELL_MODE_CONSERVATIVE = 1,
+ WMI_SCAN_DWELL_MODE_MODERATE = 2,
+ WMI_SCAN_DWELL_MODE_AGGRESSIVE = 3,
+ WMI_SCAN_DWELL_MODE_STATIC = 4,
+};
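+
+/* Illustrative sketch (hypothetical helper): packing one of the dwell
+ * modes above into scan_ctrl_flags via WMI_SCAN_DWELL_MODE_MASK.
+ */
+static inline u32 ath12k_wmi_ex_set_dwell_mode(u32 flags, u32 mode)
+{
+ flags &= ~WMI_SCAN_DWELL_MODE_MASK;
+ return flags | u32_encode_bits(mode, WMI_SCAN_DWELL_MODE_MASK);
+}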
+
+struct ath12k_wmi_hint_short_ssid_arg {
+ u32 freq_flags;
+ u32 short_ssid;
+};
+
+struct ath12k_wmi_hint_bssid_arg {
+ u32 freq_flags;
+ struct ath12k_wmi_mac_addr_params bssid;
+};
+
+struct ath12k_wmi_scan_req_arg {
+ u32 scan_id;
+ u32 scan_req_id;
+ u32 vdev_id;
+ u32 pdev_id;
+ enum wmi_scan_priority scan_priority;
+ union {
+ struct {
+ u32 scan_ev_started:1,
+ scan_ev_completed:1,
+ scan_ev_bss_chan:1,
+ scan_ev_foreign_chan:1,
+ scan_ev_dequeued:1,
+ scan_ev_preempted:1,
+ scan_ev_start_failed:1,
+ scan_ev_restarted:1,
+ scan_ev_foreign_chn_exit:1,
+ scan_ev_invalid:1,
+ scan_ev_gpio_timeout:1,
+ scan_ev_suspended:1,
+ scan_ev_resumed:1;
+ };
+ u32 scan_events;
+ };
+ u32 dwell_time_active;
+ u32 dwell_time_active_2g;
+ u32 dwell_time_passive;
+ u32 dwell_time_active_6g;
+ u32 dwell_time_passive_6g;
+ u32 min_rest_time;
+ u32 max_rest_time;
+ u32 repeat_probe_time;
+ u32 probe_spacing_time;
+ u32 idle_time;
+ u32 max_scan_time;
+ u32 probe_delay;
+ union {
+ struct {
+ u32 scan_f_passive:1,
+ scan_f_bcast_probe:1,
+ scan_f_cck_rates:1,
+ scan_f_ofdm_rates:1,
+ scan_f_chan_stat_evnt:1,
+ scan_f_filter_prb_req:1,
+ scan_f_bypass_dfs_chn:1,
+ scan_f_continue_on_err:1,
+ scan_f_offchan_mgmt_tx:1,
+ scan_f_offchan_data_tx:1,
+ scan_f_promisc_mode:1,
+ scan_f_capture_phy_err:1,
+ scan_f_strict_passive_pch:1,
+ scan_f_half_rate:1,
+ scan_f_quarter_rate:1,
+ scan_f_force_active_dfs_chn:1,
+ scan_f_add_tpc_ie_in_probe:1,
+ scan_f_add_ds_ie_in_probe:1,
+ scan_f_add_spoofed_mac_in_probe:1,
+ scan_f_add_rand_seq_in_probe:1,
+ scan_f_en_ie_whitelist_in_probe:1,
+ scan_f_forced:1,
+ scan_f_2ghz:1,
+ scan_f_5ghz:1,
+ scan_f_80mhz:1;
+ };
+ u32 scan_flags;
+ };
+ enum scan_dwelltime_adaptive_mode adaptive_dwell_time_mode;
+ u32 burst_duration;
+ u32 num_chan;
+ u32 num_bssid;
+ u32 num_ssids;
+ u32 n_probes;
+ u32 chan_list[WLAN_SCAN_MAX_NUM_CHANNELS];
+ u32 notify_scan_events;
+ struct cfg80211_ssid ssid[WLAN_SCAN_MAX_NUM_SSID];
+ struct ath12k_wmi_mac_addr_params bssid_list[WLAN_SCAN_MAX_NUM_BSSID];
+ struct ath12k_wmi_element_info_arg extraie;
+ u32 num_hint_s_ssid;
+ u32 num_hint_bssid;
+ struct ath12k_wmi_hint_short_ssid_arg hint_s_ssid[WLAN_SCAN_MAX_HINT_S_SSID];
+ struct ath12k_wmi_hint_bssid_arg hint_bssid[WLAN_SCAN_MAX_HINT_BSSID];
+};
+
+struct wmi_ssid_arg {
+ int len;
+ const u8 *ssid;
+};
+
+struct wmi_bssid_arg {
+ const u8 *bssid;
+};
+
+struct wmi_start_scan_arg {
+ u32 scan_id;
+ u32 scan_req_id;
+ u32 vdev_id;
+ u32 scan_priority;
+ u32 notify_scan_events;
+ u32 dwell_time_active;
+ u32 dwell_time_passive;
+ u32 min_rest_time;
+ u32 max_rest_time;
+ u32 repeat_probe_time;
+ u32 probe_spacing_time;
+ u32 idle_time;
+ u32 max_scan_time;
+ u32 probe_delay;
+ u32 scan_ctrl_flags;
+
+ u32 ie_len;
+ u32 n_channels;
+ u32 n_ssids;
+ u32 n_bssids;
+
+ u8 ie[WLAN_SCAN_PARAMS_MAX_IE_LEN];
+ u32 channels[64];
+ struct wmi_ssid_arg ssids[WLAN_SCAN_PARAMS_MAX_SSID];
+ struct wmi_bssid_arg bssids[WLAN_SCAN_PARAMS_MAX_BSSID];
+};
+
+#define WMI_SCAN_STOP_ONE 0x00000000
+#define WMI_SCAN_STOP_VAP_ALL 0x01000000
+#define WMI_SCAN_STOP_ALL 0x04000000
+
+/* Prefix 0xA000 indicates that the scan request
+ * is triggered by the host
+ */
+#define ATH12K_SCAN_ID 0xA000
+
+enum scan_cancel_req_type {
+ WLAN_SCAN_CANCEL_SINGLE = 1,
+ WLAN_SCAN_CANCEL_VDEV_ALL,
+ WLAN_SCAN_CANCEL_PDEV_ALL,
+};
+
+struct ath12k_wmi_scan_cancel_arg {
+ u32 requester;
+ u32 scan_id;
+ enum scan_cancel_req_type req_type;
+ u32 vdev_id;
+ u32 pdev_id;
+};
+
+struct wmi_bcn_send_from_host_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 data_len;
+ union {
+ __le32 frag_ptr;
+ __le32 frag_ptr_lo;
+ };
+ __le32 frame_ctrl;
+ __le32 dtim_flag;
+ __le32 bcn_antenna;
+ __le32 frag_ptr_hi;
+} __packed;
+
+#define WMI_CHAN_INFO_MODE GENMASK(5, 0)
+#define WMI_CHAN_INFO_HT40_PLUS BIT(6)
+#define WMI_CHAN_INFO_PASSIVE BIT(7)
+#define WMI_CHAN_INFO_ADHOC_ALLOWED BIT(8)
+#define WMI_CHAN_INFO_AP_DISABLED BIT(9)
+#define WMI_CHAN_INFO_DFS BIT(10)
+#define WMI_CHAN_INFO_ALLOW_HT BIT(11)
+#define WMI_CHAN_INFO_ALLOW_VHT BIT(12)
+#define WMI_CHAN_INFO_CHAN_CHANGE_CAUSE_CSA BIT(13)
+#define WMI_CHAN_INFO_HALF_RATE BIT(14)
+#define WMI_CHAN_INFO_QUARTER_RATE BIT(15)
+#define WMI_CHAN_INFO_DFS_FREQ2 BIT(16)
+#define WMI_CHAN_INFO_ALLOW_HE BIT(17)
+#define WMI_CHAN_INFO_PSC BIT(18)
+
+#define WMI_CHAN_REG_INFO1_MIN_PWR GENMASK(7, 0)
+#define WMI_CHAN_REG_INFO1_MAX_PWR GENMASK(15, 8)
+#define WMI_CHAN_REG_INFO1_MAX_REG_PWR GENMASK(23, 16)
+#define WMI_CHAN_REG_INFO1_REG_CLS GENMASK(31, 24)
+
+#define WMI_CHAN_REG_INFO2_ANT_MAX GENMASK(7, 0)
+#define WMI_CHAN_REG_INFO2_MAX_TX_PWR GENMASK(15, 8)
+
+struct ath12k_wmi_channel_params {
+ __le32 tlv_header;
+ __le32 mhz;
+ __le32 band_center_freq1;
+ __le32 band_center_freq2;
+ __le32 info;
+ __le32 reg_info_1;
+ __le32 reg_info_2;
+} __packed;
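+
+/* Illustrative sketch (hypothetical helper): filling reg_info_1 of
+ * ath12k_wmi_channel_params from an ath12k_wmi_channel_arg, with the
+ * field mapping taken from the WMI_CHAN_REG_INFO1_* masks above.
+ */
+static inline __le32
+ath12k_wmi_ex_chan_reg_info1(const struct ath12k_wmi_channel_arg *arg)
+{
+ return le32_encode_bits(arg->minpower, WMI_CHAN_REG_INFO1_MIN_PWR) |
+        le32_encode_bits(arg->maxpower, WMI_CHAN_REG_INFO1_MAX_PWR) |
+        le32_encode_bits(arg->maxregpower, WMI_CHAN_REG_INFO1_MAX_REG_PWR) |
+        le32_encode_bits(arg->reg_class_id, WMI_CHAN_REG_INFO1_REG_CLS);
+}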
+
+enum wmi_sta_ps_mode {
+ WMI_STA_PS_MODE_DISABLED = 0,
+ WMI_STA_PS_MODE_ENABLED = 1,
+};
+
+#define WMI_SMPS_MASK_LOWER_16BITS 0xFF
+#define WMI_SMPS_MASK_UPPER_3BITS 0x7
+#define WMI_SMPS_PARAM_VALUE_SHIFT 29
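+
+/* Illustrative sketch (the exact composition is an assumption of this
+ * sketch): combining an SMPS parameter and value with the mask/shift
+ * definitions above before writing wmi_sta_smps_param_cmd::value.
+ */
+static inline u32 ath12k_wmi_ex_smps_value(u32 param, u32 value)
+{
+ return (value & WMI_SMPS_MASK_LOWER_16BITS) |
+        ((param & WMI_SMPS_MASK_UPPER_3BITS) << WMI_SMPS_PARAM_VALUE_SHIFT);
+}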
+
+#define ATH12K_WMI_FW_HANG_ASSERT_TYPE 1
+#define ATH12K_WMI_FW_HANG_DELAY 0
+
+/* type: 0 = unused, 1 = simulate a firmware ASSERT,
+ * 2 = simulate firmware not responding to the detect command.
+ * delay_time_ms: delay in milliseconds before the simulated hang.
+ */
+
+struct wmi_force_fw_hang_cmd {
+ __le32 tlv_header;
+ __le32 type;
+ __le32 delay_time_ms;
+} __packed;
+
+struct wmi_vdev_set_param_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 param_id;
+ __le32 param_value;
+} __packed;
+
+struct wmi_get_pdev_temperature_cmd {
+ __le32 tlv_header;
+ __le32 param;
+ __le32 pdev_id;
+} __packed;
+
+#define WMI_BEACON_TX_BUFFER_SIZE 512
+
+struct wmi_bcn_tmpl_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 tim_ie_offset;
+ __le32 buf_len;
+ __le32 csa_switch_count_offset;
+ __le32 ext_csa_switch_count_offset;
+ __le32 csa_event_bitmap;
+ __le32 mbssid_ie_offset;
+ __le32 esp_ie_offset;
+} __packed;
+
+struct wmi_vdev_install_key_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+ __le32 key_idx;
+ __le32 key_flags;
+ __le32 key_cipher;
+ __le64 key_rsc_counter;
+ __le64 key_global_rsc_counter;
+ __le64 key_tsc_counter;
+ u8 wpi_key_rsc_counter[16];
+ u8 wpi_key_tsc_counter[16];
+ __le32 key_len;
+ __le32 key_txmic_len;
+ __le32 key_rxmic_len;
+ __le32 is_group_key_id_valid;
+ __le32 group_key_id;
+
+ /* Followed by key_data: the key itself, then the TX MIC,
+ * then the RX MIC
+ */
+} __packed;
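+
+/* Illustrative sketch (hypothetical helper; the 4-byte alignment is an
+ * assumption): sizing the variable key_data blob that follows the command
+ * per the layout comment above.
+ */
+static inline u32 ath12k_wmi_ex_install_key_data_len(u32 key_len,
+           u32 txmic_len, u32 rxmic_len)
+{
+ return roundup(key_len + txmic_len + rxmic_len, sizeof(u32));
+}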
+
+struct wmi_vdev_install_key_arg {
+ u32 vdev_id;
+ const u8 *macaddr;
+ u32 key_idx;
+ u32 key_flags;
+ u32 key_cipher;
+ u32 key_len;
+ u32 key_txmic_len;
+ u32 key_rxmic_len;
+ u64 key_rsc_counter;
+ const void *key_data;
+};
+
+#define WMI_MAX_SUPPORTED_RATES 128
+#define WMI_HOST_MAX_HECAP_PHY_SIZE 3
+#define WMI_HOST_MAX_HE_RATE_SET 3
+#define WMI_HECAP_TXRX_MCS_NSS_IDX_80 0
+#define WMI_HECAP_TXRX_MCS_NSS_IDX_160 1
+#define WMI_HECAP_TXRX_MCS_NSS_IDX_80_80 2
+
+struct wmi_rate_set_arg {
+ u32 num_rates;
+ u8 rates[WMI_MAX_SUPPORTED_RATES];
+};
+
+struct ath12k_wmi_peer_assoc_arg {
+ u32 vdev_id;
+ u32 peer_new_assoc;
+ u32 peer_associd;
+ u32 peer_flags;
+ u32 peer_caps;
+ u32 peer_listen_intval;
+ u32 peer_ht_caps;
+ u32 peer_max_mpdu;
+ u32 peer_mpdu_density;
+ u32 peer_rate_caps;
+ u32 peer_nss;
+ u32 peer_vht_caps;
+ u32 peer_phymode;
+ u32 peer_ht_info[2];
+ struct wmi_rate_set_arg peer_legacy_rates;
+ struct wmi_rate_set_arg peer_ht_rates;
+ u32 rx_max_rate;
+ u32 rx_mcs_set;
+ u32 tx_max_rate;
+ u32 tx_mcs_set;
+ u8 vht_capable;
+ u8 min_data_rate;
+ u32 tx_max_mcs_nss;
+ u32 peer_bw_rxnss_override;
+ bool is_pmf_enabled;
+ bool is_wme_set;
+ bool qos_flag;
+ bool apsd_flag;
+ bool ht_flag;
+ bool bw_40;
+ bool bw_80;
+ bool bw_160;
+ bool stbc_flag;
+ bool ldpc_flag;
+ bool static_mimops_flag;
+ bool dynamic_mimops_flag;
+ bool spatial_mux_flag;
+ bool vht_flag;
+ bool vht_ng_flag;
+ bool need_ptk_4_way;
+ bool need_gtk_2_way;
+ bool auth_flag;
+ bool safe_mode_enabled;
+ bool amsdu_disable;
+ /* Use common structure */
+ u8 peer_mac[ETH_ALEN];
+
+ bool he_flag;
+ u32 peer_he_cap_macinfo[2];
+ u32 peer_he_cap_macinfo_internal;
+ u32 peer_he_caps_6ghz;
+ u32 peer_he_ops;
+ u32 peer_he_cap_phyinfo[WMI_HOST_MAX_HECAP_PHY_SIZE];
+ u32 peer_he_mcs_count;
+ u32 peer_he_rx_mcs_set[WMI_HOST_MAX_HE_RATE_SET];
+ u32 peer_he_tx_mcs_set[WMI_HOST_MAX_HE_RATE_SET];
+ bool twt_responder;
+ bool twt_requester;
+ struct ath12k_wmi_ppe_threshold_arg peer_ppet;
+};
+
+struct wmi_peer_assoc_complete_cmd {
+ __le32 tlv_header;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+ __le32 vdev_id;
+ __le32 peer_new_assoc;
+ __le32 peer_associd;
+ __le32 peer_flags;
+ __le32 peer_caps;
+ __le32 peer_listen_intval;
+ __le32 peer_ht_caps;
+ __le32 peer_max_mpdu;
+ __le32 peer_mpdu_density;
+ __le32 peer_rate_caps;
+ __le32 peer_nss;
+ __le32 peer_vht_caps;
+ __le32 peer_phymode;
+ __le32 peer_ht_info[2];
+ __le32 num_peer_legacy_rates;
+ __le32 num_peer_ht_rates;
+ __le32 peer_bw_rxnss_override;
+ struct ath12k_wmi_ppe_threshold_params peer_ppet;
+ __le32 peer_he_cap_info;
+ __le32 peer_he_ops;
+ __le32 peer_he_cap_phy[WMI_MAX_HECAP_PHY_SIZE];
+ __le32 peer_he_mcs;
+ __le32 peer_he_cap_info_ext;
+ __le32 peer_he_cap_info_internal;
+ __le32 min_data_rate;
+ __le32 peer_he_caps_6ghz;
+} __packed;
+
+struct wmi_stop_scan_cmd {
+ __le32 tlv_header;
+ __le32 requestor;
+ __le32 scan_id;
+ __le32 req_type;
+ __le32 vdev_id;
+ __le32 pdev_id;
+} __packed;
+
+struct ath12k_wmi_scan_chan_list_arg {
+ u32 pdev_id;
+ u16 nallchans;
+ struct ath12k_wmi_channel_arg channel[];
+};
+
+struct wmi_scan_chan_list_cmd {
+ __le32 tlv_header;
+ __le32 num_scan_chans;
+ __le32 flags;
+ __le32 pdev_id;
+} __packed;
+
+#define WMI_MGMT_SEND_DOWNLD_LEN 64
+
+#define WMI_TX_PARAMS_DWORD0_POWER GENMASK(7, 0)
+#define WMI_TX_PARAMS_DWORD0_MCS_MASK GENMASK(19, 8)
+#define WMI_TX_PARAMS_DWORD0_NSS_MASK GENMASK(27, 20)
+#define WMI_TX_PARAMS_DWORD0_RETRY_LIMIT GENMASK(31, 28)
+
+#define WMI_TX_PARAMS_DWORD1_CHAIN_MASK GENMASK(7, 0)
+#define WMI_TX_PARAMS_DWORD1_BW_MASK GENMASK(14, 8)
+#define WMI_TX_PARAMS_DWORD1_PREAMBLE_TYPE GENMASK(19, 15)
+#define WMI_TX_PARAMS_DWORD1_FRAME_TYPE BIT(20)
+#define WMI_TX_PARAMS_DWORD1_RSVD GENMASK(31, 21)
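+
+/* Illustrative sketch (hypothetical helper): packing the first tx-params
+ * dword from the WMI_TX_PARAMS_DWORD0_* field masks above.
+ */
+static inline u32 ath12k_wmi_ex_tx_params_dword0(u32 pwr, u32 mcs_mask,
+           u32 nss_mask, u32 retry_limit)
+{
+ return u32_encode_bits(pwr, WMI_TX_PARAMS_DWORD0_POWER) |
+        u32_encode_bits(mcs_mask, WMI_TX_PARAMS_DWORD0_MCS_MASK) |
+        u32_encode_bits(nss_mask, WMI_TX_PARAMS_DWORD0_NSS_MASK) |
+        u32_encode_bits(retry_limit, WMI_TX_PARAMS_DWORD0_RETRY_LIMIT);
+}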
+
+struct wmi_mgmt_send_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 desc_id;
+ __le32 chanfreq;
+ __le32 paddr_lo;
+ __le32 paddr_hi;
+ __le32 frame_len;
+ __le32 buf_len;
+ __le32 tx_params_valid;
+
+ /* This TLV is followed by struct wmi_mgmt_frame */
+
+ /* Followed by struct wmi_mgmt_send_params */
+} __packed;
+
+struct wmi_sta_powersave_mode_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 sta_ps_mode;
+} __packed;
+
+struct wmi_sta_smps_force_mode_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 forced_mode;
+} __packed;
+
+struct wmi_sta_smps_param_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 param;
+ __le32 value;
+} __packed;
+
+struct ath12k_wmi_bcn_prb_info_params {
+ __le32 tlv_header;
+ __le32 caps;
+ __le32 erp;
+} __packed;
+
+enum {
+ WMI_PDEV_SUSPEND,
+ WMI_PDEV_SUSPEND_AND_DISABLE_INTR,
+};
+
+struct wmi_pdev_green_ap_ps_enable_cmd_param {
+ __le32 tlv_header;
+ __le32 pdev_id;
+ __le32 enable;
+} __packed;
+
+struct ath12k_wmi_ap_ps_arg {
+ u32 vdev_id;
+ u32 param;
+ u32 value;
+};
+
+enum set_init_cc_type {
+ WMI_COUNTRY_INFO_TYPE_ALPHA,
+ WMI_COUNTRY_INFO_TYPE_COUNTRY_CODE,
+ WMI_COUNTRY_INFO_TYPE_REGDOMAIN,
+};
+
+enum set_init_cc_flags {
+ INVALID_CC,
+ CC_IS_SET,
+ REGDMN_IS_SET,
+ ALPHA_IS_SET,
+};
+
+struct ath12k_wmi_init_country_arg {
+ union {
+ u16 country_code;
+ u16 regdom_id;
+ u8 alpha2[3];
+ } cc_info;
+ enum set_init_cc_flags flags;
+};
+
+struct wmi_init_country_cmd {
+ __le32 tlv_header;
+ __le32 pdev_id;
+ __le32 init_cc_type;
+ union {
+ __le32 country_code;
+ __le32 regdom_id;
+ __le32 alpha2;
+ } cc_info;
+} __packed;
+
+struct wmi_delba_send_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+ __le32 tid;
+ __le32 initiator;
+ __le32 reasoncode;
+} __packed;
+
+struct wmi_addba_setresponse_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+ __le32 tid;
+ __le32 statuscode;
+} __packed;
+
+struct wmi_addba_send_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+ __le32 tid;
+ __le32 buffersize;
+} __packed;
+
+struct wmi_addba_clear_resp_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+} __packed;
+
+#define DFS_PHYERR_UNIT_TEST_CMD 0
+#define DFS_UNIT_TEST_MODULE 0x2b
+#define DFS_UNIT_TEST_TOKEN 0xAA
+
+enum dfs_test_args_idx {
+ DFS_TEST_CMDID = 0,
+ DFS_TEST_PDEV_ID,
+ DFS_TEST_RADAR_PARAM,
+ DFS_MAX_TEST_ARGS,
+};
+
+struct wmi_dfs_unit_test_arg {
+ u32 cmd_id;
+ u32 pdev_id;
+ u32 radar_param;
+};
+
+struct wmi_unit_test_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 module_id;
+ __le32 num_args;
+ __le32 diag_token;
+ /* Followed by test args */
+} __packed;
+
+#define MAX_SUPPORTED_RATES 128
+
+#define WMI_PEER_AUTH 0x00000001
+#define WMI_PEER_QOS 0x00000002
+#define WMI_PEER_NEED_PTK_4_WAY 0x00000004
+#define WMI_PEER_NEED_GTK_2_WAY 0x00000010
+#define WMI_PEER_HE 0x00000400
+#define WMI_PEER_APSD 0x00000800
+#define WMI_PEER_HT 0x00001000
+#define WMI_PEER_40MHZ 0x00002000
+#define WMI_PEER_STBC 0x00008000
+#define WMI_PEER_LDPC 0x00010000
+#define WMI_PEER_DYN_MIMOPS 0x00020000
+#define WMI_PEER_STATIC_MIMOPS 0x00040000
+#define WMI_PEER_SPATIAL_MUX 0x00200000
+#define WMI_PEER_TWT_REQ 0x00400000
+#define WMI_PEER_TWT_RESP 0x00800000
+#define WMI_PEER_VHT 0x02000000
+#define WMI_PEER_80MHZ 0x04000000
+#define WMI_PEER_PMF 0x08000000
+/* TODO: Placeholder for WLAN_PEER_F_PS_PRESEND_REQUIRED = 0x10000000.
+ * Needs to be cleaned up.
+ */
+#define WMI_PEER_IS_P2P_CAPABLE 0x20000000
+#define WMI_PEER_160MHZ 0x40000000
+#define WMI_PEER_SAFEMODE_EN 0x80000000
+
+struct ath12k_wmi_vht_rate_set_params {
+ __le32 tlv_header;
+ __le32 rx_max_rate;
+ __le32 rx_mcs_set;
+ __le32 tx_max_rate;
+ __le32 tx_mcs_set;
+ __le32 tx_max_mcs_nss;
+} __packed;
+
+struct ath12k_wmi_he_rate_set_params {
+ __le32 tlv_header;
+ __le32 rx_mcs_set;
+ __le32 tx_mcs_set;
+} __packed;
+
+#define MAX_REG_RULES 10
+#define REG_ALPHA2_LEN 2
+#define MAX_6G_REG_RULES 5
+#define REG_US_5G_NUM_REG_RULES 4
+
+enum wmi_start_event_param {
+ WMI_VDEV_START_RESP_EVENT = 0,
+ WMI_VDEV_RESTART_RESP_EVENT,
+};
+
+struct wmi_vdev_start_resp_event {
+ __le32 vdev_id;
+ __le32 requestor_id;
+ /* enum wmi_start_event_param */
+ __le32 resp_type;
+ __le32 status;
+ __le32 chain_mask;
+ __le32 smps_mode;
+ union {
+ __le32 mac_id;
+ __le32 pdev_id;
+ };
+ __le32 cfgd_tx_streams;
+ __le32 cfgd_rx_streams;
+} __packed;
+
+/* VDEV start response status codes */
+enum wmi_vdev_start_resp_status_code {
+ WMI_VDEV_START_RESPONSE_STATUS_SUCCESS = 0,
+ WMI_VDEV_START_RESPONSE_INVALID_VDEVID = 1,
+ WMI_VDEV_START_RESPONSE_NOT_SUPPORTED = 2,
+ WMI_VDEV_START_RESPONSE_DFS_VIOLATION = 3,
+ WMI_VDEV_START_RESPONSE_INVALID_REGDOMAIN = 4,
+};
+
+enum wmi_reg_6g_ap_type {
+ WMI_REG_INDOOR_AP = 0,
+ WMI_REG_STD_POWER_AP = 1,
+ WMI_REG_VLP_AP = 2,
+ WMI_REG_CURRENT_MAX_AP_TYPE,
+ WMI_REG_MAX_SUPP_AP_TYPE = WMI_REG_VLP_AP,
+ WMI_REG_MAX_AP_TYPE = 7,
+};
+
+enum wmi_reg_6g_client_type {
+ WMI_REG_DEFAULT_CLIENT = 0,
+ WMI_REG_SUBORDINATE_CLIENT = 1,
+ WMI_REG_MAX_CLIENT_TYPE = 2,
+};
+
+/* Regulatory Rule Flags Passed by FW */
+#define REGULATORY_CHAN_DISABLED BIT(0)
+#define REGULATORY_CHAN_NO_IR BIT(1)
+#define REGULATORY_CHAN_RADAR BIT(3)
+#define REGULATORY_CHAN_NO_OFDM BIT(6)
+#define REGULATORY_CHAN_INDOOR_ONLY BIT(9)
+
+#define REGULATORY_CHAN_NO_HT40 BIT(4)
+#define REGULATORY_CHAN_NO_80MHZ BIT(7)
+#define REGULATORY_CHAN_NO_160MHZ BIT(8)
+#define REGULATORY_CHAN_NO_20MHZ BIT(11)
+#define REGULATORY_CHAN_NO_10MHZ BIT(12)
+
+enum {
+ WMI_REG_SET_CC_STATUS_PASS = 0,
+ WMI_REG_CURRENT_ALPHA2_NOT_FOUND = 1,
+ WMI_REG_INIT_ALPHA2_NOT_FOUND = 2,
+ WMI_REG_SET_CC_CHANGE_NOT_ALLOWED = 3,
+ WMI_REG_SET_CC_STATUS_NO_MEMORY = 4,
+ WMI_REG_SET_CC_STATUS_FAIL = 5,
+};
+
+#define WMI_REG_CLIENT_MAX 4
+
+struct wmi_reg_chan_list_cc_ext_event {
+ __le32 status_code;
+ __le32 phy_id;
+ __le32 alpha2;
+ __le32 num_phy;
+ __le32 country_id;
+ __le32 domain_code;
+ __le32 dfs_region;
+ __le32 phybitmap;
+ __le32 min_bw_2g;
+ __le32 max_bw_2g;
+ __le32 min_bw_5g;
+ __le32 max_bw_5g;
+ __le32 num_2g_reg_rules;
+ __le32 num_5g_reg_rules;
+ __le32 client_type;
+ __le32 rnr_tpe_usable;
+ __le32 unspecified_ap_usable;
+ __le32 domain_code_6g_ap_lpi;
+ __le32 domain_code_6g_ap_sp;
+ __le32 domain_code_6g_ap_vlp;
+ __le32 domain_code_6g_client_lpi[WMI_REG_CLIENT_MAX];
+ __le32 domain_code_6g_client_sp[WMI_REG_CLIENT_MAX];
+ __le32 domain_code_6g_client_vlp[WMI_REG_CLIENT_MAX];
+ __le32 domain_code_6g_super_id;
+ __le32 min_bw_6g_ap_sp;
+ __le32 max_bw_6g_ap_sp;
+ __le32 min_bw_6g_ap_lpi;
+ __le32 max_bw_6g_ap_lpi;
+ __le32 min_bw_6g_ap_vlp;
+ __le32 max_bw_6g_ap_vlp;
+ __le32 min_bw_6g_client_sp[WMI_REG_CLIENT_MAX];
+ __le32 max_bw_6g_client_sp[WMI_REG_CLIENT_MAX];
+ __le32 min_bw_6g_client_lpi[WMI_REG_CLIENT_MAX];
+ __le32 max_bw_6g_client_lpi[WMI_REG_CLIENT_MAX];
+ __le32 min_bw_6g_client_vlp[WMI_REG_CLIENT_MAX];
+ __le32 max_bw_6g_client_vlp[WMI_REG_CLIENT_MAX];
+ __le32 num_6g_reg_rules_ap_sp;
+ __le32 num_6g_reg_rules_ap_lpi;
+ __le32 num_6g_reg_rules_ap_vlp;
+ __le32 num_6g_reg_rules_cl_sp[WMI_REG_CLIENT_MAX];
+ __le32 num_6g_reg_rules_cl_lpi[WMI_REG_CLIENT_MAX];
+ __le32 num_6g_reg_rules_cl_vlp[WMI_REG_CLIENT_MAX];
+} __packed;
+
+struct ath12k_wmi_reg_rule_ext_params {
+ __le32 tlv_header;
+ __le32 freq_info;
+ __le32 bw_pwr_info;
+ __le32 flag_info;
+ __le32 psd_power_info;
+} __packed;
+
+struct wmi_vdev_delete_resp_event {
+ __le32 vdev_id;
+} __packed;
+
+struct wmi_peer_delete_resp_event {
+ __le32 vdev_id;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+} __packed;
+
+struct wmi_bcn_tx_status_event {
+ __le32 vdev_id;
+ __le32 tx_status;
+} __packed;
+
+struct wmi_vdev_stopped_event {
+ __le32 vdev_id;
+} __packed;
+
+struct wmi_pdev_bss_chan_info_event {
+ __le32 pdev_id;
+ __le32 freq; /* Units in MHz */
+ __le32 noise_floor; /* units are dBm */
+ /* rx clear - how often the channel was unused */
+ __le32 rx_clear_count_low;
+ __le32 rx_clear_count_high;
+ /* cycle count - elapsed time during measured period, in clock ticks */
+ __le32 cycle_count_low;
+ __le32 cycle_count_high;
+ /* tx cycle count - elapsed time spent in tx, in clock ticks */
+ __le32 tx_cycle_count_low;
+ __le32 tx_cycle_count_high;
+ /* rx cycle count - elapsed time spent in rx, in clock ticks */
+ __le32 rx_cycle_count_low;
+ __le32 rx_cycle_count_high;
+ /* rx cycle count for my BSS, in 64-bit format */
+ __le32 rx_bss_cycle_count_low;
+ __le32 rx_bss_cycle_count_high;
+} __packed;
+
+#define WMI_VDEV_INSTALL_KEY_COMPL_STATUS_SUCCESS 0
+
+struct wmi_vdev_install_key_compl_event {
+ __le32 vdev_id;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+ __le32 key_idx;
+ __le32 key_flags;
+ __le32 status;
+} __packed;
+
+struct wmi_vdev_install_key_complete_arg {
+ u32 vdev_id;
+ const u8 *macaddr;
+ u32 key_idx;
+ u32 key_flags;
+ u32 status;
+};
+
+struct wmi_peer_assoc_conf_event {
+ __le32 vdev_id;
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+} __packed;
+
+struct wmi_peer_assoc_conf_arg {
+ u32 vdev_id;
+ const u8 *macaddr;
+};
+
+struct wmi_fils_discovery_event {
+ __le32 vdev_id;
+ __le32 fils_tt;
+ __le32 tbtt;
+} __packed;
+
+struct wmi_probe_resp_tx_status_event {
+ __le32 vdev_id;
+ __le32 tx_status;
+} __packed;
+
+struct wmi_pdev_ctl_failsafe_chk_event {
+ __le32 pdev_id;
+ __le32 ctl_failsafe_status;
+} __packed;
+
+struct ath12k_wmi_pdev_csa_event {
+ __le32 pdev_id;
+ __le32 current_switch_count;
+ __le32 num_vdevs;
+} __packed;
+
+struct ath12k_wmi_pdev_radar_event {
+ __le32 pdev_id;
+ __le32 detection_mode;
+ __le32 chan_freq;
+ __le32 chan_width;
+ __le32 detector_id;
+ __le32 segment_id;
+ __le32 timestamp;
+ __le32 is_chirp;
+ a_sle32 freq_offset;
+ a_sle32 sidx;
+} __packed;
+
+struct wmi_pdev_temperature_event {
+ /* temperature value in degrees Celsius */
+ a_sle32 temp;
+ __le32 pdev_id;
+} __packed;
+
+#define WMI_RX_STATUS_OK 0x00
+#define WMI_RX_STATUS_ERR_CRC 0x01
+#define WMI_RX_STATUS_ERR_DECRYPT 0x08
+#define WMI_RX_STATUS_ERR_MIC 0x10
+#define WMI_RX_STATUS_ERR_KEY_CACHE_MISS 0x20
+
+#define WLAN_MGMT_TXRX_HOST_MAX_ANTENNA 4
+
+struct ath12k_wmi_mgmt_rx_arg {
+ u32 chan_freq;
+ u32 channel;
+ u32 snr;
+ u8 rssi_ctl[WLAN_MGMT_TXRX_HOST_MAX_ANTENNA];
+ u32 rate;
+ enum wmi_phy_mode phy_mode;
+ u32 buf_len;
+ int status;
+ u32 flags;
+ int rssi;
+ u32 tsf_delta;
+ u8 pdev_id;
+};
+
+#define ATH_MAX_ANTENNA 4
+
+struct ath12k_wmi_mgmt_rx_params {
+ __le32 channel;
+ __le32 snr;
+ __le32 rate;
+ __le32 phy_mode;
+ __le32 buf_len;
+ __le32 status;
+ __le32 rssi_ctl[ATH_MAX_ANTENNA];
+ __le32 flags;
+ a_sle32 rssi;
+ __le32 tsf_delta;
+ __le32 rx_tsf_l32;
+ __le32 rx_tsf_u32;
+ __le32 pdev_id;
+ __le32 chan_freq;
+} __packed;
+
+#define MAX_ANTENNA_EIGHT 8
+
+struct wmi_mgmt_tx_compl_event {
+ __le32 desc_id;
+ __le32 status;
+ __le32 pdev_id;
+} __packed;
+
+struct wmi_scan_event {
+ __le32 event_type; /* %WMI_SCAN_EVENT_ */
+ __le32 reason; /* %WMI_SCAN_REASON_ */
+ __le32 channel_freq; /* only valid for WMI_SCAN_EVENT_FOREIGN_CHAN */
+ __le32 scan_req_id;
+ __le32 scan_id;
+ __le32 vdev_id;
+ /* TSF timestamp at which the scan event (%WMI_SCAN_EVENT_) completed.
+ * For an AP it is the TSF of the AP vdev.
+ * For a STA in the connected state it is the TSF of the AP.
+ * For a STA that is not connected it is the free-running HW timer.
+ */
+ __le32 tsf_timestamp;
+} __packed;
+
+struct wmi_peer_sta_kickout_arg {
+ const u8 *mac_addr;
+};
+
+struct wmi_peer_sta_kickout_event {
+ struct ath12k_wmi_mac_addr_params peer_macaddr;
+} __packed;
+
+enum wmi_roam_reason {
+ WMI_ROAM_REASON_BETTER_AP = 1,
+ WMI_ROAM_REASON_BEACON_MISS = 2,
+ WMI_ROAM_REASON_LOW_RSSI = 3,
+ WMI_ROAM_REASON_SUITABLE_AP_FOUND = 4,
+ WMI_ROAM_REASON_HO_FAILED = 5,
+
+ /* keep last */
+ WMI_ROAM_REASON_MAX,
+};
+
+struct wmi_roam_event {
+ __le32 vdev_id;
+ __le32 reason;
+ __le32 rssi;
+} __packed;
+
+#define WMI_CHAN_INFO_START_RESP 0
+#define WMI_CHAN_INFO_END_RESP 1
+
+struct wmi_chan_info_event {
+ __le32 err_code;
+ __le32 freq;
+ __le32 cmd_flags;
+ __le32 noise_floor;
+ __le32 rx_clear_count;
+ __le32 cycle_count;
+ __le32 chan_tx_pwr_range;
+ __le32 chan_tx_pwr_tp;
+ __le32 rx_frame_count;
+ __le32 my_bss_rx_cycle_count;
+ __le32 rx_11b_mode_data_duration;
+ __le32 tx_frame_cnt;
+ __le32 mac_clk_mhz;
+ __le32 vdev_id;
+} __packed;
+
+struct ath12k_wmi_target_cap_arg {
+ u32 phy_capability;
+ u32 max_frag_entry;
+ u32 num_rf_chains;
+ u32 ht_cap_info;
+ u32 vht_cap_info;
+ u32 vht_supp_mcs;
+ u32 hw_min_tx_power;
+ u32 hw_max_tx_power;
+ u32 sys_cap_info;
+ u32 min_pkt_size_enable;
+ u32 max_bcn_ie_size;
+ u32 max_num_scan_channels;
+ u32 max_supported_macs;
+ u32 wmi_fw_sub_feat_caps;
+ u32 txrx_chainmask;
+ u32 default_dbs_hw_mode_index;
+ u32 num_msdu_desc;
+};
+
+enum wmi_vdev_type {
+ WMI_VDEV_TYPE_AP = 1,
+ WMI_VDEV_TYPE_STA = 2,
+ WMI_VDEV_TYPE_IBSS = 3,
+ WMI_VDEV_TYPE_MONITOR = 4,
+};
+
+enum wmi_vdev_subtype {
+ WMI_VDEV_SUBTYPE_NONE,
+ WMI_VDEV_SUBTYPE_P2P_DEVICE,
+ WMI_VDEV_SUBTYPE_P2P_CLIENT,
+ WMI_VDEV_SUBTYPE_P2P_GO,
+ WMI_VDEV_SUBTYPE_PROXY_STA,
+ WMI_VDEV_SUBTYPE_MESH_NON_11S,
+ WMI_VDEV_SUBTYPE_MESH_11S,
+};
+
+enum wmi_sta_powersave_param {
+ WMI_STA_PS_PARAM_RX_WAKE_POLICY = 0,
+ WMI_STA_PS_PARAM_TX_WAKE_THRESHOLD = 1,
+ WMI_STA_PS_PARAM_PSPOLL_COUNT = 2,
+ WMI_STA_PS_PARAM_INACTIVITY_TIME = 3,
+ WMI_STA_PS_PARAM_UAPSD = 4,
+};
+
+enum wmi_sta_ps_param_uapsd {
+ WMI_STA_PS_UAPSD_AC0_DELIVERY_EN = (1 << 0),
+ WMI_STA_PS_UAPSD_AC0_TRIGGER_EN = (1 << 1),
+ WMI_STA_PS_UAPSD_AC1_DELIVERY_EN = (1 << 2),
+ WMI_STA_PS_UAPSD_AC1_TRIGGER_EN = (1 << 3),
+ WMI_STA_PS_UAPSD_AC2_DELIVERY_EN = (1 << 4),
+ WMI_STA_PS_UAPSD_AC2_TRIGGER_EN = (1 << 5),
+ WMI_STA_PS_UAPSD_AC3_DELIVERY_EN = (1 << 6),
+ WMI_STA_PS_UAPSD_AC3_TRIGGER_EN = (1 << 7),
+};
+
+enum wmi_sta_ps_param_tx_wake_threshold {
+ WMI_STA_PS_TX_WAKE_THRESHOLD_NEVER = 0,
+ WMI_STA_PS_TX_WAKE_THRESHOLD_ALWAYS = 1,
+
+ /* Values greater than one indicate the number of TX attempts per
+ * beacon interval after which the STA will wake up
+ */
+};
+
+/* The maximum number of PS-Poll frames the FW will send in response to
+ * traffic advertised in the TIM before waking up (by sending a null frame
+ * with PS = 0). Value 0 has a special meaning: there is no maximum count,
+ * and the FW will send as many PS-Poll frames as are necessary to retrieve
+ * the buffered BUs. This parameter is used when the RX wake policy is
+ * WMI_STA_PS_RX_WAKE_POLICY_POLL_UAPSD and ignored when the RX wake
+ * policy is WMI_STA_PS_RX_WAKE_POLICY_WAKE.
+ */
+enum wmi_sta_ps_param_pspoll_count {
+ WMI_STA_PS_PSPOLL_COUNT_NO_MAX = 0,
+ /* Values greater than 0 indicate the maximum number of PS-Poll frames
+ * FW will send before waking up.
+ */
+};
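+
+/* Illustrative sketch (hypothetical helper, send path omitted): the
+ * pspoll-count knob above travels in struct wmi_sta_powersave_param_cmd,
+ * defined earlier in this file.
+ */
+static inline void
+ath12k_wmi_ex_fill_pspoll_count(struct wmi_sta_powersave_param_cmd *cmd,
+        u32 vdev_id, u32 count)
+{
+ cmd->vdev_id = cpu_to_le32(vdev_id);
+ cmd->param = cpu_to_le32(WMI_STA_PS_PARAM_PSPOLL_COUNT);
+ cmd->value = cpu_to_le32(count);
+}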
+
+/* U-APSD configuration of peer station from (re)assoc request and TSPECs */
+enum wmi_ap_ps_param_uapsd {
+ WMI_AP_PS_UAPSD_AC0_DELIVERY_EN = (1 << 0),
+ WMI_AP_PS_UAPSD_AC0_TRIGGER_EN = (1 << 1),
+ WMI_AP_PS_UAPSD_AC1_DELIVERY_EN = (1 << 2),
+ WMI_AP_PS_UAPSD_AC1_TRIGGER_EN = (1 << 3),
+ WMI_AP_PS_UAPSD_AC2_DELIVERY_EN = (1 << 4),
+ WMI_AP_PS_UAPSD_AC2_TRIGGER_EN = (1 << 5),
+ WMI_AP_PS_UAPSD_AC3_DELIVERY_EN = (1 << 6),
+ WMI_AP_PS_UAPSD_AC3_TRIGGER_EN = (1 << 7),
+};
+
+/* U-APSD maximum service period of peer station */
+enum wmi_ap_ps_peer_param_max_sp {
+ WMI_AP_PS_PEER_PARAM_MAX_SP_UNLIMITED = 0,
+ WMI_AP_PS_PEER_PARAM_MAX_SP_2 = 1,
+ WMI_AP_PS_PEER_PARAM_MAX_SP_4 = 2,
+ WMI_AP_PS_PEER_PARAM_MAX_SP_6 = 3,
+ MAX_WMI_AP_PS_PEER_PARAM_MAX_SP,
+};
+
+enum wmi_ap_ps_peer_param {
+ /** Set the UAPSD configuration for a given peer.
+ *
+ * This includes the delivery and trigger enabled state for each AC.
+ * The host MLME needs to set this based on the AP capability and the
+ * station's request, as carried in the association request received
+ * from the station.
+ *
+ * The lower 8 bits of the value specify the UAPSD configuration.
+ *
+ * (see enum wmi_ap_ps_param_uapsd)
+ * The default value is 0.
+ */
+ WMI_AP_PS_PEER_PARAM_UAPSD = 0,
+
+ /**
+ * Set the service period for a UAPSD capable station
+ *
+ * The service period from wme ie in the (re)assoc request frame.
+ *
+ * (see enum wmi_ap_ps_peer_param_max_sp)
+ */
+ WMI_AP_PS_PEER_PARAM_MAX_SP = 1,
+
+ /** Time in seconds for aging out buffered frames
+ * for a STA in power save
+ */
+ WMI_AP_PS_PEER_PARAM_AGEOUT_TIME = 2,
+
+ /** Specify frame types that are considered SIFS
+ * RESP trigger frames
+ */
+ WMI_AP_PS_PEER_PARAM_SIFS_RESP_FRMTYPE = 3,
+
+ /** Specifies the trigger state of a TID.
+ * Valid only for the UAPSD frame type
+ */
+ WMI_AP_PS_PEER_PARAM_SIFS_RESP_UAPSD = 4,
+
+ /* Specifies the WNM sleep state of a STA */
+ WMI_AP_PS_PEER_PARAM_WNM_SLEEP = 5,
+};
+
+#define DISABLE_SIFS_RESPONSE_TRIGGER 0
+
+#define WMI_MAX_KEY_INDEX 3
+#define WMI_MAX_KEY_LEN 32
+
+enum wmi_key_type {
+ WMI_KEY_PAIRWISE = 0,
+ WMI_KEY_GROUP = 1,
+};
+
+enum wmi_cipher_type {
+ WMI_CIPHER_NONE = 0, /* clear key */
+ WMI_CIPHER_WEP = 1,
+ WMI_CIPHER_TKIP = 2,
+ WMI_CIPHER_AES_OCB = 3,
+ WMI_CIPHER_AES_CCM = 4,
+ WMI_CIPHER_WAPI = 5,
+ WMI_CIPHER_CKIP = 6,
+ WMI_CIPHER_AES_CMAC = 7,
+ WMI_CIPHER_ANY = 8,
+ WMI_CIPHER_AES_GCM = 9,
+ WMI_CIPHER_AES_GMAC = 10,
+};
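
For orientation, a host-side mapping from mac80211 cipher suites (WLAN_CIPHER_SUITE_* from <linux/ieee80211.h>) onto enum wmi_cipher_type commonly looks like the sketch below; the helper name is hypothetical, and the exact set of suites a given firmware accepts is an assumption:

/* Hypothetical helper: mac80211 cipher suite -> enum wmi_cipher_type */
static u32 ath12k_cipher_to_wmi_sketch(u32 cipher)
{
	switch (cipher) {
	case WLAN_CIPHER_SUITE_TKIP:
		return WMI_CIPHER_TKIP;
	case WLAN_CIPHER_SUITE_CCMP:
	case WLAN_CIPHER_SUITE_CCMP_256:
		return WMI_CIPHER_AES_CCM;
	case WLAN_CIPHER_SUITE_GCMP:
	case WLAN_CIPHER_SUITE_GCMP_256:
		return WMI_CIPHER_AES_GCM;
	case WLAN_CIPHER_SUITE_AES_CMAC:
		return WMI_CIPHER_AES_CMAC;
	default:
		return WMI_CIPHER_NONE;
	}
}
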
+
+/* Value to disable fixed rate setting */
+#define WMI_FIXED_RATE_NONE (0xffff)
+
+#define ATH12K_RC_VERSION_OFFSET 28
+#define ATH12K_RC_PREAMBLE_OFFSET 8
+#define ATH12K_RC_NSS_OFFSET 5
+
+#define ATH12K_HW_RATE_CODE(rate, nss, preamble) \
+ ((1 << ATH12K_RC_VERSION_OFFSET) | \
+ ((nss) << ATH12K_RC_NSS_OFFSET) | \
+ ((preamble) << ATH12K_RC_PREAMBLE_OFFSET) | \
+ (rate))
+
+/* Preamble types to be used with VDEV fixed rate configuration */
+enum wmi_rate_preamble {
+ WMI_RATE_PREAMBLE_OFDM,
+ WMI_RATE_PREAMBLE_CCK,
+ WMI_RATE_PREAMBLE_HT,
+ WMI_RATE_PREAMBLE_VHT,
+ WMI_RATE_PREAMBLE_HE,
+};
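
A worked example of the ATH12K_HW_RATE_CODE() macro above, with illustrative operands (HT preamble, nss field 1, MCS 7):

u32 rc = ATH12K_HW_RATE_CODE(7, 1, WMI_RATE_PREAMBLE_HT);
/*
 * rc = (1 << 28)	version bit
 *    | (2 << 8)	WMI_RATE_PREAMBLE_HT
 *    | (1 << 5)	nss
 *    | 7		rate (MCS index)
 *    = 0x10000227
 */

Passing WMI_FIXED_RATE_NONE instead of a rate code restores normal rate control.
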
+
+/**
+ * enum wmi_rtscts_prot_mode - Enable/Disable RTS/CTS and CTS2Self Protection.
+ * @WMI_RTS_CTS_DISABLED: RTS/CTS protection is disabled.
+ * @WMI_USE_RTS_CTS: RTS/CTS Enabled.
+ * @WMI_USE_CTS2SELF: CTS to self protection Enabled.
+ */
+enum wmi_rtscts_prot_mode {
+ WMI_RTS_CTS_DISABLED = 0,
+ WMI_USE_RTS_CTS = 1,
+ WMI_USE_CTS2SELF = 2,
+};
+
+/**
+ * enum wmi_rtscts_profile - Selection of RTS CTS profile along with enabling
+ * protection mode.
+ * @WMI_RTSCTS_FOR_NO_RATESERIES: Neither rate series should use RTS-CTS
+ * @WMI_RTSCTS_FOR_SECOND_RATESERIES: Only the second rate series will use RTS-CTS
+ * @WMI_RTSCTS_ACROSS_SW_RETRIES: Only the second rate series will use RTS-CTS,
+ * but if there is a sw retry, both rate
+ * series will use RTS-CTS.
+ * @WMI_RTSCTS_ERP: RTS/CTS used for ERP protection for every PPDU.
+ * @WMI_RTSCTS_FOR_ALL_RATESERIES: Enable RTS-CTS for all rate series.
+ */
+enum wmi_rtscts_profile {
+ WMI_RTSCTS_FOR_NO_RATESERIES = 0,
+ WMI_RTSCTS_FOR_SECOND_RATESERIES = 1,
+ WMI_RTSCTS_ACROSS_SW_RETRIES = 2,
+ WMI_RTSCTS_ERP = 3,
+ WMI_RTSCTS_FOR_ALL_RATESERIES = 4,
+};
+
+#define WMI_SKB_HEADROOM sizeof(struct wmi_cmd_hdr)
+
+enum wmi_sta_ps_param_rx_wake_policy {
+ WMI_STA_PS_RX_WAKE_POLICY_WAKE = 0,
+ WMI_STA_PS_RX_WAKE_POLICY_POLL_UAPSD = 1,
+};
+
+ /* Do not change existing values! Used by the ath12k_frame_mode
+ * module parameter.
+ */
+enum ath12k_hw_txrx_mode {
+ ATH12K_HW_TXRX_RAW = 0,
+ ATH12K_HW_TXRX_NATIVE_WIFI = 1,
+ ATH12K_HW_TXRX_ETHERNET = 2,
+};
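
The warning exists because these values cross the module-parameter boundary. A minimal sketch of how such a parameter is typically wired up (the variable name, default and permission bits are assumptions, not taken from this patch):

/* Sketch: module parameter backed by enum ath12k_hw_txrx_mode */
static unsigned int ath12k_frame_mode = ATH12K_HW_TXRX_NATIVE_WIFI;
module_param_named(frame_mode, ath12k_frame_mode, uint, 0644);
MODULE_PARM_DESC(frame_mode,
		 "Datapath frame mode (0: raw, 1: native wifi, 2: ethernet)");
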
+
+struct wmi_wmm_params {
+ __le32 tlv_header;
+ __le32 cwmin;
+ __le32 cwmax;
+ __le32 aifs;
+ __le32 txoplimit;
+ __le32 acm;
+ __le32 no_ack;
+} __packed;
+
+struct wmi_wmm_params_arg {
+ u8 acm;
+ u8 aifs;
+ u16 cwmin;
+ u16 cwmax;
+ u16 txop;
+ u8 no_ack;
+};
+
+struct wmi_vdev_set_wmm_params_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ struct wmi_wmm_params wmm_params[4];
+ __le32 wmm_param_type;
+} __packed;
+
+struct wmi_wmm_params_all_arg {
+ struct wmi_wmm_params_arg ac_be;
+ struct wmi_wmm_params_arg ac_bk;
+ struct wmi_wmm_params_arg ac_vi;
+ struct wmi_wmm_params_arg ac_vo;
+};
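
A hedged sketch of filling wmi_wmm_params_all_arg and handing it to ath12k_wmi_send_wmm_update_cmd(), which is declared further down in this header; the AC values are illustrative, not driver policy, and the function name is hypothetical:

/* Sketch: program identical illustrative WMM parameters on all four ACs */
static int ath12k_wmm_update_sketch(struct ath12k *ar, u32 vdev_id)
{
	struct wmi_wmm_params_all_arg wmm = {};
	struct wmi_wmm_params_arg *acs[] = {
		&wmm.ac_be, &wmm.ac_bk, &wmm.ac_vi, &wmm.ac_vo,
	};
	int i;

	for (i = 0; i < ARRAY_SIZE(acs); i++) {
		acs[i]->aifs = 2;		/* illustrative values */
		acs[i]->cwmin = 15;
		acs[i]->cwmax = 1023;
		acs[i]->txop = 0;
		acs[i]->acm = 0;
		acs[i]->no_ack = 0;
	}

	return ath12k_wmi_send_wmm_update_cmd(ar, vdev_id, &wmm);
}
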
+
+#define ATH12K_TWT_DEF_STA_CONG_TIMER_MS 5000
+#define ATH12K_TWT_DEF_DEFAULT_SLOT_SIZE 10
+#define ATH12K_TWT_DEF_CONGESTION_THRESH_SETUP 50
+#define ATH12K_TWT_DEF_CONGESTION_THRESH_TEARDOWN 20
+#define ATH12K_TWT_DEF_CONGESTION_THRESH_CRITICAL 100
+#define ATH12K_TWT_DEF_INTERFERENCE_THRESH_TEARDOWN 80
+#define ATH12K_TWT_DEF_INTERFERENCE_THRESH_SETUP 50
+#define ATH12K_TWT_DEF_MIN_NO_STA_SETUP 10
+#define ATH12K_TWT_DEF_MIN_NO_STA_TEARDOWN 2
+#define ATH12K_TWT_DEF_NO_OF_BCAST_MCAST_SLOTS 2
+#define ATH12K_TWT_DEF_MIN_NO_TWT_SLOTS 2
+#define ATH12K_TWT_DEF_MAX_NO_STA_TWT 500
+#define ATH12K_TWT_DEF_MODE_CHECK_INTERVAL 10000
+#define ATH12K_TWT_DEF_ADD_STA_SLOT_INTERVAL 1000
+#define ATH12K_TWT_DEF_REMOVE_STA_SLOT_INTERVAL 5000
+
+struct wmi_twt_enable_params_cmd {
+ __le32 tlv_header;
+ __le32 pdev_id;
+ __le32 sta_cong_timer_ms;
+ __le32 mbss_support;
+ __le32 default_slot_size;
+ __le32 congestion_thresh_setup;
+ __le32 congestion_thresh_teardown;
+ __le32 congestion_thresh_critical;
+ __le32 interference_thresh_teardown;
+ __le32 interference_thresh_setup;
+ __le32 min_no_sta_setup;
+ __le32 min_no_sta_teardown;
+ __le32 no_of_bcast_mcast_slots;
+ __le32 min_no_twt_slots;
+ __le32 max_no_sta_twt;
+ __le32 mode_check_interval;
+ __le32 add_sta_slot_interval;
+ __le32 remove_sta_slot_interval;
+} __packed;
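
The ATH12K_TWT_DEF_* values above map one-to-one onto this command. A sketch of populating it (TLV header handling is omitted since the TLV helpers live outside this header; the function name is hypothetical):

/* Sketch: fill the TWT enable command from the defaults above */
static void
ath12k_twt_enable_fill_sketch(struct wmi_twt_enable_params_cmd *cmd,
			      u32 pdev_id)
{
	cmd->pdev_id = cpu_to_le32(pdev_id);
	cmd->sta_cong_timer_ms =
		cpu_to_le32(ATH12K_TWT_DEF_STA_CONG_TIMER_MS);
	cmd->default_slot_size =
		cpu_to_le32(ATH12K_TWT_DEF_DEFAULT_SLOT_SIZE);
	cmd->congestion_thresh_setup =
		cpu_to_le32(ATH12K_TWT_DEF_CONGESTION_THRESH_SETUP);
	cmd->max_no_sta_twt = cpu_to_le32(ATH12K_TWT_DEF_MAX_NO_STA_TWT);
	/* ... the remaining fields follow the same pattern ... */
}
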
+
+struct wmi_twt_disable_params_cmd {
+ __le32 tlv_header;
+ __le32 pdev_id;
+} __packed;
+
+struct wmi_obss_spatial_reuse_params_cmd {
+ __le32 tlv_header;
+ __le32 pdev_id;
+ __le32 enable;
+ a_sle32 obss_min;
+ a_sle32 obss_max;
+ __le32 vdev_id;
+} __packed;
+
+#define ATH12K_BSS_COLOR_COLLISION_SCAN_PERIOD_MS 200
+#define ATH12K_OBSS_COLOR_COLLISION_DETECTION_DISABLE 0
+#define ATH12K_OBSS_COLOR_COLLISION_DETECTION 1
+
+#define ATH12K_BSS_COLOR_STA_PERIODS 10000
+#define ATH12K_BSS_COLOR_AP_PERIODS 5000
+
+struct wmi_obss_color_collision_cfg_params_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 flags;
+ __le32 evt_type;
+ __le32 current_bss_color;
+ __le32 detection_period_ms;
+ __le32 scan_period_ms;
+ __le32 free_slot_expiry_time_ms;
+} __packed;
+
+struct wmi_bss_color_change_enable_params_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 enable;
+} __packed;
+
+#define ATH12K_IPV4_TH_SEED_SIZE 5
+#define ATH12K_IPV6_TH_SEED_SIZE 11
+
+struct ath12k_wmi_pdev_lro_config_cmd {
+ __le32 tlv_header;
+ __le32 lro_enable;
+ __le32 res;
+ u32 th_4[ATH12K_IPV4_TH_SEED_SIZE];
+ u32 th_6[ATH12K_IPV6_TH_SEED_SIZE];
+ __le32 pdev_id;
+} __packed;
+
+#define ATH12K_WMI_SPECTRAL_COUNT_DEFAULT 0
+#define ATH12K_WMI_SPECTRAL_PERIOD_DEFAULT 224
+#define ATH12K_WMI_SPECTRAL_PRIORITY_DEFAULT 1
+#define ATH12K_WMI_SPECTRAL_FFT_SIZE_DEFAULT 7
+#define ATH12K_WMI_SPECTRAL_GC_ENA_DEFAULT 1
+#define ATH12K_WMI_SPECTRAL_RESTART_ENA_DEFAULT 0
+#define ATH12K_WMI_SPECTRAL_NOISE_FLOOR_REF_DEFAULT -96
+#define ATH12K_WMI_SPECTRAL_INIT_DELAY_DEFAULT 80
+#define ATH12K_WMI_SPECTRAL_NB_TONE_THR_DEFAULT 12
+#define ATH12K_WMI_SPECTRAL_STR_BIN_THR_DEFAULT 8
+#define ATH12K_WMI_SPECTRAL_WB_RPT_MODE_DEFAULT 0
+#define ATH12K_WMI_SPECTRAL_RSSI_RPT_MODE_DEFAULT 0
+#define ATH12K_WMI_SPECTRAL_RSSI_THR_DEFAULT 0xf0
+#define ATH12K_WMI_SPECTRAL_PWR_FORMAT_DEFAULT 0
+#define ATH12K_WMI_SPECTRAL_RPT_MODE_DEFAULT 2
+#define ATH12K_WMI_SPECTRAL_BIN_SCALE_DEFAULT 1
+#define ATH12K_WMI_SPECTRAL_DBM_ADJ_DEFAULT 1
+#define ATH12K_WMI_SPECTRAL_CHN_MASK_DEFAULT 1
+
+struct ath12k_wmi_vdev_spectral_conf_arg {
+ u32 vdev_id;
+ u32 scan_count;
+ u32 scan_period;
+ u32 scan_priority;
+ u32 scan_fft_size;
+ u32 scan_gc_ena;
+ u32 scan_restart_ena;
+ u32 scan_noise_floor_ref;
+ u32 scan_init_delay;
+ u32 scan_nb_tone_thr;
+ u32 scan_str_bin_thr;
+ u32 scan_wb_rpt_mode;
+ u32 scan_rssi_rpt_mode;
+ u32 scan_rssi_thr;
+ u32 scan_pwr_format;
+ u32 scan_rpt_mode;
+ u32 scan_bin_scale;
+ u32 scan_dbm_adj;
+ u32 scan_chn_mask;
+};
+
+struct ath12k_wmi_vdev_spectral_conf_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 scan_count;
+ __le32 scan_period;
+ __le32 scan_priority;
+ __le32 scan_fft_size;
+ __le32 scan_gc_ena;
+ __le32 scan_restart_ena;
+ __le32 scan_noise_floor_ref;
+ __le32 scan_init_delay;
+ __le32 scan_nb_tone_thr;
+ __le32 scan_str_bin_thr;
+ __le32 scan_wb_rpt_mode;
+ __le32 scan_rssi_rpt_mode;
+ __le32 scan_rssi_thr;
+ __le32 scan_pwr_format;
+ __le32 scan_rpt_mode;
+ __le32 scan_bin_scale;
+ __le32 scan_dbm_adj;
+ __le32 scan_chn_mask;
+} __packed;
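
The *_arg struct above is the CPU-endian twin of this wire command; conversion is a field-by-field cpu_to_le32(), sketched here for the first few fields (the function name is hypothetical):

/* Sketch: CPU-endian arg -> little-endian wire command */
static void
ath12k_spectral_conf_fill_sketch(struct ath12k_wmi_vdev_spectral_conf_cmd *cmd,
				 const struct ath12k_wmi_vdev_spectral_conf_arg *arg)
{
	cmd->vdev_id = cpu_to_le32(arg->vdev_id);
	cmd->scan_count = cpu_to_le32(arg->scan_count);
	cmd->scan_period = cpu_to_le32(arg->scan_period);
	cmd->scan_fft_size = cpu_to_le32(arg->scan_fft_size);
	/* ... and so on for the remaining scan_* fields ... */
}
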
+
+#define ATH12K_WMI_SPECTRAL_TRIGGER_CMD_TRIGGER 1
+#define ATH12K_WMI_SPECTRAL_TRIGGER_CMD_CLEAR 2
+#define ATH12K_WMI_SPECTRAL_ENABLE_CMD_ENABLE 1
+#define ATH12K_WMI_SPECTRAL_ENABLE_CMD_DISABLE 2
+
+struct ath12k_wmi_vdev_spectral_enable_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 trigger_cmd;
+ __le32 enable_cmd;
+} __packed;
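
These trigger/enable pairs feed ath12k_wmi_vdev_spectral_enable(), declared near the end of this header. A one-line usage sketch, assuming ar, vdev_id and ret are in scope:

ret = ath12k_wmi_vdev_spectral_enable(ar, vdev_id,
				      ATH12K_WMI_SPECTRAL_TRIGGER_CMD_TRIGGER,
				      ATH12K_WMI_SPECTRAL_ENABLE_CMD_ENABLE);
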
+
+struct ath12k_wmi_pdev_dma_ring_cfg_arg {
+ u32 tlv_header;
+ u32 pdev_id;
+ u32 module_id;
+ u32 base_paddr_lo;
+ u32 base_paddr_hi;
+ u32 head_idx_paddr_lo;
+ u32 head_idx_paddr_hi;
+ u32 tail_idx_paddr_lo;
+ u32 tail_idx_paddr_hi;
+ u32 num_elems;
+ u32 buf_size;
+ u32 num_resp_per_event;
+ u32 event_timeout_ms;
+};
+
+struct ath12k_wmi_pdev_dma_ring_cfg_req_cmd {
+ __le32 tlv_header;
+ __le32 pdev_id;
+ __le32 module_id; /* see enum wmi_direct_buffer_module */
+ __le32 base_paddr_lo;
+ __le32 base_paddr_hi;
+ __le32 head_idx_paddr_lo;
+ __le32 head_idx_paddr_hi;
+ __le32 tail_idx_paddr_lo;
+ __le32 tail_idx_paddr_hi;
+ __le32 num_elems; /* Number of elems in the ring */
+ __le32 buf_size; /* size of allocated buffer in bytes */
+
+ /* Number of wmi_dma_buf_release_entry structures packed together */
+ __le32 num_resp_per_event;
+
+ /* Target should time out and send whatever response
+ * it has if this time expires; units are milliseconds
+ */
+ __le32 event_timeout_ms;
+} __packed;
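
The lo/hi halves are filled by splitting a dma_addr_t; a sketch using the kernel's lower_32_bits()/upper_32_bits() helpers (the function name is hypothetical):

/* Sketch: split a DMA address into the lo/hi command fields */
static void
ath12k_dma_ring_addr_fill_sketch(struct ath12k_wmi_pdev_dma_ring_cfg_req_cmd *cmd,
				 dma_addr_t base_paddr)
{
	cmd->base_paddr_lo = cpu_to_le32(lower_32_bits(base_paddr));
	cmd->base_paddr_hi = cpu_to_le32(upper_32_bits(base_paddr));
}
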
+
+struct ath12k_wmi_dma_buf_release_fixed_params {
+ __le32 pdev_id;
+ __le32 module_id;
+ __le32 num_buf_release_entry;
+ __le32 num_meta_data_entry;
+} __packed;
+
+struct ath12k_wmi_dma_buf_release_entry_params {
+ __le32 tlv_header;
+ __le32 paddr_lo;
+
+ /* Bits 11:0: address of data
+ * Bits 31:12: host context data
+ */
+ __le32 paddr_hi;
+} __packed;
+
+#define WMI_SPECTRAL_META_INFO1_FREQ1 GENMASK(15, 0)
+#define WMI_SPECTRAL_META_INFO1_FREQ2 GENMASK(31, 16)
+
+#define WMI_SPECTRAL_META_INFO2_CHN_WIDTH GENMASK(7, 0)
+
+struct ath12k_wmi_dma_buf_release_meta_data_params {
+ __le32 tlv_header;
+ a_sle32 noise_floor[WMI_MAX_CHAINS];
+ __le32 reset_delay;
+ __le32 freq1;
+ __le32 freq2;
+ __le32 ch_width;
+} __packed;
+
+enum wmi_fils_discovery_cmd_type {
+ WMI_FILS_DISCOVERY_CMD,
+ WMI_UNSOL_BCAST_PROBE_RESP,
+};
+
+struct wmi_fils_discovery_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 interval;
+ __le32 config; /* enum wmi_fils_discovery_cmd_type */
+} __packed;
+
+struct wmi_fils_discovery_tmpl_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 buf_len;
+} __packed;
+
+struct wmi_probe_tmpl_cmd {
+ __le32 tlv_header;
+ __le32 vdev_id;
+ __le32 buf_len;
+} __packed;
+
+#define WMI_MAX_MEM_REQS 32
+
+#define MAX_RADIOS 3
+
+#define WMI_SERVICE_READY_TIMEOUT_HZ (5 * HZ)
+#define WMI_SEND_TIMEOUT_HZ (3 * HZ)
+
+struct ath12k_wmi_pdev {
+ struct ath12k_wmi_base *wmi_ab;
+ enum ath12k_htc_ep_id eid;
+ const struct wmi_peer_flags_map *peer_flags;
+ u32 rx_decap_mode;
+};
+
+struct ath12k_wmi_base {
+ struct ath12k_base *ab;
+ struct ath12k_wmi_pdev wmi[MAX_RADIOS];
+ enum ath12k_htc_ep_id wmi_endpoint_id[MAX_RADIOS];
+ u32 max_msg_len[MAX_RADIOS];
+
+ struct completion service_ready;
+ struct completion unified_ready;
+ DECLARE_BITMAP(svc_map, WMI_MAX_EXT_SERVICE);
+ wait_queue_head_t tx_credits_wq;
+ const struct wmi_peer_flags_map *peer_flags;
+ u32 num_mem_chunks;
+ u32 rx_decap_mode;
+ struct ath12k_wmi_host_mem_chunk_arg mem_chunks[WMI_MAX_MEM_REQS];
+
+ enum wmi_host_hw_mode_config_type preferred_hw_mode;
+
+ struct ath12k_wmi_target_cap_arg *targ_cap;
+};
+
+#define ATH12K_FW_STATS_BUF_SIZE (1024 * 1024)
+
+void ath12k_wmi_init_qcn9274(struct ath12k_base *ab,
+ struct ath12k_wmi_resource_config_arg *config);
+void ath12k_wmi_init_wcn7850(struct ath12k_base *ab,
+ struct ath12k_wmi_resource_config_arg *config);
+int ath12k_wmi_cmd_send(struct ath12k_wmi_pdev *wmi, struct sk_buff *skb,
+ u32 cmd_id);
+struct sk_buff *ath12k_wmi_alloc_skb(struct ath12k_wmi_base *wmi_sc, u32 len);
+int ath12k_wmi_mgmt_send(struct ath12k *ar, u32 vdev_id, u32 buf_id,
+ struct sk_buff *frame);
+int ath12k_wmi_bcn_tmpl(struct ath12k *ar, u32 vdev_id,
+ struct ieee80211_mutable_offsets *offs,
+ struct sk_buff *bcn);
+int ath12k_wmi_vdev_down(struct ath12k *ar, u8 vdev_id);
+int ath12k_wmi_vdev_up(struct ath12k *ar, u32 vdev_id, u32 aid,
+ const u8 *bssid);
+int ath12k_wmi_vdev_stop(struct ath12k *ar, u8 vdev_id);
+int ath12k_wmi_vdev_start(struct ath12k *ar, struct wmi_vdev_start_req_arg *arg,
+ bool restart);
+int ath12k_wmi_set_peer_param(struct ath12k *ar, const u8 *peer_addr,
+ u32 vdev_id, u32 param_id, u32 param_val);
+int ath12k_wmi_pdev_set_param(struct ath12k *ar, u32 param_id,
+ u32 param_value, u8 pdev_id);
+int ath12k_wmi_pdev_set_ps_mode(struct ath12k *ar, int vdev_id, u32 enable);
+int ath12k_wmi_wait_for_unified_ready(struct ath12k_base *ab);
+int ath12k_wmi_cmd_init(struct ath12k_base *ab);
+int ath12k_wmi_wait_for_service_ready(struct ath12k_base *ab);
+int ath12k_wmi_connect(struct ath12k_base *ab);
+int ath12k_wmi_pdev_attach(struct ath12k_base *ab,
+ u8 pdev_id);
+int ath12k_wmi_attach(struct ath12k_base *ab);
+void ath12k_wmi_detach(struct ath12k_base *ab);
+int ath12k_wmi_vdev_create(struct ath12k *ar, u8 *macaddr,
+ struct ath12k_wmi_vdev_create_arg *arg);
+int ath12k_wmi_send_peer_create_cmd(struct ath12k *ar,
+ struct ath12k_wmi_peer_create_arg *arg);
+int ath12k_wmi_vdev_set_param_cmd(struct ath12k *ar, u32 vdev_id,
+ u32 param_id, u32 param_value);
+
+int ath12k_wmi_set_sta_ps_param(struct ath12k *ar, u32 vdev_id,
+ u32 param, u32 param_value);
+int ath12k_wmi_force_fw_hang_cmd(struct ath12k *ar, u32 type, u32 delay_time_ms);
+int ath12k_wmi_send_peer_delete_cmd(struct ath12k *ar,
+ const u8 *peer_addr, u8 vdev_id);
+int ath12k_wmi_vdev_delete(struct ath12k *ar, u8 vdev_id);
+void ath12k_wmi_start_scan_init(struct ath12k *ar,
+ struct ath12k_wmi_scan_req_arg *arg);
+int ath12k_wmi_send_scan_start_cmd(struct ath12k *ar,
+ struct ath12k_wmi_scan_req_arg *arg);
+int ath12k_wmi_send_scan_stop_cmd(struct ath12k *ar,
+ struct ath12k_wmi_scan_cancel_arg *arg);
+int ath12k_wmi_send_wmm_update_cmd(struct ath12k *ar, u32 vdev_id,
+ struct wmi_wmm_params_all_arg *param);
+int ath12k_wmi_pdev_suspend(struct ath12k *ar, u32 suspend_opt,
+ u32 pdev_id);
+int ath12k_wmi_pdev_resume(struct ath12k *ar, u32 pdev_id);
+
+int ath12k_wmi_send_peer_assoc_cmd(struct ath12k *ar,
+ struct ath12k_wmi_peer_assoc_arg *arg);
+int ath12k_wmi_vdev_install_key(struct ath12k *ar,
+ struct wmi_vdev_install_key_arg *arg);
+int ath12k_wmi_pdev_bss_chan_info_request(struct ath12k *ar,
+ enum wmi_bss_chan_info_req_type type);
+int ath12k_wmi_send_stats_request_cmd(struct ath12k *ar, u32 stats_id,
+ u32 vdev_id, u32 pdev_id);
+int ath12k_wmi_send_pdev_temperature_cmd(struct ath12k *ar);
+int ath12k_wmi_send_peer_flush_tids_cmd(struct ath12k *ar,
+ u8 peer_addr[ETH_ALEN],
+ u32 peer_tid_bitmap,
+ u8 vdev_id);
+int ath12k_wmi_send_set_ap_ps_param_cmd(struct ath12k *ar, u8 *peer_addr,
+ struct ath12k_wmi_ap_ps_arg *arg);
+int ath12k_wmi_send_scan_chan_list_cmd(struct ath12k *ar,
+ struct ath12k_wmi_scan_chan_list_arg *arg);
+int ath12k_wmi_send_dfs_phyerr_offload_enable_cmd(struct ath12k *ar,
+ u32 pdev_id);
+int ath12k_wmi_addba_clear_resp(struct ath12k *ar, u32 vdev_id, const u8 *mac);
+int ath12k_wmi_addba_send(struct ath12k *ar, u32 vdev_id, const u8 *mac,
+ u32 tid, u32 buf_size);
+int ath12k_wmi_addba_set_resp(struct ath12k *ar, u32 vdev_id, const u8 *mac,
+ u32 tid, u32 status);
+int ath12k_wmi_delba_send(struct ath12k *ar, u32 vdev_id, const u8 *mac,
+ u32 tid, u32 initiator, u32 reason);
+int ath12k_wmi_send_bcn_offload_control_cmd(struct ath12k *ar,
+ u32 vdev_id, u32 bcn_ctrl_op);
+int ath12k_wmi_send_init_country_cmd(struct ath12k *ar,
+ struct ath12k_wmi_init_country_arg *arg);
+int ath12k_wmi_peer_rx_reorder_queue_setup(struct ath12k *ar,
+ int vdev_id, const u8 *addr,
+ dma_addr_t paddr, u8 tid,
+ u8 ba_window_size_valid,
+ u32 ba_window_size);
+int
+ath12k_wmi_rx_reord_queue_remove(struct ath12k *ar,
+ struct ath12k_wmi_rx_reorder_queue_remove_arg *arg);
+int ath12k_wmi_send_pdev_set_regdomain(struct ath12k *ar,
+ struct ath12k_wmi_pdev_set_regdomain_arg *arg);
+int ath12k_wmi_simulate_radar(struct ath12k *ar);
+int ath12k_wmi_send_twt_enable_cmd(struct ath12k *ar, u32 pdev_id);
+int ath12k_wmi_send_twt_disable_cmd(struct ath12k *ar, u32 pdev_id);
+int ath12k_wmi_send_obss_spr_cmd(struct ath12k *ar, u32 vdev_id,
+ struct ieee80211_he_obss_pd *he_obss_pd);
+int ath12k_wmi_obss_color_cfg_cmd(struct ath12k *ar, u32 vdev_id,
+ u8 bss_color, u32 period,
+ bool enable);
+int ath12k_wmi_send_bss_color_change_enable_cmd(struct ath12k *ar, u32 vdev_id,
+ bool enable);
+int ath12k_wmi_pdev_lro_cfg(struct ath12k *ar, int pdev_id);
+int ath12k_wmi_pdev_dma_ring_cfg(struct ath12k *ar,
+ struct ath12k_wmi_pdev_dma_ring_cfg_arg *arg);
+int ath12k_wmi_vdev_spectral_enable(struct ath12k *ar, u32 vdev_id,
+ u32 trigger, u32 enable);
+int ath12k_wmi_vdev_spectral_conf(struct ath12k *ar,
+ struct ath12k_wmi_vdev_spectral_conf_arg *arg);
+int ath12k_wmi_fils_discovery_tmpl(struct ath12k *ar, u32 vdev_id,
+ struct sk_buff *tmpl);
+int ath12k_wmi_fils_discovery(struct ath12k *ar, u32 vdev_id, u32 interval,
+ bool unsol_bcast_probe_resp_enabled);
+int ath12k_wmi_probe_resp_tmpl(struct ath12k *ar, u32 vdev_id,
+ struct sk_buff *tmpl);
+int ath12k_wmi_set_hw_mode(struct ath12k_base *ab,
+ enum wmi_host_hw_mode_config_type mode);
+
+#endif
diff --git a/drivers/net/wireless/ath/ath6kl/cfg80211.c b/drivers/net/wireless/ath/ath6kl/cfg80211.c
index a20e0aeae284..0c2b8b1a10d5 100644
--- a/drivers/net/wireless/ath/ath6kl/cfg80211.c
+++ b/drivers/net/wireless/ath/ath6kl/cfg80211.c
@@ -1119,7 +1119,7 @@ void ath6kl_cfg80211_ch_switch_notify(struct ath6kl_vif *vif, int freq,
NL80211_CHAN_HT20 : NL80211_CHAN_NO_HT);
mutex_lock(&vif->wdev.mtx);
- cfg80211_ch_switch_notify(vif->ndev, &chandef, 0);
+ cfg80211_ch_switch_notify(vif->ndev, &chandef, 0, 0);
mutex_unlock(&vif->wdev.mtx);
}
diff --git a/drivers/net/wireless/ath/ath9k/ar5008_phy.c b/drivers/net/wireless/ath/ath9k/ar5008_phy.c
index 6610d76131fa..7a45f5f62826 100644
--- a/drivers/net/wireless/ath/ath9k/ar5008_phy.c
+++ b/drivers/net/wireless/ath/ath9k/ar5008_phy.c
@@ -1277,13 +1277,13 @@ static void ar5008_hw_set_radar_conf(struct ath_hw *ah)
static void ar5008_hw_init_txpower_cck(struct ath_hw *ah, int16_t *rate_array)
{
-#define CCK_DELTA(x) ((OLC_FOR_AR9280_20_LATER) ? max((x) - 2, 0) : (x))
- ah->tx_power[0] = CCK_DELTA(rate_array[rate1l]);
- ah->tx_power[1] = CCK_DELTA(min(rate_array[rate2l],
+#define CCK_DELTA(_ah, x) ((OLC_FOR_AR9280_20_LATER(_ah)) ? max((x) - 2, 0) : (x))
+ ah->tx_power[0] = CCK_DELTA(ah, rate_array[rate1l]);
+ ah->tx_power[1] = CCK_DELTA(ah, min(rate_array[rate2l],
rate_array[rate2s]));
- ah->tx_power[2] = CCK_DELTA(min(rate_array[rate5_5l],
+ ah->tx_power[2] = CCK_DELTA(ah, min(rate_array[rate5_5l],
rate_array[rate5_5s]));
- ah->tx_power[3] = CCK_DELTA(min(rate_array[rate11l],
+ ah->tx_power[3] = CCK_DELTA(ah, min(rate_array[rate11l],
rate_array[rate11s]));
#undef CCK_DELTA
}
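
The ath9k hunks that follow all apply one mechanical pattern: register-offset macros whose value varies by chip revision stop capturing a local variable named ah implicitly and instead take it as an explicit _ah argument. A before/after sketch with an illustrative macro name (AR_EXAMPLE_REG is hypothetical; the offsets mirror ones in this patch):

/* Before: only compiles where a variable named 'ah' happens to exist */
#define AR_EXAMPLE_REG		(AR_SREV_9561(ah) ? 0x15c : 0x160)

/* After: the dependency is explicit and usable from any scope */
#define AR_EXAMPLE_REG(_ah)	(AR_SREV_9561(_ah) ? 0x15c : 0x160)
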
diff --git a/drivers/net/wireless/ath/ath9k/ar9002_calib.c b/drivers/net/wireless/ath/ath9k/ar9002_calib.c
index fd53b5f9e9b5..c8b3f3aaa45b 100644
--- a/drivers/net/wireless/ath/ath9k/ar9002_calib.c
+++ b/drivers/net/wireless/ath/ath9k/ar9002_calib.c
@@ -659,9 +659,9 @@ static void ar9002_hw_pa_cal(struct ath_hw *ah, bool is_reset)
static void ar9002_hw_olc_temp_compensation(struct ath_hw *ah)
{
- if (OLC_FOR_AR9287_10_LATER)
+ if (OLC_FOR_AR9287_10_LATER(ah))
ar9287_hw_olc_temp_compensation(ah);
- else if (OLC_FOR_AR9280_20_LATER)
+ else if (OLC_FOR_AR9280_20_LATER(ah))
ar9280_hw_olc_temp_compensation(ah);
}
@@ -672,7 +672,7 @@ static int ar9002_hw_calibrate(struct ath_hw *ah, struct ath9k_channel *chan,
bool nfcal, nfcal_pending = false, percal_pending;
int ret;
- nfcal = !!(REG_READ(ah, AR_PHY_AGC_CONTROL) & AR_PHY_AGC_CONTROL_NF);
+ nfcal = !!(REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) & AR_PHY_AGC_CONTROL_NF);
if (ah->caldata) {
nfcal_pending = test_bit(NFCAL_PENDING, &ah->caldata->cal_flags);
if (longcal) /* Remember to not miss */
@@ -752,11 +752,11 @@ static bool ar9285_hw_cl_cal(struct ath_hw *ah, struct ath9k_channel *chan)
if (IS_CHAN_HT20(chan)) {
REG_SET_BIT(ah, AR_PHY_CL_CAL_CTL, AR_PHY_PARALLEL_CAL_ENABLE);
REG_SET_BIT(ah, AR_PHY_TURBO, AR_PHY_FC_DYN2040_EN);
- REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
+ REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_FLTR_CAL);
REG_CLR_BIT(ah, AR_PHY_TPCRG1, AR_PHY_TPCRG1_PD_CAL_ENABLE);
- REG_SET_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_CAL);
- if (!ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL,
+ REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_CAL);
+ if (!ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_CAL, 0, AH_WAIT_TIMEOUT)) {
ath_dbg(common, CALIBRATE,
"offset calibration failed to complete in %d ms; noisy environment?\n",
@@ -768,10 +768,10 @@ static bool ar9285_hw_cl_cal(struct ath_hw *ah, struct ath9k_channel *chan)
REG_CLR_BIT(ah, AR_PHY_CL_CAL_CTL, AR_PHY_CL_CAL_ENABLE);
}
REG_CLR_BIT(ah, AR_PHY_ADC_CTL, AR_PHY_ADC_CTL_OFF_PWDADC);
- REG_SET_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_FLTR_CAL);
+ REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_FLTR_CAL);
REG_SET_BIT(ah, AR_PHY_TPCRG1, AR_PHY_TPCRG1_PD_CAL_ENABLE);
- REG_SET_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_CAL);
- if (!ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_CAL,
+ REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_CAL);
+ if (!ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_CAL,
0, AH_WAIT_TIMEOUT)) {
ath_dbg(common, CALIBRATE,
"offset calibration failed to complete in %d ms; noisy environment?\n",
@@ -781,7 +781,7 @@ static bool ar9285_hw_cl_cal(struct ath_hw *ah, struct ath9k_channel *chan)
REG_SET_BIT(ah, AR_PHY_ADC_CTL, AR_PHY_ADC_CTL_OFF_PWDADC);
REG_CLR_BIT(ah, AR_PHY_CL_CAL_CTL, AR_PHY_CL_CAL_ENABLE);
- REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_FLTR_CAL);
+ REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_FLTR_CAL);
return true;
}
@@ -857,17 +857,17 @@ static bool ar9002_hw_init_cal(struct ath_hw *ah, struct ath9k_channel *chan)
if (!AR_SREV_9287_11_OR_LATER(ah))
REG_CLR_BIT(ah, AR_PHY_ADC_CTL,
AR_PHY_ADC_CTL_OFF_PWDADC);
- REG_SET_BIT(ah, AR_PHY_AGC_CONTROL,
+ REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_FLTR_CAL);
}
/* Calibrate the AGC */
- REG_WRITE(ah, AR_PHY_AGC_CONTROL,
- REG_READ(ah, AR_PHY_AGC_CONTROL) |
+ REG_WRITE(ah, AR_PHY_AGC_CONTROL(ah),
+ REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) |
AR_PHY_AGC_CONTROL_CAL);
/* Poll for offset calibration complete */
- if (!ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL,
+ if (!ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_CAL,
0, AH_WAIT_TIMEOUT)) {
ath_dbg(common, CALIBRATE,
@@ -880,7 +880,7 @@ static bool ar9002_hw_init_cal(struct ath_hw *ah, struct ath9k_channel *chan)
if (!AR_SREV_9287_11_OR_LATER(ah))
REG_SET_BIT(ah, AR_PHY_ADC_CTL,
AR_PHY_ADC_CTL_OFF_PWDADC);
- REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
+ REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_FLTR_CAL);
}
}
diff --git a/drivers/net/wireless/ath/ath9k/ar9002_hw.c b/drivers/net/wireless/ath/ath9k/ar9002_hw.c
index ae68f674829b..d08ea0b28530 100644
--- a/drivers/net/wireless/ath/ath9k/ar9002_hw.c
+++ b/drivers/net/wireless/ath/ath9k/ar9002_hw.c
@@ -249,9 +249,9 @@ static void ar9002_hw_configpcipowersave(struct ath_hw *ah,
if (power_off) {
/* clear bit 19 to disable L1 */
- REG_CLR_BIT(ah, AR_PCIE_PM_CTRL, AR_PCIE_PM_CTRL_ENA);
+ REG_CLR_BIT(ah, AR_PCIE_PM_CTRL(ah), AR_PCIE_PM_CTRL_ENA);
- val = REG_READ(ah, AR_WA);
+ val = REG_READ(ah, AR_WA(ah));
/*
* Set PCIe workaround bits
@@ -286,7 +286,7 @@ static void ar9002_hw_configpcipowersave(struct ath_hw *ah,
if (AR_SREV_9285E_20(ah))
val |= AR_WA_BIT23;
- REG_WRITE(ah, AR_WA, val);
+ REG_WRITE(ah, AR_WA(ah), val);
} else {
if (ah->config.pcie_waen) {
val = ah->config.pcie_waen;
@@ -314,10 +314,10 @@ static void ar9002_hw_configpcipowersave(struct ath_hw *ah,
if (AR_SREV_9285E_20(ah))
val |= AR_WA_BIT23;
- REG_WRITE(ah, AR_WA, val);
+ REG_WRITE(ah, AR_WA(ah), val);
/* set bit 19 to allow forcing of pcie core into L1 state */
- REG_SET_BIT(ah, AR_PCIE_PM_CTRL, AR_PCIE_PM_CTRL_ENA);
+ REG_SET_BIT(ah, AR_PCIE_PM_CTRL(ah), AR_PCIE_PM_CTRL_ENA);
}
}
diff --git a/drivers/net/wireless/ath/ath9k/ar9002_mac.c b/drivers/net/wireless/ath/ath9k/ar9002_mac.c
index a8c0e8e2d78c..b70cd4af1ae0 100644
--- a/drivers/net/wireless/ath/ath9k/ar9002_mac.c
+++ b/drivers/net/wireless/ath/ath9k/ar9002_mac.c
@@ -21,7 +21,7 @@
static void ar9002_hw_rx_enable(struct ath_hw *ah)
{
- REG_WRITE(ah, AR_CR, AR_CR_RXE);
+ REG_WRITE(ah, AR_CR, AR_CR_RXE(ah));
}
static void ar9002_hw_set_desc_link(void *ds, u32 ds_link)
@@ -40,14 +40,14 @@ static bool ar9002_hw_get_isr(struct ath_hw *ah, enum ath9k_int *masked,
struct ath_common *common = ath9k_hw_common(ah);
if (!AR_SREV_9100(ah)) {
- if (REG_READ(ah, AR_INTR_ASYNC_CAUSE) & AR_INTR_MAC_IRQ) {
- if ((REG_READ(ah, AR_RTC_STATUS) & AR_RTC_STATUS_M)
+ if (REG_READ(ah, AR_INTR_ASYNC_CAUSE(ah)) & AR_INTR_MAC_IRQ) {
+ if ((REG_READ(ah, AR_RTC_STATUS(ah)) & AR_RTC_STATUS_M(ah))
== AR_RTC_STATUS_ON) {
isr = REG_READ(ah, AR_ISR);
}
}
- sync_cause = REG_READ(ah, AR_INTR_SYNC_CAUSE) &
+ sync_cause = REG_READ(ah, AR_INTR_SYNC_CAUSE(ah)) &
AR_INTR_SYNC_DEFAULT;
*masked = 0;
@@ -138,7 +138,7 @@ static bool ar9002_hw_get_isr(struct ath_hw *ah, enum ath9k_int *masked,
u32 s5_s;
if (pCap->hw_caps & ATH9K_HW_CAP_RAC_SUPPORTED) {
- s5_s = REG_READ(ah, AR_ISR_S5_S);
+ s5_s = REG_READ(ah, AR_ISR_S5_S(ah));
} else {
s5_s = REG_READ(ah, AR_ISR_S5);
}
@@ -201,8 +201,8 @@ static bool ar9002_hw_get_isr(struct ath_hw *ah, enum ath9k_int *masked,
"AR_INTR_SYNC_LOCAL_TIMEOUT\n");
}
- REG_WRITE(ah, AR_INTR_SYNC_CAUSE_CLR, sync_cause);
- (void) REG_READ(ah, AR_INTR_SYNC_CAUSE_CLR);
+ REG_WRITE(ah, AR_INTR_SYNC_CAUSE_CLR(ah), sync_cause);
+ (void) REG_READ(ah, AR_INTR_SYNC_CAUSE_CLR(ah));
}
return true;
diff --git a/drivers/net/wireless/ath/ath9k/ar9002_phy.c b/drivers/net/wireless/ath/ath9k/ar9002_phy.c
index ebdb97999335..23ac6b7c2cbd 100644
--- a/drivers/net/wireless/ath/ath9k/ar9002_phy.c
+++ b/drivers/net/wireless/ath/ath9k/ar9002_phy.c
@@ -281,10 +281,10 @@ static void ar9002_olc_init(struct ath_hw *ah)
{
u32 i;
- if (!OLC_FOR_AR9280_20_LATER)
+ if (!OLC_FOR_AR9280_20_LATER(ah))
return;
- if (OLC_FOR_AR9287_10_LATER) {
+ if (OLC_FOR_AR9287_10_LATER(ah)) {
REG_SET_BIT(ah, AR_PHY_TX_PWRCTRL9,
AR_PHY_TX_PWRCTRL9_RES_DC_REMOVAL);
ath9k_hw_analog_shift_rmw(ah, AR9287_AN_TXPC0,
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_calib.c b/drivers/net/wireless/ath/ath9k/ar9003_calib.c
index 6ca089f15629..2224cb74b1d4 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_calib.c
+++ b/drivers/net/wireless/ath/ath9k/ar9003_calib.c
@@ -346,14 +346,14 @@ static bool ar9003_hw_dynamic_osdac_selection(struct ath_hw *ah,
/*
* Clear offset and IQ calibration, run AGC cal.
*/
- REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
+ REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_OFFSET_CAL);
- REG_CLR_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0,
+ REG_CLR_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0(ah),
AR_PHY_TX_IQCAL_CONTROL_0_ENABLE_TXIQ_CAL);
- REG_WRITE(ah, AR_PHY_AGC_CONTROL,
- REG_READ(ah, AR_PHY_AGC_CONTROL) | AR_PHY_AGC_CONTROL_CAL);
+ REG_WRITE(ah, AR_PHY_AGC_CONTROL(ah),
+ REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) | AR_PHY_AGC_CONTROL_CAL);
- status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL,
+ status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_CAL,
0, AH_WAIT_TIMEOUT);
if (!status) {
@@ -367,13 +367,13 @@ static bool ar9003_hw_dynamic_osdac_selection(struct ath_hw *ah,
* (Carrier Leak calibration, TX Filter calibration and
* Peak Detector offset calibration).
*/
- REG_SET_BIT(ah, AR_PHY_AGC_CONTROL,
+ REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_OFFSET_CAL);
REG_CLR_BIT(ah, AR_PHY_CL_CAL_CTL,
AR_PHY_CL_CAL_ENABLE);
- REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
+ REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_FLTR_CAL);
- REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
+ REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_PKDET_CAL);
ch0_done = 0;
@@ -387,10 +387,10 @@ static bool ar9003_hw_dynamic_osdac_selection(struct ath_hw *ah,
REG_SET_BIT(ah, AR_PHY_ACTIVE, AR_PHY_ACTIVE_EN);
- REG_WRITE(ah, AR_PHY_AGC_CONTROL,
- REG_READ(ah, AR_PHY_AGC_CONTROL) | AR_PHY_AGC_CONTROL_CAL);
+ REG_WRITE(ah, AR_PHY_AGC_CONTROL(ah),
+ REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) | AR_PHY_AGC_CONTROL_CAL);
- status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL,
+ status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_CAL,
0, AH_WAIT_TIMEOUT);
if (!status) {
@@ -531,7 +531,7 @@ static bool ar9003_hw_dynamic_osdac_selection(struct ath_hw *ah,
}
}
- REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
+ REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_OFFSET_CAL);
REG_SET_BIT(ah, AR_PHY_ACTIVE, AR_PHY_ACTIVE_EN);
@@ -539,7 +539,7 @@ static bool ar9003_hw_dynamic_osdac_selection(struct ath_hw *ah,
* We don't need to check txiqcal_done here since it is always
* set for AR9550.
*/
- REG_SET_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0,
+ REG_SET_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0(ah),
AR_PHY_TX_IQCAL_CONTROL_0_ENABLE_TXIQ_CAL);
return true;
@@ -897,7 +897,7 @@ static void ar9003_hw_tx_iq_cal_outlier_detection(struct ath_hw *ah,
memset(tx_corr_coeff, 0, sizeof(tx_corr_coeff));
for (i = 0; i < MAX_MEASUREMENT / 2; i++) {
tx_corr_coeff[i * 2][0] = tx_corr_coeff[(i * 2) + 1][0] =
- AR_PHY_TX_IQCAL_CORR_COEFF_B0(i);
+ AR_PHY_TX_IQCAL_CORR_COEFF_B0(ah, i);
if (!AR_SREV_9485(ah)) {
tx_corr_coeff[i * 2][1] =
tx_corr_coeff[(i * 2) + 1][1] =
@@ -914,7 +914,7 @@ static void ar9003_hw_tx_iq_cal_outlier_detection(struct ath_hw *ah,
if (!(ah->txchainmask & (1 << i)))
continue;
nmeasurement = REG_READ_FIELD(ah,
- AR_PHY_TX_IQCAL_STATUS_B0,
+ AR_PHY_TX_IQCAL_STATUS_B0(ah),
AR_PHY_CALIBRATED_GAINS_0);
if (nmeasurement > MAX_MEASUREMENT)
@@ -988,10 +988,10 @@ static bool ar9003_hw_tx_iq_cal_run(struct ath_hw *ah)
REG_RMW_FIELD(ah, AR_PHY_TX_FORCED_GAIN,
AR_PHY_TXGAIN_FORCE, 0);
- REG_RMW_FIELD(ah, AR_PHY_TX_IQCAL_START,
+ REG_RMW_FIELD(ah, AR_PHY_TX_IQCAL_START(ah),
AR_PHY_TX_IQCAL_START_DO_CAL, 1);
- if (!ath9k_hw_wait(ah, AR_PHY_TX_IQCAL_START,
+ if (!ath9k_hw_wait(ah, AR_PHY_TX_IQCAL_START(ah),
AR_PHY_TX_IQCAL_START_DO_CAL, 0,
AH_WAIT_TIMEOUT)) {
ath_dbg(common, CALIBRATE, "Tx IQ Cal is not completed\n");
@@ -1056,7 +1056,7 @@ static void ar9003_hw_tx_iq_cal_post_proc(struct ath_hw *ah,
{
struct ath_common *common = ath9k_hw_common(ah);
const u32 txiqcal_status[AR9300_MAX_CHAINS] = {
- AR_PHY_TX_IQCAL_STATUS_B0,
+ AR_PHY_TX_IQCAL_STATUS_B0(ah),
AR_PHY_TX_IQCAL_STATUS_B1,
AR_PHY_TX_IQCAL_STATUS_B2,
};
@@ -1076,7 +1076,7 @@ static void ar9003_hw_tx_iq_cal_post_proc(struct ath_hw *ah,
continue;
nmeasurement = REG_READ_FIELD(ah,
- AR_PHY_TX_IQCAL_STATUS_B0,
+ AR_PHY_TX_IQCAL_STATUS_B0(ah),
AR_PHY_CALIBRATED_GAINS_0);
if (nmeasurement > MAX_MEASUREMENT)
nmeasurement = MAX_MEASUREMENT;
@@ -1096,7 +1096,7 @@ static void ar9003_hw_tx_iq_cal_post_proc(struct ath_hw *ah,
u32 idx = 2 * j, offset = 4 * (3 * im + j);
REG_RMW_FIELD(ah,
- AR_PHY_CHAN_INFO_MEMORY,
+ AR_PHY_CHAN_INFO_MEMORY(ah),
AR_PHY_CHAN_INFO_TAB_S2_READ,
0);
@@ -1106,7 +1106,7 @@ static void ar9003_hw_tx_iq_cal_post_proc(struct ath_hw *ah,
offset);
REG_RMW_FIELD(ah,
- AR_PHY_CHAN_INFO_MEMORY,
+ AR_PHY_CHAN_INFO_MEMORY(ah),
AR_PHY_CHAN_INFO_TAB_S2_READ,
1);
@@ -1161,7 +1161,7 @@ static void ar9003_hw_tx_iq_cal_reload(struct ath_hw *ah)
memset(tx_corr_coeff, 0, sizeof(tx_corr_coeff));
for (i = 0; i < MAX_MEASUREMENT / 2; i++) {
tx_corr_coeff[i * 2][0] = tx_corr_coeff[(i * 2) + 1][0] =
- AR_PHY_TX_IQCAL_CORR_COEFF_B0(i);
+ AR_PHY_TX_IQCAL_CORR_COEFF_B0(ah, i);
if (!AR_SREV_9485(ah)) {
tx_corr_coeff[i * 2][1] =
tx_corr_coeff[(i * 2) + 1][1] =
@@ -1346,7 +1346,7 @@ static void ar9003_hw_cl_cal_post_proc(struct ath_hw *ah, bool is_reusable)
if (!caldata || !(ah->enabled_cals & TX_CL_CAL))
return;
- txclcal_done = !!(REG_READ(ah, AR_PHY_AGC_CONTROL) &
+ txclcal_done = !!(REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) &
AR_PHY_AGC_CONTROL_CLC_SUCCESS);
if (test_bit(TXCLCAL_DONE, &caldata->cal_flags)) {
@@ -1424,12 +1424,12 @@ static bool ar9003_hw_init_cal_pcoem(struct ath_hw *ah,
if (rtt) {
if (!run_rtt_cal) {
- agc_ctrl = REG_READ(ah, AR_PHY_AGC_CONTROL);
+ agc_ctrl = REG_READ(ah, AR_PHY_AGC_CONTROL(ah));
agc_supp_cals &= agc_ctrl;
agc_ctrl &= ~(AR_PHY_AGC_CONTROL_OFFSET_CAL |
AR_PHY_AGC_CONTROL_FLTR_CAL |
AR_PHY_AGC_CONTROL_PKDET_CAL);
- REG_WRITE(ah, AR_PHY_AGC_CONTROL, agc_ctrl);
+ REG_WRITE(ah, AR_PHY_AGC_CONTROL(ah), agc_ctrl);
} else {
if (ah->ah_flags & AH_FASTCC)
run_agc_cal = true;
@@ -1452,7 +1452,7 @@ static bool ar9003_hw_init_cal_pcoem(struct ath_hw *ah,
goto skip_tx_iqcal;
/* Do Tx IQ Calibration */
- REG_RMW_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_1,
+ REG_RMW_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_1(ah),
AR_PHY_TX_IQCAL_CONTROL_1_IQCORR_I_Q_COFF_DELPT,
DELPT);
@@ -1462,10 +1462,10 @@ static bool ar9003_hw_init_cal_pcoem(struct ath_hw *ah,
*/
if (ah->enabled_cals & TX_IQ_ON_AGC_CAL) {
if (caldata && !test_bit(TXIQCAL_DONE, &caldata->cal_flags))
- REG_SET_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0,
+ REG_SET_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0(ah),
AR_PHY_TX_IQCAL_CONTROL_0_ENABLE_TXIQ_CAL);
else
- REG_CLR_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0,
+ REG_CLR_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0(ah),
AR_PHY_TX_IQCAL_CONTROL_0_ENABLE_TXIQ_CAL);
txiqcal_done = run_agc_cal = true;
}
@@ -1485,12 +1485,12 @@ skip_tx_iqcal:
if (run_agc_cal || !(ah->ah_flags & AH_FASTCC)) {
/* Calibrate the AGC */
- REG_WRITE(ah, AR_PHY_AGC_CONTROL,
- REG_READ(ah, AR_PHY_AGC_CONTROL) |
+ REG_WRITE(ah, AR_PHY_AGC_CONTROL(ah),
+ REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) |
AR_PHY_AGC_CONTROL_CAL);
/* Poll for offset calibration complete */
- status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL,
+ status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_CAL,
0, AH_WAIT_TIMEOUT);
@@ -1507,7 +1507,7 @@ skip_tx_iqcal:
if (rtt && !run_rtt_cal) {
agc_ctrl |= agc_supp_cals;
- REG_WRITE(ah, AR_PHY_AGC_CONTROL, agc_ctrl);
+ REG_WRITE(ah, AR_PHY_AGC_CONTROL(ah), agc_ctrl);
}
if (!status) {
@@ -1558,11 +1558,11 @@ static bool do_ar9003_agc_cal(struct ath_hw *ah)
struct ath_common *common = ath9k_hw_common(ah);
bool status;
- REG_WRITE(ah, AR_PHY_AGC_CONTROL,
- REG_READ(ah, AR_PHY_AGC_CONTROL) |
+ REG_WRITE(ah, AR_PHY_AGC_CONTROL(ah),
+ REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) |
AR_PHY_AGC_CONTROL_CAL);
- status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL,
+ status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_CAL,
0, AH_WAIT_TIMEOUT);
if (!status) {
@@ -1596,7 +1596,7 @@ static bool ar9003_hw_init_cal_soc(struct ath_hw *ah,
goto skip_tx_iqcal;
/* Do Tx IQ Calibration */
- REG_RMW_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_1,
+ REG_RMW_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_1(ah),
AR_PHY_TX_IQCAL_CONTROL_1_IQCORR_I_Q_COFF_DELPT,
DELPT);
@@ -1605,7 +1605,7 @@ static bool ar9003_hw_init_cal_soc(struct ath_hw *ah,
* AGC calibration. Specifically, AR9550 in SoC chips.
*/
if (ah->enabled_cals & TX_IQ_ON_AGC_CAL) {
- if (REG_READ_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_0,
+ if (REG_READ_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_0(ah),
AR_PHY_TX_IQCAL_CONTROL_0_ENABLE_TXIQ_CAL)) {
txiqcal_done = true;
} else {
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
index 16bfcd0a1f6e..944f46cdf34c 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
@@ -3084,13 +3084,13 @@ error:
static bool ar9300_otp_read_word(struct ath_hw *ah, int addr, u32 *data)
{
- REG_READ(ah, AR9300_OTP_BASE + (4 * addr));
+ REG_READ(ah, AR9300_OTP_BASE(ah) + (4 * addr));
- if (!ath9k_hw_wait(ah, AR9300_OTP_STATUS, AR9300_OTP_STATUS_TYPE,
+ if (!ath9k_hw_wait(ah, AR9300_OTP_STATUS(ah), AR9300_OTP_STATUS_TYPE,
AR9300_OTP_STATUS_VALID, 1000))
return false;
- *data = REG_READ(ah, AR9300_OTP_READ_DATA);
+ *data = REG_READ(ah, AR9300_OTP_READ_DATA(ah));
return true;
}
@@ -3607,15 +3607,15 @@ static void ar9003_hw_xpa_bias_level_apply(struct ath_hw *ah, bool is2ghz)
if (AR_SREV_9485(ah) || AR_SREV_9330(ah) || AR_SREV_9340(ah) ||
AR_SREV_9531(ah) || AR_SREV_9561(ah))
- REG_RMW_FIELD(ah, AR_CH0_TOP2, AR_CH0_TOP2_XPABIASLVL, bias);
+ REG_RMW_FIELD(ah, AR_CH0_TOP2(ah), AR_CH0_TOP2_XPABIASLVL, bias);
else if (AR_SREV_9462(ah) || AR_SREV_9550(ah) || AR_SREV_9565(ah))
- REG_RMW_FIELD(ah, AR_CH0_TOP, AR_CH0_TOP_XPABIASLVL, bias);
+ REG_RMW_FIELD(ah, AR_CH0_TOP(ah), AR_CH0_TOP_XPABIASLVL, bias);
else {
- REG_RMW_FIELD(ah, AR_CH0_TOP, AR_CH0_TOP_XPABIASLVL, bias);
- REG_RMW_FIELD(ah, AR_CH0_THERM,
+ REG_RMW_FIELD(ah, AR_CH0_TOP(ah), AR_CH0_TOP_XPABIASLVL, bias);
+ REG_RMW_FIELD(ah, AR_CH0_THERM(ah),
AR_CH0_THERM_XPABIASLVL_MSB,
bias >> 2);
- REG_RMW_FIELD(ah, AR_CH0_THERM,
+ REG_RMW_FIELD(ah, AR_CH0_THERM(ah),
AR_CH0_THERM_XPASHORT2GND, 1);
}
}
@@ -3960,9 +3960,9 @@ void ar9003_hw_internal_regulator_apply(struct ath_hw *ah)
if (AR_SREV_9330(ah) || AR_SREV_9485(ah)) {
int reg_pmu_set;
- reg_pmu_set = REG_READ(ah, AR_PHY_PMU2) & ~AR_PHY_PMU2_PGM;
- REG_WRITE(ah, AR_PHY_PMU2, reg_pmu_set);
- if (!is_pmu_set(ah, AR_PHY_PMU2, reg_pmu_set))
+ reg_pmu_set = REG_READ(ah, AR_PHY_PMU2(ah)) & ~AR_PHY_PMU2_PGM;
+ REG_WRITE(ah, AR_PHY_PMU2(ah), reg_pmu_set);
+ if (!is_pmu_set(ah, AR_PHY_PMU2(ah), reg_pmu_set))
return;
if (AR_SREV_9330(ah)) {
@@ -3984,28 +3984,28 @@ void ar9003_hw_internal_regulator_apply(struct ath_hw *ah)
(3 << 24) | (1 << 28);
}
- REG_WRITE(ah, AR_PHY_PMU1, reg_pmu_set);
- if (!is_pmu_set(ah, AR_PHY_PMU1, reg_pmu_set))
+ REG_WRITE(ah, AR_PHY_PMU1(ah), reg_pmu_set);
+ if (!is_pmu_set(ah, AR_PHY_PMU1(ah), reg_pmu_set))
return;
- reg_pmu_set = (REG_READ(ah, AR_PHY_PMU2) & ~0xFFC00000)
+ reg_pmu_set = (REG_READ(ah, AR_PHY_PMU2(ah)) & ~0xFFC00000)
| (4 << 26);
- REG_WRITE(ah, AR_PHY_PMU2, reg_pmu_set);
- if (!is_pmu_set(ah, AR_PHY_PMU2, reg_pmu_set))
+ REG_WRITE(ah, AR_PHY_PMU2(ah), reg_pmu_set);
+ if (!is_pmu_set(ah, AR_PHY_PMU2(ah), reg_pmu_set))
return;
- reg_pmu_set = (REG_READ(ah, AR_PHY_PMU2) & ~0x00200000)
+ reg_pmu_set = (REG_READ(ah, AR_PHY_PMU2(ah)) & ~0x00200000)
| (1 << 21);
- REG_WRITE(ah, AR_PHY_PMU2, reg_pmu_set);
- if (!is_pmu_set(ah, AR_PHY_PMU2, reg_pmu_set))
+ REG_WRITE(ah, AR_PHY_PMU2(ah), reg_pmu_set);
+ if (!is_pmu_set(ah, AR_PHY_PMU2(ah), reg_pmu_set))
return;
} else if (AR_SREV_9462(ah) || AR_SREV_9565(ah) ||
AR_SREV_9561(ah)) {
reg_val = le32_to_cpu(pBase->swreg);
- REG_WRITE(ah, AR_PHY_PMU1, reg_val);
+ REG_WRITE(ah, AR_PHY_PMU1(ah), reg_val);
if (AR_SREV_9561(ah))
- REG_WRITE(ah, AR_PHY_PMU2, 0x10200000);
+ REG_WRITE(ah, AR_PHY_PMU2(ah), 0x10200000);
} else {
/* Internal regulator is ON. Write swreg register. */
reg_val = le32_to_cpu(pBase->swreg);
@@ -4021,25 +4021,25 @@ void ar9003_hw_internal_regulator_apply(struct ath_hw *ah)
}
} else {
if (AR_SREV_9330(ah) || AR_SREV_9485(ah)) {
- REG_RMW_FIELD(ah, AR_PHY_PMU2, AR_PHY_PMU2_PGM, 0);
- while (REG_READ_FIELD(ah, AR_PHY_PMU2,
+ REG_RMW_FIELD(ah, AR_PHY_PMU2(ah), AR_PHY_PMU2_PGM, 0);
+ while (REG_READ_FIELD(ah, AR_PHY_PMU2(ah),
AR_PHY_PMU2_PGM))
udelay(10);
- REG_RMW_FIELD(ah, AR_PHY_PMU1, AR_PHY_PMU1_PWD, 0x1);
- while (!REG_READ_FIELD(ah, AR_PHY_PMU1,
+ REG_RMW_FIELD(ah, AR_PHY_PMU1(ah), AR_PHY_PMU1_PWD, 0x1);
+ while (!REG_READ_FIELD(ah, AR_PHY_PMU1(ah),
AR_PHY_PMU1_PWD))
udelay(10);
- REG_RMW_FIELD(ah, AR_PHY_PMU2, AR_PHY_PMU2_PGM, 0x1);
- while (!REG_READ_FIELD(ah, AR_PHY_PMU2,
+ REG_RMW_FIELD(ah, AR_PHY_PMU2(ah), AR_PHY_PMU2_PGM, 0x1);
+ while (!REG_READ_FIELD(ah, AR_PHY_PMU2(ah),
AR_PHY_PMU2_PGM))
udelay(10);
} else if (AR_SREV_9462(ah) || AR_SREV_9565(ah))
- REG_RMW_FIELD(ah, AR_PHY_PMU1, AR_PHY_PMU1_PWD, 0x1);
+ REG_RMW_FIELD(ah, AR_PHY_PMU1(ah), AR_PHY_PMU1_PWD, 0x1);
else {
- reg_val = REG_READ(ah, AR_RTC_SLEEP_CLK) |
+ reg_val = REG_READ(ah, AR_RTC_SLEEP_CLK(ah)) |
AR_RTC_FORCE_SWREG_PRD;
- REG_WRITE(ah, AR_RTC_SLEEP_CLK, reg_val);
+ REG_WRITE(ah, AR_RTC_SLEEP_CLK(ah), reg_val);
}
}
@@ -4055,9 +4055,9 @@ static void ar9003_hw_apply_tuning_caps(struct ath_hw *ah)
if (eep->baseEepHeader.featureEnable & 0x40) {
tuning_caps_param &= 0x7f;
- REG_RMW_FIELD(ah, AR_CH0_XTAL, AR_CH0_XTAL_CAPINDAC,
+ REG_RMW_FIELD(ah, AR_CH0_XTAL(ah), AR_CH0_XTAL_CAPINDAC,
tuning_caps_param);
- REG_RMW_FIELD(ah, AR_CH0_XTAL, AR_CH0_XTAL_CAPOUTDAC,
+ REG_RMW_FIELD(ah, AR_CH0_XTAL(ah), AR_CH0_XTAL_CAPOUTDAC,
tuning_caps_param);
}
}
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.h b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.h
index f8ae20318302..b91ef1250ba8 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.h
+++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.h
@@ -82,16 +82,16 @@
/* AR5416_EEPMISC_BIG_ENDIAN not set indicates little endian */
#define AR9300_EEPMISC_LITTLE_ENDIAN 0
-#define AR9300_OTP_BASE \
- ((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x30000 : 0x14000)
-#define AR9300_OTP_STATUS \
- ((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x31018 : 0x15f18)
+#define AR9300_OTP_BASE(_ah) \
+ ((AR_SREV_9340(_ah) || AR_SREV_9550(_ah)) ? 0x30000 : 0x14000)
+#define AR9300_OTP_STATUS(_ah) \
+ ((AR_SREV_9340(_ah) || AR_SREV_9550(_ah)) ? 0x31018 : 0x15f18)
#define AR9300_OTP_STATUS_TYPE 0x7
#define AR9300_OTP_STATUS_VALID 0x4
#define AR9300_OTP_STATUS_ACCESS_BUSY 0x2
#define AR9300_OTP_STATUS_SM_BUSY 0x1
-#define AR9300_OTP_READ_DATA \
- ((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x3101c : 0x15f1c)
+#define AR9300_OTP_READ_DATA(_ah) \
+ ((AR_SREV_9340(_ah) || AR_SREV_9550(_ah)) ? 0x3101c : 0x15f1c)
enum targetPowerHTRates {
HT_TARGET_RATE_0_8_16,
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_hw.c b/drivers/net/wireless/ath/ath9k/ar9003_hw.c
index 42f00a2a8c80..4f27a9fb1482 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_hw.c
+++ b/drivers/net/wireless/ath/ath9k/ar9003_hw.c
@@ -1032,8 +1032,8 @@ static void ar9003_hw_configpcipowersave(struct ath_hw *ah,
/* Nothing to do on restore for 11N */
if (!power_off /* !restore */) {
/* set bit 19 to allow forcing of pcie core into L1 state */
- REG_SET_BIT(ah, AR_PCIE_PM_CTRL, AR_PCIE_PM_CTRL_ENA);
- REG_WRITE(ah, AR_WA, ah->WARegVal);
+ REG_SET_BIT(ah, AR_PCIE_PM_CTRL(ah), AR_PCIE_PM_CTRL_ENA);
+ REG_WRITE(ah, AR_WA(ah), ah->WARegVal);
}
/*
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_mac.c b/drivers/net/wireless/ath/ath9k/ar9003_mac.c
index ff8ab58e67d9..a8bc003077dc 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_mac.c
+++ b/drivers/net/wireless/ath/ath9k/ar9003_mac.c
@@ -193,16 +193,16 @@ static bool ar9003_hw_get_isr(struct ath_hw *ah, enum ath9k_int *masked,
if (ath9k_hw_mci_is_enabled(ah))
async_mask |= AR_INTR_ASYNC_MASK_MCI;
- async_cause = REG_READ(ah, AR_INTR_ASYNC_CAUSE);
+ async_cause = REG_READ(ah, AR_INTR_ASYNC_CAUSE(ah));
if (async_cause & async_mask) {
- if ((REG_READ(ah, AR_RTC_STATUS) & AR_RTC_STATUS_M)
+ if ((REG_READ(ah, AR_RTC_STATUS(ah)) & AR_RTC_STATUS_M(ah))
== AR_RTC_STATUS_ON)
isr = REG_READ(ah, AR_ISR);
}
- sync_cause = REG_READ(ah, AR_INTR_SYNC_CAUSE) & AR_INTR_SYNC_DEFAULT;
+ sync_cause = REG_READ(ah, AR_INTR_SYNC_CAUSE(ah)) & AR_INTR_SYNC_DEFAULT;
*masked = 0;
@@ -280,7 +280,7 @@ static bool ar9003_hw_get_isr(struct ath_hw *ah, enum ath9k_int *masked,
u32 s5;
if (pCap->hw_caps & ATH9K_HW_CAP_RAC_SUPPORTED)
- s5 = REG_READ(ah, AR_ISR_S5_S);
+ s5 = REG_READ(ah, AR_ISR_S5_S(ah));
else
s5 = REG_READ(ah, AR_ISR_S5);
@@ -345,8 +345,8 @@ static bool ar9003_hw_get_isr(struct ath_hw *ah, enum ath9k_int *masked,
ath_dbg(common, INTERRUPT,
"AR_INTR_SYNC_LOCAL_TIMEOUT\n");
- REG_WRITE(ah, AR_INTR_SYNC_CAUSE_CLR, sync_cause);
- (void) REG_READ(ah, AR_INTR_SYNC_CAUSE_CLR);
+ REG_WRITE(ah, AR_INTR_SYNC_CAUSE_CLR(ah), sync_cause);
+ (void) REG_READ(ah, AR_INTR_SYNC_CAUSE_CLR(ah));
}
return true;
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_mci.c b/drivers/net/wireless/ath/ath9k/ar9003_mci.c
index 8d7efd80d97a..2b9c07961cd7 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_mci.c
+++ b/drivers/net/wireless/ath/ath9k/ar9003_mci.c
@@ -458,7 +458,7 @@ static void ar9003_mci_observation_set_up(struct ath_hw *ah)
} else
return;
- REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL, AR_GPIO_JTAG_DISABLE);
+ REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL(ah), AR_GPIO_JTAG_DISABLE);
REG_RMW_FIELD(ah, AR_PHY_GLB_CONTROL, AR_GLB_DS_JTAG_DISABLE, 1);
REG_RMW_FIELD(ah, AR_PHY_GLB_CONTROL, AR_GLB_WLAN_UART_INTF_EN, 0);
@@ -466,12 +466,12 @@ static void ar9003_mci_observation_set_up(struct ath_hw *ah)
REG_RMW_FIELD(ah, AR_BTCOEX_CTRL2, AR_BTCOEX_CTRL2_GPIO_OBS_SEL, 0);
REG_RMW_FIELD(ah, AR_BTCOEX_CTRL2, AR_BTCOEX_CTRL2_MAC_BB_OBS_SEL, 1);
- REG_WRITE(ah, AR_OBS, 0x4b);
+ REG_WRITE(ah, AR_OBS(ah), 0x4b);
REG_RMW_FIELD(ah, AR_DIAG_SW, AR_DIAG_OBS_PT_SEL1, 0x03);
REG_RMW_FIELD(ah, AR_DIAG_SW, AR_DIAG_OBS_PT_SEL2, 0x01);
REG_RMW_FIELD(ah, AR_MACMISC, AR_MACMISC_MISC_OBS_BUS_LSB, 0x02);
REG_RMW_FIELD(ah, AR_MACMISC, AR_MACMISC_MISC_OBS_BUS_MSB, 0x03);
- REG_RMW_FIELD(ah, AR_PHY_TEST_CTL_STATUS,
+ REG_RMW_FIELD(ah, AR_PHY_TEST_CTL_STATUS(ah),
AR_PHY_TEST_CTL_DEBUGPORT_SEL, 0x07);
}
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_paprd.c b/drivers/net/wireless/ath/ath9k/ar9003_paprd.c
index b2d53b6c0ffd..83d993fff695 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_paprd.c
+++ b/drivers/net/wireless/ath/ath9k/ar9003_paprd.c
@@ -201,19 +201,19 @@ static int ar9003_paprd_setup_single_table(struct ath_hw *ah)
ar9003_paprd_enable(ah, false);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1(ah),
AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_LB_SKIP, 0x30);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1(ah),
AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_LB_ENABLE, 1);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1(ah),
AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_TX_GAIN_FORCE, 1);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1(ah),
AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_RX_BB_GAIN_FORCE, 0);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1(ah),
AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_IQCORR_ENABLE, 0);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1(ah),
AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_AGC2_SETTLING, 28);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1(ah),
AR_PHY_PAPRD_TRAINER_CNTL1_CF_CF_PAPRD_TRAIN_ENABLE, 1);
if (AR_SREV_9485(ah)) {
@@ -229,15 +229,15 @@ static int ar9003_paprd_setup_single_table(struct ath_hw *ah)
}
}
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL2,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL2(ah),
AR_PHY_PAPRD_TRAINER_CNTL2_CF_PAPRD_INIT_RX_BB_GAIN, val);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_FINE_CORR_LEN, 4);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_COARSE_CORR_LEN, 4);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_NUM_CORR_STAGES, 7);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_MIN_LOOPBACK_DEL, 1);
if (AR_SREV_9485(ah) ||
@@ -246,10 +246,10 @@ static int ar9003_paprd_setup_single_table(struct ath_hw *ah)
AR_SREV_9550(ah) ||
AR_SREV_9330(ah) ||
AR_SREV_9340(ah))
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_QUICK_DROP, -3);
else
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_QUICK_DROP, -6);
val = -10;
@@ -257,16 +257,16 @@ static int ar9003_paprd_setup_single_table(struct ath_hw *ah)
if (IS_CHAN_2GHZ(ah->curchan) && !AR_SREV_9462(ah) && !AR_SREV_9565(ah))
val = -15;
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_ADC_DESIRED_SIZE,
val);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_BBTXMIX_DISABLE, 1);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL4,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL4(ah),
AR_PHY_PAPRD_TRAINER_CNTL4_CF_PAPRD_SAFETY_DELTA, 0);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL4,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL4(ah),
AR_PHY_PAPRD_TRAINER_CNTL4_CF_PAPRD_MIN_CORR, 400);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL4,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL4(ah),
AR_PHY_PAPRD_TRAINER_CNTL4_CF_PAPRD_NUM_TRAIN_SAMPLES,
100);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_PRE_POST_SCALE_0_B0,
@@ -313,7 +313,7 @@ static unsigned int ar9003_get_desired_gain(struct ath_hw *ah, int chain,
int desired_scale, desired_gain = 0;
u32 reg_olpc = 0, reg_cl_gain = 0;
- REG_CLR_BIT(ah, AR_PHY_PAPRD_TRAINER_STAT1,
+ REG_CLR_BIT(ah, AR_PHY_PAPRD_TRAINER_STAT1(ah),
AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_TRAIN_DONE);
desired_scale = REG_READ_FIELD(ah, AR_PHY_TPC_12,
AR_PHY_TPC_12_DESIRED_SCALE_HT40_5);
@@ -812,7 +812,7 @@ void ar9003_paprd_setup_gain_table(struct ath_hw *ah, int chain)
ar9003_tx_force_gain(ah, gain_index);
- REG_CLR_BIT(ah, AR_PHY_PAPRD_TRAINER_STAT1,
+ REG_CLR_BIT(ah, AR_PHY_PAPRD_TRAINER_STAT1(ah),
AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_TRAIN_DONE);
}
EXPORT_SYMBOL(ar9003_paprd_setup_gain_table);
@@ -833,7 +833,7 @@ static bool ar9003_paprd_retrain_pa_in(struct ath_hw *ah,
capdiv2g = REG_READ_FIELD(ah, AR_PHY_65NM_CH0_TXRF3,
AR_PHY_65NM_CH0_TXRF3_CAPDIV2G);
- quick_drop = REG_READ_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
+ quick_drop = REG_READ_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_QUICK_DROP);
if (quick_drop)
@@ -906,7 +906,7 @@ static bool ar9003_paprd_retrain_pa_in(struct ath_hw *ah,
REG_RMW_FIELD(ah, AR_PHY_65NM_CH0_TXRF3,
AR_PHY_65NM_CH0_TXRF3_CAPDIV2G, capdiv2g);
- REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
+ REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_QUICK_DROP,
quick_drop);
@@ -932,14 +932,14 @@ int ar9003_paprd_create_curve(struct ath_hw *ah,
data_L = &buf[0];
data_U = &buf[48];
- REG_CLR_BIT(ah, AR_PHY_CHAN_INFO_MEMORY,
+ REG_CLR_BIT(ah, AR_PHY_CHAN_INFO_MEMORY(ah),
AR_PHY_CHAN_INFO_MEMORY_CHANINFOMEM_S2_READ);
reg = AR_PHY_CHAN_INFO_TAB_0;
for (i = 0; i < 48; i++)
data_L[i] = REG_READ(ah, reg + (i << 2));
- REG_SET_BIT(ah, AR_PHY_CHAN_INFO_MEMORY,
+ REG_SET_BIT(ah, AR_PHY_CHAN_INFO_MEMORY(ah),
AR_PHY_CHAN_INFO_MEMORY_CHANINFOMEM_S2_READ);
for (i = 0; i < 48; i++)
@@ -951,7 +951,7 @@ int ar9003_paprd_create_curve(struct ath_hw *ah,
if (ar9003_paprd_retrain_pa_in(ah, caldata, chain))
status = -EINPROGRESS;
- REG_CLR_BIT(ah, AR_PHY_PAPRD_TRAINER_STAT1,
+ REG_CLR_BIT(ah, AR_PHY_PAPRD_TRAINER_STAT1(ah),
AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_TRAIN_DONE);
kfree(buf);
@@ -977,14 +977,14 @@ bool ar9003_paprd_is_done(struct ath_hw *ah)
{
int paprd_done, agc2_pwr;
- paprd_done = REG_READ_FIELD(ah, AR_PHY_PAPRD_TRAINER_STAT1,
+ paprd_done = REG_READ_FIELD(ah, AR_PHY_PAPRD_TRAINER_STAT1(ah),
AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_TRAIN_DONE);
if (AR_SREV_9485(ah))
goto exit;
if (paprd_done == 0x1) {
- agc2_pwr = REG_READ_FIELD(ah, AR_PHY_PAPRD_TRAINER_STAT1,
+ agc2_pwr = REG_READ_FIELD(ah, AR_PHY_PAPRD_TRAINER_STAT1(ah),
AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_AGC2_PWR);
ath_dbg(ath9k_hw_common(ah), CALIBRATE,
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_phy.c b/drivers/net/wireless/ath/ath9k/ar9003_phy.c
index 090ff0600c81..a29c11f944a5 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_phy.c
+++ b/drivers/net/wireless/ath/ath9k/ar9003_phy.c
@@ -296,7 +296,7 @@ static void ar9003_hw_spur_mitigate_mrc_cck(struct ath_hw *ah,
cck_spur_freq = cck_spur_freq & 0xfffff;
- REG_RMW_FIELD(ah, AR_PHY_AGC_CONTROL,
+ REG_RMW_FIELD(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_YCOK_MAX, 0x7);
REG_RMW_FIELD(ah, AR_PHY_CCK_SPUR_MIT,
AR_PHY_CCK_SPUR_MIT_SPUR_RSSI_THR, 0x7f);
@@ -314,7 +314,7 @@ static void ar9003_hw_spur_mitigate_mrc_cck(struct ath_hw *ah,
}
}
- REG_RMW_FIELD(ah, AR_PHY_AGC_CONTROL,
+ REG_RMW_FIELD(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_YCOK_MAX, 0x5);
REG_RMW_FIELD(ah, AR_PHY_CCK_SPUR_MIT,
AR_PHY_CCK_SPUR_MIT_USE_CCK_SPUR_MIT, 0x0);
@@ -352,7 +352,7 @@ static void ar9003_hw_spur_ofdm_clear(struct ath_hw *ah)
AR_PHY_TIMING4_ENABLE_CHAN_MASK, 0);
REG_RMW_FIELD(ah, AR_PHY_PILOT_SPUR_MASK,
AR_PHY_PILOT_SPUR_MASK_CF_PILOT_MASK_IDX_A, 0);
- REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A,
+ REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A(ah),
AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_IDX_A, 0);
REG_RMW_FIELD(ah, AR_PHY_CHAN_SPUR_MASK,
AR_PHY_CHAN_SPUR_MASK_CF_CHAN_MASK_IDX_A, 0);
@@ -360,7 +360,7 @@ static void ar9003_hw_spur_ofdm_clear(struct ath_hw *ah)
AR_PHY_PILOT_SPUR_MASK_CF_PILOT_MASK_A, 0);
REG_RMW_FIELD(ah, AR_PHY_CHAN_SPUR_MASK,
AR_PHY_CHAN_SPUR_MASK_CF_CHAN_MASK_A, 0);
- REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A,
+ REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A(ah),
AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_A, 0);
REG_RMW_FIELD(ah, AR_PHY_SPUR_REG,
AR_PHY_SPUR_REG_MASK_RATE_CNTL, 0);
@@ -419,7 +419,7 @@ static void ar9003_hw_spur_ofdm(struct ath_hw *ah,
AR_PHY_TIMING4_ENABLE_CHAN_MASK, 0x1);
REG_RMW_FIELD(ah, AR_PHY_PILOT_SPUR_MASK,
AR_PHY_PILOT_SPUR_MASK_CF_PILOT_MASK_IDX_A, mask_index);
- REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A,
+ REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A(ah),
AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_IDX_A, mask_index);
REG_RMW_FIELD(ah, AR_PHY_CHAN_SPUR_MASK,
AR_PHY_CHAN_SPUR_MASK_CF_CHAN_MASK_IDX_A, mask_index);
@@ -427,7 +427,7 @@ static void ar9003_hw_spur_ofdm(struct ath_hw *ah,
AR_PHY_PILOT_SPUR_MASK_CF_PILOT_MASK_A, 0xc);
REG_RMW_FIELD(ah, AR_PHY_CHAN_SPUR_MASK,
AR_PHY_CHAN_SPUR_MASK_CF_CHAN_MASK_A, 0xc);
- REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A,
+ REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A(ah),
AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_A, 0xa0);
REG_RMW_FIELD(ah, AR_PHY_SPUR_REG,
AR_PHY_SPUR_REG_MASK_RATE_CNTL, 0xff);
@@ -449,7 +449,7 @@ static void ar9003_hw_spur_ofdm_9565(struct ath_hw *ah,
mask_index);
/* A == B */
- REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_B,
+ REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_B(ah),
AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_IDX_A,
mask_index);
@@ -462,7 +462,7 @@ static void ar9003_hw_spur_ofdm_9565(struct ath_hw *ah,
AR_PHY_CHAN_SPUR_MASK_CF_CHAN_MASK_B, 0xe);
/* A == B */
- REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_B,
+ REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_B(ah),
AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_A, 0xa0);
}
@@ -710,7 +710,7 @@ static void ar9003_hw_override_ini(struct ath_hw *ah)
REG_WRITE(ah, AR_GLB_SWREG_DISCONT_MODE,
AR_GLB_SWREG_DISCONT_EN_BT_WLAN);
- if (REG_READ_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_0,
+ if (REG_READ_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_0(ah),
AR_PHY_TX_IQCAL_CONTROL_0_ENABLE_TXIQ_CAL))
ah->enabled_cals |= TX_IQ_CAL;
else
@@ -726,11 +726,11 @@ static void ar9003_hw_override_ini(struct ath_hw *ah)
if (AR_SREV_9340(ah) || AR_SREV_9531(ah) || AR_SREV_9550(ah) ||
AR_SREV_9561(ah)) {
if (ah->is_clk_25mhz) {
- REG_WRITE(ah, AR_RTC_DERIVED_CLK, 0x17c << 1);
+ REG_WRITE(ah, AR_RTC_DERIVED_CLK(ah), 0x17c << 1);
REG_WRITE(ah, AR_SLP32_MODE, 0x0010f3d7);
REG_WRITE(ah, AR_SLP32_INC, 0x0001e7ae);
} else {
- REG_WRITE(ah, AR_RTC_DERIVED_CLK, 0x261 << 1);
+ REG_WRITE(ah, AR_RTC_DERIVED_CLK(ah), 0x261 << 1);
REG_WRITE(ah, AR_SLP32_MODE, 0x0010f400);
REG_WRITE(ah, AR_SLP32_INC, 0x0001e800);
}
@@ -1795,7 +1795,7 @@ static void ar9003_hw_spectral_scan_wait(struct ath_hw *ah)
static void ar9003_hw_tx99_start(struct ath_hw *ah, u32 qnum)
{
- REG_SET_BIT(ah, AR_PHY_TEST, PHY_AGC_CLR);
+ REG_SET_BIT(ah, AR_PHY_TEST(ah), PHY_AGC_CLR);
REG_CLR_BIT(ah, AR_DIAG_SW, AR_DIAG_RX_DIS);
REG_WRITE(ah, AR_CR, AR_CR_RXD);
REG_WRITE(ah, AR_DLCL_IFS(qnum), 0);
@@ -1808,7 +1808,7 @@ static void ar9003_hw_tx99_start(struct ath_hw *ah, u32 qnum)
static void ar9003_hw_tx99_stop(struct ath_hw *ah)
{
- REG_CLR_BIT(ah, AR_PHY_TEST, PHY_AGC_CLR);
+ REG_CLR_BIT(ah, AR_PHY_TEST(ah), PHY_AGC_CLR);
REG_SET_BIT(ah, AR_DIAG_SW, AR_DIAG_RX_DIS);
}
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_phy.h b/drivers/net/wireless/ath/ath9k/ar9003_phy.h
index ad949eb02f3d..57e2b4c89125 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_phy.h
+++ b/drivers/net/wireless/ath/ath9k/ar9003_phy.h
@@ -454,8 +454,8 @@
#define AR_PHY_GEN_CTRL (AR_SM_BASE + 0x4)
#define AR_PHY_MODE (AR_SM_BASE + 0x8)
#define AR_PHY_ACTIVE (AR_SM_BASE + 0xc)
-#define AR_PHY_SPUR_MASK_A (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x18 : 0x20))
-#define AR_PHY_SPUR_MASK_B (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x1c : 0x24))
+#define AR_PHY_SPUR_MASK_A(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x18 : 0x20))
+#define AR_PHY_SPUR_MASK_B(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x1c : 0x24))
#define AR_PHY_SPECTRAL_SCAN (AR_SM_BASE + 0x28)
#define AR_PHY_RADAR_BW_FILTER (AR_SM_BASE + 0x2c)
#define AR_PHY_SEARCH_START_DELAY (AR_SM_BASE + 0x30)
@@ -498,7 +498,7 @@
#define AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_A 0x3FF
#define AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_A_S 0
-#define AR_PHY_TEST (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x15c : 0x160))
+#define AR_PHY_TEST(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x15c : 0x160))
#define AR_PHY_TEST_BBB_OBS_SEL 0x780000
#define AR_PHY_TEST_BBB_OBS_SEL_S 19
@@ -509,7 +509,7 @@
#define AR_PHY_TEST_CHAIN_SEL 0xC0000000
#define AR_PHY_TEST_CHAIN_SEL_S 30
-#define AR_PHY_TEST_CTL_STATUS (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x160 : 0x164))
+#define AR_PHY_TEST_CTL_STATUS(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x160 : 0x164))
#define AR_PHY_TEST_CTL_TSTDAC_EN 0x1
#define AR_PHY_TEST_CTL_TSTDAC_EN_S 0
#define AR_PHY_TEST_CTL_TX_OBS_SEL 0x1C
@@ -524,22 +524,22 @@
#define AR_PHY_TEST_CTL_DEBUGPORT_SEL_S 29
-#define AR_PHY_TSTDAC (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x164 : 0x168))
+#define AR_PHY_TSTDAC(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x164 : 0x168))
-#define AR_PHY_CHAN_STATUS (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x168 : 0x16c))
+#define AR_PHY_CHAN_STATUS(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x168 : 0x16c))
-#define AR_PHY_CHAN_INFO_MEMORY (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x16c : 0x170))
+#define AR_PHY_CHAN_INFO_MEMORY(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x16c : 0x170))
#define AR_PHY_CHAN_INFO_MEMORY_CHANINFOMEM_S2_READ 0x00000008
#define AR_PHY_CHAN_INFO_MEMORY_CHANINFOMEM_S2_READ_S 3
-#define AR_PHY_CHNINFO_NOISEPWR (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x170 : 0x174))
-#define AR_PHY_CHNINFO_GAINDIFF (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x174 : 0x178))
-#define AR_PHY_CHNINFO_FINETIM (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x178 : 0x17c))
-#define AR_PHY_CHAN_INFO_GAIN_0 (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x17c : 0x180))
-#define AR_PHY_SCRAMBLER_SEED (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x184 : 0x190))
-#define AR_PHY_CCK_TX_CTRL (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x188 : 0x194))
+#define AR_PHY_CHNINFO_NOISEPWR(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x170 : 0x174))
+#define AR_PHY_CHNINFO_GAINDIFF(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x174 : 0x178))
+#define AR_PHY_CHNINFO_FINETIM(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x178 : 0x17c))
+#define AR_PHY_CHAN_INFO_GAIN_0(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x17c : 0x180))
+#define AR_PHY_SCRAMBLER_SEED(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x184 : 0x190))
+#define AR_PHY_CCK_TX_CTRL(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x188 : 0x194))
-#define AR_PHY_HEAVYCLIP_CTL (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x198 : 0x1a4))
+#define AR_PHY_HEAVYCLIP_CTL(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x198 : 0x1a4))
#define AR_PHY_HEAVYCLIP_20 (AR_SM_BASE + 0x1a8)
#define AR_PHY_HEAVYCLIP_40 (AR_SM_BASE + 0x1ac)
#define AR_PHY_HEAVYCLIP_1 (AR_SM_BASE + 0x19c)
@@ -611,16 +611,16 @@
#define AR_PHY_TXGAIN_TABLE (AR_SM_BASE + 0x300)
-#define AR_PHY_TX_IQCAL_CONTROL_0 (AR_SM_BASE + (AR_SREV_9485(ah) ? \
+#define AR_PHY_TX_IQCAL_CONTROL_0(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? \
0x3c4 : 0x444))
-#define AR_PHY_TX_IQCAL_CONTROL_1 (AR_SM_BASE + (AR_SREV_9485(ah) ? \
+#define AR_PHY_TX_IQCAL_CONTROL_1(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? \
0x3c8 : 0x448))
-#define AR_PHY_TX_IQCAL_START (AR_SM_BASE + (AR_SREV_9485(ah) ? \
+#define AR_PHY_TX_IQCAL_START(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? \
0x3c4 : 0x440))
-#define AR_PHY_TX_IQCAL_STATUS_B0 (AR_SM_BASE + (AR_SREV_9485(ah) ? \
+#define AR_PHY_TX_IQCAL_STATUS_B0(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? \
0x3f0 : 0x48c))
-#define AR_PHY_TX_IQCAL_CORR_COEFF_B0(_i) (AR_SM_BASE + \
- (AR_SREV_9485(ah) ? \
+#define AR_PHY_TX_IQCAL_CORR_COEFF_B0(_ah, _i) (AR_SM_BASE + \
+ (AR_SREV_9485(_ah) ? \
0x3d0 : 0x450) + ((_i) << 2))
#define AR_PHY_RTT_CTRL (AR_SM_BASE + 0x380)
@@ -684,8 +684,8 @@
#define AR_PHY_65NM_CH0_RXTX2_SYNTHOVR_MASK 0x00000008
#define AR_PHY_65NM_CH0_RXTX2_SYNTHOVR_MASK_S 3
-#define AR_CH0_TOP (AR_SREV_9300(ah) ? 0x16288 : \
- (((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 0x1628c : 0x16280)))
+#define AR_CH0_TOP(_ah) (AR_SREV_9300(_ah) ? 0x16288 : \
+ (((AR_SREV_9462(_ah) || AR_SREV_9565(_ah)) ? 0x1628c : 0x16280)))
#define AR_CH0_TOP_XPABIASLVL (AR_SREV_9550(ah) ? 0x3c0 : 0x300)
#define AR_CH0_TOP_XPABIASLVL_S (AR_SREV_9550(ah) ? 6 : 8)
@@ -705,8 +705,8 @@
#define AR_SWITCH_TABLE_ALL (0xfff)
#define AR_SWITCH_TABLE_ALL_S (0)
-#define AR_CH0_THERM (AR_SREV_9300(ah) ? 0x16290 :\
- ((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 0x16294 : 0x1628c))
+#define AR_CH0_THERM(_ah) (AR_SREV_9300(_ah) ? 0x16290 :\
+ ((AR_SREV_9462(_ah) || AR_SREV_9565(_ah)) ? 0x16294 : 0x1628c))
#define AR_CH0_THERM_XPABIASLVL_MSB 0x3
#define AR_CH0_THERM_XPABIASLVL_MSB_S 0
#define AR_CH0_THERM_XPASHORT2GND 0x4
@@ -717,26 +717,26 @@
#define AR_CH0_THERM_SAR_ADC_OUT 0x0000ff00
#define AR_CH0_THERM_SAR_ADC_OUT_S 8
-#define AR_CH0_TOP2 (AR_SREV_9300(ah) ? 0x1628c : \
- (AR_SREV_9462(ah) ? 0x16290 : 0x16284))
+#define AR_CH0_TOP2(_ah) (AR_SREV_9300(_ah) ? 0x1628c : \
+ (AR_SREV_9462(_ah) ? 0x16290 : 0x16284))
#define AR_CH0_TOP2_XPABIASLVL (AR_SREV_9561(ah) ? 0x1e00 : 0xf000)
#define AR_CH0_TOP2_XPABIASLVL_S (AR_SREV_9561(ah) ? 9 : 12)
-#define AR_CH0_XTAL (AR_SREV_9300(ah) ? 0x16294 : \
- ((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 0x16298 : \
- (AR_SREV_9561(ah) ? 0x162c0 : 0x16290)))
+#define AR_CH0_XTAL(_ah) (AR_SREV_9300(_ah) ? 0x16294 : \
+ ((AR_SREV_9462(_ah) || AR_SREV_9565(_ah)) ? 0x16298 : \
+ (AR_SREV_9561(_ah) ? 0x162c0 : 0x16290)))
#define AR_CH0_XTAL_CAPINDAC 0x7f000000
#define AR_CH0_XTAL_CAPINDAC_S 24
#define AR_CH0_XTAL_CAPOUTDAC 0x00fe0000
#define AR_CH0_XTAL_CAPOUTDAC_S 17
-#define AR_PHY_PMU1 ((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 0x16340 : \
- (AR_SREV_9561(ah) ? 0x16cc0 : 0x16c40))
+#define AR_PHY_PMU1(_ah) ((AR_SREV_9462(_ah) || AR_SREV_9565(_ah)) ? 0x16340 : \
+ (AR_SREV_9561(_ah) ? 0x16cc0 : 0x16c40))
#define AR_PHY_PMU1_PWD 0x1
#define AR_PHY_PMU1_PWD_S 0
-#define AR_PHY_PMU2 ((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 0x16344 : \
- (AR_SREV_9561(ah) ? 0x16cc4 : 0x16c44))
+#define AR_PHY_PMU2(_ah) ((AR_SREV_9462(_ah) || AR_SREV_9565(_ah)) ? 0x16344 : \
+ (AR_SREV_9561(_ah) ? 0x16cc4 : 0x16c44))
#define AR_PHY_PMU2_PGM 0x00200000
#define AR_PHY_PMU2_PGM_S 21
@@ -974,7 +974,7 @@
#define AR_PHY_TPC_5_B1 (AR_SM1_BASE + 0x208)
#define AR_PHY_TPC_6_B1 (AR_SM1_BASE + 0x20c)
#define AR_PHY_TPC_11_B1 (AR_SM1_BASE + 0x220)
-#define AR_PHY_PDADC_TAB_1 (AR_SM1_BASE + (AR_SREV_9462_20_OR_LATER(ah) ? \
+#define AR_PHY_PDADC_TAB_1(_ah) (AR_SM1_BASE + (AR_SREV_9462_20_OR_LATER(_ah) ? \
0x280 : 0x240))
#define AR_PHY_TPC_19_B1 (AR_SM1_BASE + 0x240)
#define AR_PHY_TPC_19_B1_ALPHA_THERM 0xff
@@ -1152,7 +1152,7 @@
#define AR_PHY_PAPRD_CTRL1_PAPRD_MAG_SCALE_FACT 0x0ffe0000
#define AR_PHY_PAPRD_CTRL1_PAPRD_MAG_SCALE_FACT_S 17
-#define AR_PHY_PAPRD_TRAINER_CNTL1 (AR_SM_BASE + (AR_SREV_9485(ah) ? 0x580 : 0x490))
+#define AR_PHY_PAPRD_TRAINER_CNTL1(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? 0x580 : 0x490))
#define AR_PHY_PAPRD_TRAINER_CNTL1_CF_CF_PAPRD_TRAIN_ENABLE 0x00000001
#define AR_PHY_PAPRD_TRAINER_CNTL1_CF_CF_PAPRD_TRAIN_ENABLE_S 0
@@ -1169,12 +1169,12 @@
#define AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_LB_SKIP 0x0003f000
#define AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_LB_SKIP_S 12
-#define AR_PHY_PAPRD_TRAINER_CNTL2 (AR_SM_BASE + (AR_SREV_9485(ah) ? 0x584 : 0x494))
+#define AR_PHY_PAPRD_TRAINER_CNTL2(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? 0x584 : 0x494))
#define AR_PHY_PAPRD_TRAINER_CNTL2_CF_PAPRD_INIT_RX_BB_GAIN 0xFFFFFFFF
#define AR_PHY_PAPRD_TRAINER_CNTL2_CF_PAPRD_INIT_RX_BB_GAIN_S 0
-#define AR_PHY_PAPRD_TRAINER_CNTL3 (AR_SM_BASE + (AR_SREV_9485(ah) ? 0x588 : 0x498))
+#define AR_PHY_PAPRD_TRAINER_CNTL3(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? 0x588 : 0x498))
#define AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_ADC_DESIRED_SIZE 0x0000003f
#define AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_ADC_DESIRED_SIZE_S 0
@@ -1191,7 +1191,7 @@
#define AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_BBTXMIX_DISABLE 0x20000000
#define AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_BBTXMIX_DISABLE_S 29
-#define AR_PHY_PAPRD_TRAINER_CNTL4 (AR_SM_BASE + (AR_SREV_9485(ah) ? 0x58c : 0x49c))
+#define AR_PHY_PAPRD_TRAINER_CNTL4(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? 0x58c : 0x49c))
#define AR_PHY_PAPRD_TRAINER_CNTL4_CF_PAPRD_NUM_TRAIN_SAMPLES 0x03ff0000
#define AR_PHY_PAPRD_TRAINER_CNTL4_CF_PAPRD_NUM_TRAIN_SAMPLES_S 16
@@ -1211,7 +1211,7 @@
#define AR_PHY_PAPRD_PRE_POST_SCALING 0x3FFFF
#define AR_PHY_PAPRD_PRE_POST_SCALING_S 0
-#define AR_PHY_PAPRD_TRAINER_STAT1 (AR_SM_BASE + (AR_SREV_9485(ah) ? 0x590 : 0x4a0))
+#define AR_PHY_PAPRD_TRAINER_STAT1(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? 0x590 : 0x4a0))
#define AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_TRAIN_DONE 0x00000001
#define AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_TRAIN_DONE_S 0
@@ -1226,7 +1226,7 @@
#define AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_AGC2_PWR 0x0001fe00
#define AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_AGC2_PWR_S 9
-#define AR_PHY_PAPRD_TRAINER_STAT2 (AR_SM_BASE + (AR_SREV_9485(ah) ? 0x594 : 0x4a4))
+#define AR_PHY_PAPRD_TRAINER_STAT2(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? 0x594 : 0x4a4))
#define AR_PHY_PAPRD_TRAINER_STAT2_PAPRD_FINE_VAL 0x0000ffff
#define AR_PHY_PAPRD_TRAINER_STAT2_PAPRD_FINE_VAL_S 0
@@ -1235,7 +1235,7 @@
#define AR_PHY_PAPRD_TRAINER_STAT2_PAPRD_FINE_IDX 0x00600000
#define AR_PHY_PAPRD_TRAINER_STAT2_PAPRD_FINE_IDX_S 21
-#define AR_PHY_PAPRD_TRAINER_STAT3 (AR_SM_BASE + (AR_SREV_9485(ah) ? 0x598 : 0x4a8))
+#define AR_PHY_PAPRD_TRAINER_STAT3(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? 0x598 : 0x4a8))
#define AR_PHY_PAPRD_TRAINER_STAT3_PAPRD_TRAIN_SAMPLES_CNT 0x000fffff
#define AR_PHY_PAPRD_TRAINER_STAT3_PAPRD_TRAIN_SAMPLES_CNT_S 0
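Note: the bulk of the ar9003_phy.h hunk above is one mechanical pattern. Register-offset macros that used to read a variable literally named `ah` from the surrounding scope now take the hardware state as an explicit `_ah` parameter. A minimal, self-contained sketch of the before/after shape; all names and the 0xa200 base are illustrative stand-ins, not the real ath9k definitions:

#include <stdio.h>

/* Illustrative stand-ins for the real ath9k types and helpers. */
struct ath_hw { int mac_version; };
#define AR_SREV_9561(_ah) ((_ah)->mac_version == 0x9561)
#define AR_SM_BASE 0xa200

/* Before: the macro silently required a variable literally named 'ah'
 * to be in scope at every expansion site:
 *   #define AR_PHY_SPUR_MASK_A (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x18 : 0x20))
 *
 * After: the dependency is explicit, so the macro works with any pointer
 * name and the compiler catches a missing argument. */
#define AR_PHY_SPUR_MASK_A(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x18 : 0x20))

int main(void)
{
        struct ath_hw hw9561 = { 0x9561 }, hw9300 = { 0x9300 };

        printf("9561: 0x%x\n", AR_PHY_SPUR_MASK_A(&hw9561)); /* 0xa218 */
        printf("9300: 0x%x\n", AR_PHY_SPUR_MASK_A(&hw9300)); /* 0xa220 */
        return 0;
}

The payoff shows throughout the series: call sites such as REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A(ah), ...) state their dependency explicitly, and a macro expanded where no `ah` exists fails at compile time instead of silently binding to whatever variable happens to share the name.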
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_wow.c b/drivers/net/wireless/ath/ath9k/ar9003_wow.c
index bea41df9fbd7..ac32afbf2c97 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_wow.c
+++ b/drivers/net/wireless/ath/ath9k/ar9003_wow.c
@@ -43,7 +43,7 @@ static void ath9k_hw_set_powermode_wow_sleep(struct ath_hw *ah)
/* set rx disable bit */
REG_WRITE(ah, AR_CR, AR_CR_RXD);
- if (!ath9k_hw_wait(ah, AR_CR, AR_CR_RXE, 0, AH_WAIT_TIMEOUT)) {
+ if (!ath9k_hw_wait(ah, AR_CR, AR_CR_RXE(ah), 0, AH_WAIT_TIMEOUT)) {
ath_err(common, "Failed to stop Rx DMA in 10ms AR_CR=0x%08x AR_DIAG_SW=0x%08x\n",
REG_READ(ah, AR_CR), REG_READ(ah, AR_DIAG_SW));
return;
@@ -61,7 +61,7 @@ static void ath9k_hw_set_powermode_wow_sleep(struct ath_hw *ah)
if (ath9k_hw_mci_is_enabled(ah))
REG_WRITE(ah, AR_RTC_KEEP_AWAKE, 0x2);
- REG_WRITE(ah, AR_RTC_FORCE_WAKE, AR_RTC_FORCE_WAKE_ON_INT);
+ REG_WRITE(ah, AR_RTC_FORCE_WAKE(ah), AR_RTC_FORCE_WAKE_ON_INT);
}
static void ath9k_wow_create_keep_alive_pattern(struct ath_hw *ah)
@@ -226,7 +226,7 @@ u32 ath9k_hw_wow_wakeup(struct ath_hw *ah)
*/
/* do we need to check the bit value 0x01000000 (7-10) ?? */
- REG_RMW(ah, AR_PCIE_PM_CTRL, AR_PMCTRL_WOW_PME_CLR,
+ REG_RMW(ah, AR_PCIE_PM_CTRL(ah), AR_PMCTRL_WOW_PME_CLR,
AR_PMCTRL_PWR_STATE_D1D3);
/*
@@ -278,12 +278,12 @@ static void ath9k_hw_wow_set_arwr_reg(struct ath_hw *ah)
* to the external PCI-E reset. We also need to tie
* the PCI-E Phy reset to the PCI-E reset.
*/
- wa_reg = REG_READ(ah, AR_WA);
+ wa_reg = REG_READ(ah, AR_WA(ah));
wa_reg &= ~AR_WA_UNTIE_RESET_EN;
wa_reg |= AR_WA_RESET_EN;
wa_reg |= AR_WA_POR_SHORT;
- REG_WRITE(ah, AR_WA, wa_reg);
+ REG_WRITE(ah, AR_WA(ah), wa_reg);
}
void ath9k_hw_wow_enable(struct ath_hw *ah, u32 pattern_enable)
@@ -309,11 +309,11 @@ void ath9k_hw_wow_enable(struct ath_hw *ah, u32 pattern_enable)
* Set and clear WOW_PME_CLEAR for the chip
* to generate next wow signal.
*/
- REG_SET_BIT(ah, AR_PCIE_PM_CTRL, AR_PMCTRL_HOST_PME_EN |
+ REG_SET_BIT(ah, AR_PCIE_PM_CTRL(ah), AR_PMCTRL_HOST_PME_EN |
AR_PMCTRL_PWR_PM_CTRL_ENA |
AR_PMCTRL_AUX_PWR_DET |
AR_PMCTRL_WOW_PME_CLR);
- REG_CLR_BIT(ah, AR_PCIE_PM_CTRL, AR_PMCTRL_WOW_PME_CLR);
+ REG_CLR_BIT(ah, AR_PCIE_PM_CTRL(ah), AR_PMCTRL_WOW_PME_CLR);
/*
* Random Backoff.
@@ -414,7 +414,7 @@ void ath9k_hw_wow_enable(struct ath_hw *ah, u32 pattern_enable)
/*
* Set the power states appropriately and enable PME.
*/
- host_pm_ctrl = REG_READ(ah, AR_PCIE_PM_CTRL);
+ host_pm_ctrl = REG_READ(ah, AR_PCIE_PM_CTRL(ah));
host_pm_ctrl |= AR_PMCTRL_PWR_STATE_D1D3 |
AR_PMCTRL_HOST_PME_EN |
AR_PMCTRL_PWR_PM_CTRL_ENA;
@@ -430,7 +430,7 @@ void ath9k_hw_wow_enable(struct ath_hw *ah, u32 pattern_enable)
host_pm_ctrl |= AR_PMCTRL_PWR_STATE_D1D3_REAL;
}
- REG_WRITE(ah, AR_PCIE_PM_CTRL, host_pm_ctrl);
+ REG_WRITE(ah, AR_PCIE_PM_CTRL(ah), host_pm_ctrl);
/*
* Enable sequence number generation when asleep.
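Note: ath9k_hw_wow_set_arwr_reg() above is a classic read-modify-write on AR_WA: read the register, clear AR_WA_UNTIE_RESET_EN, set AR_WA_RESET_EN and AR_WA_POR_SHORT, write it back. A hedged, runnable sketch of that RMW shape; the register file is faked and the bit positions are illustrative, not the real AR_WA_* values:

#include <stdint.h>
#include <stdio.h>

/* Fake register file standing in for MMIO in this demo. */
static uint32_t regs[0x10000];
static uint32_t reg_read(uint32_t addr)             { return regs[addr]; }
static void reg_write(uint32_t addr, uint32_t val)  { regs[addr] = val; }

/* Illustrative bit definitions, not the real AR_WA_* masks. */
#define WA_UNTIE_RESET_EN (1u << 15)
#define WA_RESET_EN       (1u << 18)
#define WA_POR_SHORT      (1u << 21)

static void set_arwr(uint32_t wa_addr)
{
        uint32_t wa = reg_read(wa_addr);    /* read   */

        wa &= ~WA_UNTIE_RESET_EN;           /* modify */
        wa |= WA_RESET_EN | WA_POR_SHORT;
        reg_write(wa_addr, wa);             /* write  */
}

int main(void)
{
        regs[0x4004] = WA_UNTIE_RESET_EN;
        set_arwr(0x4004);
        printf("AR_WA-like reg: 0x%08x\n", reg_read(0x4004));
        return 0;
}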
diff --git a/drivers/net/wireless/ath/ath9k/btcoex.c b/drivers/net/wireless/ath/ath9k/btcoex.c
index 618c9df35fc1..9b393a8f7c3a 100644
--- a/drivers/net/wireless/ath/ath9k/btcoex.c
+++ b/drivers/net/wireless/ath/ath9k/btcoex.c
@@ -173,16 +173,16 @@ void ath9k_hw_btcoex_init_2wire(struct ath_hw *ah)
struct ath_btcoex_hw *btcoex_hw = &ah->btcoex_hw;
/* connect bt_active to baseband */
- REG_CLR_BIT(ah, AR_GPIO_INPUT_EN_VAL,
+ REG_CLR_BIT(ah, AR_GPIO_INPUT_EN_VAL(ah),
(AR_GPIO_INPUT_EN_VAL_BT_PRIORITY_DEF |
AR_GPIO_INPUT_EN_VAL_BT_FREQUENCY_DEF));
- REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL,
+ REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL(ah),
AR_GPIO_INPUT_EN_VAL_BT_ACTIVE_BB);
/* Set input mux for bt_active to gpio pin */
if (!AR_SREV_SOC(ah))
- REG_RMW_FIELD(ah, AR_GPIO_INPUT_MUX1,
+ REG_RMW_FIELD(ah, AR_GPIO_INPUT_MUX1(ah),
AR_GPIO_INPUT_MUX1_BT_ACTIVE,
btcoex_hw->btactive_gpio);
@@ -197,17 +197,17 @@ void ath9k_hw_btcoex_init_3wire(struct ath_hw *ah)
struct ath_btcoex_hw *btcoex_hw = &ah->btcoex_hw;
/* btcoex 3-wire */
- REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL,
+ REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL(ah),
(AR_GPIO_INPUT_EN_VAL_BT_PRIORITY_BB |
AR_GPIO_INPUT_EN_VAL_BT_ACTIVE_BB));
/* Set input mux for bt_priority_async and
* bt_active_async to GPIO pins */
if (!AR_SREV_SOC(ah)) {
- REG_RMW_FIELD(ah, AR_GPIO_INPUT_MUX1,
+ REG_RMW_FIELD(ah, AR_GPIO_INPUT_MUX1(ah),
AR_GPIO_INPUT_MUX1_BT_ACTIVE,
btcoex_hw->btactive_gpio);
- REG_RMW_FIELD(ah, AR_GPIO_INPUT_MUX1,
+ REG_RMW_FIELD(ah, AR_GPIO_INPUT_MUX1(ah),
AR_GPIO_INPUT_MUX1_BT_PRIORITY,
btcoex_hw->btpriority_gpio);
}
@@ -404,7 +404,7 @@ void ath9k_hw_btcoex_enable(struct ath_hw *ah)
if (ath9k_hw_get_btcoex_scheme(ah) != ATH_BTCOEX_CFG_MCI &&
!AR_SREV_SOC(ah)) {
- REG_RMW(ah, AR_GPIO_PDPU,
+ REG_RMW(ah, AR_GPIO_PDPU(ah),
(0x2 << (btcoex_hw->btactive_gpio * 2)),
(0x3 << (btcoex_hw->btactive_gpio * 2)));
}
diff --git a/drivers/net/wireless/ath/ath9k/calib.c b/drivers/net/wireless/ath/ath9k/calib.c
index 0422a33395b7..fb270df75eb2 100644
--- a/drivers/net/wireless/ath/ath9k/calib.c
+++ b/drivers/net/wireless/ath/ath9k/calib.c
@@ -231,17 +231,17 @@ void ath9k_hw_start_nfcal(struct ath_hw *ah, bool update)
if (ah->caldata)
set_bit(NFCAL_PENDING, &ah->caldata->cal_flags);
- REG_SET_BIT(ah, AR_PHY_AGC_CONTROL,
+ REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_ENABLE_NF);
if (update)
- REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
+ REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_NO_UPDATE_NF);
else
- REG_SET_BIT(ah, AR_PHY_AGC_CONTROL,
+ REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_NO_UPDATE_NF);
- REG_SET_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_NF);
+ REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_NF);
}
int ath9k_hw_loadnf(struct ath_hw *ah, struct ath9k_channel *chan)
@@ -251,7 +251,7 @@ int ath9k_hw_loadnf(struct ath_hw *ah, struct ath9k_channel *chan)
u8 chainmask = (ah->rxchainmask << 3) | ah->rxchainmask;
struct ath_common *common = ath9k_hw_common(ah);
s16 default_nf = ath9k_hw_get_nf_limits(ah, chan)->nominal;
- u32 bb_agc_ctl = REG_READ(ah, AR_PHY_AGC_CONTROL);
+ u32 bb_agc_ctl = REG_READ(ah, AR_PHY_AGC_CONTROL(ah));
if (ah->caldata)
h = ah->caldata->nfCalHist;
@@ -286,7 +286,7 @@ int ath9k_hw_loadnf(struct ath_hw *ah, struct ath9k_channel *chan)
* (or after end rx/tx frame if ongoing)
*/
if (bb_agc_ctl & AR_PHY_AGC_CONTROL_NF) {
- REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_NF);
+ REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_NF);
REG_RMW_BUFFER_FLUSH(ah);
ENABLE_REG_RMW_BUFFER(ah);
}
@@ -295,11 +295,11 @@ int ath9k_hw_loadnf(struct ath_hw *ah, struct ath9k_channel *chan)
* Load software filtered NF value into baseband internal minCCApwr
* variable.
*/
- REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
+ REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_ENABLE_NF);
- REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
+ REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_NO_UPDATE_NF);
- REG_SET_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_NF);
+ REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_NF);
REG_RMW_BUFFER_FLUSH(ah);
/*
@@ -309,7 +309,7 @@ int ath9k_hw_loadnf(struct ath_hw *ah, struct ath9k_channel *chan)
* (11n max length 22.1 msec)
*/
for (j = 0; j < 22200; j++) {
- if ((REG_READ(ah, AR_PHY_AGC_CONTROL) &
+ if ((REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) &
AR_PHY_AGC_CONTROL_NF) == 0)
break;
udelay(10);
@@ -321,12 +321,12 @@ int ath9k_hw_loadnf(struct ath_hw *ah, struct ath9k_channel *chan)
if (bb_agc_ctl & AR_PHY_AGC_CONTROL_NF) {
ENABLE_REG_RMW_BUFFER(ah);
if (bb_agc_ctl & AR_PHY_AGC_CONTROL_ENABLE_NF)
- REG_SET_BIT(ah, AR_PHY_AGC_CONTROL,
+ REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_ENABLE_NF);
if (bb_agc_ctl & AR_PHY_AGC_CONTROL_NO_UPDATE_NF)
- REG_SET_BIT(ah, AR_PHY_AGC_CONTROL,
+ REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_NO_UPDATE_NF);
- REG_SET_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_NF);
+ REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_NF);
REG_RMW_BUFFER_FLUSH(ah);
}
@@ -342,7 +342,7 @@ int ath9k_hw_loadnf(struct ath_hw *ah, struct ath9k_channel *chan)
if (j == 22200) {
ath_dbg(common, ANY,
"Timeout while waiting for nf to load: AR_PHY_AGC_CONTROL=0x%x\n",
- REG_READ(ah, AR_PHY_AGC_CONTROL));
+ REG_READ(ah, AR_PHY_AGC_CONTROL(ah)));
return -ETIMEDOUT;
}
@@ -410,7 +410,7 @@ bool ath9k_hw_getnf(struct ath_hw *ah, struct ath9k_channel *chan)
struct ieee80211_channel *c = chan->chan;
struct ath9k_hw_cal_data *caldata = ah->caldata;
- if (REG_READ(ah, AR_PHY_AGC_CONTROL) & AR_PHY_AGC_CONTROL_NF) {
+ if (REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) & AR_PHY_AGC_CONTROL_NF) {
ath_dbg(common, CALIBRATE,
"NF did not complete in calibration window\n");
return false;
@@ -478,7 +478,7 @@ void ath9k_hw_bstuck_nfcal(struct ath_hw *ah)
*/
if (!test_bit(NFCAL_PENDING, &caldata->cal_flags))
ath9k_hw_start_nfcal(ah, true);
- else if (!(REG_READ(ah, AR_PHY_AGC_CONTROL) & AR_PHY_AGC_CONTROL_NF))
+ else if (!(REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) & AR_PHY_AGC_CONTROL_NF))
ath9k_hw_getnf(ah, ah->curchan);
set_bit(NFCAL_INTF, &caldata->cal_flags);
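Note: ath9k_hw_loadnf() above snapshots AR_PHY_AGC_CONTROL first, temporarily forces a known bit configuration to run the noise-floor load, then restores only the bits that were originally set. A minimal sketch of that snapshot-and-restore idiom; the register is modeled as a plain variable and the bit values are placeholders, not the real AR_PHY_AGC_CONTROL_* masks:

#include <stdint.h>
#include <stdio.h>

static uint32_t agc_ctl;               /* stand-in for the register */

#define CTL_ENABLE_NF    (1u << 0)
#define CTL_NO_UPDATE_NF (1u << 1)
#define CTL_NF           (1u << 2)

static void load_nf(void)
{
        uint32_t saved = agc_ctl;      /* snapshot before touching it */

        /* Force a known configuration and kick off the NF load. */
        agc_ctl &= ~(CTL_ENABLE_NF | CTL_NO_UPDATE_NF);
        agc_ctl |= CTL_NF;

        /* ... the driver polls here until hardware clears CTL_NF ... */
        agc_ctl &= ~CTL_NF;

        /* Restore only the bits that were set when we started. */
        if (saved & CTL_NF) {
                agc_ctl |= saved & (CTL_ENABLE_NF | CTL_NO_UPDATE_NF);
                agc_ctl |= CTL_NF;
        }
}

int main(void)
{
        agc_ctl = CTL_NF | CTL_ENABLE_NF;
        load_nf();
        printf("agc_ctl after restore: 0x%x\n", agc_ctl); /* bits 0 and 2 back */
        return 0;
}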
diff --git a/drivers/net/wireless/ath/ath9k/eeprom.h b/drivers/net/wireless/ath/ath9k/eeprom.h
index 31390af6c33e..f1cde43fcb55 100644
--- a/drivers/net/wireless/ath/ath9k/eeprom.h
+++ b/drivers/net/wireless/ath/ath9k/eeprom.h
@@ -68,8 +68,8 @@
#define AR5416_EEPROM_OFFSET 0x2000
#define AR5416_EEPROM_MAX 0xae0
-#define AR5416_EEPROM_START_ADDR \
- (AR_SREV_9100(ah)) ? 0x1fff1000 : 0x503f1200
+#define AR5416_EEPROM_START_ADDR(_ah) \
+ (AR_SREV_9100(_ah)) ? 0x1fff1000 : 0x503f1200
#define SD_NO_CTL 0xE0
#define NO_CTL 0xff
@@ -110,10 +110,10 @@
#define FBIN2FREQ(x, y) ((y) ? (2300 + x) : (4800 + 5 * x))
#define ath9k_hw_use_flash(_ah) (!(_ah->ah_flags & AH_USE_EEPROM))
-#define OLC_FOR_AR9280_20_LATER (AR_SREV_9280_20_OR_LATER(ah) && \
- ah->eep_ops->get_eeprom(ah, EEP_OL_PWRCTRL))
-#define OLC_FOR_AR9287_10_LATER (AR_SREV_9287_11_OR_LATER(ah) && \
- ah->eep_ops->get_eeprom(ah, EEP_OL_PWRCTRL))
+#define OLC_FOR_AR9280_20_LATER(_ah) (AR_SREV_9280_20_OR_LATER(_ah) && \
+ _ah->eep_ops->get_eeprom(_ah, EEP_OL_PWRCTRL))
+#define OLC_FOR_AR9287_10_LATER(_ah) (AR_SREV_9287_11_OR_LATER(_ah) && \
+ _ah->eep_ops->get_eeprom(_ah, EEP_OL_PWRCTRL))
#define EEP_RFSILENT_ENABLED 0x0001
#define EEP_RFSILENT_ENABLED_S 0
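Side note on the eeprom.h hunk: AR5416_EEPROM_START_ADDR(_ah) still expands to a bare conditional expression with no outer parentheses, faithfully preserving the old definition, so it only composes safely at call sites that don't wrap it in other operators. A small illustration of the hazard, using the same offsets but an invented macro name and a plain flag instead of the srev check:

#include <stdio.h>

#define START_ADDR_BARE(_is9100)  (_is9100) ? 0x1fff1000u : 0x503f1200u
#define START_ADDR_SAFE(_is9100) ((_is9100) ? 0x1fff1000u : 0x503f1200u)

int main(void)
{
        /* '2 * cond ? a : b' parses as '(2 * cond) ? a : b', so the
         * multiplication silently becomes part of the condition. */
        unsigned bare = 2 * START_ADDR_BARE(1);   /* 0x1fff1000 */
        unsigned safe = 2 * START_ADDR_SAFE(1);   /* 0x3ffe2000 */

        printf("bare=0x%x safe=0x%x\n", bare, safe);
        return 0;
}

The converted macro is no worse than the original in this respect, and the existing call sites use it standalone, but it is worth knowing before reusing it inside a larger expression.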
diff --git a/drivers/net/wireless/ath/ath9k/eeprom_def.c b/drivers/net/wireless/ath/ath9k/eeprom_def.c
index 9729a69d3e2e..7685f8ab371e 100644
--- a/drivers/net/wireless/ath/ath9k/eeprom_def.c
+++ b/drivers/net/wireless/ath/ath9k/eeprom_def.c
@@ -800,7 +800,7 @@ static void ath9k_hw_set_def_power_cal_table(struct ath_hw *ah,
numPiers = AR5416_NUM_5G_CAL_PIERS;
}
- if (OLC_FOR_AR9280_20_LATER && IS_CHAN_2GHZ(chan)) {
+ if (OLC_FOR_AR9280_20_LATER(ah) && IS_CHAN_2GHZ(chan)) {
pRawDataset = pEepData->calPierData2G[0];
ah->initPDADC = ((struct calDataPerFreqOpLoop *)
pRawDataset)->vpdPdg[0][0];
@@ -841,7 +841,7 @@ static void ath9k_hw_set_def_power_cal_table(struct ath_hw *ah,
pRawDataset = pEepData->calPierData5G[i];
- if (OLC_FOR_AR9280_20_LATER) {
+ if (OLC_FOR_AR9280_20_LATER(ah)) {
u8 pcdacIdx;
u8 txPower;
@@ -869,7 +869,7 @@ static void ath9k_hw_set_def_power_cal_table(struct ath_hw *ah,
ENABLE_REGWRITE_BUFFER(ah);
- if (OLC_FOR_AR9280_20_LATER) {
+ if (OLC_FOR_AR9280_20_LATER(ah)) {
REG_WRITE(ah,
AR_PHY_TPCRG5 + regChainOffset,
SM(0x6,
@@ -1203,7 +1203,7 @@ static void ath9k_hw_def_set_txpower(struct ath_hw *ah,
| ATH9K_POW_SM(ratesArray[rate24mb], 0));
if (IS_CHAN_2GHZ(chan)) {
- if (OLC_FOR_AR9280_20_LATER) {
+ if (OLC_FOR_AR9280_20_LATER(ah)) {
cck_ofdm_delta = 2;
REG_WRITE(ah, AR_PHY_POWER_TX_RATE3,
ATH9K_POW_SM(RT_AR_DELTA(rate2s), 24)
@@ -1259,7 +1259,7 @@ static void ath9k_hw_def_set_txpower(struct ath_hw *ah,
ht40PowerIncForPdadc, 8)
| ATH9K_POW_SM(ratesArray[rateHt40_4] +
ht40PowerIncForPdadc, 0));
- if (OLC_FOR_AR9280_20_LATER) {
+ if (OLC_FOR_AR9280_20_LATER(ah)) {
REG_WRITE(ah, AR_PHY_POWER_TX_RATE9,
ATH9K_POW_SM(ratesArray[rateExtOfdm], 24)
| ATH9K_POW_SM(RT_AR_DELTA(rateExtCck), 16)
diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
index 1a2e0c7eeb02..f521dfa2f194 100644
--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
+++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
@@ -561,11 +561,11 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
memcpy(ptr, skb->data, rx_remain_len);
rx_pkt_len += rx_remain_len;
- hif_dev->rx_remain_len = 0;
skb_put(remain_skb, rx_pkt_len);
skb_pool[pool_index++] = remain_skb;
-
+ hif_dev->remain_skb = NULL;
+ hif_dev->rx_remain_len = 0;
} else {
index = rx_remain_len;
}
@@ -584,16 +584,21 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
pkt_len = get_unaligned_le16(ptr + index);
pkt_tag = get_unaligned_le16(ptr + index + 2);
+ /* If we encounter an invalid pkt_tag or pkt_len, the whole
+ * input SKB is considered invalid and dropped; any packets
+ * already collected in skb_pool are dropped, too.
+ */
if (pkt_tag != ATH_USB_RX_STREAM_MODE_TAG) {
RX_STAT_INC(hif_dev, skb_dropped);
- return;
+ goto invalid_pkt;
}
if (pkt_len > 2 * MAX_RX_BUF_SIZE) {
dev_err(&hif_dev->udev->dev,
"ath9k_htc: invalid pkt_len (%x)\n", pkt_len);
RX_STAT_INC(hif_dev, skb_dropped);
- return;
+ goto invalid_pkt;
}
pad_len = 4 - (pkt_len & 0x3);
@@ -605,11 +610,6 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
if (index > MAX_RX_BUF_SIZE) {
spin_lock(&hif_dev->rx_lock);
- hif_dev->rx_remain_len = index - MAX_RX_BUF_SIZE;
- hif_dev->rx_transfer_len =
- MAX_RX_BUF_SIZE - chk_idx - 4;
- hif_dev->rx_pad_len = pad_len;
-
nskb = __dev_alloc_skb(pkt_len + 32, GFP_ATOMIC);
if (!nskb) {
dev_err(&hif_dev->udev->dev,
@@ -617,6 +617,12 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
spin_unlock(&hif_dev->rx_lock);
goto err;
}
+
+ hif_dev->rx_remain_len = index - MAX_RX_BUF_SIZE;
+ hif_dev->rx_transfer_len =
+ MAX_RX_BUF_SIZE - chk_idx - 4;
+ hif_dev->rx_pad_len = pad_len;
+
skb_reserve(nskb, 32);
RX_STAT_INC(hif_dev, skb_allocated);
@@ -654,6 +660,13 @@ err:
skb_pool[i]->len, USB_WLAN_RX_PIPE);
RX_STAT_INC(hif_dev, skb_completed);
}
+ return;
+invalid_pkt:
+ for (i = 0; i < pool_index; i++) {
+ dev_kfree_skb_any(skb_pool[i]);
+ RX_STAT_INC(hif_dev, skb_dropped);
+ }
+ return;
}
static void ath9k_hif_usb_rx_cb(struct urb *urb)
@@ -1411,8 +1424,6 @@ static void ath9k_hif_usb_disconnect(struct usb_interface *interface)
if (hif_dev->flags & HIF_USB_READY) {
ath9k_htc_hw_deinit(hif_dev->htc_handle, unplugged);
- ath9k_hif_usb_dev_deinit(hif_dev);
- ath9k_destroy_wmi(hif_dev->htc_handle->drv_priv);
ath9k_htc_hw_free(hif_dev->htc_handle);
}
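Note: the hif_usb.c changes above replace early returns with a common invalid_pkt exit, so packets already staged in skb_pool are freed when the stream turns out to be malformed, and they delay committing the rx_remain_len bookkeeping until the allocation that needs it has succeeded. A compact userspace sketch of the same "free everything staged on a malformed stream" shape, with an invented toy framing (length byte, then a 0xaa tag byte) rather than the real stream format:

#include <stdlib.h>
#include <string.h>

struct pkt { unsigned char *data; size_t len; };

/* Returns the number of packets staged in pool[], or -1 on a malformed
 * stream, in which case everything already staged is freed first. */
static int parse_stream(const unsigned char *buf, size_t len,
                        struct pkt *pool[], int max)
{
        int n = 0, i;
        size_t off = 0;

        while (off + 2 <= len && n < max) {
                size_t plen = buf[off];            /* toy framing: len byte */
                unsigned char tag = buf[off + 1];  /* then a tag byte       */

                if (tag != 0xaa || off + 2 + plen > len)
                        goto invalid;              /* malformed: drop all   */

                struct pkt *p = malloc(sizeof(*p));
                if (!p)
                        goto invalid;
                p->data = malloc(plen);
                if (!p->data) {
                        free(p);
                        goto invalid;
                }
                memcpy(p->data, buf + off + 2, plen);
                p->len = plen;
                pool[n++] = p;
                off += 2 + plen;
        }
        return n;

invalid:
        for (i = 0; i < n; i++) {                  /* nothing staged may leak */
                free(pool[i]->data);
                free(pool[i]);
        }
        return -1;
}

int main(void)
{
        unsigned char stream[] = { 3, 0xaa, 'a', 'b', 'c', 2, 0xaa, 'x', 'y' };
        struct pkt *pool[8];
        int n = parse_stream(stream, sizeof(stream), pool, 8);

        /* n == 2 here; a bad tag anywhere would instead return -1
         * with nothing leaked. */
        while (n-- > 0) { free(pool[n]->data); free(pool[n]); }
        return 0;
}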
diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_init.c b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
index 07ac88fb1c57..dae3d9c7b640 100644
--- a/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+++ b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
@@ -523,13 +523,13 @@ static bool ath_usb_eeprom_read(struct ath_common *common, u32 off, u16 *data)
(void)REG_READ(ah, AR5416_EEPROM_OFFSET + (off << AR5416_EEPROM_S));
if (!ath9k_hw_wait(ah,
- AR_EEPROM_STATUS_DATA,
+ AR_EEPROM_STATUS_DATA(ah),
AR_EEPROM_STATUS_DATA_BUSY |
AR_EEPROM_STATUS_DATA_PROT_ACCESS, 0,
AH_WAIT_TIMEOUT))
return false;
- *data = MS(REG_READ(ah, AR_EEPROM_STATUS_DATA),
+ *data = MS(REG_READ(ah, AR_EEPROM_STATUS_DATA(ah)),
AR_EEPROM_STATUS_DATA_VAL);
return true;
@@ -988,6 +988,8 @@ void ath9k_htc_disconnect_device(struct htc_target *htc_handle, bool hotunplug)
ath9k_deinit_device(htc_handle->drv_priv);
ath9k_stop_wmi(htc_handle->drv_priv);
+ ath9k_hif_usb_dealloc_urbs((struct hif_device_usb *)htc_handle->hif_dev);
+ ath9k_destroy_wmi(htc_handle->drv_priv);
ieee80211_free_hw(htc_handle->drv_priv->hw);
}
}
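Note: taken together with the hif_usb.c disconnect hunk above, this moves URB teardown and WMI destruction out of the USB disconnect path and into ath9k_htc_disconnect_device(), after the device and WMI consumers have been stopped. A hedged sketch of the resulting ordering; every name here is an illustrative stand-in:

struct dev;     /* opaque stand-in for the driver state */

static void stop_consumers(struct dev *d)    { (void)d; /* deinit_device + stop_wmi */ }
static void dealloc_transport(struct dev *d) { (void)d; /* hif_usb_dealloc_urbs     */ }
static void destroy_wmi_obj(struct dev *d)   { (void)d; /* destroy_wmi              */ }

static void disconnect_device(struct dev *d)
{
        stop_consumers(d);     /* 1: no new I/O can be submitted after this  */
        dealloc_transport(d);  /* 2: in-flight transfers are killed, freed   */
        destroy_wmi_obj(d);    /* 3: WMI state nothing can reach any more    */
}

int main(void) { disconnect_device(0); return 0; }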
diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
index ca05b07a45e6..fe62ff668f75 100644
--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
+++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
@@ -391,7 +391,7 @@ static void ath9k_htc_fw_panic_report(struct htc_target *htc_handle,
* HTC Messages are handled directly here and the obtained SKB
* is freed.
*
- * Service messages (Data, WMI) passed to the corresponding
+ * Service messages (Data, WMI) are passed to the corresponding
* endpoint RX handlers, which have to free the SKB.
*/
void ath9k_htc_rx_msg(struct htc_target *htc_handle,
@@ -478,6 +478,8 @@ invalid:
if (endpoint->ep_callbacks.rx)
endpoint->ep_callbacks.rx(endpoint->ep_callbacks.priv,
skb, epid);
+ else
+ goto invalid;
}
}
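Note: the htc_hst.c change closes a leak. If a message arrives for an endpoint that never registered an RX callback, the skb previously went nowhere; the new `goto invalid` routes it to the path that frees unclaimed buffers. Reduced to its essentials (names illustrative):

#include <stdlib.h>

struct endpoint { void (*rx)(void *buf); };

static void rx_msg(struct endpoint *ep, void *buf)
{
        if (ep && ep->rx) {
                ep->rx(buf);   /* the callback takes ownership of buf   */
                return;
        }
        free(buf);             /* unclaimed: free it instead of leaking */
}

int main(void)
{
        struct endpoint ep = { 0 };          /* no handler registered */

        rx_msg(&ep, malloc(64));             /* freed, not leaked */
        return 0;
}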
diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
index 172081ffe477..5982e0db45f9 100644
--- a/drivers/net/wireless/ath/ath9k/hw.c
+++ b/drivers/net/wireless/ath/ath9k/hw.c
@@ -266,7 +266,7 @@ static bool ath9k_hw_read_revisions(struct ath_hw *ah)
case AR9300_DEVID_AR9330:
ah->hw_version.macVersion = AR_SREV_VERSION_9330;
if (!ah->get_mac_revision) {
- val = REG_READ(ah, AR_SREV);
+ val = REG_READ(ah, AR_SREV(ah));
ah->hw_version.macRev = MS(val, AR_SREV_REVISION2);
}
return true;
@@ -284,7 +284,7 @@ static bool ath9k_hw_read_revisions(struct ath_hw *ah)
return true;
}
- srev = REG_READ(ah, AR_SREV);
+ srev = REG_READ(ah, AR_SREV(ah));
if (srev == -1) {
ath_err(ath9k_hw_common(ah),
@@ -292,7 +292,7 @@ static bool ath9k_hw_read_revisions(struct ath_hw *ah)
return false;
}
- val = srev & AR_SREV_ID;
+ val = srev & AR_SREV_ID(ah);
if (val == 0xFF) {
val = srev;
@@ -601,12 +601,12 @@ static int __ath9k_hw_init(struct ath_hw *ah)
}
/*
- * Read back AR_WA into a permanent copy and set bits 14 and 17.
+ * Read back AR_WA(ah) into a permanent copy and set bits 14 and 17.
* We need to do this to avoid RMW of this register. We cannot
* read the reg when chip is asleep.
*/
if (AR_SREV_9300_20_OR_LATER(ah)) {
- ah->WARegVal = REG_READ(ah, AR_WA);
+ ah->WARegVal = REG_READ(ah, AR_WA(ah));
ah->WARegVal |= (AR_WA_D3_L1_DISABLE |
AR_WA_ASPM_TIMER_BASED_DISABLE);
}
@@ -618,7 +618,7 @@ static int __ath9k_hw_init(struct ath_hw *ah)
if (AR_SREV_9565(ah)) {
ah->WARegVal |= AR_WA_BIT22;
- REG_WRITE(ah, AR_WA, ah->WARegVal);
+ REG_WRITE(ah, AR_WA(ah), ah->WARegVal);
}
ath9k_hw_init_defaults(ah);
@@ -814,7 +814,7 @@ static void ath9k_hw_init_pll(struct ath_hw *ah,
REG_RMW_FIELD(ah, AR_CH0_DDR_DPLL3,
AR_CH0_DPLL3_PHASE_SHIFT, 0x1);
- REG_WRITE(ah, AR_RTC_PLL_CONTROL,
+ REG_WRITE(ah, AR_RTC_PLL_CONTROL(ah),
pll | AR_RTC_9300_PLL_BYPASS);
udelay(1000);
@@ -832,7 +832,7 @@ static void ath9k_hw_init_pll(struct ath_hw *ah,
AR_SREV_9561(ah)) {
u32 regval, pll2_divint, pll2_divfrac, refdiv;
- REG_WRITE(ah, AR_RTC_PLL_CONTROL,
+ REG_WRITE(ah, AR_RTC_PLL_CONTROL(ah),
pll | AR_RTC_9300_SOC_PLL_BYPASS);
udelay(1000);
@@ -911,7 +911,7 @@ static void ath9k_hw_init_pll(struct ath_hw *ah,
if (AR_SREV_9565(ah))
pll |= 0x40000;
- REG_WRITE(ah, AR_RTC_PLL_CONTROL, pll);
+ REG_WRITE(ah, AR_RTC_PLL_CONTROL(ah), pll);
if (AR_SREV_9485(ah) || AR_SREV_9340(ah) || AR_SREV_9330(ah) ||
AR_SREV_9550(ah))
@@ -925,7 +925,7 @@ static void ath9k_hw_init_pll(struct ath_hw *ah,
udelay(RTC_PLL_SETTLE_DELAY);
- REG_WRITE(ah, AR_RTC_SLEEP_CLK, AR_RTC_FORCE_DERIVED_CLK);
+ REG_WRITE(ah, AR_RTC_SLEEP_CLK(ah), AR_RTC_FORCE_DERIVED_CLK);
}
static void ath9k_hw_init_interrupt_masks(struct ath_hw *ah,
@@ -977,7 +977,7 @@ static void ath9k_hw_init_interrupt_masks(struct ath_hw *ah,
REG_WRITE(ah, AR_IMR_S2, ah->imrs2_reg);
if (ah->msi_enabled) {
- ah->msi_reg = REG_READ(ah, AR_PCIE_MSI);
+ ah->msi_reg = REG_READ(ah, AR_PCIE_MSI(ah));
ah->msi_reg |= AR_PCIE_MSI_HW_DBI_WR_EN;
ah->msi_reg &= AR_PCIE_MSI_HW_INT_PENDING_ADDR_MSI_64;
REG_WRITE(ah, AR_INTCFG, msi_cfg);
@@ -987,18 +987,18 @@ static void ath9k_hw_init_interrupt_masks(struct ath_hw *ah,
}
if (!AR_SREV_9100(ah)) {
- REG_WRITE(ah, AR_INTR_SYNC_CAUSE, 0xFFFFFFFF);
- REG_WRITE(ah, AR_INTR_SYNC_ENABLE, sync_default);
- REG_WRITE(ah, AR_INTR_SYNC_MASK, 0);
+ REG_WRITE(ah, AR_INTR_SYNC_CAUSE(ah), 0xFFFFFFFF);
+ REG_WRITE(ah, AR_INTR_SYNC_ENABLE(ah), sync_default);
+ REG_WRITE(ah, AR_INTR_SYNC_MASK(ah), 0);
}
REGWRITE_BUFFER_FLUSH(ah);
if (AR_SREV_9300_20_OR_LATER(ah)) {
- REG_WRITE(ah, AR_INTR_PRIO_ASYNC_ENABLE, 0);
- REG_WRITE(ah, AR_INTR_PRIO_ASYNC_MASK, 0);
- REG_WRITE(ah, AR_INTR_PRIO_SYNC_ENABLE, 0);
- REG_WRITE(ah, AR_INTR_PRIO_SYNC_MASK, 0);
+ REG_WRITE(ah, AR_INTR_PRIO_ASYNC_ENABLE(ah), 0);
+ REG_WRITE(ah, AR_INTR_PRIO_ASYNC_MASK(ah), 0);
+ REG_WRITE(ah, AR_INTR_PRIO_SYNC_ENABLE(ah), 0);
+ REG_WRITE(ah, AR_INTR_PRIO_SYNC_MASK(ah), 0);
}
}
@@ -1341,7 +1341,7 @@ static bool ath9k_hw_ar9330_reset_war(struct ath_hw *ah, int type)
return false;
}
- REG_WRITE(ah, AR_RTC_RESET, 1);
+ REG_WRITE(ah, AR_RTC_RESET(ah), 1);
}
return true;
@@ -1353,26 +1353,26 @@ static bool ath9k_hw_set_reset(struct ath_hw *ah, int type)
u32 tmpReg;
if (AR_SREV_9100(ah)) {
- REG_RMW_FIELD(ah, AR_RTC_DERIVED_CLK,
+ REG_RMW_FIELD(ah, AR_RTC_DERIVED_CLK(ah),
AR_RTC_DERIVED_CLK_PERIOD, 1);
- (void)REG_READ(ah, AR_RTC_DERIVED_CLK);
+ (void)REG_READ(ah, AR_RTC_DERIVED_CLK(ah));
}
ENABLE_REGWRITE_BUFFER(ah);
if (AR_SREV_9300_20_OR_LATER(ah)) {
- REG_WRITE(ah, AR_WA, ah->WARegVal);
+ REG_WRITE(ah, AR_WA(ah), ah->WARegVal);
udelay(10);
}
- REG_WRITE(ah, AR_RTC_FORCE_WAKE, AR_RTC_FORCE_WAKE_EN |
+ REG_WRITE(ah, AR_RTC_FORCE_WAKE(ah), AR_RTC_FORCE_WAKE_EN |
AR_RTC_FORCE_WAKE_ON_INT);
if (AR_SREV_9100(ah)) {
rst_flags = AR_RTC_RC_MAC_WARM | AR_RTC_RC_MAC_COLD |
AR_RTC_RC_COLD_RESET | AR_RTC_RC_WARM_RESET;
} else {
- tmpReg = REG_READ(ah, AR_INTR_SYNC_CAUSE);
+ tmpReg = REG_READ(ah, AR_INTR_SYNC_CAUSE(ah));
if (AR_SREV_9340(ah))
tmpReg &= AR9340_INTR_SYNC_LOCAL_TIMEOUT;
else
@@ -1381,7 +1381,7 @@ static bool ath9k_hw_set_reset(struct ath_hw *ah, int type)
if (tmpReg) {
u32 val;
- REG_WRITE(ah, AR_INTR_SYNC_ENABLE, 0);
+ REG_WRITE(ah, AR_INTR_SYNC_ENABLE(ah), 0);
val = AR_RC_HOSTIF;
if (!AR_SREV_9300_20_OR_LATER(ah))
@@ -1414,7 +1414,7 @@ static bool ath9k_hw_set_reset(struct ath_hw *ah, int type)
REG_CLR_BIT(ah, AR_CFG, AR_CFG_HALT_REQ);
}
- REG_WRITE(ah, AR_RTC_RC, rst_flags);
+ REG_WRITE(ah, AR_RTC_RC(ah), rst_flags);
REGWRITE_BUFFER_FLUSH(ah);
@@ -1425,8 +1425,8 @@ static bool ath9k_hw_set_reset(struct ath_hw *ah, int type)
else
udelay(100);
- REG_WRITE(ah, AR_RTC_RC, 0);
- if (!ath9k_hw_wait(ah, AR_RTC_RC, AR_RTC_RC_M, 0, AH_WAIT_TIMEOUT)) {
+ REG_WRITE(ah, AR_RTC_RC(ah), 0);
+ if (!ath9k_hw_wait(ah, AR_RTC_RC(ah), AR_RTC_RC_M, 0, AH_WAIT_TIMEOUT)) {
ath_dbg(ath9k_hw_common(ah), RESET, "RTC stuck in MAC reset\n");
return false;
}
@@ -1445,17 +1445,17 @@ static bool ath9k_hw_set_reset_power_on(struct ath_hw *ah)
ENABLE_REGWRITE_BUFFER(ah);
if (AR_SREV_9300_20_OR_LATER(ah)) {
- REG_WRITE(ah, AR_WA, ah->WARegVal);
+ REG_WRITE(ah, AR_WA(ah), ah->WARegVal);
udelay(10);
}
- REG_WRITE(ah, AR_RTC_FORCE_WAKE, AR_RTC_FORCE_WAKE_EN |
+ REG_WRITE(ah, AR_RTC_FORCE_WAKE(ah), AR_RTC_FORCE_WAKE_EN |
AR_RTC_FORCE_WAKE_ON_INT);
if (!AR_SREV_9100(ah) && !AR_SREV_9300_20_OR_LATER(ah))
REG_WRITE(ah, AR_RC, AR_RC_AHB);
- REG_WRITE(ah, AR_RTC_RESET, 0);
+ REG_WRITE(ah, AR_RTC_RESET(ah), 0);
REGWRITE_BUFFER_FLUSH(ah);
@@ -1464,11 +1464,11 @@ static bool ath9k_hw_set_reset_power_on(struct ath_hw *ah)
if (!AR_SREV_9100(ah) && !AR_SREV_9300_20_OR_LATER(ah))
REG_WRITE(ah, AR_RC, 0);
- REG_WRITE(ah, AR_RTC_RESET, 1);
+ REG_WRITE(ah, AR_RTC_RESET(ah), 1);
if (!ath9k_hw_wait(ah,
- AR_RTC_STATUS,
- AR_RTC_STATUS_M,
+ AR_RTC_STATUS(ah),
+ AR_RTC_STATUS_M(ah),
AR_RTC_STATUS_ON,
AH_WAIT_TIMEOUT)) {
ath_dbg(ath9k_hw_common(ah), RESET, "RTC not waking up\n");
@@ -1483,11 +1483,11 @@ static bool ath9k_hw_set_reset_reg(struct ath_hw *ah, u32 type)
bool ret = false;
if (AR_SREV_9300_20_OR_LATER(ah)) {
- REG_WRITE(ah, AR_WA, ah->WARegVal);
+ REG_WRITE(ah, AR_WA(ah), ah->WARegVal);
udelay(10);
}
- REG_WRITE(ah, AR_RTC_FORCE_WAKE,
+ REG_WRITE(ah, AR_RTC_FORCE_WAKE(ah),
AR_RTC_FORCE_WAKE_EN | AR_RTC_FORCE_WAKE_ON_INT);
if (!ah->reset_power_on)
@@ -1521,7 +1521,7 @@ static bool ath9k_hw_chip_reset(struct ath_hw *ah,
else
reset_type = ATH9K_RESET_COLD;
} else if (ah->chip_fullsleep || REG_READ(ah, AR_Q_TXE) ||
- (REG_READ(ah, AR_CR) & AR_CR_RXE))
+ (REG_READ(ah, AR_CR) & AR_CR_RXE(ah)))
reset_type = ATH9K_RESET_COLD;
if (!ath9k_hw_set_reset_reg(ah, reset_type))
@@ -1955,7 +1955,7 @@ int ath9k_hw_reset(struct ath_hw *ah, struct ath9k_channel *chan,
ath9k_hw_settsf64(ah, tsf + tsf_offset);
if (AR_SREV_9280_20_OR_LATER(ah))
- REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL, AR_GPIO_JTAG_DISABLE);
+ REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL(ah), AR_GPIO_JTAG_DISABLE);
if (!AR_SREV_9300_20_OR_LATER(ah))
ar9002_hw_enable_async_fifo(ah);
@@ -2017,7 +2017,7 @@ int ath9k_hw_reset(struct ath_hw *ah, struct ath9k_channel *chan,
ath9k_hw_set_dma(ah);
if (!ath9k_hw_mci_is_enabled(ah))
- REG_WRITE(ah, AR_OBS, 8);
+ REG_WRITE(ah, AR_OBS(ah), 8);
ENABLE_REG_RMW_BUFFER(ah);
if (ah->config.rx_intr_mitigation) {
@@ -2111,7 +2111,7 @@ static void ath9k_set_power_sleep(struct ath_hw *ah)
* Clear the RTC force wake bit to allow the
* mac to go to sleep.
*/
- REG_CLR_BIT(ah, AR_RTC_FORCE_WAKE, AR_RTC_FORCE_WAKE_EN);
+ REG_CLR_BIT(ah, AR_RTC_FORCE_WAKE(ah), AR_RTC_FORCE_WAKE_EN);
if (ath9k_hw_mci_is_enabled(ah))
udelay(100);
@@ -2121,13 +2121,13 @@ static void ath9k_set_power_sleep(struct ath_hw *ah)
/* Shutdown chip. Active low */
if (!AR_SREV_5416(ah) && !AR_SREV_9271(ah)) {
- REG_CLR_BIT(ah, AR_RTC_RESET, AR_RTC_RESET_EN);
+ REG_CLR_BIT(ah, AR_RTC_RESET(ah), AR_RTC_RESET_EN);
udelay(2);
}
- /* Clear Bit 14 of AR_WA after putting chip into Full Sleep mode. */
+ /* Clear Bit 14 of AR_WA(ah) after putting chip into Full Sleep mode. */
if (AR_SREV_9300_20_OR_LATER(ah))
- REG_WRITE(ah, AR_WA, ah->WARegVal & ~AR_WA_D3_L1_DISABLE);
+ REG_WRITE(ah, AR_WA(ah), ah->WARegVal & ~AR_WA_D3_L1_DISABLE);
}
/*
@@ -2143,7 +2143,7 @@ static void ath9k_set_power_network_sleep(struct ath_hw *ah)
if (!(pCap->hw_caps & ATH9K_HW_CAP_AUTOSLEEP)) {
/* Set WakeOnInterrupt bit; clear ForceWake bit */
- REG_WRITE(ah, AR_RTC_FORCE_WAKE,
+ REG_WRITE(ah, AR_RTC_FORCE_WAKE(ah),
AR_RTC_FORCE_WAKE_ON_INT);
} else {
@@ -2163,15 +2163,15 @@ static void ath9k_set_power_network_sleep(struct ath_hw *ah)
* Clear the RTC force wake bit to allow the
* mac to go to sleep.
*/
- REG_CLR_BIT(ah, AR_RTC_FORCE_WAKE, AR_RTC_FORCE_WAKE_EN);
+ REG_CLR_BIT(ah, AR_RTC_FORCE_WAKE(ah), AR_RTC_FORCE_WAKE_EN);
if (ath9k_hw_mci_is_enabled(ah))
udelay(30);
}
- /* Clear Bit 14 of AR_WA after putting chip into Net Sleep mode. */
+ /* Clear Bit 14 of AR_WA(ah) after putting chip into Net Sleep mode. */
if (AR_SREV_9300_20_OR_LATER(ah))
- REG_WRITE(ah, AR_WA, ah->WARegVal & ~AR_WA_D3_L1_DISABLE);
+ REG_WRITE(ah, AR_WA(ah), ah->WARegVal & ~AR_WA_D3_L1_DISABLE);
}
static bool ath9k_hw_set_power_awake(struct ath_hw *ah)
@@ -2179,14 +2179,14 @@ static bool ath9k_hw_set_power_awake(struct ath_hw *ah)
u32 val;
int i;
- /* Set Bits 14 and 17 of AR_WA before powering on the chip. */
+ /* Set Bits 14 and 17 of AR_WA(ah) before powering on the chip. */
if (AR_SREV_9300_20_OR_LATER(ah)) {
- REG_WRITE(ah, AR_WA, ah->WARegVal);
+ REG_WRITE(ah, AR_WA(ah), ah->WARegVal);
udelay(10);
}
- if ((REG_READ(ah, AR_RTC_STATUS) &
- AR_RTC_STATUS_M) == AR_RTC_STATUS_SHUTDOWN) {
+ if ((REG_READ(ah, AR_RTC_STATUS(ah)) &
+ AR_RTC_STATUS_M(ah)) == AR_RTC_STATUS_SHUTDOWN) {
if (!ath9k_hw_set_reset_reg(ah, ATH9K_RESET_POWER_ON)) {
return false;
}
@@ -2194,10 +2194,10 @@ static bool ath9k_hw_set_power_awake(struct ath_hw *ah)
ath9k_hw_init_pll(ah, NULL);
}
if (AR_SREV_9100(ah))
- REG_SET_BIT(ah, AR_RTC_RESET,
+ REG_SET_BIT(ah, AR_RTC_RESET(ah),
AR_RTC_RESET_EN);
- REG_SET_BIT(ah, AR_RTC_FORCE_WAKE,
+ REG_SET_BIT(ah, AR_RTC_FORCE_WAKE(ah),
AR_RTC_FORCE_WAKE_EN);
if (AR_SREV_9100(ah))
mdelay(10);
@@ -2205,11 +2205,11 @@ static bool ath9k_hw_set_power_awake(struct ath_hw *ah)
udelay(50);
for (i = POWER_UP_TIME / 50; i > 0; i--) {
- val = REG_READ(ah, AR_RTC_STATUS) & AR_RTC_STATUS_M;
+ val = REG_READ(ah, AR_RTC_STATUS(ah)) & AR_RTC_STATUS_M(ah);
if (val == AR_RTC_STATUS_ON)
break;
udelay(50);
- REG_SET_BIT(ah, AR_RTC_FORCE_WAKE,
+ REG_SET_BIT(ah, AR_RTC_FORCE_WAKE(ah),
AR_RTC_FORCE_WAKE_EN);
}
if (i == 0) {
@@ -2701,16 +2701,16 @@ static void ath9k_hw_gpio_cfg_output_mux(struct ath_hw *ah, u32 gpio, u32 type)
u32 gpio_shift, tmp;
if (gpio > 11)
- addr = AR_GPIO_OUTPUT_MUX3;
+ addr = AR_GPIO_OUTPUT_MUX3(ah);
else if (gpio > 5)
- addr = AR_GPIO_OUTPUT_MUX2;
+ addr = AR_GPIO_OUTPUT_MUX2(ah);
else
- addr = AR_GPIO_OUTPUT_MUX1;
+ addr = AR_GPIO_OUTPUT_MUX1(ah);
gpio_shift = (gpio % 6) * 5;
if (AR_SREV_9280_20_OR_LATER(ah) ||
- (addr != AR_GPIO_OUTPUT_MUX1)) {
+ (addr != AR_GPIO_OUTPUT_MUX1(ah))) {
REG_RMW(ah, addr, (type << gpio_shift),
(0x1f << gpio_shift));
} else {
@@ -2754,13 +2754,13 @@ static void ath9k_hw_gpio_cfg_wmac(struct ath_hw *ah, u32 gpio, bool out,
AR7010_GPIO_OE_MASK << gpio_shift);
} else if (AR_SREV_SOC(ah)) {
gpio_set = out ? 1 : 0;
- REG_RMW(ah, AR_GPIO_OE_OUT, gpio_set << gpio_shift,
+ REG_RMW(ah, AR_GPIO_OE_OUT(ah), gpio_set << gpio_shift,
gpio_set << gpio_shift);
} else {
gpio_shift = gpio << 1;
gpio_set = out ?
AR_GPIO_OE_OUT_DRV_ALL : AR_GPIO_OE_OUT_DRV_NO;
- REG_RMW(ah, AR_GPIO_OE_OUT, gpio_set << gpio_shift,
+ REG_RMW(ah, AR_GPIO_OE_OUT(ah), gpio_set << gpio_shift,
AR_GPIO_OE_OUT_DRV << gpio_shift);
if (out)
@@ -2813,7 +2813,7 @@ u32 ath9k_hw_gpio_get(struct ath_hw *ah, u32 gpio)
u32 val = 0xffffffff;
#define MS_REG_READ(x, y) \
- (MS(REG_READ(ah, AR_GPIO_IN_OUT), x##_GPIO_IN_VAL) & BIT(y))
+ (MS(REG_READ(ah, AR_GPIO_IN_OUT(ah)), x##_GPIO_IN_VAL) & BIT(y))
WARN_ON(gpio >= ah->caps.num_gpio_pins);
@@ -2829,7 +2829,7 @@ u32 ath9k_hw_gpio_get(struct ath_hw *ah, u32 gpio)
else if (AR_DEVID_7010(ah))
val = REG_READ(ah, AR7010_GPIO_IN) & BIT(gpio);
else if (AR_SREV_9300_20_OR_LATER(ah))
- val = REG_READ(ah, AR_GPIO_IN) & BIT(gpio);
+ val = REG_READ(ah, AR_GPIO_IN(ah)) & BIT(gpio);
else
val = MS_REG_READ(AR, gpio);
} else if (BIT(gpio) & ah->caps.gpio_requested) {
@@ -2853,7 +2853,7 @@ void ath9k_hw_set_gpio(struct ath_hw *ah, u32 gpio, u32 val)
if (BIT(gpio) & ah->caps.gpio_mask) {
u32 out_addr = AR_DEVID_7010(ah) ?
- AR7010_GPIO_OUT : AR_GPIO_IN_OUT;
+ AR7010_GPIO_OUT : AR_GPIO_IN_OUT(ah);
REG_RMW(ah, out_addr, val << gpio, BIT(gpio));
} else if (BIT(gpio) & ah->caps.gpio_requested) {
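Note: several hw.c hunks above touch ath9k_hw_set_power_awake(), whose shape is a bounded poll: assert the force-wake bit, then repeatedly check AR_RTC_STATUS, re-asserting the wake request on each iteration until the chip reports ON or the retry budget runs out. A runnable sketch of that loop against a faked register file; the addresses, bit values, and retry budget are placeholders:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

static uint32_t regs[0x10000];                 /* fake MMIO for the demo */
static uint32_t reg_read(uint32_t a)           { return regs[a]; }
static void reg_write(uint32_t a, uint32_t v)  { regs[a] = v; }
static void delay_us(unsigned us)              { (void)us; }

#define RTC_STATUS  0x7044u   /* placeholder addresses and bit values */
#define RTC_WAKE    0x704cu
#define STATUS_MASK 0x0fu
#define STATUS_ON   0x02u
#define WAKE_EN     0x01u

static bool power_awake(void)
{
        reg_write(RTC_WAKE, WAKE_EN);
        delay_us(50);

        for (int i = 2000 / 50; i > 0; i--) {
                if ((reg_read(RTC_STATUS) & STATUS_MASK) == STATUS_ON)
                        return true;
                delay_us(50);
                reg_write(RTC_WAKE, WAKE_EN);  /* keep the request asserted */
        }
        return false;                          /* timed out */
}

int main(void)
{
        regs[RTC_STATUS] = STATUS_ON;          /* pretend the RTC woke up */
        printf("awake: %d\n", power_awake());
        return 0;
}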
diff --git a/drivers/net/wireless/ath/ath9k/mac.c b/drivers/net/wireless/ath/ath9k/mac.c
index 58d02c19b6d0..b070403e083f 100644
--- a/drivers/net/wireless/ath/ath9k/mac.c
+++ b/drivers/net/wireless/ath/ath9k/mac.c
@@ -707,7 +707,7 @@ bool ath9k_hw_stopdmarecv(struct ath_hw *ah, bool *reset)
/* Wait for rx enable bit to go low */
for (i = AH_RX_STOP_DMA_TIMEOUT / AH_TIME_QUANTUM; i != 0; i--) {
- if ((REG_READ(ah, AR_CR) & AR_CR_RXE) == 0)
+ if ((REG_READ(ah, AR_CR) & AR_CR_RXE(ah)) == 0)
break;
if (!AR_SREV_9300_20_OR_LATER(ah)) {
@@ -762,14 +762,14 @@ bool ath9k_hw_intrpend(struct ath_hw *ah)
if (AR_SREV_9100(ah))
return true;
- host_isr = REG_READ(ah, AR_INTR_ASYNC_CAUSE);
+ host_isr = REG_READ(ah, AR_INTR_ASYNC_CAUSE(ah));
if (((host_isr & AR_INTR_MAC_IRQ) ||
(host_isr & AR_INTR_ASYNC_MASK_MCI)) &&
(host_isr != AR_INTR_SPURIOUS))
return true;
- host_isr = REG_READ(ah, AR_INTR_SYNC_CAUSE);
+ host_isr = REG_READ(ah, AR_INTR_SYNC_CAUSE(ah));
if ((host_isr & AR_INTR_SYNC_DEFAULT)
&& (host_isr != AR_INTR_SPURIOUS))
return true;
@@ -786,11 +786,11 @@ void ath9k_hw_kill_interrupts(struct ath_hw *ah)
REG_WRITE(ah, AR_IER, AR_IER_DISABLE);
(void) REG_READ(ah, AR_IER);
if (!AR_SREV_9100(ah)) {
- REG_WRITE(ah, AR_INTR_ASYNC_ENABLE, 0);
- (void) REG_READ(ah, AR_INTR_ASYNC_ENABLE);
+ REG_WRITE(ah, AR_INTR_ASYNC_ENABLE(ah), 0);
+ (void) REG_READ(ah, AR_INTR_ASYNC_ENABLE(ah));
- REG_WRITE(ah, AR_INTR_SYNC_ENABLE, 0);
- (void) REG_READ(ah, AR_INTR_SYNC_ENABLE);
+ REG_WRITE(ah, AR_INTR_SYNC_ENABLE(ah), 0);
+ (void) REG_READ(ah, AR_INTR_SYNC_ENABLE(ah));
}
}
EXPORT_SYMBOL(ath9k_hw_kill_interrupts);
@@ -824,11 +824,11 @@ static void __ath9k_hw_enable_interrupts(struct ath_hw *ah)
ath_dbg(common, INTERRUPT, "enable IER\n");
REG_WRITE(ah, AR_IER, AR_IER_ENABLE);
if (!AR_SREV_9100(ah)) {
- REG_WRITE(ah, AR_INTR_ASYNC_ENABLE, async_mask);
- REG_WRITE(ah, AR_INTR_ASYNC_MASK, async_mask);
+ REG_WRITE(ah, AR_INTR_ASYNC_ENABLE(ah), async_mask);
+ REG_WRITE(ah, AR_INTR_ASYNC_MASK(ah), async_mask);
- REG_WRITE(ah, AR_INTR_SYNC_ENABLE, sync_default);
- REG_WRITE(ah, AR_INTR_SYNC_MASK, sync_default);
+ REG_WRITE(ah, AR_INTR_SYNC_ENABLE(ah), sync_default);
+ REG_WRITE(ah, AR_INTR_SYNC_MASK(ah), sync_default);
}
ath_dbg(common, INTERRUPT, "AR_IMR 0x%x IER 0x%x\n",
REG_READ(ah, AR_IMR), REG_READ(ah, AR_IER));
@@ -841,26 +841,26 @@ static void __ath9k_hw_enable_interrupts(struct ath_hw *ah)
ath_dbg(ath9k_hw_common(ah), INTERRUPT,
"Enabling MSI, msi_mask=0x%X\n", ah->msi_mask);
- REG_WRITE(ah, AR_INTR_PRIO_ASYNC_ENABLE, ah->msi_mask);
- REG_WRITE(ah, AR_INTR_PRIO_ASYNC_MASK, ah->msi_mask);
+ REG_WRITE(ah, AR_INTR_PRIO_ASYNC_ENABLE(ah), ah->msi_mask);
+ REG_WRITE(ah, AR_INTR_PRIO_ASYNC_MASK(ah), ah->msi_mask);
ath_dbg(ath9k_hw_common(ah), INTERRUPT,
"AR_INTR_PRIO_ASYNC_ENABLE=0x%X, AR_INTR_PRIO_ASYNC_MASK=0x%X\n",
- REG_READ(ah, AR_INTR_PRIO_ASYNC_ENABLE),
- REG_READ(ah, AR_INTR_PRIO_ASYNC_MASK));
+ REG_READ(ah, AR_INTR_PRIO_ASYNC_ENABLE(ah)),
+ REG_READ(ah, AR_INTR_PRIO_ASYNC_MASK(ah)));
if (ah->msi_reg == 0)
- ah->msi_reg = REG_READ(ah, AR_PCIE_MSI);
+ ah->msi_reg = REG_READ(ah, AR_PCIE_MSI(ah));
ath_dbg(ath9k_hw_common(ah), INTERRUPT,
"AR_PCIE_MSI=0x%X, ah->msi_reg = 0x%X\n",
- AR_PCIE_MSI, ah->msi_reg);
+ AR_PCIE_MSI(ah), ah->msi_reg);
i = 0;
do {
- REG_WRITE(ah, AR_PCIE_MSI,
+ REG_WRITE(ah, AR_PCIE_MSI(ah),
(ah->msi_reg | AR_PCIE_MSI_ENABLE)
& msi_pend_addr_mask);
- _msi_reg = REG_READ(ah, AR_PCIE_MSI);
+ _msi_reg = REG_READ(ah, AR_PCIE_MSI(ah));
i++;
} while ((_msi_reg & AR_PCIE_MSI_ENABLE) == 0 && i < 200);
@@ -918,8 +918,8 @@ void ath9k_hw_set_interrupts(struct ath_hw *ah)
if (ah->msi_enabled) {
ath_dbg(common, INTERRUPT, "Clearing AR_INTR_PRIO_ASYNC_ENABLE\n");
- REG_WRITE(ah, AR_INTR_PRIO_ASYNC_ENABLE, 0);
- REG_READ(ah, AR_INTR_PRIO_ASYNC_ENABLE);
+ REG_WRITE(ah, AR_INTR_PRIO_ASYNC_ENABLE(ah), 0);
+ REG_READ(ah, AR_INTR_PRIO_ASYNC_ENABLE(ah));
}
ath_dbg(common, INTERRUPT, "New interrupt mask 0x%x\n", ints);
diff --git a/drivers/net/wireless/ath/ath9k/pci.c b/drivers/net/wireless/ath/ath9k/pci.c
index a074e23013c5..a09f9d223f3d 100644
--- a/drivers/net/wireless/ath/ath9k/pci.c
+++ b/drivers/net/wireless/ath/ath9k/pci.c
@@ -804,14 +804,14 @@ static bool ath_pci_eeprom_read(struct ath_common *common, u32 off, u16 *data)
common->ops->read(ah, AR5416_EEPROM_OFFSET + (off << AR5416_EEPROM_S));
if (!ath9k_hw_wait(ah,
- AR_EEPROM_STATUS_DATA,
+ AR_EEPROM_STATUS_DATA(ah),
AR_EEPROM_STATUS_DATA_BUSY |
AR_EEPROM_STATUS_DATA_PROT_ACCESS, 0,
AH_WAIT_TIMEOUT)) {
return false;
}
- *data = MS(common->ops->read(ah, AR_EEPROM_STATUS_DATA),
+ *data = MS(common->ops->read(ah, AR_EEPROM_STATUS_DATA(ah)),
AR_EEPROM_STATUS_DATA_VAL);
return true;
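Note: both EEPROM read paths converted above (htc_drv_init.c and pci.c) share one shape: issue the read, wait for the BUSY and PROT_ACCESS bits of AR_EEPROM_STATUS_DATA to clear, then pull the 16-bit word out with the MS() mask-and-shift helper. A self-contained approximation against a fake register file; the MS() definition mirrors the kernel's, the rest is illustrative:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

static uint32_t regs[0x10000];                 /* fake register file */

#define EEP_STATUS        0x407cu              /* placeholder address */
#define EEP_DATA_VAL      0x0000ffffu
#define EEP_DATA_VAL_S    0
#define EEP_BUSY          0x00010000u
#define EEP_PROT_ACCESS   0x00040000u

#define MS(reg, field)    (((reg) & field) >> field##_S)

static bool eeprom_read(uint16_t *data)
{
        int timeout = 100000;

        while (regs[EEP_STATUS] & (EEP_BUSY | EEP_PROT_ACCESS))
                if (--timeout == 0)
                        return false;          /* never became ready */

        *data = MS(regs[EEP_STATUS], EEP_DATA_VAL);
        return true;
}

int main(void)
{
        uint16_t word;

        regs[EEP_STATUS] = 0xbeef;             /* data ready, not busy */
        printf("ok=%d word=0x%04x\n", eeprom_read(&word), word);
        return 0;
}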
diff --git a/drivers/net/wireless/ath/ath9k/reg.h b/drivers/net/wireless/ath/ath9k/reg.h
index 8983ea6fc727..9f5b8a538071 100644
--- a/drivers/net/wireless/ath/ath9k/reg.h
+++ b/drivers/net/wireless/ath/ath9k/reg.h
@@ -20,7 +20,7 @@
#include "../reg.h"
#define AR_CR 0x0008
-#define AR_CR_RXE (AR_SREV_9300_20_OR_LATER(ah) ? 0x0000000c : 0x00000004)
+#define AR_CR_RXE(_ah) (AR_SREV_9300_20_OR_LATER(_ah) ? 0x0000000c : 0x00000004)
#define AR_CR_RXD 0x00000020
#define AR_CR_SWI 0x00000040
@@ -352,10 +352,10 @@
#define AR_ISR_S1_QCU_TXEOL 0x03FF0000
#define AR_ISR_S1_QCU_TXEOL_S 16
-#define AR_ISR_S2_S (AR_SREV_9300_20_OR_LATER(ah) ? 0x00d0 : 0x00cc)
-#define AR_ISR_S3_S (AR_SREV_9300_20_OR_LATER(ah) ? 0x00d4 : 0x00d0)
-#define AR_ISR_S4_S (AR_SREV_9300_20_OR_LATER(ah) ? 0x00d8 : 0x00d4)
-#define AR_ISR_S5_S (AR_SREV_9300_20_OR_LATER(ah) ? 0x00dc : 0x00d8)
+#define AR_ISR_S2_S(_ah) (AR_SREV_9300_20_OR_LATER(_ah) ? 0x00d0 : 0x00cc)
+#define AR_ISR_S3_S(_ah) (AR_SREV_9300_20_OR_LATER(_ah) ? 0x00d4 : 0x00d0)
+#define AR_ISR_S4_S(_ah) (AR_SREV_9300_20_OR_LATER(_ah) ? 0x00d8 : 0x00d4)
+#define AR_ISR_S5_S(_ah) (AR_SREV_9300_20_OR_LATER(_ah) ? 0x00dc : 0x00d8)
#define AR_DMADBG_0 0x00e0
#define AR_DMADBG_1 0x00e4
#define AR_DMADBG_2 0x00e8
@@ -699,7 +699,7 @@
#define AR_RC_APB 0x00000002
#define AR_RC_HOSTIF 0x00000100
-#define AR_WA (AR_SREV_9340(ah) ? 0x40c4 : 0x4004)
+#define AR_WA(_ah) (AR_SREV_9340(_ah) ? 0x40c4 : 0x4004)
#define AR_WA_BIT6 (1 << 6)
#define AR_WA_BIT7 (1 << 7)
#define AR_WA_BIT23 (1 << 23)
@@ -721,7 +721,7 @@
#define AR_PM_STATE 0x4008
#define AR_PM_STATE_PME_D3COLD_VAUX 0x00100000
-#define AR_HOST_TIMEOUT (AR_SREV_9340(ah) ? 0x4008 : 0x4018)
+#define AR_HOST_TIMEOUT(_ah) (AR_SREV_9340(_ah) ? 0x4008 : 0x4018)
#define AR_HOST_TIMEOUT_APB_CNTR 0x0000FFFF
#define AR_HOST_TIMEOUT_APB_CNTR_S 0
#define AR_HOST_TIMEOUT_LCL_CNTR 0xFFFF0000
@@ -750,12 +750,12 @@
#define EEPROM_PROTECT_RP_1024_2047 0x4000
#define EEPROM_PROTECT_WP_1024_2047 0x8000
-#define AR_SREV \
- ((AR_SREV_9100(ah)) ? 0x0600 : (AR_SREV_9340(ah) \
+#define AR_SREV(_ah) \
+ ((AR_SREV_9100(_ah)) ? 0x0600 : (AR_SREV_9340(_ah) \
? 0x400c : 0x4020))
-#define AR_SREV_ID \
- ((AR_SREV_9100(ah)) ? 0x00000FFF : 0x000000FF)
+#define AR_SREV_ID(_ah) \
+ ((AR_SREV_9100(_ah)) ? 0x00000FFF : 0x000000FF)
#define AR_SREV_VERSION 0x000000F0
#define AR_SREV_VERSION_S 4
#define AR_SREV_REVISION 0x00000007
@@ -1038,11 +1038,11 @@ enum ath_usb_dev {
#define AR_INTR_SPURIOUS 0xFFFFFFFF
-#define AR_INTR_SYNC_CAUSE (AR_SREV_9340(ah) ? 0x4010 : 0x4028)
-#define AR_INTR_SYNC_CAUSE_CLR (AR_SREV_9340(ah) ? 0x4010 : 0x4028)
+#define AR_INTR_SYNC_CAUSE(_ah) (AR_SREV_9340(_ah) ? 0x4010 : 0x4028)
+#define AR_INTR_SYNC_CAUSE_CLR(_ah) (AR_SREV_9340(_ah) ? 0x4010 : 0x4028)
-#define AR_INTR_SYNC_ENABLE (AR_SREV_9340(ah) ? 0x4014 : 0x402c)
+#define AR_INTR_SYNC_ENABLE(_ah) (AR_SREV_9340(_ah) ? 0x4014 : 0x402c)
#define AR_INTR_SYNC_ENABLE_GPIO 0xFFFC0000
#define AR_INTR_SYNC_ENABLE_GPIO_S 18
@@ -1084,18 +1084,18 @@ enum {
};
-#define AR_INTR_ASYNC_MASK (AR_SREV_9340(ah) ? 0x4018 : 0x4030)
+#define AR_INTR_ASYNC_MASK(_ah) (AR_SREV_9340(_ah) ? 0x4018 : 0x4030)
#define AR_INTR_ASYNC_MASK_GPIO 0xFFFC0000
#define AR_INTR_ASYNC_MASK_GPIO_S 18
#define AR_INTR_ASYNC_MASK_MCI 0x00000080
#define AR_INTR_ASYNC_MASK_MCI_S 7
-#define AR_INTR_SYNC_MASK (AR_SREV_9340(ah) ? 0x401c : 0x4034)
+#define AR_INTR_SYNC_MASK(_ah) (AR_SREV_9340(_ah) ? 0x401c : 0x4034)
#define AR_INTR_SYNC_MASK_GPIO 0xFFFC0000
#define AR_INTR_SYNC_MASK_GPIO_S 18
-#define AR_INTR_ASYNC_CAUSE_CLR (AR_SREV_9340(ah) ? 0x4020 : 0x4038)
-#define AR_INTR_ASYNC_CAUSE (AR_SREV_9340(ah) ? 0x4020 : 0x4038)
+#define AR_INTR_ASYNC_CAUSE_CLR(_ah) (AR_SREV_9340(_ah) ? 0x4020 : 0x4038)
+#define AR_INTR_ASYNC_CAUSE(_ah) (AR_SREV_9340(_ah) ? 0x4020 : 0x4038)
#define AR_INTR_ASYNC_CAUSE_MCI 0x00000080
#define AR_INTR_ASYNC_USED (AR_INTR_MAC_IRQ | \
AR_INTR_ASYNC_CAUSE_MCI)
@@ -1105,13 +1105,13 @@ enum {
#define AR_INTR_ASYNC_ENABLE_MCI_S 7
-#define AR_INTR_ASYNC_ENABLE (AR_SREV_9340(ah) ? 0x4024 : 0x403c)
+#define AR_INTR_ASYNC_ENABLE(_ah) (AR_SREV_9340(_ah) ? 0x4024 : 0x403c)
#define AR_INTR_ASYNC_ENABLE_GPIO 0xFFFC0000
#define AR_INTR_ASYNC_ENABLE_GPIO_S 18
#define AR_PCIE_SERDES 0x4040
#define AR_PCIE_SERDES2 0x4044
-#define AR_PCIE_PM_CTRL (AR_SREV_9340(ah) ? 0x4004 : 0x4014)
+#define AR_PCIE_PM_CTRL(_ah) (AR_SREV_9340(_ah) ? 0x4004 : 0x4014)
#define AR_PCIE_PM_CTRL_ENA 0x00080000
#define AR_PCIE_PHY_REG3 0x18c08
@@ -1156,7 +1156,7 @@ enum {
#define AR9580_GPIO_MASK 0x0000F4FF
#define AR7010_GPIO_MASK 0x0000FFFF
-#define AR_GPIO_IN_OUT (AR_SREV_9340(ah) ? 0x4028 : 0x4048)
+#define AR_GPIO_IN_OUT(_ah) (AR_SREV_9340(_ah) ? 0x4028 : 0x4048)
#define AR_GPIO_IN_VAL 0x0FFFC000
#define AR_GPIO_IN_VAL_S 14
#define AR928X_GPIO_IN_VAL 0x000FFC00
@@ -1170,12 +1170,12 @@ enum {
#define AR7010_GPIO_IN_VAL 0x0000FFFF
#define AR7010_GPIO_IN_VAL_S 0
-#define AR_GPIO_IN (AR_SREV_9340(ah) ? 0x402c : 0x404c)
+#define AR_GPIO_IN(_ah) (AR_SREV_9340(_ah) ? 0x402c : 0x404c)
#define AR9300_GPIO_IN_VAL 0x0001FFFF
#define AR9300_GPIO_IN_VAL_S 0
-#define AR_GPIO_OE_OUT (AR_SREV_9340(ah) ? 0x4030 : \
- (AR_SREV_9300_20_OR_LATER(ah) ? 0x4050 : 0x404c))
+#define AR_GPIO_OE_OUT(_ah) (AR_SREV_9340(_ah) ? 0x4030 : \
+ (AR_SREV_9300_20_OR_LATER(_ah) ? 0x4050 : 0x404c))
#define AR_GPIO_OE_OUT_DRV 0x3
#define AR_GPIO_OE_OUT_DRV_NO 0x0
#define AR_GPIO_OE_OUT_DRV_LOW 0x1
@@ -1197,13 +1197,13 @@ enum {
#define AR7010_GPIO_INT_MASK 0x52024
#define AR7010_GPIO_FUNCTION 0x52028
-#define AR_GPIO_INTR_POL (AR_SREV_9340(ah) ? 0x4038 : \
- (AR_SREV_9300_20_OR_LATER(ah) ? 0x4058 : 0x4050))
+#define AR_GPIO_INTR_POL(_ah) (AR_SREV_9340(_ah) ? 0x4038 : \
+ (AR_SREV_9300_20_OR_LATER(_ah) ? 0x4058 : 0x4050))
#define AR_GPIO_INTR_POL_VAL 0x0001FFFF
#define AR_GPIO_INTR_POL_VAL_S 0
-#define AR_GPIO_INPUT_EN_VAL (AR_SREV_9340(ah) ? 0x403c : \
- (AR_SREV_9300_20_OR_LATER(ah) ? 0x405c : 0x4054))
+#define AR_GPIO_INPUT_EN_VAL(_ah) (AR_SREV_9340(_ah) ? 0x403c : \
+ (AR_SREV_9300_20_OR_LATER(_ah) ? 0x405c : 0x4054))
#define AR_GPIO_INPUT_EN_VAL_BT_PRIORITY_DEF 0x00000004
#define AR_GPIO_INPUT_EN_VAL_BT_PRIORITY_S 2
#define AR_GPIO_INPUT_EN_VAL_BT_FREQUENCY_DEF 0x00000008
@@ -1221,15 +1221,15 @@ enum {
#define AR_GPIO_RTC_RESET_OVERRIDE_ENABLE 0x00010000
#define AR_GPIO_JTAG_DISABLE 0x00020000
-#define AR_GPIO_INPUT_MUX1 (AR_SREV_9340(ah) ? 0x4040 : \
- (AR_SREV_9300_20_OR_LATER(ah) ? 0x4060 : 0x4058))
+#define AR_GPIO_INPUT_MUX1(_ah) (AR_SREV_9340(_ah) ? 0x4040 : \
+ (AR_SREV_9300_20_OR_LATER(_ah) ? 0x4060 : 0x4058))
#define AR_GPIO_INPUT_MUX1_BT_ACTIVE 0x000f0000
#define AR_GPIO_INPUT_MUX1_BT_ACTIVE_S 16
#define AR_GPIO_INPUT_MUX1_BT_PRIORITY 0x00000f00
#define AR_GPIO_INPUT_MUX1_BT_PRIORITY_S 8
-#define AR_GPIO_INPUT_MUX2 (AR_SREV_9340(ah) ? 0x4044 : \
- (AR_SREV_9300_20_OR_LATER(ah) ? 0x4064 : 0x405c))
+#define AR_GPIO_INPUT_MUX2(_ah) (AR_SREV_9340(_ah) ? 0x4044 : \
+ (AR_SREV_9300_20_OR_LATER(_ah) ? 0x4064 : 0x405c))
#define AR_GPIO_INPUT_MUX2_CLK25 0x0000000f
#define AR_GPIO_INPUT_MUX2_CLK25_S 0
#define AR_GPIO_INPUT_MUX2_RFSILENT 0x000000f0
@@ -1237,18 +1237,18 @@ enum {
#define AR_GPIO_INPUT_MUX2_RTC_RESET 0x00000f00
#define AR_GPIO_INPUT_MUX2_RTC_RESET_S 8
-#define AR_GPIO_OUTPUT_MUX1 (AR_SREV_9340(ah) ? 0x4048 : \
- (AR_SREV_9300_20_OR_LATER(ah) ? 0x4068 : 0x4060))
-#define AR_GPIO_OUTPUT_MUX2 (AR_SREV_9340(ah) ? 0x404c : \
- (AR_SREV_9300_20_OR_LATER(ah) ? 0x406c : 0x4064))
-#define AR_GPIO_OUTPUT_MUX3 (AR_SREV_9340(ah) ? 0x4050 : \
- (AR_SREV_9300_20_OR_LATER(ah) ? 0x4070 : 0x4068))
+#define AR_GPIO_OUTPUT_MUX1(_ah) (AR_SREV_9340(_ah) ? 0x4048 : \
+ (AR_SREV_9300_20_OR_LATER(_ah) ? 0x4068 : 0x4060))
+#define AR_GPIO_OUTPUT_MUX2(_ah) (AR_SREV_9340(_ah) ? 0x404c : \
+ (AR_SREV_9300_20_OR_LATER(_ah) ? 0x406c : 0x4064))
+#define AR_GPIO_OUTPUT_MUX3(_ah) (AR_SREV_9340(_ah) ? 0x4050 : \
+ (AR_SREV_9300_20_OR_LATER(_ah) ? 0x4070 : 0x4068))
-#define AR_INPUT_STATE (AR_SREV_9340(ah) ? 0x4054 : \
- (AR_SREV_9300_20_OR_LATER(ah) ? 0x4074 : 0x406c))
+#define AR_INPUT_STATE(_ah) (AR_SREV_9340(_ah) ? 0x4054 : \
+ (AR_SREV_9300_20_OR_LATER(_ah) ? 0x4074 : 0x406c))
-#define AR_EEPROM_STATUS_DATA (AR_SREV_9340(ah) ? 0x40c8 : \
- (AR_SREV_9300_20_OR_LATER(ah) ? 0x4084 : 0x407c))
+#define AR_EEPROM_STATUS_DATA(_ah) (AR_SREV_9340(_ah) ? 0x40c8 : \
+ (AR_SREV_9300_20_OR_LATER(_ah) ? 0x4084 : 0x407c))
#define AR_EEPROM_STATUS_DATA_VAL 0x0000ffff
#define AR_EEPROM_STATUS_DATA_VAL_S 0
#define AR_EEPROM_STATUS_DATA_BUSY 0x00010000
@@ -1256,13 +1256,13 @@ enum {
#define AR_EEPROM_STATUS_DATA_PROT_ACCESS 0x00040000
#define AR_EEPROM_STATUS_DATA_ABSENT_ACCESS 0x00080000
-#define AR_OBS (AR_SREV_9340(ah) ? 0x405c : \
- (AR_SREV_9300_20_OR_LATER(ah) ? 0x4088 : 0x4080))
+#define AR_OBS(_ah) (AR_SREV_9340(_ah) ? 0x405c : \
+ (AR_SREV_9300_20_OR_LATER(_ah) ? 0x4088 : 0x4080))
-#define AR_GPIO_PDPU (AR_SREV_9300_20_OR_LATER(ah) ? 0x4090 : 0x4088)
+#define AR_GPIO_PDPU(_ah) (AR_SREV_9300_20_OR_LATER(_ah) ? 0x4090 : 0x4088)
-#define AR_PCIE_MSI (AR_SREV_9340(ah) ? 0x40d8 : \
- (AR_SREV_9300_20_OR_LATER(ah) ? 0x40a4 : 0x4094))
+#define AR_PCIE_MSI(_ah) (AR_SREV_9340(_ah) ? 0x40d8 : \
+ (AR_SREV_9300_20_OR_LATER(_ah) ? 0x40a4 : 0x4094))
#define AR_PCIE_MSI_ENABLE 0x00000001
#define AR_PCIE_MSI_HW_DBI_WR_EN 0x02000000
#define AR_PCIE_MSI_HW_INT_PENDING_ADDR 0xFFA0C1FF /* bits 8..11: value must be 0x5060 */
@@ -1272,10 +1272,10 @@ enum {
#define AR_INTR_PRIO_RXLP 0x00000002
#define AR_INTR_PRIO_RXHP 0x00000004
-#define AR_INTR_PRIO_SYNC_ENABLE (AR_SREV_9340(ah) ? 0x4088 : 0x40c4)
-#define AR_INTR_PRIO_ASYNC_MASK (AR_SREV_9340(ah) ? 0x408c : 0x40c8)
-#define AR_INTR_PRIO_SYNC_MASK (AR_SREV_9340(ah) ? 0x4090 : 0x40cc)
-#define AR_INTR_PRIO_ASYNC_ENABLE (AR_SREV_9340(ah) ? 0x4094 : 0x40d4)
+#define AR_INTR_PRIO_SYNC_ENABLE(_ah) (AR_SREV_9340(_ah) ? 0x4088 : 0x40c4)
+#define AR_INTR_PRIO_ASYNC_MASK(_ah) (AR_SREV_9340(_ah) ? 0x408c : 0x40c8)
+#define AR_INTR_PRIO_SYNC_MASK(_ah) (AR_SREV_9340(_ah) ? 0x4090 : 0x40cc)
+#define AR_INTR_PRIO_ASYNC_ENABLE(_ah) (AR_SREV_9340(_ah) ? 0x4094 : 0x40d4)
#define AR_ENT_OTP 0x40d8
#define AR_ENT_OTP_CHAIN2_DISABLE 0x00020000
#define AR_ENT_OTP_49GHZ_DISABLE 0x00100000
@@ -1339,8 +1339,8 @@ enum {
#define AR_RTC_9160_PLL_CLKSEL_S 14
#define AR_RTC_BASE 0x00020000
-#define AR_RTC_RC \
- ((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0000) : 0x7000)
+#define AR_RTC_RC(_ah) \
+ ((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0000) : 0x7000)
#define AR_RTC_RC_M 0x00000003
#define AR_RTC_RC_MAC_WARM 0x00000001
#define AR_RTC_RC_MAC_COLD 0x00000002
@@ -1357,8 +1357,8 @@ enum {
#define AR_RTC_REG_CONTROL1 0x700c
#define AR_RTC_REG_CONTROL1_SWREG_PROGRAM 0x00000001
-#define AR_RTC_PLL_CONTROL \
- ((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0014) : 0x7014)
+#define AR_RTC_PLL_CONTROL(_ah) \
+ ((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0014) : 0x7014)
#define AR_RTC_PLL_CONTROL2 0x703c
@@ -1378,15 +1378,15 @@ enum {
#define PLL4_MEAS_DONE 0x8
#define SQSUM_DVC_MASK 0x007ffff8
-#define AR_RTC_RESET \
- ((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0040) : 0x7040)
+#define AR_RTC_RESET(_ah) \
+ ((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0040) : 0x7040)
#define AR_RTC_RESET_EN (0x00000001)
-#define AR_RTC_STATUS \
- ((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0044) : 0x7044)
+#define AR_RTC_STATUS(_ah) \
+ ((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0044) : 0x7044)
-#define AR_RTC_STATUS_M \
- ((AR_SREV_9100(ah)) ? 0x0000003f : 0x0000000f)
+#define AR_RTC_STATUS_M(_ah) \
+ ((AR_SREV_9100(_ah)) ? 0x0000003f : 0x0000000f)
#define AR_RTC_PM_STATUS_M 0x0000000f
@@ -1395,32 +1395,32 @@ enum {
#define AR_RTC_STATUS_SLEEP 0x00000004
#define AR_RTC_STATUS_WAKEUP 0x00000008
-#define AR_RTC_SLEEP_CLK \
- ((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0048) : 0x7048)
+#define AR_RTC_SLEEP_CLK(_ah) \
+ ((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0048) : 0x7048)
#define AR_RTC_FORCE_DERIVED_CLK 0x2
#define AR_RTC_FORCE_SWREG_PRD 0x00000004
-#define AR_RTC_FORCE_WAKE \
- ((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x004c) : 0x704c)
+#define AR_RTC_FORCE_WAKE(_ah) \
+ ((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x004c) : 0x704c)
#define AR_RTC_FORCE_WAKE_EN 0x00000001
#define AR_RTC_FORCE_WAKE_ON_INT 0x00000002
-#define AR_RTC_INTR_CAUSE \
- ((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0050) : 0x7050)
+#define AR_RTC_INTR_CAUSE(_ah) \
+ ((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0050) : 0x7050)
-#define AR_RTC_INTR_ENABLE \
- ((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0054) : 0x7054)
+#define AR_RTC_INTR_ENABLE(_ah) \
+ ((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0054) : 0x7054)
-#define AR_RTC_INTR_MASK \
- ((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0058) : 0x7058)
+#define AR_RTC_INTR_MASK(_ah) \
+ ((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0058) : 0x7058)
#define AR_RTC_KEEP_AWAKE 0x7034
/* RTC_DERIVED_* - only for AR9100 */
-#define AR_RTC_DERIVED_CLK \
- (AR_SREV_9100(ah) ? (AR_RTC_BASE + 0x0038) : 0x7038)
+#define AR_RTC_DERIVED_CLK(_ah) \
+ (AR_SREV_9100(_ah) ? (AR_RTC_BASE + 0x0038) : 0x7038)
#define AR_RTC_DERIVED_CLK_PERIOD 0x0000fffe
#define AR_RTC_DERIVED_CLK_PERIOD_S 1
@@ -2114,7 +2114,7 @@ enum {
#define AR9300_SM_BASE 0xa200
#define AR9002_PHY_AGC_CONTROL 0x9860
#define AR9003_PHY_AGC_CONTROL AR9300_SM_BASE + 0xc4
-#define AR_PHY_AGC_CONTROL (AR_SREV_9300_20_OR_LATER(ah) ? AR9003_PHY_AGC_CONTROL : AR9002_PHY_AGC_CONTROL)
+#define AR_PHY_AGC_CONTROL(_ah) (AR_SREV_9300_20_OR_LATER(_ah) ? AR9003_PHY_AGC_CONTROL : AR9002_PHY_AGC_CONTROL)
#define AR_PHY_AGC_CONTROL_CAL 0x00000001 /* do internal calibration */
#define AR_PHY_AGC_CONTROL_NF 0x00000002 /* do noise-floor calibration */
#define AR_PHY_AGC_CONTROL_OFFSET_CAL 0x00000800 /* allow offset calibration */
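A minimal sketch of the conversion pattern above: the register-offset macros gain an explicit _ah parameter, so the hardware context is passed by the caller instead of being captured from a variable that happens to be named ah at the expansion site. All my_* names below are hypothetical stand-ins, not the real ath9k API, and the revision check is illustrative only:

#include <stdbool.h>
#include <stdio.h>

struct ath_hw { int mac_version; };

/* Hypothetical stand-in for AR_SREV_9340(). */
static bool my_srev_9340(const struct ath_hw *ah)
{
	return ah->mac_version == 0x340;	/* illustrative value */
}

/* The context is now an explicit argument, so the macro works in any
 * caller, whatever its local variables are called. */
#define MY_INTR_PRIO_SYNC_ENABLE(_ah) (my_srev_9340(_ah) ? 0x4088 : 0x40c4)

int main(void)
{
	struct ath_hw hw = { .mac_version = 0x340 };

	printf("sync-enable register at 0x%x\n", MY_INTR_PRIO_SYNC_ENABLE(&hw));
	return 0;
}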
diff --git a/drivers/net/wireless/ath/ath9k/rng.c b/drivers/net/wireless/ath/ath9k/rng.c
index 58c0ab01771b..e1def77591c6 100644
--- a/drivers/net/wireless/ath/ath9k/rng.c
+++ b/drivers/net/wireless/ath/ath9k/rng.c
@@ -29,9 +29,9 @@ static int ath9k_rng_data_read(struct ath_softc *sc, u32 *buf, u32 buf_size)
ath9k_ps_wakeup(sc);
- REG_RMW_FIELD(ah, AR_PHY_TEST, AR_PHY_TEST_BBB_OBS_SEL, 1);
- REG_CLR_BIT(ah, AR_PHY_TEST, AR_PHY_TEST_RX_OBS_SEL_BIT5);
- REG_RMW_FIELD(ah, AR_PHY_TEST_CTL_STATUS, AR_PHY_TEST_CTL_RX_OBS_SEL, 0);
+ REG_RMW_FIELD(ah, AR_PHY_TEST(ah), AR_PHY_TEST_BBB_OBS_SEL, 1);
+ REG_CLR_BIT(ah, AR_PHY_TEST(ah), AR_PHY_TEST_RX_OBS_SEL_BIT5);
+ REG_RMW_FIELD(ah, AR_PHY_TEST_CTL_STATUS(ah), AR_PHY_TEST_CTL_RX_OBS_SEL, 0);
for (i = 0, j = 0; i < buf_size; i++) {
v1 = REG_READ(ah, AR_PHY_TST_ADC) & 0xffff;
diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
index f315c54bd3ac..19345b8f7bfd 100644
--- a/drivers/net/wireless/ath/ath9k/wmi.c
+++ b/drivers/net/wireless/ath/ath9k/wmi.c
@@ -341,6 +341,7 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
if (!time_left) {
ath_dbg(common, WMI, "Timeout waiting for WMI command: %s\n",
wmi_cmd_to_name(cmd_id));
+ wmi->last_seq_id = 0;
mutex_unlock(&wmi->op_mutex);
return -ETIMEDOUT;
}
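The single added line clears the recorded sequence id on timeout, so a response arriving after the caller has given up can no longer be matched against a later command. A small sketch of that idea under assumed names (my_wmi_* is hypothetical, not the ath9k_htc API):

#include <stdint.h>
#include <stdio.h>

struct my_wmi {
	uint16_t last_seq_id;	/* sequence id of the command in flight */
};

/* Called when waiting for a response times out. */
static void my_wmi_timeout(struct my_wmi *wmi)
{
	/* Without this reset, a late response could still match
	 * last_seq_id and complete the *next* caller's command. */
	wmi->last_seq_id = 0;
}

/* RX path: only accept a response for the command still in flight. */
static int my_wmi_rx(struct my_wmi *wmi, uint16_t seq_id)
{
	if (!wmi->last_seq_id || seq_id != wmi->last_seq_id)
		return -1;		/* stale or unexpected; drop it */
	wmi->last_seq_id = 0;
	return 0;
}

int main(void)
{
	struct my_wmi wmi = { .last_seq_id = 7 };

	my_wmi_timeout(&wmi);
	printf("late response accepted? %s\n",
	       my_wmi_rx(&wmi, 7) == 0 ? "yes" : "no");
	return 0;
}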
diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
index 39abb59d8771..ef9a8e0b75e6 100644
--- a/drivers/net/wireless/ath/ath9k/xmit.c
+++ b/drivers/net/wireless/ath/ath9k/xmit.c
@@ -1216,7 +1216,7 @@ static u8 ath_get_rate_txpower(struct ath_softc *sc, struct ath_buf *bf,
txpower -= 2 * power_offset;
}
- if (OLC_FOR_AR9280_20_LATER && is_cck)
+ if (OLC_FOR_AR9280_20_LATER(ah) && is_cck)
txpower -= 2;
txpower = max(txpower, 0);
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
index b115902eb475..a9690ec4c850 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
@@ -7928,13 +7928,10 @@ exit:
}
static s32
-cfg80211_set_channel(struct wiphy *wiphy, struct net_device *dev,
- struct ieee80211_channel *chan,
- enum nl80211_channel_type channel_type)
+brcmf_set_channel(struct brcmf_cfg80211_info *cfg, struct ieee80211_channel *chan)
{
u16 chspec = 0;
int err = 0;
- struct brcmf_cfg80211_info *cfg = wiphy_to_cfg(wiphy);
struct brcmf_if *ifp = netdev_priv(cfg_to_ndev(cfg));
if (chan->flags & IEEE80211_CHAN_DISABLED)
@@ -7994,7 +7991,7 @@ brcmf_cfg80211_dump_survey(struct wiphy *wiphy, struct net_device *ndev,
/* Setting current channel to the requested channel */
info->filled = 0;
- if (cfg80211_set_channel(wiphy, ndev, info->channel, NL80211_CHAN_HT20))
+ if (brcmf_set_channel(cfg, info->channel))
return 0;
/* Disable mpc */
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h
index e90a30808c22..0e1fa3f0dea2 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h
@@ -383,7 +383,7 @@ struct brcmf_cfg80211_info {
struct brcmf_tlv {
u8 id;
u8 len;
- u8 data[1];
+ u8 data[];
};
static inline struct wiphy *cfg_to_wiphy(struct brcmf_cfg80211_info *cfg)
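Replacing u8 data[1] with u8 data[] converts the old one-element-array idiom into a C99 flexible array member, which gives compilers and fortification checks the real object size. A minimal allocation sketch with hypothetical my_* names; in-kernel code would use kzalloc() with struct_size(), which also guards the size addition against overflow:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct my_tlv {
	uint8_t id;
	uint8_t len;
	uint8_t data[];	/* size known only at runtime */
};

static struct my_tlv *my_tlv_alloc(uint8_t id, const void *payload, uint8_t len)
{
	struct my_tlv *tlv = malloc(sizeof(*tlv) + len);

	if (!tlv)
		return NULL;
	tlv->id = id;
	tlv->len = len;
	memcpy(tlv->data, payload, len);
	return tlv;
}

int main(void)
{
	struct my_tlv *tlv = my_tlv_alloc(0x30, "\x01\x00", 2);

	if (tlv)
		printf("tlv id=0x%02x len=%u\n", tlv->id, tlv->len);
	free(tlv);
	return 0;
}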
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c
index 121893bbaa1d..8073f31be27d 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c
@@ -726,17 +726,17 @@ static u32 brcmf_chip_tcm_rambase(struct brcmf_chip_priv *ci)
case BRCM_CC_43664_CHIP_ID:
case BRCM_CC_43666_CHIP_ID:
return 0x200000;
+ case BRCM_CC_4355_CHIP_ID:
case BRCM_CC_4359_CHIP_ID:
return (ci->pub.chiprev < 9) ? 0x180000 : 0x160000;
case BRCM_CC_4364_CHIP_ID:
case CY_CC_4373_CHIP_ID:
return 0x160000;
case CY_CC_43752_CHIP_ID:
+ case BRCM_CC_4377_CHIP_ID:
return 0x170000;
case BRCM_CC_4378_CHIP_ID:
return 0x352000;
- case CY_CC_89459_CHIP_ID:
- return ((ci->pub.chiprev < 9) ? 0x180000 : 0x160000);
default:
brcmf_err("unknown chip: %s\n", ci->pub.name);
break;
@@ -1426,8 +1426,8 @@ bool brcmf_chip_sr_capable(struct brcmf_chip *pub)
addr = CORE_CC_REG(base, sr_control1);
reg = chip->ops->read32(chip->ctx, addr);
return reg != 0;
+ case BRCM_CC_4355_CHIP_ID:
case CY_CC_4373_CHIP_ID:
- case CY_CC_89459_CHIP_ID:
/* explicitly check SR engine enable bit */
addr = CORE_CC_REG(base, sr_control0);
reg = chip->ops->read32(chip->ctx, addr);
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
index 4a309e5a5707..f235beaddddb 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
@@ -299,6 +299,7 @@ int brcmf_c_preinit_dcmds(struct brcmf_if *ifp)
err);
goto done;
}
+ buf[sizeof(buf) - 1] = '\0';
ptr = (char *)buf;
strsep(&ptr, "\n");
@@ -319,15 +320,17 @@ int brcmf_c_preinit_dcmds(struct brcmf_if *ifp)
if (err) {
brcmf_dbg(TRACE, "retrieving clmver failed, %d\n", err);
} else {
+ buf[sizeof(buf) - 1] = '\0';
clmver = (char *)buf;
- /* store CLM version for adding it to revinfo debugfs file */
- memcpy(ifp->drvr->clmver, clmver, sizeof(ifp->drvr->clmver));
/* Replace all newline/linefeed characters with space
* character
*/
strreplace(clmver, '\n', ' ');
+ /* store CLM version for adding it to revinfo debugfs file */
+ memcpy(ifp->drvr->clmver, clmver, sizeof(ifp->drvr->clmver));
+
brcmf_dbg(INFO, "CLM version = %s\n", clmver);
}
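Both added lines force a terminating NUL before the firmware-provided buffer is handed to string functions, and the clmver hunk additionally stores the version only after newlines are replaced, so the debugfs copy matches what is logged. A small sketch of the termination pattern (hypothetical my_* names):

#include <stdio.h>
#include <string.h>

static void my_log_fw_string(char *buf, size_t buf_size)
{
	char *p;

	/* The firmware may fill the buffer completely without a NUL;
	 * terminate defensively before any str*() call touches it. */
	buf[buf_size - 1] = '\0';

	/* Replace newlines, as strreplace() does in the hunk above,
	 * and only then store/log the result. */
	for (p = buf; *p; p++)
		if (*p == '\n')
			*p = ' ';
	printf("version: %s\n", buf);
}

int main(void)
{
	char buf[32] = "API: 1.2\nBLOB: 3.4";

	my_log_fw_string(buf, sizeof(buf));
	return 0;
}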
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
index 83ea251cfcec..f599d5f896e8 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
@@ -336,6 +336,7 @@ static netdev_tx_t brcmf_netdev_start_xmit(struct sk_buff *skb,
bphy_err(drvr, "%s: failed to expand headroom\n",
brcmf_ifname(ifp));
atomic_inc(&drvr->bus_if->stats.pktcow_failed);
+ dev_kfree_skb(skb);
goto done;
}
}
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
index cec53f934940..45fbcbdc7d9e 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
@@ -347,8 +347,11 @@ brcmf_msgbuf_alloc_pktid(struct device *dev,
count++;
} while (count < pktids->array_size);
- if (count == pktids->array_size)
+ if (count == pktids->array_size) {
+ dma_unmap_single(dev, *physaddr, skb->len - data_offset,
+ pktids->direction);
return -ENOMEM;
+ }
array[*idx].data_offset = data_offset;
array[*idx].physaddr = *physaddr;
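The msgbuf fix restores the usual pairing rule: the function that maps a buffer must unmap it on every failure path, including the newly handled "no free pktid slot" case. A plain-C analogue with hypothetical my_* names, where malloc()/free() stand in for dma_map_single()/dma_unmap_single():

#include <stdlib.h>

static int my_alloc_pktid(int have_free_slot)
{
	void *mapping = malloc(64);	/* stands in for dma_map_single() */

	if (!mapping)
		return -1;

	if (!have_free_slot) {
		free(mapping);		/* the previously missing unmap */
		return -1;		/* -ENOMEM in the driver */
	}

	/* ... record the mapping in the slot and return its index ... */
	free(mapping);			/* owned by the slot in the real driver */
	return 0;
}

int main(void)
{
	my_alloc_pktid(0);	/* error path: mapping is released */
	my_alloc_pktid(1);	/* success path */
	return 0;
}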
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
index e975d10e6009..d4492d02e4ea 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
@@ -1466,8 +1466,8 @@ int brcmf_p2p_notify_action_frame_rx(struct brcmf_if *ifp,
ETH_ALEN);
memcpy(mgmt_frame->sa, e->addr, ETH_ALEN);
mgmt_frame->frame_control = cpu_to_le16(IEEE80211_STYPE_ACTION);
- memcpy(&mgmt_frame->u, frame, mgmt_frame_len);
- mgmt_frame_len += offsetof(struct ieee80211_mgmt, u);
+ memcpy(mgmt_frame->u.body, frame, mgmt_frame_len);
+ mgmt_frame_len += offsetof(struct ieee80211_mgmt, u.body);
freq = ieee80211_channel_to_frequency(ch.control_ch_num,
ch.band == BRCMU_CHAN_BAND_2G ?
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
index b67f6d0810b6..a9b9b2dc62d4 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
@@ -51,18 +51,21 @@ enum brcmf_pcie_state {
BRCMF_FW_DEF(43602, "brcmfmac43602-pcie");
BRCMF_FW_DEF(4350, "brcmfmac4350-pcie");
BRCMF_FW_DEF(4350C, "brcmfmac4350c2-pcie");
+BRCMF_FW_CLM_DEF(4355, "brcmfmac4355-pcie");
+BRCMF_FW_CLM_DEF(4355C1, "brcmfmac4355c1-pcie");
BRCMF_FW_CLM_DEF(4356, "brcmfmac4356-pcie");
BRCMF_FW_CLM_DEF(43570, "brcmfmac43570-pcie");
BRCMF_FW_DEF(4358, "brcmfmac4358-pcie");
BRCMF_FW_DEF(4359, "brcmfmac4359-pcie");
-BRCMF_FW_DEF(4364, "brcmfmac4364-pcie");
+BRCMF_FW_CLM_DEF(4364B2, "brcmfmac4364b2-pcie");
+BRCMF_FW_CLM_DEF(4364B3, "brcmfmac4364b3-pcie");
BRCMF_FW_DEF(4365B, "brcmfmac4365b-pcie");
BRCMF_FW_DEF(4365C, "brcmfmac4365c-pcie");
BRCMF_FW_DEF(4366B, "brcmfmac4366b-pcie");
BRCMF_FW_DEF(4366C, "brcmfmac4366c-pcie");
BRCMF_FW_DEF(4371, "brcmfmac4371-pcie");
+BRCMF_FW_CLM_DEF(4377B3, "brcmfmac4377b3-pcie");
BRCMF_FW_CLM_DEF(4378B1, "brcmfmac4378b1-pcie");
-BRCMF_FW_DEF(4355, "brcmfmac89459-pcie");
/* firmware config files */
MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.txt");
@@ -78,13 +81,16 @@ static const struct brcmf_firmware_mapping brcmf_pcie_fwnames[] = {
BRCMF_FW_ENTRY(BRCM_CC_4350_CHIP_ID, 0x000000FF, 4350C),
BRCMF_FW_ENTRY(BRCM_CC_4350_CHIP_ID, 0xFFFFFF00, 4350),
BRCMF_FW_ENTRY(BRCM_CC_43525_CHIP_ID, 0xFFFFFFF0, 4365C),
+ BRCMF_FW_ENTRY(BRCM_CC_4355_CHIP_ID, 0x000007FF, 4355),
+ BRCMF_FW_ENTRY(BRCM_CC_4355_CHIP_ID, 0xFFFFF800, 4355C1), /* rev ID 12/C2 seen */
BRCMF_FW_ENTRY(BRCM_CC_4356_CHIP_ID, 0xFFFFFFFF, 4356),
BRCMF_FW_ENTRY(BRCM_CC_43567_CHIP_ID, 0xFFFFFFFF, 43570),
BRCMF_FW_ENTRY(BRCM_CC_43569_CHIP_ID, 0xFFFFFFFF, 43570),
BRCMF_FW_ENTRY(BRCM_CC_43570_CHIP_ID, 0xFFFFFFFF, 43570),
BRCMF_FW_ENTRY(BRCM_CC_4358_CHIP_ID, 0xFFFFFFFF, 4358),
BRCMF_FW_ENTRY(BRCM_CC_4359_CHIP_ID, 0xFFFFFFFF, 4359),
- BRCMF_FW_ENTRY(BRCM_CC_4364_CHIP_ID, 0xFFFFFFFF, 4364),
+ BRCMF_FW_ENTRY(BRCM_CC_4364_CHIP_ID, 0x0000000F, 4364B2), /* 3 */
+ BRCMF_FW_ENTRY(BRCM_CC_4364_CHIP_ID, 0xFFFFFFF0, 4364B3), /* 4 */
BRCMF_FW_ENTRY(BRCM_CC_4365_CHIP_ID, 0x0000000F, 4365B),
BRCMF_FW_ENTRY(BRCM_CC_4365_CHIP_ID, 0xFFFFFFF0, 4365C),
BRCMF_FW_ENTRY(BRCM_CC_4366_CHIP_ID, 0x0000000F, 4366B),
@@ -92,8 +98,8 @@ static const struct brcmf_firmware_mapping brcmf_pcie_fwnames[] = {
BRCMF_FW_ENTRY(BRCM_CC_43664_CHIP_ID, 0xFFFFFFF0, 4366C),
BRCMF_FW_ENTRY(BRCM_CC_43666_CHIP_ID, 0xFFFFFFF0, 4366C),
BRCMF_FW_ENTRY(BRCM_CC_4371_CHIP_ID, 0xFFFFFFFF, 4371),
+ BRCMF_FW_ENTRY(BRCM_CC_4377_CHIP_ID, 0xFFFFFFFF, 4377B3), /* revision ID 4 */
BRCMF_FW_ENTRY(BRCM_CC_4378_CHIP_ID, 0xFFFFFFFF, 4378B1), /* revision ID 3 */
- BRCMF_FW_ENTRY(CY_CC_89459_CHIP_ID, 0xFFFFFFFF, 4355),
};
#define BRCMF_PCIE_FW_UP_TIMEOUT 5000 /* msec */
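In the table above, each BRCMF_FW_ENTRY() mask is a bitmap over chip revisions: bit N selects revision N, so 0x0000000F routes 4364 revs 0-3 to the B2 firmware and 0xFFFFFFF0 routes revs 4-31 to B3. A sketch of that selection logic under hypothetical my_* names (the driver's actual lookup lives in its firmware-request code):

#include <stdint.h>
#include <stdio.h>

struct my_fw_entry {
	uint32_t chipid;
	uint32_t revmask;	/* bit N set => revision N uses this entry */
	const char *fw;
};

static const struct my_fw_entry my_table[] = {
	{ 0x4364, 0x0000000F, "brcmfmac4364b2-pcie" },
	{ 0x4364, 0xFFFFFFF0, "brcmfmac4364b3-pcie" },
};

static const char *my_pick_fw(uint32_t chipid, uint32_t chiprev)
{
	size_t i;

	for (i = 0; i < sizeof(my_table) / sizeof(my_table[0]); i++)
		if (my_table[i].chipid == chipid &&
		    (my_table[i].revmask & (1u << chiprev)))
			return my_table[i].fw;
	return NULL;
}

int main(void)
{
	printf("%s\n", my_pick_fw(0x4364, 4));	/* brcmfmac4364b3-pcie */
	return 0;
}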
@@ -1994,6 +2000,17 @@ static int brcmf_pcie_read_otp(struct brcmf_pciedev_info *devinfo)
int ret;
switch (devinfo->ci->chip) {
+ case BRCM_CC_4355_CHIP_ID:
+ coreid = BCMA_CORE_CHIPCOMMON;
+ base = 0x8c0;
+ words = 0xb2;
+ break;
+ case BRCM_CC_4364_CHIP_ID:
+ coreid = BCMA_CORE_CHIPCOMMON;
+ base = 0x8c0;
+ words = 0x1a0;
+ break;
+ case BRCM_CC_4377_CHIP_ID:
case BRCM_CC_4378_CHIP_ID:
coreid = BCMA_CORE_GCI;
base = 0x1120;
@@ -2590,6 +2607,7 @@ static const struct pci_device_id brcmf_pcie_devid_table[] = {
BRCMF_PCIE_DEVICE(BRCM_PCIE_4350_DEVICE_ID, WCC),
BRCMF_PCIE_DEVICE_SUB(0x4355, BRCM_PCIE_VENDOR_ID_BROADCOM, 0x4355, WCC),
BRCMF_PCIE_DEVICE(BRCM_PCIE_4354_RAW_DEVICE_ID, WCC),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_4355_DEVICE_ID, WCC),
BRCMF_PCIE_DEVICE(BRCM_PCIE_4356_DEVICE_ID, WCC),
BRCMF_PCIE_DEVICE(BRCM_PCIE_43567_DEVICE_ID, WCC),
BRCMF_PCIE_DEVICE(BRCM_PCIE_43570_DEVICE_ID, WCC),
@@ -2600,7 +2618,7 @@ static const struct pci_device_id brcmf_pcie_devid_table[] = {
BRCMF_PCIE_DEVICE(BRCM_PCIE_43602_2G_DEVICE_ID, WCC),
BRCMF_PCIE_DEVICE(BRCM_PCIE_43602_5G_DEVICE_ID, WCC),
BRCMF_PCIE_DEVICE(BRCM_PCIE_43602_RAW_DEVICE_ID, WCC),
- BRCMF_PCIE_DEVICE(BRCM_PCIE_4364_DEVICE_ID, BCA),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_4364_DEVICE_ID, WCC),
BRCMF_PCIE_DEVICE(BRCM_PCIE_4365_DEVICE_ID, BCA),
BRCMF_PCIE_DEVICE(BRCM_PCIE_4365_2G_DEVICE_ID, BCA),
BRCMF_PCIE_DEVICE(BRCM_PCIE_4365_5G_DEVICE_ID, BCA),
@@ -2609,9 +2627,10 @@ static const struct pci_device_id brcmf_pcie_devid_table[] = {
BRCMF_PCIE_DEVICE(BRCM_PCIE_4366_2G_DEVICE_ID, BCA),
BRCMF_PCIE_DEVICE(BRCM_PCIE_4366_5G_DEVICE_ID, BCA),
BRCMF_PCIE_DEVICE(BRCM_PCIE_4371_DEVICE_ID, WCC),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_43596_DEVICE_ID, CYW),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_4377_DEVICE_ID, WCC),
BRCMF_PCIE_DEVICE(BRCM_PCIE_4378_DEVICE_ID, WCC),
- BRCMF_PCIE_DEVICE(CY_PCIE_89459_DEVICE_ID, CYW),
- BRCMF_PCIE_DEVICE(CY_PCIE_89459_RAW_DEVICE_ID, CYW),
+
{ /* end: all zeroes */ }
};
diff --git a/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h b/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h
index f4939cf62767..896615f57952 100644
--- a/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h
+++ b/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h
@@ -37,6 +37,7 @@
#define BRCM_CC_4350_CHIP_ID 0x4350
#define BRCM_CC_43525_CHIP_ID 43525
#define BRCM_CC_4354_CHIP_ID 0x4354
+#define BRCM_CC_4355_CHIP_ID 0x4355
#define BRCM_CC_4356_CHIP_ID 0x4356
#define BRCM_CC_43566_CHIP_ID 43566
#define BRCM_CC_43567_CHIP_ID 43567
@@ -51,12 +52,12 @@
#define BRCM_CC_43664_CHIP_ID 43664
#define BRCM_CC_43666_CHIP_ID 43666
#define BRCM_CC_4371_CHIP_ID 0x4371
+#define BRCM_CC_4377_CHIP_ID 0x4377
#define BRCM_CC_4378_CHIP_ID 0x4378
#define CY_CC_4373_CHIP_ID 0x4373
#define CY_CC_43012_CHIP_ID 43012
#define CY_CC_43439_CHIP_ID 43439
#define CY_CC_43752_CHIP_ID 43752
-#define CY_CC_89459_CHIP_ID 0x4355
/* USB Device IDs */
#define BRCM_USB_43143_DEVICE_ID 0xbd1e
@@ -72,6 +73,7 @@
#define BRCM_PCIE_4350_DEVICE_ID 0x43a3
#define BRCM_PCIE_4354_DEVICE_ID 0x43df
#define BRCM_PCIE_4354_RAW_DEVICE_ID 0x4354
+#define BRCM_PCIE_4355_DEVICE_ID 0x43dc
#define BRCM_PCIE_4356_DEVICE_ID 0x43ec
#define BRCM_PCIE_43567_DEVICE_ID 0x43d3
#define BRCM_PCIE_43570_DEVICE_ID 0x43d9
@@ -90,9 +92,9 @@
#define BRCM_PCIE_4366_2G_DEVICE_ID 0x43c4
#define BRCM_PCIE_4366_5G_DEVICE_ID 0x43c5
#define BRCM_PCIE_4371_DEVICE_ID 0x440d
+#define BRCM_PCIE_43596_DEVICE_ID 0x4415
+#define BRCM_PCIE_4377_DEVICE_ID 0x4488
#define BRCM_PCIE_4378_DEVICE_ID 0x4425
-#define CY_PCIE_89459_DEVICE_ID 0x4415
-#define CY_PCIE_89459_RAW_DEVICE_ID 0x4355
/* brcmsmac IDs */
#define BCM4313_D11N2G_ID 0x4727 /* 4313 802.11n 2.4G device */
diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2200.c b/drivers/net/wireless/intel/ipw2x00/ipw2200.c
index ca802af8cddc..d382f2017325 100644
--- a/drivers/net/wireless/intel/ipw2x00/ipw2200.c
+++ b/drivers/net/wireless/intel/ipw2x00/ipw2200.c
@@ -3427,7 +3427,7 @@ static void ipw_rx_queue_reset(struct ipw_priv *priv,
dma_unmap_single(&priv->pci_dev->dev,
rxq->pool[i].dma_addr,
IPW_RX_BUF_SIZE, DMA_FROM_DEVICE);
- dev_kfree_skb(rxq->pool[i].skb);
+ dev_kfree_skb_irq(rxq->pool[i].skb);
rxq->pool[i].skb = NULL;
}
list_add_tail(&rxq->pool[i].list, &rxq->rx_used);
@@ -11383,9 +11383,14 @@ static int ipw_wdev_init(struct net_device *dev)
set_wiphy_dev(wdev->wiphy, &priv->pci_dev->dev);
/* With that information in place, we can now register the wiphy... */
- if (wiphy_register(wdev->wiphy))
- rc = -EIO;
+ rc = wiphy_register(wdev->wiphy);
+ if (rc)
+ goto out;
+
+ return 0;
out:
+ kfree(priv->ieee->a_band.channels);
+ kfree(priv->ieee->bg_band.channels);
return rc;
}
diff --git a/drivers/net/wireless/intel/iwlegacy/3945-mac.c b/drivers/net/wireless/intel/iwlegacy/3945-mac.c
index d7e99d50b287..9eaf5ec133f9 100644
--- a/drivers/net/wireless/intel/iwlegacy/3945-mac.c
+++ b/drivers/net/wireless/intel/iwlegacy/3945-mac.c
@@ -3372,10 +3372,12 @@ static DEVICE_ATTR(dump_errors, 0200, NULL, il3945_dump_error_log);
*
*****************************************************************************/
-static void
+static int
il3945_setup_deferred_work(struct il_priv *il)
{
il->workqueue = create_singlethread_workqueue(DRV_NAME);
+ if (!il->workqueue)
+ return -ENOMEM;
init_waitqueue_head(&il->wait_command_queue);
@@ -3392,6 +3394,8 @@ il3945_setup_deferred_work(struct il_priv *il)
timer_setup(&il->watchdog, il_bg_watchdog, 0);
tasklet_setup(&il->irq_tasklet, il3945_irq_tasklet);
+
+ return 0;
}
static void
@@ -3712,7 +3716,10 @@ il3945_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
}
il_set_rxon_channel(il, &il->bands[NL80211_BAND_2GHZ].channels[5]);
- il3945_setup_deferred_work(il);
+ err = il3945_setup_deferred_work(il);
+ if (err)
+ goto out_remove_sysfs;
+
il3945_setup_handlers(il);
il_power_initialize(il);
@@ -3724,7 +3731,7 @@ il3945_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
err = il3945_setup_mac(il);
if (err)
- goto out_remove_sysfs;
+ goto out_destroy_workqueue;
il_dbgfs_register(il, DRV_NAME);
@@ -3733,9 +3740,10 @@ il3945_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
return 0;
-out_remove_sysfs:
+out_destroy_workqueue:
destroy_workqueue(il->workqueue);
il->workqueue = NULL;
+out_remove_sysfs:
sysfs_remove_group(&pdev->dev.kobj, &il3945_attribute_group);
out_release_irq:
free_irq(il->pci_dev->irq, il);
diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
index 721b4042b4bf..0a4aa3c678c1 100644
--- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
@@ -4020,7 +4020,7 @@ il4965_hdl_alive(struct il_priv *il, struct il_rx_buf *rxb)
if (palive->ver_subtype == INITIALIZE_SUBTYPE) {
D_INFO("Initialization Alive received.\n");
- memcpy(&il->card_alive_init, &pkt->u.alive_frame,
+ memcpy(&il->card_alive_init, &pkt->u.raw,
sizeof(struct il_init_alive_resp));
pwork = &il->init_alive_start;
} else {
@@ -6211,10 +6211,12 @@ out:
mutex_unlock(&il->mutex);
}
-static void
+static int
il4965_setup_deferred_work(struct il_priv *il)
{
il->workqueue = create_singlethread_workqueue(DRV_NAME);
+ if (!il->workqueue)
+ return -ENOMEM;
init_waitqueue_head(&il->wait_command_queue);
@@ -6233,6 +6235,8 @@ il4965_setup_deferred_work(struct il_priv *il)
timer_setup(&il->watchdog, il_bg_watchdog, 0);
tasklet_setup(&il->irq_tasklet, il4965_irq_tasklet);
+
+ return 0;
}
static void
@@ -6618,7 +6622,10 @@ il4965_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto out_disable_msi;
}
- il4965_setup_deferred_work(il);
+ err = il4965_setup_deferred_work(il);
+ if (err)
+ goto out_free_irq;
+
il4965_setup_handlers(il);
/*********************************************
@@ -6656,6 +6663,7 @@ il4965_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
out_destroy_workqueue:
destroy_workqueue(il->workqueue);
il->workqueue = NULL;
+out_free_irq:
free_irq(il->pci_dev->irq, il);
out_disable_msi:
pci_disable_msi(il->pci_dev);
diff --git a/drivers/net/wireless/intel/iwlegacy/common.c b/drivers/net/wireless/intel/iwlegacy/common.c
index 341c17fe2af4..96002121bb8b 100644
--- a/drivers/net/wireless/intel/iwlegacy/common.c
+++ b/drivers/net/wireless/intel/iwlegacy/common.c
@@ -5174,7 +5174,7 @@ il_mac_reset_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
memset(&il->current_ht_config, 0, sizeof(struct il_ht_config));
/* new association get rid of ibss beacon skb */
- dev_kfree_skb(il->beacon_skb);
+ dev_consume_skb_irq(il->beacon_skb);
il->beacon_skb = NULL;
il->timestamp = 0;
@@ -5293,7 +5293,7 @@ il_beacon_update(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
}
spin_lock_irqsave(&il->lock, flags);
- dev_kfree_skb(il->beacon_skb);
+ dev_consume_skb_irq(il->beacon_skb);
il->beacon_skb = skb;
timestamp = ((struct ieee80211_mgmt *)skb->data)->u.beacon.timestamp;
diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
index ec6198f1b38c..3bdd6774716d 100644
--- a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
+++ b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
@@ -10,7 +10,7 @@
#include "fw/api/txq.h"
/* Highest firmware API version supported */
-#define IWL_22000_UCODE_API_MAX 72
+#define IWL_22000_UCODE_API_MAX 74
/* Lowest firmware API version supported */
#define IWL_22000_UCODE_API_MIN 39
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h b/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h
index 0b052c2e563a..28c87a480246 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h
@@ -153,6 +153,7 @@ enum iwl_legacy_cmds {
/**
* @TXPATH_FLUSH: &struct iwl_tx_path_flush_cmd
+ * response in &struct iwl_tx_path_flush_cmd_rsp
*/
TXPATH_FLUSH = 0x1e,
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/datapath.h b/drivers/net/wireless/intel/iwlwifi/fw/api/datapath.h
index 9263413ee06f..8b38a0073077 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/datapath.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/datapath.h
@@ -83,7 +83,7 @@ enum iwl_data_path_subcmd_ids {
MONITOR_NOTIF = 0xF4,
/**
- * @RX_NO_DATA_NOTIF: &struct iwl_rx_no_data
+ * @RX_NO_DATA_NOTIF: &struct iwl_rx_no_data or &struct iwl_rx_no_data_ver_3
*/
RX_NO_DATA_NOTIF = 0xF5,
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h b/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h
index 74a01888715b..1c4e84932058 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h
@@ -273,7 +273,7 @@ enum iwl_rx_mpdu_mac_info {
};
/* TSF overload low dword */
-enum iwl_rx_phy_data0 {
+enum iwl_rx_phy_he_data0 {
/* info type: HE any */
IWL_RX_PHY_DATA0_HE_BEAM_CHNG = 0x00000001,
IWL_RX_PHY_DATA0_HE_UPLINK = 0x00000002,
@@ -289,6 +289,25 @@ enum iwl_rx_phy_data0 {
IWL_RX_PHY_DATA0_HE_DELIM_EOF = 0x80000000,
};
+/* TSF overload low dword */
+enum iwl_rx_phy_eht_data0 {
+ /* info type: EHT any */
+ /* 1 bit reserved */
+ IWL_RX_PHY_DATA0_EHT_UPLINK = BIT(1),
+ IWL_RX_PHY_DATA0_EHT_BSS_COLOR_MASK = 0x000000fc,
+ IWL_RX_PHY_DATA0_EHT_SPATIAL_REUSE_MASK = 0x00000f00,
+ IWL_RX_PHY_DATA0_EHT_PS160 = BIT(12),
+ IWL_RX_PHY_DATA0_EHT_TXOP_DUR_MASK = 0x000fe000,
+ IWL_RX_PHY_DATA0_EHT_LDPC_EXT_SYM = BIT(20),
+ IWL_RX_PHY_DATA0_EHT_PRE_FEC_PAD_MASK = 0x00600000,
+ IWL_RX_PHY_DATA0_EHT_PE_DISAMBIG = BIT(23),
+ IWL_RX_PHY_DATA0_EHT_BW320_SLOT = BIT(24),
+ IWL_RX_PHY_DATA0_EHT_SIGA_CRC_OK = BIT(25),
+ IWL_RX_PHY_DATA0_EHT_PHY_VER = 0x1c000000,
+ /* 2 bits reserved */
+ IWL_RX_PHY_DATA0_EHT_DELIM_EOF = BIT(31),
+};
+
enum iwl_rx_phy_info_type {
IWL_RX_PHY_INFO_TYPE_NONE = 0,
IWL_RX_PHY_INFO_TYPE_CCK = 1,
@@ -301,19 +320,26 @@ enum iwl_rx_phy_info_type {
IWL_RX_PHY_INFO_TYPE_HE_TB = 8,
IWL_RX_PHY_INFO_TYPE_HE_MU_EXT = 9,
IWL_RX_PHY_INFO_TYPE_HE_TB_EXT = 10,
+ IWL_RX_PHY_INFO_TYPE_EHT_MU = 11,
+ IWL_RX_PHY_INFO_TYPE_EHT_TB = 12,
+ IWL_RX_PHY_INFO_TYPE_EHT_MU_EXT = 13,
+ IWL_RX_PHY_INFO_TYPE_EHT_TB_EXT = 14,
};
/* TSF overload high dword */
-enum iwl_rx_phy_data1 {
+enum iwl_rx_phy_common_data1 {
/*
* check this first - if TSF overload is set,
* see &enum iwl_rx_phy_info_type
*/
IWL_RX_PHY_DATA1_INFO_TYPE_MASK = 0xf0000000,
- /* info type: HT/VHT/HE any */
+ /* info type: HT/VHT/HE/EHT any */
IWL_RX_PHY_DATA1_LSIG_LEN_MASK = 0x0fff0000,
+};
+/* TSF overload high dword, for HE rates */
+enum iwl_rx_phy_he_data1 {
/* info type: HE MU/MU-EXT */
IWL_RX_PHY_DATA1_HE_MU_SIGB_COMPRESSION = 0x00000001,
IWL_RX_PHY_DATA1_HE_MU_SIBG_SYM_OR_USER_NUM_MASK = 0x0000001e,
@@ -329,8 +355,23 @@ enum iwl_rx_phy_data1 {
IWL_RX_PHY_DATA1_HE_TB_LOW_SS_MASK = 0x0000000e,
};
+/* TSF overload high dword, for EHT-MU/TB rates */
+enum iwl_rx_phy_eht_data1 {
+ /* info type: EHT-MU */
+ IWL_RX_PHY_DATA1_EHT_MU_NUM_SIG_SYM_USIGA2 = 0x0000001f,
+ /* info type: EHT-TB */
+ IWL_RX_PHY_DATA1_EHT_TB_PILOT_TYPE = BIT(0),
+ IWL_RX_PHY_DATA1_EHT_TB_LOW_SS = 0x0000001e,
+
+ /* info type: EHT any */
+ /* number of EHT-LTF symbols: 0 - 1 EHT-LTF, 1 - 2 EHT-LTFs,
+ * 2 - 4 EHT-LTFs, 3 - 6 EHT-LTFs, 4 - 8 EHT-LTFs
+ */
+ IWL_RX_PHY_DATA1_EHT_SIG_LTF_NUM = 0x000000e0,
+ IWL_RX_PHY_DATA1_EHT_RU_ALLOC = 0x0000ff00,
+};
+
/* goes into Metadata DW 7 */
-enum iwl_rx_phy_data2 {
+enum iwl_rx_phy_he_data2 {
/* info type: HE MU-EXT */
/* the a1/a2/... is what the PHY/firmware calls the values */
IWL_RX_PHY_DATA2_HE_MU_EXT_CH1_RU0 = 0x000000ff, /* a1 */
@@ -346,7 +387,7 @@ enum iwl_rx_phy_data2 {
};
/* goes into Metadata DW 8 */
-enum iwl_rx_phy_data3 {
+enum iwl_rx_phy_he_data3 {
/* info type: HE MU-EXT */
IWL_RX_PHY_DATA3_HE_MU_EXT_CH1_RU1 = 0x000000ff, /* c1 */
IWL_RX_PHY_DATA3_HE_MU_EXT_CH1_RU3 = 0x0000ff00, /* c2 */
@@ -355,7 +396,7 @@ enum iwl_rx_phy_data3 {
};
/* goes into Metadata DW 4 high 16 bits */
-enum iwl_rx_phy_data4 {
+enum iwl_rx_phy_he_data4 {
/* info type: HE MU-EXT */
IWL_RX_PHY_DATA4_HE_MU_EXT_CH1_CTR_RU = 0x0001,
IWL_RX_PHY_DATA4_HE_MU_EXT_CH2_CTR_RU = 0x0002,
@@ -366,6 +407,51 @@ enum iwl_rx_phy_data4 {
IWL_RX_PHY_DATA4_HE_MU_EXT_PREAMBLE_PUNC_TYPE_MASK = 0x0600,
};
+/* goes into Metadata DW 7 */
+enum iwl_rx_phy_eht_data2 {
+ /* info type: EHT-MU-EXT */
+ /* OFDM_RX_VECTOR_COMMON_RU_ALLOC_0_OUT */
+ IWL_RX_PHY_DATA2_EHT_MU_EXT_RU_ALLOC_A1 = 0x000001ff,
+ IWL_RX_PHY_DATA2_EHT_MU_EXT_RU_ALLOC_A2 = 0x0003fe00,
+ IWL_RX_PHY_DATA2_EHT_MU_EXT_RU_ALLOC_A3 = 0x01fc0000,
+
+ /* info type: EHT-TB-EXT */
+ IWL_RX_PHY_DATA2_EHT_TB_EXT_TRIG_SIGA1 = 0xffffffff,
+};
+
+/* goes into Metadata DW 8 */
+enum iwl_rx_phy_eht_data3 {
+ /* info type: EHT-MU-EXT */
+ /* OFDM_RX_VECTOR_COMMON_RU_ALLOC_1_OUT */
+ IWL_RX_PHY_DATA3_EHT_MU_EXT_RU_ALLOC_B1 = 0x000001ff,
+ IWL_RX_PHY_DATA3_EHT_MU_EXT_RU_ALLOC_B2 = 0x0003fe00,
+ IWL_RX_PHY_DATA3_EHT_MU_EXT_RU_ALLOC_B3 = 0x01fc0000,
+};
+
+/* goes into Metadata DW 4 */
+enum iwl_rx_phy_eht_data4 {
+ /* info type: EHT-MU-EXT */
+ /* OFDM_RX_VECTOR_COMMON_RU_ALLOC_2_OUT */
+ IWL_RX_PHY_DATA4_EHT_MU_EXT_RU_ALLOC_C1 = 0x000001ff,
+ IWL_RX_PHY_DATA4_EHT_MU_EXT_RU_ALLOC_C2 = 0x0003fe00,
+ IWL_RX_PHY_DATA4_EHT_MU_EXT_RU_ALLOC_C3 = 0x01fc0000,
+ IWL_RX_PHY_DATA4_EHT_MU_EXT_SIGB_MCS = 0x18000000,
+};
+
+/* goes into Metadata DW 16 */
+enum iwl_rx_phy_data5 {
+ /* info type: EHT any */
+ IWL_RX_PHY_DATA5_EHT_TYPE_AND_COMP = 0x00000003,
+ /* info type: EHT-TB */
+ IWL_RX_PHY_DATA5_EHT_TB_SPATIAL_REUSE1 = 0x0000003c,
+ IWL_RX_PHY_DATA5_EHT_TB_SPATIAL_REUSE2 = 0x000003c0,
+ /* info type: EHT-MU */
+ IWL_RX_PHY_DATA5_EHT_MU_PUNC_CH_CODE = 0x0000007c,
+ IWL_RX_PHY_DATA5_EHT_MU_STA_ID_USR = 0x0003ff80,
+ IWL_RX_PHY_DATA5_EHT_MU_NUM_USR_NON_OFDMA = 0x001c0000,
+ IWL_RX_PHY_DATA5_EHT_MU_SPATIAL_CONF_USR_FIELD = 0x0fe00000,
+};
+
/**
* struct iwl_rx_mpdu_desc_v1 - RX MPDU descriptor
*/
@@ -440,7 +526,9 @@ struct iwl_rx_mpdu_desc_v1 {
/**
* @phy_data1: valid only if
* %IWL_RX_MPDU_PHY_TSF_OVERLOAD is set,
- * see &enum iwl_rx_phy_data1.
+ * see &enum iwl_rx_phy_common_data1 or
+ * &enum iwl_rx_phy_he_data1 or
+ * &enum iwl_rx_phy_eht_data1.
*/
__le32 phy_data1;
};
@@ -540,11 +628,18 @@ struct iwl_rx_mpdu_desc_v3 {
__le32 phy_data1;
};
};
- /* DW16 & DW17 */
+ /* DW16 */
+ /**
+ * @phy_data5: valid only if
+ * %IWL_RX_MPDU_PHY_TSF_OVERLOAD is set,
+ * see &enum iwl_rx_phy_data5.
+ */
+ __le32 phy_data5;
+ /* DW17 */
/**
* @reserved: reserved
*/
- __le32 reserved[2];
+ __le32 reserved[1];
} __packed; /* RX_MPDU_RES_START_API_S_VER_3,
RX_MPDU_RES_START_API_S_VER_5 */
@@ -639,12 +734,14 @@ struct iwl_rx_mpdu_desc {
#define RX_NO_DATA_INFO_ERR_UNSUPPORTED_RATE 2
#define RX_NO_DATA_INFO_ERR_NO_DELIM 3
#define RX_NO_DATA_INFO_ERR_BAD_MAC_HDR 4
+#define RX_NO_DATA_INFO_LOW_ENERGY 5
#define RX_NO_DATA_FRAME_TIME_POS 0
#define RX_NO_DATA_FRAME_TIME_MSK (0xfffff << RX_NO_DATA_FRAME_TIME_POS)
#define RX_NO_DATA_RX_VEC0_HE_NSTS_MSK 0x03800000
#define RX_NO_DATA_RX_VEC0_VHT_NSTS_MSK 0x38000000
+#define RX_NO_DATA_RX_VEC2_EHT_NSTS_MSK 0x00f00000
/**
* struct iwl_rx_no_data - RX no data descriptor
@@ -654,7 +751,8 @@ struct iwl_rx_mpdu_desc {
* @on_air_rise_time: GP2 during on air rise
* @fr_time: frame time
* @rate: rate/mcs of frame
- * @phy_info: &enum iwl_rx_phy_data0 and &enum iwl_rx_phy_info_type
+ * @phy_info: &enum iwl_rx_phy_he_data0 or &enum iwl_rx_phy_eht_data0
+ * based on &enum iwl_rx_phy_info_type
* @rx_vec: DW-12:9 raw RX vectors from DSP according to modulation type.
* for VHT: OFDM_RX_VECTOR_SIGA1_OUT, OFDM_RX_VECTOR_SIGA2_OUT
* for HE: OFDM_RX_VECTOR_HE_SIGA1_OUT, OFDM_RX_VECTOR_HE_SIGA2_OUT
@@ -670,6 +768,33 @@ struct iwl_rx_no_data {
} __packed; /* RX_NO_DATA_NTFY_API_S_VER_1,
RX_NO_DATA_NTFY_API_S_VER_2 */
+/**
+ * struct iwl_rx_no_data_ver_3 - RX no data descriptor
+ * @info: 7:0 frame type, 15:8 RX error type
+ * @rssi: 7:0 energy chain-A,
+ * 15:8 chain-B, measured at FINA time (FINA_ENERGY), 16:23 channel
+ * @on_air_rise_time: GP2 during on air rise
+ * @fr_time: frame time
+ * @rate: rate/mcs of frame
+ * @phy_info: &enum iwl_rx_phy_eht_data0 and &enum iwl_rx_phy_info_type
+ * @rx_vec: DW-12:9 raw RX vectors from DSP according to modulation type.
+ * for VHT: OFDM_RX_VECTOR_SIGA1_OUT, OFDM_RX_VECTOR_SIGA2_OUT
+ * for HE: OFDM_RX_VECTOR_HE_SIGA1_OUT, OFDM_RX_VECTOR_HE_SIGA2_OUT
+ * for EHT: OFDM_RX_VECTOR_USIG_A1_OUT, OFDM_RX_VECTOR_USIG_A2_OUT,
+ * OFDM_RX_VECTOR_EHT_OUT, OFDM_RX_VECTOR_EHT_USER_FIELD_OUT
+ */
+struct iwl_rx_no_data_ver_3 {
+ __le32 info;
+ __le32 rssi;
+ __le32 on_air_rise_time;
+ __le32 fr_time;
+ __le32 rate;
+ __le32 phy_info[2];
+ __le32 rx_vec[4];
+} __packed; /* RX_NO_DATA_NTFY_API_S_VER_1,
+ RX_NO_DATA_NTFY_API_S_VER_2
+ RX_NO_DATA_NTFY_API_S_VER_3 */
+
struct iwl_frame_release {
u8 baid;
u8 reserved;
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
index 6d408cd0f517..0b6f694cf30d 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
+++ b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
- * Copyright(c) 2021 Intel Corporation
+ * Copyright(c) 2021-2022 Intel Corporation
*/
#include "iwl-drv.h"
@@ -246,6 +246,63 @@ void *iwl_uefi_get_reduced_power(struct iwl_trans *trans, size_t *len)
return data;
}
+static int iwl_uefi_step_parse(struct uefi_cnv_common_step_data *common_step_data,
+ struct iwl_trans *trans)
+{
+ if (common_step_data->revision != 1)
+ return -EINVAL;
+
+ trans->mbx_addr_0_step = (u32)common_step_data->revision |
+ (u32)common_step_data->cnvi_eq_channel << 8 |
+ (u32)common_step_data->cnvr_eq_channel << 16 |
+ (u32)common_step_data->radio1 << 24;
+ trans->mbx_addr_1_step = (u32)common_step_data->radio2;
+ return 0;
+}
+
+void iwl_uefi_get_step_table(struct iwl_trans *trans)
+{
+ struct uefi_cnv_common_step_data *data;
+ unsigned long package_size;
+ efi_status_t status;
+ int ret;
+
+ if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
+ return;
+
+ if (!efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE))
+ return;
+
+ /* TODO: we hardcode a maximum length here because querying the
+ * variable size from UEFI is not working. To implement this
+ * properly, we would have to call efivar_entry_size().
+ */
+ package_size = IWL_HARDCODED_STEP_SIZE;
+
+ data = kmalloc(package_size, GFP_KERNEL);
+ if (!data)
+ return;
+
+ status = efi.get_variable(IWL_UEFI_STEP_NAME, &IWL_EFI_VAR_GUID,
+ NULL, &package_size, data);
+ if (status != EFI_SUCCESS) {
+ IWL_DEBUG_FW(trans,
+ "STEP UEFI variable not found 0x%lx\n", status);
+ goto out_free;
+ }
+
+ IWL_DEBUG_FW(trans, "Read STEP from UEFI with size %lu\n",
+ package_size);
+
+ ret = iwl_uefi_step_parse(data, trans);
+ if (ret < 0)
+ IWL_DEBUG_FW(trans, "Cannot read STEP tables. rev is invalid\n");
+
+out_free:
+ kfree(data);
+}
+IWL_EXPORT_SYMBOL(iwl_uefi_get_step_table);
+
#ifdef CONFIG_ACPI
static int iwl_uefi_sgom_parse(struct uefi_cnv_wlan_sgom_data *sgom_data,
struct iwl_fw_runtime *fwrt)
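iwl_uefi_step_parse() above folds four u8 fields from the UEFI variable into a single 32-bit scratch word, matching the bit layout later documented for iwl_prph_scratch_step_cfg. A standalone sketch of the same packing (hypothetical my_* names):

#include <stdint.h>
#include <stdio.h>

/* Mirrors the shifts used for mbx_addr_0_step: revision in bits 0-7,
 * cnvi channel in 8-15, cnvr channel in 16-23, radio1 in 24-31. */
static uint32_t my_pack_step(uint8_t revision, uint8_t cnvi_eq_channel,
			     uint8_t cnvr_eq_channel, uint8_t radio1)
{
	return (uint32_t)revision |
	       (uint32_t)cnvi_eq_channel << 8 |
	       (uint32_t)cnvr_eq_channel << 16 |
	       (uint32_t)radio1 << 24;
}

int main(void)
{
	/* revision 1, illustrative channel values */
	printf("mbx_addr_0 = 0x%08x\n", my_pack_step(1, 0x22, 0x33, 0x44));
	return 0;
}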
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/uefi.h b/drivers/net/wireless/intel/iwlwifi/fw/uefi.h
index 09d2a971b3a0..17089bc74cf9 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/uefi.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/uefi.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright(c) 2021 Intel Corporation
+ * Copyright(c) 2021-2022 Intel Corporation
*/
#ifndef __iwl_fw_uefi__
#define __iwl_fw_uefi__
@@ -8,6 +8,7 @@
#define IWL_UEFI_OEM_PNVM_NAME L"UefiCnvWlanOemSignedPnvm"
#define IWL_UEFI_REDUCED_POWER_NAME L"UefiCnvWlanReducedPower"
#define IWL_UEFI_SGOM_NAME L"UefiCnvWlanSarGeoOffsetMapping"
+#define IWL_UEFI_STEP_NAME L"UefiCnvCommonSTEP"
/*
* TODO: we have these hardcoded values that the caller must pass,
@@ -18,6 +19,7 @@
#define IWL_HARDCODED_PNVM_SIZE 4096
#define IWL_HARDCODED_REDUCE_POWER_SIZE 32768
#define IWL_HARDCODED_SGOM_SIZE 339
+#define IWL_HARDCODED_STEP_SIZE 6
struct pnvm_sku_package {
u8 rev;
@@ -32,6 +34,15 @@ struct uefi_cnv_wlan_sgom_data {
u8 offset_map[IWL_HARDCODED_SGOM_SIZE - 1];
} __packed;
+struct uefi_cnv_common_step_data {
+ u8 revision;
+ u8 step_mode;
+ u8 cnvi_eq_channel;
+ u8 cnvr_eq_channel;
+ u8 radio1;
+ u8 radio2;
+} __packed;
+
/*
* This is known to be broken on v4.19 and to work on v5.4. Until we
* figure out why this is the case and how to make it work, simply
@@ -40,6 +51,7 @@ struct uefi_cnv_wlan_sgom_data {
#ifdef CONFIG_EFI
void *iwl_uefi_get_pnvm(struct iwl_trans *trans, size_t *len);
void *iwl_uefi_get_reduced_power(struct iwl_trans *trans, size_t *len);
+void iwl_uefi_get_step_table(struct iwl_trans *trans);
#else /* CONFIG_EFI */
static inline
void *iwl_uefi_get_pnvm(struct iwl_trans *trans, size_t *len)
@@ -52,6 +64,11 @@ void *iwl_uefi_get_reduced_power(struct iwl_trans *trans, size_t *len)
{
return ERR_PTR(-EOPNOTSUPP);
}
+
+static inline
+void iwl_uefi_get_step_table(struct iwl_trans *trans)
+{
+}
#endif /* CONFIG_EFI */
#if defined(CONFIG_EFI) && defined(CONFIG_ACPI)
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-context-info-gen3.h b/drivers/net/wireless/intel/iwlwifi/iwl-context-info-gen3.h
index b84884034c74..3f7278014009 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-context-info-gen3.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-context-info-gen3.h
@@ -141,12 +141,27 @@ struct iwl_prph_scratch_uefi_cfg {
} __packed; /* PERIPH_SCRATCH_UEFI_CFG_S */
/*
+ * struct iwl_prph_scratch_step_cfg - prph scratch step configuration
+ * @mbx_addr_0: [0:7] revision,
+ * [8:15] cnvi_to_cnvr length,
+ * [16:23] cnvr_to_cnvi channel length,
+ * [24:31] radio1 reserved
+ * @mbx_addr_1: [0:7] radio2 reserved
+ */
+
+struct iwl_prph_scratch_step_cfg {
+ __le32 mbx_addr_0;
+ __le32 mbx_addr_1;
+} __packed;
+
+/*
* struct iwl_prph_scratch_ctrl_cfg - prph scratch ctrl and config
* @version: version information of context info and HW
* @control: control flags of FH configurations
* @pnvm_cfg: ror configuration
* @hwm_cfg: hwm configuration
* @rbd_cfg: default RX queue configuration
+ * @step_cfg: step configuration
*/
struct iwl_prph_scratch_ctrl_cfg {
struct iwl_prph_scratch_version version;
@@ -155,6 +170,7 @@ struct iwl_prph_scratch_ctrl_cfg {
struct iwl_prph_scratch_hwm_cfg hwm_cfg;
struct iwl_prph_scratch_rbd_cfg rbd_cfg;
struct iwl_prph_scratch_uefi_cfg reduce_power_cfg;
+ struct iwl_prph_scratch_step_cfg step_cfg;
} __packed; /* PERIPH_SCRATCH_CTRL_CFG_S */
/*
@@ -165,7 +181,7 @@ struct iwl_prph_scratch_ctrl_cfg {
*/
struct iwl_prph_scratch {
struct iwl_prph_scratch_ctrl_cfg ctrl_cfg;
- __le32 reserved[12];
+ __le32 reserved[10];
struct iwl_context_info_dram dram;
} __packed; /* PERIPH_SCRATCH_S */
@@ -265,5 +281,6 @@ int iwl_trans_pcie_ctx_info_gen3_set_pnvm(struct iwl_trans *trans,
const void *data, u32 len);
int iwl_trans_pcie_ctx_info_gen3_set_reduce_power(struct iwl_trans *trans,
const void *data, u32 len);
-
+int iwl_trans_pcie_ctx_info_gen3_set_step(struct iwl_trans *trans,
+ u32 mbx_addr_0_step, u32 mbx_addr_1_step);
#endif /* __iwl_context_info_file_gen3_h__ */
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
index ab7065c93826..4c977ba9cd85 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
@@ -163,7 +163,6 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw,
static int iwl_request_firmware(struct iwl_drv *drv, bool first)
{
const struct iwl_cfg *cfg = drv->trans->cfg;
- char tag[8];
if (drv->trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_9000 &&
(drv->trans->hw_rev_step != SILICON_B_STEP &&
@@ -174,13 +173,10 @@ static int iwl_request_firmware(struct iwl_drv *drv, bool first)
return -EINVAL;
}
- if (first) {
+ if (first)
drv->fw_index = cfg->ucode_api_max;
- sprintf(tag, "%d", drv->fw_index);
- } else {
+ else
drv->fw_index--;
- sprintf(tag, "%d", drv->fw_index);
- }
if (drv->fw_index < cfg->ucode_api_min) {
IWL_ERR(drv, "no suitable firmware found!\n");
@@ -200,8 +196,8 @@ static int iwl_request_firmware(struct iwl_drv *drv, bool first)
return -ENOENT;
}
- snprintf(drv->firmware_name, sizeof(drv->firmware_name), "%s%s.ucode",
- cfg->fw_name_pre, tag);
+ snprintf(drv->firmware_name, sizeof(drv->firmware_name), "%s%d.ucode",
+ cfg->fw_name_pre, drv->fw_index);
IWL_DEBUG_FW_INFO(drv, "attempting to load firmware '%s'\n",
drv->firmware_name);
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
index 479a518c89a1..9aced3e44bc2 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
@@ -1001,6 +1001,8 @@ struct iwl_trans_txqs {
* This mode is set dynamically, depending on the WoWLAN values
* configured from the userspace at runtime.
* @iwl_trans_txqs: transport tx queues data.
+ * @mbx_addr_0_step: step address data 0
+ * @mbx_addr_1_step: step address data 1
*/
struct iwl_trans {
bool csme_own;
@@ -1057,6 +1059,8 @@ struct iwl_trans {
const char *name;
struct iwl_trans_txqs txqs;
+ u32 mbx_addr_0_step;
+ u32 mbx_addr_1_step;
/* pointer to trans specific struct */
/*Ensure that this pointer will always be aligned to sizeof pointer */
diff --git a/drivers/net/wireless/intel/iwlwifi/mei/main.c b/drivers/net/wireless/intel/iwlwifi/mei/main.c
index f9d11935ed97..67dfb77fedf7 100644
--- a/drivers/net/wireless/intel/iwlwifi/mei/main.c
+++ b/drivers/net/wireless/intel/iwlwifi/mei/main.c
@@ -788,7 +788,7 @@ static void iwl_mei_handle_amt_state(struct mei_cl_device *cldev,
if (mei->amt_enabled)
iwl_mei_set_init_conf(mei);
else if (iwl_mei_cache.ops)
- iwl_mei_cache.ops->rfkill(iwl_mei_cache.priv, false, false);
+ iwl_mei_cache.ops->rfkill(iwl_mei_cache.priv, false);
schedule_work(&mei->netdev_work);
@@ -829,7 +829,7 @@ static void iwl_mei_handle_csme_taking_ownership(struct mei_cl_device *cldev,
*/
mei->csme_taking_ownership = true;
- iwl_mei_cache.ops->rfkill(iwl_mei_cache.priv, true, true);
+ iwl_mei_cache.ops->rfkill(iwl_mei_cache.priv, true);
} else {
iwl_mei_send_sap_msg(cldev,
SAP_MSG_NOTIF_CSME_OWNERSHIP_CONFIRMED);
@@ -1774,7 +1774,7 @@ int iwl_mei_register(void *priv, const struct iwl_mei_ops *ops)
if (mei->amt_enabled)
iwl_mei_send_sap_msg(mei->cldev,
SAP_MSG_NOTIF_WIFIDR_UP);
- ops->rfkill(priv, mei->link_prot_state, false);
+ ops->rfkill(priv, mei->link_prot_state);
}
}
ret = 0;
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
index 78d8b37eb71a..3779ac040ba0 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
@@ -438,13 +438,6 @@ static ssize_t iwl_dbgfs_bf_params_read(struct file *file,
return simple_read_from_buffer(user_buf, count, ppos, buf, pos);
}
-static inline char *iwl_dbgfs_is_match(char *name, char *buf)
-{
- int len = strlen(name);
-
- return !strncmp(name, buf, len) ? buf + len : NULL;
-}
-
static ssize_t iwl_dbgfs_os_device_timediff_read(struct file *file,
char __user *user_buf,
size_t count, loff_t *ppos)
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
index 1ce9450e5add..85b99316d029 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
@@ -324,12 +324,12 @@ static ssize_t iwl_dbgfs_sar_geo_profile_read(struct file *file,
pos += scnprintf(buf + pos, bufsz - pos,
"Use geographic profile %d\n", tbl_idx);
pos += scnprintf(buf + pos, bufsz - pos,
- "2.4GHz:\n\tChain A offset: %hhu dBm\n\tChain B offset: %hhu dBm\n\tmax tx power: %hhu dBm\n",
+ "2.4GHz:\n\tChain A offset: %u dBm\n\tChain B offset: %u dBm\n\tmax tx power: %u dBm\n",
mvm->fwrt.geo_profiles[tbl_idx - 1].bands[0].chains[0],
mvm->fwrt.geo_profiles[tbl_idx - 1].bands[0].chains[1],
mvm->fwrt.geo_profiles[tbl_idx - 1].bands[0].max);
pos += scnprintf(buf + pos, bufsz - pos,
- "5.2GHz:\n\tChain A offset: %hhu dBm\n\tChain B offset: %hhu dBm\n\tmax tx power: %hhu dBm\n",
+ "5.2GHz:\n\tChain A offset: %u dBm\n\tChain B offset: %u dBm\n\tmax tx power: %u dBm\n",
mvm->fwrt.geo_profiles[tbl_idx - 1].bands[1].chains[0],
mvm->fwrt.geo_profiles[tbl_idx - 1].bands[1].chains[1],
mvm->fwrt.geo_profiles[tbl_idx - 1].bands[1].max);
@@ -1069,7 +1069,7 @@ iwl_dbgfs_scan_ant_rxchain_read(struct file *file,
pos += scnprintf(buf + pos, bufsz - pos, "A");
if (mvm->scan_rx_ant & ANT_B)
pos += scnprintf(buf + pos, bufsz - pos, "B");
- pos += scnprintf(buf + pos, bufsz - pos, " (%hhx)\n", mvm->scan_rx_ant);
+ pos += scnprintf(buf + pos, bufsz - pos, " (%x)\n", mvm->scan_rx_ant);
return simple_read_from_buffer(user_buf, count, ppos, buf, pos);
}
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
index 6eee3d0b2157..05f3136b1c43 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
@@ -1206,7 +1206,7 @@ void iwl_mvm_ftm_range_resp(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb)
}
IWL_DEBUG_INFO(mvm, "Range response received\n");
- IWL_DEBUG_INFO(mvm, "request id: %lld, num of entries: %hhu\n",
+ IWL_DEBUG_INFO(mvm, "request id: %lld, num of entries: %u\n",
mvm->ftm_initiator.req->cookie, num_of_aps);
for (i = 0; i < num_of_aps && i < IWL_MVM_TOF_MAX_APS; i++) {
@@ -1298,7 +1298,7 @@ void iwl_mvm_ftm_range_resp(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb)
if (fw_has_api(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_API_FTM_RTT_ACCURACY))
- IWL_DEBUG_INFO(mvm, "RTT confidence: %hhu\n",
+ IWL_DEBUG_INFO(mvm, "RTT confidence: %u\n",
fw_ap->rttConfidence);
iwl_mvm_debug_range_resp(mvm, i, &result);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
index 5273ade71117..565522466eba 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
@@ -5116,6 +5116,9 @@ static void iwl_mvm_set_sta_rate(u32 rate_n_flags, struct rate_info *rinfo)
case RATE_MCS_CHAN_WIDTH_160:
rinfo->bw = RATE_INFO_BW_160;
break;
+ case RATE_MCS_CHAN_WIDTH_320:
+ rinfo->bw = RATE_INFO_BW_320;
+ break;
}
if (format == RATE_MCS_CCK_MSK ||
@@ -5176,6 +5179,10 @@ static void iwl_mvm_set_sta_rate(u32 rate_n_flags, struct rate_info *rinfo)
rinfo->flags |= RATE_INFO_FLAGS_SHORT_GI;
switch (format) {
+ case RATE_MCS_EHT_MSK:
+ /* TODO: GI/LTF/RU. How does the firmware encode them? */
+ rinfo->flags |= RATE_INFO_FLAGS_EHT_MCS;
+ break;
case RATE_MCS_HE_MSK:
gi_ltf = u32_get_bits(rate_n_flags, RATE_MCS_HE_GI_LTF_MSK);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
index ebe6d9c4ccaf..f4e9446d9dc2 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
@@ -1128,6 +1128,7 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
iwl_mvm_get_acpi_tables(mvm);
iwl_uefi_get_sgom_table(trans, &mvm->fwrt);
+ iwl_uefi_get_step_table(trans);
mvm->init_status = 0;
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
index 97b67270f384..549dbe0be223 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
@@ -10,37 +10,11 @@
#include "mvm.h"
#include "fw-api.h"
-static void *iwl_mvm_skb_get_hdr(struct sk_buff *skb)
-{
- struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb);
- u8 *data = skb->data;
-
- /* Alignment concerns */
- BUILD_BUG_ON(sizeof(struct ieee80211_radiotap_he) % 4);
- BUILD_BUG_ON(sizeof(struct ieee80211_radiotap_he_mu) % 4);
- BUILD_BUG_ON(sizeof(struct ieee80211_radiotap_lsig) % 4);
- BUILD_BUG_ON(sizeof(struct ieee80211_vendor_radiotap) % 4);
-
- if (rx_status->flag & RX_FLAG_RADIOTAP_HE)
- data += sizeof(struct ieee80211_radiotap_he);
- if (rx_status->flag & RX_FLAG_RADIOTAP_HE_MU)
- data += sizeof(struct ieee80211_radiotap_he_mu);
- if (rx_status->flag & RX_FLAG_RADIOTAP_LSIG)
- data += sizeof(struct ieee80211_radiotap_lsig);
- if (rx_status->flag & RX_FLAG_RADIOTAP_VENDOR_DATA) {
- struct ieee80211_vendor_radiotap *radiotap = (void *)data;
-
- data += sizeof(*radiotap) + radiotap->len + radiotap->pad;
- }
-
- return data;
-}
-
static inline int iwl_mvm_check_pn(struct iwl_mvm *mvm, struct sk_buff *skb,
int queue, struct ieee80211_sta *sta)
{
struct iwl_mvm_sta *mvmsta;
- struct ieee80211_hdr *hdr = iwl_mvm_skb_get_hdr(skb);
+ struct ieee80211_hdr *hdr = (void *)skb_mac_header(skb);
struct ieee80211_rx_status *stats = IEEE80211_SKB_RXCB(skb);
struct iwl_mvm_key_pn *ptk_pn;
int res;
@@ -179,6 +153,10 @@ static int iwl_mvm_create_skb(struct iwl_mvm *mvm, struct sk_buff *skb,
if (unlikely(headlen < hdrlen))
return -EINVAL;
+ /* skb_put_data() appends without moving existing data, and it is the
+ * only way we add to this skb, so data + len is exactly where the
+ * 802.11 header will be placed.
+ */
+ skb_set_mac_header(skb, skb->len);
skb_put_data(skb, hdr, hdrlen);
skb_put_data(skb, (u8 *)hdr + hdrlen + pad_len, headlen - hdrlen);
@@ -936,7 +914,7 @@ static bool iwl_mvm_reorder(struct iwl_mvm *mvm,
struct iwl_rx_mpdu_desc *desc)
{
struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb);
- struct ieee80211_hdr *hdr = iwl_mvm_skb_get_hdr(skb);
+ struct ieee80211_hdr *hdr = (void *)skb_mac_header(skb);
struct iwl_mvm_sta *mvm_sta;
struct iwl_mvm_baid_data *baid_data;
struct iwl_mvm_reorder_buffer *buffer;
@@ -1346,6 +1324,10 @@ static void iwl_mvm_decode_he_phy_data(struct iwl_mvm *mvm,
case IWL_RX_PHY_INFO_TYPE_HT:
case IWL_RX_PHY_INFO_TYPE_VHT_SU:
case IWL_RX_PHY_INFO_TYPE_VHT_MU:
+ case IWL_RX_PHY_INFO_TYPE_EHT_MU:
+ case IWL_RX_PHY_INFO_TYPE_EHT_TB:
+ case IWL_RX_PHY_INFO_TYPE_EHT_MU_EXT:
+ case IWL_RX_PHY_INFO_TYPE_EHT_TB_EXT:
return;
case IWL_RX_PHY_INFO_TYPE_HE_TB_EXT:
he->data1 |= cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA1_SPTL_REUSE_KNOWN |
@@ -1690,6 +1672,9 @@ static void iwl_mvm_rx_fill_status(struct iwl_mvm *mvm,
case RATE_MCS_CHAN_WIDTH_160:
rx_status->bw = RATE_INFO_BW_160;
break;
+ case RATE_MCS_CHAN_WIDTH_320:
+ rx_status->bw = RATE_INFO_BW_320;
+ break;
}
/* must be before L-SIG data */
@@ -1726,6 +1711,9 @@ static void iwl_mvm_rx_fill_status(struct iwl_mvm *mvm,
rx_status->he_dcm =
!!(rate_n_flags & RATE_HE_DUAL_CARRIER_MODE_MSK);
break;
+ case RATE_MCS_EHT_MSK:
+ rx_status->encoding = RX_ENC_EHT;
+ break;
}
switch (format) {
@@ -1736,6 +1724,7 @@ static void iwl_mvm_rx_fill_status(struct iwl_mvm *mvm,
break;
case RATE_MCS_VHT_MSK:
case RATE_MCS_HE_MSK:
+ case RATE_MCS_EHT_MSK:
rx_status->nss =
u32_get_bits(rate_n_flags, RATE_MCS_NSS_MSK) + 1;
rx_status->rate_idx = rate_n_flags & RATE_MCS_CODE_MSK;
@@ -1747,10 +1736,11 @@ static void iwl_mvm_rx_fill_status(struct iwl_mvm *mvm,
rx_status->rate_idx = rate;
- if ((rate < 0 || rate > 0xFF) && net_ratelimit()) {
- IWL_ERR(mvm, "Invalid rate flags 0x%x, band %d,\n",
- rate_n_flags, rx_status->band);
+ if ((rate < 0 || rate > 0xFF)) {
rx_status->rate_idx = 0;
+ if (net_ratelimit())
+ IWL_ERR(mvm, "Invalid rate flags 0x%x, band %d,\n",
+ rate_n_flags, rx_status->band);
}
break;
@@ -2065,7 +2055,7 @@ void iwl_mvm_rx_monitor_no_data(struct iwl_mvm *mvm, struct napi_struct *napi,
{
struct ieee80211_rx_status *rx_status;
struct iwl_rx_packet *pkt = rxb_addr(rxb);
- struct iwl_rx_no_data *desc = (void *)pkt->data;
+ struct iwl_rx_no_data_ver_3 *desc = (void *)pkt->data;
u32 rssi;
u32 info_type;
struct ieee80211_sta *sta = NULL;
@@ -2101,6 +2091,18 @@ void iwl_mvm_rx_monitor_no_data(struct iwl_mvm *mvm, struct napi_struct *napi,
format = phy_data.rate_n_flags & RATE_MCS_MOD_TYPE_MSK;
+ if (iwl_fw_lookup_notif_ver(mvm->fw, DATA_PATH_GROUP,
+ RX_NO_DATA_NOTIF, 0) >= 3) {
+ if (unlikely(iwl_rx_packet_payload_len(pkt) <
+ sizeof(struct iwl_rx_no_data_ver_3)))
+ /* invalid len for ver 3 */
+ return;
+ } else {
+ if (format == RATE_MCS_EHT_MSK)
+ /* no support for EHT before version 3 API */
+ return;
+ }
+
/* Don't use dev_alloc_skb(), we'll have enough headroom once
* ieee80211_hdr pulled.
*/
@@ -2136,6 +2138,16 @@ void iwl_mvm_rx_monitor_no_data(struct iwl_mvm *mvm, struct napi_struct *napi,
iwl_mvm_rx_fill_status(mvm, skb, &phy_data, queue);
+ /* No more radiotap info should be put on the skb after this point.
+ *
+ * Mark the current end of data as the mac header, so upper layers
+ * know where all the radiotap headers end.
+ *
+ * skb_put_data() appends without moving existing data, and it is the
+ * only way we add to this skb, so data + len is exactly where the
+ * 802.11 header would be placed.
+ */
+ skb_set_mac_header(skb, skb->len);
+
/*
* Override the nss from the rx_vec since the rate_n_flags has
* only 2 bits for the nss which gives a max of 4 ss but there
@@ -2152,6 +2164,10 @@ void iwl_mvm_rx_monitor_no_data(struct iwl_mvm *mvm, struct napi_struct *napi,
le32_get_bits(desc->rx_vec[0],
RX_NO_DATA_RX_VEC0_HE_NSTS_MSK) + 1;
break;
+ case RATE_MCS_EHT_MSK:
+ rx_status->nss =
+ le32_get_bits(desc->rx_vec[2],
+ RX_NO_DATA_RX_VEC2_EHT_NSTS_MSK) + 1;
}
rcu_read_lock();
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
index fadaa683a416..9813d7fa1800 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
@@ -1107,8 +1107,8 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
spin_lock(&mvmsta->lock);
/* nullfunc frames should go to the MGMT queue regardless of QOS,
- * the condition of !ieee80211_is_qos_nullfunc(fc) keeps the default
- * assignment of MGMT TID
+ * the conditions of !ieee80211_is_qos_nullfunc(fc) and
+ * !ieee80211_is_data_qos(fc) keep the default assignment of MGMT TID
*/
if (ieee80211_is_data_qos(fc) && !ieee80211_is_qos_nullfunc(fc)) {
tid = ieee80211_get_tid(hdr);
@@ -1133,7 +1133,8 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
/* update the tx_cmd hdr as it was already copied */
tx_cmd->hdr->seq_ctrl = hdr->seq_ctrl;
}
- } else if (ieee80211_is_data(fc) && !ieee80211_is_data_qos(fc)) {
+ } else if (ieee80211_is_data(fc) && !ieee80211_is_data_qos(fc) &&
+ !ieee80211_is_nullfunc(fc)) {
tid = IWL_TID_NON_QOS;
}
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
index 75fd386b048e..cb60ba40fe97 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
@@ -136,6 +136,10 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
&control_flags);
prph_sc_ctrl->control.control_flags = cpu_to_le32(control_flags);
+ /* initialize the Step equalizer data */
+ prph_sc_ctrl->step_cfg.mbx_addr_0 = cpu_to_le32(trans->mbx_addr_0_step);
+ prph_sc_ctrl->step_cfg.mbx_addr_1 = cpu_to_le32(trans->mbx_addr_1_step);
+
/* allocate ucode sections in dram and set addresses */
ret = iwl_pcie_init_fw_sec(trans, fw, &prph_scratch->dram);
if (ret)
@@ -343,3 +347,4 @@ int iwl_trans_pcie_ctx_info_gen3_set_reduce_power(struct iwl_trans *trans,
return 0;
}
+
diff --git a/drivers/net/wireless/intersil/orinoco/hermes.c b/drivers/net/wireless/intersil/orinoco/hermes.c
index 256946552742..4888286727ff 100644
--- a/drivers/net/wireless/intersil/orinoco/hermes.c
+++ b/drivers/net/wireless/intersil/orinoco/hermes.c
@@ -38,6 +38,7 @@
* under either the MPL or the GPL.
*/
+#include <linux/net.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/delay.h>
diff --git a/drivers/net/wireless/intersil/orinoco/hw.c b/drivers/net/wireless/intersil/orinoco/hw.c
index 0aea35c9c11c..4fcca08e50de 100644
--- a/drivers/net/wireless/intersil/orinoco/hw.c
+++ b/drivers/net/wireless/intersil/orinoco/hw.c
@@ -931,6 +931,8 @@ int __orinoco_hw_setup_enc(struct orinoco_private *priv)
err = hermes_write_wordrec(hw, USER_BAP,
HERMES_RID_CNFAUTHENTICATION_AGERE,
auth_flag);
+ if (err)
+ return err;
}
err = hermes_write_wordrec(hw, USER_BAP,
HERMES_RID_CNFWEPENABLED_AGERE,
diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
index c57c8903b7c0..4cc4eaf80b14 100644
--- a/drivers/net/wireless/mac80211_hwsim.c
+++ b/drivers/net/wireless/mac80211_hwsim.c
@@ -2034,7 +2034,7 @@ static void mac80211_hwsim_tx_frame(struct ieee80211_hw *hw,
struct ieee80211_channel *chan)
{
struct mac80211_hwsim_data *data = hw->priv;
- u32 _pid = READ_ONCE(data->wmediumd);
+ u32 _portid = READ_ONCE(data->wmediumd);
if (ieee80211_hw_check(hw, SUPPORTS_RC_TABLE)) {
struct ieee80211_tx_info *txi = IEEE80211_SKB_CB(skb);
@@ -2045,8 +2045,8 @@ static void mac80211_hwsim_tx_frame(struct ieee80211_hw *hw,
mac80211_hwsim_monitor_rx(hw, skb, chan);
- if (_pid || hwsim_virtio_enabled)
- return mac80211_hwsim_tx_frame_nl(hw, skb, _pid, chan);
+ if (_portid || hwsim_virtio_enabled)
+ return mac80211_hwsim_tx_frame_nl(hw, skb, _portid, chan);
data->tx_pkts++;
data->tx_bytes += skb->len;
diff --git a/drivers/net/wireless/marvell/libertas/cfg.c b/drivers/net/wireless/marvell/libertas/cfg.c
index 3e065cbb0af9..b700c213d10c 100644
--- a/drivers/net/wireless/marvell/libertas/cfg.c
+++ b/drivers/net/wireless/marvell/libertas/cfg.c
@@ -416,10 +416,20 @@ static int lbs_add_cf_param_tlv(u8 *tlv)
static int lbs_add_wpa_tlv(u8 *tlv, const u8 *ie, u8 ie_len)
{
- size_t tlv_len;
+ struct mrvl_ie_data *wpatlv = (struct mrvl_ie_data *)tlv;
+ const struct element *wpaie;
+
+ /* Find the first RSN or WPA IE to use */
+ wpaie = cfg80211_find_elem(WLAN_EID_RSN, ie, ie_len);
+ if (!wpaie)
+ wpaie = cfg80211_find_vendor_elem(WLAN_OUI_MICROSOFT,
+ WLAN_OUI_TYPE_MICROSOFT_WPA,
+ ie, ie_len);
+ if (!wpaie || wpaie->datalen > 128)
+ return 0;
/*
- * We need just convert an IE to an TLV. IEs use u8 for the header,
+ * Convert the found IE to a TLV. IEs use u8 for the header,
* u8 type
* u8 len
* u8[] data
@@ -428,14 +438,47 @@ static int lbs_add_wpa_tlv(u8 *tlv, const u8 *ie, u8 ie_len)
* __le16 len
* u8[] data
*/
- *tlv++ = *ie++;
- *tlv++ = 0;
- tlv_len = *tlv++ = *ie++;
- *tlv++ = 0;
- while (tlv_len--)
- *tlv++ = *ie++;
- /* the TLV is two bytes larger than the IE */
- return ie_len + 2;
+ wpatlv->header.type = cpu_to_le16(wpaie->id);
+ wpatlv->header.len = cpu_to_le16(wpaie->datalen);
+ memcpy(wpatlv->data, wpaie->data, wpaie->datalen);
+
+ /* Return the total number of bytes added to the TLV buffer */
+ return sizeof(struct mrvl_ie_header) + wpaie->datalen;
+}
+
+/* Add WPS enrollee TLV
+ */
+#define LBS_MAX_WPS_ENROLLEE_TLV_SIZE \
+ (sizeof(struct mrvl_ie_header) \
+ + 256)
+
+static int lbs_add_wps_enrollee_tlv(u8 *tlv, const u8 *ie, size_t ie_len)
+{
+ struct mrvl_ie_data *wpstlv = (struct mrvl_ie_data *)tlv;
+ const struct element *wpsie;
+
+ /* Look for a WPS IE and add it to the probe request */
+ wpsie = cfg80211_find_vendor_elem(WLAN_OUI_MICROSOFT,
+ WLAN_OUI_TYPE_MICROSOFT_WPS,
+ ie, ie_len);
+ if (!wpsie)
+ return 0;
+
+ /* Convert the WPS IE to a TLV. The IE looks like this:
+ * u8 type (WLAN_EID_VENDOR_SPECIFIC)
+ * u8 len
+ * u8[] data
+ * but the TLV will look like this instead:
+ * __le16 type (TLV_TYPE_WPS_ENROLLEE)
+ * __le16 len
+ * u8[] data
+ */
+ wpstlv->header.type = cpu_to_le16(TLV_TYPE_WPS_ENROLLEE);
+ wpstlv->header.len = cpu_to_le16(wpsie->datalen);
+ memcpy(wpstlv->data, wpsie->data, wpsie->datalen);
+
+ /* Return the total number of bytes added to the TLV buffer */
+ return sizeof(struct mrvl_ie_header) + wpsie->datalen;
}
/*
@@ -664,14 +707,15 @@ static int lbs_ret_scan(struct lbs_private *priv, unsigned long dummy,
/*
- * Our scan command contains a TLV, consting of a SSID TLV, a channel list
- * TLV and a rates TLV. Determine the maximum size of them:
+ * Our scan command contains a TLV, consisting of a SSID TLV, a channel list
+ * TLV, a rates TLV, and an optional WPS IE. Determine the maximum size of them:
*/
#define LBS_SCAN_MAX_CMD_SIZE \
(sizeof(struct cmd_ds_802_11_scan) \
+ LBS_MAX_SSID_TLV_SIZE \
+ LBS_MAX_CHANNEL_LIST_TLV_SIZE \
- + LBS_MAX_RATES_TLV_SIZE)
+ + LBS_MAX_RATES_TLV_SIZE \
+ + LBS_MAX_WPS_ENROLLEE_TLV_SIZE)
/*
* Assumes priv->scan_req is initialized and valid
@@ -720,6 +764,11 @@ static void lbs_scan_worker(struct work_struct *work)
/* add rates TLV */
tlv += lbs_add_supported_rates_tlv(tlv);
+ /* add optional WPS enrollee TLV */
+ if (priv->scan_req->ie && priv->scan_req->ie_len)
+ tlv += lbs_add_wps_enrollee_tlv(tlv, priv->scan_req->ie,
+ priv->scan_req->ie_len);
+
if (priv->scan_channel < priv->scan_req->n_channels) {
cancel_delayed_work(&priv->scan_work);
if (netif_running(priv->dev))
@@ -2106,6 +2155,7 @@ int lbs_cfg_register(struct lbs_private *priv)
int ret;
wdev->wiphy->max_scan_ssids = 1;
+ wdev->wiphy->max_scan_ie_len = 256;
wdev->wiphy->signal_type = CFG80211_SIGNAL_TYPE_MBM;
wdev->wiphy->interface_modes =
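The two helpers added above both perform the same transformation: a standard 802.11 information element (u8 id, u8 len header) is repacked into Marvell's firmware TLV format (__le16 type, __le16 len), which is why the result is exactly two bytes larger than the IE. Below is a minimal, self-contained sketch of that conversion; the struct mirrors the driver's mrvl_ie_header but the demo compiles in userspace and uses endian.h helpers in place of the kernel's cpu_to_le16().

/* Standalone sketch of the IE -> Marvell TLV conversion above. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>
#include <endian.h>

struct tlv_header {              /* like struct mrvl_ie_header */
	uint16_t type;           /* __le16 in the driver */
	uint16_t len;
} __attribute__((packed));

/* Convert one IE (u8 id, u8 len, data...) into a TLV; returns bytes written. */
static size_t ie_to_tlv(uint8_t *out, const uint8_t *ie)
{
	struct tlv_header hdr = {
		.type = htole16(ie[0]),  /* 8-bit IE id widens to 16-bit type */
		.len  = htole16(ie[1]),  /* 8-bit IE len widens to 16-bit len */
	};

	memcpy(out, &hdr, sizeof(hdr));
	memcpy(out + sizeof(hdr), ie + 2, ie[1]);
	return sizeof(hdr) + ie[1];      /* TLV is 2 bytes larger than the IE */
}

int main(void)
{
	const uint8_t rsn_ie[] = { 48, 4, 0x01, 0x00, 0x00, 0x0f }; /* id 48 = RSN */
	uint8_t buf[64];
	size_t n = ie_to_tlv(buf, rsn_ie);

	printf("wrote %zu bytes (IE was %d)\n", n, 2 + rsn_ie[1]);
	return 0;
}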
diff --git a/drivers/net/wireless/marvell/libertas/cmdresp.c b/drivers/net/wireless/marvell/libertas/cmdresp.c
index cb515c5584c1..74cb7551f427 100644
--- a/drivers/net/wireless/marvell/libertas/cmdresp.c
+++ b/drivers/net/wireless/marvell/libertas/cmdresp.c
@@ -48,7 +48,7 @@ void lbs_mac_event_disconnected(struct lbs_private *priv,
/* Free Tx and Rx packets */
spin_lock_irqsave(&priv->driver_lock, flags);
- kfree_skb(priv->currenttxskb);
+ dev_kfree_skb_irq(priv->currenttxskb);
priv->currenttxskb = NULL;
priv->tx_pending_len = 0;
spin_unlock_irqrestore(&priv->driver_lock, flags);
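The kfree_skb() -> dev_kfree_skb_irq() substitutions in this series all follow the same rule: kfree_skb() may run skb destructors and must not be called while interrupts are disabled, whereas dev_kfree_skb_irq() only queues the skb on the per-CPU completion queue and raises the NET_TX softirq to do the real free later. A hedged sketch of the pattern (illustrative helper, not driver code):

/* Freeing an skb under a spinlock held with IRQs disabled. */
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/spinlock.h>

static void drop_pending_tx(spinlock_t *lock, struct sk_buff **pending)
{
	unsigned long flags;

	spin_lock_irqsave(lock, flags);
	dev_kfree_skb_irq(*pending);    /* safe here: defers to NET_TX softirq */
	*pending = NULL;
	spin_unlock_irqrestore(lock, flags);
}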
diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
index 32fdc4150b60..2240b4db8c03 100644
--- a/drivers/net/wireless/marvell/libertas/if_usb.c
+++ b/drivers/net/wireless/marvell/libertas/if_usb.c
@@ -637,7 +637,7 @@ static inline void process_cmdrequest(int recvlength, uint8_t *recvbuff,
priv->resp_len[i] = (recvlength - MESSAGE_HEADER_LEN);
memcpy(priv->resp_buf[i], recvbuff + MESSAGE_HEADER_LEN,
priv->resp_len[i]);
- kfree_skb(skb);
+ dev_kfree_skb_irq(skb);
lbs_notify_command_response(priv, i);
spin_unlock_irqrestore(&priv->driver_lock, flags);
diff --git a/drivers/net/wireless/marvell/libertas/main.c b/drivers/net/wireless/marvell/libertas/main.c
index 8f5220cee112..78e8b5aecec0 100644
--- a/drivers/net/wireless/marvell/libertas/main.c
+++ b/drivers/net/wireless/marvell/libertas/main.c
@@ -216,7 +216,7 @@ int lbs_stop_iface(struct lbs_private *priv)
spin_lock_irqsave(&priv->driver_lock, flags);
priv->iface_running = false;
- kfree_skb(priv->currenttxskb);
+ dev_kfree_skb_irq(priv->currenttxskb);
priv->currenttxskb = NULL;
priv->tx_pending_len = 0;
spin_unlock_irqrestore(&priv->driver_lock, flags);
@@ -869,6 +869,7 @@ static int lbs_init_adapter(struct lbs_private *priv)
ret = kfifo_alloc(&priv->event_fifo, sizeof(u32) * 16, GFP_KERNEL);
if (ret) {
pr_err("Out of memory allocating event FIFO buffer\n");
+ lbs_free_cmd_buffer(priv);
goto out;
}
diff --git a/drivers/net/wireless/marvell/libertas/types.h b/drivers/net/wireless/marvell/libertas/types.h
index cd4ceb6f885d..bad38d312d0d 100644
--- a/drivers/net/wireless/marvell/libertas/types.h
+++ b/drivers/net/wireless/marvell/libertas/types.h
@@ -93,6 +93,7 @@ union ieee_phy_param_set {
#define TLV_TYPE_TSFTIMESTAMP (PROPRIETARY_TLV_BASE_ID + 19)
#define TLV_TYPE_RSSI_HIGH (PROPRIETARY_TLV_BASE_ID + 22)
#define TLV_TYPE_SNR_HIGH (PROPRIETARY_TLV_BASE_ID + 23)
+#define TLV_TYPE_WPS_ENROLLEE (PROPRIETARY_TLV_BASE_ID + 27)
#define TLV_TYPE_AUTH_TYPE (PROPRIETARY_TLV_BASE_ID + 31)
#define TLV_TYPE_MESH_ID (PROPRIETARY_TLV_BASE_ID + 37)
#define TLV_TYPE_OLD_MESH_ID (PROPRIETARY_TLV_BASE_ID + 291)
@@ -105,23 +106,23 @@ struct mrvl_ie_header {
struct mrvl_ie_data {
struct mrvl_ie_header header;
- u8 Data[1];
+ u8 data[];
} __packed;
struct mrvl_ie_rates_param_set {
struct mrvl_ie_header header;
- u8 rates[1];
+ u8 rates[];
} __packed;
struct mrvl_ie_ssid_param_set {
struct mrvl_ie_header header;
- u8 ssid[1];
+ u8 ssid[];
} __packed;
struct mrvl_ie_wildcard_ssid_param_set {
struct mrvl_ie_header header;
- u8 MaxSsidlength;
- u8 ssid[1];
+ u8 maxssidlength;
+ u8 ssid[];
} __packed;
struct chanscanmode {
@@ -146,7 +147,7 @@ struct chanscanparamset {
struct mrvl_ie_chanlist_param_set {
struct mrvl_ie_header header;
- struct chanscanparamset chanscanparam[1];
+ struct chanscanparamset chanscanparam[];
} __packed;
struct mrvl_ie_cf_param_set {
@@ -164,12 +165,12 @@ struct mrvl_ie_ds_param_set {
struct mrvl_ie_rsn_param_set {
struct mrvl_ie_header header;
- u8 rsnie[1];
+ u8 rsnie[];
} __packed;
struct mrvl_ie_tsf_timestamp {
struct mrvl_ie_header header;
- __le64 tsftable[1];
+ __le64 tsftable[];
} __packed;
/* v9 and later firmware only */
@@ -220,7 +221,7 @@ struct led_pin {
struct mrvl_ie_ledgpio {
struct mrvl_ie_header header;
- struct led_pin ledpin[1];
+ struct led_pin ledpin[];
} __packed;
struct led_bhv {
@@ -233,7 +234,7 @@ struct led_bhv {
struct mrvl_ie_ledbhv {
struct mrvl_ie_header header;
- struct led_bhv ledbhv[1];
+ struct led_bhv ledbhv[];
} __packed;
/*
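The conversions above replace the old one-element-array idiom with C99 flexible array members. The practical effect is that sizeof() no longer counts a phantom trailing element, so length arithmetic like the "sizeof(struct mrvl_ie_header) + datalen" returns earlier in this patch stays exact (in-kernel, struct_size() from linux/overflow.h is the matching allocation-size helper). A self-contained demo of the sizeof difference, with packed structs mirroring the driver's:

#include <stdint.h>
#include <stdio.h>

struct hdr { uint16_t type, len; };

struct tlv_old {                 /* legacy one-element array */
	struct hdr header;
	uint8_t data[1];
} __attribute__((packed));

struct tlv_new {                 /* flexible array member */
	struct hdr header;
	uint8_t data[];
} __attribute__((packed));

int main(void)
{
	printf("old: %zu bytes\n", sizeof(struct tlv_old)); /* 5: header + phantom byte */
	printf("new: %zu bytes\n", sizeof(struct tlv_new)); /* 4: header only */
	return 0;
}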
diff --git a/drivers/net/wireless/marvell/libertas_tf/if_usb.c b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
index 75b5319d033f..1750f5e93de2 100644
--- a/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+++ b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
@@ -613,7 +613,7 @@ static inline void process_cmdrequest(int recvlength, uint8_t *recvbuff,
spin_lock_irqsave(&priv->driver_lock, flags);
memcpy(priv->cmd_resp_buff, recvbuff + MESSAGE_HEADER_LEN,
recvlength - MESSAGE_HEADER_LEN);
- kfree_skb(skb);
+ dev_kfree_skb_irq(skb);
lbtf_cmd_response_rx(priv);
spin_unlock_irqrestore(&priv->driver_lock, flags);
}
diff --git a/drivers/net/wireless/marvell/mwifiex/11h.c b/drivers/net/wireless/marvell/mwifiex/11h.c
index 6a9d7bc1f41e..b0c40a776a2e 100644
--- a/drivers/net/wireless/marvell/mwifiex/11h.c
+++ b/drivers/net/wireless/marvell/mwifiex/11h.c
@@ -292,6 +292,6 @@ void mwifiex_dfs_chan_sw_work_queue(struct work_struct *work)
mwifiex_dbg(priv->adapter, MSG,
"indicating channel switch completion to kernel\n");
mutex_lock(&priv->wdev.mtx);
- cfg80211_ch_switch_notify(priv->netdev, &priv->dfs_chandef, 0);
+ cfg80211_ch_switch_notify(priv->netdev, &priv->dfs_chandef, 0, 0);
mutex_unlock(&priv->wdev.mtx);
}
diff --git a/drivers/net/wireless/marvell/mwifiex/11n.c b/drivers/net/wireless/marvell/mwifiex/11n.c
index 4af57e6d4393..90e401100898 100644
--- a/drivers/net/wireless/marvell/mwifiex/11n.c
+++ b/drivers/net/wireless/marvell/mwifiex/11n.c
@@ -878,7 +878,7 @@ mwifiex_send_delba_txbastream_tbl(struct mwifiex_private *priv, u8 tid)
*/
void mwifiex_update_ampdu_txwinsize(struct mwifiex_adapter *adapter)
{
- u8 i;
+ u8 i, j;
u32 tx_win_size;
struct mwifiex_private *priv;
@@ -909,8 +909,8 @@ void mwifiex_update_ampdu_txwinsize(struct mwifiex_adapter *adapter)
if (tx_win_size != priv->add_ba_param.tx_win_size) {
if (!priv->media_connected)
continue;
- for (i = 0; i < MAX_NUM_TID; i++)
- mwifiex_send_delba_txbastream_tbl(priv, i);
+ for (j = 0; j < MAX_NUM_TID; j++)
+ mwifiex_send_delba_txbastream_tbl(priv, j);
}
}
}
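The fix above gives the inner delba loop its own counter so it no longer clobbers the interface iteration around it; with the shared counter, the inner loop left i at MAX_NUM_TID and the outer loop terminated after a single pass. A minimal standalone illustration of that bug class:

#include <stdio.h>

#define N_IFACES 3
#define MAX_NUM_TID 8

int main(void)
{
	int i, j, visited = 0;

	for (i = 0; i < N_IFACES; i++) {
		for (j = 0; j < MAX_NUM_TID; j++)  /* was "i" before the fix */
			;
		visited++;
	}
	printf("interfaces visited: %d\n", visited); /* 3; the buggy version visits 1 */
	return 0;
}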
diff --git a/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c b/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c
index a04b66284af4..391793a16adc 100644
--- a/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c
+++ b/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c
@@ -33,7 +33,7 @@ static int mwifiex_11n_dispatch_amsdu_pkt(struct mwifiex_private *priv,
skb_trim(skb, le16_to_cpu(local_rx_pd->rx_pkt_length));
ieee80211_amsdu_to_8023s(skb, &list, priv->curr_addr,
- priv->wdev.iftype, 0, NULL, NULL);
+ priv->wdev.iftype, 0, NULL, NULL, false);
while (!skb_queue_empty(&list)) {
struct rx_packet_hdr *rx_hdr;
diff --git a/drivers/net/wireless/marvell/mwifiex/Kconfig b/drivers/net/wireless/marvell/mwifiex/Kconfig
index 2b4ff2b78a7e..b182f7155d66 100644
--- a/drivers/net/wireless/marvell/mwifiex/Kconfig
+++ b/drivers/net/wireless/marvell/mwifiex/Kconfig
@@ -10,13 +10,14 @@ config MWIFIEX
mwifiex.
config MWIFIEX_SDIO
- tristate "Marvell WiFi-Ex Driver for SD8786/SD8787/SD8797/SD8887/SD8897/SD8977/SD8987/SD8997"
+ tristate "Marvell WiFi-Ex Driver for SD8786/SD8787/SD8797/SD8887/SD8897/SD8977/SD8978/SD8987/SD8997"
depends on MWIFIEX && MMC
select FW_LOADER
select WANT_DEV_COREDUMP
help
This adds support for wireless adapters based on Marvell
- 8786/8787/8797/8887/8897/8977/8987/8997 chipsets with SDIO interface.
+ 8786/8787/8797/8887/8897/8977/8978/8987/8997 chipsets with
+ SDIO interface. SD8978 is also known as NXP IW416.
If you choose to build it as a module, it will be called
mwifiex_sdio.
diff --git a/drivers/net/wireless/marvell/mwifiex/cmdevt.c b/drivers/net/wireless/marvell/mwifiex/cmdevt.c
index d3339d67e7a0..3756aa247e77 100644
--- a/drivers/net/wireless/marvell/mwifiex/cmdevt.c
+++ b/drivers/net/wireless/marvell/mwifiex/cmdevt.c
@@ -1607,6 +1607,11 @@ int mwifiex_ret_get_hw_spec(struct mwifiex_private *priv,
api_rev->major_ver,
api_rev->minor_ver);
break;
+ case FW_HOTFIX_VER_ID:
+ mwifiex_dbg(adapter, INFO,
+ "Firmware hotfix version %d\n",
+ api_rev->major_ver);
+ break;
default:
mwifiex_dbg(adapter, FATAL,
"Unknown api_id: %d\n",
diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
index b4f945a549f7..f2168fac95ed 100644
--- a/drivers/net/wireless/marvell/mwifiex/fw.h
+++ b/drivers/net/wireless/marvell/mwifiex/fw.h
@@ -41,7 +41,7 @@ struct mwifiex_fw_header {
struct mwifiex_fw_data {
struct mwifiex_fw_header header;
__le32 seq_num;
- u8 data[1];
+ u8 data[];
} __packed;
struct mwifiex_fw_dump_header {
@@ -641,7 +641,7 @@ struct mwifiex_ie_types_header {
struct mwifiex_ie_types_data {
struct mwifiex_ie_types_header header;
- u8 data[1];
+ u8 data[];
} __packed;
#define MWIFIEX_TxPD_POWER_MGMT_NULL_PACKET 0x01
@@ -794,12 +794,12 @@ struct mwifiex_ie_types_chan_band_list_param_set {
struct mwifiex_ie_types_rates_param_set {
struct mwifiex_ie_types_header header;
- u8 rates[1];
+ u8 rates[];
} __packed;
struct mwifiex_ie_types_ssid_param_set {
struct mwifiex_ie_types_header header;
- u8 ssid[1];
+ u8 ssid[];
} __packed;
struct mwifiex_ie_types_num_probes {
@@ -907,7 +907,7 @@ struct mwifiex_ie_types_tdls_idle_timeout {
struct mwifiex_ie_types_rsn_param_set {
struct mwifiex_ie_types_header header;
- u8 rsn_ie[1];
+ u8 rsn_ie[];
} __packed;
#define KEYPARAMSET_FIXED_LEN 6
@@ -1048,6 +1048,7 @@ enum API_VER_ID {
FW_API_VER_ID = 2,
UAP_FW_API_VER_ID = 3,
CHANRPT_API_VER_ID = 4,
+ FW_HOTFIX_VER_ID = 5,
};
struct hw_spec_api_rev {
@@ -1433,7 +1434,7 @@ struct mwifiex_tdls_stop_cs_params {
struct host_cmd_ds_tdls_config {
__le16 tdls_action;
- u8 tdls_data[1];
+ u8 tdls_data[];
} __packed;
struct mwifiex_chan_desc {
@@ -1574,13 +1575,13 @@ struct ie_body {
struct host_cmd_ds_802_11_scan {
u8 bss_mode;
u8 bssid[ETH_ALEN];
- u8 tlv_buffer[1];
+ u8 tlv_buffer[];
} __packed;
struct host_cmd_ds_802_11_scan_rsp {
__le16 bss_descript_size;
u8 number_of_sets;
- u8 bss_desc_and_tlv_buffer[1];
+ u8 bss_desc_and_tlv_buffer[];
} __packed;
struct host_cmd_ds_802_11_scan_ext {
@@ -1596,7 +1597,7 @@ struct mwifiex_ie_types_bss_mode {
struct mwifiex_ie_types_bss_scan_rsp {
struct mwifiex_ie_types_header header;
u8 bssid[ETH_ALEN];
- u8 frame_body[1];
+ u8 frame_body[];
} __packed;
struct mwifiex_ie_types_bss_scan_info {
@@ -1733,7 +1734,7 @@ struct mwifiex_ie_types_local_pwr_constraint {
struct mwifiex_ie_types_wmm_param_set {
struct mwifiex_ie_types_header header;
- u8 wmm_ie[1];
+ u8 wmm_ie[];
} __packed;
struct mwifiex_ie_types_mgmt_frame {
@@ -1959,7 +1960,7 @@ struct host_cmd_tlv_wep_key {
struct mwifiex_ie_types_header header;
u8 key_index;
u8 is_default;
- u8 key[1];
+ u8 key[];
};
struct host_cmd_tlv_auth_type {
diff --git a/drivers/net/wireless/marvell/mwifiex/sdio.c b/drivers/net/wireless/marvell/mwifiex/sdio.c
index b8dc3b5c9ad9..c64e24c10ea6 100644
--- a/drivers/net/wireless/marvell/mwifiex/sdio.c
+++ b/drivers/net/wireless/marvell/mwifiex/sdio.c
@@ -263,7 +263,7 @@ static const struct mwifiex_sdio_card_reg mwifiex_reg_sd8887 = {
0x68, 0x69, 0x6a},
};
-static const struct mwifiex_sdio_card_reg mwifiex_reg_sd8987 = {
+static const struct mwifiex_sdio_card_reg mwifiex_reg_sd89xx = {
.start_rd_port = 0,
.start_wr_port = 0,
.base_0_reg = 0xF8,
@@ -394,6 +394,22 @@ static const struct mwifiex_sdio_device mwifiex_sdio_sd8977 = {
.can_ext_scan = true,
};
+static const struct mwifiex_sdio_device mwifiex_sdio_sd8978 = {
+ .firmware_sdiouart = SD8978_SDIOUART_FW_NAME,
+ .reg = &mwifiex_reg_sd89xx,
+ .max_ports = 32,
+ .mp_agg_pkt_limit = 16,
+ .tx_buf_size = MWIFIEX_TX_DATA_BUF_SIZE_4K,
+ .mp_tx_agg_buf_size = MWIFIEX_MP_AGGR_BUF_SIZE_MAX,
+ .mp_rx_agg_buf_size = MWIFIEX_MP_AGGR_BUF_SIZE_MAX,
+ .supports_sdio_new_mode = true,
+ .has_control_mask = false,
+ .can_dump_fw = true,
+ .fw_dump_enh = true,
+ .can_auto_tdls = false,
+ .can_ext_scan = true,
+};
+
static const struct mwifiex_sdio_device mwifiex_sdio_sd8997 = {
.firmware = SD8997_DEFAULT_FW_NAME,
.firmware_sdiouart = SD8997_SDIOUART_FW_NAME,
@@ -428,7 +444,7 @@ static const struct mwifiex_sdio_device mwifiex_sdio_sd8887 = {
static const struct mwifiex_sdio_device mwifiex_sdio_sd8987 = {
.firmware = SD8987_DEFAULT_FW_NAME,
- .reg = &mwifiex_reg_sd8987,
+ .reg = &mwifiex_reg_sd89xx,
.max_ports = 32,
.mp_agg_pkt_limit = 16,
.tx_buf_size = MWIFIEX_TX_DATA_BUF_SIZE_2K,
@@ -480,8 +496,11 @@ static struct memory_type_mapping mem_type_mapping_tbl[] = {
};
static const struct of_device_id mwifiex_sdio_of_match_table[] = {
+ { .compatible = "marvell,sd8787" },
{ .compatible = "marvell,sd8897" },
+ { .compatible = "marvell,sd8978" },
{ .compatible = "marvell,sd8997" },
+ { .compatible = "nxp,iw416" },
{ }
};
@@ -919,6 +938,8 @@ static const struct sdio_device_id mwifiex_ids[] = {
.driver_data = (unsigned long)&mwifiex_sdio_sd8801},
{SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, SDIO_DEVICE_ID_MARVELL_8977_WLAN),
.driver_data = (unsigned long)&mwifiex_sdio_sd8977},
+ {SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, SDIO_DEVICE_ID_MARVELL_8978_WLAN),
+ .driver_data = (unsigned long)&mwifiex_sdio_sd8978},
{SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, SDIO_DEVICE_ID_MARVELL_8987_WLAN),
.driver_data = (unsigned long)&mwifiex_sdio_sd8987},
{SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, SDIO_DEVICE_ID_MARVELL_8997_WLAN),
@@ -3163,6 +3184,7 @@ MODULE_FIRMWARE(SD8797_DEFAULT_FW_NAME);
MODULE_FIRMWARE(SD8897_DEFAULT_FW_NAME);
MODULE_FIRMWARE(SD8887_DEFAULT_FW_NAME);
MODULE_FIRMWARE(SD8977_DEFAULT_FW_NAME);
+MODULE_FIRMWARE(SD8978_SDIOUART_FW_NAME);
MODULE_FIRMWARE(SD8987_DEFAULT_FW_NAME);
MODULE_FIRMWARE(SD8997_DEFAULT_FW_NAME);
MODULE_FIRMWARE(SD8997_SDIOUART_FW_NAME);
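SD8978/IW416 support is wired up in three places here: a per-chipset config struct (mwifiex_sdio_sd8978), an sdio_device_id table entry whose driver_data points at that config, and OF match entries. Below is a hedged sketch — not the driver's actual probe — of the standard SDIO-core pattern by which driver_data is recovered at probe time; struct chip_cfg and example_sdio_probe are hypothetical stand-ins:

#include <linux/mmc/sdio_func.h>
#include <linux/mod_devicetable.h>

struct chip_cfg { const char *fw_name; }; /* stands in for mwifiex_sdio_device */

static int example_sdio_probe(struct sdio_func *func,
			      const struct sdio_device_id *id)
{
	/* SDIO core passes the matching table entry; driver_data carries
	 * the per-chipset configuration selected at table-definition time.
	 */
	const struct chip_cfg *cfg = (const void *)id->driver_data;

	dev_info(&func->dev, "probing with firmware %s\n", cfg->fw_name);
	return 0;
}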
diff --git a/drivers/net/wireless/marvell/mwifiex/sdio.h b/drivers/net/wireless/marvell/mwifiex/sdio.h
index 3a24bb48b299..ae94c172310f 100644
--- a/drivers/net/wireless/marvell/mwifiex/sdio.h
+++ b/drivers/net/wireless/marvell/mwifiex/sdio.h
@@ -25,6 +25,7 @@
#define SD8887_DEFAULT_FW_NAME "mrvl/sd8887_uapsta.bin"
#define SD8801_DEFAULT_FW_NAME "mrvl/sd8801_uapsta.bin"
#define SD8977_DEFAULT_FW_NAME "mrvl/sdsd8977_combo_v2.bin"
+#define SD8978_SDIOUART_FW_NAME "mrvl/sdiouartiw416_combo_v0.bin"
#define SD8987_DEFAULT_FW_NAME "mrvl/sd8987_uapsta.bin"
#define SD8997_DEFAULT_FW_NAME "mrvl/sdsd8997_combo_v4.bin"
#define SD8997_SDIOUART_FW_NAME "mrvl/sdiouart8997_combo_v4.bin"
diff --git a/drivers/net/wireless/mediatek/mt76/Kconfig b/drivers/net/wireless/mediatek/mt76/Kconfig
index d7f90a0eb21e..18152c16c36f 100644
--- a/drivers/net/wireless/mediatek/mt76/Kconfig
+++ b/drivers/net/wireless/mediatek/mt76/Kconfig
@@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0-only
config MT76_CORE
tristate
+ select PAGE_POOL
config MT76_LEDS
bool
diff --git a/drivers/net/wireless/mediatek/mt76/debugfs.c b/drivers/net/wireless/mediatek/mt76/debugfs.c
index 11b0b3d62f29..57fbcc83e074 100644
--- a/drivers/net/wireless/mediatek/mt76/debugfs.c
+++ b/drivers/net/wireless/mediatek/mt76/debugfs.c
@@ -112,7 +112,7 @@ mt76_register_debugfs_fops(struct mt76_phy *phy,
if (!dir)
return NULL;
- debugfs_create_u8("led_pin", 0600, dir, &dev->led_pin);
+ debugfs_create_u8("led_pin", 0600, dir, &phy->leds.pin);
debugfs_create_u32("regidx", 0600, dir, &dev->debugfs_reg);
debugfs_create_file_unsafe("regval", 0600, dir, dev, fops);
debugfs_create_file_unsafe("napi_threaded", 0600, dir, dev,
diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
index 06161815c180..da281cd1d36f 100644
--- a/drivers/net/wireless/mediatek/mt76/dma.c
+++ b/drivers/net/wireless/mediatek/mt76/dma.c
@@ -165,7 +165,7 @@ mt76_free_pending_txwi(struct mt76_dev *dev)
local_bh_enable();
}
-static void
+void
mt76_free_pending_rxwi(struct mt76_dev *dev)
{
struct mt76_txwi_cache *t;
@@ -173,11 +173,12 @@ mt76_free_pending_rxwi(struct mt76_dev *dev)
local_bh_disable();
while ((t = __mt76_get_rxwi(dev)) != NULL) {
if (t->ptr)
- skb_free_frag(t->ptr);
+ mt76_put_page_pool_buf(t->ptr, false);
kfree(t);
}
local_bh_enable();
}
+EXPORT_SYMBOL_GPL(mt76_free_pending_rxwi);
static void
mt76_dma_sync_idx(struct mt76_dev *dev, struct mt76_queue *q)
@@ -218,8 +219,7 @@ mt76_dma_add_rx_buf(struct mt76_dev *dev, struct mt76_queue *q,
ctrl = FIELD_PREP(MT_DMA_CTL_SD_LEN0, buf[0].len);
- if ((q->flags & MT_QFLAG_WED) &&
- FIELD_GET(MT_QFLAG_WED_TYPE, q->flags) == MT76_WED_Q_RX) {
+ if (mt76_queue_is_wed_rx(q)) {
txwi = mt76_get_rxwi(dev);
if (!txwi)
return -ENOMEM;
@@ -401,8 +401,7 @@ mt76_dma_get_buf(struct mt76_dev *dev, struct mt76_queue *q, int idx,
if (info)
*info = le32_to_cpu(desc->info);
- if ((q->flags & MT_QFLAG_WED) &&
- FIELD_GET(MT_QFLAG_WED_TYPE, q->flags) == MT76_WED_Q_RX) {
+ if (mt76_queue_is_wed_rx(q)) {
u32 token = FIELD_GET(MT_DMA_CTL_TOKEN,
le32_to_cpu(desc->buf1));
struct mt76_txwi_cache *t = mt76_rx_token_release(dev, token);
@@ -410,9 +409,9 @@ mt76_dma_get_buf(struct mt76_dev *dev, struct mt76_queue *q, int idx,
if (!t)
return NULL;
- dma_unmap_single(dev->dma_dev, t->dma_addr,
- SKB_WITH_OVERHEAD(q->buf_size),
- DMA_FROM_DEVICE);
+ dma_sync_single_for_cpu(dev->dma_dev, t->dma_addr,
+ SKB_WITH_OVERHEAD(q->buf_size),
+ page_pool_get_dma_dir(q->page_pool));
buf = t->ptr;
t->dma_addr = 0;
@@ -429,9 +428,9 @@ mt76_dma_get_buf(struct mt76_dev *dev, struct mt76_queue *q, int idx,
} else {
buf = e->buf;
e->buf = NULL;
- dma_unmap_single(dev->dma_dev, e->dma_addr[0],
- SKB_WITH_OVERHEAD(q->buf_size),
- DMA_FROM_DEVICE);
+ dma_sync_single_for_cpu(dev->dma_dev, e->dma_addr[0],
+ SKB_WITH_OVERHEAD(q->buf_size),
+ page_pool_get_dma_dir(q->page_pool));
}
return buf;
@@ -582,26 +581,12 @@ free_skb:
return ret;
}
-static struct page_frag_cache *
-mt76_dma_rx_get_frag_cache(struct mt76_dev *dev, struct mt76_queue *q)
-{
- struct page_frag_cache *rx_page = &q->rx_page;
-
-#ifdef CONFIG_NET_MEDIATEK_SOC_WED
- if ((q->flags & MT_QFLAG_WED) &&
- FIELD_GET(MT_QFLAG_WED_TYPE, q->flags) == MT76_WED_Q_RX)
- rx_page = &dev->mmio.wed.rx_buf_ring.rx_page;
-#endif
- return rx_page;
-}
-
static int
-mt76_dma_rx_fill(struct mt76_dev *dev, struct mt76_queue *q)
+mt76_dma_rx_fill(struct mt76_dev *dev, struct mt76_queue *q,
+ bool allow_direct)
{
- struct page_frag_cache *rx_page = mt76_dma_rx_get_frag_cache(dev, q);
int len = SKB_WITH_OVERHEAD(q->buf_size);
- int frames = 0, offset = q->buf_offset;
- dma_addr_t addr;
+ int frames = 0;
if (!q->ndesc)
return 0;
@@ -609,26 +594,25 @@ mt76_dma_rx_fill(struct mt76_dev *dev, struct mt76_queue *q)
spin_lock_bh(&q->lock);
while (q->queued < q->ndesc - 1) {
+ enum dma_data_direction dir;
struct mt76_queue_buf qbuf;
- void *buf = NULL;
+ dma_addr_t addr;
+ int offset;
+ void *buf;
- buf = page_frag_alloc(rx_page, q->buf_size, GFP_ATOMIC);
+ buf = mt76_get_page_pool_buf(q, &offset, q->buf_size);
if (!buf)
break;
- addr = dma_map_single(dev->dma_dev, buf, len, DMA_FROM_DEVICE);
- if (unlikely(dma_mapping_error(dev->dma_dev, addr))) {
- skb_free_frag(buf);
- break;
- }
+ addr = page_pool_get_dma_addr(virt_to_head_page(buf)) + offset;
+ dir = page_pool_get_dma_dir(q->page_pool);
+ dma_sync_single_for_device(dev->dma_dev, addr, len, dir);
- qbuf.addr = addr + offset;
- qbuf.len = len - offset;
+ qbuf.addr = addr + q->buf_offset;
+ qbuf.len = len - q->buf_offset;
qbuf.skip_unmap = false;
if (mt76_dma_add_rx_buf(dev, q, &qbuf, buf) < 0) {
- dma_unmap_single(dev->dma_dev, addr, len,
- DMA_FROM_DEVICE);
- skb_free_frag(buf);
+ mt76_put_page_pool_buf(buf, allow_direct);
break;
}
frames++;
@@ -642,14 +626,17 @@ mt76_dma_rx_fill(struct mt76_dev *dev, struct mt76_queue *q)
return frames;
}
-static int
-mt76_dma_wed_setup(struct mt76_dev *dev, struct mt76_queue *q)
+int mt76_dma_wed_setup(struct mt76_dev *dev, struct mt76_queue *q, bool reset)
{
#ifdef CONFIG_NET_MEDIATEK_SOC_WED
struct mtk_wed_device *wed = &dev->mmio.wed;
int ret, type, ring;
- u8 flags = q->flags;
+ u8 flags;
+ if (!q || !q->ndesc)
+ return -EINVAL;
+
+ flags = q->flags;
if (!mtk_wed_device_active(wed))
q->flags &= ~MT_QFLAG_WED;
@@ -661,7 +648,7 @@ mt76_dma_wed_setup(struct mt76_dev *dev, struct mt76_queue *q)
switch (type) {
case MT76_WED_Q_TX:
- ret = mtk_wed_device_tx_ring_setup(wed, ring, q->regs, false);
+ ret = mtk_wed_device_tx_ring_setup(wed, ring, q->regs, reset);
if (!ret)
q->wed_regs = wed->tx_ring[ring].reg_base;
break;
@@ -669,7 +656,7 @@ mt76_dma_wed_setup(struct mt76_dev *dev, struct mt76_queue *q)
/* WED txfree queue needs ring to be initialized before setup */
q->flags = 0;
mt76_dma_queue_reset(dev, q);
- mt76_dma_rx_fill(dev, q);
+ mt76_dma_rx_fill(dev, q, false);
q->flags = flags;
ret = mtk_wed_device_txfree_ring_setup(wed, q->regs);
@@ -677,7 +664,7 @@ mt76_dma_wed_setup(struct mt76_dev *dev, struct mt76_queue *q)
q->wed_regs = wed->txfree_ring.reg_base;
break;
case MT76_WED_Q_RX:
- ret = mtk_wed_device_rx_ring_setup(wed, ring, q->regs, false);
+ ret = mtk_wed_device_rx_ring_setup(wed, ring, q->regs, reset);
if (!ret)
q->wed_regs = wed->rx_ring[ring].reg_base;
break;
@@ -690,6 +677,7 @@ mt76_dma_wed_setup(struct mt76_dev *dev, struct mt76_queue *q)
return 0;
#endif
}
+EXPORT_SYMBOL_GPL(mt76_dma_wed_setup);
static int
mt76_dma_alloc_queue(struct mt76_dev *dev, struct mt76_queue *q,
@@ -716,7 +704,11 @@ mt76_dma_alloc_queue(struct mt76_dev *dev, struct mt76_queue *q,
if (!q->entry)
return -ENOMEM;
- ret = mt76_dma_wed_setup(dev, q);
+ ret = mt76_create_page_pool(dev, q);
+ if (ret)
+ return ret;
+
+ ret = mt76_dma_wed_setup(dev, q, false);
if (ret)
return ret;
@@ -729,7 +721,6 @@ mt76_dma_alloc_queue(struct mt76_dev *dev, struct mt76_queue *q,
static void
mt76_dma_rx_cleanup(struct mt76_dev *dev, struct mt76_queue *q)
{
- struct page *page;
void *buf;
bool more;
@@ -737,21 +728,21 @@ mt76_dma_rx_cleanup(struct mt76_dev *dev, struct mt76_queue *q)
return;
spin_lock_bh(&q->lock);
+
do {
buf = mt76_dma_dequeue(dev, q, true, NULL, NULL, &more, NULL);
if (!buf)
break;
- skb_free_frag(buf);
+ mt76_put_page_pool_buf(buf, false);
} while (1);
- spin_unlock_bh(&q->lock);
- if (!q->rx_page.va)
- return;
+ if (q->rx_head) {
+ dev_kfree_skb(q->rx_head);
+ q->rx_head = NULL;
+ }
- page = virt_to_page(q->rx_page.va);
- __page_frag_cache_drain(page, q->rx_page.pagecnt_bias);
- memset(&q->rx_page, 0, sizeof(q->rx_page));
+ spin_unlock_bh(&q->lock);
}
static void
@@ -767,14 +758,13 @@ mt76_dma_rx_reset(struct mt76_dev *dev, enum mt76_rxq_id qid)
q->desc[i].ctrl = cpu_to_le32(MT_DMA_CTL_DMA_DONE);
mt76_dma_rx_cleanup(dev, q);
- mt76_dma_sync_idx(dev, q);
- mt76_dma_rx_fill(dev, q);
-
- if (!q->rx_head)
- return;
- dev_kfree_skb(q->rx_head);
- q->rx_head = NULL;
+ /* reset WED rx queues */
+ mt76_dma_wed_setup(dev, q, true);
+ if (q->flags != MT_WED_Q_TXFREE) {
+ mt76_dma_sync_idx(dev, q);
+ mt76_dma_rx_fill(dev, q, false);
+ }
}
static void
@@ -791,7 +781,7 @@ mt76_add_fragment(struct mt76_dev *dev, struct mt76_queue *q, void *data,
skb_add_rx_frag(skb, nr_frags, page, offset, len, q->buf_size);
} else {
- skb_free_frag(data);
+ mt76_put_page_pool_buf(data, true);
}
if (more)
@@ -864,6 +854,7 @@ mt76_dma_rx_process(struct mt76_dev *dev, struct mt76_queue *q, int budget)
goto free_frag;
skb_reserve(skb, q->buf_offset);
+ skb_mark_for_recycle(skb);
*(u32 *)skb->cb = info;
@@ -879,10 +870,10 @@ mt76_dma_rx_process(struct mt76_dev *dev, struct mt76_queue *q, int budget)
continue;
free_frag:
- skb_free_frag(data);
+ mt76_put_page_pool_buf(data, true);
}
- mt76_dma_rx_fill(dev, q);
+ mt76_dma_rx_fill(dev, q, true);
return done;
}
@@ -922,10 +913,12 @@ mt76_dma_init(struct mt76_dev *dev,
snprintf(dev->napi_dev.name, sizeof(dev->napi_dev.name), "%s",
wiphy_name(dev->hw->wiphy));
dev->napi_dev.threaded = 1;
+ init_completion(&dev->mmio.wed_reset);
+ init_completion(&dev->mmio.wed_reset_complete);
mt76_for_each_q_rx(dev, i) {
netif_napi_add(&dev->napi_dev, &dev->napi[i], poll);
- mt76_dma_rx_fill(dev, &dev->q_rx[i]);
+ mt76_dma_rx_fill(dev, &dev->q_rx[i], false);
napi_enable(&dev->napi[i]);
}
@@ -975,8 +968,9 @@ void mt76_dma_cleanup(struct mt76_dev *dev)
struct mt76_queue *q = &dev->q_rx[i];
netif_napi_del(&dev->napi[i]);
- if (FIELD_GET(MT_QFLAG_WED_TYPE, q->flags))
- mt76_dma_rx_cleanup(dev, q);
+ mt76_dma_rx_cleanup(dev, q);
+
+ page_pool_destroy(q->page_pool);
}
mt76_free_pending_txwi(dev);
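The dma.c conversion above swaps the per-queue page_frag_cache plus manual dma_map_single()/dma_unmap_single() calls for page_pool-managed buffers: with PP_FLAG_DMA_MAP the pool keeps pages mapped for their whole lifetime, so the per-buffer cost drops to a cache sync in each direction and freed fragments are recycled rather than released. A hedged sketch of that fill/consume cycle using the same page_pool calls (assumes a pool created with DMA mapping enabled, as mt76_create_page_pool() does for MMIO devices):

#include <net/page_pool.h>
#include <linux/dma-mapping.h>
#include <linux/mm.h>

static void *rx_fill_one(struct page_pool *pool, struct device *dma_dev,
			 u32 size, dma_addr_t *addr)
{
	u32 offset;
	struct page *page;

	page = page_pool_dev_alloc_frag(pool, &offset, size);
	if (!page)
		return NULL;

	*addr = page_pool_get_dma_addr(page) + offset;
	/* hand ownership of the buffer to the device: sync, don't remap */
	dma_sync_single_for_device(dma_dev, *addr, size,
				   page_pool_get_dma_dir(pool));
	return page_address(page) + offset;
}

static void rx_consume_one(struct page_pool *pool, void *buf)
{
	/* return the fragment to the pool instead of freeing the page */
	page_pool_put_full_page(pool, virt_to_head_page(buf), true);
}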
diff --git a/drivers/net/wireless/mediatek/mt76/dma.h b/drivers/net/wireless/mediatek/mt76/dma.h
index 53c6ce2528b2..4b9bc7f462b8 100644
--- a/drivers/net/wireless/mediatek/mt76/dma.h
+++ b/drivers/net/wireless/mediatek/mt76/dma.h
@@ -56,5 +56,6 @@ enum mt76_mcu_evt_type {
int mt76_dma_rx_poll(struct napi_struct *napi, int budget);
void mt76_dma_attach(struct mt76_dev *dev);
void mt76_dma_cleanup(struct mt76_dev *dev);
+int mt76_dma_wed_setup(struct mt76_dev *dev, struct mt76_queue *q, bool reset);
#endif
diff --git a/drivers/net/wireless/mediatek/mt76/eeprom.c b/drivers/net/wireless/mediatek/mt76/eeprom.c
index 9bc8758573fc..dce851d42e08 100644
--- a/drivers/net/wireless/mediatek/mt76/eeprom.c
+++ b/drivers/net/wireless/mediatek/mt76/eeprom.c
@@ -138,6 +138,7 @@ mt76_find_power_limits_node(struct mt76_dev *dev)
{
struct device_node *np = dev->dev->of_node;
const char *const region_names[] = {
+ [NL80211_DFS_UNSET] = "ww",
[NL80211_DFS_ETSI] = "etsi",
[NL80211_DFS_FCC] = "fcc",
[NL80211_DFS_JP] = "jp",
diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
index fc608b369b3c..b117e4467c87 100644
--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
+++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
@@ -4,6 +4,7 @@
*/
#include <linux/sched.h>
#include <linux/of.h>
+#include <net/page_pool.h>
#include "mt76.h"
#define CHAN2G(_idx, _freq) { \
@@ -192,42 +193,48 @@ static const struct cfg80211_sar_capa mt76_sar_capa = {
.freq_ranges = &mt76_sar_freq_ranges[0],
};
-static int mt76_led_init(struct mt76_dev *dev)
+static int mt76_led_init(struct mt76_phy *phy)
{
- struct device_node *np = dev->dev->of_node;
- struct ieee80211_hw *hw = dev->hw;
- int led_pin;
+ struct mt76_dev *dev = phy->dev;
+ struct ieee80211_hw *hw = phy->hw;
- if (!dev->led_cdev.brightness_set && !dev->led_cdev.blink_set)
+ if (!phy->leds.cdev.brightness_set && !phy->leds.cdev.blink_set)
return 0;
- snprintf(dev->led_name, sizeof(dev->led_name),
- "mt76-%s", wiphy_name(hw->wiphy));
+ snprintf(phy->leds.name, sizeof(phy->leds.name), "mt76-%s",
+ wiphy_name(hw->wiphy));
- dev->led_cdev.name = dev->led_name;
- dev->led_cdev.default_trigger =
+ phy->leds.cdev.name = phy->leds.name;
+ phy->leds.cdev.default_trigger =
ieee80211_create_tpt_led_trigger(hw,
IEEE80211_TPT_LEDTRIG_FL_RADIO,
mt76_tpt_blink,
ARRAY_SIZE(mt76_tpt_blink));
- np = of_get_child_by_name(np, "led");
- if (np) {
- if (!of_property_read_u32(np, "led-sources", &led_pin))
- dev->led_pin = led_pin;
- dev->led_al = of_property_read_bool(np, "led-active-low");
- of_node_put(np);
+ if (phy == &dev->phy) {
+ struct device_node *np = dev->dev->of_node;
+
+ np = of_get_child_by_name(np, "led");
+ if (np) {
+ int led_pin;
+
+ if (!of_property_read_u32(np, "led-sources", &led_pin))
+ phy->leds.pin = led_pin;
+ phy->leds.al = of_property_read_bool(np,
+ "led-active-low");
+ of_node_put(np);
+ }
}
- return led_classdev_register(dev->dev, &dev->led_cdev);
+ return led_classdev_register(dev->dev, &phy->leds.cdev);
}
-static void mt76_led_cleanup(struct mt76_dev *dev)
+static void mt76_led_cleanup(struct mt76_phy *phy)
{
- if (!dev->led_cdev.brightness_set && !dev->led_cdev.blink_set)
+ if (!phy->leds.cdev.brightness_set && !phy->leds.cdev.blink_set)
return;
- led_classdev_unregister(&dev->led_cdev);
+ led_classdev_unregister(&phy->leds.cdev);
}
static void mt76_init_stream_cap(struct mt76_phy *phy,
@@ -517,6 +524,12 @@ int mt76_register_phy(struct mt76_phy *phy, bool vht,
return ret;
}
+ if (IS_ENABLED(CONFIG_MT76_LEDS)) {
+ ret = mt76_led_init(phy);
+ if (ret)
+ return ret;
+ }
+
wiphy_read_of_freq_limits(phy->hw->wiphy);
mt76_check_sband(phy, &phy->sband_2g, NL80211_BAND_2GHZ);
mt76_check_sband(phy, &phy->sband_5g, NL80211_BAND_5GHZ);
@@ -536,12 +549,55 @@ void mt76_unregister_phy(struct mt76_phy *phy)
{
struct mt76_dev *dev = phy->dev;
+ if (IS_ENABLED(CONFIG_MT76_LEDS))
+ mt76_led_cleanup(phy);
mt76_tx_status_check(dev, true);
ieee80211_unregister_hw(phy->hw);
dev->phys[phy->band_idx] = NULL;
}
EXPORT_SYMBOL_GPL(mt76_unregister_phy);
+int mt76_create_page_pool(struct mt76_dev *dev, struct mt76_queue *q)
+{
+ struct page_pool_params pp_params = {
+ .order = 0,
+ .flags = PP_FLAG_PAGE_FRAG,
+ .nid = NUMA_NO_NODE,
+ .dev = dev->dma_dev,
+ };
+ int idx = q - dev->q_rx;
+
+ switch (idx) {
+ case MT_RXQ_MAIN:
+ case MT_RXQ_BAND1:
+ case MT_RXQ_BAND2:
+ pp_params.pool_size = 256;
+ break;
+ default:
+ pp_params.pool_size = 16;
+ break;
+ }
+
+ if (mt76_is_mmio(dev)) {
+ /* rely on page_pool for DMA mapping */
+ pp_params.flags |= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
+ pp_params.dma_dir = DMA_FROM_DEVICE;
+ pp_params.max_len = PAGE_SIZE;
+ pp_params.offset = 0;
+ }
+
+ q->page_pool = page_pool_create(&pp_params);
+ if (IS_ERR(q->page_pool)) {
+ int err = PTR_ERR(q->page_pool);
+
+ q->page_pool = NULL;
+ return err;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(mt76_create_page_pool);
+
struct mt76_dev *
mt76_alloc_device(struct device *pdev, unsigned int size,
const struct ieee80211_ops *ops,
@@ -653,7 +709,7 @@ int mt76_register_device(struct mt76_dev *dev, bool vht,
mt76_check_sband(&dev->phy, &phy->sband_6g, NL80211_BAND_6GHZ);
if (IS_ENABLED(CONFIG_MT76_LEDS)) {
- ret = mt76_led_init(dev);
+ ret = mt76_led_init(phy);
if (ret)
return ret;
}
@@ -674,7 +730,7 @@ void mt76_unregister_device(struct mt76_dev *dev)
struct ieee80211_hw *hw = dev->hw;
if (IS_ENABLED(CONFIG_MT76_LEDS))
- mt76_led_cleanup(dev);
+ mt76_led_cleanup(&dev->phy);
mt76_tx_status_check(dev, true);
ieee80211_unregister_hw(hw);
}
@@ -1644,7 +1700,7 @@ u16 mt76_calculate_default_rate(struct mt76_phy *phy, int rateidx)
EXPORT_SYMBOL_GPL(mt76_calculate_default_rate);
void mt76_ethtool_worker(struct mt76_ethtool_worker_info *wi,
- struct mt76_sta_stats *stats)
+ struct mt76_sta_stats *stats, bool eht)
{
int i, ei = wi->initial_stat_idx;
u64 *data = wi->data;
@@ -1660,17 +1716,37 @@ void mt76_ethtool_worker(struct mt76_ethtool_worker_info *wi,
data[ei++] += stats->tx_mode[MT_PHY_TYPE_HE_EXT_SU];
data[ei++] += stats->tx_mode[MT_PHY_TYPE_HE_TB];
data[ei++] += stats->tx_mode[MT_PHY_TYPE_HE_MU];
+ if (eht) {
+ data[ei++] += stats->tx_mode[MT_PHY_TYPE_EHT_SU];
+ data[ei++] += stats->tx_mode[MT_PHY_TYPE_EHT_TRIG];
+ data[ei++] += stats->tx_mode[MT_PHY_TYPE_EHT_MU];
+ }
- for (i = 0; i < ARRAY_SIZE(stats->tx_bw); i++)
+ for (i = 0; i < (ARRAY_SIZE(stats->tx_bw) - !eht); i++)
data[ei++] += stats->tx_bw[i];
- for (i = 0; i < 12; i++)
+ for (i = 0; i < (eht ? 14 : 12); i++)
data[ei++] += stats->tx_mcs[i];
wi->worker_stat_count = ei - wi->initial_stat_idx;
}
EXPORT_SYMBOL_GPL(mt76_ethtool_worker);
+void mt76_ethtool_page_pool_stats(struct mt76_dev *dev, u64 *data, int *index)
+{
+#ifdef CONFIG_PAGE_POOL_STATS
+ struct page_pool_stats stats = {};
+ int i;
+
+ mt76_for_each_q_rx(dev, i)
+ page_pool_get_stats(dev->q_rx[i].page_pool, &stats);
+
+ page_pool_ethtool_stats_get(data, &stats);
+ *index += page_pool_ethtool_stats_get_count();
+#endif
+}
+EXPORT_SYMBOL_GPL(mt76_ethtool_page_pool_stats);
+
enum mt76_dfs_state mt76_phy_dfs_state(struct mt76_phy *phy)
{
struct ieee80211_hw *hw = phy->hw;
diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
index 32a77a0ae9da..ccca0162c8f8 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76.h
+++ b/drivers/net/wireless/mediatek/mt76/mt76.h
@@ -202,7 +202,7 @@ struct mt76_queue {
dma_addr_t desc_dma;
struct sk_buff *rx_head;
- struct page_frag_cache rx_page;
+ struct page_pool *page_pool;
};
struct mt76_mcu_ops {
@@ -264,12 +264,15 @@ enum mt76_phy_type {
MT_PHY_TYPE_HE_EXT_SU,
MT_PHY_TYPE_HE_TB,
MT_PHY_TYPE_HE_MU,
- __MT_PHY_TYPE_HE_MAX,
+ MT_PHY_TYPE_EHT_SU = 13,
+ MT_PHY_TYPE_EHT_TRIG,
+ MT_PHY_TYPE_EHT_MU,
+ __MT_PHY_TYPE_MAX,
};
struct mt76_sta_stats {
- u64 tx_mode[__MT_PHY_TYPE_HE_MAX];
- u64 tx_bw[4]; /* 20, 40, 80, 160 */
+ u64 tx_mode[__MT_PHY_TYPE_MAX];
+ u64 tx_bw[5]; /* 20, 40, 80, 160, 320 */
u64 tx_nss[4]; /* 1, 2, 3, 4 */
u64 tx_mcs[16]; /* mcs idx */
u64 tx_bytes;
@@ -291,7 +294,7 @@ enum mt76_wcid_flags {
MT_WCID_FLAG_HDR_TRANS,
};
-#define MT76_N_WCIDS 544
+#define MT76_N_WCIDS 1088
/* stored in ieee80211_tx_info::hw_queue */
#define MT_TX_HW_QUEUE_PHY GENMASK(3, 2)
@@ -413,6 +416,7 @@ enum {
MT76_STATE_SUSPEND,
MT76_STATE_ROC,
MT76_STATE_PM,
+ MT76_STATE_WED_RESET,
};
struct mt76_hw_cap {
@@ -591,6 +595,8 @@ struct mt76_mmio {
u32 irqmask;
struct mtk_wed_device wed;
+ struct completion wed_reset;
+ struct completion wed_reset_complete;
};
struct mt76_rx_status {
@@ -731,6 +737,13 @@ struct mt76_phy {
} rx_amsdu[__MT_RXQ_MAX];
struct mt76_freq_range_power *frp;
+
+ struct {
+ struct led_classdev cdev;
+ char name[32];
+ bool al;
+ u8 pin;
+ } leds;
};
struct mt76_dev {
@@ -812,11 +825,6 @@ struct mt76_dev {
u32 debugfs_reg;
- struct led_classdev led_cdev;
- char led_name[32];
- bool led_al;
- u8 led_pin;
-
u8 csa_complete;
u32 rxfilter;
@@ -886,7 +894,6 @@ extern struct ieee80211_rate mt76_rates[12];
#define mt76_mcu_restart(dev, ...) (dev)->mt76.mcu_ops->mcu_restart(&((dev)->mt76))
-#define __mt76_mcu_restart(dev, ...) (dev)->mcu_ops->mcu_restart((dev))
#define mt76_set(dev, offset, val) mt76_rmw(dev, offset, 0, val)
#define mt76_clear(dev, offset, val) mt76_rmw(dev, offset, val, 0)
@@ -907,10 +914,11 @@ bool __mt76_poll(struct mt76_dev *dev, u32 offset, u32 mask, u32 val,
#define mt76_poll(dev, ...) __mt76_poll(&((dev)->mt76), __VA_ARGS__)
-bool __mt76_poll_msec(struct mt76_dev *dev, u32 offset, u32 mask, u32 val,
- int timeout);
-
-#define mt76_poll_msec(dev, ...) __mt76_poll_msec(&((dev)->mt76), __VA_ARGS__)
+bool ____mt76_poll_msec(struct mt76_dev *dev, u32 offset, u32 mask, u32 val,
+ int timeout, int kick);
+#define __mt76_poll_msec(...) ____mt76_poll_msec(__VA_ARGS__, 10)
+#define mt76_poll_msec(dev, ...) ____mt76_poll_msec(&((dev)->mt76), __VA_ARGS__, 10)
+#define mt76_poll_msec_tick(dev, ...) ____mt76_poll_msec(&((dev)->mt76), __VA_ARGS__)
void mt76_mmio_init(struct mt76_dev *dev, void __iomem *regs);
void mt76_pci_disable_aspm(struct pci_dev *pdev);
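The reworked poll helpers above use a variadic-macro trick to add an optional trailing argument: ____mt76_poll_msec() takes an explicit per-iteration tick, and the wrapper macros append a default of 10 after __VA_ARGS__, so existing mt76_poll_msec() callers keep their 10 ms behavior while mt76_poll_msec_tick() callers choose their own. A standalone demo of the trick:

#include <stdio.h>

static int poll_impl(int timeout, int tick)
{
	printf("timeout=%d tick=%d\n", timeout, tick);
	return 1;
}

/* appending a constant after __VA_ARGS__ supplies the default tick */
#define poll_msec(...)      poll_impl(__VA_ARGS__, 10)
#define poll_msec_tick(...) poll_impl(__VA_ARGS__)

int main(void)
{
	poll_msec(2000);          /* expands to poll_impl(2000, 10) */
	poll_msec_tick(2000, 1);  /* caller-chosen 1 ms tick */
	return 0;
}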
@@ -1267,6 +1275,7 @@ mt76_tx_status_get_hw(struct mt76_dev *dev, struct sk_buff *skb)
void mt76_put_txwi(struct mt76_dev *dev, struct mt76_txwi_cache *t);
void mt76_put_rxwi(struct mt76_dev *dev, struct mt76_txwi_cache *t);
struct mt76_txwi_cache *mt76_get_rxwi(struct mt76_dev *dev);
+void mt76_free_pending_rxwi(struct mt76_dev *dev);
void mt76_rx_complete(struct mt76_dev *dev, struct sk_buff_head *frames,
struct napi_struct *napi);
void mt76_rx_poll_complete(struct mt76_dev *dev, enum mt76_rxq_id q,
@@ -1309,8 +1318,9 @@ mt76u_bulk_msg(struct mt76_dev *dev, void *data, int len, int *actual_len,
return usb_bulk_msg(udev, pipe, data, len, actual_len, timeout);
}
+void mt76_ethtool_page_pool_stats(struct mt76_dev *dev, u64 *data, int *index);
void mt76_ethtool_worker(struct mt76_ethtool_worker_info *wi,
- struct mt76_sta_stats *stats);
+ struct mt76_sta_stats *stats, bool eht);
int mt76_skb_adjust_pad(struct sk_buff *skb, int pad);
int __mt76u_vendor_request(struct mt76_dev *dev, u8 req, u8 req_type,
u16 val, u16 offset, void *buf, size_t len);
@@ -1407,6 +1417,12 @@ s8 mt76_get_rate_power_limits(struct mt76_phy *phy,
struct mt76_power_limits *dest,
s8 target_power);
+static inline bool mt76_queue_is_wed_rx(struct mt76_queue *q)
+{
+ return (q->flags & MT_QFLAG_WED) &&
+ FIELD_GET(MT_QFLAG_WED_TYPE, q->flags) == MT76_WED_Q_RX;
+}
+
struct mt76_txwi_cache *
mt76_token_release(struct mt76_dev *dev, int token, bool *wake);
int mt76_token_consume(struct mt76_dev *dev, struct mt76_txwi_cache **ptxwi);
@@ -1414,6 +1430,25 @@ void __mt76_set_tx_blocked(struct mt76_dev *dev, bool blocked);
struct mt76_txwi_cache *mt76_rx_token_release(struct mt76_dev *dev, int token);
int mt76_rx_token_consume(struct mt76_dev *dev, void *ptr,
struct mt76_txwi_cache *r, dma_addr_t phys);
+int mt76_create_page_pool(struct mt76_dev *dev, struct mt76_queue *q);
+static inline void mt76_put_page_pool_buf(void *buf, bool allow_direct)
+{
+ struct page *page = virt_to_head_page(buf);
+
+ page_pool_put_full_page(page->pp, page, allow_direct);
+}
+
+static inline void *
+mt76_get_page_pool_buf(struct mt76_queue *q, u32 *offset, u32 size)
+{
+ struct page *page;
+
+ page = page_pool_dev_alloc_frag(q->page_pool, offset, size);
+ if (!page)
+ return NULL;
+
+ return page_address(page) + *offset;
+}
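These two inline helpers pair allocation and free without any (queue, pool) bookkeeping at the call site: the alloc side returns a virtual address inside a pool-owned page, and the free side finds the owning pool through the page's pp back-pointer. A hedged usage sketch (rx_refill_example is hypothetical; it mirrors the shape of mt76_dma_rx_fill() earlier in this patch and assumes the mt76.h context above):

static int rx_refill_example(struct mt76_queue *q)
{
	u32 offset;
	void *buf;

	buf = mt76_get_page_pool_buf(q, &offset, q->buf_size);
	if (!buf)
		return -ENOMEM;

	/* ... program the rx descriptor; if that fails, hand the
	 * fragment straight back to the pool ...
	 */
	mt76_put_page_pool_buf(buf, false);
	return 0;
}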
static inline void mt76_set_tx_blocked(struct mt76_dev *dev, bool blocked)
{
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/init.c b/drivers/net/wireless/mediatek/mt76/mt7603/init.c
index 031d39a48a55..9a2e632d577a 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/init.c
@@ -330,10 +330,10 @@ static const struct ieee80211_iface_combination if_comb[] = {
}
};
-static void mt7603_led_set_config(struct mt76_dev *mt76, u8 delay_on,
+static void mt7603_led_set_config(struct mt76_phy *mphy, u8 delay_on,
u8 delay_off)
{
- struct mt7603_dev *dev = container_of(mt76, struct mt7603_dev,
+ struct mt7603_dev *dev = container_of(mphy->dev, struct mt7603_dev,
mt76);
u32 val, addr;
@@ -341,15 +341,15 @@ static void mt7603_led_set_config(struct mt76_dev *mt76, u8 delay_on,
FIELD_PREP(MT_LED_STATUS_OFF, delay_off) |
FIELD_PREP(MT_LED_STATUS_ON, delay_on);
- addr = mt7603_reg_map(dev, MT_LED_STATUS_0(mt76->led_pin));
+ addr = mt7603_reg_map(dev, MT_LED_STATUS_0(mphy->leds.pin));
mt76_wr(dev, addr, val);
- addr = mt7603_reg_map(dev, MT_LED_STATUS_1(mt76->led_pin));
+ addr = mt7603_reg_map(dev, MT_LED_STATUS_1(mphy->leds.pin));
mt76_wr(dev, addr, val);
- val = MT_LED_CTRL_REPLAY(mt76->led_pin) |
- MT_LED_CTRL_KICK(mt76->led_pin);
- if (mt76->led_al)
- val |= MT_LED_CTRL_POLARITY(mt76->led_pin);
+ val = MT_LED_CTRL_REPLAY(mphy->leds.pin) |
+ MT_LED_CTRL_KICK(mphy->leds.pin);
+ if (mphy->leds.al)
+ val |= MT_LED_CTRL_POLARITY(mphy->leds.pin);
addr = mt7603_reg_map(dev, MT_LED_CTRL);
mt76_wr(dev, addr, val);
}
@@ -358,27 +358,27 @@ static int mt7603_led_set_blink(struct led_classdev *led_cdev,
unsigned long *delay_on,
unsigned long *delay_off)
{
- struct mt76_dev *mt76 = container_of(led_cdev, struct mt76_dev,
- led_cdev);
+ struct mt76_phy *mphy = container_of(led_cdev, struct mt76_phy,
+ leds.cdev);
u8 delta_on, delta_off;
delta_off = max_t(u8, *delay_off / 10, 1);
delta_on = max_t(u8, *delay_on / 10, 1);
- mt7603_led_set_config(mt76, delta_on, delta_off);
+ mt7603_led_set_config(mphy, delta_on, delta_off);
return 0;
}
static void mt7603_led_set_brightness(struct led_classdev *led_cdev,
enum led_brightness brightness)
{
- struct mt76_dev *mt76 = container_of(led_cdev, struct mt76_dev,
- led_cdev);
+ struct mt76_phy *mphy = container_of(led_cdev, struct mt76_phy,
+ leds.cdev);
if (!brightness)
- mt7603_led_set_config(mt76, 0, 0xff);
+ mt7603_led_set_config(mphy, 0, 0xff);
else
- mt7603_led_set_config(mt76, 0xff, 0);
+ mt7603_led_set_config(mphy, 0xff, 0);
}
static u32 __mt7603_reg_addr(struct mt7603_dev *dev, u32 addr)
@@ -535,8 +535,8 @@ int mt7603_register_device(struct mt7603_dev *dev)
/* init led callbacks */
if (IS_ENABLED(CONFIG_MT76_LEDS)) {
- dev->mt76.led_cdev.brightness_set = mt7603_led_set_brightness;
- dev->mt76.led_cdev.blink_set = mt7603_led_set_blink;
+ dev->mphy.leds.cdev.brightness_set = mt7603_led_set_brightness;
+ dev->mphy.leds.cdev.blink_set = mt7603_led_set_blink;
}
wiphy->reg_notifier = mt7603_regd_notifier;
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7603/mcu.c
index 7884b952b720..301668c3cc92 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/mcu.c
@@ -221,7 +221,6 @@ int mt7603_mcu_init(struct mt7603_dev *dev)
.headroom = sizeof(struct mt7603_mcu_txd),
.mcu_skb_send_msg = mt7603_mcu_skb_send_msg,
.mcu_parse_response = mt7603_mcu_parse_response,
- .mcu_restart = mt7603_mcu_restart,
};
dev->mt76.mcu_ops = &mt7603_mcu_ops;
@@ -230,7 +229,7 @@ int mt7603_mcu_init(struct mt7603_dev *dev)
void mt7603_mcu_exit(struct mt7603_dev *dev)
{
- __mt76_mcu_restart(&dev->mt76);
+ mt7603_mcu_restart(&dev->mt76);
skb_queue_purge(&dev->mt76.mcu.res_q);
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/init.c b/drivers/net/wireless/mediatek/mt76/mt7615/init.c
index 07a1fea94f66..5fa6f097ec30 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/init.c
@@ -443,6 +443,85 @@ mt7615_cap_dbdc_disable(struct mt7615_dev *dev)
mt76_set_stream_caps(&dev->mphy, true);
}
+u32 mt7615_reg_map(struct mt7615_dev *dev, u32 addr)
+{
+ u32 base, offset;
+
+ if (is_mt7663(&dev->mt76)) {
+ base = addr & MT7663_MCU_PCIE_REMAP_2_BASE;
+ offset = addr & MT7663_MCU_PCIE_REMAP_2_OFFSET;
+ } else {
+ base = addr & MT_MCU_PCIE_REMAP_2_BASE;
+ offset = addr & MT_MCU_PCIE_REMAP_2_OFFSET;
+ }
+ mt76_wr(dev, MT_MCU_PCIE_REMAP_2, base);
+
+ return MT_PCIE_REMAP_BASE_2 + offset;
+}
+EXPORT_SYMBOL_GPL(mt7615_reg_map);
+
+static void
+mt7615_led_set_config(struct led_classdev *led_cdev,
+ u8 delay_on, u8 delay_off)
+{
+ struct mt7615_dev *dev;
+ struct mt76_phy *mphy;
+ u32 val, addr;
+ u8 index;
+
+ mphy = container_of(led_cdev, struct mt76_phy, leds.cdev);
+ dev = container_of(mphy->dev, struct mt7615_dev, mt76);
+
+ if (!mt76_connac_pm_ref(mphy, &dev->pm))
+ return;
+
+ val = FIELD_PREP(MT_LED_STATUS_DURATION, 0xffff) |
+ FIELD_PREP(MT_LED_STATUS_OFF, delay_off) |
+ FIELD_PREP(MT_LED_STATUS_ON, delay_on);
+
+ index = dev->dbdc_support ? mphy->band_idx : mphy->leds.pin;
+ addr = mt7615_reg_map(dev, MT_LED_STATUS_0(index));
+ mt76_wr(dev, addr, val);
+ addr = mt7615_reg_map(dev, MT_LED_STATUS_1(index));
+ mt76_wr(dev, addr, val);
+
+ val = MT_LED_CTRL_REPLAY(index) | MT_LED_CTRL_KICK(index);
+ if (dev->mphy.leds.al)
+ val |= MT_LED_CTRL_POLARITY(index);
+ if (mphy->band_idx)
+ val |= MT_LED_CTRL_BAND(index);
+
+ addr = mt7615_reg_map(dev, MT_LED_CTRL);
+ mt76_wr(dev, addr, val);
+
+ mt76_connac_pm_unref(mphy, &dev->pm);
+}
+
+int mt7615_led_set_blink(struct led_classdev *led_cdev,
+ unsigned long *delay_on,
+ unsigned long *delay_off)
+{
+ u8 delta_on, delta_off;
+
+ delta_off = max_t(u8, *delay_off / 10, 1);
+ delta_on = max_t(u8, *delay_on / 10, 1);
+
+ mt7615_led_set_config(led_cdev, delta_on, delta_off);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(mt7615_led_set_blink);
+
+void mt7615_led_set_brightness(struct led_classdev *led_cdev,
+ enum led_brightness brightness)
+{
+ if (!brightness)
+ mt7615_led_set_config(led_cdev, 0, 0xff);
+ else
+ mt7615_led_set_config(led_cdev, 0xff, 0);
+}
+EXPORT_SYMBOL_GPL(mt7615_led_set_brightness);
+
int mt7615_register_ext_phy(struct mt7615_dev *dev)
{
struct mt7615_phy *phy = mt7615_ext_phy(dev);
@@ -497,6 +576,12 @@ int mt7615_register_ext_phy(struct mt7615_dev *dev)
for (i = 0; i <= MT_TXQ_PSD ; i++)
mphy->q_tx[i] = dev->mphy.q_tx[i];
+ /* init led callbacks */
+ if (IS_ENABLED(CONFIG_MT76_LEDS)) {
+ mphy->leds.cdev.brightness_set = mt7615_led_set_brightness;
+ mphy->leds.cdev.blink_set = mt7615_led_set_blink;
+ }
+
ret = mt76_register_phy(mphy, true, mt76_rates,
ARRAY_SIZE(mt76_rates));
if (ret)
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
index 83f30305414d..eea398c79a98 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
@@ -1692,7 +1692,6 @@ int mt7615_mcu_init(struct mt7615_dev *dev)
.headroom = sizeof(struct mt7615_mcu_txd),
.mcu_skb_send_msg = mt7615_mcu_send_message,
.mcu_parse_response = mt7615_mcu_parse_response,
- .mcu_restart = mt7615_mcu_restart,
};
int ret;
@@ -1732,7 +1731,7 @@ EXPORT_SYMBOL_GPL(mt7615_mcu_init);
void mt7615_mcu_exit(struct mt7615_dev *dev)
{
- __mt76_mcu_restart(&dev->mt76);
+ mt7615_mcu_restart(&dev->mt76);
mt7615_mcu_set_fw_ctrl(dev);
skb_queue_purge(&dev->mt76.mcu.res_q);
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7615/mmio.c
index a784f9d9e935..83173efb56dc 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/mmio.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/mmio.c
@@ -63,22 +63,6 @@ const u32 mt7663e_reg_map[] = {
[MT_EFUSE_ADDR_BASE] = 0x78011000,
};
-u32 mt7615_reg_map(struct mt7615_dev *dev, u32 addr)
-{
- u32 base, offset;
-
- if (is_mt7663(&dev->mt76)) {
- base = addr & MT7663_MCU_PCIE_REMAP_2_BASE;
- offset = addr & MT7663_MCU_PCIE_REMAP_2_OFFSET;
- } else {
- base = addr & MT_MCU_PCIE_REMAP_2_BASE;
- offset = addr & MT_MCU_PCIE_REMAP_2_OFFSET;
- }
- mt76_wr(dev, MT_MCU_PCIE_REMAP_2, base);
-
- return MT_PCIE_REMAP_BASE_2 + offset;
-}
-
static void
mt7615_rx_poll_complete(struct mt76_dev *mdev, enum mt76_rxq_id q)
{
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
index 087d4886162e..43591b4c1d9a 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
@@ -376,6 +376,12 @@ int mt7615_mmio_probe(struct device *pdev, void __iomem *mem_base,
int irq, const u32 *map);
u32 mt7615_reg_map(struct mt7615_dev *dev, u32 addr);
+u32 mt7615_reg_map(struct mt7615_dev *dev, u32 addr);
+int mt7615_led_set_blink(struct led_classdev *led_cdev,
+ unsigned long *delay_on,
+ unsigned long *delay_off);
+void mt7615_led_set_brightness(struct led_classdev *led_cdev,
+ enum led_brightness brightness);
void mt7615_init_device(struct mt7615_dev *dev);
int mt7615_register_device(struct mt7615_dev *dev);
void mt7615_unregister_device(struct mt7615_dev *dev);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c b/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
index 87b4aa52ee0f..0680e002b981 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
@@ -66,64 +66,6 @@ static int mt7615_init_hardware(struct mt7615_dev *dev)
return 0;
}
-static void
-mt7615_led_set_config(struct led_classdev *led_cdev,
- u8 delay_on, u8 delay_off)
-{
- struct mt7615_dev *dev;
- struct mt76_dev *mt76;
- u32 val, addr;
-
- mt76 = container_of(led_cdev, struct mt76_dev, led_cdev);
- dev = container_of(mt76, struct mt7615_dev, mt76);
-
- if (!mt76_connac_pm_ref(&dev->mphy, &dev->pm))
- return;
-
- val = FIELD_PREP(MT_LED_STATUS_DURATION, 0xffff) |
- FIELD_PREP(MT_LED_STATUS_OFF, delay_off) |
- FIELD_PREP(MT_LED_STATUS_ON, delay_on);
-
- addr = mt7615_reg_map(dev, MT_LED_STATUS_0(mt76->led_pin));
- mt76_wr(dev, addr, val);
- addr = mt7615_reg_map(dev, MT_LED_STATUS_1(mt76->led_pin));
- mt76_wr(dev, addr, val);
-
- val = MT_LED_CTRL_REPLAY(mt76->led_pin) |
- MT_LED_CTRL_KICK(mt76->led_pin);
- if (mt76->led_al)
- val |= MT_LED_CTRL_POLARITY(mt76->led_pin);
- addr = mt7615_reg_map(dev, MT_LED_CTRL);
- mt76_wr(dev, addr, val);
-
- mt76_connac_pm_unref(&dev->mphy, &dev->pm);
-}
-
-static int
-mt7615_led_set_blink(struct led_classdev *led_cdev,
- unsigned long *delay_on,
- unsigned long *delay_off)
-{
- u8 delta_on, delta_off;
-
- delta_off = max_t(u8, *delay_off / 10, 1);
- delta_on = max_t(u8, *delay_on / 10, 1);
-
- mt7615_led_set_config(led_cdev, delta_on, delta_off);
-
- return 0;
-}
-
-static void
-mt7615_led_set_brightness(struct led_classdev *led_cdev,
- enum led_brightness brightness)
-{
- if (!brightness)
- mt7615_led_set_config(led_cdev, 0, 0xff);
- else
- mt7615_led_set_config(led_cdev, 0xff, 0);
-}
-
int mt7615_register_device(struct mt7615_dev *dev)
{
int ret;
@@ -133,8 +75,8 @@ int mt7615_register_device(struct mt7615_dev *dev)
/* init led callbacks */
if (IS_ENABLED(CONFIG_MT76_LEDS)) {
- dev->mt76.led_cdev.brightness_set = mt7615_led_set_brightness;
- dev->mt76.led_cdev.blink_set = mt7615_led_set_blink;
+ dev->mphy.leds.cdev.brightness_set = mt7615_led_set_brightness;
+ dev->mphy.leds.cdev.blink_set = mt7615_led_set_blink;
}
ret = mt7622_wmac_init(dev);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/regs.h b/drivers/net/wireless/mediatek/mt76/mt7615/regs.h
index fa1b9b26b399..7cecb22c569e 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/regs.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/regs.h
@@ -544,6 +544,7 @@ enum mt7615_reg_base {
#define MT_LED_CTRL_POLARITY(_n) BIT(1 + (8 * (_n)))
#define MT_LED_CTRL_TX_BLINK_MODE(_n) BIT(2 + (8 * (_n)))
#define MT_LED_CTRL_TX_MANUAL_BLINK(_n) BIT(3 + (8 * (_n)))
+#define MT_LED_CTRL_BAND(_n) BIT(4 + (8 * (_n)))
#define MT_LED_CTRL_TX_OVER_BLINK(_n) BIT(5 + (8 * (_n)))
#define MT_LED_CTRL_KICK(_n) BIT(7 + (8 * (_n)))
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/sdio_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/sdio_mcu.c
index dc9a2f0b45a5..b0094205ba95 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/sdio_mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/sdio_mcu.c
@@ -137,7 +137,6 @@ int mt7663s_mcu_init(struct mt7615_dev *dev)
.tailroom = MT_USB_TAIL_SIZE,
.mcu_skb_send_msg = mt7663s_mcu_send_message,
.mcu_parse_response = mt7615_mcu_parse_response,
- .mcu_restart = mt7615_mcu_restart,
.mcu_rr = mt76_connac_mcu_reg_rr,
.mcu_wr = mt76_connac_mcu_reg_wr,
};
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/usb_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/usb_mcu.c
index 98bf2f6ae936..a8b1a0f8b2d7 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/usb_mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/usb_mcu.c
@@ -69,7 +69,6 @@ int mt7663u_mcu_init(struct mt7615_dev *dev)
.tailroom = MT_USB_TAIL_SIZE,
.mcu_skb_send_msg = mt7663u_mcu_send_message,
.mcu_parse_response = mt7615_mcu_parse_response,
- .mcu_restart = mt7615_mcu_restart,
};
int ret;
diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac.h b/drivers/net/wireless/mediatek/mt76/mt76_connac.h
index 8ba883b03e50..b339c50bff20 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76_connac.h
+++ b/drivers/net/wireless/mediatek/mt76/mt76_connac.h
@@ -42,6 +42,7 @@ enum {
CMD_CBW_10MHZ,
CMD_CBW_5MHZ,
CMD_CBW_8080MHZ,
+ CMD_CBW_320MHZ,
CMD_HE_MCS_BW80 = 0,
CMD_HE_MCS_BW160,
@@ -239,6 +240,7 @@ static inline u8 mt76_connac_chan_bw(struct cfg80211_chan_def *chandef)
[NL80211_CHAN_WIDTH_10] = CMD_CBW_10MHZ,
[NL80211_CHAN_WIDTH_20] = CMD_CBW_20MHZ,
[NL80211_CHAN_WIDTH_20_NOHT] = CMD_CBW_20MHZ,
+ [NL80211_CHAN_WIDTH_320] = CMD_CBW_320MHZ,
};
if (chandef->width >= ARRAY_SIZE(width_to_bw))
@@ -370,6 +372,9 @@ void mt76_connac2_mac_write_txwi(struct mt76_dev *dev, __le32 *txwi,
struct sk_buff *skb, struct mt76_wcid *wcid,
struct ieee80211_key_conf *key, int pid,
enum mt76_txq_id qid, u32 changed);
+u16 mt76_connac2_mac_tx_rate_val(struct mt76_phy *mphy,
+ struct ieee80211_vif *vif,
+ bool beacon, bool mcast);
bool mt76_connac2_mac_fill_txs(struct mt76_dev *dev, struct mt76_wcid *wcid,
__le32 *txs_data);
bool mt76_connac2_mac_add_txs_skb(struct mt76_dev *dev, struct mt76_wcid *wcid,
diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
index fd60123fb284..aed4ee95fb2e 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
@@ -267,9 +267,9 @@ int mt76_connac_init_tx_queues(struct mt76_phy *phy, int idx, int n_desc,
}
EXPORT_SYMBOL_GPL(mt76_connac_init_tx_queues);
-static u16
-mt76_connac2_mac_tx_rate_val(struct mt76_phy *mphy, struct ieee80211_vif *vif,
- bool beacon, bool mcast)
+u16 mt76_connac2_mac_tx_rate_val(struct mt76_phy *mphy,
+ struct ieee80211_vif *vif,
+ bool beacon, bool mcast)
{
u8 mode = 0, band = mphy->chandef.chan->band;
int rateidx = 0, mcast_rate;
@@ -319,6 +319,7 @@ out:
return FIELD_PREP(MT_TX_RATE_IDX, rateidx) |
FIELD_PREP(MT_TX_RATE_MODE, mode);
}
+EXPORT_SYMBOL_GPL(mt76_connac2_mac_tx_rate_val);
static void
mt76_connac2_mac_write_txwi_8023(__le32 *txwi, struct sk_buff *skb,
@@ -930,7 +931,7 @@ int mt76_connac2_reverse_frag0_hdr_trans(struct ieee80211_vif *vif,
ether_addr_copy(hdr.addr4, eth_hdr->h_source);
break;
default:
- break;
+ return -EINVAL;
}
skb_pull(skb, hdr_offset + sizeof(struct ethhdr) - 2);
diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
index 5a047e630860..efb9bfaa187f 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
@@ -1329,6 +1329,40 @@ u8 mt76_connac_get_phy_mode(struct mt76_phy *phy, struct ieee80211_vif *vif,
}
EXPORT_SYMBOL_GPL(mt76_connac_get_phy_mode);
+u8 mt76_connac_get_phy_mode_ext(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ enum nl80211_band band)
+{
+ const struct ieee80211_sta_eht_cap *eht_cap;
+ struct ieee80211_supported_band *sband;
+ u8 mode = 0;
+
+ if (band == NL80211_BAND_6GHZ)
+ mode |= PHY_MODE_AX_6G;
+
+ sband = phy->hw->wiphy->bands[band];
+ eht_cap = ieee80211_get_eht_iftype_cap(sband, vif->type);
+
+ if (!eht_cap || !eht_cap->has_eht)
+ return mode;
+
+ switch (band) {
+ case NL80211_BAND_6GHZ:
+ mode |= PHY_MODE_BE_6G;
+ break;
+ case NL80211_BAND_5GHZ:
+ mode |= PHY_MODE_BE_5G;
+ break;
+ case NL80211_BAND_2GHZ:
+ mode |= PHY_MODE_BE_24G;
+ break;
+ default:
+ break;
+ }
+
+ return mode;
+}
+EXPORT_SYMBOL_GPL(mt76_connac_get_phy_mode_ext);
+
const struct ieee80211_sta_he_cap *
mt76_connac_get_he_phy_cap(struct mt76_phy *phy, struct ieee80211_vif *vif)
{
@@ -1341,6 +1375,18 @@ mt76_connac_get_he_phy_cap(struct mt76_phy *phy, struct ieee80211_vif *vif)
}
EXPORT_SYMBOL_GPL(mt76_connac_get_he_phy_cap);
+const struct ieee80211_sta_eht_cap *
+mt76_connac_get_eht_phy_cap(struct mt76_phy *phy, struct ieee80211_vif *vif)
+{
+ enum nl80211_band band = phy->chandef.chan->band;
+ struct ieee80211_supported_band *sband;
+
+ sband = phy->hw->wiphy->bands[band];
+
+ return ieee80211_get_eht_iftype_cap(sband, vif->type);
+}
+EXPORT_SYMBOL_GPL(mt76_connac_get_eht_phy_cap);
+
#define DEFAULT_HE_PE_DURATION 4
#define DEFAULT_HE_DURATION_RTS_THRES 1023
static void
diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
index f1e942b9a887..a5e6ee4daf92 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
@@ -793,6 +793,7 @@ enum {
STA_REC_PHY = 0x15,
STA_REC_HE_6G = 0x17,
STA_REC_HE_V2 = 0x19,
+ STA_REC_EHT = 0x22,
STA_REC_HDRT = 0x28,
STA_REC_HDR_TRANS = 0x2B,
STA_REC_MAX_NUM
@@ -882,12 +883,16 @@ enum {
#define PHY_MODE_AX_5G BIT(7)
#define PHY_MODE_AX_6G BIT(0) /* phymode_ext */
+#define PHY_MODE_BE_24G BIT(1)
+#define PHY_MODE_BE_5G BIT(2)
+#define PHY_MODE_BE_6G BIT(3)
#define MODE_CCK BIT(0)
#define MODE_OFDM BIT(1)
#define MODE_HT BIT(2)
#define MODE_VHT BIT(3)
#define MODE_HE BIT(4)
+#define MODE_EHT BIT(5)
#define STA_CAP_WMM BIT(0)
#define STA_CAP_SGI_20 BIT(4)
@@ -1171,6 +1176,7 @@ enum {
MCU_EXT_CMD_GET_MIB_INFO = 0x5a,
MCU_EXT_CMD_TXDPD_CAL = 0x60,
MCU_EXT_CMD_CAL_CACHE = 0x67,
+ MCU_EXT_CMD_RED_ENABLE = 0x68,
MCU_EXT_CMD_SET_RADAR_TH = 0x7c,
MCU_EXT_CMD_SET_RDD_PATTERN = 0x7d,
MCU_EXT_CMD_MWDS_SUPPORT = 0x80,
@@ -1198,7 +1204,8 @@ enum {
MCU_UNI_CMD_REPT_MUAR = 0x09,
MCU_UNI_CMD_WSYS_CONFIG = 0x0b,
MCU_UNI_CMD_REG_ACCESS = 0x0d,
- MCU_UNI_CMD_POWER_CREL = 0x0f,
+ MCU_UNI_CMD_CHIP_CONFIG = 0x0e,
+ MCU_UNI_CMD_POWER_CTRL = 0x0f,
MCU_UNI_CMD_RX_HDR_TRANS = 0x12,
MCU_UNI_CMD_SER = 0x13,
MCU_UNI_CMD_TWT = 0x14,
@@ -1238,6 +1245,7 @@ enum {
MCU_CE_CMD_TEST_CTRL = 0x01,
MCU_CE_CMD_START_HW_SCAN = 0x03,
MCU_CE_CMD_SET_PS_PROFILE = 0x05,
+ MCU_CE_CMD_SET_RX_FILTER = 0x0a,
MCU_CE_CMD_SET_CHAN_DOMAIN = 0x0f,
MCU_CE_CMD_SET_BSS_CONNECTED = 0x16,
MCU_CE_CMD_SET_BSS_ABORT = 0x17,
@@ -1730,7 +1738,7 @@ mt76_connac_mcu_gen_dl_mode(struct mt76_dev *dev, u8 feature_set, bool is_wa)
}
#define to_wcid_lo(id) FIELD_GET(GENMASK(7, 0), (u16)id)
-#define to_wcid_hi(id) FIELD_GET(GENMASK(9, 8), (u16)id)
+#define to_wcid_hi(id) FIELD_GET(GENMASK(10, 8), (u16)id)
static inline void
mt76_connac_mcu_get_wlan_idx(struct mt76_dev *dev, struct mt76_wcid *wcid,
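Widening to_wcid_hi() from GENMASK(9, 8) to GENMASK(10, 8) grows the encodable wlan index from 10 to 11 bits (0..2047), matching the larger station tables of newer chips. A worked example:

	u16 id = 0x5a3;			/* wcid 1443 */
	u8 lo = to_wcid_lo(id);		/* bits 7..0  -> 0xa3 */
	u8 hi = to_wcid_hi(id);		/* bits 10..8 -> 0x5 (0x1 with the old 2-bit mask) */
	u16 back = (hi << 8) | lo;	/* 0x5a3 again */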
@@ -1866,8 +1874,12 @@ void mt76_connac_mcu_reg_wr(struct mt76_dev *dev, u32 offset, u32 val);
const struct ieee80211_sta_he_cap *
mt76_connac_get_he_phy_cap(struct mt76_phy *phy, struct ieee80211_vif *vif);
+const struct ieee80211_sta_eht_cap *
+mt76_connac_get_eht_phy_cap(struct mt76_phy *phy, struct ieee80211_vif *vif);
u8 mt76_connac_get_phy_mode(struct mt76_phy *phy, struct ieee80211_vif *vif,
enum nl80211_band band, struct ieee80211_sta *sta);
+u8 mt76_connac_get_phy_mode_ext(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ enum nl80211_band band);
int mt76_connac_mcu_add_key(struct mt76_dev *dev, struct ieee80211_vif *vif,
struct mt76_connac_sta_key_conf *sta_key_conf,
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c b/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c
index 6c6c8ada7943..d543ef3de65b 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c
@@ -642,7 +642,12 @@ mt76x0_phy_get_target_power(struct mt76x02_dev *dev, u8 tx_mode,
if (tx_rate > 9)
return -EINVAL;
- *target_power = cur_power + dev->rate_power.vht[tx_rate];
+ *target_power = cur_power;
+ if (tx_rate > 7)
+ *target_power += dev->rate_power.vht[tx_rate - 8];
+ else
+ *target_power += dev->rate_power.ht[tx_rate];
+
*target_pa_power = mt76x0_phy_get_rf_pa_mode(dev, 1, tx_rate);
break;
default:
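The fix splits the per-rate power lookup: tx_rate values 0-7 appear to reuse the HT per-MCS table, while 8 and 9 (VHT MCS 8/9) index the dedicated VHT table after rebasing by 8. Previously the VHT table was indexed with the raw rate for every case. Condensed:

	/* e.g. tx_rate 9 -> vht[1], tx_rate 5 -> ht[5];
	 * the old code read vht[tx_rate] for both
	 */
	s8 delta = tx_rate > 7 ? dev->rate_power.vht[tx_rate - 8]
			       : dev->rate_power.ht[tx_rate];
	*target_power = cur_power + delta;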
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/usb_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76x0/usb_mcu.c
index 45502fd4693f..6dc1f51f5658 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x0/usb_mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x0/usb_mcu.c
@@ -148,6 +148,7 @@ static int mt76x0u_load_firmware(struct mt76x02_dev *dev)
mt76_wr(dev, MT_USB_DMA_CFG, val);
ret = mt76x0u_upload_firmware(dev, hdr);
+ mt76x02_set_ethtool_fwver(dev, hdr);
release_firmware(fw);
mt76_wr(dev, MT_FCE_PSE_CTRL, 1);
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
index 604ddcc21123..7451a63206a5 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
@@ -87,10 +87,9 @@ static const struct ieee80211_iface_combination mt76x02u_if_comb[] = {
};
static void
-mt76x02_led_set_config(struct mt76_dev *mdev, u8 delay_on,
- u8 delay_off)
+mt76x02_led_set_config(struct mt76_phy *mphy, u8 delay_on, u8 delay_off)
{
- struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev,
+ struct mt76x02_dev *dev = container_of(mphy->dev, struct mt76x02_dev,
mt76);
u32 val;
@@ -98,13 +97,13 @@ mt76x02_led_set_config(struct mt76_dev *mdev, u8 delay_on,
FIELD_PREP(MT_LED_STATUS_OFF, delay_off) |
FIELD_PREP(MT_LED_STATUS_ON, delay_on);
- mt76_wr(dev, MT_LED_S0(mdev->led_pin), val);
- mt76_wr(dev, MT_LED_S1(mdev->led_pin), val);
+ mt76_wr(dev, MT_LED_S0(mphy->leds.pin), val);
+ mt76_wr(dev, MT_LED_S1(mphy->leds.pin), val);
- val = MT_LED_CTRL_REPLAY(mdev->led_pin) |
- MT_LED_CTRL_KICK(mdev->led_pin);
- if (mdev->led_al)
- val |= MT_LED_CTRL_POLARITY(mdev->led_pin);
+ val = MT_LED_CTRL_REPLAY(mphy->leds.pin) |
+ MT_LED_CTRL_KICK(mphy->leds.pin);
+ if (mphy->leds.al)
+ val |= MT_LED_CTRL_POLARITY(mphy->leds.pin);
mt76_wr(dev, MT_LED_CTRL, val);
}
@@ -113,14 +112,14 @@ mt76x02_led_set_blink(struct led_classdev *led_cdev,
unsigned long *delay_on,
unsigned long *delay_off)
{
- struct mt76_dev *mdev = container_of(led_cdev, struct mt76_dev,
- led_cdev);
+ struct mt76_phy *mphy = container_of(led_cdev, struct mt76_phy,
+ leds.cdev);
u8 delta_on, delta_off;
delta_off = max_t(u8, *delay_off / 10, 1);
delta_on = max_t(u8, *delay_on / 10, 1);
- mt76x02_led_set_config(mdev, delta_on, delta_off);
+ mt76x02_led_set_config(mphy, delta_on, delta_off);
return 0;
}
@@ -129,13 +128,13 @@ static void
mt76x02_led_set_brightness(struct led_classdev *led_cdev,
enum led_brightness brightness)
{
- struct mt76_dev *mdev = container_of(led_cdev, struct mt76_dev,
- led_cdev);
+ struct mt76_phy *mphy = container_of(led_cdev, struct mt76_phy,
+ leds.cdev);
if (!brightness)
- mt76x02_led_set_config(mdev, 0, 0xff);
+ mt76x02_led_set_config(mphy, 0, 0xff);
else
- mt76x02_led_set_config(mdev, 0xff, 0);
+ mt76x02_led_set_config(mphy, 0xff, 0);
}
int mt76x02_init_device(struct mt76x02_dev *dev)
@@ -167,9 +166,9 @@ int mt76x02_init_device(struct mt76x02_dev *dev)
/* init led callbacks */
if (IS_ENABLED(CONFIG_MT76_LEDS)) {
- dev->mt76.led_cdev.brightness_set =
+ dev->mphy.leds.cdev.brightness_set =
mt76x02_led_set_brightness;
- dev->mt76.led_cdev.blink_set = mt76x02_led_set_blink;
+ dev->mphy.leds.cdev.blink_set = mt76x02_led_set_blink;
}
}
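The LED state (pin, active-low flag, classdev) moves from struct mt76_dev to struct mt76_phy, so each PHY can register its own LED. The callbacks recover the driver context through a two-step container_of, sketched here:

	static struct mt76x02_dev *
	example_dev_from_led(struct led_classdev *led_cdev)
	{
		/* the classdev is embedded at mt76_phy.leds.cdev ... */
		struct mt76_phy *mphy = container_of(led_cdev, struct mt76_phy,
						     leds.cdev);

		/* ... and mt76_phy.dev points back into the driver device */
		return container_of(mphy->dev, struct mt76x02_dev, mt76);
	}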
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
index fb46c2c1784f..5a46813a59ea 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
@@ -811,7 +811,7 @@ mt7915_hw_queue_read(struct seq_file *s, u32 size,
if (val & BIT(map[i].index))
continue;
- ctrl = BIT(31) | (map[i].pid << 10) | (map[i].qid << 24);
+ ctrl = BIT(31) | (map[i].pid << 10) | ((u32)map[i].qid << 24);
mt76_wr(dev, MT_FL_Q0_CTRL, ctrl);
head = mt76_get_field(dev, MT_FL_Q2_CTRL,
@@ -996,7 +996,7 @@ mt7915_rate_txpower_get(struct file *file, char __user *user_buf,
ret = mt7915_mcu_get_txpower_sku(phy, txpwr, sizeof(txpwr));
if (ret)
- return ret;
+ goto out;
/* Txpower propagation path: TMAC -> TXV -> BBP */
len += scnprintf(buf + len, sz - len,
@@ -1047,6 +1047,8 @@ mt7915_rate_txpower_get(struct file *file, char __user *user_buf,
mt76_get_field(dev, reg, MT_WF_PHY_TPC_POWER));
ret = simple_read_from_buffer(user_buf, count, ppos, buf, len);
+
+out:
kfree(buf);
return ret;
}
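Two small fixes in this file: the queue id is cast to u32 before the 24-bit shift so the promoted int never shifts into the sign bit, and mt7915_rate_txpower_get() now routes its early error return through a common exit so the buffer allocated earlier is always freed. The leak fix is the usual single-exit pattern; a condensed sketch with a hypothetical query helper:

	buf = kzalloc(sz, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	ret = query_txpower(phy, buf);	/* hypothetical */
	if (ret)
		goto out;		/* used to 'return ret', leaking buf */

	ret = simple_read_from_buffer(user_buf, count, ppos, buf, len);
out:
	kfree(buf);
	return ret;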
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/dma.c b/drivers/net/wireless/mediatek/mt76/mt7915/dma.c
index e3fa064918bf..abe17dac9996 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/dma.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/dma.c
@@ -559,9 +559,32 @@ int mt7915_dma_init(struct mt7915_dev *dev, struct mt7915_phy *phy2)
return 0;
}
+static void mt7915_dma_wed_reset(struct mt7915_dev *dev)
+{
+ struct mt76_dev *mdev = &dev->mt76;
+
+ if (!test_bit(MT76_STATE_WED_RESET, &dev->mphy.state))
+ return;
+
+ complete(&mdev->mmio.wed_reset);
+
+ if (!wait_for_completion_timeout(&dev->mt76.mmio.wed_reset_complete,
+ 3 * HZ))
+ dev_err(dev->mt76.dev, "wed reset complete timeout\n");
+}
+
+static void
+mt7915_dma_reset_tx_queue(struct mt7915_dev *dev, struct mt76_queue *q)
+{
+ mt76_queue_reset(dev, q);
+ if (mtk_wed_device_active(&dev->mt76.mmio.wed))
+ mt76_dma_wed_setup(&dev->mt76, q, true);
+}
+
int mt7915_dma_reset(struct mt7915_dev *dev, bool force)
{
struct mt76_phy *mphy_ext = dev->mt76.phys[MT_BAND1];
+ struct mtk_wed_device *wed = &dev->mt76.mmio.wed;
int i;
/* clean up hw queues */
@@ -581,28 +604,40 @@ int mt7915_dma_reset(struct mt7915_dev *dev, bool force)
if (force)
mt7915_wfsys_reset(dev);
+ if (mtk_wed_device_active(wed))
+ mtk_wed_device_dma_reset(wed);
+
mt7915_dma_disable(dev, force);
+ mt7915_dma_wed_reset(dev);
/* reset hw queues */
for (i = 0; i < __MT_TXQ_MAX; i++) {
- mt76_queue_reset(dev, dev->mphy.q_tx[i]);
+ mt7915_dma_reset_tx_queue(dev, dev->mphy.q_tx[i]);
if (mphy_ext)
- mt76_queue_reset(dev, mphy_ext->q_tx[i]);
+ mt7915_dma_reset_tx_queue(dev, mphy_ext->q_tx[i]);
}
for (i = 0; i < __MT_MCUQ_MAX; i++)
mt76_queue_reset(dev, dev->mt76.q_mcu[i]);
- mt76_for_each_q_rx(&dev->mt76, i)
+ mt76_for_each_q_rx(&dev->mt76, i) {
+ if (dev->mt76.q_rx[i].flags == MT_WED_Q_TXFREE)
+ continue;
+
mt76_queue_reset(dev, &dev->mt76.q_rx[i]);
+ }
mt76_tx_status_check(&dev->mt76, true);
- mt7915_dma_enable(dev);
-
mt76_for_each_q_rx(&dev->mt76, i)
mt76_queue_rx_reset(dev, i);
+ if (mtk_wed_device_active(wed) && is_mt7915(&dev->mt76))
+ mt76_rmw(dev, MT_WFDMA0_EXT0_CFG, MT_WFDMA0_EXT0_RXWB_KEEP,
+ MT_WFDMA0_EXT0_RXWB_KEEP);
+
+ mt7915_dma_enable(dev);
+
return 0;
}
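With WED active the reset path gains three steps: the WED DMA engine is reset first, the wed_reset/wed_reset_complete handshake (see the mmio.c hunks below) runs before queues are touched, and tx queues are re-attached to WED after each reset. The TXFREE ring is owned by WED and is skipped; on mt7915 the RXWB_KEEP bit presumably preserves rx write-back state across the reset. Condensed ordering, derived from the hunk above:

	/* mtk_wed_device_dma_reset(wed);
	 * mt7915_dma_disable(dev, force);
	 * mt7915_dma_wed_reset(dev);         // handshake with the WED core
	 * mt7915_dma_reset_tx_queue(...);    // mt76_queue_reset() + mt76_dma_wed_setup()
	 * ...rx queues reset, except those flagged MT_WED_Q_TXFREE...
	 * mt7915_dma_enable(dev);
	 */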
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
index 59069fb86414..a79628933948 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
@@ -33,11 +33,14 @@ static int mt7915_check_eeprom(struct mt7915_dev *dev)
u8 *eeprom = dev->mt76.eeprom.data;
u16 val = get_unaligned_le16(eeprom);
+#define CHECK_EEPROM_ERR(match) (match ? 0 : -EINVAL)
switch (val) {
case 0x7915:
+ return CHECK_EEPROM_ERR(is_mt7915(&dev->mt76));
case 0x7916:
+ return CHECK_EEPROM_ERR(is_mt7916(&dev->mt76));
case 0x7986:
- return 0;
+ return CHECK_EEPROM_ERR(is_mt7986(&dev->mt76));
default:
return -EINVAL;
}
@@ -110,18 +113,23 @@ static int mt7915_eeprom_load(struct mt7915_dev *dev)
} else {
u8 free_block_num;
u32 block_num, i;
+ u32 eeprom_blk_size = MT7915_EEPROM_BLOCK_SIZE;
+
+ ret = mt7915_mcu_get_eeprom_free_block(dev, &free_block_num);
+ if (ret < 0)
+ return ret;
- mt7915_mcu_get_eeprom_free_block(dev, &free_block_num);
- /* efuse info not enough */
+ /* efuse info isn't enough */
if (free_block_num >= 29)
return -EINVAL;
/* read eeprom data from efuse */
- block_num = DIV_ROUND_UP(eeprom_size,
- MT7915_EEPROM_BLOCK_SIZE);
- for (i = 0; i < block_num; i++)
- mt7915_mcu_get_eeprom(dev,
- i * MT7915_EEPROM_BLOCK_SIZE);
+ block_num = DIV_ROUND_UP(eeprom_size, eeprom_blk_size);
+ for (i = 0; i < block_num; i++) {
+ ret = mt7915_mcu_get_eeprom(dev, i * eeprom_blk_size);
+ if (ret < 0)
+ return ret;
+ }
}
return mt7915_check_eeprom(dev);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
index c810c31fbd6e..1ab768feccaa 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
@@ -38,8 +38,7 @@ static const struct ieee80211_iface_combination if_comb[] = {
BIT(NL80211_CHAN_WIDTH_20) |
BIT(NL80211_CHAN_WIDTH_40) |
BIT(NL80211_CHAN_WIDTH_80) |
- BIT(NL80211_CHAN_WIDTH_160) |
- BIT(NL80211_CHAN_WIDTH_80P80),
+ BIT(NL80211_CHAN_WIDTH_160),
}
};
@@ -83,9 +82,23 @@ static ssize_t mt7915_thermal_temp_store(struct device *dev,
mutex_lock(&phy->dev->mt76.mutex);
val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 60, 130);
+
+ if ((i - 1 == MT7915_CRIT_TEMP_IDX &&
+ val > phy->throttle_temp[MT7915_MAX_TEMP_IDX]) ||
+ (i - 1 == MT7915_MAX_TEMP_IDX &&
+ val < phy->throttle_temp[MT7915_CRIT_TEMP_IDX])) {
+ dev_err(phy->dev->mt76.dev,
+ "temp1_max shall be greater than temp1_crit.");
+ return -EINVAL;
+ }
+
phy->throttle_temp[i - 1] = val;
mutex_unlock(&phy->dev->mt76.mutex);
+ ret = mt7915_mcu_set_thermal_protect(phy);
+ if (ret)
+ return ret;
+
return count;
}
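The store handler now validates the ordering of the two trip points (critical must stay below maximum) and pushes the new values to firmware via mt7915_mcu_set_thermal_protect(). Note that, as written, the -EINVAL path returns with dev->mt76.mutex still held; a sketch with the unlock made explicit (the ordering check folded into a hypothetical helper):

	mutex_lock(&phy->dev->mt76.mutex);
	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 60, 130);
	if (!trip_points_ordered(phy, i - 1, val)) {	/* hypothetical */
		mutex_unlock(&phy->dev->mt76.mutex);
		return -EINVAL;
	}
	phy->throttle_temp[i - 1] = val;
	mutex_unlock(&phy->dev->mt76.mutex);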
@@ -131,11 +144,11 @@ mt7915_thermal_set_cur_throttle_state(struct thermal_cooling_device *cdev,
u8 throttling = MT7915_THERMAL_THROTTLE_MAX - state;
int ret;
- if (state > MT7915_CDEV_THROTTLE_MAX)
+ if (state > MT7915_CDEV_THROTTLE_MAX) {
+ dev_err(phy->dev->mt76.dev,
+ "please specify a valid throttling state\n");
return -EINVAL;
-
- if (phy->throttle_temp[0] > phy->throttle_temp[1])
- return 0;
+ }
if (state == phy->cdev_state)
return 0;
@@ -164,7 +177,7 @@ static void mt7915_unregister_thermal(struct mt7915_phy *phy)
struct wiphy *wiphy = phy->mt76->hw->wiphy;
if (!phy->cdev)
- return;
+ return;
sysfs_remove_link(&wiphy->dev.kobj, "cooling_device");
thermal_cooling_device_unregister(phy->cdev);
@@ -198,41 +211,41 @@ static int mt7915_thermal_init(struct mt7915_phy *phy)
return PTR_ERR(hwmon);
/* initialize critical/maximum high temperature */
- phy->throttle_temp[0] = 110;
- phy->throttle_temp[1] = 120;
+ phy->throttle_temp[MT7915_CRIT_TEMP_IDX] = MT7915_CRIT_TEMP;
+ phy->throttle_temp[MT7915_MAX_TEMP_IDX] = MT7915_MAX_TEMP;
- return mt7915_mcu_set_thermal_throttling(phy,
- MT7915_THERMAL_THROTTLE_MAX);
+ return 0;
}
static void mt7915_led_set_config(struct led_classdev *led_cdev,
u8 delay_on, u8 delay_off)
{
struct mt7915_dev *dev;
- struct mt76_dev *mt76;
+ struct mt76_phy *mphy;
u32 val;
- mt76 = container_of(led_cdev, struct mt76_dev, led_cdev);
- dev = container_of(mt76, struct mt7915_dev, mt76);
+ mphy = container_of(led_cdev, struct mt76_phy, leds.cdev);
+ dev = container_of(mphy->dev, struct mt7915_dev, mt76);
- /* select TX blink mode, 2: only data frames */
- mt76_rmw_field(dev, MT_TMAC_TCR0(0), MT_TMAC_TCR0_TX_BLINK, 2);
+ /* set PWM mode */
+ val = FIELD_PREP(MT_LED_STATUS_DURATION, 0xffff) |
+ FIELD_PREP(MT_LED_STATUS_OFF, delay_off) |
+ FIELD_PREP(MT_LED_STATUS_ON, delay_on);
+ mt76_wr(dev, MT_LED_STATUS_0(mphy->band_idx), val);
+ mt76_wr(dev, MT_LED_STATUS_1(mphy->band_idx), val);
/* enable LED */
- mt76_wr(dev, MT_LED_EN(0), 1);
-
- /* set LED Tx blink on/off time */
- val = FIELD_PREP(MT_LED_TX_BLINK_ON_MASK, delay_on) |
- FIELD_PREP(MT_LED_TX_BLINK_OFF_MASK, delay_off);
- mt76_wr(dev, MT_LED_TX_BLINK(0), val);
+ mt76_wr(dev, MT_LED_EN(mphy->band_idx), 1);
/* control LED */
- val = MT_LED_CTRL_BLINK_MODE | MT_LED_CTRL_KICK;
- if (dev->mt76.led_al)
+ val = MT_LED_CTRL_KICK;
+ if (dev->mphy.leds.al)
val |= MT_LED_CTRL_POLARITY;
+ if (mphy->band_idx)
+ val |= MT_LED_CTRL_BAND;
- mt76_wr(dev, MT_LED_CTRL(0), val);
- mt76_clear(dev, MT_LED_CTRL(0), MT_LED_CTRL_KICK);
+ mt76_wr(dev, MT_LED_CTRL(mphy->band_idx), val);
+ mt76_clear(dev, MT_LED_CTRL(mphy->band_idx), MT_LED_CTRL_KICK);
}
static int mt7915_led_set_blink(struct led_classdev *led_cdev,
@@ -319,9 +332,10 @@ mt7915_regd_notifier(struct wiphy *wiphy,
}
static void
-mt7915_init_wiphy(struct ieee80211_hw *hw)
+mt7915_init_wiphy(struct mt7915_phy *phy)
{
- struct mt7915_phy *phy = mt7915_hw_phy(hw);
+ struct mt76_phy *mphy = phy->mt76;
+ struct ieee80211_hw *hw = mphy->hw;
struct mt76_dev *mdev = &phy->dev->mt76;
struct wiphy *wiphy = hw->wiphy;
struct mt7915_dev *dev = phy->dev;
@@ -392,11 +406,6 @@ mt7915_init_wiphy(struct ieee80211_hw *hw)
phy->mt76->sband_5g.sband.vht_cap.cap |=
IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_7991 |
IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK;
-
- if (!dev->dbdc_support)
- phy->mt76->sband_5g.sband.vht_cap.cap |=
- IEEE80211_VHT_CAP_SHORT_GI_160 |
- IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ;
} else {
phy->mt76->sband_5g.sband.vht_cap.cap |=
IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454 |
@@ -415,6 +424,12 @@ mt7915_init_wiphy(struct ieee80211_hw *hw)
wiphy->available_antennas_rx = phy->mt76->antenna_mask;
wiphy->available_antennas_tx = phy->mt76->antenna_mask;
+
+ /* init led callbacks */
+ if (IS_ENABLED(CONFIG_MT76_LEDS)) {
+ mphy->leds.cdev.brightness_set = mt7915_led_set_brightness;
+ mphy->leds.cdev.blink_set = mt7915_led_set_blink;
+ }
}
static void
@@ -473,6 +488,72 @@ mt7915_mac_init_band(struct mt7915_dev *dev, u8 band)
mt76_rmw(dev, MT_WTBLOFF_TOP_RSCR(band), mask, set);
}
+static void
+mt7915_init_led_mux(struct mt7915_dev *dev)
+{
+ if (!IS_ENABLED(CONFIG_MT76_LEDS))
+ return;
+
+ if (dev->dbdc_support) {
+ switch (mt76_chip(&dev->mt76)) {
+ case 0x7915:
+ mt76_rmw_field(dev, MT_LED_GPIO_MUX2,
+ GENMASK(11, 8), 4);
+ mt76_rmw_field(dev, MT_LED_GPIO_MUX3,
+ GENMASK(11, 8), 4);
+ break;
+ case 0x7986:
+ mt76_rmw_field(dev, MT_LED_GPIO_MUX0,
+ GENMASK(7, 4), 1);
+ mt76_rmw_field(dev, MT_LED_GPIO_MUX0,
+ GENMASK(11, 8), 1);
+ break;
+ case 0x7916:
+ mt76_rmw_field(dev, MT_LED_GPIO_MUX1,
+ GENMASK(27, 24), 3);
+ mt76_rmw_field(dev, MT_LED_GPIO_MUX1,
+ GENMASK(31, 28), 3);
+ break;
+ default:
+ break;
+ }
+ } else if (dev->mphy.leds.pin) {
+ switch (mt76_chip(&dev->mt76)) {
+ case 0x7915:
+ mt76_rmw_field(dev, MT_LED_GPIO_MUX3,
+ GENMASK(11, 8), 4);
+ break;
+ case 0x7986:
+ mt76_rmw_field(dev, MT_LED_GPIO_MUX0,
+ GENMASK(11, 8), 1);
+ break;
+ case 0x7916:
+ mt76_rmw_field(dev, MT_LED_GPIO_MUX1,
+ GENMASK(31, 28), 3);
+ break;
+ default:
+ break;
+ }
+ } else {
+ switch (mt76_chip(&dev->mt76)) {
+ case 0x7915:
+ mt76_rmw_field(dev, MT_LED_GPIO_MUX2,
+ GENMASK(11, 8), 4);
+ break;
+ case 0x7986:
+ mt76_rmw_field(dev, MT_LED_GPIO_MUX0,
+ GENMASK(7, 4), 1);
+ break;
+ case 0x7916:
+ mt76_rmw_field(dev, MT_LED_GPIO_MUX1,
+ GENMASK(27, 24), 3);
+ break;
+ default:
+ break;
+ }
+ }
+}
+
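The three switch ladders above reduce to the following per-chip mapping (the field value selects the LED function on the given GPIO mux register):

	chip    dbdc                            single, leds.pin set   single, leds.pin 0
	0x7915  MUX2[11:8]=4 and MUX3[11:8]=4   MUX3[11:8]=4           MUX2[11:8]=4
	0x7986  MUX0[7:4]=1 and MUX0[11:8]=1    MUX0[11:8]=1           MUX0[7:4]=1
	0x7916  MUX1[27:24]=3 and MUX1[31:28]=3 MUX1[31:28]=3          MUX1[27:24]=3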
void mt7915_mac_init(struct mt7915_dev *dev)
{
int i;
@@ -497,10 +578,7 @@ void mt7915_mac_init(struct mt7915_dev *dev)
for (i = 0; i < 2; i++)
mt7915_mac_init_band(dev, i);
- if (IS_ENABLED(CONFIG_MT76_LEDS)) {
- i = dev->mt76.led_pin ? MT_LED_GPIO_MUX3 : MT_LED_GPIO_MUX2;
- mt76_rmw_field(dev, i, MT_LED_GPIO_SEL_MASK, 4);
- }
+ mt7915_init_led_mux(dev);
}
int mt7915_txbf_init(struct mt7915_dev *dev)
@@ -569,7 +647,7 @@ mt7915_register_ext_phy(struct mt7915_dev *dev, struct mt7915_phy *phy)
mt76_eeprom_override(mphy);
/* init wiphy according to mphy and phy */
- mt7915_init_wiphy(mphy->hw);
+ mt7915_init_wiphy(phy);
ret = mt76_register_phy(mphy, true, mt76_rates,
ARRAY_SIZE(mt76_rates));
@@ -763,13 +841,9 @@ mt7915_set_stream_he_txbf_caps(struct mt7915_phy *phy,
int sts = hweight8(phy->mt76->chainmask);
u8 c, sts_160 = sts;
- /* Can do 1/2 of STS in 160Mhz mode for mt7915 */
- if (is_mt7915(&dev->mt76)) {
- if (!dev->dbdc_support)
- sts_160 /= 2;
- else
- sts_160 = 0;
- }
+ /* mt7915 doesn't support bw160 */
+ if (is_mt7915(&dev->mt76))
+ sts_160 = 0;
#ifdef CONFIG_MAC80211_MESH
if (vif == NL80211_IFTYPE_MESH_POINT)
@@ -823,9 +897,6 @@ mt7915_set_stream_he_txbf_caps(struct mt7915_phy *phy,
elem->phy_cap_info[3] |= IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER;
elem->phy_cap_info[4] |= IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER;
- /* num_snd_dim
- * for mt7915, max supported sts is 2 for bw > 80MHz and 0 if dbdc
- */
c = FIELD_PREP(IEEE80211_HE_PHY_CAP5_BEAMFORMEE_NUM_SND_DIM_UNDER_80MHZ_MASK,
sts - 1);
if (sts_160)
@@ -873,15 +944,10 @@ mt7915_init_he_caps(struct mt7915_phy *phy, enum nl80211_band band,
int i, idx = 0, nss = hweight8(phy->mt76->antenna_mask);
u16 mcs_map = 0;
u16 mcs_map_160 = 0;
- u8 nss_160;
+ u8 nss_160 = nss;
- if (!is_mt7915(&dev->mt76))
- nss_160 = nss;
- else if (!dev->dbdc_support)
- /* Can do 1/2 of NSS streams in 160Mhz mode for mt7915 */
- nss_160 = nss / 2;
- else
- /* Can't do 160MHz with mt7915 dbdc */
+ /* Can't do 160MHz with mt7915 */
+ if (is_mt7915(&dev->mt76))
nss_160 = 0;
for (i = 0; i < 8; i++) {
@@ -931,8 +997,7 @@ mt7915_init_he_caps(struct mt7915_phy *phy, enum nl80211_band band,
else if (nss_160)
he_cap_elem->phy_cap_info[0] =
IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_80MHZ_IN_5G |
- IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G |
- IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G;
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G;
else
he_cap_elem->phy_cap_info[0] =
IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_80MHZ_IN_5G;
@@ -1004,12 +1069,11 @@ mt7915_init_he_caps(struct mt7915_phy *phy, enum nl80211_band band,
break;
}
+ memset(he_mcs, 0, sizeof(*he_mcs));
he_mcs->rx_mcs_80 = cpu_to_le16(mcs_map);
he_mcs->tx_mcs_80 = cpu_to_le16(mcs_map);
he_mcs->rx_mcs_160 = cpu_to_le16(mcs_map_160);
he_mcs->tx_mcs_160 = cpu_to_le16(mcs_map_160);
- he_mcs->rx_mcs_80p80 = cpu_to_le16(mcs_map_160);
- he_mcs->tx_mcs_80p80 = cpu_to_le16(mcs_map_160);
mt7915_set_stream_he_txbf_caps(phy, he_cap, i);
@@ -1101,10 +1165,8 @@ static void mt7915_stop_hardware(struct mt7915_dev *dev)
mt7986_wmac_disable(dev);
}
-
int mt7915_register_device(struct mt7915_dev *dev)
{
- struct ieee80211_hw *hw = mt76_hw(dev);
struct mt7915_phy *phy2;
int ret;
@@ -1133,18 +1195,12 @@ int mt7915_register_device(struct mt7915_dev *dev)
if (ret)
goto free_phy2;
- mt7915_init_wiphy(hw);
+ mt7915_init_wiphy(&dev->phy);
#ifdef CONFIG_NL80211_TESTMODE
dev->mt76.test_ops = &mt7915_testmode_ops;
#endif
- /* init led callbacks */
- if (IS_ENABLED(CONFIG_MT76_LEDS)) {
- dev->mt76.led_cdev.brightness_set = mt7915_led_set_brightness;
- dev->mt76.led_cdev.blink_set = mt7915_led_set_blink;
- }
-
ret = mt76_register_device(&dev->mt76, true, mt76_rates,
ARRAY_SIZE(mt76_rates));
if (ret)
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
index f0d5a3603902..97ca55d283fb 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
@@ -256,8 +256,7 @@ mt7915_wed_check_ppe(struct mt7915_dev *dev, struct mt76_queue *q,
if (!msta || !msta->vif)
return;
- if (!(q->flags & MT_QFLAG_WED) ||
- FIELD_GET(MT_QFLAG_WED_TYPE, q->flags) != MT76_WED_Q_RX)
+ if (!mt76_queue_is_wed_rx(q))
return;
if (!(info & MT_DMA_INFO_PPE_VLD))
@@ -1061,9 +1060,6 @@ static void mt7915_mac_add_txs(struct mt7915_dev *dev, void *data)
u16 wcidx;
u8 pid;
- if (le32_get_bits(txs_data[0], MT_TXS0_TXS_FORMAT) > 1)
- return;
-
wcidx = le32_get_bits(txs_data[2], MT_TXS2_WCID);
pid = le32_get_bits(txs_data[3], MT_TXS3_PID);
@@ -1582,6 +1578,12 @@ void mt7915_mac_reset_work(struct work_struct *work)
if (!(READ_ONCE(dev->recovery.state) & MT_MCU_CMD_STOP_DMA))
return;
+ if (mtk_wed_device_active(&dev->mt76.mmio.wed)) {
+ mtk_wed_device_stop(&dev->mt76.mmio.wed);
+ if (!is_mt7986(&dev->mt76))
+ mt76_wr(dev, MT_INT_WED_MASK_CSR, 0);
+ }
+
ieee80211_stop_queues(mt76_hw(dev));
if (ext_phy)
ieee80211_stop_queues(ext_phy->hw);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
index 0511d6a505b0..3bbccbdfc5eb 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
@@ -57,6 +57,17 @@ int mt7915_run(struct ieee80211_hw *hw)
mt7915_mac_enable_nf(dev, phy->mt76->band_idx);
}
+ ret = mt7915_mcu_set_thermal_throttling(phy,
+ MT7915_THERMAL_THROTTLE_MAX);
+
+ if (ret)
+ goto out;
+
+ ret = mt7915_mcu_set_thermal_protect(phy);
+
+ if (ret)
+ goto out;
+
ret = mt76_connac_mcu_set_rts_thresh(&dev->mt76, 0x92b,
phy->mt76->band_idx);
if (ret)
@@ -1280,19 +1291,22 @@ void mt7915_get_et_strings(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
u32 sset, u8 *data)
{
- if (sset == ETH_SS_STATS)
- memcpy(data, *mt7915_gstrings_stats,
- sizeof(mt7915_gstrings_stats));
+ if (sset != ETH_SS_STATS)
+ return;
+
+ memcpy(data, *mt7915_gstrings_stats, sizeof(mt7915_gstrings_stats));
+ data += sizeof(mt7915_gstrings_stats);
+ page_pool_ethtool_stats_get_strings(data);
}
static
int mt7915_get_et_sset_count(struct ieee80211_hw *hw,
struct ieee80211_vif *vif, int sset)
{
- if (sset == ETH_SS_STATS)
- return MT7915_SSTATS_LEN;
+ if (sset != ETH_SS_STATS)
+ return 0;
- return 0;
+ return MT7915_SSTATS_LEN + page_pool_ethtool_stats_get_count();
}
static void mt7915_ethtool_worker(void *wi_data, struct ieee80211_sta *sta)
@@ -1303,7 +1317,7 @@ static void mt7915_ethtool_worker(void *wi_data, struct ieee80211_sta *sta)
if (msta->vif->mt76.idx != wi->idx)
return;
- mt76_ethtool_worker(wi, &msta->wcid.stats);
+ mt76_ethtool_worker(wi, &msta->wcid.stats, false);
}
static
@@ -1320,7 +1334,7 @@ void mt7915_get_et_stats(struct ieee80211_hw *hw,
};
struct mib_stats *mib = &phy->mib;
/* See mt7915_ampdu_stat_read_phy, etc */
- int i, ei = 0;
+ int i, ei = 0, stats_size;
mutex_lock(&dev->mt76.mutex);
@@ -1401,9 +1415,12 @@ void mt7915_get_et_stats(struct ieee80211_hw *hw,
return;
ei += wi.worker_stat_count;
- if (ei != MT7915_SSTATS_LEN)
- dev_err(dev->mt76.dev, "ei: %d MT7915_SSTATS_LEN: %d",
- ei, (int)MT7915_SSTATS_LEN);
+
+ mt76_ethtool_page_pool_stats(&dev->mt76, &data[ei], &ei);
+
+ stats_size = MT7915_SSTATS_LEN + page_pool_ethtool_stats_get_count();
+ if (ei != stats_size)
+ dev_err(dev->mt76.dev, "ei: %d size: %d", ei, stats_size);
}
static void
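The ethtool hooks gain page-pool counters, appended after the driver's own statistics. The three callbacks must describe the same layout; sketched:

	/* sset_count */ return MT7915_SSTATS_LEN + page_pool_ethtool_stats_get_count();
	/* strings    */ memcpy(data, *mt7915_gstrings_stats, sizeof(mt7915_gstrings_stats));
			 page_pool_ethtool_stats_get_strings(data + sizeof(mt7915_gstrings_stats));
	/* stats      */ mt76_ethtool_page_pool_stats(&dev->mt76, &data[ei], &ei);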
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
index b2652de082ba..5545a8bdf1d0 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
@@ -232,8 +232,11 @@ mt7915_mcu_rx_csa_notify(struct mt7915_dev *dev, struct sk_buff *skb)
c = (struct mt7915_mcu_csa_notify *)skb->data;
+ if (c->band_idx > MT_BAND1)
+ return;
+
if ((c->band_idx && !dev->phy.mt76->band_idx) &&
- dev->mt76.phys[MT_BAND1])
+ dev->mt76.phys[MT_BAND1])
mphy = dev->mt76.phys[MT_BAND1];
ieee80211_iterate_active_interfaces_atomic(mphy->hw,
@@ -252,8 +255,11 @@ mt7915_mcu_rx_thermal_notify(struct mt7915_dev *dev, struct sk_buff *skb)
if (t->ctrl.ctrl_id != THERMAL_PROTECT_ENABLE)
return;
+ if (t->ctrl.band_idx > MT_BAND1)
+ return;
+
if ((t->ctrl.band_idx && !dev->phy.mt76->band_idx) &&
- dev->mt76.phys[MT_BAND1])
+ dev->mt76.phys[MT_BAND1])
mphy = dev->mt76.phys[MT_BAND1];
phy = (struct mt7915_phy *)mphy->priv;
@@ -268,8 +274,11 @@ mt7915_mcu_rx_radar_detected(struct mt7915_dev *dev, struct sk_buff *skb)
r = (struct mt7915_mcu_rdd_report *)skb->data;
+ if (r->band_idx > MT_BAND1)
+ return;
+
if ((r->band_idx && !dev->phy.mt76->band_idx) &&
- dev->mt76.phys[MT_BAND1])
+ dev->mt76.phys[MT_BAND1])
mphy = dev->mt76.phys[MT_BAND1];
if (r->band_idx == MT_RX_SEL2)
@@ -326,7 +335,11 @@ mt7915_mcu_rx_bcc_notify(struct mt7915_dev *dev, struct sk_buff *skb)
b = (struct mt7915_mcu_bcc_notify *)skb->data;
- if ((b->band_idx && !dev->phy.mt76->band_idx) && dev->mt76.phys[MT_BAND1])
+ if (b->band_idx > MT_BAND1)
+ return;
+
+ if ((b->band_idx && !dev->phy.mt76->band_idx) &&
+ dev->mt76.phys[MT_BAND1])
mphy = dev->mt76.phys[MT_BAND1];
ieee80211_iterate_active_interfaces_atomic(mphy->hw,
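All four event handlers gain the same guard: the band index arrives from firmware, so it is range-checked before it can select dev->mt76.phys[] or feed a later array access. The shared pattern ('evt' stands for the c/t/r/b pointers above):

	if (evt->band_idx > MT_BAND1)
		return;

	if ((evt->band_idx && !dev->phy.mt76->band_idx) &&
	    dev->mt76.phys[MT_BAND1])
		mphy = dev->mt76.phys[MT_BAND1];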
@@ -2104,7 +2117,7 @@ static int mt7915_load_firmware(struct mt7915_dev *dev)
/* make sure fw is download state */
if (mt7915_firmware_state(dev, false)) {
/* restart firmware once */
- __mt76_mcu_restart(&dev->mt76);
+ mt76_connac_mcu_restart(&dev->mt76);
ret = mt7915_firmware_state(dev, false);
if (ret) {
dev_err(dev->mt76.dev,
@@ -2278,6 +2291,53 @@ mt7915_mcu_init_rx_airtime(struct mt7915_dev *dev)
sizeof(req), true);
}
+static int mt7915_red_set_watermark(struct mt7915_dev *dev)
+{
+#define RED_GLOBAL_TOKEN_WATERMARK 2
+ struct {
+ __le32 args[3];
+ u8 cmd;
+ u8 version;
+ u8 __rsv1[4];
+ __le16 len;
+ __le16 high_mark;
+ __le16 low_mark;
+ u8 __rsv2[12];
+ } __packed req = {
+ .args[0] = cpu_to_le32(MCU_WA_PARAM_RED_SETTING),
+ .cmd = RED_GLOBAL_TOKEN_WATERMARK,
+ .len = cpu_to_le16(sizeof(req) - sizeof(req.args)),
+ .high_mark = cpu_to_le16(MT7915_HW_TOKEN_SIZE - 256),
+ .low_mark = cpu_to_le16(MT7915_HW_TOKEN_SIZE - 256 - 1536),
+ };
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_WA_PARAM_CMD(SET), &req,
+ sizeof(req), false);
+}
+
+static int mt7915_mcu_set_red(struct mt7915_dev *dev, bool enabled)
+{
+#define RED_DISABLE 0
+#define RED_BY_WA_ENABLE 2
+ int ret;
+ u32 red_type = enabled ? RED_BY_WA_ENABLE : RED_DISABLE;
+ __le32 req = cpu_to_le32(red_type);
+
+ if (enabled) {
+ ret = mt7915_red_set_watermark(dev);
+ if (ret < 0)
+ return ret;
+ }
+
+ ret = mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(RED_ENABLE), &req,
+ sizeof(req), false);
+ if (ret < 0)
+ return ret;
+
+ return mt7915_mcu_wa_cmd(dev, MCU_WA_PARAM_CMD(SET),
+ MCU_WA_PARAM_RED, enabled, 0);
+}
+
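Token watermark arithmetic, with MT7915_HW_TOKEN_SIZE = 4096 (defined in mt7915.h below):

	/* high_mark = 4096 - 256        = 3840 tokens
	 * low_mark  = 4096 - 256 - 1536 = 2304 tokens
	 * The WA firmware presumably starts early dropping (RED) as the
	 * in-flight tx token count crosses high_mark and backs off below
	 * low_mark; the exact policy is not visible in this patch.
	 */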
int mt7915_mcu_init_firmware(struct mt7915_dev *dev)
{
int ret;
@@ -2326,8 +2386,7 @@ int mt7915_mcu_init_firmware(struct mt7915_dev *dev)
if (ret)
return ret;
- return mt7915_mcu_wa_cmd(dev, MCU_WA_PARAM_CMD(SET),
- MCU_WA_PARAM_RED, 0, 0);
+ return mt7915_mcu_set_red(dev, mtk_wed_device_active(&dev->mt76.mmio.wed));
}
int mt7915_mcu_init(struct mt7915_dev *dev)
@@ -2336,7 +2395,6 @@ int mt7915_mcu_init(struct mt7915_dev *dev)
.headroom = sizeof(struct mt76_connac2_mcu_txd),
.mcu_skb_send_msg = mt7915_mcu_send_message,
.mcu_parse_response = mt7915_mcu_parse_response,
- .mcu_restart = mt76_connac_mcu_restart,
};
dev->mt76.mcu_ops = &mt7915_mcu_ops;
@@ -2346,16 +2404,17 @@ int mt7915_mcu_init(struct mt7915_dev *dev)
void mt7915_mcu_exit(struct mt7915_dev *dev)
{
- __mt76_mcu_restart(&dev->mt76);
+ mt76_connac_mcu_restart(&dev->mt76);
if (mt7915_firmware_state(dev, false)) {
dev_err(dev->mt76.dev, "Failed to exit mcu\n");
- return;
+ goto out;
}
mt76_wr(dev, MT_TOP_LPCR_HOST_BAND(0), MT_TOP_LPCR_HOST_FW_OWN);
if (dev->hif2)
mt76_wr(dev, MT_TOP_LPCR_HOST_BAND(1),
MT_TOP_LPCR_HOST_FW_OWN);
+out:
skb_queue_purge(&dev->mt76.mcu.res_q);
}
@@ -2792,8 +2851,9 @@ int mt7915_mcu_get_eeprom(struct mt7915_dev *dev, u32 offset)
int ret;
u8 *buf;
- ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_EXT_QUERY(EFUSE_ACCESS), &req,
- sizeof(req), true, &skb);
+ ret = mt76_mcu_send_and_get_msg(&dev->mt76,
+ MCU_EXT_QUERY(EFUSE_ACCESS),
+ &req, sizeof(req), true, &skb);
if (ret)
return ret;
@@ -2818,8 +2878,9 @@ int mt7915_mcu_get_eeprom_free_block(struct mt7915_dev *dev, u8 *block_num)
struct sk_buff *skb;
int ret;
- ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_EXT_QUERY(EFUSE_FREE_BLOCK), &req,
- sizeof(req), true, &skb);
+ ret = mt76_mcu_send_and_get_msg(&dev->mt76,
+ MCU_EXT_QUERY(EFUSE_FREE_BLOCK),
+ &req, sizeof(req), true, &skb);
if (ret)
return ret;
@@ -2974,38 +3035,42 @@ int mt7915_mcu_apply_tx_dpd(struct mt7915_phy *phy)
int mt7915_mcu_get_chan_mib_info(struct mt7915_phy *phy, bool chan_switch)
{
- /* strict order */
- static const u32 offs[] = {
- MIB_NON_WIFI_TIME,
- MIB_TX_TIME,
- MIB_RX_TIME,
- MIB_OBSS_AIRTIME,
- MIB_TXOP_INIT_COUNT,
- /* v2 */
- MIB_NON_WIFI_TIME_V2,
- MIB_TX_TIME_V2,
- MIB_RX_TIME_V2,
- MIB_OBSS_AIRTIME_V2
- };
struct mt76_channel_state *state = phy->mt76->chan_state;
struct mt76_channel_state *state_ts = &phy->state_ts;
struct mt7915_dev *dev = phy->dev;
struct mt7915_mcu_mib *res, req[5];
struct sk_buff *skb;
- int i, ret, start = 0, ofs = 20;
+ static const u32 *offs;
+ int i, ret, len, offs_cc;
u64 cc_tx;
- if (!is_mt7915(&dev->mt76)) {
- start = 5;
- ofs = 0;
+ /* strict order */
+ if (is_mt7915(&dev->mt76)) {
+ static const u32 chip_offs[] = {
+ MIB_NON_WIFI_TIME,
+ MIB_TX_TIME,
+ MIB_RX_TIME,
+ MIB_OBSS_AIRTIME,
+ MIB_TXOP_INIT_COUNT,
+ };
+ len = ARRAY_SIZE(chip_offs);
+ offs = chip_offs;
+ offs_cc = 20;
+ } else {
+ static const u32 chip_offs[] = {
+ MIB_NON_WIFI_TIME_V2,
+ MIB_TX_TIME_V2,
+ MIB_RX_TIME_V2,
+ MIB_OBSS_AIRTIME_V2
+ };
+ len = ARRAY_SIZE(chip_offs);
+ offs = chip_offs;
+ offs_cc = 0;
}
- for (i = 0; i < 5; i++) {
+ for (i = 0; i < len; i++) {
req[i].band = cpu_to_le32(phy->mt76->band_idx);
- req[i].offs = cpu_to_le32(offs[i + start]);
-
- if (!is_mt7915(&dev->mt76) && i == 3)
- break;
+ req[i].offs = cpu_to_le32(offs[i]);
}
ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_EXT_CMD(GET_MIB_INFO),
@@ -3013,7 +3078,7 @@ int mt7915_mcu_get_chan_mib_info(struct mt7915_phy *phy, bool chan_switch)
if (ret)
return ret;
- res = (struct mt7915_mcu_mib *)(skb->data + ofs);
+ res = (struct mt7915_mcu_mib *)(skb->data + offs_cc);
#define __res_u64(s) le64_to_cpu(res[s].data)
/* subtract Tx backoff time from Tx duration */
@@ -3060,6 +3125,29 @@ int mt7915_mcu_get_temperature(struct mt7915_phy *phy)
int mt7915_mcu_set_thermal_throttling(struct mt7915_phy *phy, u8 state)
{
struct mt7915_dev *dev = phy->dev;
+ struct mt7915_mcu_thermal_ctrl req = {
+ .band_idx = phy->mt76->band_idx,
+ .ctrl_id = THERMAL_PROTECT_DUTY_CONFIG,
+ };
+ int level, ret;
+
+ /* set duty cycle and level */
+ for (level = 0; level < 4; level++) {
+ req.duty.duty_level = level;
+ req.duty.duty_cycle = state;
+ state /= 2;
+
+ ret = mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(THERMAL_PROT),
+ &req, sizeof(req), false);
+ if (ret)
+ return ret;
+ }
+ return 0;
+}
+
+int mt7915_mcu_set_thermal_protect(struct mt7915_phy *phy)
+{
+ struct mt7915_dev *dev = phy->dev;
struct {
struct mt7915_mcu_thermal_ctrl ctrl;
@@ -3070,29 +3158,18 @@ int mt7915_mcu_set_thermal_throttling(struct mt7915_phy *phy, u8 state)
} __packed req = {
.ctrl = {
.band_idx = phy->mt76->band_idx,
+ .type.protect_type = 1,
+ .type.trigger_type = 1,
},
};
- int level;
-
- if (!state) {
- req.ctrl.ctrl_id = THERMAL_PROTECT_DISABLE;
- goto out;
- }
-
- /* set duty cycle and level */
- for (level = 0; level < 4; level++) {
- int ret;
+ int ret;
- req.ctrl.ctrl_id = THERMAL_PROTECT_DUTY_CONFIG;
- req.ctrl.duty.duty_level = level;
- req.ctrl.duty.duty_cycle = state;
- state /= 2;
+ req.ctrl.ctrl_id = THERMAL_PROTECT_DISABLE;
+ ret = mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(THERMAL_PROT),
+ &req, sizeof(req.ctrl), false);
- ret = mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(THERMAL_PROT),
- &req, sizeof(req.ctrl), false);
- if (ret)
- return ret;
- }
+ if (ret)
+ return ret;
/* set high-temperature trigger threshold */
req.ctrl.ctrl_id = THERMAL_PROTECT_ENABLE;
@@ -3101,10 +3178,6 @@ int mt7915_mcu_set_thermal_throttling(struct mt7915_phy *phy, u8 state)
req.trigger_temp = cpu_to_le32(phy->throttle_temp[1]);
req.sustain_time = cpu_to_le16(10);
-out:
- req.ctrl.type.protect_type = 1;
- req.ctrl.type.trigger_type = 1;
-
return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(THERMAL_PROT),
&req, sizeof(req), false);
}
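mt7915_mcu_set_thermal_throttling() is now a pure duty-cycle programmer: starting from the requested state it halves the cycle for each of the four throttle levels. Assuming MT7915_THERMAL_THROTTLE_MAX is 100, the firmware receives:

	/* level 0: duty_cycle = 100
	 * level 1: duty_cycle = 50
	 * level 2: duty_cycle = 25
	 * level 3: duty_cycle = 12   (integer halving)
	 */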
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h
index 29b5434bfdb8..b9ea297f382c 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h
@@ -278,6 +278,7 @@ enum {
MCU_WA_PARAM_PDMA_RX = 0x04,
MCU_WA_PARAM_CPU_UTIL = 0x0b,
MCU_WA_PARAM_RED = 0x0e,
+ MCU_WA_PARAM_RED_SETTING = 0x40,
};
enum mcu_mmps_mode {
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
index 8388e2a65853..225a19604d3e 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
@@ -4,10 +4,12 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/platform_device.h>
+#include <linux/rtnetlink.h>
#include <linux/pci.h>
#include "mt7915.h"
#include "mac.h"
+#include "mcu.h"
#include "../trace.h"
#include "../dma.h"
@@ -495,7 +497,7 @@ static u32 __mt7915_reg_addr(struct mt7915_dev *dev, u32 addr)
if (dev_is_pci(dev->mt76.dev) &&
((addr >= MT_CBTOP1_PHY_START && addr <= MT_CBTOP1_PHY_END) ||
- (addr >= MT_CBTOP2_PHY_START && addr <= MT_CBTOP2_PHY_END)))
+ addr >= MT_CBTOP2_PHY_START))
return mt7915_reg_map_l1(dev, addr);
/* CONN_INFRA: convert to physical addr and use layer 1 remap */
@@ -594,7 +596,6 @@ static void mt7915_mmio_wed_offload_disable(struct mtk_wed_device *wed)
static void mt7915_mmio_wed_release_rx_buf(struct mtk_wed_device *wed)
{
struct mt7915_dev *dev;
- struct page *page;
int i;
dev = container_of(wed, struct mt7915_dev, mt76.mmio.wed);
@@ -605,58 +606,50 @@ static void mt7915_mmio_wed_release_rx_buf(struct mtk_wed_device *wed)
if (!t || !t->ptr)
continue;
- dma_unmap_single(dev->mt76.dma_dev, t->dma_addr,
- wed->wlan.rx_size, DMA_FROM_DEVICE);
- skb_free_frag(t->ptr);
+ mt76_put_page_pool_buf(t->ptr, false);
t->ptr = NULL;
mt76_put_rxwi(&dev->mt76, t);
}
- if (!wed->rx_buf_ring.rx_page.va)
- return;
-
- page = virt_to_page(wed->rx_buf_ring.rx_page.va);
- __page_frag_cache_drain(page, wed->rx_buf_ring.rx_page.pagecnt_bias);
- memset(&wed->rx_buf_ring.rx_page, 0, sizeof(wed->rx_buf_ring.rx_page));
+ mt76_free_pending_rxwi(&dev->mt76);
}
static u32 mt7915_mmio_wed_init_rx_buf(struct mtk_wed_device *wed, int size)
{
struct mtk_rxbm_desc *desc = wed->rx_buf_ring.desc;
+ struct mt76_txwi_cache *t = NULL;
struct mt7915_dev *dev;
- u32 length;
- int i;
+ struct mt76_queue *q;
+ int i, len;
dev = container_of(wed, struct mt7915_dev, mt76.mmio.wed);
- length = SKB_DATA_ALIGN(NET_SKB_PAD + wed->wlan.rx_size +
- sizeof(struct skb_shared_info));
+ q = &dev->mt76.q_rx[MT_RXQ_MAIN];
+ len = SKB_WITH_OVERHEAD(q->buf_size);
for (i = 0; i < size; i++) {
- struct mt76_txwi_cache *t = mt76_get_rxwi(&dev->mt76);
- dma_addr_t phy_addr;
+ enum dma_data_direction dir;
+ dma_addr_t addr;
+ u32 offset;
int token;
- void *ptr;
+ void *buf;
- ptr = page_frag_alloc(&wed->rx_buf_ring.rx_page, length,
- GFP_KERNEL);
- if (!ptr)
+ t = mt76_get_rxwi(&dev->mt76);
+ if (!t)
goto unmap;
- phy_addr = dma_map_single(dev->mt76.dma_dev, ptr,
- wed->wlan.rx_size,
- DMA_TO_DEVICE);
- if (unlikely(dma_mapping_error(dev->mt76.dev, phy_addr))) {
- skb_free_frag(ptr);
+ buf = mt76_get_page_pool_buf(q, &offset, q->buf_size);
+ if (!buf)
goto unmap;
- }
- desc->buf0 = cpu_to_le32(phy_addr);
- token = mt76_rx_token_consume(&dev->mt76, ptr, t, phy_addr);
+ addr = page_pool_get_dma_addr(virt_to_head_page(buf)) + offset;
+ dir = page_pool_get_dma_dir(q->page_pool);
+ dma_sync_single_for_device(dev->mt76.dma_dev, addr, len, dir);
+
+ desc->buf0 = cpu_to_le32(addr);
+ token = mt76_rx_token_consume(&dev->mt76, buf, t, addr);
if (token < 0) {
- dma_unmap_single(dev->mt76.dma_dev, phy_addr,
- wed->wlan.rx_size, DMA_TO_DEVICE);
- skb_free_frag(ptr);
+ mt76_put_page_pool_buf(buf, false);
goto unmap;
}
@@ -668,6 +661,8 @@ static u32 mt7915_mmio_wed_init_rx_buf(struct mtk_wed_device *wed, int size)
return 0;
unmap:
+ if (t)
+ mt76_put_rxwi(&dev->mt76, t);
mt7915_mmio_wed_release_rx_buf(wed);
return -ENOMEM;
}
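The WED rx buffers now come from the per-queue page pool instead of a private page_frag cache, so allocation, DMA handling, and release all go through mt76 helpers. Lifecycle sketch, using only calls that appear in the hunk above:

	buf = mt76_get_page_pool_buf(q, &offset, q->buf_size);
	addr = page_pool_get_dma_addr(virt_to_head_page(buf)) + offset;
	dma_sync_single_for_device(dev->mt76.dma_dev, addr,
				   SKB_WITH_OVERHEAD(q->buf_size),
				   page_pool_get_dma_dir(q->page_pool));
	token = mt76_rx_token_consume(&dev->mt76, buf, t, addr);

	/* on teardown: */
	mt76_put_page_pool_buf(buf, false);	/* replaces skb_free_frag() */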
@@ -696,6 +691,42 @@ static void mt7915_mmio_wed_update_rx_stats(struct mtk_wed_device *wed,
rcu_read_unlock();
}
+
+static int mt7915_mmio_wed_reset(struct mtk_wed_device *wed)
+{
+ struct mt76_dev *mdev = container_of(wed, struct mt76_dev, mmio.wed);
+ struct mt7915_dev *dev = container_of(mdev, struct mt7915_dev, mt76);
+ struct mt76_phy *mphy = &dev->mphy;
+ int ret;
+
+ ASSERT_RTNL();
+
+ if (test_and_set_bit(MT76_STATE_WED_RESET, &mphy->state))
+ return -EBUSY;
+
+ ret = mt7915_mcu_set_ser(dev, SER_RECOVER, SER_SET_RECOVER_L1,
+ mphy->band_idx);
+ if (ret)
+ goto out;
+
+ rtnl_unlock();
+ if (!wait_for_completion_timeout(&mdev->mmio.wed_reset, 20 * HZ)) {
+ dev_err(mdev->dev, "wed reset timeout\n");
+ ret = -ETIMEDOUT;
+ }
+ rtnl_lock();
+out:
+ clear_bit(MT76_STATE_WED_RESET, &mphy->state);
+
+ return ret;
+}
+
+static void mt7915_mmio_wed_reset_complete(struct mtk_wed_device *wed)
+{
+ struct mt76_dev *dev = container_of(wed, struct mt76_dev, mmio.wed);
+
+ complete(&dev->mmio.wed_reset_complete);
+}
#endif
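The two new callbacks implement a reset handshake with the WED core. mt7915_mmio_wed_reset() runs under RTNL on the WED side: it triggers an L1 SER recovery, then drops RTNL while waiting (up to 20 s) for the SER path to reach mt7915_dma_wed_reset(), which completes mmio.wed_reset and in turn waits (3 s) for the WED core to call back mt7915_mmio_wed_reset_complete(). In sequence:

	/* WED core                          mt7915 SER path
	 *   wlan.reset()
	 *     set MT76_STATE_WED_RESET
	 *     trigger L1 SER recovery  --->  mt7915_mac_reset_work()
	 *     wait(wed_reset)  <----------   complete(&mmio.wed_reset)
	 *   ...WED rings re-initialized...   wait(wed_reset_complete)
	 *   wlan.reset_complete()  ------>   complete(&mmio.wed_reset_complete)
	 */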
int mt7915_mmio_wed_init(struct mt7915_dev *dev, void *pdev_ptr,
@@ -751,7 +782,7 @@ int mt7915_mmio_wed_init(struct mt7915_dev *dev, void *pdev_ptr,
wed->wlan.wpdma_rx_glo = res->start + MT_WPDMA_GLO_CFG;
wed->wlan.wpdma_rx = res->start + MT_RXQ_WED_DATA_RING_BASE;
}
- wed->wlan.nbuf = 4096;
+ wed->wlan.nbuf = MT7915_HW_TOKEN_SIZE;
wed->wlan.tx_tbit[0] = is_mt7915(&dev->mt76) ? 4 : 30;
wed->wlan.tx_tbit[1] = is_mt7915(&dev->mt76) ? 5 : 31;
wed->wlan.txfree_tbit = is_mt7986(&dev->mt76) ? 2 : 1;
@@ -778,6 +809,8 @@ int mt7915_mmio_wed_init(struct mt7915_dev *dev, void *pdev_ptr,
wed->wlan.init_rx_buf = mt7915_mmio_wed_init_rx_buf;
wed->wlan.release_rx_buf = mt7915_mmio_wed_release_rx_buf;
wed->wlan.update_wo_rx_stats = mt7915_mmio_wed_update_rx_stats;
+ wed->wlan.reset = mt7915_mmio_wed_reset;
+ wed->wlan.reset_complete = mt7915_mmio_wed_reset_complete;
dev->mt76.rx_token_size = wed->wlan.rx_npkt;
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
index 6351feba6bdf..3cbfb9b6a305 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
@@ -53,6 +53,7 @@
#define MT7916_EEPROM_SIZE 4096
#define MT7915_EEPROM_BLOCK_SIZE 16
+#define MT7915_HW_TOKEN_SIZE 4096
#define MT7915_TOKEN_SIZE 8192
#define MT7915_CFEND_RATE_DEFAULT 0x49 /* OFDM 24M */
@@ -70,6 +71,11 @@
#define MT7915_WED_RX_TOKEN_SIZE 12288
+#define MT7915_CRIT_TEMP_IDX 0
+#define MT7915_MAX_TEMP_IDX 1
+#define MT7915_CRIT_TEMP 110
+#define MT7915_MAX_TEMP 120
+
struct mt7915_vif;
struct mt7915_sta;
struct mt7915_dfs_pulse;
@@ -543,6 +549,7 @@ int mt7915_mcu_apply_tx_dpd(struct mt7915_phy *phy);
int mt7915_mcu_get_chan_mib_info(struct mt7915_phy *phy, bool chan_switch);
int mt7915_mcu_get_temperature(struct mt7915_phy *phy);
int mt7915_mcu_set_thermal_throttling(struct mt7915_phy *phy, u8 state);
+int mt7915_mcu_set_thermal_protect(struct mt7915_phy *phy);
int mt7915_mcu_get_rx_rate(struct mt7915_phy *phy, struct ieee80211_vif *vif,
struct ieee80211_sta *sta, struct rate_info *rate);
int mt7915_mcu_rdd_background_enable(struct mt7915_phy *phy,
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/regs.h b/drivers/net/wireless/mediatek/mt76/mt7915/regs.h
index aca1b2f1e9e3..c8e478a55081 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/regs.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/regs.h
@@ -803,7 +803,6 @@ enum offs_rev {
#define MT_CBTOP1_PHY_START 0x70000000
#define MT_CBTOP1_PHY_END __REG(CBTOP1_PHY_END)
#define MT_CBTOP2_PHY_START 0xf0000000
-#define MT_CBTOP2_PHY_END 0xffffffff
#define MT_INFRA_MCU_START 0x7c000000
#define MT_INFRA_MCU_END __REG(INFRA_MCU_ADDR_END)
#define MT_CONN_INFRA_OFFSET(p) ((p) - MT_INFRA_BASE)
@@ -1055,6 +1054,7 @@ enum offs_rev {
#define MT_LED_CTRL(_n) MT_LED_PHYS(0x00 + ((_n) * 4))
#define MT_LED_CTRL_KICK BIT(7)
+#define MT_LED_CTRL_BAND BIT(4)
#define MT_LED_CTRL_BLINK_MODE BIT(2)
#define MT_LED_CTRL_POLARITY BIT(1)
@@ -1062,11 +1062,18 @@ enum offs_rev {
#define MT_LED_TX_BLINK_ON_MASK GENMASK(7, 0)
#define MT_LED_TX_BLINK_OFF_MASK GENMASK(15, 8)
+#define MT_LED_STATUS_0(_n) MT_LED_PHYS(0x20 + ((_n) * 8))
+#define MT_LED_STATUS_1(_n) MT_LED_PHYS(0x24 + ((_n) * 8))
+#define MT_LED_STATUS_OFF GENMASK(31, 24)
+#define MT_LED_STATUS_ON GENMASK(23, 16)
+#define MT_LED_STATUS_DURATION GENMASK(15, 0)
+
#define MT_LED_EN(_n) MT_LED_PHYS(0x40 + ((_n) * 4))
+#define MT_LED_GPIO_MUX0 0x70005050 /* GPIO 1 and GPIO 2 */
+#define MT_LED_GPIO_MUX1 0x70005054 /* GPIO 14 and 15 */
#define MT_LED_GPIO_MUX2 0x70005058 /* GPIO 18 */
-#define MT_LED_GPIO_MUX3 0x7000505C /* GPIO 26 */
-#define MT_LED_GPIO_SEL_MASK GENMASK(11, 8)
+#define MT_LED_GPIO_MUX3 0x7000505c /* GPIO 26 */
/* MT TOP */
#define MT_TOP_BASE 0x18060000
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/soc.c b/drivers/net/wireless/mediatek/mt76/mt7915/soc.c
index c06c56a0270d..2ac0a0f2859c 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/soc.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/soc.c
@@ -278,6 +278,7 @@ static int mt7986_wmac_coninfra_setup(struct mt7915_dev *dev)
return -EINVAL;
rmem = of_reserved_mem_lookup(np);
+ of_node_put(np);
if (!rmem)
return -EINVAL;
@@ -882,6 +883,8 @@ static int mt7986_wmac_wm_enable(struct mt7915_dev *dev, bool enable)
{
u32 cur;
+ mt76_wr(dev, MT_CONNINFRA_SKU_DEC_ADDR, 0);
+
mt76_rmw_field(dev, MT7986_TOP_WM_RESET,
MT7986_TOP_WM_RESET_MASK, enable);
if (!enable)
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/acpi_sar.c b/drivers/net/wireless/mediatek/mt76/mt7921/acpi_sar.c
index 47e034a9b003..48dd0decac5d 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/acpi_sar.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/acpi_sar.c
@@ -33,14 +33,17 @@ mt7921_acpi_read(struct mt7921_dev *dev, u8 *method, u8 **tbl, u32 *len)
sar_root->package.elements[0].type != ACPI_TYPE_INTEGER) {
dev_err(mdev->dev, "sar cnt = %d\n",
sar_root->package.count);
+ ret = -EINVAL;
goto free;
}
if (!*tbl) {
*tbl = devm_kzalloc(mdev->dev, sar_root->package.count,
GFP_KERNEL);
- if (!*tbl)
+ if (!*tbl) {
+ ret = -ENOMEM;
goto free;
+ }
}
if (len)
*len = sar_root->package.count;
@@ -52,9 +55,9 @@ mt7921_acpi_read(struct mt7921_dev *dev, u8 *method, u8 **tbl, u32 *len)
break;
*(*tbl + i) = (u8)sar_unit->integer.value;
}
-free:
ret = (i == sar_root->package.count) ? 0 : -EINVAL;
+free:
kfree(sar_root);
return ret;
@@ -135,6 +138,22 @@ mt7921_asar_acpi_read_mtgs(struct mt7921_dev *dev, u8 **table, u8 version)
return ret;
}
+/* MTFG : Flag Table */
+static int
+mt7921_asar_acpi_read_mtfg(struct mt7921_dev *dev, u8 **table)
+{
+ int len, ret;
+
+ ret = mt7921_acpi_read(dev, MT7921_ACPI_MTFG, table, &len);
+ if (ret)
+ return ret;
+
+ if (len < MT7921_ASAR_MIN_FG)
+ ret = -EINVAL;
+
+ return ret;
+}
+
int mt7921_init_acpi_sar(struct mt7921_dev *dev)
{
struct mt7921_acpi_sar *asar;
@@ -162,6 +181,12 @@ int mt7921_init_acpi_sar(struct mt7921_dev *dev)
asar->geo = NULL;
}
+ /* MTFG is optional */
+ ret = mt7921_asar_acpi_read_mtfg(dev, (u8 **)&asar->fg);
+ if (ret) {
+ devm_kfree(dev->mt76.dev, asar->fg);
+ asar->fg = NULL;
+ }
dev->phy.acpisar = asar;
return 0;
@@ -280,3 +305,36 @@ int mt7921_init_acpi_sar_power(struct mt7921_phy *phy, bool set_default)
return 0;
}
+
+u8 mt7921_acpi_get_flags(struct mt7921_phy *phy)
+{
+ struct mt7921_asar_fg *fg;
+ struct {
+ u8 acpi_idx;
+ u8 chip_idx;
+ } map[] = {
+ {1, 1},
+ {4, 2},
+ };
+ u8 flags = BIT(0);
+ int i, j;
+
+ if (!phy->acpisar)
+ return 0;
+
+ fg = phy->acpisar->fg;
+ if (!fg)
+ return flags;
+
+ /* pickup necessary settings per device and
+ * translate the index of bitmap for chip command.
+ */
+ for (i = 0; i < fg->nr_flag; i++)
+ for (j = 0; j < ARRAY_SIZE(map); j++)
+ if (fg->flag[i] == map[j].acpi_idx) {
+ flags |= BIT(map[j].chip_idx);
+ break;
+ }
+
+ return flags;
+}
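A worked example of the translation: if the ACPI MTFG table carries flags {1, 4}, the map sets BIT(1) and BIT(2) on top of the always-present BIT(0), so the CLC command's acpi_conf byte becomes 0x7. A missing flag table yields BIT(0) alone; a missing SAR root yields 0.

	/* fg->flag = {1, 4}  ->  flags = BIT(0) | BIT(1) | BIT(2) = 0x7 */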
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/acpi_sar.h b/drivers/net/wireless/mediatek/mt76/mt7921/acpi_sar.h
index 23f86bfae0c0..35268b0890ad 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/acpi_sar.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/acpi_sar.h
@@ -8,10 +8,12 @@
#define MT7921_ASAR_MAX_DYN 8
#define MT7921_ASAR_MIN_GEO 3
#define MT7921_ASAR_MAX_GEO 8
+#define MT7921_ASAR_MIN_FG 8
#define MT7921_ACPI_MTCL "MTCL"
#define MT7921_ACPI_MTDS "MTDS"
#define MT7921_ACPI_MTGS "MTGS"
+#define MT7921_ACPI_MTFG "MTFG"
struct mt7921_asar_dyn_limit {
u8 idx;
@@ -77,6 +79,15 @@ struct mt7921_asar_cl {
u8 cl6g[6];
} __packed;
+struct mt7921_asar_fg {
+ u8 names[4];
+ u8 version;
+ u8 rsvd;
+ u8 nr_flag;
+ u8 rsvd1;
+ u8 flag[0];
+} __packed;
+
struct mt7921_acpi_sar {
u8 ver;
union {
@@ -88,6 +99,7 @@ struct mt7921_acpi_sar {
struct mt7921_asar_geo_v2 *geo_v2;
};
struct mt7921_asar_cl *countrylist;
+ struct mt7921_asar_fg *fg;
};
#endif
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
index 542dfd425129..80c71acfe159 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
@@ -120,6 +120,7 @@ mt7921_init_wiphy(struct ieee80211_hw *hw)
wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_BEACON_RATE_HT);
wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_BEACON_RATE_VHT);
wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_BEACON_RATE_HE);
+ wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_ACK_SIGNAL_SUPPORT);
ieee80211_hw_set(hw, SINGLE_SCAN_ON_ALL_BANDS);
ieee80211_hw_set(hw, HAS_RATE_CONTROL);
@@ -142,6 +143,8 @@ mt7921_init_wiphy(struct ieee80211_hw *hw)
static void
mt7921_mac_init_band(struct mt7921_dev *dev, u8 band)
{
+ u32 mask, set;
+
mt76_rmw_field(dev, MT_TMAC_CTCR0(band),
MT_TMAC_CTCR0_INS_DDLMT_REFTIME, 0x3f);
mt76_set(dev, MT_TMAC_CTCR0(band),
@@ -158,6 +161,12 @@ mt7921_mac_init_band(struct mt7921_dev *dev, u8 band)
mt76_rmw_field(dev, MT_DMA_DCR0(band), MT_DMA_DCR0_MAX_RX_LEN, 1536);
/* disable rx rate report by default due to hw issues */
mt76_clear(dev, MT_DMA_DCR0(band), MT_DMA_DCR0_RXD_G5_EN);
+
+ /* filter out non-resp frames and get instantaneous signal reporting */
+ mask = MT_WTBLOFF_TOP_RSCR_RCPI_MODE | MT_WTBLOFF_TOP_RSCR_RCPI_PARAM;
+ set = FIELD_PREP(MT_WTBLOFF_TOP_RSCR_RCPI_MODE, 0) |
+ FIELD_PREP(MT_WTBLOFF_TOP_RSCR_RCPI_PARAM, 0x3);
+ mt76_rmw(dev, MT_WTBLOFF_TOP_RSCR(band), mask, set);
}
u8 mt7921_check_offload_capability(struct device *dev, const char *fw_wm)
@@ -175,7 +184,7 @@ u8 mt7921_check_offload_capability(struct device *dev, const char *fw_wm)
if (!fw || !fw->data || fw->size < sizeof(*hdr)) {
dev_err(dev, "Invalid firmware\n");
- return -EINVAL;
+ goto out;
}
data = fw->data;
@@ -206,6 +215,7 @@ u8 mt7921_check_offload_capability(struct device *dev, const char *fw_wm)
data += le16_to_cpu(rel_info->len) + rel_info->pad_len;
}
+out:
release_firmware(fw);
return features ? features->data : 0;
@@ -228,8 +238,6 @@ int mt7921_mac_init(struct mt7921_dev *dev)
for (i = 0; i < 2; i++)
mt7921_mac_init_band(dev, i);
- dev->mt76.rxfilter = mt76_rr(dev, MT_WF_RFCR(0));
-
return mt76_connac_mcu_set_rts_thresh(&dev->mt76, 0x92b, 0);
}
EXPORT_SYMBOL_GPL(mt7921_mac_init);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
index 82db3762be33..557c20190c2b 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
@@ -59,6 +59,7 @@ void mt7921_mac_sta_poll(struct mt7921_dev *dev)
u32 tx_time[IEEE80211_NUM_ACS], rx_time[IEEE80211_NUM_ACS];
LIST_HEAD(sta_poll_list);
struct rate_info *rate;
+ s8 rssi[4];
int i;
spin_lock_bh(&dev->sta_poll_lock);
@@ -160,6 +161,20 @@ void mt7921_mac_sta_poll(struct mt7921_dev *dev)
else
rate->flags &= ~RATE_INFO_FLAGS_SHORT_GI;
}
+
+ /* get signal strength of resp frames (CTS/BA/ACK) */
+ addr = mt7921_mac_wtbl_lmac_addr(idx, 30);
+ val = mt76_rr(dev, addr);
+
+ rssi[0] = to_rssi(GENMASK(7, 0), val);
+ rssi[1] = to_rssi(GENMASK(15, 8), val);
+ rssi[2] = to_rssi(GENMASK(23, 16), val);
+ rssi[3] = to_rssi(GENMASK(31, 24), val);
+
+ msta->ack_signal =
+ mt76_rx_signal(msta->vif->phy->mt76->antenna_mask, rssi);
+
+ ewma_avg_signal_add(&msta->avg_ack_signal, -msta->ack_signal);
}
}
EXPORT_SYMBOL_GPL(mt7921_mac_sta_poll);
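DECLARE_EWMA(avg_signal, 10, 8) (added in mt7921.h below) keeps an exponentially weighted moving average, apparently with 10 bits of fractional precision and a weight factor of 1/8. Since the EWMA helpers average unsigned values, the negative dBm ack signal is negated on insertion and negated back when reported:

	ewma_avg_signal_add(&msta->avg_ack_signal, -msta->ack_signal);
	/* later, in mt7921_sta_statistics(): */
	sinfo->avg_ack_signal = -(s8)ewma_avg_signal_read(&msta->avg_ack_signal);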
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
index 76ac5069638f..75eaf86c6a78 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
@@ -422,15 +422,15 @@ void mt7921_roc_timer(struct timer_list *timer)
static int mt7921_abort_roc(struct mt7921_phy *phy, struct mt7921_vif *vif)
{
- int err;
-
- if (!test_and_clear_bit(MT76_STATE_ROC, &phy->mt76->state))
- return 0;
+ int err = 0;
del_timer_sync(&phy->roc_timer);
cancel_work_sync(&phy->roc_work);
- err = mt7921_mcu_abort_roc(phy, vif, phy->roc_token_id);
- clear_bit(MT76_STATE_ROC, &phy->mt76->state);
+
+ mt7921_mutex_acquire(phy->dev);
+ if (test_and_clear_bit(MT76_STATE_ROC, &phy->mt76->state))
+ err = mt7921_mcu_abort_roc(phy, vif, phy->roc_token_id);
+ mt7921_mutex_release(phy->dev);
return err;
}
@@ -487,13 +487,8 @@ static int mt7921_cancel_remain_on_channel(struct ieee80211_hw *hw,
{
struct mt7921_vif *mvif = (struct mt7921_vif *)vif->drv_priv;
struct mt7921_phy *phy = mt7921_hw_phy(hw);
- int err;
- mt7921_mutex_acquire(phy->dev);
- err = mt7921_abort_roc(phy, mvif);
- mt7921_mutex_release(phy->dev);
-
- return err;
+ return mt7921_abort_roc(phy, mvif);
}
static int mt7921_set_channel(struct mt7921_phy *phy)
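mt7921_abort_roc() now takes the device mutex itself, and only after the timer and work are cancelled. Cancelling outside the lock matters: the roc work path takes the same mutex, so cancel_work_sync() under it could deadlock (this is the presumed motivation; the hunk itself only reorders the operations). The ordering, condensed:

	del_timer_sync(&phy->roc_timer);	/* outside the mutex */
	cancel_work_sync(&phy->roc_work);	/* the work may take the mutex */
	mt7921_mutex_acquire(phy->dev);
	/* ...abort ROC with firmware if MT76_STATE_ROC was set... */
	mt7921_mutex_release(phy->dev);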
@@ -681,7 +676,6 @@ static int mt7921_config(struct ieee80211_hw *hw, u32 changed)
ieee80211_iterate_active_interfaces(hw,
IEEE80211_IFACE_ITER_RESUME_ALL,
mt7921_sniffer_interface_iter, dev);
- dev->mt76.rxfilter = mt76_rr(dev, MT_WF_RFCR(0));
}
out:
@@ -710,53 +704,12 @@ static void mt7921_configure_filter(struct ieee80211_hw *hw,
u64 multicast)
{
struct mt7921_dev *dev = mt7921_hw_dev(hw);
- u32 ctl_flags = MT_WF_RFCR1_DROP_ACK |
- MT_WF_RFCR1_DROP_BF_POLL |
- MT_WF_RFCR1_DROP_BA |
- MT_WF_RFCR1_DROP_CFEND |
- MT_WF_RFCR1_DROP_CFACK;
- u32 flags = 0;
-
-#define MT76_FILTER(_flag, _hw) do { \
- flags |= *total_flags & FIF_##_flag; \
- dev->mt76.rxfilter &= ~(_hw); \
- dev->mt76.rxfilter |= !(flags & FIF_##_flag) * (_hw); \
- } while (0)
mt7921_mutex_acquire(dev);
-
- dev->mt76.rxfilter &= ~(MT_WF_RFCR_DROP_OTHER_BSS |
- MT_WF_RFCR_DROP_OTHER_BEACON |
- MT_WF_RFCR_DROP_FRAME_REPORT |
- MT_WF_RFCR_DROP_PROBEREQ |
- MT_WF_RFCR_DROP_MCAST_FILTERED |
- MT_WF_RFCR_DROP_MCAST |
- MT_WF_RFCR_DROP_BCAST |
- MT_WF_RFCR_DROP_DUPLICATE |
- MT_WF_RFCR_DROP_A2_BSSID |
- MT_WF_RFCR_DROP_UNWANTED_CTL |
- MT_WF_RFCR_DROP_STBC_MULTI);
-
- MT76_FILTER(OTHER_BSS, MT_WF_RFCR_DROP_OTHER_TIM |
- MT_WF_RFCR_DROP_A3_MAC |
- MT_WF_RFCR_DROP_A3_BSSID);
-
- MT76_FILTER(FCSFAIL, MT_WF_RFCR_DROP_FCSFAIL);
-
- MT76_FILTER(CONTROL, MT_WF_RFCR_DROP_CTS |
- MT_WF_RFCR_DROP_RTS |
- MT_WF_RFCR_DROP_CTL_RSV |
- MT_WF_RFCR_DROP_NDPA);
-
- *total_flags = flags;
- mt76_wr(dev, MT_WF_RFCR(0), dev->mt76.rxfilter);
-
- if (*total_flags & FIF_CONTROL)
- mt76_clear(dev, MT_WF_RFCR1(0), ctl_flags);
- else
- mt76_set(dev, MT_WF_RFCR1(0), ctl_flags);
-
+ mt7921_mcu_set_rxfilter(dev, *total_flags, 0, 0);
mt7921_mutex_release(dev);
+
+ *total_flags &= (FIF_OTHER_BSS | FIF_FCSFAIL | FIF_CONTROL);
}
static void mt7921_bss_info_changed(struct ieee80211_hw *hw,
@@ -860,6 +813,8 @@ void mt7921_mac_sta_assoc(struct mt76_dev *mdev, struct ieee80211_vif *vif,
mt76_connac_mcu_uni_add_bss(&dev->mphy, vif, &mvif->sta.wcid,
true, mvif->ctx);
+ ewma_avg_signal_init(&msta->avg_ack_signal);
+
mt7921_mac_wtbl_update(dev, msta->wcid.idx,
MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
memset(msta->airtime_ac, 0, sizeof(msta->airtime_ac));
@@ -1135,17 +1090,34 @@ static void
mt7921_get_et_strings(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
u32 sset, u8 *data)
{
+ struct mt7921_dev *dev = mt7921_hw_dev(hw);
+
if (sset != ETH_SS_STATS)
return;
memcpy(data, *mt7921_gstrings_stats, sizeof(mt7921_gstrings_stats));
+
+ if (mt76_is_sdio(&dev->mt76))
+ return;
+
+ data += sizeof(mt7921_gstrings_stats);
+ page_pool_ethtool_stats_get_strings(data);
}
static int
mt7921_get_et_sset_count(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
int sset)
{
- return sset == ETH_SS_STATS ? ARRAY_SIZE(mt7921_gstrings_stats) : 0;
+ struct mt7921_dev *dev = mt7921_hw_dev(hw);
+
+ if (sset != ETH_SS_STATS)
+ return 0;
+
+ if (mt76_is_sdio(&dev->mt76))
+ return ARRAY_SIZE(mt7921_gstrings_stats);
+
+ return ARRAY_SIZE(mt7921_gstrings_stats) +
+ page_pool_ethtool_stats_get_count();
}
static void
@@ -1157,7 +1129,7 @@ mt7921_ethtool_worker(void *wi_data, struct ieee80211_sta *sta)
if (msta->vif->mt76.idx != wi->idx)
return;
- mt76_ethtool_worker(wi, &msta->wcid.stats);
+ mt76_ethtool_worker(wi, &msta->wcid.stats, false);
}
static
@@ -1165,6 +1137,7 @@ void mt7921_get_et_stats(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
struct ethtool_stats *stats, u64 *data)
{
struct mt7921_vif *mvif = (struct mt7921_vif *)vif->drv_priv;
+ int stats_size = ARRAY_SIZE(mt7921_gstrings_stats);
struct mt7921_phy *phy = mt7921_hw_phy(hw);
struct mt7921_dev *dev = phy->dev;
struct mib_stats *mib = &phy->mib;
@@ -1220,9 +1193,14 @@ void mt7921_get_et_stats(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
return;
ei += wi.worker_stat_count;
- if (ei != ARRAY_SIZE(mt7921_gstrings_stats))
- dev_err(dev->mt76.dev, "ei: %d SSTATS_LEN: %zu",
- ei, ARRAY_SIZE(mt7921_gstrings_stats));
+
+ if (!mt76_is_sdio(&dev->mt76)) {
+ mt76_ethtool_page_pool_stats(&dev->mt76, &data[ei], &ei);
+ stats_size += page_pool_ethtool_stats_get_count();
+ }
+
+ if (ei != stats_size)
+ dev_err(dev->mt76.dev, "ei: %d SSTATS_LEN: %d", ei, stats_size);
}
static u64
@@ -1430,6 +1408,12 @@ static void mt7921_sta_statistics(struct ieee80211_hw *hw,
}
sinfo->txrate.flags = txrate->flags;
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE);
+
+ sinfo->ack_signal = (s8)msta->ack_signal;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_ACK_SIGNAL);
+
+ sinfo->avg_ack_signal = -(s8)ewma_avg_signal_read(&msta->avg_ack_signal);
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_ACK_SIGNAL_AVG);
}
#ifdef CONFIG_PM
@@ -1711,7 +1695,10 @@ static void mt7921_ctx_iter(void *priv, u8 *mac,
if (ctx != mvif->ctx)
return;
- mt76_connac_mcu_uni_set_chctx(mvif->phy->mt76, &mvif->mt76, ctx);
+ if (vif->type == NL80211_IFTYPE_MONITOR)

+ mt7921_mcu_config_sniffer(mvif, ctx);
+ else
+ mt76_connac_mcu_uni_set_chctx(mvif->phy->mt76, &mvif->mt76, ctx);
}
static void
@@ -1778,11 +1765,8 @@ static void mt7921_mgd_complete_tx(struct ieee80211_hw *hw,
struct ieee80211_prep_tx_info *info)
{
struct mt7921_vif *mvif = (struct mt7921_vif *)vif->drv_priv;
- struct mt7921_dev *dev = mt7921_hw_dev(hw);
- mt7921_mutex_acquire(dev);
mt7921_abort_roc(mvif->phy, mvif);
- mt7921_mutex_release(dev);
}
const struct ieee80211_ops mt7921_ops = {
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
index fb9c0f66cb27..c5e7ad06f877 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
@@ -174,7 +174,7 @@ mt7921_mcu_uni_roc_event(struct mt7921_dev *dev, struct sk_buff *skb)
wake_up(&dev->phy.roc_wait);
duration = le32_to_cpu(grant->max_interval);
mod_timer(&dev->phy.roc_timer,
- round_jiffies_up(jiffies + msecs_to_jiffies(duration)));
+ jiffies + msecs_to_jiffies(duration));
}
static void
@@ -1019,6 +1019,8 @@ int mt7921_mcu_set_beacon_filter(struct mt7921_dev *dev,
struct ieee80211_vif *vif,
bool enable)
{
+#define MT7921_FIF_BIT_CLR BIT(1)
+#define MT7921_FIF_BIT_SET BIT(0)
int err;
if (enable) {
@@ -1026,7 +1028,11 @@ int mt7921_mcu_set_beacon_filter(struct mt7921_dev *dev,
if (err)
return err;
- mt76_set(dev, MT_WF_RFCR(0), MT_WF_RFCR_DROP_OTHER_BEACON);
+ err = mt7921_mcu_set_rxfilter(dev, 0,
+ MT7921_FIF_BIT_SET,
+ MT_WF_RFCR_DROP_OTHER_BEACON);
+ if (err)
+ return err;
return 0;
}
@@ -1035,7 +1041,11 @@ int mt7921_mcu_set_beacon_filter(struct mt7921_dev *dev,
if (err)
return err;
- mt76_clear(dev, MT_WF_RFCR(0), MT_WF_RFCR_DROP_OTHER_BEACON);
+ err = mt7921_mcu_set_rxfilter(dev, 0,
+ MT7921_FIF_BIT_CLR,
+ MT_WF_RFCR_DROP_OTHER_BEACON);
+ if (err)
+ return err;
return 0;
}
@@ -1093,6 +1103,74 @@ int mt7921_mcu_set_sniffer(struct mt7921_dev *dev, struct ieee80211_vif *vif,
true);
}
+int mt7921_mcu_config_sniffer(struct mt7921_vif *vif,
+ struct ieee80211_chanctx_conf *ctx)
+{
+ struct cfg80211_chan_def *chandef = &ctx->def;
+ int freq1 = chandef->center_freq1, freq2 = chandef->center_freq2;
+ const u8 ch_band[] = {
+ [NL80211_BAND_2GHZ] = 1,
+ [NL80211_BAND_5GHZ] = 2,
+ [NL80211_BAND_6GHZ] = 3,
+ };
+ const u8 ch_width[] = {
+ [NL80211_CHAN_WIDTH_20_NOHT] = 0,
+ [NL80211_CHAN_WIDTH_20] = 0,
+ [NL80211_CHAN_WIDTH_40] = 0,
+ [NL80211_CHAN_WIDTH_80] = 1,
+ [NL80211_CHAN_WIDTH_160] = 2,
+ [NL80211_CHAN_WIDTH_80P80] = 3,
+ [NL80211_CHAN_WIDTH_5] = 4,
+ [NL80211_CHAN_WIDTH_10] = 5,
+ [NL80211_CHAN_WIDTH_320] = 6,
+ };
+ struct {
+ struct {
+ u8 band_idx;
+ u8 pad[3];
+ } __packed hdr;
+ struct config_tlv {
+ __le16 tag;
+ __le16 len;
+ u16 aid;
+ u8 ch_band;
+ u8 bw;
+ u8 control_ch;
+ u8 sco;
+ u8 center_ch;
+ u8 center_ch2;
+ u8 drop_err;
+ u8 pad[3];
+ } __packed tlv;
+ } __packed req = {
+ .hdr = {
+ .band_idx = vif->mt76.band_idx,
+ },
+ .tlv = {
+ .tag = cpu_to_le16(1),
+ .len = cpu_to_le16(sizeof(req.tlv)),
+ .control_ch = chandef->chan->hw_value,
+ .center_ch = ieee80211_frequency_to_channel(freq1),
+ .drop_err = 1,
+ },
+ };
+ if (chandef->chan->band < ARRAY_SIZE(ch_band))
+ req.tlv.ch_band = ch_band[chandef->chan->band];
+ if (chandef->width < ARRAY_SIZE(ch_width))
+ req.tlv.bw = ch_width[chandef->width];
+
+ if (freq2)
+ req.tlv.center_ch2 = ieee80211_frequency_to_channel(freq2);
+
+ if (req.tlv.control_ch < req.tlv.center_ch)
+ req.tlv.sco = 1; /* SCA */
+ else if (req.tlv.control_ch > req.tlv.center_ch)
+ req.tlv.sco = 3; /* SCB */
+
+ return mt76_mcu_send_msg(vif->phy->mt76->dev, MCU_UNI_CMD(SNIFFER),
+ &req, sizeof(req), true);
+}
+
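The sco field filled in by mt7921_mcu_config_sniffer() above encodes the secondary-channel position relative to the control channel (1 = SCA, secondary above; 3 = SCB, secondary below). A minimal user-space sketch of the same derivation, using a simplified 5 GHz frequency-to-channel mapping (the in-kernel ieee80211_frequency_to_channel() covers all bands):

#include <stdio.h>

/* simplified 5 GHz mapping; the kernel helper also handles 2.4/6 GHz */
static int freq_to_chan_5ghz(int freq_mhz)
{
	return (freq_mhz - 5000) / 5;
}

int main(void)
{
	int control_ch = freq_to_chan_5ghz(5180);	/* control at 5180 MHz -> ch 36 */
	int center_ch  = freq_to_chan_5ghz(5190);	/* 40 MHz center at 5190 MHz -> ch 38 */
	int sco = 0;					/* SCN: no secondary channel */

	if (control_ch < center_ch)
		sco = 1;	/* SCA: secondary channel above */
	else if (control_ch > center_ch)
		sco = 3;	/* SCB: secondary channel below */

	printf("control %d center %d sco %d\n", control_ch, center_ch, sco);
	return 0;
}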
int
mt7921_mcu_uni_add_beacon_offload(struct mt7921_dev *dev,
struct ieee80211_hw *hw,
@@ -1184,13 +1262,15 @@ int __mt7921_mcu_set_clc(struct mt7921_dev *dev, u8 *alpha2,
__le16 len;
u8 idx;
u8 env;
- u8 pad1[2];
+ u8 acpi_conf;
+ u8 pad1;
u8 alpha2[2];
u8 type[2];
u8 rsvd[64];
} __packed req = {
.idx = idx,
.env = env_cap,
+ .acpi_conf = mt7921_acpi_get_flags(&dev->phy),
};
int ret, valid_cnt = 0;
u8 i, *pos;
@@ -1253,3 +1333,25 @@ int mt7921_mcu_set_clc(struct mt7921_dev *dev, u8 *alpha2,
}
return 0;
}
+
+int mt7921_mcu_set_rxfilter(struct mt7921_dev *dev, u32 fif,
+ u8 bit_op, u32 bit_map)
+{
+ struct {
+ u8 rsv[4];
+ u8 mode;
+ u8 rsv2[3];
+ __le32 fif;
+ __le32 bit_map; /* bit_* for bitmap update */
+ u8 bit_op;
+ u8 pad[51];
+ } __packed data = {
+ .mode = fif ? 1 : 2,
+ .fif = cpu_to_le32(fif),
+ .bit_map = cpu_to_le32(bit_map),
+ .bit_op = bit_op,
+ };
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_CE_CMD(SET_RX_FILTER),
+ &data, sizeof(data), false);
+}
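Firmware command buffers like the one in mt7921_mcu_set_rxfilter() above have to match the MCU's expected layout byte for byte, which is why the struct is __packed and padded to a fixed size. A standalone sketch of pinning that layout down at compile time (field sizes copied from the struct above; illustrative, not the driver's actual header):

#include <stdint.h>
#include <stddef.h>

struct set_rx_filter_req {
	uint8_t  rsv[4];
	uint8_t  mode;		/* 1: overwrite fif, 2: single-bit update */
	uint8_t  rsv2[3];
	uint32_t fif;		/* little-endian on the wire */
	uint32_t bit_map;
	uint8_t  bit_op;
	uint8_t  pad[51];
} __attribute__((packed));

/* 4 + 1 + 3 + 4 + 4 + 1 + 51 == 68 bytes */
_Static_assert(sizeof(struct set_rx_filter_req) == 68, "layout mismatch");
_Static_assert(offsetof(struct set_rx_filter_req, fif) == 8, "fif offset");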
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
index 15d6b7fe1c6c..1af70dac723b 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
@@ -144,6 +144,8 @@ enum mt7921_rxq_id {
MT7921_RXQ_MCU_WM = 0,
};
+DECLARE_EWMA(avg_signal, 10, 8)
+
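DECLARE_EWMA(avg_signal, 10, 8) generates a fixed-point exponentially weighted moving average: the value is stored shifted left by 10 bits of precision, and each new sample contributes 1/8 of itself. A self-contained model of the update rule (the kernel version lives in include/linux/average.h; this sketch mirrors its arithmetic):

#include <stdio.h>

#define EWMA_PRECISION	10	/* fixed-point fractional bits */
#define EWMA_WEIGHT	3	/* log2(8): each sample weighs 1/8 */

static unsigned long ewma_add(unsigned long internal, unsigned long val)
{
	if (!internal)
		return val << EWMA_PRECISION;
	return (((internal << EWMA_WEIGHT) - internal) +
		(val << EWMA_PRECISION)) >> EWMA_WEIGHT;
}

int main(void)
{
	unsigned long avg = 0;
	/* ack-signal magnitudes, e.g. 40 for -40 dBm, matching the
	 * negation applied when reporting avg_ack_signal above */
	unsigned long samples[] = { 40, 42, 38, 45 };

	for (unsigned int i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		avg = ewma_add(avg, samples[i]);

	printf("avg ack signal: -%lu dBm\n", avg >> EWMA_PRECISION);
	return 0;
}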
struct mt7921_sta {
struct mt76_wcid wcid; /* must be first */
@@ -152,6 +154,9 @@ struct mt7921_sta {
struct list_head poll_list;
u32 airtime_ac[8];
+ int ack_signal;
+ struct ewma_avg_signal avg_ack_signal;
+
unsigned long last_txs;
unsigned long ampdu_state;
@@ -383,6 +388,8 @@ int mt7921_mcu_get_rx_rate(struct mt7921_phy *phy, struct ieee80211_vif *vif,
struct ieee80211_sta *sta, struct rate_info *rate);
int mt7921_mcu_fw_log_2_host(struct mt7921_dev *dev, u8 ctrl);
void mt7921_mcu_rx_event(struct mt7921_dev *dev, struct sk_buff *skb);
+int mt7921_mcu_set_rxfilter(struct mt7921_dev *dev, u32 fif,
+ u8 bit_op, u32 bit_map);
static inline void mt7921_irq_enable(struct mt7921_dev *dev, u32 mask)
{
@@ -529,6 +536,8 @@ void mt7921_set_ipv6_ns_work(struct work_struct *work);
int mt7921_mcu_set_sniffer(struct mt7921_dev *dev, struct ieee80211_vif *vif,
bool enable);
+int mt7921_mcu_config_sniffer(struct mt7921_vif *vif,
+ struct ieee80211_chanctx_conf *ctx);
int mt7921_usb_sdio_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
enum mt76_txq_id qid, struct mt76_wcid *wcid,
@@ -554,6 +563,7 @@ int mt7921_mcu_uni_add_beacon_offload(struct mt7921_dev *dev,
#ifdef CONFIG_ACPI
int mt7921_init_acpi_sar(struct mt7921_dev *dev);
int mt7921_init_acpi_sar_power(struct mt7921_phy *phy, bool set_default);
+u8 mt7921_acpi_get_flags(struct mt7921_phy *phy);
#else
static inline int
mt7921_init_acpi_sar(struct mt7921_dev *dev)
@@ -566,6 +576,12 @@ mt7921_init_acpi_sar_power(struct mt7921_phy *phy, bool set_default)
{
return 0;
}
+
+static inline u8
+mt7921_acpi_get_flags(struct mt7921_phy *phy)
+{
+ return 0;
+}
#endif
int mt7921_set_tx_sar_pwr(struct ieee80211_hw *hw,
const struct cfg80211_sar_specs *sar);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
index 86340d3205c5..1aefbb6cf0ab 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
@@ -44,7 +44,6 @@ int mt7921e_mcu_init(struct mt7921_dev *dev)
.headroom = sizeof(struct mt76_connac2_mcu_txd),
.mcu_skb_send_msg = mt7921_mcu_send_message,
.mcu_parse_response = mt7921_mcu_parse_response,
- .mcu_restart = mt76_connac_mcu_restart,
};
int err;
@@ -69,8 +68,8 @@ int __mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
for (i = 0; i < MT7921_DRV_OWN_RETRY_COUNT; i++) {
mt76_wr(dev, MT_CONN_ON_LPCTL, PCIE_LPCR_HOST_CLR_OWN);
- if (mt76_poll_msec(dev, MT_CONN_ON_LPCTL,
- PCIE_LPCR_HOST_OWN_SYNC, 0, 50))
+ if (mt76_poll_msec_tick(dev, MT_CONN_ON_LPCTL,
+ PCIE_LPCR_HOST_OWN_SYNC, 0, 50, 1))
break;
}
@@ -110,8 +109,8 @@ int mt7921e_mcu_fw_pmctrl(struct mt7921_dev *dev)
for (i = 0; i < MT7921_DRV_OWN_RETRY_COUNT; i++) {
mt76_wr(dev, MT_CONN_ON_LPCTL, PCIE_LPCR_HOST_SET_OWN);
- if (mt76_poll_msec(dev, MT_CONN_ON_LPCTL,
- PCIE_LPCR_HOST_OWN_SYNC, 4, 50))
+ if (mt76_poll_msec_tick(dev, MT_CONN_ON_LPCTL,
+ PCIE_LPCR_HOST_OWN_SYNC, 4, 50, 1))
break;
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/regs.h b/drivers/net/wireless/mediatek/mt76/mt7921/regs.h
index c65582acfa55..e52977ff3349 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/regs.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/regs.h
@@ -80,6 +80,14 @@
#define MT_DMA_DCR0_MAX_RX_LEN GENMASK(15, 3)
#define MT_DMA_DCR0_RXD_G5_EN BIT(23)
+/* WTBLOFF TOP: band 0(0x820e9000), band 1(0x820f9000) */
+#define MT_WTBLOFF_TOP_BASE(_band) ((_band) ? 0x820f9000 : 0x820e9000)
+#define MT_WTBLOFF_TOP(_band, ofs) (MT_WTBLOFF_TOP_BASE(_band) + (ofs))
+
+#define MT_WTBLOFF_TOP_RSCR(_band) MT_WTBLOFF_TOP(_band, 0x008)
+#define MT_WTBLOFF_TOP_RSCR_RCPI_MODE GENMASK(31, 30)
+#define MT_WTBLOFF_TOP_RSCR_RCPI_PARAM GENMASK(25, 24)
+
/* LPON: band 0(0x24200), band 1(0xa4200) */
#define MT_WF_LPON_BASE(_band) ((_band) ? 0x820fb000 : 0x820eb000)
#define MT_WF_LPON(_band, ofs) (MT_WF_LPON_BASE(_band) + (ofs))
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/testmode.c b/drivers/net/wireless/mediatek/mt76/mt7921/testmode.c
index bdec8684ce94..7f408212e716 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/testmode.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/testmode.c
@@ -59,7 +59,6 @@ mt7921_tm_set(struct mt7921_dev *dev, struct mt7921_tm_cmd *req)
cancel_work_sync(&pm->wake_work);
__mt7921_mcu_drv_pmctrl(dev);
- mt76_wr(dev, MT_WF_RFCR(0), dev->mt76.rxfilter);
phy->test.state = MT76_TM_STATE_ON;
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/usb.c b/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
index 5321d20dcdcb..8fef09ed29c9 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
@@ -15,6 +15,9 @@
static const struct usb_device_id mt7921u_device_table[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(0x0e8d, 0x7961, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t)MT7921_FIRMWARE_WM },
+ /* Comfast CF-952AX */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x3574, 0x6211, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)MT7921_FIRMWARE_WM },
{ },
};
@@ -133,7 +136,6 @@ static int mt7921u_mcu_init(struct mt7921_dev *dev)
.tailroom = MT_USB_TAIL_SIZE,
.mcu_skb_send_msg = mt7921u_mcu_send_message,
.mcu_parse_response = mt7921_mcu_parse_response,
- .mcu_restart = mt76_connac_mcu_restart,
};
int ret;
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7996/debugfs.c
index 2e4a8909b9e8..9c5e9ac1c335 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/debugfs.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/debugfs.c
@@ -457,7 +457,7 @@ mt7996_hw_queue_read(struct seq_file *s, u32 size,
if (val & BIT(map[i].index))
continue;
- ctrl = BIT(31) | (map[i].pid << 10) | (map[i].qid << 24);
+ ctrl = BIT(31) | (map[i].pid << 10) | ((u32)map[i].qid << 24);
mt76_wr(dev, MT_FL_Q0_CTRL, ctrl);
head = mt76_get_field(dev, MT_FL_Q2_CTRL,
@@ -653,8 +653,9 @@ static int
mt7996_rf_regval_set(void *data, u64 val)
{
struct mt7996_dev *dev = data;
+ u32 val32 = val;
- return mt7996_mcu_rf_regval(dev, dev->mt76.debugfs_reg, (u32 *)&val, true);
+ return mt7996_mcu_rf_regval(dev, dev->mt76.debugfs_reg, &val32, true);
}
DEFINE_DEBUGFS_ATTRIBUTE(fops_rf_regval, mt7996_rf_regval_get,
@@ -790,10 +791,10 @@ static ssize_t mt7996_sta_fixed_rate_set(struct file *file,
else
buf[count] = '\0';
- /* mode - cck: 0, ofdm: 1, ht: 2, gf: 3, vht: 4, he_su: 8, he_er: 9
- * bw - bw20: 0, bw40: 1, bw80: 2, bw160: 3
- * nss - vht: 1~4, he: 1~4, others: ignore
- * mcs - cck: 0~4, ofdm: 0~7, ht: 0~32, vht: 0~9, he_su: 0~11, he_er: 0~2
+ /* mode - cck: 0, ofdm: 1, ht: 2, gf: 3, vht: 4, he_su: 8, he_er: 9, eht: 15
+ * bw - bw20: 0, bw40: 1, bw80: 2, bw160: 3, bw320: 4
+ * nss - vht: 1~4, he: 1~4, eht: 1~4, others: ignore
+ * mcs - cck: 0~4, ofdm: 0~7, ht: 0~32, vht: 0~9, he_su: 0~11, he_er: 0~2, eht: 0~13
* gi - (ht/vht) lgi: 0, sgi: 1; (he) 0.8us: 0, 1.6us: 1, 3.2us: 2
* preamble - short: 1, long: 0
* ldpc - off: 0, on: 1
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c
index b9f62bedbc48..2e48c5a40f81 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c
@@ -65,22 +65,50 @@ static int mt7996_eeprom_load(struct mt7996_dev *dev)
} else {
u8 free_block_num;
u32 block_num, i;
+ u32 eeprom_blk_size = MT7996_EEPROM_BLOCK_SIZE;
- /* TODO: check free block event */
- mt7996_mcu_get_eeprom_free_block(dev, &free_block_num);
- /* efuse info not enough */
+ ret = mt7996_mcu_get_eeprom_free_block(dev, &free_block_num);
+ if (ret < 0)
+ return ret;
+
+ /* efuse info isn't enough */
if (free_block_num >= 59)
return -EINVAL;
/* read eeprom data from efuse */
- block_num = DIV_ROUND_UP(MT7996_EEPROM_SIZE, MT7996_EEPROM_BLOCK_SIZE);
- for (i = 0; i < block_num; i++)
- mt7996_mcu_get_eeprom(dev, i * MT7996_EEPROM_BLOCK_SIZE);
+ block_num = DIV_ROUND_UP(MT7996_EEPROM_SIZE, eeprom_blk_size);
+ for (i = 0; i < block_num; i++) {
+ ret = mt7996_mcu_get_eeprom(dev, i * eeprom_blk_size);
+ if (ret < 0)
+ return ret;
+ }
}
return mt7996_check_eeprom(dev);
}
+static int mt7996_eeprom_parse_efuse_hw_cap(struct mt7996_dev *dev)
+{
+#define MODE_HE_ONLY BIT(0)
+#define WTBL_SIZE_GROUP GENMASK(31, 28)
+ u32 cap = 0;
+ int ret;
+
+ ret = mt7996_mcu_get_chip_config(dev, &cap);
+ if (ret)
+ return ret;
+
+ if (cap) {
+ dev->has_eht = !(cap & MODE_HE_ONLY);
+ dev->wtbl_size_group = u32_get_bits(cap, WTBL_SIZE_GROUP);
+ }
+
+ if (dev->wtbl_size_group < 2 || dev->wtbl_size_group > 4)
+ dev->wtbl_size_group = 2; /* set default */
+
+ return 0;
+}
+
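mt7996_eeprom_parse_efuse_hw_cap() above unpacks two fields from a single 32-bit capability word: bit 0 selects HE-only operation and bits 31:28 carry the WTBL size group, clamped to a default when out of range. A standalone sketch of that extraction, with u32_get_bits() modeled by hand and a hypothetical efuse value:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define MODE_HE_ONLY		(1u << 0)
#define WTBL_SIZE_GROUP_SHIFT	28
#define WTBL_SIZE_GROUP_MASK	(0xfu << WTBL_SIZE_GROUP_SHIFT)

int main(void)
{
	uint32_t cap = 0x30000001;	/* hypothetical efuse word */
	bool has_eht = !(cap & MODE_HE_ONLY);
	uint8_t wtbl_size_group =
		(cap & WTBL_SIZE_GROUP_MASK) >> WTBL_SIZE_GROUP_SHIFT;

	if (wtbl_size_group < 2 || wtbl_size_group > 4)
		wtbl_size_group = 2;	/* set default */

	printf("has_eht=%d wtbl_size_group=%u\n", has_eht, wtbl_size_group);
	return 0;
}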
static int mt7996_eeprom_parse_band_config(struct mt7996_phy *phy)
{
u8 *eeprom = phy->dev->mt76.eeprom.data;
@@ -127,6 +155,7 @@ int mt7996_eeprom_parse_hw_cap(struct mt7996_dev *dev, struct mt7996_phy *phy)
u8 path, nss, band_idx = phy->mt76->band_idx;
u8 *eeprom = dev->mt76.eeprom.data;
struct mt76_phy *mphy = phy->mt76;
+ int ret;
switch (band_idx) {
case MT_BAND1:
@@ -161,6 +190,10 @@ int mt7996_eeprom_parse_hw_cap(struct mt7996_dev *dev, struct mt7996_phy *phy)
dev->chainshift[band_idx + 1] = dev->chainshift[band_idx] +
hweight16(mphy->chainmask);
+ ret = mt7996_eeprom_parse_efuse_hw_cap(dev);
+ if (ret)
+ return ret;
+
return mt7996_eeprom_parse_band_config(phy);
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/init.c b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
index 46b290526092..946da93eed32 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
@@ -37,8 +37,7 @@ static const struct ieee80211_iface_combination if_comb[] = {
BIT(NL80211_CHAN_WIDTH_20) |
BIT(NL80211_CHAN_WIDTH_40) |
BIT(NL80211_CHAN_WIDTH_80) |
- BIT(NL80211_CHAN_WIDTH_160) |
- BIT(NL80211_CHAN_WIDTH_80P80),
+ BIT(NL80211_CHAN_WIDTH_160),
}
};
@@ -46,11 +45,11 @@ static void mt7996_led_set_config(struct led_classdev *led_cdev,
u8 delay_on, u8 delay_off)
{
struct mt7996_dev *dev;
- struct mt76_dev *mt76;
+ struct mt76_phy *mphy;
u32 val;
- mt76 = container_of(led_cdev, struct mt76_dev, led_cdev);
- dev = container_of(mt76, struct mt7996_dev, mt76);
+ mphy = container_of(led_cdev, struct mt76_phy, leds.cdev);
+ dev = container_of(mphy->dev, struct mt7996_dev, mt76);
/* select TX blink mode, 2: only data frames */
mt76_rmw_field(dev, MT_TMAC_TCR0(0), MT_TMAC_TCR0_TX_BLINK, 2);
@@ -65,7 +64,7 @@ static void mt7996_led_set_config(struct led_classdev *led_cdev,
/* control LED */
val = MT_LED_CTRL_BLINK_MODE | MT_LED_CTRL_KICK;
- if (dev->mt76.led_al)
+ if (mphy->leds.al)
val |= MT_LED_CTRL_POLARITY;
mt76_wr(dev, MT_LED_CTRL(0), val);
@@ -153,10 +152,12 @@ mt7996_init_wiphy(struct ieee80211_hw *hw)
struct mt7996_phy *phy = mt7996_hw_phy(hw);
struct mt76_dev *mdev = &phy->dev->mt76;
struct wiphy *wiphy = hw->wiphy;
+ u16 max_subframes = phy->dev->has_eht ? IEEE80211_MAX_AMPDU_BUF_EHT :
+ IEEE80211_MAX_AMPDU_BUF_HE;
hw->queues = 4;
- hw->max_rx_aggregation_subframes = IEEE80211_MAX_AMPDU_BUF_HE;
- hw->max_tx_aggregation_subframes = IEEE80211_MAX_AMPDU_BUF_HE;
+ hw->max_rx_aggregation_subframes = max_subframes;
+ hw->max_tx_aggregation_subframes = max_subframes;
hw->netdev_features = NETIF_F_RXCSUM;
hw->radiotap_timestamp.units_pos =
@@ -214,7 +215,7 @@ mt7996_init_wiphy(struct ieee80211_hw *hw)
mt76_set_stream_caps(phy->mt76, true);
mt7996_set_stream_vht_txbf_caps(phy);
- mt7996_set_stream_he_caps(phy);
+ mt7996_set_stream_he_eht_caps(phy);
wiphy->available_antennas_rx = phy->mt76->antenna_mask;
wiphy->available_antennas_tx = phy->mt76->antenna_mask;
@@ -256,12 +257,12 @@ static void mt7996_mac_init(struct mt7996_dev *dev)
mt76_clear(dev, MT_MDP_DCR2, MT_MDP_DCR2_RX_TRANS_SHORT);
- for (i = 0; i < MT7996_WTBL_SIZE; i++)
+ for (i = 0; i < mt7996_wtbl_size(dev); i++)
mt7996_mac_wtbl_update(dev, i,
MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
if (IS_ENABLED(CONFIG_MT76_LEDS)) {
- i = dev->mt76.led_pin ? MT_LED_GPIO_MUX3 : MT_LED_GPIO_MUX2;
+ i = dev->mphy.leds.pin ? MT_LED_GPIO_MUX3 : MT_LED_GPIO_MUX2;
mt76_rmw_field(dev, i, MT_LED_GPIO_SEL_MASK, 4);
}
@@ -465,7 +466,7 @@ void mt7996_set_stream_vht_txbf_caps(struct mt7996_phy *phy)
*cap |= IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE |
IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE |
- (3 << IEEE80211_VHT_CAP_BEAMFORMEE_STS_SHIFT);
+ FIELD_PREP(IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK, sts - 1);
*cap &= ~(IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_MASK |
IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE |
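The hunk above replaces a hard-coded `3 << shift` with FIELD_PREP(), which places `sts - 1` into the beamformee-STS field wherever the mask happens to sit. A compact model of FIELD_PREP()/FIELD_GET() built on a compiler intrinsic (the kernel versions in <linux/bitfield.h> additionally perform compile-time mask checks):

#include <stdio.h>
#include <stdint.h>

/* place a value into / extract it from a contiguous bit mask */
#define FIELD_PREP(mask, val)	(((uint32_t)(val) << __builtin_ctz(mask)) & (mask))
#define FIELD_GET(mask, reg)	(((reg) & (mask)) >> __builtin_ctz(mask))

#define BEAMFORMEE_STS_MASK	0x0000e000	/* 3-bit field at bits 15:13 */

int main(void)
{
	int sts = 4;
	uint32_t cap = FIELD_PREP(BEAMFORMEE_STS_MASK, sts - 1);

	printf("cap=0x%08x sts field=%u\n",
	       cap, FIELD_GET(BEAMFORMEE_STS_MASK, cap));
	return 0;
}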
@@ -572,11 +573,15 @@ mt7996_gen_ppe_thresh(u8 *he_ppet, int nss)
(0xff >> (8 - (ppet_bits - 1) % 8));
}
-static int
+static void
mt7996_init_he_caps(struct mt7996_phy *phy, enum nl80211_band band,
- struct ieee80211_sband_iftype_data *data)
+ struct ieee80211_sband_iftype_data *data,
+ enum nl80211_iftype iftype)
{
- int i, idx = 0, nss = hweight8(phy->mt76->antenna_mask);
+ struct ieee80211_sta_he_cap *he_cap = &data->he_cap;
+ struct ieee80211_he_cap_elem *he_cap_elem = &he_cap->he_cap_elem;
+ struct ieee80211_he_mcs_nss_supp *he_mcs = &he_cap->he_mcs_nss_supp;
+ int i, nss = hweight8(phy->mt76->antenna_mask);
u16 mcs_map = 0;
for (i = 0; i < 8; i++) {
@@ -586,179 +591,254 @@ mt7996_init_he_caps(struct mt7996_phy *phy, enum nl80211_band band,
mcs_map |= (IEEE80211_HE_MCS_NOT_SUPPORTED << (i * 2));
}
- for (i = 0; i < NUM_NL80211_IFTYPES; i++) {
- struct ieee80211_sta_he_cap *he_cap = &data[idx].he_cap;
- struct ieee80211_he_cap_elem *he_cap_elem =
- &he_cap->he_cap_elem;
- struct ieee80211_he_mcs_nss_supp *he_mcs =
- &he_cap->he_mcs_nss_supp;
-
- switch (i) {
- case NL80211_IFTYPE_STATION:
- case NL80211_IFTYPE_AP:
-#ifdef CONFIG_MAC80211_MESH
- case NL80211_IFTYPE_MESH_POINT:
-#endif
- break;
- default:
- continue;
- }
+ he_cap->has_he = true;
- data[idx].types_mask = BIT(i);
- he_cap->has_he = true;
+ he_cap_elem->mac_cap_info[0] = IEEE80211_HE_MAC_CAP0_HTC_HE;
+ he_cap_elem->mac_cap_info[3] = IEEE80211_HE_MAC_CAP3_OMI_CONTROL |
+ IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_EXT_3;
+ he_cap_elem->mac_cap_info[4] = IEEE80211_HE_MAC_CAP4_AMSDU_IN_AMPDU;
- he_cap_elem->mac_cap_info[0] =
- IEEE80211_HE_MAC_CAP0_HTC_HE;
- he_cap_elem->mac_cap_info[3] =
- IEEE80211_HE_MAC_CAP3_OMI_CONTROL |
- IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_EXT_3;
- he_cap_elem->mac_cap_info[4] =
- IEEE80211_HE_MAC_CAP4_AMSDU_IN_AMPDU;
+ if (band == NL80211_BAND_2GHZ)
+ he_cap_elem->phy_cap_info[0] =
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_IN_2G;
+ else
+ he_cap_elem->phy_cap_info[0] =
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_80MHZ_IN_5G |
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G;
+
+ he_cap_elem->phy_cap_info[1] = IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD;
+ he_cap_elem->phy_cap_info[2] = IEEE80211_HE_PHY_CAP2_STBC_TX_UNDER_80MHZ |
+ IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ;
+
+ switch (iftype) {
+ case NL80211_IFTYPE_AP:
+ he_cap_elem->mac_cap_info[0] |= IEEE80211_HE_MAC_CAP0_TWT_RES;
+ he_cap_elem->mac_cap_info[2] |= IEEE80211_HE_MAC_CAP2_BSR;
+ he_cap_elem->mac_cap_info[4] |= IEEE80211_HE_MAC_CAP4_BQR;
+ he_cap_elem->mac_cap_info[5] |=
+ IEEE80211_HE_MAC_CAP5_OM_CTRL_UL_MU_DATA_DIS_RX;
+ he_cap_elem->phy_cap_info[3] |=
+ IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_TX_QPSK |
+ IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_QPSK;
+ he_cap_elem->phy_cap_info[6] |=
+ IEEE80211_HE_PHY_CAP6_PARTIAL_BW_EXT_RANGE |
+ IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT;
+ he_cap_elem->phy_cap_info[9] |=
+ IEEE80211_HE_PHY_CAP9_TX_1024_QAM_LESS_THAN_242_TONE_RU |
+ IEEE80211_HE_PHY_CAP9_RX_1024_QAM_LESS_THAN_242_TONE_RU;
+ break;
+ case NL80211_IFTYPE_STATION:
+ he_cap_elem->mac_cap_info[1] |=
+ IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_16US;
if (band == NL80211_BAND_2GHZ)
- he_cap_elem->phy_cap_info[0] =
- IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_IN_2G;
+ he_cap_elem->phy_cap_info[0] |=
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_RU_MAPPING_IN_2G;
else
- he_cap_elem->phy_cap_info[0] =
- IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_80MHZ_IN_5G |
- IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G |
- IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G;
-
- he_cap_elem->phy_cap_info[1] =
- IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD;
- he_cap_elem->phy_cap_info[2] =
- IEEE80211_HE_PHY_CAP2_STBC_TX_UNDER_80MHZ |
- IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ;
+ he_cap_elem->phy_cap_info[0] |=
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_RU_MAPPING_IN_5G;
+
+ he_cap_elem->phy_cap_info[1] |=
+ IEEE80211_HE_PHY_CAP1_DEVICE_CLASS_A |
+ IEEE80211_HE_PHY_CAP1_HE_LTF_AND_GI_FOR_HE_PPDUS_0_8US;
+ he_cap_elem->phy_cap_info[3] |=
+ IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_TX_QPSK |
+ IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_QPSK;
+ he_cap_elem->phy_cap_info[6] |=
+ IEEE80211_HE_PHY_CAP6_TRIG_CQI_FB |
+ IEEE80211_HE_PHY_CAP6_PARTIAL_BW_EXT_RANGE |
+ IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT;
+ he_cap_elem->phy_cap_info[7] |=
+ IEEE80211_HE_PHY_CAP7_POWER_BOOST_FACTOR_SUPP |
+ IEEE80211_HE_PHY_CAP7_HE_SU_MU_PPDU_4XLTF_AND_08_US_GI;
+ he_cap_elem->phy_cap_info[8] |=
+ IEEE80211_HE_PHY_CAP8_20MHZ_IN_40MHZ_HE_PPDU_IN_2G |
+ IEEE80211_HE_PHY_CAP8_20MHZ_IN_160MHZ_HE_PPDU |
+ IEEE80211_HE_PHY_CAP8_80MHZ_IN_160MHZ_HE_PPDU |
+ IEEE80211_HE_PHY_CAP8_DCM_MAX_RU_484;
+ he_cap_elem->phy_cap_info[9] |=
+ IEEE80211_HE_PHY_CAP9_LONGER_THAN_16_SIGB_OFDM_SYM |
+ IEEE80211_HE_PHY_CAP9_NON_TRIGGERED_CQI_FEEDBACK |
+ IEEE80211_HE_PHY_CAP9_TX_1024_QAM_LESS_THAN_242_TONE_RU |
+ IEEE80211_HE_PHY_CAP9_RX_1024_QAM_LESS_THAN_242_TONE_RU |
+ IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_COMP_SIGB |
+ IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_NON_COMP_SIGB;
+ break;
+ default:
+ break;
+ }
- switch (i) {
- case NL80211_IFTYPE_AP:
- he_cap_elem->mac_cap_info[0] |=
- IEEE80211_HE_MAC_CAP0_TWT_RES;
- he_cap_elem->mac_cap_info[2] |=
- IEEE80211_HE_MAC_CAP2_BSR;
- he_cap_elem->mac_cap_info[4] |=
- IEEE80211_HE_MAC_CAP4_BQR;
- he_cap_elem->mac_cap_info[5] |=
- IEEE80211_HE_MAC_CAP5_OM_CTRL_UL_MU_DATA_DIS_RX;
- he_cap_elem->phy_cap_info[3] |=
- IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_TX_QPSK |
- IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_QPSK;
- he_cap_elem->phy_cap_info[6] |=
- IEEE80211_HE_PHY_CAP6_PARTIAL_BW_EXT_RANGE |
- IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT;
- he_cap_elem->phy_cap_info[9] |=
- IEEE80211_HE_PHY_CAP9_TX_1024_QAM_LESS_THAN_242_TONE_RU |
- IEEE80211_HE_PHY_CAP9_RX_1024_QAM_LESS_THAN_242_TONE_RU;
- break;
- case NL80211_IFTYPE_STATION:
- he_cap_elem->mac_cap_info[1] |=
- IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_16US;
-
- if (band == NL80211_BAND_2GHZ)
- he_cap_elem->phy_cap_info[0] |=
- IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_RU_MAPPING_IN_2G;
- else
- he_cap_elem->phy_cap_info[0] |=
- IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_RU_MAPPING_IN_5G;
-
- he_cap_elem->phy_cap_info[1] |=
- IEEE80211_HE_PHY_CAP1_DEVICE_CLASS_A |
- IEEE80211_HE_PHY_CAP1_HE_LTF_AND_GI_FOR_HE_PPDUS_0_8US;
- he_cap_elem->phy_cap_info[3] |=
- IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_TX_QPSK |
- IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_QPSK;
- he_cap_elem->phy_cap_info[6] |=
- IEEE80211_HE_PHY_CAP6_TRIG_CQI_FB |
- IEEE80211_HE_PHY_CAP6_PARTIAL_BW_EXT_RANGE |
- IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT;
- he_cap_elem->phy_cap_info[7] |=
- IEEE80211_HE_PHY_CAP7_POWER_BOOST_FACTOR_SUPP |
- IEEE80211_HE_PHY_CAP7_HE_SU_MU_PPDU_4XLTF_AND_08_US_GI;
- he_cap_elem->phy_cap_info[8] |=
- IEEE80211_HE_PHY_CAP8_20MHZ_IN_40MHZ_HE_PPDU_IN_2G |
- IEEE80211_HE_PHY_CAP8_20MHZ_IN_160MHZ_HE_PPDU |
- IEEE80211_HE_PHY_CAP8_80MHZ_IN_160MHZ_HE_PPDU |
- IEEE80211_HE_PHY_CAP8_DCM_MAX_RU_484;
- he_cap_elem->phy_cap_info[9] |=
- IEEE80211_HE_PHY_CAP9_LONGER_THAN_16_SIGB_OFDM_SYM |
- IEEE80211_HE_PHY_CAP9_NON_TRIGGERED_CQI_FEEDBACK |
- IEEE80211_HE_PHY_CAP9_TX_1024_QAM_LESS_THAN_242_TONE_RU |
- IEEE80211_HE_PHY_CAP9_RX_1024_QAM_LESS_THAN_242_TONE_RU |
- IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_COMP_SIGB |
- IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_NON_COMP_SIGB;
- break;
- }
+ he_mcs->rx_mcs_80 = cpu_to_le16(mcs_map);
+ he_mcs->tx_mcs_80 = cpu_to_le16(mcs_map);
+ he_mcs->rx_mcs_160 = cpu_to_le16(mcs_map);
+ he_mcs->tx_mcs_160 = cpu_to_le16(mcs_map);
+
+ mt7996_set_stream_he_txbf_caps(phy, he_cap, iftype);
+
+ memset(he_cap->ppe_thres, 0, sizeof(he_cap->ppe_thres));
+ if (he_cap_elem->phy_cap_info[6] &
+ IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT) {
+ mt7996_gen_ppe_thresh(he_cap->ppe_thres, nss);
+ } else {
+ he_cap_elem->phy_cap_info[9] |=
+ u8_encode_bits(IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_16US,
+ IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_MASK);
+ }
- he_mcs->rx_mcs_80 = cpu_to_le16(mcs_map);
- he_mcs->tx_mcs_80 = cpu_to_le16(mcs_map);
- he_mcs->rx_mcs_160 = cpu_to_le16(mcs_map);
- he_mcs->tx_mcs_160 = cpu_to_le16(mcs_map);
- he_mcs->rx_mcs_80p80 = cpu_to_le16(mcs_map);
- he_mcs->tx_mcs_80p80 = cpu_to_le16(mcs_map);
-
- mt7996_set_stream_he_txbf_caps(phy, he_cap, i);
-
- memset(he_cap->ppe_thres, 0, sizeof(he_cap->ppe_thres));
- if (he_cap_elem->phy_cap_info[6] &
- IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT) {
- mt7996_gen_ppe_thresh(he_cap->ppe_thres, nss);
- } else {
- he_cap_elem->phy_cap_info[9] |=
- IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_16US;
- }
+ if (band == NL80211_BAND_6GHZ) {
+ u16 cap = IEEE80211_HE_6GHZ_CAP_TX_ANTPAT_CONS |
+ IEEE80211_HE_6GHZ_CAP_RX_ANTPAT_CONS;
- if (band == NL80211_BAND_6GHZ) {
- u16 cap = IEEE80211_HE_6GHZ_CAP_TX_ANTPAT_CONS |
- IEEE80211_HE_6GHZ_CAP_RX_ANTPAT_CONS;
+ cap |= u16_encode_bits(IEEE80211_HT_MPDU_DENSITY_2,
+ IEEE80211_HE_6GHZ_CAP_MIN_MPDU_START) |
+ u16_encode_bits(IEEE80211_VHT_MAX_AMPDU_1024K,
+ IEEE80211_HE_6GHZ_CAP_MAX_AMPDU_LEN_EXP) |
+ u16_encode_bits(IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454,
+ IEEE80211_HE_6GHZ_CAP_MAX_MPDU_LEN);
- cap |= u16_encode_bits(IEEE80211_HT_MPDU_DENSITY_2,
- IEEE80211_HE_6GHZ_CAP_MIN_MPDU_START) |
- u16_encode_bits(IEEE80211_VHT_MAX_AMPDU_1024K,
- IEEE80211_HE_6GHZ_CAP_MAX_AMPDU_LEN_EXP) |
- u16_encode_bits(IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454,
- IEEE80211_HE_6GHZ_CAP_MAX_MPDU_LEN);
+ data->he_6ghz_capa.capa = cpu_to_le16(cap);
+ }
+}
- data[idx].he_6ghz_capa.capa = cpu_to_le16(cap);
- }
+static void
+mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ struct ieee80211_sband_iftype_data *data,
+ enum nl80211_iftype iftype)
+{
+ struct ieee80211_sta_eht_cap *eht_cap = &data->eht_cap;
+ struct ieee80211_eht_cap_elem_fixed *eht_cap_elem = &eht_cap->eht_cap_elem;
+ struct ieee80211_eht_mcs_nss_supp *eht_nss = &eht_cap->eht_mcs_nss_supp;
+ enum nl80211_chan_width width = phy->mt76->chandef.width;
+ int nss = hweight8(phy->mt76->antenna_mask);
+ int sts = hweight16(phy->mt76->chainmask);
+ u8 val;
- idx++;
- }
+ if (!phy->dev->has_eht)
+ return;
- return idx;
+ eht_cap->has_eht = true;
+
+ eht_cap_elem->mac_cap_info[0] =
+ IEEE80211_EHT_MAC_CAP0_EPCS_PRIO_ACCESS |
+ IEEE80211_EHT_MAC_CAP0_OM_CONTROL;
+
+ eht_cap_elem->phy_cap_info[0] =
+ IEEE80211_EHT_PHY_CAP0_320MHZ_IN_6GHZ |
+ IEEE80211_EHT_PHY_CAP0_NDP_4_EHT_LFT_32_GI |
+ IEEE80211_EHT_PHY_CAP0_SU_BEAMFORMER |
+ IEEE80211_EHT_PHY_CAP0_SU_BEAMFORMEE;
+
+ eht_cap_elem->phy_cap_info[0] |=
+ u8_encode_bits(u8_get_bits(sts - 1, BIT(0)),
+ IEEE80211_EHT_PHY_CAP0_BEAMFORMEE_SS_80MHZ_MASK);
+
+ eht_cap_elem->phy_cap_info[1] =
+ u8_encode_bits(u8_get_bits(sts - 1, GENMASK(2, 1)),
+ IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_80MHZ_MASK) |
+ u8_encode_bits(sts - 1,
+ IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_160MHZ_MASK) |
+ u8_encode_bits(sts - 1,
+ IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_320MHZ_MASK);
+
+ eht_cap_elem->phy_cap_info[2] =
+ u8_encode_bits(sts - 1, IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_80MHZ_MASK) |
+ u8_encode_bits(sts - 1, IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_160MHZ_MASK) |
+ u8_encode_bits(sts - 1, IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_320MHZ_MASK);
+
+ eht_cap_elem->phy_cap_info[3] =
+ IEEE80211_EHT_PHY_CAP3_NG_16_SU_FEEDBACK |
+ IEEE80211_EHT_PHY_CAP3_NG_16_MU_FEEDBACK |
+ IEEE80211_EHT_PHY_CAP3_CODEBOOK_4_2_SU_FDBK |
+ IEEE80211_EHT_PHY_CAP3_CODEBOOK_7_5_MU_FDBK |
+ IEEE80211_EHT_PHY_CAP3_TRIG_SU_BF_FDBK |
+ IEEE80211_EHT_PHY_CAP3_TRIG_MU_BF_PART_BW_FDBK |
+ IEEE80211_EHT_PHY_CAP3_TRIG_CQI_FDBK;
+
+ eht_cap_elem->phy_cap_info[4] =
+ u8_encode_bits(min_t(int, sts - 1, 2),
+ IEEE80211_EHT_PHY_CAP4_MAX_NC_MASK);
+
+ eht_cap_elem->phy_cap_info[5] =
+ IEEE80211_EHT_PHY_CAP5_NON_TRIG_CQI_FEEDBACK |
+ u8_encode_bits(IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_16US,
+ IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_MASK) |
+ u8_encode_bits(u8_get_bits(0x11, GENMASK(1, 0)),
+ IEEE80211_EHT_PHY_CAP5_MAX_NUM_SUPP_EHT_LTF_MASK);
+
+ val = width == NL80211_CHAN_WIDTH_320 ? 0xf :
+ width == NL80211_CHAN_WIDTH_160 ? 0x7 :
+ width == NL80211_CHAN_WIDTH_80 ? 0x3 : 0x1;
+ eht_cap_elem->phy_cap_info[6] =
+ u8_encode_bits(u8_get_bits(0x11, GENMASK(4, 2)),
+ IEEE80211_EHT_PHY_CAP6_MAX_NUM_SUPP_EHT_LTF_MASK) |
+ u8_encode_bits(val, IEEE80211_EHT_PHY_CAP6_MCS15_SUPP_MASK);
+
+ eht_cap_elem->phy_cap_info[7] =
+ IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_80MHZ |
+ IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_160MHZ |
+ IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_320MHZ |
+ IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_80MHZ |
+ IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_160MHZ |
+ IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_320MHZ;
+
+ val = u8_encode_bits(nss, IEEE80211_EHT_MCS_NSS_RX) |
+ u8_encode_bits(nss, IEEE80211_EHT_MCS_NSS_TX);
+#define SET_EHT_MAX_NSS(_bw, _val) do { \
+ eht_nss->bw._##_bw.rx_tx_mcs9_max_nss = _val; \
+ eht_nss->bw._##_bw.rx_tx_mcs11_max_nss = _val; \
+ eht_nss->bw._##_bw.rx_tx_mcs13_max_nss = _val; \
+ } while (0)
+
+ SET_EHT_MAX_NSS(80, val);
+ SET_EHT_MAX_NSS(160, val);
+ SET_EHT_MAX_NSS(320, val);
+#undef SET_EHT_MAX_NSS
}
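SET_EHT_MAX_NSS() above uses the classic do { ... } while (0) wrapper so a multi-statement macro behaves as a single statement, even in an unbraced if/else branch. A minimal illustration of why the wrapper matters:

#include <stdio.h>

/* without the wrapper, only the first assignment would be guarded by "if" */
#define SET_PAIR(a, b, v) do {	\
		(a) = (v);	\
		(b) = (v);	\
	} while (0)

int main(void)
{
	int rx = 0, tx = 0, enable = 1;

	if (enable)
		SET_PAIR(rx, tx, 4);	/* expands to exactly one statement */
	else
		printf("disabled\n");

	printf("rx=%d tx=%d\n", rx, tx);
	return 0;
}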
-void mt7996_set_stream_he_caps(struct mt7996_phy *phy)
+static void
+__mt7996_set_stream_he_eht_caps(struct mt7996_phy *phy,
+ struct ieee80211_supported_band *sband,
+ enum nl80211_band band)
{
- struct ieee80211_sband_iftype_data *data;
- struct ieee80211_supported_band *band;
- int n;
+ struct ieee80211_sband_iftype_data *data = phy->iftype[band];
+ int i, n = 0;
+
+ for (i = 0; i < NUM_NL80211_IFTYPES; i++) {
+ switch (i) {
+ case NL80211_IFTYPE_STATION:
+ case NL80211_IFTYPE_AP:
+#ifdef CONFIG_MAC80211_MESH
+ case NL80211_IFTYPE_MESH_POINT:
+#endif
+ break;
+ default:
+ continue;
+ }
- if (phy->mt76->cap.has_2ghz) {
- data = phy->iftype[NL80211_BAND_2GHZ];
- n = mt7996_init_he_caps(phy, NL80211_BAND_2GHZ, data);
+ data[n].types_mask = BIT(i);
+ mt7996_init_he_caps(phy, band, &data[n], i);
+ mt7996_init_eht_caps(phy, band, &data[n], i);
- band = &phy->mt76->sband_2g.sband;
- band->iftype_data = data;
- band->n_iftype_data = n;
+ n++;
}
- if (phy->mt76->cap.has_5ghz) {
- data = phy->iftype[NL80211_BAND_5GHZ];
- n = mt7996_init_he_caps(phy, NL80211_BAND_5GHZ, data);
+ sband->iftype_data = data;
+ sband->n_iftype_data = n;
+}
- band = &phy->mt76->sband_5g.sband;
- band->iftype_data = data;
- band->n_iftype_data = n;
- }
+void mt7996_set_stream_he_eht_caps(struct mt7996_phy *phy)
+{
+ if (phy->mt76->cap.has_2ghz)
+ __mt7996_set_stream_he_eht_caps(phy, &phy->mt76->sband_2g.sband,
+ NL80211_BAND_2GHZ);
- if (phy->mt76->cap.has_6ghz) {
- data = phy->iftype[NL80211_BAND_6GHZ];
- n = mt7996_init_he_caps(phy, NL80211_BAND_6GHZ, data);
+ if (phy->mt76->cap.has_5ghz)
+ __mt7996_set_stream_he_eht_caps(phy, &phy->mt76->sband_5g.sband,
+ NL80211_BAND_5GHZ);
- band = &phy->mt76->sband_6g.sband;
- band->iftype_data = data;
- band->n_iftype_data = n;
- }
+ if (phy->mt76->cap.has_6ghz)
+ __mt7996_set_stream_he_eht_caps(phy, &phy->mt76->sband_6g.sband,
+ NL80211_BAND_6GHZ);
}
int mt7996_register_device(struct mt7996_dev *dev)
@@ -787,8 +867,8 @@ int mt7996_register_device(struct mt7996_dev *dev)
/* init led callbacks */
if (IS_ENABLED(CONFIG_MT76_LEDS)) {
- dev->mt76.led_cdev.brightness_set = mt7996_led_set_brightness;
- dev->mt76.led_cdev.blink_set = mt7996_led_set_blink;
+ dev->mphy.leds.cdev.brightness_set = mt7996_led_set_brightness;
+ dev->mphy.leds.cdev.blink_set = mt7996_led_set_blink;
}
ret = mt76_register_device(&dev->mt76, true, mt76_rates,
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
index 0b3e28748e76..c9a9f0e31771 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
@@ -189,6 +189,9 @@ static void mt7996_mac_sta_poll(struct mt7996_dev *dev)
rate = &msta->wcid.rate;
switch (rate->bw) {
+ case RATE_INFO_BW_320:
+ bw = IEEE80211_STA_RX_BW_320;
+ break;
case RATE_INFO_BW_160:
bw = IEEE80211_STA_RX_BW_160;
break;
@@ -205,7 +208,11 @@ static void mt7996_mac_sta_poll(struct mt7996_dev *dev)
addr = mt7996_mac_wtbl_lmac_addr(dev, idx, 6);
val = mt76_rr(dev, addr);
- if (rate->flags & RATE_INFO_FLAGS_HE_MCS) {
+ if (rate->flags & RATE_INFO_FLAGS_EHT_MCS) {
+ addr = mt7996_mac_wtbl_lmac_addr(dev, idx, 5);
+ val = mt76_rr(dev, addr);
+ rate->eht_gi = FIELD_GET(GENMASK(25, 24), val);
+ } else if (rate->flags & RATE_INFO_FLAGS_HE_MCS) {
u8 offs = 24 + 2 * bw;
rate->he_gi = (val & (0x3 << offs)) >> offs;
@@ -469,7 +476,7 @@ static int mt7996_reverse_frag0_hdr_trans(struct sk_buff *skb, u16 hdr_gap)
ether_addr_copy(hdr.addr4, eth_hdr->h_source);
break;
default:
- break;
+ return -EINVAL;
}
skb_pull(skb, hdr_gap + sizeof(struct ethhdr) - 2);
@@ -560,6 +567,15 @@ mt7996_mac_fill_rx_rate(struct mt7996_dev *dev,
status->he_dcm = dcm;
break;
+ case MT_PHY_TYPE_EHT_SU:
+ case MT_PHY_TYPE_EHT_TRIG:
+ case MT_PHY_TYPE_EHT_MU:
+ /* TODO: rx rate for EHT is currently reported as an HE rate */
+ status->nss = nss;
+ status->encoding = RX_ENC_HE;
+ bw = min_t(int, bw, IEEE80211_STA_RX_BW_160);
+ i = min_t(int, i & 0xf, 11);
+ break;
default:
return -EINVAL;
}
@@ -584,6 +600,9 @@ mt7996_mac_fill_rx_rate(struct mt7996_dev *dev,
case IEEE80211_STA_RX_BW_160:
status->bw = RATE_INFO_BW_160;
break;
+ case IEEE80211_STA_RX_BW_320:
+ status->bw = RATE_INFO_BW_320;
+ break;
default:
return -EINVAL;
}
@@ -959,51 +978,6 @@ mt7996_mac_write_txwi_80211(struct mt7996_dev *dev, __le32 *txwi,
}
}
-static u16
-mt7996_mac_tx_rate_val(struct mt76_phy *mphy, struct ieee80211_vif *vif,
- bool beacon, bool mcast)
-{
- u8 mode = 0, band = mphy->chandef.chan->band;
- int rateidx = 0, mcast_rate;
-
- if (beacon) {
- struct cfg80211_bitrate_mask *mask;
-
- mask = &vif->bss_conf.beacon_tx_rate;
- if (hweight16(mask->control[band].he_mcs[0]) == 1) {
- rateidx = ffs(mask->control[band].he_mcs[0]) - 1;
- mode = MT_PHY_TYPE_HE_SU;
- goto out;
- } else if (hweight16(mask->control[band].vht_mcs[0]) == 1) {
- rateidx = ffs(mask->control[band].vht_mcs[0]) - 1;
- mode = MT_PHY_TYPE_VHT;
- goto out;
- } else if (hweight8(mask->control[band].ht_mcs[0]) == 1) {
- rateidx = ffs(mask->control[band].ht_mcs[0]) - 1;
- mode = MT_PHY_TYPE_HT;
- goto out;
- } else if (hweight32(mask->control[band].legacy) == 1) {
- rateidx = ffs(mask->control[band].legacy) - 1;
- goto legacy;
- }
- }
-
- mcast_rate = vif->bss_conf.mcast_rate[band];
- if (mcast && mcast_rate > 0)
- rateidx = mcast_rate - 1;
- else
- rateidx = ffs(vif->bss_conf.basic_rates) - 1;
-
-legacy:
- rateidx = mt76_calculate_default_rate(mphy, rateidx);
- mode = rateidx >> 8;
- rateidx &= GENMASK(7, 0);
-
-out:
- return FIELD_PREP(MT_TX_RATE_IDX, rateidx) |
- FIELD_PREP(MT_TX_RATE_MODE, mode);
-}
-
void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
struct sk_buff *skb, struct mt76_wcid *wcid, int pid,
struct ieee80211_key_conf *key, u32 changed)
@@ -1091,7 +1065,8 @@ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
/* Fixed rate is available just for 802.11 txd */
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
bool multicast = is_multicast_ether_addr(hdr->addr1);
- u16 rate = mt7996_mac_tx_rate_val(mphy, vif, beacon, multicast);
+ u16 rate = mt76_connac2_mac_tx_rate_val(mphy, vif, beacon,
+ multicast);
/* fix to bw 20 */
val = MT_TXD6_FIXED_BW |
@@ -1113,8 +1088,8 @@ int mt7996_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx_info->skb);
struct ieee80211_key_conf *key = info->control.hw_key;
struct ieee80211_vif *vif = info->control.vif;
+ struct mt76_connac_txp_common *txp;
struct mt76_txwi_cache *t;
- struct mt7996_txp *txp;
int id, i, pid, nbuf = tx_info->nbuf - 1;
bool is_8023 = info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP;
u8 *txwi = (u8 *)txwi_ptr;
@@ -1148,35 +1123,35 @@ int mt7996_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
mt7996_mac_write_txwi(dev, txwi_ptr, tx_info->skb, wcid, pid,
key, 0);
- txp = (struct mt7996_txp *)(txwi + MT_TXD_SIZE);
+ txp = (struct mt76_connac_txp_common *)(txwi + MT_TXD_SIZE);
for (i = 0; i < nbuf; i++) {
- txp->buf[i] = cpu_to_le32(tx_info->buf[i + 1].addr);
- txp->len[i] = cpu_to_le16(tx_info->buf[i + 1].len);
+ txp->fw.buf[i] = cpu_to_le32(tx_info->buf[i + 1].addr);
+ txp->fw.len[i] = cpu_to_le16(tx_info->buf[i + 1].len);
}
- txp->nbuf = nbuf;
+ txp->fw.nbuf = nbuf;
- txp->flags = cpu_to_le16(MT_CT_INFO_FROM_HOST);
+ txp->fw.flags = cpu_to_le16(MT_CT_INFO_FROM_HOST);
if (!is_8023 || pid >= MT_PACKET_ID_FIRST)
- txp->flags |= cpu_to_le16(MT_CT_INFO_APPLY_TXD);
+ txp->fw.flags |= cpu_to_le16(MT_CT_INFO_APPLY_TXD);
if (!key)
- txp->flags |= cpu_to_le16(MT_CT_INFO_NONE_CIPHER_FRAME);
+ txp->fw.flags |= cpu_to_le16(MT_CT_INFO_NONE_CIPHER_FRAME);
if (!is_8023 && ieee80211_is_mgmt(hdr->frame_control))
- txp->flags |= cpu_to_le16(MT_CT_INFO_MGMT_FRAME);
+ txp->fw.flags |= cpu_to_le16(MT_CT_INFO_MGMT_FRAME);
if (vif) {
struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
- txp->bss_idx = mvif->mt76.idx;
+ txp->fw.bss_idx = mvif->mt76.idx;
}
- txp->token = cpu_to_le16(id);
+ txp->fw.token = cpu_to_le16(id);
if (test_bit(MT_WCID_FLAG_4ADDR, &wcid->flags))
- txp->rept_wds_wcid = cpu_to_le16(wcid->idx);
+ txp->fw.rept_wds_wcid = cpu_to_le16(wcid->idx);
else
- txp->rept_wds_wcid = cpu_to_le16(0xfff);
+ txp->fw.rept_wds_wcid = cpu_to_le16(0xfff);
tx_info->skb = DMA_DUMMY_DATA;
/* pass partial skb header to fw */
@@ -1213,18 +1188,6 @@ mt7996_tx_check_aggr(struct ieee80211_sta *sta, __le32 *txwi)
}
static void
-mt7996_txp_skb_unmap(struct mt76_dev *dev, struct mt76_txwi_cache *t)
-{
- struct mt7996_txp *txp;
- int i;
-
- txp = mt7996_txwi_to_txp(dev, t);
- for (i = 0; i < txp->nbuf; i++)
- dma_unmap_single(dev->dev, le32_to_cpu(txp->buf[i]),
- le16_to_cpu(txp->len[i]), DMA_TO_DEVICE);
-}
-
-static void
mt7996_txwi_free(struct mt7996_dev *dev, struct mt76_txwi_cache *t,
struct ieee80211_sta *sta, struct list_head *free_list)
{
@@ -1233,7 +1196,7 @@ mt7996_txwi_free(struct mt7996_dev *dev, struct mt76_txwi_cache *t,
__le32 *txwi;
u16 wcid_idx;
- mt7996_txp_skb_unmap(mdev, t);
+ mt76_connac_txp_skb_unmap(mdev, t);
if (!t->skb)
goto out;
@@ -1434,6 +1397,15 @@ mt7996_mac_add_txs_skb(struct mt7996_dev *dev, struct mt76_wcid *wcid, int pid,
rate.he_dcm = FIELD_GET(MT_TX_RATE_DCM, txrate);
rate.flags = RATE_INFO_FLAGS_HE_MCS;
break;
+ case MT_PHY_TYPE_EHT_SU:
+ case MT_PHY_TYPE_EHT_TRIG:
+ case MT_PHY_TYPE_EHT_MU:
+ if (rate.mcs > 13)
+ goto out;
+
+ rate.eht_gi = wcid->rate.eht_gi;
+ rate.flags = RATE_INFO_FLAGS_EHT_MCS;
+ break;
default:
goto out;
}
@@ -1441,6 +1413,10 @@ mt7996_mac_add_txs_skb(struct mt7996_dev *dev, struct mt76_wcid *wcid, int pid,
stats->tx_mode[mode]++;
switch (FIELD_GET(MT_TXS0_BW, txs)) {
+ case IEEE80211_STA_RX_BW_320:
+ rate.bw = RATE_INFO_BW_320;
+ stats->tx_bw[4]++;
+ break;
case IEEE80211_STA_RX_BW_160:
rate.bw = RATE_INFO_BW_160;
stats->tx_bw[3]++;
@@ -1486,7 +1462,7 @@ static void mt7996_mac_add_txs(struct mt7996_dev *dev, void *data)
if (pid < MT_PACKET_ID_FIRST)
return;
- if (wcidx >= MT7996_WTBL_SIZE)
+ if (wcidx >= mt7996_wtbl_size(dev))
return;
rcu_read_lock();
@@ -1589,27 +1565,6 @@ void mt7996_queue_rx_skb(struct mt76_dev *mdev, enum mt76_rxq_id q,
}
}
-void mt7996_tx_complete_skb(struct mt76_dev *mdev, struct mt76_queue_entry *e)
-{
- if (!e->txwi) {
- dev_kfree_skb_any(e->skb);
- return;
- }
-
- /* error path */
- if (e->skb == DMA_DUMMY_DATA) {
- struct mt76_txwi_cache *t;
- struct mt7996_txp *txp;
-
- txp = mt7996_txwi_to_txp(mdev, e->txwi);
- t = mt76_token_put(mdev, le16_to_cpu(txp->token));
- e->skb = t ? t->skb : NULL;
- }
-
- if (e->skb)
- mt76_tx_complete_skb(mdev, e->wcid, e->skb);
-}
-
void mt7996_mac_cca_stats_reset(struct mt7996_phy *phy)
{
struct mt7996_dev *dev = phy->dev;
@@ -1690,7 +1645,7 @@ void mt7996_mac_set_timing(struct mt7996_phy *phy)
else
val = MT7996_CFEND_RATE_11B;
- mt76_rmw_field(dev, MT_AGG_ACR0(band_idx), MT_AGG_ACR_CFEND_RATE, val);
+ mt76_rmw_field(dev, MT_RATE_HRCR0(band_idx), MT_RATE_HRCR0_CFEND_RATE, val);
mt76_clear(dev, MT_ARB_SCR(band_idx),
MT_ARB_SCR_TX_DISABLE | MT_ARB_SCR_RX_DISABLE);
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.h b/drivers/net/wireless/mediatek/mt76/mt7996/mac.h
index 9f68852012b9..27184cbac619 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.h
@@ -268,17 +268,6 @@ enum tx_mgnt_type {
/* VHT/HE only use bits 0-3 */
#define MT_TX_RATE_IDX GENMASK(5, 0)
-struct mt7996_txp {
- __le16 flags;
- __le16 token;
- u8 bss_idx;
- __le16 rept_wds_wcid;
- u8 nbuf;
-#define MT_TXP_MAX_BUF_NUM 6
- __le32 buf[MT_TXP_MAX_BUF_NUM];
- __le16 len[MT_TXP_MAX_BUF_NUM];
-} __packed __aligned(4);
-
#define MT_TXFREE0_PKT_TYPE GENMASK(31, 27)
#define MT_TXFREE0_MSDU_CNT GENMASK(25, 16)
#define MT_TXFREE0_RX_BYTE GENMASK(15, 0)
@@ -382,17 +371,4 @@ struct mt7996_dfs_radar_spec {
struct mt7996_dfs_pattern radar_pattern[16];
};
-static inline struct mt7996_txp *
-mt7996_txwi_to_txp(struct mt76_dev *dev, struct mt76_txwi_cache *t)
-{
- u8 *txwi;
-
- if (!t)
- return NULL;
-
- txwi = mt76_get_txwi_ptr(dev, t);
-
- return (struct mt7996_txp *)(txwi + MT_TXD_SIZE);
-}
-
#endif
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
index 4421cd54311b..3e4da0350d96 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
@@ -170,7 +170,7 @@ static int mt7996_add_interface(struct ieee80211_hw *hw,
phy->monitor_vif = vif;
mvif->mt76.idx = __ffs64(~dev->mt76.vif_mask);
- if (mvif->mt76.idx >= (MT7996_MAX_INTERFACES << dev->dbdc_support)) {
+ if (mvif->mt76.idx >= mt7996_max_interface_num(dev)) {
ret = -ENOSPC;
goto out;
}
@@ -880,14 +880,17 @@ mt7996_set_antenna(struct ieee80211_hw *hw, u32 tx_ant, u32 rx_ant)
phy->mt76->antenna_mask = tx_ant;
/* restore to the origin chainmask which might have auxiliary path */
- if (hweight8(tx_ant) == max_nss)
+ if (hweight8(tx_ant) == max_nss && band_idx < MT_BAND2)
+ phy->mt76->chainmask = ((dev->chainmask >> shift) &
+ (BIT(dev->chainshift[band_idx + 1] - shift) - 1)) << shift;
+ else if (hweight8(tx_ant) == max_nss)
phy->mt76->chainmask = (dev->chainmask >> shift) << shift;
else
phy->mt76->chainmask = tx_ant << shift;
mt76_set_stream_caps(phy->mt76, true);
mt7996_set_stream_vht_txbf_caps(phy);
- mt7996_set_stream_he_caps(phy);
+ mt7996_set_stream_he_eht_caps(phy);
mutex_unlock(&dev->mt76.mutex);
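The chainmask restore in the hunk above has to keep only the bits belonging to this band: shift away the lower bands, mask to the band's width (the distance to the next band's chainshift), then shift back. A standalone sketch of that isolation with hypothetical example values (the real chainmask/chainshift come from the EEPROM parsing elsewhere in this diff):

#include <stdio.h>
#include <stdint.h>

#define BIT(n)	(1u << (n))

int main(void)
{
	uint32_t chainmask = 0x3f;	/* hypothetical: 2+2+2 chains, 3 bands */
	int shift = 2;			/* this band starts at chain 2 */
	int next_shift = 4;		/* next band starts at chain 4 */

	/* keep bits [next_shift-1 .. shift] of the global chainmask */
	uint32_t band_mask = ((chainmask >> shift) &
			      (BIT(next_shift - shift) - 1)) << shift;

	printf("band chainmask = 0x%x\n", band_mask);	/* prints 0xc */
	return 0;
}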
@@ -1081,10 +1084,14 @@ static const char mt7996_gstrings_stats[][ETH_GSTRING_LEN] = {
"v_tx_mode_he_ext_su",
"v_tx_mode_he_tb",
"v_tx_mode_he_mu",
+ "v_tx_mode_eht_su",
+ "v_tx_mode_eht_trig",
+ "v_tx_mode_eht_mu",
"v_tx_bw_20",
"v_tx_bw_40",
"v_tx_bw_80",
"v_tx_bw_160",
+ "v_tx_bw_320",
"v_tx_mcs_0",
"v_tx_mcs_1",
"v_tx_mcs_2",
@@ -1097,6 +1104,8 @@ static const char mt7996_gstrings_stats[][ETH_GSTRING_LEN] = {
"v_tx_mcs_9",
"v_tx_mcs_10",
"v_tx_mcs_11",
+ "v_tx_mcs_12",
+ "v_tx_mcs_13",
};
#define MT7996_SSTATS_LEN ARRAY_SIZE(mt7996_gstrings_stats)
@@ -1130,7 +1139,7 @@ static void mt7996_ethtool_worker(void *wi_data, struct ieee80211_sta *sta)
if (msta->vif->mt76.idx != wi->idx)
return;
- mt76_ethtool_worker(wi, &msta->stats);
+ mt76_ethtool_worker(wi, &msta->stats, true);
}
static
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
index 04e1d10bbd21..dbe30832fd88 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
@@ -70,6 +70,7 @@ struct mt7996_fw_region {
#define HE_PHY(p, c) u8_get_bits(c, IEEE80211_HE_PHY_##p)
#define HE_MAC(m, c) u8_get_bits(c, IEEE80211_HE_MAC_##m)
+#define EHT_PHY(p, c) u8_get_bits(c, IEEE80211_EHT_PHY_##p)
static bool sr_scene_detect = true;
module_param(sr_scene_detect, bool, 0644);
@@ -335,6 +336,9 @@ mt7996_mcu_rx_radar_detected(struct mt7996_dev *dev, struct sk_buff *skb)
r = (struct mt7996_mcu_rdd_report *)skb->data;
+ if (r->band_idx >= ARRAY_SIZE(dev->mt76.phys))
+ return;
+
mphy = dev->mt76.phys[r->band_idx];
if (!mphy)
return;
@@ -412,6 +416,9 @@ mt7996_mcu_ie_countdown(struct mt7996_dev *dev, struct sk_buff *skb)
struct header *hdr = (struct header *)data;
struct tlv *tlv = (struct tlv *)(data + 4);
+ if (hdr->band >= ARRAY_SIZE(dev->mt76.phys))
+ return;
+
if (hdr->band && dev->mt76.phys[hdr->band])
mphy = dev->mt76.phys[hdr->band];
@@ -765,9 +772,8 @@ mt7996_mcu_bss_basic_tlv(struct sk_buff *skb,
bss->dtim_period = vif->bss_conf.dtim_period;
bss->phymode = mt76_connac_get_phy_mode(phy, vif,
chandef->chan->band, NULL);
-
- if (chandef->chan->band == NL80211_BAND_6GHZ)
- bss->phymode_ext |= PHY_MODE_AX_6G;
+ bss->phymode_ext = mt76_connac_get_phy_mode_ext(phy, vif,
+ chandef->chan->band);
return 0;
}
@@ -903,8 +909,8 @@ mt7996_mcu_sta_he_tlv(struct sk_buff *skb, struct ieee80211_sta *sta)
he = (struct sta_rec_he_v2 *)tlv;
for (i = 0; i < 11; i++) {
if (i < 6)
- he->he_mac_cap[i] = cpu_to_le16(elem->mac_cap_info[i]);
- he->he_phy_cap[i] = cpu_to_le16(elem->phy_cap_info[i]);
+ he->he_mac_cap[i] = elem->mac_cap_info[i];
+ he->he_phy_cap[i] = elem->phy_cap_info[i];
}
mcs_map = sta->deflink.he_cap.he_mcs_nss_supp;
@@ -946,6 +952,35 @@ mt7996_mcu_sta_he_6g_tlv(struct sk_buff *skb, struct ieee80211_sta *sta)
}
static void
+mt7996_mcu_sta_eht_tlv(struct sk_buff *skb, struct ieee80211_sta *sta)
+{
+ struct ieee80211_eht_mcs_nss_supp *mcs_map;
+ struct ieee80211_eht_cap_elem_fixed *elem;
+ struct sta_rec_eht *eht;
+ struct tlv *tlv;
+
+ if (!sta->deflink.eht_cap.has_eht)
+ return;
+
+ mcs_map = &sta->deflink.eht_cap.eht_mcs_nss_supp;
+ elem = &sta->deflink.eht_cap.eht_cap_elem;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_EHT, sizeof(*eht));
+
+ eht = (struct sta_rec_eht *)tlv;
+ eht->tid_bitmap = 0xff;
+ eht->mac_cap = cpu_to_le16(*(u16 *)elem->mac_cap_info);
+ eht->phy_cap = cpu_to_le64(*(u64 *)elem->phy_cap_info);
+ eht->phy_cap_ext = cpu_to_le64(elem->phy_cap_info[8]);
+
+ if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_20)
+ memcpy(eht->mcs_map_bw20, &mcs_map->only_20mhz, sizeof(eht->mcs_map_bw20));
+ memcpy(eht->mcs_map_bw80, &mcs_map->bw._80, sizeof(eht->mcs_map_bw80));
+ memcpy(eht->mcs_map_bw160, &mcs_map->bw._160, sizeof(eht->mcs_map_bw160));
+ memcpy(eht->mcs_map_bw320, &mcs_map->bw._320, sizeof(eht->mcs_map_bw320));
+}
+
+static void
mt7996_mcu_sta_ht_tlv(struct sk_buff *skb, struct ieee80211_sta *sta)
{
struct sta_rec_ht *ht;
@@ -1019,15 +1054,27 @@ mt7996_is_ebf_supported(struct mt7996_phy *phy, struct ieee80211_vif *vif,
struct ieee80211_sta *sta, bool bfee)
{
struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
- int tx_ant = hweight8(phy->mt76->antenna_mask) - 1;
+ int sts = hweight16(phy->mt76->chainmask);
if (vif->type != NL80211_IFTYPE_STATION &&
vif->type != NL80211_IFTYPE_AP)
return false;
- if (!bfee && tx_ant < 2)
+ if (!bfee && sts < 2)
return false;
+ if (sta->deflink.eht_cap.has_eht) {
+ struct ieee80211_sta_eht_cap *pc = &sta->deflink.eht_cap;
+ struct ieee80211_eht_cap_elem_fixed *pe = &pc->eht_cap_elem;
+
+ if (bfee)
+ return mvif->cap.eht_su_ebfee &&
+ EHT_PHY(CAP0_SU_BEAMFORMEE, pe->phy_cap_info[0]);
+ else
+ return mvif->cap.eht_su_ebfer &&
+ EHT_PHY(CAP0_SU_BEAMFORMER, pe->phy_cap_info[0]);
+ }
+
if (sta->deflink.he_cap.has_he) {
struct ieee80211_he_cap_elem *pe = &sta->deflink.he_cap.he_cap_elem;
@@ -1185,12 +1232,68 @@ mt7996_mcu_sta_bfer_he(struct ieee80211_sta *sta, struct ieee80211_vif *vif,
}
static void
+mt7996_mcu_sta_bfer_eht(struct ieee80211_sta *sta, struct ieee80211_vif *vif,
+ struct mt7996_phy *phy, struct sta_rec_bf *bf)
+{
+ struct ieee80211_sta_eht_cap *pc = &sta->deflink.eht_cap;
+ struct ieee80211_eht_cap_elem_fixed *pe = &pc->eht_cap_elem;
+ struct ieee80211_eht_mcs_nss_supp *eht_nss = &pc->eht_mcs_nss_supp;
+ const struct ieee80211_sta_eht_cap *vc =
+ mt76_connac_get_eht_phy_cap(phy->mt76, vif);
+ const struct ieee80211_eht_cap_elem_fixed *ve = &vc->eht_cap_elem;
+ u8 nss_mcs = u8_get_bits(eht_nss->bw._80.rx_tx_mcs9_max_nss,
+ IEEE80211_EHT_MCS_NSS_RX) - 1;
+ u8 snd_dim, sts;
+
+ bf->tx_mode = MT_PHY_TYPE_EHT_MU;
+
+ mt7996_mcu_sta_sounding_rate(bf);
+
+ bf->trigger_su = EHT_PHY(CAP3_TRIG_SU_BF_FDBK, pe->phy_cap_info[3]);
+ bf->trigger_mu = EHT_PHY(CAP3_TRIG_MU_BF_PART_BW_FDBK, pe->phy_cap_info[3]);
+ snd_dim = EHT_PHY(CAP2_SOUNDING_DIM_80MHZ_MASK, ve->phy_cap_info[2]);
+ sts = EHT_PHY(CAP0_BEAMFORMEE_SS_80MHZ_MASK, pe->phy_cap_info[0]) +
+ (EHT_PHY(CAP1_BEAMFORMEE_SS_80MHZ_MASK, pe->phy_cap_info[1]) << 1);
+ bf->nrow = min_t(u8, snd_dim, sts);
+ bf->ncol = min_t(u8, nss_mcs, bf->nrow);
+ bf->ibf_ncol = bf->ncol;
+
+ if (sta->deflink.bandwidth < IEEE80211_STA_RX_BW_160)
+ return;
+
+ switch (sta->deflink.bandwidth) {
+ case IEEE80211_STA_RX_BW_160:
+ snd_dim = EHT_PHY(CAP2_SOUNDING_DIM_160MHZ_MASK, ve->phy_cap_info[2]);
+ sts = EHT_PHY(CAP1_BEAMFORMEE_SS_160MHZ_MASK, pe->phy_cap_info[1]);
+ nss_mcs = u8_get_bits(eht_nss->bw._160.rx_tx_mcs9_max_nss,
+ IEEE80211_EHT_MCS_NSS_RX) - 1;
+
+ bf->nrow_gt_bw80 = min_t(u8, snd_dim, sts);
+ bf->ncol_gt_bw80 = nss_mcs;
+ break;
+ case IEEE80211_STA_RX_BW_320:
+ snd_dim = EHT_PHY(CAP2_SOUNDING_DIM_320MHZ_MASK, ve->phy_cap_info[2]) +
+ (EHT_PHY(CAP3_SOUNDING_DIM_320MHZ_MASK,
+ ve->phy_cap_info[3]) << 1);
+ sts = EHT_PHY(CAP1_BEAMFORMEE_SS_320MHZ_MASK, pe->phy_cap_info[1]);
+ nss_mcs = u8_get_bits(eht_nss->bw._320.rx_tx_mcs9_max_nss,
+ IEEE80211_EHT_MCS_NSS_RX) - 1;
+
+ bf->nrow_gt_bw80 = min_t(u8, snd_dim, sts) << 4;
+ bf->ncol_gt_bw80 = nss_mcs << 4;
+ break;
+ default:
+ break;
+ }
+}
+
+static void
mt7996_mcu_sta_bfer_tlv(struct mt7996_dev *dev, struct sk_buff *skb,
struct ieee80211_vif *vif, struct ieee80211_sta *sta)
{
struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
struct mt7996_phy *phy = mvif->phy;
- int tx_ant = hweight8(phy->mt76->antenna_mask) - 1;
+ int tx_ant = hweight8(phy->mt76->chainmask) - 1;
struct sta_rec_bf *bf;
struct tlv *tlv;
const u8 matrix[4][4] = {
@@ -1211,11 +1314,13 @@ mt7996_mcu_sta_bfer_tlv(struct mt7996_dev *dev, struct sk_buff *skb,
tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_BF, sizeof(*bf));
bf = (struct sta_rec_bf *)tlv;
- /* he: eBF only, in accordance with spec
+ /* he/eht: eBF only, in accordance with spec
* vht: support eBF and iBF
* ht: iBF only, since mac80211 lacks eBF support
*/
- if (sta->deflink.he_cap.has_he && ebf)
+ if (sta->deflink.eht_cap.has_eht && ebf)
+ mt7996_mcu_sta_bfer_eht(sta, vif, phy, bf);
+ else if (sta->deflink.he_cap.has_he && ebf)
mt7996_mcu_sta_bfer_he(sta, vif, phy, bf);
else if (sta->deflink.vht_cap.vht_supported)
mt7996_mcu_sta_bfer_vht(sta, phy, bf, ebf);
@@ -1430,8 +1535,9 @@ mt7996_mcu_sta_rate_ctrl_tlv(struct sk_buff *skb, struct mt7996_dev *dev,
ra->auto_rate = true;
ra->phy_mode = mt76_connac_get_phy_mode(mphy, vif, band, sta);
ra->channel = chandef->chan->hw_value;
- ra->bw = sta->deflink.bandwidth;
- ra->phy.bw = sta->deflink.bandwidth;
+ ra->bw = (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_320) ?
+ CMD_CBW_320MHZ : sta->deflink.bandwidth;
+ ra->phy.bw = ra->bw;
ra->mmps_mode = mt7996_mcu_get_mmps_mode(sta->deflink.smps_mode);
if (supp_rate) {
@@ -1613,6 +1719,8 @@ int mt7996_mcu_add_sta(struct mt7996_dev *dev, struct ieee80211_vif *vif,
mt7996_mcu_sta_he_tlv(skb, sta);
/* starec he 6g*/
mt7996_mcu_sta_he_6g_tlv(skb, sta);
+ /* starec eht */
+ mt7996_mcu_sta_eht_tlv(skb, sta);
/* TODO: starec muru */
/* starec bfee */
mt7996_mcu_sta_bfee_tlv(dev, skb, vif, sta);
@@ -1809,6 +1917,7 @@ mt7996_mcu_beacon_check_caps(struct mt7996_phy *phy, struct ieee80211_vif *vif,
{
struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
struct mt7996_vif_cap *vc = &mvif->cap;
+ const struct ieee80211_eht_cap_elem_fixed *eht;
const struct ieee80211_he_cap_elem *he;
const struct ieee80211_vht_cap *vht;
const struct ieee80211_ht_cap *ht;
@@ -1879,6 +1988,23 @@ mt7996_mcu_beacon_check_caps(struct mt7996_phy *phy, struct ieee80211_vif *vif,
HE_PHY(CAP4_MU_BEAMFORMER, he->phy_cap_info[4]) &&
HE_PHY(CAP4_MU_BEAMFORMER, pe->phy_cap_info[4]);
}
+
+ ie = cfg80211_find_ext_ie(WLAN_EID_EXT_EHT_CAPABILITY,
+ mgmt->u.beacon.variable, len);
+ if (ie && ie[1] >= sizeof(*eht) + 1) {
+ const struct ieee80211_sta_eht_cap *pc =
+ mt76_connac_get_eht_phy_cap(phy->mt76, vif);
+ const struct ieee80211_eht_cap_elem_fixed *pe = &pc->eht_cap_elem;
+
+ eht = (void *)(ie + 3);
+
+ vc->eht_su_ebfer =
+ EHT_PHY(CAP0_SU_BEAMFORMER, eht->phy_cap_info[0]) &&
+ EHT_PHY(CAP0_SU_BEAMFORMER, pe->phy_cap_info[0]);
+ vc->eht_su_ebfee =
+ EHT_PHY(CAP0_SU_BEAMFORMEE, eht->phy_cap_info[0]) &&
+ EHT_PHY(CAP0_SU_BEAMFORMEE, pe->phy_cap_info[0]);
+ }
}
int mt7996_mcu_add_beacon(struct ieee80211_hw *hw,
@@ -2241,6 +2367,26 @@ mt7996_firmware_state(struct mt7996_dev *dev, bool wa)
return 0;
}
+static int
+mt7996_mcu_restart(struct mt76_dev *dev)
+{
+ struct {
+ u8 __rsv1[4];
+
+ __le16 tag;
+ __le16 len;
+ u8 power_mode;
+ u8 __rsv2[3];
+ } __packed req = {
+ .tag = cpu_to_le16(UNI_POWER_OFF),
+ .len = cpu_to_le16(sizeof(req) - 4),
+ .power_mode = 1,
+ };
+
+ return mt76_mcu_send_msg(dev, MCU_WM_UNI_CMD(POWER_CTRL), &req,
+ sizeof(req), false);
+}
+
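As with the other UNI commands in this driver, the request above carries a 4-byte reserved prefix followed by a tag/len pair, and `len` counts everything after the prefix, hence `sizeof(req) - 4`. A standalone compile-time check of that arithmetic (illustrative layout matching the struct above, with kernel types swapped for stdint equivalents):

#include <stdint.h>

struct power_off_req {
	uint8_t  rsv1[4];	/* not covered by len */
	uint16_t tag;		/* UNI_POWER_OFF, little-endian on the wire */
	uint16_t len;		/* bytes from tag onward: sizeof(req) - 4 */
	uint8_t  power_mode;
	uint8_t  rsv2[3];
} __attribute__((packed));

_Static_assert(sizeof(struct power_off_req) == 12, "unexpected size");
_Static_assert(sizeof(struct power_off_req) - 4 == 8, "len field value");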
static int mt7996_load_firmware(struct mt7996_dev *dev)
{
int ret;
@@ -2248,7 +2394,7 @@ static int mt7996_load_firmware(struct mt7996_dev *dev)
/* make sure fw is in download state */
if (mt7996_firmware_state(dev, false)) {
/* restart firmware once */
- __mt76_mcu_restart(&dev->mt76);
+ mt7996_mcu_restart(&dev->mt76);
ret = mt7996_firmware_state(dev, false);
if (ret) {
dev_err(dev->mt76.dev,
@@ -2377,33 +2523,12 @@ mt7996_mcu_init_rx_airtime(struct mt7996_dev *dev)
MCU_WM_UNI_CMD(VOW), true);
}
-static int
-mt7996_mcu_restart(struct mt76_dev *dev)
-{
- struct {
- u8 __rsv1[4];
-
- __le16 tag;
- __le16 len;
- u8 power_mode;
- u8 __rsv2[3];
- } __packed req = {
- .tag = cpu_to_le16(UNI_POWER_OFF),
- .len = cpu_to_le16(sizeof(req) - 4),
- .power_mode = 1,
- };
-
- return mt76_mcu_send_msg(dev, MCU_WM_UNI_CMD(POWER_CREL), &req,
- sizeof(req), false);
-}
-
int mt7996_mcu_init(struct mt7996_dev *dev)
{
static const struct mt76_mcu_ops mt7996_mcu_ops = {
.headroom = sizeof(struct mt76_connac2_mcu_txd), /* reuse */
.mcu_skb_send_msg = mt7996_mcu_send_message,
.mcu_parse_response = mt7996_mcu_parse_response,
- .mcu_restart = mt7996_mcu_restart,
};
int ret;
@@ -2451,16 +2576,17 @@ int mt7996_mcu_init(struct mt7996_dev *dev)
void mt7996_mcu_exit(struct mt7996_dev *dev)
{
- __mt76_mcu_restart(&dev->mt76);
+ mt7996_mcu_restart(&dev->mt76);
if (mt7996_firmware_state(dev, false)) {
dev_err(dev->mt76.dev, "Failed to exit mcu\n");
- return;
+ goto out;
}
mt76_wr(dev, MT_TOP_LPCR_HOST_BAND(0), MT_TOP_LPCR_HOST_FW_OWN);
if (dev->hif2)
mt76_wr(dev, MT_TOP_LPCR_HOST_BAND(1),
MT_TOP_LPCR_HOST_FW_OWN);
+out:
skb_queue_purge(&dev->mt76.mcu.res_q);
}
@@ -2921,8 +3047,9 @@ int mt7996_mcu_get_eeprom(struct mt7996_dev *dev, u32 offset)
bool valid;
int ret;
- ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_WM_UNI_CMD_QUERY(EFUSE_CTRL), &req,
- sizeof(req), true, &skb);
+ ret = mt76_mcu_send_and_get_msg(&dev->mt76,
+ MCU_WM_UNI_CMD_QUERY(EFUSE_CTRL),
+ &req, sizeof(req), true, &skb);
if (ret)
return ret;
@@ -2970,6 +3097,52 @@ int mt7996_mcu_get_eeprom_free_block(struct mt7996_dev *dev, u8 *block_num)
return 0;
}
+int mt7996_mcu_get_chip_config(struct mt7996_dev *dev, u32 *cap)
+{
+#define NIC_CAP 3
+#define UNI_EVENT_CHIP_CONFIG_EFUSE_VERSION 0x21
+ struct {
+ u8 _rsv[4];
+
+ __le16 tag;
+ __le16 len;
+ } __packed req = {
+ .tag = cpu_to_le16(NIC_CAP),
+ .len = cpu_to_le16(sizeof(req) - 4),
+ };
+ struct sk_buff *skb;
+ u8 *buf;
+ int ret;
+
+ ret = mt76_mcu_send_and_get_msg(&dev->mt76,
+ MCU_WM_UNI_CMD_QUERY(CHIP_CONFIG), &req,
+ sizeof(req), true, &skb);
+ if (ret)
+ return ret;
+
+ /* fixed field */
+ skb_pull(skb, 4);
+
+ buf = skb->data;
+ while (buf - skb->data < skb->len) {
+ struct tlv *tlv = (struct tlv *)buf;
+
+ switch (le16_to_cpu(tlv->tag)) {
+ case UNI_EVENT_CHIP_CONFIG_EFUSE_VERSION:
+ *cap = le32_to_cpu(*(__le32 *)(buf + sizeof(*tlv)));
+ break;
+ default:
+ break;
+ }
+
+ buf += le16_to_cpu(tlv->len);
+ }
+
+ dev_kfree_skb(skb);
+
+ return 0;
+}
+
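The event parser above walks a buffer of tag/len TLVs until skb->len bytes are consumed. A self-contained sketch of the same walk over a plain byte buffer; a zero-length guard is added here since a defensive parser must not stall on a malformed len (the in-kernel event stream is firmware-generated and trusted):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct tlv {
	uint16_t tag;
	uint16_t len;	/* covers the header itself plus the payload */
} __attribute__((packed));

static void walk_tlvs(const uint8_t *buf, size_t total)
{
	size_t off = 0;

	while (off + sizeof(struct tlv) <= total) {
		struct tlv t;

		memcpy(&t, buf + off, sizeof(t));	/* avoid unaligned reads */
		printf("tag %u len %u\n", t.tag, t.len);

		if (t.len < sizeof(t))	/* malformed: would never advance */
			break;
		off += t.len;
	}
}

int main(void)
{
	/* one TLV: tag 0x21, len 8, 4-byte payload (little-endian host assumed) */
	uint8_t buf[] = { 0x21, 0x00, 0x08, 0x00, 0x01, 0x00, 0x00, 0x00 };

	walk_tlvs(buf, sizeof(buf));
	return 0;
}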
int mt7996_mcu_get_chan_mib_info(struct mt7996_phy *phy, bool chan_switch)
{
struct {
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h
index 6084b2337598..dd0c5ac52703 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h
@@ -347,6 +347,21 @@ struct sta_rec_ba_uni {
u8 __rsv[3];
} __packed;
+struct sta_rec_eht {
+ __le16 tag;
+ __le16 len;
+ u8 tid_bitmap;
+ u8 _rsv;
+ __le16 mac_cap;
+ __le64 phy_cap;
+ __le64 phy_cap_ext;
+ u8 mcs_map_bw20[4];
+ u8 mcs_map_bw80[3];
+ u8 mcs_map_bw160[3];
+ u8 mcs_map_bw320[3];
+ u8 _rsv2[3];
+} __packed;
+
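Like every sta_rec TLV, sta_rec_eht must stay in lockstep with the firmware's expected record size, which the trailing _rsv2 pad rounds out. A compile-time sanity check of the layout exactly as declared above (standalone; kernel types swapped for stdint equivalents):

#include <stdint.h>

struct sta_rec_eht {
	uint16_t tag;
	uint16_t len;
	uint8_t  tid_bitmap;
	uint8_t  rsv;
	uint16_t mac_cap;
	uint64_t phy_cap;
	uint64_t phy_cap_ext;
	uint8_t  mcs_map_bw20[4];
	uint8_t  mcs_map_bw80[3];
	uint8_t  mcs_map_bw160[3];
	uint8_t  mcs_map_bw320[3];
	uint8_t  rsv2[3];
} __attribute__((packed));

/* 2+2+1+1+2+8+8+4+3+3+3+3 == 40 bytes, a multiple of 4 as the TLVs require */
_Static_assert(sizeof(struct sta_rec_eht) == 40, "layout mismatch");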
struct sec_key_uni {
__le16 wlan_idx;
u8 mgmt_prot;
@@ -554,6 +569,7 @@ enum {
sizeof(struct sta_rec_sec) + \
sizeof(struct sta_rec_ra_fixed) + \
sizeof(struct sta_rec_he_6g_capa) + \
+ sizeof(struct sta_rec_eht) + \
sizeof(struct sta_rec_hdrt) + \
sizeof(struct sta_rec_hdr_trans) + \
sizeof(struct tlv))
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
index 521769eb6b0e..902370a2a639 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
@@ -21,6 +21,7 @@ static const struct __base mt7996_reg_base[] = {
[WF_ETBF_BASE] = { { 0x820ea000, 0x820fa000, 0x830ea000 } },
[WF_LPON_BASE] = { { 0x820eb000, 0x820fb000, 0x830eb000 } },
[WF_MIB_BASE] = { { 0x820ed000, 0x820fd000, 0x830ed000 } },
+ [WF_RATE_BASE] = { { 0x820ee000, 0x820fe000, 0x830ee000 } },
};
static const struct __map mt7996_reg_map[] = {
@@ -149,7 +150,7 @@ static u32 __mt7996_reg_addr(struct mt7996_dev *dev, u32 addr)
if (dev_is_pci(dev->mt76.dev) &&
((addr >= MT_CBTOP1_PHY_START && addr <= MT_CBTOP1_PHY_END) ||
- (addr >= MT_CBTOP2_PHY_START && addr <= MT_CBTOP2_PHY_END)))
+ addr >= MT_CBTOP2_PHY_START))
return mt7996_reg_map_l1(dev, addr);
/* CONN_INFRA: convert to physical addr and use layer 1 remap */
@@ -317,7 +318,7 @@ struct mt7996_dev *mt7996_mmio_probe(struct device *pdev,
{
static const struct mt76_driver_ops drv_ops = {
/* txwi_size = txd size + txp size */
- .txwi_size = MT_TXD_SIZE + sizeof(struct mt7996_txp),
+ .txwi_size = MT_TXD_SIZE + sizeof(struct mt76_connac_fw_txp),
.drv_flags = MT_DRV_TXWI_NO_FREE |
MT_DRV_HW_MGMT_TXQ,
.survey_flags = SURVEY_INFO_TIME_TX |
@@ -325,7 +326,7 @@ struct mt7996_dev *mt7996_mmio_probe(struct device *pdev,
SURVEY_INFO_TIME_BSS_RX,
.token_size = MT7996_TOKEN_SIZE,
.tx_prepare_skb = mt7996_tx_prepare_skb,
- .tx_complete_skb = mt7996_tx_complete_skb,
+ .tx_complete_skb = mt76_connac_tx_complete_skb,
.rx_skb = mt7996_queue_rx_skb,
.rx_check = mt7996_rx_check,
.rx_poll_complete = mt7996_rx_poll_complete,
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
index 725344791b4c..018dfd2b36b0 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
@@ -11,12 +11,11 @@
#include "../mt76_connac.h"
#include "regs.h"
-#define MT7996_MAX_INTERFACES 19
+#define MT7996_MAX_INTERFACES 19 /* per-band */
#define MT7996_MAX_WMM_SETS 4
-#define MT7996_WTBL_SIZE 544
-#define MT7996_WTBL_RESERVED (MT7996_WTBL_SIZE - 1)
+#define MT7996_WTBL_RESERVED (mt7996_wtbl_size(dev) - 1)
#define MT7996_WTBL_STA (MT7996_WTBL_RESERVED - \
- MT7996_MAX_INTERFACES)
+ mt7996_max_interface_num(dev))
#define MT7996_WATCHDOG_TIME (HZ / 10)
#define MT7996_RESET_TIMEOUT (30 * HZ)
@@ -124,6 +123,8 @@ struct mt7996_vif_cap {
bool he_su_ebfer:1;
bool he_su_ebfee:1;
bool he_mu_ebfer:1;
+ bool eht_su_ebfer:1;
+ bool eht_su_ebfee:1;
};
struct mt7996_vif {
@@ -264,6 +265,7 @@ struct mt7996_dev {
bool dbdc_support:1;
bool tbtc_support:1;
bool flash_mode:1;
+ bool has_eht:1;
bool ibf;
u8 fw_debug_wm;
@@ -281,6 +283,8 @@ struct mt7996_dev {
u32 reg_l1_backup;
u32 reg_l2_backup;
+
+ u8 wtbl_size_group;
};
enum {
@@ -419,6 +423,7 @@ int mt7996_mcu_set_fixed_rate_ctrl(struct mt7996_dev *dev,
int mt7996_mcu_set_eeprom(struct mt7996_dev *dev);
int mt7996_mcu_get_eeprom(struct mt7996_dev *dev, u32 offset);
int mt7996_mcu_get_eeprom_free_block(struct mt7996_dev *dev, u8 *block_num);
+int mt7996_mcu_get_chip_config(struct mt7996_dev *dev, u32 *cap);
int mt7996_mcu_set_ser(struct mt7996_dev *dev, u8 action, u8 set, u8 band);
int mt7996_mcu_set_txbf(struct mt7996_dev *dev, u8 action);
int mt7996_mcu_set_fcc5_lpn(struct mt7996_dev *dev, int val);
@@ -443,6 +448,16 @@ int mt7996_mcu_fw_dbg_ctrl(struct mt7996_dev *dev, u32 module, u8 level);
void mt7996_mcu_rx_event(struct mt7996_dev *dev, struct sk_buff *skb);
void mt7996_mcu_exit(struct mt7996_dev *dev);
+static inline u8 mt7996_max_interface_num(struct mt7996_dev *dev)
+{
+ return MT7996_MAX_INTERFACES * (1 + dev->dbdc_support + dev->tbtc_support);
+}
+
+static inline u16 mt7996_wtbl_size(struct mt7996_dev *dev)
+{
+ return (dev->wtbl_size_group << 8) + 64;
+}
+
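Worked example of the two new helpers: a tri-band chip with dbdc_support and tbtc_support both set allows 19 * 3 = 57 interfaces, and a firmware-reported wtbl_size_group of 2 yields (2 << 8) + 64 = 576 WTBL entries. A compile-and-run sketch of that arithmetic:

#include <stdio.h>

#define MAX_INTERFACES_PER_BAND 19

static unsigned int max_interface_num(int dbdc, int tbtc)
{
        return MAX_INTERFACES_PER_BAND * (1 + dbdc + tbtc);
}

static unsigned int wtbl_size(unsigned int wtbl_size_group)
{
        return (wtbl_size_group << 8) + 64;
}

int main(void)
{
        /* tri-band chip: 19 interfaces per band, 57 total */
        printf("interfaces: %u\n", max_interface_num(1, 1));
        /* e.g. group 2 reported by firmware: 2 * 256 + 64 = 576 entries */
        printf("wtbl entries: %u\n", wtbl_size(2));
        return 0;
}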
void mt7996_dual_hif_set_irq_mask(struct mt7996_dev *dev, bool write_reg,
u32 clear, u32 set);
@@ -493,7 +508,6 @@ int mt7996_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
enum mt76_txq_id qid, struct mt76_wcid *wcid,
struct ieee80211_sta *sta,
struct mt76_tx_info *tx_info);
-void mt7996_tx_complete_skb(struct mt76_dev *mdev, struct mt76_queue_entry *e);
void mt7996_tx_token_put(struct mt7996_dev *dev);
void mt7996_queue_rx_skb(struct mt76_dev *mdev, enum mt76_rxq_id q,
struct sk_buff *skb, u32 *info);
@@ -502,7 +516,7 @@ void mt7996_sta_ps(struct mt76_dev *mdev, struct ieee80211_sta *sta, bool ps);
void mt7996_stats_work(struct work_struct *work);
int mt76_dfs_start_rdd(struct mt7996_dev *dev, bool force);
int mt7996_dfs_init_radar_detector(struct mt7996_phy *phy);
-void mt7996_set_stream_he_caps(struct mt7996_phy *phy);
+void mt7996_set_stream_he_eht_caps(struct mt7996_phy *phy);
void mt7996_set_stream_vht_txbf_caps(struct mt7996_phy *phy);
void mt7996_update_channel(struct mt76_phy *mphy);
int mt7996_init_debugfs(struct mt7996_phy *phy);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/regs.h b/drivers/net/wireless/mediatek/mt76/mt7996/regs.h
index 794f61b93a46..7a28cae34e34 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/regs.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/regs.h
@@ -33,6 +33,7 @@ enum base_rev {
WF_ETBF_BASE,
WF_LPON_BASE,
WF_MIB_BASE,
+ WF_RATE_BASE,
__MT_REG_BASE_MAX,
};
@@ -235,13 +236,6 @@ enum base_rev {
FIELD_PREP(MT_WTBL_LMAC_ID, _id) | \
FIELD_PREP(MT_WTBL_LMAC_DW, _dw))
-/* AGG: band 0(0x820e2000), band 1(0x820f2000), band 2(0x830e2000) */
-#define MT_WF_AGG_BASE(_band) __BASE(WF_AGG_BASE, (_band))
-#define MT_WF_AGG(_band, ofs) (MT_WF_AGG_BASE(_band) + (ofs))
-
-#define MT_AGG_ACR0(_band) MT_WF_AGG(_band, 0x054)
-#define MT_AGG_ACR_CFEND_RATE GENMASK(13, 0)
-
/* ARB: band 0(0x820e3000), band 1(0x820f3000), band 2(0x830e3000) */
#define MT_WF_ARB_BASE(_band) __BASE(WF_ARB_BASE, (_band))
#define MT_WF_ARB(_band, ofs) (MT_WF_ARB_BASE(_band) + (ofs))
@@ -300,6 +294,13 @@ enum base_rev {
#define MT_WF_RMAC_RSVD0(_band) MT_WF_RMAC(_band, 0x03e0)
#define MT_WF_RMAC_RSVD0_EIFS_CLR BIT(21)
+/* RATE: band 0(0x820ee000), band 1(0x820fe000), band 2(0x830ee000) */
+#define MT_WF_RATE_BASE(_band) __BASE(WF_RATE_BASE, (_band))
+#define MT_WF_RATE(_band, ofs) (MT_WF_RATE_BASE(_band) + (ofs))
+
+#define MT_RATE_HRCR0(_band) MT_WF_RATE(_band, 0x050)
+#define MT_RATE_HRCR0_CFEND_RATE GENMASK(14, 0)
+
/* WFDMA0 */
#define MT_WFDMA0_BASE 0xd4000
#define MT_WFDMA0(ofs) (MT_WFDMA0_BASE + (ofs))
@@ -463,7 +464,6 @@ enum base_rev {
#define MT_CBTOP1_PHY_START 0x70000000
#define MT_CBTOP1_PHY_END 0x77ffffff
#define MT_CBTOP2_PHY_START 0xf0000000
-#define MT_CBTOP2_PHY_END 0xffffffff
#define MT_INFRA_MCU_START 0x7c000000
#define MT_INFRA_MCU_END 0x7c3fffff
diff --git a/drivers/net/wireless/mediatek/mt76/sdio.c b/drivers/net/wireless/mediatek/mt76/sdio.c
index 228bc7d45011..419723118ded 100644
--- a/drivers/net/wireless/mediatek/mt76/sdio.c
+++ b/drivers/net/wireless/mediatek/mt76/sdio.c
@@ -562,6 +562,10 @@ mt76s_tx_queue_skb_raw(struct mt76_dev *dev, struct mt76_queue *q,
q->entry[q->head].buf_sz = len;
q->entry[q->head].skb = skb;
+
+ /* ensure the entry fully updated before bus access */
+ smp_wmb();
+
q->head = (q->head + 1) % q->ndesc;
q->queued++;
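The smp_wmb() above guarantees the queue entry is fully written before the head index that publishes it moves. A userspace analogue of the same producer ordering, using a C11 release store where the kernel uses smp_wmb(); the two primitives are not identical, but the publish-after-init idea is the same:

#include <stdatomic.h>
#include <stdio.h>

#define NDESC 8

struct entry { int buf_sz; const char *skb; };

static struct entry ring[NDESC];
static atomic_uint head;

static void produce(const char *skb, int len)
{
        unsigned int h = atomic_load_explicit(&head, memory_order_relaxed);

        ring[h % NDESC].buf_sz = len;
        ring[h % NDESC].skb = skb;
        /* publish the entry before moving head (smp_wmb() analogue) */
        atomic_store_explicit(&head, h + 1, memory_order_release);
}

int main(void)
{
        produce("frame", 64);
        /* a consumer pairs this with an acquire load of head */
        unsigned int h = atomic_load_explicit(&head, memory_order_acquire);
        printf("head=%u len=%d\n", h, ring[0].buf_sz);
        return 0;
}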
diff --git a/drivers/net/wireless/mediatek/mt76/sdio_txrx.c b/drivers/net/wireless/mediatek/mt76/sdio_txrx.c
index bfc4de50a4d2..ddd8c0cc744d 100644
--- a/drivers/net/wireless/mediatek/mt76/sdio_txrx.c
+++ b/drivers/net/wireless/mediatek/mt76/sdio_txrx.c
@@ -254,6 +254,10 @@ static int mt76s_tx_run_queue(struct mt76_dev *dev, struct mt76_queue *q)
if (!test_bit(MT76_STATE_MCU_RUNNING, &dev->phy.state)) {
__skb_put_zero(e->skb, 4);
+ err = __skb_grow(e->skb, roundup(e->skb->len,
+ sdio->func->cur_blksize));
+ if (err)
+ return err;
err = __mt76s_xmit_queue(dev, e->skb->data,
e->skb->len);
if (err)
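SDIO transfers must be a multiple of the function's current block size, which is why the dummy frame is grown with __skb_grow() before transmission above. The rounding itself is the kernel's roundup(); a quick demonstration:

#include <stdio.h>

/* same definition as the kernel's roundup() for positive y */
#define roundup(x, y) ((((x) + (y) - 1) / (y)) * (y))

int main(void)
{
        unsigned int blksize = 512;     /* a typical cur_blksize */
        unsigned int lens[] = { 4, 511, 512, 513 };

        for (unsigned int i = 0; i < 4; i++)
                printf("%u -> %u\n", lens[i], roundup(lens[i], blksize));
        return 0;
}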
diff --git a/drivers/net/wireless/mediatek/mt76/usb.c b/drivers/net/wireless/mediatek/mt76/usb.c
index 3e281715fcd4..b88959ef38aa 100644
--- a/drivers/net/wireless/mediatek/mt76/usb.c
+++ b/drivers/net/wireless/mediatek/mt76/usb.c
@@ -319,29 +319,27 @@ mt76u_set_endpoints(struct usb_interface *intf,
static int
mt76u_fill_rx_sg(struct mt76_dev *dev, struct mt76_queue *q, struct urb *urb,
- int nsgs, gfp_t gfp)
+ int nsgs)
{
int i;
for (i = 0; i < nsgs; i++) {
- struct page *page;
void *data;
int offset;
- data = page_frag_alloc(&q->rx_page, q->buf_size, gfp);
+ data = mt76_get_page_pool_buf(q, &offset, q->buf_size);
if (!data)
break;
- page = virt_to_head_page(data);
- offset = data - page_address(page);
- sg_set_page(&urb->sg[i], page, q->buf_size, offset);
+ sg_set_page(&urb->sg[i], virt_to_head_page(data), q->buf_size,
+ offset);
}
if (i < nsgs) {
int j;
for (j = nsgs; j < urb->num_sgs; j++)
- skb_free_frag(sg_virt(&urb->sg[j]));
+ mt76_put_page_pool_buf(sg_virt(&urb->sg[j]), false);
urb->num_sgs = i;
}
@@ -354,15 +352,16 @@ mt76u_fill_rx_sg(struct mt76_dev *dev, struct mt76_queue *q, struct urb *urb,
static int
mt76u_refill_rx(struct mt76_dev *dev, struct mt76_queue *q,
- struct urb *urb, int nsgs, gfp_t gfp)
+ struct urb *urb, int nsgs)
{
enum mt76_rxq_id qid = q - &dev->q_rx[MT_RXQ_MAIN];
+ int offset;
if (qid == MT_RXQ_MAIN && dev->usb.sg_en)
- return mt76u_fill_rx_sg(dev, q, urb, nsgs, gfp);
+ return mt76u_fill_rx_sg(dev, q, urb, nsgs);
urb->transfer_buffer_length = q->buf_size;
- urb->transfer_buffer = page_frag_alloc(&q->rx_page, q->buf_size, gfp);
+ urb->transfer_buffer = mt76_get_page_pool_buf(q, &offset, q->buf_size);
return urb->transfer_buffer ? 0 : -ENOMEM;
}
@@ -400,7 +399,7 @@ mt76u_rx_urb_alloc(struct mt76_dev *dev, struct mt76_queue *q,
if (err)
return err;
- return mt76u_refill_rx(dev, q, e->urb, sg_size, GFP_KERNEL);
+ return mt76u_refill_rx(dev, q, e->urb, sg_size);
}
static void mt76u_urb_free(struct urb *urb)
@@ -408,10 +407,10 @@ static void mt76u_urb_free(struct urb *urb)
int i;
for (i = 0; i < urb->num_sgs; i++)
- skb_free_frag(sg_virt(&urb->sg[i]));
+ mt76_put_page_pool_buf(sg_virt(&urb->sg[i]), false);
if (urb->transfer_buffer)
- skb_free_frag(urb->transfer_buffer);
+ mt76_put_page_pool_buf(urb->transfer_buffer, false);
usb_free_urb(urb);
}
@@ -547,6 +546,8 @@ mt76u_process_rx_entry(struct mt76_dev *dev, struct urb *urb,
len -= data_len;
nsgs++;
}
+
+ skb_mark_for_recycle(skb);
dev->drv->rx_skb(dev, MT_RXQ_MAIN, skb, NULL);
return nsgs;
@@ -612,7 +613,7 @@ mt76u_process_rx_queue(struct mt76_dev *dev, struct mt76_queue *q)
count = mt76u_process_rx_entry(dev, urb, q->buf_size);
if (count > 0) {
- err = mt76u_refill_rx(dev, q, urb, count, GFP_ATOMIC);
+ err = mt76u_refill_rx(dev, q, urb, count);
if (err < 0)
break;
}
@@ -663,6 +664,10 @@ mt76u_alloc_rx_queue(struct mt76_dev *dev, enum mt76_rxq_id qid)
struct mt76_queue *q = &dev->q_rx[qid];
int i, err;
+ err = mt76_create_page_pool(dev, q);
+ if (err)
+ return err;
+
spin_lock_init(&q->lock);
q->entry = devm_kcalloc(dev->dev,
MT_NUM_RX_ENTRIES, sizeof(*q->entry),
@@ -691,7 +696,6 @@ EXPORT_SYMBOL_GPL(mt76u_alloc_mcu_queue);
static void
mt76u_free_rx_queue(struct mt76_dev *dev, struct mt76_queue *q)
{
- struct page *page;
int i;
for (i = 0; i < q->ndesc; i++) {
@@ -701,13 +705,7 @@ mt76u_free_rx_queue(struct mt76_dev *dev, struct mt76_queue *q)
mt76u_urb_free(q->entry[i].urb);
q->entry[i].urb = NULL;
}
-
- if (!q->rx_page.va)
- return;
-
- page = virt_to_page(q->rx_page.va);
- __page_frag_cache_drain(page, q->rx_page.pagecnt_bias);
- memset(&q->rx_page, 0, sizeof(q->rx_page));
+ page_pool_destroy(q->page_pool);
}
static void mt76u_free_rx(struct mt76_dev *dev)
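The RX path above swaps the page-fragment allocator for the page_pool API, which hands buffers back to a pool for reuse instead of freeing them. A toy freelist showing the get/put recycling pattern; this illustrates the concept only, not the kernel page_pool internals:

#include <stdio.h>
#include <stdlib.h>

#define POOL_SIZE 4
#define BUF_SIZE  2048

struct pool {
        void *free[POOL_SIZE];
        int nfree;
};

static void *pool_get(struct pool *p)
{
        if (p->nfree)
                return p->free[--p->nfree];     /* recycle a buffer */
        return malloc(BUF_SIZE);                /* slow path: allocate */
}

static void pool_put(struct pool *p, void *buf)
{
        if (p->nfree < POOL_SIZE)
                p->free[p->nfree++] = buf;      /* keep for reuse */
        else
                free(buf);
}

int main(void)
{
        struct pool p = { .nfree = 0 };
        void *a = pool_get(&p);

        pool_put(&p, a);
        /* the second get returns the same buffer without hitting malloc */
        void *b = pool_get(&p);
        printf("recycled: %s\n", b == a ? "yes" : "no");
        free(b);
        return 0;
}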
diff --git a/drivers/net/wireless/mediatek/mt76/util.c b/drivers/net/wireless/mediatek/mt76/util.c
index 581964425468..fc76c66ff1a5 100644
--- a/drivers/net/wireless/mediatek/mt76/util.c
+++ b/drivers/net/wireless/mediatek/mt76/util.c
@@ -24,23 +24,23 @@ bool __mt76_poll(struct mt76_dev *dev, u32 offset, u32 mask, u32 val,
}
EXPORT_SYMBOL_GPL(__mt76_poll);
-bool __mt76_poll_msec(struct mt76_dev *dev, u32 offset, u32 mask, u32 val,
- int timeout)
+bool ____mt76_poll_msec(struct mt76_dev *dev, u32 offset, u32 mask, u32 val,
+ int timeout, int tick)
{
u32 cur;
- timeout /= 10;
+ timeout /= tick;
do {
cur = __mt76_rr(dev, offset) & mask;
if (cur == val)
return true;
- usleep_range(10000, 20000);
+ usleep_range(1000 * tick, 2000 * tick);
} while (timeout-- > 0);
return false;
}
-EXPORT_SYMBOL_GPL(__mt76_poll_msec);
+EXPORT_SYMBOL_GPL(____mt76_poll_msec);
int mt76_wcid_alloc(u32 *mask, int size)
{
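The refactored poll helper above takes the sleep interval as a tick parameter instead of hard-coding 10 ms. A userspace sketch of the same loop shape, with a fake register standing in for __mt76_rr():

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static uint32_t fake_reg;

/* poll until (reg & mask) == val, sleeping 'tick' ms per iteration */
static bool poll_msec(uint32_t mask, uint32_t val, int timeout, int tick)
{
        timeout /= tick;
        do {
                if ((fake_reg & mask) == val)
                        return true;
                usleep(1000 * tick);
        } while (timeout-- > 0);

        return false;
}

int main(void)
{
        fake_reg = 0x3;
        printf("%s\n", poll_msec(0x1, 0x1, 100, 10) ? "ready" : "timeout");
        return 0;
}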
diff --git a/drivers/net/wireless/mediatek/mt7601u/dma.c b/drivers/net/wireless/mediatek/mt7601u/dma.c
index 457147394edc..773a1cc2f852 100644
--- a/drivers/net/wireless/mediatek/mt7601u/dma.c
+++ b/drivers/net/wireless/mediatek/mt7601u/dma.c
@@ -123,7 +123,8 @@ static u16 mt7601u_rx_next_seg_len(u8 *data, u32 data_len)
if (data_len < min_seg_len ||
WARN_ON_ONCE(!dma_len) ||
WARN_ON_ONCE(dma_len + MT_DMA_HDRS > data_len) ||
- WARN_ON_ONCE(dma_len & 0x3))
+ WARN_ON_ONCE(dma_len & 0x3) ||
+ WARN_ON_ONCE(dma_len < min_seg_len))
return 0;
return MT_DMA_HDRS + dma_len;
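The added WARN_ON_ONCE() closes a gap where dma_len was non-zero, aligned, and within data_len, yet still shorter than the minimal segment. A compact version of the full validation; the MT_DMA_HDRS value here is illustrative:

#include <stdint.h>
#include <stdio.h>

#define MT_DMA_HDRS 8   /* illustrative header size */

static uint16_t next_seg_len(uint32_t dma_len, uint32_t data_len,
                             uint32_t min_seg_len)
{
        if (data_len < min_seg_len ||
            !dma_len ||
            dma_len + MT_DMA_HDRS > data_len ||
            (dma_len & 0x3) ||
            dma_len < min_seg_len)      /* the newly added check */
                return 0;

        return MT_DMA_HDRS + dma_len;
}

int main(void)
{
        /* dma_len 4 is aligned and fits, but is below min_seg_len 16 */
        printf("%u\n", next_seg_len(4, 64, 16));    /* 0: rejected */
        printf("%u\n", next_seg_len(32, 64, 16));   /* 40: accepted */
        return 0;
}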
diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.c b/drivers/net/wireless/microchip/wilc1000/netdev.c
index 9b319a455b96..e9f59de31b0b 100644
--- a/drivers/net/wireless/microchip/wilc1000/netdev.c
+++ b/drivers/net/wireless/microchip/wilc1000/netdev.c
@@ -730,6 +730,7 @@ netdev_tx_t wilc_mac_xmit(struct sk_buff *skb, struct net_device *ndev)
if (skb->dev != ndev) {
netdev_err(ndev, "Packet not destined to this device\n");
+ dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
@@ -980,7 +981,7 @@ struct wilc_vif *wilc_netdev_ifc_init(struct wilc *wl, const char *name,
ndev->name);
if (!wl->hif_workqueue) {
ret = -ENOMEM;
- goto error;
+ goto unregister_netdev;
}
ndev->needs_free_netdev = true;
@@ -995,6 +996,11 @@ struct wilc_vif *wilc_netdev_ifc_init(struct wilc *wl, const char *name,
return vif;
+unregister_netdev:
+ if (rtnl_locked)
+ cfg80211_unregister_netdevice(ndev);
+ else
+ unregister_netdev(ndev);
error:
free_netdev(ndev);
return ERR_PTR(ret);
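The fix introduces an unregister step ahead of the existing error label so teardown runs in reverse order of setup. The canonical shape of that pattern, with malloc()/free() standing in for the netdev and workqueue calls:

#include <stdio.h>
#include <stdlib.h>

static int setup(void)
{
        void *netdev, *workqueue;
        int ret = 0;

        netdev = malloc(64);            /* stands in for the netdev allocation */
        if (!netdev)
                return -1;

        /* ...device registration happens here, assumed to succeed... */

        workqueue = malloc(64);         /* stands in for the workqueue */
        if (!workqueue) {
                ret = -1;
                goto unregister;
        }

        free(workqueue);
        free(netdev);
        return 0;

unregister:
        /* undo the registration *before* freeing, mirroring the fix */
        free(netdev);
        return ret;
}

int main(void)
{
        printf("setup: %d\n", setup());
        return 0;
}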
diff --git a/drivers/net/wireless/quantenna/qtnfmac/event.c b/drivers/net/wireless/quantenna/qtnfmac/event.c
index 4fafe370101a..31bc58e96ac0 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/event.c
+++ b/drivers/net/wireless/quantenna/qtnfmac/event.c
@@ -478,7 +478,7 @@ qtnf_event_handle_freq_change(struct qtnf_wmac *mac,
continue;
mutex_lock(&vif->wdev.mtx);
- cfg80211_ch_switch_notify(vif->netdev, &chandef, 0);
+ cfg80211_ch_switch_notify(vif->netdev, &chandef, 0, 0);
mutex_unlock(&vif->wdev.mtx);
}
@@ -662,6 +662,7 @@ qtnf_event_handle_update_owe(struct qtnf_vif *vif,
memcpy(ie, owe_ev->ies, ie_len);
owe_info.ie_len = ie_len;
owe_info.ie = ie;
+ owe_info.assoc_link_id = -1;
pr_info("%s: external OWE processing: peer=%pM\n",
vif->netdev->name, owe_ev->peer);
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800lib.c b/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
index 12b700c7b9c3..1226a883cd67 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
+++ b/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
@@ -8924,8 +8924,6 @@ static void rt2800_rxiq_calibration(struct rt2x00_dev *rt2x00dev)
if (i < 2 && (bbptemp & 0x800000))
result = (bbptemp & 0xffffff) - 0x1000000;
- else if (i == 4)
- result = bbptemp;
else
result = bbptemp;
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/Kconfig b/drivers/net/wireless/realtek/rtl8xxxu/Kconfig
index 631d078278be..2eed20b0988c 100644
--- a/drivers/net/wireless/realtek/rtl8xxxu/Kconfig
+++ b/drivers/net/wireless/realtek/rtl8xxxu/Kconfig
@@ -5,12 +5,13 @@
config RTL8XXXU
tristate "Realtek 802.11n USB wireless chips support"
depends on MAC80211 && USB
+ depends on LEDS_CLASS
help
This is an alternative driver for various Realtek RTL8XXX
parts written to utilize the Linux mac80211 stack.
The driver is known to work with a number of RTL8723AU,
RTL8188CU, RTL8188RU, RTL8191CU, RTL8192CU, RTL8723BU, RTL8192EU,
- and RTL8188FU devices.
+ RTL8188FU, and RTL8188EU devices.
This driver is under development and has a limited feature
set. In particular it does not yet support 40MHz channels
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/Makefile b/drivers/net/wireless/realtek/rtl8xxxu/Makefile
index c4ad5325f5e7..0cb58fb30228 100644
--- a/drivers/net/wireless/realtek/rtl8xxxu/Makefile
+++ b/drivers/net/wireless/realtek/rtl8xxxu/Makefile
@@ -2,4 +2,5 @@
obj-$(CONFIG_RTL8XXXU) += rtl8xxxu.o
rtl8xxxu-y := rtl8xxxu_core.o rtl8xxxu_8192e.o rtl8xxxu_8723b.o \
- rtl8xxxu_8723a.o rtl8xxxu_8192c.o rtl8xxxu_8188f.o
+ rtl8xxxu_8723a.o rtl8xxxu_8192c.o rtl8xxxu_8188f.o \
+ rtl8xxxu_8188e.o
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
index d26df4095da0..c8cee4a24755 100644
--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
@@ -36,6 +36,7 @@
#define TX_TOTAL_PAGE_NUM 0xf8
#define TX_TOTAL_PAGE_NUM_8188F 0xf7
+#define TX_TOTAL_PAGE_NUM_8188E 0xa9
#define TX_TOTAL_PAGE_NUM_8192E 0xf3
#define TX_TOTAL_PAGE_NUM_8723B 0xf7
/* (HPQ + LPQ + NPQ + PUBQ) = TX_TOTAL_PAGE_NUM */
@@ -49,6 +50,11 @@
#define TX_PAGE_NUM_LO_PQ_8188F 0x02
#define TX_PAGE_NUM_NORM_PQ_8188F 0x02
+#define TX_PAGE_NUM_PUBQ_8188E 0x47
+#define TX_PAGE_NUM_HI_PQ_8188E 0x29
+#define TX_PAGE_NUM_LO_PQ_8188E 0x1c
+#define TX_PAGE_NUM_NORM_PQ_8188E 0x1c
+
#define TX_PAGE_NUM_PUBQ_8192E 0xe7
#define TX_PAGE_NUM_HI_PQ_8192E 0x08
#define TX_PAGE_NUM_LO_PQ_8192E 0x0c
@@ -153,7 +159,8 @@ struct rtl8xxxu_rxdesc16 {
u32 htc:1;
u32 eosp:1;
u32 bssidfit:2;
- u32 reserved1:16;
+ u32 rpt_sel:2; /* 8188e */
+ u32 reserved1:14;
u32 unicastwake:1;
u32 magicwake:1;
@@ -211,7 +218,8 @@ struct rtl8xxxu_rxdesc16 {
u32 magicwake:1;
u32 unicastwake:1;
- u32 reserved1:16;
+ u32 reserved1:14;
+ u32 rpt_sel:2; /* 8188e */
u32 bssidfit:2;
u32 eosp:1;
u32 htc:1;
@@ -502,6 +510,8 @@ struct rtl8xxxu_txdesc40 {
#define TXDESC_AMPDU_DENSITY_SHIFT 20
#define TXDESC40_BT_INT BIT(23)
#define TXDESC40_GID_SHIFT 24
+#define TXDESC_ANTENNA_SELECT_A BIT(24)
+#define TXDESC_ANTENNA_SELECT_B BIT(25)
/* Word 3 */
#define TXDESC40_USE_DRIVER_RATE BIT(8)
@@ -521,6 +531,7 @@ struct rtl8xxxu_txdesc40 {
#define TXDESC32_CTS_SELF_ENABLE BIT(11)
#define TXDESC32_RTS_CTS_ENABLE BIT(12)
#define TXDESC32_HW_RTS_ENABLE BIT(13)
+#define TXDESC32_PT_STAGE_MASK GENMASK(17, 15)
#define TXDESC_PRIME_CH_OFF_LOWER BIT(20)
#define TXDESC_PRIME_CH_OFF_UPPER BIT(21)
#define TXDESC32_SHORT_PREAMBLE BIT(24)
@@ -546,6 +557,10 @@ struct rtl8xxxu_txdesc40 {
/* Word 6 */
#define TXDESC_MAX_AGG_SHIFT 11
+#define TXDESC_USB_TX_AGG_SHIFT 24
+
+/* Word 7 */
+#define TXDESC_ANTENNA_SELECT_C BIT(29)
/* Word 8 */
#define TXDESC40_HW_SEQ_ENABLE BIT(15)
@@ -562,6 +577,9 @@ struct phy_rx_agc_info {
#endif
};
+#define CCK_AGC_RPT_LNA_IDX_MASK GENMASK(7, 5)
+#define CCK_AGC_RPT_VGA_IDX_MASK GENMASK(4, 0)
+
struct rtl8723au_phy_stats {
struct phy_rx_agc_info path_agc[RTL8723A_MAX_RF_PATHS];
u8 ch_corr[RTL8723A_MAX_RF_PATHS];
@@ -909,6 +927,42 @@ struct rtl8188fu_efuse {
u8 res11[0xc3];
};
+struct rtl8188eu_efuse {
+ __le16 rtl_id;
+ u8 res0[0x0e];
+ struct rtl8192eu_efuse_tx_power tx_power_index_A; /* 0x10 */
+ u8 res1[0x7e]; /* 0x3a */
+ u8 channel_plan; /* 0xb8 */
+ u8 xtal_k;
+ u8 thermal_meter;
+ u8 iqk_lck;
+ u8 res2[5];
+ u8 rf_board_option;
+ u8 rf_feature_option;
+ u8 rf_bt_setting;
+ u8 eeprom_version;
+ u8 eeprom_customer_id;
+ u8 res3[3];
+ u8 rf_antenna_option; /* 0xc9 */
+ u8 res4[6];
+ u8 vid; /* 0xd0 */
+ u8 res5[1];
+ u8 pid; /* 0xd2 */
+ u8 res6[1];
+ u8 usb_optional_function;
+ u8 res7[2];
+ u8 mac_addr[ETH_ALEN]; /* 0xd7 */
+ u8 res8[2];
+ u8 vendor_name[7];
+ u8 res9[2];
+ u8 device_name[0x0b]; /* 0xe8 */
+ u8 res10[2];
+ u8 serial[0x0b]; /* 0xf5 */
+ u8 res11[0x30];
+ u8 unknown[0x0d]; /* 0x130 */
+ u8 res12[0xc3];
+} __packed;
+
struct rtl8xxxu_reg8val {
u16 reg;
u8 val;
@@ -1114,6 +1168,26 @@ struct h2c_cmd {
u8 cmd;
u8 data;
} __packed bt_grant;
+ struct {
+ u8 cmd;
+ u8 macid;
+ u8 unknown0;
+ u8 rssi;
+ /*
+ * [0] - is_rx
+ * [1] - stbc_en
+ * [2] - noisy_decision
+ * [6] - bf_en
+ */
+ u8 data;
+ /*
+ * [0:6] - ra_th_offset
+ * [7] - ra_offset_direction
+ */
+ u8 ra_th_offset;
+ u8 unknown1;
+ u8 unknown2;
+ } __packed rssi_report;
};
};
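The data and ra_th_offset bytes of the new rssi_report command are bitfields packed by hand, per the comments above. A sketch of the packing; the macro names here are hypothetical, chosen to match the documented bit positions:

#include <stdint.h>
#include <stdio.h>

#define RSSI_RPT_IS_RX          (1 << 0)
#define RSSI_RPT_STBC_EN        (1 << 1)
#define RSSI_RPT_NOISY          (1 << 2)
#define RSSI_RPT_BF_EN          (1 << 6)

int main(void)
{
        uint8_t data = 0;

        data |= RSSI_RPT_IS_RX;
        data |= RSSI_RPT_NOISY;

        /* ra_th_offset in bits [0:6], direction flag in bit 7 */
        uint8_t ra_th_offset = (1 << 7) | (12 & 0x7f);

        printf("data=0x%02x ra_th_offset=0x%02x\n", data, ra_th_offset);
        return 0;
}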
@@ -1323,6 +1397,39 @@ struct rtl8xxxu_ra_report {
u8 desc_rate;
};
+struct rtl8xxxu_ra_info {
+ u8 rate_id;
+ u32 rate_mask;
+ u32 ra_use_rate;
+ u8 rate_sgi;
+ u8 rssi_sta_ra; /* Percentage */
+ u8 pre_rssi_sta_ra;
+ u8 sgi_enable;
+ u8 decision_rate;
+ u8 pre_rate;
+ u8 highest_rate;
+ u8 lowest_rate;
+ u32 nsc_up;
+ u32 nsc_down;
+ u32 total;
+ u16 retry[5];
+ u16 drop;
+ u16 rpt_time;
+ u16 pre_min_rpt_time;
+ u8 dynamic_tx_rpt_timing_counter;
+ u8 ra_waiting_counter;
+ u8 ra_pending_counter;
+ u8 ra_drop_after_down;
+ u8 pt_try_state; /* 0 trying state, 1 for decision state */
+ u8 pt_stage; /* 0~6 */
+ u8 pt_stop_count; /* Stop PT counter */
+ u8 pt_pre_rate; /* if rate change do PT */
+ u8 pt_pre_rssi; /* if RSSI change 5% do PT */
+ u8 pt_mode_ss; /* decide which rate should do PT */
+ u8 ra_stage; /* StageRA, decide how many times RA will be done between PT */
+ u8 pt_smooth_factor;
+};
+
#define CFO_TH_XTAL_HIGH 20 /* kHz */
#define CFO_TH_XTAL_LOW 10 /* kHz */
#define CFO_TH_ATC 80 /* kHz */
@@ -1336,6 +1443,8 @@ struct rtl8xxxu_cfo_tracking {
u32 packet_count_pre;
};
+#define RTL8XXXU_HW_LED_CONTROL 2
+
struct rtl8xxxu_priv {
struct ieee80211_hw *hw;
struct usb_device *udev;
@@ -1432,6 +1541,7 @@ struct rtl8xxxu_priv {
struct rtl8192cu_efuse efuse8192;
struct rtl8192eu_efuse efuse8192eu;
struct rtl8188fu_efuse efuse8188fu;
+ struct rtl8188eu_efuse efuse8188eu;
} efuse_wifi;
u32 adda_backup[RTL8XXXU_ADDA_REGS];
u32 mac_backup[RTL8XXXU_MAC_REGS];
@@ -1455,6 +1565,11 @@ struct rtl8xxxu_priv {
struct rtl8xxxu_btcoex bt_coex;
struct rtl8xxxu_ra_report ra_report;
struct rtl8xxxu_cfo_tracking cfo_tracking;
+ struct rtl8xxxu_ra_info ra_info;
+
+ bool led_registered;
+ char led_name[32];
+ struct led_classdev led_cdev;
};
struct rtl8xxxu_rx_urb {
@@ -1496,6 +1611,7 @@ struct rtl8xxxu_fileops {
u32 ramask, u8 rateid, int sgi, int txbw_40mhz);
void (*report_connect) (struct rtl8xxxu_priv *priv,
u8 macid, bool connect);
+ void (*report_rssi) (struct rtl8xxxu_priv *priv, u8 macid, u8 rssi);
void (*fill_txdesc) (struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
struct ieee80211_tx_info *tx_info,
struct rtl8xxxu_txdesc32 *tx_desc, bool sgi,
@@ -1503,6 +1619,8 @@ struct rtl8xxxu_fileops {
u32 rts_rate);
void (*set_crystal_cap) (struct rtl8xxxu_priv *priv, u8 crystal_cap);
s8 (*cck_rssi) (struct rtl8xxxu_priv *priv, u8 cck_agc_rpt);
+ int (*led_classdev_brightness_set) (struct led_classdev *led_cdev,
+ enum led_brightness brightness);
int writeN_block_size;
int rx_agg_buf_size;
char tx_desc_size;
@@ -1523,6 +1641,7 @@ struct rtl8xxxu_fileops {
u8 page_num_hi;
u8 page_num_lo;
u8 page_num_norm;
+ u8 last_llt_entry;
};
extern int rtl8xxxu_debug;
@@ -1560,7 +1679,7 @@ int rtl8xxxu_init_phy_rf(struct rtl8xxxu_priv *priv,
enum rtl8xxxu_rfpath path);
int rtl8xxxu_init_phy_regs(struct rtl8xxxu_priv *priv,
const struct rtl8xxxu_reg32val *array);
-int rtl8xxxu_load_firmware(struct rtl8xxxu_priv *priv, char *fw_name);
+int rtl8xxxu_load_firmware(struct rtl8xxxu_priv *priv, const char *fw_name);
void rtl8xxxu_firmware_self_reset(struct rtl8xxxu_priv *priv);
void rtl8xxxu_power_off(struct rtl8xxxu_priv *priv);
void rtl8xxxu_identify_vendor_1bit(struct rtl8xxxu_priv *priv, u32 vendor);
@@ -1582,6 +1701,8 @@ void rtl8xxxu_gen1_phy_iq_calibrate(struct rtl8xxxu_priv *priv);
void rtl8xxxu_gen1_init_phy_bb(struct rtl8xxxu_priv *priv);
void rtl8xxxu_gen1_set_tx_power(struct rtl8xxxu_priv *priv,
int channel, bool ht40);
+void rtl8188f_set_tx_power(struct rtl8xxxu_priv *priv,
+ int channel, bool ht40);
void rtl8xxxu_gen1_config_channel(struct ieee80211_hw *hw);
void rtl8xxxu_gen2_config_channel(struct ieee80211_hw *hw);
void rtl8xxxu_gen1_usb_quirks(struct rtl8xxxu_priv *priv);
@@ -1594,6 +1715,8 @@ void rtl8xxxu_gen1_report_connect(struct rtl8xxxu_priv *priv,
u8 macid, bool connect);
void rtl8xxxu_gen2_report_connect(struct rtl8xxxu_priv *priv,
u8 macid, bool connect);
+void rtl8xxxu_gen1_report_rssi(struct rtl8xxxu_priv *priv, u8 macid, u8 rssi);
+void rtl8xxxu_gen2_report_rssi(struct rtl8xxxu_priv *priv, u8 macid, u8 rssi);
void rtl8xxxu_gen1_init_aggregation(struct rtl8xxxu_priv *priv);
void rtl8xxxu_gen1_enable_rf(struct rtl8xxxu_priv *priv);
void rtl8xxxu_gen1_disable_rf(struct rtl8xxxu_priv *priv);
@@ -1602,6 +1725,8 @@ void rtl8xxxu_init_burst(struct rtl8xxxu_priv *priv);
int rtl8xxxu_parse_rxdesc16(struct rtl8xxxu_priv *priv, struct sk_buff *skb);
int rtl8xxxu_parse_rxdesc24(struct rtl8xxxu_priv *priv, struct sk_buff *skb);
int rtl8xxxu_gen2_channel_to_group(int channel);
+bool rtl8xxxu_simularity_compare(struct rtl8xxxu_priv *priv,
+ int result[][8], int c1, int c2);
bool rtl8xxxu_gen2_simularity_compare(struct rtl8xxxu_priv *priv,
int result[][8], int c1, int c2);
void rtl8xxxu_fill_txdesc_v1(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
@@ -1614,13 +1739,24 @@ void rtl8xxxu_fill_txdesc_v2(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
struct rtl8xxxu_txdesc32 *tx_desc32, bool sgi,
bool short_preamble, bool ampdu_enable,
u32 rts_rate);
+void rtl8xxxu_fill_txdesc_v3(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
+ struct ieee80211_tx_info *tx_info,
+ struct rtl8xxxu_txdesc32 *tx_desc32, bool sgi,
+ bool short_preamble, bool ampdu_enable,
+ u32 rts_rate);
void rtl8723bu_set_ps_tdma(struct rtl8xxxu_priv *priv,
u8 arg1, u8 arg2, u8 arg3, u8 arg4, u8 arg5);
void rtl8723bu_phy_init_antenna_selection(struct rtl8xxxu_priv *priv);
void rtl8723a_set_crystal_cap(struct rtl8xxxu_priv *priv, u8 crystal_cap);
+void rtl8188f_set_crystal_cap(struct rtl8xxxu_priv *priv, u8 crystal_cap);
s8 rtl8723a_cck_rssi(struct rtl8xxxu_priv *priv, u8 cck_agc_rpt);
+void rtl8xxxu_update_ra_report(struct rtl8xxxu_ra_report *rarpt,
+ u8 rate, u8 sgi, u8 bw);
+void rtl8188e_ra_info_init_all(struct rtl8xxxu_ra_info *ra);
+void rtl8188e_handle_ra_tx_report2(struct rtl8xxxu_priv *priv, struct sk_buff *skb);
extern struct rtl8xxxu_fileops rtl8188fu_fops;
+extern struct rtl8xxxu_fileops rtl8188eu_fops;
extern struct rtl8xxxu_fileops rtl8192cu_fops;
extern struct rtl8xxxu_fileops rtl8192eu_fops;
extern struct rtl8xxxu_fileops rtl8723au_fops;
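The new CCK_AGC_RPT masks declared above split the CCK AGC report byte into a 3-bit LNA index and a 5-bit VGA index. Extracting them is a mask-and-shift in the style of the kernel's FIELD_GET():

#include <stdint.h>
#include <stdio.h>

#define CCK_AGC_RPT_LNA_IDX_MASK 0xe0   /* GENMASK(7, 5) */
#define CCK_AGC_RPT_VGA_IDX_MASK 0x1f   /* GENMASK(4, 0) */

int main(void)
{
        uint8_t cck_agc_rpt = 0xb3;     /* sample report byte */
        unsigned int lna = (cck_agc_rpt & CCK_AGC_RPT_LNA_IDX_MASK) >> 5;
        unsigned int vga = cck_agc_rpt & CCK_AGC_RPT_VGA_IDX_MASK;

        printf("lna=%u vga=%u\n", lna, vga);    /* lna=5 vga=19 */
        return 0;
}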
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8188e.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8188e.c
new file mode 100644
index 000000000000..a99ddb41cd24
--- /dev/null
+++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8188e.c
@@ -0,0 +1,1899 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * RTL8XXXU mac80211 USB driver - 8188e specific subdriver
+ *
+ * Copyright (c) 2014 - 2016 Jes Sorensen <Jes.Sorensen@gmail.com>
+ *
+ * Portions, notably calibration code:
+ * Copyright(c) 2007 - 2011 Realtek Corporation. All rights reserved.
+ *
+ * This driver was written as a replacement for the vendor provided
+ * rtl8723au driver. As the Realtek 8xxx chips are very similar in
+ * their programming interface, I have started adding support for
+ * additional 8xxx chips like the 8192cu, 8188cus, etc.
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/spinlock.h>
+#include <linux/list.h>
+#include <linux/usb.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/ethtool.h>
+#include <linux/wireless.h>
+#include <linux/firmware.h>
+#include <linux/moduleparam.h>
+#include <net/mac80211.h>
+#include "rtl8xxxu.h"
+#include "rtl8xxxu_regs.h"
+
+static const struct rtl8xxxu_reg8val rtl8188e_mac_init_table[] = {
+ {0x026, 0x41}, {0x027, 0x35}, {0x040, 0x00}, {0x421, 0x0f},
+ {0x428, 0x0a}, {0x429, 0x10}, {0x430, 0x00}, {0x431, 0x01},
+ {0x432, 0x02}, {0x433, 0x04}, {0x434, 0x05}, {0x435, 0x06},
+ {0x436, 0x07}, {0x437, 0x08}, {0x438, 0x00}, {0x439, 0x00},
+ {0x43a, 0x01}, {0x43b, 0x02}, {0x43c, 0x04}, {0x43d, 0x05},
+ {0x43e, 0x06}, {0x43f, 0x07}, {0x440, 0x5d}, {0x441, 0x01},
+ {0x442, 0x00}, {0x444, 0x15}, {0x445, 0xf0}, {0x446, 0x0f},
+ {0x447, 0x00}, {0x458, 0x41}, {0x459, 0xa8}, {0x45a, 0x72},
+ {0x45b, 0xb9}, {0x460, 0x66}, {0x461, 0x66}, {0x480, 0x08},
+ {0x4c8, 0xff}, {0x4c9, 0x08}, {0x4cc, 0xff}, {0x4cd, 0xff},
+ {0x4ce, 0x01}, {0x4d3, 0x01}, {0x500, 0x26}, {0x501, 0xa2},
+ {0x502, 0x2f}, {0x503, 0x00}, {0x504, 0x28}, {0x505, 0xa3},
+ {0x506, 0x5e}, {0x507, 0x00}, {0x508, 0x2b}, {0x509, 0xa4},
+ {0x50a, 0x5e}, {0x50b, 0x00}, {0x50c, 0x4f}, {0x50d, 0xa4},
+ {0x50e, 0x00}, {0x50f, 0x00}, {0x512, 0x1c}, {0x514, 0x0a},
+ {0x516, 0x0a}, {0x525, 0x4f}, {0x550, 0x10}, {0x551, 0x10},
+ {0x559, 0x02}, {0x55d, 0xff}, {0x605, 0x30}, {0x608, 0x0e},
+ {0x609, 0x2a}, {0x620, 0xff}, {0x621, 0xff}, {0x622, 0xff},
+ {0x623, 0xff}, {0x624, 0xff}, {0x625, 0xff}, {0x626, 0xff},
+ {0x627, 0xff}, {0x63c, 0x08}, {0x63d, 0x08}, {0x63e, 0x0c},
+ {0x63f, 0x0c}, {0x640, 0x40}, {0x652, 0x20}, {0x66e, 0x05},
+ {0x700, 0x21}, {0x701, 0x43}, {0x702, 0x65}, {0x703, 0x87},
+ {0x708, 0x21}, {0x709, 0x43}, {0x70a, 0x65}, {0x70b, 0x87},
+ {0xffff, 0xff},
+};
+
+static const struct rtl8xxxu_reg32val rtl8188eu_phy_init_table[] = {
+ {0x800, 0x80040000}, {0x804, 0x00000003},
+ {0x808, 0x0000fc00}, {0x80c, 0x0000000a},
+ {0x810, 0x10001331}, {0x814, 0x020c3d10},
+ {0x818, 0x02200385}, {0x81c, 0x00000000},
+ {0x820, 0x01000100}, {0x824, 0x00390204},
+ {0x828, 0x00000000}, {0x82c, 0x00000000},
+ {0x830, 0x00000000}, {0x834, 0x00000000},
+ {0x838, 0x00000000}, {0x83c, 0x00000000},
+ {0x840, 0x00010000}, {0x844, 0x00000000},
+ {0x848, 0x00000000}, {0x84c, 0x00000000},
+ {0x850, 0x00000000}, {0x854, 0x00000000},
+ {0x858, 0x569a11a9}, {0x85c, 0x01000014},
+ {0x860, 0x66f60110}, {0x864, 0x061f0649},
+ {0x868, 0x00000000}, {0x86c, 0x27272700},
+ {0x870, 0x07000760}, {0x874, 0x25004000},
+ {0x878, 0x00000808}, {0x87c, 0x00000000},
+ {0x880, 0xb0000c1c}, {0x884, 0x00000001},
+ {0x888, 0x00000000}, {0x88c, 0xccc000c0},
+ {0x890, 0x00000800}, {0x894, 0xfffffffe},
+ {0x898, 0x40302010}, {0x89c, 0x00706050},
+ {0x900, 0x00000000}, {0x904, 0x00000023},
+ {0x908, 0x00000000}, {0x90c, 0x81121111},
+ {0x910, 0x00000002}, {0x914, 0x00000201},
+ {0xa00, 0x00d047c8}, {0xa04, 0x80ff800c},
+ {0xa08, 0x8c838300}, {0xa0c, 0x2e7f120f},
+ {0xa10, 0x9500bb7e}, {0xa14, 0x1114d028},
+ {0xa18, 0x00881117}, {0xa1c, 0x89140f00},
+ {0xa20, 0x1a1b0000}, {0xa24, 0x090e1317},
+ {0xa28, 0x00000204}, {0xa2c, 0x00d30000},
+ {0xa70, 0x101fbf00}, {0xa74, 0x00000007},
+ {0xa78, 0x00000900}, {0xa7c, 0x225b0606},
+ {0xa80, 0x218075b1}, {0xb2c, 0x80000000},
+ {0xc00, 0x48071d40}, {0xc04, 0x03a05611},
+ {0xc08, 0x000000e4}, {0xc0c, 0x6c6c6c6c},
+ {0xc10, 0x08800000}, {0xc14, 0x40000100},
+ {0xc18, 0x08800000}, {0xc1c, 0x40000100},
+ {0xc20, 0x00000000}, {0xc24, 0x00000000},
+ {0xc28, 0x00000000}, {0xc2c, 0x00000000},
+ {0xc30, 0x69e9ac47}, {0xc34, 0x469652af},
+ {0xc38, 0x49795994}, {0xc3c, 0x0a97971c},
+ {0xc40, 0x1f7c403f}, {0xc44, 0x000100b7},
+ {0xc48, 0xec020107}, {0xc4c, 0x007f037f},
+ {0xc50, 0x69553420}, {0xc54, 0x43bc0094},
+ {0xc58, 0x00013169}, {0xc5c, 0x00250492},
+ {0xc60, 0x00000000}, {0xc64, 0x7112848b},
+ {0xc68, 0x47c00bff}, {0xc6c, 0x00000036},
+ {0xc70, 0x2c7f000d}, {0xc74, 0x020610db},
+ {0xc78, 0x0000001f}, {0xc7c, 0x00b91612},
+ {0xc80, 0x390000e4}, {0xc84, 0x21f60000},
+ {0xc88, 0x40000100}, {0xc8c, 0x20200000},
+ {0xc90, 0x00091521}, {0xc94, 0x00000000},
+ {0xc98, 0x00121820}, {0xc9c, 0x00007f7f},
+ {0xca0, 0x00000000}, {0xca4, 0x000300a0},
+ {0xca8, 0x00000000}, {0xcac, 0x00000000},
+ {0xcb0, 0x00000000}, {0xcb4, 0x00000000},
+ {0xcb8, 0x00000000}, {0xcbc, 0x28000000},
+ {0xcc0, 0x00000000}, {0xcc4, 0x00000000},
+ {0xcc8, 0x00000000}, {0xccc, 0x00000000},
+ {0xcd0, 0x00000000}, {0xcd4, 0x00000000},
+ {0xcd8, 0x64b22427}, {0xcdc, 0x00766932},
+ {0xce0, 0x00222222}, {0xce4, 0x00000000},
+ {0xce8, 0x37644302}, {0xcec, 0x2f97d40c},
+ {0xd00, 0x00000740}, {0xd04, 0x00020401},
+ {0xd08, 0x0000907f}, {0xd0c, 0x20010201},
+ {0xd10, 0xa0633333}, {0xd14, 0x3333bc43},
+ {0xd18, 0x7a8f5b6f}, {0xd2c, 0xcc979975},
+ {0xd30, 0x00000000}, {0xd34, 0x80608000},
+ {0xd38, 0x00000000}, {0xd3c, 0x00127353},
+ {0xd40, 0x00000000}, {0xd44, 0x00000000},
+ {0xd48, 0x00000000}, {0xd4c, 0x00000000},
+ {0xd50, 0x6437140a}, {0xd54, 0x00000000},
+ {0xd58, 0x00000282}, {0xd5c, 0x30032064},
+ {0xd60, 0x4653de68}, {0xd64, 0x04518a3c},
+ {0xd68, 0x00002101}, {0xd6c, 0x2a201c16},
+ {0xd70, 0x1812362e}, {0xd74, 0x322c2220},
+ {0xd78, 0x000e3c24}, {0xe00, 0x2d2d2d2d},
+ {0xe04, 0x2d2d2d2d}, {0xe08, 0x0390272d},
+ {0xe10, 0x2d2d2d2d}, {0xe14, 0x2d2d2d2d},
+ {0xe18, 0x2d2d2d2d}, {0xe1c, 0x2d2d2d2d},
+ {0xe28, 0x00000000}, {0xe30, 0x1000dc1f},
+ {0xe34, 0x10008c1f}, {0xe38, 0x02140102},
+ {0xe3c, 0x681604c2}, {0xe40, 0x01007c00},
+ {0xe44, 0x01004800}, {0xe48, 0xfb000000},
+ {0xe4c, 0x000028d1}, {0xe50, 0x1000dc1f},
+ {0xe54, 0x10008c1f}, {0xe58, 0x02140102},
+ {0xe5c, 0x28160d05}, {0xe60, 0x00000048},
+ {0xe68, 0x001b25a4}, {0xe6c, 0x00c00014},
+ {0xe70, 0x00c00014}, {0xe74, 0x01000014},
+ {0xe78, 0x01000014}, {0xe7c, 0x01000014},
+ {0xe80, 0x01000014}, {0xe84, 0x00c00014},
+ {0xe88, 0x01000014}, {0xe8c, 0x00c00014},
+ {0xed0, 0x00c00014}, {0xed4, 0x00c00014},
+ {0xed8, 0x00c00014}, {0xedc, 0x00000014},
+ {0xee0, 0x00000014}, {0xee8, 0x21555448},
+ {0xeec, 0x01c00014}, {0xf14, 0x00000003},
+ {0xf4c, 0x00000000}, {0xf00, 0x00000300},
+ {0xffff, 0xffffffff},
+};
+
+static const struct rtl8xxxu_reg32val rtl8188e_agc_table[] = {
+ {0xc78, 0xfb000001}, {0xc78, 0xfb010001},
+ {0xc78, 0xfb020001}, {0xc78, 0xfb030001},
+ {0xc78, 0xfb040001}, {0xc78, 0xfb050001},
+ {0xc78, 0xfa060001}, {0xc78, 0xf9070001},
+ {0xc78, 0xf8080001}, {0xc78, 0xf7090001},
+ {0xc78, 0xf60a0001}, {0xc78, 0xf50b0001},
+ {0xc78, 0xf40c0001}, {0xc78, 0xf30d0001},
+ {0xc78, 0xf20e0001}, {0xc78, 0xf10f0001},
+ {0xc78, 0xf0100001}, {0xc78, 0xef110001},
+ {0xc78, 0xee120001}, {0xc78, 0xed130001},
+ {0xc78, 0xec140001}, {0xc78, 0xeb150001},
+ {0xc78, 0xea160001}, {0xc78, 0xe9170001},
+ {0xc78, 0xe8180001}, {0xc78, 0xe7190001},
+ {0xc78, 0xe61a0001}, {0xc78, 0xe51b0001},
+ {0xc78, 0xe41c0001}, {0xc78, 0xe31d0001},
+ {0xc78, 0xe21e0001}, {0xc78, 0xe11f0001},
+ {0xc78, 0x8a200001}, {0xc78, 0x89210001},
+ {0xc78, 0x88220001}, {0xc78, 0x87230001},
+ {0xc78, 0x86240001}, {0xc78, 0x85250001},
+ {0xc78, 0x84260001}, {0xc78, 0x83270001},
+ {0xc78, 0x82280001}, {0xc78, 0x6b290001},
+ {0xc78, 0x6a2a0001}, {0xc78, 0x692b0001},
+ {0xc78, 0x682c0001}, {0xc78, 0x672d0001},
+ {0xc78, 0x662e0001}, {0xc78, 0x652f0001},
+ {0xc78, 0x64300001}, {0xc78, 0x63310001},
+ {0xc78, 0x62320001}, {0xc78, 0x61330001},
+ {0xc78, 0x46340001}, {0xc78, 0x45350001},
+ {0xc78, 0x44360001}, {0xc78, 0x43370001},
+ {0xc78, 0x42380001}, {0xc78, 0x41390001},
+ {0xc78, 0x403a0001}, {0xc78, 0x403b0001},
+ {0xc78, 0x403c0001}, {0xc78, 0x403d0001},
+ {0xc78, 0x403e0001}, {0xc78, 0x403f0001},
+ {0xc78, 0xfb400001}, {0xc78, 0xfb410001},
+ {0xc78, 0xfb420001}, {0xc78, 0xfb430001},
+ {0xc78, 0xfb440001}, {0xc78, 0xfb450001},
+ {0xc78, 0xfb460001}, {0xc78, 0xfb470001},
+ {0xc78, 0xfb480001}, {0xc78, 0xfa490001},
+ {0xc78, 0xf94a0001}, {0xc78, 0xf84b0001},
+ {0xc78, 0xf74c0001}, {0xc78, 0xf64d0001},
+ {0xc78, 0xf54e0001}, {0xc78, 0xf44f0001},
+ {0xc78, 0xf3500001}, {0xc78, 0xf2510001},
+ {0xc78, 0xf1520001}, {0xc78, 0xf0530001},
+ {0xc78, 0xef540001}, {0xc78, 0xee550001},
+ {0xc78, 0xed560001}, {0xc78, 0xec570001},
+ {0xc78, 0xeb580001}, {0xc78, 0xea590001},
+ {0xc78, 0xe95a0001}, {0xc78, 0xe85b0001},
+ {0xc78, 0xe75c0001}, {0xc78, 0xe65d0001},
+ {0xc78, 0xe55e0001}, {0xc78, 0xe45f0001},
+ {0xc78, 0xe3600001}, {0xc78, 0xe2610001},
+ {0xc78, 0xc3620001}, {0xc78, 0xc2630001},
+ {0xc78, 0xc1640001}, {0xc78, 0x8b650001},
+ {0xc78, 0x8a660001}, {0xc78, 0x89670001},
+ {0xc78, 0x88680001}, {0xc78, 0x87690001},
+ {0xc78, 0x866a0001}, {0xc78, 0x856b0001},
+ {0xc78, 0x846c0001}, {0xc78, 0x676d0001},
+ {0xc78, 0x666e0001}, {0xc78, 0x656f0001},
+ {0xc78, 0x64700001}, {0xc78, 0x63710001},
+ {0xc78, 0x62720001}, {0xc78, 0x61730001},
+ {0xc78, 0x60740001}, {0xc78, 0x46750001},
+ {0xc78, 0x45760001}, {0xc78, 0x44770001},
+ {0xc78, 0x43780001}, {0xc78, 0x42790001},
+ {0xc78, 0x417a0001}, {0xc78, 0x407b0001},
+ {0xc78, 0x407c0001}, {0xc78, 0x407d0001},
+ {0xc78, 0x407e0001}, {0xc78, 0x407f0001},
+ {0xc50, 0x69553422}, {0xc50, 0x69553420},
+ {0xffff, 0xffffffff}
+};
+
+static const struct rtl8xxxu_rfregval rtl8188eu_radioa_init_table[] = {
+ {0x00, 0x00030000}, {0x08, 0x00084000},
+ {0x18, 0x00000407}, {0x19, 0x00000012},
+ {0x1e, 0x00080009}, {0x1f, 0x00000880},
+ {0x2f, 0x0001a060}, {0x3f, 0x00000000},
+ {0x42, 0x000060c0}, {0x57, 0x000d0000},
+ {0x58, 0x000be180}, {0x67, 0x00001552},
+ {0x83, 0x00000000}, {0xb0, 0x000ff8fc},
+ {0xb1, 0x00054400}, {0xb2, 0x000ccc19},
+ {0xb4, 0x00043003}, {0xb6, 0x0004953e},
+ {0xb7, 0x0001c718}, {0xb8, 0x000060ff},
+ {0xb9, 0x00080001}, {0xba, 0x00040000},
+ {0xbb, 0x00000400}, {0xbf, 0x000c0000},
+ {0xc2, 0x00002400}, {0xc3, 0x00000009},
+ {0xc4, 0x00040c91}, {0xc5, 0x00099999},
+ {0xc6, 0x000000a3}, {0xc7, 0x00088820},
+ {0xc8, 0x00076c06}, {0xc9, 0x00000000},
+ {0xca, 0x00080000}, {0xdf, 0x00000180},
+ {0xef, 0x000001a0}, {0x51, 0x0006b27d},
+ {0x52, 0x0007e49d}, /* Set to 0x0007e4dd for SDIO */
+ {0x53, 0x00000073}, {0x56, 0x00051ff3},
+ {0x35, 0x00000086}, {0x35, 0x00000186},
+ {0x35, 0x00000286}, {0x36, 0x00001c25},
+ {0x36, 0x00009c25}, {0x36, 0x00011c25},
+ {0x36, 0x00019c25}, {0xb6, 0x00048538},
+ {0x18, 0x00000c07}, {0x5a, 0x0004bd00},
+ {0x19, 0x000739d0}, {0x34, 0x0000adf3},
+ {0x34, 0x00009df0}, {0x34, 0x00008ded},
+ {0x34, 0x00007dea}, {0x34, 0x00006de7},
+ {0x34, 0x000054ee}, {0x34, 0x000044eb},
+ {0x34, 0x000034e8}, {0x34, 0x0000246b},
+ {0x34, 0x00001468}, {0x34, 0x0000006d},
+ {0x00, 0x00030159}, {0x84, 0x00068200},
+ {0x86, 0x000000ce}, {0x87, 0x00048a00},
+ {0x8e, 0x00065540}, {0x8f, 0x00088000},
+ {0xef, 0x000020a0}, {0x3b, 0x000f02b0},
+ {0x3b, 0x000ef7b0}, {0x3b, 0x000d4fb0},
+ {0x3b, 0x000cf060}, {0x3b, 0x000b0090},
+ {0x3b, 0x000a0080}, {0x3b, 0x00090080},
+ {0x3b, 0x0008f780}, {0x3b, 0x000722b0},
+ {0x3b, 0x0006f7b0}, {0x3b, 0x00054fb0},
+ {0x3b, 0x0004f060}, {0x3b, 0x00030090},
+ {0x3b, 0x00020080}, {0x3b, 0x00010080},
+ {0x3b, 0x0000f780}, {0xef, 0x000000a0},
+ {0x00, 0x00010159}, {0x18, 0x0000f407},
+ {0xFE, 0x00000000}, {0xFE, 0x00000000},
+ {0x1F, 0x00080003}, {0xFE, 0x00000000},
+ {0xFE, 0x00000000}, {0x1E, 0x00000001},
+ {0x1F, 0x00080000}, {0x00, 0x00033e60},
+ {0xff, 0xffffffff}
+};
+
+#define PERENTRY 23
+#define RETRYSIZE 5
+#define RATESIZE 28
+#define TX_RPT2_ITEM_SIZE 8
+
+static const u8 retry_penalty[PERENTRY][RETRYSIZE + 1] = {
+ {5, 4, 3, 2, 0, 3}, /* 92 , idx=0 */
+ {6, 5, 4, 3, 0, 4}, /* 86 , idx=1 */
+ {6, 5, 4, 2, 0, 4}, /* 81 , idx=2 */
+ {8, 7, 6, 4, 0, 6}, /* 75 , idx=3 */
+ {10, 9, 8, 6, 0, 8}, /* 71 , idx=4 */
+ {10, 9, 8, 4, 0, 8}, /* 66 , idx=5 */
+ {10, 9, 8, 2, 0, 8}, /* 62 , idx=6 */
+ {10, 9, 8, 0, 0, 8}, /* 59 , idx=7 */
+ {18, 17, 16, 8, 0, 16}, /* 53 , idx=8 */
+ {26, 25, 24, 16, 0, 24}, /* 50 , idx=9 */
+ {34, 33, 32, 24, 0, 32}, /* 47 , idx=0x0a */
+ {34, 31, 28, 20, 0, 32}, /* 43 , idx=0x0b */
+ {34, 31, 27, 18, 0, 32}, /* 40 , idx=0x0c */
+ {34, 31, 26, 16, 0, 32}, /* 37 , idx=0x0d */
+ {34, 30, 22, 16, 0, 32}, /* 32 , idx=0x0e */
+ {34, 30, 24, 16, 0, 32}, /* 26 , idx=0x0f */
+ {49, 46, 40, 16, 0, 48}, /* 20 , idx=0x10 */
+ {49, 45, 32, 0, 0, 48}, /* 17 , idx=0x11 */
+ {49, 45, 22, 18, 0, 48}, /* 15 , idx=0x12 */
+ {49, 40, 24, 16, 0, 48}, /* 12 , idx=0x13 */
+ {49, 32, 18, 12, 0, 48}, /* 9 , idx=0x14 */
+ {49, 22, 18, 14, 0, 48}, /* 6 , idx=0x15 */
+ {49, 16, 16, 0, 0, 48} /* 3, idx=0x16 */
+};
+
+static const u8 pt_penalty[RETRYSIZE + 1] = {34, 31, 30, 24, 0, 32};
+
+static const u8 retry_penalty_idx_normal[2][RATESIZE] = {
+ { /* RSSI>TH */
+ 4, 4, 4, 5,
+ 4, 4, 5, 7, 7, 7, 8, 0x0a,
+ 4, 4, 4, 4, 6, 0x0a, 0x0b, 0x0d,
+ 5, 5, 7, 7, 8, 0x0b, 0x0d, 0x0f
+ },
+ { /* RSSI<TH */
+ 0x0a, 0x0a, 0x0b, 0x0c,
+ 0x0a, 0x0a, 0x0b, 0x0c, 0x0d, 0x10, 0x13, 0x13,
+ 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x11, 0x13, 0x13,
+ 9, 9, 9, 9, 0x0c, 0x0e, 0x11, 0x13
+ }
+};
+
+static const u8 retry_penalty_idx_cut_i[2][RATESIZE] = {
+ { /* RSSI>TH */
+ 4, 4, 4, 5,
+ 4, 4, 5, 7, 7, 7, 8, 0x0a,
+ 4, 4, 4, 4, 6, 0x0a, 0x0b, 0x0d,
+ 5, 5, 7, 7, 8, 0x0b, 0x0d, 0x0f
+ },
+ { /* RSSI<TH */
+ 0x0a, 0x0a, 0x0b, 0x0c,
+ 0x0a, 0x0a, 0x0b, 0x0c, 0x0d, 0x10, 0x13, 0x13,
+ 0x06, 0x07, 0x08, 0x0d, 0x0e, 0x11, 0x11, 0x11,
+ 9, 9, 9, 9, 0x0c, 0x0e, 0x11, 0x13
+ }
+};
+
+static const u8 retry_penalty_up_idx_normal[RATESIZE] = {
+ 0x0c, 0x0d, 0x0d, 0x0f,
+ 0x0d, 0x0e, 0x0f, 0x0f, 0x10, 0x12, 0x13, 0x14,
+ 0x0f, 0x10, 0x10, 0x12, 0x12, 0x13, 0x14, 0x15,
+ 0x11, 0x11, 0x12, 0x13, 0x13, 0x13, 0x14, 0x15
+};
+
+static const u8 retry_penalty_up_idx_cut_i[RATESIZE] = {
+ 0x0c, 0x0d, 0x0d, 0x0f,
+ 0x0d, 0x0e, 0x0f, 0x0f, 0x10, 0x12, 0x13, 0x14,
+ 0x0b, 0x0b, 0x11, 0x11, 0x12, 0x12, 0x12, 0x12,
+ 0x11, 0x11, 0x12, 0x13, 0x13, 0x13, 0x14, 0x15
+};
+
+static const u8 rssi_threshold[RATESIZE] = {
+ 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0x24, 0x26, 0x2a,
+ 0x18, 0x1a, 0x1d, 0x1f, 0x21, 0x27, 0x29, 0x2a,
+ 0, 0, 0, 0x1f, 0x23, 0x28, 0x2a, 0x2c
+};
+
+static const u16 n_threshold_high[RATESIZE] = {
+ 4, 4, 8, 16,
+ 24, 36, 48, 72, 96, 144, 192, 216,
+ 60, 80, 100, 160, 240, 400, 600, 800,
+ 300, 320, 480, 720, 1000, 1200, 1600, 2000
+};
+
+static const u16 n_threshold_low[RATESIZE] = {
+ 2, 2, 4, 8,
+ 12, 18, 24, 36, 48, 72, 96, 108,
+ 30, 40, 50, 80, 120, 200, 300, 400,
+ 150, 160, 240, 360, 500, 600, 800, 1000
+};
+
+static const u8 dropping_necessary[RATESIZE] = {
+ 1, 1, 1, 1,
+ 1, 2, 3, 4, 5, 6, 7, 8,
+ 1, 2, 3, 4, 5, 6, 7, 8,
+ 5, 6, 7, 8, 9, 10, 11, 12
+};
+
+static const u8 pending_for_rate_up_fail[5] = {2, 10, 24, 40, 60};
+
+static const u16 dynamic_tx_rpt_timing[6] = {
+ 0x186a, 0x30d4, 0x493e, 0x61a8, 0x7a12, 0x927c /* 200ms-1200ms */
+};
+
+enum rtl8188e_tx_rpt_timing {
+ DEFAULT_TIMING = 0,
+ INCREASE_TIMING,
+ DECREASE_TIMING
+};
+
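/*
 * The dynamic_tx_rpt_timing[] table above steps the TX report interval
 * from 200 ms to 1200 ms, and the rtl8188e_tx_rpt_timing actions move an
 * index through it, clamped at both ends. A standalone sketch of that
 * stepping; adjust_timing() is a hypothetical helper, not a driver
 * function:
 */
#include <stdio.h>

static const unsigned short timing_table[6] = {
        0x186a, 0x30d4, 0x493e, 0x61a8, 0x7a12, 0x927c  /* 200ms..1200ms */
};

enum { DEF_TIMING, INC_TIMING, DEC_TIMING };

static int adjust_timing(int idx, int action)
{
        if (action == INC_TIMING && idx < 5)
                idx++;
        else if (action == DEC_TIMING && idx > 0)
                idx--;
        else if (action == DEF_TIMING)
                idx = 0;

        return idx;
}

int main(void)
{
        int idx = adjust_timing(0, INC_TIMING);

        printf("interval reg = 0x%04x\n", timing_table[idx]);
        return 0;
}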
+static int rtl8188eu_identify_chip(struct rtl8xxxu_priv *priv)
+{
+ struct device *dev = &priv->udev->dev;
+ u32 sys_cfg, vendor;
+ int ret = 0;
+
+ strscpy(priv->chip_name, "8188EU", sizeof(priv->chip_name));
+ priv->rtl_chip = RTL8188E;
+ priv->rf_paths = 1;
+ priv->rx_paths = 1;
+ priv->tx_paths = 1;
+ priv->has_wifi = 1;
+
+ sys_cfg = rtl8xxxu_read32(priv, REG_SYS_CFG);
+ priv->chip_cut = u32_get_bits(sys_cfg, SYS_CFG_CHIP_VERSION_MASK);
+ if (sys_cfg & SYS_CFG_TRP_VAUX_EN) {
+ dev_info(dev, "Unsupported test chip\n");
+ return -EOPNOTSUPP;
+ }
+
+ /*
+ * TODO: At a glance, I cut requires a different firmware,
+ * different initialisation tables, and no software rate
+ * control. The vendor driver is not configured to handle
+ * I cut chips by default. Are there any in the wild?
+ */
+ if (priv->chip_cut == 8) {
+ dev_info(dev, "RTL8188EU cut I is not supported. Please complain about it at linux-wireless@vger.kernel.org.\n");
+ return -EOPNOTSUPP;
+ }
+
+ vendor = sys_cfg & SYS_CFG_VENDOR_ID;
+ rtl8xxxu_identify_vendor_1bit(priv, vendor);
+
+ ret = rtl8xxxu_config_endpoints_no_sie(priv);
+
+ return ret;
+}
+
+static void rtl8188eu_config_channel(struct ieee80211_hw *hw)
+{
+ struct rtl8xxxu_priv *priv = hw->priv;
+ u32 val32, rsr;
+ u8 opmode;
+ int sec_ch_above, channel;
+ int i;
+
+ opmode = rtl8xxxu_read8(priv, REG_BW_OPMODE);
+ rsr = rtl8xxxu_read32(priv, REG_RESPONSE_RATE_SET);
+ channel = hw->conf.chandef.chan->hw_value;
+
+ switch (hw->conf.chandef.width) {
+ case NL80211_CHAN_WIDTH_20_NOHT:
+ case NL80211_CHAN_WIDTH_20:
+ opmode |= BW_OPMODE_20MHZ;
+ rtl8xxxu_write8(priv, REG_BW_OPMODE, opmode);
+
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_RF_MODE);
+ val32 &= ~FPGA_RF_MODE;
+ rtl8xxxu_write32(priv, REG_FPGA0_RF_MODE, val32);
+
+ val32 = rtl8xxxu_read32(priv, REG_FPGA1_RF_MODE);
+ val32 &= ~FPGA_RF_MODE;
+ rtl8xxxu_write32(priv, REG_FPGA1_RF_MODE, val32);
+ break;
+ case NL80211_CHAN_WIDTH_40:
+ if (hw->conf.chandef.center_freq1 >
+ hw->conf.chandef.chan->center_freq) {
+ sec_ch_above = 1;
+ channel += 2;
+ } else {
+ sec_ch_above = 0;
+ channel -= 2;
+ }
+
+ opmode &= ~BW_OPMODE_20MHZ;
+ rtl8xxxu_write8(priv, REG_BW_OPMODE, opmode);
+ rsr &= ~RSR_RSC_BANDWIDTH_40M;
+ if (sec_ch_above)
+ rsr |= RSR_RSC_LOWER_SUB_CHANNEL;
+ else
+ rsr |= RSR_RSC_UPPER_SUB_CHANNEL;
+ rtl8xxxu_write32(priv, REG_RESPONSE_RATE_SET, rsr);
+
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_RF_MODE);
+ val32 |= FPGA_RF_MODE;
+ rtl8xxxu_write32(priv, REG_FPGA0_RF_MODE, val32);
+
+ val32 = rtl8xxxu_read32(priv, REG_FPGA1_RF_MODE);
+ val32 |= FPGA_RF_MODE;
+ rtl8xxxu_write32(priv, REG_FPGA1_RF_MODE, val32);
+
+ /*
+ * Set Control channel to upper or lower. These settings
+ * are required only for 40MHz
+ */
+ val32 = rtl8xxxu_read32(priv, REG_CCK0_SYSTEM);
+ val32 &= ~CCK0_SIDEBAND;
+ if (!sec_ch_above)
+ val32 |= CCK0_SIDEBAND;
+ rtl8xxxu_write32(priv, REG_CCK0_SYSTEM, val32);
+
+ val32 = rtl8xxxu_read32(priv, REG_OFDM1_LSTF);
+ val32 &= ~OFDM_LSTF_PRIME_CH_MASK; /* 0xc00 */
+ if (sec_ch_above)
+ val32 |= OFDM_LSTF_PRIME_CH_LOW;
+ else
+ val32 |= OFDM_LSTF_PRIME_CH_HIGH;
+ rtl8xxxu_write32(priv, REG_OFDM1_LSTF, val32);
+
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_POWER_SAVE);
+ val32 &= ~(FPGA0_PS_LOWER_CHANNEL | FPGA0_PS_UPPER_CHANNEL);
+ if (sec_ch_above)
+ val32 |= FPGA0_PS_UPPER_CHANNEL;
+ else
+ val32 |= FPGA0_PS_LOWER_CHANNEL;
+ rtl8xxxu_write32(priv, REG_FPGA0_POWER_SAVE, val32);
+ break;
+
+ default:
+ break;
+ }
+
+ for (i = RF_A; i < priv->rf_paths; i++) {
+ val32 = rtl8xxxu_read_rfreg(priv, i, RF6052_REG_MODE_AG);
+ u32p_replace_bits(&val32, channel, MODE_AG_CHANNEL_MASK);
+ rtl8xxxu_write_rfreg(priv, i, RF6052_REG_MODE_AG, val32);
+ }
+
+ for (i = RF_A; i < priv->rf_paths; i++) {
+ val32 = rtl8xxxu_read_rfreg(priv, i, RF6052_REG_MODE_AG);
+ val32 &= ~MODE_AG_BW_MASK;
+ if (hw->conf.chandef.width == NL80211_CHAN_WIDTH_40)
+ val32 |= MODE_AG_BW_40MHZ_8723B;
+ else
+ val32 |= MODE_AG_BW_20MHZ_8723B;
+ rtl8xxxu_write_rfreg(priv, i, RF6052_REG_MODE_AG, val32);
+ }
+}
+
+static void rtl8188eu_init_aggregation(struct rtl8xxxu_priv *priv)
+{
+ u8 agg_ctrl, usb_spec;
+
+ usb_spec = rtl8xxxu_read8(priv, REG_USB_SPECIAL_OPTION);
+ usb_spec &= ~USB_SPEC_USB_AGG_ENABLE;
+ rtl8xxxu_write8(priv, REG_USB_SPECIAL_OPTION, usb_spec);
+
+ agg_ctrl = rtl8xxxu_read8(priv, REG_TRXDMA_CTRL);
+ agg_ctrl &= ~TRXDMA_CTRL_RXDMA_AGG_EN;
+ rtl8xxxu_write8(priv, REG_TRXDMA_CTRL, agg_ctrl);
+}
+
+static int rtl8188eu_parse_efuse(struct rtl8xxxu_priv *priv)
+{
+ struct rtl8188eu_efuse *efuse = &priv->efuse_wifi.efuse8188eu;
+
+ if (efuse->rtl_id != cpu_to_le16(0x8129))
+ return -EINVAL;
+
+ ether_addr_copy(priv->mac_addr, efuse->mac_addr);
+
+ memcpy(priv->cck_tx_power_index_A, efuse->tx_power_index_A.cck_base,
+ sizeof(efuse->tx_power_index_A.cck_base));
+
+ memcpy(priv->ht40_1s_tx_power_index_A,
+ efuse->tx_power_index_A.ht40_base,
+ sizeof(efuse->tx_power_index_A.ht40_base));
+
+ priv->default_crystal_cap = efuse->xtal_k & 0x3f;
+
+ dev_info(&priv->udev->dev, "Vendor: %.7s\n", efuse->vendor_name);
+ dev_info(&priv->udev->dev, "Product: %.11s\n", efuse->device_name);
+ dev_info(&priv->udev->dev, "Serial: %.11s\n", efuse->serial);
+
+ return 0;
+}
+
+static void rtl8188eu_reset_8051(struct rtl8xxxu_priv *priv)
+{
+ u16 sys_func;
+
+ sys_func = rtl8xxxu_read16(priv, REG_SYS_FUNC);
+ sys_func &= ~SYS_FUNC_CPU_ENABLE;
+ rtl8xxxu_write16(priv, REG_SYS_FUNC, sys_func);
+
+ sys_func |= SYS_FUNC_CPU_ENABLE;
+ rtl8xxxu_write16(priv, REG_SYS_FUNC, sys_func);
+}
+
+static int rtl8188eu_load_firmware(struct rtl8xxxu_priv *priv)
+{
+ const char *fw_name;
+ int ret;
+
+ fw_name = "rtlwifi/rtl8188eufw.bin";
+
+ ret = rtl8xxxu_load_firmware(priv, fw_name);
+
+ return ret;
+}
+
+static void rtl8188eu_init_phy_bb(struct rtl8xxxu_priv *priv)
+{
+ u8 val8;
+ u16 val16;
+
+ val16 = rtl8xxxu_read16(priv, REG_SYS_FUNC);
+ val16 |= SYS_FUNC_BB_GLB_RSTN | SYS_FUNC_BBRSTB | SYS_FUNC_DIO_RF;
+ rtl8xxxu_write16(priv, REG_SYS_FUNC, val16);
+
+ /*
+ * Per vendor driver, run power sequence before init of RF
+ */
+ val8 = RF_ENABLE | RF_RSTB | RF_SDMRSTB;
+ rtl8xxxu_write8(priv, REG_RF_CTRL, val8);
+
+ val8 = SYS_FUNC_USBA | SYS_FUNC_USBD |
+ SYS_FUNC_BB_GLB_RSTN | SYS_FUNC_BBRSTB;
+ rtl8xxxu_write8(priv, REG_SYS_FUNC, val8);
+
+ rtl8xxxu_init_phy_regs(priv, rtl8188eu_phy_init_table);
+ rtl8xxxu_init_phy_regs(priv, rtl8188e_agc_table);
+}
+
+static int rtl8188eu_init_phy_rf(struct rtl8xxxu_priv *priv)
+{
+ return rtl8xxxu_init_phy_rf(priv, rtl8188eu_radioa_init_table, RF_A);
+}
+
+static int rtl8188eu_iqk_path_a(struct rtl8xxxu_priv *priv)
+{
+ u32 reg_eac, reg_e94, reg_e9c;
+ int result = 0;
+
+ /* Path A IQK setting */
+ rtl8xxxu_write32(priv, REG_TX_IQK_TONE_A, 0x10008c1c);
+ rtl8xxxu_write32(priv, REG_RX_IQK_TONE_A, 0x30008c1c);
+
+ rtl8xxxu_write32(priv, REG_TX_IQK_PI_A, 0x8214032a);
+ rtl8xxxu_write32(priv, REG_RX_IQK_PI_A, 0x28160000);
+
+ /* LO calibration setting */
+ rtl8xxxu_write32(priv, REG_IQK_AGC_RSP, 0x00462911);
+
+ /* One shot, path A LOK & IQK */
+ rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf9000000);
+ rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf8000000);
+
+ mdelay(10);
+
+ /* Check failed */
+ reg_eac = rtl8xxxu_read32(priv, REG_RX_POWER_AFTER_IQK_A_2);
+ reg_e94 = rtl8xxxu_read32(priv, REG_TX_POWER_BEFORE_IQK_A);
+ reg_e9c = rtl8xxxu_read32(priv, REG_TX_POWER_AFTER_IQK_A);
+
+ if (!(reg_eac & BIT(28)) &&
+ ((reg_e94 & 0x03ff0000) != 0x01420000) &&
+ ((reg_e9c & 0x03ff0000) != 0x00420000))
+ result |= 0x01;
+
+ return result;
+}
+
+static int rtl8188eu_rx_iqk_path_a(struct rtl8xxxu_priv *priv)
+{
+ u32 reg_ea4, reg_eac, reg_e94, reg_e9c, val32;
+ int result = 0;
+
+ /* Leave IQK mode */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ /* Enable path A PA in TX IQK mode */
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_WE_LUT, 0x800a0);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_RCK_OS, 0x30000);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G1, 0x0000f);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G2, 0xf117b);
+
+ /* Enter IQK mode */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0x808000, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ /* TX IQK setting */
+ rtl8xxxu_write32(priv, REG_TX_IQK, 0x01007c00);
+ rtl8xxxu_write32(priv, REG_RX_IQK, 0x81004800);
+
+ /* path-A IQK setting */
+ rtl8xxxu_write32(priv, REG_TX_IQK_TONE_A, 0x10008c1c);
+ rtl8xxxu_write32(priv, REG_RX_IQK_TONE_A, 0x30008c1c);
+
+ rtl8xxxu_write32(priv, REG_TX_IQK_PI_A, 0x82160804);
+ rtl8xxxu_write32(priv, REG_RX_IQK_PI_A, 0x28160000);
+
+ /* LO calibration setting */
+ rtl8xxxu_write32(priv, REG_IQK_AGC_RSP, 0x0046a911);
+
+ /* One shot, path A LOK & IQK */
+ rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf9000000);
+ rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf8000000);
+
+ mdelay(10);
+
+ /* Check failed */
+ reg_eac = rtl8xxxu_read32(priv, REG_RX_POWER_AFTER_IQK_A_2);
+ reg_e94 = rtl8xxxu_read32(priv, REG_TX_POWER_BEFORE_IQK_A);
+ reg_e9c = rtl8xxxu_read32(priv, REG_TX_POWER_AFTER_IQK_A);
+
+ if (!(reg_eac & BIT(28)) &&
+ ((reg_e94 & 0x03ff0000) != 0x01420000) &&
+ ((reg_e9c & 0x03ff0000) != 0x00420000))
+ result |= 0x01;
+ else
+ goto out;
+
+ val32 = 0x80007c00 |
+ (reg_e94 & 0x03ff0000) | ((reg_e9c >> 16) & 0x03ff);
+ rtl8xxxu_write32(priv, REG_TX_IQK, val32);
+
+ /* Modify RX IQK mode table */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_WE_LUT, 0x800a0);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_RCK_OS, 0x30000);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G1, 0x0000f);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G2, 0xf7ffa);
+
+ /* Enter IQK mode */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0x808000, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ /* IQK setting */
+ rtl8xxxu_write32(priv, REG_RX_IQK, 0x01004800);
+
+ /* Path A IQK setting */
+ rtl8xxxu_write32(priv, REG_TX_IQK_TONE_A, 0x30008c1c);
+ rtl8xxxu_write32(priv, REG_RX_IQK_TONE_A, 0x10008c1c);
+
+ rtl8xxxu_write32(priv, REG_TX_IQK_PI_A, 0x82160c05);
+ rtl8xxxu_write32(priv, REG_RX_IQK_PI_A, 0x28160c05);
+
+ /* LO calibration setting */
+ rtl8xxxu_write32(priv, REG_IQK_AGC_RSP, 0x0046a911);
+
+ /* One shot, path A LOK & IQK */
+ rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf9000000);
+ rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf8000000);
+
+ mdelay(10);
+
+ reg_eac = rtl8xxxu_read32(priv, REG_RX_POWER_AFTER_IQK_A_2);
+ reg_ea4 = rtl8xxxu_read32(priv, REG_RX_POWER_BEFORE_IQK_A_2);
+
+ if (!(reg_eac & BIT(27)) &&
+ ((reg_ea4 & 0x03ff0000) != 0x01320000) &&
+ ((reg_eac & 0x03ff0000) != 0x00360000))
+ result |= 0x02;
+ else
+ dev_warn(&priv->udev->dev, "%s: Path A RX IQK failed!\n",
+ __func__);
+
+out:
+ return result;
+}
+
+static void rtl8188eu_phy_iqcalibrate(struct rtl8xxxu_priv *priv,
+ int result[][8], int t)
+{
+ struct device *dev = &priv->udev->dev;
+ u32 i, val32;
+ int path_a_ok;
+ int retry = 2;
+ static const u32 adda_regs[RTL8XXXU_ADDA_REGS] = {
+ REG_FPGA0_XCD_SWITCH_CTRL, REG_BLUETOOTH,
+ REG_RX_WAIT_CCA, REG_TX_CCK_RFON,
+ REG_TX_CCK_BBON, REG_TX_OFDM_RFON,
+ REG_TX_OFDM_BBON, REG_TX_TO_RX,
+ REG_TX_TO_TX, REG_RX_CCK,
+ REG_RX_OFDM, REG_RX_WAIT_RIFS,
+ REG_RX_TO_RX, REG_STANDBY,
+ REG_SLEEP, REG_PMPD_ANAEN
+ };
+ static const u32 iqk_mac_regs[RTL8XXXU_MAC_REGS] = {
+ REG_TXPAUSE, REG_BEACON_CTRL,
+ REG_BEACON_CTRL_1, REG_GPIO_MUXCFG
+ };
+ static const u32 iqk_bb_regs[RTL8XXXU_BB_REGS] = {
+ REG_OFDM0_TRX_PATH_ENABLE, REG_OFDM0_TR_MUX_PAR,
+ REG_FPGA0_XCD_RF_SW_CTRL, REG_CONFIG_ANT_A, REG_CONFIG_ANT_B,
+ REG_FPGA0_XAB_RF_SW_CTRL, REG_FPGA0_XA_RF_INT_OE,
+ REG_FPGA0_XB_RF_INT_OE, REG_CCK0_AFE_SETTING
+ };
+
+ /*
+ * Note: IQ calibration must be performed after loading
+ * PHY_REG.txt, and radio_a, radio_b.txt
+ */
+
+ if (t == 0) {
+ /* Save ADDA parameters, turn Path A ADDA on */
+ rtl8xxxu_save_regs(priv, adda_regs, priv->adda_backup,
+ RTL8XXXU_ADDA_REGS);
+ rtl8xxxu_save_mac_regs(priv, iqk_mac_regs, priv->mac_backup);
+ rtl8xxxu_save_regs(priv, iqk_bb_regs,
+ priv->bb_backup, RTL8XXXU_BB_REGS);
+ }
+
+ rtl8xxxu_path_adda_on(priv, adda_regs, true);
+
+ if (t == 0) {
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_XA_HSSI_PARM1);
+ priv->pi_enabled = u32_get_bits(val32, FPGA0_HSSI_PARM1_PI);
+ }
+
+ if (!priv->pi_enabled) {
+ /* Switch BB to PI mode to do IQ Calibration. */
+ rtl8xxxu_write32(priv, REG_FPGA0_XA_HSSI_PARM1, 0x01000100);
+ rtl8xxxu_write32(priv, REG_FPGA0_XB_HSSI_PARM1, 0x01000100);
+ }
+
+ /* MAC settings */
+ rtl8xxxu_mac_calibration(priv, iqk_mac_regs, priv->mac_backup);
+
+ val32 = rtl8xxxu_read32(priv, REG_CCK0_AFE_SETTING);
+ u32p_replace_bits(&val32, 0xf, 0x0f000000);
+ rtl8xxxu_write32(priv, REG_CCK0_AFE_SETTING, val32);
+
+ rtl8xxxu_write32(priv, REG_OFDM0_TRX_PATH_ENABLE, 0x03a05600);
+ rtl8xxxu_write32(priv, REG_OFDM0_TR_MUX_PAR, 0x000800e4);
+ rtl8xxxu_write32(priv, REG_FPGA0_XCD_RF_SW_CTRL, 0x22204000);
+
+ if (!priv->no_pape) {
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_XAB_RF_SW_CTRL);
+ val32 |= (FPGA0_RF_PAPE |
+ (FPGA0_RF_PAPE << FPGA0_RF_BD_CTRL_SHIFT));
+ rtl8xxxu_write32(priv, REG_FPGA0_XAB_RF_SW_CTRL, val32);
+ }
+
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_XA_RF_INT_OE);
+ val32 &= ~BIT(10);
+ rtl8xxxu_write32(priv, REG_FPGA0_XA_RF_INT_OE, val32);
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_XB_RF_INT_OE);
+ val32 &= ~BIT(10);
+ rtl8xxxu_write32(priv, REG_FPGA0_XB_RF_INT_OE, val32);
+
+ /* Page B init */
+ rtl8xxxu_write32(priv, REG_CONFIG_ANT_A, 0x0f600000);
+
+ /* IQ calibration setting */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0x808000, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+ rtl8xxxu_write32(priv, REG_TX_IQK, 0x01007c00);
+ rtl8xxxu_write32(priv, REG_RX_IQK, 0x81004800);
+
+ for (i = 0; i < retry; i++) {
+ path_a_ok = rtl8188eu_iqk_path_a(priv);
+ if (path_a_ok == 0x01) {
+ val32 = rtl8xxxu_read32(priv,
+ REG_TX_POWER_BEFORE_IQK_A);
+ result[t][0] = (val32 >> 16) & 0x3ff;
+ val32 = rtl8xxxu_read32(priv,
+ REG_TX_POWER_AFTER_IQK_A);
+ result[t][1] = (val32 >> 16) & 0x3ff;
+ break;
+ }
+ }
+
+ if (!path_a_ok)
+ dev_dbg(dev, "%s: Path A TX IQK failed!\n", __func__);
+
+ for (i = 0; i < retry; i++) {
+ path_a_ok = rtl8188eu_rx_iqk_path_a(priv);
+ if (path_a_ok == 0x03) {
+ val32 = rtl8xxxu_read32(priv,
+ REG_RX_POWER_BEFORE_IQK_A_2);
+ result[t][2] = (val32 >> 16) & 0x3ff;
+ val32 = rtl8xxxu_read32(priv,
+ REG_RX_POWER_AFTER_IQK_A_2);
+ result[t][3] = (val32 >> 16) & 0x3ff;
+
+ break;
+ }
+ }
+
+ if (!path_a_ok)
+ dev_dbg(dev, "%s: Path A RX IQK failed!\n", __func__);
+
+ /* Back to BB mode, load original value */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ if (t == 0)
+ return;
+
+ if (!priv->pi_enabled) {
+ /* Switch BB back to SI mode after finishing IQ calibration */
+ rtl8xxxu_write32(priv, REG_FPGA0_XA_HSSI_PARM1, 0x01000000);
+ rtl8xxxu_write32(priv, REG_FPGA0_XB_HSSI_PARM1, 0x01000000);
+ }
+
+ /* Reload ADDA power saving parameters */
+ rtl8xxxu_restore_regs(priv, adda_regs, priv->adda_backup,
+ RTL8XXXU_ADDA_REGS);
+
+ /* Reload MAC parameters */
+ rtl8xxxu_restore_mac_regs(priv, iqk_mac_regs, priv->mac_backup);
+
+ /* Reload BB parameters */
+ rtl8xxxu_restore_regs(priv, iqk_bb_regs,
+ priv->bb_backup, RTL8XXXU_BB_REGS);
+
+ /* Restore RX initial gain */
+ rtl8xxxu_write32(priv, REG_FPGA0_XA_LSSI_PARM, 0x00032ed3);
+
+ /* Load 0xe30 IQC default value */
+ rtl8xxxu_write32(priv, REG_TX_IQK_TONE_A, 0x01008c00);
+ rtl8xxxu_write32(priv, REG_RX_IQK_TONE_A, 0x01008c00);
+}
+
+static void rtl8188eu_phy_iq_calibrate(struct rtl8xxxu_priv *priv)
+{
+ struct device *dev = &priv->udev->dev;
+ int result[4][8]; /* last is final result */
+ int i, candidate;
+ bool path_a_ok;
+ u32 reg_e94, reg_e9c, reg_ea4, reg_eac;
+ u32 reg_eb4, reg_ebc, reg_ec4, reg_ecc;
+ bool simu;
+
+ memset(result, 0, sizeof(result));
+ result[3][0] = 0x100;
+ result[3][2] = 0x100;
+ result[3][4] = 0x100;
+ result[3][6] = 0x100;
+
+ candidate = -1;
+
+ path_a_ok = false;
+
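+ /* Calibrate up to three times and pick a candidate from two runs that agree */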
+ for (i = 0; i < 3; i++) {
+ rtl8188eu_phy_iqcalibrate(priv, result, i);
+
+ if (i == 1) {
+ simu = rtl8xxxu_simularity_compare(priv,
+ result, 0, 1);
+ if (simu) {
+ candidate = 0;
+ break;
+ }
+ }
+
+ if (i == 2) {
+ simu = rtl8xxxu_simularity_compare(priv,
+ result, 0, 2);
+ if (simu) {
+ candidate = 0;
+ break;
+ }
+
+ simu = rtl8xxxu_simularity_compare(priv,
+ result, 1, 2);
+ if (simu)
+ candidate = 1;
+ else
+ candidate = 3;
+ }
+ }
+
+ if (candidate >= 0) {
+ reg_e94 = result[candidate][0];
+ priv->rege94 = reg_e94;
+ reg_e9c = result[candidate][1];
+ priv->rege9c = reg_e9c;
+ reg_ea4 = result[candidate][2];
+ reg_eac = result[candidate][3];
+ reg_eb4 = result[candidate][4];
+ priv->regeb4 = reg_eb4;
+ reg_ebc = result[candidate][5];
+ priv->regebc = reg_ebc;
+ reg_ec4 = result[candidate][6];
+ reg_ecc = result[candidate][7];
+ dev_dbg(dev, "%s: candidate is %x\n", __func__, candidate);
+ dev_dbg(dev,
+ "%s: e94=%x e9c=%x ea4=%x eac=%x eb4=%x ebc=%x ec4=%x ecc=%x\n",
+ __func__, reg_e94, reg_e9c, reg_ea4, reg_eac,
+ reg_eb4, reg_ebc, reg_ec4, reg_ecc);
+ path_a_ok = true;
+ } else {
+ reg_e94 = 0x100;
+ reg_eb4 = 0x100;
+ priv->rege94 = 0x100;
+ priv->regeb4 = 0x100;
+ reg_e9c = 0x0;
+ reg_ebc = 0x0;
+ priv->rege9c = 0x0;
+ priv->regebc = 0x0;
+ }
+
+ if (reg_e94 && candidate >= 0)
+ rtl8xxxu_fill_iqk_matrix_a(priv, path_a_ok, result,
+ candidate, (reg_ea4 == 0));
+
+ rtl8xxxu_save_regs(priv, rtl8xxxu_iqk_phy_iq_bb_reg,
+ priv->bb_recovery_backup, RTL8XXXU_BB_REGS);
+}
+
+static void rtl8188e_disabled_to_emu(struct rtl8xxxu_priv *priv)
+{
+ u16 val16;
+
+ val16 = rtl8xxxu_read16(priv, REG_APS_FSMCO);
+ val16 &= ~(APS_FSMCO_HW_SUSPEND | APS_FSMCO_PCIE);
+ rtl8xxxu_write16(priv, REG_APS_FSMCO, val16);
+}
+
+static int rtl8188e_emu_to_active(struct rtl8xxxu_priv *priv)
+{
+ u8 val8;
+ u32 val32;
+ u16 val16;
+ int count, ret = 0;
+
+ /* Wait till 0x04[17] = 1, i.e. power ready */
+ for (count = RTL8XXXU_MAX_REG_POLL; count; count--) {
+ val32 = rtl8xxxu_read32(priv, REG_APS_FSMCO);
+ if (val32 & BIT(17))
+ break;
+
+ udelay(10);
+ }
+
+ if (!count) {
+ ret = -EBUSY;
+ goto exit;
+ }
+
+ /* reset baseband */
+ val8 = rtl8xxxu_read8(priv, REG_SYS_FUNC);
+ val8 &= ~(SYS_FUNC_BBRSTB | SYS_FUNC_BB_GLB_RSTN);
+ rtl8xxxu_write8(priv, REG_SYS_FUNC, val8);
+
+ /* 0x24[23] = 2b'01, enable Schmitt trigger */
+ val32 = rtl8xxxu_read32(priv, REG_AFE_XTAL_CTRL);
+ val32 |= BIT(23);
+ rtl8xxxu_write32(priv, REG_AFE_XTAL_CTRL, val32);
+
+ /* 0x04[15] = 0, disable HWPDN (controlled by the driver) */
+ val16 = rtl8xxxu_read16(priv, REG_APS_FSMCO);
+ val16 &= ~APS_FSMCO_HW_POWERDOWN;
+ rtl8xxxu_write16(priv, REG_APS_FSMCO, val16);
+
+ /* 0x04[12:11] = 2b'00, disable WL suspend */
+ val16 = rtl8xxxu_read16(priv, REG_APS_FSMCO);
+ val16 &= ~(APS_FSMCO_HW_SUSPEND | APS_FSMCO_PCIE);
+ rtl8xxxu_write16(priv, REG_APS_FSMCO, val16);
+
+ /* Set APS_FSMCO_MAC_ENABLE, then poll until it clears */
+ val32 = rtl8xxxu_read32(priv, REG_APS_FSMCO);
+ val32 |= APS_FSMCO_MAC_ENABLE;
+ rtl8xxxu_write32(priv, REG_APS_FSMCO, val32);
+
+ for (count = RTL8XXXU_MAX_REG_POLL; count; count--) {
+ val32 = rtl8xxxu_read32(priv, REG_APS_FSMCO);
+ if ((val32 & APS_FSMCO_MAC_ENABLE) == 0) {
+ ret = 0;
+ break;
+ }
+ udelay(10);
+ }
+
+ if (!count) {
+ ret = -EBUSY;
+ goto exit;
+ }
+
+ /* LDO normal mode */
+ val8 = rtl8xxxu_read8(priv, REG_LPLDO_CTRL);
+ val8 &= ~BIT(4);
+ rtl8xxxu_write8(priv, REG_LPLDO_CTRL, val8);
+
+exit:
+ return ret;
+}
+
+static int rtl8188eu_active_to_emu(struct rtl8xxxu_priv *priv)
+{
+ u8 val8;
+
+ /* Turn off RF */
+ val8 = rtl8xxxu_read8(priv, REG_RF_CTRL);
+ val8 &= ~RF_ENABLE;
+ rtl8xxxu_write8(priv, REG_RF_CTRL, val8);
+
+ /* LDO Sleep mode */
+ val8 = rtl8xxxu_read8(priv, REG_LPLDO_CTRL);
+ val8 |= BIT(4);
+ rtl8xxxu_write8(priv, REG_LPLDO_CTRL, val8);
+
+ return 0;
+}
+
+static int rtl8188eu_emu_to_disabled(struct rtl8xxxu_priv *priv)
+{
+ u32 val32;
+ u16 val16;
+ u8 val8;
+
+ val32 = rtl8xxxu_read32(priv, REG_AFE_XTAL_CTRL);
+ val32 |= BIT(23);
+ rtl8xxxu_write32(priv, REG_AFE_XTAL_CTRL, val32);
+
+ val16 = rtl8xxxu_read16(priv, REG_APS_FSMCO);
+ val16 &= ~APS_FSMCO_PCIE;
+ val16 |= APS_FSMCO_HW_SUSPEND;
+ rtl8xxxu_write16(priv, REG_APS_FSMCO, val16);
+
+ rtl8xxxu_write8(priv, REG_APS_FSMCO + 3, 0x00);
+
+ val8 = rtl8xxxu_read8(priv, REG_GPIO_MUXCFG + 1);
+ val8 &= ~BIT(4);
+ rtl8xxxu_write8(priv, REG_GPIO_MUXCFG + 1, val8);
+
+ /* Set USB suspend enable local register 0xfe10[4]=1 */
+ val8 = rtl8xxxu_read8(priv, 0xfe10);
+ val8 |= BIT(4);
+ rtl8xxxu_write8(priv, 0xfe10, val8);
+
+ return 0;
+}
+
+static int rtl8188eu_active_to_lps(struct rtl8xxxu_priv *priv)
+{
+ struct device *dev = &priv->udev->dev;
+ u8 val8;
+ u16 val16;
+ u32 val32;
+ int retry, retval;
+
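+ /* Pause all TX queues, then poll for pending TX to drain */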
+ rtl8xxxu_write8(priv, REG_TXPAUSE, 0x7f);
+
+ retry = 100;
+ retval = -EBUSY;
+ /* Poll 32 bit wide REG_SCH_TX_CMD for 0 to ensure no TX is pending. */
+ do {
+ val32 = rtl8xxxu_read32(priv, REG_SCH_TX_CMD);
+ if (!val32) {
+ retval = 0;
+ break;
+ }
+ } while (retry--);
+
+ if (!retry) {
+ dev_warn(dev, "Failed to flush TX queue\n");
+ retval = -EBUSY;
+ goto out;
+ }
+
+ /* Disable CCK and OFDM, clock gated */
+ val8 = rtl8xxxu_read8(priv, REG_SYS_FUNC);
+ val8 &= ~SYS_FUNC_BBRSTB;
+ rtl8xxxu_write8(priv, REG_SYS_FUNC, val8);
+
+ udelay(2);
+
+ /* Reset MAC TRX */
+ val16 = rtl8xxxu_read16(priv, REG_CR);
+ val16 |= 0xff;
+ val16 &= ~(CR_MAC_TX_ENABLE | CR_MAC_RX_ENABLE | CR_SECURITY_ENABLE);
+ rtl8xxxu_write16(priv, REG_CR, val16);
+
+ val8 = rtl8xxxu_read8(priv, REG_DUAL_TSF_RST);
+ val8 |= DUAL_TSF_TX_OK;
+ rtl8xxxu_write8(priv, REG_DUAL_TSF_RST, val8);
+
+out:
+ return retval;
+}
+
+static int rtl8188eu_power_on(struct rtl8xxxu_priv *priv)
+{
+ u16 val16;
+ int ret;
+
+ rtl8188e_disabled_to_emu(priv);
+
+ ret = rtl8188e_emu_to_active(priv);
+ if (ret)
+ goto exit;
+
+ /*
+ * Enable MAC DMA/WMAC/SCHEDULE/SEC block
+ * Set CR bit10 to enable 32k calibration.
+ * We do not set CR_MAC_TX_ENABLE | CR_MAC_RX_ENABLE here
+ * due to a hardware bug in the 88E, requiring those to be
+ * set after REG_TRXFF_BNDY is set. Otherwise the RXFF boundary
+ * gets set to a larger buffer size than the real buffer size.
+ */
+ val16 = (CR_HCI_TXDMA_ENABLE | CR_HCI_RXDMA_ENABLE |
+ CR_TXDMA_ENABLE | CR_RXDMA_ENABLE |
+ CR_PROTOCOL_ENABLE | CR_SCHEDULE_ENABLE |
+ CR_SECURITY_ENABLE | CR_CALTIMER_ENABLE);
+ rtl8xxxu_write16(priv, REG_CR, val16);
+
+exit:
+ return ret;
+}
+
+static void rtl8188eu_power_off(struct rtl8xxxu_priv *priv)
+{
+ u8 val8;
+ u16 val16;
+
+ rtl8xxxu_flush_fifo(priv);
+
+ val8 = rtl8xxxu_read8(priv, REG_TX_REPORT_CTRL);
+ val8 &= ~TX_REPORT_CTRL_TIMER_ENABLE;
+ rtl8xxxu_write8(priv, REG_TX_REPORT_CTRL, val8);
+
+ /* Turn off RF */
+ rtl8xxxu_write8(priv, REG_RF_CTRL, 0x00);
+
+ rtl8188eu_active_to_lps(priv);
+
+ /* Reset Firmware if running in RAM */
+ if (rtl8xxxu_read8(priv, REG_MCU_FW_DL) & MCU_FW_RAM_SEL)
+ rtl8xxxu_firmware_self_reset(priv);
+
+ /* Reset MCU */
+ val16 = rtl8xxxu_read16(priv, REG_SYS_FUNC);
+ val16 &= ~SYS_FUNC_CPU_ENABLE;
+ rtl8xxxu_write16(priv, REG_SYS_FUNC, val16);
+
+ /* Reset MCU ready status */
+ rtl8xxxu_write8(priv, REG_MCU_FW_DL, 0x00);
+
+ /* 32K_CTRL looks to be very 8188e specific */
+ val8 = rtl8xxxu_read8(priv, REG_32K_CTRL);
+ val8 &= ~BIT(0);
+ rtl8xxxu_write8(priv, REG_32K_CTRL, val8);
+
+ rtl8188eu_active_to_emu(priv);
+ rtl8188eu_emu_to_disabled(priv);
+
+ /* Reset MCU IO Wrapper */
+ val8 = rtl8xxxu_read8(priv, REG_RSV_CTRL + 1);
+ val8 &= ~BIT(3);
+ rtl8xxxu_write8(priv, REG_RSV_CTRL + 1, val8);
+
+ val8 = rtl8xxxu_read8(priv, REG_RSV_CTRL + 1);
+ val8 |= BIT(3);
+ rtl8xxxu_write8(priv, REG_RSV_CTRL + 1, val8);
+
+ /* Vendor driver refers to GPIO_IN */
+ val8 = rtl8xxxu_read8(priv, REG_GPIO_PIN_CTRL);
+ /* Vendor driver refers to GPIO_OUT */
+ rtl8xxxu_write8(priv, REG_GPIO_PIN_CTRL + 1, val8);
+ rtl8xxxu_write8(priv, REG_GPIO_PIN_CTRL + 2, 0xff);
+
+ val8 = rtl8xxxu_read8(priv, REG_GPIO_IO_SEL);
+ rtl8xxxu_write8(priv, REG_GPIO_IO_SEL, val8 << 4);
+ val8 = rtl8xxxu_read8(priv, REG_GPIO_IO_SEL + 1);
+ rtl8xxxu_write8(priv, REG_GPIO_IO_SEL + 1, val8 | 0x0f);
+
+ /*
+ * Set LNA, TRSW, EX_PA Pin to output mode
+ * Referred to as REG_BB_PAD_CTRL in 8188eu vendor driver
+ */
+ rtl8xxxu_write32(priv, REG_PAD_CTRL1, 0x00080808);
+
+ rtl8xxxu_write8(priv, REG_RSV_CTRL, 0x00);
+
+ rtl8xxxu_write32(priv, REG_GPIO_MUXCFG, 0x00000000);
+}
+
+static void rtl8188e_enable_rf(struct rtl8xxxu_priv *priv)
+{
+ u32 val32;
+
+ rtl8xxxu_write8(priv, REG_RF_CTRL, RF_ENABLE | RF_RSTB | RF_SDMRSTB);
+
+ val32 = rtl8xxxu_read32(priv, REG_OFDM0_TRX_PATH_ENABLE);
+ val32 &= ~(OFDM_RF_PATH_RX_MASK | OFDM_RF_PATH_TX_MASK);
+ val32 |= OFDM_RF_PATH_RX_A | OFDM_RF_PATH_TX_A;
+ rtl8xxxu_write32(priv, REG_OFDM0_TRX_PATH_ENABLE, val32);
+
+ rtl8xxxu_write8(priv, REG_TXPAUSE, 0x00);
+}
+
+static void rtl8188e_disable_rf(struct rtl8xxxu_priv *priv)
+{
+ u32 val32;
+
+ val32 = rtl8xxxu_read32(priv, REG_OFDM0_TRX_PATH_ENABLE);
+ val32 &= ~OFDM_RF_PATH_TX_MASK;
+ rtl8xxxu_write32(priv, REG_OFDM0_TRX_PATH_ENABLE, val32);
+
+ /* Power down RF module */
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_AC, 0);
+
+ rtl8188eu_active_to_emu(priv);
+}
+
+static void rtl8188e_usb_quirks(struct rtl8xxxu_priv *priv)
+{
+ u16 val16;
+
+ /*
+ * Technically this is not a USB quirk, but a chip quirk.
+ * This has to be done after REG_TRXFF_BNDY is set, see
+ * rtl8188eu_power_on() for details.
+ */
+ val16 = rtl8xxxu_read16(priv, REG_CR);
+ val16 |= (CR_MAC_TX_ENABLE | CR_MAC_RX_ENABLE);
+ rtl8xxxu_write16(priv, REG_CR, val16);
+
+ rtl8xxxu_gen2_usb_quirks(priv);
+
+ /* Pre-TX enable WEP/TKIP security */
+ rtl8xxxu_write8(priv, REG_EARLY_MODE_CONTROL_8188E + 3, 0x01);
+}
+
+static s8 rtl8188e_cck_rssi(struct rtl8xxxu_priv *priv, u8 cck_agc_rpt)
+{
+ /* only use lna 0/1/2/3/7 */
+ static const s8 lna_gain_table_0[8] = {17, -1, -13, -29, -32, -35, -38, -41};
+ /* only use lna 3/7 */
+ static const s8 lna_gain_table_1[8] = {29, 20, 12, 3, -6, -15, -24, -33};
+
+ s8 rx_pwr_all = 0x00;
+ u8 vga_idx, lna_idx;
+ s8 lna_gain = 0;
+
+ lna_idx = u8_get_bits(cck_agc_rpt, CCK_AGC_RPT_LNA_IDX_MASK);
+ vga_idx = u8_get_bits(cck_agc_rpt, CCK_AGC_RPT_VGA_IDX_MASK);
+
+ if (priv->chip_cut >= 8) /* cut I (SMIC) */
+ lna_gain = lna_gain_table_0[lna_idx];
+ else /* TSMC */
+ lna_gain = lna_gain_table_1[lna_idx];
+
+ rx_pwr_all = lna_gain - (2 * vga_idx);
+
+ return rx_pwr_all;
+}
+
+static int rtl8188eu_led_brightness_set(struct led_classdev *led_cdev,
+ enum led_brightness brightness)
+{
+ struct rtl8xxxu_priv *priv = container_of(led_cdev,
+ struct rtl8xxxu_priv,
+ led_cdev);
+ u8 ledcfg = rtl8xxxu_read8(priv, REG_LEDCFG2);
+
+ if (brightness == LED_OFF) {
+ ledcfg &= ~LEDCFG2_HW_LED_CONTROL;
+ ledcfg |= LEDCFG2_SW_LED_CONTROL | LEDCFG2_SW_LED_DISABLE;
+ } else if (brightness == LED_ON) {
+ ledcfg &= ~(LEDCFG2_HW_LED_CONTROL | LEDCFG2_SW_LED_DISABLE);
+ ledcfg |= LEDCFG2_SW_LED_CONTROL;
+ } else if (brightness == RTL8XXXU_HW_LED_CONTROL) {
+ ledcfg &= ~LEDCFG2_SW_LED_DISABLE;
+ ledcfg |= LEDCFG2_HW_LED_CONTROL | LEDCFG2_HW_LED_ENABLE;
+ }
+
+ rtl8xxxu_write8(priv, REG_LEDCFG2, ledcfg);
+
+ return 0;
+}
+
+static void rtl8188e_set_tx_rpt_timing(struct rtl8xxxu_ra_info *ra, u8 timing)
+{
+ u8 idx;
+
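+ /* Locate the current report interval in the timing table */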
+ for (idx = 0; idx < 5; idx++)
+ if (dynamic_tx_rpt_timing[idx] == ra->rpt_time)
+ break;
+
+ if (timing == DEFAULT_TIMING) {
+ idx = 0; /* 200ms */
+ } else if (timing == INCREASE_TIMING) {
+ if (idx < 5)
+ idx++;
+ } else if (timing == DECREASE_TIMING) {
+ if (idx > 0)
+ idx--;
+ }
+
+ ra->rpt_time = dynamic_tx_rpt_timing[idx];
+}
+
+static void rtl8188e_rate_down(struct rtl8xxxu_ra_info *ra)
+{
+ u8 rate_id = ra->pre_rate;
+ u8 lowest_rate = ra->lowest_rate;
+ u8 highest_rate = ra->highest_rate;
+ s8 i;
+
+ if (rate_id > highest_rate) {
+ rate_id = highest_rate;
+ } else if (ra->rate_sgi) {
+ ra->rate_sgi = 0;
+ } else if (rate_id > lowest_rate) {
+ if (rate_id > 0) {
+ for (i = rate_id - 1; i >= lowest_rate; i--) {
+ if (ra->ra_use_rate & BIT(i)) {
+ rate_id = i;
+ goto rate_down_finish;
+ }
+ }
+ }
+ } else if (rate_id <= lowest_rate) {
+ rate_id = lowest_rate;
+ }
+
+rate_down_finish:
+ if (ra->ra_waiting_counter == 1) {
+ ra->ra_waiting_counter++;
+ ra->ra_pending_counter++;
+ } else if (ra->ra_waiting_counter > 1) {
+ ra->ra_waiting_counter = 0;
+ ra->ra_pending_counter = 0;
+ }
+
+ if (ra->ra_pending_counter >= 4)
+ ra->ra_pending_counter = 4;
+
+ ra->ra_drop_after_down = 1;
+
+ ra->decision_rate = rate_id;
+
+ rtl8188e_set_tx_rpt_timing(ra, DECREASE_TIMING);
+}
+
+static void rtl8188e_rate_up(struct rtl8xxxu_ra_info *ra)
+{
+ u8 rate_id = ra->pre_rate;
+ u8 highest_rate = ra->highest_rate;
+ u8 i;
+
+ if (ra->ra_waiting_counter == 1) {
+ ra->ra_waiting_counter = 0;
+ ra->ra_pending_counter = 0;
+ } else if (ra->ra_waiting_counter > 1) {
+ ra->pre_rssi_sta_ra = ra->rssi_sta_ra;
+ goto rate_up_finish;
+ }
+
+ rtl8188e_set_tx_rpt_timing(ra, DEFAULT_TIMING);
+
+ if (rate_id < highest_rate) {
+ for (i = rate_id + 1; i <= highest_rate; i++) {
+ if (ra->ra_use_rate & BIT(i)) {
+ rate_id = i;
+ goto rate_up_finish;
+ }
+ }
+ } else if (rate_id == highest_rate) {
+ if (ra->sgi_enable && !ra->rate_sgi)
+ ra->rate_sgi = 1;
+ else if (!ra->sgi_enable)
+ ra->rate_sgi = 0;
+ } else { /* rate_id > ra->highest_rate */
+ rate_id = highest_rate;
+ }
+
+rate_up_finish:
+ if (ra->ra_waiting_counter == (4 + pending_for_rate_up_fail[ra->ra_pending_counter]))
+ ra->ra_waiting_counter = 0;
+ else
+ ra->ra_waiting_counter++;
+
+ ra->decision_rate = rate_id;
+}
+
+static void rtl8188e_reset_ra_counter(struct rtl8xxxu_ra_info *ra)
+{
+ u8 rate_id = ra->decision_rate;
+
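+ /* Re-centre both counters midway between the rate's up/down thresholds */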
+ ra->nsc_up = (n_threshold_high[rate_id] + n_threshold_low[rate_id]) >> 1;
+ ra->nsc_down = (n_threshold_high[rate_id] + n_threshold_low[rate_id]) >> 1;
+}
+
+static void rtl8188e_rate_decision(struct rtl8xxxu_ra_info *ra)
+{
+ struct rtl8xxxu_priv *priv = container_of(ra, struct rtl8xxxu_priv, ra_info);
+ const u8 *retry_penalty_idx_0;
+ const u8 *retry_penalty_idx_1;
+ const u8 *retry_penalty_up_idx;
+ u8 rate_id, penalty_id1, penalty_id2;
+ int i;
+
+ if (ra->total == 0)
+ return;
+
+ if (ra->ra_drop_after_down) {
+ ra->ra_drop_after_down--;
+
+ rtl8188e_reset_ra_counter(ra);
+
+ return;
+ }
+
+ if (priv->chip_cut == 8) { /* cut I */
+ retry_penalty_idx_0 = retry_penalty_idx_cut_i[0];
+ retry_penalty_idx_1 = retry_penalty_idx_cut_i[1];
+ retry_penalty_up_idx = retry_penalty_up_idx_cut_i;
+ } else {
+ retry_penalty_idx_0 = retry_penalty_idx_normal[0];
+ retry_penalty_idx_1 = retry_penalty_idx_normal[1];
+ retry_penalty_up_idx = retry_penalty_up_idx_normal;
+ }
+
+ if (ra->rssi_sta_ra < (ra->pre_rssi_sta_ra - 3) ||
+ ra->rssi_sta_ra > (ra->pre_rssi_sta_ra + 3)) {
+ ra->pre_rssi_sta_ra = ra->rssi_sta_ra;
+ ra->ra_waiting_counter = 0;
+ ra->ra_pending_counter = 0;
+ }
+
+ /* Start RA decision */
+ if (ra->pre_rate > ra->highest_rate)
+ rate_id = ra->highest_rate;
+ else
+ rate_id = ra->pre_rate;
+
+ /* rate down */
+ if (ra->rssi_sta_ra > rssi_threshold[rate_id])
+ penalty_id1 = retry_penalty_idx_0[rate_id];
+ else
+ penalty_id1 = retry_penalty_idx_1[rate_id];
+
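+ /* Accumulate penalty-weighted retry counts, then decay by the total */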
+ for (i = 0; i < 5; i++)
+ ra->nsc_down += ra->retry[i] * retry_penalty[penalty_id1][i];
+
+ if (ra->nsc_down > (ra->total * retry_penalty[penalty_id1][5]))
+ ra->nsc_down -= ra->total * retry_penalty[penalty_id1][5];
+ else
+ ra->nsc_down = 0;
+
+ /* rate up */
+ penalty_id2 = retry_penalty_up_idx[rate_id];
+
+ for (i = 0; i < 5; i++)
+ ra->nsc_up += ra->retry[i] * retry_penalty[penalty_id2][i];
+
+ if (ra->nsc_up > (ra->total * retry_penalty[penalty_id2][5]))
+ ra->nsc_up -= ra->total * retry_penalty[penalty_id2][5];
+ else
+ ra->nsc_up = 0;
+
+ if (ra->nsc_down < n_threshold_low[rate_id] ||
+ ra->drop > dropping_necessary[rate_id]) {
+ rtl8188e_rate_down(ra);
+
+ rtl8xxxu_update_ra_report(&priv->ra_report, ra->decision_rate,
+ ra->rate_sgi, priv->ra_report.txrate.bw);
+ } else if (ra->nsc_up > n_threshold_high[rate_id]) {
+ rtl8188e_rate_up(ra);
+
+ rtl8xxxu_update_ra_report(&priv->ra_report, ra->decision_rate,
+ ra->rate_sgi, priv->ra_report.txrate.bw);
+ }
+
+ if (ra->decision_rate == ra->pre_rate)
+ ra->dynamic_tx_rpt_timing_counter++;
+ else
+ ra->dynamic_tx_rpt_timing_counter = 0;
+
+ if (ra->dynamic_tx_rpt_timing_counter >= 4) {
+ /* Rate didn't change for 4 reports, extend RPT timing */
+ rtl8188e_set_tx_rpt_timing(ra, INCREASE_TIMING);
+ ra->dynamic_tx_rpt_timing_counter = 0;
+ }
+
+ ra->pre_rate = ra->decision_rate;
+
+ rtl8188e_reset_ra_counter(ra);
+}
+
+static void rtl8188e_power_training_try_state(struct rtl8xxxu_ra_info *ra)
+{
+ ra->pt_try_state = 0;
+ switch (ra->pt_mode_ss) {
+ case 3:
+ if (ra->decision_rate >= DESC_RATE_MCS13)
+ ra->pt_try_state = 1;
+ break;
+ case 2:
+ if (ra->decision_rate >= DESC_RATE_MCS5)
+ ra->pt_try_state = 1;
+ break;
+ case 1:
+ if (ra->decision_rate >= DESC_RATE_48M)
+ ra->pt_try_state = 1;
+ break;
+ case 0:
+ if (ra->decision_rate >= DESC_RATE_11M)
+ ra->pt_try_state = 1;
+ break;
+ default:
+ break;
+ }
+
+ if (ra->rssi_sta_ra < 48) {
+ ra->pt_stage = 0;
+ } else if (ra->pt_try_state == 1) {
+ if ((ra->pt_stop_count >= 10) ||
+ (ra->pt_pre_rssi > ra->rssi_sta_ra + 5) ||
+ (ra->pt_pre_rssi < ra->rssi_sta_ra - 5) ||
+ (ra->decision_rate != ra->pt_pre_rate)) {
+ if (ra->pt_stage == 0)
+ ra->pt_stage = 1;
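+ /* Dispatch on the report type carried in the RX descriptor */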
+ else if (ra->pt_stage == 1)
+ ra->pt_stage = 3;
+ else
+ ra->pt_stage = 5;
+
+ ra->pt_pre_rssi = ra->rssi_sta_ra;
+ ra->pt_stop_count = 0;
+ } else {
+ ra->ra_stage = 0;
+ ra->pt_stop_count++;
+ }
+ } else {
+ ra->pt_stage = 0;
+ ra->ra_stage = 0;
+ }
+
+ ra->pt_pre_rate = ra->decision_rate;
+
+ /* TODO: implement the "false alarm" statistics for this */
+ /* Disable power training in a noisy environment */
+ /* if (p_dm_odm->is_disable_power_training) { */
+ if (1) {
+ ra->pt_stage = 0;
+ ra->ra_stage = 0;
+ ra->pt_stop_count = 0;
+ }
+}
+
+static void rtl8188e_power_training_decision(struct rtl8xxxu_ra_info *ra)
+{
+ u8 temp_stage;
+ u32 numsc;
+ u32 num_total;
+ u8 stage_id;
+ u8 j;
+
+ numsc = 0;
+ num_total = ra->total * pt_penalty[5];
+ for (j = 0; j <= 4; j++) {
+ numsc += ra->retry[j] * pt_penalty[j];
+
+ if (numsc > num_total)
+ break;
+ }
+
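+ /* j is the retry level at which the weighted retries crossed the budget */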
+ j >>= 1;
+ temp_stage = (ra->pt_stage + 1) >> 1;
+ if (temp_stage > j)
+ stage_id = temp_stage - j;
+ else
+ stage_id = 0;
+
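+ /* Keep 3/4 of the old smoothing factor, add the new stage weight, cap at 192 */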
+ ra->pt_smooth_factor = (ra->pt_smooth_factor >> 1) +
+ (ra->pt_smooth_factor >> 2) +
+ stage_id * 16 + 2;
+ if (ra->pt_smooth_factor > 192)
+ ra->pt_smooth_factor = 192;
+ stage_id = ra->pt_smooth_factor >> 6;
+ temp_stage = stage_id * 2;
+ if (temp_stage != 0)
+ temp_stage--;
+ if (ra->drop > 3)
+ temp_stage = 0;
+ ra->pt_stage = temp_stage;
+}
+
+void rtl8188e_handle_ra_tx_report2(struct rtl8xxxu_priv *priv, struct sk_buff *skb)
+{
+ u32 *_rx_desc = (u32 *)(skb->data - sizeof(struct rtl8xxxu_rxdesc16));
+ struct rtl8xxxu_rxdesc16 *rx_desc = (struct rtl8xxxu_rxdesc16 *)_rx_desc;
+ struct device *dev = &priv->udev->dev;
+ struct rtl8xxxu_ra_info *ra = &priv->ra_info;
+ u32 tx_rpt_len = rx_desc->pktlen & 0x3ff;
+ u32 items = tx_rpt_len / TX_RPT2_ITEM_SIZE;
+ u64 macid_valid = ((u64)_rx_desc[5] << 32) | _rx_desc[4];
+ u32 macid;
+ u8 *rpt = skb->data;
+ bool valid;
+ u16 min_rpt_time = 0x927c;
+
+ dev_dbg(dev, "%s: len: %d items: %d\n", __func__, tx_rpt_len, items);
+
+ for (macid = 0; macid < items; macid++) {
+ valid = false;
+
+ if (macid < 64)
+ valid = macid_valid & BIT(macid);
+
+ if (valid) {
+ ra->retry[0] = le16_to_cpu(*(__le16 *)rpt);
+ ra->retry[1] = rpt[2];
+ ra->retry[2] = rpt[3];
+ ra->retry[3] = rpt[4];
+ ra->retry[4] = rpt[5];
+ ra->drop = rpt[6];
+ ra->total = ra->retry[0] + ra->retry[1] + ra->retry[2] +
+ ra->retry[3] + ra->retry[4] + ra->drop;
+
+ if (ra->total > 0) {
+ if (ra->ra_stage < 5)
+ rtl8188e_rate_decision(ra);
+ else if (ra->ra_stage == 5)
+ rtl8188e_power_training_try_state(ra);
+ else /* ra->ra_stage == 6 */
+ rtl8188e_power_training_decision(ra);
+
+ if (ra->ra_stage <= 5)
+ ra->ra_stage++;
+ else
+ ra->ra_stage = 0;
+ }
+ } else if (macid == 0) {
+ dev_warn(dev, "%s: TX report item 0 not valid\n", __func__);
+ }
+
+ dev_dbg(dev, "%s: valid: %d retry: %d %d %d %d %d drop: %d\n",
+ __func__, valid,
+ ra->retry[0], ra->retry[1], ra->retry[2],
+ ra->retry[3], ra->retry[4], ra->drop);
+
+ if (min_rpt_time > ra->rpt_time)
+ min_rpt_time = ra->rpt_time;
+
+ rpt += TX_RPT2_ITEM_SIZE;
+
+ /*
+ * We only use macid 0, so only the first item is relevant.
+ * AP mode will use more of them if it's ever implemented.
+ */
+ break;
+ }
+
+ if (min_rpt_time != ra->pre_min_rpt_time) {
+ rtl8xxxu_write16(priv, REG_TX_REPORT_TIME, min_rpt_time);
+ ra->pre_min_rpt_time = min_rpt_time;
+ }
+}
+
+static void rtl8188e_arfb_refresh(struct rtl8xxxu_ra_info *ra)
+{
+ s8 i;
+
+ ra->ra_use_rate = ra->rate_mask;
+
+ /* Highest rate */
+ if (ra->ra_use_rate) {
+ for (i = RATESIZE; i >= 0; i--) {
+ if (ra->ra_use_rate & BIT(i)) {
+ ra->highest_rate = i;
+ break;
+ }
+ }
+ } else {
+ ra->highest_rate = 0;
+ }
+
+ /* Lowest rate */
+ if (ra->ra_use_rate) {
+ for (i = 0; i < RATESIZE; i++) {
+ if (ra->ra_use_rate & BIT(i)) {
+ ra->lowest_rate = i;
+ break;
+ }
+ }
+ } else {
+ ra->lowest_rate = 0;
+ }
+
+ if (ra->highest_rate > DESC_RATE_MCS7)
+ ra->pt_mode_ss = 3;
+ else if (ra->highest_rate > DESC_RATE_54M)
+ ra->pt_mode_ss = 2;
+ else if (ra->highest_rate > DESC_RATE_11M)
+ ra->pt_mode_ss = 1;
+ else
+ ra->pt_mode_ss = 0;
+}
+
+static void
+rtl8188e_update_rate_mask(struct rtl8xxxu_priv *priv,
+ u32 ramask, u8 rateid, int sgi, int txbw_40mhz)
+{
+ struct rtl8xxxu_ra_info *ra = &priv->ra_info;
+
+ ra->rate_id = rateid;
+ ra->rate_mask = ramask;
+ ra->sgi_enable = sgi;
+
+ rtl8188e_arfb_refresh(ra);
+}
+
+static void rtl8188e_ra_set_rssi(struct rtl8xxxu_priv *priv, u8 macid, u8 rssi)
+{
+ priv->ra_info.rssi_sta_ra = rssi;
+}
+
+void rtl8188e_ra_info_init_all(struct rtl8xxxu_ra_info *ra)
+{
+ ra->decision_rate = DESC_RATE_MCS7;
+ ra->pre_rate = DESC_RATE_MCS7;
+ ra->highest_rate = DESC_RATE_MCS7;
+ ra->lowest_rate = 0;
+ ra->rate_id = 0;
+ ra->rate_mask = 0xfffff;
+ ra->rssi_sta_ra = 0;
+ ra->pre_rssi_sta_ra = 0;
+ ra->sgi_enable = 0;
+ ra->ra_use_rate = 0xfffff;
+ ra->nsc_down = (n_threshold_high[DESC_RATE_MCS7] + n_threshold_low[DESC_RATE_MCS7]) / 2;
+ ra->nsc_up = (n_threshold_high[DESC_RATE_MCS7] + n_threshold_low[DESC_RATE_MCS7]) / 2;
+ ra->rate_sgi = 0;
+ ra->rpt_time = 0x927c;
+ ra->drop = 0;
+ ra->retry[0] = 0;
+ ra->retry[1] = 0;
+ ra->retry[2] = 0;
+ ra->retry[3] = 0;
+ ra->retry[4] = 0;
+ ra->total = 0;
+ ra->ra_waiting_counter = 0;
+ ra->ra_pending_counter = 0;
+ ra->ra_drop_after_down = 0;
+
+ ra->pt_try_state = 0;
+ ra->pt_stage = 5;
+ ra->pt_smooth_factor = 192;
+ ra->pt_stop_count = 0;
+ ra->pt_pre_rate = 0;
+ ra->pt_pre_rssi = 0;
+ ra->pt_mode_ss = 0;
+ ra->ra_stage = 0;
+}
+
+struct rtl8xxxu_fileops rtl8188eu_fops = {
+ .identify_chip = rtl8188eu_identify_chip,
+ .parse_efuse = rtl8188eu_parse_efuse,
+ .load_firmware = rtl8188eu_load_firmware,
+ .power_on = rtl8188eu_power_on,
+ .power_off = rtl8188eu_power_off,
+ .reset_8051 = rtl8188eu_reset_8051,
+ .llt_init = rtl8xxxu_init_llt_table,
+ .init_phy_bb = rtl8188eu_init_phy_bb,
+ .init_phy_rf = rtl8188eu_init_phy_rf,
+ .phy_lc_calibrate = rtl8723a_phy_lc_calibrate,
+ .phy_iq_calibrate = rtl8188eu_phy_iq_calibrate,
+ .config_channel = rtl8188eu_config_channel,
+ .parse_rx_desc = rtl8xxxu_parse_rxdesc16,
+ .init_aggregation = rtl8188eu_init_aggregation,
+ .enable_rf = rtl8188e_enable_rf,
+ .disable_rf = rtl8188e_disable_rf,
+ .usb_quirks = rtl8188e_usb_quirks,
+ .set_tx_power = rtl8188f_set_tx_power,
+ .update_rate_mask = rtl8188e_update_rate_mask,
+ .report_connect = rtl8xxxu_gen2_report_connect,
+ .report_rssi = rtl8188e_ra_set_rssi,
+ .fill_txdesc = rtl8xxxu_fill_txdesc_v3,
+ .set_crystal_cap = rtl8188f_set_crystal_cap,
+ .cck_rssi = rtl8188e_cck_rssi,
+ .led_classdev_brightness_set = rtl8188eu_led_brightness_set,
+ .writeN_block_size = 128,
+ .rx_desc_size = sizeof(struct rtl8xxxu_rxdesc16),
+ .tx_desc_size = sizeof(struct rtl8xxxu_txdesc32),
+ .has_tx_report = 1,
+ .gen2_thermal_meter = 1,
+ .adda_1t_init = 0x0b1b25a0,
+ .adda_1t_path_on = 0x0bdb25a0,
+ /*
+ * Use 9K for 8188e normal chip
+ * Max RX buffer = 10K - max(TxReportSize(64*8), WOLPattern(16*24))
+ */
+ .trxff_boundary = 0x25ff,
+ .pbp_rx = PBP_PAGE_SIZE_128,
+ .pbp_tx = PBP_PAGE_SIZE_128,
+ .mactable = rtl8188e_mac_init_table,
+ .total_page_num = TX_TOTAL_PAGE_NUM_8188E,
+ .page_num_hi = TX_PAGE_NUM_HI_PQ_8188E,
+ .page_num_lo = TX_PAGE_NUM_LO_PQ_8188E,
+ .page_num_norm = TX_PAGE_NUM_NORM_PQ_8188E,
+ .last_llt_entry = 175,
+};
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8188f.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8188f.c
index 2c4f403ba68f..af6e2c8a5025 100644
--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8188f.c
+++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8188f.c
@@ -370,7 +370,7 @@ static void rtl8188f_channel_to_group(int channel, int *group, int *cck_group)
*cck_group = *group;
}
-static void
+void
rtl8188f_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40)
{
u32 val32, ofdm, mcs;
@@ -716,7 +716,6 @@ static void rtl8188fu_init_statistics(struct rtl8xxxu_priv *priv)
static int rtl8188fu_parse_efuse(struct rtl8xxxu_priv *priv)
{
struct rtl8188fu_efuse *efuse = &priv->efuse_wifi.efuse8188fu;
- int i;
if (efuse->rtl_id != cpu_to_le16(0x8129))
return -EINVAL;
@@ -738,22 +737,12 @@ static int rtl8188fu_parse_efuse(struct rtl8xxxu_priv *priv)
dev_info(&priv->udev->dev, "Vendor: %.7s\n", efuse->vendor_name);
dev_info(&priv->udev->dev, "Product: %.7s\n", efuse->device_name);
- if (rtl8xxxu_debug & RTL8XXXU_DEBUG_EFUSE) {
- unsigned char *raw = priv->efuse_wifi.raw;
-
- dev_info(&priv->udev->dev,
- "%s: dumping efuse (0x%02zx bytes):\n",
- __func__, sizeof(struct rtl8188fu_efuse));
- for (i = 0; i < sizeof(struct rtl8188fu_efuse); i += 8)
- dev_info(&priv->udev->dev, "%02x: %8ph\n", i, &raw[i]);
- }
-
return 0;
}
static int rtl8188fu_load_firmware(struct rtl8xxxu_priv *priv)
{
- char *fw_name;
+ const char *fw_name;
int ret;
fw_name = "rtlwifi/rtl8188fufw.bin";
@@ -1122,7 +1111,7 @@ static void rtl8188fu_phy_iqcalibrate(struct rtl8xxxu_priv *priv,
if (t == 0) {
val32 = rtl8xxxu_read32(priv, REG_FPGA0_XA_HSSI_PARM1);
- priv->pi_enabled = val32 & FPGA0_HSSI_PARM1_PI;
+ priv->pi_enabled = u32_get_bits(val32, FPGA0_HSSI_PARM1_PI);
}
/* save RF path */
@@ -1662,7 +1651,7 @@ static void rtl8188f_usb_quirks(struct rtl8xxxu_priv *priv)
#define XTAL1 GENMASK(22, 17)
#define XTAL0 GENMASK(16, 11)
-static void rtl8188f_set_crystal_cap(struct rtl8xxxu_priv *priv, u8 crystal_cap)
+void rtl8188f_set_crystal_cap(struct rtl8xxxu_priv *priv, u8 crystal_cap)
{
struct rtl8xxxu_cfo_tracking *cfo = &priv->cfo_tracking;
u32 val32;
@@ -1693,8 +1682,8 @@ static s8 rtl8188f_cck_rssi(struct rtl8xxxu_priv *priv, u8 cck_agc_rpt)
s8 rx_pwr_all = 0x00;
u8 vga_idx, lna_idx;
- lna_idx = (cck_agc_rpt & 0xE0) >> 5;
- vga_idx = cck_agc_rpt & 0x1F;
+ lna_idx = u8_get_bits(cck_agc_rpt, CCK_AGC_RPT_LNA_IDX_MASK);
+ vga_idx = u8_get_bits(cck_agc_rpt, CCK_AGC_RPT_VGA_IDX_MASK);
switch (lna_idx) {
case 7:
@@ -1743,6 +1732,7 @@ struct rtl8xxxu_fileops rtl8188fu_fops = {
.set_tx_power = rtl8188f_set_tx_power,
.update_rate_mask = rtl8xxxu_gen2_update_rate_mask,
.report_connect = rtl8xxxu_gen2_report_connect,
+ .report_rssi = rtl8xxxu_gen2_report_rssi,
.fill_txdesc = rtl8xxxu_fill_txdesc_v2,
.set_crystal_cap = rtl8188f_set_crystal_cap,
.cck_rssi = rtl8188f_cck_rssi,
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192c.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192c.c
index 3bef9ffc8b02..e61d65c3579b 100644
--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192c.c
+++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192c.c
@@ -386,7 +386,7 @@ out:
static int rtl8192cu_load_firmware(struct rtl8xxxu_priv *priv)
{
- char *fw_name;
+ const char *fw_name;
int ret;
if (!priv->vendor_umc)
@@ -404,7 +404,6 @@ static int rtl8192cu_load_firmware(struct rtl8xxxu_priv *priv)
static int rtl8192cu_parse_efuse(struct rtl8xxxu_priv *priv)
{
struct rtl8192cu_efuse *efuse = &priv->efuse_wifi.efuse8192;
- int i;
if (efuse->rtl_id != cpu_to_le16(0x8129))
return -EINVAL;
@@ -457,15 +456,6 @@ static int rtl8192cu_parse_efuse(struct rtl8xxxu_priv *priv)
priv->power_base = &rtl8188r_power_base;
}
- if (rtl8xxxu_debug & RTL8XXXU_DEBUG_EFUSE) {
- unsigned char *raw = priv->efuse_wifi.raw;
-
- dev_info(&priv->udev->dev,
- "%s: dumping efuse (0x%02zx bytes):\n",
- __func__, sizeof(struct rtl8192cu_efuse));
- for (i = 0; i < sizeof(struct rtl8192cu_efuse); i += 8)
- dev_info(&priv->udev->dev, "%02x: %8ph\n", i, &raw[i]);
- }
return 0;
}
@@ -619,6 +609,7 @@ struct rtl8xxxu_fileops rtl8192cu_fops = {
.set_tx_power = rtl8xxxu_gen1_set_tx_power,
.update_rate_mask = rtl8xxxu_update_rate_mask,
.report_connect = rtl8xxxu_gen1_report_connect,
+ .report_rssi = rtl8xxxu_gen1_report_rssi,
.fill_txdesc = rtl8xxxu_fill_txdesc_v1,
.cck_rssi = rtl8723a_cck_rssi,
.writeN_block_size = 128,
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
index a7d76693c02d..5cfc00237f42 100644
--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
+++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
@@ -704,21 +704,12 @@ static int rtl8192eu_parse_efuse(struct rtl8xxxu_priv *priv)
rtl8192eu_log_next_device_info(priv, "Product", efuse->device_info, &record_offset);
rtl8192eu_log_next_device_info(priv, "Serial", efuse->device_info, &record_offset);
- if (rtl8xxxu_debug & RTL8XXXU_DEBUG_EFUSE) {
- unsigned char *raw = priv->efuse_wifi.raw;
-
- dev_info(&priv->udev->dev,
- "%s: dumping efuse (0x%02zx bytes):\n",
- __func__, sizeof(struct rtl8192eu_efuse));
- for (i = 0; i < sizeof(struct rtl8192eu_efuse); i += 8)
- dev_info(&priv->udev->dev, "%02x: %8ph\n", i, &raw[i]);
- }
return 0;
}
static int rtl8192eu_load_firmware(struct rtl8xxxu_priv *priv)
{
- char *fw_name;
+ const char *fw_name;
int ret;
fw_name = "rtlwifi/rtl8192eu_nic.bin";
@@ -1744,6 +1735,11 @@ static void rtl8192e_enable_rf(struct rtl8xxxu_priv *priv)
val8 = rtl8xxxu_read8(priv, REG_PAD_CTRL1);
val8 &= ~BIT(0);
rtl8xxxu_write8(priv, REG_PAD_CTRL1, val8);
+
+ /*
+ * Fix transmission failure of rtl8192e.
+ */
+ rtl8xxxu_write8(priv, REG_TXPAUSE, 0x00);
}
static s8 rtl8192e_cck_rssi(struct rtl8xxxu_priv *priv, u8 cck_agc_rpt)
@@ -1755,8 +1751,8 @@ static s8 rtl8192e_cck_rssi(struct rtl8xxxu_priv *priv, u8 cck_agc_rpt)
u8 vga_idx, lna_idx;
s8 lna_gain = 0;
- lna_idx = (cck_agc_rpt & 0xE0) >> 5;
- vga_idx = cck_agc_rpt & 0x1F;
+ lna_idx = u8_get_bits(cck_agc_rpt, CCK_AGC_RPT_LNA_IDX_MASK);
+ vga_idx = u8_get_bits(cck_agc_rpt, CCK_AGC_RPT_VGA_IDX_MASK);
if (priv->cck_agc_report_type == 0)
lna_gain = lna_gain_table_0[lna_idx];
@@ -1768,6 +1764,29 @@ static s8 rtl8192e_cck_rssi(struct rtl8xxxu_priv *priv, u8 cck_agc_rpt)
return rx_pwr_all;
}
+static int rtl8192eu_led_brightness_set(struct led_classdev *led_cdev,
+ enum led_brightness brightness)
+{
+ struct rtl8xxxu_priv *priv = container_of(led_cdev,
+ struct rtl8xxxu_priv,
+ led_cdev);
+ u8 ledcfg = rtl8xxxu_read8(priv, REG_LEDCFG1);
+
+ if (brightness == LED_OFF) {
+ ledcfg &= ~LEDCFG1_HW_LED_CONTROL;
+ ledcfg |= LEDCFG1_LED_DISABLE;
+ } else if (brightness == LED_ON) {
+ ledcfg &= ~(LEDCFG1_HW_LED_CONTROL | LEDCFG1_LED_DISABLE);
+ } else if (brightness == RTL8XXXU_HW_LED_CONTROL) {
+ ledcfg &= ~LEDCFG1_LED_DISABLE;
+ ledcfg |= LEDCFG1_HW_LED_CONTROL;
+ }
+
+ rtl8xxxu_write8(priv, REG_LEDCFG1, ledcfg);
+
+ return 0;
+}
+
struct rtl8xxxu_fileops rtl8192eu_fops = {
.identify_chip = rtl8192eu_identify_chip,
.parse_efuse = rtl8192eu_parse_efuse,
@@ -1788,9 +1807,11 @@ struct rtl8xxxu_fileops rtl8192eu_fops = {
.set_tx_power = rtl8192e_set_tx_power,
.update_rate_mask = rtl8xxxu_gen2_update_rate_mask,
.report_connect = rtl8xxxu_gen2_report_connect,
+ .report_rssi = rtl8xxxu_gen2_report_rssi,
.fill_txdesc = rtl8xxxu_fill_txdesc_v2,
.set_crystal_cap = rtl8723a_set_crystal_cap,
.cck_rssi = rtl8192e_cck_rssi,
+ .led_classdev_brightness_set = rtl8192eu_led_brightness_set,
.writeN_block_size = 128,
.tx_desc_size = sizeof(struct rtl8xxxu_txdesc40),
.rx_desc_size = sizeof(struct rtl8xxxu_rxdesc24),
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723a.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723a.c
index 707ac48ecc83..5e7b58d395ba 100644
--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723a.c
+++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723a.c
@@ -231,7 +231,7 @@ static int rtl8723au_parse_efuse(struct rtl8xxxu_priv *priv)
static int rtl8723au_load_firmware(struct rtl8xxxu_priv *priv)
{
- char *fw_name;
+ const char *fw_name;
int ret;
switch (priv->chip_cut) {
@@ -457,6 +457,30 @@ s8 rtl8723a_cck_rssi(struct rtl8xxxu_priv *priv, u8 cck_agc_rpt)
return rx_pwr_all;
}
+static int rtl8723au_led_brightness_set(struct led_classdev *led_cdev,
+ enum led_brightness brightness)
+{
+ struct rtl8xxxu_priv *priv = container_of(led_cdev,
+ struct rtl8xxxu_priv,
+ led_cdev);
+ u8 ledcfg = rtl8xxxu_read8(priv, REG_LEDCFG2);
+
+ if (brightness == LED_OFF) {
+ ledcfg &= ~LEDCFG2_HW_LED_CONTROL;
+ ledcfg |= LEDCFG2_SW_LED_CONTROL | LEDCFG2_SW_LED_DISABLE;
+ } else if (brightness == LED_ON) {
+ ledcfg &= ~(LEDCFG2_HW_LED_CONTROL | LEDCFG2_SW_LED_DISABLE);
+ ledcfg |= LEDCFG2_SW_LED_CONTROL;
+ } else if (brightness == RTL8XXXU_HW_LED_CONTROL) {
+ ledcfg &= ~LEDCFG2_SW_LED_DISABLE;
+ ledcfg |= LEDCFG2_HW_LED_CONTROL | LEDCFG2_HW_LED_ENABLE;
+ }
+
+ rtl8xxxu_write8(priv, REG_LEDCFG2, ledcfg);
+
+ return 0;
+}
+
struct rtl8xxxu_fileops rtl8723au_fops = {
.identify_chip = rtl8723au_identify_chip,
.parse_efuse = rtl8723au_parse_efuse,
@@ -478,9 +502,11 @@ struct rtl8xxxu_fileops rtl8723au_fops = {
.set_tx_power = rtl8xxxu_gen1_set_tx_power,
.update_rate_mask = rtl8xxxu_update_rate_mask,
.report_connect = rtl8xxxu_gen1_report_connect,
+ .report_rssi = rtl8xxxu_gen1_report_rssi,
.fill_txdesc = rtl8xxxu_fill_txdesc_v1,
.set_crystal_cap = rtl8723a_set_crystal_cap,
.cck_rssi = rtl8723a_cck_rssi,
+ .led_classdev_brightness_set = rtl8723au_led_brightness_set,
.writeN_block_size = 1024,
.rx_agg_buf_size = 16000,
.tx_desc_size = sizeof(struct rtl8xxxu_txdesc32),
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723b.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723b.c
index a0ec895b61a4..21613d60dc22 100644
--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723b.c
+++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723b.c
@@ -497,23 +497,12 @@ static int rtl8723bu_parse_efuse(struct rtl8xxxu_priv *priv)
dev_info(&priv->udev->dev, "Vendor: %.7s\n", efuse->vendor_name);
dev_info(&priv->udev->dev, "Product: %.41s\n", efuse->device_name);
- if (rtl8xxxu_debug & RTL8XXXU_DEBUG_EFUSE) {
- int i;
- unsigned char *raw = priv->efuse_wifi.raw;
-
- dev_info(&priv->udev->dev,
- "%s: dumping efuse (0x%02zx bytes):\n",
- __func__, sizeof(struct rtl8723bu_efuse));
- for (i = 0; i < sizeof(struct rtl8723bu_efuse); i += 8)
- dev_info(&priv->udev->dev, "%02x: %8ph\n", i, &raw[i]);
- }
-
return 0;
}
static int rtl8723bu_load_firmware(struct rtl8xxxu_priv *priv)
{
- char *fw_name;
+ const char *fw_name;
int ret;
if (priv->enable_bluetooth)
@@ -1691,8 +1680,8 @@ static s8 rtl8723b_cck_rssi(struct rtl8xxxu_priv *priv, u8 cck_agc_rpt)
s8 rx_pwr_all = 0x00;
u8 vga_idx, lna_idx;
- lna_idx = (cck_agc_rpt & 0xE0) >> 5;
- vga_idx = cck_agc_rpt & 0x1F;
+ lna_idx = u8_get_bits(cck_agc_rpt, CCK_AGC_RPT_LNA_IDX_MASK);
+ vga_idx = u8_get_bits(cck_agc_rpt, CCK_AGC_RPT_VGA_IDX_MASK);
switch (lna_idx) {
case 6:
@@ -1738,6 +1727,7 @@ struct rtl8xxxu_fileops rtl8723bu_fops = {
.set_tx_power = rtl8723b_set_tx_power,
.update_rate_mask = rtl8xxxu_gen2_update_rate_mask,
.report_connect = rtl8xxxu_gen2_report_connect,
+ .report_rssi = rtl8xxxu_gen2_report_rssi,
.fill_txdesc = rtl8xxxu_fill_txdesc_v2,
.set_crystal_cap = rtl8723a_set_crystal_cap,
.cck_rssi = rtl8723b_cck_rssi,
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
index 3ed435401e57..620a5cc2bfdd 100644
--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
@@ -34,7 +34,7 @@
#define DRIVER_NAME "rtl8xxxu"
-int rtl8xxxu_debug = RTL8XXXU_DEBUG_EFUSE;
+int rtl8xxxu_debug;
static bool rtl8xxxu_ht40_2g;
static bool rtl8xxxu_dma_aggregation;
static int rtl8xxxu_dma_agg_timeout = -1;
@@ -46,6 +46,7 @@ MODULE_LICENSE("GPL");
MODULE_FIRMWARE("rtlwifi/rtl8723aufw_A.bin");
MODULE_FIRMWARE("rtlwifi/rtl8723aufw_B.bin");
MODULE_FIRMWARE("rtlwifi/rtl8723aufw_B_NoBT.bin");
+MODULE_FIRMWARE("rtlwifi/rtl8188eufw.bin");
MODULE_FIRMWARE("rtlwifi/rtl8192cufw_A.bin");
MODULE_FIRMWARE("rtlwifi/rtl8192cufw_B.bin");
MODULE_FIRMWARE("rtlwifi/rtl8192cufw_TMSC.bin");
@@ -1581,10 +1582,11 @@ static void rtl8xxxu_print_chipinfo(struct rtl8xxxu_priv *priv)
cut = 'A' + priv->chip_cut;
dev_info(dev,
- "RTL%s rev %c (%s) %iT%iR, TX queues %i, WiFi=%i, BT=%i, GPS=%i, HI PA=%i\n",
- priv->chip_name, cut, priv->chip_vendor, priv->tx_paths,
- priv->rx_paths, priv->ep_tx_count, priv->has_wifi,
- priv->has_bluetooth, priv->has_gps, priv->hi_pa);
+ "RTL%s rev %c (%s) romver %d, %iT%iR, TX queues %i, WiFi=%i, BT=%i, GPS=%i, HI PA=%i\n",
+ priv->chip_name, cut, priv->chip_vendor, priv->rom_rev,
+ priv->tx_paths, priv->rx_paths, priv->ep_tx_count,
+ priv->has_wifi, priv->has_bluetooth, priv->has_gps,
+ priv->hi_pa);
dev_info(dev, "RTL%s MAC: %pM\n", priv->chip_name, priv->mac_addr);
}
@@ -1813,6 +1815,16 @@ exit:
return ret;
}
+static void rtl8xxxu_dump_efuse(struct rtl8xxxu_priv *priv)
+{
+ dev_info(&priv->udev->dev,
+ "Dumping efuse for RTL%s (0x%02x bytes):\n",
+ priv->chip_name, EFUSE_MAP_LEN);
+
+ print_hex_dump(KERN_INFO, "", DUMP_PREFIX_OFFSET, 16, 1,
+ priv->efuse_wifi.raw, EFUSE_MAP_LEN, true);
+}
+
void rtl8xxxu_reset_8051(struct rtl8xxxu_priv *priv)
{
u8 val8;
@@ -1970,7 +1982,7 @@ fw_abort:
return ret;
}
-int rtl8xxxu_load_firmware(struct rtl8xxxu_priv *priv, char *fw_name)
+int rtl8xxxu_load_firmware(struct rtl8xxxu_priv *priv, const char *fw_name)
{
struct device *dev = &priv->udev->dev;
const struct firmware *fw;
@@ -2000,6 +2012,7 @@ int rtl8xxxu_load_firmware(struct rtl8xxxu_priv *priv, char *fw_name)
switch (signature & 0xfff0) {
case 0x92e0:
case 0x92c0:
+ case 0x88e0:
case 0x88c0:
case 0x5300:
case 0x2300:
@@ -2071,10 +2084,20 @@ rtl8xxxu_init_mac(struct rtl8xxxu_priv *priv)
}
}
- if (priv->rtl_chip != RTL8723B &&
- priv->rtl_chip != RTL8192E &&
- priv->rtl_chip != RTL8188F)
+ switch (priv->rtl_chip) {
+ case RTL8188C:
+ case RTL8188R:
+ case RTL8191C:
+ case RTL8192C:
+ case RTL8723A:
rtl8xxxu_write8(priv, REG_MAX_AGGR_NUM, 0x0a);
+ break;
+ case RTL8188E:
+ rtl8xxxu_write16(priv, REG_MAX_AGGR_NUM, 0x0707);
+ break;
+ default:
+ break;
+ }
return 0;
}
@@ -2373,11 +2396,16 @@ static int rtl8xxxu_llt_write(struct rtl8xxxu_priv *priv, u8 address, u8 data)
int rtl8xxxu_init_llt_table(struct rtl8xxxu_priv *priv)
{
int ret;
- int i;
+ int i, last_entry;
u8 last_tx_page;
last_tx_page = priv->fops->total_page_num;
+ if (priv->fops->last_llt_entry)
+ last_entry = priv->fops->last_llt_entry;
+ else
+ last_entry = 255;
+
for (i = 0; i < last_tx_page; i++) {
ret = rtl8xxxu_llt_write(priv, i, i + 1);
if (ret)
@@ -2389,14 +2417,14 @@ int rtl8xxxu_init_llt_table(struct rtl8xxxu_priv *priv)
goto exit;
/* Mark remaining pages as a ring buffer */
- for (i = last_tx_page + 1; i < 0xff; i++) {
+ for (i = last_tx_page + 1; i < last_entry; i++) {
ret = rtl8xxxu_llt_write(priv, i, (i + 1));
if (ret)
goto exit;
}
/* Let last entry point to the start entry of ring buffer */
- ret = rtl8xxxu_llt_write(priv, 0xff, last_tx_page + 1);
+ ret = rtl8xxxu_llt_write(priv, last_entry, last_tx_page + 1);
if (ret)
goto exit;
@@ -2704,8 +2732,8 @@ void rtl8xxxu_fill_iqk_matrix_b(struct rtl8xxxu_priv *priv, bool iqk_ok,
#define MAX_TOLERANCE 5
-static bool rtl8xxxu_simularity_compare(struct rtl8xxxu_priv *priv,
- int result[][8], int c1, int c2)
+bool rtl8xxxu_simularity_compare(struct rtl8xxxu_priv *priv,
+ int result[][8], int c1, int c2)
{
u32 i, j, diff, simubitmap, bound = 0;
int candidate[2] = {-1, -1}; /* for path A and path B */
@@ -3898,7 +3926,8 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
goto exit;
/* RFSW Control - clear bit 14 ?? */
- if (priv->rtl_chip != RTL8723B && priv->rtl_chip != RTL8192E)
+ if (priv->rtl_chip != RTL8723B && priv->rtl_chip != RTL8192E &&
+ priv->rtl_chip != RTL8188E)
rtl8xxxu_write32(priv, REG_FPGA0_TX_INFO, 0x00000003);
val32 = FPGA0_RF_TRSW | FPGA0_RF_TRSWB | FPGA0_RF_ANTSW |
@@ -3911,7 +3940,7 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
rtl8xxxu_write32(priv, REG_FPGA0_XAB_RF_SW_CTRL, val32);
/* 0x860[6:5]= 00 - why? - this sets antenna B */
- if (priv->rtl_chip != RTL8192E)
+ if (priv->rtl_chip != RTL8192E && priv->rtl_chip != RTL8188E)
rtl8xxxu_write32(priv, REG_FPGA0_XA_RF_INT_OE, 0x66f60210);
if (!macpower) {
@@ -3953,7 +3982,25 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
* Enable TX report and TX report timer for 8723bu/8188eu/...
*/
if (fops->has_tx_report) {
+ /*
+ * The RTL8188EU has two types of TX reports:
+ * rpt_sel=1:
+ * One report for one frame. We can use this for frames
+ * with IEEE80211_TX_CTL_REQ_TX_STATUS.
+ * rpt_sel=2:
+ * One report for many frames transmitted over a period
+ * of time. (This is what REG_TX_REPORT_TIME is for.) The
+ * report includes the number of frames transmitted
+ * successfully, and the number of unsuccessful
+ * transmissions. We use this for software rate control.
+ *
+ * Bit 0 of REG_TX_REPORT_CTRL is required for both types.
+ * Bit 1 (TX_REPORT_CTRL_TIMER_ENABLE) is required for
+ * type 2.
+ */
val8 = rtl8xxxu_read8(priv, REG_TX_REPORT_CTRL);
+ if (priv->rtl_chip == RTL8188E)
+ val8 |= BIT(0);
val8 |= TX_REPORT_CTRL_TIMER_ENABLE;
rtl8xxxu_write8(priv, REG_TX_REPORT_CTRL, val8);
/* Set MAX RPT MACID */
@@ -3979,6 +4026,15 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
} else if (priv->rtl_chip == RTL8188F) {
rtl8xxxu_write32(priv, REG_HISR0, 0xffffffff);
rtl8xxxu_write32(priv, REG_HISR1, 0xffffffff);
+ } else if (priv->rtl_chip == RTL8188E) {
+ rtl8xxxu_write32(priv, REG_HISR0, 0xffffffff);
+ val32 = IMR0_PSTIMEOUT | IMR0_TBDER | IMR0_CPWM | IMR0_CPWM2;
+ rtl8xxxu_write32(priv, REG_HIMR0, val32);
+ val32 = IMR1_TXERR | IMR1_RXERR | IMR1_TXFOVW | IMR1_RXFOVW;
+ rtl8xxxu_write32(priv, REG_HIMR1, val32);
+ val8 = rtl8xxxu_read8(priv, REG_USB_SPECIAL_OPTION);
+ val8 |= USB_SPEC_INT_BULK_SELECT;
+ rtl8xxxu_write8(priv, REG_USB_SPECIAL_OPTION, val8);
} else {
/*
* Enable all interrupts - not obvious USB needs to do this
@@ -4084,7 +4140,7 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
if (fops->init_aggregation)
fops->init_aggregation(priv);
- if (priv->rtl_chip == RTL8188F) {
+ if (priv->rtl_chip == RTL8188F || priv->rtl_chip == RTL8188E) {
rtl8xxxu_write16(priv, REG_PKT_VO_VI_LIFE_TIME, 0x0400); /* unit: 256us. 256ms */
rtl8xxxu_write16(priv, REG_PKT_BE_BK_LIFE_TIME, 0x0400); /* unit: 256us. 256ms */
}
@@ -4118,7 +4174,7 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
/* Disable BAR - not sure if this has any effect on USB */
rtl8xxxu_write32(priv, REG_BAR_MODE_CTRL, 0x0201ffff);
- if (priv->rtl_chip != RTL8188F)
+ if (priv->rtl_chip != RTL8188F && priv->rtl_chip != RTL8188E)
rtl8xxxu_write16(priv, REG_FAST_EDCA_CTRL, 0);
if (fops->init_statistics)
@@ -4136,9 +4192,9 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
* Reset USB mode switch setting
*/
rtl8xxxu_write8(priv, REG_ACLK_MON, 0x00);
- } else if (priv->rtl_chip == RTL8188F) {
+ } else if (priv->rtl_chip == RTL8188F || priv->rtl_chip == RTL8188E) {
/*
- * Init GPIO settings for 8188f
+ * Init GPIO settings for 8188f, 8188e
*/
val8 = rtl8xxxu_read8(priv, REG_GPIO_MUXCFG);
val8 &= ~GPIO_MUXCFG_IO_SEL_ENBT;
@@ -4184,7 +4240,7 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
val32 |= FPGA_RF_MODE_CCK;
rtl8xxxu_write32(priv, REG_FPGA0_RF_MODE, val32);
}
- } else if (priv->rtl_chip == RTL8192E) {
+ } else if (priv->rtl_chip == RTL8192E || priv->rtl_chip == RTL8188E) {
rtl8xxxu_write8(priv, REG_USB_HRPWM, 0x00);
}
@@ -4208,10 +4264,12 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
* should be equal or CCK RSSI report may be incorrect
*/
val32 = rtl8xxxu_read32(priv, REG_FPGA0_XA_HSSI_PARM2);
- priv->cck_agc_report_type = val32 & FPGA0_HSSI_PARM2_CCK_HIGH_PWR;
+ priv->cck_agc_report_type =
+ u32_get_bits(val32, FPGA0_HSSI_PARM2_CCK_HIGH_PWR);
val32 = rtl8xxxu_read32(priv, REG_FPGA0_XB_HSSI_PARM2);
- if (priv->cck_agc_report_type != (bool)(val32 & FPGA0_HSSI_PARM2_CCK_HIGH_PWR)) {
+ if (priv->cck_agc_report_type !=
+ u32_get_bits(val32, FPGA0_HSSI_PARM2_CCK_HIGH_PWR)) {
if (priv->cck_agc_report_type)
val32 |= FPGA0_HSSI_PARM2_CCK_HIGH_PWR;
else
@@ -4235,6 +4293,9 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
priv->cfo_tracking.crystal_cap = priv->default_crystal_cap;
}
+ if (priv->rtl_chip == RTL8188E)
+ rtl8188e_ra_info_init_all(&priv->ra_info);
+
exit:
return ret;
}
@@ -4401,6 +4462,37 @@ void rtl8xxxu_gen2_report_connect(struct rtl8xxxu_priv *priv,
rtl8xxxu_gen2_h2c_cmd(priv, &h2c, sizeof(h2c.media_status_rpt));
}
+void rtl8xxxu_gen1_report_rssi(struct rtl8xxxu_priv *priv, u8 macid, u8 rssi)
+{
+ struct h2c_cmd h2c;
+ const int h2c_size = 4;
+
+ memset(&h2c, 0, sizeof(struct h2c_cmd));
+
+ h2c.rssi_report.cmd = H2C_SET_RSSI;
+ h2c.rssi_report.macid = macid;
+ h2c.rssi_report.rssi = rssi;
+
+ rtl8xxxu_gen1_h2c_cmd(priv, &h2c, h2c_size);
+}
+
+void rtl8xxxu_gen2_report_rssi(struct rtl8xxxu_priv *priv, u8 macid, u8 rssi)
+{
+ struct h2c_cmd h2c;
+ int h2c_size = sizeof(h2c.rssi_report);
+
+ if (priv->rtl_chip == RTL8723B)
+ h2c_size = 4;
+
+ memset(&h2c, 0, sizeof(struct h2c_cmd));
+
+ h2c.rssi_report.cmd = H2C_8723B_RSSI_SETTING;
+ h2c.rssi_report.macid = macid;
+ h2c.rssi_report.rssi = rssi;
+
+ rtl8xxxu_gen2_h2c_cmd(priv, &h2c, h2c_size);
+}
+
void rtl8xxxu_gen1_init_aggregation(struct rtl8xxxu_priv *priv)
{
u8 agg_ctrl, usb_spec, page_thresh, timeout;
@@ -4598,8 +4690,8 @@ static void rtl8xxxu_set_aifs(struct rtl8xxxu_priv *priv, u8 slot_time)
}
}
-static void rtl8xxxu_update_ra_report(struct rtl8xxxu_ra_report *rarpt,
- u8 rate, u8 sgi, u8 bw)
+void rtl8xxxu_update_ra_report(struct rtl8xxxu_ra_report *rarpt,
+ u8 rate, u8 sgi, u8 bw)
{
u8 mcs, nss;
@@ -5069,6 +5161,98 @@ rtl8xxxu_fill_txdesc_v2(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
}
}
+/*
+ * Fill in v3 (gen1) specific TX descriptor bits.
+ * This format is a hybrid between the v1 and v2 formats, only seen
+ * on 8188eu devices so far.
+ */
+void
+rtl8xxxu_fill_txdesc_v3(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
+ struct ieee80211_tx_info *tx_info,
+ struct rtl8xxxu_txdesc32 *tx_desc, bool sgi,
+ bool short_preamble, bool ampdu_enable, u32 rts_rate)
+{
+ struct ieee80211_rate *tx_rate = ieee80211_get_tx_rate(hw, tx_info);
+ struct rtl8xxxu_priv *priv = hw->priv;
+ struct device *dev = &priv->udev->dev;
+ struct rtl8xxxu_ra_info *ra = &priv->ra_info;
+ u8 *qc = ieee80211_get_qos_ctl(hdr);
+ u8 tid = qc[0] & IEEE80211_QOS_CTL_TID_MASK;
+ u32 rate;
+ u16 rate_flags = tx_info->control.rates[0].flags;
+ u16 seq_number;
+
+ if (rate_flags & IEEE80211_TX_RC_MCS &&
+ !ieee80211_is_mgmt(hdr->frame_control))
+ rate = tx_info->control.rates[0].idx + DESC_RATE_MCS0;
+ else
+ rate = tx_rate->hw_value;
+
+ seq_number = IEEE80211_SEQ_TO_SN(le16_to_cpu(hdr->seq_ctrl));
+
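+ /* For data frames use the rate picked by the driver's rate control */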
+ if (ieee80211_is_data(hdr->frame_control)) {
+ rate = ra->decision_rate;
+ tx_desc->txdw5 = cpu_to_le32(rate);
+ tx_desc->txdw4 |= cpu_to_le32(TXDESC32_USE_DRIVER_RATE);
+ tx_desc->txdw4 |= le32_encode_bits(ra->pt_stage, TXDESC32_PT_STAGE_MASK);
+ /* Data/RTS rate FB limit */
+ tx_desc->txdw5 |= cpu_to_le32(0x0001ff00);
+ }
+
+ if (rtl8xxxu_debug & RTL8XXXU_DEBUG_TX)
+ dev_info(dev, "%s: TX rate: %d, pkt size %d\n",
+ __func__, rate, le16_to_cpu(tx_desc->pkt_size));
+
+ tx_desc->txdw3 = cpu_to_le32((u32)seq_number << TXDESC32_SEQ_SHIFT);
+
+ if (ampdu_enable && test_bit(tid, priv->tid_tx_operational))
+ tx_desc->txdw2 |= cpu_to_le32(TXDESC40_AGG_ENABLE);
+ else
+ tx_desc->txdw2 |= cpu_to_le32(TXDESC40_AGG_BREAK);
+
+ if (ieee80211_is_mgmt(hdr->frame_control)) {
+ tx_desc->txdw5 = cpu_to_le32(rate);
+ tx_desc->txdw4 |= cpu_to_le32(TXDESC32_USE_DRIVER_RATE);
+ tx_desc->txdw5 |= cpu_to_le32(6 << TXDESC32_RETRY_LIMIT_SHIFT);
+ tx_desc->txdw5 |= cpu_to_le32(TXDESC32_RETRY_LIMIT_ENABLE);
+ }
+
+ if (ieee80211_is_data_qos(hdr->frame_control)) {
+ tx_desc->txdw4 |= cpu_to_le32(TXDESC32_QOS);
+
+ if (conf_is_ht40(&hw->conf)) {
+ tx_desc->txdw4 |= cpu_to_le32(TXDESC_DATA_BW);
+
+ if (conf_is_ht40_minus(&hw->conf))
+ tx_desc->txdw4 |= cpu_to_le32(TXDESC_PRIME_CH_OFF_UPPER);
+ else
+ tx_desc->txdw4 |= cpu_to_le32(TXDESC_PRIME_CH_OFF_LOWER);
+ }
+ }
+
+ if (short_preamble)
+ tx_desc->txdw4 |= cpu_to_le32(TXDESC32_SHORT_PREAMBLE);
+
+ if (sgi && ra->rate_sgi)
+ tx_desc->txdw5 |= cpu_to_le32(TXDESC32_SHORT_GI);
+
+ /*
+ * rts_rate is zero if RTS/CTS or CTS to SELF are not enabled
+ */
+ tx_desc->txdw4 |= cpu_to_le32(rts_rate << TXDESC32_RTS_RATE_SHIFT);
+ if (ampdu_enable || (rate_flags & IEEE80211_TX_RC_USE_RTS_CTS)) {
+ tx_desc->txdw4 |= cpu_to_le32(TXDESC32_RTS_CTS_ENABLE);
+ tx_desc->txdw4 |= cpu_to_le32(TXDESC32_HW_RTS_ENABLE);
+ } else if (rate_flags & IEEE80211_TX_RC_USE_CTS_PROTECT) {
+ tx_desc->txdw4 |= cpu_to_le32(TXDESC32_CTS_SELF_ENABLE);
+ tx_desc->txdw4 |= cpu_to_le32(TXDESC32_HW_RTS_ENABLE);
+ }
+
+ tx_desc->txdw2 |= cpu_to_le32(TXDESC_ANTENNA_SELECT_A |
+ TXDESC_ANTENNA_SELECT_B);
+ tx_desc->txdw7 |= cpu_to_le16(TXDESC_ANTENNA_SELECT_C >> 16);
+}
+
static void rtl8xxxu_tx(struct ieee80211_hw *hw,
struct ieee80211_tx_control *control,
struct sk_buff *skb)
@@ -5274,7 +5458,7 @@ static void rtl8xxxu_queue_rx_urb(struct rtl8xxxu_priv *priv,
pending = priv->rx_urb_pending_count;
} else {
skb = (struct sk_buff *)rx_urb->urb.context;
- dev_kfree_skb(skb);
+ dev_kfree_skb_irq(skb);
usb_free_urb(&rx_urb->urb);
}
@@ -5550,9 +5734,6 @@ static void rtl8xxxu_c2hcmd_callback(struct work_struct *work)
btcoex = &priv->bt_coex;
rarpt = &priv->ra_report;
- if (priv->rf_paths > 1)
- goto out;
-
while (!skb_queue_empty(&priv->c2hcmd_queue)) {
skb = skb_dequeue(&priv->c2hcmd_queue);
@@ -5585,10 +5766,9 @@ static void rtl8xxxu_c2hcmd_callback(struct work_struct *work)
default:
break;
}
- }
-out:
- dev_kfree_skb(skb);
+ dev_kfree_skb(skb);
+ }
}
static void rtl8723bu_handle_c2h(struct rtl8xxxu_priv *priv,
@@ -5640,6 +5820,44 @@ static void rtl8723bu_handle_c2h(struct rtl8xxxu_priv *priv,
schedule_work(&priv->c2hcmd_work);
}
+static void rtl8188e_c2hcmd_callback(struct work_struct *work)
+{
+ struct rtl8xxxu_priv *priv = container_of(work, struct rtl8xxxu_priv, c2hcmd_work);
+ struct device *dev = &priv->udev->dev;
+ struct sk_buff *skb = NULL;
+ struct rtl8xxxu_rxdesc16 *rx_desc;
+
+ while (!skb_queue_empty(&priv->c2hcmd_queue)) {
+ skb = skb_dequeue(&priv->c2hcmd_queue);
+
+ rx_desc = (struct rtl8xxxu_rxdesc16 *)(skb->data - sizeof(struct rtl8xxxu_rxdesc16));
+
+ switch (rx_desc->rpt_sel) {
+ case 1:
+ dev_dbg(dev, "C2H TX report type 1\n");
+
+ break;
+ case 2:
+ dev_dbg(dev, "C2H TX report type 2\n");
+
+ rtl8188e_handle_ra_tx_report2(priv, skb);
+
+ break;
+ case 3:
+ dev_dbg(dev, "C2H USB interrupt report\n");
+
+ break;
+ default:
+ dev_warn(dev, "%s: rpt_sel should not be %d\n",
+ __func__, rx_desc->rpt_sel);
+
+ break;
+ }
+
+ dev_kfree_skb(skb);
+ }
+}
+
int rtl8xxxu_parse_rxdesc16(struct rtl8xxxu_priv *priv, struct sk_buff *skb)
{
struct ieee80211_hw *hw = priv->hw;
@@ -5695,38 +5913,45 @@ int rtl8xxxu_parse_rxdesc16(struct rtl8xxxu_priv *priv, struct sk_buff *skb)
skb_pull(skb, sizeof(struct rtl8xxxu_rxdesc16));
- phy_stats = (struct rtl8723au_phy_stats *)skb->data;
+ if (rx_desc->rpt_sel) {
+ skb_queue_tail(&priv->c2hcmd_queue, skb);
+ schedule_work(&priv->c2hcmd_work);
+ } else {
+ phy_stats = (struct rtl8723au_phy_stats *)skb->data;
- skb_pull(skb, drvinfo_sz + desc_shift);
+ skb_pull(skb, drvinfo_sz + desc_shift);
- skb_trim(skb, pkt_len);
+ skb_trim(skb, pkt_len);
- if (rx_desc->phy_stats)
- rtl8xxxu_rx_parse_phystats(priv, rx_status, phy_stats,
- rx_desc->rxmcs, (struct ieee80211_hdr *)skb->data,
- rx_desc->crc32 || rx_desc->icverr);
+ if (rx_desc->phy_stats)
+ rtl8xxxu_rx_parse_phystats(
+ priv, rx_status, phy_stats,
+ rx_desc->rxmcs,
+ (struct ieee80211_hdr *)skb->data,
+ rx_desc->crc32 || rx_desc->icverr);
- rx_status->mactime = rx_desc->tsfl;
- rx_status->flag |= RX_FLAG_MACTIME_START;
+ rx_status->mactime = rx_desc->tsfl;
+ rx_status->flag |= RX_FLAG_MACTIME_START;
- if (!rx_desc->swdec)
- rx_status->flag |= RX_FLAG_DECRYPTED;
- if (rx_desc->crc32)
- rx_status->flag |= RX_FLAG_FAILED_FCS_CRC;
- if (rx_desc->bw)
- rx_status->bw = RATE_INFO_BW_40;
+ if (!rx_desc->swdec)
+ rx_status->flag |= RX_FLAG_DECRYPTED;
+ if (rx_desc->crc32)
+ rx_status->flag |= RX_FLAG_FAILED_FCS_CRC;
+ if (rx_desc->bw)
+ rx_status->bw = RATE_INFO_BW_40;
- if (rx_desc->rxht) {
- rx_status->encoding = RX_ENC_HT;
- rx_status->rate_idx = rx_desc->rxmcs - DESC_RATE_MCS0;
- } else {
- rx_status->rate_idx = rx_desc->rxmcs;
- }
+ if (rx_desc->rxht) {
+ rx_status->encoding = RX_ENC_HT;
+ rx_status->rate_idx = rx_desc->rxmcs - DESC_RATE_MCS0;
+ } else {
+ rx_status->rate_idx = rx_desc->rxmcs;
+ }
- rx_status->freq = hw->conf.chandef.chan->center_freq;
- rx_status->band = hw->conf.chandef.chan->band;
+ rx_status->freq = hw->conf.chandef.chan->center_freq;
+ rx_status->band = hw->conf.chandef.chan->band;
- ieee80211_rx_irqsafe(hw, skb);
+ ieee80211_rx_irqsafe(hw, skb);
+ }
skb = next_skb;
if (skb)
@@ -5956,7 +6181,6 @@ static int rtl8xxxu_config(struct ieee80211_hw *hw, u32 changed)
{
struct rtl8xxxu_priv *priv = hw->priv;
struct device *dev = &priv->udev->dev;
- u16 val16;
int ret = 0, channel;
bool ht40;
@@ -5966,14 +6190,6 @@ static int rtl8xxxu_config(struct ieee80211_hw *hw, u32 changed)
__func__, hw->conf.chandef.chan->hw_value,
changed, hw->conf.chandef.width);
- if (changed & IEEE80211_CONF_CHANGE_RETRY_LIMITS) {
- val16 = ((hw->conf.long_frame_max_tx_count <<
- RETRY_LIMIT_LONG_SHIFT) & RETRY_LIMIT_LONG_MASK) |
- ((hw->conf.short_frame_max_tx_count <<
- RETRY_LIMIT_SHORT_SHIFT) & RETRY_LIMIT_SHORT_MASK);
- rtl8xxxu_write16(priv, REG_RETRY_LIMIT, val16);
- }
-
if (changed & IEEE80211_CONF_CHANGE_CHANNEL) {
switch (hw->conf.chandef.width) {
case NL80211_CHAN_WIDTH_20_NOHT:
@@ -6504,6 +6720,9 @@ static void rtl8xxxu_watchdog_callback(struct work_struct *work)
signal = ieee80211_ave_rssi(vif);
+ priv->fops->report_rssi(priv, 0,
+ rtl8xxxu_signal_to_snr(signal));
+
if (priv->fops->set_crystal_cap)
rtl8xxxu_track_cfo(priv);
@@ -6587,7 +6806,10 @@ exit:
rtl8xxxu_write16(priv, REG_RXFLTMAP2, 0xffff);
rtl8xxxu_write16(priv, REG_RXFLTMAP0, 0xffff);
- rtl8xxxu_write32(priv, REG_OFDM0_XA_AGC_CORE1, 0x6954341e);
+ if (priv->rtl_chip == RTL8188E)
+ rtl8xxxu_write32(priv, REG_OFDM0_XA_AGC_CORE1, 0x6955341e);
+ else
+ rtl8xxxu_write32(priv, REG_OFDM0_XA_AGC_CORE1, 0x6954341e);
return ret;
@@ -6733,6 +6955,40 @@ exit:
return ret;
}
+static void rtl8xxxu_init_led(struct rtl8xxxu_priv *priv)
+{
+ struct led_classdev *led = &priv->led_cdev;
+
+ if (!priv->fops->led_classdev_brightness_set)
+ return;
+
+ led->brightness_set_blocking = priv->fops->led_classdev_brightness_set;
+
+ snprintf(priv->led_name, sizeof(priv->led_name),
+ "rtl8xxxu-usb%s", dev_name(&priv->udev->dev));
+ led->name = priv->led_name;
+ led->max_brightness = RTL8XXXU_HW_LED_CONTROL;
+
+ if (led_classdev_register(&priv->udev->dev, led))
+ return;
+
+ priv->led_registered = true;
+
+ led->brightness = led->max_brightness;
+ priv->fops->led_classdev_brightness_set(led, led->brightness);
+}
+
+static void rtl8xxxu_deinit_led(struct rtl8xxxu_priv *priv)
+{
+ struct led_classdev *led = &priv->led_cdev;
+
+ if (!priv->led_registered)
+ return;
+
+ priv->fops->led_classdev_brightness_set(led, LED_OFF);
+ led_classdev_unregister(led);
+}
+
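A note on the LED hookup above: rtl8xxxu_init_led() takes the brightness_set_blocking hook from the per-chip fops and reuses max_brightness as an extra level (RTL8XXXU_HW_LED_CONTROL) meaning "let the hardware blink the LED". A hypothetical per-chip hook could look like the sketch below; the REG_LEDCFG2 writes are illustrative assumptions, not the chip's verified LED programming:

static int my_led_brightness_set(struct led_classdev *led_cdev,
				 enum led_brightness brightness)
{
	struct rtl8xxxu_priv *priv =
		container_of(led_cdev, struct rtl8xxxu_priv, led_cdev);

	if (brightness == LED_OFF)
		rtl8xxxu_write8(priv, REG_LEDCFG2, LEDCFG2_SW_LED_DISABLE);
	else if (brightness == RTL8XXXU_HW_LED_CONTROL)
		rtl8xxxu_write8(priv, REG_LEDCFG2, LEDCFG2_HW_LED_CONTROL);
	else
		rtl8xxxu_write8(priv, REG_LEDCFG2, 0); /* software "on" */

	return 0;
}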
static int rtl8xxxu_probe(struct usb_interface *interface,
const struct usb_device_id *id)
{
@@ -6754,6 +7010,7 @@ static int rtl8xxxu_probe(struct usb_interface *interface,
case 0x817f:
case 0x818b:
case 0xf179:
+ case 0x8179:
untested = 0;
break;
}
@@ -6809,7 +7066,6 @@ static int rtl8xxxu_probe(struct usb_interface *interface,
spin_lock_init(&priv->rx_urb_lock);
INIT_WORK(&priv->rx_urb_wq, rtl8xxxu_rx_urb_work);
INIT_DELAYED_WORK(&priv->ra_watchdog, rtl8xxxu_watchdog_callback);
- INIT_WORK(&priv->c2hcmd_work, rtl8xxxu_c2hcmd_callback);
skb_queue_head_init(&priv->c2hcmd_queue);
usb_set_intfdata(interface, hw);
@@ -6827,6 +7083,11 @@ static int rtl8xxxu_probe(struct usb_interface *interface,
hw->wiphy->available_antennas_tx = BIT(priv->tx_paths) - 1;
hw->wiphy->available_antennas_rx = BIT(priv->rx_paths) - 1;
+ if (priv->rtl_chip == RTL8188E)
+ INIT_WORK(&priv->c2hcmd_work, rtl8188e_c2hcmd_callback);
+ else
+ INIT_WORK(&priv->c2hcmd_work, rtl8xxxu_c2hcmd_callback);
+
ret = rtl8xxxu_read_efuse(priv);
if (ret) {
dev_err(&udev->dev, "Fatal - failed to read EFuse\n");
@@ -6839,6 +7100,9 @@ static int rtl8xxxu_probe(struct usb_interface *interface,
goto err_set_intfdata;
}
+ if (rtl8xxxu_debug & RTL8XXXU_DEBUG_EFUSE)
+ rtl8xxxu_dump_efuse(priv);
+
rtl8xxxu_print_chipinfo(priv);
ret = priv->fops->load_firmware(priv);
@@ -6887,8 +7151,10 @@ static int rtl8xxxu_probe(struct usb_interface *interface,
hw->extra_tx_headroom = priv->fops->tx_desc_size;
ieee80211_hw_set(hw, SIGNAL_DBM);
+
/*
- * The firmware handles rate control
+ * The firmware handles rate control, except for RTL8188EU,
+ * where we handle the rate control in the driver.
*/
ieee80211_hw_set(hw, HAS_RATE_CONTROL);
ieee80211_hw_set(hw, SUPPORT_FAST_XMIT);
@@ -6903,6 +7169,8 @@ static int rtl8xxxu_probe(struct usb_interface *interface,
goto err_set_intfdata;
}
+ rtl8xxxu_init_led(priv);
+
return 0;
err_set_intfdata:
@@ -6927,6 +7195,8 @@ static void rtl8xxxu_disconnect(struct usb_interface *interface)
hw = usb_get_intfdata(interface);
priv = hw->priv;
+ rtl8xxxu_deinit_led(priv);
+
ieee80211_unregister_hw(hw);
priv->fops->power_off(priv);
@@ -6973,6 +7243,50 @@ static const struct usb_device_id dev_table[] = {
/* RTL8188FU */
{USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0xf179, 0xff, 0xff, 0xff),
.driver_info = (unsigned long)&rtl8188fu_fops},
+{USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x8179, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8188eu_fops},
+/* Tested by Hans de Goede - rtl8188etv */
+{USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x0179, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8188eu_fops},
+/* Sitecom rtl8188eus */
+{USB_DEVICE_AND_INTERFACE_INFO(0x0df6, 0x0076, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8188eu_fops},
+/* D-Link USB-GO-N150 */
+{USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x3311, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8188eu_fops},
+/* D-Link DWA-125 REV D1 */
+{USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x330f, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8188eu_fops},
+/* D-Link DWA-123 REV D1 */
+{USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x3310, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8188eu_fops},
+/* D-Link DWA-121 rev B1 */
+{USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x331b, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8188eu_fops},
+/* Abocom - Abocom */
+{USB_DEVICE_AND_INTERFACE_INFO(0x07b8, 0x8179, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8188eu_fops},
+/* Elecom WDC-150SU2M */
+{USB_DEVICE_AND_INTERFACE_INFO(0x056e, 0x4008, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8188eu_fops},
+/* TP-Link TL-WN722N v2 */
+{USB_DEVICE_AND_INTERFACE_INFO(0x2357, 0x010c, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8188eu_fops},
+/* TP-Link TL-WN727N v5.21 */
+{USB_DEVICE_AND_INTERFACE_INFO(0x2357, 0x0111, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8188eu_fops},
+/* MERCUSYS MW150US v2 */
+{USB_DEVICE_AND_INTERFACE_INFO(0x2c4e, 0x0102, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8188eu_fops},
+/* ASUS USB-N10 Nano B1 */
+{USB_DEVICE_AND_INTERFACE_INFO(0x0b05, 0x18f0, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8188eu_fops},
+/* Edimax EW-7811Un V2 */
+{USB_DEVICE_AND_INTERFACE_INFO(0x7392, 0xb811, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8188eu_fops},
+/* Rosewill USB-N150 Nano */
+{USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0xffef, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8188eu_fops},
#ifdef CONFIG_RTL8XXXU_UNTESTED
/* Still supported by rtlwifi */
{USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x8176, 0xff, 0xff, 0xff),
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_regs.h b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_regs.h
index 3e79efdfb4c2..5849fa4e1566 100644
--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_regs.h
+++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_regs.h
@@ -147,7 +147,13 @@
#define REG_LEDCFG0 0x004c
#define LEDCFG0_DPDT_SELECT BIT(23)
#define REG_LEDCFG1 0x004d
+#define LEDCFG1_HW_LED_CONTROL BIT(1)
+#define LEDCFG1_LED_DISABLE BIT(7)
#define REG_LEDCFG2 0x004e
+#define LEDCFG2_HW_LED_CONTROL BIT(1)
+#define LEDCFG2_HW_LED_ENABLE BIT(5)
+#define LEDCFG2_SW_LED_DISABLE BIT(3)
+#define LEDCFG2_SW_LED_CONTROL BIT(5)
#define LEDCFG2_DPDT_SELECT BIT(7)
#define REG_LEDCFG3 0x004f
#define REG_LEDCFG REG_LEDCFG2
@@ -371,6 +377,11 @@
#define PBP_PAGE_SIZE_512 0x3
#define PBP_PAGE_SIZE_1024 0x4
+/* 8188eu IOL magic */
+#define REG_PKT_BUF_ACCESS_CTRL 0x0106
+#define PKT_BUF_ACCESS_CTRL_TX 0x69
+#define PKT_BUF_ACCESS_CTRL_RX 0xa5
+
#define REG_TRXDMA_CTRL 0x010c
#define TRXDMA_CTRL_RXDMA_AGG_EN BIT(2)
#define TRXDMA_CTRL_VOQ_SHIFT 4
@@ -407,6 +418,8 @@
#define REG_MBIST_START 0x0174
#define REG_MBIST_DONE 0x0178
#define REG_MBIST_FAIL 0x017c
+/* 8188EU */
+#define REG_32K_CTRL 0x0194
#define REG_C2HEVT_MSG_NORMAL 0x01a0
/* 8192EU/8723BU/8812 */
#define REG_C2HEVT_CMD_ID_8723B 0x01ae
@@ -942,6 +955,16 @@
#define REG_FPGA1_RF_MODE 0x0900
#define REG_FPGA1_TX_INFO 0x090c
+#define FPGA1_TX_ANT_MASK 0x0000000f
+#define FPGA1_TX_ANT_L_MASK 0x000000f0
+#define FPGA1_TX_ANT_NON_HT_MASK 0x00000f00
+#define FPGA1_TX_ANT_HT1_MASK 0x0000f000
+#define FPGA1_TX_ANT_HT2_MASK 0x000f0000
+#define FPGA1_TX_ANT_HT_S1_MASK 0x00f00000
+#define FPGA1_TX_ANT_NON_HT_S1_MASK 0x0f000000
+#define FPGA1_TX_OFDM_TXSC_MASK 0x30000000
+
+#define REG_ANT_MAPPING1 0x0914
#define REG_DPDT_CTRL 0x092c /* 8723BU */
#define REG_RFE_CTRL_ANTA_SRC 0x0930 /* 8723BU */
#define REG_RFE_PATH_SELECT 0x0940 /* 8723BU */
@@ -954,9 +977,25 @@
#define REG_CCK0_AFE_SETTING 0x0a04
#define CCK0_AFE_RX_MASK 0x0f000000
-#define CCK0_AFE_RX_ANT_AB BIT(24)
+#define CCK0_AFE_TX_MASK 0xf0000000
#define CCK0_AFE_RX_ANT_A 0
-#define CCK0_AFE_RX_ANT_B (BIT(24) | BIT(26))
+#define CCK0_AFE_RX_ANT_B BIT(26)
+#define CCK0_AFE_RX_ANT_C BIT(27)
+#define CCK0_AFE_RX_ANT_D (BIT(26) | BIT(27))
+#define CCK0_AFE_RX_ANT_OPTION_A 0
+#define CCK0_AFE_RX_ANT_OPTION_B BIT(24)
+#define CCK0_AFE_RX_ANT_OPTION_C BIT(25)
+#define CCK0_AFE_RX_ANT_OPTION_D (BIT(24) | BIT(25))
+#define CCK0_AFE_TX_ANT_A BIT(31)
+#define CCK0_AFE_TX_ANT_B BIT(30)
+
+#define REG_CCK_ANTDIV_PARA2 0x0a04
+#define REG_BB_POWER_SAVE4 0x0a74
+
+/* 8188eu */
+#define REG_LNA_SWITCH 0x0b2c
+#define LNA_SWITCH_DISABLE_CSCG BIT(22)
+#define LNA_SWITCH_OUTPUT_CG BIT(31)
#define REG_CCK_PD_THRESH 0x0a0a
#define CCK_PD_TYPE1_LV0_TH 0x40
@@ -1020,6 +1059,9 @@
#define REG_OFDM0_RX_IQ_EXT_ANTA 0x0ca0
+/* 8188eu */
+#define REG_ANTDIV_PARA1 0x0ca4
+
/* 8723bu */
#define REG_OFDM0_TX_PSDO_NOISE_WEIGHT 0x0ce4
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
index 58c2ab3d44be..de61c9c0ddec 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
@@ -68,8 +68,10 @@ static void _rtl88ee_return_beacon_queue_skb(struct ieee80211_hw *hw)
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[BEACON_QUEUE];
+ struct sk_buff_head free_list;
unsigned long flags;
+ skb_queue_head_init(&free_list);
spin_lock_irqsave(&rtlpriv->locks.irq_th_lock, flags);
while (skb_queue_len(&ring->queue)) {
struct rtl_tx_desc *entry = &ring->desc[ring->idx];
@@ -79,10 +81,12 @@ static void _rtl88ee_return_beacon_queue_skb(struct ieee80211_hw *hw)
rtlpriv->cfg->ops->get_desc(hw, (u8 *)entry,
true, HW_DESC_TXBUFF_ADDR),
skb->len, DMA_TO_DEVICE);
- kfree_skb(skb);
+ __skb_queue_tail(&free_list, skb);
ring->idx = (ring->idx + 1) % ring->entries;
}
spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
+
+ __skb_queue_purge(&free_list);
}
static void _rtl88ee_disable_bcn_sub_func(struct ieee80211_hw *hw)
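The hunk above (repeated verbatim for rtl8723be and rtl8821ae further down) fixes a locking problem: kfree_skb() is not meant to run with hard IRQs disabled by spin_lock_irqsave() (dev_kfree_skb_irq() is the IRQ-context variant), so the skbs are first moved to an on-stack queue under the lock and only freed after it is released. A minimal standalone sketch of the pattern, with hypothetical lock/queue names:

static void drain_ring(struct sk_buff_head *ring_q, spinlock_t *lock)
{
	struct sk_buff_head free_list;
	struct sk_buff *skb;
	unsigned long flags;

	skb_queue_head_init(&free_list);

	spin_lock_irqsave(lock, flags);
	while ((skb = __skb_dequeue(ring_q)) != NULL)
		__skb_queue_tail(&free_list, skb); /* defer the free */
	spin_unlock_irqrestore(lock, flags);

	__skb_queue_purge(&free_list); /* kfree_skb() with IRQs enabled */
}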
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hal_bt_coexist.h b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hal_bt_coexist.h
index 0455a3712f3e..12cdecdafc32 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hal_bt_coexist.h
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hal_bt_coexist.h
@@ -116,7 +116,7 @@ void rtl8723e_dm_bt_hw_coex_all_off(struct ieee80211_hw *hw);
long rtl8723e_dm_bt_get_rx_ss(struct ieee80211_hw *hw);
void rtl8723e_dm_bt_balance(struct ieee80211_hw *hw,
bool balance_on, u8 ms0, u8 ms1);
-void rtl8723e_dm_bt_agc_table(struct ieee80211_hw *hw, u8 tyep);
+void rtl8723e_dm_bt_agc_table(struct ieee80211_hw *hw, u8 type);
void rtl8723e_dm_bt_bb_back_off_level(struct ieee80211_hw *hw, u8 type);
u8 rtl8723e_dm_bt_check_coex_rssi_state(struct ieee80211_hw *hw,
u8 level_num, u8 rssi_thresh,
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
index 189cc6437600..0ba3bbed6ed3 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
@@ -30,8 +30,10 @@ static void _rtl8723be_return_beacon_queue_skb(struct ieee80211_hw *hw)
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[BEACON_QUEUE];
+ struct sk_buff_head free_list;
unsigned long flags;
+ skb_queue_head_init(&free_list);
spin_lock_irqsave(&rtlpriv->locks.irq_th_lock, flags);
while (skb_queue_len(&ring->queue)) {
struct rtl_tx_desc *entry = &ring->desc[ring->idx];
@@ -41,10 +43,12 @@ static void _rtl8723be_return_beacon_queue_skb(struct ieee80211_hw *hw)
rtlpriv->cfg->ops->get_desc(hw, (u8 *)entry,
true, HW_DESC_TXBUFF_ADDR),
skb->len, DMA_TO_DEVICE);
- kfree_skb(skb);
+ __skb_queue_tail(&free_list, skb);
ring->idx = (ring->idx + 1) % ring->entries;
}
spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
+
+ __skb_queue_purge(&free_list);
}
static void _rtl8723be_set_bcn_ctrl_reg(struct ieee80211_hw *hw,
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
index 7e0f62d59fe1..a7e3250957dc 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
@@ -26,8 +26,10 @@ static void _rtl8821ae_return_beacon_queue_skb(struct ieee80211_hw *hw)
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[BEACON_QUEUE];
+ struct sk_buff_head free_list;
unsigned long flags;
+ skb_queue_head_init(&free_list);
spin_lock_irqsave(&rtlpriv->locks.irq_th_lock, flags);
while (skb_queue_len(&ring->queue)) {
struct rtl_tx_desc *entry = &ring->desc[ring->idx];
@@ -37,10 +39,12 @@ static void _rtl8821ae_return_beacon_queue_skb(struct ieee80211_hw *hw)
rtlpriv->cfg->ops->get_desc(hw, (u8 *)entry,
true, HW_DESC_TXBUFF_ADDR),
skb->len, DMA_TO_DEVICE);
- kfree_skb(skb);
+ __skb_queue_tail(&free_list, skb);
ring->idx = (ring->idx + 1) % ring->entries;
}
spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
+
+ __skb_queue_purge(&free_list);
}
static void _rtl8821ae_set_bcn_ctrl_reg(struct ieee80211_hw *hw,
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
index a29321e2fa72..5323ead30db0 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
@@ -1598,18 +1598,6 @@ static bool _rtl8812ae_get_integer_from_string(const char *str, u8 *pint)
return true;
}
-static bool _rtl8812ae_eq_n_byte(const char *str1, const char *str2, u32 num)
-{
- if (num == 0)
- return false;
- while (num > 0) {
- num--;
- if (str1[num] != str2[num])
- return false;
- }
- return true;
-}
-
static s8 _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(struct ieee80211_hw *hw,
u8 band, u8 channel)
{
@@ -1659,42 +1647,42 @@ static void _rtl8812ae_phy_set_txpower_limit(struct ieee80211_hw *hw,
power_limit = power_limit > MAX_POWER_INDEX ?
MAX_POWER_INDEX : power_limit;
- if (_rtl8812ae_eq_n_byte(pregulation, "FCC", 3))
+ if (strcmp(pregulation, "FCC") == 0)
regulation = 0;
- else if (_rtl8812ae_eq_n_byte(pregulation, "MKK", 3))
+ else if (strcmp(pregulation, "MKK") == 0)
regulation = 1;
- else if (_rtl8812ae_eq_n_byte(pregulation, "ETSI", 4))
+ else if (strcmp(pregulation, "ETSI") == 0)
regulation = 2;
- else if (_rtl8812ae_eq_n_byte(pregulation, "WW13", 4))
+ else if (strcmp(pregulation, "WW13") == 0)
regulation = 3;
- if (_rtl8812ae_eq_n_byte(prate_section, "CCK", 3))
+ if (strcmp(prate_section, "CCK") == 0)
rate_section = 0;
- else if (_rtl8812ae_eq_n_byte(prate_section, "OFDM", 4))
+ else if (strcmp(prate_section, "OFDM") == 0)
rate_section = 1;
- else if (_rtl8812ae_eq_n_byte(prate_section, "HT", 2) &&
- _rtl8812ae_eq_n_byte(prf_path, "1T", 2))
+ else if (strcmp(prate_section, "HT") == 0 &&
+ strcmp(prf_path, "1T") == 0)
rate_section = 2;
- else if (_rtl8812ae_eq_n_byte(prate_section, "HT", 2) &&
- _rtl8812ae_eq_n_byte(prf_path, "2T", 2))
+ else if (strcmp(prate_section, "HT") == 0 &&
+ strcmp(prf_path, "2T") == 0)
rate_section = 3;
- else if (_rtl8812ae_eq_n_byte(prate_section, "VHT", 3) &&
- _rtl8812ae_eq_n_byte(prf_path, "1T", 2))
+ else if (strcmp(prate_section, "VHT") == 0 &&
+ strcmp(prf_path, "1T") == 0)
rate_section = 4;
- else if (_rtl8812ae_eq_n_byte(prate_section, "VHT", 3) &&
- _rtl8812ae_eq_n_byte(prf_path, "2T", 2))
+ else if (strcmp(prate_section, "VHT") == 0 &&
+ strcmp(prf_path, "2T") == 0)
rate_section = 5;
- if (_rtl8812ae_eq_n_byte(pbandwidth, "20M", 3))
+ if (strcmp(pbandwidth, "20M") == 0)
bandwidth = 0;
- else if (_rtl8812ae_eq_n_byte(pbandwidth, "40M", 3))
+ else if (strcmp(pbandwidth, "40M") == 0)
bandwidth = 1;
- else if (_rtl8812ae_eq_n_byte(pbandwidth, "80M", 3))
+ else if (strcmp(pbandwidth, "80M") == 0)
bandwidth = 2;
- else if (_rtl8812ae_eq_n_byte(pbandwidth, "160M", 4))
+ else if (strcmp(pbandwidth, "160M") == 0)
bandwidth = 3;
- if (_rtl8812ae_eq_n_byte(pband, "2.4G", 4)) {
+ if (strcmp(pband, "2.4G") == 0) {
ret = _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(hw,
BAND_ON_2_4G,
channel);
@@ -1718,7 +1706,7 @@ static void _rtl8812ae_phy_set_txpower_limit(struct ieee80211_hw *hw,
regulation, bandwidth, rate_section, channel_index,
rtlphy->txpwr_limit_2_4g[regulation][bandwidth]
[rate_section][channel_index][RF90_PATH_A]);
- } else if (_rtl8812ae_eq_n_byte(pband, "5G", 2)) {
+ } else if (strcmp(pband, "5G") == 0) {
ret = _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(hw,
BAND_ON_5G,
channel);
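Editorial note on the phy.c conversion above: the removed _rtl8812ae_eq_n_byte() compared exactly num bytes (back to front), i.e. a prefix match, while strcmp() compares the full NUL-terminated strings. For exact tokens the two agree, but the strcmp() form can no longer be fooled by a longer token. A sketch (kernel context, <linux/string.h> assumed):

static bool is_ht_section(const char *tok)
{
	/* old: _rtl8812ae_eq_n_byte(tok, "HT", 2) -- "HT40" also matched */
	return strcmp(tok, "HT") == 0; /* exact match only */
}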
diff --git a/drivers/net/wireless/realtek/rtw88/bf.c b/drivers/net/wireless/realtek/rtw88/bf.c
index 038a30b170ef..c827c4a2814b 100644
--- a/drivers/net/wireless/realtek/rtw88/bf.c
+++ b/drivers/net/wireless/realtek/rtw88/bf.c
@@ -49,19 +49,23 @@ void rtw_bf_assoc(struct rtw_dev *rtwdev, struct ieee80211_vif *vif,
sta = ieee80211_find_sta(vif, bssid);
if (!sta) {
+ rcu_read_unlock();
+
rtw_warn(rtwdev, "failed to find station entry for bss %pM\n",
bssid);
- goto out_unlock;
+ return;
}
ic_vht_cap = &hw->wiphy->bands[NL80211_BAND_5GHZ]->vht_cap;
vht_cap = &sta->deflink.vht_cap;
+ rcu_read_unlock();
+
if ((ic_vht_cap->cap & IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE) &&
(vht_cap->cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE)) {
if (bfinfo->bfer_mu_cnt >= chip->bfer_mu_max_num) {
rtw_dbg(rtwdev, RTW_DBG_BF, "mu bfer number over limit\n");
- goto out_unlock;
+ return;
}
ether_addr_copy(bfee->mac_addr, bssid);
@@ -75,7 +79,7 @@ void rtw_bf_assoc(struct rtw_dev *rtwdev, struct ieee80211_vif *vif,
(vht_cap->cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)) {
if (bfinfo->bfer_su_cnt >= chip->bfer_su_max_num) {
rtw_dbg(rtwdev, RTW_DBG_BF, "su bfer number over limit\n");
- goto out_unlock;
+ return;
}
sound_dim = vht_cap->cap &
@@ -98,9 +102,6 @@ void rtw_bf_assoc(struct rtw_dev *rtwdev, struct ieee80211_vif *vif,
rtw_chip_config_bfee(rtwdev, rtwvif, bfee, true);
}
-
-out_unlock:
- rcu_read_unlock();
}
void rtw_bf_init_bfer_entry_mu(struct rtw_dev *rtwdev,
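The bf.c change above narrows the RCU read-side section: the station pointer returned by ieee80211_find_sta() is only valid under rcu_read_lock(), so the capability fields are read before rcu_read_unlock() and the error paths unlock-and-return instead of funnelling through a shared label at the end of the function. A minimal sketch of the pattern, assuming only the VHT capability word is needed:

static u32 snapshot_vht_cap(struct ieee80211_vif *vif, const u8 *bssid)
{
	struct ieee80211_sta *sta;
	u32 cap = 0;

	rcu_read_lock();
	sta = ieee80211_find_sta(vif, bssid);
	if (sta)
		cap = sta->deflink.vht_cap.cap; /* read under RCU */
	rcu_read_unlock();

	return cap; /* sta itself must not be used past the unlock */
}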
diff --git a/drivers/net/wireless/realtek/rtw88/coex.c b/drivers/net/wireless/realtek/rtw88/coex.c
index 38697237ee5f..86467d2f8888 100644
--- a/drivers/net/wireless/realtek/rtw88/coex.c
+++ b/drivers/net/wireless/realtek/rtw88/coex.c
@@ -4056,7 +4056,7 @@ void rtw_coex_display_coex_info(struct rtw_dev *rtwdev, struct seq_file *m)
rtwdev->stats.tx_throughput, rtwdev->stats.rx_throughput);
seq_printf(m, "%-40s = %u/ %u/ %u\n",
"IPS/ Low Power/ PS mode",
- test_bit(RTW_FLAG_INACTIVE_PS, rtwdev->flags),
+ !test_bit(RTW_FLAG_POWERON, rtwdev->flags),
test_bit(RTW_FLAG_LEISURE_PS_DEEP, rtwdev->flags),
rtwdev->lps_conf.mode);
diff --git a/drivers/net/wireless/realtek/rtw88/mac.c b/drivers/net/wireless/realtek/rtw88/mac.c
index 98777f294945..dae64901bac5 100644
--- a/drivers/net/wireless/realtek/rtw88/mac.c
+++ b/drivers/net/wireless/realtek/rtw88/mac.c
@@ -217,10 +217,10 @@ static int rtw_pwr_seq_parser(struct rtw_dev *rtwdev,
cut_mask = cut_version_to_mask(cut);
switch (rtw_hci_type(rtwdev)) {
case RTW_HCI_TYPE_PCIE:
- intf_mask = BIT(2);
+ intf_mask = RTW_PWR_INTF_PCI_MSK;
break;
case RTW_HCI_TYPE_USB:
- intf_mask = BIT(1);
+ intf_mask = RTW_PWR_INTF_USB_MSK;
break;
default:
return -EINVAL;
@@ -273,6 +273,11 @@ static int rtw_mac_power_switch(struct rtw_dev *rtwdev, bool pwr_on)
if (rtw_pwr_seq_parser(rtwdev, pwr_seq))
return -EINVAL;
+ if (pwr_on)
+ set_bit(RTW_FLAG_POWERON, rtwdev->flags);
+ else
+ clear_bit(RTW_FLAG_POWERON, rtwdev->flags);
+
return 0;
}
@@ -335,6 +340,11 @@ int rtw_mac_power_on(struct rtw_dev *rtwdev)
ret = rtw_mac_power_switch(rtwdev, true);
if (ret == -EALREADY) {
rtw_mac_power_switch(rtwdev, false);
+
+ ret = rtw_mac_pre_system_cfg(rtwdev);
+ if (ret)
+ goto err;
+
ret = rtw_mac_power_switch(rtwdev, true);
if (ret)
goto err;
diff --git a/drivers/net/wireless/realtek/rtw88/mac80211.c b/drivers/net/wireless/realtek/rtw88/mac80211.c
index 776a9a9884b5..3b92ac611d3f 100644
--- a/drivers/net/wireless/realtek/rtw88/mac80211.c
+++ b/drivers/net/wireless/realtek/rtw88/mac80211.c
@@ -737,7 +737,7 @@ static void rtw_ra_mask_info_update(struct rtw_dev *rtwdev,
br_data.rtwdev = rtwdev;
br_data.vif = vif;
br_data.mask = mask;
- rtw_iterate_stas_atomic(rtwdev, rtw_ra_mask_info_update_iter, &br_data);
+ rtw_iterate_stas(rtwdev, rtw_ra_mask_info_update_iter, &br_data);
}
static int rtw_ops_set_bitrate_mask(struct ieee80211_hw *hw,
@@ -746,7 +746,9 @@ static int rtw_ops_set_bitrate_mask(struct ieee80211_hw *hw,
{
struct rtw_dev *rtwdev = hw->priv;
+ mutex_lock(&rtwdev->mutex);
rtw_ra_mask_info_update(rtwdev, vif, mask);
+ mutex_unlock(&rtwdev->mutex);
return 0;
}
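The bitrate-mask path above moves from rtw_iterate_stas_atomic() to rtw_iterate_stas(), whose handler is allowed to sleep, so the call is now wrapped in rtwdev->mutex; the watchdog in main.c below makes the matching switch on the vif side. The resulting convention, sketched with a hypothetical helper name and assuming the non-atomic iterator requires the driver mutex:

static void rtw_update_ra_masks(struct rtw_dev *rtwdev, void *data)
{
	mutex_lock(&rtwdev->mutex);	/* non-atomic iterator: handler may sleep */
	rtw_iterate_stas(rtwdev, rtw_ra_mask_info_update_iter, data);
	mutex_unlock(&rtwdev->mutex);
}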
diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
index 888427cf3bdf..b2e78737bd5d 100644
--- a/drivers/net/wireless/realtek/rtw88/main.c
+++ b/drivers/net/wireless/realtek/rtw88/main.c
@@ -241,8 +241,10 @@ static void rtw_watch_dog_work(struct work_struct *work)
rtw_phy_dynamic_mechanism(rtwdev);
data.rtwdev = rtwdev;
- /* use atomic version to avoid taking local->iflist_mtx mutex */
- rtw_iterate_vifs_atomic(rtwdev, rtw_vif_watch_dog_iter, &data);
+ /* rtw_iterate_vifs internally uses an atomic iterator which is needed
+ * to avoid taking local->iflist_mtx mutex
+ */
+ rtw_iterate_vifs(rtwdev, rtw_vif_watch_dog_iter, &data);
/* fw supports only one station associated to enter lps, if there are
* more than two stations associated to the AP, then we can not enter
diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h
index 165f299e8e1f..d4a53d556745 100644
--- a/drivers/net/wireless/realtek/rtw88/main.h
+++ b/drivers/net/wireless/realtek/rtw88/main.h
@@ -356,7 +356,7 @@ enum rtw_flags {
RTW_FLAG_RUNNING,
RTW_FLAG_FW_RUNNING,
RTW_FLAG_SCANNING,
- RTW_FLAG_INACTIVE_PS,
+ RTW_FLAG_POWERON,
RTW_FLAG_LEISURE_PS,
RTW_FLAG_LEISURE_PS_DEEP,
RTW_FLAG_DIG_DISABLE,
diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c
index 0975d27240e4..b4bd831c9845 100644
--- a/drivers/net/wireless/realtek/rtw88/pci.c
+++ b/drivers/net/wireless/realtek/rtw88/pci.c
@@ -30,7 +30,8 @@ static u32 rtw_pci_tx_queue_idx_addr[] = {
[RTW_TX_QUEUE_H2C] = RTK_PCI_TXBD_IDX_H2CQ,
};
-static u8 rtw_pci_get_tx_qsel(struct sk_buff *skb, u8 queue)
+static u8 rtw_pci_get_tx_qsel(struct sk_buff *skb,
+ enum rtw_tx_queue_type queue)
{
switch (queue) {
case RTW_TX_QUEUE_BCN:
@@ -542,7 +543,7 @@ static int rtw_pci_setup(struct rtw_dev *rtwdev)
static void rtw_pci_dma_release(struct rtw_dev *rtwdev, struct rtw_pci *rtwpci)
{
struct rtw_pci_tx_ring *tx_ring;
- u8 queue;
+ enum rtw_tx_queue_type queue;
rtw_pci_reset_trx_ring(rtwdev);
for (queue = 0; queue < RTK_MAX_TX_QUEUE_NUM; queue++) {
@@ -608,8 +609,8 @@ static void rtw_pci_deep_ps_enter(struct rtw_dev *rtwdev)
{
struct rtw_pci *rtwpci = (struct rtw_pci *)rtwdev->priv;
struct rtw_pci_tx_ring *tx_ring;
+ enum rtw_tx_queue_type queue;
bool tx_empty = true;
- u8 queue;
if (rtw_fw_feature_check(&rtwdev->fw, FW_FEATURE_TX_WAKE))
goto enter_deep_ps;
@@ -669,37 +670,6 @@ static void rtw_pci_deep_ps(struct rtw_dev *rtwdev, bool enter)
spin_unlock_bh(&rtwpci->irq_lock);
}
-static u8 ac_to_hwq[] = {
- [IEEE80211_AC_VO] = RTW_TX_QUEUE_VO,
- [IEEE80211_AC_VI] = RTW_TX_QUEUE_VI,
- [IEEE80211_AC_BE] = RTW_TX_QUEUE_BE,
- [IEEE80211_AC_BK] = RTW_TX_QUEUE_BK,
-};
-
-static_assert(ARRAY_SIZE(ac_to_hwq) == IEEE80211_NUM_ACS);
-
-static u8 rtw_hw_queue_mapping(struct sk_buff *skb)
-{
- struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
- __le16 fc = hdr->frame_control;
- u8 q_mapping = skb_get_queue_mapping(skb);
- u8 queue;
-
- if (unlikely(ieee80211_is_beacon(fc)))
- queue = RTW_TX_QUEUE_BCN;
- else if (unlikely(ieee80211_is_mgmt(fc) || ieee80211_is_ctl(fc)))
- queue = RTW_TX_QUEUE_MGMT;
- else if (is_broadcast_ether_addr(hdr->addr1) ||
- is_multicast_ether_addr(hdr->addr1))
- queue = RTW_TX_QUEUE_HI0;
- else if (WARN_ON_ONCE(q_mapping >= ARRAY_SIZE(ac_to_hwq)))
- queue = ac_to_hwq[IEEE80211_AC_BE];
- else
- queue = ac_to_hwq[q_mapping];
-
- return queue;
-}
-
static void rtw_pci_release_rsvd_page(struct rtw_pci *rtwpci,
struct rtw_pci_tx_ring *ring)
{
@@ -797,13 +767,14 @@ static void rtw_pci_flush_queues(struct rtw_dev *rtwdev, u32 queues, bool drop)
} else {
for (i = 0; i < rtwdev->hw->queues; i++)
if (queues & BIT(i))
- pci_queues |= BIT(ac_to_hwq[i]);
+ pci_queues |= BIT(rtw_tx_ac_to_hwq(i));
}
__rtw_pci_flush_queues(rtwdev, pci_queues, drop);
}
-static void rtw_pci_tx_kick_off_queue(struct rtw_dev *rtwdev, u8 queue)
+static void rtw_pci_tx_kick_off_queue(struct rtw_dev *rtwdev,
+ enum rtw_tx_queue_type queue)
{
struct rtw_pci *rtwpci = (struct rtw_pci *)rtwdev->priv;
struct rtw_pci_tx_ring *ring;
@@ -822,7 +793,7 @@ static void rtw_pci_tx_kick_off_queue(struct rtw_dev *rtwdev, u8 queue)
static void rtw_pci_tx_kick_off(struct rtw_dev *rtwdev)
{
struct rtw_pci *rtwpci = (struct rtw_pci *)rtwdev->priv;
- u8 queue;
+ enum rtw_tx_queue_type queue;
for (queue = 0; queue < RTK_MAX_TX_QUEUE_NUM; queue++)
if (test_and_clear_bit(queue, rtwpci->tx_queued))
@@ -831,7 +802,8 @@ static void rtw_pci_tx_kick_off(struct rtw_dev *rtwdev)
static int rtw_pci_tx_write_data(struct rtw_dev *rtwdev,
struct rtw_tx_pkt_info *pkt_info,
- struct sk_buff *skb, u8 queue)
+ struct sk_buff *skb,
+ enum rtw_tx_queue_type queue)
{
struct rtw_pci *rtwpci = (struct rtw_pci *)rtwdev->priv;
const struct rtw_chip_info *chip = rtwdev->chip;
@@ -949,9 +921,9 @@ static int rtw_pci_tx_write(struct rtw_dev *rtwdev,
struct rtw_tx_pkt_info *pkt_info,
struct sk_buff *skb)
{
+ enum rtw_tx_queue_type queue = rtw_tx_queue_mapping(skb);
struct rtw_pci *rtwpci = (struct rtw_pci *)rtwdev->priv;
struct rtw_pci_tx_ring *ring;
- u8 queue = rtw_hw_queue_mapping(skb);
int ret;
ret = rtw_pci_tx_write_data(rtwdev, pkt_info, skb, queue);
diff --git a/drivers/net/wireless/realtek/rtw88/ps.c b/drivers/net/wireless/realtek/rtw88/ps.c
index 11594940d6b0..996365575f44 100644
--- a/drivers/net/wireless/realtek/rtw88/ps.c
+++ b/drivers/net/wireless/realtek/rtw88/ps.c
@@ -25,7 +25,7 @@ static int rtw_ips_pwr_up(struct rtw_dev *rtwdev)
int rtw_enter_ips(struct rtw_dev *rtwdev)
{
- if (test_and_set_bit(RTW_FLAG_INACTIVE_PS, rtwdev->flags))
+ if (!test_bit(RTW_FLAG_POWERON, rtwdev->flags))
return 0;
rtw_coex_ips_notify(rtwdev, COEX_IPS_ENTER);
@@ -50,7 +50,7 @@ int rtw_leave_ips(struct rtw_dev *rtwdev)
{
int ret;
- if (!test_and_clear_bit(RTW_FLAG_INACTIVE_PS, rtwdev->flags))
+ if (test_bit(RTW_FLAG_POWERON, rtwdev->flags))
return 0;
rtw_hci_link_ps(rtwdev, false);
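Reading aid for the RTW_FLAG_INACTIVE_PS to RTW_FLAG_POWERON conversion in ps.c above and in coex.c/wow.c: the old flag meant "IPS entered" and was toggled by the PS code itself, while the new one means "MAC is powered" and is maintained solely by rtw_mac_power_switch(), so every test flips polarity. As a hypothetical helper:

static bool rtw_in_ips(struct rtw_dev *rtwdev)
{
	/* was: test_bit(RTW_FLAG_INACTIVE_PS, rtwdev->flags) */
	return !test_bit(RTW_FLAG_POWERON, rtwdev->flags);
}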
diff --git a/drivers/net/wireless/realtek/rtw88/tx.c b/drivers/net/wireless/realtek/rtw88/tx.c
index ab39245e9c2f..bb5c7492c98b 100644
--- a/drivers/net/wireless/realtek/rtw88/tx.c
+++ b/drivers/net/wireless/realtek/rtw88/tx.c
@@ -682,3 +682,44 @@ void rtw_txq_cleanup(struct rtw_dev *rtwdev, struct ieee80211_txq *txq)
list_del_init(&rtwtxq->list);
spin_unlock_bh(&rtwdev->txq_lock);
}
+
+static const enum rtw_tx_queue_type ac_to_hwq[] = {
+ [IEEE80211_AC_VO] = RTW_TX_QUEUE_VO,
+ [IEEE80211_AC_VI] = RTW_TX_QUEUE_VI,
+ [IEEE80211_AC_BE] = RTW_TX_QUEUE_BE,
+ [IEEE80211_AC_BK] = RTW_TX_QUEUE_BK,
+};
+
+static_assert(ARRAY_SIZE(ac_to_hwq) == IEEE80211_NUM_ACS);
+
+enum rtw_tx_queue_type rtw_tx_ac_to_hwq(enum ieee80211_ac_numbers ac)
+{
+ if (WARN_ON(unlikely(ac >= IEEE80211_NUM_ACS)))
+ return RTW_TX_QUEUE_BE;
+
+ return ac_to_hwq[ac];
+}
+EXPORT_SYMBOL(rtw_tx_ac_to_hwq);
+
+enum rtw_tx_queue_type rtw_tx_queue_mapping(struct sk_buff *skb)
+{
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ __le16 fc = hdr->frame_control;
+ u8 q_mapping = skb_get_queue_mapping(skb);
+ enum rtw_tx_queue_type queue;
+
+ if (unlikely(ieee80211_is_beacon(fc)))
+ queue = RTW_TX_QUEUE_BCN;
+ else if (unlikely(ieee80211_is_mgmt(fc) || ieee80211_is_ctl(fc)))
+ queue = RTW_TX_QUEUE_MGMT;
+ else if (is_broadcast_ether_addr(hdr->addr1) ||
+ is_multicast_ether_addr(hdr->addr1))
+ queue = RTW_TX_QUEUE_HI0;
+ else if (WARN_ON_ONCE(q_mapping >= ARRAY_SIZE(ac_to_hwq)))
+ queue = ac_to_hwq[IEEE80211_AC_BE];
+ else
+ queue = ac_to_hwq[q_mapping];
+
+ return queue;
+}
+EXPORT_SYMBOL(rtw_tx_queue_mapping);
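rtw_tx_queue_mapping() and rtw_tx_ac_to_hwq() above are the former pci.c-private helpers, moved into shared tx.c and exported so the USB path (and any future HCI backend) can reuse them; rtw_tx_ac_to_hwq() additionally bounds-checks the AC index. A hypothetical backend would consume them like this sketch:

static int my_hci_tx_write(struct rtw_dev *rtwdev,
			   struct rtw_tx_pkt_info *pkt_info,
			   struct sk_buff *skb)
{
	enum rtw_tx_queue_type q = rtw_tx_queue_mapping(skb);

	/* ...map 'q' to a hardware ring or endpoint and queue the skb... */
	return 0;
}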
diff --git a/drivers/net/wireless/realtek/rtw88/tx.h b/drivers/net/wireless/realtek/rtw88/tx.h
index a2f3ac326041..197d5868c8ad 100644
--- a/drivers/net/wireless/realtek/rtw88/tx.h
+++ b/drivers/net/wireless/realtek/rtw88/tx.h
@@ -131,6 +131,9 @@ rtw_tx_write_data_h2c_get(struct rtw_dev *rtwdev,
struct rtw_tx_pkt_info *pkt_info,
u8 *buf, u32 size);
+enum rtw_tx_queue_type rtw_tx_ac_to_hwq(enum ieee80211_ac_numbers ac);
+enum rtw_tx_queue_type rtw_tx_queue_mapping(struct sk_buff *skb);
+
static inline
void fill_txdesc_checksum_common(u8 *txdesc, size_t words)
{
diff --git a/drivers/net/wireless/realtek/rtw88/usb.c b/drivers/net/wireless/realtek/rtw88/usb.c
index 4ef38279b64c..2a8336b1847a 100644
--- a/drivers/net/wireless/realtek/rtw88/usb.c
+++ b/drivers/net/wireless/realtek/rtw88/usb.c
@@ -271,6 +271,7 @@ static int rtw_usb_write_port(struct rtw_dev *rtwdev, u8 qsel, struct sk_buff *s
return -ENOMEM;
usb_fill_bulk_urb(urb, usbd, pipe, skb->data, skb->len, cb, context);
+ urb->transfer_flags |= URB_ZERO_PACKET;
ret = usb_submit_urb(urb, GFP_ATOMIC);
usb_free_urb(urb);
@@ -413,24 +414,11 @@ static int rtw_usb_write_data_rsvd_page(struct rtw_dev *rtwdev, u8 *buf,
u32 size)
{
const struct rtw_chip_info *chip = rtwdev->chip;
- struct rtw_usb *rtwusb;
struct rtw_tx_pkt_info pkt_info = {0};
- u32 len, desclen;
-
- rtwusb = rtw_get_usb_priv(rtwdev);
pkt_info.tx_pkt_size = size;
pkt_info.qsel = TX_DESC_QSEL_BEACON;
-
- desclen = chip->tx_pkt_desc_sz;
- len = desclen + size;
- if (len % rtwusb->bulkout_size == 0) {
- len += RTW_USB_PACKET_OFFSET_SZ;
- pkt_info.offset = desclen + RTW_USB_PACKET_OFFSET_SZ;
- pkt_info.pkt_offset = 1;
- } else {
- pkt_info.offset = desclen;
- }
+ pkt_info.offset = chip->tx_pkt_desc_sz;
return rtw_usb_write_data(rtwdev, &pkt_info, buf);
}
@@ -471,9 +459,9 @@ static int rtw_usb_tx_write(struct rtw_dev *rtwdev,
u8 *pkt_desc;
int ep;
+ pkt_info->qsel = rtw_usb_tx_queue_mapping_to_qsel(skb);
pkt_desc = skb_push(skb, chip->tx_pkt_desc_sz);
memset(pkt_desc, 0, chip->tx_pkt_desc_sz);
- pkt_info->qsel = rtw_usb_tx_queue_mapping_to_qsel(skb);
ep = qsel_to_ep(rtwusb, pkt_info->qsel);
rtw_tx_fill_tx_desc(pkt_info, skb);
rtw_tx_fill_txdesc_checksum(rtwdev, pkt_info, skb->data);
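Two independent fixes in usb.c above. First, URB_ZERO_PACKET: when a bulk-out transfer is an exact multiple of the endpoint's max packet size, the device cannot tell where the transfer ends, so the host must terminate it with a zero-length packet; setting the flag makes the host controller do that automatically, which is presumably what allows rtw_usb_write_data_rsvd_page() to drop its manual bulkout_size padding. Second, pkt_info->qsel is now computed before skb_push() prepends the TX descriptor, presumably so the queue-select lookup reads the 802.11 header rather than the freshly pushed descriptor bytes. The ZLP part in isolation:

usb_fill_bulk_urb(urb, usbd, pipe, skb->data, skb->len, cb, context);
urb->transfer_flags |= URB_ZERO_PACKET; /* send ZLP if len % maxpacket == 0 */
ret = usb_submit_urb(urb, GFP_ATOMIC);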
diff --git a/drivers/net/wireless/realtek/rtw88/wow.c b/drivers/net/wireless/realtek/rtw88/wow.c
index 89dc595094d5..16ddee577efe 100644
--- a/drivers/net/wireless/realtek/rtw88/wow.c
+++ b/drivers/net/wireless/realtek/rtw88/wow.c
@@ -592,7 +592,7 @@ static int rtw_wow_leave_no_link_ps(struct rtw_dev *rtwdev)
if (rtw_get_lps_deep_mode(rtwdev) != LPS_DEEP_MODE_NONE)
rtw_leave_lps_deep(rtwdev);
} else {
- if (test_bit(RTW_FLAG_INACTIVE_PS, rtwdev->flags)) {
+ if (!test_bit(RTW_FLAG_POWERON, rtwdev->flags)) {
rtw_wow->ips_enabled = true;
ret = rtw_leave_ips(rtwdev);
if (ret)
diff --git a/drivers/net/wireless/realtek/rtw89/coex.c b/drivers/net/wireless/realtek/rtw89/coex.c
index f21c73310fdb..bcf483cafd20 100644
--- a/drivers/net/wireless/realtek/rtw89/coex.c
+++ b/drivers/net/wireless/realtek/rtw89/coex.c
@@ -9,7 +9,7 @@
#include "ps.h"
#include "reg.h"
-#define RTW89_COEX_VERSION 0x06030013
+#define RTW89_COEX_VERSION 0x07000013
#define FCXDEF_STEP 50 /* MUST be <= FCXMAX_STEP and match the wl fw */
enum btc_fbtc_tdma_template {
@@ -63,7 +63,7 @@ struct btc_fbtc_1slot {
static const struct rtw89_btc_fbtc_tdma t_def[] = {
[CXTD_OFF] = { CXTDMA_OFF, CXFLC_OFF, CXTPS_OFF, 0, 0, 0, 0, 0},
[CXTD_OFF_B2] = { CXTDMA_OFF, CXFLC_OFF, CXTPS_OFF, 0, 0, 1, 0, 0},
- [CXTD_OFF_EXT] = { CXTDMA_OFF, CXFLC_OFF, CXTPS_OFF, 0, 0, 3, 0, 0},
+ [CXTD_OFF_EXT] = { CXTDMA_OFF, CXFLC_OFF, CXTPS_OFF, 0, 0, 2, 0, 0},
[CXTD_FIX] = { CXTDMA_FIX, CXFLC_OFF, CXTPS_OFF, 0, 0, 0, 0, 0},
[CXTD_PFIX] = { CXTDMA_FIX, CXFLC_NULLP, CXTPS_ON, 0, 5, 0, 0, 0},
[CXTD_AUTO] = { CXTDMA_AUTO, CXFLC_OFF, CXTPS_OFF, 0, 0, 0, 0, 0},
@@ -80,21 +80,21 @@ static const struct rtw89_btc_fbtc_slot s_def[] = {
[CXST_OFF] = __DEF_FBTC_SLOT(100, 0x55555555, SLOT_MIX),
[CXST_B2W] = __DEF_FBTC_SLOT(5, 0xea5a5a5a, SLOT_ISO),
[CXST_W1] = __DEF_FBTC_SLOT(70, 0xea5a5a5a, SLOT_ISO),
- [CXST_W2] = __DEF_FBTC_SLOT(70, 0xea5a5aaa, SLOT_ISO),
+ [CXST_W2] = __DEF_FBTC_SLOT(15, 0xea5a5a5a, SLOT_ISO),
[CXST_W2B] = __DEF_FBTC_SLOT(15, 0xea5a5a5a, SLOT_ISO),
- [CXST_B1] = __DEF_FBTC_SLOT(100, 0xe5555555, SLOT_MIX),
+ [CXST_B1] = __DEF_FBTC_SLOT(250, 0xe5555555, SLOT_MIX),
[CXST_B2] = __DEF_FBTC_SLOT(7, 0xea5a5a5a, SLOT_MIX),
[CXST_B3] = __DEF_FBTC_SLOT(5, 0xe5555555, SLOT_MIX),
[CXST_B4] = __DEF_FBTC_SLOT(50, 0xe5555555, SLOT_MIX),
[CXST_LK] = __DEF_FBTC_SLOT(20, 0xea5a5a5a, SLOT_ISO),
- [CXST_BLK] = __DEF_FBTC_SLOT(250, 0x55555555, SLOT_MIX),
- [CXST_E2G] = __DEF_FBTC_SLOT(20, 0xea5a5a5a, SLOT_MIX),
- [CXST_E5G] = __DEF_FBTC_SLOT(20, 0xffffffff, SLOT_MIX),
- [CXST_EBT] = __DEF_FBTC_SLOT(20, 0xe5555555, SLOT_MIX),
- [CXST_ENULL] = __DEF_FBTC_SLOT(7, 0xaaaaaaaa, SLOT_ISO),
+ [CXST_BLK] = __DEF_FBTC_SLOT(500, 0x55555555, SLOT_MIX),
+ [CXST_E2G] = __DEF_FBTC_SLOT(0, 0xea5a5a5a, SLOT_MIX),
+ [CXST_E5G] = __DEF_FBTC_SLOT(0, 0xffffffff, SLOT_ISO),
+ [CXST_EBT] = __DEF_FBTC_SLOT(0, 0xe5555555, SLOT_MIX),
+ [CXST_ENULL] = __DEF_FBTC_SLOT(0, 0xaaaaaaaa, SLOT_ISO),
[CXST_WLK] = __DEF_FBTC_SLOT(250, 0xea5a5a5a, SLOT_MIX),
- [CXST_W1FDD] = __DEF_FBTC_SLOT(35, 0xfafafafa, SLOT_ISO),
- [CXST_B1FDD] = __DEF_FBTC_SLOT(100, 0xffffffff, SLOT_MIX),
+ [CXST_W1FDD] = __DEF_FBTC_SLOT(50, 0xffffffff, SLOT_ISO),
+ [CXST_B1FDD] = __DEF_FBTC_SLOT(50, 0xffffdfff, SLOT_ISO),
};
static const u32 cxtbl[] = {
@@ -117,9 +117,78 @@ static const u32 cxtbl[] = {
0xfafafafa, /* 16 */
0xffffddff, /* 17 */
0xdaffdaff, /* 18 */
- 0xfafadafa /* 19 */
+ 0xfafadafa, /* 19 */
+ 0xea6a6a6a, /* 20 */
+ 0xea55556a, /* 21 */
+ 0xaafafafa, /* 22 */
+ 0xfafaaafa, /* 23 */
+ 0xfafffaff /* 24 */
};
+static const struct rtw89_btc_ver rtw89_btc_ver_defs[] = {
+ /* firmware version must be in decreasing order for each chip */
+ {RTL8852C, RTW89_FW_VER_CODE(0, 27, 57, 0),
+ .fcxbtcrpt = 4, .fcxtdma = 3, .fcxslots = 1, .fcxcysta = 3,
+ .fcxstep = 3, .fcxnullsta = 2, .fcxmreg = 1, .fcxgpiodbg = 1,
+ .fcxbtver = 1, .fcxbtscan = 1, .fcxbtafh = 2, .fcxbtdevinfo = 1,
+ .fwlrole = 1, .frptmap = 3, .fcxctrl = 1,
+ .info_buf = 1280, .max_role_num = 5,
+ },
+ {RTL8852C, RTW89_FW_VER_CODE(0, 27, 42, 0),
+ .fcxbtcrpt = 4, .fcxtdma = 3, .fcxslots = 1, .fcxcysta = 3,
+ .fcxstep = 3, .fcxnullsta = 2, .fcxmreg = 1, .fcxgpiodbg = 1,
+ .fcxbtver = 1, .fcxbtscan = 1, .fcxbtafh = 2, .fcxbtdevinfo = 1,
+ .fwlrole = 1, .frptmap = 2, .fcxctrl = 1,
+ .info_buf = 1280, .max_role_num = 5,
+ },
+ {RTL8852C, RTW89_FW_VER_CODE(0, 27, 0, 0),
+ .fcxbtcrpt = 4, .fcxtdma = 3, .fcxslots = 1, .fcxcysta = 3,
+ .fcxstep = 3, .fcxnullsta = 2, .fcxmreg = 1, .fcxgpiodbg = 1,
+ .fcxbtver = 1, .fcxbtscan = 1, .fcxbtafh = 1, .fcxbtdevinfo = 1,
+ .fwlrole = 1, .frptmap = 2, .fcxctrl = 1,
+ .info_buf = 1280, .max_role_num = 5,
+ },
+ {RTL8852B, RTW89_FW_VER_CODE(0, 29, 14, 0),
+ .fcxbtcrpt = 5, .fcxtdma = 3, .fcxslots = 1, .fcxcysta = 4,
+ .fcxstep = 3, .fcxnullsta = 2, .fcxmreg = 1, .fcxgpiodbg = 1,
+ .fcxbtver = 1, .fcxbtscan = 1, .fcxbtafh = 2, .fcxbtdevinfo = 1,
+ .fwlrole = 1, .frptmap = 3, .fcxctrl = 1,
+ .info_buf = 1800, .max_role_num = 6,
+ },
+ {RTL8852B, RTW89_FW_VER_CODE(0, 27, 0, 0),
+ .fcxbtcrpt = 4, .fcxtdma = 3, .fcxslots = 1, .fcxcysta = 3,
+ .fcxstep = 3, .fcxnullsta = 2, .fcxmreg = 1, .fcxgpiodbg = 1,
+ .fcxbtver = 1, .fcxbtscan = 1, .fcxbtafh = 1, .fcxbtdevinfo = 1,
+ .fwlrole = 1, .frptmap = 1, .fcxctrl = 1,
+ .info_buf = 1280, .max_role_num = 5,
+ },
+ {RTL8852A, RTW89_FW_VER_CODE(0, 13, 37, 0),
+ .fcxbtcrpt = 4, .fcxtdma = 3, .fcxslots = 1, .fcxcysta = 3,
+ .fcxstep = 3, .fcxnullsta = 2, .fcxmreg = 1, .fcxgpiodbg = 1,
+ .fcxbtver = 1, .fcxbtscan = 1, .fcxbtafh = 2, .fcxbtdevinfo = 1,
+ .fwlrole = 1, .frptmap = 3, .fcxctrl = 1,
+ .info_buf = 1280, .max_role_num = 5,
+ },
+ {RTL8852A, RTW89_FW_VER_CODE(0, 13, 0, 0),
+ .fcxbtcrpt = 1, .fcxtdma = 1, .fcxslots = 1, .fcxcysta = 2,
+ .fcxstep = 2, .fcxnullsta = 1, .fcxmreg = 1, .fcxgpiodbg = 1,
+ .fcxbtver = 1, .fcxbtscan = 1, .fcxbtafh = 1, .fcxbtdevinfo = 1,
+ .fwlrole = 0, .frptmap = 0, .fcxctrl = 0,
+ .info_buf = 1024, .max_role_num = 5,
+ },
+
+ /* keep this entry last; it is the default */
+ {0, RTW89_FW_VER_CODE(0, 0, 0, 0),
+ .fcxbtcrpt = 1, .fcxtdma = 1, .fcxslots = 1, .fcxcysta = 2,
+ .fcxstep = 2, .fcxnullsta = 1, .fcxmreg = 1, .fcxgpiodbg = 1,
+ .fcxbtver = 1, .fcxbtscan = 1, .fcxbtafh = 1, .fcxbtdevinfo = 1,
+ .fwlrole = 0, .frptmap = 0, .fcxctrl = 0,
+ .info_buf = 1024, .max_role_num = 5,
+ },
+};
+
+#define RTW89_DEFAULT_BTC_VER_IDX (ARRAY_SIZE(rtw89_btc_ver_defs) - 1)
+
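The table above replaces the per-chip fcx*_ver fields previously read from struct rtw89_chip_info with per-firmware-version feature descriptors. The "decreasing order" comment implies a first-match scan: the first entry whose chip matches and whose version code is not newer than the running firmware is selected, with the all-zero terminator as the fallback. A sketch of the presumed lookup (the field names chip_id and fw_ver_code are assumptions):

static const struct rtw89_btc_ver *btc_find_ver(u32 chip_id, u32 fw_ver_code)
{
	u32 i;

	for (i = 0; i < RTW89_DEFAULT_BTC_VER_IDX; i++) {
		if (rtw89_btc_ver_defs[i].chip_id == chip_id &&
		    rtw89_btc_ver_defs[i].fw_ver_code <= fw_ver_code)
			return &rtw89_btc_ver_defs[i];
	}

	return &rtw89_btc_ver_defs[RTW89_DEFAULT_BTC_VER_IDX];
}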
struct rtw89_btc_btf_tlv {
u8 type;
u8 len;
@@ -127,16 +196,20 @@ struct rtw89_btc_btf_tlv {
} __packed;
enum btc_btf_set_report_en {
- RPT_EN_TDMA = BIT(0),
- RPT_EN_CYCLE = BIT(1),
- RPT_EN_MREG = BIT(2),
- RPT_EN_BT_VER_INFO = BIT(3),
- RPT_EN_BT_SCAN_INFO = BIT(4),
- RPT_EN_BT_AFH_MAP = BIT(5),
- RPT_EN_BT_DEVICE_INFO = BIT(6),
- RPT_EN_WL_ALL = GENMASK(2, 0),
- RPT_EN_BT_ALL = GENMASK(6, 3),
- RPT_EN_ALL = GENMASK(6, 0),
+ RPT_EN_TDMA,
+ RPT_EN_CYCLE,
+ RPT_EN_MREG,
+ RPT_EN_BT_VER_INFO,
+ RPT_EN_BT_SCAN_INFO,
+ RPT_EN_BT_DEVICE_INFO,
+ RPT_EN_BT_AFH_MAP,
+ RPT_EN_BT_AFH_MAP_LE,
+ RPT_EN_FW_STEP_INFO,
+ RPT_EN_TEST,
+ RPT_EN_WL_ALL,
+ RPT_EN_BT_ALL,
+ RPT_EN_ALL,
+ RPT_EN_MONITER,
};
#define BTF_SET_REPORT_VER 1
@@ -786,17 +859,18 @@ static void _chk_btc_err(struct rtw89_dev *rtwdev, u8 type, u32 cnt)
static void _update_bt_report(struct rtw89_dev *rtwdev, u8 rpt_type, u8 *pfinfo)
{
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_bt_info *bt = &btc->cx.bt;
struct rtw89_btc_bt_link_info *bt_linfo = &bt->link_info;
struct rtw89_btc_bt_a2dp_desc *a2dp = &bt_linfo->a2dp_desc;
struct rtw89_btc_fbtc_btver *pver = NULL;
struct rtw89_btc_fbtc_btscan *pscan = NULL;
- struct rtw89_btc_fbtc_btafh *pafh = NULL;
+ struct rtw89_btc_fbtc_btafh *pafh_v1 = NULL;
+ struct rtw89_btc_fbtc_btafh_v2 *pafh_v2 = NULL;
struct rtw89_btc_fbtc_btdevinfo *pdev = NULL;
pver = (struct rtw89_btc_fbtc_btver *)pfinfo;
pscan = (struct rtw89_btc_fbtc_btscan *)pfinfo;
- pafh = (struct rtw89_btc_fbtc_btafh *)pfinfo;
pdev = (struct rtw89_btc_fbtc_btdevinfo *)pfinfo;
rtw89_debug(rtwdev, RTW89_DBG_BTC,
@@ -813,9 +887,23 @@ static void _update_bt_report(struct rtw89_dev *rtwdev, u8 rpt_type, u8 *pfinfo)
memcpy(bt->scan_info, pscan->scan, BTC_SCAN_MAX1);
break;
case BTC_RPT_TYPE_BT_AFH:
- memcpy(&bt_linfo->afh_map[0], pafh->afh_l, 4);
- memcpy(&bt_linfo->afh_map[4], pafh->afh_m, 4);
- memcpy(&bt_linfo->afh_map[8], pafh->afh_h, 2);
+ if (ver->fcxbtafh == 2) {
+ pafh_v2 = (struct rtw89_btc_fbtc_btafh_v2 *)pfinfo;
+ if (pafh_v2->map_type & RPT_BT_AFH_SEQ_LEGACY) {
+ memcpy(&bt_linfo->afh_map[0], pafh_v2->afh_l, 4);
+ memcpy(&bt_linfo->afh_map[4], pafh_v2->afh_m, 4);
+ memcpy(&bt_linfo->afh_map[8], pafh_v2->afh_h, 2);
+ }
+ if (pafh_v2->map_type & RPT_BT_AFH_SEQ_LE) {
+ memcpy(&bt_linfo->afh_map_le[0], pafh_v2->afh_le_a, 4);
+ memcpy(&bt_linfo->afh_map_le[4], pafh_v2->afh_le_b, 1);
+ }
+ } else if (ver->fcxbtafh == 1) {
+ pafh_v1 = (struct rtw89_btc_fbtc_btafh *)pfinfo;
+ memcpy(&bt_linfo->afh_map[0], pafh_v1->afh_l, 4);
+ memcpy(&bt_linfo->afh_map[4], pafh_v1->afh_m, 4);
+ memcpy(&bt_linfo->afh_map[8], pafh_v1->afh_h, 2);
+ }
break;
case BTC_RPT_TYPE_BT_DEVICE:
a2dp->device_name = le32_to_cpu(pdev->dev_name);
@@ -827,76 +915,6 @@ static void _update_bt_report(struct rtw89_dev *rtwdev, u8 rpt_type, u8 *pfinfo)
}
}
-struct rtw89_btc_fbtc_cysta_cpu {
- u8 fver;
- u8 rsvd;
- u16 cycles;
- u16 cycles_a2dp[CXT_FLCTRL_MAX];
- u16 a2dpept;
- u16 a2dpeptto;
- u16 tavg_cycle[CXT_MAX];
- u16 tmax_cycle[CXT_MAX];
- u16 tmaxdiff_cycle[CXT_MAX];
- u16 tavg_a2dp[CXT_FLCTRL_MAX];
- u16 tmax_a2dp[CXT_FLCTRL_MAX];
- u16 tavg_a2dpept;
- u16 tmax_a2dpept;
- u16 tavg_lk;
- u16 tmax_lk;
- u32 slot_cnt[CXST_MAX];
- u32 bcn_cnt[CXBCN_MAX];
- u32 leakrx_cnt;
- u32 collision_cnt;
- u32 skip_cnt;
- u32 exception;
- u32 except_cnt;
- u16 tslot_cycle[BTC_CYCLE_SLOT_MAX];
-};
-
-static void rtw89_btc_fbtc_cysta_to_cpu(const struct rtw89_btc_fbtc_cysta *src,
- struct rtw89_btc_fbtc_cysta_cpu *dst)
-{
- static_assert(sizeof(*src) == sizeof(*dst));
-
-#define __CPY_U8(_x) ({dst->_x = src->_x; })
-#define __CPY_LE16(_x) ({dst->_x = le16_to_cpu(src->_x); })
-#define __CPY_LE16S(_x) ({int _i; for (_i = 0; _i < ARRAY_SIZE(dst->_x); _i++) \
- dst->_x[_i] = le16_to_cpu(src->_x[_i]); })
-#define __CPY_LE32(_x) ({dst->_x = le32_to_cpu(src->_x); })
-#define __CPY_LE32S(_x) ({int _i; for (_i = 0; _i < ARRAY_SIZE(dst->_x); _i++) \
- dst->_x[_i] = le32_to_cpu(src->_x[_i]); })
-
- __CPY_U8(fver);
- __CPY_U8(rsvd);
- __CPY_LE16(cycles);
- __CPY_LE16S(cycles_a2dp);
- __CPY_LE16(a2dpept);
- __CPY_LE16(a2dpeptto);
- __CPY_LE16S(tavg_cycle);
- __CPY_LE16S(tmax_cycle);
- __CPY_LE16S(tmaxdiff_cycle);
- __CPY_LE16S(tavg_a2dp);
- __CPY_LE16S(tmax_a2dp);
- __CPY_LE16(tavg_a2dpept);
- __CPY_LE16(tmax_a2dpept);
- __CPY_LE16(tavg_lk);
- __CPY_LE16(tmax_lk);
- __CPY_LE32S(slot_cnt);
- __CPY_LE32S(bcn_cnt);
- __CPY_LE32(leakrx_cnt);
- __CPY_LE32(collision_cnt);
- __CPY_LE32(skip_cnt);
- __CPY_LE32(exception);
- __CPY_LE32(except_cnt);
- __CPY_LE16S(tslot_cycle);
-
-#undef __CPY_U8
-#undef __CPY_LE16
-#undef __CPY_LE16S
-#undef __CPY_LE32
-#undef __CPY_LE32S
-}
-
#define BTC_LEAK_AP_TH 10
#define BTC_CYSTA_CHK_PERIOD 100
@@ -910,19 +928,15 @@ static u32 _chk_btc_report(struct rtw89_dev *rtwdev,
struct rtw89_btc_btf_fwinfo *pfwinfo,
u8 *prptbuf, u32 index)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_dm *dm = &btc->dm;
struct rtw89_btc_rpt_cmn_info *pcinfo = NULL;
struct rtw89_btc_wl_info *wl = &btc->cx.wl;
struct rtw89_btc_bt_info *bt = &btc->cx.bt;
- struct rtw89_btc_fbtc_rpt_ctrl *prpt;
- struct rtw89_btc_fbtc_rpt_ctrl_v1 *prpt_v1;
- struct rtw89_btc_fbtc_cysta *pcysta_le32 = NULL;
- struct rtw89_btc_fbtc_cysta_v1 *pcysta_v1 = NULL;
- struct rtw89_btc_fbtc_cysta_cpu pcysta[1];
+ union rtw89_btc_fbtc_rpt_ctrl_ver_info *prpt = NULL;
+ union rtw89_btc_fbtc_cysta_info *pcysta = NULL;
struct rtw89_btc_prpt *btc_prpt = NULL;
- struct rtw89_btc_fbtc_slot *rtp_slot = NULL;
void *rpt_content = NULL, *pfinfo = NULL;
u8 rpt_type = 0;
u16 wl_slot_set = 0, wl_slot_real = 0;
@@ -951,137 +965,141 @@ static u32 _chk_btc_report(struct rtw89_dev *rtwdev,
switch (rpt_type) {
case BTC_RPT_TYPE_CTRL:
pcinfo = &pfwinfo->rpt_ctrl.cinfo;
- if (chip->chip_id == RTL8852A) {
- pfinfo = &pfwinfo->rpt_ctrl.finfo;
- pcinfo->req_len = sizeof(pfwinfo->rpt_ctrl.finfo);
+ prpt = &pfwinfo->rpt_ctrl.finfo;
+ if (ver->fcxbtcrpt == 1) {
+ pfinfo = &pfwinfo->rpt_ctrl.finfo.v1;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_ctrl.finfo.v1);
+ } else if (ver->fcxbtcrpt == 4) {
+ pfinfo = &pfwinfo->rpt_ctrl.finfo.v4;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_ctrl.finfo.v4);
+ } else if (ver->fcxbtcrpt == 5) {
+ pfinfo = &pfwinfo->rpt_ctrl.finfo.v5;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_ctrl.finfo.v5);
} else {
- pfinfo = &pfwinfo->rpt_ctrl.finfo_v1;
- pcinfo->req_len = sizeof(pfwinfo->rpt_ctrl.finfo_v1);
+ goto err;
}
- pcinfo->req_fver = chip->fcxbtcrpt_ver;
- pcinfo->rx_len = rpt_len;
- pcinfo->rx_cnt++;
+ pcinfo->req_fver = ver->fcxbtcrpt;
break;
case BTC_RPT_TYPE_TDMA:
pcinfo = &pfwinfo->rpt_fbtc_tdma.cinfo;
- if (chip->chip_id == RTL8852A) {
- pfinfo = &pfwinfo->rpt_fbtc_tdma.finfo;
- pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_tdma.finfo);
+ if (ver->fcxtdma == 1) {
+ pfinfo = &pfwinfo->rpt_fbtc_tdma.finfo.v1;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_tdma.finfo.v1);
+ } else if (ver->fcxtdma == 3) {
+ pfinfo = &pfwinfo->rpt_fbtc_tdma.finfo.v3;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_tdma.finfo.v3);
} else {
- pfinfo = &pfwinfo->rpt_fbtc_tdma.finfo_v1;
- pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_tdma.finfo_v1);
+ goto err;
}
- pcinfo->req_fver = chip->fcxtdma_ver;
- pcinfo->rx_len = rpt_len;
- pcinfo->rx_cnt++;
+ pcinfo->req_fver = ver->fcxtdma;
break;
case BTC_RPT_TYPE_SLOT:
pcinfo = &pfwinfo->rpt_fbtc_slots.cinfo;
pfinfo = &pfwinfo->rpt_fbtc_slots.finfo;
pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_slots.finfo);
- pcinfo->req_fver = chip->fcxslots_ver;
- pcinfo->rx_len = rpt_len;
- pcinfo->rx_cnt++;
+ pcinfo->req_fver = ver->fcxslots;
break;
case BTC_RPT_TYPE_CYSTA:
pcinfo = &pfwinfo->rpt_fbtc_cysta.cinfo;
- if (chip->chip_id == RTL8852A) {
- pfinfo = &pfwinfo->rpt_fbtc_cysta.finfo;
- pcysta_le32 = &pfwinfo->rpt_fbtc_cysta.finfo;
- rtw89_btc_fbtc_cysta_to_cpu(pcysta_le32, pcysta);
- pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_cysta.finfo);
+ pcysta = &pfwinfo->rpt_fbtc_cysta.finfo;
+ if (ver->fcxcysta == 2) {
+ pfinfo = &pfwinfo->rpt_fbtc_cysta.finfo.v2;
+ pcysta->v2 = pfwinfo->rpt_fbtc_cysta.finfo.v2;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_cysta.finfo.v2);
+ } else if (ver->fcxcysta == 3) {
+ pfinfo = &pfwinfo->rpt_fbtc_cysta.finfo.v3;
+ pcysta->v3 = pfwinfo->rpt_fbtc_cysta.finfo.v3;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_cysta.finfo.v3);
+ } else if (ver->fcxcysta == 4) {
+ pfinfo = &pfwinfo->rpt_fbtc_cysta.finfo.v4;
+ pcysta->v4 = pfwinfo->rpt_fbtc_cysta.finfo.v4;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_cysta.finfo.v4);
} else {
- pfinfo = &pfwinfo->rpt_fbtc_cysta.finfo_v1;
- pcysta_v1 = &pfwinfo->rpt_fbtc_cysta.finfo_v1;
- pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_cysta.finfo_v1);
+ goto err;
}
- pcinfo->req_fver = chip->fcxcysta_ver;
- pcinfo->rx_len = rpt_len;
- pcinfo->rx_cnt++;
+ pcinfo->req_fver = ver->fcxcysta;
break;
case BTC_RPT_TYPE_STEP:
pcinfo = &pfwinfo->rpt_fbtc_step.cinfo;
- if (chip->chip_id == RTL8852A) {
- pfinfo = &pfwinfo->rpt_fbtc_step.finfo;
- pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_step.finfo.step[0]) *
+ if (ver->fcxstep == 2) {
+ pfinfo = &pfwinfo->rpt_fbtc_step.finfo.v2;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_step.finfo.v2.step[0]) *
trace_step +
- offsetof(struct rtw89_btc_fbtc_steps, step);
- } else {
- pfinfo = &pfwinfo->rpt_fbtc_step.finfo_v1;
- pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_step.finfo_v1.step[0]) *
+ offsetof(struct rtw89_btc_fbtc_steps_v2, step);
+ } else if (ver->fcxstep == 3) {
+ pfinfo = &pfwinfo->rpt_fbtc_step.finfo.v3;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_step.finfo.v3.step[0]) *
trace_step +
- offsetof(struct rtw89_btc_fbtc_steps_v1, step);
+ offsetof(struct rtw89_btc_fbtc_steps_v3, step);
+ } else {
+ goto err;
}
- pcinfo->req_fver = chip->fcxstep_ver;
- pcinfo->rx_len = rpt_len;
- pcinfo->rx_cnt++;
+ pcinfo->req_fver = ver->fcxstep;
break;
case BTC_RPT_TYPE_NULLSTA:
pcinfo = &pfwinfo->rpt_fbtc_nullsta.cinfo;
- if (chip->chip_id == RTL8852A) {
+ if (ver->fcxnullsta == 1) {
pfinfo = &pfwinfo->rpt_fbtc_nullsta.finfo;
- pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_nullsta.finfo);
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_nullsta.finfo.v1);
+ } else if (ver->fcxnullsta == 2) {
+ pfinfo = &pfwinfo->rpt_fbtc_nullsta.finfo.v2;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_nullsta.finfo.v2);
} else {
- pfinfo = &pfwinfo->rpt_fbtc_nullsta.finfo_v1;
- pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_nullsta.finfo_v1);
+ goto err;
}
- pcinfo->req_fver = chip->fcxnullsta_ver;
- pcinfo->rx_len = rpt_len;
- pcinfo->rx_cnt++;
+ pcinfo->req_fver = ver->fcxnullsta;
break;
case BTC_RPT_TYPE_MREG:
pcinfo = &pfwinfo->rpt_fbtc_mregval.cinfo;
pfinfo = &pfwinfo->rpt_fbtc_mregval.finfo;
pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_mregval.finfo);
- pcinfo->req_fver = chip->fcxmreg_ver;
- pcinfo->rx_len = rpt_len;
- pcinfo->rx_cnt++;
+ pcinfo->req_fver = ver->fcxmreg;
break;
case BTC_RPT_TYPE_GPIO_DBG:
pcinfo = &pfwinfo->rpt_fbtc_gpio_dbg.cinfo;
pfinfo = &pfwinfo->rpt_fbtc_gpio_dbg.finfo;
pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_gpio_dbg.finfo);
- pcinfo->req_fver = chip->fcxgpiodbg_ver;
- pcinfo->rx_len = rpt_len;
- pcinfo->rx_cnt++;
+ pcinfo->req_fver = ver->fcxgpiodbg;
break;
case BTC_RPT_TYPE_BT_VER:
pcinfo = &pfwinfo->rpt_fbtc_btver.cinfo;
pfinfo = &pfwinfo->rpt_fbtc_btver.finfo;
pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_btver.finfo);
- pcinfo->req_fver = chip->fcxbtver_ver;
- pcinfo->rx_len = rpt_len;
- pcinfo->rx_cnt++;
+ pcinfo->req_fver = ver->fcxbtver;
break;
case BTC_RPT_TYPE_BT_SCAN:
pcinfo = &pfwinfo->rpt_fbtc_btscan.cinfo;
pfinfo = &pfwinfo->rpt_fbtc_btscan.finfo;
pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_btscan.finfo);
- pcinfo->req_fver = chip->fcxbtscan_ver;
- pcinfo->rx_len = rpt_len;
- pcinfo->rx_cnt++;
+ pcinfo->req_fver = ver->fcxbtscan;
break;
case BTC_RPT_TYPE_BT_AFH:
pcinfo = &pfwinfo->rpt_fbtc_btafh.cinfo;
- pfinfo = &pfwinfo->rpt_fbtc_btafh.finfo;
- pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_btafh.finfo);
- pcinfo->req_fver = chip->fcxbtafh_ver;
- pcinfo->rx_len = rpt_len;
- pcinfo->rx_cnt++;
+ if (ver->fcxbtafh == 1) {
+ pfinfo = &pfwinfo->rpt_fbtc_btafh.finfo.v1;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_btafh.finfo.v1);
+ } else if (ver->fcxbtafh == 2) {
+ pfinfo = &pfwinfo->rpt_fbtc_btafh.finfo.v2;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_btafh.finfo.v2);
+ } else {
+ goto err;
+ }
+ pcinfo->req_fver = ver->fcxbtafh;
break;
case BTC_RPT_TYPE_BT_DEVICE:
pcinfo = &pfwinfo->rpt_fbtc_btdev.cinfo;
pfinfo = &pfwinfo->rpt_fbtc_btdev.finfo;
pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_btdev.finfo);
- pcinfo->req_fver = chip->fcxbtdevinfo_ver;
- pcinfo->rx_len = rpt_len;
- pcinfo->rx_cnt++;
+ pcinfo->req_fver = ver->fcxbtdevinfo;
break;
default:
pfwinfo->err[BTFRE_UNDEF_TYPE]++;
return 0;
}
+ pcinfo->rx_len = rpt_len;
+ pcinfo->rx_cnt++;
+
if (rpt_len != pcinfo->req_len) {
if (rpt_type < BTC_RPT_TYPE_MAX)
pfwinfo->len_mismch |= (0x1 << rpt_type);
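The reworked switch above has the same shape for every report type: each rpt_fbtc_*.finfo becomes a union of versioned structs, the matching fcx* field from the selected rtw89_btc_ver entry picks the member (unknown versions take the goto err path), and the common rx_len/rx_cnt bookkeeping is hoisted out of the per-case code. Schematically, as inferred from the pcysta->v2/v3/v4 accesses (member struct definitions abbreviated):

union rtw89_btc_fbtc_cysta_info {
	struct rtw89_btc_fbtc_cysta_v2 v2;
	struct rtw89_btc_fbtc_cysta_v3 v3;
	struct rtw89_btc_fbtc_cysta_v4 v4;
};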
@@ -1102,235 +1120,257 @@ static u32 _chk_btc_report(struct rtw89_dev *rtwdev,
memcpy(pfinfo, rpt_content, pcinfo->req_len);
pcinfo->valid = 1;
- if (rpt_type == BTC_RPT_TYPE_TDMA && chip->chip_id == RTL8852A) {
- rtw89_debug(rtwdev, RTW89_DBG_BTC,
- "[BTC], %s(): check %d %zu\n", __func__,
- BTC_DCNT_TDMA_NONSYNC, sizeof(dm->tdma_now));
-
- if (memcmp(&dm->tdma_now, &pfwinfo->rpt_fbtc_tdma.finfo,
- sizeof(dm->tdma_now)) != 0) {
- rtw89_debug(rtwdev, RTW89_DBG_BTC,
- "[BTC], %s(): %d tdma_now %x %x %x %x %x %x %x %x\n",
- __func__, BTC_DCNT_TDMA_NONSYNC,
- dm->tdma_now.type, dm->tdma_now.rxflctrl,
- dm->tdma_now.txpause, dm->tdma_now.wtgle_n,
- dm->tdma_now.leak_n, dm->tdma_now.ext_ctrl,
- dm->tdma_now.rxflctrl_role,
- dm->tdma_now.option_ctrl);
-
- rtw89_debug(rtwdev, RTW89_DBG_BTC,
- "[BTC], %s(): %d rpt_fbtc_tdma %x %x %x %x %x %x %x %x\n",
- __func__, BTC_DCNT_TDMA_NONSYNC,
- pfwinfo->rpt_fbtc_tdma.finfo.type,
- pfwinfo->rpt_fbtc_tdma.finfo.rxflctrl,
- pfwinfo->rpt_fbtc_tdma.finfo.txpause,
- pfwinfo->rpt_fbtc_tdma.finfo.wtgle_n,
- pfwinfo->rpt_fbtc_tdma.finfo.leak_n,
- pfwinfo->rpt_fbtc_tdma.finfo.ext_ctrl,
- pfwinfo->rpt_fbtc_tdma.finfo.rxflctrl_role,
- pfwinfo->rpt_fbtc_tdma.finfo.option_ctrl);
- }
+ switch (rpt_type) {
+ case BTC_RPT_TYPE_CTRL:
+ if (ver->fcxbtcrpt == 1) {
+ prpt->v1 = pfwinfo->rpt_ctrl.finfo.v1;
+ btc->fwinfo.rpt_en_map = prpt->v1.rpt_enable;
+ wl->ver_info.fw_coex = prpt->v1.wl_fw_coex_ver;
+ wl->ver_info.fw = prpt->v1.wl_fw_ver;
+ dm->wl_fw_cx_offload = !!prpt->v1.wl_fw_cx_offload;
+
+ _chk_btc_err(rtwdev, BTC_DCNT_RPT_FREEZE,
+ pfwinfo->event[BTF_EVNT_RPT]);
+
+ /* To avoid I/O if WL LPS or power-off */
+ if (wl->status.map.lps != BTC_LPS_RF_OFF &&
+ !wl->status.map.rf_off) {
+ rtwdev->chip->ops->btc_update_bt_cnt(rtwdev);
+ _chk_btc_err(rtwdev, BTC_DCNT_BTCNT_FREEZE, 0);
+
+ btc->cx.cnt_bt[BTC_BCNT_POLUT] =
+ rtw89_mac_get_plt_cnt(rtwdev,
+ RTW89_MAC_0);
+ }
+ } else if (ver->fcxbtcrpt == 4) {
+ prpt->v4 = pfwinfo->rpt_ctrl.finfo.v4;
+ btc->fwinfo.rpt_en_map = le32_to_cpu(prpt->v4.rpt_info.en);
+ wl->ver_info.fw_coex = le32_to_cpu(prpt->v4.wl_fw_info.cx_ver);
+ wl->ver_info.fw = le32_to_cpu(prpt->v4.wl_fw_info.fw_ver);
+ dm->wl_fw_cx_offload = !!le32_to_cpu(prpt->v4.wl_fw_info.cx_offload);
+
+ for (i = RTW89_PHY_0; i < RTW89_PHY_MAX; i++)
+ memcpy(&dm->gnt.band[i], &prpt->v4.gnt_val[i],
+ sizeof(dm->gnt.band[i]));
+
+ btc->cx.cnt_bt[BTC_BCNT_HIPRI_TX] =
+ le32_to_cpu(prpt->v4.bt_cnt[BTC_BCNT_HI_TX]);
+ btc->cx.cnt_bt[BTC_BCNT_HIPRI_RX] =
+ le32_to_cpu(prpt->v4.bt_cnt[BTC_BCNT_HI_RX]);
+ btc->cx.cnt_bt[BTC_BCNT_LOPRI_TX] =
+ le32_to_cpu(prpt->v4.bt_cnt[BTC_BCNT_LO_TX]);
+ btc->cx.cnt_bt[BTC_BCNT_LOPRI_RX] =
+ le32_to_cpu(prpt->v4.bt_cnt[BTC_BCNT_LO_RX]);
+ btc->cx.cnt_bt[BTC_BCNT_POLUT] =
+ le32_to_cpu(prpt->v4.bt_cnt[BTC_BCNT_POLLUTED]);
- _chk_btc_err(rtwdev, BTC_DCNT_TDMA_NONSYNC,
- memcmp(&dm->tdma_now,
- &pfwinfo->rpt_fbtc_tdma.finfo,
- sizeof(dm->tdma_now)));
- } else if (rpt_type == BTC_RPT_TYPE_TDMA) {
- rtw89_debug(rtwdev, RTW89_DBG_BTC,
- "[BTC], %s(): check %d %zu\n", __func__,
- BTC_DCNT_TDMA_NONSYNC, sizeof(dm->tdma_now));
-
- if (memcmp(&dm->tdma_now, &pfwinfo->rpt_fbtc_tdma.finfo_v1.tdma,
- sizeof(dm->tdma_now)) != 0) {
- rtw89_debug(rtwdev, RTW89_DBG_BTC,
- "[BTC], %s(): %d tdma_now %x %x %x %x %x %x %x %x\n",
- __func__, BTC_DCNT_TDMA_NONSYNC,
- dm->tdma_now.type, dm->tdma_now.rxflctrl,
- dm->tdma_now.txpause, dm->tdma_now.wtgle_n,
- dm->tdma_now.leak_n, dm->tdma_now.ext_ctrl,
- dm->tdma_now.rxflctrl_role,
- dm->tdma_now.option_ctrl);
- rtw89_debug(rtwdev, RTW89_DBG_BTC,
- "[BTC], %s(): %d rpt_fbtc_tdma %x %x %x %x %x %x %x %x\n",
- __func__, BTC_DCNT_TDMA_NONSYNC,
- pfwinfo->rpt_fbtc_tdma.finfo_v1.tdma.type,
- pfwinfo->rpt_fbtc_tdma.finfo_v1.tdma.rxflctrl,
- pfwinfo->rpt_fbtc_tdma.finfo_v1.tdma.txpause,
- pfwinfo->rpt_fbtc_tdma.finfo_v1.tdma.wtgle_n,
- pfwinfo->rpt_fbtc_tdma.finfo_v1.tdma.leak_n,
- pfwinfo->rpt_fbtc_tdma.finfo_v1.tdma.ext_ctrl,
- pfwinfo->rpt_fbtc_tdma.finfo_v1.tdma.rxflctrl_role,
- pfwinfo->rpt_fbtc_tdma.finfo_v1.tdma.option_ctrl);
- }
+ _chk_btc_err(rtwdev, BTC_DCNT_BTCNT_FREEZE, 0);
+ _chk_btc_err(rtwdev, BTC_DCNT_RPT_FREEZE,
+ pfwinfo->event[BTF_EVNT_RPT]);
- _chk_btc_err(rtwdev, BTC_DCNT_TDMA_NONSYNC,
- memcmp(&dm->tdma_now,
- &pfwinfo->rpt_fbtc_tdma.finfo_v1.tdma,
- sizeof(dm->tdma_now)));
- }
+ if (le32_to_cpu(prpt->v4.bt_cnt[BTC_BCNT_RFK_TIMEOUT]) > 0)
+ bt->rfk_info.map.timeout = 1;
+ else
+ bt->rfk_info.map.timeout = 0;
+
+ dm->error.map.bt_rfk_timeout = bt->rfk_info.map.timeout;
+ } else if (ver->fcxbtcrpt == 5) {
+ prpt->v5 = pfwinfo->rpt_ctrl.finfo.v5;
+ pfwinfo->rpt_en_map = le32_to_cpu(prpt->v5.rpt_info.en);
+ wl->ver_info.fw_coex = le32_to_cpu(prpt->v5.rpt_info.cx_ver);
+ wl->ver_info.fw = le32_to_cpu(prpt->v5.rpt_info.fw_ver);
+ dm->wl_fw_cx_offload = 0;
+
+ for (i = RTW89_PHY_0; i < RTW89_PHY_MAX; i++)
+ memcpy(&dm->gnt.band[i], &prpt->v5.gnt_val[i][0],
+ sizeof(dm->gnt.band[i]));
+
+ btc->cx.cnt_bt[BTC_BCNT_HIPRI_TX] =
+ le16_to_cpu(prpt->v5.bt_cnt[BTC_BCNT_HI_TX]);
+ btc->cx.cnt_bt[BTC_BCNT_HIPRI_RX] =
+ le16_to_cpu(prpt->v5.bt_cnt[BTC_BCNT_HI_RX]);
+ btc->cx.cnt_bt[BTC_BCNT_LOPRI_TX] =
+ le16_to_cpu(prpt->v5.bt_cnt[BTC_BCNT_LO_TX]);
+ btc->cx.cnt_bt[BTC_BCNT_LOPRI_RX] =
+ le16_to_cpu(prpt->v5.bt_cnt[BTC_BCNT_LO_RX]);
+ btc->cx.cnt_bt[BTC_BCNT_POLUT] =
+ le16_to_cpu(prpt->v5.bt_cnt[BTC_BCNT_POLLUTED]);
+
+ _chk_btc_err(rtwdev, BTC_DCNT_BTCNT_FREEZE, 0);
+ _chk_btc_err(rtwdev, BTC_DCNT_RPT_FREEZE,
+ pfwinfo->event[BTF_EVNT_RPT]);
- if (rpt_type == BTC_RPT_TYPE_SLOT) {
+ dm->error.map.bt_rfk_timeout = bt->rfk_info.map.timeout;
+ } else {
+ goto err;
+ }
+ break;
+ case BTC_RPT_TYPE_TDMA:
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], %s(): check %d %zu\n", __func__,
+ BTC_DCNT_TDMA_NONSYNC,
+ sizeof(dm->tdma_now));
+ if (ver->fcxtdma == 1)
+ _chk_btc_err(rtwdev, BTC_DCNT_TDMA_NONSYNC,
+ memcmp(&dm->tdma_now,
+ &pfwinfo->rpt_fbtc_tdma.finfo.v1,
+ sizeof(dm->tdma_now)));
+ else if (ver->fcxtdma == 3)
+ _chk_btc_err(rtwdev, BTC_DCNT_TDMA_NONSYNC,
+ memcmp(&dm->tdma_now,
+ &pfwinfo->rpt_fbtc_tdma.finfo.v3.tdma,
+ sizeof(dm->tdma_now)));
+ else
+ goto err;
+ break;
+ case BTC_RPT_TYPE_SLOT:
rtw89_debug(rtwdev, RTW89_DBG_BTC,
"[BTC], %s(): check %d %zu\n",
__func__, BTC_DCNT_SLOT_NONSYNC,
sizeof(dm->slot_now));
-
- if (memcmp(dm->slot_now, pfwinfo->rpt_fbtc_slots.finfo.slot,
- sizeof(dm->slot_now)) != 0) {
- for (i = 0; i < CXST_MAX; i++) {
- rtp_slot =
- &pfwinfo->rpt_fbtc_slots.finfo.slot[i];
- if (memcmp(&dm->slot_now[i], rtp_slot,
- sizeof(dm->slot_now[i])) != 0) {
- rtw89_debug(rtwdev, RTW89_DBG_BTC,
- "[BTC], %s(): %d slot_now[%d] dur=0x%04x tbl=%08x type=0x%04x\n",
- __func__,
- BTC_DCNT_SLOT_NONSYNC, i,
- dm->slot_now[i].dur,
- dm->slot_now[i].cxtbl,
- dm->slot_now[i].cxtype);
-
- rtw89_debug(rtwdev, RTW89_DBG_BTC,
- "[BTC], %s(): %d rpt_fbtc_slots[%d] dur=0x%04x tbl=%08x type=0x%04x\n",
- __func__,
- BTC_DCNT_SLOT_NONSYNC, i,
- rtp_slot->dur,
- rtp_slot->cxtbl,
- rtp_slot->cxtype);
- }
- }
- }
_chk_btc_err(rtwdev, BTC_DCNT_SLOT_NONSYNC,
memcmp(dm->slot_now,
pfwinfo->rpt_fbtc_slots.finfo.slot,
sizeof(dm->slot_now)));
- }
+ break;
+ case BTC_RPT_TYPE_CYSTA:
+ if (ver->fcxcysta == 2) {
+ if (le16_to_cpu(pcysta->v2.cycles) < BTC_CYSTA_CHK_PERIOD)
+ break;
+ /* Check Leak-AP */
+ if (le32_to_cpu(pcysta->v2.slot_cnt[CXST_LK]) != 0 &&
+ le32_to_cpu(pcysta->v2.leakrx_cnt) != 0 && dm->tdma_now.rxflctrl) {
+ if (le32_to_cpu(pcysta->v2.slot_cnt[CXST_LK]) <
+ BTC_LEAK_AP_TH * le32_to_cpu(pcysta->v2.leakrx_cnt))
+ dm->leak_ap = 1;
+ }
- if (rpt_type == BTC_RPT_TYPE_CYSTA && chip->chip_id == RTL8852A &&
- pcysta->cycles >= BTC_CYSTA_CHK_PERIOD) {
- /* Check Leak-AP */
- if (pcysta->slot_cnt[CXST_LK] != 0 &&
- pcysta->leakrx_cnt != 0 && dm->tdma_now.rxflctrl) {
- if (pcysta->slot_cnt[CXST_LK] <
- BTC_LEAK_AP_TH * pcysta->leakrx_cnt)
- dm->leak_ap = 1;
- }
+ /* Check diff time between WL slot and W1/E2G slot */
+ if (dm->tdma_now.type == CXTDMA_OFF &&
+ dm->tdma_now.ext_ctrl == CXECTL_EXT)
+ wl_slot_set = le16_to_cpu(dm->slot_now[CXST_E2G].dur);
+ else
+ wl_slot_set = le16_to_cpu(dm->slot_now[CXST_W1].dur);
- /* Check diff time between WL slot and W1/E2G slot */
- if (dm->tdma_now.type == CXTDMA_OFF &&
- dm->tdma_now.ext_ctrl == CXECTL_EXT)
- wl_slot_set = le16_to_cpu(dm->slot_now[CXST_E2G].dur);
- else
- wl_slot_set = le16_to_cpu(dm->slot_now[CXST_W1].dur);
+ if (le16_to_cpu(pcysta->v2.tavg_cycle[CXT_WL]) > wl_slot_set) {
+ diff_t = le16_to_cpu(pcysta->v2.tavg_cycle[CXT_WL]) - wl_slot_set;
+ _chk_btc_err(rtwdev,
+ BTC_DCNT_WL_SLOT_DRIFT, diff_t);
+ }
- if (pcysta->tavg_cycle[CXT_WL] > wl_slot_set) {
- diff_t = pcysta->tavg_cycle[CXT_WL] - wl_slot_set;
- _chk_btc_err(rtwdev, BTC_DCNT_WL_SLOT_DRIFT, diff_t);
- }
+ _chk_btc_err(rtwdev, BTC_DCNT_W1_FREEZE,
+ le32_to_cpu(pcysta->v2.slot_cnt[CXST_W1]));
+ _chk_btc_err(rtwdev, BTC_DCNT_B1_FREEZE,
+ le32_to_cpu(pcysta->v2.slot_cnt[CXST_B1]));
+ _chk_btc_err(rtwdev, BTC_DCNT_CYCLE_FREEZE,
+ le16_to_cpu(pcysta->v2.cycles));
+ } else if (ver->fcxcysta == 3) {
+ if (le16_to_cpu(pcysta->v3.cycles) < BTC_CYSTA_CHK_PERIOD)
+ break;
- _chk_btc_err(rtwdev, BTC_DCNT_W1_FREEZE, pcysta->slot_cnt[CXST_W1]);
- _chk_btc_err(rtwdev, BTC_DCNT_W1_FREEZE, pcysta->slot_cnt[CXST_B1]);
- _chk_btc_err(rtwdev, BTC_DCNT_CYCLE_FREEZE, (u32)pcysta->cycles);
- } else if (rpt_type == BTC_RPT_TYPE_CYSTA && pcysta_v1 &&
- le16_to_cpu(pcysta_v1->cycles) >= BTC_CYSTA_CHK_PERIOD) {
- cnt_leak_slot = le32_to_cpu(pcysta_v1->slot_cnt[CXST_LK]);
- cnt_rx_imr = le32_to_cpu(pcysta_v1->leak_slot.cnt_rximr);
- /* Check Leak-AP */
- if (cnt_leak_slot != 0 && cnt_rx_imr != 0 &&
- dm->tdma_now.rxflctrl) {
- if (cnt_leak_slot < BTC_LEAK_AP_TH * cnt_rx_imr)
- dm->leak_ap = 1;
- }
+ cnt_leak_slot = le32_to_cpu(pcysta->v3.slot_cnt[CXST_LK]);
+ cnt_rx_imr = le32_to_cpu(pcysta->v3.leak_slot.cnt_rximr);
- /* Check diff time between real WL slot and W1 slot */
- if (dm->tdma_now.type == CXTDMA_OFF) {
- wl_slot_set = le16_to_cpu(dm->slot_now[CXST_W1].dur);
- wl_slot_real = le16_to_cpu(pcysta_v1->cycle_time.tavg[CXT_WL]);
- if (wl_slot_real > wl_slot_set) {
- diff_t = wl_slot_real - wl_slot_set;
- _chk_btc_err(rtwdev, BTC_DCNT_WL_SLOT_DRIFT, diff_t);
+ /* Check Leak-AP */
+ if (cnt_leak_slot != 0 && cnt_rx_imr != 0 &&
+ dm->tdma_now.rxflctrl) {
+ if (cnt_leak_slot < BTC_LEAK_AP_TH * cnt_rx_imr)
+ dm->leak_ap = 1;
}
- }
- /* Check diff time between real BT slot and EBT/E5G slot */
- if (dm->tdma_now.type == CXTDMA_OFF &&
- dm->tdma_now.ext_ctrl == CXECTL_EXT &&
- btc->bt_req_len != 0) {
- bt_slot_real = le16_to_cpu(pcysta_v1->cycle_time.tavg[CXT_BT]);
+ /* Check diff time between real WL slot and W1 slot */
+ if (dm->tdma_now.type == CXTDMA_OFF) {
+ wl_slot_set = le16_to_cpu(dm->slot_now[CXST_W1].dur);
+ wl_slot_real = le16_to_cpu(pcysta->v3.cycle_time.tavg[CXT_WL]);
+ if (wl_slot_real > wl_slot_set) {
+ diff_t = wl_slot_real - wl_slot_set;
+ _chk_btc_err(rtwdev, BTC_DCNT_WL_SLOT_DRIFT, diff_t);
+ }
+ }
- if (btc->bt_req_len > bt_slot_real) {
- diff_t = btc->bt_req_len - bt_slot_real;
- _chk_btc_err(rtwdev, BTC_DCNT_BT_SLOT_DRIFT, diff_t);
+ /* Check diff time between real BT slot and EBT/E5G slot */
+ if (dm->tdma_now.type == CXTDMA_OFF &&
+ dm->tdma_now.ext_ctrl == CXECTL_EXT &&
+ btc->bt_req_len != 0) {
+ bt_slot_real = le16_to_cpu(pcysta->v3.cycle_time.tavg[CXT_BT]);
+ if (btc->bt_req_len > bt_slot_real) {
+ diff_t = btc->bt_req_len - bt_slot_real;
+ _chk_btc_err(rtwdev, BTC_DCNT_BT_SLOT_DRIFT, diff_t);
+ }
}
- }
- _chk_btc_err(rtwdev, BTC_DCNT_W1_FREEZE,
- le32_to_cpu(pcysta_v1->slot_cnt[CXST_W1]));
- _chk_btc_err(rtwdev, BTC_DCNT_B1_FREEZE,
- le32_to_cpu(pcysta_v1->slot_cnt[CXST_B1]));
- _chk_btc_err(rtwdev, BTC_DCNT_CYCLE_FREEZE,
- (u32)le16_to_cpu(pcysta_v1->cycles));
- }
+ _chk_btc_err(rtwdev, BTC_DCNT_W1_FREEZE,
+ le32_to_cpu(pcysta->v3.slot_cnt[CXST_W1]));
+ _chk_btc_err(rtwdev, BTC_DCNT_B1_FREEZE,
+ le32_to_cpu(pcysta->v3.slot_cnt[CXST_B1]));
+ _chk_btc_err(rtwdev, BTC_DCNT_CYCLE_FREEZE,
+ le16_to_cpu(pcysta->v3.cycles));
+ } else if (ver->fcxcysta == 4) {
+ if (le16_to_cpu(pcysta->v4.cycles) < BTC_CYSTA_CHK_PERIOD)
+ break;
- if (rpt_type == BTC_RPT_TYPE_CTRL && chip->chip_id == RTL8852A) {
- prpt = &pfwinfo->rpt_ctrl.finfo;
- btc->fwinfo.rpt_en_map = prpt->rpt_enable;
- wl->ver_info.fw_coex = prpt->wl_fw_coex_ver;
- wl->ver_info.fw = prpt->wl_fw_ver;
- dm->wl_fw_cx_offload = !!prpt->wl_fw_cx_offload;
+ cnt_leak_slot = le16_to_cpu(pcysta->v4.slot_cnt[CXST_LK]);
+ cnt_rx_imr = le32_to_cpu(pcysta->v4.leak_slot.cnt_rximr);
- _chk_btc_err(rtwdev, BTC_DCNT_RPT_FREEZE,
- pfwinfo->event[BTF_EVNT_RPT]);
+ /* Check Leak-AP */
+ if (cnt_leak_slot != 0 && cnt_rx_imr != 0 &&
+ dm->tdma_now.rxflctrl) {
+ if (cnt_leak_slot < BTC_LEAK_AP_TH * cnt_rx_imr)
+ dm->leak_ap = 1;
+ }
- /* To avoid I/O if WL LPS or power-off */
- if (wl->status.map.lps != BTC_LPS_RF_OFF && !wl->status.map.rf_off) {
- rtwdev->chip->ops->btc_update_bt_cnt(rtwdev);
- _chk_btc_err(rtwdev, BTC_DCNT_BTCNT_FREEZE, 0);
+ /* Check diff time between real WL slot and W1 slot */
+ if (dm->tdma_now.type == CXTDMA_OFF) {
+ wl_slot_set = le16_to_cpu(dm->slot_now[CXST_W1].dur);
+ wl_slot_real = le16_to_cpu(pcysta->v4.cycle_time.tavg[CXT_WL]);
+ if (wl_slot_real > wl_slot_set) {
+ diff_t = wl_slot_real - wl_slot_set;
+ _chk_btc_err(rtwdev, BTC_DCNT_WL_SLOT_DRIFT, diff_t);
+ }
+ }
- btc->cx.cnt_bt[BTC_BCNT_POLUT] =
- rtw89_mac_get_plt_cnt(rtwdev, RTW89_MAC_0);
- }
- } else if (rpt_type == BTC_RPT_TYPE_CTRL) {
- prpt_v1 = &pfwinfo->rpt_ctrl.finfo_v1;
- btc->fwinfo.rpt_en_map = le32_to_cpu(prpt_v1->rpt_info.en);
- wl->ver_info.fw_coex = le32_to_cpu(prpt_v1->wl_fw_info.cx_ver);
- wl->ver_info.fw = le32_to_cpu(prpt_v1->wl_fw_info.fw_ver);
- dm->wl_fw_cx_offload = !!le32_to_cpu(prpt_v1->wl_fw_info.cx_offload);
-
- for (i = RTW89_PHY_0; i < RTW89_PHY_MAX; i++)
- memcpy(&dm->gnt.band[i], &prpt_v1->gnt_val[i],
- sizeof(dm->gnt.band[i]));
-
- btc->cx.cnt_bt[BTC_BCNT_HIPRI_TX] = le32_to_cpu(prpt_v1->bt_cnt[BTC_BCNT_HI_TX]);
- btc->cx.cnt_bt[BTC_BCNT_HIPRI_RX] = le32_to_cpu(prpt_v1->bt_cnt[BTC_BCNT_HI_RX]);
- btc->cx.cnt_bt[BTC_BCNT_LOPRI_TX] = le32_to_cpu(prpt_v1->bt_cnt[BTC_BCNT_LO_TX]);
- btc->cx.cnt_bt[BTC_BCNT_LOPRI_RX] = le32_to_cpu(prpt_v1->bt_cnt[BTC_BCNT_LO_RX]);
- btc->cx.cnt_bt[BTC_BCNT_POLUT] = le32_to_cpu(prpt_v1->bt_cnt[BTC_BCNT_POLLUTED]);
-
- _chk_btc_err(rtwdev, BTC_DCNT_BTCNT_FREEZE, 0);
- _chk_btc_err(rtwdev, BTC_DCNT_RPT_FREEZE,
- pfwinfo->event[BTF_EVNT_RPT]);
-
- if (le32_to_cpu(prpt_v1->bt_cnt[BTC_BCNT_RFK_TIMEOUT]) > 0)
- bt->rfk_info.map.timeout = 1;
- else
- bt->rfk_info.map.timeout = 0;
+ /* Check diff time between real BT slot and EBT/E5G slot */
+ if (dm->tdma_now.type == CXTDMA_OFF &&
+ dm->tdma_now.ext_ctrl == CXECTL_EXT &&
+ btc->bt_req_len != 0) {
+ bt_slot_real = le16_to_cpu(pcysta->v4.cycle_time.tavg[CXT_BT]);
- dm->error.map.bt_rfk_timeout = bt->rfk_info.map.timeout;
- }
+ if (btc->bt_req_len > bt_slot_real) {
+ diff_t = btc->bt_req_len - bt_slot_real;
+ _chk_btc_err(rtwdev, BTC_DCNT_BT_SLOT_DRIFT, diff_t);
+ }
+ }
- if (rpt_type >= BTC_RPT_TYPE_BT_VER &&
- rpt_type <= BTC_RPT_TYPE_BT_DEVICE)
+ _chk_btc_err(rtwdev, BTC_DCNT_W1_FREEZE,
+ le16_to_cpu(pcysta->v4.slot_cnt[CXST_W1]));
+ _chk_btc_err(rtwdev, BTC_DCNT_B1_FREEZE,
+ le16_to_cpu(pcysta->v4.slot_cnt[CXST_B1]));
+ _chk_btc_err(rtwdev, BTC_DCNT_CYCLE_FREEZE,
+ le16_to_cpu(pcysta->v4.cycles));
+ } else {
+ goto err;
+ }
+ break;
+ case BTC_RPT_TYPE_BT_VER:
+ case BTC_RPT_TYPE_BT_SCAN:
+ case BTC_RPT_TYPE_BT_AFH:
+ case BTC_RPT_TYPE_BT_DEVICE:
_update_bt_report(rtwdev, rpt_type, pfinfo);
-
+ break;
+ }
return (rpt_len + BTC_RPT_HDR_SIZE);
+
+err:
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], %s(): Undefined version for type=%d\n", __func__, rpt_type);
+ return 0;
}
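/* Minimal sketch of the dispatch shape used above: switch on the report
 * type, then branch on the per-feature firmware version (ver->fcxbtcrpt,
 * ver->fcxtdma, ver->fcxcysta, ...) to pick the matching union member,
 * rejecting unknown versions the way the common err label does. The
 * demo_* names are hypothetical, not driver API.
 */
struct demo_v1 { int tdma; };
struct demo_v3 { int fver; int tdma; };
union demo_finfo { struct demo_v1 v1; struct demo_v3 v3; };

static int demo_parse_tdma(unsigned char fcx_ver, const union demo_finfo *fi)
{
	if (fcx_ver == 1)
		return fi->v1.tdma;	/* v1 layout: bare TDMA struct */
	if (fcx_ver == 3)
		return fi->v3.tdma;	/* v3 layout: fver header + TDMA */
	return -1;			/* undefined version, like "goto err" */
}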
static void _parse_btc_report(struct rtw89_dev *rtwdev,
struct rtw89_btc_btf_fwinfo *pfwinfo,
u8 *pbuf, u32 buf_len)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
+ const struct rtw89_btc_ver *ver = rtwdev->btc.ver;
struct rtw89_btc_prpt *btc_prpt = NULL;
u32 index = 0, rpt_len = 0;
@@ -1340,7 +1380,7 @@ static void _parse_btc_report(struct rtw89_dev *rtwdev,
while (pbuf) {
btc_prpt = (struct rtw89_btc_prpt *)&pbuf[index];
- if (index + 2 >= chip->btc_fwinfo_buf)
+ if (index + 2 >= ver->info_buf)
break;
/* At least 3 bytes: type(1) & len(2) */
rpt_len = le16_to_cpu(btc_prpt->len);
@@ -1358,12 +1398,12 @@ static void _parse_btc_report(struct rtw89_dev *rtwdev,
static void _append_tdma(struct rtw89_dev *rtwdev)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_dm *dm = &btc->dm;
struct rtw89_btc_btf_tlv *tlv;
struct rtw89_btc_fbtc_tdma *v;
- struct rtw89_btc_fbtc_tdma_v1 *v1;
+ struct rtw89_btc_fbtc_tdma_v3 *v3;
u16 len = btc->policy_len;
if (!btc->update_policy_force &&
@@ -1376,17 +1416,17 @@ static void _append_tdma(struct rtw89_dev *rtwdev)
tlv = (struct rtw89_btc_btf_tlv *)&btc->policy[len];
tlv->type = CXPOLICY_TDMA;
- if (chip->chip_id == RTL8852A) {
+ if (ver->fcxtdma == 1) {
v = (struct rtw89_btc_fbtc_tdma *)&tlv->val[0];
tlv->len = sizeof(*v);
memcpy(v, &dm->tdma, sizeof(*v));
- btc->policy_len += BTC_TLV_HDR_LEN + sizeof(*v);
+ btc->policy_len += BTC_TLV_HDR_LEN + sizeof(*v);
} else {
- tlv->len = sizeof(*v1);
- v1 = (struct rtw89_btc_fbtc_tdma_v1 *)&tlv->val[0];
- v1->fver = chip->fcxtdma_ver;
- v1->tdma = dm->tdma;
- btc->policy_len += BTC_TLV_HDR_LEN + sizeof(*v1);
+ tlv->len = sizeof(*v3);
+ v3 = (struct rtw89_btc_fbtc_tdma_v3 *)&tlv->val[0];
+ v3->fver = ver->fcxtdma;
+ memcpy(&v3->tdma, &dm->tdma, sizeof(v3->tdma));
+ btc->policy_len += BTC_TLV_HDR_LEN + sizeof(*v3);
}
rtw89_debug(rtwdev, RTW89_DBG_BTC,
@@ -1441,22 +1481,169 @@ static void _append_slot(struct rtw89_dev *rtwdev)
__func__, cnt);
}
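/* Minimal sketch of the TLV append step performed by _append_tdma() above:
 * a one-byte type, a one-byte length, then the version-specific payload,
 * with policy_len advanced past header plus payload. The demo_* names and
 * the 2-byte header size are assumptions, not driver API.
 */
#include <string.h>

struct demo_tlv { unsigned char type, len, val[]; };
#define DEMO_TLV_HDR_LEN 2

static unsigned short demo_append_tlv(unsigned char *policy,
				      unsigned short len, unsigned char type,
				      const void *payload, unsigned char plen)
{
	struct demo_tlv *tlv = (struct demo_tlv *)&policy[len];

	tlv->type = type;			/* e.g. CXPOLICY_TDMA */
	tlv->len = plen;
	memcpy(tlv->val, payload, plen);
	return len + DEMO_TLV_HDR_LEN + plen;	/* new policy_len */
}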
+static u32 rtw89_btc_fw_rpt_ver(struct rtw89_dev *rtwdev, u32 rpt_map)
+{
+ struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
+ u32 bit_map = 0;
+
+ switch (rpt_map) {
+ case RPT_EN_TDMA:
+ bit_map = BIT(0);
+ break;
+ case RPT_EN_CYCLE:
+ bit_map = BIT(1);
+ break;
+ case RPT_EN_MREG:
+ bit_map = BIT(2);
+ break;
+ case RPT_EN_BT_VER_INFO:
+ bit_map = BIT(3);
+ break;
+ case RPT_EN_BT_SCAN_INFO:
+ bit_map = BIT(4);
+ break;
+ case RPT_EN_BT_DEVICE_INFO:
+ switch (ver->frptmap) {
+ case 0:
+ case 1:
+ case 2:
+ bit_map = BIT(6);
+ break;
+ case 3:
+ bit_map = BIT(5);
+ break;
+ default:
+ break;
+ }
+ break;
+ case RPT_EN_BT_AFH_MAP:
+ switch (ver->frptmap) {
+ case 0:
+ case 1:
+ case 2:
+ bit_map = BIT(5);
+ break;
+ case 3:
+ bit_map = BIT(6);
+ break;
+ default:
+ break;
+ }
+ break;
+ case RPT_EN_BT_AFH_MAP_LE:
+ switch (ver->frptmap) {
+ case 2:
+ bit_map = BIT(8);
+ break;
+ case 3:
+ bit_map = BIT(7);
+ break;
+ default:
+ break;
+ }
+ break;
+ case RPT_EN_FW_STEP_INFO:
+ switch (ver->frptmap) {
+ case 1:
+ case 2:
+ bit_map = BIT(7);
+ break;
+ case 3:
+ bit_map = BIT(8);
+ break;
+ default:
+ break;
+ }
+ break;
+ case RPT_EN_TEST:
+ bit_map = BIT(31);
+ break;
+ case RPT_EN_WL_ALL:
+ switch (ver->frptmap) {
+ case 0:
+ case 1:
+ case 2:
+ bit_map = GENMASK(2, 0);
+ break;
+ case 3:
+ bit_map = GENMASK(2, 0) | BIT(8);
+ break;
+ default:
+ break;
+ }
+ break;
+ case RPT_EN_BT_ALL:
+ switch (ver->frptmap) {
+ case 0:
+ case 1:
+ bit_map = GENMASK(6, 3);
+ break;
+ case 2:
+ bit_map = GENMASK(6, 3) | BIT(8);
+ break;
+ case 3:
+ bit_map = GENMASK(7, 3);
+ break;
+ default:
+ break;
+ }
+ break;
+ case RPT_EN_ALL:
+ switch (ver->frptmap) {
+ case 0:
+ bit_map = GENMASK(6, 0);
+ break;
+ case 1:
+ bit_map = GENMASK(7, 0);
+ break;
+ case 2:
+ case 3:
+ bit_map = GENMASK(8, 0);
+ break;
+ default:
+ break;
+ }
+ break;
+ case RPT_EN_MONITER:
+ switch (ver->frptmap) {
+ case 0:
+ case 1:
+ bit_map = GENMASK(6, 2);
+ break;
+ case 2:
+ bit_map = GENMASK(6, 2) | BIT(8);
+ break;
+ case 3:
+ bit_map = GENMASK(8, 2);
+ break;
+ default:
+ break;
+ }
+ break;
+ }
+
+ return bit_map;
+}
+
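/* The helper above decouples the driver-side RPT_EN_* identifiers from the
 * firmware bit positions, which moved between frptmap revisions (e.g.
 * BT_DEVICE_INFO and BT_AFH_MAP swap BIT(5)/BIT(6) at frptmap == 3), so
 * callers translate first and only then mask the enable map. A minimal
 * table-driven sketch of one such mapping; demo_* names are hypothetical.
 */
static unsigned int demo_bt_dev_info_bit(unsigned int frptmap)
{
	static const unsigned int bit_by_rev[] = {
		1u << 6, 1u << 6, 1u << 6,	/* frptmap 0..2 */
		1u << 5,			/* frptmap 3 */
	};

	return frptmap < 4 ? bit_by_rev[frptmap] : 0; /* 0: unsupported */
}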
static void rtw89_btc_fw_en_rpt(struct rtw89_dev *rtwdev,
u32 rpt_map, bool rpt_state)
{
struct rtw89_btc *btc = &rtwdev->btc;
struct rtw89_btc_btf_fwinfo *fwinfo = &btc->fwinfo;
struct rtw89_btc_btf_set_report r = {0};
- u32 val = 0;
+ u32 val, bit_map;
+
+ bit_map = rtw89_btc_fw_rpt_ver(rtwdev, rpt_map);
rtw89_debug(rtwdev, RTW89_DBG_BTC,
"[BTC], %s(): rpt_map=%x, rpt_state=%x\n",
__func__, rpt_map, rpt_state);
if (rpt_state)
- val = fwinfo->rpt_en_map | rpt_map;
+ val = fwinfo->rpt_en_map | bit_map;
else
- val = fwinfo->rpt_en_map & ~rpt_map;
+ val = fwinfo->rpt_en_map & ~bit_map;
if (val == fwinfo->rpt_en_map)
return;
@@ -1593,16 +1780,17 @@ static void _fw_set_policy(struct rtw89_dev *rtwdev, u16 policy_type,
static void _fw_set_drv_info(struct rtw89_dev *rtwdev, u8 type)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
switch (type) {
case CXDRVINFO_INIT:
rtw89_fw_h2c_cxdrv_init(rtwdev);
break;
case CXDRVINFO_ROLE:
- if (chip->chip_id == RTL8852A)
+ if (ver->fwlrole == 0)
rtw89_fw_h2c_cxdrv_role(rtwdev);
- else
+ else if (ver->fwlrole == 1)
rtw89_fw_h2c_cxdrv_role_v1(rtwdev);
break;
case CXDRVINFO_CTRL:
@@ -1892,6 +2080,7 @@ static void _set_bt_afh_info(struct rtw89_dev *rtwdev)
{
const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_wl_info *wl = &btc->cx.wl;
struct rtw89_btc_bt_info *bt = &btc->cx.bt;
struct rtw89_btc_bt_link_info *b = &bt->link_info;
@@ -1905,7 +2094,7 @@ static void _set_bt_afh_info(struct rtw89_dev *rtwdev)
if (btc->ctrl.manual || wl->status.map.scan)
return;
- if (chip->chip_id == RTL8852A) {
+ if (ver->fwlrole == 0) {
mode = wl_rinfo->link_mode;
connect_cnt = wl_rinfo->connect_cnt;
} else {
@@ -1924,13 +2113,13 @@ static void _set_bt_afh_info(struct rtw89_dev *rtwdev)
r = &wl_rinfo->active_role[i];
r1 = &wl_rinfo_v1->active_role_v1[i];
- if (chip->chip_id == RTL8852A &&
+ if (ver->fwlrole == 0 &&
(r->role == RTW89_WIFI_ROLE_P2P_GO ||
r->role == RTW89_WIFI_ROLE_P2P_CLIENT)) {
ch = r->ch;
bw = r->bw;
break;
- } else if (chip->chip_id != RTL8852A &&
+ } else if (ver->fwlrole == 1 &&
(r1->role == RTW89_WIFI_ROLE_P2P_GO ||
r1->role == RTW89_WIFI_ROLE_P2P_CLIENT)) {
ch = r1->ch;
@@ -1945,12 +2134,12 @@ static void _set_bt_afh_info(struct rtw89_dev *rtwdev)
r = &wl_rinfo->active_role[i];
r1 = &wl_rinfo_v1->active_role_v1[i];
- if (chip->chip_id == RTL8852A &&
+ if (ver->fwlrole == 0 &&
r->connected && r->band == RTW89_BAND_2G) {
ch = r->ch;
bw = r->bw;
break;
- } else if (chip->chip_id != RTL8852A &&
+ } else if (ver->fwlrole == 1 &&
r1->connected && r1->band == RTW89_BAND_2G) {
ch = r1->ch;
bw = r1->bw;
@@ -2517,15 +2706,16 @@ void rtw89_btc_set_policy_v1(struct rtw89_dev *rtwdev, u16 policy_type)
break;
case BTC_CXP_OFF_EQ0:
_slot_set_tbl(btc, CXST_OFF, cxtbl[0]);
+ _slot_set_type(btc, CXST_OFF, SLOT_ISO);
break;
case BTC_CXP_OFF_EQ1:
_slot_set_tbl(btc, CXST_OFF, cxtbl[16]);
break;
case BTC_CXP_OFF_EQ2:
- _slot_set_tbl(btc, CXST_OFF, cxtbl[17]);
+ _slot_set_tbl(btc, CXST_OFF, cxtbl[0]);
break;
case BTC_CXP_OFF_EQ3:
- _slot_set_tbl(btc, CXST_OFF, cxtbl[18]);
+ _slot_set_tbl(btc, CXST_OFF, cxtbl[24]);
break;
case BTC_CXP_OFF_BWB0:
_slot_set_tbl(btc, CXST_OFF, cxtbl[5]);
@@ -2581,6 +2771,7 @@ void rtw89_btc_set_policy_v1(struct rtw89_dev *rtwdev, u16 policy_type)
default:
break;
}
+ s[CXST_OFF] = s_def[CXST_OFF];
break;
case BTC_CXP_FIX: /* TDMA Fix-Slot */
_write_scbd(rtwdev, BTC_WSCB_TDMA, true);
@@ -2607,6 +2798,10 @@ void rtw89_btc_set_policy_v1(struct rtw89_dev *rtwdev, u16 policy_type)
_slot_set(btc, CXST_W1, 40, cxtbl[1], SLOT_ISO);
_slot_set(btc, CXST_B1, 10, tbl_b1, SLOT_MIX);
break;
+ case BTC_CXP_FIX_TD4020:
+ _slot_set(btc, CXST_W1, 40, cxtbl[1], SLOT_MIX);
+ _slot_set(btc, CXST_B1, 20, tbl_b1, SLOT_MIX);
+ break;
case BTC_CXP_FIX_TD7010:
_slot_set(btc, CXST_W1, 70, tbl_w1, SLOT_ISO);
_slot_set(btc, CXST_B1, 10, tbl_b1, SLOT_MIX);
@@ -3398,8 +3593,8 @@ static void _action_wl_rfk(struct rtw89_dev *rtwdev)
static void _set_btg_ctrl(struct rtw89_dev *rtwdev)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_wl_info *wl = &btc->cx.wl;
struct rtw89_btc_wl_role_info *wl_rinfo = &wl->role_info;
struct rtw89_btc_wl_role_info_v1 *wl_rinfo_v1 = &wl->role_info_v1;
@@ -3410,7 +3605,7 @@ static void _set_btg_ctrl(struct rtw89_dev *rtwdev)
if (btc->ctrl.manual)
return;
- if (chip->chip_id == RTL8852A)
+ if (ver->fwlrole == 0)
mode = wl_rinfo->link_mode;
else
mode = wl_rinfo_v1->link_mode;
@@ -3503,8 +3698,8 @@ static void rtw89_tx_time_iter(void *data, struct ieee80211_sta *sta)
static void _set_wl_tx_limit(struct rtw89_dev *rtwdev)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_cx *cx = &btc->cx;
struct rtw89_btc_dm *dm = &btc->dm;
struct rtw89_btc_wl_info *wl = &cx->wl;
@@ -3524,7 +3719,7 @@ static void _set_wl_tx_limit(struct rtw89_dev *rtwdev)
if (btc->ctrl.manual)
return;
- if (chip->chip_id == RTL8852A)
+ if (ver->fwlrole == 0)
mode = wl_rinfo->link_mode;
else
mode = wl_rinfo_v1->link_mode;
@@ -3572,8 +3767,8 @@ static void _set_wl_tx_limit(struct rtw89_dev *rtwdev)
static void _set_bt_rx_agc(struct rtw89_dev *rtwdev)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_wl_info *wl = &btc->cx.wl;
struct rtw89_btc_wl_role_info *wl_rinfo = &wl->role_info;
struct rtw89_btc_wl_role_info_v1 *wl_rinfo_v1 = &wl->role_info_v1;
@@ -3581,7 +3776,7 @@ static void _set_bt_rx_agc(struct rtw89_dev *rtwdev)
bool bt_hi_lna_rx = false;
u8 mode;
- if (chip->chip_id == RTL8852A)
+ if (ver->fwlrole == 0)
mode = wl_rinfo->link_mode;
else
mode = wl_rinfo_v1->link_mode;
@@ -3623,6 +3818,7 @@ static void _action_common(struct rtw89_dev *rtwdev)
wl->scbd_change = false;
btc->cx.cnt_wl[BTC_WCNT_SCBDUPDATE]++;
}
+ btc->dm.tdma_instant_excute = 0;
}
static void _action_by_bt(struct rtw89_dev *rtwdev)
@@ -3651,7 +3847,7 @@ static void _action_by_bt(struct rtw89_dev *rtwdev)
case BTC_BT_NOPROFILE:
if (_check_freerun(rtwdev))
_action_freerun(rtwdev);
- else if (a2dp.active || pan.active)
+ else if (pan.active)
_action_bt_pan(rtwdev);
else
_action_bt_idle(rtwdev);
@@ -4406,7 +4602,7 @@ static void _update_bt_scbd(struct rtw89_dev *rtwdev, bool only_update)
bt->whql_test = !!(val & BTC_BSCB_WHQL);
bt->btg_type = val & BTC_BSCB_BT_S1 ? BTC_BT_BTG : BTC_BT_ALONE;
- bt->link_info.a2dp_desc.active = !!(val & BTC_BSCB_A2DP_ACT);
+ bt->link_info.a2dp_desc.exist = !!(val & BTC_BSCB_A2DP_ACT);
/* if rfk run 1->0 */
if (bt->rfk_info.map.run && !(val & BTC_BSCB_RFK_RUN))
@@ -4445,8 +4641,8 @@ static bool _chk_wl_rfk_request(struct rtw89_dev *rtwdev)
static
void _run_coex(struct rtw89_dev *rtwdev, enum btc_reason_and_action reason)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_dm *dm = &rtwdev->btc.dm;
struct rtw89_btc_cx *cx = &btc->cx;
struct rtw89_btc_wl_info *wl = &btc->cx.wl;
@@ -4461,7 +4657,7 @@ void _run_coex(struct rtw89_dev *rtwdev, enum btc_reason_and_action reason)
_update_dm_step(rtwdev, reason);
_update_btc_state_map(rtwdev);
- if (chip->chip_id == RTL8852A)
+ if (ver->fwlrole == 0)
mode = wl_rinfo->link_mode;
else
mode = wl_rinfo_v1->link_mode;
@@ -4567,6 +4763,8 @@ void _run_coex(struct rtw89_dev *rtwdev, enum btc_reason_and_action reason)
_action_wl_nc(rtwdev);
break;
case BTC_WLINK_2G_STA:
+ if (wl->status.map.traffic_dir & BIT(RTW89_TFC_DL))
+ bt->scan_rx_low_pri = true;
_action_wl_2g_sta(rtwdev);
break;
case BTC_WLINK_2G_AP:
@@ -4583,9 +4781,9 @@ void _run_coex(struct rtw89_dev *rtwdev, enum btc_reason_and_action reason)
break;
case BTC_WLINK_2G_SCC:
bt->scan_rx_low_pri = true;
- if (chip->chip_id == RTL8852A)
+ if (ver->fwlrole == 0)
_action_wl_2g_scc(rtwdev);
- else if (chip->chip_id == RTL8852C)
+ else if (ver->fwlrole == 1)
_action_wl_2g_scc_v1(rtwdev);
break;
case BTC_WLINK_2G_MCC:
@@ -4690,7 +4888,7 @@ void rtw89_btc_ntfy_init(struct rtw89_dev *rtwdev, u8 mode)
_write_scbd(rtwdev,
BTC_WSCB_ACTIVE | BTC_WSCB_ON | BTC_WSCB_BTLOG, true);
_update_bt_scbd(rtwdev, true);
- if (rtw89_mac_get_ctrl_path(rtwdev) && chip->chip_id == RTL8852A) {
+ if (rtw89_mac_get_ctrl_path(rtwdev)) {
rtw89_debug(rtwdev, RTW89_DBG_BTC,
"[BTC], %s(): PTA owner warning!!\n",
__func__);
@@ -5000,11 +5198,6 @@ static void _update_bt_info(struct rtw89_dev *rtwdev, u8 *buf, u32 len)
a2dp->sink = btinfo.hb3.a2dp_sink;
- if (b->profile_cnt.now || b->status.map.ble_connect)
- rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_BT_AFH_MAP, 1);
- else
- rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_BT_AFH_MAP, 0);
-
if (!a2dp->exist_last && a2dp->exist) {
a2dp->vendor_id = 0;
a2dp->flush_time = 0;
@@ -5014,12 +5207,6 @@ static void _update_bt_info(struct rtw89_dev *rtwdev, u8 *buf, u32 len)
RTW89_COEX_BT_DEVINFO_WORK_PERIOD);
}
- if (a2dp->exist && (a2dp->flush_time == 0 || a2dp->vendor_id == 0 ||
- a2dp->play_latency == 1))
- rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_BT_DEVICE_INFO, 1);
- else
- rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_BT_DEVICE_INFO, 0);
-
_run_coex(rtwdev, BTC_RSN_UPDATE_BT_INFO);
}
@@ -5034,10 +5221,10 @@ void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif
struct rtw89_sta *rtwsta, enum btc_role_state state)
{
const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, RTW89_SUB_ENTITY_0);
- const struct rtw89_chip_info *chip = rtwdev->chip;
struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
struct ieee80211_sta *sta = rtwsta_to_sta(rtwsta);
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_wl_info *wl = &btc->cx.wl;
struct rtw89_btc_wl_link_info r = {0};
struct rtw89_btc_wl_link_info *wlinfo = NULL;
@@ -5101,7 +5288,7 @@ void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif
wlinfo = &wl->link_info[r.pid];
memcpy(wlinfo, &r, sizeof(*wlinfo));
- if (chip->chip_id == RTL8852A)
+ if (ver->fwlrole == 0)
_update_wl_info(rtwdev);
else
_update_wl_info_v1(rtwdev);
@@ -5151,9 +5338,7 @@ void rtw89_btc_ntfy_radio_state(struct rtw89_dev *rtwdev, enum btc_rfctrl rf_sta
}
if (rf_state == BTC_RFCTRL_WL_ON) {
- btc->dm.cnt_dm[BTC_DCNT_BTCNT_FREEZE] = 0;
- rtw89_btc_fw_en_rpt(rtwdev,
- RPT_EN_MREG | RPT_EN_BT_VER_INFO, true);
+ rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_MREG, true);
val = BTC_WSCB_ACTIVE | BTC_WSCB_ON | BTC_WSCB_BTLOG;
_write_scbd(rtwdev, val, true);
_update_bt_scbd(rtwdev, true);
@@ -5164,6 +5349,13 @@ void rtw89_btc_ntfy_radio_state(struct rtw89_dev *rtwdev, enum btc_rfctrl rf_sta
_write_scbd(rtwdev, BTC_WSCB_ALL, false);
}
+ btc->dm.cnt_dm[BTC_DCNT_BTCNT_FREEZE] = 0;
+ if (wl->status.map.lps_pre == BTC_LPS_OFF &&
+ wl->status.map.lps_pre != wl->status.map.lps)
+ btc->dm.tdma_instant_excute = 1;
+ else
+ btc->dm.tdma_instant_excute = 0;
+
_run_coex(rtwdev, BTC_RSN_NTFY_RADIO_STATE);
wl->status.map.rf_off_pre = wl->status.map.rf_off;
@@ -5618,8 +5810,8 @@ static void _show_wl_role_info(struct rtw89_dev *rtwdev, struct seq_file *m)
static void _show_wl_info(struct rtw89_dev *rtwdev, struct seq_file *m)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_cx *cx = &btc->cx;
struct rtw89_btc_wl_info *wl = &cx->wl;
struct rtw89_btc_wl_role_info *wl_rinfo = &wl->role_info;
@@ -5631,7 +5823,7 @@ static void _show_wl_info(struct rtw89_dev *rtwdev, struct seq_file *m)
seq_puts(m, "========== [WL Status] ==========\n");
- if (chip->chip_id == RTL8852A)
+ if (ver->fwlrole == 0)
mode = wl_rinfo->link_mode;
else
mode = wl_rinfo_v1->link_mode;
@@ -5714,12 +5906,14 @@ static void _show_bt_profile_info(struct rtw89_dev *rtwdev, struct seq_file *m)
static void _show_bt_info(struct rtw89_dev *rtwdev, struct seq_file *m)
{
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_cx *cx = &btc->cx;
struct rtw89_btc_bt_info *bt = &cx->bt;
struct rtw89_btc_wl_info *wl = &cx->wl;
struct rtw89_btc_module *module = &btc->mdinfo;
struct rtw89_btc_bt_link_info *bt_linfo = &bt->link_info;
u8 *afh = bt_linfo->afh_map;
+ u8 *afh_le = bt_linfo->afh_map_le;
if (!(btc->dm.coex_info_map & BTC_COEX_INFO_BT))
return;
@@ -5769,6 +5963,12 @@ static void _show_bt_info(struct rtw89_dev *rtwdev, struct seq_file *m)
afh[0], afh[1], afh[2], afh[3], afh[4],
afh[5], afh[6], afh[7], afh[8], afh[9]);
+ if (ver->fcxbtafh == 2 && bt_linfo->status.map.ble_connect)
+ seq_printf(m,
+ "LE[%02x%02x_%02x_%02x%02x]",
+ afh_le[0], afh_le[1], afh_le[2],
+ afh_le[3], afh_le[4]);
+
seq_printf(m, "wl_ch_map[en:%d/ch:%d/bw:%d]\n",
wl->afh_info.en, wl->afh_info.ch, wl->afh_info.bw);
@@ -5800,6 +6000,29 @@ static void _show_bt_info(struct rtw89_dev *rtwdev, struct seq_file *m)
"[trx_req_cnt]", cx->cnt_bt[BTC_BCNT_HIPRI_RX],
cx->cnt_bt[BTC_BCNT_HIPRI_TX], cx->cnt_bt[BTC_BCNT_LOPRI_RX],
cx->cnt_bt[BTC_BCNT_LOPRI_TX], cx->cnt_bt[BTC_BCNT_POLUT]);
+
+ if (bt->enable.now && bt->ver_info.fw == 0)
+ rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_BT_VER_INFO, true);
+ else
+ rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_BT_VER_INFO, false);
+
+ if (bt_linfo->profile_cnt.now || bt_linfo->status.map.ble_connect)
+ rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_BT_AFH_MAP, true);
+ else
+ rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_BT_AFH_MAP, false);
+
+ if (ver->fcxbtafh == 2 && bt_linfo->status.map.ble_connect)
+ rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_BT_AFH_MAP_LE, true);
+ else
+ rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_BT_AFH_MAP_LE, false);
+
+ if (bt_linfo->a2dp_desc.exist &&
+ (bt_linfo->a2dp_desc.flush_time == 0 ||
+ bt_linfo->a2dp_desc.vendor_id == 0 ||
+ bt_linfo->a2dp_desc.play_latency == 1))
+ rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_BT_DEVICE_INFO, true);
+ else
+ rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_BT_DEVICE_INFO, false);
}
#define CASE_BTC_RSN_STR(e) case BTC_RSN_ ## e: return #e
@@ -5807,6 +6030,7 @@ static void _show_bt_info(struct rtw89_dev *rtwdev, struct seq_file *m)
#define CASE_BTC_POLICY_STR(e) \
case BTC_CXP_ ## e | BTC_POLICY_EXT_BIT: return #e
#define CASE_BTC_SLOT_STR(e) case CXST_ ## e: return #e
+#define CASE_BTC_EVT_STR(e) case CXEVNT_## e: return #e
static const char *steps_to_str(u16 step)
{
@@ -5947,6 +6171,40 @@ static const char *id_to_slot(u32 id)
}
}
+static const char *id_to_evt(u32 id)
+{
+ switch (id) {
+ CASE_BTC_EVT_STR(TDMA_ENTRY);
+ CASE_BTC_EVT_STR(WL_TMR);
+ CASE_BTC_EVT_STR(B1_TMR);
+ CASE_BTC_EVT_STR(B2_TMR);
+ CASE_BTC_EVT_STR(B3_TMR);
+ CASE_BTC_EVT_STR(B4_TMR);
+ CASE_BTC_EVT_STR(W2B_TMR);
+ CASE_BTC_EVT_STR(B2W_TMR);
+ CASE_BTC_EVT_STR(BCN_EARLY);
+ CASE_BTC_EVT_STR(A2DP_EMPTY);
+ CASE_BTC_EVT_STR(LK_END);
+ CASE_BTC_EVT_STR(RX_ISR);
+ CASE_BTC_EVT_STR(RX_FC0);
+ CASE_BTC_EVT_STR(RX_FC1);
+ CASE_BTC_EVT_STR(BT_RELINK);
+ CASE_BTC_EVT_STR(BT_RETRY);
+ CASE_BTC_EVT_STR(E2G);
+ CASE_BTC_EVT_STR(E5G);
+ CASE_BTC_EVT_STR(EBT);
+ CASE_BTC_EVT_STR(ENULL);
+ CASE_BTC_EVT_STR(DRV_WLK);
+ CASE_BTC_EVT_STR(BCN_OK);
+ CASE_BTC_EVT_STR(BT_CHANGE);
+ CASE_BTC_EVT_STR(EBT_EXTEND);
+ CASE_BTC_EVT_STR(E2G_NULL1);
+ CASE_BTC_EVT_STR(B1FDD_TMR);
+ default:
+ return "unknown";
+ }
+}
+
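/* id_to_evt() leans on CASE_BTC_EVT_STR, which token-pastes the CXEVNT_
 * prefix onto the short name and stringizes it with '#', so a single macro
 * use both matches the enum value and yields its printable suffix. The
 * same preprocessor trick with hypothetical DEMO_* names:
 */
enum { DEMO_EVT_FOO, DEMO_EVT_BAR };
#define CASE_DEMO_EVT_STR(e) case DEMO_EVT_ ## e: return #e

static const char *demo_id_to_evt(unsigned int id)
{
	switch (id) {
	CASE_DEMO_EVT_STR(FOO);		/* returns "FOO" */
	CASE_DEMO_EVT_STR(BAR);		/* returns "BAR" */
	default:
		return "unknown";
	}
}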
static
void seq_print_segment(struct seq_file *m, const char *prefix, u16 *data,
u8 len, u8 seg_len, u8 start_idx, u8 ring_len)
@@ -6045,21 +6303,27 @@ static void _show_dm_info(struct rtw89_dev *rtwdev, struct seq_file *m)
static void _show_error(struct rtw89_dev *rtwdev, struct seq_file *m)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
- struct rtw89_btc_fbtc_cysta *pcysta;
- struct rtw89_btc_fbtc_cysta_v1 *pcysta_v1;
+ union rtw89_btc_fbtc_cysta_info *pcysta;
u32 except_cnt, exception_map;
- if (chip->chip_id == RTL8852A) {
- pcysta = &pfwinfo->rpt_fbtc_cysta.finfo;
- except_cnt = le32_to_cpu(pcysta->except_cnt);
- exception_map = le32_to_cpu(pcysta->exception);
+ pcysta = &pfwinfo->rpt_fbtc_cysta.finfo;
+ if (ver->fcxcysta == 2) {
+ pcysta->v2 = pfwinfo->rpt_fbtc_cysta.finfo.v2;
+ except_cnt = le32_to_cpu(pcysta->v2.except_cnt);
+ exception_map = le32_to_cpu(pcysta->v2.exception);
+ } else if (ver->fcxcysta == 3) {
+ pcysta->v3 = pfwinfo->rpt_fbtc_cysta.finfo.v3;
+ except_cnt = le32_to_cpu(pcysta->v3.except_cnt);
+ exception_map = le32_to_cpu(pcysta->v3.except_map);
+ } else if (ver->fcxcysta == 4) {
+ pcysta->v4 = pfwinfo->rpt_fbtc_cysta.finfo.v4;
+ except_cnt = pcysta->v4.except_cnt;
+ exception_map = le32_to_cpu(pcysta->v4.except_map);
} else {
- pcysta_v1 = &pfwinfo->rpt_fbtc_cysta.finfo_v1;
- except_cnt = le32_to_cpu(pcysta_v1->except_cnt);
- exception_map = le32_to_cpu(pcysta_v1->except_map);
+ return;
}
if (pfwinfo->event[BTF_EVNT_BUF_OVERFLOW] == 0 && except_cnt == 0 &&
@@ -6097,23 +6361,20 @@ static void _show_error(struct rtw89_dev *rtwdev, struct seq_file *m)
static void _show_fbtc_tdma(struct rtw89_dev *rtwdev, struct seq_file *m)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
struct rtw89_btc_rpt_cmn_info *pcinfo = NULL;
struct rtw89_btc_fbtc_tdma *t = NULL;
- struct rtw89_btc_fbtc_slot *s = NULL;
- struct rtw89_btc_dm *dm = &btc->dm;
- u8 i, cnt = 0;
pcinfo = &pfwinfo->rpt_fbtc_tdma.cinfo;
if (!pcinfo->valid)
return;
- if (chip->chip_id == RTL8852A)
- t = &pfwinfo->rpt_fbtc_tdma.finfo;
+ if (ver->fcxtdma == 1)
+ t = &pfwinfo->rpt_fbtc_tdma.finfo.v1;
else
- t = &pfwinfo->rpt_fbtc_tdma.finfo_v1.tdma;
+ t = &pfwinfo->rpt_fbtc_tdma.finfo.v3.tdma;
seq_printf(m,
" %-15s : ", "[tdma_policy]");
@@ -6130,75 +6391,43 @@ static void _show_fbtc_tdma(struct rtw89_dev *rtwdev, struct seq_file *m)
"policy_type:%d",
(u32)btc->policy_type);
- s = pfwinfo->rpt_fbtc_slots.finfo.slot;
-
- for (i = 0; i < CXST_MAX; i++) {
- if (dm->update_slot_map == BIT(CXST_MAX) - 1)
- break;
-
- if (!(dm->update_slot_map & BIT(i)))
- continue;
-
- if (cnt % 6 == 0)
- seq_printf(m,
- " %-15s : %d[%d/0x%x/%d]",
- "[slot_policy]",
- (u32)i,
- s[i].dur, s[i].cxtbl, s[i].cxtype);
- else
- seq_printf(m,
- ", %d[%d/0x%x/%d]",
- (u32)i,
- s[i].dur, s[i].cxtbl, s[i].cxtype);
- if (cnt % 6 == 5)
- seq_puts(m, "\n");
- cnt++;
- }
seq_puts(m, "\n");
}
static void _show_fbtc_slots(struct rtw89_dev *rtwdev, struct seq_file *m)
{
struct rtw89_btc *btc = &rtwdev->btc;
- struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
- struct rtw89_btc_rpt_cmn_info *pcinfo = NULL;
- struct rtw89_btc_fbtc_slots *pslots = NULL;
- struct rtw89_btc_fbtc_slot s;
+ struct rtw89_btc_dm *dm = &btc->dm;
+ struct rtw89_btc_fbtc_slot *s;
u8 i = 0;
- pcinfo = &pfwinfo->rpt_fbtc_slots.cinfo;
- if (!pcinfo->valid)
- return;
-
- pslots = &pfwinfo->rpt_fbtc_slots.finfo;
-
for (i = 0; i < CXST_MAX; i++) {
- s = pslots->slot[i];
- if (i % 6 == 0)
+ s = &dm->slot_now[i];
+ if (i % 5 == 0)
seq_printf(m,
- " %-15s : %02d[%03d/0x%x/%d]",
+ " %-15s : %5s[%03d/0x%x/%d]",
"[slot_list]",
- (u32)i,
- s.dur, s.cxtbl, s.cxtype);
+ id_to_slot((u32)i),
+ s->dur, s->cxtbl, s->cxtype);
else
seq_printf(m,
- ", %02d[%03d/0x%x/%d]",
- (u32)i,
- s.dur, s.cxtbl, s.cxtype);
- if (i % 6 == 5)
+ ", %5s[%03d/0x%x/%d]",
+ id_to_slot((u32)i),
+ s->dur, s->cxtbl, s->cxtype);
+ if (i % 5 == 4)
seq_puts(m, "\n");
}
+ seq_puts(m, "\n");
}
-static void _show_fbtc_cysta(struct rtw89_dev *rtwdev, struct seq_file *m)
+static void _show_fbtc_cysta_v2(struct rtw89_dev *rtwdev, struct seq_file *m)
{
struct rtw89_btc *btc = &rtwdev->btc;
struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
struct rtw89_btc_dm *dm = &btc->dm;
struct rtw89_btc_bt_a2dp_desc *a2dp = &btc->cx.bt.link_info.a2dp_desc;
struct rtw89_btc_rpt_cmn_info *pcinfo = NULL;
- struct rtw89_btc_fbtc_cysta *pcysta_le32 = NULL;
- struct rtw89_btc_fbtc_cysta_cpu pcysta[1];
+ struct rtw89_btc_fbtc_cysta_v2 *pcysta_le32 = NULL;
union rtw89_btc_fbtc_rxflct r;
u8 i, cnt = 0, slot_pair;
u16 cycle, c_begin, c_end, store_index;
@@ -6207,65 +6436,66 @@ static void _show_fbtc_cysta(struct rtw89_dev *rtwdev, struct seq_file *m)
if (!pcinfo->valid)
return;
- pcysta_le32 = &pfwinfo->rpt_fbtc_cysta.finfo;
- rtw89_btc_fbtc_cysta_to_cpu(pcysta_le32, pcysta);
+ pcysta_le32 = &pfwinfo->rpt_fbtc_cysta.finfo.v2;
seq_printf(m,
" %-15s : cycle:%d, bcn[all:%d/all_ok:%d/bt:%d/bt_ok:%d]",
- "[cycle_cnt]", pcysta->cycles, pcysta->bcn_cnt[CXBCN_ALL],
- pcysta->bcn_cnt[CXBCN_ALL_OK],
- pcysta->bcn_cnt[CXBCN_BT_SLOT],
- pcysta->bcn_cnt[CXBCN_BT_OK]);
+ "[cycle_cnt]",
+ le16_to_cpu(pcysta_le32->cycles),
+ le32_to_cpu(pcysta_le32->bcn_cnt[CXBCN_ALL]),
+ le32_to_cpu(pcysta_le32->bcn_cnt[CXBCN_ALL_OK]),
+ le32_to_cpu(pcysta_le32->bcn_cnt[CXBCN_BT_SLOT]),
+ le32_to_cpu(pcysta_le32->bcn_cnt[CXBCN_BT_OK]));
for (i = 0; i < CXST_MAX; i++) {
- if (!pcysta->slot_cnt[i])
+ if (!le32_to_cpu(pcysta_le32->slot_cnt[i]))
continue;
- seq_printf(m,
- ", %d:%d", (u32)i, pcysta->slot_cnt[i]);
+ seq_printf(m, ", %s:%d", id_to_slot((u32)i),
+ le32_to_cpu(pcysta_le32->slot_cnt[i]));
}
if (dm->tdma_now.rxflctrl) {
- seq_printf(m,
- ", leak_rx:%d", pcysta->leakrx_cnt);
+ seq_printf(m, ", leak_rx:%d",
+ le32_to_cpu(pcysta_le32->leakrx_cnt));
}
- if (pcysta->collision_cnt) {
- seq_printf(m,
- ", collision:%d", pcysta->collision_cnt);
+ if (le32_to_cpu(pcysta_le32->collision_cnt)) {
+ seq_printf(m, ", collision:%d",
+ le32_to_cpu(pcysta_le32->collision_cnt));
}
- if (pcysta->skip_cnt) {
- seq_printf(m,
- ", skip:%d", pcysta->skip_cnt);
+ if (le32_to_cpu(pcysta_le32->skip_cnt)) {
+ seq_printf(m, ", skip:%d",
+ le32_to_cpu(pcysta_le32->skip_cnt));
}
seq_puts(m, "\n");
seq_printf(m, " %-15s : avg_t[wl:%d/bt:%d/lk:%d.%03d]",
"[cycle_time]",
- pcysta->tavg_cycle[CXT_WL],
- pcysta->tavg_cycle[CXT_BT],
- pcysta->tavg_lk / 1000, pcysta->tavg_lk % 1000);
- seq_printf(m,
- ", max_t[wl:%d/bt:%d/lk:%d.%03d]",
- pcysta->tmax_cycle[CXT_WL],
- pcysta->tmax_cycle[CXT_BT],
- pcysta->tmax_lk / 1000, pcysta->tmax_lk % 1000);
- seq_printf(m,
- ", maxdiff_t[wl:%d/bt:%d]\n",
- pcysta->tmaxdiff_cycle[CXT_WL],
- pcysta->tmaxdiff_cycle[CXT_BT]);
-
- if (pcysta->cycles == 0)
+ le16_to_cpu(pcysta_le32->tavg_cycle[CXT_WL]),
+ le16_to_cpu(pcysta_le32->tavg_cycle[CXT_BT]),
+ le16_to_cpu(pcysta_le32->tavg_lk) / 1000,
+ le16_to_cpu(pcysta_le32->tavg_lk) % 1000);
+ seq_printf(m, ", max_t[wl:%d/bt:%d/lk:%d.%03d]",
+ le16_to_cpu(pcysta_le32->tmax_cycle[CXT_WL]),
+ le16_to_cpu(pcysta_le32->tmax_cycle[CXT_BT]),
+ le16_to_cpu(pcysta_le32->tmax_lk) / 1000,
+ le16_to_cpu(pcysta_le32->tmax_lk) % 1000);
+ seq_printf(m, ", maxdiff_t[wl:%d/bt:%d]\n",
+ le16_to_cpu(pcysta_le32->tmaxdiff_cycle[CXT_WL]),
+ le16_to_cpu(pcysta_le32->tmaxdiff_cycle[CXT_BT]));
+
+ if (le16_to_cpu(pcysta_le32->cycles) <= 1)
return;
/* 1 cycle record 1 wl-slot and 1 bt-slot */
slot_pair = BTC_CYCLE_SLOT_MAX / 2;
- if (pcysta->cycles <= slot_pair)
+ if (le16_to_cpu(pcysta_le32->cycles) <= slot_pair)
c_begin = 1;
else
- c_begin = pcysta->cycles - slot_pair + 1;
+ c_begin = le16_to_cpu(pcysta_le32->cycles) - slot_pair + 1;
- c_end = pcysta->cycles;
+ c_end = le16_to_cpu(pcysta_le32->cycles);
for (cycle = c_begin; cycle <= c_end; cycle++) {
cnt++;
@@ -6274,13 +6504,13 @@ static void _show_fbtc_cysta(struct rtw89_dev *rtwdev, struct seq_file *m)
if (cnt % (BTC_CYCLE_SLOT_MAX / 4) == 1)
seq_printf(m,
" %-15s : ->b%02d->w%02d", "[cycle_step]",
- pcysta->tslot_cycle[store_index],
- pcysta->tslot_cycle[store_index + 1]);
+ le16_to_cpu(pcysta_le32->tslot_cycle[store_index]),
+ le16_to_cpu(pcysta_le32->tslot_cycle[store_index + 1]));
else
seq_printf(m,
"->b%02d->w%02d",
- pcysta->tslot_cycle[store_index],
- pcysta->tslot_cycle[store_index + 1]);
+ le16_to_cpu(pcysta_le32->tslot_cycle[store_index]),
+ le16_to_cpu(pcysta_le32->tslot_cycle[store_index + 1]));
if (cnt % (BTC_CYCLE_SLOT_MAX / 4) == 0 || cnt == c_end)
seq_puts(m, "\n");
}
@@ -6289,41 +6519,43 @@ static void _show_fbtc_cysta(struct rtw89_dev *rtwdev, struct seq_file *m)
seq_printf(m,
" %-15s : a2dp_ept:%d, a2dp_late:%d",
"[a2dp_t_sta]",
- pcysta->a2dpept, pcysta->a2dpeptto);
+ le16_to_cpu(pcysta_le32->a2dpept),
+ le16_to_cpu(pcysta_le32->a2dpeptto));
seq_printf(m,
", avg_t:%d, max_t:%d",
- pcysta->tavg_a2dpept, pcysta->tmax_a2dpept);
+ le16_to_cpu(pcysta_le32->tavg_a2dpept),
+ le16_to_cpu(pcysta_le32->tmax_a2dpept));
r.val = dm->tdma_now.rxflctrl;
if (r.type && r.tgln_n) {
seq_printf(m,
", cycle[PSTDMA:%d/TDMA:%d], ",
- pcysta->cycles_a2dp[CXT_FLCTRL_ON],
- pcysta->cycles_a2dp[CXT_FLCTRL_OFF]);
+ le16_to_cpu(pcysta_le32->cycles_a2dp[CXT_FLCTRL_ON]),
+ le16_to_cpu(pcysta_le32->cycles_a2dp[CXT_FLCTRL_OFF]));
seq_printf(m,
"avg_t[PSTDMA:%d/TDMA:%d], ",
- pcysta->tavg_a2dp[CXT_FLCTRL_ON],
- pcysta->tavg_a2dp[CXT_FLCTRL_OFF]);
+ le16_to_cpu(pcysta_le32->tavg_a2dp[CXT_FLCTRL_ON]),
+ le16_to_cpu(pcysta_le32->tavg_a2dp[CXT_FLCTRL_OFF]));
seq_printf(m,
"max_t[PSTDMA:%d/TDMA:%d]",
- pcysta->tmax_a2dp[CXT_FLCTRL_ON],
- pcysta->tmax_a2dp[CXT_FLCTRL_OFF]);
+ le16_to_cpu(pcysta_le32->tmax_a2dp[CXT_FLCTRL_ON]),
+ le16_to_cpu(pcysta_le32->tmax_a2dp[CXT_FLCTRL_OFF]));
}
seq_puts(m, "\n");
}
}
-static void _show_fbtc_cysta_v1(struct rtw89_dev *rtwdev, struct seq_file *m)
+static void _show_fbtc_cysta_v3(struct rtw89_dev *rtwdev, struct seq_file *m)
{
struct rtw89_btc *btc = &rtwdev->btc;
struct rtw89_btc_bt_a2dp_desc *a2dp = &btc->cx.bt.link_info.a2dp_desc;
struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
struct rtw89_btc_dm *dm = &btc->dm;
struct rtw89_btc_fbtc_a2dp_trx_stat *a2dp_trx;
- struct rtw89_btc_fbtc_cysta_v1 *pcysta;
+ struct rtw89_btc_fbtc_cysta_v3 *pcysta;
struct rtw89_btc_rpt_cmn_info *pcinfo;
u8 i, cnt = 0, slot_pair, divide_cnt;
u16 cycle, c_begin, c_end, store_index;
@@ -6332,7 +6564,7 @@ static void _show_fbtc_cysta_v1(struct rtw89_dev *rtwdev, struct seq_file *m)
if (!pcinfo->valid)
return;
- pcysta = &pfwinfo->rpt_fbtc_cysta.finfo_v1;
+ pcysta = &pfwinfo->rpt_fbtc_cysta.finfo.v3;
seq_printf(m,
" %-15s : cycle:%d, bcn[all:%d/all_ok:%d/bt:%d/bt_ok:%d]",
"[cycle_cnt]",
@@ -6379,7 +6611,7 @@ static void _show_fbtc_cysta_v1(struct rtw89_dev *rtwdev, struct seq_file *m)
le16_to_cpu(pcysta->cycle_time.tmaxdiff[CXT_BT]));
cycle = le16_to_cpu(pcysta->cycles);
- if (cycle == 0)
+ if (cycle <= 1)
return;
/* 1 cycle record 1 wl-slot and 1 bt-slot */
@@ -6401,40 +6633,171 @@ static void _show_fbtc_cysta_v1(struct rtw89_dev *rtwdev, struct seq_file *m)
cnt++;
store_index = ((cycle - 1) % slot_pair) * 2;
- if (cnt % divide_cnt == 1) {
- seq_printf(m, "\n\r %-15s : ", "[cycle_step]");
- } else {
- seq_printf(m, "->b%02d",
- le16_to_cpu(pcysta->slot_step_time[store_index]));
- if (a2dp->exist) {
- a2dp_trx = &pcysta->a2dp_trx[store_index];
- seq_printf(m, "(%d/%d/%dM/%d/%d/%d)",
- a2dp_trx->empty_cnt,
- a2dp_trx->retry_cnt,
- a2dp_trx->tx_rate ? 3 : 2,
- a2dp_trx->tx_cnt,
- a2dp_trx->ack_cnt,
- a2dp_trx->nack_cnt);
- }
- seq_printf(m, "->w%02d",
- le16_to_cpu(pcysta->slot_step_time[store_index + 1]));
- if (a2dp->exist) {
- a2dp_trx = &pcysta->a2dp_trx[store_index + 1];
- seq_printf(m, "(%d/%d/%dM/%d/%d/%d)",
- a2dp_trx->empty_cnt,
- a2dp_trx->retry_cnt,
- a2dp_trx->tx_rate ? 3 : 2,
- a2dp_trx->tx_cnt,
- a2dp_trx->ack_cnt,
- a2dp_trx->nack_cnt);
- }
+ if (cnt % divide_cnt == 1)
+ seq_printf(m, " %-15s : ", "[cycle_step]");
+
+ seq_printf(m, "->b%02d",
+ le16_to_cpu(pcysta->slot_step_time[store_index]));
+ if (a2dp->exist) {
+ a2dp_trx = &pcysta->a2dp_trx[store_index];
+ seq_printf(m, "(%d/%d/%dM/%d/%d/%d)",
+ a2dp_trx->empty_cnt,
+ a2dp_trx->retry_cnt,
+ a2dp_trx->tx_rate ? 3 : 2,
+ a2dp_trx->tx_cnt,
+ a2dp_trx->ack_cnt,
+ a2dp_trx->nack_cnt);
}
- if (cnt % (BTC_CYCLE_SLOT_MAX / 4) == 0 || cnt == c_end)
+ seq_printf(m, "->w%02d",
+ le16_to_cpu(pcysta->slot_step_time[store_index + 1]));
+ if (a2dp->exist) {
+ a2dp_trx = &pcysta->a2dp_trx[store_index + 1];
+ seq_printf(m, "(%d/%d/%dM/%d/%d/%d)",
+ a2dp_trx->empty_cnt,
+ a2dp_trx->retry_cnt,
+ a2dp_trx->tx_rate ? 3 : 2,
+ a2dp_trx->tx_cnt,
+ a2dp_trx->ack_cnt,
+ a2dp_trx->nack_cnt);
+ }
+ if (cnt % divide_cnt == 0 || cnt == c_end)
seq_puts(m, "\n");
}
if (a2dp->exist) {
- seq_printf(m, "%-15s : a2dp_ept:%d, a2dp_late:%d",
+ seq_printf(m, " %-15s : a2dp_ept:%d, a2dp_late:%d",
+ "[a2dp_t_sta]",
+ le16_to_cpu(pcysta->a2dp_ept.cnt),
+ le16_to_cpu(pcysta->a2dp_ept.cnt_timeout));
+
+ seq_printf(m, ", avg_t:%d, max_t:%d",
+ le16_to_cpu(pcysta->a2dp_ept.tavg),
+ le16_to_cpu(pcysta->a2dp_ept.tmax));
+
+ seq_puts(m, "\n");
+ }
+}
+
+static void _show_fbtc_cysta_v4(struct rtw89_dev *rtwdev, struct seq_file *m)
+{
+ struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_bt_a2dp_desc *a2dp = &btc->cx.bt.link_info.a2dp_desc;
+ struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
+ struct rtw89_btc_dm *dm = &btc->dm;
+ struct rtw89_btc_fbtc_a2dp_trx_stat_v4 *a2dp_trx;
+ struct rtw89_btc_fbtc_cysta_v4 *pcysta;
+ struct rtw89_btc_rpt_cmn_info *pcinfo;
+ u8 i, cnt = 0, slot_pair, divide_cnt;
+ u16 cycle, c_begin, c_end, store_index;
+
+ pcinfo = &pfwinfo->rpt_fbtc_cysta.cinfo;
+ if (!pcinfo->valid)
+ return;
+
+ pcysta = &pfwinfo->rpt_fbtc_cysta.finfo.v4;
+ seq_printf(m,
+ " %-15s : cycle:%d, bcn[all:%d/all_ok:%d/bt:%d/bt_ok:%d]",
+ "[cycle_cnt]",
+ le16_to_cpu(pcysta->cycles),
+ le16_to_cpu(pcysta->bcn_cnt[CXBCN_ALL]),
+ le16_to_cpu(pcysta->bcn_cnt[CXBCN_ALL_OK]),
+ le16_to_cpu(pcysta->bcn_cnt[CXBCN_BT_SLOT]),
+ le16_to_cpu(pcysta->bcn_cnt[CXBCN_BT_OK]));
+
+ for (i = 0; i < CXST_MAX; i++) {
+ if (!le16_to_cpu(pcysta->slot_cnt[i]))
+ continue;
+
+ seq_printf(m, ", %s:%d", id_to_slot(i),
+ le16_to_cpu(pcysta->slot_cnt[i]));
+ }
+
+ if (dm->tdma_now.rxflctrl)
+ seq_printf(m, ", leak_rx:%d",
+ le32_to_cpu(pcysta->leak_slot.cnt_rximr));
+
+ if (pcysta->collision_cnt)
+ seq_printf(m, ", collision:%d", pcysta->collision_cnt);
+
+ if (le16_to_cpu(pcysta->skip_cnt))
+ seq_printf(m, ", skip:%d",
+ le16_to_cpu(pcysta->skip_cnt));
+
+ seq_puts(m, "\n");
+
+ seq_printf(m, " %-15s : avg_t[wl:%d/bt:%d/lk:%d.%03d]",
+ "[cycle_time]",
+ le16_to_cpu(pcysta->cycle_time.tavg[CXT_WL]),
+ le16_to_cpu(pcysta->cycle_time.tavg[CXT_BT]),
+ le16_to_cpu(pcysta->leak_slot.tavg) / 1000,
+ le16_to_cpu(pcysta->leak_slot.tavg) % 1000);
+ seq_printf(m,
+ ", max_t[wl:%d/bt:%d/lk:%d.%03d]",
+ le16_to_cpu(pcysta->cycle_time.tmax[CXT_WL]),
+ le16_to_cpu(pcysta->cycle_time.tmax[CXT_BT]),
+ le16_to_cpu(pcysta->leak_slot.tmax) / 1000,
+ le16_to_cpu(pcysta->leak_slot.tmax) % 1000);
+ seq_printf(m,
+ ", maxdiff_t[wl:%d/bt:%d]\n",
+ le16_to_cpu(pcysta->cycle_time.tmaxdiff[CXT_WL]),
+ le16_to_cpu(pcysta->cycle_time.tmaxdiff[CXT_BT]));
+
+ cycle = le16_to_cpu(pcysta->cycles);
+ if (cycle <= 1)
+ return;
+
+ /* each cycle records 1 wl-slot and 1 bt-slot */
+ slot_pair = BTC_CYCLE_SLOT_MAX / 2;
+
+ if (cycle <= slot_pair)
+ c_begin = 1;
+ else
+ c_begin = cycle - slot_pair + 1;
+
+ c_end = cycle;
+
+ if (a2dp->exist)
+ divide_cnt = 3;
+ else
+ divide_cnt = BTC_CYCLE_SLOT_MAX / 4;
+
+ for (cycle = c_begin; cycle <= c_end; cycle++) {
+ cnt++;
+ store_index = ((cycle - 1) % slot_pair) * 2;
+
+ if (cnt % divide_cnt == 1)
+ seq_printf(m, " %-15s : ", "[cycle_step]");
+
+ seq_printf(m, "->b%02d",
+ le16_to_cpu(pcysta->slot_step_time[store_index]));
+ if (a2dp->exist) {
+ a2dp_trx = &pcysta->a2dp_trx[store_index];
+ seq_printf(m, "(%d/%d/%dM/%d/%d/%d)",
+ a2dp_trx->empty_cnt,
+ a2dp_trx->retry_cnt,
+ a2dp_trx->tx_rate ? 3 : 2,
+ a2dp_trx->tx_cnt,
+ a2dp_trx->ack_cnt,
+ a2dp_trx->nack_cnt);
+ }
+ seq_printf(m, "->w%02d",
+ le16_to_cpu(pcysta->slot_step_time[store_index + 1]));
+ if (a2dp->exist) {
+ a2dp_trx = &pcysta->a2dp_trx[store_index + 1];
+ seq_printf(m, "(%d/%d/%dM/%d/%d/%d)",
+ a2dp_trx->empty_cnt,
+ a2dp_trx->retry_cnt,
+ a2dp_trx->tx_rate ? 3 : 2,
+ a2dp_trx->tx_cnt,
+ a2dp_trx->ack_cnt,
+ a2dp_trx->nack_cnt);
+ }
+ if (cnt % divide_cnt == 0 || cnt == c_end)
+ seq_puts(m, "\n");
+ }
+
+ if (a2dp->exist) {
+ seq_printf(m, " %-15s : a2dp_ept:%d, a2dp_late:%d",
"[a2dp_t_sta]",
le16_to_cpu(pcysta->a2dp_ept.cnt),
le16_to_cpu(pcysta->a2dp_ept.cnt_timeout));
@@ -6449,12 +6812,11 @@ static void _show_fbtc_cysta_v1(struct rtw89_dev *rtwdev, struct seq_file *m)
static void _show_fbtc_nullsta(struct rtw89_dev *rtwdev, struct seq_file *m)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
struct rtw89_btc_rpt_cmn_info *pcinfo;
- struct rtw89_btc_fbtc_cynullsta *ns;
- struct rtw89_btc_fbtc_cynullsta_v1 *ns_v1;
+ union rtw89_btc_fbtc_cynullsta_info *ns;
u8 i = 0;
if (!btc->dm.tdma_now.rxflctrl)
@@ -6464,68 +6826,56 @@ static void _show_fbtc_nullsta(struct rtw89_dev *rtwdev, struct seq_file *m)
if (!pcinfo->valid)
return;
- if (chip->chip_id == RTL8852A) {
- ns = &pfwinfo->rpt_fbtc_nullsta.finfo;
-
- seq_printf(m, " %-15s : ", "[null_sta]");
-
+ ns = &pfwinfo->rpt_fbtc_nullsta.finfo;
+ if (ver->fcxnullsta == 1) {
for (i = 0; i < 2; i++) {
- if (i != 0)
- seq_printf(m, ", null-%d", i);
- else
- seq_printf(m, "null-%d", i);
+ seq_printf(m, " %-15s : ", "[NULL-STA]");
+ seq_printf(m, "null-%d", i);
seq_printf(m, "[ok:%d/",
- le32_to_cpu(ns->result[i][1]));
+ le32_to_cpu(ns->v1.result[i][1]));
seq_printf(m, "fail:%d/",
- le32_to_cpu(ns->result[i][0]));
+ le32_to_cpu(ns->v1.result[i][0]));
seq_printf(m, "on_time:%d/",
- le32_to_cpu(ns->result[i][2]));
+ le32_to_cpu(ns->v1.result[i][2]));
seq_printf(m, "retry:%d/",
- le32_to_cpu(ns->result[i][3]));
+ le32_to_cpu(ns->v1.result[i][3]));
seq_printf(m, "avg_t:%d.%03d/",
- le32_to_cpu(ns->avg_t[i]) / 1000,
- le32_to_cpu(ns->avg_t[i]) % 1000);
- seq_printf(m, "max_t:%d.%03d]",
- le32_to_cpu(ns->max_t[i]) / 1000,
- le32_to_cpu(ns->max_t[i]) % 1000);
+ le32_to_cpu(ns->v1.avg_t[i]) / 1000,
+ le32_to_cpu(ns->v1.avg_t[i]) % 1000);
+ seq_printf(m, "max_t:%d.%03d]\n",
+ le32_to_cpu(ns->v1.max_t[i]) / 1000,
+ le32_to_cpu(ns->v1.max_t[i]) % 1000);
}
} else {
- ns_v1 = &pfwinfo->rpt_fbtc_nullsta.finfo_v1;
-
- seq_printf(m, " %-15s : ", "[null_sta]");
-
for (i = 0; i < 2; i++) {
- if (i != 0)
- seq_printf(m, ", null-%d", i);
- else
- seq_printf(m, "null-%d", i);
+ seq_printf(m, " %-15s : ", "[NULL-STA]");
+ seq_printf(m, "null-%d", i);
seq_printf(m, "[Tx:%d/",
- le32_to_cpu(ns_v1->result[i][4]));
+ le32_to_cpu(ns->v2.result[i][4]));
seq_printf(m, "[ok:%d/",
- le32_to_cpu(ns_v1->result[i][1]));
+ le32_to_cpu(ns->v2.result[i][1]));
seq_printf(m, "fail:%d/",
- le32_to_cpu(ns_v1->result[i][0]));
+ le32_to_cpu(ns->v2.result[i][0]));
seq_printf(m, "on_time:%d/",
- le32_to_cpu(ns_v1->result[i][2]));
+ le32_to_cpu(ns->v2.result[i][2]));
seq_printf(m, "retry:%d/",
- le32_to_cpu(ns_v1->result[i][3]));
+ le32_to_cpu(ns->v2.result[i][3]));
seq_printf(m, "avg_t:%d.%03d/",
- le32_to_cpu(ns_v1->avg_t[i]) / 1000,
- le32_to_cpu(ns_v1->avg_t[i]) % 1000);
- seq_printf(m, "max_t:%d.%03d]",
- le32_to_cpu(ns_v1->max_t[i]) / 1000,
- le32_to_cpu(ns_v1->max_t[i]) % 1000);
+ le32_to_cpu(ns->v2.avg_t[i]) / 1000,
+ le32_to_cpu(ns->v2.avg_t[i]) % 1000);
+ seq_printf(m, "max_t:%d.%03d]\n",
+ le32_to_cpu(ns->v2.max_t[i]) / 1000,
+ le32_to_cpu(ns->v2.max_t[i]) % 1000);
}
}
- seq_puts(m, "\n");
}
-static void _show_fbtc_step(struct rtw89_dev *rtwdev, struct seq_file *m)
+static void _show_fbtc_step_v2(struct rtw89_dev *rtwdev, struct seq_file *m)
{
struct rtw89_btc *btc = &rtwdev->btc;
struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
struct rtw89_btc_rpt_cmn_info *pcinfo = NULL;
- struct rtw89_btc_fbtc_steps *pstep = NULL;
+ struct rtw89_btc_fbtc_steps_v2 *pstep = NULL;
u8 type, val, cnt = 0, state = 0;
bool outloop = false;
u16 i, diff_t, n_start = 0, n_stop = 0;
@@ -6535,7 +6885,7 @@ static void _show_fbtc_step(struct rtw89_dev *rtwdev, struct seq_file *m)
if (!pcinfo->valid)
return;
- pstep = &pfwinfo->rpt_fbtc_step.finfo;
+ pstep = &pfwinfo->rpt_fbtc_step.finfo.v2;
pos_old = le16_to_cpu(pstep->pos_old);
pos_new = le16_to_cpu(pstep->pos_new);
@@ -6589,10 +6939,67 @@ static void _show_fbtc_step(struct rtw89_dev *rtwdev, struct seq_file *m)
} while (!outloop);
}
+static void _show_fbtc_step_v3(struct rtw89_dev *rtwdev, struct seq_file *m)
+{
+ struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
+ struct rtw89_btc_rpt_cmn_info *pcinfo;
+ struct rtw89_btc_fbtc_steps_v3 *pstep;
+ u32 i, n_begin, n_end, array_idx, cnt = 0;
+ u8 type, val;
+ u16 diff_t;
+
+ if ((pfwinfo->rpt_en_map &
+ rtw89_btc_fw_rpt_ver(rtwdev, RPT_EN_FW_STEP_INFO)) == 0)
+ return;
+
+ pcinfo = &pfwinfo->rpt_fbtc_step.cinfo;
+ if (!pcinfo->valid)
+ return;
+
+ pstep = &pfwinfo->rpt_fbtc_step.finfo.v3;
+ if (pcinfo->req_fver != pstep->fver)
+ return;
+
+ if (le32_to_cpu(pstep->cnt) <= FCXDEF_STEP)
+ n_begin = 1;
+ else
+ n_begin = le32_to_cpu(pstep->cnt) - FCXDEF_STEP + 1;
+
+ n_end = le32_to_cpu(pstep->cnt);
+
+ if (n_begin > n_end)
+ return;
+
+ /* restore step info from a ring buffer instead of a FIFO */
+ for (i = n_begin; i <= n_end; i++) {
+ array_idx = (i - 1) % FCXDEF_STEP;
+ type = pstep->step[array_idx].type;
+ val = pstep->step[array_idx].val;
+ diff_t = le16_to_cpu(pstep->step[array_idx].difft);
+
+ if (type == CXSTEP_NONE || type >= CXSTEP_MAX)
+ continue;
+
+ if (cnt % 10 == 0)
+ seq_printf(m, " %-15s : ", "[steps]");
+
+ seq_printf(m, "-> %s(%02d)",
+ (type == CXSTEP_SLOT ?
+ id_to_slot((u32)val) :
+ id_to_evt((u32)val)), diff_t);
+
+ if (cnt % 10 == 9)
+ seq_puts(m, "\n");
+
+ cnt++;
+ }
+}
+
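/* _show_fbtc_step_v3() replays a ring indexed by a 1-based running
 * counter: entry i lives at (i - 1) % FCXDEF_STEP, so the newest
 * FCXDEF_STEP entries are walked oldest-to-newest without any copying.
 * Minimal sketch of that windowed walk; the demo_* names and ring size
 * are assumptions.
 */
#define DEMO_STEPS 50

static void demo_dump_ring(const int *ring, unsigned int cnt)
{
	unsigned int i, begin = cnt <= DEMO_STEPS ? 1 : cnt - DEMO_STEPS + 1;

	for (i = begin; i <= cnt; i++) {
		unsigned int idx = (i - 1) % DEMO_STEPS;	/* ring slot */

		(void)ring[idx];	/* consume oldest-to-newest */
	}
}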
static void _show_fw_dm_msg(struct rtw89_dev *rtwdev, struct seq_file *m)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
if (!(btc->dm.coex_info_map & BTC_COEX_INFO_DM))
return;
@@ -6601,13 +7008,20 @@ static void _show_fw_dm_msg(struct rtw89_dev *rtwdev, struct seq_file *m)
_show_fbtc_tdma(rtwdev, m);
_show_fbtc_slots(rtwdev, m);
- if (chip->chip_id == RTL8852A)
- _show_fbtc_cysta(rtwdev, m);
- else
- _show_fbtc_cysta_v1(rtwdev, m);
+ if (ver->fcxcysta == 2)
+ _show_fbtc_cysta_v2(rtwdev, m);
+ else if (ver->fcxcysta == 3)
+ _show_fbtc_cysta_v3(rtwdev, m);
+ else if (ver->fcxcysta == 4)
+ _show_fbtc_cysta_v4(rtwdev, m);
_show_fbtc_nullsta(rtwdev, m);
- _show_fbtc_step(rtwdev, m);
+
+ if (ver->fcxstep == 2)
+ _show_fbtc_step_v2(rtwdev, m);
+ else if (ver->fcxstep == 3)
+ _show_fbtc_step_v3(rtwdev, m);
}
static void _get_gnt(struct rtw89_dev *rtwdev, struct rtw89_mac_ax_coex_gnt *gnt_cfg)
@@ -6680,10 +7094,7 @@ static void _show_mreg(struct rtw89_dev *rtwdev, struct seq_file *m)
/* To avoid I/O if WL LPS or power-off */
if (!wl->status.map.lps && !wl->status.map.rf_off) {
- if (chip->chip_id == RTL8852A)
- btc->dm.pta_owner = rtw89_mac_get_ctrl_path(rtwdev);
- else if (chip->chip_id == RTL8852C)
- btc->dm.pta_owner = 0;
+ btc->dm.pta_owner = rtw89_mac_get_ctrl_path(rtwdev);
_get_gnt(rtwdev, &gnt_cfg);
gnt = gnt_cfg.band[0];
@@ -6729,6 +7140,9 @@ static void _show_mreg(struct rtw89_dev *rtwdev, struct seq_file *m)
if (cnt % 6 == 5)
seq_puts(m, "\n");
cnt++;
+
+ if (i >= pmreg->reg_num)
+ seq_puts(m, "\n");
}
pcinfo = &pfwinfo->rpt_fbtc_gpio_dbg.cinfo;
@@ -6754,12 +7168,12 @@ static void _show_mreg(struct rtw89_dev *rtwdev, struct seq_file *m)
seq_puts(m, "\n");
}
-static void _show_summary(struct rtw89_dev *rtwdev, struct seq_file *m)
+static void _show_summary_v1(struct rtw89_dev *rtwdev, struct seq_file *m)
{
struct rtw89_btc *btc = &rtwdev->btc;
struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
struct rtw89_btc_rpt_cmn_info *pcinfo = NULL;
- struct rtw89_btc_fbtc_rpt_ctrl *prptctrl = NULL;
+ struct rtw89_btc_fbtc_rpt_ctrl_v1 *prptctrl = NULL;
struct rtw89_btc_cx *cx = &btc->cx;
struct rtw89_btc_dm *dm = &btc->dm;
struct rtw89_btc_wl_info *wl = &cx->wl;
@@ -6774,7 +7188,7 @@ static void _show_summary(struct rtw89_dev *rtwdev, struct seq_file *m)
pcinfo = &pfwinfo->rpt_ctrl.cinfo;
if (pcinfo->valid && !wl->status.map.lps && !wl->status.map.rf_off) {
- prptctrl = &pfwinfo->rpt_ctrl.finfo;
+ prptctrl = &pfwinfo->rpt_ctrl.finfo.v1;
seq_printf(m,
" %-15s : h2c_cnt=%d(fail:%d, fw_recv:%d), c2h_cnt=%d(fw_send:%d), ",
@@ -6858,11 +7272,11 @@ static void _show_summary(struct rtw89_dev *rtwdev, struct seq_file *m)
cnt[BTC_NCNT_CUSTOMERIZE]);
}
-static void _show_summary_v1(struct rtw89_dev *rtwdev, struct seq_file *m)
+static void _show_summary_v4(struct rtw89_dev *rtwdev, struct seq_file *m)
{
struct rtw89_btc *btc = &rtwdev->btc;
struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
- struct rtw89_btc_fbtc_rpt_ctrl_v1 *prptctrl;
+ struct rtw89_btc_fbtc_rpt_ctrl_v4 *prptctrl;
struct rtw89_btc_rpt_cmn_info *pcinfo;
struct rtw89_btc_cx *cx = &btc->cx;
struct rtw89_btc_dm *dm = &btc->dm;
@@ -6878,7 +7292,7 @@ static void _show_summary_v1(struct rtw89_dev *rtwdev, struct seq_file *m)
pcinfo = &pfwinfo->rpt_ctrl.cinfo;
if (pcinfo->valid && !wl->status.map.lps && !wl->status.map.rf_off) {
- prptctrl = &pfwinfo->rpt_ctrl.finfo_v1;
+ prptctrl = &pfwinfo->rpt_ctrl.finfo.v4;
seq_printf(m,
" %-15s : h2c_cnt=%d(fail:%d, fw_recv:%d), c2h_cnt=%d(fw_send:%d), ",
@@ -6970,11 +7384,127 @@ static void _show_summary_v1(struct rtw89_dev *rtwdev, struct seq_file *m)
cnt[BTC_NCNT_CUSTOMERIZE]);
}
+static void _show_summary_v5(struct rtw89_dev *rtwdev, struct seq_file *m)
+{
+ struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
+ struct rtw89_btc_fbtc_rpt_ctrl_v5 *prptctrl;
+ struct rtw89_btc_rpt_cmn_info *pcinfo;
+ struct rtw89_btc_cx *cx = &btc->cx;
+ struct rtw89_btc_dm *dm = &btc->dm;
+ struct rtw89_btc_wl_info *wl = &cx->wl;
+ u32 cnt_sum = 0, *cnt = btc->dm.cnt_notify;
+ u8 i;
+
+ if (!(dm->coex_info_map & BTC_COEX_INFO_SUMMARY))
+ return;
+
+ seq_puts(m, "========== [Statistics] ==========\n");
+
+ pcinfo = &pfwinfo->rpt_ctrl.cinfo;
+ if (pcinfo->valid && !wl->status.map.lps && !wl->status.map.rf_off) {
+ prptctrl = &pfwinfo->rpt_ctrl.finfo.v5;
+
+ seq_printf(m,
+ " %-15s : h2c_cnt=%d(fail:%d, fw_recv:%d), c2h_cnt=%d(fw_send:%d, len:%d), ",
+ "[summary]", pfwinfo->cnt_h2c, pfwinfo->cnt_h2c_fail,
+ le16_to_cpu(prptctrl->rpt_info.cnt_h2c),
+ pfwinfo->cnt_c2h,
+ le16_to_cpu(prptctrl->rpt_info.cnt_c2h),
+ le16_to_cpu(prptctrl->rpt_info.len_c2h));
+
+ seq_printf(m,
+ "rpt_cnt=%d(fw_send:%d), rpt_map=0x%x",
+ pfwinfo->event[BTF_EVNT_RPT],
+ le16_to_cpu(prptctrl->rpt_info.cnt),
+ le32_to_cpu(prptctrl->rpt_info.en));
+
+ if (dm->error.map.wl_fw_hang)
+ seq_puts(m, " (WL FW Hang!!)");
+ seq_puts(m, "\n");
+ seq_printf(m,
+ " %-15s : send_ok:%d, send_fail:%d, recv:%d, ",
+ "[mailbox]",
+ le32_to_cpu(prptctrl->bt_mbx_info.cnt_send_ok),
+ le32_to_cpu(prptctrl->bt_mbx_info.cnt_send_fail),
+ le32_to_cpu(prptctrl->bt_mbx_info.cnt_recv));
+
+ seq_printf(m,
+ "A2DP_empty:%d(stop:%d, tx:%d, ack:%d, nack:%d)\n",
+ le32_to_cpu(prptctrl->bt_mbx_info.a2dp.cnt_empty),
+ le32_to_cpu(prptctrl->bt_mbx_info.a2dp.cnt_flowctrl),
+ le32_to_cpu(prptctrl->bt_mbx_info.a2dp.cnt_tx),
+ le32_to_cpu(prptctrl->bt_mbx_info.a2dp.cnt_ack),
+ le32_to_cpu(prptctrl->bt_mbx_info.a2dp.cnt_nack));
+
+ seq_printf(m,
+ " %-15s : wl_rfk[req:%d/go:%d/reject:%d/tout:%d]",
+ "[RFK/LPS]", cx->cnt_wl[BTC_WCNT_RFK_REQ],
+ cx->cnt_wl[BTC_WCNT_RFK_GO],
+ cx->cnt_wl[BTC_WCNT_RFK_REJECT],
+ cx->cnt_wl[BTC_WCNT_RFK_TIMEOUT]);
+
+ seq_printf(m,
+ ", bt_rfk[req:%d]",
+ le16_to_cpu(prptctrl->bt_cnt[BTC_BCNT_RFK_REQ]));
+
+ seq_printf(m,
+ ", AOAC[RF_on:%d/RF_off:%d]",
+ le16_to_cpu(prptctrl->rpt_info.cnt_aoac_rf_on),
+ le16_to_cpu(prptctrl->rpt_info.cnt_aoac_rf_off));
+ } else {
+ seq_puts(m, "\n");
+ seq_printf(m,
+ " %-15s : h2c_cnt=%d(fail:%d), c2h_cnt=%d",
+ "[summary]", pfwinfo->cnt_h2c,
+ pfwinfo->cnt_h2c_fail, pfwinfo->cnt_c2h);
+ }
+
+ if (!pcinfo->valid || pfwinfo->len_mismch || pfwinfo->fver_mismch ||
+ pfwinfo->err[BTFRE_EXCEPTION]) {
+ seq_puts(m, "\n");
+ seq_printf(m,
+ " %-15s : WL FW rpt error!![rpt_ctrl_valid:%d/len:"
+ "0x%x/ver:0x%x/ex:%d/lps=%d/rf_off=%d]",
+ "[ERROR]", pcinfo->valid, pfwinfo->len_mismch,
+ pfwinfo->fver_mismch, pfwinfo->err[BTFRE_EXCEPTION],
+ wl->status.map.lps, wl->status.map.rf_off);
+ }
+
+ for (i = 0; i < BTC_NCNT_NUM; i++)
+ cnt_sum += dm->cnt_notify[i];
+
+ seq_puts(m, "\n");
+ seq_printf(m,
+ " %-15s : total=%d, show_coex_info=%d, power_on=%d, init_coex=%d, ",
+ "[notify_cnt]",
+ cnt_sum, cnt[BTC_NCNT_SHOW_COEX_INFO],
+ cnt[BTC_NCNT_POWER_ON], cnt[BTC_NCNT_INIT_COEX]);
+
+ seq_printf(m,
+ "power_off=%d, radio_state=%d, role_info=%d, wl_rfk=%d, wl_sta=%d",
+ cnt[BTC_NCNT_POWER_OFF], cnt[BTC_NCNT_RADIO_STATE],
+ cnt[BTC_NCNT_ROLE_INFO], cnt[BTC_NCNT_WL_RFK],
+ cnt[BTC_NCNT_WL_STA]);
+
+ seq_puts(m, "\n");
+ seq_printf(m,
+ " %-15s : scan_start=%d, scan_finish=%d, switch_band=%d, special_pkt=%d, ",
+ "[notify_cnt]",
+ cnt[BTC_NCNT_SCAN_START], cnt[BTC_NCNT_SCAN_FINISH],
+ cnt[BTC_NCNT_SWITCH_BAND], cnt[BTC_NCNT_SPECIAL_PACKET]);
+
+ seq_printf(m,
+ "timer=%d, control=%d, customerize=%d",
+ cnt[BTC_NCNT_TIMER], cnt[BTC_NCNT_CONTROL],
+ cnt[BTC_NCNT_CUSTOMERIZE]);
+}
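
Like the v1/v4 formatters, _show_summary_v5() converts every multi-byte report field from firmware (little-endian) wire order with le16_to_cpu()/le32_to_cpu() before printing. A minimal standalone sketch of the equivalent little-endian reads, with invented byte values, for readers less familiar with the idiom:

	#include <stdint.h>
	#include <stdio.h>

	static uint16_t le16_read(const uint8_t *p)
	{
		return (uint16_t)(p[0] | (p[1] << 8));
	}

	static uint32_t le32_read(const uint8_t *p)
	{
		return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
		       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
	}

	int main(void)
	{
		const uint8_t rpt[] = { 0x34, 0x12, 0x78, 0x56, 0x00, 0x00 };

		/* reads as 0x1234 and 0x00005678 on any host byte order */
		printf("cnt=0x%04x en=0x%08x\n", le16_read(rpt), le32_read(rpt + 2));
		return 0;
	}
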
+
void rtw89_btc_dump_info(struct rtw89_dev *rtwdev, struct seq_file *m)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_fw_suit *fw_suit = &rtwdev->fw.normal;
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_cx *cx = &btc->cx;
struct rtw89_btc_bt_info *bt = &cx->bt;
@@ -7003,8 +7533,41 @@ void rtw89_btc_dump_info(struct rtw89_dev *rtwdev, struct seq_file *m)
_show_dm_info(rtwdev, m);
_show_fw_dm_msg(rtwdev, m);
_show_mreg(rtwdev, m);
- if (chip->chip_id == RTL8852A)
- _show_summary(rtwdev, m);
- else
+ if (ver->fcxbtcrpt == 1)
_show_summary_v1(rtwdev, m);
+ else if (ver->fcxbtcrpt == 4)
+ _show_summary_v4(rtwdev, m);
+ else if (ver->fcxbtcrpt == 5)
+ _show_summary_v5(rtwdev, m);
+}
+
+void rtw89_coex_recognize_ver(struct rtw89_dev *rtwdev)
+{
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *btc_ver_def;
+ const struct rtw89_fw_suit *fw_suit;
+ u32 suit_ver_code;
+ int i;
+
+ fw_suit = rtw89_fw_suit_get(rtwdev, RTW89_FW_NORMAL);
+ suit_ver_code = RTW89_FW_SUIT_VER_CODE(fw_suit);
+
+ for (i = 0; i < ARRAY_SIZE(rtw89_btc_ver_defs); i++) {
+ btc_ver_def = &rtw89_btc_ver_defs[i];
+
+ if (chip->chip_id != btc_ver_def->chip_id)
+ continue;
+
+ if (suit_ver_code >= btc_ver_def->fw_ver_code) {
+ btc->ver = btc_ver_def;
+ goto out;
+ }
+ }
+
+ btc->ver = &rtw89_btc_ver_defs[RTW89_DEFAULT_BTC_VER_IDX];
+
+out:
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC] use version def[%d] = 0x%08x\n",
+ (int)(btc->ver - rtw89_btc_ver_defs), btc->ver->fw_ver_code);
}
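
rtw89_coex_recognize_ver() walks rtw89_btc_ver_defs and takes the first entry whose chip matches and whose firmware version cutoff does not exceed the loaded suit's version code, falling back to a default entry; the first-match loop implies the table lists newer cutoffs before older ones per chip. A standalone sketch of that lookup, with an invented table (the real one lives in coex.c):

	#include <stdint.h>
	#include <stdio.h>

	struct btc_ver { int chip_id; uint32_t fw_ver_code; int fcxbtcrpt; };

	/* illustrative table: newest firmware cutoff first, per chip */
	static const struct btc_ver defs[] = {
		{ .chip_id = 2, .fw_ver_code = 0x001b0000, .fcxbtcrpt = 5 },
		{ .chip_id = 2, .fw_ver_code = 0x00000000, .fcxbtcrpt = 4 },
		{ .chip_id = 1, .fw_ver_code = 0x00000000, .fcxbtcrpt = 1 },
	};
	#define DEFAULT_IDX 2 /* stand-in for RTW89_DEFAULT_BTC_VER_IDX */

	static const struct btc_ver *recognize(int chip_id, uint32_t suit_ver)
	{
		for (unsigned int i = 0; i < sizeof(defs) / sizeof(defs[0]); i++) {
			if (defs[i].chip_id != chip_id)
				continue;
			if (suit_ver >= defs[i].fw_ver_code)
				return &defs[i]; /* first (newest) match wins */
		}
		return &defs[DEFAULT_IDX];
	}

	int main(void)
	{
		printf("fcxbtcrpt=%d\n", recognize(2, 0x001c0000)->fcxbtcrpt);
		return 0;
	}
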
diff --git a/drivers/net/wireless/realtek/rtw89/coex.h b/drivers/net/wireless/realtek/rtw89/coex.h
index ca16afa97ec0..401fb55df82b 100644
--- a/drivers/net/wireless/realtek/rtw89/coex.h
+++ b/drivers/net/wireless/realtek/rtw89/coex.h
@@ -164,6 +164,7 @@ void rtw89_coex_rfk_chk_work(struct work_struct *work);
void rtw89_coex_power_on(struct rtw89_dev *rtwdev);
void rtw89_btc_set_policy(struct rtw89_dev *rtwdev, u16 policy_type);
void rtw89_btc_set_policy_v1(struct rtw89_dev *rtwdev, u16 policy_type);
+void rtw89_coex_recognize_ver(struct rtw89_dev *rtwdev);
static inline u8 rtw89_btc_phymap(struct rtw89_dev *rtwdev,
enum rtw89_phy_idx phy_idx,
diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
index 931aff8b5dc9..f09361bc4a4d 100644
--- a/drivers/net/wireless/realtek/rtw89/core.c
+++ b/drivers/net/wireless/realtek/rtw89/core.c
@@ -498,7 +498,8 @@ static u16 rtw89_core_get_mgmt_rate(struct rtw89_dev *rtwdev,
const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, RTW89_SUB_ENTITY_0);
u16 lowest_rate;
- if (tx_info->flags & IEEE80211_TX_CTL_NO_CCK_RATE || vif->p2p)
+ if (tx_info->flags & IEEE80211_TX_CTL_NO_CCK_RATE ||
+ (vif && vif->p2p))
lowest_rate = RTW89_HW_RATE_OFDM6;
else if (chan->band_type == RTW89_BAND_2G)
lowest_rate = RTW89_HW_RATE_CCK1;
@@ -511,6 +512,21 @@ static u16 rtw89_core_get_mgmt_rate(struct rtw89_dev *rtwdev,
return __ffs(vif->bss_conf.basic_rates) + lowest_rate;
}
+static u8 rtw89_core_tx_get_mac_id(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req)
+{
+ struct ieee80211_vif *vif = tx_req->vif;
+ struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ struct ieee80211_sta *sta = tx_req->sta;
+ struct rtw89_sta *rtwsta;
+
+ if (!sta)
+ return rtwvif->mac_id;
+
+ rtwsta = (struct rtw89_sta *)sta->drv_priv;
+ return rtwsta->mac_id;
+}
+
static void
rtw89_core_tx_update_mgmt_info(struct rtw89_dev *rtwdev,
struct rtw89_core_tx_request *tx_req)
@@ -527,6 +543,7 @@ rtw89_core_tx_update_mgmt_info(struct rtw89_dev *rtwdev,
desc_info->qsel = qsel;
desc_info->ch_dma = ch_dma;
desc_info->port = desc_info->hiq ? rtwvif->port : 0;
+ desc_info->mac_id = rtw89_core_tx_get_mac_id(rtwdev, tx_req);
desc_info->hw_ssn_sel = RTW89_MGMT_HW_SSN_SEL;
desc_info->hw_seq_mode = RTW89_MGMT_HW_SEQ_MODE;
@@ -669,27 +686,14 @@ desc_bk:
desc_info->bk = true;
}
-static u8 rtw89_core_tx_get_mac_id(struct rtw89_dev *rtwdev,
- struct rtw89_core_tx_request *tx_req)
-{
- struct ieee80211_vif *vif = tx_req->vif;
- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
- struct ieee80211_sta *sta = tx_req->sta;
- struct rtw89_sta *rtwsta;
-
- if (!sta)
- return rtwvif->mac_id;
-
- rtwsta = (struct rtw89_sta *)sta->drv_priv;
- return rtwsta->mac_id;
-}
-
static void
rtw89_core_tx_update_data_info(struct rtw89_dev *rtwdev,
struct rtw89_core_tx_request *tx_req)
{
struct ieee80211_vif *vif = tx_req->vif;
+ struct ieee80211_sta *sta = tx_req->sta;
struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
struct rtw89_phy_rate_pattern *rate_pattern = &rtwvif->rate_pattern;
const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, RTW89_SUB_ENTITY_0);
struct rtw89_tx_desc_info *desc_info = &tx_req->desc_info;
@@ -707,6 +711,7 @@ rtw89_core_tx_update_data_info(struct rtw89_dev *rtwdev,
desc_info->qsel = qsel;
desc_info->mac_id = rtw89_core_tx_get_mac_id(rtwdev, tx_req);
desc_info->port = desc_info->hiq ? rtwvif->port : 0;
+ desc_info->er_cap = rtwsta ? rtwsta->er_cap : false;
/* enable wd_info for AMPDU */
desc_info->en_wd_info = true;
@@ -1006,7 +1011,9 @@ static __le32 rtw89_build_txwd_info0(struct rtw89_tx_desc_info *desc_info)
static __le32 rtw89_build_txwd_info0_v1(struct rtw89_tx_desc_info *desc_info)
{
u32 dword = FIELD_PREP(RTW89_TXWD_INFO0_DISDATAFB, desc_info->dis_data_fb) |
- FIELD_PREP(RTW89_TXWD_INFO0_MULTIPORT_ID, desc_info->port);
+ FIELD_PREP(RTW89_TXWD_INFO0_MULTIPORT_ID, desc_info->port) |
+ FIELD_PREP(RTW89_TXWD_INFO0_DATA_ER, desc_info->er_cap) |
+ FIELD_PREP(RTW89_TXWD_INFO0_DATA_BW_ER, 0);
return cpu_to_le32(dword);
}
@@ -1757,7 +1764,8 @@ static enum rtw89_ps_mode rtw89_update_ps_mode(struct rtw89_dev *rtwdev)
RTW89_CHK_FW_FEATURE(NO_DEEP_PS, &rtwdev->fw))
return RTW89_PS_MODE_NONE;
- if (chip->ps_mode_supported & BIT(RTW89_PS_MODE_PWR_GATED))
+ if ((chip->ps_mode_supported & BIT(RTW89_PS_MODE_PWR_GATED)) &&
+ !RTW89_CHK_FW_FEATURE(NO_LPS_PG, &rtwdev->fw))
return RTW89_PS_MODE_PWR_GATED;
if (chip->ps_mode_supported & BIT(RTW89_PS_MODE_CLK_GATED))
@@ -2199,8 +2207,9 @@ static bool rtw89_traffic_stats_track(struct rtw89_dev *rtwdev)
static void rtw89_vif_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
{
- if (rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION &&
- rtwvif->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT)
+ if ((rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION &&
+ rtwvif->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT) ||
+ rtwvif->tdls_peer)
return;
if (rtwvif->stats.tx_tfc_lv == RTW89_TFC_IDLE &&
@@ -2426,6 +2435,7 @@ int rtw89_core_sta_add(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
int i;
+ int ret;
rtwsta->rtwdev = rtwdev;
rtwsta->rtwvif = rtwvif;
@@ -2450,6 +2460,21 @@ int rtw89_core_sta_add(struct rtw89_dev *rtwdev,
RTW89_MAX_MAC_ID_NUM);
if (rtwsta->mac_id == RTW89_MAX_MAC_ID_NUM)
return -ENOSPC;
+
+ ret = rtw89_mac_set_macid_pause(rtwdev, rtwsta->mac_id, false);
+ if (ret) {
+ rtw89_core_release_bit_map(rtwdev->mac_id_map, rtwsta->mac_id);
+ rtw89_warn(rtwdev, "failed to send h2c macid pause\n");
+ return ret;
+ }
+
+ ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, rtwsta,
+ RTW89_ROLE_CREATE);
+ if (ret) {
+ rtw89_core_release_bit_map(rtwdev->mac_id_map, rtwsta->mac_id);
+ rtw89_warn(rtwdev, "failed to send h2c role info\n");
+ return ret;
+ }
}
return 0;
@@ -2459,9 +2484,12 @@ int rtw89_core_sta_disassoc(struct rtw89_dev *rtwdev,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta)
{
+ struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
rtwdev->total_sta_assoc--;
+ if (sta->tdls)
+ rtwvif->tdls_peer--;
rtwsta->disassoc = true;
return 0;
@@ -2484,8 +2512,10 @@ int rtw89_core_sta_disconnect(struct rtw89_dev *rtwdev,
if (sta->tdls)
rtw89_cam_deinit_bssid_cam(rtwdev, &rtwsta->bssid_cam);
- if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls)
+ if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) {
rtw89_vif_type_mapping(vif, false);
+ rtw89_fw_release_general_pkt_list_vif(rtwdev, rtwvif, true);
+ }
ret = rtw89_fw_h2c_assoc_cmac_tbl(rtwdev, vif, sta);
if (ret) {
@@ -2499,14 +2529,6 @@ int rtw89_core_sta_disconnect(struct rtw89_dev *rtwdev,
return ret;
}
- if (vif->type == NL80211_IFTYPE_AP || sta->tdls) {
- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, rtwsta, RTW89_ROLE_REMOVE);
- if (ret) {
- rtw89_warn(rtwdev, "failed to send h2c role info\n");
- return ret;
- }
- }
-
/* update cam aid mac_id net_type */
ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
if (ret) {
@@ -2527,18 +2549,6 @@ int rtw89_core_sta_assoc(struct rtw89_dev *rtwdev,
int ret;
if (vif->type == NL80211_IFTYPE_AP || sta->tdls) {
- ret = rtw89_mac_set_macid_pause(rtwdev, rtwsta->mac_id, false);
- if (ret) {
- rtw89_warn(rtwdev, "failed to send h2c macid pause\n");
- return ret;
- }
-
- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, rtwsta, RTW89_ROLE_CREATE);
- if (ret) {
- rtw89_warn(rtwdev, "failed to send h2c role info\n");
- return ret;
- }
-
if (sta->tdls) {
ret = rtw89_cam_init_bssid_cam(rtwdev, rtwvif, bssid_cam, sta->addr);
if (ret) {
@@ -2573,22 +2583,30 @@ int rtw89_core_sta_assoc(struct rtw89_dev *rtwdev,
return ret;
}
- ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwsta->mac_id);
- if (ret) {
- rtw89_warn(rtwdev, "failed to send h2c general packet\n");
- return ret;
- }
-
rtwdev->total_sta_assoc++;
+ if (sta->tdls)
+ rtwvif->tdls_peer++;
rtw89_phy_ra_assoc(rtwdev, sta);
rtw89_mac_bf_assoc(rtwdev, vif, sta);
rtw89_mac_bf_monitor_calc(rtwdev, sta, false);
if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) {
+ struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
+
+ if (bss_conf->he_support &&
+ !(bss_conf->he_oper.params & IEEE80211_HE_OPERATION_ER_SU_DISABLE))
+ rtwsta->er_cap = true;
+
rtw89_btc_ntfy_role_info(rtwdev, rtwvif, rtwsta,
BTC_ROLE_MSTS_STA_CONN_END);
rtw89_core_get_no_ul_ofdma_htc(rtwdev, &rtwsta->htc_template);
rtw89_phy_ul_tb_assoc(rtwdev, rtwvif);
+
+ ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwvif, rtwsta->mac_id);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c general packet\n");
+ return ret;
+ }
}
return ret;
@@ -2600,13 +2618,22 @@ int rtw89_core_sta_remove(struct rtw89_dev *rtwdev,
{
struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+ int ret;
if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls)
rtw89_btc_ntfy_role_info(rtwdev, rtwvif, rtwsta,
BTC_ROLE_MSTS_STA_DIS_CONN);
- else if (vif->type == NL80211_IFTYPE_AP || sta->tdls)
+ else if (vif->type == NL80211_IFTYPE_AP || sta->tdls) {
rtw89_core_release_bit_map(rtwdev->mac_id_map, rtwsta->mac_id);
+ ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, rtwsta,
+ RTW89_ROLE_REMOVE);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c role info\n");
+ return ret;
+ }
+ }
+
return 0;
}
@@ -3113,7 +3140,6 @@ int rtw89_core_init(struct rtw89_dev *rtwdev)
continue;
INIT_LIST_HEAD(&rtwdev->scan_info.pkt_list[band]);
}
- INIT_LIST_HEAD(&rtwdev->wow.pkt_list);
INIT_WORK(&rtwdev->ba_work, rtw89_core_ba_work);
INIT_WORK(&rtwdev->txq_work, rtw89_core_txq_work);
INIT_DELAYED_WORK(&rtwdev->txq_reinvoke_work, rtw89_core_txq_reinvoke_work);
@@ -3124,6 +3150,8 @@ int rtw89_core_init(struct rtw89_dev *rtwdev)
INIT_DELAYED_WORK(&rtwdev->cfo_track_work, rtw89_phy_cfo_track_work);
INIT_DELAYED_WORK(&rtwdev->forbid_ba_work, rtw89_forbid_ba_work);
rtwdev->txq_wq = alloc_workqueue("rtw89_tx_wq", WQ_UNBOUND | WQ_HIGHPRI, 0);
+ if (!rtwdev->txq_wq)
+ return -ENOMEM;
spin_lock_init(&rtwdev->ba_lock);
spin_lock_init(&rtwdev->rpwm_lock);
mutex_init(&rtwdev->mutex);
@@ -3138,7 +3166,6 @@ int rtw89_core_init(struct rtw89_dev *rtwdev)
rtw89_core_ppdu_sts_init(rtwdev);
rtw89_traffic_stats_init(rtwdev, &rtwdev->stats);
- rtwdev->ps_mode = rtw89_update_ps_mode(rtwdev);
rtwdev->hal.rx_fltr = DEFAULT_AX_RX_FLTR;
INIT_WORK(&btc->eapol_notify_work, rtw89_btc_ntfy_eapol_packet_work);
@@ -3149,6 +3176,7 @@ int rtw89_core_init(struct rtw89_dev *rtwdev)
ret = rtw89_load_firmware(rtwdev);
if (ret) {
rtw89_warn(rtwdev, "no firmware loaded\n");
+ destroy_workqueue(rtwdev->txq_wq);
return ret;
}
rtw89_ser_init(rtwdev);
@@ -3293,6 +3321,8 @@ int rtw89_chip_info_setup(struct rtw89_dev *rtwdev)
if (ret)
return ret;
+ rtwdev->ps_mode = rtw89_update_ps_mode(rtwdev);
+
return 0;
}
EXPORT_SYMBOL(rtw89_chip_info_setup);
diff --git a/drivers/net/wireless/realtek/rtw89/core.h b/drivers/net/wireless/realtek/rtw89/core.h
index 2badb96d2ae3..41365ffb7e5e 100644
--- a/drivers/net/wireless/realtek/rtw89/core.h
+++ b/drivers/net/wireless/realtek/rtw89/core.h
@@ -816,6 +816,7 @@ struct rtw89_tx_desc_info {
#define RTW89_MGMT_HW_SEQ_MODE 1
bool hiq;
u8 port;
+ bool er_cap;
};
struct rtw89_core_tx_request {
@@ -1263,6 +1264,7 @@ union rtw89_btc_bt_state_map {
#define BTC_BT_RSSI_THMAX 4
#define BTC_BT_AFH_GROUP 12
+#define BTC_BT_AFH_LE_GROUP 5
struct rtw89_btc_bt_link_info {
struct rtw89_btc_u8_sta_chg profile_cnt;
@@ -1278,6 +1280,7 @@ struct rtw89_btc_bt_link_info {
u8 golden_rx_shift[BTC_PROFILE_MAX];
u8 rssi_state[BTC_BT_RSSI_THMAX];
u8 afh_map[BTC_BT_AFH_GROUP];
+ u8 afh_map_le[BTC_BT_AFH_LE_GROUP];
u32 role_sw: 1;
u32 slave_role: 1;
@@ -1437,7 +1440,7 @@ struct rtw89_btc_cx {
};
struct rtw89_btc_fbtc_tdma {
- u8 type; /* chip_info::fcxtdma_ver */
+ u8 type; /* btc_ver::fcxtdma */
u8 rxflctrl;
u8 txpause;
u8 wtgle_n;
@@ -1447,13 +1450,18 @@ struct rtw89_btc_fbtc_tdma {
u8 option_ctrl;
} __packed;
-struct rtw89_btc_fbtc_tdma_v1 {
- u8 fver; /* chip_info::fcxtdma_ver */
+struct rtw89_btc_fbtc_tdma_v3 {
+ u8 fver; /* btc_ver::fcxtdma */
u8 rsvd;
__le16 rsvd1;
struct rtw89_btc_fbtc_tdma tdma;
} __packed;
+union rtw89_btc_fbtc_tdma_le32 {
+ struct rtw89_btc_fbtc_tdma v1;
+ struct rtw89_btc_fbtc_tdma_v3 v3;
+};
+
#define CXMREG_MAX 30
#define FCXMAX_STEP 255 /* STEP trace record cnt, Max: 65535, default: 255 */
#define BTC_CYCLE_SLOT_MAX 48 /* must be even number, non-zero */
@@ -1472,8 +1480,8 @@ enum rtw89_btc_bt_sta_counter {
BTC_BCNT_STA_MAX
};
-struct rtw89_btc_fbtc_rpt_ctrl {
- u16 fver; /* chip_info::fcxbtcrpt_ver */
+struct rtw89_btc_fbtc_rpt_ctrl_v1 {
+ u16 fver; /* btc_ver::fcxbtcrpt */
u16 rpt_cnt; /* tmr counters */
u32 wl_fw_coex_ver; /* match which driver's coex version */
u32 wl_fw_cx_offload;
@@ -1504,6 +1512,20 @@ struct rtw89_btc_fbtc_rpt_ctrl_info {
__le32 cnt_aoac_rf_off; /* rf-off counter for aoac switch notify */
} __packed;
+struct rtw89_btc_fbtc_rpt_ctrl_info_v5 {
+ __le32 cx_ver; /* match which driver's coex version */
+ __le32 fw_ver;
+ __le32 en; /* report map */
+
+ __le16 cnt; /* fw report counter */
+ __le16 cnt_c2h; /* fw send c2h counter */
+ __le16 cnt_h2c; /* fw recv h2c counter */
+ __le16 len_c2h; /* The total length of the last C2H */
+
+ __le16 cnt_aoac_rf_on; /* rf-on counter for aoac switch notify */
+ __le16 cnt_aoac_rf_off; /* rf-off counter for aoac switch notify */
+} __packed;
+
struct rtw89_btc_fbtc_rpt_ctrl_wl_fw_info {
__le32 cx_ver; /* match which driver's coex version */
__le32 cx_offload;
@@ -1525,7 +1547,7 @@ struct rtw89_btc_fbtc_rpt_ctrl_bt_mailbox {
struct rtw89_btc_fbtc_rpt_ctrl_a2dp_empty a2dp;
} __packed;
-struct rtw89_btc_fbtc_rpt_ctrl_v1 {
+struct rtw89_btc_fbtc_rpt_ctrl_v4 {
u8 fver;
u8 rsvd;
__le16 rsvd1;
@@ -1536,6 +1558,24 @@ struct rtw89_btc_fbtc_rpt_ctrl_v1 {
struct rtw89_mac_ax_gnt gnt_val[RTW89_PHY_MAX];
} __packed;
+struct rtw89_btc_fbtc_rpt_ctrl_v5 {
+ u8 fver;
+ u8 rsvd;
+ __le16 rsvd1;
+
+ u8 gnt_val[RTW89_PHY_MAX][4];
+ __le16 bt_cnt[BTC_BCNT_STA_MAX];
+
+ struct rtw89_btc_fbtc_rpt_ctrl_info_v5 rpt_info;
+ struct rtw89_btc_fbtc_rpt_ctrl_bt_mailbox bt_mbx_info;
+} __packed;
+
+union rtw89_btc_fbtc_rpt_ctrl_ver_info {
+ struct rtw89_btc_fbtc_rpt_ctrl_v1 v1;
+ struct rtw89_btc_fbtc_rpt_ctrl_v4 v4;
+ struct rtw89_btc_fbtc_rpt_ctrl_v5 v5;
+};
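
Collapsing the per-version report structs into union rtw89_btc_fbtc_rpt_ctrl_ver_info gives each report a single storage slot whose interpretation is chosen at run time from the negotiated btc_ver, replacing the earlier anonymous finfo/finfo_v1 pairs. A tiny sketch of the pattern with invented layouts (not the real report formats):

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	struct rpt_v4 { uint8_t fver; uint8_t cnt; };
	struct rpt_v5 { uint8_t fver; uint8_t rsvd; uint16_t cnt; };

	union rpt_info { struct rpt_v4 v4; struct rpt_v5 v5; };

	static unsigned int rpt_cnt(const union rpt_info *r, int fcxbtcrpt)
	{
		if (fcxbtcrpt == 4)
			return r->v4.cnt;
		if (fcxbtcrpt == 5)
			return r->v5.cnt;
		return 0; /* unknown version: ignore the report */
	}

	int main(void)
	{
		union rpt_info r;
		struct rpt_v5 v5 = { .fver = 5, .cnt = 42 };

		memcpy(&r, &v5, sizeof(v5)); /* as if copied from a C2H buffer */
		printf("cnt=%u\n", rpt_cnt(&r, 5));
		return 0;
	}
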
+
enum rtw89_fbtc_ext_ctrl_type {
CXECTL_OFF = 0x0, /* tdma off */
CXECTL_B2 = 0x1, /* allow B2 (beacon-early) */
@@ -1571,6 +1611,36 @@ enum rtw89_btc_cxst_state {
CXST_MAX = 0x12,
};
+enum rtw89_btc_cxevnt {
+ CXEVNT_TDMA_ENTRY = 0x0,
+ CXEVNT_WL_TMR,
+ CXEVNT_B1_TMR,
+ CXEVNT_B2_TMR,
+ CXEVNT_B3_TMR,
+ CXEVNT_B4_TMR,
+ CXEVNT_W2B_TMR,
+ CXEVNT_B2W_TMR,
+ CXEVNT_BCN_EARLY,
+ CXEVNT_A2DP_EMPTY,
+ CXEVNT_LK_END,
+ CXEVNT_RX_ISR,
+ CXEVNT_RX_FC0,
+ CXEVNT_RX_FC1,
+ CXEVNT_BT_RELINK,
+ CXEVNT_BT_RETRY,
+ CXEVNT_E2G,
+ CXEVNT_E5G,
+ CXEVNT_EBT,
+ CXEVNT_ENULL,
+ CXEVNT_DRV_WLK,
+ CXEVNT_BCN_OK,
+ CXEVNT_BT_CHANGE,
+ CXEVNT_EBT_EXTEND,
+ CXEVNT_E2G_NULL1,
+ CXEVNT_B1FDD_TMR,
+ CXEVNT_MAX
+};
+
enum {
CXBCN_ALL = 0x0,
CXBCN_ALL_OK,
@@ -1604,9 +1674,14 @@ enum { /* STEP TYPE */
CXSTEP_MAX,
};
+enum rtw89_btc_afh_map_type { /* AFH MAP TYPE */
+ RPT_BT_AFH_SEQ_LEGACY = 0x10,
+ RPT_BT_AFH_SEQ_LE = 0x20
+};
+
#define BTC_DBG_MAX1 32
struct rtw89_btc_fbtc_gpio_dbg {
- u8 fver; /* chip_info::fcxgpiodbg_ver */
+ u8 fver; /* btc_ver::fcxgpiodbg */
u8 rsvd;
u16 rsvd2;
u32 en_map; /* which debug signal (see btc_wl_gpio_debug) is enabled */
@@ -1615,7 +1690,7 @@ struct rtw89_btc_fbtc_gpio_dbg {
} __packed;
struct rtw89_btc_fbtc_mreg_val {
- u8 fver; /* chip_info::fcxmreg_ver */
+ u8 fver; /* btc_ver::fcxmreg */
u8 reg_num;
__le16 rsvd;
__le32 mreg_val[CXMREG_MAX];
@@ -1638,7 +1713,7 @@ struct rtw89_btc_fbtc_slot {
} __packed;
struct rtw89_btc_fbtc_slots {
- u8 fver; /* chip_info::fcxslots_ver */
+ u8 fver; /* btc_ver::fcxslots */
u8 tbl_num;
__le16 rsvd;
__le32 update_map;
@@ -1651,8 +1726,8 @@ struct rtw89_btc_fbtc_step {
__le16 difft;
} __packed;
-struct rtw89_btc_fbtc_steps {
- u8 fver; /* chip_info::fcxstep_ver */
+struct rtw89_btc_fbtc_steps_v2 {
+ u8 fver; /* btc_ver::fcxstep */
u8 rsvd;
__le16 cnt;
__le16 pos_old;
@@ -1660,7 +1735,7 @@ struct rtw89_btc_fbtc_steps {
struct rtw89_btc_fbtc_step step[FCXMAX_STEP];
} __packed;
-struct rtw89_btc_fbtc_steps_v1 {
+struct rtw89_btc_fbtc_steps_v3 {
u8 fver;
u8 en;
__le16 rsvd;
@@ -1668,8 +1743,13 @@ struct rtw89_btc_fbtc_steps_v1 {
struct rtw89_btc_fbtc_step step[FCXMAX_STEP];
} __packed;
-struct rtw89_btc_fbtc_cysta { /* statistics for cycles */
- u8 fver; /* chip_info::fcxcysta_ver */
+union rtw89_btc_fbtc_steps_info {
+ struct rtw89_btc_fbtc_steps_v2 v2;
+ struct rtw89_btc_fbtc_steps_v3 v3;
+};
+
+struct rtw89_btc_fbtc_cysta_v2 { /* statistics for cycles */
+ u8 fver; /* btc_ver::fcxcysta */
u8 rsvd;
__le16 cycles; /* total cycle number */
__le16 cycles_a2dp[CXT_FLCTRL_MAX];
@@ -1717,6 +1797,17 @@ struct rtw89_btc_fbtc_a2dp_trx_stat {
u8 rsvd2;
} __packed;
+struct rtw89_btc_fbtc_a2dp_trx_stat_v4 {
+ u8 empty_cnt;
+ u8 retry_cnt;
+ u8 tx_rate;
+ u8 tx_cnt;
+ u8 ack_cnt;
+ u8 nack_cnt;
+ u8 no_empty_cnt;
+ u8 rsvd;
+} __packed;
+
struct rtw89_btc_fbtc_cycle_a2dp_empty_info {
__le16 cnt; /* a2dp empty cnt */
__le16 cnt_timeout; /* a2dp empty timeout cnt*/
@@ -1730,7 +1821,35 @@ struct rtw89_btc_fbtc_cycle_leak_info {
__le16 tmax; /* max leak-slot time */
} __packed;
-struct rtw89_btc_fbtc_cysta_v1 { /* statistics for cycles */
+#define RTW89_BTC_FDDT_PHASE_CYCLE GENMASK(9, 0)
+#define RTW89_BTC_FDDT_TRAIN_STEP GENMASK(15, 10)
+
+struct rtw89_btc_fbtc_cycle_fddt_info {
+ __le16 train_cycle;
+ __le16 tp;
+
+ s8 tx_power; /* absolute Tx power (dBm), 0xff-> no BTC control */
+ s8 bt_tx_power; /* decrease Tx power (dB) */
+ s8 bt_rx_gain; /* LNA constrain level */
+ u8 no_empty_cnt;
+
+ u8 rssi; /* [7:4] -> bt_rssi_level, [3:0]-> wl_rssi_level */
+ u8 cn; /* condition_num */
+ u8 train_status; /* [7:4]-> train-state, [3:0]-> train-phase */
+ u8 train_result; /* refer to enum btc_fddt_check_map */
+} __packed;
+
+#define RTW89_BTC_FDDT_CELL_TRAIN_STATE GENMASK(3, 0)
+#define RTW89_BTC_FDDT_CELL_TRAIN_PHASE GENMASK(7, 4)
+
+struct rtw89_btc_fbtc_fddt_cell_status {
+ s8 wl_tx_pwr;
+ s8 bt_tx_pwr;
+ s8 bt_rx_gain;
+ u8 state_phase; /* [0:3] train state, [4:7] train phase */
+} __packed;
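
state_phase packs two 4-bit fields, which the driver unpacks with FIELD_GET() and the RTW89_BTC_FDDT_CELL_TRAIN_* masks above. A plain-C equivalent, re-implementing the kernel's GENMASK()/FIELD_GET() for userspace (gcc/clang __builtin_ctz assumed):

	#include <stdint.h>
	#include <stdio.h>

	#define GENMASK_U8(h, l) ((uint8_t)(((1u << ((h) - (l) + 1)) - 1) << (l)))
	#define FIELD_GET_U8(mask, v) (((v) & (mask)) >> __builtin_ctz(mask))

	#define CELL_TRAIN_STATE GENMASK_U8(3, 0)
	#define CELL_TRAIN_PHASE GENMASK_U8(7, 4)

	int main(void)
	{
		uint8_t state_phase = 0x35; /* phase 3, state 5 */

		printf("state=%u phase=%u\n",
		       (unsigned int)FIELD_GET_U8(CELL_TRAIN_STATE, state_phase),
		       (unsigned int)FIELD_GET_U8(CELL_TRAIN_PHASE, state_phase));
		return 0;
	}
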
+
+struct rtw89_btc_fbtc_cysta_v3 { /* statistics for cycles */
u8 fver;
u8 rsvd;
__le16 cycles; /* total cycle number */
@@ -1748,8 +1867,41 @@ struct rtw89_btc_fbtc_cysta_v1 { /* statistics for cycles */
__le32 except_map;
} __packed;
-struct rtw89_btc_fbtc_cynullsta { /* cycle null statistics */
- u8 fver; /* chip_info::fcxnullsta_ver */
+#define FDD_TRAIN_WL_DIRECTION 2
+#define FDD_TRAIN_WL_RSSI_LEVEL 5
+#define FDD_TRAIN_BT_RSSI_LEVEL 5
+
+struct rtw89_btc_fbtc_cysta_v4 { /* statistics for cycles */
+ u8 fver;
+ u8 rsvd;
+ u8 collision_cnt; /* counter for event/timer occurring at the same time */
+ u8 except_cnt;
+
+ __le16 skip_cnt;
+ __le16 cycles; /* total cycle number */
+
+ __le16 slot_step_time[BTC_CYCLE_SLOT_MAX]; /* record the wl/bt slot time */
+ __le16 slot_cnt[CXST_MAX]; /* slot count */
+ __le16 bcn_cnt[CXBCN_MAX];
+ struct rtw89_btc_fbtc_cycle_time_info cycle_time;
+ struct rtw89_btc_fbtc_cycle_leak_info leak_slot;
+ struct rtw89_btc_fbtc_cycle_a2dp_empty_info a2dp_ept;
+ struct rtw89_btc_fbtc_a2dp_trx_stat_v4 a2dp_trx[BTC_CYCLE_SLOT_MAX];
+ struct rtw89_btc_fbtc_cycle_fddt_info fddt_trx[BTC_CYCLE_SLOT_MAX];
+ struct rtw89_btc_fbtc_fddt_cell_status fddt_cells[FDD_TRAIN_WL_DIRECTION]
+ [FDD_TRAIN_WL_RSSI_LEVEL]
+ [FDD_TRAIN_BT_RSSI_LEVEL];
+ __le32 except_map;
+} __packed;
+
+union rtw89_btc_fbtc_cysta_info {
+ struct rtw89_btc_fbtc_cysta_v2 v2;
+ struct rtw89_btc_fbtc_cysta_v3 v3;
+ struct rtw89_btc_fbtc_cysta_v4 v4;
+};
+
+struct rtw89_btc_fbtc_cynullsta_v1 { /* cycle null statistics */
+ u8 fver; /* btc_ver::fcxnullsta */
u8 rsvd;
__le16 rsvd2;
__le32 max_t[2]; /* max_t for 0:null0/1:null1 */
@@ -1757,8 +1909,8 @@ struct rtw89_btc_fbtc_cynullsta { /* cycle null statistics */
__le32 result[2][4]; /* 0:fail, 1:ok, 2:on_time, 3:retry */
} __packed;
-struct rtw89_btc_fbtc_cynullsta_v1 { /* cycle null statistics */
- u8 fver; /* chip_info::fcxnullsta_ver */
+struct rtw89_btc_fbtc_cynullsta_v2 { /* cycle null statistics */
+ u8 fver; /* btc_ver::fcxnullsta */
u8 rsvd;
__le16 rsvd2;
__le32 max_t[2]; /* max_t for 0:null0/1:null1 */
@@ -1766,8 +1918,13 @@ struct rtw89_btc_fbtc_cynullsta_v1 { /* cycle null statistics */
__le32 result[2][5]; /* 0:fail, 1:ok, 2:on_time, 3:retry, 4:tx */
} __packed;
+union rtw89_btc_fbtc_cynullsta_info {
+ struct rtw89_btc_fbtc_cynullsta_v1 v1; /* info from fw */
+ struct rtw89_btc_fbtc_cynullsta_v2 v2;
+};
+
struct rtw89_btc_fbtc_btver {
- u8 fver; /* chip_info::fcxbtver_ver */
+ u8 fver; /* btc_ver::fcxbtver */
u8 rsvd;
__le16 rsvd2;
__le32 coex_ver; /*bit[15:8]->shared, bit[7:0]->non-shared */
@@ -1776,14 +1933,14 @@ struct rtw89_btc_fbtc_btver {
} __packed;
struct rtw89_btc_fbtc_btscan {
- u8 fver; /* chip_info::fcxbtscan_ver */
+ u8 fver; /* btc_ver::fcxbtscan */
u8 rsvd;
__le16 rsvd2;
u8 scan[6];
} __packed;
struct rtw89_btc_fbtc_btafh {
- u8 fver; /* chip_info::fcxbtafh_ver */
+ u8 fver; /* btc_ver::fcxbtafh */
u8 rsvd;
__le16 rsvd2;
u8 afh_l[4]; /*bit0:2402, bit1: 2403.... bit31:2433 */
@@ -1791,8 +1948,20 @@ struct rtw89_btc_fbtc_btafh {
u8 afh_h[4]; /*bit0:2466, bit1:2467......bit14:2480 */
} __packed;
+struct rtw89_btc_fbtc_btafh_v2 {
+ u8 fver; /* btc_ver::fcxbtafh */
+ u8 rsvd;
+ u8 rsvd2;
+ u8 map_type;
+ u8 afh_l[4];
+ u8 afh_m[4];
+ u8 afh_h[4];
+ u8 afh_le_a[4];
+ u8 afh_le_b[4];
+} __packed;
+
struct rtw89_btc_fbtc_btdevinfo {
- u8 fver; /* chip_info::fcxbtdevinfo_ver */
+ u8 fver; /* btc_ver::fcxbtdevinfo */
u8 rsvd;
__le16 vendor_id;
__le32 dev_name; /* only 24 bits valid */
@@ -1911,20 +2080,19 @@ struct rtw89_btc_rpt_cmn_info {
u8 valid;
} __packed;
+union rtw89_btc_fbtc_btafh_info {
+ struct rtw89_btc_fbtc_btafh v1;
+ struct rtw89_btc_fbtc_btafh_v2 v2;
+};
+
struct rtw89_btc_report_ctrl_state {
struct rtw89_btc_rpt_cmn_info cinfo; /* common info, by driver */
- union {
- struct rtw89_btc_fbtc_rpt_ctrl finfo; /* info from fw for 52A*/
- struct rtw89_btc_fbtc_rpt_ctrl_v1 finfo_v1; /* info from fw for 52C*/
- };
+ union rtw89_btc_fbtc_rpt_ctrl_ver_info finfo;
};
struct rtw89_btc_rpt_fbtc_tdma {
struct rtw89_btc_rpt_cmn_info cinfo; /* common info, by driver */
- union {
- struct rtw89_btc_fbtc_tdma finfo; /* info from fw */
- struct rtw89_btc_fbtc_tdma_v1 finfo_v1; /* info from fw for 52C*/
- };
+ union rtw89_btc_fbtc_tdma_le32 finfo;
};
struct rtw89_btc_rpt_fbtc_slots {
@@ -1934,26 +2102,17 @@ struct rtw89_btc_rpt_fbtc_slots {
struct rtw89_btc_rpt_fbtc_cysta {
struct rtw89_btc_rpt_cmn_info cinfo; /* common info, by driver */
- union {
- struct rtw89_btc_fbtc_cysta finfo; /* info from fw for 52A*/
- struct rtw89_btc_fbtc_cysta_v1 finfo_v1; /* info from fw for 52C*/
- };
+ union rtw89_btc_fbtc_cysta_info finfo;
};
struct rtw89_btc_rpt_fbtc_step {
struct rtw89_btc_rpt_cmn_info cinfo; /* common info, by driver */
- union {
- struct rtw89_btc_fbtc_steps finfo; /* info from fw */
- struct rtw89_btc_fbtc_steps_v1 finfo_v1; /* info from fw */
- };
+ union rtw89_btc_fbtc_steps_info finfo; /* info from fw */
};
struct rtw89_btc_rpt_fbtc_nullsta {
struct rtw89_btc_rpt_cmn_info cinfo; /* common info, by driver */
- union {
- struct rtw89_btc_fbtc_cynullsta finfo; /* info from fw */
- struct rtw89_btc_fbtc_cynullsta_v1 finfo_v1; /* info from fw */
- };
+ union rtw89_btc_fbtc_cynullsta_info finfo;
};
struct rtw89_btc_rpt_fbtc_mreg {
@@ -1978,7 +2137,7 @@ struct rtw89_btc_rpt_fbtc_btscan {
struct rtw89_btc_rpt_fbtc_btafh {
struct rtw89_btc_rpt_cmn_info cinfo; /* common info, by driver */
- struct rtw89_btc_fbtc_btafh finfo; /* info from fw */
+ union rtw89_btc_fbtc_btafh_info finfo;
};
struct rtw89_btc_rpt_fbtc_btdev {
@@ -2018,9 +2177,35 @@ struct rtw89_btc_btf_fwinfo {
struct rtw89_btc_rpt_fbtc_btdev rpt_fbtc_btdev;
};
+struct rtw89_btc_ver {
+ enum rtw89_core_chip_id chip_id;
+ u32 fw_ver_code;
+
+ u8 fcxbtcrpt;
+ u8 fcxtdma;
+ u8 fcxslots;
+ u8 fcxcysta;
+ u8 fcxstep;
+ u8 fcxnullsta;
+ u8 fcxmreg;
+ u8 fcxgpiodbg;
+ u8 fcxbtver;
+ u8 fcxbtscan;
+ u8 fcxbtafh;
+ u8 fcxbtdevinfo;
+ u8 fwlrole;
+ u8 frptmap;
+ u8 fcxctrl;
+
+ u16 info_buf;
+ u8 max_role_num;
+};
+
#define RTW89_BTC_POLICY_MAXLEN 512
struct rtw89_btc {
+ const struct rtw89_btc_ver *ver;
+
struct rtw89_btc_cx cx;
struct rtw89_btc_dm dm;
struct rtw89_btc_ctrl ctrl;
@@ -2194,6 +2379,7 @@ struct rtw89_sec_cam_entry {
struct rtw89_sta {
u8 mac_id;
bool disassoc;
+ bool er_cap;
struct rtw89_dev *rtwdev;
struct rtw89_vif *rtwvif;
struct rtw89_ra_info ra;
@@ -2266,6 +2452,7 @@ struct rtw89_vif {
bool last_a_ctrl;
bool dyn_tb_bedge_en;
u8 def_tri_idx;
+ u32 tdls_peer;
struct work_struct update_beacon_work;
struct rtw89_addr_cam_entry addr_cam;
struct rtw89_bssid_cam_entry bssid_cam;
@@ -2274,6 +2461,7 @@ struct rtw89_vif {
struct rtw89_phy_rate_pattern rate_pattern;
struct cfg80211_scan_request *scan_req;
struct ieee80211_scan_ies *scan_ies;
+ struct list_head general_pkt_list;
};
enum rtw89_lv1_rcvy_step {
@@ -2660,6 +2848,7 @@ struct rtw89_chip_info {
enum rtw89_core_chip_id chip_id;
const struct rtw89_chip_ops *ops;
const char *fw_name;
+ bool try_ce_fw;
u32 fifo_size;
u32 dle_scc_rsvd_size;
u16 max_amsdu_limit;
@@ -2728,20 +2917,6 @@ struct rtw89_chip_info {
u8 btcx_desired;
u8 scbd;
u8 mailbox;
- u16 btc_fwinfo_buf;
-
- u8 fcxbtcrpt_ver;
- u8 fcxtdma_ver;
- u8 fcxslots_ver;
- u8 fcxcysta_ver;
- u8 fcxstep_ver;
- u8 fcxnullsta_ver;
- u8 fcxmreg_ver;
- u8 fcxgpiodbg_ver;
- u8 fcxbtver_ver;
- u8 fcxbtscan_ver;
- u8 fcxbtafh_ver;
- u8 fcxbtdevinfo_ver;
u8 afh_guard_ch;
const u8 *wl_rssi_thres;
@@ -2771,6 +2946,7 @@ struct rtw89_chip_info {
u8 dcfo_comp_sft;
const struct rtw89_imr_info *imr_info;
const struct rtw89_rrsr_cfgs *rrsr_cfgs;
+ u32 bss_clr_map_reg;
u32 dma_ch_mask;
const struct wiphy_wowlan_support *wowlan_stub;
};
@@ -2839,6 +3015,7 @@ static inline void rtw89_init_wait(struct rtw89_wait_info *wait)
enum rtw89_fw_type {
RTW89_FW_NORMAL = 1,
RTW89_FW_WOWLAN = 3,
+ RTW89_FW_NORMAL_CE = 5,
};
enum rtw89_fw_feature {
@@ -2848,6 +3025,7 @@ enum rtw89_fw_feature {
RTW89_FW_FEATURE_CRASH_TRIGGER,
RTW89_FW_FEATURE_PACKET_DROP,
RTW89_FW_FEATURE_NO_DEEP_PS,
+ RTW89_FW_FEATURE_NO_LPS_PG,
};
struct rtw89_fw_suit {
@@ -3550,7 +3728,6 @@ struct rtw89_wow_param {
DECLARE_BITMAP(flags, RTW89_WOW_FLAG_NUM);
struct rtw89_wow_cam_info patterns[RTW89_MAX_PATTERN_NUM];
u8 pattern_cnt;
- struct list_head pkt_list;
};
struct rtw89_mcc_info {
diff --git a/drivers/net/wireless/realtek/rtw89/debug.c b/drivers/net/wireless/realtek/rtw89/debug.c
index 8297e35bfa52..0e0e1483c099 100644
--- a/drivers/net/wireless/realtek/rtw89/debug.c
+++ b/drivers/net/wireless/realtek/rtw89/debug.c
@@ -615,6 +615,7 @@ rtw89_debug_priv_mac_reg_dump_select(struct file *filp,
struct seq_file *m = (struct seq_file *)filp->private_data;
struct rtw89_debugfs_priv *debugfs_priv = m->private;
struct rtw89_dev *rtwdev = debugfs_priv->rtwdev;
+ const struct rtw89_chip_info *chip = rtwdev->chip;
char buf[32];
size_t buf_size;
int sel;
@@ -634,6 +635,12 @@ rtw89_debug_priv_mac_reg_dump_select(struct file *filp,
return -EINVAL;
}
+ if (sel == RTW89_DBG_SEL_MAC_30 && chip->chip_id != RTL8852C) {
+ rtw89_info(rtwdev, "sel %d is an address hole on chip %d\n", sel,
+ chip->chip_id);
+ return -EINVAL;
+ }
+
debugfs_priv->cb_data = sel;
rtw89_info(rtwdev, "select mac page dump %d\n", debugfs_priv->cb_data);
@@ -3347,6 +3354,31 @@ static void rtw89_dump_addr_cam(struct seq_file *m,
}
}
+__printf(3, 4)
+static void rtw89_dump_pkt_offload(struct seq_file *m, struct list_head *pkt_list,
+ const char *fmt, ...)
+{
+ struct rtw89_pktofld_info *info;
+ struct va_format vaf;
+ va_list args;
+
+ if (list_empty(pkt_list))
+ return;
+
+ va_start(args, fmt);
+ vaf.va = &args;
+ vaf.fmt = fmt;
+
+ seq_printf(m, "%pV", &vaf);
+
+ va_end(args);
+
+ list_for_each_entry(info, pkt_list, list)
+ seq_printf(m, "%d ", info->id);
+
+ seq_puts(m, "\n");
+}
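
The helper forwards its variadic arguments to seq_printf() through struct va_format and the kernel-specific "%pV" specifier, so the prefix never needs pre-formatting into a temporary buffer, while the __printf(3, 4) attribute keeps compile-time format checking. A userspace analogue of the same forwarding idea uses vprintf():

	#include <stdarg.h>
	#include <stdio.h>

	/* Forward variadic args instead of formatting into a buffer. */
	static void dump_prefixed(const char *fmt, ...)
	{
		va_list args;

		va_start(args, fmt);
		vprintf(fmt, args);
		va_end(args);

		printf("1 2 3\n"); /* stand-in for walking the pkt_list ids */
	}

	int main(void)
	{
		dump_prefixed("\tpkt_ofld[%s]: ", "GENERAL");
		return 0;
	}
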
+
static
void rtw89_vif_ids_get_iter(void *data, u8 *mac, struct ieee80211_vif *vif)
{
@@ -3357,6 +3389,7 @@ void rtw89_vif_ids_get_iter(void *data, u8 *mac, struct ieee80211_vif *vif)
seq_printf(m, "VIF [%d] %pM\n", rtwvif->mac_id, rtwvif->mac_addr);
seq_printf(m, "\tbssid_cam_idx=%u\n", bssid_cam->bssid_cam_idx);
rtw89_dump_addr_cam(m, &rtwvif->addr_cam);
+ rtw89_dump_pkt_offload(m, &rtwvif->general_pkt_list, "\tpkt_ofld[GENERAL]: ");
}
static void rtw89_dump_ba_cam(struct seq_file *m, struct rtw89_sta *rtwsta)
@@ -3395,6 +3428,7 @@ static int rtw89_debug_priv_stations_get(struct seq_file *m, void *v)
struct rtw89_debugfs_priv *debugfs_priv = m->private;
struct rtw89_dev *rtwdev = debugfs_priv->rtwdev;
struct rtw89_cam_info *cam_info = &rtwdev->cam_info;
+ u8 idx;
mutex_lock(&rtwdev->mutex);
@@ -3409,6 +3443,15 @@ static int rtw89_debug_priv_stations_get(struct seq_file *m, void *v)
cam_info->sec_cam_map);
seq_printf(m, "\tba_cam: %*ph\n", (int)sizeof(cam_info->ba_cam_map),
cam_info->ba_cam_map);
+ seq_printf(m, "\tpkt_ofld: %*ph\n", (int)sizeof(rtwdev->pkt_offload),
+ rtwdev->pkt_offload);
+
+ for (idx = NL80211_BAND_2GHZ; idx < NUM_NL80211_BANDS; idx++) {
+ if (!(rtwdev->chip->support_bands & BIT(idx)))
+ continue;
+ rtw89_dump_pkt_offload(m, &rtwdev->scan_info.pkt_list[idx],
+ "\t\t[SCAN %u]: ", idx);
+ }
ieee80211_iterate_active_interfaces_atomic(rtwdev->hw,
IEEE80211_IFACE_ITER_NORMAL, rtw89_vif_ids_get_iter, m);
diff --git a/drivers/net/wireless/realtek/rtw89/debug.h b/drivers/net/wireless/realtek/rtw89/debug.h
index d1de5e600836..079269bb5251 100644
--- a/drivers/net/wireless/realtek/rtw89/debug.h
+++ b/drivers/net/wireless/realtek/rtw89/debug.h
@@ -28,6 +28,7 @@ enum rtw89_debug_mask {
RTW89_DBG_STATE = BIT(17),
RTW89_DBG_WOW = BIT(18),
RTW89_DBG_UL_TB = BIT(19),
+ RTW89_DBG_CHAN = BIT(20),
RTW89_DBG_UNEXP = BIT(31),
};
diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
index de1f23779fc6..0b73dc2e9ad7 100644
--- a/drivers/net/wireless/realtek/rtw89/fw.c
+++ b/drivers/net/wireless/realtek/rtw89/fw.c
@@ -10,6 +10,7 @@
#include "mac.h"
#include "phy.h"
#include "reg.h"
+#include "util.h"
static void rtw89_fw_c2h_cmd_handle(struct rtw89_dev *rtwdev,
struct sk_buff *skb);
@@ -91,6 +92,7 @@ static int rtw89_fw_hdr_parser(struct rtw89_dev *rtwdev, const u8 *fw, u32 len,
const u8 *fwdynhdr;
const u8 *bin;
u32 base_hdr_len;
+ u32 mssc_len = 0;
u32 i;
if (!info)
@@ -120,6 +122,14 @@ static int rtw89_fw_hdr_parser(struct rtw89_dev *rtwdev, const u8 *fw, u32 len,
fw += RTW89_FW_HDR_SIZE;
section_info = info->section_info;
for (i = 0; i < info->section_num; i++) {
+ section_info->type = GET_FWSECTION_HDR_SECTIONTYPE(fw);
+ if (section_info->type == FWDL_SECURITY_SECTION_TYPE) {
+ section_info->mssc = GET_FWSECTION_HDR_MSSC(fw);
+ mssc_len += section_info->mssc * FWDL_SECURITY_SIGLEN;
+ } else {
+ section_info->mssc = 0;
+ }
+
section_info->len = GET_FWSECTION_HDR_SEC_SIZE(fw);
if (GET_FWSECTION_HDR_CHECKSUM(fw))
section_info->len += FWDL_SECTION_CHKSUM_LEN;
@@ -132,7 +142,7 @@ static int rtw89_fw_hdr_parser(struct rtw89_dev *rtwdev, const u8 *fw, u32 len,
section_info++;
}
- if (fw_end != bin) {
+ if (fw_end != bin + mssc_len) {
rtw89_err(rtwdev, "[ERR]fw bin size\n");
return -EINVAL;
}
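
The adjusted end-of-image check accounts for firmware security sections: a section of FWDL_SECURITY_SECTION_TYPE advertises mssc signature slots of FWDL_SECURITY_SIGLEN (512) bytes each, carried after the section payloads but not counted in the per-section size field. A section with mssc = 2, for example, contributes 2 * 512 = 1024 trailing bytes, so the parser now expects fw_end == bin + mssc_len rather than fw_end == bin.
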
@@ -142,7 +152,7 @@ static int rtw89_fw_hdr_parser(struct rtw89_dev *rtwdev, const u8 *fw, u32 len,
static
int rtw89_mfw_recognize(struct rtw89_dev *rtwdev, enum rtw89_fw_type type,
- struct rtw89_fw_suit *fw_suit)
+ struct rtw89_fw_suit *fw_suit, bool nowarn)
{
struct rtw89_fw_info *fw_info = &rtwdev->fw;
const u8 *mfw = fw_info->firmware->data;
@@ -173,7 +183,8 @@ int rtw89_mfw_recognize(struct rtw89_dev *rtwdev, enum rtw89_fw_type type,
return 0;
}
- rtw89_err(rtwdev, "no suitable firmware found\n");
+ if (!nowarn)
+ rtw89_err(rtwdev, "no suitable firmware found\n");
return -ENOENT;
}
@@ -201,12 +212,13 @@ static void rtw89_fw_update_ver(struct rtw89_dev *rtwdev,
}
static
-int __rtw89_fw_recognize(struct rtw89_dev *rtwdev, enum rtw89_fw_type type)
+int __rtw89_fw_recognize(struct rtw89_dev *rtwdev, enum rtw89_fw_type type,
+ bool nowarn)
{
struct rtw89_fw_suit *fw_suit = rtw89_fw_suit_get(rtwdev, type);
int ret;
- ret = rtw89_mfw_recognize(rtwdev, type, fw_suit);
+ ret = rtw89_mfw_recognize(rtwdev, type, fw_suit, nowarn);
if (ret)
return ret;
@@ -245,6 +257,7 @@ static const struct __fw_feat_cfg fw_feat_tbl[] = {
__CFG_FW_FEAT(RTL8852A, ge, 0, 13, 35, 0, TX_WAKE),
__CFG_FW_FEAT(RTL8852A, ge, 0, 13, 36, 0, CRASH_TRIGGER),
__CFG_FW_FEAT(RTL8852A, ge, 0, 13, 38, 0, PACKET_DROP),
+ __CFG_FW_FEAT(RTL8852B, ge, 0, 29, 26, 0, NO_LPS_PG),
__CFG_FW_FEAT(RTL8852C, ge, 0, 27, 20, 0, PACKET_DROP),
__CFG_FW_FEAT(RTL8852C, le, 0, 27, 33, 0, NO_DEEP_PS),
__CFG_FW_FEAT(RTL8852C, ge, 0, 27, 34, 0, TX_WAKE),
@@ -332,17 +345,27 @@ out:
int rtw89_fw_recognize(struct rtw89_dev *rtwdev)
{
+ const struct rtw89_chip_info *chip = rtwdev->chip;
int ret;
- ret = __rtw89_fw_recognize(rtwdev, RTW89_FW_NORMAL);
+ if (chip->try_ce_fw) {
+ ret = __rtw89_fw_recognize(rtwdev, RTW89_FW_NORMAL_CE, true);
+ if (!ret)
+ goto normal_done;
+ }
+
+ ret = __rtw89_fw_recognize(rtwdev, RTW89_FW_NORMAL, false);
if (ret)
return ret;
+normal_done:
/* It still works if the wowlan firmware doesn't exist. */
- __rtw89_fw_recognize(rtwdev, RTW89_FW_WOWLAN);
+ __rtw89_fw_recognize(rtwdev, RTW89_FW_WOWLAN, false);
rtw89_fw_recognize_features(rtwdev);
+ rtw89_coex_recognize_ver(rtwdev);
+
return 0;
}
@@ -902,13 +925,12 @@ fail:
return ret;
}
-static int rtw89_fw_h2c_add_wow_fw_ofld(struct rtw89_dev *rtwdev,
+static int rtw89_fw_h2c_add_general_pkt(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif,
enum rtw89_fw_pkt_ofld_type type,
u8 *id)
{
struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
- struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
struct rtw89_pktofld_info *info;
struct sk_buff *skb;
int ret;
@@ -937,13 +959,13 @@ static int rtw89_fw_h2c_add_wow_fw_ofld(struct rtw89_dev *rtwdev,
if (!skb)
goto err;
- list_add_tail(&info->list, &rtw_wow->pkt_list);
ret = rtw89_fw_h2c_add_pkt_offload(rtwdev, &info->id, skb);
kfree_skb(skb);
if (ret)
- return ret;
+ goto err;
+ list_add_tail(&info->list, &rtwvif->general_pkt_list);
*id = info->id;
return 0;
@@ -952,13 +974,48 @@ err:
return -ENOMEM;
}
+void rtw89_fw_release_general_pkt_list_vif(struct rtw89_dev *rtwdev,
+ struct rtw89_vif *rtwvif, bool notify_fw)
+{
+ struct list_head *pkt_list = &rtwvif->general_pkt_list;
+ struct rtw89_pktofld_info *info, *tmp;
+
+ list_for_each_entry_safe(info, tmp, pkt_list, list) {
+ if (notify_fw)
+ rtw89_fw_h2c_del_pkt_offload(rtwdev, info->id);
+ rtw89_core_release_bit_map(rtwdev->pkt_offload,
+ info->id);
+ list_del(&info->list);
+ kfree(info);
+ }
+}
+
+void rtw89_fw_release_general_pkt_list(struct rtw89_dev *rtwdev, bool notify_fw)
+{
+ struct rtw89_vif *rtwvif;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+ rtw89_fw_release_general_pkt_list_vif(rtwdev, rtwvif, notify_fw);
+}
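
Teardown relies on list_for_each_entry_safe(), which caches the next node before the loop body runs so that list_del() plus kfree() cannot derail the walk; the notify_fw flag lets callers skip the delete H2C when the firmware is already gone. The same save-next-then-free shape in plain C (error handling elided for brevity):

	#include <stdio.h>
	#include <stdlib.h>

	struct node { int id; struct node *next; };

	static void release_all(struct node **head)
	{
		struct node *n = *head, *next;

		while (n) {
			next = n->next; /* saved before free, like _safe */
			printf("released id %d\n", n->id);
			free(n);
			n = next;
		}
		*head = NULL;
	}

	int main(void)
	{
		struct node *head = NULL;

		for (int id = 3; id > 0; id--) {
			struct node *n = malloc(sizeof(*n));

			n->id = id;
			n->next = head;
			head = n;
		}
		release_all(&head);
		return 0;
	}
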
+
#define H2C_GENERAL_PKT_LEN 6
#define H2C_GENERAL_PKT_ID_UND 0xff
-int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev, u8 macid)
+int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev,
+ struct rtw89_vif *rtwvif, u8 macid)
{
+ u8 pkt_id_ps_poll = H2C_GENERAL_PKT_ID_UND;
+ u8 pkt_id_null = H2C_GENERAL_PKT_ID_UND;
+ u8 pkt_id_qos_null = H2C_GENERAL_PKT_ID_UND;
struct sk_buff *skb;
int ret;
+ rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
+ RTW89_PKT_OFLD_TYPE_PS_POLL, &pkt_id_ps_poll);
+ rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
+ RTW89_PKT_OFLD_TYPE_NULL_DATA, &pkt_id_null);
+ rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
+ RTW89_PKT_OFLD_TYPE_QOS_NULL, &pkt_id_qos_null);
+
skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_GENERAL_PKT_LEN);
if (!skb) {
rtw89_err(rtwdev, "failed to alloc skb for fw dl\n");
@@ -967,9 +1024,9 @@ int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev, u8 macid)
skb_put(skb, H2C_GENERAL_PKT_LEN);
SET_GENERAL_PKT_MACID(skb->data, macid);
SET_GENERAL_PKT_PROBRSP_ID(skb->data, H2C_GENERAL_PKT_ID_UND);
- SET_GENERAL_PKT_PSPOLL_ID(skb->data, H2C_GENERAL_PKT_ID_UND);
- SET_GENERAL_PKT_NULL_ID(skb->data, H2C_GENERAL_PKT_ID_UND);
- SET_GENERAL_PKT_QOS_NULL_ID(skb->data, H2C_GENERAL_PKT_ID_UND);
+ SET_GENERAL_PKT_PSPOLL_ID(skb->data, pkt_id_ps_poll);
+ SET_GENERAL_PKT_NULL_ID(skb->data, pkt_id_null);
+ SET_GENERAL_PKT_QOS_NULL_ID(skb->data, pkt_id_qos_null);
SET_GENERAL_PKT_CTS2SELF_ID(skb->data, H2C_GENERAL_PKT_ID_UND);
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
@@ -1807,33 +1864,36 @@ fail:
#define PORT_DATA_OFFSET 4
#define H2C_LEN_CXDRVINFO_ROLE_DBCC_LEN 12
-#define H2C_LEN_CXDRVINFO_ROLE (4 + 12 * RTW89_PORT_NUM + H2C_LEN_CXDRVHDR)
-#define H2C_LEN_CXDRVINFO_ROLE_V1 (4 + 16 * RTW89_PORT_NUM + \
- H2C_LEN_CXDRVINFO_ROLE_DBCC_LEN + \
- H2C_LEN_CXDRVHDR)
+#define H2C_LEN_CXDRVINFO_ROLE_SIZE(max_role_num) \
+ (4 + 12 * (max_role_num) + H2C_LEN_CXDRVHDR)
+
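Turning the fixed-size H2C length into H2C_LEN_CXDRVINFO_ROLE_SIZE(max_role_num) lets the payload scale with the per-firmware role capacity carried in struct rtw89_btc_ver instead of the compile-time RTW89_PORT_NUM, so each firmware generation receives exactly the buffer it expects: 4 bytes of role header, 12 bytes per role, plus the common H2C header.
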
int rtw89_fw_h2c_cxdrv_role(struct rtw89_dev *rtwdev)
{
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_wl_info *wl = &btc->cx.wl;
struct rtw89_btc_wl_role_info *role_info = &wl->role_info;
struct rtw89_btc_wl_role_info_bpos *bpos = &role_info->role_map.role;
struct rtw89_btc_wl_active_role *active = role_info->active_role;
struct sk_buff *skb;
+ u32 len;
u8 offset = 0;
u8 *cmd;
int ret;
int i;
- skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_LEN_CXDRVINFO_ROLE);
+ len = H2C_LEN_CXDRVINFO_ROLE_SIZE(ver->max_role_num);
+
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len);
if (!skb) {
rtw89_err(rtwdev, "failed to alloc skb for h2c cxdrv_role\n");
return -ENOMEM;
}
- skb_put(skb, H2C_LEN_CXDRVINFO_ROLE);
+ skb_put(skb, len);
cmd = skb->data;
RTW89_SET_FWCMD_CXHDR_TYPE(cmd, CXDRVINFO_ROLE);
- RTW89_SET_FWCMD_CXHDR_LEN(cmd, H2C_LEN_CXDRVINFO_ROLE - H2C_LEN_CXDRVHDR);
+ RTW89_SET_FWCMD_CXHDR_LEN(cmd, len - H2C_LEN_CXDRVHDR);
RTW89_SET_FWCMD_CXROLE_CONNECT_CNT(cmd, role_info->connect_cnt);
RTW89_SET_FWCMD_CXROLE_LINK_MODE(cmd, role_info->link_mode);
@@ -1870,7 +1930,7 @@ int rtw89_fw_h2c_cxdrv_role(struct rtw89_dev *rtwdev)
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
H2C_CAT_OUTSRC, BTFC_SET,
SET_DRV_INFO, 0, 0,
- H2C_LEN_CXDRVINFO_ROLE);
+ len);
ret = rtw89_h2c_tx(rtwdev, skb, false);
if (ret) {
@@ -1885,28 +1945,35 @@ fail:
return ret;
}
+#define H2C_LEN_CXDRVINFO_ROLE_SIZE_V1(max_role_num) \
+ (4 + 16 * (max_role_num) + H2C_LEN_CXDRVINFO_ROLE_DBCC_LEN + H2C_LEN_CXDRVHDR)
+
int rtw89_fw_h2c_cxdrv_role_v1(struct rtw89_dev *rtwdev)
{
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_wl_info *wl = &btc->cx.wl;
struct rtw89_btc_wl_role_info_v1 *role_info = &wl->role_info_v1;
struct rtw89_btc_wl_role_info_bpos *bpos = &role_info->role_map.role;
struct rtw89_btc_wl_active_role_v1 *active = role_info->active_role_v1;
struct sk_buff *skb;
+ u32 len;
u8 *cmd, offset;
int ret;
int i;
- skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_LEN_CXDRVINFO_ROLE_V1);
+ len = H2C_LEN_CXDRVINFO_ROLE_SIZE_V1(ver->max_role_num);
+
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len);
if (!skb) {
rtw89_err(rtwdev, "failed to alloc skb for h2c cxdrv_role\n");
return -ENOMEM;
}
- skb_put(skb, H2C_LEN_CXDRVINFO_ROLE_V1);
+ skb_put(skb, len);
cmd = skb->data;
RTW89_SET_FWCMD_CXHDR_TYPE(cmd, CXDRVINFO_ROLE);
- RTW89_SET_FWCMD_CXHDR_LEN(cmd, H2C_LEN_CXDRVINFO_ROLE_V1 - H2C_LEN_CXDRVHDR);
+ RTW89_SET_FWCMD_CXHDR_LEN(cmd, len - H2C_LEN_CXDRVHDR);
RTW89_SET_FWCMD_CXROLE_CONNECT_CNT(cmd, role_info->connect_cnt);
RTW89_SET_FWCMD_CXROLE_LINK_MODE(cmd, role_info->link_mode);
@@ -1942,7 +2009,7 @@ int rtw89_fw_h2c_cxdrv_role_v1(struct rtw89_dev *rtwdev)
RTW89_SET_FWCMD_CXROLE_ACT_NOA_DUR(cmd, active->noa_duration, i, offset);
}
- offset = H2C_LEN_CXDRVINFO_ROLE_V1 - H2C_LEN_CXDRVINFO_ROLE_DBCC_LEN;
+ offset = len - H2C_LEN_CXDRVINFO_ROLE_DBCC_LEN;
RTW89_SET_FWCMD_CXROLE_MROLE_TYPE(cmd, role_info->mrole_type, offset);
RTW89_SET_FWCMD_CXROLE_MROLE_NOA(cmd, role_info->mrole_noa_duration, offset);
RTW89_SET_FWCMD_CXROLE_DBCC_EN(cmd, role_info->dbcc_en, offset);
@@ -1953,7 +2020,7 @@ int rtw89_fw_h2c_cxdrv_role_v1(struct rtw89_dev *rtwdev)
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
H2C_CAT_OUTSRC, BTFC_SET,
SET_DRV_INFO, 0, 0,
- H2C_LEN_CXDRVINFO_ROLE_V1);
+ len);
ret = rtw89_h2c_tx(rtwdev, skb, false);
if (ret) {
@@ -1971,8 +2038,8 @@ fail:
#define H2C_LEN_CXDRVINFO_CTRL (4 + H2C_LEN_CXDRVHDR)
int rtw89_fw_h2c_cxdrv_ctrl(struct rtw89_dev *rtwdev)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_ctrl *ctrl = &btc->ctrl;
struct sk_buff *skb;
u8 *cmd;
@@ -1992,7 +2059,7 @@ int rtw89_fw_h2c_cxdrv_ctrl(struct rtw89_dev *rtwdev)
RTW89_SET_FWCMD_CXCTRL_MANUAL(cmd, ctrl->manual);
RTW89_SET_FWCMD_CXCTRL_IGNORE_BT(cmd, ctrl->igno_bt);
RTW89_SET_FWCMD_CXCTRL_ALWAYS_FREERUN(cmd, ctrl->always_freerun);
- if (chip->chip_id == RTL8852A)
+ if (ver->fcxctrl == 0)
RTW89_SET_FWCMD_CXCTRL_TRACE_STEP(cmd, ctrl->trace_step);
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
@@ -2112,6 +2179,7 @@ int rtw89_fw_h2c_add_pkt_offload(struct rtw89_dev *rtwdev, u8 *id,
skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_LEN_PKT_OFLD + skb_ofld->len);
if (!skb) {
rtw89_err(rtwdev, "failed to alloc skb for h2c pkt offload\n");
+ rtw89_core_release_bit_map(rtwdev->pkt_offload, alloc_id);
return -ENOMEM;
}
skb_put(skb, H2C_LEN_PKT_OFLD);
@@ -2130,6 +2198,7 @@ int rtw89_fw_h2c_add_pkt_offload(struct rtw89_dev *rtwdev, u8 *id,
ret = rtw89_h2c_tx(rtwdev, skb, false);
if (ret) {
rtw89_err(rtwdev, "failed to send h2c\n");
+ rtw89_core_release_bit_map(rtwdev->pkt_offload, alloc_id);
goto fail;
}
@@ -2663,11 +2732,14 @@ static int rtw89_append_probe_req_ie(struct rtw89_dev *rtwdev,
goto out;
}
- list_add_tail(&info->list, &scan_info->pkt_list[band]);
ret = rtw89_fw_h2c_add_pkt_offload(rtwdev, &info->id, new);
- if (ret)
+ if (ret) {
+ kfree_skb(new);
+ kfree(info);
goto out;
+ }
+ list_add_tail(&info->list, &scan_info->pkt_list[band]);
kfree_skb(new);
}
out:
@@ -2738,7 +2810,7 @@ static void rtw89_hw_scan_add_chan(struct rtw89_dev *rtwdev, int chan_type,
if (ssid_num == 1 && req->ssids[0].ssid_len == 0) {
ch_info->tx_pkt = false;
if (!req->duration_mandatory)
- ch_info->period -= RTW89_DWELL_TIME;
+ ch_info->period -= RTW89_DWELL_TIME_6G;
}
}
@@ -2791,7 +2863,8 @@ static int rtw89_hw_scan_add_chan_list(struct rtw89_dev *rtwdev,
if (req->duration_mandatory)
ch_info->period = req->duration;
else if (channel->band == NL80211_BAND_6GHZ)
- ch_info->period = RTW89_CHANNEL_TIME_6G + RTW89_DWELL_TIME;
+ ch_info->period = RTW89_CHANNEL_TIME_6G +
+ RTW89_DWELL_TIME_6G;
else
ch_info->period = RTW89_CHANNEL_TIME;
@@ -3072,8 +3145,9 @@ int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
int ret;
if (enable) {
- ret = rtw89_fw_h2c_add_wow_fw_ofld(rtwdev, rtwvif,
- RTW89_PKT_OFLD_TYPE_NULL_DATA, &pkt_id);
+ ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
+ RTW89_PKT_OFLD_TYPE_NULL_DATA,
+ &pkt_id);
if (ret)
return -EPERM;
}
diff --git a/drivers/net/wireless/realtek/rtw89/fw.h b/drivers/net/wireless/realtek/rtw89/fw.h
index 4d2f9ea9e002..cae07e325326 100644
--- a/drivers/net/wireless/realtek/rtw89/fw.h
+++ b/drivers/net/wireless/realtek/rtw89/fw.h
@@ -171,6 +171,8 @@ struct rtw89_fw_hdr_section_info {
const u8 *addr;
u32 len;
u32 dladdr;
+ u32 mssc;
+ u8 type;
};
struct rtw89_fw_bin_info {
@@ -203,6 +205,7 @@ struct rtw89_h2creg_sch_tx_en {
#define RTW89_DFS_CHAN_TIME 105
#define RTW89_OFF_CHAN_TIME 100
#define RTW89_DWELL_TIME 20
+#define RTW89_DWELL_TIME_6G 10
#define RTW89_SCAN_WIDTH 0
#define RTW89_SCANOFLD_MAX_SSID 8
#define RTW89_SCANOFLD_MAX_IE_LEN 512
@@ -480,14 +483,21 @@ static inline void RTW89_SET_EDCA_PARAM(void *cmd, u32 val)
#define FW_EDCA_PARAM_CWMIN_MSK GENMASK(11, 8)
#define FW_EDCA_PARAM_AIFS_MSK GENMASK(7, 0)
+#define FWDL_SECURITY_SECTION_TYPE 9
+#define FWDL_SECURITY_SIGLEN 512
+
+#define GET_FWSECTION_HDR_DL_ADDR(fwhdr) \
+ le32_get_bits(*((const __le32 *)(fwhdr)), GENMASK(31, 0))
+#define GET_FWSECTION_HDR_SECTIONTYPE(fwhdr) \
+ le32_get_bits(*((const __le32 *)(fwhdr) + 1), GENMASK(27, 24))
#define GET_FWSECTION_HDR_SEC_SIZE(fwhdr) \
le32_get_bits(*((const __le32 *)(fwhdr) + 1), GENMASK(23, 0))
#define GET_FWSECTION_HDR_CHECKSUM(fwhdr) \
le32_get_bits(*((const __le32 *)(fwhdr) + 1), BIT(28))
#define GET_FWSECTION_HDR_REDL(fwhdr) \
le32_get_bits(*((const __le32 *)(fwhdr) + 1), BIT(29))
-#define GET_FWSECTION_HDR_DL_ADDR(fwhdr) \
- le32_get_bits(*((const __le32 *)(fwhdr)), GENMASK(31, 0))
+#define GET_FWSECTION_HDR_MSSC(fwhdr) \
+ le32_get_bits(*((const __le32 *)(fwhdr) + 2), GENMASK(31, 0))
#define GET_FW_HDR_MAJOR_VERSION(fwhdr) \
le32_get_bits(*((const __le32 *)(fwhdr) + 1), GENMASK(7, 0))
@@ -3209,16 +3219,16 @@ static inline struct rtw89_fw_c2h_attr *RTW89_SKB_C2H_CB(struct sk_buff *skb)
le32_get_bits(*((const __le32 *)(c2h) + 5), GENMASK(25, 24))
#define RTW89_GET_MAC_C2H_MCC_RCV_ACK_GROUP(c2h) \
- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(1, 0))
+ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(1, 0))
#define RTW89_GET_MAC_C2H_MCC_RCV_ACK_H2C_FUNC(c2h) \
- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(15, 8))
+ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(15, 8))
#define RTW89_GET_MAC_C2H_MCC_REQ_ACK_GROUP(c2h) \
- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(1, 0))
+ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(1, 0))
#define RTW89_GET_MAC_C2H_MCC_REQ_ACK_H2C_RETURN(c2h) \
- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(7, 2))
+ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(7, 2))
#define RTW89_GET_MAC_C2H_MCC_REQ_ACK_H2C_FUNC(c2h) \
- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(15, 8))
+ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(15, 8))
struct rtw89_mac_mcc_tsf_rpt {
u32 macid_x;
@@ -3232,30 +3242,30 @@ struct rtw89_mac_mcc_tsf_rpt {
static_assert(sizeof(struct rtw89_mac_mcc_tsf_rpt) <= RTW89_COMPLETION_BUF_SIZE);
#define RTW89_GET_MAC_C2H_MCC_TSF_RPT_MACID_X(c2h) \
- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(7, 0))
+ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(7, 0))
#define RTW89_GET_MAC_C2H_MCC_TSF_RPT_MACID_Y(c2h) \
- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(15, 8))
+ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(15, 8))
#define RTW89_GET_MAC_C2H_MCC_TSF_RPT_GROUP(c2h) \
- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(17, 16))
+ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(17, 16))
#define RTW89_GET_MAC_C2H_MCC_TSF_RPT_TSF_LOW_X(c2h) \
- le32_get_bits(*((const __le32 *)(c2h) + 1), GENMASK(31, 0))
+ le32_get_bits(*((const __le32 *)(c2h) + 3), GENMASK(31, 0))
#define RTW89_GET_MAC_C2H_MCC_TSF_RPT_TSF_HIGH_X(c2h) \
- le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(31, 0))
+ le32_get_bits(*((const __le32 *)(c2h) + 4), GENMASK(31, 0))
#define RTW89_GET_MAC_C2H_MCC_TSF_RPT_TSF_LOW_Y(c2h) \
- le32_get_bits(*((const __le32 *)(c2h) + 3), GENMASK(31, 0))
+ le32_get_bits(*((const __le32 *)(c2h) + 5), GENMASK(31, 0))
#define RTW89_GET_MAC_C2H_MCC_TSF_RPT_TSF_HIGH_Y(c2h) \
- le32_get_bits(*((const __le32 *)(c2h) + 4), GENMASK(31, 0))
+ le32_get_bits(*((const __le32 *)(c2h) + 6), GENMASK(31, 0))
#define RTW89_GET_MAC_C2H_MCC_STATUS_RPT_STATUS(c2h) \
- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(5, 0))
+ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(5, 0))
#define RTW89_GET_MAC_C2H_MCC_STATUS_RPT_GROUP(c2h) \
- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(7, 6))
+ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(7, 6))
#define RTW89_GET_MAC_C2H_MCC_STATUS_RPT_MACID(c2h) \
- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(15, 8))
+ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(15, 8))
#define RTW89_GET_MAC_C2H_MCC_STATUS_RPT_TSF_LOW(c2h) \
- le32_get_bits(*((const __le32 *)(c2h) + 1), GENMASK(31, 0))
+ le32_get_bits(*((const __le32 *)(c2h) + 3), GENMASK(31, 0))
#define RTW89_GET_MAC_C2H_MCC_STATUS_RPT_TSF_HIGH(c2h) \
- le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(31, 0))
+ le32_get_bits(*((const __le32 *)(c2h) + 4), GENMASK(31, 0))
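
All of the MCC C2H accessors shift their dword index up by two: the report fields evidently start after a two-dword (8-byte) C2H header, so the old macros were decoding header bytes as payload. The bit positions within each dword are unchanged; for instance, RCV_ACK's group field keeps GENMASK(1, 0) but moves from dword 0 to dword 2.
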
#define RTW89_FW_HDR_SIZE 32
#define RTW89_FW_SECTION_HDR_SIZE 16
@@ -3508,7 +3518,11 @@ int rtw89_fw_h2c_raw_with_hdr(struct rtw89_dev *rtwdev,
int rtw89_fw_h2c_raw(struct rtw89_dev *rtwdev, const u8 *buf, u16 len);
void rtw89_fw_send_all_early_h2c(struct rtw89_dev *rtwdev);
void rtw89_fw_free_all_early_h2c(struct rtw89_dev *rtwdev);
-int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev, u8 macid);
+int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ u8 macid);
+void rtw89_fw_release_general_pkt_list_vif(struct rtw89_dev *rtwdev,
+ struct rtw89_vif *rtwvif, bool notify_fw);
+void rtw89_fw_release_general_pkt_list(struct rtw89_dev *rtwdev, bool notify_fw);
int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
bool valid, struct ieee80211_ampdu_params *params);
void rtw89_fw_h2c_init_ba_cam_v1(struct rtw89_dev *rtwdev);
diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
index cf9a0a3120a7..2e2a2b6eab09 100644
--- a/drivers/net/wireless/realtek/rtw89/mac.c
+++ b/drivers/net/wireless/realtek/rtw89/mac.c
@@ -623,7 +623,8 @@ static void rtw89_mac_dump_err_status(struct rtw89_dev *rtwdev,
if (err != MAC_AX_ERR_L1_ERR_DMAC &&
err != MAC_AX_ERR_L0_PROMOTE_TO_L1 &&
err != MAC_AX_ERR_L0_ERR_CMAC0 &&
- err != MAC_AX_ERR_L0_ERR_CMAC1)
+ err != MAC_AX_ERR_L0_ERR_CMAC1 &&
+ err != MAC_AX_ERR_RXI300)
return;
rtw89_info(rtwdev, "--->\nerr=0x%x\n", err);
@@ -663,6 +664,8 @@ u32 rtw89_mac_get_err_status(struct rtw89_dev *rtwdev)
err = MAC_AX_ERR_CPU_EXCEPTION;
else if (err_scnr == RTW89_WCPU_ASSERTION)
err = MAC_AX_ERR_ASSERTION;
+ else if (err_scnr == RTW89_RXI300_ERROR)
+ err = MAC_AX_ERR_RXI300;
rtw89_fw_st_dbg_dump(rtwdev);
rtw89_mac_dump_err_status(rtwdev, err);
@@ -3411,6 +3414,11 @@ int rtw89_mac_enable_cpu(struct rtw89_dev *rtwdev, u8 boot_reason, bool dlfw)
val |= B_AX_WCPU_FWDL_EN;
rtw89_write32(rtwdev, R_AX_WCPU_FW_CTRL, val);
+
+ if (rtwdev->chip->chip_id == RTL8852B)
+ rtw89_write32_mask(rtwdev, R_AX_SEC_CTRL,
+ B_AX_SEC_IDMEM_SIZE_CONFIG_MASK, 0x2);
+
rtw89_write16_mask(rtwdev, R_AX_BOOT_REASON, B_AX_BOOT_REASON_MASK,
boot_reason);
rtw89_write32_set(rtwdev, R_AX_PLATFORM_ENABLE, B_AX_WCPU_EN);
@@ -3918,19 +3926,14 @@ static void rtw89_mac_port_cfg_tbtt_shift(struct rtw89_dev *rtwdev,
B_AX_TBTT_SHIFT_OFST_MASK, val);
}
-static void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev,
- struct rtw89_vif *rtwvif,
- struct rtw89_vif *rtwvif_src, u8 offset,
- int *n_offset)
+void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev,
+ struct rtw89_vif *rtwvif,
+ struct rtw89_vif *rtwvif_src,
+ u16 offset_tu)
{
u32 val, reg;
- if (rtwvif->net_type != RTW89_NET_TYPE_AP_MODE || rtwvif == rtwvif_src)
- return;
-
- /* adjust offset randomly to avoid beacon conflict */
- offset = offset - offset / 4 + get_random_u32() % (offset / 2);
- val = RTW89_PORT_OFFSET_MS_TO_32US((*n_offset)++, offset);
+ val = RTW89_PORT_OFFSET_TU_TO_32US(offset_tu);
reg = rtw89_mac_reg_by_idx(R_AX_PORT0_TSF_SYNC + rtwvif->port * 4,
rtwvif->mac_idx);
@@ -3939,6 +3942,22 @@ static void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev,
rtw89_write32_set(rtwdev, reg, B_AX_SYNC_NOW);
}
+static void rtw89_mac_port_tsf_sync_rand(struct rtw89_dev *rtwdev,
+ struct rtw89_vif *rtwvif,
+ struct rtw89_vif *rtwvif_src,
+ u8 offset, int *n_offset)
+{
+ if (rtwvif->net_type != RTW89_NET_TYPE_AP_MODE || rtwvif == rtwvif_src)
+ return;
+
+ /* adjust offset randomly to avoid beacon conflict */
+ offset = offset - offset / 4 + get_random_u32() % (offset / 2);
+ rtw89_mac_port_tsf_sync(rtwdev, rtwvif, rtwvif_src,
+ (*n_offset) * offset);
+
+ (*n_offset)++;
+}
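
The jitter expression offset - offset / 4 + get_random_u32() % (offset / 2) lands each AP port's sync offset in [0.75 * offset, 1.25 * offset), keeping successive ports roughly offset apart without ever syncing to the exact same TBTT (it assumes offset >= 2, or the modulo divisor would be zero). A quick standalone check, with rand() standing in for get_random_u32():

	#include <stdio.h>
	#include <stdlib.h>

	static unsigned int jitter(unsigned int offset)
	{
		return offset - offset / 4 + (unsigned int)rand() % (offset / 2);
	}

	int main(void)
	{
		for (int i = 0; i < 4; i++)
			printf("offset=100 -> %u\n", jitter(100)); /* 75..124 */
		return 0;
	}
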
+
static void rtw89_mac_port_tsf_resync_all(struct rtw89_dev *rtwdev)
{
struct rtw89_vif *src = NULL, *tmp;
@@ -3958,7 +3977,7 @@ static void rtw89_mac_port_tsf_resync_all(struct rtw89_dev *rtwdev)
offset /= (vif_aps + 1);
rtw89_for_each_rtwvif(rtwdev, tmp)
- rtw89_mac_port_tsf_sync(rtwdev, tmp, src, offset, &n_offset);
+ rtw89_mac_port_tsf_sync_rand(rtwdev, tmp, src, offset, &n_offset);
}
int rtw89_mac_vif_init(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
@@ -4050,6 +4069,24 @@ int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
return 0;
}
+int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ u64 *tsf)
+{
+ const struct rtw89_port_reg *p = &rtw_port_base;
+ u32 tsf_low, tsf_high;
+ int ret;
+
+ ret = rtw89_mac_check_mac_en(rtwdev, rtwvif->mac_idx, RTW89_CMAC_SEL);
+ if (ret)
+ return ret;
+
+ tsf_low = rtw89_read32_port(rtwdev, rtwvif, p->tsftr_l);
+ tsf_high = rtw89_read32_port(rtwdev, rtwvif, p->tsftr_h);
+ *tsf = (u64)tsf_high << 32 | tsf_low;
+
+ return 0;
+}
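
rtw89_mac_port_get_tsf() composes the 64-bit TSF from two 32-bit port registers with a single low-then-high read pair; a torn value is possible if the low word wraps between the two accesses. The classic defence, shown below as a self-contained sketch with simulated registers (not the driver's code), re-reads the high word until it is stable:

	#include <stdint.h>
	#include <stdio.h>

	/* Simulated split counter: each low-word read advances time so the
	 * wrap case is actually exercised. */
	static uint64_t hw_tsf = 0xffffffffULL;

	static uint32_t rd_low(void)  { hw_tsf += 2; return (uint32_t)hw_tsf; }
	static uint32_t rd_high(void) { return (uint32_t)(hw_tsf >> 32); }

	static uint64_t read_tsf64(void)
	{
		uint32_t hi, lo;

		do {
			hi = rd_high();
			lo = rd_low();
		} while (hi != rd_high()); /* retry if the high word moved */

		return (uint64_t)hi << 32 | lo;
	}

	int main(void)
	{
		printf("tsf=0x%016llx\n", (unsigned long long)read_tsf64());
		return 0;
	}
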
+
static void rtw89_mac_check_he_obss_narrow_bw_ru_iter(struct wiphy *wiphy,
struct cfg80211_bss *bss,
void *data)
@@ -4263,12 +4300,12 @@ rtw89_mac_c2h_mcc_rcv_ack(struct rtw89_dev *rtwdev, struct sk_buff *c2h, u32 len
case H2C_FUNC_MCC_SET_DURATION:
break;
default:
- rtw89_debug(rtwdev, RTW89_DBG_FW,
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
"invalid MCC C2H RCV ACK: func %d\n", func);
return;
}
- rtw89_debug(rtwdev, RTW89_DBG_FW,
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
"MCC C2H RCV ACK: group %d, func %d\n", group, func);
}
@@ -4296,12 +4333,12 @@ rtw89_mac_c2h_mcc_req_ack(struct rtw89_dev *rtwdev, struct sk_buff *c2h, u32 len
case H2C_FUNC_DEL_MCC_GROUP:
case H2C_FUNC_RESET_MCC_GROUP:
default:
- rtw89_debug(rtwdev, RTW89_DBG_FW,
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
"invalid MCC C2H REQ ACK: func %d\n", func);
return;
}
- rtw89_debug(rtwdev, RTW89_DBG_FW,
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
"MCC C2H REQ ACK: group %d, func %d, return code %d\n",
group, func, retcode);
@@ -4329,6 +4366,11 @@ rtw89_mac_c2h_mcc_tsf_rpt(struct rtw89_dev *rtwdev, struct sk_buff *c2h, u32 len
rpt->tsf_y_low = RTW89_GET_MAC_C2H_MCC_TSF_RPT_TSF_LOW_Y(c2h->data);
rpt->tsf_y_high = RTW89_GET_MAC_C2H_MCC_TSF_RPT_TSF_HIGH_Y(c2h->data);
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC C2H TSF RPT: macid %d> %llu, macid %d> %llu\n",
+ rpt->macid_x, (u64)rpt->tsf_x_high << 32 | rpt->tsf_x_low,
+ rpt->macid_y, (u64)rpt->tsf_y_high << 32 | rpt->tsf_y_low);
+
cond = RTW89_MCC_WAIT_COND(group, H2C_FUNC_MCC_REQ_TSF);
rtw89_complete_cond(&rtwdev->mcc.wait, cond, &data);
}
@@ -4386,14 +4428,14 @@ rtw89_mac_c2h_mcc_status_rpt(struct rtw89_dev *rtwdev, struct sk_buff *c2h, u32
rsp = false;
break;
default:
- rtw89_debug(rtwdev, RTW89_DBG_FW,
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
"invalid MCC C2H STS RPT: status %d\n", status);
return;
}
- rtw89_debug(rtwdev, RTW89_DBG_FW,
- "MCC C2H STS RPT: group %d, macid %d, status %d, tsf {%d, %d}\n",
- group, macid, status, tsf_low, tsf_high);
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC C2H STS RPT: group %d, macid %d, status %d, tsf %llu\n",
+ group, macid, status, (u64)tsf_high << 32 | tsf_low);
if (!rsp)
return;
@@ -4512,7 +4554,7 @@ EXPORT_SYMBOL(rtw89_mac_get_txpwr_cr);
int rtw89_mac_cfg_ppdu_status(struct rtw89_dev *rtwdev, u8 mac_idx, bool enable)
{
u32 reg = rtw89_mac_reg_by_idx(R_AX_PPDU_STAT, mac_idx);
- int ret = 0;
+ int ret;
ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL);
if (ret)
@@ -4520,7 +4562,7 @@ int rtw89_mac_cfg_ppdu_status(struct rtw89_dev *rtwdev, u8 mac_idx, bool enable)
if (!enable) {
rtw89_write32_clr(rtwdev, reg, B_AX_PPDU_STAT_RPT_EN);
- return ret;
+ return 0;
}
rtw89_write32(rtwdev, reg, B_AX_PPDU_STAT_RPT_EN |
@@ -4530,7 +4572,7 @@ int rtw89_mac_cfg_ppdu_status(struct rtw89_dev *rtwdev, u8 mac_idx, bool enable)
rtw89_write32_mask(rtwdev, R_AX_HW_RPT_FWD, B_AX_FWD_PPDU_STAT_MASK,
RTW89_PRPT_DEST_HOST);
- return ret;
+ return 0;
}
EXPORT_SYMBOL(rtw89_mac_cfg_ppdu_status);
@@ -4865,9 +4907,16 @@ EXPORT_SYMBOL(rtw89_mac_cfg_ctrl_path_v1);
bool rtw89_mac_get_ctrl_path(struct rtw89_dev *rtwdev)
{
- u8 val = rtw89_read8(rtwdev, R_AX_SYS_SDIO_CTRL + 3);
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ u8 val = 0;
+
+ if (chip->chip_id == RTL8852C)
+ return false;
+ else if (chip->chip_id == RTL8852A || chip->chip_id == RTL8852B)
+ val = rtw89_read8_mask(rtwdev, R_AX_SYS_SDIO_CTRL + 3,
+ B_AX_LTE_MUX_CTRL_PATH >> 24);
- return FIELD_GET(B_AX_LTE_MUX_CTRL_PATH >> 24, val);
+ return !!val;
}
u16 rtw89_mac_get_plt_cnt(struct rtw89_dev *rtwdev, u8 band)
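The new rtw89_mac_port_tsf_sync_rand() above jitters each AP port's TSF offset into [offset - offset/4, offset + offset/4) so beacons from multiple ports do not land on identical instants. A minimal userspace sketch of that arithmetic, using rand() in place of get_random_u32() (the helper name and values here are illustrative, not part of the driver):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Jitter a nominal offset into [offset - offset/4, offset + offset/4),
 * mirroring: offset - offset / 4 + get_random_u32() % (offset / 2).
 */
static uint32_t jitter_offset(uint32_t offset)
{
	return offset - offset / 4 + (uint32_t)rand() % (offset / 2);
}

int main(void)
{
	uint32_t nominal = 100; /* e.g. one beacon-interval slice, in TU */
	int i;

	srand((unsigned int)time(NULL));
	/* Like the driver, scale the jittered offset by the port index. */
	for (i = 1; i <= 5; i++)
		printf("port %d: offset %u TU\n", i, i * jitter_offset(nominal));
	return 0;
}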
diff --git a/drivers/net/wireless/realtek/rtw89/mac.h b/drivers/net/wireless/realtek/rtw89/mac.h
index f0b684b205f1..8064d3953d7f 100644
--- a/drivers/net/wireless/realtek/rtw89/mac.h
+++ b/drivers/net/wireless/realtek/rtw89/mac.h
@@ -168,7 +168,7 @@ enum rtw89_mac_ax_l0_to_l1_event {
MAC_AX_L0_TO_L1_EVENT_MAX = 15,
};
-#define RTW89_PORT_OFFSET_MS_TO_32US(n, shift_ms) ((n) * (shift_ms) * 1000 / 32)
+#define RTW89_PORT_OFFSET_TU_TO_32US(shift_tu) ((shift_tu) * 1024 / 32)
enum rtw89_mac_dbg_port_sel {
/* CMAC 0 related */
@@ -623,6 +623,7 @@ struct rtw89_mac_dle_dfi_qempty {
};
enum rtw89_mac_error_scenario {
+ RTW89_RXI300_ERROR = 1,
RTW89_WCPU_CPU_EXCEPTION = 2,
RTW89_WCPU_ASSERTION = 3,
};
@@ -769,6 +770,7 @@ enum mac_ax_err_info {
MAC_AX_ERR_L2_ERR_WDT_TIMEOUT_INT = 0x2599,
MAC_AX_ERR_CPU_EXCEPTION = 0x3000,
MAC_AX_ERR_ASSERTION = 0x4000,
+ MAC_AX_ERR_RXI300 = 0x5000,
MAC_AX_GET_ERR_MAX,
MAC_AX_DUMP_SHAREBUFF_INDICATOR = 0x80000000,
@@ -828,6 +830,15 @@ static inline u32 rtw89_mac_reg_by_port(u32 base, u8 port, u8 mac_idx)
}
static inline u32
+rtw89_read32_port(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, u32 base)
+{
+ u32 reg;
+
+ reg = rtw89_mac_reg_by_port(base, rtwvif->port, rtwvif->mac_idx);
+ return rtw89_read32(rtwdev, reg);
+}
+
+static inline u32
rtw89_read32_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
u32 base, u32 mask)
{
@@ -906,6 +917,12 @@ int rtw89_mac_write_lte(struct rtw89_dev *rtwdev, const u32 offset, u32 val);
int rtw89_mac_read_lte(struct rtw89_dev *rtwdev, const u32 offset, u32 *val);
int rtw89_mac_add_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *vif);
int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
+void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev,
+ struct rtw89_vif *rtwvif,
+ struct rtw89_vif *rtwvif_src,
+ u16 offset_tu);
+int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ u64 *tsf);
void rtw89_mac_set_he_obss_narrow_bw_ru(struct rtw89_dev *rtwdev,
struct ieee80211_vif *vif);
void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
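Note the unit change in the renamed macro above: the old helper took milliseconds, the new one takes beacon-interval TUs, and 1 TU is 1024 µs, so a shift of shift_tu TUs is shift_tu * 1024 / 32 ticks of the 32 µs port-offset register. A compile-checked userspace sketch of the conversion (macro copied here for illustration):

#include <assert.h>
#include <stdio.h>

/* 1 TU = 1024 us; the hardware offset register counts 32 us units. */
#define PORT_OFFSET_TU_TO_32US(shift_tu) ((shift_tu) * 1024 / 32)

int main(void)
{
	static_assert(PORT_OFFSET_TU_TO_32US(1) == 32, "1 TU is 32 ticks");
	static_assert(PORT_OFFSET_TU_TO_32US(100) == 3200, "100 TU is 3200 ticks");
	printf("100 TU = %d x 32us\n", PORT_OFFSET_TU_TO_32US(100));
	return 0;
}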
diff --git a/drivers/net/wireless/realtek/rtw89/mac80211.c b/drivers/net/wireless/realtek/rtw89/mac80211.c
index f9b95c52916b..d43281f7335b 100644
--- a/drivers/net/wireless/realtek/rtw89/mac80211.c
+++ b/drivers/net/wireless/realtek/rtw89/mac80211.c
@@ -135,6 +135,7 @@ static int rtw89_ops_add_interface(struct ieee80211_hw *hw,
rtwvif->sub_entity_idx = RTW89_SUB_ENTITY_0;
rtwvif->hit_rule = 0;
ether_addr_copy(rtwvif->mac_addr, vif->addr);
+ INIT_LIST_HEAD(&rtwvif->general_pkt_list);
ret = rtw89_mac_add_vif(rtwdev, rtwvif);
if (ret) {
diff --git a/drivers/net/wireless/realtek/rtw89/pci.c b/drivers/net/wireless/realtek/rtw89/pci.c
index 1c4500ba777c..ec8bb5f10482 100644
--- a/drivers/net/wireless/realtek/rtw89/pci.c
+++ b/drivers/net/wireless/realtek/rtw89/pci.c
@@ -1384,7 +1384,7 @@ static int rtw89_pci_ops_tx_write(struct rtw89_dev *rtwdev, struct rtw89_core_tx
return 0;
}
-static const struct rtw89_pci_bd_ram bd_ram_table[RTW89_TXCH_NUM] = {
+const struct rtw89_pci_bd_ram rtw89_bd_ram_table_dual[RTW89_TXCH_NUM] = {
[RTW89_TXCH_ACH0] = {.start_idx = 0, .max_num = 5, .min_num = 2},
[RTW89_TXCH_ACH1] = {.start_idx = 5, .max_num = 5, .min_num = 2},
[RTW89_TXCH_ACH2] = {.start_idx = 10, .max_num = 5, .min_num = 2},
@@ -1399,11 +1399,24 @@ static const struct rtw89_pci_bd_ram bd_ram_table[RTW89_TXCH_NUM] = {
[RTW89_TXCH_CH11] = {.start_idx = 55, .max_num = 5, .min_num = 1},
[RTW89_TXCH_CH12] = {.start_idx = 60, .max_num = 4, .min_num = 1},
};
+EXPORT_SYMBOL(rtw89_bd_ram_table_dual);
+
+const struct rtw89_pci_bd_ram rtw89_bd_ram_table_single[RTW89_TXCH_NUM] = {
+ [RTW89_TXCH_ACH0] = {.start_idx = 0, .max_num = 5, .min_num = 2},
+ [RTW89_TXCH_ACH1] = {.start_idx = 5, .max_num = 5, .min_num = 2},
+ [RTW89_TXCH_ACH2] = {.start_idx = 10, .max_num = 5, .min_num = 2},
+ [RTW89_TXCH_ACH3] = {.start_idx = 15, .max_num = 5, .min_num = 2},
+ [RTW89_TXCH_CH8] = {.start_idx = 20, .max_num = 4, .min_num = 1},
+ [RTW89_TXCH_CH9] = {.start_idx = 24, .max_num = 4, .min_num = 1},
+ [RTW89_TXCH_CH12] = {.start_idx = 28, .max_num = 4, .min_num = 1},
+};
+EXPORT_SYMBOL(rtw89_bd_ram_table_single);
static void rtw89_pci_reset_trx_rings(struct rtw89_dev *rtwdev)
{
struct rtw89_pci *rtwpci = (struct rtw89_pci *)rtwdev->priv;
const struct rtw89_pci_info *info = rtwdev->pci_info;
+ const struct rtw89_pci_bd_ram *bd_ram_table = *info->bd_ram_table;
struct rtw89_pci_tx_ring *tx_ring;
struct rtw89_pci_rx_ring *rx_ring;
struct rtw89_pci_dma_ring *bd_ring;
@@ -3385,7 +3398,7 @@ static void rtw89_pci_clkreq_set(struct rtw89_dev *rtwdev, bool enable)
if (ret)
rtw89_err(rtwdev, "failed to set CLKREQ Delay\n");
- if (chip_id == RTL8852A) {
+ if (chip_id == RTL8852A || chip_id == RTL8852B) {
if (enable)
ret = rtw89_pci_config_byte_set(rtwdev,
RTW89_PCIE_L1_CTRL,
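The two BD RAM tables exported above are wired up per chip through a pointer-to-array field (const struct rtw89_pci_bd_ram (*bd_ram_table)[RTW89_TXCH_NUM], added to struct rtw89_pci_info in the pci.h hunk below), which keeps the element count in the type rather than passing a bare pointer. A small standalone sketch of the same idiom, with illustrative names and sizes:

#include <stdio.h>

#define NCH 3

struct bd_ram {
	unsigned char start_idx, max_num, min_num;
};

/* Two alternative layouts, as with rtw89_bd_ram_table_dual/_single. */
static const struct bd_ram table_dual[NCH]   = { {0, 5, 2}, {5, 5, 2}, {10, 5, 2} };
static const struct bd_ram table_single[NCH] = { {0, 4, 1}, {4, 4, 1}, {8, 4, 1} };

struct pci_info {
	/* Pointer to an array of exactly NCH entries, not to a lone
	 * element: the array length stays part of the type.
	 */
	const struct bd_ram (*bd_ram_table)[NCH];
};

int main(void)
{
	struct pci_info info = { .bd_ram_table = &table_single };
	const struct bd_ram *t;
	int i;

	t = *info.bd_ram_table;            /* decays to element pointer */
	for (i = 0; i < NCH; i++)
		printf("single ch%d: start %d, max %d\n",
		       i, t[i].start_idx, t[i].max_num);

	info.bd_ram_table = &table_dual;   /* swap layouts per "chip" */
	t = *info.bd_ram_table;
	printf("dual ch0: start %d, max %d\n", t[0].start_idx, t[0].max_num);
	return 0;
}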
diff --git a/drivers/net/wireless/realtek/rtw89/pci.h b/drivers/net/wireless/realtek/rtw89/pci.h
index 7d033501d4d9..1e19740db8c5 100644
--- a/drivers/net/wireless/realtek/rtw89/pci.h
+++ b/drivers/net/wireless/realtek/rtw89/pci.h
@@ -750,6 +750,12 @@ struct rtw89_pci_ch_dma_addr_set {
struct rtw89_pci_ch_dma_addr rx[RTW89_RXCH_NUM];
};
+struct rtw89_pci_bd_ram {
+ u8 start_idx;
+ u8 max_num;
+ u8 min_num;
+};
+
struct rtw89_pci_info {
enum mac_ax_bd_trunc_mode txbd_trunc_mode;
enum mac_ax_bd_trunc_mode rxbd_trunc_mode;
@@ -785,6 +791,7 @@ struct rtw89_pci_info {
u32 tx_dma_ch_mask;
const struct rtw89_pci_bd_idx_addr *bd_idx_addr_low_power;
const struct rtw89_pci_ch_dma_addr_set *dma_addr_set;
+ const struct rtw89_pci_bd_ram (*bd_ram_table)[RTW89_TXCH_NUM];
int (*ltr_set)(struct rtw89_dev *rtwdev, bool en);
u32 (*fill_txaddr_info)(struct rtw89_dev *rtwdev,
@@ -798,12 +805,6 @@ struct rtw89_pci_info {
struct rtw89_pci_isrs *isrs);
};
-struct rtw89_pci_bd_ram {
- u8 start_idx;
- u8 max_num;
- u8 min_num;
-};
-
struct rtw89_pci_tx_data {
dma_addr_t dma;
};
@@ -1057,6 +1058,8 @@ static inline bool rtw89_pci_ltr_is_err_reg_val(u32 val)
extern const struct dev_pm_ops rtw89_pm_ops;
extern const struct rtw89_pci_ch_dma_addr_set rtw89_pci_ch_dma_addr_set;
extern const struct rtw89_pci_ch_dma_addr_set rtw89_pci_ch_dma_addr_set_v1;
+extern const struct rtw89_pci_bd_ram rtw89_bd_ram_table_dual[RTW89_TXCH_NUM];
+extern const struct rtw89_pci_bd_ram rtw89_bd_ram_table_single[RTW89_TXCH_NUM];
struct pci_device_id;
diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
index 017710c580c7..d9f61ba3d176 100644
--- a/drivers/net/wireless/realtek/rtw89/phy.c
+++ b/drivers/net/wireless/realtek/rtw89/phy.c
@@ -367,6 +367,7 @@ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev,
}
ra->bw_cap = bw_mode;
+ ra->er_cap = rtwsta->er_cap;
ra->mode_ctrl = mode;
ra->macid = rtwsta->mac_id;
ra->stbc_cap = stbc_en;
@@ -2041,6 +2042,7 @@ void rtw89_phy_set_txpwr_byrate(struct rtw89_dev *rtwdev,
const struct rtw89_chan *chan,
enum rtw89_phy_idx phy_idx)
{
+ u8 max_nss_num = rtwdev->chip->rf_path_num;
static const u8 rs[] = {
RTW89_RS_CCK,
RTW89_RS_OFDM,
@@ -2063,7 +2065,7 @@ void rtw89_phy_set_txpwr_byrate(struct rtw89_dev *rtwdev,
BUILD_BUG_ON(rtw89_rs_idx_max[RTW89_RS_HEDCM] % 4);
addr = R_AX_PWR_BY_RATE;
- for (cur.nss = 0; cur.nss <= RTW89_NSS_2; cur.nss++) {
+ for (cur.nss = 0; cur.nss < max_nss_num; cur.nss++) {
for (i = 0; i < ARRAY_SIZE(rs); i++) {
if (cur.nss >= rtw89_rs_nss_max[rs[i]])
continue;
@@ -2126,6 +2128,7 @@ void rtw89_phy_set_txpwr_limit(struct rtw89_dev *rtwdev,
const struct rtw89_chan *chan,
enum rtw89_phy_idx phy_idx)
{
+ u8 max_ntx_num = rtwdev->chip->rf_path_num;
struct rtw89_txpwr_limit lmt;
u8 ch = chan->channel;
u8 bw = chan->band_width;
@@ -2140,7 +2143,7 @@ void rtw89_phy_set_txpwr_limit(struct rtw89_dev *rtwdev,
RTW89_TXPWR_LMT_PAGE_SIZE);
addr = R_AX_PWR_LMT;
- for (i = 0; i < RTW89_NTX_NUM; i++) {
+ for (i = 0; i < max_ntx_num; i++) {
rtw89_phy_fill_txpwr_limit(rtwdev, chan, &lmt, i);
ptr = (s8 *)&lmt;
@@ -2161,6 +2164,7 @@ void rtw89_phy_set_txpwr_limit_ru(struct rtw89_dev *rtwdev,
const struct rtw89_chan *chan,
enum rtw89_phy_idx phy_idx)
{
+ u8 max_ntx_num = rtwdev->chip->rf_path_num;
struct rtw89_txpwr_limit_ru lmt_ru;
u8 ch = chan->channel;
u8 bw = chan->band_width;
@@ -2175,7 +2179,7 @@ void rtw89_phy_set_txpwr_limit_ru(struct rtw89_dev *rtwdev,
RTW89_TXPWR_LMT_RU_PAGE_SIZE);
addr = R_AX_PWR_RU_LMT;
- for (i = 0; i < RTW89_NTX_NUM; i++) {
+ for (i = 0; i < max_ntx_num; i++) {
rtw89_phy_fill_txpwr_limit_ru(rtwdev, chan, &lmt_ru, i);
ptr = (s8 *)&lmt_ru;
@@ -4116,6 +4120,7 @@ void rtw89_phy_dm_init(struct rtw89_dev *rtwdev)
void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
{
+ const struct rtw89_chip_info *chip = rtwdev->chip;
enum rtw89_phy_idx phy_idx = RTW89_PHY_0;
u8 bss_color;
@@ -4124,11 +4129,11 @@ void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif
bss_color = vif->bss_conf.he_bss_color.color;
- rtw89_phy_write32_idx(rtwdev, R_BSS_CLR_MAP, B_BSS_CLR_MAP_VLD0, 0x1,
- phy_idx);
- rtw89_phy_write32_idx(rtwdev, R_BSS_CLR_MAP, B_BSS_CLR_MAP_TGT, bss_color,
+ rtw89_phy_write32_idx(rtwdev, chip->bss_clr_map_reg, B_BSS_CLR_MAP_VLD0, 0x1,
phy_idx);
- rtw89_phy_write32_idx(rtwdev, R_BSS_CLR_MAP, B_BSS_CLR_MAP_STAID,
+ rtw89_phy_write32_idx(rtwdev, chip->bss_clr_map_reg, B_BSS_CLR_MAP_TGT,
+ bss_color, phy_idx);
+ rtw89_phy_write32_idx(rtwdev, chip->bss_clr_map_reg, B_BSS_CLR_MAP_STAID,
vif->cfg.aid, phy_idx);
}
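The phy.c hunks above replace a hard-coded register (R_BSS_CLR_MAP) and fixed loop bounds (RTW89_NSS_2, RTW89_NTX_NUM) with values taken from the chip descriptor, so single-path chips such as 8852B stop programming paths they do not have. A userspace sketch of the pattern, with invented register values:

#include <stdio.h>

/* Per-chip parameters folded into one descriptor, in the spirit of
 * chip->bss_clr_map_reg and chip->rf_path_num; values are invented.
 */
struct chip_info {
	const char *name;
	unsigned int bss_clr_map_reg;
	unsigned int rf_path_num;
};

static const struct chip_info chip_2x2 = { "2x2-like", 0x43ac, 2 };
static const struct chip_info chip_1x1 = { "1x1-like", 0x43b0, 1 };

static void set_bss_color(const struct chip_info *chip, unsigned int color)
{
	unsigned int nss;

	printf("%s: write color %u to reg 0x%04x\n",
	       chip->name, color, chip->bss_clr_map_reg);
	/* Loop bound follows the chip's RF path count instead of a
	 * hard-coded maximum.
	 */
	for (nss = 0; nss < chip->rf_path_num; nss++)
		printf("  program nss %u\n", nss);
}

int main(void)
{
	set_bss_color(&chip_2x2, 5);
	set_bss_color(&chip_1x1, 5);
	return 0;
}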
diff --git a/drivers/net/wireless/realtek/rtw89/reg.h b/drivers/net/wireless/realtek/rtw89/reg.h
index 5324e645728b..600257909df2 100644
--- a/drivers/net/wireless/realtek/rtw89/reg.h
+++ b/drivers/net/wireless/realtek/rtw89/reg.h
@@ -275,6 +275,9 @@
#define B_AX_S1_LDO2PWRCUT_F BIT(23)
#define B_AX_S0_LDO_VSEL_F_MASK GENMASK(22, 21)
+#define R_AX_SEC_CTRL 0x0C00
+#define B_AX_SEC_IDMEM_SIZE_CONFIG_MASK GENMASK(17, 16)
+
#define R_AX_FILTER_MODEL_ADDR 0x0C04
#define R_AX_HAXI_INIT_CFG1 0x1000
@@ -3559,6 +3562,7 @@
#define RR_MOD_IQK GENMASK(19, 4)
#define RR_MOD_DPK GENMASK(19, 5)
#define RR_MOD_MASK GENMASK(19, 16)
+#define RR_MOD_DCK GENMASK(14, 10)
#define RR_MOD_RGM GENMASK(13, 4)
#define RR_MOD_V_DOWN 0x0
#define RR_MOD_V_STANDBY 0x1
@@ -3572,6 +3576,7 @@
#define RR_MOD_NBW GENMASK(15, 14)
#define RR_MOD_M_RXG GENMASK(13, 4)
#define RR_MOD_M_RXBB GENMASK(9, 5)
+#define RR_MOD_LO_SEL BIT(1)
#define RR_MODOPT 0x01
#define RR_MODOPT_M_TXPWR GENMASK(5, 0)
#define RR_WLSEL 0x02
@@ -3638,6 +3643,7 @@
#define RR_LUTWA_M2 GENMASK(4, 0)
#define RR_LUTWD1 0x3e
#define RR_LUTWD0 0x3f
+#define RR_LUTWD0_MB GENMASK(11, 6)
#define RR_LUTWD0_LB GENMASK(5, 0)
#define RR_TM 0x42
#define RR_TM_TRI BIT(19)
@@ -3671,6 +3677,8 @@
#define RR_TXRSV_GAPK BIT(19)
#define RR_BIAS 0x5e
#define RR_BIAS_GAPK BIT(19)
+#define RR_TXAC 0x5f
+#define RR_TXAC_IQG GENMASK(3, 0)
#define RR_BIASA 0x60
#define RR_BIASA_TXG GENMASK(15, 12)
#define RR_BIASA_TXA GENMASK(19, 16)
@@ -3729,10 +3737,14 @@
#define RR_XALNA2_SW2 GENMASK(9, 8)
#define RR_XALNA2_SW GENMASK(1, 0)
#define RR_DCK 0x92
+#define RR_DCK_S1 GENMASK(19, 16)
+#define RR_DCK_TIA GENMASK(15, 9)
#define RR_DCK_DONE GENMASK(7, 5)
#define RR_DCK_FINE BIT(1)
#define RR_DCK_LV BIT(0)
#define RR_DCK1 0x93
+#define RR_DCK1_S1 GENMASK(19, 16)
+#define RR_DCK1_TIA GENMASK(15, 9)
#define RR_DCK1_DONE BIT(5)
#define RR_DCK1_CLR GENMASK(3, 0)
#define RR_DCK1_SEL BIT(3)
@@ -3781,11 +3793,14 @@
#define RR_LUTDBG 0xdf
#define RR_LUTDBG_TIA BIT(12)
#define RR_LUTDBG_LOK BIT(2)
+#define RR_LUTPLL 0xec
+#define RR_CAL_RW BIT(19)
#define RR_LUTWE2 0xee
#define RR_LUTWE2_RTXBW BIT(2)
#define RR_LUTWE 0xef
#define RR_LUTWE_LOK BIT(2)
#define RR_RFC 0xf0
+#define RR_WCAL BIT(16)
#define RR_RFC_CKEN BIT(1)
#define R_UPD_P0 0x0000
@@ -4090,12 +4105,13 @@
#define R_MUIC 0x40F8
#define B_MUIC_EN BIT(0)
#define R_DCFO 0x4264
-#define B_DCFO GENMASK(1, 0)
+#define B_DCFO GENMASK(7, 0)
#define R_SEG0CSI 0x42AC
-#define B_SEG0CSI_IDX GENMASK(11, 0)
+#define B_SEG0CSI_IDX GENMASK(10, 0)
#define R_SEG0CSI_EN 0x42C4
#define B_SEG0CSI_EN BIT(23)
#define R_BSS_CLR_MAP 0x43ac
+#define R_BSS_CLR_MAP_V1 0x43B0
#define B_BSS_CLR_MAP_VLD0 BIT(28)
#define B_BSS_CLR_MAP_TGT GENMASK(27, 22)
#define B_BSS_CLR_MAP_STAID GENMASK(21, 11)
@@ -4725,6 +4741,7 @@
#define R_DRCK_FH 0xC094
#define B_DRCK_LAT BIT(9)
#define R_DRCK 0xC0C4
+#define B_DRCK_MUL GENMASK(21, 17)
#define B_DRCK_IDLE BIT(9)
#define B_DRCK_EN BIT(6)
#define B_DRCK_VAL GENMASK(4, 0)
@@ -4742,9 +4759,13 @@
#define B_PATH0_SAMPL_DLY_T_MSK_V1 GENMASK(27, 26)
#define R_P0_CFCH_BW0 0xC0D4
#define B_P0_CFCH_BW0 GENMASK(27, 26)
+#define B_P0_CFCH_EN GENMASK(14, 11)
+#define B_P0_CFCH_CTL GENMASK(10, 7)
#define R_P0_CFCH_BW1 0xC0D8
#define B_P0_CFCH_EX BIT(13)
#define B_P0_CFCH_BW1 GENMASK(8, 5)
+#define R_ADCMOD 0xC0E8
+#define B_ADCMOD_LP GENMASK(31, 16)
#define R_ADDCK0D 0xC0F0
#define B_ADDCK0D_VAL2 GENMASK(31, 26)
#define B_ADDCK0D_VAL GENMASK(25, 16)
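Most masks above are GENMASK(hi, lo) bit ranges; narrowing B_SEG0CSI_IDX from GENMASK(11, 0) to GENMASK(10, 0), for example, drops bit 11 from the field, which is why the rtw8852a.c hunks below switch from the raw 0xfff literal to the named mask. A userspace re-implementation of the two helpers, for illustration only (32-bit variant):

#include <stdio.h>

/* Userspace stand-ins for the kernel's GENMASK()/FIELD_GET(). */
#define GENMASK(h, l)        ((~0u << (l)) & (~0u >> (31 - (h))))
#define FIELD_GET(mask, val) (((val) & (mask)) / ((mask) & -(mask)))

int main(void)
{
	unsigned int reg = 0xabc;               /* pretend readback */
	unsigned int mask_old = GENMASK(11, 0); /* 0x0fff */
	unsigned int mask_new = GENMASK(10, 0); /* 0x07ff: bit 11 excluded */

	printf("old mask 0x%03x -> field 0x%03x\n",
	       mask_old, FIELD_GET(mask_old, reg));
	printf("new mask 0x%03x -> field 0x%03x\n",
	       mask_new, FIELD_GET(mask_new, reg));
	return 0;
}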
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852a.c b/drivers/net/wireless/realtek/rtw89/rtw8852a.c
index eff6519cf019..9c42b6abd223 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852a.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852a.c
@@ -1035,7 +1035,7 @@ static void rtw8852a_spur_elimination(struct rtw89_dev *rtwdev, u8 central_ch)
0x210);
rtw89_phy_write32_mask(rtwdev, R_P1_NBIIDX, B_P1_NBIIDX_VAL,
0x210);
- rtw89_phy_write32_mask(rtwdev, R_SEG0CSI, 0xfff, 0x7c0);
+ rtw89_phy_write32_mask(rtwdev, R_SEG0CSI, B_SEG0CSI_IDX, 0x7c0);
rtw89_phy_write32_mask(rtwdev, R_P0_NBIIDX,
B_P0_NBIIDX_NOTCH_EN, 0x1);
rtw89_phy_write32_mask(rtwdev, R_P1_NBIIDX,
@@ -1047,7 +1047,7 @@ static void rtw8852a_spur_elimination(struct rtw89_dev *rtwdev, u8 central_ch)
0x210);
rtw89_phy_write32_mask(rtwdev, R_P1_NBIIDX, B_P1_NBIIDX_VAL,
0x210);
- rtw89_phy_write32_mask(rtwdev, R_SEG0CSI, 0xfff, 0x40);
+ rtw89_phy_write32_mask(rtwdev, R_SEG0CSI, B_SEG0CSI_IDX, 0x40);
rtw89_phy_write32_mask(rtwdev, R_P0_NBIIDX,
B_P0_NBIIDX_NOTCH_EN, 0x1);
rtw89_phy_write32_mask(rtwdev, R_P1_NBIIDX,
@@ -1059,7 +1059,7 @@ static void rtw8852a_spur_elimination(struct rtw89_dev *rtwdev, u8 central_ch)
0x2d0);
rtw89_phy_write32_mask(rtwdev, R_P1_NBIIDX, B_P1_NBIIDX_VAL,
0x2d0);
- rtw89_phy_write32_mask(rtwdev, R_SEG0CSI, 0xfff, 0x740);
+ rtw89_phy_write32_mask(rtwdev, R_SEG0CSI, B_SEG0CSI_IDX, 0x740);
rtw89_phy_write32_mask(rtwdev, R_P0_NBIIDX,
B_P0_NBIIDX_NOTCH_EN, 0x1);
rtw89_phy_write32_mask(rtwdev, R_P1_NBIIDX,
@@ -1878,9 +1878,13 @@ static
void rtw8852a_btc_update_bt_cnt(struct rtw89_dev *rtwdev)
{
struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_cx *cx = &btc->cx;
u32 val;
+ if (ver->fcxbtcrpt != 1)
+ return;
+
val = rtw89_read32(rtwdev, R_AX_BT_STAST_HIGH);
cx->cnt_bt[BTC_BCNT_HIPRI_TX] = FIELD_GET(B_AX_STATIS_BT_HI_TX_MASK, val);
cx->cnt_bt[BTC_BCNT_HIPRI_RX] = FIELD_GET(B_AX_STATIS_BT_HI_RX_MASK, val);
@@ -2051,6 +2055,7 @@ const struct rtw89_chip_info rtw8852a_chip_info = {
.chip_id = RTL8852A,
.ops = &rtw8852a_chip_ops,
.fw_name = "rtw89/rtw8852a_fw.bin",
+ .try_ce_fw = false,
.fifo_size = 458752,
.dle_scc_rsvd_size = 0,
.max_amsdu_limit = 3500,
@@ -2106,20 +2111,6 @@ const struct rtw89_chip_info rtw8852a_chip_info = {
.btcx_desired = 0x7,
.scbd = 0x1,
.mailbox = 0x1,
- .btc_fwinfo_buf = 1024,
-
- .fcxbtcrpt_ver = 1,
- .fcxtdma_ver = 1,
- .fcxslots_ver = 1,
- .fcxcysta_ver = 2,
- .fcxstep_ver = 2,
- .fcxnullsta_ver = 1,
- .fcxmreg_ver = 1,
- .fcxgpiodbg_ver = 1,
- .fcxbtver_ver = 1,
- .fcxbtscan_ver = 1,
- .fcxbtafh_ver = 1,
- .fcxbtdevinfo_ver = 1,
.afh_guard_ch = 6,
.wl_rssi_thres = rtw89_btc_8852a_wl_rssi_thres,
@@ -2149,6 +2140,7 @@ const struct rtw89_chip_info rtw8852a_chip_info = {
.dcfo_comp_sft = 3,
.imr_info = &rtw8852a_imr_info,
.rrsr_cfgs = &rtw8852a_rrsr_cfgs,
+ .bss_clr_map_reg = R_BSS_CLR_MAP,
.dma_ch_mask = 0,
#ifdef CONFIG_PM
.wowlan_stub = &rtw_wowlan_stub_8852a,
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852a_rfk.c b/drivers/net/wireless/realtek/rtw89/rtw8852a_rfk.c
index 582ff0d3a9ea..cd6c39b7f802 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852a_rfk.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852a_rfk.c
@@ -1643,7 +1643,7 @@ static void _doiqk(struct rtw89_dev *rtwdev, bool force,
rtw89_btc_ntfy_wl_rfk(rtwdev, phy_map, BTC_WRFKT_IQK, BTC_WRFK_ONESHOT_START);
rtw89_debug(rtwdev, RTW89_DBG_RFK,
- "[IQK]==========IQK strat!!!!!==========\n");
+ "[IQK]==========IQK start!!!!!==========\n");
iqk_info->iqk_times++;
iqk_info->kcount = 0;
iqk_info->version = RTW8852A_IQK_VER;
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852ae.c b/drivers/net/wireless/realtek/rtw89/rtw8852ae.c
index 0cd8c0c44d19..d835a44a1d0d 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852ae.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852ae.c
@@ -44,6 +44,7 @@ static const struct rtw89_pci_info rtw8852a_pci_info = {
.tx_dma_ch_mask = 0,
.bd_idx_addr_low_power = NULL,
.dma_addr_set = &rtw89_pci_ch_dma_addr_set,
+ .bd_ram_table = &rtw89_bd_ram_table_dual,
.ltr_set = rtw89_pci_ltr_set,
.fill_txaddr_info = rtw89_pci_fill_txaddr_info,
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852b.c b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
index b635ac1d1ca2..ee8dba7e0074 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852b.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
@@ -1618,6 +1618,7 @@ static void rtw8852b_set_txpwr_ref(struct rtw89_dev *rtwdev,
}
static void rtw8852b_bb_set_tx_shape_dfir(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
u8 tx_shape_idx,
enum rtw89_phy_idx phy_idx)
{
@@ -1637,7 +1638,6 @@ static void rtw8852b_bb_set_tx_shape_dfir(struct rtw89_dev *rtwdev,
__DECL_DFIR_PARAM(sharp_14,
0x023B13FF, 0x001C42DE, 0x00FDB0AD, 0x00F60F6E,
0x00FD8F92, 0x0602D011, 0x0001C02C, 0x00FFF00A);
- const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, RTW89_SUB_ENTITY_0);
u8 ch = chan->channel;
const u32 *param;
u32 addr;
@@ -1678,7 +1678,7 @@ static void rtw8852b_set_tx_shape(struct rtw89_dev *rtwdev,
u8 tx_shape_ofdm = rtw89_8852b_tx_shape[band][RTW89_RS_OFDM][regd];
if (band == RTW89_BAND_2G)
- rtw8852b_bb_set_tx_shape_dfir(rtwdev, tx_shape_cck, phy_idx);
+ rtw8852b_bb_set_tx_shape_dfir(rtwdev, chan, tx_shape_cck, phy_idx);
rtw89_phy_write32_mask(rtwdev, R_DCFO_OPT, B_TXSHAPE_TRIANGULAR_CFG,
tx_shape_ofdm);
@@ -1720,7 +1720,7 @@ void rtw8852b_set_txpwr_ul_tb_offset(struct rtw89_dev *rtwdev,
pw_ofst = max_t(s8, pw_ofst - 3, -16);
reg = rtw89_mac_reg_by_idx(R_AX_PWR_UL_TB_2T, mac_idx);
- rtw89_write32_mask(rtwdev, reg, B_AX_PWR_UL_TB_1T_MASK, pw_ofst);
+ rtw89_write32_mask(rtwdev, reg, B_AX_PWR_UL_TB_2T_MASK, pw_ofst);
}
static int
@@ -2423,13 +2423,14 @@ static const struct rtw89_chip_ops rtw8852b_chip_ops = {
.btc_update_bt_cnt = rtw8852b_btc_update_bt_cnt,
.btc_wl_s1_standby = rtw8852b_btc_wl_s1_standby,
.btc_set_wl_rx_gain = rtw8852b_btc_set_wl_rx_gain,
- .btc_set_policy = rtw89_btc_set_policy,
+ .btc_set_policy = rtw89_btc_set_policy_v1,
};
const struct rtw89_chip_info rtw8852b_chip_info = {
.chip_id = RTL8852B,
.ops = &rtw8852b_chip_ops,
.fw_name = "rtw89/rtw8852b_fw.bin",
+ .try_ce_fw = true,
.fifo_size = 196608,
.dle_scc_rsvd_size = 98304,
.max_amsdu_limit = 3500,
@@ -2437,6 +2438,8 @@ const struct rtw89_chip_info rtw8852b_chip_info = {
.rsvd_ple_ofst = 0x2f800,
.hfc_param_ini = rtw8852b_hfc_param_ini_pcie,
.dle_mem = rtw8852b_dle_mem_pcie,
+ .wde_qempty_acq_num = 4,
+ .wde_qempty_mgq_sel = 4,
.rf_base_addr = {0xe000, 0xf000},
.pwr_on_seq = NULL,
.pwr_off_seq = NULL,
@@ -2483,20 +2486,7 @@ const struct rtw89_chip_info rtw8852b_chip_info = {
.btcx_desired = 0x5,
.scbd = 0x1,
.mailbox = 0x1,
- .btc_fwinfo_buf = 1024,
-
- .fcxbtcrpt_ver = 1,
- .fcxtdma_ver = 1,
- .fcxslots_ver = 1,
- .fcxcysta_ver = 2,
- .fcxstep_ver = 2,
- .fcxnullsta_ver = 1,
- .fcxmreg_ver = 1,
- .fcxgpiodbg_ver = 1,
- .fcxbtver_ver = 1,
- .fcxbtscan_ver = 1,
- .fcxbtafh_ver = 1,
- .fcxbtdevinfo_ver = 1,
+
.afh_guard_ch = 6,
.wl_rssi_thres = rtw89_btc_8852b_wl_rssi_thres,
.bt_rssi_thres = rtw89_btc_8852b_bt_rssi_thres,
@@ -2525,6 +2515,7 @@ const struct rtw89_chip_info rtw8852b_chip_info = {
.dcfo_comp_sft = 3,
.imr_info = &rtw8852b_imr_info,
.rrsr_cfgs = &rtw8852b_rrsr_cfgs,
+ .bss_clr_map_reg = R_BSS_CLR_MAP_V1,
.dma_ch_mask = BIT(RTW89_DMA_ACH4) | BIT(RTW89_DMA_ACH5) |
BIT(RTW89_DMA_ACH6) | BIT(RTW89_DMA_ACH7) |
BIT(RTW89_DMA_B1MG) | BIT(RTW89_DMA_B1HI),
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852be.c b/drivers/net/wireless/realtek/rtw89/rtw8852be.c
index 0ef2ca8efeb0..ecf39d2d9f81 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852be.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852be.c
@@ -46,6 +46,7 @@ static const struct rtw89_pci_info rtw8852b_pci_info = {
BIT(RTW89_TXCH_CH10) | BIT(RTW89_TXCH_CH11),
.bd_idx_addr_low_power = NULL,
.dma_addr_set = &rtw89_pci_ch_dma_addr_set,
+ .bd_ram_table = &rtw89_bd_ram_table_single,
.ltr_set = rtw89_pci_ltr_set,
.fill_txaddr_info = rtw89_pci_fill_txaddr_info,
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852c.c b/drivers/net/wireless/realtek/rtw89/rtw8852c.c
index a87482cc25f5..d2dde21d3daf 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852c.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852c.c
@@ -1968,6 +1968,7 @@ static void rtw8852c_set_txpwr_ref(struct rtw89_dev *rtwdev,
}
static void rtw8852c_bb_set_tx_shape_dfir(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
u8 tx_shape_idx,
enum rtw89_phy_idx phy_idx)
{
@@ -1991,7 +1992,6 @@ static void rtw8852c_bb_set_tx_shape_dfir(struct rtw89_dev *rtwdev,
__DECL_DFIR_ADDR(filter,
0x45BC, 0x45CC, 0x45D0, 0x45D4, 0x45D8, 0x45C0,
0x45C4, 0x45C8);
- const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, RTW89_SUB_ENTITY_0);
u8 ch = chan->channel;
const u32 *param;
int i;
@@ -2032,7 +2032,7 @@ static void rtw8852c_set_tx_shape(struct rtw89_dev *rtwdev,
u8 tx_shape_ofdm = rtw89_8852c_tx_shape[band][RTW89_RS_OFDM][regd];
if (band == RTW89_BAND_2G)
- rtw8852c_bb_set_tx_shape_dfir(rtwdev, tx_shape_cck, phy_idx);
+ rtw8852c_bb_set_tx_shape_dfir(rtwdev, chan, tx_shape_cck, phy_idx);
rtw89_phy_tssi_ctrl_set_bandedge_cfg(rtwdev,
(enum rtw89_mac_idx)phy_idx,
@@ -2857,6 +2857,7 @@ const struct rtw89_chip_info rtw8852c_chip_info = {
.chip_id = RTL8852C,
.ops = &rtw8852c_chip_ops,
.fw_name = "rtw89/rtw8852c_fw.bin",
+ .try_ce_fw = false,
.fifo_size = 458752,
.dle_scc_rsvd_size = 0,
.max_amsdu_limit = 8000,
@@ -2915,20 +2916,6 @@ const struct rtw89_chip_info rtw8852c_chip_info = {
.btcx_desired = 0x7,
.scbd = 0x1,
.mailbox = 0x1,
- .btc_fwinfo_buf = 1280,
-
- .fcxbtcrpt_ver = 4,
- .fcxtdma_ver = 3,
- .fcxslots_ver = 1,
- .fcxcysta_ver = 3,
- .fcxstep_ver = 3,
- .fcxnullsta_ver = 2,
- .fcxmreg_ver = 1,
- .fcxgpiodbg_ver = 1,
- .fcxbtver_ver = 1,
- .fcxbtscan_ver = 1,
- .fcxbtafh_ver = 1,
- .fcxbtdevinfo_ver = 1,
.afh_guard_ch = 6,
.wl_rssi_thres = rtw89_btc_8852c_wl_rssi_thres,
@@ -2959,6 +2946,7 @@ const struct rtw89_chip_info rtw8852c_chip_info = {
.dcfo_comp_sft = 5,
.imr_info = &rtw8852c_imr_info,
.rrsr_cfgs = &rtw8852c_rrsr_cfgs,
+ .bss_clr_map_reg = R_BSS_CLR_MAP,
.dma_ch_mask = 0,
#ifdef CONFIG_PM
.wowlan_stub = &rtw_wowlan_stub_8852c,
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.c b/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.c
index 60cd676fe22c..2c0bc3a4ab3b 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.c
@@ -11,6 +11,15 @@
#include "rtw8852c_rfk_table.h"
#include "rtw8852c_table.h"
+struct rxck_def {
+ u32 ctl;
+ u32 en;
+ u32 bw0;
+ u32 bw1;
+ u32 mul;
+ u32 lp;
+};
+
#define _TSSI_DE_MASK GENMASK(21, 12)
static const u32 _tssi_de_cck_long[RF_PATH_NUM_8852C] = {0x5858, 0x7858};
static const u32 _tssi_de_cck_short[RF_PATH_NUM_8852C] = {0x5860, 0x7860};
@@ -26,7 +35,7 @@ static const u32 rtw8852c_backup_bb_regs[] = {
};
static const u32 rtw8852c_backup_rf_regs[] = {
- 0xdf, 0x8f, 0x97, 0xa3, 0x5, 0x10005
+ 0xdf, 0x5f, 0x8f, 0x97, 0xa3, 0x5, 0x10005
};
#define BACKUP_BB_REGS_NR ARRAY_SIZE(rtw8852c_backup_bb_regs)
@@ -59,6 +68,13 @@ static const u32 dpk_par_regs[RTW89_DPK_RF_PATH][4] = {
{0x81a8, 0x81c4, 0x81c8, 0x81e8},
};
+static const u8 _dck_addr_bs[RF_PATH_NUM_8852C] = {0x0, 0x10};
+static const u8 _dck_addr[RF_PATH_NUM_8852C] = {0xc, 0x1c};
+
+static const struct rxck_def _ck480M = {0x8, 0x2, 0x3, 0xf, 0x0, 0x9};
+static const struct rxck_def _ck960M = {0x8, 0x2, 0x2, 0x8, 0x0, 0x9};
+static const struct rxck_def _ck1920M = {0x8, 0x0, 0x2, 0x4, 0x6, 0x9};
+
static u8 _kpath(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx)
{
rtw89_debug(rtwdev, RTW89_DBG_RFK, "[RFK]dbcc_en: %x, PHY%d\n",
@@ -337,7 +353,7 @@ static void _dack_reload_by_path(struct rtw89_dev *rtwdev,
(dack->dadck_d[path][index] << 14);
addr = 0xc210 + offset;
rtw89_phy_write32(rtwdev, addr, val32);
- rtw89_phy_write32_set(rtwdev, addr, BIT(1));
+ rtw89_phy_write32_set(rtwdev, addr, BIT(0));
}
static void _dack_reload(struct rtw89_dev *rtwdev, enum rtw89_rf_path path)
@@ -437,6 +453,8 @@ static void rtw8852c_txck_force(struct rtw89_dev *rtwdev, u8 path, bool force,
static void rtw8852c_rxck_force(struct rtw89_dev *rtwdev, u8 path, bool force,
enum adc_ck ck)
{
+ const struct rxck_def *def;
+
rtw89_phy_write32_mask(rtwdev, R_P0_RXCK | (path << 13), B_P0_RXCK_ON, 0x0);
if (!force)
@@ -444,6 +462,26 @@ static void rtw8852c_rxck_force(struct rtw89_dev *rtwdev, u8 path, bool force,
rtw89_phy_write32_mask(rtwdev, R_P0_RXCK | (path << 13), B_P0_RXCK_VAL, ck);
rtw89_phy_write32_mask(rtwdev, R_P0_RXCK | (path << 13), B_P0_RXCK_ON, 0x1);
+
+ switch (ck) {
+ case ADC_480M:
+ def = &_ck480M;
+ break;
+ case ADC_960M:
+ def = &_ck960M;
+ break;
+ case ADC_1920M:
+ default:
+ def = &_ck1920M;
+ break;
+ }
+
+ rtw89_phy_write32_mask(rtwdev, R_P0_CFCH_BW0 | (path << 8), B_P0_CFCH_CTL, def->ctl);
+ rtw89_phy_write32_mask(rtwdev, R_P0_CFCH_BW0 | (path << 8), B_P0_CFCH_EN, def->en);
+ rtw89_phy_write32_mask(rtwdev, R_P0_CFCH_BW0 | (path << 8), B_P0_CFCH_BW0, def->bw0);
+ rtw89_phy_write32_mask(rtwdev, R_P0_CFCH_BW1 | (path << 8), B_P0_CFCH_BW1, def->bw1);
+ rtw89_phy_write32_mask(rtwdev, R_DRCK | (path << 8), B_DRCK_MUL, def->mul);
+ rtw89_phy_write32_mask(rtwdev, R_ADCMOD | (path << 8), B_ADCMOD_LP, def->lp);
}
static bool _check_dack_done(struct rtw89_dev *rtwdev, bool s0)
@@ -627,8 +665,6 @@ static void _iqk_rxk_setting(struct rtw89_dev *rtwdev, u8 path)
rtw89_phy_write32_mask(rtwdev, R_UPD_CLK + (path << 13), B_DPD_GDIS, 0x1);
rtw8852c_rxck_force(rtwdev, path, true, ADC_480M);
rtw89_phy_write32_mask(rtwdev, R_UPD_CLK + (path << 13), B_ACK_VAL, 0x0);
- rtw89_phy_write32_mask(rtwdev, R_P0_CFCH_BW0 + (path << 8), B_P0_CFCH_BW0, 0x3);
- rtw89_phy_write32_mask(rtwdev, R_P0_CFCH_BW1 + (path << 8), B_P0_CFCH_BW1, 0xf);
rtw89_write_rf(rtwdev, path, RR_RXBB2, RR_RXBB2_CKT, 0x1);
rtw89_phy_write32_mask(rtwdev, R_P0_NRBW + (path << 13), B_P0_NRBW_DBG, 0x1);
break;
@@ -636,8 +672,6 @@ static void _iqk_rxk_setting(struct rtw89_dev *rtwdev, u8 path)
rtw89_phy_write32_mask(rtwdev, R_UPD_CLK + (path << 13), B_DPD_GDIS, 0x1);
rtw8852c_rxck_force(rtwdev, path, true, ADC_960M);
rtw89_phy_write32_mask(rtwdev, R_UPD_CLK + (path << 13), B_ACK_VAL, 0x1);
- rtw89_phy_write32_mask(rtwdev, R_P0_CFCH_BW0 + (path << 8), B_P0_CFCH_BW0, 0x2);
- rtw89_phy_write32_mask(rtwdev, R_P0_CFCH_BW1 + (path << 8), B_P0_CFCH_BW1, 0xd);
rtw89_write_rf(rtwdev, path, RR_RXBB2, RR_RXBB2_CKT, 0x1);
rtw89_phy_write32_mask(rtwdev, R_P0_NRBW + (path << 13), B_P0_NRBW_DBG, 0x1);
break;
@@ -645,8 +679,6 @@ static void _iqk_rxk_setting(struct rtw89_dev *rtwdev, u8 path)
rtw89_phy_write32_mask(rtwdev, R_UPD_CLK + (path << 13), B_DPD_GDIS, 0x1);
rtw8852c_rxck_force(rtwdev, path, true, ADC_1920M);
rtw89_phy_write32_mask(rtwdev, R_UPD_CLK + (path << 13), B_ACK_VAL, 0x2);
- rtw89_phy_write32_mask(rtwdev, R_P0_CFCH_BW0 + (path << 8), B_P0_CFCH_BW0, 0x1);
- rtw89_phy_write32_mask(rtwdev, R_P0_CFCH_BW1 + (path << 8), B_P0_CFCH_BW1, 0xb);
rtw89_write_rf(rtwdev, path, RR_RXBB2, RR_RXBB2_CKT, 0x1);
rtw89_phy_write32_mask(rtwdev, R_P0_NRBW + (path << 13), B_P0_NRBW_DBG, 0x1);
break;
@@ -1410,8 +1442,6 @@ static void _iqk_macbb_setting(struct rtw89_dev *rtwdev,
rtw8852c_rxck_force(rtwdev, path, true, ADC_1920M);
rtw89_phy_write32_mask(rtwdev, R_UPD_CLK | (path << 13), B_ACK_VAL, 0x2);
- rtw89_phy_write32_mask(rtwdev, R_P0_CFCH_BW0 | (path << 8), B_P0_CFCH_BW0, 0x1);
- rtw89_phy_write32_mask(rtwdev, R_P0_CFCH_BW1 | (path << 8), B_P0_CFCH_BW1, 0xb);
rtw89_phy_write32_mask(rtwdev, R_P0_NRBW | (path << 13), B_P0_NRBW_DBG, 0x1);
rtw89_phy_write32_mask(rtwdev, R_ANAPAR_PW15, B_ANAPAR_PW15, 0x1f);
rtw89_phy_write32_mask(rtwdev, R_ANAPAR_PW15, B_ANAPAR_PW15, 0x13);
@@ -1536,6 +1566,155 @@ static void _iqk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx, bool forc
}
}
+static void _rx_dck_value_rewrite(struct rtw89_dev *rtwdev, u8 path, u8 addr,
+ u8 val_i, u8 val_q)
+{
+ u32 ofst_val;
+
+ rtw89_debug(rtwdev, RTW89_DBG_RFK,
+ "[RX_DCK] rewrite val_i = 0x%x, val_q = 0x%x\n", val_i, val_q);
+
+ /* val_i and val_q are 7 bits, and target is 6 bits. */
+ ofst_val = u32_encode_bits(val_q >> 1, RR_LUTWD0_MB) |
+ u32_encode_bits(val_i >> 1, RR_LUTWD0_LB);
+
+ rtw89_write_rf(rtwdev, path, RR_LUTPLL, RR_CAL_RW, 0x1);
+ rtw89_write_rf(rtwdev, path, RR_RFC, RR_WCAL, 0x1);
+ rtw89_write_rf(rtwdev, path, RR_DCK, RR_DCK_FINE, 0x1);
+ rtw89_write_rf(rtwdev, path, RR_LUTWA, MASKBYTE0, addr);
+ rtw89_write_rf(rtwdev, path, RR_LUTWD0, RFREG_MASK, ofst_val);
+ rtw89_write_rf(rtwdev, path, RR_LUTWD0, RFREG_MASK, ofst_val);
+ rtw89_write_rf(rtwdev, path, RR_DCK, RR_DCK_FINE, 0x0);
+ rtw89_write_rf(rtwdev, path, RR_RFC, RR_WCAL, 0x0);
+ rtw89_write_rf(rtwdev, path, RR_LUTPLL, RR_CAL_RW, 0x0);
+
+ rtw89_debug(rtwdev, RTW89_DBG_RFK, "[RX_DCK] Final val_i = 0x%x, val_q = 0x%x\n",
+ u32_get_bits(ofst_val, RR_LUTWD0_LB) << 1,
+ u32_get_bits(ofst_val, RR_LUTWD0_MB) << 1);
+}
+
+static bool _rx_dck_rek_check(struct rtw89_dev *rtwdev, u8 path)
+{
+ u8 i_even_bs, q_even_bs;
+ u8 i_odd_bs, q_odd_bs;
+ u8 i_even, q_even;
+ u8 i_odd, q_odd;
+ const u8 th = 10;
+ u8 i;
+
+ for (i = 0; i < RF_PATH_NUM_8852C; i++) {
+ rtw89_write_rf(rtwdev, path, RR_MOD, RR_MOD_DCK, _dck_addr_bs[i]);
+ i_even_bs = rtw89_read_rf(rtwdev, path, RR_DCK, RR_DCK_TIA);
+ q_even_bs = rtw89_read_rf(rtwdev, path, RR_DCK1, RR_DCK1_TIA);
+ rtw89_debug(rtwdev, RTW89_DBG_RFK,
+ "[RX_DCK] Gain[0x%x] i_even_bs/ q_even_bs = 0x%x/ 0x%x\n",
+ _dck_addr_bs[i], i_even_bs, q_even_bs);
+
+ rtw89_write_rf(rtwdev, path, RR_MOD, RR_MOD_DCK, _dck_addr[i]);
+ i_even = rtw89_read_rf(rtwdev, path, RR_DCK, RR_DCK_TIA);
+ q_even = rtw89_read_rf(rtwdev, path, RR_DCK1, RR_DCK1_TIA);
+ rtw89_debug(rtwdev, RTW89_DBG_RFK,
+ "[RX_DCK] Gain[0x%x] i_even/ q_even = 0x%x/ 0x%x\n",
+ _dck_addr[i], i_even, q_even);
+
+ if (abs(i_even_bs - i_even) > th || abs(q_even_bs - q_even) > th)
+ return true;
+
+ rtw89_write_rf(rtwdev, path, RR_MOD, RR_MOD_DCK, _dck_addr_bs[i] + 1);
+ i_odd_bs = rtw89_read_rf(rtwdev, path, RR_DCK, RR_DCK_TIA);
+ q_odd_bs = rtw89_read_rf(rtwdev, path, RR_DCK1, RR_DCK1_TIA);
+ rtw89_debug(rtwdev, RTW89_DBG_RFK,
+ "[RX_DCK] Gain[0x%x] i_odd_bs/ q_odd_bs = 0x%x/ 0x%x\n",
+ _dck_addr_bs[i] + 1, i_odd_bs, q_odd_bs);
+
+ rtw89_write_rf(rtwdev, path, RR_MOD, RR_MOD_DCK, _dck_addr[i] + 1);
+ i_odd = rtw89_read_rf(rtwdev, path, RR_DCK, RR_DCK_TIA);
+ q_odd = rtw89_read_rf(rtwdev, path, RR_DCK1, RR_DCK1_TIA);
+ rtw89_debug(rtwdev, RTW89_DBG_RFK,
+ "[RX_DCK] Gain[0x%x] i_odd/ q_odd = 0x%x/ 0x%x\n",
+ _dck_addr[i] + 1, i_odd, q_odd);
+
+ if (abs(i_odd_bs - i_odd) > th || abs(q_odd_bs - q_odd) > th)
+ return true;
+ }
+
+ return false;
+}
+
+static void _rx_dck_fix_if_need(struct rtw89_dev *rtwdev, u8 path, u8 addr,
+ u8 val_i_bs, u8 val_q_bs, u8 val_i, u8 val_q)
+{
+ const u8 th = 10;
+
+ if ((abs(val_i_bs - val_i) < th) && (abs(val_q_bs - val_q) <= th)) {
+ rtw89_debug(rtwdev, RTW89_DBG_RFK, "[RX_DCK] offset check PASS!!\n");
+ return;
+ }
+
+ if (abs(val_i_bs - val_i) > th) {
+ rtw89_debug(rtwdev, RTW89_DBG_RFK,
+ "[RX_DCK] val_i over TH (0x%x / 0x%x)\n", val_i_bs, val_i);
+ val_i = val_i_bs;
+ }
+
+ if (abs(val_q_bs - val_q) > th) {
+ rtw89_debug(rtwdev, RTW89_DBG_RFK,
+ "[RX_DCK] val_q over TH (0x%x / 0x%x)\n", val_q_bs, val_q);
+ val_q = val_q_bs;
+ }
+
+ _rx_dck_value_rewrite(rtwdev, path, addr, val_i, val_q);
+}
+
+static void _rx_dck_recover(struct rtw89_dev *rtwdev, u8 path)
+{
+ u8 i_even_bs, q_even_bs;
+ u8 i_odd_bs, q_odd_bs;
+ u8 i_even, q_even;
+ u8 i_odd, q_odd;
+ u8 i;
+
+ rtw89_debug(rtwdev, RTW89_DBG_RFK, "[RX_DCK] ===> recovery\n");
+
+ for (i = 0; i < RF_PATH_NUM_8852C; i++) {
+ rtw89_write_rf(rtwdev, path, RR_MOD, RR_MOD_DCK, _dck_addr_bs[i]);
+ i_even_bs = rtw89_read_rf(rtwdev, path, RR_DCK, RR_DCK_TIA);
+ q_even_bs = rtw89_read_rf(rtwdev, path, RR_DCK1, RR_DCK1_TIA);
+
+ rtw89_write_rf(rtwdev, path, RR_MOD, RR_MOD_DCK, _dck_addr_bs[i] + 1);
+ i_odd_bs = rtw89_read_rf(rtwdev, path, RR_DCK, RR_DCK_TIA);
+ q_odd_bs = rtw89_read_rf(rtwdev, path, RR_DCK1, RR_DCK1_TIA);
+
+ rtw89_debug(rtwdev, RTW89_DBG_RFK,
+ "[RX_DCK] Gain[0x%x] i_even_bs/ q_even_bs = 0x%x/ 0x%x\n",
+ _dck_addr_bs[i], i_even_bs, q_even_bs);
+
+ rtw89_write_rf(rtwdev, path, RR_MOD, RR_MOD_DCK, _dck_addr[i]);
+ i_even = rtw89_read_rf(rtwdev, path, RR_DCK, RR_DCK_TIA);
+ q_even = rtw89_read_rf(rtwdev, path, RR_DCK1, RR_DCK1_TIA);
+
+ rtw89_debug(rtwdev, RTW89_DBG_RFK,
+ "[RX_DCK] Gain[0x%x] i_even/ q_even = 0x%x/ 0x%x\n",
+ _dck_addr[i], i_even, q_even);
+ _rx_dck_fix_if_need(rtwdev, path, _dck_addr[i],
+ i_even_bs, q_even_bs, i_even, q_even);
+
+ rtw89_debug(rtwdev, RTW89_DBG_RFK,
+ "[RX_DCK] Gain[0x%x] i_odd_bs/ q_odd_bs = 0x%x/ 0x%x\n",
+ _dck_addr_bs[i] + 1, i_odd_bs, q_odd_bs);
+
+ rtw89_write_rf(rtwdev, path, RR_MOD, RR_MOD_DCK, _dck_addr[i] + 1);
+ i_odd = rtw89_read_rf(rtwdev, path, RR_DCK, RR_DCK_TIA);
+ q_odd = rtw89_read_rf(rtwdev, path, RR_DCK1, RR_DCK1_TIA);
+
+ rtw89_debug(rtwdev, RTW89_DBG_RFK,
+ "[RX_DCK] Gain[0x%x] i_odd/ q_odd = 0x%x/ 0x%x\n",
+ _dck_addr[i] + 1, i_odd, q_odd);
+ _rx_dck_fix_if_need(rtwdev, path, _dck_addr[i] + 1,
+ i_odd_bs, q_odd_bs, i_odd, q_odd);
+ }
+}
+
static void _rx_dck_toggle(struct rtw89_dev *rtwdev, u8 path)
{
int ret;
@@ -1573,11 +1752,42 @@ static void _set_rx_dck(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy, u8 pat
}
}
+static
+u8 _rx_dck_channel_calc(struct rtw89_dev *rtwdev, const struct rtw89_chan *chan)
+{
+ u8 target_ch = 0;
+
+ if (chan->band_type == RTW89_BAND_5G) {
+ if (chan->channel >= 36 && chan->channel <= 64) {
+ target_ch = 100;
+ } else if (chan->channel >= 100 && chan->channel <= 144) {
+ target_ch = chan->channel + 32;
+ if (target_ch > 144)
+ target_ch = chan->channel + 33;
+ } else if (chan->channel >= 149 && chan->channel <= 177) {
+ target_ch = chan->channel - 33;
+ }
+ } else if (chan->band_type == RTW89_BAND_6G) {
+ if (chan->channel >= 1 && chan->channel <= 125)
+ target_ch = chan->channel + 32;
+ else
+ target_ch = chan->channel - 32;
+ } else {
+ target_ch = chan->channel;
+ }
+
+ rtw89_debug(rtwdev, RTW89_DBG_RFK,
+ "[RX_DCK] cur_ch / target_ch = %d / %d\n",
+ chan->channel, target_ch);
+
+ return target_ch;
+}
+
#define RTW8852C_RF_REL_VERSION 34
-#define RTW8852C_DPK_VER 0x10
+#define RTW8852C_DPK_VER 0xf
#define RTW8852C_DPK_TH_AVG_NUM 4
#define RTW8852C_DPK_RF_PATH 2
-#define RTW8852C_DPK_KIP_REG_NUM 5
+#define RTW8852C_DPK_KIP_REG_NUM 7
#define RTW8852C_DPK_RXSRAM_DBG 0
enum rtw8852c_dpk_id {
@@ -1614,6 +1824,12 @@ enum dpk_agc_step {
DPK_AGC_STEP_SET_TX_GAIN,
};
+enum dpk_pas_result {
+ DPK_PAS_NOR,
+ DPK_PAS_GT,
+ DPK_PAS_LT,
+};
+
static void _rf_direct_cntrl(struct rtw89_dev *rtwdev,
enum rtw89_rf_path path, bool is_bybb)
{
@@ -1730,8 +1946,6 @@ static void _dpk_bb_afe_setting(struct rtw89_dev *rtwdev,
/*4. Set ADC clk*/
rtw8852c_rxck_force(rtwdev, path, true, ADC_1920M);
- rtw89_phy_write32_mask(rtwdev, R_P0_CFCH_BW0 + (path << 8), B_P0_CFCH_BW0, 0x1);
- rtw89_phy_write32_mask(rtwdev, R_P0_CFCH_BW1 + (path << 8), B_P0_CFCH_BW1, 0xb);
rtw89_phy_write32_mask(rtwdev, R_P0_NRBW + (path << 13),
B_P0_NRBW_DBG, 0x1);
rtw89_phy_write32_mask(rtwdev, R_ANAPAR_PW15, MASKBYTE3, 0x1f);
@@ -1872,12 +2086,11 @@ static void _dpk_rf_setting(struct rtw89_dev *rtwdev, u8 gain,
0x50101 | BIT(rtwdev->dbcc_en));
rtw89_write_rf(rtwdev, path, RR_MOD_V1, RR_MOD_MASK, RF_DPK);
- if (dpk->bp[path][kidx].band == RTW89_BAND_6G && dpk->bp[path][kidx].ch >= 161) {
+ if (dpk->bp[path][kidx].band == RTW89_BAND_6G && dpk->bp[path][kidx].ch >= 161)
rtw89_write_rf(rtwdev, path, RR_IQGEN, RR_IQGEN_BIAS, 0x8);
- rtw89_write_rf(rtwdev, path, RR_LOGEN, RR_LOGEN_RPT, 0xd);
- } else {
- rtw89_write_rf(rtwdev, path, RR_LOGEN, RR_LOGEN_RPT, 0xd);
- }
+
+ rtw89_write_rf(rtwdev, path, RR_LOGEN, RR_LOGEN_RPT, 0xd);
+ rtw89_write_rf(rtwdev, path, RR_TXAC, RR_TXAC_IQG, 0x8);
rtw89_write_rf(rtwdev, path, RR_RXA2, RR_RXA2_ATT, 0x0);
rtw89_write_rf(rtwdev, path, RR_TXIQK, RR_TXIQK_ATT2, 0x3);
@@ -2024,9 +2237,10 @@ static u8 _dpk_gainloss(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy,
return _dpk_gainloss_read(rtwdev);
}
-static bool _dpk_pas_read(struct rtw89_dev *rtwdev, bool is_check)
+static enum dpk_pas_result _dpk_pas_read(struct rtw89_dev *rtwdev, bool is_check)
{
u32 val1_i = 0, val1_q = 0, val2_i = 0, val2_q = 0;
+ u32 val1_sqrt_sum, val2_sqrt_sum;
u8 i;
rtw89_phy_write32_mask(rtwdev, R_KIP_RPT1, MASKBYTE2, 0x06);
@@ -2057,15 +2271,25 @@ static bool _dpk_pas_read(struct rtw89_dev *rtwdev, bool is_check)
}
}
- if (val1_i * val1_i + val1_q * val1_q >= (val2_i * val2_i + val2_q * val2_q) * 8 / 5)
- return true;
+ val1_sqrt_sum = val1_i * val1_i + val1_q * val1_q;
+ val2_sqrt_sum = val2_i * val2_i + val2_q * val2_q;
+
+ if (val1_sqrt_sum < val2_sqrt_sum)
+ return DPK_PAS_LT;
+ else if (val1_sqrt_sum >= val2_sqrt_sum * 8 / 5)
+ return DPK_PAS_GT;
else
- return false;
+ return DPK_PAS_NOR;
}
static bool _dpk_kip_set_rxagc(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy,
enum rtw89_rf_path path, u8 kidx)
{
+ _dpk_kip_control_rfc(rtwdev, path, false);
+ rtw89_phy_write32_mask(rtwdev, R_KIP_MOD, B_KIP_MOD,
+ rtw89_read_rf(rtwdev, path, RR_MOD, RFREG_MASK));
+ _dpk_kip_control_rfc(rtwdev, path, true);
+
_dpk_one_shot(rtwdev, phy, path, D_RXAGC);
return _dpk_sync_check(rtwdev, path, kidx);
@@ -2103,6 +2327,7 @@ static u8 _dpk_agc(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy,
u8 tmp_dbm = init_xdbm, tmp_gl_idx = 0;
u8 tmp_rxbb;
u8 goout = 0, agc_cnt = 0;
+ enum dpk_pas_result pas;
u16 dgain = 0;
bool is_fail = false;
int limit = 200;
@@ -2138,9 +2363,13 @@ static u8 _dpk_agc(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy,
case DPK_AGC_STEP_GAIN_LOSS_IDX:
tmp_gl_idx = _dpk_gainloss(rtwdev, phy, path, kidx);
+ pas = _dpk_pas_read(rtwdev, true);
- if ((tmp_gl_idx == 0 && _dpk_pas_read(rtwdev, true)) ||
- tmp_gl_idx >= 7)
+ if (pas == DPK_PAS_LT && tmp_gl_idx > 0)
+ step = DPK_AGC_STEP_GL_LT_CRITERION;
+ else if (pas == DPK_PAS_GT && tmp_gl_idx == 0)
+ step = DPK_AGC_STEP_GL_GT_CRITERION;
+ else if (tmp_gl_idx >= 7)
step = DPK_AGC_STEP_GL_GT_CRITERION;
else if (tmp_gl_idx == 0)
step = DPK_AGC_STEP_GL_LT_CRITERION;
@@ -2467,12 +2696,14 @@ static void _dpk_cal_select(struct rtw89_dev *rtwdev, bool force,
enum rtw89_phy_idx phy, u8 kpath)
{
struct rtw89_dpk_info *dpk = &rtwdev->dpk;
- static const u32 kip_reg[] = {0x813c, 0x8124, 0x8120, 0xc0d4, 0xc0d8};
+ static const u32 kip_reg[] = {0x813c, 0x8124, 0x8120, 0xc0c4, 0xc0e8, 0xc0d4, 0xc0d8};
u32 backup_rf_val[RTW8852C_DPK_RF_PATH][BACKUP_RF_REGS_NR];
u32 kip_bkup[RTW8852C_DPK_RF_PATH][RTW8852C_DPK_KIP_REG_NUM] = {};
u8 path;
bool is_fail = true, reloaded[RTW8852C_DPK_RF_PATH] = {false};
+ static_assert(ARRAY_SIZE(kip_reg) == RTW8852C_DPK_KIP_REG_NUM);
+
if (dpk->is_dpk_reload_en) {
for (path = 0; path < RTW8852C_DPK_RF_PATH; path++) {
if (!(kpath & BIT(path)))
@@ -3875,11 +4106,14 @@ void rtw8852c_iqk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx)
#define RXDCK_VER_8852C 0xe
-void rtw8852c_rx_dck(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy, bool is_afe)
+static void _rx_dck(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy,
+ bool is_afe, u8 retry_limit)
{
struct rtw89_rx_dck_info *rx_dck = &rtwdev->rx_dck;
u8 path, kpath;
u32 rf_reg5;
+ bool is_fail;
+ u8 rek_cnt;
kpath = _kpath(rtwdev, phy);
rtw89_debug(rtwdev, RTW89_DBG_RFK,
@@ -3896,7 +4130,27 @@ void rtw8852c_rx_dck(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy, bool is_a
B_P0_TSSI_TRK_EN, 0x1);
rtw89_write_rf(rtwdev, path, RR_RSV1, RR_RSV1_RST, 0x0);
rtw89_write_rf(rtwdev, path, RR_MOD, RR_MOD_MASK, RR_MOD_V_RX);
- _set_rx_dck(rtwdev, phy, path, is_afe);
+ rtw89_write_rf(rtwdev, path, RR_MOD, RR_MOD_LO_SEL, rtwdev->dbcc_en);
+
+ for (rek_cnt = 0; rek_cnt < retry_limit; rek_cnt++) {
+ _set_rx_dck(rtwdev, phy, path, is_afe);
+
+ /* To reduce the IO of dck_rek_check(), the last try is always
+ * treated as a failure, after which the recovery procedure runs.
+ */
+ if (rek_cnt == retry_limit - 1) {
+ _rx_dck_recover(rtwdev, path);
+ break;
+ }
+
+ is_fail = _rx_dck_rek_check(rtwdev, path);
+ if (!is_fail)
+ break;
+ }
+
+ rtw89_debug(rtwdev, RTW89_DBG_RFK, "[RX_DCK] rek_cnt[%d]=%d\n",
+ path, rek_cnt);
+
rx_dck->thermal[path] = ewma_thermal_read(&rtwdev->phystat.avg_thermal[path]);
rtw89_write_rf(rtwdev, path, RR_RSV1, RFREG_MASK, rf_reg5);
@@ -3906,15 +4160,31 @@ void rtw8852c_rx_dck(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy, bool is_a
}
}
-#define RTW8852C_RX_DCK_TH 8
+void rtw8852c_rx_dck(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy, bool is_afe)
+{
+ _rx_dck(rtwdev, phy, is_afe, 1);
+}
+
+#define RTW8852C_RX_DCK_TH 12
void rtw8852c_rx_dck_track(struct rtw89_dev *rtwdev)
{
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, RTW89_SUB_ENTITY_0);
struct rtw89_rx_dck_info *rx_dck = &rtwdev->rx_dck;
+ enum rtw89_phy_idx phy_idx = RTW89_PHY_0;
+ u8 phy_map = rtw89_btc_phymap(rtwdev, phy_idx, 0);
+ u8 dck_channel;
u8 cur_thermal;
+ u32 tx_en;
int delta;
int path;
+ if (chan->band_type == RTW89_BAND_2G)
+ return;
+
+ if (rtwdev->scanning)
+ return;
+
for (path = 0; path < RF_PATH_NUM_8852C; path++) {
cur_thermal =
ewma_thermal_read(&rtwdev->phystat.avg_thermal[path]);
@@ -3924,11 +4194,28 @@ void rtw8852c_rx_dck_track(struct rtw89_dev *rtwdev)
"[RX_DCK] path=%d current thermal=0x%x delta=0x%x\n",
path, cur_thermal, delta);
- if (delta >= RTW8852C_RX_DCK_TH) {
- rtw8852c_rx_dck(rtwdev, RTW89_PHY_0, false);
- return;
- }
+ if (delta >= RTW8852C_RX_DCK_TH)
+ goto trigger_rx_dck;
}
+
+ return;
+
+trigger_rx_dck:
+ rtw89_btc_ntfy_wl_rfk(rtwdev, phy_map, BTC_WRFKT_RXDCK, BTC_WRFK_START);
+ rtw89_chip_stop_sch_tx(rtwdev, phy_idx, &tx_en, RTW89_SCH_TX_SEL_ALL);
+
+ for (path = 0; path < RF_PATH_NUM_8852C; path++) {
+ dck_channel = _rx_dck_channel_calc(rtwdev, chan);
+ _ctrl_ch(rtwdev, RTW89_PHY_0, dck_channel, chan->band_type);
+ }
+
+ _rx_dck(rtwdev, RTW89_PHY_0, false, 20);
+
+ for (path = 0; path < RF_PATH_NUM_8852C; path++)
+ _ctrl_ch(rtwdev, RTW89_PHY_0, chan->channel, chan->band_type);
+
+ rtw89_chip_resume_sch_tx(rtwdev, phy_idx, tx_en);
+ rtw89_btc_ntfy_wl_rfk(rtwdev, phy_map, BTC_WRFKT_RXDCK, BTC_WRFK_STOP);
}
void rtw8852c_dpk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx)
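_rx_dck() above wraps the calibration in a bounded retry loop and deliberately treats the final attempt as a failure, skipping the register-read-heavy check and ending in _rx_dck_recover() instead. Reduced to a standalone sketch with stubbed calibrate/check/recover steps (the stub names are ours, not the driver's):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static void calibrate(void)    { puts("calibrate"); }
static bool check_failed(void) { return rand() % 2; } /* stubbed result */
static void recover(void)      { puts("recover"); }

/* Retry calibration; the last allowed attempt skips the expensive
 * check and goes straight to recovery.
 */
static int run_with_recovery(int retry_limit)
{
	int attempt;

	for (attempt = 0; attempt < retry_limit; attempt++) {
		calibrate();

		if (attempt == retry_limit - 1) {
			recover();
			break;
		}
		if (!check_failed())
			break; /* calibration verified, done */
	}
	return attempt;
}

int main(void)
{
	printf("finished after %d tries\n", run_with_recovery(20) + 1);
	return 0;
}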
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852ce.c b/drivers/net/wireless/realtek/rtw89/rtw8852ce.c
index 35901f64d17d..80490a5437df 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852ce.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852ce.c
@@ -53,6 +53,7 @@ static const struct rtw89_pci_info rtw8852c_pci_info = {
.tx_dma_ch_mask = 0,
.bd_idx_addr_low_power = &rtw8852c_bd_idx_addr_low_power,
.dma_addr_set = &rtw89_pci_ch_dma_addr_set_v1,
+ .bd_ram_table = &rtw89_bd_ram_table_dual,
.ltr_set = rtw89_pci_ltr_set_v1,
.fill_txaddr_info = rtw89_pci_fill_txaddr_info_v1,
diff --git a/drivers/net/wireless/realtek/rtw89/ser.c b/drivers/net/wireless/realtek/rtw89/ser.c
index c1a4bc1c64d1..61db7189fdab 100644
--- a/drivers/net/wireless/realtek/rtw89/ser.c
+++ b/drivers/net/wireless/realtek/rtw89/ser.c
@@ -611,6 +611,7 @@ bottom:
ser_reset_mac_binding(rtwdev);
rtw89_core_stop(rtwdev);
rtw89_entity_init(rtwdev);
+ rtw89_fw_release_general_pkt_list(rtwdev, false);
INIT_LIST_HEAD(&rtwdev->rtwvifs_list);
}
diff --git a/drivers/net/wireless/realtek/rtw89/txrx.h b/drivers/net/wireless/realtek/rtw89/txrx.h
index 9d4c6b6fa125..98eb9607cd21 100644
--- a/drivers/net/wireless/realtek/rtw89/txrx.h
+++ b/drivers/net/wireless/realtek/rtw89/txrx.h
@@ -75,7 +75,9 @@
#define RTW89_TXWD_INFO0_DATA_BW GENMASK(29, 28)
#define RTW89_TXWD_INFO0_GI_LTF GENMASK(27, 25)
#define RTW89_TXWD_INFO0_DATA_RATE GENMASK(24, 16)
+#define RTW89_TXWD_INFO0_DATA_ER BIT(15)
#define RTW89_TXWD_INFO0_DISDATAFB BIT(10)
+#define RTW89_TXWD_INFO0_DATA_BW_ER BIT(8)
#define RTW89_TXWD_INFO0_MULTIPORT_ID GENMASK(6, 4)
/* TX WD INFO DWORD 1 */
diff --git a/drivers/net/wireless/realtek/rtw89/wow.c b/drivers/net/wireless/realtek/rtw89/wow.c
index b2b826b2e09a..c78ee2ab732c 100644
--- a/drivers/net/wireless/realtek/rtw89/wow.c
+++ b/drivers/net/wireless/realtek/rtw89/wow.c
@@ -490,21 +490,6 @@ static int rtw89_wow_check_fw_status(struct rtw89_dev *rtwdev, bool wow_enable)
return ret;
}
-static void rtw89_wow_release_pkt_list(struct rtw89_dev *rtwdev)
-{
- struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
- struct list_head *pkt_list = &rtw_wow->pkt_list;
- struct rtw89_pktofld_info *info, *tmp;
-
- list_for_each_entry_safe(info, tmp, pkt_list, list) {
- rtw89_fw_h2c_del_pkt_offload(rtwdev, info->id);
- rtw89_core_release_bit_map(rtwdev->pkt_offload,
- info->id);
- list_del(&info->list);
- kfree(info);
- }
-}
-
static int rtw89_wow_swap_fw(struct rtw89_dev *rtwdev, bool wow)
{
enum rtw89_fw_type fw_type = wow ? RTW89_FW_WOWLAN : RTW89_FW_NORMAL;
@@ -561,6 +546,11 @@ static int rtw89_wow_swap_fw(struct rtw89_dev *rtwdev, bool wow)
}
if (is_conn) {
+ ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwvif, rtwsta->mac_id);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c general packet\n");
+ return ret;
+ }
rtw89_phy_ra_assoc(rtwdev, wow_sta);
rtw89_phy_set_bss_color(rtwdev, wow_vif);
rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, wow_vif);
@@ -708,8 +698,6 @@ static int rtw89_wow_fw_stop(struct rtw89_dev *rtwdev)
goto out;
}
- rtw89_wow_release_pkt_list(rtwdev);
-
ret = rtw89_fw_h2c_disconnect_detect(rtwdev, rtwvif, false);
if (ret) {
rtw89_err(rtwdev, "wow: failed to disable disconnect detect\n");
@@ -722,6 +710,8 @@ static int rtw89_wow_fw_stop(struct rtw89_dev *rtwdev)
goto out;
}
+ rtw89_fw_release_general_pkt_list(rtwdev, true);
+
ret = rtw89_wow_check_fw_status(rtwdev, false);
if (ret) {
rtw89_err(rtwdev, "wow: failed to check disable fw ready\n");
@@ -744,6 +734,8 @@ static int rtw89_wow_enable(struct rtw89_dev *rtwdev)
goto out;
}
+ rtw89_fw_release_general_pkt_list(rtwdev, true);
+
ret = rtw89_wow_swap_fw(rtwdev, true);
if (ret) {
rtw89_err(rtwdev, "wow: failed to swap to wow fw\n");
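The deleted rtw89_wow_release_pkt_list() above (superseded by rtw89_fw_release_general_pkt_list()) shows the standard kernel idiom for destroying a list while walking it: list_for_each_entry_safe() caches the successor so the current node can be unlinked and freed. The same safe-iteration shape on a hand-rolled userspace list:

#include <stdio.h>
#include <stdlib.h>

struct pkt_info {
	int id;
	struct pkt_info *next;
};

/* Cache 'next' before freeing the current node -- the reason
 * list_for_each_entry_safe() carries a second cursor.
 */
static void release_pkt_list(struct pkt_info **head)
{
	struct pkt_info *info = *head, *tmp;

	while (info) {
		tmp = info->next;  /* grab successor first */
		printf("releasing pkt id %d\n", info->id);
		free(info);        /* now safe to free */
		info = tmp;
	}
	*head = NULL;
}

int main(void)
{
	struct pkt_info *head = NULL;
	int i;

	for (i = 0; i < 3; i++) {
		struct pkt_info *n = malloc(sizeof(*n));

		if (!n)
			return 1;
		n->id = i;
		n->next = head;
		head = n;
	}
	release_pkt_list(&head);
	return 0;
}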
diff --git a/drivers/net/wireless/rsi/rsi_91x_coex.c b/drivers/net/wireless/rsi/rsi_91x_coex.c
index 8a3d86897ea8..45ac9371f262 100644
--- a/drivers/net/wireless/rsi/rsi_91x_coex.c
+++ b/drivers/net/wireless/rsi/rsi_91x_coex.c
@@ -160,6 +160,7 @@ int rsi_coex_attach(struct rsi_common *common)
rsi_coex_scheduler_thread,
"Coex-Tx-Thread")) {
rsi_dbg(ERR_ZONE, "%s: Unable to init tx thrd\n", __func__);
+ kfree(coex_cb);
return -EINVAL;
}
return 0;
diff --git a/drivers/net/wireless/rsi/rsi_91x_hal.c b/drivers/net/wireless/rsi/rsi_91x_hal.c
index c7460fbba014..d4489b943873 100644
--- a/drivers/net/wireless/rsi/rsi_91x_hal.c
+++ b/drivers/net/wireless/rsi/rsi_91x_hal.c
@@ -894,7 +894,7 @@ static int rsi_load_9113_firmware(struct rsi_hw *adapter)
struct ta_metadata *metadata_p;
int status;
- status = bl_cmd(adapter, CONFIG_AUTO_READ_MODE, CMD_PASS,
+ status = bl_cmd(adapter, AUTO_READ_MODE, CMD_PASS,
"AUTO_READ_CMD");
if (status < 0)
return status;
@@ -984,7 +984,7 @@ fw_upgrade:
}
rsi_dbg(ERR_ZONE, "Firmware upgrade failed\n");
- status = bl_cmd(adapter, CONFIG_AUTO_READ_MODE, CMD_PASS,
+ status = bl_cmd(adapter, AUTO_READ_MODE, CMD_PASS,
"AUTO_READ_MODE");
if (status)
goto fail;
diff --git a/drivers/net/wireless/rsi/rsi_hal.h b/drivers/net/wireless/rsi/rsi_hal.h
index 5b07262a9740..479b1b0b57a6 100644
--- a/drivers/net/wireless/rsi/rsi_hal.h
+++ b/drivers/net/wireless/rsi/rsi_hal.h
@@ -69,7 +69,7 @@
#define EOF_REACHED 'E'
#define CHECK_CRC 'K'
#define POLLING_MODE 'P'
-#define CONFIG_AUTO_READ_MODE 'R'
+#define AUTO_READ_MODE 'R'
#define JUMP_TO_ZERO_PC 'J'
#define FW_LOADING_SUCCESSFUL 'S'
#define LOADING_INITIATED '1'
diff --git a/drivers/net/wireless/ti/wl1251/init.c b/drivers/net/wireless/ti/wl1251/init.c
index a19cce3a7e6f..5663f197ea69 100644
--- a/drivers/net/wireless/ti/wl1251/init.c
+++ b/drivers/net/wireless/ti/wl1251/init.c
@@ -373,7 +373,7 @@ int wl1251_hw_init(struct wl1251 *wl)
if (ret < 0)
goto out_free_data_path;
- /* Beacons and boradcast settings */
+ /* Beacons and broadcast settings */
ret = wl1251_hw_init_beacon_broadcast(wl);
if (ret < 0)
goto out_free_data_path;
diff --git a/drivers/net/wireless/wl3501_cs.c b/drivers/net/wireless/wl3501_cs.c
index 1b532e00a56f..7fb2f9513476 100644
--- a/drivers/net/wireless/wl3501_cs.c
+++ b/drivers/net/wireless/wl3501_cs.c
@@ -1328,7 +1328,7 @@ static netdev_tx_t wl3501_hard_start_xmit(struct sk_buff *skb,
} else {
++dev->stats.tx_packets;
dev->stats.tx_bytes += skb->len;
- kfree_skb(skb);
+ dev_kfree_skb_irq(skb);
if (this->tx_buffer_cnt < 2)
netif_stop_queue(dev);
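The wl3501 change above matters because the TX completion here runs with interrupts disabled, where a plain kfree_skb() is unsafe; dev_kfree_skb_irq() instead queues the skb on a per-CPU completion list and lets a softirq do the real free later. A toy userspace model of that defer-then-drain shape (purely illustrative; no kernel API involved):

#include <stdio.h>
#include <stdlib.h>

#define QLEN 8

/* "IRQ context" only enqueues; the real free happens later from a
 * safe context, loosely like dev_kfree_skb_irq() plus the softirq.
 */
static void *defer_q[QLEN];
static int defer_n;

static void free_deferred(void *p)   /* callable from the hot path */
{
	if (defer_n < QLEN)
		defer_q[defer_n++] = p;
}

static void drain_deferred(void)     /* run from a safe context */
{
	while (defer_n > 0)
		free(defer_q[--defer_n]);
	puts("deferred buffers released");
}

int main(void)
{
	free_deferred(malloc(64));
	free_deferred(malloc(64));
	drain_deferred();
	return 0;
}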
diff --git a/drivers/net/wireless/zydas/zd1211rw/zd_rf.h b/drivers/net/wireless/zydas/zd1211rw/zd_rf.h
index 8bfec9e75125..9ca69df3d288 100644
--- a/drivers/net/wireless/zydas/zd1211rw/zd_rf.h
+++ b/drivers/net/wireless/zydas/zd1211rw/zd_rf.h
@@ -85,9 +85,6 @@ static inline int zd_rf_should_patch_cck_gain(struct zd_rf *rf)
return rf->patch_cck_gain;
}
-int zd_rf_patch_6m_band_edge(struct zd_rf *rf, u8 channel);
-int zd_rf_generic_patch_6m(struct zd_rf *rf, u8 channel);
-
/* Functions for individual RF chips */
int zd_rf_init_rf2959(struct zd_rf *rf);
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 12b074286df9..47d54d8ea59d 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -1741,6 +1741,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
* negotiate with the backend regarding supported features.
*/
netdev->features |= netdev->hw_features;
+ netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT;
netdev->ethtool_ops = &xennet_ethtool_ops;
netdev->min_mtu = ETH_MIN_MTU;
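The xen-netfront hunk advertises the device's XDP capabilities by OR-ing NETDEV_XDP_ACT_* bits into the new netdev->xdp_features field, letting userspace query which XDP modes a driver supports before attaching a program. Bitmask composition and testing in a userspace sketch (the flag values below are stand-ins):

#include <stdio.h>

/* Illustrative stand-ins for the NETDEV_XDP_ACT_* flag bits. */
#define XDP_ACT_BASIC    (1u << 0)
#define XDP_ACT_REDIRECT (1u << 1)
#define XDP_ACT_NDO_XMIT (1u << 3)

int main(void)
{
	unsigned int xdp_features = 0;

	/* Advertise capabilities by OR-ing flag bits together. */
	xdp_features |= XDP_ACT_BASIC | XDP_ACT_REDIRECT | XDP_ACT_NDO_XMIT;

	printf("basic:    %s\n", xdp_features & XDP_ACT_BASIC ? "yes" : "no");
	printf("redirect: %s\n", xdp_features & XDP_ACT_REDIRECT ? "yes" : "no");
	printf("ndo_xmit: %s\n", xdp_features & XDP_ACT_NDO_XMIT ? "yes" : "no");
	return 0;
}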